\documentclass{ian} \usepackage{dsfont} \usepackage{etoolbox} \usepackage{graphicx} \usepackage{xcolor} \usepackage{largearray} \usepackage{iantheo} \usepackage[reset_at_theo]{point} \usepackage[problem_in_eq]{problemset} \definecolor{Complete Equation}{HTML}{4B0082} \definecolor{Big Equation}{HTML}{DAA520} \definecolor{Medium Equation}{HTML}{32CD32} \definecolor{Simple Equation}{HTML}{4169E1} \definecolor{Quantum Monte Carlo}{HTML}{DC143C} \def\eqformat#1{{\bf\color{#1}#1}} \def\problemformat#1{{\bf Exercise #1}} \begin{document} \pagestyle{empty} \hbox{} \hfil{\bf\LARGE Bose-Einstein condensation\par \vskip10pt \hfil and the Simplified Approach to interacting Bosons } \vskip20pt \hfil{\bf\large Ian Jauslin}\par \hfil{\it Department of Mathematics, Rutgers University}\par \hfil{\tt\color{blue}\href{mailto:ian.jauslin@rutgers.edu}{ian.jauslin@rutgers.edu}}\par \vskip20pt \hfil {\bf Abstract}\par \medskip These are lecture notes for a short course on the Simplified Approach to interacting Bosons. We discuss properties of systems of many interacting Bosons, such as gasses of Helium atoms. We pay special attention to the phenomenon of Bose-Einstein condensation, which is a quantum phase in which a positive fraction of particles are all in the same state. We first give an overview of Bose-Einstein condensation in non-interacting systems. Next, we introduce the Simplified Approach, and prove some of its properties. \vskip20pt \tableofcontents \vskip40pt \setcounter{page}1 \pagestyle{plain} \section{Introduction} \indent This course is about systems of many interacting Bosons. The first question that we will address is: why should we study interacting Bosons in the first place? Our answer is threefold. First, Bosons exist, and if we want to understand the natural world, we are going to have to study them. Second, systems of many interacting Bosons exhibit interesting and non-trivial physical behavior. Third, they are very challenging to study mathematically, which means that we need to develop new mathematical tools. Let us expand on each of these points. \bigskip \point One of the great successes of science is the understanding that matter is made of particles, and the properties of matter can be understood from the microscopic dynamics of the underlying particles. But can we actually do this? The branch of physics that addresses this question is statistical mechanics, which aims to compute the behavior of matter starting from the microscopic laws of motion guiding its constituent particles. These laws of motion are quantum mechanical, and, in quantum mechanics, there are two types of particles: Fermions (such as electrons, protons, neutrinos, etc) and Bosons (such as Helium atoms, photons, Higgs particles, etc). Here, we will focus on Bosons. A good example to keep in mind throughout this course is a gas of Helium atoms in a container. So Bosonic systems exist. \bigskip \point Helium has many interesting properties, but perhaps the most intriguing is that, when the temperature is small enough (below $2\mathrm{K}$, that is, $-271^\circ\mathrm{C}$ or $-456^\circ\mathrm{F}$), it forms a superfluid phase, which looks like a liquid phase, but can flow without any viscosity, which leads to some fairly surprising behaviors. For instance, when a superfluid is put into a container, capillary forces attract it to the edges of the container, but, since there is no viscosity to compensate for them, these forces pull the superfluid all the way up the wall, and it trickles down the outside of the container.
Another interesting aspect of superfluidity is that classical models do not exhibit superfluidity, so this phase is intrinsically quantum. \bigskip \indent Superfluids are very interesting, but not so simple to study. Instead, we will simplify things and focus on a different, but also inherently quantum phase: the Bose-Einstein condensate. This phase was first predicted by Bose\-~\cite{Bo24} and Einstein\-~\cite{Ei24} nearly a century ago, and occurs when most of the particles are all in the same quantum state. In other words, most of the particles are doing the same thing at the same time. As with the superfluid phase, this phenomenon does not occur for classical particles, so this is an inherently quantum phase. In fact, it seems somewhat related to the superfluid: if the particles are doing the same thing, why would there be viscosity? (The actual picture is much more complicated than that, however.) \bigskip \indent Thus, systems of many Bosons exhibit non-trivial physical behavior. \bigskip \point Bose-Einstein condensation has had a resurgence in popularity since 1995, when it was first observed experimentally\-~\cite{AEe95,DMe95}. In particular, the mathematical physics community set off to solve an important problem: the works of Bose and Einstein were carried out in a much simplified model, in which particles are assumed not to interact with each other. This makes things much simpler: if the particles are independent, it suffices to understand how a single particle behaves, since the situation is identical for all the others. However, in the presence of interactions, no one has yet been able to prove that Bose-Einstein condensation even occurs (and real Bosonic systems have interactions, except for photons, but for those, we don't even know how to write down an appropriate wave equation\-~\cite{Op31}). (Here, the word ``prove'' is key: most everyone accepts that Bose-Einstein condensation occurs in interacting systems, but we don't have a mathematical proof yet.) In fact, before 1998, there was no complete mathematical handle on pretty much any of the properties of interacting Bosonic gasses. In 1998, Lieb and Yngvason\-~\cite{LY98} computed the asymptotic behavior at low density of the ground-state energy (see\-~(\ref{lhy_e})) and this started a flurry of activity. Back in 1957, Lee, Huang and Yang\-~\cite{LHY57} had made a series of predictions, one of which was a low-density expansion for the ground-state energy that went one step further than what Lieb and Yngvason proved in 1998. The mathematical physics community rose to the challenge, and embarked upon a twenty-year effort to prove the Lee-Huang-Yang formula. This was finally accomplished in 2020, after a long series of works culminating in\-~\cite{YY09,FS20,BCS21,FS23}. \bigskip \indent However, these successes only concern the ground-state energy. In particular, there still is no proof of Bose-Einstein condensation. The results on the ground-state energy were obtained by finding a robust mathematical basis for {\it Bogolyubov theory}. Bogolyubov theory\-~\cite{Bo47} is an approach that was introduced by Bogolyubov in the 1940's to understand the low-density properties of Bose gasses. One can do a lot with Bogolyubov theory, and many predictions can be made from it\-~\cite{LHY57}. However, to this day, only its approach to the ground-state energy has been made rigorous. \bigskip \indent In this course, we will present a different approach, called the {\it Simplified Approach}.
The Simplified Approach was first introduced by Lieb in 1963\-~\cite{Li63}, but after a short series of papers by Lieb and coauthors\-~\cite{LS64,LL64}, it was largely forgotten. We have recently had a fresh look at this approach, in a collaboration with Carlen and Lieb\-~\cite{CJL20,CJL21,CHe21,Ja22,Ja23,Ja23b} and found that the predictions of the Simplified Approach far exceeded our expectations. Indeed, we found that it is as good as Bogolyubov theory at low densities\-~\cite{CJL20,CJL21} (at least as far as the ground state is concerned), and that it also reproduces the behavior of the Bose gas at high densities\-~\cite{CJL20}. We have even found very good agreement for some systems {\it over the entire range of densities}\-~\cite{CHe21,Ja23b}. \bigskip \indent The Simplified Approach consists in replacing the original problem that involves an infinite number of particles with a non-linear, non-local equation for a single particle. This is a familiar approach in physics, where many-body problems are replaced with non-linear effective equations. The Simplified Approach actually gives us several levels of approximation, and defines a family of equations that are more or less accurate, and correspondingly, harder or easier to solve. Some can be studied analytically, whereas others are only really amenable to a numerical analysis. \bigskip \indent In conclusion, systems of many interacting Bosons are interesting: they exist, are physically non-trivial, and mathematically challenging. \bigskip {\bf Outline}:\par \indent We will start out this course with a discussion of Bose-Einstein condensation in non-interacting systems, see Section\-~\ref{sec:statmech}. The arguments presented there follow the original derivation by Bose and Einstein\-~\cite{Bo24,Ei24}. The discussion will begin with a much abbreviated introduction to quantum statistical mechanics. Next, we will present the Simplified Approach, see Section\-~\ref{sec:simplified}. In that section, we will also present the conjectures on the Bose gas that the Simplified Approach was developed to address. In doing so, we will give an overview of scattering theory, which underlies the intuition in the low-density regime. Some of the more abstract mathematical results needed for this section have been deferred to Appendices\-~\ref{app:functional_analysis} and\-~\ref{app:harmonic}. Finally, in Section\-~\ref{sec:open}, we discuss some open problems. There are also some exercises at the very end of this document. \section{Bose-Einstein condensation}\label{sec:statmech} \indent Bose-Einstein condensation serves as the main motivation for the discussion in this course. Proving the existence of Bose-Einstein condensation in interacting systems is the main open problem in the field, and the principal reason for the entire discussion that follows this section is to try new ways of approaching it. In this section, we will discuss what Bose-Einstein condensation is in more detail, and prove that it occurs in a much simplified model: non-interacting Bosons. But first, we must define what a ``Boson'' is, as well as the model we will be using. \bigskip \subsection{Elements of quantum statistical mechanics} \indent One of the great achievements of modern science is the understanding that macroscopic matter is made of microscopic elementary particles, and that the properties of the whole stem from the interaction between these elementary particles.
The goal of statistical mechanics is to understand how this works, that is, the relation between the microscopic and the macroscopic worlds. \bigskip \subsubsection{Classical statistical mechanics} \indent Let us think classically for the moment. Classical particles are governed by Newton's second law of motion: $F=ma$. In the case of conservative systems (at their deepest level, all systems are actually conservative: friction is an effective description of a complicated, conservative behavior), this can be restated in the Hamiltonian formalism. If I have $N$ particles, the Hamiltonian is a function on a $6N$-dimensional manifold (think $\mathbb R^{6N}$ for simplicity) that may look something like this: \begin{equation} H_N(x_1,\cdots,x_N;p_1,\cdots,p_N)=\sum_{i=1}^N\frac1{2m}p_i^2+\sum_{1\leqslant i<j\leqslant N}v(x_i-x_j) \end{equation} in which $m$ is the mass of the particles and $v$ is a pair interaction potential. In statistical mechanics, instead of following the trajectory of every single particle, we describe the system through a probability distribution on the set of configurations. In the {\it grand-canonical ensemble}, this distribution is parametrized by two quantities: the {\it temperature} $T>0$ (yes, in this formalism, temperature is a parameter) and the {\it chemical potential} $\mu\in\mathbb R$. Given these two, the probability of a given configuration $(x_1,\cdots,x_N;p_1,\cdots,p_N)$ with $N$ particles is \begin{equation} \frac1{\Xi}e^{-\beta H_N(x_1,\cdots,x_N;p_1,\cdots,p_N)+\beta\mu N} \label{gc} \end{equation} where \begin{equation} \beta=\frac1{k_BT} \end{equation} in which $k_B$ is the {\it Boltzmann constant} and $\Xi$ is a normalization: \begin{equation} \Xi:=1+\sum_{N=1}^\infty\frac1{N!}\int dx_1\cdots dx_Ndp_1\cdots dp_N\ e^{-\beta H_N(x_1,\cdots,x_N;p_1,\cdots,p_N)+\beta\mu N} . \end{equation} We immediately notice an issue: if the $x_i$ take values anywhere in $\mathbb R^3$, $\Xi$ will come out infinite! To avoid this, let us restrict $x_i\in[0,L]^3$ for some finite $L$, and, later on, we will take $L\to\infty$. Note that $\Xi$ involves a sum over $N$: in the grand-canonical picture, we consider all possible numbers of particles. In addition, note the $1/N!$ factor, which indicates that all the particles are identical, so the labelling $i=1,\cdots,N$ should not affect the statistics! \bigskip \indent Why do we choose a probability of the form\-~(\ref{gc})? The real reason is that it is compatible with the laws of thermodynamics\-~\cite{Ru99,Ga99}, but that is a discussion for another time. Instead, let us have a look at the properties of this distribution, and check that they make sense. The first term in the exponential is $e^{-\beta H}$, and, since $\beta>0$, it says that configurations with large energies will be less likely. The parameter $\beta$ (the temperature) quantifies how much less likely. This is rather sensible: we know that energy tends to be minimized, and that this is more true at low temperatures than at higher ones. The second term, $e^{\beta\mu N}$, controls the number of particles. If $\mu>0$, then configurations with more particles are more likely, whereas $\mu<0$ makes more particles unlikely. Thus the chemical potential $\mu$ controls the {\it density} of particles. \bigskip \indent Why do we use the chemical potential instead of simply fixing the number of particles (which is, after all, the most direct way of setting the density)? This can be done: in statistical mechanics, the situation in which the number of particles is fixed is called the {\it canonical formalism}. The parameters are then the temperature and the number of particles. One can show that the two formalisms are equivalent: there exists a mapping between the density and the chemical potential that makes all average observables identical in both formalisms. However, in certain situations, it may be easier to compute these averages in one formalism rather than the other.
In the case of Bose-Einstein condensation, the computation is easy in the grand-canonical formalism, and impossibly difficult in the canonical one, which is why we use the chemical potential instead of the density. \bigskip \indent Thinking probabilistically in this manner, we can then compute the average behavior of these particles, and compute properties for the macroscopic system. More specifically, we define an {\it observable} $A$ as a map from $\mathbb N$ (the number of particles) to functions on the phase space with $N$ particles: $A_N(x_1,\cdots,x_N;p_1,\cdots,p_N)$. The average of an observable $A$ is \begin{equation} \begin{largearray} \left<A\right> = \frac1\Xi \Bigg( A_0 + \\\hfill+ \sum_{N=1}^\infty\frac1{N!}\int dx_1\cdots dx_Ndp_1\cdots dp_N\ A_N(x_1,\cdots,x_N;p_1,\cdots,p_N)e^{-\beta H_N(x_1,\cdots,x_N;p_1,\cdots,p_N)+\beta\mu N} \Bigg) . \end{largearray} \end{equation} An example of an observable could be the number of particles itself: \begin{equation} \mathcal N_N:=N \end{equation} or the total energy \begin{equation} \mathcal E_N(x_1,\cdots,x_N;p_1,\cdots,p_N):=H_N(x_1,\cdots,x_N;p_1,\cdots,p_N) . \end{equation} \subsubsection{Quantum statistical mechanics} \indent We have thus defined the grand-canonical ensemble for systems of classical particles. What about quantum particles? The microscopic state of a system of quantum particles is given by a wavefunction, for instance, $\psi\in L_2(\mathbb R^3)^{\otimes N}$, and the dynamics is given by a Hamiltonian operator, for instance, \begin{equation} H_N=-\sum_{i=1}^N\frac1{2m}\Delta_{i}+\sum_{1\leqslant i<j\leqslant N}v(x_i-x_j) \end{equation} where $v$ is, again, a pair interaction potential. Bosons are particles whose wavefunction is symmetric under exchanges of particles: for every permutation $\pi$ of $\{1,\cdots,N\}$, \begin{equation} \psi(x_{\pi(1)},\cdots,x_{\pi(N)})=\psi(x_1,\cdots,x_N) \end{equation} and we denote the space of such symmetric, square-integrable wavefunctions of $N$ particles by $L_{2,\mathrm{sym}}^{(N)}(\mathbb R^3)$. Observables are now maps from $\mathbb N$ to self-adjoint operators $A_N$ on this space, and the grand-canonical average of an observable $A$ is defined as in the classical case, with the integrals over phase space replaced by traces: \begin{equation} \left<A\right> :=\frac1\Xi \left( A_0+ \sum_{N=1}^\infty \mathrm{Tr}_{L_{2,\mathrm{sym}}^{(N)}(\mathbb R^3)}(A_Ne^{-\beta H_N+\beta \mu\mathcal N_N}) \right) \end{equation} with \begin{equation} \Xi:= 1+ \sum_{N=1}^\infty \mathrm{Tr}_{L_{2,\mathrm{sym}}^{(N)}(\mathbb R^3)}(e^{-\beta H_N+\beta \mu\mathcal N_N}) . \end{equation} \bigskip \indent As in the classical case, we immediately notice an issue: the trace in $\Xi$ is infinite! To prevent this we replace $\mathbb R^3$ with $[0,L)^3$: \begin{equation} \Xi:= 1+ \sum_{N=1}^\infty \mathrm{Tr}_{L_{2,\mathrm{sym}}^{(N)}([0,L)^3)}(e^{-\beta H_N+\beta \mu\mathcal N_N}) . \label{Xi_box} \end{equation} \subsection{The ideal Bose gas} \indent Let us now apply these concepts to the non-interacting (a.k.a. the ideal) Bose gas. The Hamiltonian is \begin{equation} H_N=-\sum_{i=1}^N\frac1{2m}\Delta_{i} . \end{equation} We will consider this system on the cube $[0,L]^3$ with periodic boundary conditions (other boundary conditions can be treated similarly). In other words, we replace $\mathbb R^3$ with $\mathbb T^3:=(\mathbb R/(L\mathbb Z))^3$. This Hamiltonian is easily diagonalized: the eigenvectors of $-\frac1{2m}\Delta$ are plane waves: \begin{equation} e^{ikx} ,\quad k\in\frac{2\pi}L\mathbb Z^3 \end{equation} whose eigenvalue is \begin{equation} \epsilon_k:=\frac{k^2}{2m} . \end{equation} We therefore have a (Hilbert) basis, in which each element with $N$ particles is indexed by an unordered sequence of $N$ wavevectors $k\in\frac{2\pi}L\mathbb Z^3$ (the vectors need not be different). Equivalently, a basis vector can be written as a function that takes a wavevector $k$ and returns the number of particles of that wavevector.
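For instance, for $N=3$, the basis element built from the unordered sequence $(0,0,k)$ with $k\neq0$ (two particles with wavevector $0$ and one with wavevector $k$) corresponds to the {\it occupation numbers} $N_0=2$, $N_k=1$ and $N_q=0$ for every other $q$. In terms of the occupation numbers $(N_q)_{q\in\frac{2\pi}L\mathbb Z^3}$, the total number of particles is $N=\sum_qN_q$ and the energy of the basis vector is $\sum_q\epsilon_qN_q$.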
This allows us to compute $\Xi$ in the case $\mu<0$: we write the sum over $N$ of the trace in\-~(\ref{Xi_box}) as \begin{equation} \Xi= \prod_{k\in\frac{2\pi}L\mathbb Z^3} \sum_{N_k=0}^\infty e^{-\beta(\frac{k^2}{2m}-\mu)N_k} \end{equation} In the case $\mu<0$, this series is absolutely convergent: \begin{equation} \Xi= \prod_{k\in\frac{2\pi}L\mathbb Z^3} \frac1{1-e^{-\beta(\frac{k^2}{2m}-\mu)}} . \label{Xi_ideal} \end{equation} \bigskip \indent Let us now compute some observables. Let $\mathcal B_q$ be the number of particles in the state $q$ whose average is thus easily computed: \begin{equation} \left<\mathcal B_q\right>= \frac1\Xi \left( \sum_{N_q=0}^\infty N_qe^{-\beta(\frac{q^2}{2m}-\mu)N_q} \right) \prod_{k\in\frac{2\pi}L\mathbb Z^3\setminus\{q\}} \sum_{N_k=0}^\infty e^{-\beta(\frac{k^2}{2m}-\mu)N_k} \label{BE_pre} \end{equation} and so (see Exercise\-~\ref{ex:BE_distr}) \begin{equation} \left<\mathcal B_q\right>= \frac1{e^{\beta(\frac{q^2}{2m}-\mu)}-1} \label{BE} \end{equation} which is called the {\it Bose-Einstein distribution}. It is the distribution of particles as a function of the wavevector $q$. \subsection{Bose-Einstein condensation in the ideal Bose gas} \indent By summing the number of particles of momentum $q$, we can compute the average total number of particles: \begin{equation} \left<\mathcal N\right> =\sum_{q\in\frac{2\pi}L\mathbb Z^3}\left<\mathcal B_q\right> =\sum_{q\in\frac{2\pi}L\mathbb Z^3} \frac1{e^{\beta(\frac{q^2}{2m}-\mu)}-1} \end{equation} and the average density: \begin{equation} \rho_L(\mu):=\frac{\left<\mathcal N\right>}{L^3} = \frac1{L^3} \sum_{q\in\frac{2\pi}L\mathbb Z^3} \frac1{e^{\beta(\frac{q^2}{2m}-\mu)}-1} . \label{rhomu_pre} \end{equation} The density is an important quantity. As was discussed above, the mapping between density and chemical potential is what establishes the equivalence between the grand-canonical and canonical formalisms (which are the points of view with fixed chemical potential and fixed density respectively). Therefore, we will be interested in the mapping $\mu\mapsto\rho_L(\mu)$. But first, let us simplify the expression by taking the $L\to\infty$ limit. \bigskip \indent For fixed $\mu<0$, this is a Riemann sum, so the limit $L\to\infty$ turns the sum to an integral (see Exercise\-~\ref{ex:bec}): \begin{equation} \rho(\mu):=\lim_{L\to\infty}\rho_L(\mu) = \int_{\mathbb R^3}\frac{dq}{(2\pi)^3}\ \frac1{e^{\beta(\frac{q^2}{2m}-\mu)}-1} . \label{rhomu} \end{equation} By dominated convergence, one easily checks (see Exercise\-~\ref{ex:bec}) that this is a strictly increasing function of $\mu$. Furthermore, by dominated convergence, (see Exercise\-~\ref{ex:bec}) \begin{equation} \lim_{\mu\to-\infty}\rho(\mu)=0 \label{lim_rhomu_infty} \end{equation} and \begin{equation} \lim_{\mu\to0}\rho(\mu)= \int_{\mathbb R^3}\frac{dq}{(2\pi)^3}\ \frac1{e^{\beta\frac{q^2}{2m}}-1} = \left(\frac{2m}\beta\right)^{\frac32} \int_{\mathbb R^3}\frac{dq}{(2\pi)^3}\ \frac1{e^{q^2}-1} =:\rho_c(\beta) \label{lim_rhomu_0} \end{equation} Thus, for $\mu\in(-\infty,0)$, $\rho(\mu)$ increases from $0$ to $\rho_c(\beta)$, which is finite. For $\mu>0$, the right side of\-~(\ref{rhomu}) is infinite. In particular, proceeding in this way, we only cover a finite range of densities. What of the densities with $\rho>\rho_c(\beta)$? In other words, can we set the chemical potential $\mu$ in such a way that $\rho>\rho_c(\beta)$? \bigskip \indent To achieve this, instead of fixing $\mu$ and taking $L\to\infty$, we allow $\mu$ to depend on $L$. 
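Before doing so, let us note that the critical density can be computed explicitly: expanding $\frac1{e^{q^2}-1}=\sum_{n\geqslant1}e^{-nq^2}$ in\-~(\ref{lim_rhomu_0}) and carrying out the Gaussian integrals,
\begin{equation}
\rho_c(\beta)=\left(\frac{2m}\beta\right)^{\frac32}\frac1{(2\pi)^3}\sum_{n=1}^\infty\left(\frac\pi n\right)^{\frac32}=\zeta\left({\textstyle\frac32}\right)\left(\frac m{2\pi\beta}\right)^{\frac32}
\end{equation}
where $\zeta$ is the Riemann zeta function. In particular, at fixed density $\rho$, the condition $\rho>\rho_c(\beta)$ is satisfied once the temperature is low enough.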
To make this $L$-dependence explicit, we denote $\mu$ by $\mu_L$. We will choose $\mu_L$ such that $\mu_L\to0$ as $L\to\infty$ (if $\mu_L$ converged to a negative value, we would find the same result as above). Doing so changes the computation in one way: when passing to the $L\to\infty$ limit, we must pay attention to the $q=0$ term in the sum. Let us separate it out: \begin{equation} \rho_L(\mu_L) = \frac1{L^3} \frac1{e^{-\beta\mu_L}-1} + \frac1{L^3} \sum_{q\in\frac{2\pi}L\mathbb Z^3\setminus\{0\}} \frac1{e^{\beta(\frac{q^2}{2m}-\mu_L)}-1} . \end{equation} The second term converges to the corresponding integral uniformly in $\mu_L$: indeed, each of its summands satisfies \begin{equation} \frac1{L^3}\frac1{e^{\beta(\frac{q^2}{2m}-\mu_L)}-1} \leqslant \frac1{L^3} \frac1{e^{\beta\frac{2\pi^2}{mL^2}}-1} \end{equation} which is bounded uniformly in $\mu_L<0$. Thus, taking the limit $L\to\infty$, \begin{equation} \lim_{L\to\infty}\rho_L(\mu_L) = \lim_{L\to\infty} \frac1{L^3} \frac1{e^{-\beta\mu_L}-1} + \int_{\mathbb R^3}\frac{dq}{(2\pi)^3}\ \frac1{e^{\beta\frac{q^2}{2m}}-1} \end{equation} that is \begin{equation} \lim_{L\to\infty}\rho_L(\mu_L) = \lim_{L\to\infty} \frac1{L^3} \frac1{e^{-\beta\mu_L}-1} + \rho_c(\beta) . \end{equation} Thus, any density $\rho>\rho_c(\beta)$ can be attained by taking $\mu_L$ such that \begin{equation} \frac1{L^3} \frac1{e^{-\beta\mu_L}-1} =\rho-\rho_c(\beta) \end{equation} that is \begin{equation} \mu_L=-\frac1\beta\log\left(1+\frac1{L^3(\rho-\rho_c(\beta))}\right) . \end{equation} In this case, \begin{equation} \lim_{L\to\infty}\frac1{L^3}\left<\mathcal B_0\right> =\lim_{L\to\infty}\frac1{L^3}\frac1{e^{-\beta\mu_L}-1} =\rho-\rho_c(\beta) \end{equation} and for $q\neq 0$, \begin{equation} \lim_{L\to\infty}\frac1{L^3}\left<\mathcal B_q\right> =\lim_{L\to\infty}\frac1{L^3}\frac1{e^{\beta(\frac{q^2}{2m}-\mu_L)}-1} =0 . \end{equation} Thus, choosing $\mu_L$ in this way, we find that the number of particles in the state $q=0$ is $L^3(\rho-\rho_c)$, that is, the number of particles in this state is proportional to $N$: there is a {\it macroscopic} proportion of particles that are all in this one state, whereas in every other state $q\neq0$, the fraction of particles is $0$. \bigskip \indent Let's take a step back. We have been working in the grand-canonical ensemble, that is, with a fixed chemical potential and a variable number of particles. Proceeding in this way, we have computed the average density, and found a bijection between the chemical potential and the density, provided we allow the chemical potential to depend on the size of the system $L$. This allows us to switch the point of view and consider the system at fixed density and temperature. Thinking in this way, we find that the behavior of the system depends in a non-trivial way on the density. If $\rho<\rho_c(\beta)$, then $\left<\mathcal B_0\right>/L^3\to0$, but if $\rho>\rho_c(\beta)$, then $\left<\mathcal B_0\right>/L^3\to c_0>0$, see Figure\-~\ref{fig:phase}. Or, to put it into words, for small densities, the system behaves like a regular, classical gas, but when the density exceeds $\rho_c(\beta)$, a huge number of particles all occupy the same state. This phenomenon is called Bose-Einstein condensation. \begin{figure} \hfil\includegraphics[width=8cm]{bec_phase.pdf} \caption{The phase diagram of the ideal gas of Helium-4 atoms. The $y$-axis is the mass density.} \label{fig:phase} \end{figure} \bigskip \indent This is significant for several reasons. For one, this is a new phase of matter that is uniquely quantum: classical systems do not behave like this.
In addition, Bose-Einstein condensation may be the underlying mechanism for some truly astonishing phenomena: superfluidity and superconductivity. Fluids flow, and, while doing so, they lose energy to friction. This is visible in everyday life through the concept of {\it viscosity}. A superfluid has zero viscosity, which means that it can flow without any friction. This leads to some remarkable behaviors: flows can separate and recombine without any losses, which allows superfluids to flow through membranes that are completely water-tight for ordinary fluids. In addition, because no energy is lost through viscosity, the capillary force becomes dominant, and superfluids can be seen to creep up the sides of any container they are put in and escape. This is fun and non-trivial behavior, but it also comes with serious technological applications, the most astonishing of which may be superconductivity. Electrical currents are caused by electrons flowing through metals. As they do so, they are slowed by resistance, and the metal heats up. Superconductivity is a phenomenon by which electrons can flow through a metal {\it without resistance}. In most applications, resistance is waste, which makes superconducting metals extremely useful. One way to understand the phenomenon is that electrons pair up into an effective excitation called a {\it Cooper pair}. Now an electron is a Fermion, but a pair of electrons is a Boson, so Cooper pairs are Bosonic. If these form the analogue of a superfluid phase, they can move around without resistance. But what does all this have to do with Bose-Einstein condensation? The answer is not so straightforward, but it has been posited that Bose-Einstein condensation is the underlying mechanism for both superfluidity and superconductivity. This can be understood intuitively: if most particles are in the same state, they are all doing the same thing, and since friction comes from colliding particles, friction can disappear. (This is a very handwavy explanation: the actual relation between Bose-Einstein condensation, superfluidity and superconductivity is a rather difficult subject). \bigskip \indent Bose-Einstein condensation was first predicted by Bose\-~\cite{Bo24} and Einstein\-~\cite{Ei24} in the 1920's (following roughly the argument given above). It took until 1995 before it was finally observed experimentally\-~\cite{AEe95,DMe95}. The experiments consist in trapping very cold Bosonic atoms (rubidium and sodium) at a high enough density so as to be in the Bose-Einstein condensate phase. The trap is then released, and the momentum of the atoms measured. The result is a sharply peaked distribution: many atoms were in the same state! \section{The Simplified Approach}\label{sec:simplified} \subsection{Interacting Bose gasses} \indent We have proved that the ideal Bose gas exhibits Bose-Einstein condensation. The approach we took consisted in computing exact expressions for the partition function $\Xi$ and the average occupation numbers $\left<\mathcal B_q\right>$. We were able to do this because the ideal Bose gas is an ``exactly solvable model'', but this is a very special feature of that specific model. Furthermore, the ideal gas is not very realistic: there are no interactions between particles, which is not how real matter behaves. Let us consider a more realistic Hamiltonian: \begin{equation} H_N=-\sum_{i=1}^N\frac1{2m}\Delta_{i}+\sum_{1\leqslant i<j\leqslant N}v(x_i-x_j) \label{Ham} \end{equation} where $v$ is a pair interaction potential; throughout, we will think of $v$ as non-negative (repulsive) and spherically symmetric. With such a Hamiltonian, there is no hope of computing the partition function exactly, and we will need different tools. A key role in the low-density analysis is played by two-body {\it scattering}: when the density is small, particles are far apart most of the time, and the energy is dominated by rare events in which two particles come close to each other. Such events are described by the {\it scattering equation} \begin{equation} -\frac1m\Delta\psi(x)+v(x)\psi(x)=0 ,\quad \psi(x)\mathop{\longrightarrow}_{|x|\to\infty}1 . \label{scateq} \end{equation} (The factor $\frac1m$, instead of $\frac1{2m}$, comes from the reduced mass of a pair of particles.) Let us solve this equation in a simple example, the {\it hard core} potential: \begin{equation} v(x)=\left\{\begin{array}{>{\displaystyle}ll} 0&\mathrm{if\ }|x|>R\\ \infty&\mathrm{otherwise} . \end{array}\right.
\end{equation} In other words, the scattering equation is \begin{equation} \left\{\begin{array}{>{\displaystyle}ll} \psi(x)=0&\mathrm{if\ }|x|\leqslant R\\[0.3cm] -\frac1m\Delta\psi=0&\mathrm{otherwise} . \end{array}\right. \end{equation} In spherical coordinates, assuming $\psi$ is spherically symmetric, that is, it only depends on $r\equiv|x|$, \begin{equation} \Delta\psi=\frac1r\partial_r^2(r\psi) \end{equation} so the general solution of $\Delta\psi=0$ is \begin{equation} r\psi=cr-a \end{equation} and so, for $r>R$, \begin{equation} \psi=c-\frac ar \end{equation} and, using $\psi(R)=0$ and $\psi\to1$, \begin{equation} c=1 ,\quad a=R \end{equation} so \begin{equation} 1-\psi=\frac Rr . \end{equation} The constant $c$ is dimensionless, and the constant $a$ has the dimension of a length. In this example, we see that $a$ is the radius of the potential. \bigskip \indent Let us now turn to a different potential, the soft core potential: \begin{equation} v(x)=\left\{\begin{array}{>{\displaystyle}ll} 0&\mathrm{if\ }|x|>R\\ U&\mathrm{otherwise} \end{array}\right. \end{equation} with $U>0$. The scattering equation is thus \begin{equation} \left\{\begin{array}{>{\displaystyle}ll} -\frac1m\Delta\psi+U\psi=0&\mathrm{if\ }|x|\leqslant R\\[0.3cm] -\frac1m\Delta\psi=0&\mathrm{otherwise} \end{array}\right. \end{equation} which, in spherical coordinates, is \begin{equation} \left\{\begin{array}{>{\displaystyle}ll} -\frac1{mr}\partial_r^2(r\psi)+U\psi=0&\mathrm{if\ }|x|\leqslant R\\[0.3cm] -\frac1{mr}\partial_r^2(r\psi)=0&\mathrm{otherwise} . \label{scat_softcore_spherical} \end{array}\right. \end{equation} The solution is (see Exercise\-~\ref{ex:scattering_softcore}), using the fact that $r\psi$ should equal $0$ at $r=0$ and that $r\psi$ is continuous at $r=R$, \begin{equation} r\psi= \left\{\begin{array}{>{\displaystyle}ll} (R-a)\frac{\sinh(\sqrt{Um}r)}{\sinh(\sqrt{Um}R)}&\mathrm{if\ }r\leqslant R\\[0.3cm] r-a&\mathrm{otherwise} \end{array}\right. \label{sol_softcore} \end{equation} with \begin{equation} a=R-\frac{\tanh(\sqrt{Um}R)}{\sqrt{Um}} . \label{scat_softcore} \end{equation} Note that $a$ is an increasing function of $U$, ranging from $0$ at $U=0$ to $R$ as $U\to\infty$. Therefore, the solution of the scattering equation for large $r$ looks like that of a hard core, but with radius $a$ instead of $R$. \bigskip \indent This length scale $a$ is called the {\it scattering length}. We can define it for potentials that are compactly supported, that is, for which there exists $R$ such that $v(x)=0$ for $|x|>R$. In this case, for $|x|>R$, \begin{equation} \psi(x)=1-\frac a{|x|} \end{equation} where $a$ is the scattering length. Alternatively, the scattering length can be computed as follows. \bigskip \theo{Lemma}\label{lemma:scattering} If $v$ is spherically symmetric, compactly supported and integrable, then \nopagebreakaftereq \begin{equation} a=\frac m{4\pi}\int dx\ v(x)\psi(x) . \end{equation} \endtheo \restorepagebreakaftereq \bigskip \indent\underline{Proof}: By the scattering equation\-~(\ref{scateq}), \begin{equation} \frac m{4\pi}\int dx\ v(x)\psi(x) = \frac1{4\pi}\int dx\ \Delta\psi(x) \end{equation} which we rewrite in spherical coordinates, using $\Delta \psi=\frac1{r^2}\partial_r(r^2\partial_r\psi)$: \begin{equation} \frac m{4\pi}\int dx\ v(x)\psi(x) = \int_0^\infty dr\ \partial_r(r^2\partial_r\psi(r)) = \lim_{R\to\infty}\int_0^R dr\ \partial_r(r^2\partial_r\psi(r)) = \lim_{R\to\infty}R^2\partial_r\psi(R) .
\end{equation} Since $v$ has compact support, $v(x)=0$ for $|x|>R$, so \begin{equation} \psi=1-\frac ar \end{equation} and \begin{equation} \partial_r\psi=\frac a{r^2} \end{equation} so \begin{equation} \lim_{R\to\infty}R^2\partial_r\psi(R) =a \end{equation} which proves the lemma. \qed \bigskip \indent This lemma gives us an alternative definition of the scattering length, which does not require $v$ to be compactly supported, just that it be in $L_1(\mathbb R^3)$ so that the integral is finite. \subsection{The Lee-Huang-Yang predictions}\label{sec:lhy} \indent By following the prescriptions of Bogolyubov theory, Lee, Huang and Yang\-~\cite{LHY57} made a number of predictions about the ground state (zero-temperature state) of the interacting Bose gas. Here, we will focus on their predictions for the energy per particle, the condensate fraction and the two-point correlation function. \bigskip \indent As was mentioned above, we study the ground state of the Hamiltonian\-~(\ref{Ham}). Let us be more specific. As we did in Section\-~\ref{sec:statmech}, we will consider the system to be in a finite volume with periodic boundary conditions. We denote the volume by $V$, and place the particles on the torus $(\mathbb R/(V^{\frac13}\mathbb Z))^3$. Unlike in Section\-~\ref{sec:statmech}, we will work in the canonical ensemble, which means that we fix the number of particles to some $N\in\mathbb N$. We will be interested in the {\it thermodynamic limit}, in which \begin{equation} N\to\infty ,\quad V\to\infty ,\quad \frac NV=\rho \end{equation} with $\rho$ (the density) fixed. In this case, the Hamiltonian\-~(\ref{Ham}) has pure-point spectrum (by Theorem\-~\ref{theo:compact_schrodinger}, $e^{-H_N}$ is compact, so it has discrete spectrum, and therefore so does $H_N$). Its lowest eigenvalue is denoted by $E_0$, and is called the {\it ground-state energy}. The corresponding eigenvector is denoted by $\psi_0$: \begin{equation} H_N\psi_0=E_0\psi_0 . \label{eigenvalue} \end{equation} (Note that the ground state is unique, see Exercise\-~\ref{ex:perron_frobenius}.) \bigskip \indent The ground-state energy per particle is defined as \begin{equation} e_0:=\lim_{\displaystyle\mathop{\scriptstyle N,V\to\infty}_{\frac NV=\rho}}\frac{E_0}N . \end{equation} Lee, Huang and Yang predicted the following low-density expansion for the ground state energy. \bigskip \theoname{Conjecture}{\cite[(25)]{LHY57}}\label{conjecture:lhy_e} For any potential $v\in L_1(\mathbb R^3)$ with $v\geqslant 0$ (actually, the potential may include a hard-core component), as $\rho\to 0$, \begin{equation} e_0=\frac{2\pi}m\rho a\left(1+\frac{128}{15\sqrt\pi}\sqrt{\rho a^3}+o(\sqrt{\rho})\right) \label{lhy_e} \end{equation} where $a$ is the scattering length of the potential, and \nopagebreakaftereq \begin{equation} \lim_{\rho\to 0}\frac{o(\sqrt{\rho})}{\sqrt{\rho}}=0 . \end{equation} \endtheo \restorepagebreakaftereq \bigskip The first two orders of this expansion thus only depend on the potential through its scattering length. This is a universality result: any two potentials that share the same scattering length, even if they are radically different, will yield the same first two orders of the low-density expansion of the ground state energy. \bigskip \indent The condensate fraction is the fraction of particles that are in the Bose-Einstein condensate. In the case of the ideal Bose gas, the particles condensed in the $k=0$ state, which is the constant wavefunction.
In the interacting case, the condensate state will still be the constant state: since the condensate state is unique and the system is invariant under translations, the condensate state must itself be invariant under translations, and the only such normalized state is the constant one \begin{equation} \varphi_0=\frac1{\sqrt V} . \end{equation} To compute the average number of particles in the condensate state, we project the wavefunction onto the condensate state: let \begin{equation} P_i:=\mathds 1^{\otimes(i-1)}\otimes\left|\varphi_0\right>\left<\varphi_0\right|\otimes \mathds 1^{\otimes(N-i)} \label{Pi} \end{equation} be the projector onto the subspace where the $i$-th particle is in the constant state (we use the ``bra-ket'' notation of quantum mechanics: $\left|\varphi_0\right>\left<\varphi_0\right|f\equiv\left<\varphi_0,f\right>\varphi_0$). The average number of particles in the condensate is then \begin{equation} \sum_{i=1}^N\left<\psi_0\right|P_i\left|\psi_0\right> . \end{equation} The condensate fraction is then \begin{equation} \eta_0:= \lim_{\displaystyle\mathop{\scriptstyle N,V\to\infty}_{\frac NV=\rho}} \frac1N \sum_{i=1}^N\left<\psi_0\right|P_i\left|\psi_0\right> . \label{eta_Pi} \end{equation} The Lee-Huang-Yang prediction of the condensate fraction for low densities is the following. \bigskip \theoname{Conjecture}{\cite[(41)]{LHY57}}\label{conjecture:lhy_h} For any potential $v\in L_1(\mathbb R^3)$ with $v\geqslant 0$ (actually, the potential may include a hard-core component), as $\rho\to 0$, \begin{equation} 1-\eta_0\sim\frac{8\sqrt{\rho a^3}}{3\sqrt\pi} \label{lhy_h} \end{equation} where $a$ is the scattering length of the potential. \endtheo \bigskip In particular, this conjecture would imply Bose-Einstein condensation, which means that \begin{equation} \eta_0>0 . \end{equation} \bigskip \indent The two-point correlation function is the joint probability (in the usual quantum mechanical probability distribution $|\psi|^2$) of finding a particle at $y$ and another at $y'$: \begin{equation} C_2(y-y'):= \lim_{\displaystyle\mathop{\scriptstyle N,V\to\infty}_{\frac NV=\rho}} \sum_{i,j=1}^N\left<\psi_0\right|\delta(x_i-y)\delta(x_j-y')\left|\psi_0\right> . \label{C2} \end{equation} Lee, Huang and Yang made the following prediction. \bigskip \theoname{Conjecture}{\cite[(48)]{LHY57}}\label{conjecture:lhy_C2} For any potential $v\in L_1(\mathbb R^3)$ with $v\geqslant 0$ (actually, the potential may include a hard-core component), as $\sqrt{\rho a}|x|\to \infty$, \begin{equation} \frac1{\rho^2}C_2(x)-1\sim\frac{16\rho a^3}{\pi^3(\sqrt{\rho a}|x|)^4} \end{equation} where $a$ is the scattering length of the potential. \endtheo \bigskip \indent Of these three conjectures, the only one that has been proved is Conjecture\-~\ref{conjecture:lhy_e}, and even so, the proof does not apply to all interactions $v$. The first order term, $2\pi\rho a/m$, was proved by Lieb and Yngvason\-~\cite{LY98}, and the second order term was proved for potentials that are $L_1$ and $L_3$ by\-~\cite{FS20,FS23} and\-~\cite{YY09,BCS21}. \cite{FS20} proved that\-~(\ref{lhy_e}) holds as a lower bound for soft-core potentials of finite range, and extended their result to hard core potentials in\-~\cite{FS23}. \cite{YY09}\-~proved the corresponding upper bound for small potentials, which was generalized to $L_3$ finite-range potentials by\-~\cite{BCS21}. The other two conjectures are still wide open. In particular, there is still no proof of Bose-Einstein condensation for interacting models.
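\bigskip \indent Since the scattering length plays a central role in all three conjectures, let us note that it is also easy to compute in practice: in spherical coordinates, the scattering equation\-~(\ref{scateq}) reduces to an ODE for $r\psi$, which can be integrated numerically out of the support of $v$, where $a$ can be read off from $r\psi=c(r-a)$. Here is a minimal numerical sketch of such a computation (the discretization, the parameter values and the function names are merely illustrative); it can be checked against the explicit soft core formula\-~(\ref{scat_softcore}).
\begin{verbatim}
import numpy as np

def scattering_length(v, m=1.0, r_max=5.0, n=20000):
    # integrate f'' = m v(r) f for f = r*psi, with f(0) = 0 and an
    # arbitrary slope f'(0) = 1, using a standard RK4 scheme
    dr = r_max / n
    def rhs(r, y):
        return np.array([y[1], m * v(r) * y[0]])
    y, r = np.array([0.0, 1.0]), 0.0
    for _ in range(n):
        k1 = rhs(r, y)
        k2 = rhs(r + dr / 2, y + dr / 2 * k1)
        k3 = rhs(r + dr / 2, y + dr / 2 * k2)
        k4 = rhs(r + dr, y + dr * k3)
        y, r = y + dr / 6 * (k1 + 2 * k2 + 2 * k3 + k4), r + dr
    # outside the support of v, f(r) = c (r - a), so a = r - f/f'
    return r - y[0] / y[1]

# soft core check: the exact value is R - tanh(sqrt(U m) R)/sqrt(U m)
R, U, m = 1.0, 5.0, 1.0
a_numerical = scattering_length(lambda r: U if r <= R else 0.0, m=m)
a_exact = R - np.tanh(np.sqrt(U * m) * R) / np.sqrt(U * m)
print(a_numerical, a_exact)
\end{verbatim}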
\subsection{Overview of the Simplified Approach} \indent The Simplified Approach is what its name suggests: a simplified way of studying the interacting Bose gas that is considerably simpler than dealing with the Hamiltonian\-~(\ref{Ham}). By using the term ``simplified'', we mean to imply that the results found using the approach are not (at least not yet) proven to be valid for the original Hamiltonian\-~(\ref{Ham}). In other words, it has (so far) not been used to prove any properties of\-~(\ref{Ham}) (though there is ongoing work in that direction). However, we have ample numerical and analytical evidence that the Simplified Approach captures a lot of the physics of the Bose gas. In fact, it captures significantly more than Bogolyubov theory, which is also an approximate approach. This makes it quite an intriguing theory, and it seems to be worth studying in more detail. We will define the Simplified Approach in the next section; for now, let us discuss the results predicted by the Simplified Approach, to motivate the more detailed study of it that will follow. \bigskip \indent The Simplified Approach reduces the computation of the ground state energy to solving a partial differential equation on $\mathbb R^3$ which is non-linear and non-local. ``Equation{\it s}'' plural, because the Simplified Approach actually provides a family of equations that are more or less accurate and, correspondingly, harder or easier to study. Here we will discuss three of these equations, called the Big\-~(\ref{bigeq}), Medium\-~(\ref{medeq}) and Simple\-~(\ref{simpleq}) Equations. Solving these equations is not easy, but it is much more approachable than solving the equation\-~(\ref{eigenvalue}), which involves an infinite number of particles, whereas the Simplified Approach provides single-particle equations. \bigskip \indent Within the Simplified Approach, we have proved Conjectures\-~\ref{conjecture:lhy_e}\-~\cite{CJL20}, \ref{conjecture:lhy_h}\-~\cite{CJL21} and\-~\ref{conjecture:lhy_C2}\-~\cite{Ja22}. In particular, we have proved that Bose-Einstein condensation occurs in the Simplified Approach. Thus, the Simplified Approach tells us at least as much as Bogolyubov theory (at least for the ground state of\-~(\ref{Ham}): it does not have anything to say about the thermal state (at least for now)). It goes quite a bit further. The large density expansion of the ground state energy per particle has long been computed for the Bose gas, at least in the case in which the Fourier transform of the potential is non-negative: $\hat v\geqslant 0$: \begin{equation} e_0\sim_{\rho\to\infty}\frac\rho2\int dx\ v(x) . \label{e_highrho} \end{equation} Within the Simplified Approach, this also holds. This is where things get significant: the Simplified Approach gives accurate results for {\it both low and high densities}. This is rather unusual: there are many effective theories in physics (Bogolyubov theory is one of them), but in almost all such cases, the effective theory works in just one parameter regime. The Simplified Approach works in two opposite regimes. \bigskip \indent A natural question that arises is: if the Simplified Approach is accurate for small and large densities, how well does it do for intermediate densities? The energy per particle is plotted in Figure\-~\ref{fig:energy}.
The red crosses come from a Quantum Monte Carlo computation, which is the gold standard for quantum mechanical predictions (even though the theoretical control over the accuracy of this method is relatively incomplete; quantum mechanics is hard...). The gray dotted line is the Bogolyubov prediction, which we see is good for small densities, but quickly falls off the mark. The predictions for the Big, Medium and Simple Equations are plotted as well. The Simple Equation is accurate at small and large densities, as was mentioned above, but not great at intermediate densities. The Medium and Big Equations are quite accurate at {\it all} densities, especially the Big Equation, which visually looks like a fit of the Quantum Monte Carlo data. This means that the Big Equation is capturing at least some of the physics of the many-body Bose gas {\it at all densities}! In addition, the Big Equation has a distinct advantage over Quantum Monte Carlo: it is much easier to solve numerically. Quantum Monte Carlo requires a significant amount of artistry: it is too costly to simulate very many particles, so the results need to be inferred from computations with a few dozen particles at most. On the other hand, the Big Equation is a partial differential equation that can be solved numerically with any of the usual tools. \bigskip \begin{figure} \hfil\includegraphics[width=8cm]{energy.pdf} \caption{%
\cite[Fig 1]{CHe21} The predictions of the energy per particle as a function of the density for the \eqformat{Simple Equation}, \eqformat{Medium Equation}, and \eqformat{Big Equation}, compared to a \eqformat{Quantum Monte Carlo} (QMC) simulation and the Lee-Huang-Yang (LHY) formula. } \label{fig:energy} \end{figure} \indent The condensate fraction is plotted in Figure\-~\ref{fig:condensate}. Again, we see that the Bogolyubov prediction is good for very small densities, but quickly goes wrong. The Big and Medium Equations, on the other hand, seem to be very accurate throughout the range of densities. \bigskip \begin{figure} \hfil\includegraphics[width=8cm]{condensate.pdf} \caption{%
\cite[Fig 3]{CHe21} The predictions of the condensate fraction as a function of the density for the \eqformat{Simple Equation}, \eqformat{Medium Equation}, and \eqformat{Big Equation}, compared to a \eqformat{Quantum Monte Carlo} (QMC) simulation and the Bogolyubov prediction (Bog). } \label{fig:condensate} \end{figure} \indent Thus, the Simplified Approach provides us with a simple tool to study systems of interacting Bosons. The simplest equations can be studied analytically, and reproduce known and conjectured results\-~\cite{CJL20,CJL21,Ja22}. The more complicated equations are surprisingly accurate, while being much simpler to handle than traditional methods\-~\cite{CHe21,Ja23b}. In particular, the Big Equation has been used\-~\cite{Ja23b} to find evidence for a previously undiscovered phase in Bosons at intermediate densities. \subsection{The Simplified Approach and its approximations} \indent The Simplified Approach dates back to a paper by Lieb from 1963\-~\cite{Li63}. The discussion below follows\-~\cite{Li63} closely (though with more detail). The starting point of the Simplified Approach is\-~(\ref{eigenvalue}), which we write out using\-~(\ref{Ham}): \begin{equation} -\sum_{i=1}^N\frac1{2m}\Delta_{i}\psi_0+\sum_{1\leqslant i<j\leqslant N}v(x_i-x_j)\psi_0=E_0\psi_0 . \end{equation} The idea is to integrate this equation over all of the particles but one or two, and to close the resulting hierarchy of equations for the correlation functions of $\psi_0$ by making a number of factorization approximations. Depending on which terms are kept exactly and which are approximated, this procedure yields the Big, Medium or Simple Equation (see\-~\cite{Li63,CHe21} for the derivation and for the Big and Medium Equations). In what follows, we will mostly focus on the Simple Equation, which is the most tractable analytically. It is an equation for a single function $u:\mathbb R^d\to\mathbb R$ (the physical case is $d=3$): \begin{equation} -\frac1m\Delta u(x)=v(x)(1-u(x))-4eu(x)+2e\rho\, u\ast u(x) \label{simpleq} \end{equation} where $u\ast u(x):=\int dy\ u(y)u(x-y)$ is the convolution, and $e$ is determined self-consistently by \begin{equation} e=\frac\rho2\int dx\ (1-u(x))v(x) \label{energy} \end{equation} which is also the prediction of the Simplified Approach for the ground-state energy per particle. Let us first prove that the Simple Equation has a solution. \bigskip \theoname{Theorem}{Existence}\label{theo:existence} If $v\in L_p(\mathbb R^d)$ for some $p>\max\{\frac d2,1\}$ and $v\geqslant 0$, then\-~(\ref{simpleq}) has an integrable solution $u$ satisfying $0\leqslant u(x)\leqslant 1$.
\endtheo \bigskip \indent The crucial idea of the proof is that the Simple Equation is much easier to solve if we change the point of view: instead of fixing $\rho$ and computing $e$ and $u$, we fix $e$ and compute $\rho$ and $u$. Doing so turns the term $-4eu$ into a genuinely linear term (when $\rho$ is fixed, $e$ is itself a functional of $u$, so this term is non-linear). Next, we write the Simple Equation as a fixed point equation, by defining \begin{equation} \left(-\frac1m\Delta+v+4e\right)u_n=v+2e\rho_{n-1} u_{n-1}\ast u_{n-1} ,\quad u_0:=0 \end{equation} with \begin{equation} \rho_n:=\frac{2e}{\int dx\ (1-u_n(x))v(x)} . \end{equation} We will then prove the convergence of the iteration using the monotone convergence theorem. But first, let us prove a simple useful lemma. \bigskip \theo{Lemma}\label{lemma:intu} If the solution of the Simple Equation exists in $L_1(\mathbb R^d)$, then \nopagebreakaftereq \begin{equation} \int dx\ u(x)=\frac1\rho \end{equation} \endtheo \restorepagebreakaftereq \bigskip \indent\underline{Proof}: We integrate both sides of the Simple Equation\-~(\ref{simpleq}): \begin{equation} 0=\frac{2e}\rho-4e\int dx\ u(x)+2e\rho\left(\int dx\ u(x)\right)^2 \end{equation} which we solve: \begin{equation} \int dx\ u(x)=\frac1\rho . \end{equation} \qed \bigskip \indent Let us translate this to the language of functional analysis: let \begin{equation} K_e:=\left(-\frac1m\Delta+v+4e\right)^{-1} \end{equation} which is an operator from $L_p(\mathbb R^d)$ to the Sobolev space $W_{2,p}(\mathbb R^d)$ (see Appendix\-~\ref{app:sobolev}). Indeed, if $f\in W_{2,p}(\mathbb R^d)$, then, by the Sobolev inequality, $f\in L_q(\mathbb R^d)$ with $\frac1q=\frac1p-\frac 2d$ and, by the triangle and H\"older inequalities, \begin{equation} \left\|-\frac1m\Delta f+vf+4ef\right\|_p \leqslant \frac1m\|\Delta f\|_p+4e\|f\|_p+\|v\|_{\frac d2}\|f\|_q <\infty \end{equation} so $(-\frac1m\Delta+v+4e)$ maps $W_{2,p}$ to $L_p$, and its inverse $K_e$ maps $L_p$ to $W_{2,p}$ (this is where the condition that $v$ be at least $L_{\frac d2}$ comes from). In terms of $K_e$, \begin{equation} u_n=K_e(v+2e\rho_{n-1}u_{n-1}\ast u_{n-1}) . \end{equation} \bigskip \theo{Lemma}\label{lemma:Ke_pos} The operator $K_e$ is positivity preserving, see Appendix\-~\ref{app:positivity_preserving}. \endtheo \bigskip \indent\underline{Proof}: This follows directly from Theorem\-~\ref{theo:schrodinger}, taking $v$ to be $m(v+4e)$ and using the fact that $\mathrm{spec}(-\Delta+m(v+4e))\geqslant 4em$. \qed \bigskip \theo{Lemma}\label{lemma:monotone} $\rho_n$ and $u_n$ are pointwise increasing in $n$ and \nopagebreakaftereq \begin{equation} \int dx\ u_n(x)\leqslant\frac1{2e}\int dx\ (1-u_n(x))v(x)\equiv\frac1{\rho_n} . \end{equation} \endtheo \restorepagebreakaftereq \bigskip \indent\underline{Proof}: We proceed by induction. \bigskip \point We have \begin{equation} u_0=0 ,\quad \rho_0=\frac{2e}{\int dx\ v(x)} \end{equation} and \begin{equation} u_1=K_ev ,\quad \rho_1=\frac{2e}{\int dx\ v(x)-\int dx\ v(x)K_ev(x)} . \end{equation} By Lemma\-~\ref{lemma:Ke_pos}, \begin{equation} u_1\geqslant 0\equiv u_0 ,\quad \rho_1\geqslant\frac{2e}{\int dx\ v(x)}\equiv \rho_0 . \end{equation} Finally, $u_1$ satisfies \begin{equation} -\frac1m\Delta u_1+4e u_1+vu_1=v \end{equation} so, integrating both sides, \begin{equation} \int dx\ u_1(x)=\frac1{4e}\int dx\ v(x)(1-u_1(x)) \leqslant\frac1{2e}\int dx\ v(x)(1-u_1(x)) . \label{intu1} \end{equation} \bigskip \point Suppose the lemma holds for $n\geqslant 1$. By Lemma\-~\ref{lemma:Ke_pos} and the inductive hypothesis, \begin{equation} u_{n+1}=K_ev+2e\rho_nK_e u_n\ast u_n\geqslant K_ev+2e\rho_{n-1}K_e u_{n-1}\ast u_{n-1} \equiv u_n .
\end{equation} In addition, \begin{equation} \rho_{n+1}=\frac{2e}{\int dx\ v(x)(1-u_{n+1}(x))} \geqslant\frac{2e}{\int dx\ v(x)(1-u_{n}(x))}=\rho_n . \end{equation} Finally, \begin{equation} -\frac1m\Delta u_{n+1}+4eu_{n+1}+v u_{n+1}=v+2e\rho_n u_n\ast u_n \end{equation} so, integrating both sides, \begin{equation} \int dx\ u_{n+1}(x) =\frac1{4e}\int dx\ v(x)(1-u_{n+1}(x)) +\frac12\rho_n\left(\int dx\ u_n(x)\right)^2 \label{intun} \end{equation} so \begin{equation} \int dx\ u_{n+1}(x) \leqslant\frac1{4e}\int dx\ v(x)(1-u_{n+1}(x)) +\frac12\rho_n\int dx\ u_n(x)\int dx\ u_{n+1}(x) \end{equation} and, by the inductive hypothesis, \begin{equation} \int dx\ u_n(x) \leqslant \frac1{2e}\int dx\ (1-u_n(x))v(x) =\frac1{\rho_n} \end{equation} so \begin{equation} \int dx\ u_{n+1}(x) \leqslant\frac1{4e}\int dx\ v(x)(1-u_{n+1}(x)) +\frac12\int dx\ u_{n+1}(x) \end{equation} which implies \begin{equation} \int dx\ u_{n+1}(x) \leqslant\frac1{2e}\int dx\ v(x)(1-u_{n+1}(x)) . \end{equation} \qed \bigskip \theo{Lemma}\label{lemma:bound} For all $n$, \nopagebreakaftereq \begin{equation} 0\leqslant u_n(x)\leqslant 1 . \end{equation} \endtheo \restorepagebreakaftereq \bigskip \indent\underline{Proof}: We proceed by induction. The lemma is trivial for $n=0$ since $u_0=0$. \bigskip \point First, we prove that $u_{n+1}\geqslant 0$. We have \begin{equation} u_{n+1}=K_e(v+2e\rho_n u_n\ast u_n) \end{equation} so, by Lemma\-~\ref{lemma:Ke_pos}, \begin{equation} u_{n+1}\geqslant 0 . \end{equation} \bigskip \point We now turn to $u_{n+1}\leqslant 1$. We have \begin{equation} \frac1m\Delta u_{n+1}=v(u_{n+1}-1)+4eu_{n+1}-2e\rho_n u_n\ast u_n . \end{equation} By Young's inequality (Theorem\-~\ref{theo:young}), \begin{equation} |u_n\ast u_n|\leqslant\|u_n\|_1\|u_n\|_\infty \leqslant\|u_n\|_1 \end{equation} and, by Lemma\-~\ref{lemma:monotone}, $\|u_n\|_1\leqslant\frac1{\rho_n}$ so \begin{equation} \frac1m\Delta u_{n+1}\geqslant v(u_{n+1}-1)+4eu_{n+1}-2e . \end{equation} Let \begin{equation} A:=\{x:\ u_{n+1}(x)>1\} . \end{equation} Suppose that $A\neq\emptyset$. For $x\in A$, \begin{equation} \frac1m\Delta u_{n+1} \geqslant2e>0 \end{equation} Therefore, $u_{n+1}$ is subharmonic on $A$ (see Appendix\-~\ref{app:harmonic}). By Theorem\-~\ref{theo:harmonic} $u_{n+1}$ achieves its maximum on the boundary of $A$, but, by definition, $u_{n+1}=1$ on its boundary. Therefore, inside $A$, \begin{equation} u_{n+1}\leqslant 1 \end{equation} which is a contradiction. Therefore $A$ is empty (note that $A$ is an open set because $u_{n+1}$ is continuous, by virtue of being in $W_{2,p}(\mathbb R^d)$) and so \begin{equation} u_{n+1}\leqslant 1 . \end{equation} \qed \bigskip \theo{Lemma} Let \begin{equation} u(x):=\lim_{n\to\infty}u_n(x) ,\quad \rho(e):=\lim_{n\to\infty}\rho_n . \end{equation} Both limits exist, $u\in W_{2,p}(\mathbb R^d)$ and $u$ solves the Simple equation\-~(\ref{simpleq}). \endtheo \bigskip \indent\underline{Proof}: The limit of $u_n$ exist by Lemmas\-~\ref{lemma:monotone} and\-~(\ref{lemma:bound}) and the monotone convergence theorem. Furthermore, by Lemma\-~\ref{lemma:monotone}, \begin{equation} \rho_n\leqslant\frac1{\int dx\ u_n(x)} \leqslant\frac1{\int dx\ u_1(x)}\equiv \frac1{\int dx\ K_ev(x)}<\infty \end{equation} so, by monotone convergence, $\rho_n$ converges as well. In addition, by Lemma\-~\ref{lemma:monotone}, \begin{equation} \int dx\ u(x)\leqslant \frac1{2e}\|v\|_1 \end{equation} so $u\in L_1(\mathbb R^d)$, and so, by dominated convergence, $u_n$ converges in $L_1(\mathbb R^d)$. 
Moreover, by Lemma\-~\ref{lemma:bound}, \begin{equation} \|u\|_p^p\leqslant\|u\|_1 \end{equation} and since $|u_n-u|\leqslant 1$, \begin{equation} \|u_n-u\|_p^p\leqslant\|u_n-u\|_1 \end{equation} so $u_n$ converges in $L_p$ as well. Now, by Young's inequality (Theorem\-~\ref{theo:young}), \begin{equation} \|u\ast u-u_n\ast u_n\|_p =\|(u+u_n)\ast(u-u_n)\|_p \leqslant (\|u\|_1+\|u_n\|_1)\|u_n-u\|_p \end{equation} and so \begin{equation} \|(v+2e\rho_n u_n\ast u_n)-(v+2e\rho u\ast u)\|_p \leqslant 2e\rho\|u_n\ast u_n-u\ast u\|_p +2e\|u_n\ast u_n\|_p(\rho-\rho_n) . \end{equation} Thus, $v+2e\rho_n u_n\ast u_n$ converges in $L_p$, so $K_e(v+2e\rho_n u_n\ast u_n)$ converges in $W_{2,p}$. Therefore, \begin{equation} u=K_e(v+2e\rho u\ast u) \end{equation} so $u$ satisfies the Simple Equation. \qed \bigskip \indent We are almost there: we have proved the existence of the solution of the modified problem where $e$ is fixed instead of $\rho$. To prove Theorem\-~\ref{theo:existence}, we need to prove that every density $\rho>0$ is attained by some $e$, that is, that $e\mapsto\rho(e)$ is surjective. \bigskip \theo{Lemma}\label{lemma:surjective} The map $e\mapsto\rho(e)$ is continuous, and $\rho(0)=0$ and $\lim_{e\to\infty}\rho(e)=\infty$. Therefore, by the intermediate value theorem, $e\mapsto\rho(e)$ is surjective onto $[0,\infty)$. \endtheo \bigskip \indent\underline{Proof}: The trick of the proof is to consider the sequences \begin{equation} a_n:=\int dx\ u_n(x) ,\quad b_n:=\frac1{\rho_n}\equiv\frac1{2e}\int dx\ (1-u_n(x))v(x) . \end{equation} By dominated convergence and Lemma\-~\ref{lemma:intu}, \begin{equation} \lim_{n\to\infty}a_n=\int dx\ u(x)=\frac1\rho ,\quad \lim_{n\to\infty}b_n=\frac1\rho \end{equation} and since $u_n$ is increasing, \begin{equation} a_n\leqslant\frac1\rho\leqslant b_n . \end{equation} Now, by\-~(\ref{intun}) \begin{equation} 2a_{n+1} =b_{n+1} +\rho_na_n^2 \end{equation} and so, since $b_n=1/\rho_n$, \begin{equation} 2(a_{n+1}-a_n)-(b_{n+1}-b_n)=\rho_n(b_n-a_n)^2 \end{equation} which we sum over $n\geqslant1$: \begin{equation} \frac1\rho+b_1-2a_1=\sum_{n=1}^\infty \rho_n(b_n-a_n)^2 . \end{equation} Now, by\-~(\ref{intu1}), \begin{equation} a_1=\frac1{4e}\int dx\ v(x)(1-u_1(x)) =\frac12b_1 \end{equation} so \begin{equation} \frac1\rho=\sum_{n=1}^\infty\rho_n(b_n-a_n)^2 \end{equation} and since $\rho_n$ is increasing, \begin{equation} \sum_{n=1}^\infty(b_n-a_n)^2\leqslant \frac1{\rho_1^2} =\left(\frac1{2e}\int dx\ (1-K_ev(x))v(x)\right)^2 \leqslant\left(\frac1{2e}\int dx\ v(x)\right)^2 \end{equation} which is bounded uniformly in $e$ on any compact interval $e\in[e_1,e_2]$. Thus, $b_n-a_n\to0$ uniformly in $e$ on any compact interval and thus $\rho_n$ converges uniformly on $[e_1,e_2]$ so $\rho$ is continuous on $[e_1,e_2]$. Since $e_1,e_2$ can be chosen arbitrarily, this holds for all $e$. \bigskip \indent Finally, \begin{equation} \rho_n(0)=0 \end{equation} so $\rho(0)=0$, and, since \begin{equation} \rho(e)\geqslant\rho_0(e)=\frac{2e}{\int dx\ v(x)} \end{equation} we have \begin{equation} \lim_{e\to\infty}\rho(e)=\infty . \end{equation} \qed \bigskip \indent This proves Theorem\-~\ref{theo:existence}, and thus establishes the existence of a solution to the Simple Equation. But what of the uniqueness? \bigskip \theoname{Theorem}{Uniqueness} For any fixed $e$, the solution $(u,\rho)$ to the Simple Equation with $u\in L_1(\mathbb R^d)$ and $u(x)\in[0,1]$ is unique. \endtheo \bigskip \indent\underline{Proof}: Consider two solutions $(u,\rho)$ and $(u',\rho')$ with $u'(x)\in[0,1]$. We first show that $u'\geqslant u_n$ by induction.
It is trivial at $n=0$, and for $n\geqslant 1$, \begin{equation} u'-u_n=2eK_e(\rho'u'\ast u'-\mathds 1_{n>1}\rho_{n-1}u_{n-1}\ast u_{n-1}) \end{equation} where $\mathds 1_{n>1}$ is equal to 1 if $n>1$ and $0$ otherwise. Since \begin{equation} \rho'=\frac{2e}{\int dx\ (1-u'(x))v(x)} \end{equation} we have \begin{equation} \rho'\geqslant\frac{2e}{\int dx\ (1-u_{n-1}(x))v(x)}\equiv \rho_{n-1} \end{equation} so \begin{equation} \rho'u'\ast u'\geqslant\rho_{n-1}u_{n-1}\ast u_{n-1} \end{equation} and \begin{equation} u'\geqslant u_n . \end{equation} Thus \begin{equation} u'\geqslant u . \end{equation} Furthermore, since $u'\geqslant u$, \begin{equation} \rho'=\frac{2e}{\int dx\ (1-u'(x))v(x)}\geqslant\frac{2e}{\int dx\ (1-u(x))v(x)}=\rho \end{equation} while, by Lemma\-~\ref{lemma:intu}, \begin{equation} \frac1{\rho'}=\int dx\ u'(x) \geqslant\int dx\ u(x)=\frac1\rho \end{equation} so $\rho=\rho'$ and $\int dx\ u'(x)=\int dx\ u(x)$. Since $u'\geqslant u$ pointwise, \begin{equation} u= u' . \end{equation} \qed \bigskip \indent Thus, taking the point of view in which the energy $e$ is fixed and $\rho$ is computed as a function of $e$, the solution is unique. However, this does not imply that the solution to the problem in which $\rho$ is fixed is unique: the mapping $e\mapsto\rho(e)$ may not be injective (it is surjective by Lemma\-~\ref{lemma:surjective}). To prove that it is injective, one could prove that $\rho$ is an increasing function of $e$ (physically, it should be: the higher the density is, the higher the energy should be because the potential is repulsive). This has been proved for small and large values of $e$\-~\cite{CJL21}, but, in general, it is still an open problem. \subsection{Energy of the Simple Equation} \indent The Simplified Approach provides a natural prediction for the ground state energy per particle (see\-~(\ref{E0})): \begin{equation} e=\frac\rho2\int dx\ (1-u(x))v(x) . \end{equation} We have already seen numerical evidence for the fact that the Big and Medium Equations provide very accurate predictions for the energy, see Figure\-~\ref{fig:energy}. Let us now discuss an analytical result, which we can prove for the Simple Equation. \bigskip \indent As was explained in Section\-~\ref{sec:lhy}, the ground state energy of the Bose gas has been computed at low and high densities, see\-~(\ref{lhy_e}) and\-~(\ref{e_highrho}). We have proved that both of these asymptotic expansions hold for the prediction of the Simple Equation. \bigskip \theoname{Theorem}{\cite[Theorem 1.4]{CJL20}}\label{theo:energy} For the Simple Equation, under the assumptions of Theorem\-~\ref{theo:existence} for $d=3$, as $\rho\to0$, \begin{equation} e=\frac{2\pi}m\rho a\left(1+\frac{128}{15\sqrt\pi}\sqrt{\rho a^3}+o(\sqrt\rho)\right) \label{low_density} \end{equation} where $a$ is the scattering length of the potential, and as $\rho\to\infty$ \nopagebreakaftereq \begin{equation} e\sim_{\rho\to\infty}\frac\rho2\int dx\ v(x) . \label{high_density} \end{equation} \endtheo \restorepagebreakaftereq \bigskip It is rather striking that the Simple Equation agrees with the Bose gas {\it both} at high and low densities. Note, however, that whereas\-~(\ref{e_highrho}) holds only for potentials of positive type (that is, the Fourier transform of the potential $\hat v$ is non-negative), (\ref{high_density}) holds regardless of the sign of $\hat v$. The asymptotic agreement at high densities therefore only holds for positive type potentials. \bigskip \indent\underline{Proof}: \medskip \point Let us first prove\-~(\ref{high_density}), which is easier. By\-~(\ref{energy}), it suffices to prove that \begin{equation} \lim_{\rho\to\infty}\int dx\ u(x)v(x) =0 .
\label{limintuv} \end{equation} Let \begin{equation} \chi_a:=\{x:\ v(x)\geqslant a\} \end{equation} and decompose \begin{equation} \int dx\ u(x)v(x) = \int_{\chi_a}dx\ u(x)v(x) + \int_{\mathbb R^d\setminus\chi_a}dx\ u(x)v(x) \end{equation} which, by Lemmas\-~\ref{lemma:intu} and\-~\ref{lemma:bound}, is bounded by \begin{equation} \int dx\ u(x)v(x) \leqslant \int_{\chi_a}dx\ v(x) + \frac a\rho . \end{equation} Since $v$ is integrable, $\int_{\chi_a}dx\ v(x)\to0$ as $a\to\infty$. Therefore, taking $a=\sqrt{\rho}$, \begin{equation} \int_{\chi_a}v(x)\ dx + \frac a\rho \mathop{\longrightarrow}_{\rho\to\infty}0 . \end{equation} This proves\-~(\ref{high_density}). \bigskip \point Let us now turn to\-~(\ref{low_density}). The scheme of the proof is as follows. We first approximate the solution $u$ by $w$, which is defined as the decaying solution of \begin{equation} -\frac1m\Delta w(x)=(1-u(x))v(x) . \label{u1} \end{equation} The energy of $w$ is defined to be \begin{equation} e_w:=\frac\rho2\int(1-w(x))v(x)\ dx \label{ew} \end{equation} and, as we will show, it is {\it close} to $e$, more precisely, \begin{equation} e-e_w=\frac{16\sqrt 2(me)^{\frac32}}{15\pi^2}\int dx\ v(x)+o(\rho^{\frac32}) . \label{eew} \end{equation} In addition, (\ref{u1}) is quite similar to the scattering equation\-~(\ref{scateq}). In fact we will show that $e_w$ is {\it close} to the energy $2\pi\rho a/m$ of the scattering equation \begin{equation} e_w-\frac{2\pi}m\rho a = -\frac{16\sqrt 2(me)^{\frac32}}{15\pi^2}\int dx\ \varphi(x) v(x)+o(\rho^{\frac32}) . \label{ewephi} \end{equation} Summing\-~(\ref{eew}) and\-~(\ref{ewephi}), we find \begin{equation} e=\frac{2\pi}m\rho a\left(1+\frac{32\sqrt 2(me)^{\frac32}}{15\pi^2\rho}+o(\sqrt\rho)\right) , \end{equation} from which\-~(\ref{low_density}) follows. We are thus left with proving\-~(\ref{eew}) and\-~(\ref{ewephi}). \bigskip \subpoint{\bf Proof of\-~(\ref{eew})}. By\-~(\ref{energy}) and\-~(\ref{ew}), \begin{equation} e-e_w=\frac\rho2\int dx\ (w(x)-u(x))v(x) . \end{equation} We will work in Fourier space \begin{equation} \hat u(k):=\int dx\ e^{ikx}u(x) \label{fourieru} \end{equation} which satisfies, by\-~(\ref{simpleq}), \begin{equation} \left(\frac1mk^2+4e\right)\hat u(k) =\hat S(k) +2e\rho \hat u^2(k) \end{equation} with \begin{equation} \hat S(k):=\int dx\ e^{ikx}(1-u(x))v(x) . \label{S} \end{equation} Therefore, \begin{equation} \hat u(k)=\frac1\rho\left(\frac{k^2}{4me}+1-\sqrt{\left(\frac{k^2}{4me}+1\right)^2-\frac\rho{2e}\hat S(k)}\right) . \label{hatu} \end{equation} Similarly, the Fourier transform of $w$ is \begin{equation} \hat w(k):=\int dx\ e^{ikx}w(x)=\frac{m\hat S(k)}{k^2} . \label{u1hat} \end{equation} Note that, as $|k|\to\infty$, $\hat u\sim \frac{m\hat S(k)}{k^2}$, so, while $\hat u$ is not integrable, $\hat u-\hat w$ is. We invert the Fourier transform: \begin{equation} u(x)-w(x)=\frac1{8\pi^3\rho}\int dk\ e^{-ikx} \left(\frac{k^2}{4me}+1-\sqrt{\left(\frac{k^2}{4me}+1\right)^2-\frac\rho{2e}\hat S(k)}-\frac{m\rho\hat S(k)}{k^2}\right) . \end{equation} We change variables to $\tilde k:=\frac k{2\sqrt{me}}$: \begin{equation} u(x)-w(x) = \frac{(me)^{\frac32}}{\rho\pi^3}\int d\tilde k\ e^{-i2\sqrt{me}\tilde kx} \left(\tilde k^2+1-\sqrt{(\tilde k^2+1)^2-\frac\rho{2e}\hat S(2\tilde k\sqrt{me})}-\frac{\frac\rho{2e}\hat S(2\tilde k\sqrt{me})}{2\tilde k^2}\right) . \end{equation} Furthermore, \begin{equation} s\mapsto\left|\tilde k^2+1-\sqrt{(\tilde k^2+1)^2-s}-\frac{s}{2\tilde k^2}\right| \end{equation} is monotone increasing. 
In addition, by\-~(\ref{S}) and\-~(\ref{simpleq}), and using the fact that $u(x)\leqslant 1$ (see Lemma\-~\ref{lemma:bound}) and $v(x)\geqslant 0$, \begin{equation} |\hat S(k)|\leqslant\int dx\ |(1-u(x))v(x)|=\frac{2e}\rho . \end{equation} Therefore \begin{equation} \left| \tilde k^2+1-\sqrt{(\tilde k^2+1)^2-\frac\rho{2e}\hat S(2\tilde k\sqrt{me})}-\frac{\frac\rho{2e}\hat S(2\tilde k\sqrt{me})}{2\tilde k^2} \right| \leqslant \left| \tilde k^2+1-\sqrt{(\tilde k^2+1)^2-1}-\frac{1}{2\tilde k^2} \right| . \end{equation} Therefore, \begin{equation} |u(x)-w(x)|\leqslant \frac{(me)^{\frac32}}{\rho\pi^3}\int d\tilde k\ \left|\tilde k^2+1-\sqrt{(\tilde k^2+1)^2-1}-\frac{1}{2\tilde k^2}\right| =\frac{32\sqrt 2(me)^{\frac32}}{15\pi^2\rho} . \end{equation} By dominated convergence, and using the fact that $\hat S(0)=\frac{2e}\rho$, \begin{equation} \begin{largearray} \lim_{e\to 0}\frac1{(me)^{\frac32}}(e-e_w) =-\lim_{e\to 0}\frac\rho{2(me)^{\frac32}}\int dx\ (u(x)-w(x))v(x) \\[0.5cm]\hfill = -\frac12\int dx\ v(x)\left( \frac1{\pi^3}\int\left(\tilde k^2+1-\sqrt{(\tilde k^2+1)^2-1}-\frac1{2\tilde k^2}\right)d \tilde k \right) =\frac{16\sqrt 2}{15\pi^2}\int dx\ v(x) . \end{largearray} \end{equation} Finally, since $u(x)\geqslant 0$, \begin{equation} e\leqslant \frac\rho 2\int dx\ v(x) \end{equation} so $e=O(\rho)$, and $e^{\frac32}=O(\rho^{\frac32})$. This proves\-~(\ref{eew}). Incidentally, again by dominated convergence, \begin{equation} u(x)-w(x)= \frac{(me)^{\frac32}}{\rho\pi^3}\int\left(\tilde k^2+1-\sqrt{(\tilde k^2+1)^2-1}-\frac{1}{2\tilde k^2}\right)\ d\tilde k +\sqrt\rho f_\rho(x) =-\frac{32\sqrt 2(me)^{\frac32}}{15\pi^2\rho}+\sqrt\rho f_\rho(x) \label{uapproxw} \end{equation} with \begin{equation} 0\leqslant f_\rho(x)\leqslant\frac{32\sqrt2(me)^{\frac32}}{15\pi^2\rho} ,\quad f_\rho(x)\mathop{\longrightarrow}_{\rho\to0}0 \end{equation} pointwise in $x$. \bigskip \subpoint{\bf Proof of\-~(\ref{ewephi})}. Let \begin{equation} \xi(x):=w(x)-\varphi(x). \label{xi} \end{equation} By\-~(\ref{u1}), (\ref{scateq}) and\-~(\ref{simpleq}), \begin{equation} \left(-\frac1m\Delta+v(x)\right)\xi(x)=-(u(x)-w(x))v(x) . \end{equation} Therefore, by Lemma\-~\ref{lemma:scattering}, \begin{equation} e_w-\frac{2\pi}m\rho a=-\frac\rho2\int dx\ \xi(x)v(x) =\frac\rho2\int dx\ v(x)\left(-\frac1m\Delta+v\right)^{-1}((u-w)v)(x) \end{equation} and \begin{equation} \left(-\frac1m\Delta+v\right)^{-1}v(x)=\varphi(x) \end{equation} so \begin{equation} e_w-\frac{2\pi}m\rho a =\frac\rho2\int dx\ \varphi(x)(u(x)-w(x))v(x) . \end{equation} By\-~(\ref{uapproxw}), \begin{equation} e_w-\frac{2\pi}m\rho a =-\frac{16\sqrt 2(me)^{\frac32}}{15\pi^2}\int dx\ \varphi(x)v(x) +\frac{\rho^{\frac32}}2\int dx\ \varphi(x)f_\rho(x)v(x) . \end{equation} Since $x\mapsto f_\rho(x)$ is bounded, we can use dominated convergence to show\-~(\ref{ewephi}). \qed \subsection{Condensate fraction} \indent The computation of the condensate fraction is not as straightforward as that of the energy. In fact, the Simplified Approach really only gives a clear prediction for the energy, through\-~(\ref{E0}) which translates into\-~(\ref{energy}). How do we compute the condensate fraction without knowing the expression of the wavefunction? The idea is to rewrite the condensate fraction in terms of the ground-state energy of an effective Hamiltonian. This is done in detail in Exercise\-~\ref{ex:feynman_hellman}. Proceeding in this way, we get a prediction for the condensate fraction. As was shown in Figure\-~\ref{fig:condensate}, the predictions of the Big and Medium equations are rather accurate for all densities, as was the case for the energy.
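\bigskip \indent For reference, the formula at the heart of this computation (it is derived in Exercise\-~\ref{ex:feynman_hellman}) is the following: if $P_i$ denotes the orthogonal projection of the $i$-th particle onto the constant, zero-momentum state, so that the condensate fraction is $\eta=\frac1N\sum_{i=1}^N\left<\psi_0\right|P_i\left|\psi_0\right>$, then \begin{equation} \eta =\frac1N\partial_\epsilon\tilde E_0(\epsilon)\Big|_{\epsilon=0} \end{equation} where $\tilde E_0(\epsilon)$ is the ground-state energy of the effective Hamiltonian $H_N+\epsilon\sum_{i=1}^NP_i$. The prediction for $\eta$ is then obtained by computing this effective ground-state energy within the Simplified Approach.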
For the Simple Equation, we can prove an asymptotic expansion for low densities. \bigskip \theoname{Theorem}{\cite[Theorem 1.6]{CJL21}} For the Simple Equation, if $(1+|x|^4)v(x)\in L_1(\mathbb R^3)\cap L_2(\mathbb R^3)$ and $v\geqslant 0$, as $\rho\to0$, \begin{equation} 1-\eta\sim\frac{8\sqrt{\rho a^3}}{3\sqrt\pi} \end{equation} where $a$ is the scattering length of the potential. \endtheo \bigskip Thus the Simple Equation reproduces the same prediction\-~(\ref{lhy_h}) as Bogolyubov theory. \bigskip \indent The proof of this theorem is a bit more involved than the computation of the energy, though it is similar in spirit. We refer interested readers to\-~\cite{CJL21} for details. \section{Open problems}\label{sec:open} \indent The Simplified Approach is thus a powerful tool to study the ground state of systems of interacting Bosons. It reproduces the predictions of Bogolyubov theory, but goes far beyond: it is asymptotically correct at high densities, and very accurate at intermediate densities as well. This tool may open the door to new attempts at solving some of the important outstanding problems concerning the Bose gas. \bigskip \indent There are two directions that could be investigated to make a rigorous connection between the Simplified Approach and the Bose gas. One is to control the error made in the factorization assumption (see Assumption\-~\ref{assum:factorization}). Another is to construct an approximation of the ground state wavefunction directly, for instance, in Bijl-Dingle-Jastrow form: \begin{equation} \psi(x_1,\cdots,x_N)=\prod_{i<j}f(x_i-x_j) \end{equation} for some pair function $f$. \section{Elements of operator theory}\label{app:operators} \indent In this appendix, we state some useful results from the theory of operators on Hilbert spaces. \bigskip \theoname{Theorem}{Trotter product formula}\label{theo:trotter} If $A$ and $B$ are self-adjoint operators that are bounded below, and $A+B$ is essentially self-adjoint on the intersection of their domains, then \begin{equation} e^{-t(A+B)} =\lim_{N\to\infty}\left(e^{-\frac tNA}e^{-\frac tNB}\right)^N . \end{equation} \endtheo \bigskip \theoname{Definition}{Trace-class operators} Given an orthonormal basis $(e_n)_{n\in\mathbb N}$ of a separable Hilbert space, the trace of the absolute value $|A|:=\sqrt{A^\dagger A}$ of a bounded operator $A$ is defined as \begin{equation} \mathrm{Tr}|A|:=\sum_{n}\left<e_n\right||A|\left|e_n\right> . \end{equation} If this expression is finite, then $A$ is said to be trace-class. \endtheo \bigskip \theoname{Definition}{Compact operators} The set of compact operators on a Hilbert space is the closure, in the operator norm, of the set of finite-rank operators. \endtheo \bigskip \theoname{Theorem}{\cite[Theorem 6.6]{Te14}} If $A$ is a compact self-adjoint operator on a Hilbert space, then its spectrum consists of an at most countable set of eigenvalues. \endtheo \bigskip \theo{Theorem}\label{theo:trace_compact} Trace-class operators are compact. \endtheo \bigskip \theoname{Theorem}{\cite[Lemma 5.5]{Te14}}\label{theo:compact_ideal} If $A$ is compact and $B$ is bounded, then $AB$ and $BA$ are compact. \endtheo \bigskip \theo{Theorem}\label{theo:compact_schrodinger} Let $\Delta$ be the Laplacian on $L_2(\mathbb R^d/(L\mathbb Z)^d)$ (the same result holds for Dirichlet and Neumann boundary conditions as well). For any function $v$ on $[0,L]^d$ that is bounded below, $e^{-t(-\Delta+v)}$ is compact for any $t> 0$. \endtheo \bigskip \indent\underline{Proof}: First of all, $e^{t\Delta}$ is trace class for $t>0$: using the basis $e^{ikx}$ with $k\in(\frac{2\pi}L\mathbb Z)^d$, \begin{equation} \mathrm{Tr}\, e^{t\Delta} =\sum_{k\in(\frac{2\pi}L\mathbb Z)^d}e^{-tk^2}<\infty . \end{equation} By Theorem\-~\ref{theo:trace_compact}, $e^{t\Delta}$ is compact. By the Trotter product formula, Theorem\-~\ref{theo:trotter}, \begin{equation} e^{-t(-\Delta+v)} =\lim_{N\to\infty}(e^{\frac tN\Delta}e^{-\frac tNv})^N \end{equation} and by Theorem\-~\ref{theo:compact_ideal}, $(e^{\frac tN\Delta}e^{-\frac tNv})^N$ is compact. Since the set of compact operators is closed, $e^{-t(-\Delta+v)}$ is compact as well.
\qed \subsection{Positivity preserving operators}\label{app:positivity_preserving} \theoname{Definition}{Positivity preserving operators}\label{def:positivity_preserving} An operator $A$ from a Banach space of real-valued functions $\mathcal B_1$ to another $\mathcal B_2$ is said to be {\it positivity preserving} if, for any $f\in\mathcal B_1$ such that $f(x)\geqslant0$, $Af(x)\geqslant 0$. \endtheo \bigskip \theo{Lemma}\label{lemma:add_inv} If $\mathrm{spec}(A+B)>\epsilon>0$, and $e^{-tA}$ and $e^{-t B}$ are positivity preserving for all $t>0$, then for all $t>0$, $e^{-t(A+B)}$ and $(A+B)^{-1}$ are positivity preserving. \endtheo \bigskip \indent\underline{Proof}: We write \begin{equation} (A+B)^{-1} = \int_0^\infty dt\ e^{-t(A+B)} \end{equation} indeed, by the spectral theorem, \begin{equation} \int_0^\infty dt\ e^{-t(A+B)} = \int_0^\infty dt\ \int_{\mathrm{spec}(A+B)}d\sigma\ e^{-t\sigma}P_\sigma = \int_{\mathrm{spec}(A+B)}d\sigma\ \frac1\sigma P_\sigma =(A+B)^{-1} . \end{equation} By the Trotter product formula (Theorem\-~\ref{theo:trotter}), \begin{equation} e^{-t(A+B)} = \lim_{N\to\infty}(e^{-\frac tNA}e^{-\frac tNB})^N \end{equation} which is positivity preserving, since each factor $e^{-\frac tNA}e^{-\frac tNB}$ is, and products and limits of positivity preserving operators are positivity preserving. By the integral representation above, $(A+B)^{-1}$ is then positivity preserving as well. \qed \bigskip \theoname{Theorem}{Positivity of the heat kernel}\label{theo:heat} Given $t>0$, the operator $e^{t\Delta}$ from $L_2(\mathbb R^d)$ to $L_2(\mathbb R^d)$ is positivity preserving. \endtheo \bigskip \indent\underline{Proof}: If \begin{equation} \psi_t(x)=e^{t\Delta}\psi_0(x) \end{equation} then \begin{equation} \partial_t\psi_t(x)=\Delta\psi_t(x) . \end{equation} The Fourier transform of $\psi_t$ is defined as \begin{equation} \hat\psi_t(k):=\int dx\ e^{ikx}\psi_t(x) \end{equation} which satisfies \begin{equation} \partial_t\hat\psi_t(k)=-k^2\hat\psi_t(k) \end{equation} so \begin{equation} \hat\psi_t(k)=e^{-tk^2}\hat\psi_0(k) \end{equation} thus, taking an inverse Fourier transform, \begin{equation} \psi_t(x)=\left(\int\frac{dk}{(2\pi)^d}\ e^{-ikx}e^{-tk^2}\right)\ast\psi_0(x) \end{equation} and \begin{equation} \int\frac{dk}{(2\pi)^d}\ e^{-ikx}e^{-tk^2} =\prod_{i=1}^d\int_{-\infty}^\infty\frac{dk_i}{2\pi}\ e^{-ik_ix_i}e^{-tk_i^2} =\prod_{i=1}^d\frac1{2\sqrt{t\pi}}e^{-\frac1{4t}x_i^2} =\frac1{(4\pi t)^{\frac d2}}e^{-\frac1{4t}|x|^2} \geqslant 0 \end{equation} so $\psi_t$ is the convolution of $\psi_0$ with a non-negative kernel, and is therefore non-negative whenever $\psi_0$ is. \qed \bigskip \theo{Theorem}\label{theo:schrodinger} Given $t>0$ and a function $v$, the operator $e^{-t(-\Delta+v)}$ is positivity preserving. In addition, if $\mathrm{spec}(-\Delta+v)>\epsilon>0$, then $(-\Delta+v)^{-1}$ is positivity preserving. \endtheo \bigskip \indent\underline{Proof}: By the Trotter product formula (Theorem\-~\ref{theo:trotter}), \begin{equation} e^{-t(-\Delta+v)} = \lim_{N\to\infty}(e^{\frac tN\Delta}e^{-\frac tNv})^N \end{equation} which is positivity preserving, because $e^{-\frac tNv}$ is a multiplication operator by a non-negative function. The second point follows from Lemma\-~\ref{lemma:add_inv}, in which we take $A=-\Delta$ and $B=v$ ($e^{-tv}$ is a multiplication operator by a non-negative function, which is positivity preserving, and so is $e^{t\Delta}$, by Theorem\-~\ref{theo:heat}). \qed \bigskip {\bf Remark}: In particular, taking the potential to be $\eta+v(x)$, this implies that $e^{-t(-\Delta+v(x)+\eta)}$ and $(-\Delta+v(x)+\eta)^{-1}$ are positivity preserving for any $v(x)\geqslant 0$ and $\eta>0$.
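\bigskip {\bf Remark}: This is the form in which these results are used in the analysis of the Simple Equation: comparing the form $u=K_e(v+2e\rho u\ast u)$ of the Simple Equation with the Fourier-space relation used to derive\-~(\ref{hatu}), the operator $K_e$ can be identified with \begin{equation} K_e=\left(-\frac1m\Delta+v+4e\right)^{-1} \end{equation} which, by Theorem\-~\ref{theo:schrodinger} applied to the potential $v(x)+4e$, is positivity preserving for any $e>0$. This is the property underlying the monotonicity of the iterates $u_n$ in the existence and uniqueness proofs above.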
\bigskip \theo{Lemma}\label{lemma:conv} If $A$ is such that $e^{-tA}$ is positivity preserving for all $t>0$, and $f\in L_1(\mathbb R^d)$ satisfies $f\geqslant 0$ and $\mathrm{spec}(A-f\ast)>\epsilon>0$, then $(A-f\ast)^{-1}$ is positivity preserving. \endtheo \bigskip \indent\underline{Proof}: This follows from Lemma\-~\ref{lemma:add_inv} with $B=-f\ast$, noting that $e^{tf\ast}$ is positivity preserving, since its power series only involves convolutions with the non-negative functions $f^{\ast n}$. \qed \bigskip {\bf Remark}: In particular, if $v(x),\eta> 0$ and $\|f\|_1<\eta$, then $(-\Delta+v+\eta-f\ast)^{-1}$ is positivity preserving. The assumption $\|f\|_1<\eta$ ensures that $-\Delta+v+\eta-f\ast$ has a spectrum that is bounded away from 0, since the operator norm of $f\ast$ is bounded by $\|f\|_1$. \subsection{The Perron-Frobenius theorem} \theoname{Theorem}{(Generalized) Perron-Frobenius theorem}\label{theo:perron_frobenius} Let $A$ be a compact operator from a Banach space of real-valued functions $\mathcal B_1$ to another $\mathcal B_2$. If $A$ is positivity preserving (see Definition\-~\ref{def:positivity_preserving}) and has a finite and positive spectral radius $r(A)$ ($r(A):=\sup_{\lambda\in\mathrm{spec}(A)}|\lambda|$), then $r(A)$ is a non-degenerate eigenvalue whose corresponding eigenvector $f$ is non-negative: \nopagebreakaftereq \begin{equation} Af=r(A)f ,\quad f(x)\geqslant 0 . \end{equation} \endtheo \restorepagebreakaftereq \section{Elements of harmonic analysis}\label{app:harmonic} \indent In this appendix, we state some useful results from harmonic analysis. \bigskip \theoname{Definition}{Harmonic, subharmonic and superharmonic functions} A function $f\in \mathcal C^2(\mathbb R^d)$ is said to be {\it harmonic} on $A\subset\mathbb R^d$ if $\forall x\in A$, \begin{equation} \Delta f(x)=0 \end{equation} it is {\it subharmonic} if \begin{equation} \Delta f(x)\geqslant 0 \end{equation} and {\it superharmonic} if \nopagebreakaftereq \begin{equation} \Delta f(x)\leqslant 0 . \end{equation} \endtheo \restorepagebreakaftereq \bigskip \theoname{Theorem}{\cite[Theorem 9.4]{LL01}}\label{theo:harmonic} A function that is subharmonic on $A$ achieves its maximum on the boundary of $A$. A function that is superharmonic on $A$ achieves its minimum on the boundary of $A$. \endtheo \section{Proof of Theorem \expandonce{\ref{theo:compleq}}}\label{app:proof_factorization} \subsection{Factorization} \indent We will first compute $u_3,u_4$. We need to compute these up to order $V^{-2}$, because one of the terms in the equation is of order $V$ (see below). \bigskip \subsubsection{Factorization of $g_3$} \theo{Lemma}\label{lemma:g3} Assumption\-~\ref{assum:factorization} with $i=3$ and\-~(\ref{cd_g3g4}) imply that \begin{equation} g_3(x,y,z)=(1-u_3(x-y))(1-u_3(x-z))(1-u_3(y-z))(1+O(V^{-2})) \end{equation} with \begin{equation} u_3(x-y):=u(x-y)+\frac{w_3(x-y)}V \label{u3} \end{equation} \nopagebreakaftereq \begin{equation} w_3(x-y):=(1-u(x-y))\int dz\ u(x-z)u(y-z) . \label{w3} \end{equation} \endtheo \restorepagebreakaftereq \bigskip \indent\underline{Proof}: Using\-~(\ref{cd_g3g4}) in\-~(\ref{g_factorized}), \begin{equation} g_2(x_1-x_2)=(1-u_3(x_1-x_2)) \int \frac{dx_3}V\ (1-u_3(x_1-x_3))(1-u_3(x_2-x_3)) . \label{g2_factor_inproof} \end{equation} \bigskip \point We first expand to order $V^{-1}$. By\-~(\ref{assum_bound}), \begin{equation} \int\frac{dz}V\ u_3(x-z)=O(V^{-1}) \label{f3V1} \end{equation} so, \begin{equation} g_2(x-y)\equiv 1-u(x-y)=(1-u_3(x-y)) \left(1 +O(V^{-1})\right) . \end{equation} Therefore, \begin{equation} u_3(x-y)=u(x-y)(1+O(V^{-1})) .
\label{3V} \end{equation} \bigskip \point We push the expansion to order $V^{-2}$: (\ref{g2_factor_inproof}) is \begin{equation} g_2(x-y)=(1-u_3(x-y))\int\frac{dz}{V}\ \left( 1 -u_3(x-z)-u_3(y-z) +u_3(x-z)u_3(y-z) \right) . \end{equation} By\-~(\ref{3V}), \begin{equation} \begin{largearray} (1-u_3(x-y)) =(1-u(x-y)) \cdot\\[0.3cm]\hfill\cdot \left(1+\int\frac{dz}{V}\ (u(x-z)+u(y-z)-u(x-z)u(y-z))+O(V^{-2})\right) . \end{largearray} \end{equation} Furthermore, by\-~(\ref{cd_g3g4}), \begin{equation} \int\frac{dx}V\ (1-u(x))=1 \end{equation} so \begin{equation} \int dx\ u(x)=0 . \label{intu0} \end{equation} Thus, \begin{equation} 1-u_3(x-y)=(1-u(x-y)) \left(1-\int\frac{dz}{V}\ u(x-z)u(y-z)+O(V^{-2})\right) . \end{equation} \qed \subsubsection{Factorization of $g_4$} \theo{Lemma}\label{lemma:g4} Assumption\-~\ref{assum:factorization} and\-~(\ref{cd_g3g4}) imply that \begin{equation} g_4(x_1,x_2,x_3,x_4)= \left(\prod_{1\leqslant i<j\leqslant 4}(1-u_4(x_i-x_j))\right)(1+O(V^{-2})) \end{equation} with \begin{equation} u_4(x-y):=u(x-y)+\frac{2w_3(x-y)}V \end{equation} and $w_3$ as in\-~(\ref{w3}). \endtheo \bigskip \point Inserting Lemmas\-~\ref{lemma:g3} and\-~\ref{lemma:g4} into the equation for $g_2$, the term involving $g_4$ is \begin{equation} \begin{largearray} \frac{N^2}{2V^2}(1-u_4(x-y))\int dzdt\ v(z-t)(1-u_4(z-t)) \Pi(x,y,z,t) =\\[0.5cm]\indent= \frac{N^2}{2V^2}(1-u(x-y))\int dzdt\ v(z-t)(1-u(z-t)) -\\[0.5cm]\indent- \frac{N^2}{V^3}w_3(x-y)\int dzdt\ v(z-t)(1-u(z-t)) -\\[0.5cm]\indent- \frac{N^2}{V^3}(1-u(x-y))\int dzdt\ v(z-t)w_3(z-t) +\\[0.5cm]\indent+ \frac{N^2}{2V^2}(1-u(x-y))\int dzdt\ v(z-t)(1-u(z-t)) \left(\Pi(x,y,z,t)-1\right) +O(V^{-1}) \end{largearray} \end{equation} in which the only term of order $V$ is the first one, and is equal to the term of order $V$ in $E_0g_2$: \begin{equation} (1-u(x-y))\frac{N^2}{2V}\int dz\ v(z)(1-u(z)) \end{equation} and thus cancels out. Therefore, recalling that $\rho=\frac NV$, \begin{equation} \left( -\frac1m\Delta +v(x-y) +\bar G_3(x-y) +\bar G_4(x-y) -\bar E_0 \right) (1-u(x-y)) = O(V^{-1}) \label{g2bar} \end{equation} with \begin{equation} \bar G_3(x-y):= \rho\int dz\ (v(x-z)+v(y-z))(1-u(x-z))(1-u(y-z)) \end{equation} and by\-~(\ref{w3}), %break to avoid big white spaces
\vfill \eject \begin{equation} \begin{array}{r@{\ }>\displaystyle l} \bar G_4(x-y) :=& -\frac\rho2\left(5+2\rho \int dr\ u(x-r)u(y-r)\right)\int \frac{dzdt}V\ v(z-t)(1-u(z-t)) -\\[0.3cm]&- \rho^2\int \frac{dzdt}V\ v(z-t)(1-u(z-t))\int dr\ u(z-r)u(t-r) +\\&+ \frac{\rho^2}2\int dzdt\ v(z-t)(1-u(z-t)) \left(\Pi(x,y,z,t)-1\right) \end{array} \end{equation} \begin{equation} \bar E_0= -\frac\rho2\int \frac{dxdy}V\ v(x-y)(1-u(x-y)) . \end{equation} \bigskip \point Thus, in the thermodynamic limit, using translation invariance, \begin{equation} \bar E_0= -\frac\rho2\int dz\ v(z)(1-u(z)) . \end{equation} Furthermore, \begin{equation} \bar G_3(x)= -4\bar E_0 -2\rho u\ast S(x) . \end{equation} In addition, \begin{equation} -\frac{5\rho}2\int \frac{dzdt}V\ v(z-t)(1-u(z-t)) = 5\bar E_0 \end{equation} \begin{equation} -\rho^2\int dr\ u(x-r)u(y-r)\int\frac{dzdt}V\ v(z-t)(1-u(z-t)) = 2\rho\bar E_0 u\ast u(x-y) \end{equation} \begin{equation} -\rho^2\int\frac{dzdt}V\ v(z-t)(1-u(z-t))\int dr\ u(z-r)u(t-r) = -\rho^2\int dz\ S(z)u\ast u(z) \end{equation} and, by\-~(\ref{intu0}) and\-~(\ref{Pi}), \begin{equation} \begin{largearray} \frac{\rho^2}2\int dzdt\ v(z-t)(1-u(z-t))(\Pi(x,y,z,t)-1) = \\[0.5cm]\indent =\rho^2\left( \int dz\ S(z)u\ast u(z) -\frac2\rho u\ast u(x-y)\bar E_0 +u\ast u\ast S(x-y) \right.\\[0.5cm]\hfill\left. -2u\ast(u(u\ast S))(x-y) +\frac12\int dzdt\ u(y-z)u(y-t)u(x-z)u(x-t)S(z-t) \right) . \end{largearray} \end{equation} Inserting these into\-~(\ref{g2bar}), we find\-~(\ref{compleq}).
\qed \vfill \eject \begin{thebibliography}{WWW99} \small \IfFileExists{bibliography/bibliography.tex}{\input bibliography/bibliography.tex}{} \end{thebibliography} \vfill \eject \exercises \problem\label{ex:BE_distr} (solution on p.\-~\ref{sol:BE_distr})\par \smallskip Prove\-~(\ref{BE}) starting from\-~(\ref{BE_pre}). \bigskip \problem\label{ex:bec} (solution on p.\-~\ref{sol:bec})\par \smallskip Check\-~(\ref{rhomu}) explicitly using the Riemann theorem. Prove that $\mu\mapsto\rho(\mu)$ is strictly increasing using the dominated convergence theorem. Prove\-~(\ref{lim_rhomu_infty}) and\-~(\ref{lim_rhomu_0}). \bigskip \problem\label{ex:FD_distr} (solution on p.\-~\ref{sol:FD_distr})\par \smallskip Compute the analogue of\-~(\ref{BE}) for an ideal gas of {\it Fermions}: \begin{equation} \left<\mathcal B_q\right>= \frac1{e^{\beta(\frac{q^2}{2m}-\mu)}+1} \label{FD} \end{equation} {\it Hint}: Proceed in the same way as for the ideal gas of Bosons, by parametrizing the space using the occupation number of plane waves $e^{ikx}$, except that now, because the wavefunctions are antisymmetric, there can only be 0 or 1 particle in each state, so the sums over $N_k$ are over $\{0,1\}$. Can there be Bose-Einstein condensation for the ideal gas of Fermions? \bigskip \problem\label{ex:scattering_softcore} (solution on p.\-~\ref{sol:scattering_softcore})\par \smallskip Prove that the solution of\-~(\ref{scat_softcore_spherical}) is\-~(\ref{sol_softcore}).\par {\it Hint}: use the fact that $\psi\to1$ as $|x|\to\infty$, as well as the fact that $\psi$ is continuous at $0$ and at $R$. \bigskip \problem\label{ex:perron_frobenius} (solution on p.\-~\ref{sol:perron_frobenius})\par \smallskip Prove that the ground state $\psi_0$ of\-~(\ref{Ham}) is unique and $\psi_0\geqslant 0$.\par {\it Hint}: Instead of the Hamiltonian $H_N$, consider the operator $e^{-tH_N}$ for $t\geqslant0$. Check that Theorem\-~\ref{theo:schrodinger} applies, so that $e^{-tH_N}$ is positivity preserving (see Definition\-~\ref{def:positivity_preserving}). Use the Perron-Frobenius theorem (Theorem\-~\ref{theo:perron_frobenius}) to conclude. \bigskip \problem\label{ex:nonneg} (solution on p.\-~\ref{sol:nonneg})\par \smallskip In this exercise, we will derive an alternate proof that $\psi_0\geqslant 0$ under the extra assumption that $v$ is continuous. To do so, consider the energy of $\psi_0$: \begin{equation} \mathcal E(\psi_0):=\left<\psi_0\right|H_N\left|\psi_0\right> =\int_{(\mathbb R/(L\mathbb Z))^{3N}} dx\ \left( -\frac1{2m}\psi_0^*(x)\Delta\psi_0(x) +V(x)|\psi_0(x)|^2 \right) \end{equation} where $\Delta$ is the Laplacian on $\mathbb R^{3N}$ and $V(x)\equiv\sum_{i<j}v(x_i-x_j)$. Prove that $\mathcal E(|\psi_0|)=\mathcal E(\psi_0)$, and conclude that $\psi_0\geqslant 0$. \bigskip \problem\label{ex:feynman_hellman} (solution on p.\-~\ref{sol:feynman_hellman})\par \smallskip Let $P_i$ denote the orthogonal projector that projects the $i$-th particle onto the zero-momentum state (the constant wavefunction), and let $\tilde E_0(\epsilon)$ be the ground-state energy of \begin{equation} \tilde H_N(\epsilon):=H_N+\epsilon\sum_{i=1}^NP_i . \end{equation} Prove that \begin{equation} \frac1N\partial_\epsilon\tilde E_0|_{\epsilon=0} =\frac1N\sum_{i=1}^N\left<\psi_0\right|P_i\left|\psi_0\right> \end{equation} which is the average fraction of particles in the zero-momentum state, that is, the condensate fraction. \vfill \eject \solution{BE_distr} By\-~(\ref{BE_pre}), factorizing the grand-canonical sum over the occupation numbers $N_k$ and carrying out the sum over each $N_k$, \begin{equation} \left<\mathcal B_q\right>= \frac1\Xi \frac{e^{-\beta(\frac{q^2}{2m}-\mu)}}{(1-e^{-\beta(\frac{q^2}{2m}-\mu)})^2} \prod_{k\in\frac{2\pi}L\mathbb Z^3\setminus\{q\}} \frac1{1-e^{-\beta(\frac{k^2}{2m}-\mu)}} ,\quad \Xi= \prod_{k\in\frac{2\pi}L\mathbb Z^3} \frac1{1-e^{-\beta(\frac{k^2}{2m}-\mu)}} \end{equation} so \begin{equation} \left<\mathcal B_q\right>= \frac{e^{-\beta(\frac{q^2}{2m}-\mu)}}{1-e^{-\beta(\frac{q^2}{2m}-\mu)}} \end{equation} from which\-~(\ref{BE}) follows. \bigskip \solution{bec} \point (\ref{rhomu_pre}) is of the form \begin{equation} \rho_L(\mu) = \frac1{(2\pi)^3} \frac1{D^3}\sum_{k\in\mathbb Z^3}f\left(\frac kD\right) \end{equation} with \begin{equation} D\equiv\frac L{2\pi} ,\quad f(k):=\frac1{e^{\beta(\frac{k^2}{2m}-\mu)}-1} \end{equation} so\-~(\ref{rhomu}) follows from the Riemann theorem.
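\medskip \indent Explicitly, since $f$ is continuous, decreasing in $|k|$, and integrable for $\mu<0$, and since the mesh of the sum is $\frac1D$ with $D=\frac L{2\pi}\to\infty$ in the thermodynamic limit, \begin{equation} \frac1{D^3}\sum_{k\in\mathbb Z^3}f\left(\frac kD\right) \mathop{\longrightarrow}_{D\to\infty} \int dk\ f(k) \end{equation} which is the limit used above.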
\bigskip \point For $\mu<0$, \begin{equation} \left|\frac1{e^{\beta(\frac{q^2}{2m}-\mu)}-1}\right| \end{equation} is integrable and, for $\mu_0<\mu<\mu_1<0$, \begin{equation} \left|\partial_\mu\frac1{e^{\beta(\frac{q^2}{2m}-\mu)}-1}\right| = \beta\frac{e^{\beta(\frac{q^2}{2m}-\mu)}}{(e^{\beta(\frac{q^2}{2m}-\mu)}-1)^2} \leqslant \beta\frac{e^{\beta(\frac{q^2}{2m}-\mu_0)}}{(e^{\beta(\frac{q^2}{2m}-\mu_1)}-1)^2} \end{equation} which is bounded for small $|q|$ (since $\mu_1<0$) and decays exponentially for large $|q|$, so it is integrable. By dominated convergence, $\mu\mapsto\rho(\mu)$ is therefore differentiable with a positive derivative, so it is strictly increasing. \bigskip \point Since \begin{equation} \left|\frac1{e^{\beta(\frac{q^2}{2m}-\mu)}-1}\right| \leqslant \frac1{e^{\beta(\frac{q^2}{2m})}-1} \sim_{q\to0}\frac{2m}{\beta q^2} \end{equation} is integrable in three dimensions, (\ref{lim_rhomu_infty}) and\-~(\ref{lim_rhomu_0}) follow from dominated convergence; in particular, \begin{equation} \lim_{\mu\to-\infty}\rho(\mu)=0 . \end{equation} \bigskip \solution{FD_distr} In the case of Fermions, each state can be occupied by at most one particle, so \begin{equation} \Xi= \prod_{k\in\frac{2\pi}L\mathbb Z^3} \sum_{N_k=0}^1 e^{-\beta(\frac{k^2}{2m}-\mu)N_k} \equiv \prod_{k\in\frac{2\pi}L\mathbb Z^3} \left( 1+ e^{-\beta(\frac{k^2}{2m}-\mu)} \right) \end{equation} and \begin{equation} \left<\mathcal B_q\right>= \frac1\Xi e^{-\beta(\frac{q^2}{2m}-\mu)} \prod_{k\in\frac{2\pi}L\mathbb Z^3\setminus\{q\}} \left( 1+ e^{-\beta(\frac{k^2}{2m}-\mu)} \right) \end{equation} so \begin{equation} \left<\mathcal B_q\right>= \frac{e^{-\beta(\frac{q^2}{2m}-\mu)}} {1+e^{-\beta(\frac{q^2}{2m}-\mu)}} = \frac{1} {1+e^{\beta(\frac{q^2}{2m}-\mu)}} . \end{equation} There cannot be Bose-Einstein condensation for Fermions, because each state can only be occupied by at most one particle, so it is not possible for a state to have an occupation number that is proportional to $N$. \bigskip \solution{scattering_softcore} For $r\leqslant R$, \begin{equation} \partial_r^2(r\psi)=mUr\psi \end{equation} so \begin{equation} r\psi=A\cosh(\sqrt{mU}r)+B\sinh(\sqrt{mU}r) . \end{equation} For $r>R$, \begin{equation} \partial_r^2(r\psi)=0 \end{equation} so, since $\psi\to1$ as $r\to\infty$, \begin{equation} r\psi=r-a . \end{equation} Since $r\psi=0$ at $r=0$, \begin{equation} r\psi=B\sinh(\sqrt{mU}r) \end{equation} and since $r\psi$ is continuous at $R$, \begin{equation} B=\frac{R-a}{\sinh(\sqrt{mU}R)} . \end{equation} This implies\-~(\ref{sol_softcore}). \bigskip \solution{perron_frobenius} By Theorem\-~\ref{theo:schrodinger}, $e^{-tH_N}$ is positivity preserving. In addition, $e^{-tH_N}$ is compact by Theorem\-~\ref{theo:compact_schrodinger}, the spectrum of $e^{-tH_N}$ is $e^{-t\ \mathrm{spec}(H_N)}$, and the eigenvector of $e^{-tH_N}$ corresponding to its largest eigenvalue is the ground state of $H_N$. Finally, since $\mathrm{spec}(H_N)\geqslant 0$ (because $-\Delta\geqslant 0$ and $v\geqslant 0$), we can apply the Perron-Frobenius theorem, which implies that $\psi_0$ is unique and $\psi_0\geqslant 0$. \bigskip \solution{nonneg} The potential term $\int dx\ V(x)|\psi_0|^2$ is obviously the same for $\psi_0$ and $|\psi_0|$, so we only need to worry about the kinetic term. Let \begin{equation} P:=\{x\in\mathbb R^{3N}:\ \psi_0(x)\neq0\} .
\end{equation} Now, $\psi_0$ is twice continuously differentiable, since it is an eigenfunction and $v$ is continuous (see\-~\cite[Theorem 11.7(vi)]{LL01}), so $|\psi_0|$ is twice continuously differentiable on $P$, and so \begin{equation} -\int dx\ |\psi_0|\Delta|\psi_0| \equiv -\int_P dx\ |\psi_0|\Delta|\psi_0| = \int_P dx\ (\nabla|\psi_0|)^2 -\int_{\partial P} dx\ |\psi_0(x)|(n(x)\cdot\nabla|\psi_0|) \end{equation} where $n(x)$ is the outward normal vector at $x$ ($\partial P$ is differentiable because $\psi_0$ is as well), but since $\psi_0(x)=0$ on $\partial P$, \begin{equation} -\int dx\ |\psi_0|\Delta|\psi_0| = \int_P dx\ (\nabla|\psi_0|)^2 = \int_P dx\ \left(\nabla\psi_0\frac{\psi_0}{|\psi_0|}\right)^2 = \int_P dx\ (\nabla\psi_0)^2 = -\int_P dx\ \psi_0\Delta\psi_0 . \end{equation} Thus, $\mathcal E(|\psi_0|)=\mathcal E(\psi_0)$. Since $\psi_0$ minimizes $\mathcal E$, so does $|\psi_0|$, but since the ground state is unique, \begin{equation} \psi_0=|\psi_0| \end{equation} so $\psi_0\geqslant 0$. \bigskip \solution{feynman_hellman} Let $\tilde\psi_0(\epsilon)$ denote the ground state of $\tilde H_N(\epsilon)$ with $\|\tilde\psi_0(\epsilon)\|_2=1$. We have \begin{equation} \tilde E_0(\epsilon) =\left<\tilde\psi_0(\epsilon)\right|\tilde H_N(\epsilon)\left|\tilde\psi_0(\epsilon)\right> \end{equation} so \begin{equation} \partial_\epsilon\tilde E_0|_{\epsilon=0} = 2\left<\partial_\epsilon\tilde\psi_0|_{\epsilon=0}\right|H_N\left|\psi_0\right> + \left<\psi_0\right|\partial_\epsilon\tilde H_N|_{\epsilon=0}\left|\psi_0\right> \end{equation} that is \begin{equation} \partial_\epsilon\tilde E_0|_{\epsilon=0} = 2E_0\left<\partial_\epsilon\tilde\psi_0|_{\epsilon=0},\psi_0\right> + \left<\psi_0\right|\sum_{i=1}^NP_i\left|\psi_0\right> . \end{equation} In addition, \begin{equation} 2\left<\partial_\epsilon\tilde\psi_0|_{\epsilon=0},\psi_0\right> = \partial_\epsilon\left<\tilde\psi_0(\epsilon),\tilde\psi_0(\epsilon)\right>|_{\epsilon=0} = \partial_\epsilon1|_{\epsilon=0} =0 . \end{equation} Thus, \begin{equation} \frac1N \partial_\epsilon\tilde E_0|_{\epsilon=0} = \frac1N\sum_{i=1}^N\left<\psi_0\right|P_i\left|\psi_0\right> . \end{equation} \end{document}