# The Berry Phase

The $\gamma_n(t)$ that we derived above is known as the Berry phase. Perhaps the first question that comes to mind is whether it is a phase at all. We can show that it is, by showing that $\gamma_n(t)$ is in fact *real*. We know that $\langle{\psi_n | \psi_n} \rangle = 1$. From there we look at

$$\frac{d}{dt} \left( \langle{\psi_n | \psi_n} \rangle \right) = \langle{\dot \psi_n | \psi_n} \rangle + \langle{ \psi_n | \dot\psi_n} \rangle = \left(\langle{ \psi_n | \dot\psi_n} \rangle\right)^* + \langle{ \psi_n | \dot\psi_n} \rangle = 2 \Re \left( \langle{ \psi_n | \dot\psi_n} \rangle \right) = 0$$

So $\langle{ \psi_n | \dot\psi_n} \rangle$ is purely imaginary, and $\gamma_n(t) = i \int_0^t \langle{ \psi_n | \dot\psi_n} \rangle dt'$ is therefore real.

Typically, $H(t)$ gets its time dependence through some parameter which depends on $t$, for example the width of the infinite square well. In that case we express $H$ as $H(\vec{R}(t))$, where $\vec{R}$ is just the set of parameters that depend on $t$. Then

$$\dot{\psi}_n = (\vec{\nabla}_{R} \psi_n) \cdot \dot{\vec{R}}$$

and so

$$\begin{aligned} \gamma_n(t) &= i \int_0^t \langle{ \psi_n | \vec{\nabla}_R\psi_n} \rangle \cdot \dot{\vec{R}}(t') dt' \\ &= i \int_{\vec{R}(0)}^{\vec{R}(t)} \langle{ \psi_n | \vec{\nabla}_R\psi_n} \rangle \cdot d\vec{R}. \end{aligned}$$

This is a line integral through parameter space, and it is independent of time: the Berry phase does not depend on how fast the path from $\vec{R}(0)$ to $\vec{R}(t)$ is traversed. This is in contrast to $\theta_n(t)$, which has explicit time dependence: the more time you take, the more phase you accumulate. If we consider a closed path, $\vec{R}(0) = \vec{R}(t)$, then with only one parameter the Berry phase will always be zero, since the contributions from the way out and the way back cancel. A non-zero Berry phase arises only for more than one time-dependent parameter.
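To make the closed-loop case concrete, here is a minimal numerical sketch for a standard illustrative system not covered above: a spin-1/2 whose field direction traces a cone of opening angle $\theta$, so the loop involves two parameters (the two direction angles). The gauge-invariant discretisation $\gamma = -\operatorname{Im}\ln \prod_j \langle{\psi_j | \psi_{j+1}}\rangle$ approximates the line integral, and the known result is minus half the enclosed solid angle.

```python
import numpy as np

# Spin-1/2 eigenstate with spin up along n = (sin t cos p, sin t sin p, cos t).
def spin_up(theta, phi):
    return np.array([np.cos(theta/2), np.exp(1j*phi)*np.sin(theta/2)])

theta = 0.7                                   # fixed cone angle
phis = np.linspace(0, 2*np.pi, 2001)          # closed loop in parameter space
states = [spin_up(theta, p) for p in phis]

# Discretised Berry phase: gamma = -Im ln prod_j <psi_j | psi_{j+1}>
prod = 1.0 + 0.0j
for a, b in zip(states[:-1], states[1:]):
    prod *= np.vdot(a, b)
gamma = -np.angle(prod)

expected = -np.pi*(1 - np.cos(theta))         # -(solid angle of the cone)/2
```

The same loop traversed at any speed gives the same $\gamma$: only the sequence of states in parameter space enters the product.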
In the case of more than one parameter, the value of the integral around a closed path depends only on the number of poles enclosed by the path, discretising the value of $\gamma_n$ and yielding a result that is path independent.

## Measuring a Phase?

How can we measure a phase? It may seem difficult, since phases often don't contribute to the observables that we are interested in measuring. Though it may be impossible to measure an *overall* phase, *relative* phases are fair game. By splitting a wave packet over two paths of different lengths and using interference, we can get information about this phase.

![Splitting mirrors.](figures/2paths.svg)

To see this, consider the two paths depicted in Figure \ref{fig:2paths}. The longer path is associated with an accumulated phase of $e^{i \gamma}$ for some $\gamma$. The total wavefunction is then

$$\begin{aligned} \psi_{tot} &= \frac{1}{2} \psi_0 + \frac{1}{2} \psi_0 e^{i \gamma} \\ |\psi_{tot}|^2 &= \frac{1}{4} |\psi_0|^2 \left( 1 + e^{i \gamma} \right) \left( 1 + e^{-i \gamma} \right) \\ &= \frac{1}{2} |\psi_0|^2 (1 + \cos(\gamma)) \\ &= |\psi_0|^2 \cos^2(\gamma/2) \end{aligned}$$

We see that relative phases do show up non-trivially when going from amplitudes to probabilities, and exploiting this will be our method for measuring the Berry phase.

# Aharonov-Bohm Effect

First predicted by Ehrenberg and Siday (10 years earlier than Aharonov and Bohm!), this effect is due to the coupling of the electromagnetic potential to an electron's wave function, as we will see. The setup is depicted in Figure \ref{fig:ehrenberg_siday}. The important thing to note here is that the magnetic field $\mathbf{B}$ is non-zero only inside the solenoid, which means that along both electron paths, through B or C, the magnetic field is identically zero. The same cannot be said of the vector potential, however.
![Ehrenberg-Siday setup.](figures/ehrenberg_siday.svg)

We know that the flux $\phi$ through the solenoid is $\phi = B \cdot \pi r^2$ for $r$ the radius of the solenoid, and that $\mathbf{B} = \nabla \times \mathbf{A}$. The vector potential $\mathbf{A}$ changes our Hamiltonian to

$$H = \frac{(\mathbf{p} + e\mathbf{A})^2}{2m} + V(\mathbf{r}).$$

If you are wondering about this change, the main point is that the momentum operator has been modified in the standard way to include the vector potential, such that the canonical commutation relation $[r_i, p_j] = i\hbar \delta_{ij}$ still holds.

How do we now solve $H \psi = E \psi$ for this Hamiltonian? There is a simple way, using the solution for $\mathbf{A}=0$, which we denote $\psi_0$. We define

$$\psi (\mathbf{r}) = e^{i g(\mathbf{r})} \psi_0(\mathbf{r}) ;\text{ where } g(\mathbf{r}) = -\frac{e}{\hbar} \int_{\mathbf{r}_0}^{\mathbf{r}} \mathbf{A}(\mathbf{r}') \cdot d \mathbf{r}'$$

Here $g(\mathbf{r})$ is defined as a line integral, which is only well-defined (path independent) in a region where $\mathbf{B} = \nabla \times \mathbf{A} = 0$. This is precisely the situation we have engineered with the solenoid. It is simple to check that this solution works:

$$\begin{aligned} (\mathbf{p} + e\mathbf{A})\psi &= (-i\hbar\nabla + e\mathbf{A})\psi \\ &= -i\hbar e^{i g} i (\nabla g) \psi_0 - i\hbar e^{ig} \nabla \psi_0 + e \mathbf{A} e^{ig}\psi_0 \\ &= -i\hbar e^{i g} i \left(-\frac{e\mathbf{A}}{\hbar}\right) \psi_0 - i\hbar e^{ig} \nabla \psi_0 + e \mathbf{A} e^{ig}\psi_0 \\ &= e^{ig}(\mathbf{p}\psi_0) \end{aligned}$$

Applying this operator twice gives $e^{ig} (\mathbf{p}^2 \psi_0)$. So, since $\psi_0$ satisfies $H_0 \psi_0 = E_0 \psi_0$, the function $\psi$ defined in Equation \ref{eq:psi_def} satisfies the equivalent equation with $\mathbf{A}$ included. Turning back to the beam splitter and Figure \ref{fig:ehrenberg_siday}, we can see that each path will pick up a different phase factor.
$$\psi_0 e^{ig} \rightarrow \psi_0 e^{\pm i \gamma} ; \text{ where } \gamma = -\frac{e}{\hbar} \int \mathbf{A}(\mathbf{r}) \cdot d \mathbf{r}$$

If, contrary to the figure, we take two semi-circular paths around the solenoid, we can evaluate the phase associated with each path using the line element $r \, d\phi \, \hat{\phi}$ along the path:

$$\gamma = -\frac{e}{\hbar} \int \frac{\phi}{2 \pi r} \hat{\phi} \cdot r \, d\phi \, \hat{\phi} = -\frac{e \phi}{ 2 \pi \hbar} \int d\phi = \pm \frac{e \phi}{2\hbar}.$$

The *difference* between the two paths is then $e \phi/\hbar$. As we saw in Equation \ref{eq:interference}, this difference can be measured in experiment through interference. Some comments:

* $\mathbf{A}$ matters for this derivation, *not* the magnetic field $\mathbf{B}$!
* $\gamma$ is explicitly gauge invariant: adding a gradient $\nabla f$ to $\mathbf{A}$ doesn't change the result.

### Connection to the Berry Phase

If we confine the electron to a box centred at some point $\mathbf{R}(t)$, and slowly move the box around the solenoid, we want to find the Berry phase that the electron acquires along the way. With no solenoid, the wavefunction would simply be centred around $\mathbf{R}$, which we can write as $\psi_0(\mathbf{r}-\mathbf{R})$. With the solenoid present, we use the same trick as above and set

$$\psi (\mathbf{r}) = e^{i g(\mathbf{r})} \psi_0(\mathbf{r} - \mathbf{R}); \text{ where } g(\mathbf{r}) = -\frac{e}{\hbar} \int_{\mathbf{R}}^{\mathbf{r}} \mathbf{A} \cdot d \mathbf{r}'$$

Then,

$$\nabla_{\mathbf{R}} \psi(\mathbf{r}) = -i \frac{e}{\hbar} \mathbf{A}(\mathbf{R}) e^{i g(\mathbf{r})} \psi_0(\mathbf{r} - \mathbf{R}) - e^{i g(\mathbf{r})} \nabla_{\mathbf{r}} \psi_0(\mathbf{r} - \mathbf{R})$$

where we used the fact that $\nabla_{\mathbf{R}} \psi_0(\mathbf{r} - \mathbf{R}) = -\nabla_{\mathbf{r}} \psi_0(\mathbf{r} - \mathbf{R})$.
We can now use this to evaluate the inner product inside the integral of the Berry phase:

$$\begin{aligned} \langle{ \psi | \nabla_{\mathbf{R}} \psi }\rangle &= \int e^{-ig} \psi_0^*(\mathbf{r} - \mathbf{R}) \left[ -\frac{ie}{\hbar}\mathbf{A}(\mathbf{R}) e^{ig}\psi_0(\mathbf{r} - \mathbf{R}) - e^{ig} \nabla_{\mathbf{r}} \psi_0(\mathbf{r} - \mathbf{R})\right] d^3r \\ &= -\frac{ie}{\hbar} \mathbf{A}(\mathbf{R}) - \frac{i}{\hbar} \langle{\mathbf{p}}\rangle_{\psi_0}. \end{aligned}$$

But the expectation value of the momentum is 0 for a particle confined to a box! So we find that

$$\begin{aligned} \gamma_n &= i \oint -\frac{ie}{\hbar} \mathbf{A}(\mathbf{R}) \cdot d\mathbf{R} \\ &= \frac{e}{\hbar} \int (\nabla \times \mathbf{A}) \cdot d\mathbf{a} \\ &= \frac{e\phi}{\hbar}. \end{aligned}$$
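As a cross-check of $\gamma_n = e\phi/\hbar$, the loop integral $\oint \mathbf{A}\cdot d\mathbf{R}$ can be evaluated numerically for the ideal-solenoid vector potential. This is a sketch in illustrative units $e = \hbar = 1$ (an assumption, not from the text), using two differently shaped loops to confirm that only the enclosed flux matters:

```python
import numpy as np

flux = 2.5                  # flux phi through the solenoid (arbitrary units)
e = hbar = 1.0              # illustrative units

def A(x, y):
    """Vector potential outside an ideal solenoid at the origin (B = 0 there)."""
    r2 = x*x + y*y
    return -flux*y/(2*np.pi*r2), flux*x/(2*np.pi*r2)

t = np.linspace(0.0, 2*np.pi, 20001)
gammas = []
for a, b in [(1.0, 1.0), (3.0, 0.5)]:      # a circle and an elongated ellipse
    x, y = a*np.cos(t), b*np.sin(t)
    Ax, Ay = A(x, y)
    dxdt, dydt = -a*np.sin(t), b*np.cos(t)  # dr/dt along the loop
    f = Ax*dxdt + Ay*dydt
    loop = np.sum(0.5*(f[1:] + f[:-1])*np.diff(t))   # trapezoidal loop integral
    gammas.append(e*loop/hbar)
```

Both loops give $\gamma = e\phi/\hbar$: the Berry phase depends on the enclosed flux, not on the shape of the path, consistent with the gauge-invariance comment above.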
---
title: "Adiabatic theorem"
header-includes:
    - \usepackage{physics}
    - \usepackage{booktabs}
    - \usepackage{mathtools}
    - \usepackage{braket}
output:
    pdf_document:
        keep_tex: true
---

# Proof of adiabatic theorem

So far we have just *stated* the theorem and shown a couple of examples. Now we will formulate the theorem in a mathematical way and prove it. Our starting point is the *time-dependent Schrödinger equation*:

$$i \hbar \frac{\partial}{\partial t} \left|\Psi(t)\right> = H(t) \left|\Psi(t)\right>$$

...

equation

$$H(t) \psi_n(t) = E_n(t) \psi_n(t)$$

where we now consider time $t$ as a *parameter* of the Hamiltonian that we can fix to some arbitrary value. Be careful here: the states $\psi_n(t)$ *do not solve the time-dependent Schrödinger equation \eqref{eq:TDSE}!* (despite the occurrence of $H(t)$ in the equation). In contrast, the $\psi_n(t)$ that we have just defined solve a stationary Schrödinger equation (we fix $t$ to some value). As a result they must constitute a complete, orthogonal set for any time $t$. That is: $\langle{\psi_n(t) | \psi_m(t)} \rangle = \delta_{nm}$.
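A quick numerical illustration of this point, using an illustrative two-level Hamiltonian (an assumption, not a system from the text): instantaneous eigenstates at the same $t$ are orthonormal, while eigenstates taken at two different times generally have nonzero overlap.

```python
import numpy as np

def instantaneous_states(t):
    """Eigenstates of the toy Hamiltonian H(t) = cos(t) sigma_z + sin(t) sigma_x."""
    H = np.array([[np.cos(t),  np.sin(t)],
                  [np.sin(t), -np.cos(t)]])
    _, v = np.linalg.eigh(H)        # columns are orthonormal eigenvectors
    return v[:, 0], v[:, 1]

ground_t, excited_t = instantaneous_states(0.0)
ground_tp, excited_tp = instantaneous_states(1.0)

same_time = abs(np.vdot(ground_t, excited_t))    # equal times: exactly orthogonal
cross_time = abs(np.vdot(ground_t, excited_tp))  # different times: need not vanish
```

Here `same_time` vanishes to machine precision, while `cross_time` is of order one, so no orthogonality can be assumed between different-time eigenstates.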
> If we take two different times $t$ and $t'$, we cannot say anything about the
> inner product $\langle{\psi_n(t) | \psi_m(t')} \rangle$ in general, since the $\psi_m$
> could evolve in any way in principle, such that the overlap between
> $\psi_n$ and $\psi_m$ is no longer 0.

Because the $\psi_n(t)$ form a complete orthonormal set, we can express the ...

By projecting these expressions on the $\psi_m$ eigenstate (which amounts to applying $\langle \psi_m |$ from the left on both sides), we find that

$$\dot{c}_m = - \sum_n c_n \langle{\psi_m| \dot\psi_n} \rangle e^{i (\theta_n - \theta_m)}$$

where we used the orthogonality of $\psi_n$ and $\psi_m$ to eliminate the sum on the left-hand side. In order to make progress evaluating what $\langle{\psi_m| \dot\psi_n} \rangle$ might be, we take the derivative of the time-independent Schrödinger equation which we used to define the $\psi_n$ in the first place (Equation \ref{eq:time_dep_H}).

...

Again projecting this onto the $\psi_m$ gives us:

$$\begin{aligned} \langle{\psi_m | \dot{H} | \psi_n }\rangle + \langle{\psi_m | {H} | \dot \psi_n }\rangle &= \dot{E}_n \langle{\psi_m| \psi_n} \rangle + E_n \langle{\psi_m| \dot\psi_n} \rangle \\ \langle{\psi_m | \dot{H} | \psi_n }\rangle + E_m\langle{\psi_m| \dot\psi_n} \rangle &= \dot{E}_n \delta_{nm} + E_n \langle{\psi_m| \dot\psi_n} \rangle \end{aligned}$$

So for $n \neq m$, we find that
$$\langle{\psi_m| \dot\psi_n} \rangle = \frac{\langle{\psi_m | \dot{H} | \psi_n }\rangle}{E_n - E_m}.$$

When we have slow, gradual change being applied to the system, $\dot{H}$ is small. Now we are finally ready to make the **approximation** that you must have been anticipating since reading the title of these notes. We *neglect* the contributions when $n \neq m$ and just get:

$$\dot{c}_m = -c_m \langle{\psi_m| \dot\psi_m} \rangle.$$

> **Aside:** What is considered "slow" is determined by $E_n - E_m$. If the energy
> difference is *large*, then a gradual change could mean something that
> is much faster than we might expect. We also see that in cases of degeneracy
> (when $E_n = E_m$ despite $n \neq m$), there isn't *any* definition of
> slow that is slow enough. The adiabatic approximation breaks down here.
> Similarly, if we change the system in such a way that two energy levels that
> were separated come together or switch places, the approximation will again
> break down.
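The breakdown described in the aside can be seen numerically. A minimal sketch (an illustrative two-level model with an avoided crossing, in units $\hbar = 1$; both assumptions for illustration): sweep $H(t) = z(t)\,\sigma_z + \Delta\,\sigma_x$ through the crossing once slowly and once quickly, and track how much population remains in the instantaneous ground state.

```python
import numpy as np

def ground_state_survival(T, steps=4000):
    """Sweep H(t) = z(t) sigma_z + delta sigma_x with z: -5 -> +5 over total
    time T, starting from the instantaneous ground state; return the final
    population left in the instantaneous ground state."""
    delta = 1.0
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    dt = T/steps
    _, v = np.linalg.eigh(-5.0*sz + delta*sx)
    psi = v[:, 0].astype(complex)              # initial instantaneous ground state
    for k in range(steps):
        z = -5.0 + 10.0*(k + 0.5)/steps        # midpoint value of z on this step
        w, v = np.linalg.eigh(z*sz + delta*sx)
        # exact propagator exp(-i H dt) for the (piecewise-constant) step
        psi = v @ (np.exp(-1j*w*dt)*(v.conj().T @ psi))
    _, v = np.linalg.eigh(5.0*sz + delta*sx)
    return abs(np.vdot(v[:, 0], psi))**2

p_slow = ground_state_survival(T=50.0)   # gradual change: follows the ground state
p_fast = ground_state_survival(T=0.5)    # sudden change: adiabaticity fails
```

For the slow sweep the survival probability stays near 1; for the fast sweep the state cannot follow and ends up spread over both levels, as the aside anticipates.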
Solving equation \ref{eq:cm} gives

$$c_m(t) = c_m(0) e^{i \gamma(t)} ; \text{ where } \gamma(t) = i \int_0^t \langle{\psi_m| \dot\psi_m} \rangle dt'$$

(You may notice that the factor $i$ appears twice in the above expression, both in the expression $e^{i \gamma(t)}$ and in the definition of $\gamma(t)$. This is ...)

...

processes in thermodynamics, which is characterised by no energy exchange between a system and its environment. The **adiabatic theorem**, from Max Born and Vladimir Fock, states that:

> A physical system remains in its *instantaneous eigenstate* if a given
> perturbation is acting on it slowly enough and if there is a gap between the
> eigenvalue and the rest of the Hamiltonian's spectrum.

Let's unpack what this means through a classical example to solidify our ...

Now the question we want to answer is: if we change the width of the well by moving the right wall outwards, how does the wavefunction change? The answer depends on *how fast* we move one of the walls. The adiabatic theorem tells us that if the system Hamiltonian changes gradually enough from some initial form $\hat{H}(0)$ to a final form $\hat{H}(T)$, then ...

## Heuristic derivation

The WKB wavefunction can be derived following physical intuition. We will follow this approach before doing a formal derivation.
Let us recall the following assumptions:

* A smooth potential can be decomposed into piecewise constant pads.
* There is no back reflection in a smooth potential.

The latter can be justified by considering sufficiently small pads. A general ansatz for a wavefunction has an amplitude and a phase, that is,

$$
\psi(x) = A(x)e^{i\phi(x)}.
$$

### Propagating waves: E > V(x)

First, let us focus on the phase. Moving along a single pad, the phase evolves by $\Delta \phi = \phi(x_{i+1})-\phi(x_{i})$. Over a constant potential, the acquired phase is $\Delta \phi = p(x_i) \Delta x / \hbar$, where $p(x)=\sqrt{2m(E-V(x))}$; for the moment, we implicitly assume that $E>V(x)$. Therefore, the total phase is obtained by summing the contributions of all the pads, that is,

$$
\psi(x) \sim \exp\left( \frac{i}{\hbar} \sum_{j=0}^N p(x_j) \Delta x\right) =\exp\left( \frac{\pm i}{\hbar} \int_{x_0}^x p(x') d x'\right).
$$

In the last equality, we take the continuum limit $\Delta x \rightarrow 0$.

Second, let us consider the amplitude. In quantum mechanics, probability currents are conserved. The current is given by,

$$
j(x) = |\psi(x)|^2 v(x) = |A(x)|^2 \frac{p(x)}{m},
$$

where the velocity is $v(x) = p(x)/m$. Since there is no back reflection, the current is constant, and we find that the amplitude of the wavefunction goes as

$$
A(x) \sim 1/\sqrt{p(x)}.
$$
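The two ingredients above, the accumulated phase and the $1/\sqrt{p(x)}$ amplitude, can be assembled numerically. A minimal sketch (assuming $\hbar = m = 1$ and an illustrative, gently sloped potential so the semiclassical condition holds) that also checks how well the result satisfies the stationary Schrödinger equation:

```python
import numpy as np

hbar = m = 1.0                       # illustrative units
E = 1.0
x = np.linspace(0.0, 5.0, 5001)
V = 0.1*x                            # slowly varying potential with E > V(x)

p = np.sqrt(2*m*(E - V))             # local classical momentum
# cumulative phase integral of p dx' (trapezoidal rule)
phase = np.concatenate(([0.0], np.cumsum(0.5*(p[1:] + p[:-1])*np.diff(x))))/hbar
psi = np.exp(1j*phase)/np.sqrt(p)    # WKB ansatz: amplitude 1/sqrt(p), phase int p

# residual of -hbar^2/2m psi'' + (V - E) psi, away from the grid edges
d = lambda f: np.gradient(f, x)
residual = -hbar**2/(2*m)*d(d(psi)) + (V - E)*psi
rel_error = np.max(np.abs(residual[100:-100]))/np.max(np.abs(E*psi))
```

The relative residual is at the percent level or below here; it grows as $E - V(x) \to 0$, foreshadowing the breakdown at the classical turning points.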
Then, the WKB wavefunction is given by,

$$
\psi_{WKB}(x) = \frac{1}{\sqrt{p(x)}} \exp\left( \frac{i}{\hbar} \int_{x_0}^x p(x') d x'\right).
$$

### WKB for evanescent waves

We assumed that $E>V(x)$, but this heuristic derivation holds for $E < V(x)$ as well. In this case, the wavefunction does not accumulate a phase, but a decaying amplitude. On the formal level, $p(x)=\sqrt{2m(E-V(x))}=i\sqrt{2m(V(x)-E)}=i|p(x)|$, so the WKB function in a region where $E < V(x)$ follows by the same replacement.

## Summary

The WKB wavefunction can be derived from two assumptions: a smooth potential, and a slowly varying wavefunction. The general solution in the two types of region is,

$$
\begin{split}
\psi(x)_{E > V(x)} &= \frac{A}{\sqrt{p(x)}}e^{\frac{i}{\hbar} \int_x^{x_1} p(x') dx'} + \frac{B}{\sqrt{p(x)}}e^{-\frac{i}{\hbar} \int_x^{x_1} p(x') dx'},\\
\psi(x)_{E < V(x)} &= \frac{C}{\sqrt{|p(x)|}}e^{\frac{1}{\hbar} \int_x^{x_1} |p(x')| dx'} + \frac{D}{\sqrt{|p(x)|}}e^{-\frac{1}{\hbar} \int_x^{x_1} |p(x')| dx'}.
\end{split}
$$

As can be seen, the phase sums over all the contributions of $p(x)$, and the amplitude is inversely proportional to it. The WKB approximation breaks down at the turning points $x_0 = x_0(E)$, where $p(x_0)=0$.

## Formal derivation

...

nav:
    - Tunneling: 'wkb_tunel.md'
    - Adiabatic approximation:
        - Adiabatic theorem: 'adiabatic_theorem.md'
        - Theorem proof: 'adiabatic_proof.md'
        - Berry phase: 'adiabatic_berry.md'
        - Exercises: 'adiabatic_exercises.md'
theme:
    name: material