The application of Schrödinger's equation to an open system in the present sense is a large part of the formal theory of scattering. The traditional approach is to expand the wavefunction in a set of traveling waves, at least in the asymptotic region. This implicitly sets the boundary conditions employed in the analysis. With the present interest in the quantum transport properties of (often complex) fabricated structures, purely numerical techniques for solving Schrödinger's equation have become more important. In these techniques one has a direct representation of the wavefunction as a complex-valued function of position, typically on a discrete basis (using finite-difference or finite-element techniques, for example). In this situation, the appropriate boundary conditions must be explicitly specified, and the proper choice of boundary conditions is a prerequisite to obtaining any meaningful results.

Let us first consider the steady-state case in a one-dimensional system extending over the interval **0 ≤ x ≤ l**. In general, we seek wavefunctions corresponding to traveling waves incident from either the left or right. These states will include a reflected component which appears at the same boundary as the incident wave, and a transmitted component which appears at the opposite boundary. For example, for an eigenstate incident from the left, we have

$$\psi(x) = A\,e^{ikx} + B\,e^{-ikx} \quad (x \le 0), \qquad \psi(x) = C\,e^{ik'(x-l)} \quad (x \ge l).$$
We know the value of **A** (typically **A=1**), but we do not know the value of **B** or **C**. A straightforward way to evaluate them is to temporarily assume **C=1**, from which we obtain the initial conditions **ψ(l)=1** and **ψ′(l)=ik′**. The steady-state Schrödinger equation may then be integrated from **x=l** to **x=0**, and the solution may then be normalized so that **A=1**.
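The procedure just described can be sketched numerically. The following is a minimal illustration, not code from the text: it assumes ħ = m = 1, uses a fourth-order Runge–Kutta integrator, and takes an illustrative square-barrier potential; the function name `scatter_backward` is my own.

```python
import numpy as np

# Sketch of the "integrate backwards" approach, in units hbar = m = 1.
def scatter_backward(V, E, l, n=2000):
    """Integrate the steady-state Schrodinger equation from x = l to x = 0,
    starting from a unit transmitted wave (C = 1), then recover A and B."""
    k = np.sqrt(2.0 * (E - V(0.0)))   # incident wavevector in the left lead
    kp = np.sqrt(2.0 * (E - V(l)))    # transmitted wavevector in the right lead
    h = l / n
    # Initial conditions at x = l for psi = C exp(i k'(x - l)) with C = 1:
    y = np.array([1.0 + 0.0j, 1j * kp])   # (psi, dpsi/dx)
    # RK4 integration of psi'' = 2 (V - E) psi, stepping from l down to 0.
    def f(x, y):
        return np.array([y[1], 2.0 * (V(x) - E) * y[0]])
    x = l
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x - h/2, y - h/2 * k1)
        k3 = f(x - h/2, y - h/2 * k2)
        k4 = f(x - h, y - h * k3)
        y = y - h/6 * (k1 + 2*k2 + 2*k3 + k4)
        x -= h
    psi0, dpsi0 = y
    # At x = 0: psi = A exp(ikx) + B exp(-ikx), so
    A = 0.5 * (psi0 + dpsi0 / (1j * k))
    B = 0.5 * (psi0 - dpsi0 / (1j * k))
    # Normalize so that A = 1; C = 1/A is then the transmitted amplitude.
    return B / A, 1.0 / A
```

With a symmetric potential (equal lead potentials, so **k′ = k**), current conservation requires |B|² + |C|² = 1, which provides a check on the integration.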

A more elegant approach is the Quantum Transmitting Boundary
Method (QTBM) of Lent and Kirkner (1990). The essence of this approach
is to apply *mixed* boundary conditions at each boundary. The mixed
boundary conditions involve fixing the value of a linear combination of
the wavefunction and its gradient. At the left-hand boundary:

$$\left.\frac{d\psi}{dx}\right|_{x=0} + ik\,\psi(0) = 2ikA.$$
Solving for **A** we obtain:

$$A = \frac{1}{2}\left[\psi(0) + \frac{1}{ik}\left.\frac{d\psi}{dx}\right|_{x=0}\right] \qquad (12.143)$$
A similar expression for the incident amplitude at the right-hand boundary (let us call it **D**) may be readily derived:

$$D = \frac{1}{2}\left[\psi(l) - \frac{1}{ik'}\left.\frac{d\psi}{dx}\right|_{x=l}\right] \qquad (12.144)$$
Equations (12.143) and (12.144) are the QTBM boundary conditions. They define an implicit relationship between the wavefunction ψ and the incident amplitudes **A** and **D**, and thus they must be solved along with Schrödinger's equation itself. This is readily done in a numerical approach in which the Schrödinger equation is approximated by a set of algebraic equations: one simply adds (12.143) and (12.144) to the set and solves them simultaneously. The QTBM is readily extended to two-dimensional problems (Lent and Kirkner, 1990) and to problems involving complex energy band structures which require more than one basis function per unit cell (Frensley and Luscombe, 1990). Note that the QTBM boundary conditions are energy-dependent, this dependence being implicit in the dependence of (12.143) and (12.144) on **k** and **k′**.
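To make the simultaneous solution concrete, here is a minimal finite-difference sketch (again with ħ = m = 1). The discretization details, such as the second-order one-sided differences for the boundary gradients, and the function name `qtbm_solve` are illustrative assumptions, not the specific scheme of Lent and Kirkner:

```python
import numpy as np

# Finite-difference sketch of the QTBM for a 1-D scattering problem,
# with hbar = m = 1; discretization choices here are illustrative.
def qtbm_solve(V, E, l, n=400, A=1.0):
    dx = l / n
    x = np.linspace(0.0, l, n + 1)
    k = np.sqrt(2.0 * (E - V(0.0)))    # left-lead wavevector
    kp = np.sqrt(2.0 * (E - V(l)))     # right-lead wavevector
    M = np.zeros((n + 1, n + 1), dtype=complex)
    b = np.zeros(n + 1, dtype=complex)
    # Interior rows: -psi''/2 + V psi = E psi, three-point Laplacian.
    for j in range(1, n):
        M[j, j-1] = M[j, j+1] = -0.5 / dx**2
        M[j, j] = 1.0 / dx**2 + V(x[j]) - E
    # Left boundary row, of the form ik psi(0) + psi'(0) = 2ikA,
    # with a second-order one-sided difference for the gradient:
    M[0, 0] = 1j * k - 1.5 / dx
    M[0, 1] = 2.0 / dx
    M[0, 2] = -0.5 / dx
    b[0] = 2j * k * A
    # Right boundary row with no wave incident from the right (D = 0):
    # ik' psi(l) - psi'(l) = 0.
    M[n, n] = 1j * kp - 1.5 / dx
    M[n, n-1] = 2.0 / dx
    M[n, n-2] = -0.5 / dx
    psi = np.linalg.solve(M, b)
    B = psi[0] - A    # reflected amplitude, since psi(0) = A + B
    C = psi[n]        # transmitted amplitude, since psi(l) = C (D = 0)
    return psi, B, C
```

The two boundary rows are simply appended to the interior equations and the whole set is solved at once, as described above.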

If the problem is time-dependent (typically because the potential
varies with time), the problem of boundary conditions is much more
complex. If we start with the knowledge that the electron in question
is in a particular eigenstate of the Hamiltonian at **t=0**, at
some later time **t** when the potential has changed perceptibly the
electron will not in general be in an eigenstate of the Hamiltonian, but will be
in a superposition of such eigenstates. Let us focus our attention on
the boundary at **x=0** and assume that the potential does not vary in its
immediate neighborhood. The wavefunction with unit incident amplitude
will be of the general form:

$$\psi(x,t) = e^{i(kx - \omega t)} + \psi_r(x,t),$$
and all we know about the reflected wave ψ_r is that it is a solution of Schrödinger's equation and all of its momenta should be negative. (However, a momentum-space expansion of ψ_r is not feasible because we only wish to deal with ψ_r over a small range in **x**.) Mains and Haddad (1988a) have reported calculations of the transient response of a resonant-tunneling diode using

$$\psi_r(x,t) = R(x,t)\,e^{-i(kx + \omega t)} \qquad (12.146)$$
with R(x,t) assumed to be slowly varying in space and time. Inserting (12.146) into Schrödinger's equation gives

$$\frac{\partial R}{\partial t} = \frac{\hbar k}{m}\,\frac{\partial R}{\partial x} + \frac{i\hbar}{2m}\,\frac{\partial^2 R}{\partial x^2}. \qquad (12.147)$$
They used the first-derivative term of (12.147) to update the boundary value of R (a Dirichlet boundary condition) in a time-integration procedure. This amounts to looking a short distance into the domain to determine what is coming out.
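This advective update can be illustrated in a few lines. In units ħ = m = 1 the first-derivative term of (12.147) gives ∂R/∂t = k ∂R/∂x, so the next boundary value is obtained by sampling the envelope a short distance inside the domain. The grid, step sizes, and Gaussian test envelope below are assumptions for the demonstration, not taken from Mains and Haddad:

```python
import numpy as np

# Sketch of the advective boundary update, hbar = m = 1: to leading
# order dR/dt = k dR/dx, so R(0, t + dt) is read off from interior data.
def advect_boundary(R, k, dx, dt):
    """Predict R(0, t + dt) from a one-sided difference into the domain."""
    return R[0] + dt * k * (R[1] - R[0]) / dx

k = 1.5                  # carrier wavevector of the reflected wave
dx, dt = 0.01, 0.004
x = np.arange(0.0, 1.0, dx)
R = np.exp(-((x - 0.2) ** 2) / 0.02)              # envelope at time t
# Exact advection solution: R(0, t + dt) = R(k*dt, t).
R_exact = np.exp(-((k * dt - 0.2) ** 2) / 0.02)
R_pred = advect_boundary(R, k, dx, dt)
```

The predicted boundary value agrees with the exactly translated envelope up to discretization error, which is the sense in which the scheme "looks into the domain."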

Let us consider another scheme for determining the boundary
condition, which I have not tested in a practical computation, but which
has the pedagogical advantage of explicitly displaying the non-Markovian
nature of the problem. Suppose that we are implementing a discrete
time-integration scheme with step size **Δt** and that we wish to apply a Neumann spatial boundary condition at **x=0**. Then we need a way to determine the value of the gradient ∂ψ/∂x at **x=0** at the next time-step. Fourier transform Schrödinger's equation and solve for **k** to obtain

$$k = \pm\frac{1}{\hbar}\left[2m(\hbar\omega - V)\right]^{1/2}. \qquad (12.148)$$
In the case of the reflected waves propagating out of the **x=0** boundary
we would choose the negative sign on the square root.
Now suppose that we approximate the right-hand side of (12.148), over an appropriate range of energies, by a polynomial in ω:

$$k(\omega) \approx \sum_n a_n\,\omega^n. \qquad (12.149)$$

Inverting the Fourier transform, we obtain an expression for the gradient of ψ:

$$\left.\frac{\partial\psi}{\partial x}\right|_{x=0} = i\sum_n a_n\left(i\,\frac{\partial}{\partial t}\right)^{\!n}\psi(0,t) \approx \sum_j c_j\,\psi(0,\,t - j\,\Delta t), \qquad (12.150)$$
where the latter expression is a finite-difference approximation to the differential operator, and we approximate this operator using only the values of ψ(0,t) at times prior to **t** because those are the only known values. (Thus the time-reversal symmetry is broken.) Note that (12.150) explicitly demonstrates the dependence of the boundary condition on the prior history of the system and thus shows its non-Markovian character. The finite-difference coefficients c_j may be obtained from the a_n by expanding ψ(0, t − jΔt) in a Taylor series in **Δt**. One thus obtains the set of equations:

$$\sum_j \frac{(-j\,\Delta t)^n}{n!}\,c_j = i^{\,n+1} a_n \quad \text{for each } n,$$
which must be solved to find the c_j.
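The character of this boundary condition can be demonstrated with a small sketch (ħ = m = 1). For outgoing waves e^{−i(kx+ωt)} sampled at **x=0** the exact gradient is ∂ψ/∂x = −ik ψ, and coefficients c_j fitted to the prior boundary history reproduce it. The two-frequency test signal and the choice of four coefficients below are illustrative assumptions:

```python
import numpy as np

# Linear-prediction sketch of (12.150), hbar = m = 1: fit coefficients
# c_j so that a combination of *prior* boundary values psi(0, t - j*dt)
# reproduces the outgoing-wave gradient dpsi/dx = -ik psi at x = 0.
dt = 0.05
t = dt * np.arange(200)
k1, k2 = 1.0, 1.3
w1, w2 = k1**2 / 2.0, k2**2 / 2.0       # free-particle dispersion w = k^2/2
psi = np.exp(-1j * w1 * t) + np.exp(-1j * w2 * t)     # boundary samples psi(0, t_n)
grad = -1j * (k1 * np.exp(-1j * w1 * t)
              + k2 * np.exp(-1j * w2 * t))            # exact gradient at x = 0

p = 4  # number of prediction coefficients c_j
# Each row holds the p boundary values prior to t_n:
# psi(0, t_{n-1}), psi(0, t_{n-2}), ..., psi(0, t_{n-p}).
rows = np.array([psi[n - p:n][::-1] for n in range(p, 150)])
c, *_ = np.linalg.lstsq(rows, grad[p:150], rcond=None)

# Predict the gradient at a later time from prior boundary values only:
n = 180
g_pred = psi[n - p:n][::-1] @ c
```

For a signal containing only a few frequencies the fit is essentially exact; for a broadband wavepacket the accuracy is set by how well the polynomial (12.149) approximates the dispersion relation over the occupied energy range.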

The essence of this scheme is that we use the previously calculated values of the wavefunction at the boundary to attempt to predict the next value of the gradient. This is a particular example of linear prediction (Makhoul, 1975). It also illustrates a general property of derivations of irreversible phenomena in quantum mechanics: When one attempts to remove (or at least ignore) the effects of some of the degrees of freedom in a system (in this case the spatial locations outside the boundary), they reassert themselves in the time domain, in the form of non-Markovian terms (Zwanzig, 1964).
