# Mathematics Dissertation Topics

Quantum Operator Theory concerns the analytic properties of mathematical models of quantum systems. Its achievements are among the most profound and most fascinating in Quantum Theory, e.g., the calculation of the energy levels of atoms and molecules which lies at the core of Computational Quantum Chemistry.

Among the many problems one can study, we give a short list:

- The atomic Schrödinger operator (Kato's theorem and all that);
- The periodic Schrödinger operator (describing crystals);
- Scattering properties of Schrödinger operators (describing collisions etc);
- Spectral and scattering properties of mesoscopic systems (quantum wires, dots etc);
- Phase space bounds (say, upper bounds on the number of energy levels) with applications, e.g., the Stability of Matter or Turbulence.

**Key words:** differential operators, spectral theory, scattering theory.

**Recommended modules:** Functional Analysis, Measure and Integration theory, Partial Differential Equations.

**References:**

!$[1]$! M. Melgaard, G. Rozenblum, Schrödinger operators with singular potentials, in: *Stationary partial differential equations*, Vol. II, 407--517, Handb. Differ. Equ., Elsevier/North-Holland, Amsterdam, 2005.

!$[2]$! Reed, M., Simon, B., *Methods of modern mathematical physics. Vol. I-IV*. Academic Press, Inc., New York, 1975, 1978, 1979, 1980.

Quantum Mechanics (QM) has its origin in an effort to understand the properties of atoms and molecules. Its first achievement was to establish the Schrödinger equation by explaining the stability of the hydrogen atom; but hydrogen is special because it is exactly solvable. When we proceed to a molecule, however, the QM problem cannot be solved in its full generality. In particular, we cannot determine the solution (i.e., the *ground state*) to !$HΨ=EΨ$!, where !$H$! denotes the Hamiltonian of the molecular system, !$Ψ$! is the wavefunction of the system, and !$E$! is the lowest possible energy. This problem corresponds to finding the minimum of the spectrum of !$H$! or, equivalently, !$$E= \inf \{ \, \mathcal{E}^{\rm QM}(Ψ) \, : \, Ψ \in \mathcal{H}, \:\: \| Ψ \|_{L^{2}} =1 \, \}, \quad \text{where} \quad \mathcal{E}^{\rm QM}(Ψ):= \langle Ψ, H Ψ \rangle_{L^{2}}$$! and !$\mathcal{H}$! is the variational (Hilbert) space.

For systems involving a few (say, today, six or seven) electrons, a direct Galerkin discretization is possible; this is known as *Full CI* in Computational Chemistry. For larger systems, with !$N$! electrons, say, this direct approach is out of reach due to the excessive dimension of the space !$\mathbb{R}^{3N}$! on which the wavefunctions are defined, and the problem has to be approximated. Quantum Chemistry (QC), as pioneered by Fermi, Hartree, Löwdin, Slater, and Thomas, emerged in an attempt to develop various *ab initio* approximations to the full QM problem. The approximations can be divided into *wavefunction methods* and *density functional theory* (DFT).

For both, the fundamental questions concern the minimizing configuration: Question I(i), necessary and sufficient conditions for the existence of a ground state (= a minimizer); Question I(ii), uniqueness of the minimizer; and Question II, necessary and sufficient conditions for multiple (nonminimal) solutions (i.e., *excited states*).
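The Galerkin (Full CI) discretization mentioned above can be sketched as follows; the finite basis and its size are illustrative assumptions, not part of any particular method discussed here. Restricting the minimization to the span of finitely many basis functions turns the variational problem into a finite-dimensional generalized eigenvalue problem:

```latex
% Restrict the minimization to V_n = span{ \varphi_1, ..., \varphi_n } \subset \mathcal{H}:
E \;\approx\; E_n \;=\; \inf \bigl\{ \, \mathcal{E}^{\rm QM}(\Psi) \, : \, \Psi \in V_n , \ \| \Psi \|_{L^2} = 1 \, \bigr\} .
% Writing \Psi = \sum_i c_i \varphi_i, E_n is the lowest eigenvalue of
H c \;=\; E \, S c , \qquad
H_{ij} = \langle \varphi_i , H \varphi_j \rangle_{L^2} , \qquad
S_{ij} = \langle \varphi_i , \varphi_j \rangle_{L^2} .
```

By the variational principle, !$E_n$! decreases toward !$E$! as the basis grows, which is why the excessive dimension of !$\mathbb{R}^{3N}$! is the bottleneck: the number of basis functions needed explodes with !$N$!.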

A magnetic field has two effects on a system of electrons: (i) it tends to align their spins, and (ii) it alters their translational motion. The first effect appears when one adds a term of the form !$-eħm^{-1} {s} \cdot \mathcal{B}$! to the Hamiltonian, while the second, *diamagnetic* effect arises from the usual kinetic energy !$(2m)^{-1} | {\mathbf p} |^{2}$! being replaced by !$(2m)^{-1} | \mathbf {p} -(e/c) \mathcal{A}|^{2}$!. Here !${\mathbf p}$! is the momentum operator, !$\mathcal{A}$! is the vector potential, !$\mathcal{B}$! is the magnetic field associated with !$\mathcal{A}$!, and !${s}$! is the spin angular momentum. In numerical practice, one approach is to apply a perturbation method to compute the variation of the characteristic parameters of, say, a molecule with respect to the outside perturbation. It is interesting to go beyond this and consider the full minimization problem for the perturbed energy. In Hartree-Fock theory one takes into account only effect (ii), whereas in nonrelativistic DFT it is common to include the spin-dependent term, to *ignore* (ii), and to study the minimization of the resulting nonlinear functional, which depends upon *two densities*, one for spin "up" electrons and the other for spin "down" electrons. Each density satisfies a normalisation constraint which can be interpreted as fixing the total number of spin "up" or "down" electrons.

Recent results on rigorous QC can be found in the references. The proposed project concerns the above-mentioned problems within the context of DFT in the presence of an external magnetic field. More specifically, molecular Kohn-Sham (KS) models, which turned DFT into a practical tool for calculations, are studied in the following settings:

- As a first step towards systems subject to a magnetic field, Question I(i) is addressed for the unrestricted KS model, which is suited for the study of open-shell molecular systems (i.e., systems with an odd number of electrons, such as radicals, and systems with an even number of electrons whose ground state is not a spin singlet state). The aim is to consider the (standard and extended) local density approximation (LDA) to DFT.
- The spin-polarized KS models in the presence of an external magnetic field with constant direction are studied while taking into account a realistic local spin-density approximation, in short LSDA.

**Key words:** differential operators, spectral theory, scattering theory.

**Recommended modules:** Functional Analysis, Measure and Integration theory, Partial Differential Equations.

**References:**

!$[1]$! M. Melgaard, G. Rozenblum, Schrödinger operators with singular potentials, in: *Stationary partial differential equations*, Vol. II, 407--517, Handb. Differ. Equ., Elsevier/North-Holland, Amsterdam, 2005.

!$[2]$! Reed, M., Simon, B., *Methods of modern mathematical physics. Vol. I-IV*. Academic Press, Inc., New York, 1975, 1978, 1979, 1980.

Resonances play an important role in Chemistry and Molecular Physics. They appear in many dynamical processes, e.g., in reactive scattering, state-to-state transition probabilities, and photo-dissociation, and give rise to long-lived states well above scattering thresholds. The aim of the project is to carry out a *rigorous mathematical study of the use of Complex Absorbing Potentials (CAP) to compute resonances in Quantum Dynamics*.

In a typical quantum scattering scenario, particles with positive energy arrive from infinity, interact with a localized potential !$V(x)$!, and then leave to infinity. The absolutely continuous spectrum of the corresponding Schrödinger operator !$T(\hbar)=-\hbar^{2}Δ+V(x)$! coincides with the positive semi-axis. Nevertheless, the Green function !$G(x,x'; z)= \langle x | (T(\hbar)-z)^{-1}| x \rangle$! admits a meromorphic continuation from the upper half-plane !$\{ \, {\rm Im}\, z >0 \,\}$! to (some part of) the lower half-plane !$\{ \, {\rm Im}\, z < 0 \,\}$!. Generally, this continuation has *poles* !$z_{k} =E_{k}-i Γ_{k}/2$!, !$Γ_{k}>0$!, which are called *resonances* of the scattering system. The probability density of the corresponding "eigenfunction" !$Ψ_{k}(x)$! decays in time like !$e^{-t Γ_{k}/ \hbar}$!; thus, physically, !$Ψ_{k}$! represents a metastable state with decay rate !$Γ_{k}/ \hbar$! or, re-phrased, lifetime !$\tau_{k}=\hbar / Γ_{k}$!. In the semi-classical limit !$\hbar \to 0$!, resonances !$z_{k}$! satisfying !$Γ_{k}=\mathcal{O}(\hbar)$! (equivalently, with lifetimes bounded away from zero) are called "long-lived".
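The decay rate quoted above comes from a one-line computation: inserting the pole !$z_{k}=E_{k}-iΓ_{k}/2$! into the formal time evolution factor !$e^{-itz/\hbar}$! gives

```latex
e^{-i t z_k/\hbar} \;=\; e^{-i t E_k/\hbar} \, e^{-t \Gamma_k/(2\hbar)} ,
\qquad \text{so} \qquad
\bigl| e^{-i t z_k/\hbar} \, \Psi_k(x) \bigr|^{2}
\;=\; e^{-t \Gamma_k/\hbar} \, | \Psi_k(x) |^{2} ,
```

whence the decay rate !$Γ_{k}/\hbar$! of the probability density and the lifetime !$\tau_{k}=\hbar/Γ_{k}$!.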

Physically, the eigenfunction !$Ψ_{k}(x)$! only makes sense near the interaction region, whereas its behaviour away from that region is evidently nonphysical (outgoing waves of exponential growth). As a consequence, a widely used approach to compute resonances approximately is to perturb the operator !$T(\hbar)$! by a *smooth absorbing potential* !$-iW(x)$!, which is supposed to vanish in the interaction region and to be positive outside it. The resulting Hamiltonian !$T_{W}(\hbar):=T(\hbar)-iW(x)$! is a non-selfadjoint operator, and the effect of the potential !$W(x)$! is to *absorb* outgoing waves; by contrast, a real-valued positive potential would reflect the waves back into the interaction region. In some neighborhood of the positive axis, the spectrum of !$T_{W}(\hbar)$! consists of discrete eigenvalues !$\tilde{z}_{k}$! corresponding to !$L^{2}$!-eigenfunctions !$\widetilde{Ψ}_{k}$!.

As mentioned above, the CAP method has been widely used in Quantum Chemistry, and numerical results obtained by CAP are very good. The drawback of the CAP method is that there is no proof that the correct resonances are obtained. (This is in stark contrast to the mathematically rigorous method of complex scaling.) In applications it is assumed implicitly that the eigenvalues !$\tilde{z}_{k}$! near the real axis are small perturbations of the resonances !$z_{k}$! and, likewise, that the associated eigenfunctions !$\widetilde{Ψ}_{k}$! and !$Ψ_{k}(x)$! are close to each other in the interaction region. Stefanov (2005) proved that *very close* to the real axis (namely, for !$| {\rm Im}\, \tilde{z}_{k}| =\mathcal{O}(\hbar^{n})$! provided !$n$! is large enough), this is in fact true. Stefanov's proof relies on a series of ingenious developments by several people, most notably Helffer (1986), Sjöstrand (1986, 1991, 1997, 2001, 2002), and Zworski (1991, 2001).

The first part of the project would be to understand in detail Stefanov's work [2]; subsequently, several open problems await.

**Key words:** operator and spectral theory, semiclassical analysis, microlocal analysis.

**Recommended modules:** Functional Analysis, Measure and Integration theory, Partial Differential Equations.

**References:**

!$[1]$! J. Kungsman, M. Melgaard, Complex absorbing potential method for Dirac operators. Clusters of resonances, J. Operator Theory, to appear.

!$[2]$! P. Stefanov, Approximating resonances with the complex absorbing potential method, Comm. Part. Diff. Eq. 30 (2005), 1843--1862.

The Choquard equation in three dimensions reads:

!$$\begin{equation} \tag*{(0.1)} -Δ u - \left( \int_{\mathbb{R}^{3}} u^{2}(y) W(x-y) \, dy \right) u(x) = -λ u , \end{equation}$$! where !$W$! is a positive function. It comes from the functional:

!$$\mathcal{E}^{\rm NR}(u) = \int_{\mathbb{R}^{3}} | \nabla u |^{2} \, dx -\int_{\mathbb{R}^{3}} \int_{\mathbb{R}^{3}} | u(x) |^{2} W(x-y) |u(y)|^{2} \, dx \, dy,$$!

which, in turn, arises from an approximation to the Hartree-Fock theory of a one-component plasma when !$W(y) =1/ | y |$! (the Coulomb case). Lieb (1977) proved that there exists a unique minimizer to the constrained problem !$E^{\rm NR}(\nu) = \inf \{ \, \mathcal{E}^{\rm NR}(u) \, : \, u \in \mathcal{H}^{1}(\mathbb{R}^{3}), \| u \|_{L^{2}} \leq \nu \, \}$!.

The mathematical difficulty of the functional is caused by the minus sign in !$\mathcal{E}^{\rm NR}$!, which makes it impossible to apply standard arguments for convex functionals. Lieb overcame the lack of convexity by using the theory of symmetric decreasing functions. Later, Lions (1980) proved that the unconstrained problem (0.1) possesses infinitely many solutions. For the constrained problem, seeking radially symmetric, normalized solutions !$\| u \|_{L^{2}} =1$!, or, more generally, solutions belonging to:

!$$\mathcal{C}_{N}= \{ \, φ \in \mathcal{H}_{\rm r}^{1} (\mathbb{R}^{3}) \, : \, \| φ \|_{L^{2}} =N \, \} ,$$! the situation is much more complicated and conditions on !$W$! are necessary. In the Coulomb case, Lions proved that there exists a sequence !$(λ_{j}, u_{j})$!, with !$λ_{j} > 0$!, such that !$u_{j}$! satisfies !${(0.1)}$! (with !$λ=λ_{j}$!) and belongs to !$\mathcal{C}_{1}$!.

We may replace the negative Laplace operator by the so-called quasi-relativistic operator, i.e., the pseudodifferential operator !$\sqrt{ -Δ +m^{2} } -m$!; this is the kinetic energy operator of a relativistic particle of mass !$m \geq 0$!. It is defined via multiplication in Fourier space with the symbol !$\sqrt{k^{2} +m^{2}} -m$!, and it is frequently used in relativistic quantum physics models as a suitable replacement for the full (matrix-valued) Dirac operator. The associated time-dependent equation arises as an effective dynamical description of an !$N$!-body quantum system of relativistic bosons with two-body interaction given by Newtonian gravity, as recently shown by Elgart and Schlein (2007). This system models a *Boson star*.

Several questions arise for the quasi-relativistic Choquard equation (existence, uniqueness, positive solutions, etc.), and the first part of the project would be to get acquainted with recent (related) results, e.g., [1] and [2].

**Key words:** operator and spectral theory, semiclassical analysis, microlocal analysis.

**Recommended modules:** Functional Analysis, Measure and Integration theory, Partial Differential Equations.

**References:**

!$[1]$! S. Cingolani, M. Clapp, S. Secchi, Multiple solutions to a magnetic nonlinear Choquard equation, Zeitschrift für Angewandte Mathematik und Physik (ZAMP), vol. 63 (2012), 233--248.

!$[2]$! M. Melgaard, F. D. Zongo, Multiple solutions of the quasi-relativistic Choquard equation, J. Math. Phys. 53 (2012), 033709 (12 pp).

These are ideas that various faculty members have suggested for thesis topics over the years. The list is by no means exhaustive. If you have a potential advisor in mind, that person may well have other ideas. Or you may even have your own idea for a project. We encourage this route as well, but please be aware that it will put some additional responsibility on you to identify sources.

**1. Inversive Geometry**

The geometry of the plane in which figures are equivalent when one is the "inverse" of another with respect to a fixed circle has a long history. It is an interesting non-Euclidean geometry in its own right, a link with classical projective geometry, and the mechanism for proving many otherwise difficult Euclidean theorems, including Feuerbach's Theorem on the 9-point circle.

For further information, see Bruce Peterson.

**1. The Four Color Theorem**

For many years, perhaps the most famous unsolved problem in mathematics asked whether every possible map on the surface of a sphere could be colored in such a way that any two adjacent countries were distinguishable using only four colors. It is easy to produce maps requiring at least four colors, but the proof that four colors are always sufficient did not appear until 1976. Topics for a thesis would include the history of the problem, including the mistakes made in early "proofs", extension of the problem to more complicated surfaces (what for instance happens if the maps are drawn on the surface of an inner tube?), and an explication of the final correct proof. The proof itself marks a milestone in mathematics in that it is readily understandable, but impossible to check because it involves computer verification of an enormous number of special cases. That is, anyone can check any individual step, but no one can check them all. The thesis would not involve computer work.

Reference:

Sandy Hunt, "The Four Color Theorem," Senior Thesis, 1987.

Jennifer Paris, "The Four Color Problem," Senior Thesis, 1991.

For additional information, see Bruce Peterson.

**2. Additive Number Theory**

We know a good deal about the multiplicative properties of the integers -- for example, every integer has a unique prime decomposition. Less is known about how to decompose integers additively. For instance, in how many ways can we write an integer as the sum of two squares? of four squares? How many ways can we write the number 1 as the sum of three cubes? Is every number the sum of two primes (Goldbach's conjecture)? What is the relationship between the divisors of a number and its partitions (additive decompositions)?
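A first experiment in this direction can be done by brute force; the following sketch (plain Python written for this note, not part of any reference) lists the ways to write n as a sum of two squares, ignoring order and signs.

```python
# Find all representations of n as a sum of two squares, a^2 + b^2 = n,
# with 0 <= a <= b (order and signs ignored).
def two_square_reps(n):
    reps = []
    a = 0
    while a * a <= n:
        b2 = n - a * a
        b = int(b2 ** 0.5)
        if b * b == b2 and a <= b:
            reps.append((a, b))
        a += 1
    return reps

print(two_square_reps(25))  # [(0, 5), (3, 4)]
print(two_square_reps(3))   # [] -- 3 is not a sum of two squares
```

Tabulating which n have no representation at all leads directly to the classical theorem on primes of the form 4k + 1 versus 4k + 3.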

For related ideas, see Waring's Problem (topic #17) and Additive Bases (topic #20).

Reference:

Hardy and Wright, *An Introduction to the Theory of Numbers*

Peter Schumer, *Introduction to Number Theory*

For further information, see Peter Schumer, David Dorman, or Priscilla Bremser.

**3. Fermat's Last Theorem**

About 1637, Pierre de Fermat claimed to have proved that the equation x^n + y^n = z^n has no solutions in positive integers except in the special cases n = 1 and n = 2. He claimed that there are no integers x, y, z such that x^3 + y^3 = z^3, and none if the exponent is 4, 5, etc. This theorem has recently been proved by Andrew Wiles of Princeton University. While a discussion of his proof is well beyond the undergraduate level, a thesis could include proofs of several special cases, a discussion of quadratic number fields and norms (material intersecting with and on the same level as MA 302), and perhaps a discussion of Bernoulli numbers, irregular primes, and Kummer's Theorem, which establishes a large class of exponents for which Fermat's Last Theorem is known to be true.

References:

T. Nuovo, "Fermat's Last Theorem", Senior Thesis 1985.

Paulo Ribenboim, *13 Lectures on Fermat's Last Theorem*, Springer-Verlag, 1979.

For further information, see Peter Schumer, David Dorman, or Priscilla Bremser.

**4. Mersenne Primes and Perfect Numbers**

Numbers like 6 and 28 were called perfect by Greek mathematicians and numerologists since they are equal to the sum of their proper divisors (e.g., 6 = 1 + 2 + 3). Euclid showed that if 2^p - 1 is a prime number then n = 2^(p-1) (2^p - 1) is perfect. Since then (about 300 B.C.) there has been a great deal of interest in finding large perfect numbers. Consequently, some mathematicians have tried to determine the values of p for which 2^p - 1 is prime. Such primes are called Mersenne primes after Friar Marin Mersenne who in 1644 conjectured all values of p ≤ 257 for which 2^p - 1 was believed to be prime. There still remain many open questions, for example, do there exist any odd perfect numbers? This would make a very nice thesis topic for anyone wishing an introduction to number theory. Research could include some interesting computer work if desired.
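Euclid's construction is easy to check numerically; this small sketch (illustrative only, small exponents, naive trial division) verifies that the first few Mersenne primes do yield perfect numbers.

```python
def is_prime(n):
    # naive trial division; fine for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def sum_proper_divisors(n):
    return sum(d for d in range(1, n) if n % d == 0)

# Euclid: if 2^p - 1 is prime (a Mersenne prime), then 2^(p-1) * (2^p - 1) is perfect.
perfect = []
for p in range(2, 8):
    if is_prime(2 ** p - 1):
        perfect.append(2 ** (p - 1) * (2 ** p - 1))

print(perfect)  # [6, 28, 496, 8128]
print(all(sum_proper_divisors(n) == n for n in perfect))  # True
```

Euler later proved the converse for even perfect numbers: every even perfect number arises from Euclid's formula.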

Reference:

T. McCoy, "Mersenne Primes and Perfect Numbers", Senior Thesis, 1987

Paris, "The Generation of Amicable Numbers", Senior Thesis, 1985

For further information, see Peter Schumer, David Dorman, or Priscilla Bremser.

**6. Group Decision-Making**

Arrow's Impossibility Theorem has generated a great deal of research on developing mathematical models of individual and group decision-making. Recent results indicate that any "reasonable" voting procedure must either be dictatorial or subject to strategic manipulation. Many "possibility" theorems have been proved for voting mechanisms which satisfy relaxed versions of Arrow's axioms.

References:

Jerry Kelly, *Arrow Impossibility Theorems*.

For further information, see Mike Olinick.

**7. Logistic Models of Population Growth**

The classic logistic model for the growth of a population P over time t is dP/dt = f(P), where f(P) is a quadratic function of P. How does one fit this model to real data? What qualitative and quantitative predictions can be obtained if one considers replacing the quadratic by a cubic in P? Are there other meaningful extensions of the logistic model? How are the Lotka-Volterra models of competition and predation affected by the assumption that one species grows logistically in the absence of the other?
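The qualitative behaviour is easy to explore numerically; the following sketch uses the standard quadratic f(P) = rP(1 - P/K) with illustrative values of r and K, and a simple forward-Euler step.

```python
# Forward-Euler simulation of the logistic model dP/dt = f(P) with the
# quadratic f(P) = r * P * (1 - P / K).  r, K, dt are illustrative values.
def logistic_trajectory(P0, r, K, dt, steps):
    P = P0
    traj = [P]
    for _ in range(steps):
        P = P + dt * r * P * (1 - P / K)
        traj.append(P)
    return traj

traj = logistic_trajectory(P0=10.0, r=0.5, K=1000.0, dt=0.1, steps=2000)
print(traj[-1])  # approaches the carrying capacity K = 1000
```

Replacing the quadratic by a cubic in the update line is a one-line change, which makes this a convenient playground for the extensions mentioned above.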

For further information, see Mike Olinick.

**8. Topics in Mathematical Bioeconomics**

Mathematical bioeconomics is concerned with the development, analysis and testing of mathematical models of economic problems relating to the growth and management of biological populations. A typical problem in this field would ask how to maximize the present value of discounted net economic revenue associated with the hunting and capture of whales. How does an optimal strategy vary with the number of competing whaling fleets?

References:

Colin Clark, *Mathematical Bioeconomics*

For further information, see Mike Olinick.

**9. Brouwer's Fixed Point Theorem**

In 1912, the Dutch mathematician L.E.J. Brouwer proved that every continuous function from an n-cell to itself has at least one fixed point; that is, if f: B → B is continuous, where B is homeomorphic to the set of points at most one unit from the origin in Euclidean n-space, then there is a point p in B such that f(p) = p. This result, a fundamental theorem in topology, has also had many applications in economics and the theory of games. In recent years, new and simpler existence proofs of the fixed point theorem have been discovered. There has also been much progress on the problem of computationally determining fixed points.
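In one dimension the theorem reduces to the intermediate value theorem applied to g(x) = f(x) - x, and a fixed point can then be located by bisection; the sketch below (illustrative, one-dimensional only) finds the fixed point of cos on [0, 1].

```python
import math

def fixed_point(f, lo=0.0, hi=1.0, tol=1e-10):
    # Bisection on g(x) = f(x) - x.  Since f maps [lo, hi] into itself,
    # g(lo) >= 0 and g(hi) <= 0, so a sign change (a fixed point) lies between.
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) - mid > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

p = fixed_point(math.cos)  # cos maps [0, 1] into itself
print(p)  # ~0.7390851, the unique solution of cos(x) = x
```

The computational methods alluded to above (e.g., Scarf's simplicial algorithms) are precisely what replaces this bisection idea in higher dimensions, where no sign-change argument is available.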

References:

Joel Franklin, *Methods of Mathematical Economics.*

Herbert Scarf, *The Computation of Economic Equilibria*.

For further information, see Mike Olinick.

**10. Models of Tumor Growth**

The Gompertz growth law, dN/dt = b N log(K/N), is a widely used deterministic model for the growth of a tumor. Here N is the number of tumor cells at time t, K is the largest tumor size, and b is a positive constant. A thesis in this area would begin with an investigation of the mathematical properties of this model and the statistical tests for deciding when it is a good one. The thesis would then move to a consideration of stochastic models of the tumor growth process.
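A quick numerical look at the deterministic model is a natural starting point; the sketch below integrates the Gompertz law with forward Euler, using illustrative values of b and K.

```python
import math

# Forward-Euler simulation of the Gompertz law dN/dt = b * N * log(K / N).
# b, K, dt are illustrative values, not fitted to data.
def gompertz_trajectory(N0, b, K, dt, steps):
    N = N0
    traj = [N]
    for _ in range(steps):
        N = N + dt * b * N * math.log(K / N)
        traj.append(N)
    return traj

traj = gompertz_trajectory(N0=1.0, b=0.3, K=1e6, dt=0.05, steps=2000)
print(traj[-1])  # tends to the limiting size K
```

Unlike the logistic law, the per-cell growth rate b log(K/N) is unbounded as N → 0, which is one of the mathematical properties worth comparing between the two models.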

Reference:

B. Hanson and C. Tier, "A Stochastic Model of Tumor Growth", *Mathematical Biosciences,* 59 (1982).

For further information, see Mike Olinick.

**11. Mathematical Models of Conventional Warfare**

Most defense spending and planning is determined by assessments of the conventional (i.e., non-nuclear) military balance. The dynamic nature of warfare has historically been modelled by a particularly simple linked system of differential equations first studied by F. W. Lanchester. The Lanchester approach has recently been challenged and other mathematical models have been proposed.
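One member of the Lanchester family, the "square law", is simple enough to simulate directly; the sketch below (illustrative force sizes and attrition coefficients) integrates it with forward Euler until one side is eliminated.

```python
# Forward-Euler simulation of Lanchester's square law for force levels x, y:
#   dx/dt = -a * y,   dy/dt = -b * x
# a, b and the initial strengths are illustrative values.
def lanchester(x0, y0, a, b, dt=0.001, t_max=10.0):
    x, y = x0, y0
    t = 0.0
    while x > 0 and y > 0 and t < t_max:
        x, y = x - dt * a * y, y - dt * b * x
        t += dt
    return x, y

x, y = lanchester(x0=1000.0, y0=800.0, a=1.0, b=1.0)
print(x, y)  # with a == b the larger force wins, with roughly sqrt(x0^2 - y0^2) survivors
```

The surviving strength near sqrt(1000^2 - 800^2) = 600 reflects the conserved quantity b·x^2 - a·y^2, which is where the name "square law" comes from.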

Reference:

Joshua Epstein, *The Calculus of Conventional War*.

James Taylor, *Lanchester Models of Warfare*.

For further information, see Mike Olinick.

**12. The Fundamental Theorem of Algebra**

The Fundamental Theorem of Algebra states that every non-constant polynomial with complex coefficients has a complex root. C.F. Gauss was the first person to give a proof of this result; in fact, he discovered four different proofs. All known proofs require some complex analysis. However, the theorem is one of algebra and a purely algebraic proof would be nice to find. Emil Artin has given one that's almost purely algebraic. A senior thesis project could include a presentation of several different types of proof and a search for an algebraic one.

References:

Any text in complex analysis.

J. Munkres, *An Introduction to Topology.*

Serge Lang, *Algebra* (for Artin's proof).

For further information, see Priscilla Bremser or David Dorman.

**13. Algebraic Numbers**

A real number r is "algebraic" if r is the root of a polynomial with integer coefficients. Thus every rational number is algebraic, as are many of the more familiar irrational numbers, such as the square root of 2 and the 17th root of 3. Cantor proved that the set of algebraic numbers is countable, so that "most" real numbers are not algebraic. Liouville was the first to show explicitly that a certain number was not algebraic. Later in the 19th Century, proofs were discovered that e and pi are not algebraic. All these proofs are within the grasp of a senior mathematics major. Although much progress has been made in this century, there are still a number of open questions; for example, is e + pi algebraic?

Reference:

W. Tabor, "In Search of Irrational and Transcendental Numbers", Senior Thesis, 1984.

For further information, see Peter Schumer or David Dorman.

**14. Nonstandard Analysis**

Would you like to see epsilons and deltas returned to Greek 101, where they belong? Your beginning calculus teachers only pay lip service to them anyway, fudging the definition of limit through phrases like "a tiny bit away" or "as close as you please." Non-standard analysis makes this hand-waving legitimate. In some ways it leaps back in time past the 19th Century godfathers of modern analysis to the founders of calculus by introducing, but in a rigorous way, "infinitesimals" into the real number system. Mathematics is not a static, immutable body of knowledge. New approaches to old problems are constantly being investigated and, if found promising, developed. Nonstandard analysis is a good and exciting example of this mathematical fact of life.

References:

A. Robinson, *Non-standard Analysis* (1970).

H. Jerome Keisler, *Elementary Calculus* (1976).

H. Jerome Keisler, *Foundations of Infinitesimal Calculus* (1976).

For further information, see Mike Olinick.

**15. Galois Theory**

The relation between fields, vector spaces, polynomials, and groups was exploited by Galois to give a beautiful characterization of the automorphisms of fields. From this work came the proof that a general solution for fifth degree polynomial equations does not exist. Along the way it will be possible to touch on other topics, such as the impossibility of trisecting an arbitrary angle with straightedge and compass, or the proof that the number e is transcendental. This material is accessible to anyone who has had MA 302.

Reference:

Artin, *Galois Theory*, or Herstein, *Topics in Algebra*.

For additional information, see Priscilla Bremser, David Dorman, or Peter Schumer.

**16. Prime Number Theorem**

Mathematicians since antiquity have tried to find order in the apparent irregular distribution of prime numbers. Let π(x) be the number of primes not exceeding x. In the 1790's, both Gauss and Legendre proposed that the ratio π(x)/(x/ln x) would approach 1 as x tended to infinity. Many of the greatest mathematicians of the 19th Century attempted to prove this result and in so doing developed the theory of functions of a complex variable to a very high degree. Partial results were obtained by Chebyshev in 1851 and Riemann in 1859, but the Prime Number Theorem (as it is now called) remained a conjecture until Hadamard and de la Vallée Poussin independently and simultaneously proved it in 1896.
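The slow approach of the ratio to 1 is easy to observe; the sketch below counts primes with a sieve and prints π(x)/(x/ln x) for a few values of x.

```python
import math

def primes_up_to(n):
    # Sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [i for i, ok in enumerate(sieve) if ok]

for x in [10 ** 3, 10 ** 4, 10 ** 5]:
    pi_x = len(primes_up_to(x))
    print(x, pi_x, pi_x / (x / math.log(x)))  # the ratio creeps toward 1
```

The convergence is notoriously slow, which is one reason the theorem resisted proof for a century; the logarithmic integral Li(x) gives a much better approximation.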

Reference:

A. Koo, "The Prime Number Theorem" Senior Thesis, 1985.

For further information, see Peter Schumer.

**17. Waring's Problem**

In 1770, the English mathematician Waring stated (without proof) that every positive integer can be expressed as the sum of 4 squares, of 9 cubes, of 19 4th powers, etc. Waring's Problem is to show that for every k there is a finite number g(k) so that all positive integers can be expressed as the sum of g(k) kth powers. In the same year, Lagrange succeeded in demonstrating that g(2) = 4, namely that every integer is the sum of 4 squares. The existence of g(k) for several other specific values of k was subsequently proven but it wasn't until 1909 that Hilbert proved the existence of g(k) for every k. However, Hilbert's proof did not determine the numerical value of g(k) for any k.

Senior thesis work in this area could include Lagrange's four square proof, the nine cube problem, Hilbert's inductive proof, and/or related results such as the fact that every sufficiently large integer is expressible as the sum of 7 cubes.
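Lagrange's result g(2) = 4 can be checked empirically; the sketch below computes, by dynamic programming, the least number of positive squares summing to n, and confirms that four always suffice in a small range while some n (such as 7) genuinely need four.

```python
# Least number of positive squares summing to n, by dynamic programming:
# best[m] = least count of squares with sum m.
def min_squares(n):
    best = [0] + [n] * n
    for m in range(1, n + 1):
        s = 1
        while s * s <= m:
            best[m] = min(best[m], best[m - s * s] + 1)
            s += 1
    return best[n]

print(min_squares(7))                                   # 4, since 7 = 4 + 1 + 1 + 1
print(max(min_squares(n) for n in range(1, 200)))       # 4, as Lagrange's theorem predicts
```

The same dynamic program with s*s replaced by s**3 explores the nine-cube problem, though the numbers needed to see g(3) = 9 (namely 23 and 239) are conveniently small as well.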

References:

Hardy and Wright, *An Introduction to the Theory of Numbers*

Margaret Russell, "Waring's Problem", Senior Thesis, 1985.

Peter Schumer, *Introduction to Number Theory*

For further information, see Peter Schumer.

**18. Twin Primes**

Primes like 3 and 5 or 101 and 103 are called twin primes since their difference is only 2. It is unknown whether or not there are infinitely many twin primes. In 1737, Leonhard Euler showed that the series Σ 1/p extended over all primes diverges; this gives an analytic proof that there are infinitely many primes. However, in 1919 Viggo Brun proved the following: if q runs through the set of twin primes, then Σ 1/q converges. Hence most primes are not twin primes. Brun's proof is rather difficult but does not depend too heavily on a strong number theory background. A computer search for large twin primes could be fun too.
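A small search makes Brun's contrast concrete; the sketch below lists twin prime pairs below an illustrative bound and accumulates the partial sum of the convergent series.

```python
def is_prime(n):
    # naive trial division; adequate for this small search
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Twin prime pairs (p, p + 2) with p below the bound, and the partial sum of
# 1/q over their members -- the series Brun proved converges.
limit = 10000
twins = [(p, p + 2) for p in range(2, limit - 1) if is_prime(p) and is_prime(p + 2)]
brun_partial = sum(1 / p + 1 / (p + 2) for p, _ in twins)
print(twins[:4])     # [(3, 5), (5, 7), (11, 13), (17, 19)]
print(brun_partial)  # partial sums creep toward Brun's constant (about 1.9)
```

The divergence of Σ 1/p over all primes versus the convergence of this sum is exactly the sense in which "most primes are not twin primes."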

Reference:

E. Landau, *Elementary Number Theory,* Chelsea, 1958; pp. 88-103.

For further information, see Peter Schumer, David Dorman, or Priscilla Bremser.

**19. Continued Fractions**

Have you ever wondered why 22/7 is a pretty good approximation to pi or why 355/113 is an excellent one? Do numbers like x = 1 + 1/(2 + 1/(2 + 1/(2 + ···))) make any sense?

The above are examples of infinite continued fractions (in fact, x is the positive square root of 2). Their properties have been studied over the centuries and many interesting, yet readily accessible, results have been discovered. Moreover, their theory is intimately related to the solution of Diophantine equations, Farey fractions, and the approximation of irrationals by rational numbers.
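The famous approximations to pi fall out of the continued fraction algorithm automatically; the sketch below repeatedly splits off the integer part, inverts the remainder, and rebuilds the convergents as exact fractions.

```python
import math
from fractions import Fraction

# First few continued-fraction convergents of x: split off the integer part,
# invert the remainder, repeat; then fold the partial quotients back into p/q.
def convergents(x, n_terms):
    quotients = []
    for _ in range(n_terms):
        a = int(x)
        quotients.append(a)
        if x == a:
            break
        x = 1 / (x - a)
    result = []
    for k in range(1, len(quotients) + 1):
        c = Fraction(quotients[k - 1])
        for a in reversed(quotients[:k - 1]):
            c = a + 1 / c
        result.append(c)
    return result

cs = convergents(math.pi, 4)
print(cs)  # the convergents of pi: 3, 22/7, 333/106, 355/113
```

Each convergent is the best rational approximation among all fractions with no larger denominator, which is precisely why 22/7 and 355/113 are so good.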

References:

L. Homrighausen, "Continued Fractions", Senior Thesis, 1985.

Rob Jenkins, "On Continued Fractions", Senior Thesis, 1987.

Peter Schumer, *Introduction to Number Theory*

For further information, see Peter Schumer, David Dorman, or Priscilla Bremser.

**20. Additive Bases**

Attempts to prove Goldbach's Conjecture have led to the development of new areas in additive number theory (see Additive Number Theory, topic #2). One such area originated with the work of the Russian mathematician Schnirelmann. He proved that there is a finite number k so that all integers are the sum of at most k primes. This result may seem innocuous enough, but its proof involved creating the idea of an additive basis and developing the notion of set density, and it involved some interesting combinatorial manipulations. Subsequent work has centered upon results with bases other than primes, determining effective values for k, and studying how sparse a set can be and still generate the integers -- the theory of essential components.

Reference:

A. Y. Khinchin, *Three Pearls of Number Theory,* Graylock, 1956.

For further information, see Peter Schumer or David Dorman.

**21. Primality Testing and Factoring**

This topic involves simply determining whether a given integer n is prime or composite, and if composite, determining its prime factorization. Checking all trial divisors less than the square root of n suffices, but it is clearly totally impractical for large n. Mathematicians have developed very sophisticated methods to deal with this problem. Those methods often utilize the particular form of n -- e.g., n may be of the form (a) 2^p - 1 or (b) 2^(2^q) + 1. How did Euler determine that (a) is prime for p = 31 or that (b) is composite for q = 5? Why did Euler initially think that 1,000,009 was prime before rectifying his mistake? What theorems have been used to allow computers to determine the primality of (a) for p = 859,433?
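Both of Euler's examples are within reach of plain trial division today; the sketch below (bound chosen for illustration) finds the famous factor 641 of 2^32 + 1 and exhausts all candidate divisors of the Mersenne number 2^31 - 1.

```python
# Smallest prime factor of n by trial division, up to a bound.
def smallest_factor(n, bound=10 ** 6):
    d = 2
    while d <= bound and d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # no factor below min(bound, sqrt(n)): here, n is prime

print(smallest_factor(2 ** 32 + 1))  # 641 -- Euler's factor of the fifth Fermat number
print(smallest_factor(2 ** 31 - 1))  # None -- 2147483647 is prime, as Euler showed
```

For exponents like p = 859,433 trial division is hopeless, which is where specialized tests such as Lucas-Lehmer enter the story.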

This could be a very interesting and challenging topic with either a mathematical or computational focus.

Reference:

Hans Riesel, *Prime Numbers and Computer Methods for Factorization*

Peter Schumer, *Introduction to Number Theory*

For further information, see Peter Schumer or David Dorman.

**22. Introduction to Analytic Number Theory**

Analytic number theory involves applying calculus and complex analysis to the study of the integers. Its origins date back to Euler's proof of the infinitude of primes (1737), Dirichlet's proof of infinitely many primes in an arithmetic progression (1837), and Vinogradov's theorem that all sufficiently large odd integers are the sum of three primes (1937). (Did you spot the arithmetic progression in the sentence above?) Other topics could include Chebyshev's theorems on the distribution of primes, the Prime Number Theorem (topic #16), summatory relationships for various number theoretic functions, Bertrand's postulate, results on lattice points visible from the origin, Kronecker and Weyl's theorems on uniform distributions, and much more.

There is an endless wealth of topics here for anyone already versed in elementary number theory.

Reference:

T. Apostol, *Introduction to Analytic Number Theory*, Springer, 1976.

For further information, see Peter Schumer.

**23. Finite Fields**

A finite field is, naturally, a field with finitely many elements. For example, **Z**/(p), where p is a prime number, is a finite field. Are there other types of finite fields? If so, how can their structure be characterized? Are there different ways of representing their elements and operations? A thesis on finite fields could begin with these questions and then investigate polynomials and equations over finite fields, applications (in coding theory, for example) and/or the history of the topic.
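A quick Python sketch of arithmetic in **Z**/(p) shows the field structure concretely (the three-argument `pow` computes modular inverses in Python 3.8+):

```python
# Field arithmetic in Z/(p): every nonzero element has a multiplicative inverse.
p = 7
for a in range(1, p):
    inv = pow(a, -1, p)          # modular inverse of a mod p
    assert (a * inv) % p == 1    # a * a^(-1) = 1 in Z/(p)

# Fermat's little theorem: a^(p-1) = 1 for every nonzero a in Z/(p).
print(all(pow(a, p - 1, p) == 1 for a in range(1, p)))  # True
```

Replacing p by a composite number breaks the inverse computation for some elements, which is one way to see why the prime hypothesis is essential.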

For further information, see Priscilla Bremser or David Dorman.

**24. Infinite Products and the Gamma Function**

"Infinite Products" does for multiplication what Infinite Series does for addition. In what sense can one say that a product of infinitely many factors converges to a number? To what does it converge? Can one generalize the idea of n! for n an integer to x! where x is a real number? This topic is closely related to a beautiful and powerful instrument called the Gamma Function. Infinite products have recently been used to investigate the probability of eventual nuclear war.
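The Wallis product for π/2 is a classical example of a convergent infinite product; a short Python sketch also shows the Gamma function extending the factorial:

```python
import math

# Partial Wallis product: prod_{n=1}^{N} (2n/(2n-1)) * (2n/(2n+1)) -> pi/2.
def wallis(N):
    prod = 1.0
    for n in range(1, N + 1):
        prod *= (2 * n) / (2 * n - 1) * (2 * n) / (2 * n + 1)
    return prod

print(wallis(100000))    # close to pi/2 = 1.5707963...
print(math.gamma(5))     # 24.0: Gamma(n+1) = n!, so Gamma extends the factorial
print(math.gamma(0.5))   # sqrt(pi): a value of "x!" for non-integer x
```

Note how slowly the product converges; quantifying that rate is itself a nice thesis-sized question.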

Reference:

T.J.I. Bromwich, *An Introduction to the Theory of Infinite Series.*

For further information, see Mike Olinick.

**25. Computer Simulation and Recognition of Language**

This thesis would involve some original research into the development of numerical measures that could be used to decide in what natural language (e.g., English, French) a given piece of text was written. We're also interested in investigating whether prose styles of different authors can be distinguished by the computer.

For further information, see Mike Olinick.

**26. Representation Theory**

Representation theory is one of the most fruitful and useful areas of mathematics. The theory of group representations was introduced by Frobenius in 1896 as an attempt to generalize the theory of characters of finite abelian groups. The development of the theory was carried on at the turn of the century by Frobenius as well as Schur and Burnside. Since that time the research in representation theory has been extremely active and its applications to all areas of mathematics have been very fruitful. In fact there are some theorems for which only representation theoretic proofs are known. Representation theory also has wide and profound applications outside mathematics. Most notable of these are in chemistry and physics. For example, the theory of molecular bonding, quantum mechanics, and the eight-fold way are best explained in terms of representation theory. A thesis in this area might restrict itself to linear representations of finite groups. Here one only needs background in linear and abstract algebra. Two possible goals are the Frobenius Reciprocity Law and/or Burnside's *p^a q^b* theorem, which states that a group whose order is divisible by only two primes is solvable.

Reference:

Serre, *Linear Representations of Finite Groups.*

For further information, see David Dorman.

**27. Lie Groups**

Lie groups are all around us. In fact, unless you had a very unusual abstract algebra course, the ONLY groups you know are Lie groups. (Don't worry, there are very important non-Lie groups out there.) Lie group theory has had an enormous influence in all areas of mathematics and has proved to be an indispensable tool in physics and chemistry as well. A thesis in this area would study manifold theory and the theory of matrix groups. The only prerequisites for this topic are calculus, linear algebra, and abstract algebra. One goal is the classification of some families of Lie groups.

Reference:

Curtis, *Matrix Groups*.

For further information, see David Dorman or Emily Proctor.

**28. Quadratic Forms and Class Numbers**

The theory of quadratic forms was introduced by Lagrange in the late 1700s and was formalized by Gauss in 1801. The ideas involved are very simple yet quite profound. A quadratic form is an expression f(x,y) = ax^2 + bxy + cy^2 with *a*, *b*, and *c* integers. A typical question one might ask is: given a **fixed** quadratic form, what integers *n* can such a form represent? For example, what integers can be represented by f(x,y) = x^2 + y^2?

One can show that any prime congruent to 1 modulo 4 can be represented but no prime congruent to 3 modulo 4 can. Of course, 2 can be represented as f(1,1). A thesis in this area could investigate the classification of definite and indefinite forms, the determination of class numbers, and proofs of the finiteness of class numbers.
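A brute-force Python check makes the mod-4 pattern for x^2 + y^2 visible immediately (the helper names here are ours):

```python
# Which primes are represented by f(x,y) = x^2 + y^2?
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def sum_of_two_squares(n):
    b = int(n**0.5) + 1
    return any(x * x + y * y == n for x in range(b) for y in range(b))

for p in filter(is_prime, range(2, 50)):
    print(p, p % 4, sum_of_two_squares(p))
# Output pattern: every prime with p % 4 == 1 is representable,
# no prime with p % 4 == 3 is, and 2 = f(1,1) is the special case.
```

Proving the pattern (Fermat's two-squares theorem) rather than observing it is where the thesis work begins.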

Reference: Davenport, *The Higher Arithmetic.*

For further information, see David Dorman.

**29. Generalizations of the Real Numbers**

Let **R**^n be the vector space of n-tuples of real numbers with the usual vector addition and scalar multiplication. For what values of n can we multiply vectors to get a new element of **R**^n? The answer depends on what mathematical properties we want the multiplication operation to satisfy.

For example, if n=1 we get the ordered field of real numbers with its rich structure which enables us to do most any algebraic or analytic manipulation. If we're just slightly less fussy, for n=2 we get the field of complex numbers. For n=4 we get the division algebra of quaternions. For n=8 we get the weak division algebra called the octonions. What happens when n=3?
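The n=4 case can be written out directly from Hamilton's relations i^2 = j^2 = k^2 = ijk = -1; this Python sketch shows the price paid for a product on **R**^4, namely commutativity:

```python
# Quaternion multiplication on R^4; tuples are (a, b, c, d) = a + bi + cj + dk.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))  # (0, 0, 0, 1)  = k
print(qmul(j, i))  # (0, 0, 0, -1) = -k: the product is not commutative
print(qmul(i, i))  # (-1, 0, 0, 0) = -1
```

For the octonions one gives up associativity as well, and Frobenius' theorem explains why n=3 admits no such product at all.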

A thesis in this area would involve learning about the discoveries of these various "composition algebras" and studying the main theorems:

1. the fundamental theorem of algebra

2. Frobenius' theorem on division algebras.

Reference:

Simon Altmann, "Hamilton, Rodrigues, and the Quaternion Scandal," *Mathematics Magazine,* Vol. 62, No. 5, December 1989, 291-308.

For further information, see Peter Schumer.

**30. The Arithmetic-Geometric Inequality and Other Famous Inequalities**

Inequalities are fundamental tools used by many practicing mathematicians on a regular basis. This topic combines ideas of algebra, analysis, geometry, and number theory. We use inequalities to compare two numbers or two functions. Recall the Cauchy-Schwarz inequality and the Triangle inequality from MA201. These are examples of the types of relationships that could be investigated in a thesis. You could find different proofs of the arithmetic-geometric mean inequality, research its history, and find generalizations.

References:

Hardy, Littlewood, and Pólya, *Inequalities,* Cambridge, 1952.

Beckenbach and Bellman, *An Introduction to Inequalities,* MAA, 1961.

For further information, see Bill Peterson.

**31. History of Mathematics**

There are many topics in the history of mathematics that could be developed into a thesis -- the history of an individual or group of mathematicians (e.g. Ramanujan or women in mathematics), the history of mathematics in a specific region of the world (e.g. Islamic, Chinese, or the development of mathematics in the U.S.), or the history of a particular branch of mathematics (e.g. calculus, logic, geometry, algebra, or computer science).

For further information, see Michael Olinick or Peter Schumer.

**32. Decision-Theoretic Analysis and Simulation**

Medical researchers and policy makers often face difficult decisions which require them to choose the best among two or more alternatives using whatever data are available. The reformulation of a hypothesis testing problem as a three-decision problem offers a convenient and more suitable way to frame many statistical problems. An axiomatic formulation of a decision problem uses loss functions, various decision criteria such as maximum likelihood and minimax, and Bayesian analysis to lead investigators to good decisions. These approaches can be compared to traditional Neyman-Pearson hypothesis testing via computer simulation using massive resampling. MA 311 or its equivalent provides the needed background.

References:

*Foundations, Concepts and Methods,* Springer-Verlag, 1980.

Emerson, J.D. and Tritchler, D., "The Three Decision Problem in Medical Decision Making," *Statistics in Medicine*, 6 (1987), 101-112.

For further information, see John Emerson.

**33. Bayesian Statistics: Theory and Decision Making**

The power of modern computers has made possible the analysis of complex data sets using Bayesian models and hierarchical models. These models assume that the parameters of a model are themselves random variables and therefore that they have a probability distribution. Bayesian models may begin with prior assumptions about these distributions, and may incorporate data from previous studies, as a starting point for inference based on current data. This project would investigate the conceptual and theoretical underpinnings of this approach, and compare it to the traditional tools of mathematical statistics as studied in Ma 311. It could culminate in an application that uses real data to illustrate the power of the Bayesian approach.
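The simplest case, conjugate Beta-Binomial updating, can be sketched in a few lines of Python (the prior choice and the data here are hypothetical, for illustration only):

```python
# Conjugate Bayesian updating: Beta prior + Binomial data -> Beta posterior.
prior_a, prior_b = 1, 1        # Beta(1,1): a flat prior on a success probability
successes, failures = 7, 3     # hypothetical current data

post_a = prior_a + successes   # the posterior is again a Beta distribution,
post_b = prior_b + failures    # with the data simply added to the prior counts
post_mean = post_a / (post_a + post_b)
print(post_mean)  # 0.666...: shrunk from the raw 0.7 toward the prior mean 0.5
```

Hierarchical models iterate this idea: the prior parameters themselves get distributions, which is what makes modern computation essential.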

Background for thesis: MA 310, with Ma 311 a plus.

References:

Gelman, A., Carlin, J.B., Stern, H.S., and Rubin, D.B. (1994). *Bayesian Data Analysis.*

Lee, Peter M. (1989). *Bayesian Statistics: An Introduction.* Oxford University Press, New York.

Pollard, William E. (1986). *Bayesian Statistics for Evaluation Research: An Introduction.* SAGE.

For further information, see John Emerson.

**34. Random Effects Models in Analysis of Variance**

Measurements which arise from one or more categorical variables that define groups are often analyzed using ANOVA (Analysis of Variance). Linear models specify parameters that account for the differences among the groups. Sometimes these differences exhibit more variability than can be explained by these "fixed effects", and then the parameters are permitted to come from a random distribution, giving "random effects."

This modeling approach has proved useful and powerful for analyzing multiple data sets that arise from different research teams in different places. For example the "meta-analysis" of data from medical research studies or from social science studies often employs random effects models. This project would investigate random effects models and their applications. A simulation project could illustrate the properties and the strengths of the models when used for making inferences.

Needed background: Ma 310, with Ma 311 a plus.

References:

Emerson, J.D., Hoaglin, D.C., and Mosteller, F., "A Modified Random-Effects Procedure for Combining Risk Differences in Sets of 2 × 2 Tables from Clinical Trials," *Journal of the Italian Statistical Society* 2 (1993, appeared in January 1995), 269-290.

Hoaglin, D.C., F. Mosteller, and J. W. Tukey, Eds., *Fundamentals of Exploratory Analysis of Variance*. New York: Wiley, 1991.

For further information, see John Emerson.

**35. Modern Analysis of Variance**

Tables containing amounts or measurements often arise as responses to three or more factors, each having two or more levels. Analysis of variance is the study of how the factors, both separately and jointly, account for the variability of the responses. Modern analysis of variance increasingly combines computer-generated ANOVA tables with the use of exploratory and graphical techniques in assessing potentially complex relationships among variables. This investigation would examine models and their assumptions for three-way and higher-way tables. Use of a statistical package like SAS would enable the application of the theory to a data set of current interest in an applications area. Background for thesis: MA 310, with 311 a plus.

References:

L. Ott, *An Introduction to Statistical Methods and Data Analysis,* Duxbury Press, 1984.

John D. Emerson, "Transformation and Reformulation in the Analysis of Variance", unpublished draft of a chapter.

For further information, see John Emerson.

**36. Pseudo-Random Number Generation**

Because a computer is deterministic, it cannot generate truly random numbers. Clever algorithms are currently under investigation for generating a sequence of pseudo-random numbers -- a sequence that has most of the statistical properties that make numbers appear random. A thesis project could explore methods of generating pseudo-random numbers from a variety of discrete and continuous probability distributions. The project would culminate in an application to an applied problem that uses statistical simulation. Needed background: MA310 and programming experience in C, Pascal or FORTRAN.
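The classic starting point is the linear congruential generator; here is a minimal Python sketch (the constants are the widely used Numerical Recipes choices, and other choices give streams of very different quality):

```python
# Linear congruential generator: x_{k+1} = (a*x_k + c) mod m, scaled to [0, 1).
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)   # pseudo-random value in [0, 1)
    return out

u = lcg(seed=12345, n=10000)
print(sum(u) / len(u))  # roughly 0.5, as uniform variates should average
```

A thesis would go on to test such streams statistically (spectral test, runs tests) and to transform uniforms into other distributions, e.g. by inversion.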

References:

G. Rothman, "Pseudo-Random Number Generation", Middlebury College Thesis, Middlebury, Vermont, 1984.

Morgan, B.J.T. Elements of Simulation, Chapman and Hall, 1984.

For further information, see John Emerson.

**37. Tilings and Patterns**

Mosaics of multicolored tiles, cobblestone streets, quilts, and honeycombs are examples of tilings. The art of tilings has been studied a great deal, but the science of the designs is a relatively new field of mathematics. Some possible topics in this area are: symmetry, topology, transitivity, tilings using only certain shapes, or tiles with special characteristics, classification of patterns, colorings, and geometry. The problems in this area are easy to state and understand, although not always easy to solve. The pictures are great and the history of tilings and patterns goes back to antiquity. An example of a specific problem that a thesis might investigate is: Devise a scheme for the description and classification of all tilings by angle-regular hexagons.

Reference:

Grünbaum and Shephard, *Tilings and Patterns,* Freeman, 1987.

Danzer, Grünbaum, and Shephard, "Equitransitive Tilings, or How to Discover New Mathematics," *Mathematics Magazine*, April 1987.

For further information, see Priscilla Bremser.

**38. Iterated Function Systems**

Roughly speaking, a contraction of the plane is a transformation f: **R**^2 → **R**^2 such that if P and Q are any two points in the plane then the distance from f(P) to f(Q) is strictly less than the distance from P to Q -- i.e., f decreases all distances. For any contraction f there is a point P_f such that regardless of where P is in the plane, the sequence of points f(P), f(f(P)), f(f(f(P))), ... always converges to P_f! Now if F = {f_1, f_2, ..., f_n} is any finite set of contractions, then there is not necessarily a single limit point, but there is nonetheless a compact set C_F such that regardless of where P is in the plane, the sequence of points f_{i1}(P), f_{i2}(f_{i1}(P)), f_{i3}(f_{i2}(f_{i1}(P))), ... (where each f_{ij} is in F) always converges to a point on the set C_F!

The actual shape of this set C_F depends dramatically on the choice of the set of contractions F. With a little effort C_F can even be made to look like a tree or a flower!

A thesis in this area would involve learning about these contraction mapping theorems in the plane and in other metric spaces, learning how the choice of contractions affects the shape of C_F, and possibly writing computer programs to generate C_F from F. The theoretical work is an extension of the kind of mathematics encountered in MA401. Any programming would only require CX213.
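The "chaos game" is the standard way to render C_F: iterate randomly chosen contractions and plot the orbit. This Python sketch uses three contractions that halve the distance to the corners of a triangle, whose attractor is the Sierpinski triangle:

```python
import random

# Chaos game: the orbit of randomly chosen contractions settles onto C_F.
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
random.seed(0)

x, y = 0.3, 0.3
points = []
for step in range(20000):
    cx, cy = random.choice(corners)
    x, y = (x + cx) / 2, (y + cy) / 2   # contraction: halve distance to a corner
    if step > 100:                      # discard the transient before convergence
        points.append((x, y))

# The whole orbit lies inside the triangle's bounding box, as C_F is compact.
print(all(0 <= px <= 1 and 0 <= py <= 1 for px, py in points))  # True
```

Feeding `points` to any plotting routine shows the fractal; changing the contractions is how one gets the trees and flowers mentioned above.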

Reference:

Barnsley, *Fractals Everywhere*

For further information, see Michael Olinick.

**39. Branching Processes**

Consider a population of individuals which produce offspring of the same kind. Associating a probability distribution with the number of offspring an individual will produce in each generation gives rise to a stochastic (i.e., random) model called a branching process. The earliest applications concerned the disappearance of "family names," as passed on from fathers to sons. Modern applications involve inheritance of genetic traits, propagation of jobs in a computer network, and particle decay in nuclear chain reactions. A key tool in the study of branching processes is the theory of generating functions, which is an interesting area of study in its own right.
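The generating-function connection can be shown in a few lines: the extinction probability of a branching process is the smallest fixed point of the offspring generating function g(s). Here is a Python sketch with an illustrative Poisson(1.5) offspring law, for which g(s) = exp(1.5(s-1)):

```python
import math

# Extinction probability = smallest solution of s = g(s),
# found by fixed-point iteration starting from 0.
def g(s, lam=1.5):
    return math.exp(lam * (s - 1))   # generating function of Poisson(lam)

q = 0.0
for _ in range(200):
    q = g(q)
print(round(q, 3))   # about 0.417: extinction is uncertain since the mean > 1
```

When the mean number of offspring is at most 1, the iteration converges to 1 instead: the "family name" dies out with certainty.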

Reference:

Jagers, P. *Branching processes with biological applications*. (New York: Wiley, 1975).

For more information, see Bill Peterson.

**40. The Poisson Process and Order Statistics**

The Poisson Process is a fundamental building block for continuous time probability models. The process counts the number of "events" that occur during the time interval [0,*T*], where the times between successive events are independent and have a common exponential distribution. Incoming calls to a telephone switchboard, decays of radioactive particles, or student arrivals to the Proctor lunch line are all events that might be modeled in this way. Poisson processes in space (rather than time) have been used to model distributions of stars and galaxies, or positions of mutations along a chromosome. Starting with characterizations of the Poisson process, a thesis might develop some of its important properties and applications. An example: Conditional on *n* Poisson events having occurred during the interval [0,*T*], the *n* occurrence times "look like" (in the sense of joint probability distribution) the ordered values of *n* points selected uniformly on that interval.
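The characterization via independent exponential gaps translates directly into a simulation; the rate and horizon in this Python sketch are illustrative choices:

```python
import random

# Simulate a Poisson process on [0, T] by summing exponential inter-arrival times.
random.seed(1)
rate, T = 2.0, 10.0

times, t = [], 0.0
while True:
    t += random.expovariate(rate)   # exponential gap with mean 1/rate
    if t > T:
        break
    times.append(t)

print(len(times))               # number of events; its expected value is rate*T = 20
print(times == sorted(times))   # True: occurrence times are increasing
```

To probe the order-statistics property, one can condition on the observed count n and compare the empirical joint distribution of `times` against n sorted uniforms on [0, T].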

Reference: W. Feller, *An Introduction to Probability Theory and Its Applications*, *Volume II*, 2e (New York: Wiley, 1971), Chapter 1.

For more information, see Bill Peterson.

**41. Majorization and Schur Convexity**

Two famous problems in elementary probability are the "Birthday Problem" and the "Coupon Collector's Problem." From the first, we learn that if all 365 possible birth-dates (ignoring leap year) are equally likely, then with 23 people in a room there is a better than even chance that two will share the same birth-date! For the second, imagine that each box of your favorite breakfast cereal contains a coupon bearing one of the letters "P", "R", "I", "Z" and "E". Assuming the letters are equally likely to appear, the expected number of boxes required to collect a set of coupons spelling P-R-I-Z-E is given by 5 × [1 + 1/2 + 1/3 + 1/4 + 1/5] ≈ 11.42.

Now suppose that the "equally likely" assumptions are dropped. Intuition suggests (?) that the chance of a birthday match with 23 people goes up, and the expected time to collect the coupons is longer. But how does one prove such claims? A thesis might investigate the theory of majorization, which provides important tools for establishing these and other inequalities.
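The equally-likely formula above generalizes to n coupon types as n times the n-th harmonic number, which this Python sketch computes:

```python
# Expected boxes to collect all n equally likely coupon types: n * H_n.
def expected_boxes(n):
    return n * sum(1 / k for k in range(1, n + 1))

print(round(expected_boxes(5), 2))    # 11.42, the P-R-I-Z-E example
print(round(expected_boxes(365), 1))  # expected cereal boxes to "collect" every birthday
```

Majorization is the tool that turns the "unequal probabilities make things worse" intuition into a theorem, since the unequal vector majorizes the uniform one.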

For more information, see Bill Peterson.

**42. Cover Times**

This is a modern topic combining ideas from probability and graph theory. A "cover time" is the expected time to visit all vertices when a random walk is performed on a connected graph. Here is a simple example (reported by Jay Emerson from his recent Ph.D. qualifying exam at Yale!): Consider a rook moving on a 2x2 chessboard. From any square on the board, the rook has two available moves. If the successive choices are made by tossing a coin, what is the expected number of moves until the rook has visited each square on the board? In more formal language, this asks for the cover time for the cyclic graph C4. (Answer: 6 moves).
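The rook example is easy to check by simulation; this Python sketch performs the coin-flipping walk on the cyclic graph C4:

```python
import random

# Estimate the cover time of C4: expected moves for a random walk
# (the coin-flipping rook on the 2x2 board) to visit all 4 vertices.
random.seed(0)

def cover_time(n=4):
    pos, visited, moves = 0, {0}, 0
    while len(visited) < n:
        pos = (pos + random.choice([-1, 1])) % n   # step to a random neighbor
        visited.add(pos)
        moves += 1
    return moves

trials = 100000
estimate = sum(cover_time() for _ in range(trials)) / trials
print(estimate)   # close to the exact answer of 6 moves
```

The exact value follows from the general formula n(n-1)/2 for the cover time of the cycle C_n, one of the clean results in this area.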

References: Blom, G. and Sandell D. (1992), "Cover Times for Random Walks on Graphs." *The Mathematical Scientist*, **17**, 111-119.

For more information, see Bill Peterson.

**43. Reliability Theory**

Reliability theory is concerned with computing the probability that a system, typically consisting of multiple components, will function properly (e.g., the system might be a computer communication network, whose components are the links and nodes). The components are subject to deterioration and failure effects, which are modeled as random processes, and the status of the system is determined in some way by the status of the components. For example, a series system functions if and only if each component functions, whereas a parallel system functions if and only if at least one component functions. In more complicated systems, it is not easy to express system reliability exactly as a function of component reliabilities, and one seeks instead various bounds on performance.
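For independent components the two extreme cases have closed forms, shown in this Python sketch (the component reliabilities are illustrative numbers):

```python
from functools import reduce

# Series system: works iff ALL components work.
def series(reliabilities):
    return reduce(lambda x, y: x * y, reliabilities)

# Parallel system: works iff AT LEAST ONE component works.
def parallel(reliabilities):
    return 1 - reduce(lambda x, y: x * y, [1 - r for r in reliabilities])

ps = [0.9, 0.8, 0.95]           # illustrative component reliabilities
print(round(series(ps), 4))     # 0.684: weaker than the weakest component
print(round(parallel(ps), 4))   # 0.999: stronger than the strongest component
```

General networks interpolate between these extremes, which is where the bounding techniques mentioned above come in.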

Reference: S.M. Ross, *Introduction to Probability Models* (Academic Press, 1993), Chapter 9.

For more information, see Bill Peterson.

**44. Measure and Integration**

The Riemann integral studied in MA 112 and MA 401 suffers from a few major deficiencies. Specifically, in order to be Riemann integrable, a function must be continuous almost everywhere. However, many interesting functions that show up as limits of integrable functions or even as derivatives do not enjoy this property. Certainly one would want at least every derivative to be integrable. To this end, Henri Lebesgue announced a new integral in 1901 that was completely divorced from the concept of continuity and instead depended on a concept referred to as measure theory. Interesting in their own right, the theorems of measure theory lead to fascinating and paradoxical insight into the structure of sets.

Reference: H.L. Royden, *Real Analysis*

**45. Chvátal’s Conjecture**

Given a collection *F* of sets which is closed downward (a collection in which any subset of a set in *F* is also in *F*), what is the largest intersecting sub-collection? That is, we want a set of sets from *F* such that any two sets have a non-empty intersection. What is the structure of such a sub-collection? Chvátal conjectured more than thirty years ago that a largest intersecting sub-collection can always be realized by a star, i.e., a sub-collection of sets in *F* each of which contains some fixed element. The conjecture remains open, though some particular cases have been solved.

Please see:

Ian Anderson, *Combinatorics of Finite Sets,* Dover, New York, 1987.

Chvátal's homepage: http://users.encs.concordia.ca/~chvatal/

For more information see John Schmitt.

**46. Snark Hunting**

*"We have sailed many months, we have sailed many weeks, (Four weeks to the month you may mark), But never as yet ('tis your Captain who speaks) Have we caught the least glimpse of a Snark!"*

The above lines are from a Lewis Carroll poem entitled "The Hunting of the Snark." A snark is a mythical creature that is not often seen and difficult to capture. When Martin Gardner applied the name to a particular class of graphs in 1976, a time when only four graphs (including the Petersen graph, of course) were known to be in the class, it was an appropriate name. Snarks were hunted by Bill Tutte (while writing under the pseudonym Blanche Descartes) as a way to approach the then-unsolved Four Color Problem. They were both an elusive and worthy prey. Now there exist several infinite classes of snarks, and they have proved to be useful, though not yet in the way Tutte envisioned.

See:

L. Carroll, *The Hunting of the Snark* (Annotated by M. Gardner), Penguin, 1967.

D.A. Holton and J. Sheehan, *The Petersen Graph*, Cambridge University Press, 1993.

M. Gardner, Mathematical games, *Scientific American,* 234, No. 4, 1976, 126-130 and No. 9, 1976, 163-171.

For more information see John Schmitt.

**47. Two-Dimensional Orbifolds**

Spheres and tori are examples of closed surfaces. There is a well-known classification theorem whereby we are able to completely characterize any surface based on only two pieces of information about the surface (its orientability and its Euler characteristic). A 2-dimensional orbifold is a generalization of a surface. The main difference is that in general, an orbifold may have what are known as singular points. A thesis in this area could examine Thurston's generalization of the surface classification theorem to 2-dimensional orbifolds. Another direction could be an examination of groups of transformations of the 2-dimensional plane which are used to produce flat 2-orbifolds. This subject is full of big ideas but can be pleasantly hands-on at the same time.

Reference:

Jeffrey Weeks, The Shape of Space

For further information, see Emily Proctor.

(For classification of 2-manifolds, see Wolf, p.77, Theorem 2.5.5)

**48. Matrix Groups**

In linear algebra, we learn about n-by-n matrices and how they represent transformations of n-dimensional space. In abstract algebra, we learn about how certain collections of n-by-n matrices form groups. These groups are very interesting in their own rights, both in understanding what geometric properties of n-dimensional space they preserve, and because of the fact that they are examples of objects known as manifolds. There are many senior projects that could grow out of this rich subject. (See, for example, #27 above.)

Reference:

Kristopher Tapp, Matrix Groups for Undergraduates

Morton L. Curtis, Matrix Groups

For further information, see Emily Proctor.

**49. The Erlangen Program**

From the time that Euclid set down his axioms of geometry in the *Elements* in about 300 B.C. until the middle of the 19th century, mathematicians studied geometry in a so-called synthetic way, based on drawing conclusions from the given axioms. In the 19th century, geometry was revolutionized by the realization that if Euclid's fifth axiom, the parallel postulate, was dropped, there were a number of alternate geometries that satisfied the first four axioms but that displayed behavior quite different from traditional Euclidean geometry. These geometries are called non-Euclidean geometries, and include projective, hyperbolic, and spherical geometries. As the theory of these geometries began to develop, one of the great mathematicians of the day, Felix Klein, proposed his Erlangen Program, a new method for studying and characterizing these geometries based on group theory and symmetries. A thesis in this area would study the various geometries, and the groups of transformations that define them.

Reference:

David Gans, Transformations and Geometries.

For further information, see Emily Proctor.
