Landon Lehman

Updates on my interests/research

Entropy of a d-dimensional ideal gas

I’m slowly reading through Kittel and Kroemer to refresh my knowledge of basic statistical mechanics and thermodynamics. They have a nice little problem at the end of Chapter 3 (Problem 11), which is to calculate the entropy of a one-dimensional ideal gas (using the methods outlined in that chapter). It is only a short step further to calculate the general expression for the entropy of a {d}-dimensional ideal gas, which I do here.

The single-particle partition function is calculated using the “particle-in-a-box” solution from quantum mechanics, and the end result is that

\displaystyle  	Z_1 = \left( \int_0^\infty \text{d}n\, e^{-\alpha^2 n^2} \right)^d = \left( \frac{ \sqrt{\pi}}{2\alpha}\right)^d, \ \ \ \ \ (1)

where

\displaystyle  	\alpha = \left( \frac{\hbar^2 \pi^2}{2 M L^2 \tau} \right)^{1/2}. \ \ \ \ \ (2)

Define the {d}-dimensional quantum concentration as {n_{Q_d} = Z_1/L^d}; plugging in Eq. (2) then gives

\displaystyle  	n_{Q_d} = \left( \frac{M\tau}{2\pi \hbar^2} \right)^{d/2}. \ \ \ \ \ (3)

The free energy is (using the Stirling approximation)

\displaystyle  	F = - \tau N \log{(Z_1)} + \tau (N \log{(N)} - N), \ \ \ \ \ (4)

and thus the entropy is

\displaystyle  	\sigma = - \left( \frac{\partial F}{\partial \tau} \right)_V = N 	\left( \log{\left(\frac{Z_1}{N}\right)} + 1 + \tau \frac{\partial \log{Z_1}}{\partial 	\tau} \right). \ \ \ \ \ (5)

Now, we can use the properties of the logarithm to make taking the derivative really easy, since

\displaystyle  	\log{Z_1} = \frac{d}{2} \log{\tau} + \cdots 	 \ \ \ \ \ (6)

where I have dropped all of the terms that don’t depend on the temperature. So

\displaystyle  	\frac{\partial \log{Z_1}}{\partial \tau} = \frac{d}{2\tau}, 	 \ \ \ \ \ (7)

and the final result for the entropy is

\displaystyle  	\sigma = N \left( \log{\left( \frac{n_{Q_d}}{n}\right)} + \frac{d}{2} + 1 \right). \ \ \ \ \ (8)

Here I have defined {n = N/L^d}.

For 1, 2, and 3 dimensions, we have

\displaystyle  	\sigma_1 = N \left( \log{\left( \frac{n_{Q_1}}{n} \right)} + \frac{3}{2} 	\right), \ \ \ \ \ (9)

\displaystyle  	\sigma_2 = N \left( \log{\left(\frac{n_{Q_2}}{n} \right)} + 2 	\right), \ \ \ \ \ (10)


\displaystyle  	\sigma_3 = N \left( \log{\left(\frac{n_{Q_3}}{n} \right)} + \frac{5}{2} 	\right). \ \ \ \ \ (11)
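Both the Gaussian integral in Eq. (1) and the final entropy formula in Eq. (8) are easy to sanity-check numerically. Here is a quick Python sketch (the function names are mine, and the trapezoidal rule is just the simplest thing that works):

```python
import math

def z1_numeric(alpha, d, u_max=8.0, steps=80000):
    """Evaluate Z_1 = (int_0^inf dn exp(-alpha^2 n^2))^d numerically.

    Substituting u = alpha*n turns the integral into
    (1/alpha) * int_0^inf exp(-u^2) du, done here by the trapezoidal rule
    (the integrand is negligible beyond u_max = 8)."""
    h = u_max / steps
    total = 0.5 * (1.0 + math.exp(-u_max ** 2))
    for i in range(1, steps):
        total += math.exp(-(i * h) ** 2)
    return (total * h / alpha) ** d

def z1_analytic(alpha, d):
    """The closed form from Eq. (1): (sqrt(pi)/(2*alpha))^d."""
    return (math.sqrt(math.pi) / (2 * alpha)) ** d

def entropy_per_particle(nq_over_n, d):
    """sigma/N from Eq. (8): log(n_Qd/n) + d/2 + 1."""
    return math.log(nq_over_n) + d / 2 + 1
```

For `d = 3`, `entropy_per_particle` reproduces the familiar Sackur-Tetrode constant {5/2} plus the {+1} from the Stirling approximation, matching Eq. (11).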


Anatomy of an “Anomaly”

On Tuesday, December 15 of last year, the LHC experimental collaborations ATLAS and CMS released the first results from the 13 TeV run. The most exciting news was a bump in the diphoton spectrum at about 750 GeV. This could be a sign of new physics (finally), or it might just be a fluctuation (the statistical significance is not very high). We will probably have to wait until at least late summer to find out whether or not the bump disappears with more statistics (and this is an optimistic estimate as far as magnet issues go).

Of course this “anomaly” is exciting and gives theorists something to think about. But I want to focus on the amount of thinking that has been done already, measured in part by the papers that have been written. (Of course, one could argue that the number of papers written is not a good proxy variable for the amount of real, hard thinking that has been done. One might have a point.)

Seven papers citing the ATLAS or CMS papers were submitted on the day of the announcement (i.e. the same day the ATLAS and CMS papers were released). Obviously a good number of theorists had advance warning. Here is a plot of the number of papers versus the date of their arXiv submissions from Dec. 15 until today (not counting the papers that just showed up on the arXiv tonight).


The vertical orange lines are for Mondays, the day when diphoton papers were most likely to be submitted. There is an idea that your paper is most likely to be read if you submit it on a Monday (so it shows up on the arXiv on Tuesday). So far there have been 172 papers!

For those interested in how I made the plot: I used INSPIRE to search for papers citing either the ATLAS results or the CMS results, and sorted by the earliest date recorded (“de”). I used either/or rather than both/and because it sometimes takes a while for INSPIRE to correctly catch all of the citations.
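The bookkeeping behind the plot is simple once you have the submission dates exported from INSPIRE as a list of `datetime.date` objects. A sketch (the helper name is mine; the triples it returns are what you would feed to matplotlib, drawing a vertical line wherever the Monday flag is set):

```python
from collections import Counter
from datetime import date, timedelta

def tally_by_day(dates, start, end):
    """Count papers per calendar day from start to end (inclusive),
    filling in zero-paper days so gaps show up in the plot.

    Returns (day, paper_count, is_monday) triples; is_monday marks
    where the vertical orange lines go."""
    counts = Counter(dates)
    day, out = start, []
    while day <= end:
        out.append((day, counts[day], day.weekday() == 0))
        day += timedelta(days=1)
    return out
```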



New books

“To those of my colleagues who predict that computers will soon replace human mathematicians by virtue of their superior skill and reliability in proving theorems, I am inclined to respond that the goal of mathematics is to convert rigorous proofs to heuristics. The latter are, in turn, used to produce new rigorous proofs, a necessary input (but not the only one) for new heuristics.”

– Michael Harris, Mathematics without Apologies: Portrait of a Problematic Vocation, 2015

Also, a Princeton Companion to Physics is scheduled to come out in 2018, with Wilczek as editor. The Princeton Companion to Mathematics, edited by Gowers, is quite nice, so I am looking forward to the physics one.

Firenze and amplitudes

I am headed to Firenze (Florence), Italy this weekend to attend a 3-week-long winter school at the Galileo Galilei Institute for Theoretical Physics. It should be a good place to meet some new people and perhaps come up with new ideas to explore. Perhaps there will be time to explore the city as well!

In December I thought about calculating Yang-Mills amplitudes in 6 dimensions by using the formalism outlined in a paper by Cheung and O’Connell. In the paper, they calculate the 4- and 5-point amplitudes by using BCFW recursion on the 3-point amplitude. The interesting thing about 6-dimensional amplitudes is that all of the 4-dimensional helicity structures can be obtained from a single expression in 6 dimensions. For example, 4D MHV amplitudes are contained in the “general” 6D expression. So if there were a simple expression for an n-point 6D amplitude, like there is a simple expression for a 4D MHV n-point amplitude, it would contain all of the 4D tree-level amplitudes.

Unfortunately the 6D amplitudes become complicated very quickly. The 3-point amplitude must be written using special kinematic variables, and using these the 4-point amplitude can be deduced from BCFW. The 4-point Einstein gravity amplitude can also be obtained by using the KLT relations. The 4-point amplitude has a relatively simple structure, but applying BCFW to this structure and calculating the 5-point amplitude gives a very complicated expression (at least I think it is complicated!). Perhaps some new notation is needed in order to see the underlying structure. Or maybe 6D is just inherently more complicated than 4D, and it would be easier to attack the 4D problem directly instead of solving it in 6D and reducing to 4D.

I also thought about the application of on-shell methods to effective field theories. This possibility has been explored for the nonlinear sigma model using “semi-on-shell” amplitudes. In general, adding masses to the spinor-helicity formalism makes it more complicated, and I am not sure what it means to integrate out a particle in this formalism. And since integrating out heavy particles from a Lagrangian is the most common form of generating an effective field theory, it would be useful to have a way to translate this procedure into spinor helicity language.


Almost done with my second year of classes (I will be done by Wednesday).  I will probably sit in on some classes next year, but this marks the likely end of my formal coursework. Now research starts “for real,” which is exciting (and somewhat frightening)!

On the research side of things, a couple of my fellow graduate students and I just had a paper accepted for publication!

On the more formal, coursework side of things, I feel that I still have only a marginal grasp of quantum field theory. I want to work through Weinberg’s trilogy on the subject, but it is proving to be a very slow and arduous task. Perhaps by the end of this summer I will have finished Volume 1.  One of my goals for this year was to finish the entire trilogy, and that is looking unlikely unless I really pick up the pace.

Other books I want to read in the near future: QFT in a Nutshell by Zee and Dynamics of the Standard Model by Donoghue, Golowich, and Holstein. Also, a new book by Schwartz just came out that looks good.  Between reading and research I should have a busy summer!



Go look at the last 12 or so slides in this presentation.  I hope this will be released soon – it should be fun to play around with.  I am especially interested to see how it works out the problem of finding invariants in general tensor products of Lie group representations.  At this point I am not sure how to do this in an algorithmic manner. After seeing the slides, I was inspired to write a short Mathematica function that finds the terms of a given dimension allowed by some collection of scalar fields with different U(1) charges. This was relatively easy – the hard part would be finding all the independent singlets of, for example, SU(2) that can then be formed once the U(1) constraint is satisfied.  Automatically generating the terms in, for example, the Higgs potential in this paper, would be impressive.
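My function was in Mathematica, but the easy part of the problem fits in a few lines of Python. The sketch below enumerates the monomials of a given total mass dimension whose U(1) charges sum to zero, under the simplifying assumption that every field is a dimension-1 scalar; the field names and charges are illustrative, not from any particular model:

```python
from itertools import combinations_with_replacement

def neutral_terms(fields, dimension):
    """List monomials of the given total (mass) dimension, built from
    scalar fields of dimension 1, whose U(1) charges sum to zero.

    `fields` maps a field name to its U(1) charge; each monomial is
    returned as a tuple of field names (repetition = powers)."""
    terms = []
    for combo in combinations_with_replacement(sorted(fields), dimension):
        if sum(fields[f] for f in combo) == 0:
            terms.append(combo)
    return terms
```

For example, with `{"phi": 1, "phibar": -1, "s": 0}` at dimension 2 this finds the two neutral terms `phi*phibar` and `s*s`. The genuinely hard step, as mentioned above, is layering the non-abelian constraints on top: there is no analogous one-liner for picking out the independent SU(2) singlets.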

And here is the mandatory comment on BICEP2: I don’t understand inflationary physics nearly well enough to offer anything original – and if I did, I would probably be writing a paper on it right now.


Enigmas of Chance

I just finished reading Enigmas of Chance, the autobiography of mathematician Mark Kac (1914-1984). Not the most gripping book I’ve read, but it was sufficiently interesting to keep my attention for its quick 156 pages. Kac had some interaction with physics – he is best known for the Feynman-Kac formula and for other work in statistical physics, such as the “spherical model” for studying phase transitions (related to the Ising model).

One notable tidbit: Kac taught Weinberg and Glashow in the standard first-year graduate math methods course.

I learned that the “normal law” (Gaussian distribution) shows up mathematically in places that have nothing to do with “randomness.” Kac, along with Erdős, proved that the Gaussian shows up in the context of prime numbers. Using {\nu (m)} to represent the number of distinct prime factors of some integer {m}, they proved that the proportion of integers {m} for which

\displaystyle \log \log m + a \sqrt{2 \log \log m} < \nu (m) < \log \log m + b \sqrt{2 \log \log m}

can be found by integrating the Gaussian

\displaystyle \frac{1}{\sqrt{\pi}} e^{-x^2}

from {a} to {b}.
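This is easy to play with numerically. The Python sketch below (function names mine) computes {\nu(m)} with a sieve and measures the proportion directly; since {\frac{1}{\sqrt{\pi}} \int_a^b e^{-x^2}\, dx = \frac{1}{2}(\text{erf}(b) - \text{erf}(a))}, you can compare against `(math.erf(b) - math.erf(a)) / 2`. Be warned that the convergence is log-log slow, so the agreement is only rough for any limit you can actually sieve:

```python
import math

def omega_sieve(limit):
    """nu[m] = number of distinct prime factors of m, for m <= limit.

    If no smaller prime has touched nu[p], then p is prime; add 1
    to every multiple of p."""
    nu = [0] * (limit + 1)
    for p in range(2, limit + 1):
        if nu[p] == 0:
            for multiple in range(p, limit + 1, p):
                nu[multiple] += 1
    return nu

def erdos_kac_fraction(limit, a, b):
    """Fraction of 3 <= m <= limit satisfying
    loglog m + a*sqrt(2 loglog m) < nu(m) < loglog m + b*sqrt(2 loglog m).
    (We start at m = 3 so that log log m is real.)"""
    nu = omega_sieve(limit)
    hits = 0
    for m in range(3, limit + 1):
        ll = math.log(math.log(m))
        if ll + a * math.sqrt(2 * ll) < nu[m] < ll + b * math.sqrt(2 * ll):
            hits += 1
    return hits / (limit - 2)
```

For instance, `erdos_kac_fraction(10**5, -1, 1)` can be compared with `(math.erf(1) - math.erf(-1)) / 2`, which is about 0.843.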

This quote, from pg. 111, could potentially be applied to some areas of physics:

“I have often claimed that if a subject is robust it should be insensitive to its foundations, and therefore too great an emphasis on the latter tends to produce a misleading slant. Concern about foundations should come, if at all, after one has a firm intuitive grasp of the subject.”

Having finished Kac’s autobiography, I started Halmos’s (I Want to be a Mathematician). For some reason I currently find autobiographies of mathematicians appealing.

Null solutions to Maxwell equations

I saw this interesting paper last week (available on the arXiv as 1302.0342).  Maybe someone with more knowledge about such things can answer a question: do such solutions to the Maxwell equations exist in different dimensions?  By different dimensions I mean just trying to extend electromagnetism by changing the number of spatial dimensions (maybe using exterior calculus or something makes this easy, but I don’t know exterior calculus).

Some quotes from reading

“I don’t see any realistic prospect that anthropic or statistical selection arguments – applied to a single sample! – will ever lead to anything comparable in intellectual depth or numerical precision to the greatest and most characteristic achievements of theoretical physics and astrophysics, such as (for example) the prediction of electron and muon anomalous magnetic moments, the calculation of the hadron spectrum, or the enabling of GPS, celestial navigation, and interpretation of pulsar timing.”

“There is a danger that selection effects will be invoked prematurely or inappropriately, and choke off the search for deeper, more consequential explanations of observed phenomena. To put it crudely, theorists can be tempted to think along the lines “If people as clever as us haven’t explained it, that’s because it can’t be explained – it’s just an accident.” ”


from “Multiversality” by Frank Wilczek, available here.


I’m trying to get a grasp of thermodynamics as advance preparation for taking statistical mechanics this fall. It is turning out to be more challenging than I expected – there are these little points in derivations which do not seem at all obvious to me.  For example, it was interesting to see that the zeroth law establishes thermodynamic equilibrium as an equivalence relation over the set of thermodynamic states, but going from this to the existence of empirical temperature seemed to pull some things out of thin air, or at least make some assumptions that didn’t seem to be rigorously justified.  But I guess there will always be some of this, since classical thermodynamics is essentially phenomenological.

Always remember: “Heat is not a type of energy. It is a process – a mode of transfer of energy. There is no sense in which we can divide up the energy E(P,V) of a system into heat and work.” (David Tong).