The Washington Post ran this nice column by Alan Lightman the other day, in which Lightman points out that the recent LIGO discovery is the result of a long, sustained research program going back decades. He argues that the modern go-go, instant-gratification culture is the antithesis of this, and he hints (but does not say explicitly) that we need to worry about whether we are rewarding the right things in research.

There is no question that we have arrived at a scientific research culture that, at least at the level of individual investigators (as opposed to large collaborations), tends to reward rapid progress and nimbleness. One of the more damning comments that can appear in a proposal or paper review is that some piece of work or proposed research idea is "incremental" or "just the next logical step". It is hard for me to see how, in the present environment, an individual investigator could get sustained federal funding to work on a single extremely ambitious, long-timescale project unless there were many high-impact milestones along the way. The right answer is probably that we should support a mixed portfolio of research with inherently different timescales for payoff, but the continuing trend toward short-termism (how many "research products" came out in the last reporting period? how fast is a promotion candidate's h-index growing year over year?) is not comforting.

## Thursday, February 25, 2016

## Monday, February 22, 2016

### Light-induced superconductivity

The physics of systems driven out of equilibrium remains a frontier topic, and as new techniques involving ultrafast lasers are enabled, exciting developments have been coming along. Light-induced (possible) superconductivity is one example that has gotten a lot of attention lately. Several years ago, the group of Andrea Cavalleri started with a *non*-superconducting copper oxide material closely related to the high-*T*_{c} superconductors. By smacking this system with a light pulse intended to disrupt an intervening phase ("stripe order", a kind of spontaneous modulation of the charge density in the material into stripes), they were able to get the material to have an optical response that looked just like that of a superconducting cuprate, at least on the picosecond timescale.

Around this same time, Cavalleri and others pointed out that carefully tailored light pulses could also be created that would couple to particular vibrational modes of crystals. That way, again on the picosecond timescale, one could imagine reaching into a crystal and distorting it transiently. The Max Planck group made use of this approach in a very deliberate way. There is a trend toward higher superconducting transition temperatures in the cuprate superconductors as particular bonds within the lattice are distorted due to the overall crystal structure. What these folks did was hit a cuprate, YBa_{2}Cu_{3}O_{6.5}, with a pulse designed to transiently distort the bonds even more in the favorable-for-superconductivity direction. Again, they found an optical response indicating strengthened superconductivity below *T*_{c}, and above *T*_{c}, optical signatures similar to those of superconductivity all the way up to room temperature! Subsequent ultrafast x-ray diffraction measurements indicated that the lattice really was distorting as desired in those experiments.

The very recent attention has resulted from this paper, where this group has again optically driven some lattice modes, inducing signatures in the optical response that look very much like superconductivity well above *T*_{c}, this time in K_{3}C_{60}. Interestingly, in this case, when the material is already superconducting, it doesn't seem like the optical pulse enhances the superconductivity.
All of this is very cool, though it's important to remember that these are transient effects, and on the timescales so far it is extremely difficult to perform any other measurements (e.g., non-optical ones, like magnetometry) that would independently test for superconductivity. Still, this is an impressive strategy, and beyond nonequilibrium physics it strongly suggests that greater control over material structure (than what we have so far been able to achieve) could pay enormous dividends.

## Tuesday, February 16, 2016

### The end of Moore's Law

There is a nice article at *Nature* this past week talking about the possible impending end of Moore's Law. The quick summary: It's really looking like we are approaching the end of one of Moore's laws (that the number of transistors on a chip doubles roughly every 18 months). Bear in mind that the endurance of this form of Moore's law is not an accident - the growth of transistor density transformed from an empirical observation made by Moore to a growth target adopted by the semiconductor industry decades ago.

There are many reasons why continued aggressive transistor scaling is difficult. I write about these at some length in my book. Clearly we are starting to approach the limit where devices are so small that atomic-scale differences in geometry and composition can start to affect performance. Power density, even when transistors are nominally "off", is becoming a major problem. This is one reason mentioned in the article why clock speeds on processors have basically stopped climbing. (Oddly, I never hear anyone mention the *other* major reason that clock speeds have plateaued at a few GHz: Going much higher in frequency makes layout and circuit design a much more difficult microwave engineering task.)

The article discusses possible radical shifts in strategy to extend the trend of increasing processor performance. These include major changes in materials (obligatory mention of graphene and two-dimensional semiconductors) and architecture (going 3d in circuit design; quantum computing; the increasingly trendy neuromorphic computing). There are also major efforts to think about computing at lower powers. While it's cool to talk about these, I have to say that the enormous economic advantage of silicon (an individual Si transistor costs an infinitesimal fraction of a cent, and we know how to make a billion of them at a time and have them all work for a decade) makes it very difficult to see how any competing material gains significant ground for a long time.
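For a sense of scale, the doubling arithmetic behind the trend is easy to sketch (a toy Python snippet; the 18-month period is the figure quoted above, and the function name is just an illustrative choice):

```python
# Transistor-count growth if density doubles every 18 months.
def doubling_factor(years, period_months=18):
    return 2 ** (12 * years / period_months)

print(doubling_factor(1.5))        # one doubling period -> exactly 2.0
print(round(doubling_factor(10)))  # a decade -> roughly a hundredfold
```

Sustained exponentials like this are why even a modest slowdown in the doubling period compounds into enormous differences over a decade.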

## Monday, February 08, 2016

### Brief news items

As I go on some travel, here are some news items that looked interesting to me:

- Rumors are really heating up that LIGO has spotted gravitational waves. The details are similar to some things I'd heard, for what that's worth, though that may just mean that everyone is hearing the same rumors. **Update**: A press conference is coming (though they may just say that the experiment is running well....)
- The starship *Enterprise* is undergoing a refit.
- This paper reports a photocatalytic approach involving asymmetric, oblong, core-shell semiconductor nanoparticles, plus a single Pt nanoparticle catalyst, that (under the right solution conditions) can give essentially *100% efficient* hydrogen reduction - every photon goes toward producing hydrogen gas. If the insights here can be combined with improved solution stability of appropriate nanoparticles, maybe there are ways forward for highly efficient water splitting or photoproduction of liquid fuels.
- Quantum materials are like obscenity - hard to define, but you know it when you see it.

## Sunday, February 07, 2016

### What is density functional theory? part 3 - pitfalls and perils

As I've said, DFT proves that the electron density as a function of position contains basically *all* the information about the ground state (very cool and very non-obvious). DFT has become of enormous practical use because one can use simple *noninteracting* electronic states plus the right functional (which unfortunately we can't write down in a simple, easy-to-compute closed form, but which we can approximate in various ways) to find (a very good approximation to) the true, interacting density.

So, what's the problem, beyond the obvious issues of computing efficiency and the fact that we don't know how to write down an exact form for the exchange-correlation part of the functional (basically where all the bodies are buried)?

Well, the noninteracting states that people like to use, the so-called Kohn-Sham orbitals, are seductive. It's easy to think of them as if they are "real", meaning that it's very tempting to start using them to think about *excited* states and where the electrons "really" live in those states, even though technically there is no *a priori* reason that they should be valid except as a tool to find the ground state density. This is discussed a bit in the comments here. This isn't a completely crazy idea, in the sense that the Kohn-Sham states *usually* have the right symmetries and in molecules tend to agree well with chemistry ideas about where reactions tend to occur, etc. However, there are no guarantees.
There are many approaches to doing better (e.g., statements that can be made about the lowest unoccupied orbital that let you determine not just the ground state energy but also a quantitative estimate of the gap to the lowest electronic excited state, which has enabled very good computations of energy gaps in molecules and solids; or time-dependent DFT, which looks at the general time-dependent electron density). However, you have to be very careful. Perhaps commenters will have some insights here.

The bottom line: DFT is intellectually deep, a boon to many practical calculations when implemented correctly, and so good at many things that the temptation is to treat it like a black box (especially as there are more and more simple-to-use commercial implementations) and assume it's good at everything. It remains an impressive achievement with huge scientific impact, and unless there are major advances in other computational approaches, DFT and its relatives are likely the best bet for achieving the long-desired ability to do "materials by design".
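To make the "black box" less opaque, here is a toy sketch of the self-consistent loop that sits at the heart of Kohn-Sham-style calculations. Everything here is an illustrative assumption, not a real implementation: a 1D grid, a harmonic external potential, and a crude density-dependent term standing in for the Hartree and exchange-correlation pieces. The point is the structure: solve a noninteracting problem in an effective potential that itself depends on the density, and iterate until the density stops changing.

```python
import numpy as np

n_grid, n_electrons, mix = 200, 2, 0.3
x = np.linspace(-5.0, 5.0, n_grid)
dx = x[1] - x[0]
v_ext = 0.5 * x ** 2  # external (harmonic) potential, toy units

# Kinetic energy operator via second-order finite differences
T = (np.diag(np.full(n_grid, 2.0))
     - np.diag(np.ones(n_grid - 1), 1)
     - np.diag(np.ones(n_grid - 1), -1)) / (2.0 * dx ** 2)

density = np.zeros(n_grid)
for iteration in range(200):
    # Crude density-dependent term stands in for Hartree + exchange-correlation
    v_eff = v_ext + 1.0 * density
    _, eigvecs = np.linalg.eigh(T + np.diag(v_eff))
    # Occupy the lowest noninteracting orbitals (grid-normalized)
    psi = eigvecs[:, :n_electrons] / np.sqrt(dx)
    new_density = np.sum(psi ** 2, axis=1)
    change = np.max(np.abs(new_density - density))
    # Linear mixing of old and new densities keeps the loop stable
    density = (1.0 - mix) * density + mix * new_density
    if change < 1e-6:
        break

print("iterations used:", iteration + 1)
print("electrons on the grid:", np.sum(density) * dx)  # should be close to n_electrons
```

A real code differs in almost every detail (3D, real functionals, clever basis sets, smarter mixing), but the self-consistency structure is the same, and getting each of those details right is exactly the "implemented correctly" caveat above.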

## Thursday, February 04, 2016

### What is density functional theory? part 2 - approximations

So, DFT contains a deep truth: Somehow just the electronic density as a function of position within a system in its lowest energy state contains, latent within it, basically *all* of the information about that ground state. This is the case even though you usually think that you should need to know the actual complex electronic wavefunction \(\Psi(\mathbf{r})\), and the density (\(\Psi^{*}\Psi\)) seems to throw away a bunch of information.

Moreover, thanks to Kohn and Sham, there is actually a procedure that lets you calculate things using a formalism where you can ignore electron-electron interactions and, in principle, get arbitrarily close to the *real* (including interaction corrections) density. In practice, life is not so easy. We don't actually know how to write down a readily computable form of the complete Kohn-Sham functional. Some people have very clever ideas about trying to finesse this, but it's hard, especially since the true functional is actually *nonlocal* - it somehow depends on correlations between the density (and its spatial derivatives) at different positions. In our seating chart analogy, we know that there's a procedure for finding the true optimal seating even without worrying about the interactions between people, but we don't know how to write it down nicely. The correct procedure involves looking at whether each seat is empty or full, whether its neighboring seats are occupied, and even potentially the coincident occupation of groups of seats - this is what I mean by *nonlocal*. (Fig. from here.)

We could try a simplifying *local* approximation, where we only care about whether a given chair is empty or full. (If you try to approximate using a functional that depends only on the local density, you are doing LDA, the local density approximation.) We could try to be a bit more sophisticated, and worry about whether a chair is occupied and how much the occupancy varies in different directions. (If you try to incorporate the local density and its gradient, you are doing GGA, the generalized gradient approximation.) There are other, more complicated procedures that add in additional nonlocal bits - if done properly, this is rigorous. The real art in this business is understanding which approximations are best in which regimes, and how to compute things efficiently.

So how good can this be? An example is shown in the figure (from a summer school talk by my friend Leeor Kronik). The yellow points indicate (on both axes) the experimental values of the ionization energies of the various organic molecules shown. The other symbols show different calculated ionization energies plotted vs. the experimental values. A particular mathematical procedure with a clear theoretical justification (read the talk for details) that mixes long-range and short-range contributions gives the points labeled with asterisks, which show very good agreement with the experiments.
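The local-vs-gradient distinction can be made concrete with the textbook example: the Thomas-Fermi kinetic energy functional depends only on the density at each point, while a von Weizsäcker-style correction also looks at the local gradient. Here is a sketch (Python; 1D, with the physical prefactors replaced by toy constants for illustration):

```python
import numpy as np

def local_functional(n, dx):
    """LDA-style: the integrand depends only on the density at each point
    (Thomas-Fermi kinetic energy form, prefactor set to 1)."""
    return np.sum(n ** (5.0 / 3.0)) * dx

def gradient_corrected(n, dx, cg=0.1):
    """GGA-style: also look at how rapidly the density varies locally
    (a von Weizsaecker-like gradient term, again with toy prefactors)."""
    grad = np.gradient(n, dx)
    return local_functional(n, dx) + cg * np.sum(grad ** 2 / (n + 1e-12)) * dx

x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]
n_smooth = np.exp(-x ** 2)          # broad, gently varying density
n_peaked = 2 * np.exp(-4 * x ** 2)  # same total "charge", sharply peaked

# The two profiles hold the same number of electrons...
print(np.trapz(n_smooth, x), np.trapz(n_peaked, x))
# ...but the functionals tell them apart, and the gradient term
# penalizes the sharply varying profile more heavily.
print(gradient_corrected(n_smooth, dx), gradient_corrected(n_peaked, dx))
```

In the seating analogy: the first functional only counts how crowded each chair is; the second also notices when occupancy changes abruptly from one chair to the next.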

Next time: The conclusion, with pitfalls, perils, and general abuses of DFT.

## Tuesday, February 02, 2016

### What is density functional theory? part 1.

In previous posts, I've tried to introduce the idea that there can be "holistic" approaches to solving physics problems, and I've attempted to give a lay explanation of what a functional is (short version: a functional is a function of a function - it chews on a whole function and spits out a number). Now I want to talk about density functional theory, an incredibly valuable and useful scientific advance ("easily the most heavily cited concept in the physical sciences"), yet one that is basically invisible to the general public.
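Since "a function of a function" is the crux, here is a minimal sketch in Python (the function name and the particular integral are illustrative choices, not anything standard):

```python
import numpy as np

def toy_functional(f, a=0.0, b=1.0, n=1001):
    """A functional: chews on a whole function f and spits out one number,
    here the integral of f(x)^2 over [a, b], evaluated on a grid."""
    x = np.linspace(a, b, n)
    return np.trapz(f(x) ** 2, x)

# Different input functions map to different single numbers:
print(toy_functional(np.sin))       # integral of sin(x)^2 on [0, 1]
print(toy_functional(lambda x: x))  # integral of x^2 on [0, 1], about 1/3
```

The key point for what follows: the input is an entire function, not a single value, and the output is one number that can be minimized.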

Let me try an analogy. You're trying to arrange the seating for a big banquet, and there are a bunch of constraints: Alice wants very much to be close to the kitchen. Bob also wants to be close to the kitchen. However, Alice and Bob both want to be as far from all other people as possible. Etc. Chairs can't be on top of each other, but you still need to accommodate the full guest list. In the end you are going to care about the answers to certain questions: How hard would it be to push two chairs closer to each other? If one person left, how much would all the chairs need to be rearranged to keep everyone maximally comfortable? You could imagine solving this problem by brute force - write down all the constraints and try satisfying them one person at a time, though every person you add might mean rearranging all the previously seated people. You could also imagine solving this by some trial-and-error method, where you guess an initial arrangement, and make adjustments to check and see if you've improved how well you satisfy everyone. However, it doesn't look like there's any clear, immediate strategy for figuring this out and answering the relevant questions.

The analogy of DFT here would be three statements:

1. You'd probably be pretty surprised if I told you that if I gave you the final seating positions of the people in the room, that would completely specify and nail down the answer to any of those questions up there that you could ask about the room.
2. There is a math procedure (a functional that depends on the positions of all of the people in the room that can be minimized) to find that unique seating chart.
3. Even more amazingly, there is some mock-up of the situation where we don't have to worry about the people-people interactions directly, yet (minimizing a functional of the positions of the non-interacting people) would still give us the full seating chart, and therefore let us answer all the questions.

For a more physicsy example: Suppose you want to figure out the electronic properties of some system. In something like hydrogen gas, H_{2}, maybe we want to know where the electrons are, how far apart the atoms like to sit, and how much energy it takes to kick out an electron - these are important things to know if you are a chemist and want to understand chemical reactions, for example. Conceptually, this is easy: In principle we know the mathematical rules that describe electrons, so we should be able to write down the relevant equations, solve them (perhaps with a computer if we can't find nice analytical solutions), and we're done. In this case, the equation of interest is the time-independent form of the Schroedinger equation. There are two electrons in there, one coming from each hydrogen atom. One tricky wrinkle is that the two electrons don't just feel an attraction to the protons, but they also repel each other - that makes this an "interacting electron" problem. A second tricky wrinkle is that the electrons are fermions. If we imagine swapping (the quantum numbers associated with) two electrons, we have to pick up a minus sign in the math representation of their quantum state. We do know how to solve this problem (two interacting electrons plus two much heavier protons) numerically to a high degree of accuracy. Doing this kind of direct solution gets prohibitively difficult, however, as the number of electrons increases.

So what do we do? DFT tells us:

1. If you actually knew the total electron density as a function of position, \(n(\mathbf{r})\), that would completely determine the properties of the electronic ground state. This is the first Hohenberg-Kohn theorem.
2. There is a unique functional \(E[n(\mathbf{r})]\) for a given system that, when minimized, will give you the correct density \(n(\mathbf{r})\). This is the second Hohenberg-Kohn theorem.
3. You can set up a system where, with the right functional, you can solve a problem involving *noninteracting* electrons that will give you the true density \(n(\mathbf{r})\). That's the Kohn-Sham approach, which has actually made this kind of problem solving practical.

The observations by Hohenberg and Kohn are very deep. Somehow *just the electronic density* encodes a whole lot more information than you might think, especially if you've had homework experience trying to solve many-body quantum mechanics problems. The electronic density somehow contains *complete* information about all the properties of the lowest energy many-electron state. (In quantum language, knowing the density everywhere in principle specifies the expectation value of *any* operator you could apply to the ground state.)

The advance by Kohn and Sham is truly great - it describes an actual procedure that you can carry out to really calculate those ground state properties. The Kohn-Sham approach and its refinements have created the modern field of "quantum chemistry".

More soon....
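The "minimize a functional" idea can be sketched concretely for a textbook-solvable case: a single particle in a 1D harmonic well (in units where \(\hbar = m = \omega = 1\)), using Gaussian trial wavefunctions of adjustable width. The Gaussian family and grid parameters here are illustrative choices; the point is that scanning a family of whole functions and keeping the lowest energy recovers the exact ground state.

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 2001)

def energy(width):
    """Energy functional E[psi] for a normalized Gaussian trial state:
    E = (1/2) * integral |psi'|^2 dx + (1/2) * integral x^2 |psi|^2 dx."""
    psi = np.exp(-x ** 2 / (2.0 * width ** 2))
    psi /= np.sqrt(np.trapz(psi ** 2, x))  # normalize the trial state
    dpsi = np.gradient(psi, x)
    kinetic = 0.5 * np.trapz(dpsi ** 2, x)
    potential = 0.5 * np.trapz(x ** 2 * psi ** 2, x)
    return kinetic + potential

# Trial-and-error over the family of widths; keep the lowest energy.
widths = np.linspace(0.5, 2.0, 151)
energies = [energy(w) for w in widths]
best = widths[int(np.argmin(energies))]
print(best, min(energies))  # minimum near width 1, energy near 1/2 (the exact answer)
```

This is the banquet "trial-and-error" strategy in miniature; the Hohenberg-Kohn theorems are the (much deeper) statement that an analogous minimization over densities works for interacting electrons.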
