Friday, August 18, 2017

Invited symposia/speaker deadlines, APS 2018

For those readers who are APS members, a reminder.  The deadline for nominations for invited symposia for the Division of Condensed Matter Physics for the 2018 March Meeting is August 31.   See here.

Likewise, the Division of Materials Physics has its invited speaker (within focus topics) deadline of August 29.  See here.

Please nominate!

Power output of a lightsaber

It's been a long week and lots of seriously bad things have been in the news.   I intend to briefly distract myself by looking at that long-standing question: what is the power output of a lightsaber?  I'm talking about the power output when the lightsaber is actually slicing through something, not just looking cool.  We can get a very rough, conservative estimate from the documentary video evidence before us.  I choose not to use the prequels, on general principle, though the scene in The Phantom Menace when Qui-Gon Jinn cuts through the blast door would be a good place to start.  Instead, let's look at The Force Awakens, where Kylo Ren throws a tantrum and slices up an instrument panel, leaving behind dripping molten metal.  

With each low-effort swing of his arm, Ren's lightsaber, with a diameter of around 2 cm, moves at something more than 2 m/s, slicing metal to a depth of, say, 3 cm (actually probably deeper than that).  That is, the cutting part of the blade is sweeping out a volume of around 1200 cc/sec.  It is heating that volume of console material up to well above its melting point, so we need to worry about the energy it takes to heat the solid from room temperature (300 K) up to its melting point, and then the heat of fusion required to melt the material.  At a rough guess, suppose Imperial construction is aluminum.  Aluminum has a specific heat of 0.9 J/g-K, a density of 2.7 g/cc when solid, a melting point of 933 K, and a heat of fusion of 10.7 kJ/mol.  In terms of volume, that's (10.7 kJ/mol)(1 mol/27 g)(2.7 g/cc) = 1070 J/cc.  So, the total power is around (933 K - 300 K)(0.9 J/g-K)(2.7 g/cc)(1200 cc/sec) + (1070 J/cc)(1200 cc/sec) = 3.1e6 J/s = 3.1 MW.  
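
Just to make the arithmetic transparent, here's the same estimate as a few lines of Python; every input is one of the rough guesses quoted above, not a measured quantity, so treat the answer as order-of-magnitude:

```python
# Lightsaber power estimate from the numbers in the text.

blade_diameter = 0.02   # m, ~2 cm blade
swing_speed = 2.0       # m/s, a low-effort swing
cut_depth = 0.03        # m, depth of the cut

volume_rate = blade_diameter * swing_speed * cut_depth * 1e6   # cc/s; ~1200

# Aluminum (guessing at Imperial construction materials)
c_p = 0.9                        # J/(g K), specific heat
rho = 2.7                        # g/cc, solid density
T_melt, T_room = 933.0, 300.0    # K
h_fus = (10.7e3 / 27.0) * rho    # J/cc: (10.7 kJ/mol)(1 mol / 27 g)(2.7 g/cc)

power = volume_rate * ((T_melt - T_room) * c_p * rho + h_fus)   # W
print(f"{power / 1e6:.1f} MW")   # ~3.1 MW
```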

Hot stuff.

Thursday, August 10, 2017

That's the way the ball bounces.

How does a ball bounce?  Why does a ball, dropped from some height onto a flat surface, not bounce all the way back up to its starting height?  The answers to these questions may seem obvious, but earlier this week, this paper appeared on the arxiv, and it does a great job of showing what we still don't understand about this everyday physics that is directly relevant for a huge number of sports.

The paper talks specifically about hollow or inflated balls.  When a ball is instantaneously at rest, mid-bounce, its shape has been deformed by its interaction with the flat surface.  The kinetic energy of its motion has been converted into potential energy, tied up in a combination of the elastic deformation of the skin or shell of the ball and the compression of the gas inside the ball.  (One surprising thing I learned from that paper is that high speed photography shows that the non-impacting parts of such inflated balls tend to remain spherical, even as part of the ball deforms flat against the surface.)  That gas compression is quick enough that heat transfer between the gas and the ball is probably negligible.  A real ball does not bounce back to its full height; equivalently, the ratio \(v_{f}/v_{i}\) of the ball's speed immediately after the bounce, \(v_{f}\), to that immediately before the bounce, \(v_{i}\), is less than one.  That ratio is called the coefficient of restitution.
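
As a concrete aside, the coefficient of restitution translates directly into how high successive bounces go: since drop and rebound speeds obey \(v^{2} = 2gh\), each bounce returns the ball to a fraction \(e^{2}\) of its previous height.  A minimal sketch (the value e = 0.85 below is just an illustrative number, not one from the paper):

```python
# Rebound heights for a ball with coefficient of restitution e.
# Since v^2 = 2 g h, the height picks up a factor e**2 per bounce.

def rebound_heights(h0, e, n_bounces):
    """Heights (m) after successive bounces, for drop height h0 and restitution e."""
    heights = []
    h = h0
    for _ in range(n_bounces):
        h *= e ** 2
        heights.append(h)
    return heights

print([f"{h:.2f}" for h in rebound_heights(1.0, 0.85, 3)])   # ~0.72, 0.52, 0.38 m
```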

Somehow in the bounce process some energy must've been lost from the macroscopic motion of the ball, and since we know energy is conserved, that energy must eventually show up as disorganized, microscopic energy of jiggling atoms that we colloquially call heat.   How can this happen?

  • The skin of the ball might not be perfectly elastic - there could be some "viscous losses" or "internal friction" as the skin deforms.
  • As the ball impacts the surface, it can launch sound waves into the surface that eventually dissipate.
  • Similarly, the skin of the ball itself can start vibrating in a complicated way, eventually damping out to disorganized jiggling of the atoms.
  • As the ball's skin hits the ground and deforms, it squeezes air out from beneath the ball; the speed of that air can actually exceed the speed of sound in the surrounding medium (!), creating a shock wave that dissipates by heating the air, as well as ordinary sound vibrations.  (It turns out that clapping your hands can also create shock waves!  See here and here.)
  • There can also be irreversible acoustic processes in the gas inside the ball that heat the gas in there.
This paper goes through all of these, estimates how big those effects are, and concludes that, for many common balls (e.g., basketballs), we actually don't understand the relative importance of these different contributions.  The authors propose some experiments to figure out what's going on.  The whole thing is a nice exercise in mechanics and elasticity, and it's always fun to realize that there may still be some surprises lurking in the physics of the everyday.

Saturday, August 05, 2017

Highlights from Telluride

Here are a few highlights from the workshop I mentioned.  I'll amend this over the next couple of days as I have time.  There is no question that smaller meetings (this one was about 28 people) can be very good for discussions.
  • I learned that there is a new edition of Cuevas and Scheer that I should pick up.  (The authors are Juan Carlos Cuevas and Elke Scheer, a great theorist/experimentalist team-up.)
  • Apparently it's possible to make a guitar amplifier using tunnel junctions made from self-assembled monolayers.  For more detail, see here.
  • Some folks at Aachen have gotten serious about physics lab experiments you can do with your mobile phone.
  • Richard Berndt gave a very nice talk about light emission from atomic-scale junctions made with a scanning tunneling microscope.  Some of that work has been written about here and here.  A key question is, when a bias of \(eV\) is applied to such a junction, what is the mechanism that leads to the emission of photons of energies \(\hbar \omega > eV\)?  Clearly the processes involve multiple electrons, but exactly how things work is quite complicated, involving both the plasmonic/optical resonances of the junction and the scattering of electrons at the atomic-scale region.  Two relevant theory papers are here and here.
  • Latha Venkataraman showed some intriguing new results indicating room temperature Coulomb blockade-like transport in nanoclusters.  (It's not strictly Coulomb blockade, since the dominant energy scale seems to be set by single-particle level spacing rather than by the electrostatic charging energy of changing the electronic population by one electron).
  • Katharina Franke showed some very pretty data on single porphyrins measured via scanning tunneling microscope, as in here.  Interactions between the tip and the top of the molecule result in mechanical deformation of the molecule, which in turn tunes the electronic coupling between the transition metal in the middle of the porphyrin and the substrate.  This ends up being a nice system for tunable studies of Kondo physics.
  • Uri Peskin explained some interesting recent results that were just the beginning of some discussions about what kind of photoelectric responses one can see in very small junctions.  One recurring challenge:  multiple mechanisms that seem to be rather different physics can lead to similar experimentally measurable outcomes (currents, voltages).
  • Jascha Repp discussed some really interesting experiments combining STM and THz optics, to do true time-resolved measurements in the STM, such as watching a molecule bounce up and down on a metal surface (!).  This result is timely (no pun intended), as this remarkable paper just appeared on the arxiv, looking at on-chip ways of doing THz and faster electronics.
  • Jeff Neaton spoke about the ongoing challenge of using techniques like density functional theory to calculate and predict the energy level alignment between molecules and surfaces to which they're adsorbed or bonded.  This is important for transport, but also for catalysis and surface chemistry broadly.  A relevant recent result is here.
  • Jan van Ruitenbeek talked about their latest approach to measuring shot noise spectra in atomically small structures up to a few MHz, and some interesting things that this technique has revealed to them at high bias.  
  • There were multiple theory talks looking at trying to understand transport, inelastic processes, and dissipation in open, driven quantum systems.  Examples include situations where higher driving biases can actually make cooling processes more efficient; whether it's possible to have experiments in condensed matter systems that "see" many-body localization, an effect most explored in cold atom systems; using ballistic effects in graphene to do unusual imaging experiments or make electronic "beam splitters"; open systems from a quantum information point of view; what we mean by local effective temperature on very small scales; and new techniques for transport calculations. 
  • Pramod Reddy gave a really nice presentation about his group's extremely impressive work measuring thermal conduction at the atomic scale.  Directly related, he also talked about the challenges of measuring radiative heat transfer down to nm separations, where the Stefan-Boltzmann approach should be supplanted by near-field physics.  This was a very convincing lesson in how difficult it is to ensure that surfaces are truly clean, even in ultrahigh vacuum.
  • Joe Subotnik's talk about electronic friction was particularly striking to me, as I'd been previously unaware of some of the critical experiments (1, 2).  When and how do electron-hole excitations in metals lead to big changes in the vibrational energy content of molecules, and how should we think about this?  These issues are related to these experiments as well.  
  • Ron Naaman spoke about chiral molecules and how electron transfer to and from these objects can have surprising, big effects (see here and here).
  • Gemma Solomon closed out the proceedings with a very interesting talk about whether molecules could be used to make effective insulating layers better at resisting tunneling current than actual vacuum, and a great summary of the whole research area, where it's been, and where it's going.

Thursday, August 03, 2017

Workshop on quantum transport

Blogging has been slow because of travel.  I'm attending a workshop on "Quantum transport in nanoscale molecular systems".  This is rather like a Gordon Conference, with a fair bit of unpublished work being presented, but when it's over I'll hit a few highlights that are already in the literature.  Update:  here you go.

Sunday, July 23, 2017

Several items - the arxiv, "axial-gravitational" fun, topology

Things have been a bit busy, but here are a few items that have popped up recently:
  • Symmetry magazine is generally insightful and well-written.   Recently they posted this amusing article looking at various fun papers on the arxiv.  Their first example reminds me of this classic.
  • Speaking of the arxiv, its creator, Paul Ginsparg, posted this engaging overview recently.  It's not an overstatement to say that the arxiv has had an enormous impact on science over the last 25 years.
  • There has been a huge amount of media attention on this paper (arxiv version).  The short version:  In high energy physics there is a certain conservation principle regarding chiral (meaning that the particle spin is directed along its momentum) massless fermions, so that ordinarily these particles are produced with no net excess of one handedness of spin over the other.  There is a long-standing high energy theory argument that in curved spacetime, the situation changes and you can get an excess of one handedness - a "chiral anomaly".  It is difficult to see how one could test this directly via experiment, since in our daily existence spacetime curvature is pretty minimal, unlike, say, near the event horizon of a small black hole.  However, solid state materials can provide a playground for some wild ideas.  The spatial arrangement of atoms in a crystalline solid strongly affects the dispersion relation, the relationship between energy and (the crystal analog of) momentum.  For example, the linear dispersion relation between energy and momentum in (neutral) graphene makes the electrons behave in some ways analogous to massless relativistic particles, and lets people do experiments that test the math behind things like Klein tunneling.  As a bonus, you can add in spin-orbit coupling in solids to bring spin into the picture.  In this particular example, the electronic structure of NbP is such that, once one accounts for the spatial symmetries and spin-orbit effects, and if the number of electrons in there is right, the low-energy electronic excitations are supposed to act mathematically like massless chiral fermions (Weyl fermions).  Moreover, in a temperature gradient, the math looks like that used to describe that gravitational anomaly I'd mentioned above, and this is a system where one can actually do measurements.  However, there is a lot of hype about this, so it's worth stating clearly:  gravity itself does not play a role in NbP or this experiment.  Also, I have heard concerns about the strength of the experimental interpretation, because of issues about anisotropy in the NbP material and the aspect ratio of the sample.  
  • Similarly, there is going to be a lot of media attention around this paper, where researchers have combined a material ((Cr0.12Bi0.26Sb0.62)2Te3) that acts like a kind of topological insulator (a quantum anomalous Hall insulator, to use the authors' particular language) and a superconductor (Nb).  The result is predicted to be a system with conduction around the edges, where the low-energy current-carrying excitations act like Majorana fermions, another concept originally invented in the context of high energy physics.  
  • Both of these are examples of a kind of topology mania going on in condensed matter physics these days, as described here.  This deserves a longer discussion later.  

Sunday, July 16, 2017

A thermoelectric surprise in metals

Earlier this year I'd described what thermoelectricity is, and I'd also discussed recent work of ours where we used a laser as a scannable heat source, and were then able to see clearly that changing the size of a nanoscale metal structure can vary the material's thermoelectric properties, and make a thermocouple out of a single metal.

With this same measurement technique, we found a result that we thought was rather strange and surprising, which we have written up here.   Take a moderately long wire, say 120 nm wide and several microns long, made by patterning a 15 nm thick Au film.  Hook up what is basically a voltmeter to the ends of the wire, and scan the laser spot along the length of the wire, recording the voltage as a function of the laser position.  If the wire is nice and homogeneous, you'd expect not to see too much until you get to the ends of the wire where it widens out into bigger contacts.  (There the size variation should make the skinny/wide junction act like a thermocouple.)   Instead, we see the result shown here in the figure (fig. 2 of the paper).  There is a great deal of spatial variability in the photothermoelectric voltage, as if the wire is actually made up of a whole bunch of little thermocouples!

Note that your eye tends to pick out a spatial scale in panel (a) comparable to the 1 micron scale bar.  That's a bit misleading; the spot size of the laser in our system is about 1.8 microns, so this measurement approach would not pick up variation on much smaller spatial scales.

The metal wire is polycrystalline, and if you look at the electron microscope images in panels (c, d, e) you can make out a grain structure with lateral grain sizes of 15-20 nm.  Maybe the wire isn't all that homogeneous?  One standard way physicists look at the quality of metal films is to consider the electrical resistance of a square patch of film (\(R_{\square}\), the "sheet resistance" or "resistance per square"), and compare that number with the "resistance quantum", \(R_{\mathrm{q}}\equiv h/2e^2\), a combination of fundamental constants that sets a scale for resistance.  If you had two pieces of metal touching at a single atom, the resistance between them would be around the resistance quantum.  For our wire material, \(R_{\square}\) is a little under 4 \(\Omega\), so \(R_{\square} \ll R_{\mathrm{q}}\), implying that the grains of our material are very well-connected - that it should act like a pretty homogeneous film.  This is why the variation shown in the figure is surprising.  Annealing the wires does change the voltage pattern as well as smoothing it out.  This is a pretty good indicator that the grain boundaries really are important here.  We hope to understand this better - it's always fun when a system thought to be well understood surprises you.
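
For concreteness, here's that comparison in a couple of lines of Python; the only inputs are fundamental constants and the sheet resistance quoted above:

```python
# Compare the film's sheet resistance to the resistance quantum R_q = h/(2 e^2).

h_planck = 6.626e-34   # J s, Planck's constant
e_charge = 1.602e-19   # C, electron charge

R_q = h_planck / (2.0 * e_charge ** 2)   # ~12.9 kOhm
R_square = 4.0                           # Ohm/square, a little under 4 for our Au film

print(f"R_q ~ {R_q / 1e3:.1f} kOhm; R_square / R_q ~ {R_square / R_q:.1e}")
# R_square / R_q ~ 3e-4, i.e., the grains look very well connected electrically.
```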





Friday, July 07, 2017

Two books that look fun

Two books that look right up my alley:

  • Storm in a Teacup by Helen Czerski.  Dr. Czerski is a researcher at University College London, putting her physics credentials to work studying bubbles in physical oceanography.  She also writes the occasional "everyday physics" column in the Wall Street Journal, and it's great stuff.
  • Max the Demon vs. Entropy of Doom by Assa Auerbach and Richard Codor.   Prof. Auerbach is a serious condensed matter theorist at the Technion.  This one is a Kickstarter to produce a light-hearted graphic novel that is educational without being overly mathematical.  Looks fun.  Seems like the target audience would be similar to that for Spectra.

Thursday, July 06, 2017

Science and policy-making in the US

Over twenty years ago, Congress de-funded its Office of Technology Assessment, which was meant to be a non-partisan group (somewhat analogous to the Congressional Budget Office) that was to help inform congressional decision-making on matters related to technology and public policy.  The argument at the time of the de-funding was that it was duplicative - that there are other federal agencies (e.g., DOE, NSF, NIH, EPA, NOAA) and bodies (the National Academies) that are capable of providing information and guidance to Congress.   In addition, there are think-tanks like the Rand Corporation, IDA, and MITRE, though those groups need direction and a "customer" for their studies.   Throughout this period, the executive branch at least had the Office of Science and Technology Policy, headed by the Presidential Science Advisor, to help in formulating policy.  The level of influence of OSTP and the science advisor waxed and waned depending on the administration.   Science is certainly not the only component of technology-related policy, nor even the dominant one, but for the last forty years (OSTP's existence) and arguably going back to Vannevar Bush, there has been broad bipartisan agreement that science should at least factor into relevant decisions.

We are now in a new "waning" limit, where all of the key staff offices at OSTP are vacant, and there seems to be no plan or timeline to fill them.     The argument from the administration, articulated here, is that OSTP was redundant and that its existence is not required for science to have a voice in policy-making within the executive branch.   While that is technically true, in the sense that the White House can always call up anyone they want and ask for advice, removing science's official seat at the table feels like a big step.  As I've mentioned before, some things are hard to un-do.   Wiping out OSTP for at least the next 3.5 years would send a strong message, as would gutting the science boards of agencies.   There will be long-term effects, both in actual policy-making, and in continuity of knowledge and the pipeline of scientists and engineers interested in and willing to devote time to this kind of public service.   (Note that there is a claim from an unnamed source that there will be a new OSTP director, though there is no timeline.)

Thursday, June 29, 2017

Condensed matter/nano resources for science writers and journalists

I've been thinking about and planning to put together some resources about condensed matter physics and nanoscience that would be helpful for science writers and journalists.  Part of the motivation here is rather similar to that of doing outreach work with teachers - you can get a multiplicative effect compared to working with individual students, since each teacher interacts with many students.  Along those lines, helping science writers, journalists, and editors might have an impact on a greater pool than just those who directly read my own (by necessity, limited) writing.  I've had good exchanges of emails with some practitioners about this, and that has been very helpful, but I'd like more input from my readers.

In answer to a few points that have come up in my email discussions:

  • Why do this?  Because I'd like to see improved writing out there.  I'd like the science-interested public to understand that there is amazing, often deep physics around them all the time - that there are deep ideas at work in your iPhone or your morning cup of coffee, and that those are physics, too.  I know that high energy ("Building blocks of the universe!") and astro ("Origins of everything!  Alien worlds!  Black holes!") are very marketable.  I'd be happy to guide a little more of the bandwidth toward condensed matter/materials/real nano (not sci-fi) popularization.  I think the perception that high energy = all of physics goes a long way toward explaining why so many people (including politicians) think that basic research is pie-in-the-sky-useless, and everything else is engineering that should be funded by companies.  I do think online magazines like Quanta and sites like Inside Science are great and headed in a direction I like.  I wish IFLS was more careful, but I admire their reach.
  • What is the long-range audience and who are the stakeholders?  I'd like CMP and nano to reach a broad audience.  There are serious technically trained people (faculty, researchers, some policy makers) who already know a lot of what I'd write about, though some of them still enjoy reading prose that is well written.  I am thinking more about the educated lay-public - the people who watch Nova or Scientific American Frontiers or Mythbusters or Through The Wormhole (bleah) or Cosmos, or who read Popular Science or Discovery or Scientific American or National Geographic.  Those are people who want to know more about science, or at least aren't opposed to the idea.  I guess the stakeholders would be the part of the physics and engineering community that works on solid state and nano things, but whose members don't have the time or inclination to do serious popular communication themselves.  I think that community is often disserved by (1) the popular portrayal that high energy = all of physics and crazy speculative stuff = actual tested science; (2) hype-saturated press releases that claim breakthroughs or feel the need to promise "1000x faster computers" when real, fundamental results are often downplayed; and (3) a focus in the field that only looks at applications rather than properly explaining the context of basic research.
  • You know that journalists usually have to cover many topics and have very little time, right?  Yes.  I also know that just because I make something doesn't mean anyone will necessarily use it; hence why I'm looking for input.   Maybe something like a CM/nano FAQ would be helpful.
  • You know that long-form non-fiction writers love to do their own topical research, right?  Yes, and if there was something I could do to help those folks save time and avoid subject matter pitfalls, I'd feel like I'd accomplished something.
  • You could do more writing yourself, or give regular tips/summaries to journalists and editors via twitter, your blog, etc.  That's true, and I plan to try to do more, but as I said at the top, the point is not for me to become a professional journalist (in the sense of providing breaking news tidbits) or writer, but to do what I can to help those people who have already chosen that vocation. 
  • You know there are already pros who worry about quality of science writing and journalism, right?  Yes, and they have some nice reading material.  For example, this and this from the Berkeley Science Review; this from the Guardian; this from the National Association of Science Writers.
So, writers and editors who might read this:  what would actually be helpful to you along these lines, if anything?  Some primer material on some topics, more accessible and concise than wikipedia?


Tuesday, June 20, 2017

About grants: What are "indirect costs"?

Before blogging further about science, I wanted to explain something about the way research grants work in the US.  Consider this part of my series of posts intended to educate students (and perhaps the public) about careers in academic research.

When you write a proposal to a would-be source of research funding, you have to include a budget.  As anyone would expect, that budget will list direct costs - these are items that are clear research expenses.  Examples would include, say, $30K/yr for a graduate student's stipend, and $7K for a piece of laboratory electronics essential to the work, and $2K/yr to support travel of the student and the principal investigator (PI) to conferences.   However, budgets also include indirect costs, sometimes called overhead.  The idea is that research involves certain costs that aren't easy to account for directly, like the electricity to run the lights and air conditioning in the lab, or the costs to keep the laboratory building maintained so that the research can get done, or the (meta)costs for the university to administer the grant.  

So, how does the university figure out how much to tack on for indirect costs?  For US federal grants, the magic (ahem) is all hidden away in OMB Circular A21 (wiki about it, pdf of the actual doc).  Universities periodically go through an elaborate negotiation process with the federal government (see here for a description of this regarding MIT), and determine an indirect cost rate for that university.  The idea is you take a version of the direct costs ("modified total direct costs" - for example, a piece of equipment that costs more than $5K is considered a capital expense and not subject to indirect costs) and multiply by a negotiated factor (in the case of Rice right now, 56.5%) to arrive at the indirect costs.  The cost rates are lower for research done off campus (like at CERN), with the argument that this should be cheaper for the university.  (Effective indirect cost rates at US national labs tend to be much higher.)
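
To make the bookkeeping concrete, here's a sketch in Python using the hypothetical budget items listed above and the 56.5% rate quoted for Rice; a real proposal budget has many more categories (fringe rates, subawards with their own rules, etc.), so this is illustrative only:

```python
# Sketch of modified-total-direct-cost (MTDC) indirect cost arithmetic.
# Items are (name, cost, is_capital_equipment); equipment over $5K is
# excluded from MTDC, per the discussion above.

CAPITAL_THRESHOLD = 5000.0
INDIRECT_RATE = 0.565          # Rice's negotiated rate, per the text

budget = [
    ("grad student stipend", 30000.0, False),
    ("lab electronics",       7000.0, True),   # capital expense: no indirect charged
    ("travel",                2000.0, False),
]

total_direct = sum(cost for _, cost, _ in budget)
mtdc = sum(cost for _, cost, is_equip in budget
           if not (is_equip and cost > CAPITAL_THRESHOLD))
indirect = INDIRECT_RATE * mtdc

print(f"direct ${total_direct:,.0f} + indirect ${indirect:,.0f} "
      f"= ${total_direct + indirect:,.0f} requested")
# direct $39,000 + indirect $18,080 = $57,080 requested
```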

Foundations and industry negotiate different rates with universities.  Foundations usually limit their indirect cost payments, arguing that they just can't afford to pay at the federal level.  The Bill and Melinda Gates Foundation, for example, only allows (pdf) 10% for indirect costs.   The effective indirect rate for a university, averaged over the whole research portfolio, is always quite a bit lower than the nominal A21 negotiated rate.  Vice provosts/presidents/chancellors for research at major US universities would be happy to explain at length that indirect cost recovery doesn't come close to covering the actual costs associated with doing university-based research.  

Indirect cost rates in the US are fraught with controversy, particularly now.  The current system is definitely complicated, and reasonable people can ask whether it makes sense (and adds administrative costs) to have every university negotiate its own rate with the feds.   It remains to be seen whether there are changes in the offing.

Saturday, June 17, 2017

Interesting reading material

Summer travel and other activities have slowed blogging, but I'll pick back up again soon.  In the meantime, here are a couple of interesting things to read:

  • Ignition!  An Informal History of Liquid Rocket Propellants (pdf) is fascinating, if rather chemistry-heavy.  Come for discussions of subtle side reactions involved in red fuming nitric acid slowly eating its storage containers and suggested (then rejected) propellants like dimethyl mercury (!!), and stay for writing like, "Miraculously, nobody was killed, but there was one casualty — the man who had been steadying the cylinder when it split. He was found some five hundred feet away, where he had reached Mach 2 and was still picking up speed when he was stopped by a heart attack."  This is basically the story from start to finish (in practical terms) of the development of liquid propellants for rockets.   That book also led me to stumble onto this library of works, most of which are waaaaay too chemistry-oriented for me.  Update:  for a directly relevant short story, see here.
  • Optogenetics is the technique of using light to control biological activity (most famously, triggering the firing of neurons in cells engineered to express light-sensitive proteins).  More recently, there is a big upswing in the idea of magnetogenetics, using magnetic fields to somehow do similar things.  One question at play is: what is the physical mechanism whereby magnetic fields can really do much at room temperature, since magnetic effects tend to be weak?  (Crudely speaking, the energy scale of visible photons is eV, much larger than the thermal energy scale of \(k_{\mathrm{B}}T \sim\) 26 meV, and readily able to excite vibrations or drive electronic excitations.  However, the energy of one electron spin in a reasonably accessible magnetic field of 1 Tesla is \(g \mu_{\mathrm{B}}B \sim\) 0.1 meV.  A quick numerical version of this comparison appears right after this list.)  Here is a nice survey article about the constraints on how magnetogenetics could operate.
  • For a tutorial in how not to handle academic promotion cases, see here.  
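
Following up the magnetogenetics item above, here is that energy-scale comparison run numerically; the inputs are fundamental constants plus the representative 2 eV photon and 1 T field from the bullet:

```python
# Energy scales relevant to magnetogenetics: optical photons vs. thermal
# energy vs. the Zeeman energy of a single electron spin in a 1 T field.

k_B = 1.381e-23     # J/K, Boltzmann constant
mu_B = 9.274e-24    # J/T, Bohr magneton
e = 1.602e-19       # C, for converting J to eV

E_photon_eV = 2.0                        # a visible photon
E_thermal_eV = k_B * 300.0 / e           # k_B T at room temperature
E_zeeman_eV = 2.0 * mu_B * 1.0 / e       # g ~ 2, B = 1 T

print(f"photon: {E_photon_eV:.1f} eV")
print(f"k_B T:  {E_thermal_eV * 1e3:.0f} meV")
print(f"Zeeman: {E_zeeman_eV * 1e3:.2f} meV")   # ~0.12 meV, tiny by comparison
```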

Tuesday, June 06, 2017

Follow-up: More 5nm/7nm/10nm transistors

Two years ago I wrote this post about IBM's announcement of "7 nm transistors", and it still gets many pageviews every week.   So, what's been going on since then?

I encourage you to read this article from Semiconductor Engineering - it's a nice, in-depth look at what the major manufacturers are doing to get very small transistors actually to market.  It also explains what those numerical designations of size really mean, and changes in lithography technology.  (This is one part of my book that I really will have to revise at some point.)

Likewise, here is a nice article about IBM's latest announcement of "5 nm" transistors.  The writing at Ars Technica on this topic is usually of high quality.  The magic words that IBM is now using are "gate all around", which means designing the device geometry so that the gate electrode, the one that actually controls the conduction, affects the channel (where the current flows) from all sides.  In old-school planar transistors, the gate only couples to the channel from one direction.

Later I will write a long, hopefully accessible article about transistors, as this year is the 70th anniversary of the Bell Labs invention that literally reshaped global technology.

Thursday, June 01, 2017

What should everyone know about physics?

A couple of weeks ago, Sean Carroll made an offhand remark (the best kind) on twitter about what every physics major should know.  That prompted a more thoughtful look at our expectations for physics majors by Chad Orzel, with which I broadly agree, as does ZapperZ, who points out that most physics majors don't actually go on to be physics PhDs (and that's fine, btw.)  

Entertaining and thought-provoking as this was, it seems like it's worth having a discussion among practicing physicist popularizers about what we'd like everyone to know about physics.  (For the pedants in the audience, by "everyone" I'm not including very young children, and remember, this is aspirational.  It's what we'd like people to know, not what we actually expect people to know.)  I'm still thinking about this, but here are some basic ingredients that make the list, including some framing topics about science overall.
  • Science is a reason- and logic-based way to look at the natural world; part of science is figuring out "models" (ways of describing the natural world and how it works) that have explanatory (retrodictive) and predictive power.   
  • Basic science is about figuring out how the world works - figuring out the "rules of the game".  Engineering is about using that knowledge to achieve some practical goal.  The line is very blurry; lots of scientists are motivated by and think about eventual applications.
  • Different branches of science deal with different levels of complexity and have their own vocabularies.  When trying to answer a scientific question, it's important to use the appropriate level of complexity and the right vocabulary.  You wouldn't try to describe how a bicycle works by starting with the molecular composition of the grease on the chain....
  • Physics in particular uses mathematics as its language and a tool.  You can develop good intuition for how physics works, but to make quantitative predictions, you need math.
  • Ultimately, observation and experiment are the arbiters of whether a scientific model/theory is right or wrong, scientifically.  "Right" means "agrees with observation/experiment whenever checked", "wrong" means "predicts results at odds with reality".  Usually this means the model/theory needs an additional correction, or has only a limited range of applicability.  (Our commonplace understanding of how bicycles work doesn't do so well at speeds of thousands of miles an hour.  That doesn't mean we don't understand how bikes work at low speeds; it means that additional effects have to be considered at very high speeds.)
  • There are many branches of physics - it's not all particle physics and astrophysics, despite the impression you might get from TV or movies.
  • Physics explains light in all its forms (the microwaves that heat your food; the radio waves that carry your wifi and cell phone traffic; the light you see with your eye and which carries your internet data over fibers; the x-rays that can go through your skin; and the really high energy gamma rays that do not, in fact, turn you into an enormous green ragemonster).  
  • Physics includes not just looking at the tiniest building blocks of matter, but also understanding what happens when those building blocks come together in very large numbers - it can explain that diamond is hard and transparent, how/why water freezes and boils, and how little pieces of silicon in your computer can be used to switch electric current.  Physics provides a foundation for chemistry and biology, but in those fields often it makes much more sense to use chemistry and biology vocabulary and models.  
  • Quantum mechanics can be unintuitive and weird, but that doesn't mean it's magic, and it doesn't mean that everything we don't understand (e.g., consciousness) is deeply connected to quantum physics.  Quantum is often most important at small scales, like at the level of individual atoms and molecules.  That's one reason it can seem weird - your everyday world is much larger.
  • Relativity can also be unintuitive and weird - that's because it's most important at speeds near the speed of light, and again those are far from your everyday experience.   
  • We actually understand a heck of a lot of physics, and that's directly responsible for our enormous technological progress in the last hundred and fifty years.
  • Physicists enjoy being creative and speculative, but good and honest ones are careful to point out when they're hand waving or being fanciful. 
I'll add more to this list over time, but that's a start.... 

Wednesday, May 24, 2017

Hot electrons and a connection to thermoelectricity

The two recent posts about the Seebeck effect and hot electrons give some context so that I can talk about a paper we published last month.

We started out playing around with metal nanowires, and measuring the open-circuit voltage (that is, hooking up a voltmeter across the device, which nominally doesn't allow current to flow) across those wires as a function of where we illuminated them with a near-IR laser.  Because the metal absorbs some of the light, that laser spot acts like a local heat source (though figuring out the temperature profile requires some modeling of the heat transfer processes).   As mentioned here, particles tend to diffuse from hot locations to cold locations; in an open circuit, a voltage builds up to balance out this tendency, because in the steady state no net current flows in an open circuit; and in a metal, the way electron motion and scattering depend on the energy of the electrons gives you the magnitude and sign of this process.   If the metal is sufficiently nanoscale that boundary scattering matters, you end up with a thermoelectric response that depends on the metal geometry.  The end result is shown in the left portion of the figure.  If you illuminate the center of the metal wire, you measure no net voltage - you shouldn't, because the whole system is symmetric.  The junction where the wire fans out to a bigger pad acts like a thermocouple because of that boundary scattering, and if you illuminate it you get a net thermoelectric voltage (sign depends on how you pick ground and which end you're illuminating).   Bottom line:  Illumination heats the electrons a bit (say a few Kelvin), and you get a thermoelectric voltage because of that, to offset the tendency of the electrons to diffuse due to the temperature gradient.  In this system, the size of the effect is small - microvolts at our illumination conditions.

Now we can take that same nanowire, and break it to make a tunnel junction somewhere in there - a gap between the two electrodes where the electrons are able to "tunnel" across from one side to the other.  When we illuminate the tunnel junction, we now see open-circuit photovoltages that are much larger, and very localized to the gap region.  So, what is going on here?  The physics is related, but not true thermoelectricity (which assumes that it always makes sense to define temperature everywhere).   What we believe is happening is something that was discussed theoretically here, and was reported in molecule-containing junctions here.   As I said when talking about hot electrons, when light gets absorbed, it is possible to kick electrons way up in energy.  Usually that energy gets dissipated by being spread among other electrons very quickly.  However, if hot electrons encounter the tunnel junction before they've lost most of that energy, they have a higher likelihood of getting across the tunnel junction, because quantum tunneling is energy-dependent.  Producing more hot electrons on one side of the junction than the other will drive a tunneling current.  We still have an open circuit, though, so some voltage has to build up so that the net current in the steady state adds up to zero.  Bottom line:  Illumination here can drive a "hot" electron tunneling current, and you get a photovoltage to offset that process.  This isn't strictly a thermoelectric effect because the electrons aren't thermally distributed - it's the short-lived high energy tail that matters most.
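
To see why the high-energy tail matters so much, consider the textbook square-barrier picture, where the WKB transmission probability grows exponentially as the electron energy approaches the barrier height.  The sketch below is a cartoon with generic barrier parameters (a 5 eV, work-function-scale barrier, 1 nm wide), not a model of our actual junctions:

```python
# WKB transmission through a square tunnel barrier, illustrating how strongly
# tunneling favors "hot" electrons.  Barrier height and width are generic
# illustrative values, not fits to any real device.

import math

hbar = 1.055e-34   # J s
m_e = 9.109e-31    # kg, electron mass
eV = 1.602e-19     # J per eV

def transmission(E_eV, phi_eV=5.0, d_nm=1.0):
    """Transmission ~ exp(-2 kappa d) for energy E below barrier height phi."""
    kappa = math.sqrt(2.0 * m_e * (phi_eV - E_eV) * eV) / hbar   # 1/m
    return math.exp(-2.0 * kappa * d_nm * 1e-9)

for E in (0.0, 0.5, 1.0, 1.5):   # energies above the Fermi level, in eV
    print(f"E = {E:.1f} eV: T ~ {transmission(E):.1e}")
# A carrier 1.5 eV "hot" tunnels ~40x more readily than one at the Fermi level.
```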

It's fun to think about ways to try to better understand and maximize such effects, perhaps for applications in photodetection or other technologies....

Friday, May 19, 2017

What are "hot" electrons?

In basic chemistry or introductory quantum mechanics, you learn about the idea of energy levels for electrons.  If you throw a bunch of electrons into some system, you also learn about the ground state, the lowest energy state of the whole system, where the electrons fill up* the levels from the bottom up, in accord with the Pauli principle.   In statistical physics, there are often a whole lot of energy levels and a whole lot of electrons (like \(10^{22}\) per cc), so we have to talk about distribution functions, and how many electrons are in the levels with energies between \(E\) and \(E + dE\).   In thermal equilibrium (meaning our system of interest is free to exchange energy in the form of heat with some large reservoir described by a well-defined temperature \(T\)), the distribution of electrons as a function of energy is given by the Fermi-Dirac distribution.
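
For reference, here is the Fermi-Dirac distribution as a function; the numbers just illustrate how sharply the occupation switches from one to zero over an energy window of a few \(k_{\mathrm{B}}T\) around the chemical potential:

```python
# Fermi-Dirac occupation of a level at energy E (eV) relative to the chemical
# potential mu, at temperature T.

import math

k_B = 8.617e-5   # eV/K, Boltzmann constant

def fermi_dirac(E_minus_mu, T=300.0):
    """Mean occupation of a single-particle level in thermal equilibrium."""
    return 1.0 / (math.exp(E_minus_mu / (k_B * T)) + 1.0)

for dE in (-0.2, -0.05, 0.0, 0.05, 0.2):
    print(f"E - mu = {dE:+.2f} eV: f = {fermi_dirac(dE):.3f}")
# ~1 well below mu, 1/2 at mu, ~0 well above: the "filled Fermi sea" picture.
```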

So, what are "hot" electrons?  If we have a system driven out of equilibrium, it's possible to have the electrons arranged in a non-thermal (non-FD distribution!) way.  Two examples are of particular interest at the nanoscale.  In a transistor, say, or other nanoelectronic device, it is possible to apply a voltage across the system so that \(eV >> k_{\mathrm{B}}T\) and inject charge carriers at energies well above the thermally distributed population.  Often electron-electron scattering on the 10-100 fs timescale redistributes the energy across the electrons, restoring a thermal distribution at some higher effective temperature (and on longer timescales, that energy cascades down into the vibrations of the lattice).  Electrons in a metal like Au at the top of the distribution are typically moving at speeds of \(\sim 10^{6}\) m/s (!!), so that means that near where the current is injected, on distance scales like 10-100 nm, there can be "hot" electrons well above the FD distribution.  

The other key way to generate "hot" electrons is by optical absorption.  A visible photon (perhaps a green one with an energy \(\hbar \omega\) of 2 eV) can be absorbed by a metal or a semiconductor, and this can excite an electron at an energy \(\hbar \omega\) above the top of the FD distribution.  Often, on the 10-100 fs timescale, as above, that energy gets redistributed among many electrons, and then later into the lattice.  That's heating by optical absorption.  In recent years, there has been an enormous amount of interest in trying to capture and use those hot electrons or their energy before there is a chance for that energy to be converted to heat.  See here, for instance, for thoughts about solar energy harvesting, or here for a discussion of hot electron photochemistry.  Nanoscale systems are of great interest in this field for several reasons, including the essential fact that hot electrons generated in them can access the system surface or boundary in the crucial timespan before energy relaxation.

(Talking about this and thermoelectricity now sets the stage so I can talk about our recent paper in an upcoming post.)

*Really, the whole many-body electron wavefunction has to be antisymmetric under the exchange of any two electrons, so it's wrong to talk as if one particular electron is sitting in one particular state, but let's ignore that for now.  Also, in general, the energy levels of the many-electron system actually depend on the number and arrangement of the electrons in the system (correlation effects!), but let's ignore that, too.

Tuesday, May 16, 2017

More coming, soon.

I will be posting more soon.  I'm in the midst of finally shifting my group webpage to a more modern design.  In the meantime, if there are requests for particular topics, please put them in the comments and I'll see what I can do.

Update:  Victory.  After a battle with weird permissions issues associated with the way Rice does webhosting, it's up here:  natelson.web.rice.edu/group.html

Still a few things that should be updated and cleaned up (including my personal homepage), but the major work is done.

Tuesday, May 09, 2017

Brief items

Some interesting items of note:

  • Gil Refael at Caltech has a discussion going on the Institute for Quantum Information and Matter blog about the content of "modern physics" undergraduate courses.  The dilemma as usual is how to get exciting, genuinely modern physics developments into an already-packed undergrad curriculum.  
  • The variety and quality of 3D-printed materials continue to grow and impress.  Last month a team of folks from Karlsruhe demonstrated very nice printing of (after some processing) fused silica.  Then last week I ran across this little toy.  I want one.  (Actually, I want to know how much they cost without getting on their sales engineer call list.)  We very recently acquired one of these at Rice for our shared equipment facility, thanks to generous support of the NSF MRI program.   There are reasons to be skeptical that additive manufacturing will scale in such a way as to have enormous impact, but it sure is cool and making impressive progress.
  • There is a news release about our latest paper that has been picked up by a few places, including the NSF's electronic newsletter.  I'll write more about that very soon.
  • The NSF and the SRC are having a joint program in "SemiSynBio", trying to work at the interface of semiconductor devices and synthetic biology to do information processing and storage.  That's some far out stuff for the SRC - they're usually pretty conservative.
  • Don Lincoln has won the AIP's 2017 Gemant Award for his work presenting science to the public - congratulations!  You have likely seen his videos put out by Fermilab - they're frequently featured on ZapperZ's blog.

Friday, May 05, 2017

What is thermoelectricity?

I noticed I'd never written up anything about thermoelectricity, and while the wikipedia entry is rather good, it couldn't hurt to have another take on the concept.   Thermoelectricity is the mutual interaction of the flow of heat and the flow of charge - this includes creating a voltage gradient by applying a temperature gradient (the Seebeck effect) and driving a heating or cooling thermal flow by pushing an electrical current (the Peltier effect).  Recently there have been new generalizations, like using a temperature gradient to drive a net accumulation of electronic spin (the spin Seebeck effect).

First, the basic physics.  To grossly oversimplify, all other things being equal, particles tend to diffuse from hot locations to cold locations.  (This is not entirely obvious in generality, at least not to me, from our definitions of temperature or chemical potential, and clearly in some situations there are still research questions about this.  There is certainly a hand-waving argument that hotter particles, be they molecules in a gas or electrons in a solid, tend to have higher kinetic energies, and therefore tend to diffuse more rapidly.  That's basically the argument made here.)

Let's take a bar of a conductor and force there to be a temperature gradient across it.  The mobile charge carriers will tend to diffuse away from the hot end.  Moreover, there will be a net flux of lattice vibrations (phonons) away from the hot end.  Those phonons can also tend to scatter charge carriers - an effect called phonon drag.   For an isolated bar, though, there can't be any net current, so a voltage gradient develops such that the drift current balances out the diffusion tendency.  This is the Seebeck effect, and the Seebeck coefficient is the constant of proportionality between the temperature gradient and the voltage gradient.   If you hook up two materials with different (known) Seebeck coefficients as shown, you make a thermocouple and can use the thermoelectric voltage generated as a thermometer.
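
In the simplest limit (temperature-independent Seebeck coefficients), the thermocouple voltage is just the difference of the two coefficients times the temperature difference.  A minimal sketch, using typical literature values for a chromel/alumel (type K) pair rather than anything specific to this post:

```python
# Open-circuit voltage of a two-metal thermocouple, assuming constant Seebeck
# coefficients.  S values are representative numbers for type K wire.

S_chromel = 21.7e-6   # V/K (assumed representative value)
S_alumel = -17.3e-6   # V/K (assumed representative value)

def thermocouple_voltage(T_hot, T_cold, S_a=S_chromel, S_b=S_alumel):
    """V = (S_a - S_b) * (T_hot - T_cold) for junctions at T_hot and T_cold."""
    return (S_a - S_b) * (T_hot - T_cold)

V = thermocouple_voltage(400.0, 300.0)
print(f"{V * 1e3:.1f} mV across a 100 K difference")   # ~3.9 mV, i.e. ~39 uV/K
```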

Ignoring the phonon drag bit, the Seebeck coefficient depends on particular material properties - the sign of the charge carriers (thermoelectric measurements are one way to tell if your system is conducting via electrons or holes, leading to some dramatic effects in quantum dots), and the energy dependence of their conductivity (which has wrapped up in it the band structure of the material and extrinsic factors like the mean free path for scattering off impurities and boundaries).

Because of this dependence on extrinsic factors, it is possible to manipulate the Seebeck coefficient through nanoscale structuring or alteration of materials.  Using boundary scattering as a tuning parameter for the mean free path is enough to let you make thermocouples just by controlling the geometry of a single metal.  This has been pointed out here and here, and in our own group we have seen those effects here.   Hopefully I'll have time to write more on this later....

(By the way, as I write this, Amazon is having some kind of sale on my book, at $19 below publisher list price.  No idea why or how long that will last, but I thought I'd point it out.  I'll delete this text when that expires.)

Monday, April 24, 2017

Quantum conduction in bad metals, and jphys+

I've written previously about bad metals.  We recently published a result (also here) looking at what happens to conduction in an example of such a material at low temperatures, when quantum corrections to conduction (like these) should become increasingly important.   If you're interested, please take a look at a blog post I wrote about this that is appearing on jphys+, the very nice blogging and news/views site run by the Institute of Physics.

Sunday, April 23, 2017

Thoughts after the March for Science

About 10000 people turned out (according to the Houston Chronicle) for our local version of the March for Science.   Observations:

  • While there were some overtly partisan participants and signs, the overarching messages that came through were "We're all in this together!", "Science has made the world a better place, with much less disease and famine, a much higher standard of living for billions, and a greater understanding of the amazingness of the universe.", "Science does actually provide factual answers to properly formulated scientific questions", and "Facts are not opinions, and should feed into policy decisions, rather than policy positions altering what people claim are facts."
  • For a bunch of people often stereotyped as humorless, scientists had some pretty funny, creative signs.  A personal favorite:  "The last time scientists were silenced, Krypton exploded!"  One I saw online:  "I can't believe I have to march for facts."
  • Based on what I saw, it's hard for me to believe that this would have the negative backlash that some were worrying about before the event.  It simply wasn't done in a sufficiently controversial or antagonistic way.  Anyone who would have found the messages in the first point above to be offensive and polarizing likely already had negative perceptions of scientists, and (for good or ill) most of the population wasn't paying much attention anyway.
So what now?

  • Hopefully this will actually get more people who support the main messages above to engage, both with the larger community and with their political representatives.  
  • It would be great to see some more scientists and engineers actually run for office.  
  • It would also be great if more of the media would get on board with the concept that there really are facts.  Policy-making is complicated and must take into account many factors about which people can have legitimate disagreements, but that does not mean that every statement has two sides.  "Teach the controversy" is not a legitimate response to questions of testable fact.  In other words, Science is Real.
  • Try to stay positive and keep the humor and creativity flowing.  We are never going to persuade a skeptical, very-busy-with-their-lives public if all we do is sound like doomsayers.

Thursday, April 20, 2017

Ready-to-peel semiconductors!

Every now and then there is an article that makes you sit up and say "Wow!"  

Epitaxy is the growth of crystalline material on top of a substrate with a matching (or very close to it) crystal structure.  For example, it is possible to grow InAs epitaxially on top of GaSb, or SiGe epitaxially on top of Si.  The idea is that the lattice of the underlying material guides the growth of the new layers of atoms, and if the lattice mismatch isn't too bad and the conditions are right, you can get extremely high quality growth (that is, with nearly perfect structure).  The ability to grow semiconductor films epitaxially has given us a ton of electronic devices that are everywhere around us, including light emitting diodes, diode lasers, photodiodes, high mobility transistors, etc.   Note that when you grow, say, AlGaAs epitaxially on a GaAs substrate, you end up with one big crystal, all covalently bonded.  You can't readily split off just the newly grown material mechanically.  If you did homoepitaxy, growing GaAs on GaAs, you likely would not even be able to figure out where the substrate ended and the overgrown film began.

In this paper (sorry about the Nature paywall - I couldn't find another source), a group from MIT has done something very interesting.  They have shown that a monolayer of graphene on top of a substrate does not screw up overgrowth of material that is epitaxially registered with the underlying substrate.  That is, if you have an atomically flat, clean GaAs substrate ("epiready"), and cover it with a single atomic layer of graphene, you can grow new GaAs on top of the graphene (!), and despite the intervening carbon atoms (with their own hexagonal lattice in the way), the overgrown GaAs will have registry (crystallographic alignment and orientation) with the underlying substrate.  Somehow the short-ranged potentials that guide the overgrowth are able to penetrate through the graphene.  Moreover, after you've done the overgrowth, you can actually peel off the epitaxial film (!!), since it's only weakly van der Waals bound to the graphene.  They demonstrate this with a variety of overgrown materials, including a III-V semiconductor stack that functions as an LED.  

I found this pretty amazing.  It suggests that there may be real opportunities for using layered van der Waals materials to grow new and unusual systems, perhaps helping with epitaxy even when lattice mismatch would otherwise be a problem.  I suspect the physics at work here (chemical interactions from the substrate "passing through" overlying graphene) is closely related to this work from several years ago.

Wednesday, April 19, 2017

March for Science, April 22

There has been a great deal written by many (e.g., 1 2 3 4 5 6) about the upcoming March for Science.  I'm going to the Houston satellite event.  I respect the concern that such a march risks casting scientists as "just another special interest group", or framing scientists as a group as leftists who are reflexively opposed to the present US administration.  Certainly some of the comments from the march's nominal twitter feed are (1) overtly political, despite claims that the event is not partisan; and (2) not just political, but rather extremely so.

On balance, though, I think that the stated core messages (science is not inherently partisan; science is critical for the future of the country and society; policy making about relevant issues should be informed by science) are important and should be heard by a large audience.   If the argument is that scientists should just stay quiet and keep their heads down, because silence is the responsible way to convey objectivity, I am not persuaded.  

Friday, April 14, 2017

"Barocalorics", or making a refrigerator from rubber

People have spent a lot of time and effort in trying to control the flow and transfer of heat.  Heat is energy transferred in a disorganized way among many little degrees of freedom, like the vibrations of atoms in a solid or the motion of molecules in a gas.  One over-simplified way of stating how heat likes to flow:  Energy tends to be distributed among as many degrees of freedom as possible.  The reason heat flows from hot things to cold things is that tendency.  Manipulating the flow of heat then really all comes down to manipulating ways for energy to be distributed.

Refrigerators are systems that, with the help of some externally supplied work, take heat from a "cold" side, and dump that heat (usually plus some additional heat) to a "hot" side.  For example, in your household refrigerator, heat goes from your food + the refrigerator inner walls (the cold side) into a working fluid, some relative of freon, which boils.  That freon vapor gets pumped through coils; a fan blows across those coils and (some of) the heat is transferred from the freon vapor to the air in your kitchen.   The now-cooler freon vapor is condensed and pumped (via a compressor) and sent back around again.  

There are other ways to cool things, though, than by running a cycle using a working fluid like freon. For example, I've written before about magnetic cooling.  There, instead of using the motion of liquid and gas molecules as the means to do cooling, heat is made to flow in the desired directions by manipulating the spins of either electrons or nuclei.  Basically, you can use a magnetic field to arrange those spins such that it is vastly more likely for thermal energy to come out of the jiggling motion of your material of interest, and instead end up going into rearranging those spins.

[Figure: Stretching a polymer tends to heat it, due to the barocaloric effect.  Adapted from Chauhan et al., doi:10.1557/mre.2015.17]
It turns out, you can do something rather similar using rubber.  The key is something called the elasto-caloric or barocaloric effect - see here (pdf!) for a really nice review.  The effect is shown in the figure, adapted from that paper.   An elastomer in its relaxed state is sitting there at some temperature and with some entropy - the entropy has contributions due to the jiggling around of the atoms, as well as the structural arrangement of the polymer chains.  There are lots of ways for the chains to be bunched up, so there is quite a bit of entropy associated with that arrangement.  Roughly speaking, when the rubber is stretched out quickly (so that there is no time for heat to flow in or out of the rubber), those chains straighten, and the structural piece of the entropy goes down.  To make up for that, the kinetic contribution to the entropy goes up, showing up as an elevated temperature.  Quickly stretch rubber and it gets warmer.  A similar thing happens when rubber is compressed instead of stretched.

So, you could imagine running a refrigeration cycle based on this!  Stretch a piece of rubber quickly; it gets warmer (\(T \rightarrow T + \Delta T\)).  Allow that heat to leave while in the stretched state (\(T + \Delta T \rightarrow T\)).  Now release the rubber quickly, so no heat can flow.  The rubber will now end up colder than the initial \(T\); energy will tend to rearrange itself out of the kinetic motion of the atoms and into crumpling up the polymer chains.  The now-cold rubber can be used to cool something.  Repeat the cycle as desired.  It's a pretty neat idea.  Very recently, this preprint showed up on the arxiv, showing that a common silicone rubber, PDMS, is great for this sort of thing.  Imagine making a refrigerator out of the same stuff used for soft contact lenses!  These effects tend to have rather limited useful temperature ranges in most elastomers, but it's still funky.
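To get a feel for the size of the effect, here is a minimal sketch of the adiabatic temperature change; the specific heat and entropy numbers are rough assumptions for a generic elastomer, not values taken from the preprint.

# In a fast (adiabatic) stretch the total entropy is fixed, so the drop in
# the chains' configurational entropy is balanced by a rise in the thermal
# (kinetic) entropy, giving dT ~ T * |ds_config| / c_p.

T = 300.0         # starting temperature, K
c_p = 1.8e3       # specific heat of a generic elastomer, J/(kg K) (assumed)
ds_config = 30.0  # configurational entropy released by the stretch, J/(kg K) (assumed)

dT = T * ds_config / c_p
print(f"adiabatic temperature rise on stretching: ~{dT:.0f} K")   # ~5 K

A few kelvin per stretch is roughly the scale reported for common elastomers, which is part of why the useful temperature spans are so limited.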


Monday, April 10, 2017

Shrinkage - the physics of shrink rays

It's a trope that's appeared repeatedly in science fiction:  the shrink ray, a device that somehow takes ordinary matter and reduces it dramatically in physical size.  Famous examples include Fantastic Voyage, Honey I Shrunk the Kids, Innerspace, and Ant Man.  This particular post was inspired partly by my old friend Rob Kutner's comic series Shrinkage, where tiny nanotech-using aliens take over the mind of the (fictitious) President, with the aim of turning the world into a radioactive garden spot for themselves.  (Hey Rob - your critters thrive on radioactivity, yet if they're super small, they're probably really inefficient at capturing that radiation.  Whoops.)  Coincidentally, this week there was an announcement about a film option for Michael Crichton's last book, in which some exotic (that is to say, mumbo jumbo) "tensor field" is used to shrink people.

It's easy to enumerate many problematic issues that should arise in these kinds of stories:
  • Do the actual atoms of the objects/people shrink?  
  • If so, even apart from how that's supposed to work, what do these people breathe?  (At least Ant Man has a helmet that could be hand-waved to shrink air molecules....)  Or eat/drink?
  • What about biological scaling laws?
  • If shrunken objects keep their mass, that means a lot of these movies don't work.  Think about that tank that Hank Pym carries on his keychain....  If they don't keep their mass, where does that leave the huge amounts of energy (\(mc^2\)) that would have to be accounted for?
  • How can these people see if their eyes and all their cones/rods become much smaller than the wavelength of light?
  • The dynamics of interacting with a surrounding fluid medium (air or water) are completely different for very small objects - a subject explored at length by Purcell in "Life at Low Reynolds Number"; see the quick estimate after this list. 
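As a quick illustration of that last point, here is a minimal sketch comparing Reynolds numbers before and after a 1000x shrink.  The speeds and sizes are invented but plausible, and I've assumed the shrunken swimmer's speed scales down along with their size.

rho, mu = 1.0e3, 1.0e-3   # density (kg/m^3) and viscosity (Pa s) of water

def reynolds(v, L):
    # Re = rho * v * L / mu, the ratio of inertial to viscous forces
    return rho * v * L / mu

print(reynolds(v=1.0, L=2.0))     # full-size swimmer: Re ~ 2e6, inertia dominates
print(reynolds(v=1e-3, L=2e-3))   # 1000x-shrunken swimmer: Re ~ 2e-3, viscosity dominates

At \(Re \ll 1\) there is no coasting: stop swimming and you stop moving almost immediately, which is exactly Purcell's point.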
The only attempt I've ever seen in science fiction to discuss some kind of real physics that would have to be at work in a shrink ray was in Isaac Asimov's novel Fantastic Voyage II.   One way to think about this is that the size of atoms is set by a competition between the electrostatic attraction between the electrons and the nucleus, and the puffiness forced by the uncertainty principle.  The typical size scale of an atom is given by the Bohr radius, \( a_{0} \equiv (4 \pi \epsilon_{0} \hbar^{2})/(m_{\mathrm{e}}e^{2}) \), where \(m_{\mathrm{e}} \) is the mass of the electron, and \(e\) is the electronic charge.   Shrinking actual atoms would require rejiggering some fundamental natural constants.  For example, you could imagine shrinking atoms by cranking up the electronic charge (and hence the attractive force between the electron and the nucleus).  That would have all kinds of other consequences, however - such as screwing up chemistry in a big way.  

Of course, if we want to keep the appearances that we see in movies and TV, then somehow the colors of shrunken objects have to remain what they were at full size.   That would require the typical energy scale for optical transitions in atoms, for example, to remain unchanged.  That is, the Rydberg energy \( \equiv  m_{\mathrm{e}}e^4/(8 \epsilon_{0}^2 h^2) \) would have to stay constant.  Satisfying these constraints is very tough.  Asimov's book takes the idea that the shrink ray messes with Planck's constant, and I vaguely recall some discussion about altering c as well.  
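To see how tightly constrained this is, here is a minimal sketch that uses only the proportionalities above (the numerical prefactors cancel out); the scaling exercise is mine, not anything from Asimov's book.

def scalings(lam_e=1.0, lam_m=1.0, lam_hbar=1.0):
    # multiplicative changes in a_0 ~ hbar^2/(m_e e^2) and Ry ~ m_e e^4/hbar^2
    # when e, m_e, and hbar are rescaled by the given factors
    a0_factor = lam_hbar**2 / (lam_m * lam_e**2)
    ry_factor = lam_m * lam_e**4 / lam_hbar**2
    return a0_factor, ry_factor

print(scalings(lam_e=10.0))                # (0.01, 10000.0): atoms shrink 100x, but every color shifts
s = 0.01                                   # desired shrink factor for a_0
print(scalings(lam_e=s**0.5, lam_hbar=s))  # (0.01, 1.0): atoms shrink 100x, Rydberg unchanged

Note what the second line demands: to shrink atoms while holding the Rydberg fixed, you end up reducing the electron charge and scaling down \(\hbar\) together - at least consistent in spirit with Asimov having the shrink ray mess with Planck's constant.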

While shrink rays (and their complement) are great fun in story-telling, they're much more in the realm of science fantasy than true science fiction....


Friday, March 31, 2017

Site recommendation: Inside Science

A brief post in a busy time:  If you like well-written, even-handed journalistic discussions of science, I strongly recommend checking out Inside Science, an editorially independent, non-profit science news service affiliated with the American Institute of Physics.   The writing is engaging and of consistently high quality, and I'm glad it's supported by underwriters so that it's not dependent on clicks/ad revenue.

Tuesday, March 28, 2017

What is Intel's Optane memory?

Intel has developed a new product, dubbed Optane: a memory technology that supposedly combines the speed of conventional DRAM with the nonvolatility of flash memory.   It would appear that the devices function as a form of "3d crosspoint" memory, where the functionality is all in a blob of material at the crossing point between two wires (a bit line and a word line).  Depending on the particular voltage pulse applied to the junction, the blob of material ends up with either a high electrical resistance or a low electrical resistance, corresponding to the two different states of a binary bit.   The upsides here are that information is stored in a material property rather than as charge (making it non-volatile, probably radiation hard, probably uninfluenced by magnetic fields, etc.), and that the devices can be packed very densely, since they need fewer transistors, etc. than conventional DRAM.
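Just to make the crosspoint architecture concrete, here is a toy model of writing and reading such an array.  This is a cartoon of the general idea, emphatically not Intel's actual device physics, and the resistance and voltage values are invented.

LOW_R, HIGH_R = 1e3, 1e6   # junction resistances in ohms (invented values)

class CrosspointArray:
    def __init__(self, rows, cols):
        # every word-line/bit-line junction starts in the high-R ("0") state
        self.r = [[HIGH_R] * cols for _ in range(rows)]

    def write(self, row, col, bit):
        # a real device applies a set/reset voltage pulse to the junction;
        # here we just record the resulting resistance state
        self.r[row][col] = LOW_R if bit else HIGH_R

    def read(self, row, col, v_read=0.2):
        # bias the selected word line, sense the bit-line current, and
        # threshold that current to recover the stored bit
        current = v_read / self.r[row][col]
        return 1 if current > v_read / (10 * LOW_R) else 0

arr = CrosspointArray(4, 4)
arr.write(2, 3, 1)
print(arr.read(2, 3), arr.read(0, 0))   # prints: 1 0

(Real crosspoint arrays also have to fight "sneak" currents through unselected junctions, which is one reason selector elements get added at each crossing.)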

There are multiple different mechanisms to achieve that kind of response from the mysterious blob of material.  For example, you can have a material that changes structural phase under current flow, between two different structures with different electrical properties.  You can have a material where some redox chemistry takes place, switching on or off a conductive filament.  You can have a material where other redox chemistry takes place, along with the migration of oxygen vacancies, to create or destroy a conductive filament (as in the HP implementation of memristors, which I've written about before).  You could use magnetic data storage of some sort, with spin transfer torque driving switching of some giant magnetoresistive or tunneling magnetoresistive device.

Breathless articles like this one from this week make some pretty bold claims for Optane's performance.  However, no one outside Intel seems to know what it actually is.  There has been speculation.  Intel's CEO says it's based on actual bulk changes in the electrical properties of the mysterious material.  Intel categorically denies that it is based on phase changes, "memristor" approaches, or spin transfer torque.  

Well, now that they are actually shipping chips, it's only a matter of time before someone cuts one open and reverse-engineers what is actually in there.  So, we have only a little while to speculate wildly or place bets.  Please go for it in the comments, or chime in if you have an informed perspective!  Personally, I suspect it really is some form of bias-driven chemical alteration of the material, whether or not that gets called "memristor" in the HP sense of the word.  (Note that something rather analogous happened back when IBM and Intel switched to using "high-k" dielectrics in transistors.  They wouldn't say what material they'd come up with, and in the end it turned out to be (most commonly) hafnium oxynitrides.)

Wednesday, March 22, 2017

Hysteresis in science and engineering policy

I have tried hard to avoid political tracts on this blog, because I don't think that's what most people come here to read.  Political flamewars in the comments or loss of readers over differences of opinion are not outcomes I want.  The recent proposed budget from the White House, however, inspires some observations.  (I know the President's suggested budget is only the very beginning of the budgetary process, but it does tell you something about the administration's priorities.)

The second law of thermodynamics tells us that some macroscopic processes tend to run only one direction.  It's easier to disperse a drop of ink in a glass of water than to somehow reconstitute the drop once the glass has been stirred.  

In general, the response of a system to some input (say the response of a ferromagnet to an applied magnetic field, or the deformation of a blob of silly putty in response to an applied stress) can depend on the history of the material.  Taking the input from A to B and back to A doesn't necessarily return the system to its original state.  Cycling the input and ending up with a looping trajectory of the system in response because of that history dependence is called hysteresis.  This happens because the system has some inherent time scale for responding to inputs (or gets stuck in long-lived metastable configurations); if it can't keep up, the response lags and depends on history.
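For the concretely minded, here is a minimal sketch of about the simplest hysteretic system there is: a single magnetic domain that flips only when the applied field exceeds a coercive threshold.  The numbers are illustrative.

h_c = 1.0   # coercive field, illustrative units

def sweep(fields, m=-1.0):
    # the magnetization m changes only when |h| exceeds h_c;
    # otherwise it keeps whatever value its history gave it
    out = []
    for h in fields:
        if h > h_c:
            m = 1.0
        elif h < -h_c:
            m = -1.0
        out.append(m)
    return out

up = [-2.0 + 0.5 * i for i in range(9)]   # field swept from -2 to +2
print(sweep(up))                          # flips to +1 only once h passes +1
print(sweep(list(reversed(up)), m=1.0))   # on the way back down, flips at -1

The up-sweep and down-sweep give different magnetizations at the same intermediate field - that mismatch is the loop.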

The proposed budget would make sweeping changes to programs and efforts that, in some cases, took decades to put in place.   Drastically reducing the size and scope of federal agencies is not something that can simply be undone by the next Congress or the next President.  Cutting 20% of NIH or 17% of DOE Office of Science would have ripple effects for many years, and anyone who has worked in a large institution knows that big cuts are almost never restored.   Expertise at EPA and NOAA can't just be rebuilt once eliminated.  

People can have legitimate discussions and differences of opinion about the role of the government and what it should be funding.  However, everyone should recognize that these are serious decisions, many of which are irreversible in practical terms.   Acting otherwise is irresponsible and foolish.

Wednesday, March 15, 2017

APS March Meeting 2017 Day 3 - updated w/ guest post!

Hello readers - I have travel plans such that I have to leave the APS meeting after lunch today.  That means I will miss the big Kavli Symposium session.  If someone out there would like to offer to write up a bit about those talks, please email me or comment below, and I'd be happy to give someone a guest post on this.

Update:  One of my readers was able to attend the first two talks of the Kavli Symposium, by Duncan Haldane and Michael Kosterlitz, two of this year's Nobel laureates.  Here are his comments.  If anyone has observations about the remaining talks in the symposium, please feel free to email me or post in the comments below.
I basically ran from the Buckley Prize talk by Alexei Kitaev down to the big hall where Duncan Haldane was preparing to talk.  When I got there it was packed full, but I managed to squeeze into a seat in the middle section.  I sighted my postdoc near the back of the first section; he later told me he’d arrived 35 minutes early to get that seat.

I felt Haldane’s talk was remarkably clear and simple given the rarefied nature of the physics behind it.  He pointed out that condensed matter physics really changed starting in the 1980s, and conceptually is now very different from the conventional picture presented in books like Ashcroft and Mermin’s Solid State Physics that many of us learned from as students.  One prevailing idea leading up to that time was that changes in the ground state must always be related to changes in symmetry.  Haldane’s paper on antiferromagnetic Heisenberg spin chains showed that the ground state properties of the chains were drastically different depending on whether the spin at each site is integer (S=1,2,3,…) or half-integer (S=1/2, 3/2, 5/2, …), despite the fact that the Hamiltonian has the same spherical symmetry for any value of S.  This we now understand on the basis of the topological classifications of the systems.  Many of these topological classifications were later systematically worked out by Xiao-Gang Wen, who shared this year’s Buckley prize with Alexei Kitaev.  Haldane flashed a link to his original manuscript on spin chains, which he has posted on arXiv.org as https://arxiv.org/abs/1612.00076 , and which he noted was “rejected by many journals”.  He was also amused or bemused or maybe both by the fact that people referred to his ideas as “Haldane’s conjecture” rather than recognizing that he’d solved the problem.  He noted that once one understands that the topological classification determines many of the important properties, it is obvious that simplified “toy models” can give deep insight into the underlying physics of all systems in the same class.  In this regard he singled out the AKLT model, which revealed how finite chains of spin S=1 have effective S=1/2 degrees of freedom associated with each end.  These are entangled with each other no matter how long the finite chain - a remarkable demonstration of quantum entanglement over a long distance.  This also is a simple example of the special nature of surface states or excitations in topological systems. 

Kosterlitz began by pointing out that the Nobel prize was effectively awarded for work on two distinct aspects of topology in condensed matter, both of which involved David Thouless; that is why Thouless was awarded one-half of the prize, with the other half shared by Kosterlitz and Haldane.  He then relayed a bit of his own story: he started as a high energy physicist, and apparently did not get offered the position he wanted at CERN, so he ended up at Birmingham, which turned out to be remarkably fortuitous.  There he teamed with Thouless and gradually switched his interests to condensed matter physics.  They wanted to understand data suggesting that quasi-two-dimensional films of liquid helium seemed to show a phase transition despite the expectation that this should not be possible.  He then gave a very professorial exposition of the Kosterlitz-Thouless (K-T) transition, starting with the physics of vortices, and how their mutual interactions involve a potential that depends on the logarithm of the distance.  The results point to a non-zero temperature above which the free energy favors free vortices and below which vortex-antivortex pairs are bound.  He then pointed out how this is relevant to a wide variety of two-dimensional systems, including XY magnets, and also the melting of two-dimensional crystals, in which two K-T transitions occur, corresponding respectively to the unbinding of dislocations and disclinations.  
I greatly enjoyed both of these talks, especially since I have experimentally researched both spin chains and two-dimensional melting at different times in my career. 
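A postscript from me: for those who want it, the one-line version of that vortex free-energy argument (the standard textbook estimate, not a transcription of the talk) goes like this.  A single free vortex in a film of size \(R\) with vortex core size \(a\) costs energy \(E \approx \pi \rho_{s} \ln(R/a)\), where \(\rho_{s}\) is the two-dimensional superfluid stiffness (in energy units), while the entropy from the \((R/a)^{2}\) possible core positions is \(S \approx 2 k_{\mathrm{B}} \ln(R/a)\).  So

\[ F = E - TS \approx \left( \pi \rho_{s} - 2 k_{\mathrm{B}} T \right) \ln(R/a). \]

Above \(T_{\mathrm{KT}} \approx \pi \rho_{s}/(2 k_{\mathrm{B}})\) the coefficient of the logarithm goes negative, free vortices proliferate, and the quasi-long-range order is destroyed.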

APS March Meeting 2017 Day 2

Some highlights from day 2 (though I spent quite a bit of time talking with colleagues and collaborators):

Harold Hwang of Stanford gave a very nice talk about oxide materials, with two main parts.  First, he spoke about making a hot electron (metal base) transistor (pdf; Nature Materials 10, 198 (2011)) - this is a transistor device made from STO/LSMO/Nb:STO, where the LSMO layer is a metal, and the idea is to get "hot" electrons to shoot over the Schottky barrier at the STO/LSMO interface, ballistically across the metallic LSMO base, and into the STO drain.  Progress has been interesting since that paper, especially with very thin bases.  In principle such devices can be very fast. 

The second part of his talk was about trying to make free-standing ultrathin oxide layers, reminiscent of what you can see with van der Waals materials like graphene or MoS2.  To do this, they use a layer of Sr3Al2O6 - that stuff can be grown epitaxially with pulsed laser deposition on nice oxide substrates like STO, and other oxide materials (even YBCO or superlattices) can be grown epitaxially on top of it.  Sr3Al2O6 is related to a hygroscopic compound found in Portland cement, and it turns out to be water soluble (!), so you can dissolve it and lift off the layers above.  Very impressive.

Bharat Jalan of Minnesota spoke about growing BaSnO3 via molecular beam epitaxy.  This stuff is a semiconductor with a conduction band dominated by the Sn 5s states, with a low effective mass, so it tends to have pretty high mobilities.  This is an increasingly trendy wide gap oxide semiconductor that could potentially be useful for transparent electronics.  

Ivan Bozovic of Brookhaven (and Yale) gave a very compelling talk about high temperature superconductors, specifically LSCO, based on having grown thousands of extremely high quality (as assessed by the width of the transition in penetration depth measurements) epitaxial films of varying doping concentrations.   Often people assert that the cuprates, when "overdoped", basically become more conventional BCS superconductors with a Fermi liquid normal state.  Bozovic presented very convincing evidence (from pretty much the data alone, without complex models for interpretation) that this is not right - that instead these materials are weird even in the overdoped regime, with systematic property variations that don't look much like conventional superconductors at all.  In the second part of his talk, he showed clear transport evidence for electronic anisotropy in the normal state of LSCO over the phase diagram, with preferred axes in the plane that vary with temperature and don't necessarily align with crystallographic axes of the material.  Neat stuff.   

Shang-Jie Yu of Maryland spoke about work on coherent optical manipulation of phonons.  In particular, previous work from this group looked at ensembles of spherical core-shell nanoparticles in solution, and found that they could excite a radial breathing vibrational mode with an optical pulse, and then measure that breathing in a time-resolved way with probe pulses.  Now they can do more complex pulse sequences to control which vibrations get excited - very cute, and it's impressive to me that this works even with an ensemble of particles that presumably have some variation in geometry.