## Monday, December 30, 2013

### Links and thanks at the end of another year

Thank you, readers, for another year.  It is gratifying to hear from you and to meet people who have found this blog useful and informative (or at least worth noticing, which is something these days).  My posting frequency slumped quite a bit at the end of 2013 because of various writing commitments (a couple of which continue).  I'm hoping to have some more to say in the coming year, and I hope that there are opportunities out there to get some of the coolness of condensed matter and nanoscale physics to the general public as well as my more specialized readers.

• Sometimes people put amusing or snarky comments in the acknowledgments sections of papers.  Who knew?  (Full disclosure:  It's possible that the "M. Fleetwood" acknowledged in this paper is a musician.)
• Scientists and mathematicians have their own special brand of humor.  (I love the Mandelbrot joke.  This does omit one of my favorites:  What is purple and commutes?  An Abelian grape.)
• Bob Laughlin is jumping back into physics aiming to make a big splash:  He basically is arguing (see this preprint) that there really is no such thing as a Mott insulator.  That's a rather radical statement at this point, given (for example) experiments with optical lattices that seem to show that the Mott insulator appears to exist as a realizable state.  (I'm sure there are ways to argue that those experiments are not really in the thermodynamic limit and involve interactions that are not the Coulomb interaction, etc.)
• Snowflakes are still cool.  (A repeat, but worth it.)

## Friday, December 20, 2013

### I was plagiarized - what happened, and how I handled it.

Last week I received an email from a scientist asking me for a copy of one of my articles, and the correspondent pointed out:  "On a tangent it would appear that your work is highly prized by fellow researchers.  I include your work and that of an admirer.  Not sure if its worth chasing this up or not?" while attaching a copy of one of my group's papers, and a copy of a paper published here.  I was surprised to find that the other paper (published in August of 2013) was a copy-and-paste plagiarism job, approximately 70% from our paper and 30% from my former student's doctoral thesis.  This was not some minor issue of someone "borrowing" a paragraph - this was a full blown appropriation of all of the words and claims, including a discussion of what future work could be done at Rice (!) on these systems.  Nothing subtle here - no possible legitimate excuse.

The authors (a student and professor) are from some tiny college in India.  I couldn't find email addresses (or really any history of publication) from either one, but I was able to get an email address for the department head there.  I emailed him (explaining what I'd found, and how in the US I would have contacted someone at his university with a title like "research integrity officer"), and was pleasantly surprised to get a response within a day, agreeing that this was "terrible", and saying that he would take it up with the faculty member and the "proper authorities".  This morning, I received an apologetic email from the professor, placing all of the blame on his student.

Some observations:
• Even if the student submitted the paper, the professor has some culpability - his name is on it, and someone paid the publication charges (a whopping $75 US, which tells you something about the journal).
• It is abundantly clear that the paper was never seen by any reviewer of any kind.  It literally jumped, mid-sentence and in a grammatically awful way, from one spot to another in the copy/paste from our paper, skipping over all the surface chemistry stuff.  The discussion included from the thesis mentioned Rice explicitly.  No one who read this would have thought that this work was done at the home institution of the supposed authors.
• "Disappearing" a paper is wrong - a legitimate journal should retract the work, or publish an expression of concern, or something, not make it look like this never happened.
• I suppose I should be happy with the outcome (paper gone, publisher chastened, professor at least disciplined somehow by his chair), and given the lack of academic footprint of the student and professor, I'm not sure there is any point in mentioning them by name - I'd rather consign them to obscurity.
• This was so absurd (absolute copying without any attempt to hide it, in an obscure "journal") that my reaction really was one of almost amusement.  My former student's reaction was definitely more one of anger, which is understandable given that it was his thesis that was being stolen.

We'll see what comes next.  I suppose I should be flattered - this is a sign that I've made it, right?  Like some weird kind of peer review?  My group's work is worth stealing.

## Tuesday, December 10, 2013

### Another workshop + some links

It's been a very hectic end-of-semester, between classes, research, and writing.  Thank you all for your replies concerning power outages.  Perhaps recurring power problems are actually some secret motivator to get all of us to work on alternative energy research - I'll invent an arc reactor just to guarantee reliable power.
Right now, my colleagues and I here at Rice are hosting another workshop, this one on heavy fermions and quantum criticality.

For those who don't know, "heavy fermions" are found in materials where there are particular magnetic interactions between mobile charge carriers and localized (usually 4f) electrons.  Think of it this way: the comparatively localized electrons have a very flat, narrow band dispersion (energy as a function of momentum).  (In the limit of completely localized states and isolated atoms, the "band" is completely flat, since the 4f states are identical for all the same atoms.)  If you hybridize those comparatively localized electrons with the more delocalized carriers that live in s and p bands, you end up with one band of electron states that is quite flat.  Since $E \sim (\hbar^2/2m^{*})k^2$, a very flat $E(k)$ is equivalent to a really large effective mass $m^{*}$.  Heavy fermion materials can have electron-like (or hole-like) charge carriers with effective masses several hundred times the free electron mass.  These systems are interesting for several reasons - they often have very rich phase diagrams (in terms of magnetically ordered states), can exhibit superconductivity (!), and indeed can share a lot of phenomenology with the high temperature superconductors, including having a "bad metal" normal state.  In the heavy fermion superconductors, it sure looks like spin fluctuations (related to the magnetism) are responsible for the pairing in the superconducting state.

Quantum criticality has to do with quantum phase transitions.  A quantum phase transition happens when you take a system and tune some parameter (like pressure, magnetic field, doping, etc.), and find that there is a sharp change (like the onset of magnetic order or superconductivity) in the properties (defined by some order parameter) of the ground state (at zero temperature) at some value of that tuning (the quantum critical point).
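The flat-band-to-heavy-mass connection can be sketched numerically.  This is not from the workshop - just a minimal illustration, assuming a 1D tight-binding dispersion $E(k) = -2t\cos(ka)$ with made-up hopping values $t$:

```python
import math

hbar = 1.054571817e-34  # J*s
m_e = 9.1093837015e-31  # free electron mass, kg

def effective_mass(t_eV, a_m=4e-10):
    """Effective mass at the bottom of a 1D tight-binding band
    E(k) = -2t cos(ka).  There d^2E/dk^2 = 2 t a^2 at k = 0,
    so m* = hbar^2 / (2 t a^2): a flatter band (smaller t) means
    a smaller curvature and hence a heavier carrier."""
    t_J = t_eV * 1.602176634e-19  # convert eV to J
    return hbar**2 / (2 * t_J * a_m**2)

# Hypothetical broad s/p-like band (t ~ 1 eV) vs. narrow f-like band (t ~ 1 meV):
m_broad = effective_mass(1.0)
m_narrow = effective_mass(0.001)
print(m_broad / m_e)   # a fraction of the free electron mass (light carrier)
print(m_narrow / m_e)  # hundreds of times the free electron mass (heavy carrier)
```

Shrinking the hopping by a factor of 1000 (mimicking a nearly flat hybridized band) raises the effective mass by the same factor, which is the essence of how heavy fermion materials end up with carriers hundreds of times heavier than a free electron.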
Like an ordinary finite-temperature phase transition, one class of quantum phase transitions is "second order", meaning (among other things) that fluctuations of the order parameter are important.  In this case, they are quantum fluctuations rather than thermal fluctuations, and there are particular predictions for how these things survive above $T = 0$ - for example, plotting properties as a function of $\omega/T$, where $\omega$ is frequency, should collapse big data sets onto universal curves.  There appears to be an intriguing correlation between quantum critical points, competition between phases, and the onset of superconductivity.  Fun stuff.

A couple of links:
• Higgs doesn't think he would have gotten tenure in today's climate.
• Elsevier is still evil, or at least, not nice.
• A higher capacity cathode for Li-ion batteries looks interesting.
• NIST has decided to award their big materials center to a consortium led by Northwestern.

## Monday, December 02, 2013

### Quick survey of my readers at R1 universities

I'm trying to gather some information, informally.  If you're at a tier-1 research university in the US or Canada, please tell me how many power failures your university has had this year, or in a typical year.  By "power failure", I mean a campus-wide outage lasting at least a second.  It's better if you can mention the name of the university.  (Rice readers, I already know the answer for us - no need to chime in.)  Follow up: if you know, does your university have big uninterruptible supplies for particular facilities or buildings?  Thanks.

## Sunday, November 24, 2013

### Some interesting links

Several interesting links have crossed my virtual radar this past week.
• Here is a (rather over-written) article about how one of the big proponents of massive online open courses (MOOCs) has realized that they are not really a panacea for higher education.
• As Peter Woit reports, there is going to be a 90th birthday bash for Phil Anderson next month.
(No, I was not invited, though he did fall asleep in a seminar by me once.)  More interesting is Anderson's letter to the APS News, where he points out that he was not, in fact, single-handedly responsible for killing the SSC.
• Also in the APS News was a two-part interview (here and here) with Elon Musk.  He's either The Man Who Sold The Moon, or a Bond supervillain.
• This video is a great set of demonstrations about magnetic fields and forces, though the speaker uses some very nonstandard terminology, so it's a bit hard to figure out exactly what's going on.
• Here, Prof. Laithwaite does a further cool demo of a very heavy gyroscope.
• Charles Day has written a thought-provoking essay about the academic job market.

## Monday, November 18, 2013

### Coauthorship

I've been asked by a colleague to write a post about coauthorship.  This topic comes up often in courses on scientific ethics and responsible conduct of research.  Like many of these things, my sense is that good practice prevails in the large majority of circumstances, though not 100% of the time.  I think my views on this are in line with the mainstream, at least in condensed matter physics.

First, to be a coauthor, a person has to have made a real intellectual contribution to the work, somewhere in the planning, execution, analysis, and/or writeup stages.  Simply paying for a person's time, some supplies, or lending a left-handed widget does not alone entitle someone to coauthorship.  It's best to have straightforward, direct conversations with potential coauthors early on, before the paper is written, to make sure that they understand this.  A couple of times I've turned down offers of coauthorship because I felt that I didn't really contribute to the paper; once, for example, one of my students did some lithography for a colleague as a favor, while offering advice on sample design.  She rightfully was a coauthor, but I hadn't really done anything beyond say that this was fine with me.
The challenge is, the current culture of h-indices and citation metrics rewards coauthorship.  People coming out of large research groups with many-person collaborative projects can end up looking fantastic in some of these metrics, a bias exacerbated if coauthorships are distributed liberally.  Research cultures that have very hierarchical structures can also lead to "courtesy" coauthorships.  (Does the Big Professor or Group Leader who runs a whole institute or laboratory automatically end up on all the important papers that come out of there, even if they are extremely detached from the work?  I hope not.)

Coauthorship entails responsibilities, and this is where things can get ethically tricky.  As a coauthor, minimally you should contribute to the writing of the manuscript (even if that means reading a draft and offering substantive comments) and actually understand the research.  Just understanding your own little piece and having no clue about the rest is not acceptable.  At the same time, it's not really fair to expect, e.g., the MBE materials grower to know in detail some low-T rf experimental technique tidbit, but s/he should at least understand the concepts.  A coauthor should know enough to ask salient questions during the analysis and writeup.

Note that all of this gets rather murky when dealing with very large, collaborative projects (e.g., particle physics).  When CERN collaborations produce a paper with 850 coauthors, do I think that each of them really read the manuscript in detail?  No, but they have a representative system with internal committees, etc. for internal review and deciding authorship, and the ones I talk to are aware of the challenges that this represents.

Some topics lend themselves more to a back-and-forth in the comments, and this may be one.  I'm happy to try to answer questions on this.

## Monday, November 11, 2013

### The Orthogonality Catastrophe (!)

Physicists sometimes like to use dramatic terminology to describe phenomena or ideas.
For example, the Ultraviolet Catastrophe was the phrase describing the failure of classical statistical physics to predict the correct form of thermal ("black body") radiation in the high frequency ("UV" at the time) limit.  The obvious disagreement between what seemed like a solid calculation (based on equipartition) and observation (that is, you are not bathed in gamma radiation by every thermal emitter around you) was a herald of the edge of validity of classical physics.

In condensed matter physics, there is another evocative phrase, Anderson's Orthogonality Catastrophe.  (Original paper)  Here's the scenario.  Suppose we have a crystal lattice, within which are the electrons of an ordinary metal.  In regular solid state physics/quantum mechanics, the idea is that the lattice provides a periodic potential energy for the electrons.  We can solve the problem of a single electron in that periodic potential, and we find that the single-particle states (which look a lot like plane waves) form bands.  The many-electron ground state is built up from products of those single-electron states (glossing over irrelevant details).  The important thing to realize is that those single-particle states form a complete basis set - any arrangement of the electrons can be written as some linear combination of those states.  (For students/nonexperts: This is analogous to Fourier series, where any reasonable function can be written as a linear combination of sines and cosines.  Check out this great post about that.)

Now, imagine reaching in and replacing one of the atoms in the lattice with an impurity, an atom of a different type.  What happens?  Well, intuitively it seems like not much should happen; if there were $10^{22}$ atoms in the lattice, it's hard to see how changing one of them could do much of anything.  Indeed, if you compared the solutions to the single-particle problem before and after the change, you would find that the single-particle states are almost identical.
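"Almost identical" is the whole game here.  A two-line numerical sketch (with illustrative, made-up numbers, not values from Anderson's paper) shows why a tiny per-state change still matters for a many-body state:

```python
import math

# Hypothetical numbers for illustration: each single-particle overlap
# is only slightly less than 1, but there are macroscopically many electrons.
single_particle_overlap = 1 - 1e-7   # i.e., 0.9999999
N = 1e22                             # roughly the electron count in a macroscopic sample

# The many-body overlap ~ overlap^N underflows straight to zero in
# floating point, so work with its (natural) logarithm instead:
log_many_body_overlap = N * math.log(single_particle_overlap)
print(log_many_body_overlap)  # on the order of -1e15, i.e., the overlap is e^(-1e15)
```

An overlap of $e^{-10^{15}}$ is zero for any practical purpose: the new many-body ground state has essentially no projection onto the old one, even though each single-particle state barely changed.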
However, "almost" means "not quite". Suppose the overlap between the new and old single particle states was 0.9999999, where 1 = no change. For $N$ electrons, that means that the new many-particle ground state's overlap with the old many-particle ground state is goes something like $0.9999999^{N}$. For a thermodynamically large $N$, that's basically zero. The new many-particle ground state is therefore orthogonal to the old many-particle ground state. In other words, in terms of the old basis, it seems like adding one impurity (!) produces an infinite number of electron-hole excitations (!!) (since it takes an infinite number of terms to write the new many-particle ground state in terms of the old). So, where does this fall apart? It doesn't! It's basically correct. The experimental signature of this ends up being apparent in x-ray absorption spectra, in a variety of meso/nano experiments (pdf), and in cold atoms (pdf). Any other good physics examples of overly dramatic language? ## Wednesday, November 06, 2013 ### blogging and academic honesty Paul Weiss and the editorial board of ACS Nano seem to have ignited a bit of a controversy with their editorial published a couple of weeks ago. The main message of the editorial seems to chastise bloggers and other social media users regarding accusations of scientific misconduct. As with many positions taken on the internet, there is a defensible point of view (in this case, "It is better to try to go through an investigative procedure regarding misconduct, given that the consequences of a false public accusation can be very severe") that was expressed in an inartful way (coming across as scolding, and implying that people who criticize published work are likely not necessarily accomplishing science themselves). As my mom used to say, sometimes it's not what you say, but how you say it that matters. 
Clearly, as this extremely long thread on ChemBark and this response editorial by Nature will attest, there are differing views about this issue.  Here's my take (not that I have any privileged point of view - your mileage may vary):
• Public accusations of misconduct or incompetence should never be made lightly.  If misconduct is really suspected, then the appropriate first course of action is to contact the relevant journal and ideally the research integrity officer (or equivalent) at the authors' home institution.  Public accusation should not be a first recourse.
• Journals should deal with accusations in a timely way.  Years passing before resolution of inquiries is unacceptable.  Authors stonewalling by refusing to respond to inquiries is unacceptable.  While authors should be given every opportunity to respond, it is also not appropriate to, e.g., refuse to publish a comment because the authors drag their heels on writing a response.
• No one has yet come up with a perfect feedback mechanism.  Things like PubPeer are better than nothing, but anonymous commenting, like anonymous peer review, is a double-edged sword.  Yes, anonymity protects the vulnerable from possible retaliation.  However, anonymity also leads to genuinely awful behavior sometimes.
• People are going to make public comments about the scientific literature - that's the nature of the internet, and in general that's a good thing.  I would hope that they will do this with due consideration.  The journals are free to encourage people to pursue concerns within the journal system, but it's not productive to imply that people who draw attention to suspect figures (for example) are somehow poor scientists or gleeful bullies.  We're all on the same side (except the cheaters).
## Tuesday, October 29, 2013

### Accusations of misconduct regarding a condensed matter paper

I had heard some gossip about this from a couple of my colleagues last week, and now it would appear that the news has broken in the English-language media.  The paper at hand is this one, which reports that, in one of the iron pnictide superconductors, there can be phase separation between regions of one composition (K$_{0.68}$Fe$_{1.78}$Se$_{2}$) and regions of another composition (K$_{0.81}$Fe$_{1.6}$Se$_{2}$).  This is important because, in trying to understand the superconducting properties of samples with some nominal composition, it's a big deal to know whether you're looking at a homogeneous system or one where some other composition is actually dominating the properties.

The accusation is reported here and here, and comes from Prof. Mu Wang at Nanjing University (also the home institution of the accused, Prof. Hai-Hu Wen).  There are at least two scientific ethics issues.  First, there is a question about coauthorship (did all of the authors on the paper actually contribute, and did they even see the paper prior to submission?).  Second, from what I can gather, there are concerns about data selection in Fig. 4 of the paper.  People who know more about this, please feel free to chime in, since it's a good idea to understand specifically what the concerns are regarding the validity of the scientific claims of the article.  The added dimension to all of this is the claim that both the accuser and the accused were up for membership in the Chinese Academy of Sciences (though apparently the accuser has withdrawn his candidacy - see my second link in this paragraph).  Interesting, particularly since accusations like this in the physical sciences remain relatively rare.

## Sunday, October 27, 2013

### Two striking results on the arxiv

This past week, I noticed two preprints on the arxiv that particularly got my attention.  First, "How many is different?"
In condensed matter physics, we like to point to Phil Anderson's famous 1972 paper "More is different" (pdf) as an important tract about the nature of our field: fascinating, rich, profound physics can emerge from collective, interacting, many-particle systems.  As I've said before, one water molecule is not wet; a large ensemble of water molecules can be.  The preprint at hand looks specifically at a classic statistical mechanics problem, the ideal Bose gas, and summarizes results showing that it takes, for a cubic box, exactly 7616 particles for there to be a real thermodynamic liquid-vapor-like transition.  (Note that in this case there are no interparticle interactions; it's the identical nature of the particles that leads to this effect.)  A spherical box would lead to a different criterion.  Still, it's pretty neat to me that sometimes we can come up with a direct answer to "how many particles does it take before we see an emergent collective response?"

The other paper is "Reaching Fermi degeneracy by universal dipolar scattering".  Truly cooling the atoms is a major challenge in the trapped-atomic-gases community, particularly for those who want to use optical lattices filled with fermionic atoms as a way to simulate electronic phases relevant to condensed matter.  A phenomenon called "Pauli blocking" is a real hindrance.  As a Fermi gas cools, there are fewer and fewer empty single-particle states into which you can scatter the excited ("hot") atoms.  (Fermionic atoms can only scatter into empty states, because the Pauli exclusion principle forbids multiple occupancy.)  These authors are able to leverage magnetic dipole-dipole scattering within a cloud of erbium atoms to get to temperatures below 0.2 times the Fermi temperature - quite cold for these folks.  If this can be adapted to other systems and optical lattices, that would be very exciting.
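To put a number like 0.2 $T_F$ in solid-state perspective, here is a back-of-the-envelope sketch using the standard free-electron formulas; the only input is copper's conduction-electron density, taken as roughly $8.5 \times 10^{28}\ \mathrm{m}^{-3}$:

```python
import math

hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # electron mass, kg
k_B = 1.380649e-23       # Boltzmann constant, J/K

n_cu = 8.5e28  # approximate conduction electron density of copper, m^-3

# Free-electron Fermi energy: E_F = hbar^2 (3 pi^2 n)^(2/3) / (2 m_e),
# and the Fermi temperature is T_F = E_F / k_B.
E_F = hbar**2 * (3 * math.pi**2 * n_cu)**(2/3) / (2 * m_e)
T_F = E_F / k_B

print(T_F)        # ~8e4 K for copper
print(300 / T_F)  # room temperature is only a few thousandths of T_F
```

In other words, the electrons in a chunk of copper sitting on your desk are already deep in the degenerate regime, at an effective $T/T_F$ an order of magnitude or two below what the cold-atom experiments are fighting to reach.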
(For reference, room temperature is something like 0.005 times the Fermi temperature of copper, and lots of exciting solid state electronic physics requires such low temperatures.)

## Friday, October 25, 2013

### Follow-up: Workshop on "Surface Plasmons, Metamaterials, and Catalysis"

The workshop here was, I think, very successful.  It was great to get so many knowledgeable people together in one place to talk about these issues.  Here are some key points that I learned from watching the talks and speaking with people:
• While it may take years and $$ to build, single-molecule tip-enhanced Raman spectroscopy in variable-temperature UHV looks like an amazing capability.  It turns out that vibrational lineshapes that look Gaussian at room temperature look like Lorentzians at cryogenic temperatures, as one would expect for true (not inhomogeneously broadened) resonances with well-defined lifetimes.
• It's possible to get large electromagnetic field enhancements out of high-index materials even without plasmon resonances.  Clearly it is worthwhile to try to create geometries where the local high intensities occur close enough to a surface or interface that charge carriers produced there are able to diffuse to the surface to do chemistry.
• Careful engineering of structures can lead to very high (say 90%) absorption even in extremely thin coatings.  That is, you can imagine making anti-reflection coatings that are tuned to produce absorption in materials where the loss mechanism can be used to do chemistry.
• By driving plasmonic structures detuned from their resonance, it is possible to favor net motion of charge (generating photovoltages that can be measured in experiments).  Conversely, gating structures to alter their total charge density can tune their plasmon resonances (though it's still not clear to me how one would make this a big effect).
Shifting the energies a little with charge is something I understand; drastically changing the intensity of the resonances with charge is much more mysterious to me.
• Very specific defect sites can drive very particular catalytic reactions.  In some special cases theory can show how this works, but I can't help thinking that there are many issues left to resolve.  For example, in discussing catalysis most people draw energy level diagrams, correctly showing that energy conservation is really important when, e.g., a hot electron in some solid is able to occupy (transiently) an unfilled molecular orbital in a molecule of interest.  However, there are other issues that affect which electronic processes can happen (e.g., momentum conservation; how incoherent hot electrons or holes can excite plasmons, which are coherent e-h excitations) that seem to be given short shrift.
• When an electron transiently occupies an empty molecular orbital, you can think of that as delivering an impulse to the molecule's nuclei.  (The equilibrium bond lengths differ when the molecule has an extra electron; thus passing through that state is like kicking the nuclei.)
• Single-particle vs. ensemble studies (via plasmons and optics) can give insight into processes like the evolution of phase transitions with size - such as the formation of palladium hydride.
• Doped Si can support nice plasmon excitations out in the mid-to-far IR, and could be very interesting from several points of view.
• While it's very trendy to worry about water splitting, there are a huge number of other reactions that are important and complicated!  For example, converting CO$_{2}$ to methane is eventually an eight-electron process (!).

Those are just a few points.  Teaching, etc. made me miss some of the talks as well.  All in all, good talks and good discussion.  There is clearly an enormous amount still to be done in this field, both on the experimental and theory sides.
## Monday, October 21, 2013

### Workshop on "Surface Plasmons, Metamaterials, and Catalysis"

Today is the beginning of our workshop on "Surface Plasmons, Metamaterials, and Catalysis", hosted here at Rice.  I've mentioned this before; it's actually happening, and thanks to the end of the government shutdown, the ARO sponsors are even going to be able to see the talks.  Should be fun, and I'm hoping to learn a lot.  As I've said previously, catalysis seems like magic to me sometimes.  I'll try to post a bit about what I learn.  (By the way, this is the 702nd post on this blog.  Meant to point out the Big Round Number threshold last week.)

## Wednesday, October 16, 2013

### How to: write a scientific paper (in physics, anyway)

There are many resources out there for people who want guidance on scientific writing.  Here, in very brief form, are my tips.
• First, do some science.  That sounds flippant, but in order to justify adding to the scientific literature (assuming we're talking about a research-based paper here and not a review article), you do have to have done something.  Figuring out what you've really done and being able to articulate it clearly in a couple of sentences is essential.  What do you know now about physics that you did not know before the work?  Compelling papers are stories - they have some narrative structure (more on that below).
• Who is your audience?  Remember that you would tell your story differently to a specialized audience (e.g., your direct competitors who already know the topic in depth) than you would to a generic physicist in your subfield, and still differently to a generic physical scientist (if you're aiming for a broad/glossy journal).  This is equivalent to figuring out, at least in rough terms, where you are going to submit the work.  Knowing your audience gets you in the right frame of mind to....
• Figure out your figures.  Ok, so you've done some piece of physics.  How can you explain that in images/graphs/visualizations?
For a short "letters" paper, it's typical to have around four figures. Often the first figure somehow sets the context and may include a cartoon/diagram of the setup or the system. The figures need to tell the story in a logical way - if you're an experimentalist, show what you measured, and show how you got from those measurements to your conclusion (characterizing a new phenomenon? Comparison with some candidate models?). Again, who is going to be reading this - a specialist, or someone who needs some intro context? Work hard on your figures - many readers will spend far more time looking at your figures that sifting through the text. • Tell your story by describing the figures. As I said, a paper is a narrative, but unlike historical literature you are not required to describe what you did chronologically, and unlike detective stories you do should not leave the main point for the very end. If you've chosen your figures well, you are now something like 80% of the way there. Do not worry about length at this point. • Now work on the intro and the conclusions. The intro needs to reflect the Big Picture and place your work in context, while citing appropriate literature references. In a short paper, you don't need to cite everything remotely related to your work, but you don't want to leave out major contributors. Don't cite yourself unless it's really germane. The intro is where you will gain or lose the audience, including reviewers. Usually at the end of the intro is a paragraph that starts out "In this work, we...." That's where you need to be able to hit the highlights of your work in just a few sentences. Really work on your intro. It's right behind the figures and the abstract in terms of making an impression on the reader. As for conclusions, what you do here depends on the journal. Some journals just don't do concluding paragraphs. Others wrap up the discussion, summarize the results again briefly, and then give perspectives on possible future work. 
• Lastly, do the abstract (though I often rough one out earlier).  Again, the abstract needs to be concise, clear, provide context, and summarize the main points of the paper.  As annoying as it is on some level that there is an algorithm for this, there really is, and it works.  Take a look at the Nature guide for writing an abstract (pdf!).  The general format and the way it makes you think about what each sentence is doing can be very helpful, even when generating abstracts for non-Nature journals.
• Now edit.  Read and re-read the paper, rewording things to make them more concise and clear.  Put it aside for a day or two and then look at it with fresh eyes.  Make sure the coauthors do edits as well.  My approach is definitely to rough some text onto the page and then edit, rather than agonize about trying to write something perfect from the outset.
• Think about a cover letter for the submission, and come up with some suggested referees.  For the cover letter, you are trying to convey clearly and cleanly what you did to an editor who has to filter through many, many papers.  Don't make it a lengthy exercise.  Do make it accessible, and do make it clear why your work is appropriate for the journal.  On the referee front, don't try to stack the list with your pals.  Pick people who will actually give insights and have relevant expertise.  The editor may well take your suggestions as a jumping-off point for picking people out of their database, so if you pick people who have a clue, it increases the likelihood that your referees will be thematically appropriate.

Good luck!  Feel free to add more suggestions in the comments; I'm sure I'm leaving things out.

## Monday, October 14, 2013

### Scientific American, blogging, and appropriate behavior

Many out there in the science blogging community have been rightly angered and frustrated by what transpired last week regarding Danielle Lee's blog at Scientific American.
Much as it pains me to link there, Jezebel actually has a cogent summary of what went on. Short version: Dr. Lee politely declined a request to guest blog (at a paying advertising partner of SciAm), and was shocked and angered when she was denigrated in sexist, degrading language for declining. She posted about this on her blog, and SciAm took down the post, arguing feebly that it wasn't really about science. This is weak sauce, of course, because SciAm has allowed many of its bloggers to post about topics that aren't directly science. Pretty much they mishandled this about as thoroughly as possible.

Blogging is a tricky business. Outfits like SciAm, NatGeo, Wired, etc. clearly value the perspective (and clicks) that bloggers can bring, and must view bloggers as a very inexpensive source of content. Bloggers generally value the greater readership and exposure that is afforded by a widely recognized host - certainly my readership is much lower than it could be if I blogged for scienceblogs (of course, then I'd feel obligated to post more often). However, by being independent of these corporate hosts, I have greater editorial control. I don't have to worry about ticking off advertisers; I just have to keep some level of a lid on my sarcasm and desire to vent.

This whole thing of course reinforces the standard dictum that seems to be ignored by a surprisingly large number of allegedly smart people: Beyond trying to be a good person, don't write things in email (or post on the internet) that you would be sorry to see on the front page of the New York Times (perhaps I should say "homepage", since we are approaching the supposed death of print media). Hopefully someone at biology-online.org (who started all of this with their inexcusable behavior) now appreciates this lesson.

## Monday, October 07, 2013

### Nobel

I have been very busy, and I missed my traditional Nobel guessing game.
To be honest, it seems so highly likely that there will be some version of a Higgs prize (the LHC people have been making preparations for weeks) that it's taken the fun out of matters. I still hold out for an outside chance of a surprise - perhaps extrasolar planets, or galaxy rotation curves, or geometric phases. Guess we'll know in about 8 hours or less.

## Wednesday, October 02, 2013

### Bits and pieces

Sorry for the comparatively slow rate of posts - it's an extremely busy time of the semester, and travel last week threw me off. Here are a few odds and ends:
• There is about to be a new edition of Kleppner and Kolenkow, an advanced freshman physics text that I really value. (Unsurprisingly, when I was taking a class out of it 24 years ago my opinion of the book was not nearly so favorable.) I'm very much looking forward to seeing the revisions and new problems. While I like the mathematical sophistication a lot, I think it's superior to most generic freshman books by not trying to distract you on every page with glossy color illustrations and three different kinds of call-outs, text boxes, or tables. Many modern omnibus books look like they're actively trying to encourage attention deficit disorder. If they could use the html blink tag and have rotating animated gifs on every page, they would.
• There is also a new edition of Purcell, a fantastic E&M book. I've written about Purcell before, and Morin has done the community a great service by revising this book so that the presentation is in what has become the mainstream MKS system of units. I haven't had a chance to teach out of this new version, which contains more exercises and worked examples, but anything that introduces Purcell's lucid explanations to more students is good.
• There is going to be a new edition of Horowitz and Hill as well.
• The US government took time out from its complete dysfunction last week to address the looming problems related to the strategic helium reserve.
• I hope the majority party in the US House understands that a US sovereign debt default would be an incredibly bad idea.

## Wednesday, September 25, 2013

### DOE ECMP PI meeting, day 2 - things I learned

Very brief list of things I learned about during day 2 yesterday:
• Due to optical nonlinearities, it is possible to get broadband negative (left-handed, that is) refraction from (18 layer thick) graphene.
• In a strong perpendicular-to-plane magnetic field, you can detect (optically) evidence of 1-d subband formation and e-e interactions.
• The optical properties of graphene are very very rich. That is, complicated.
• Doug Hofstadter was right.
• With an in-plane magnetic field, you can see physics that looks like the quantum spin Hall effect in single-layer graphene.
• Trying to tune the bandgap of GaAs down to 1 eV via nitrogen doping without killing the mobility is very hard.
• Ballistic phonon pulses are a very cool way of detecting defects and interface roughness basically with sonar!
• You can measure the exchange field between a magnet and electrons in a superconductor if you can work with ultrathin films (field in plane).
• There are still some weird issues associated with electronic decoherence in the mesoscopic world - coherence times seem to saturate at the lowest temperatures in various etched semiconductor and bismuth nanowires.
• Pumping spin currents via the spin Hall effect is still cool.
• Electronic heating above the lattice temperature in graphene is more complicated than it would appear.
• Anisotropy in the electronic structure at B=0 leads to modified anisotropy in composite fermions at $\nu = 1/2$.
• The $\nu = 5/2$ quantum Hall state is surprisingly robust as mobility goes down. That means that short-range, high-angle scattering doesn't really kill the state, which is good, and that mobility as our favorite proxy for sample quality is a poor guide in this regime, which is interesting.
• My colleague Rui-Rui Du has a really great and exciting system for looking at topological edge states and quantum spin Hall in InAs/GaSb quantum well structures available from a commercial vendor.

## Tuesday, September 24, 2013

### DOE ECMP PI meeting, day 1 - things I learned

Yesterday was an extremely dense meeting day. Many talks, many posters. By its nature, this meeting is far more technical than the Packard meeting, so the bullet points below are going to be more obscure to the nonexpert. The program is here. Among the things I learned yesterday:
• Harold Hwang continues to do very interesting physics at the interface between LaAlO3 and SrTiO3, looking at fundamental issues like the limits of charge mobility in the 2d electron gas there, and how to make delta-doped bilayers.
• Many other people are playing with oxide and pnictide MBE, making pnictide superlattices, strain-controlled pnictides, multiferroic films, etc.
• It is possible to use the elastic deformation of VO2 at the metal-insulator transition to alter the magnetic coercivity of an overlying Ni layer.
• In strained films, it is possible to see through x-ray techniques that one can decouple the electronic transition in VO2 from the structural transition.
• Real progress has been made recently in using engineered structures of nanomagnetic patterns to model complex systems like spin ice.
• Nd2Fe14B, the rare-earth hard magnet, can take up hydrogen into its open structure, and when it does, the lattice expands, which greatly softens the magnetic response.
• Mott insulating materials can be synthesized that exhibit quantum criticality at zero magnetic field and as-made.
• Iridates are interesting and complicated.
• Investing in developing a particular technique (in this case, NMR of unusual elements like oxygen, sodium, and arsenic) can pay big long-term dividends in terms of unique experimental insights (e.g., there are no "static loop currents" flowing in the cuprate superconducting state).
## Monday, September 23, 2013

### DOE experimental condensed matter physics principal investigator meeting

I am spending the next 2.5 days at the DOE's experimental CMP principal investigator meeting in the Washington DC area. I'll try to blog a few highlights over the course of the meeting. The basic idea is supposed to be to get all of the PIs together to talk about their latest stuff, and ideally to foster new collaborations and activities. Judging from the people sitting around me, it looks like this will be an extremely strong meeting in terms of science.

## Sunday, September 15, 2013

### Things I learned this week at the Packard meeting

For the 25th anniversary of the amazingly awesome David and Lucile Packard Foundation fellowships, I was fortunate (and as always very grateful) for the opportunity to go to their annual meeting and listen to talks from incoming and outgoing Fellows. These meetings are tremendous - a very rare chance to hear 20 minute talks on topics across science, engineering, and math, aimed at the technically literate non-expert. Back in the dim past of this blog, I've posted about these before (here, here, and here). Here are some take-away facts I learned this time around:
• By very narrow targeting of specific pathogens, it might be possible to remove some of the evolutionary pressure (exerted by horizontal gene transfer [something I'd never learned about] from your gut bacteria) that leads to antibiotic resistant strains.
• It's possible to use ideas from superresolution microscopy and principal component analysis to improve structure determination in materials characterization.
• Using small molecule dyes, it is possible to use optical processes to turn the tables on some chemical reactions, favoring "anti-Markovnikov" selection, rather than Markovnikov rules (where reaction sites are determined by permanent dipole moments of bonds).
• Sometimes cells can recognize themselves (and distinguish between themselves and close relatives) using proteins based only on one or two genes.
• I'm used to thinking about coupling two (identical) resonators and getting an energy splitting (like bonding/antibonding orbitals). I hadn't realized that using an effectively imaginary coupling means you can get a lifetime splitting (one long-lived, one short-lived mode).
• You can tie vortex rings in knots. Watch the videos!
• Greenland has not been ice-free for at least 350,000 years, and radioactive dating based on dust captured in the ice makes it possible to untangle even faulted or folded ice cores.
• Monsoons are complicated, even if you model a completely water-covered idealized planet.
• Every time a pair of neutron stars collide, they produce about one Jupiter mass worth of Au, while a core-collapse supernova makes about one lunar mass worth of Au. As a result, even though colliding neutron stars are rare, half of the gold out there came from them. (In case you were wondering, in all of human history we have mined about 165,000 tons of Au.)

I've left out many others. As always, very cool.

## Friday, September 13, 2013

### Ionic liquids and gating - how much is chemistry?

I've written before (here, here and here) about the use of ionic liquids in condensed matter physics investigations. These remarkable liquid salts, with small organic molecules playing the roles of both positive and negative ions, can be used in electrochemical applications to generate extremely large surface charge densities near electrode interfaces. Many experiments have been published in the last few years in which ionic liquids are meant to induce (via capacitive coupling) large densities of mobile charge carriers within interesting solids at the solid/ionic liquid interface. One concern in these experiments has been the role of surface chemistry.
While the molecular ions themselves are intended to be stable over a large range of electrochemical conditions, the ionic liquids can dissolve more reactive species (like water). Likewise, recent experiments by Stuart Parkin of IBM Research have shown that in some systems (vanadium oxide in particular), under certain electrochemical conditions it would appear that ionic liquids can favor the formation of oxygen vacancies in the adjacent solid. Since oxygen vacancy defects in many oxide materials can act as dopants, changing the concentration of charge carriers, one must be extremely careful that any measured changes in electronic properties are really from electrostatics rather than effective chemical doping.

These concerns can only be ratcheted higher by the simultaneous online publication of two more papers from the Parkin lab, this one in Nano Letters (on SrTiO3) and this one in ACS Nano (on TiO2). In both systems, the authors again find evidence that changes in oxygen stoichiometry (rather than pure electrostatic charging) can be extremely important in generating apparently metallic 2d surface layers. This is a very subtle issue, and the gating experiments remain of great interest. Unraveling the physics and chemistry at work in all the relevant systems is going to be a big job, with a strong need for in situ characterization of buried solid-liquid interfaces. Fun, challenging stuff that shows how tricky this area can be.

## Thursday, September 12, 2013

### New big science prizes - time for nominations and opinions

There are large, endowed prizes in a number of disciplines. The most famous of all are the Nobel prizes, of course, and in the sciences at least (chemistry, physics, and medicine), being awarded a Nobel is a singular crowning achievement.
A huge amount has been written about the Nobels - if you want to learn how they came to be, and at the same time become extremely disillusioned about the process for the early awards (my goodness I hope it's better these days - it seems like it must be), I recommend The Politics of Excellence. The purpose of the Nobel is to reward a major, transformative (to use the NSF's favorite word) intellectual achievement. The money is not meant to be a research grant. (Similar in spirit is the Fields Medal for mathematics, though that is much less money and purposefully directed at younger researchers.)

The MacArthur Fellowships are another well-known set of awards. These are known in popular parlance as "Genius Grants", and unlike the Nobels are (apparently) intended not so much as a financial reward, but as a liberating resource, a grant that can provide the winner with the financial freedom to continue to excel. In some disciplines (the arts and the humanities in particular) this can completely change the financial landscape for the winners. Awards that go directly toward furthering the creative ends of the recipients are clearly great things.

In recent years, a couple of new, very large awards have been created, and it's interesting to consider whether this is a good thing. The Kavli Foundation is awarding prizes every other year in Neuroscience, Astrophysics, and Nanoscience. To nominate someone, see here. In spirit, these seem much like the Nobels, with awards so far going to extremely well regarded people, and not meant to function as direct research support. In more flamboyant style, Yuri Milner has endowed the Fundamental Physics Prizes, also not meant to function as research grants. What really distinguishes these latest, apart from the sheer magnitude of the awards ($3M each), is that they have largely gone to high energy physics theorists whose work has not been confirmed by experiment (in contrast to theoretical physics Nobel awards).
More recently there has been a special award to the LHC experimentalists, and some related prizes to condensed matter theorists.  However, the idea of giving very large prizes for unconfirmed theoretical work is controversial.  In essence, is something a "scientific breakthrough" if it's not confirmed by experiment, or is it very exciting math?  Perhaps this is just a labeling issue, but it is hard not to be unsettled by the willingness of some to try to detach science from experimental tests.

Is the scientific community better off from having more of these kinds of prizes?  Certainly it makes sense to consider awards for fields not recognized by the Nobel Foundation.  Nobels have gravitas because of their long established history, but that does not mean that there shouldn't be an analogous prize for, e.g., computer science.  Likewise, anything positive about the sciences that gets public attention is probably a net good.  However, prizes will lose their meaning if there are too many, and making some of them destabilizingly large amounts of money is not necessarily great.   It's also not clear quite what the point is if the same people win multiple large prizes for the same work.   For example, it's credible that Alan Guth could win a Nobel and a Kavli astrophysics prize in addition to the Fundamental Physics Prize.    I always tell would-be scientists not to get into this if they're after the big prize at the end - that's not the point of the enterprise, and I'd hate to see that change.  It's also hard for me to believe that the existence of these prizes is going to get the public or students materially more interested in the sciences.   Somehow prizes that go toward helping people continue their work or recognize a career of achievement seem more sound to me, but I remain ambivalent.

## Monday, September 02, 2013

### How to: Carry on a scientific collaboration

I'm writing this at the suggestion of a commenter on my previous how-to post, who was specifically interested in experiment/theory interactions.  Collaborations, as a fundamentally personal endeavor, are as varied as the people who collaborate.  Over the years I have collaborated with a number of theorist colleagues as well as fellow experimentalists, and generally it's been a very positive set of experiences, both scientifically as well as personally.  The main recommendations I can make about collaboration:
• Discuss and plan the ground rules at the beginning.  How is the collaboration going to work?  Is this the sort of collaboration that requires regular discussions and updates?  Are physical samples being sent by one party to another?  Which people are going to be responsible for what tasks?  What are peoples' expectations of authorship (recognizing that occasionally work may take an unanticipated turn, and someone's contribution may grow or shrink along the way)?  Are there restrictions about the samples or data?  (For example, a materials grower might collaborate with person A and person B on different projects; it could be very awkward if person A took samples and then on the side started working on the same project as person B!)
• Collaborate with people who have a similar approach to research projects as you, in terms of rigor, timeliness, and seriousness.  This is true whether those people are your own group, or outside collaborators.
• Make sure to understand what your collaborators are actually doing.  Collaborations are a chance for you to learn something, since presumably you're working with these people because they bring something to a project that you can't do yourself.  Sometimes asking what might seem at first glance a silly or naive question can lead to discussion that is informative for everyone.
• Have realistic expectations.  On the sociological level, realize that no one is going to retool their entire research enterprise or retask several people for your sake.  On the scientific side, know what can and can't be done by your collaborators and their techniques.
• Be communicative.  Keep your collaborators in the loop and up to date on what's going on.  If there is a big delay on your end for some reason, let them know.  You'd want them to do the same.  If you have decided that you don't think the project is going to work, or it's not working as anticipated, bring this up and don't let it sit.
• Be a finisher.  The most successful grad students are the ones who actually finish tasks and projects.  In the same way, don't let things slide.  If your collaborator wants you to read through a draft, or you promised to get some data to them in time for some deadline, follow through.

## Wednesday, August 28, 2013

Now that we live in the Information Age, where I am reliably told that Information Wants to be Free, I'm confused by a trend that is coming in terms of how university researchers access electronic versions of journals.  (For those of you under 30, there was once a time when journal articles were published in an arcane format that predates pdf called "paper".)   The electronic availability of journals, including historical archives, has largely been an enormous boon to scientific progress.  It is far easier and faster now than ever before to do proper literature research when writing a paper or a proposal.  If I'm using google scholar or Web of Knowledge or Scopus or any other reference crawling aid, I can now find and (if my institution subscribes or the content is available free) download copies of relevant references very quickly and efficiently.  If anything, the technology to provide this content is continually becoming cheaper and faster, since providing print content has far lower bandwidth requirements than the streaming video demands that are really driving innovation.

That is why I am concerned and confused by a trend popping up in the perpetually-financially-stressed university libraries around the country (and the world, presumably).  We all know that commercial publishers have been cranking up prices and applying annoying/evil tactics like bundling one high impact title with a dozen expensive, low-impact journals in forced package deals.  (Wiley, Elsevier, Taylor and Francis, that's you.)  Now, though, there is this idea being pushed that it would somehow be cheaper for university libraries to actually drop their subscriptions (!!) and instead use Get It Now, a product of the Copyright Clearance Center (those people you have to contact if you want permission to use a figure in a review article).  The problem is, Get It Now is misnamed; really it's Get It In Seven Minutes.  Needless to say, if you are trying to trace references and write a paper or proposal, having to wait seven minutes for every article you want to examine (which could easily number in the dozens while proposal writing) would be a major mess.

Given that the publishers have the capability to provide content essentially instantly, and that the infrastructure to support that capability is steadily getting cheaper, and that the publishers could quite readily track download statistics (and could charge per download if they really wanted to), I don't understand how Get It Now is a positive step.  Surely if per-article billing was an economically viable approach, the publishers would do it themselves, right?  The publishers are going to recoup their costs somehow, passing them along to CCC, and CCC will pass those along to the universities, so it's hard for me to see how interposing a middleman like CCC can really do anything except slow down researchers and make money for CCC.  This idea seems to go directly against the trend of open access, public archives, etc.

Do any of my readers work at institutions that use this service?   How does it work for you?  Is it as annoying as it sounds?  Does it actually enable your university to save money (that is, provide more or better content for the same actual cost) relative to the old approach?  A major challenge faced by universities in budgeting is that libraries don't sound as exciting as new buildings or major initiatives, and yet libraries and their services are essential to the scholarly mission of the institution.

## Monday, August 19, 2013

### How to: Write a response to referees

I think I'm going to start a periodic series of "how to" posts.  First up, how to write a decent "response to referees" document.  While this is pretty much common sense, it's not bad to think about it a bit in the abstract, rather than in the heat of the moment of having just received some kind of (perceived) searing blast of criticism.  In brief, assuming you get some collection of referee reports, at least one or two of which are not particularly positive, and you intend to revise and resubmit:
• Read the reports, and then put them aside for a day, as your white-hot rage over the terrible injustice that has befallen you fades, and in the cold light of reflection you realize that perhaps the manuscript you'd sent in is not, in fact, the greatest non-fiction prose writing since Churchill's six volume history of the Second World War.
• Once you've cooled off, make a list of what you think the referees want you to do, or what you think it would take to address the points that they raise.    Then consider whether you want to or should do all of those things.  Sometimes the referees can be very demanding.  (We've all seen this.)  You have to use your judgment, and remember that referees are not generally gratuitously mean.  I'd say the default position should be to do what they want, unless you and your coauthors consider what they want really unreasonable.  This list, by the way, is a headstart on the eventual "list of changes" that you'll need to provide when you resubmit.
• When you sit down to write your response, have the referee remarks right there.  In fact, it's a good idea to use copy/paste to intersperse your point-by-point responses.  That way you can be sure you didn't miss anything, and you are forced to write your response in an order that will seem logical to the referee.
• Always (always) thank the referees for their time.  Seriously.  You know what refereeing is like, and you'd like to be thanked, admit it.
• Point out that after this process you believe the paper is much improved (it will be, too, assuming the referees were really on point and not just asking you to cite their seminal work on the topic at hand), and if possible explain why.  (e.g., we believe that our main point is now much clearer)
• Always be polite and professional.  If you fly off the handle in your response, even if the referee is overtly hostile, it won't do you any favors with other referees or the editor.  Similarly, just as tone is difficult to convey in email, I suggest avoiding attempted jokes or sarcasm.  This is a professional communication - keep it that way.
• Try to be timely about revisions.  It's much better to get revisions done while everything is fresh in your mind, rather than letting things linger.  (Don't write them in the heat of the moment, though.)
• In your accompanying cover letter when you resubmit, make sure that you emphasize the changes you made in response to the referees.  Also, it doesn't hurt to point out to the editor if you think a referee either missed the mark or seems not to be objective, but it would be best to do so in a very professional way.  Calling the referee an idiot won't win you any friends, particularly since the editor likely chose the referee.  Still, if you really think the referee made serious mistakes, or was not competent, or didn't read the paper, you should bring that to the editor's attention in a professional way.  Again, this kind of response is an important part of your repertoire of professional communications, so it's best to get in the habit of writing them well.

That's it for now.  I'm sure I've left out points - please feel free to bring them up in the comments.

## Tuesday, August 13, 2013

### Rankings and metrics - yet again

I (along with my departmental colleagues) was very happy to see this.  My department does extremely well in a particular ranking scheme (described in the original paper here and implemented online here, though you need to ask for password access) that asks, essentially, what fraction of the papers published by a department fall into the top 10% in terms of impact in an area.  We can debate the flaws of any ranking scheme (hint:  they're all imperfect, because quantifying scientific quality and impact in a single number is fundamentally wrong-headed).  Still, it is nice to see an approach that agrees well with much intuition (that is, the usual top-10 suspects all look pretty good; schools that don't do much research in an area rank lower) and in which Rice does well.

## Saturday, August 10, 2013

### A new kind of solid - why "q-glass" really is weird and interesting

As long-time readers here know, I'm not a big fan of hype and press releases.  While it is very important to let people know what academic scientists and engineers are doing, not everything needs to be trumpeted from the rooftops as a huge breakthrough or a paradigm-altering thunderbolt.  However, this new result (the actual paper is here) is genuinely weird, unexpected, and exciting (at least to me).  The authors claim to have discovered a truly new kind of solid.  Let me break down what this means and why it's surprising.

A solid is a material that resists shear deformation - if you exert a certain force horizontally across the top surface of the material, the material will deform a bit until its internal forces balance your applied force, and then deformation will reach some constant amount.  (In contrast, a fluid will keep deforming continuously!)  The most ordinary solids people know about are either crystalline (this includes polycrystalline materials made up of many crystal grains) or glasses.  In a crystalline solid, the atoms have taken on highly symmetric spatial arrangements.  That is, the atoms aren't separated by random distances, but integer multiples of certain particular spacings; similarly, crystals are not isotropic - there are particular directions along which atoms are arranged.   In contrast, simple liquids are isotropic, and except for some typical nearest-neighbor distance set by the atomic or molecular size, there is no other spatial ordered arrangement.   When a solid crystallizes from a liquid, it is a collective phenomenon, a phase transition, and this happens on cooling when the free energy of the solid phase becomes lower than that of the liquid phase.  Quasicrystals (see 2011 Nobel for Chemistry) are in these senses crystals - their symmetries are just more subtle than those of ordinary crystals.

Glasses (including those made from polymers) are different.  They resist shear, too, but they do not have the long-range, periodic/anisotropic arrangement of constituents seen in crystals.  Instead, upon cooling, glasses become solid (meaning that their viscosity diverges toward infinity) because the constituents become "kinetically hindered".  At the risk of dragging up controversy, the simple description is that there is no true glass phase in the thermodynamic sense - glasses are rigid because the constituents can't readily move out of each other's way, not because there is some true collective thermodynamic stability (involving free energies) at work.

The authors of this new work have found something special that they have termed a "q-glass" while looking at what happens in the solidification of a molten mixture of aluminum, iron, and silicon.  In the resulting solids, they find nodules of a new material (Al91Fe7Si2, approximately) that is definitely not crystalline or polycrystalline (no preferred lattice spacings; completely isotropic).  At the same time, the material does form out of the melt through a genuine first-order phase transition (!), and therefore appears to be highly ordered in some sense (both features distinguishing it from a glass).  It will be very interesting to learn exactly what is going on here, and whether there are other materials that have these peculiar features.

## Wednesday, August 07, 2013

### Peer review, tone, and common courtesy

I'm back, though now I'm in the writing-six-things-before-the-term mode, so blogging will likely continue to be sparse.   Several of my friends pointed out this article in the Chronicle of Higher Education regarding the tone of correspondence in the peer review process.  In short, some fraction (from my own experience, I'd say maybe 15%) of "negative" reviews go beyond pointing out issues that need to be corrected to improve the paper and instead are genuinely hostile and nasty - basically tone and phrasing that the reviewer would very likely never have the nerve to use face to face.  Interestingly, those of us who experience the peer review process beat the rest of the world to the observation of a behavior pattern that is now realized to be common on the internet.

I wish I knew the solution to this.  Removing the blindness of the review process is one possibility, though I do worry that the same petty, vindictive people who write reviews like this will then engage in additional unprofessional behaviors toward people that they perceive as slighting them.

The point of the review process in science is to make sure that correct, clear, original science results get disseminated in the literature.  We are all (allegedly) on the same side.  If people would just adhere to that, then reviews could be much more constructive in tone (e.g., instead of "The authors are just plain wrong", wouldn't it be better to say "I'm concerned that there are some problems with steps 1 through 4"?).

I am worried that there is a general erosion in common courtesy as well.  I know this makes me sound like a grumpy old man, but again there are some people who use electronic communications in general as an excuse for rudeness.  Taking the time to say "please" and "thank you" is never time poorly spent.

## Wednesday, July 24, 2013

### Online physics lecture notes

Next week is going to be largely internet-free for me, so don't expect much excitement here until early August.  I do have a few things I want to discuss (thermoelectric effects, the great mess that was San Jose State's experiment in MOOCs, etc.), but that will wait.  In the meantime, I wanted to point out some nice lecture notes I'd found lately on the arxiv and elsewhere.  If people have their own favorite notes to point out online, please do so in the comments.

Neri Merhav has produced a couple of nice sets of notes, written from the perspective of trying to teach very physicsy concepts to electrical engineering students.  This past week he put up these notes about statistical mechanics, and previously he had written this set about the connections between information theory and statistical physics.  I found them both very readable.

Doron Cohen's notes on statistical mechanics and mesoscopics are a bit more mathy and closer to notes than a textbook-style discourse.

Not on the arxiv, but Yoshi Yamamoto's online notes regarding noise and noise processes are great.

## Thursday, July 18, 2013

### Printing at the 180 nm scale??

I stumbled across this post, where it is asserted (with no link, and my google-fu is inadequate) that some group or collaboration at Berkeley is working on the ability to make 180 nm critical dimension transistors via printing (and therefore over really large areas, rather than only on dinner-plate-sized Si wafers).  Can a reader out there point me to who is really doing this, or is this a case of a press office distorting things (e.g., taking a layer thickness and claiming it's a lateral feature size)?

## Monday, July 15, 2013

### Physics is hard - how much should that worry us?

I'm a bit late to the party, but there have been discussions lately about the number of undergrad STEM majors, including physics, with some gnashing of teeth about overall difficulty. For example, a report by the National Bureau of Economic Research has been interpreted as saying that the main reason students bail on science majors is poor grades. That is, students go in, knowing that science will require more work than majoring in something fluffy, but when many receive tough grades even though they work hard, that's too much for them and they change fields. Chad Orzel does his usual thorough job looking into what the study really says, and it does seem true that tough grading drives some people out of STEM pursuits.

Similarly, there is a new report from the National Academy of Sciences called "Adapting to a Changing World: Challenges and Opportunities in Undergraduate Physics Education". I found the content rather disappointing, in the sense that it didn't seem to say much new. We all know that some approaches can be better under some circumstances than traditional lecture. However, many of those are very labor intensive, and I'm sure that my 50-person class would benefit if it were instead five ten-person classes. More to the point, though, the report specifically claims that hard grades are a major factor in the low participation of women and underrepresented groups in the physics major.

So, is physics unnaturally harsh in its grading, to its detriment? Or is this a question of high school preparation on the one hand, and grade inflation in nonscience majors on the other? I lean toward the latter.

(Note that the NSF has proven that science is hard. Also, here is the paper featured in that article - it's actually very interesting.)

(One other note: no one commented on my three-part post about the physics of contacts, and the hit rate on those posts was very low. At the same time, in one 15 minute interval last week my post about "whiskey stones" got nearly 500 page views after it was mentioned in an argument about whiskey on reddit. Guess I should write about other things besides physics if I want more readership :-).)

## Monday, July 08, 2013

### Contacts III: The search for measurements

In the last two posts I've talked a bit about contact resistances, but I haven't said much of anything about how to infer these experimentally.

In some sense, the best, most general way to understand contact voltages is through scanning potentiometry.  For example, this paper (pdf - sorry for the long URL) in Fig. 10 uses a conductive AFM tip to look at the local electrostatic potential as a function of position along an organic transistor under bias.  When done properly, this allows the direct measurement of the potential difference between, e.g., the source electrode and the adjacent channel material.  If you know the potential difference and the current flowing, you can calculate the contact resistance.  Even better, this method lets you determine the $I-V$ characteristic of the contact even if it is non-Ohmic, because you directly measure $V$ while knowing $I$.   The downside, of course, is that not every device (particularly really small ones) has a geometry amenable to this kind of scanned probe characterization.

A more common approach used by many is the transmission line method.  In the traditional version of this, you have a whole series of (otherwise identical) devices of differing channel lengths.  You can then plot the resistance of the device as a function of $L$.  For Ohmic contacts and an Ohmic device, the slope of the $R-L$ plot gives the channel resistance per unit length, while the intercept at $L \rightarrow 0$ is the total contact contribution.  This does not tell you how the contact resistance is apportioned between source/channel and channel/drain interfaces (this can be nontrivial - see the figure I mentioned above, where most of the voltage is dropped at the injecting contact, and a smaller fraction is dropped at the collecting contact).  Related to the transmission line approach is the comparison between two- and four-terminal measurements of the same device.   The four-terminal measurement, assuming that no current flows in the voltage contacts and that the voltage probes are ideal, should tell you the contribution of the channel.  Comparison with the two-terminal resistance measurement should then let you get some total contact resistance.  I should also note that, if you know that the channel is Ohmic and that one contact dominates the resistance, you can still use length scaling to infer the $I-V$ characteristic of the contact even if it is non-Ohmic.
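As a concrete sketch of the transmission-line analysis (with made-up numbers, not data from any real device), fitting $R$ vs. $L$ to a straight line separates the channel and contact contributions:

```python
import numpy as np

# Hypothetical data: total two-terminal resistance R (ohms) for a series of
# otherwise-identical devices with channel lengths L (micrometers).
L_um = np.array([2.0, 5.0, 10.0, 20.0, 50.0])
R_ohm = np.array([1.4e3, 2.0e3, 3.0e3, 5.0e3, 11.0e3])

# For an Ohmic channel and Ohmic contacts, R(L) = r_ch * L + R_contact:
# the slope of a linear fit is the channel resistance per unit length,
# and the intercept at L -> 0 is the total contact contribution.
slope, intercept = np.polyfit(L_um, R_ohm, 1)

print(f"channel resistance per length: {slope:.1f} ohm/um")
print(f"total contact resistance:      {intercept:.1f} ohm")
```

Note that the intercept lumps together the source/channel and channel/drain contributions; as mentioned above, apportioning them between the two interfaces requires something more, like scanning potentiometry.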

The length scaling argument to infer contact resistances has also been used to great effect in molecular junctions.  There, for non-resonant transport, the usual assumption is that the bulk of the molecule (whatever that means) acts as an effective tunneling barrier, so that conductance should fall exponentially with increasing molecular length (assuming the barrier height does not change with molecular length, an approximation most likely to be true in saturated as opposed to conjugated molecules).  Thus, one can plot $\log G$ as a function of molecular length, and expect a straight line, with an intercept that tells you something about the contact between the molecule and the metal electrodes.  This has been done in molecular layers (see here, for example), and in single molecule junctions (see here, for example).  These kinds of contact resistances can then be related, ideally, to realistic electronic structure calculations looking at overlap between electronic states in the metal and those of the linking group of the molecule.
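The exponential version of that length-scaling fit works the same way. Here is a minimal sketch with entirely synthetic numbers (the decay constant and contact conductance below are assumptions for illustration, not values from the papers linked above):

```python
import numpy as np

# Synthetic single-molecule data: conductance G (in units of the conductance
# quantum G0) for molecules of length d (angstroms). Non-resonant tunneling
# predicts G = G_c * exp(-beta * d), so ln(G) vs. d is a straight line:
# the slope gives the tunneling decay constant beta, and the intercept
# (extrapolated to d -> 0) gives the contact conductance G_c.
d_A = np.array([5.0, 7.5, 10.0, 12.5])
G = 0.1 * np.exp(-0.8 * d_A)   # assumed: G_c = 0.1 G0, beta = 0.8 per angstrom

neg_beta, ln_Gc = np.polyfit(d_A, np.log(G), 1)
beta = -neg_beta
G_contact = np.exp(ln_Gc)

print(f"beta = {beta:.2f} per angstrom")
print(f"contact conductance = {G_contact:.3f} G0")
```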

Hopefully these three posts have clarified a little the issue of contact effects in electronic devices - why they are not trivial to characterize, and how they may actually tell you interesting things.

## Friday, July 05, 2013

### Contacts, part deux

I will make an argument now that contact resistances are much maligned; instead of working so hard to avoid them, we should look for opportunities (with well defined, reproducible contact interfaces) when they can actually tell us something. I'll punctuate this with some papers from our own group and areas I happen to know, but that's only because those are the examples that come to my mind.

What happens when you try to inject charge from a metal into a hopping conductor - a material with some energy-dependent density of localized states? Many organic semiconducting polymers are such systems. In this situation, an injected charge carrier faces a competition between diffusion away into the channel by hopping, and an attraction to its own image charge in the metal. The rather odd result is that this contact often tends to be Ohmic (in the sense that the contact voltage is directly proportional to the current), but the contact resistance ends up being inversely proportional to the mobility of the charge in the channel. This is true even when the metal Fermi level lies somewhere in the tail of the band (a situation where you would expect a Schottky contact in a nonhopping semiconductor). We ran into this here, and systematically varied the contact resistance by using surface chemistry to adjust the energetic alignment.

In correlated materials, the situation may seem tantalizing yet hopeless. On the one hand, you know something interesting must happen when charge is injected into the material - carriers in the metal are boring, electron-like quasiparticles, while charge excitations in the correlated system could in principle be very different, with fractional charge or spin-charge separation. On the other hand, depending on the bulk properties and ability to make reproducible contacts, it can be very hard to extract useful information from contact resistances in these systems. We did get lucky, and found that in magnetite, conduction in both the high temperature (short-range ordered) state and the low temperature (long-range ordered) state seems to be through hopping, similar to the description above. I definitely think that there is a lot more to be done in such materials by using contact effects as a tool rather than avoiding them.

In the world of molecular junctions, often one is in the limit where the device is "all contact", in the sense that the "bulk" is only a couple of nanometers and a few atoms. Next time I'll talk about some great measurements by others in these systems, as part of a discussion on how one can measure contact resistance.

## Thursday, July 04, 2013

### Contacts - annoying or an opportunity

In condensed matter physics, often we are interested in the flow of charge from a "source" electrode, through some material (the "channel", probably a different material than the source), and away into a "drain" electrode.  When the source, channel, and drain are all metals, life is simple.  While there might be some mismatch in the electrical conductivities of the materials, in the end an electron can go from some delocalized (extended, wavelike) state in the source, into such a state in the channel, and then into such a state in the drain, smoothly.   Because of the discontinuity in electronic band structure and dielectric properties, there is some reflection at the interface, leading to a contact resistance.  That is, some fraction of the applied voltage, linearly proportional to the applied voltage, is dropped across the source-channel contact, and some across the channel-drain contact.  This is an example of an Ohmic contact.

However, the situation can be more complicated.  If the channel is a crystalline semiconductor, the Fermi level of the metal usually winds up sitting somewhere in the band gap.  If appropriate band bending takes place, there can then be an energy barrier (a Schottky barrier) for injection of charge from the metal into the semiconductor.   The spatial width of the barrier depends on the level of doping in the semiconductor, with higher doping leading to a narrower (though not necessarily shorter) barrier.   In this case, the current-voltage characteristic of the contact is not Ohmic, and looks instead like a diode, because the applied bias changes the shape of the barrier.  To avoid this in transistors, the regions of the channel where the source and drain contact it are very highly doped.  Even in this case, though, we are still assuming that the actual electronic states are extended, delocalized things.
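For concreteness, here is a minimal sketch of the textbook thermionic-emission expression for current across a Schottky barrier; the barrier height and the Richardson-constant value below are illustrative assumptions, not numbers for any particular junction:

```python
import numpy as np

# Textbook thermionic-emission form for a Schottky contact:
#   J = J_s * [exp(V / kT) - 1],  with  J_s = A* T^2 exp(-phi_B / kT)
# (energies and voltages in eV/volts). Unlike an Ohmic contact, the
# current depends exponentially on the applied bias.
KT_EV = 0.0259    # thermal energy at ~300 K, in eV
T = 300.0         # temperature in K
A_STAR = 1.2e6    # Richardson constant, A m^-2 K^-2 (free-electron value)
PHI_B = 0.6       # assumed barrier height in eV

def schottky_current_density(V):
    J_s = A_STAR * T**2 * np.exp(-PHI_B / KT_EV)   # saturation current density
    return J_s * (np.exp(V / KT_EV) - 1.0)

# Strongly asymmetric, diode-like response: forward bias passes
# enormously more current than the same reverse bias.
forward = schottky_current_density(+0.3)
reverse = schottky_current_density(-0.3)
print(f"|J(+0.3 V)| / |J(-0.3 V)| ~ {forward / abs(reverse):.2e}")
```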

The situation gets more complicated when the channel does not have delocalized states near the Fermi level.

Usually experiments are designed to mitigate contact effects, either by avoiding measurements of the contact voltages (so-called four terminal measurements) or by making the contact contribution negligible compared to the bulk channel.  However, it turns out that sometimes contact effects can provide valuable insights into charge transport properties in the bulk.  I'll write more soon about this.

## Monday, June 24, 2013

### Timescales, averaging, and baseball

Please pardon the summer blogging slowdown - it's been a surprisingly busy couple of weeks, between an instructor search, working on papers, proposal stuff, and trying to write more on my big long-term project.

Thanks to an old friend for pointing me to this link, which does a great job looking at why a knuckleball is so erratic in its flight from pitcher to batter.  For non-Americans:  In baseball, a pitcher throws a ball to a catcher, while a batter attempts to hit the ball.  There are several types of pitches, depending on the pitcher's grip on the ball (which has seams due to the stitching that holds the leather cover on), the throwing motion, and the release.  A fastball can reach speeds in excess of 100 mph (161 kph) and typically spins more than 1000 rpm.  In contrast, a knuckleball can drift by the batter at a leisurely 70 mph yet be nearly unhittable because of its erratic motion.   A knuckleball barely spins, so that it may complete only 1-2 revolutions from leaving the pitcher's hand to reaching the batter.  This means that the positioning of the seams is absolutely critical to determining the aerodynamics of the motion, and no two knuckleballs move the same way.  In physics lingo, a knuckleball has almost none of the orientational averaging that happens in basically every other pitch.  I propose the definition of a new dimensionless parameter, the Wakefield number, $W$, that is the ratio of the ball's period of revolution to its time-of-flight from pitcher to batter.   A knuckleball is a pitch with $W \sim 1$.
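A back-of-the-envelope estimate, using the round numbers above and the standard pitcher-to-plate distance of about 18.4 m (60.5 ft):

```python
# Rough estimate of the proposed dimensionless "Wakefield number"
# W = (ball's period of revolution) / (time of flight to the batter).
DISTANCE_M = 18.4   # pitcher-to-plate distance, ~60.5 ft

def wakefield(speed_mph, spin_rpm):
    speed_ms = speed_mph * 0.44704           # mph -> m/s
    time_of_flight = DISTANCE_M / speed_ms   # seconds
    period = 60.0 / spin_rpm                 # seconds per revolution
    return period / time_of_flight

# Fastball: ~100 mph at well over 1000 rpm -- many revolutions in flight
W_fastball = wakefield(100.0, 1200.0)

# Knuckleball: ~70 mph completing ~1.5 revolutions over the whole flight,
# i.e. roughly 150 rpm
W_knuckleball = wakefield(70.0, 150.0)

print(f"W (fastball)    ~ {W_fastball:.2f}")     # well below 1
print(f"W (knuckleball) ~ {W_knuckleball:.2f}")  # of order 1
```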

## Friday, June 14, 2013

### Come on, PRL editors.

I rarely criticize papers.  I write this not to single out the authors (none of whom I know), nor to criticize the actual science (which seems very interesting) but to ask pointedly:  How did the editors of PRL, a journal that allegedly prizes readability by a general physics audience, allow this to go through in its current form?  This paper is titled "Poor Man’s Understanding of Kinks Originating from Strong Electronic Correlations".  A natural question would be, "Kinks in what?".  Unfortunately, the abstract doesn't say.  Worse, it refers to "the central peak".  Again, a peak in what?!   Something as a function of something, that's for sure.

Come on, editors - if you are going to let articles be knocked from PRL contention because they're "more suitable for a specialized journal", that obligates you to make sure that the papers you do print at least have titles and abstracts that are accessible.  I'm even a specialist in the field and I wasn't sure what the authors were talking about (some spectral density function?) based on the title and abstract.

The authors actually do a good job explaining the issue in the very first sentence of the paper:  "Kinks in the energy vs. momentum dispersion relation indicate deviations from a quasiparticle renormalization of the noninteracting system."   That should have been the first sentence in the abstract.  In a noninteracting system, the relationship between energy and momentum of particles is smooth.  For example, for a free electron, $E = p^{2}/2m$ where $m$ is the mass.  In an ordinary metal (where Fermi liquid theory works), you can write a similar smooth relationship for the energy vs. momentum relationship of the quasiparticles. Kinks in that relationship, as the authors say, "provide valuable information of many-body effects".
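As a cartoon of what such a kink looks like (my own toy construction, not the model in the paper): take a free-particle dispersion and enhance the mass below some crossover energy scale $\omega_0$, joining the two branches continuously so that only the slope changes abruptly.

```python
import numpy as np

# Toy "kinked" dispersion: E = hbar^2 k^2 / 2m, mass-enhanced by a factor
# lam below a crossover energy omega_0. The shifted bare branch above the
# crossover is chosen so the two branches meet continuously, leaving a
# sharp change of slope -- the kink -- at the crossover.
HBAR2_2M = 1.0   # work in units where hbar^2 / 2m = 1
OMEGA0 = 0.5     # crossover energy scale (arbitrary)
LAM = 3.0        # assumed mass enhancement m*/m below omega_0

def E_measured(k):
    E_bare = HBAR2_2M * np.asarray(k)**2
    # renormalized branch below the kink, shifted bare branch above it
    return np.where(E_bare < LAM * OMEGA0,
                    E_bare / LAM,
                    E_bare - (LAM - 1.0) * OMEGA0)

k_kink = np.sqrt(LAM * OMEGA0 / HBAR2_2M)   # momentum where the slope changes
print(f"kink at k = {k_kink:.3f}, E = {float(E_measured(k_kink)):.3f}")
```

Plotting `E_measured(k)` against `k` gives exactly the kind of energy-vs-momentum curve the abstract should have named: smooth except for a slope change at $E = \omega_0$.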