Ticking clocks


by Iulia Georgescu, Nature Physics 13, 529 (2017) doi:10.1038/nphys4169 (sci-hub)

Special relativity assumes that the laws of physics are the same in all reference frames, a principle known as Lorentz invariance. This principle has been subject to numerous experimental tests, but no sign of Lorentz violation has yet been spotted: either a reassuring or a disappointing revelation, depending on your stance. These results are now reinforced by a new test using a fibre network of optical clocks, which tightens the existing bound on Lorentz violation in experiments measuring time dilation.
Pacôme Delva and colleagues used strontium optical lattice clocks located at the LNE-SYRTE, Observatoire de Paris in France, the National Metrology Institute in Germany and the National Physical Laboratory in the UK and connected via state-of-the-art optical fibre links. Looking at the frequency difference between the clocks, they were able to test whether time dilation varies between the reference frames of the three geographically remote locations. This approach improves on previous tests — including other atomic clock comparison experiments — by two orders of magnitude. Moreover, it is only limited by technical noise sources, so further improvements are certainly possible.


Transparent perfect mirror


by Rachel Won, Nature Photonics 11, 331 (2017) doi:10.1038/nphoton.2017.90 (sci-hub, paper)

Transparent ‘perfect’ mirrors — one-way mirrors that transmit or reflect light completely depending on the direction of view — are useful for security, privacy and camouflage purposes. However, current designs are not perfectly reflective. Now, Ali Jahromi and colleagues from the USA and Finland have demonstrated a new design based on a non-Hermitian configuration — an active optical cavity — that may overcome this limitation. At a critical value of prelasing gain, termed Poynting’s threshold, all remnants of the cavity’s structural resonances disappear from the reflected signal. At this point, the reflection becomes spectrally flat: light incident on the cavity is 100% reflected at all wavelengths across the gain bandwidth, independently of the reflectivities of the cavity mirrors. The device at Poynting’s threshold thus becomes indistinguishable from a perfect mirror. The researchers have confirmed these predictions in an integrated on-chip active semiconductor waveguide device and in an all-optical-fibre system. They note that Poynting’s threshold is, however, dependent on polarization and incidence angle, and that observing the reflection of coherent pulses may reveal the cavity structure via its decay time. Since the concept of Poynting’s threshold is a universal wave phenomenon, it can be exploited in many areas including microwaves, electronics, acoustics, phononics and electron beams.

Indefinite causality


Causality is a concept deeply rooted in our understanding of the world and lies at the basis of the very notion of time. It plays an essential role in our cognition — enabling us to make predictions, determine the causes of certain events, and choose the appropriate actions to achieve our goals. But even in quantum mechanics, which has forced us to rethink the very notions of measurement and preparation, the assumption of a pre-existing causal structure had never been challenged — until now.
Giulia Rubino and colleagues have designed an experiment to show that causal order can be genuinely indefinite. By creating wires between a pair of operating gates whose geometry is controlled by a quantum switch — the state of a single photon — they realized a superposition of gate orders. From the output, they measured the so-called causal witness, which specifies whether a given process is causally ordered or not. The result brings a new set of questions to the fore — namely, where does causal order come from, and is it a necessary property of nature?

via Nature Physics (sci-hub)

A quantum theory for thrones fans



Sydney University’s delightful video in which academics predict who is going to win the Game of Thrones based on their disciplinary knowledge has had 62,500 Facebook likes, 900 YouTube hits and 10,000 Twitter impressions. The university has now uploaded the full five-minute video of Michael Biercuk’s quantum theory, which predicted a major event from the finale before it aired: ‘Tommen’s gotta die’. Biercuk has since been asked for further quantum physics theories, including how Bran can see into and interact with the past. The uni obviously harbours some hard-core GoT fans. Back in 2014 it produced a video of Amy Johansen playing the GoT theme on the carillon, which was even watched by Davos Seaworth from the show.

via The Australian

Dawn of the quark ages



by Michael Brooks from New Scientist 3024, 6 June 2015


Ask them to name their heart’s truest desire, and many a science nut might say the answer to life, the universe and everything – or, failing that, a fully functioning lightsaber.
Odd, then, that one field of scientific enquiry that could conceivably provide both gets so little press. After all the hoopla of the past few years, you could be forgiven for believing that understanding matter’s fundamentals is all about the Higgs boson – the “God particle” that explains where mass comes from.
The Higgs is undoubtedly important. But it is actually pretty insignificant for real stuff like you and me, accounting for just 1 or 2 per cent of normal matter’s mass. And the huge energy needed to make a Higgs means we’re unlikely to see technology exploiting it any time soon.
Two more familiar, though less glamorous, particles might offer more. Get to grips with their complexities, and we can begin to explain how the material universe came to exist and persist, and explore mind-boggling technologies: not just lightsabers, but new sorts of lasers and materials to store energy, too. That’s easier said than done, granted – but with a lot of computing muscle, it is what we are starting to do.
Chances are you know about protons and neutrons. Collectively known as nucleons, these two particles make up the nucleus, the meaty heart of the atom. (In terms of mass, the weedy electrons that orbit the nucleus are insignificant contributors to the atom.)
The headline difference between protons and neutrons is that protons have a positive electrical charge, whereas neutrons are neutral. But they also differ ever so slightly in mass: in the units that particle physicists use, the neutron weighs in at 939.6 megaelectronvolts (MeV) and the proton at 938.3 MeV.
That’s a difference of just 0.14 per cent, but boy does it matter. The neutrons’ extra mass means they decay into protons, not the other way around. Protons team up with negatively charged electrons to form robust, structured, electrically neutral atoms, rather than the world being a featureless neutron gloop.
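For the numerically inclined, that 0.14 per cent is a one-liner to check (a quick Python sketch using only the two masses quoted above):
```python
# Neutron and proton masses quoted above, in MeV.
m_neutron = 939.6
m_proton = 938.3

# Relative mass difference: about 0.14 per cent, as stated in the text.
delta = (m_neutron - m_proton) / m_neutron
print(f"relative difference: {delta:.2%}")  # -> 0.14%
```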
“The whole universe would be very different if the proton were heavier than the neutron,” says particle theorist Chris Sachrajda of the University of Southampton in the UK. “The proton is stable, so atoms are stable and we’re stable.” Our current best guess is that the proton’s half-life, a measure of its stability over time, is at least 10³² years. Given that the universe only has 10¹⁰ or so years behind it, that is a convoluted way of saying no one has ever seen a proton decay.
The exact amount of the neutron’s excess baggage matters, too. The simplest atom is hydrogen, which is a single proton plus an orbiting electron. Hydrogen was made in the big bang, before becoming fuel for nuclear fusion in the first stars, which forged most of the other chemical elements. Had the proton–neutron mass difference been just a little bigger, adding more neutrons to make more complex elements would have encountered energy barriers that were “difficult or impossible” to overcome, says Frank Wilczek of the Massachusetts Institute of Technology. The universe would be stuck at hydrogen.
But had the mass difference been subtly less, hydrogen would have spontaneously changed to the more inert, innocuous helium before stars could form – and the cosmos would have been an equally limp disappointment. Narrow the gap further, and hydrogen atoms would have transformed via a process called inverse beta decay into neutrons and another sort of neutral particle, the neutrino. Bingo, no atoms whatsoever.
All of that leads to an unavoidable conclusion about the proton and neutron masses. “Without these numbers, people wouldn’t exist,” says Zoltán Fodor of the University of Wuppertal, Germany.
But where do they come from?
The question is fiendishly difficult to answer. We’ve known for half a century that protons and neutrons are not fundamental particles, but made of smaller constituents called quarks. There are six types of quark: up, down, strange, charm, bottom and top. The proton has a composition of up-up-down, while the neutron is up-down-down.

A full explanation of where stuff gets its mass from is buried deep in the atomic nucleus

Down quarks are slightly heavier than up quarks, but don’t expect that to explain the neutron’s sliver of extra mass: both quark masses are tiny. It’s hard to tell exactly how tiny, because quarks are never seen singly, but the up quark has a mass of something like 2 or 3 MeV, and the down quark maybe double that – just a tiny fraction of the total proton or neutron mass.
Like all fundamental particles, quarks acquire these masses through interactions with the sticky, all-pervasive Higgs field, the thing that makes the Higgs boson. But explaining the mass of matter made of multiple quarks clearly needs something else.
The answer comes by scaling the sheer cliff face that is quantum chromodynamics, or QCD. Just as particles have an electrical charge that determines their response to the electromagnetic force, quarks carry one of three “colour charges” that explain their interactions via another fundamental force, the strong nuclear force. QCD is the theory behind the strong force, and it is devilishly complex.
Electrically charged particles can bind together by exchanging massless photons. Similarly, colour-charged quarks bind together to form matter such as protons and neutrons by exchanging particles known as gluons. Although gluons have no mass, they do have energy. What’s more, thanks to Einstein’s famous E = mc², that energy can be converted into a froth of quarks (and their antimatter equivalents) beyond the three normally said to reside in a proton or neutron. According to the uncertainty principle of quantum physics, these extra particles are constantly popping up and disappearing again.
To try and make sense of this quantum froth, over the past four decades particle theorists have invented and refined a technique known as lattice QCD. In much the same way that meteorologists and climate scientists attempt to simulate the swirling complexities of Earth’s atmosphere by reducing it to a three-dimensional grid of points spaced kilometres apart, lattice QCD reduces a nucleon’s interior to a lattice of points in a simulated space-time tens of femtometres across. Quarks sit at the vertices of this lattice, while gluons propagate along the edges. By summing up the interactions along all these edges, and seeing how they evolve step-wise in time, you begin to build up a picture of how the nucleon works as a whole.
Trouble is, even with a modest number of lattice points – say 100 by 100 by 100 separated by one-tenth of a femtometre – that’s an awful lot of interactions, and lattice QCD simulations require a screaming amount of computing power. Complicating things still further, because quantum physics offers no certain outcomes, these simulations must be run thousands of times to arrive at an “average” answer. To work out where the proton and neutron masses come from, Fodor and his colleagues had to harness two IBM Blue Gene supercomputers and two suites of cluster-computing processors.
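To get a feel for the computational pattern (variables on a grid, averaged over thousands of randomized configurations), here is a deliberately cartoonish Python sketch. It is not QCD in any sense: random U(1) phases stand in for the SU(3) gluon fields and there is no physical action, but it shows why the cost balloons with lattice size and why many configurations are needed to beat the statistical noise.
```python
import numpy as np

rng = np.random.default_rng(0)
L = 8              # lattice sites per dimension (real simulations use far more)
n_configs = 1000   # number of random configurations to average over

def toy_observable(links):
    # 'links' holds a complex phase on each of the 3 outgoing links per site.
    # Measure the average real part of a product of two neighbouring links,
    # a toy stand-in for the 'plaquette' measured in real lattice QCD.
    u = links[..., 0]
    v = np.roll(links[..., 1], shift=1, axis=0)
    return np.mean((u * np.conj(v)).real)

samples = []
for _ in range(n_configs):
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(L, L, L, 3))
    links = np.exp(1j * phases)   # one random U(1) phase per link
    samples.append(toy_observable(links))

mean = np.mean(samples)
err = np.std(samples) / np.sqrt(n_configs)
# The statistical error shrinks only as 1/sqrt(n_configs) -- hence the
# thousands of runs, and the supercomputers, mentioned above.
print(f"observable = {mean:.4f} +/- {err:.4f}")
```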
The breakthrough came in 2008, when they finally arrived at a mass for both nucleons of 936 MeV, give or take 25 MeV – pretty much on the nose (Science, vol 322, p 1224). This confirmed that the interaction energies of quarks and gluons make up the lion’s share of the mass of stuff as we know it. You might feel solid, but in fact you’re 99 per cent energy.
But the calculations were nowhere near precise enough to pin down that all-important difference between the proton and neutron masses, which was still 40 times smaller than the uncertainty in the result. What’s more, the calculation suffered from a glaring omission: the effects of electrical charge, which is another source of energy, and therefore mass. All the transient quarks and antiquarks inside the nucleon are electrically charged, giving them a “self-energy” that makes an additional contribution to their mass. Without taking into account this effect, all bets about quark masses are off. Talk about one compound particle being more massive than another because of a difference in quark masses is a “crude caricature”, says Wilczek, who won a share of a Nobel prize in 2004 for his part in developing QCD.
The subtle roots of the proton-neutron mass difference lie in solving not just the equations of QCD, but those of quantum electrodynamics (QED), which governs electromagnetic interactions. And that is a theorist’s worst nightmare. “It’s awfully difficult to have QED and QCD in the same framework,” says Fodor. The electromagnetic self-energy can’t even be calculated directly. In a limited lattice simulation, its interactions create an infinity – a mathematical effect rather like a never-ending reverberation inside a cathedral.
Fodor and his colleagues’ new workaround involves solving the QED equations for various combinations of quarks inside different subatomic particles. The resulting subtle differences are used to replace the results of calculations that would invoke infinities, and so grind out a value for the proton-neutron mass difference (Science, vol 347, p 1452).
The figure the team came up with is in agreement with the measured value, although the error on it is still about 20 per cent. It is nonetheless “a milestone”, says Sachrajda. Wilczek feels similarly. “I think it’s exciting,” he says. “It’s a demonstration of strength.”
You might be forgiven for wondering what we gain by calculating from first principles numbers we already knew. But quite apart from this particular number’s existential interest, for Wilczek the excitement lies in our ability now to calculate very basic things about how the universe ticks that we couldn’t before.
Take the processes inside huge stars that go supernova – the events that first seeded the universe with elements heavier than hydrogen and helium. Our inability to marry QED and QCD meant we couldn’t do much more than wave our hands at questions such as the timescale over which heavy elements first formed – and we couldn’t make a star to test our ideas. “Conditions are so extreme we can’t reproduce them in the laboratory,” says Wilczek. “Now we will be able to calculate them with confidence.”
The advance might help clear up some of the funk surrounding fundamental physics. The Large Hadron Collider’s discovery in 2012 of the Higgs boson, and nothing else so far, leaves many open questions. Why did matter win out over antimatter just after the big bang (New Scientist, 23 May, p 28)? Why do the proton and electron charges mirror each other so perfectly when they are such different particles? “We need new physics, and simulations like ours can help,” says Kálmán Szabó, one of Fodor’s Wuppertal collaborators. “We can compare experiment and our precise theory and look for processes that tell us what lies beyond standard physics.”

An open road

For Sachrajda, this kind of computational capability comes at just the right time, as the LHC fires up again to explore particle interactions at even higher energies. “We all hope it will give an unambiguous signal of something new,” he says. “But you’re still going to have to understand what the underlying theory is, and for that you will need this kind of precision.”
If that still sounds a little highfalutin, it’s also worth considering how modern technologies have sprung from an ever deeper understanding of matter’s workings. A century or so ago, we were just getting to grips with the atom – an understanding on which innovations such as computers and lasers were built. Then came insights into the atomic nucleus, with all the technological positives and negatives – power stations, cancer therapies, nuclear bombs – those have brought.
Digging down into protons and neutrons means taking things to the next level, and a potentially rich seam to mine. Gluons are far more excitable in their interactions with colour charge than are photons in electromagnetic interactions, so it could be that manipulating colour-charged particles yields vastly more energy than fiddling with things on the atomic scale. “I think the possibility of powerful X-ray or gamma-ray sources exploiting sophisticated nuclear physics is speculative, but not outrageously so,” says Wilczek.
Gluons, unlike photons, also interact with themselves, and this could conceivably see them confining each other into a writhing pillar of energy – hence Wilczek’s tongue-in-cheek suggestion they might make a Star Wars-style lightsaber. More immediate, perhaps, is the prospect of better ways to harness and store energy. “Nuclei can pack a lot of energy into a small space,” says Wilczek. “If we can do really accurate nuclear chemistry by calculation as opposed to having hit-and-miss experiments, it could very well lead to dense energy storage.”
For Fodor, that’s still a long way off – but with the accuracy that calculations are now reaching, the road is at last open. “These are mostly dreams today, but now we can accommodate the dreams, at least,” he says. “You’ve reached a level where these technological ideas might be feasible.”
Welcome, indeed, to the quark ages.

From generation to generation


by Robert Kowalewski from Nature Physics 11, 705–706 (2015) doi:10.1038/nphys3464


A new measurement from the LHCb experiment at CERN’s Large Hadron Collider impinges on a puzzle that has been troubling physicists for decades, namely the breaking of the symmetry between matter and antimatter.

Experimental constraints on the unitarity triangle. Each band shows the allowed region (at 95% confidence level, CL) based on specific measured quantities. The quantities η and ρ are functions of the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements, which allow the triangle to have a base of unit length oriented along the ρ axis. The angles α, β and γ correspond to the blue and tan bands, and are measured from matter-antimatter-violating asymmetries in B meson decay. The circular arcs centred on (1,0) show the constraints from the mass differences, Δmd and Δms, measured in studies of B–B̄ oscillations. Measurements of matter-antimatter violation in the kaon system determine εK, which is a measure of the admixture of the CP-even eigenstate in the long-lived neutral kaon, and result in the green band. The dark green semi-circle centred on (0,0) shows the constraint from the measurement of the ratio |Vub|/|Vcb|, where Vub describes the transition of a b quark to a u quark. Image courtesy of the CKMfitter group.

We learn early that the matter in and around us is made up of three particles: electrons, and the up and down quarks found in nuclei. Add in the electron neutrino and we also account for nuclear fission and fusion and the stellar furnace that fuels life on Earth. But nature is not that simple. It replicates this four-particle structure in ‘generations’ of heavier, but otherwise similar, particles. The first evidence for this was the discovery of the muon in 1936. Other second-generation particles were subsequently discovered, as was another unexpected phenomenon: the violation of matter-antimatter (CP) symmetry in neutral kaons(1). Now, writing in Nature Physics, the LHCb collaboration(2) provides fresh evidence to fuel the ongoing discussion surrounding CP violation.
In 1973, Makoto Kobayashi and Toshihide Maskawa proposed a mechanism whereby mixing between the mass and weak eigenstates of quarks would, if there were three generations, result in an irreducible complex phase that could be responsible for CP violation(3).
The discovery of the first third-generation particle, the tau lepton(4), came a year later, followed in 1977 by the discovery of the third-generation ‘b’ quark(5). With the advent of high-intensity electron-positron colliders at the start of the twenty-first century, studies of CP violation in the decays of B mesons (which contain a b quark) at the BaBar and Belle experiments validated Kobayashi and Maskawa’s proposal, for which they shared the 2008 Nobel Prize in Physics.
The CKM matrix – introduced by Kobayashi and Maskawa, following the formative work of Nicola Cabibbo – describes the mixing of quark mass and weak eigenstates in the standard model of particle physics. It is unitary and can be fully specified with four parameters: three real angles and one complex phase. This unitarity condition is the basis for a set of testable constraints in the form of products of complex numbers that sum to zero – for example, V*ud Vub + V*cd Vcb + V*td Vtb = 0, where Vub describes the transition of a b quark to a u quark. The triangle in Fig. 1 provides a convenient graphical representation of this equation. The unitarity condition connects a large set of measurable quantities in the standard model, including CP-violating asymmetries, which depend on the complex phase, and mixing strengths, which are magnitudes such as |Vub| and |Vcb|. In the standard model, all the bands corresponding to the different measurements in Fig. 1 should overlap at a unique point, which they do at the current level of precision. The presence of new particles or interactions would contribute to these measurable quantities in different ways, resulting in bands that fail to converge at a point. The ratio of matrix elements |Vub|/|Vcb| corresponds to the length of the side of the ‘unitarity triangle’ opposite the angle labelled β, which is well determined from measured CP-violating asymmetries. The precise determination of this ratio is a crucial ingredient in providing sensitivity to new particles and interactions.
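To make the unitarity constraint concrete, here is a small numerical check (Python/NumPy). The CKM matrix is built in the standard three-angle, one-phase parameterization; the angle values below are illustrative placeholders of roughly the measured size, not fitted values.
```python
import numpy as np

# Illustrative mixing parameters (roughly the measured magnitudes).
s12, s13, s23, delta = 0.225, 0.0037, 0.042, 1.2
c12, c13, c23 = np.sqrt(1 - s12**2), np.sqrt(1 - s13**2), np.sqrt(1 - s23**2)
e = np.exp(1j * delta)

# CKM matrix in the standard parameterization.
V = np.array([
    [c12 * c13,                         s12 * c13,                         s13 / e],
    [-s12 * c23 - c12 * s23 * s13 * e,  c12 * c23 - s12 * s23 * s13 * e,   s23 * c13],
    [s12 * s23 - c12 * c23 * s13 * e,   -c12 * s23 - s12 * c23 * s13 * e,  c23 * c13],
])

# Unitarity: V-dagger times V should be the identity...
print(np.allclose(V.conj().T @ V, np.eye(3)))   # True

# ...so the triangle V*ud Vub + V*cd Vcb + V*td Vtb closes exactly.
triangle = np.vdot(V[:, 0], V[:, 2])            # sums conj(d-column) * b-column
print(abs(triangle))                            # ~0, up to rounding error
```
New physics entering these measured quantities in different ways would show up as a triangle that fails to close, which is exactly the convergence test described above.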
Experiments at electron-positron colliders have measured |Vub| and |Vcb| for many years using two complementary methods based on the decays of a B meson to an electron or muon, its associated neutrino and one or more strongly interacting particles. The first method measures exclusive final states whose decay rates are proportional to |Vqb|² (where q = u, c), and uses lattice quantum chromodynamics (QCD) calculations of form factors to determine |Vqb|. The second, inclusive method requires only the presence of an electron or muon and sums over many exclusive final states. These summed rates are also proportional to |Vqb|²; the determination of |Vqb| in this case relies on perturbative QCD calculations and auxiliary measurements. Although these two methods have improved significantly in precision over the years, the values determined for both |Vub| and |Vcb| from the inclusive method persistently exceed those from the exclusive method by two to three standard deviations. This has prompted speculation that the familiar left-handed charged weak interaction has a right-handed counterpart that contributes to this difference.
With this backdrop, the new measurement of the ratio |Vub|/|Vcb| from the LHCb experiment at CERN’s Large Hadron Collider (LHC) is a welcome addition to the literature(2). It is based on a different exclusive decay mode than can be measured at the electron-positron collider experiments, namely that of a baryon containing b, u and d quarks (a heavier version of the neutron) that decays into a proton, a muon and a neutrino. Particle physicists have been surprised that these decays, where the missing neutrino prevents reliance on kinematic constraints, can be distinguished from the huge background inherent in proton-proton collisions at the LHC. This new result, which makes use of very precise spatial measurements of the decay vertices of short-lived particles and uses innovative analysis techniques, is a noteworthy achievement.
What have we learned? The new experimental information, instead of resolving the inclusive-exclusive puzzle, deepens it. The measurement and corresponding lattice QCD calculation lead to a value for |Vub|/|Vcb| that is lower than both the pre-existing exclusive and inclusive determinations. The consistency of the three determinations with a single value is only 1.8%, indicating that particle physicists have more work to do in this area. On a more positive note, the LHCb measurement, when combined with previous measurements, strongly disfavours the hypothesis of a right-handed weak interaction.


(1) Christenson, J. H., Cronin, J. W., Fitch, V. L. & Turlay, R. Phys. Rev. Lett. 13, 138–140 (1964).
(2) The LHCb collaboration. Nature Phys. 11, 743–747 (2015).
(3) Kobayashi, M. & Maskawa, T. Prog. Theor. Phys. 49, 652–657 (1973).
(4) Perl, M. L. et al. Phys. Rev. Lett. 35, 1489–1492 (1975).
(5) Herb, S. W. et al. Phys. Rev. Lett. 39, 252–255 (1977).

Evaporation drives engine


from Nature 522, 259 (18 June 2015) doi:10.1038/522259b


An engine fuelled only by water evaporation can power a miniature car and lights.


Ozgur Sahin at Columbia University in New York and his colleagues applied bacterial spores to thin plastic strips. The spores absorb and release water with changes in relative humidity, so the strips curl and straighten. The team stacked the strips and formed them into a water-containing engine in which the strips were exposed to recurring periods of high and low humidity, acting like oscillators to power the engine. When attached to a generator, the engine powered light-emitting diodes. A rotary version attached to two pairs of wheels pushed a 100-gram car forwards.
The engine could be used in devices in areas that have scarce electricity, the authors say.


P.S.: interesting: the news is closed-access, but the paper is open!

Two-atom bunching


by Lindsay J. LeBlanc from Nature 520, 36–37 (02 April 2015) doi:10.1038/520036a


The Hong–Ou–Mandel effect, whereby two identical quantum particles launched into the two input ports of a ‘beam-splitter’ always bunch together in the same output port, has now been demonstrated for helium-4 atoms.
All particles, including photons, electrons and atoms, are described by a characteristic list of ‘quantum numbers’. For a pair of particles whose lists match, there is no way of telling them apart — they are perfectly indistinguishable. One of the more intriguing consequences of quantum mechanics arises from this indistinguishability, and was exemplified(1) in an experiment by Hong, Ou and Mandel (HOM) in the 1980s. The researchers showed that, although a single photon approaching an intersection along one of two input paths exits in one of two output paths with equal probability, identical pairs brought to the intersection simultaneously from different paths always exit together. Lopes et al.(2) now demonstrate this manifestation of two-particle quantum interference for two identically prepared — and thus indistinguishable — helium-4 atoms. The result provides an opportunity to extend advances made in quantum optics to the realm of atomic systems, especially for applications in quantum information.
As a graduate student faced with finding a wedding present for my labmate, I decided that the HOM experiment was a fitting analogy to marriage: from two separate paths, this couple’s lives were intersecting and would continue along a single path together. Along with the formalism describing the effect tucked into the card, I gave them a glass ‘beam-splitter’ to represent a key ingredient in the optical demonstration of the effect: this glass cube could act as the intersection, at which half the light incident on any of the four polished faces is transmitted, with the remaining half being reflected; for single particles, the probabilities for transmission and reflection are both 50%. All HOM experiments require a ’50:50 beam-splitting’ mechanism that sends quantum particles incident along one of two input paths to one of two output paths with a 50% probability (Fig. 1a).

Each beam-splitter (blue) is represented as two input paths (left) and two output paths (right); here we consider '50:50' beam-splitters, for which the probability of each output is of equal magnitude. A particle is represented by a red circle, and its wavefunction's phase by the position of the black dot on the grey circle. Individual phases cannot be measured directly. a, Possible outcomes for a single particle entering either of the input paths; the probabilities for particle transmission and reflection are both 50%. In the case of reflection, the phase changes by 90°. b, For incoming particles at both inputs, there are four possible outcomes. However, the overall probability of the outcomes is determined by adding the individual probabilities using rules of quantum mechanics. For bosonic particles such as photons and helium-4 atoms, the subject of Lopes and colleagues' study(2), the first two outcomes (transmit/transmit and reflect/reflect) cancel. The only outcomes remaining are the third and the fourth.

Careful analysis shows that there must be a well-defined relationship between the beam-splitter’s inputs and outputs that is demanded by energy conservation in the classical picture of the beam-splitter(3), or by a property known as unitarity in the quantum view(4): for classical waves, this relationship fixes the relative positions of the output waves’ peaks and valleys with respect to those of the input waves, whereas for quantum particles this relationship manifests as a relative ‘phase’ between the particles’ input and output wavefunctions. Although the probability of finding a particle in a particular output path depends only on the amplitude of its wavefunction, the phase is important when determining the output wavefunction, and corresponding output probability, for two or more particles.
If two particles enter such a 50:50 beam-splitter, naively one would expect one of four possible outcomes: two in which the particles exit along a path together, and two in which they exit along different paths (Fig. 1b). In these cases, the single-particle output-wavefunction phases accumulate in an overall output phase. The HOM result is a consequence of the particles’ indistinguishability, which means that there is no measurable difference between the two outcomes in which the particles exit along different paths. The overall output phases of these indistinguishable outcomes are opposite to each other, and when added together using quantum rules for bosons (particles with integer spin, a quantum property common to both photons and helium-4 atoms), these two possible outcomes interfere and cancel. The only outcomes remaining are those with two particles in a single output. As a result, simultaneous single-particle detections (‘coincidence counts’) at both outputs are forbidden.
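The cancellation is a few lines of amplitude bookkeeping. The Python sketch below assumes a lossless 50:50 beam-splitter with the common convention of a factor-i (90°) phase on reflection, as in Fig. 1:
```python
import numpy as np

t = 1 / np.sqrt(2)    # transmission amplitude
r = 1j / np.sqrt(2)   # reflection amplitude (90-degree phase, one convention)

# Two indistinguishable bosons, one entering each input port.
# A coincidence (one particle in each output) can happen two ways:
# both transmitted (t*t) or both reflected (r*r) -- and they interfere.
coincidence_amp = t * t + r * r
print(abs(coincidence_amp) ** 2)   # 0.0 -> coincidences are forbidden

# Bunching (both particles in the same output) occurs via one transmission
# and one reflection, with a bosonic sqrt(2) enhancement from the norm of
# the two-particle state |2,0> (and likewise |0,2>).
bunch_amp = np.sqrt(2) * t * r
print(2 * abs(bunch_amp) ** 2)     # 1.0 -> the pair always exits together
```
Repeating the arithmetic with a fermionic minus sign makes the coincidence term add instead of cancel, the antibunching counterpart mentioned at the end of this piece.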
Lopes et al. demonstrate two-particle quantum interference with helium-4 atoms. In their experiments, the atoms’ paths are related to their speeds, which are manipulated by selectively transferring momentum to and from light in absorption and emission processes(5, 6). First, the researchers prepared a ‘twin pair’ by removing from an atom reservoir indistinguishable atoms with different speeds. Second, they used light pulses to modify the atoms’ momenta and cause the pair to meet; the atom in the first path travels with velocity v1 and the atom in the second path with v2. A beam-splitting mechanism implemented reflection and transmission by changing the atoms’ speeds with 50% probability from v1 to v2 and vice versa.
The atoms continued to travel until they hit a time-resolved, multipixel atom-counting detector, at which an atom with v1 would arrive at a different time from one with v2. Lopes and colleagues prepared many twin pairs in a short interval and recorded the precise location and timing of the atoms’ arrivals at the detector: a coincident count would be the measurement at a particular location of a particle at time t1 followed by a measurement at t2. Although the researchers found that the arrivals from the many pairs were distributed in two time windows (corresponding to the two output paths), they found a striking lack of instances among these random outcomes when the time difference was exactly t2 − t1, indicating that the atoms from a twin pair must be exiting the beam-splitter with the same velocity. This ‘anticorrelation’ is the signature of a HOM experiment.
As in quantum-optics demonstrations of the HOM effect, the present result demonstrates that pairs of identical, ‘quantum-entangled’ particles have been produced. The unique capabilities of this apparatus, including the combination of condensed metastable helium-4 atoms and the atom-counting detector, offer a spatial and temporal resolution unavailable to others. Protocols for transmitting and processing quantum information, analogous to those used in optical systems, can now be implemented with new capabilities in atomic systems: atoms, unlike photons, may interact with one another, and because they have mass, their mechanical properties, such as momentum, can be varied and used as experimental parameters.
Furthermore, because atoms can also be fermions (particles with half-integer spin, such as electrons), they could exhibit a quantum-interference effect that is the fermionic equivalent of the HOM effect(4). Evidence for this mechanism has already been seen in electronic systems(7). The bosonic HOM effect demonstrated here, and its fermionic counterpart, may offer new possibilities for implementing quantum-information protocols and for exploring the foundations of quantum physics.


(1) Hong, C. K., Ou, Z. Y. & Mandel, L. Phys. Rev. Lett. 59, 2044–2046 (1987).
(2) Lopes, R. et al. Nature 520, 66–68 (2015).
(3) Ou, Z. Y. & Mandel, L. Am. J. Phys. 57, 66 (1989).
(4) Loudon, R. Phys. Rev. A 58, 4904–4909 (1998).
(5) Campbell, G. K. et al. Phys. Rev. Lett. 96, 020406 (2006).
(6) Bonneau, M. et al. Phys. Rev. A 87, 061603 (2013).
(7) Neder, I. et al. Nature 448, 333–337 (2007).

From physics to revolution and back


by Lui Lam from Science 348, 1170 (5 June 2015) doi:10.1126/science.348.6239.1170


Illustration by Robert Neubecker

As a boy, I was not interested in science; I was interested in girls. Upon graduating from high school in Hong Kong, I did not particularly want to work in science; I just wanted a job, because I rarely left the dinner table with my stomach full. For graduate school, I went to Columbia University with a scholarship. There, I was surrounded by Nobel laureates—Isidor Isaac Rabi, Polykarp Kusch, Tsung-Dao Lee—and laureates-in-waiting: James Rainwater, Jack Steinberger, Leon Lederman. The most important moment in my physics education was when I noticed Lee standing next to me in the men’s room, peeing. Nobel laureates are ordinary people, I learned, just like you and me. At Columbia, I was influenced by the student antiwar movement and the Cultural Revolution that was raging in China. I and several others (including Peter Kwong, now at Hunter College, and Jean Quan, who would become the mayor of Oakland, California) started the Chinatown Food Co-op; our aim was to “serve the people.” I wasn’t interested in solving small problems; I wanted to save the world, to return to China and join the revolution.
I finally made it in 1978, at the beginning of the country’s “reform and opening up” movement. I was assigned to do physics at the Chinese Academy of Sciences’ Institute of Physics. In China, the spring of 1978 is called “Science Spring” because for 10 years science had come to a virtual halt. Basic science was banned. My colleagues and I resumed the work with great enthusiasm; if China was in ruins, we figured, the best option was to fix it. I helped open the door to the West, discovering bowlics—a type of liquid crystal—and publishing the first paper by mainland-only authors ever to appear in Physical Review Letters.
We did not have journals—only copies of journals—and there was just one Chinese-made copying machine in the institute, which broke down every half hour or so. Everything was scarce, including tofu and writing paper, but we were not troubled by the minimal calories available to fuel us. We worked diligently, day and night, except on Sundays. Sundays were for washing clothes, by hand.
China started sending scientists to the West—many to the United States—as visiting scholars and to international conferences. Among China’s billion people were dozens of notable physicists, but the number of excellent ones was smaller; I was the only government-approved doctoral mentor in liquid crystal physics. Physics research can’t be learned properly by reading books or papers; it should be learned by working alongside masters. China did have a few masters—those who had been trained in the West before 1949—but they were all busy establishing institutes or working on the atomic bomb.
Today, China is making progress scientifically, but there is still some distance to go. The Cultural Revolution sacrificed a whole generation of scientists. Among all the problems this created, it led to a scarcity of experts who could act as academic judges. Qualitative assessment was replaced by counting papers—a poor substitute. The damage lingers.
Lately, and happily, China’s priorities have shifted toward innovation. Innovation is not easy, though, in a culture that for 2000 years has prided itself on taking the middle course in everything, for the sake of harmony and stability. It’s the opposite of the Socratic method that’s at the heart of Western science. Great scientists like Galileo show that breakthroughs can be made even in environments that emphasize orthodoxy—but then no one was counting Galileo’s papers. China has very good scientists, but the country needs to find its way of doing innovative science.
After 6 years, I left China and returned to the United States—not for scientific or political reasons, or due to material want, but for family. During a visit to the United States with our daughter, my wife announced that she intended to stay. Forced to choose between the motherland and a daughter, I chose my daughter. I left China on good terms.

Squeezed ions in two places at once


by Tracy Northup from Nature 521, 295–296 (21 May 2015) doi:10.1038/521295a


Experiments on a trapped calcium ion have again exposed the strange nature of quantum phenomena, and could pave the way for sensitive techniques to explore the boundary between the quantum and classical worlds.
In Schrödinger’s famous thought experiment, a cat is prepared in a quantum superposition of being both alive and dead by being trapped in a box with a flask of poison. As if that were not enough, the poor cat is now being squeezed too — all in the name of quantum measurement. In laboratory experiments, atoms have been prepared in superpositions of being in two places at once, playfully called Schrödinger’s cat states(1). Lo et al.(2) demonstrate superposition states of a trapped ion in which its position is not only split between two locations, but also squeezed. Squeezing refers to the process of suppressing quantum fluctuations for a particular measurement, such as that of a particle’s position.
Quantum mechanics tells us that the position of a particle (or Schrödinger’s fictitious cat) has an inherent uncertainty even when it is at rest, a feature known as the standard quantum limit. When the particle is prepared in a squeezed state, however, we can pinpoint its position to better than that limit (Fig. 1). There is a price to pay for squeezing, though. When fluctuations in position are squashed down, additional fluctuations arise in the particle’s momentum, such that the product of position and momentum fluctuations still satisfies Heisenberg’s uncertainty relation — which states that there is a fundamental limit to the precision with which a particle’s position and momentum can be simultaneously determined. Nevertheless, by suppressing fluctuations in the quantity that they intend to measure, researchers can improve measurement precision. For example, squeezed states have been used to achieve record sensitivities for one of the detectors at the Laser Interferometer Gravitational-Wave Observatory in Richland, Washington(3).

Every object's momentum and position are subject to fluctuations, which become pronounced on the atomic scale. a, The red circle indicates the uncertainty in position and momentum for a calcium ion (Ca+) in its motional ground state. b, Lo et al.(2) used laser pulses to squeeze fluctuations in position, at the cost of amplifying the fluctuations in momentum. c, They then displaced the ion in opposite directions at once, so that it would be equally likely to be found in one of two distinct states. The squeezing operation provides a better signal-to-noise ratio for the ion's position, so that it is easier to distinguish between the states.

The starting point for Lo and colleagues’ study is a single calcium ion (Ca+) trapped by radiofrequency electromagnetic fields in a vacuum vessel. One can picture the trapped ion as a tiny pendulum oscillating around its equilibrium position. For a quantum pendulum in its lowest energy state, the uncertainties in its position and momentum have equal magnitude. In this case, squeezing corresponds to suppressing position fluctuations at the cost of momentum, or vice versa.
The authors use a set of methods known as laser cooling to bring the ion to its motional ground state(4), and then introduce additional laser fields to squeeze the state, reducing the positional variance by a factor of nine. Although squeezed states of trapped ions were first demonstrated 19 years ago(5), the fidelity with which these delicate states are prepared is highly sensitive to experimental noise, such as fluctuating electric and magnetic fields. The authors used a technique called reservoir engineering, which was previously developed by the same research group(6), to achieve robust, high-fidelity squeezing even in the presence of noise.
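The trade-off is easy to put numbers on. Here is a toy Python calculation in natural units (ħ = 1, dimensionless oscillator coordinates), using the factor-of-nine variance reduction reported by the authors:
```python
import numpy as np

# Motional ground state: position and momentum variances are both 1/2
# (dimensionless units), saturating Heisenberg's bound.
var_x0 = var_p0 = 0.5
print(np.sqrt(var_x0 * var_p0))   # 0.5 = hbar/2: the minimum uncertainty

# Squeezing the positional variance by a factor of nine amplifies the
# momentum variance by the same factor...
squeeze = 9.0
var_x, var_p = var_x0 / squeeze, var_p0 * squeeze

# ...so the uncertainty product is unchanged, while the position spread
# (standard deviation) narrows threefold.
print(np.sqrt(var_x * var_p))     # still 0.5: Heisenberg is satisfied
print(np.sqrt(var_x0 / var_x))    # 3.0 -> position pinned 3x more tightly
```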
With the ion in a squeezed ground state, the next step is to prepare it in a cat-state superposition. Imagine that the ion pendulum is displaced by pulling it to one side, then releasing it; it will swing back and forth with the amplitude that has been imparted. Now imagine pulling the ion to the right and left at the same time: classically this does not make sense, but quantum mechanically it is possible.
The way to do this with a trapped ion is to apply a state-dependent force — a displacement whose direction depends on the spin state of the ion’s outermost electron. When the electron is prepared in a superposition of two spin states, the force acts in an equal and opposite direction on each component. As a result, the ion pendulum’s motion is a superposition of two possible oscillations, each with the same amplitude but in opposite directions. In fact, each motional direction is entangled with the electron’s spin state; that is, one property cannot be described independently of the other.
How distinguishable are the two cat-state components from each other? It depends on whether the initial squeezing was performed on the ion’s position or on its momentum. Lo and colleagues measured and compared the two cases. If momentum fluctuations were suppressed before the cat state was prepared, then the corresponding enhancement in position fluctuations made the spatial separation more difficult to distinguish. By contrast, if the ion’s position was squeezed, then the spatial separation between the components became 56 times larger than the extent of the squeezed positional fluctuations.
It is exactly this amplified sensitivity to spatial separation that makes squeezed states promising for future applications. For example, using cat states, the wave nature of a single ion can be exploited for interferometry. In an interferometer, a wave is split, sent along two paths and finally recombined, providing information about how the paths differ. In a cat state, the ion’s location is split into two superposition components, each of which explores a different path. Thus, if the cat-state components are recombined, the superposition acts as an interferometer, probing path differences. Moreover, an ion is highly sensitive to changes in electric and magnetic fields, which shift its electron energy levels, so an ion interferometer could measure field gradients on the scale of tens of nanometres(7). Squeezed cat states would also be more robust than non-squeezed states to certain types of noise, providing improved sensing capabilities.
Building on established techniques for the precise manipulation of trapped ions, the authors have demonstrated an exciting new capability for both engineering and characterizing quantum states. These states are fascinating, not only as future sensors, but also as a means of exploring the boundary between the quantum and classical worlds. The ion pendulum demonstrated by Lo and colleagues has a position uncertainty of only a few nanometres, but it swings back and forth — in two directions at once — over hundreds of nanometres, a much larger distance than atomic scales. Efforts are under way in many research groups to extend cat-state length scales even further, into truly macroscopic regimes.
Future work with squeezed cat states will continue to characterize their strange, often counter-intuitive, quantum properties. Here, as the authors have shown, single ions provide an exceptional experimental platform on which to do so.


(1) Monroe, C., Meekhof, D. M., King, B. E. & Wineland, D. J. Science 272, 1131–1136 (1996).
(2) Lo, H.-Y. et al. Nature 521, 336–339 (2015).
(3) Aasi, J. et al. Nature Photon. 7, 613–619 (2013).
(4) Leibfried, D., Blatt, R., Monroe, C. & Wineland, D. Rev. Mod. Phys. 75, 281–324 (2003).
(5) Meekhof, D. M., Monroe, C., King, B. E., Itano, W. M. & Wineland, D. J. Phys. Rev. Lett. 76, 1796–1799 (1996).
(6) Kienzler, D. et al. Science 347, 53–56 (2015).
(7) Poyatos, J. F., Cirac, J. I., Blatt, R. & Zoller, P. Phys. Rev. A 54, 1532–1540 (1996).