Bouncing ball’s trajectory in general relativity and Newtonian gravity


by Paul Mainwood on Quora

This is a conceptually simple and fun piece of work that has been let down by an appalling write-up on phys.org.
Reading the paper itself, what the authors have done is to put together two fascinating phenomena from 20th Century physics.

  1. At low speeds and with weak gravitational fields, the predictions of Newtonian Physics and General Relativity approach one another, giving rise to dynamics that are so similar that they can be treated as identical.
  2. There are some types of system — chaotic systems — where very small differences can give rise to unexpectedly huge differences in future dynamics.(1)

With these two phenomena written next to one another, there is an obvious path to explore. Why don’t we see if we can come up with a chaotic system whose dynamics can distinguish between the tiny differences between the predictions of GR and NP, even at low speeds and in a weak gravitational field?
And that’s exactly what the authors, Shiuan-Ni Liang and Boon Leong Lan, have done: they claim to have found a desktop system that can distinguish between the predictions of GR and NP when its dynamics becomes chaotic. Fun!
But the phys.org write-up of this simple concept veers all over the place, making it sound as though the paper is claiming to contradict General Relativity (it says the opposite). Worse, after happily taking a logically labyrinthine tour through some out-of-context quotes that the authors may have said on the phone, the write-up triumphantly ends with a crashing non-sequitur: “Explore further: Doubly Special Relativity”. (Doubly Special Relativity is a speculative theory with zero connection to anything in the paper.)
So, which one of General Relativity or Newtonian Physics is right? The authors of the paper could not be clearer:

When the predictions are different, general-relativistic mechanics must therefore be used, instead of special-relativistic mechanics (Newtonian mechanics), to correctly study the dynamics of a weak-gravity system (a low-speed weak-gravity system).


(1) Usually in the study of chaotic systems, these small differences are differences in initial conditions: that is, you keep the dynamical laws the same but alter the initial conditions slightly. But you could equally ask what happens if you keep the initial conditions the same and slightly alter the dynamics, which is what they are effectively doing here.
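
A toy illustration of footnote (1), as a Python sketch: the logistic map stands in for the paper’s bouncing-ball system, and a one-part-in-10^12 tweak to the parameter plays the role of the GR-versus-NP difference. This is the generic mechanism only, not the authors’ model.

```python
# Two almost-identical dynamical laws, identical initial conditions.
# The logistic map x -> r*x*(1-x) is a stand-in for any chaotic system;
# it is NOT the authors' bouncing-ball model.

r1 = 3.9                 # "Newtonian" law
r2 = 3.9 * (1 + 1e-12)   # "general-relativistic" law: a tiny tweak to the dynamics

x1 = x2 = 0.5            # same initial condition under both laws
for n in range(1, 61):
    x1 = r1 * x1 * (1 - x1)
    x2 = r2 * x2 * (1 - x2)
    if n % 10 == 0:
        print(f"step {n:2d}: |x1 - x2| = {abs(x1 - x2):.3e}")

# The separation grows roughly exponentially and reaches order 1 within a few
# dozen steps, even though the two laws differ only in the twelfth digit.
```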


Extragalactic origin confirmed


Cosmic rays — fast-moving, high-energy nuclei — pervade the Universe. We know that the lower-energy variety that we detect on Earth is funnelled by the solar wind. However, higher-energy cosmic rays have an isotropic distribution due to scattering that makes it difficult to identify their source, although they are likely to be generated by high-energy phenomena like supernova explosions and jets from active galactic nuclei. By looking at the ultrahigh-energy end of the cosmic ray spectrum (on the order of exa-electron volts and higher, where cosmic rays are not scattered by solar-scale magnetic fields), the Pierre Auger Collaboration detected an anisotropy in their arrival directions that indicates an extragalactic origin.
Ultrahigh-energy cosmic rays are rare: typically one cosmic ray with an energy > 10 EeV hits each square kilometre of the Earth’s surface per year. The Pierre Auger Observatory in Argentina detects cosmic rays using two combined techniques: telescopes to detect fluorescence from cosmic-ray-generated air showers, and a network of 12-tonne containers of ultrapure water, spread over an area of 3,000 square kilometres. Photomultiplier detectors in the containers observe the faint Cherenkov radiation generated when cosmic-ray-generated muons pass through the water faster than light travels in it. By reconstructing the cone of emission of the muon (analogous to an aircraft’s sonic boom), an incident direction can be derived. By analysing 32,187 cosmic rays detected over 12.75 years, a map of the sky was produced (pictured), showing evidence of an enhancement (5.2 σ significance) in a region away from the Galactic Centre (marked with an asterisk; the dashed line indicates the Galactic plane). The distance of this hotspot from the Galactic Centre (~125°) points towards an extragalactic origin of ultrahigh-energy cosmic rays, reinforcing previous (less conclusive) results from the Collaboration at lower energies.
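
As a quick back-of-envelope check on those numbers (Python; the rate is the order-of-magnitude figure quoted above, and the explanation for the shortfall is an assumption, not something the highlight quotes):

```python
# Naive expected event count from the figures quoted in the text.
rate = 1.0        # events with E > 10 EeV per km^2 per year (order of magnitude)
area = 3000.0     # km^2 covered by the surface-detector array
years = 12.75     # exposure quoted in the text

print(f"naive expectation: ~{rate * area * years:,.0f} events")  # ~38,000
print("reported sample:    32,187 events")
# Same order of magnitude; the difference is plausibly due to detector duty
# cycle, quality cuts and the exact energy threshold (assumptions, not quoted).
```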

Paul Woods, doi:10.1038/s41550-017-0304-0

Ticking clocks


by Iulia Georgescu, Nature Physics 13, 529 (2017) doi:10.1038/nphys4169 (sci-hub)

Special relativity assumes that the laws of physics are the same in all reference frames, a principle known as Lorentz invariance. This principle has been subject to numerous experimental tests, but no sign of Lorentz violation has yet been spotted: either a reassuring or a disappointing revelation, depending on your stance. These results are now reinforced by a new test using a fibre network of optical clocks, which tightens the existing bound on Lorentz violation from experiments measuring time dilation.
Pacôme Delva and colleagues used strontium optical lattice clocks located at the LNE-SYRTE, Observatoire de Paris in France, the National Metrology Institute in Germany and the National Physical Laboratory in the UK and connected via state-of-the-art optical fibre links. Looking at the frequency difference between the clocks, they were able to test whether time dilation varies between the reference frames of the three geographically remote locations. This approach improves on previous tests — including other atomic clock comparison experiments — by two orders of magnitude. Moreover, it is only limited by technical noise sources, so further improvements are certainly possible.
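
A cartoon of how such a test can work (a hedged Python sketch, not the actual analysis: it assumes a Lorentz-violating effect would show up as a modulation of the clock frequency ratio at the sidereal frequency, and the noise level and data span are invented for illustration):

```python
# Fit a sine of known (sidereal) frequency to noisy frequency-ratio data and
# bound its amplitude. Illustrative only; the real analysis is far richer.
import numpy as np

rng = np.random.default_rng(0)
T_sid = 86164.1                      # sidereal day, seconds
t = np.arange(0, 10 * T_sid, 600.0)  # ten days of data, one point per 10 min
noise = 3e-17                        # white frequency-ratio noise (invented)
y = rng.normal(0.0, noise, t.size)   # null data: no Lorentz violation injected

# Least-squares fit of A*cos + B*sin at the sidereal frequency.
w = 2 * np.pi / T_sid
M = np.column_stack([np.cos(w * t), np.sin(w * t)])
(A, B), *_ = np.linalg.lstsq(M, y, rcond=None)
print(f"fitted sidereal amplitude: {np.hypot(A, B):.2e}  (consistent with zero)")
```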

Transparent perfect mirror


by Rachel Won, Nature Photonics 11, 331 (2017) doi:10.1038/nphoton.2017.90 (sci-hub, paper)

Transparent ‘perfect’ mirrors — one-way mirrors that transmit or reflect light completely depending on the direction of view — are useful for security, privacy and camouflage purposes. However, current designs are not perfectly reflective. Now, Ali Jahromi and colleagues from the USA and Finland have demonstrated a new design based on a non-Hermitian configuration — an active optical cavity — that may overcome this limitation. At a critical value of prelasing gain, termed Poynting’s threshold, all remnants of the cavity’s structural resonances disappear from the reflected signal. At this point, the reflection becomes spectrally flat and light incident on the cavity is 100% reflected at all wavelengths, continuously across the gain bandwidth, independently of the reflectivities of the cavity mirrors. Thus, at Poynting’s threshold the device becomes indistinguishable from a perfect mirror. The researchers have confirmed these predictions in an integrated on-chip active semiconductor waveguide device and in an all-optical-fibre system. They note that Poynting’s threshold is, however, dependent on polarization and incidence angle, and that observing the reflection of coherent pulses may reveal the cavity structure via its decay time. Since the concept of Poynting’s threshold is a universal wave phenomenon, it can be exploited in many areas including microwaves, electronics, acoustics, phononics and electron beams.
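
The flattening effect can be reproduced in a toy Fabry-Perot model (a Python sketch under simplifying assumptions: lossless front mirror, plane waves, a single round-trip gain G. This is not the authors’ device, and in this toy model the critical gain works out to G = 1/r2):

```python
# Toy Fabry-Perot with front-mirror reflectivity r1 (lossless, t1^2 = 1 - r1^2),
# back mirror r2 and round-trip gain G. Reflected amplitude vs round-trip phase:
#     r(phi) = (r1 + r2*G*e^{i phi}) / (1 + r1*r2*G*e^{i phi})
# In this model the spectrum flattens to |r| = 1 at G = 1/r2, for ANY r1,
# while the lasing threshold G = 1/(r1*r2) lies strictly above it.
import numpy as np

def reflectance(phi, r1, r2, G):
    z = r2 * G * np.exp(1j * phi)
    return np.abs((r1 + z) / (1 + r1 * z)) ** 2

phi = np.linspace(0, 2 * np.pi, 7)
r1, r2 = 0.6, 0.8
for G in (0.5 / r2, 1.0 / r2, 1.1 / r2):        # below, at, above threshold
    R = reflectance(phi, r1, r2, G)
    print(f"G*r2 = {G*r2:.2f}: R(phi) varies over {R.min():.3f}..{R.max():.3f}")
# At G = 1/r2 the printed range collapses to 1.000..1.000: spectrally flat,
# perfect reflection, independent of the cavity mirror reflectivities.
```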

Indefinite causality


Causality is a concept deeply rooted in our understanding of the world and lies at the basis of the very notion of time. It plays an essential role in our cognition — enabling us to make predictions, determine the causes of events, and choose appropriate actions to achieve our goals. But even in quantum mechanics, which forced us to rethink the notions of measurement and preparation, the assumption of a pre-existing causal structure has never been challenged — until now.
Giulia Rubino and colleagues have designed an experiment to show that causal order can be genuinely indefinite. By creating wires between a pair of operating gates whose geometry is controlled by a quantum switch — the state of a single photon — they realized a superposition of gate orders. From the output, they measured the so-called causal witness, which specifies whether a given process is causally ordered or not. The result brings a new set of questions to the fore — namely, where does causal order come from, and is it a necessary property of nature?
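
The construction is easy to sketch numerically (a minimal Python illustration of a quantum switch; the choice of gates, a Hadamard and a phase gate, is arbitrary and not taken from the experiment):

```python
# Minimal sketch of a quantum switch: a control qubit decides the order in
# which gates A and B act on a target qubit. The gates below are an arbitrary
# non-commuting pair, not the ones used in the experiment.
import numpy as np

A = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard
B = np.array([[1, 0], [0, 1j]])                    # phase gate

P0 = np.diag([1, 0])                               # |0><0| on the control
P1 = np.diag([0, 1])                               # |1><1| on the control
SWITCH = np.kron(P0, B @ A) + np.kron(P1, A @ B)   # control=0: A first, then B

plus = np.array([1, 1]) / np.sqrt(2)               # control in superposition
psi = np.array([1.0, 0.0])                         # target state |0>
out = SWITCH @ np.kron(plus, psi)                  # gate orders now superposed

# Projecting the control onto |+> or |-> leaves the target in (BA +/- AB)|psi>/2,
# an interference of the two causal orders; the probabilities are unequal
# precisely because the two orderings interfere.
for sign, name in [(+1, "+"), (-1, "-")]:
    branch = (B @ A + sign * A @ B) @ psi / 2
    print(f"control |{name}>: probability = {np.linalg.norm(branch)**2:.3f}")
```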

via Nature Physics (sci-hub)

A quantum theory for thrones fans



Sydney University‘s delightful video in which academics predict who is going to win the Game of Thrones based on their disciplinary knowledge and understanding has had 62,500 Facebook likes, 900 YouTube hits and 10,000 Twitter impressions. The university has now uploaded the full five-minute video of Michael Biercuk‘s quantum theory, which predicted a major event from the finale before it aired: ‘Tommen’s gotta die’. Biercuk has since been asked for further quantum physics theories, including how Bran can see into and interact with the past. The uni obviously harbours some hard-core GoT fans. Back in 2014 it produced a video of Amy Johansen playing the GoT theme on the carillon, which was even watched by Davos Seaworth from the show.

via The Australian

Dawn of the quark ages



by Michael Brooks, from New Scientist 3024, 6 June 2015


Ask them to name their heart’s truest desire, and many a science nut might say the answer to life, the universe and everything – or, failing that, a fully functioning lightsaber.
Odd, then, that one field of scientific enquiry that could conceivably provide both gets so little press. After all the hoopla of the past few years, you could be forgiven for believing that understanding matter’s fundamentals is all about the Higgs boson – the “God particle” that explains where mass comes from.
The Higgs is undoubtedly important. But it is actually pretty insignificant for real stuff like you and me, accounting for just 1 or 2 per cent of normal matter’s mass. And the huge energy needed to make a Higgs means we’re unlikely to see technology exploiting it any time soon.
Two more familiar, though less glamorous, particles might offer more. Get to grips with their complexities, and we can begin to explain how the material universe came to exist and persist, and explore mind-boggling technologies: not just lightsabers, but new sorts of lasers and materials to store energy, too. That’s easier said than done, granted – but with a lot of computing muscle, it is what we are starting to do.
Chances are you know about protons and neutrons. Collectively known as nucleons, these two particles make up the nucleus, the meaty heart of the atom. (In terms of mass, the weedy electrons that orbit the nucleus are insignificant contributors to the atom.)
The headline difference between protons and neutrons is that protons have a positive electrical charge, whereas neutrons are neutral. But they also differ ever so slightly in mass: in the units that particle physicists use, the neutron weighs in at 939.6 megaelectronvolts (MeV) and the proton at 938.3 MeV.
That’s a difference of just 0.14 per cent, but boy does it matter. The neutrons’ extra mass means they decay into protons, not the other way around. Protons team up with negatively charged electrons to form robust, structured, electrically neutral atoms, rather than the world being a featureless neutron gloop.
“The whole universe would be very different if the proton were heavier than the neutron,” says particle theorist Chris Sachrajda of the University of Southampton in the UK. “The proton is stable, so atoms are stable and we’re stable.” Our current best guess is that the proton’s half-life, a measure of its stability over time, is at least 10³² years. Given that the universe only has 10¹⁰ or so years behind it, that is a convoluted way of saying no one has ever seen a proton decay.
The exact amount of the neutron’s excess baggage matters, too. The simplest atom is hydrogen, which is a single proton plus an orbiting electron. Hydrogen was made in the big bang, before becoming fuel for nuclear fusion in the first stars, which forged most of the other chemical elements. Had the proton-neutron mass difference been just a little bigger, adding more neutrons to make more complex elements would have encountered energy barriers that were “difficult or impossible” to overcome, says Frank Wilczek of the Massachusetts Institute of Technology. The universe would be stuck at hydrogen.
But had the mass difference been subtly less, hydrogen would have spontaneously changed to the more inert, innocuous helium before stars could form – and the cosmos would have been an equally limp disappointment. Narrow the gap further, and hydrogen atoms would have transformed via a process called inverse beta decay into neutrons and another sort of neutral particle, the neutrino. Bingo, no atoms whatsoever.
All of that leads to an unavoidable conclusion about the proton and neutron masses. “Without these numbers, people wouldn’t exist,” says Zoltán Fodor of the University of Wuppertal, Germany.
But where do they come from?
The question is fiendishly difficult to answer. We’ve known for half a century that protons and neutrons are not fundamental particles, but made of smaller constituents called quarks. There are six types of quark: up, down, strange, charm, bottom and top. The proton has a composition of up-up-down, while the neutron is up-down-down.

A full explanation of where stuff gets its mass from is buried deep in the atomic nucleus

Down quarks are slightly heavier than up quarks, but don’t expect that to explain the neutron’s sliver of extra mass: both quark masses are tiny. It’s hard to tell exactly how tiny, because quarks are never seen singly, but the up quark has a mass of something like 2 or 3 MeV, and the down quark maybe double that – just a tiny fraction of the total proton or neutron mass.
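
The bookkeeping is easy to check (a Python sketch; the quark masses are rough, scheme-dependent values along the lines quoted above):

```python
# Rough bookkeeping of where the nucleon's mass does NOT come from.
m_up, m_down = 2.2, 4.7            # approximate current-quark masses, MeV
m_proton, m_neutron = 938.3, 939.6

valence_p = 2 * m_up + m_down      # proton = up-up-down
valence_n = m_up + 2 * m_down      # neutron = up-down-down
print(f"valence-quark masses: proton ~{valence_p:.1f} MeV, "
      f"neutron ~{valence_n:.1f} MeV")
print(f"fraction of proton mass: {valence_p / m_proton:.1%}")       # ~1%
print(f"neutron-proton mass difference: {m_neutron - m_proton:.1f} MeV "
      f"({(m_neutron - m_proton) / m_neutron:.2%})")                # ~0.14%
```
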
Like all fundamental particles, quarks acquire these masses through interactions with the sticky, all-pervasive Higgs field, the thing that makes the Higgs boson. But explaining the mass of matter made of multiple quarks clearly needs something else.
The answer comes by scaling the sheer cliff face that is quantum chromodynamics, or QCD. Just as particles have an electrical charge that determines their response to the electromagnetic force, quarks carry one of three “colour charges” that explain their interactions via another fundamental force, the strong nuclear force. QCD is the theory behind the strong force, and it is devilishly complex.
Electrically charged particles can bind together by exchanging massless photons. Similarly, colour-charged quarks bind together to form matter such as protons and neutrons by exchanging particles known as gluons. Although gluons have no mass, they do have energy. What’s more, thanks to Einstein’s famous E = mc², that energy can be converted into a froth of quarks (and their antimatter equivalents) beyond the three normally said to reside in a proton or neutron. According to the uncertainty principle of quantum physics, these extra particles are constantly popping up and disappearing again.
To try and make sense of this quantum froth, over the past four decades particle theorists have invented and refined a technique known as lattice QCD. In much the same way that meteorologists and climate scientists attempt to simulate the swirling complexities of Earth’s atmosphere by reducing it to a three-dimensional grid of points spaced kilometres apart, lattice QCD reduces a nucleon’s interior to a lattice of points in a simulated space-time tens of femtometres across. Quarks sit at the vertices of this lattice, while gluons propagate along the edges. By summing up the interactions along all these edges, and seeing how they evolve step-wise in time, you begin to build up a picture of how the nucleon works as a whole.
Trouble is, even with a modest number of lattice points – say 100 by 100 by 100 separated by one-tenth of a femtometre – that’s an awful lot of interactions, and lattice QCD simulations require a screaming amount of computing power. Complicating things still further, because quantum physics offers no certain outcomes, these simulations must be run thousands of times to arrive at an “average” answer. To work out where the proton and neutron masses come from, Fodor and his colleagues had to harness two IBM Blue Gene supercomputers and two suites of cluster-computing processors.
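
To get a feel for the workflow (though nothing like its cost), here is a drastically scaled-down cousin of such a calculation: a Python sketch of a one-dimensional quantum harmonic oscillator on a Euclidean time lattice, with Metropolis sampling and a mass read off the decay of a correlator. All parameters are arbitrary illustration choices; real lattice QCD replaces this toy action with gluon links and quark fields in four dimensions.

```python
# Toy lattice path integral: 1-D harmonic oscillator (m = omega = 1), NOT QCD.
# Same workflow as the text describes: put the field on a grid, sample
# thousands of configurations, average a correlator, read off a mass.
import numpy as np

rng = np.random.default_rng(1)
N, a = 64, 0.5                   # lattice sites and lattice spacing (arbitrary)
x = np.zeros(N)

def action_delta(x, i, new):
    """Change in the discretized Euclidean action from updating site i."""
    ip, im = (i + 1) % N, (i - 1) % N
    def S(v):  # kinetic + potential pieces touching site i
        return ((x[ip] - v) ** 2 + (v - x[im]) ** 2) / (2 * a) + a * v ** 2 / 2
    return S(new) - S(x[i])

corr = np.zeros(N)
n_meas = 0
for sweep in range(6000):
    for i in range(N):                       # Metropolis update, site by site
        new = x[i] + rng.uniform(-1.0, 1.0)
        if rng.random() < np.exp(-action_delta(x, i, new)):
            x[i] = new
    if sweep >= 1000 and sweep % 5 == 0:     # thermalize, then measure
        for t in range(N):
            corr[t] += np.mean(x * np.roll(x, -t))
        n_meas += 1

corr /= n_meas
# <x(0) x(t)> ~ exp(-E1 t): the 'mass' is the slope of the log correlator.
E1 = np.log(corr[1] / corr[2]) / a
print(f"energy gap from correlator decay: {E1:.2f}  (continuum value: 1.00)")
```
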
The breakthrough came in 2008, when they finally arrived at a mass for both nucleons of 936 MeV, give or take 25 MeV – pretty much on the nose (Science, vol 322, p 1224). This confirmed that the interaction energies of quarks and gluons make up the lion’s share of the mass of stuff as we know it. You might feel solid, but in fact you’re 99 per cent energy.
But the calculations were nowhere near precise enough to pin down that all-important difference between the proton and neutron masses, which was still 40 times smaller than the uncertainty in the result. What’s more, the calculation suffered from a glaring omission: the effects of electrical charge, which is another source of energy, and therefore mass. All the transient quarks and antiquarks inside the nucleon are electrically charged, giving them a “self-energy” that makes an additional contribution to their mass. Without taking into account this effect, all bets about quark masses are off. Talk about one compound particle being more massive than another because of a difference in quark masses is a “crude caricature”, says Wilczek, who won a share of a Nobel prize in 2004 for his part in developing QCD.
The subtle roots of the proton-neutron mass difference lie in solving not just the equations of QCD, but those of quantum electrodynamics (QED), which governs electromagnetic interactions. And that is a theorist’s worst nightmare. “It’s awfully difficult to have QED and QCD in the same framework,” says Fodor. The electromagnetic self-energy can’t even be calculated directly. In a limited lattice simulation, its interactions create an infinity – a mathematical effect rather like a never-ending reverberation inside a cathedral.
Fodor and his colleagues’ new workaround involves solving the QED equations for various combinations of quarks inside different subatomic particles. The resulting subtle differences are used to replace the results of calculations that would invoke infinities, and so grind out a value for the proton-neutron mass difference (Science, vol 347, p 1452).
The figure the team came up with is in agreement with the measured value, although the error on it is still about 20 per cent. It is nonetheless “a milestone”, says Sachrajda. Wilczek feels similarly. “I think it’s exciting,” he says. “It’s a demonstration of strength.”
You might be forgiven for wondering what we gain by calculating from first principles numbers we already knew. But quite apart from this particular number’s existential interest, for Wilczek the excitement lies in our ability now to calculate very basic things about how the universe ticks that we couldn’t before.
Take the processes inside huge stars that go supernova – the events that first seeded the universe with elements heavier than hydrogen and helium. Our inability to marry QED and QCD meant we couldn’t do much more than wave our hands at questions such as the timescale over which heavy elements first formed – and we couldn’t make a star to test our ideas. “Conditions are so extreme we can’t reproduce them in the laboratory,” says Wilczek. “Now we will be able to calculate them with confidence.”
The advance might help clear up some of the funk surrounding fundamental physics. The Large Hadron Collider’s discovery in 2012 of the Higgs boson, and nothing else so far, leaves many open questions. Why did matter win out over antimatter just after the big bang (New Scientist, 23 May, p 28)? Why do the proton and electron charges mirror each other so perfectly when they are such different particles? “We need new physics, and simulations like ours can help,” says Kálmán Szabó, one of Fodor’s Wuppertal collaborators. “We can compare experiment and our precise theory and look for processes that tell us what lies beyond standard physics.”

An open road

For Sachrajda, this kind of computational capability comes at just the right time, as the LHC fires up again to explore particle interactions at even higher energies. “We all hope it will give an unambiguous signal of something new,” he says. “But you’re still going to have to understand what the underlying theory is, and for that you will need this kind of precision.”
If that still sounds a little highfalutin, it’s also worth considering how modern technologies have sprung from an ever deeper understanding of matter’s workings. A century or so ago, we were just getting to grips with the atom – an understanding on which innovations such as computers and lasers were built. Then came insights into the atomic nucleus, with all the technological positives and negatives – power stations, cancer therapies, nuclear bombs – those have brought.
Digging down into protons and neutrons means taking things to the next level, and a potentially rich seam to mine. Gluons are far more excitable in their interactions with colour charge than are photons in electromagnetic interactions, so it could be that manipulating colour-charged particles yields vastly more energy than fiddling with things on the atomic scale. “I think the possibility of powerful X-ray or gamma-ray sources exploiting sophisticated nuclear physics is speculative, but not outrageously so,” says Wilczek.
Gluons, unlike photons, also interact with themselves, and this could conceivably see them confining each other into a writhing pillar of energy – hence Wilczek’s tongue-in-cheek suggestion they might make a Star Wars-style lightsaber. More immediate, perhaps, is the prospect of better ways to harness and store energy. “Nuclei can pack a lot of energy into a small space,” says Wilczek. “If we can do really accurate nuclear chemistry by calculation as opposed to having hit-and-miss experiments, it could very well lead to dense energy storage.”
For Fodor, that’s still a long way off – but with the accuracy that calculations are now reaching, the road is at last open. “These are mostly dreams today, but now we can accommodate the dreams, at least,” he says. “You’ve reached a level where these technological ideas might be feasible.”
Welcome, indeed, to the quark ages.

From generation to generation


by Robert Kowalewski from Nature Physics 11, 705–706 (2015) doi:10.1038/nphys3464


A new measurement from the LHCb experiment at CERN’s Large Hadron Collider impinges on a puzzle that has been troubling physicists for decades, namely the breaking of the symmetry between matter and antimatter.


Experimental constraints on the unitarity triangle. Each band shows the allowed region (at 95% confidence level, CL) based on specific measured quantities. The quantities η and ρ are functions of the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements, which allow the triangle to have a base of unit length oriented along the ρ axis. The angles α, β and γ correspond to the blue and tan bands, and are measured from matter-antimatter-violating asymmetries in B meson decay. The circular arcs centred on (1,0) show the constraints from the mass differences, Δmd and Δms, measured in studies of B-B̄ oscillations. Measurements of matter-antimatter violation in the kaon system determine εK, which is a measure of the admixture of the CP-even eigenstate in the long-lived neutral kaon, and result in the green band. The dark green semi-circle centred on (0,0) shows the constraint from the measurement of the ratio |Vub|/|Vcb|, where Vub describes the transition of a b quark to a u quark. Image courtesy of the CKMfitter group.

We learn early that the matter in and around us is made up of three particles: electrons, and the up and down quarks found in nuclei. Add in the electron neutrino and we also account for nuclear fission and fusion and the stellar furnace that fuels life on Earth. But nature is not that simple. It replicates this four-particle structure in ‘generations’ of heavier, but otherwise similar, particles. The first evidence for this was the discovery of the muon in 1936. Other second-generation particles were subsequently discovered, as was another unexpected phenomenon: the violation of matter-antimatter (CP) symmetry in neutral kaons(1). Now, writing in Nature Physics, the LHCb collaboration(2) provides fresh evidence to fuel the ongoing discussion surrounding CP violation.
In 1973, Makoto Kobayashi and Toshihide Maskawa proposed a mechanism whereby mixing between the mass and weak eigenstates of quarks would, if there were three generations, result in an irreducible complex phase that could be responsible for CP violation(3).
The discovery of the first third-generation particle, the tau lepton(4), came two years later, followed in 1977 by the discovery of the third-generation ‘b’ quark(5). With the advent of high-intensity electron-positron colliders at the start of the twenty-first century, studies of CP violation in the decays of B mesons (which contain a b quark) at the BaBar and Belle experiments validated Kobayashi and Maskawa’s proposal, for which they shared in the 2008 Nobel Prize in Physics.
The CKM matrix – introduced by Kobayashi and Maskawa, following the formative work of Nicola Cabibbo – describes the mixing of quark mass and weak eigenstates in the standard model of particle physics. It is unitary and can be fully specified with four parameters: three real angles and one complex phase. This unitarity condition is the basis for a set of testable constraints in the form of products of complex numbers that sum to zero – for example, V*ud Vub + V*cd Vcb + V*td Vtb = 0, where Vub describes the transition of a b quark to a u quark. The triangle in Fig. 1 provides a convenient graphical representation of this equation. The unitarity condition connects a large set of measurable quantities in the standard model, including CP-violating asymmetries, which depend on the complex phase, and mixing strengths, which are magnitudes such as |Vub| and |Vcb|. In the standard model, all the bands corresponding to the different measurements in Fig. 1 should overlap at a unique point, which they do at the current level of precision. The presence of new particles or interactions would contribute to these measurable quantities in different ways, resulting in bands that fail to converge at a point. The ratio of matrix elements |Vub|/|Vcb| corresponds to the length of the side of the ‘unitarity triangle’ opposite the angle labelled β, which is well determined from measured CP-violating asymmetries. The precise determination of this ratio is a crucial ingredient in providing sensitivity to new particles and interactions.
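
The closure of the triangle is easy to verify numerically (a Python sketch using the Wolfenstein parametrization of the CKM matrix to third order in λ; the parameter values are approximate, for illustration only):

```python
# Numerical illustration of the unitarity condition quoted above, using the
# Wolfenstein parametrization to O(lambda^3). The parameter values below are
# rough, illustrative numbers, not a fit.
lam, A, rho, eta = 0.225, 0.82, 0.14, 0.35

V_ud = 1 - lam**2 / 2
V_ub = A * lam**3 * (rho - 1j * eta)
V_cd = -lam
V_cb = A * lam**2
V_td = A * lam**3 * (1 - rho - 1j * eta)
V_tb = 1.0

closure = (V_ud.conjugate() * V_ub + V_cd.conjugate() * V_cb
           + V_td.conjugate() * V_tb)
print(f"V*ud Vub + V*cd Vcb + V*td Vtb = {closure:.2e}")
# The individual terms are of order 1e-2; their sum vanishes up to the
# O(lambda^5) pieces dropped from the parametrization. The three terms are
# the sides of the unitarity triangle.
```
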
Experiments at electron-positron colliders have measured |Vub| and |Vcb| for many years using two complementary methods based on the decays of a B meson to an electron or muon, its associated neutrino and one or more strongly interacting particles. The first method measures exclusive final states whose decay rates are proportional to |Vqb|² (where q = u, c), and uses lattice quantum chromodynamics (QCD) calculations of form factors to determine |Vqb|. The second, inclusive, method requires only the presence of an electron or muon and sums over many exclusive final states. These summed rates are also proportional to |Vqb|²; the determination of |Vqb| in this case relies on perturbative QCD calculations and auxiliary measurements. Although these two methods have improved significantly in precision over the years, the values determined for both |Vub| and |Vcb| from the inclusive method persistently exceed those from the exclusive method by two to three standard deviations. This has prompted speculation that the familiar left-handed charged weak interaction has a right-handed counterpart that contributes to this difference.
With this backdrop, the new measurement of the ratio |Vub|/|Vcb| from the LHCb experiment at CERN’s Large Hadron Collider (LHC) is a welcome addition to the literature(2). It is based on an exclusive decay mode different from those measurable at the electron-positron collider experiments, namely that of a baryon containing b, u and d quarks (a heavier version of the neutron) that decays into a proton, a muon and a neutrino. Particle physicists have been surprised that these decays, where the missing neutrino prevents reliance on kinematic constraints, can be distinguished from the huge background inherent in proton-proton collisions at the LHC. This new result, which makes use of very precise spatial measurements of the decay vertices of short-lived particles and uses innovative analysis techniques, is a noteworthy achievement.
What have we learned? The new experimental information, instead of resolving the inclusive-exclusive puzzle, deepens it. The measurement and corresponding lattice QCD calculation lead to a value for |Vub|/|Vcb| that is lower than both the pre-existing exclusive and inclusive determinations. The consistency of the three determinations with a single value is only 1.8%, indicating that particle physicists have more work to do in this area. On a more positive note, the LHCb measurement, when combined with previous measurements, strongly disfavours the hypothesis of a right-handed weak interaction.


(1) Christenson, J. H., Cronin, J. W., Fitch, V. L. & Turlay, R. Phys. Rev. Lett. 13, 138–140 (1964).
(2) The LHCb Collaboration. Nature Phys. 11, 743–747 (2015).
(3) Kobayashi, M. & Maskawa, T. Prog. Theor. Phys. 49, 652–657 (1973).
(4) Perl, M. L. et al. Phys. Rev. Lett. 35, 1489–1492 (1975).
(5) Herb, S. W. et al. Phys. Rev. Lett. 39, 252–255 (1977).

Evaporation drives engine


from Nature 522, 259 (18 June 2015) doi:10.1038/522259b


An engine fuelled only by water evaporation can power a miniature car and lights.


Ozgur Sahin at Columbia University in New York and his colleagues applied bacterial spores to thin plastic strips. The spores absorb and release water with changes in relative humidity, so the strips curl and straighten. The team stacked the strips and built them into a water-containing engine in which they were exposed to alternating periods of high and low humidity, acting as oscillators that drive the engine. When attached to a generator, the engine powered light-emitting diodes. A rotary version attached to two pairs of wheels (pictured, left) pushed a 100-gram car forwards (pictured, right).
The engine could be used in devices in areas that have scarce electricity, the authors say.


P.S.: interesting: the news item is closed access, but the paper is open!

Two-atom bunching


by Lindsay J. LeBlanc from Nature 520, 36–37 (02 April 2015) doi:10.1038/520036a


The Hong–Ou–Mandel effect, whereby two identical quantum particles launched into the two input ports of a ‘beam-splitter’ always bunch together in the same output port, has now been demonstrated for helium-4 atoms.
All particles, including photons, electrons and atoms, are described by a characteristic list of ‘quantum numbers’. For a pair of particles whose lists match, there is no way of telling them apart — they are perfectly indistinguishable. One of the more intriguing consequences of quantum mechanics arises from this indistinguishability, and was exemplified(1) in an experiment by Hong, Ou and Mandel (HOM) in the 1980s. The researchers showed that, although a single photon approaching an intersection along one of two input paths exits in one of two output paths with equal probability, identical pairs brought to the intersection simultaneously from different paths always exit together. Lopes et al.(2) now demonstrate this manifestation of two-particle quantum interference for two identically prepared — and thus indistinguishable — helium-4 atoms. The result provides an opportunity to extend advances made in quantum optics to the realm of atomic systems, especially for applications in quantum information.
As a graduate student faced with finding a wedding present for my labmate, I decided that the HOM experiment was a fitting analogy to marriage: from two separate paths, this couple’s lives were intersecting and would continue along a single path together. Along with the formalism describing the effect tucked into the card, I gave them a glass ‘beam-splitter’ to represent a key ingredient in the optical demonstration of the effect: this glass cube could act as the intersection, at which half the light incident on any of the four polished faces is transmitted, with the remaining half being reflected; for single particles, the probabilities for transmission and reflection are both 50%. All HOM experiments require a ’50:50 beam-splitting’ mechanism that sends quantum particles incident along one of two input paths to one of two output paths with a 50% probability (Fig. 1a).

Each beam-splitter (blue) is represented as two input paths (left) and two output paths (right); here we consider '50:50' beam-splitters, for which the probability of each output is of equal magnitude. A particle is represented by a red circle, and its wavefunction's phase by the position of the black dot on the grey circle. Individual phases cannot be measured directly. a, Possible outcomes for a single particle entering either of the input paths; the probabilities for particle transmission and reflection are both 50%. In the case of reflection, the phase changes by 90°. b, For incoming particles at both inputs, there are four possible outcomes. However, the overall probability of the outcomes is determined by adding the individual probabilities using rules of quantum mechanics. For bosonic particles such as photons and helium-4 atoms, the subject of Lopes and colleagues' study(2), the first two outcomes (transmit/transmit and reflect/reflect) cancel. The only outcomes remaining are the third and the fourth.


Careful analysis shows that there must be a well-defined relationship between the beam-splitter’s inputs and outputs that is demanded by energy conservation in the classical picture of the beam-splitter(3), or by a property known as unitarity in the quantum view(4): for classical waves, this relationship fixes the relative positions of the output waves’ peaks and valleys with respect to those of the input waves, whereas for quantum particles this relationship manifests as a relative ‘phase’ between the particles’ input and output wavefunctions. Although the probability of finding a particle in a particular output path depends only on the amplitude of its wavefunction, the phase is important when determining the output wavefunction, and corresponding output probability, for two or more particles.
If two particles enter such a 50:50 beam-splitter, naively one would expect one of four possible outcomes: two in which the particles exit along a path together, and two in which they exit along different paths (Fig. 1b). In these cases, the single-particle output-wavefunction phases accumulate in an overall output phase. The HOM result is a consequence of the particles’ indistinguishability, which means that there is no measurable difference between the two outcomes in which the particles exit along different paths. The overall output phases of these indistinguishable outcomes are opposite to each other, and when added together using quantum rules for bosons (particles with integer spin, a quantum property common to both photons and helium-4 atoms), these two possible outcomes interfere and cancel. The only outcomes remaining are those with two particles in a single output. As a result, simultaneous single-particle detections (‘coincidence counts’) at both outputs are forbidden.
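
The cancellation takes only a few lines of arithmetic to verify (a Python sketch of the textbook beam-splitter amplitudes; the convention that reflection carries the 90° phase follows the caption above):

```python
# Minimal amplitude bookkeeping for the HOM effect at a 50:50 beam-splitter.
# Convention: transmission amplitude t = 1/sqrt(2), reflection r = i/sqrt(2),
# i.e. reflection carries a 90-degree phase.
import numpy as np

t = 1 / np.sqrt(2)
r = 1j / np.sqrt(2)

# One particle exits in each output ('coincidence'): two indistinguishable
# histories, both-transmitted and both-reflected. For bosons the amplitudes ADD.
amp_coincidence = t * t + r * r
print(f"P(coincidence)        = {abs(amp_coincidence)**2:.3f}")  # 0.000

# Both particles exit together in one given output: one transmits, one
# reflects, with a sqrt(2) bosonic enhancement for the doubly occupied mode.
amp_bunched = np.sqrt(2) * t * r
print(f"P(both in one output) = {abs(amp_bunched)**2:.3f}")      # 0.500
# Probabilities: 0 + 0.5 + 0.5 (for the two outputs) = 1. Coincidences vanish.
```
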
Lopes et al. demonstrate two-particle quantum interference with helium-4 atoms. In their experiments, the atoms’ paths are related to their speeds, which are manipulated by selectively transferring momentum to and from light in absorption and emission processes(5, 6). First, the researchers prepared a ‘twin pair’ by removing from an atom reservoir indistinguishable atoms with different speeds. Second, they used light pulses to modify the atoms’ momenta and cause the pair to meet; the atom in the first path travels with velocity v1 and the atom in the second path with v2. A beam-splitting mechanism implemented reflection and transmission by changing the atoms’ speeds with 50% probability from v1 to v2 and vice versa.
The atoms continued to travel until they hit a time-resolved, multipixel atom-counting detector, at which an atom with v1 would arrive at a different time from one with v2. Lopes and colleagues prepared many twin pairs in a short interval and recorded the precise location and timing of the atoms’ arrivals at the detector: a coincident count would be the measurement at a particular location of a particle at time t1 followed by a measurement at t2. Although the researchers found that the arrivals from the many pairs were distributed in two time windows (corresponding to the two output paths), they found a striking lack of instances among these random outcomes when the time difference was exactly t2 − t1, indicating that the atoms from a twin pair must be exiting the beam-splitter with the same velocity. This ‘anticorrelation’ is the signature of a HOM experiment.
As in quantum-optics demonstrations of the HOM effect, the present result demonstrates that pairs of identical, ‘quantum-entangled’ particles have been produced. The unique capabilities of this apparatus, including the combination of condensed metastable helium-4 atoms and the atom-counting detector, offer a spatial and temporal resolution unavailable to others. Protocols for transmitting and processing quantum information, analogous to those used in optical systems, can now be implemented with new capabilities in atomic systems: atoms, unlike photons, may interact with one another, and because they have mass, their mechanical properties, such as momentum, can be varied and used as experimental parameters.
Furthermore, because atoms can also be fermions (particles with half-integer spin, such as electrons), they could exhibit a quantum-interference effect that is the fermionic equivalent of the HOM effect(4). Evidence for this mechanism has already been seen in electronic systems(7). The bosonic HOM effect demonstrated here, and its fermionic counterpart, may offer new possibilities for implementing quantum-information protocols and for exploring the foundations of quantum physics.


(1) Hong, C. K., Ou, Z. Y. & Mandel, L. Phys. Rev. Lett. 59, 2044–2046 (1987).
(2) Lopes, R. et al. Nature 520, 66–68 (2015).
(3) Ou, Z. Y. & Mandel, L. Am. J. Phys. 57, 66 (1989).
(4) Loudon, R. Phys. Rev. A 58, 4904–4909 (1998).
(5) Campbell, G. K. et al. Phys. Rev. Lett. 96, 020406 (2006).
(6) Bonneau, M. et al. Phys. Rev. A 87, 061603 (2013).
(7) Neder, I. et al. Nature 448, 333–337 (2007).