Physics as metaphor
Black holes in a spin
Comet catastrophe
Faster than light
The prescient power of mathematics
What Newton really meant
Einstein's PhD
The quark-gluon plasma
Underground astronomy


Physics is a load of balls

Just because our treasured theories of the sub-atomic world work doesn't necessarily mean they are revealing the ultimate truth about what goes on inside atoms, argues John Gribbin.

Why are physicists so fond of describing everything in terms of balls? The Universe itself can be likened to a uniform sphere; the Sun and the Earth are both represented, to a first approximation and with appropriate choice of scale, by a billiard ball; and the same model has been applied with great success both to atoms and to subatomic particles. Rather than being a mathematician, as some have claimed, it seems that God is really a snooker player. Or is it possible that the ubiquity of the billiard ball analogy in physics is an indication of the lack of imagination of physicists?

When I was a student, my supervisor, John Faulkner, used to collect choice examples of job references. One of his favourites described an applicant as being "a man of unique insight and tenacity". This, according to Faulkner, meant that the person referred to only ever had one good idea in his life, and stuck to it. In those terms, it seems that physicists as a whole have exhibited unique insight and tenacity throughout the 20th century.

The kind of entity that we call an atom is really a theoretical model of reality. The familiar components of an atom -- positively charged nucleus, electron cloud, photons being exchanged -- are part of a self-consistent story which both explains past observations and makes it possible to predict what will happen in future experiments. But our understanding of what an atom "is" has changed several times in the past hundred years or so, and different images (different models) are still useful in different contexts today.

The very name "atom" comes from the ancient Greek idea of an ultimate, indivisible piece of matter. The image of atoms as indestructible billiard balls still held sway just over a hundred years ago. But by the end of the nineteenth century it had been shown that atoms were not indivisible, and that pieces (electrons) could be knocked off them. So, naturally, a model was developed which described the atom in terms of a billiard ball nucleus at the centre with billiard ball electrons orbiting around it rather like the way planets orbit around the Sun. This model still works very well for explaining how electrons "jump" from one orbit to another, absorbing or emitting electromagnetic energy (photons) as they do so and creating the characteristic lines associated with that kind of atom (that element) in a spectrum.

Later, the idea of electrons as waves, or clouds of probability, became fashionable (because these ideas could explain otherwise puzzling features of the behaviour of atoms), and to a quantum physicist the older orbital model was superseded. But this does not necessarily mean that atoms "really are" surrounded by electron probability clouds, or that all other models are irrelevant.

When physicists are interested in the purely physical behaviour of a gas in the everyday sense -- for example, the pressure it exerts on the walls of a container -- they are still quite happy to treat the gas as a collection of little, hard billiard balls. When chemists determine the composition of a substance by burning a small sample and analysing the lines in the spectrum produced, they are quite happy to think in terms of the "planetary" model of electrons orbiting the nucleus and jumping from one orbit to another. The planetary model still works entirely satisfactorily within its limitations, as does the billiard ball model within its limitations. All models of the atom are lies in the sense that they do not represent the single, unique truth about atoms; but all models are true, and useful, in so far as they give us a handle on some aspect of the atomic world.

The point is that not only do we not know what an atom is "really", we cannot ever know what an atom is "really". We can only know what an atom is like. By probing it in certain ways, we find that, under those circumstances, it is "like" a billiard ball. Probe it another way, and we find that it is "like" the Solar System. Ask a third set of questions, and the answer we get is that it is "like" a positively charged nucleus surrounded by a fuzzy cloud of electrons. These are all images that we carry over from the everyday world to build up a picture of what the atom "is". We construct a model, or an image; but then, all too often, we forget what we have done, and we confuse the image with reality. The way in which physicists construct their models is based on everyday experience. What else can it be based on? We can only say that atoms and subatomic particles are "like" something that we already know. It is no use describing the atom as like a billiard ball to somebody who has never seen a billiard ball, or describing electron orbits as like planetary orbits to somebody who does not know the way the Solar System works.

Analogies and modelling can even become completely circular processes, as happens when we try to explain the way atoms interact with one another in, for example, a crystal lattice. In such a crystal, the atoms are held in place by electromagnetic forces in a geometrical array. If one atom were to be displaced from its position, it would be pushed and pulled back into place by electromagnetic interactions involving its neighbours. A useful analogy is to imagine that all the atoms are joined to their immediate neighbours by little springs. If one atom is moved out of position, the electromagnetic forces act like imaginary springs, with the springs on one side being stretched, and so pulling the atom back into place, while the springs on the other side are compressed, and therefore push the atom back into place. We seem to have hit on a really good model of the electromagnetic force acting, under these circumstances, like a spring.

But what is a spring? The most common everyday variety of spring is a piece of metal wire bent into a helical or spiral shape. In the spiral form, it may literally be a component of a clockwork mechanism, the physicists' archetypal model of reality, which makes the analogy all the more appealing. When we push the spring, it pushes back; when we pull it, it pulls back. But why? It does so because it is made of atoms held together by electromagnetic forces! The forces we feel when we push and pull on a spring are electromagnetic forces. So when we say that the forces between atoms in a crystal are like little springs, what we are saying is that electromagnetic forces are like electromagnetic forces.

Atoms are such a familiar concept that, as this example shows, it is sometimes hard to see this process of modelling at work where they are concerned. It becomes much clearer when we look at how physicists have constructed their standard model of the sub-atomic world, using analogies which in many cases are not simply derived from the everyday world, but derived secondhand from our everyday understanding of reality. Within the nucleus (which for the purposes of the simple descriptions of the atom could be regarded as like a positively charged billiard ball), we find particles that are, in some senses, "like" electrons, and forces that operate "like" electromagnetism. But electrons, and electromagnetism, are themselves described as being "like" things in the everyday world -- billiard balls, or waves on a pond, or whatever.

Reality is what we make it to be -- as long as the models explain the observations, they are good models. But is it really true that electrons and protons were lying in wait to be discovered inside atoms, and that quarks were lying in wait to be discovered inside protons, before human scientists became ingenious enough to "discover" them? Or is it more likely that essentially incomprehensible aspects of reality at the quantum level are being put into boxes and labelled with names like "proton" and "quark" for human convenience? According to the standard model of particle physics, a proton is composed of two up quarks and one down quark, held together by one of the four fundamental forces, while a neutron is composed of two down quarks and one up quark, held together in a similar fashion. Many physicists, though, are in grave danger of forgetting that the standard model is just that -- a model. Protons behave as if they contain three quarks; but that does not "prove" that quarks "really exist". There may be a simpler way of modelling what goes on at the level of physical phenomena now conventionally explained in terms of the quark model; but that would not be the way things "really" are, just another model of reality, in the same way that Maxwell's wave equation and Einstein's photons are both good models of the reality represented by the phenomenon of light, and the billiard ball model and the "planetary" model of the atom are both good models, depending on which problem you are trying to solve.

The whole of physics is based upon the process of making analogies and making up models to account for what is going on in realms that we cannot probe with our own senses. The quark theory only began to be taken more seriously when experiments involving collisions between particles (electrons being bounced off protons, and protons being bounced off one another) began to show up structure inside the proton. When high energy (that is, fast moving) electrons are bounced off each other in accelerator experiments, they tend to be scattered at very large angles, ricocheting off one another as if they were hard objects, like billiard balls. When electrons are bounced off protons, however, they are usually deflected by only small angles, as if they are scattering off a soft object which can only give them a gentle nudge. The two kinds of interaction are known as "hard" and "soft" scattering experiments.

But the "answers" nature gave to the questions posed by the experimenters still depended on their choice of which experiments to carry out, and what to measure. As the philosopher Martin Heidegger has put it:

Modern physics is not experimental physics because it uses experimental devices in its questioning of nature. Rather the reverse is true. Because physics, already as pure theory, requests nature to manifest itself in terms of predictable forces, it sets up the experiments precisely for the sole purpose of asking whether and how nature follows the scheme prescribed by science.{1}

The person who really made it respectable to think about structure inside the proton was Richard Feynman, who could be relied upon to be both insightful and comprehensible. The great thing about Feynman's approach was that it made sense to physicists brought up in the tradition of taking things (like atoms) apart to find out what they are made of. He developed his ideas in the mid-1960s, and published them in 1969. Without prejudging the issue of whether or not quarks existed, he developed a general explanation of what happens when a high energy electron probes inside a proton, or when two high energy protons collide head on. Feynman gave these inner components of the proton the name "parton". And he realised that very little of the complexity of any inner structure mattered in a single collision. When an electron is fired into a proton, it may exchange a photon with a single parton, which recoils as a result while the electron is deflected, but that is the limit of its influence on the proton (and of the proton's influence on the electron). Even if two protons smash into each other head on, what actually happens is that individual partons from the two protons interact with one another in a series of point-like hard scattering events -- just like snooker or pool balls ricocheting around the table after a vigorous break.

One reason why Feynman's approach swept the board, and led to further experiments which established the "reality" of quarks to the satisfaction of most theorists, is that it follows a long established and well understood tradition (see Andrew Pickering's book Constructing Quarks, Edinburgh UP, 1984). Theorists had a classic analogy ready and waiting, in the form of the experiments early in the twentieth century that had probed the structure of the atom.

The pioneering particle physicist Ernest Rutherford bombarded atoms with so-called alpha particles (now known to be helium nuclei, and regarded in this context as being like little billiard balls), and found that some of the alpha particles were scattered at large angles, showing that there was something hard and (you guessed!) billiard-ball like in the centre of an atom (its nucleus). Experiments in the 1960s showed electrons sometimes being scattered at surprisingly large angles from within otherwise "soft" protons, and Feynman's model explained this in terms of hard, billiard-ball like entities within the proton.

It took years for the standard model to become established, but once the physicists had been set thinking along these lines there was an air of inevitability about the whole process. With two great analogies to draw on -- the nuclear model of the atom, and the quantum electrodynamics theory of light -- the quark model of protons and neutrons and the quantum chromodynamics theory of the strong interaction became irresistible. "Analogy was not one option amongst many", says Pickering; "it was the basis of all that transpired. Without analogy, there would have been no new physics."{2}

But Pickering also raises another intriguing, perhaps disturbing, question. Was the path to the standard model of particle physics inevitable? Is this the real (or only) truth about the way the world works? None of the theories which led up to the standard model were ever perfect, and particle physicists continually had to choose which theories to abandon and which ones to develop to try to make a better fit with experiment. The theories which they chose to develop also influenced the choices of which experiments to carry out, and this interacting chain of decisions led to the new physics. The new physics was a product of the culture in which it was created.

The philosopher of science Thomas Kuhn has carried this kind of argument to its logical conclusion, arguing that if scientific knowledge really is a product of culture then scientific communities that exist in different worlds (literally on different planets, perhaps; or at different times on the same planet) would regard different natural phenomena as important, and would explain those phenomena in different theoretical ways (using different analogies). The theories from the different scientific communities -- the different worlds -- could not be tested against one another, and would be, in philosopher's jargon, "incommensurable".

This runs counter to the way most physicists think about their work. They imagine that if we ever make contact with a scientific civilization from another planet then, assuming language difficulties can be overcome, we will find that the alien civilization shares our views about the nature of atoms, the existence of protons and neutrons, and the way the electromagnetic force works. Indeed, more than one science fiction story has suggested that science is the (literally) universal language, and that the way to set up communication with an alien civilization will be by describing, for example, the chemical properties of the elements, or the nature of quarks, to establish a common ground. If the aliens turn out to have completely different ideas about what atoms are, or to have no concept of atoms at all, that would make such attempts at finding common ground doomed from the start.

The idea of science as a universal language is usually most forcefully expressed in terms of mathematics. Many scientists have commented on the seemingly magical way in which mathematics "works" as a tool for describing the Universe; Albert Einstein once said that "the most incomprehensible thing about the Universe is that it is comprehensible". But is this such a mystery? Pickering quotes John Polkinghorne, a British quantum theorist who is also a Minister in the Church of England, as saying that "it is a non-trivial fact about the world that we can understand it and that mathematics provides the perfect language for physical science; that, in a word, science is possible at all".

But such assertions, says Pickering, are mistaken: "It is unproblematic that scientists produce accounts of the world that they find comprehensible: given their cultural resources, only singular incompetence could have prevented members of the [physics] community producing an understandable version of reality at any point in their history. And, given their extensive training in sophisticated mathematical techniques, the preponderance of mathematics in particle physicists' accounts of reality is no more hard to explain than the fondness of ethnic groups for their native language."

In other words, the "mystery" that mathematics is a good language for describing the Universe is about as significant as the discovery that English is a good language for writing plays in. And the revelation that colliding billiard balls provide good analogies for many of the interactions that go on in the subatomic world is simply telling us that mathematical physicists are rather good at describing the way billiard balls collide with one another. So they should be -- by tenaciously sticking to their one good idea, they've had plenty of practice over the past hundred years!


A new twist to a continuing tale of spin

John Gribbin

ARE YOU worried about the possibility that centrifugal force changes direction near a black hole? Worry no more. Put the Alka Seltzer away. Help is at hand, in the form of a new explanation of this headache-inducing puzzle, from Sandip Chakrabarti, of the Tata Institute of Fundamental Research, in Bombay.

The story so far is that for several years researchers have been puzzling over the discovery that according to the equations of Einstein's theory, at a critical distance from a black hole centrifugal force changes sign. Within that orbit, instead of being a repulsive force flinging things outward, centrifugal force becomes a force of attraction, hastening material into the maw of the black hole. The faster an object within the critical distance from the hole orbits, the more powerfully it is sucked inward.

The first brain teaser, for those versed in naive mechanics, is the way in which relativists bandy about the term "centrifugal force". Didn't we all learn in school that this is a "fictitious force", and doesn't really exist? But don't forget that in relativity theory any observer is entitled to regard his or her frame of reference as at rest. An observer in a rocket which is being forced to follow a closed orbit around a star by its motors (not falling freely under the influence of gravity, in a so-called Keplerian orbit) will indeed feel a centrifugal force. So do you, when you are in a car taking a bend at speed. From the frame of reference of the car, or the spaceship, centrifugal force is real, and that is why relativists use the term.

Suppose you had an apple rolling about on the dashboard of the car. You would be amazed if the car turned sharply to the right, and the apple, in response, rolled smartly to the right across the dash. But according to Marek Abramowicz, working in 1990 at NORDITA (the Scandinavian theoretical physics institute in Copenhagen) that is exactly what would happen if your car was a spaceship, and the sharp right turn you were making was taking you skimming above the surface of a black hole -- its "event horizon", from within which nothing can escape.

This only happens for orbits that pass within a certain distance from the event horizon -- between the horizon itself and the distance from the central singularity at which light rays are bent into a circle around the singularity, the so-called speed of light circle.
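That circle, where light itself can orbit, is what relativists call the photon sphere; for a non-rotating (Schwarzschild) black hole it lies at one and a half times the radius of the event horizon. As a rough illustration of the scales involved, here is a minimal sketch using standard textbook formulas rather than any figures quoted in the article:

```python
# Minimal sketch (standard Schwarzschild formulas, not figures from the article):
# the "speed of light circle" is the photon sphere at r = 3GM/c^2, which is
# 1.5 times the Schwarzschild (event-horizon) radius r_s = 2GM/c^2.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the Sun, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating black hole."""
    return 2 * G * mass_kg / c**2

def photon_sphere_radius(mass_kg):
    """Radius of the circular light orbit (the speed of light circle)."""
    return 3 * G * mass_kg / c**2

for solar_masses in (1, 10, 1e6):
    m = solar_masses * M_sun
    print(f"{solar_masses:>9} solar masses: horizon {schwarzschild_radius(m)/1e3:.1f} km, "
          f"light circle {photon_sphere_radius(m)/1e3:.1f} km")
```

For a hole of ten solar masses, for example, the horizon lies about 30 km from the centre and the light circle about 44 km out, so all of the strange behaviour described here is confined to a region only a few tens of kilometres across.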

Between the event horizon (the "edge" of the black hole) and the speed of light circle, any sufficiently powerful rocket ship could balance the force of gravity at any distance from the hole by judicious use of its rocket motors. Then, with the aid of a sideways pointing rocket, it could travel in a circular orbit around the hole. This is where the fun begins.

In order to stay in a circular orbit the rocket motors must be used continuously to push the spaceship in or out with the appropriate force. In those circumstances, the occupants will feel a centrifugal force pushing them against one of the walls of the spacecraft. Within the speed of light circle, though, centrifugal force adds to the inward pull of gravity. So the outward force needed to keep the spaceship in a circular orbit increases as the speed of the spaceship round that orbit increases. Instead of being flung outward by centrifugal force, the occupants of the fast-moving spacecraft are sucked inward. In other words, centrifugal force always acts in such a way as to repel orbiting objects from the speed of light circle.

But researchers such as Abramowicz may be wrong to think in terms of centrifugal force, says Chakrabarti, not because the force is unreal, but because it is just one aspect of an overall force described by Einstein's equations. The point he makes, in a paper to be published shortly in the Monthly Notices of the Royal Astronomical Society, is that by splitting the single relativistic force into two components which are identified with the Newtonian gravitational force and a centrifugal force, we get a false impression of what is going on. Indeed, there is another way to split up the relativistic force on a particle orbiting a black hole. On this picture, centrifugal force is unchanged from the Newtonian picture, but gravity gets stronger inside the critical orbit, again increasing the effectiveness of the hole in swallowing matter up.

Chakrabarti says that what is really happening is that Einstein's description of gravity deviates from the Newtonian picture under the extreme conditions near a black hole.

This is reminiscent of the much more modest effect of the general relativistic "correction" which explains details of the orbit of Mercury around the Sun. Those details cannot be explained by Newtonian theory, but they can be mimicked by adding in an extra "Newtonian" force. Similarly, if we want to try to describe what is going on near a black hole in Newtonian terms we need to introduce an extra "force", as well as Newtonian gravity and good old fashioned centrifugal force. The bottom line is still that it is even harder for matter to escape from the vicinity of a black hole than the naive picture of gravity and centrifugal force being in balance in a Keplerian orbit (like the orbit of the Earth around the Sun) would suggest. But, for those who like to think in simple Newtonian pictures (rather than calculating the net force according to Einstein's equations), instead of centrifugal force literally reversing its sign at the critical radius, it can be seen as being overwhelmed by the contribution of the "new" force. There -- that feels better, doesn't it?


Comet catastrophe back on the agenda

The "Tunguska event" in which a fragment of comet exploded over Siberia and devastated a wide area in June 1908 may have been caused by a fragment of leftover material from an interplanetary stream which has already destroyed civilizations twice. If this hypothesis is correct, the world is in for another bout of fire from the heavens in about a thousand years time.

It sounds like science fiction, or the work of the same kind of crank that predicts the end of the world will occur when the pieces of comet Shoemaker-Levy 9 strike Jupiter in July. But these views were aired at a specialist meeting of the Royal Astronomical Society in London in March. They show that catastrophe is very much back on the agenda of serious science.

Interest in the effects of impacts from meteorites and comets striking the Earth has been high ever since the early 1980s, when it was suggested that a very large impact of this kind contributed to the death of the dinosaurs, 65 million years ago. This is now well established, and the site of the impact has been identified in the Yucatan peninsula of Mexico. Clearly, such large impacts are extremely rare. But what are the chances of a lesser catastrophe occurring? Astronomers know that there are objects like very large cometary nuclei, lumps of icy matter more than 200 km across, in the outer part of the Solar System. One example is the object Chiron, orbiting beyond Jupiter. It seems likely that one comet in a thousand is such a giant, and that most of the mass of the cloud of comets that surrounds the Solar System is in the form of giants.

Mark Bailey, of Liverpool John Moores University, has calculated how gravitational disturbances, chiefly caused by Jupiter and Saturn, will perturb such objects so that about once in every 200 000 years a giant falls into the inner part of the Solar System. Computer simulations show that the orbits the comets end up in are not stable, but are influenced by chaotic dynamics. They must, however, end up in a Sun-grazing orbit, and get broken up into pieces by the Sun in the same way that Shoemaker-Levy 9 has been broken into pieces by Jupiter. The result is a stream of debris orbiting round the Sun and crossing the orbit of the Earth.

When the Earth passes through such a stream of material, it will collect a rain of fine dust over a timescale of thousands of years, until this fine material is blown away by the solar wind. The Earth sweeps up about 5700 million tonnes of dust each year and, although dust steadily settles out of the atmosphere, the resulting "load" of material in the air will average out at about a thousand billion kilograms. Bill Napier, of the University of Oxford, pointed out that this is sufficient to reduce surface temperatures (by acting as a sunshield) by 3 to 5 degrees, perhaps triggering an Ice Age.
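Those two figures hang together only if a typical grain stays aloft for a matter of months. A back-of-envelope check of that implied residence time (my arithmetic, not Napier's, assuming a simple steady-state balance between dust arriving and dust settling out):

```python
# Rough consistency check (illustrative assumption: a steady state in which the
# average airborne "load" equals the influx rate times the mean time a grain
# stays in the atmosphere).
influx_kg_per_year = 5.7e12   # "about 5700 million tonnes of dust each year"
load_kg = 1.0e12              # "about a thousand billion kilograms"

residence_time_years = load_kg / influx_kg_per_year
print(f"implied mean residence time: {residence_time_years:.2f} years "
      f"(roughly {residence_time_years * 12:.0f} months)")
```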

But this is not the only problem. Mixed in with the dust stream, and still there when all the fine dust has blown away, there will be fragments of comet ranging from a few centimetres to a few tens of kilometres across. The impact that wiped out the dinosaurs was caused by an object only about 10 km in diameter, but the chances of a direct hit by such a large fragment are less than the chances of being hit by smaller fragments.

Victor Clube, also of the University of Oxford, argued that we are living in the aftermath of the breakup of a giant comet in the inner Solar System. He suggests that this event may have been associated with the most recent Ice Age, which began about 100 000 years ago. According to Clube, it produced a stream of Sun-orbiting material linked with the Taurid meteor stream, which peaks around 30 June in daylight hours but is visible as "shooting stars" in the night skies of November.

Clube calculates that the Earth passes through the thickest part of this belt of debris every 3000 years, and that this happened most recently in 500 AD and before that in 2500 BC. On both occasions, Tunguska-like events would have been common, with one impact in each region the size of England over a period of a hundred years or so. Could this explain the collapse of past civilizations, the "Dark Ages" of Europe, and recurring legends about fire from the skies? Clube and his colleagues have been promoting this idea for ten years, but now they have a solid weight of scientific evidence to support their case. The Tunguska event itself came at just the right time of year to fit the pattern, as an isolated straggler in the stream, but the next main date to watch out for is the year 3000, give or take 200 years. For once, the scientists involved are happy that they will not be here to test their prediction.


More atoms that communicate faster than light

PHYSICISTS still struggling to come to terms with experiments which show instantaneous communication between quantum particles under special circumstances are now faced with another puzzle. Correcting a mistake made by Enrico Fermi more than sixty years ago, Gerhard Hegerfeldt, of the University of Göttingen, has shown that in theory any pair of atoms can communicate faster than light.

The now-familiar puzzle of what are called "non-local" interactions develops from theoretical work by John Bell, of CERN, in the 1960s and experiments by Alain Aspect in Paris in the 1980s. Together, these show that a pair of photons ejected in opposite directions from an atom remain somehow entangled, as if they were one particle. Measuring the state of one of the photons instantaneously affects the state of the other one, wherever it may be. Now, it seems that even atoms which have never come into contact (from the perspective of classical Newtonian physics) are entangled in a similar way.

The calculation Fermi carried out in 1932, in the early days of quantum mechanics, concerned the response of one atom to radiation emitted by another atom of the same kind, some distance away. If the second atom is in an excited state, sooner or later it will emit radiation, falling back to its ground state. This radiation will have exactly the right frequency to excite the first atom (this is one of the principles underlying the way atoms are "pumped" into an excited state to make a laser).

Common sense tells us that the first atom cannot be excited until a finite time after the second atom decays -- until there has been time for radiation travelling at the speed of light to cross the gap. That is the result Fermi found. But it now turns out that he made a mistake in his calculation. Probably because the mistaken conclusion matched common sense, it took a long time for this to come to light. But Hegerfeldt's correct version of the calculation now makes it clear that there is a small chance that the first atom will be excited as soon as the second atom decays (Physical Review Letters, vol 72 p 596). As with all such quantum puzzles, this is only the beginning of the story; now, the experts have to explain what this mathematical result means. The best interpretation of the evidence so far seems to be that we should not think of any object, not even a single atom, as an "isolated system".

Because particles must also be considered as waves (one of the basic tenets of quantum mechanics), the individual particles in the atom are spread out, and there is a finite (though small) chance of finding them anywhere in the Universe. So the wave functions of the electrons in the first atom overlap with those of the electrons in the second atom. They are entangled, like the two photons produced in the Aspect experiment, and when an electron in one atom jumps down an energy level that can instantaneously make its counterpart in the other atom jump up by the same amount.


The prescient power of mathematics

WHY is mathematics such an effective tool for describing the way the physical world works? Many people see this as a deep mystery, and refer to the "unreasonable effectiveness" of mathematics in describing the known Universe. But Bruno Augenstein, of RAND, in Santa Monica, California, has turned this argument on its head. He says that the truth is that physicists are capable of finding a counterpart in the real world to any mathematical concept, and suggests that "clever physicists" should be advised to "deliberately and routinely" seek out "physical models of already discovered mathematical structures".

One person who might have benefited from such advice was Albert Einstein. He came to his general theory of relativity in 1915 by a tortuous path. But it turns out with hindsight that this mathematical description of the way spacetime curves and bends in the presence of matter is precisely equivalent to the equations developed by nineteenth century mathematicians to describe hypothetical alternative geometries to the familiar Euclidean geometry of flat planes (see New Scientist, 2 January 1993, "Pay Attention Albert Einstein".) Augenstein cites this as an example where work in pure mathematics seems to have anticipated a realization subsequently found in physical theory -- and he has found an even more dramatic example of this process at work.

In 1924, two mathematicians published a paper concerning what Augenstein calls "a somewhat surreal corner of set theory", dubbed the Banach-Tarski Theorems (BTT) in their honour (S. Banach and A. Tarski, Fundamenta Mathematicae, vol 6 p 244). BTT is an utterly bizarre branch of mathematics involving what is known as decomposition. Leaving out the mathematics and expressing some of the key results in vivid terms, Augenstein says that it is possible to prove mathematically that "you can cut solid body A, of any finite size and arbitrary shape, into m pieces which, without any alteration, can be reassembled into solid body B, also of any finite size and arbitrary shape."

Surreal indeed -- but so general as to be of little practical value. So he has taken up a specific version of this behaviour dealing with solid spheres. In particular, a solid sphere with unit radius can be cut into five pieces in such a way that two of the pieces can be reassembled into one solid sphere with unit radius, while the other three pieces are reassembled into a second solid sphere with unit radius. These are the minimum numbers of pieces required to do the trick, but it can be repeated indefinitely -- and perhaps readers familiar with modern particle physics may guess what is coming next. In a paper that is to be published in Speculations in Science and Technology (which, in spite of the title, is a serious scientific journal), Augenstein shows that the rules governing the behaviour of these mathematical sets and sub-sets are formally exactly the same as the rules which describe the behaviour of quarks and "gluons" in the standard model of particle physics, quantum chromodynamics (QCD), which was developed in the 1970s.

QCD was developed half a century after the original BTT paper appeared, but the physicists who developed the standard model knew nothing of that surreal corner of set theory. Neutrons and protons, in this model, are made up of triplets of quarks, while the mesons whose exchange binds protons and neutrons together in nuclei (playing a role equivalent to that of photons in electromagnetic field theory) are each made of a pair of quarks (a quark and an antiquark). The magical way in which a proton entering a metal target can produce a swarm of new copies of protons emerging from that target, each identical to the original proton, is precisely described by the BTT process of cutting spheres into pieces and reassembling them to make pairs of spheres.

BTT has been described as "the most surprising result of theoretical mathematics", a view which Augenstein endorses. But what does all this tell us about the physicists' view of the world? How "real" are entities such as quarks, and to what extent should they be regarded simply as artificial models and analogies to help us try to understand the incomprehensible subatomic world? Andrew Pickering, of Edinburgh University, has argued that physicists are always capable of producing models of how the world works, given any self-consistent set of experimental data, and that these models always reflect the culture of their times.

"It is unproblematic," he says, "that scientists produce accounts of the world that they find comprehensible; given their cultural resources, only singular incompetence could have prevented members of the [particle physics] community producing an understandable version of reality at any point in their history" (Constructing Quarks, Edinburgh UP, 1984. p 413). And, he continues, "given their extensive training in sophisticated mathematical techniques, the preponderance of mathematics in particle physicists' accounts of reality is no more hard to explain than the fondness of ethnic groups for their native language". All of this runs counter to the way most physicists regard their trade, which they see as a process of uncovering real truths about the Universe which exist independently of whether or not physicists have yet discovered them. It is a disturbing thought, to most physicists, that their entire world view may be no more than a Kiplingesque "just so" story which provides a series of analogues and models which enable us to think we understand what is going on inside the atom, but which are as much a product of our cultural experiences and beliefs as they are an indication of an underlying reality that exists independently of the probing of physicists.

But if Augenstein and Pickering are right, perhaps that is the way we should regard the world. "BTT," says Augenstein, "can shed light on other wide-ranging questions of cognition and learning -- how we develop views of physical space, how we internally manipulate the external world and build models of it [and] how we fashion and modify beliefs". Not bad, for a 70-year-old corner of an obscure branch of pure mathematics. Accepting the unreality of quarks may be a small price to pay for such an understanding of ourselves.


On Ye Shoulders of Giants

John Gribbin

Isaac Newton was born in Woolsthorpe, Lincolnshire, just 350 years ago, on Christmas day, 1642. He was a small, sickly baby, who surprised his mother (his father, also called Isaac, had died three months before young Isaac was born) by surviving his birthday; he went on surviving for another 84 years. His contribution to establishing science and the scientific method as providing the best description of the material world, and the awe in which he was held by his contemporaries, were neatly encapsulated early in the eighteenth century by the poet Alexander Pope, with his famous couplet:

Nature and Nature's laws lay hid in night:
God said, Let Newton be! and all was light.

But, as we shall see, it wasn't quite that simple.

Before Newton was two, his mother remarried and moved to a nearby village, leaving him in the care of his grandmother for nine years, until the death of his stepfather. The trauma of this separation almost certainly explains Newton's strange behaviour as an adult, including his secretiveness about his work, his obsessive anxiety about how it would be received when it was published, and the violent, irrational way in which he responded to any criticism by his peers.

After his stepfather died, however, Isaac and his mother were reunited, and she planned initially for him to take over the management of the family farm. He proved hopeless at this, preferring to read books rather than to herd cattle, so he was sent back to school in Grantham, and then (with the aid of an uncle who had a connection with Trinity College in Cambridge) on to university. He arrived in Cambridge in 1661, a little older than most of the other new undergraduates because of his interrupted schooling.

Newton's notebooks show that even as an undergraduate he kept abreast of new ideas, including those of Galileo and the French philosopher René Descartes. These marked the beginning of the new view of the Universe as an intricate machine, an idea which had yet to penetrate, officially, the great universities of Europe. But he kept all this to himself, while he also made a thorough study of the distinctly old-fashioned official curriculum, based on the ancient teaching of Aristotle, and obtained his bachelor's degree in 1665, a satisfactory, but not seemingly brilliant, student in the eyes of his teachers.

The same year, plague broke out in London, the university was closed as a result, and Newton went home to Lincolnshire, where he stayed for the best part of two years, until normal academic life resumed. It was during those two years that Newton derived the inverse square law of gravity -- perhaps stimulated by watching the fall of an apple. In order to do this, he invented a new mathematical technique, differential calculus, which made the calculations more straightforward. And, as if this were not enough, he also began his investigation of the nature of light, discovering and naming the spectrum, the rainbow pattern of colours that is produced when white light passes through a prism. None of this made any impact on the scientific world at the time, because Newton didn't tell anybody what he was up to.

When the university reopened in 1667, he was elected to a fellowship at Trinity College, and by 1669 he had developed some of his mathematical ideas to the point where they were circulated to the cognoscenti. By now, at least some of the professors in Cambridge were beginning to take notice of his ability, and when Isaac Barrow resigned from the post of Lucasian professor of mathematics in 1669 (in order to devote more time to divinity), he recommended that Newton should be his successor. Newton became Lucasian professor at the age of 26 -- a secure position for life (if he wanted it to be), with no tutoring responsibilities but the requirement to give one course of lectures each year. The present Lucasian professor, incidentally, is Stephen Hawking.

Between 1670 and 1672, Newton used these lectures to develop his ideas on light into the form which later became the first part of his epic treatise Opticks. But this was not published until 1704, as a result of one of the most protracted personality clashes of even Newton's tempestuous career. The problems began when Newton started to communicate his new ideas through the Royal Society, an organization which had been founded only in 1660, but which was already established as the leading channel of scientific communication in Britain. The row, with Robert Hooke, also led to the most famous remark made by Newton -- and one which, recent research suggests, has been misinterpreted for three hundred years.

The Royal Society first learned of Newton as a result of his interest in light -- not his new theory of how colours are formed, but his practical skill in inventing the first telescope to use a mirror, instead of a lens system, to focus light. The design is still widely in use and known to this day as a Newtonian reflector. The learned gentlemen of the Society liked the telescope so much, when they saw it in 1671, that in 1672 Newton was elected a Fellow of the Society. Pleased in his turn by this recognition, in that same year Newton presented a paper on light and colours to the Society. Robert Hooke, who was the first "curator of experiments" at the Royal Society, and is remembered today for Hooke's Law of elasticity, was regarded at the time (especially by himself) as the Society's (if not the world's) expert on optics, and he responded to Newton's paper with a critique couched in condescending terms that would surely have annoyed any young researcher. But Newton had never been able, and never learned, to cope with criticism of any kind, and was driven to rage by Hooke's comments. Within a year of becoming a Fellow of the Royal Society and first attempting to offer his ideas through the normal channels of communication, he had retreated back into the safety of his Cambridge base, keeping his thoughts to himself and avoiding the usual scientific toing and froing of the time.

But early in 1675, during a visit to London, Newton heard Hooke, as he thought, saying that he now accepted Newton's theory of colours. Newton was sufficiently encouraged by this to offer the Society a second paper on light, which included a description of the way coloured rings of light (now known as Newton's rings) are produced when a lens is separated from a flat sheet of glass by a thin film of air. Hooke immediately complained, both privately and publicly, that most of the ideas presented to the Society by Newton in 1675 were not original at all, but had simply been stolen from his (Hooke's) work. In ensuing correspondence with the secretary of the Society, Newton denied this, and made the counter claim that, in any case, Hooke's work was essentially derived from that of René Descartes.

Things were brewing up for an epic row when, seemingly under pressure from the Society, Hooke wrote a letter to Newton couched in terms which could be interpreted as conciliatory (if the reader were charitable) but in which he still managed to repeat all his allegations and to imply that, at best, Newton had merely tidied up some loose ends. It was this letter that provoked Newton's famous remark to the effect that if he had seen further than other men, it is because he stood on the shoulders of giants.

This remark has traditionally been interpreted as indicating Newton's modesty, and his recognition that earlier scientists such as Johannes Kepler, Galileo and Descartes had laid the foundations for his laws of motion and his great work on gravity -- which is odd, because in 1675 Newton hadn't made his ideas about gravity and motion public. The charge of modesty does not, in any case, seem one which would stick to such a prickly, even arrogant, character as Newton, although it is easy to see how the story might appeal to later generations. So where did the remark come from?

As part of the celebrations marking the tercentenary of the publication of the Principia, in 1987 Cambridge University organised a week-long meeting at which eminent scientists from around the world brought the story of gravity up to date. At that meeting, John Faulkner, a British researcher now based at the Lick Observatory in California, presented his persuasive new interpretation of what Newton meant by that remark, based on Faulkner's probing into the documents related to the feud with Hooke. Newton was certainly not being modest, but arrogant when he made that statement, said Faulkner; and he was certainly not referring to Kepler and Galileo, or his work on gravity, but, indeed, to his work on light.

In fact, similar references to the giants of the past were common in Newton's day, and were generally used to express indebtedness to the ancients, especially the Greeks. Seventeenth century scientists in general (perhaps even Newton himself) seem to have thought that they were doing no more than rediscovering laws known in much more detail to the ancients. Newton's choice of words, in a letter to Hooke dated 5 February 1675, seems to have been particularly careful, bearing in mind their previous disagreements and the fact that Hooke himself had a distinctly unprepossessing personal appearance.

Quoting from seventeenth century contemporaries of Newton and Hooke, including Hooke's friends, Faulkner created a picture of Hooke resembling nothing so much as William Shakespeare's caricature of Richard III -- distinctly twisted, and even dwarfish. Even taking some of this with a pinch of salt, there is no doubt that Hooke was a little man.

In this context, says Faulkner, the sentences in that letter by Newton leading up to the remark about giants set that remark in a quite different context. Remember that this was, after all, not a hurried note despatched to a friend, but a letter written at the behest of the Royal Society in order to resolve publicly an embarrassing public quarrel between two of its Fellows; Newton certainly chose his words carefully to achieve that objective, but in the light of his previous and subsequent behaviour it seems more than likely that, as Faulkner suggests, he took equal care with the hidden sub-text. Here are the relevant sentences, with Faulkner's interpretation of Newton's intended meaning.

"What Des-Cartes did was a good step." (Interpretation: he did it before you did.) "You have added much in several ways, & especially in taking ye colours of thin plates into philosophical consideration." (Interpretation: all you did was follow where Descartes led.) "If I have seen further it is by standing on ye shoulders of Giants." (Interpretation, taking particular notice of Newton's careful use of the capital "G": my research owes nothing to anybody except the ancients, least of all to a little runt like you.)

Taking the exchange of letters at face value, they achieved the Society's objective of pouring public oil on troubled waters and restoring respectability to the dealings between its Fellows. But the upshot was that Newton retreated back even further into his shell following this encounter. He waited patiently until Hooke died, in 1703, before publishing his Opticks in 1704, when he could safely have the last word. And it was only through the intervention of his friend Edmund Halley, of comet fame, that he was pushed into publishing his greatest work, the Principia, in 1687, twelve years after the second row with Hooke. By then, the core of the work was more than twenty years old. If it hadn't been for the row with Hooke, Cambridge University might have been celebrating the tercentenary back in the 1960s.


Suck it and see

Albert Einstein is famous for many things, but not for explaining what goes on inside a cement mixer, a glass of milk, a dirty cloud or a cup of sweet tea. He ought to be, though; his first important piece of scientific work did all that, and determined the sizes of molecules, as well.
John Gribbin.

IF YOUR Christmas festivities are anything like mine, at some point several cups of well-sweetened black coffee will feature in the proceedings. As you gaze into the murky depths of the healing brew, ponder on the fact that shortly before he came to the attention of the scientific community at large Albert Einstein used a similar brew to cure a headache that had been troubling scientists for some time. He worked out how big molecules are.

This may not seem like a big deal, from the perspective of the 1990s. But Einstein's biographer Abraham Pais provides a neat indication of just how important the work was.

One of the standard ways to determine how useful a scientific paper is involves counting the number of times it is referred to in other scientific papers -- the number of citations. Pais points out that, of all the scientific papers published before 1912, three of the "top ten" most frequently cited in all the papers published between 1961 and 1975 are by Einstein.

The most frequently cited of the three is, in fact, his thesis, and a sequel to this paper is the next most frequently cited of Einstein's early works. The third is one of the three famous papers he published in 1905, on Brownian motion. The paper announcing the special theory of relativity, also published in 1905, doesn't come in Einstein's top three or the overall top ten of citations for papers published before 1912!

One reason for this is that special relativity became such a standard feature of physics that long before the 1960s and 1970s hardly anyone ever bothered to read, or refer to, the original 1905 paper. Nevertheless, this scientific ranking does make it clear that Einstein's thesis was something special. The reason why the thesis paper has been so widely quoted, so recently, is that it deals with the properties of particles suspended in a fluid, a topic which has much more practical application in everyday life than the special theory of relativity, and has found uses in calculations as diverse as the way sand particles get stirred up in cement mixers, the properties of cow's milk, and the way fine particles of dust and droplets of liquid (aerosols) are suspended in clouds.

But Einstein didn't set about his investigation of the way particles are suspended in fluids because he was concerned about problems involving cement, milk, or dirty clouds. As he later told his friend and scientific sparring partner Max Born, referring to the work leading up to his thesis, "my main purpose for doing this was to find facts which would attest to the existence of atoms of definite size".

Amazingly to modern minds, at the beginning of the twentieth century many scientists still doubted the reality of atoms. But in searching for evidence of atoms Einstein was following a well established tradition, going back almost a hundred years. In his thesis he came to the very brink of providing the final clinching proof that would at last persuade the remaining doubters of the reality of molecules and atoms -- that final proof actually came in the Brownian motion paper, which was really an extension of the thesis work.

In the mid-1860s, a very neat attempt at estimating the sizes of molecules was made by Johann Joseph Loschmidt, an Austrian chemist. The Italian Amedeo Avogadro had hypothesised that a box of a certain volume, filled with gas at a certain temperature and pressure, must contain the same number of particles (atoms or molecules), whatever the chemical composition of the gas in the box. The nub of Loschmidt's approach, which carries over into Einstein's work, is that he used two sets of equations to determine simultaneously two properties of molecules -- their sizes, and Avogadro's number.

If you have one unknown quantity, and one equation in which that quantity appears, you can solve the equation to find the unknown quantity. If you have two unknown quantities, you need two equations each involving both quantities before you can solve the equations to find out both the unknown numbers. With three unknowns, you need three equations, and so on -- but both Loschmidt and Einstein stuck, for this particular calculation, with a pair of simultaneous equations, solving them for two unknown quantities.

Loschmidt's calculations involved the average distance a molecule travels between collisions in a gas -- the so-called "mean free path" -- and the fraction of the volume of the gas actually occupied by the volume of all the molecules added together. He assumed that in a liquid all the molecules are touching each other, which gave him a handle on the volume occupied by all the particles (molecules) in the liquid when they are closely packed together. Then, when the same liquid was heated to become a gas, he knew that the volume of gas actually occupied by the molecules must be the same as the volume of the liquid that had been evaporated, and that the rest of the volume of the gas is simply the empty space that the molecules whiz through. Since he actually carried out his calculations for air, he had to use estimates of the densities of liquid nitrogen and liquid oxygen which were not as accurate as modern measurements, but he still came up with answers to his calculations that stand up very well even today. Loschmidt said that the diameter of a typical molecule of air must be measured in millionths of a millimetre, and he gave, in 1866, a value for Avogadro's Number of 0.5 x 10^23.
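In modern notation the two relations Loschmidt combined are the kinetic-theory formula for the mean free path and the fraction of the gas volume occupied by the molecules themselves (estimated from the density of the liquefied gas); eliminating the number density between them gives the molecular diameter directly. Here is a sketch of the same two-equations-in-two-unknowns trick with round, illustrative inputs rather than Loschmidt's own data:

```python
import math

# Loschmidt-style estimate in modern notation (illustrative inputs, not his data):
#   mean free path:    lam = 1 / (sqrt(2) * pi * n * d**2)
#   occupied fraction: phi = n * (pi/6) * d**3   (from the density of the liquid)
# Eliminating the number density n between the two gives d = 6 * sqrt(2) * phi * lam.
lam = 1.3e-7   # assumed mean free path of air molecules, metres
phi = 1.5e-3   # assumed ratio of liquid-air volume to the same mass of gas

d = 6 * math.sqrt(2) * phi * lam                  # molecular diameter
n = 1 / (math.sqrt(2) * math.pi * lam * d**2)     # molecules per cubic metre of gas
N_A = n * 22.4e-3                                 # per mole (22.4 litres at STP)

# With inputs this crude the answers are good only to an order of magnitude or so,
# much as Loschmidt's own 1866 figure was.
print(f"molecular diameter ~ {d * 1e9:.1f} nanometres")
print(f"Avogadro's number  ~ {N_A:.1e} per mole")
```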

Using modern data, the mean free path of molecules of air turns out to be just 0.13 millionths of a metre at 0 °C, and an oxygen molecule in air at that temperature will be travelling at just over 461 metres per second. So it undergoes more than 3.5 billion (thousand million) collisions every second. The modern value for Avogadro's number is 6 x 10^23.

Einstein's approach used a similar form of mathematical reasoning, solving two simultaneous equations with the same two unknown quantities in them. But he applied his reasoning not to gases but to solutions, in which molecules of one compound (the solute) are spread more or less evenly through a liquid which is made up of molecules of another compound (the solvent). There is nothing particularly exotic about the solutions Einstein based his calculations on -- they were simply solutions of sugar in water. His calculations would, in fact, apply very precisely to the behaviour of a cup of sweet tea or coffee.

The starting point for this work was the discovery, made back in the 1880s but still surprising the first time you encounter it, that the molecules in a solution behave like the molecules of a gas. One example of this is the phenomenon known as osmotic pressure. Imagine a container full of a solvent (just water, in Einstein's calculation), divided into two halves by a barrier which has tiny holes in it that allow the molecules of solvent to pass through, but are too small to allow molecules of a chosen solute (the sugar) to pass through. Now, if you put sugar into one half of the container, so that there is a solution on one side of the barrier (often called a "semi-permeable membrane") but not on the other side, solvent will flow through the barrier and into the half of the container that holds the solution. The water, in this case, flows from the weaker solution into the stronger solution, trying to establish a thermodynamic equilibrium by evening out the concentration of the solution in both halves of the container. This is exactly equivalent to the way in which gas spreads out from one side of a box to fill the entire box when a partition in the middle of the box is removed.

You might think, on first encountering the problem of two different strength solutions separated by a semi-permeable membrane, that the "extra" molecules in the stronger solution ought somehow to force the solvent through into the weaker solution. But if that happened, the strong solution would get stronger and the weak solution would get weaker. There would be a more pronounced difference between the two halves of the container, so entropy would have decreased.

In order for information to be lost and entropy to increase, in line with the second law of thermodynamics, the strong solution must somehow be made weaker, more like the weak solution, even if that involves a net flow of solvent from the weak solution into the strong solution. As a result, the level of solution rises in the side of the container that contains the sugar, and the level drops in the side of the container containing just water. The process stops when the extra pressure of the stronger solution, caused by the weight of the extra height of liquid in that side of the container, is strong enough to stop the flow of solvent through the membrane. The flow of solvent through the membrane is known as osmosis, and the pressure needed to stop the flow is called the osmotic pressure.
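
For dilute solutions the osmotic pressure obeys the same law as an ideal gas: the pressure equals the number of moles of solute per unit volume multiplied by RT (van 't Hoff's law). A minimal sketch for the cup of sweet tea mentioned above, with a teaspoon or so of sugar assumed purely for illustration:

    # Osmotic pressure of a dilute sugar solution via van 't Hoff's law,
    # which treats the dissolved molecules like an ideal gas: Pi = (n / V) * R * T.
    # The teaspoon-of-sugar numbers are illustrative, not taken from the article.
    R = 8.314            # gas constant, J / (mol K)
    T = 330.0            # a hot cup of tea, kelvin
    mass_sugar = 4.0     # grams of sucrose, roughly one level teaspoon
    molar_mass = 342.3   # grams per mole of sucrose (C12H22O11)
    volume = 200e-6      # 200 millilitres of water, in cubic metres

    moles = mass_sugar / molar_mass
    pressure = (moles / volume) * R * T   # pascals
    print(f"osmotic pressure ~ {pressure / 1000:.0f} kilopascals "
          f"(about {pressure / 101325:.1f} atmospheres)")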

The osmotic pressure depends on the number of molecules of solute in the solution -- the more concentrated the solution is, the stronger the pressure. And, once again, the sizes of the molecules involved come into the calculation in terms of the fraction of the volume of the solution that is actually occupied by those molecules. The second equation used by Einstein involved the mean free path of the molecules of the solute, which he related to the rate at which those molecules diffuse through the liquid. Along the way, he had to determine other properties, such as the relation between this diffusion and the viscosity (stickiness) of a liquid, which proved so interesting to engineers investigating cement, milk, and all the rest. In the thesis itself, written in 1904, Einstein found a value for Avogadro's Number of 2.1 x 10^23, with estimates for molecular sizes in the now familiar range of around a millionth of a millimetre; in an updated version published in 1906, he was able to improve the calculation by using some new data from more accurate measurements of the behaviour of sugar solutions, which gave him a value of 4.15 x 10^23. And by 1911, Einstein was able to improve the calculation still further in a new paper which gave Avogadro's Number as 6.6 x 10^23. By then, this crucial number had been determined reasonably accurately in a dozen different ways, and all those determinations gave very similar values. Each technique independently confirmed the reality of molecules, and there was no longer any doubt that atoms and molecules were real, physical entities.
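
In modern notation, the two simultaneous equations are the relations now known as Einstein's viscosity formula and the Stokes-Einstein diffusion formula. The first says that dissolving the sugar raises the viscosity of the water by a factor (1 + 2.5 x phi), where phi is the fraction of the volume occupied by the sugar molecules (the coefficient 2.5 is the one from the corrected calculation); the second relates the measured diffusion rate of the sugar to the viscosity, the molecular radius and Avogadro's Number. Measure the viscosity increase and the diffusion rate, and the two equations can be solved for the two unknowns. A sketch with illustrative input values of roughly the right size, not Einstein's own data:

    import math

    # Einstein's two simultaneous equations, in modern notation, applied to a
    # sugar solution.  The input values are illustrative, of the right order of
    # magnitude, NOT Einstein's 1905/1906 data.
    R = 8.314        # gas constant, J / (mol K)
    T = 293.0        # temperature, kelvin
    eta = 1.0e-3     # viscosity of pure water, Pa s
    D = 4.3e-10      # measured diffusion coefficient of sugar in water, m^2 / s
    c = 10.0         # sugar concentration, kg per cubic metre (a 1% solution)
    M = 0.342        # molar mass of sucrose, kg per mole
    ratio = 1.023    # measured viscosity of the solution divided by that of pure water

    # Equation 1 (viscosity): ratio = 1 + 2.5 * phi, where phi is the fraction
    # of the solution's volume occupied by sugar molecules.
    phi = (ratio - 1.0) / 2.5
    # phi = (c * N_A / M) * (4/3) * pi * a^3, so a^3 * N_A is fixed:
    a3_NA = 3.0 * phi * M / (4.0 * math.pi * c)

    # Equation 2 (diffusion): D = R * T / (6 * pi * eta * a * N_A), so a * N_A is fixed:
    a_NA = R * T / (6.0 * math.pi * eta * D)

    # Two equations, two unknowns: solve for the molecular radius a and for N_A.
    a = math.sqrt(a3_NA / a_NA)
    NA = a_NA / a
    print(f"molecular radius ~ {a * 1e9:.2f} nanometres")
    print(f"Avogadro's Number ~ {NA:.1e} per mole")

With these made-up but plausible inputs the answers come out close to the modern values, which is the whole point of the method: everything on the right-hand side can be measured in the laboratory.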

Einstein had one more trick up his sleeve, derived from the same investigation of the sizes of atoms. In a paper written in October 1910, he considered the way in which the blue colour of the sky is produced by light scattering from the molecules of the air itself. As far back as 1869, the British physicist John Tyndall had suggested that the blueness of the sky might be caused by the way in which small dust particles or droplets of liquid in the air would bounce blue light (which has short wavelengths) around, scattering it to all parts of the sky, while red and orange light (which has longer wavelengths) could pass through relatively unaffected (explaining why sunrises and sunsets are red). Other scientists realised that the scattering must actually be caused by the molecules of air themselves. But it was Einstein who put the numbers in, proving that the blueness of the sky was connected with the existence of molecules and deriving the value of Avogadro's Number in yet another way in the process. It was the ultimate piece of blue sky research.
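
The quantitative point is that scattering of light by particles much smaller than the wavelength (Rayleigh scattering) is proportional to one over the fourth power of the wavelength, so blue light is scattered several times more efficiently than red. A one-line illustration, with representative wavelengths I have picked for the comparison:

    # Rayleigh scattering strength goes as 1 / wavelength**4, so blue light
    # is scattered much more strongly than red -- hence the blue sky.
    blue = 450e-9   # metres, a representative blue wavelength
    red = 650e-9    # metres, a representative red wavelength

    ratio = (red / blue) ** 4
    print(f"blue light is scattered about {ratio:.1f} times more strongly than red")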


Shedding light on the quark-gluon plasma

OR
Big Bang physics reproduced at CERN

ENERGETIC photons produced in high energy interactions at the Super Proton Synchrotron (SPS) facility at CERN have begun to reveal the behaviour of matter under conditions that have not existed naturally since the Big Bang. Researchers from the Variable Energy Cyclotron Centre, in Calcutta, claim that the energy spectrum of these photons, announced recently by the WA80 collaboration at CERN, exactly matches predictions for the radiation from a quark-gluon plasma.

Quarks are thought to be the fundamental building blocks of matter, from which protons and neutrons (members of the hadron family) are constructed. Gluons are the particles that carry the forces between quarks, playing a role equivalent to that of photons, the carriers of the electromagnetic force. Some, but not all, theories predict that at very high energies hadrons would be decomposed into a quark-gluon plasma, as a result of a phase transition analogous to the transformation of electrically neutral atoms into a mixture of positive ions and negative electrons at temperatures of a few thousand K.

Physicists have begun to probe the energies where the transition to a quark-gluon plasma might occur, by accelerating not just protons but relatively massive atomic nuclei, such as those of sulphur, to a sizeable fraction of the speed of light and smashing them into targets (Recreating the Birth of the Universe, New Scientist, 17 August 1991). This should briefly create a quark-gluon bubble, mimicking in miniature the Big Bang itself. But how can the physicists tell if they have succeeded in their aim?

Such short lived mini bangs should release a flood of radiation, and particles of many kinds, manufactured by the conversion of energy into mass. It is difficult to unravel the complexity of all the particles produced. But Dinesh Srivastava and Bikash Sinha have calculated that the characteristic signature of the quark-gluon phase transition should be revealed from the simplest observations -- measurements of the energy of individual photons produced in these events.

They have calculated the energy spectrum of these photons, and compared this with the equivalent energy spectrum of single photons produced by a bubble of hot hadrons. They report, in a paper to appear in the journal Physical Review Letters, that the observations of single photons from the CERN experiment are incompatible with the presence of a pure hot hadron bubble, and unambiguously confirm that the photons are coming from a quark-gluon plasma.

The implication is that theories which do not predict the formation of a quark-gluon plasma can already be discarded. Further studies of these mini bangs will lead to a better understanding of the physics of the Big Bang itself.


Underground Astronomy

John Gribbin

THE LAST place you would expect to find a new astronomical telescope would be in a hole in the ground, buried beneath nearly 1.5 kilometres of rock. Yet that is exactly what you would find, in a series of chambers carved out alongside the road tunnel that runs through the mountains of the Gran Sasso Massif, some 150 km northeast of Rome. Of course, the astronomers operating the "telescopes" in the Gran Sasso Laboratory are not investigating the light from heavenly bodies. The particles they study come from deep within the Sun and stars, and pass through all those kilometres of rock more easily than light passes through a pane of clear glass. They are called neutrinos, and by counting the number of neutrinos passing through two 7-metre high tanks, each containing 70 cubic metres of liquid, the astronomers are able to probe the way in which stars work, and to find out more about the elusive neutrinos themselves.

Neutrinos are produced in nuclear reactions, like those which keep the Sun hot, and in the violent explosions, known as supernovas, which mark the deaths of some stars. They travel at close to the speed of light, and are extremely reluctant to interact with the atoms of ordinary matter.

If a beam of neutrinos were to travel through solid lead for a distance of 3,500 light years (one tenth of the distance to the centre of the Milky Way) only half of the neutrinos would be absorbed by atoms of lead along the way.
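
Absorption of a beam passing through matter falls off exponentially, so that quoted figure can be scaled to any other thickness: the surviving fraction is one half raised to the power of the thickness divided by the "half-thickness". A rough illustration, taking the 3,500 light years of lead quoted above as the half-thickness and asking what would happen to neutrinos crossing something the size of the Earth (treated, purely for illustration, as if it were made of lead):

    # Exponential attenuation: surviving fraction = 0.5 ** (thickness / half_thickness).
    # The half-thickness is simply the 3,500 light years of lead quoted above;
    # treating the Earth as a lump of lead is purely illustrative.
    LIGHT_YEAR = 9.46e15                  # metres in one light year
    half_thickness = 3500 * LIGHT_YEAR    # metres of lead needed to absorb half the beam
    earth_diameter = 1.27e7               # metres

    fraction_absorbed = 1 - 0.5 ** (earth_diameter / half_thickness)
    print(f"fraction absorbed crossing the Earth: about {fraction_absorbed:.1e}")
    # roughly 3 parts in ten million million -- the Earth is essentially transparent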

Even though 66 billion neutrinos from the Sun are estimated to pass through every square centimetre of the Earth's surface (and through you) every second, capturing even one neutrino is very difficult. Detectors have to be very large, and very sensitive. And because they are very sensitive, they have to be buried deep beneath the ground, where the layers of rock above will block out the "interference" caused by other particles from space, the cosmic rays.

Cosmic rays would trigger sensitive neutrino detectors on the surface of the Earth, and blot out any "signals" from neutrinos themselves. But since the cosmic rays are absorbed by rock, this problem does not arise with the detectors in the Gran Sasso Laboratory. In fact, the Gran Sasso scientists also study the cosmic rays themselves, using detectors on the surface of the Gran Sasso massif, at an altitude of more than 2000 metres. Some of these instruments, much more like traditional telescopes, monitor the flashes of light produced by some cosmic rays when they strike the atmosphere of the Earth. Others detect many of the particles which penetrate through the atmosphere to the surface of the mountain.

But although the rock of the mountain shields the Gran Sasso Laboratory itself from almost all cosmic rays, there may be other particles, as well as neutrinos, that can penetrate to the deep caverns. According to some theories, during the Big Bang in which the Universe was born some 15 billion years ago, stable entities called magnetic monopoles should have been created. These are particles which carry just one kind of magnetism, an isolated "north pole" or "south pole". If cosmic rays contain such magnetic monopoles, some of them should penetrate to the Gran Sasso caverns, where a detector known as MACRO is waiting to trap them. So far, however, the search has proved fruitless.

The isolation of the laboratory from interference by cosmic rays also makes it an ideal home for other experiments which need extremely sensitive detectors. One of these looks for a phenomenon known as double beta decay, which occurs when the nucleus of an atom emits two electrons simultaneously (electrons are also known, for historical reasons, as beta rays).

The process also involves the emission of neutrinos from the nucleus, and it is extremely rare. By studying the neutrinos emitted during double beta decay, researchers at Gran Sasso and elsewhere hope to find out more about neutrinos themselves, and in particular to get an accurate measure of their mass.

So far, all that can be said is that these elusive particles each have a mass less than 0.001 per cent of the mass of an electron. This is so ridiculously tiny that some researchers would like to be able to set the mass at precisely zero. But on the other hand, it may be that such a tiny, but non-zero, mass really is needed to explain what the main detectors in the Gran Sasso Laboratory, and elsewhere around the world, are discovering. They have found that there is something wrong with the standard theory of how the Sun works -- but that the problem might be resolved if neutrinos really do have a tiny mass.

This "solar neutrino problem" is the main reason for the existence of the Gran Sasso Laboratory, the reason why, in 1981 Antonio Zichichi, president of the Istituto Nazionale di Fisica Nucleare proposed taking advantage of the excavations for a new road tunnel through the mountains by extending them to provide an underground astronomical laboratory in man-made caverns in the rock.

The saga of the solar neutrino problem began with the construction of the first solar neutrino detector by Ray Davis and his colleagues in America at the end of the 1960s. That detector, a tank the size of an Olympic swimming pool, filled with perchlorethylene (a commonly used cleaning fluid!), was set up at the bottom of the Homestake Gold Mine, in South Dakota, where, like the Gran Sasso detectors, it is shielded by hundreds of metres of rock from the "noise" of cosmic rays.

On the rare occasions that a solar neutrino interacts with one of the chlorine atoms in the perchlorethylene liquid, it converts the chlorine into argon. Every few weeks, the swimming-pool size detector is "purged" of argon by bubbling helium gas through the tank. The helium carries the argon off to a system of detectors which actually count the number of argon atoms. And after all that effort, an average of 12 counts are recorded for each run of the experiment. Only one argon atom is produced in the tank every two or three days.

The problem is that, according to standard theories of how the Sun works, there should be about three times as many detections in the Davis tank.

Astrophysicists are confident that they know how the Sun shines. The energy to make sunlight comes from nuclear reactions deep in the heart of the Sun, where hydrogen nuclei are fused together to make helium nuclei -- essentially the same process that occurs in the hydrogen bomb. They know how many reactions should be taking place every second to keep the Sun shining, and they know how many neutrinos those reactions should produce. Overall, they calculate that as a result the Davis detector should be recording 25 events per month. In fact, throughout the 1970s and 1980s, and right up to date, Davis has found an average of just 9 solar neutrino events per month. One explanation for the shortfall would be that astrophysicists do not, after all, understand how the Sun shines. This would be a very uncomfortable discovery. The only other possibility is that something happens to the neutrinos on their way out from the heart of the Sun, so that two-thirds of them are converted into a form that cannot be detected by the Davis experiment.

GALLEX, the main experiment in the Gran Sasso Laboratory, may have helped to resolve the dilemma.

The experiment gets its name from the fact that it uses gallium, rather than chlorine, as the neutrino detector. The 30 tonnes of gallium are in the form of a solution of 100 tonnes of gallium chloride. This time, when a neutrino from the Sun reacts with a nucleus of gallium, the gallium is converted into germanium. Germanium chloride is swept out of the tank by "purging" it with nitrogen, and, in much the same way that the Davis experiment counts argon atoms, the germanium atoms are counted by sensitive detectors.

The key importance of the GALLEX experiment is that it detects neutrinos across a broader range of energies than those captured in the Davis experiment.

The Davis experiment finds only a third of the expected number of neutrinos. A Japanese detector, known as Kamiokande II and operating at higher energies, finds about half the number of neutrinos predicted by theory. GALLEX, which is sensitive to both low-energy and high-energy neutrinos, finds about two-thirds of the predicted number. The situation looks messy, but in fact the differences are a great help to the theorists. The point is that the neutrinos with different energies are produced in slightly different ways inside the Sun. Comparison of the results at different energies shows that the standard astrophysical theory of how the Sun works is correct. So what is happening to the neutrinos? Putting all of the evidence together, the best explanation is that some of them really are "changing their spots" on the way out from the heart of the Sun.

This is possible because there are, in fact, three different kinds of neutrino -- one associated with the electron and two others associated with the electron's heavier cousins, the muon and the tau (collectively, these electron-like particles are known as leptons). Only electron neutrinos will interact with the solar neutrino detectors here on Earth, so if some of the electron neutrinos from the Sun are converted into other forms before they reach us, those detectors will record a shortfall.

Such changes will depend on the energy of the original neutrinos, which explains why the three different detectors give different measurements of the solar neutrino flux. And, crucially, they require the neutrinos to have a tiny mass. The mass can be very small indeed -- within the limit of 0.001 per cent of the electron mass -- but it cannot be precisely zero.
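
To put that limit in the units particle physicists prefer: the electron's rest-mass energy is about 511,000 electronvolts, so 0.001 per cent of it is around 5 electronvolts. A trivial check of the arithmetic:

    # Converting "0.001 per cent of the electron's mass" into electronvolts.
    electron_mass_ev = 511e3          # electron rest-mass energy, eV
    limit_fraction = 0.001 / 100.0    # 0.001 per cent, as a fraction

    neutrino_limit_ev = electron_mass_ev * limit_fraction
    print(f"upper limit on the neutrino mass ~ {neutrino_limit_ev:.1f} eV")  # about 5 eV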

More observations, at the Gran Sasso Laboratory and other research centres, are needed to confirm the conclusion. But for now it seems that astrophysicists do understand how the Sun works, that neutrinos do have a tiny mass, and that some electron neutrinos from the Sun are converted into undetectable forms before they reach our detectors here on Earth.

As US astronomer John Bahcall puts it, "the theory of stellar evolution has been checked in a critical way, and the result has come out much closer than any physicist could have guessed in 1962 [when the first solar neutrino calculations were made]".

And we know all this because some solar astronomers are crazy enough to bury their telescopes down mine shafts, or in caverns in the Italian mountains, with 1400 metres of rock over their heads.


Soccer balls in space?

ABSORPTION features in the light from distant stars may be caused by the presence of soccer-ball shaped molecules in clouds of interstellar material. Adrian Webster, of the Royal Observatory in Edinburgh, first suggested this possibility last year. Now, measurements of the spectrum of these molecules in the laboratory have strengthened the likelihood that he is right.

Absorption in the ultraviolet part of the spectrum seems to be made up of two different components. One, called the "ubiquitous" component, can be neatly explained as due to dust grains in space. The other, called the "variable" component, is harder to explain. It is very smooth and featureless, and has the curious property that the absorption is stronger from regions where the density of material is lower.

Webster suggests that a variation on the "buckyball" molecule, the football-shaped buckminsterfullerene, C60, can explain these features. C60 itself has a smooth absorption spectrum in the right wavelength range, but with a couple of rounded peaks. Also, the colour of a C60 solution is mauve, while the absorbing material in space is deep red. But the basic C60 molecule can be easily modified by the addition of hydrogen atoms, to make so-called fulleranes. New studies of C60H2 show that it has the same sort of spectrum, but with less pronounced peaks, and is a yellow-brown colour.

Webster says that "if the trend from C60 to C60H2 continues to the heavier hydrides, with the strength of the ultraviolet peaks decreasing and with the colour becoming redder, then a theory in which the variable component of the extinction is carried by the fulleranes and their ions may be viable" (Monthly Notices of the Royal Astronomical Society, vol 263 p L55). He also says that it "would be of interest" to investigate the spectra of similar molecules containing five or ten hydrogen atoms.