Zilch… Naught… Nada…

It’s easy to dismiss the concept of nothing as, well, nothing. In fact, nothing is everything to science – understanding intangible voids has led to breakthroughs we could never have imagined possible.

Read on to find out why nothing is more important than nothing…

# HISTORY OF NOTHING

## Nothingness: Zero, the number they tried to ban

*Every schoolchild knows the concept of zero – so why did it take so long to catch on? Follow its convoluted path from heresy to common sense*

**Read more:** "The nature of nothingness"

I USED to have seven goats. I bartered three for corn; I gave one to each of my three daughters as dowry; one was stolen. How many goats do I have now?

This is not a trick question. Oddly, though, for much of human history we have not had the mathematical wherewithal to supply an answer. There is evidence of counting that stretches back five millennia in Egypt, Mesopotamia and Persia. Yet even by the most generous definition, a mathematical conception of nothing - a zero - has existed for less than half that time. Even then, the civilisations that discovered it missed its point entirely. In Europe, indifference, myopia and fear stunted its development for centuries. What is it about zero that stopped it becoming a hero?

This is a tangled story of two zeroes: zero as a symbol to represent nothing, and zero as a number that can be used in calculations and has its own mathematical properties. It is natural to think the two are the same. History teaches us something different.

Zero the symbol was in fact the first of the two to pop up by a long chalk. This is the sort of character familiar from a number such as the next year in our calendar, 2012. Here it acts as a placeholder in our "positional" numerical notation, whose crucial feature is that a digit's value depends on where it is in a number. Take 2012, for example: a "2" crops up twice, once to mean 2 and once to mean 2000. That's because our positional system uses "base" 10 - so a move of one place to the left in a number means a digit's worth increases by a further power of 10.

It is through such machinations that the string of digits "2012" comes to have the properties of a number with the value equal to 2 × 10^{3} + 0 × 10^{2} + 1 × 10^{1} + 2. Zero's role is pivotal: were it not for its unambiguous presence, we might easily mistake 2012 for 212, or perhaps 20012, and our calculations could be out by hundreds or thousands.
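The arithmetic of positional notation can be sketched in a few lines of Python (the function name is my own); it makes plain how the zero in "2012" keeps every other digit at its proper power of the base:

```python
# Evaluate a digit string positionally: each move left multiplies a
# digit's worth by a further power of the base.
def positional_value(digits, base=10):
    value = 0
    for d in digits:
        value = value * base + int(d)
    return value

print(positional_value("2012"))  # 2012, i.e. 2*10**3 + 0*10**2 + 1*10 + 2
print(positional_value("212"))   # 212 -- omit the placeholder and the value collapses
```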

The first positional number system was used to calculate the passage of the seasons and the years in Babylonia, modern-day Iraq, from around 1800 BC onwards. Its base was not 10, but 60. It didn't have a symbol for every whole number up to the base, unlike the "dynamic" system of digits running from 1 to 9 that is the bread-and-butter of our base-10 system. Instead it had just two symbols, for 1 and 10, which were clumped together in groups with a maximum headcount of 59. For example, 2012 equates to 33 × 60^{1} + 32, and so it would have been represented by two adjacent groups of symbols: one clump of three 10s and three ones; and a second clump of three 10s and two ones.

This particular number has nothing missing. Quite generally, though, for the first 15 centuries or so of the Babylonian positional numbering system the absence of any power of 60 in the transcription of any number was marked not by a symbol, but (if you were lucky) just by a gap. What changed around 300 BC we don't know; perhaps one egregious confusion of positions too many. But it seems to have been at around this time that a third symbol, a curious confection of two left-slanting arrows (see timeline, overleaf), started to fill missing places in the stargazers' calculations.

This was the world's first zero. Some seven centuries later, on the other side of the world, it was invented a second time. Mayan priest-astronomers in central America began to use a snail-shell-like symbol to fill gaps in the (almost) base-20 positional "long-count" system they used to calculate their calendar.

Zero as a placeholder was clearly a useful concept, then. It is a frustration entirely typical of zero's vexed history, though, that neither the Babylonians nor the Mayans realised quite how useful it could be.

In any dynamic, positional number system, a placeholder zero assumes almost unannounced a new guise: it becomes a mathematical "operator" that brings the full power of the system's base to bear. This becomes obvious when we consider the result of adding a placeholder zero to the end of a decimal number string. The number 2012 becomes 20120, magically multiplied by the base of 10. We intuitively take advantage of this characteristic whenever we sum two or more numbers, and the total of a column ticks over from 9 to 10. We "carry the one" and leave a zero to ensure the right answer. The simplicity of such algorithms is the source of our system's supple muscularity in manipulating numbers.

##### Facing the void

We shouldn't blame the Babylonians or Mayans for missing out on such subtlety: various blemishes in their numerical systems made it hard to spot. And so, although they found zero the symbol, they missed zero the number.

Zero is admittedly not an entirely welcome addition to the pantheon of numbers. Accepting it invites all sorts of logical wrinkles that, if not handled with due care and attention, can bring the entire number system crashing down. Adding zero to itself does not result in any increase in its size, as it does for any other number. Multiply any number, however big, by zero and it collapses down to zero. And let's not even delve into what happens when we divide a number by zero.

Classical Greece, the next civilisation to handle the concept, was certainly not keen to tackle zero's complexities. Greek thought was wedded to the idea that numbers expressed geometrical shapes; and what shape would correspond to something that wasn't there? It could only be the total absence of something, the void - a concept that the dominant cosmology of the time had banished.

Largely the product of Aristotle and his disciples, this world view saw the planets and stars as embedded in a series of concentric celestial spheres of finite extent. These spheres were filled with an ethereal substance, all centred on Earth and set in motion by an "unmoved mover". It was a picture later eagerly co-opted by Christian philosophy, which saw in the unmoved mover a ready-made identity for God. And since there was no place for a void in this cosmology, it followed that it - and everything associated with it - was a godless concept.

Eastern philosophy, rooted in ideas of eternal cycles of creation and destruction, had no such qualms. And so the next great staging post in zero's journey was not to Babylon's west, but to its east. It is found in *Brahmasphutasiddhanta*, a treatise on the relationship of mathematics to the physical world written in India in around AD 628 by the astronomer Brahmagupta.

Brahmagupta was the first person we see treating numbers as purely abstract quantities separate from any physical or geometrical reality. This allowed him to consider unorthodox questions that the Babylonians and Greeks had ignored or dismissed, such as what happens when you subtract from one number a number of greater size. In geometrical terms this is a nonsense: what area is left when a larger area is subtracted? Equally, how could I ever have sold or bartered more goats than I had in the first place? As soon as numbers become abstract entities, however, a whole new world of possibilities is opened up - the world of negative numbers.

The result was a continuous number line stretching as far as you could see in both directions, showing both positive and negative numbers. Sitting in the middle of this line, a distinct point along it at the threshold between the positive and negative worlds, was *sunya*, the nothingness. Indian mathematicians had dared to look into the void - and a new number had emerged.

It was not long before they unified this new number with zero the symbol. While a Christian Syrian bishop writes in 662 that Hindu mathematicians did calculations "by means of nine signs", an inscription of dedication at a temple in the great medieval fort at Gwalior, south of Delhi in India, shows that two centuries later the nine had become ten. A zero - a squashed-egg symbol recognisably close to our own - had been incorporated into the canon, a full member of a dynamic positional number system running from 0 to 9. It marked the birth of the purely abstract number system now used throughout the world, and soon spawned a new way of doing mathematics to go with it: algebra.

News of these innovations took a long time to filter through to Europe. It was only in 1202 that a young Italian, Leonardo of Pisa - better remembered as Fibonacci - published a book, *Liber Abaci*, in which he presented details of the Arabic counting system he had encountered on a journey to the Mediterranean's southern shores, and demonstrated the superiority of this notation over the abacus for the deft performance of complex calculations.

While merchants and bankers were quickly convinced of the Hindu-Arabic system's usefulness, the governing authorities were less enamoured. In 1299, the city of Florence, Italy, banned the use of the Hindu-Arabic numerals, including zero. They considered the ability to inflate a number's value hugely simply by adding a digit on the end - a facility not available in the then-dominant, non-positional system of Roman numerals - to be an open invitation to fraud.

Zero the number had an even harder time. Schisms, upheavals, reformation and counter-reformation in the church meant a continuing debate as to the worth of Aristotle's ideas about the cosmos, and with it the orthodoxy or otherwise of the void. Only the Copernican revolution - the crystal-sphere-shattering revelation that Earth moves around the sun - began, slowly, to shake European mathematics free of the shackles of Aristotelian cosmology from the 16th century onwards.

By the 17th century, the scene was set for zero's final triumph. It is hard to point to a single event that marked it. Perhaps it was the advent of the coordinate system invented by the French philosopher and mathematician René Descartes. His Cartesian system married algebra and geometry to give every geometrical shape a new symbolic representation with zero, the unmoving heart of the coordinate system, at its centre. Zero was far from irrelevant to geometry, as the Greeks had suggested: it was essential to it. Soon afterwards, the new tool of calculus showed that you had first to appreciate how zero merged into the infinitesimally small to explain how anything in the cosmos could change its position at all - a star, a planet, a hare overtaking a tortoise. Zero was itself the prime mover.

Thus a better understanding of zero became the fuse of the scientific revolution that followed. Subsequent events have confirmed just how essential zero is to mathematics and all that builds on it (see "You need nothing to count everything"). Looking at zero sitting quietly in a number today, and primed with the concept from a young age, it is equally hard to see how it could ever have caused so much confusion and distress. A case, most definitely, of much ado about nothing.

**Richard Webb** is a feature editor for *New Scientist*

# MATHEMATICS

## Nothingness: Mathematics starts with an empty set

*What's inside an empty bag? Nothing – but that's something on which all mathematics is founded*

THE mathematicians' version of nothing is the empty set. This is a collection that doesn't actually contain anything, such as my own collection of vintage Rolls-Royces. The empty set may seem a bit feeble, but appearances deceive; it provides a vital building block for the whole of mathematics.

It all started in the late 1800s. While most mathematicians were busy adding a nice piece of furniture, a new room, even an entire storey to the growing mathematical edifice, a group of worrywarts started to fret about the cellar. Innovations like non-Euclidean geometry and Fourier analysis were all very well - but were the underpinnings sound? To prove they were, a basic idea needed sorting out that no one really understood. Numbers.

Sure, everyone knew how to do sums. Using numbers wasn't the problem. The big question was what they were. You can show someone two sheep, two coins, two albatrosses, two galaxies. But can you show them two?

The symbol "2"? That's a notation, not the number itself. Many cultures use a different symbol. The word "two"? No, for the same reason: in other languages it might be *deux* or *zwei* or *futatsu*. For thousands of years humans had been using numbers to great effect; suddenly a few deep thinkers realised no one had a clue what they were.

An answer emerged from two different lines of thought: mathematical logic, and Fourier analysis, in which a complex waveform describing a function is represented as a combination of simple sine waves. These two areas converged on one idea. Sets.

A set is a collection of mathematical objects - numbers, shapes, functions, networks, whatever. It is defined by listing or characterising its members. "The set with members 2, 4, 6, 8" and "the set of even integers between 1 and 9" both define the same set, which can be written as {2, 4, 6, 8}.

Around 1880 the mathematician Georg Cantor developed an extensive theory of sets. He had been trying to sort out some technical issues in Fourier analysis related to discontinuities - places where the waveform makes sudden jumps. His answer involved the structure of the set of discontinuities. It wasn't the individual discontinuities that mattered, it was the whole class of discontinuities.

##### How many dwarfs?

One thing led to another. Cantor devised a way to count how many members a set has, by matching it in a one-to-one fashion with a standard set. Suppose, for example, the set is {Doc, Grumpy, Happy, Sleepy, Bashful, Sneezy, Dopey}. To count them we chant "1, 2, 3..." while working along the list: Doc (1), Grumpy (2), Happy (3), Sleepy (4), Bashful (5), Sneezy (6), Dopey (7). Right: seven dwarfs. We can do the same with the days of the week: Monday (1), Tuesday (2), Wednesday (3), Thursday (4), Friday (5), Saturday (6), Sunday (7).

Another mathematician of the time, Gottlob Frege, picked up on Cantor's ideas and thought they could solve the big philosophical problem of numbers. The way to define them, he believed, was through the deceptively simple process of counting.

What do we count? A collection of things - a set. How do we count it? By matching the things in the set with a standard set of known size. The next step was simple but devastating: throw away the numbers. You could use the dwarfs to count the days of the week. Just set up the correspondence: Monday (Doc), Tuesday (Grumpy)... Sunday (Dopey). There are Dopey days in the week. It's a perfectly reasonable alternative number system. It doesn't (yet) tell us what a number is, but it gives a way to define "same number". The number of days equals the number of dwarfs, not because both are seven, but because you can match days to dwarfs.
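Frege's move can be sketched in a few lines of Python (the variable and function names are my own): decide "same number" by pairing members off one against one, with no counting numbers anywhere in the test.

```python
dwarfs = ["Doc", "Grumpy", "Happy", "Sleepy", "Bashful", "Sneezy", "Dopey"]
days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]

def same_number(xs, ys):
    xs, ys = list(xs), list(ys)
    while xs and ys:      # match one member of each collection...
        xs.pop()
        ys.pop()
    return not xs and not ys  # ...same number exactly when both run out together

print(same_number(dwarfs, days))       # True: there are Dopey days in the week
print(same_number(dwarfs, days[:5]))   # False
```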

What, then, is a number? Mathematical logicians realised that to define the number 2, you need to construct a standard set which intuitively has two members. To define 3, use a standard set with three members, and so on. But which standard sets to use? They have to be unique, and their structure should correspond to the process of counting. This was where the empty set came in and solved the whole thing by itself.

Zero is a number, the basis of our entire number system (see "Zero's convoluted history"). So it ought to count the members of a set. Which set? Well, it has to be a set with no members. These aren't hard to think of: "the set of all honest bankers", perhaps, or "the set of all mice weighing 20 tonnes". There is also a mathematical set with no members: the empty set. It is unique, because all empty sets have exactly the same members: none. Its symbol, introduced in 1939 by a group of mathematicians that went by the pseudonym Nicolas Bourbaki, is ∅. Set theory needs ∅ for the same reason that arithmetic needs 0: things are a lot simpler if you include it. In fact, we can define the number 0 as the empty set.

What about the number 1? Intuitively, we need a set with exactly one member. Something unique. Well, the empty set is unique. So we define 1 to be the set whose only member is the empty set: in symbols, {∅}. This is not the same as the empty set, because it has one member, whereas the empty set has none. Agreed, that member happens to be the empty set, but there is one of it. Think of a set as a paper bag containing its members. The empty set is an empty paper bag. The set whose only member is the empty set is a paper bag containing an empty paper bag. Which is different: it's got a bag in it (see diagram).

The key step is to define the number 2. We need a uniquely defined set with two members. So why not use the only two sets we've mentioned so far: ∅ and {∅}? We therefore define 2 to be the set {∅, {∅}}. Which, thanks to our definitions, is the same as {0, 1}.

Now a pattern emerges. Define 3 as {0, 1, 2}, a set with three members, all of them already defined. Then 4 is {0, 1, 2, 3}, 5 is {0, 1, 2, 3, 4}, and so on. Everything traces back to the empty set: for instance, 3 is {∅, {∅}, {∅, {∅}}} and 4 is {∅, {∅}, {∅, {∅}}, {∅, {∅}, {∅, {∅}}}}. You don't want to see what the number of dwarfs looks like.
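The construction above can be sketched directly in Python, with frozensets standing in for sets (the names `EMPTY` and `successor` are my own):

```python
# 0 is the empty set; each number n is the set {0, 1, ..., n-1}.
EMPTY = frozenset()          # 0

def successor(n):
    # n + 1 is n together with n itself as one extra member
    return n | {n}

numbers = [EMPTY]
for _ in range(4):
    numbers.append(successor(numbers[-1]))

# Each number-as-set has exactly that many members, all built from nothing:
for i, n in enumerate(numbers):
    print(i, len(n))
```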

The building materials here are abstractions: the empty set and the act of forming a set by listing its members. But the way these sets relate to each other leads to a well-defined construction for the number system, in which each number is a specific set that intuitively has that number of members. The story doesn't stop there. Once you've defined the positive whole numbers, similar set-theoretic trickery defines negative numbers, fractions, real numbers (infinite decimals), complex numbers... all the way to the latest fancy mathematical concept in quantum theory or whatever.

So now you know the dreadful secret of mathematics: it's all based on nothing.

**Ian Stewart** is emeritus professor of mathematics at the University of Warwick, UK

# TRANSISTORS

## Nothingness: Computers are powered by holes

*Digital technology wouldn't work without something missing at its heart. Read the story of the transistor's difficult birth*

THE sound of New Year's Eve celebrations drifting up from the Palace Theater did not distract William Shockley. Nor did the few scattered revellers straying through Chicago's snow-covered streets below. Rarely a mingler, Shockley had more important things on his mind. Barricaded in his room in the art-deco opulence of the Bismarck Hotel, he was thinking, and writing.

Eight days earlier, on 23 December 1947, John Bardeen and Walter Brattain, two of Shockley's colleagues at Bell Laboratories in Murray Hill, New Jersey, had unveiled a device that would change the world: the first transistor. Today, shrunk to just nanometres across and carved into beds of silicon, these electrical on-off switches mass in their billions on every single computer chip. Without them, there would be no processing of the words, sounds and images that guide our electronic lives. There would be no smartphone, router, printer, home computer, server or internet. There would be no information age.

Bardeen and Brattain's device, a rather agricultural construction of semiconductor, gold-enwrapped polystyrene and a spaghetti twist of connecting wires, did not look revolutionary, and it would have taken a seer to foretell the full changes it would bring. Even so, those present that December at Bell Labs knew they had uncovered something big. In Shockley's words, the transistor was a "magnificent Christmas present". Magnificent, but for one thing: no one knew quite how it worked.

Holed up in his Chicago hotel, Shockley needed to change that. As head of Bell Labs' solid-state physics group, he had been the intellectual driving force behind the transistor, yet Bardeen and Brattain had made the crucial breakthrough largely without him. To reclaim the idea as his own, he needed to go one better.

That meant getting to grips with a curious entity that seemed to control the transistor's inner workings. Its existence had been recognised two decades earlier, but its true nature had eluded everyone. For good reason: it was not there.

Transistors - both Bardeen and Brattain's original and those that hum away in computer processors today - depend on the qualities of that odd half-breed of material known as a semiconductor. Sitting on the cusp of electrical conduction and insulation, semiconductors sometimes let currents pass and sometimes resolutely block their passage.

By the early 20th century, some aspects of this dual personality were well documented. For example, the semiconductor galena, or lead sulphide, was known under certain circumstances to form a junction with a metal through which current travelled in only one direction. That had made it briefly popular in early wireless receivers, where a filigree metal probe - a "cat's whisker" - was tickled across a crystal of galena to find the contact that would transform oscillating radio signals into steady direct current.

This process had to be repeated afresh each time a radio receiver was switched on, which made tuning a time-consuming and sometimes infuriating business. This was symptomatic of all semiconductors' failings. There seemed little rhyme or reason in their properties; a slight change in temperature or their material make-up could tip them from conduction to insulation and back again. It was tempting to think their caprices might be tamed to make reliable, reproducible electrical switches, but no one could see how.

And so in the radio receivers and telephone and telegraph systems of the 1920s and 30s - such as those operated by Bell Labs' parent company, AT&T - vacuum tubes came to reign supreme. They worked by heating an electrode in a vacuum and applying electric fields of varying strength to the stream of electrons emitted, thus controlling the size of the current reaching a second electrode at the far side. Bulky, failure-prone and power-hungry though they were, vacuum tubes were used as switches and amplifying "repeaters" to hoist fading signals out of a sea of static on their long transcontinental journeys.

Even as they did, however, the seeds of their demise and semiconductors' eventual triumph were being sown. In 1928 Rudolf Peierls, a young Berlin-born Jew, was working as a student of the great pioneer of quantum physics, Werner Heisenberg, in Leipzig, Germany. The convolutions of history would later make Peierls one of the UK's most respected physicists, and pit him against his mentor in the race to develop the first atomic bomb. At the time, though, he was absorbed by a more niggling problem: why were electrical currents in some metals deflected the wrong way when they hit a magnetic field?

To Peierls, the answer was obvious. "The point [was] you couldn't understand solids without using the quantum theory," he recalled in a 1977 interview. Just as quantum theory dictates that electrons orbiting an atom couldn't have just any old energy, but are confined to a series of separate energy states, Peierls showed that within a solid crystal, electrons are shoe-horned into "bands" of allowed energy states. If one of these bands had only a few occupied states, electrons had great freedom to move, and the result was a familiar electron current. But if a band had only a few vacant states, electron movement would be restricted to the occasional hop into a neighbouring empty slot. With most electrons at a standstill, these vacancies would themselves seem to be on the move: mobile "absences of electron" acting for all the world like positive charges - and moving the wrong way in a magnetic field.
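Peierls's picture can be caricatured in a toy model (entirely my own construction, not real band-structure physics): a nearly full band is a row of electrons with a single vacancy, and because each electron can only hop into the neighbouring empty slot, the vacancy itself appears to drift the opposite way, like a positive carrier.

```python
# A nearly full band: 1 marks an electron, 0 the lone vacancy (the "hole").
band = [1, 1, 1, 0, 1, 1]

def step(band):
    band = band[:]
    i = band.index(0)
    if i + 1 < len(band):             # electron to the right hops left...
        band[i], band[i + 1] = band[i + 1], band[i]
    return band                       # ...so the hole moves one slot right

for _ in range(3):
    print(band)
    band = step(band)
# the 0 marches rightwards, against the direction of electron motion
```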

##### Nonentities named

Peierls never gave these odd non-entities a name. It was Heisenberg who gave them their slightly off-hand moniker: *Löcher* - or "holes". And there things rested. The holes were, after all, just a convenient fiction. Electrons were still doing the actual conducting - weren't they?

Although Peierls's band calculations were the germ of a consistent, quantum-mechanical way of looking at how electrical conduction happened, no one quite joined up the dots at the time. It was 10 years before the rumblings of war would begin to change that.

Radar technology, which involves bouncing radio waves off objects to determine their distance and speed, would become crucial to Allied successes in the latter stages of the second world war. But radar presented a problem. If the equipment were to fly on bombing missions, it needed to be as compact and lightweight as possible. Vacuum tubes no longer cut the mustard. Might the long-neglected semiconductors, for all their failings, be a way forward?

In 1940, a team at Bell Labs led by engineer Russell Ohl was exploring that possibility by attempting to tame the properties of the semiconductor silicon. At the time, silicon's grouchy and intermittent conduction was thought to be the result of impurities in its crystal structure, so Ohl and his team set about purifying it. One day, a glitch in the purification process produced a silicon rod with a truly bizarre conducting character. One half acted as if dominated by negatively charged carriers: electrons. The other half, though, seemed to contain moving positive charges.

That was odd, but not half as odd as what happened when you lit up the rod. Left to its own devices, the imbalanced silicon did nothing at all. Shine a bright light on it, however, and it flipped into a conducting state, with current flowing from the negative to the positive region.

A little more probing revealed what was going on. Usually, a silicon atom's four outer electrons are all tied up in bonds to other atoms in the crystal. But on one side of Ohl's rod, a tiny impurity of phosphorus with its five outer electrons was creating an excess of unattached electrons. On the other, a small amount of boron with just three electrons was causing an electron deficit (see diagram).

Peierls's holes had suddenly found a role. When kicked into action by the light, electrons were spilling over from the region of their excess to fill the holes in the electron structure introduced by the boron. However passively, it was the presence of an absence of electrons that was causing the silicon rod's unique behaviour. Ohl named his discovery the positive-negative or "p-n" junction, owing to its two distinct areas of positive and negative charge carriers. Its property of converting light energy into electric current made it, incidentally, the world's first photovoltaic cell.

It was a few years before Shockley got wind of Ohl's breakthrough. Although he had been a senior member of Bell Labs' physics team before the war, the hostilities had taken him in a very different direction, as head of the US navy's anti-submarine warfare operations research unit. When he resurfaced in 1945 to lead Bell's solid-state physics division, it did not take him long to spot the p-n junction's potential.

He was fascinated by the thought that, by pressing a metal contact to the junction's midriff, you might use an external electric field instead of light to control the current across it. In a sufficiently thin layer of n or p-type silicon, he reasoned, the right sort of voltage would make electrons or holes swarm towards the contact, providing extra carriers of charge that would boost the current flow along the surface layer. The result would be an easily controllable, low-power, small-scale amplifier that would smash the vacuum tube out of sight. That was truly a prospect to pique the interest of Shockley's paymasters.

His first attempts to realise the dream, though, were unsuccessful. "Nothing measurable, no measurable results," he noted of an early failure. "Quite mysterious." And with his mind now on the broad sweep of Bell Labs' solid-state research, Shockley was obliged to leave further investigations to two highly qualified subordinates: Bardeen, a thoughtful theorist, and Brattain, an inveterate tinkerer.

It proved a frustrating chase, and it was a classic combination of experimental nous and luck that led the pair to success - plus Bardeen's spur-of-the-moment decision to abandon silicon for its slightly more predictable semiconducting sister germanium. This finally produced the right sort of amplification effect, boosting the power of input signals, sometimes by a factor of hundreds. The magnificent Christmas present was unwrapped.

Just one thing didn't add up: the current was moving through the device in the wrong direction. Although the germanium slab had n-type material at the top, it appeared to be positive charges making the running. The puzzlement is almost palpable in Brattain's lab-book entry for 8 December 1947. "Bardeen suggests that the surface field is so strong that one is actually getting p-type conduction near the surface," he wrote. It was a mental block that stopped Bardeen and Brattain understanding the fruits of their labours.

No doubt they would have done, given time. But in his Chicago hotel room that New Year's Eve, Shockley stole a march on his colleagues. There was a way out of the impasse, he realised, and he did the first hurried calculations to firm up his case.

If a hole were merely the absence of an electron, then electrons and holes could hardly co-exist: whenever an electron met a hole, its presence would by definition negate the absence of itself that was the hole. By that measure, the existence of positive charges in a negative region, as Bardeen and Brattain had seemingly observed, was a nonsense.

But what if a hole were real, Shockley asked: not just an absence of something, but a true nothing-that-is? What if it were a particle all on its own, with an independent existence just as real as the electron's? If this were true, holes would not need to fear encountering an electron. They could happily co-exist with electrons in areas dominated by them - and that would explain what was going on in the transistor.

It was a daring intellectual leap. In the weeks that followed, Shockley used the idea to develop a transistor that exploited the independence of electrons and holes. This was the "p-n-p" transistor, in which a region of electron excess was sandwiched between two hole-dominated areas. Apply the right voltage, and the resistance of the middle section could be broken down, allowing holes to pass through hostile electron-populated territory without being swallowed up. It also worked in reverse: electrons could be made to flow through a central region given over to holes. This was the principle that came to underpin the workings of commercial transistors in the decades that followed.

The rest, as they say, is history. For Shockley, it was not a happy one. He did not at first tell Bardeen and Brattain of his new course, and even attempted to claim sole patent rights over the first transistor. The relationship between the three men never recovered. By the time they shared the Nobel prize in physics for their discovery in 1956, Shockley had left Bell Labs to form the Shockley Semiconductor Laboratory to capitalise on his transistor alone. But his high-handed and increasingly paranoid behaviour soon led to a mass mutiny from the bright young talents he had hired, such as Gordon Moore and Robert Noyce, who went on to found Intel, which remains the world's largest manufacturer of microchips.

The hole, meanwhile, went from strength to strength. Today you will find it at the heart of not just every computer chip, but every energy-saving LED lightbulb, every laser that reads our CDs and DVDs, and every touchscreen. Modern life has become unimaginable without this curiosity whose nature took two decades to reveal: the nothing that became a something and changed the world.

**Richard Webb** is a feature editor for *New Scientist*