ICO Newsletter January 2010 Number 82


Mario Bertolotti, a professor at the University of Roma, La Sapienza, Italy, and author of a comprehensive book on the history of lasers, continues the 2010 ICO series for the Year of the Laser.

Mario Bertolotti is the author of The History of the Laser, published in 2004 by Taylor & Francis.

In the November 2009 issue of ICO Newsletter, Anthony Siegman discussed how the laser came to be made. As he said, the invention of the laser had its origins in the 1916 introduction of the concept of stimulated emission by Albert Einstein. The concept was subsequently discussed by several authors and used, for example, in theoretical formulations of dispersion by Hendrik A. Kramers (1924). As early as the 1930s, Rudolf Walther Ladenburg provided experimental evidence and realized that stimulated emission could be used for amplification. The same concept was also mentioned in works by W. Bothe (1923), H. A. Kramers (1925), Richard Tolman (1924) and J. H. Van Vleck. In 1924 the latter introduced the term “induced emission”, while “negative absorption” was also often used to describe the effect of stimulated emission.

By then the time was ripe for the invention of the laser, and in fact an optical approach to it was proposed by the Russian scientist V. A. Fabrikant (1940). In 1947 W. E. Lamb and R. C. Retherford encountered stimulated emission in connection with their experiment on what was later called the Lamb shift.

The question I want to address is: “Why was the laser not invented at that time?” No doubt one reason for not pursuing research in this area was the fact that there was no particular need for optical sources different from the existing ones. A second reason was that the basic properties of the emission processes were not yet completely understood. A third one might be that people were so used to dealing with equilibrium processes that they considered a non-equilibrium device like the laser unrealistic.

Let us consider these arguments separately. Light was mainly used for illumination purposes, and the relationship between spectral power and temperature was already well represented by Planck’s law. Note, however, that it was in trying to better understand Planck’s black-body distribution that Einstein introduced the concept of stimulated emission. Besides, black-body radiation is exactly the radiation that is emitted in thermal equilibrium.
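
As a reminder of the relation referred to here (in its standard textbook form, not taken from the newsletter itself), Planck’s law gives the spectral energy density of black-body radiation at temperature T as

\rho(\nu, T) = \frac{8\pi h \nu^3}{c^3}\,\frac{1}{e^{h\nu/kT} - 1},

and it was precisely in rederiving this equilibrium distribution from the balance of absorption, spontaneous emission and stimulated emission that Einstein was led to introduce the coefficient for stimulated emission.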

The other applications of light were in the laboratory, mainly for spectroscopy, microscopy and interferometry. All of these are related to the concept of coherence. Coherence – conveying the ability to produce interference fringes – was first studied by E. Verdet (1865) and A. A. Michelson (1890-1920), but a deeper understanding came with the research of P. H. van Cittert (1934) and, finally, of F. Zernike, who in 1938 introduced the definition of the degree of coherence. A complete theory of coherence was later produced by Emil Wolf (1954), A. Blanc-Lapierre and P. Dumontet (1955), and finally by Roy Glauber (1963), who provided a full quantum-mechanical treatment.
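
For readers unfamiliar with the quantity Zernike introduced, a standard modern form (not the original 1938 notation) defines the complex degree of coherence of the fields at two points as

\gamma_{12}(\tau) = \frac{\langle E_1^{*}(t)\,E_2(t+\tau)\rangle}{\sqrt{\langle |E_1(t)|^2\rangle\,\langle |E_2(t)|^2\rangle}},

with |\gamma_{12}| ranging from 0 (incoherent) to 1 (fully coherent) and, for beams of equal intensity, equal to the visibility of the interference fringes.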

I would like to remind the reader of the great concern provoked by the R. Hanbury Brown and R. Q. Twiss intensity-correlation experiment (1956), which introduced the concept of correlation among photons – a concept that took a while to be accepted and that started the revolution in quantum optics later completed by R. Glauber. The subsequent experiments by L. Mandel and collaborators contributed greatly to understanding these problems, but by then we were already in the laser era.
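
In modern notation (again, not the original 1956 formulation), what Hanbury Brown and Twiss measured is the normalized intensity correlation

g^{(2)}(\tau) = \frac{\langle I(t)\,I(t+\tau)\rangle}{\langle I(t)\rangle^{2}},

which for thermal (chaotic) light equals 2 at \tau = 0, the photon “bunching” that caused such debate, and equals 1 for the coherent light later described by Glauber.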

As can be appreciated from this very short list, the most important property of a laser – its coherence – was not understood until after the invention of the maser (1954). This microwave device was the first operating device to use stimulated emission. In the 1930s nobody was looking for a new source of light for scientific applications. The problem of suitable sources was simply solved by using light from gas lamps emitting on single spectral lines, filtered spatially and temporally.

What about radio sources? Did people understand that they emitted coherently? Not at that time; and there was no reason to search for a light source with the same properties as the sources used for broadcasting.

So what did people understand about the emission process? Quantum electrodynamics – the full quantum-mechanical understanding of how light is emitted – started with P. A. M. Dirac in 1927. Fermi’s Reviews of Modern Physics article in the 1930s succinctly summarized what was known at that time about the interaction of light and matter.

It is significant that in Dirac’s fundamental book, The Principles of Quantum Mechanics (I own a copy of the 1958 fourth edition), stimulated emission is mentioned in only two places (pages 177 and 238). W. Heitler’s The Quantum Theory of Radiation (second edition 1944) mentions induced emission three times and of course correctly defines the probability of emission as a sum of two terms: one corresponding to spontaneous emission and a second one proportional to the intensity of the radiation. “This term,” he writes, “gives rise to a certain induced emission of radiation. The existence of such an induced emission was first postulated by Einstein, who has shown that it is necessary to account for the thermal equilibrium in a gas emitting and absorbing radiation.” The qualifier “certain” perhaps suggests that the effect was not considered so important. And thermal equilibrium was once again stressed.
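
In the standard Einstein notation (a textbook summary, not Heitler’s own wording), the two terms correspond to the emission rate from the upper level of a two-level system,

R_{2\to 1} = A_{21} N_2 + B_{21}\,\rho(\nu)\,N_2,

where A_{21} describes spontaneous emission and the term proportional to the radiation energy density \rho(\nu) is the stimulated (induced) emission. Balancing this against absorption, B_{12}\,\rho(\nu)\,N_1, in thermal equilibrium reproduces Planck’s law and, for levels of equal degeneracy, gives B_{12} = B_{21} and A_{21}/B_{21} = 8\pi h\nu^3/c^3.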

Stimulated or induced emission was in that period the domain of physicists, and physicists had more important things to play with. There was the problem of self-energy in electrodynamics and the extension of the quantum theory of the interaction of radiation with matter to high energies, together with the newly discovered positron (1932), the relativistic Dirac equation, the proposal of the existence of antiparticles, pair production, bremsstrahlung and Compton scattering – concepts and theories that found their proof in cosmic-ray research, not in new sources of light. Mesons had started to appear (1937) and a new understanding of nuclear forces was beginning.

In this respect it is worth remembering that after the invention of the laser, Lamb, speaking of his results of 1947, said that at the time he was not familiar with the concept of stimulated emission.

Engineers – practical men making things – were not involved and perhaps not even aware of all these developments, which were considered to be only of theoretical interest for understanding basic principles and were pursued mainly in universities by academics. Starting in 1934, engineers instead became progressively more interested in microwaves, which had assumed great relevance a few years earlier and, during the Second World War, with the construction of radar. Resonant cavities were by then well known to microwave engineers, but the interaction of optical radiation with matter in a cavity was not of great concern. You might remember the Fabry-Perot cavity, invented by C. Fabry and A. Perot in 1899. That was not actually a cavity but an interferometer; no one considered it a special kind of resonant cavity before the 1950s, when R. H. Dicke, C. H. Townes, A. Schawlow and G. Gould, to name just the best-known people, did so.

A turning point occurred in 1948 with the invention of the transistor by J. Bardeen and W. H. Brattain. To understand its working principle it was necessary to know quantum mechanics, a discipline unknown to most engineers until that time. Because of this, quantum mechanics became relevant to practical applications, and engineers were forced to understand it. At that time classical and quantum concepts were mixed in everybody’s minds. Charles Townes, famously sitting and mulling on a park bench, conceived the idea of the maser: using stimulated emission to create a completely new source of electromagnetic radiation.

People were already playing with the idea of using stimulated emission – for example, the V. A. Fabrikant proposal in the Soviet Union and the J. Weber proposal for amplification by stimulated emission. But it was Townes who put the idea on a firm rational basis and built a real device. From an engineer’s point of view, it was seen as a possible extension of the microwave domain. What did it have to do with light?

Charles Townes and his brother-in-law Art Schawlow were puzzled by this problem. Was it possible to use stimulated emission again to cross the barrier and jump into the infrared-visible domain? The problem was beautifully addressed in their Physical Review paper of 1958, which also gave hints at its solution.

To better understand the spirit of those times, remember that colleagues at Bell Labs asked Townes and Schawlow to discuss the modes of an optical cavity in greater detail. Even after the first laser was built, the idea that a Fabry-Perot constituted a special type of resonant cavity was challenged by many.

It is worth noting that a purely optical approach was suggested by Gould, who did not use the traditional channels to discuss his ideas and preferred instead to obtain patents first, with the result of a number of patent litigations that eventually ended in his favour after more than 25 years. But with the exception of the bold suggestion by Gould, who entered the optical domain with a number of different proposals, most people thought that exploiting stimulated emission through population inversion was a very difficult task. Why? Because of thermal equilibrium – my third reason for the delay in the building of the laser.

People were used to believing that the deviation of a system from thermal equilibrium was a very small effect and of a transient nature. Paradoxically, immediately after the war, research into microwaves led to the discovery of nuclear magnetic resonance by F. Bloch at Stanford and E. M. Purcell at Harvard, independently, in 1946, and of electron paramagnetic resonance by E. Zavoisky in the USSR. Transient inversion of populations was then obtained in magnetic resonance by F. Bloch in 1946 and by E. M. Purcell and R. V. Pound in 1950-51, who eventually introduced the concept of negative temperature to deal with these situations.
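
The connection between population inversion and negative temperature is easy to see from the Boltzmann distribution (a standard argument, not specific to the Purcell-Pound paper): for two levels of equal degeneracy separated by an energy \Delta E,

\frac{N_2}{N_1} = e^{-\Delta E / kT},

so the condition N_2 > N_1 needed for net stimulated amplification formally requires T < 0, a state that simply cannot exist in thermal equilibrium. This is exactly the psychological barrier discussed here.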

However, people were still convinced that only pulsed regimes could be considered and that the deviation from thermal equilibrium was marginal.

The words of R. Tolman (1924) – “molecules in the upper quantum state may return to the lower quantum state in such a way to reinforce the primary beam by ‘negative absorption’” – were immediately followed by “for absorption experiments as usually performed the amount of negative absorption can be neglected”. The phrase “as usually performed”, I presume, means in thermal equilibrium, as usual.

Art Schawlow, after the construction of the laser, identified the general belief that deviating from thermal equilibrium was extremely difficult as one of the principal reasons the laser had not been made earlier.

At the time of their Physical Review paper, Townes and Schawlow, together with all the other researchers of the 1950s, were convinced, I presume for many of the reasons I have discussed here, that extending the maser concept to light was a very difficult and challenging task.

Art Schawlow later showed that even an edible jelly may lase, but that demonstration came only after the first laser was built by Theodore Maiman, a little-known physicist working in an industrial laboratory, outside the mainstream of people who were actively trying to turn the Townes and Schawlow proposal into reality.

As often happens, once the way to build the new device was discovered, a crowd of people got on the bandwagon and many different lasing systems were built in a very short time.

We should all pay tribute to T. Maiman for showing us how easy it was to build the laser.

Mario Bertolotti, University of Roma, La Sapienza, Italy

Nobel prize recognizes 40 years of fibre revolution
One of the 2009 Nobel laureates in physics is Charles Kao, who is recognized for his seminal work that laid the foundation for fibre-based communication systems. Although more than 40 years have passed since his 1966 breakthrough paper, research into fibre design and propagation remains as hot a topic as ever, continuing to drive new science, both fundamental and applied.
The year 1966 saw a milestone in communications, when Charles Kao and George Hockham of Standard Telecommunications Laboratories in Harlow, UK, developed the vision of glass optical fibre as a practical communications transmission medium. The concept of using glass in this way was far from evident, given the 1000 dB/km attenuation levels of the time, but Kao’s calculations showed that aiming for a technically possible loss of 20 dB/km would yield fibre-based transmission that was commercially viable. Kao’s work was critical in providing a realistic technological target for research in this area. Subsequent work over the 40 years since has of course seen tremendous development in many different areas such as glasses, sources, amplifiers and transmission protocols, and together these constitute the photonic technologies that make up the backbone of the modern information society.
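
To appreciate why the 20 dB/km target mattered (an illustrative calculation, not part of Kao’s original analysis), recall that a fibre with attenuation \alpha in dB/km transmits, over a length of L km, the power fraction

\frac{P(L)}{P(0)} = 10^{-\alpha L / 10}.

At 1000 dB/km only 10^{-100} of the launched light survives a single kilometre, effectively nothing, whereas at 20 dB/km one per cent survives each kilometre, enough to make kilometre-scale transmission spans of practical interest possible.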

Far from being a field that has reached saturation, however, research in fibre optics continues to develop at an ever-increasing pace. Naturally, there is still a great deal of research related to its use in high-capacity communications, but there are many applications unanticipated in the 1960s that are having a dramatic impact in other areas of science. For example, the development of advanced fibre-drawing technology and an improved understanding of the physics of fibre waveguides have led to the realization of a wide range of fibre-based components such as gratings and filters, couplers and interferometers. As well as being essential building blocks for lightwave systems, they have also been employed widely in fields such as optical sensing, and have been crucial in the transfer of many other optical technologies from the laboratory into the real world where device robustness is essential.

The design and application of the new class of photonic crystal fibre (PCF) is one recent area of research that has seen intense worldwide interest. PCF was first proposed in the 1990s by Philip Russell, who had the key insight that a microstructured cladding surrounding a fibre core could yield fundamentally new guidance mechanisms, as well as enhanced dispersion and nonlinearity engineering on a scale impossible in standard fibre. The significance of PCF for nonlinear optics was revealed in striking fashion in 1999, when Ranka et al. reported supercontinuum generation spanning 400-1500 nm using only nanojoule-energy pulses from a mode-locked Ti:sapphire laser. These results attracted immediate attention because of their potential application in optical frequency metrology, allowing complex room-sized frequency chains to be replaced by compact benchtop systems. This discovery was recognized by another share of a Nobel prize, this time awarded in 2005 to Hall and Hänsch.

Figure: Supercontinuum generation in a photonic crystal fibre.

Supercontinuum generation is a complex process that involves the interaction between a number of different nonlinear effects and the intrinsic linear dispersion of the fibre waveguide. As well as its application in frequency metrology, PCF-supported supercontinuum generation has made it possible to study in detail previously unappreciated aspects of complex nonlinear pulse propagation in optical fibres. Novel experiments using bandgap-guiding PCF have taken gas- and liquid-based nonlinear optics to a new level and opened up new and important interactions with other fields of ultrafast optics. Indeed, although supercontinuum generation in PCF was first reported nearly 10 years ago, the field continues to throw up surprises, particularly in terms of detailed studies of spectral stability. For example, under some conditions supercontinuum generation yields unexpectedly large fluctuations with long-tailed statistics, similar to those associated with the legendary rogue waves on the surface of the ocean. Standard numerical techniques appear only partially successful at explaining these instabilities, but analytical approaches based on thermodynamics appear to hold more promise for providing clear insight.
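
As a minimal sketch of the physics involved (the simplest model only; detailed supercontinuum studies use a generalized equation that also includes higher-order dispersion, self-steepening and the Raman response), pulse propagation in the fibre is governed by a nonlinear Schrödinger equation of the form

i\,\frac{\partial A}{\partial z} = \frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial T^2} - \gamma |A|^2 A,

where A(z,T) is the pulse envelope, \beta_2 the group-velocity dispersion and \gamma the Kerr nonlinear coefficient. The broad spectrum arises from the interplay of these two terms, through soliton dynamics, dispersive-wave generation and modulation instability.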

The success of waveguide engineering in silica-based fibres has been accompanied by parallel efforts to engineer other functional materials, such as chalcogenide glass and silicon. The field of silicon photonics itself, of course, continues to grow dramatically, and among the most recent results in this area are the report of a silicon-chip-based ultrafast oscilloscope and the drawing of long silicon fibres using practical draw-tower techniques.

When one considers the 40 years since low-loss optical fibre was first proposed, it becomes clear how fibre appears as a common factor in many groundbreaking experiments that have combined ideas and researchers from diverse domains, such as guided-wave and gas-based nonlinear optics, ultrafast source development, nanophotonics, materials science and clinical medicine. It is likely that dramatic progress will continue in all of these fields, but perhaps the most significant future breakthroughs will come from unexpected applications at the boundaries between disciplines. In a more general vein, there is currently much worldwide debate over the way in which fundamental and applied research are supported, and this could have a particular impact on developing countries, where technology often takes precedence over curiosity. But one of the lessons that can be learned from recent activities in fibre optics is that curiosity-driven research and applications often go hand in hand. Results of fundamental significance can arise in unexpected places, provided one always keeps an eye out for potential breakthroughs.
