- Fusion power
Fusion power is the power generated by nuclear fusion processes. In fusion reactions, two light atomic nuclei fuse together to form a heavier nucleus (in contrast with fission, which splits a heavy nucleus). In doing so they release a comparatively large amount of energy, arising from the binding energy due to the strong nuclear force, which is manifested as an increase in temperature of the reactants. Fusion power is a primary area of research in plasma physics.
The term is commonly used to refer to potential commercial production of net usable power from a fusion source, similar to the usage of the term "steam power." The leading designs for controlled fusion research use magnetic (tokamak design) or inertial (laser) confinement of a plasma, with heat from the fusion reactions used to operate a steam turbine which in turn drives electrical generators, similar to the process used in fossil fuel and nuclear fission power stations.
Fusion power is believed to have significant safety advantages over current power stations based on nuclear fission. Fusion only takes place under very limited and controlled circumstances, and requires a constant feed of new fuel to maintain the reaction, so the cessation of active fuelling or simple changes to the control system quickly shuts down fusion power reactions. By comparison, fission reactors only require that there is sufficient fuel within a small enough space, and are subject to catastrophic failures that self-maintain the reaction, notably the meltdown. In a fusion reactor there is no possibility of runaway heat build-up or a large-scale release of radioactivity; there is little or no atmospheric pollution; the power source comprises light elements in small quantities; and the waste products are short-lived in terms of radioactivity.
Fusion powered electricity generation was initially believed to be readily achievable, as fission power had been. However the extreme requirements for continuous reactions and plasma containment led to projections being extended by several decades. In 2010, more than 60 years after the first attempts, commercial power production is still believed to be unlikely before 2050.
As of July 2010, the largest experiment by means of magnetic confinement has been the Joint European Torus (JET). In 1997, JET produced a peak of 16.1 megawatts (21,600 hp) of fusion power (65% of input power), with fusion power of over 10 MW (13,000 hp) sustained for over 0.5 s. In June 2005, its successor, ITER, was announced by the seven parties involved in the project - the United States, China, the European Union (EU), India, Japan, the Russian Federation, and South Korea. ITER is designed to produce ten times more fusion power than the power put into the plasma over many minutes; for example, 50 MW of input power to produce 500 MW of output power. ITER is currently under construction in Cadarache, France. DEMO is intended as the next generation of research from ITER, and to be the first reactor demonstrating sustained net energy-producing fusion on a commercial scale. It has been proposed to begin construction of DEMO in 2024.
Inertial (laser) confinement, which was for a time seen as more difficult or infeasible, has generally seen less development effort than magnetic approaches. However, this approach made a comeback following further innovations, and is being developed at both the United States National Ignition Facility and the planned European Union High Power laser Energy Research (HiPER) facility. As of 2010, heating to 3.3 million kelvin had been achieved, and in October 2010 the first integrated ignition test was announced to have been completed successfully, with the 192-beam laser system firing over a million joules of ultraviolet laser energy into a capsule filled with hydrogen fuel. Fusion ignition tests are to follow.
The basic concept behind any fusion reaction is to bring two or more nuclei close enough together so that the residual strong force (nuclear force) in their nuclei will pull them together into one larger nucleus. If two light nuclei fuse, they will generally form a single nucleus with a slightly smaller mass than the sum of their original masses (though this is not always the case). The difference in mass is released as energy according to Albert Einstein's mass-energy equivalence formula E = mc2. If the input nuclei are sufficiently massive, the resulting fusion product will be heavier than the sum of the reactants' original masses, in which case the reaction requires an external source of energy. The dividing line between "light" and "heavy" is iron-56. Above this atomic mass, energy will generally be released by nuclear fission reactions; below it, by fusion.
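As a worked example of the mass-energy equivalence described above, the energy released in the D-T reaction can be computed from the mass defect. The isotope masses below are standard published values (rounded; treat them as approximate):

```python
# Energy released by D + T -> He-4 + n, computed from the mass defect (E = mc^2).
# Masses in unified atomic mass units (u); 1 u corresponds to 931.494 MeV/c^2.
U_TO_MEV = 931.494

m_deuterium = 2.014102  # u
m_tritium   = 3.016049  # u
m_helium4   = 4.002602  # u
m_neutron   = 1.008665  # u

# Reactants are slightly heavier than products; the difference leaves as energy.
mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)  # u
energy_mev = mass_defect * U_TO_MEV

print(f"mass defect = {mass_defect:.6f} u")
print(f"energy released = {energy_mev:.1f} MeV")  # ~17.6 MeV per reaction
```

The result, about 17.6 MeV per reaction, is roughly a million times the energy scale of a chemical reaction, which is why such small quantities of fuel suffice.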
Fusion between the nuclei is opposed by their shared electrical charge, specifically the net positive charge of the protons in the nucleus. To overcome this electrostatic force, or "Coulomb barrier", some external source of energy must be supplied. The easiest way to do this is to heat the atoms, which has the side effect of stripping the electrons from the atoms and leaving them as bare nuclei. In most experiments the nuclei and electrons are left in a fluid known as a plasma. The temperature required to provide the nuclei with enough energy to overcome their repulsion is a function of the total charge, so hydrogen, which has the smallest nuclear charge, reacts at the lowest temperature. Helium has an extremely low mass per nucleon and is therefore energetically favoured as a fusion product. As a consequence, most fusion reactions combine isotopes of hydrogen (protium, deuterium, or tritium) to form isotopes of helium (3He or 4He).
The reaction cross section, denoted σ, is a measure of the probability of a fusion reaction as a function of the relative velocity of the two reactant nuclei. If the reactants have a distribution of velocities, as is the case in a thermal distribution within a plasma, then it is useful to perform an average over the distributions of the product of cross section and velocity. The reaction rate (fusions per volume per time) is <σv> times the product of the reactant number densities:
- ƒ = (½n)² <σv> (for one reactant)
- ƒ = n₁n₂ <σv> (for two reactants)
<σv> increases from virtually zero at room temperatures up to meaningful magnitudes at temperatures of 10–100 keV (2.2–22 fJ). The significance of <σv> as a function of temperature in a device with a particular energy confinement time is found by considering the Lawson criterion.
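A rough numerical sketch of the rate formula above, for a 50/50 D-T plasma at tokamak-like conditions. The reactivity value is an order-of-magnitude figure from standard tabulations (roughly 1.1×10⁻²² m³/s near 10 keV); both it and the densities are illustrative assumptions, not design values:

```python
# Volumetric fusion rate f = n1 * n2 * <sigma v>, and the resulting power density.
sigma_v = 1.1e-22   # m^3/s, approximate D-T reactivity near 10 keV (illustrative)
n_d = 5e19          # m^-3, deuteron density (typical magnetic-confinement scale)
n_t = 5e19          # m^-3, triton density

rate = n_d * n_t * sigma_v           # fusions per cubic metre per second

# Each D-T fusion releases ~17.6 MeV; convert to joules to get power density.
E_FUSION_J = 17.6e6 * 1.602e-19      # J per reaction
power_density = rate * E_FUSION_J    # W/m^3

print(f"fusion rate   = {rate:.2e} reactions/m^3/s")
print(f"power density = {power_density:.2e} W/m^3")
```

At these assumed conditions the power density comes out below 1 MW/m³, which illustrates why reactors aim for higher temperatures where ⟨σv⟩ is larger.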
Perhaps the three most widely considered fuel cycles are based on the D-T, D-D, and p-11B reactions. Other fuel cycles (D-3He and 3He-3He) would require a supply of 3He, either from other nuclear reactions or from extraterrestrial sources, such as the surface of the Moon or the atmospheres of the gas giant planets.
D-T fuel cycle
The easiest (according to the Lawson criterion) and most immediately promising nuclear reaction to be used for fusion power is:
- D + T → 4He (3.5 MeV) + n (14.1 MeV)
Hydrogen-2 (Deuterium) is a naturally occurring isotope of hydrogen and as such is universally available. The large mass ratio of the hydrogen isotopes makes the separation rather easy compared to the difficult uranium enrichment process. Hydrogen-3 (Tritium) is also an isotope of hydrogen, but it occurs naturally in only negligible amounts due to its radioactive half-life of 12.32 years. Consequently, the deuterium-tritium fuel cycle requires the breeding of tritium from lithium using one of the following reactions:
- n + 6Li → T + 4He
- n + 7Li → T + 4He + n
The reactant neutron is supplied by the D-T fusion reaction shown above, the one that also produces the useful energy. The reaction with 6Li is exothermic, providing a small energy gain for the reactor. The reaction with 7Li is endothermic but does not consume the neutron. At least some 7Li reactions are required to replace the neutrons lost by reactions with other elements. Most reactor designs use the naturally occurring mix of lithium isotopes. However, the supply of lithium is relatively limited, and other applications, such as Li-ion batteries, are increasing demand for it.
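The 12.32-year half-life mentioned above is why tritium must be bred continuously rather than stockpiled: any inventory decays on a timescale of decades. A quick sketch of the standard radioactive decay law (nothing reactor-specific is assumed here):

```python
# Fraction of a tritium inventory remaining after t years,
# given the 12.32-year half-life of tritium.
HALF_LIFE_Y = 12.32

def tritium_fraction_remaining(t_years: float) -> float:
    """Standard exponential decay: N/N0 = 2^(-t / half-life)."""
    return 0.5 ** (t_years / HALF_LIFE_Y)

print(tritium_fraction_remaining(12.32))         # 0.5 after one half-life
print(round(tritium_fraction_remaining(50), 3))  # only a few percent left after 50 years
```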
Several drawbacks are commonly attributed to D-T fusion power:
- It produces substantial amounts of neutrons that result in induced radioactivity within the reactor structure.
- Only about 20% of the fusion energy yield appears in the form of charged particles (the rest neutrons), which limits the extent to which direct energy conversion techniques might be applied.
- The use of D-T fusion power depends on lithium resources, which are less abundant than deuterium resources, though lithium is still relatively abundant on Earth.
- It requires the handling of the radioisotope tritium. Similar to hydrogen, tritium is difficult to contain and may leak from reactors in some quantity. Some estimates suggest that this would represent a fairly large environmental release of radioactivity.
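The roughly 20% charged-particle figure in the list above follows directly from the D-T reaction energetics: of the 17.6 MeV released per reaction, the alpha particle carries 3.5 MeV and the neutron 14.1 MeV.

```python
# Share of D-T fusion energy carried by charged particles (the alpha particle)
# versus neutrons, using the standard D-T energy partition.
E_ALPHA = 3.5     # MeV, carried by the He-4 nucleus (charged, confined by fields)
E_NEUTRON = 14.1  # MeV, carried by the neutron (escapes the plasma)
E_TOTAL = E_ALPHA + E_NEUTRON

charged_fraction = E_ALPHA / E_TOTAL
print(f"{charged_fraction:.0%} of the yield is in charged particles")  # ~20%
```

Only this charged fraction is even in principle accessible to direct energy conversion; the neutron energy must be captured thermally in the blanket.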
The neutron flux expected in a commercial D-T fusion reactor is about 100 times that of current fission power reactors, posing problems for material design. Design of suitable materials is under way but their actual use in a reactor is not proposed until the generation after ITER. After a single series of D-T tests at JET, the largest fusion reactor yet to use this fuel, the vacuum vessel was sufficiently radioactive that remote handling needed to be used for the year following the tests.
In a production setting, the neutrons would be used to react with lithium in order to create more tritium. This also deposits the energy of the neutrons in the lithium, which would then be cooled to remove this energy and drive electrical production. This reaction protects the outer portions of the reactor from the neutron flux. Newer designs, the advanced tokamak in particular, also use lithium inside the reactor core as a key element of the design. The plasma interacts directly with the lithium, preventing a problem known as "recycling". The advantage of this layout was demonstrated in the Lithium Tokamak Experiment.
D-D fuel cycle
Though more difficult to facilitate than the deuterium-tritium reaction, fusion can also be achieved through the reaction of deuterium with itself. This reaction has two branches that occur with nearly equal probability:
- D + D → T (1.01 MeV) + p (3.02 MeV) (50%)
- D + D → 3He (0.82 MeV) + n (2.45 MeV) (50%)
The optimum energy for this reaction is 15 keV, only slightly higher than the optimum for the D-T reaction. The first branch does not produce neutrons, but it does produce tritium, so that a D-D reactor will not be completely tritium-free, even though it does not require an input of tritium or lithium. Most of the tritium produced will be burned before leaving the reactor, which reduces the tritium handling required, but also means that more neutrons are produced and that some of these are very energetic. The neutron from the second branch has an energy of only 2.45 MeV (0.393 pJ), whereas the neutron from the D-T reaction has an energy of 14.1 MeV (2.26 pJ), resulting in a wider range of isotope production and material damage. Assuming complete tritium burn-up, the reduction in the fraction of fusion energy carried by neutrons is only about 18%, so that the primary advantage of the D-D fuel cycle is that tritium breeding is not required. Other advantages are independence from limitations of lithium resources and a somewhat softer neutron spectrum. The price to pay compared to D-T is that the energy confinement (at a given pressure) must be 30 times better and the power produced (at a given pressure and volume) is 68 times less.
D-3He fuel cycle
A second-generation approach to controlled fusion power involves combining helium-3 (3He) and deuterium (2H). This reaction produces a helium-4 nucleus (4He) and a high-energy proton. As with the p-11B aneutronic fusion fuel cycle, most of the reaction energy is released as charged particles, reducing activation of the reactor housing and potentially allowing more efficient energy harvesting (via any of several speculative technologies). In practice, D-D side reactions produce a significant number of neutrons, resulting in p-11B being the preferred cycle for aneutronic fusion.
p-11B fuel cycle
- p + 11B → 3 4He (8.7 MeV)
Under reasonable assumptions, side reactions will result in about 0.1% of the fusion power being carried by neutrons. At 123 keV, the optimum temperature for this reaction is nearly ten times higher than that for the pure hydrogen reactions, the energy confinement must be 500 times better than that required for the D-T reaction, and the power density will be 2500 times lower than for D-T. Since the confinement properties of conventional approaches to fusion such as the tokamak and laser pellet fusion are marginal, most proposals for aneutronic fusion are based on radically different confinement concepts, such as the Polywell and the Dense plasma focus.
History of research
The idea of using human-initiated fusion reactions was first made practical for military purposes in nuclear weapons. In a hydrogen bomb, the energy released by a fission weapon is used to compress and heat fusion fuel, beginning a fusion reaction that releases a large number of neutrons that increase the rate of fission. The first fission-fusion-fission-based weapons released some 500 times more energy than early fission weapons.
Attempts at controlling fusion had already started by this point. The first patent related to a fusion reactor, registered by the United Kingdom Atomic Energy Authority in 1946, names Sir George Paget Thomson and Moses Blackman as the inventors. This was the first detailed examination of the pinch concept, and small efforts to experiment with it started at several sites in the UK.
Around the same time, an expatriate German, Ronald Richter, proposed the Huemul Project in Argentina, announcing positive results in 1951. Although these results turned out to be false, they sparked intense interest around the world. The UK pinch programs were greatly expanded, culminating in the ZETA and Sceptre devices. In the US, pinch experiments like those in the UK started at the Los Alamos National Laboratory. Similar devices were built in the USSR after data on the UK program was passed to them by Klaus Fuchs. At Princeton University a new approach developed as the stellarator, and the research establishment formed there continues to this day as the Princeton Plasma Physics Laboratory. Not to be outdone, Lawrence Livermore National Laboratory entered the field with their own variation, the magnetic mirror. These three groups have remained the primary developers of fusion research in the US to this day.
In the time since these early experiments, two new approaches developed that have since come to dominate fusion research. The first was the tokamak approach developed in the Soviet Union, which combined features of the stellarator and pinch to produce a device that dramatically outperformed either. The majority of magnetic fusion research to this day has followed the tokamak approach. In the late 1960s the concept of "mechanical" fusion through the use of lasers was developed in the US, and Lawrence Livermore switched their attention from mirrors to lasers over time.
Civilian applications are still being developed. Although it took less than ten years for fission to go from military applications to civilian fission energy production, it has been very different in the fusion energy field; more than fifty years have already passed since the first fusion reaction took place and sixty years since the first attempts to produce controlled fusion power, without any commercial fusion energy production plant coming into operation.
A major area of study in early fusion power research was the "pinch" concept. Pinch is based on the fact that plasmas are electrically conducting. By running a current through the plasma, a magnetic field will be generated around the plasma. This field will, according to Lenz's law, create an inward directed force that causes the plasma to collapse inward, raising its density. Denser plasmas generate stronger magnetic fields, increasing the inward force, leading to a chain reaction. If the conditions are correct, this can lead to the densities and temperatures needed for fusion. The trick is getting the current into the plasma; this is solved by inducing the current from an external magnet, which also produces the external field the internal field acts against.
Pinch was first developed in the UK in the immediate post-war era. Starting in 1947, small experiments were carried out and plans were laid to build a much larger machine. When the Huemul results hit the news, James L. Tuck, a UK physicist working at Los Alamos, introduced the pinch concept in the US and produced a series of machines known as the Perhapsatron. In the Soviet Union, a series of similar machines were being built, unknown in the west. All of these devices quickly demonstrated a series of instabilities in the plasma when the pinch was applied, which broke up the plasma column long before it reached the densities and temperatures needed for fusion. In 1953 Tuck and others suggested a number of solutions to these problems.
The largest "classic" pinch device was the ZETA, including all of these upgrades, starting operations in the UK in 1957. In early 1958 John Cockcroft announced that fusion had been achieved in the ZETA, an announcement that made headlines around the world. When physicists in the US expressed concerns about the claims they were initially dismissed. However, US experiments demonstrated the same neutrons, although measurements suggested these could not be from fusion reactions. The neutrons seen in the UK were later demonstrated to be from different versions of the same instability processes that plagued earlier machines. Cockcroft was forced to retract the fusion claims, which tainted the entire field for years. ZETA ended its experiments in 1968, and most other pinch experiments ended shortly after.
In 1974 a study of the ZETA results demonstrated an interesting side-effect; after the experimental runs ended, the plasma would enter a short period of stability. This led to the reversed field pinch concept, which has seen some level of development ever since. Recent work on the basic concept started as a result of the appearance of the "wire array" concept in the 1980s, which allowed a more efficient use of this technique. Sandia National Laboratories runs a continuing wire-array research program with the Z machine. In addition, the University of Washington's ZaP Lab has shown quiescent periods of stability hundreds of times longer than expected for plasma in a Z-pinch configuration, giving promise to the confinement technique.
In 1995, the staged Z-pinch concept was introduced by a team of scientists from the University of California, Irvine (UCI). This scheme can control one of the most dangerous instabilities that normally disintegrate a conventional Z-pinch before the final implosion. The concept is based on a complex load of radiative liner plasma embedded with a target plasma. During implosion the outer surface of the liner plasma becomes unstable, but the target plasma remains remarkably stable up until the final implosion, generating a very high energy density, stable target plasma. The heating mechanisms are shock heating, adiabatic compression, and the trapping of charged particles produced in fusion reactions by a very strong magnetic field, which develops between the liner and the target. Details of this concept are given in various publications available on the web page of MIFTI.
Early magnetic approaches
The U.S. fusion program began in 1951 when Lyman Spitzer began work on a stellarator under the code name Project Matterhorn. His work led to the creation of the Princeton Plasma Physics Laboratory, where magnetically confined plasmas are still studied. Spitzer planned an aggressive development project of four machines, A, B, C, and D. A and B were small research devices, C would be the prototype of a power-producing machine, and D would be the prototype of a commercial device. A worked without issue, but even by the time B was being used it was clear the stellarator was also suffering from instabilities and plasma leakage. Progress on C slowed as attempts were made to correct for these problems.
At Lawrence Livermore, the magnetic mirror was the preferred approach. The mirror consisted of two large magnets arranged so they had strong fields within them, and a weaker, but connected, field between them. Plasma introduced in the area between the two magnets would "bounce back" from the stronger fields at either end. Although the design would leak plasma through the mirrors, the rate of leakage would be low enough that a useful fusion rate could be maintained. The simplicity of the design was supposed to make up for its lower performance. In practice the mirror also suffered from mysterious leakage problems, and never reached the expected performance.
Gun Club, MHD, instability; progress slows
By the mid-1950s it was clear that the simple theoretical tools being used to calculate the performance of all fusion machines were simply not predicting their actual behaviour. Machines invariably leaked their plasma from their confinement area at rates far higher than predicted.
In 1954, Edward Teller held a gathering of fusion researchers at the Princeton Gun Club, near the Project Matterhorn (now known as Project Sherwood) grounds. Teller started by pointing out the problems that everyone was having, and suggested that any system where the plasma was confined within concave fields was doomed to fail. Attendees remember him saying something to the effect that the fields were like rubber bands, and they would attempt to snap back to a straight configuration whenever the power was increased, ejecting the plasma. He went on to say that it appeared the only way to confine the plasma in a stable configuration would be to use convex fields, a "cusp" configuration.
When the meeting concluded, most of the researchers quickly turned out papers explaining why Teller's concerns did not apply to their particular device. The pinch machines did not use magnetic fields in this way at all, while the mirror and stellarator seemed to have various ways out. However, this was soon followed by a paper by Martin David Kruskal and Martin Schwarzschild discussing pinch machines, which demonstrated that the instabilities in those devices were inherent to the design. A series of similar studies followed, abandoning the simplistic theories previously used and introducing a full consideration of magnetohydrodynamics with a partially-resistive plasma. These concepts developed quickly, and by the early 1960s it was clear that small devices simply would not work. A series of much larger and more complex devices followed as researchers attempted to add field upon field in order to provide the required field strength without reaching the unstable regimes. As cost and complexity climbed, the initial optimism of the fusion field faded.
The tokamak is announced
A new approach was outlined in theoretical work carried out in 1950–1951 by I.E. Tamm and A.D. Sakharov in the Soviet Union, which first discussed a tokamak-like approach. Experimental research on these designs began in 1956 at the Kurchatov Institute in Moscow by a group of Soviet scientists led by Lev Artsimovich. The tokamak essentially combined a low-power pinch device with a low-power simple stellarator. The key was to combine the fields in such a way that the particles wound around the reactor a particular number of times, today known as the "safety factor". The combination of these fields dramatically improved confinement times and densities, resulting in huge improvements over existing devices.
The group constructed the first tokamaks, the most successful being the T-3 and its larger version, the T-4. The T-4 was tested in 1968 in Novosibirsk, producing the first quasistationary thermonuclear fusion reaction ever. The tokamak was dramatically more efficient than the other approaches of that era, on the order of 10 to 100 times. When the results were first announced, the international community was highly skeptical. However, a British team was invited to see T-3, and after measuring it in depth they released results confirming the Soviet claims. A burst of activity followed as many planned devices were abandoned and new tokamaks were introduced in their place - the C model stellarator, then under construction after many redesigns, was quickly converted to the Symmetrical Tokamak and the stellarator was abandoned.
Through the 1970s and 80s, great strides in understanding the tokamak system were made. A number of improvements to the design are now part of the "advanced tokamak" concept, which includes non-circular plasmas, internal divertors and limiters, often superconducting magnets, and operation in the so-called "H-mode" island of increased stability. Two other designs have also become fairly well studied; the compact tokamak is wired with the magnets on the inside of the vacuum chamber, while the spherical tokamak reduces its cross section as much as possible.
The tokamak dominates modern research, where very large devices like ITER are expected to pass several milestones toward commercial power production, including a burning plasma with long burn times, high power output, and online fueling. There are no guarantees that the project will be successful; previous generations of tokamak machines have uncovered new problems many times. But the entire field of high temperature plasmas is much better understood now than formerly, and there is considerable optimism that ITER will meet its goals. If successful, ITER would be followed by a "commercial demonstrator" system, similar in purpose to the very earliest power-producing fission reactors built in the era before wide-scale commercial deployment of larger machines started in the 1960s and 1970s.
Even with these goals met, there are a number of major engineering problems remaining, notably finding suitable "low activity" materials for reactor construction, demonstrating secondary systems including practical tritium extraction, and building reactor designs that allow their reactor core to be removed when its materials become embrittled due to the neutron flux. Practical commercial generators based on the tokamak concept are far in the future. The public at large has been disappointed, as the initial outlook for practical fusion power plants was much rosier; a pamphlet from the 1970s printed by General Atomic stated that "Several commercial fusion reactors are expected to be online by the year 2000."
Inertial (laser) confinement
The technique of imploding a microcapsule irradiated by laser beams, the basis of laser inertial confinement, was first suggested in 1962 by scientists at Lawrence Livermore National Laboratory, shortly after the invention of the laser itself in 1960. Lasers of the era were very low powered, but low-level research using them nevertheless started as early as 1965. A great advance in the field was John Nuckolls' 1972 paper, which predicted that ignition would require lasers of about 1 kJ, and efficient burn around 1 MJ. Kilojoule lasers were just beyond the state of the art at the time, and his paper sparked off a tremendous development effort to produce devices of the needed power.
Early machines used a variety of approaches to attack one of two problems - some focused on fast delivery of energy, while others were more interested in beam smoothness. Both were attempts to ensure the energy delivery would be smooth enough to cause an even implosion. However, these experiments demonstrated a serious problem: laser wavelengths in the infrared lost a tremendous amount of energy before compressing the fuel. Important breakthroughs in this laser technology were made at the Laboratory for Laser Energetics at the University of Rochester, where scientists used frequency-tripling crystals to transform the infrared laser beams into ultraviolet beams. By the late 1970s great strides had been made in laser power, but with each increase new problems were found in the implosion technique that suggested even more power would be required. By the 1980s these increases were so large that using the concept for generating net energy seemed remote. Most research in this field turned to weapons applications, always a second line of research, as the implosion concept is somewhat similar to hydrogen bomb operation. Work on very large versions continued as a result, with the very large National Ignition Facility in the US and Laser Mégajoule in France supporting these research programs.
More recent work has demonstrated that significant savings in the required laser energy are possible using a technique known as "fast ignition". The savings are so dramatic that the concept appears to be a useful technique for energy production again, so much so that it is a serious contender for pre-commercial development. There are proposals to build an experimental facility dedicated to the fast ignition approach, known as HiPER. At the same time, advances in solid state lasers appear to improve the "driver" systems' efficiency by about ten times (to 10–20%), savings that make even the large "traditional" machines almost practical, and might make the fast ignition concept outpace the magnetic approaches in further development.
The laser-based concept has other advantages. The reactor core is mostly exposed, as opposed to being wrapped in a huge magnet as in the tokamak. This makes the problem of removing energy from the system somewhat simpler, and should mean that a laser-based device would be much easier to perform maintenance on, such as core replacement. Additionally, the lack of strong magnetic fields allows for a wider variety of low-activation materials, including carbon fiber, which would reduce both the frequency of neutron activation and the rate of irradiation of the core. In other ways the program has many of the same problems as the tokamak; practical methods of energy removal and tritium recycling need to be demonstrated.
Over the years there have been a wide variety of fusion concepts. In general they fall into three groups - those that attempt to reach high temperature/density for brief times (pinch, inertial confinement), those that operate at a steady state (magnetic confinement) or those that try neither and instead attempt to produce low quantities of fusion but do so at an extremely low cost. The latter group has largely disappeared, as the difficulties of achieving fusion have demonstrated that any low-energy device is unlikely to produce net gain. This leaves the two major approaches, magnetic and laser inertial, as the leading systems for development funding. However, alternate approaches continue to be developed, and alternate non-power fusion devices have been successfully developed as well.
Philo T. Farnsworth, the inventor of the first all-electronic television system in 1927, patented his first Fusor design in 1968, a device that uses inertial electrostatic confinement. This system consists largely of two concentric spherical electrical grids inside a vacuum chamber into which a small amount of fusion fuel is introduced. Voltage across the grids causes the fuel to ionize around them, and positively charged ions are accelerated towards the center of the chamber. Those ions may collide and fuse with ions coming from the other direction, may scatter without fusing, or may pass directly through. In the latter two cases, the ions will tend to be stopped by the electric field and re-accelerated toward the center. Fusors can also use ion guns rather than electric grids. Towards the end of the 1960s, Robert Hirsch designed a variant of the Farnsworth Fusor known as the Hirsch-Meeks fusor. This variant is a considerable improvement over the Farnsworth design, and is able to generate neutron flux on the order of one billion neutrons per second. Although the efficiency was very low at first, there were hopes the device could be scaled up, but continued development demonstrated that this approach would be impractical for large machines. Nevertheless, fusion could be achieved using a "lab bench top" type set up for the first time, at minimal cost. This type of fusor found its first application as a portable neutron generator in the late 1990s. An automated sealed reaction chamber version of this device, commercially named Fusionstar, was developed by EADS but abandoned in 2001. Its successor is the NSD-Fusion neutron generator.
Robert W. Bussard's Polywell concept is roughly similar to that of the Fusor, but replaces the problematic grid with a magnetically contained electron cloud, which holds the ions in position and provides an accelerating potential. The polywell consists of electromagnet coils arranged in a polyhedral configuration and positively charged to between several tens and low hundreds of kilovolts. This charged magnetic polyhedron is called a MaGrid (Magnetic Grid). Electrons are introduced outside the "quasi-spherical" MaGrid and are accelerated into the MaGrid by the electric field, similar to a magnetic bottle. Within the MaGrid, magnetic fields confine most of the electrons, and those that escape are retained by the electric field. This configuration traps the electrons in the middle of the device, focusing them near the center to produce a virtual cathode (negative electric potential). The virtual cathode accelerates and confines the ions to be fused which, except for minimal losses, never reach the physical structure of the MaGrid. Bussard reported a fusion rate of 10⁹ reactions per second running D-D fusion at only 12.5 kV (based on detecting a total of nine neutrons in five tests). Bussard claimed that a scaled-up version, 2.5–3 m in diameter, would operate at over 100 MW net power (fusion power scales as the fourth power of the B field and the cube of the size).
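The scaling law quoted in parentheses can be illustrated with a short calculation. This is a minimal sketch of the claimed proportionality (fusion power ∝ B⁴·r³); the reference point and scaled values are purely hypothetical, not measured data.

```python
# Sketch of the scaling law claimed for the Polywell: fusion power
# proportional to B^4 * r^3. All numbers here are illustrative.

def scaled_power(p_ref, b_ref, r_ref, b, r):
    """Scale a reference fusion power to a new field strength and size."""
    return p_ref * (b / b_ref) ** 4 * (r / r_ref) ** 3

# Hypothetical: doubling the field and tripling the machine radius
# multiplies the fusion power by 2**4 * 3**3 = 432.
print(scaled_power(1.0, 1.0, 1.0, 2.0, 3.0))  # 432.0
```

The steep fourth-power dependence on field strength is why proponents argued that a modestly larger machine could jump from watt-scale experiments to net power.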
A more recent area of study is the magneto-inertial fusion (MIF) concept, which combines some form of external inertial compression (such as lasers) with further compression through an external magnet (as in pinch devices). The magnetic field traps heat within the inertial core, causing a variety of effects that improve fusion rates. These improvements are relatively minor; however, the magnetic drivers themselves are inexpensive compared to lasers or other systems. There is hope for a sweet spot that allows the combination of features from these devices to create low-density but also low-cost fusion devices. A similar concept is the magnetized target fusion device, which uses a magnetic field in an external metal shell to achieve the same basic goals.
According to Eric Lerner, Focus fusion takes place in a dense plasma focus, which typically consists of two coaxial cylindrical electrodes made from copper or beryllium and housed in a vacuum chamber containing a low-pressure gas, which serves as the reactor fuel. An electrical pulse is applied across the electrodes, producing heating and a magnetic field. The current forms the hot gas into many minuscule vortices perpendicular to the surfaces of the electrodes, which then migrate to the end of the inner electrode and pinch-and-twist off as tiny balls of plasma called plasmoids. As a plasmoid decays, it emits beams of electrons and ions; the electron beam collides with the plasmoid, heating it to fusion temperatures. This should, in principle, yield more energy in the beams than was input to form them.
Non-power generating approaches
A more subtle technique is to use unusual particles to catalyse fusion. The best known of these is muon-catalyzed fusion, in which muons, which behave somewhat like heavy electrons, replace the electrons around the atoms. Muons allow atoms to get much closer together and thus reduce the kinetic energy required to initiate fusion. However, muons require more energy to produce than can be obtained from muon-catalysed fusion, making this approach impractical for the generation of power.
In April 2005, a team from UCLA announced it had devised a way of producing fusion using a machine that "fits on a lab bench", using lithium tantalate to generate enough voltage to smash deuterium atoms together. However, the process does not generate net power. See Pyroelectric fusion. Such a device would be useful in the same sort of roles as the fusor.
Some scientists reported excess heat, neutrons, tritium, helium and other nuclear effects in so-called cold fusion systems, which for a time gained interest as showing promise. Hopes fell when replication failures were weighed against several reasons cold fusion is not likely to occur, the discovery of possible sources of experimental error, and finally the discovery that Fleischmann and Pons had not actually detected nuclear reaction byproducts. By late 1989, most scientists considered cold fusion claims dead, and cold fusion subsequently gained a reputation as pathological science. However, a small community of researchers continues to investigate cold fusion, claiming to replicate Fleischmann and Pons' results, including nuclear reaction byproducts. Claims related to cold fusion are largely disbelieved in the mainstream scientific community. In 1989, the majority of a review panel organized by the US Department of Energy (DOE) found that the evidence for the discovery of a new nuclear process was not persuasive. A second DOE review, convened in 2004 to look at new research, reached conclusions similar to the first.
Key design areas
Confinement refers to all the conditions necessary to keep a plasma dense and hot long enough to undergo fusion:
- Equilibrium: There must be no net forces on any part of the plasma, otherwise it will rapidly disassemble. The exception, of course, is inertial confinement, where the relevant physics must occur faster than the disassembly time.
- Stability: The plasma must be so constructed that small deviations are restored to the initial state, otherwise some unavoidable disturbance will occur and grow exponentially until the plasma is destroyed.
- Transport: The loss of particles and heat in all channels must be sufficiently slow. The word "confinement" is often used in the restricted sense of "energy confinement".
The first human-made, large-scale fusion reaction was the test of the hydrogen bomb, Ivy Mike, in 1952. As part of the PACER project, it was once proposed to use hydrogen bombs as a source of power by detonating them in underground caverns and then generating electricity from the heat produced, but such a power plant is unlikely ever to be constructed, for a variety of reasons. Controlled thermonuclear fusion (CTF) refers to the alternative of continuous power production, or at least the use of explosions that are so small that they do not destroy a significant portion of the machine that produces them.
To produce self-sustaining fusion, the energy released by the reaction (or at least a fraction of it) must be used to heat new reactant nuclei and keep them hot long enough that they also undergo fusion reactions. Retaining the heat is called energy confinement and may be accomplished in a number of ways.
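The balance between fusion heating and energy losses described above is commonly quantified by the Lawson criterion, often expressed as a "triple product" of density, temperature, and energy confinement time. The sketch below assumes the commonly quoted D-T ignition threshold of about 3×10²¹ keV·s/m³; the example plasma parameters are illustrative, not data from any specific machine.

```python
# Sketch of the Lawson "triple product" test for D-T ignition.
# The threshold ~3e21 keV*s/m^3 is the commonly quoted figure; the
# plasma parameters in the example are illustrative assumptions.

DT_IGNITION_TRIPLE_PRODUCT = 3e21  # keV * s / m^3

def triple_product(density_per_m3, temperature_kev, confinement_time_s):
    """n * T * tau_E, the figure of merit for energy confinement."""
    return density_per_m3 * temperature_kev * confinement_time_s

# Hypothetical tokamak-like plasma: n = 1e20 m^-3, T = 15 keV, tau_E = 3 s
ntt = triple_product(1e20, 15.0, 3.0)
print(ntt >= DT_IGNITION_TRIPLE_PRODUCT)  # True: 4.5e21 exceeds the threshold
```

The criterion makes the trade-off explicit: inertial confinement reaches the product with enormous density and tiny confinement time, while magnetic confinement uses modest density held for seconds.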
The hydrogen bomb really has no confinement at all. The fuel is simply allowed to fly apart, but it takes a certain length of time to do this, and during this time fusion can occur. This approach is called inertial confinement. If more than milligram quantities of fuel are used (and efficiently fused), the explosion would destroy the machine, so theoretically, controlled thermonuclear fusion using inertial confinement would be done using tiny pellets of fuel which explode several times a second. To induce the explosion, the pellet must be compressed to about 30 times solid density with energetic beams. If the beams are focused directly on the pellet, it is called direct drive, which can in principle be very efficient, but in practice it is difficult to obtain the needed uniformity. An alternative approach is indirect drive, in which the beams heat a shell, and the shell radiates x-rays, which then implode the pellet. The beams are commonly laser beams, but heavy and light ion beams and electron beams have all been investigated.
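For the pulsed scheme just described, the average thermal power of the plant is simply the energy yield per pellet times the repetition rate. The figures below are illustrative assumptions, not design values for any real machine.

```python
# Average thermal power of a pulsed inertial-fusion plant: energy yield
# per pellet times repetition rate. Both numbers are assumptions chosen
# purely to illustrate the arithmetic.

def average_power_mw(yield_per_shot_mj, shots_per_second):
    return yield_per_shot_mj * shots_per_second  # MJ/s is MW

# Hypothetical: 350 MJ per pellet fired 5 times per second
print(average_power_mw(350.0, 5.0))  # 1750.0 MW of thermal power
```

This is why inertial designs call for several shots per second: a single milligram-scale pellet yields only a modest amount of energy, so rapid repetition is needed to reach power-plant scale.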
Inertial confinement produces plasmas with impressively high densities and temperatures, and appears to be best suited to weapons research, X-ray generation, very small reactors, and perhaps in the distant future, spaceflight. This approach requires fuel pellets of nearly perfect shape in order to generate a symmetrical inward shock wave that produces the high-density plasma, and in practice these have proven difficult to produce. A recent development in the field of laser-induced ICF is the use of ultrashort-pulse multi-petawatt lasers to heat the plasma of an imploding pellet at exactly the moment of greatest density, after it is imploded conventionally using terawatt-scale lasers. This research will be carried out on the (currently being built) OMEGA EP petawatt and OMEGA lasers at the University of Rochester and on the GEKKO XII laser at the Institute of Laser Engineering in Osaka, Japan, and, if fruitful, may have the effect of greatly reducing the cost of a laser fusion based power source.
At the temperatures required for fusion, the fuel is in the form of a plasma with very good electrical conductivity. This opens the possibility of confining the fuel and the energy with magnetic fields, an idea known as magnetic confinement. The Lorentz force acts only perpendicular to the magnetic field, so the first problem is how to prevent the plasma from leaking out the ends of the field lines. There are basically two solutions.
The first is to use the magnetic mirror effect. If particles following a field line encounter a region of higher field strength, then some of the particles will be stopped and reflected. Advantages of a magnetic mirror power plant would be simplified construction and maintenance due to a linear topology and the potential to apply direct conversion in a natural way, but the confinement achieved in the experiments was so poor that this approach has been essentially abandoned.
The second possibility for preventing end losses is to bend the field lines back on themselves, either in circles or, more commonly, in nested toroidal surfaces. The most highly developed system of this type is the tokamak, with the stellarator being next most advanced, followed by the reversed field pinch. Compact toroids, especially the Field-Reversed Configuration and the spheromak, attempt to combine the advantages of toroidal magnetic surfaces with those of a simply connected (non-toroidal) machine, resulting in a mechanically simpler and smaller confinement area. Compact toroids still have some enthusiastic supporters but are not backed as readily by the majority of the fusion community.
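Magnetic confinement works because the perpendicular Lorentz force makes charged particles spiral tightly around field lines rather than crossing them; the radius of that spiral is the gyroradius r = mv⊥/(qB). A sketch for a deuteron follows; the 10 keV energy and 5 T field are illustrative assumptions for a tokamak-scale plasma.

```python
import math

# Gyroradius r = m * v_perp / (q * B): the radius of the spiral a
# charged particle follows around a magnetic field line. The 10 keV
# deuteron and 5 T field below are illustrative assumptions.

KEV_TO_J = 1.602e-16       # one kilo-electronvolt in joules
E_CHARGE = 1.602e-19       # elementary charge, C
DEUTERON_MASS = 3.344e-27  # kg

def gyroradius(energy_kev, b_tesla, mass=DEUTERON_MASS, charge=E_CHARGE):
    v = math.sqrt(2 * energy_kev * KEV_TO_J / mass)  # speed from kinetic energy
    return mass * v / (charge * b_tesla)

# A 10 keV deuteron in a 5 T field gyrates on a millimeter-scale orbit,
# far smaller than the metre-scale plasma vessel.
print(f"{gyroradius(10.0, 5.0) * 1000:.2f} mm")
```

The millimeter-scale orbit against a metre-scale vessel is what makes confinement perpendicular to the field so effective, and why the open field-line ends are the dominant loss channel that mirrors and toroidal machines must address.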
Finally, there are also electrostatic confinement fusion systems, in which ions in the reaction chamber are confined and held at the center of the device by electrostatic forces, as in the Farnsworth-Hirsch Fusor, which is not believed to be able to be developed into a power plant. The Polywell, an advanced variant of the fusor, has recently attracted a degree of research interest; however, the technology is relatively immature, and major scientific and engineering questions remain, which researchers under the auspices of the U.S. Office of Naval Research hope to investigate further.
Developing materials for fusion reactors has long been recognized as a problem nearly as difficult and important as that of plasma confinement, but it has received only a fraction of the attention. The neutron flux in a fusion reactor is expected to be about 100 times that in existing pressurized water reactors (PWR). Each atom in the blanket of a fusion reactor is expected to be hit by a neutron and displaced about a hundred times before the material is replaced. Furthermore, the high-energy neutrons will produce hydrogen and helium in various nuclear reactions that tend to form bubbles at grain boundaries, resulting in swelling, blistering or embrittlement. One also wishes to choose materials whose primary components and impurities do not result in long-lived radioactive wastes. Finally, the mechanical forces and temperatures are large, and there may be frequent cycling of both.
The problem is exacerbated because realistic material tests must expose samples to neutron fluxes of a similar level for a similar length of time as those expected in a fusion power plant. Such a neutron source is nearly as complicated and expensive as a fusion reactor itself would be. Proper materials testing will not be possible in ITER, and a proposed materials testing facility, IFMIF, was still at the design stage in 2005.
The material of the plasma facing components (PFC) is a special problem. The PFC do not have to withstand large mechanical loads, so neutron damage is much less of an issue. They do have to withstand extremely large thermal loads, up to 10 MW/m², which is a difficult but solvable problem. Regardless of the material chosen, the heat flux can only be accommodated without melting if the distance from the front surface to the coolant is not more than a centimeter or two. The primary issue is the interaction with the plasma. One can choose either a low-Z material, typified by graphite although for some purposes beryllium might be chosen, or a high-Z material, usually tungsten with molybdenum as a second choice. Use of liquid metals (lithium, gallium, tin) has also been proposed, e.g., by injection of 1–5 mm thick streams flowing at 10 m/s on solid substrates.
If graphite is used, the gross erosion rates due to physical and chemical sputtering would be many meters per year, so one must rely on redeposition of the sputtered material. The location of the redeposition will not exactly coincide with the location of the sputtering, so one is still left with erosion rates that may be prohibitive. An even larger problem is the tritium co-deposited with the redeposited graphite. The tritium inventory in graphite layers and dust in a reactor could quickly build up to many kilograms, representing a waste of resources and a serious radiological hazard in case of an accident. The consensus of the fusion community seems to be that graphite, although a very attractive material for fusion experiments, cannot be the primary PFC material in a commercial reactor.
The sputtering rate of tungsten can be orders of magnitude smaller than that of carbon, and tritium is not so easily incorporated into redeposited tungsten, making this a more attractive choice. On the other hand, tungsten impurities in a plasma are much more damaging than carbon impurities, and self-sputtering of tungsten can be high, so it will be necessary to ensure that the plasma in contact with the tungsten is not too hot (a few tens of eV rather than hundreds of eV). Tungsten also has disadvantages in terms of eddy currents and melting in off-normal events, as well as some radiological issues.
In fusion research, achieving a fusion energy gain factor Q = 1 is called breakeven and is considered a significant although somewhat artificial milestone. Ignition refers to an infinite Q, that is, a self-sustaining plasma where the losses are made up for by fusion power without any external input. In a practical fusion reactor, some external power will always be required for things like current drive, refueling, profile control, and burn control. A value on the order of Q = 20 will be required if the plant is to deliver much more energy than it uses internally.
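The statement that Q on the order of 20 is needed can be illustrated with a simple plant power balance. The 40% thermal-to-electric conversion efficiency and 50% heating-system wall-plug efficiency below are assumptions chosen only for illustration.

```python
# Why Q on the order of 20 is needed: a sketch of the plant power
# balance. Conversion and heating efficiencies are assumed values.

def net_electric_fraction(q, eta_thermal=0.40, eta_heating=0.50):
    """Fraction of gross electric output left after recirculating the
    electricity needed to drive the plasma heating systems.

    q           -- fusion gain: fusion power / external heating power
    eta_thermal -- thermal-to-electric conversion efficiency (assumed)
    eta_heating -- wall-plug efficiency of the heating systems (assumed)
    """
    p_heat = 1.0                                   # heating power (normalized)
    p_gross = eta_thermal * (q * p_heat + p_heat)  # fusion heat + reinjected heat
    p_recirc = p_heat / eta_heating                # electricity to run the heaters
    return (p_gross - p_recirc) / p_gross

print(f"Q=1:  {net_electric_fraction(1):.2f}")   # breakeven: plant is a net consumer
print(f"Q=20: {net_electric_fraction(20):.2f}")  # most of the output reaches the grid
```

At Q = 1 ("breakeven") the plant actually consumes more electricity than it generates once conversion losses are counted, which is why breakeven is called a somewhat artificial milestone.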
Despite many differences between possible designs of power plant, there are several systems that are common to most. A fusion power plant, like a fission power plant, is customarily divided into the nuclear island and the balance of plant. The balance of plant converts heat into electricity via steam turbines; it is a conventional design area and in principle similar to any other power station that relies on heat generation, whether fusion, fission or fossil fuel based.
The nuclear island has a plasma chamber with an associated vacuum system, surrounded by plasma-facing components (first wall and divertor) maintaining the vacuum boundary and absorbing the thermal radiation coming from the plasma, surrounded in turn by a blanket where the neutrons are absorbed to breed tritium and heat a working fluid that transfers the power to the balance of plant. If magnetic confinement is used, a magnet system, using primarily cryogenic superconducting magnets, is needed, and usually systems for heating and refueling the plasma and for driving current. In inertial confinement, a driver (laser or accelerator) and a focusing system are needed, as well as a means for forming and positioning the pellets.
Although the standard solution for electricity production in fusion power plant designs is conventional steam turbines using the heat deposited by neutrons, there are also designs for direct conversion of the energy of the charged particles into electricity. These are of little value with a D-T fuel cycle, where 80% of the power is in the neutrons, but are indispensable with aneutronic fusion, where less than 1% is. Direct conversion has been most commonly proposed for open-ended magnetic configurations like magnetic mirrors or Field-Reversed Configurations, where charged particles are lost along the magnetic field lines, which are then expanded to convert a large fraction of the random energy of the fusion products into directed motion. The particles are then collected on electrodes at various large electrical potentials. Typically the claimed conversion efficiency is in the range of 80%, but the converter may approach the reactor itself in size and expense.
Safety and the environment
There is no possibility of a catastrophic accident in a fusion reactor resulting in major release of radioactivity to the environment or injury to non-staff, unlike modern fission reactors. The primary reason is that nuclear fusion requires precisely controlled temperature, pressure, and magnetic field parameters to generate net energy. If the reactor were damaged, these parameters would be disrupted and the heat generation in the reactor would rapidly cease. Fusion reactors are extremely safe in this sense, which makes them favorable compared with fission reactors, which continue to generate heat through beta decay for several hours or even days after reactor shutdown, meaning that melting of fuel rods is possible even after the reactor has been stopped due to continued accumulation of heat.
There is also no risk of a runaway reaction in a fusion reactor, since the plasma is normally burnt at optimal conditions, and any significant change will render it unable to produce excess heat. In fusion reactors the reaction process is so delicate that this level of safety is inherent; no elaborate failsafe mechanism is required. Although the plasma in a fusion power plant will have a volume of 1000 cubic meters or more, the density of the plasma is extremely low, and the total amount of fusion fuel in the vessel is very small, typically a few grams. If the fuel supply is closed, the reaction stops within seconds. In comparison, a fission reactor is typically loaded with enough fuel for one or several years, and no additional fuel is necessary to keep the reaction going.
In the magnetic approach, strong fields are developed in coils that are held in place mechanically by the reactor structure. Failure of this structure could release this tension and allow the magnet to "explode" outward. The severity of this event would be similar to any other industrial accident or an MRI machine quench/explosion, and could be effectively stopped with a containment building similar to those used in existing (fission) nuclear generators. The laser-driven inertial approach is generally lower-stress. Although failure of the reaction chamber is possible, simply stopping fuel delivery would prevent any sort of catastrophic failure.
Most reactor designs rely on the use of liquid lithium as both a coolant and a method for converting stray neutrons from the reaction into tritium, which is fed back into the reactor as fuel. Lithium is highly flammable, and in the case of a fire it is possible that the lithium stored on-site could burn and escape. In this case the tritium content of the lithium would be released into the atmosphere, posing a radiation risk. However, calculations suggest that the total amount of tritium and other radioactive gases in a typical power plant would be so small, about 1 kg, that they would have diluted to legally acceptable limits by the time they reached the plant's perimeter fence.
The likelihood of small industrial accidents, including the local release of radioactivity and injury to staff, cannot yet be estimated. These would include accidental releases of lithium or tritium, or the mishandling of decommissioned radioactive components of the reactor itself.
Effluents during normal operation
The natural product of the fusion reaction is a small amount of helium, which is completely harmless to life. Of more concern is tritium, which, like other isotopes of hydrogen, is difficult to retain completely. During normal operation, some amount of tritium will be continually released. There would be no acute danger, but the cumulative effect on the world's population from a fusion economy could be a matter of concern.
Although tritium is volatile and biologically active, the health risk posed by a release is much lower than that of most radioactive contaminants, due to tritium's short half-life (about 12.3 years), very low decay energy (~14.95 keV), and the fact that it does not bioaccumulate (instead being cycled out of the body as water, with a biological half-life of 7 to 14 days). Current ITER designs are investigating total containment facilities for any tritium.
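The effect of the short half-life on a release can be sketched with simple exponential decay; the 50-year horizon below is an illustrative choice, not a regulatory figure.

```python
# Decay of a tritium inventory over time, using the ~12.3 year
# half-life. A minimal sketch; the time horizons are illustrative.

TRITIUM_HALF_LIFE_YEARS = 12.3

def remaining_fraction(years, half_life=TRITIUM_HALF_LIFE_YEARS):
    return 0.5 ** (years / half_life)

print(remaining_fraction(12.3))         # one half-life: 0.5
print(remaining_fraction(50.0) < 0.06)  # after ~50 years, under 6% remains: True
```

By contrast, a contaminant with a half-life of thousands of years would remain essentially undiminished over any such horizon.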
The large flux of high-energy neutrons in a reactor will make the structural materials radioactive. The radioactive inventory at shut-down may be comparable to that of a fission reactor, but there are important differences.
The half-lives of the radioisotopes produced by fusion tend to be shorter than those from fission, so the inventory decreases more rapidly. Unlike fission reactors, whose waste remains radioactive for thousands of years, most of the radioactive material in a fusion reactor would be the reactor core itself, which would be dangerous for about 50 years, with low-level waste remaining for another 100. Although this waste will be considerably more radioactive during those 50 years than fission waste, the very short half-lives make waste management fairly straightforward. By 300 years the material would have the same radioactivity as coal ash.
Additionally, the choice of materials used in a fusion reactor is less constrained than in a fission design, where many materials are required for their specific neutron cross-sections. This allows a fusion reactor to be designed using materials that are selected specifically to be "low activation", materials that do not easily become radioactive. Vanadium, for example, would become much less radioactive than stainless steel. Carbon fiber materials are also low-activation, as well as being strong and light, and are a promising area of study for laser-inertial reactors where a magnetic field is not required.
In general terms, a fusion reactor would create far less radioactive material than a fission reactor, the material it creates would be less damaging biologically, and the radioactivity would "burn off" within a time period that is well within existing engineering capabilities.
Although fusion power uses nuclear technology, the overlap with nuclear weapons technology is small. Tritium is a component of the trigger of hydrogen bombs, but not a major problem in production. The copious neutrons from a fusion reactor could be used to breed plutonium for an atomic bomb, but not without extensive redesign of the reactor, so that production would be difficult to conceal. The theoretical and computational tools needed for hydrogen bomb design are closely related to those needed for inertial confinement fusion, but have very little in common with the more scientifically developed magnetic confinement fusion.
As a sustainable energy source
Large-scale reactors using neutronic fuels (e.g. ITER) and thermal power production (turbine based) are most comparable to fission power from an engineering and economics viewpoint. Both fission and fusion power plants involve a relatively compact heat source powering a conventional steam turbine-based power plant, while producing enough neutron radiation to make activation of the plant materials problematic. The main distinction is that fusion power produces no high-level radioactive waste (though activated plant materials still need to be disposed of). There are some power plant ideas which may significantly lower the cost or size of such plants; however, research in these areas is nowhere near as advanced as in tokamaks.
Fusion power proposals commonly use deuterium, an isotope of hydrogen, as fuel, and many current designs also use lithium. Assuming a fusion energy output equal to the 1995 global power output of about 100 EJ/yr (= 1 × 10²⁰ J/yr) and that this does not increase in the future, the known current lithium reserves would last 3000 years, lithium from sea water would last 60 million years, and a more complicated fusion process using only deuterium from sea water would have fuel for 150 billion years. To put this in context, 150 billion years is over ten times the currently measured age of the universe, and close to 30 times the remaining lifespan of the sun.
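The supply durations above are simple ratios: total recoverable fusion energy divided by the assumed constant demand of 100 EJ/yr. The reserve-energy inputs in this sketch are back-calculated from the durations quoted in the text, purely to illustrate the arithmetic.

```python
# Years of fuel supply = recoverable fusion energy / annual demand.
# Demand is held constant at the 1995 global figure of ~100 EJ/yr;
# the reserve-energy values are back-calculated from the text.

ANNUAL_DEMAND_J = 1e20  # ~100 EJ/yr

def years_of_supply(reserve_energy_j, demand_j_per_year=ANNUAL_DEMAND_J):
    return reserve_energy_j / demand_j_per_year

print(f"{years_of_supply(3e23):.0f} years")    # known lithium reserves
print(f"{years_of_supply(6e27):.1e} years")    # lithium from sea water
print(f"{years_of_supply(1.5e31):.1e} years")  # deuterium from sea water
```

The caveat built into the assumption is worth noting: if global energy demand grows rather than staying flat, these horizons shrink accordingly.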
While fusion power is still in early stages of development, substantial sums have been and continue to be invested in research. In the EU almost € 10 billion was spent on fusion research up to the end of the 1990s, and the new ITER reactor alone is budgeted at € 10 billion. It is estimated that up to the point of possible implementation of electricity generation by nuclear fusion, R&D will need further promotion totalling around € 60-80 billion over a period of 50 years or so (of which € 20-30 billion within the EU). Nuclear fusion research receives € 750 million (excluding ITER funding), compared with € 810 million for all non-nuclear energy research combined, putting research into fusion power well ahead of that of any single rivaling technology.
Fusion power would provide much more energy for a given weight of fuel than any technology currently in use, and the fuel itself (primarily deuterium) exists abundantly in the Earth's ocean: about 1 in 6500 hydrogen atoms in seawater is deuterium. Although this may seem a low proportion (about 0.015%), because nuclear fusion reactions are so much more energetic than chemical combustion and seawater is easier to access and more plentiful than fossil fuels, fusion could potentially supply the world's energy needs for millions of years.
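The energy density claim can be checked with a back-of-envelope estimate of the fusion energy recoverable from the deuterium in one liter of seawater. The sketch assumes the full D-D reaction chain (about 7.2 MeV released per deuteron consumed); the gasoline heating value used for comparison is an assumed round figure.

```python
# Fusion energy recoverable from the deuterium in one liter of water,
# assuming the full D-D chain (~7.2 MeV per deuteron). The gasoline
# heating value is an assumed round figure for comparison.

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
D_FRACTION = 1 / 6500    # deuterium fraction of hydrogen atoms (from text)
H_MOL_PER_LITER = 111.0  # rough moles of hydrogen atoms in a liter of water

d_atoms = H_MOL_PER_LITER * AVOGADRO * D_FRACTION
energy_j = d_atoms * 7.2 * MEV_TO_J  # ~7.2 MeV released per deuteron

GASOLINE_J_PER_LITER = 34e6  # assumed heating value of gasoline
print(f"{energy_j / 1e9:.0f} GJ per liter of seawater")
print(f"equivalent to ~{energy_j / GASOLINE_J_PER_LITER:.0f} liters of gasoline")
```

The result, on the order of 10 GJ per liter, underlies the often-quoted comparison that the deuterium in a liter of seawater holds roughly the energy of hundreds of liters of gasoline.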
Despite being technically non-renewable, fusion power has many of the benefits of renewable energy sources (such as being a long-term energy supply and emitting no greenhouse gases), as well as some of the benefits of resource-limited energy sources such as hydrocarbons and nuclear fission (without reprocessing). Like these currently dominant energy sources, fusion could provide very high power-generation density and uninterrupted power delivery, since it is not dependent on the weather, unlike wind and solar power.
Another aspect of fusion energy is that the cost of production does not suffer from diseconomies of scale. The cost of water and wind energy, for example, goes up as the optimal locations are developed first, while further generators must be sited in less ideal conditions. With fusion energy, the production cost will not increase much, even if large numbers of plants are built.
Some problems which are expected to be an issue in this century such as fresh water shortages can alternatively be regarded as problems of energy supply. For example, in desalination plants, seawater can be purified through distillation or reverse osmosis. However, these processes are energy intensive. Even if the first fusion plants are not competitive with alternative sources, fusion could still become competitive if large scale desalination requires more power than the alternatives are able to provide.
A scenario has been presented of the effect of the commercialization of fusion power on the future of human civilization. ITER and the later DEMO are envisioned to bring online the first commercial nuclear fusion energy reactor by 2050. Using this as the starting point and the history of the uptake of nuclear fission reactors as a guide, the scenario depicts a rapid take-up of nuclear fusion energy starting after the middle of this century.
Despite optimism dating back to the 1950s about the wide-scale harnessing of fusion power, there are still significant barriers standing between current scientific understanding and technological capabilities and the practical realization of fusion as an energy source. Research, while making steady progress, has also continually thrown up new difficulties. Therefore it remains unclear whether an economically viable fusion plant is possible. A 2006 editorial in New Scientist magazine opined that "if commercial fusion is viable, it may well be a century away." This pessimistic view contrasts with the optimism of a pamphlet printed by General Atomics in the 1970s, which stated that "By the year 2000, several commercial fusion reactors are expected to be on-line."
Several fusion D-T burning tokamak test devices have been built (TFTR, JET), but these were not built to produce more thermal energy than electrical energy consumed. The ITER project is currently leading the effort to commercialize fusion power.
A paper published in January 2009, part of the proceedings of the October 2008 IAEA Fusion Energy Conference in Geneva, claims that small 50 MW tokamak-style reactors are feasible.
On May 30, 2009, the US Lawrence Livermore National Laboratory (LLNL) announced the creation of a high-energy laser system, the National Ignition Facility, which can heat hydrogen atoms to temperatures existing in nature only in the cores of stars. The new laser is expected to have the ability to produce, for the first time, more energy from controlled, inertially confined nuclear fusion than was required to initiate the reaction.
On January 28, 2010, the LLNL announced tests using all 192 laser beams, although with lower laser energies, smaller hohlraum targets, and substitutes for the fusion fuel capsules. More than one megajoule of ultraviolet energy was fired into the hohlraum, beating the previous world record by a factor of more than 30. The results gave the scientists confidence that they will be able to achieve ignition in more realistic tests scheduled to begin in the summer of 2010.
NIF researchers are currently conducting a series of "tuning" shots to determine the optimal target design and laser parameters for high-energy ignition experiments with fusion fuel in the coming months. Two firing tests have been performed on October 31 and November 2.
- ^ "Beyond ITER". The ITER Project. Information Services, Princeton Plasma Physics Laboratory. Archived from the original on 7 November 2006. http://web.archive.org/web/20061107220145/http://www.iter.org/Future-beyond.htm. Retrieved 5 February 2011. - Projected fusion power timeline
- ^ ITER and the Promise of Fusion Energy.
- ^ Bullis, Kevin (January 28, 2010). "Scientists Overcome Obstacle to Fusion". Technology Review. http://www.technologyreview.com/blog/energy/24720/. Retrieved 2010-01-29.
- ^ "First successful integrated experiment at National Ignition Facility announced". General Physics. PhysOrg.com. October 8, 2010. http://www.physorg.com/news205740709.html. Retrieved 2010-10-09.
- ^ "Fission and fusion can yield energy". http://hyperphysics.phy-astr.gsu.edu/hbase/nucene/nucbin.html#c2.
- ^ a b "Thinkquest: D-T reaction". http://library.thinkquest.org/17940/texts/fusion_dt/fusion_dt.html. Retrieved 12 June 2010
- ^ "Nuclear Fusion Power, Assessing fusion power". http://www.world-nuclear.org/info/inf66.html.
- ^ Heindler and Kernbichler, Proc. 5th Intl. Conf. on Emerging Nuclear Energy Systems, 1989, pp. 177-82. See also Residual radiation from a p–11B reactor
- ^ British Patent 817681, available here
- ^ The first A-bomb shot dates back to July 16, 1945 in Alamogordo (New Mexico desert), while the first civilian fission plant was connected to the electric power network on June 27, 1954 in Obninsk (Russia).
- ^ The first H-bomb, Ivy Mike, was detonated on Eniwetok, an atoll of the Pacific Ocean, on November 1, 1952 (local time).
- ^ Nathaniel Fisch, "Edward Teller Centennial Symposium", pg 118
- ^ Great Soviet Encyclopedia, 3rd edition, entry on "Токамак", available online here 
- ^ "The Advent of Clean Nuclear Fusion: Super-performance Space Power and Propulsion", Robert W. Bussard, Ph.D., 57th International Astronautical Congress, October 2–6, 2006
- ^ "Advances towards pB11 Fusion with the Dense Plasma Focus", Eric Lerner, Lawrenceville Plasma Physics, 2008
- ^ Browne 1989, Close 1992, Huizenga 1993, Taubes 1993
- ^ a b Browne 1989
- ^ Chang, Kenneth (2004-03-25). "US will give cold fusion a second look". The New York Times. http://query.nytimes.com/gst/fullpage.html?res=9C01E0DC1530F936A15750C0A9629C8B63. Retrieved 2009-02-08.
- ^ Voss 1999, Platt 1998, Goodstein 1994, Van Noorden 2007, Beaudette 2002, Feder 2005, Hutchinson 2006, Kruglinksi 2006, Adam 2005
- ^ William J. Broad (31 October 1989). "Despite Scorn, Team in Utah Still Seeks Cold-Fusion Clues". The New York Times: pp. C1. http://query.nytimes.com/gst/fullpage.html?res=950DE6DA1331F932A05753C1A96F948260&pagewanted=all.
- ^ Randy 2009
- ^ "'Cold fusion' rebirth? New evidence for existence of controversial energy source" (Press release). American Chemical Society. http://www.eurekalert.org/pub_releases/2009-03/acs-fr031709.php.
- ^ Hagelstein et al. 2004
- ^ Feder 2005
- ^ Choi 2005, Feder 2005, US DOE 2004
- ^ a b T. Hamacher and A.M. Bradshaw (October 2001). "Fusion as a Future Power Source: Recent Achievements and Prospects" (PDF). World Energy Council. Archived from the original on 2004-05-06. http://web.archive.org/web/20040506065141/http://www.worldenergy.org/wec-geis/publications/default/tech_papers/18th_Congress/downloads/ds/ds6/ds6_5.pdf.
- ^ Petrangeli, Gianni (2006). Nuclear Safety. Butterworth-Heinemann. p. 430. ISBN 9780750667234.
- ^ Basu, S. K. Encyclopaedic Dictionary of Astrophysics. Global Vision, 2007, pg. 110
- ^ Energy for Future Centuries
- ^ Eric Christian et al. "Cosmicopia". NASA. http://helios.gsfc.nasa.gov/qa_sun.html#sunlife. Retrieved 2009-03-20.
- ^ "The current EU research programme". FP6. http://www.tab.fzk.de/en/projekt/zusammenfassung/ab75.htm.
- ^ "The Sixth Framework Programme in brief". http://ec.europa.eu/research/fp6/pdf/fp6-in-brief_en.pdf.
- ^ Robert F. Heeter et al. "Conventional Fusion FAQ Section 2/11 (Energy) Part 2/5 (Environmental)". http://fusedweb.llnl.gov/FAQ/section2-energy/part2-enviro.txt.
- ^ Frank J. Stadermann. "Relative Abundances of Stable Isotopes". Laboratory for Space Sciences, Washington University in St. Louis. http://presolar.wustl.edu/work/abundances.html.
- ^ J. Ongena and G. Van Oost. "Energy for Future Centuries" (PDF). Laboratorium voor Plasmafysica– Laboratoire de Physique des Plasmas Koninklijke Militaire School– Ecole Royale Militaire; Laboratorium voor Natuurkunde, Universiteit Gent. pp. Section III.B. and Table VI. http://www.agci.org/dB/PDFs/03S2_MMauel_SafeFusion%3F.pdf.
- ^ EPS Executive Committee. "The importance of European fusion energy research". The European Physical Society. http://www.eps.org/about-us/position-papers/fusion-energy/.
- ^ Sing Lee and Sor Heoh Saw. "Nuclear Fusion Energy-Mankind's Giant Step Forward". http://www.plasmafocus.net/IPFS/2010%20Papers/LSmankind.pdf.
- ^ Fusion’s False Dawn by Michael Moyer Scientific American March 2010
- ^ a b "Editorial: Nuclear fusion must be worth the gamble". New Scientist. 7 June 2006. http://www.newscientist.com/channel/opinion/mg19025543.300-editorial-nuclear-fusion-must-be-worth-the-gamble.html.
- ^ Physics and Engineering Basis of Multi-functional Compact Tokamak
- ^ "US lab debuts super laser", Breitbart news site
- ^ "Laser fusion test results raise energy hopes". BBC News. January 28, 2010. http://news.bbc.co.uk/2/hi/science/nature/8485669.stm. Retrieved 2010-01-29.
- ^ "Initial NIF experiments meet requirements for fusion ignition". Lawrence Livermore National Laboratory. https://publicaffairs.llnl.gov/news/news_releases/2010/NR-10-01-06.html. Retrieved 2010-01-29.
- ^ "BBC:Laser fusion test results raise energy hopes"
- ^ https://www.llnl.gov/news/newsreleases/2010/Nov/NR-10-11-02.html
- Fusion as an Energy Source: A guide from the Institute of Physics
- U.S. Fusion Energy Science Program
- Latest Fusion Energy Research News
- EURATOM/UKAEA Fusion Association
- FUSION FAQ
- European Fusion Development Agreement
- Fusion Power Associates A Washington, DC area lobbying organization; "a non-profit, tax-exempt research and educational foundation, providing timely information on the status of fusion development." Edits the Journal of Fusion Energy.
- Plasma/Fusion Glossary
- The Helimak Experiment, at the Fusion Research Center at UT Austin
- Investigations of the Formability, Weldability and Creep Resistance of Some Potential Low-activation Austenitic Stainless Steels for Fusion Reactor Applications (ISBN 0-85311-148-0): A.H. Bott, G.J. Butterworth, F. B. Pickering
- International Thermonuclear Experimental Reactor (Iter) fusion reactor work gets go-ahead (BBC news May 2006)
- Unofficial ITER fan club
- Will Nuclear Fusion Fill the Gap Left by Peak Oil?
- Fusion Science and Technology
- Google Tech Talk
- General Fusion: technology for a safe, economically viable modified MTF fusion reactor by 2010.
- Universal Plasma Focus Laboratory Facility
- A Central Site for Fusion Energy Links
- Institute for Plasma Focus Studies
- Josh Dean (December 23, 2008). "This Machine Might* Save the World". Popular Science. http://www.popsci.com/scitech/article/2008-12/machine-might-save-world.
Wikimedia Foundation. 2010.