Piketty Under Attack

Piketty is under attack by the Far Right. They dispute his data, or imply he is a hidden Communist.

Thomas Piketty published the English translation of his Capital in the Twenty-First Century (C21) and pulled his massive open database into the bright light of fame. Reviews started arriving in March; praise came from economists in the moderate political center to the center left. The far left thinks it is incomplete: it ignores important things. The strongest pushback, though, has been from the fringe on the political right. A small sampling of responses:

Positive: Eduardo Porter, 3 reviews from economists, Jennifer Schuessler, Steven Erlanger, Paul Krugman.

Negative: James Pethokoukis, David Brooks, Clive Crook (neg), Thomas Edsall, Chris Giles, plus Scott Winship and Ross Douthat (their comments below).

C21 is incomplete: Clive Crook (pos), later version; modern left of center thought (Andrew Mackay); Adviser to President, Neil Irwin.
There are many more reviews saying C21 has incomplete solutions than this tiny sampling shows.

The LastTechAge review of C21 is here.

The Attack

Suddenly, the economic trends illuminated by 100 years of data are universally available. The charts are easy to use; trends visually correlate with recalled events (if we are old enough).
Suddenly, the outlandish proposal for a global progressive tax enters cocktail party conversation.

The far Right cannot leave Piketty be. The point of his analysis is nearly intuitive: too hard to miss, too dangerous to discount. Many successful decades of rightist achievement could be undone.

The first strategy is to paint Prof. Piketty as a communist. He is French and susceptible to mutual Yankee/French distrust; commie, commie, commie ought to work. Second, attack C21 as flawed in as many categories as possible. A similar tactic worked against the right in 2013, when the Reinhart-Rogoff claim that government debt varies inversely with growth was demolished by outside analysis (Herndon-Ash-Pollin, Krugman).

So the two fronts in the Piketty attack are Name-Calling and Doubt-Casting.
Click any figure for full resolution image

Paint Him Communist


Fig 1 Ross Douthat, NYT Columnist

One attack came from Ross Douthat’s 2014 Apr 19 New York Times Op-Ed column, Karl Marx Rises Again. Douthat uses ‘Marx’, ‘Karl’ or ‘USSR’ 13 times in his screed of 14 paragraphs. This does not count calling the book “Capital,” as in Das Kapital. Douthat’s column indicated that he was not the first to imply that Piketty is a French communist.

The idea is to imprint nasty commie into your brain whenever you hear “Piketty.” Douthat …

Piketty himself is a social democrat who abjures the Marxist label. But as his title suggests, he is out to rehabilitate and recast one of Marx’s key ideas…

Hmm, “…as his title suggests…“? We revisit this in the next section. Ross continues: Dr. P observes that normal market forces mean that the acquisition of new wealth by the currently rich must naturally suck out the income reserves of the poorest part of society. Piketty does indicate this with the observation that the wealth-to-income ratio converges to the ratio of savings rate to growth rate, and with the implication of what happens when the earnings rate (r) on invested wealth exceeds the growth rate (g).

He tries to convert this into a Piketty dogma. We have discussed this many times, starting with our post Zero Sum Game. This is not an ideological postulate, Ross; it is an observable in the data stretching back over 200 years (France) and 100 years (US). Other industrialized countries’ data vary as to when their records began. These are matters of economic record. If Douthat denies this, he must assume the robes of the old Soviet censors, expunging and rewriting history.


Fig 2: Scott Winship

Douthat quotes Scott Winship of the Manhattan Institute (a right-edge group founded in 1978 by William Casey and Antony Fisher, funded by the Koch Family Foundation and others).

Winship’s review is pretty cool. He starts by charging that on page ONE (Introduction), C21 quotes from the Declaration of the Rights of Man and the Citizen.

Winship: that Declaration says “… to the effect that all inequality should be viewed as suspect.”


Fig 3 Start of the Introduction

What he does not explain: the Declaration of the Rights of Man and the Citizen was written in 1789 as a foundation of the French Revolution. Lafayette drafted it in consultation with Thomas Jefferson, and it is about as communistic as the American Declaration of Independence.

Golly! What an indictment, Scott! The rest of Winship’s writing is about as accurate so we will move on.

Both efforts were pretty sad, but Winship seems to have read the book and tries to glean snippets to attack, comments similar to his attack on the Rights of Man. Douthat, though, appears not to have read C21. In passing: Clive Crook’s April comment (link at top of post) starts with a bust of Karl Marx labeled “This guy wrote a manifesto, too”.

This kind of name calling ‘with intent to damage’ works for Limbaugh and others – why was it not swallowed whole by the American public? My take: By the 1950s, the USSR was recognized for the terrible dictatorship it was, operated by the nomenklatura aristocracy. There were always a few American true-believers and a few U.S.S.R. spies, but, by the 1960s, most communist penetrations succeeded by appealing to base greed. Two decades after the fall of the USSR, communist agitators do not threaten our Republic.


Fig 4 Simon Kuznets, Kuznets Curve

Actually, Piketty is highly sarcastic about dogmatists/ideologues of the last several centuries, Marx included.  He dismisses Marxist thinking, praises Simon Kuznets’ data analysis but says the Kuznets Curve used too short a timeline and drew faulty inferences.  (Rightists use the K-Curve to justify unregulated capitalism.)

Piketty (C21) says that going from Marx to Kuznets is like going from “Apocalypse to Fairy Tale.” Even so, Piketty repeatedly calls Kuznets his intellectual predecessor.

Why pick on Piketty personally?


Fig 5 Capital In The Twenty First Century 2014

You cannot attack a database unless you have one, too.  So destroy the person.

The flow of capital has not been something American economists have spent much time on since WW-II.  Do capital-flow research, get branded “commie.”   Also, the publisher designed a bright, clean, arresting cover, which happened to cause knee jerks on the Right. Think of the cover as red-meat bait for attack dogs; this one is a doozy.

The actual title is six words: Capital In The Twenty-First Century. The layout makes it appear to be simply Capital. But a subtitle would have been a standalone phrase like “Economics in the twenty-first century.” The red border gives it the manifesto look of flaming rhetoric.  These were the traps that caught Ross Douthat.    The actual book is careful analysis of data, not a flaming manifesto of any sort.

Even the New York Times Sunday Review of Books got caught. Here are two C21 descriptions from 2014 Jul 6.

BestSellers listing: A French economist’s analysis of centuries of economic history predicts worsening inequality and proposes solutions.

Separate display caption on image of top sellers: “Piketty’s neo-Marxist account of capitalism and inequality is No. 3 in its 11th week…

‘BestSellers’ isn’t too bad. But Piketty does not “…predict worsening inequality”; he uses data and demonstrates it happened. Most people picture economists as mathematicians, priests, shamans, horoscope readers. Dr. P says the opposite: that economists are more like archeologists than physicists.

‘Display Caption’ fell into the trap. This could have been written by a Heritage Foundation Spin Meister trying to sound neutral.

As Steven Erlanger points out (see top section), Piketty takes on all economists who work from dogma. Erlanger says Piketty’s work is a challenge both to Marxism and laissez-faire economics, because “both count on pure economic forces for harmony or justice to prevail.”

If you hear someone say Piketty’s ideas are a “soft Marxism” you know that he/she is frustrated over the lack of concrete critical points.

Database Integrity


Fig 6: Chris Giles, Financial Times

There have been a few lists drawn up of discovered faults. Near the end of May, Chris Giles, economics editor for the UK-based Financial Times, published his broadside (ref at top of post) claiming many structural issues with C21.

Giles’ points are in the reference at the top of this post. Piketty’s response came the same day; read both for details. A good summary is by Neil Irwin. Highlights: The data files are open and on-line, not hidden. They are diverse and heterogeneous, requiring assumptions to be usable. The assumptions are described in many places and are the team’s judgement calls; anyone is free to use their own. Since everyone lies about income, why use tax data? Piketty: tax data may be the most reliable source, since people lie to every method of gathering income data.

And so it goes. Name calling may work on Talk Radio or Fox TV, but not with the kind of audience that follows Inequality issues.  If the well-funded, right-wing gnomes are going to refute the C21 trends, they must present analytical facts that stick. Their candles for midnight work are still lit, but the game keeps enlarging as Piketty, Saez and others publish newer and newer analyses that continue to validate C21.


Charles J. Armentrout, Ann Arbor
2014 July 12
Listed under Economy, thread Economy > Inequality
Have a comment? Click on the title of this post, go to bottom, let us know.
Related posts: Click the INDEX button under the Banner picture


ABM Test Finally Succeeded, Maybe

Successful test of troublesome ABM used known-flawed kill vehicle. Meaningful or theater?

The U.S. successfully tested its anti-ICBM missile on 2014-Jun-22, with a launch from Vandenberg AFB in California.  News was trumpeted everywhere!  The LA Times says that the launch came just after noon (Pacific Time), and could be seen in nearby towns.  I used to enjoy seeing Minuteman  launches from much further south, near San Diego.

The GBI (Ground Based Interceptor, also called GMD for Ground-based Midcourse Defense) dates back to the 1980s SDI program and has a very bad history (see our final ABM report last year).   So we have to ask about fraud, graft, and conspiracies to protect hidden activities and cash flows.


Fig 2    GBI launch,   probably 2014 Jun 22 (source: MDA)


Fig 1    MDA logo from its Website

Apparently, the GBI’s EKV (Exoatmospheric Kill Vehicle) did make contact with a simulated attacking warhead.

But, if you exclude the patriotic hype, the Missile Defense Agency released very little about this test.

The Under The Dome series notes that a previous Glorious Success was declared when they partially hit a target that was actually sending radio “look-at-me” beeps.  Actually, GBI had no history of working properly when our idiot president (too young to use Reagan’s excuse) declared it operational and started distributing federal fortune to contractors.  As late as 2011, it was discovered that its EKV could not work correctly, though it had been declared operational a couple of years before.

We pointed out that a tactical ABM need only swat the incoming warhead aside; destruction is not needed to protect its military target.  Tactically, it is okay that collateral damage happens.  GBIs are used against the horribly expensive ICBM threat; these things deliver H-bombs.  Everything a successful attack delivers might be called collateral damage in the tactical theater of war.  The EKV of an effective GBI must stop –halt– inside its target if it is to deposit its energy into the target for destruction.

Did the EKV deliver its energy potential to the target?  What we do know is that one of the early (& kinda functional) EKV units was removed from an ‘operational’ GBI for this test.

We tried to explain the support that the GBI has always received in our post Knights To Battleships.  Pictures of the system are awesome, the program’s verbiage is highly patriotic,  the leaders are tough-lookn’  good old boys, too.  How could you not respond positively? (… unless you are not a real Amurcan.)

So.  Something happened on June 22.  A GBI was launched.  The old and known-defective EKV is reported to have contacted its target.  The question remains – is this supposed to mean something?     ABM defense is a valid concept – wish we had a  valid system; but, we are paying valid money for it.

Update, 2014 Jul 20: Comment by Dean A. Wilkening, LLNL: “If you’re going to rely on that as an operational system, one shouldn’t be too surprised that it does tend to fail more than you’d like.”  A really good one-sentence summary of the GMD system.


Charles J. Armentrout, Ann Arbor
2014 July 3
Listed under Technology, thread Technology > Aerospace


Piketty discovers America or vice versa

Piketty describes the growth of U.S. and World inequality with a uniquely large data resource. Our review of Capital In The 21st Century.

The French economist Thomas Piketty published the French edition of his new book Capital In The Twenty-First Century in 2013.  (Call it C21.)  The English version came out in April 2014.  Finally.

Click any image for full resolution.  Click here to jump to the review.


Fig 1:  the book causing all the fuss.

Fig 1  is a link to the Harvard University Press website.  Review copies were passed around after Christmas and breathless reports  started appearing by February.

Response was immediate and strong long before the planned release date; I can almost see/hear rioting mobs outside Harvard offices…  Mobs probably never happened, but by the end of March the publisher agreed to an early release of its English translation of C21.

My own impatience pushed me over the top in mid-March when The American Prospect ran its critique.  I pushed until I received one of the first copies in our area from a local bookstore.    So I was ready by the time Krugman and Brooks had face-to-face comments in the New York Times issue of April 25.


Fig 2:   Thomas Piketty (2014)

First Things First.

Americans tend to pronounce Piketty’s name as  either PICK-etty, a la Paul Krugman,  or  pe-KET-y,  a la moi.

Oh – no no no No!

Dr. Piketty is a native Frenchman.  As Krugman wrote, “who knew it should be PEE-ket-EE?”

First name?  A guess:  probably “toe-MAH” (silent s).

The buzz

This book made inequality the word du jour from mid-winter to early summer.  The hype started prior to his April tour in the U.S.; those pre-release descriptions generated standing room only at nearly every appearance.  Stunning – probably the most stunned was the serious scholar himself, Thomas Piketty.   As most people know, C21 hit the top of Best Seller lists (!) and stayed there.  Copies evaporated from booksellers’ shelves.  Harvard University Press reports the highest sales for any new book in its century of business … this for the hardcover edition of a scholarly tome!

My best guess: most people are not reading it in any detail.  If you can get it, read at least the introduction.  It has good, interesting background and lays out the ideas underlying economic inequality.

At its end, C21 presents a discussion of how to avoid the successful diversion of the world’s wealth into the hands of an ultra-tiny, ultra-powerful and ultra-rich group.

Why C21 is important – Review

C21 provides inferences from data on wealth, income and inequality available nowhere else.  It is a data-driven discussion on the diversion of capital and income into the wealthiest segment of our population.

Piketty wrote C21, but he is part of a large cooperative team assembling the WTID (World Top Incomes Database), available to all here.  This is the world’s largest database on trends in income flowing to the different earning sections of populations world-wide.  He has announced that WTID will be transformed into the much more inclusive WWID (World Wealth and Income Database).   These workers are special: Thomas Piketty in France, Emmanuel Saez in the U.S., Anthony Atkinson in the U.K., and all the others.  The effort provides unrestricted access to information not otherwise obtainable, anywhere.

Piketty contributed French data extending back to the late 1700s.  Atkinson did the same for England using data from the 19th and 20th centuries.   Saez got hold of U.S. tax records from 1913 forward.  Saez’ work caught my attention in 2009 and, to a very important extent, LastTechAge started in 2011 due to those data (see our Zero Sum Games). I shifted from an experience-based belief that our technical competence had eroded for most of my productive life to a data-based understanding of why.

What  Capital In The 21st Century  Says

C21 is a detailed book for non-economists written by a scholarly author; it runs to 577 pages, not counting the notes and a good index.

The May 25, 2014 issue of the journal Science published a 5½-page overview by Piketty and Saez.  It is one of the articles in its 50-page special issue on the science of inequality.   Although it is aimed at a slightly more technical professional audience, I think it is a really good introduction to C21.

The WTID  group works with historic raw data, such as tax returns, and generates many charts.  It does not use the differential equation modeling typical of many economists.  The team makes adjustments on the raw data using assumptions that are clearly stated, so that the results are consistent with each other.  Results are placed in  Excel files.

Piketty, Saez, and team draw their inferences from those data, not from ideologically “reasonable” models.

The U.S. trends

Cmt-A   Definition of Wealth, Income, Output

Cmt-A gives my non-economist descriptions of important concepts, though the Output local/global comment is per the book. Don’t blame Dr. P; all errors are mine.

Note: Piketty uses wealth and capital interchangeably and explains his reasons in the book.

I am focused on income inequality trends, but C21 actually shows  global trends on income and wealth, industrial production and growth.  Our comments here will stay with what has been going wrong in America.

Fig 3 shows Piketty during a presentation.  Fig 4 shows the same data from the WTID, independently plotted.


Fig 4:  LTA graph of same data base 1917-2011


Fig 3:  Piketty presenting iconic top-earning 10% of population graph

C21 has a fruitful way to look at the data.  There is always a fixed 100% of all income flowing through the economy.  Divide the total group of earners into various sub-groups, from the lowest paid to the highest.  Then determine what fraction of that 100% total flow goes into each sub-group.  You have to work at it, though; the two percentages are easy to confuse, and many commentators are diverted by the changing total dollars in motion.

Informed inferences are there for the making; but you must be willing to correlate WTID results with 100 years of U.S. history.
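The share-of-flow bookkeeping above can be sketched in a few lines. The incomes here are made up for illustration; they are not WTID numbers.

```python
# Sketch of the C21 share-of-flow bookkeeping with made-up incomes (not WTID data):
# whatever the total dollars in motion, the shares always sum to 100%.
incomes = [18, 22, 25, 30, 34, 40, 48, 60, 85, 238]  # ten earners, lowest to highest (made-up)

total = sum(incomes)                     # 100% of all income flowing through this toy economy
top_10_share = incomes[-1] / total       # here the top "decile" is the single highest earner
bottom_20_share = sum(incomes[:2]) / total

print(round(100 * top_10_share, 1))      # → 39.7 (percent of total flow)
print(round(100 * bottom_20_share, 1))   # → 6.7
```

Note that the shares are fractions of the flow, not dollar amounts; the total can grow while a sub-group's share shrinks, which is exactly the confusion the paragraph above warns about.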

Post WW-1, the highest-paid 10% raked in ever larger fractions of the money moving through the economy.  For example, one can see the ’29 crash, the quick resurgence of the wealth percentage, and its growth stopping at 45% of total flow after the 1935 progressive tax law.  Real equilibrium emerged about 1945, when WW-2 ended.

The equilibrium share of income for the top-compensated 10% was 34% of income (with small fluctuations).  C21 uses the French phrase Trente Glorieuses (“30 glorious” years) for the equilibrium decades between 1945 and about 1980 or 81.

Reagan’s Economic Recovery Act of 1981 was an undo of Roosevelt’s progressive tax, and you can see the sudden upward shift in share as ever-greater income flow was diverted to our fine & deserving Toffs.  This means that the rest of us had to accept ever-decreasing fractions.

WTID provides historical data to use for comparisons.   Other data sets may be fruitfully compared to Piketty’s database charts, and LastTechAge has frequently used them for correlative support.  Here are some examples …

Fig 5:  Lowest-earning 20% + top-earning 10%

Fig 5 shows Census Bureau data on the lowest-paid 20% of the population, with the highest 10% shown in light blue for visual correlation.  The Census data set started late and is noisy (they are not so interested in the lower earning levels), but it is perfectly believable that the 1967-1980 data are just fluctuations about a flat line, too.  The lowest 20% start losing income share in 1981, when income share for the top 10% begins its spectacular rise.


Fig 6:   Fusion research + Top earning 10%

Fig 6 shows the timeline for the U.S. fusion energy budget.  The American Fusion Engineering Act of 1980 allocated a massive funding increase and set the stage for a shift from basic research on why things happen to the engineering how-to, with Prototype 1 of a net-power-producing magnetic fusion machine.  Correlations to WTID anchor data suggest why fusion funding never really started up.

LastTechAge calls it a Zero Sum Game when growth of one faction requires a corresponding reduction in another.  With our apologies to game theory, we have a number of posts on the idea.   (Refer to our Index under the header for related posts.)

We will continue to probe other data sets using C21‘s WTID data as a correlation baseline.

Visible even in our example Figs 5 and 6: unrestricted capitalistic acquisition of wealth seems to focus on maximizing the personal family wealth of select individuals, even to the detriment of the society that makes financial success possible (my statement, not Piketty’s).

In the next section, we discuss how Piketty used the measured total Wealth of the U.S. and its total Income to draw inferences.

Now – Inferences from the math

Someone who comments on a book should read at least the introduction; it wouldn’t hurt to read a couple paragraphs of Chapter 1 and probably the last page or so at the end.  Many C21 critics seem to have done none of those.

Even in his introduction, Piketty states that, without a solid database, economics is not science, just speculation.  On page 2, Piketty calls data-free discussions “A Debate without Data“; on page 3, he refers to ideological hypotheses as “the dialogue of the deaf “, where each side points to the other’s intellectually lazy habits; on page 11, in reference to Marx and Kuznets, he says that without data one can find “theories” that range from “Apocalypse to Fairy Tale“; page 3 again:  “Social scientific research is and always will be tentative and imperfect. It does not claim to transform economics, sociology and history into exact sciences.”   These are not the words of a rigid ideologue, though Greg Mankiw tries to frame him as one.

Cmt-B  Predictive Econ is science fiction

Economic math models (even those with differential equations) start as the verbal ones do, maybe from divine revelation of an axiom set, maybe a set of blindingly-clear but hypothetical truths.

Several of our posts say that Economics is not Science.   Cmt-B discusses the SciFi imagery.

Piketty scorns economists who use complex mathematical models, but he does use a few simple equations.

For me to detail how the math is used, I would surely need more than 577 book pages.  For good discussions of what it all means, read Piketty’s book or the Science journal summary.


Cmt-C: Algebra of Capital in the 21st Century

The few math relations used are shown in Cmt-C.  The content of his resulting inferences is why some high-wealth people are trying to undermine his credibility.

The second law generates concerns.  It is given without proof and means that, over a long time period, a country’s beta value (its total wealth divided by total income) gradually adjusts to match its average savings rate divided by its growth rate. This assumes everything stays in equilibrium: no awful wars, no asteroid hits, no super-volcano at Yellowstone Park, etc.

If true, and if the saving rate is constant,  a country needs continuous growth  to keep its wealth from concentrating into a few hands.

If growth g gets really small (no growth: static population size and static manufacturing productivity), the ratio s/g must become really big.  By the 2nd Law, s/g = β = K/Y, and since capital wealth K cannot grow without bound, the income Y averaged over the population must become very small: lots of earners with subsistence paychecks.
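The convergence the second law asserts can be watched in a toy simulation. The savings and growth rates below are illustrative numbers I picked, not values from C21.

```python
# Sketch of Piketty's "second law," beta -> s/g, with illustrative parameters
# (my own numbers, not values from C21).
s = 0.12   # net savings rate: fraction of national income saved each year
g = 0.02   # annual growth rate of national income

Y = 1.0    # national income (arbitrary units)
K = 3.0    # national capital stock; beta starts at K/Y = 3

for year in range(500):
    K += s * Y          # each year's savings add to the capital stock
    Y *= (1.0 + g)      # income grows at rate g

beta = K / Y
print(round(beta, 2))   # → 6.0, which is s/g
```

Whatever beta starts at, the dynamic pulls it toward s/g; with these numbers the gap shrinks by a factor of (1+g) every year.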

Oh, wow… Static societies develop into inherited aristocracies with their majority as starvation level serfs . ? !

I have a  small question about the 2nd Law.  Beta is measured in years but the law convergence ratio is a pure number.  Where did the ‘year’ dimension go? I am a physicist and all our valid results keep the measurement dimensions consistent.  If it starts as ‘years,’ it must remain as ‘years.’  Maybe my issue is only because I have not seen how this convergence was proven?
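For what it is worth, here is one possible bookkeeping, offered as my own reading and not anything from C21: if the growth rate g is quoted "per year," then the year survives on the right-hand side as well.

```latex
% A possible dimensional resolution (my reading, not from C21):
% K is in currency, Y in currency per year, s is a dimensionless fraction,
% and g is a rate per year.
[\beta] = \left[\frac{K}{Y}\right]
        = \frac{\text{currency}}{\text{currency}/\mathrm{yr}} = \mathrm{yr},
\qquad
\left[\frac{s}{g}\right] = \frac{1}{1/\mathrm{yr}} = \mathrm{yr}.
```

On this reading both sides of the second law carry units of years, and the "pure number" appearance comes from quoting s and g as bare percentages.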

The r > g condition is more serious.  When the rate of return on capital investments exceeds the growth rate of the country, you should also expect wealth concentration over the long run: when the capital return rate r is bigger than the rate g at which the wealth of the bulk of the population grows, folks with inherited wealth acquire resources faster (much faster?) than all others.  For really large r, new capital flows to the very rich and well connected; all others lose their pool of resources.

r > g  is serious without the second law.  With the 2nd Law, it acts as a strong accelerant. (Think of what an arsonist uses to start building fires.)
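The compounding arithmetic behind r > g fits in a few lines. The rates below are illustrative values of mine, not estimates from C21.

```python
# Sketch of the r > g mechanism with illustrative rates (not C21 estimates):
# an inherited fortune compounds at r while national income grows at g.
r = 0.05    # annual return on capital
g = 0.015   # annual growth of national income

wealth = 1.0    # a fortune worth one year of national income today
income = 1.0

for year in range(30):      # roughly one generation
    wealth *= (1.0 + r)     # the fortune is fully reinvested
    income *= (1.0 + g)

# The fortune, measured in years of national income, nearly triples.
print(round(wealth / income, 2))
```

Nothing dramatic happens in any single year; the divergence is pure compound interest on the gap between r and g, which is why the effect only shows up over generational timescales.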

Piketty’s team’s best model for capital growth uses an economy with successive shocks: sudden and unanticipated changes in the fortunes of various subsets of our economy.  We are currently passing through such a “shock.”  In 2009, the rich and well connected got huge government bailouts plus monster bonuses.  But manufacturing had been hustled offshore, and most of us found lower-paying jobs waiting to replace what we lost.

Finally:  The model is not the thing; the map is not the territory.

The model points toward wealth concentrated into a few hands, especially if you include resource depletion and climate change as forcing elements.   But the issues are complex, and include forces that push toward inequality but also those that push away.

Piketty’s proposal: reinstate a progressive tax rate (PTR).  Suppose the capital return rate r does try to grow above growth g; government can shift the r/g ratio.  After all, turning the PTR off shut down the 30 Glorious Years.  This is a way to reduce the net rate of capital return r so that wealth agglomeration does not happen.  And it is why he is under such extreme attack by the ultra-rich and their (much poorer) far-right followers.

These seem to be our choices:  (A) a feudal oligarchy where the very rich rule our impoverished people, or (B) a revolutionary oligarchy where the very rich rule our impoverished people, or (C) an increased share in the support for our country paid by those who rose to the top via the American avenue of opportunity.

Piketty says that his model shows a potential direction but does not predict the actual future.  (C) is certainly possible; the world is not required to follow the path of inequality that leads to injustice. We could, in fact, return to the “Glorious Years.” It is just that, in his opinion, (C) with its PTR action is the choice the country would be least likely to accept.

The final statement of the Piketty and Saez inequality summary:

“To summarize: Inequality does not follow a deterministic process.  … There are powerful forces pushing alternately in the direction of rising or shrinking inequality. Which one dominates depends on the institutions and policies that societies choose to adopt.”


Charles J. Armentrout, Ann Arbor
2014 Jun 26
Listed under Economics, thread Economics > Inequality


NIF shifts toward success – 4

NIF addresses the universal RT instability all implosion fusion schemes must face. Is there more that they can do?

NIF people limited the impact of the RT instability in their most recent report.  But they have more to do and surprises to absorb before they achieve their goals.  This is our 4th post in this thread.  A summary of our previous 3:

    1. Gain: Target implosions produced about as much energy as was applied.
    2. Enhanced heating: The fusion reaction is starting to cause more fusion. First step towards an ignited plasma core.
    3. Not in-control:  The stagnation core at the end of the implosion is an inefficient donut with hard peanuts on one side; it should be a plum with a hard/hot central core.
    4. Something is not working as expected:  Are there hidden issues that block success?

This is a “popular” discussion, not a truly technical one, but we must estimate one more technical topic if we want to understand.  You may skip the close-up discussion; click Summary to jump past the nitty-gritty details.

Click any image to expand to full size.


Fig 1  RT mixing of water

Rayleigh-Taylor Instabilities

The RT (Rayleigh-Taylor) instability is the hard reality that ICF researchers must accommodate. When a heavy liquid pushes on a lighter one, RT mixes the boundary with growing turbulence.

Fig 1 shows the beginnings of RT driven tendrils from the red (heavier) water into the lighter, clear water.

Click here to see the full discussion of this effect.

RT is a universal instability.
It is insidious because any bump, no matter how small, will grow and never stop growing.


Fig 3   Simulated collapse of a target due to bumps on the target surface


Fig 2  Target cutaway

Fig 2   is the target design (from our previous post).  Click this link to see the Specification Table.

Fig 3 shows a computer model of an imploding ICF target driven by non-uniform beams; this was done at LLNL and displays the RT turbulent structures. The top half of the target is shown.  The structure causes havoc when the fuel compresses into its stagnation-point core.

RT starts with very small bumps on the interface. In Fig 3, the largest tendrils are the fuel penetrating into the low pressure in the cavity.  The bumps expand exponentially, driven by RT mixing caused by the monstrous drive acceleration, not Earth gravity.


Tbl A   Formulas to calculate RT effects – simplest model

Now the math:   The answer to:
Will RT appear?  … is always YES, whenever a dense fluid pushes against a lighter one.

The correct question is:
WHEN will RT effects start to  mess things up?  … this  has to be estimated.

Table A defines our terms.   A target has two interfaces that can generate RT instabilities:  • the shell-wall-to-fuel surface and  • the fuel-to-inner-cavity surface.

StartHeight is the size of the initial bump.  The smaller the bump, the longer it takes to be noticed, but bumps grow like the penny stacks discussed in the “exponentially” link above.

BumpHeight is the size at a time t after the beams have turned on.

Gamma is the rate of growth of a bump in the interface. It determines how fast the bump will rise: the bigger gamma, the faster the growth, and the sooner RT turbulence becomes an issue.

Doubling Time – a bump doubles in size every time the clock ticks through ln(2)/Gamma.  This form is easier to understand, and most calculators with a [Log] key have an [Ln] key, too.

Estimating Gamma means we have to estimate 3 quantities


Tbl B Atwood values for target shells

Atwd  is the Atwood number: the difference between the densities of the two materials divided by their sum.  Table B gives some values for this function.

Case A  applies for the inner surface of the fuel layer – solid DT on one side, essentially nothing in the cavity.  Cases B, C, D, and F refer to the interface between the pusher shell and the solid fuel.

Case F is for the shell/fuel interface in the current NIF target.   The value Atwd=0.5 for F instead of 0.6 would make the RT growth rate 15.5% lower than D, a simple PVA plastic shell with a cryo fuel layer.  The 0.5 might justify the much larger target expense, but we will stay with the conservative 0.6 guess for F.

KMS built the shells used by most ICF labs in the 1970s and ’80s.  Here is my link for our simple estimates.

WavN  is the Wave Number, in units of “per cm” – basically, 2π divided by the average distance between successive RT peaks.  This assumes the number of peaks is 10.  If it were actually 8 or 12, the square root in the growth rate would change the effect only a bit.

Accel  was discussed in the Target Demands table (post 3 of this thread),
and is 7.2×10¹² m/s².   This enters the RT growth rate as its square root, too.
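Table A’s simplest model reduces to a few lines of arithmetic.  Here is a sketch in Python of the formulas as I read them – exponential bump growth at rate Γ = √(Atwd·k·a).  The Atwood number and acceleration are the post’s quoted values, but the wave number (from an assumed 10 peaks around the 2.2 mm target) is my guess:

```python
import math

def atwood(rho_heavy, rho_light):
    """Atwood number: density difference divided by density sum."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def rt_gamma(atwd, wave_number, accel):
    """Classical RT growth rate, Gamma = sqrt(Atwd * k * a), in 1/s."""
    return math.sqrt(atwd * wave_number * accel)

def bump_height(start_height, gamma, t):
    """BumpHeight(t) = StartHeight * exp(Gamma * t)."""
    return start_height * math.exp(gamma * t)

def doubling_time(gamma):
    """A bump doubles in size every ln(2)/Gamma seconds."""
    return math.log(2) / gamma

# Post's quoted values: Atwd ~ 0.6 (Case F) and accel = 7.2e12 m/s^2.
# The wave number is an assumption: ~10 peaks around a 2.2 mm diameter target.
spacing = math.pi * 2.2e-3 / 10     # m between successive peaks
k = 2 * math.pi / spacing           # 1/m
gamma = rt_gamma(0.6, k, 7.2e12)
print(f"Gamma ~ {gamma:.2e} /s, doubling time ~ {doubling_time(gamma) * 1e9:.1f} ns")
```

Swap in your own densities via `atwood()` and your own peak spacing; the square roots mean the answer is not very sensitive to either guess.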


Tbl C    RT Instability growth, Summary

Table C shows that, at both boundaries (Shell|Fuel and Fuel|Cavity), any and all imperfections grow by a factor of 5 or 6.  This means the Target Fabricators must keep all surface bumps below some maximum size, or the targets will fail to meet the goals for high efficiency: the shell will mix with and “poison” the fuel, or the fuel will not converge into a spherical core.

The same argument goes for the drive beams: they must all turn on at the same time so that implosion irregularity does not cause locally (much) worse RT effects.  Of course, beams that are hugely out of balance will destroy the collapse without any help from RT.  This was the topic of the last post, where we pointed out that ICF labs have been serving up donuts for decades.


Tbl D      RT limits to flaw sizes

Table D is our bottom line on Rayleigh-Taylor effects in ICF implosions. Recall that the target is 2.2 mm in inner diameter (about 1/8 of an inch).  The fuel layer is 70 microns thick (slightly thinner than writing paper).

The Shell | Fuel irregularity limit is on the ID of the tiny plastic sphere and is about 0.1% of the shell thickness.  The Fuel | Cavity limit represents about 0.1% of the fuel layer, too.

These requirements  cannot  be met.

What NIF Reports Mean

The NIF data are just the start of the unfolding of ICF capabilities, not the final results.  We will see much better ones before they have finished … but let’s evaluate the current status.

Seven requirements for fusion success

Target Criteria

    1. Fuel: Cryogenic solid layer, vacuum cavity – Best bang-time convergence (highest density, reduced Atwood number – next point)
    2. Shell: Lowest element mass, Z – Minimize the Atwood number and minimize Rayleigh-Taylor instability of a high mass pusher driving a much lighter fuel layer.
    3. Shell: Symmetric as possible – Avoid instabilities that shred the implosion, mix fuel and shell material. Use perfect spheres (conic eccentricity= 0), perfect surface finish (RMS = 0).

Drive Beam Criteria

    4. Wavelength – make as short as feasible. Minimize fuel preheat (which blocks compression), avoid excessive loss of beam power to the ablating plasma.
    5. Pulse Shape Control – control shocks on the ablating shell and initial preheat of the fuel, then a gentle ramp-up of the drive with a hard push at stagnation.  Follow an adiabat (minimum entropy trajectory) through the implosion phase space.
    6. Symmetry: Spatial – avoid hot spots that drive instabilities. Every beam with uniform intensity, all beams with the same intensity.
    7. Symmetry: Temporal – avoid time-dependent hot spots or regions that implode faster or slower than average.

Items 1–3: NIF does these well, as well as can be accomplished.
Item 4: put a check next to this, too, because of the indirect-drive x-ray strategy.  The conversion from IR to x-ray is lossy, but at least it is in place at high power.  A UV laser like a KrF system would be much more efficient.
Item 5: a work in progress, not done yet – the laser appears to have all the control capabilities desired.  But as we discuss, the optimal strategy is still under development.
Item 6 is not under control, as evidenced by the poor shape of the collapsed core. This will never be perfect because each of the 192 beams has spatial hot spots, though uncontrolled, unsmooth beams were expected to be smoothed by the hohlraum.  The inner layer of solid DT fuel will self-smooth by what its discoverers called Beta Heating.
Item 7 was never meant to be controlled (IMHO). The 192 separate beamlines turn on not quite together (guess: within slightly less than 1 ns).  I suspect beam space and time inequalities were meant to be smoothed by the x-ray conversion.

Non-uniformity may be why they use a helium prefill of the hohlraum.  Edwards made a comment to this effect in his report (last post).  However, there may be jetting from the gold, and this would mess up the expected smoothing.  If true, it would add to my arguments against excessive trust in simulations, even when done by the LLNL theory powerhouse.

Beam irregularities and RT have messed up many a target shot.


Fig 4   Square pulse, KMS 1985-86 data

Our simple estimates of the effects of RT make the IFE (fusion energy generation) task appear impossible.  And it is, if one were to use only square laser pulses, as modeled.

Square:  beam OFF at start, beam ON at full power during shot,  beam OFF at end.  [Fig 4, reference.]

The killer assumption is that the stupendous implosive acceleration is fully on at the start and constant during the inward drive.  Acceleration pushes RT; without it, there is no RT growth.

Use Fermi-degenerate adiabats The idea – dating back to the early 1970s – was to push with a force that kept the electrons in their lowest quantum states, the “Fermi degenerate condition.” This is equivalent to saying that we should not push so fast that we increase the entropy of the fuel core; we should move the system along a thermodynamic adiabat.

Fancy words for “Start with low push (acceleration) and gradually increase the pressure by increasing intensity, until the end.” I am not aware of any actual Fermi-degenerate implosions being done, but this is the prescription for low-RT shots.


Fig 5  t² ramped pulse, KMS 1985-86 data

Fig 5 is 30-year-old data showing that ramped pulses are nothing new. These are data from the Chroma laser at KMS Fusion.  The shots are 1/10 the length of NIF shots and the targets were about 1/10 the diameter, too.  Chroma was significantly smaller than NIF/10, however.

Ignition and Burn-wave   At full run-in, the core is at its most compacted form, is a plasma, and is way too cold for fusion.  Now smack it with the highest-power shock to initiate fusion at the center.  The products (helium nuclei, “alphas”) heat up the plasma just outside the core; these heat the next layer of fuel out from the center, and so forth.  The ignited fusion burn wave produces the energy.

Good news:   NIF has demonstrated the self-heating that means an ignition burn wave is possible!  Edwards’ Dec 2013 report (see post thread, part 3) indicates that they are working very hard at reduced-RT shots.

I am not at all certain I understand the actual shot profile.  Edwards’ Report to the NIF Management Advisory Council (NMAC) shows Fig 6 as the successful “High Foot” beam profile (pg 17 of the report).  “NIC” is the “Low-Foot” profile. The intent seems to be a shock hitting the capsule and causing an inward drift, with the laser beam held back until the final full-power drive shock at the end.


Fig 7    Park, Hurricane, et al PRL Feb 2014


Fig 6  Edwards High Foot profile Dec 2013


Fig 7 is from the data paper by Park, Hurricane, et al and shows a slightly different “High Foot” success profile.  Similar, but there is significant drive power in the plateau between the initial shock and the final full-power drive shock.  These data are in the form of a rough ramp, similar to Fig 5, except for …

    • the initial preheat shock that starts an inward drift, and
    • the intermediate power plateau, which does not go away but continues to (gently?) push the fuel layer inwards.

Fig 8  High Foot with time squared (dotted)

Fig 8 is the High Foot part of Fig 7, to display the shape of the supplied laser power.

The actual success pulses roughly follow the dotted T² curve drawn over the data.

First comes the initial preheat shock, then a flat power plateau which starts above the quadratic curve, then falls below it.

What would the results have been if they had followed the T² profile more closely?
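A rough way to frame that question: in the simplest model the accumulated amplification is exp(∫Γ dt), and Γ goes as the square root of the instantaneous acceleration, so a T² ramp reaching the same peak acceleration accumulates exactly half the e-folds of a square pulse.  A numerical sketch (the Atwood number and wave number are illustrative assumptions, not Table C’s inputs):

```python
import math

def e_folds(accel_of_t, atwd, k, T, steps=20000):
    """Accumulated RT e-folds: integral of Gamma(t) = sqrt(Atwd*k*a(t)) dt, 0..T."""
    dt = T / steps
    return sum(math.sqrt(atwd * k * accel_of_t((i + 0.5) * dt)) * dt
               for i in range(steps))

atwd, k, T = 0.6, 9.1e3, 15e-9   # assumed Atwood number, wave number (1/m), pulse length
a_peak = 7.2e12                  # m/s^2, the post's drive acceleration

square = e_folds(lambda t: a_peak, atwd, k, T)
ramped = e_folds(lambda t: a_peak * (t / T) ** 2, atwd, k, T)
print(f"square pulse: {square:.2f} e-folds, T^2 ramp: {ramped:.2f} e-folds")
```

Half the e-folds means the final bump amplification of the ramp is the square root of the square pulse’s – a big win when the exponent is several e-folds.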


The NIF researchers emphasize that they worked hard to reduce the universal RT effects, and the results show that they are on the right path.

    • First they shortened the shot duration – is it short enough yet?
    • Then they ramped the intensity  so that the RT did not rise to destructive levels.
    • Next, they plan to try a larger hohlraum to see if they can remove most drive anisotropies.

The NIF workers have done well.   I wish them success and hope/expect to see really good results before the end of the year.


Charles J. Armentrout, Ann Arbor
2014 Jun 11
Listed under    Technology    …    Technology > ICF/IFE
Have a comment? Click on the title of this post, go to bottom, let us know.
Related posts: Click the INDEX button under the Banner picture


NIF shifts toward success – 3

NIF released positive data.  Is some unexpected effect blocking full success?

The staff of the National Ignition Facility (NIF) in Livermore, California released papers on very positive new results in their work toward inertial confinement fusion (ICF).  NIF is on a success path; they need to continue doing what they just did, but more so.  This would actualize the potential for 5 MJ of output per target shot, as required for true success.

This is our 3rd post in this thread.  NIF results discussed in Parts 1 and 2:

    1. Gain: Target implosions produced about as much energy as was applied.
    2. Enhanced heating: The fusion reaction is starting to cause more fusion. First step towards an ignited plasma core.
    3. Not in control:  The stagnation core at the end of the implosion is an inefficient donut with hard peanuts on one side, when it should be a plum with a hard/hot central core.

We continue our discussion by looking at what might have been helped by their new operating changes. This is a “popular” discussion, not a truly technical one, but we must get through some of the technical points if we want to understand.

Click any image to expand to full size.

 Overview:  Beams on Target

Table A outlines specifications for a NIF laser, hohlraum and target.


Fig 1   Target shell cutaway. Compressed core is shown at the center (red)


Tbl A  Specifications for Target, hohlraum, Beams

Fig 1 is an image of the target, less than 1/8″ in diameter.

    • The ablating shell (called the pusher) has a complex construction of 5 plastic layers to provide density gradation between laser and the fuel layer.
    • The inner layer (gray) is the deuterium/tritium fuel mix, frozen onto the inner shell in a layer about 3 thousandths of an inch thick.
    • The compressed core is shown in the center, drawn in as the red circle.
    • The central void is mostly vacuum, except the thin DT gas that has evaporated from the inner surface.

The target must hold together and collapse smoothly in the extreme environment of the implosion.   What researchers must do is smoothly guide the target plasma (something like an arc welding “flame,” but thousands of times hotter) into a smooth core.


Tbl B   Extreme target environments

Table B  lists several of the severe conditions they must deal with.

    • To reach the bang-point while the laser is on, it must be pushed smoothly with a trillion times the acceleration of Earth’s gravitational field.
    • It will be in the plasma state but must be moving at 10 million cm/sec, 5 to 10 times faster than most plasma acceleration schemes in current use.
    • The shell blows off, but it must be evenly removed over the surface, not like a spacecraft’s re-entry.    The shell should be mostly used up by Bang-Time, but the ablation front must not burn through to the fuel layer.

We will look at NIF using a very simple model, the kind called “back of the envelope.”
About modeling with calculations:  models extrapolate what would happen if everything works as expected, but doing new things is hard because unexpected events happen.  So my guiding idea is to check feasibility with simple estimates first; if the simple estimates are good, then do the detailed computer modeling.

Our point – if simple estimates say …

    • it will not work – it probably won’t, but cross-check your findings!
    • it is “iffy” – maybe it is possible.  Try it, but don’t start by pushing the boundaries.
    • it definitely will work – go for it! Expect tension and high stress to make it go.

Computation is worthwhile, but over-the-top detailed models do not reveal the actual truth. Consider this slight misquote from General Semantics:  The map is not the territory, the calculation is not the thing.

The data and the simple criteria


Fig 2:  New HF and old LF Shot sequences

Shorter shot time What the NIF managers did is shown in Fig 2.  The shot length was reduced from 25 to 15 ns and the power was raised in the trough leading to the ramp-up of laser energy.

This new method is called the “high foot” (HF) mode of operation. (A nanosecond is 1 billionth (US) of a second.  Light moves about a foot ··· 30cm ··· in 1 ns.)

Their change pushes the same energy as before into the target, but  faster … 870 GW instead of the previous 520 GW.
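If we treat those figures as average powers – an assumption, since the real pulses are shaped – the energy bookkeeping works out to the same ~13 kJ either way:

```python
# Sanity check: energy (J) = power (W) x time (s).
# Shorter pulse at higher power delivers the same total energy.
low_foot  = 520e9 * 25e-9   # old 25 ns shot
high_foot = 870e9 * 15e-9   # new 15 ns "high foot" shot
print(f"low foot: {low_foot / 1e3:.2f} kJ, high foot: {high_foot / 1e3:.2f} kJ")
```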

Are the laser beams misaligned – or blocked?


Fig 3 1995 Nova data. Alleged RT spikes visible … or are they beam-target instabilities?

Fig 3 is an x-ray image of a target implosion from the large Nova laser, predecessor to NIF. The image exposure time is about 1/3 nanosecond.  We see strong speckle because there are not many photons available in so short a time. Wikipedia calls these images of the RT instability inside a target. (We discuss RT in part 4 of this blog thread.)


Fig 4   Nova hohlraum with the 12 laser beam hit points

These 12 spikes in the images do not look like RT inside a target.  Fig 4 shows the Nova hohlraum with its 12 hit points.  It looks to us as if the spikes in Fig 3 are plasma jets from the hit points.


Fig 5 Cold jet from beam on Au

Does beam-on-metal data help make things clear?  One of our last studies at KMS Fusion focused the intense laser beams on flat gold plates. Fig 5 was taken by our imaging holographic interferometer; 1 of 4 images, 1/3 ns exposure.

Result – we observed cold, dense plasma jets from the strike spot, originating between the hot spots in the laser beam: high-density (10²⁰ g/mL) objects moving from the target surface at plasma expansion speeds (> 10⁶ cm/s).  Our working model: higher-intensity beam regions pushed the plasma jets up and out from the lower-intensity, cold nearby regions, similar to squeezing a tube of toothpaste.


Tbl C Jet size, assuming sonic velocity


Fig 6 NIF hohlraum with beam hit points

Table C  shows estimates of the size of a beam-plasma jet at the end of a NIF pulse.  Our model is a jet expanding at about the plasma speed of sound, as in the KMS data.  Fig 6 shows the NIF situation: many beam hit points on the walls.  Jets could occur between beam strike points (if close enough) or within any one of the hit areas, between the hot spots.

If jets are a real phenomenon, they are most likely to form in the equatorial band where beams from the two ends overlap (see Part 2, Fig 2).

The target is separated from the hohlraum wall by 1.75 mm (Table A).  The coldest estimate for the plasma indicates that a jet will have crossed about 1/3 of the original distance.  Such jets would change laser beam and x-ray propagation in the chamber, change the distribution of intensities, and could affect the heating of the target.
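The crossing estimate is pure kinematics, distance = speed × time.  The 1.75 mm gap and 15 ns pulse come from the post; the two expansion speeds below are assumptions for illustration:

```python
# How far could a wall jet travel during the drive pulse?
gap = 0.175            # cm: 1.75 mm target-to-wall separation (Table A)
pulse = 15e-9          # s: high-foot drive length
for v in (1e6, 4e6):   # cm/s: two assumed jet expansion speeds
    frac = v * pulse / gap
    print(f"v = {v:.0e} cm/s -> jet crosses {frac:.0%} of the gap")
```

The crossing fraction scales linearly with the assumed speed, so getting the jet temperature right matters more than anything else in this estimate.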

Blockage by cold jets is a hypothetical process.  I have not read anyone discussing such a possibility – no one (else) has indicted the hohlraum for misbehavior – but the Nova data seem to show the event, and the NIF geometry invites it.

Is shell burn-through possible?


Fig 7  Imploding target with beam breakthrough. Blue is x-ray drive illumination.

What if uneven shell erosion is driven by hot spots in the driving illumination?  Is this a question too basic and simple to ask? The ablation process ought to move through the solid shell at the speed of sound (in the shell material).  But what if shell erosion is boosted by interactions between the shell wall and the beam intensity? Fig 7 shows the implosion issue.

The fuel perturbation is sketched as a bulge toward the center, but it might well be a depression in the ID.

NIF results look like square donuts, not the intended spherical billiard ball. Meaning: yes, there are irregularities in the illumination.  Look again at Fig 1.  If the driving light is not symmetrical and uniform, the smooth sphere will immediately be cratered like the surface of the moon.  Once formed, these craters do not vanish but grow proportionately deeper.


Tbl D Time for drive beam to punch through the target shell, into the fuel

Bright spots imprint dents into the pusher shell that do not disappear during collapse – one of the messages from a presentation by a member of Stephen Bodner’s NRL team (Nov 1988).

Table D shows estimates of the time needed for a deeper-than-normal crater to reach the inner fuel layer, assuming the pit propagates inwards at sound speed.

The estimates are for sound-speed expansion through a plasma, and they show no joy.  Hotter than 20 eV might be a bit too hot; 2 eV is the temperature of an arc welder – a bit too cold.

Pusher-shell burn-through is a regime where the estimates are ambiguous.  A cold pusher shell (the 2 eV case) indicates that the 25 ns original shots were probably too long, and the current 15 ns pulse lengths possibly so.  If the beam is hotter than shown in the table, the ambiguity goes away: the pusher shell would be burned away in even the best scenario, and the last nanoseconds would be spent ablating fuel.  Bad idea.
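A Table D style estimate can be sketched with the standard ion-sound-speed scaling.  The shell thickness and the CH charge-state and mass numbers here are my assumptions, not the post’s actual inputs, so treat the output as illustrative only:

```python
import math

def ion_sound_speed(Te_eV, Z, A):
    """Ion acoustic speed, c_s ~ 9.79e5 * sqrt(Z * Te / A) cm/s (Ti << Te)."""
    return 9.79e5 * math.sqrt(Z * Te_eV / A)

shell = 150e-4     # cm: an assumed ~150 micron shell thickness (not Table A's value)
Z, A = 3.5, 6.5    # rough average charge state and mass for a CH plasma (assumption)
for Te in (2, 20):
    t_burn = shell / ion_sound_speed(Te, Z, A)
    print(f"Te = {Te:2d} eV -> crater crosses the shell in {t_burn * 1e9:.0f} ns")
```

With these assumed numbers the 2 eV case is borderline at 15 ns and the 20 eV case burns through in a few ns – echoing the ambiguity described above.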


We have done “back of the envelope” estimates for two possible reasons why the NIF shots, good as they were, did not do better.

LLNL has some of the world leaders in detailed modeling; it is inconceivable that they would not have simulated every foreseeable issue in implosion physics.  They are masters at target design – just look at the actual complex target; it takes real skill to design and assemble such a multilayer shell.  Could they have missed any normal occurrence?  Not likely.

The performance limiter must be something arising from an unexpected direction.  Our two candidates are just guesses, made without sufficient access to data (or computation power) to put more depth into them.

Things have gone horribly wrong for NIF these past 5 years, but they are working their way out of the depressing pit and toward success. The next post in this thread will discuss the important RT instability and give an overall summary of the results.


Charles J. Armentrout, Ann Arbor
2014 Jun 6         Update Jun 7 to correct several typographical errors
Listed under   Technology    …   Technology > ICF/IFE


NIF shifts toward success – 2

NIF’s 2013 data showed fusion can be feasible but geometries need adjustment.

We continue our discussion by looking at target geometry.  Yes, NIF seems on a success path, but they do need to clean up various physical details. That would bring them to the point where it would be reasonable to start considering realistic IFE power plant designs.

NIF results discussed in Part 1:

    1. Gain: Target implosions produced about as much energy as was applied. Yes!
    2. Enhanced heating: fusion-produced alphas provided significant heating to the plasma, and enhanced the yield of fusion energy. Yes!

Point 2 indicates that a path to ignition surely exists in ICF studies.  Ignition is the name for when the fusion reaction products (the alphas) become a significant energy source driving new fusion events. We need an ignited plasma to use the fuel; fusion power plants can never be economical without ignition.

So… what happened to make things work?


Fig 1: John Edwards, ICF Program Leader, LLNL

We rely on the two presentations discussed in the previous post and on the December presentation by  John Edwards, ICF Program Leader at LLNL.

    • Future Modifications: there is a need to adjust the strike points where the laser beams hit the hohlraum cavity wall.  The final core after implosion is far from a clean sphere.
    • Most Important: the laser pulse characteristics were changed so that the beams ceased being effective generators of hydro instabilities (fluid-style fluctuations that can shred the collapsing core); but the data may yet indicate their implosions are troubled by hydro effects.

Yes, test shots function better than before, but results show that they still have issues to address before they close in on success.
Click any illustration for the full sized image.

Modifications Needed:  Uniformity adjustments


Fig 2:  Geometry of a NIF test shot.  NIF distributed image, modified

In Part-1 we showed a typical target, the hohlraum radiation chamber it sits in, and a schematic of the 96 laser beams entering the chamber from both sides and striking the walls. Fig 2 is another image.

The beams enter at 4 different angles.  The equatorial plane is the horizontal line through the CH (plastic) target; the polar axis is a vertical line through the center. Interior dimensions are about 10 mm (~3/8 inch) vertical, 5¾ mm (~¼ inch) horizontal.

Adjustment of the x-ray flux from the laser/wall strike points requires exquisite care to generate uniform burn-off (ablation) of the target shell with a uniform sphere as the final stagnation core. A symmetric core with a single hot spot is the right recipe for efficient fusion.

Fig 2 shows another innovation: back-filling the hohlraum with helium gas. No specific reason is given; perhaps it is to help smooth beam unevenness?


Fig 3:  X-ray emission from collapsed core. Shot N130812. Blue- low emission, dark Red-high

Here are results from their best shots.  Fast x-ray cameras took images of core emissions as indications of fusion activity. The data images are gray shades, with dark areas being high-emission regions.

Fig 3 shows colorized x-ray images from one of the good shots; the papers have others.  The shape from the side (equatorial plane) is quite different from what is seen through the hohlraum ends (polar view).  Interesting: the highest emission is from two distinct regions on a single side, and the nominally spherical central core is a “square” donut.  The out-of-round shape must be an issue in locating the beam hit-points.


Fig 4A:  3D image of x-ray emission. Numerical reconstruction of data from Fig 3


Fig 4B:  Chocolate donut, for comparison with collapsed core.

  Fig 4A shows a 3D reconstructed image of the core, which accurately reproduces the shape.   Fig 4B is a fresh donut, to demonstrate the toroidal shape. The core doesn’t have to be symmetrical to be yummy.

Not a sphere, and more than one compact hot spot.  Clearly the laser beam hit points in the hohlraum must be re-thought.  We will discuss this again in Part 3.


Fig 5: 1980s KMS implosion test

From the Archives:  The need for symmetrical illumination has been understood for many decades.

Fig 5 shows a test with the Chroma laser at KMS Fusion, Inc., in the late 1980s.  We were actually testing a facility-developed x-ray framing camera, using a left-over target shell and 0.53 µm laser light.  The beam was split into 2 arms and focused on the shell.  Our “Equator” image looks a lot like NIF shots, but without a “Polar” view we can only speculate that it probably had a toroidal shape.

This old recollection is to point out that ICF labs have been serving up donuts for many years.  Most shots had fairly low compression, with energy distributed among multiple dense kernels.  KMS’ best results date from 1986 and used its radiation chamber for precise time and space symmetry adjustments. It achieved  • “at least” 80× target compression and  • a single high-density compressed core.

Most Important need:  Pulse shape adjustment

The key element for the current success is that the NIF team shortened the laser pulse length (Fig 6).


Fig 6: NIF changed to shorter shot time spans.    1 ns = 1 billionth of a second, 0.000 000 001 s

Shorter shot time NIF managers trimmed the shot length from 25 to 15 ns and raised the power in the trough at the center. This is called the “high foot” (HF) mode of operation.

Their basic change was to push the same energy into the target as before, but faster … 870 GW instead of the previous 520 GW.

There had apparently been worries that the HF pulse shape might cause too much preheating of the fuel, generating back pressure that would inhibit formation of a tight stagnation core at the end of the implosion.  The results show otherwise.   The main positive seems to be that they significantly reduced the previous effects of severe RT (Rayleigh-Taylor) instabilities and allowed the fuel to converge into an almost-sphere with a not-quite-single hot spot.

We will continue the discussion of the really positive 2013 NIF results in part 3, with discussion of what might yet be causing trouble.

Charles J. Armentrout, Ann Arbor
2014 Jun 1
Listed under   Technology    …    Technology > ICF/IFE


NIF shifts toward success – 1

Tweaks at NIF generate good results – to come: spectacular shots 

NIF (National Ignition Facility) at Lawrence Livermore National Laboratory in California published results during the dark of winter that are exciting and show astounding differences from all their previous announcements.  This is Part 1 of 2 on the what and why of these data.   Their promising results came out in February, in different journals with different but complementary content: a note in Physical Review Letters (get a free PDF copy!), a Letter in the journal Nature (behind a paywall), and a news summary by Daniel Clery in the journal Science (perhaps easier to access).

During my recent blogging sabbatical, there have been many events worthy of comment.  Many, like this, have been very positive and leave me with an upbeat feeling.  The NIF results are a good place to start our discussions.


Fig 1:  Dr. Omar Hurricane, NIF Team Leader for laser fusion studies

In the past, LastTechAge has been pretty critical about how LLNL upper management has guided NIF operations.  See our Index for other posts in the Technology-Fusion ICF thread.

It is about time to recognize the hard-earned achievement of NIF Team Leader Dr. Omar Hurricane and his team.  He and 15 others are listed as authors on the encouraging – and believable – PRL paper.



Fig 2:  LEFT – Target: 2.3 mm (< 1/8 inch) diameter, plastic shell with frozen mixture of deuterium and tritium (DT) that is the fusion fuel.   CENTER – Hohlraum chamber, 5.4 mm cylinder diameter (< 1/4 inch). Target is at center of this chamber.   RIGHT – 192 separate laser beams enter chamber ends and strike the walls, not the target:   • walls radiate x-rays  • x-rays strike target   • target compresses 100× smaller   • core Temp > 100 M degrees, and energy is released by fusion.  Image source: LLNL

Fig 2 is a set of images from an earlier post. NIF used plastic target shells lined with a thin layer of cryogenic DT ice (D deuterium, T tritium). The target is placed in the hohlraum chamber and all 192 beams are focused on the end holes.

Fig 3:  D & T fuse to make He +neutron + energy


A hohlraum is the key part of indirect drive: the laser’s purpose is to generate a high x-ray flux that ablates the shell and causes the target (a.k.a. pellet or capsule) to implode.

The Fusion process (Fig 3) is the merging of the D with T to form a helium nucleus.

In the end, we get a stable high energy helium nucleus (A.K.A. alpha particle), and a fast moving neutron.

Click any graph to enlarge it.

The Good Data

Fig 4 shows the data that are causing the celebrations. These data were used in the February reports.   The graph is similar to those in the PRL and Nature publications, but is from the 2013 Dec presentation to the NIF Management Advisory Council by John Edwards, Assoc. Dir. for ICF.


Fig 4:  Selections from start-up (2011) through late 2013.

First:  these new data have the highest fusion energy yield they have ever achieved; several shots generated more energy from fusion than actually reached the pellet.  They do report that the collapsed core was a bit lower in density than expected.

Second:  the important part of the good news is that the high-energy alpha particles from each fusion event significantly heated the remaining collapsed core plasma.  This is called self-heating and happens when the core density is high enough to slow down the alphas and absorb some of their energy.  They are on the way to significant self-heating – what is frequently called ignition – where fusion reactions generate the power to cause subsequent reactions.   Ignition means the collapsed core “can do it alone;” it needs no outside driver power to make fusion power.     Is this the first time any ICF shot has demonstrated self-heating?

NIF is definitely on the right track, now.

These data are not fully optimized, and many things might be adjusted or modified for even better results.  Nothing negative here.  It will be exciting to watch the NIF team make the adjustments to the target, hohlraum and beamlines to reach their promised Q=1 (generated output energy equals what was put in).  They are at or near energy output/input breakeven (19 kJ), but the goal is and always has been 5 MJ – the amount needed to get back all the energy that was initially put in by the NIF facility.

They have finally found their first turn in their long trek through the murky and complicated parameter space, the most exciting game of old-fashioned Physics Adventure/Zork imaginable.   I wish them bon voyage.  Stay tuned here for Part 2.

Charles J. Armentrout, Ann Arbor
2014 May 25
Listed under   Technology    …   Technology > ICF/IFE
