Reviews of Numerical Recipes

``Numerical Recipes,'' with its versions for C and Pascal in addition to the original FORTRAN one, is one of the most frequently used compendiums of numerical programming methods.  The flaws of the first edition were publicized by a long list of reviews that circulated on USENET and some FTP servers.  By 1995, the second edition of Numerical Recipes in C (Press et al., ISBN 0-521-43108-5, Cambridge University Press) had almost completely replaced the first edition, making it hard to judge whether the praise it has accumulated stems from different criteria of judgement or from improvements to the text.

Numerical Recipes on the World-Wide Web

Users of the Numerical Recipes books might be interested in the Numerical Recipes WWW site. The subtree features an FAQ, ordering information, links to other information on numerical programming, and online copies (in PDF and PostScript) of the C and Fortran versions.

The criticism of the first edition is mentioned in the FAQ on that site as an ``urban legend,'' ranked alongside the poodle in the microwave oven and the Mrs. Fields cookie recipe.


Bad reviews of Numerical Recipes

First edition

[Editor's note: The reviews in this whole section are the feedback I (and other participants) received from a poster to USENET who asked for people's experiences with the `Numerical Recipes' books.  Unfortunately, not only have all the names of the submitters been removed (which I agree with), but also that of the initial poster, making it impossible to give credit or to date the survey or the books surveyed.  If you can contribute details, please contact me.  Thanks.]

From an internal memo:

"There is a series of books and associated software with the name 'Numerical Recipes' in the titles that provide descriptions of numerical algorithms and associated programs in popular programming languages...

"The good news is that this series gives exceptionally broad coverage of computational topics that arise in scientific and engineering computing at a very reasonable price...  The bad news is that the quality and reliability of the mathematical exposition and the codes it contains are spotty.  It is not safe, we have found, to take discussions in the book as authoritative or to use the codes with confidence in the validity of the results.

"The authors are identified on the book jacket as 'leading scientists' and [we] have no reason to think that they are not.  However, there is no claim that they have special competence in numerical analysis or mathematical software.  At least in the parts of the book that [we] have studied closely, they do not demonstrate any such competence.

"Published reviews of the book[s] have fallen into two classes: Testimonials and reviews by scientists [including Kenneth Wilson, Nobel Laureate] and engineers tend to extol the broad scope and convenience of the products, without seriously evaluating the quality, while reviews by numerical analysts are very critical of the quality of the discussions and the codes...

"Two reviews by numerical analysts are: (1) Lawrence F. Shampine, "Review of 'Numerical Recipes, The Art of Scientific Computing'," The American Mathematical Monthly 94, 9 (Nov 87) 889-892. (2) Richard J. Hanson "Cooking with 'Numerical Recipes' on a PC," SIAM (Society for Industrial and Applied Mathematics) News 28, 3 (May 90) 18.

"Professor Shampine is a specialist in the numerical solution of ordinary differential equations (ODEs).  He gives specific criticisms of chapter 15, which deals with ODEs, and says, in summary:

'This chapter describes numerical methods for ODE's from the viewpoint of 1970.  If the authors had consulted an expert in the subject or read one of the good survey articles available, I think they would have assessed the methods differently and presented more modern versions of the methods.'

"He also remarks that adaptive methods for numerical quadrature problems are not treated in NR although they are much in favor by numerical analysts.

"Dr. Hanson is a former editor of the algorithms department of the Association for Computing Machinery Transactions on Mathematical Software (ACM TOMS).  He ran tests of the nonlinear least-squares codes from NR and made comparisons with published results of better known codes LMDIR from MINPACK and NL2SOL...  He found the NR codes sometimes required up to 20 times as many iterations as the comparison codes.  He noted that the control of the Levenberg-Marquardt damping parameter \lambda was not sufficiently sophisticated, permitting overflow or underflow of \lambda to occur...  the algorithm in NR is a very bare-bones implementation of the ideas presented in the referenced 1963 paper by Marquardt.  Many significant enhancements of that idea have been given in the intervening 27 years.  [We] would expect the codes LMDIR, NL2SOL, and their successors to be much more EFFICIENT AND RELIABLE [my emphasis].

"[Our] present attention to the NR products was initiated by calls for consultation ...  Two involved the topics mentioned above...  Other calls led us to scrutinize Sections 6.6, 'Spherical Harmonics' and 14.6, 'Robust Estimation'...

"...The discussion, algorithms, and code given in section 6.6 is internally consistent and the choices of the recursions to use in computing the associated Legendre functions are ones recommended by specialists in the topic as being stable.  No warning is given, however, regarding the fact that there are a number of alternative conventions in use regarding signs and normalization factors...  [If one naively combined results from NR codes with results from other sources] one would probably obtain incorrect results.

"In reading the section on robust estimation, [we were] skeptical of Figure 14.6.1(b) that shows a 'robust straight-line fit' looking substantially better than a 'least-squares fit'...

"To check [our] doubts about this figure, [we] enlarged it and traced the points and the 'fitted' lines onto graph paper to obtain data with which [to] experiment...

"We computed a least squares fit...  The particular 'robust' method illustrated by figure 14.6.1(b) is not identified.  However, since the only method for which NR attempts to give code in this area is L1 fitting, [we] computed an L1 fit to the data as an example of a 'robust' fit...  [We] used a subroutine CL1, that was published in the algorithms department of ACM TOMS in 1980, to obtain an L1 fit in which [we] could have confidence.  [We] also applied the NR code MEDFIT to the data and obtained a fit that agreed with the CL1 fit to about three decimal places.

..."As expected, the least-squares fit is not as far from the visual trend as in figure 14.6.1(b) and the L1 fit is not as close...  It appears that the lines labelled 'fits' in the NR figure 14.6.1(b) are not the result of any computed fitting at all, but are just suggestive lines drawn by the authors to buttress their enthusiasm for 'robust' fitting.  An uncritical reader would probably incorrectly assume that figure 14.6.1(b) illustrates the performance of actual algorithms.

"The objective function in an L1 fitting problem is not differentiable at parameter values that cause the fitted line to interpolate one or more data points.  The authors indicate some awareness of this fact but not of all its consequences for a solution algorithm.  Typically, the solution to this problem will interpolate two or more data points, and in the authors' algorithm, it would be common for trial fits in the course of execution of the algorithm to interpolate at least one data point.  ... suffice it to note that it is easy to produce data sets for which the MEDFIT/ROFUNC code will fail.

"One data set which causes looping is [x = 1, 2, 3; y = 1, 1, 1]. Another which causes looping in a different part of the code is [x = 2, 3, 4; y = 1, 3, 2].  A data set on which the code terminates, but with a significantly wrong result is [x = 3, 4, 5, 6, 7; y = 1, 3, 2, 4, 3].  Because of the faulty theoretical foundation, there is no reason to believe any particular result obtained by this code is correct, although by chance it will sometimes get a correct result...

"Conclusions

"The authors of 'Numerical Recipes' were not specialists in numerical analysis or mathematical software prior to publication of this book and its software, and this deficiency shows WHENEVER WE TAKE A CLOSE LOOK AT A TOPIC in the book [my emphasis].  The authors have attempted to cover a very extensive range of topics.  They have basically found 'some' way to approach each topic rather than finding one of the best contemporary ways.  In some cases they were apparently not aware of standard theory and algorithms, and consequently devised approaches of their own.  The MEDFIT code of section 14.6 is a particularly unfortunate example of this latter situation.

"One should independently check the validity of any information or codes obtained from 'Numerical Recipes'..."

I have independently checked the codes for Bessel functions and modified Bessel functions of the first kind and orders zero and one (J0, J1, I0, I1).  The approximations given in NR are those to be found in the National Bureau of Standards "Handbook of Mathematical Functions, Applied Mathematics Series 55," which were published by Cecil Hastings in 1959.  Although the approximations are correct to their stated accuracy, they are not very precise: don't trust them beyond 6 digits.  Coding them in "double precision" won't help.  Much work has been done in the approximation of special functions in the last 32 years.
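
[Editor's note: this is easy to check (my sketch; the coefficients are the small-argument J0 polynomial from Abramowitz and Stegun, eq. 9.4.1, with quoted absolute error below 5e-8).  Evaluating it in double precision and comparing with the C library's j0() shows the roughly 7-digit ceiling directly:]

    /* Compare the Hastings-style small-argument polynomial for J0
       (Abramowitz & Stegun 9.4.1, valid for |x| <= 3) against the
       C library's j0().  j0() is POSIX; link with -lm.  The ~1e-8
       error is built into the coefficients, so evaluating in double
       precision cannot improve it. */
    #include <math.h>
    #include <stdio.h>

    static double j0_poly(double x)
    {
        double t = (x / 3.0) * (x / 3.0);
        return 1.0 + t * (-2.2499997 + t * (1.2656208 + t * (-0.3163866
                   + t * (0.0444479 + t * (-0.0039444 + t * 0.0002100)))));
    }

    int main(void)
    {
        for (double x = 0.0; x <= 3.0; x += 0.5)
            printf("x=%.1f  poly=%.10f  libm=%.10f  err=%.1e\n",
                   x, j0_poly(x), j0(x), fabs(j0_poly(x) - j0(x)));
        return 0;
    }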

We haven't investigated the quality of every one of the NR algorithms and codes, nor the exposition in every chapter of NR (we have more productive things to do).  But sampling randomly (based on calls for consultation) in four areas, and finding ALL FOUR faulty, we have very little confidence in the rest.

I hope this helps some of you.


Received in the mail, in response to USENET postings:

"You can add the section on PDE's to the list of "bad".  The discussion of relaxation solvers for elliptic PDE's starts off OK (in about 1950, but that is OK for a naive user if he is not in a hurry) but then fails to mention little details like boundary conditions! Their code has the implicit assumption that all elliptic problems have homogeneous Dirichlet boundary conditions!

"Then they have their little coding quirks, like accessing their arrays the wrong way and putting unnecessary IF and MOD statements inside of inner loops....

"On the other hand, I did learn something from their discussion of the Conjugate Gradient technique for solving systems of linear equations. I did not like their implementation, but the discussion was OK."

And:

"Your posts in sci.physics about "Numerical Recipes" match my experience. I've found that "Numerical Recipes" provide just enough information for a person to get himself into trouble, because after reading NR, one thinks that one understands what's going on.  The one saving grace of NR is that it usually provides references; after one has been burned enough times, one learns to go straight to the references :-).

"Example: Section 9.5 claims that Laguerre's method, used for finding zeros of a polynomial, converges from any starting point. According to Ralston and Rabinowitz, however, this is only true if all the roots of the polynomial are real.  For example, Laguerre's method runs into difficulty for the polynomial f(x) = x^n + 1 if the initial guess is 0, because f'(0) = f''(0) = 0."

And:

"As an aside, I have just received a preprint from Press describing what looks like chapter 18 for NR -- about the discrete wavelet transform.  Now, I can tell you that this stuff is wrong, as the results which are in his figures are not reconstructable using his routines.  Don't know why yet, but it just doesn't work.  If anyone out there is using these routines -- toss them.  If you have fixed these routines or have other discrete wavelet transform routines, I want to know about it.  Thanks."

And:

" ...  And so let me offer my personal caveat: SVDCMP does not always work.  I found one example where the result is just wrong (fortunately it is easy to check, but one doesn't usually do so).  I translated the NR fortran to c, and also tried the NR c code.  Both wrong in the same way.  I tried IMSL and Linpack in fortran, and tried translating Linpack to c; all three produced correct answers..."

And:

"The NR-recommended random number generators RAN1 and RAN2 should not be used for any serious application.  If you use the top bit of RAN1 to create a discrete random walk (plus or minus 1 with equal probability) of length 10,000, the variance will be around 1500, far below the desired value of 10,000.

"Both are low-modulus generators with a shuffling buffer, in one case with the bottom bits twiddled with another low-modulus generator.  The moduli are just too low for serious work, and the resulting generators even out too well."

And:

"...  It seems that everyone I talk to has a different part of the book that they don't like (The part I hate most is the section on simulated annealing and the travelling salesman's problem- there are far better approaches to the problem.)"

And:

"Yes, I was another numerical babe in the woods, told the NR was the ultimate word (obviously by professors and colleagues who had never used it!) and so I spent months trying to figure out why their QL decomposition routine didn't work.  I thought it was me......"

And:

"I recently encountered a bug in Numerical Recipes in C (dunno what edition).  Look at the code for ksprob() When the routine fails to converge after so many iterations, they return 0.0 But a quick glance at the formula reveals that the sum will not converge for *small* lambda (and they must be small indeed), hence the correct value to return is 1.0, not 0.0.

"Additionally, it would be nice to caution users that this formula is only an asymptotic approximation to the true function (which nobody, apparently, has figured out yet), and that the method is horribly unstable for small lambda."


I don't know if the senders want to be publicly identified.  If you're interested in contacting them, send me e-mail, and I'll ask them to contact you.

Our experience, and that of many others, is that it is best to get numerical software from reliable sources.  The easiest and cheapest is NETLIB, which includes the collected algorithms from ACM Transactions on Mathematical Software (which have all been refereed), and a great many other algorithms that have withstood the scrutiny of the peers of the authors, but in ways different from the formal journal refereeing process.

If you don't know how to use NETLIB, the easiest way to get started is to send mail to netlib@ornl.gov, subject irrelevant, with the text consisting of the lines "send help" and "send index".  If you're an X-windows user, you may want to consider having NETLIB send you 'xnetlib', an X-windows interface to the NETLIB collection.

Best regards.


nr-info.medfit:

The problem in MEDFIT is more in the theory than in the code.  It's hard to point to one place in the code and say Aha!  The author of MEDFIT assumed that the minimum of the residual will occur at a place where the derivative is zero.  But for L1 approximation, that's not true: the objective function is only C0 continuous, so there are places where its derivative doesn't exist, namely wherever the fitted line passes through a data point.  The criterion to use is spelled out in section 4-4 of "The Approximation of Functions -- Vol 1: Linear Theory" by John R. Rice, Addison-Wesley, 1964 (pp. 102ff): a minimum occurs where an arbitrary perturbation increases the residual.  For a continuously differentiable objective, this is the same as saying the derivative is zero, but that is not the case for L1 fitting.  So the first defect in MEDFIT is in assuming that the minimum occurs at a zero of the derivative.

The second defect arises in the transformation of the flawed theory into an algorithm.  To understand this problem, consider first the problem of one-parameter L1 fitting.  In this case, what one wants is the median of the set.  But if the size of the set is even, anything between the two middle values is an equally good solution.  The algorithm in NR, however, considers only solutions that interpolate one or two data points.  (In general, L1 fitting to N parameters will interpolate N data points.)  The result is that MEDFIT is always computing a derivative at a place where it doesn't exist.  It turns out that what MEDFIT really needs is just a sign, and there's a 50-50 chance of getting that right, even if the derivative doesn't exist.  But when there's just a little bit of data, and MEDFIT gets close to the solution, it will get alternating positive and negative derivatives, and just rattle back and forth between two points, one probably being a solution (but at which MEDFIT can't decide to quit).
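
[Editor's note: the one-parameter case can be made precise in two lines.  To minimize

    \phi(c) = \sum_{i=1}^{n} |y_i - c|,

note that wherever c is not one of the y_i,

    \phi'(c) = \#\{i : y_i < c\} - \#\{i : y_i > c\},

so \phi decreases while fewer than half the points lie below c and increases once more than half do: the minimizers are exactly the medians, and for even n every c between the two middle values is optimal.  No zero of an everywhere-existing derivative identifies these points, which is just the flaw described above.]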

The third kind of defect arises in the transformation of the flawed algorithm into code.  Others have already pointed out these defects, which can be spotted even if one has no idea what the code is intended to do.

Incidentally, the Fortran and C versions could frequently behave differently, because the tests used in the C version in place of the Fortran SIGN function do not do what the Fortran-77 standard says the SIGN function does.

Hope this helps.


Third edition

Pavel Holoborodko, who develops numerical software for a living, has pitted his software against a number of other implementations, among them one taken from the third edition of Numerical Recipes. He writes:
I do a lot of tests on the accuracy of existing numerical libraries -- GNU GSL, SLATEC, Specfun, etc.  To my surprise, the tests reveal that Numerical Recipes algorithms deliver the lowest accuracy in many cases!

Here are reviews of the modified Bessel functions I0(x), I1(x), K0(x), and K1(x).
NR exhibits low accuracy on all of them; moreover, the error scatter plot shows systematic behavior -- an indication of improperly rounded coefficients or something similar.

Another notable example: the algorithms for the Bessel functions Y0, Y1, J0, and J1 suffer from catastrophic cancellation near the zeros of the functions, to the point where the result has NO correct digits at all.
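
[Editor's note: a one-line account of why relative accuracy collapses there: near a simple zero x_0 of, say, Y_0, the function behaves like Y_0(x) \approx Y_0'(x_0)(x - x_0), so a fixed absolute error \epsilon in the computed value becomes a relative error of roughly \epsilon / |Y_0'(x_0)(x - x_0)|, which grows without bound as x approaches x_0 unless the algorithm expands about the zero explicitly.]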
The links point to posts on Pavel's blog that show error scatterplots for the various numerical software implementations he has tested.

Favorable reviews of Numerical Recipes in C

[From: Daniel Asimov]

I would like to offer a contrasting opinion.  I bought the original Numerical Recipes edition and found it to be excellent.  It covers a wide range of topics and includes lucid explanations of why the algorithms are the way they are, as well as printed computer code for each algorithm.

I, too, heard that the first edition contained some errors.  But, after a lot of use, I encountered perhaps one error total.  For a book that comprehensive, it is almost unimaginable for it to be error-free.

Now I have the second edition (C version), and it seems to me to be superb.  All known errors in the first edition have been corrected, a number of topics have been revised or expanded, and a number of new topics (like wavelets) have been added.  They will also sell a disk containing all the code (saving you the trouble of typing it in by hand from the book) for about $40.

[From: mbk@jt3ws1.etd.ornl.gov (Matt Kennel)]

It's by far the most useful single book for my research that I've ever owned.

Its explanations are far better at getting scientists (like myself) to understand the ideas than anything else I've read.

Most other literature consists of oversimplified undergraduate textbooks (not sufficient for the broad range of problems encountered in research science) or advanced research works that are inappropriate for this purpose, being focused on a single topic and often directed at people who are creating and proving numerical mathematics instead of just using it.

Numerical Recipes is also good at showing the naive reader just what sorts of algorithms exist "out there" that many people were previously unaware of.

Its programs are explicitly not designed to be bullet-proof, high-performance black boxes, though I have not personally encountered a problem with them.

Still, I think that most people would be wiser LAPACK users after reading NR.


Contributions to this collection are welcome.  Send email to jutta@pobox.com.