# Introduction to Modern Scientific Programming and Numerical Methods

We all know that the ability to use computers to solve mathematical problems is a fundamental skill for anyone planning a career in science or engineering. For this reason, numerical analysis is part of the core curriculum in just about every undergraduate engineering department. However, my observation has been that under most of these curricula, students (at least outside the computer science department) are introduced to numerical methods and programming only under the MATLAB® environment. While MATLAB® is a great tool for algorithm prototyping, being an interpreted language, it is simply too slow for large-scale computation. This is especially true for problems that cannot be easily cast into matrix form, such as those encountered in stochastic particle simulations. There are few opportunities for students to gain exposure to other commonly used languages, such as C/C++ (and its derivatives Java, Javascript, and CUDA), Python, or even FORTRAN, which is still used by many legacy codes. By focusing solely on MATLAB®, students also fail to learn about basic programming concepts such as variable types and dynamic memory allocation. These courses also do not cover hardware technologies, including computer clusters, graphics cards, microcontrollers, or field programmable gate arrays. For most non-CS-major engineering students or professionals doing computational research, practical programming is a self-taught process.

It was essentially these skills that I tried to expose students to in my ASTE-499 Applied Scientific Computing course at USC. That course was taught mainly off slides, but in the future, students taking this course – and similar courses at different universities – will have a textbook at their disposal. We recently got a contract from CRC Press (the publisher of my Plasma Simulations by Example text) for *Introduction to Modern Scientific Programming and Numerical Methods*, due in July 2021! This book is co-authored with **Prof. Joseph Wang** at USC, and **Dr. Robert Martin** at the Air Force Research Laboratory (AFRL). I put this page together to solicit feedback. Below you will find the proposed table of contents. Some material will likely end up getting re-ordered but it summarizes the topics we plan to include. Please leave a comment below (or just email me) with any suggestions. Also let us know if you would like to review draft copies.

## Table of Contents

### 1. Introduction to Scientific Computing

This chapter introduces basic concepts from scientific computing by demonstrating how to simulate a ball dropped from some height and falling under the action of gravity (and drag) and bouncing on surface impact (example differential equation).

**1.1 Problem Description**

We start by describing the problem. Here we introduce limits, Taylor series, differentiation, and numerical round-off errors.
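As a concrete taste of these ideas, here is a minimal Python sketch (illustrative only, not taken from the book) showing a forward-difference derivative and how round-off eventually limits its accuracy:

```python
import math

def forward_diff(f, x, h):
    """First-order forward difference: (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# Error of d/dx sin(x) at x = 1 for decreasing step sizes:
exact = math.cos(1.0)
errors = {h: abs(forward_diff(math.sin, 1.0, h) - exact)
          for h in (1e-1, 1e-4, 1e-12)}
# Truncation error shrinks with h, but at h = 1e-12 round-off in
# the subtraction typically makes the estimate worse again.
```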

**1.2 Pseudo code**

Next, a pseudo code is developed. Basic programming concepts, such as variables, data types, for loops, if statements, functions, and input/output are introduced. We also discuss binary representation and finite-precision computer arithmetic.
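A classic one-liner illustrates the finite-precision arithmetic mentioned here (a hypothetical example, not from the book):

```python
# Decimal 0.1 has no exact binary representation, so familiar
# decimal identities fail in finite-precision (IEEE-754) arithmetic.
a = 0.1 + 0.2
exact_equal = (a == 0.3)             # False for doubles
close_enough = abs(a - 0.3) < 1e-12  # compare with a tolerance instead
```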

**1.3 Programming Languages**

Various programming languages, including Assembly, Basic, Pascal, Fortran, Matlab, Python, C/C++, Java, Javascript, and R, are discussed. We demonstrate how to implement a basic version of the simulation code in each language and discuss the pros and cons. We learn about interpreted vs. compiled languages.

**1.4 Integration Methods**

Numerical results are compared to the theoretical model. We also discuss visualization options, including the use of Python/Matlab plotting as well as Tecplot and Paraview. We discuss convergence, consistency, and alternate integration methods, including forward and backward Euler, leapfrog, Crank-Nicolson, multistep, and Runge-Kutta schemes. These examples are developed primarily in Matlab and Python.
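A minimal forward-Euler sketch of the falling ball (drag and bouncing omitted; the numbers are arbitrary illustrative choices):

```python
# Forward-Euler integration of a ball dropped from rest:
# dv/dt = -g, dy/dt = v (no drag).
g = 9.81          # gravitational acceleration, m/s^2
dt = 1e-3         # time step, s
y, v = 10.0, 0.0  # initial height (m) and velocity (m/s)

t = 0.0
while y > 0.0:    # integrate until the ball reaches the ground
    v -= g * dt   # update velocity from acceleration
    y += v * dt   # update position from the (new) velocity
    t += dt

# analytic fall time from rest is sqrt(2*h/g), roughly 1.43 s here
```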

**1.5 Additional Physics**

Additional physics is included to increase complexity. We mainly focus on adding aerodynamic drag and surface reflection. Root finding is introduced.

**1.6 Modularity and Data Reduction**

The simulation is expanded to include multiple bouncing balls. This section allows us to introduce arrays, functions, and function arguments. We also discuss pass-by-reference and pass-by-value. Additional visualization options such as scatter plots and animations are discussed.

### 2. Numerical Analysis and Linear Algebra

In this chapter, we cover many elementary topics from Numerical Analysis, including equation solving and linear algebra.

**2.1 Root Finding**

We start by discussing root finding (solving equations of one variable). We cover the bisection method, fixed-point iteration, and Newton's method. Error analysis and rates of convergence are also discussed.
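A short Python sketch of the bisection method (illustrative only):

```python
def bisect(f, a, b, tol=1e-10):
    """Halve the bracketing interval [a, b] until shorter than tol.
    Assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:     # root lies in [a, m]
            b = m
        else:                  # root lies in [m, b]
            a, fa = m, f(m)
    return 0.5 * (a + b)

root = bisect(lambda x: x * x - 2.0, 1.0, 2.0)   # approximates sqrt(2)
```

Each iteration halves the bracket, so the method gains one binary digit of accuracy per step.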

**2.2 Differentiation and Integration**

We cover Taylor series in more detail and introduce “big O” notation. We then cover numerical integration (quadrature), including the trapezoidal and Simpson’s rules, and Gaussian quadrature. Double (and triple) integrals are also covered.
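The composite trapezoidal rule can be sketched in a few lines (illustrative example, not from the book):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))        # endpoints get half weight
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

# Integral of sin(x) on [0, pi] is exactly 2; the error is O(h^2).
approx = trapezoid(math.sin, 0.0, math.pi, 1000)
```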

**2.3 Matrix Algebra**

We switch to Linear Algebra, and introduce the concept of a matrix and a vector. We learn about matrix multiplication, and see how to implement it in Matlab as well as in languages where it is not natively supported (such as C++). Einstein (index) notation is also discussed. We also learn about different matrix types, such as banded, sparse, singular, symmetric, diagonally dominant, and positive definite. We demonstrate how to use rotation and translation matrices for coordinate transformations.
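In a language without native matrix support, matrix multiplication becomes an explicit triple loop; a Python sketch (standing in for the C++ version) might look like:

```python
def matmul(A, B):
    """C = A*B for lists of lists: C[i][j] = sum_k A[i][k]*B[k][j]."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):     # inner dot product
                C[i][j] += A[i][k] * B[k][j]
    return C

R = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
```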

**2.4 Solving Systems of Equations**

We introduce the matrix inverse for solving systems of equations. We also cover topics such as determinants and cofactors and see how to compute the inverse of small matrices.

**2.5 Direct Methods**

We cover Gaussian elimination, a direct solution method for larger systems for which computing a matrix inverse is not practical. We discuss the Thomas algorithm for tridiagonal systems. These examples are worked out in Matlab, Python and C++.
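A sketch of the Thomas algorithm for a tridiagonal system (illustrative; the sample system is a small discretized Laplacian):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c (a[0] and c[-1] unused). O(n) work instead of
    the O(n^3) of general Gaussian elimination."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                 # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 3x3 discretized -u'' system: [2,-1; -1,2,-1; -1,2] x = [1,0,0]
x = thomas([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 0, 0])
```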

**2.6 Iterative Methods**

We discuss iterative methods for solving matrix problems, including Jacobi, Gauss-Seidel, and successive over-relaxation. We learn about the residual norm and convergence.
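Jacobi iteration for a small diagonally dominant system can be sketched as (illustrative example):

```python
# Solve A x = b with A = [[2, 1], [1, 2]], b = [3, 3]; the exact
# solution is x = y = 1. Jacobi converges here because the matrix
# is diagonally dominant.
x = y = 0.0
for _ in range(60):
    x_new = (3.0 - y) / 2.0   # solve row 0 for x using the old y
    y_new = (3.0 - x) / 2.0   # solve row 1 for y using the old x
    x, y = x_new, y_new       # update both unknowns simultaneously

# residual norm (infinity norm) measures how well A x = b is satisfied
residual = max(abs(2*x + y - 3.0), abs(x + 2*y - 3.0))
```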

**2.7 Newton-Raphson Method**

Non-linear systems of equations are introduced and we discuss the popular Newton-Raphson method.
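A minimal Newton-Raphson sketch for a two-equation nonlinear system (illustrative; the test problem intersects a circle with a line):

```python
# Solve f1 = x^2 + y^2 - 4 = 0, f2 = x - y = 0 (root at x = y = sqrt(2)).
# Each step solves J * delta = -f using the analytic 2x2 Jacobian
# J = [[2x, 2y], [1, -1]] and Cramer's rule.
x, y = 1.0, 0.5                        # initial guess
for _ in range(20):
    f1 = x * x + y * y - 4.0
    f2 = x - y
    det = 2.0 * x * (-1.0) - 2.0 * y * 1.0
    dx = ((-f1) * (-1.0) - 2.0 * y * (-f2)) / det
    dy = (2.0 * x * (-f2) - 1.0 * (-f1)) / det
    x += dx
    y += dy
```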

**2.8 Conjugate Gradient**

The preconditioned conjugate gradient method is discussed and its convergence is compared to the Jacobi method.

**2.9 Multigrid**

The multigrid method is covered next. We cover high- and low-frequency errors.

**2.10 LU and Cholesky Decomposition**

LU and Cholesky matrix decompositions are discussed and illustrated with Matlab and C++ examples.

**2.11 Eigenvalues**

Eigenvalues and eigenvectors are covered. Some popular algorithms, such as the power method and QR decomposition, are illustrated.

**2.12 Numerical Libraries**

We review popular numerical libraries such as NumPy/SciPy and BLAS/LAPACK and see how to use them in our codes.

### 3. Interpolation and Signal Processing

This chapter focuses on methods for fitting curves to large amounts of data and extracting data trends.

**3.1 Polynomial Fits**

We cover Lagrange polynomials, Hermite interpolation, Bézier splines, and Richardson extrapolation.

**3.2 Least Squares Fit**

The least squares method is covered next.
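For a straight-line fit, the least squares normal equations reduce to closed-form sums; an illustrative Python sketch:

```python
# Fit y = a + b*x by least squares via the normal equations.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]            # exactly y = 1 + 2x

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
a = (sy - b * sx) / n                           # intercept
```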

**3.3 Fourier Series**

Fourier series and the FFT are covered. We demonstrate the method using built-in functions from Matlab and Python and with a custom C++ code.
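As a conceptual sketch, the naive O(n²) discrete Fourier transform below computes the same sums the FFT evaluates in O(n log n) (illustrative only; a production code would call a library FFT):

```python
import cmath
import math

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

# 8 cycles of a sine sampled 64 times -> spectral peak at bin 8
n = 64
signal = [math.sin(2 * math.pi * 8 * k / n) for k in range(n)]
spectrum = [abs(c) for c in dft(signal)]
```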

**3.4 Filtering**

Methods for filtering data are covered.

**3.5 Probability Distributions**

Probability distribution functions are covered. We learn about the cumulative distribution function and stochastic (random) sampling. The Earth Mover’s Distance is discussed for comparing distribution functions.
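Inverse-CDF (inverse transform) sampling can be sketched with the exponential distribution, whose CDF inverts in closed form (illustrative example; the rate value is arbitrary):

```python
import math
import random

random.seed(42)
lam = 2.0             # rate parameter (arbitrary choice)
# CDF F(x) = 1 - exp(-lam*x)  =>  x = -ln(1 - u) / lam, u ~ U(0, 1)
samples = [-math.log(1.0 - random.random()) / lam
           for _ in range(200_000)]

mean = sum(samples) / len(samples)    # should approach 1/lam = 0.5
```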

### 4. Object Oriented Programming

In this chapter we introduce advanced C++ syntax and learn about object-oriented programming.

**4.1 Data Encapsulation**

Here we discuss the benefits of object-oriented programming, including data encapsulation, access control, storage, and operator overloading, without getting into the actual implementation details.

**4.2 Structures and Classes**

Structs and classes are introduced here. They form the foundation of object oriented programming. We learn about access control, constructors, and destructors.

**4.3 Pointers, References, and Dynamic Memory Allocation**

We then continue with a crash course in C++, covering pointers, references, and dynamic memory allocation. These topics, while not difficult, are a source of confusion for many new students of C++.

**4.4 Operator Overloading**

We learn about implementing operators for manipulating our custom classes. These operators allow us to write the code in a more concise, mathematical way.

**4.5 Templates**

We see how templates allow us to generalize the functionality of a class to operate on different data types. We implement a “vec3” data object for storing 3D vectors.

**4.6 Storage Containers**

Next we discuss various storage containers and see how they are implemented in the C++11 standard library. These include vector, linked list, map, and set. We also learn about octrees.

**4.7 Lambda Functions and Algorithms**

We discuss anonymous lambda functions and see how they can be used with standard library algorithms to perform manipulations such as data sorting.

**4.8 Putting it Together**

We close the chapter by utilizing the new skills to rewrite the multiple bouncing ball simulation from Chap. 1 as well as a matrix solver. We compare the performance to the Python/MATLAB versions.

### 5. Interactive Web Applications

In this chapter we discuss using Javascript to develop simulations that run in a web browser. Not only are such simulations interactive by the very nature of websites, but every computer also has a web browser installed, allowing us to develop code even on systems that may not have a more standard compiler available, such as lab computers used for data acquisition.

**5.1 HTML**

We start by describing the basic syntax of HTML files. We learn about basic element types such as `<head>`, `<body>`, `<p>`, `<h?>`, `<a>`, `<ul>`, `<li>`, `<input>`, and `<div>`.

**5.2 CSS**

We next learn how to change the visual representation using styles. We learn about rule selectors in CSS files.

**5.3 Javascript**

Next we learn the basic syntax of Javascript and see how to use it to dynamically modify the webpage and process user input.

**5.4 Canvas**

We then introduce the `<canvas>` element that can be used for 2D and 3D rendering. We mainly focus on the 2D “context”. We learn how to draw shaded circles, lines, and rectangles. Discussion of webGL (for parallel 3D rendering) is left for Chapter 9.

**5.5 Javascript Examples**

We use the above material to develop several interactive simulations covering methods discussed previously. We start with a simulation that adds a new bouncing ball at mouse click location. We then write code for filtering and interpolating .csv data that is drag-and-dropped from the file system.

### 6. Documentation and Code Testing

This chapter covers additional programming topics which are not commonly discussed but knowledge of which is critical for developing large-scale codes that can be utilized and expanded by other researchers.

**6.1 Testing and Visualization**

Here we start by introducing code validation and verification, and sensitivity and uncertainty analysis. We also discuss various methods for visualizing data, including isosurfaces, scatter plots, and virtual probes.

**6.2 Debugging**

Techniques for code debugging are discussed. We also introduce the use of automated tools such as valgrind for finding memory leaks.

**6.3 Coding Standards**

We discuss the need for coding standards for code maintainability. Several popular standards are introduced.

**6.4 Test Suites**

GTest and JUnit are discussed here. We learn about the benefits of automated unit testing.

**6.5 Version Control**

We introduce version control using CVS, SVN, and Mercurial, but mainly focus on Git (the current favorite).

**6.6 Documentation Systems**

In-line documentation using Doxygen is discussed. We include a brief description of LaTeX, focusing on mathematical formulas. We see how to include LaTeX syntax in Doxygen.

### 7. Partial Differential Equations

This chapter introduces basic solution approaches for selected partial differential equations. We illustrate the basic methods in Matlab and Python, but the more advanced methods are illustrated mainly using C++.

**7.1 Classification**

We learn about elliptic, parabolic, and hyperbolic partial differential equations. We discuss their derivation, and learn about initial and boundary conditions.

**7.2 Finite Difference Method**

The finite difference method is discussed. We learn about discretization, mesh nodes and cells, and matrix representation. First- and second-order differencing is discussed.

**7.3 Heat Equation**

We then start covering common solution methods for model PDEs. We start by discussing the steady (elliptic) and unsteady (parabolic) heat (or diffusion) equation solved using finite differences. We write an example code in Matlab, C++, and Javascript. We cover schemes such as forward-time centered-space, Runge-Kutta, and Crank-Nicolson. Stability and von Neumann analysis are covered.
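A bare-bones forward-time centered-space sketch of the unsteady heat equation in Python (illustrative; the parameters are arbitrary but respect the stability limit):

```python
# FTCS for u_t = alpha * u_xx on [0, 1] with u = 0 at both ends;
# the scheme is stable while alpha*dt/dx^2 <= 0.5.
ni = 21                          # number of mesh nodes
dx = 1.0 / (ni - 1)
alpha = 1.0
dt = 0.4 * dx * dx / alpha       # safely below the stability limit

# initial condition: a unit "hot spot" near the center
u = [1.0 if 0.35 < i * dx < 0.65 else 0.0 for i in range(ni)]

for _ in range(500):
    un = u[:]                    # copy of the previous time level
    for i in range(1, ni - 1):   # boundary nodes stay fixed at zero
        u[i] = un[i] + alpha * dt / dx**2 * (un[i-1] - 2*un[i] + un[i+1])

# diffusion smooths the profile; the zero boundaries drain the heat
```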

**7.4 Finite Volume Method**

The finite volume method is covered. We learn about the divergence theorem and Stokes’ theorem, and also see how the method naturally recovers the coefficients in cylindrical coordinates. We see that for uniform Cartesian meshes, the method reduces to finite differences. An FVM example is developed in C++.

**7.5 Finite Element Method**

The FEM is discussed briefly. We use the method to develop a simple FEM solver of the heat equation in C++.

**7.6 Wave Equation**

We then discuss the hyperbolic wave equation and cover several popular solution methods.

**7.7 Beam Equation**

The beam equation (example of a higher-order PDE) is covered and several popular methods are illustrated.

**7.8 Advection-Diffusion Equation and the SIMPLE Method**

Another model equation from aerodynamics is discussed. We discuss some approaches for the advective term, starting with the upwind method. We then cover the SIMPLE method used for incompressible fluids.

**7.9 Vlasov Equation**

The Vlasov equation, which governs the evolution of a velocity distribution function, is introduced. We cover a simple first-order integration scheme.

### 8. Particle Methods

This chapter introduces numerical methods based on Lagrangian Mechanics and stochastic (particle) methods.

**8.1 Random Numbers and Monte Carlo**

We learn about using random numbers to sample values from a Maxwellian velocity distribution function.
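Since each velocity component of a drifting-free Maxwellian is normally distributed with standard deviation equal to the thermal speed, sampling can be sketched as (illustrative; the thermal speed value is arbitrary):

```python
import random

random.seed(1)
v_th = 300.0          # assumed thermal speed sqrt(kT/m), m/s (arbitrary)
N = 100_000
v = [random.gauss(0.0, v_th) for _ in range(N)]

drift = sum(v) / N                    # sample mean, ~0 (no drift)
var = sum(x * x for x in v) / N       # sample variance, ~v_th^2
```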

**8.2 Gas Kinetics**

We modify the “bouncy-balls” example from Ch. 4 to simulate free molecular flow gas dynamics. We learn about using a computational mesh to sample velocity moments for computing density, stream velocity, and temperature.

**8.3 Direct Simulation Monte Carlo**

The Direct Simulation Monte Carlo (DSMC) method is introduced next to add intermolecular collisions.

**8.4 Particle in Cell**

We cover Poisson’s equation (which has the same form as steady heat equation covered in Ch. 7) and electrostatics. We see how to use the PIC method to simulate ionized gas (plasma) dynamics.

**8.5 Molecular Dynamics**

We discuss the molecular dynamics (MD) method that is used in computational chemistry to model molecular structures.

**8.6 Smoothed Particle Hydrodynamics**

In this section we describe the SPH method that blends the Eulerian and Lagrangian descriptions of fluid motion.

### 9. Parallel Processing

This chapter introduces the reader to code optimization and high performance computing. These examples are worked out in C++ and CUDA.

**9.1 Profiling and Memory Access**

We learn how to use timers and profilers to find parts of the code that run slowly. We also demonstrate how re-arranging memory access can lead to drastic performance gain, and learn about CPU cache.

**9.2 Multithreading**

We discuss vector operations and see how to split them up using threads. We cover race conditions, atomics, and locks (and how to avoid needing them). We see how some library functions (e.g., random number generators) are not thread safe and lead to serialization and slow performance. We also demonstrate that there is a finite penalty for thread creation that may make multithreading inefficient for small computations. Parallel efficiency is compared.

**9.3 Message Passing Interface**

We learn about MPI and discuss several popular matrix solution methods such as red-black ordering and domain decomposition. We also discuss approaches for particle simulations. We briefly describe setting up a cluster using Amazon Web Services and provide a short list of basic Linux commands.

**9.4 CUDA**

Graphics card processing using CUDA is discussed next. We cover memory transfer, pinned memory, and streams.

**9.5 webGL**

Returning to interactive Javascript, we see how to use webGL to add GPU processing to web applications.

### 10. Optimization and Machine Learning

This chapter introduces optimization methods for large problems, along with machine learning.

**10.1 Optimization Basics**

We discuss the need for optimization – often we do not know ahead of time which inputs produce a desired result and need to perform a parameter space search. We cover the brute force and shooting methods. We see that this approach is inefficient for a large parameter space.

**10.2 Gradient Descent**

The gradient descent method is described next. This follows up on the Conjugate Gradient discussion for matrix solving.
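A minimal gradient descent sketch on a quadratic bowl (illustrative only; for convergence the fixed learning rate must stay below 2 divided by the largest curvature):

```python
# Minimize f(x, y) = x^2 + 10*y^2 by gradient descent.
# The curvatures are 2 and 20, so we need lr < 2/20 = 0.1.
lr = 0.05
x, y = 5.0, 2.0                    # arbitrary starting point
for _ in range(500):
    gx, gy = 2.0 * x, 20.0 * y     # analytic gradient
    x -= lr * gx
    y -= lr * gy

f = x * x + 10.0 * y * y           # should approach 0 at the minimum
```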

**10.3 Genetic Algorithms**

Genetic algorithms are described next. We develop a simulation to find the shortest path between points.

**10.4 Neural Networks**

Deep neural networks, which form the foundation of machine learning, are covered. We cover basic activation functions such as ReLU and the logistic curve. The cost function is also covered.
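The two activation functions mentioned here are essentially one-liners (illustrative sketch):

```python
import math

def relu(z):
    """Rectified linear unit: max(0, z)."""
    return max(0.0, z)

def logistic(z):
    """Logistic (sigmoid) curve: squashes z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))
```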

**10.5 Back-propagation**

We next see how back-propagation is used to update activation function weights.

### 11. Embedded Systems

This chapter discusses programming microcontrollers and FPGAs, as they allow us to develop data acquisition sensors for code validation.

**11.1 Introduction to Arduino**

We learn about the basics of the popular Arduino microcontroller platform, which is programmed using a simplified form of C++.

**11.2 Sensors and Electronics**

Here we cover basic electronic components including resistors, capacitors, transistors, diodes, and LEDs. We also discuss breadboards and soldering.

**11.3 Basic DAQ system**

Putting it all together, we learn how to create a simple data acquisition system that collects pressure and humidity data from multiple sensors and stores the data onto an SD card.

**11.4 Edge Computing**

We learn how to use TensorFlow Lite to add machine learning to microcontrollers to enable “edge computing”.

**11.5 FPGAs and Verilog**

Here we cover what FPGAs are and introduce the syntax of Verilog. We learn how to write our own hardware instructions for performing rapid math operations. We write code for rapidly “mux”-ing data passed by an Arduino.

### Appendix: Programming Syntax

This appendix summarizes common programming syntax, such as variable and function declaration, flow control, memory allocation, random number sampling, and input/output, in Matlab, Python, Fortran, C++, R, Java, and Javascript.

## Example Codes

Various code examples are posted here:

- Chapter 7 Javascript/WebGL heat equation solver: heat-webgl.html


This looks great. As a former CS guy in aero, this is pretty much everything I would hope to see. A few random suggestions: validation cases for testing consistency, versioning (i.e. bake into your results what build produced the result), C++11 style pointers, build systems, file formats like json & xml for configuration (please do not invent new text file formats for your solver, it makes me cry…). I would say also that some discussion of UI options wouldn’t be amiss. PyQT, etc. Maybe interprocess comm too. There’s a lot of merit to doing solvers in one language and your problem setup and viz in another (C++ and python being a common example). I also wouldn’t mind seeing some EM-specific examples, and maybe some inside baseball stuff like using ghost cells, specific treatment of BCs, etc to help improve results. Though this may be in your other book!

Anyhow these are all relatively minor points. Looking forward to this one!

Thanks Philip for many great suggestions! I plan to have a description of various common file formats in the text, somewhere. However, utilizing either is not trivial in C++, at least not without using external libraries. At least Java has an XML parser (as used by my 2D code Starfish), but I ended up writing my own JSON-like parser for CTSP.

Hi Lubos,

Here are my wild thoughts – no need to respond. Good to hear you are doing well.

Looks like you didn’t leave anything out. How many thousands of pages are you planning on? I think you need two books. One for programming principles for engineers/scientists that includes all the computer stuff and basic numerical analysis techniques that are the most commonly used in engineering. The other is a numerical analysis book with all the other numerical techniques you mention, using a single programming language. The second book is similar to a few already on the market.

I’m not sure where the chapter on embedded systems fits.

You may want to spend some time on “make” and tool chains. Coming from Matlab, linking will be a foreign concept.

I’m not sure about the logic of putting optimization in the same chapter as machine learning. I think you are talking about regression machine learning which fits more with the section on interpolation. Also, I’m not sure how interpolation fits with signal processing.

You may want to add an example of using a large off-the-shelf code like Sparta / Paraview?

Is the audience for the book people who need to understand what is going on inside of a code like Sparta or people who can build and augment large complex codes for a particular domain?

Cheers,

Ken

Thanks Ken. So we actually went back and forth on whether to do a single book or a multi-volume, and we settled on a single volume as otherwise, there is not much difference from having multiple books on related subjects. The total number of pages is going to be only about 450 (so about 40 pages per chapter). I realize this may sound too little, but the point of this text is to introduce the reader to these various topics (“I didn’t even know what I didn’t know”) and refer them to other texts for the actual details. I do plan on having a brief section on data visualization with Paraview. For your two other questions, I guess I don’t see the topics to be that incompatible. Interpolation (or perhaps what I meant is smoothing/fitting) involves looking for trends in noisy data which is essentially signal processing, no? And with optimization and machine learning, the idea is that we are looking for some parameter – or a set of relationships – that will get us a desired answer given some set of inputs, with us letting the computer find it on its own. But perhaps there is a deeper philosophical distinction between the two I am not aware of.

When will the book be out for sale? Or is it already, and I missed reading that somewhere?

Not until late next year. It is due to the publisher in July 2021, so likely targeting November/December release.

Hi, Chemical Engineer and self-taught programmer here. My opinion is similar to the one given by Ken Daniel. My biggest concern is that you are trying to cover too many topics, which could in some cases leave you with a single page to “cover” a topic.

But everything will clear up once the printed version is ready; I hope to see how you manage to solve this problem.

Best regards.

Thank you Francisco. Yes, this is definitely a concern. This is why I am not trying to make the book fully inclusive of all methods, but rather to have it serve as the proper introductory material, with references to more detailed texts. For example, there are 500+ page books covering CFD, and here we will try to just cover the basics of Eulerian methods in about 40 pages.

I feel like Newton-Raphson is a bit out of order… If I were to structure chapter 2, I might have something like:

a) Solving Linear Systems of equations

1) Direct methods

2) Stationary methods

3) Steepest descent and conjugate gradient methods

4) Krylov methods

5) Geometric MultiGrid

6) Algebraic Multigrid

b) Solving Nonlinear Systems of Equations

1) Newton-Raphson

2) Newton-Krylov

3) Nonlinear conjugate gradient (I know you have this in optimization, but could fit here as well).

4) (others as relevant)


Also, some other things to consider might be automatic differentiation (aka algorithmic differentiation), complex-step, symbolic calculations, scientific file formats like HDF5 & NetCDF, Verification & Validation (at least what they are, and how they’re different).

Perhaps also some discussion on how to present results / data, e.g. when to use log scales, when to use a diverging colormap vs when to use a “single-sided” colormap (e.g. white-to-red), when to use a contour plot vs a surface plot, etc.

Would be good to present Amdahl’s Law in the parallel section.

Thanks Greg, these are all good suggestions. But I think that Krylov methods are much more complicated to implement than N-R, no? For instance, I have implemented PCG many times, but every time it was just following a recipe in a numerical book. I don’t actually know how to derive the solver (yet?), but N-R is fairly straightforward.

Lubos,

I think for the most part, the majority of users of scientific computing will be calling a library that already exists for the more complicated methods. I think what’s more important for these more involved solvers is to give an introduction to *how* they work, *when* to use them, and *why* they fail. To your earlier point, it seems like this book is a bit of an introductory book – with references to the vast 500+ page treatises on the subject. Thus I think it’s more important to give the reader a full overview of linear solvers THEN give a full overview of nonlinear solvers, rather than jump between linear -> nonlinear -> linear -> nonlinear (optimization). Especially since Newton methods are effectively just wrappers around linear solvers.

Might also be a good idea to talk about preconditioning…

Sounds good. I will send you the draft copy of this chapter once ready, if interested, to get additional feedback.

Lubos,

I’d be happy to review a draft of the chapter. Feel free to direct email me (I presume you have my email) if you’d like to.

Please mention semi-discretisation schemes for PDEs (heat PDE, wave PDE), along with Laplacian matrix generation for Cartesian, semi-Cartesian, and polar coordinates, evaluated with examples involving various initial and boundary conditions.