These will be updated as we progress with the course.

§X: Y means that Section X should be studied and exercises Y are suggested

The corresponding numbers from the 6th edition are given in parentheses.

- Chapter 1
- §1.1: 8, 10 (§1.1: 8, 10)
- §1.2: 8, 9, 23, 30, 41 ( §1.2: 8, 9, 23, 30, 41)
- §1.3: 13, 14, 18, 22, 24, 45 (§2.1: 13, 14, 18, 24, 45)
- §1.4: 8, 11, 13, 14, 24, 26, 29 (§2.2: 8, 11, 14, 24, 26, 29)

Note: skip the theorem on loss of precision

- Chapter 2
- §2.1: 1, 2, 7a (§7.1: 1, 2, 7a)
- §2.2: 1, 8, 9, 23 (§7.2: 1, 8, 9, 23)
- Gauss elimination
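
The course scripts are MATLAB; as a rough illustration of the elimination-plus-back-substitution idea (not the course's code), here is a minimal Python/NumPy sketch without pivoting, so it is not robust for general matrices:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with back substitution.

    Teaching sketch only: no pivoting, so it can break down when a
    (near-)zero appears on the diagonal during elimination.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: zero out the entries below the diagonal.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier for row i
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gauss_solve(A, b))  # agrees with np.linalg.solve(A, b)
```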

- Chapter 8
- §8.1, pp. 358-365, 371-373: 2, 4, 11, 15, 18, 23 (§8.1, pp. 293-302, 306-307: 2, 4, 11, 15, 18, 23)
- LU decomposition
- Pivoting
- Linear algebra background
- Errors in linear solving

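
The LU-with-pivoting topics above can be sketched compactly in Python; this is an illustrative factorization (scipy.linalg.lu would be the production route), returning P, L, U with PA = LU:

```python
import numpy as np

def lu_pivot(A):
    """LU factorization with partial pivoting: P A = L U.

    A teaching sketch of the algorithm, not a replacement for
    library routines such as scipy.linalg.lu.
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    for k in range(n - 1):
        # Partial pivoting: bring the largest |entry| in column k up.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p], :] = A[[p, k], :]
            P[[k, p], :] = P[[p, k], :]
            L[[k, p], :k] = L[[p, k], :k]  # keep earlier multipliers aligned
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]    # store the multiplier
            A[i, k:] -= L[i, k] * A[k, k:]
    return P, L, np.triu(A)

A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 10.0]])
P, L, U = lu_pivot(A)
print(np.allclose(P @ A, L @ U))  # True
```
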
- Chapter 4
- §4.1, pp. 153-168: 1, 9, 12, 18, 40 (§4.1, pp. 124-141: 1, 9, 12, 18, 40)
- §4.2: 5, 9, 10, 15 (§4.2: 5, 9, 10, 15)
- Polynomial interpolation
- Polynomial interpolation cont.
- Errors in polynomial interpolation
- Try runge_example.m, interperror.m, and the main programs in more examples
- Try also cheb.m
- Some exercises
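
The Runge example can also be reproduced outside MATLAB. A small NumPy sketch (using np.polyfit for the interpolant, an implementation choice rather than the course's code) comparing equispaced and Chebyshev nodes:

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

# Interpolate Runge's function on [-1, 1] with n+1 nodes and compare
# equispaced nodes against Chebyshev nodes on a fine grid.
n = 10
xe = np.linspace(-1.0, 1.0, n + 1)                           # equispaced
xc = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))  # Chebyshev
xx = np.linspace(-1.0, 1.0, 1001)

# polyfit of degree n through n+1 points is the interpolating polynomial.
pe = np.polyval(np.polyfit(xe, runge(xe), n), xx)
pc = np.polyval(np.polyfit(xc, runge(xc), n), xx)

err_equi = np.max(np.abs(pe - runge(xx)))
err_cheb = np.max(np.abs(pc - runge(xx)))
print(err_equi, err_cheb)  # Chebyshev nodes give a much smaller max error
```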

- Chapter 5
- §5.1, up to p. 210: 1, 6, 7, 8, 11 (§5.2, up to p. 196: 2, 4, 5, 7, 19)
- §5.3: 1, 2, 4 (§6.1: 1, 2, 4)
- Basic rules
- Composite rules
- Adaptive Simpson
- Try adsimpsondemo.m, adsimpson.m
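
A minimal recursive adaptive Simpson in Python, sketching the idea behind the adsimpson.m demo (the factor 15 in the error test is the standard one for Simpson bisection; this is not the course's implementation):

```python
import numpy as np

def simpson(f, a, b):
    """Basic Simpson rule on [a, b]."""
    c = 0.5 * (a + b)
    return (b - a) * (f(a) + 4.0 * f(c) + f(b)) / 6.0

def adsimpson(f, a, b, tol=1e-8):
    """Adaptive Simpson quadrature: bisect until the local error
    estimate (coarse vs. refined Simpson value) is below tol."""
    c = 0.5 * (a + b)
    coarse = simpson(f, a, b)
    fine = simpson(f, a, c) + simpson(f, c, b)
    if abs(fine - coarse) < 15.0 * tol:       # standard error estimate
        return fine + (fine - coarse) / 15.0  # one extrapolation step
    return adsimpson(f, a, c, tol / 2) + adsimpson(f, c, b, tol / 2)

print(adsimpson(np.sin, 0.0, np.pi))  # exact value is 2
```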

- Chapter 9
- §9.1: 3, 5, 13, 25 (§12.1: 3, 5, 13, 25)
- Linear least squares
- Try leastsq_ex1.m, leastsq_ex2.m, timeGE.m
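
In the spirit of the leastsq examples (though not a port of them), a line fit by linear least squares in NumPy, contrasting the normal equations with the library solver:

```python
import numpy as np

# Fit a straight line c0 + c1*t to data by linear least squares.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.8])
A = np.column_stack([np.ones_like(t), t])   # design matrix

# Normal equations (fine for small, well-conditioned problems) ...
c_normal = np.linalg.solve(A.T @ A, A.T @ y)
# ... versus the SVD-based solver, the numerically safer route.
c_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
print(c_normal, c_lstsq)  # the two coefficient vectors agree
```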

- Chapter 3
- §3.2: 1, 3, 13, 14, 23 (§3.2: 1, 3, 13, 14, 23)
- §3.1: 8, 12, 13 (§3.1: 8, 12, 13)
- §3.3: 2, 3 (§3.3: 2, 3)
- Nonlinear equations
- Convergence of Newton's method
- Nonlinear systems
- Try NewtonExamples
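
As a minimal Python counterpart to the Newton examples (an illustrative scalar version, not the course's code):

```python
def newton(f, fprime, x0, tol=1e-12, maxit=50):
    """Newton's method for a scalar equation f(x) = 0.

    Converges quadratically near a simple root; sketch only, with
    no safeguards against a vanishing derivative.
    """
    x = x0
    for _ in range(maxit):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Solve x^2 - 2 = 0 from x0 = 1: the iterates converge to sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)
```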

- Introduction to deep learning
- Summary of Sections 1-4 of Catherine F. Higham, Desmond J. Higham, "Deep Learning: An Introduction for Applied Mathematicians"
- Chapter 7
- Eigenvalues
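
One basic eigenvalue algorithm is power iteration; whether it matches Chapter 7's treatment is an assumption, but as a sketch of the dominant-eigenpair idea:

```python
import numpy as np

def power_method(A, maxit=200, tol=1e-10):
    """Power iteration: approximate the dominant eigenpair of A.

    Illustrative sketch; assumes a single dominant eigenvalue and
    a starting vector not orthogonal to its eigenvector.
    """
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    lam = 0.0
    for _ in range(maxit):
        y = A @ x
        x_new = y / np.linalg.norm(y)
        lam_new = x_new @ A @ x_new        # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            return lam_new, x_new
        x, lam = x_new, lam_new
    return lam, x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_method(A)
print(lam)  # close to the larger eigenvalue (5 + sqrt(5))/2 ≈ 3.618
```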