MAT3633

From Department of Mathematics at UTSA
 
==Topics List==

{| class="wikitable sortable"

! Date !! Sections [Sauer 3rd ed] !! Topics !! Prerequisite Skills !! Student Learning Outcomes

|-
|Week 1

||

Section 0.2: Loss of significant digits

||

* [[Loss of Significant Digits]]

* [[Nested Multiplication]]

||

* [[Binary Number System]]

* [[Taylor's Theorem]]

||

* Nested Multiplication for Evaluating Polynomials

* Machine Representation of Real Numbers

* Loss of Significant Digits in Numerical Computing

* Review of Taylor's Theorem

|-
|Week 1
 
||
 
Section 1.1: The Bisection Method
 
||
 
* [[Bisection Method]]
 
||
 
* [[Intermediate Value Theorem]]
 
||
 
* Bisection Method and Implementation
 
* Brief Introduction to Matlab
 
|-
 
|Week 2
 
 
||
Section 1.2: Fixed-Point Iteration
 
||
 
* [[Fixed-Point Iteration]]
 
* [[Order of Convergence]] of Iterative Methods
 
||
 
* [[Limit of Sequences]]
 
* [[Solution Multiplicity of Equations]]
 
||
 
* Geometric Interpretation of Fixed-Point Iteration
 
* Convergence of Fixed Point Iterations
 
* Order of Convergence of Iterative Methods
 
|-
 
|Week 2
 
||
 
Section 1.3: Limits of Accuracy: Conditioning of Problems
 
||
 
* [[Wilkinson Polynomial]]
 
||
 
* [[Limit of Sequences]]
 
* [[Solution Multiplicity of Equations]]
 
||
 
* Sensitivity Analysis of Root-Finding
 
* Error Magnification Factor for Solution of Equations
 
|-
 
|Week 3
 
||
 
Section 1.4: Newton's Method
 
||
 
* [[Newton's Method]]
 
* [[Error Analysis]] for Newton's Method
 
* [[Modified Newton's Method]]
 
  
* [[Root-Finding Without Derivatives]]
 
* [[Secant Method and its Convergence]]
 
* [[Method of False Position, Muller's Method]]
 
* [[Stopping Criteria for Iterative Methods]]
 
||
 
* [[Remainder of Taylor's Series]]
 
* [[Intermediate Value Theorem]]
 
* [[Fixed-Point Iteration]]
 
||
 
* Algebraic and Geometric Interpretation of Newton's method
 
* Error Analysis for Newton's Method Based on Taylor's Theorem
 
* Newton's Method as a Fixed Point Iteration
 
* Modified Newton's Method and its Rate of Convergence
 
|-
 
|Week 3
 
||
 
Section 1.5 Root-Finding Without Derivatives
 
||
 
* [[Secant Method]]
 
* [[Method of False Position]]
 
* [[Muller's Method]]
 
* [[Stopping Criteria]]
 
||
 
* [[Remainder of Taylor's Series]]
 
* [[Intermediate Value Theorem]]
 
||
 
* Secant Method and its Convergence
 
* Stopping Criteria for Iterative Methods
 
|-
 
|Week 4
 
||
 
Section 2.1 Solve Systems of Linear Equations: Gaussian Elimination
 
||
 
* [[Gaussian Elimination]]
 
||
 
* [[Elementary Row Operations]]
 
||
 
* Gaussian Elimination and its Operation Counts
 
* Gaussian Elimination with Pivoting
 
* Implementation of Gauss Elimination
 
|-
 
|Week 4
 
||
 
Section 2.2 Solve Systems of Linear Equations: LU Decomposition
 
||
 
* [[LU Decomposition]]
 
||
 
* [[Matrix-Matrix Products]]
 
* [[Matrix-Vector Products]]
 
* [[Inverse Matrix]]
 
* [[Elementary Row Operations]]
 
||
 
* Matrices for Elementary Row Operations
 
* Gauss Elimination as Matrix Products
 
* Advantages of Solutions by LU Decomposition
 
|-
 
|Week 5
 
||
 
Section 2.3 Error Analysis for Solution of Ax=b
 
||
 
* [[Norms]]
 
* [[Error Analysis]] for Solution of Ax=b
 
* [[Error Magnification]] Factor and Condition Number of Matrix
 
||
 
* [[Length of Vectors]]
 
* [[Eigenvalues of a Matrix]]
 
* [[Eigenvectors of a Matrix]]
 
||
 
* Various Norms for Vectors and Matrices: Compatibility of Vector and Matrix Norms
 
* Error Analysis for Solution of Ax=b
 
* Error Magnification Factor and Condition Number of Matrix
 
|-
 
|Week 5
 
||
 
Section 2.5: Iterative Methods for Solving Ax=b
 
||
 
* [[Iterative Methods]]
 
* [[Jacobi Method]]
 
* [[Gauss-Seidel Method]]
 
* [[Successive-Over-Relaxation (SOR) Method]]
 
* [[Convergence of Iterative Methods]]
 
* [[Spectral Radius of Matrix]]
 
* [[Sparse Matrix]]
 
||
 
* [[Length of Vectors]]
 
* [[Eigenvalues of a Matrix]]
 
* [[Eigenvectors of a Matrix]]
 
||
 
* Convergence of General Iterative Method for Solving System of Linear Equations
 
* Comparison of Gauss Elimination and Iterative Methods
 
|-
 
|Week 6
 
||
 
Section 2.6: Conjugate Gradient (CG) Method
 
||
 
* [[Conjugate Gradient Method]]
 
* [[Symmetric Positive Definite Matrix]]
 
* [[CG Method]]
 
||
 
* [[Scalar Product of Vectors]]
 
* [[Determinant of a Matrix]]
 
* [[Eigenvalues of a Matrix]]
 
* [[Quadratic Polynomials of n-variables]]
 
* [[Partial Derivatives]]
 
* [[Gradients]]
 
* [[Chain Rule for Partial Derivatives]]
 
||
 
* Symmetric Positive Definite Matrix and Properties
 
* Construction of Conjugate Gradient (CG) Method
 
* Properties of CG Method
 
* Preconditioning for CG Method
 
|-
 
|Week 6
 
||
 
Section 2.7: Nonlinear System of Equations
 
||
 
* [[Nonlinear System of Equations]]
 
* [[Taylor's Theorem for Multi-Variate Vector Valued Functions]]
 
* [[Newton's Method]]
 
* [[Broyden's Method]]
 
||
 
* [[Scalar Product of Vectors]]
 
* [[Determinant of a Matrix]]
 
* [[Eigenvalues of a Matrix]]
 
* [[Quadratic Polynomials of n-variables]]
 
* [[Partial Derivatives]]
 
* [[Gradients]]
 
* [[Chain Rule for Partial Derivatives]]
 
||
 
* (TBD)
 
|-
 
|Week 7
 
||
 
Section 3.1: Data and Interpolating Functions
 
||
 
* [[Lagrange Basis Functions]]
 
* [[Newton's Divided Differences]]
 
* [[Properties of Newton's Divided Differences]]
 
||
 
* [[Fundamental Theorem of Algebra]]
 
* [[Rolle's Theorem]]
 
||
 
* Properties of Lagrange Basis Functions
 
* Lagrange Form of the Interpolation Polynomials
 
* Properties of Newton's Divided Differences
 
* Newton's Form of the Interpolation Polynomials
 
|-
 
|Week 7
 
||
 
Section 3.2: Interpolation Error and Runge Phenomenon
 
||
 
* [[Interpolation Error]]
 
* Interpolation [[Error Analysis]]
 
* [[Runge Phenomenon]]
 
* [[Chebyshev Polynomial]]
 
* [[Error Estimates]] for Chebyshev Interpolation
 
||
 
* [[Fundamental Theorem of Algebra]]
 
* [[Rolle's Theorem]]
 
||
 
* (TBD)
 
|-
 
|Week 8
 
||
 
Section 3.4: Cubic Splines
 
||
 
* [[Cubic Splines]]
 
||
 
* [[One-Sided Limits]]
 
* [[Continuity of Functions]]
 
* [[Indefinite Integrals]]
 
* [[Extremum Values of Multivariate Quadratic Functions]]
 
||
 
* Construction of Cubic Splines for Interpolation
 
* End Conditions
 
* Properties of Cubic Spline Interpolation
 
|-
 
|Week 8
 
||
 
Section 3.5: Bezier Curves
 
||
 
* [[Bezier Curves]]
 
||
 
* [[One-Sided Limits]]
 
* [[Continuity of Functions]]
 
* [[Indefinite Integrals]]
 
* [[Extremum Values of Multivariate Quadratic Functions]]
 
||
 
* Bezier Curve and Fonts
 
|-
 
|Week 8
 
||
 
Section 4.1: Least Square Method
 
||
 
* [[Least Square Method]]
 
||
 
* [[One-Sided Limits]]
 
* [[Continuity of Functions]]
 
* [[Indefinite Integrals]]
 
* [[Extremum Values of Multivariate Quadratic Functions]]
 
||
 
* Least Square Method for Solving Inconsistent System of Linear Equations
 
* Basic Properties of Least Square Solutions
 
|-
 
|Week 9
 
||
 
Section 4.2: Mathematical Models and Data Fitting
 
||
 
* [[Curve Fitting]]
 
* [[Statistical Modeling]]
 
||
 
* [[Linear Spaces]]
 
* [[Basis Functions]]
 
* [[Product Rule for Vector Valued Multivariate Functions]]
 
* [[Chain Rule for Vector Valued Multivariate Functions]]
 
||
 
* Least square method for curve fitting and statistical modeling
 
* Survey of Models: linear model, periodic model, exponential model, logistic model, etc.
 
|-
 
|Week 9
 
||
 
Section 4.5: Nonlinear Least Square Fitting
 
||
 
* [[Taylor's Theorem for Vector Valued Multivariate Functions]]
 
* [[Gauss-Newton Method]]
 
* [[Levenberg-Marquardt Method]]
 
||
 
* [[Linear Spaces]]
 
* [[Basis Functions]]
 
* [[Product Rule for Vector Valued Multivariate Functions]]
 
* [[Chain Rule for Vector Valued Multivariate Functions]]
 
||
 
* (TBD)
 
|-
 
|Week 10
 
||
 
Section 5.1: Numerical Differentiation
 
||
 
* [[Numerical Differentiation]]
 
* [[Finite Difference]] (FD)
 
* [[Undetermined Coefficient Method]]
 
* [[Extrapolation Technique]]
 
||
 
* [[Taylor's Theorem]]
 
* [[Interpolation Error Estimates]]
 
* [[Properties of Definite Integrals]]
 
||
 
* Finite Difference (FD) Approximations of 1st order Derivative and Their Error Analysis
 
* FD approximations of 2nd order Derivatives and Their Error Analysis
 
* Undetermined Coefficient Method for FD Approximation
 
* Extrapolation Technique for Improving the Order of Approximation
 
|-
 
|Week 10
 
||
 
Section 5.2: Numerical Integration: Newton-Cotes Formulas
 
||
 
* [[Newton-Cotes]]
 
* [[Midpoint rule]]
 
* [[Trapezoid rule]]
 
* [[Simpson's rule]]
 
* [[Error Analysis based on Interpolation Errors]]
 
* [[Quadrature Rules]]
 
* [[Composite Quadrature Rules]]
 
||
 
* [[Taylor's Theorem]]
 
* [[Interpolation Error Estimates]]
 
* [[Properties of Definite Integrals]]
 
||
 
* Error Analysis based on Taylor's Theorem
 
* Error Analysis based on Interpolation Errors
 
* Degree of Precision of Quadrature Rules
 
 
|-
 
|Week 10

||

Section 5.3: Numerical Integration: Romberg's Technique

||
* [[Romberg's Technique]]
 
||
 
* [[Taylor's Theorem]]
 
* [[Interpolation Error Estimates]]
 
* [[Properties of Definite Integrals]]
 
||
 
* Motivation, construction and implementation of Romberg's Technique.
 
|-
 
|Week 11
 
||
 
Section 5.4: Adaptive Numerical Integration
 
||
 
* [[Adaptive Numerical Integration]]
 
* [[Implementation of Adaptive Numerical Integration Techniques]]
 
||
 
* [[Long Divisions]]
 
* [[Substitution Methods]] for definite integrals
 
||
 
* How to estimate the error on a subinterval

* How to mark subintervals for further refinement
 
|-
 
|Week 11
 
||
 
Section 5.5: Gauss Quadrature Formulas
 
||
 
* [[Gauss Quadrature Formulas]]
 
* [[Orthogonal Polynomials]]
 
* [[Legendre polynomials]]
 
* [[Gauss Quadrature Rule]]
 
||
 
* [[Long Divisions]]
 
* [[Substitution Methods]] for definite integrals
 
||
 
* Motivation and difficulties with straightforward approach
 
* Legendre polynomials and their basic properties
 
* Gauss Quadrature rule based on Legendre polynomials
 
* Degree of precision of Gauss Quadrature
 
* Gauss quadrature formula on general interval and composite Gauss rules
 
|-
 
|Week 12
 
||
 
Section 10.1: Discrete Fourier Transform and Fast Fourier Transform (FFT)
 
||
 
* [[Fourier Series]]
 
* [[Discrete Fourier Transform]] (DFT)
 
* [[Inverse Discrete Fourier Transform]]
 
* [[Fast Fourier Transform (FFT)]]
 
||
 
* [[Complex Numbers]]
 
* [[Complex Variables]]
 
* [[Integration by Parts]]
 
* [[Convergence of Sequences]]
 
* [[Convergence of Series]]
 
||
 
* Matrix Form of Discrete Fourier Transform
 
* DFT and Trigonometric Interpolation
 
|-
 
|Week 12
 
||
 
Section 11.1: Discrete Cosine Transform (optional)
 
||
 
* [[Discrete Cosine Transform]] (DCT)
 
||
 
* [[Complex Numbers]]
 
* [[Complex Variables]]
 
* [[Integration by Parts]]
 
* [[Convergence of Sequences]]
 
* [[Convergence of Series]]
 
||
 
* DCT and Interpolation by Cosine Functions
 
* Relation between DFT and DCT
 
* Fourier Transform of 2-Dimensional Functions
 
* DCT of 2-Dimensional Functions
 
* Interpolation Theorem for 2-Dimensional DCT
 
|-
 
|Week 12
 
||
 
Section 11.2: Image Compression (optional)
 
||
 
* [[Quantization]]
 
* [[Image Compression]]
 
* [[Image Decompression]]
 
||
 
* [[Complex Numbers]]
 
* [[Complex Variables]]
 
* [[Integration by Parts]]
 
* [[Convergence of Sequences]]
 
* [[Convergence of Series]]
 
||
 
* Digital Grayscale Images and Color Images

* RGB Format

* YCbCr (or YUV) Format

* Conversion between RGB and YUV Formats
 
|-
 
|Week 13
 
||
 
Section 12.1: Power Iteration Methods
 
||
 
* [[Power Iteration Methods]]
 
* [[Inverse Power Iteration]]
 
* [[Inverse Power Iteration with Shift]]
 
* [[Rayleigh Quotient Iteration]]
 
||
 
* [[Eigenvalues]]
 
* [[Eigenvectors]]
 
* [[Orthonormal Bases and the Gram-Schmidt Process]]
 
||
 
* Convergence of Power Iteration Methods
 
|-
 
|Week 13
 
||
 
Section 12.2: QR Algorithm for Computing Eigenvalues
 
||
 
* [[Orthogonal Matrices]]
 
* [[QR-Factorization]]
 
* [[Normalized Simultaneous Iteration]] (NSI)
 
* [[Unshifted QR Algorithm]]
 
* [[Shifted QR Algorithm]]
 
||
 
* [[Eigenvalues]]
 
* [[Eigenvectors]]
 
* [[Orthonormal Bases and the Gram-Schmidt Process]]
 
||
 
* Definition and basic properties of orthogonal matrices
 
* QR-Factorization based on Gram-Schmidt Orthogonalization
 
|-
 
|Week 14
 
||
 
Section 12.2: QR Algorithm for Computing Eigenvalues
 
||
 
* [[Upper Hessenberg Form]] (UHF)
 
* [[Householder Reflector]]
 
||
 
* [[Matrices for Orthogonal Projection]]
 
* [[Matrices for Reflection]]
 
* [[Block Matrices]]
 
* [[Similar Matrices]]
 
||
 
* Converting a Matrix into UHF by Householder Reflectors
 
|}
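The root-finding topics from Weeks 1–3 (the bisection method and Newton's method) lend themselves to short implementations. The following Python sketch is illustrative only: the course itself uses Matlab, and the test function f(x) = x^2 - 2, the starting interval, and the tolerances are arbitrary choices for demonstration, not part of the course materials.

```python
# Illustrative sketch of two root-finding methods from Weeks 1-3.

def bisection(f, a, b, tol=1e-10):
    """Bisection method: requires f(a) and f(b) of opposite sign, so the
    Intermediate Value Theorem guarantees a root in [a, b]."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        c = (a + b) / 2
        if f(a) * f(c) <= 0:
            b = c  # root lies in [a, c]
        else:
            a = c  # root lies in [c, b]
    return (a + b) / 2

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k); converges
    quadratically near a simple root (see the Week 3 error analysis)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x**2 - 2
root_b = bisection(f, 1, 2)
root_n = newton(f, lambda x: 2 * x, 1.5)
# Both approximate the positive root sqrt(2).
```

Note the trade-off the syllabus highlights: bisection halves the bracket each step (linear convergence, but guaranteed), while Newton's method needs a derivative and a good starting point in exchange for its quadratic rate.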
 
==Topics List B (Wiki Format)==
 
{| class="wikitable sortable"
 
! Date !! Sections !! Topics !! Prerequisite Skills !! Student Learning Outcomes
 
|-
 
|Week 1
 
||
 
* Section 0.2: Loss of significant digits
 
||
 
* [[Loss of Significant Digits]]
 
||
 
* Binary Number System
 
* Taylor's Theorem
 
||
 
* Nested Multiplication for Evaluating Polynomials
 
* Machine Representation of Real Numbers
 
* Loss of Significant Digits in Numerical Computing
 
* Review of Taylor's Theorem
 
|-
 
|Week 1
 
||
 
* Section 0.2: Loss of significant digits
 
||
 
* [[Nested Multiplication]]
 
||
 
* Binary Number System
 
* Taylor's Theorem
 
||
 
* Nested Multiplication for Evaluating Polynomials
 
* Machine Representation of Real Numbers
 
* Loss of Significant Digits in Numerical Computing
 
* Review of Taylor's Theorem
 
|-
 
|Week 1
 
||
 
* Section 1.1: The Bisection Method
 
||
 
* [[Bisection Method]]
 
||
 
* Intermediate Value Theorem
 
||
 
* Bisection Method and Implementation
 
* Brief Introduction to Matlab
 
|-
 
|Week 2
 
||
 
* Section 1.2: Fixed-Point Iteration
 
||
 
* [[Fixed-Point Iteration]]
 
||
 
* Limit of Sequences
 
* Solution Multiplicity of Equations
 
||
 
* Geometric Interpretation of Fixed-Point Iteration
 
* Convergence of Fixed Point Iterations
 
* Order of Convergence of Iterative Methods
 
|-
 
|Week 2
 
||
 
* Section 1.2: Fixed-Point Iteration
 
||
 
* [[Order of Convergence of Iterative Methods]]
 
||
 
* Limit of Sequences
 
* Solution Multiplicity of Equations
 
||
 
* Geometric Interpretation of Fixed-Point Iteration
 
* Convergence of Fixed Point Iterations
 
* Order of Convergence of Iterative Methods
 
|-
 
|Week 2
 
||
 
* Section 1.3: Limits of Accuracy: Conditioning of Problems
 
||
 
* [[Wilkinson Polynomial]]
 
||
 
* Limit of Sequences
 
* Solution Multiplicity of Equations
 
||
 
* Sensitivity Analysis of Root-Finding
 
* Error Magnification Factor for Solution of Equations
 
|-
 
|Week 3
 
||
 
* Section 1.4: Newton's Method
 
||
 
* [[Newton's Method]]
 
||
 
* Remainder of Taylor's Series
 
* Intermediate Value Theorem
 
 
* Fixed-Point Iteration

||

* Algebraic and Geometric Interpretation of Newton's method

* Error Analysis for Newton's Method Based on Taylor's Theorem
* Newton's Method as a Fixed Point Iteration
 
* Modified Newton's Method and its Rate of Convergence
 
|-
 
|Week 3
 
 
||

* Section 1.4: Newton's Method

||

* [[Error Analysis for Newton's Method]]

||

* Remainder of Taylor's Series

* Intermediate Value Theorem

* Fixed-Point Iteration

||
 
* Algebraic and Geometric Interpretation of Newton's method
 
* Error Analysis for Newton's Method Based on Taylor's Theorem
 
* Newton's Method as a Fixed Point Iteration
 
* Modified Newton's Method and its Rate of Convergence
 
 
|-
 
|Week 3

||

* Section 1.4: Newton's Method

||

* [[Modified Newton's Method]]

||

* Remainder of Taylor's Series

* Intermediate Value Theorem

* Fixed-Point Iteration

||

* Algebraic and Geometric Interpretation of Newton's method

* Error Analysis for Newton's Method Based on Taylor's Theorem

* Newton's Method as a Fixed Point Iteration

* Modified Newton's Method and its Rate of Convergence
 
|-
 
|Week 3

||

* Section 1.4: Newton's Method

||

* [[Root-Finding Without Derivatives]]

||

* Remainder of Taylor's Series

* Intermediate Value Theorem

* Fixed-Point Iteration

||

* Algebraic and Geometric Interpretation of Newton's method

* Error Analysis for Newton's Method Based on Taylor's Theorem

* Newton's Method as a Fixed Point Iteration

* Modified Newton's Method and its Rate of Convergence
 
 
|-
 
|Week 3

||

* Section 1.4: Newton's Method

||
* [[Secant Method and its Convergence]]
 
||
 
* Remainder of Taylor's Series
 
* Intermediate Value Theorem
 
* Fixed-Point Iteration
 
||
 
* Algebraic and Geometric Interpretation of Newton's method
 
* Error Analysis for Newton's Method Based on Taylor's Theorem
 
* Newton's Method as a Fixed Point Iteration
 
* Modified Newton's Method and its Rate of Convergence
 
|-
 
|Week 3
 
||
 
* Section 1.4: Newton's Method
 
||
 
* [[Method of False Position, Muller's Method]]
 
||
 
* Remainder of Taylor's Series
 
* Intermediate Value Theorem
 
* Fixed-Point Iteration
 
||
 
* Algebraic and Geometric Interpretation of Newton's method
 
* Error Analysis for Newton's Method Based on Taylor's Theorem
 
* Newton's Method as a Fixed Point Iteration
 
* Modified Newton's Method and its Rate of Convergence
 
|-
 
|Week 3
 
||
 
* Section 1.4: Newton's Method
 
||
 
* [[Stopping Criteria for Iterative Methods]]
 
||
 
* Remainder of Taylor's Series
 
* Intermediate Value Theorem
 
* Fixed-Point Iteration
 
||
 
* Algebraic and Geometric Interpretation of Newton's method
 
* Error Analysis for Newton's Method Based on Taylor's Theorem
 
* Newton's Method as a Fixed Point Iteration
 
* Modified Newton's Method and its Rate of Convergence
 
|-
 
|Week 3
 
||
 
* Section 1.5 Root-Finding Without Derivatives
 
||
 
* [[Secant Method]]
 
||
 
* Remainder of Taylor's Series
 
* Intermediate Value Theorem
 
||
 
* Secant Method and its Convergence
 
* Stopping Criteria for Iterative Methods
 
|-
 
|Week 3
 
||
 
* Section 1.5 Root-Finding Without Derivatives
 
||
 
* [[Method of False Position]]
 
||
 
* Remainder of Taylor's Series
 
* Intermediate Value Theorem
 
||
 
* Secant Method and its Convergence
 
* Stopping Criteria for Iterative Methods
 
|-
 
|Week 3
 
||
 
* Section 1.5 Root-Finding Without Derivatives
 
||
 
* [[Muller's Method]]
 
||
 
* Remainder of Taylor's Series
 
* Intermediate Value Theorem
 
||
 
* Secant Method and its Convergence
 
* Stopping Criteria for Iterative Methods
 
|-
 
|Week 3
 
||
 
* Section 1.5 Root-Finding Without Derivatives
 
||
 
* [[Stopping Criteria]]
 
||
 
* Remainder of Taylor's Series
 
* Intermediate Value Theorem
 
||
 
* Secant Method and its Convergence
 
* Stopping Criteria for Iterative Methods
 
|-
 
|Week 4
 
||
 
* Section 2.1 Solve Systems of Linear Equations: Gaussian Elimination
 
||
 
* [[Gaussian Elimination]]
 
||
 
* Elementary Row Operations
 
||
 
* Gaussian Elimination and its Operation Counts
 
* Gaussian Elimination with Pivoting
 
* Implementation of Gauss Elimination
 
|-
 
|Week 4
 
||
 
* Section 2.2 Solve Systems of Linear Equations: LU Decomposition
 
||
 
* [[LU Decomposition]]
 
||
 
* Matrix-Matrix Products
 
* Matrix-Vector Products
 
* Inverse Matrix
 
* Elementary Row Operations
 
||
 
* Matrices for Elementary Row Operations
 
* Gauss Elimination as Matrix Products
 
* Advantages of Solutions by LU Decomposition
 
|-
 
|Week 5
 
||
 
* Section 2.3 Error Analysis for Solution of Ax=b
 
||
 
* [[Norms]]
 
||
 
* Length of Vectors
 
* Eigenvalues of a Matrix
 
* Eigenvectors of a Matrix
 
||
 
* Various Norms for Vectors and Matrices: Compatibility of Vector and Matrix Norms
 
 
* Error Analysis for Solution of Ax=b

* Error Magnification Factor and Condition Number of Matrix

|-

|Week 5

||

* Section 2.3 Error Analysis for Solution of Ax=b

||

* [[Error Analysis for Solution of Ax=b]]

||

* Length of Vectors

* Eigenvalues of a Matrix

* Eigenvectors of a Matrix

||

* Various Norms for Vectors and Matrices: Compatibility of Vector and Matrix Norms

* Error Analysis for Solution of Ax=b

* Error Magnification Factor and Condition Number of Matrix

|-

|Week 5

||
* Section 2.3 Error Analysis for Solution of Ax=b
 
||
 
* [[Error Magnification Factor and Condition Number of Matrix]]
 
||
 
* Length of Vectors
 
* Eigenvalues of a Matrix
 
* Eigenvectors of a Matrix
 
||
 
* Various Norms for Vectors and Matrices: Compatibility of Vector and Matrix Norms
 
* Error Analysis for Solution of Ax=b
 
* Error Magnification Factor and Condition Number of Matrix
 
|-
 
|Week 5
 
||
 
* Section 2.5: Iterative Methods for Solving Ax=b
 
||
 
* [[Iterative Methods]]
 
||
 
* Length of Vectors
 
* Eigenvalues of a Matrix
 
* Eigenvectors of a Matrix
 
||
 
* Convergence of General Iterative Method for Solving System of Linear Equations
 
* Comparison of Gauss Elimination and Iterative Methods
 
|-
 
|Week 5
 
||
 
* Section 2.5: Iterative Methods for Solving Ax=b
 
||
 
* [[Jacobi Method]]
 
||
 
* Length of Vectors
 
* Eigenvalues of a Matrix
 
* Eigenvectors of a Matrix
 
||
 
* Convergence of General Iterative Method for Solving System of Linear Equations
 
* Comparison of Gauss Elimination and Iterative Methods
 
 
|-
 
|Week 5

||

* Section 2.5: Iterative Methods for Solving Ax=b

||

* [[Gauss-Seidel Method]]

||

* Length of Vectors

* Eigenvalues of a Matrix

* Eigenvectors of a Matrix

||

* Convergence of General Iterative Method for Solving System of Linear Equations
* Comparison of Gauss Elimination and Iterative Methods
 
|-
 
|Week 5
 
||
 
* Section 2.5: Iterative Methods for Solving Ax=b
 
||
 
* [[Successive-Over-Relaxation (SOR) Method]]
 
||
 
* Length of Vectors
 
* Eigenvalues of a Matrix
 
* Eigenvectors of a Matrix
 
||
 
* Convergence of General Iterative Method for Solving System of Linear Equations
 
* Comparison of Gauss Elimination and Iterative Methods
 
|-
 
|Week 5
 
||
 
* Section 2.5: Iterative Methods for Solving Ax=b
 
||
 
* [[Convergence of Iterative Methods]]
 
||
 
* Length of Vectors
 
* Eigenvalues of a Matrix
 
* Eigenvectors of a Matrix
 
||
 
* Convergence of General Iterative Method for Solving System of Linear Equations
 
* Comparison of Gauss Elimination and Iterative Methods
 
|-
 
|Week 5
 
||
 
* Section 2.5: Iterative Methods for Solving Ax=b
 
||
 
* [[Spectral Radius of Matrix]]
 
||
 
* Length of Vectors
 
* Eigenvalues of a Matrix
 
* Eigenvectors of a Matrix
 
||
 
* Convergence of General Iterative Method for Solving System of Linear Equations
 
* Comparison of Gauss Elimination and Iterative Methods
 
|-
 
|Week 5
 
||
 
* Section 2.5: Iterative Methods for Solving Ax=b
 
||
 
* [[Sparse Matrix]]
 
||
 
* Length of Vectors
 
* Eigenvalues of a Matrix
 
* Eigenvectors of a Matrix
 
||
 
* Convergence of General Iterative Method for Solving System of Linear Equations
 
* Comparison of Gauss Elimination and Iterative Methods
 
|-
 
|Week 6
 
||
 
* Section 2.6: Conjugate Gradient (CG) Method
 
||
 
* [[Conjugate Gradient Method]]
 
||
 
* Scalar Product of Vectors
 
* Determinant of a Matrix
 
* Eigenvalues of a Matrix
 
* Quadratic Polynomials of n-variables
 
* Partial Derivatives
 
* Gradients
 
* Chain Rule for Partial Derivatives
 
||
 
* Symmetric Positive Definite Matrix and Properties
 
 
* Construction of Conjugate Gradient (CG) Method

* Properties of CG Method

* Preconditioning for CG Method
 
 
|-
 
|Week 6

||

* Section 2.6: Conjugate Gradient (CG) Method

||

* [[Symmetric Positive Definite Matrix]]

||

* Scalar Product of Vectors

* Determinant of a Matrix
* Eigenvalues of a Matrix
 
* Quadratic Polynomials of n-variables
 
* Partial Derivatives
 
* Gradients
 
* Chain Rule for Partial Derivatives
 
 
||
 
* Symmetric Positive Definite Matrix and Properties

* Construction of Conjugate Gradient (CG) Method

* Properties of CG Method

* Preconditioning for CG Method
 
|-
 
|Week 6

||

* Section 2.6: Conjugate Gradient (CG) Method

||

* [[CG Method]]

||

* Scalar Product of Vectors

* Determinant of a Matrix

* Eigenvalues of a Matrix

* Quadratic Polynomials of n-variables

* Partial Derivatives

* Gradients

* Chain Rule for Partial Derivatives

||

* Symmetric Positive Definite Matrix and Properties

* Construction of Conjugate Gradient (CG) Method

* Properties of CG Method

* Preconditioning for CG Method

|-

|Week 6

||

* Section 2.7: Nonlinear System of Equations

||

* [[Nonlinear System of Equations]]

||
 
* Scalar Product of Vectors
 
* Determinant of a Matrix
 
* Eigenvalues of a Matrix
 
* Quadratic Polynomials of n-variables
 
* Partial Derivatives
 
* Gradients
 
* Chain Rule for Partial Derivatives
 
||
 
* (TBD)
 
|-
 
|Week 6
 
||
 
* Section 2.7: Nonlinear System of Equations
 
||
 
* [[Taylor's Theorem for Multi-Variate Vector Valued Functions]]
 
||
 
* Scalar Product of Vectors
 
* Determinant of a Matrix
 
* Eigenvalues of a Matrix
 
* Quadratic Polynomials of n-variables
 
* Partial Derivatives
 
* Gradients
 
* Chain Rule for Partial Derivatives
 
||
 
* (TBD)
 
|-
 
|Week 6
 
||
 
* Section 2.7: Nonlinear System of Equations
 
||
 
* [[Newton's Method]]
 
||
 
* Scalar Product of Vectors
 
* Determinant of a Matrix
 
* Eigenvalues of a Matrix
 
* Quadratic Polynomials of n-variables
 
* Partial Derivatives
 
* Gradients
 
* Chain Rule for Partial Derivatives
 
||
 
* (TBD)
 
|-
 
|Week 6
 
||
 
* Section 2.7: Nonlinear System of Equations
 
||
 
* [[Broyden's Method]]
 
||
 
* Scalar Product of Vectors
 
* Determinant of a Matrix
 
* Eigenvalues of a Matrix
 
* Quadratic Polynomials of n-variables
 
* Partial Derivatives
 
* Gradients
 
* Chain Rule for Partial Derivatives
 
||
 
* (TBD)
 
|-
 
|Week 7
 
||
 
* Section 3.1: Data and Interpolating Functions
 
||
 
* [[Lagrange Basis Functions]]
 
||
 
* Fundamental Theorem of Algebra
 
* Rolle's Theorem
 
||
 
* Properties of Lagrange Basis Functions
 
* Lagrange Form of the Interpolation Polynomials
 
* Properties of Newton's Divided Differences
 
* Newton's Form of the Interpolation Polynomials
 
|-
 
|Week 7
 
||
 
* Section 3.1: Data and Interpolating Functions
 
||
 
* [[Newton's Divided Differences]]
 
||
 
* Fundamental Theorem of Algebra
 
* Rolle's Theorem
 
||
 
* Properties of Lagrange Basis Functions
 
* Lagrange Form of the Interpolation Polynomials
 
* Properties of Newton's Divided Differences
 
* Newton's Form of the Interpolation Polynomials
 
|-
 
|Week 7
 
||
 
* Section 3.1: Data and Interpolating Functions
 
||
 
* [[Properties of Newton's Divided Differences]]
 
||
 
* Fundamental Theorem of Algebra
 
* Rolle's Theorem
 
||
 
* Properties of Lagrange Basis Functions
 
* Lagrange Form of the Interpolation Polynomials
 
* Properties of Newton's Divided Differences
 
* Newton's Form of the Interpolation Polynomials
 
|-
 
|Week 7
 
||
 
* Section 3.2: Interpolation Error and Runge Phenomenon
 
||
 
* [[Interpolation Error]]
 
||
 
* Fundamental Theorem of Algebra
 
* Rolle's Theorem
 
||
 
* (TBD)
 
|-
 
|Week 7
 
||
 
* Section 3.2: Interpolation Error and Runge Phenomenon
 
||
 
* [[Interpolation Error Analysis]]
 
||
 
* Fundamental Theorem of Algebra
 
* Rolle's Theorem
 
||
 
* (TBD)
 
|-
 
|Week 7
 
||
 
* Section 3.2: Interpolation Error and Runge Phenomenon
 
||
 
* [[Runge Phenomenon]]
 
||
 
* Fundamental Theorem of Algebra
 
* Rolle's Theorem
 
||
 
* (TBD)
 
|-
 
|Week 7
 
||
 
* Section 3.2: Interpolation Error and Runge Phenomenon
 
||
 
* [[Chebyshev Polynomial]]
 
||
 
* Fundamental Theorem of Algebra
 
* Rolle's Theorem
 
||
 
* (TBD)
 
|-
 
|Week 7
 
||
 
* Section 3.2: Interpolation Error and Runge Phenomenon
 
||
 
* [[Error Estimates for Chebyshev Interpolation]]
 
||
 
* Fundamental Theorem of Algebra
 
* Rolle's Theorem
 
||
 
* (TBD)
 
|-
 
|Week 8
 
||
 
* Section 3.4: Cubic Splines
 
||
 
* [[Cubic Splines]]
 
||
 
* One-Sided Limits
 
* Continuity of Functions
 
* Indefinite Integrals
 
* Extremum Values of Multivariate Quadratic Functions
 
||
 
* Construction of Cubic Splines for Interpolation
 
* End Conditions
 
* Properties of Cubic Spline Interpolation
 
|-
 
|Week 8
 
||
 
* Section 3.5: Bezier Curves
 
||
 
* [[Bezier Curves]]
 
||
 
* One-Sided Limits
 
* Continuity of Functions
 
* Indefinite Integrals
 
* Extremum Values of Multivariate Quadratic Functions
 
||
 
* Bezier Curve and Fonts
 
|-
 
|Week 8
 
||
 
* Section 4.1: Least Square Method
 
||
 
* [[Least Square Method]]
 
||
 
* One-Sided Limits
 
* Continuity of Functions
 
* Indefinite Integrals
 
* Extremum Values of Multivariate Quadratic Functions
 
||
 
* Least Square Method for Solving Inconsistent Systems of Linear Equations
 
* Basic Properties of Least Square Solutions
 
 
|-
 
|-
|Week 9
+
| Week.9
 
||
 
||
* Section 4.2: Mathematical Models and Data Fitting
+
4.2 and 4.5
 
||
 
||
* [[Curve Fitting]]
+
* Mathematical Models and Data Fitting
 +
* Nonlinear Least Square Fitting
 
||
 
||
* Linear Spaces
+
* linear spaces, basis functions
* Basis Functions
+
* product rule and chain rule for vector valued multivariate functions.
* Product Rule for Vector Valued Multivariate Functions
 
* Chain Rule for Vector Valued Multivariate Functions
 
 
||
 
||
* Least square method for curve fitting and statistical modeling
+
* Least square method for curve fitting and statistical modeling.
 
* Survey of Models: linear model, periodic model, exponential models, logistic model, etc
 
 +
 +
* Taylor's theorem for vector valued multivariate functions.
 +
* Gauss-Newton Method
 +
* Levenberg-Marquardt Method
 +
 +
 +
 
|-
 
|-
|Week 9
+
| Week.10
 
||
 
||
* Section 4.2: Mathematical Models and Data Fitting
+
5.1, 5.2 and 5.3
 
||
 
||
* [[Statistical Modeling]]
+
* Numerical Differentiation
 +
* Numerical Integration: Newton-Cotes Formulas
 +
* Numerical Integration: Romberg's Technique
 
||
 
||
* Linear Spaces
+
* Taylor's theorem
* Basis Functions
+
* interpolation error estimates
* Product Rule for Vector Valued Multivariate Functions
+
* properties of definite integrals
* Chain Rule for Vector Valued Multivariate Functions
 
 
||
 
||
* Least square method for curve fitting and statistical modeling
+
* Finite difference (FD) approximations of 1st order derivative and their error analysis
* Survey of Models: linear model, periodic model, exponential models, logistic model, etc
+
* FD approximations of 2nd order derivatives and their error analysis
 +
* Undetermined coefficient method for FD approximation
 +
* Extrapolation technique for improving the order of approximation
 +
 
 +
* Midpoint rule, trapezoid rule and Simpson's rule;
 +
* Error analysis based on Taylor's Theorem and interpolation errors
 +
* Degree of precision of quadrature rules
 +
* Composite quadrature rules
 +
 
 +
* Motivation, construction and implementation of Romberg's technique.
 +
 
 +
 
 
|-
 
|-
|Week 9
+
| Week.11
 
||
 
||
* Section 4.5: Nonlinear Least Square Fitting
+
5.4 and 5.5
 
||
 
||
* [[Taylor's Theorem for Vector Valued Multivariate Functions]]
+
* Adaptive Numerical Integration
 +
* Gauss Quadrature Formulas
 
||
 
||
* Linear Spaces
+
* long divisions
* Basis Functions
+
* changing variables for definite integrals
* Product Rule for Vector Valued Multivariate Functions
 
* Chain Rule for Vector Valued Multivariate Functions
 
 
||
 
||
* (TBD)
+
* How to estimate the error on a subinterval
|-
+
* How to mark subintervals for further refinement
|Week 9
+
* Implementation of adaptive numerical integration techniques.
||
+
 
* Section 4.5: Nonlinear Least Square Fitting
+
* Motivation and difficulties with straightforward approach.
||
+
* Orthogonal polynomials,
* [[Gauss-Newton Method]]
+
* Legendre polynomials and their basic properties;
||
+
* Gauss quadrature rule based on Legendre polynomials
* Linear Spaces
 
* Basis Functions
 
* Product Rule for Vector Valued Multivariate Functions
 
* Chain Rule for Vector Valued Multivariate Functions
 
||
 
* (TBD)
 
|-
 
|Week 9
 
||
 
* Section 4.5: Nonlinear Least Square Fitting
 
||
 
* [[Levenberg-Marquardt Method]]
 
||
 
* Linear Spaces
 
* Basis Functions
 
* Product Rule for Vector Valued Multivariate Functions
 
* Chain Rule for Vector Valued Multivariate Functions
 
||
 
* (TBD)
 
|-
 
|Week 10
 
||
 
* Section 5.1: Numerical Differentiation
 
||
 
* [[Numerical Differentiation]]
 
||
 
* Taylor's Theorem
 
* Interpolation Error Estimates
 
* Properties of Definite Integrals
 
||
 
* Finite Difference (FD) Approximations of 1st order Derivative and Their Error Analysis
 
* FD approximations of 2nd order Derivatives and Their Error Analysis
 
* Undetermined Coefficient Method for FD Approximation
 
* Extrapolation Technique for Improving the Order of Approximation
 
|-
 
|Week 10
 
||
 
* Section 5.1: Numerical Differentiation
 
||
 
* [[Finite Difference (FD)]]
 
||
 
* Taylor's Theorem
 
* Interpolation Error Estimates
 
* Properties of Definite Integrals
 
||
 
* Finite Difference (FD) Approximations of 1st order Derivative and Their Error Analysis
 
* FD approximations of 2nd order Derivatives and Their Error Analysis
 
* Undetermined Coefficient Method for FD Approximation
 
* Extrapolation Technique for Improving the Order of Approximation
 
|-
 
|Week 10
 
||
 
* Section 5.1: Numerical Differentiation
 
||
 
* [[Undetermined Coefficient Method]]
 
||
 
* Taylor's Theorem
 
* Interpolation Error Estimates
 
* Properties of Definite Integrals
 
||
 
* Finite Difference (FD) Approximations of 1st order Derivative and Their Error Analysis
 
* FD approximations of 2nd order Derivatives and Their Error Analysis
 
* Undetermined Coefficient Method for FD Approximation
 
* Extrapolation Technique for Improving the Order of Approximation
 
|-
 
|Week 10
 
||
 
* Section 5.1: Numerical Differentiation
 
||
 
* [[Extrapolation Technique]]
 
||
 
* Taylor's Theorem
 
* Interpolation Error Estimates
 
* Properties of Definite Integrals
 
||
 
* Finite Difference (FD) Approximations of 1st order Derivative and Their Error Analysis
 
* FD approximations of 2nd order Derivatives and Their Error Analysis
 
* Undetermined Coefficient Method for FD Approximation
 
* Extrapolation Technique for Improving the Order of Approximation
 
|-
 
|Week 10
 
||
 
* Section 5.2: Numerical Integration: Newton-Cotes Formulas
 
||
 
* [[Newton-Cotes]]
 
||
 
* Taylor's Theorem
 
* Interpolation Error Estimates
 
* Properties of Definite Integrals
 
||
 
* Error Analysis based on Taylor's Theorem
 
* Error Analysis based on Interpolation Errors
 
* Degree of Precision of Quadrature Rules
 
|-
 
|Week 10
 
||
 
* Section 5.2: Numerical Integration: Newton-Cotes Formulas
 
||
 
* [[Midpoint rule]]
 
||
 
* Taylor's Theorem
 
* Interpolation Error Estimates
 
* Properties of Definite Integrals
 
||
 
* Error Analysis based on Taylor's Theorem
 
* Error Analysis based on Interpolation Errors
 
* Degree of Precision of Quadrature Rules
 
|-
 
|Week 10
 
||
 
* Section 5.2: Numerical Integration: Newton-Cotes Formulas
 
||
 
* [[Trapezoid rule]]
 
||
 
* Taylor's Theorem
 
* Interpolation Error Estimates
 
* Properties of Definite Integrals
 
||
 
* Error Analysis based on Taylor's Theorem
 
* Error Analysis based on Interpolation Errors
 
* Degree of Precision of Quadrature Rules
 
|-
 
|Week 10
 
||
 
* Section 5.2: Numerical Integration: Newton-Cotes Formulas
 
||
 
* [[Simpson's rule]]
 
||
 
* Taylor's Theorem
 
* Interpolation Error Estimates
 
* Properties of Definite Integrals
 
||
 
* Error Analysis based on Taylor's Theorem
 
* Error Analysis based on Interpolation Errors
 
* Degree of Precision of Quadrature Rules
 
|-
 
|Week 10
 
||
 
* Section 5.2: Numerical Integration: Newton-Cotes Formulas
 
||
 
* [[Error Analysis based on Interpolation Errors]]
 
||
 
* Taylor's Theorem
 
* Interpolation Error Estimates
 
* Properties of Definite Integrals
 
||
 
* Error Analysis based on Taylor's Theorem
 
* Error Analysis based on Interpolation Errors
 
* Degree of Precision of Quadrature Rules
 
|-
 
|Week 10
 
||
 
* Section 5.2: Numerical Integration: Newton-Cotes Formulas
 
||
 
* [[Quadrature Rules]]
 
||
 
* Taylor's Theorem
 
* Interpolation Error Estimates
 
* Properties of Definite Integrals
 
||
 
* Error Analysis based on Taylor's Theorem
 
* Error Analysis based on Interpolation Errors
 
* Degree of Precision of Quadrature Rules
 
|-
 
|Week 10
 
||
 
* Section 5.2: Numerical Integration: Newton-Cotes Formulas
 
||
 
* [[Composite Quadrature Rules]]
 
||
 
* Taylor's Theorem
 
* Interpolation Error Estimates
 
* Properties of Definite Integrals
 
||
 
* Error Analysis based on Taylor's Theorem
 
* Error Analysis based on Interpolation Errors
 
* Degree of Precision of Quadrature Rules
 
|-
 
|Week 10
 
||
 
* Section 5.3: Numerical Integration: Romberg's Technique
 
||
 
* [[Romberg's Technique]]
 
||
 
* Taylor's Theorem
 
* Interpolation Error Estimates
 
* Properties of Definite Integrals
 
||
 
* Motivation, construction and implementation of Romberg's Technique.
 
|-
 
|Week 11
 
||
 
* Section 5.4: Adaptive Numerical Integration
 
||
 
* [[Adaptive Numerical Integration]]
 
||
 
* Long Divisions
 
* Substitution Methods for definite integrals
 
||
 
* How to estimate the error on a subinterval

* How to mark subintervals for further refinement
 
|-
 
|Week 11
 
||
 
* Section 5.4: Adaptive Numerical Integration
 
||
 
* [[Implementation of Adaptive Numerical Integration Techniques]]
 
||
 
* Long Divisions
 
* Substitution Methods for definite integrals
 
||
 
* How to estimate the error on a subinterval

* How to mark subintervals for further refinement
 
|-
 
|Week 11
 
||
 
* Section 5.5: Gauss Quadrature Formulas
 
||
 
* [[Gauss Quadrature Formulas]]
 
||
 
* Long Divisions
 
* Substitution Methods for definite integrals
 
||
 
* Motivation and difficulties with straightforward approach
 
* Legendre polynomials and their basic properties
 
* Gauss Quadrature rule based on Legendre polynomials
 
 
* Degree of precision of Gauss Quadrature
 
 
* Gauss quadrature formula on general interval and composite Gauss rules
 
 +
 +
 
|-
 
|-
|Week 11
+
| Week.12
 
||
 
||
* Section 5.5: Gauss Quadrature Formulas
+
10.1 and 11.1
 
||
 
||
* [[Orthogonal Polynomials]]
+
* Discrete Fourier Transform and FFT
 +
* Discrete Cosine Transform (optional)
 +
* Image Compression  (optional)
 
||
 
||
* Long Divisions
+
* complex numbers and complex variables
* Substitution Methods for definite integrals
+
* integration by parts
 +
* convergence of sequences and series.
 
||
 
||
* Motivation and difficulties with straightforward approach
+
* Fourier Series,
* Legendre polynomials and their basic properties
+
*  Discrete Fourier Transform
* Gauss Quadrature rule based on Legendre polynomials
+
*  Matrix Form of Discrete Fourier Transform:
* Degree of precision of Gauss Quadrature
+
*  Inverse Discrete Fourier Transform:
* Gauss quadrature formula on general interval and composite Gauss rules
+
*  DFT and Trigonometric interpolation
 +
* Algorithm for computing DFT: Fast Fourier Transform (FFT)
 +
 
 +
*  Discrete Cosine Transform (DCT),
 +
*  DCT and Interpolation by Cosine Functions
 +
* Relation between DFT and DCT:
 +
* Fourier Transform of 2-Dimensional Functions
 +
*  DCT of 2-Dimensional Functions:
 +
* Interpolation Theorem for 2-Dimensional DCT
 +
 
 +
* Digital grayscale images and color images
 +
*  RGB format:
 +
*  YCbCr (or YUV) format:
 +
* Conversion between RGB and YUV formats
 +
*  Quantization, Image Compression and Decompression
 +
 
 +
 
 +
 
 
|-
 
|-
|Week 11
+
| Week.13
 
||
 
||
* Section 5.5: Gauss Quadrature Formulas
+
12.1 and 12.2
 
||
 
||
* [[Legendre polynomials]]
+
* Power Iteration Methods
 +
* QR Algorithm for Computing Eigenvalues
 
||
 
||
* Long Divisions
+
* properties of eigenvalues and eigenvectors
* Substitution Methods for definite integrals
+
* Gram-Schmidt orthogonalization
 
||
 
||
* Motivation and difficulties with straightforward approach
+
* Power iteration and its rate of convergence.
* Legendre polynomials and their basic properties
+
* Inverse Power Iteration,
* Gauss Quadrature rule based on Legendre polynomials
+
* Inverse Power Iteration with Shift
* Degree of precision of Gauss Quadrature
+
* Rayleigh Quotient Iteration
* Gauss quadrature formula on general interval and composite Gauss rules
+
 
 +
 
 +
* Definition and basic properties of orthogonal matrices:
 +
* QR-Factorization based on Gram-Schmidt Orthogonalization:
 +
* Normalized Simultaneous Iteration (NSI).
 +
* Unshifted QR Algorithm:
 +
* Shifted QR Algorithm:
 +
 
 +
 
 +
 
 
|-
 
|-
|Week 11
+
| Week.14
 
||
 
||
* Section 5.5: Gauss Quadrature Formulas
+
12.2
 
||
 
||
* [[Gauss Quadrature Rule]]
+
* Algorithm for Computing Eigenvalues: Speed up of QR-algorithm:
 
||
 
||
* Long Divisions
+
* matrices for orthogonal projection and reflection
* Substitution Methods for definite integrals
+
* block matrices and their products
||
+
* similar matrices.
* Motivation and difficulties with straightforward approach
 
* Legendre polynomials and their basic properties
 
* Gauss Quadrature rule based on Legendre polynomials
 
* Degree of precision of Gauss Quadrature
 
* Gauss quadrature formula on general interval and composite Gauss rules
 
|-
 
|Week 12
 
||
 
* Section 10.1: Discrete Fourier Transform and Fast Fourier Transform (FFT)
 
||
 
* [[Fourier Series]]
 
||
 
* Complex Numbers
 
* Complex Variables
 
* Integration by Parts
 
* Convergence of Sequences
 
* Convergence of Series
 
||
 
* Matrix Form of Discrete Fourier Transform
 
* DFT and Trigonometric Interpolation
 
|-
 
|Week 12
 
||
 
* Section 10.1: Discrete Fourier Transform and Fast Fourier Transform (FFT)
 
||
 
* [[Discrete Fourier Transform (DFT)]]
 
||
 
* Complex Numbers
 
* Complex Variables
 
* Integration by Parts
 
* Convergence of Sequences
 
* Convergence of Series
 
||
 
* Matrix Form of Discrete Fourier Transform
 
* DFT and Trigonometric Interpolation
 
|-
 
|Week 12
 
||
 
* Section 10.1: Discrete Fourier Transform and Fast Fourier Transform (FFT)
 
||
 
* [[Inverse Discrete Fourier Transform]]
 
||
 
* Complex Numbers
 
* Complex Variables
 
* Integration by Parts
 
* Convergence of Sequences
 
* Convergence of Series
 
||
 
* Matrix Form of Discrete Fourier Transform
 
* DFT and Trigonometric Interpolation
 
|-
 
|Week 12
 
||
 
* Section 10.1: Discrete Fourier Transform and Fast Fourier Transform (FFT)
 
||
 
* [[Fast Fourier Transform (FFT)]]
 
||
 
* Complex Numbers
 
* Complex Variables
 
* Integration by Parts
 
* Convergence of Sequences
 
* Convergence of Series
 
||
 
* Matrix Form of Discrete Fourier Transform
 
* DFT and Trigonometric Interpolation
 
|-
 
|Week 12
 
||
 
* Section 11.1: Discrete Cosine Transform (optional)
 
||
 
* [[Discrete Cosine Transform]]
 
||
 
* Complex Numbers
 
* Complex Variables
 
* Integration by Parts
 
* Convergence of Sequences
 
* Convergence of Series
 
||
 
* DCT and Interpolation by Cosine Functions
 
* Relation between DFT and DCT
 
* Fourier Transform of 2-Dimensional Functions
 
* DCT of 2-Dimensional Functions
 
* Interpolation Theorem for 2-Dimensional DCT
 
|-
 
|Week 12
 
||
 
* Section 11.1: Discrete Cosine Transform (optional)
 
||
 
* [[Discrete Cosine Transform(DCT)]]
 
||
 
* Complex Numbers
 
* Complex Variables
 
* Integration by Parts
 
* Convergence of Sequences
 
* Convergence of Series
 
||
 
* DCT and Interpolation by Cosine Functions
 
* Relation between DFT and DCT
 
* Fourier Transform of 2-Dimensional Functions
 
* DCT of 2-Dimensional Functions
 
* Interpolation Theorem for 2-Dimensional DCT
 
|-
 
|Week 12
 
||
 
* Section 11.2: Image Compression (optional)
 
||
 
* [[Quantization]]
 
||
 
* Complex Numbers
 
* Complex Variables
 
* Integration by Parts
 
* Convergence of Sequences
 
* Convergence of Series
 
||
 
* Digital grayscale images and color images
 
* RGB format
 
* YCbCr (or YUV) format
 
* Conversion between RGB and YUV formats
 
|-
 
|Week 12
 
||
 
* Section 11.2: Image Compression (optional)
 
||
 
* [[Image Compression]]
 
||
 
* Complex Numbers
 
* Complex Variables
 
* Integration by Parts
 
* Convergence of Sequences
 
* Convergence of Series
 
||
 
* Digital grayscale images and color images
 
* RGB format
 
* YCbCr (or YUV) format
 
* Conversion between RGB and YUV formats
 
|-
 
|Week 12
 
||
 
* Section 11.2: Image Compression (optional)
 
||
 
* [[Image Decompression]]
 
||
 
* Complex Numbers
 
* Complex Variables
 
* Integration by Parts
 
* Convergence of Sequences
 
* Convergence of Series
 
||
 
* Digital grayscale images and color images
 
* RGB format
 
* YCbCr (or YUV) format
 
* Conversion between RGB and YUV formats
 
|-
 
|Week 13
 
||
 
* Section 12.1: Power Iteration Methods
 
||
 
* [[Power Iteration Methods]]
 
||
 
* Eigenvalues
 
* Eigenvectors
 
* Orthonormal Bases and the Gram-Schmidt Process
 
||
 
* Convergence of Power Iteration Methods
 
|-
 
|Week 13
 
||
 
* Section 12.1: Power Iteration Methods
 
||
 
* [[Inverse Power Iteration]]
 
||
 
* Eigenvalues
 
* Eigenvectors
 
* Orthonormal Bases and the Gram-Schmidt Process
 
||
 
* Convergence of Power Iteration Methods
 
|-
 
|Week 13
 
||
 
* Section 12.1: Power Iteration Methods
 
||
 
* [[Inverse Power Iteration with Shift]]
 
||
 
* Eigenvalues
 
* Eigenvectors
 
* Orthonormal Bases and the Gram-Schmidt Process
 
||
 
* Convergence of Power Iteration Methods
 
|-
 
|Week 13
 
||
 
* Section 12.1: Power Iteration Methods
 
||
 
* [[Rayleigh Quotient Iteration]]
 
||
 
* Eigenvalues
 
* Eigenvectors
 
* Orthonormal Bases and the Gram-Schmidt Process
 
||
 
* Convergence of Power Iteration Methods
 
|-
 
|Week 13
 
||
 
* Section 12.2: QR Algorithm for Computing Eigenvalues
 
||
 
* [[Orthogonal Matrices]]
 
||
 
* Eigenvalues
 
* Eigenvectors
 
* Orthonormal Bases and the Gram-Schmidt Process
 
||
 
* Definition and basic properties of orthogonal matrices
 
* QR-Factorization based on Gram-Schmidt Orthogonalization
 
|-
 
|Week 13
 
||
 
* Section 12.2: QR Algorithm for Computing Eigenvalues
 
||
 
* [[QR-Factorization]]
 
||
 
* Eigenvalues
 
* Eigenvectors
 
* Orthonormal Bases and the Gram-Schmidt Process
 
||
 
* Definition and basic properties of orthogonal matrices
 
* QR-Factorization based on Gram-Schmidt Orthogonalization
 
|-
 
|Week 13
 
||
 
* Section 12.2: QR Algorithm for Computing Eigenvalues
 
||
 
* [[Normalized Simultaneous Iteration(NSI)]]
 
||
 
* Eigenvalues
 
* Eigenvectors
 
* Orthonormal Bases and the Gram-Schmidt Process
 
||
 
* Definition and basic properties of orthogonal matrices
 
* QR-Factorization based on Gram-Schmidt Orthogonalization
 
|-
 
|Week 13
 
||
 
* Section 12.2: QR Algorithm for Computing Eigenvalues
 
||
 
* [[Unshifted QR Algorithm]]
 
||
 
* Eigenvalues
 
* Eigenvectors
 
* Orthonormal Bases and the Gram-Schmidt Process
 
||
 
* Definition and basic properties of orthogonal matrices
 
* QR-Factorization based on Gram-Schmidt Orthogonalization
 
|-
 
|Week 13
 
||
 
* Section 12.2: QR Algorithm for Computing Eigenvalues
 
||
 
* [[Shifted QR Algorithm]]
 
||
 
* Eigenvalues
 
* Eigenvectors
 
* Orthonormal Bases and the Gram-Schmidt Process
 
||
 
* Definition and basic properties of orthogonal matrices
 
* QR-Factorization based on Gram-Schmidt Orthogonalization
 
|-
 
|Week 14
 
||
 
* Section 12.2: QR Algorithm for Computing Eigenvalues
 
||
 
* [[Upper Hessenberg Form (UHF)]]
 
||
 
* Matrices for Orthogonal Projection
 
* Matrices for Reflection
 
* Block Matrices
 
* Similar Matrices
 
||
 
* Convert a matrix into UHF by Householder reflectors
 
|-
 
|Week 14
 
||
 
* Section 12.2: QR Algorithm for Computing Eigenvalues
 
||
 
* [[Householder Reflector]]
 
||
 
* Matrices for Orthogonal Projection
 
* Matrices for Reflection
 
* Block Matrices
 
* Similar Matrices
 
 
||
 
||
 +
* Upper Hessenberg form (UHF)
 +
* Householder Reflector
 
* Convert a matrix into UHF by Householder reflectors
 
 +
 
|}
 
|}

Latest revision as of 14:38, 17 August 2020

Course Catalog

MAT 3633. Numerical Analysis. (3-0) 3 Credit Hours.

Prerequisites: MAT2233, MAT3213, and one of the following: CS1063, CS1714, or CS2073. Solution of linear and nonlinear equations, curve-fitting, and eigenvalue problems. Generally offered: Fall, Spring. Differential Tuition: $150.

Topics List

Date Sections [Sauer 3rd ed] Topics Prerequisite Skills Student Learning Outcomes
Week.1

0.2 and 1.1

  • Loss of significant digits
  • Bisection Method
  • Brief introduction to matlab
  • binary number system;
  • Taylor's theorem;
  • intermediate value theorem
  • Nested multiplication for evaluating polynomials
  • Machine representation of real numbers
  • Loss of significant digits in numerical computing
  • Review of Taylor's Theorem
  • Bisection method and implementation
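The bisection method listed above fits in a few lines. The following is an illustrative Python sketch (the course itself uses Matlab); it assumes only a continuous f with a sign change on [a, b], per the intermediate value theorem:

```python
def bisect(f, a, b, tol=1e-10):
    """Bisection for f(x) = 0, assuming f(a) and f(b) have opposite signs."""
    fa = f(a)
    for _ in range(200):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        # keep the half-interval where the sign change persists
        if fa * fm < 0:
            b = m
        else:
            a, fa = m, fm
    return (a + b) / 2

root = bisect(lambda x: x**3 - 2, 1, 2)  # approximates the cube root of 2
```

Each step halves the bracketing interval, so the error after k steps is at most (b - a) / 2**(k + 1).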
Week.2

1.2 and 1.3

  • Fixed-Point Iteration
  • Limits of Accuracy: Conditioning of problems
  • limit of sequences
  • multiplicity of solutions of equations
  • Geometric interpretation
  • Convergence of fixed point iterations
  • Order of convergence of iterative methods
  • Wilkinson polynomial and other examples
  • Sensitivity analysis of root-finding
  • Error magnification factor for solution of equations
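The fixed-point iteration x_{k+1} = g(x_k) above can be sketched as follows (illustrative Python, not course code); it converges when |g'(x*)| < 1 near the fixed point:

```python
import math

def fixed_point(g, x0, tol=1e-12, maxit=500):
    """Iterate x <- g(x) until successive iterates agree to within tol."""
    x = x0
    for _ in range(maxit):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: g(x) = cos(x) has a unique fixed point near 0.7391.
p = fixed_point(math.cos, 1.0)
```

The rate of convergence is linear with factor |g'(x*)|, which connects to the "order of convergence" outcome above.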
Week.3

1.4 and 1.5

  • Newton's Method
  • Root-Finding without Derivatives
  • Remainder of Taylor's series
  • intermediate value theorem.
  • Algebraic and geometric interpretation of Newton's method
  • Error analysis for Newton's method based on Taylor's theorem
  • Newton's method as a fixed point iteration
  • Modified Newton's method and its rate of convergence
  • Secant Method and its convergence
  • Method of False Position and Muller's Method
  • Stopping criteria for iterative methods
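Newton's method and a residual-based stopping criterion, both named above, can be sketched together (illustrative Python, not course code):

```python
def newton(f, df, x0, tol=1e-12, maxit=50):
    """Newton's method for f(x) = 0, assuming df is the derivative of f."""
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:        # stopping criterion on the residual
            break
        x = x - fx / df(x)       # Newton update: x <- x - f(x)/f'(x)
    return x

r = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)  # approximates sqrt(2)
```

For a simple root, the error analysis via Taylor's theorem gives quadratic convergence; for a multiple root the plain iteration degrades to linear, which motivates the modified Newton's method in the list.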


Week.4

2.1 and 2.2

  • Solve Systems of Linear Equations: Gaussian Elimination
  • Solve System of Linear Equations: LU Decomposition
  • Matrix-matrix products and matrix-vector products
  • inverse matrix
  • elementary row operations
  • product and inverse of matrices for elementary row operations.
  • Gaussian elimination and its operation counts
  • Gaussian elimination with pivoting
  • Implementation of Gauss elimination
  • Matrices for elementary row operations
  • Gauss elimination as matrix products
  • Advantages of solutions by LU decomposition
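Gaussian elimination with partial pivoting, as outlined above, can be sketched on an augmented matrix (illustrative Python using plain lists, not course code):

```python
def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    A = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]                  # elimination multiplier
            for j in range(k, n + 1):
                A[i][j] -= m * A[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (A[i][n] - s) / A[i][i]
    return x

x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```

Storing the multipliers m instead of discarding them yields the LU decomposition, which is the advantage noted above: one factorization serves many right-hand sides.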
Week.5

2.3 and 2.4

  • Error Analysis for Solution of Ax=b
  • Iterative Methods for Solving Ax=b
  • Length of vectors
  • eigenvalue and eigenvectors of matrix
  • various norms for vectors and matrices: compatibility of vector and matrix norms.
  • Error Analysis for the solution of Ax=b
  • Error magnification factor and condition number of matrix
  • Jacobi method, Gauss-Seidel method, Successive-Over-Relaxation (SOR) method
  • Convergence of Jacobi Method, GS method and SOR method.
  • spectral radius of matrix
  • convergence of general iterative method for solving system of linear equations,
  • Sparse Matrix
  • Comparison of Gauss Elimination and iterative methods
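The Jacobi method named above updates every component from the previous iterate; a minimal Python sketch (illustrative, assuming a diagonally dominant matrix so the iteration converges):

```python
def jacobi(A, b, x0, iters=100):
    """Jacobi iteration for Ax = b; each sweep uses only the old iterate."""
    n = len(A)
    x = x0[:]
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Diagonally dominant example with exact solution [2, 1].
x = jacobi([[4.0, 1.0], [2.0, 5.0]], [9.0, 9.0], [0.0, 0.0])
```

Gauss-Seidel differs only in using the newest components within a sweep; convergence for both is governed by the spectral radius of the iteration matrix, as listed above.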


Week.6

2.6 and 2.7

  • Conjugate Gradient Method
  • Nonlinear System of Equations
  • scalar product of vectors
  • determinant and eigenvalues of matrix
  • quadratic polynomials of n-variables
  • partial derivatives and gradients
  • chain rule for partial derivatives.
  • Symmetric positive definite matrix and properties
  • Construction of Conjugate Gradient (CG) Method
  • Properties of the CG Method
  • Preconditioning for CG method
  • Taylor's Theorem for multi-variate vector valued functions
  • Newton's Method
  • Broyden's Method
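The conjugate gradient construction above can be sketched directly from the standard recurrences (illustrative Python, not course code); it assumes A is symmetric positive definite:

```python
def conjugate_gradient(A, b, x0, tol=1e-12):
    """CG for Ax = b with A symmetric positive definite; in exact
    arithmetic it terminates in at most n steps."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    x = x0[:]
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]   # residual b - Ax
    p = r[:]                                           # first search direction
    rs = dot(r, r)
    for _ in range(len(b)):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)                        # exact line search
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```

Each step minimizes the quadratic (1/2)x^T A x - b^T x over an expanding Krylov subspace, which is where the A-conjugacy of the directions comes from.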
Week.7

3.1 and 3.2

  • Data and Interpolating Functions
  • Interpolation Error and Runge Phenomenon
  • Chebyshev interpolation
  • Fundamental theorem of algebra
  • Rolle's theorem.
  • Lagrange Basis Functions
  • Properties of Lagrange basis functions
  • Lagrange form of the interpolation polynomials
  • Newton's Divided Differences
  • Properties of Newton's divided differences
  • Newton's Form of the interpolation polynomials
  • Interpolation error analysis
  • Runge phenomenon
  • Chebyshev Polynomial
  • Error estimates for Chebyshev interpolation
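Newton's divided differences and evaluation of the Newton form by nested multiplication, both listed above, can be sketched as follows (illustrative Python, not course code):

```python
def divided_differences(xs, ys):
    """Coefficients c_k of the Newton form, computed in place column by
    column from the divided-difference table."""
    n = len(xs)
    c = ys[:]
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(xs, c, x):
    """Evaluate the Newton form by nested multiplication (Horner-like)."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = p * (x - xs[k]) + c[k]
    return p

xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 7.0]    # samples of y = x^2 + x + 1
c = divided_differences(xs, ys)
val = newton_eval(xs, c, 3.0)                 # interpolant evaluated at x = 3
```

Since the data come from a quadratic and three points determine a quadratic interpolant uniquely (fundamental theorem of algebra), the interpolant reproduces the polynomial exactly.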


Week.8

3.4, 3.5 and 4.1

  • Cubic Splines
  • Bezier Curves
  • Least Square Method
  • one-sided limits
  • continuity of functions
  • indefinite integrals
  • extremum values of multivariate quadratic functions.
  • Cubic splines
  • construction of cubic splines for interpolation
  • end conditions
  • properties of cubic spline interpolation
  • Bezier Curve and fonts
  • Least square method for solving inconsistent systems of linear equations
  • Basic properties of least square solutions
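For the inconsistent system Ac = y of a straight-line fit, the least square method above reduces to the 2x2 normal equations (A^T A)c = A^T y. An illustrative Python sketch (not course code):

```python
def lstsq_line(xs, ys):
    """Least-squares line y = c0 + c1*x via the 2x2 normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx          # determinant of A^T A
    c0 = (sxx * sy - sx * sxy) / det
    c1 = (n * sxy - sx * sy) / det
    return c0, c1

c0, c1 = lstsq_line([0.0, 1.0, 2.0], [1.1, 2.9, 5.0])
```

The residual r = y - Ac of the least square solution is orthogonal to the columns of A, which is the basic property referred to above.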


Week.9

4.2 and 4.5

  • Mathematical Models and Data Fitting
  • Nonlinear Least Square Fitting
  • linear spaces, basis functions
  • product rule and chain rule for vector valued multivariate functions.
  • Least square method for curve fitting and statistical modeling.
  • Survey of Models: linear model, periodic model, exponential models, logistic model, etc
  • Taylor's theorem for vector valued multivariate functions.
  • Gauss-Newton Method
  • Levenberg-Marquardt Method


Week.10

5.1, 5.2 and 5.3

  • Numerical Differentiation
  • Numerical Integration: Newton-Cotes Formulas
  • Numerical Integration: Romberg's Technique
  • Taylor's theorem
  • interpolation error estimates
  • properties of definite integrals
  • Finite difference (FD) approximations of 1st order derivative and their error analysis
  • FD approximations of 2nd order derivatives and their error analysis
  • Undetermined coefficient method for FD approximation
  • Extrapolation technique for improving the order of approximation
  • Midpoint rule, trapezoid rule and Simpson's rule
  • Error analysis based on Taylor's Theorem and interpolation errors
  • Degree of precision of quadrature rules
  • Composite quadrature rules
  • Motivation, construction and implementation of Romberg's technique.
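The composite trapezoid rule and Romberg's technique above combine naturally: Romberg applies Richardson extrapolation to trapezoid estimates on successively halved meshes. An illustrative Python sketch (not course code):

```python
import math

def composite_trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals (O(h^2) error)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def romberg(f, a, b, levels=5):
    """Romberg table: trapezoid values down the first column, then
    Richardson extrapolation across each row."""
    R = [[composite_trapezoid(f, a, b, 2 ** k)] for k in range(levels)]
    for k in range(1, levels):
        for j in range(1, k + 1):
            R[k].append(R[k][j - 1] +
                        (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1))
    return R[-1][-1]

approx = romberg(math.sin, 0.0, math.pi)   # exact integral is 2
```

Each extrapolation column cancels the next even power of h in the error expansion, raising the order from 2 to 4 to 6 and so on.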


Week.11

5.4 and 5.5

  • Adaptive Numerical Integration
  • Gauss Quadrature Formulas
  • long divisions
  • changing variables for definite integrals
  • How to estimate the error on a subinterval
  • How to mark subintervals for further refinement
  • Implementation of adaptive numerical integration techniques.
  • Motivation and difficulties with the straightforward approach
  • Orthogonal polynomials
  • Legendre polynomials and their basic properties
  • Gauss quadrature rule based on Legendre polynomials
  • Degree of precision of Gauss Quadrature
  • Gauss quadrature formula on general interval and composite Gauss rules
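The Gauss quadrature rule on a general interval, listed above, is obtained by changing variables from [-1, 1]. An illustrative Python sketch of the two-point Gauss-Legendre rule (not course code):

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss-Legendre rule mapped from [-1, 1] to [a, b].
    Nodes are the roots +/- 1/sqrt(3) of the Legendre polynomial P_2;
    both weights are 1, and the degree of precision is 3."""
    mid, half = (a + b) / 2, (b - a) / 2
    t = 1 / math.sqrt(3)
    return half * (f(mid - half * t) + f(mid + half * t))

# Exact for cubics: the integral of x^3 over [0, 1] is 1/4.
val = gauss2(lambda x: x**3, 0.0, 1.0)
```

With only n nodes, a Gauss rule has degree of precision 2n - 1, which is the payoff over equally spaced Newton-Cotes nodes.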


Week.12

10.1 and 11.1

  • Discrete Fourier Transform and FFT
  • Discrete Cosine Transform (optional)
  • Image Compression (optional)
  • complex numbers and complex variables
  • integration by parts
  • convergence of sequences and series.
  • Fourier Series
  • Discrete Fourier Transform (DFT)
  • Matrix Form of the Discrete Fourier Transform
  • Inverse Discrete Fourier Transform
  • DFT and Trigonometric Interpolation
  • Algorithm for computing the DFT: Fast Fourier Transform (FFT)
  • Discrete Cosine Transform (DCT)
  • DCT and Interpolation by Cosine Functions
  • Relation between DFT and DCT
  • Fourier Transform of 2-Dimensional Functions
  • DCT of 2-Dimensional Functions
  • Interpolation Theorem for 2-Dimensional DCT
  • Digital grayscale images and color images
  • RGB format
  • YCbCr (or YUV) format
  • Conversion between RGB and YUV formats
  • Quantization, Image Compression and Decompression
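The DFT above can be written directly from its definition as an n-point sum; the FFT computes the same values in O(n log n) by splitting into even- and odd-indexed terms. An illustrative naive O(n^2) Python sketch (not course code):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform:
    X[k] = sum_j x[j] * exp(-2*pi*i*k*j/n)."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n))
            for k in range(n)]

X = dft([1, 0, 0, 0])   # a unit impulse has a flat spectrum
```

Written as a matrix-vector product, the coefficient matrix has entries w**(k*j) with w = exp(-2*pi*i/n), which is the "matrix form" in the outcomes above.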


Week.13

12.1 and 12.2

  • Power Iteration Methods
  • QR Algorithm for Computing Eigenvalues
  • properties of eigenvalues and eigenvectors
  • Gram-Schmidt orthogonalization
  • Power iteration and its rate of convergence.
  • Inverse Power Iteration
  • Inverse Power Iteration with Shift
  • Rayleigh Quotient Iteration


  • Definition and basic properties of orthogonal matrices
  • QR-Factorization based on Gram-Schmidt Orthogonalization
  • Normalized Simultaneous Iteration (NSI)
  • Unshifted QR Algorithm
  • Shifted QR Algorithm
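Power iteration, the first topic above, can be sketched with plain lists (illustrative Python, not course code); it assumes the largest eigenvalue is strictly dominant in magnitude:

```python
def power_iteration(A, x0, iters=200):
    """Power iteration: repeatedly apply A and normalize; the iterate
    aligns with the dominant eigenvector and the scale factor converges
    to the magnitude of the dominant eigenvalue."""
    x = x0[:]
    lam = 0.0
    for _ in range(iters):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]  # y = A x
        lam = max(abs(v) for v in y)    # infinity-norm scale factor
        x = [v / lam for v in y]        # normalize to avoid overflow
    return lam, x

# Symmetric example with eigenvalues 3 and 1; dominant eigenvector [1, 1].
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.3])
```

The error contracts like |lambda_2 / lambda_1|**k per step; inverse iteration with a shift, listed above, applies the same idea to (A - sI)^{-1} to target the eigenvalue nearest s.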


Week.14

12.2

  • Algorithm for Computing Eigenvalues: Speeding up the QR Algorithm
  • matrices for orthogonal projection and reflection
  • block matrices and their products
  • similar matrices.
  • Upper Hessenberg form (UHF)
  • Householder Reflector
  • Convert a matrix into UHF by Householder reflectors