MAT3633

From Department of Mathematics at UTSA

Course Catalog

MAT 3633. Numerical Analysis. (3-0) 3 Credit Hours.

Prerequisites: MAT2233, MAT3213, and one of the following: CS1063, CS1714, or CS2073. Solution of linear and nonlinear equations, curve-fitting, and eigenvalue problems. Generally offered: Fall, Spring. Differential Tuition: $150.

Topics List

Each week below lists the textbook sections covered, followed by the topics, prerequisite skills, and student learning outcomes.
Week.1

0.2 and 1.1

  • Loss of significant digits
  • Bisection Method
  • Brief introduction to MATLAB
  • Binary number system
  • Taylor's theorem
  • Intermediate value theorem
  • Nested multiplication for evaluating polynomials
  • Machine representation of real numbers
  • Loss of significant digits in numerical computing
  • Review of Taylor's Theorem
  • Bisection method and implementation (see the sketch below)
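
A minimal sketch of the bisection method (the course itself introduces MATLAB; the Python below is purely illustrative). It assumes f is continuous on [a, b] with f(a) and f(b) of opposite signs, so the intermediate value theorem guarantees a root:

  def bisection(f, a, b, tol=1e-8, max_iter=100):
      # Halve the bracketing interval until it is shorter than 2*tol.
      fa, fb = f(a), f(b)
      if fa * fb > 0:
          raise ValueError("f(a) and f(b) must have opposite signs")
      for _ in range(max_iter):
          c = (a + b) / 2
          fc = f(c)
          if fc == 0 or (b - a) / 2 < tol:
              return c
          if fa * fc < 0:      # the root lies in [a, c]
              b, fb = c, fc
          else:                # the root lies in [c, b]
              a, fa = c, fc
      return (a + b) / 2

  # Example: root of x^3 + x - 1 = 0 on [0, 1], approximately 0.6823
  print(bisection(lambda x: x**3 + x - 1, 0.0, 1.0))
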
Week.2

1.2 and 1.3

  • Fixed-Point Iteration
  • Limits of Accuracy: Conditioning of problems
  • Limit of sequences
  • Multiplicity of solutions of equations
  • Geometric interpretation
  • Convergence of fixed-point iterations (see the sketch below)
  • Order of convergence of iterative methods
  • Wilkinson polynomial and other examples
  • Sensitivity analysis of root-finding
  • Error magnification factor for solution of equations
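
An illustrative Python sketch of fixed-point iteration x_{k+1} = g(x_k); it converges locally when |g'(r)| < 1 at the fixed point r (g = cos has a unique fixed point near 0.7391):

  import math

  def fixed_point(g, x0, tol=1e-10, max_iter=200):
      # Iterate x = g(x) until successive iterates agree to within tol.
      x = x0
      for _ in range(max_iter):
          x_new = g(x)
          if abs(x_new - x) < tol:
              return x_new
          x = x_new
      return x

  print(fixed_point(math.cos, 1.0))   # approximately 0.7390851
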
Week.3

1.4 and 1.5

  • Newton's Method
  • Root-Finding without Derivatives
  • Remainder of Taylor's series
  • Intermediate value theorem
  • Algebraic and geometric interpretation of Newton's method
  • Error analysis for Newton's method based on Taylor's theorem
  • Newton's method as a fixed point iteration
  • Modified Newton's method and its rate of convergence
  • Secant Method and its convergence (see the sketch below)
  • Method of False Position and Muller's Method
  • Stopping criteria for iterative methods
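
Illustrative Python sketches of Newton's method and the secant method; both use the size of the last step as the stopping criterion:

  def newton(f, df, x0, tol=1e-12, max_iter=50):
      x = x0
      for _ in range(max_iter):
          step = f(x) / df(x)      # Newton step: x_{k+1} = x_k - f(x_k)/f'(x_k)
          x -= step
          if abs(step) < tol:
              break
      return x

  def secant(f, x0, x1, tol=1e-12, max_iter=50):
      f0, f1 = f(x0), f(x1)
      for _ in range(max_iter):
          x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # derivative replaced by a difference quotient
          x0, f0, x1, f1 = x1, f1, x2, f(x2)
          if abs(x1 - x0) < tol:
              break
      return x1

  print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))   # sqrt(2)
  print(secant(lambda x: x**2 - 2, 1.0, 2.0))               # sqrt(2)
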


Week.4

2.1 and 2.2

  • Solve Systems of Linear Equations: Gaussian Elimination
  • Solve Systems of Linear Equations: LU Decomposition
  • Matrix-matrix products and matrix-vector products
  • Inverse matrix
  • Elementary row operations
  • Products and inverses of matrices for elementary row operations
  • Gaussian elimination and its operation counts
  • Gaussian elimination with pivoting
  • Implementation of Gauss elimination
  • Matrices for elementary row operations
  • Gauss elimination as matrix products
  • Advantages of solutions by LU decomposition (see the sketch below)
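
An illustrative Python sketch of LU decomposition by Gaussian elimination without pivoting (it assumes every pivot is nonzero; practical codes use partial pivoting), followed by forward and back substitution to solve Ax = b:

  def lu_decompose(A):
      n = len(A)
      L = [[float(i == j) for j in range(n)] for i in range(n)]   # unit lower triangular
      U = [row[:] for row in A]
      for k in range(n - 1):
          for i in range(k + 1, n):
              m = U[i][k] / U[k][k]          # multiplier, stored in L
              L[i][k] = m
              for j in range(k, n):
                  U[i][j] -= m * U[k][j]     # elementary row operation
      return L, U

  def lu_solve(L, U, b):
      n = len(b)
      y = [0.0] * n
      for i in range(n):                     # forward substitution: L y = b
          y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
      x = [0.0] * n
      for i in reversed(range(n)):           # back substitution: U x = y
          x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
      return x

  A = [[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]]
  L, U = lu_decompose(A)
  print(lu_solve(L, U, [4.0, 10.0, 24.0]))   # [1.0, 1.0, 1.0]
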
Week.5

2.3 and 2.5

  • Error Analysis for Solution of Ax=b
  • Iterative Methods for Solving Ax=b
  • Length of vectors
  • Eigenvalues and eigenvectors of a matrix
  • Various norms for vectors and matrices; compatibility of vector and matrix norms
  • Error Analysis for the solution of Ax=b
  • Error magnification factor and condition number of matrix
  • Jacobi method, Gauss-Seidel method, and Successive Over-Relaxation (SOR) method (see the sketch below)
  • Convergence of the Jacobi, Gauss-Seidel, and SOR methods
  • Spectral radius of a matrix
  • Convergence of general iterative methods for solving systems of linear equations
  • Sparse matrices
  • Comparison of Gauss elimination and iterative methods
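
An illustrative Python sketch of the Jacobi iteration; the example matrix is strictly diagonally dominant, which guarantees convergence:

  def jacobi(A, b, tol=1e-10, max_iter=500):
      n = len(b)
      x = [0.0] * n
      for _ in range(max_iter):
          # Each component uses only values from the previous iterate.
          x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                   for i in range(n)]
          if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
              return x_new
          x = x_new
      return x

  A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
  print(jacobi(A, [5.0, 6.0, 5.0]))   # converges to [1, 1, 1]
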


Week.6

2.6 and 2.7

  • Conjugate Gradient Method
  • Nonlinear System of Equations
  • Scalar product of vectors
  • Determinant and eigenvalues of a matrix
  • Quadratic polynomials of n variables
  • Partial derivatives and gradients
  • Chain rule for partial derivatives
  • Symmetric positive definite matrix and properties
  • Construction of the Conjugate Gradient (CG) Method (see the sketch below)
  • Properties of the CG Method
  • Preconditioning for the CG method
  • Taylor's Theorem for multivariate vector-valued functions
  • Newton's Method
  • Broyden's Method
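
An illustrative Python/NumPy sketch of the conjugate gradient method for a symmetric positive definite system:

  import numpy as np

  def conjugate_gradient(A, b, tol=1e-10):
      n = len(b)
      x = np.zeros(n)
      r = b - A @ x                      # residual
      p = r.copy()                       # first search direction
      rs_old = r @ r
      for _ in range(n):                 # at most n steps in exact arithmetic
          Ap = A @ p
          alpha = rs_old / (p @ Ap)      # step length along p
          x += alpha * p
          r -= alpha * Ap
          rs_new = r @ r
          if np.sqrt(rs_new) < tol:
              break
          p = r + (rs_new / rs_old) * p  # next A-conjugate direction
          rs_old = rs_new
      return x

  A = np.array([[4.0, 1.0], [1.0, 3.0]])
  print(conjugate_gradient(A, np.array([1.0, 2.0])))   # approximately [0.0909, 0.6364]
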
Week.7

3.1 and 3.2

  • Data and Interpolating Functions
  • Interpolation Error and Runge Phenomenon
  • Chebyshev interpolation
  • Fundamental theorem of algebra
  • Rolle's theorem
  • Lagrange basis functions
  • Properties of Lagrange basis functions
  • Lagrange form of the interpolation polynomials
  • Newton's divided differences
  • Properties of Newton's divided differences
  • Newton's form of the interpolation polynomials (see the sketch below)
  • Interpolation error analysis
  • Runge phenomenon
  • Chebyshev Polynomial
  • Error estimates for Chebyshev interpolation
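
An illustrative Python sketch of Newton's divided differences, with the interpolating polynomial evaluated in Newton form by nested multiplication:

  def divided_differences(xs, ys):
      # coef[k] ends up equal to f[x_0, ..., x_k].
      n = len(xs)
      coef = list(ys)
      for j in range(1, n):
          for i in range(n - 1, j - 1, -1):
              coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
      return coef

  def newton_eval(coef, xs, x):
      # Nested multiplication (Horner-like) in the Newton basis.
      result = coef[-1]
      for k in range(len(coef) - 2, -1, -1):
          result = result * (x - xs[k]) + coef[k]
      return result

  xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 5.0, 10.0]   # samples of x^2 + 1
  coef = divided_differences(xs, ys)
  print(newton_eval(coef, xs, 1.5))                      # 3.25
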


Week.8

3.4, 3.5 and 4.1

  • Cubic Splines
  • Bezier Curves
  • Least Square Method
  • One-sided limits
  • Continuity of functions
  • Indefinite integrals
  • Extremum values of multivariate quadratic functions
  • Cubic splines
  • construction of cubic splines for interpolation
  • end conditions
  • properties of cubic spline interpolation
  • Bezier Curve and fonts
  • Least squares method for solving inconsistent systems of linear equations (see the sketch below)
  • Basic properties of least squares solutions
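
An illustrative Python/NumPy sketch of the least squares solution of an inconsistent system Ax ≈ b via the normal equations A^T A x = A^T b (adequate for small, well-conditioned problems):

  import numpy as np

  A = np.array([[1.0, 1.0],
                [1.0, 2.0],
                [1.0, 3.0]])        # fit y = c0 + c1*t at t = 1, 2, 3
  b = np.array([2.0, 3.0, 5.0])     # observed values

  x = np.linalg.solve(A.T @ A, A.T @ b)
  print(x)                          # least squares coefficients [1/3, 3/2]
  print(A @ x - b)                  # residual vector, not exactly zero
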


Week.9

4.2 and 4.5

  • Mathematical Models and Data Fitting
  • Nonlinear Least Square Fitting
  • Linear spaces, basis functions
  • Product rule and chain rule for vector-valued multivariate functions
  • Least square method for curve fitting and statistical modeling.
  • Survey of models: linear, periodic, exponential, and logistic models, etc.
  • Taylor's theorem for vector-valued multivariate functions
  • Gauss-Newton Method (see the sketch below)
  • Levenberg-Marquardt Method
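
An illustrative Python/NumPy sketch of a basic Gauss-Newton loop for fitting the model y ≈ a*exp(b*t); the data and starting guess below are made up for illustration:

  import numpy as np

  def gauss_newton(t, y, a, b, iters=10):
      for _ in range(iters):
          model = a * np.exp(b * t)
          r = y - model                                  # residuals
          J = np.column_stack([np.exp(b * t),            # d(model)/da
                               a * t * np.exp(b * t)])   # d(model)/db
          delta = np.linalg.solve(J.T @ J, J.T @ r)      # normal equations for the step
          a, b = a + delta[0], b + delta[1]
      return a, b

  t = np.array([0.0, 1.0, 2.0, 3.0])
  y = np.array([2.0, 2.7, 3.8, 5.1])      # roughly 2*exp(0.3*t)
  print(gauss_newton(t, y, a=1.0, b=0.1))
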


Week.10

5.1, 5.2 and 5.3

  • Numerical Differentiation
  • Numerical Integration: Newton-Cotes Formulas
  • Numerical Integration: Romberg's Technique
  • Taylor's theorem
  • interpolation error estimates
  • Properties of definite integrals
  • Finite difference (FD) approximations of 1st order derivative and their error analysis
  • FD approximations of 2nd order derivatives and their error analysis
  • Undetermined coefficient method for FD approximation
  • Extrapolation technique for improving the order of approximation
  • Midpoint rule, trapezoid rule, and Simpson's rule (see the sketch below)
  • Error analysis based on Taylor's Theorem and interpolation errors
  • Degree of precision of quadrature rules
  • Composite quadrature rules
  • Motivation, construction and implementation of Romberg's technique.
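
Illustrative Python sketches of the composite trapezoid and Simpson rules (errors O(h^2) and O(h^4), respectively, for smooth integrands):

  import math

  def trapezoid(f, a, b, n):
      h = (b - a) / n
      total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
      return h * total

  def simpson(f, a, b, n):      # n must be even
      h = (b - a) / n
      total = f(a) + f(b)
      total += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd nodes
      total += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even interior nodes
      return h * total / 3

  print(trapezoid(math.sin, 0.0, math.pi, 64))   # about 2, O(h^2) error
  print(simpson(math.sin, 0.0, math.pi, 64))     # about 2, O(h^4) error
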


Week.11

5.4 and 5.5

  • Adaptive Numerical Integration
  • Gauss Quadrature Formulas
  • Long division
  • Change of variables for definite integrals
  • How to estimate the error on a subinterval
  • How to mark subintervals for further refinement
  • Implementation of adaptive numerical integration techniques.
  • Motivation and difficulties with straightforward approach.
  • Orthogonal polynomials
  • Legendre polynomials and their basic properties
  • Gauss quadrature rule based on Legendre polynomials (see the sketch below)
  • Degree of precision of Gauss Quadrature
  • Gauss quadrature formula on general interval and composite Gauss rules
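
An illustrative Python sketch of the two-point Gauss-Legendre rule mapped from [-1, 1] to a general interval (exact for polynomials of degree at most 3), together with a composite version:

  import math

  def gauss2(f, a, b):
      mid, half = (a + b) / 2, (b - a) / 2
      node = 1 / math.sqrt(3)          # Legendre nodes are +/- 1/sqrt(3), weights are 1
      return half * (f(mid - half * node) + f(mid + half * node))

  def composite_gauss2(f, a, b, n):
      h = (b - a) / n
      return sum(gauss2(f, a + i * h, a + (i + 1) * h) for i in range(n))

  print(gauss2(lambda x: x**3, 0.0, 1.0))          # exactly 0.25
  print(composite_gauss2(math.exp, 0.0, 1.0, 8))   # approximately e - 1
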


Week.12

10.1, 11.1 and 11.2

  • Discrete Fourier Transform and FFT
  • Discrete Cosine Transform (optional)
  • Image Compression (optional)
  • Complex numbers and complex variables
  • Integration by parts
  • Convergence of sequences and series
  • Fourier series
  • Discrete Fourier Transform (DFT)
  • Matrix form of the Discrete Fourier Transform
  • Inverse Discrete Fourier Transform
  • DFT and trigonometric interpolation
  • Algorithm for computing the DFT: the Fast Fourier Transform (FFT) (see the sketch below)
  • Discrete Cosine Transform (DCT)
  • DCT and interpolation by cosine functions
  • Relation between the DFT and the DCT
  • Fourier transform of 2-dimensional functions
  • DCT of 2-dimensional functions
  • Interpolation theorem for the 2-dimensional DCT
  • Digital grayscale and color images
  • RGB format
  • YCbCr (or YUV) format
  • Conversion between RGB and YUV formats
  • Quantization, Image Compression and Decompression
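
An illustrative Python sketch of the radix-2 Cooley-Tukey FFT (the input length must be a power of 2), checked against NumPy's FFT:

  import cmath
  import numpy as np

  def fft(x):
      n = len(x)
      if n == 1:
          return list(x)
      even, odd = fft(x[0::2]), fft(x[1::2])      # split into even and odd indices
      out = [0] * n
      for k in range(n // 2):
          twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
          out[k] = even[k] + twiddle
          out[k + n // 2] = even[k] - twiddle
      return out

  x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
  print(np.allclose(fft(x), np.fft.fft(x)))       # True
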


Week.13

12.1 and 12.2

  • Power Iteration Methods
  • QR Algorithm for Computing Eigenvalues
  • Properties of eigenvalues and eigenvectors
  • Gram-Schmidt orthogonalization
  • Power iteration and its rate of convergence (see the sketch below)
  • Inverse Power Iteration
  • Inverse Power Iteration with Shift
  • Rayleigh Quotient Iteration


  • Definition and basic properties of orthogonal matrices
  • QR factorization based on Gram-Schmidt orthogonalization
  • Normalized Simultaneous Iteration (NSI)
  • Unshifted QR Algorithm
  • Shifted QR Algorithm
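
An illustrative Python/NumPy sketch of power iteration with normalization at each step and a Rayleigh quotient estimate of the dominant eigenvalue:

  import numpy as np

  def power_iteration(A, iters=200):
      x = np.ones(A.shape[0])
      for _ in range(iters):
          x = A @ x
          x = x / np.linalg.norm(x)     # normalize to avoid overflow or underflow
      lam = x @ (A @ x)                 # Rayleigh quotient, since ||x|| = 1
      return lam, x

  A = np.array([[2.0, 1.0], [1.0, 3.0]])
  lam, v = power_iteration(A)
  print(lam)                            # approximately 3.618, the dominant eigenvalue
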


Week.14

12.2

  • Algorithm for Computing Eigenvalues: Speeding Up the QR Algorithm
  • Matrices for orthogonal projection and reflection
  • Block matrices and their products
  • Similar matrices
  • Upper Hessenberg form (UHF)
  • Householder Reflector
  • Convert a matrix into UHF by Householder reflectors (see the sketch below)
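
An illustrative Python/NumPy sketch of reduction to upper Hessenberg form by Householder reflectors; the result is similar to the input matrix, so it has the same eigenvalues:

  import numpy as np

  def hessenberg(A):
      H = np.array(A, dtype=float)
      n = H.shape[0]
      for k in range(n - 2):
          x = H[k + 1:, k]
          v = x.copy()
          v[0] += np.copysign(np.linalg.norm(x), x[0])   # Householder vector
          norm_v = np.linalg.norm(v)
          if norm_v == 0:
              continue
          v = v / norm_v
          # Apply P = I - 2*v*v^T from the left and the right (a similarity transform).
          H[k + 1:, k:] -= 2.0 * np.outer(v, v @ H[k + 1:, k:])
          H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
      return H

  A = np.array([[4.0, 1.0, 2.0, 3.0],
                [1.0, 3.0, 0.0, 1.0],
                [2.0, 0.0, 2.0, 1.0],
                [3.0, 1.0, 1.0, 1.0]])
  H = hessenberg(A)
  print(np.round(H, 6))                 # (near-)zero entries below the first subdiagonal
  print(np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(H))))
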

Topics List B (Wiki Format)

Each entry below lists the week, the textbook section covered, the prerequisite skills, and the student learning outcomes.
Week 1
  • Section 0.2: Loss of significant digits
  • Binary Number System
  • Taylor's Theorem
  • Nested Multiplication for Evaluating Polynomials
  • Machine Representation of Real Numbers
  • Loss of Significant Digits in Numerical Computing
  • Review of Taylor's Theorem
Week 1
  • Section 1.1: Bisection Method
  • Intermediate Value Theorem
  • Bisection Method and Implementation
  • Brief Introduction to Matlab
Week 2
  • Section 1.2: Fixed-Point Iteration
  • Limit of Sequences
  • Solution Multiplicity of Equations
  • Geometric Interpretation of Fixed-Point Iteration
  • Convergence of Fixed Point Iterations
  • Order of Convergence of Iterative Methods
Week 2
  • Section 1.3: Limits of Accuracy: Conditioning of Problems
  • Limit of Sequences
  • Solution Multiplicity of Equations
  • Sensitivity Analysis of Root-Finding
  • Error Magnification Factor for Solution of Equations
Week 3
  • Section 1.4: Newton's Method
  • Remainder of Taylor's Series
  • Intermediate Value Theorem
  • Fixed-Point Iteration
  • Algebraic and Geometric Interpretation of Newton's method
  • Error Analysis for Newton's Method Based on Taylor's Theorem
  • Newton's Method as a Fixed Point Iteration
  • Modified Newton's Method and its Rate of Convergence
Week 3
  • Section 1.5 Root-Finding Without Derivatives
  • Remainder of Taylor's Series
  • Intermediate Value Theorem
  • Secant Method and its Convergence
  • Stopping Criteria for Iterative Methods
Week 4
  • Section 2.1 Solve Systems of Linear Equations: Gaussian Elimination
  • Elementary Row Operations
  • Gaussian Elimination and its Operation Counts
  • Gaussian Elimination with Pivoting
  • Implementation of Gauss Elimination
Week 4
  • Section 2.2 Solve Systems of Linear Equations: LU Decomposition
  • Matrix-Matrix Products
  • Matrix-Vector Products
  • Inverse Matrix
  • Elementary Row Operations
  • Matrices for Elementary Row Operations
  • Gauss Elimination as Matrix Products
  • Advantages of Solutions by LU Decomposition
Week 5
  • Section 2.3 Error Analysis for Solution of Ax=b
  • Length of Vectors
  • Eigenvalues of a Matrix
  • Eigenvectors of a Matrix
  • Various Norms for Vectors and Matrices: Compatibility of Vector and Matrix Norms
  • Error Analysis for Solution of Ax=b
  • Error Magnification Factor and Condition Number of Matrix
Week 5
  • Section 2.5: Iterative Methods for Solving Ax=b
  • Length of Vectors
  • Eigenvalues of a Matrix
  • Eigenvectors of a Matrix
  • Convergence of General Iterative Method for Solving System of Linear Equations
  • Comparison of Gauss Elimination and Iterative Methods
Week 6
  • Section 2.6: Conjugate Gradient (CG) Method
  • Scalar Product of Vectors
  • Determinant of a Matrix
  • Eigenvalues of a Matrix
  • Quadratic Polynomials of n-variables
  • Partial Derivatives
  • Gradients
  • Chain Rule for Partial Derivatives
  • Symmetric Positive Definite Matrix and Properties
  • Construction of Conjugate Gradient (CG) Method
  • Properties of CG Method
  • Preconditioning for CG Method
Week 6
  • Section 2.7: Nonlinear System of Equations
  • Scalar Product of Vectors
  • Determinant of a Matrix
  • Eigenvalues of a Matrix
  • Quadratic Polynomials of n-variables
  • Partial Derivatives
  • Gradients
  • Chain Rule for Partial Derivatives
  • (TBD)
Week 7
  • Sections 3.1: Data and Interpolating Functions
  • Fundamental Theorem of Algebra
  • Rolle's Theorem
  • Properties of Lagrange Basis Functions
  • Lagrange Form of the Interpolation Polynomials
  • Properties of Newton's Divided Differences
  • Newton's Form of the Interpolation Polynomials
Week 7
  • Section 3.2: Interpolation Error and Runge Phenomenon
  • Fundamental Theorem of Algebra
  • Rolle's Theorem
  • (TBD)
Week 8
  • Section 3.4: Cubic Splines
  • One-Sided Limits
  • Continuity of Functions
  • Indefinite Integrals
  • Extremum Values of Multivariate Quadratic Functions
  • Construction of Cubic Splines for Interpolation
  • End Conditions
  • Properties of Cubic Spline Interpolation
Week 8
  • Section 3.5: Bezier Curves
  • One-Sided Limits
  • Continuity of Functions
  • Indefinite Integrals
  • Extremum Values of Multivariate Quadratic Functions
  • Bezier Curve and Fonts
Week 8
  • Section 4.1: Least Square Method
  • One-Sided Limits
  • Continuity of Functions
  • Indefinite Integrals
  • Extremum Values of Multivariate Quadratic Functions
  • Least Square Method for Solving Inconsistent System of Linear Equations
  • Basic Properties of Least Square Solutions
Week 9
  • Section 4.2: Mathematical Models and Data Fitting
  • Linear Spaces
  • Basis Functions
  • Product Rule for Vector Valued Multivariate Functions
  • Chain Rule for Vector Valued Multivariate Functions
  • Least square method for curve fitting and statistical modeling
  • Survey of Models: linear model, periodic model, exponential models, logistic model, etc
Week 9
  • Section 4.5: Nonlinear Least Square Fitting
  • Linear Spaces
  • Basis Functions
  • Product Rule for Vector Valued Multivariate Functions
  • Chain Rule for Vector Valued Multivariate Functions
  • (TBD)
Week 10
  • Section 5.1: Numerical Differentiation
  • Taylor's Theorem
  • Interpolation Error Estimates
  • Properties of Definite Integrals
  • Finite Difference (FD) Approximations of 1st order Derivative and Their Error Analysis
  • FD approximations of 2nd order Derivatives and Their Error Analysis
  • Undetermined Coefficient Method for FD Approximation
  • Extrapolation Technique for Improving the Order of Approximation
Week 10
  • Section 5.2: Numerical Integration: Newton-Cotes Formulas
  • Taylor's Theorem
  • Interpolation Error Estimates
  • Properties of Definite Integrals
  • Error Analysis based on Taylor's Theorem
  • Error Analysis based on Interpolation Errors
  • Degree of Precision of Quadrature Rules
Week 10
  • Section 5.3: Numerical Integration: Romberg's Technique
  • Taylor's Theorem
  • Interpolation Error Estimates
  • Properties of Definite Integrals
  • Motivation, construction and implementation of Romberg's Technique.
Week 11
  • Section 5.4: Adaptive Numerical Integration
  • Long Divisions
  • Substitution Methods for definite integrals
  • How to estimate the error on a subinterval
  • How to mark subintervals for further refinement
Week 11
  • Section 5.5: Gauss Quadrature Formulas
  • Long Divisions
  • Substitution Methods for definite integrals
  • Motivation and difficulties with straightforward approach
  • Legendre polynomials and their basic properties
  • Gauss Quadrature rule based on Legendre polynomials
  • Degree of precision of Gauss Quadrature
  • Gauss quadrature formula on general interval and composite Gauss rules
Week 12
  • Section 10.1: Discrete Fourier Transform and Fast Fourier Transform (FFT)
  • Complex Numbers
  • Complex Variables
  • Integration by Parts
  • Convergence of Sequences
  • Convergence of Series
  • Matrix Form of Discrete Fourier Transform
  • DFT and Trigonometric Interpolation
Week 12
  • Section 11.1: Discrete Cosine Transform (optional)
  • Complex Numbers
  • Complex Variables
  • Integration by Parts
  • Convergence of Sequences
  • Convergence of Series
  • DCT and Interpolation by Cosine Functions
  • Relation between DFT and DCT
  • Fourier Transform of 2-Dimensional Functions
  • DCT of 2-Dimensional Functions
  • Interpolation Theorem for 2-Dimensional DCT
Week 12
  • Section 11.2: Image Compression (optional)
  • Complex Numbers
  • Complex Variables
  • Integration by Parts
  • Convergence of Sequences
  • Convergence of Series
  • Digital grayscale and color images
  • RGB format
  • YCbCr (or YUV) format
  • Conversion between RGB and YUV formats
Week 13
  • Section 12.1: Power Iteration Methods
  • Eigenvalues
  • Eigenvectors
  • Orthonormal Bases and the Gram-Schmidt Process
  • Convergence of Power Iteration Methods
Week 13
  • Section 12.2: QR Algorithm for Computing Eigenvalues
  • Eigenvalues
  • Eigenvectors
  • Orthonormal Bases and the Gram-Schmidt Process
  • Definition and basic properties of orthogonal matrices
  • QR-Factorization based on Gram-Schmidt Orthogonalization
Week 14
  • Section 12.2: QR Algorithm for Computing Eigenvalues
  • Matrices for Orthogonal Projection
  • Matrices for Reflection
  • Block Matrices
  • Similar Matrices
  • Convert a matrix into upper Hessenberg form (UHF) by Householder reflectors