MAT2253

From Department of Mathematics at UTSA

List of Topics edited by Vu Hoang, Stephen Peña, Juan B. Gutiérrez.
Latest revision as of 10:09, 17 March 2026

Applied Linear Algebra

Prerequisite: [[MAT1214]]/[[MAT1213]] Calculus I

This comprehensive course in linear algebra provides an in-depth exploration of core concepts and their applications to optimization, data analysis, and neural networks. Students will gain a strong foundation in the fundamental notions of linear systems of equations, vectors, and matrices, as well as advanced topics such as eigenvalues, eigenvectors, and canonical solutions to linear systems of differential equations. The course also explores the critical techniques of calculus operations on vectors and matrices, optimization, and Taylor series in one and multiple variables. By the end of the course, students will have a thorough understanding of the mathematical framework underlying principal component analysis, gradient descent, and the implementation of simple neural networks.
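As a taste of the kind of computation the course builds toward, here is a minimal gradient-descent sketch in Python. It is illustrative only: the quadratic objective and step size are hypothetical choices, not taken from the course materials.

```python
# Gradient descent on a hypothetical quadratic f(x, y) = x**2 + 10 * y**2.
# Its gradient is (2x, 20y); repeatedly stepping against the gradient
# drives the iterate toward the unique minimizer at (0, 0).

def grad_f(x, y):
    return 2.0 * x, 20.0 * y

def gradient_descent(x, y, step=0.05, iters=200):
    for _ in range(iters):
        gx, gy = grad_f(x, y)
        x -= step * gx
        y -= step * gy
    return x, y

x_min, y_min = gradient_descent(3.0, 2.0)
# x_min and y_min are now within 1e-6 of the minimizer (0, 0).
```

Sessions 30 onward develop exactly this idea rigorously: why the negative gradient is the direction of steepest descent, how the step size affects convergence, and how backpropagation applies the same update to neural-network weights.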

The primary textbook is "Mathematics for Machine Learning" by Deisenroth, Faisal, and Ong, 2020, Cambridge University Press. The book is available for free for personal use at https://mml-book.github.io/book/mml-book.pdf

The secondary textbook is "Pattern Recognition and Machine Learning" by Bishop, 2006, Springer Information Science and Statistics. The book is available for free for personal use at https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf

{| class="wikitable"
|-
! Session !! Section !! Topic !! Prerequisites !! SLOs
|-
| 1 || 2.1 || Systems of Linear Equations ||  ||
|-
| 2 || 2.2 || Matrices ||  ||
|-
| 3 || 2.3 || Solving systems of linear equations ||  ||
|-
| 4 || 2.4 || Vector spaces ||  ||
|-
| 5 || 2.5 || Linear Independence ||  ||
|-
| 6 || 2.6 || Basis & Rank ||  ||
|-
| 7 || Exam 1 ||  ||  ||
|-
| 8 || 2.7 || Linear Mappings ||  ||
|-
| 9 || 2.7 || Linear Mappings (examples) ||  ||
|-
| 10 || 4.1 || Determinant and Trace ||  ||
|-
| 11 || 4.2 || Eigenvalues & Eigenvectors ||  ||
|-
| 12 || 4.3, 4.4 || Matrix Factorization ||  ||
|-
| 13 || 3.1, 3.2, 3.3 || Norms, Inner Products, Lengths & Distances ||  ||
|-
| 14 || 3.4 || Angles & orthogonality ||  ||
|-
| 15 || 3.5 || Orthonormal Basis ||  ||
|-
| 16 || 3.7 || Inner Product of Functions ||  ||
|-
| 17 || Project 1 ||  ||  ||
|-
| 18 || 5.1 || Vector Calculus Intro and Taylor Series ||  ||
|-
| 19 || 5.1, 5.2 || Differentiation Rules Review and Partial Derivatives ||  ||
|-
| 20 || 5.2 || Gradients: Examples, visualizations, computation ||  ||
|-
| 21 || 5.3 || Gradients of Vector-Valued Functions ||  ||
|-
| 22 || 5.4, Dhrymes 78 || Gradients of Matrices ||  ||
|-
| 23 || Exam 2 ||  ||  ||
|-
| 24 || 5.5, Dhrymes 78 || Useful Identities for Computing Gradients ||  ||
|-
| 25 || 5.7 || Higher-Order Derivatives ||  ||
|-
| 26 || Notes || Minimization via Newton's Method & Backpropagation ||  ||
|-
| 27 || Project 2 ||  ||  ||
|-
| 28 || 5.8 || Multivariate Taylor Series ||  ||
|-
| 29 || Notes || Linear optimization: Simplex method ||  ||
|-
| 30 || 7.1 || Optimization Using Gradient Descent ||  ||
|-
| 31 || 7.2 and Notes || Constrained Optimization and Lagrange Multipliers: PCA ||  ||
|-
| 32 || 7.3 || Convex Optimization (time permitting) ||  ||
|-
| 33 || Bishop, Duda et al. || Feed-forward Artificial Neural Networks ||  ||
|-
| 34 || Bishop, Duda et al. || Backpropagation in ANNs ||  ||
|-
| 35 || Bishop, Duda et al. || Activation Functions: Linear & Nonlinear ||  ||
|-
| 36 || Bishop, Duda et al. || Step-by-step simple ANN ||  ||
|-
| 37 || Bishop, Duda et al. || Measures of performance ||  ||
|-
| 38 || Bishop, Duda et al. || More complex architectures of ANNs ||  ||
|-
| 39 || Final Project Introduction ||  ||  ||
|-
| 40 || Final project ||  ||  ||
|-
| 41 || Review ||  ||  ||
|}