[Machine Learning] Basic Math for ML with Python (Part 1)
Machine learning can largely be viewed as a geometry problem: data of every kind ends up represented as points and vectors in space, so understanding geometry is essential for solving machine learning problems.
[Numpy] Basic Linear Algebra
Linear algebra is the mathematical discipline that deals with vectors and matrices and, more generally, with vector spaces and linear transformations (Britannica).
In this lecture, you will learn
- Scalar, Vector, Matrix, Tensor
- Dot product & Norm
- Multiplication & Transpose & Invertible matrix
- Linear Transformation
- Eigenvalue & Eigenvector
- Cosine Similarity
and learn to perform these operations with NumPy.
1. Scalar, Vector, Matrix, Tensor
Scalar
- A scalar has only magnitude (size); it is just a single number
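A minimal NumPy sketch (the value 5 is chosen only for illustration) — a scalar corresponds to a 0-dimensional array:

```python
import numpy as np

# A scalar: a single number, i.e. a 0-dimensional array in NumPy
s = np.array(5)
print(s.ndim)   # 0
print(s.shape)  # ()
```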
Vector
- A vector has magnitude and direction; it is a list of numbers (written as a row or a column), such as $\vec{p} = (1, 2, 3)$
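For instance, the vector $\vec{p}$ above can be sketched as a 1-dimensional NumPy array:

```python
import numpy as np

# A vector: a 1-dimensional list of numbers
p = np.array([1, 2, 3])
print(p.ndim)   # 1
print(p.shape)  # (3,)
```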
Matrix
- A matrix is an array of numbers
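A small sketch (values chosen for illustration) — a matrix corresponds to a 2-dimensional array:

```python
import numpy as np

# A matrix: a 2-dimensional array of numbers (2 rows, 3 columns)
A = np.array([[1, 2, 3],
              [4, 5, 6]])
print(A.ndim)   # 2
print(A.shape)  # (2, 3)
```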
Tensor
Assuming a basis of a real vector space, e.g., a coordinate frame in the ambient space, a tensor can be represented as an organized multidimensional array of scalars (numerical values) with respect to this specific basis (Wikipedia).
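In NumPy terms, a tensor is simply an n-dimensional array; a sketch with a 3-dimensional example (values chosen for illustration):

```python
import numpy as np

# A tensor: an n-dimensional array; here n = 3 (shape 2 x 2 x 2)
T = np.array([[[1, 2], [3, 4]],
              [[5, 6], [7, 8]]])
print(T.ndim)   # 3
print(T.shape)  # (2, 2, 2)
```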
2. Dot product & Norm
Dot product
The dot product (also called the inner product) is the sum of the products of corresponding components: $\vec{a} \cdot \vec{b} = \sum_{k=1}^{n}{a_k b_k}$. NumPy calculates a vector's inner product easily with the dot() function.
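A sketch with two example vectors (values chosen for illustration):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Inner product: 1*4 + 2*5 + 3*6 = 32
print(np.dot(a, b))  # 32
print(a.dot(b))      # 32, equivalent method form
```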
The vector norm refers to the length of a vector. In machine learning, the most commonly used norms are the $L^2$ norm and the $L^1$ norm.
- $L^2$ Norm
$L^2$ is represented as $||\vec{x}||_2$. $$||\vec{x}||_2 = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} = \sqrt{\sum_{k=1}^{n}{x_k^2}}$$
- $L^1$ Norm
$L^1$ is represented as $||\vec{x}||_1$. $$||\vec{x}||_1 = |x_1| + |x_2| + \cdots + |x_n| = \sum_{k=1}^{n}{|x_k|}$$
In NumPy, a norm can be calculated with the linalg.norm() function.
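A sketch using the classic 3-4-5 vector (chosen so the results are easy to check by hand); `ord=1` selects the $L^1$ norm, and the default is $L^2$:

```python
import numpy as np

x = np.array([3, 4])

# L2 norm (default): sqrt(3^2 + 4^2) = 5.0
print(np.linalg.norm(x))         # 5.0

# L1 norm: |3| + |4| = 7.0
print(np.linalg.norm(x, ord=1))  # 7.0
```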
3. Multiplication & Transpose & Invertible matrix
Multiplication
NumPy's dot() function can also be used for matrix multiplication.
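A sketch with two small example matrices (values chosen for illustration):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Matrix product: (2x2) x (2x2) -> (2x2)
# e.g. top-left entry = 1*5 + 2*7 = 19
print(np.dot(A, B))
# [[19 22]
#  [43 50]]
```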
Transpose
The transpose of a matrix is simply a flipped version of the original matrix. We can transpose a matrix by switching its rows with its columns.
In NumPy, a matrix can be transposed by simply appending .T to the original matrix.
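A sketch (values chosen for illustration) showing rows becoming columns:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

# Transpose: rows become columns, shape (2, 3) -> (3, 2)
print(A.T)
# [[1 4]
#  [2 5]
#  [3 6]]
```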
Multiplying a matrix A by its transpose can be implemented as follows:
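A sketch with a small example matrix (values chosen for illustration; note the product of a matrix with its own transpose is always symmetric):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

# A (2x2) times its transpose (2x2) -> a symmetric (2x2) matrix
print(np.dot(A, A.T))
# [[ 5 11]
#  [11 25]]
```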
Invertible matrix
In linear algebra, an n-by-n square matrix is called invertible if the product of the matrix and its inverse is the identity matrix (CUEMATH):
$$ AB = BA = I_n$$ $$\implies B = A^{-1}$$
where,
- A is (n x n) invertible matrix
- B is (n x n) matrix called inverse of A
- $I_n$ is (n x n) identity matrix
- Implementing the inverse of a matrix
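A sketch using np.linalg.inv() (the example matrix is chosen for illustration and assumed to be non-singular); multiplying A by its computed inverse recovers the identity matrix, up to floating-point rounding:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Inverse of A; assumes A is square and non-singular
A_inv = np.linalg.inv(A)
print(A_inv)
# [[-2.   1. ]
#  [ 1.5 -0.5]]

# A @ A_inv should equal the identity matrix I_n (up to rounding)
print(np.allclose(np.dot(A, A_inv), np.eye(2)))  # True
```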