$$\color{Blue} {\textbf{Linear Algebra}}$$

$\textbf{1.Properties of determinants:-}$

The determinant is defined only for square matrices.

  1. $\mid A^{T}\mid = \mid A \mid $
  2. $\mid AB \mid = \mid A \mid \mid B \mid $
  3. $\mid A^{n} \mid = \big(\mid A \mid\big)^{n}$
  4. $\mid kA\mid = k^{n} \mid A \mid$, here $A$ is the $n\times n$ matrix.
  5. If two rows (or two columns) of a determinant are interchanged, the sign of the value of the determinant changes.
  6. If in determinant any row or column is completely zero, the value of the determinant is zero.
  7. If two rows (or two columns) of a determinant are identical, the value of the determinant is zero.
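
A quick numerical sanity check of these properties (a sketch using NumPy; the matrices $A$ and $B$ below are arbitrary non-singular examples chosen only for illustration):

```python
import numpy as np

det = np.linalg.det
n = 3
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
B = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 2.]])
k = 2.0

print(np.isclose(det(A.T), det(A)))                                # property 1: |A^T| = |A|
print(np.isclose(det(A @ B), det(A) * det(B)))                     # property 2: |AB| = |A||B|
print(np.isclose(det(np.linalg.matrix_power(A, 4)), det(A) ** 4))  # property 3: |A^n| = |A|^n
print(np.isclose(det(k * A), k ** n * det(A)))                     # property 4: |kA| = k^n |A|

A_swapped = A[[1, 0, 2], :]                                        # interchange rows 1 and 2
print(np.isclose(det(A_swapped), -det(A)))                         # property 5: sign changes
```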

$\textbf{2.Matrix Multiplication:-}$ 

It is defined for both square and non-square matrices, provided the number of columns of the first matrix equals the number of rows of the second.

Let $\mathbf{A_{m\times n}}$ and $\mathbf{B_{n\times p}}$ be two matrices. Then the product matrix $\mathbf{(AB)_{m\times p}}$ has

  1. Number of elements $=mp$
  2. Number of multiplications $ = (mp)n = mnp$
  3. Number of additions $ = mp(n-1)$
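
These counts can be verified with a naive triple-loop multiplication. The sketch below assumes NumPy; the helper `matmul_with_counts` is just an illustrative name, not a standard routine:

```python
import numpy as np

def matmul_with_counts(A, B):
    """Naive matrix product that counts scalar multiplications and additions."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must match"
    C = np.zeros((m, p))
    muls = adds = 0
    for i in range(m):
        for j in range(p):
            s = 0.0
            for t in range(n):
                s += A[i, t] * B[t, j]
                muls += 1
                adds += 1
            adds -= 1                      # summing n products needs only n-1 additions
            C[i, j] = s
    return C, muls, adds

m, n, p = 2, 3, 4
A = np.arange(m * n, dtype=float).reshape(m, n)
B = np.arange(n * p, dtype=float).reshape(n, p)
C, muls, adds = matmul_with_counts(A, B)

print(np.allclose(C, A @ B))       # matches the built-in product
print(C.size == m * p)             # number of elements  = mp
print(muls == m * n * p)           # multiplications     = mnp
print(adds == m * p * (n - 1))     # additions           = mp(n-1)
```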

_________________________________________________________________

$\color{Red} { \textbf{Key Points:-}}$

  1. $(Adj\: A)A = A(Adj\:A) = \mid A \mid I_{n}$
  2. $Adj(AB) = (Adj\:B)\cdot (Adj\: A)$
  3. $(AB)^{-1} = B^{-1}\cdot A^{-1}$
  4. $(AB)^{T} = B^{T}\cdot A^{T}$
  5. $(A^{T})^{-1} = (A^{-1})^{T}$
  6. $A\cdot A^{-1} = A^{-1} \cdot A = I$
  7. $Adj(Adj\:A) = \mid A\mid ^{n-2}\cdot A$
  8. $\mid Adj\: A \mid = \mid A \mid ^{n-1}$
  9. $\mid Adj(Adj\: A) \mid = \mid A \mid ^{{(n-1)}^{2}}$
  10. $Adj(A^{m}) = (Adj\:A)^{m}$
  11. $Adj(kA) = k^{n-1}(Adj \:A),k\in \mathbb{R}$
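
Several of these identities can be checked numerically. The sketch below assumes NumPy and computes the adjugate via $\text{adj}(A) = \mid A \mid A^{-1}$, which is valid only for non-singular matrices; the example matrices are arbitrary:

```python
import numpy as np

def adj(M):
    """Adjugate via adj(M) = det(M) * inv(M); valid only when M is non-singular.
    (For singular matrices the cofactor definition must be used instead.)"""
    return np.linalg.det(M) * np.linalg.inv(M)

n = 3
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
B = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 2.]])
k = 2.0
I = np.eye(n)
det = np.linalg.det

print(np.allclose(adj(A) @ A, det(A) * I))                                      # 1: (adj A) A = |A| I_n
print(np.allclose(adj(A @ B), adj(B) @ adj(A)))                                 # 2: adj(AB) = adj(B) adj(A)
print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A)))   # 3: (AB)^{-1} = B^{-1} A^{-1}
print(np.allclose((A @ B).T, B.T @ A.T))                                        # 4: (AB)^T = B^T A^T
print(np.allclose(adj(adj(A)), det(A) ** (n - 2) * A))                          # 7: adj(adj A) = |A|^{n-2} A
print(np.isclose(det(adj(A)), det(A) ** (n - 1)))                               # 8: |adj A| = |A|^{n-1}
print(np.allclose(adj(k * A), k ** (n - 1) * adj(A)))                           # 11: adj(kA) = k^{n-1} adj(A)
```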

 _________________________________________________________________

${\color{Magenta}{\textbf{Some More Points:-}} }$

  1. Minimum number of zeros in a diagonal matrix of order $n$ is $n(n-1).$
  2. $AB = \text{diag}(a_{1},a_{2},a_{3})\times \text{diag}(b_{1},b_{2},b_{3}) = \text{diag}(a_{1}b_{1},a_{2}b_{2},a_{3}b_{3})$
  3. For diagonal and triangular matrix (upper triangular or lower triangular) the determinant is equal to product of leading diagonal elements.
  4. The matrix which is both symmetric and skew-symmetric must be a null matrix.
  5. All the diagonal elements of the Skew Hermitian matrix are either zero or pure imaginary.
  6. All the diagonal elements of the Hermitian matrix are real.
  7. The determinant of Idempotent matrix is either $0$ or $1.$ 
  8. Determinant and Trace of the nilpotent matrix is zero.
  9. The inverse of the nilpotent matrix does not exist.
  10. $\color{green}{\checkmark}$ A square matrix all of whose eigenvalues are zero is a nilpotent matrix.

    $\color{green}{\checkmark}$ In linear algebra, a nilpotent matrix is a square matrix $A$ such that ${\displaystyle A^{k}=0\,}$ for some positive integer ${\displaystyle k}.$ The smallest such ${\displaystyle k}$ is sometimes called the index of ${\displaystyle A}$

    Example$:$ the matrix $A  = \begin{bmatrix} 0& 0\\1 &0 \end{bmatrix}$ is nilpotent with index $2,$ since $A^{2} = 0.$


$\color{Blue}{\textbf{Eigenvalues:-}}$ The number $\lambda$ is an eigenvalue of $A$ if and only if $A-\lambda I$ is singular: $\text{det}(A-\lambda I) = 0$

This “characteristic equation” $\text{det}(A-\lambda I) = 0$ involves only $\lambda$, not $x.$ When $A$ is $n \times n,$ the equation has degree $n.$ Then $A$ has $n$ eigenvalues, and each leads to an $x:$ for each $\lambda$, solve $(A-\lambda I)x = 0$ or $Ax = \lambda x$ to find an eigenvector $x.$
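
A small worked example, assuming NumPy: for a $2\times 2$ matrix the characteristic polynomial and its roots (the eigenvalues) can be computed directly, and each eigenvector can be checked against $Ax = \lambda x$:

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])

# Characteristic equation: det(A - lambda*I) = lambda^2 - 7*lambda + 10 = 0
coeffs = np.poly(A)            # coefficients of the characteristic polynomial
print(coeffs)                  # [ 1. -7. 10.]
print(np.roots(coeffs))        # eigenvalues: 5 and 2

# For each eigenvalue lambda, an eigenvector x satisfies (A - lambda*I) x = 0, i.e. A x = lambda x
lam, X = np.linalg.eig(A)      # columns of X are eigenvectors
for i in range(len(lam)):
    print(np.allclose(A @ X[:, i], lam[i] * X[:, i]))
```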

${\color{Orange}{\textbf{Properties of Eigen Values (or) Characteristics roots (or) Latent roots:-}} }$

 $\color{green}\checkmark\:$ Eigenvalues and eigenvectors are defined only for square matrices.

  1. The sum of eigen values of a matrix is equal to the trace of the matrix, where the sum of the elements of principal diagonal of a matrix is called the trace of the matrix. $$\sum_{i=1}^{n}(\lambda_{i}) = \lambda_{1} + \lambda_{2} + \lambda_{3} + \dots \lambda_{n} = \text{Trace of the matrix}$$
  2. The product of eigen values of a matrix $A$ is equal to the determinant of matrix $A.$ $$\prod_{i=1}^{n}\lambda_{i} = \lambda_{1} \cdot \lambda_{2} \cdot \lambda_{3}  \dots \lambda_{n} = \mid A \mid $$
  3. For Hermitian matrix every eigen value is real.
  4. The eigenvalues of a skew-Hermitian matrix are all purely imaginary or zero.
  5. Every eigenvalue of a Unitary matrix has absolute value $1,$ i.e. $ \mid \lambda \mid  = 1$
  6. Any square matrix $A$ and its transpose $A^{T}$ have same eigenvalues.
  7. If $\lambda_{1},\lambda_{2},\lambda_{3},\dots ,\lambda_{n}$ are eigenvalues of matrix $A$, then eigenvalues of

            $\color{green}\checkmark\: kA$ are $k\lambda_{1},k\lambda_{2},k\lambda_{3},\dots, k\lambda_{n}$

            $\color{green}\checkmark\:A^{m}$ are $\lambda_{1}^{m},\lambda_{2}^{m},\lambda_{3}^{m},\dots ,\lambda_{n}^{m}$

            $\color{green}\checkmark\:A^{-1}$ are $\frac{1}{\lambda_{1}},\frac{1}{\lambda_{2}},\frac{1}{\lambda_{3}},\dots ,\frac{1}{\lambda_{n}}$

           $\color{green}\checkmark\:A+kI$ are $\lambda_{1}+k,\lambda_{2}+k,\lambda_{3}+k,\dots ,\lambda_{n}+k$

  8. If $\lambda$ is an eigenvalue of an Orthogonal matrix $A$ $(A^{T}  = A^{-1})$, then $\dfrac{1}{\lambda}$ is also an eigenvalue of matrix $A.$
  9. The eigenvalues of a real symmetric matrix are purely real.
  10. The eigenvalues of a real skew-symmetric matrix are either purely imaginary or zero.
  11. Zero is an eigenvalue of a matrix iff the matrix is singular.
  12. If all the eigenvalues are distinct, then the corresponding eigenvectors are linearly independent.
  13. The set of eigenvalues of $A$ is called the spectrum of $A,$ and the largest eigenvalue in magnitude is called the spectral radius of $A.$
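
A NumPy sketch checking several of the properties above on a small symmetric matrix (chosen so that all eigenvalues are real and comparisons after sorting are straightforward):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])            # symmetric, so all eigenvalues are real
lam = np.linalg.eigvals(A)
k, m = 3.0, 2

print(np.isclose(lam.sum(), np.trace(A)))            # sum of eigenvalues = trace
print(np.isclose(lam.prod(), np.linalg.det(A)))      # product of eigenvalues = |A|
print(np.allclose(np.sort(np.linalg.eigvals(k * A)), np.sort(k * lam)))                          # kA
print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.matrix_power(A, m))), np.sort(lam ** m)))  # A^m
print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A))), np.sort(1.0 / lam)))             # A^{-1}
print(np.allclose(np.sort(np.linalg.eigvals(A + k * np.eye(3))), np.sort(lam + k)))              # A + kI
print(np.allclose(np.sort(np.linalg.eigvals(A.T)), np.sort(lam)))                                # A^T
```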

${\color{Orchid}{\textbf{Properties of Eigen Vectors:-}} }$

  1. For every eigenvalue there exists at least one eigenvector.
  2. If $\lambda$ is an eigenvalue of a matrix $A,$ then the corresponding eigenvector $X$ is not unique, i.e., there are infinitely many eigenvectors corresponding to a single eigenvalue (any non-zero scalar multiple of an eigenvector is again an eigenvector).
  3. If $\lambda_{1},\lambda_{2},\lambda_{3},\dots ,\lambda_{n}$ are distinct eigenvalues of an $n\times n$ matrix, then the corresponding eigenvectors $X_{1},X_{2},X_{3},\dots,X_{n}$ form a linearly independent set.
  4. If two or more eigenvalues are equal, the corresponding eigenvectors may or may not be linearly independent (e.g., the identity matrix has a repeated eigenvalue but $n$ independent eigenvectors).
  5. Two eigenvectors $X_{1}$ and $X_{2}$ are called orthogonal vectors if $X_{1}^{T}X_{2}=0.$
  6. $\textbf{Normalized eigenvectors:-}$ A normalized eigenvector is an eigen vector of length one. Consider an eigen vector $X = \begin{bmatrix}a \\ b\end{bmatrix}_{2\times 1}$, then length of this eigen vector is  $\left \| X \right \| = \sqrt {a^{2} + b^{2}}.$ Normalized eigenvector is $\hat{X} = \dfrac{X}{\left \| X \right \|}=\dfrac{\text{Eigen vector}}{\text{Length of eigen vector}} =\begin{bmatrix}\dfrac{a}{\sqrt{a^{2} + b^{2}}} \\ \dfrac{b}{\sqrt{a^{2} + b^{2}}} \end{bmatrix}_{2\times 1}$
  7. Length of the normalized eigenvector is always unity.

 $\color{Orange}\checkmark\:$The eigenvectors corresponding to distinct eigenvalues of a real symmetric matrix are orthogonal.
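
A short NumPy illustration of normalization and of the orthogonality of eigenvectors of a real symmetric matrix (note that `numpy.linalg.eigh` already returns unit-length eigenvectors); the matrix and the vector $v$ are chosen only for illustration:

```python
import numpy as np

S = np.array([[2., 1.],
              [1., 2.]])                 # real symmetric matrix, eigenvalues 1 and 3
lam, X = np.linalg.eigh(S)               # eigh: eigenvalues/eigenvectors of a symmetric matrix

x1, x2 = X[:, 0], X[:, 1]
print(np.isclose(x1 @ x2, 0.0))              # eigenvectors for distinct eigenvalues are orthogonal
print(np.isclose(np.linalg.norm(x1), 1.0))   # already normalized (length one)

# Normalizing an arbitrary eigenvector by hand:
v = np.array([1.0, 1.0])                 # an eigenvector of S for eigenvalue 3
v_hat = v / np.linalg.norm(v)            # normalized eigenvector
print(np.isclose(np.linalg.norm(v_hat), 1.0))
```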

$\textbf{3.Rank of the Matrix:-}$

The rank of a matrix $A$ is the maximum number of linearly independent rows or columns. A matrix is a full-rank matrix if all of its rows and columns are linearly independent. The rank of the matrix $A$ is denoted by $\rho{(A)}.$

${\color{Purple}{\textbf{Properties of rank of matrix:-}} }$

  1. The rank of a matrix does not change under elementary transformations, so we can calculate the rank by reducing the matrix to echelon form using elementary transformations. In echelon form, the rank of the matrix equals the number of non-zero rows.
  2. The rank of a matrix is zero if and only if the matrix is a null matrix.
  3. $\rho(A)\leq \text{min(row, column)}$
  4. $\rho(AB)\leq \text{min}[\rho(A), \rho(B)]$
  5. $\rho(A^{T}A) = \rho(AA^{T}) = \rho(A) = \rho(A^{T})$
  6. If $A$ and $B$ are matrices of same order, then $\rho(A+B)\leq \rho(A) + \rho(B)$ and $\rho(A-B)\geq \rho(A)-\rho(B)$
  7. If $A^{\theta}$ is the conjugate transpose of $A,$ then $\rho(A^{\theta})= \rho(AA^{\theta}) = \rho(A^{\theta}A) = \rho(A) $
  8. The rank of the skew-symmetric matrix cannot be one.
  9. If $A$ and $B$ are two $n$-rowed square matrices, then $\rho(AB)\geq \rho(A) + \rho(B) - n$
  10. For a diagonalizable matrix $A_{n\times n},$ the rank equals the number of non-zero eigenvalues; in particular, $\rho(A)=2$ iff exactly $n-2$ eigenvalues are zero.
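
A NumPy sketch checking some of these rank properties on two small rank-deficient matrices chosen for illustration:

```python
import numpy as np

rank = np.linalg.matrix_rank

A = np.array([[1., 2., 3.],
              [2., 4., 6.],              # row 2 = 2 * row 1, so A is not full rank
              [0., 1., 1.]])
B = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])             # row 3 = row 1 + row 2

print(rank(A), rank(B))                          # 2 2
print(rank(A) <= min(A.shape))                   # rho(A) <= min(rows, columns)
print(rank(A @ B) <= min(rank(A), rank(B)))      # rho(AB) <= min(rho(A), rho(B))
print(rank(A.T @ A) == rank(A @ A.T) == rank(A) == rank(A.T))
print(rank(A + B) <= rank(A) + rank(B))
n = A.shape[0]
print(rank(A @ B) >= rank(A) + rank(B) - n)      # Sylvester's inequality
```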

$\textbf{4.Solution of Linear Simultaneous Equations:-}$

There are two types of linear simultaneous equations:

  1. Linear homogeneous equation$:AX = 0$
  2. Linear non-homogeneous equation$:AX = B$

Steps to investigate the consistency of the system of linear equations.

  1. First represent the equation in the matrix form as $AX = B$
  2. Then form the augmented matrix $[A:B]$ and check the system $AX = B$ for consistency by comparing $\rho(A)$ with $\rho([A:B]).$

$\textbf{Augmented Matrix}\: \mathbf{[A:B]:-}$

  1. $\rho(A)\neq \rho([A:B])$ inconsistent $\color{Red} {\textbf{(No solution)}}$
  2. $\rho(A) = \rho([A:B])$ consistent $\color{green} {\textbf{(Always have a solution)}}$

               $\color{green}\checkmark\:\rho(A) = \rho([A:B]) = \text{Number of unknown variables}\:\:  \color{Cyan} {\textbf{(Unique solution)}}$

               $\color{green}\checkmark\:\rho(A) = \rho([A:B]) < \text{Number of unknown variables}\:\:  \color{Salmon} {\textbf{(Infinite solution)}}$

______________________________________________________________

$\color{green}\checkmark$ Linear homogeneous equation$:AX = 0$ is always consistent.

If $A$ is a square matrix of order $n$ and

 $\color{green}\checkmark\:\mid A \mid = 0$, then the rows and columns are $\color{Teal} { \text{linearly dependent}}$ and the system has $\color{Magenta} { \text{non-trivial solutions (infinitely many solutions).}}$

$\color{green}\checkmark\:\mid A \mid \neq 0$, then the rows and columns are $\color{purple} {\text{linearly independent}}$ and the system has $\color{green} {\text{only the trivial solution (a unique solution).}}$
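
The rank test above translates directly into code. A minimal sketch, assuming NumPy (the helper `classify` is an illustrative name, and the example systems are arbitrary):

```python
import numpy as np

def classify(A, b):
    """Classify AX = b using ranks of A and the augmented matrix [A : b]."""
    aug = np.hstack([A, b.reshape(-1, 1)])
    rA, rAug = np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug)
    unknowns = A.shape[1]
    if rA != rAug:
        return "inconsistent (no solution)"
    if rA == unknowns:
        return "consistent (unique solution)"
    return "consistent (infinitely many solutions)"

A = np.array([[1., 1.], [2., 2.]])
print(classify(A, np.array([1., 3.])))   # ranks differ            -> no solution
print(classify(A, np.array([1., 2.])))   # rank 1 < 2 unknowns     -> infinite solutions
print(classify(np.array([[1., 1.], [1., -1.]]), np.array([2., 0.])))  # full rank -> unique
```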

______________________________________________________________ 

$\color{Brown} {\textbf{5.Cayley-Hamilton Theorem:-}}$ According to the Cayley-Hamilton Theorem, “Every square matrix satisfies its own characteristic equation.”

$\color{green}\checkmark$This theorem is applicable only to square matrices. It is used to find the inverse of a matrix in the form of a matrix polynomial.

If $\mathbf{A}$ is an $n\times n$ matrix and its characteristic equation is $a_{0}\lambda^{n} + a_{1}\lambda^{n-1} + \dots + a_{n} = 0$, then according to the Cayley-Hamilton Theorem, $a_{0}\mathbf{A}^{n} + a_{1}\mathbf{A}^{n-1} + \dots + a_{n}\mathbf{I_{n}} = 0$

$\color{Teal} {\textbf{For finding the characteristic equation (and hence the eigenvalues) of}\: A_{3\times 3}:-}$

$\lambda^{3} - (\text{Trace})\lambda^{2} + \text{(Sum of principal cofactors)}\lambda - \mid A \mid = 0$
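
A NumPy sketch verifying the Cayley-Hamilton theorem for an arbitrary $3\times 3$ example, and using it to recover $A^{-1}$ as a matrix polynomial (multiply the characteristic equation by $A^{-1}$):

```python
import numpy as np

A = np.array([[2., 1., 1.],
              [1., 3., 1.],
              [1., 1., 2.]])
n = A.shape[0]
I = np.eye(n)

trace = np.trace(A)
# Sum of the principal (diagonal) cofactors of a 3x3 matrix = sum of its principal 2x2 minors:
cof_sum = sum(np.linalg.det(np.delete(np.delete(A, i, axis=0), i, axis=1)) for i in range(n))
detA = np.linalg.det(A)

# Characteristic equation: lambda^3 - trace*lambda^2 + cof_sum*lambda - |A| = 0
# Cayley-Hamilton: A satisfies its own characteristic equation
lhs = np.linalg.matrix_power(A, 3) - trace * np.linalg.matrix_power(A, 2) + cof_sum * A - detA * I
print(np.allclose(lhs, np.zeros((n, n))))

# Multiplying the equation by A^{-1}:
# A^2 - trace*A + cof_sum*I - |A|*A^{-1} = 0  =>  A^{-1} = (A^2 - trace*A + cof_sum*I) / |A|
A_inv = (np.linalg.matrix_power(A, 2) - trace * A + cof_sum * I) / detA
print(np.allclose(A_inv, np.linalg.inv(A)))
```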

______________________________________________________________ 

$\color{Red} {\textbf{6.Types of Matrices According to Dimensions (R, C):-}}$

Rows and columns together constitute the dimensions of a matrix. According to the dimensions, there are two types of matrices: the rectangular matrix and the square matrix.

       $\textbf{1.Rectangular Matrix:-}$ A matrix in which the number of rows is not equal to the number of columns is known as a rectangular matrix $\mathbf{(R\neq C \:\text{(or)}\: m\neq n)}.$

                   $\color{green}\checkmark\mathbf{A=[a_{ij}]_{m\times n}\:\:; m \neq n}$

       $\textbf{2.Square Matrix:-}$ A matrix in which the number of rows is equal to the number of columns is known as a square matrix $\mathbf{(R = C \:\text{(or)}\: m = n)}.$

                    $\color{green}\checkmark\mathbf{A=[a_{ij}]_{n\times n}}$

$$\color{Magenta} {\textbf{Types of Square Matrix:-}}$$

 $\textbf{1.Diagonal Matrix:-}$ A square matrix in which all the elements except the leading diagonal elements are zero is known as a diagonal matrix.

        $\textbf{Example:} \: A = \begin{bmatrix} 3 &0 &0 \\ 0 &6 &0 \\ 0 &0 &1 \end{bmatrix}_{3\times 3}$  (or) $A = \text{diag(3,6,1)}$

            $\color{green}\checkmark$Minimum number of zeros in a diagonal matrix of order $n$ is $n(n-1).$

            $\color{green}\checkmark AB = \text{diag}(a_{1},a_{2},a_{3})\times \text{diag}(b_{1},b_{2},b_{3}) = \text{diag}  (a_{1}b_{1},a_{2}b_{2},a_{3}b_{3})$

 $\textbf{2.Scalar Matrix:-}$ A diagonal matrix in which all the diagonal elements are equal, is known as a scalar matrix.

        $\textbf{Example:} \: A = \begin{bmatrix} 5 &0 &0 \\ 0 &5 &0 \\ 0 &0 &5 \end{bmatrix}_{3\times 3}$  (or) $A = \text{diag(5,5,5)}$

 $\textbf{3.Unit Matrix:-}$  A diagonal matrix in which all the diagonal elements are unity is known as unit matrix or identity matrix. The matrix of order $n$ is denoted by $\mathbf{I_{n}}.$

        $\textbf{Example:} \: I_{n} = \begin{bmatrix} 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{bmatrix}_{3\times 3}$

 $\textbf{4.Upper Triangular Matrix:-}$ A square matrix $A = [a_{ij}]$ is said to be upper triangular matrix if $a_{ij} = 0$ whenever $i>j.$

         $\textbf{Example:} \: A = \begin{bmatrix} 5 &2 &3 \\ 0 &1 &6 \\ 0 &0 &8 \end{bmatrix}_{3\times 3}$

$\textbf{5.Lower Triangular Matrix:-}$ A square matrix $A = [a_{ij}]$ is said to be a lower triangular matrix if $a_{ij} = 0$ whenever $i<j.$

         $\textbf{Example:} \: A = \begin{bmatrix} 5 &0 &0 \\ 4 &9 &0 \\ 3 &2 &1 \end{bmatrix}_{3\times 3}$

$\color{green}\checkmark$For diagonal and triangular matrix (upper triangular or lower triangular) the determinant is equal to product of leading diagonal elements.

$\textbf{6.Symmetric Matrix:-}$ A square matrix is said to be symmetric if $\mathbf{A^{T} = A},$ where $A^{T}$ or $A’$ is the transpose of matrix $A.$ In the transpose of a matrix, the rows and columns are interchanged.

          $\textbf{Example:} \: A = \begin{bmatrix} 1 &2 &3 \\ 2 &4 &5 \\ 3 &5 &6 \end{bmatrix}_{3\times 3}\implies A^{T} = \begin{bmatrix} 1 &2 &3 \\ 2 &4 &5 \\ 3 &5 &6 \end{bmatrix}_{3\times 3} $

$\textbf{7.Skew Symmetric Matrix:-}$ A square matrix is said to be skew-symmetric if $\mathbf{A^{T} = -A},$ where $A^{T}$ or $A’$ is the transpose of matrix $A.$

          $\textbf{Example:} \: A = \begin{bmatrix} 0 &-2 &-3 \\ 2 &0&-5 \\ 3 &5 &0 \end{bmatrix}_{3\times 3}\implies A^{T} = \begin{bmatrix} 0 &2 &3 \\ -2 &0 &5 \\ -3 &-5 &0 \end{bmatrix}_{3\times 3}  = -A $

${\color{Teal}{\textbf{Properties of Symmetric Matrix:-}}}$

  1. If $A$ is a square matrix, then $A+A^{T},AA^{T},A^{T}A$ are symmetric matrices, while $A - A^{T},A^{T}-A$ are skew-symmetric matrices.
  2. If $A$ is a symmetric matrix, $k$ is any real scalar, $n$ is any positive integer, and $B$ is a square matrix of the same order as $A$, then $-A,kA,A^{T},A^{n},A^{-1},B^{T}AB$ are also symmetric matrices (for $A^{-1}$, provided $A$ is invertible). All positive integral powers of a symmetric matrix are symmetric.
  3. If $A, B$ are two symmetric matrices, then

               $\color{green}\checkmark A\pm B, AB + BA$ are also symmetric matrices.

               $\color{green}\checkmark  AB - BA$ is a skew-symmetric matrix.

               $\color{green}\checkmark AB$ is a symmetric matrix when $AB=BA$ otherwise $AB$ or $BA$ may not be symmetric.

               $\color{green}\checkmark A^{2},A^{3},A^{4},B^{2},B^{3},B^{4},A^{2}\pm B^{2},A^{3}\pm B^{3}$ are also symmetric matrices.

${\color{Orchid}{\textbf{Properties of Skew Symmetric Matrix:-}}}$

 $\color{green}\checkmark $If $A$ is a skew symmetric matrix, then

  • $A^{2n}$ is a symmetric matrix for any positive integer $n.$
  • $A^{2n+1}$ is a skew symmetric matrix for any positive integer $n.$
  • $kA$ is also a skew symmetric matrix, where $k$ is a real scalar.
  • $B^{T}AB$ is also a skew symmetric matrix, where $B$ is a square matrix of the same order as $A.$

 $\color{green}\checkmark$All positive odd integral powers of a skew symmetric matrix are skew symmetric, and all positive even integral powers are symmetric.

 $\color{green}\checkmark$If $A,B$ are two skew symmetric matrices, then

  • $A\pm B,AB-BA$ are skew symmetric matrices.
  • $AB+BA$ is symmetric matrix.

$\color{green}\checkmark$If $A$ is a skew symmetric matrix and $C$ is a column matrix then $C^{T}AC$ is a zero matrix.

$\color{green}\checkmark$If $A$ is any square matrix then $A+A^{T}$ is symmetric matrix and $A-A^{T}$ is a skew symmetric matrix.

$\color{green}\checkmark$The matrix which is both symmetric and skew symmetric must be a null matrix.

$\color{green}\checkmark$If $A$ is symmetric and $B$ is skew symmetric, then $\textbf{trace(AB)}=0.$ 

$\color{green}\checkmark$Any real square matrix $A$ may be expressed as the sum of a symmetric matrix $A_{S}$ and a skew symmetric matrix $A_{AS}.$ 

$$\color{Cyan}\checkmark A = \dfrac{1}{2}\bigg[A + A^{T}\bigg] + \dfrac{1}{2}\bigg[A - A^{T}\bigg] = A_{S} + A_{AS}$$

$\color{Green}\checkmark$ The determinant of an $n \times n $ Skew-Symmetric matrix is zero if $n$ is odd.

  $\textbf{Proof:-}$ $A$ is skew-symmetric means $A^{T}= -A$. Taking determinants on both sides, $$\det(A^T)=\det(-A)\implies \det A =(-1)^n\det A \implies \det A =-\det A \:\:(\text{since}\: n\:\text{is odd})\implies \det A=0$$
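
A quick NumPy check of the decomposition $A = A_{S} + A_{AS}$ and of the two facts above (odd-order skew-symmetric determinant, $\text{trace}(AB)=0$), on an arbitrary $3\times 3$ example:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])

A_s  = 0.5 * (A + A.T)       # symmetric part
A_as = 0.5 * (A - A.T)       # skew-symmetric part

print(np.allclose(A_s, A_s.T))        # A_s is symmetric
print(np.allclose(A_as, -A_as.T))     # A_as is skew-symmetric
print(np.allclose(A, A_s + A_as))     # A = A_s + A_as

# Determinant of an odd-order skew-symmetric matrix is zero:
print(np.isclose(np.linalg.det(A_as), 0.0))
# trace(AB) = 0 when A is symmetric and B is skew-symmetric:
print(np.isclose(np.trace(A_s @ A_as), 0.0))
```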

$\textbf{8.Singular Matrix (or) Non-Invertible Matrix:-}$ A singular matrix is a square matrix that is not invertible, i.e., it does not have an inverse. A matrix is singular (or degenerate) if and only if its determinant is zero.

$$\color{Cyan}\checkmark\mathbf{\mid A \mid _{n\times n} = 0} $$

$\textbf{9.Non-Singular Matrix (or) Invertible Matrix:-}$ A square matrix is non-singular (or invertible) if its determinant is non-zero.

$$\color{Cyan}\checkmark\mathbf{\mid A \mid _{n\times n} \neq 0} $$

     $\color{Cyan}\checkmark$A non-singular matrix has a matrix inverse.

$\textbf{10.Orthogonal Matrix:-}$ A square matrix is said to be orthogonal if $\mathbf{A\cdot A^{T} = I.}$ In other words the transpose of orthogonal matrix is equal to the inverse of the matrix, i.e. $\mathbf{A^{T}  = A^{-1}.}$

         $\textbf{Example:} \text{If} \: A = \dfrac{1}{3}\begin{bmatrix} 1 &2 &2 \\ 2 &1 &-2 \\ -2 &2 &-1 \end{bmatrix}_{3\times 3} \;\:\: ,\text{then} \:\:\: A^{T} = \dfrac{1}{3}\begin{bmatrix} 1 &2 &-2 \\ 2 &1 &2 \\ 2 &-2 &-1 \end{bmatrix}_{3\times 3}$

and $A\cdot A^{T} =\begin{bmatrix} 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{bmatrix}_{3\times 3}\:\:\:, \:\: A^{-1} = A^{T} = \dfrac{1}{3}\begin{bmatrix} 1 &2 &-2 \\ 2 &1 &2 \\ 2 &-2 &-1 \end{bmatrix}_{3\times 3}$

$\color{Cyan}\checkmark$If matrix $A$ is orthogonal then,

  • Its inverse and transpose are also orthogonal.
  • Its determinant is $\pm 1,$ i.e. $ \mid A \mid  =  \pm 1.$
  • $\mid A \mid \mid A^{T} \mid = 1$
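
A NumPy check of the example above and of these properties:

```python
import numpy as np

A = (1.0 / 3.0) * np.array([[ 1.,  2.,  2.],
                            [ 2.,  1., -2.],
                            [-2.,  2., -1.]])

print(np.allclose(A @ A.T, np.eye(3)))            # A A^T = I
print(np.allclose(np.linalg.inv(A), A.T))         # A^{-1} = A^T
print(np.isclose(abs(np.linalg.det(A)), 1.0))     # det A = +1 or -1
print(np.allclose(A.T @ A, np.eye(3)))            # the transpose is orthogonal too
print(np.isclose(np.linalg.det(A) * np.linalg.det(A.T), 1.0))   # |A||A^T| = 1
```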

$\textbf{11.Hermitian Matrix:-}$ A square matrix is said to be hermitian if $\mathbf{A = A^{\theta}}.$ Where $A^{\theta}$ is the transpose of conjugate of matrix $A,$ i.e. $A^{\theta} = \big(\overline{A}\big)^{T}$

           $\textbf{Example:}\: \text{If} \: A = \begin{bmatrix} 1 &3-2i &2+3i \\ 3+2i &2 &i \\ 2-3i &-i &3 \end{bmatrix}_{3\times 3}$

$\text{then Conjugate of A} = \overline{A} = \begin{bmatrix} 1 &3+2i &2-3i \\ 3-2i &2 &-i \\ 2+3i &i &3 \end{bmatrix}_{3\times 3}$

 $\implies A^{\theta} = \big(\overline{A}\big)^{T} = \begin{bmatrix} 1 &3-2i &2+3i \\ 3+2i &2 &i \\ 2-3i &-i &3 \end{bmatrix}_{3\times 3} = A$

$\textbf{12.Skew Hermitian Matrix:-}$ A square matrix $A$ is said to be skew hermitian if $\mathbf{A =\: – A^{\theta}.}$

           $\textbf{Example:}\: \text{If} \: A = \begin{bmatrix} i &2-3i &4+5i \\ -2-3i &0& 2i \\ -4+5i &2i &-3i \end{bmatrix}_{3\times 3}$

           $\text{then Conjugate of A} = \overline{A} = \begin{bmatrix} -i &2+3i &4-5i \\ -2+3i &0&  -2i \\ -4-5i &-2i &3i \end{bmatrix}_{3\times 3}$

 $\implies A^{\theta} = \big(\overline{A}\big)^{T} = \begin{bmatrix} -i &-2+3i &-4-5i \\  2+3i &0&  -2i \\  4-5i &-2i &3i \end{bmatrix}_{3\times 3} = -A $

      $\color{Cyan}\checkmark$All the diagonal elements of Skew Hermitian matrix are either zero (or) pure imaginary.

      $\color{Cyan}\checkmark$All the diagonal elements of Hermitian matrix are real.

      $\color{Cyan}\checkmark$In a Hermitian matrix, the elements placed symmetrically about the main diagonal ($a_{ij}$ and $a_{ji}$) are complex conjugates of each other.

      $\color{Cyan}\checkmark$In a Skew Hermitian matrix, the elements placed symmetrically about the main diagonal have the same imaginary part but opposite real parts, i.e. $a_{ji} = -\overline{a_{ij}}.$

$\textbf{13.Unitary Matrix:-}$ A square matrix is said to be unitary if $\mathbf{A\cdot A^{\theta} = I},$ where $A^{\theta}$ is transpose of conjugate matrix $A,$ i.e. $A^{\theta} = \big(\overline{A}\big)^{T}$

       $\textbf{Example:}\: \text{If} \: A = \begin{bmatrix} \dfrac{1+i}{2} &\dfrac{-1+i}{2} \\ \dfrac{1 - i}{2}& \dfrac{-1- i}{2} \end{bmatrix}_{2\times 2}$

$\implies A^{\theta} =  \big(\overline{A}\big)^{T} = \begin{bmatrix} \dfrac{1-i}{2} &\dfrac{1+i}{2} \\ \dfrac{-1 - i}{2} & \dfrac{-1+ i}{2} \end{bmatrix}_{2\times 2}$

$\implies A\cdot A^{\theta} = \begin{bmatrix} 1 &0 \\ 0 &1 \end{bmatrix}_{2\times 2}$

$\color{Cyan}\checkmark$If $A$ is unitary matrix then,

  • Its inverse and transpose are also unitary.
  • The absolute value of its determinant is unity, i.e. $\big|\det A\big| = 1.$
  • $\mid A \mid \mid A^{\theta} \mid = 1$

$\textbf{14.Periodic Matrix:-}$ A square matrix is said to be periodic if $\mathbf{A^{k+1} = A,}$ where $k$ is a positive integer. The least such $k$ is known as the period of the matrix.

$\textbf{15.Involutory Matrix:-}$ A matrix is said to be involutory if $\mathbf{A^{2} = I}.$

$\textbf{16.Idempotent Matrix:-}$ An idempotent matrix is a square matrix which, when multiplied by itself, gives itself, i.e. $\mathbf{A^{2}= A}.$

A periodic matrix is said to be idempotent when the positive integer $k$ is unity, i.e. $$A^{k+1} = A\implies A^{1 + 1} = A \implies A^{2} = A$$

$\textbf{17.Nilpotent Matrix:-}$ A square matrix is called a nilpotent matrix if there exists a positive integer $k$ such that $\mathbf{A^{k} = 0}.$

$\color{Cyan}\checkmark$The least positive value of $k$ is called the index of nilpotent matrix $A.$

$\color{Cyan}\checkmark$Determinant of Idempotent matrix is either $0$ (or) $1.$

$\color{Cyan}\checkmark$Determinant and Trace of nilpotent matrix is zero.

$\color{Cyan}\checkmark$Inverse of nilpotent matrix does not exist.

$\textbf{18.Invertible Matrix:-}$ A matrix  $A$ is said to be invertible (or) non-singular (or) non-degenerate if there exists a matrix $B$ such that $\mathbf{AB = BA = I_{n}}.$

$\color{Teal}\checkmark$If matrix $A$ is invertible, then the inverse is unique.

$\color{Teal}\checkmark$ If matrix $A$ is invertible, then $A$ cannot have a row (or) column consisting of only zeros.

$\textbf{19.Rotation Matrix:-}$ A rotation matrix in $n$-dimensions is an $n\times n$ special orthogonal matrix, that is, an orthogonal matrix whose determinant is $1,$ i.e. $R^{T} = R^{-1}, \mid R \mid = 1$

$\textbf{Example:}\:\: A = \begin{bmatrix} \cos\theta &-\sin\theta \\ \sin\theta &\cos\theta \end{bmatrix}_{2\times 2}$

$\textbf{20.Normal Matrix:-}$ A matrix is normal if it commutes with its conjugate transpose.

      $\color{Teal}\checkmark$A complex square  matrix $A$ is normal if $\mathbf{\big(\overline{A}\big)^{T}\cdot A = A\cdot  \big(\overline{A}\big)^{T} = A^{\theta}\cdot A = A\cdot A^{\theta}}.$

      $\color{Teal}\checkmark$A real square matrix $A$ is normal if $\mathbf{A^{T}\cdot A = A\cdot A^{T}},$ since a real matrix satisfies $\mathbf{\overline{A} = A}.$

__________________________________________________________________           

$\color{Red}{\text{Question:}}$ Let $A$ be an $n\times n$ real matrix. Prove that $$\color{Magenta}{\sum_{i=1}^{n}\sum_{j=1}^{n} A_{ij}^2 = \text{Trace of} \:\: \left( A\:A^{T}\right ) } $$
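
One way to prove it (a short sketch): expand the trace of $AA^{T}$ entry by entry, using $(A^{T})_{ji} = A_{ij}$:

$$\text{Trace}\left(AA^{T}\right) = \sum_{i=1}^{n}\big(AA^{T}\big)_{ii} = \sum_{i=1}^{n}\sum_{j=1}^{n}A_{ij}\big(A^{T}\big)_{ji} = \sum_{i=1}^{n}\sum_{j=1}^{n}A_{ij}^{2}$$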

__________________________________________________________________ 

A square Vandermonde matrix has the form:

$\begin{bmatrix} 1 & \alpha_1 & \alpha_1^2 & \cdots & \alpha_1^{n-1} \\ 1 & \alpha_2 & \alpha_2^2 & \cdots & \alpha_2^{n-1} \\ \vdots & \vdots & \vdots &\ddots & \vdots \\ 1 & \alpha_n & \alpha_n^2 & \cdots & \alpha_n^{n-1} \\  \end{bmatrix}$

and its determinant is given by $\prod_{1 \leq i < j \leq n} (\alpha_j - \alpha_i)$; this can be proved by induction.

$\Rightarrow$ For a general $4\times 4$ matrix of the form :

$$\begin{bmatrix} 1 &a &a^2 &a^3 \\ 1 &b &b^2 &b^3 \\ 1 &c &c^2 &c^3 \\ 1 &d &d^2 &d^3 \end{bmatrix}$$

The determinant is given by $(b-a)(c-a)(d-a)(c-b)(d-b)(d-c),$ and similarly we can find the determinant of a Vandermonde matrix of any order.
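
A quick NumPy check of the product formula on an arbitrary choice of the $\alpha_{i}$ (using `numpy.vander` with `increasing=True` so the rows match the form above):

```python
import numpy as np
from itertools import combinations

alpha = np.array([2., 3., 5., 7.])
n = len(alpha)

# Vandermonde matrix with rows [1, a, a^2, ..., a^{n-1}]
V = np.vander(alpha, increasing=True)

det_formula = np.prod([alpha[j] - alpha[i] for i, j in combinations(range(n), 2)])
print(np.isclose(np.linalg.det(V), det_formula))   # True
```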


$\textbf{Comments:-}$


> Determinant and Trace of the nilpotent matrix is zero.

$\textbf{Q:}$ But if determinant $=0$ or trace $=0$, what can we say about the nilpotency of the matrix?

$\textbf{A:}$ See https://en.wikipedia.org/wiki/Nilpotent_matrix

  1. If a matrix is nilpotent, then the determinant and trace of that matrix are zero -- correct statement.
  2. If the determinant and trace of a matrix are zero, then that matrix is nilpotent -- incorrect statement.

Counter-example: $\begin{bmatrix} 1 &0 &-1 \\ 0 &1 &1 \\ 5 &3 &-2 \end{bmatrix}$ has trace $0$ and determinant $0$, but its eigenvalues are $-1, 0, 1$, so it is not nilpotent.

$\textbf{Q:}$ How can we prove that the trace of a nilpotent matrix is zero?

$\textbf{A:}$ Every eigenvalue of a nilpotent matrix is zero, and the trace (the sum of the leading diagonal elements) equals the sum of the eigenvalues, so the trace must be zero.

$\textbf{Diagonalization of a Matrix:-}$

An $n\times n$ matrix $A$ is diagonalizable if there is an invertible $n\times n$ matrix $C$ and a diagonal matrix $D$ such that $A=CDC^{-1}.$

For example, $$\begin{bmatrix} 4 &0 &0 \\ 0 &5 &0 \\ 0 &0 &6 \end{bmatrix} = I_{3}\begin{bmatrix} 4 &0 &0 \\ 0 &5 &0 \\ 0 &0 &6 \end{bmatrix}I_{3}^{-1}$$ Hence, any diagonal matrix $D$ is diagonalizable, as it is similar to itself.

$\textbf{Diagonalization Theorem:-}$ The $n\times n$ matrix $A$ is diagonalizable if and only if $A$ has $n$ linearly independent eigenvectors. In that case $A=CDC^{-1}$ with

$$C = \begin{bmatrix} | & | & &| \\ v_{1} &v_{2} &\cdots &v_{n} \\ | & | & &| \end{bmatrix}\:,\qquad D = \begin{bmatrix} \lambda_{1} &0 &\cdots &0 \\ 0 &\lambda_{2} &\cdots &0 \\ \vdots &\vdots &\ddots &\vdots \\ 0 &0 &\cdots &\lambda_{n} \end{bmatrix}$$

Here $v_{1},v_{2},\dots,v_{n}$ are the linearly independent eigenvectors and $\lambda_{1},\lambda_{2},\dots,\lambda_{n}$ are the corresponding eigenvalues.

$\textbf{Diagonalization Proof:-}$ Assume that matrix $A$ has $n$ linearly independent eigenvectors $v_{1},v_{2},\dots,v_{n}$ with eigenvalues $\lambda_{1},\lambda_{2},\dots,\lambda_{n}.$ Defining $C$ as above, $C$ is invertible by the invertible matrix theorem (its columns are linearly independent). Let $D = C^{-1}AC,$ so that $A = CDC^{-1}.$ Multiplying the standard coordinate vectors $e_{i}$ by $C$ picks out the columns of $C,$ so $Ce_{i} = v_{i}$ and hence $e_{i} = C^{-1}v_{i}.$ To obtain the columns of $D,$ multiply by the standard coordinate vectors: $$De_{i} = C^{-1}ACe_{i} = C^{-1}Av_{i} = C^{-1}\lambda_{i}v_{i} = \lambda_{i}C^{-1}v_{i} = \lambda_{i}e_{i}$$ so the columns of $D$ are multiples of the standard coordinate vectors, i.e. $D$ is diagonal with diagonal entries $\lambda_{1},\lambda_{2},\dots,\lambda_{n}.$

Conversely, assume $A = CDC^{-1},$ where $C$ has columns $v_{1},v_{2},\dots,v_{n}$ and $D$ is diagonal with diagonal entries $\lambda_{1},\lambda_{2},\dots,\lambda_{n}.$ The columns of $C$ are linearly independent since $C$ is invertible. We need to show that $v_{i}$ is an eigenvector of $A$ with eigenvalue $\lambda_{i}.$ Because the standard coordinate vector $e_{i}$ is an eigenvector of $D$ with eigenvalue $\lambda_{i},$ we can write $$Av_{i} = CDC^{-1}v_{i} = CDe_{i} = C\lambda_{i}e_{i} = \lambda_{i}Ce_{i} = \lambda_{i}v_{i}$$

Hence, we can conclude: if an $n\times n$ matrix $A$ has $n$ distinct eigenvalues $\lambda_{1},\lambda_{2},\dots,\lambda_{n},$ then a selection of matching eigenvectors $v_{1},v_{2},\dots,v_{n}$ is necessarily linearly independent. In other words, an $n\times n$ matrix with distinct eigenvalues is diagonalizable.

Please add these also, along with LU decomposition.

@amit166 Can you share the reference for the above text? I will review it and then add it.

https://byjus.com/maths/diagonalization/

It's not a standard resource; don't follow it.