The determinant
The determinant of a square matrix is a scalar, i.e. a single number, computed from the entries of the matrix. By means of the determinant, the invertibility of a matrix can be assessed: if $\det \mathbf{A} = |\mathbf{A}| = 0$, the matrix is singular and its inverse does not exist; if $\det \mathbf{A} \neq 0$, the matrix is regular and the inverse exists.
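As a quick illustration, the following minimal Python sketch (with a made-up matrix) checks invertibility via the determinant; note that in floating-point arithmetic the comparison against zero should use a tolerance rather than strict equality:

```python
import numpy as np

# a made-up matrix whose second row is twice the first -> singular
A = np.array([[2.0, 1.0],
              [4.0, 2.0]])

det_A = np.linalg.det(A)
# with floating-point numbers, test against a tolerance, not == 0
if np.isclose(det_A, 0.0):
    print("det(A) = 0: A is singular, no inverse exists")
else:
    print(f"det(A) = {det_A}: A is regular, the inverse exists")
```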
Calculation of the determinant
There exist several procedures to calculate the determinant. For $2\times 2$ and $3\times 3$ matrices, the determinant can be computed via simplified calculation schemes.
$2\times 2$ matrices
For a $2 \times 2$ matrix, the determinant is the product of the main-diagonal entries minus the product of the anti-diagonal entries: $$ |\mathbf{A}| = \det \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = a_{11}a_{22} - a_{12}a_{21}$$
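A minimal sketch of this formula in Python (the example values are made up for illustration):

```python
def det2(a11, a12, a21, a22):
    """Determinant of a 2x2 matrix: product of the main diagonal
    minus product of the anti-diagonal."""
    return a11 * a22 - a12 * a21

# example: det([[1, 2], [3, 4]]) = 1*4 - 2*3 = -2
print(det2(1, 2, 3, 4))  # -2
```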
$3\times 3$ matrices
For $3\times 3$ matrices, the rule of Sarrus can be used: \begin{align} |\mathbf{A}| &= \det \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \\ &= a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}-a_{11}a_{23}a_{32}-a_{12}a_{21}a_{33} \end{align} To memorize the rule of Sarrus, it helps to extend the matrix on the right by copying its first and second columns. Starting with the main diagonal, one forms the product of the three entries along each of the three downward diagonals and adds up these three products; one then subtracts the product of the three entries along each of the three upward (anti-)diagonals.
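The rule of Sarrus translates directly into code. Below is a minimal Python sketch (the example matrix is made up for illustration):

```python
def det3_sarrus(A):
    """Determinant of a 3x3 matrix via the rule of Sarrus.
    A is a 3x3 nested list; indices follow a_{ij} with 0-based offsets."""
    return (A[0][0]*A[1][1]*A[2][2]      # downward diagonals (added)
          + A[0][1]*A[1][2]*A[2][0]
          + A[0][2]*A[1][0]*A[2][1]
          - A[0][2]*A[1][1]*A[2][0]      # upward diagonals (subtracted)
          - A[0][0]*A[1][2]*A[2][1]
          - A[0][1]*A[1][0]*A[2][2])

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det3_sarrus(A))  # -3
```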
$n\times n$ matrices
For any square $n \times n$ matrix, the determinant can be calculated by the Leibniz formula or by the Laplace expansion. In the following, the algorithm according to the Laplace expansion is briefly described. According to the Laplace expansion, the determinant can be calculated by choosing a row $\bar{r}$ and computing \begin{align} |\mathbf{A}| &= \sum_{j=1}^n a_{\bar{r}j} c_{\bar{r}j}, \end{align} where $c_{\bar{r}j}$ is the minor $|\mathbf{A}_{\bar{r},j}|$, i.e. the determinant of the matrix that remains after deleting the $\bar{r}^{\text{th}}$ row and $j^{\text{th}}$ column of $\mathbf{A}$, multiplied with $(-1)^{\bar{r}+j}$. This product is called a cofactor: \begin{align} c_{i,j} = (-1)^{i+j} |\mathbf{A}_{i,j}| \end{align} Alternatively, one can choose a column $\bar{l}$ and calculate \begin{align} |\mathbf{A}| &= \sum_{i=1}^n (-1)^{i+\bar{l}} a_{i\bar{l}} |\mathbf{A}_{i,\bar{l}}| \end{align} It is debatable whether, once memorized, the rule of Sarrus or the Laplace expansion is faster to apply. In any case, the Laplace expansion is more general, and for a $4 \times 4$ matrix a mixture of the two can effectively be applied: the Laplace expansion reduces the determinant to the minors, i.e. the determinants of the $3 \times 3$ submatrices, which can in turn be computed with the rule of Sarrus.
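A minimal Python sketch of the Laplace expansion along the first row follows; it is meant to illustrate the recursion over minors, not to be an efficient implementation (its runtime grows factorially with $n$):

```python
def det_laplace(A):
    """Determinant via Laplace expansion along the first row.
    A is an n x n nested list. Illustrative only (exponential runtime)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        # with 0-based indices, (-1)**j matches the sign (-1)^{1+j} above
        cofactor = (-1) ** j * det_laplace(minor)
        total += A[0][j] * cofactor
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det_laplace(A))  # -3, matches the rule of Sarrus
```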
Calculation of the inverse
The inverse is the matrix which, when multiplied from the left or right with the square matrix $\mathbf{A}$, yields the identity matrix, i.e. $\mathbf{A} \cdot \mathbf{A}^{-1} = \mathbf{A}^{-1} \cdot \mathbf{A} = \mathbb{I}$. Once the determinant is calculated, the calculation of the inverse boils down to calculating the determinants of the submatrices and putting a minus sign on each cofactor with an odd index sum. (Imagining the matrix as a chequerboard, where the "black fields" get the minus signs, helps.) For a $2\times 2$ matrix this is especially easy, since the submatrices are scalars: $$ \mathbf{A}^{-1}= \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{|\mathbf{A}|} \begin{pmatrix} d& -b \\ -c & a \end{pmatrix} $$ More generally, the elements $b_{i,j}$ of the inverse are determined by $$ b_{i,j}= \frac{1}{|\mathbf{A}|}(-1)^{i+j} |\mathbf{A}_{j,i}|$$ where $|\mathbf{A}_{j,i}|$ is the determinant of the submatrix that remains after deleting the $j^\text{th}$ row and $i^\text{th}$ column. Note the swapped indices: $b_{i,j}$ is the transposed element of the matrix of cofactors $\mathbf{C}$. Thus, writing everything in matrices yields $$ \mathbf{A}^{-1} = \frac{\mathbf{C}^\prime}{|\mathbf{A}|} $$
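The cofactor formula for the inverse can be sketched in Python as follows; it reuses the `det_laplace` helper from the sketch above, and the example matrix is made up for illustration:

```python
def inverse_cofactor(A):
    """Inverse via the transposed cofactor matrix: A^{-1} = C' / det(A).
    A is an n x n nested list; raises ValueError for singular A."""
    n = len(A)
    d = det_laplace(A)  # determinant from the Laplace-expansion sketch
    if d == 0:
        raise ValueError("matrix is singular")
    inv = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # element b_{ij} uses the minor with row j and column i deleted
            minor = [row[:i] + row[i+1:] for k, row in enumerate(A) if k != j]
            inv[i][j] = (-1) ** (i + j) * det_laplace(minor) / d
    return inv

A = [[4, 7],
     [2, 6]]
print(inverse_cofactor(A))  # [[0.6, -0.7], [-0.2, 0.4]]
```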
Alternatively, a matrix can be inverted using the Gauss-Jordan algorithm, a variant of Gaussian elimination.
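A minimal Python sketch of matrix inversion via Gauss-Jordan elimination on the augmented matrix $[\mathbf{A} \,|\, \mathbb{I}]$, with partial pivoting added for numerical stability, might look as follows:

```python
def inverse_gauss_jordan(A):
    """Invert A by Gauss-Jordan elimination on the augmented matrix [A | I].
    A is an n x n nested list; raises ValueError for (near-)singular A."""
    n = len(A)
    # build the augmented matrix [A | I]
    M = [list(map(float, row)) + [float(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # partial pivoting: swap in the row with the largest pivot
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # normalize the pivot row
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # eliminate the pivot column in all other rows
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # the right half of M is now the inverse
    return [row[n:] for row in M]

print(inverse_gauss_jordan([[4, 7], [2, 6]]))  # [[0.6, -0.7], [-0.2, 0.4]]
```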
Eigenvalues and Eigenvectors
By definition, multiplying an eigenvector $\mathbf{v}$ of the matrix $\mathbf{A}$ with $\mathbf{A}$ does not change its direction; the multiplication only stretches $\mathbf{v}$. The associated stretching factor is called the eigenvalue $\lambda$. The eigenvalue equation or eigenequation for matrices is given by \begin{align} \mathbf{A v} = \lambda \mathbf{v} \end{align}
The eigenequation can be rewritten as \begin{align} \left(\mathbf{A} - \lambda \mathbb{I} \right) \mathbf{v} = \mathbf{0} \end{align}
The simplest (trivial) solution would be $\mathbf{v}=\mathbf{0}$. If, and only if, the determinant $D$ of the matrix $\left(\mathbf{A} - \lambda \mathbb{I} \right)$ is zero can $\mathbf{v}$ assume non-zero values. The determinant $D(\lambda) = \left|\mathbf{A} - \lambda \mathbb{I} \right|$ is a function of $\lambda$; more specifically, it is a polynomial of order $n$, the dimension of the matrix $\mathbf{A}$. The roots of this polynomial, i.e. the solutions of $D(\lambda) = 0$, are called the eigenvalues of $\mathbf{A}$. Given a particular eigenvalue $\lambda_i$, infinitely many vectors $\mathbf{v}$ fulfill the equation. The set of vectors fulfilling the eigenequation (i.e. the null space or kernel of the matrix $\left(\mathbf{A} - \lambda_i \mathbb{I} \right)$) are called eigenvectors of $\mathbf{A}$ associated with the eigenvalue $\lambda_i$. They form the eigenspace $\mathbf{E}_i$: \begin{align} \mathbf{E}_i =\left\{\mathbf{v} :(\mathbf{A}-\lambda_i \mathbb{I})\mathbf{v}= \mathbf{0}\right\} \end{align} So, for each eigenvalue $\lambda$ infinitely many eigenvectors exist. An example for the calculation of eigenvalues and associated eigenvectors is given here.
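In practice, eigenvalues and eigenvectors are computed numerically, e.g. with `numpy.linalg.eig`. A minimal sketch (the matrix values are made up for illustration; numpy returns one representative, unit-length eigenvector per eigenvalue, and any non-zero multiple of it is also an eigenvector):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# columns of eigvecs are the eigenvectors, one per eigenvalue
eigvals, eigvecs = np.linalg.eig(A)
for lam, v in zip(eigvals, eigvecs.T):
    # verify the eigenequation A v = lambda v for each pair
    print(lam, np.allclose(A @ v, lam * v))  # each line: eigenvalue, True
```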