# Eigenvectors and Eigenvalues

I was talking to some students the other day (actually… a couple of months ago… ahem…). They had some questions about linear algebra problems, and after a short while it became clear that they had mastered some of the techniques for dealing with matrices and transformations, but sadly had no idea about some of the important concepts. The discussion moved on to why Eigenvectors, and thus Eigenvalues, are important. They could not answer, other than… “the way to calculate the Eigenvalue is…”. So I decided to write an entry here about why we are interested in these things (other than to pass the exam…).

Let me start with the origin and meaning of the word Eigen: it comes from German, where it is a prefix that can be translated as “proper”, “own” or “particular”. That perhaps hints at the mathematical meaning, which could even be translated as “characteristic”, a usage first introduced by David Hilbert (I believe…). Sometimes Eigenvectors are thus called “Proper Vectors”, although that is not my personal preference.

Consider a collection of numbers arranged in $n$ rows and $n$ columns, i.e. a square matrix that we will call $\bf{A}$. Let us also consider a non-zero column vector $\bf x$ with $n$ elements. We can therefore carry out the matrix multiplication $\bf{Ax}$. Now we raise the following question: is there a number $\lambda$ such that the multiplication $\lambda \bf x$ gives us the same result as $\bf{Ax}$? In other words:

$\bf{Ax} = \lambda \bf x$,

if so, then we say that $\lambda$ is an Eigenvalue of $\bf A$ and that $\bf x$ is an Eigenvector. Great! That part is fine and we can compute these quantities, but why are we interested in them? Well, it turns out that many applications in science and engineering rely on linear transformations, which in turn are characterised by Eigenvectors and Eigenvalues. A linear transformation is a function between two vector spaces that preserves the operations of addition and scalar multiplication. In simpler terms, a linear transformation maps, for example, straight lines to straight lines (or to a single point), and so it can describe how an object is stretched or rotated, and that is indeed useful.
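The defining relation $\bf{Ax} = \lambda \bf x$ is easy to check numerically. Here is a minimal sketch using NumPy; the small symmetric matrix is an invented example, chosen only for illustration:

```python
import numpy as np

# A small example matrix (made up for illustration)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# numpy.linalg.eig returns the Eigenvalues and the Eigenvectors
# (the Eigenvectors are the columns of the second array)
eigenvalues, eigenvectors = np.linalg.eig(A)

# Check the defining property A x = lambda x for the first pair
lam = eigenvalues[0]
x = eigenvectors[:, 0]
print(np.allclose(A @ x, lam * x))  # True
```

Note that for this matrix the Eigenvalues turn out to be 3 and 1, and the check holds for each Eigenvalue/Eigenvector pair.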

So, where do Eigenvectors and Eigenvalues come into play? Well, they make linear transformations easier to understand. Eigenvectors can be seen as the “directions” along which a linear transformation stretches (or compresses) or flips an object, whereas Eigenvalues are effectively the factors by which such changes occur. In that way, Eigenvalues characterise important properties of linear transformations, for example whether a system of linear equations has a unique solution, and, as described above, they can also describe the physical properties of a mathematical model.
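To make the “directions” picture concrete, here is a small sketch with a transformation I have made up for the purpose: a matrix that stretches by a factor of 3 along the horizontal axis and leaves the vertical axis alone. The coordinate axes are its Eigenvectors, with Eigenvalues 3 and 1:

```python
import numpy as np

# A transformation that stretches by 3 along x and leaves y unchanged
# (a made-up example, so the special directions are easy to see)
S = np.array([[3.0, 0.0],
              [0.0, 1.0]])

# The coordinate axes are the Eigenvectors: they keep their direction
print(S @ np.array([1.0, 0.0]))  # [3. 0.] -- scaled by the factor 3
print(S @ np.array([0.0, 1.0]))  # [0. 1.] -- scaled by the factor 1

# ...whereas a generic vector changes direction under S
v = np.array([1.0, 1.0])
print(S @ v)  # [3. 1.] -- no longer parallel to v
```

Along an Eigenvector the transformation is just multiplication by a number; along any other direction it is not, which is exactly why these directions are special.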

Do you want a concrete example in which this is used in daily life? Well, have a look at PageRank, used by Google…
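To give a flavour of the idea (this is a toy sketch, not Google’s actual algorithm): a page’s rank can be seen as a component of the dominant Eigenvector of a link matrix, i.e. a vector $\bf r$ with $\bf{Mr} = \bf r$, so $\lambda = 1$. Power iteration finds it by applying the matrix repeatedly. The three-page “web” below is entirely made up:

```python
import numpy as np

# Toy link matrix for 3 pages: column j spreads page j's rank
# over the pages it links to (each column sums to 1)
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

# Power iteration: start uniform and repeatedly apply M until
# the rank vector settles down
rank = np.ones(3) / 3
for _ in range(100):
    rank = M @ rank

# The result satisfies M r = r: an Eigenvector with Eigenvalue 1
print(np.round(rank, 3))  # [0.4 0.2 0.4]
```

The steady-state vector tells us pages 1 and 3 are “more important” than page 2 in this tiny web, and that ranking is nothing more than an Eigenvector computation.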