A set of linearly independent vectors is a basis for a given linear space if and only if every vector in the space can be obtained as a linear combination of the vectors in the set.
Let us start with a formal definition of basis.
Definition Let $x_1, \dots, x_n$ be $n$ linearly independent vectors. Let $S$ be a linear space. The vectors $x_1, \dots, x_n$ are said to be a basis for $S$ if and only if, for any $s \in S$, there exist $n$ scalars $\alpha_1$, ..., $\alpha_n$ such that $$s = \alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n$$
In other words, if any vector $s \in S$ can be represented as a linear combination of $x_1, \dots, x_n$, then these vectors are a basis for $S$ (provided they are also linearly independent).
Example Let $x_1$ and $x_2$ be two $2 \times 1$ column vectors defined as follows: $$x_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$ These two vectors are linearly independent (see Exercise 1 in the exercise set on linear independence). We are going to prove that $x_1$ and $x_2$ are a basis for the set $\mathbb{R}^2$ of all $2 \times 1$ real vectors. Now, take a vector $s \in \mathbb{R}^2$ and denote its two entries by $s_1$ and $s_2$. The vector $s$ can be written as a linear combination of $x_1$ and $x_2$ if there exist two coefficients $\alpha_1$ and $\alpha_2$ such that $$s = \alpha_1 x_1 + \alpha_2 x_2$$ This can be written as $$\begin{bmatrix} s_1 \\ s_2 \end{bmatrix} = \alpha_1 \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \alpha_2 \begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} \alpha_1 + \alpha_2 \\ \alpha_1 - \alpha_2 \end{bmatrix}$$ Therefore, the two coefficients $\alpha_1$ and $\alpha_2$ need to satisfy the following system of linear equations: $$\begin{cases} s_1 = \alpha_1 + \alpha_2 \\ s_2 = \alpha_1 - \alpha_2 \end{cases}$$ From the second equation, we obtain $$\alpha_1 = s_2 + \alpha_2$$ By substituting it in the first equation, we get $$s_1 = s_2 + 2\alpha_2$$ or $$2\alpha_2 = s_1 - s_2$$ As a consequence, $$\alpha_2 = \frac{s_1 - s_2}{2}, \quad \alpha_1 = \frac{s_1 + s_2}{2}$$ Thus, we have been able to find two coefficients that allow us to express $s$ as a linear combination of $x_1$ and $x_2$, for any $s \in \mathbb{R}^2$. Furthermore, $x_1$ and $x_2$ are linearly independent. As a consequence, they are a basis for $\mathbb{R}^2$.
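The computation in the example can be sketched numerically. This is a minimal illustration, not part of the lecture: the basis vectors and the target vector below are hypothetical choices, and finding the coefficients amounts to solving the linear system whose columns are the basis vectors.

```python
import numpy as np

# Hypothetical basis vectors for R^2 (any two linearly independent
# columns would do; these are illustration choices).
x1 = np.array([1.0, 1.0])
x2 = np.array([1.0, -1.0])

# Stack the basis vectors as columns of a matrix B, so that
# B @ [a1, a2] = s is the linear system for the coefficients.
B = np.column_stack([x1, x2])

s = np.array([3.0, 1.0])          # an arbitrary vector to represent
coeffs = np.linalg.solve(B, s)    # solve for the coefficients a1, a2

# Verify the representation: a1*x1 + a2*x2 reconstructs s.
assert np.allclose(coeffs[0] * x1 + coeffs[1] * x2, s)
print(coeffs)  # → [2. 1.]
```

Because the columns of `B` are linearly independent, `np.linalg.solve` succeeds for every choice of `s`, which is exactly the spanning property the example proves by hand.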
An important fact is that the representation of a vector in terms of a basis is unique.
Proposition If $x_1, \dots, x_n$ are a basis for a linear space $S$, then the representation of a vector $s \in S$ in terms of the basis is unique, i.e., there exists one and only one set of coefficients $\alpha_1, \dots, \alpha_n$ such that $$s = \alpha_1 x_1 + \dots + \alpha_n x_n$$
The proof is by contradiction. Suppose there were two different sets of coefficients $\alpha_1, \dots, \alpha_n$ and $\beta_1, \dots, \beta_n$ such that $$s = \alpha_1 x_1 + \dots + \alpha_n x_n$$ and $$s = \beta_1 x_1 + \dots + \beta_n x_n$$ If we subtract the second equation from the first, we obtain $$0 = (\alpha_1 - \beta_1) x_1 + \dots + (\alpha_n - \beta_n) x_n$$ Since the two sets of coefficients are different, there exists at least one $i$ such that $$\alpha_i - \beta_i \neq 0$$ Thus, there exists a linear combination of $x_1, \dots, x_n$, with coefficients not all equal to zero, giving the zero vector as a result. But this implies that $x_1, \dots, x_n$ are not linearly independent, which contradicts our hypothesis ($x_1, \dots, x_n$ are a basis, hence they are linearly independent).
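Uniqueness can also be seen computationally: stacking the basis vectors as columns gives an invertible matrix, so the system for the coefficients has exactly one solution. A small sketch, with a hypothetical basis of three vectors:

```python
import numpy as np

# Hypothetical basis of R^3 (illustration only): three linearly
# independent columns stacked into a matrix B.
B = np.column_stack([
    np.array([1.0, 0.0, 0.0]),
    np.array([1.0, 1.0, 0.0]),
    np.array([1.0, 1.0, 1.0]),
])

# Because the columns are linearly independent, B is invertible, and
# B @ alpha = s has exactly one solution: the unique representation.
s = np.array([2.0, 3.0, 4.0])
alpha = np.linalg.solve(B, s)

# Any second representation beta would satisfy B @ (alpha - beta) = 0;
# an invertible B forces alpha - beta = 0, i.e. beta == alpha.
assert abs(np.linalg.det(B)) > 1e-12
assert np.allclose(B @ alpha, s)
```

The assertion on the determinant is the numerical counterpart of linear independence: a zero determinant would mean a nontrivial combination of the columns equals the zero vector.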
The replacement theorem states that, under appropriate conditions, a given basis can be used to build another basis by replacing one of its vectors.
Proposition Let $x_1, \dots, x_n$ be a basis for a linear space $S$. Let $y \in S$. If $y \neq 0$, then a new basis can be obtained by replacing one of the vectors $x_1, \dots, x_n$ with $y$.
Because $x_1, \dots, x_n$ is a basis for $S$ and $y \in S$, there exist $n$ scalars $\alpha_1$, ..., $\alpha_n$ such that $$y = \alpha_1 x_1 + \dots + \alpha_n x_n$$ At least one of the scalars must be different from zero, because otherwise we would have $y = 0$, in contradiction with our hypothesis that $y \neq 0$. Without loss of generality, we can assume that $\alpha_1 \neq 0$ (if it is not, we can re-number the vectors in the basis). Now, consider the set of vectors obtained from our basis by replacing $x_1$ with $y$: $$y, x_2, \dots, x_n$$ If this new set of vectors is linearly independent and spans $S$, then it is a basis and the proposition is proved. First, we are going to prove linear independence. Suppose $$\beta_1 y + \beta_2 x_2 + \dots + \beta_n x_n = 0$$ for some set of scalars $\beta_1, \dots, \beta_n$. By replacing $y$ with its representation in terms of the original basis, we obtain $$\beta_1 \alpha_1 x_1 + (\beta_1 \alpha_2 + \beta_2) x_2 + \dots + (\beta_1 \alpha_n + \beta_n) x_n = 0$$ Because $x_1, \dots, x_n$ are linearly independent, this implies that $$\beta_1 \alpha_1 = 0, \quad \beta_1 \alpha_2 + \beta_2 = 0, \quad \dots, \quad \beta_1 \alpha_n + \beta_n = 0$$ But we know that $\alpha_1 \neq 0$. As a consequence, $\beta_1 \alpha_1 = 0$ implies $\beta_1 = 0$. By substitution in the other equations, we obtain $$\beta_2 = 0, \quad \dots, \quad \beta_n = 0$$ Thus, we can conclude that $\beta_1 y + \beta_2 x_2 + \dots + \beta_n x_n = 0$ implies that all coefficients are equal to zero. By the very definition of linear independence, this means that $y, x_2, \dots, x_n$ are linearly independent. This concludes the first part of our proof. We now need to prove that $y, x_2, \dots, x_n$ span $S$. In other words, we need to prove that for any $s \in S$, we can find coefficients $\gamma_1, \dots, \gamma_n$ such that $$s = \gamma_1 y + \gamma_2 x_2 + \dots + \gamma_n x_n$$ Because $x_1, \dots, x_n$ is a basis, there are coefficients $\delta_1, \dots, \delta_n$ such that $$s = \delta_1 x_1 + \delta_2 x_2 + \dots + \delta_n x_n$$ From previous results, we have that $$y = \alpha_1 x_1 + \dots + \alpha_n x_n$$ and, as a consequence, $$x_1 = \frac{1}{\alpha_1}\left(y - \alpha_2 x_2 - \dots - \alpha_n x_n\right)$$ Thus, we can write $$s = \frac{\delta_1}{\alpha_1} y + \left(\delta_2 - \frac{\delta_1 \alpha_2}{\alpha_1}\right) x_2 + \dots + \left(\delta_n - \frac{\delta_1 \alpha_n}{\alpha_1}\right) x_n$$ This means that the desired linear representation is achieved with $$\gamma_1 = \frac{\delta_1}{\alpha_1}, \quad \gamma_i = \delta_i - \frac{\delta_1 \alpha_i}{\alpha_1} \text{ for } i = 2, \dots, n$$ As a consequence, $y, x_2, \dots, x_n$ span $S$. This concludes the second and last part of the proof.
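The replacement step can be checked numerically. In the sketch below (a hypothetical basis and vector, chosen for illustration), we express $y$ in the current basis, pick a position whose coefficient is nonzero, swap $y$ in at that position, and confirm that the resulting columns are still a basis by checking invertibility:

```python
import numpy as np

# Hypothetical basis of R^3 and a nonzero vector y (illustration only).
B = np.eye(3)                      # columns e1, e2, e3 form a basis
y = np.array([0.0, 2.0, 5.0])      # y != 0

# Coefficients of y in the current basis (here simply a = y,
# since B is the identity).
a = np.linalg.solve(B, y)

# Replace a column whose coefficient is nonzero. Here a1 = 0, so
# column 0 cannot be replaced; this mirrors the proof's re-numbering
# step, which picks a vector with a nonzero coefficient.
i = int(np.flatnonzero(np.abs(a) > 1e-12)[0])
B_new = B.copy()
B_new[:, i] = y

# The new columns are still a basis: the matrix remains invertible.
assert abs(np.linalg.det(B_new)) > 1e-12
```

Replacing a column whose coefficient is zero would fail this check: the removed vector would no longer be in the span of the new set, which is why the proof needs $\alpha_1 \neq 0$.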