Chapter 2 Vector Spaces (Continuation)
2.1 Linear Independence
In \(V_{3}(\mathbb {R})\) , let \(S=\{e_{1},\ e_{2},\ e_{3}\}\). We have seen that \(L(S)=V_{3}(\mathbb {R})\). Thus \(S\) is a subset of \(V_{3}(\mathbb {R})\) which spans the whole space \(V_{3}(\mathbb {R})\).
1. \(V_{3}(\mathbb {R})\) is a finite dimensional vector space.
2. \(V_{n}(\mathbb {R})\) is a finite dimensional vector space, since \(S=\{e_{1},\ e_{2},\ \ldots ,\ e_{n}\}\) is a finite subset of \(V_{n}(\mathbb {R})\) such that \(L(S)=V_{n}(\mathbb {R})\). In general, if \(F\) is any field, \(V_{n}(F)\) is a finite dimensional vector space over \(F.\)
3. Let \(V\) be the set of all polynomials in \(F[x]\) of degree \(\leq n\). Let \(S=\{1,\ x,\ x^{2},\ \ldots ,\ x^{n}\}.\) Then \(L(S)=V\) and hence \(V\) is finite dimensional.
4. \(\mathbb {C}\) is a finite dimensional vector space over \(\mathbb {R}\), since \(L(\{1,\ i\})=\mathbb {C}.\)
5. In \(M_{2}(\mathbb {R})\) consider the set \(S\) consisting of the matrices
\(A=\left (\begin {array}{ll} 1 & 0\\ 0 & 0 \end {array}\right )\); \(B=\left (\begin {array}{ll} 0 & 1\\ 0 & 0 \end {array}\right )\);
\(C=\left (\begin {array}{ll} 0 & 0\\ 1 & 0 \end {array}\right )\); \(D=\left (\begin {array}{ll} 0 & 0\\ 0 & 1 \end {array}\right )\).
Then \(\left (\begin {array}{ll} a & b\\ c & d \end {array}\right )=aA+bB+cC+dD\).
Hence \(L(S)=M_{2}(\mathbb {R})\) so that \(M_{2}(\mathbb {R})\) is finite dimensional.
Example 2.1.4. Consider \(\mathbb {R}[x]\). Let \(S\) be any finite subset of \(\mathbb {R}[x]\). Let \(f\) be a polynomial of maximum degree in \(S\). Let \(\deg f=n\). Then any element of \(L(S)\) is a polynomial of degree \(\leq n\) and hence \(L(S)\neq \mathbb {R}[x]\). Thus \(\mathbb {R}[x]\) is not finite dimensional.
Note 2.1.6. Consider the vectors \(e_{1}=(1,0,0), e_{2}=(0,1,0)\),
\(e_{3}=(0,0,1)\) in \(V_{3}(\mathbb {R})\).
Suppose that \(\alpha _{1}e_{1}+\alpha _{2}e_{2}+\alpha _{3}e_{3}=0\). Then
\(\seteqnumber{0}{2.}{0}\)\begin{align*} (\alpha _{1},0,0)+(0,\ \alpha _{2},0)+(0,0,\ \alpha _{3}) & =(0,0,0)\\ (\alpha _{1},\ \alpha _{2},\ \alpha _{3})& =(0,0,0)\\ \alpha _{1}=\alpha _{2}=\alpha _{3}&=0 \end{align*} (i.e.,) \(\alpha _{1}e_{1}+\alpha _{2}e_{2}+\alpha _{3}e_{3}=0\) if and only if \(\alpha _{1}=\alpha _{2}=\alpha _{3}=0\).
Thus a linear combination of the vectors \(e_{1}, e_{2}\) and \(e_{3}\) will yield the zero vector if and only if all the coefficients are zero.
Definition 2.1.7. Let \(V\) be a vector space over a field \(F\). A finite set of vectors \(v_{1}, v_{2}, \ldots , v_{n}\) in \(V\) is said to be linearly independent if
\[\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{n}v_{n}= 0\Rightarrow \alpha _{1}=\alpha _{2}=\cdots =\alpha _{n}=0.\]
If \(v_{1}, v_{2}, \ldots , v_{n}\) are not linearly independent, then they are said to be linearly dependent.
1. In \(V_{n}(F), \{e_{1},\ e_{2},\ \ldots ,\ e_{n}\}\) is a linearly independent set of vectors, for,
\(\seteqnumber{0}{2.}{0}\)\begin{align*} \alpha _{1}e_{1}+\alpha _{2}e_{2}+ \cdots +\alpha _{n}e_{n}&=0\\ \Rightarrow \alpha _{1}(1,0,\ \ldots ,\ 0) +\alpha _{2}(0,1,\ \ldots ,\ 0)+\cdots \\ + \alpha _{n}(0,0,\ \ldots ,\ 1)& =(0,0,\ \ldots ,\ 0)\\ \Rightarrow (\alpha _{1},\ \alpha _{2},\ \ldots ,\ \alpha _{n})& =(0,0,\ \ldots ,\ 0)\\ \Rightarrow \alpha _{1}=\alpha _{2}=\cdots =\alpha _{n}&=0. \end{align*}
2. In \(V_{3}(\mathbb {R})\) the vectors \((1,\ 2,\ 1)\) , \((2,\ 1,\ 0)\) and \((1,\ -1,2)\) are linearly independent. For, let
\(\seteqnumber{0}{2.}{0}\)\begin{align*} \alpha _{1}(1,2,1)+\alpha _{2}(2,1,0)+\alpha _{3}(1,\ -1,2)&=(0,0,0)\\(\alpha _{1}+2\alpha _{2}+\alpha _{3},2\alpha _{1}+\alpha _{2}-\alpha _{3},\ \alpha _{1}+2\alpha _{3})& = (0,0,0) \end{align*}
\(\seteqnumber{0}{2.}{0}\)\begin{align} \alpha _{1}+2\alpha _{2}+\alpha _{3}&=0 \label {p516eq1} \\ 2\alpha _{1}+\alpha _{2}-\alpha _{3}&=0 \label {p516eq2}\\ \alpha _{1}+2\alpha _{3}&=0\label {p516eq3} \end{align} Solving equations (2.1),(2.2) and (2.3) we get
\[\alpha _{1}=\alpha _{2}=\alpha _{3}=0.\]
Hence the given vectors are linearly independent.
3. In \(V_{3}(\mathbb {R})\) the vectors \((1,\ 4,\ -2), (-2,1,3)\) and \((-4,11,5)\) are linearly dependent. For, let
\(\seteqnumber{0}{2.}{3}\)\begin{align} \alpha _{1}(1,4,\ -2)+\alpha _{2}(-2,1,3)+\alpha _{3}(-4,11,5)&=(0,0,0) \notag \\ \alpha _{1}-2\alpha _{2}-4\alpha _{3}& =0 \label {p517eq1}\\ 4\alpha _{1}+\alpha _{2}+11\alpha _{3}&=0\label {p517eq2}\\ -2\alpha _{1}+3\alpha _{2}+5\alpha _{3}&=0\label {p517eq3} \end{align} From (2.4) and (2.5),
\(\seteqnumber{0}{2.}{6}\)
\begin{align*}
\frac {\alpha _{1}}{-18}=\frac {\alpha _{2}}{-27}=\frac {\alpha _{3}}{9}&=k \, \text {(say)}\\ \alpha _{1}=-18k, \alpha _{2}=-27k, \alpha _{3}&=9k.
\end{align*}
For any \(k\), these values of \(\alpha _{1}, \alpha _{2}\) and \(\alpha _{3}\) satisfy (2.6) also.
Taking \(k=1\) we get \(\alpha _{1}= -18, \alpha _{2}=-27, \alpha _{3}=9\) as a non-trivial solution. Hence the three vectors are linearly dependent. (A computational check of items 2 and 3 is sketched after this list of examples.)
4. Let \(V\) be a vector space over a field \(F\). Then any subset \(S\) of \(V\) containing the zero vector is linearly dependent.
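The independence and dependence checks in items 2 and 3 above can also be carried out mechanically. The following is a small sketch, assuming SymPy is available (the matrix names are illustrative): each matrix has the given vectors as its columns, and a non-zero null space signals a non-trivial solution of the homogeneous system.
\begin{verbatim}
from sympy import Matrix

# Columns are the vectors of item 2: (1,2,1), (2,1,0), (1,-1,2).
A = Matrix([[1, 2, 1],
            [2, 1, -1],
            [1, 0, 2]])
print(A.rank(), A.nullspace())   # 3 []  -> only the trivial solution: independent

# Columns are the vectors of item 3: (1,4,-2), (-2,1,3), (-4,11,5).
B = Matrix([[1, -2, -4],
            [4, 1, 11],
            [-2, 3, 5]])
print(B.rank(), B.nullspace())   # 2 [Matrix([[-2], [-3], [1]])]
# The null-space vector (-2, -3, 1) is proportional to (-18, -27, 9) found above.
\end{verbatim}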
Proof : Let \(V\) be a vector space over a field \(F\).
Let \(S=\{v_{1},\ v_{2},\ \ldots ,\ v_{n}\}\) be a linearly independent set. We show that any subset \(S'\) of \(S\) is also linearly independent. Without loss of generality we take
\(S'= \{v_{1},\ v_{2},\ \ldots ,\ v_{k}\}\) where \(k\leq n\).
Suppose \(S'\) is a linearly dependent set.
Then there exist \(\alpha _{1}, \alpha _{2}, \ldots , \alpha _{k}\) in \(F\) not all zero, such that
\begin{align*} \alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{k}v_{k}&=0 \end{align*} Hence
\(\seteqnumber{0}{2.}{6}\)
\begin{align*}
\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{k}v_{k}+0v_{k+1}+\cdots +0v_{n}&=0
\end{align*}
is a non-trivial linear combination giving the zero vector.
Hence \(S\) is a linearly dependent set, which is a contradiction.
Hence \(S'\) is linearly independent. □
Proof : Let \(V\) be a vector space and \(S\) be a linearly dependent set. Let \(S'\supset S\).
If \(S'\) were linearly independent, then \(S\), being a subset of \(S'\), would also be linearly independent (by Theorem 2.1.10), which is a contradiction. Hence \(S'\) is linearly dependent. □
Theorem 2.1.12. Let \(S=\{v_{1},\ v_{2},\ \ldots ,\ v_{n}\}\) be a linearly independent set of vectors in a vector space \(V\) over a field \(F\). Then every element of \(L(S)\) can be uniquely written in the form \(\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{n}v_{n}\), where \(\alpha _{i}\in F.\)
Proof : By definition every element of \(L(S)\) is of the form
\[\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{n}v_{n}\]
Suppose
\(\seteqnumber{0}{2.}{6}\)\begin{align*} \alpha _{1}v_{1}+\alpha _{2} v_{2}+\cdots +\alpha _{n}v_{n} &=\beta _{1}v_{1}+\beta _{2}v_{2}+\cdots +\beta _{n}v_{n} \end{align*} Hence
\(\seteqnumber{0}{2.}{6}\)
\begin{align*}
(\alpha _{1}-\beta _{1})v_{1}+(\alpha _{2}- \beta _{2})v_{2}+\cdots +(\alpha _{n}-\beta _{n})v_{n}&=0
\end{align*}
Since \(S\) is a linearly independent set, \(\alpha _{i}-\beta _{i}=0\) for all \(i\).
Hence \(\alpha _{i}=\beta _{i}\) for all \(i\), and the theorem follows. □
Proof : Suppose \(v_{1}, v_{2}, \ldots , v_{n}\) are linearly dependent.
Then there exist \(\alpha _{1}, \alpha _{2}, \ldots , \alpha _{n}\in F\), not all zero, such that
\[\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{n}v_{n}=0.\]
Let \(k\) be the largest integer for which \(\alpha _{k}\neq 0\).
Then \(\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{k}v_{k}=0\)
\begin{align*} \alpha _{k}v_{k}& =-\alpha _{1}v_{1}-\alpha _{2}v_{2}-\cdots -\alpha _{k-1}v_{k-1}\\ v_{k}&=(-\alpha _{k}^{-1}\alpha _{1})v_{1}+\cdots +(-\alpha _{k}^{-1}\alpha _{k-1})v_{k-1} \end{align*} \(\therefore \) \(v_{k}\) is a linear combination of the preceding vectors.
Conversely, suppose there exists a vector \(v_{k}\) such that
\[v_k=\alpha _{1}v_{1}+ \alpha _{2}v_{2}+\cdots +\alpha _{k-1}v_{k-1}.\]
Hence
\[-\alpha _{1}v_{1}-\alpha _{2}v_{2}-\cdots -\alpha _{k-1}v_{k-1}+v_{k}+0v_{k+1}+\cdots +0v_{n}=0.\]
Since the coefficient of \(v_{k}\) is \(1\), this is a non-trivial linear combination giving the zero vector. Hence
\[S=\{v_{1},\ v_{2},\ \ldots ,\ v_{n}\}\]
is linearly dependent. □
Proof : Let \(S=\{v_{1},\ v_{2},\ \ldots ,\ v_{n}\}\).
If \(S\) is linearly independent, the proof is complete.
If not, let \(v_{k}\) be the first vector in \(S\) which is a linear combination of the preceding vectors. Let
\[S_{1}=\{v_{1},\ v_{2},\ \ldots ,\ v_{k-1},\ v_{k+1},\ \ldots ,\ v_{n}\}\]
(i.e.,) \(S_{1}\) is obtained by deleting the vector \(v_{k}\) from \(S\).
We claim that \(L(S_{1})=L(S)=W\).
Since \(S_{1}\subseteq S, L(S_{1})\subseteq L(S)\).
Now, let \(v\in L(S)\). Then
\[v=\alpha _{1}v_{1}+\cdots +\alpha _{k}v_{k}+\cdots +\alpha _{n}v_{n}.\]
Now, \(v_{k}\) is a linear combination of the preceding vectors. Let
\[v_{k}=\beta _{1}v_{1}+\cdots +\beta _{k-1}v_{k-1}.\]
Hence
\(\seteqnumber{0}{2.}{6}\)
\begin{align*}
v&=\alpha _{1}v_{1}+\cdots +\alpha _{k-1}v_{k-1}+\alpha _{k}(\beta _{1}v_{1}+\cdots +\beta _{k-1}v_{k-1})\\ & \quad +\alpha _{k+1}v_{k+1}+\cdots +\alpha _{n}v_{n}
\end{align*}
\(\therefore \) \(v\) can be expressed as a linear combination of the vectors of \(S_{1}\) so that \(v\in L(S_{1})\). Hence \(L(S)\subseteq L(S_{1})\).
Thus \(L(S)=L(S_{1})=W\).
Now, if \(S_{1}\) is linearly independent, the proof is complete. If not, we continue the above process of removing a vector from \(S_{1}\), which is a linear combination of the preceding vectors until we arrive at a linearly independent subset \(S'\) of \(S\) such
that \(L(S')=W.\) □
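The deletion process used in this proof can be mirrored computationally. A minimal sketch, assuming NumPy (the function name is illustrative), keeps a vector only when it is not a linear combination of the vectors already kept, detected by a rank comparison.
\begin{verbatim}
import numpy as np

def independent_spanning_subset(vectors, tol=1e-10):
    """Keep each vector only if it is not a linear combination of the ones kept so far."""
    kept = []
    for v in vectors:
        candidate = kept + [v]
        # v is a combination of the kept vectors exactly when adding it does not raise the rank
        if np.linalg.matrix_rank(np.array(candidate), tol=tol) > len(kept):
            kept.append(v)
    return kept

S = [(1, 0, 0), (0, 1, 0), (1, 1, 0), (1, 1, 1)]    # (1,1,0) is redundant
print(independent_spanning_subset(S))                # [(1, 0, 0), (0, 1, 0), (1, 1, 1)]
\end{verbatim}
The returned subset spans the same subspace as \(S\), exactly as the proof constructs \(S'\).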
2.2 Basis and Dimension
Proof : Since \(V\) is finite dimensional there exists a finite subset \(S\) of \(V\) such that \(L(S)=V\). Clearly this set \(S\) contains a linearly independent subset \(S'=\{v_{1},\ v_{2},\ \ldots ,\ v_{n}\}\) such that
\[L(S')=L(S)=V.\]
Hence \(S'\) is a basis for \(V.\) □
Proof : Let \(S\) be a basis for \(V\). Then by definition \(S\) is linearly independent and \(L(S)=V\). Hence by Theorem 2.1.12 every element of \(V\) can be uniquely expressed as a linear combination of elements of \(S.\)
Conversely, suppose every element of \(V\) can be uniquely expressed as a linear combination of elements of \(S\).
Clearly \(L(S)=V\).
Now, let \(\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{n}v_{n}=0.\)
Also, \(0v_{1}+0v_{2}+\cdots +0v_{n}=0\).
Thus we have expressed \(0\) as a linear combination of vectors of \(S\) in two ways.
By hypothesis \(\alpha _{1}=\alpha _{2}=\cdots =\alpha _{n}=0\). Hence \(S\) is linearly independent. Therefore \(S\) is a basis. □
1. \(S=\{(1,0,0),\ (0,1,0),\ (0,0,1)\}\) is a basis for \(V_{3}(\mathbb {R})\) for,
\[(a,\ b,\ c)=a(1,0,0)+b(0,1,0)+ c(0,0,1).\]
\(\therefore \) Any vector \((a,\ b,\ c)\) of \(V_{3}(\mathbb {R})\) has been expressed uniquely as a linear combination of the elements of \(S\) and hence \(S\) is a basis for \(V_{3}(\mathbb {R})\).
2. \(S=\{e_{1},\ e_{2},\ \ldots ,\ e_{n}\}\) is a basis for \(V_{n}(F)\). This is known as the standard basis for \(V_{n}(F)\).
3. \(S=\{(1,0,0),\ (0,1,0),\ (1,1,1)\}\) is a basis for \(V_{3}(\mathbb {R})\).
Proof : We shall show that any element \((a,\ b,\ c)\) of \(V_{3}(\mathbb {R})\) can be uniquely expressed as a linear combination of the vectors of \(S\). Let
\(\seteqnumber{0}{2.}{6}\)\begin{align*} (a,\ b,\ c) & =\alpha (1,0,0)+\beta (0,1,0)+\gamma (1,1,1) \end{align*} Then,
\(\seteqnumber{0}{2.}{6}\)\begin{align*} \alpha +\gamma & =a, \, \beta +\gamma =b, \, \gamma =c \end{align*} Hence
\(\seteqnumber{0}{2.}{6}\)\begin{align*} \alpha & =a-c \text { and }\beta =b-c \end{align*} Thus,
\(\seteqnumber{0}{2.}{6}\)\begin{align*} (a,\ b,\ c)&=(a-c)(1,0,0)+(b-c)(0,1,0)+c(1,1,1) \end{align*} Hence \(S\) is a basis for \(V_{3}(\mathbb {R})\) (a numerical check of this coordinate computation is sketched after these examples). □
4. \(S=\{1\}\) is a basis for the vector space \(\mathbb {R}\) over \(\mathbb {R}.\)
5. \(S=\left \{\left (\begin {array}{ll} 1 & 0\\ 0 & 0 \end {array}\right ), \left (\begin {array}{ll} 0 & 1\\ 0 & 0 \end {array}\right ), \left (\begin {array}{ll} 0 & 0\\ 1 & 0 \end {array}\right ), \left (\begin {array}{ll} 0 & 0\\ 0 & 1 \end {array}\right )\right \}\) is a basis for \(M_{2}(\mathbb {R})\), since any matrix \(\left (\begin {array}{ll} a & b\\ c & d \end {array}\right )\) can be uniquely written as
\(\seteqnumber{0}{2.}{6}\)\begin{align*} \left (\begin{array}{ll} a & b\\ c & d \end {array}\right ) & =a\left (\begin{array}{ll} 1 & 0\\ 0 & 0 \end {array}\right )+b\left (\begin{array}{ll} 0 & 1\\ 0 & 0 \end {array}\right )+ c\left (\begin{array}{ll} 0 & 0\\ 1 & 0 \end {array}\right )\\ & \quad +d\left (\begin{array}{ll} 0 & 0\\ 0 & 1 \end {array}\right ) \end{align*}
6. \(\{1, i\}\) is a basis for the vector space \(\mathbb {C}\) over \(\mathbb {R}.\)
7. Let \(V\) be the set of all polynomials of degree \(\leq n\) in \(\mathbb {R}[x]\). Then \(\{1,\ x,\ x^{2},\ \ldots ,\ x^{n}\}\) is a basis for \(V.\)
8. \(\{(1,0),\ (i,\ 0),\ (0,1),\ (0,\ i)\}\) is a basis for the vector space \(\mathbb {C}\times \mathbb {C}\) over \(\mathbb {R}\), for
\[(a+ib, c+ id)=a(1,0)+b(i,\ 0)+c(0,1)+d(0,\ i).\]
9. \(S=\{(1,0,0),\ (0,1,0),\ (1,1,1),\ (1,1,0)\}\) spans the vector space \(V_{3}(\mathbb {R})\) but is not a basis.
Proof : Let \(S'=\{(1,0,0),\ (0,1,0),\ (1,1,1)\}\).
Then \(L(S')=V_{3}(\mathbb {R})\) (Refer Example 3).
Now, since \(S'\subseteq S\), we get \(L(S')\subseteq L(S)\) and hence \(L(S)=V_{3}(\mathbb {R})\).
Thus \(S\) spans \(V_{3}(\mathbb {R})\).
But \(S\) is linearly dependent since
\[(1,\ 1,\ 0)=(1,0,0)+(0,1,0).\]
Hence \(S\) is not a basis. □
10. The set \(S=\{(1,0,0),\ (1,1,0)\}\) is linearly independent but is not a basis of \(V_{3}(\mathbb {R})\).
Proof : Let
\(\seteqnumber{0}{2.}{6}\)\begin{align*} \alpha (1,0,0)+\beta (1,1,0)&=(0,0,0) \end{align*} Then,
\(\seteqnumber{0}{2.}{6}\)\begin{align*} \alpha +\beta &=0 \text { and }\beta =0 \\ \alpha =\beta & =0 \end{align*} Hence \(S\) is linearly independent. Also
\[L(S)=\{(a,\ b,\ 0)\ :\ a,\ b\in \mathbb {R}\}\neq V_{3}(\mathbb {R}).\]
\(\therefore \) \(S\) is not a basis. □
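The coordinates \((\alpha ,\ \beta ,\ \gamma )=(a-c,\ b-c,\ c)\) found in Example 3 can also be obtained by solving a linear system. A small sketch, assuming NumPy, for the sample vector \((4,\ 7,\ 2)\):
\begin{verbatim}
import numpy as np

# Columns are the basis vectors (1,0,0), (0,1,0), (1,1,1) of Example 3.
B = np.array([[1, 0, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

v = np.array([4.0, 7.0, 2.0])          # (a, b, c) = (4, 7, 2)
print(np.linalg.solve(B, v))           # [2. 5. 2.] = (a - c, b - c, c)
\end{verbatim}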
Proof : Since \(L(S)=V\), every vector in \(V\) and in particular \(w_{1}\), is a linear combination of \(v_{1}, v_{2}, \ldots , v_{n}\).
Hence \(S_{1}=\{w_{1},\ v_{1},\ v_{2},\ \ldots ,\ v_{n}\}\) is a linearly dependent set of vectors. Hence there exists a vector \(v_{k}\neq w_{1}\) in \(S_{1}\) which is a linear combination of the preceding vectors.
Let \(S_{2}=\{w_{1},\ v_{1},\ \ldots ,\ v_{k-1},\ v_{k+1},\ \ldots ,\ v_{n}\}\).
Clearly, \(L(S_{2})=V\).
Hence \(w_{2}\) is a linear combination of the vectors in \(S_{2}\).
Hence \(S_{3}=\{w_{2},\ w_{1},\ v_{1},\ \ldots ,\ v_{k-1},\ v_{k+1},\ \ldots ,\ v_{n}\}\) is linearly dependent.
Hence there exists a vector in \(S_{3}\) which is a linear combination of the preceding vectors. Since the \(w_{i}\)’s are linearly independent, this vector cannot be \(w_{2}\) or \(w_{1}\) and hence must be some \(v_{j}\) where \(j\neq k\) (say, with \(j>k\)). Deletion of \(v_{j}\) from the set \(S_{3}\) gives the set
\(S_{4}=\{w_{2},\ w_{1},\ v_{1},\ \ldots ,\ v_{k-1},\ v_{k+1},\ \ldots ,\ v_{j-1},\ v_{j+1},\ \ldots ,\ v_{n}\}\) of \(n\) vectors spanning \(V\).
In this process, at each step we insert one vector from \(\{w_{1},\ w_{2},\ \ldots ,\ w_{m}\}\) and delete one vector from \(\{v_{1},\ v_{2},\ \ldots ,\ v_{n}\}\).
If \(m>n\), then after repeating this process \(n\) times we arrive at the set \(\{w_{n},\ w_{n-1},\ \ldots ,\ w_{1}\}\), which spans \(V.\)
Hence \(w_{n+1}\) is a linear combination of \(w_{1}, w_{2}, \ldots , w_{n}\).
Hence \(\{w_{1},\ w_{2},\ \ldots ,\ w_{n},\ w_{n+1}\}\) is linearly dependent, which is a contradiction.
Hence \(m\leq n.\) □
Proof : Since \(V\) is finite dimensional, it has a basis say
\[S=\{v_{1},\ v_{2},\ \ldots ,\ v_{n}\}.\]
Let
\[S'=\{w_{1},\ w_{2},\ \ldots ,\ w_{m}\}\]
be any other basis for \(V\).
Now, \(L(S)=V\) and \(S'\) is a set of \(m\) linearly independent vectors.
Hence by Theorem 2.2.5, \(m\leq n\).
Also, since \(L(S')=V\) and \(S\) is a set of \(n\) linearly independent vectors, \(n\leq m\). Hence \(m=n.\) □
1. \(\dim V_n(\mathbb {R})=n\), since \(\{e_1, e_2, \ldots , e_n\}\) is a basis of \(V_n(\mathbb {R})\).
2. \(M_2(\mathbb {R})\) is a vector space of dimension 4 over \(\mathbb {R}\) since
\[ \left \{ \begin {pmatrix}1 & 0 \\ 0 & 0 \end {pmatrix}, \begin {pmatrix}0 & 1 \\ 0 & 0 \end {pmatrix}, \begin {pmatrix}0 & 0 \\ 1 & 0 \end {pmatrix}, \begin {pmatrix}0 & 0 \\ 0 & 1 \end {pmatrix} \right \} \]
is a basis for \(M_2(\mathbb {R})\).
3. \(\mathbb {C}\) is a vector space of dimension 2 over \(\mathbb {R}\) since \(\{1,i\}\) is a basis for \(\mathbb {C}\).
4. Let \(V\) be the set of all polynomials of degree \(\leq n \) in \(\mathbb {R}[x]\). \(V\) is a vector space over \(\mathbb {R}\) having dimension \(n+1\), since \(\{1, x, x^2 , \ldots , x^n \}\) is a basis for \(V\).
Proof :
1. Let \(S=\{v_{1},\ v_{2},\ \cdots ,\ v_{n}\}\) be a basis for \(V\).
Hence \(L(S)=V\).
Let \(S'\) be any set consisting of \(m\) vectors where \(m>n\).
Suppose \(S'\) is linearly independent.
Since \(S\) spans \(V\) and \(S'\) is linearly independent, Theorem 2.2.5 gives \(m\leq n\), which is a contradiction.
Hence \(S'\) is linearly dependent.
2. Let \(S'\) be a set consisting of \(m\) vectors where \(m<n\).
Suppose \(L(S')=V\).
Now, \(S=\{v_{1},\ v_{2},\ \cdots ,\ v_{n}\}\) is a basis for \(V\) and hence linearly independent.
Hence by Theorem 2.2.5 \(n\leq m\) which is a contradiction.
Hence \(S'\) cannot span \(V.\)
□
Proof : Let \(S=\{v_{1},\ v_{2},\ \ldots ,\ v_{r}\}\) be a linearly independent set of vectors. If \(L(S)=V\) then \(S\) itself is a basis. If \(L(S)\neq V\), choose an element \(v_{r+1}\in V-L(S)\). Now, consider \(S_{1}=\{v_{1},\ v_{2},\ \ldots ,\ v_{r},\ v_{r+1}\}\). We shall prove that \(S_{1}\) is linearly independent by showing that no vector in \(S_{1}\) is a linear combination of the preceding vectors. Since \(\{v_{1},\ v_{2},\ \ldots ,\ v_{r}\}\) is linearly independent, \(v_{i}\) where \(1\leq i\leq r\) is not a linear combination of the preceding vectors. Also \(v_{r+1}\not \in L(S)\) and hence \(v_{r+1}\) is not a linear combination of \(v_{1}, v_{2}, \ldots , v_{r}\). Hence \(S_{1}\) is linearly independent. If \(L(S_{1})=V\), then \(S_{1}\) is a basis for \(V\). If not, we take an element \(v_{r+2}\in V-L(S_{1})\) and proceed as before. Since the dimension of \(V\) is finite, this process must stop at a certain stage, giving the required basis containing \(S.\) □
Proof : Let \(S=\{v_{1},\ v_{2},\ \ldots ,\ v_{r}\}\) be a basis of \(A\). By the preceding theorem, \(S\) can be extended to a basis of \(V\); that is, we can find \(w_{1}, w_{2}, \ldots , w_{s}\in V\) such that \(S'=\{v_{1},\ v_{2},\ \ldots ,\ v_{r},\ w_{1},\ w_{2},\ \ldots ,\ w_{s}\}\) is a basis of \(V\). Now, let \(B=L(\{w_{1},\ w_{2},\ \ldots ,\ w_{s}\})\). We claim that \(A\cap B=\{0\}\) and \(V=A+B\). Now, let \(v\in A\cap B\). Then \(v\in A\) and \(v\in B\). Hence \(v=\alpha _{1}v_{1}+\cdots +\alpha _{r}v_{r}=\beta _{1}w_{1}+\cdots +\beta _{s}w_{s}\), so that \(\alpha _{1}v_{1}+\cdots +\alpha _{r}v_{r}-\beta _{1}w_{1}-\cdots -\beta _{s}w_{s}=0\). Now, since \(S'\) is linearly independent, \(\alpha _{i}=0=\beta _{j}\) for all \(i\) and \(j.\) Hence \(v=0\). Thus \(A\cap B=\{0\}.\)
Now, let \(v\in V\). Since \(S'\) is a basis of \(V\), \(v=(\alpha _{1}v_{1}+\cdots +\alpha _{r}v_{r})+(\beta _{1}w_{1}+\cdots +\beta _{s}w_{s})\in A+B\). Hence \(A+B=V\) so that \(V=A\oplus B.\) □
Proof : \(({i})\Rightarrow ({ii})\) Let \(S=\{v_{1},\ v_{2},\ \ldots ,\ v_{n}\}\) be a basis for \(V\). Then any \(n+1\) vectors in \(V\) are linearly dependent (as proved above), and hence \(S\) is a maximal linearly independent set.
\(({ii})\Rightarrow ({i})\) Let \(S=\{v_{1},\ v_{2},\ \ldots ,\ v_{n}\}\) be a maximal linearly independent set. To prove that \(S\) is a basis for \(V\), we shall prove that \(L(S)=V\). Obviously \(L(S)\subseteq V\). Now, let \(v\in V\). If \(v\in S\), then \(v\in L(S)\) (since \(S\subseteq L(S)\)). If \(v\not \in S\), then \(S'=\{v_{1},\ v_{2},\ \ldots ,\ v_{n},\ v\}\) is a linearly dependent set (since \(S\) is a maximal independent set). Hence there exists a vector in \(S'\) which is a linear combination of the preceding vectors. Since \(v_{1}, v_{2}, \ldots , v_{n}\) are linearly independent, this vector must be \(v\). Thus \(v\) is a linear combination of \(v_{1}, v_{2}, \ldots , v_{n}\). Therefore \(v\in L(S)\). Hence \(V\subseteq L(S)\). Thus \(V=L(S)\).
\(({i})\Rightarrow ({iii})\) Let \(S=\{v_{1},\ v_{2},\ \ldots ,\ v_{n}\}\) be a basis. Then \(L(S)=V\). If \(S\) is not minimal, there exists \(v_{i}\in S\) such that \(L(S-\{v_{i}\})=V\). Since \(S\) is linearly independent, \(S-\{v_{i}\}\) is also linearly independent. Thus \(S-\{v_{i}\}\) is a basis consisting of \(n-1\) elements, which is a contradiction. Hence \(S\) is a minimal generating set. \(({iii})\Rightarrow ({i})\) Let \(S=\{v_{1},\ v_{2},\ \ldots ,\ v_{n}\}\) be a minimal generating set. To prove that \(S\) is a basis, we have to show that \(S\) is linearly independent. If \(S\) is linearly dependent, there exists a vector \(v_{k}\) which is a linear combination of the preceding vectors. Clearly \(L(S-\{v_{k}\})=V\), contradicting the minimality of \(S\). Thus \(S\) is linearly independent and since \(L(S)=V\), \(S\) is a basis for \(V.\) □
Proof : Let \(V\) be a vector space of dimension \(n\). Let \(\{v_{1},\ v_{2},\ \ldots ,\ v_{n}\}\) be a basis for \(V.\) Then we know that if \(v\in V, v\) can be written uniquely as \(v=\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{n}v_{n},\) where \(\alpha _{i}\in F\). Now, consider the map \(f\) : \(V\rightarrow V_{n}(F)\) given by \(f(\alpha _{1}v_{1}+\cdots +\alpha _{n}v_{n})= (\alpha _{1},\ \alpha _{2},\ \ldots ,\ \alpha _{n})\). Clearly \(f\) is 1-1 and onto. Let \(v, w\in V\). Then \(v=\alpha _{1}v_{1}+\cdots +\alpha _{n}v_{n}\) and \(w=\beta _{1}v_{1}+\cdots +\beta _{n}v_{n}.\)
\(\seteqnumber{0}{2.}{6}\)\begin{align*} f(v+w) & =f[(\alpha _{1}+\beta _{1})v_{1}+(\alpha _{2}+\beta _{2})v_{2}+\cdots +(\alpha _{n}+\beta _{n})v_{n}]\\ &=((\alpha _{1}+\beta _{1}),\ (\alpha _{2}+\beta _{2}),\ \cdots ,\ (\alpha _{n}+\beta _{n}))\\ &=(\alpha _{1},\ \alpha _{2},\ \cdots ,\ \alpha _{n})+(\beta _{1},\ \beta _{2},\ \cdots ,\ \beta _{n})\\ &=f(v)+f(w). \end{align*} Also
\(\seteqnumber{0}{2.}{6}\)\begin{align*} f(\alpha v) & =f(\alpha \alpha _{1}v_{1}+\cdots +\alpha \alpha _{n}v_{n})\\ &=(\alpha \alpha _{1},\ \alpha \alpha _{2},\ \cdots ,\ \alpha \alpha _{n})\\ &=\alpha (\alpha _{1},\ \alpha _{2},\ \ldots ,\ \alpha _{n})\\ &=\alpha f(v). \end{align*} Hence \(f\) is an isomorphism of \(V\) onto \(V_{n}(F)\). □
Proof : Let \(\{v_{1},\ v_{2},\ \ldots ,\ v_{n}\}\) be a basis for \(V\). We shall prove that \(T(v_{1}), T(v_{2}), \ldots , T(v_{n})\) are linearly independent and that they span \(W\). Now, \(\alpha _{1}T(v_{1})+\alpha _{2}T(v_{2})+\cdots + \alpha _{n}T(v_{n})=0\)
\(\seteqnumber{0}{2.}{6}\)\begin{align*} \Rightarrow T(\alpha _{1}v_{1})+T(\alpha _{2}v_{2})+\cdots +T(\alpha _{n}v_{n})&=0 \\ \Rightarrow T(\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{n}v_{n})&=0 \\ \Rightarrow \alpha _{1}v_{1}+\alpha _{2}v_{2}+ \cdots +\alpha _{n}v_{n}&=0 \text { (since $T$ is 1-1) } \\ \Rightarrow \alpha _{1}=\alpha _{2}=\cdots =\alpha _{n}&=0 \text { (since $v_{1}, v_{2}, \ldots , v_{n}$ are linearly independent).} \end{align*} \(\therefore \) \(T(v_{1}), T(v_{2}), \ldots , T(v_{n})\) are linearly independent.
Now, let \(w\in W\). Then since \(T\) is onto, there exists a vector \(v\in V\) such that \(T(v)=w\). Let \(v=\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{n}v_{n}\). Then
\(\seteqnumber{0}{2.}{6}\)\begin{align*} w& =T(v)\\ &= T(\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{n}v_{n})\\ &=\alpha _{1}T(v_{1})+\alpha _{2}T(v_{2})+\cdots +\alpha _{n}T(v_{n}). \end{align*} Thus \(w\) is a linear combination of the vectors
\[T(v_{1}),\ T(v_{2}),\ \ldots ,\ T(v_{n}).\]
\(\therefore \) \(T(v_{1}), T(v_{2}), \ldots , T(v_{n})\) span \(W\) and hence form a basis for \(W.\) □
Theorem 2.2.19. Let \(V\) and \(W\) be finite dimensional vector spaces over a field \(F.\) Let \(\{v_{1},\ v_{2},\ \cdots ,\ v_{n}\}\) be a basis for \(V\) and let \(w_{1}, w_{2}, \ldots , w_{n}\) be any \(n\) vectors in \(W\) (not necessarily distinct). Then there exists a unique linear transformation \(T\) : \(V\rightarrow W\) such that \(T(v_{i})=w_{i}, i=1,2, \ldots , n.\)
Proof : Let \(v=\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{n}v_{n}\in V\). We define \(T(v)=\alpha _{1}w_{1}+\alpha _{2}w_{2}+\cdots +\alpha _{n}w_{n}\).
Now, let \(x, y\in V\). Let \(x=\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{n}v_{n}\) and \(y=\beta _{1}v_{1}+\beta _{2}v_{2}+\cdots +\beta _{n}v_{n}\)
\(\seteqnumber{0}{2.}{6}\)
\begin{align*}
(x+y) & =(\alpha _{1}+\beta _{1})v_{1}+(\alpha _{2}+\beta _{2})v_{2}+\cdots +(\alpha _{n}+\beta _{n})v_{n}\\ T(x+y)&=(\alpha _{1}+\beta _{1})w_{1}+(\alpha _{2}+ \beta _{2})w_{2}+\cdots +(\alpha _{n}+\beta _{n})w_{n}\\ &
=(\alpha _{1}w_{1}+\alpha _{2}w_{2}+\cdots +\alpha _{n}w_{n})+(\beta _{1}w_{1}+\beta _{2}w_{2}+\cdots +\beta _{n}w_{n})\\ &= T(x)+T(y)
\end{align*}
Similarly \(T(\alpha x)=\alpha T(x)\). Hence \(T\) is a linear transformation.
Also \(v_{1}=1v_{1}+0v_{2}+\cdots +0v_{n}\).
Hence \(T(v_{1})=1w_{1}+0w_{2}+\cdots +0w_{n}=w_{1}\).
Similarly \(T(v_{i})=w_{i}\) for all \(i=1,2, \ldots , n\).
Now, to prove the uniqueness, let \(T' : V\rightarrow W\) be any other linear transformation such that \(T'(v_{i})=w_{i}\).
Let \(v=\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{n}v_{n}\in V\). Then
\begin{align*} T'(v)&=\alpha _{1}T'(v_{1})+\alpha _{2}T'(v_{2})+\cdots +\alpha _{n}T'(v_{n})\\ &=\alpha _{1}w_{1}+\alpha _{2}w_{2}+\cdots +\alpha _{n}w_{n}\\ &=T(v) \end{align*} Hence \(T=T'.\) □
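The construction in this proof, namely defining \(T\) by \(T(v)=\alpha _{1}w_{1}+\cdots +\alpha _{n}w_{n}\) where the \(\alpha _{i}\) are the coordinates of \(v\), can be sketched numerically as follows (assuming NumPy; extend_linearly is an illustrative name, not a library routine).
\begin{verbatim}
import numpy as np

def extend_linearly(basis, images):
    """Return T with T(v_i) = w_i, extended linearly: T(v) = sum alpha_i * w_i."""
    B = np.array(basis, dtype=float).T       # columns are the basis vectors v_i
    W = np.array(images, dtype=float)        # row i holds the vector w_i
    def T(v):
        alpha = np.linalg.solve(B, np.asarray(v, dtype=float))  # coordinates of v
        return alpha @ W
    return T

T = extend_linearly([(1, 0), (0, 1)], [(1, 1, 0), (0, 2, 5)])
print(T((3, -1)))     # [ 3.  1. -5.] = 3*(1,1,0) - 1*(0,2,5)
\end{verbatim}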
Proof : (i) Let \(S=\{w_{1},\ w_{2},\ \ldots ,\ w_{m}\}\) be a basis for \(W\). Since \(W\) is a subspace of \(V\), \(S\) is a part of a basis for \(V\). Hence \(\dim W\leq \dim V.\)
(ii) Let \(\dim V=n\) and \(\dim W=m\). Let \(S=\{w_{1},\ w_{2},\ \ldots ,\ w_{m}\}\) be a basis for \(W\). Clearly \(S\) is a linearly independent set of vectors in \(V\).
Hence \(S\) is a part of a basis in \(V\). Let \(\{w_{1},\ w_{2},\ \ldots ,\ w_{m},\ v_{1},\ v_{2},\ \ldots ,\ v_{r}\}\) be a basis for \(V\). Then \(m+r=n\).
Now, we claim \(S'=\{W+v_{1},\ W+v_{2},\ \ldots ,\ W+v_{r}\}\) is a basis for \(\displaystyle \frac {V}{W}\).
Suppose
\(\seteqnumber{0}{2.}{6}\)\begin{align*} \alpha _{1}(W+v_{1})+\alpha _{2}(W+ v_{2})+\cdots +\alpha _{r}(W+v_{r})& =W+0\\ \Rightarrow (W+\alpha _{1}v_{1})+(W+\alpha _{2}v_{2})+\cdots +(W+\alpha _{r}v_{r})&= W\\ \Rightarrow W+\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{r}v_{r}& =W\\ \Rightarrow \alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{r}v_{r}\in W \end{align*} Now, since \(\{w_{1},\ w_{2},\ \ldots ,\ w_{m}\}\) is a basis for \(W\),
\(\seteqnumber{0}{2.}{6}\)
\begin{align*}
\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{r}v_{r}&=\beta _{1}w_{1}+\beta _{2}w_{2}+\cdots +\beta _{m}w_{m}\\ \therefore \ \alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{r}v_{r}-\beta _{1}w_{1}-\beta _{2}w_{2}-\cdots -\beta _{m}w_{m}&=0\\ \alpha _{1}=\alpha _{2}=\cdots =\alpha _{r}=\beta _{1}=\beta _{2}=\cdots =\beta _{m}&=0
\end{align*}
\(\therefore \) \(S'\) is a linearly independent set.
Now, let \(W+v\displaystyle \in \frac {V}{W}\).
Let \(v=\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{r}v_{r}+\beta _{1}w_{1}+\beta _{2}w_{2}+\cdots +\beta _{m}w_{m}.\) Then
\begin{align*} W+v&=W+ (\alpha _{1}v_{1}+\alpha _{2}v_{2}+\ \cdots \ +\alpha _{r}v_{r}\\ & \quad +\beta _{1}w_{1}+\beta _{2}w_{2}+\ \cdots \ +\beta _{m}w_{m})\\ &= W+(\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{r}v_{r})\\ & \text {(since $\beta _{1}w_{1}+\beta _{2}w_{2}+\cdots +\beta _{m}w_{m}\in W)$}\\ &=(W+\alpha _{1}v_{1})+ (W+\alpha _{2}v_{2})+\cdots +(W+\alpha _{r}v_{r})\\ &=\alpha _{1}(W+v_{1})+\alpha _{2}(W+v_{2})+\cdots +\alpha _{r}(W+v_{r}) \end{align*} Hence \(S'\) spans \(\displaystyle \frac {V}{W}\) so that \(S'\) is a basis for \(\displaystyle \frac {V}{W}\) and
\(\seteqnumber{0}{2.}{6}\)\begin{align*} \dim \displaystyle \frac {V}{W}&=r=n-m\\ &=\dim V-\dim W. \end{align*} □
Proof : \(A\) and \(B\) are subspaces of \(V\). Hence \(A\cap B\) is subspace of \(V\).
Let \(\dim (A\cap B) =r\).
Let \(S= \{v_{1},\ v_{2},\ \ldots ,\ v_{r}\}\) be a basis for \(A\cap B\).
Since \(A\cap B\) is a subspace of both \(A\) and \(B\), \(S\) is a part of a basis for \(A\) and a part of a basis for \(B\).
Let \(\{v_{1},\ v_{2},\ \ldots ,\ v_{r},\) \(u_{1},\ u_{2},\ \cdots ,\ u_{s}\}\) be a basis for \(A\) and \(\{v_{1},\ v_{2},\ \ldots ,\ v_{r},\) \(w_{1},\ w_{2},\ \ldots ,\ w_{t}\}\) be a basis for \(B.\)
We shall prove that \(S'=\{v_{1},\ v_{2},\ \ldots ,\ v_{r},\ u_{1},\ u_{2},\ \ldots ,\ u_{s},\ w_{1},\ w_{2},\ \ldots ,\ w_{t}\}\) is a basis for \(A+B.\)
Let \(\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{r}v_{r}+\beta _{1}u_{1}+\beta _{2}u_{2}+\cdots +\beta _{s}u_{s}+\gamma _{1}w_{1}+\gamma _{2}w_{2}+\cdots +\gamma _{t}w_{t}=0\).
Then \(\beta _{1}u_{1}+\beta _{2}u_{2}+\cdots +\beta _{s}u_{s}=-(\alpha _{1}v_{1}+\alpha _{2}v_{2}+\cdots +\alpha _{r}v_{r})-(\gamma _{1}w_{1}+\gamma _{2}w_{2}+\cdots +\gamma _{t}w_{t})\).
Since the \(v_{i}\)'s and \(w_{k}\)'s all lie in \(B\), we get \(\beta _{1}u_{1}+\beta _{2}u_{2}+\cdots +\beta _{s}u_{s}\in B\).
Also \(\beta _{1}u_{1}+\beta _{2}u_{2}+\cdots +\beta _{s}u_{s}\in A\) (since the \(u_{j}\)'s lie in \(A\)).
Hence \(\beta _{1}u_{1}+\beta _{2}u_{2}+\cdots +\beta _{s}u_{s}\in A\cap B\) and so \(\beta _{1}u_{1}+\beta _{2}u_{2}+\cdots +\beta _{s}u_{s}=\delta _{1}v_{1}+\delta _{2}v_{2}+\cdots +\delta _{r}v_{r}\), that is, \(\beta _{1}u_{1}+\beta _{2}u_{2}+\cdots +\beta _{s}u_{s}-\delta _{1}v_{1}-\delta _{2}v_{2}-\cdots -\delta _{r}v_{r}=0\).
Thus \(\beta _{1}=\beta _{2}=\cdots =\beta _{s}=\delta _{1}=\delta _{2}=\cdots =\delta _{r}=0\) (since \(\{u_{1},\ u_{2},\ \ldots ,\ u_{s},\ v_{1},\ v_{2},\ \ldots ,\ v_{r}\}\) is linearly independent).
Similarly we can prove \(\gamma _{1}=\gamma _{2}=\cdots =\gamma _{t}=0\).
Thus \(\alpha _{i}=\beta _{j}=\gamma _{k}=0\) for all \(1\leq i\leq r;1\leq j\leq s;1\leq k\leq t\).
Thus \(S'\) is a linearly independent set.
Clearly \(S'\) spans \(A+B\) and so \(S'\) is a basis for \(A+B\).
Hence \(\dim (A+B)=r+s+t\).
Also \(\dim A=r+s;\dim B=r+t\) and \(\dim (A\cap B)=r\).
Hence \(\dim A+\dim B-\dim (A\cap B)= (r+s)+(r+t)-r=r+s+t=\dim (A+B)\). □
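The dimension formula can be checked on a concrete example where the intersection is known by construction. A sketch assuming NumPy, with \(A=L(\{e_{1},\ e_{2}\})\) and \(B=L(\{e_{2},\ e_{3}\})\) in \(V_{3}(\mathbb {R})\), so that \(A\cap B=L(\{e_{2}\})\):
\begin{verbatim}
import numpy as np

# A = L({e1, e2}) and B = L({e2, e3}) in V_3(R); their intersection is L({e2}).
GA = np.array([[1, 0, 0], [0, 1, 0]], dtype=float)
GB = np.array([[0, 1, 0], [0, 0, 1]], dtype=float)

dim_A = np.linalg.matrix_rank(GA)                      # 2
dim_B = np.linalg.matrix_rank(GB)                      # 2
dim_sum = np.linalg.matrix_rank(np.vstack([GA, GB]))   # dim(A + B) = 3
print(dim_A + dim_B - dim_sum)                         # 1 = dim of the intersection
\end{verbatim}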
Proof : \(V=A\oplus B\Rightarrow A+B=V\) and \(A\cap B=\{0\}\).
Then \(\dim (A\cap B)=0\). Hence \(\dim V=\dim A+\dim B.\) □
2.3 Rank and Nullity
Proof : We know that
\(\seteqnumber{0}{2.}{6}\)\begin{align*} \dfrac {V}{\ker T} & \cong T(V)\\ \therefore \, \dim V - \dim (\ker T) & = \dim (T(V))\\ \dim V - \text {nullity}\, T& = \text {rank}\, T \\ \therefore \, \dim V & = \text {nullity}\, T + \text {rank}\, T \end{align*} □
Example 2.3.4. Let \(V \) denote the set of all polynomials of degree \(\leq n \) in \(\mathbb {R}[x]\). Let \(T : V \to V \) be defined by
\(T(f) = \dfrac {df}{dx}\).
We know that \(T\) is a linear transformation.
Now \(\dfrac {df}{dx}=0\) if and only if \(f\) is a constant polynomial.
\(\therefore \) \(\ker T\) consists of all constant polynomials.
The dimension of this subspace of \(V \) is 1. Hence nullity \(T\) is 1.
Also, \(\dim V = n+1\). Now,
\begin{align*} \dim V & = \text {rank}\, T + \text {nullity}\, T \\ n+1 &= \text {rank}\, T + 1 \\ n &= \text {rank}\, T \end{align*}
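For a concrete check, take \(n=3\) and represent \(T=\dfrac {d}{dx}\) by the coordinates of \(T(x^{i})\) with respect to the basis \(\{1,\ x,\ x^{2},\ x^{3}\}\). A sketch assuming NumPy:
\begin{verbatim}
import numpy as np

# T = d/dx on polynomials of degree <= 3, basis {1, x, x^2, x^3}  (n = 3).
# Row i holds the coordinates of T(x^i) in that basis.
D = np.array([[0, 0, 0, 0],    # T(1)   = 0
              [1, 0, 0, 0],    # T(x)   = 1
              [0, 2, 0, 0],    # T(x^2) = 2x
              [0, 0, 3, 0]],   # T(x^3) = 3x^2
             dtype=float)

rank = np.linalg.matrix_rank(D)
nullity = D.shape[0] - rank          # dim V - rank T
print(rank, nullity)                 # 3 1, and rank T + nullity T = 4 = dim V
\end{verbatim}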
2.4 Matrix of a Linear Transformation
Let \(V\) and \(W\) be finite dimensional vector spaces over a field \(F\). Let \(\dim V=m\) and \(\dim W=n\).
Fix an ordered basis \(\{v_{1},\ v_{2},\ \ldots ,\ v_{m}\}\) for \(V\) and an ordered basis \(\{w_{1},\ w_{2},\ \ldots ,\ w_{n}\}\) for \(W\).
Let \(T\) : \(V\rightarrow W\) be a linear transformation.
We have seen that \(T\) is completely specified by the elements \(T(v_{1}), T(v_{2}), \ldots , T(v_{m})\) . Now, let
\begin{equation} \begin{aligned} T(v_1) & = a_{11} w_1 + a_{12} w_2 + \cdots + a_{1n}w_n \\ T(v_2) & = a_{21} w_1 + a_{22} w_2 + \cdots + a_{2n}w_n \\ \cdots & \cdots \cdots \cdots \\ T(v_m) & = a_{m1} w_1 + a_{m2} w_2 + \cdots + a_{mn}w_n \end {aligned}\label {eq1} \end{equation}
Hence \(T(v_{1}), T(v_{2}), \ldots , T(v_{m})\) are completely specified by the \(mn\) elements \(a_{ij}\) of the field \(F\). These \(a_{ij}\) can be conveniently arranged in the form of \(m\) rows and \(n\) columns as follows.
\[ \begin {pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} &\cdots & a_{2n}\\ \cdots & \cdots & \cdots & \cdots \\ a_{m1} & a_{m2} &\cdots & a_{mn} \end {pmatrix} \]
Such an array of \(mn\) elements of \(F\) arranged in \(m\) rows and \(n\) columns is known as an \(m\times n\) matrix over the field \(F\) and is denoted by \((a_{ij})\). Thus to every linear transformation \(T\) there is associated an \(m\times n\) matrix over \(F\).
Conversely any \(m\times n\) matrix over \(F\) defines a linear transformation \(T:V\rightarrow W\) given by the formula (2.7).
Example 2.4.2. Consider the linear transformation \(T : V_{2} (\mathbb {R})\) \(\rightarrow V_{2}(\mathbb {R})\) given by \(T(a,\ b)=(a,\ a+b)\). Choose \(\{e_{1},\ e_{2}\}\) as a basis both for the domain and the range. Then
\(\seteqnumber{0}{2.}{7}\)
\begin{align*}
T(e_{1})&=(1,1)=e_{1}+e_{2}\\ T(e_{2})&=(0,1)=e_{2}.
\end{align*}
Hence the matrix representing \(T\) is \(\begin {pmatrix} 1 & 1\\ 0 & 1 \end {pmatrix}\)
Now, we choose \(\{e_{1},\ e_{2}\}\) as a basis for the domain and \(\{(1,1),\ (1,\ -1)\}\) as a basis for the range.
Let \(w_{1}=(1,1)\) and \(w_{2}=(1,\ -1)\). Then
\begin{align*} T(e_{1})&=(1,1)=w_{1}, \text { and }\\ T(e_{2})&=(0,1)= (1/2)w_{1}-(1/2)w_{2} \end{align*} Hence the matrix representing \(T\) is \(\begin {pmatrix} 1 & 0\\ 1/2 & -1/2 \end {pmatrix}\)
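Both matrices above can be reproduced by solving, for each basis vector of the domain, for the coordinates of its image in the chosen range basis. A minimal sketch assuming NumPy (matrix_of_T is an illustrative name); each row corresponds to one of the relations in (2.7).
\begin{verbatim}
import numpy as np

def matrix_of_T(T, domain_basis, range_basis):
    """Row i holds the coordinates of T(v_i) in the chosen range basis, as in (2.7)."""
    W = np.array(range_basis, dtype=float).T        # columns are the range basis vectors
    rows = [np.linalg.solve(W, T(np.array(v, dtype=float))) for v in domain_basis]
    return np.array(rows)

T = lambda v: np.array([v[0], v[0] + v[1]])         # T(a, b) = (a, a + b)

print(matrix_of_T(T, [(1, 0), (0, 1)], [(1, 0), (0, 1)]))    # [[1. 1.] [0. 1.]]
print(matrix_of_T(T, [(1, 0), (0, 1)], [(1, 1), (1, -1)]))   # [[1. 0.] [0.5 -0.5]]
\end{verbatim}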
Solved problems
Solution :
\(\seteqnumber{0}{2.}{7}\)\begin{align*} T(e_{1})&=T(1,0,0)=(3,1,2)=3e_{1}+e_{2}+e_{3}\\ T(e_{2})&=T(0,1,0)=(0,\ -1,1)=-e_{2}+e_{3}\\ T(e_{3})&=T(0,0,1)=(0,0,1)=e_{3} \end{align*} Thus the matrix representing \(T\) is \(\begin {pmatrix} 3 & 1 & 2\\ 0 & -1 & 1\\ 0 & 0 & 1 \end {pmatrix}\).
Solution :
\(\seteqnumber{0}{2.}{7}\)\begin{align*} T(e_{1})&=e_{1}+2e_{2}+e_{3}=(1,2,1) \\ T(e_{2})&=0e_{1}+e_{2}+e_{3}=(0,1,1)\\ T(e_{3})&=-e_{1}+3e_{2}+4e_{3}=(-1,3,4) \end{align*} Now,
\(\seteqnumber{0}{2.}{7}\)\begin{align*} (a,\ b,\ c)&=a(1,0,0)+b(0,1,0)+c(0,0,1)\\ &=ae_{1}+be_{2}+ce_{3}\\ \therefore \, T(a,\ b,\ c)&=T(ae_{1}+be_{2}+ce_{3})\\ &=aT(e_{1})+bT(e_{2})+cT(e_{3})\\ &=a(1,2,1)+b(0,1,1)+c(-1,3,4)\\ T(a,\ b,\ c)&=(a-c,\ 2a+b+3c,\ a+b+4c) \end{align*} This is the required linear transformation.
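With the row convention of (2.7), the coordinate row of \(T(v)\) is the coordinate row of \(v\) multiplied on the right by the matrix. A quick check of the solution above, assuming NumPy, for the sample vector \((2,\ -1,\ 3)\):
\begin{verbatim}
import numpy as np

A = np.array([[1, 2, 1],     # rows are T(e1), T(e2), T(e3) in the standard basis
              [0, 1, 1],
              [-1, 3, 4]], dtype=float)

v = np.array([2.0, -1.0, 3.0])   # (a, b, c) = (2, -1, 3)
print(v @ A)                      # [-1. 12. 13.] = (a - c, 2a + b + 3c, a + b + 4c)
\end{verbatim}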
Proof : Let \(A=(a_{ij})\) and \(B=(b_{ij})\) be two \(m\times n\) matrices over the field \(F\).
The addition of \(m\times n\) matrices is a binary operation which is both commutative and associative.
The \(m\times n\) matrix all of whose entries are \(0\) is the identity element for addition (the zero matrix), and \((-a_{ij})\) is the additive inverse of \((a_{ij})\).
Thus the set of all \(m\times n\) matrices over the field \(F\) is an abelian group with respect to addition.
The verification of the following axioms is straightforward.
(a) \(\alpha (A+B)=\alpha A+\alpha B\)
(b) \((\alpha +\beta )A=\alpha A+\beta A\)
(c) \((\alpha \beta )A=\alpha (\beta A)\)
(d) \(1A=A.\)
Hence the set of all \(m\times n\) matrices over \(F\) is a vector space over \(F.\)
Now, we shall prove that the dimension of this vector space is \(mn\). Let \(E_{ij}\) be the matrix with entry 1 in the \((i,\ j)^{th}\) place and \(0\) in the other places. We have \(mn\) matrices of this form.
Also any matrix \(A=(a_{ij})\) can be written as \(A=\displaystyle \sum a_{ij}E_{ij}\).
Hence \(A\) is a linear combination of the matrices \(E_{ij}\).
Further these \(mn\) matrices \(E_{ij}\) are linearly independent. Hence these \(mn\) matrices form a basis for the space of all \(m\times n\) matrices. Therefore the dimension of the vector space is \(mn.\) □
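The decomposition \(A=\sum a_{ij}E_{ij}\) is easy to verify numerically; a small sketch assuming NumPy (the helper \(E\) is illustrative):
\begin{verbatim}
import numpy as np

def E(i, j, m, n):
    """The m x n matrix with 1 in the (i, j) place and 0 elsewhere."""
    M = np.zeros((m, n))
    M[i, j] = 1.0
    return M

A = np.array([[5.0, -2.0],
              [7.0,  3.0]])
recon = sum(A[i, j] * E(i, j, 2, 2) for i in range(2) for j in range(2))
print(np.array_equal(A, recon))   # True: A = sum over i, j of a_ij * E_ij
\end{verbatim}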
Proof : We know that \(L(V,\ W)\) is a vector space over \(F\).
Now, we shall prove that the vector space \(L(V,\ W)\) is isomorphic to the vector space \(M_{m\times n}(F)\).
Since \(M_{m\times n}(F)\) is of dimension \(mn\), it follows that \(L(V,\ W)\) is also of dimension \(mn\).
Fix a basis \(\{v_{1},\ v_{2},\ \ldots ,\ v_{m}\}\) for \(V\) and a basis \(\{w_{1},\ w_{2},\ \ldots ,\ w_{n}\}\) for \(W\).
We know that any linear transformation \(T\in L(V,\ W)\) can be represented by an \(m\times n\) matrix over \(F\).
Let \(T\) be represented by \(M(T)\) .
This function \(M: L(V,\ W)\rightarrow M_{m\times n}(F)\) is clearly 1-1 and onto. Let \(T_{1}, T_{2}\in L(V,\ W)\) and \(M(T_{1})=(a_{ij})\) and \(M(T_{2})=(b_{ij})\).
\begin{align*} M(T_{1})&=(a_{ij})\Rightarrow T_{1}(v_{i})=\sum _{j=1}^{n}a_{ij}w_{j} \\ M(T_{2})&=(b_{ij})\Rightarrow T_{2}(v_{i})=\sum _{j=1}^{n}b_{ij}w_{j}\\ (T_{1}+T_{2})(v_{i})&=\displaystyle \sum _{j=1}^{n}(a_{ij}+b_{ij})w_{j}\\ \therefore \ M(T_{1}+T_{2})&=(a_{ij}+b_{ij})=(a_{ij})+(b_{ij})=M(T_{1})+M(T_{2}). \end{align*} Similarly \(M(\alpha T_{1})=\alpha M(T_{1})\). Hence \(M\) is the required isomorphism from \(L(V,\ W)\) to \(M_{m\times n}(F)\). □
