Chapter 5 Bilinear Forms
5.1 Introduction
Consider a finite dimensional inner product space \(V\) over the field \(\mathbb {R}\) of real numbers. The inner product is a function from \(V \times V\) to \(\mathbb {R}\) satisfying, in particular, the following conditions:
(i) \(\left \langle \alpha u_{1}+\beta u_{2}, v\right \rangle =\alpha \left \langle u_{1}, v\right \rangle +\beta \left \langle u_{2}, v\right \rangle \)
(ii) \(\left \langle u, \alpha v_{1}+\beta v_{2}\right \rangle =\alpha \left \langle u, v_{1}\right \rangle +\beta \left \langle u, v_{2}\right \rangle \)
In other words, the inner product is a scalar valued function of the two variables \(u\) and \(v\) that is linear in each variable separately. Scalar valued functions of this type are called bilinear forms. In this chapter we study bilinear forms on finite dimensional vector spaces.
5.2 Bilinear forms
Definition 5.2.1. Let \(V\) be a vector space over a field \(F\). A bilinear form on \(V\) is a function \(f: V \times V \rightarrow F\) such that
(i) \(f\left (\alpha u_{1}+\beta u_{2}, v \right )=\alpha f\left (u_{1}, v \right )+\beta f\left (u_{2}, v \right )\)
(ii) \(f\left (u, \alpha v_{1}+\beta v_{2}\right )=\alpha f\left (u, v_{1}\right )+\beta f\left (u, v_{2}\right )\) where \(\alpha , \beta \in F\) and \(u_{1}, u_{2}, v_{1}, v_{2} \in V\).
In other words, \(f\) is linear as a function of either variable when the other is fixed.
Examples
1. Let \(V\) be a vector space over \(\mathbb {R}\). Then an inner product on \(V\) is a bilinear form on \(V\).
2. Let \(V\) be any vector space over a field \(F\). Then the zero function \(\hat {0} : V \times V \to F\) given by \(\hat {0}(u, v)=0\) is a bilinear form. For,
\(\seteqnumber{0}{5.}{0}\)\begin{align*} \hat {0}\left (\alpha u_{1}+\beta u_{2}, v\right ) &=0 \\ &=\alpha 0+\beta 0 \\ &=\alpha \hat {0}\left (u_{1}, v\right )+\beta \hat {0}\left (u_{2}, v\right ) \end{align*} Similarly,
\[ \hat {{0}}\left (u, \alpha v_{1}+\beta v_{2}\right )=\alpha \hat {{0}}\left (u, v_{1}\right )+\beta \hat {{0}}\left (u, v_{2}\right ) \]
3. Suppose \(V\) is a vector space over a field \(F\). Let \(f_{1}\) and \(f_{2}\) be two linear functionals on \(V\), i.e., \(f_{1}\) and \(f_{2}\) are linear transformations from \(V\) to \(F\). Then \(f: V \times V \rightarrow F\) defined by \(f(u, v)=f_{1}(u) f_{2}(v)\) is a bilinear form. For
\(\seteqnumber{0}{5.}{0}\)\begin{align*} f\left (\alpha u_{1}+\beta u_{2}, v\right ) &= f_{1}\left (\alpha u_{1}+\beta u_{2}\right ) f_{2}(v) \\ &=\left [\alpha f_{1}\left (u_{1}\right )+\beta f_{1}\left (u_{2}\right )\right ] f_{2}(v) && \left (\text { since } f_{1} \text { is linear }\right ) \\ &= \alpha f_{1}\left (u_{1}\right ) f_{2}(v)+\beta f_{1}\left (u_{2}\right ) f_{2}(v) \\ &= \alpha f\left (u_{1}, v \right )+\beta f\left (u_{2}, v \right ) . \end{align*} Similarly,
\[ f\left (u, \alpha v_{1}+\beta v_{2}\right )=\alpha f\left (u, v_{1}\right )+\beta f\left (u, v_{2}\right ) . \]
Theorem 5.2.2. Let \(L(V, V, F)\) denote the set of all bilinear forms on \(V\). Then \(L(V, V, F)\) is a vector space over \(F\) under the pointwise operations \((f+g)(u, v)=f(u, v)+g(u, v)\) and \(\left (\alpha _{1} f\right )(u, v)=\alpha _{1} f(u, v)\).
Proof : Let \(f, g \in L(V, V, F)\) and \(\alpha _{1} \in F\).
We claim that \(f+g\) and \(\alpha _{1} f \in L(V, V, F)\).
\begin{align*} (f+g)\left (\alpha u_{1}+\beta u_{2}, v\right ) &=f\left (\alpha u_{1}+\beta u_{2}, v\right )+g\left (\alpha u_{1}+\beta u_{2}, v\right ) \\ &=\alpha f\left (u_{1}, v\right )+\beta f\left (u_{2}, v\right )+\alpha g\left (u_{1}, v\right )+\beta g\left (u_{2}, v\right ) \\ &=\alpha \left [f\left (u_{1}, v\right )+g\left (u_{1}, v\right )\right ]+\beta \left [f\left (u_{2}, v\right )+g\left (u_{2}, v\right )\right ] \\ &=\alpha \left [(f+g)\left (u_{1}, v\right )\right ]+\beta \left [(f+g)\left (u_{2}, v\right )\right ] . \end{align*} Similarly we can prove that
\(\seteqnumber{0}{5.}{0}\)
\begin{align*}
(f+g)\left (u, \alpha v_{1}+\beta v_{2}\right )=\alpha \left [(f+g)\left (u, v_{1}\right )\right ] +\beta \left [(f+g)\left (u, v_{2}\right )\right ]
\end{align*}
Hence \((f+g) \in L(V, V, F)\).
Also,
\begin{align*} \left (\alpha _{1}f \right )\left (\alpha u_{1}+\beta u_{2}, v \right ) &=\alpha _{1} f\left (\alpha u_{1}+\beta u_{2}, v \right ) \\ &=\alpha _{1}\left [\alpha f\left (u_{1}, v \right )+\beta f\left (u_{2}, v \right )\right ] \\ &=\alpha _{1} \alpha f\left (u_{1}, v \right )+\alpha _{1} \beta f\left (u_{2}, v \right ) \\ &=\alpha \left [\left (\alpha _{1} f\right )\left (u_{1}, v \right )\right ]+\beta \left [\left (\alpha _{1} f\right )\left (u_{2}, v \right )\right ] \end{align*} Similarly
\[ \left (\alpha _{1} f\right )\left (u, \alpha v_{1}+\beta v_{2}\right )=\alpha \left [\left (\alpha _{1} f\right )\left (u, v_{1}\right )\right ] +\beta \left [\left (\alpha _{1} f\right )\left (u, v_{2}\right )\right ] \]
\[ \therefore \ \alpha _{1} f \in L(V, V, F). \]
The remaining axioms of a vector space can be easily verified. □
Matrix of a bilinear form
Let \(f\) be a bilinear form on \(V\). Fix a basis \(\left \{v_{1}, v_{2}, \ldots , v_{n}\right \}\) for \(V\).
Let \(u=\alpha _{1} v_{1}+\cdots +\alpha _{n} v_{n} \) and \(v =\beta _{1} v_{1}+\cdots +\beta _{n} v_{n}\). Then
\begin{align*}
f(u, v) & = f\left (\alpha _{1} v_{1}+\cdots +\alpha _{n} v_{n},\ \beta _{1} v_{1}+\cdots +\beta _{n} v_{n}\right ) \\ &= \sum _{i=1}^{n} \sum _{j=1}^{n} \alpha _{i} \beta _{j} f\left (v_{i}, v_{j}\right ) \\ &= \sum _{i=1}^{n} \sum _{j=1}^{n} a_{i j} \alpha _{i} \beta _{j} \text { where } f\left (v_{i}, v_{j}\right )=a_{i j} \\ &=\left (\alpha _{1}, \ldots , \alpha _{n}\right ) \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end {pmatrix} \begin{pmatrix} \beta _1 \\ \vdots \\ \beta _n \end {pmatrix} \\ \therefore \ f(u, v) & =X A Y^{T}
\end{align*}
where \(X=\left (\alpha _{1}, \ldots , \alpha _{n}\right ), A=\left (a_{i j}\right )\) and \(Y=\left (\beta _{1}, \ldots , \beta _{n}\right ) .\)
The \(n \times n\) matrix \(A\) is called the matrix of the bilinear form with respect to the chosen basis.
Conversely, given any \(n \times n\) matrix \(A=\left (a_{i j}\right )\), the function \(f: V \times V \rightarrow F\) defined by \(f(u, v)=X A Y^{T}\) is a bilinear form on \(V\) with \(f\left (v_{i}, v_{j}\right )=a_{i j}\). Also, if \(g\) is any other bilinear form on \(V\) such that \(g\left (v_{i}, v_{j}\right )=a_{i j}\), then \(f=g\).
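As an illustration, the formula \(f(u, v)=X A Y^{T}\) is easy to check numerically. The following is a minimal sketch in Python, assuming the numpy library; the matrix and vectors are arbitrary choices made purely for illustration.
\begin{verbatim}
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # an arbitrary 2 x 2 matrix over R

def f(X, Y):
    # f(u, v) = X A Y^T, with X, Y the coordinate rows of u and v
    return X @ A @ Y

X1 = np.array([1.0, 2.0])
X2 = np.array([0.0, 5.0])
Y  = np.array([3.0, -1.0])
a, b = 2.0, -3.0
# linearity in the first variable
assert np.isclose(f(a*X1 + b*X2, Y), a*f(X1, Y) + b*f(X2, Y))
\end{verbatim}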
Solved Problems
Problem 5.2.4. Let \(f\) be the bilinear form defined on \(V_{2}(\mathbb {R})\) by \(f(x, y)=x_{1} y_{1}+x_{2} y_{2}\) where \(x=\left (x_{1}, x_{2}\right )\) and \(y=\left (y_{1}, y_{2}\right )\). Find the matrix of \(f\).
(i) w.r.t. the standard basis \(\left \{e_{1}, e_{2}\right \}\).
(ii) w.r.t. the basis \(\{(1,1),(1,2)\}.\)
(i) \(\begin {aligned}[t] f\left (e_{1}, e_{1}\right ) &=f((1,0),(1,0))\\ &=1 \times 1+0 \times 0=1 \end {aligned} \)
Similarly
\(\seteqnumber{0}{5.}{0}\)\begin{align*} f\left (e_{1}, e_{2}\right ) & =0 \\ f\left (e_{2}, e_{1}\right )&=0 \\ f\left (e_{2}, e_{2}\right )&=1 \end{align*} The matrix of \(f\) is \(\left (\begin {array}{ll} 1 & 0 \\ 0 & 1 \end {array}\right )\)
(ii) Let \(v_{1}=(1,1) \) and \(v_{2}=(1,2)\). Then
\(\seteqnumber{0}{5.}{0}\)\begin{align*} f\left (v_{1}, v_{1}\right )&=1+1=2\\ f\left (v_{1}, v_{2}\right )&=1+2=3\\ f\left (v_{2}, v_{1}\right )&=1+2=3\\ f\left (v_{2}, v_{2}\right )&=1+4=5 \end{align*} The matrix of \(f\) is \(\left (\begin {array}{ll} 2 & 3 \\ 3 & 5 \end {array}\right ) \).
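The computation in part (ii) can also be checked mechanically, since the \((i, j)\) entry of the matrix is just \(f\left (v_{i}, v_{j}\right )\). A minimal sketch, assuming numpy:
\begin{verbatim}
import numpy as np

f = lambda x, y: x[0]*y[0] + x[1]*y[1]     # the given bilinear form
basis = [np.array([1.0, 1.0]), np.array([1.0, 2.0])]
M = np.array([[f(vi, vj) for vj in basis] for vi in basis])
print(M)        # [[2. 3.]
                #  [3. 5.]]
\end{verbatim}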
Theorem 5.2.5. Let \(V\) be a vector space of dimension \(n\) over a field \(F\). Fix a basis \(\left \{v_{1}, v_{2}, \ldots , v_{n}\right \}\) for \(V\). Then the function \(\varphi : L(V, V, F) \rightarrow M_{n}(F)\) which associates with each bilinear form \(f \in L(V, V, F)\) the \(n \times n\) matrix \(\left (a_{i j}\right )\) where \(f\left (v_{i}, v_{j}\right )=a_{i j}\) is an isomorphism.
Proof : Clearly \(\varphi \) is \(1-1\) and onto.
Now, let \(f, g \in L(V, V, F)\) and \(\alpha \in F\).
Let \(\varphi (f)=\left (a_{i j}\right ) \) and \(\varphi (g)=\left (b_{i j}\right )\). Then
\begin{align*} (f+g)\left (v_{i}, v_{j}\right ) &=f\left (v_{i}, v_{j}\right )+g\left (v_{i}, v_{j}\right ) \\ &=a_{i j}+b_{i j}\\ \varphi (f+g)&=\left (a_{i j}+b_{i j}\right )=\left (a_{i j}\right )+\left (b_{i j}\right )\\ &=\varphi (f)+\varphi (g) . \end{align*} Also,
\(\seteqnumber{0}{5.}{0}\)\begin{align*} (\alpha f)\left (v_{i}, v_{j}\right ) & =\alpha f\left (v_{i}, v_{j}\right )=\alpha a_{i j}\\ \varphi (\alpha f)&=\left (\alpha a_{i j}\right )=\alpha \left (a_{i j}\right )=\alpha \varphi (f) . \end{align*} Thus \(\varphi \) is an isomorphism. □
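In particular, since \(\dim M_{n}(F)=n^{2}\), it follows that \(\dim L(V, V, F)=n^{2}\).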
5.3 Quadratic forms
Definition. A bilinear form \(f\) on \(V\) is said to be symmetric if \(f(u, v)=f(v, u)\) for all \(u, v \in V\).
Examples
(i) Let \(V\) be a vector space over \(\mathbb {R}\). Then any inner product defined on \(V\) is a symmetric bilinear form.
(ii) The bilinear form \(\hat {\mathbf {0}}\) defined in Example 2 of Section 5.2 is a symmetric bilinear form.
(iii) Let \(f\) be a bilinear form on \(V\). Then the bilinear form \(f_{1}\) defined by \(f_{1}(u, v)=f(u, v)+f(v, u) \) is a symmetric bilinear form.
Theorem. A bilinear form \(f\) on \(V\) is symmetric if and only if its matrix \(\left (a_{i j}\right )\) with respect to a basis \(\left \{v_{1}, \ldots , v_{n}\right \}\) of \(V\) is a symmetric matrix.
Proof : Let \(f\) be a symmetric bilinear form. Now,
\(\seteqnumber{0}{5.}{0}\)
\begin{align*}
a_{i j} &=f\left (v_{i}, v_{j}\right ) \\ &=f\left (v_{j}, v_{i}\right ) \quad \text { (since } f \text { is symmetric) } \\ &=a_{j i}
\end{align*}
\(\therefore \ \left (a_{i j}\right ) \) is a symmetric matrix.
Conversely, let \(A=\left (a_{i j}\right ) \) be a symmetric matrix, so that \(A=A^{T}\). Then
\begin{align*} f(u,v) & = XAY^T \\ &= (XAY^T)^T (\text { since $XAY^T$ is a $1 \times 1$ matrix } )\\ &=Y A^{T} X^{T} \\ &=Y A X^{T} \\ &=f(v, u) \end{align*} \(\therefore \ f\) is a symmetric bilinear form. □
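The equality \(X A Y^{T}=\left (X A Y^{T}\right )^{T}=Y A X^{T}\) used above can also be seen numerically. A minimal sketch, assuming numpy and an arbitrarily chosen symmetric matrix:
\begin{verbatim}
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 5.0]])                 # A == A.T
X = np.array([1.0, -3.0])
Y = np.array([4.0,  2.0])
# symmetry of the form follows from symmetry of the matrix
assert np.isclose(X @ A @ Y, Y @ A @ X)
\end{verbatim}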
Definition. Let \(f\) be a symmetric bilinear form on \(V\). The function \(q: V \rightarrow F\) defined by \(q(v)=f(v, v)\) is called the quadratic form associated with \(f\).
Examples
1. Consider the bilinear form \(f\) defined on \(V_{n}(F)\) by \(f(u, v)=x_{1} y_{1}+x_{2} y_{2}+\cdots +x_{n} y_{n}\), where \(u=\left (x_{1}, \ldots , x_{n}\right )\) and \(v=\left (y_{1}, \ldots , y_{n}\right ) \).
Then the quadratic form \(q\) associated with \(f\) is given by
\[ q(u)=f(u, u)=x_{1}^{2}+\ldots +x_{n}^{2}. \]
2. Let \(A\) be a symmetric matrix of order \(n\) associated with the symmetric bilinear form \(f\). Then the corresponding quadratic form is given by
\[ q(X)=X A X^{T}=\sum _{i, j=1}^{n} a_{i j} x_{i} x_{j} \]
For example, consider the symmetric matrix
\[ A=\left (\begin {array}{lll} 1 & 2 & 3 \\ 2 & 4 & 7 \\ 3 & 7 & 6 \end {array}\right ) \]
The quadratic form \(q\) determined by \(A\) w.r.t. the standard basis for \(V_{3}(\mathbb {R})\) is given by
\(\seteqnumber{0}{5.}{0}\)\begin{align*} q(v) &=\left (x_{1}\ x_{2}\ x_{3}\right ) \left (\begin{array}{ccc} 1 & 2 & 3 \\ 2 & 4 & 7 \\ 3 & 7 & 6 \end {array}\right )\left (\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \end {array}\right ) \\ &=x_{1}^{2}+4 x_{2}^{2}+6 x_{3}^{2}+4 x_{1} x_{2}+14 x_{2} x_{3}+6 x_{1} x_{3} . \end{align*}
3. Consider the diagonal matrix
\[ A=\left (\begin {array}{lll} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end {array}\right ) \]
The quadratic form \(q\) determined by \(A\) w.r.t. the standard basis for \(V_{3}(\mathbb {R})\) is given by
\(\seteqnumber{0}{5.}{0}\)\begin{align*} q(v) & =\left (x_{1} \ x_{2} \ x_{3}\right )\left (\begin{array}{lll} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end {array}\right )\left (\begin{array}{l} x_{1} \\ x_{2} \\ x_{3} \end {array}\right )\\ & =x_{1}^{2}+2 x_{2}^{2}+3 x_{3}^{2} . \end{align*} We say that this quadratic form \(q\) is in the diagonal form.
4. Consider the quadratic form defined on \(V_{2}( \mathbb R)\) by \(q\left (x_{1}, x_{2}\right )=2 x_{1}^{2}+x_{1} x_{2}+x_{2}^{2}\). Then the symmetric matrix associated with \(q\) can be found as follows. Let
\(\seteqnumber{0}{5.}{0}\)\begin{align*} 2 x_{1}^{2}+x_{1} x_{2}+x_{2}^{2} &=\left (x_{1} \ x_{2}\right )\left (\begin{array}{ll} a & b \\ b & c \end {array}\right )\left (\begin{array}{l} x_{1} \\ x_{2} \end {array}\right ) \\ &=a x_{1}^{2}+2 b x_{1} x_{2}+c x_{2}^{2}\\ \therefore a & =2 ; b=\dfrac {1}{2} ; c=1 . \\ \therefore A & =\left (\begin{array}{cc} 2 & \dfrac {1}{2} \\ \dfrac {1}{2} & 1 \end {array}\right ) \end{align*}
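In matrix terms, the recipe of Example 4 amounts to the following: if \(q(X)=X M X^{T}\) for any (not necessarily symmetric) matrix \(M\), then \(A=\frac {1}{2}\left (M+M^{T}\right )\) is the symmetric matrix of \(q\), assuming the characteristic of the field is not \(2\). A small sketch, assuming numpy:
\begin{verbatim}
import numpy as np

M = np.array([[2.0, 1.0],
              [0.0, 1.0]])     # q(x1, x2) = 2x1^2 + x1x2 + x2^2
A = (M + M.T) / 2              # [[2, 1/2], [1/2, 1]] as in Example 4
x = np.array([3.0, -2.0])
assert np.isclose(x @ M @ x, x @ A @ x)
\end{verbatim}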
Theorem. Let \(f\) be a symmetric bilinear form on \(V\) and let \(q\) be the quadratic form associated with \(f\). Then
(i) \(f(u, v)=\frac {1}{4}\{q(u+v)-q(u-v)\}\)
(ii) \(f(u, v)=\frac {1}{2}\{q(u+v)-q(u)-q(v)\}\)
Proof :
(i) \(\begin {aligned}[t] \frac {1}{4}\{q(u+v)-q(u-v)\} & =\frac {1}{4}\{f(u+v, u+v)-f(u-v, u-v)\}\\ &=\frac {1}{4}\{f(u, u)+f(u, v)+f(v, u)+f(v, v) \\ & \quad -f(u, u)+f(u, v)+f(v, u)-f(v, v)\}\\ &=\frac {1}{4}\{4 f(u, v)\} =f(u, v). \end {aligned} \)
(ii) The proof is similar to that of (i).
□
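Both identities are easy to check numerically for a concrete symmetric form. A minimal sketch, assuming numpy and taking the dot product on \(\mathbb {R}^{3}\) as the form \(f\):
\begin{verbatim}
import numpy as np

f = lambda u, v: u @ v          # a symmetric bilinear form on R^3
q = lambda u: f(u, u)           # the associated quadratic form
u = np.array([1.0, 2.0, -1.0])
v = np.array([0.5, -3.0, 2.0])
assert np.isclose(f(u, v), (q(u + v) - q(u - v)) / 4)    # identity (i)
assert np.isclose(f(u, v), (q(u + v) - q(u) - q(v)) / 2) # identity (ii)
\end{verbatim}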
5.4 Reduction of a quadratic form to the diagonal form
In Example 3 of Section 5.3 we saw that the quadratic form associated with a diagonal matrix of order \(n\) is of the form
\[ a_{1} x_{1}^{2}+a_{2} x_{2}^{2}+\ldots +a_{n} x_{n}^{2} \]
which is known as the diagonal form. Now we prove that any quadratic form can be reduced to the diagonal form by means of a non-singular linear transformation. The method of reduction described below is due to Lagrange.
Consider the quadratic form
\begin{align*} \varphi & =\varphi \left (x_{1}, x_{2}, \ldots , x_{n}\right )=\sum _{i, j=1}^{n} a_{i j} x_{i} x_{j}\\ &=a_{11} x_{1}^{2}+\cdots +a_{n n} x_{n}^{2}+2 a_{12} x_{1} x_{2} +\cdots +2 a_{n(n-1)} x_{n} x_{n-1} . \end{align*} Case (i) Suppose at least one of \(a_{11}, \ldots , a_{n n}\) is not zero. We assume, without loss of generality, that \(a_{11} \neq 0\). Then
\(\seteqnumber{0}{5.}{0}\)\begin{align*} \varphi & =\left (a_{11} x_{1}^{2}+2 a_{12} x_{1} x_{2} +\cdots +2 a_{1 n} x_{1} x_{n}\right ) +\sum _{i, j=2}^{n} a_{i j} x_{i} x_{j}\\ &=a_{11}\left (x_{1}^{2}+2 \frac {a_{12}}{a_{11}} x_{1} x_{2}+\cdots +2 \frac {a_{1 n}}{a_{11}} x_{1} x_{n}\right ) +\varphi _{1}\left (x_{2}, \ldots , x_{n}\right ) \text { (say) }\\ &=a_{11}\left (x_{1}+\frac {a_{12}}{a_{11}} x_{2}+\cdots +\frac {a_{1 n}}{a_{11}} x_{n}\right )^{2} +\varphi _{2}\left (x_{2}, \ldots , x_{n}\right ) \text { (say) } \end{align*} Now, putting \(y_{1}=x_{1}+\frac {a_{12}}{a_{11}} x_{2}+\cdots +\frac {a_{1 n}}{a_{11}} x_{n}\), \(y_{2}=x_{2}, \ldots , y_{n}=x_{n}\), \(\varphi \) reduces to
\(\seteqnumber{0}{5.}{0}\)\begin{equation} \label {p86eq1} \varphi =\alpha _{1} y_{1}^{2}+\varphi _{2}\left (y_{2}, \ldots , y_{n}\right ) \end{equation}
where \(\alpha _{1}=a_{11}\).
Case (ii) Suppose \(a_{11}=a_{22}= \cdots =a_{n n}=0\). If \(\varphi \neq 0\), we still have \(a_{i j} \neq 0\) for some \(i, j\) such that \(i \neq j\).
Without loss of generality we assume that \(a_{12} \neq 0\).
Then the non-singular linear transformation
\[ x_{1}=y_{1}, x_{2}=y_{1}+y_{2}, x_{3}=y_{3}, \ldots \ldots , x_{n}=y_{n} \]
changes the quadratic form \(\varphi \) to another quadratic form in which the term \(y_{1}^{2}\) is present.
Now, applying the method of Case (i), \(\varphi \) can be reduced to the form (5.1). Treating \(\varphi _{2}\) in the same way we get
\begin{align*} \varphi _{2} &=\alpha _{2} z_{2}^{2}+\varphi _{3}\left (z_{3}, \ldots , z_{n}\right ) \text { so that } \\ \varphi &=\alpha _{1} z_{1}^{2}+\alpha _{2} z_{2}^{2}+\varphi _{3}\left (z_{3}, \ldots , z_{n}\right ) \end{align*} Continuing this process of reduction we obtain \(\varphi \) in the form \(\varphi =\alpha _{1} w_{1}^{2}+\cdots +\alpha _{r} w_{r}^{2}\).
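Lagrange's reduction is entirely mechanical, so it is easy to program. The following is a minimal sketch in Python, assuming the sympy library (the function name is our own); it mirrors the two cases above and returns \(\varphi \) as a list of pairs \(\left (\alpha _{i}, w_{i}\right )\) with \(\varphi =\sum \alpha _{i} w_{i}^{2}\).
\begin{verbatim}
import sympy as sp

def lagrange_reduce(phi, xs):
    """Return pairs (c, L) with phi == sum of c * L**2, where each L
    is a linear form in the variables xs (Lagrange's method)."""
    phi = sp.expand(phi)
    if phi == 0 or not xs:
        return []
    # Case (i): some x**2 occurs; complete the square on that variable.
    for i, x in enumerate(xs):
        a = phi.coeff(x, 2)
        if a != 0:
            L = sp.expand(x + phi.coeff(x, 1) / (2 * a))
            rest = xs[:i] + xs[i + 1:]
            return [(a, L)] + lagrange_reduce(phi - a * L**2, rest)
    # Case (ii): no square terms; x_j -> x_1 + x_j creates one.
    x1 = next(x for x in xs if phi.coeff(x, 1) != 0)
    xj = next(x for x in xs
              if x != x1 and phi.coeff(x1, 1).coeff(x) != 0)
    squares = lagrange_reduce(phi.subs(xj, x1 + xj), xs)
    # undo the substitution inside each linear form
    return [(c, sp.expand(L.subs(xj, xj - x1))) for c, L in squares]

x1, x2, x3 = sp.symbols('x1 x2 x3')
phi = x1**2 + 4*x1*x2 + 4*x1*x3 + 4*x2**2 + 16*x2*x3 + 4*x3**2
squares = lagrange_reduce(phi, [x1, x2, x3])
print(squares)
# e.g. [(1, x1 + 2*x2 + 2*x3), (8, x2/2 + x3/2), (-2, -x2 + x3)]
assert sp.expand(phi - sum(c * L**2 for c, L in squares)) == 0
\end{verbatim}
Run on the quadratic form of the first solved problem below, this sketch reproduces the reduction \(z_{1}^{2}+8 z_{2}^{2}-2 z_{3}^{2}\) with the same substitutions.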
Solved problems
Problem. Reduce the quadratic form \(x_{1}^{2}+4 x_{1} x_{2}+4 x_{1} x_{3}+4 x_{2}^{2}+16 x_{2} x_{3}+4 x_{3}^{2}\) to the diagonal form.
Solution :
\(\seteqnumber{0}{5.}{1}\)
\begin{align*}
\varphi &=x_{1}^{2}+4 x_{1} x_{2}+4 x_{1} x_{3}+4 x_{2}^{2}+16 x_{2} x_{3}+4 x_{3}^{2} \\ &=\left (x_{1}+2 x_{2}+2 x_{3}\right )^{2}+8 x_{2} x_{3}
\end{align*}
Putting \(x_{1}+2 x_{2}+2 x_{3}=y_{1}\), \(x_{2}=y_{2}\), \(x_{3}=y_{2}+y_{3}\)
we get
\begin{align*} \varphi &=y_{1}^{2}+8 y_{2}^{2}+8 y_{2} y_{3} \\ &=y_{1}^{2}+8\left (y_{2}+\frac {1}{2} y_{3}\right )^{2}-2 y_{3}^{2} \end{align*} Putting \(z_{1}=y_{1}\), \(z_{2}=y_{2}+\frac {1}{2} y_{3}\), \(z_{3}=y_{3}\) we get
\(\seteqnumber{0}{5.}{1}\)\begin{align*} \varphi &=z_{1}^{2}+8 z_{2}^{2}-2 z_{3}^{2} \end{align*} where,
\(\seteqnumber{0}{5.}{1}\)\begin{align*} z_{1}&=x_{1}+2 x_{2}+2 x_{3} \\ z_{2} &=\frac {1}{2}\left (x_{2}+x_{3}\right ) \\ z_{3} &=x_{3}-x_{2} . \end{align*}
Problem. Reduce the quadratic form \(2 x_{1} x_{2}-x_{1} x_{3}+x_{1} x_{4}-x_{2} x_{3} +x_{2} x_{4}-2 x_{3} x_{4}\) to the diagonal form.
Solution : Let \(\varphi =2 x_{1} x_{2}-x_{1} x_{3}+x_{1} x_{4}-x_{2} x_{3} +x_{2} x_{4}-2 x_{3} x_{4}. \)
Putting \(x_{1}=y_{1} \); \(x_{2}=y_{1}+y_{2} \); \(x_{3}=y_{3}\) and \(x_{4}=y_{4}\), we get
\begin{align*} \varphi & =2 y_{1}^{2}+2 y_{1} y_{2}-2 y_{1} y_{3}+2 y_{1} y_{4}-y_{2} y_{3} +y_{2} y_{4}-2 y_{3} y_{4} \\ & = 2 (y_1^2 + y_1 y_2 - y_1 y_3 + y_ 1 y_4) - y_{2} y_{3} +y_{2} y_{4}-2 y_{3} y_{4} \\ &= 2 \left ( y_1 +\dfrac {1}{2} y_2 - \dfrac {1}{2} y_3 + \dfrac {1}{2} y_4 \right ) ^2 -\frac {1}{2} y_{2}^{2}-\frac {1}{2} y_{3}^{2}-\frac {1}{2} y_{4}^{2}-y_{3} y_{4} \end{align*} Putting \(z_{1}=y_{1}+\frac {1}{2} y_{2}-\frac {1}{2} y_{3}+\frac {1}{2} y_{4} \); \(z_{2}=y_{2}\); \(z_{3}=y_{3} \) and \(z_{4}=y_{4}\) we get
\(\seteqnumber{0}{5.}{1}\)\begin{align*} \varphi & =2 z_{1}^{2}-\frac {1}{2} z_{2}^{2}-\frac {1}{2} z_{3}^{2}-\frac {1}{2} z_{4}^{2}-z_{3} z_{4}\\ &=2 z_{1}^{2}-\frac {1}{2} z_{2}^{2}-\frac {1}{2}\left (z_{3}^{2}+2 z_{3} z_{4}+z_{4}^{2}\right )\\ &=2 z_{1}^{2}-\frac {1}{2} z_{2}^{2}-\frac {1}{2}\left (z_{3}+z_{4}\right )^{2} \end{align*} Putting \(w_{1}=z_{1}, w_{2}=z_{2}, w_{3}=z_{3}+z_{4}, w_{4}=z_{4}\) we get
\(\seteqnumber{0}{5.}{1}\)\begin{align*} \varphi & =2 w_{1}^{2}-\frac {1}{2} w_{2}^{2}-\frac {1}{2} w_{3}^{2} \end{align*} where,
\(\seteqnumber{0}{5.}{1}\)\begin{align*} w_1 & = \dfrac {1}{2} x_1 + \dfrac {1}{2}x_2 - \dfrac {1}{2}x_3 + \dfrac {1}{2}x_4 \\ w_2 & = -x_1 + x_2 \\ w_3 & = x_3 + x_4 \\ w_4 & = x_4. \end{align*}
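As a quick check of the answer, one may verify symbolically that \(2 w_{1}^{2}-\frac {1}{2} w_{2}^{2}-\frac {1}{2} w_{3}^{2}\) expands back to \(\varphi \). A minimal sketch, assuming sympy:
\begin{verbatim}
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1:5')
phi = 2*x1*x2 - x1*x3 + x1*x4 - x2*x3 + x2*x4 - 2*x3*x4
w1 = (x1 + x2 - x3 + x4) / 2
w2 = -x1 + x2
w3 = x3 + x4
assert sp.expand(2*w1**2 - w2**2/2 - w3**2/2 - phi) == 0
\end{verbatim}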