14.1 Bilinear Forms
In this chapter, we study the structure of a K-vector space E endowed with a nondegenerate
bilinear form ϕ : E × E → K (for any field K), which can be viewed as a kind of generalized
inner product. Unlike the case of an inner product, there may be nonzero vectors u ∈ E such
that ϕ(u, u) = 0, so the map u → ϕ(u, u) can no longer be interpreted as a notion of square
length (also, ϕ(u, u) may not be real and positive!). However, the notion of orthogonality
survives: we say that u, v ∈ E are orthogonal iff ϕ(u, v) = 0. Under some additional
conditions on ϕ, it is then possible to split E into orthogonal subspaces having some special
properties. It turns out that the special cases where ϕ is symmetric (or Hermitian) or skew-
symmetric (or skew-Hermitian) can be handled uniformly using a deep theorem due to Witt
(the Witt decomposition theorem (1936)).
We begin with the very general situation of a bilinear form ϕ : E ×F → K, where K is an
arbitrary field, possibly of characteristic 2. Actually, even though at first glance this may
appear to be an unnecessary abstraction, it turns out that this situation arises in attempting
to prove properties of a bilinear map ϕ : E × E → K, because it may be necessary to restrict
ϕ to different subspaces U and V of E. This general approach was pioneered by Chevalley
[20], E. Artin [2], and Bourbaki [11]. The third source was a major source of inspiration,
and many proofs are taken from it. Other useful references include Snapper and Troyer [95],
Berger [7], Jacobson [57], Grove [50], Taylor [104], and Berndt [9].
Definition 14.1. Given two vector spaces E and F over a field K, a map ϕ : E × F → K
is a bilinear form iff the following conditions hold: For all u, u1, u2 ∈ E, all v, v1, v2 ∈ F , and
all λ, µ ∈ K, we have
ϕ(u1 + u2, v) = ϕ(u1, v) + ϕ(u2, v)
ϕ(u, v1 + v2) = ϕ(u, v1) + ϕ(u, v2)
ϕ(λu, v) = λϕ(u, v)
ϕ(u, µv) = µϕ(u, v).
CHAPTER 14. BILINEAR FORMS AND THEIR GEOMETRIES
A bilinear form as in Definition 14.1 is sometimes called a pairing. The first two conditions
imply that ϕ(0, v) = ϕ(u, 0) = 0 for all u ∈ E and all v ∈ F .
If E = F , observe that
ϕ(λu + µv, λu + µv) = λϕ(u, λu + µv) + µϕ(v, λu + µv)
= λ²ϕ(u, u) + λµϕ(u, v) + λµϕ(v, u) + µ²ϕ(v, v).
If we let λ = µ = 1, we get
ϕ(u + v, u + v) = ϕ(u, u) + ϕ(u, v) + ϕ(v, u) + ϕ(v, v).
If ϕ is symmetric, which means that
ϕ(u, v) = ϕ(v, u) for all u, v ∈ E,
then
2ϕ(u, v) = ϕ(u + v, u + v) − ϕ(u, u) − ϕ(v, v).
The function Φ defined such that
Φ(u) = ϕ(u, u),  u ∈ E,
is called the quadratic form associated with ϕ. If the field K is not of characteristic 2, then
ϕ is completely determined by its quadratic form Φ. The symmetric bilinear form ϕ is called
the polar form of Φ. This suggests the following definition.
Definition 14.2. A function Φ : E → K is a quadratic form on E if the following conditions
hold:
(1) We have Φ(λu) = λ²Φ(u), for all u ∈ E and all λ ∈ K.
(2) The map ϕ′ given by ϕ′(u, v) = Φ(u + v) − Φ(u) − Φ(v) is bilinear. Obviously, the map
ϕ′ is symmetric.
Since Φ(u + u) = Φ(2u) = 4Φ(u), we have
ϕ′(u, u) = 2Φ(u),  u ∈ E.
If the field K is not of characteristic 2, then ϕ = (1/2)ϕ′ is the unique symmetric bilinear
form such that ϕ(u, u) = Φ(u) for all u ∈ E. The bilinear form ϕ = (1/2)ϕ′ is called the polar
form of Φ. In this case, there is a bijection between the set of symmetric bilinear forms on E
and the set of quadratic forms on E.
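As a concrete sketch (not from the text), this correspondence can be checked numerically over K = R with NumPy; the matrix A below is an arbitrary choice, and the polar form ϕ = (1/2)ϕ′ recovers the symmetric part (A + Aᵀ)/2 of A.

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, 4.0]])            # arbitrary (nonsymmetric) matrix

def Phi(x):
    """Quadratic form Phi(x) = x^T A x."""
    return x @ A @ x

def polar(u, v):
    """Polar form phi = (1/2)(Phi(u+v) - Phi(u) - Phi(v))."""
    return 0.5 * (Phi(u + v) - Phi(u) - Phi(v))

# The polar form is the symmetric bilinear form with matrix (A + A^T)/2.
S = 0.5 * (A + A.T)
u, v = np.array([1.0, 2.0]), np.array([3.0, -1.0])
assert np.isclose(polar(u, v), u @ S @ v)
assert np.isclose(polar(u, u), Phi(u))     # phi(u, u) = Phi(u)
```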
If K is a field of characteristic 2, then ϕ′ is alternating, which means that
ϕ′(u, u) = 0 for all u ∈ E.
Thus, Φ cannot be recovered from the symmetric bilinear form ϕ′. However, there is some
(nonsymmetric) bilinear form ψ such that Φ(u) = ψ(u, u) for all u ∈ E. Thus, quadratic
forms are more general than symmetric bilinear forms (except in characteristic different from 2).
In general, if K is a field of any characteristic, the identity
ϕ(u + v, u + v) = ϕ(u, u) + ϕ(u, v) + ϕ(v, u) + ϕ(v, v)
shows that if ϕ is alternating (that is, ϕ(u, u) = 0 for all u ∈ E), then,
ϕ(v, u) = −ϕ(u, v) for all u, v ∈ E;
we say that ϕ is skew-symmetric. Conversely, if the field K is not of characteristic 2, then a
skew-symmetric bilinear map is alternating, since ϕ(u, u) = −ϕ(u, u) implies ϕ(u, u) = 0.
An important consequence of bilinearity is that a pairing yields a linear map from E into
F ∗ and a linear map from F into E∗ (where E∗ = HomK(E, K), the dual of E, is the set of
linear maps from E to K, called linear forms).
Definition 14.3. Given a bilinear map ϕ : E × F → K, for every u ∈ E, let lϕ(u) be the
linear form in F ∗ given by
lϕ(u)(y) = ϕ(u, y) for all y ∈ F ,
and for every v ∈ F , let rϕ(v) be the linear form in E∗ given by
rϕ(v)(x) = ϕ(x, v) for all x ∈ E.
Because ϕ is bilinear, the maps lϕ : E → F ∗ and rϕ : F → E∗ are linear.
Definition 14.4. A bilinear map ϕ : E ×F → K is said to be nondegenerate iff the following
conditions hold:
(1) For every u ∈ E, if ϕ(u, v) = 0 for all v ∈ F , then u = 0, and
(2) For every v ∈ F , if ϕ(u, v) = 0 for all u ∈ E, then v = 0.
The following proposition shows the importance of lϕ and rϕ.
Proposition 14.1. Given a bilinear map ϕ : E × F → K, the following properties hold:
(a) The map lϕ is injective iff property (1) of Definition 14.4 holds.
(b) The map rϕ is injective iff property (2) of Definition 14.4 holds.
(c) The bilinear form ϕ is nondegenerate iff lϕ and rϕ are injective.
(d) If the bilinear form ϕ is nondegenerate and if E and F have finite dimensions, then
dim(E) = dim(F ), and lϕ : E → F ∗ and rϕ : F → E∗ are linear isomorphisms.
Proof. (a) Assume that (1) of Definition 14.4 holds. If lϕ(u) = 0, then lϕ(u) is the linear
form whose value is 0 for all y; that is,
lϕ(u)(y) = ϕ(u, y) = 0 for all y ∈ F ,
and by (1) of Definition 14.4, we must have u = 0. Therefore, lϕ is injective. Conversely, if
lϕ is injective, and if
lϕ(u)(y) = ϕ(u, y) = 0 for all y ∈ F ,
then lϕ(u) is the zero form, and by injectivity of lϕ, we get u = 0; that is, (1) of Definition
14.4 holds.
(b) The proof is obtained by swapping the arguments of ϕ.
(c) This follows from (a) and (b).
(d) If E and F are finite dimensional, then dim(E) = dim(E∗) and dim(F ) = dim(F ∗).
Since ϕ is nondegenerate, lϕ : E → F ∗ and rϕ : F → E∗ are injective, so dim(E) ≤ dim(F ∗) =
dim(F ) and dim(F ) ≤ dim(E∗) = dim(E), which implies that
dim(E) = dim(F ),
and thus, lϕ : E → F ∗ and rϕ : F → E∗ are bijective.
As a corollary of Proposition 14.1, we have the following characterization of a nondegen-
erate bilinear map. The proof is left as an exercise.
Proposition 14.2. Given a bilinear map ϕ : E × F → K, if E and F have the same finite
dimension, then the following properties are equivalent:
(1) The map lϕ is injective.
(2) The map lϕ is surjective.
(3) The map rϕ is injective.
(4) The map rϕ is surjective.
(5) The bilinear form ϕ is nondegenerate.
Observe that in terms of the canonical pairing between E∗ and E given by
⟨f, u⟩ = f (u),  f ∈ E∗, u ∈ E,
(and the canonical pairing between F ∗ and F ), we have
ϕ(u, v) = ⟨lϕ(u), v⟩ = ⟨rϕ(v), u⟩.
Proposition 14.3. Given a bilinear map ϕ : E × F → K, if ϕ is nondegenerate and E and
F are finite-dimensional, then dim(E) = dim(F ) = n, and for every basis (e1, . . . , en) of E,
there is a basis (f1, . . . , fn) of F such that ϕ(ei, fj) = δij, for all i, j = 1, . . . , n.
Proof. Since ϕ is nondegenerate, by Proposition 14.1 we have dim(E) = dim(F ) = n, and
by Proposition 14.2, the linear map rϕ is bijective. Then, if (e∗1, . . . , e∗n) is the dual basis (in
E∗) of the basis (e1, . . . , en), the vectors (f1, . . . , fn) given by fi = rϕ⁻¹(e∗i) form a basis of F ,
and we have
ϕ(ei, fj) = ⟨rϕ(fj), ei⟩ = ⟨e∗j, ei⟩ = δij,
as claimed.
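Numerically, this construction amounts to inverting the matrix of ϕ. In the sketch below (an illustration, with an arbitrarily chosen invertible M = (ϕ(ei, fj))), writing an unknown basis vector f′j in the basis (f1, . . . , fn) with coordinate column C[:, j], the condition ϕ(ei, f′j) = δij reads M C = I, so the coordinates of the desired basis are the columns of M⁻¹.

```python
import numpy as np

# Matrix M = (phi(e_i, f_j)) of a nondegenerate bilinear form with
# respect to bases (e_1,...,e_n) of E and (f_1,...,f_n) of F.
M = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# For a new basis f'_j = sum_k C[k, j] f_k, the condition
# phi(e_i, f'_j) = delta_ij reads M @ C = I, so C = M^{-1}.
C = np.linalg.inv(M)

assert np.allclose(M @ C, np.eye(2))   # phi(e_i, f'_j) = delta_ij
```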
If E = F and ϕ is symmetric, then we have the following interesting result.
Theorem 14.4. Given any bilinear form ϕ : E × E → K with dim(E) = n, if ϕ is symmet-
ric and K does not have characteristic 2, then there is a basis (e1, . . . , en) of E such that
ϕ(ei, ej) = 0, for all i ≠ j.
Proof. We proceed by induction on n ≥ 0, following a proof due to Chevalley. The base
case n = 0 is trivial. For the induction step, assume that n ≥ 1 and that the induction
hypothesis holds for all vector spaces of dimension n − 1. If ϕ(u, v) = 0 for all u, v ∈ E,
then the statement holds trivially. Otherwise, since K does not have characteristic 2, by a
previous remark, there is some nonzero vector e1 ∈ E such that ϕ(e1, e1) ≠ 0. We claim that
the set
H = {v ∈ E | ϕ(e1, v) = 0}
has dimension n − 1, and that e1 ∉ H. This is because
H = Ker (lϕ(e1)),
where lϕ(e1) is the linear form in E∗ determined by e1. Since ϕ(e1, e1) ≠ 0, we have e1 ∉ H,
the linear form lϕ(e1) is not the zero form, and thus its kernel is a hyperplane H (a subspace
of dimension n − 1). Since dim(H) = n − 1 and e1 ∉ H, we have the direct sum
E = H ⊕ Ke1.
By the induction hypothesis applied to H, we get a basis (e2, . . . , en) of vectors in H such
that ϕ(ei, ej) = 0, for all i ≠ j with 2 ≤ i, j ≤ n. Since ϕ(e1, v) = 0 for all v ∈ H and since
ϕ is symmetric, we also have ϕ(v, e1) = 0 for all v ∈ H, so we obtain a basis (e1, . . . , en) of
E such that ϕ(ei, ej) = 0, for all i ≠ j.
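The inductive proof is effective, and it can be turned into an algorithm. The following sketch (an illustration over K = R, not from the text) diagonalizes the matrix of a symmetric bilinear form by repeatedly picking a vector e1 with ϕ(e1, e1) ≠ 0 and projecting the remaining basis vectors into the hyperplane H; when every candidate is isotropic but ϕ ≠ 0, it uses the characteristic ≠ 2 trick of replacing u by u + v, since then ϕ(u + v, u + v) = 2ϕ(u, v) ≠ 0.

```python
import numpy as np

def orthogonal_basis(M, tol=1e-12):
    """Given the matrix M of a symmetric bilinear form phi on R^n, return
    an invertible P whose columns e_1, ..., e_n satisfy phi(e_i, e_j) = 0
    for i != j, i.e. P.T @ M @ P is diagonal."""
    n = M.shape[0]
    phi = lambda u, v: u @ M @ v
    basis = [np.eye(n)[:, i] for i in range(n)]
    P = []
    while basis:
        # look for a basis vector with phi(e, e) != 0
        i = next((k for k, b in enumerate(basis) if abs(phi(b, b)) > tol), None)
        if i is None:
            # all candidates isotropic; look for u, v with phi(u, v) != 0
            pair = next(((k, l) for k in range(len(basis))
                         for l in range(len(basis))
                         if k != l and abs(phi(basis[k], basis[l])) > tol), None)
            if pair is None:                 # phi vanishes on the remaining span
                P.extend(basis)
                break
            k, l = pair
            basis[k] = basis[k] + basis[l]   # phi(u+v, u+v) = 2 phi(u, v) != 0
            i = k
        e1 = basis.pop(i)
        P.append(e1)
        # project the remaining vectors into H = {v | phi(e1, v) = 0}
        basis = [b - (phi(e1, b) / phi(e1, e1)) * e1 for b in basis]
    return np.column_stack(P)

# Example: a "hyperbolic plane", where both standard basis vectors are isotropic.
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])
P = orthogonal_basis(M)
D = P.T @ M @ P
assert np.allclose(D, np.diag(np.diag(D)))   # off-diagonal entries vanish
```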
If E and F are finite-dimensional vector spaces and if (e1, . . . , em) is a basis of E and
(f1, . . . , fn) is a basis of F , then the bilinearity of ϕ yields
ϕ(∑_{i=1}^m x_i e_i, ∑_{j=1}^n y_j f_j) = ∑_{i=1}^m ∑_{j=1}^n x_i ϕ(e_i, f_j) y_j.
This shows that ϕ is completely determined by the m × n matrix M = (ϕ(ei, fj)), and in
matrix form, we have
ϕ(x, y) = xᵀM y = yᵀMᵀx,
where x and y are the column vectors associated with (x1, . . . , xm) ∈ Km and (y1, . . . , yn) ∈
Kn. We call M the matrix of ϕ with respect to the bases (e1, . . . , em) and (f1, . . . , fn).
If m = dim(E) = dim(F ) = n, then it is easy to check that ϕ is nondegenerate iff M is
invertible iff det(M ) ≠ 0.
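For instance (an illustrative sketch with an arbitrarily chosen M, not an example from the text), over K = R the evaluation ϕ(x, y) = xᵀM y and the nondegeneracy test det(M) ≠ 0 look as follows:

```python
import numpy as np

# Matrix M = (phi(e_i, f_j)) of a bilinear form phi : R^3 x R^3 -> R
# (an arbitrary example with m = n = 3).
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

def phi(x, y):
    """phi(x, y) = x^T M y."""
    return x @ M @ y

x = np.array([1.0, 0.0, 2.0])
y = np.array([0.0, 1.0, 1.0])
assert np.isclose(phi(x, y), y @ M.T @ x)        # x^T M y = y^T M^T x

# For m = n, phi is nondegenerate iff M is invertible iff det(M) != 0.
nondegenerate = not np.isclose(np.linalg.det(M), 0.0)
```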
As we will see later, most bilinear forms that we will encounter are equivalent to one
whose matrix is of one of the following forms:
1. In, −In.
2. If p + q = n, with p, q ≥ 1,
Ip,q = ( Ip    0
         0   −Iq ).
3. If n = 2m,
Jm,m = (  0    Im
         −Im    0  ).
4. If n = 2m,
Am,m = Im,m Jm,m = ( 0    Im
                     Im    0  ).
If we make changes of bases given by matrices P and Q, so that x = P x′ and y = Qy′,
then the new matrix expressing ϕ is PᵀM Q. In particular, if E = F and the same basis
is used, then the new matrix is PᵀM P . This shows that if ϕ is nondegenerate, then the
determinant of the matrix of ϕ is determined up to a nonzero square element.
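This congruence behavior is easy to verify numerically (a sketch with an arbitrary M and a random change of basis P, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])                 # matrix of phi in the old bases

P = rng.integers(-3, 4, (2, 2)).astype(float)
while np.isclose(np.linalg.det(P), 0.0):   # ensure P is a change of basis
    P = rng.integers(-3, 4, (2, 2)).astype(float)

# With x = P x' and y = P y' (same basis change on both sides),
# the new matrix is P^T M P.
M_new = P.T @ M @ P

x_new, y_new = np.array([1.0, 2.0]), np.array([-1.0, 1.0])
x, y = P @ x_new, P @ y_new
assert np.isclose(x @ M @ y, x_new @ M_new @ y_new)

# det changes by det(P)^2: it is determined up to a nonzero square.
assert np.isclose(np.linalg.det(M_new),
                  np.linalg.det(M) * np.linalg.det(P) ** 2)
```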
Observe that if ϕ is a symmetric bilinear form (E = F ) and if K does not have charac-
teristic 2, then by Theorem 14.4, there is a basis of E with respect to which the matrix M
representing ϕ is a diagonal matrix. If K = R or K = C, this allows us to classify completely
the symmetric bilinear forms. Recall that Φ(u) = ϕ(u, u) for all u ∈ E.
Proposition 14.5. Given any bilinear form ϕ : E × E → K with dim(E) = n, if ϕ is
symmetric and K does not have characteristic 2, then there is a basis (e1, . . . , en) of E such
that
Φ(∑_{i=1}^n x_i e_i) = ∑_{i=1}^r λ_i x_i²,
for some λi ∈ K − {0} and with r ≤ n. Furthermore, if K = C, then there is a basis
(e1, . . . , en) of E such that
Φ(∑_{i=1}^n x_i e_i) = ∑_{i=1}^r x_i²,
and if K = R, then there is a basis (e1, . . . , en) of E such that
Φ(∑_{i=1}^n x_i e_i) = ∑_{i=1}^p x_i² − ∑_{i=p+1}^{p+q} x_i²,
with 0 ≤ p, q and p + q ≤ n.
Proof. The first statement is a direct consequence of Theorem 14.4. If K = C, then every
λi has a square root µi, and if we replace ei by ei/µi, we obtain the desired form.
If K = R, then there are two cases:
1. If λi > 0, let µi be a positive square root of λi and replace ei by ei/µi.
2. If λi < 0, let µi be a positive square root of −λi and replace ei by ei/µi.
In the nondegenerate case, the matrices corresponding to the complex and the real case
are In, −In, and Ip,q. Observe that the second statement of Proposition 14.5 holds in any
field in which every element has a square root. In the case K = R, we can show that (p, q)
only depends on ϕ.
For any subspace U of E, we say that ϕ is positive definite on U iff ϕ(u, u) > 0 for all
nonzero u ∈ U, and we say that ϕ is negative definite on U iff ϕ(u, u) < 0 for all nonzero
u ∈ U. Then, let
r = max{dim(U ) | U ⊆ E, ϕ is positive definite on U }
and let
s = max{dim(U ) | U ⊆ E, ϕ is negative definite on U }.
Proposition 14.6. (Sylvester’s inertia law ) Given any symmetric bilinear form ϕ : E ×E →
R with dim(E) = n, for any basis (e1, . . . , en) of E such that
Φ(∑_{i=1}^n x_i e_i) = ∑_{i=1}^p x_i² − ∑_{i=p+1}^{p+q} x_i²,
with 0 ≤ p, q and p + q ≤ n, the integers p, q depend only on ϕ; in fact, p = r and q = s,
with r and s as defined above.
Proof. If we let U be the subspace spanned by (e1, . . . , ep), then ϕ is positive definite on
U , so r ≥ p. Similarly, if we let V be the subspace spanned by (ep+1, . . . , ep+q), then ϕ is
negative definite on V , so s ≥ q.
Next, if W1 is any subspace of maximum dimension such that ϕ is positive definite on
W1, and if we let V be the subspace spanned by (ep+1, . . . , en), then ϕ(u, u) ≤ 0 on V , so
W1 ∩ V = (0), which implies that dim(W1) + dim(V ) ≤ n, and thus, r + n − p ≤ n; that
is, r ≤ p. Similarly, if W2 is any subspace of maximum dimension such that ϕ is negative
definite on W2, and if we let U be the subspace spanned by (e1, . . . , ep, ep+q+1, . . . , en), then
ϕ(u, u) ≥ 0 on U , so W2 ∩ U = (0), which implies that s + n − q ≤ n; that is, s ≤ q.
Therefore, p = r and q = s, as claimed.
These last two results can be generalized to ordered fields. For example, see Snapper and
Troyer [95], Artin [2], and Bourbaki [11].
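Over K = R, Sylvester's inertia law can be illustrated numerically (a sketch, not from the text): the signs of the eigenvalues of a real symmetric matrix give the pair (p, q) in some diagonalizing basis, and by the inertia law this pair is unchanged under any congruence PᵀMP with P invertible. The matrices below are arbitrary choices.

```python
import numpy as np

def signature(M, tol=1e-9):
    """Return (p, q) for a real symmetric matrix M: the numbers of
    positive and negative eigenvalues, which by Sylvester's inertia law
    are the counts of +/- squares in any diagonalizing basis."""
    eig = np.linalg.eigvalsh(M)
    return int((eig > tol).sum()), int((eig < -tol).sum())

M = np.diag([3.0, 1.0, -2.0, 0.0])     # p = 2, q = 1, rank 3
assert signature(M) == (2, 1)

# The signature is invariant under any change of basis (congruence):
rng = np.random.default_rng(1)
P = rng.standard_normal((4, 4))        # invertible with probability 1
assert signature(P.T @ M @ P) == (2, 1)
```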
14.2 Sesquilinear Forms
In order to accommodate Hermitian forms, we assume that some involutive automorphism,
λ ↦ λ̄, of the field K is given. This automorphism of K satisfies the following properties:
\overline{λ + µ} = λ̄ + µ̄
\overline{λµ} = λ̄ µ̄
\overline{λ̄} = λ.
If the automorphism λ ↦ λ̄ is the identity, then we are in the standard situation of a
bilinear form. When K = C (the complex numbers), we usually pick the automorphism
of C to be conjugation; namely, the map
a + ib ↦ a − ib.
Definition 14.5. Given two vector spaces E and F over a field K with an involutive au-
tomorphism λ ↦ λ̄, a map ϕ : E × F → K is a (right) sesquilinear form iff the following
conditions hold: For all u, u1, u2 ∈ E, all v, v1, v2 ∈ F , and all λ, µ ∈ K, we have
ϕ(u1 + u2, v) = ϕ(u1, v) + ϕ(u2, v)
ϕ(u, v1 + v2) = ϕ(u, v1) + ϕ(u, v2)
ϕ(λu, v) = λϕ(u, v)
ϕ(u, µv) = µ̄ϕ(u, v).
Again, ϕ(0, v) = ϕ(u, 0) = 0. If E = F , then we have
ϕ(λu + µv, λu + µv) = λϕ(u, λu + µv) + µϕ(v, λu + µv)
= λλ̄ϕ(u, u) + λµ̄ϕ(u, v) + µλ̄ϕ(v, u) + µµ̄ϕ(v, v).
If we let λ = µ = 1 and then λ = 1, µ = −1, we get
ϕ(u + v, u + v) = ϕ(u, u) + ϕ(u, v) + ϕ(v, u) + ϕ(v, v)
ϕ(u − v, u − v) = ϕ(u, u) − ϕ(u, v) − ϕ(v, u) + ϕ(v, v),
so by subtraction, we get
2(ϕ(u, v) + ϕ(v, u)) = ϕ(u + v, u + v) − ϕ(u − v, u − v) for u, v ∈ E.
If we replace v by λv (with λ ≠ 0), we get
2(λ̄ϕ(u, v) + λϕ(v, u)) = ϕ(u + λv, u + λv) − ϕ(u − λv, u − λv),
and by combining the above two equations, we get
2(λ − λ̄)ϕ(u, v) = λϕ(u + v, u + v) − λϕ(u − v, u − v) − ϕ(u + λv, u + λv) + ϕ(u − λv, u − λv).
If the automorphism λ ↦ λ̄ is not the identity, then there is some λ ∈ K such that λ − λ̄ ≠ 0,
and if K is not of characteristic 2, then we see that the sesquilinear form ϕ is completely
determined by its restriction to the diagonal (that is, the set of values {ϕ(u, u) | u ∈ E}).
In the special case where K = C, we can pick λ = i, and we get
4ϕ(u, v) = ϕ(u + v, u + v) − ϕ(u − v, u − v) + iϕ(u + iv, u + iv) − iϕ(u − iv, u − iv).
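For K = C with conjugation, this polarization identity can be checked numerically (a sketch, not from the text; the sesquilinear form below, given by an arbitrary matrix M, is linear in its first argument and semilinear in its second, as in Definition 14.5):

```python
import numpy as np

# A sesquilinear form on C^2, linear in the first argument and
# semilinear in the second (M is an arbitrary complex matrix).
M = np.array([[1.0 + 1.0j, 2.0],
              [0.5j, 3.0 - 1.0j]])

def phi(x, y):
    return x @ M @ np.conj(y)

rng = np.random.default_rng(2)
u = rng.standard_normal(2) + 1j * rng.standard_normal(2)
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)

Phi = lambda w: phi(w, w)            # restriction to the diagonal
lhs = 4 * phi(u, v)
rhs = (Phi(u + v) - Phi(u - v)
       + 1j * Phi(u + 1j * v) - 1j * Phi(u - 1j * v))
assert np.allclose(lhs, rhs)         # the polarization identity
```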
Remark: If the automorphism λ ↦ λ̄ is the identity, then in general ϕ is not determined
by its values on the diagonal, unless ϕ is symmetric.
In the sesquilinear setting, it turns out that the following two cases are of interest:
1. We have
ϕ(v, u) = \overline{ϕ(u, v)}  for all u, v ∈ E,
in which case we say that ϕ is Hermitian. In the special case where K = C and the
involutive automorphism is conjugation, we see that ϕ(u, u) ∈ R for all u ∈ E.
2. We have
ϕ(v, u) = −\overline{ϕ(u, v)}  for all u, v ∈ E,
in which case we say that ϕ is skew-Hermitian.
We observed that in characteristic different from 2, a sesquilinear form is determined
by its restriction to the diagonal. For Hermitian and skew-Hermitian forms, we have the
following kind of converse.
Proposition 14.7. If ϕ is a nonzero Hermitian or skew-Hermitian form and if ϕ(u, u) = 0
for all u ∈ E, then K is of characteristic 2 and the automorphism λ ↦ λ̄ is the identity.
Proof. We give the proof in the Hermitian case, the skew-Hermitian case being left as an
exercise. Assume that ϕ is alternating. From the identity
ϕ(u + v, u + v) = ϕ(u, u) + ϕ(u, v) + \overline{ϕ(u, v)} + ϕ(v, v),
we get
\overline{ϕ(u, v)} = −ϕ(u, v) for all u, v ∈ E.
Since ϕ is not the zero form, there exist some nonzero vectors u, v ∈ E such that ϕ(u, v) = 1.
For any λ ∈ K, we have
λϕ(u, v) = ϕ(λu, v) = −\overline{ϕ(λu, v)} = −λ̄ \overline{ϕ(u, v)},
and since ϕ(u, v) = 1, we get
λ = −λ̄ for all λ ∈ K.
For λ = 1, we get 1 = −1, which means that K has characteristic 2. But then
λ̄ = −λ = λ for all λ ∈ K,
so the automorphism λ ↦ λ̄ is the identity.
The definition of the linear maps lϕ and rϕ requires a small twist due to the automorphism
λ ↦ λ̄.
Definition 14.6. Given a vector space E over a field K with an involutive automorphism
λ ↦ λ̄, we define the K-vector space Ē as E with its abelian group structure, but with
scalar multiplication given by
(λ, u) ↦ λ̄u.
Given two K-vector spaces E and F , a semilinear map f : E → F is a function such that
for all u, v ∈ E and all λ ∈ K, we have
f (u + v) = f (u) + f (v)
f (λu) = λ̄f (u).
Because \overline{λ̄} = λ, observe that a function f : E → F is semilinear iff it is a linear map
f : E → F̄ . The K-vector spaces E and Ē are isomorphic, since any basis (ei)i∈I of E is also
a basis of Ē.
The maps lϕ and rϕ are defined as follows:
For every u ∈ E, let lϕ(u) be the linear form in F ∗ defined so that
lϕ(u)(y) = \overline{ϕ(u, y)} for all y ∈ F ,
and for every v ∈ F , let rϕ(v) be the linear form in E∗ defined so that
rϕ(v)(x) = ϕ(x, v) for all x ∈ E.
The reader should check that because we used \overline{ϕ(u, y)} in the definition of lϕ(u)(y), the
function lϕ(u) is indeed a linear form in F ∗. It is also easy to check that lϕ is a linear
map lϕ : Ē → F ∗, and that rϕ is a linear map rϕ : F̄ → E∗ (equivalently, lϕ : E → F ∗ and
rϕ : F → E∗ are semilinear).
The notion of a nondegenerate sesquilinear form is identical to the notion for bilinear
forms. For the convenience of the reader, we repeat the definition.
Definition 14.7. A sesquilinear map ϕ : E × F → K is said to be nondegenerate iff the
following conditions hold:
(1) For every u ∈ E, if ϕ(u, v) = 0 for all v ∈ F , then u = 0, and
(2) For every v ∈ F , if ϕ(u, v) = 0 for all u ∈ E, then v = 0.
Proposition 14.1 translates into the following proposition. The proof is left as an exercise.
Proposition 14.8. Given a sesquilinear map ϕ : E × F → K, the following properties hold:
(a) The map lϕ is injective iff property (1) of Definition 14.7 holds.
(b) The map rϕ is injective iff property (2) of Definition 14.7 holds.
(c) The sesquilinear form ϕ is nondegenerate iff lϕ and rϕ are injective.
(d) If the sesquilinear form ϕ is nondegenerate and if E and F have finite dimensions,
then dim(E) = dim(F ), and lϕ : E → F ∗ and rϕ : F → E∗ are linear isomorphisms.
Propositions 14.2 and 14.3 also generalize to sesquilinear forms. We also have the follow-
ing version of Theorem 14.4, whose proof is left as an exercise.
Theorem 14.9. Given any sesquilinear form ϕ : E × E → K with dim(E) = n, if ϕ is
Hermitian and K does not have characteristic 2, then there is a basis (e1, . . . , en) of E such
that ϕ(ei, ej) = 0, for all i ≠ j.
As in Section 14.1, if E and F are finite-dimensional vector spaces and if (e1, . . . , em) is
a basis of E and (f1, . . . , fn) is a basis of F , then the sesquilinearity of ϕ yields
ϕ(∑_{i=1}^m x_i e_i, ∑_{j=1}^n y_j f_j) = ∑_{i=1}^m ∑_{j=1}^n x_i ϕ(e_i, f_j) ȳ_j.
This shows that ϕ is completely determined by the m × n matrix M = (ϕ(ei, fj)), and in
matrix form, we have
ϕ(x, y) = xᵀM ȳ = y∗Mᵀx,
where x and y are the column vectors associated with (x1, . . . , xm) ∈ Km and (y1, . . . , yn) ∈
Kn, and y∗ = ȳᵀ. We call M the matrix of ϕ with respect to the bases (e1, . . . , em) and
(f1, . . . , fn).
If m = dim(E) = dim(F ) = n, then ϕ is nondegenerate iff M is invertible iff det(M ) ≠ 0.
Observe that if ϕ is a Hermitian form (E = F ) and if K does not have characteristic 2,
then by Theorem 14.9, there is a basis of E with respect to which the matrix M representing
ϕ is a diagonal matrix. If K = C, then these entries are real, and this allows us to classify
completely the Hermitian forms.
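As a quick numerical illustration (not from the text), a Hermitian form on C² given by an arbitrary Hermitian matrix M (so that ϕ(x, y) = xᵀM ȳ satisfies ϕ(v, u) = \overline{ϕ(u, v)}) indeed takes real values on the diagonal:

```python
import numpy as np

# A Hermitian form on C^2: phi(x, y) = x^T M conj(y), where M = M*
# (an arbitrary Hermitian matrix).
M = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, -3.0]])
assert np.allclose(M, M.conj().T)        # M is Hermitian

def phi(x, y):
    return x @ M @ np.conj(y)

rng = np.random.default_rng(3)
u = rng.standard_normal(2) + 1j * rng.standard_normal(2)
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)

assert np.isclose(phi(v, u), np.conj(phi(u, v)))   # Hermitian symmetry
assert abs(phi(u, u).imag) < 1e-12                 # diagonal values are real
```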
Proposition 14.10. Given any Hermitian form ϕ : E × E → C with dim(E) = n, there is
a basis (e1, . . . , en) of E such that
Φ(∑_{i=1}^n x_i e_i) = ∑_{i=1}^p |x_i|² − ∑_{i=p+1}^{p+q} |x_i|²