
MEMOIRS of the American Mathematical Society

Volume 232, Number 1089 (first of 6 numbers), November 2014

The Optimal Version of Hua's Fundamental Theorem of Geometry of Rectangular Matrices

Peter Šemrl

ISSN 0065-9266 (print), ISSN 1947-6221 (online)

American Mathematical Society
Providence, Rhode Island


Library of Congress Cataloging-in-Publication Data

Šemrl, Peter, 1962-
The optimal version of Hua's fundamental theorem of geometry of rectangular matrices / Peter Šemrl.
pages cm. – (Memoirs of the American Mathematical Society, ISSN 0065-9266 ; volume 232, number 1089)
Includes bibliographical references.
ISBN 978-0-8218-9845-1 (alk. paper)
1. Matrices. 2. Geometry, Algebraic. I. Title.
QA188.S45 2014
512.9/434–dc23
2014024653

DOI: http://dx.doi.org/10.1090/memo/1089


© 2014 by the American Mathematical Society. All rights reserved.
Copyright of individual articles may revert to the public domain 28 years after publication. Contact the AMS for copyright status of individual articles.

This publication is indexed in Mathematical Reviews®, Zentralblatt MATH, Science Citation Index®, Science Citation Index™-Expanded, ISI Alerting Services℠, SciSearch®, Research Alert®, CompuMath Citation Index®, Current Contents®/Physical, Chemical & Earth Sciences. This publication is archived in Portico and CLOCKSS.

Printed in the United States of America. The paper used in this book is acid-free and falls within the guidelines established to ensure permanence and durability.

Visit the AMS home page at http://www.ams.org/


Contents

Chapter 1. Introduction
Chapter 2. Notation and basic definitions
Chapter 3. Examples
Chapter 4. Statement of main results
Chapter 5. Proofs
  5.1. Preliminary results
  5.2. Splitting the proof of main results into subcases
  5.3. Square case
  5.4. Degenerate case
  5.5. Non-square case
  5.6. Proofs of corollaries
Acknowledgments
Bibliography


Abstract

Hua's fundamental theorem of geometry of matrices describes the general form of bijective maps on the space of all $m \times n$ matrices over a division ring $\mathbb{D}$ which preserve adjacency in both directions. Motivated by several applications we study a long standing open problem of possible improvements. There are three natural questions. Can we replace the assumption of preserving adjacency in both directions by the weaker assumption of preserving adjacency in one direction only and still get the same conclusion? Can we relax the bijectivity assumption? Can we obtain an analogous result for maps acting between the spaces of rectangular matrices of different sizes? A division ring is said to be EAS if it is not isomorphic to any proper subring. For matrices over EAS division rings we solve all three problems simultaneously, thus obtaining the optimal version of Hua's theorem. In the case of general division rings we get such an optimal result only for square matrices and give examples showing that it cannot be extended to the non-square case.

Received by the editor May 28, 2012, and, in revised form, December 4, 2012.
Article electronically published on February 19, 2014.
DOI: http://dx.doi.org/10.1090/memo/1089
2010 Mathematics Subject Classification. Primary 15A03, 51A50.
Key words and phrases. Rank, adjacency preserving map, matrix over a division ring, geometry of matrices.
The author was supported by a grant from ARRS, Slovenia.
Affiliation at time of publication: Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, SI-1000 Ljubljana, Slovenia; email: peter.semrl@fmf.uni-lj.si.
© 2014 American Mathematical Society


CHAPTER 1

Introduction

Let $\mathbb{D}$ be a division ring and $m, n$ positive integers. By $M_{m\times n}(\mathbb{D})$ we denote the set of all $m \times n$ matrices over $\mathbb{D}$. If $m = n$ we write $M_n(\mathbb{D}) = M_{n\times n}(\mathbb{D})$. For an arbitrary pair $A, B \in M_{m\times n}(\mathbb{D})$ we define $d(A, B) = \operatorname{rank}(A - B)$. We call $d$ the arithmetic distance. Matrices $A, B \in M_{m\times n}(\mathbb{D})$ are said to be adjacent if $d(A, B) = 1$. If $A \in M_{m\times n}(\mathbb{D})$, then ${}^t\!A$ denotes the transpose of $A$.
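
For a quick concrete illustration (an added example, not part of the original text), take $\mathbb{D} = \mathbb{R}$ and the $2 \times 3$ matrices
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \qquad A - B = \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \end{pmatrix}.$$
Then $d(A, B) = \operatorname{rank}(A - B) = 1$, so $A$ and $B$ are adjacent, while $d(A, 0) = \operatorname{rank} A = 2$.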

In the series of papers [4]–[11] Hua initiated the study of bijective maps on various spaces of matrices preserving adjacency in both directions. Let $\mathcal{V}$ be a space of matrices. Recall that a map $\phi : \mathcal{V} \to \mathcal{V}$ preserves adjacency in both directions if for every pair $A, B \in \mathcal{V}$ the matrices $\phi(A)$ and $\phi(B)$ are adjacent if and only if $A$ and $B$ are adjacent. We say that a map $\phi : \mathcal{V} \to \mathcal{V}$ preserves adjacency (in one direction only) if $\phi(A)$ and $\phi(B)$ are adjacent whenever $A, B \in \mathcal{V}$ are adjacent.

Hua's fundamental theorem of the geometry of rectangular matrices (see [25]) states that for every bijective map $\phi : M_{m\times n}(\mathbb{D}) \to M_{m\times n}(\mathbb{D})$, $m, n \ge 2$, preserving adjacency in both directions there exist invertible matrices $T \in M_m(\mathbb{D})$, $S \in M_n(\mathbb{D})$, a matrix $R \in M_{m\times n}(\mathbb{D})$, and an automorphism $\tau$ of the division ring $\mathbb{D}$ such that
$$\phi(A) = TA^{\tau}S + R, \qquad A \in M_{m\times n}(\mathbb{D}). \tag{1}$$
Here, $A^{\tau} = [a_{ij}]^{\tau} = [\tau(a_{ij})]$ is the matrix obtained from $A$ by applying $\tau$ entrywise.

In the square case $m = n$ we have the additional possibility
$$\phi(A) = T\,{}^t\!(A^{\sigma})S + R, \qquad A \in M_n(\mathbb{D}), \tag{2}$$
where $T, S, R$ are matrices in $M_n(\mathbb{D})$ with $T, S$ invertible, and $\sigma : \mathbb{D} \to \mathbb{D}$ is an anti-automorphism. Clearly, the converse statement is true as well, that is, any map of the form (1) or (2) is bijective and preserves adjacency in both directions.
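
A concrete instance (an added illustration, not from the original): take $\mathbb{D} = \mathbb{C}$, $m = n = 2$, $T = S = I$, $R = 0$, and $\tau$ complex conjugation. Then (1) becomes
$$\phi(A) = \overline{A}, \qquad A \in M_2(\mathbb{C}),$$
which is bijective and preserves adjacency in both directions, since $\operatorname{rank}(\overline{A} - \overline{B}) = \operatorname{rank}\overline{A - B} = \operatorname{rank}(A - B)$. Similarly, $A \mapsto {}^t\!A$ is a map of the form (2) with $\sigma$ the identity.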

Composing the map $\phi$ with a translation affects neither the assumptions nor the conclusion of Hua's theorem. Thus, there is no loss of generality in assuming that $\phi(0) = 0$. Then clearly, $R = 0$. It is a remarkable fact that after this harmless normalization the additive (semilinear in the case when $\mathbb{D}$ is a field) character of $\phi$ is not an assumption but a conclusion.

This beautiful result has many applications different from Hua's original motivation related to complex analysis and Siegel's symplectic geometry. Let us mention here two of them that are especially important to us. There is a vast literature on linear preservers (see [16]) dating back to 1897, when Frobenius [3] described the general form of linear maps on square matrices that preserve the determinant. As explained by Marcus [17], most linear preserver problems can be reduced to the problem of characterizing linear maps that preserve matrices of rank one. Of course, linear preservers of rank one preserve adjacency, and therefore most linear preserver results can be deduced from Hua's theorem. When reducing a linear preserver problem to the problem of rank one preservers and then to Hua's theorem, we end up with a result on maps on matrices with no linearity assumption. Therefore it is not surprising that Hua's theorem has already proved to be a useful tool in the new research area concerning general (non-linear) preservers.

It turns out that the fundamental theorem of geometry of Grassmann spaces

[2] follows from Hua’s theorem as well (see [25]). Hence, improving Hua’s theorem
one may expect to be able to also improve Chow’s theorem [2] on the adjacency
preserving maps on Grassmann spaces.

Motivated by applications we will be interested in possible improvements of Hua's theorem. The first natural question is whether the assumption that adjacency is preserved in both directions can be replaced by the weaker assumption that it is preserved in one direction only, with the same conclusion. This question had been open for a long time and was finally answered in the affirmative in [13]. Next, one can ask if it is possible to relax the bijectivity assumption. The first guess might be that Hua's theorem remains valid without the bijectivity assumption with the minor modification that $\tau$ appearing in (1) is a nonzero endomorphism of $\mathbb{D}$ (not necessarily surjective), while $\sigma$ appearing in (2) is a nonzero anti-endomorphism. Quite surprisingly, it turned out that the validity of this conjecture depends on the underlying field. It was proved in [19] that it is true for real matrices and wrong for complex matrices. And the last problem is whether we can describe maps preserving adjacency (in both directions) acting between spaces of matrices of different sizes.

Let us mention here Hua's fundamental theorem for complex hermitian matrices. Denote by $H_n$ the space of all $n \times n$ complex hermitian matrices. The fundamental theorem of geometry of hermitian matrices states that every bijective map $\phi : H_n \to H_n$ preserving adjacency in both directions and satisfying $\phi(0) = 0$ is a congruence transformation, possibly composed with the transposition and possibly multiplied by $-1$. Here, again we can ask for possible improvements in all three above mentioned directions. Huang and the author have answered all three questions simultaneously in the paper [12] by obtaining the following optimal result. Let $m, n$ be integers with $m \ge 2$ and $\phi : H_m \to H_n$ a map preserving adjacency (in one direction only; note that no surjectivity or injectivity is assumed and that $m$ may be different from $n$) and satisfying $\phi(0) = 0$ (this is, of course, a harmless normalization). Then either $\phi$ is the standard embedding of $H_m$ into $H_n$ composed with a congruence transformation on $H_n$, possibly composed with the transposition and possibly multiplied by $-1$; or $\phi$ is of a very special degenerate form, that is, its range is contained in the linear span of some rank one hermitian matrix. This result has already proved to be useful, including in some applications in mathematical physics [23, 24].

It is clear that the problem of finding the optimal version of Hua's fundamental theorem of geometry of rectangular matrices is much more complicated than the corresponding problem for hermitian matrices. Hua's classical results characterize bijective maps from a certain space of matrices onto itself preserving adjacency in both directions. While in the hermitian case we were able to find the optimal result by improving Hua's theorem in all three directions simultaneously (removing the bijectivity assumption, assuming that adjacency is preserved in one direction only, and considering maps between matrix spaces of different sizes), we have seen above that when considering the corresponding problem on the space of rectangular matrices we encounter difficulties already when trying to improve it in only one of the three possible directions. Namely, for some division rings it is possible to omit the bijectivity assumption in Hua's theorem and still get the same conclusion, but not for all. In the third section we will present several new examples showing that this is not the only trouble we have when searching for the optimal version of Hua's theorem for rectangular matrices.

Let $m, n, p, q$ be positive integers with $p \ge m$ and $q \ge n$, $\tau : \mathbb{D} \to \mathbb{D}$ a nonzero endomorphism, and $T \in M_p(\mathbb{D})$ and $S \in M_q(\mathbb{D})$ invertible matrices. Then the map $\phi : M_{m\times n}(\mathbb{D}) \to M_{p\times q}(\mathbb{D})$ defined by
$$\phi(A) = T \begin{pmatrix} A^{\tau} & 0 \\ 0 & 0 \end{pmatrix} S \tag{3}$$
preserves adjacency. Similarly, if $m, n, p, q$ are positive integers with $p \ge n$ and $q \ge m$, $\sigma : \mathbb{D} \to \mathbb{D}$ a nonzero anti-endomorphism, and $T \in M_p(\mathbb{D})$ and $S \in M_q(\mathbb{D})$ invertible matrices, then $\phi : M_{m\times n}(\mathbb{D}) \to M_{p\times q}(\mathbb{D})$ defined by
$$\phi(A) = T \begin{pmatrix} {}^t\!(A^{\sigma}) & 0 \\ 0 & 0 \end{pmatrix} S \tag{4}$$
preserves adjacency as well. We will call any map that is of one of the above two forms a standard adjacency preserving map.
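
The simplest standard maps (a spelled-out special case added here for orientation) are the block embeddings obtained by taking $T$, $S$ to be identity matrices and $\tau$ the identity automorphism:
$$\phi(A) = \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix} \in M_{p\times q}(\mathbb{D}), \qquad A \in M_{m\times n}(\mathbb{D});$$
adjacency is preserved because $\phi(A) - \phi(B)$ has the same rank as $A - B$. Such a map is injective but not surjective whenever $p > m$ or $q > n$.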

Having in mind the optimal version of Hua's theorem for hermitian matrices it is natural to ask whether each adjacency preserving map between $M_{m\times n}(\mathbb{D})$ and $M_{p\times q}(\mathbb{D})$ is either standard or of some rather simple degenerate form that can be easily described. As we shall show in the third section, maps $\phi : M_{m\times n}(\mathbb{D}) \to M_{p\times q}(\mathbb{D})$ which preserve adjacency in one direction only can have a wild behaviour that cannot be easily described. Thus, an additional assumption is required if we want to have a reasonable result. As we want to have an optimal result we do not want to assume that matrices in the domain are of the same size as those in the codomain, and moreover, we do not want to assume that adjacency is preserved in both directions. Standard adjacency preserving maps are not surjective in general. They are injective, but the counterexamples will show that the injectivity assumption is not strong enough to exclude the possibility of a wild behaviour of adjacency preserving maps. Hence, we are looking for a certain weak form of the surjectivity assumption which is not artificial, is satisfied by standard maps, and guarantees that the general form of adjacency preserving maps satisfying this assumption can be easily described. Moreover, such an assumption must be as weak as possible so that our theorem can be considered as the optimal one.

In order to find such an assumption we observe that adjacency preserving maps are contractions with respect to the arithmetic distance $d$. More precisely, assume that $\phi : M_{m\times n}(\mathbb{D}) \to M_{p\times q}(\mathbb{D})$ preserves adjacency, that is, for every pair $A, B \in M_{m\times n}(\mathbb{D})$ we have
$$d(A, B) = 1 \ \Rightarrow \ d(\phi(A), \phi(B)) = 1.$$
Using the facts (see the next section) that $d$ satisfies the triangle inequality and that for every positive integer $r$ and every pair $A, B \in M_{m\times n}(\mathbb{D})$ we have $d(A, B) = r$ if and only if there exists a chain of matrices $A = A_0, A_1, \ldots, A_r = B$ such that the pairs $A_0, A_1$, and $A_1, A_2$, and \ldots, and $A_{r-1}, A_r$ are all adjacent, we easily see that $\phi$ is a contraction, that is,
$$d(\phi(A), \phi(B)) \le d(A, B), \qquad A, B \in M_{m\times n}(\mathbb{D}).$$
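
To spell out the contraction argument (an added elaboration, using the normal form (6) from the next chapter): if $d(A, B) = r$, then $A - B = \sum_{i=1}^{r} C_i$ with each $C_i$ of rank one (write $T(A-B)S = E_{11} + \cdots + E_{rr}$ for suitable invertible $T$, $S$ and take $C_i = T^{-1}E_{ii}S^{-1}$). Setting $A_k = B + C_1 + \cdots + C_k$ gives a chain $B = A_0, A_1, \ldots, A_r = A$ with consecutive members adjacent, so
$$d(\phi(A), \phi(B)) \le \sum_{k=1}^{r} d(\phi(A_{k-1}), \phi(A_k)) = r = d(A, B).$$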

In particular, $d(\phi(A), \phi(B)) \le \min\{m, n\}$ for all $A, B \in M_{m\times n}(\mathbb{D})$. We believe that the most natural candidate for the additional assumption that we are looking for is the condition that there exists at least one pair of matrices $A_0, B_0 \in M_{m\times n}(\mathbb{D})$ such that
$$d(\phi(A_0), \phi(B_0)) = \min\{m, n\}. \tag{5}$$
Of course, standard maps $\phi : M_{m\times n}(\mathbb{D}) \to M_{p\times q}(\mathbb{D})$ satisfy this rather weak assumption.

Our first main result will describe the general form of adjacency preserving maps $\phi : M_n(\mathbb{D}) \to M_{p\times q}(\mathbb{D})$, $n \ge 3$, having the property that there exists at least one pair of matrices $A_0, B_0 \in M_n(\mathbb{D})$ such that $d(\phi(A_0), \phi(B_0)) = n$. It turns out that such maps can have a certain degenerate form. But even if they are not degenerate, they might be far away from being standard. Nevertheless, the description of all possible forms will still be quite simple. In the non-square case, that is, the case when the domain of the map $\phi$ is the space of all $m \times n$ matrices with $m$ possibly different from $n$, we need to restrict to matrices over EAS division rings. For such matrices we will prove the desired optimal result stating that all adjacency preserving maps satisfying (5) are either standard, or of a certain degenerate form.

The next section is devoted to notation and basic definitions. Then we will

present several examples of adjacency preserving maps, some of them quite com-
plicated. Having these examples it will be easy to understand the necessity of the
assumption (5) in the statement of our main results. At the same time these exam-
ples will show that our results are indeed optimal. In particular, we will show that
in the non-square case the behaviour of adjacency preserving maps satisfying (5)
can be very wild in the absence of the EAS assumption on the underlying division
ring. And finally, the last section will be devoted to the proofs. When dealing with
such a classical problem it is clear that the proofs depend a lot on the techniques
developed in the past. However, we will deal with adjacency preserving maps under
much weaker conditions than in any of the previous works on this topic, and also
the description of such maps in this more general setting will differ a lot from the
known results. It is therefore not surprising that many new ideas will be needed to
prove our main theorems.


CHAPTER 2

Notation and basic definitions

Let us recall the definition of the rank of an $m \times n$ matrix $A$ with entries in a division ring $\mathbb{D}$. We will always consider $\mathbb{D}^n$, the set of all $1 \times n$ matrices, as a left vector space over $\mathbb{D}$. Correspondingly, we have the right vector space of all $m \times 1$ matrices ${}^t\mathbb{D}^m$. We first take the row space of $A$, that is, the left vector subspace of $\mathbb{D}^n$ generated by the rows of $A$, and define the row rank of $A$ to be the dimension of this subspace. Similarly, the column rank of $A$ is the dimension of the right vector space generated by the columns of $A$. This space is called the column space of $A$. It turns out that these two ranks are equal for every matrix over $\mathbb{D}$ and this common value is called the rank of a matrix. Assume that $\operatorname{rank} A = r$. Then there exist invertible matrices $T \in M_m(\mathbb{D})$ and $S \in M_n(\mathbb{D})$ such that
$$TAS = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}. \tag{6}$$
Here, $I_r$ is the $r \times r$ identity matrix and the zeroes stand for zero matrices of the appropriate sizes. Let $r$ be a positive integer, $1 \le r \le \min\{m, n\}$. Then we denote by $M^r_{m\times n}(\mathbb{D})$ the set of all matrices $A \in M_{m\times n}(\mathbb{D})$ of rank $r$. Of course, we write shortly $M^r_n(\mathbb{D}) = M^r_{n\times n}(\mathbb{D})$.
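
As a small illustration (added here, not part of the original text), take $\mathbb{D} = \mathbb{R}$ and
$$A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \\ 0 & 0 \end{pmatrix} \in M_{3\times 2}(\mathbb{R}).$$
The row space of $A$ is spanned by $(1, 2)$ and the column space by ${}^t(1, 2, 0)$, so the row rank and the column rank are both $1$, i.e. $A \in M^1_{3\times 2}(\mathbb{R})$, and suitable invertible $T$, $S$ bring $A$ to the normal form (6) with $r = 1$.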

In general the rank of a matrix $A$ need not be equal to the rank of its transpose ${}^t\!A$. However, if $\tau : \mathbb{D} \to \mathbb{D}$ is a nonzero anti-endomorphism of $\mathbb{D}$ (that is, $\tau$ is additive and $\tau(\lambda\mu) = \tau(\mu)\tau(\lambda)$, $\lambda, \mu \in \mathbb{D}$), then $\operatorname{rank} A = \operatorname{rank}{}^t\!(A^{\tau})$. Here, $A^{\tau} = [a_{ij}]^{\tau} = [\tau(a_{ij})]$ is the matrix obtained from $A$ by applying $\tau$ entrywise.

Rank satisfies the triangle inequality, that is, $\operatorname{rank}(A + B) \le \operatorname{rank} A + \operatorname{rank} B$ for every pair $A, B \in M_{m\times n}(\mathbb{D})$ [14, p. 46, Exercise 2]. Therefore, the set of matrices $M_{m\times n}(\mathbb{D})$ equipped with the arithmetic distance $d$ defined by
$$d(A, B) = \operatorname{rank}(A - B), \qquad A, B \in M_{m\times n}(\mathbb{D}),$$
is a metric space. Matrices $A, B \in M_{m\times n}(\mathbb{D})$ are said to be adjacent if $d(A, B) = 1$.

Let $a \in \mathbb{D}^n$ and ${}^tb \in {}^t\mathbb{D}^m$ be any nonzero vectors. Then ${}^tba = ({}^tb)a$ is a matrix of rank one. Every matrix of rank one can be written in this form. It is easy to verify that two rank one matrices ${}^tba$ and ${}^tdc$, ${}^tba \ne {}^tdc$, are adjacent if and only if $a$ and $c$ are linearly dependent or ${}^tb$ and ${}^td$ are linearly dependent. As usual, the symbol $E_{ij}$, $1 \le i \le m$, $1 \le j \le n$, will stand for the matrix having all entries zero except the $(i, j)$-entry which is equal to $1$.
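
For example (an added illustration, assuming $m, n \ge 2$): with $a = (1, 0, \ldots, 0)$, $c = (0, 1, 0, \ldots, 0) \in \mathbb{D}^n$ and ${}^tb = {}^td = {}^t(1, 0, \ldots, 0) \in {}^t\mathbb{D}^m$ we get ${}^tba = E_{11}$ and ${}^tdc = E_{12}$; here ${}^tb$ and ${}^td$ coincide, and indeed $E_{11} - E_{12}$ has rank one, so $E_{11}$ and $E_{12}$ are adjacent. On the other hand, $E_{11}$ and $E_{22}$ are not adjacent, since their difference has rank two.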

For a nonzero $x \in \mathbb{D}^n$ and a nonzero ${}^ty \in {}^t\mathbb{D}^m$ we denote by $R(x)$ and $L({}^ty)$ the subsets of $M_{m\times n}(\mathbb{D})$ defined by
$$R(x) = \{{}^tux : {}^tu \in {}^t\mathbb{D}^m\}$$
and
$$L({}^ty) = \{{}^tyv : v \in \mathbb{D}^n\}.$$
Clearly, all the elements of these two sets are of rank at most one. Moreover, any two distinct elements from $R(x)$ are adjacent. And the same is true for $L({}^ty)$. Let us just mention here that a subset $\mathcal{S} \subset M_{m\times n}(\mathbb{D})$ is called an adjacent set if any two distinct elements of $\mathcal{S}$ are adjacent. These sets were of basic importance in the classical approach to Hua's fundamental theorem of geometry of rectangular matrices. They were studied in full detail in Wan's book [25]. In particular it is shown there that $R(x)$ and $L({}^ty)$ are exactly the two types of maximal adjacent sets of matrices containing $0$.

The elements of the standard basis of the left vector space $\mathbb{D}^n$ (the right vector space ${}^t\mathbb{D}^m$) will be denoted by $e_1, \ldots, e_n$ (${}^tf_1, \ldots, {}^tf_m$). Hence, $E_{ij} = {}^tf_i e_j$, $1 \le i \le m$, $1 \le j \le n$. Later on we will deal simultaneously with rectangular matrices of different sizes, say with matrices from $M_{m\times n}(\mathbb{D})$ and $M_{p\times q}(\mathbb{D})$. The same symbol $E_{ij}$ will be used to denote the matrix unit in $M_{m\times n}(\mathbb{D})$ as well as the matrix unit in $M_{p\times q}(\mathbb{D})$.

As always we will identify $m \times n$ matrices with linear transformations mapping $\mathbb{D}^m$ into $\mathbb{D}^n$. Namely, each $m \times n$ matrix $A$ gives rise to a linear operator defined by $x \mapsto xA$, $x \in \mathbb{D}^m$. The rank of the matrix $A$ is equal to the dimension of the image $\operatorname{Im} A$ of the corresponding operator $A$. The kernel of an operator $A$ is defined as $\operatorname{Ker} A = \{x \in \mathbb{D}^m : xA = 0\}$. It is the set of all vectors $x \in \mathbb{D}^m$ satisfying $x({}^ty) = 0$ for every ${}^ty$ from the column space of $A$. We have $m = \operatorname{rank} A + \dim \operatorname{Ker} A$.

We will call a division ring $\mathbb{D}$ an EAS division ring if every nonzero endomorphism $\tau : \mathbb{D} \to \mathbb{D}$ is automatically surjective. The field of real numbers and the field of rational numbers are well-known to be EAS. Obviously, every finite field is EAS. The same is true for the division ring of quaternions (see, for example, [20]), while the complex field is not an EAS field [15]. Let $\mathbb{D}$ be an EAS division ring. It is then easy to verify that each nonzero anti-endomorphism of $\mathbb{D}$ is bijective as well (just note that the square of a nonzero anti-endomorphism is a nonzero endomorphism).

We denote by $\mathcal{P}_n(\mathbb{D}) \subset M_n(\mathbb{D})$ the set of all $n \times n$ idempotent matrices, $\mathcal{P}_n(\mathbb{D}) = \{P \in M_n(\mathbb{D}) : P^2 = P\}$. The symbol $\mathcal{P}^1_n(\mathbb{D})$ stands for the subset of all rank one idempotent matrices. Let $a \in \mathbb{D}^n$ and ${}^tb \in {}^t\mathbb{D}^n$ be any nonzero vectors. Then the rank one matrix ${}^tba$ is an idempotent if and only if $a({}^tb) = a\,{}^tb = 1$.
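
For instance (an added example): with $a = (1, 1, 0, \ldots, 0)$ and ${}^tb = {}^t(1, 0, \ldots, 0)$ we have $a\,{}^tb = 1$ and ${}^tba = E_{11} + E_{12}$, which is indeed idempotent, $(E_{11} + E_{12})^2 = E_{11} + E_{12}$; whereas $E_{12} = {}^tf_1 e_2$ satisfies $e_2\,{}^tf_1 = 0$ and $E_{12}^2 = 0$, so it is not an idempotent.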

If we identify an idempotent $P \in \mathcal{P}_n(\mathbb{D})$ with a linear transformation $P : \mathbb{D}^n \to \mathbb{D}^n$, then $\mathbb{D}^n = \operatorname{Im} P \oplus \operatorname{Ker} P$ and $xP = x$ for every $x \in \operatorname{Im} P$. Indeed, all we need to verify this statement is to observe that $(xP)P = xP$ and $(x - xP)P = 0$ for every $x \in \mathbb{D}^n$, and $x = xP + (x - xP)$, $x \in \mathbb{D}^n$. Thus, if we choose a basis of $\mathbb{D}^n$ as a union of a basis of $\operatorname{Im} P$ and a basis of $\operatorname{Ker} P$, then the corresponding matrix representation of $P$ is a diagonal matrix all of whose diagonal entries are either $0$ or $1$. In other words, each idempotent matrix is similar to a diagonal matrix with zeros and ones on the diagonal. Of course, $\dim \operatorname{Im} P = \operatorname{rank} P$. Thus, the number of $1$'s on the diagonal equals the rank of $P$.

It is well-known that $\mathcal{P}_n(\mathbb{D})$ is a partially ordered set (poset) with the partial order defined by $P \le Q$ if $PQ = QP = P$. A map $\phi : \mathcal{P}_n(\mathbb{D}) \to \mathcal{P}_n(\mathbb{D})$ is order preserving if for every pair $P, Q \in \mathcal{P}_n(\mathbb{D})$ we have $P \le Q \Rightarrow \phi(P) \le \phi(Q)$.
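
A small (added) example: for $P = E_{11}$ and $Q = E_{11} + E_{22}$ we have $PQ = QP = E_{11} = P$, so $P \le Q$; on the other hand, $E_{11}$ and $E_{22}$ are incomparable, since $E_{11}E_{22} = 0$ is equal to neither of them.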

We shall need the following fact that is well-known for idempotent matrices over fields and can also be generalized to idempotent matrices over division rings [14, p. 62, Exercise 1]. Assume that $P_1, \ldots, P_k \in \mathcal{P}_n(\mathbb{D})$ are pairwise orthogonal, that is, $P_m P_j = 0$ whenever $m \ne j$, $1 \le m, j \le k$. Denote by $r_i$ the rank of $P_i$. Then there exists an invertible matrix $T \in M_n(\mathbb{D})$ such that for each $i$, $1 \le i \le k$, we have
$$T P_i T^{-1} = \operatorname{diag}(0, \ldots, 0, 1, \ldots, 1, 0, \ldots, 0),$$
where $\operatorname{diag}(0, \ldots, 0, 1, \ldots, 1, 0, \ldots, 0)$ is the diagonal matrix in which all the diagonal entries are zero except those in the $(r_1 + \ldots + r_{i-1} + 1)$-st to $(r_1 + \ldots + r_i)$-th rows.

Let $P, Q \in \mathcal{P}_n(\mathbb{D})$. If $P \le Q$ then clearly, $Q - P$ is an idempotent orthogonal to $P$. Thus, by the previous paragraph, we have $P \le Q$, $P \ne 0$, $Q \ne I$, and $P \ne Q$ if and only if there exist an invertible $T \in M_n(\mathbb{D})$ and positive integers $r_1, r_2$ such that
$$T P T^{-1} = \begin{pmatrix} I_{r_1} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad \text{and} \quad T Q T^{-1} = \begin{pmatrix} I_{r_1} & 0 & 0 \\ 0 & I_{r_2} & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
and $0 < r_1 < r_1 + r_2 < n$. In particular, if we identify matrices with linear operators, then the image of $P$ is a subspace of the image of $Q$, while the kernel of $Q$ is a subspace of the kernel of $P$.

Let us briefly explain why idempotents are important when studying adjacency preserving maps. Assume that $P \in \mathcal{P}_n(\mathbb{D}) \subset M_n(\mathbb{D})$ is of rank $r$. Then clearly, $d(0, I) = n = r + (n - r) = d(0, P) + d(P, I)$. But we shall see later that the converse is also true, that is, any matrix $A \in M_n(\mathbb{D})$ satisfying
$$d(0, I) = n = d(0, A) + d(A, I)$$
is an idempotent. In the language of geometry, the set of all midpoints between $0$ and $I$ is exactly the set of all idempotents. Applying the fact that adjacency preserving maps are contractions with respect to the arithmetic distance, one can conclude (the details will be given later) that every adjacency preserving map on $M_n(\mathbb{D})$ that maps $0$ and $I$ into themselves maps idempotents into idempotents. Moreover, as the set of all midpoints between $0$ and an idempotent $P$ turns out to be exactly the set of all idempotents $Q$ satisfying $Q \le P$, we can further show that such maps preserve the above defined partial order on $\mathcal{P}_n(\mathbb{D})$.
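
A quick (added) numerical check for $n = 2$ over $\mathbb{D} = \mathbb{R}$: for the idempotent $P = E_{11}$ we get $d(0, P) = 1$ and $d(P, I) = \operatorname{rank}(I - E_{11}) = \operatorname{rank} E_{22} = 1$, so $d(0, P) + d(P, I) = 2 = d(0, I)$; for the non-idempotent $A = 2E_{11}$ we get $d(0, A) = 1$ but $d(A, I) = \operatorname{rank}(I - 2E_{11}) = 2$, so $A$ is not a midpoint between $0$ and $I$.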

For a nonzero $x \in \mathbb{D}^n$ and a nonzero ${}^ty \in {}^t\mathbb{D}^n$ we denote by $PR(x)$ and $PL({}^ty)$ the subsets of $\mathcal{P}_n(\mathbb{D})$ defined by
$$PR(x) = \{{}^tux : {}^tu \in {}^t\mathbb{D}^n, \ x\,{}^tu = 1\}$$
and
$$PL({}^ty) = \{{}^tyv : v \in \mathbb{D}^n, \ v\,{}^ty = 1\}.$$

Clearly, all the elements of these two sets are of rank one. Further, if ${}^tux, {}^twx \in PR(x)$ for some ${}^tu, {}^tw \in {}^t\mathbb{D}^n$, then either ${}^tu = {}^tw$, or ${}^tu$ and ${}^tw$ are linearly independent. Moreover, if nonzero vectors $x_1$ and $x_2$ are linearly dependent then $PR(x_1) = PR(x_2)$.

By $\mathcal{P}(\mathbb{D}^n)$ and $\mathcal{P}({}^t\mathbb{D}^n)$ we denote the projective spaces over the left vector space $\mathbb{D}^n$ and the right vector space ${}^t\mathbb{D}^n$, respectively,
$$\mathcal{P}(\mathbb{D}^n) = \{[x] : x \in \mathbb{D}^n \setminus \{0\}\} \quad \text{and} \quad \mathcal{P}({}^t\mathbb{D}^n) = \{[{}^ty] : {}^ty \in {}^t\mathbb{D}^n \setminus \{0\}\}.$$
Here, $[x]$ and $[{}^ty]$ denote the one-dimensional left vector subspace of $\mathbb{D}^n$ generated by $x$ and the one-dimensional right vector subspace of ${}^t\mathbb{D}^n$ generated by ${}^ty$, respectively.


CHAPTER 3

Examples

Let us first emphasize that all the examples presented in this section are new.

There is only one exception. Namely, our first example is just a slight modification
of [19, Theorem 2.4].

Example 3.1. Assume that $\mathbb{D}$ is a non-EAS division ring. Let $\tau$ be a nonzero nonsurjective endomorphism of $\mathbb{D}$. Choose $c \in \mathbb{D}$ that is not contained in the range of $\tau$ and define a map $\phi : M_{m\times n}(\mathbb{D}) \to M_{m\times n}(\mathbb{D})$ by
$$\phi \begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m-2,1} & a_{m-2,2} & \ldots & a_{m-2,n} \\ a_{m-1,1} & a_{m-1,2} & \ldots & a_{m-1,n} \\ a_{m1} & a_{m2} & \ldots & a_{mn} \end{pmatrix} = \begin{pmatrix} \tau(a_{11}) & \tau(a_{12}) & \ldots & \tau(a_{1n}) \\ \vdots & \vdots & \ddots & \vdots \\ \tau(a_{m-2,1}) & \tau(a_{m-2,2}) & \ldots & \tau(a_{m-2,n}) \\ \tau(a_{m-1,1}) + c\tau(a_{m1}) & \tau(a_{m-1,2}) + c\tau(a_{m2}) & \ldots & \tau(a_{m-1,n}) + c\tau(a_{mn}) \\ 0 & 0 & \ldots & 0 \end{pmatrix}.$$
Then $\phi$ preserves adjacency.

Indeed, the map $\phi$ is additive and injective. To verify injectivity assume that $\phi([a_{ij}]) = 0$. Then clearly, $a_{ij} = 0$ whenever $1 \le i \le m - 2$ and $1 \le j \le n$. From $\tau(a_{m-1,1}) + c\tau(a_{m1}) = 0$ we conclude that $\tau(a_{m1}) = 0$, since otherwise $c$ would belong to the range of $\tau$. Thus, $a_{m1} = 0$, and consequently, $a_{m-1,1} = 0$. Similarly we see that for every $j$, $1 \le j \le n$, we have $a_{ij} = 0$ whenever $i = m - 1$ or $i = m$. Thus, in order to verify that it preserves adjacency it is enough to see that $\phi(A)$ is of rank at most one for every $A$ of rank one. The verification of this statement is straightforward. And, of course, we have $\phi(0) = 0$.
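
A concrete (added) instance showing that $\phi$ from Example 3.1 does not preserve adjacency in both directions: take $m = 3$, $n = 2$ and $A = E_{21} + E_{32}$, a rank two matrix with nonzero entries only in the last two rows. Then
$$\phi(A) = \begin{pmatrix} 0 & 0 \\ \tau(1) + c\tau(0) & \tau(0) + c\tau(1) \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 1 & c \\ 0 & 0 \end{pmatrix}$$
has rank one, so $d(0, A) = 2$ while $d(\phi(0), \phi(A)) = 1$.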

Several remarks should be added here. The map $\phi$ is a composition of two maps: we have first applied the endomorphism $\tau$ entrywise and then we have replaced the last row by zero and the $(m-1)$-st row by the sum of the $(m-1)$-st row and the $m$-th row multiplied by $c$ on the left. We could do the same with columns instead of rows. In that case, we need to multiply by $c$ on the right side. Of course, we could make the example more complicated by adding scalar multiples of the $m$-th row to other rows as well. Also observe that the map $\phi$ preserves adjacency, but it does not preserve adjacency in both directions. Namely, if $A$ is a nonzero matrix having nonzero entries only in the last two rows, then $A$ may have rank two, but $\phi(A)$ is of rank one and thus adjacent to $0$. Over some division rings it is possible to modify the above example in such a way that we get a map preserving adjacency in both directions. To see this we will now consider complex matrices.

Example 3.2. It is known [15] that there exist an endomorphism $\tau : \mathbb{C} \to \mathbb{C}$ and complex numbers $c, d \in \mathbb{C}$ such that $c, d$ are algebraically independent over $\tau(\mathbb{C})$, that is, if $p(c, d) = 0$ for some polynomial $p \in \tau(\mathbb{C})[X, Y]$, then $p = 0$. The map $\phi : M_{m\times n}(\mathbb{C}) \to M_{m\times n}(\mathbb{C})$ defined by
$$\phi \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m-2,1} & \ldots & a_{m-2,n} \\ a_{m-1,1} & \ldots & a_{m-1,n} \\ a_{m1} & \ldots & a_{mn} \end{pmatrix} = \begin{pmatrix} \tau(a_{11}) & \ldots & \tau(a_{1n}) \\ \vdots & \ddots & \vdots \\ \tau(a_{m-2,1}) + d\tau(a_{m1}) & \ldots & \tau(a_{m-2,n}) + d\tau(a_{mn}) \\ \tau(a_{m-1,1}) + c\tau(a_{m1}) & \ldots & \tau(a_{m-1,n}) + c\tau(a_{mn}) \\ 0 & \ldots & 0 \end{pmatrix} \tag{7}$$
preserves adjacency in both directions.

Note that we obtain $\phi(A)$ from $A$ by first applying $\tau$ entrywise, then multiplying the last row by $c$ and $d$, respectively, adding these scalar multiples of the last row to the $(m-1)$-st row and the $(m-2)$-nd row, respectively, and finally replacing the last row by the zero row. As before we see that $\phi$ is an injective additive map. Thus, in order to see that it preserves adjacency in both directions it is enough to show that for every $A \in M_{m\times n}(\mathbb{C})$ we have $\operatorname{rank} A = 1 \iff \operatorname{rank}\phi(A) = 1$. And again, as before we have $\operatorname{rank} A = 1 \Rightarrow \operatorname{rank}\phi(A) = 1$. So, assume that $\operatorname{rank}\phi(A) = 1$. Then clearly, $A \ne 0$. We have to check that the determinants of all $2 \times 2$ submatrices of $A = [a_{ij}]$ are zero. We know that the determinants of all $2 \times 2$ submatrices of $\phi(A)$ are zero. Take the $2 \times 2$ submatrices corresponding to the first two columns, and the first two rows, the first and the $(m-2)$-nd row, and the $(m-2)$-nd row and the $(m-1)$-st row, and calculate their determinants. Applying the fact that $\tau$ is an endomorphism we get
$$\tau(a_{11}a_{22} - a_{21}a_{12}) = 0,$$
$$\tau(a_{11}a_{m-2,2} - a_{m-2,1}a_{12}) + d\,\tau(a_{11}a_{m2} - a_{m1}a_{12}) = 0,$$
and
$$\tau(a_{m-2,1}a_{m-1,2} - a_{m-1,1}a_{m-2,2}) + d\,\tau(a_{m1}a_{m-1,2} - a_{m-1,1}a_{m2}) + c\,\tau(a_{m-2,1}a_{m2} - a_{m-2,2}a_{m1}) = 0,$$
and since $c, d$ are algebraically independent the determinants of the following $2 \times 2$ submatrices of $A$ must be zero:
$$0 = a_{11}a_{22} - a_{21}a_{12} = a_{11}a_{m-2,2} - a_{m-2,1}a_{12} = a_{11}a_{m2} - a_{m1}a_{12}$$
$$= a_{m-2,1}a_{m-1,2} - a_{m-1,1}a_{m-2,2} = a_{m1}a_{m-1,2} - a_{m-1,1}a_{m2} = a_{m-2,1}a_{m2} - a_{m-2,2}a_{m1}.$$
It is now easy to verify that all $2 \times 2$ submatrices of $A$ are singular, as desired.
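
To spell out the algebraic-independence step for one of these identities (an added elaboration): the second identity says $p(c, d) = 0$ for the polynomial
$$p(X, Y) = \tau(a_{11}a_{m-2,2} - a_{m-2,1}a_{12}) + \tau(a_{11}a_{m2} - a_{m1}a_{12})\,Y \in \tau(\mathbb{C})[X, Y],$$
so $p = 0$, i.e. both coefficients vanish; since a field endomorphism is injective, this forces $a_{11}a_{m-2,2} - a_{m-2,1}a_{12} = a_{11}a_{m2} - a_{m1}a_{12} = 0$. The third identity is handled in the same way, now with three coefficients.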


Let $p, q$ be integers, $2 \le p \le m$, $2 \le q \le n$. Using the same idea several times, and then using it again with columns instead of rows, one can now construct maps $\phi : M_{m\times n}(\mathbb{C}) \to M_{m\times n}(\mathbb{C})$ which preserve adjacency in both directions such that
$$\phi \begin{pmatrix} B & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} B^{\tau} & 0 \\ 0 & 0 \end{pmatrix}$$
for every $B \in M_{p\times q}(\mathbb{C})$, and
$$\phi(A) = \begin{pmatrix} * & 0 \\ 0 & 0 \end{pmatrix}$$
for every $A \in M_{m\times n}(\mathbb{C})$. Here, $*$ stands for a $p \times q$ matrix.

Assume next that $\mathbb{D}$ is an infinite division ring and let us construct an exotic adjacency preserving map from $M_7(\mathbb{D})$ to $M_p(\mathbb{D})$, where $p \ge 3$.

Example 3.3. Write $\mathbb{D}\setminus\{0\}$ as a disjoint union $\mathbb{D}\setminus\{0\} = \mathcal{M} \cup \mathcal{N} \cup \mathcal{L}$, where all the sets $\mathbb{D}$, $\mathcal{M}$, $\mathcal{N}$, $\mathcal{L}$ are of the same cardinality. Choose subsets $\mathcal{V}, \mathcal{W} \subset M^2_7(\mathbb{D})$ such that $A, B$ are not adjacent whenever $A \in \mathcal{V}$ and $B \in \mathcal{W}$. Let $\varphi_1 : M^1_7(\mathbb{D}) \to \mathcal{M}$, $\varphi_2 : M^2_7(\mathbb{D}) \setminus (\mathcal{V} \cup \mathcal{W}) \to \mathcal{N}$, $\varphi_3 : M^3_7(\mathbb{D}) \to \mathcal{L}$, and $\varphi_j : M^j_7(\mathbb{D}) \to \mathbb{D} \setminus \{0\}$, $j = 4, 5, 6, 7$, be injective maps such that the ranges of $\varphi_5$ and $\varphi_7$ are disjoint. Let $\varphi_8 : \mathcal{V} \to \mathbb{D} \setminus \{0\}$ and $\varphi_9 : \mathcal{W} \to \mathbb{D} \setminus \{0\}$ be injective maps. The map $\phi : M_7(\mathbb{D}) \to M_p(\mathbb{D})$ defined by $\phi(0) = 0$,
$$\phi(A) = \varphi_1(A)E_{11}, \qquad A \in M^1_7(\mathbb{D}),$$
$$\phi(A) = \varphi_2(A)E_{11}, \qquad A \in M^2_7(\mathbb{D}) \setminus (\mathcal{V} \cup \mathcal{W}),$$
$$\phi(A) = \varphi_8(A)E_{12}, \qquad A \in \mathcal{V},$$
$$\phi(A) = \varphi_9(A)E_{21}, \qquad A \in \mathcal{W},$$
$$\phi(A) = \varphi_3(A)E_{11}, \qquad A \in M^3_7(\mathbb{D}),$$
$$\phi(A) = \varphi_4(A)E_{11} + E_{12}, \qquad A \in M^4_7(\mathbb{D}),$$
$$\phi(A) = E_{12} + E_{21} + \varphi_5(A)E_{31}, \qquad A \in M^5_7(\mathbb{D}),$$
$$\phi(A) = E_{12} + E_{21} + E_{33} + \varphi_6(A)E_{32}, \qquad A \in M^6_7(\mathbb{D}),$$
and
$$\phi(A) = E_{12} + E_{21} + \varphi_7(A)E_{31}, \qquad A \in M^7_7(\mathbb{D}),$$
preserves adjacency.

Indeed, all we need to observe is that if matrices $A$ and $B$ are adjacent, then either they are of the same rank, or $\operatorname{rank} A = \operatorname{rank} B \pm 1$. Moreover, $\phi$ is injective. It is clear that using similar ideas one can construct further examples of adjacency preserving maps between matrix spaces with a rather wild behaviour. Moreover, a compositum of adjacency preserving maps is again an adjacency preserving map. Thus, combining the examples obtained so far we can arrive at adjacency preserving maps whose general form cannot be described easily.

Therefore we will (as already explained in the Introduction) restrict our attention to adjacency preserving maps $\phi : M_{m\times n}(\mathbb{D}) \to M_{p\times q}(\mathbb{D})$ satisfying the additional assumption that there exist $A_0, B_0 \in M_{m\times n}(\mathbb{D})$ satisfying $d(\phi(A_0), \phi(B_0)) = \min\{m, n\}$ (then we automatically have $\min\{p, q\} \ge \min\{m, n\}$). Clearly, standard maps satisfy this additional condition. We continue with non-standard examples of such maps.

The notion of a degenerate adjacency preserving map is rather complicated. We will therefore first restrict to the special case when $m \ge n$, $\phi(0) = 0$, and $\phi(E_{11} + \ldots + E_{nn}) = E_{11} + \ldots + E_{nn}$ (note that the matrix units $E_{11}, \ldots, E_{nn}$ on the left hand side of this equation belong to $M_{m\times n}(\mathbb{D})$, while $E_{11}, \ldots, E_{nn}$ on the right hand side stand for the first $n$ matrix units on the main diagonal of $M_{p\times q}(\mathbb{D})$). Later on we will see that the general case can always be reduced to this special case.

We say that a point $c$ in a metric space $M$ with distance function $d$ lies in between points $a, b \in M$ if
$$d(a, b) = d(a, c) + d(c, b).$$
Obviously, if a map $f : M_1 \to M_2$ between two metric spaces with distance functions $d_1$ and $d_2$, respectively, is a contraction, that is, $d_2(f(x), f(y)) \le d_1(x, y)$ for all $x, y \in M_1$, and if $d_2(f(a), f(b)) = d_1(a, b)$ for a certain pair of points $a, b \in M_1$, then $f$ maps the set of points that lie in between $a$ and $b$ into the set of points that lie in between $f(a)$ and $f(b)$.

Later on (see Lemma 5.1) we will prove that in $M_{m\times n}(\mathbb{D})$ a matrix $R$ lies in between $0$ and $E_{11} + \ldots + E_{nn} \in M_{m\times n}(\mathbb{D})$ with respect to the arithmetic distance if and only if
$$R = \begin{pmatrix} Q \\ 0 \end{pmatrix},$$
where $Q$ is an $n \times n$ idempotent matrix. And a matrix $S$ in $M_{p\times q}(\mathbb{D})$ lies in between $0$ and $E_{11} + \ldots + E_{nn} \in M_{p\times q}(\mathbb{D})$ if and only if
$$S = \begin{pmatrix} P & 0 \\ 0 & 0 \end{pmatrix},$$
where $P$ is an $n \times n$ idempotent matrix.

Assume that $\phi : M_{m\times n}(\mathbb{D}) \to M_{p\times q}(\mathbb{D})$ is an adjacency preserving map satisfying $\phi(0) = 0$ and $\phi(E_{11} + \ldots + E_{nn}) = E_{11} + \ldots + E_{nn}$. By the above remarks, $\phi$ maps the set $\mathcal{Q}$ of all matrices of the form
$$R = \begin{pmatrix} Q \\ 0 \end{pmatrix}, \tag{8}$$
where $Q$ is an $n \times n$ idempotent matrix, into the set $\mathcal{P}$ of all matrices
$$R = \begin{pmatrix} P & 0 \\ 0 & 0 \end{pmatrix}$$
with $P$ being an $n \times n$ idempotent matrix. Such a map will be called a degenerate adjacency preserving map if its restriction to $\mathcal{Q}$ is of a special, rather simple form.

Example 3.4. We assume that $m, p, q \ge n \ge 3$ and $\mathbb{D}$ is an infinite division ring. We define $\Delta : \mathcal{Q} \to \mathcal{P}$ in the following way. Set $\Delta(0) = 0$. Let $j$ be an integer, $1 \le j < n$, and let $\varphi_j$ be a map from the set of all $n \times n$ idempotent matrices of rank $j$ into $\mathbb{D}$ with the property that $\varphi_j(Q_1) \ne \varphi_j(Q_2)$ whenever $Q_1$ and $Q_2$ are adjacent idempotent $n \times n$ matrices of rank $j$. Note that $R$ in (8) is of rank $j$ if and only if $Q$ is of rank $j$. For every rank one $R \in \mathcal{Q}$ as in (8) we define
$$\Delta(R) = E_{11} + \varphi_1(Q)E_{12},$$
for every rank two $R \in \mathcal{Q}$ as in (8) we define
$$\Delta(R) = E_{11} + E_{22} + \varphi_2(Q)E_{32},$$
and for every rank three $R \in \mathcal{Q}$ as in (8) we define
$$\Delta(R) = E_{11} + E_{22} + E_{33} + \varphi_3(Q)E_{34}.$$
We continue in the same way. It is easy to guess how $\Delta$ acts on matrices from $\mathcal{Q}$ of rank $4, \ldots, n-2$. If $n$ is even, then for every rank $n-1$ matrix $R \in \mathcal{Q}$ as in (8) we have
$$\Delta(R) = E_{11} + \ldots + E_{n-1,n-1} + \varphi_{n-1}(Q)E_{n-1,n},$$
and if $n$ is odd, then for every rank $n-1$ matrix $R \in \mathcal{Q}$ as in (8) we have
$$\Delta(R) = E_{11} + \ldots + E_{n-1,n-1} + \varphi_{n-1}(Q)E_{n,n-1},$$
and finally,
$$\Delta(E_{11} + \ldots + E_{nn}) = E_{11} + \ldots + E_{nn}.$$
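
To fix ideas (an added illustration), for $n = 4$ the map $\Delta$ of Example 3.4 acts by
$$\Delta(R) = \begin{cases} E_{11} + \varphi_1(Q)E_{12}, & \operatorname{rank} R = 1,\\ E_{11} + E_{22} + \varphi_2(Q)E_{32}, & \operatorname{rank} R = 2,\\ E_{11} + E_{22} + E_{33} + \varphi_3(Q)E_{34}, & \operatorname{rank} R = 3,\\ E_{11} + E_{22} + E_{33} + E_{44}, & \operatorname{rank} R = 4,\end{cases}$$
where for rank $3 = n - 1$ the extra entry sits in position $(n-1, n)$ because $n$ is even.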

We can slightly modify the above construction. We define $\Delta$ in the following way.

Example 3.5. For every rank one $R \in \mathcal{Q}$ as in (8) we define
$$\Delta(R) = E_{11} + \varphi_1(Q)E_{21},$$
for every rank two $R \in \mathcal{Q}$ as in (8) we define
$$\Delta(R) = E_{11} + E_{22} + \varphi_2(Q)E_{23},$$
and for every rank three $R \in \mathcal{Q}$ as in (8) we define
$$\Delta(R) = E_{11} + E_{22} + E_{33} + \varphi_3(Q)E_{43},$$
and then we continue as above.

Definition 3.6. Every adjacency preserving map $\phi : M_{m\times n}(\mathbb{D}) \to M_{p\times q}(\mathbb{D})$ such that its restriction to $\mathcal{Q}$ is equal to $\Delta$ defined as in Example 3.4 or as in Example 3.5 will be called a degenerate adjacency preserving map. Further, assume that $\phi$ is such a map and let $T_1 \in M_m(\mathbb{D})$, $S_1 \in M_n(\mathbb{D})$, $T_2 \in M_p(\mathbb{D})$, and $S_2 \in M_q(\mathbb{D})$ be invertible matrices. Then the map
$$A \mapsto T_2\phi(T_1AS_1)S_2, \qquad A \in M_{m\times n}(\mathbb{D}),$$
will also be called a degenerate adjacency preserving map.

Let $\phi : M_{m\times n}(\mathbb{D}) \to M_{p\times q}(\mathbb{D})$ be an adjacency preserving map such that its restriction to $\mathcal{Q}$ is equal to $\Delta$, where $\Delta$ is of the first type above. Take any matrix ${}^txy \in M_{m\times n}(\mathbb{D})$ of rank one. Then $\phi({}^txy)$ is adjacent to $\phi(0) = 0$, and therefore, $\phi({}^txy)$ is of rank one. We can find two different vectors ${}^tu, {}^tv \in {}^t\mathbb{D}^n$ such that $y\,{}^tu = y\,{}^tv = 1$ and
$$\begin{pmatrix} {}^tuy \\ 0 \end{pmatrix} \ne {}^txy \quad \text{and} \quad \begin{pmatrix} {}^tvy \\ 0 \end{pmatrix} \ne {}^txy.$$
Then $\phi({}^txy)$ is a rank one matrix adjacent to
$$\phi\begin{pmatrix} {}^tuy \\ 0 \end{pmatrix} = E_{11} + \lambda E_{12}$$
as well as to
$$\phi\begin{pmatrix} {}^tvy \\ 0 \end{pmatrix} = E_{11} + \mu E_{12}.$$
Here, $\lambda, \mu$ are scalars satisfying $\lambda \ne \mu$. It follows easily that
$$\phi({}^txy) = {}^tf_1 z$$
for some nonzero $z \in \mathbb{D}^q$. In other words, all rank one matrices are mapped into $L({}^tf_1)$.

Let now $T_1 \in M_m(\mathbb{D})$, $S_1 \in M_n(\mathbb{D})$, $T_2 \in M_p(\mathbb{D})$, and $S_2 \in M_q(\mathbb{D})$ be invertible matrices. Then the map
$$A \mapsto T_2\phi(T_1AS_1)S_2, \qquad A \in M_{m\times n}(\mathbb{D}),$$
maps every rank one matrix into $L(T_2\,{}^tf_1)$.

Of course, we can apply the same arguments to degenerate adjacency preserving maps of the second type. We conclude that each degenerate adjacency preserving map $\phi : M_{m\times n}(\mathbb{D}) \to M_{p\times q}(\mathbb{D})$ has the following property: either there exists a nonzero vector ${}^tw_1 \in {}^t\mathbb{D}^p$ such that
$$\phi(M^1_{m\times n}(\mathbb{D})) \subset L({}^tw_1); \tag{9}$$
or there exists a nonzero vector $w_2 \in \mathbb{D}^q$ such that
$$\phi(M^1_{m\times n}(\mathbb{D})) \subset R(w_2). \tag{10}$$

We have defined degenerate adjacency preserving maps as the compositions of two equivalence transformations and an adjacency preserving map $\phi : M_{m\times n}(\mathbb{D}) \to M_{p\times q}(\mathbb{D})$ whose restriction to $\mathcal{Q}$ is $\Delta$, where $\Delta$ is as described above. There are two natural questions here. First, does such a map exist? It is straightforward to show that the answer is in the affirmative.

Example 3.7. We define $\phi : M_{m\times n}(\mathbb{D}) \to M_{p\times q}(\mathbb{D})$ in the following way. Set $\phi(0) = 0$. Let $\varphi_j$ be a map from $M^j_{m\times n}(\mathbb{D})$ into $\mathbb{D}$, $j = 1, \ldots, n-1$, with the property that $\varphi_j(A) \ne \varphi_j(B)$ whenever $A, B \in M^j_{m\times n}(\mathbb{D})$ are adjacent. In particular, this property is satisfied when $\varphi_j$ is injective. Set
$$\phi(A) = \begin{pmatrix} 1 & \varphi_1(A) & 0 & \ldots & 0 \\ 0 & 0 & 0 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & 0 \end{pmatrix}$$
for every $A \in M^1_{m\times n}(\mathbb{D})$ and
$$\phi(A) = \begin{pmatrix} 1 & 0 & 0 & \ldots & 0 \\ 0 & 1 & 0 & \ldots & 0 \\ 0 & \varphi_2(A) & 0 & \ldots & 0 \\ 0 & 0 & 0 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & 0 \end{pmatrix}$$
for every $A \in M^2_{m\times n}(\mathbb{D})$. We continue in a similar way. For every $A \in M^3_{m\times n}(\mathbb{D})$ we set $\phi(A) = E_{11} + E_{22} + E_{33} + \varphi_3(A)E_{34}$, for every $A \in M^4_{m\times n}(\mathbb{D})$ we set $\phi(A) = E_{11} + E_{22} + E_{33} + E_{44} + \varphi_4(A)E_{54}$, and so on.

Assume first that $n$ is odd. Then we have $\phi(A) = E_{11} + \ldots + E_{n-1,n-1} + \varphi_{n-1}(A)E_{n,n-1}$, $A \in M^{n-1}_{m\times n}(\mathbb{D})$. Let $\xi_1, \ldots, \xi_q : M^n_{m\times n}(\mathbb{D}) \to \mathbb{D}$ be any maps with the properties:

• If $A, B \in M^n_{m\times n}(\mathbb{D})$ are adjacent, then there exists $j \in \{1, 2, \ldots, q\}$ such that $\xi_j(A) \ne \xi_j(B)$;
• If $A \in M^{n-1}_{m\times n}(\mathbb{D})$ and $B \in M^n_{m\times n}(\mathbb{D})$ are adjacent, then either $\varphi_{n-1}(A) \ne \xi_{n-1}(B)$, or at least one of $\xi_1(B), \ldots, \xi_{n-2}(B), \xi_n(B), \ldots, \xi_q(B)$ is nonzero;
• $\xi_n(E_{11} + \ldots + E_{nn}) = 1$ and $\xi_j(E_{11} + \ldots + E_{nn}) = 0$ for $j = 1, \ldots, n-1, n+1, \ldots, q$.

We define
$$\phi(A) = \begin{pmatrix} 1 & 0 & \ldots & 0 & 0 & \ldots & 0 \\ 0 & 1 & \ldots & 0 & 0 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 & 0 & \ldots & 0 \\ \xi_1(A) & \xi_2(A) & \ldots & \xi_{n-1}(A) & \xi_n(A) & \ldots & \xi_q(A) \\ 0 & 0 & \ldots & 0 & 0 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 0 & 0 & \ldots & 0 \end{pmatrix}$$
for every $A \in M^n_{m\times n}(\mathbb{D})$. It is easy to verify that $\phi$ preserves adjacency and its restriction to $\mathcal{Q}$ is $\Delta$ with $\Delta : \mathcal{Q} \to M_{p\times q}(\mathbb{D})$ being the map as defined above.

If $n$ is even, then $\phi(A) = E_{11} + \ldots + E_{n-1,n-1} + \varphi_{n-1}(A)E_{n-1,n}$, $A \in M^{n-1}_{m\times n}(\mathbb{D})$. Let now $\xi_1, \ldots, \xi_p : M^n_{m\times n}(\mathbb{D}) \to \mathbb{D}$ be any maps with the properties:

• If $A, B \in M^n_{m\times n}(\mathbb{D})$ are adjacent, then there exists $j \in \{1, 2, \ldots, p\}$ such that $\xi_j(A) \ne \xi_j(B)$;
• If $A \in M^{n-1}_{m\times n}(\mathbb{D})$ and $B \in M^n_{m\times n}(\mathbb{D})$ are adjacent, then either $\varphi_{n-1}(A) \ne \xi_{n-1}(B)$, or at least one of $\xi_1(B), \ldots, \xi_{n-2}(B), \xi_n(B), \ldots, \xi_p(B)$ is nonzero;
• $\xi_n(E_{11} + \ldots + E_{nn}) = 1$ and $\xi_j(E_{11} + \ldots + E_{nn}) = 0$ for $j = 1, \ldots, n-1, n+1, \ldots, p$.

We define
$$\phi(A) = \begin{pmatrix} 1 & 0 & \ldots & 0 & \xi_1(A) & 0 & \ldots & 0 \\ 0 & 1 & \ldots & 0 & \xi_2(A) & 0 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 & \xi_{n-1}(A) & 0 & \ldots & 0 \\ 0 & 0 & \ldots & 0 & \xi_n(A) & 0 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 0 & \xi_p(A) & 0 & \ldots & 0 \end{pmatrix}$$
for every $A \in M^n_{m\times n}(\mathbb{D})$.

It is easy to verify that $\phi$ preserves adjacency and its restriction to $\mathcal{Q}$ is $\Delta$ with $\Delta : \mathcal{Q} \to M_{p\times q}(\mathbb{D})$ being the map as in Example 3.4.

There is another possibility to construct such an adjacency preserving map from $M_{m\times n}(\mathbb{D})$ to $M_{p\times q}(\mathbb{D})$.

Example 3.8. As above we set $\psi(0) = 0$ and choose maps $\varphi_j$ from $M^j_{m\times n}(\mathbb{D})$ into $\mathbb{D}$, $j = 1, \ldots, n-1$, with the property that $\varphi_j(A) \ne \varphi_j(B)$ whenever $A, B \in M^j_{m\times n}(\mathbb{D})$ are adjacent. Then we define
$$\psi(A) = \begin{pmatrix} 1 & 0 & 0 & \ldots & 0 \\ \varphi_1(A) & 0 & 0 & \ldots & 0 \\ 0 & 0 & 0 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & 0 \end{pmatrix}$$
for every $A \in M^1_{m\times n}(\mathbb{D})$ and
$$\psi(A) = \begin{pmatrix} 1 & 0 & 0 & 0 & \ldots & 0 \\ 0 & 1 & \varphi_2(A) & 0 & \ldots & 0 \\ 0 & 0 & 0 & 0 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \ldots & 0 \end{pmatrix}$$
for every $A \in M^2_{m\times n}(\mathbb{D})$. One can now complete the construction of the map $\psi$ in exactly the same way as above (in the special case when $p = q$ the map $\psi$ can be obtained from $\phi$ by composing it with the transposition).

Thus, we have obtained two types of degenerate adjacency preserving maps from $M_{m\times n}(\mathbb{D})$ to $M_{p\times q}(\mathbb{D})$. Further examples can be obtained by composing such maps with two equivalence transformations.

Note that the above degenerate adjacency preserving maps have rather simple structure and some nice properties. In particular, they almost preserve rank. Namely, we have $\operatorname{rank}\phi(A) = \operatorname{rank} A$ for all $A \in M_{m\times n}(\mathbb{D})$ with $\operatorname{rank} A < n$ and $\operatorname{rank}\phi(A) \in \{n-1, n\}$ for all $A \in M_{m\times n}(\mathbb{D})$ with $\operatorname{rank} A = n$. The same is true for the degenerate adjacency preserving map $\psi$.

Next, degenerate adjacency preserving maps of the above type map each set $M^r_{m\times n}(\mathbb{D})$, $1 \le r \le n-1$, into a line. More precisely, let $\phi$ be as in Example 3.7, let $T_1 \in M_m(\mathbb{D})$, $S_1 \in M_n(\mathbb{D})$, $T_2 \in M_p(\mathbb{D})$, and $S_2 \in M_q(\mathbb{D})$ be invertible matrices, and consider the degenerate adjacency preserving map $A \mapsto T_2\phi(T_1AS_1)S_2$, $A \in M_{m\times n}(\mathbb{D})$. Clearly, $A \in M_{m\times n}(\mathbb{D})$ is of rank $r$ if and only if $T_1AS_1$ is of rank $r$. Hence, the map $A \mapsto T_2\phi(T_1AS_1)S_2$, $A \in M_{m\times n}(\mathbb{D})$, maps the set $M^r_{m\times n}(\mathbb{D})$ either into the set of matrices of the form
$$T_2(E_{11} + \ldots + E_{rr})S_2 + T_2\lambda E_{r,r+1}S_2, \qquad \lambda \in \mathbb{D},$$
or into the set of matrices of the form
$$T_2(E_{11} + \ldots + E_{rr})S_2 + T_2\lambda E_{r+1,r}S_2, \qquad \lambda \in \mathbb{D}.$$

Let us consider just the first case. Then the set $M^r_{m\times n}(\mathbb{D})$ is mapped into the set of matrices of the form
$$M + {}^tx\lambda y, \qquad \lambda \in \mathbb{D}, \tag{11}$$
where $M = T_2(E_{11} + \ldots + E_{rr})S_2$, ${}^tx = T_2\,{}^tf_r$, and $y = e_{r+1}S_2$. In the language of geometry of matrices, the sets of matrices of the form (11) are called lines.

It is also easy to verify that if $A, B \in M_{m\times n}(\mathbb{D})$ are matrices of rank $n$ and $\phi : M_{m\times n}(\mathbb{D}) \to M_{p\times q}(\mathbb{D})$ is a degenerate adjacency preserving map of the above type, then either $\phi(A) = \phi(B)$, or $\phi(A)$ and $\phi(B)$ are adjacent.


The maps $\phi$ and $\psi$ from Examples 3.7 and 3.8, respectively, have been obtained by extending the map $\Delta$ in the most natural way. Let us call the maps of the form $A \mapsto T_2\phi(T_1AS_1)S_2$, $A \in M_{m\times n}(\mathbb{D})$, or of the form $A \mapsto T_2\psi(T_1AS_1)S_2$, $A \in M_{m\times n}(\mathbb{D})$, nice degenerate maps. It is natural to ask whether all degenerate adjacency preserving maps are nice. Our first guess that the answer is positive turned out to be wrong. We come now to the second question. Can we describe the general form of degenerate adjacency preserving maps? We will give a few examples of degenerate adjacency preserving maps which will show that the answer is negative. For the sake of simplicity we will consider only maps from $M_3(\mathbb{D})$ into itself. An interested reader can use the same ideas to construct similar examples on matrix spaces of higher dimensions.

If we restrict to maps from $M_3(\mathbb{D})$ into itself, then we are interested in adjacency preserving maps $\phi : M_3(\mathbb{D}) \to M_3(\mathbb{D})$ satisfying $\phi(0) = 0$ and $\operatorname{rank}\phi(C) = 3$ for some $C \in M_3(\mathbb{D})$ of rank three. Such a map is called degenerate if its restriction to the set of points that lie in between $0$ and $C$ is of the special form described above. Replacing the map $\phi$ by the map
$$A \mapsto \phi(C)^{-1}\phi(CA), \qquad A \in M_3(\mathbb{D}),$$
we may assume that $\phi(I) = I$. The set of points that lie in between $0$ and $I$ is the set of all idempotents. Then $\phi$ is degenerate if
$$\phi(0) = 0, \qquad \phi(I) = I, \tag{12}$$
and either the set of rank one idempotents is mapped into matrices of the form
$$T\begin{pmatrix} 1 & * & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}T^{-1} \tag{13}$$
and the set of rank two idempotents is mapped into matrices of the form
$$T\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & * & 0 \end{pmatrix}T^{-1} \tag{14}$$
and if two idempotents of the same rank are adjacent, their images are different; or the set of rank one idempotents is mapped into matrices of the form
$$T\begin{pmatrix} 1 & 0 & 0 \\ * & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}T^{-1}$$
and the set of rank two idempotents is mapped into matrices of the form
$$T\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & * \\ 0 & 0 & 0 \end{pmatrix}T^{-1}$$
and if two idempotents of the same rank are adjacent, their images are different. Here, $T$ is an invertible $3 \times 3$ matrix. We will assume from now on that $\phi$ is of the first type above and $T = I$. We need to show that it can be extended to an adjacency preserving map $\phi : M_3(\mathbb{D}) \to M_3(\mathbb{D})$ with wild behaviour outside the set of idempotent matrices. This will then yield that degenerate maps have a rather simple form on the set of matrices that lie in between two matrices whose images are at the maximum possible distance with respect to the arithmetic distance, but their general form on the complement of this set cannot be described nicely. When introducing the notion of a degenerate adjacency preserving map we started with a map $\Delta$ defined on the set $\mathcal{Q}$, and then we defined a degenerate adjacency preserving map as any adjacency preserving extension of such a map $\Delta$ composed with two equivalence transformations. The examples that we will present now show that no better definition is possible.

Example 3.9. Let $\mathbb{D}$ be a disjoint union of the sets $\mathbb{D} = \mathcal{A} \cup \mathcal{B}$ such that all three sets $\mathbb{D}$, $\mathcal{A}$, and $\mathcal{B}$ are of the same cardinality. Our first example of a degenerate adjacency preserving map $\phi : M_3(\mathbb{D}) \to M_3(\mathbb{D})$ is defined by (12), (13) with the $*$ belonging to $\mathcal{A}$, and (14) with the $*$ belonging to $\mathcal{A}$; the set of rank one non-idempotent matrices is mapped by $\phi$ into matrices of the form
$$\begin{pmatrix} * & * & * \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
with the $(1,3)$-entry nonzero, and if $A$ and $B$ are two adjacent rank one non-idempotent matrices we further assume that the $(1,3)$-entries of their $\phi$-images are different; the set of rank two non-idempotent matrices is mapped by $\phi$ into matrices of the form
$$\begin{pmatrix} 1 & * & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
with the $*$ belonging to $\mathcal{B}$, and if $A$ and $B$ are two adjacent rank two non-idempotent matrices we further assume that the $(1,2)$-entries of their $\phi$-images are different; and the set of rank three matrices $\ne I$ is mapped by $\phi$ into matrices of the form
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & * & 0 \end{pmatrix}$$
with the $*$ belonging to $\mathcal{B}$, and again we assume that if $A$ and $B$ are two adjacent matrices of rank three different from the identity, then the $(3,2)$-entries of their $\phi$-images are different.

To see that such a map preserves adjacency we assume that $A, B \in M_3(\mathbb{D})$ are adjacent. We need to show that then $\phi(A)$ and $\phi(B)$ are adjacent. We distinguish several cases:

• $A = 0$ (then $B$ must be of rank one),
• $A$ is an idempotent of rank one (then $B$ is either the zero matrix, or a rank one matrix, or a rank two matrix),
• $A$ is a non-idempotent matrix of rank one (then $B$ is either the zero matrix, or a rank one matrix, or a non-idempotent rank two matrix),
• $A$ is an idempotent of rank two (then $B$ is either a rank one idempotent, or a rank two matrix, or a rank three matrix),
• $A$ is a non-idempotent matrix of rank two (then $B$ is different from $0$ and $I$),
• $A$ is a rank three matrix $\ne I$ (then $B$ is of rank two or three),
• $A = I$ (then $B$ is either an idempotent of rank two, or a rank three matrix $\ne I$).

It is straightforward to verify that in all of these cases $\phi(A)$ and $\phi(B)$ are adjacent.


Now we see that the behaviour on the set of non-idempotent matrices is not as simple as in the case of nice degenerate maps. First, non-idempotent rank two matrices are mapped into matrices of rank one. The set of rank one matrices is not mapped into a line, and neither is the set of rank two matrices. We continue with a somewhat different example.

Example 3.10. Let the map $\phi : M_3(\mathbb{D}) \to M_3(\mathbb{D})$ be defined by (12), (13) with the $*$ belonging to $\mathcal{A}$, and (14) with the $*$ belonging to $\mathcal{A}$; the set of rank one non-idempotent matrices is mapped by $\phi$ into matrices of the form
$$\begin{pmatrix} 1 & * & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
with the $*$ belonging to $\mathcal{B}$, and if $A$ and $B$ are two adjacent rank one non-idempotent matrices we further assume that the $(1,2)$-entries of their $\phi$-images are different; the set of rank two non-idempotent matrices is mapped by $\phi$ into matrices of the form
$$\begin{pmatrix} 1 & * & 0 \\ 0 & * & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
with the $(2,2)$-entry $\ne 0, 1$, and if $A$ and $B$ are two adjacent rank two non-idempotent matrices we further assume that the $(1,2)$-entries of their $\phi$-images are different; and the set of rank three matrices $\ne I$ is mapped by $\phi$ into matrices of the form
$$\begin{pmatrix} 1 & * & 0 \\ 0 & 1 & 0 \\ 0 & * & 0 \end{pmatrix}$$
with the $*$ in the $(3,2)$-position belonging to $\mathcal{B}$, the star in the $(1,2)$-position being $0$ for all rank three matrices that are adjacent to the identity, but not being zero for all rank three matrices, and finally we assume that if $A$ and $B$ are two adjacent matrices of rank three different from the identity, then the $(3,2)$-entries of their $\phi$-images are different.

The adjacency preserving property can be verified as in the previous example.

This time we have an example of a degenerate adjacency preserving map such that
the set of rank two matrices is not mapped into a line. And there is a rank three
matrix F such that d(φ(I), φ(F )) = 2.

Here is the last example on $M_3(\mathbb{D})$.

Example 3.11. Let a map $\phi : M_3(\mathbb{D}) \to M_3(\mathbb{D})$ be defined by (12), (13) with the $*$ belonging to $\mathcal{A}$, and (14) with the $*$ belonging to $\mathcal{A}$; the set of rank one non-idempotent matrices is mapped by $\phi$ into matrices of the form
$$\begin{pmatrix} * & * & * \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
with the $(1,3)$-entry nonzero, and if $A$ and $B$ are two adjacent rank one non-idempotent matrices we further assume that the $(1,3)$-entries of their $\phi$-images are different; the set of rank two non-idempotent matrices is mapped by $\phi$ into matrices of the form
$$\begin{pmatrix} 1 & * & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
with the $*$ belonging to $\mathcal{B}$, and if $A$ and $B$ are two adjacent rank two non-idempotent matrices we further assume that the $(1,2)$-entries of their $\phi$-images are different; the set of rank three matrices that are adjacent to the identity is mapped by $\phi$ into matrices of the form
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & * & 0 \end{pmatrix}$$
with the $*$ belonging to $\mathcal{B}$, and if $A$ and $B$ are two adjacent rank three matrices both adjacent to the identity, then the $(3,2)$-entries of their $\phi$-images are different; and finally the set of rank three matrices $\ne I$ that are not adjacent to the identity is mapped by $\phi$ into matrices of the form
$$\begin{pmatrix} 1 & * & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
with the $*$ belonging to $\mathcal{A}$, and if $A \ne I$ and $B \ne I$ are two adjacent rank three matrices both non-adjacent to the identity, then the $(1,2)$-entries of their $\phi$-images are different.

Again, it is easy to verify that this map preserves adjacency. A careful reader

has already observed that this map is a slight modification of the map presented
in Example 3.9 (they act in the same way on rank one and rank two matrices, but
differ on the set of rank three matrices). Thus, they have the same wild behaviour
on non-idempotent matrices of rank one and two. We have here an additional
unexpected property. Namely, there are rank three matrices that are mapped by φ
into rank one matrices.

The standard approach to study adjacency preserving maps invented by Hua

and used by his followers was to study maximal adjacent sets, that is, the maximal
sets of matrices with the property that any two different matrices from this set
are adjacent. Our approach is different. We first reduce the general case to the
square case. Then we show that after modifying adjacency preserving maps in an
appropriate way we can assume that they preserve idempotents and the natural
partial order on the set of idempotents. When we discovered this approach, our impression was that we would be able to show that all adjacency preserving maps are products of maps described above. Much to our surprise, a careful analysis of order preserving maps on idempotents gave us further examples of "wild" adjacency preserving maps.

Example 3.12. Let τ be a nonzero nonsurjective endomorphism of D and c ∈ D a scalar that is not contained in the range of τ. For A ∈ M_{m×n}(D) we denote by A_{1c} and A_{1r} the first column and the first row of A, respectively. Hence, A^τ_{1c} is the m × 1 matrix obtained in the following way: we take the first column of A and apply τ entrywise. We define a map φ : M_{m×n}(D) → M_{m×n}(D) by

(15)    φ(A) = A^τ − A^τ_{1c} c (1 + τ(a_{11})c)^{−1} A^τ_{1r},    A ∈ M_{m×n}(D).


A rather straightforward (but not entirely trivial) computation shows that such a map preserves adjacency. Of course, we have φ(0) = 0 and it is not difficult to verify that there exist matrices A ∈ M_{m×n}(D) with rank φ(A) = min{m, n}.

It is clear that in the above example the first row and the first column can be

replaced by other columns and rows. And then, as a compositum of adjacency pre-
serving maps preserves adjacency, we may combine such maps and those described
in the previous examples to get adjacency preserving maps that at first look seem
to be too complicated to be described nicely.

At this point I would like to express my gratitude to Wen-ling Huang whose help was essential in getting the following insight into the last example. The explanation that follows gives an interested reader a basic understanding why our results might be helpful when studying the fundamental theorem of geometry of Grassmann spaces. Recall first that two m-dimensional subspaces U, V ⊂ D^{m+n} are said to be adjacent if dim(U ∩ V) = m − 1. Let x_1, ..., x_m ∈ D^m be linearly independent vectors. Let further y_1, ..., y_m, u_1, ..., u_m be any vectors in D^n. Then it is trivial to check that the m-dimensional subspaces

    span { [ x_1  y_1 ], ..., [ x_m  y_m ] } ⊂ D^{m+n}

and

    span { [ x_1  y_1 + u_1 ], ..., [ x_m  y_m + u_m ] } ⊂ D^{m+n}

are adjacent if and only if dim span {u_1, ..., u_m} = 1. This fact can be reformulated in the following way. Let A, B be m × n matrices over D and I the m × m identity matrix. Then the row spaces of the m × (m + n) matrices

    [ I  A ]    and    [ I  B ]

are adjacent if and only if the matrices A and B are adjacent. It is also clear that if P ∈ M_m(D) and Q ∈ M_{m×n}(D) are any two matrices, and R ∈ M_m(D) is any invertible matrix, then the row spaces of the m × (m + n) matrices

    [ P  Q ]    and    [ RP  RQ ]

are the same.
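This row-space criterion is easy to test numerically in the commutative special case D = R. The following minimal sketch (Python with numpy; the helper name and the test data are ours, not part of the memoir) computes dim(U ∩ V) via the rank identity dim(U ∩ V) = dim U + dim V − dim(U + V) and compares it with the adjacency of A and B.

    import numpy as np

    def row_space_intersection_dim(X, Y):
        # dim(U ∩ V) = dim U + dim V - dim(U + V), where U, V are the row spaces of X, Y
        rk = np.linalg.matrix_rank
        return rk(X) + rk(Y) - rk(np.vstack([X, Y]))

    rng = np.random.default_rng(0)
    m, n = 3, 4
    A = rng.standard_normal((m, n))
    B = A + np.outer(rng.standard_normal(m), rng.standard_normal(n))   # rank(A - B) = 1

    U = np.hstack([np.eye(m), A])    # the matrix [I  A]
    V = np.hstack([np.eye(m), B])    # the matrix [I  B]

    print(np.linalg.matrix_rank(A - B))        # 1: A and B are adjacent
    print(row_space_intersection_dim(U, V))    # m - 1 = 2: their row spaces are adjacent

More generally one sees from the same computation that dim(U ∩ V) = m − rank(A − B), so the arithmetic distance between A and B can be read off from the two row spaces.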

Example 3.13. Let M ∈ M_m(D), N ∈ M_{m×n}(D), L ∈ M_{n×m}(D), and K ∈ M_n(D) be matrices such that

    E = [ M  N ]
        [ L  K ]  ∈ M_{m+n}(D)

is invertible. Assume further that τ : D → D is a nonzero endomorphism such that for every A ∈ M_{m×n}(D) the matrix M + A^τ L is invertible. Then the map φ : M_{m×n}(D) → M_{m×n}(D) defined by

(16)    φ(A) = (M + A^τ L)^{−1} (N + A^τ K)

preserves adjacency in both directions.

Note that in the special case when L = N = 0 we get a standard adjacency preserving map from M_{m×n}(D) into itself.

To prove the adjacency preserving property observe that A, B ∈ M_{m×n}(D) are adjacent matrices if and only if A^τ and B^τ are adjacent. Equivalently, the row spaces of the matrices

    [ I  A^τ ]    and    [ I  B^τ ]

are adjacent. Now, the invertible matrix E represents an invertible endomorphism of the left vector space D^{m+n}. Invertible endomorphisms map adjacent pairs of subspaces into adjacent pairs of subspaces. Thus, the row spaces of the matrices

    [ I  A^τ ] [ M  N ]  =  [ M + A^τ L    N + A^τ K ]
               [ L  K ]

and

    [ M + B^τ L    N + B^τ K ]

are adjacent if and only if the matrices A and B are adjacent. We know that the row space of the matrix [ M + A^τ L   N + A^τ K ] is the same as the row space of the matrix

    (M + A^τ L)^{−1} [ M + A^τ L    N + A^τ K ]  =  [ I    (M + A^τ L)^{−1}(N + A^τ K) ].

Hence, we conclude that the row spaces of the matrices

    [ I    (M + A^τ L)^{−1}(N + A^τ K) ]    and    [ I    (M + B^τ L)^{−1}(N + B^τ K) ]

are adjacent, and consequently, φ(A) and φ(B) are adjacent if and only if A and B are adjacent, as desired.
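For a quick sanity check of Example 3.13 in the commutative special case D = R with τ = id (so A^τ = A), the following minimal sketch (Python with numpy; all identifiers and the random test data are ours) verifies that rank(A − B) = 1 forces rank(φ(A) − φ(B)) = 1; for generic real data the matrices E and M + AL used below are invertible.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 3, 4
    M, N = rng.standard_normal((m, m)), rng.standard_normal((m, n))
    L, K = rng.standard_normal((n, m)), rng.standard_normal((n, n))

    def phi(A):
        # formula (16) with tau = id over the reals
        return np.linalg.solve(M + A @ L, N + A @ K)

    A = rng.standard_normal((m, n))
    B = A + np.outer(rng.standard_normal(m), rng.standard_normal(n))   # adjacent to A

    print(np.linalg.matrix_rank(A - B))             # 1
    print(np.linalg.matrix_rank(phi(A) - phi(B)))   # 1: phi preserves adjacency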

We will show that Example 3.12 is just a special case of Example 3.13. To this end choose a nonsurjective nonzero endomorphism τ : D → D and an element c ∈ D such that c is not contained in the range of τ. Set M = I, L = cE_{11}, N = 0, and K = I. Then E is invertible, and

    M + A^τ L = I + [τ(a_{ij})] c E_{11} =
    [ 1 + τ(a_{11})c   0   0   ...   0 ]
    [ τ(a_{21})c       1   0   ...   0 ]
    [ τ(a_{31})c       0   1   ...   0 ]
    [      ...         .   .   ...   . ]
    [ τ(a_{m1})c       0   0   ...   1 ]

is always invertible, because 1 + τ(a_{11})c ≠ 0 for any a_{11} ∈ D. A straightforward computation shows that with this special choice of matrices M, N, K, L the map φ is of the form (15).
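The reduction of Example 3.12 to Example 3.13 is an algebraic identity, so it can also be confirmed numerically. The sketch below (Python with numpy; the choice τ = id over the reals is only for testing the identity, nonsurjectivity plays no role in it) compares formula (16) with M = I, L = cE_{11}, N = 0, K = I against formula (15), assuming 1 + a_{11}c ≠ 0 for the sampled matrix.

    import numpy as np

    rng = np.random.default_rng(2)
    m, n, c = 4, 3, 0.7
    A = rng.standard_normal((m, n))

    E11 = np.zeros((n, m)); E11[0, 0] = 1.0     # the n x m matrix unit E_11

    # formula (16) with M = I, L = c*E11, N = 0, K = I and tau = id:
    lhs = np.linalg.solve(np.eye(m) + A @ (c * E11), A)

    # formula (15) with tau = id:  A - A_{1c} c (1 + a_{11} c)^{-1} A_{1r}
    rhs = A - np.outer(A[:, 0], A[0, :]) * (c / (1.0 + A[0, 0] * c))

    print(np.allclose(lhs, rhs))   # True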

In order to truly understand Example 3.13 we need to answer one more question. When proving that the map φ defined by (16) preserves adjacency in both directions we have used two assumptions: the invertibility of the matrix E and the property that M + A^τ L is invertible for every A ∈ M_{m×n}(D). Of course, we need the second of these two assumptions if we want to define a map φ by the formula (16). Then, if we assume that E is invertible, φ preserves adjacency in both directions. But we are interested in maps preserving adjacency in one direction only. Thus, the question is whether we really need to assume that E is invertible in order to conclude that φ preserves adjacency. Can we replace this assumption by a weaker one or simply omit it?

To answer this question we observe that if a map φ : M_{m×n}(D) → M_{p×q}(D) preserves adjacency and d(φ(A_0), φ(B_0)) = min{m, n}, then the map ψ : M_{m×n}(D) → M_{p×q}(D) defined by

    ψ(A) = φ(A + A_0) − φ(A_0)

preserves adjacency as well. Moreover, it satisfies ψ(0) = 0 and rank ψ(B_0 − A_0) = min{m, n}. Hence, if we want to understand the structure of maps φ : M_{m×n}(D) → M_{p×q}(D) preserving adjacency and satisfying d(φ(A_0), φ(B_0)) = min{m, n} for some A_0, B_0 ∈ M_{m×n}(D), it is enough to consider the special case of adjacency preserving maps ψ : M_{m×n}(D) → M_{p×q}(D) satisfying ψ(0) = 0 and rank ψ(C_0) = min{m, n} for some C_0 ∈ M_{m×n}(D).

At this point we need to distinguish between the cases m

≥ n and n ≥ m.

To make the statement of our results as well as the proofs simpler we will restrict
throughout the paper to just one of these two cases. Clearly, the other one can be
treated with minor and obvious modifications in almost the same way.

Thus, let m ≥ n and suppose that φ : M_{m×n}(D) → M_{m×n}(D) satisfies φ(0) = 0 and rank φ(A_0) = n for some A_0 ∈ M_{m×n}(D). Assume further that M + A^τ L is invertible for all A ∈ M_{m×n}(D) and φ is defined by (16). We will show that then N = 0 and both M and K are invertible, and thus, the invertibility of the matrix

    E = [ M  N ]
        [ L  K ]

follows automatically from our assumptions. Indeed, M = M + 0^τ L is invertible. Moreover, from φ(0) = 0 we conclude that N = 0. It then follows from rank φ(A_0) = n that K is invertible.

As already mentioned, our approach to the problem of describing the general form of adjacency preserving maps is based on the reduction to the problem of the characterization of order preserving maps on P_n(D). Because of the importance of such maps in the study of our problem we have examined them in the paper [22]. The main result there describes the general form of such maps under the injectivity assumption and the EAS assumption. We also gave several examples showing that these two assumptions are indispensable. Because of the intimate connection between the two problems we can ask if the new examples of adjacency preserving maps bring some new insight into the study of order preserving maps on idempotent matrices. As we shall see, the answer is in the affirmative.

Indeed, if we restrict to the square case m = n = p = q and if we compose the map φ given by (16) with the similarity transformation A ↦ MAM^{−1}, and then impose the condition that 0 and I are mapped into themselves (in the language of posets we impose the condition that the unique minimal and the unique maximal idempotent are mapped into the minimal and the maximal idempotent, respectively), we arrive at the map ξ : M_n(D) → M_n(D) of the form

(17)    ξ(A) = (I + A^τ L)^{−1} A^τ (I + L),    A ∈ M_n(D),

where τ : D → D is an endomorphism and L is an n × n matrix such that I + A^τ L is invertible for every A ∈ M_n(D). It is easy to verify that ξ maps the set of idempotent matrices into itself. Indeed, if P ∈ P_n(D), then

    (ξ(P))^2 = (I + P^τ L)^{−1} P^τ (I + L)(I + P^τ L)^{−1} P^τ (I + L)
             = (I + P^τ L)^{−1} P^τ [(I + P^τ L) + (I − P^τ)L](I + P^τ L)^{−1} P^τ (I + L).

It follows from P^τ(I − P^τ) = 0 that (ξ(P))^2 = ξ(P). Hence, after restricting ξ to P_n(D) we can consider it as a map from P_n(D) into itself. It preserves the order. The shortest proof we were able to find is based on the fact that ξ preserves adjacency. If P ≤ Q, P, Q ∈ P_n(D), and rank Q = rank P + 1, then obviously, P and Q are adjacent idempotent matrices, and therefore, ξ(P) and ξ(Q) are adjacent idempotents, that is, rank(ξ(Q) − ξ(P)) = 1. Moreover, since clearly rank ξ(A) = rank A for every A ∈ M_n(D), we have rank(ξ(Q) − ξ(P)) = 1 = rank ξ(Q) − rank ξ(P). It follows easily (see preliminary results, Lemma 5.1) that ξ(P) ≤ ξ(Q). If P, Q ∈ P_n(D) are any two idempotents with P ≤ Q and P ≠ Q, then we can find a chain of idempotents P = P_0 ≤ P_1 ≤ ... ≤ P_r = Q such that rank P_{k−1} + 1 = rank P_k, k = 1, ..., r. It follows that ξ(P) = ξ(P_0) ≤ ξ(P_1) ≤ ... ≤ ξ(P_r) = ξ(Q).
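A minimal numerical illustration of (17) in the commutative case D = R with τ = id (Python with numpy; the matrices below are arbitrary test data and L is taken small so that I + AL stays invertible for them): it checks that ξ(P) is idempotent for idempotent P and that ξ respects the order on a small chain, where P ≤ Q is understood in the usual sense PQ = QP = P.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 3
    L = 0.1 * rng.standard_normal((n, n))

    def xi(A):
        # formula (17) with tau = id over the reals
        return np.linalg.solve(np.eye(n) + A @ L, A @ (np.eye(n) + L))

    # a chain of idempotents P <= Q: diagonal projections conjugated by a random similarity
    S = rng.standard_normal((n, n))
    P = S @ np.diag([1.0, 0.0, 0.0]) @ np.linalg.inv(S)
    Q = S @ np.diag([1.0, 1.0, 0.0]) @ np.linalg.inv(S)

    print(np.allclose(xi(P) @ xi(P), xi(P)), np.allclose(xi(Q) @ xi(Q), xi(Q)))   # True True
    print(np.allclose(xi(Q) @ xi(P), xi(P)), np.allclose(xi(P) @ xi(Q), xi(P)))   # True True: xi(P) <= xi(Q)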

Hence, ξ is an example of an order preserving map on P_n(D). We were not aware of the existence of such examples of order preserving maps on idempotent matrices when writing the paper [22]. Our first impression was that this new insight would help us to completely understand the structure of order preserving maps on idempotent matrices without assuming that the underlying division ring is EAS. At that time the idea was therefore to improve the main result from [22] and then apply it to finally get the optimal version of Hua's theorem. It turns out that this idea does not work.

We will still begin the study of adjacency preserving maps by reducing it to the square case and then further reducing it to the study of order preserving maps on idempotent matrices. Unfortunately we do not understand the structure of such maps completely. Namely, a careful examination of the example of an order preserving map on 3 × 3 complex idempotent matrices given in [22, pages 160-162] shows that this map cannot be extended to an adjacency preserving map on the set of all 3 × 3 complex matrices. In other words, the structural problems for adjacency preserving maps on square matrices and order preserving maps on idempotent matrices are not equivalent. Still, our starting ideas will be similar to those used in [22], but later on several new ideas will be needed to solve our problem.

Finally, after solving our problem in the square case completely, we will con-

sider also the non-square case but only under the additional assumption that the
underlying division ring is EAS. The last examples in this section will show that
without the EAS assumption the general form of adjacency preserving maps satis-
fying (5) may be very wild and thus, no nice result can be expected in this general
setting.

Let D be a non-EAS division ring and m > m_1 ≥ n. At the beginning of this section we gave an example of an adjacency preserving map φ : M_{m×n}(D) → M_{m_1×n}(D) such that φ(0) = 0 and

      ( B )     ( B^τ )
    φ (   )  =  (     )
      ( 0 )     (  0  )

for every n × n matrix B. Here, τ : D → D is a nonzero nonsurjective endomorphism. In particular, such a map satisfies the condition (5). Let us call such maps for a moment maps of the first type. Such maps φ preserve rank one matrices, for every matrix A ∈ M_{m×n}(D) we have rank φ(A) ≤ rank A, and there exist matrices A such that rank φ(A) < rank A. Let us call a map φ : M_{m_1×n}(D) → M_{m_1×n}(D) of the form (16) with N = 0 and K invertible a map of the second type. Such maps φ preserve rank, that is, rank φ(A) = rank A, A ∈ M_{m_1×n}(D). If we compose a map of the first type with a map of the second type we get a map from M_{m×n}(D) into M_{m_1×n}(D) which preserves adjacency, maps the zero matrix into the zero matrix, and there exists A ∈ M_{m×n}(D) of rank n such that its image has rank n, too. After composing the obtained map with a left multiplication with a permutation matrix, we may further assume that the upper n × n submatrix of the image of A has rank n. Now we can compose the map obtained so far with another map of the first type and then with another map of the second type, thus obtaining an adjacency preserving map from M_{m×n}(D) into M_{m_2×n}(D). Here, n ≤ m_2 < m_1 < m, and the new map preserves adjacency, maps the zero matrix into the zero matrix, and maps at least one rank n matrix into a matrix of rank n, but in general decreases ranks of matrices.

Such a map can be quite complicated and difficult to describe. But if we go on and compose such a map with a degenerate map from M_{m_2×n}(D) into M_{p×q}(D), p, m_2, q ≥ n, then we do not believe that anything reasonable can be said about the general form of such a compositum ψ.


CHAPTER 4

Statement of main results

We are now ready to state our main results. We will deal with maps φ from M_{m×n}(D) to M_{p×q}(D) preserving adjacency in one direction only. We will consider only the case when m ≥ n, since the case m < n can be treated in exactly the same way. Further, we will assume that φ(0) = 0 and that there exists a matrix whose φ-image is of rank n. These are harmless normalizations as we already know that the problem of describing the general form of adjacency preserving maps satisfying (5) can be easily reduced to the special case where the last two assumptions are satisfied. Throughout we will assume that D has more than three elements, that is, D ≠ F_2, F_3. And finally, we will always assume that n ≥ 3. We conjecture that all our main results remain valid when n = 2. Unfortunately, our proof techniques do not cover this case.

We will start with the square case.

Theorem 4.1. Let D be a division ring, D ≠ F_2, F_3, and let n, p, q be integers with p, q ≥ n ≥ 3. Assume that φ : M_n(D) → M_{p×q}(D) preserves adjacency, φ(0) = 0, and there exists A_0 ∈ M_n(D) such that rank φ(A_0) = n.

Then either there exist invertible matrices T ∈ M_p(D) and S ∈ M_q(D), a nonzero endomorphism τ : D → D, and a matrix L ∈ M_n(D) with the property that I + A^τ L ∈ M_n(D) is invertible for every A ∈ M_n(D), such that

    φ(A) = T [ (I + A^τ L)^{−1} A^τ    0 ]
             [           0             0 ]  S,    A ∈ M_n(D);

or there exist invertible matrices T ∈ M_p(D) and S ∈ M_q(D), a nonzero anti-endomorphism σ : D → D, and a matrix L ∈ M_n(D) with the property that I + ^t(A^σ)L ∈ M_n(D) is invertible for every A ∈ M_n(D), such that

    φ(A) = T [ (I + ^t(A^σ)L)^{−1} ^t(A^σ)    0 ]
             [               0                 0 ]  S,    A ∈ M_n(D);

or φ is degenerate.

The next step would be to extend this theorem to maps φ : M_{m×n}(D) → M_{p×q}(D) with m not necessarily equal to n. The general form of such maps may be quite complicated, as we have seen in the previous section. Thus, to get a nice result in the non-square case, we need to impose the EAS assumption.

Theorem 4.2. Let m, n, p, q be integers with m, p, q ≥ n ≥ 3 and D an EAS division ring different from F_2 and F_3. Assume that φ : M_{m×n}(D) → M_{p×q}(D) preserves adjacency, φ(0) = 0, and there exists A_0 ∈ M_{m×n}(D) such that rank φ(A_0) = n.

Then either φ is of the standard form, or it is a degenerate adjacency preserving map.


We need to add some further assumptions if we want to get rid of degenerate

maps. The obvious way is to assume that adjacency is preserved in both directions.
It turns out that then the assumption that the maximal possible arithmetic distance
in the range of φ is achieved is fulfilled automatically.

Corollary 4.3. Let m, n, p, q be integers with m, p, q ≥ n ≥ 3 and D an EAS division ring, D ≠ F_2, F_3. Assume that φ : M_{m×n}(D) → M_{p×q}(D) preserves adjacency in both directions and φ(0) = 0.

Then φ is of the standard form.

Another possibility is to apply the property (9) or (10) of degenerate adjacency

preserving maps.

Corollary 4.4. Let D be a division ring, D ≠ F_2, F_3, and let n, p, q be integers with p, q ≥ n ≥ 3. Assume that φ : M_n(D) → M_{p×q}(D) preserves adjacency, φ(0) = 0, and there exists A_0 ∈ M_n(D) such that rank φ(A_0) = n. Assume further that there exist B_0, C_0 ∈ M^1_n(D) such that φ(B_0) ≠ φ(C_0) and φ(B_0) and φ(C_0) are not adjacent.

Then either there exist invertible matrices T ∈ M_p(D) and S ∈ M_q(D), a nonzero endomorphism τ : D → D, and a matrix L ∈ M_n(D) with the property that I + A^τ L ∈ M_n(D) is invertible for every A ∈ M_n(D), such that

    φ(A) = T [ (I + A^τ L)^{−1} A^τ    0 ]
             [           0             0 ]  S,    A ∈ M_n(D);

or there exist invertible matrices T ∈ M_p(D) and S ∈ M_q(D), a nonzero anti-endomorphism σ : D → D, and a matrix L ∈ M_n(D) with the property that I + ^t(A^σ)L ∈ M_n(D) is invertible for every A ∈ M_n(D), such that

    φ(A) = T [ (I + ^t(A^σ)L)^{−1} ^t(A^σ)    0 ]
             [               0                 0 ]  S,    A ∈ M_n(D).

Corollary 4.5. Let m, n, p, q be integers with m, p, q ≥ n ≥ 3 and D an EAS division ring different from F_2 and F_3. Assume that φ : M_{m×n}(D) → M_{p×q}(D) preserves adjacency, φ(0) = 0, and there exists A_0 ∈ M_{m×n}(D) such that rank φ(A_0) = n. Assume further that there exist B_0, C_0 ∈ M^1_{m×n}(D) such that φ(B_0) ≠ φ(C_0) and φ(B_0) and φ(C_0) are not adjacent.

Then φ is of the standard form.

Clearly, each finite field is EAS. For such fields the result is especially nice.

Corollary 4.6. Let m, n, p, q be integers with m, p, q ≥ n ≥ 3 and F a finite field with F ≠ F_2, F_3. Assume that φ : M_{m×n}(F) → M_{p×q}(F) preserves adjacency, φ(0) = 0, and there exists A_0 ∈ M_{m×n}(F) such that rank φ(A_0) = n. Then φ is of the standard form.


CHAPTER 5

Proofs

5.1. Preliminary results

Let D be a division ring, m, n positive integers, and A, B ∈ M_{m×n}(D). Assume that

(18)    rank(A + B) = rank A + rank B.

After identifying matrices with operators we claim that

(19)    Im(A + B) = Im A ⊕ Im B.

Indeed, we always have Im(A + B) ⊂ Im A + Im B. From rank A = dim Im A and (18) it is now easy to conclude that (19) holds. Moreover, we have

(20)    Ker(A + B) = Ker A ∩ Ker B.

Indeed, the inclusion Ker A ∩ Ker B ⊂ Ker(A + B) holds for any pair of operators A and B. To prove the equality we take any x ∈ Ker(A + B). From 0 = x(A + B) = xA + xB and (19) we conclude that xA = xB = 0.

In particular, if A and B are adjacent and rank A < rank B, then B = A + R for some R of rank one. It follows that rank B = rank(A + R) = rank A + rank R, and therefore, Im A ⊂ Im B and Ker B ⊂ Ker A.
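In the commutative special case D = R the identities (19) and (20) translate into rank identities that are easy to test: with matrices acting on row vectors, dim(Im A + Im B) is the rank of the stacked matrix and dim(Ker A ∩ Ker B) = m − rank [A  B]. Here is a minimal sketch (Python with numpy; the construction of A and B is our own test data, chosen so that (18) holds generically).

    import numpy as np

    rng = np.random.default_rng(5)
    m, n, rA, rB = 5, 6, 2, 1
    rk = np.linalg.matrix_rank

    # A and B built from independent low-rank factors, so that rank(A + B) = rank A + rank B
    A = rng.standard_normal((m, rA)) @ rng.standard_normal((rA, n))
    B = rng.standard_normal((m, rB)) @ rng.standard_normal((rB, n))
    assert rk(A + B) == rk(A) + rk(B)                 # hypothesis (18)

    print(rk(np.vstack([A, B])) == rk(A) + rk(B))     # True: Im A + Im B is direct, hence (19)
    print(rk(np.hstack([A, B])) == rk(A + B))         # True: dim(Ker A ∩ Ker B) = dim Ker(A + B), hence (20)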

Let us prove another simple consequence that will be important in our proof

of the main results.

Lemma 5.1. Let n, p, q be positive integers with n ≤ p, q and let P_1 ∈ P_n(D). Let further P, Q ∈ M_{p×q}(D) and

    P = [ P_1  0 ]
        [  0   0 ]

(some zeroes are absent when n = p or n = q). Assume that rank P = rank Q + rank(P − Q). Then

    Q = [ Q_1  0 ]
        [  0   0 ],

where Q_1 is an n × n idempotent matrix and Q_1 ≤ P_1.

Proof. It follows from rank P = rank Q + rank(P − Q) that Im Q ⊂ Im P and Ker P ⊂ Ker Q. Thus,

    Q = [ Q_1  0 ]
        [  0   0 ],

where Q_1 ∈ M_n(D). Clearly, rank P_1 = rank Q_1 + rank(P_1 − Q_1). Since P_1 is an idempotent we have

    D^n = Im P_1 ⊕ Ker P_1 = (Im Q_1 ⊕ Im(P_1 − Q_1)) ⊕ Ker P_1.

If x ∈ Ker P_1, then xP_1 = 0 = xQ_1 + x(P_1 − Q_1). Because Im P_1 is a direct sum of Im Q_1 and Im(P_1 − Q_1), we conclude that 0 = xQ_1 = x(P_1 − Q_1), x ∈ Ker P_1. Further, if x ∈ Im Q_1, then x ∈ Im P_1, and consequently, x = xP_1. It follows that x = xP_1 = xQ_1 + x(P_1 − Q_1). Because x ∈ Im Q_1 and since Im P_1 is a direct sum of Im Q_1 and Im(P_1 − Q_1), we have xQ_1 = x and x(P_1 − Q_1) = 0. Similarly, we see that xQ_1 = 0 and x(P_1 − Q_1) = x for every x ∈ Im(P_1 − Q_1). It follows that Q_1 : D^n → D^n is an idempotent operator acting as the identity on Im Q_1 and Ker Q_1 = Im(P_1 − Q_1) ⊕ Ker P_1. It follows directly that Q_1 ≤ P_1.

Lemma 5.2. Let A, B ∈ M_{m×n}(D) be adjacent matrices such that rank A = rank B. Then Im A = Im B or Ker A = Ker B.

Proof. Let r = rank A. There is nothing to prove when r = 0. So, assume that r > 0. Then A = ^tx_1 y_1 + ^tx_2 y_2 + ... + ^tx_r y_r, where ^tx_1, ..., ^tx_r are linearly independent and y_1, ..., y_r are linearly independent. As B is adjacent to A we have B = ^tx_1 y_1 + ^tx_2 y_2 + ... + ^tx_r y_r + ^tuv for some nonzero vectors ^tu and v. We have two possibilities: either ^tx_1, ..., ^tx_r, ^tu are linearly dependent, or y_1, ..., y_r, v are linearly dependent, since otherwise B would be of rank r + 1. We will consider only the first case. In this case ^tu belongs to the linear span of ^tx_1, ..., ^tx_r, and therefore we have Ker A ⊂ Ker B. But as these two matrices are of the same rank we actually have Ker A = Ker B, as desired.

We continue with some simple results on adjacency preserving maps.

Lemma 5.3. Let m, n, p, q be positive integers and φ : M_{m×n}(D) → M_{p×q}(D) an adjacency preserving map satisfying φ(0) = 0. Then for every nonzero ^tx ∈ ^tD^m either there exists ^ty ∈ ^tD^p such that

    φ(L(^tx)) ⊂ L(^ty),

or there exists w ∈ D^q such that

    φ(L(^tx)) ⊂ R(w).

Proof. As φ preserves adjacency and φ(0) = 0 we have φ(M^1_{m×n}(D)) ⊂ M^1_{p×q}(D). A subset S ⊂ M_{m×n}(D) is called adjacent if every pair of different matrices A, B ∈ S is adjacent. Clearly, L(^tx) is an adjacent subset of M^1_{m×n}(D) ∪ {0}. Moreover, if T ⊂ M_{m×n}(D) is an adjacent set, then φ(T) is an adjacent set as well. It follows that φ(L(^tx)) is an adjacent subset of M^1_{p×q}(D) ∪ {0}.

Hence, all we need to show is that for every adjacent subset S ⊂ M^1_{p×q}(D) ∪ {0} there exists ^ty ∈ ^tD^p such that S ⊂ L(^ty), or there exists w ∈ D^q such that S ⊂ R(w). So, let S ⊂ M^1_{p×q}(D) ∪ {0} be an adjacent subset. Assume that S is not contained in L(^ty) for any nonzero ^ty ∈ ^tD^p. Then we can find A = ^tab and B = ^tcd in S with ^ta and ^tc being linearly independent. But because A and B are adjacent, the vectors b and d must be linearly dependent. Hence, we may assume that A = ^taw and B = ^tcw for some nonzero w ∈ D^q.

We will prove that S ⊂ R(w). Indeed, let C ∈ S be any nonzero element. Then C = ^tgh for some nonzero vectors ^tg and h. We need to show that h and w are linearly dependent. Assume this is not true. Then, because A and C are adjacent we conclude that ^tg and ^ta are linearly dependent. Similarly, ^tg and ^tc are linearly dependent. It follows that ^ta and ^tc are linearly dependent, a contradiction.


In exactly the same way we prove the following.

Lemma 5.4. Let m, n, p, q be positive integers and φ : M_{m×n}(D) → M_{p×q}(D) an adjacency preserving map satisfying φ(0) = 0. Then for every nonzero z ∈ D^n either there exists ^ty ∈ ^tD^p such that

    φ(R(z)) ⊂ L(^ty),

or there exists w ∈ D^q such that

    φ(R(z)) ⊂ R(w).

Let m, n be positive integers. A map η : D^n → D^m is called a lineation if it maps any three collinear points into collinear points. Equivalently, for any pair of vectors a, b ∈ D^n there exist vectors c, d ∈ D^m such that

    η({a + λb : λ ∈ D}) ⊂ {c + λd : λ ∈ D}.

Similarly, a map ν : ^tD^n → ^tD^m is called a lineation if for every ^ta, ^tb ∈ ^tD^n there exist vectors ^tc, ^td ∈ ^tD^m such that ν({^ta + ^tbλ : λ ∈ D}) ⊂ {^tc + ^tdλ : λ ∈ D}. And finally, a map ξ : D^n → ^tD^m is called a lineation if for every a, b ∈ D^n there exist vectors ^tc, ^td ∈ ^tD^m such that ξ({a + λb : λ ∈ D}) ⊂ {^tc + ^tdλ : λ ∈ D}.

Let D be an EAS division ring, D ≠ F_2, m = n > 1 an integer, and η, ν, and ξ lineations as above. Let η(0) = 0, ν(0) = 0, and ξ(0) = 0. Assume further that these lineations are injective and that their ranges are not contained in any affine hyperplane. Then there exist invertible matrices A, B, C ∈ M_n(D), automorphisms τ_1, τ_2 : D → D, and an anti-automorphism σ : D → D such that

    η([ a_1  ...  a_n ]) = [ τ_1(a_1)  ...  τ_1(a_n) ] A    for every [ a_1  ...  a_n ] ∈ D^n,

    ν(^t[ a_1  ...  a_n ]) = B ^t[ τ_2(a_1)  ...  τ_2(a_n) ]    for every ^t[ a_1  ...  a_n ] ∈ ^tD^n, and

    ξ([ a_1  ...  a_n ]) = C ^t[ σ(a_1)  ...  σ(a_n) ]    for every [ a_1  ...  a_n ] ∈ D^n.

An interested reader can find the proof in the case that D is commutative in [1, page 104]. This version of the fundamental theorem of affine geometry is due to Schaeffer [18] who formulated and proved his result also for general (not necessarily commutative) division rings. Thus, the descriptions of the general forms of the maps η and ν have been known before, and the fact that the map ξ must be of the form described above can be easily verified by obvious and simple modifications of Schaeffer's proof given in Benz's monograph [1].


We continue with matrices over general (not necessarily EAS) division rings. Let ^tx ∈ ^tD^m be a nonzero vector. Then we can identify L(^tx) ⊂ M_{m×n}(D) with the left vector space D^n via the identification y ↦ ^txy, y ∈ D^n.

Lemma 5.5. Let m, n, p, q be positive integers, m, n ≥ 2. Let φ : M_{m×n}(D) → M_{p×q}(D) be an adjacency preserving map satisfying φ(0) = 0. Assume that rank φ(A) = 2 for every A ∈ M^2_{m×n}(D). Suppose further that for some nonzero vectors ^tx ∈ ^tD^m and ^ty ∈ ^tD^p we have

    φ(L(^tx)) ⊂ L(^ty).

And finally, assume that for every nonzero u ∈ D^n there exists a nonzero w ∈ D^q such that φ(R(u)) ⊂ R(w). Then the restriction of φ to L(^tx) is an injective lineation of L(^tx) into L(^ty).

Proof. Let A, B ∈ L(^tx) with A ≠ B. Then A and B are adjacent, and so are φ(A) and φ(B). In particular, φ(A) ≠ φ(B).

Let now a, b ∈ D^n be any vectors. We need to prove that there exist vectors c, d ∈ D^q such that

    φ({^tx(a + λb) : λ ∈ D}) ⊂ {^ty(c + λd) : λ ∈ D}.

In the case when b = 0, the set L = {^tx(a + λb) : λ ∈ D} is a singleton and there is nothing to prove. Thus, assume that b ≠ 0. If a and b are linearly dependent, then we may take a = 0. It follows that the line L is contained in L(^tx) ∩ R(b), and therefore φ(L) ⊂ L(^ty) ∩ R(d) = {^tyλd : λ ∈ D} for some nonzero vector d.

Hence, it remains to consider the case when a and b are linearly independent. We choose a vector ^tw ∈ ^tD^m such that ^tx and ^tw are linearly independent. Then, since also a and b are linearly independent, the matrix ^txa + ^twb has rank two. We know that φ(^txa) = ^tyc for some nonzero vector c, and φ(^txa + ^twb) is a rank two matrix adjacent to φ(^txa) = ^tyc. Hence, φ(^txa + ^twb) = ^tyc + ^tzd for some ^tz linearly independent of ^ty and some d linearly independent of c. Clearly, every member of L is adjacent to ^txa + ^twb. Therefore, every member of φ(L) is a rank one matrix of the form ^tyu adjacent to ^tyc + ^tzd, that is, ^ty(c − u) + ^tzd must be of rank one. But as ^ty and ^tz are linearly independent, the vectors c − u and d are linearly dependent. Because d is nonzero, we have u − c = λd for some λ ∈ D. This completes the proof.

In the same way one can prove the following analogue of the above lemma.

Lemma 5.6. Let m, n, p, q be positive integers, m, n ≥ 2. Let φ : M_{m×n}(D) → M_{p×q}(D) be an adjacency preserving map satisfying φ(0) = 0. Assume that rank φ(A) = 2 for every A ∈ M^2_{m×n}(D). Suppose further that for some nonzero vectors ^ty ∈ ^tD^m and x ∈ D^q we have

    φ(L(^ty)) ⊂ R(x).

And finally, assume that for every nonzero u ∈ D^n there exists a nonzero ^tw ∈ ^tD^p such that φ(R(u)) ⊂ L(^tw). Then the restriction of φ to L(^ty) is an injective lineation of L(^ty) into R(x).

The next lemma will be proved by a straightforward computation.


Lemma 5.7. Let n ≥ 2 be an integer, σ : D → D a nonzero anti-endomorphism, and d_0, d_1, ..., d_n ∈ D scalars such that

    d_0 + d_1σ(z_1) + ... + d_nσ(z_n) ≠ 0

for all n-tuples z_1, ..., z_n ∈ D. Then the map ξ : D^n → ^tD^n defined by

    ξ(z) = ξ([ z_1  ...  z_n ]) = ^t[ σ(z_1)  ...  σ(z_n) ] (d_0 + d_1σ(z_1) + ... + d_nσ(z_n))^{−1}

is an injective lineation.

Proof. Let u, y ∈ D^n be any two vectors with y ≠ 0. In order to prove that ξ is a lineation we need to show that all vectors ξ(u + λy) − ξ(u), λ ∈ D, belong to some one-dimensional subspace of ^tD^n. Actually, we will prove that all these vectors are scalar multiples of the vector

    ^tx = ^ty^σ − ^tu^σ (d_0 + a)^{−1} b,

where we have denoted a = d_1σ(u_1) + ... + d_nσ(u_n) and b = d_1σ(y_1) + ... + d_nσ(y_n). Indeed,

    ξ(u + λy) − ξ(u) = (^tu^σ + ^ty^σ σ(λ))(d_0 + a + bσ(λ))^{−1} − ^tu^σ (d_0 + a)^{−1}
    = ^tu^σ (d_0 + a)^{−1} [(d_0 + a) − (d_0 + a + bσ(λ))](d_0 + a + bσ(λ))^{−1} + ^ty^σ σ(λ)(d_0 + a + bσ(λ))^{−1}
    = ^tx σ(λ)(d_0 + a + bσ(λ))^{−1}.

It remains to show that ξ is injective. Assume that ξ(z) = ξ(w), that is,

    ^tz^σ (d_0 + d_1σ(z_1) + ... + d_nσ(z_n))^{−1} = ^tw^σ (d_0 + d_1σ(w_1) + ... + d_nσ(w_n))^{−1}.

We need to show that z = w. If z = 0, then clearly w = 0, and we are done. If not, then we have

    0 ≠ ^tz^σ = ^tw^σ α

for some α ∈ D. It follows that actually α ∈ σ(D), which further yields that w = βz for some β ∈ D. Set c = d_1σ(z_1) + ... + d_nσ(z_n). Then ξ(w) = ξ(z) yields

    ^tz^σ σ(β)(d_0 + cσ(β))^{−1} = ^tz^σ (d_0 + c)^{−1},

which implies that σ(β)(d_0 + cσ(β))^{−1} = (d_0 + c)^{−1}. From here we immediately get that σ(β) = 1, and consequently, β = 1, or equivalently, z = w.
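Over D = C with σ the complex conjugation (an anti-automorphism, since C is commutative) the lemma can be illustrated numerically; in the spirit of Remark 5.8 below we only ask that the denominator be nonzero at the sampled points. The following sketch (Python with numpy; coefficients and points are arbitrary test data) checks that three collinear points have collinear ξ-images.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 3
    d = rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)   # d_0, d_1, ..., d_n

    def xi(z):
        # xi(z) = conj(z) * (d_0 + d_1*conj(z_1) + ... + d_n*conj(z_n))^(-1), sigma = conjugation
        return np.conj(z) / (d[0] + np.dot(d[1:], np.conj(z)))

    def collinear(p, q, r):
        # three points are collinear iff q - p and r - p span a space of dimension <= 1
        return np.linalg.matrix_rank(np.column_stack([q - p, r - p]), tol=1e-9) <= 1

    u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    pts = [u + lam * y for lam in (0.0, 1.0, 2.5)]      # three collinear points u + lambda*y

    print(collinear(*pts), collinear(*[xi(p) for p in pts]))   # True True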

Remark 5.8. Actually, we have proved a little bit more. Let n ≥ 2 be an integer, σ : D → D a nonzero anti-endomorphism, and d_0, d_1, ..., d_n ∈ D arbitrary scalars. We denote by 𝒟 ⊂ D^n the set of all 1 × n matrices [ z_1  z_2  ...  z_n ] satisfying d_0 + d_1σ(z_1) + ... + d_nσ(z_n) ≠ 0. Then the map ξ : 𝒟 → ^tD^n defined by

    ξ([ z_1  ...  z_n ]) = ^t[ σ(z_1)  ...  σ(z_n) ] (d_0 + d_1σ(z_1) + ... + d_nσ(z_n))^{−1}

maps "lines" into lines. With "line" we mean an intersection of a line in D^n and the subset 𝒟. In other words, if three points in D^n are collinear and all of them belong to 𝒟, then their ξ-images are collinear as well.


Lemma 5.9. Let n ≥ 3 be an integer, D ≠ F_2, and ξ : D^n → ^tD^n an injective lineation satisfying ξ(0) = 0. Assume that σ : D → D is a nonzero anti-endomorphism, and c, d_2, ..., d_n ∈ D are scalars such that

    c + d_2σ(z_2) + ... + d_nσ(z_n) ≠ 0

and

    ξ([ 1  z_2  ...  z_n ]) = ^t[ 1  σ(z_2)  ...  σ(z_n) ] (c + d_2σ(z_2) + ... + d_nσ(z_n))^{−1}

for all z_2, ..., z_n ∈ D.

Then there exist d_0, d_1 ∈ D such that d_0 + d_1 = c and for all z_1, ..., z_n ∈ D we have

    d_0 + d_1σ(z_1) + ... + d_nσ(z_n) ≠ 0

and

(21)    ξ([ z_1  ...  z_n ]) = ^t[ σ(z_1)  ...  σ(z_n) ] (d_0 + d_1σ(z_1) + ... + d_nσ(z_n))^{−1}.

Proof. We have

ξ

1

0

. . .

0

=


c

1

0

..

.

0


.

We choose and fix λ

D satisfying λ = 0, 1. Because ξ(0) = 0 and ξ is an injective

lineation we have

ξ

λ

0

. . .

0

=


α

0

..

.

0


for some nonzero α

D. Since λ = 1, we have σ(λ) = 1. We set

d

1

=

α

1

σ(λ)

− c

(σ(λ)

1)

1

and d

0

= c

− d

1

. We have d

0

= 0, since otherwise we would have c = α

1

, which

would imply ξ

1

0

. . .

0

= ξ

λ

0

. . .

0

, a contradiction.

We will verify that (21) holds in the special case when z

1

= λ and z

2

= . . . =

z

n

= 0. We need to show that


σ(λ)

0

..

.

0


(d

0

+ d

1

σ(λ))

1

=


α

0

..

.

0


.

All we need is to calculate the upper entry on the left hand-side. We have

σ(λ) (d

0

+ d

1

σ(λ))

1

= σ(λ) (c

− d

1

+ d

1

σ(λ))

1

= σ(λ) (c + d

1

(σ(λ)

1))

1

= σ(λ) (c + (α

1

σ(λ)

− c))

1

= α,

as desired.


Let

D be defined as in Remark 5.8 and the map τ : D →

t

D

n

by

τ

z

1

. . .

z

n

=


σ(z

1

)

..

.

σ(z

n

)


⎦ (d

0

+ d

1

σ(z

1

) + . . . + d

n

σ(z

n

))

1

.

Of course, we have

(22)

τ (0) = ξ(0),

τ

λ

0

. . .

0

= ξ

λ

0

. . .

0

and

(23)

τ

1

z

2

. . .

z

n

= ξ

1

z

2

. . .

z

n

for all z

2

, . . . , z

n

D.

If

D is finite, then it is commutative and we do not need to distinguish between

left and right vector spaces, and every nonzero anti-endomorphism is actually an
automorphism. Of course, in this special case we have d

2

= . . . = d

n

. The desired

conclusion follows from [1, A.3.1], but can also be verified directly by a simple and
rather straightforward proof.

So, we will assume from now on that the division ring has infinitely many

elements. We will distinguish two cases. The first one is that at least one of scalars
d

2

, . . . , d

n

is nonzero. With no loss of generality we then assume that d

2

= 0. The

second possibility is, of course, that d

2

= . . . = d

n

= 0. Choose and fix any z

1

D

with z

1

= 0, 1, λ. In the second case we further assume that d

0

+ d

1

σ(z

1

)

= 0. Then

in both cases we can find nonzero scalars u, v

D, u = v, such that

d

0

+ d

1

σ(z

1

) + d

2

σ(u)

= 0 and d

0

+ d

1

σ(z

1

) + d

2

σ(v)

= 0.

It is straightforward to check that the point

z

1

u

0

. . .

0

belongs to the line

l

1

through points

0

0

. . .

0

and

1

z

1

1

u

0

. . .

0

as well as to the line l

2

through points

λ

0

. . .

0

and

1

(1

− λ)(z

1

− λ)

1

u

0

. . .

0

.

Let k

j

, m

j

t

D

n

, j = 1, 2, be the lines with ξ(l

j

)

⊂ k

j

and τ (l

j

)

⊂ m

j

. By (22)

and (23) we have k

j

= m

j

, j = 1, 2. Furthermore, k

1

= k

2

, since otherwise

ξ(0) = 0

∈ k

1

and

ξ(

λ

0

. . .

0

) =


α

0

..

.

0


∈ k

2

would imply that k

1

= k

2

is a line



μ

0

..

.

0


: μ

D


contradicting ξ(

1

z

1

1

u

0

. . .

0

)

∈ k

1

.

It follows that the intersection k

1

∩k

2

contains at most one point. On the other

hand,

τ

z

1

u

0

. . .

0

, ξ

z

1

u

0

. . .

0

∈ k

1

∩ k

2

,


and therefore, τ

z

1

u

0

. . .

0

= ξ

z

1

u

0

. . .

0

. Thus,

ξ(

z

1

u

0

. . .

0

) =


σ(z

1

)

σ(u)

0

..

.

0


(d

0

+ d

1

σ(z

1

) + d

2

σ(u))

1

,

and similarly,

ξ(

z

1

v

0

. . .

0

) =


σ(z

1

)

σ(v)

0

..

.

0


(d

0

+ d

1

σ(z

1

) + d

2

σ(v))

1

.

Clearly, ξ(

z

1

0

0

. . .

0

) belongs to the line through the points

ξ(

0

0

0

. . .

0

)

and

ξ(

1

0

0

. . .

0

)

as well as to the line through the points

ξ(

z

1

u

0

. . .

0

)

and

ξ(

z

1

v

0

. . .

0

).

Therefore,

ξ(

z

1

0

0

. . .

0

) =



0
0

..

.

0


=


σ(z

1

)

σ(u)

0

..

.

0


a

1

+



σ(z

1

)

σ(v)

0

..

.

0


b

1


σ(z

1

)

σ(u)

0

..

.

0


a

1


y

for some y

D. Here, a = d

0

+ d

1

σ(z

1

) + d

2

σ(u) and b = d

0

+ d

1

σ(z

1

) + d

2

σ(v).

Because σ(u)a

1

= 0, we necessarily have

σ(v)b

1

− σ(u)a

1

= 0,

or equivalently,

(v

1

)

− aσ(u

1

)

= 0.

This yields (d

0

+ d

1

σ(z

1

)) (σ(v

1

)

− σ(u

1

))

= 0, and since u = v, we finally

conclude that d

0

+ d

1

σ(z

1

)

= 0. In other words,

z

1

0

0

. . .

0

∈ D. Now, in

the same way as above we conclude that

ξ(

z

1

0

0

. . .

0

) = τ (

z

1

0

0

. . .

0

)

(24)

=


σ(z

1

)

0

..

.

0


(d

0

+ d

1

σ(z

1

))

1

.


We have proved this for all z

1

D satisfying z

1

= 0, 1, λ and in the case that d

2

=

. . . = d

n

= 0 we needed the additional assumption that d

0

+ d

1

σ(z

1

)

= 0. But we

already know that (24) holds true for z

1

∈ {0, 1, λ}. Thus,

z

1

0

0

. . .

0

∈ D

and (24) holds true for all scalars z

1

except in the case when d

2

= . . . = d

n

= 0

and d

0

+ d

1

σ(z

1

) = 0.

We assume now that

z

1

z

2

z

3

. . .

z

n

is any matrix satisfying the follow-

ing condtions:

• z

1

= 0, 1,

at least one of the scalars z

2

, . . . , z

n

is nonzero, and

in the so called second case (the case when d

2

= . . . = d

n

= 0) we

additionally assume that d

0

+ d

1

σ(1 + z

1

)

= 0.

It is our aim to prove that then

z

1

z

2

z

3

. . .

z

n

∈ D and (21) holds true.

Assume for a moment that we have already proved the above statement. In

particular, if we are dealing with the case when d

2

= . . . = d

n

= 0, then we get

that d

0

+ d

1

σ(z

1

)

= 0 whenever z

1

= 0, 1 and d

0

+ d

1

+ d

1

σ(z

1

)

= 0. Thus,

we conclude that if d

0

+ d

1

σ(w) = 0 for some w

D, then either w = 0, or

w = 1, or d

0

+ d

1

+ d

1

σ(w) = 0. The first two possibilities cannot occur because

d

0

= 0 and d

0

+ d

1

= 0, while in the last case we would have simultaneously

d

0

+ d

1

σ(w) = 0 and d

0

+ d

1

+ d

1

σ(w) = 0 yielding that d

1

= 0.

But then

d

0

+ d

1

σ(w) = d

0

= 0, a contradiction. Hence, once we will prove our statement

we will see that the third condition is automatically satisfied. And moreover, we
know now that

z

1

0

0

. . .

0

∈ D and (24) holds for every z

1

D.

So, let us now prove the above statement. We will first consider the case when

z

1

= 1. Clearly, ξ(

z

1

z

2

. . .

z

n

) belongs to the line through the points

ξ(

0

0

0

. . .

0

)

and

ξ(

1

z

1

1

z

2

z

1

1

z

3

. . .

z

1

1

z

n

) =


σ(z

1

)

σ(z

2

)

..

.

σ(z

n

)


ν

where ν = σ(z

1

)

1

(d

0

+ d

1

+ d

2

σ(z

1

1

z

2

) + . . . + d

n

σ(z

1

1

z

n

))

1

. It follows that

(25)

ξ(

z

1

z

2

. . .

z

n

) =


σ(z

1

)

σ(z

2

)

..

.

σ(z

n

)


x

for some x

D. On the other hand,

z

1

z

2

. . .

z

n

=

z

1

+ 1

0

. . .

0

+z

1

1

(

1

z

1

z

2

. . .

z

1

z

n

z

1

+ 1

0

. . .

0

).

It follows that

ξ(

z

1

z

2

. . .

z

n

) =


1 + σ(z

1

)

0

..

.

0


e

1


(26)



1

σ(z

1

z

2

)

..

.

σ(z

1

z

n

)


d

1


1 + σ(z

1

)

0

..

.

0


e

1


y

for some y

D. Here e = d

0

+ d

1

+ d

1

σ(z

1

) and d = d

0

+ d

1

+ d

2

σ(z

1

z

2

) + . . . +

d

n

σ(z

1

z

n

).

We know that at least one of the scalars z

2

, . . . , z

n

is nonzero. With no loss of

generality we assume that z

2

= 0. Comparing the second entries of (25) and (26)

we arrive at

x = σ(z

1

)d

1

y.

Applying this equation and comparing the first entries of (25) and (26) we get

((σ(z

1

)

2

1)d

1

+ (σ(z

1

) + 1)e

1

)y = (1 + σ(z

1

))e

1

= 0.

It follows that

(σ(z

1

)

2

1)d

1

+ (σ(z

1

) + 1)e

1

= 0,

or equivalently, σ(1

− z

1

)d

1

− e

1

= 0. This further yields

d

− eσ(1 − z

1

)

= 0,

which is easily seen to be equivalent to

d

0

+ d

1

σ(z

1

) + . . . + d

n

σ(z

n

)

= 0.

It follows that

z

1

z

2

. . .

z

n

∈ D and then we see as above that (21) holds.

It remains to prove our statement in the case when z

1

=

1. If 1 = 1, we are

done. Otherwise, 2 is invertible. We observe that then

ξ(

1 z

2

. . .

z

n

) =


1

σ(z

2

)

..

.

σ(z

n

)


x

for some x

D and if β ∈ D is different from 1, then

1 z

2

. . .

z

n

=

1

0

. . .

0

2(β − 1)

1

β

1
2

(β

1)z

2

. . .

1
2

(β

1)z

n

1

0

. . .

0

.

We complete the proof in this case in exactly the same way as above.

If we summarize all the facts obtained so far, then we know that

z

1

z

2

z

3

. . .

z

n

∈ D

and (21) holds for all

z

1

z

2

z

3

. . .

z

n

D

n

satisfying z

1

= 0. So, assume

now that z

2

, . . . , z

n

are any scalars not all of them being zero. We must show that

0

z

2

z

3

. . .

z

n

∈ D and

ξ

0

z

2

. . .

z

n

=


0

σ(z

2

)

..

.

σ(z

n

)


(d

0

+ d

2

σ(z

2

) + . . . + d

n

σ(z

n

))

1

.

As similar ideas as above work in this case as well we leave the details to the reader.


The next lemma will be given without proof. It can be easily verified by a

straightforward computation.

Lemma 5.10. Let ^tu ∈ ^tD^n and v ∈ D^n be vectors satisfying v^tu ≠ −1. Then the matrix I + ^tuv is invertible and

    (I + ^tuv)^{−1} = I − ^tu (1 + v^tu)^{−1} v.
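Lemma 5.10 is a rank-one update formula for the inverse (a Sherman–Morrison-type identity). Over the reals it can be confirmed numerically; the sketch below (Python with numpy; vectors and sizes are arbitrary test data) is a direct check.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 4
    u = rng.standard_normal((n, 1))                   # the column vector  ^t u
    v = rng.standard_normal((1, n))                   # the row vector     v
    assert not np.isclose((v @ u).item(), -1.0)       # v ^t u != -1, so I + ^t u v is invertible

    lhs = np.linalg.inv(np.eye(n) + u @ v)
    rhs = np.eye(n) - u @ v / (1.0 + (v @ u).item())
    print(np.allclose(lhs, rhs))                      # True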

We continue with some lemmas concerning adjacency of subspaces in a vector

space.

Lemma 5.11. Let n, r be integers, 1 ≤ r ≤ n − 4. Let a, b, c, d, g_1, ..., g_r ∈ D^n be linearly independent vectors. Assume that W ⊂ D^n is an (r + 2)-dimensional linear subspace such that for every λ ∈ D the subspaces W and

    U_λ = span {a + c, b + λc, g_1, ..., g_r}

are adjacent, and for every λ ∈ D the subspaces W and

    Z_λ = span {a + λd, b + d, g_1, ..., g_r}

are adjacent. Then

    span {g_1, ..., g_r} ⊂ W.

Proof. We know that

U

1

is adjacent to

W and that Z

0

is adjacent to

W. Our

goal is to show that

(27)

dim(

U

1

∩ W ∩ Z

0

)

≥ r.

Assume for a moment we have already proved this. A straightforward computation
shows that

U

1

∩ Z

0

= span

{g

1

, . . . , g

r

}.

It then follows from (27) that

U

1

∩ W ∩ Z

0

=

U

1

∩ Z

0

= span

{g

1

, . . . , g

r

}.

The conclusion of our lemma follows directly from the above equation.

Assume that (27) does not hold. Then because dim

U

1

∩ W = r + 1 we have

U

1

∩ W = (U

1

∩ W ∩ Z

0

)

⊕ Y

where

Y ⊂ D

n

is a subspace with dim

Y ≥ 2. Now, we obviously have

Y ⊂ W, Y ∩ Z

0

=

{0}, and dim Y ≥ 2,

which contradicts the fact that

W and Z

0

are adjacent.

Lemma 5.12. Let n, r be integers, 0 ≤ r ≤ n − 4. Let a, b, c, g_1, ..., g_r ∈ D^n be linearly independent vectors. Assume that W ⊂ D^n is an (r + 2)-dimensional linear subspace such that

    span {g_1, ..., g_r} ⊂ W

and for every λ ∈ D the subspaces W and

    span {a + c, b + λc, g_1, ..., g_r}

are adjacent. Suppose also that a + c ∉ W. Then there exist scalars γ, δ ∈ D such that

    W = span {b + γ(a + c), c + δ(a + c), g_1, ..., g_r}.


Proof. There exists a nonzero z

D

n

such that

z

∈ W ∩ span {a + c, b, g

1

, . . . , g

r

}

and z

span {g

1

, . . . , g

r

}. Such a z must be of the form

z = αb + β(a + c) +

r

j=1

μ

j

g

j

.

If α = 0, then we would have β

= 0 which would imply a + c ∈ W, a contradiction.

Therefore,

b + γ(a + c)

∈ W

for some γ

D.

Replacing the subspace span

{a+c, b, g

1

, . . . , g

r

} by span {a+c, b+c, g

1

, . . . , g

r

}

in the above considerations we arrive at

b + c + γ

(a + c)

∈ W

for some γ

D. It follows that

c + δ(a + c)

∈ W.

Here, δ = γ

− γ.

In order to complete the proof we need to verify that vectors b + γ(a + c) and

c + δ(a + c) are linearly independent. The verification is trivial.

Lemma 5.13. Let n, r be integers, 0 ≤ r ≤ n − 4. Let a, b, c, d, g_1, ..., g_r ∈ D^n be linearly independent vectors. Assume that W ⊂ D^n is an (r + 2)-dimensional linear subspace such that for every λ ∈ D the subspaces W and

    span {a + c, b + λc, g_1, ..., g_r}

are adjacent, and for every λ ∈ D the subspaces W and

    span {a + λd, b + d, g_1, ..., g_r}

are adjacent. Then

    W = span {a + c, b + d, g_1, ..., g_r}    or    W = span {a, b, g_1, ..., g_r}.

Proof. According to Lemma 5.11 we have g

1

, . . . , g

r

∈ W. Assume that a + c ∈

W. Then, by the previous lemma there exist scalars γ, δ ∈ D such that

W = span {b + γ(a + c), c + δ(a + c), g

1

, . . . , g

r

}.

By the assumptions, there exists

z

∈ W ∩ span {a, b + d, g

1

, . . . , g

r

}

such that z

span {g

1

, . . . , g

r

}. Therefore

z = α(b + γ(a + c)) + β(c + δ(a + c)) + h

1

= σa + τ (b + d) + h

2

for some scalars α, β, σ, τ with (α, β)

= (0, 0) and some h

1

, h

2

span {g

1

, . . . , g

r

}.

It follows that

αγ + βδ

− σ = 0, α − τ = 0, αγ + β + βδ = 0, and τ = 0.


Thus, α = 0, and consequently, β

= 0, which yields that δ = 1. This implies that

W = span {b + γc, a, g

1

, . . . , g

r

}.

We continue by finding

v

∈ W ∩ span {a + d, b + d, g

1

, . . . , g

r

}

such that v

span {g

1

, . . . , g

r

}. Using exactly the same arguments as above we

conclude that γ = 0, and consequently,

W = span {a, b, g

1

, . . . , g

r

}.

If b+d

∈ W, then in the same way we conclude that W = span {a, b, g

1

, . . . , g

r

}.

The proof is completed.

Remark 5.14. The readers that are familiar with the theory of Grassmannians and, in particular, with the structure of maximal adjacent sets, can prove the above lemma directly without using Lemmas 5.11 and 5.12. All that one needs to observe is that each of the (r + 2)-dimensional subspaces span {a + c, b + λc, g_1, ..., g_r} contains the (r + 1)-dimensional subspace span {a + c, g_1, ..., g_r} and is contained in the (r + 3)-dimensional subspace span {a, b, c, g_1, ..., g_r}. Moreover, span {a + c, b + λc, g_1, ..., g_r} and span {a + c, b + μc, g_1, ..., g_r} are adjacent whenever λ ≠ μ. It follows from the well-known structural result for maximal adjacent subsets of Grassmannians that W either contains span {a + c, g_1, ..., g_r} or is contained in span {a, b, c, g_1, ..., g_r}. Similarly, the subspace W either contains span {b + d, g_1, ..., g_r} or is contained in span {a, b, d, g_1, ..., g_r}. Since span {a + c, g_1, ..., g_r} is not contained in span {a, b, d, g_1, ..., g_r} and span {b + d, g_1, ..., g_r} is not contained in span {a, b, c, g_1, ..., g_r}, the subspace W either contains both span {a + c, g_1, ..., g_r} and span {b + d, g_1, ..., g_r}, or is contained in both span {a, b, c, g_1, ..., g_r} and span {a, b, d, g_1, ..., g_r}. This completes the proof.
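The subspace configurations appearing in Lemmas 5.11-5.13 can be explored numerically over D = R. The sketch below (Python with numpy; vectors and parameters are arbitrary test data, with r = 1 and n = 7) confirms that the subspace W = span {a + c, b + d, g_1, ..., g_r} from the first alternative of Lemma 5.13 is indeed adjacent to every sampled member of the two families used in the hypotheses.

    import numpy as np

    rng = np.random.default_rng(7)
    n, r = 7, 1
    a, b, c, d = (rng.standard_normal(n) for _ in range(4))
    g = [rng.standard_normal(n) for _ in range(r)]

    def intersection_dim(U, V):
        rk = np.linalg.matrix_rank
        return rk(U) + rk(V) - rk(np.vstack([U, V]))

    W = np.vstack([a + c, b + d] + g)                        # an (r + 2)-dimensional subspace
    for lam in (-1.0, 0.0, 2.0, 5.0):
        U = np.vstack([a + c, b + lam * c] + g)              # the family U_lambda
        Z = np.vstack([a + lam * d, b + d] + g)              # the family Z_lambda
        print(intersection_dim(W, U) == r + 1, intersection_dim(W, Z) == r + 1)   # True True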

Lemma 5.15. Let E ⊂ D be two division rings, k and n positive integers, 2 ≤ k ≤ n, and A = [a_{ij}] ∈ M_n(E) a matrix such that rank A = k and a_{ij} = 0 whenever j > k (the matrix A has nonzero entries only in the first k columns). Let [ X  Y ] ∈ M_{n×2n}(D) be a matrix of rank n with X and Y both n × n matrices. Assume that for every B = [b_{ij}] ∈ M_n(E) satisfying

• b_{ij} = 0 whenever j > k,
• there exists an integer r, 1 ≤ r ≤ k, such that b_{ir} = 0, i = 1, ..., n (that is, one of the first k columns of B is zero), and
• A and B are adjacent,

the row spaces of the matrices [ X  Y ] and [ I  B ] are adjacent. Then there exists an invertible matrix P ∈ M_n(D) such that

    [ X  Y ] = P [ I  A ].

In the case when k = 2 we have the additional possibility that X is invertible and Y = 0.

Proof. Due to our assumptions on the matrix A we know that the first k

columns of A are linearly independent, and all the other columns are zero. It
follows that there exists an invertible n

× n matrix C with entries in E such that

CA =

I

k

0

0

0

,


where I

k

stands for the k

× k identity matrix. The matrix B ∈ M

n

(

E) satisfies

the three conditions in the statement of our lemma if and only if the matrix CB

M

n

(

E) satisfies the first two conditions and CA and CB are adjacent. The row

spaces of matrices

X

Y

and

I

B

are adjacent if and only if the row spaces

of matrices

XC

1

Y

=

X

Y

C

1

0

0

I

and

I

CB

= C

I

B

C

1

0

0

I

are adjacent. And finally, we have

X

Y

= P

I

A

if and only if

XC

1

Y

= P C

1

I

CA

.

Thus, we may assume with no loss of generality that

A =

I

k

0

0

0

.

Let us denote the row space of the matrix

X

Y

by

W. Then choosing first

B = E

11

+ λE

21

+

k

j=3

E

jj

and then

B = E

22

+ λE

12

+

k

j=3

E

jj

we see that for every λ

D the n-dimensional subspace W ⊂ D

2n

is adjacent to

span

{e

1

+ e

n+1

, e

2

+ λe

n+1

, e

3

+ e

n+3

, . . . , e

k

+ e

n+k

, e

k+1

, . . . , e

n

}

as well as to

span

{e

1

+ λe

n+2

, e

2

+ e

n+2

, e

3

+ e

n+3

, . . . , e

k

+ e

n+k

, e

k+1

, . . . , e

n

}.

Applying Lemma 5.13 we conclude that either

W = span {e

1

+ e

n+1

, e

2

+ e

n+2

, e

3

+ e

n+3

, . . . , e

k

+ e

n+k

, e

k+1

, . . . , e

n

},

or equivalently,

X

Y

= P

I

A

for some invertible P

∈ M

n

(

D); or

W = span {e

1

, e

2

, e

3

+ e

n+3

, . . . , e

k

+ e

n+k

, e

k+1

, . . . , e

n

},

or equivalently,

X

Y

= P

I

E

33

+ . . . + E

kk

for some invertible P

∈ M

n

(

D).

In the first case we are done. All we need to do to complete the proof is to show
that the second possibility cannot occur when k

3. Indeed, in this case the row

spaces of matrices

I

E

33

+ . . . + E

kk

and

I

E

11

+ . . . + E

k

1,k−1

would not

be adjacent, a contradiction.

Lemma 5.16. Let E ⊂ D be two division rings, D ≠ F_2, k and n positive integers, 2 ≤ k ≤ n, and A = [a_{ij}] ∈ M_n(E) a nonzero matrix such that rank A < k and a_{ij} = 0 whenever j > k. Let [ X  Y ] ∈ M_{n×2n}(D) be a matrix of rank n with X and Y both n × n matrices. Assume that for every B = [b_{ij}] ∈ M_n(E) satisfying

• b_{ij} = 0 whenever j > k,
• rank B = rank A + 1, and
• A and B are adjacent,

the row spaces of the matrices [ X  Y ] and [ I  B ] are adjacent. Then there exists an invertible matrix P ∈ M_n(D) such that

    [ X  Y ] = P [ I  A ].

Proof. Applying the same argument as in the proof of Lemma 5.15 we may

assume with no loss of generality that

A =

I

r

0

0

0

,

where r = rank A

∈ {1, . . . , k − 1}. Let us denote the row space of the matrix

X

Y

by

W. Then choosing first

B = E

11

+ . . . + E

rr

+ λE

r+1,r+1

+ μE

r,r+1

and then

B = E

11

+ . . . + E

rr

+ λE

r+1,r+1

+ μE

r+1,r

we see that for every pair of scalars λ, μ

D with λ = 0 the n-dimensional subspace

W ⊂ D

2n

is adjacent to

U(λ, μ) = span {e

1

+ e

n+1

, . . . , e

r

1

+ e

n+r

1

, e

r

+ e

n+r

+ μe

n+r+1

,

e

r+1

+ λe

n+r+1

, e

r+2

, . . . , e

n

}

as well as to

Z(λ, μ) = span {e

1

+ e

n+1

, . . . , e

r

1

+ e

n+r

1

, e

r

+ e

n+r

,

e

r+1

+ μe

n+r

+ λe

n+r+1

, e

r+2

, . . . , e

n

}.

As in Lemma 5.11 we prove that

dim(

U(1, 1) ∩ W ∩ Z(1, 1)) ≥ n − 2,

and obviously,

U(1, 1) ∩ Z(1, 1) = span {e

1

+ e

n+1

, . . . , e

r

1

+ e

n+r

1

, e

r+2

, . . . , e

n

}.

As in Lemma 5.11 we conclude that

span

{e

1

+ e

n+1

, . . . , e

r

1

+ e

n+r

1

, e

r+2

, . . . , e

n

} ⊂ W.

Our next goal is to show that e

r

+ e

n+r

∈ W. Assume that this is not true.

For every pair λ, μ

D, λ = 0, there exists a nonzero z(λ, μ) D

2n

such that

z(λ, μ)

∈ W ∩ Z(λ, μ)

and z(λ, μ)

span {e

1

+e

n+1

, . . . , e

r

1

+e

n+r

1

, e

r+2

, . . . , e

n

}. Such a z(λ, μ) must

be of the form

z(λ, μ) = α(e

r+1

+ μe

n+r

+ λe

n+r+1

) + β(e

r

+ e

n+r

)

+

r

1

j=1

τ

j

(e

j

+ e

n+j

) +

n

j=r+2

τ

j

e

j

.

If α = 0, then we would have β

= 0 which would imply e

r

+ e

n+r

∈ W, a contra-

diction. Therefore, if σ

D \ {0, 1}, then

w

1

= e

r+1

+ e

n+r+1

+ γ

1

(e

r

+ e

n+r

)

∈ W,

w

2

= e

r+1

+ e

n+r

+ e

n+r+1

+ γ

2

(e

r

+ e

n+r

)

∈ W,


and

w

3

= e

r+1

+ +σe

n+r+1

+ γ

3

(e

r

+ e

n+r

)

∈ W

for some γ

j

D, j = 1, 2, 3. It follows that

span

{w

1

, w

2

, w

3

, e

1

+ e

n+1

, . . . , e

r

1

+ e

n+r

1

, e

r+2

, . . . , e

n

} ⊂ W,

implying that dim

W ≥ n + 1, a contradiction.

We have thus proved that e

r

+ e

n+r

∈ W. In order to complete the proof

we have to show that e

r+1

∈ W as well. Using the fact that W and U(1, 1) are

adjacent, we see that either

• e

r+1

∈ W; or

• e

r+1

+ δe

n+r+1

∈ W for some nonzero δ ∈ D; or

• e

n+r+1

∈ W.

In the first case we are done. It remains to show that the other two cases cannot
occur. In the second case we would have

W = U(δ, 0)

contradicting the fact that these two subspaces are adjacent. And finally, to show
that the last possibility leads to a contradiction we choose the matrix

B = E

11

+ . . . + E

rr

+ (E

rr

+ E

r,r+1

+ E

r+1,r

+ E

r+1,r+1

).

The verification that in this case

W and the row space of

I

B

are not adjacent

is straightforward and left to the reader.

We will complete this section by a result on order, rank, and adjacency preserving maps on the set of idempotent matrices. It should be mentioned that order preserving maps on P_n(D) have already been studied in [21] and [22] under the additional assumption of injectivity (and the commutativity of the division ring D in [21]). We do not have the injectivity assumption here. On the other hand, the adjacency preserving property implies a certain weak form of injectivity. Namely, if P, Q ∈ P_n(D) are adjacent, then ϕ(P) ≠ ϕ(Q). Moreover, we will assume that rank is preserved, which yields that ϕ(P) ≠ ϕ(Q) also in the case when P and Q have different ranks. Therefore it is not surprising that the proof of the next lemma does not contain essentially new ideas.

Lemma

5.17. Let

D = F

2

be a division ring, n an integer

3, and ϕ :

P

n

(

D) → P

n

(

D) an order, rank, and adjacency preserving map. We further as-

sume that for every nonzero

t

y

t

D

n

either there exists a nonzero x

D

n

such that ϕ(P L(

t

y))

⊂ P R(x), or there exists a nonzero

t

w

t

D

n

such that

ϕ(P L(

t

y))

⊂ P L(

t

w). Similarly, we assume that for every nonzero z

D

n

ei-

ther there exists a nonzero x

D

n

such that ϕ(P R(z))

⊂ P R(x), or there exists

a nonzero

t

w

t

D

n

such that ϕ(P R(z))

⊂ P L(

t

w). And finally, we suppose that

there exists a nonzero

t

y

0

t

D

n

such that ϕ(P L(

t

y

0

))

⊂ P R(x

0

) for some nonzero

x

0

D

n

.

Then either

ϕ(P

1

n

(

D)) ⊂ P R(x

0

),

or for every linearly independent n-tuple

t

y

1

, . . . ,

t

y

n

t

D

n

and every linearly inde-

pendent n-tuple z

1

, . . . , z

n

D

n

there exist linearly independent n-tuples x

1

, . . . , x

n

background image

5.1. PRELIMINARY RESULTS

45

D

n

and

t

w

1

, . . . ,

t

w

n

t

D

n

such that

ϕ(P L(

t

y

i

))

⊂ P R(x

i

)

and

ϕ(P R(z

i

))

⊂ P L(

t

w

i

),

i = 1, . . . , n.

Proof. We will first show that for every nonzero

t

w

t

D

n

we have ϕ(P L(

t

w))

P R(u) for some u

D

n

, u

= 0. Assume on the contrary that there exists a nonzero

t

w

t

D

n

such that ϕ(P L(

t

w))

⊂ P L(

t

z) for some nonzero

t

z

t

D

n

. The inter-

section P R(x

0

)

∩ P L(

t

z) contains at most one element. Indeed, if x

0

t

z

= 0, then

P R(x

0

)

∩ P L(

t

z) =

{

t

z(x

0

t

z)

1

x

0

} and otherwise P R(x

0

)

∩ P L(

t

z) is the empty

set. It follows that the vectors

t

w and

t

y

0

must be linearly independent. Since

n

3 we can find linearly independent vectors a, b, c ∈ D

n

satisfying

a

t

y

0

= b

t

w = 1

and

a

t

w = b

t

y

0

= c

t

y

0

= c

t

w = 0.

The rank one idempotents ϕ(

t

y

0

a +

t

y

0

b)

∈ P R(x

0

) and ϕ(

t

wa +

t

wb)

∈ P L(

t

z)

are adjacent. Hence, one of them must belong to the intersection P R(x

0

)

∩P L(

t

z).

We will consider just one of the two possibilities, say

(28)

ϕ(

t

y

0

a +

t

y

0

b) = R,

where we have denoted R =

t

z(x

0

t

z)

1

x

0

. The pair of rank one idempotents

ϕ(

t

y

0

a +

t

y

0

b +

t

y

0

c)

∈ P R(x

0

) and ϕ(

t

wa +

t

wb +

t

wc)

∈ P L(

t

z) is adjacent

as well. Hence, one of them must be equal to R. But ϕ(

t

y

0

a +

t

y

0

b +

t

y

0

c) is

adjacent to ϕ(

t

y

0

a +

t

y

0

b), and theorefore (28) yields that

(29)

ϕ(

t

wa +

t

wb +

t

wc) = R.

Finally, as

D = F

2

we can find λ

D with λ = 0, 1, and then we consider the pair of

adjacent rank one idempotents ϕ(

t

y

0

a +

t

y

0

λb +

t

y

0

c)

∈ P R(x

0

) and ϕ(

t

1

a +

t

wb +

t

1

c)

∈ P L(

t

z). As before we conclude that one of them is equal to R

contradicting either (28), or (29).

Our next goal is to prove that either for every nonzero z

D

n

there exists a

nonzero

t

w

t

D

n

such that ϕ(P R(z))

⊂ P L(

t

w), or ϕ(P

1

n

(

D)) ⊂ P R(x

0

). The

same proof as above yields that either for every nonzero z

D

n

there exists a

nonzero

t

w

t

D

n

such that ϕ(P R(z))

⊂ P L(

t

w), or for every nonzero z

D

n

there exists a nonzero u

D

n

such that ϕ(P R(z))

⊂ P R(u). All we have to do is

to show that the second possibility implies that ϕ(P

1

n

(

D)) ⊂ P R(x

0

). In order to

get this inclusion we have to show ϕ(P R(z))

⊂ P R(x

0

) for every nonzero z

D

n

.

If z

t

y

0

= 0, then

t

y

0

(z

t

y

0

)

1

z

∈ P L(

t

y

0

), and consequently, ϕ(

t

y

0

(z

t

y

0

)

1

z) =

t

ax

0

for some

t

a

t

D

n

with x

t

0

a = 1. We know that ϕ(P R(z))

⊂ P R(u) for

some nonzero u

D

n

. Thus, ϕ(P R(z))

⊂ P R(x

0

) for every z

D

n

satisfying

z

t

y

0

= 0. If z

t

y

0

= 0, then we can find

t

y

1

t

D

n

and z

1

D

n

such that z

t

y

1

= 0,

z

1

t

y

0

= 0, and z

1

t

y

1

= 1. We know that ϕ(P L(

t

y

1

))

⊂ P R(x

1

) for some nonzero

x

1

D

n

. Moreover, by the previous step we have ϕ(P R(z

1

))

⊂ P R(x

0

). As

t

y

1

z

1

∈ P L(

t

y

1

)

∩ P R(z

1

) we have necessarily ϕ(P L(

t

y

1

))

⊂ P R(x

0

). Because

z

t

y

1

= 0 the above argument shows that ϕ(P R(z)) ⊂ P R(x

0

) in this case as well.

We have shown that ϕ(P R(z))

⊂ P R(x

0

) for every nonzero z

D

n

, as desired.

Hence, assume from now on that ϕ(P

1

n

(

D)) ⊂ P R(x

0

). We will show then that

for every pair of linearly independent n-tuples

t

y

1

, . . . ,

t

y

n

t

D

n

and z

1

, . . . , z

n

background image

46

PETER ˇ

SEMRL

D

n

there exist linearly independent n-tuples x

1

, . . . , x

n

D

n

and

t

w

1

, . . . ,

t

w

n

t

D

n

such that

ϕ(P L(

t

y

i

))

⊂ P R(x

i

)

and

ϕ(P R(z

i

))

⊂ P L(

t

w

i

),

i = 1, . . . , n.

We will verify this statement by induction on n. We start with the case when n = 3.
We know that for every nonzero

t

y

t

D

3

and every nonzero z

D

3

there exist

nonzero x

D

3

and

t

w

t

D

3

such that

ϕ(P L(

t

y))

⊂ P R(x) and ϕ(P R(z)) ⊂ P L(

t

w).

We will show only that if

t

y

1

,

t

y

2

,

t

y

3

t

D

3

are linearly independent and if

ϕ(P L(

t

y

i

))

⊂ P R(x

i

), i = 1, 2, 3, then x

1

, x

2

, x

3

are linearly independent. In the

same way one can then prove that if z

1

, z

2

, z

3

D

3

are linearly independent and if

ϕ(P R(z

i

))

⊂ P L(

t

w

i

), i = 1, 2, 3, then

t

w

1

,

t

w

2

, and

t

w

3

are linearly independent.

So, assume that

t

y

1

,

t

y

2

,

t

y

3

t

D

3

are linearly independent and choose nonzero

x

1

, x

2

, x

3

D

3

such that ϕ(P L(

t

y

i

))

⊂ P R(x

i

), i = 1, 2, 3. Let T

∈ M

3

(

D) be the

invertible matrix satisfying T

t

e

1

=

t

y

1

, T (

t

e

1

+

t

e

2

) =

t

y

2

, and T (

t

e

1

+

t

e

2

+

t

e

3

) =

t

y

3

. Then, after replacing ϕ by P

→ ϕ(T P T

1

), we may assume that ϕ(P L(

t

e

1

))

P R(x

1

), ϕ(P L(

t

e

1

+

t

e

2

))

⊂ P R(x

2

), and ϕ(P L(

t

e

1

+

t

e

2

+

t

e

3

))

⊂ P R(x

3

). We

know that ϕ(0) = 0, ϕ(E

11

) = SE

11

S

1

, ϕ(E

11

+ E

22

) = S(E

11

+ E

22

)S

1

, and

ϕ(I) = I for some invertible matrix S

∈ M

3

(

D). After composing ϕ with the

similarity transformation P

→ S

1

P S, we may further assume that

ϕ(0) = 0, ϕ(E

11

) = E

11

, ϕ(E

11

+ E

22

) = E

11

+ E

22

,

and

ϕ(I) = I.

Then, of course, we also have to replace the vectors x

1

, x

2

, x

3

by x

1

S, x

2

S, and x

3

S,

respectively.

Now, we have

E

11

1

0

0

P

,

where P is any 2

× 2 idempotent. It follows that E

11

= ϕ(E

11

)

≤ ϕ

1

0

0

P

,

P

∈ P

2

(

D). Hence, there exists an order, rank, and adjacency preserving map

ξ : P

2

(

D) → P

2

(

D) such that

ϕ

1

0

0

P

=

1

0

0

ξ(P )

,

P

∈ P

2

(

D).

Because of ϕ(E

11

) = E

11

and ϕ(P L(

t

e

1

))

⊂ P R(x

1

) for some nonzero x

1

D

3

we

have ϕ(P L(

t

e

1

))

⊂ P R(e

1

). Further,


1

λ

0

0

0

0

0

0

0


≤ E

11

+ E

22

,

λ

D,

and thus,

ϕ



1

λ

0

0

0

0

0

0

0



⎠ =


1

0

0

η(λ)

0

0

0

0

0


, λ ∈ D,

for some map η :

D D with η(0) = 0. Because ϕ maps adjacent pairs of

idempotents into adjacent pairs of idempotents, the map η is injective. For every

background image

5.1. PRELIMINARY RESULTS

47

λ

D we have


1

λ

0

0

0

0

0

0

0



1

0

0

0

1

0

0

1

0


,

and therefore


1

0

0

η(λ)

0

0

0

0

0



1

0

0

ξ

1

0

1

0


, λ ∈ D.

Moreover, ξ

1

0

1

0

=

1

0

0

0

is an idempotent of rank one. Thus

ϕ



1

0

0

0

1

0

0

1

0



⎠ =


1

0

0

0

1

a

0

0

0


for some nonzero a

D. Applying the same idea as above once more we see that

ϕ



1

0

0

1

0

0

0

0

0



⎠ =


1

b

0

0

0

0

0

0

0


for some b

= 0. So, ϕ(P L(

t

e

1

+

t

e

2

))

⊂ P R(e

1

+ be

2

). Because


1

0

0

1

0

0

1

0

0



1

0

0

0

1

0

0

1

0


and ϕ(P R(e

1

))

⊂ P L(

t

e

1

) we have

ϕ



1

0

0

1

0

0

1

0

0



⎠ =


1

α

β

0

0

0

0

0

0



1

0

0

0

1

a

0

0

0


for some α, β

D. Hence, αa = β. If α = 0, then β = 0 because of adjacency

preserving property, a contradiction. Thus, α

= 0, and consequently, β = 0. We

conclude that ϕ(P L(

t

e

1

+

t

e

2

+

t

e

3

))

⊂ P R(e

1

+ αe

2

+ βe

3

). Now, e

1

, e

1

+ be

2

,

and e

1

+ αe

2

+ βe

3

are linearly independent. This completes the proof in the case

n = 3.

We assume now that our statement holds true for n and we want to prove it

for n + 1. As before we may assume that for every nonzero

t

y

t

D

n+1

and every

nonzero z

D

n+1

there exist nonzero x

D

n+1

and

t

w

t

D

n+1

such that

ϕ(P L(

t

y))

⊂ P R(x) and ϕ(P R(z)) ⊂ P L(

t

w),

ϕ(0) = 0, ϕ(E

11

) = E

11

, ϕ(E

11

+ E

22

) = E

11

+ E

22

, . . . , ϕ(I) = I, and

t

y

1

=

t

e

1

,

t

y

2

=

t

e

1

+

t

e

2

, . . . ,

t

y

n+1

=

t

e

1

+ . . . +

t

e

n+1

. In the same way as above we see

that there exists an order, rank, and adjacency preserving map ξ : P

n

(

D) → P

n

(

D)

such that

ϕ

1

0

0

P

=

1

0

0

ξ(P )

,

P

∈ P

n

(

D).

Also, ϕ(P L(

t

e

1

))

⊂ P R(e

1

) and ϕ(P R(e

1

))

⊂ P L(

t

e

1

).

background image

48

PETER ˇ

SEMRL

Because


1

0

0

. . .

0

0 0 . . . 0
0

0

0

. . .

0

..

.

..

.

..

.

. .. ...

0

0

0

. . .

0



1

0

0

. . .

0

0

1

∗ . . . ∗

0

0

0

. . .

0

..

.

..

.

..

.

. .. ...

0

0

0

. . .

0


for any choice of entries denoted by

, we obtain using the same argument as before

that

ϕ



1

0

0

. . .

0

0

1

∗ . . . ∗

0

0

0

. . .

0

..

.

..

.

..

.

. .. ...

0

0

0

. . .

0



=


1

0

0

. . .

0

0

1

0

. . .

0

0

0 . . . 0

..

.

..

.

..

.

. .. ...

0

0 . . . 0


,

and consequently,

ξ



1

∗ . . . ∗

0

0

. . .

0

..

.

..

.

. .. ...

0

0

. . .

0



=


1

0

. . .

0

0 . . . 0

..

.

..

.

. .. ...

0 . . . 0


.

Similarly,

(30)

ξ



1

0

. . .

0

0 . . . 0

..

.

..

.

. .. ...

0 . . . 0



=


1

∗ . . . ∗

0

0

. . .

0

..

.

..

.

. .. ...

0

0

. . .

0


.

Because ξ preserves rank one idempotents and adjacency, each subset P L(

t

y)

P

n

(

D) is mapped either into some P L(

t

w) or some P R(x), and the same is true

for each subset P R(z)

⊂ P

n

(

D). By the last two equations the ξ-image of the set of

all rank one idempotents is not a subset of some P R(x). Thus, we can now apply
the induction hypothesis on the map ξ. Denote

S

k

=


1

0

0

. . .

0

0

1

0

. . .

0

0

1

0

. . .

0

..

.

..

.

..

.

. .. ...

0

1

0

. . .

0

0

0

0

. . .

0

..

.

..

.

..

.

. .. ...

0

0

0

. . .

0


∈ P

n+1

(

D)

background image

5.1. PRELIMINARY RESULTS

49

and

P

k

=


1

0

0

. . .

0

1

0

0

. . .

0

1

0

0

. . .

0

..

.

..

.

..

.

. .. ...

1

0

0

. . .

0

0

0

0

. . .

0

..

.

..

.

..

.

. .. ...

0

0

0

. . .

0


∈ P

n+1

(

D),

k = 1, . . . , n. Here, S

k

has exactly k nonzero entries in the second column and

exactly the first k + 1 entries of the first column of P

k

are equal to 1.

From S

k

≤ E

11

+ . . . + E

k+1,k+1

, (30), and the induction hypothesis we get

that

ϕ(S

k

) =


1

0

0

. . .

0

0

. . .

0

0

1

∗ . . . a

k

0

. . .

0

0

0

0

. . .

0

0

. . .

0

..

.

..

.

..

.

. .. ... ... ... ...

0

0

0

. . .

0

0

. . .

0


,

where the entry a

k

in the (2, k + 1)-position is nonzero. Because ϕ(P R(e

1

))

P L(

t

e

1

) and P

k

≤ E

11

+ . . . + E

k+1,k+1

we have

ϕ(P

k

) = Q

k

=


1

w

k

∗ . . . c

k

0

. . .

0

0

0

0

. . .

0

0

. . .

0

..

.

..

.

..

.

. .. ... ... ... ...

0

0

0

. . .

0

0

. . .

0


,

where c

k

is in the (1, k + 1)-position. Moreover, Q

k

≤ ϕ(S

k

) which yields


1

w

k

∗ . . . c

k

0

. . .

0

0

0

0

. . .

0

0

. . .

0

0

0

0

. . .

0

0

. . .

0

..

.

..

.

..

.

. .. ... ... ... ...

0

0

0

. . .

0

0

. . .

0



1

0

0

. . .

0

0

. . .

0

0

1

∗ . . . a

k

0

. . .

0

0

0

0

. . .

0

0

. . .

0

..

.

..

.

..

.

. .. ... ... ... ...

0

0

0

. . .

0

0

. . .

0


=


1

w

k

∗ . . . c

k

0

. . .

0

0

0

0

. . .

0

0

. . .

0

0

0

0

. . .

0

0

. . .

0

..

.

..

.

..

.

. .. ... ... ... ...

0

0

0

. . .

0

0

. . .

0


.

If w

k

= 0 then ϕ(P

k

) = E

11

contradicting the adjacency preserving property of ϕ.

Therefore w

k

a

k

= c

k

= 0. Obviously, ϕ(P L(

t

y

k+1

))

⊂ P R(x

k+1

) where x

k+1

=

e

1

+ w

k

e

2

+ . . . + c

k

e

k+1

. The induction proof is completed.

background image

50

PETER ˇ

SEMRL

5.2. Splitting the proof of main results into subcases

We are now ready to start with the proofs of our main results. Thus, let

m, n, p, q be integers with m, p, q

≥ n ≥ 3 and D a division ring, D = F

2

,

F

3

.

Assume that φ : M

m

×n

(

D) → M

p

×q

(

D) preserves adjacency, φ(0) = 0, and there

exists A

0

∈ M

m

×n

(

D) such that rank φ(A

0

) = n. We know that φ is a contraction,

that is, d(φ(A), φ(B))

≤ d(A, B) for every pair A, B ∈ M

m

×n

(

D). In particular,

rank A

0

= d(A

0

, 0)

≥ d(φ(A

0

), φ(0)) = n, and therefore, rank A

0

= n.

Let S

∈ M

m

(

D), T ∈ M

n

(

D), S

1

∈ M

p

(

D), and T

1

∈ M

q

(

D) be invertible

matrices. It is straightforward to verify that replacing the map φ by the map
A

→ S

1

φ(SAT )T

1

, A

∈ M

m

×n

(

D), does not affect neither the assumptions nor the

conclusion of Theorems 4.1 and 4.2. Because we know that there are invertible
matrices S

∈ M

m

(

D), T ∈ M

n

(

D), S

1

∈ M

p

(

D), and T

1

∈ M

q

(

D) such that

A

0

= S(E

11

+. . .+E

nn

)T

∈ M

m

×n

(

D) and S

1

φ(A

0

)T

1

= E

11

+. . .+E

nn

∈ M

p

×q

(

D),

we may, and we will assume with no loss of generality that

φ(E

11

+ . . . + E

nn

) = E

11

+ . . . + E

nn

.

We will now prove that there exists an order preserving map ϕ : P

n

(

D) → P

n

(

D)

such that for every P

∈ P

n

(

D) we have

(31)

φ

P

0

=

ϕ(P )

0

0

0

,

where the zero on the left-hand side denotes the (m

− n) × n zero matrix, and

the zeroes in the matrix on the right-hand side of the equation stand for the zero
matrices of the sizes n

× (q − n), (p − n) × n, and (p − n) × (q − n). Furthermore, we

have P

≤ Q ⇒ ϕ(P ) ≤ ϕ(Q), P, Q ∈ P

n

(

D), and rank ϕ(P ) = rank P , P ∈ P

n

(

D).

Indeed, let P, Q be any pair of n

× n idempotent matrices with rank P = r and

P

≤ Q. We have to show that

φ

P

0

=

P

1

0

0

0

and

φ

Q

0

=

Q

1

0

0

0

,

where P

1

and Q

1

are n

× n idempotents with P

1

≤ Q

1

and rank P

1

= r. We know

that there exists an invertible matrix T

∈ M

n

(

D) such that T P T

1

= E

11

+. . .+E

rr

and T QT

1

= E

11

+ . . . + E

ss

, where s = rank Q

≥ r. We have

0

≤ T

1

E

11

T

≤ T

1

(E

11

+ E

22

)T

≤ . . . ≤ T

1

(E

11

+ . . . + E

r

1,r−1

)T

≤ P

≤ T

1

(E

11

+ . . . + E

r+1,r+1

)T

≤ . . . ≤ T

1

(E

11

+ . . . + E

s

1,s−1

)T

≤ Q

≤ T

1

(E

11

+ . . . + E

s+1,s+1

)T

≤ . . . ≤ I

n

,

where I

n

stands for the n

× n identity matrix.

We will now apply the fact that φ preserves adjacency.

Thus, φ(0) = 0

is adjacent to φ

T

1

E

11

T

0

, and φ

T

1

E

11

T

0

is adjacent to the matrix

φ

T

1

(E

11

+ E

22

)T

0

, and, ..., and φ

T

1

(E

11

+ . . . + E

n

1,n−1

)T

0

is ad-

jacent to

φ

I

n

0

=

I

n

0

0

0

.

background image

5.2. SPLITTING THE PROOF OF MAIN RESULTS INTO SUBCASES

51

It follows that φ

T

1

E

11

T

0

is of rank at most one, and therefore, the matrix

φ

T

1

(E

11

+ E

22

)T

0

is of rank at most two, and, ..., and

φ

T

1

(E

11

+ . . . + E

n

1,n−1

)T

0

is of rank at most n

1.

The matrix φ

T

1

(E

11

+ . . . + E

n

1,n−1

)T

0

is of rank at most n

1 and

is adjacent to

I

n

0

0

0

.

It follows that it is of rank n

1. Moreover, by Lemma 5.1,

φ

T

1

(E

11

+ . . . + E

n

1,n−1

)T

0

=

P

n

1

0

0

0

,

where P

n

1

is an n

× n idempotent of rank n − 1. Now,

φ

T

1

(E

11

+ . . . + E

n

2,n−2

)T

0

is of rank at most n

2 and is adjacent to

P

n

1

0

0

0

.

It follows that

φ

T

1

(E

11

+ . . . + E

n

2,n−2

)T

0

=

P

n

2

0

0

0

,

where P

n

2

is an n

× n idempotent of rank n − 2 and P

n

2

≤ P

n

1

≤ I

n

. We

continue in the same way and conclude that

φ

P

0

=

P

1

0

0

0

and

φ

Q

0

=

Q

1

0

0

0

,

where P

1

and Q

1

are n

× n idempotents with P

1

≤ Q

1

and rank P

1

= r.

In the next step we will observe that for every nonzero

t

y

t

D

n

either there

exists a nonzero x

D

n

such that ϕ(P L(

t

y))

⊂ P R(x), or there exists a nonzero

t

w

t

D

n

such that ϕ(P L(

t

y))

⊂ P L(

t

w). Indeed, this follows directly from

Lemma 5.3. Similarly, for every nonzero z

D

n

either there exists a nonzero

x

D

n

such that ϕ(P R(z))

⊂ P R(x), or there exists a nonzero

t

w

t

D

n

such

that ϕ(P R(z))

⊂ P L(

t

w). From now on we will assume that there exists a nonzero

t

y

0

t

D

n

such that ϕ(P L(

t

y

0

))

⊂ P R(x

0

) for some nonzero x

0

D

n

. The other

case can be treated in almost the same way. We are now in a position to apply
Lemma 5.17. Thus, one of the following two conditions hold:

for every pair of linearly independent n-tuples

t

y

1

, . . . ,

t

y

n

t

D

n

and

z

1

, . . . , z

n

D

n

there exist linearly independent n-tuples x

1

, . . . , x

n

D

n

and

t

w

1

, . . . ,

t

w

n

t

D

n

such that ϕ(P L(

t

y

i

))

⊂ P R(x

i

) and ϕ(P R(z

i

))

P L(

t

w

i

), i = 1, . . . , n; or

• ϕ(P

1

n

(

D)) ⊂ P R(x

0

).

background image

52

PETER ˇ

SEMRL

We will complete the proof in the second case in the subsection Degenerate case.

Hence, we will assume from now on that the first condition holds true.
Let us next prove that for every nonzero

t

y

,

t

y

,

t

y

t

D

n

and x

, x

, x

D

n

satisfying

t

y

span {

t

y

,

t

y

}, ϕ(P L(

t

y

))

⊂ P R(x

), ϕ(P L(

t

y

))

⊂ P R(x

),

and ϕ(P L(

t

y

))

⊂ P R(x

) we have x

span {x

, x

}, and similarly, for every

nonzero z

, z

, z

D

n

and

t

w

,

t

w

,

t

w

t

D

n

satisfying z

span {z

, z

},

ϕ(P R(z

))

⊂ P L(

t

w

), ϕ(P R(z

))

⊂ P L(

t

w

), and ϕ(P R(z

))

⊂ P L(

t

w

) we

have

t

w

span {

t

w

,

t

w

}.

We first show that x

span {x

, x

}. There is nothing to prove if

t

y

and

t

y

are linearly dependent. So, assume that they are linearly independent. We know
that then x

and x

are linearly independent as well. We can find v

, v

, v

D

n

such that v

span { v

, v

}, v

t

y

= v

t

y

= v

t

y

= 1, and v

t

y

= v

t

y

= 0.

Then

t

y

v

t

y

v

+

t

y

v

, and therefore ϕ(

t

y

v

)

≤ ϕ(

t

y

v

+

t

y

v

). Because

ϕ(

t

y

v

)

∈ P R(x

) and ϕ(

t

y

v

)

≤ ϕ(

t

y

v

+

t

y

v

), the image of ϕ(

t

y

v

+

t

y

v

) contains x

. It contains x

as well. As the image of ϕ(

t

y

v

+

t

y

v

)

is two-dimensional, it is equal to the linear span of x

and x

. It follows that the

image of ϕ(

t

y

v

), which is the linear span of x

, must be contained in the linear

span of x

and x

. Similarly, we prove the second part of the above statement.

Now, we know that for every nonzero z

D

n

there is a nonzero

t

w

t

D

n

such that ϕ(P R(z))

⊂ P L(

t

w). Clearly, the map ψ

1

:

P(D

n

)

P(

t

D

n

) given by

ψ

1

([z]) = [

t

w], where z and

t

w are as above, is well-defined. The above statement

implies that ψ

1

satisfies all the assumptions of a slighlty modified version of the

fundamental theorem of projective geometry which was formulated as Proposition
2.7 in [22]. Thus, there exists an anti-endomorphism σ :

D D and an invertible

matrix T

1

such that

ψ

1

([z]) = [T

1

t

z

σ

],

z

D

n

\ {0}.

Similarly, there exists an anti-endomorphism τ :

D D and an invertible matrix

T

2

such that for every

t

y

t

D

n

\ {0} and x ∈ D

n

\ {0} with ϕ(P L(

t

y))

⊂ P R(x)

we have

[x] = [y

τ

T

2

].

Now, if

t

yz is any idempotent of rank one, that is, z

t

y = 1, then ϕ(

t

yz) belongs to

P R(y

τ

T

2

) as well as to P L(T

1

t

z

σ

). It follows that

ϕ(

t

yz) = T

1

t

z

σ

α y

τ

T

2

for some α

D. Since T

1

t

z

σ

α y

τ

T

2

is an idempotent we have α y

τ

T

2

T

1

t

z

σ

= 1.

This clearly yields that y

τ

T

2

T

1

t

z

σ

= 0 and α = (y

τ

T

2

T

1

t

z

σ

)

1

. To conclude, we

have

(32)

ϕ(

t

yz) = T

1

t

z

σ

(y

τ

T

2

T

1

t

z

σ

)

1

y

τ

T

2

for every idempotent

t

yz of rank one.

It is our next goal to prove that for every n

× n matrix A we have

φ

A

0

=

0
0

0

,

where

stands for an n × n matrix of the same rank as A. In other words, we will

prove that there exists a map ϕ : M

n

(

D) → M

n

(

D) such that

φ

A

0

=

ϕ(A)

0

0

0

,

background image

5.2. SPLITTING THE PROOF OF MAIN RESULTS INTO SUBCASES

53

rank ϕ(A) = rank A, A

∈ M

n

(

D), and (32) holds for every idempotent

t

yz

∈ M

n

(

D)

of rank one. At first look there is an inconsistency in our notation as we have used
the same symbol ϕ first for a map from P

n

(

D) into itself and now for a map acting

on M

n

(

D). However, there is no problem here as the map ϕ defined above is

the extension of the previously defined map acting on the subset of idempotent
matrices.

Assume for a moment that the existence of a map ϕ : M

n

(

D) → M

n

(

D) with the

above properties has already been proved. In subsection Square case we will prove
that then there exist matrices T, S, L

∈ M

n

(

D) such that T and S are invertible,

I +

t

(A

σ

)L is invertible for every A

∈ M

n

(

D), and

ϕ(A) = T (I +

t

(A

σ

)L)

1 t

(A

σ

)S

for all A

∈ M

n

(

D). This completes the proof of Theorem 4.1. In order to complete

the proof of Theorem 4.2 as well, we assume from now on that

D is an EAS division

ring. Then σ is surjective and therefore we have that I + AL is invertible for every
square matrix A. Of course, this is possible only if L = 0. Hence, the proof of
Theorem 4.2 has been reduced to the special case that

φ

A

0

=

T

t

(A

σ

)S

0

0

0

holds true for every A

∈ M

n

(

D). We first replace the map φ by the map

B

T

1

0

0

I

φ(B)

S

1

0

0

I

,

B

∈ M

m

×n

(

D),

where the matrix on the left side of φ(B) is of the size p

× p, and the size of the

matrix on the right hand side is q

× q. After replacing the obtained map φ by the

map

B

t

φ(B)

σ

1

!

,

B

∈ M

m

×n

(

D),

we end up with an adjacency preserving map φ : M

m

×n

(

D) → M

q

×p

(

D) satisfying

φ

A

0

=

A

0

0

0

for every A

∈ M

n

(

D). We need to prove that q ≥ m and that φ is the standard

embedding of M

m

×n

(

D) into M

q

×p

(

D) composed with some equivalence transfor-

mation, that is,

φ(A) = U

A

0

0

0

V,

A

∈ M

m

×n

(

D),

for some invertible U

∈ M

q

(

D) and V ∈ M

p

(

D). We will verify this in one of the

subsequent subsections. This will complete the proofs of both main theorems.

It remains to prove the existence of a map ϕ : M

n

(

D) → M

n

(

D) with the above

described properties. Let A

∈ M

n

(

D) be of rank r. Then

A =

r

j=1

t

w

j

u

j

for some linearly independent vectors

t

w

1

, . . . ,

t

w

r

t

D

n

and some linearly inde-

pendent vectors u

1

, . . . , u

r

D

n

. We will show that

φ

A

0

=

B

0

0

0

background image

54

PETER ˇ

SEMRL

for some B

∈ M

n

(

D) with

(33)

Im B = span

{w

τ
1

T

2

, . . . , w

τ
r

T

2

}

and

(34)

Ker B =

{x ∈ D

n

: xT

1

t

u

σ
1

= 0, . . . , xT

1

t

u

σ
r

= 0

}.

The proof will be done by induction on r.

In the case r = 1, that is A =

t

w

1

u

1

, we know that

φ

A

0

and

φ

t

w

1

z

0

=

T

1

t

z

σ

(w

τ

1

T

2

T

1

t

z

σ

)

1

w

τ

1

T

2

0

0

0

are adjacent for every z

D

n

with z

t

w

1

= 1 and z

= u

1

. One can find two such

linearly independent vectors z, and consequently,

φ

t

w

1

u

1

0

=

t

a w

τ

1

T

2

0

t

b w

τ

1

T

2

0

for some

t

a

t

D

n

and some

t

b

t

D

p

−n

. Now, applying also the fact that

φ

A

0

and

φ

t

zu

1

0

are adjacent for every

t

z

t

D

n

with u

1

t

z = 1 and

t

z

=

t

w

1

we arrive at the

desired conclusion that

φ

t

w

1

u

1

0

=

T

1

t

u

σ

1

γ w

τ

1

T

2

0

0

0

for some nonzero γ

D.

Assume now that A =

"

r
j
=1

t

w

j

u

j

for some integer r, 1 < r

≤ n, and that the

desired conclusion holds for all matrices A

i

= A

t

w

i

u

i

, i = 1, . . . , r, that is,

φ

A

i

0

=

B

i

0

0

0

with

Im B

i

= span

{w

τ
1

T

2

, . . . , w

τ
i

1

T

2

, w

τ
i
+1

T

2

, . . . , w

τ
r

T

2

}

and

Ker B

i

=

{x ∈ D

n

: xT

1

t

u

σ
1

= 0, . . . , xT

1

t

u

σ
i

1

= 0, xT

1

t

u

σ
i
+1

= 0, . . . ,

xT

1

t

u

σ
r

= 0

}.

Because φ

A

0

and φ

A

i

0

are adjacent, the rank of φ

A

0

is either r, or

r

1, or r − 2. Let us start with the first case. Then (see the second paragraph of

the subsection Preliminary results) we know that

Im φ

A

i

0

Im φ

A

0

and

Ker φ

A

0

Ker φ

A

i

0

for every i = 1, . . . , r. The desired conclusion follows trivially.

We need to show that the possibilities that the rank of φ

A

0

is r

1 or r−2

cannot occur. This is easy when r

3. Indeed, in the case that rank of φ

A

0

background image

5.3. SQUARE CASE

55

is r

1 we have for every i = 1, . . . , r by Lemma 5.2 either

Im φ

A

i

0

= Im φ

A

0

,

or

Ker φ

A

0

= Ker φ

A

i

0

,

which is impossible because the images of the operators φ

A

i

0

, i = 1, . . . , r, are

pairwise different, and the same is true for their kernels. And if rank of φ

A

0

is r

2, then for every i = 1, . . . , r we have

Im φ

A

0

Im φ

A

i

0

implying that φ

A

0

= 0, a contradiction.

Finally, we need to show that φ

A

0

cannot be the zero matrix or a rank one

matrix when r = 2. Assume on the contrary, that it is of rank one. If A =

t

w

1

u

1

+

t

w

2

u

2

with

t

w

1

and

t

w

2

linearly independent and u

1

and u

2

linearly independent,

then φ

A

0

is a rank one matrix adjacent to

φ

t

w

1

(u

1

+ λu

2

)

0

=

T

1

(

t

u

σ

1

+

t

u

σ

2

σ(λ))γ(λ) w

τ

1

T

2

0

0

0

for every λ

D. Here, for each scalar λ, γ(λ) D is nonzero. It follows that

φ

A

0

=

t

a w

τ

1

T

2

0

t

b w

τ

1

T

2

0

for some

t

a

t

D

n

and some

t

b

t

D

p

−n

. In the same way we get that

φ

A

0

=

t

c w

τ

2

T

2

0

t

d w

τ

2

T

2

0

for some

t

c

t

D

n

and some

t

d

t

D

p

−n

, a contradiction.

And finally, if φ

A

0

= 0 for some n

× n matrix of rank two, then we can

find a rank two matrix B

∈ M

n

(

D) adjacent to A. Then φ

A

0

and φ

B

0

are adjacent, and therefore φ

B

0

is of rank one, a contradiction. The proof

of both main theorems will be completed once we verify the statements that have
been left to be proven in the next three subsections.

5.3. Square case

The goal of this subsection is to deal with one of the special cases that re-

main to be proved. Our assumptions are that σ, τ :

D D are nonzero anti-

endomorphisms, ϕ : M

n

(

D) → M

n

(

D) is an adjacency preserving map satisfy-

ing ϕ(0) = 0, rank ϕ(A) = rank A, A

∈ M

n

(

D), and (32) holds for every idem-

potent

t

yz

∈ M

n

(

D) of rank one. We need to prove that there exist matrices

background image

56

PETER ˇ

SEMRL

T, S, L

∈ M

n

(

D) such that T and S are invertible, I +

t

(A

σ

)L is invertible for every

A

∈ M

n

(

D), and

ϕ(A) = T (I +

t

(A

σ

)L)

1 t

(A

σ

)S

for all A

∈ M

n

(

D).

Replacing ϕ by the map A

→ T

1

1

ϕ(A)T

1

2

, A

∈ M

n

(

D), we have

ϕ(

t

yz) =

t

z

σ

(y

τ

R

t

z

σ

)

1

y

τ

for every idempotent

t

yz of rank one. Here, R = [r

ij

] = T

2

T

1

∈ M

n

(

D) is invertible.

Choosing

t

y =

t

f

1

we get

ϕ



1

z

2

. . .

z

n

0

0

. . .

0

..

.

..

.

. .. ...

0

0

. . .

0



=


1

σ(z

2

)

..

.

σ(z

n

)


(r

11

+ r

12

σ(z

2

) + . . . + r

1n

σ(z

n

))

1

1

0

. . .

0

for all z

2

, . . . , z

n

D. By Lemmas 5.6 and 5.9, there exist scalars p, q ∈ D such

that p + q = r

11

,

p + (z

1

) + r

12

σ(z

2

) + . . . + r

1n

σ(z

n

)

= 0

and

ϕ



z

1

z

2

. . .

z

n

0

0

. . .

0

..

.

..

.

. .. ...

0

0

. . .

0



=


σ(z

1

)(p + (z

1

) + r

12

σ(z

2

) + . . . + r

1n

σ(z

n

))

1

0

. . .

0

σ(z

2

)(p + (z

1

) + r

12

σ(z

2

) + . . . + r

1n

σ(z

n

))

1

0

. . .

0

..

.

..

.

. .. ...

σ(z

n

)(p + (z

1

) + r

12

σ(z

2

) + . . . + r

1n

σ(z

n

))

1

0

. . .

0


for all z

1

, z

2

, . . . , z

n

D. Clearly, p = 0. Denote by diag (p, 1, . . . , 1) the diagonal

n

× n matrix with diagonal entries p, 1, . . . , 1. Replacing the map ϕ by the map

A

→ ϕ(A) diag (p, 1, . . . , 1), A ∈ M

n

(

D),

we arrive at

ϕ



z

1

z

2

. . .

z

n

0

0

. . .

0

..

.

..

.

. .. ...

0

0

. . .

0



=


σ(z

1

)(1 + l

11

σ(z

1

) + l

12

σ(z

2

) + . . . + l

1n

σ(z

n

))

1

0

. . .

0

σ(z

2

)(1 + l

11

σ(z

1

) + l

12

σ(z

2

) + . . . + l

1n

σ(z

n

))

1

0

. . .

0

..

.

..

.

. .. ...

σ(z

n

)(1 + l

11

σ(z

1

) + l

12

σ(z

2

) + . . . + l

1n

σ(z

n

))

1

0

. . .

0


background image

5.3. SQUARE CASE

57

for all z

1

, z

2

, . . . , z

n

D. Here, l

11

= p

1

q, l

12

= p

1

r

12

, . . . , l

1n

= p

1

r

1n

are

scalars with the property that 1 + l

11

σ(z

1

) + l

12

σ(z

2

) + . . . + l

1n

σ(z

n

)

= 0 for all

z

1

, . . . , z

n

D.

Now, we repeat the same procedure with

t

y =

t

f

j

, j = 2, . . . , n. We get

scalars l

ij

D, 1 ≤ i, j ≤ n. Set L = [l

ij

]

∈ M

n

(

D) and L = {A ∈ M

n

(

D) :

I +

t

(A

σ

)L is invertible

}. We define a map θ : L → M

n

(

D) by

θ(A) = (I +

t

(A

σ

)L)

1 t

(A

σ

).

We need to show that

L = M

n

(

D) and ϕ(A) = θ(A) for every A ∈ M

n

(

D).

Let us start with a matrix A of the form

A =


z

1

z

2

. . .

z

n

0

0

. . .

0

..

.

..

.

. .. ...

0

0

. . .

0


.

Then I +

t

(A

σ

)L = I +

t

uv with

t

u =


σ(z

1

)

..

.

σ(z

n

)


⎦ and v =

l

11

. . .

l

1n

.

Note that then

t

(A

σ

) =

t

ue

1

, and therefore, by Lemma 5.10, I+

t

(A

σ

)L is invertible

and we have

θ(A) = (I +

t

uv)

1 t

ue

1

= (I

t

u(1 + v

t

u)

1

v)

t

ue

1

=

t

u(1

(1 + v

t

u)

1

v

t

u)e

1

=

t

u(1 + v

t

u)

1

e

1

= ϕ(A).

In exactly the same way we prove that A

∈ L and ϕ(A) = θ(A) for every A ∈ M

n

(

D)

having nonzero entries only in the i-th row, i = 2, . . . , n.

Let k, r be positive integers, 1

≤ r ≤ k ≤ n. We define M

k,r

⊂ M

n

(

D) to be

the set of all matrices A

∈ M

n

(

D) having exactly k nonzero rows (that is, exactly

n

− k rows of A are zero) and satisfying rank A = r. Set

L

k

=

k
j
=1

M

k,j

.

We will complete the proof in our special case by showing that for every k

{1, . . . , n} we have L

k

⊂ L and ϕ(A) = θ(A) for every A ∈ L

k

. The proof will be

carried out by induction on k. The case k = 1 has already been proved.

Assume now that 1 < k

≤ n and that the above statement holds for k − 1. We

need to prove that

L

k

⊂ L and that ϕ(A) = θ(A) for every A ∈ L

k

. We will first

prove that

M

k,k

⊂ L and that ϕ(A) = θ(A) for every A ∈ M

k,k

. Thus, take a

matrix A

∈ M

k,k

. With no loss of generality we assume that the first k rows of A

are linearly independent and all the rows below the k-th row are zero. We know
that ϕ(A) is adjacent to ϕ(B) for every matrix B such that B has exactly k

1

nonzero rows and A and B are adjacent. Of course, because A and B are adjacent
and rank B < k = rank A, only the first k rows of B may be nonzero (in fact, one
of them is zero and the others must be linearly independent). For every such B the
row spaces of matrices

I

ϕ(B)

=

I

(I +

t

(B

σ

)L)

1 t

(B

σ

)

and

I

ϕ(A)

background image

58

PETER ˇ

SEMRL

are adjacent. Therefore the row spaces of matrices

I +

t

(B

σ

)L

t

(B

σ

)

I

0

−L I

=

I

t

(B

σ

)

and

(35)

I

ϕ(A)

I

0

−L I

=

X

Y

are adjacent.

We apply Lemma 5.15 with

t

(A

σ

) instead of A and

E = σ(D) to conclude that

there is an invertible P

∈ M

n

(

D) such that

X

Y

= P

I

t

(A

σ

)

which together with (35) yields

I

ϕ(A)

= P

I +

t

(A

σ

)L

t

(A

σ

)

.

It follows that A

∈ L and ϕ(A) = θ(A), as desired.

Of course, by Lemma 5.15 we have one more possibility, that is, rank A = 2

and ϕ(A) = 0. However, this possibility cannot occur due to our assumption that
rank ϕ(A) = rank A.

In order to prove that

M

k,k

1

⊂ L and that ϕ(A) = θ(A) for every A ∈ M

k,k

1

we use the same idea as above together with Lemma 5.16. Applying the same
trick and the same Lemma once more we conclude that

M

k,k

2

⊂ L and that

ϕ(A) = θ(A) for every A

∈ M

k,k

2

. It is now clear that the inductive approach

yields that

L

k

⊂ L and that ϕ(A) = θ(A) for every A ∈ L

k

, as desired.

5.4. Degenerate case

In this subsection we will complete the proofs of Theorms 4.1 and 4.2 in one

of the cases that remain unproved in the above discussion. We are interested in
the special case where m, n, p, q are integers with m, p, q

≥ n ≥ 3, φ : M

m

×n

(

D)

M

p

×q

(

D) is a map which preserves adjacency and satisfies φ(0) = 0, φ(E

11

+ . . . +

E

nn

) = E

11

+ . . . + E

nn

, there exists an order preserving map ϕ : P

n

(

D) → P

n

(

D)

such that (31) holds true for every P

∈ P

n

(

D), and ϕ(P

1

n

(

D)) ⊂ P R(x) for some

nonzero x

D

n

. We need to prove that φ is a degenerate map.

Hence, we have an order preserving map ϕ : P

n

(

D) → P

n

(

D) such that

ϕ(P

1

n

(

D)) ⊂ P R(x) for some nonzero x ∈ D

n

and ϕ(P ) and ϕ(Q) are adjacent

whenever P, Q

∈ P

n

(

D) are adjacent. Moreover, we know that rank ϕ(P ) = rank P

for every P

∈ P

n

(

D). All we need to show is that ϕ is of the desired form, that

is, up to a similarity, ϕ maps idempotents of rank one into the set E

11

+

DE

21

,

idempotents of rank two into the set E

11

+ E

22

+

DE

23

, idempotents of rank three

into the set E

11

+ E

22

+ E

33

+

DE

43

, and so on.

We will prove this by induction on n. It should be mentioned that this part

of the proof is based on known ideas. We start with the 3

× 3 case. So, assume

that ϕ : P

3

(

D) → P

3

(

D) is a rank, order and adjacency preserving map, and

ϕ(P

1

3

(

D)) ⊂ P R(x) for some nonzero x ∈ D

3

.

background image

5.4. DEGENERATE CASE

59

Our first claim is that if idempotents P, Q of rank two satisfy Im P = Im Q

then Ker ϕ(P ) = Ker ϕ(Q). There is no loss of generality in assuming that

P =


1

0

0

0

1

0

0

0

0


.

Then Im Q is the linear span of e

1

and e

2

and because Q is of rank two we necessarily

have

Q =


1

0

0

0

1

0

λ

μ

0


for some λ, μ

D. If λ = μ = 0, then P = Q and we are done. Thus, we may

assume that there exists an invertible S

∈ M

2

(

D) such that

λ

μ

S =

0

1

.

After replacing P and Q by T

1

P T and T

1

QT , respectively, where

T =

S

0

0

1

we may further assume that

P =


1

0

0

0

1

0

0

0

0


⎦ and Q =


1

0

0

0

1

0

0

1

0


.

Consider the map R

→ W ϕ(R)W

1

, R

∈ P

3

(

D), where W is an appropriate

invertible matrix, instead of the map ϕ. Then we may assume that ϕ(P ) = P .
It follows that ϕ(E

11

) is an idempotent of rank one having nonzero entries only

in the upper left 2

× 2 corner. Composing ϕ with yet another similarity transfor-

mation, we may assume that ϕ(E

11

) = E

11

without affecting our assumption that

ϕ(E

11

+ E

22

) = E

11

+ E

22

. Consequently, all rank one idempotents are mapped

into idempotents of the form

(36)


1

0

0

0 0

0 0


.

Obviously, we have


1

α

0

0

0

0

0

0

0


≤ P, Q

for every α

D. Therefore,

ϕ



1

α

0

0

0

0

0

0

0




1

0

0

0

1

0

0

0

0


and at the same time the matrix on the left hand side of this inequality is of the
form (36). It follows directly that for every α

D there is a δ ∈ D such that

ϕ



1

α

0

0

0

0

0

0

0



⎠ =


1

0

0

δ

0

0

0

0

0


.

background image

60

PETER ˇ

SEMRL

Because ϕ preserves adjacency, there are at least two different δ’s satisfying


1

0

0

δ

0

0

0

0

0


≤ ϕ(Q).

A simple computation yields that

ϕ(Q) =


1

0

0

0

1

μ

0

0

ξ


for some μ, ξ

D and since ϕ(Q) is of rank two we have necessarily ξ = 0. Hence,

Ker ϕ(P ) = Ker ϕ(Q) = span

{e

3

}, as desired.

In exactly the same way we prove that if two idempotents P, Q of rank two

satisfy Ker P = Ker Q then Ker ϕ(P ) = Ker ϕ(Q).

We will next prove that Ker ϕ(P ) = Ker ϕ(Q) for every pair of rank two idem-

potents P and Q. Denote by U and V the two dimensional images of P and
Q, respectively. Then there is a nonzero vector w

D

3

that does not belong to

U

∪ V . Let R

1

and R

2

be rank two idempotents with kernel span

{w} and im-

ages U and V , respectively. By the previous steps we have Ker ϕ(P ) = Ker ϕ(R

1

),

Ker φ(Q) = Ker φ(R

2

), and Ker ϕ(R

1

) = Ker ϕ(R

2

). Hence, the ϕ-images of idem-

potents of rank two have all the same kernel.

Composing ϕ once more by an appropriate similarity transformation we may

assume that ϕ(E

11

) = E

11

, ϕ(E

11

+ E

22

) = E

11

+ E

22

, and ϕ(E

11

+ E

22

+ E

33

) =

E

11

+ E

22

+ E

33

. Applying the fact that φ(E

11

) = E

11

we first note that every

idempotent of rank one is mapped into an idempotent of the form (36). Because
the ϕ-images of rank two idempotents all have the same kernel, every idempotent
of rank two is mapped into an idempotent of the form


1

0

0

1

0

0

0


.

It follows that every idempotent of rank one is mapped into an idempotent of the
form


1

0

0

0 0
0

0

0


.

Since every rank two idempotent majorizes some rank one idempotent we finally
conclude that for every P

∈ P

3

(

D) of rank two the idempotent ϕ(P ) is of the form


1

0

0

0

1

0

0

0


.

The proof in the case n = 3 is completed.

Now we have to prove the induction step. We have an order preserving map

ϕ : P

n

(

D) → P

n

(

D) such that ϕ(P

1

n

(

D)) ⊂ P R(x) for some nonzero x ∈ D, ϕ(P ) and

ϕ(Q) are adjacent whenever P, Q

∈ P

n

(

D) are adjacent, and rank ϕ(P ) = rank P

for every P

∈ P

n

(

D). After composing it by a suitable similarity transformation

we may assume that ϕ(E

11

+ . . . + E

kk

) = E

11

+ . . . + E

kk

, k = 1, . . . , n. It follows

background image

5.4. DEGENERATE CASE

61

that ϕ(P

1

n

(

D)) ⊂ P R(e

1

). If we take any idempotent P

∈ P

n

(

D) of rank n−1, then

P

n

(

D)[≤ P ] = {Q ∈ P

n

(

D) : Q ≤ P } can be identified with P

n

1

(

D). Clearly,

ϕ(P

n

(

D)[≤ P ]) ⊂ P

n

(

D)[≤ ϕ(P )].

The restriction of ϕ to P

n

(

D)[≤ P ] considered as a map from P

n

(

D)[≤ P ] into

P

n

(

D)[≤ ϕ(P )] can be thus identified with an order, rank, and adjacency preserving

map from P

n

1

(

D) into itself. Identifying matrices with operators we see that this

restriction sends all rank one idempotents into rank one idempotents having the
same one-dimensional image. Therefore we can apply the induction hypothesis.

Let Q, R

∈ P

n

(

D) be two idempotents of rank two. We want to show that ϕ(Q)

and ϕ(R) have the same kernel. If there exists an idempotent P of rank n

1 such

that Q

≤ P and R ≤ P , then this is true because the restriction of ϕ to P

n

(

D)[≤ P ]

is of the desired form by the induction hypothesis.

For an arbitrary pair of idempotents Q, R

∈ P

n

(

D) of rank two we proceed as

follows. We choose idempotents Q

1

and R

1

of rank n

2 such that Q ≤ Q

1

and

R

≤ R

1

. We will show that we can find a string of idempotents Q

1

, Q

2

, . . . Q

k

= R

1

of rank n

2 and a string of idempotents P

1

, . . . , P

k

1

of rank n

1 such that

Q

1

≤ P

1

and

Q

2

≤ P

1

,

Q

2

≤ P

2

and

Q

3

≤ P

2

,

..

.

Q

k

1

≤ P

k

1

and

Q

k

= R

1

≤ P

k

1

.

We will postpone the proof of this statement till the end of the section. Assume for
a moment that we have already proved it. Then, by the previous paragraph, the
ϕ-images of any two rank two idempotents such that first one is below Q

1

and the

second one below Q

2

, have the same kernel. Similarly, ϕ-images of any two rank

two idempotents such that first one is below Q

2

and the second one below Q

3

, have

the same kernel,... It follows that ϕ(Q) and ϕ(R) have the same kernel.

We have shown that all the ϕ-images of rank two idempotents have the same

kernel. Since ϕ(E

11

+ E

22

) = E

11

+ E

22

this unique kernel is the linear span

of e

3

, . . . , e

n

. This together with the fact that ϕ(P

1

n

(

D)) ⊂ P R(e

1

) imply that

ϕ(P

1

n

(

D)) ⊂ E

11

+

DE

21

.

If n > 4, then the same arguments yield that all idempotents of rank three

are mapped into idempotents with the same three-dimensional image. And since
ϕ(E

11

+E

22

+E

33

) = E

11

+E

22

+E

33

, this joint image is the linear span of e

1

, e

2

, e

3

.

Consequently, each idempotent of rank two is mapped into an idempotent of rank
two whose kernel is the linear span of e

3

, . . . , e

n

while its image is contained in the

linear span of e

1

, e

2

, e

3

. Such idempotents are of the form


1

0

a

0

. . .

0

0

1

b

0

. . .

0

0

0

0

0

. . .

0

..

.

..

.

..

.

..

.

. .. ...

0

0

0

0

. . .

0


for some scalars a, b. Applying the fact that ϕ(P

1

n

(

D)) ⊂ E

11

+

DE

21

, we conclude

that a = 0. Therefore, the set of rank two idempotents is mapped by ϕ into the
set E

11

+ E

22

+

DE

23

.

background image

62

PETER ˇ

SEMRL

We repeat this procedure and then we need to distinguish two cases. We will

consider only the case when n is even, as the case when n is odd can be treated in
exactly the same way. In the case when n is even we get that rank one idempotents
are mapped into

E

11

+

DE

21

,

rank two idempotents are mapped into

E

11

+ E

22

+

DE

23

, ...

and idempotents of rank n

3 are mapped into

(37)

E

11

+ . . . + E

n

3,n−3

+

DE

n

2,n−3

,

and

(38)

Ker ϕ(Q) = span

{e

n

1

, e

n

}

for every idempotent Q of rank n

2.

We introduce a new map ψ : P

n

(

D) → P

n

(

D) by ψ(P ) = I − ϕ(I − P ),

P

∈ P

n

(

D). Of course, this is again an order, adjacency, and rank preserving map.

The adjacency preserving property and the rank one preserving property imply
that for every nonzero

t

y

t

D

n

either there exists a nonzero x

D

n

such that

ϕ(P L(

t

y))

⊂ P R(x), or there exists a nonzero

t

w

t

D

n

such that ϕ(P L(

t

y))

P L(

t

w), and for every nonzero z

D

n

either there exists a nonzero x

D

n

such

that ϕ(P R(z))

⊂ P R(x), or there exists a nonzero

t

w

t

D

n

such that ϕ(P R(z))

P L(

t

w). Thus, applying Lemma 5.17 and its obvious analogue we conclude that

we have the following four possibilities:

• ψ(P

1

n

(

D)) ⊂ P R(x) for some nonzero x ∈ D

n

,

• ψ(P

1

n

(

D)) ⊂ P L(

t

y) for some nonzero

t

y

t

D

n

,

for every linearly independent n-tuple

t

y

1

, . . . ,

t

y

n

t

D

n

and every lin-

early independent n-tuple z

1

, . . . , z

n

D

n

there exist linearly independent

n-tuples x

1

, . . . , x

n

D

n

and

t

w

1

, . . . ,

t

w

n

t

D

n

such that

ψ(P L(

t

y

i

))

⊂ P R(x

i

)

and

ψ(P R(z

i

))

⊂ P L(

t

w

i

),

i = 1, . . . , n,

for every linearly independent n-tuple

t

y

1

, . . . ,

t

y

n

t

D

n

and every lin-

early independent n-tuple z

1

, . . . , z

n

D

n

there exist linearly independent

n-tuples x

1

, . . . , x

n

D

n

and

t

w

1

, . . . ,

t

w

n

t

D

n

such that

ψ(P L(

t

y

i

))

⊂ P L(

t

w

i

)

and

ψ(P R(z

i

))

⊂ P R(x

i

),

i = 1, . . . , n.

The behaviour of the map ψ on the set of rank one idempotents is determined by
the behaviour of ϕ on the set of rank n

1 idempotents. For every idempotent P of

rank one we can find an idempotent Q of rank one such that Q

≤ I − P . Therefore,

e

1

Im ϕ(Q) Im ϕ(I − P ) = Ker ψ(P ).

It follows that we have on the first two possibilities above.

Assume first that ψ(P

1

n

(

D)) ⊂ P R(x) for some nonzero x ∈ D

n

. Observe that

ψ(E

jj

+ . . . + E

nn

) = E

jj

+ . . . + E

nn

for all integers j, 1

≤ j ≤ n. Applying the

same approach as we have used in the study of map ϕ we conclude that ψ(P

1

n

(

D))

E

nn

+

DE

n

1,n

and the kernel of ψ-image of every idempotent Q of rank two is

the linear span of

{e

1

, . . . , e

n

2

}. This is further equivalent to the fact that the

ϕ-image of every Q of rank n

2 is the linear span of {e

1

, . . . , e

n

2

}. This together

with (38) yield that

ϕ(Q) = E

11

+ . . . + E

n

2,n−2

,

background image

5.4. DEGENERATE CASE

63

contradicting the fact that ϕ(P )

= ϕ(Q) whenever P and Q are adjacent idempo-

tents of rank n

2.

Therefore, we have the second possibility above, that is, ψ(P

1

n

(

D)) ⊂ P L(

t

y)

for some nonzero

t

y

t

D

n

. It follows that

ψ(P

1

n

(

D)) ⊂ E

nn

+

DE

n,n

1

.

Equivalently, we have

ϕ(Q) = E

11

+ . . . + E

n

1,n−1

+

DE

n,n

1

for every idempotent Q of rank n

1. It follows that the ϕ-image of every idempotent

Q of rank n

2 is contained in the linear span of e

1

, . . . , e

n

1

. From here we get

using (38) that for every idempotent Q of rank n

2 we have

ϕ(Q) = E

11

+ . . . + E

n

2,n−2

+

n

2

j=1

λ

j

E

j,n

1

for some scalars λ

1

, . . . , λ

n

2

. Because of (37) we finally conclude that

ϕ(Q) = E

11

+ . . . + E

n

2,n−2

+

DE

n

2,n−1

for all idempotents Q of rank n

2.

Thus, in order to complete the proof in this case we need to verify that for

any two idempotents Q, R of rank n

2 there are a string of idempotents Q =

Q

0

, Q

1

, . . . , Q

k

= R of rank n

2 and a string of idempotents P

1

, . . . , P

k

of rank

n

1 such that

Q

0

≤ P

1

and

Q

1

≤ P

1

,

Q

1

≤ P

2

and

Q

2

≤ P

2

,

..

.

Q

k

1

≤ P

k

and

Q

k

≤ P

k

.

We will say that the idempotents Q and R are connected if two such strings exist.
With this terminology we need to show that any two idempotents of rank n

2

are connected. Clearly, if Q, R, P are idempotents of rank n

2 and Q and R are

connected, and R and P are connected, then Q and P are connected as well.

Let us start with the case when Q, R are idempotents of rank n

2 with

R = Q +

t

xy, where y belongs to the image of Q and Q

t

x = 0. We may assume

that

t

x and y are both nonzero. Then we can find z

D

n

such that zQ = 0 and

z

t

x = 1. It is straighforward to verify that Q +

t

xz is an idempotent of rank n

1

satisfying Q, R

≤ Q +

t

xz.

We now consider the case where the ranges of Q and R coincide. After an

appropriate change of basis we may assume that

Q =

I

0

0

0

and

R =

I

0

N

0

where I is the (n

2) × (n − 2) identity matrix and N is a 2 × (n − 2) matrix.

There is nothing to prove if N = 0 and if N is of rank one, then we are done by
the previous step. It remains to consider the case when N is of rank two. Then we
can find an invertible 2

× 2 matrix T and an invertible (n − 2) × (n − 2) matrix S

such that

T N S =

1

0

0

. . .

0

0

1

0

. . .

0

background image

64

PETER ˇ

SEMRL

=

1

0

0

. . .

0

0

0

0

. . .

0

+

0

0

0

. . .

0

0

1

0

. . .

0

= N

1

+ N

2

.

Hence,

S

1

0

0

T

Q

S

0

0

T

1

=

I

0

0

0

and

S

1

0

0

T

R

S

0

0

T

1

=

I

0

N

1

+ N

2

0

.

By the previous step we know that

I

0

0

0

and

I

0

N

1

0

are connected, and

I

0

N

1

0

and

I

0

N

1

+ N

2

0

are connected. Thus, Q and R are connected.

Assume now that Q and R commute. After an appropriate change of basis we

have

Q =


I

p

0

0

0

0

I

q

0

0

0

0

0

0

0

0

0

0


⎦ and R =


0

0

0

0

0

I

q

0

0

0

0

I

p

0

0

0

0

0


.

Here, I

p

and I

q

are the p

× p identity matrix and the q × q identity matrix, respec-

tively, and p

∈ {0, 1, 2}. Connectedness of Q and R can be now easily verified.

Let finally Q and R be any idempotents of rank n

2. We decompose D

n

= U

1

U

2

⊕U

3

⊕U

4

, where U

1

is the intersection of the images of Q and R, Im Q = U

1

⊕U

2

,

and Im R = U

1

⊕ U

3

. Note that some of the subspaces U

j

may be the zero spaces.

Let Q

1

be the idempotent of rank n

2 whose image is U

1

⊕ U

2

and whose kernel

is U

3

⊕ U

4

, and let Q

2

be the idempotent of rank n

2 whose image is U

1

⊕ U

3

and whose kernel is U

2

⊕ U

4

. Then Q and Q

1

are connected because they have

the same images, Q

1

and Q

2

are connected because they commute, and Q

2

and

R are connected because their images are the same. It follows that Q and R are
connected, as desired.

5.5. Non-square case

The aim of this subsection is to prove that if

D is an EAS division ring, and an

adjacency preserving map φ : M

m

×n

(

D) → M

q

×p

(

D) satisfies

φ

A

0

=

A

0

0

0

for every A

∈ M

n

(

D), then q ≥ m and

φ(A) = U

A

0

0

0

V,

A

∈ M

m

×n

(

D),

for some invertible U

∈ M

q

(

D) and V ∈ M

p

(

D).

background image

5.5. NON-SQUARE CASE

65

We will prove this statement inductively. All we need to do is to prove that if

r

∈ {0, 1, . . . , m − n − 1} and there exist invertible U

1

∈ M

q

(

D) and V

1

∈ M

p

(

D)

such that

φ

A

0

= U

1

A

0

0

0

V

1

for every A

∈ M

(n+r)

×n

(

D), then q ≥ n + r + 1 and

φ

A

0

= U

A

0

0

0

V,

A

∈ M

(n+r+1)

×n

(

D),

for some invertible U

∈ M

q

(

D) and V ∈ M

p

(

D).

With no loss of generality we may assume that U

1

and V

1

are the identity

matrices of the appropriate sizes. Set

A = E

r+2,1

+ E

r+3,2

+ . . . + E

n+r+1,n

∈ M

m

×n

(

D).

For arbitrary scalars λ

1

, . . . , λ

n+r

D we define

C(λ

1

, . . . , λ

n+r

) = E

r+2,1

+ E

r+3,2

+ . . . + E

n+r,n

1

+λ

1

E

1,n

+ . . . + λ

n+r

E

n+r,n

∈ M

m

×n

(

D).

We know that φ(C(λ

1

, . . . , λ

n+r

)) = D(λ

1

, . . . , λ

n+r

)

∈ M

q

×p

(

D), where

D(λ

1

, . . . , λ

n+r

) = E

r+2,1

+ E

r+3,2

+ . . . + E

n+r,n

1

+λ

1

E

1,n

+ . . . + λ

n+r

E

n+r,n

∈ M

q

×p

(

D)

(note that the formulas for C(λ

1

, . . . , λ

n+r

) and D(λ

1

, . . . , λ

n+r

) are the same, but

in the first case the E

ij

’s denote the matrix units in M

m

×n

(

D), while in the second

case they stand for the matrix units in M

q

×p

(

D)). We further know that φ(A) is

adjacent to φ(C(λ

1

, . . . , λ

n+r

)) for all scalars λ

1

, . . . , λ

n+r

. Consequently,

φ(A)

(E

r+2,1

+ E

r+3,2

+ . . . + E

n+r,n

1

)

is adjacent to

λ

1

E

1,n

+ . . . + λ

n+r

E

n+r,n

,

λ

1

, . . . , λ

n+r

D.

It follows trivially that

φ(A) = E

r+2,1

+ E

r+3,2

+ . . . + E

n+r,n

1

+

q

j=1

μ

j

E

j,n

for some scalars μ

1

, . . . , μ

q

with at least one of μ

n+r+1

, . . . , μ

q

being nonzero. In

particular, q

≥ n + r + 1. After replacing φ by the map A → U

2

φ(A), where

U

2

∈ M

q

(

D) is an invertible matrix satisfying

U

2

t

f

1

=

t

f

1

, . . . , U

2

t

f

n+r

=

t

f

n+r

, U

2


q

j=1

μ

j

t

f

j


⎠ =

t

f

n+r+1

(here one needs to know that the symbol

t

f

r

denotes the r-th vector in the standard

basis of

t

D

m

as well as the r-th vector in the standard basis of

t

D

q

), we may assume

that

φ

A

0

=

A

0

0

0

for every A

∈ M

(n+r)

×n

(

D) and

φ(E

r+2,1

+ E

r+3,2

+ . . . + E

n+r+1,n

) = E

r+2,1

+ E

r+3,2

+ . . . + E

n+r+1,n

.

background image

66

PETER ˇ

SEMRL

Using the last equation together with our result for the square case we conclude

that

φ



0

(r+1)

×n

A

0

(m

(n+r+1))×n



⎠ =


0

(r+1)

×n

0

(r+1)

×(p−n)

M ξ(A)N

0

n

×(p−n)

0

(q

(n+r+1))×n

0

(q

(n+r+1))×(p−n)


for all A

∈ M

n

(

D). Here, 0

j

×k

denotes the j

× k zero matrix, M and N are

invertible n

× n matrices, and either ξ(A) = A

τ

with τ being an automorphism of

D, or ξ(A) =

t

(A

σ

) with σ being an antiautomorphism of

D. Because

φ



0

(r+1)

×n

E

11

+ . . . + E

jj

0

(m

(n+r+1))×n



=


0

(r+1)

×n

0

(r+1)

×(p−n)

E

11

+ . . . + E

jj

0

n

×(p−n)

0

(q

(n+r+1))×n

0

(q

(n+r+1))×(p−n)


, j = 1, . . . , n,

we conclede that M = N

1

is a diagonal matrix. Moreover, M ξ(A)M

1

= A for

every A = μE

ij

, 1

≤ i ≤ n − 1, 1 ≤ j ≤ n, μ ∈ D, and consequently, ξ(A) = A

τ

,

where τ :

D D is an inner automorphism τ(λ) = c

1

λc, λ

D, for some nonzero

c

D and M = cI. It follows that

φ



0

(r+1)

×n

A

0

(m

(n+r+1))×n



⎠ =


0

(r+1)

×n

0

(r+1)

×(p−n)

A

0

n

×(p−n)

0

(q

(n+r+1))×n

0

(q

(n+r+1))×(p−n)


for all A

∈ M

n

(

D).

We need to prove that

φ

A

0

=

A

0

0

0

,

A

∈ M

(n+r+1)

×n

(

D),

and we already know that this is true whenever the last row of A is zero or the first
r + 1 rows of A are zero. Verifying the above equality for each A

∈ M

(n+r+1)

×n

(

D)

is not difficult and can be done in several different ways. We will outline one of the
possibilities and leave the details to the reader.

We first prove that

B = φ

A

0

is of rank two whenever A

∈ M

(n+r+1)

×n

(

D) is of rank two. We know that this

is true when A =

t

uv +

t

xy and

t

u,

t

x

span {

t

f

1

, . . . ,

t

f

n+r

}. When the last

row of A is nonzero, we can find

t

w,

t

z

span {

t

f

1

, . . . ,

t

f

n+r

} such that

t

u,

t

x

span

{

t

f

n+r+1

,

t

w,

t

z

}. We have just proved that rank B is two also in the case

when

t

w,

t

z

span {

t

f

r+2

, . . . ,

t

f

n+r

}. Exactly the same proof works after a

suitable change of basis for any pair of vectors

t

w,

t

z.

We know that φ(R(u))

⊂ R(u ⊕ 0) for every u ∈ D

n

(here, u

0 D

p

is the

vector whose first n coordinates coincide with u and all the others are equal to

zero) and that every matrix

A

0

of rank two is mapped into a matrix of rank two.

By a suitable analouge of Lemma 5.5 we conclude that the restriction of φ is an
injective lineation of R(u) into R(u

0). As we know that this lineation acts like

background image

5.6. PROOFS OF COROLLARIES

67

the identity on all vectors

t

x

span {

t

f

1

, . . . ,

t

f

n+r

} ∪ span {

t

f

r+2

, . . . ,

t

f

n+r+1

},

we conclude that

φ

A

0

=

A

0

0

0

for every A

∈ M

(n+r+1)

×n

(

D) of rank one.

After proving the above and knowing that rank of B equals two whenever A is

of rank two, we proceed inductively. At each step we assume that

φ

A

0

=

A

0

0

0

for every A

∈ M

(n+r+1)

×n

(

D) of rank k and that rank of

φ

A

0

equals k + 1 for every A

∈ M

(n+r+1)

×n

(

D) of rank k + 1. Now take any A ∈

M

(n+r+1)

×n

(

D) of rank k+1 and denote by J the set of all matrices C ∈ M

(n+r+1)

×n

(

D)

such that rank C = k and A and C are adjacent. Then

φ

A

0

and

C

0

0

0

are adjacent for all C

∈ J . It follows easily that

φ

A

0

= 0

A

0

0

0

.

It remains to show (in case that k + 2

≤ n) that

D = φ

A

0

is of rank k + 2 whenever A =

"

k+2
j=1

t

u

j

v

j

is of rank k + 2. Set A

j

= A

t

u

j

v

j

,

j = 1, . . . , k + 2. As D is adjacent to

D

j

=

A

j

0

0

0

for every j = 1, . . . , k + 2, we have rank D

∈ {k, k + 1, k + 2}. In the case when

rank D = k we have Im D

Im D

j

, j = 1, . . . , k + 2, contradicting the fact that

k+2
j=1

Im D

j

=

{0}.

In the case when rank D = k + 1 we have by Lemma 5.2 that Im D = Im D

j

or

Ker D = Ker D

j

, j = 1, . . . , k + 2, contradicting the fact that the images as well as

the kernels of operators D

j

are pairwise distinct. Consequently, rank D = k + 2, as

desired. This completes the proof.

5.6. Proofs of corollaries

The proofs of all corollaries but the first one are rather straightforward.

Proof of Corollary 4.3. Obviously, degenerate maps do not preserve adjacency

in both directions. Therefore Corollary 4.3 follows directly from Theorem 4.2 once
we prove that under our assumptions there exists a matrix A_0 ∈ M_{m×n}(D) such that rank φ(A_0) = n. In fact, we will prove even more, namely that rank φ(A) = rank A for every A ∈ M_{m×n}(D).


There is nothing to prove when A = 0. Rank one matrices are adjacent to 0,

and consequently, their φ-images are adjacent to φ(0) = 0. Hence, φ maps rank
one matrices into rank one matrices. If A is of rank two, then A is adjacent to
some rank one matrix. It follows that φ(A) is adjacent to some rank one matrix,
and is therefore either the zero matrix, or a rank one matrix, or a rank two matrix.
We need to prove that the first two possibilities cannot occur. Assume first that
φ(A) = 0. Then φ(A) is adjacent to φ(B) for every B of rank one, and consequently,
A is adjacent to every B of rank one, a contradiction. It is clear that φ(A) is not of
rank one, since otherwise φ(A) would be adjacent to φ(0) = 0 which would imply
that A is of rank one, a contradiction. Hence, we have shown that rank two matrices
are mapped by φ into rank two matrices.

In the next step we will show that φ(A) is of rank three whenever A is of rank three. Each rank three matrix is adjacent to some rank two matrix, and therefore rank φ(A) ∈ {1, 2, 3} for every A of rank three. As before, φ(A) cannot be a rank one matrix. We thus need to show that rank φ(A) ≠ 2 for every A of rank three. Assume on the contrary that there is A ∈ M_{m×n}(D) of rank three with rank φ(A) = 2.

By Lemma 5.3, for every nonzero ^t x ∈ ^t D^m either there exists ^t y ∈ ^t D^p such that φ(L(^t x)) ⊂ L(^t y); or there exists w ∈ D^q such that φ(L(^t x)) ⊂ R(w). We will consider only the case when there exists ^t y ∈ ^t D^p such that
\[
φ(L({}^t x)) ⊂ L({}^t y).
\]
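As a reminder of the notation (stated here under the assumption that the sets L and R are as introduced in the earlier chapters): for ^t x ∈ ^t D^m and u ∈ D^n,
\[
L({}^t x) = \{\, {}^t x\, a : a ∈ D^n \,\}, \qquad R(u) = \{\, {}^t b\, u : {}^t b ∈ {}^t D^m \,\};
\]
in particular, for nonzero ^t x and u the rank one matrix ^t x u lies in L(^t x) ∩ R(u), which is the kind of nonzero intersection used in the argument that follows.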

We claim that then for every u ∈ D^n there exists w ∈ D^q such that φ(R(u)) ⊂ R(w). If this was not true, then by Lemma 5.4 we would be able to find a nonzero u ∈ D^n and a nonzero ^t z ∈ ^t D^p such that φ(R(u)) ⊂ L(^t z). As the intersection of R(u) and L(^t x) contains a nonzero matrix, the vectors ^t z and ^t y are linearly dependent. There is no loss of generality in assuming that ^t z = ^t y. Choose a, b ∈ D^n, a ≠ b, such that a and u are linearly independent, and b and u are linearly independent. Further, choose ^t c ∈ ^t D^m such that ^t x and ^t c are linearly independent. Then φ(^t x a) and φ(^t c u) are not adjacent, and φ(^t x b) and φ(^t c u) are not adjacent as well. Moreover, φ(^t x a), φ(^t x b), and φ(^t c u) all belong to the set L(^t y), any two distinct members of which are adjacent, and φ(^t x a) ≠ φ(^t x b). Hence φ(^t c u) would have to coincide with both φ(^t x a) and φ(^t x b), which is impossible.

This contradiction shows that for every nonzero u ∈ D^n there exists a nonzero w ∈ D^q such that φ(R(u)) ⊂ R(w). And then, in the same way, we conclude that for every nonzero ^t x ∈ ^t D^m there exists a nonzero ^t y ∈ ^t D^p such that φ(L(^t x)) ⊂ L(^t y).

Let A = ^t x_1 u_1 + ^t x_2 u_2 + ^t x_3 u_3 ∈ M_{m×n}(D). Then ^t x_1, ^t x_2, ^t x_3 are linearly independent and u_1, u_2, u_3 are linearly independent as well. Set A_j = A − ^t x_j u_j. We know that we have
\[
φ(L({}^t x_i)) ⊂ L({}^t y_i), \qquad i = 1, 2, 3,
\]
and
\[
φ(R(u_i)) ⊂ R(w_i), \qquad i = 1, 2, 3.
\]
It is our aim to prove that ^t y_1, ^t y_2, ^t y_3 as well as w_1, w_2, w_3 are linearly independent.
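For instance (purely for orientation), the rank three matrix E_{11} + E_{22} + E_{33} is of the above form with ^t x_i = ^t f_i and u_i = e_i:
\[
A = {}^t f_1 e_1 + {}^t f_2 e_2 + {}^t f_3 e_3 = E_{11} + E_{22} + E_{33}.
\]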

Assume for a moment that we have already proved this. We claim that then the two-dimensional image of the operator φ(A_1) is spanned by w_2 and w_3. Indeed, φ(A_1) is adjacent to φ(^t x_2 u_2), that is, the rank two matrix φ(A_1) can be written as a sum of two rank one matrices φ(A_1) = φ(^t x_2 u_2) + R, and by (19), we have Im φ(^t x_2 u_2) ⊂ Im φ(A_1). It follows that w_2 ∈ Im φ(A_1). Similarly, w_3 ∈ Im φ(A_1), and since w_2 and w_3 are linearly independent and φ(A_1) is of rank two, we can conclude that the image of φ(A_1) is the linear span of w_2 and w_3.
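As a consistency check (an illustrative computation only, carried out under the hypothetical normalization ^t y_i = ^t x_i ⊕ 0 and w_i = u_i ⊕ 0, which is what one gets for the standard embedding sending A to the matrix with A in its upper left corner and zeros elsewhere):
\[
Im\, φ(A_1) = Im \begin{pmatrix} {}^t x_2 u_2 + {}^t x_3 u_3 & 0 \\ 0 & 0 \end{pmatrix} = span\{u_2 ⊕ 0,\ u_3 ⊕ 0\} = span\{w_2, w_3\},
\]
where, consistently with the usage above, Im denotes the span of the rows.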

In the same way we see that the kernel of φ(A_1) consists of all vectors z ∈ D^p satisfying z ^t y_2 = z ^t y_3 = 0. And then it is clear that Im φ(A_2) = span{w_1, w_3} and Im φ(A_3) = span{w_1, w_2}. And the kernels of φ(A_1), φ(A_2), and φ(A_3) are pairwise different subspaces of D^p of codimension two. Now, φ(A) is a rank two matrix adjacent to φ(A_i), i = 1, 2, 3. It follows from Lemma 5.2 that for each i, 1 ≤ i ≤ 3, we have Im φ(A) = Im φ(A_i) or Ker φ(A) = Ker φ(A_i). This is impossible because the images as well as the kernels of the operators φ(A_i) are pairwise different.

Thus, in order to complete the proof of the fact that rank φ(A) = 3 for every A ∈ M_{m×n}(D) of rank three, we need to verify that ^t y_1, ^t y_2, ^t y_3 as well as w_1, w_2, w_3 are linearly independent. First observe that ^t y_1 and ^t y_2 are linearly independent.

If this was not true, we would have φ(L(^t x_i)) ⊂ L(^t y_1), i = 1, 2. Choose linearly independent vectors a, b, c ∈ D^n and set A = ^t x_1 a, B = ^t x_2 b, and C = ^t x_2 c. Then φ(A), φ(B), φ(C) ∈ L(^t y_1). Any two distinct matrices from L(^t y_1) are adjacent.

But φ(A) and φ(B) are not adjacent and also φ(A) and φ(C) are not adjacent. It
follows that φ(A) = φ(B) and φ(A) = φ(C). This contradicts the fact that φ(B)
and φ(C) are adjacent.

In the same way we see that any two vectors from the set {^t y_1, ^t y_2, ^t y_3} are linearly independent, and any two vectors from the set {w_1, w_2, w_3} are linearly independent as well.

By Lemma 5.5, the restriction of φ to L(^t x_1) is an injective lineation from L(^t x_1) into L(^t y_1). We further know that
\[
φ(\{ {}^t x_1 λ u_1 : λ ∈ D \}) ⊂ \{ {}^t y_1 λ w_1 : λ ∈ D \}
\]
and
\[
φ(\{ {}^t x_1 λ u_2 : λ ∈ D \}) ⊂ \{ {}^t y_1 λ w_2 : λ ∈ D \}.
\]
It follows that
\[
φ(\{ {}^t x_1 (λ u_1 + μ u_2) : λ, μ ∈ D \}) ⊂ \{ {}^t y_1 (λ w_1 + μ w_2) : λ, μ ∈ D \}
\]

and the restriction of φ to the set {^t x_1 (λu_1 + μu_2) : λ, μ ∈ D} is a lineation of this set into {^t y_1 (λw_1 + μw_2) : λ, μ ∈ D} whose range is not contained in any hyperplane. By Schaeffer's theorem, this restriction is surjective. In other words, for every pair of scalars λ, μ ∈ D, not both equal to zero, there exists a pair of scalars α, β ∈ D, not both zero, such that
\[
φ(R(αu_1 + βu_2)) ⊂ R(λw_1 + μw_2). \tag{39}
\]

Now, if w_1, w_2, w_3 were linearly dependent, then we would have w_3 = λw_1 + μw_2 for some scalars λ, μ, not both equal to zero. This would further imply that φ(R(u_3)) ⊂ R(λw_1 + μw_2), which by the same argument as above is in contradiction with (39). This completes the proof of the statement that rank φ(A) = 3 whenever rank A = 3.

We continue by induction. We assume that k ≥ 3 and that rank φ(A) = k whenever rank A = k, and we need to prove that rank φ(A) = k + 1 for every A of rank k + 1.

So, let C ∈ M_{m×n}(D) be of rank k + 1. After replacing the map φ by the map A ↦ T_2 φ(T_1 A S_1) S_2, where T_1, T_2, S_1, S_2 are all square invertible matrices of the appropriate sizes, we may assume with no loss of generality that
\[
C = E_{11} + \ldots + E_{k+1,k+1}
\]
and
\[
φ(E_{11} + \ldots + E_{kk}) = E_{11} + \ldots + E_{kk}.
\]

By Theorem 4.2 we know that either there exists an invertible k × k matrix T and an automorphism τ : D → D such that
\[
φ\begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} T A^{τ} T^{-1} & 0 \\ 0 & 0 \end{pmatrix}
\]
for every k × k matrix A; or there exists an invertible k × k matrix T and an anti-automorphism σ : D → D such that
\[
φ\begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} T\, {}^t A^{σ} T^{-1} & 0 \\ 0 & 0 \end{pmatrix}
\]
for every A ∈ M_k(D). After composing φ with yet another equivalence transformation we may further assume that T = I. We will consider just the second of the above two possibilities.
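For orientation, in the second case with k = 2 and T = I the formula reads (an illustrative instance; the zero blocks have the appropriate sizes):
\[
φ\begin{pmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} σ(a) & σ(c) & 0 \\ σ(b) & σ(d) & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad a, b, c, d ∈ D.
\]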

We know that the φ-image of E_{k+1,k+1} is a rank one matrix, φ(E_{k+1,k+1}) = ^t u v. It is our aim to prove that ^t u, ^t f_1, . . . , ^t f_k are linearly independent and that v, e_1, . . . , e_k are linearly independent. Assume for a moment we have already proved this. Then after composing φ once more with an equivalence transformation we may assume that φ(E_{k+1,k+1}) = E_{k+1,k+1}.
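One possible way to arrange this (a sketch only, assuming the linear independence statements above and that p, q ≥ k + 1): choose invertible matrices P ∈ M_p(D) and Q ∈ M_q(D) with
\[
P\, {}^t f_i = {}^t f_i, \quad e_i Q = e_i \ \ (1 ≤ i ≤ k), \qquad P\, {}^t u = {}^t f_{k+1}, \qquad v Q = e_{k+1},
\]
and replace φ by the map X ↦ P φ(X) Q. This composition with an equivalence transformation leaves the already normalized values on the upper left k × k corner unchanged, while it sends φ(E_{k+1,k+1}) = ^t u v to ^t f_{k+1} e_{k+1} = E_{k+1,k+1}.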

We know that the restriction of φ to matrices having nonzero entries only in

the second, the third,..., and the (k + 1)-st row and the second, the third,..., and
the (k + 1)-st column is standard. In particular, it is additive. Therefore, we have

\[
φ(E_{22} + \ldots + E_{kk} + E_{k+1,k+1}) = φ(E_{22} + \ldots + E_{kk}) + φ(E_{k+1,k+1}) = E_{22} + \ldots + E_{kk} + E_{k+1,k+1}.
\]

Let A_j = E_{11} + . . . + E_{k+1,k+1} − E_{jj} ∈ M_{m×n}(D) and B_j = E_{11} + . . . + E_{k+1,k+1} − E_{jj} ∈ M_{p×q}(D), j = 1, . . . , k + 1. Then we prove in the same way that
\[
φ(A_j) = B_j, \qquad j = 1, . . . , k + 1.
\]

We have to show that φ(C) = φ(E_{11} + . . . + E_{k+1,k+1}) is of rank k + 1. As it is adjacent to each B_j, j = 1, . . . , k + 1, it has to be of rank either k + 1, or k, or k − 1. We will show that the last two possibilities cannot occur. If rank φ(C) was k, then by Lemma 5.2 for each j ∈ {1, . . . , k + 1} we would have Im φ(C) = Im B_j or Ker φ(C) = Ker B_j, which is impossible because the images of the B_j's are pairwise different and the same is true for the kernels of the B_j's. If rank φ(C) was k − 1, then we would have Im φ(C) ⊂ Im B_j, j = 1, . . . , k + 1, which is again impossible as the intersection of the images of the operators B_j, j = 1, . . . , k + 1, is the zero space.
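Concretely (under the convention, used throughout the argument above, that Im B_j denotes the span of the rows of B_j):
\[
Im\, B_j = span\{ e_i : 1 ≤ i ≤ k + 1,\ i ≠ j \}, \qquad \bigcap_{j=1}^{k+1} Im\, B_j = \{0\},
\]
since a vector lying in all of these spans is supported on the first k + 1 coordinates and has zero j-th coordinate for every j = 1, . . . , k + 1, hence is zero.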

Thus, in order to complete the proof we need to show that ^t u, ^t f_1, . . . , ^t f_k are linearly independent and that v, e_1, . . . , e_k are linearly independent. We will verify only the linear independence of ^t u, ^t f_1, . . . , ^t f_k. Assume on the contrary that this is not true. Then ^t u = ^t f_1 λ_1 + . . . + ^t f_k λ_k for some scalars λ_1, . . . , λ_k, not all of them being zero. We know that
\[
φ(R(e_{k+1})) ⊂ L({}^t u)
\]


or φ(R(e_{k+1})) ⊂ R(v). Using the fact that φ(R(e_1)) ⊂ L(^t f_1) we easily conclude that we have the first possibility. But we know that also
\[
φ(R(σ^{-1}(λ_1) e_1 + \ldots + σ^{-1}(λ_k) e_k)) ⊂ L({}^t u).
\]
We can find M, N ∈ R(σ^{-1}(λ_1)e_1 + . . . + σ^{-1}(λ_k)e_k) such that M ≠ N, and neither M nor N is adjacent to E_{k+1,k+1}. But φ(M), φ(N), and φ(E_{k+1,k+1}) all belong to L(^t u) and φ(M) ≠ φ(N); consequently, φ(E_{k+1,k+1}) is adjacent to φ(M) or to φ(N), a contradiction.

Proof of Corollaries 4.4 and 4.5. All we need to observe is that degenerate adjacency preserving maps satisfy (9) or (10). But then for any pair of rank one matrices B, C ∈ M_{m×n}(D) we have d(φ(B), φ(C)) ≤ 1, a contradiction.
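For instance, writing d for the arithmetic distance (the standing assumption in this remark being d(B, C) = rank(B − C)), the rank one matrices E_{11} and E_{22} satisfy
\[
d(E_{11}, E_{22}) = rank(E_{11} − E_{22}) = 2 > 1.
\]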

Proof of Corollary 4.6. Once again we only need to prove that φ is not degenerate. Assume on the contrary that it is degenerate. It follows that there exists a map ϕ : P^1_n(D) → D such that ϕ(P) ≠ ϕ(Q) whenever P and Q are adjacent rank one idempotents. In particular, we have an injective map from the set of all rank one idempotents of the form
\[
\begin{pmatrix}
1 & * & * & \cdots & * \\
0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 0
\end{pmatrix}
\]
into D, a contradiction.
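The injectivity used here can be seen as follows (a short illustrative computation): every idempotent of the displayed form equals ^t f_1 a for a row vector a ∈ D^n whose first coordinate is 1, and for two such idempotents
\[
{}^t f_1 a − {}^t f_1 b = {}^t f_1 (a − b),
\]
which is of rank one whenever a ≠ b; hence any two distinct idempotents of the displayed form are adjacent, and ϕ takes different values on them.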

Acknowledgments

The author would like to thank Wen-ling Huang for many fruitful discussions on the topic of this paper. The author would also like to express his gratitude to the referee, whose suggestions helped to improve the exposition.


Bibliography

[1] Walter Benz, Geometrische Transformationen (German), Bibliographisches Institut, Mannheim, 1992. Unter besonderer Berücksichtigung der Lorentztransformationen. [With special reference to the Lorentz transformations]. MR1183223 (93i:51002)
[2] Wei-Liang Chow, On the geometry of algebraic homogeneous spaces, Ann. of Math. (2) 50 (1949), 32–67. MR0028057 (10,396d)
[3] G. Frobenius, Über die Darstellung der endlichen Gruppen durch Linear Substitutionen, Sitzungsber. Deutsch. Akad. Wiss. Berlin (1897), 994–1015.
[4] Loo-Keng Hua, Geometries of matrices. I. Generalizations of von Staudt's theorem, Trans. Amer. Math. Soc. 57 (1945), 441–481. MR0012679 (7,58b)
[5] Loo-Keng Hua, Geometries of matrices. I_1. Arithmetical construction, Trans. Amer. Math. Soc. 57 (1945), 482–490. MR0012680 (7,58c)
[6] Loo-Keng Hua, Geometries of symmetric matrices over the real field. I, C. R. (Doklady) Acad. Sci. URSS (N.S.) 53 (1946), 95–97. MR0018767 (8,328b)
[7] Loo-Keng Hua, Geometries of symmetric matrices over the real field. II, C. R. (Doklady) Acad. Sci. URSS (N.S.) 53 (1946), 195–196. MR0018768 (8,328c)
[8] Loo-Keng Hua, Geometries of matrices. II. Study of involutions in the geometry of symmetric matrices, Trans. Amer. Math. Soc. 61 (1947), 193–228. MR0022203 (9,171d)
[9] Loo-Keng Hua, Geometries of matrices. III. Fundamental theorems in the geometries of symmetric matrices, Trans. Amer. Math. Soc. 61 (1947), 229–255. MR0022204 (9,171e)
[10] Loo-Keng Hua, Geometry of symmetric matrices over any field with characteristic other than two, Ann. of Math. (2) 50 (1949), 8–31. MR0028296 (10,424h)
[11] Loo-Keng Hua, A theorem on matrices over a sfield and its applications (English, with Chinese summary), J. Chinese Math. Soc. (N.S.) 1 (1951), 110–163. MR0071414 (17,123a)
[12] Wen-ling Huang and Peter Šemrl, Adjacency preserving maps on Hermitian matrices, Canad. J. Math. 60 (2008), no. 5, 1050–1066, DOI 10.4153/CJM-2008-047-1. MR2442047 (2009j:15004)
[13] Wen-ling Huang and Zhe-Xian Wan, Adjacency preserving mappings of rectangular matrices, Beiträge Algebra Geom. 45 (2004), no. 2, 435–446. MR2093176 (2005h:15091)
[14] Nathan Jacobson, Lectures in abstract algebra. Vol. II. Linear algebra, D. Van Nostrand Co., Inc., Toronto-New York-London, 1953. MR0053905 (14,837e)
[15] H. Kestelman, Automorphisms of the field of complex numbers, Proc. London Math. Soc. (2) 53 (1951), 1–12. MR0041206 (12,812b)
[16] Chi-Kwong Li and Stephen Pierce, Linear preserver problems, Amer. Math. Monthly 108 (2001), no. 7, 591–605, DOI 10.2307/2695268. MR1862098 (2002g:15005)
[17] Marvin Marcus, Linear transformations on matrices, J. Res. Nat. Bur. Standards Sect. B 75B (1971), 107–113. MR0309953 (46 #9056)
[18] H. Schaeffer, Über eine Verallgemeinerung des Fundamentalsatzes in desarguesschen affinen Ebenen, Techn. Univ. München TUM-M8010.
[19] Peter Šemrl, On Hua's fundamental theorem of the geometry of rectangular matrices, J. Algebra 248 (2002), no. 1, 366–380, DOI 10.1006/jabr.2001.9052. MR1879022 (2002k:15005)
[20] Peter Šemrl, Generalized symmetry transformations on quaternionic indefinite inner product spaces: an extension of quaternionic version of Wigner's theorem, Comm. Math. Phys. 242 (2003), no. 3, 579–584, DOI 10.1007/s00220-003-0957-7. MR2020281 (2005h:47127)
[21] Peter Šemrl, Hua's fundamental theorem of the geometry of matrices, J. Algebra 272 (2004), no. 2, 801–837, DOI 10.1016/j.jalgebra.2003.07.019. MR2028082 (2004k:16083)
[22] Peter Šemrl, Maps on idempotent matrices over division rings, J. Algebra 298 (2006), no. 1, 142–187, DOI 10.1016/j.jalgebra.2005.08.010. MR2215122 (2007a:16040)


[23] Peter Šemrl, Comparability preserving maps on Hilbert space effect algebras, Comm. Math. Phys. 313 (2012), no. 2, 375–384, DOI 10.1007/s00220-012-1434-y. MR2942953
[24] Peter Šemrl, Symmetries on bounded observables: a unified approach based on adjacency preserving maps, Integral Equations Operator Theory 72 (2012), no. 1, 7–66, DOI 10.1007/s00020-011-1917-9. MR2872605
[25] Zhe-Xian Wan, Geometry of matrices, World Scientific Publishing Co. Inc., River Edge, NJ, 1996. In memory of Professor L. K. Hua (1910–1985). MR1409610 (98a:51001)
[26] R. Westwick, On adjacency preserving maps, Canad. Math. Bull. 17 (1974), no. 3, 403–405; correction, ibid. 17 (1974), no. 4, 623. MR0379519 (52 #424)


