Probl. Anal. Issues Anal. Vol. 5(23), No. 2, 2016, pp. 69-78
DOI: 10.15393/j3.art.2016.3510
UDC 517.28, 517.54, 517.41
V. V. Starkov
JACOBIAN CONJECTURE, TWO-DIMENSIONAL CASE
Abstract. The Jacobian Conjecture was first formulated by O. Keller in 1939. In its modern form it asserts injectivity of a polynomial mapping $f\colon \mathbb{R}^n \to \mathbb{R}^n$ ($\mathbb{C}^n \to \mathbb{C}^n$) provided that its Jacobian $J_f \equiv \mathrm{const} \ne 0$. In this note we consider the structure of polynomial mappings $f$ with $J_f \equiv \mathrm{const} \ne 0$.
Key words: Jacobian conjecture
2010 Mathematics Subject Classification: 14R15
Introduction. Denote by $P_m$ the set of all polynomials in $\mathbb{R}^n$ (or $\mathbb{C}^n$) of degree not higher than $m$. Let $\mathcal{P}_m$ be the set of all polynomial mappings $F = (F_1,\dots,F_n)\colon \mathbb{R}^n \to \mathbb{R}^n$ (or $\mathbb{C}^n \to \mathbb{C}^n$), $F_k \in P_m$ ($k = 1,\dots,n$), of degree $\deg F \le m$. The Jacobi matrix and the Jacobian of a mapping $F$ are denoted by $DF$ and $J_F$, respectively. In the complex case both $DF$ and $J_F$ are complex. The Jacobian Conjecture (JC), formulated by Keller [1] in 1939, in its modern form reads:
if $F \in \mathcal{P}_m$ and $J_F \equiv \mathrm{const} \ne 0$, then $F$ is injective in $\mathbb{R}^n$ ($\mathbb{C}^n$). A proof of the conjecture would allow its wide use in a number of branches of mathematics. Besides the one given above, other equivalent formulations also exist. Many publications are devoted to this conjecture: see, e.g., [2]-[6]. In particular, in [7] the conjecture is proved for $F \in \mathcal{P}_2$ for any $n$; in [8] it is checked for $n = 2$ and $F \in \mathcal{P}_{100}$. However, it has been proved neither true nor false for any $n$. It is included in the list of "Mathematical Problems for the Next Century" [9].
In this note we consider the question of the structure of mappings $F \in \mathcal{P}_m$ with $J_F \equiv \mathrm{const} \ne 0$. This question seems to be the most important one for proving or disproving the (JC). Solving this problem and applying criteria or sufficient conditions for injectivity of mappings would help to make progress on the (JC). Here we obtain results only for $n = 2$ and $m = 2, 3$. However, they serve as
a starting point for results in the general case ($n, m \ge 3$) in our future article with S. Ponnusamy. Let us pass to the results.
Theorem 1. Let $F(x,y) = (U(x,y), V(x,y))$ be a polynomial mapping, $F(0,0) = 0$, $U(x,y), V(x,y) \in P_2$. Then $J_F \equiv \mathrm{const} \ne 0$ iff $F = A \circ f \circ B$, where $A$ and $B$ are linear homogeneous nondegenerate mappings, $f(x,y) = (u(x,y), v(x,y))$,
\[
u(x,y) = x + a_2(x+y)^2, \qquad v(x,y) = y - a_2(x+y)^2, \tag{1}
\]
and $a_2$ is an arbitrary fixed constant.
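The identity $J_f \equiv 1$ for the normal form (1) amounts to a one-line computation; the following SymPy sketch checks it (the code and symbol names are illustrative, not part of the paper's argument).

```python
# Minimal SymPy check that the normal form (1) has Jacobian identically 1.
import sympy as sp

x, y, a2 = sp.symbols('x y a2')

u = x + a2*(x + y)**2
v = y - a2*(x + y)**2

J = sp.Matrix([u, v]).jacobian([x, y]).det()
print(sp.expand(J))  # prints 1, for every value of a2
```

The determinant equals $1$ for every $a_2$, since the contributions $\pm 2a_2(x+y)$ in the Jacobi matrix cancel exactly.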
The case $m = 3$ is considered using Theorem 1.
Theorem 2. Let $F(x,y) = (U(x,y), V(x,y))$ be a polynomial mapping, $F(0,0) = 0$, $U(x,y), V(x,y) \in P_3$. Then $J_F \equiv \mathrm{const} \ne 0$ iff $F = A \circ f \circ B$, where $A$ and $B$ are linear homogeneous nondegenerate mappings, $f(x,y) = (u(x,y), v(x,y))$,
\[
u(x,y) = x + a_2(x+y)^2 + a_3(x+y)^3, \qquad v(x,y) = y - a_2(x+y)^2 - a_3(x+y)^3,
\]
and $a_2$, $a_3$ are arbitrary fixed constants.
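The easy ("sufficiency") direction of Theorem 2 can also be checked symbolically: composing the cubic normal form $f$ with arbitrary nondegenerate linear mappings $A$ and $B$ gives $J_{A\circ f\circ B} = \det A \cdot \det B$, a nonzero constant. A hedged SymPy sketch with generic symbolic matrices (illustrative names, not the paper's notation):

```python
# Sufficiency sketch: J_{A o f o B} = det(A)*det(B) for the cubic normal form.
import sympy as sp

x, y, a2, a3 = sp.symbols('x y a2 a3')
a11, a12, a21, a22 = sp.symbols('a11 a12 a21 a22')
b11, b12, b21, b22 = sp.symbols('b11 b12 b21 b22')

A = sp.Matrix([[a11, a12], [a21, a22]])   # generic matrix of the linear map A
B = sp.Matrix([[b11, b12], [b21, b22]])   # generic matrix of the linear map B

X = B * sp.Matrix([x, y])                 # apply B first
S = X[0] + X[1]
f = sp.Matrix([X[0] + a2*S**2 + a3*S**3,
               X[1] - a2*S**2 - a3*S**3])
F = A * f                                 # then apply A

JF = F.jacobian([x, y]).det()
print(sp.expand(JF - A.det()*B.det()))    # prints 0: J_F = det A * det B
```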
It is natural to assume that statements similar to Theorems 1 and 2 hold in $\mathcal{P}_m$ for any $m > 3$; this leads to the following conjecture:
If $F(x,y) = (U(x,y), V(x,y)) \in \mathcal{P}_m$, $F(0,0) = 0$, then $J_F \equiv \mathrm{const} \ne 0$ iff $F = A \circ f \circ B$, where $A$ and $B$ are linear homogeneous nondegenerate mappings, $f(x,y) = (u(x,y), v(x,y))$,
\[
u(x,y) = x + a_2(x+y)^2 + \dots + a_m(x+y)^m,
\]
\[
v(x,y) = y - a_2(x+y)^2 - \dots - a_m(x+y)^m,
\]
where $a_2, \dots, a_m$ are arbitrary fixed constants.
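The "if" part of this conjecture is immediate for every $m$: the mapping $f$ above always has $J_f \equiv 1$. A small SymPy sketch checking this for $m = 2, \dots, 6$ (the bound $6$ is an arbitrary illustration):

```python
# Check J_f == 1 for the conjectured family, for several degrees m.
import sympy as sp

x, y = sp.symbols('x y')

for m in range(2, 7):
    a = sp.symbols(f'a2:{m + 1}')                          # a_2, ..., a_m
    h = sum(ak*(x + y)**k for k, ak in zip(range(2, m + 1), a))
    u, v = x + h, y - h
    J = sp.Matrix([u, v]).jacobian([x, y]).det()
    assert sp.expand(J) == 1, m
print("J_f == 1 for m = 2, ..., 6")
```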
This conjecture is implicitly supported by our (joint with S. Ponnusamy) Theorem A.
Theorem A. The Jacobian Conjecture is true for mappings $F(X) = (A \circ f \circ B)(X)$, where $X = (x_1, \dots, x_n) \in \mathbb{R}^n$, $A$ and $B$ are linear mappings such that $\det A, \det B \ne 0$, $f = (u_1, \dots, u_n)$, and for $k = 1, \dots, n$
\[
u_k(X) = x_k + \gamma_k\bigl[a_2(x_1 + \dots + x_n)^2 + a_3(x_1 + \dots + x_n)^3 + \dots + a_m(x_1 + \dots + x_n)^m\bigr],
\]
where $a_j, \gamma_k \in \mathbb{R}$ and $\sum_{k=1}^{n}\gamma_k = 0$.
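The Jacobian computation behind Theorem A is easy to reproduce symbolically; here is a sketch for the particular choice $n = 3$, $m = 3$ (our illustration), with $\gamma_3 = -\gamma_1 - \gamma_2$ enforcing $\sum_k \gamma_k = 0$:

```python
# Illustration of Theorem A in R^3: with gamma_1+gamma_2+gamma_3 = 0, J_f == 1.
import sympy as sp

x1, x2, x3, a2, a3, g1, g2 = sp.symbols('x1 x2 x3 a2 a3 g1 g2')
g3 = -g1 - g2                        # enforces gamma_1 + gamma_2 + gamma_3 = 0

s = x1 + x2 + x3
h = a2*s**2 + a3*s**3                # the common "shift" polynomial (m = 3 here)
f = sp.Matrix([x1 + g1*h, x2 + g2*h, x3 + g3*h])

J = f.jacobian([x1, x2, x3]).det()
print(sp.expand(J))                  # prints 1
```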
Remark. All the theorems formulated above hold both in the real and in the complex case.
Proof of Theorem 1. Let $\Phi(x,y) = (DF)^{-1}(0,0)\,F(x,y) = (U(x,y), V(x,y))$; then
\[
U(x,y) = x + A_2x^2 + A_1xy + A_0y^2 = x + L(x,y),
\]
\[
V(x,y) = y + a_2x^2 + a_1xy + a_0y^2 = y + l(x,y).
\]
Therefore $1 \equiv J_\Phi(x,y) = 1 + \mathrm{I} + \mathrm{II}$, where
\[
\mathrm{I} = L_x + l_y, \qquad \mathrm{II} = L_xl_y - l_xL_y;
\]
since $\mathrm{I}$ and $\mathrm{II}$ are homogeneous polynomials of the first and the second degree, respectively, we get $\mathrm{I} \equiv 0$ and $\mathrm{II} \equiv 0$.
\[
\mathrm{II} \equiv 0 \;\Longrightarrow\; \frac{2A_2x + A_1y}{2a_2x + a_1y} = \frac{A_1x + 2A_0y}{a_1x + 2a_0y} \;\Longrightarrow\; \frac{A_2}{a_2} = \frac{A_1}{a_1} = \frac{A_0}{a_0},
\]
i.e., the polynomials $L$ and $l$ are mutually proportional. Therefore there exist $\xi, \eta \in \mathbb{R}$ such that
\[
\xi L + \eta l = 0, \qquad |\xi| + |\eta| \ne 0. \tag{2}
\]
Then
\[
\mathrm{I} \equiv 0 \;\Longleftrightarrow\; \xi l_y = \eta l_x \;\Longleftrightarrow\; \xi(a_1x + 2a_0y) = \eta(2a_2x + a_1y), \tag{3}
\]
i.e., $\xi a_1 = 2\eta a_2$ and $2a_0\xi = \eta a_1$. Let us use the following notation: $a_0 := \eta^2 r$, $r \in \mathbb{R}$. Then $a_1 = 2\xi\eta r$, $a_2 = \xi^2 r$, and $l = r(\xi x + \eta y)^2$.
First we consider the case $\xi \ne 0$. Then from equality (2) we get $L = -\frac{\eta}{\xi}\,l = -\frac{\eta}{\xi}\,r(\xi x + \eta y)^2$, and we denote $r := \rho\xi$, $\rho \in \mathbb{R}$. Then
\[
l = \rho\xi(\xi x + \eta y)^2, \qquad L = -\rho\eta(\xi x + \eta y)^2
\]
and
\[
\Phi = (U,V), \quad U(x,y) = x - \rho\eta(\xi x + \eta y)^2, \quad V(x,y) = y + \rho\xi(\xi x + \eta y)^2. \tag{4}
\]
Note that for fixed $\xi, \eta \ne 0$ and for the linear mappings $A$ and $B$ with matrices
\[
A = \begin{pmatrix} 1/\xi & 0 \\ 0 & 1/\eta \end{pmatrix}, \qquad B = \begin{pmatrix} \xi & 0 \\ 0 & \eta \end{pmatrix},
\]
the polynomial mapping
\[
A \circ f \circ B = \Bigl(x + \frac{a_2}{\xi}(\xi x + \eta y)^2,\; y - \frac{a_2}{\eta}(\xi x + \eta y)^2\Bigr)
\]
($f$ is from the formulation of Theorem 1). We put here $a_2 := -\rho\xi\eta$; then the parameter $a_2$ is an arbitrary fixed number, since $\rho$ is arbitrary. We obtain $A \circ f \circ B = (U,V) = \Phi$. Thus the polynomial mapping (4) coincides with (1) up to linear transformations with nondegenerate matrices.
In the case $\xi = 0$ we have, from (2), $l \equiv 0$, and the equality $\mathrm{I} = 0$ implies $L_x = 0$. Thus $L = A_0y^2$ and $\Phi(x,y) = (x + A_0y^2, y)$. Let $A$ and $B$ be linear homogeneous mappings with matrices
\[
A = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix} \qquad \text{and} \qquad B = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix},
\]
respectively. Then
\[
f(x,y) = (A \circ \Phi \circ B)(x,y) = \bigl(x + A_0(x+y)^2,\; y - A_0(x+y)^2\bigr).
\]
Sufficiency in Theorem 1 is checked by direct calculations. The proof is complete. □
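Both compositions used above are easy to verify symbolically; the following SymPy sketch (an illustration, with symbol names chosen here) checks that the diagonal matrices reduce (4) to the normal form (1) with $a_2 = -\rho\xi\eta$, and that the triangular matrices handle the case $\xi = 0$.

```python
# Verification sketch for the two compositions in the proof of Theorem 1.
import sympy as sp

x, y, xi, eta, rho, A0 = sp.symbols('x y xi eta rho A0')

def f(p, q, a2):                                   # the normal form (1)
    return sp.Matrix([p + a2*(p + q)**2, q - a2*(p + q)**2])

# Case xi != 0 (eta != 0): A = diag(1/xi, 1/eta), B = diag(xi, eta), a2 = -rho*xi*eta.
Ad, Bd = sp.diag(1/xi, 1/eta), sp.diag(xi, eta)
lhs = Ad * f(*(Bd * sp.Matrix([x, y])), -rho*xi*eta)
rhs = sp.Matrix([x - rho*eta*(xi*x + eta*y)**2,
                 y + rho*xi*(xi*x + eta*y)**2])    # the mapping (4)
print((lhs - rhs).applyfunc(sp.expand))            # Matrix([[0], [0]])

# Case xi = 0: Phi = (x + A0*y**2, y) with A = [[1,0],[-1,1]], B = [[1,0],[1,1]].
At, Bt = sp.Matrix([[1, 0], [-1, 1]]), sp.Matrix([[1, 0], [1, 1]])
X = Bt * sp.Matrix([x, y])
Phi = sp.Matrix([X[0] + A0*X[1]**2, X[1]])
print((At * Phi - f(x, y, A0)).applyfunc(sp.expand))  # Matrix([[0], [0]])
```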
Proof of Theorem 2. We take the Jacobi matrix $DF(0,0)$ as the matrix $A$. Then for the polynomial mapping $\Phi(x,y) := (DF)^{-1}(0,0) \circ F(x,y) = (U(x,y), V(x,y))$ we have
\[
U(x,y) = x + L(x,y) + W(x,y), \qquad V(x,y) = y + l(x,y) + w(x,y),
\]
where $L$ and $l$ are homogeneous polynomials of degree 2, $W$ and $w$ are homogeneous polynomials of degree 3, and $J_\Phi(x,y) \equiv 1$. Therefore
\[
J_\Phi(x,y) \equiv 1 = 1 + (L_x + l_y) + (L_xl_y - l_xL_y + W_x + w_y) + [L_xw_y - l_xW_y + W_xl_y - w_xL_y] + \begin{vmatrix} W_x & W_y \\ w_x & w_y \end{vmatrix}. \tag{5}
\]
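The grouping of terms in (5) by homogeneity degree can be confirmed symbolically; a SymPy sketch with generic coefficients (an illustration, not part of the proof) follows.

```python
# Sketch: the Jacobian of Phi = (x + L + W, y + l + w) splits exactly as in (5).
import sympy as sp

x, y, t = sp.symbols('x y t')
A2, A1, A0, a2, a1, a0 = sp.symbols('A2 A1 A0 a2 a1 a0')   # quadratic coefficients
A, B, C, D, a, b, c, d = sp.symbols('A B C D a b c d')     # cubic coefficients

L = A2*x**2 + A1*x*y + A0*y**2;  l = a2*x**2 + a1*x*y + a0*y**2
W = A*x**3 + B*x**2*y + C*x*y**2 + D*y**3;  w = a*x**3 + b*x**2*y + c*x*y**2 + d*y**3

J = sp.expand(sp.Matrix([x + L + W, y + l + w]).jacobian([x, y]).det())
Jt = sp.expand(J.subs({x: t*x, y: t*y}, simultaneous=True))
part = [Jt.coeff(t, k) for k in range(5)]       # homogeneous pieces, degrees 0..4

Lx, Ly, lx, ly = [sp.diff(p, v) for p in (L, l) for v in (x, y)]
Wx, Wy, wx, wy = [sp.diff(p, v) for p in (W, w) for v in (x, y)]

assert part[0] == 1
assert sp.expand(part[1] - (Lx + ly)) == 0
assert sp.expand(part[2] - (Lx*ly - lx*Ly + Wx + wy)) == 0
assert sp.expand(part[3] - (Lx*wy - lx*Wy + Wx*ly - wx*Ly)) == 0
assert sp.expand(part[4] - (Wx*wy - Wy*wx)) == 0
print("decomposition (5) confirmed")
```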
The last term in (5) (and only it) is a homogeneous polynomial of degree 4, so it equals 0. Thus the rows of the determinant in (5) are proportional, i.e., if we denote
\[
W := Ax^3 + Bx^2y + Cxy^2 + Dy^3, \qquad w := ax^3 + bx^2y + cxy^2 + dy^3,
\]
then
\[
\frac{W_x}{w_x} = \frac{W_y}{w_y} \;\Longleftrightarrow\; \frac{3Ax^2 + 2Bxy + Cy^2}{3ax^2 + 2bxy + cy^2} = \frac{Bx^2 + 2Cxy + 3Dy^2}{bx^2 + 2cxy + 3dy^2} \;\Longrightarrow\; \frac{A}{a} = \frac{B}{b}, \quad \frac{C}{c} = \frac{D}{d}. \tag{6}
\]
Here and in the sequel we may assume that the coefficients $a, b, c, d$ are nonzero, because otherwise we consider, instead of $\Phi$, the polynomial mapping $\Phi^* := A_\varepsilon^{-1} \circ \Phi \circ A_\varepsilon$ with an appropriate homogeneous linear mapping $A_\varepsilon$ with matrix
\[
A_\varepsilon = \begin{pmatrix} 1+\varepsilon & \varepsilon \\ -\varepsilon & 1-\varepsilon \end{pmatrix};
\]
here $\varepsilon$ is an independent variable (we do not consider the case $w \equiv 0 \equiv W$, which is reduced to Theorem 1). Indeed,
\[
\Phi^* = (x + L^* + W^*,\; y + l^* + w^*), \qquad (W^*, w^*) = A_\varepsilon^{-1} \circ (W, w) \circ A_\varepsilon,
\]
where $L^*$ and $l^*$ are homogeneous polynomials of the second degree and $W^*$, $w^*$ are homogeneous polynomials of the third degree,
\[
w^* := a(\varepsilon)x^3 + b(\varepsilon)x^2y + c(\varepsilon)xy^2 + d(\varepsilon)y^3.
\]
Also
\[
a(\varepsilon) = a(1+\varepsilon)^4 - b(1+\varepsilon)^3\varepsilon + c(1+\varepsilon)^2\varepsilon^2 - d(1+\varepsilon)\varepsilon^3 + A(1+\varepsilon)^3\varepsilon - B(1+\varepsilon)^2\varepsilon^2 + C(1+\varepsilon)\varepsilon^3 - D\varepsilon^4,
\]
and equality (6) easily implies $a(\varepsilon) \not\equiv 0$; the same is true for the other coefficients: $b(\varepsilon), c(\varepsilon), d(\varepsilon) \not\equiv 0$. So (6) implies
\[
\frac{3Ax^2 + 2Bxy + Cy^2}{3ax^2 + 2bxy + cy^2} - \frac{A}{a} = \frac{Bx^2 + 2Cxy + 3Dy^2}{bx^2 + 2cxy + 3dy^2} - \frac{B}{b}.
\]
Divide this equality by $y$ and pass to the limit as $y \to 0$ to obtain
\[
\frac{2}{3ax}\Bigl(B - \frac{bA}{a}\Bigr) = \frac{2}{bx}\Bigl(C - \frac{cB}{b}\Bigr);
\]
the left-hand side vanishes by (6), hence $\dfrac{C}{c} = \dfrac{B}{b}$.
Therefore, $W = \lambda w$ for some constant $\lambda$. Note that this means that the original forms $W$ and $w$ are proportional, i.e., the proportionality holds without passing to the transformed pair $A_\varepsilon^{-1} \circ (W, w) \circ A_\varepsilon$.
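The expression for $a(\varepsilon)$ displayed above can be confirmed by a direct symbolic computation of $(W^*, w^*) = A_\varepsilon^{-1} \circ (W, w) \circ A_\varepsilon$; here is a SymPy sketch (an illustration only).

```python
# Check of the formula for a(eps): the x^3-coefficient of w* after conjugation by A_eps.
import sympy as sp

x, y, e = sp.symbols('x y epsilon')
A, B, C, D, a, b, c, d = sp.symbols('A B C D a b c d')

def W(p, q): return A*p**3 + B*p**2*q + C*p*q**2 + D*q**3
def w(p, q): return a*p**3 + b*p**2*q + c*p*q**2 + d*q**3

Xe, Ye = (1 + e)*x + e*y, -e*x + (1 - e)*y            # A_eps applied to (x, y)
Ainv = sp.Matrix([[1 - e, -e], [e, 1 + e]])           # A_eps^{-1} (det A_eps = 1)
wstar = sp.expand((Ainv * sp.Matrix([W(Xe, Ye), w(Xe, Ye)]))[1])

a_eps = wstar.coeff(x, 3)                             # coefficient of x^3 in w*
claimed = (a*(1 + e)**4 - b*(1 + e)**3*e + c*(1 + e)**2*e**2 - d*(1 + e)*e**3
           + A*(1 + e)**3*e - B*(1 + e)**2*e**2 + C*(1 + e)*e**3 - D*e**4)
print(sp.expand(a_eps - claimed))                     # prints 0
```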
Also, the third-degree terms in (5) appear only in the square bracket, so it equals 0:
\[
L_xw_y - \lambda l_xw_y + \lambda w_xl_y - w_xL_y = 0,
\]
i.e., for all $(x, y)$
\[
(L - \lambda l)_x w_y = (L - \lambda l)_y w_x. \tag{7}
\]
Note that the case $w_x \equiv 0 \equiv w_y$ satisfies the conditions of Theorem 1, so we need not consider it. We consider three cases.
1) $(L - \lambda l)_x \not\equiv 0 \not\equiv (L - \lambda l)_y$.
Let us show that there exist numbers $a$, $b$ ($|a| + |b| \ne 0$) such that
\[
w_x = (L - \lambda l)_x(ax + by), \qquad w_y = (L - \lambda l)_y(ax + by). \tag{8}
\]
First assume that
\[
\frac{(L - \lambda l)_x}{(L - \lambda l)_y} \not\equiv \mathrm{const}.
\]
Denote the linear functions $\mathrm{I} := (L - \lambda l)_x$, $\mathrm{II} := (L - \lambda l)_y$. If $t := y/x$, then $w_x$ can be decomposed over the field of complex numbers into a product of linear factors, $w_x = \xi x^2(t - t_1)(t - t_2)$, $\xi = \mathrm{const}$; analogously for $w_y$. Then in (7)
\[
(L - \lambda l)_x w_y = (L - \lambda l)_y w_x = \mathrm{I} \cdot \mathrm{II} \cdot \mathrm{III},
\]
where $\mathrm{III}$ is also some linear factor (recall that $\mathrm{I}$ and $\mathrm{II}$ are not proportional). Thus
\[
w_x = \mathrm{I} \cdot \mathrm{III} = (L - \lambda l)_x \cdot \mathrm{III}, \qquad w_y = \mathrm{II} \cdot \mathrm{III} = (L - \lambda l)_y \cdot \mathrm{III},
\]
i.e. we obtain (8). Now let
(L - Al)x , , x
(L-Â^ = C = const(=0).
Decompose $w_y$ into linear factors, as in the previous paragraph:
\[
w_y = (\alpha x + \beta y)(\gamma x + \delta y).
\]
Assume that these factors are not mutually proportional; the case $\alpha = \gamma$, $\beta = \delta$ is considered in a similar way. From (7) we have $w_x = cw_y$. So
\[
w_{xy} = cw_{yy} = c[\beta(\gamma x + \delta y) + \delta(\alpha x + \beta y)] = w_{yx} = \alpha(\gamma x + \delta y) + \gamma(\alpha x + \beta y),
\]
whence $(\gamma x + \delta y)(c\beta - \alpha) = (\alpha x + \beta y)(\gamma - c\delta)$. Thus $c\beta - \alpha = 0 = \gamma - c\delta$, i.e., $c\beta = \alpha$, $\gamma = c\delta$. This implies
\[
w_y = \beta\delta(cx + y)^2, \qquad w_x = c\beta\delta(cx + y)^2.
\]
Denote $(L - \lambda l)_y = qx + py$; then $(L - \lambda l)_x = c(qx + py)$ and $(L - \lambda l)_{xy} = q = pc$. Therefore
\[
(L - \lambda l)_x = pc(cx + y), \qquad (L - \lambda l)_y = p(cx + y).
\]
The assumption of case 1) implies $p \ne 0$. Now we have
\[
w_y = (L - \lambda l)_y(cx + y)\frac{\beta\delta}{p}, \qquad w_x = cw_y = (L - \lambda l)_x(cx + y)\frac{\beta\delta}{p}.
\]
This completes the proof of (8).
Further, we consider the case $b \ne 0$; the symmetric case $a \ne 0$ is similar. From (8) we have
\[
w = \int w_x\,dx + C_1(y) = (ax + by)(L - \lambda l) - a\int(L - \lambda l)\,dx + C_1(y). \tag{9}
\]
Denote $(L - \lambda l) = Ax^2 + Bxy + Cy^2$. From this, (9), and the second equality in (8) we have
\[
b(Ax^2 + Bxy + Cy^2) = a\Bigl(\frac{B}{2}x^2 + 2Cyx\Bigr) - C_1'(y),
\]
whence
\[
C_1'(y) = -bCy^2, \qquad B = \frac{2aC}{b}, \qquad A = \frac{a^2C}{b^2},
\]
and
\[
L - \lambda l = C\Bigl(\frac{a}{b}x + y\Bigr)^2, \qquad (L - \lambda l)_y = \frac{2C(ax + by)}{b}, \qquad (L - \lambda l)_x = \frac{2aC(ax + by)}{b^2}.
\]
From here and (7) it follows that
\[
\frac{(L - \lambda l)_x}{(L - \lambda l)_y} = \frac{a}{b} = \frac{w_x}{w_y}.
\]
Besides, (8) implies
\[
w_x = \frac{2aC}{b^2}(ax + by)^2, \qquad w_y = \frac{2C}{b}(ax + by)^2, \tag{10}
\]
and
\[
L_x = \lambda l_x + \frac{2aC}{b^2}(ax + by), \qquad L_y = \lambda l_y + \frac{2C}{b}(ax + by). \tag{11}
\]
Both parentheses in (5) equal zero, since (5) is an identity:
\[
L_x + l_y = 0, \tag{12}
\]
\[
L_xl_y - l_xL_y + \lambda w_x + w_y = 0. \tag{13}
\]
Write down (12), using (11), in the form
\[
\lambda l_x + \frac{2aC}{b^2}(ax + by) + l_y = 0. \tag{14}
\]
Using (13), (11), and (10), we get
\[
al_y - bl_x + (\lambda a + b)(ax + by) = 0. \tag{15}
\]
From this, taking (14) into account, we obtain
\[
(\lambda a + b)l_x = \Bigl[(\lambda a + b) - \frac{2a^2C}{b^2}\Bigr](ax + by). \tag{16}
\]
If $\lambda a + b = 0$, then from (16) it follows that $aC = 0$. Therefore, using the first equality from (11), we obtain $(L - \lambda l)_x \equiv 0$. But this contradicts the assumption of case 1). If $\lambda a + b \ne 0$, then
\[
l_x = \Bigl[1 - \frac{2a^2C}{b^2(\lambda a + b)}\Bigr](ax + by).
\]
If $a = 0$, then $l_x = by$, and (11) implies
\[
L_x = \lambda by \;\Longrightarrow\; (L - \lambda l)_x \equiv 0;
\]
this contradicts the assumption of case 1). Therefore $a \ne 0$. Then from (15):
\[
al_y = -\Bigl[\frac{2a^2C}{b(\lambda a + b)} + \lambda a\Bigr](ax + by),
\]
and since $a \ne 0$,
\[
l_{yx} = -\Bigl[\frac{2a^2C}{b(\lambda a + b)} + \lambda a\Bigr] = l_{xy} = b\Bigl[1 - \frac{2a^2C}{b^2(\lambda a + b)}\Bigr].
\]
Consequently $b = -\lambda a$, which contradicts the assumption $\lambda a + b \ne 0$. Thus, case 1) is not realised.
2) Let $(L - \lambda l)_x \equiv (L - \lambda l)_y \equiv 0$. Then $L = \lambda l$, and from (13) we see that
\[
\lambda w_x + w_y = 0. \tag{17}
\]
Repeating the proof of Theorem 1 with the equalities $L = \lambda l$ and (12) used instead of (2) and (3), we obtain, similarly to the conclusion of the proof of Theorem 1, that
\[
\Phi(x,y) = \bigl(x + a_2(x+y)^2 + W(x,y),\; y - a_2(x+y)^2 + w(x,y)\bigr)
\]
up to the linear mappings $A$ and $B$ from the formulation of Theorem 2. In particular, this implies that the constant $\lambda = -1$ in the equality $L = \lambda l$. Then equality
(17) becomes $w_y = w_x$. Denote $w(x,y) = \sum_{k=0}^{3}\beta_k x^k y^{3-k}$ and get
\[
w_x = \sum_{k=0}^{3}k\beta_k x^{k-1}y^{3-k} = w_y = \sum_{k=0}^{3}(3-k)\beta_k x^k y^{2-k}.
\]
Comparing coefficients of the same powers in this equality, we obtain the recurrence $\beta_{j+1} = \frac{3-j}{j+1}\beta_j$, $j = 0, 1, 2$, i.e., $\beta_1 = \beta_2 = 3\beta_0$, $\beta_3 = \beta_0$. So,
\[
w = \beta_0(x + y)^3, \qquad W = \lambda w = -w = -\beta_0(x + y)^3.
\]
This finishes the proof of case 2) of Theorem 2.
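The step from $w_x = w_y$ to $w = \beta_0(x+y)^3$ can also be checked symbolically; a minimal SymPy sketch (illustrative only):

```python
# Solving w_x = w_y for a cubic form forces w = beta0*(x + y)**3.
import sympy as sp

x, y = sp.symbols('x y')
b0, b1, b2, b3 = sp.symbols('beta0 beta1 beta2 beta3')

w = b0*y**3 + b1*x*y**2 + b2*x**2*y + b3*x**3        # w = sum beta_k x^k y^(3-k)
eqs = sp.Poly(sp.diff(w, x) - sp.diff(w, y), x, y).coeffs()
sol = sp.solve(eqs, [b1, b2, b3], dict=True)[0]       # beta1 = beta2 = 3*beta0, beta3 = beta0

print(sp.factor(w.subs(sol)))                         # beta0*(x + y)**3
```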
3) Let $(L - \lambda l)_x \equiv 0$, $(L - \lambda l)_y \not\equiv 0$ (the symmetric case is considered similarly). Then (7) implies $w_x \equiv 0$ (but $w_y \not\equiv 0$, since otherwise we arrive at the situation from the formulation of Theorem 1). These assumptions mean that $(L - \lambda l) = ry^2$, $w = sy^3$, where $r \ne 0$ and $s \ne 0$ are constants. So
\[
L_y = \lambda l_y + 2ry, \qquad L_x = \lambda l_x. \tag{18}
\]
Then from (12) we have
\[
\lambda l_x + l_y = 0; \tag{19}
\]
from (13) and (18), $w_y = 2ryl_x$, therefore $l_x = \frac{3s}{2r}y$. Now (19) implies
\[
l_y = -\lambda l_x = -\frac{3s\lambda}{2r}y \;\Longrightarrow\; l = \int l_x\,dx + C(y) = \frac{3s}{2r}xy + C(y),
\]
\[
l_y = \frac{3s}{2r}x + C'(y) = -\frac{3s\lambda}{2r}y,
\]
i.e., $C'(y) = -\frac{3s}{2r}(\lambda y + x)$. The left-hand side of the last equality depends only on $y$, therefore this equality can hold only for $s = 0$; thus $w \equiv 0$ and $W = \lambda w \equiv 0$, and we arrive at the case that is reduced to Theorem 1.
Sufficiency in Theorem 2 is checked by direct calculations. Theorem 2 is proved. □
Acknowledgment. This work was supported by the Russian Foundation for Basic Research (project 14-01-00510) and by the Strategic Development Program of Petrozavodsk State University.
References
[1] Keller O.-H. Ganze Cremona-Transformationen. Monatshefte Math. Phys., 1939, vol. 47, pp. 299-306.
[2] van den Essen A. Polynomial Automorphisms and the Jacobian Conjecture. Volume 190 of Progress in Mathematics, Birkhauser Verlag, Basel, 2000.
[3] Druzkowski L. M. On the global asymptotic stability problem and the Jacobian conjecture. Control and Cybernetics, 2005, vol. 34, no. 3, pp. 747-762.
[4] Pinchuk S. A counterexample to the strong real Jacobian conjecture. Math. Z., 1994, vol. 217, pp. 1-4.
[5] Yagzhev A. V. Keller's problem. Siberian Math. J., 1980, vol. 21, no. 5, pp. 747-754.
[6] Kulikov V. S. Generalized and local Jacobian problems. Russian Academy of Sciences. Izvestiya Mathematics. 1993, vol. 41, no. 2, pp. 351-365.
[7] Wang S. S.-S. A Jacobian criterion for separability. J. of Algebra. 1980, vol. 65, no. 2, pp. 453-494.
[8] Moh T. T. On the global Jacobian conjecture and the configuration of roots. J. reine und angew. Math. 1983, vol. 340, pp. 140-212.
[9] Smale S. Mathematical Problems for the Next Century. Math. Intelligencer. 1998, vol. 20, no. 2, pp. 7-15.
Received November 08, 2016. In revised form, December 14, 2016. Accepted December 15, 2016.
Petrozavodsk State University
33, Lenina pr., Petrozavodsk 185910, Russia
E-mail: [email protected]