Dual basis

The basis of the module E dual to the basis (e_1, ..., e_n) with respect to the form f is the basis (c_1, ..., c_n) of the module E such that

f(e_i, c_j) = δ_ij, i, j = 1, ..., n,

where E is a free K-module over a commutative ring K with unity, and f is a non-singular bilinear form on E.

Let E* be the module dual to E, and let (e_1^*, ..., e_n^*) be the basis of E* dual to the original basis of E: e_i^*(e_i) = 1, e_i^*(e_j) = 0 for j ≠ i. Then to each bilinear form f on E there correspond mappings φ_f, ψ_f : E → E* defined by the equalities

φ_f(x)(y) = f(x, y), ψ_f(x)(y) = f(y, x).

If f is non-singular, then each of the mappings φ_f, ψ_f is an isomorphism, and conversely. Moreover, the basis (c_1, ..., c_n) dual to (e_1, ..., e_n) is characterized by the property that

ψ_f(c_i) = e_i^*, i = 1, ..., n.

E. N. Kuzmin.


Encyclopedia of Mathematics. Moscow: Soviet Encyclopedia, ed. I. M. Vinogradov, 1977–1985.
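A minimal numerical sketch of this characterization (the form and the basis below are invented for illustration): if F is the Gram matrix F_ij = f(e_i, e_j) of a non-singular bilinear form f in a basis (e_1, ..., e_n), then the coordinate columns of the dual basis (c_1, ..., c_n) make up the matrix F^{-1}, because f(e_i, c_j) = δ_ij amounts to FC = I.

    import numpy as np

    # Illustrative assumption: a non-singular symmetric bilinear form on K^3,
    # given by its Gram matrix F[i, j] = f(e_i, e_j) in the standard basis.
    F = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 1.0],
                  [0.0, 1.0, 2.0]])

    # Columns of C are the coordinates of the dual basis vectors c_j,
    # determined by f(e_i, c_j) = delta_ij, i.e. F @ C = I.
    C = np.linalg.inv(F)

    # Check the defining property of the dual basis.
    assert np.allclose(F @ C, np.eye(3))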

See what "DUAL BASIS" is in other dictionaries:

    The set X is the minimal subset B that generates it. Generation means that by applying operations of a certain class to elements, any element is obtained. This concept is related to the concept of dependence: elements of X are put into ... by means of operations from ... Encyclopedia of mathematics

    In mathematics, the Casimir invariant, or the Casimir operator, is a remarkable element of the center of the universal enveloping algebra of the Lie algebra. An example is the square of the angular momentum operator, which is the Casimir invariant of the 3-dimensional group ... ... Wikipedia

    Or the dual space is the space of linear functionals on a given linear space. Contents 1 Linear conjugate space definition 2 Properties ... Wikipedia

    Not to be confused with the "simplex method" method of optimizing an arbitrary function. See Nelder Mead's method Simplex method an algorithm for solving an optimization problem of linear programming by iterating over the vertices of a convex polyhedron in ... ... Wikipedia

    Planar graph, a graph admitting regular stacking on a plane (see Graph stacking). In other words, the graph G is called. flat, if it can be depicted on a plane so that the vertices correspond to different points of the plane, and the lines, ... ... Encyclopedia of mathematics

    Biography. The teachings of Marx. Philosophical materialism. Dialectics. Materialistic understanding of history. Class struggle. Economic doctrine of Marx. Price. Surplus value. Socialism. The tactics of the class struggle of the proletariat ... Literary encyclopedia

    Differential algebraic method for studying systems differential equations and manifolds with different structures. Algebraic. the method is based on the Grassmann algebra. Let V be a 2n-dimensional vector space over an arbitrary ... ... Encyclopedia of mathematics

    Jacobian, an algebraic curve S is a principally polarized abelian variety associated with this curve. Sometimes the Ya.m. is simply commutative algebraic. group. If S is a smooth projective curve of genus. over field C or, in classic. ... ... Encyclopedia of mathematics

    RELATIONSHIP Public relations, including as their elements: 1) subjects with their statuses and roles, values ​​and norms, needs and interests, incentives and motives; 2) the content of the activities of subjects and their interactions, ... ... Philosophical Encyclopedia

    Contents: I. R. Modern; II. History of the city of R .; III. Roman history before the fall of the Western R. Empire; IV. Roman law. I. Rome (Roma) the capital of the Italian kingdom, on the Tiber river, in the so-called Roman Campania, at 41 ° 53 54 north ... ... encyclopedic Dictionary F. Brockhaus and I.A. Efron

Definition 10.1. A mapping f: L → R that is defined on a linear space L and takes real values is called a linear function (also a linear form, or a linear functional) if it satisfies the following two conditions:

a) f(x + y) = f(x) + f(y), x, y ∈ L;

b) f(λx) = λf(x), x ∈ L, λ ∈ R.

Comparing this definition with Definition 4.1 of a linear operator, we see much in common. If we regard the set of real numbers as a one-dimensional linear space, we may say that a linear function is a linear operator whose image space is one-dimensional.

Let us choose some basis e = (e_1 ... e_n) in the linear space L. Then for any vector x ∈ L with coordinates x = (x_1; ...; x_n)^T

f(x) = f(x_1 e_1 + ... + x_n e_n) = x_1 f(e_1) + ... + x_n f(e_n) = a_1 x_1 + ... + a_n x_n = ax,

where a = (a_1 ... a_n), a_i = f(e_i), i = 1, ..., n. Therefore, a linear function is uniquely determined by its values on the basis vectors. Conversely, if a function f(x) is expressed through the coordinates x of the vector x in the form f(x) = ax, then this function is linear, and the row a consists of the values of this function on the basis vectors. Thus, a one-to-one correspondence is established between the set of linear forms on the linear space L and the rows of length n.
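A short sketch of this correspondence (the numbers are arbitrary): the row a of values a_i = f(e_i) determines the linear form completely.

    import numpy as np

    # Illustrative values a_i = f(e_i) of a linear form on R^3.
    a = np.array([3.0, -1.0, 2.0])   # row of length n

    def f(x):
        # f(x) = a_1 x_1 + ... + a_n x_n = ax
        return a @ x

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([0.5, 0.0, -1.0])

    # Linearity: conditions a) and b) of Definition 10.1.
    assert np.isclose(f(x + y), f(x) + f(y))
    assert np.isclose(f(2.5 * x), 2.5 * f(x))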

Linear forms can be added and multiplied by real numbers according to the rules:

(f + g)(x) = f(x) + g(x), (λf)(x) = λf(x).

The operations introduced in this way turn the set of linear forms on the space L into a linear space. This linear space is called the conjugate (dual) space of the linear space L and is denoted L*.

Starting from the basis e chosen in the space L, we construct a basis in the dual space L*. For each vector e_i of the basis e, consider the linear form f_i for which f_i(e_i) = 1 and f_i(e_j) = 0 for all vectors e_j other than e_i. We obtain a system of linear forms f_1, ..., f_n ∈ L*. Let us show that this system is linearly independent. Suppose some linear combination of these forms equals the zero linear form: f = α_1 f_1 + ... + α_n f_n = 0. The form f takes zero values on all basis vectors. But

f(e_i) = α_1 f_1(e_i) + ... + α_n f_n(e_i) = α_i, i = 1, ..., n.

The zero values of f on the basis vectors are therefore equivalent to the equalities α_i = 0, i = 1, ..., n, and hence the system of linear forms f_1, ..., f_n is linearly independent.

The system of linear forms f_1, ..., f_n is a basis of the dual space. Indeed, since the system is linearly independent, it suffices to prove that any linear form from L* is a linear combination of these forms. Choose an arbitrary linear form f from L* and let a_1, ..., a_n be the values of f on the basis vectors. These values uniquely determine the linear form. But the linear combination f' = a_1 f_1 + ... + a_n f_n is also a linear form, and it takes the same values a_1, ..., a_n on the basis vectors. Hence the two linear forms coincide, and we obtain the equality f = f' = a_1 f_1 + ... + a_n f_n, that is, the decomposition of an arbitrarily chosen linear form in the system f_1, ..., f_n.

The above reasoning shows that the dual space L* has the same dimension as L. The basis f_1, ..., f_n constructed here depends on the choice of the basis e in the space L.

Definition 10.2. Bases e_1, ..., e_n and f_1, ..., f_n of a linear space L and of the dual space L* are called biorthogonal, or reciprocal, if

f_i(e_j) = δ_ij, that is, f_i(e_j) = 1 for i = j and f_i(e_j) = 0 for i ≠ j.

If the bases e_1, ..., e_n and f_1, ..., f_n are reciprocal, then the coordinates of an arbitrary form f in the basis f_1, ..., f_n are the values of this form on the vectors of the reciprocal basis e_1, ..., e_n. When the linear space L and the dual space L* are considered together, the elements of both spaces are called vectors, but the elements of the dual space L* are called covariant vectors (covectors), while the elements of the linear space L are called contravariant vectors (or simply vectors). The coordinates of both are usually taken in reciprocal bases, the coordinates of contravariant vectors carrying an upper index and those of covariant vectors a lower one.
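A coordinate sketch of reciprocal bases (the basis is chosen arbitrarily): if the vectors e_1, ..., e_n are the columns of a matrix E, then the coordinate rows of the reciprocal forms f_1, ..., f_n are the rows of E^{-1}, since (E^{-1}E)_ij = δ_ij.

    import numpy as np

    # Illustrative basis of R^3: basis vectors as the columns of E.
    E = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])

    # Rows of F are the reciprocal forms f_i: f_i(e_j) = (F @ E)[i, j] = delta_ij.
    F = np.linalg.inv(E)
    assert np.allclose(F @ E, np.eye(3))

    # Coordinates of a form g in the basis f_1, ..., f_n are its values on e_1, ..., e_n.
    g = np.array([2.0, 0.0, -1.0])     # g as a row in the standard basis
    coords = g @ E                     # values g(e_1), ..., g(e_n)
    assert np.allclose(coords @ F, g)  # g = coords_1 f_1 + ... + coords_n f_n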

There are two ways to look at the expression f(x). Fixing the form f and varying the vector x, we obtain all possible values of the linear form. But if we fix the vector x and vary the linear form f, we get a function defined on the dual space L*. It is easy to verify that this function is linear, since, by the definition of the sum of linear forms and of the product of a linear form by a number,

(f + g)(x) = f(x) + g(x), (λf)(x) = λf(x).

So, to each vector x ∈ L there corresponds a linear form on the dual space L*, that is, an element of the double dual space (L*)* = L**. We get a mapping φ: L → L**, where φ(x) sends a form f to the value f(x). It is not difficult to verify that this mapping is linear and injective. From injectivity it follows that dim im φ = dim L = n. But the dual space L* has the same dimension as L, and dim L** = dim L* = dim L. Thus, the dimension of the linear subspace im φ in L** coincides with the dimension of the entire double dual space. Hence im φ = L** and the mapping φ is an isomorphism. Note that this isomorphism is not tied to the choice of any basis. It is therefore natural to identify the linear forms on L* with the elements of L. This means that the double dual space coincides with the original linear space: L** = L. If L* is conjugate to L, then L is also conjugate to L*.
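A small sketch of the canonical mapping φ (all values here are illustrative): φ(x) is simply "evaluation at x", which is a linear function of the form.

    import numpy as np

    def phi(x):
        # phi(x) is a linear form on L*: it takes a form f and returns f(x).
        return lambda f: f @ x

    x = np.array([1.0, -2.0, 0.5])
    f = np.array([4.0, 1.0, 2.0])    # linear forms as rows
    g = np.array([0.0, 3.0, -1.0])

    # phi(x) is linear in the form f.
    assert np.isclose(phi(x)(f + g), phi(x)(f) + phi(x)(g))
    assert np.isclose(phi(x)(2.0 * f), 2.0 * phi(x)(f))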

The reciprocity between a linear space and its conjugate space indicates the symmetry of the connection between vectors and covectors. Therefore, instead of writing f(x), it is more convenient to use another, symmetric notation: (f, x). Linear forms will from now on also be denoted in bold italics: (f, x). This notation resembles that of the dot product, but unlike the latter, the two arguments here are taken from different spaces. The notation (f, x) can itself be regarded as a mapping defined on the set L* × L, which assigns a real number to each pair consisting of a covector and a vector. Moreover, this mapping is linear in each of its arguments.

Theorem 10.1. Let b and c be two bases of an n-dimensional linear space L, and let U be the transition matrix from b to c. The bases b* and c* of the dual space L*, reciprocal to the bases b and c respectively, are related by

c* = b*(U^T)^{-1}, b* = c* U^T.

The coordinates f_c = (f_c1 ... f_cn) of a linear form f in the basis c* are the values of this form on the vectors of the basis c = (c_1 ... c_n). Let us find out how the coordinates of the form f in the two bases c* and b* are related.

The bases b and c are connected by the transition matrix through the matrix relation c = bU (see 1.8). This relation is an equality of rows of length n composed of vectors. The equality of the rows of vectors implies the equality of the rows of values of the linear form f on these vectors:

((f, c_1) ... (f, c_n)) = ((f, b_1) ... (f, b_n)) U,

or f_c = f_b U, where f_b and f_c denote the rows of coordinates of the form f in the bases b* and c*, respectively. Transposing this equality, we obtain the accepted way of relating the coordinates of elements of a linear space, with coordinates written as columns:

(f_c)^T = U^T (f_b)^T.

This relation means that the matrix U^T is the transition matrix from the basis c*, which plays the role of the old basis in the formula, to the basis b*, which plays the role of the new one. Therefore b* = c* U^T, whence, multiplying on the right by the matrix (U^T)^{-1}, we get c* = b*(U^T)^{-1}.
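A numerical check of Theorem 10.1 (the bases and the transition matrix are invented): store the vectors of b and c as the columns of matrices B and C = BU, and the reciprocal forms as the rows of B^{-1} and C^{-1}.

    import numpy as np

    # Illustrative bases of R^3: vectors are the columns of B and C = B @ U.
    B = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 1.0, 2.0]])
    U = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0]])  # transition matrix from b to c
    C = B @ U

    # Reciprocal bases: the forms are the rows of B^{-1} and C^{-1}.
    B_star = np.linalg.inv(B)
    C_star = np.linalg.inv(C)

    # c* = b* (U^T)^{-1} in row-of-forms notation: the rows of C_star
    # are obtained from the rows of B_star via U^{-1}.
    assert np.allclose(C_star, np.linalg.inv(U) @ B_star)

    # Coordinates of a form transform as (f_c)^T = U^T (f_b)^T.
    g = np.array([1.0, -1.0, 2.0])   # a form as a row in the standard basis
    f_b = g @ B                      # values of g on the basis b
    f_c = g @ C                      # values of g on the basis c
    assert np.allclose(f_c, f_b @ U)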

If the linear space L is Euclidean, then the scalar product generates a basis-independent isomorphism between L and L*, which allows us to identify a Euclidean space with its conjugate. Indeed, for any vector a ∈ L the mapping x → (a, x) is a linear form on L, since the scalar product is linear in its second argument. This gives a mapping ψ which assigns to a vector a ∈ L the linear form f_a(x) = (a, x). This mapping is linear by the properties of the scalar product, and it is injective: if (a, x) = 0 for every x ∈ L, then (a, a) = 0, that is, a = 0. Since the linear spaces L and L* are finite-dimensional and have the same dimension, the mapping ψ is bijective and realizes an isomorphism of these spaces. So for a Euclidean space L* = L. In this sense a Euclidean space is "self-adjoint".
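A coordinate sketch of this identification (the Gram matrix is invented): in a basis with Gram matrix G, the form f_a corresponding to a vector a has the coordinate row a^T G.

    import numpy as np

    # Illustrative Gram matrix of a scalar product in some basis of R^3
    # (symmetric and positive definite).
    G = np.array([[2.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 0.0, 3.0]])

    def psi(a):
        # The linear form f_a(x) = (a, x) = a^T G x, as a coordinate row.
        return a @ G

    a = np.array([1.0, 2.0, -1.0])
    x = np.array([0.5, 1.0, 4.0])
    # Symmetry of the scalar product: f_a(x) = (a, x) = (x, a) = f_x(a).
    assert np.isclose(psi(a) @ x, psi(x) @ a)

    # Non-degeneracy of G makes psi injective, hence an isomorphism L -> L*.
    assert np.linalg.matrix_rank(G) == 3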

Tensor

A tensor is often represented as a multidimensional table d × d × ... × d (where d is the dimension of the vector space over which the tensor is defined, and the number of factors coincides with the valence of the tensor) filled with numbers (the components of the tensor).

Such a representation (except for tensors of valence zero, that is, scalars) is possible only after a basis (or coordinate system) has been chosen; under a change of basis the components of the tensor change in a definite way. The tensor itself, as a "geometric entity", does not depend on the choice of basis. This can be seen with a vector, a special case of a tensor: the components of a vector change when the coordinate axes change, but the vector itself, whose visual image may be simply a drawn arrow, does not.

The term "tensor" is also often an abbreviation for the term "tensor field", which is studied by tensor calculus.

Definitions

Modern definition

A tensor of rank (n, m) over a d-dimensional vector space V is an element of the tensor product of m copies of V and n copies of the conjugate space V* (that is, the space of linear functionals (1-forms) on V):

T ∈ V ⊗ ... ⊗ V ⊗ V* ⊗ ... ⊗ V* (m factors of V and n factors of V*).

The sum of the numbers n + m is called the valence of the tensor (it is also often called the rank). A tensor of rank (n, m) is also said to be n times covariant and m times contravariant.

N.B.: the term rank is often used as a synonym of what is here called valence. The opposite also happens, that is, valence is used in the sense of rank as defined here.

Tensor as a multilinear function

Just as a covariant tensor of rank (1, 0) can be represented as a linear functional, a tensor τ of rank (n, 0) is conveniently thought of as a function of n vector arguments v_1, ..., v_n which is linear in each argument v_i (such functions are called multilinear); that is, for any constant c from the field F (over which the vector space is defined)

τ(v_1, ..., c v_i + v_i', ..., v_n) = c τ(v_1, ..., v_i, ..., v_n) + τ(v_1, ..., v_i', ..., v_n).

In the same vein, a tensor τ of arbitrary rank (n, m) is represented by a multilinear functional of n vectors and m covectors:

τ : V × ... × V × V* × ... × V* → F (n factors of V and m factors of V*).

Tensor components

Let us choose a basis e_1, ..., e_d in the space V, and accordingly the dual basis e^1, ..., e^d in the conjugate space V* (that is, e^i(e_j) = δ^i_j, where δ^i_j is the Kronecker symbol).

Then in the tensor product of the spaces there naturally arises the basis

e_{i_1} ⊗ ... ⊗ e_{i_m} ⊗ e^{j_1} ⊗ ... ⊗ e^{j_n}.

If we define a tensor as a multilinear function, then its components are determined by the values of this function on the basis:

T^{i_1 ... i_m}_{j_1 ... j_n} = T(e_{j_1}, ..., e_{j_n}, e^{i_1}, ..., e^{i_m}).

After that the tensor can be written as a linear combination of the basis tensor products (with summation over repeated indices):

T = T^{i_1 ... i_m}_{j_1 ... j_n} e_{i_1} ⊗ ... ⊗ e_{i_m} ⊗ e^{j_1} ⊗ ... ⊗ e^{j_n}.

The lower indices of the tensor components are called covariant and the upper ones contravariant. For example, the expansion of some doubly covariant tensor h would be:

h = h_{ij} e^i ⊗ e^j.
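A sketch for the doubly covariant case (the form and the basis are invented): the components h_ij = h(e_i, e_j) recover h as a linear combination of the products e^i ⊗ e^j.

    import numpy as np

    # Illustrative bilinear form h on R^2 (a doubly covariant tensor),
    # encoded by a matrix H acting as h(x, y) = x^T H y.
    H = np.array([[1.0, 2.0],
                  [0.0, 3.0]])

    def h(x, y):
        return x @ H @ y

    # A basis of R^2 (columns of E) and its dual basis (rows of E^{-1}).
    E = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    E_star = np.linalg.inv(E)

    # Components in this basis: h_ij = h(e_i, e_j).
    comp = np.array([[h(E[:, i], E[:, j]) for j in range(2)] for i in range(2)])

    # Reassemble h from its components: h = sum_ij h_ij e^i (x) e^j,
    # where (e^i (x) e^j)(x, y) = e^i(x) e^j(y).
    H_rebuilt = sum(comp[i, j] * np.outer(E_star[i], E_star[j])
                    for i in range(2) for j in range(2))
    assert np.allclose(H_rebuilt, H)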

About the classical definition

The classical approach to defining a tensor, more common in the physics literature, starts from the representation of tensors in components. A tensor is defined as a geometric object described by a multidimensional array, that is, a set of numbers indexed by several indices, or in other words, a table (generally n-dimensional, where n is the valence of the tensor; see above).

The main tensor operations are addition, which in this approach reduces to componentwise addition, just as for vectors, and contraction, with vectors, with each other, and with themselves, generalizing matrix multiplication, the scalar product of vectors, and taking the trace of a matrix. Multiplication of a tensor by a number (a scalar) can, if desired, be considered a special case of contraction; it reduces to componentwise multiplication.

The values of the numbers in the array, or tensor components, depend on the coordinate system, but the tensor itself, as a geometric entity, does not. Many things can be understood as manifestations of this geometric essence: various scalar invariants, the symmetry or antisymmetry of indices, relations between tensors, and more. For example, the dot product and the lengths of vectors do not change under rotations of the axes, and the metric tensor always remains symmetric. Contractions of any tensors with themselves and/or with other tensors (including vectors) are scalars if no indices remain, that is, they are invariant under changes of coordinates; this is the general way of constructing scalar invariants.

When changing the coordinate system, the tensor components are transformed according to a certain linear law.

Knowing the components of a tensor in one coordinate system, one can always calculate its components in another, provided the coordinate transformation matrix is given. Thus the second approach can be summarized by the formula:

tensor = array of components + transformation law of the components under a change of basis

Note that this implies that all tensors over a given vector space, regardless of rank (vectors included), transform through the same coordinate transformation matrix (and its dual, when there are both superscripts and subscripts). The components of a tensor thus transform by the same law as the corresponding components of a tensor product of vectors (as many vectors as the valence of the tensor), taking into account the covariance or contravariance of the components.

For example, the tensor components

T_{ijk}

transform in the same way as the components of the tensor product of three (covariant) vectors, that is, as the products a_i b_j c_k of the components of these vectors.

Since the transformation law of the components of a vector is known, the simplest version of the classical definition of a tensor can easily be formulated in this way.
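A sketch of this law for a doubly covariant tensor (all values invented): under a change of basis with transition matrix U, covariant components transform as h' = U^T h U, exactly like the products of the components of two covectors.

    import numpy as np

    rng = np.random.default_rng(0)

    # Components of a doubly covariant tensor h and of two covectors a, b
    # in the old basis.
    h = rng.standard_normal((3, 3))
    a = rng.standard_normal(3)
    b = rng.standard_normal(3)

    U = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [2.0, 0.0, 1.0]])  # transition matrix to the new basis

    # Covariant components pick up one factor of U per index.
    a_new, b_new = U.T @ a, U.T @ b
    h_new = U.T @ h @ U

    # The tensor product a (x) b transforms exactly as h does.
    assert np.allclose(np.outer(a_new, b_new), U.T @ np.outer(a, b) @ U)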

Examples

As follows from the definition, the components of a tensor must change in a definite way, synchronously with the components of the vectors of the space over which it is defined, under a coordinate transformation. Therefore not every table or indexed quantity that looks like the representation of a tensor actually represents a tensor.

  • A simple, albeit somewhat artificial, example of such a table not representing a tensor is a table whose components form a set of arbitrary numbers that do not change in any way under arbitrary coordinate transformations. Such an object does not represent a tensor, or at any rate does not represent a tensor in the linear space in which the coordinate transformation took place. Thus, a set of three numbers does not represent a three-dimensional vector unless these numbers transform in a quite specific way under a change of coordinates.
  • Also, in the general case, a subset of the components of a tensor of higher rank is not a tensor of lower rank.
  • Nor does an object represent a tensor if all of its components are zero in at least one nondegenerate coordinate system (with a complete basis) while in another at least one component is nonzero. This fact is a consequence of the (multi)linearity of tensors.

There are objects that not only look like tensors but for which tensor operations (contraction with other tensors, in particular with vectors) are defined and have a reasonable and correct meaning, yet which are not tensors:

  • First of all, the coordinate transformation matrices themselves (Jacobi matrices), a special case of a diffeomorphism between two manifolds, with whose help the classical definition of a tensor is introduced, are not tensors, even though in many of their properties they resemble one. For them, too, one can introduce superscripts and subscripts and the operations of multiplication, addition and contraction. However, unlike a tensor, whose components depend only on the coordinates on a given manifold, the components of a Jacobi matrix also depend on the coordinates on the image manifold. This difference is obvious when the Jacobi matrices of a diffeomorphism between two arbitrary manifolds are considered; when a manifold is mapped into itself it can be overlooked, since the tangent spaces of the image and the preimage are isomorphic (though not canonically), yet it still persists. The analogy between Jacobi matrices and tensors can be developed further by considering arbitrary vector bundles over a manifold and their products, rather than only the tangent and cotangent bundles.

Tensor operations

Tensors admit the following algebraic operations:

  • Multiplication by a scalar, as for a vector or a scalar (special cases of tensors);
  • Addition of tensors of the same valence and index composition (the sum can be computed componentwise, as for vectors);
    • Multiplication by a scalar together with addition makes the space of tensors of a given type a linear space.
  • Tensor multiplication: the components of the tensor product are the products of the corresponding components of the factors, for example

(T ⊗ S)^{ik}_{jl} = T^i_j S^k_l.
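A numpy sketch (the shapes are illustrative): the components of T ⊗ S are all pairwise products of the components of T and S.

    import numpy as np

    rng = np.random.default_rng(1)
    T = rng.standard_normal((3, 3))  # components of a tensor of valence 2
    S = rng.standard_normal(3)       # components of a tensor of valence 1

    # Tensor product: (T (x) S)_ijk = T_ij * S_k.
    P = np.tensordot(T, S, axes=0)   # shape (3, 3, 3)
    assert P.shape == (3, 3, 3)
    assert np.isclose(P[1, 2, 0], T[1, 2] * S[0])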

Symmetries

Tensors with a certain symmetry property often arise in various kinds of applications.

A tensor is called symmetric in two covariant (or two contravariant) indices if it satisfies the following requirement:

T(..., u, ..., v, ...) = T(..., v, ..., u, ...) for all vectors u, v,

or in components

T_{... i ... j ...} = T_{... j ... i ...}.
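A component-level sketch (values invented): symmetry in two indices means the array is unchanged when those indices are swapped; any two-index tensor splits into a symmetric and an antisymmetric part.

    import numpy as np

    rng = np.random.default_rng(2)
    T = rng.standard_normal((3, 3))  # components T_ij in some basis

    sym = (T + T.T) / 2              # symmetric part: sym_ij = sym_ji
    antisym = (T - T.T) / 2          # antisymmetric part

    assert np.allclose(sym, sym.T)
    assert np.allclose(antisym, -antisym.T)
    assert np.allclose(sym + antisym, T)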

Linear operators of quantum mechanics can, of course, also be interpreted as tensors over certain abstract spaces (spaces of states), but traditionally the term tensor is practically never applied to them, and in general it is used extremely rarely for linear operators over infinite-dimensional spaces. In physics the term tends to be applied only to tensors over ordinary physical 3-dimensional space or 4-dimensional space-time, or at most over the simplest and most direct generalizations of these spaces, although the fundamental possibility of applying it in more general cases remains.

Examples of tensors in physics are:

  • the metric tensor on a pseudo-Riemannian 4-dimensional manifold, which in general relativity is a development of the concept of the Newtonian gravitational potential.
  • the Riemann curvature tensor expressed through it, and its contractions, which in the same theory are associated with the energy of the gravitational field and enter directly into the basic equation of the theory.
  • the electromagnetic field tensor over Minkowski space, which contains the strengths of the electric and magnetic fields and is the main object of classical electrodynamics in 4-dimensional notation. In particular, Maxwell's equations are written with its help as a single 4-dimensional equation.
  • stresses and strains in the theory of elasticity, which are described by tensors over 3-dimensional Euclidean space. The same applies to such quantities as the elastic moduli.
  • most of the quantities that are scalar characteristics of a substance when it is isotropic are tensors when the substance is anisotropic. More specifically, this applies to the material coefficients that connect vector quantities or stand in front of products (in particular, squares) of vectors. Examples are the electrical conductivity (and its inverse, the resistivity), the thermal conductivity, the dielectric susceptibility and permittivity, the speed of sound (which depends on direction), and so on.
  • in the mechanics of a perfectly rigid body a crucial role is played by the tensor of inertia, which connects the angular velocity with the angular momentum and the kinetic energy of rotation. This tensor differs from most other tensors in physics, which are generally speaking tensor fields, in that a single tensor characterizes a single rigid body, completely determining, together with the mass, its inertia.
  • the tensors entering the multipole expansion have a similar property: a single tensor entirely represents the moment of the charge distribution of the corresponding order at a given time.
  • the Levi-Civita pseudotensor is often useful in physics; it enters, for example, the coordinate notation of the vector (cross) and mixed products of vectors. Its components are always written in almost the same way (up to a scalar factor depending on the metric), and in a right-handed orthonormal basis they are exactly the same (each equal to 0, +1 or −1).

It is easy to see that most tensors in physics (excluding scalars and vectors) have only two indices. Tensors of high valence (such as the Riemann tensor in general relativity) occur, as a rule, only in theories considered rather complex, and even then they appear mainly in the form of their contractions of lower valence. Most are symmetric or antisymmetric.

The simplest illustration that makes it possible to understand the physical (and partly the geometric) meaning of tensors, more precisely of symmetric tensors of the second rank, is probably the (specific) electrical conductivity tensor σ. It is intuitively clear that an anisotropic medium, for example a crystal, or even some specially made artificial material, will in general not conduct current equally easily in all directions (for example, because of the shape and orientation of molecules, atomic layers or some supramolecular structures; one can imagine, say, thin wires of a well-conducting metal, all oriented the same way and fused into a poorly conducting medium). For simplicity and concreteness let us take the last model (well-conducting wires in a poorly conducting medium). The electrical conductivity along the wires will be large; call it σ_1. Across the wires it will be small; denote it σ_2. (In the general case, for example when the wires are flattened in cross-section and this flattening is oriented the same way in all wires, the conductivity σ_3 will differ from σ_2; for round, uniformly distributed wires σ_2 = σ_3, but both differ from σ_1.)

Rather nontrivial in the general case, but fairly obvious in our example, is the fact that there exist three mutually perpendicular directions for which the current density vector j and the electric field strength E causing it are related simply by a numerical factor (in our example the first such direction runs along the wires, the second along their flattening, and the third is perpendicular to the first two). But any vector can be decomposed into components along these convenient directions:

E = E_1 e_1 + E_2 e_2 + E_3 e_3,

and then for each component we can write:

j_i = σ_i E_i, i = 1, 2, 3.

And we see that for any field direction not coinciding with directions 1, 2 and 3, the vector j no longer coincides in direction with E, unless at least two of σ_1, σ_2 and σ_3 are equal.

Passing to arbitrary Cartesian coordinates that do not coincide with these selected directions, we have to include a rotation matrix in the coordinate transformation, so in an arbitrary coordinate system the relationship between j and E looks like this:

j_i = Σ_k σ_{ik} E_k,

that is, the electrical conductivity tensor will be represented by a symmetric matrix.
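A numerical sketch of this example (the principal conductivities and the rotation are invented): in the principal axes σ is diagonal; after a rotation R of the coordinates its components become the full symmetric matrix R σ R^T, and j = σE is in general not parallel to E.

    import numpy as np

    # Principal conductivities along the three selected directions.
    sigma_principal = np.diag([5.0, 1.0, 1.0])  # wires along direction 1

    # A rotation to arbitrary Cartesian coordinates.
    t = np.pi / 6
    R = np.array([[np.cos(t), -np.sin(t), 0.0],
                  [np.sin(t),  np.cos(t), 0.0],
                  [0.0,        0.0,       1.0]])

    # Components of the conductivity tensor in the rotated coordinates.
    sigma = R @ sigma_principal @ R.T
    assert np.allclose(sigma, sigma.T)  # still a symmetric matrix

    # For a generic field direction, j = sigma @ E is not parallel to E.
    E = np.array([1.0, 1.0, 0.0])
    j = sigma @ E
    assert not np.allclose(np.cross(j, E), 0.0)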