Definition of a set of vectors. Linear spaces: definition and examples. Related definitions and properties

A vector (linear) space is a set of vectors (elements) with real components on which the operations of adding vectors and multiplying a vector by a number are defined and satisfy the following axioms (properties):

1) x + y = y + x (commutativity of addition);

2) (x + y) + z = x + (y + z) (associativity of addition);

3) there is a zero vector 0 (or null vector) satisfying the condition x + 0 = x for any vector x;

4) for any vector x there is an opposite vector y such that x + y = 0;

5) 1·x = x;

6) a(bx) = (ab)x (associativity of multiplication);

7) (a + b)x = ax + bx (distributive property with respect to a numerical factor);

8) a(x + y) = ax + ay (distributive property with respect to the vector factor).
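For a reader who wants to experiment, here is a small numerical check of these eight axioms in the familiar space R³ (a sketch using NumPy; the random vectors and scalars are illustrative, and equalities are tested up to floating-point tolerance):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.random(3), rng.random(3), rng.random(3)  # sample vectors in R^3
a, b = 2.5, -1.5                                       # sample scalars

zero = np.zeros(3)
assert np.allclose(x + y, y + x)                # 1) commutativity of addition
assert np.allclose((x + y) + z, x + (y + z))    # 2) associativity of addition
assert np.allclose(x + zero, x)                 # 3) zero vector
assert np.allclose(x + (-x), zero)              # 4) opposite vector
assert np.allclose(1 * x, x)                    # 5) multiplication by the unit
assert np.allclose(a * (b * x), (a * b) * x)    # 6) associativity of multiplication
assert np.allclose((a + b) * x, a * x + b * x)  # 7) distributivity over scalar addition
assert np.allclose(a * (x + y), a * x + a * y)  # 8) distributivity over vector addition
print("all eight axioms hold for these samples")
```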

A linear (vector) space V(P) over a field P is a non-empty set V equipped with these two operations. The elements of the set V are called vectors, and the elements of the field P are called scalars.

The simplest properties.

1. With respect to addition, a vector space is an abelian group (a group in which the group operation is commutative; the group operation in abelian groups is usually called "addition" and is denoted by the + sign).

2. The neutral element (the zero vector) is unique; this follows from the group properties.

3. For any vector the opposite element is unique; this follows from the group properties.

4. (–1)x = –x for any x ∈ V.

5. (–α)x = α(–x) = –(αx) for any α ∈ P and x ∈ V.

The expression a₁e₁ + a₂e₂ + … + aₙeₙ (1) is called a linear combination of the vectors e₁, e₂, …, eₙ with coefficients a₁, a₂, …, aₙ. The linear combination (1) is called nontrivial if at least one of the coefficients a₁, a₂, …, aₙ differs from zero. The vectors e₁, e₂, …, eₙ are called linearly dependent if there exists a nontrivial combination (1) equal to the zero vector. Otherwise (that is, if only the trivial combination of the vectors e₁, e₂, …, eₙ equals the zero vector) the vectors e₁, e₂, …, eₙ are called linearly independent.
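In the arithmetic space Rⁿ this definition can be tested mechanically: the vectors e₁, …, eₖ are linearly dependent exactly when the matrix with these columns has rank less than k. A sketch using NumPy, with illustrative sample vectors:

```python
import numpy as np

def linearly_dependent(vectors):
    """Vectors e_1..e_k in R^n are dependent iff rank([e_1 ... e_k]) < k."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) < len(vectors)

e1, e2, e3 = np.array([1., 0.]), np.array([0., 1.]), np.array([2., 3.])
print(linearly_dependent([e1, e2]))      # False: only the trivial combination gives 0
print(linearly_dependent([e1, e2, e3]))  # True: e3 - 2*e1 - 3*e2 = 0 is a nontrivial relation
```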

The dimension of a space is the maximum number of linearly independent vectors contained in it.

A vector space is called n-dimensional (or is said to have "dimension n") if it contains n linearly independent elements e₁, e₂, …, eₙ and any n + 1 of its elements are linearly dependent. A vector space is called infinite-dimensional if for any natural n it contains n linearly independent vectors. Any n linearly independent vectors of an n-dimensional vector space form a basis of this space. If e₁, e₂, …, eₙ is a basis of the vector space, then any vector x of this space can be represented uniquely as a linear combination of the basis vectors: x = a₁e₁ + a₂e₂ + … + aₙeₙ.
In this case, the numbers a₁, a₂, …, aₙ are called the coordinates of the vector x in this basis.
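Finding the coordinates a₁, …, aₙ of a vector x in a basis e₁, …, eₙ of Rⁿ amounts to solving the linear system whose coefficient matrix has the basis vectors as columns. A sketch using NumPy; the basis here is an illustrative choice:

```python
import numpy as np

# Columns of E form a basis of R^3 (an illustrative choice; det(E) != 0).
E = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])
x = np.array([3., 2., 1.])

a = np.linalg.solve(E, x)      # coordinates of x in the basis e1, e2, e3
assert np.allclose(E @ a, x)   # x = a1*e1 + a2*e2 + a3*e3, and uniquely so
print(a)                       # [2. 1. 1.]
```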


Lectures on Algebra and Geometry. Semester 2.

Lecture 22. Vector spaces.

Brief content: definition of a vector space, its simplest properties, systems of vectors, linear combination of a system of vectors, trivial and non-trivial linear combination, linearly dependent and independent systems of vectors, conditions for linear dependence or independence of a system of vectors, subsystems of a system of vectors, systems of columns of an arithmetic vector space.

item 1. Definition of a vector space and its simplest properties.

Here, for the convenience of the reader, we repeat the content of paragraph 13 of lecture 1.

Definition. Let V be an arbitrary non-empty set, whose elements we will call vectors, and let K be a field, whose elements we will call scalars. Let an internal binary algebraic operation be defined on the set V, which we will denote by the sign + and call the addition of vectors. Let also an external binary algebraic operation be defined on the set V, which we will call the multiplication of a vector by a scalar and denote by the multiplication sign. In other words, two mappings are defined:

+ : V × V → V and · : K × V → V.

The set V together with these two algebraic operations is called a vector space over the field K if the following axioms hold:

1. Addition is associative, i.e. (x + y) + z = x + (y + z) for all x, y, z ∈ V.

2. There is a zero vector, i.e. a vector 0 ∈ V such that x + 0 = x for every x ∈ V.

3. For any vector there is an opposite one: for every x ∈ V there exists y ∈ V such that x + y = 0.

The vector y, opposite to the vector x, is usually denoted by –x, so that x + (–x) = 0.

4. Addition is commutative, i.e. x + y = y + x for all x, y ∈ V.

5. Multiplication of a vector by a scalar obeys the law of associativity, i.e. α(βx) = (αβ)x,

where the product αβ is the product of scalars defined in the field K.

6. 1·x = x for every x ∈ V, where 1 is the unit of the field K.

7. Multiplication of a vector by a scalar is distributive with respect to vector addition: α(x + y) = αx + αy.

8. Multiplication of a vector by a scalar is distributive with respect to addition of scalars: (α + β)x = αx + βx.

Definition. A vector space over the field of real numbers is called a real vector space.

Theorem. (The simplest properties of vector spaces.)

1. There is only one zero vector in a vector space.

2. In a vector space, any vector has a unique opposite.

3. αx = 0 if and only if α = 0 or x = 0.

4. (–1)x = –x for any x ∈ V.

Proof. 1) The uniqueness of the zero vector is proved in the same way as the uniqueness of the identity matrix and, in general, as the uniqueness of the neutral element of any internal binary algebraic operation.

Let 0 be the zero vector of the vector space V. Then x + 0 = x for any x ∈ V. Let 0′ be another zero vector. Then x + 0′ = x for any x ∈ V. Take x = 0′ in the first equality and x = 0 in the second. Then 0′ + 0 = 0′ and 0 + 0′ = 0, whence, by the commutativity of addition, it follows that 0′ = 0, q.e.d.

2a) First we prove that the product of the zero scalar and any vector is equal to the zero vector.

Let x ∈ V. Then, applying the vector space axioms, we get:

0·x + 0·x = (0 + 0)·x = 0·x = 0·x + 0.

With respect to addition, a vector space is an abelian group, and the cancellation law holds in any group. Applying the cancellation law, the last equality implies

0·x = 0.

2b) Now let us prove assertion 4). Let x ∈ V be an arbitrary vector. Then

x + (–1)x = 1·x + (–1)x = (1 + (–1))·x = 0·x = 0.

It immediately follows from this that the vector (–1)x is the opposite of x, i.e. (–1)x = –x.

2c) Let now α ∈ K. Then, applying the vector space axioms, we get:

α·0 = α(0 + 0) = α·0 + α·0, whence, by the cancellation law, α·0 = 0.

2d) Let αx = 0 and let us assume that α ≠ 0. Because α ∈ K, where K is a field, there exists α⁻¹ ∈ K. Let us multiply the equality αx = 0 on the left by α⁻¹:

α⁻¹(αx) = α⁻¹·0, whence follows (α⁻¹α)x = 0, or 1·x = 0, or x = 0.

Together with 2a) and 2c), this proves assertion 3).

The theorem has been proven.

item 2. Examples of vector spaces.

1) The set of real-valued functions of one variable that are continuous on the interval (0; 1), with respect to the usual operations of adding functions and multiplying a function by a number.

2) The set of polynomials in one variable with coefficients from the field K, with respect to the addition of polynomials and multiplication of a polynomial by a scalar.

3) The set of complex numbers with respect to the addition of complex numbers and multiplication by a real number.

4) The set of matrices of a fixed size with elements from the field K, with respect to matrix addition and multiplication of a matrix by a scalar.

The following example is an important special case of Example 4.

5) Let n be an arbitrary natural number. Denote by Kⁿ the set of all columns of height n, i.e. the set of matrices over the field K of size n × 1.

The set Kⁿ is a vector space over the field K and is called the arithmetic vector space of columns of height n over the field K.

In particular, if instead of an arbitrary field K we take the field of real numbers ℝ, then the vector space ℝⁿ is called the real arithmetic vector space of columns of height n.

Similarly, the set of matrices over the field K of size 1 × n, i.e. rows of length n, is also a vector space. It is likewise denoted by Kⁿ and is called the arithmetic vector space of rows of length n over the field K.
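A minimal sketch of these operations in the real arithmetic space of columns (NumPy models a column of height n as an n × 1 matrix; the values are illustrative):

```python
import numpy as np

n = 4
u = np.arange(1., n + 1).reshape(n, 1)   # a column of height n: (1, 2, 3, 4)^T
v = np.ones((n, 1))                      # another column of height n

print(u + v)        # column addition is componentwise
print(2.5 * u)      # multiplication of a column by a scalar is componentwise
# The space of rows of length n works the same way with matrices of shape (1, n).
```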

item 3. Systems of vectors of a vector space.

Definition. A system of vectors of a vector space is any finite non-empty set of vectors of this space.

Notation: v₁, v₂, …, vₙ.

Definition. The expression

α₁v₁ + α₂v₂ + … + αₙvₙ, (1)

where α₁, …, αₙ are scalars of the field K and v₁, …, vₙ are vectors of the vector space V, is called a linear combination of the system of vectors v₁, v₂, …, vₙ. The scalars α₁, …, αₙ are called the coefficients of this linear combination.

Definition. If all coefficients of the linear combination (1) are equal to zero, then such a linear combination is called trivial, otherwise it is nontrivial.

Example. Let v₁, v₂, v₃ be a system of three vectors in a vector space V. Then

0·v₁ + 0·v₂ + 0·v₃

is a trivial linear combination of the given system of vectors, while (for instance)

2v₁ + 0·v₂ + 0·v₃

is a non-trivial linear combination of the given system of vectors, since the first coefficient of this combination 2 ≠ 0.

Definition. If a vector x of a vector space V can be represented as

x = α₁v₁ + α₂v₂ + … + αₙvₙ,

then we say that the vector x is linearly expressed in terms of the vectors of the system v₁, v₂, …, vₙ. In this case, we also say that the system v₁, v₂, …, vₙ linearly represents the vector x.

Comment. In this and the previous definition, the word "linearly" is often omitted, and one simply says that the system represents a vector, or that the vector is expressed in terms of the vectors of the system, and so on.

Example. Let A₁, A₂ be a system of two columns in the arithmetic real vector space ℝ² of columns of height 2, and let x be a column for which x = αA₁ + βA₂ with some real α, β. Then the column x is linearly expressed in terms of the columns of the system, or, in other words, the given system of columns linearly represents the column x. Indeed, the representation is verified by direct computation with columns, as in the sketch below.
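A sketch of such a verification using NumPy; the columns A₁, A₂ and x below are hypothetical values chosen for illustration:

```python
import numpy as np

# Hypothetical system of two columns in R^2 and a column x (illustration only).
A1 = np.array([1., 0.])
A2 = np.array([1., 1.])
x  = np.array([3., 2.])

coeffs = np.linalg.solve(np.column_stack([A1, A2]), x)
print(coeffs)                                        # [1. 2.]: x = 1*A1 + 2*A2
assert np.allclose(coeffs[0] * A1 + coeffs[1] * A2, x)
```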

item 4. Linearly dependent and linearly independent systems of vectors in a vector space.

Since the product of the zero scalar and any vector is the zero vector and the sum of zero vectors equals the zero vector, for any system of vectors v₁, v₂, …, vₙ the equality

0 = 0·v₁ + 0·v₂ + … + 0·vₙ (2)

holds. It follows that the null vector is linearly expressed in terms of the vectors of any system of vectors, or, in other words, any system of vectors linearly represents the null vector.

Example. Let A₁ = (1, 2)ᵀ and A₂ = (2, 4)ᵀ (an illustrative dependent pair, A₂ = 2A₁). In this case, the null column can be linearly expressed in terms of the columns of the system in more than one way:

0 = 0·A₁ + 0·A₂

or

0 = 2A₁ + (–1)·A₂.
To distinguish between these methods of linear representation of the zero vector, we introduce the following definition.

Definition. If the equality

0 = α₁v₁ + α₂v₂ + … + αₙvₙ (3)

holds and all the coefficients α₁ = α₂ = … = αₙ = 0, then we say that the system v₁, v₂, …, vₙ represents the null vector trivially. If in equality (3) at least one of the coefficients α₁, …, αₙ is not equal to zero, then we say that the system of vectors v₁, v₂, …, vₙ represents the null vector in a non-trivial way.

From the last example, we see that there are systems of vectors that can represent the null vector in a non-trivial way. From the following example, we will see that there are systems of vectors that cannot non-trivially represent the null vector.

Example. Let e₁ = (1, 0)ᵀ, e₂ = (0, 1)ᵀ be a system of two columns from the vector space ℝ². Consider the equality

αe₁ + βe₂ = 0,

where α, β are unknown coefficients. Using the rules for multiplying a column by a scalar (number) and adding columns, we get the equality

(α, β)ᵀ = (0, 0)ᵀ.

It follows from the definition of matrix equality that α = 0 and β = 0.

Thus, the given system cannot represent the null column in a non-trivial way.
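The same conclusion can be reached numerically: the homogeneous system αe₁ + βe₂ = 0 has only the trivial solution exactly when the rank of the column matrix equals the number of columns. A sketch using NumPy, also rechecking the dependent pair from the earlier example:

```python
import numpy as np

E = np.array([[1., 0.],
              [0., 1.]])            # columns e1, e2
D = np.array([[1., 2.],
              [2., 4.]])            # columns A1, A2 = 2*A1 from the earlier example

# alpha*col1 + beta*col2 = 0 has a nontrivial solution iff rank < number of columns;
# the difference below is the number of free parameters in the solution.
print(E.shape[1] - np.linalg.matrix_rank(E))   # 0: only the trivial representation
print(D.shape[1] - np.linalg.matrix_rank(D))   # 1: nontrivial representations exist
```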

It follows from the above examples that there are two types of vector systems. Some systems represent the null vector in a non-trivial way, while others do not. Note once again that any system of vectors represents the null vector trivially.

Definition. A system of vectors of a vector space that represents the zero vector only trivially is called linearly independent.

Definition. A system of vectors of a vector space that can represent the null vector non-trivially is called linearly dependent.

The last definition can be given in a more detailed form.

Definition. A system of vectors v₁, v₂, …, vₙ of a vector space V is called linearly dependent if there is a set of scalars α₁, α₂, …, αₙ of the field K, not all equal to zero, such that α₁v₁ + α₂v₂ + … + αₙvₙ = 0.

Comment. Any system of vectors v₁, v₂, …, vₙ can represent the null vector trivially:

0·v₁ + 0·v₂ + … + 0·vₙ = 0.

But this is not enough to find out whether a given system of vectors is linearly dependent or linearly independent. It follows from the definition that a linearly independent system of vectors cannot represent the zero vector in a nontrivial way, but only in a trivial way. Therefore, in order to verify the linear independence of a given system of vectors, one considers the representation of zero by an arbitrary linear combination of this system:

α₁v₁ + α₂v₂ + … + αₙvₙ = 0.

If this equality can hold only when all the coefficients of the linear combination are zero, then the system is, by definition, linearly independent.

So in the examples of the previous paragraph, the column system e₁, e₂ is linearly independent, and the column system A₁, A₂ is linearly dependent.

The linear independence of the system of columns

e₁ = (1, 0, …, 0)ᵀ, e₂ = (0, 1, …, 0)ᵀ, …, eₙ = (0, 0, …, 1)ᵀ

from the space Kⁿ, where K is an arbitrary field and n is an arbitrary natural number, is proved similarly.

The following theorems give several criteria for linear dependence and, accordingly, linear independence of systems of vectors.

Theorem. (A necessary and sufficient condition for the linear dependence of a system of vectors.)

A system of vectors in a vector space is linearly dependent if and only if one of the vectors of the system is linearly expressed in terms of the other vectors of this system.

Proof. Necessity. Let the system v₁, v₂, …, vₙ be linearly dependent. Then, by definition, it represents the null vector in a non-trivial way, i.e. there is a non-trivial linear combination of this system of vectors equal to the zero vector:

α₁v₁ + α₂v₂ + … + αₙvₙ = 0,

where at least one of the coefficients of this linear combination is not equal to zero. Let αₖ ≠ 0, 1 ≤ k ≤ n.

Divide both sides of the previous equality by this non-zero coefficient (i.e., multiply by αₖ⁻¹):

vₖ = β₁v₁ + … + βₖ₋₁vₖ₋₁ + βₖ₊₁vₖ₊₁ + … + βₙvₙ, where βᵢ = –αₖ⁻¹αᵢ for i ≠ k,

i.e. one of the vectors of the system is linearly expressed in terms of the other vectors of this system, q.e.d.

Sufficiency. Let one of the vectors of the system be linearly expressed in terms of the other vectors of this system:

vₖ = β₁v₁ + … + βₖ₋₁vₖ₋₁ + βₖ₊₁vₖ₊₁ + … + βₙvₙ.

Let us move the vector vₖ to the right side of this equation:

0 = β₁v₁ + … + (–1)·vₖ + … + βₙvₙ.

Since the coefficient of the vector vₖ equals –1 ≠ 0, we have a nontrivial representation of zero by the system of vectors v₁, v₂, …, vₙ, which means that this system of vectors is linearly dependent, q.e.d.

The theorem has been proven.

Corollary.

1. A system of vectors in a vector space is linearly independent if and only if none of the vectors of the system is linearly expressed in terms of other vectors of this system.

2. A system of vectors containing a zero vector or two equal vectors is linearly dependent.

Proof.

1) Necessity. Let the system be linearly independent. Assume the contrary: there is a vector of the system that is linearly expressed through the other vectors of this system. Then, by the theorem, the system is linearly dependent, and we arrive at a contradiction.

Sufficiency. Let none of the vectors of the system be expressed in terms of the others. Assume the contrary: let the system be linearly dependent. Then it follows from the theorem that there is a vector of the system that is linearly expressed through the other vectors of this system, and we again arrive at a contradiction.

2a) Let the system contain a zero vector. Assume for definiteness that v₁ = 0. Then the equality

v₁ = 0·v₂ + … + 0·vₙ

holds, i.e. one of the vectors of the system is linearly expressed in terms of the other vectors of this system. It follows from the theorem that such a system of vectors is linearly dependent, q.e.d.

Note that this fact can be proved directly from the definition of a linearly dependent system of vectors.

Because v₁ = 0, the following equality is obvious:

1·v₁ + 0·v₂ + … + 0·vₙ = 0.

This is a non-trivial representation of the zero vector, which means that the system v₁, v₂, …, vₙ is linearly dependent.

2b) Let the system have two equal vectors. Let for definiteness v₁ = v₂. Then the equality

v₁ = 1·v₂ + 0·v₃ + … + 0·vₙ

holds, i.e. the first vector is linearly expressed in terms of the other vectors of the same system. It follows from the theorem that the given system is linearly dependent, q.e.d.

Similarly to the previous case, this assertion can also be proved directly from the definition of a linearly dependent system.

Indeed, since v₁ = v₂, the equality

1·v₁ + (–1)·v₂ + 0·v₃ + … + 0·vₙ = 0

holds, i.e. we have a non-trivial representation of the null vector.

The corollary is proved.

Theorem. (On the linear dependence of a system of one vector.)

A system consisting of one vector is linearly dependent if and only if this vector is zero.

Proof.

Necessity. Let the system {x} be linearly dependent, i.e. there exists a non-trivial representation of the null vector

αx = 0,

where α ∈ K and α ≠ 0. It follows from the simplest properties of a vector space that then x = 0.

Sufficiency. Let the system consist of one zero vector: {0}. Then this system represents the zero vector nontrivially:

1·0 = 0,

whence the linear dependence of the system {0} follows.

The theorem has been proven.

Corollary. A system consisting of one vector is linearly independent if and only if this vector is nonzero.

The proof is left to the reader as an exercise.

4.3.1 Linear space definition

Let ā, b̄, c̄ be elements of some set L (ā, b̄, c̄ ∈ L) and let λ, μ be real numbers (λ, μ ∈ ℝ).

The set L is called a linear or vector space if two operations are defined:

1°. Addition. Each pair of elements ā, b̄ of this set is associated with an element of the same set, called their sum:

ā + b̄ = c̄ ∈ L.

2°. Multiplication by a number. To any real number λ and element ā ∈ L an element of the same set λā ∈ L is assigned, and the following properties are satisfied:

1. ā + b̄ = b̄ + ā;

2. ā + (b̄ + c̄) = (ā + b̄) + c̄;

3. there exists a null element 0̄ such that ā + 0̄ = ā;

4. there exists an opposite element –ā such that ā + (–ā) = 0̄.

If λ, μ are real numbers, then:

5. λ(μā) = (λμ)ā;

6. 1·ā = ā;

7. λ(ā + b̄) = λā + λb̄;

8. (λ + μ)ā = λā + μā.

Elements of the linear space ā, b̄, … are called vectors.

Exercise. Show for yourself that the following sets form linear spaces:

1) The set of geometric vectors on the plane;

2) A set of geometric vectors in three-dimensional space;

3) The set of polynomials of degree at most n;

4) The set of matrices of the same size.

4.3.2 Linearly dependent and independent vectors. Dimension and basis of space

A linear combination of vectors ā₁, ā₂, …, āₙ ∈ L is a vector of the same space of the form

b̄ = λ₁ā₁ + λ₂ā₂ + … + λₙāₙ,

where the λᵢ are real numbers.

The vectors ā₁, …, āₙ are called linearly independent if their linear combination is the zero vector only when all λᵢ are equal to zero, that is

λ₁ā₁ + λ₂ā₂ + … + λₙāₙ = 0̄ implies λᵢ = 0 for all i.

If the linear combination is the zero vector while at least one of the λᵢ is different from zero, then these vectors are called linearly dependent. The latter means that at least one of the vectors can be represented as a linear combination of the other vectors. Indeed, let λ₁ā₁ + λ₂ā₂ + … + λₙāₙ = 0̄ and, for example, λ₁ ≠ 0. Then

ā₁ = μ₂ā₂ + … + μₙāₙ, where μᵢ = –λᵢ/λ₁.

A maximal linearly independent ordered system of vectors is called a basis of the space L. The number of basis vectors is called the dimension of the space.

If the space contains n linearly independent vectors (and no more), the space is called n-dimensional. All other vectors of the space can be represented as linear combinations of the n basis vectors. As a basis of an n-dimensional space one can take any n linearly independent vectors of this space.
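Numerically, the dimension of the subspace spanned by a finite set of vectors in Rᵐ equals the rank of the matrix whose columns are these vectors, and any maximal linearly independent subset of them can serve as a basis of the span. A sketch using NumPy with illustrative vectors:

```python
import numpy as np

vectors = [np.array([1., 0., 1.]),
           np.array([0., 1., 1.]),
           np.array([1., 1., 2.])]    # third = first + second

A = np.column_stack(vectors)
dim = np.linalg.matrix_rank(A)
print(dim)   # 2: the three vectors span a two-dimensional subspace of R^3
```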

Example 17. Find the basis and dimension of the given linear spaces:

a) the set of vectors lying on a line (collinear to some line);

b) the set of vectors belonging to a plane;

c) the set of vectors of three-dimensional space;

d) the set of polynomials of degree at most two.

Solution.

a) Any two vectors lying on one line are linearly dependent, since the vectors are collinear: b̄ = λā, where λ is a scalar. Therefore, the basis of this space is any single vector other than zero.

Usually this space is denoted R; its dimension is 1.

b) Any two non-collinear vectors ē₁, ē₂ are linearly independent, and any three vectors in the plane are linearly dependent. For any vector ā there are numbers λ and μ such that ā = λē₁ + μē₂. The space is called two-dimensional and is denoted R².

The basis of a two-dimensional space is formed by any two non-collinear vectors.

c) Any three non-coplanar vectors are linearly independent; they form a basis of the three-dimensional space R³.

d) As a basis for the space of polynomials of degree at most two, one can choose the following three vectors: ē₁ = x², ē₂ = x, ē₃ = 1

(here 1 is the polynomial identically equal to one). This space is three-dimensional.
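Identifying the polynomial ax² + bx + c with its coordinate column (a, b, c)ᵀ in the basis ē₁ = x², ē₂ = x, ē₃ = 1 turns this space into R³; the linear operations then act on the coefficients componentwise. A sketch using NumPy with illustrative polynomials:

```python
import numpy as np

# p(x) = 2x^2 - x + 3 and q(x) = x^2 + 5 as coordinate columns in the basis x^2, x, 1.
p = np.array([2., -1., 3.])
q = np.array([1.,  0., 5.])

print(p + q)      # coefficients of (p + q)(x) = 3x^2 - x + 8
print(2 * p)      # coefficients of 2p(x)      = 4x^2 - 2x + 6
```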

Some authors equate Euclidean and pre-Hilbert spaces; in what follows, the first definition is taken as the initial one.

An n-dimensional Euclidean space is usually denoted Eⁿ; the notation ℝⁿ is also often used when it is clear from the context that the space is endowed with the natural Euclidean structure.

Formal definition

To define a Euclidean space, it is easiest to take the dot (inner) product as the basic concept. A Euclidean vector space is defined as a finite-dimensional vector space over the field of real numbers on whose pairs of vectors a real-valued function (⋅, ⋅) is given with the following three properties:

symmetry: (x, y) = (y, x);

bilinearity: (x + y, z) = (x, z) + (y, z) and (λx, y) = λ(x, y);

positive definiteness: (x, x) ≥ 0, and (x, x) = 0 only for x = 0.

An example of a Euclidean space is the coordinate space ℝⁿ consisting of all possible n-tuples of real numbers (x₁, x₂, …, xₙ), with the scalar product determined by the formula (x, y) = ∑ᵢ₌₁ⁿ xᵢyᵢ = x₁y₁ + x₂y₂ + ⋯ + xₙyₙ.
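A sketch of this scalar product computed in NumPy (the sample vectors are illustrative):

```python
import numpy as np

x = np.array([1., 2., 3.])
y = np.array([4., -1., 2.])

print(np.dot(x, y))                       # sum_i x_i*y_i = 1*4 + 2*(-1) + 3*2 = 8
print(sum(a * b for a, b in zip(x, y)))   # the same sum written out term by term
```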

Lengths and angles

The dot product given on a Euclidean space is enough to introduce the geometric concepts of length and angle. The length of a vector u is defined as √(u, u) and is denoted |u|. The positive definiteness of the inner product guarantees that the length of a non-zero vector is non-zero, and it follows from bilinearity that |au| = |a|·|u|, that is, the lengths of proportional vectors are proportional.

The angle between vectors u and v is determined by the formula φ = arccos((u, v)/(|u|·|v|)). It follows from the law of cosines that for a two-dimensional Euclidean space (the Euclidean plane) this definition of the angle coincides with the usual one. Orthogonal vectors, as in three-dimensional space, can be defined as vectors the angle between which equals π/2.
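A sketch of these formulas in NumPy (illustrative vectors; np.linalg.norm computes √(u, u)):

```python
import numpy as np

u = np.array([1., 0., 0.])
v = np.array([1., 1., 0.])

length = np.sqrt(np.dot(u, u))            # |u| = sqrt((u, u))
cos_phi = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
phi = np.arccos(cos_phi)
print(length, np.degrees(phi))            # 1.0 and (approximately) 45.0 degrees

# Orthogonality: the angle pi/2 means (u, v) = 0.
print(np.isclose(np.dot(u, np.array([0., 0., 1.])), 0.0))   # True
```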

Cauchy-Bunyakovsky-Schwarz inequality and triangle inequality

There is one gap left in the definition of the angle given above: in order for arccos((u, v)/(|u|·|v|)) to be defined, it is necessary that the inequality |(u, v)|/(|u|·|v|) ≤ 1 hold. This inequality indeed holds in an arbitrary Euclidean space; it is called the Cauchy–Bunyakovsky–Schwarz inequality. This inequality, in turn, implies the triangle inequality: |u + v| ≤ |u| + |v|. The triangle inequality, together with the length properties listed above, means that the length of a vector is a norm on the Euclidean vector space, and the function d(x, y) = |x − y| defines the structure of a metric space on the Euclidean space (this function is called the Euclidean metric). In particular, the distance between elements (points) x and y of the coordinate space ℝⁿ is given by the formula d(x, y) = ‖x − y‖ = √(∑ᵢ₌₁ⁿ (xᵢ − yᵢ)²).
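A sketch of the Euclidean metric and a spot check of the triangle inequality in NumPy (the points and the random sample are illustrative):

```python
import numpy as np

x = np.array([0., 0.])
y = np.array([3., 4.])

d = np.linalg.norm(x - y)      # Euclidean metric: sqrt(sum (x_i - y_i)^2)
print(d)                       # 5.0

# Triangle inequality |u + v| <= |u| + |v| on a random sample:
rng = np.random.default_rng(1)
u, v = rng.normal(size=3), rng.normal(size=3)
assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v)
```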

Algebraic properties

Orthonormal bases

Dual spaces and operators

Any vector x of a Euclidean space defines a linear functional x* on this space, defined as x*(y) = (x, y). This correspondence is an isomorphism between the Euclidean space and its dual space and allows them to be identified without compromising calculations. In particular, adjoint operators can be considered as acting on the original space rather than on its dual, and self-adjoint operators can be defined as operators that coincide with their adjoints. In an orthonormal basis, the matrix of the adjoint operator is the transpose of the matrix of the original operator, and the matrix of a self-adjoint operator is symmetric.
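A numerical illustration of the defining identity (Ax, y) = (x, A*y), with A* given by the transposed matrix in an orthonormal basis (a sketch using NumPy with a random illustrative matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))        # matrix of an operator in an orthonormal basis
x, y = rng.normal(size=3), rng.normal(size=3)

# Defining property of the adjoint: (A x, y) = (x, A* y); here A* is A transposed.
assert np.isclose(np.dot(A @ x, y), np.dot(x, A.T @ y))

S = A + A.T                        # a self-adjoint operator has a symmetric matrix
assert np.allclose(S, S.T)
```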

Euclidean space motions

Euclidean space motions are metric-preserving transformations (also called isometries). An example of a motion is the parallel translation by a vector v, which takes a point p to the point p + v. It is easy to see that any motion is a composition of a parallel translation and a transformation that keeps one point fixed. By choosing the fixed point as the origin, any such motion can be regarded as an orthogonal transformation.

Lecture 6. Vector space.

Main questions.

1. Vector linear space.

2. Basis and dimension of space.

3. Orientation of space.

4. Decomposition of a vector in terms of a basis.

5. Vector coordinates.

1. Vector linear space.

A set consisting of elements of any nature, on which linear operations are defined (the addition of two elements and the multiplication of an element by a number), is called a space, and its elements are called vectors of this space; they are denoted in the same way as vector quantities in geometry: ā, b̄, …. The vectors of such abstract spaces, as a rule, have nothing in common with ordinary geometric vectors. The elements of abstract spaces can be functions, systems of numbers, matrices, etc., and in a particular case, ordinary vectors. Therefore, such spaces are called vector spaces.

Vector spaces are, for example, the set of collinear vectors, denoted by V₁, the set of coplanar vectors V₂, and the set of vectors of ordinary (real) space V₃.

For this particular case, we can give the following definition of a vector space.

Definition 1. A set of vectors is called a vector space if a linear combination of any vectors of the set is also a vector of this set. The vectors themselves are called elements of the vector space.

More important, both theoretically and in applications, is the general (abstract) concept of a vector space.


Definition 2. A set R of elements ā, b̄, c̄, …, in which for any two elements ā and b̄ the sum ā + b̄ is defined, and for any element ā and any real number λ the product λā is defined, is called a vector (or linear) space, and its elements are called vectors, if the operations of adding vectors and multiplying a vector by a number satisfy the following conditions (axioms):

1) addition is commutative: ā + b̄ = b̄ + ā;

2) addition is associative: (ā + b̄) + c̄ = ā + (b̄ + c̄);

3) there is an element 0̄ (the zero vector) such that ā + 0̄ = ā for any ā;

4) for any vector ā there is an opposite vector –ā such that ā + (–ā) = 0̄;

5) for any vectors ā and b̄ and any number λ, the equality λ(ā + b̄) = λā + λb̄ holds;

6) for any vector ā and any numbers λ and μ, the equality (λμ)ā = λ(μā) holds;

7) for any vector ā and any numbers λ and μ, the equality (λ + μ)ā = λā + μā holds;

8) 1·ā = ā for any vector ā.

From the axioms that define a vector space, the following simplest consequences follow:

1. In a vector space, there is only one zero element, the zero vector.

2. In a vector space, each vector has a unique opposite vector.

3. For each element ā, the equality 0·ā = 0̄ holds.

4. For any real number λ and the zero vector 0̄, the equality λ·0̄ = 0̄ holds.

5. (–1)·ā = –ā.

6. The difference ā – b̄ is the vector x̄ that satisfies the equality x̄ + b̄ = ā, i.e. x̄ = ā + (–b̄).

So, indeed, the set of all geometric vectors is a linear (vector) space, since for the elements of this set the operations of addition and multiplication by a number satisfying the formulated axioms are defined.

2. Basis and dimension of space.

The essential concepts of a vector space are the concepts of basis and dimension.

Definition. A set of linearly independent vectors, taken in a certain order, through which any vector of the space is linearly expressed, is called a basis of this space. The vectors of the space that make up the basis are called basis vectors.

As the basis of the set of vectors lying on an arbitrary line, one can take a single vector collinear to this line.

A basis on the plane is any two non-collinear vectors of this plane, taken in a certain order: ē₁, ē₂.

If the basis vectors are pairwise perpendicular (orthogonal), then the basis is called orthogonal; if, in addition, these vectors have length equal to one, then the basis is called orthonormal.

The largest number of linearly independent vectors of a space is called the dimension of this space; that is, the dimension of the space coincides with the number of basis vectors of this space.

So, according to these definitions:

1. The one-dimensional space V₁ is a straight line, and its basis consists of one vector collinear to this line: ē₁.

2. The two-dimensional space V₂ is a plane, and its basis consists of two non-collinear vectors: ē₁, ē₂.

3. Ordinary space is the three-dimensional space V₃, whose basis consists of three non-coplanar vectors: ē₁, ē₂, ē₃.

From here we see that the number of basis vectors on a straight line, on a plane, and in real space coincides with what in geometry is usually called the number of dimensions (the dimension) of the line, the plane, and space. Therefore, it is natural to introduce a more general definition.


Definition. A vector space R is called n-dimensional if it contains n linearly independent vectors, while any n + 1 of its vectors are linearly dependent; it is denoted Rⁿ. The number n is called the dimension of the space.

According to their dimension, spaces are divided into finite-dimensional and infinite-dimensional. The dimension of the null space is, by definition, assumed to be zero.

Remark 1. In each space, you can specify as many bases as you like, but all the bases of this space consist of the same number of vectors.

Remark 2. In an n-dimensional vector space, a basis is any ordered collection of n linearly independent vectors.

3. Orientation of space.

Let the basis vectors of the space V₃ have a common origin and be ordered, i.e. it is indicated which vector is considered the first, which the second, and which the third. For example, in the basis ē₁, ē₂, ē₃ the vectors are ordered according to their indices.

In order to orient the space, it is necessary to fix some basis and declare it positive.

It can be shown that the set of all bases of a space falls into two classes, that is, into two non-intersecting subsets.

a) all bases belonging to one subset (class) have the same orientation (bases of the same name);

b) any two bases belonging to different subsets (classes) have opposite orientation (bases of different names).

If one of the two classes of bases of a space is declared positive and the other negative, then we say that this space is oriented.

Often, when orienting a space, some bases are called right and the others left.

A basis ē₁, ē₂, ē₃ is called right if, when viewed from the end of the third vector, the shortest rotation from the first vector ē₁ to the second vector ē₂ is carried out counterclockwise (Fig. 1.8, a).


Fig. 1.8. Right basis (a) and left basis (b)

Usually, the right basis of the space is declared to be the positive basis.

The right (left) basis of space can also be determined using the rule of the "right" ("left") screw or gimlet.

By analogy with this, the concept of right and left triples of non-coplanar vectors, which must be ordered, is introduced (Fig. 1.8).

Thus, in the general case, two ordered triples of non-coplanar vectors have the same orientation (are of the same name) in the space V₃ if they are both right or both left, and opposite orientation (are of opposite names) if one of them is right and the other is left.

The same is done in the case of the space V₂ (the plane).
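Computationally, the orientation of an ordered triple of non-coplanar vectors can be read off from the sign of the determinant of the matrix formed by the triple; with the usual convention that the standard basis is right, a positive determinant means a right triple. A sketch using NumPy:

```python
import numpy as np

def orientation(e1, e2, e3):
    """Sign of det[e1 e2 e3]: +1 for a right triple, -1 for a left one
    (under the convention that the standard basis is right)."""
    return np.sign(np.linalg.det(np.column_stack([e1, e2, e3])))

e1, e2, e3 = np.eye(3)           # rows of the identity: the standard basis
print(orientation(e1, e2, e3))   # +1: the standard basis is right
print(orientation(e2, e1, e3))   # -1: swapping two vectors reverses the orientation
```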

4. Decomposition of a vector in terms of a basis.

For simplicity of reasoning, we will consider this question using the example of a three-dimensional vector space R3 .

Let ā be an arbitrary vector of this space.
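A sketch of such a decomposition in NumPy: the coefficients are obtained by solving a 3 × 3 linear system, and the non-vanishing determinant confirms that the three chosen vectors (an illustrative non-orthogonal basis) are non-coplanar:

```python
import numpy as np

# Illustrative non-orthogonal basis of R^3 and a vector to decompose.
e1, e2, e3 = np.array([1., 0., 0.]), np.array([1., 1., 0.]), np.array([1., 1., 1.])
a = np.array([6., 5., 3.])

E = np.column_stack([e1, e2, e3])
assert not np.isclose(np.linalg.det(E), 0.0)   # non-coplanar vectors: a valid basis
coeffs = np.linalg.solve(E, a)                 # a = a1*e1 + a2*e2 + a3*e3
print(coeffs)                                  # [1. 2. 3.]
```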