
Euclidean space

Euclidean space, in the original sense, is a space whose properties are described by the axioms of Euclidean geometry. In this case, the space is assumed to be three-dimensional.

In the modern, more general sense, the term can refer to one of the similar and closely related objects defined below. The n-dimensional Euclidean space is usually denoted Eⁿ, although the not entirely acceptable notation ℝⁿ is also often used.

1. A finite-dimensional real vector space with a positive-definite scalar product (x, y); in the simplest case (the Euclidean scalar product):

(x, y) = x1y1 + x2y2 + … + xnyn,

where x = (x1, …, xn) and y = (y1, …, yn) (in a Euclidean space one can always choose a basis in which exactly this simplest variant holds).

2. The metric space corresponding to the space described above, that is, with the metric introduced by the formula:

ρ(x, y) = |x − y| = √((x1 − y1)² + (x2 − y2)² + … + (xn − yn)²).

Related definitions

  • The term Euclidean metric can refer either to the metric described above or to the corresponding Riemannian metric.
  • Local Euclideanness usually means that each tangent space of a Riemannian manifold is a Euclidean space with all the ensuing properties, for example, the possibility (owing to the smoothness of the metric) of introducing coordinates in a small neighborhood of a point in which the distance is expressed (up to some order) as described above.
  • A metric space is also called locally Euclidean if coordinates can be introduced on it in which the metric is Euclidean (in the sense of the second definition) everywhere, or at least on a finite domain; such a space is, for example, a Riemannian manifold of zero curvature.

Examples

Good examples of Euclidean spaces are the coordinate spaces ℝⁿ with the standard scalar product.

A more abstract example is the space of polynomials of degree at most n with a suitable scalar product (such a product is constructed in §5 below).


Wikimedia Foundation. 2010 .


§3. Dimension and basis of a vector space

Linear combination of vectors

Trivial and non-trivial linear combination

Linearly dependent and linearly independent vectors

Properties of a vector space related to the linear dependence of vectors

n-dimensional vector space

Dimension of vector space

Decomposition of a vector in terms of a basis

§4. Transition to a new basis

Transition matrix from the old basis to the new one

Vector coordinates in new basis

§5. Euclidean space

Scalar product

Euclidean space

Length (norm) of the vector

Vector length properties

Angle between vectors

Orthogonal vectors

Orthonormal basis


§ 3. Dimension and basis of a vector space

Consider a vector space (V, +, ∘) over the field ℝ. Let a1, a2, …, an be some elements of the set V, i.e., vectors.

A linear combination of the vectors a1, a2, …, an is any vector equal to the sum of the products of these vectors by arbitrary elements of the field ℝ (i.e., by scalars) λ1, λ2, …, λn:

λ1a1 + λ2a2 + … + λnan.

If all the scalars are equal to zero, then such a linear combination is called trivial (the simplest), and it is equal to the zero vector.

If at least one scalar is non-zero, the linear combination is called non-trivial.

The vectors are called linearly independent if only the trivial linear combination of these vectors is equal to the zero vector:

λ1a1 + λ2a2 + … + λnan = 0 only when λ1 = λ2 = … = λn = 0.

The vectors are called linearly dependent if there is at least one non-trivial linear combination of these vectors equal to the zero vector.

Example. Consider the set of ordered quadruples of real numbers; this is a vector space over the field of real numbers. Task: find out whether the given vectors a1, a2 and a3 are linearly dependent.

Solution.

Let us compose a linear combination of these vectors, λ1a1 + λ2a2 + λ3a3, where λ1, λ2, λ3 are unknown numbers, and require that this linear combination be equal to the zero vector: λ1a1 + λ2a2 + λ3a3 = 0.

In this equality, we write the vectors as columns of numbers:

If there are such numbers for which this equality is satisfied, and at least one of the numbers is not equal to zero, then this is a non-trivial linear combination and the vectors are linearly dependent.

Let's do the following:

Thus, the problem is reduced to solving a system of linear equations:

Solving it, we get:

The ranks of the extended and main matrices of the system are equal and less than the number of unknowns, therefore, the system has an infinite number of solutions.

Assigning some nonzero value to the free unknown, we obtain values of the remaining unknowns.

So, for these vectors there exists a non-trivial linear combination equal to the zero vector, which means that these vectors are linearly dependent.
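The dependence test used in this example can be sketched numerically: the vectors are linearly dependent exactly when the rank of the matrix of their coordinates is less than the number of vectors. A minimal sketch in Python, using hypothetical quadruples (the example's original vectors are not preserved in the text; here a3 = a1 + 2·a2):

```python
def rank(rows):
    """Rank of a matrix (given as a list of rows) via Gaussian elimination."""
    m = [row[:] for row in rows]
    r = 0
    for col in range(len(m[0])):
        # find a pivot row for this column among the rows not yet used
        pivot = next((i for i in range(r, len(m)) if abs(m[i][col]) > 1e-12), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > 1e-12:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Hypothetical quadruples: a3 = a1 + 2*a2, so the system is dependent.
a1 = [1, 0, 2, 1]
a2 = [0, 1, 1, 3]
a3 = [1, 2, 4, 7]
vectors = [a1, a2, a3]
print(rank(vectors) < len(vectors))  # True -> linearly dependent
```

This mirrors the rank argument in the solution: the rank of the matrix is less than the number of unknowns, so a non-trivial combination exists.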

We note some vector space properties related to the linear dependence of vectors:

1. If the vectors are linearly dependent, then at least one of them is a linear combination of the others.

2. If among the vectors there is the zero vector, then these vectors are linearly dependent.

3. If some of the vectors are linearly dependent, then all these vectors are linearly dependent.

The vector space V is called an n-dimensional vector space if it contains n linearly independent vectors, and any set of (n + 1) vectors is linearly dependent.

The number n is called the dimension of the vector space and is denoted dim(V), from the English "dimension" (measurement, size, extent).

Any set of n linearly independent vectors of an n-dimensional vector space is called a basis.

Theorem (on the expansion of a vector in terms of a basis): every vector x of a vector space can be represented, and moreover uniquely, as a linear combination of the basis vectors e1, e2, …, en:

x = x1e1 + x2e2 + … + xnen. (*)

Formula (*) is called the expansion of the vector x in the basis, and the numbers x1, x2, …, xn are called the coordinates of the vector x in this basis.

There can be more than one or even infinitely many bases in a vector space. In each new basis, the same vector will have different coordinates.


§ 4. Transition to a new basis

In linear algebra, the problem often arises of finding the coordinates of a vector in a new basis, if its coordinates in the old basis are known.

Consider an n-dimensional vector space (V, +, ∘) over the field ℝ. Let there be two bases in this space: the old one, e1, e2, …, en, and the new one, e1*, e2*, …, en*.

Task: find the coordinates of a vector x in the new basis.

Let the vectors of the new basis have in the old basis the expansions:

e1* = a11e1 + a12e2 + … + a1nen,
e2* = a21e1 + a22e2 + … + a2nen,
…
en* = an1e1 + an2e2 + … + annen.

Let us write the coordinates of the vectors of the new basis into a matrix, not in rows as they appear in the system above, but in columns:

The resulting matrix A is called the transition matrix from the old basis to the new one.

The transition matrix relates the coordinates of any vector in the old and new bases by the following relation:

X = A·X*,

where X* is the column of the desired coordinates of the vector in the new basis.

Thus, the problem of finding the coordinates of a vector in the new basis reduces to solving the matrix equation X = A·X*, where X is the column of coordinates of the vector in the old basis, A is the transition matrix from the old basis to the new one, and X* is the desired column of coordinates of the vector in the new basis. From the matrix equation we get X* = A⁻¹·X.

So, the coordinates of a vector in the new basis are found from the equality:

X* = A⁻¹·X.

Example. In some basis, expansions of vectors are given:

Find the coordinates of the vector x in the new basis.

Solution.

1. Write out the transition matrix to a new basis, i.e. we write the coordinates of the vectors in the old basis in columns:

2. Find the inverse matrix A⁻¹:

3. Perform the multiplication X* = A⁻¹·X, where X is the column of coordinates of the vector x in the old basis:

Answer: .
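The computation X* = A⁻¹·X can be sketched as follows; the basis and coordinates here are hypothetical, since the example's data is not preserved in the text. In practice one solves the linear system A·X* = X rather than forming the inverse explicitly:

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (A square, invertible)."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]  # augmented matrix [A | b]
    for c in range(n):
        # partial pivoting: bring the largest entry in column c to row c
        p = max(range(c, n), key=lambda i: abs(M[i][c]))
        M[c], M[p] = M[p], M[c]
        for i in range(n):
            if i != c:
                f = M[i][c] / M[c][c]
                M[i] = [a - f * b_ for a, b_ in zip(M[i], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Hypothetical data: columns of A are the new basis vectors in the old basis.
A = [[1, 1],
     [0, 1]]          # e1* = e1, e2* = e1 + e2
x_old = [3, 2]        # coordinates of x in the old basis
x_new = solve(A, x_old)   # solves A·x* = x instead of computing A^{-1}
print(x_new)          # [1.0, 2.0]
```

Indeed, 1·e1* + 2·e2* = e1 + 2(e1 + e2) = 3e1 + 2e2, which recovers the old coordinates.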


§ 5. Euclidean space

Consider an n-dimensional vector space (V, +, ∘) over the field of real numbers ℝ. Let e1, e2, …, en be some basis of this space.

Let us introduce a metric in this vector space, i.e., define a way of measuring lengths and angles. To do this, we define the notion of a scalar product.

Even at school, all students become acquainted with the concept of "Euclidean geometry", whose main propositions revolve around several axioms based on such geometric elements as the point, the plane, the line, and motion. Together they form what has long been known under the term "Euclidean space".

Euclidean space, which rests on the notion of the scalar multiplication of vectors, is a special case of a linear (affine) space satisfying a number of requirements. First, the scalar product is symmetric: the scalar product of the pair of vectors (x; y) is quantitatively identical to that of the pair (y; x).

Second, if the scalar product of a vector with itself is taken, the result of this operation is positive. The only exception is the zero vector, whose product with itself is equal to zero.

Third, the scalar product is distributive: one of its arguments can be decomposed into the sum of two vectors without any change in the final result of the scalar multiplication. Finally, fourth, when a vector is multiplied by a scalar, its scalar products are multiplied by the same factor.

In the event that all these four conditions are met, we can say with confidence that we have a Euclidean space.

Euclidean space from a practical point of view can be characterized by the following specific examples:

  1. The simplest case is a set of geometric vectors with the scalar product defined according to the basic laws of geometry.
  2. A Euclidean space is also obtained if by vectors we mean finite ordered sets of real numbers with a given formula describing their scalar product.
  3. A degenerate case of a Euclidean space is the so-called zero space, which consists of the zero vector alone.

Euclidean space has a number of specific properties. First, the scalar factor can be taken out of the brackets from either the first or the second argument of the scalar product without changing the result. Second, along with distributivity in the first argument of the scalar product, distributivity in the second argument also holds, as does distributivity with respect to vector subtraction. Finally, third, the scalar product with the zero vector is equal to zero.

Thus, Euclidean space is a most important geometric concept, characterized by the scalar product and used in solving problems about the mutual arrangement of vectors.

Definition of Euclidean space

Definition 1. A real linear space is called Euclidean if on it there is defined an operation that associates with any two vectors x and y of this space a number, called the scalar product of the vectors x and y and denoted (x, y), for which the following conditions are met:

1. (x, y) = (y, x);

2. (x + y, z) = (x, z) + (y, z), where z is any vector belonging to the given linear space;

3. (λx, y) = λ(x, y), where λ is any number;

4. (x, x) ≥ 0, and (x, x) = 0 ⟺ x = 0.

For example, in the linear space of one-column matrices, the scalar product of the vectors x = (x1, x2, …, xn)ᵀ and y = (y1, y2, …, yn)ᵀ

can be defined by the formula

(x, y) = x1y1 + x2y2 + … + xnyn.

A Euclidean space of dimension n is denoted En. Note that there are both finite-dimensional and infinite-dimensional Euclidean spaces.

Definition 2. The length (modulus) of a vector x in the Euclidean space En is the number √(x, x), denoted |x| = √(x, x). Every vector of a Euclidean space has a length, and for the zero vector the length is equal to zero.

Multiplying a nonzero vector x by the number 1/|x|, we get a vector whose length is equal to one. This operation is called the normalization of the vector x.

For example, in the space of one-column matrices, the length of a vector can be defined by the formula:

|x| = √(x1² + x2² + … + xn²).

Cauchy-Bunyakovsky inequality

Let x ∈ En and y ∈ En be any two vectors. Let us prove that the following inequality holds for them:

(x, y)² ≤ (x, x)·(y, y). (The Cauchy–Bunyakovsky inequality)

Proof. Let λ be any real number. It is obvious that (λx − y, λx − y) ≥ 0. On the other hand, due to the properties of the scalar product, we can write

(λx − y, λx − y) = λ²(x, x) − 2λ(x, y) + (y, y).

We get that λ²(x, x) − 2λ(x, y) + (y, y) ≥ 0 for all λ.

The discriminant of this quadratic trinomial in λ cannot be positive, i.e. (x, y)² − (x, x)(y, y) ≤ 0, from which follows:

(x, y)² ≤ (x, x)·(y, y).

The inequality has been proven.
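The proven inequality is easy to check numerically. A small sketch that samples random vectors and verifies (x, y)² ≤ (x, x)·(y, y) for each pair:

```python
import random

def dot(x, y):
    """Scalar product (x, y) = x1*y1 + ... + xn*yn."""
    return sum(a * b for a, b in zip(x, y))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(5)]
    y = [random.uniform(-10, 10) for _ in range(5)]
    # (x, y)^2 <= (x, x)(y, y); small slack for floating-point rounding
    assert dot(x, y) ** 2 <= dot(x, x) * dot(y, y) + 1e-9
print("Cauchy-Bunyakovsky holds on all samples")
```

Such a check is not a proof, of course, but it is a convenient sanity test for any hand-implemented scalar product.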

triangle inequality

Let x and y be arbitrary vectors of the Euclidean space En, i.e. x ∈ En and y ∈ En.

Let us prove that |x + y| ≤ |x| + |y|. (The triangle inequality)

Proof. It is obvious that |x + y|² = (x + y, x + y). On the other hand, (x + y, x + y) = (x, x) + 2(x, y) + (y, y). Taking into account the Cauchy–Bunyakovsky inequality, we obtain

|x + y|² ≤ |x|² + 2|x|·|y| + |y|² = (|x| + |y|)², whence |x + y| ≤ |x| + |y|.

The triangle inequality is proved.

Euclidean space norm

Definition 1. A linear space R is called metric if to any two elements x and y of this space there is assigned a non-negative number ρ(x, y), called the distance between x and y (ρ(x, y) ≥ 0), and the following conditions (axioms) are met:

1) ρ(x, y) = 0 ⟺ x = y;

2) ρ(x, y) = ρ(y, x) (symmetry);

3) for any three vectors x, y and z of this space, ρ(x, y) ≤ ρ(x, z) + ρ(z, y).

Comment. The elements of a metric space are usually called points.

The Euclidean space En is metric; moreover, as the distance between vectors x ∈ En and y ∈ En one can take ρ(x, y) = |x − y|.

So, for example, in the space of one-column matrices, where the length is defined as above, we hence get

ρ(x, y) = |x − y| = √((x1 − y1)² + (x2 − y2)² + … + (xn − yn)²).

Definition 2. A linear space R is called normed if to each vector x of this space there is assigned a non-negative number ||x||, called the norm of x. In this case, the following axioms are fulfilled:

1) ||x|| ≥ 0, and ||x|| = 0 ⟺ x = 0;

2) ||λx|| = |λ|·||x||;

3) ||x + y|| ≤ ||x|| + ||y||.

It is easy to see that a normed space is also a metric space. Indeed, as the distance between x and y one can take ρ(x, y) = ||x − y||. In the Euclidean space En, as the norm of any vector x ∈ En one takes its length, i.e. ||x|| = |x|.

So, the Euclidean space En is a metric space and, moreover, a normed space.

Angle between vectors

Definition 1. The angle between non-zero vectors a and b of the Euclidean space En is the number φ (0 ≤ φ ≤ π) for which

cos φ = (a, b)/(|a|·|b|).

Definition 2. Vectors x and y of the Euclidean space En are called orthogonal if they satisfy the equality (x, y) = 0.

If x and y are nonzero orthogonal vectors, then it follows from the definition that the angle between them is equal to π/2.

Note that the zero vector is, by definition, considered orthogonal to any vector.

Example. In the geometric (coordinate) space ℝ³, which is a special case of a Euclidean space, the unit vectors i, j and k are mutually orthogonal.
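These definitions can be sketched for the coordinate space ℝ³ with the standard scalar product:

```python
import math

def dot(x, y):
    """Standard scalar product in R^n."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Length |x| = sqrt((x, x))."""
    return math.sqrt(dot(x, x))

def angle(a, b):
    """Angle between nonzero vectors, from cos(phi) = (a, b)/(|a|·|b|)."""
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

# The unit vectors i and j are orthogonal: their scalar product is zero.
i, j = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
print(dot(i, j))    # 0.0 -> orthogonal
print(angle(i, j))  # pi/2
```

The same `angle` function reproduces the general definition for any pair of nonzero vectors.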

Orthonormal basis

Definition 1. A basis e1, e2, …, en of the Euclidean space En is called orthogonal if the vectors of this basis are pairwise orthogonal, i.e. if (ei, ej) = 0 for i ≠ j.

Definition 2. If all vectors of an orthogonal basis e1, e2, …, en are unit vectors, i.e. |ei| = 1 (i = 1, 2, …, n), then the basis is called orthonormal; i.e., for an orthonormal basis, (ei, ej) = 0 for i ≠ j and (ei, ei) = 1.

Theorem. (on the construction of an orthonormal basis)

Every Euclidean space En has orthonormal bases.

Proof. Let us prove the theorem for the case n = 3.

Let E1, E2, E3 be an arbitrary basis of the Euclidean space E3; we construct an orthonormal basis in this space. Put e1 = E1 and e2 = E2 + αe1, where α is a real number chosen so that (e1, e2) = 0. Then we get

(e1, e2) = (e1, E2) + α(e1, e1) = 0, whence α = −(e1, E2)/(e1, e1),

and obviously α = 0 if E1 and E2 are orthogonal; in that case e2 = E2, and e2 ≠ 0, because it is a basis vector.

Next, put e3 = E3 + βe1 + γe2 and choose β and γ so that e3 is orthogonal to both e1 and e2, i.e. (e1, e3) = 0 and (e2, e3) = 0. Taking into account that (e1, e2) = 0, we get

β = −(e1, E3)/(e1, e1), γ = −(e2, E3)/(e2, e2).

Obviously, β = γ = 0 if e1 and e2 are orthogonal to the vector E3; in that case one should take e3 = E3. The vector e3 ≠ 0, because E1, E2 and E3 are linearly independent.

In addition, it follows from the above reasoning that e3 cannot be represented as a linear combination of the vectors e1 and e2; hence the vectors e1, e2, e3 are linearly independent and pairwise orthogonal, and therefore they can be taken as a basis of the Euclidean space E3. It remains only to normalize the constructed basis, for which it suffices to divide each of the constructed vectors by its length. Then we get

e1⁰ = e1/|e1|, e2⁰ = e2/|e2|, e3⁰ = e3/|e3|.

So we have constructed an orthonormal basis. The theorem is proved.

The method applied here for constructing an orthonormal basis from an arbitrary one is called the orthogonalization process. Note that in the course of the proof we established that pairwise orthogonal vectors are linearly independent. Moreover, if e1⁰, e2⁰, …, en⁰ is an orthonormal basis in En, then for any vector x ∈ En there is a unique expansion

x = x1e1⁰ + x2e2⁰ + … + xnen⁰, (*)

where x1, x2, …, xn are the coordinates of the vector x in this orthonormal basis.

Since (ei⁰, ej⁰) equals 1 for i = j and 0 for i ≠ j, multiplying the equality (*) scalarly by ei⁰, we get xi = (x, ei⁰).

In what follows, we will consider only orthonormal bases, and therefore, for ease of writing, we will drop the superscript zeros on the basis vectors.
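The coordinate formula xi = (x, ei) can be illustrated as follows; the rotated orthonormal basis of the plane used here is a hypothetical example:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# A hypothetical orthonormal basis of the plane (the standard basis rotated 45°).
s = 1 / math.sqrt(2)
e1 = [s, s]
e2 = [-s, s]

x = [3.0, 1.0]
# In an orthonormal basis, the coordinates are simply x_i = (x, e_i).
x1, x2 = dot(x, e1), dot(x, e2)

# Reconstruct x from its expansion x = x1*e1 + x2*e2.
rebuilt = [x1 * a + x2 * b for a, b in zip(e1, e2)]
print(rebuilt)  # approximately [3.0, 1.0]
```

No linear system needs to be solved: orthonormality reduces coordinate extraction to n scalar products.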

Consider a linear space L. Along with the operations of adding vectors and multiplying a vector by a number, we introduce one more operation in this space, the operation of scalar multiplication.

Definition 1

If to each pair of vectors a, b ∈ L there corresponds, by some rule, a real number, denoted by the symbol (a, b) and satisfying the conditions

1. (a, b) = (b, a),

2. (a + c, b) = (a, b) + (c, b),

3. (αa, b) = α(a, b),

4. (a, a) > 0 ∀ a ≠ 0 and (a, a) = 0 ⟺ a = 0,

then this rule is called scalar multiplication, and the number (a, b) is called the scalar product of the vector a and the vector b.

The number (a, a) is called the scalar square of the vector a and is denoted a², i.e. a² = (a, a).

Conditions 1)–4) are called the properties of the scalar product: the first is the property of symmetry (commutativity), the second and third are the properties of linearity, the fourth is positive definiteness, and the condition (a, a) = 0 ⟺ a = 0 is called the condition of non-degeneracy of the scalar product.

Definition 2

A Euclidean space is a real linear space on which the operation of scalar multiplication of vectors is introduced.

Euclidean space is denoted by E.

Properties 1)–4) of the scalar product are called the axioms of Euclidean space.

Consider examples of Euclidean spaces.

· The spaces V2 and V3 of geometric vectors are Euclidean spaces, because on them the scalar product satisfying all the axioms was defined as (a, b) = |a|·|b|·cos φ.

In the linear space Rn(x) of polynomials of degree at most n, the scalar multiplication of the vectors p = a0 + a1x + … + anxⁿ and q = b0 + b1x + … + bnxⁿ can be introduced by the formula

(p, q) = a0b0 + a1b1 + … + anbn.

Let us check that the introduced operation has the properties of the scalar product (symmetry and homogeneity are obvious from the formula).

2) Consider (p + r, q). Let r = c0 + c1x + … + cnxⁿ; then

(p + r, q) = (a0 + c0)b0 + (a1 + c1)b1 + … + (an + cn)bn = (a0b0 + a1b1 + … + anbn) + (c0b0 + c1b1 + … + cnbn) = (p, q) + (r, q).

4) (p, p) = a0² + a1² + … + an². But the sum of the squares of any numbers is always greater than or equal to zero, and is equal to zero if and only if all these numbers are equal to zero. Hence (p, p) > 0 if the polynomial is not identically equal to zero (that is, among its coefficients there are non-zero ones), and (p, p) = 0 ⟺ p = 0.

Thus, all properties of the scalar product are satisfied, which means that the equality (p, q) = a0b0 + a1b1 + … + anbn defines a scalar multiplication of vectors in the space Rn(x), and this space itself is Euclidean.
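A quick numerical check of these axioms, representing a polynomial by its coefficient list (a sketch, assuming the coefficient-wise scalar product introduced above):

```python
# Polynomials are represented by their coefficient lists [a0, a1, ..., an].
def pdot(p, q):
    """Scalar product (p, q) = a0*b0 + a1*b1 + ... + an*bn."""
    return sum(a * b for a, b in zip(p, q))

def padd(p, q):
    """Coefficient-wise sum of two polynomials of equal degree bound."""
    return [a + b for a, b in zip(p, q)]

p = [1, -2, 3]   # 1 - 2x + 3x^2
q = [0, 4, 1]    # 4x + x^2
r = [2, 1, -1]   # 2 + x - x^2

assert pdot(p, q) == pdot(q, p)                          # symmetry
assert pdot(padd(p, r), q) == pdot(p, q) + pdot(r, q)    # linearity (property 2)
assert pdot(p, p) > 0                                    # positive definiteness
assert pdot([0, 0, 0], [0, 0, 0]) == 0                   # non-degeneracy at zero
```

The three concrete polynomials here are arbitrary test data; any choice would do.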

In the linear space ℝⁿ, the scalar product of a vector a = (a1, …, an) and a vector b = (b1, …, bn) can be defined by the formula

(a, b) = a1b1 + a2b2 + … + anbn.

Let us show that scalar multiplication can be defined in any finite-dimensional linear space, i.e., any such space can be made into a Euclidean space. To do this, take an arbitrary basis (a1, a2, …, an) in the space Ln. Let in this basis

a = α1a1 + α2a2 + … + αnan and b = β1a1 + β2a2 + … + βnan.

Put

(a, b) = α1β1 + α2β2 + … + αnβn. (*)

Let us check that the properties of the scalar product hold:

1) (a, b) = α1β1 + α2β2 + … + αnβn = β1α1 + β2α2 + … + βnαn = (b, a);

2) if c = γ1a1 + γ2a2 + … + γnan, then

a + c = (α1 + γ1)a1 + (α2 + γ2)a2 + … + (αn + γn)an,

and therefore

(a + c, b) = (α1 + γ1)β1 + (α2 + γ2)β2 + … + (αn + γn)βn = (α1β1 + … + αnβn) + (γ1β1 + … + γnβn) = (a, b) + (c, b);

3) (λa, b) = (λα1)β1 + (λα2)β2 + … + (λαn)βn = λ(α1β1) + λ(α2β2) + … + λ(αnβn) = λ(a, b);

4) (a, a) = α1² + α2² + … + αn² > 0 ∀ a ≠ 0, and (a, a) = 0 if and only if all αi = 0, i.e. a = 0.

Therefore, the equality (a, b) = α1β1 + α2β2 + … + αnβn defines a scalar product in Ln.

Note that the equality (*) gives, for different bases of the space, different values of the scalar product of the same vectors a and b. Moreover, the scalar product can be defined in some fundamentally different way. Therefore, we will call the definition of the scalar product by the equality (*) traditional.

Definition 3

The norm of a vector a is the arithmetic value of the square root of the scalar square of this vector.

The norm of a vector is denoted ||a||, or [a], or |a|. So, by definition,

||a|| = √(a, a).

The following properties of the norm hold:

1. ||a|| = 0 ⟺ a = 0.

2. ||αa|| = |α|·||a|| ∀ α ∈ ℝ.

3. |(a, b)| ≤ ||a||·||b|| (the Cauchy–Bunyakovsky inequality).

4. ||a + b|| ≤ ||a|| + ||b|| (the triangle inequality).

In the Euclidean spaces V2 and V3 with the traditionally specified scalar multiplication, the norm of a vector a is its length:

||a|| = |a|.

In the Euclidean space ℝⁿ with the scalar multiplication (a, b) = a1b1 + … + anbn, the norm of a vector is equal to

||a|| = √(a1² + a2² + … + an²).

Definition 4

A vector a of a Euclidean space is called normalized (or a unit vector) if its norm is equal to one: ||a|| = 1.

If a ≠ 0, then the vectors a/||a|| and −a/||a|| are unit vectors. Finding for a given vector a the corresponding unit vector a/||a|| is called the normalization of the vector a.

It follows from the Cauchy–Bunyakovsky inequality that

−1 ≤ (a, b)/(||a||·||b||) ≤ 1,

so the ratio (a, b)/(||a||·||b||) can be thought of as the cosine of some angle.

Definition 5

The angle φ (0 ≤ φ ≤ π) for which cos φ = (a, b)/(||a||·||b||) is called the angle between the vectors a and b of a Euclidean space.

Thus, the angle between the vectors a and b of a Euclidean space is defined by the formula

φ = arccos((a, b)/(||a||·||b||)).

Note that the introduction of scalar multiplication in linear space makes it possible to perform in this space "measurements" similar to those that are possible in the space of geometric vectors, namely, the measurement of "lengths" of vectors and "angles" between vectors, while choosing the form of specifying scalar multiplication is analogous to choosing a "scale" for such measurements. This makes it possible to extend the methods of geometry associated with measurements to arbitrary linear spaces, thereby significantly strengthening the means of studying mathematical objects encountered in algebra and analysis.

Definition 6

Vectors a and b of a Euclidean space are called orthogonal if their scalar product is zero: (a, b) = 0.

Note that if at least one of the vectors is zero, then this equality holds. Indeed, since the zero vector can be represented as 0 = 0·a, then (0, b) = (0·a, b) = 0·(a, b) = 0. Therefore, the zero vector is orthogonal to any vector of a Euclidean space.

Definition 7

A system of vectors a1, a2, …, am of a Euclidean space is called orthogonal if these vectors are pairwise orthogonal, i.e.

(ai, aj) = 0 ∀ i ≠ j, i, j = 1, 2, …, m.

A system of vectors a1, a2, …, am of a Euclidean space is called orthonormal (or orthonormalized) if it is orthogonal and each of its vectors is normalized, i.e.

(ai, aj) = 0 for i ≠ j and (ai, ai) = 1, i, j = 1, 2, …, m.

An orthogonal system of vectors has the following properties:

1. If a1, a2, …, am is an orthogonal system of nonzero vectors, then the system obtained by normalizing each of the vectors of this system is also orthogonal.

2. An orthogonal system of nonzero vectors is linearly independent.

Any orthogonal, and hence any orthonormal, system of nonzero vectors is linearly independent; but can such a system form a basis of a given space? This question is answered by the following theorem.

Theorem 3

In every n-dimensional Euclidean space (n ≥ 1) there exists an orthonormal basis.

Proof

To prove the theorem means to find this basis. Therefore, we will proceed as follows.

In the given Euclidean space, consider an arbitrary basis (a1, a2, …, an), construct from it an orthogonal basis (g1, g2, …, gn), and then normalize the vectors of this basis, i.e., put ei = gi/||gi||. Then the system of vectors (e1, e2, …, en) forms an orthonormal basis.

So let B: (a1, a2, …, an) be an arbitrary basis of the space under consideration.

1. Put

g1 = a1, g2 = a2 + λg1

and choose the coefficient λ so that the vector g2 is orthogonal to the vector g1, i.e. (g1, g2) = 0. Since

(g1, g2) = (g1, a2) + λ(g1, g1),

from this equality we find λ = −(g1, a2)/(g1, g1).

Then the vector g2 = a2 − [(g1, a2)/(g1, g1)]·g1 is orthogonal to the vector g1.

2. Put

g3 = a3 + λg1 + μg2

and choose λ and μ so that the vector g3 is orthogonal to both g1 and g2, i.e. (g1, g3) = 0 and (g2, g3) = 0. We find

(g1, g3) = (g1, a3) + λ(g1, g1), (g2, g3) = (g2, a3) + μ(g2, g2).

From the equalities (g1, g3) = 0 and (g2, g3) = 0 we find, respectively, λ = −(g1, a3)/(g1, g1) and μ = −(g2, a3)/(g2, g2).

So the vector g3 = a3 − [(g1, a3)/(g1, g1)]·g1 − [(g2, a3)/(g2, g2)]·g2 is orthogonal to the vectors g1 and g2.

Similarly, we construct the vector

g4 = a4 − [(g1, a4)/(g1, g1)]·g1 − [(g2, a4)/(g2, g2)]·g2 − [(g3, a4)/(g3, g3)]·g3.

It is easy to check that (g1, g4) = 0, (g2, g4) = 0, (g3, g4) = 0. In general,

gk = ak − [(g1, ak)/(g1, g1)]·g1 − [(g2, ak)/(g2, g2)]·g2 − … − [(g(k−1), ak)/(g(k−1), g(k−1))]·g(k−1), k = 2, 3, …, n.

3. Normalize the resulting system of vectors (g1, g2, …, gn), i.e., put ei = gi/||gi||.

4. Write down the orthonormal basis (e1, e2, …, en).
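The four steps above can be sketched as a small routine; the starting basis of ℝ³ below is a hypothetical example:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(basis):
    """Orthogonalize and then normalize a basis (the construction above)."""
    g = []
    for a in basis:
        # g_k = a_k - sum_i ((g_i, a_k) / (g_i, g_i)) * g_i
        v = a[:]
        for gi in g:
            c = dot(gi, a) / dot(gi, gi)
            v = [vj - c * gj for vj, gj in zip(v, gi)]
        g.append(v)
    # normalization step: e_i = g_i / ||g_i||
    return [[vj / math.sqrt(dot(v, v)) for vj in v] for v in g]

# Hypothetical starting basis of R^3.
e = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])

# Check orthonormality: (e_i, e_j) = 1 for i = j and 0 for i != j.
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert abs(dot(e[i], e[j]) - expected) < 1e-9
```

Note that the starting vectors must be linearly independent, exactly as required of a basis; otherwise some gk would be the zero vector and the normalization would divide by zero.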

In what follows, an orthonormal basis will be denoted

B0: (e1, e2, …, en).

We note the following orthonormal basis properties.

1) In an orthonormal basis, the scalar product of any two vectors of the space is equal to the sum of the products of their corresponding coordinates: (a, b) = a1b1 + a2b2 + … + anbn.

2) If in some basis the scalar product of any two vectors is equal to the sum of the products of their corresponding coordinates, then this basis is orthonormal.

Thus, any basis of a Euclidean space is orthonormal with respect to the scalar product defined as the sum of the products of the coordinates of vectors in this basis.

3) In an orthonormal basis, the norm of a vector is equal to the square root of the sum of the squares of its coordinates:

||a|| = √(a1² + a2² + … + an²).

Definition 8.

A set M is called a metric space if there is a rule according to which to any two of its elements x and y there is assigned some real number ρ(x, y), called the distance between these elements, satisfying the conditions:

1. ρ(x, y) = ρ(y, x);

2. ρ(x, y) ≥ 0 for any x and y, and ρ(x, y) = 0 if and only if x = y;

3. ρ(x, y) ≤ ρ(x, z) + ρ(y, z) for any three elements x, y, z ∈ M.

The elements of a metric space are called points.

An example of a metric space is the space ℝⁿ; in it, the distance between points (vectors of this space) can be defined by the formula ρ(x, y) = ||x − y||.
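A sketch verifying the metric axioms for ρ(x, y) = ||x − y|| on random points of ℝ³:

```python
import math
import random

def dist(x, y):
    """rho(x, y) = ||x - y|| in R^n."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

random.seed(1)
pts = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(30)]
for x in pts:
    for y in pts:
        assert dist(x, y) == dist(y, x)            # symmetry
        assert dist(x, y) >= 0                     # non-negativity
        for z in pts[:10]:
            # triangle inequality, with a tiny slack for rounding
            assert dist(x, y) <= dist(x, z) + dist(y, z) + 1e-9
print("metric axioms verified on random points")
```

As with the Cauchy–Bunyakovsky check earlier, this is only a sanity test, not a proof; the axioms themselves follow from the norm properties established above.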