The matrix method for solving SLAEs: an example of solving a system using the inverse matrix. How to find the inverse of a matrix

The inverse of a square matrix A is the matrix A⁻¹ whose product with A (on either side) is the identity matrix; the pseudoinverse matrix is similar to the inverse in many of its properties.


Inverse Matrix Properties

  • $\det A^{-1} = \frac{1}{\det A}$, where $\det$ denotes the determinant.
  • $(AB)^{-1} = B^{-1}A^{-1}$ for two square invertible matrices $A$ and $B$.
  • $(A^{T})^{-1} = (A^{-1})^{T}$, where $(\cdot)^{T}$ denotes the transposed matrix.
  • $(kA)^{-1} = k^{-1}A^{-1}$ for any coefficient $k \neq 0$.
  • $E^{-1} = E$.
  • If a system of linear equations $Ax = b$ must be solved (where $b$ is a nonzero vector and $x$ is the required vector) and $A^{-1}$ exists, then $x = A^{-1}b$ (see the numerical sketch below). Otherwise, either the dimension of the solution space is greater than zero, or there are no solutions at all.
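
As a quick numerical check of these properties, here is a minimal sketch assuming NumPy is available; the matrices A, B and the vector b are arbitrary illustrative values, not taken from the text:

    # Sketch: numerically checking the inverse-matrix properties with NumPy.
    # The matrices A, B and the vector b are arbitrary example values.
    import numpy as np

    A = np.array([[2.0, 1.0], [5.0, 3.0]])
    B = np.array([[1.0, 4.0], [2.0, 9.0]])
    b = np.array([1.0, 2.0])

    A_inv = np.linalg.inv(A)

    print(np.isclose(np.linalg.det(A_inv), 1.0 / np.linalg.det(A)))    # det A^-1 = 1/det A
    print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ A_inv)) # (AB)^-1 = B^-1 A^-1
    print(np.allclose(np.linalg.inv(A.T), A_inv.T))                    # (A^T)^-1 = (A^-1)^T
    print(np.allclose(np.linalg.inv(3.0 * A), A_inv / 3.0))            # (kA)^-1 = k^-1 A^-1

    x = A_inv @ b                      # solving A x = b via x = A^-1 b
    print(np.allclose(A @ x, b))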

Methods for finding the inverse matrix

If the matrix is ​​invertible, then you can use one of the following methods to find the inverse of the matrix:

Exact (direct) methods

Gauss-Jordan method

Take two matrices: the given matrix A and the identity matrix E. Reduce the matrix A to the identity matrix by the Gauss–Jordan method, applying transformations to rows (transformations to columns may be used instead, but the two kinds must not be mixed). After applying each operation to the first matrix, apply the same operation to the second. When the reduction of the first matrix to the identity form is complete, the second matrix equals A⁻¹.

When using the Gauss method, the first matrix is multiplied from the left by one of the elementary matrices $\Lambda_i$ (a transvection or a diagonal matrix with ones on the main diagonal except in one position):

$\Lambda_1 \cdot \dots \cdot \Lambda_n \cdot A = \Lambda A = E \Rightarrow \Lambda = A^{-1}.$

$\Lambda_m = \begin{bmatrix} 1 & \dots & 0 & -a_{1m}/a_{mm} & 0 & \dots & 0 \\ & & & \dots & & & \\ 0 & \dots & 1 & -a_{m-1,m}/a_{mm} & 0 & \dots & 0 \\ 0 & \dots & 0 & 1/a_{mm} & 0 & \dots & 0 \\ 0 & \dots & 0 & -a_{m+1,m}/a_{mm} & 1 & \dots & 0 \\ & & & \dots & & & \\ 0 & \dots & 0 & -a_{nm}/a_{mm} & 0 & \dots & 1 \end{bmatrix}.$

After all the operations have been applied, the second matrix will be equal to $\Lambda$, that is, it will be the desired inverse. The complexity of the algorithm is $O(n^3)$.
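
Below is a minimal sketch of the Gauss–Jordan procedure in Python (assuming NumPy; partial pivoting is added for numerical stability, which the description above does not strictly require):

    # Sketch: Gauss-Jordan inversion. Every row operation applied to A is applied
    # to an identity matrix E alongside it; when A becomes E, E has become A^-1.
    import numpy as np

    def gauss_jordan_inverse(A):
        A = A.astype(float).copy()
        n = A.shape[0]
        E = np.eye(n)
        for col in range(n):
            # choose a pivot row (partial pivoting) and swap it into place
            pivot = col + np.argmax(np.abs(A[col:, col]))
            if np.isclose(A[pivot, col], 0.0):
                raise ValueError("matrix is singular")
            A[[col, pivot]] = A[[pivot, col]]
            E[[col, pivot]] = E[[pivot, col]]
            # scale the pivot row so the pivot element becomes 1
            factor = A[col, col]
            A[col] /= factor
            E[col] /= factor
            # eliminate the column entries in all other rows
            for row in range(n):
                if row != col:
                    m = A[row, col]
                    A[row] -= m * A[col]
                    E[row] -= m * E[col]
        return E

    A = np.array([[2.0, 1.0], [5.0, 3.0]])
    print(gauss_jordan_inverse(A))                       # [[ 3. -1.], [-5.  2.]]
    print(np.allclose(gauss_jordan_inverse(A) @ A, np.eye(2)))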

Using the matrix of algebraic complements

The matrix inverse to a matrix $A$ can be represented as

$A^{-1} = \frac{\operatorname{adj}(A)}{\det(A)},$

where $\operatorname{adj}(A)$ is the adjugate (adjoint) matrix.

The complexity of the algorithm depends on the complexity $O_{\det}$ of the algorithm used to calculate the determinant and is equal to $O(n^2) \cdot O_{\det}$.
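
A sketch of this formula, assuming NumPy and using np.linalg.det for the cofactor determinants, so it is only practical for small matrices:

    # Sketch: inverse via the matrix of algebraic complements (cofactors),
    # A^-1 = adj(A) / det(A), where adj(A) is the transposed cofactor matrix.
    import numpy as np

    def inverse_via_adjugate(A):
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        det_A = np.linalg.det(A)
        if np.isclose(det_A, 0.0):
            raise ValueError("matrix is singular, no inverse exists")
        cof = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                # minor: delete row i and column j, then apply the sign (-1)^(i+j)
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return cof.T / det_A            # adj(A) = cof(A)^T

    A = [[2.0, 1.0], [5.0, 3.0]]
    print(inverse_via_adjugate(A))      # [[ 3. -1.], [-5.  2.]]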

Using LU / LUP decomposition

The matrix equation $AX = I_n$ for the inverse matrix $X$ can be viewed as a collection of $n$ systems of the form $Ax = b$. Denote the $i$-th column of the matrix $X$ by $X_i$; then $AX_i = e_i$ for $i = 1, \ldots, n$, because the $i$-th column of the matrix $I_n$ is the unit vector $e_i$. In other words, finding the inverse matrix reduces to solving $n$ equations with the same matrix and different right-hand sides. After the LUP decomposition has been computed (time $O(n^3)$), solving each of the $n$ equations takes time $O(n^2)$, so this part of the work also takes time $O(n^3)$.

If the matrix $A$ is nonsingular, then the LUP decomposition $PA = LU$ can be computed for it. Let $PA = B$ and $B^{-1} = D$. Then from the properties of the inverse matrix we can write $D = U^{-1}L^{-1}$. If we multiply this equality by $U$ and $L$, we obtain two equalities of the form $UD = L^{-1}$ and $DL = U^{-1}$. The first of these equalities is a system of $n^2$ linear equations, for $\frac{n(n+1)}{2}$ of which the right-hand sides are known (from the properties of triangular matrices). The second is also a system of $n^2$ linear equations, for $\frac{n(n-1)}{2}$ of which the right-hand sides are known (also from the properties of triangular matrices). Together they form a system of $n^2$ equalities. Using these equalities, we can recursively determine all $n^2$ elements of the matrix $D$. Then from the equality $(PA)^{-1} = A^{-1}P^{-1} = B^{-1} = D$ we obtain the equality $A^{-1} = DP$.

When the LU decomposition is used instead, no permutation of the columns of the matrix D is required, but the solution may diverge even if the matrix A is nonsingular.

The complexity of the algorithm is O (n³).
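
A sketch of this approach using SciPy's LU routines (assuming scipy is installed): the factorization is computed once, then each of the n systems A·X_i = e_i is solved against it.

    # Sketch: inversion through the LUP decomposition. Factor PA = LU once,
    # then solve A x = e_i for every column e_i of the identity matrix.
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def inverse_via_lup(A):
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        lu, piv = lu_factor(A)                   # O(n^3) factorization
        X = np.empty((n, n))
        for i in range(n):
            e_i = np.zeros(n)
            e_i[i] = 1.0
            X[:, i] = lu_solve((lu, piv), e_i)   # O(n^2) per right-hand side
        return X

    A = np.array([[4.0, 3.0], [6.0, 3.0]])
    print(np.allclose(inverse_via_lup(A) @ A, np.eye(2)))   # True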

Iterative methods

Schultz methods

$\begin{cases} \Psi_k = E - AU_k, \\ U_{k+1} = U_k \sum\limits_{i=0}^{n} \Psi_k^i. \end{cases}$

Error estimation

Choosing an initial guess

The problem of choosing an initial approximation in the iterative matrix inversion processes considered here does not allow treating them as independent universal methods competing with direct inversion methods based, for example, on the LU decomposition of matrices. There are some recommendations for choosing $U_0$ that ensure the fulfillment of the condition $\rho(\Psi_0) < 1$ (the spectral radius of the matrix is less than one), which is necessary and sufficient for the convergence of the process. However, in this case, first, one needs to know from above a bound for the spectrum of the inverted matrix $A$ or of the matrix $AA^T$ (namely, if $A$ is a symmetric positive definite matrix and $\rho(A) \leq \beta$, then one can take $U_0 = \alpha E$, where $\alpha \in \left(0, \frac{2}{\beta}\right)$; if $A$ is an arbitrary nonsingular matrix and $\rho(AA^T) \leq \beta$, then one takes $U_0 = \alpha A^T$, where also $\alpha \in \left(0, \frac{2}{\beta}\right)$; one can of course simplify the situation and, using the fact that $\rho(AA^T) \leq \|AA^T\|$, put $U_0 = \frac{A^T}{\|AA^T\|}$). Second, with such a choice of the initial matrix there is no guarantee that $\|\Psi_0\|$ will be small (it may even turn out that $\|\Psi_0\| > 1$), and a high order of the convergence rate will not show up immediately.
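
A sketch of the Schultz iteration (assuming NumPy), using the initial guess U_0 = A^T / ||A A^T|| recommended above; the parameter `order` truncates the sum over powers of Ψ_k:

    # Sketch: Schultz iteration. Psi_k = E - A U_k,
    # U_{k+1} = U_k (E + Psi_k + ... + Psi_k^order);
    # order = 1 gives the classical step U_{k+1} = U_k (2E - A U_k).
    import numpy as np

    def schultz_inverse(A, order=1, tol=1e-12, max_iter=100):
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        E = np.eye(n)
        U = A.T / np.linalg.norm(A @ A.T)   # recommended initial guess
        for _ in range(max_iter):
            Psi = E - A @ U
            if np.linalg.norm(Psi) < tol:
                break
            S = E.copy()                    # accumulate E + Psi + ... + Psi^order
            P = E.copy()
            for _ in range(order):
                P = P @ Psi
                S = S + P
            U = U @ S
        return U

    A = np.array([[2.0, 1.0], [5.0, 3.0]])
    print(np.allclose(schultz_inverse(A) @ A, np.eye(2)))   # True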

Examples

2x2 matrix

$\mathbf{A}^{-1} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{\det(\mathbf{A})} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \frac{1}{ad-bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$

Inversion of a 2x2 matrix is possible only if $ad - bc = \det A \neq 0$.
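
A direct transcription of the 2x2 formula, as a sketch in plain Python with no libraries:

    # Sketch: the closed-form inverse of a 2x2 matrix [[a, b], [c, d]].
    def inverse_2x2(a, b, c, d):
        det = a * d - b * c
        if det == 0:
            raise ValueError("ad - bc = 0: the matrix has no inverse")
        return [[ d / det, -b / det],
                [-c / det,  a / det]]

    print(inverse_2x2(2, 1, 5, 3))   # [[3.0, -1.0], [-5.0, 2.0]]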

Let a square matrix be given. Find the inverse of the matrix.

The first way. Theorem 4.1 on the existence and uniqueness of the inverse matrix indicates one of the ways of finding it.

1. Calculate the determinant of the matrix. If it equals zero, the inverse matrix does not exist (the matrix is degenerate).

2. Construct a matrix from the algebraic complements of the elements of the matrix.

3. Transpose the matrix of algebraic complements to obtain the adjugate (adjoint) matrix.

4. Find the inverse matrix by formula (4.1): divide all the elements of the adjoint matrix by the determinant.

Second way. Elementary transformations can be used to find the inverse matrix.

1. Construct a block matrix by appending to the given matrix an identity matrix of the same order.

2. Using elementary transformations performed on the rows of the matrix, bring its left block to the simplest form. The block matrix is then reduced to a form in which the right block is the square matrix obtained from the identity matrix as a result of the transformations.

3. If the left block has become the identity matrix, then the right block is equal to the inverse of the matrix. If the left block cannot be reduced to the identity matrix, the matrix has no inverse.

Indeed, with the help of elementary row transformations we can reduce the left block of the matrix to a simplified form (see Fig. 1.5). The block matrix is then transformed to a form whose right block is a matrix of elementary transformations satisfying the corresponding equality. If the matrix is nondegenerate, then, according to item 2 of Remarks 3.3, its simplified form coincides with the identity matrix, and it follows from that equality that the right block is the inverse matrix. If the matrix is degenerate, then its simplified form differs from the identity matrix, and the matrix has no inverse.

11. Matrix equations and their solution. Matrix form of SLAE notation. The matrix method (inverse matrix method) for solving the SLAE and the conditions for its applicability.

Matrix equations are equations of the form A * X = C, X * A = C, or A * X * B = C, where the matrices A, B, C are known and the matrix X is unknown. If the matrices A and B are nondegenerate, the solutions of these equations are written in the corresponding form: X = A⁻¹ * C, X = C * A⁻¹, X = A⁻¹ * C * B⁻¹.

Matrix notation for systems of linear algebraic equations. Several matrices can be associated with each SLAE; moreover, the SLAE itself can be written in the form of a matrix equation. For SLAE (1), consider the following matrices:

The matrix A is called the system matrix. The elements of this matrix are the coefficients of the given SLAE.

The matrix A˜ is called the extended (augmented) matrix of the system. It is obtained by appending to the system matrix a column containing the free terms b1, b2, ..., bm. Usually this column is separated by a vertical line for clarity.

The column matrix B is called the matrix of free terms, and the column matrix X is the matrix of unknowns.

Using the above notation, SLAE (1) can be written in the form of a matrix equation: A⋅X = B.

Note

The matrices associated with the system can be written in different ways: it all depends on the order of the variables and equations of the considered SLAE. But in any case, the order of the unknowns in each equation of a given SLAE must be the same.

The matrix method is suitable for solving SLAEs in which the number of equations coincides with the number of unknown variables and the determinant of the main matrix of the system is nonzero. If the system contains more than three equations, finding the inverse matrix requires significant computational effort, so in that case it is advisable to use the Gauss method instead.
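
A sketch of the matrix method for a small SLAE, assuming NumPy; the system itself is an arbitrary illustrative one:

    # Sketch: solving a SLAE A*X = B by the inverse matrix method.
    import numpy as np

    A = np.array([[2.0, 1.0, 1.0],
                  [1.0, 3.0, 2.0],
                  [1.0, 0.0, 0.0]])   # system matrix
    B = np.array([4.0, 5.0, 6.0])     # column of free terms

    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("det A = 0: the matrix method is not applicable")

    X = np.linalg.inv(A) @ B          # X = A^-1 * B
    print(X)                          # column of unknowns
    print(np.allclose(A @ X, B))      # check: A*X = B

For larger systems, np.linalg.solve, which factorizes A instead of inverting it explicitly, is usually preferred, in line with the recommendation above to switch to the Gauss method.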

12. Homogeneous SLAEs, conditions for the existence of their nonzero solutions. Properties of particular solutions of homogeneous SLAEs.

A linear equation is called homogeneous if its free term is equal to zero, and inhomogeneous otherwise. A system consisting of homogeneous equations is called homogeneous and has the general form:

13. The concept of linear independence and dependence of particular solutions of a homogeneous SLAE. The fundamental system of solutions (FSR) and how to find it. Representation of the general solution of a homogeneous SLAE in terms of the FSR.

A system of functions y1(x), y2(x), …, yn(x) is called linearly dependent on the interval (a, b) if there exists a set of constant coefficients, not all zero simultaneously, such that the linear combination of these functions is identically zero on (a, b). If such an equality is possible only when all the coefficients are zero, the system of functions y1(x), y2(x), …, yn(x) is called linearly independent on the interval (a, b). In other words, the functions y1(x), y2(x), …, yn(x) are linearly dependent on (a, b) if there exists a nontrivial linear combination of them identically equal to zero on (a, b), and linearly independent on (a, b) if only their trivial linear combination is identically zero on (a, b).

A fundamental system of solutions (FSR) of a homogeneous SLAE is a basis of the system of its solution columns.

The number of elements in the FSR is equal to the number of unknowns in the system minus the rank of the system matrix. Any solution to the original system is a linear combination of FSR solutions.
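
A sketch of finding such a set of solutions numerically, assuming NumPy; the basis of the null space is taken from the SVD, so the basis vectors are orthonormal rather than the "standard" ones obtained from free variables:

    # Sketch: a basis of solutions of the homogeneous system A*x = 0 (an FSR).
    # Its size equals (number of unknowns) - rank(A).
    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])          # rank 1, 3 unknowns -> 2 basis solutions

    rank = np.linalg.matrix_rank(A)
    _, s, Vt = np.linalg.svd(A)
    null_basis = Vt[rank:]                   # rows spanning the null space

    print(A.shape[1] - rank)                 # number of FSR elements: 2
    print(np.allclose(A @ null_basis.T, 0))  # each basis vector solves A*x = 0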

Theorem

The general solution of an inhomogeneous SLAE is equal to the sum of a particular solution of the inhomogeneous SLAE and the general solution of the corresponding homogeneous SLAE.

1. If some columns are solutions of a homogeneous system of equations, then any linear combination of them is also a solution of the homogeneous system.

Indeed, it follows from the equalities that

i.e., a linear combination of solutions is a solution of the homogeneous system.

2. If the rank of the matrix of a homogeneous system with n unknowns is equal to r, then the system has n − r linearly independent solutions.

Indeed, using formulas (5.13) for the general solution of the homogeneous system, we find particular solutions by giving the free variables the following standard sets of values (each time setting one of the free variables equal to one and the rest equal to zero):

These solutions are linearly independent. Indeed, if we compose a matrix from these columns, its last rows form the identity matrix. Consequently, the minor located in the last rows is nonzero (it equals one), i.e. it is basic, and the rank of the matrix equals the number of its columns. Hence all columns of this matrix are linearly independent (see Theorem 3.4).

Any set of n − r linearly independent solutions of a homogeneous system is called a fundamental system (set) of solutions.

14. Minors of k-th order, the basic minor, the rank of a matrix. Calculation of the rank of a matrix.

A minor of order k of a matrix A is the determinant of one of its square submatrices of order k.

In an m x n matrix A, a minor of order r is called basic if it is nonzero, and all minors of higher order, if they exist, are equal to zero.

The columns and rows of matrix A, at the intersection of which there is a basic minor, are called basic columns and rows of A.

Theorem 1. (On the rank of a matrix). For any matrix, the minor rank is equal to the row rank and is equal to the column rank.

Theorem 2. (On basic minor). Each column of the matrix is ​​decomposed into a linear combination of its base columns.

The rank of the matrix (or the minor rank) is the order of the basic minor, or, in other words, the largest order for which nonzero minors exist. The rank of the zero matrix is ​​considered 0 by definition.

We note two obvious properties of the minor rank.

1) The rank of a matrix does not change when transposed, since when a matrix is ​​transposed, all its submatrices are transposed and the minors do not change.

2) If A′ is a submatrix of the matrix A, then the rank of A′ does not exceed the rank of A, since a nonzero minor of A′ is also a minor of A.
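
A short numerical illustration of the rank and of the two properties just noted, assuming NumPy; the matrix is an arbitrary example with one dependent row:

    # Sketch: the rank of a matrix and its invariance under transposition.
    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0],
                  [5.0, 7.0, 9.0]])          # third row = first + second

    print(np.linalg.matrix_rank(A))          # 2
    print(np.linalg.matrix_rank(A.T))        # 2: rank is unchanged by transposition
    print(np.linalg.matrix_rank(A[:2, :2]))  # 2: a submatrix's rank does not exceed rank(A)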

15. The concept of an n-dimensional arithmetic vector. Equality of vectors. Operations on vectors (addition, subtraction, multiplication by a number, multiplication by a matrix). Linear combination of vectors.

An ordered collection of n real or complex numbers is called an n-dimensional vector. These numbers are called the coordinates of the vector.

Two (non-zero) vectors a and b are equal if they are equidirectional and have the same modulus. All zero vectors are considered equal. In all other cases, the vectors are not equal.

Addition of vectors. There are two ways to add vectors. 1. The parallelogram rule. To add two vectors, place the origins of both at the same point, complete the parallelogram on them, and draw the diagonal of the parallelogram from that same point. This diagonal is the sum of the vectors.

2. The second way to add vectors is the triangle rule. Take the same two vectors and attach the beginning of the second to the end of the first. Now connect the beginning of the first with the end of the second: this is the sum of the vectors. Several vectors can be added by the same rule: attach them one after another, and then connect the beginning of the first with the end of the last.

Subtraction of vectors. The vector −b is directed opposite to the vector b, and their lengths are the same. Now it is clear what vector subtraction is: the difference of the vectors a and b is the sum of the vector a and the vector −b.

Multiplying a vector by a number

When a vector is multiplied by a number k, you get a vector whose length differs from the original length by a factor of |k|. It is codirectional with the original vector if k is greater than zero, and oppositely directed if k is less than zero.

The scalar product of vectors is the product of the lengths of the vectors and the cosine of the angle between them. If the vectors are perpendicular, their dot product is zero. The scalar product can also be expressed in terms of the coordinates of the vectors.

Linear combination of vectors

A linear combination of vectors is a vector of the form α1a1 + α2a2 + … + αnan,

where α1, …, αn are the coefficients of the linear combination. If all the coefficients are equal to zero, the combination is called trivial; otherwise it is nontrivial.

16. Dot product of arithmetic vectors. Vector length and angle between vectors. Orthogonality of vectors.

The scalar product of the vectors a and b is the number a · b = a1b1 + a2b2 + … + anbn.

The dot product is used for: 1) finding the angle between vectors; 2) finding the projection of one vector onto another; 3) calculating the length of a vector; 4) checking the condition that vectors are perpendicular.

The length of the segment AB is the distance between the points A and B. The angle between vectors a and b is the angle α = (a, b), 0 ≤ α ≤ π, through which one vector must be rotated so that its direction coincides with that of the other vector, provided that their origins coincide.

The unit vector of a vector a is the vector of unit length having the same direction as a.
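
A sketch of these computations for two arbitrary example vectors, assuming NumPy:

    # Sketch: dot product, vector length, angle between vectors, orthogonality.
    import numpy as np

    a = np.array([3.0, 4.0])
    b = np.array([-4.0, 3.0])

    dot = np.dot(a, b)                       # scalar product
    len_a = np.linalg.norm(a)                # length of a: 5.0
    cos_angle = dot / (np.linalg.norm(a) * np.linalg.norm(b))
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))

    print(dot)                               # 0.0 -> the vectors are orthogonal
    print(len_a, np.degrees(angle))          # 5.0 90.0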

17. Vector system and its linear combination. The concept of linear dependence and independence of a system of vectors. A theorem on the necessary and sufficient conditions for the linear dependence of a system of vectors.

A system of vectors a1, a2, ..., an is called linearly dependent if there exist numbers λ1, λ2, ..., λn such that at least one of them is nonzero and λ1a1 + λ2a2 + ... + λnan = 0. Otherwise, the system is called linearly independent.

Two vectors a1 and a2 are called collinear if their directions coincide or are opposite.

Three vectors a1, a2 and a3 are called coplanar if they are parallel to some plane.

Geometric criteria for linear dependence:

a) the system (a1, a2) is linearly dependent if and only if the vectors a1 and a2 are collinear.

b) the system (a1, a2, a3) is linearly dependent if and only if the vectors a1, a2, and a3 are coplanar.

Theorem (a necessary and sufficient condition for the linear dependence of a system of vectors).

A system of vectors of a vector space is linearly dependent if and only if one of the vectors of the system is linearly expressed in terms of the other vectors of this system.

Corollary. 1. A system of vectors of a vector space is linearly independent if and only if none of the vectors of the system is linearly expressed in terms of other vectors of this system. 2. A vector system containing a zero vector or two equal vectors is linearly dependent.
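
A sketch of checking linear dependence numerically, assuming NumPy: the vectors are stacked as columns, and the system is dependent exactly when the rank of that matrix is less than the number of vectors.

    # Sketch: a system of vectors is linearly dependent iff
    # rank of the matrix of columns < number of vectors.
    import numpy as np

    def is_linearly_dependent(vectors):
        M = np.column_stack(vectors)
        return np.linalg.matrix_rank(M) < len(vectors)

    a1 = np.array([1.0, 2.0, 3.0])
    a2 = np.array([2.0, 4.0, 6.0])               # collinear with a1
    a3 = np.array([0.0, 1.0, 0.0])

    print(is_linearly_dependent([a1, a2]))       # True  (collinear)
    print(is_linearly_dependent([a1, a3]))       # False (independent)
    print(is_linearly_dependent([a1, a2, a3]))   # True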

Matrix Algebra - Inverse Matrix

inverse matrix

An inverse matrix is a matrix that, when multiplied both on the right and on the left by a given matrix, gives the identity matrix.
Let us denote the matrix inverse to the matrix A by A⁻¹; then, according to the definition, we get:

A A⁻¹ = A⁻¹ A = E,

where E is the identity matrix.
A square matrix is called nonsingular (nondegenerate) if its determinant is nonzero. Otherwise it is called singular (degenerate).

The following theorem holds: every nonsingular matrix has an inverse.

The operation of finding the inverse matrix is called matrix inversion. Consider the matrix inversion algorithm. Let a nonsingular matrix of order n be given:

where Δ = det A ≠ 0.

The algebraic complement of an element $a_{ij}$ of a matrix A of order n is the determinant of the matrix of order (n−1) obtained by deleting the i-th row and the j-th column of the matrix A, taken with the sign $(-1)^{i+j}$:

Let us compose the so-called adjugate (adjoint) matrix:

whose entries are the algebraic complements of the corresponding elements of the matrix A.
Note that the algebraic complements of the elements of the rows of the matrix A are placed in the corresponding columns of the matrix Ã, that is, the matrix is transposed at the same time.
By dividing all the elements of the matrix Ã by Δ, the value of the determinant of the matrix A, we obtain the inverse matrix as a result:

We note a number of special properties of the inverse matrix:
1) for a given matrix A, its inverse matrix is unique;
2) if an inverse matrix exists, then the right inverse and the left inverse coincide with it;
3) a singular (degenerate) square matrix has no inverse matrix.

Basic properties of the inverse matrix:
1) the determinant of the inverse matrix and the determinant of the original matrix are reciprocal: $\det A^{-1} = \frac{1}{\det A}$;

2) the inverse of a product of square matrices is equal to the product of the inverses of the factors, taken in reverse order: $(AB)^{-1} = B^{-1}A^{-1}$;

3) the transpose of the inverse matrix is equal to the inverse of the transposed matrix: $(A^{-1})^{T} = (A^{T})^{-1}$.

EXAMPLE Calculate the inverse of the given matrix.

ALGEBRAIC COMPLEMENTS AND MINORS

Let us have a determinant of the third order: .

The minor corresponding to an element $a_{ij}$ of a third-order determinant is the second-order determinant obtained from the given one by deleting the row and the column at whose intersection the given element stands, i.e. the i-th row and the j-th column. The minor corresponding to a given element $a_{ij}$ is denoted $M_{ij}$.

For example, the minor $M_{12}$ corresponding to the element $a_{12}$ is the determinant obtained by deleting the 1st row and the 2nd column from the given determinant.

Thus, the formula defining the third-order determinant shows that this determinant equals the sum of the products of the elements of the 1st row by the corresponding minors; in this case, the minor corresponding to the element $a_{12}$ is taken with a minus sign, i.e. we can write

$\Delta = a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13}$.   (1)

Similarly, we can introduce definitions of minors for determinants of the second order and higher orders.

Let's introduce one more concept.

The algebraic complement of an element $a_{ij}$ of a determinant is its minor $M_{ij}$ multiplied by $(-1)^{i+j}$.

The algebraic complement of an element $a_{ij}$ is denoted $A_{ij}$.

From the definition we obtain that the connection between the algebraic complement of an element and its minor is expressed by the equality $A_{ij} = (-1)^{i+j} M_{ij}$.

For example,

Example. A determinant is given. Find $A_{13}$, $A_{21}$, $A_{32}$.

It is easy to see that using the algebraic complements of elements, formula (1) can be written in the form:

Similarly to this formula, one can obtain the expansion of the determinant along the elements of any row or column.

For example, the expansion of the determinant along the elements of the 2nd row can be obtained as follows. By property 2 of the determinant, we have:

Let us expand the resulting determinant by the elements of the 1st row.

. (2)

From here, since the second-order determinants in formula (2) are the minors of the elements $a_{21}$, $a_{22}$, $a_{23}$, we obtain the expansion of the determinant along the elements of the 2nd row.

Similarly, one can obtain the expansion of the determinant along the elements of the third row. Using property 1 of determinants (on transposition), one can show that similar expansions are also valid for expansions along the elements of columns.

Thus, the following theorem is true.

Theorem (on the expansion of a determinant in a given row or column). The determinant is equal to the sum of the products of elements of any of its rows (or columns) by their algebraic complements.

All of the above is also true for determinants of any higher order.
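
A sketch of this theorem as a recursive computation in plain Python (expansion along the first row; practical only for small orders):

    # Sketch: determinant by expansion along the first row,
    # det = sum_j a_1j * A_1j, where A_1j = (-1)^(1+j) * M_1j.
    def det(matrix):
        n = len(matrix)
        if n == 1:
            return matrix[0][0]
        total = 0
        for j in range(n):
            # minor M_1j: delete the first row and column j
            minor = [row[:j] + row[j + 1:] for row in matrix[1:]]
            total += (-1) ** j * matrix[0][j] * det(minor)
        return total

    print(det([[1, 2], [3, 4]]))                      # -2
    print(det([[2, 1, 0], [5, 3, 1], [0, 1, 4]]))     # 2*(12-1) - 1*(20-0) + 0 = 2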

Examples.

INVERSE MATRIX

The concept of an inverse matrix is ​​introduced only for square matrices.

If A is a square matrix, then its inverse is the matrix, denoted $A^{-1}$, satisfying the condition $AA^{-1} = A^{-1}A = E$. (This definition is introduced by analogy with the multiplication of numbers.)

The inverse matrix is the matrix $A^{-1}$ such that multiplying the given initial matrix A by it yields the identity matrix E:

$AA^{-1} = A^{-1}A = E$.

Inverse matrix method.

The inverse matrix method is one of the most common matrix methods; it is used to solve systems of linear algebraic equations (SLAE) in cases where the number of unknowns equals the number of equations.

Let there be given a system of n linear equations with n unknowns:

Such a system can be written as a matrix equation A * X = B,

where
A is the system matrix,

X is the column of unknowns,

B is the column of free coefficients.

From this matrix equation, we express X by multiplying both sides of the equation on the left by A⁻¹, which gives:

A⁻¹ * A * X = A⁻¹ * B

Knowing that A⁻¹ * A = E, we get E * X = A⁻¹ * B, or X = A⁻¹ * B.

The next step is to find the inverse matrix A⁻¹ and multiply it by the column of free terms B.

The matrix inverse to the matrix A exists only when det A ≠ 0. Therefore, when solving a SLAE by the inverse matrix method, the first thing to do is find det A. If det A ≠ 0, then the system has exactly one solution, which can be obtained by the inverse matrix method; if det A = 0, then the system cannot be solved by the inverse matrix method.

Finding the inverse matrix.

Sequence of steps for finding the inverse matrix:

  1. Compute the determinant of the matrix A. If the determinant is nonzero, we continue finding the inverse matrix; if it equals zero, the inverse matrix cannot be found.
  2. Find the transposed matrix $A^T$.
  3. Find the algebraic complements and replace every element of the matrix with its algebraic complement.
  4. Assemble the inverse matrix from the algebraic complements: divide all the elements of the resulting matrix by the determinant of the original matrix. The resulting matrix is the desired inverse of the original one.

The algorithm below for finding the inverse matrix is essentially the same as the one above; the difference is only in a few steps: first we determine the algebraic complements, and only after that do we compute the adjoint (union) matrix C.

  1. Determine whether the given matrix is square. If it is not, it cannot have an inverse matrix.
  2. Calculate the determinant of the matrix. If it equals zero, the matrix has no inverse.
  3. Compute the algebraic complements.
  4. Compose the adjoint (allied, adjugate) matrix C.
  5. Compose the inverse matrix from the algebraic complements: divide all elements of the adjoint matrix C by the determinant of the initial matrix. The resulting matrix is the desired inverse with respect to the given one.
  6. Check the result: multiply the initial matrix and the obtained matrix; the result should be the identity matrix.

This is best done with an adjoint matrix.

Theorem: If we append the identity matrix of the same order to the square matrix on the right and, using elementary row transformations, transform the initial matrix on the left into the identity matrix, then the matrix obtained on the right will be the inverse of the initial one.

An example of finding the inverse matrix.

The task. For the given matrix, find the inverse by the method of the adjoined matrix (appending the identity matrix).

Solution. Append to the given matrix A, on the right, the identity matrix of order 2:

Subtract the 2nd row from the 1st:

From the second row, subtract the first row multiplied by 2: