Methods for determining residuals. Systems of linear equations. Field theory problems

Department of Physical Chemistry SFU (RSU)

NUMERICAL METHODS AND PROGRAMMING

Materials for the lecture course

Systems of linear equations

Systems of n linear equations with n unknowns x1, x2, ..., xn are written in general form as follows:

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
. . .
an1 x1 + an2 x2 + ... + ann xn = bn

where aij and bi are given constants. The number n of unknowns is called the order of the system.

A solution of the system is a set of values of the variables x1, x2, ..., xn that simultaneously turns every equation of the system into an identity.

A necessary and sufficient condition for the existence and uniqueness of a solution is the linear independence of the equations or, more precisely, that the determinant composed of the coefficients of the system is nonzero: det(A) ≠ 0.

An equivalent (and very convenient!) notation for a system of linear equations is the matrix notation

(A)(X) = (B),

which is easy to verify using the rule of matrix multiplication: the element at the intersection of the i-th row and the j-th column of the product is the scalar product of the i-th row vector of the first matrix and the j-th column vector of the second matrix.

The coefficients of the unknowns form a square n x n matrix (A); the variables and the free terms of the equations form column vectors of length n, (X) and (B), respectively.

The solution of the system is the vector (X*) that turns this matrix equation into an identity.

To solve systems of linear equations one uses direct (exact) methods, in which the number of arithmetic operations required to find the solution is determined entirely by the order of the system, and iterative (approximate) methods, in which the solution is refined step by step.

The closeness of any vector (X)i to the solution of the system of equations can be estimated by how close the residual vector, computed as

(R)i = (A)(X)i - (B),

is to the zero vector.

To express this measure of closeness as a single number, any vector norm can be used, for example the Euclidean norm, i.e. the length of the vector in n-dimensional space (equivalently, the square root of the scalar product of the vector with itself):

||R|| = sqrt(r1² + r2² + ... + rn²).

Sometimes other vector norms are used: the maximum norm (equal to the largest absolute value of the components of the vector),

||R|| = max_i |ri|,

or the sum norm (equal to the sum of the absolute values of the components of the vector),

||R|| = |r1| + |r2| + ... + |rn|.
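As a minimal sketch (assuming NumPy and an arbitrary illustrative system, not one taken from the text), the residual and the three norms can be computed like this:

```python
import numpy as np

# A small illustrative system A x = b (not from the text).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

x_approx = np.array([0.1, 0.5, 1.2])   # some approximation to the solution

r = A @ x_approx - b                   # residual vector (R) = (A)(X) - (B)

print(np.linalg.norm(r, 2))        # Euclidean norm
print(np.linalg.norm(r, np.inf))   # maximum norm
print(np.linalg.norm(r, 1))        # sum norm
```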

Conditioning of linear algebraic systems

The numerical solution of systems of linear algebraic equations is a problem that constantly arises in mathematical modeling. Both the dimension of the problem and the character of the matrices can vary widely, and the finite accuracy of the computations also affects the result of solving linear systems. In addition, the coefficients of the system itself, the matrix (A) and the free terms (B), may be represented with some error.

Let us give an example. The system of equations

x + 10y = 11,
100x + 1001y = 1101

has, as is easy to verify by substitution, the unique solution x = 1, y = 1.

Suppose that when preparing the system for solution the right-hand side of the first equation was determined with a small absolute error of +0.01, that is, it was taken equal to 11.01 instead of 11.

The unique solution of this perturbed system is already the vector x = 11.01, y = 0.

As is easy to see, in this case the error in the computed values of the variables turns out to be much larger than the error in the coefficient. Problems in which a small change in the initial data drastically changes the result are called ill-conditioned.

Let us consider in general form a system of linear equations in which the vector of free terms (B) is presented with some absolute error (ΔB).

If the vector (X) is the exact solution of the equation with the "exact" right-hand side (B),

(A)(X) = (B),

then, when there is an error (ΔB) in the right-hand side, the solution of the system differs from (X) by some vector (ΔX), which can be written as follows:

(A)(X + ΔX) = (B) + (ΔB).

Expanding the brackets,

(A)(X) + (A)(ΔX) = (B) + (ΔB),

and taking into account the exact equation (A)(X) = (B), we obtain

(A)(ΔX) = (ΔB).

Multiplying both sides of this equality by the matrix inverse to the coefficient matrix, we get

(ΔX) = (A)⁻¹(ΔB),

i.e. the absolute error (ΔX) of the solution vector (X) is equal to the product of the matrix inverse to the matrix of coefficients of the system and the vector of the absolute error (ΔB).

If we pass from the matrices and vectors to the corresponding norms, we find that the norm of the vector (ΔX) is less than or equal to the product of the norm of the inverse matrix and the norm of the error vector:

||ΔX|| ≤ ||A⁻¹|| · ||ΔB||.

Thus, if the norm of the inverse matrix is large, the absolute error of the solution can be much greater than the absolute error of the right-hand sides of the equations of the system.

Let us now estimate how the relative error of the solution is related to the relative error of the right-hand side. To do this, we take the norms of the two relations obtained earlier:

||B|| ≤ ||A|| · ||X||,   ||ΔX|| ≤ ||A⁻¹|| · ||ΔB||.

Multiplying the left-hand sides and the right-hand sides of these inequalities separately (which obviously does not change the direction of the inequality) and dividing both sides by ||X|| · ||B||, we finally obtain

||ΔX|| / ||X|| ≤ ||A|| · ||A⁻¹|| · ||ΔB|| / ||B||.

The quantity cond(A) = ||A|| · ||A⁻¹|| is called the condition number (measure of conditioning) of the matrix A. The degree to which the error of the coefficients of the system of equations affects the error of the resulting solution depends on this quantity. If the condition number is small, the relative error of the solution does not differ much from the relative error of the coefficients; the larger the condition number, the greater the influence of the error of the coefficients on the error of the solution.

A similar analysis can be carried out for the case when the matrix of the system coefficients is specified with an error (ΔA); the condition number appears in that case as well.

For the numerical example considered above,

(A) = | 1    10   |        (A)⁻¹ = | 1001  -10 |
      | 100  1001 |,                | -100    1 |.

If we take, for example, the matrix norm equal to the largest sum of the absolute values of the elements over the columns,

||A|| = max_j Σ_i |aij|,

then for the matrix (A) the norm is 1011, and for the inverse matrix (A)⁻¹ it is 1101. Thus the condition number turns out to be greater than 1,000,000.
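Assuming the example system reconstructed above, a short NumPy check of the perturbation effect and of the condition number might look like this:

```python
import numpy as np

A = np.array([[1.0, 10.0],
              [100.0, 1001.0]])
b = np.array([11.0, 1101.0])
b_perturbed = np.array([11.01, 1101.0])   # +0.01 error in the first right-hand side

print(np.linalg.solve(A, b))              # [1. 1.]
print(np.linalg.solve(A, b_perturbed))    # approximately [11.01, 0.]

# Condition number in the maximum-column-sum (1-) norm used above,
# and in the more common 2-norm for comparison.
print(np.linalg.cond(A, 1))               # 1011 * 1101 = 1113111
print(np.linalg.cond(A, 2))
```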

Direct (exact) methods

Gauss and Gauss-Jordan method

The solution algorithm consists in reducing the augmented matrix of the system of equations to triangular form (Gauss method) or to diagonal form (Gauss-Jordan method).
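A minimal sketch of the Gauss method (forward elimination without pivoting plus back substitution, for illustration only, not production code):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by forward elimination and back substitution (no pivoting)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: reduce A to upper-triangular form.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # Back substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[1.0, 10.0], [100.0, 1001.0]])
b = np.array([11.0, 1101.0])
print(gauss_solve(A, b))   # [1. 1.]
```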

Cramer's method

In this method (provided that the determinant composed of the coefficients of the system is not equal to zero), the values of the variables are determined as follows:

xi = det(Ai) / det(A),   i = 1, 2, ..., n.

Here the denominator contains the determinant of the matrix of the coefficients of the system, and the numerator contains the determinant det(Ai) of the matrix obtained from the coefficient matrix by replacing the i-th column with the column vector of free terms of the system.

For a system written in general form, the numerator for the i-th variable is therefore the determinant

| a11 ... a1,i-1  b1  a1,i+1 ... a1n |
| ...                                |
| an1 ... an,i-1  bn  an,i+1 ... ann |.
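A direct sketch of Cramer's rule (reasonable only for small systems, since it evaluates n + 1 determinants):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule (only sensible for small systems)."""
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("determinant is zero: no unique solution")
    n = len(b)
    x = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b              # replace the i-th column with the free terms
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = np.array([[1.0, 10.0], [100.0, 1001.0]])
b = np.array([11.0, 1101.0])
print(cramer_solve(A, b))   # [1. 1.]
```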

Matrix inversion method

The solution to the system of equations written in matrix form is easy to find if we use the definition of the inverse matrix:

(A)(A)⁻¹ = (A)⁻¹(A) = (1),

where (1) is the identity (unit diagonal) matrix.

Indeed, multiplying both sides of the equation (A)(X) = (B) on the left by the inverse of the matrix of coefficients of the system, (A)⁻¹, we get

(A)⁻¹(A)(X) = (A)⁻¹(B), that is, (X) = (A)⁻¹(B).

Thus, to solve the system it is necessary to invert the matrix of the coefficients of the system and multiply the result by the column vector of free terms.

Despite the simplicity of the notation, the method is computationally rather expensive, since it requires finding the inverse matrix.
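A short sketch contrasting the inverse-matrix formula with a direct solver; in practice numpy.linalg.solve is preferred to explicitly forming the inverse:

```python
import numpy as np

A = np.array([[1.0, 10.0], [100.0, 1001.0]])
b = np.array([11.0, 1101.0])

x_inv = np.linalg.inv(A) @ b      # X = A^{-1} B, as in the text
x_direct = np.linalg.solve(A, b)  # LU-based direct solution, usually cheaper and more accurate

print(x_inv, x_direct)
```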

Approximate (iterative) methods

Simple iteration method, Seidel method

These methods are considered using systems of nonlinear equations as an example.

Minimal residual method

Various methods of searching for extrema can also be used to solve linear systems of equations: the problem of solving the system of equations is replaced by the equivalent problem of finding the extremum of a function of n variables.
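A minimal sketch of this idea (plain gradient descent on the squared residual F(X) = ||AX - B||², with an arbitrary illustrative system and a hand-picked step size, not the classical minimal residual algorithm in full):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.zeros(2)                      # initial guess
for _ in range(200):
    r = A @ x - b                    # residual
    grad = 2 * A.T @ r               # gradient of ||A x - b||^2
    step = 1e-2                      # small fixed step size, for illustration only
    x -= step * grad

print(x, np.linalg.norm(A @ x - b))  # x approaches the solution, the residual norm shrinks
```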

Yu. Ya. Ledyankin

THE METHODS OF WEIGHTED RESIDUALS, COLLOCATION AND MOMENTS. A METHOD OF PARALLEL IMPLEMENTATION IN A SINGLE COMPUTATIONAL FLOW FOR SOLVING PROBLEMS OF MATHEMATICAL PHYSICS

Annotation. The idea of ​​parallel implementation of the methods of weighted residuals, collocations, moments in a single computational flow for solving problems of mathematical physics with data processing in the form of complex structures on processor elements, including a scalar multiplier, is developed. The proposed method, described and considered on specific test examples, will be useful to mathematicians and developers of methods and structures of special processors for solving problems of mathematical physics and other problems.

Keywords: parallel computing, single computational flow, complex data structures, problems of mathematical physics, finite element method.

Abstract. The idea of ​​a parallel implementation of the methods of weighted residuals, collocations, moments in a single computational flow of solutions to the problems in mathematical physics with data processing in a form of complex structures based on processor elements including scalar multiplier is developed. The proposed method is described and reviewed on the specific test examples. It will be useful for mathematicians and developers of methods and structures of specially designed processors for solutions to the problems in mathematical physics and other problems.

Keywords: parallel computing, single computational flow, compound data structures, problems in mathematical physics, finite elements method.

1. Introduction

The application of the finite element method (FEM) to modeling and solving problems of mathematical physics was facilitated by progress in the implementation of computational methods and, in particular, of the methods of weighted residuals (MWR). The name MWR covers a whole class of discrete methods for approximating differential and integral equations given analytically. Their main purpose is to discretize the analytical equation and reduce it to a system of algebraic equations whose solution yields an approximate, but highly accurate, result. Well-known analytical weighting functions, often called trial functions, are used for this; discrete functions can be used instead of analytical ones. The choice of weighting functions also determines the particular weighted residual method, that is, its accuracy, convergence and degree of complexity.

The transition to new information-processing technologies also presupposes a new approach to the description and implementation of the MWR. It is necessary to consider the main MWR variants, which, in addition to the basic formulation, include the methods of least squares, moments, collocation, Galerkin, etc., described in [1]. The approach to posing the MWR problems, as well as the examples used to evaluate the results obtained and to compare them with the results of its authors, are taken from the first chapter of that book.

The author's goal is to develop the MWR for parallel implementation in solving problems of mathematical physics by the finite element method (FEM), with data processing at the level of complex structures in the ideology of a single computational flow.

This work (or rather, the cycle

2. Basis of weighted residual methods

On a sequence of functions of the form

Φ1(x), Φ2(x), ..., Φn(x) (1)

a scalar product <Φ, Φ> of the form

<Φ1, Φ2> = ∫ Φ1(x) Φ2(x) dx (2)

is defined. Then, for a system of trial (basis) functions Φi with coefficients ai,

a1 Φ1 + a2 Φ2 + ... + an Φn = 0, (3)

it is possible to have

|| u - Σ ai Φi || < ε. (4)

If we define the operator L( ) as an action that, when applied to a given function, produces some other function p,

L(u) = p, (5)

then for a linear operator L, positive definite for all x, the scalar product of the operator L(u) with another function v can be formed:

<L(u), v> = <u, L*(v)> + ∫_S F(v) G(u) dS, (6)

where S is the bounding surface;

F, G are differential operators whose form is determined by integration by parts;

L* is the operator conjugate to the operator L (if L* = L, then L is self-adjoint);

F(v) contains the terms in v that appear at the first stage of integration by parts;

G(u) is similar, but contains the terms in u;

F(v) defines the essential (principal) boundary conditions (BC);

G(u) defines the non-essential, or natural, BC.

If, for L* = L, F(u) is given on S1 and G(u) on S2, with S1 + S2 = S, and <L(u), u> > 0 for all nontrivial u (satisfying the homogeneous BC), then the method of weighted residuals is applicable.

It is possible to establish numerical procedures for the approximate solution of a system of differential (integral) equations of the form

L(u) = p, x ∈ V, with the boundary conditions (7)

F(u) = g, x ∈ S, (8)

where S is the outer boundary of the region;

u is the exact solution, which is approximated by a set of functions fk(x):

u ≈ Σ ak fk, (9)

ak are unknown parameters;

fk are linearly independent functions.

Substituting (9) into (7) gives the error function e (the residual):

e = L(u) - p. (10)

For the exact solution e = 0; therefore one requires the error to be zero on average, setting to zero the integrals of the residual taken with certain weight functions:

<e, wi> = 0, i = 1, 2, ..., N, (11)

where wi is a set of weight functions.

Then, if the parameters ak in (9) are constants, the differential equation (7) reduces to a system of algebraic equations. The number of independent relations is determined by the number of unknown coefficients ak and weight functions wi.

3. The essence of the proposed approach using the RDM

The proposed way of using the MWR is oriented toward implementing a single technological flow for processing complex data structures when solving a class of mathematical physics and other problems on parallel structures whose processing elements (PE) contain a scalar multiplier. Using the RDM allows polynomials to be written in matrix form, convenient for processing by the scalar multipliers.

The essence of the approach is to use the same type of mathematical expressions when loading the initial data, calculating the coefficients of the finite elements and the global stiffness matrix with the load vector, obtaining a pseudo-solution, and then processing the calculation results in the form of polynomials. For this purpose they are written in the RDM form. This preserves the uniformity of the structure of the special processor and, as a result, its manufacturability, cost, etc. In the solution process such a device provides very high performance due to parallel processing of the complex data structures, and it also avoids idle time of the matrix device.

To solve the problem, the following is required:

1. Describe the trial functions (splines) in matrix form and define the operations on splines represented in the RDM form.

2. Differentiate and integrate polynomials in matrix or vector form.

3. Multiply and add the above matrices and vectors before or after performing the differentiation or integration operations.

4. Perform the operations of items 2 and 3 analytically and numerically.

5. Calculate the resulting expressions.

We write the polynomials f1(X) and f2(X) in the RDM form, and the product of the polynomials as

Y = f(X) = f1(X) f2(X) = γ0 + γ1 X + γ2 X² + ... =
= a0 b0 + (a1 b0 + a0 b1) X + (a0 b2 + a1 b1 + a2 b0) X² + ...,

or in matrix form

                        | (a0 b0)  (a1 b0 + a0 b1)  (a0 b2 + a1 b1 + a2 b0)  ... |   | X⁰ |
Y = f(X) → [Φ1] [Φ2] =  |          (a0 b0)          (a1 b0 + a0 b1)          ... | * | X¹ |   (15)
                        |                           (a0 b0)                  ... |   | X² |

The main disadvantage of the matrix representation, and of performing operations such as multiplication and addition on it in matrix form, is redundancy, both in the representation itself and in the procedures performed on it. Thus, to multiply the matrices A(m×m) and X(m×m), m³ operations must be performed, and to carry out the matrix multiplication one must have m² scalar multipliers. Therefore, to reduce the amount of equipment and the time spent on multiplication operations, it is proposed to use a vector representation of the polynomial X instead of the matrix one. This reduces the number of scalar multipliers by a factor of m, and the number of operations as well.

Remark. This shows that the number of operations can be reduced by a factor of m: the multiplication of matrices represented in the RDM form is performed as a product of vectors of the same dimension.

The procedure of multiplying the vectors f(X) written in the RDM (in matrix form [Φ]) can be replaced by the procedure of multiplying a matrix by a vector. To perform it as a matrix-by-vector multiplication in (15), the representation of the function f1(X) must be replaced by the transposed f1T(X) (in matrix form, [Φ1] by [Φ1]T), and f2(X) in (13) must be represented as a column vector.

Always representing the first polynomial (the multiplicand) in the form f1T(X) and the second polynomial (the multiplier) as a column f2*(X), we define the product of two polynomials in general form,

Y = f1(X) * f2(X) → f1T(X) * [Φ2] = [Φ1]T * [Φ2],

and in matrix form

    | a0          |   | b0 |   | (a0 b0)                  |
g = | a1  a0      | * | b1 | = | (a1 b0 + a0 b1)          | = [Φ1*] * f2*(X).
    | a2  a1  a0  |   | b2 |   | (a2 b0 + a1 b1 + a0 b2)  |
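The coefficient arithmetic behind these matrix forms is an ordinary convolution of the coefficient vectors; a NumPy sketch of the same product (my illustration, not the author's processor-level implementation):

```python
import numpy as np

# Polynomial coefficients in ascending order of powers:
# f1(X) = a0 + a1*X + a2*X^2, f2(X) = b0 + b1*X + b2*X^2 (illustrative values).
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Product coefficients gamma_k = sum over i + j = k of a_i * b_j,
# i.e. the rows of the triangular matrix in the text collapsed into a convolution.
gamma = np.convolve(a, b)
print(gamma)   # [a0b0, a1b0 + a0b1, a2b0 + a1b1 + a0b2, ...]

# The same result via the explicit matrix-by-vector form [Phi1*] * f2(X):
n = len(a) + len(b) - 1
Phi1 = np.zeros((n, len(b)))
for j in range(len(b)):
    Phi1[j:j + len(a), j] = a    # shifted copies of the a-coefficients
print(Phi1 @ b)
```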

Formulation of the problem

Using one of the MWR variants, the collocation method, we will show the possibility of applying the RDM described above.

Collocation method

Let us consider an example: the solution of an equation of the form

L(u) - p = d²u/dx² + u + x = 0 (20)

on the interval 0 < x < 1 with the boundary conditions u = 0 at x = 0 and u = 0 at x = 1. (21)

The differential equation is required to be satisfied at several collocation points.

An approximation is chosen in the form

u = x(1 - x)(a1 + a2 x + ...), (22)

which satisfies the boundary conditions for any ai.

Keeping only two terms in the approximating expression (22), we write it using the RDM:

u = x(1 - x)(a1 + a2 x) = a1 (x - x²) + a2 (x² - x³). (23)

To obtain the residual, we calculate the first derivative of the approximating function,

du/dx = d(a1 x - a1 x² + a2 x² - a2 x³)/dx = a1 (1 - 2x) + a2 (2x - 3x²), (24)

and the second derivative,

d²u/dx² = -2 a1 + 2 a2 - 6 a2 x. (25)

After substituting the approximating function (23) into equation (10), we determine the error e, using its notation in the RDM form:

e = L(u) - p = x + a1 (-2 + x - x²) + a2 (2 - 6x + x² - x³) →
→ [0 1 0 0] + a1 [-2 1 -1 0] + a2 [2 -6 1 -1] (26)

(the rows contain the coefficients of the powers 1, x, x², x³);

L(u) - p = (-2 a1 + 2 a2 - 6 a2 x) + a1 (x - x²) + a2 (x² - x³) + x = 0, (27)

x = (2 a1 - 2 a2 + 6 a2 x) - a1 (x - x²) - a2 (x² - x³) = a1 (2 - x + x²) + a2 (-2 + 6x - x² + x³). (27a)

For the chosen collocation points x1 = 1/4, x2 = 1/2 a system of equations is drawn up:

(2 - x1 + x1²) a1 + (-2 + 6 x1 - x1² + x1³) a2 = 1/4,
(2 - x2 + x2²) a1 + (-2 + 6 x2 - x2² + x2³) a2 = 1/2. (28)

Calculating the coefficients of the unknowns a1 and a2 and solving the system

| 29/16  -35/64 |   | a1 |   | 1/4 |        | 1.8125  -0.546875 |
| 7/4     7/8   | * | a2 | = | 1/2 |,  or   | 1.75     0.875    |,

we obtain a1 = 6/31 (0.193548) and a2 = 40/217 (0.184332). This provides the calculation of the values we need from (23):

u = x(1 - x)(a1 + a2 x) = a1 (x - x²) + a2 (x² - x³) = x(1 - x)(6/31 + (40/217) x). (30)

For xi, i = 1, 2, 3, we have

x1 = 1/4: u1 = 1/4 * (1 - 1/4) * (6/31 + 40/217 * 1/4) = 0.0436,

x2 = 1/2: u2 = 1/2 * (1 - 1/2) * (6/31 + 40/217 * 1/2) = 0.0703,

x3 = 3/4: u3 = 3/4 * (1 - 3/4) * (6/31 + 40/217 * 3/4) = 0.0618.
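As a cross-check of the collocation system (28), here is a short SymPy sketch (mine, not the author's RDM implementation) that recovers the coefficients a1 = 6/31 and a2 = 40/217 and evaluates the approximation:

```python
import sympy as sp

x, a1, a2 = sp.symbols('x a1 a2')

u = x * (1 - x) * (a1 + a2 * x)          # trial function (23), satisfies u(0) = u(1) = 0
residual = sp.diff(u, x, 2) + u + x      # e = L(u) - p = u'' + u + x, eq. (26)

# Force the residual to vanish at the collocation points x1 = 1/4, x2 = 1/2.
eqs = [residual.subs(x, sp.Rational(1, 4)),
       residual.subs(x, sp.Rational(1, 2))]
sol = sp.solve(eqs, [a1, a2])
print(sol)                               # {a1: 6/31, a2: 40/217}

u_num = u.subs(sol)
for xi in [sp.Rational(1, 4), sp.Rational(1, 2), sp.Rational(3, 4)]:
    print(xi, float(u_num.subs(x, xi)))  # approximation values at the three points
```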

We will describe the solution to the problem in stages.

Representation of the polynomials of the approximating function (22) in the form

u = a1 (x - x²) + a2 (x² - x³) → a1 [0 1 -1 0] + a2 [0 0 1 -1]

(rows of coefficients of the powers 1, x, x², x³), and of the operator [A] of matrix differentiation in the form

d/dx = [A] =
| 0 1 0 0 0 |
| 0 0 2 0 0 |
| 0 0 0 3 0 |
| 0 0 0 0 4 |.

Let us calculate.

Analytically, the first derivative of the function u from (24) is

du/dx = d(a1 x - a1 x² + a2 x² - a2 x³)/dx = a1 (1 - 2x) + a2 (2x - 3x²);

in the RDM form the same derivative is obtained by applying the differentiation matrix to the coefficient columns:

du/dx = [A] * a1 [0 1 -1 0]T + [A] * a2 [0 0 1 -1]T = a1 [1 -2 0]T + a2 [0 2 -3]T,

and the second derivative of the function u (25), written in the RDM form:

d²u/dx² = [A] * a1 [1 -2 0]T + [A] * a2 [0 2 -3]T = a1 [-2 0]T + a2 [2 -6]T,

i.e. -2 a1 + a2 (2 - 6x).

Let us calculate the error (e) by writing equation (26) in the RDM form:

e = [0 1 0 0] + a1 [-2 1 -1 0] + a2 [2 -6 1 -1]

(rows of coefficients of the powers 1, x, x², x³).

Setting e = 0 at the chosen collocation points x1 = 1/4, x2 = 1/2, from the equations

xi + (-2 + xi - xi²) a1 + (2 - 6 xi + xi² - xi³) a2 = 0, (36)

xi = (2 - xi + xi²) a1 + (-2 + 6 xi - xi² + xi³) a2, (36a)

we make up a system. We represent the polynomials multiplying a1 and a2 in the form of circulants (the regular matrix representation):

f(x, a1) = a1 (2 - x + x²) → a1 [2 -1 1 0] * [x⁰ x¹ x² x³]T,

f(x, a2) = a2 (-2 + 6x - x² + x³) → a2 [-2 6 -1 1] * [x⁰ x¹ x² x³]T.

For the chosen collocation points, the values of these functions in the equations of system (28) are calculated in vector notation as follows:

for x1 = 1/4 (x⁰ = 1, x¹ = 1/4, x² = 1/16, x³ = 1/64):

a1 [2 -1 1 0] * [1 1/4 1/16 1/64]T = a1 (2 - 1/4 + 1/16) = a1 * 29/16 = a1 * 1.8125, (37)

a2 [-2 6 -1 1] * [1 1/4 1/16 1/64]T = a2 (-2 + 6 * 1/4 - 1/16 + 1/64) = -a2 * 35/64 = -a2 * 0.5469, (38)

for x2 = 1/2 (x⁰ = 1, x¹ = 1/2, x² = 1/4, x³ = 1/8):

a1 [2 -1 1 0] * [1 1/2 1/4 1/8]T = a1 (2 - 1/2 + 1/4) = a1 * 7/4 = a1 * 1.75,

a2 [-2 6 -1 1] * [1 1/2 1/4 1/8]T = a2 (-2 + 6 * 1/2 - 1/4 + 1/8) = a2 * 7/8 = a2 * 0.875. (39)

Then the system of equations (28) with the coefficients calculated in (35)-(39), for the previously determined collocation points, can be written in numerical form as

| 1.8125  -0.5469 |   | a1 |   | 0.25 |
| 1.75     0.875  | * | a2 | = | 0.50 |,

and its solution according to the elimination scheme

| 1.8125  -0.5469    0.25    |
| 0        2.542968  0.46875 |

gives

b22 = 1/2.542968 = 0.393241, b11 = 1/1.8125 = 0.5517,

a2 = b22 * 0.46875 = 0.1843,

a1 = b11 * (0.25 + 0.1843 * 0.5469) = 0.1935.

Using the calculated values of the coefficients a1 and a2, equation (23), in the notation of equation (30), can be written in the regular matrix representation

u = a1 (x - x²) + a2 (x² - x³),

which allows the values at the chosen collocation points to be calculated using the complex data structures:

a1 (x1 - x1²) = a1 * T = a1 * 0.1875 (for x1 = 1/4), (43)

a1 (x2 - x2²) = a1 * T = a1 * 0.25 (for x2 = 1/2), a2 (x1² - x1³) = a2 * T = a2 * 0.0469 (for x1 = 1/4), (44)

a2 (x2² - x2³) = a2 * T = a2 * 0.1406 (for x2 = 1/2).

After that, we calculate the values of the function u:

u = a1 * 0.1875 + a2 * 0.0469 = 0.1935 * 0.1875 + 0.1843 * 0.0469 = 0.0449, (45)

u = a1 * 0.25 + a2 * 0.1406 = 0.1935 * 0.25 + 0.1843 * 0.1406 = 0.0742.

For a new collocation point x = 3/4 we calculate (with variants in the computational process):

u = a1 * T + a2 * T = a1 (3/4 - 9/16) + a2 (9/16 - 27/64) = (a1 * 0.75 - a1 * 0.5625) + (a2 * 0.5625 - a2 * 0.4219) = 0.0346 + 0.0259 = 0.0605. (46)

Method of moments

The method of moments, which also belongs to the group known under the general name of the MWR, considers, for a given system of equations, the scalar product of the residual e with certain weight functions wi:

<e, wi> = 0, i = 1, 2, ..., N, (47)

where wi are weight functions. They can be any set of linearly independent functions,

for example, 1, x, x², x³, ....

If the condition of vanishing of the moments of the residual of higher order is satisfied,

<e, x^i> = 0, i = 0, 1, ..., (48)

then this procedure is called the method of moments.

The example from (20)-(22) is considered again:

L(u) - p = d²u/dx² + u + x = 0, u = x(1 - x) a1 + x²(1 - x) a2,

and the error function in it is orthogonalized with respect to 1 and x:

∫ e * 1 * dx = 0, (49)

∫ e * x * dx = 0, (49a)

with the residual e substituted into system (49), (49a):

e = a1 (-2 + x - x²) + a2 (2 - 6x + x² - x³) + x. (50)

∫₀¹ e dx = x²/2 |₀¹ + a1 (-2x + x²/2 - x³/3) |₀¹ + a2 (2x - 3x² + x³/3 - x⁴/4) |₀¹ =

= 1/2 + a1 (-2 + 1/2 - 1/3) + a2 (2 - 3 + 1/3 - 1/4) = 1/2 - 11/6 a1 - 11/12 a2 = 0,

that is, 11/6 a1 + 11/12 a2 = 1/2, (51)

where [A] is the integration matrix in the RDM notation, which maps the coefficient of x^k to the coefficient of x^(k+1) divided by (k + 1).

Comment. It should be remembered that when integrating functions (in the RDM notation) as the product of the matrix [A] and the column vector f*(x), the number of columns m of the matrix must be equal to the number of rows m of the vector, and the dimension of the resulting column vector mk must be larger by one: mk = m + 1.

By analogy with (51), we calculate the second integral (49a):

∫₀¹ e * x dx = ∫₀¹ x² dx + ∫₀¹ a1 (-2x + x² - x³) dx + ∫₀¹ a2 (2x - 6x² + x³ - x⁴) dx =

= x³/3 |₀¹ + a1 (-x² + x³/3 - x⁴/4) |₀¹ + a2 (x² - 2x³ + x⁴/4 - x⁵/5) |₀¹ =

= 1/3 + a1 (-1 + 1/3 - 1/4) + a2 (1 - 2 + 1/4 - 1/5) = 1/3 - 11/12 a1 - 19/20 a2,

that is, 11/12 a1 + 19/20 a2 = 1/3.

Hence the system of equations for a1 and a2:

| 11/6   11/12 |   | a1 |   | 1/2 |        | 1.8333   0.91667 |
| 11/12  19/20 | * | a2 | = | 1/3 |,  or   | 0.91667  0.95    |. (54)

We solve system (54) by the elimination method:

| 1.8333  0.91667  0.5    |
| 0       0.9014   0.1526 |,

b11 = 1/1.8333 = 0.54546, b22 = 1/0.9014 = 1.1094, a2 = 0.1695, a1 = 0.18796.

Hence, from u = x(1 - x)(a1 + a2 x), for the chosen collocation points we have u1 = 0.25 * 0.75 * (0.18796 + 0.1695 * 0.25) = 0.04319, u2 = 0.5 * 0.5 * (0.18796 + 0.1695 * 0.5) = 0.06816, u3 = 0.75 * 0.25 * (0.18796 + 0.1695 * 0.75) = 0.0590.
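The same result can be cross-checked with a short SymPy sketch of the method of moments (again my own check, not the author's implementation); it reproduces a1 ≈ 0.18798, a2 ≈ 0.16949 and values close to u1, u2, u3 above:

```python
import sympy as sp

x, a1, a2 = sp.symbols('x a1 a2')

u = x * (1 - x) * (a1 + a2 * x)      # trial function satisfying u(0) = u(1) = 0
e = sp.diff(u, x, 2) + u + x         # residual of u'' + u + x = 0, eq. (50)

# Moments of the residual with the weights 1 and x must vanish, eqs. (49), (49a).
eq1 = sp.integrate(e * 1, (x, 0, 1))
eq2 = sp.integrate(e * x, (x, 0, 1))
sol = sp.solve([eq1, eq2], [a1, a2])
print(sol)                           # {a1: 122/649, a2: 10/59}

u_sol = u.subs(sol)
for xi in [sp.Rational(1, 4), sp.Rational(1, 2), sp.Rational(3, 4)]:
    print(xi, float(u_sol.subs(x, xi)))
```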

It can be seen from the above that the use of the weighted residual, collocation and moment methods fits well into the general ideology of parallel implementation of the computational process at the level of processing complex data structures in a single technological flow. It also shows the possibility of implementing these methods in a special processor built on processor elements with a scalar multiplier at their core, oriented toward solving problems of mathematical physics.

Comparison of the values ​​calculated by the proposed method with the values ​​calculated by the traditional approximate method shows their closeness.

Writing the initial equations in the RDM form makes it possible to:

raise the level of information processing (including operations such as multiplication, differentiation and integration of polynomials) from the level of numbers to the level of processing complex data structures;

parallelize the computational process by performing operations in the PEs using the scalar multipliers included in their structure, that is, in a parallel special processor on a structure designed to compute the stiffness matrices of the finite elements (FE) and then the global matrix with the load vector, as applied to the use of the FEM in solving problems of mathematical physics.

The implementation of this approach will expand the field of application of parallel structures of processing elements (PE) built on the basis of scalar multipliers and will make it possible to unify the computational process, from the stage of preparing the initial data through the subsequent solution of the global stiffness matrix, thereby organizing a single technological (computational) flow when solving problems of mathematical physics.

In conclusion, I would like to recall with deep gratitude V.M. Glushkov. He initiated the work on the use of the finite element method and supported the development of new numerical methods and structures for solving problems of mathematical physics by the FEM and other numerical methods. Viktor Mikhailovich considered this task difficult, but very important.

I believe that J. Connor and C. Brebbia, who published a book necessary for the work of mathematicians and engineers, can also be counted among the initiators of the author's work, from the point of view outlined in my series of articles on the use of the MWR for solving problems of mathematical physics with the help of the FEM. The methods proposed above can be successfully applied to the problems described in their book, on a special processor or on vector processors such as the CUDA-architecture Tesla series from NVIDIA, or AMD devices, as part of GPU-CPU (graphics and central processor) clusters performing vector operations.

BIBLIOGRAPHY

1. Connor J., Brebbia C. The Finite Element Method in Fluid Mechanics / transl. from English. - L.: Shipbuilding, 1979. - 264 p.

2. Ledyankin Yu.Ya. A single technological flow in the organization of computations as a way to increase the performance of parallel structures on transputer-type processor elements. - Kiev, 1989. - 20 p. - (Preprint / Academy of Sciences of the Ukrainian SSR, Glushkov Institute of Cybernetics; 89-57).

3. Ledyankin Yu.Ya. On the transformation and parallel input of boundary conditions when solving boundary value problems in a single computational flow // Mathematical Machines and Systems. - 2012. - No. 1. - P. 28-35.

4. Adinets A., Voevodin V. A graphic call to supercomputers // Open Systems. - 2008. - No. 4. - P. 35-41.

Weighted residuals method

The FEM is based on the method of weighted residuals, the essence of which is as follows: a function is sought that satisfies the differential equations and the boundary conditions; it cannot be chosen by simple guessing, since such a selection is hardly possible even in two-dimensional space, so special methods are used.

Let the state of some medium be described by the following differential operator with a given boundary condition:

L(V) + P = 0 in the domain S, V(G) = Vg on the boundary G.

Here L is a differential operator (for example, the Laplace operator),

V is the phase variable, the unknown function to be found,

P is a quantity independent of V,

V(G) = Vg is a boundary condition of the first kind (Dirichlet), that is, the value of the phase variable is prescribed on the boundary.

We will look for a solution in the form of a function of the following form:

V* = F + Σ (m = 1..M) Am Nm. (1)

Here V* is the approximate solution,

F is a function that satisfies the boundary conditions,

Nm are trial functions, which must be equal to zero on the boundary of the region,

Am are unknown coefficients that must be found from the condition of best satisfaction of the differential operator,

M is the number of trial functions.

If we substitute V* into the original differential operator, we obtain a residual that takes different values at different points of the region:

R = L(V*) + P.

It is necessary to formulate a condition that minimizes this residual over the entire region. One possible variant of such a condition is the following equation:

∫S Wn R dS = 0, n = 1, 2, ..., M.

Here Wn are weight functions; depending on their choice, the different variants of the method of weighted residuals are distinguished,

and S is the region of space in which the solution is sought.

When delta functions are chosen as the weight functions, we obtain the method of point collocation; for piecewise-constant functions, the method of collocation by subdomains; but the most common is the Galerkin method, in which the trial functions N themselves are taken as the weight functions. In this case, if the number of trial functions is equal to the number of weight functions, then after evaluating the integrals we arrive at a closed system of algebraic equations for the coefficients A:

[K]{A} = {Q},

where the coefficients of the matrix K and of the vector Q are calculated (for a linear operator L) by the formulas

Knm = ∫S Wn L(Nm) dS,   Qn = -∫S Wn (L(F) + P) dS.

After finding the coefficients A and substituting them in (1), we obtain a solution to the original problem.

The disadvantages of the method of weighted residuals are obvious: since the solution is sought over the entire region at once, the number of trial and weight functions must be large to ensure acceptable accuracy, and at the same time difficulties arise in calculating the coefficients Kij and Qi, especially when solving plane and three-dimensional problems that require the evaluation of double and triple integrals over regions with curvilinear boundaries. For this reason the method was not used in practice until the finite element method was invented.

The idea of the FEM is to use simple trial and weight functions of the method of weighted residuals not in the entire domain S, but in its individual subdomains (finite elements). The accuracy of the solution is then ensured by using a large number of finite elements (FE), while the FE themselves can be of simple shape and the calculation of integrals over them causes no particular difficulty. Mathematically, the transition from the method of weighted residuals to the FEM is carried out by using special trial functions, also called global basis functions, which have the following properties:

1) at its approximation node, the function is equal to one;

2) the function is nonzero only in the finite elements containing this approximation node and is equal to zero in the rest of the region.
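In one dimension such global basis functions are the usual piecewise-linear "hat" functions; a small sketch (an assumed illustration of the two properties, not taken from the text):

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 5)   # a uniform 1D mesh: 4 finite elements, 5 nodes

def hat(i, x, nodes=nodes):
    """Piecewise-linear 'hat' basis function: equals 1 at node i, 0 at every
    other node, and is nonzero only on the elements adjacent to node i."""
    values = np.zeros(len(nodes))
    values[i] = 1.0
    return np.interp(x, nodes, values)

print(hat(2, nodes))                   # [0. 0. 1. 0. 0.] -- property 1)
print(hat(2, [0.1, 0.3, 0.45, 0.8]))   # nonzero only between nodes 1 and 3 -- property 2)
```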

A fundamental topic 🙂 I wrote a notebook that implements the weighted residuals method in Mathematica. It turned out quite interesting; there is room for experiments with changing the trial and weight functions.

Weighted residuals method: Algorithm

The problem to be solved by the method of weighted residuals

As an example, let's take a problem from Kwon and Bang's book "The Finite Element Method Using MATLAB" (p. 31).

Weighted residuals method in Mathematica

(Figure: the Mathematica notebook text for the weighted residuals method, shown as a picture with no output cells, since the integrals are not rendered on the site.)

Following the notebook text: in the first three lines the residual function is defined, then a trial function with a parameter is chosen, and then the residual obtained by substituting the trial function is calculated.

Next, three methods are implemented: first the collocation method, then the least squares method and the Galerkin method. For collocation, a collocation (sampling) point is chosen. For each of these methods its own weight function is set, while the trial function remains the same. The weighted residual is calculated as the integral, over the solution domain of the problem, of the product of the weight function and the residual. The integration is performed over the spatial variable, but the expression under the integral also contains the parameter of the trial function, so after integration we have a function of this parameter. The weighted-residual expression is then set equal to zero and the parameter is determined. Thus the trial function becomes a fully determined function of a single argument.
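Since the notebook itself is shown only as a picture, here is a rough SymPy sketch of the same workflow on an assumed model problem u'' + u + x = 0, u(0) = u(1) = 0 (not necessarily the exact problem from Kwon and Bang), with a one-parameter trial function a·x(1 - x):

```python
import sympy as sp

x, a = sp.symbols('x a')

u_trial = a * x * (1 - x)                  # trial function with one parameter, u(0) = u(1) = 0
residual = sp.diff(u_trial, x, 2) + u_trial + x

# Collocation: weight = Dirac delta at x = 1/2, i.e. force residual(1/2) = 0.
a_coll = sp.solve(residual.subs(x, sp.Rational(1, 2)), a)[0]

# Least squares: weight = dR/da.
a_lsq = sp.solve(sp.integrate(residual * sp.diff(residual, a), (x, 0, 1)), a)[0]

# Galerkin: weight = the trial (basis) function itself, x*(1 - x).
a_gal = sp.solve(sp.integrate(residual * x * (1 - x), (x, 0, 1)), a)[0]

print(a_coll, a_lsq, a_gal)   # three slightly different values of the parameter
```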

Having studied one method in relative detail, we turn to the presentation of other methods as whole classes. The most common class is that of the weighted residual methods. They proceed from the assumption that the desired function can be represented as a functional series, for example

f ≈ f0 + Σ_j aj fj.

The function f0 is usually chosen so that it satisfies the initial and boundary conditions as accurately as possible (exactly, if possible). The approximating (trial) functions fj are assumed known. Mathematicians have formulated a number of requirements for such functions, but we will not discuss them here; we only note that polynomials and trigonometric functions satisfy these requirements. A few more examples of sets of such functions will be considered when describing specific methods.

The coefficients aj are not known in advance; they are to be determined from the system of equations obtained from the original equation. Only a finite number of terms of the infinite series is retained.

In the equation to be solved, all terms are moved to the left-hand side, so that only zero remains on the right. Thus the equation is reduced to the form

D(f) = 0, where D denotes the entire left-hand side.

If an approximate solution (written as a finite sum of preselected functions) is substituted into this equation, it will not be satisfied identically. Therefore we can write

D(f0 + Σ_j aj fj) = R,

where the quantity R is called the residual. In the general case the residual is a function of x, y, z and t. The problem reduces to finding coefficients aj such that the residual remains small over the entire computational domain. "Small" in these methods means that the integrals over the computational domain of the residual multiplied by certain weight functions are equal to zero, i.e.

∫ Wm R dΩ = 0, m = 1, 2, ...

By choosing a finite number of weight functions, we obtain a system of equations for finding the unknown coefficients. By specifying various approximating (trial) functions and various weight functions, we can easily obtain a whole class of methods called weighted residual methods.

Here are some examples of the simplest methods from this class.



Subdomain method. The computational domain is divided into several subdomains Dm, which may overlap each other. The weight function is taken in the form

Wm = 1 inside Dm and Wm = 0 outside it.

Thus, the integral of the residual over each subdomain is forced to vanish. The method served as the basis for a number of other methods (one of them will be discussed below).

Collocation method. The Dirac delta functions are used as the weight functions,

Wm = δ(x - xm),

where x = (x, y, z). Let me remind you that the Dirac function is a tricky function that is equal to zero everywhere except at its node, where it takes a value such that any integral over a region containing that point is equal to one. Simply put, we choose a certain number of points (often called nodes in this approach), and the original equation is required to be satisfied exactly at these points. There are approaches to selecting these points and the trial functions so as to maximize the accuracy for a limited number of nodes, but we will not discuss them here.

Least squares method. The method is based on minimizing the quantity

I = ∫ R² dΩ.

It is easy to show that it also belongs to the class of weighted residual methods: its weight functions have the form

Wj = ∂R/∂aj.

Perhaps this is the best-known method of this class among non-specialists, but it is far from the most popular among specialists.

Galerkin's method. In this method the approximating (trial) functions themselves are taken as the weight functions, i.e.

Wj = fj.

The method is widely used when the solution is sought in the form of a continuous (rather than a grid) function.

Let us consider the application of these methods to calculating the deformation of a cantilever beam of length L. Let the deflection from the centerline be described by the equation

The boundary conditions are given in the form

We will look for a solution in the form

Then the residual will be written in the form

To find the unknown coefficients a and b, we need to set up a system of two equations. We will do this for each of the methods considered.

Collocation method. We select two points at the ends of the beam and set the residual at these points equal to zero.

We get

As you can see, the collocation method is quite simple to implement, but it is inferior in accuracy to the other methods.

Subdomain method. We divide the entire length of the beam into two subdomains. In each of them, the integral of the residual is set equal to zero.

Galerkin's method. We take the integrals of the residual multiplied by the trial functions.

Least squares method.

The least squares method is the most computationally expensive, while giving no noticeable gain in accuracy. Therefore it is rarely used in solving practical problems.