How to mix voiceovers with maximum quality: mixing and sound mixing

Description

Mixing is not a purely technical process of combining separate tracks into a single whole; it is a creative activity that largely determines how the final result sounds. Mixing music is fairly time-consuming, because the following aspects must be taken into account:

  • the volume balance of the different tracks;
  • panorama - the placement of each recorded instrument in the stereo field and its blending with the others, creating an appropriate acoustic environment;
  • dynamics - shaping the form of the recorded composition so that the introduction, development, climax and resolution are all sustained;
  • transparency - mixing the tracks so that each recorded instrument remains distinguishable and readable without drowning out the others;
  • density - the saturation (fullness) of the sound;
  • various sound effects.

Mixing

Mixing (also called mixdown) is the stage of the sound-recording process in which all recorded tracks and parts are overlaid, their sound is blended and their timing is synchronized. Mixing is usually combined with finishing the recording (setting the levels of the parts, applying special and noise effects).
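
As a very rough illustration of what "overlaying tracks" means in technical terms, here is a minimal sketch (assuming equal-length mono tracks held in NumPy arrays; the signals are placeholder tones, not real recordings) that sums several tracks with individual gains and guards against clipping:

```python
import numpy as np

def mix_tracks(tracks, gains_db):
    """Sum equal-length mono tracks, each with its own gain in dB."""
    gains = [10 ** (g / 20.0) for g in gains_db]        # dB -> linear gain
    mix = sum(t * g for t, g in zip(tracks, gains))     # overlay (sum) the tracks
    peak = np.max(np.abs(mix))
    if peak > 1.0:                                      # simple safety normalization
        mix = mix / peak
    return mix

# Placeholder "parts": three 2-second tones at 44.1 kHz
sr = 44100
t = np.linspace(0, 2, 2 * sr, endpoint=False)
vocals = 0.5 * np.sin(2 * np.pi * 440 * t)
guitar = 0.4 * np.sin(2 * np.pi * 220 * t)
drums  = 0.6 * np.sin(2 * np.pi * 110 * t)

mix = mix_tracks([vocals, guitar, drums], gains_db=[0.0, -3.0, -6.0])
```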


Sound mixing

Adding sound effects


In the previous chapter, you completed your movie footage. Now is the time to do the soundtrack. This is an equally important step in the movie making process. Incorrectly edited or distorted sound can spoil the viewing experience, even if the movie footage is flawless. In this chapter, you will learn the basic techniques for editing and mixing audio, as well as methods for adding sound effects and recording voice.

Premiere Pro provides all the tools you need to work on the soundtrack for your project. The soundtrack of the film can be monaural, stereo or multichannel (5.1 format).

In most cases, the original video material of the film already contains sound recorded onto the videotape by the camcorder's built-in or external microphone. This soundtrack can be kept to carry the dialogue and the events of the film. To the existing sound you can add background music and various noise effects (a door creaking, surf, a car engine, etc.). To do this, you need files with the music or sounds. You can also re-dub your movie using Premiere Pro's recording tools.

The project contains several audio tracks. This allows you to mix sound that is on different tracks, for example, combine the musical accompaniment of a movie with dialogue. Additional audio tracks can be added to the movie if needed.

Premiere Pro contains a variety of audio effects. Premiere Pro CS3 has increased the number of available sound effects over previous versions. You can change the frequency response of the sound with the equalizer, add echo, reverb and other effects.

Removing audio clips from audio tracks

Some clips in this project contain a soundtrack. While making the film, you have already noticed that this soundtrack contains wind noise (in Clip05.avi and Clip02.avi), the noise of a car engine (in Clip04.avi) and of a helicopter (in Clip12.avi).

Let's remove the sound with the noise of the wind. We will not delete the sounds of the engine of the car and the helicopter, since we will mix them in the future with the musical accompaniment of the film, which we will also add later.

1. Click on the part of the Clip05.avi clip located on the Audio 1 track (for convenience, let's call this part the Clip05.avi sound clip).

2. Press the Delete key. The audio clip will be removed from the track, and its video component will disappear with it.

The audio and video components of one clip are linked to each other, so you cannot delete one of them without first separating the audio and video.

1. Undo the last action by pressing the keyboard shortcut Ctrl + Z. The Clip05.avi clip will reappear in the sequence.

2. Click on the Clip05.avi clip. Note that the entire Clip05.avi is highlighted regardless of whether you click on the video or audio portion of the clip.

3. Right-click on the Clip05.avi clip. A context menu will appear on the screen.

4. In it, execute the Unlink command.

5. Click outside the Clip05.avi clip to deselect it.

6. Now click on the audio clip Clip05.avi located on the Audio 1 track. Note that this time only the audio portion of the clip is highlighted. This means that the audio and video clip Clip05.avi are disconnected (Fig. 7.1).

Fig. 7.1. Only the audio portion of the Clip05.avi clip is highlighted


7. Press the Delete key. The audio component of the Clip05.avi clip will be removed from the audio track, and the video component will remain in place.

8. Separate the audio and video clips of Clip02.avi using the above method and remove the audio component of this clip.

To reinforce what you have learned, delete the remaining audio clips located on the Audio 1 and Audio 3 tracks, except for the Clip04.avi and Clip12.avi clips.

Only two sound clips should remain in the project - Clip04.avi and Clip12.avi (Fig. 7.2).


Fig. 7.2. Only two audio clips remain in the project


Removing audio clips from tracks is straightforward. All you need to do is decouple the video and audio from the clip and remove the audio portion. In the same way, you can remove the video component separately from the audio, if, for example, you want to leave only the soundtrack of the clip in the movie.

Adding a sound clip to a sequence

Placing audio clips in sequence is a simple procedure. Let's add the sound file Sound.wav to the movie, which will become the soundtrack for the movie. Since you previously loaded this clip into the project, you can find it in the clip list in the Project window.

1. Click and hold the Sound.wav clip in the Project window and drag the clip into the Timeline window onto an empty audio track, such as Audio 2.

2. Drag the Sound.wav clip on the Audio 2 track until the left edge of the clip aligns with the timeline zero. This is necessary in order for the musical accompaniment to begin simultaneously with the beginning of the film (Fig. 7.3).


Fig. 7.3. A new audio clip has been added to the sequence


Note

When adding audio files to a sequence, it does not matter which tracks they are placed on. When adding footage clips, track order mattered, because video on higher tracks covers video on lower ones. Audio is different: clips located on different tracks are simply mixed together, regardless of track order.

Play the sequence. Now the film contains musical accompaniment, and in the places where the clips Clip04.avi and Clip12.avi are located, the noises of the engines of a car and a helicopter are heard simultaneously with the music. The same effect can be obtained by turning on the TV and tape recorder at the same time. The sounds played by these devices will be mixed. As a result, most likely, you will hear a cacophony, in which it is difficult to make out where is the voice of the TV show host, and where is the music played by the tape recorder. To avoid this in your movie, you need to properly mix the sounds that are simultaneously played in the sequence.

Sound mixing

In everyday life, sound surrounds us everywhere. Going out into the street, you simultaneously hear the noise of cars, the voices of children playing, music coming from a window nearby, etc. At the same time, you can calmly talk with your interlocutor - extraneous sounds do not bother you. These are the sounds you are used to. They are naturally mixed (the volume of these sounds is balanced, unless you are trying to talk on the phone next to a roaring jet) and come from different directions.

Imagine now what you would hear if you added several different sound fragments (dialogue, music, car or wind noise) to the project and placed them in the same section of the sequence. You would not be happy with the resulting soundtrack. Yet when you watch any feature film you hear many different sounds - a phone ringing, a door creaking, the actors' voices, objects falling, etc. Many of them sound at the same time, but they do not hurt your ears. This is because, when the film's soundtrack was edited, the volume of each sound was chosen correctly and, if the film has stereo or surround sound, so was their balance.

Change the volume of audio clips and audio tracks

In this project we will mix musical accompaniment and sounds of car and helicopter noise. To do this, you need to slightly turn down the volume of music playback in these sections of the sequence.

1. Expand the Audio 2 track so that you can see the waveform of the Sound.wav clip. To do this, click on the triangle in the track header area (Fig. 7.4).


Fig. 7.4. The Audio 2 track expanded


2. Zoom in on the Timeline window to work more easily.

3. Scroll the project in the Timeline window so that you see the Clip04.avi clip.

Notice the yellow line running horizontally through the middle of the Audio 2 track. This is the volume line. At the moment it is straight, which means that the volume of the clip on the track does not change. Let's create keyframes for the volume level and change the shape of the volume line at those points.

1. Move the cursor of the current editing position to the beginning of the clip named Clip04.avi.

2. Click the Add / Remove Keyframe button. At the point where the current-edit-position cursor crosses the volume line of the Audio 2 track, a keyframe (a round dot) will appear (Fig. 7.5).


Fig. 7.5. A keyframe on the audio track


3. Move the cursor of the current editing position 20–25 frames to the right.

4. Click the Add / Remove Keyframe button. A second keyframe will appear on the volume line at the position indicated by the cursor of the current edit position.

Decrease the volume of the music between the two created keyframes.

1. Move the cursor of the current edit position to the second (right) keyframe of the volume line.

2. Hold down the mouse button and drag the keyframe slightly downward. A tooltip next to the mouse pointer displays the time position of the keyframe and the volume level in decibels (dB).

3. Drag the keyframe vertically to set the volume to approximately –8 dB (Fig. 7.6).


Fig. 7.6. The volume level lowered at the second keyframe


The trajectory of the volume line has changed: between the first and second keyframes the line slopes downward, and after the second keyframe it is horizontal again, but below the original level. This graph is read as follows. At first the movie soundtrack plays at nominal volume (the default level set by the program). The nominal loudness level in Premiere Pro is 0 dB. After the first keyframe the volume gradually decreases until the second keyframe. After the second keyframe the loudness remains constant (the volume line is parallel to the horizontal axis of the audio track) but below nominal (below the center of the audio track).
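
The keyframed volume line is essentially a piecewise-linear gain envelope. The sketch below (only an illustration of the idea, not how Premiere Pro computes it internally; the frame rate and the 0 dB / -8 dB keyframe values are taken from the example above) converts dB values to linear gain and interpolates between keyframes:

```python
import numpy as np

sr = 44100                      # audio sample rate
fps = 25                        # assumed project frame rate
# Keyframes: (time in frames, level in dB) - first at the clip start, second ~22 frames later
keyframe_frames = np.array([0, 22])
keyframe_db     = np.array([0.0, -8.0])

n_samples = 5 * sr              # 5 seconds of the music track (placeholder)
times = np.arange(n_samples) / sr
key_times = keyframe_frames / fps

# Interpolate dB between keyframes; before/after the keyframes the level stays constant
env_db = np.interp(times, key_times, keyframe_db)
env_gain = 10 ** (env_db / 20.0)    # 0 dB -> 1.0, -8 dB -> ~0.4

music = 0.5 * np.sin(2 * np.pi * 330 * times)   # placeholder for Sound.wav
music_with_envelope = music * env_gain
```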

After the end of the Clip04.avi clip, the sound volume should be returned to the nominal value, that is, the volume should be made the same as it was before the first keyframe. To do this, you need to create two more keyframes on the volume line at the end position of the Clip04.avi clip.

1. Create a keyframe on the volume line of the Audio 2 track, 20-25 frames to the left of the end of Clip04.avi.

2. Now create a keyframe on the volume line of the Audio 2 track at the position corresponding to the end of Clip04.avi.

3. While pressing and holding the mouse button on the fourth keyframe (counting from the left), drag the keyframe up so that the volume level in the tooltip corresponds to 0.00 dB (Fig. 7.7).


Fig. 7.7. At the fourth keyframe the volume level returns to its original value


The loudness graph now looks like this. After the start of Clip04.avi, the volume decreases smoothly. After 20-25 frames (depending on the distance between the first and second volume keyframes in your sequence), the volume level remains unchanged until the third keyframe, but in this section it is lower than nominal. After the third keyframe the loudness graph rises, which means the volume increases smoothly. At the fourth keyframe the loudness reaches the nominal value and remains unchanged until the end of the sequence.

Play this part of the sequence. You will find that the volume of the music is reduced in an area with a car engine noise.

While listening to the section of the sequence you are currently working on, notice that the engine sound at the end of Clip04.avi is cut off abruptly, which sounds unnatural. Let's smoothly fade out the end of the soundtrack of the Clip04.avi clip. This could be done by creating and moving keyframes on the volume line of the Audio 1 track at the end of the clip. However, let's look at other ways Premiere Pro lets you mix sounds.

1. Select the Clip04.avi clip. If you have previously disconnected the audio and video of this clip, select the audio portion of the clip.

2. Place the cursor of the current editing position 20–25 frames to the left of the end of the Clip04.avi clip. The cursor position of the current edit position should roughly match the third keyframe on the Audio 2 track.

3. Click the Effect Controls tab.

4. Expand the Volume effect group on the Effect Controls tab. Note that the Toggle Animation button on the Level parameter is pressed down, allowing you to start creating keyframes right away.

5. Click the Add / Remove Keyframe button located on the right side of the Level group on the Effect Controls tab. A diamond-shaped keyframe appears on the timeline of the Effect Controls tab. A keyframe will also appear on the volume line of the Audio 1 track at its intersection with the current-edit-position cursor.

6. Move the cursor of the current editing position to the end of the Clip04.avi clip.

7. Expand the Level group on the Effect Controls tab by clicking the triangle to the left of the group name to display the volume slider.

8. Move the volume slider to the far left. This position corresponds to the minimum volume level at which the sound is no longer heard.

The timeline of the Effect Controls tab has a graph of the volume change. Between two keyframes (the second was created automatically when the position of the volume level slider was changed), the graph line goes down, which means that the volume level in this area decreases. By the second keyframe, the volume level becomes minimum (Fig. 7.8).


Fig. 7.8. Volume graph of the Clip04.avi clip


Take a look at the audio component of Clip04.avi in the Timeline window. Keyframes were also created on the volume line of this clip, and the trajectory of the volume line between these points has changed (Fig. 7.9).


Fig. 7.9. Keyframes of the Clip04.avi clip


Play this part of the sequence and listen to the end of Clip04.avi. Shortly before the end of the clip, the volume of the car engine gradually decreases, while the volume of the musical accompaniment smoothly rises. In this way you have gently mixed two simultaneously playing audio clips.
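
What you have just built with keyframes is essentially a crossfade: one sound fades out while the other fades in. In code it is often done as an equal-power crossfade, sketched below (again just an illustration of the idea, with placeholder signals and an assumed 25 fps frame rate):

```python
import numpy as np

def equal_power_crossfade(outgoing, incoming, fade_samples):
    """Fade out `outgoing` while fading in `incoming` over the overlapping region."""
    assert len(outgoing) >= fade_samples and len(incoming) >= fade_samples
    x = np.linspace(0.0, 1.0, fade_samples)
    fade_out = np.cos(x * np.pi / 2)        # equal-power curves keep the perceived
    fade_in  = np.sin(x * np.pi / 2)        # loudness roughly constant during the fade

    out = outgoing.copy()
    inc = incoming.copy()
    out[-fade_samples:] *= fade_out
    inc[:fade_samples]  *= fade_in

    # Overlap the faded regions: the engine tail mixed with the rising music
    return np.concatenate([
        out[:-fade_samples],
        out[-fade_samples:] + inc[:fade_samples],
        inc[fade_samples:],
    ])

sr = 44100
fade = int(22 / 25 * sr)                             # ~22 frames at 25 fps
engine = 0.4 * np.random.randn(3 * sr)               # placeholder engine noise
music  = 0.3 * np.sin(2 * np.pi * 330 * np.arange(3 * sr) / sr)
mixed = equal_power_crossfade(engine, music, fade)
```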

We will return to the soundtrack for Clip04.avi later. Now we will mix the musical accompaniment of the film with the soundtrack of the Clip12.avi video. First, you need to decrease the volume of the musical accompaniment in the section of the sequence occupied by the Clip12.avi clip.

1. Scroll the sequence in the Timeline window so that you can see the Clip12.avi clip.

2. Highlight a music clip on the Audio 2 track.

3. Place the cursor of the current editing position a few seconds to the left of the beginning of the Clip12.avi clip.

4. Click the Audio Mixer tab.

5. In the drop-down list at the top of the Audio 2 control group, select Touch (Figure 7.10).


Fig. 7.10. Selecting the automation mode


Read the following practical steps before proceeding to avoid distraction later.

1. Play the sequence by pressing the Play button in the Program screen.

2. Position the mouse pointer over the volume slider located in the Audio 2 group of the Audio Mixer tab and be ready to move it.

3. As soon as the cursor of the current position is in front of the beginning of the Clip12.avi clip, move the slider a little lower and hold it in this position.

4. Release the slider shortly before the end of the Clip12.avi clip. The slider will automatically return to its original position.

The above method requires skill and attention, but it is indispensable when mixing long sound fragments, where you have to frequently change the volume level. You may not succeed the first time - use the History tab to undo the action and try again.

Let's see what happened when mixing in the above way.

Click the Show Keyframes button located in the header area of the Audio 2 track, and in the context menu that appears, choose the Show Track Keyframes command (Fig. 7.11).

Fig. 7.11. Context menu for selecting the keyframe display mode


The above mixing method involves changing the volume not of a single clip, but of the entire audio track. On the Audio 2 track, in the Clip12.avi clip area, you'll see several keyframes (Figure 7.12).


Fig. 7.12. Keyframes of the Audio 2 track


The first keyframes (there are probably several of them) were created by moving the volume slider down. Over the course of these frames, the loudness graph decreases. When you held the slider in one position, no keyframes were created - the line of the volume graph is horizontal. The moment you released the slider, a keyframe was created, the slider began to return to its original position, and the volume began to increase, as evidenced by the rising line of the volume graph. Once the volume slider has returned to its original position, the last keyframe has been created, after which the volume remains unchanged.

Using this mixing method, you can play the project and adjust the volume of the sound track by ear in different parts. For example, throughout the entire movie or fragment, mute the music during dialogue playback and increase the volume of the music where it is provided by the movie script.

Practice lowering the music volume over the section occupied by Clip12.avi several times, undoing the previous mix each time, until the music volume decreases gradually at the beginning of Clip12.avi and rises again at its end.

Now, as independent work, try changing the volume of the Clip12.avi soundtrack located on the Audio 1 track so that the volume of the helicopter engine gradually increases at the beginning of the clip and smoothly decreases at the end. Make sure that the helicopter engine sound does not appear or disappear abruptly.

Automating Mixing Using the Audio Mixer Tab

Mixing using the Audio Mixer tab controls can take place in several modes. The method for creating keyframes is determined by the mode selected at the top of the group of controls for each audio track on the Audio Mixer tab. You can select one of the following mixing modes:

Off. The audio track plays at the volume currently set on the Audio Mixer tab. Any volume changes made to the track earlier are ignored;

Read. Moving the controls does not create keyframes and therefore does not change the volume of the audio track; existing automation is simply played back;

Latch. Keyframes are created only while a control is being moved. In this mode you can, for example, adjust the balance of a track whose volume was automated earlier: new balance keyframes are added alongside the existing volume keyframes;

Write. New keyframes are written unconditionally, and any keyframes previously created in this section of the audio track are deleted;

Touch. Works like Latch mode, except that in Touch mode the controls on the Audio Mixer tab automatically return to their original positions as soon as you release them.

When mixing audio, you can use any of the above modes. Touch mode is useful for changing the volume for a short period, and the volume will return to its original level as soon as you release the corresponding knob. Latch mode, on the other hand, is good for changing the volume on long fragments, since it does not require holding the volume control - it remains in the set position and does not return to its place, as in the case of using the Touch mode.
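
As a rough sketch of the difference between Touch and Latch (purely illustrative; Premiere Pro's actual automation engine is more involved), imagine sampling the fader once per frame and recording its value only while it is being held:

```python
def write_automation(fader_positions, touched, mode, initial_level=0.0):
    """Simulate Touch vs. Latch automation.

    fader_positions: fader value (dB) at each frame
    touched: True while the operator holds the fader
    Returns the automation value actually written for each frame.
    """
    written = []
    level = initial_level
    for pos, is_touched in zip(fader_positions, touched):
        if is_touched:
            level = pos                     # both modes record while the fader is held
        elif mode == "Touch":
            level = initial_level           # Touch: control returns to its original position
        # Latch: keep the last value after the fader is released
        written.append(level)
    return written

frames  = [0.0, -2.0, -5.0, -8.0, -8.0, -8.0]
touched = [False, True, True, True, False, False]
print(write_automation(frames, touched, "Touch"))   # ends back at 0.0
print(write_automation(frames, touched, "Latch"))   # stays at -8.0
```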

In future projects that you will create yourself, try using the Audio Mixer tab and different modes when mixing audio. With experience, you will find that it is very convenient.

On the Audio Mixer tab, there are audio balance controls above the volume sliders. By rotating them, you can shift the balance of the audio track to the right or left audio channel. At the same time, you can also create keyframes in one of the above mixing modes.

At different stages of mixing, especially when a project contains several simultaneously sounding clips, it may be necessary to temporarily mute one audio track or, conversely, all audio tracks except one, in order to concentrate on its sound. For this, the buttons located above the volume slider are intended:

Mute Track (the button depicts a speaker) mutes the volume of the corresponding group's audio track. If you press the Mute Track button in the Audio 1 group, the Audio 1 track will be muted and the Mute Track button will be held down. To reactivate the muted track, press the Mute Track button again;

Solo Track (Trumpet button) mutes the volume of all audio tracks in the project, except for the corresponding group track. If you click Solo Track in the Audio 2 group, the volume of all audio tracks in the project other than Audio 2 will be muted.

Balance change

When making a movie with stereo soundtrack, sound balance plays an important role. Stereo sound is output through two speakers located in front of the viewer at some distance from each other (these can also be TV speakers located on the left and right of the screen). Changing the balance shifts the volume from one speaker to the other, adding dimension to the movie soundtrack. For example, if a movie has a scene with a car driving from left to right, the sound of the car's engine also shifts from the left speaker to the right speaker. This gives the viewer the feeling that he is at the center of the action in the scene.
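
Under the hood, a balance or pan control is just a pair of channel gains. The sketch below uses the common constant-power pan law (an assumption - the text does not state which law Premiere Pro uses) to move a mono engine sound from the center to the left and back, mirroring the car example:

```python
import numpy as np

def constant_power_pan(mono, pan):
    """pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right (per sample)."""
    theta = (pan + 1.0) * np.pi / 4          # map [-1, 1] -> [0, pi/2]
    left  = mono * np.cos(theta)
    right = mono * np.sin(theta)
    return np.stack([left, right], axis=1)   # (n_samples, 2) stereo buffer

sr = 44100
n = 6 * sr
engine = 0.4 * np.random.randn(n)            # placeholder engine sound

# Pan trajectory: center -> left (-0.8, like Balance = -80) -> back to center mid-clip
key_times = np.array([0.0, 1.4, 3.0])        # seconds (roughly 0, 35 frames, mid-clip)
key_pans  = np.array([0.0, -0.8, 0.0])
pan_curve = np.interp(np.arange(n) / sr, key_times, key_pans)

stereo = constant_power_pan(engine, pan_curve)
```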

In this project there is a clip with a car (Clip04.avi), which at the beginning of the film is in the center of the frame, then moves to the left of it, and then back to the center. Let's shift the balance of the soundtrack of the Clip04.avi clip in accordance with the position of the car in the frame.

1. Expand the Audio Effects group located on the Effects tab.

2. Expand the Stereo group located under the Audio Effects group. The Effects tab will display a list of sound effects available for clips with stereo sound (Fig. 7.13).

Fig. 7.13. Sound effects of the Stereo group


3. Select the Clip04.avi clip in the sequence.

4. Click the Effect Controls tab.

5. Drag the Balance effect from the Effects tab to the Audio Effects group of the Effect Controls tab.

6. Expand the Balance effect group under the Effect Controls tab, then expand the Balance parameter to display the slider for adjusting the balance.

7. Move the cursor of the current editing position to the beginning of the clip named Clip04.avi.

8. Click the Toggle Animation button to the left of the Balance option to enable keyframes. A keyframe appears on the timeline of the Effect Controls tab.

9. Move the cursor of the current editing position to the point where the car is located on the left side of the frame (this point is located approximately 35 frames after the beginning of the clip).

10. Move the Balance slider to the left so that the Balance parameter is approximately –80 (a negative balance value indicates that the sound is shifted to the left channel).

11. Move the cursor of the current edit position to approximately the middle of Clip04.avi. At this point in the clip, the car is again centered in the frame.

12. Move the Balance slider to the middle position. The Balance parameter must be zero. This means that the volumes of the left and right channels are the same.

On the timeline for Balance, three keyframes and a balance graph should be created. Initially, the line of the chart goes up, which indicates a shift of the balance to the left, after the second keyframe, the line goes down, and after the third - it is horizontal (Fig. 7.14).


Fig. 7.14. Balance change graph


Play part of the sequence with Clip04.avi. The sound of the car engine now matches the position of the car in the frame. Increase the distance between the speakers or use headphones if you don't feel the difference. You can also temporarily mute the music track so that the sound does not distract you.

When creating future projects, try to animate the soundtrack of the film, shifting the balance in accordance with the events in the frame. In this case, you should not completely shift the balance to one of the channels - it sounds unnatural. Even if the sound source is on the right, the person hears the sound in the left ear, but a little quieter. Consider this when working on soundtracks.

Adding sound effects

Premiere Pro contains a variety of sound effects for sound correction. Let's take a look at how to add a sound effect to a sequence.

Clip12.avi contains the sound of a helicopter engine. Let's make the helicopter sound richer by adjusting the frequency response of the audio component of this clip.

1. Select the Clip12.avi clip in the sequence.

2. Make sure the Effect Controls tab is open.

3. Drag the EQ sound effect from the Stereo group in the Audio Effects group of the Effects tab to the Effect Controls tab in the Audio Effects group.

4. Expand the EQ effect group by clicking the triangle to the left of the effect name, then click the Custom Setup parameter. The Effect Controls tab will display the EQ controls (Figure 7.15).

Fig. 7.15. EQ (equalizer) effect controls


Reinforce the low frequencies while reducing the midrange.

1. Check the boxes for Low, Mid1, Mid2, and Mid3. Markers will appear on the frequency-response graph in the equalizer window (Fig. 7.16).

Fig. 7.16. Frequency response markers


Note

The High checkbox can be left unchecked as the treble level will not change.

2. Drag the Low marker up so that a bend appears on the graph line (Fig. 7.17).

Fig. 7.17. The low frequencies are boosted


3. Move the M1 (Mid1), M2 (Mid2) and M3 (Mid3) markers a little lower. The frequency response line should now look like the one shown in Fig. 7.18.

Fig. 7.18. An example equalizer setting


The graph line you created shows that you boosted the low frequencies (bass) by moving the left part of the line up, and attenuated the mid frequencies by moving the middle part of the line down.

Play part of the sequence with Clip12.avi. The sound of the helicopter engine has become more saturated.

Each marker on the graph line is responsible for a specific frequency range of sound. So, for example, the Low marker is located in the frequency adjustment area in the 50 Hz range. You can change the frequency range of each marker by moving it horizontally. You can also change the marker frequency range by rotating the Freq knob in the corresponding group. The marker will move left or right. For example, to enhance the bass in the 20 Hz region, move the Low marker to the left. The Gain controls are responsible for moving the corresponding markers vertically, that is, instead of moving the markers up or down with the mouse button, you can use the Gain controls.
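
If you want to experiment with the same idea outside Premiere Pro, a low boost and a mid cut can be approximated with standard peaking-filter biquads. The sketch below follows the widely used Audio EQ Cookbook formulas; the 50 Hz and 1 kHz centers and the gain values are just example settings, and the input is a placeholder signal:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs, f0, gain_db, q=1.0):
    """Peaking EQ biquad coefficients (RBJ Audio EQ Cookbook)."""
    a = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / den[0], den / den[0]

fs = 44100
helicopter = 0.3 * np.random.randn(3 * fs)               # placeholder for the Clip12.avi audio

b_low, a_low = peaking_biquad(fs, 50.0, +6.0, q=0.9)      # boost the lows around 50 Hz
b_mid, a_mid = peaking_biquad(fs, 1000.0, -4.0, q=1.0)    # cut the mids around 1 kHz

processed = lfilter(b_mid, a_mid, lfilter(b_low, a_low, helicopter))
```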

An equalizer can often help you achieve acceptable sound even from a seemingly hopeless recording, so you should carefully study how it works and experiment with its settings.

You can practice by applying the EQ effect to the music clip on the Audio 2 track. Experiment with different equalizer settings. Try boosting the bass and treble of the music a little, and you will hear that the sound becomes richer.

You can master the rest of the sound effects yourself by applying them to the sound clips of the project and changing their parameters. Remember that any sound effect can be removed by highlighting its name in the Effect Controls tab and pressing the Delete key.

More extensive possibilities for editing and processing sound are provided by Adobe Audition, a powerful sound editor with many tools. If this program is installed on your computer, you can load a sound clip from the project into it for processing. To do this, select a sound clip in the sequence and choose the menu command Edit > Edit in Adobe Audition. Audition will launch and open the selected clip. When you finish processing the audio file and close Audition, the changes are applied to the clip in the movie project.

Recording audio with Premiere Pro

Premiere Pro lets you record audio from an external source, such as a microphone, while your project is playing. This is useful for recording voice-overs - you follow the events of the movie on the Program screen and comment on them at the same time. The recording takes place in real time.

Before you start recording a commentary, you should connect a microphone to the microphone input of the sound card, activate the microphone input in the Windows mixer and adjust the recording level (you can read how to do this in the Windows help system or in the corresponding literature). When using speakers to reproduce sound, turn them off or turn down the volume so that no feedback (whistling) occurs between the speakers and the microphone. The best solution would be to use headphones.

1. Highlight the audio track to which you want to add the recorded commentary. The best way to do this is to select a track that does not contain audio clips.

2. Move the current-edit-position cursor to just before the start of the movie events for which you want to record a commentary.

3. Click the Audio Mixer tab.

4. Click the Enable Track for Recording button in the highlighted track group (the microphone button above the volume slider).

5. Click the Record button located at the bottom of the Audio Mixer tab (red circle button). The highlighted track will be put in record-ready state.

6. Click the Play button at the bottom of the Audio Mixer tab or Program screen. The project starts playing from the point marked by the cursor of the current edit position.

7. Watching the events in the frame, say the desired comment.

8. Press the Stop button at the bottom of the Audio Mixer tab or Program screen to end recording the commentary. The sequence will stop playing and a new clip will appear on the audio track you specified. This is the comment you recorded (Figure 7.19).


Fig. 7.19. An audio clip recorded with Premiere Pro


All editing and processing tools can be applied to the commentary clip on the audio track. This clip refers to an audio file that Premiere Pro automatically creates in your project folder.

You can not only record a commentary over the film, but also completely re-dub the project if the film's dialogue does not suit you in content or recording quality. You can also record sound several times over the same section of the film (for example, dub dialogue with several voices) by adding the required number of audio tracks and placing the recordings on them.

Features of creating soundtrack in 5.1 format

This section is theoretical. You are creating a movie with a stereo soundtrack, so it does not apply to your project; it describes only the basic steps and methods of creating a project with 5.1 surround sound. The 5.1 format implies audio output through six speakers: five satellites and one subwoofer. Four satellites are usually installed in the corners of the room (the front and rear pairs), and one, the center speaker, is placed in front of the viewer. The sixth speaker, dedicated to reproducing bass, is the subwoofer; it can be installed anywhere, depending on the acoustics of the room.

The viewer is surrounded by speakers, each of which reproduces an independent soundtrack channel. 5.1 surround sound creates the illusion of being on the scene of the film for the viewer.

Premiere Pro lets you create movies with this sound. To do this, create a new project and prepare mono audio files that will be played by each speaker separately, or create one six-channel audio file, which can be done in Adobe Audition.
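
A minimal sketch of the second approach (a single six-channel file) using NumPy and the soundfile library; the stem file names are hypothetical, and the channel order shown (L, R, C, LFE, Ls, Rs) is the common WAV ordering:

```python
import numpy as np
import soundfile as sf

# Hypothetical mono stems, one per speaker, all at the same sample rate
stem_names = ["front_left.wav", "front_right.wav", "center.wav",
              "lfe.wav", "rear_left.wav", "rear_right.wav"]

stems = []
sr = None
for name in stem_names:
    data, rate = sf.read(name)
    if data.ndim > 1:                 # make sure each stem is mono
        data = data.mean(axis=1)
    sr = sr or rate
    assert rate == sr, "all stems must share one sample rate"
    stems.append(data)

length = min(len(s) for s in stems)                            # trim to the shortest stem
multichannel = np.stack([s[:length] for s in stems], axis=1)   # shape: (frames, 6)

sf.write("surround_51.wav", multichannel, sr)
```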

When creating a new project, you should immediately specify the number and type of audio tracks in the sequence. To do this, you need to do the following.

1. In the New Project dialog, go to the Custom Setup tab.

2. In the Audio group, select 5.1 from the Master drop-down list.

3. In the Mono field, specify the number of monophonic tracks - 6 (Fig. 7.20).


Fig. 7.20. Sequence settings for a 5.1 project


After closing the dialog, six monophonic audio tracks and a 5.1 master track will appear in the sequence (Fig. 7.21).


Fig. 7.21. Track header area in a 5.1 audio project


The most important job comes when mixing the soundtrack of such a project.

Take a look at the Audio Mixer tab (Figure 7.22).


Fig. 7.22. The Audio Mixer tab in a 5.1 project


Instead of stereo balance controls, each track group now contains a rectangular graphic element symbolizing the room.

To output the sound of a track to a specific speaker, move the marker located in the center of the "room" to the appropriate position. For example, to play the first track from the left front speaker, move the marker to the upper left corner (Fig. 7.23).

Fig. 7.23. The track's sound shifted to the front left speaker


Five cells are highlighted on the diagram where you can place the marker. The marker does not have to be moved all the way into a corner; you can simply shift it toward one. In that case the sound is distributed between the speakers, and the speaker closest to the marker plays the corresponding track louder.
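
A crude way to model "the closer speaker plays louder" is to derive each satellite's gain from its distance to the marker and then normalize (this is only an illustration of the idea, not Premiere Pro's actual panning algorithm):

```python
import numpy as np

# Speaker positions in the "room" diagram, x/y in [-1, 1]
speakers = {
    "front_left":  (-1.0,  1.0),
    "front_right": ( 1.0,  1.0),
    "center":      ( 0.0,  1.0),
    "rear_left":   (-1.0, -1.0),
    "rear_right":  ( 1.0, -1.0),
}

def speaker_gains(marker_xy):
    """Louder for speakers closer to the marker; gains normalized to constant power."""
    pos = np.array(marker_xy)
    dists = {name: np.linalg.norm(pos - np.array(xy)) for name, xy in speakers.items()}
    raw = {name: 1.0 / (d + 1e-3) for name, d in dists.items()}   # closer -> larger
    norm = np.sqrt(sum(g * g for g in raw.values()))
    return {name: g / norm for name, g in raw.items()}

# Marker pushed toward the front-left corner (as in Fig. 7.23)
print(speaker_gains((-0.8, 0.8)))
```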

The subwoofer marker can be left in the center of the diagram, but for its track you should reduce the treble and boost the bass. This is done with the knobs to the left of each room diagram. The bass knob (Fig. 7.24) of each track determines how much of the low-frequency component of that track's sound is sent to the subwoofer.

Fig. 7.24. Bass control


When mixing audio, you can also create keyframes to change the volume and panning of each speaker's sound individually. You can smoothly move the playback of a track from one speaker to another, for example from the front left corner of the room to the rear right. To do this, move the marker across the diagram while keyframe creation mode is enabled.

Creating a surround soundtrack is a long and painstaking work. This section is only intended to provide you with the basics of creating such sound, so work with projects that contain stereo sound for now, and when you're more comfortable using Premiere Pro and audio editing programs, you can try creating a surround sound movie.

Summary

In this chapter, you learned about the basic methods of creating and editing soundtracks for a movie. Film scoring should be taken as seriously as video footage, so pay enough attention to it. Try to use quality speakers or headphones for monitoring the soundtrack. Otherwise, you may simply not hear some of the nuances of the soundtrack, and in the future, when playing a movie on high-quality equipment, all the flaws that you did not notice while working on the sound may appear.

This completes the creation of the film project. You can review the entire sequence you created from start to finish and make adjustments as needed.

In the final chapter, you will turn your project into video files of various formats.

Mixing live sound is a demanding process that leaves little room for error.

Not every band can afford a personal sound engineer, especially a small band that plays corporate parties and private events. An extra person is another member you have to share the fee with, which pleases no one - except, of course, the sound engineer. Therefore, a small band can and should "make its own sound".

In this article, we will look at the basic techniques and methods for successfully mixing sound on your own at various events featuring live bands, given the limited time usually available to set up before the event starts.

Mixing: creating live sound.

Installation.

The mixing process begins with the most sensible placement of the sound equipment in the hall. At this stage, it is important to decide how you will set up the main (portal) speakers: behind your back, so that you hear the sound directly from them, or in front of you, so that you hear only the monitors. The first option makes sense if you have no monitors. It increases the likelihood that your microphones will feed back (self-oscillate), but with care you can work this way. Just do not forget to switch off or mute the microphones between sets.

Next, choose a place for the mixing console. It is best set up on a table or a special stand; it is even better when the console is mounted in a rack. When I work, the console is behind my back or to my right, at waist level and within reach. For me this is the optimal placement, since I operate the console myself.

This is followed by connecting the equipment. Cables must be routed so that no one can snag or trip over them. The same applies to the extension cord used when the power outlet is far from where the equipment is set up. If necessary, tape the cable securely to the floor where people walk, or lay some kind of carpet over it; better still, if possible, route the cable above the walkway.

After that, play a test recording - a track you know well. Raise the channel and master faders of the console and slightly increase the amplifier volume to check that all the speakers are working and that there are no whistles, hum or mains interference. Only after that should you bring the crossover (if one is used) and the amplifier up to their operating levels.

Next come the installation and connection of the microphones and instruments. It is more logical to connect them to the console in the order the performers stand on stage, to make it easier to find your way around. You can label the corresponding performer, microphone or instrument underneath the mixer faders. Then check that the instruments and microphones are working.

Frequency division, equalization and processing.

When using a multiband amplification system, we set the frequency crossover points and the level for each band.

For equalization, either an external 31-band equalizer is used, connected to the outputs of the mixing console, or a 9-band output equalizer built into the mixer, or only a three-band equalizer on each channel. The goal of equalization is to prevent the microphones from self-oscillating and to create the desired tonal balance in the sound.
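
When one frequency starts to ring (feedback), the corresponding graphic-EQ band is pulled down; in software the same idea is a narrow notch filter. A sketch with SciPy, assuming the howling frequency has already been identified (2.5 kHz here is just an example):

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 48000                      # sample rate of the live feed
feedback_freq = 2500.0          # example: the frequency that starts to howl
q = 30.0                        # high Q -> narrow notch, minimal effect on the rest

b, a = iirnotch(feedback_freq, q, fs)

mic_signal = 0.1 * np.random.randn(fs)       # placeholder for the live mic signal
cleaned = lfilter(b, a, mic_signal)          # same signal with the ringing band notched out
```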

I use a two-channel 15-band equalizer - a Behringer FBQ1502 with feedback indication - at the output of the mixing console, before the crossover (when one is used). In the photo, to the left of the mixing console, are the processing devices I usually use (top to bottom):

  • a Behringer Virtualizer Pro processor, used as a two-channel compressor (I now have a four-channel dbx 1046 compressor);
  • a Behringer FBQ1502 equalizer;
  • an Alto X-P234 active crossover (2/3-way stereo or 4-way mono).

First, you should put on a familiar recording and adjust the equalizer of its channel by ear. From it, you can roughly determine what to do next with the microphone and instruments.

Usually I roughly know where the EQ knobs on the mixer should sit for the microphones and guitars. To start, you just need to speak into each microphone ("one, two, three, sausage, diner") and get a general idea of which frequencies to cut and which to add. The same goes for the instruments. We form a rough idea of the sound of each of them, and then move on to the sound check.

Sound check.

At this stage there is usually very little time left, so during the sound check we play one or two songs, no more. The rest we adjust during the performance itself, over the first couple of songs; for this I pick pieces in which my hands are not too busy and the sound can be tweaked.

First, the Gain knobs are used to set the preamp level of each microphone and instrument. As a rule, the preamp knobs on the console are set to the 12 o'clock position; the microphone preamp gain may be a little lower, depending on how prone the microphone is to feedback in the given room. The console's channel and master faders are then positioned around 0 dB, and during the sound check the volume of each channel and the overall volume are adjusted.

During the test performance of a song, I go out into the hall with a microphone and listen to the sound there. Then I return to the stage and match the sound from the monitors on stage to the sound in the hall. The main task is to achieve roughly the same level and tonal balance both in the hall and on stage; to do this, I use the monitor EQ. As I already noted, there is no separate sound engineer in our band, and during the performance I have no opportunity to go out into the hall and listen. But because the stage sound has been matched to the hall sound in advance, adjusting the overall sound is no longer done blind.

Subgroups and submixing.

To control several sound sources at once (for example, the microphones on a drum kit), the mixing console's subgroups are used (if it has them). Using subgroups reduces the number of faders needed to control the overall sound of the system.

Submixing can also be done from an additional console - a submixer. You can connect drum-kit microphones to it, as well as other microphones (as I often do). Several submixers can also be used in one system.

Setting up monitors.

This process is no less important than building the main (front-of-house) system: if the performers cannot hear themselves and their neighbours well, do not expect a good performance, even with great sound in the hall.

When we perform together to backing tracks, we need only one or two active monitors, which I connect to the console's CTRL Outs, which have their own volume control. Note that I never use pre-fader Aux sends for monitoring in this case, because, as mentioned above, I need to achieve roughly the same sound both in the hall and on stage. After all, during the concert I am on stage, together with my mixing console and processing devices.

Here is another reason why I do not use pre-fader Aux sends when I am on stage rather than out in the audience. Once I decided to set up monitoring "by the book", through the pre-fader Aux sends. We played, and on stage everything sounded great. After a while, the wife of one of the vocalists came up from the audience and said that her husband's voice was completely inaudible in the hall.

It turned out that in the rush I had simply forgotten to release the Mute button on his mic channel, which had no effect on the level of the pre-fader send. So his voice was heard on stage but not in the hall, and I had no way to hear what was going on in the audience.

The monitoring issue is especially acute at gigs where I play bass guitar myself. In that case I try to play while sitting (or standing) right at the mixing console, if there is such an opportunity.

Recently my children have been helping me a lot: they sit next to me, and I tell them which faders to move and which knobs to turn, because my hands are busy with the instrument.

For the "three-in-one" lineup, I monitor via the pre-fader Aux sends of the main console. If it has at least two pre-fader and one post-fader send (not counting the built-in effects processor), I additionally use three channels of a monitor mixer - a Yamaha Stagepas 500 sound system, whose passive speakers serve as the first monitor line for the vocalists.

Active speakers - a Behringer B210D or FBT ProMaxX 10a - are connected to the Yamaha's monitor outputs (which have their own level control); this is the second monitor line, for the percussionist and guitarist. In this setup I listen to and adjust the sound from the audience.

So, if the main console (for example, an Alto L16) has three free sends, the signal reaches the monitor mixer as follows:

  • From Aux 1, all the vocals and the acoustic guitar played by our female singers are fed to the first Yamaha channel. This channel is processed with the Yamaha's built-in reverb so that the vocals on stage are not dry and the vocalists feel comfortable.
  • From Aux 2, the percussion is fed to the second Yamaha channel, primarily the cajon. You can add a little reverb here as well, but very sparingly.
  • From Aux 3, the second guitar and the bass are fed to the third Yamaha channel. Here, of course, there is no reverb. I send only a little bass so that it does not interfere with the vocalists; in the hall I push it up exactly as much as needed, but again without fanaticism.

If I work with a Mackie 1402 VLZ PRO console, which has only one monitor send, I connect the Aux Send 1 output to one Yamaha channel and distribute the sound to all the monitors, both active and passive. In that case the vocalists have to do without their own monitor reverb...

Conclusion.

Of course, the art of live mixing is not learned in a day; it is the result of many years of concert practice in very different conditions. But the approaches listed here will make the process easier. Good luck!

Mixing is perhaps the most critical stage in creating records, the best of which outlive their authors.

When recording (now often called "tracking") the primary audio material, you can sometimes hear from the musicians and the producer the phrase: "Let's fix this problem in the mix." Fortunately, this phrase is becoming history - many sound engineers understand that if a part sounds bad in the multitrack session, it will not be easy to correct when mixing. If an error can be fixed during tracking simply by re-recording a poorly played fragment (especially when recording live music), then at the mixing stage you will have to spend several times more effort to mask the problem. All the digital wonders - shifting inaccurate notes in time, correcting intonation with autotune, replacing timbres - work only for the dying genre of "pop", but we are engaged in real live music.

Mixing multitrack recordings to stereo is an art. Therefore, there are no purely technological methods like “if it sounds like this, you have to do this”. Each movement of the faders (virtual or real) is determined primarily by what sounds from your monitors, and here the main rule is: if the recording sounds good - just leave it, don't touch anything! I just want to add that you still have to check: if the soundtrack sounds good from your monitors, will it sound the same on different audio systems of potential buyers of your music? In any case, the most important thing when mixing is to send some kind of "message" to the listener through the music you are working with. And whatever this "message" may be - from the most sublime to the most mundane, believe me, it is the most important component of the recording.

The first thing you need to mix successfully is a well-recorded multitrack session. If you have one, the mixing process will be smooth, calm and painless; if not, then all your creative energy will be spent not on crafting a "message to posterity" but on fighting plain defects in the original recording. The worst situation you can find yourself in is being asked to mix material that someone else recorded badly. I have been in this position more than once, and I advise you not to get into it. Automation of the mixing process helps to correct technical errors, but no one has yet invented knobs that breathe energy and life into a recording.

If the initial recording is good, I still have to disappoint some inexperienced colleagues: there are no exact recipes for how a mix should be done, since the only way to learn to mix successfully is to walk the thorny path of trial and error. Still, there are some tips that will help a novice sound engineer get decent, and eventually good, sound. No academy will teach you how to make great sound - everyone learns this alone at the console. My favourite way of working, best suited to the music I like to mix (rock and pop), is first to focus on getting the most out of each track of the multitrack session and then to combine them into a coherent whole according to an overall plan.

The tracks of a typical multitrack session can be roughly divided into those that sound too "thin" and need thickening, and those that are excessively "meaty" and need their spectrum cleaned up and their sound focused. There is, fortunately, a third category - tracks free of both drawbacks.

To begin with, before mixing you should listen carefully to all the tracks of the future composition (I dislike the word "thing", which is often used for a song or an instrumental piece) and decide which category each belongs to. All this is part of the creative process, and by solving such problems you breathe your own individual style into the music you mix.

Drum machine samples and raw synthesizer sounds often fall into the "thin" category. In the days when DX-type synthesizers dominated, this was a real problem, but today, when even a small studio has a wide range of software and hardware processing tools, "skinny" mixes have no right to exist. Thickening thin sounds can be done at any stage - while recording the primary tracks or while mixing. One of the best and most proven methods, which still works, is to play the sound through a speaker system and pick it up with a microphone. It is believed that this technique is only good for electric guitars, but it is also effective for synthesized sounds, drum machine samples and even vocals, if you are after an unusual sound. Even in our ultra-technological time, a small guitar amp (preferably a tube one) can work a miracle on a thin, lifeless sound. When playing a bass guitar through such a combo you will not, of course, get powerful lows from the speaker itself because of its small size, but when picking the sound up with a microphone, the speaker size does not matter: by placing the mic close to the cone you get excellent bass. This signal can also be blended with the original dry signal.

One way to thicken a sound is reverb, but it is important not to overdo it: with reverb you can get a dense sound, but you can also end up with the sound of a swimming pool. To thicken a sound, a short reverberation at a fairly high level is most often used; early-reflection programs and short gated reverbs work well. The "tails" of these treatments are practically inaudible - such reverberation merges with the original signal and thickens it. The output of the reverb should be mono and panned to the same point in space as the main signal of the instrument being processed. In our age of surround this recommendation may seem old-fashioned, but the technique works consistently: a short mono reverb "sticks" to the original signal, whereas a stereo reverb is spread between the left and right monitors. Compression, EQ, and chorus and flanger effects are also good thickening tools.

Chorus, in addition to thickening, creates the illusion of space and movement, but it pushes sources deeper into the imaginary soundstage, even deeper than reverb. If you need to push the chorus-processed sound forward, try panning the unprocessed sound to one side and panning the chorus-processed sound the other.

Reverb creates a sense of space, but when mixing you must keep in mind that it also tends to "blur" the localization of the original sound in the stereo image, just as happens in real life. Try to determine where a sound is coming from in a church or a swimming pool - it feels as if every sound comes from under the dome!

Now about tracks that sound too dense. Young sound engineers often prefer to boost the areas of the spectrum that give them more density, but remember that the equalizer can also be used to suppress unwanted, "cluttering" frequencies - and this approach is usually more effective. Every musical instrument has characteristic frequencies responsible for the "energy" of its sound. If the equalizer significantly boosts these frequencies, the instrument becomes a parody of itself (cartoonists use a similar technique, exaggerating the most characteristic traits of their subjects). By attenuating these characteristic frequencies, you allow the parts of the instrument's spectrum responsible for transparency and intelligibility to come through. Again, it takes a lot of trial and error to figure out which of these methods improves the sound of the song you are mixing and which does not.

Using time correction, you can fix inaccurately played, time-shifted notes - for example, when the bass player is slightly ahead of the kick drum. In modern sequencers this is very easy to do, but in the days of editing analog tape with scissors it was a real problem. Back then, the problem of synchronizing bass and kick was solved with a gate inserted in the bass chain: the kick drum signal was fed into the gate's sidechain and controlled the moment the bass note was allowed through. True, in this case the gate "ate" the attack of the bass, but it was still better than doubled notes.
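
In code terms, that old trick looks roughly like the sketch below (an illustration only, with placeholder signals): the kick drum's envelope acts as the sidechain key, and the bass passes through only while the key is above a threshold:

```python
import numpy as np

def sidechain_gate(signal, key, sr, threshold=0.1, release_ms=80.0):
    """Gate `signal` (bass) using the envelope of `key` (kick drum) as the sidechain."""
    # Simple envelope follower for the key: rectify and smooth with a one-pole release
    release = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(key)
    level = 0.0
    for i, x in enumerate(np.abs(key)):
        level = max(x, level * release)
        env[i] = level
    gate = (env > threshold).astype(float)        # 1 where the kick is sounding, else 0
    return signal * gate

sr = 44100
n = 2 * sr
bass = 0.5 * np.sin(2 * np.pi * 55 * np.arange(n) / sr)     # placeholder bass track
kick = np.zeros(n)
for beat in (0.0, 0.5, 1.0, 1.5):                           # kick hits every half second
    start = int(beat * sr)
    kick[start:start + 2000] = np.hanning(2000)

gated_bass = sidechain_gate(bass, kick, sr)
```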

Now that we have the session tracks in relative order, let's start building the mix. I practice two main mixing methods - "building a house" and "painting a picture" - which we will explore using a typical rock/pop lineup of drums, bass, electric guitars, keyboards, lead vocals and backing vocals. Percussion (bongos, etc.) and a small brass section may also be present.

The first, simpler and more common method is to build the mix like a house: first we lay a foundation, put up a frame of beams, raise the walls, crown it all with a roof, and finish our "construction" with decorative work. I think the analogy is accurate enough.

The foundation will most likely be the drums and bass (although there are exceptions - for example, a guitar ballad supported by a string section; a typical example is "Yesterday" by The Beatles). In rhythmic pieces it is important that the rhythm section be a solid support for the rest of the instruments. I start by bringing the kick drum up to a level where the console's RMS meters read about -10 dB below maximum. Next, I balance the snare so that it sits comfortably against the kick. The kick often plays the strong beats while the snare plays the backbeats, and the "rhythmic swing" formed by this pair should create a pulsating, springy movement.
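
For readers working in a DAW rather than on a console meter, here is a small sketch that computes the gain needed to bring a track to a target RMS level; the -10 dBFS default mirrors the -10 dB reading mentioned above, but the helper itself is purely my own illustration.

```python
import numpy as np

def gain_to_target_rms(x, target_dbfs=-10.0):
    """Return the linear gain that brings a track's RMS level to target_dbfs
    (0 dBFS = digital full scale)."""
    rms = np.sqrt(np.mean(np.square(x)))
    current_dbfs = 20 * np.log10(rms + 1e-12)
    return 10 ** ((target_dbfs - current_dbfs) / 20.0)

# kick *= gain_to_target_rms(kick, target_dbfs=-10.0)
```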

Now add the bass and the rest of the drum kit (hi-hat, toms, cymbals). I prefer to gate the hi-hat (except in jazz pieces) - this helps to some extent reduce the bleed between the snare and hi-hat mics. Gating has become a standard part of the mixdown process, but it is not worth setting the gate to its full suppression range. Firstly, the sound can turn out choppy, and secondly, a little bleed between the gated sounds gives the toms, hi-hat and kick a more natural sound.

You will also find that a gate set to about 12 dB of attenuation opens faster than one set to full suppression. At this stage you should make sure the foundation is solid and monolithic - not a single beat or note breaks the rhythm. Micro-correction in time may again be required so that the bass and drums turn into a single "machine". For example, in rock music the snare is often played "late", which gives the piece a sweeping, resilient feel. If the brass plays exactly on the beat, you can pull it back a little and it will start to swing. If everything is done correctly at this stage, the rest of the work becomes an easy stroll through our new house.
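
Below is a minimal sketch of a gate whose attenuation is limited to about 12 dB rather than full muting, as recommended above. The threshold and time constants are assumptions to be tuned by ear; the sample-by-sample Python loop is slow but keeps the logic readable.

```python
import numpy as np

def range_gate(x, sr, threshold_db=-35.0, range_db=12.0,
               attack_ms=1.0, release_ms=80.0):
    """Gate that attenuates by at most range_db instead of fully muting.
    The limited range keeps a little natural bleed and opens less abruptly."""
    thr = 10 ** (threshold_db / 20.0)
    floor = 10 ** (-range_db / 20.0)            # 12 dB range -> gain never drops below ~0.25
    a_att = np.exp(-1.0 / (attack_ms * 1e-3 * sr))
    a_rel = np.exp(-1.0 / (release_ms * 1e-3 * sr))
    env, g = 0.0, 1.0
    gain = np.empty(len(x))
    for i, s in enumerate(np.abs(x)):
        env = max(s, env * a_rel)               # peak envelope with release
        target = 1.0 if env >= thr else floor   # closed state = -12 dB, not silence
        coeff = a_att if target > g else a_rel  # open fast, close slowly
        g = coeff * g + (1.0 - coeff) * target
        gain[i] = g
    return x * gain

# hihat_gated = range_gate(hihat, 44100, threshold_db=-38.0, range_db=12.0)
```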

The skeleton of the mix usually consists of "pad" instruments (strings, choir) that form the harmonic structure of the piece. Pan the stereo keyboard sounds hard left and right and bring them up to a level where the "gaps" between the bass notes are filled - not too quiet, but not so loud that the pads distract the listener from the bass line. With inexpensive keyboards, make sure the sounds they generate have no phase problems before panning.

Now we introduce the rhythm guitar, if there is one. If the guitar is overdriven (overdrive, distortion), its spectrum is wide enough to mask the keyboards and compete with the vocals. I usually cut the high frequencies of heavily overdriven guitars significantly (by up to 9 dB) - such guitars turn up more and more often - and boost the 800 Hz ... 1.2 kHz region by 3 ... 6 dB. If there are two overdriven guitars in a piece, it is best to record them through amps from different manufacturers (e.g. Mesa/Boogie and Fender); ideally the guitars themselves should also be of different brands.

The resulting structure can then be crowned with the lead vocal or, in an instrumental piece, the solo instrument, placed in the center of the mix. Sometimes, to seat the vocal (or soloist) comfortably, a "frequency niche" is cleared for it in the instrumental backing. Reverb is a useful tool for vocals, letting them rest harmoniously in the overall mix, but excessive reverb pushes the vocal to the back of the stage, and we want to hear the voice up front. Therefore set the pre-delay to 60 ... 100 ms and make sure the vocal sits in the right place in the mix.
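
A small sketch of the pre-delay idea: the reverb send is simply delayed by 60-100 ms before convolution with an impulse response, so the dry vocal arrives first and stays up front. `reverb_ir` is assumed to be any impulse response you already have; the helper is illustrative.

```python
import numpy as np

def predelayed_reverb_send(dry, sr, reverb_ir, predelay_ms=80.0):
    """Delay the reverb send by 60-100 ms so the dry vocal stays in front
    and the reverb tail follows behind it."""
    pre = np.zeros(int(sr * predelay_ms / 1000.0))
    send = np.concatenate([pre, dry])
    wet = np.convolve(send, reverb_ir)[: len(dry)]
    return wet  # mix this under the dry vocal to taste

# wet = predelayed_reverb_send(vocal, 44100, hall_ir, predelay_ms=80.0)
```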

Reverb is one of the most important and most frequently used processors in the studio, so you should not use low-quality software plug-ins - besides sounding poor, they can also eat up a lot of CPU. Use a high-quality hardware reverb or a high-quality plug-in to process an element as important as the vocal, and record the processed vocal to a separate track. Do not process bass sounds with long reverbs unless you are after a particularly original effect - such treatments make the mix sound muddy and "bubbling". If you want to add size to the kick drum, try short algorithms such as ambience or gated reverb. If at some point you want to process all the drums with reverb, cut the lows in the signal sent to the reverb - you will get a more transparent sound.

Percussion (bongos, congas, maracas, cowbell, tambourine, etc.) and backing vocals serve as the "decoration". Here reverb is mainly needed to fill the "gaps" in the musical texture.

The second method of mixing phonograms is more suitable for sound engineers with a rich imagination, and I call it the "painting a picture" method. Imagine that your control monitors are a window to another world, where sounds live, and you are painting a three-dimensional picture of this amazing world. In your picture, every sound has its place, color, brightness and even trajectory.

The first step is to decide on the location of each instrument from the track list. The most important instruments (kick drum, bass, lead vocal) go front and center of the mix; supporting instruments are placed behind them, a little toward the back of the stage. Toms and pads are panned across the full width of the stereo base.

Having drawn the plan of our "world of sounds", we begin to implement it. Instruments can be panned left or right, but they can also be moved forward or pushed back in depth. To place an instrument at the back of the stage, do the following (a code sketch of these steps follows the list):

Reduce the level of its signal;

Cut high and low frequencies on it;

Add reverb.
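
Here is the promised sketch of those three steps in numpy/scipy form: lower the level, narrow the bandwidth, and add a touch of reverb. All parameter values are illustrative assumptions, not prescriptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def push_to_back(x, sr, level_db=-6.0, low_hz=150.0, high_hz=6000.0,
                 reverb_ir=None, wet_db=-12.0):
    """Apply the three steps above so the source seems to sit deeper in the stage:
    reduce the level, cut highs and lows, and add some reverb."""
    sos = butter(2, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    y = sosfilt(sos, x) * 10 ** (level_db / 20.0)
    if reverb_ir is not None:
        wet = np.convolve(y, reverb_ir)[: len(y)]
        y = y + wet * 10 ** (wet_db / 20.0) / (np.max(np.abs(wet)) + 1e-12)
    return y

# distant_keys = push_to_back(keys, 44100, level_db=-8.0, reverb_ir=room_ir)
```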

It is not at all easy to get a sound source to sink into the mix to the required depth and occupy the right plane. Movement forward or backward can also be achieved with exciters and even ... compressors (I discovered this effect with the Teletronix LA-2A). Today it has become fashionable to "plant" the lead vocal right in the listener's face.

Mixes built on these two principles have their own characteristic sound: mixes assembled by the "construction" method sound transparent and grand, while mixes created on the "painting" principle sound natural, organic and "orchestral".

Bringing an element of randomness to music can produce unexpected and creative results. Therefore, there is a third, alternative mixing method - to put the track faders in random positions and critically listen to the result!

The first part of the article discussed correcting the primary tracks, the choice of processing devices and, most importantly, defining the overall mixing strategy. Now it is time to walk through all the stages of mixing, from how the sound engineer should listen to the music being mixed to the relationship with the producer and the musicians. (Next time we will look at the basics of mastering and the requirements for a finished mix delivered to a customer, who will most likely hand it over to the CD plant.)

So let's continue with our exploration of the question - what do we call a good mix?

The individual signals that make up the mix blend into an integral texture more easily if they are first cleaned of uninformative frequencies (I call this process "primary shaping"). For example, most vocals contain nothing below 80 ... 100 Hz, unless of course your vocalist is a basso profundo. It is therefore pointless to leave the vocal channel open all the way down to 20 Hz: low-frequency thumps from the vocalist's breathing and foot stomping will creep into the final mix. Use low-cut and high-cut filters, and attenuate frequencies that are simply absent from the spectra of the instruments being mixed. It is worth brushing up on those spectra: for example, in the sound of a clean (unoverdriven) electric guitar, frequencies above 4 ... 5 kHz carry little information; for violins it is everything below 200 Hz, for the snare below 100 Hz, and so on.
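
A minimal scipy sketch of this "primary shaping" high-pass on a vocal; the 90 Hz cutoff sits inside the 80-100 Hz range mentioned above, and the function name is my own.

```python
from scipy.signal import butter, sosfilt

def primary_shaping_highpass(x, sr, cutoff_hz=90.0, order=4):
    """Remove the uninformative region below ~80-100 Hz from a vocal track
    (breath thumps, foot stomps) before it goes into the mix."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, x)

# vocal = primary_shaping_highpass(vocal, 44100, cutoff_hz=90.0)
```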

One of the main operations a sound engineer performs when mixing is setting fader levels to get a good balance. Naturally, balance is a matter of the producer's taste, but even here a few simple tricks make the engineer's life much easier. The first thing to do is listen to a roughly balanced mix and see whether there are sections of the song that can be mixed without touching the fader levels - the longer such sections are, the better. Oddly enough, these fragments can contain the most important elements: vocals, instrumental solos and hooks. Collect the logical parts of the mix into subgroups - drums, backing vocals, pads. This lets you control the levels of grouped elements with one or two faders, which makes balancing the mix easier and more precise.

When building a mix, try to make it sound as close as possible to the desired result even before the effects stage. After that, if necessary, you can compress the vocal and process it with a little reverb; at the same time it must sit comfortably on the instrumental accompaniment. By and large, effects are only needed to give the mix a final polish - they should not be used to compensate for a poor balance or to "cure" sloppily played parts. Continuing the theme, I advise you to resist the temptation to fix a poor performance with a "bouquet" of effects - it never works! For example, with reverb you can place sound sources at different points in space, but if the parts are played raggedly, the effects will only "smear" these ragged sources, and the problems will remain.

When blending dissimilar sound textures into a common mix, it is sometimes difficult to make them merge into a single fabric, especially with synthesized sounds: you can end up with many instruments playing at the same time, but not together. One way to integrate such sounds is to apply one common effect, in different proportions, to different elements of the mix. The first thing that comes to mind is reverb, and there is nothing original in this approach - many great recordings of the 70s and 80s were glued together this way. Avoid panning bass sounds off to one side of the stereo picture: they carry a lot of energy and should be distributed evenly between the left and right speakers. Bass sounds contain a minimum of spatial information, but their upper harmonics can sound more directional.

In the course of the piece, do not change the level of the drums and bass without good reason: the rhythm section traditionally serves as the background against which the other instruments sound. Within the rhythm section, its natural dynamics should dominate; do not create artificial dynamics with the faders.

In dense mixes, certain instruments, such as overdriven guitars and synth pads, can be ducked, with the lead vocal as the control signal for the ducker. This "cleans up" the mix for these sources: when the vocal appears, the level of the instruments occupying the mid-frequency part of the spectrum drops by 2 ... 3 dB. Even minimal ducking improves the transparency of the song, and the lyrics become surprisingly "readable"! A ducker can be built on a compressor or a gate: use a fast attack time and set the release time by ear. A short release time can produce audible pumping, but in rock music that only adds extra drive.
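
A hedged sketch of a vocal-keyed ducker: an envelope follower on the lead vocal drops the bed by a fixed 2-3 dB while the vocal is present. Thresholds and time constants are assumptions to be tuned by ear.

```python
import numpy as np

def duck(bed, key, sr, max_cut_db=3.0, threshold_db=-40.0,
         attack_ms=5.0, release_ms=250.0):
    """Lower 'bed' (pads, overdriven guitars) by up to max_cut_db while the
    'key' signal (lead vocal) is present."""
    thr = 10 ** (threshold_db / 20.0)
    floor = 10 ** (-max_cut_db / 20.0)          # a 3 dB cut -> gain of about 0.71
    a_att = np.exp(-1.0 / (attack_ms * 1e-3 * sr))
    a_rel = np.exp(-1.0 / (release_ms * 1e-3 * sr))
    n = min(len(bed), len(key))
    env, g = 0.0, 1.0
    gain = np.ones(len(bed))
    for i in range(n):
        env = max(abs(key[i]), env * a_rel)     # follow the vocal envelope
        target = floor if env >= thr else 1.0   # duck only while the vocal sounds
        coeff = a_att if target < g else a_rel  # duck quickly, recover slowly
        g = coeff * g + (1.0 - coeff) * target
        gain[i] = g
    return bed * gain

# pads_ducked = duck(pads, lead_vocal, 44100, max_cut_db=2.5)
```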

But do not get carried away with enhancers on the overall mix! Once, trying to improve the sound of the material, I simply ruined a finished album with a dbx 120XP! The bypass button on the enhancer is your best friend in this process: it shows exactly how drastically the enhancer changes the sound and whether its action is actually useful. In my opinion, many of these "enhancers" wreck the midrange; they are a kind of "crutch" for engineers who cannot balance the mid frequencies accurately.

It is much more useful to apply an enhancer to individual elements of the mix - vocals, acoustic guitars, samples of acoustic instruments. The main goal here is to bring the key sounds of the mix to the fore. To do this, insert the enhancer on a subgroup and route the relevant elements of the mix to it. Listen carefully to vocals processed this way, since an enhancer tends to emphasize sibilants.

When mixing vocals, the sound engineer should remember that listening to a song over and over dulls the perception of the lyrics. The listener will hear the song far less often, so when mixing it is easy to fall into a typical trap and "sink" the vocal - after all, you already hear the song clearly with your inner ear. An ordinary listener will notice a buried vocal on the first listen, although one of the tricks of experienced producers is to make people listen closely to the lyrics by setting the vocal a touch quieter. Just how much "a touch" is remains a secret behind seven seals of experienced producers and engineers.

Effects algorithms designed to widen the stereo base of the track being mixed can lead to poor mono compatibility; use the mono button to check how much is lost. The simplest way to widen the stereo base is to generate an antiphase copy of the processed sound and pan the direct and processed signals to opposite points of the mix. Mono compatibility matters when a recording is broadcast on television or radio in mono, and most FM receivers automatically switch to mono when the signal is weak.
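
To make the mono-compatibility issue tangible, here is a tiny numpy sketch that widens a stereo pair by cross-feeding antiphase copies and then measures how much level the mono fold-down loses. The `width` value is purely illustrative.

```python
import numpy as np

def widen_and_check(left, right, width=0.3):
    """Crude widening: subtract a scaled copy of the opposite channel,
    then check the level change of the mono fold-down."""
    wide_l = left - width * right
    wide_r = right - width * left
    mono_before = 0.5 * (left + right)
    mono_after = 0.5 * (wide_l + wide_r)
    rms = lambda s: np.sqrt(np.mean(s ** 2)) + 1e-12
    loss_db = 20 * np.log10(rms(mono_after) / rms(mono_before))
    return wide_l, wide_r, loss_db

# l2, r2, loss = widen_and_check(l, r, width=0.3)
# print(f"mono fold-down level change: {loss:.1f} dB")  # about -3 dB at width=0.3
```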

Check mono compatibility by watching for changes in the level and/or timbre balance of the mix: if noticeable changes are heard, reduce the stereo widening of the main elements and process only secondary elements - percussion, sound effects, effect returns.

The purpose of equalizers here is to suppress problem frequencies. Has anyone wondered why cutting is better than boosting? The point is that human hearing is less sensitive to a reduction in level than to a boost of spectral components. This is especially true when using low-budget EQ models.

Without accurate monitoring, it is impossible to mix a track well. What is accurate monitoring? That is the question of questions, over which many lances have been broken. Monitors and control rooms must meet certain requirements in order to give an objective sound picture. These requirements are often incompatible, but the main ones are:

to flatter the sound of the musical instruments (in the recording room) and thus inspire creativity in the musicians' playing;

to give the sound engineer objective information about what is actually being recorded.

As we can see, both requirements are essential and important, but often mutually exclusive.

The best studios in the world have always sought to acquire the largest and most powerful monitors that a studio budget could handle. These monitors worked in the far field, their fidelity varied significantly, but one of the main selection criteria was their high sound pressure. Demonstration of the phonogram to the customer at high volume was and remains one of the necessary conditions for a successful studio.

Today the trend toward small monitors in studios has become dominant. Personally, I believe there are still no small monitors whose sound quality is comparable to large far-field monitors (by "sound quality" I mean fast transient response, low total harmonic distortion, a frequency range extending down to 25 ... 30 Hz without additional subwoofers, and so on). There are no doubt excellent small monitors, but they are designed to check the sound of a finished mix rather than to "assemble" it. Unfortunately, the laws of physics give the undeniable advantage to their larger brethren.

Do not monitor for long periods at maximum power! High-level monitoring gives the listener an extra emotional push, but the end user will not constantly listen to the CD at excessive volume. At first, high volume merely skews the engineer's perception temporarily; later, irreversible hearing damage can occur. A brief check of the mix at high volume is useful - but forget this rule if you mix music for dance clubs!

Near-field monitors are designed to simulate the sound of a mix on small boomboxes, car players and home audio systems. But what if the studio has no money for large monitors and cannot rent a room of the appropriate volume? Work on small monitors! Alas, the main problem with small speakers is their inability to reproduce low frequencies fully. Of course, some small speakers sound quite "meaty", but compare them with full-size monitors and it becomes immediately clear how much low-frequency information is lost. So think three times before boosting anything below 100 Hz at the customer's request when mixing.

The main temptation when working on small monitors is this: if the monitors do not put out enough lows, an inexperienced engineer tries to compensate by cranking up the bass. I constantly run into an exaggerated low-frequency level in tracks (especially dance tracks) made in home studios; when such material was played through large concert systems, the bass sometimes had to be cut by 6 ... 9 dB! Boosting the low end also forces the cones of the small (6 ... 8") woofers in small monitors to work at maximum excursion. Considering that most of these speakers are two-way designs with a crossover frequency of 2.5 ... 4 kHz, the engineer also gets incorrect information about the mid-frequency region of the mix, and that can no longer be fixed at a concert by simply cutting the low end. From time to time, check the mix in headphones - they reveal quiet artifacts such as clicks and low-level distortion that are hard to hear on monitors. But do not mix on headphones alone: they form an incorrect stereo picture and are unpredictable when judging the lower end of the spectrum.

Mixing purely for your own pleasure is something a professional studio engineer cannot afford. He can add effects as he sees fit and build the balance to his own taste, but he must not forget that the final word belongs to the producer or whoever commissioned the recording. If their tastes match yours, you will enjoy the mixing process; if not, get ready for a hard slog.

I believe that in small studios, which quite often record amateurs and semi-professionals, the sound engineer actually acts as co-producer. In such situations the band member who claims the role of album producer is unlikely to be as qualified as the studio engineer. The problem is easily solved by establishing creative and psychological contact between everyone involved in recording and mixing. This is where knowledge of diplomacy and psychology comes to the fore - it is very difficult to become a professional sound engineer without it.

An experienced sound engineer will always let the artist, the band and the producer feel that the creative impulse during recording and mixing came from them, and that the engineer merely shaped the sound quickly and accurately according to their instructions... Again: psychology, tact and diplomacy. The nightmare of any sound engineer is the entire band gathered around the console to give advice and comments, forgetting that the musicians' work ends after the last part is recorded. From here on it is the producer's and engineer's job, because in the final mix the vocalist will naturally want to hear more vocal, the lead guitarist will want his guitar louder, the drummer the drums, the bassist will decide there is not enough bass, and so on. It's funny, but during mixing not one musician has ever asked me to turn his part down!

Once, for a band of five musicians, I made six (!) mixes - one to the requirements of each member of the band (for their money, of course). The sixth mix I made by pooling the musicians' common wishes. All the versions were handed to the musicians and the producer, and, oddly (or logically) enough, they chose my version.

Human hearing is a strange thing. Not only do our ears themselves look rather comical, they sometimes tend to "hear" things that are not actually happening. Once I was mixing a band and the guitarist asked me to make his guitar a little brighter. Although I thought the guitar in the mix sounded just as it should, I nevertheless (considering that the guitarist was paying for the recording) took hold of the treble control and decided to emphasize it slightly, hoping to quietly return everything to its place before the final bounce of the mix to the master recorder. "Like this?" I asked. "Just a little more," the guitarist replied. Out of the corner of my eye I saw that he was closely watching my hands. When the guitar became prickly and razor-sharp, the guitarist stopped me. We both heard the treble rise on the guitar! I never told the guitarist afterwards, but it turned out I had grabbed the knob of the adjacent, unused mixer channel by mistake. That is, I had not changed the guitar's tone at all, yet we both heard the highs come up. We expected to hear that boost, and so we heard it.

Do not assume that your hearing tells you the truth in any state - give it a good rest before mixing. From time to time, compare the sound of your mix with the best work of famous producers, on the same monitors. This is especially important when using an enhancer: the ear quickly gets used to its sound and stops noticing it in processed mixes.

And in the end - one more piece of advice. Always archive the components of the mix, i.e. the session tracks. Perhaps someday you will have to sit down for a new mix, maybe in a multichannel format ...

Mixing is a creative process. But do not forget the main thing: you must control this process, you must control the equipment - but not vice versa! Instruments should never impose their sound on you!

And in conclusion, on our own behalf (www.musicaldoctor.narod.ru) we will give a few addresses of online studios where, if needed, experienced specialists can do a high-quality mix of your songs or instrumentals (you upload the rendered track files of your project to the server and, after an agreed time, download the finished mixed track):

http://megamixing.com/

Cost of mixing one song: 8000 rubles.

http://www.andivaxmastering.com/new/

Online mixing and mastering service from Andrey Vakhnenko (aka Andi Vax).

Cost of mixing one song: 150 euros.

http://everestmusic.ru

Online recording studio Everest Music.

Cost of mixing one song: 300 rubles per track.

http://manifold-studio.com

Online studio Manifold.

Cost of mixing one song: from 140 euros.

All details are on the respective sites.

This section gives an idea of the general rules of sound editing and introduces the basic principles of editing sound for a film in Adobe Premiere.

Basic rules for sound editing

Sound is very important for a good movie. Unfortunately, amateur cinema often cannot boast of high-quality sound. To avoid the most common mistakes when editing sound, we recommend using the following tips.

  • As a rule, the original sound (recorded by the camera along with the video) is not of very good quality, mainly because of the large amount of background noise. That is why film sound is usually re-recorded in studio conditions. If you want to use the original sound in your movie, try to minimize the amount of background noise.
  • If you need to add some sounds, you can record them yourself or use a library of ready-made sounds (for example, on the Internet at www.wavsounds.com).
  • Very rarely does a film contain speech alone. For greater verisimilitude, various kinds of noises (rustles, footsteps, squeaks) are added on purpose. Do not forget this, especially if you create the video sequence separately and then overlay specially recorded audio.
  • Remember that each picture on the screen should have a musical fragment that matches it in mood and rhythm. The video and the music should work in harmony.

Audio tracks

In Adobe Premiere, editing audio is similar in its basic principles to editing video files.

  • The sound quality in a project is set when defining the project presets. One of the main parameters here is the sample rate (Rate), which determines how often the sound is sampled when digitized. The higher the rate, the better the sound quality.
  • Sound files are loaded into the project as sources and are placed in the "Project" window (Project). They can be placed in a separate folder.
  • Sound is edited on special audio tracks in the "Timeline" window; their number is not limited (Audio 1, 2, ...).
  • All the same commands are available for audio tracks as for video tracks.
  • Toggle track output - enables or mutes playback of the track.
  • Toggle track lock - locks or unlocks the track for editing.
  • The triangle next to the track name expands it. When the track is expanded, the amplitude graph of the audio clip is shown.

Let's consider the basic techniques and commands of Adobe Premiere used for audio editing.

Change the volume of a sound clip

To change the volume of an entire clip by the same amount, drag the volume line (rubber band) down (quieter) or up (louder); the amount of change is shown in decibels in a pop-up window. The same operation can be done via the Effect Controls tab: select the clip, open the tab, expand the "Level" parameter and set the required value with the slider or from the keyboard.
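
Premiere does the math for you, but as a reminder of what a decibel figure in that pop-up means for the samples, here is the underlying relationship (a generic Python illustration, not Premiere's internal code):

```python
def apply_db(samples, change_db):
    """Raise or lower a clip's level by change_db decibels (linear gain = 10^(dB/20))."""
    return samples * 10 ** (change_db / 20.0)

# apply_db(clip, -6.0) roughly halves the clip's amplitude; +6 dB roughly doubles it
```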

If a dynamic volume change is required, use keyframes. The procedure is the same as for adjusting clip transparency: keyframe positions and parameters can be adjusted both directly in the Timeline window and on the Effect Controls tab.

Sound mixing

Combining sounds located on different tracks into a single soundtrack for a movie is called sound mixing. Each sound must be audible to the viewer, must not drown out the others, and must play without interference. For this, the program has a special Audio Mixer on a separate tab. Here you can see all the changes and effects assigned to clips and edit them immediately. The sound of each track is controlled in a separate column whose title corresponds to the track name.

The audio mixer can operate on each track in several modes. For our work, three of them are enough:

  • Off - ignores all sound effects for this track during playback;
  • Read - plays all the effects for this track;
  • Write - Writes all the effects assigned to the track and creates the corresponding keyframes.

To edit the sound, set the cursor to the desired position, start playback and adjust the parameters as the file plays. The audio mixer will automatically create keyframes on the audio tracks. If necessary, you can then correct the position of the keyframes manually in the Timeline window.


Fig. 17.2. The Audio Mixer tab