Cleo Berlin: Red As Red Can Be – DoDoHouse

Band: DoDoHouse

Song: Red As Red Can Be

Production: Cleo (Angelo Thomaz, Juan Manuel Puñales and Mattia Battegazzore)

 

1. Recording

The session was divided into three parts:

  • an instrumental session with all the instruments playing live, along with a live female pilot vocal;
  • a lead vocal session;
  • a backing vocal session.

1.1a Instrumental session (set up)

This and the following sessions took place at the Funkhaus’ K4 studio.

The wettest room was used for the cajon, and two room microphones were placed a couple of metres apart in an A/B stereo configuration to emphasise the sensation of space. For close miking, one microphone was placed in front of the instrument and one behind it to capture the full range of frequencies and beats.

The female pilot vocal was performed live in the central room, to give more vibe to the instrumental players and to involve everyone in the session.

For the guitar, a close microphone was placed near the 12th fret and a pair of small-diaphragm condenser mics was set up in an X/Y configuration. The aim was a guitar sound with a strong presence in the mids and not too much space in the mix (X/Y creates a sense of space, but not an exaggerated one).

The bass went directly into a DI box via a tuner pedal. From there the mic-level output went directly into the desk, while the Hi-Z link output went to a mic preamp and compressor.

A Nord emulating a piano sound was used, and its outputs were plugged into two DI boxes before reaching the desk's mic inputs.

1.1b Instrumental session (recording)

The cajon was recorded first, together with the female pilot vocal, and, after recording enough material, a quick comp was done to provide a good playback for the keyboard and bass. The guitar player was recorded too, but his sound was not sent to the others, in order to hold back some material for later comping and editing without affecting the other players' performances.

1.2 Vocal session

We opted to record the vocals simultaneously but in different rooms. All the mics were set up with a pop filter, and a low-cut was applied on the way in. Limited time did not allow us to record abundant material, but the singers were quite precise, and hard compression on the way in on the lead vocal helped to achieve a good result.

2. Editing

The cajon was the most delicate element, given that its groove is not meant to stick to the grid. After careful comping, Beat Detective was used only to tighten up the main beats: after clip separation, the track was conformed at a strength of 90% and a tolerance of 10% to keep the real vibe. A lot of copy and paste was then used to build the best cajon track possible.
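For readers unfamiliar with those two settings, the sketch below is a purely conceptual illustration of how a strength/tolerance conform behaves (it is not Pro Tools' actual algorithm; the function name, grid size and example hit positions are invented): hits already close to the grid are left alone, and the rest are pulled only part of the way in.

```python
# Conceptual sketch of a strength/tolerance "conform" pass, loosely in the
# spirit of the Beat Detective settings mentioned above (NOT its actual
# algorithm; names and example values are invented for illustration).

def conform(hit_times, grid=0.25, strength=0.9, tolerance=0.1):
    """Pull detected hits toward the nearest grid line (times in beats).

    strength  -- fraction of the distance to the grid that each hit is moved
    tolerance -- hits within this fraction of the grid spacing are left
                 untouched, which is what preserves the original feel
    """
    out = []
    for t in hit_times:
        nearest = round(t / grid) * grid
        offset = nearest - t
        if abs(offset) <= tolerance * grid:
            out.append(t)                      # close enough: keep the groove
        else:
            out.append(t + offset * strength)  # move it 90% of the way in
    return out

print([round(x, 3) for x in conform([0.02, 0.31, 0.46, 0.77])])
# -> [0.02, 0.256, 0.496, 0.77]
```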

Bass and keyboard were easy to edit: the comps were already adequate in terms of dynamics, timing and performance. Some gain corrections and a little Beat Detective were used to follow the previous edits.

The best guitar parts were selected purely from a performance point of view and then a lot of Beat Detective was applied. It was not easy to use this tool here but, after some copy-and-paste and a few arrangement decisions, the track was good enough for the song.

Working on the lead vocal was a pleasure: because of his timbre and the genre, no hard pitch correction was needed, and some freedom was taken to leave the natural vibe intact. While comping, it was easy to notice that he sang some takes at a certain distance from the microphone and others up close. This was ideal for creating three different tracks: one central, built from the higher-gain clips, and two doubles from the lower-gain ones. This helped with pitch correction too: the central track was slightly revised in Logic, while the doubles were left at their comping stage, creating some pleasant effects and helping the corrections flow more naturally. Only a few gain automation nodes were needed, thanks to the hard compression on the way in and to his dynamically balanced singing.

Time correction was needed only on the backing vocals. After comping, pitch correction and gain automation, they needed to be moved slightly in time to create a more precise-sounding vocal harmony.

3. Mixing and Mastering

Following a rough mix, put together quickly in Pro Tools in about an hour, the mixing session started. Each track was sent through a Trident desk, on which the most important EQ and dynamics processing was done (also thanks to some outboard analogue compressors). Right after this, the mix was finished in Pro Tools using whatever digital tools were needed.

Mastering was completed in Pro Tools, with iZotope Ozone 6 as the main tool. Some classic American folk songs were used as references.

 


A guide to Mastering and its significance in 2018.

Our first article as Cleo Berlin about mastering. Enjoy!

Cleo Berlin

Today, any solo musician or band can produce their records on their own, achieving a satisfactory result for a much smaller budget than a few decades ago. Usually, the projects are carried out with a lot of enthusiasm but without much knowledge about audio engineering as a science. Their “how to” and problem solving is many times quickly taken from articles or tutorial videos on the internet. This does not mean that all these works reach a low result but, compared to the times when producing a record for releasing called for at least one expert for the engineering side of it, it’s not uncommon that they end up containing sonic flaws that aren't necessarily intentional. The natural tendency is that the artist will try to spend the least possible amount of money to get the result they are looking for, and since mastering is the stage of production…

View original post 947 more words


Top Down – Let’s Do It

Top Down – Let’s Do It (Recording)

J

We met this blues-based, punk-and-rock-and-roll-spirited power trio formed in Portland while doing live sound at the loved and missed XB Liebig (which came to an unfair and sad end, so by the way I encourage you to read the following: Message to XB-Liebig from Multiversal and Statement of XB collective concerning the eviction on Oct 15th 2017).

This track came out of a 4-hour session we did back in September with my pal and amico Mattia at the Funkhaus’ K4 studio. Three songs were recorded and mixed from this session, but this is the only one released so far. Hold on for the rest!! We are planning a second session in the coming months, and it may all come out together on an album :)!

All the instrumental recording was done live and in the same room, on a pretty straightforward setup of which I don’t remember much more than that…

View original post 253 more words


Herbstsonne – Que Dónde Está (Recording)


Single by a band from Santiago de Chile based in Berlin. I was in charge of the production and engineering, assisted by Mattia Battegazzore and Angelo Thomaz (the three of us recently formed Cleo, an audio production partnership running a small studio in Marzahn 🙂 ). Pre-Production: Herbstsonne is a two-piece band consisting of Cecilia […]

via Que Dónde Está – Herbstsonne (2017) — J


Sampling: Italian Dinner?!?

Italian Dinner?!? is a musical track made up of samples taken from several recordings made during a friend’s dinner party here in Berlin. It was decided to use only that material to try to achieve something completely unique and, in a certain way, alternative. All the takes were done by using a stereo microphone recorder. It was inspired by Empire Of Coffee by Matthew Herbert.

After completing the dinner recordings, the most time-consuming part of the work was selecting the best samples from the more than twenty takes recorded.

The chosen samples were divided into three categories: rhythmical sounds, voices and non-percussion instruments. Using Ableton's Simpler, all the sounds in the first two categories were set to its “1-shot” mode, in which each press of the linked key plays the whole selected sample with all its editing (fade in, fade out, filters, etc.). All the samples were then gathered into a keyboard layout so that they would be easier to play as if they were instruments. In this way both the percussive instrument and the vocal one were built, and all the sounds were roughly mixed according to their audio characteristics.

Three other tracks were created using Simpler's “classic” mode instead. This mode lets you build an instrument from a single sound: the original sample is mapped to C3 while, for all the other notes, it is repitched and, when needed, stretched. In this case a food mixer, a tap and a beer-drawing instrument were created with this method.
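As an aside, the repitching that a classic-style sampler does can be sketched in a few lines of code. The snippet below only illustrates the idea (the 2^(semitones/12) playback-rate rule), not Ableton's implementation; the helper name and the synthetic 220 Hz “tap” stand in for a real recorded sample.

```python
# Sketch of classic-mode repitching: the original sample sits at C3 and other
# notes are derived by changing the playback rate by 2 ** (semitones / 12).
# Illustrative only -- not Ableton's implementation.
import numpy as np

def repitch(sample, semitones_from_root):
    """Resample so the audio plays back shifted by the given number of semitones."""
    ratio = 2.0 ** (semitones_from_root / 12.0)      # +12 semitones -> double speed/pitch
    new_length = int(len(sample) / ratio)
    read_positions = np.arange(new_length) * ratio   # where to read in the original
    return np.interp(read_positions, np.arange(len(sample)), sample)

sr = 44100
tap = np.sin(2 * np.pi * 220.0 * np.arange(sr) / sr)  # 1 s placeholder "tap" sample
e3 = repitch(tap, 4)    # four semitones above the C3 root
a2 = repitch(tap, -3)   # three semitones below the root
```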

The song was built from a horizontal point of view: twenty or more loops were created and linked to scenes and, afterwards, they were recorded into the Arrangement view in a way that sounded right and made sense.

All the proper mixing was done at the very last stage, with the addition of three return tracks, one for each kind of instrument, with reverb on them (for the vocals a very fast delay was added as well).

Some simple volume automation was used too, but it was decided not to push it too hard, letting the sampling itself take the lead.

This is just a rough mix.


Maria’s Transformation: Metropolis (Synthesis for diegetic and non-diegetic sounds)

Metropolis is a 1927 science-fiction film by Fritz Lang. It is a silent film and its score was composed for a large orchestra by Gottfried Huppertz, who tried to mix the classical world with more modern sounds to emphasise its industrial and apocalyptic setting. The movie is considered a pioneer of the science-fiction genre and has inspired a lot of contemporary production styles.

The aim of this project was to create both non-diegetic and diegetic sounds for a selected extract from the movie, using only modulation synthesis and trying to reinterpret only the frames, without being influenced by the original sounds.


Non-diegetic sounds

Non-diegetic sounds are those whose source is not visible and is not intrinsic to the action. Examples in this category include the soundtrack (mood music/score) or sound effects that add to the drama.

Four instruments were created using modulation synthesis, all of them built with Operator, Ableton's digital synthesizer.

The first one is a high-pitched sine wave with an LFO constantly modulating both its filter and its amplitude. When the amplitude of a waveform is modulated, the technique is called AM synthesis (Amplitude Modulation synthesis). If the modulating wave's rate is below 20 Hz, a tremolo effect is audible on the original waveform but, as it approaches the audible range, it becomes harder for the human ear to detect each individual amplitude fluctuation in the carrier and "sidebands" are produced. The frequencies of these sidebands (which are usually inharmonic overtones) are the sum and the difference of the carrier and modulator frequencies. If a complex tone made up of more than one frequency is modulated, two sidebands are produced for each partial. A good reference for this kind of synthesis can be found in some of Karlheinz Stockhausen's works.

The idea behind this non-diegetic sound was to create a constant, harmonising, looped, high-pitched drone accompanying the whole scene to build a sense of tension. The bass takes on a similar role and, with its fast arpeggio and boomy sound, aims to keep the viewer's attention. To raise the tension level even further, some automation on the bpm of the entire track was added to speed the whole soundtrack up towards the end. This is typical of the famous theme from Jaws (1975), in which the shark is never seen, but when the particular sound is heard everybody is aware of its presence and, as the animal gets closer, the interval becomes ever shorter.
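To make the sideband behaviour concrete, here is a minimal AM sketch in Python/numpy. It is only an illustration of the maths, not the Operator patch used in the project, and the 1 kHz / 150 Hz values are arbitrary examples: the carrier ends up flanked by sidebands at 850 Hz and 1150 Hz.

```python
# Minimal AM sketch: a 1 kHz carrier whose amplitude is modulated at 150 Hz.
# Below ~20 Hz this would be heard as tremolo; at audio rate the ear hears
# sidebands at (carrier - modulator) and (carrier + modulator).
import numpy as np

sr = 44100
t = np.arange(sr) / sr                                    # one second of time
carrier_hz, mod_hz = 1000.0, 150.0

carrier = np.sin(2 * np.pi * carrier_hz * t)
modulator = 0.5 * (1.0 + np.sin(2 * np.pi * mod_hz * t))  # unipolar, 0..1
am = carrier * modulator

# With a 1 s signal each FFT bin is 1 Hz wide, so the three strongest bins
# fall exactly on the carrier and its two sidebands.
spectrum = np.abs(np.fft.rfft(am))
peaks = np.sort(np.argsort(spectrum)[-3:]).tolist()
print(peaks)                                              # -> [850, 1000, 1150]
```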

In the middle range two other instruments with the same modulation were created, and then everything was panned and mixed to fill out the space properly. Some volume and filter automation was set up too, to support the arrangement.

Diegetic sounds

All the sounds whose source is visible on the screen, or whose source is implied to be present by the action, are called diegetic. A good reference for this category is Forbidden Planet (1956), one of the first movies in which synthesis was used.

Numerous diegetic sources appear in this piece of audio, so it was quite hard to distinguish them and then create, synchronise and mix them. The working process was based on creating one sound at a time, repeating it wherever necessary (with proper editing when needed) and mixing it with what had already been created. Some of these sounds were made using AM synthesis, like the sound of the lighting cylinder at the very beginning: here the amplitude of the triangular waveform (oscillator A) is modulated by Operator's LFO at a high rate but with a low amount. The sound of the switch, like all the other sounds in the project, was created on a separate track, using an envelope with a short attack and fast decay applied to noise.
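That switch-type sound boils down to a very short envelope shaping a noise source. A minimal sketch of the idea is below; it is illustrative only, the 2 ms attack and 15 ms decay values are made-up examples, and in the project this was of course done inside Ableton rather than in code.

```python
# Sketch of a switch-type click: white noise shaped by an envelope with an
# almost instant attack and a fast decay (values are arbitrary examples).
import numpy as np

sr = 44100
length = int(0.08 * sr)                          # an 80 ms burst
noise = np.random.uniform(-1.0, 1.0, length)

attack = int(0.002 * sr)                         # ~2 ms attack ramp
env = np.empty(length)
env[:attack] = np.linspace(0.0, 1.0, attack)
env[attack:] = np.exp(-np.arange(length - attack) / (0.015 * sr))  # ~15 ms decay

switch_click = noise * env                       # ready to sit on its own track
```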

The rest of the sounds were mostly made with FM synthesis (Frequency Modulation synthesis). This technique was developed by John Chowning in the late 1960s and is based on modulating the frequency of one oscillator with another. When two (or more) oscillators are placed in series, the pitch of the first is modulated by the next. If the rate is below 20 Hz a vibrato effect is produced but, if the modulator runs at a faster rate, sidebands are created. It is similar to AM but, in this case, more than two sidebands are created, their number depending on the amplitude of the modulating oscillator (the modulation index). Most FM settings result in a very clangorous, atonal sound; the way to make it sound harmonic is to use harmonic ratios between the carrier and modulator frequencies. An example of FM synthesis in this project can be found in the sound of the energy rings fluctuating around the robot. Here four oscillators in series were employed and automated so that they turn on one after the other depending on the number of rings seen on screen. All the waves are sine waves, to get a rounded effect, and the automation is carried out while the scene is changing to make it unnoticeable.
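For reference, a basic two-operator FM voice can be sketched as below, written in the phase-modulation form that most digital "FM" synths use in practice. This is only an illustration of harmonic versus inharmonic carrier-to-modulator ratios; the frequencies, ratios and modulation index are arbitrary examples, not the settings used in the project.

```python
# Two-operator FM sketch (phase-modulation form). A harmonic ratio keeps the
# sidebands on harmonics of the carrier; a non-integer ratio gives the
# clangorous, bell-like character described above. Values are arbitrary.
import numpy as np

sr = 44100
t = np.arange(sr) / sr

def fm_tone(carrier_hz, ratio, index):
    """ratio = modulator freq / carrier freq; index = modulation depth."""
    modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
    return np.sin(2 * np.pi * carrier_hz * t + index * modulator)

harmonic = fm_tone(220.0, ratio=2.0, index=3.0)      # sidebands land on harmonics of 220 Hz
clangorous = fm_tone(220.0, ratio=1.414, index=3.0)  # inharmonic sidebands: metallic character
```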

Some other sounds were created using both AM and FM synthesis. An example of this is the electric noise: a sine wave feeds a square wave, and the LFO modulates both the filter and the amplitude of the sine.

For a better experience, use anything other than laptop or smartphone speakers.
