What is the difference between Audio time and Music time?
Audio time ( or “real” time ) is the length and tempo of a performance as executed and recorded. “Music” time (BBT, for Bar-Beat-Tick) is a series of arrangement data (notes and rests) that can be executed very quickly, or very slowly, and still be considered the same piece of music.
In “music” time, a quarter-note is a quarter-note, regardless of tempo. MIDI consists only of arrangement data, and so it is possible to arrange a MIDI sequence and later increase or decrease the tempo. Just as when musicians play from a score at a new tempo, the timing of the arrangement will change, but the musical expression and timbre will remain largely unchanged.
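The relationship between the two timelines comes down to simple arithmetic. This hypothetical sketch (not Mixbus code) shows why a quarter-note has no fixed length in audio time: its real-time duration depends only on the tempo in effect.

```python
def beat_duration_seconds(tempo_bpm: float) -> float:
    """Duration of one quarter-note (one beat) in seconds at the given tempo."""
    return 60.0 / tempo_bpm

# The same quarter-note occupies different amounts of audio time at different tempos:
print(beat_duration_seconds(120))  # 0.5 seconds per beat
print(beat_duration_seconds(60))   # 1.0 second per beat
```

In “music” time both notes above are identical; only their projection onto the audio timeline changes.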
When recording audio, the performers are recorded in real time, subject to their own tempo and interpretation of the music. Once the audio is recorded, changing its tempo requires quite extensive processing and manipulation … and even more so if you don’t want the pitch to change as well ( i.e. the “chipmunk” effect ).
The problem is that Mixbus (and other DAWs) are expected to show both MIDI and Audio data on the same timeline. And there’s some ambiguity: one can imagine audio data (such as a single drum hit) that has no inherent tempo, and can be considered arrangement data, and subject to tempo changes in the musical timeline; or one can imagine a MIDI event that is related to the audio timeline and should not move if (for example) the tempo is changed.
Why is this so hard?
Attempting to shorten or lengthen an audio performance, after it is recorded, requires the sound to be digitally manipulated which will incur some loss of quality. Because Mixbus is intended for high-quality editing, mixing, and mastering tasks, we have chosen to give priority to the “audio” timeline. But there are equally valid reasons why a user might be more interested in the “music” time, and so both music and audio-timelines must coexist.
Consider these examples:
- A live performance is recorded, and the musicians have varied widely in tempo and meter. The user would like to develop a “grid” which follows the varying tempo, and allows editing against the “grid”, just as if the performance were recorded strictly to a metronome. Or perhaps the user would like to add a MIDI arrangement using a musical score editor, and these notes must somehow be synced to the performance’s tempo.
- A MIDI performance is generated with a fixed tempo. Later, vocal and instrumental tracks are added to the MIDI arrangement, and these tracks are edited with crossfades to assemble the best-sounding mix. Now the arranger would like to gradually decrease the tempo at the end of the song ( i.e. a ritardando ). The MIDI events, being tied to the Music timeline, will stretch farther apart, but will stay in tune. But what will become of the audio regions?
- A session mixes spoken-word sections with musical sections. During a musical portion (implemented with MIDI), the tempo is increased, so the musical portion finishes sooner than it did when the audio was recorded. Should the audio voiceover move left to accommodate the shorter musical segment?
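The ritardando example above can be made concrete with a small sketch (hypothetical, not Mixbus code). Under a piecewise-constant tempo map, the audio-time gap between successive beats grows as the tempo slows, so beat-attached MIDI events spread apart on the audio timeline:

```python
def beat_times_seconds(tempos_bpm):
    """Audio-time position of each beat, given the tempo (BPM) in effect
    during each beat interval (a piecewise-constant tempo map)."""
    t, times = 0.0, [0.0]
    for bpm in tempos_bpm:
        t += 60.0 / bpm  # one beat at this tempo
        times.append(t)
    return times

# Four beats at 120 BPM, then a ritardando slowing to 60 BPM:
times = beat_times_seconds([120, 120, 120, 120, 100, 80, 60])
# Gaps between beats grow from 0.5 s toward 1.0 s as the tempo slows.
```

Audio regions recorded against the original, steady tempo have no such beat positions of their own, which is exactly why their fate under the ritardando is ambiguous.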
Mixbus has some mechanisms to accommodate these situations. But it can get quite complicated.
We have chosen to focus our efforts on the “audio” timeline rather than the “music” timeline; the latter focus is more common among workstations that began as MIDI sequencers.
How do you use Audio Time and Musical Time?
Every item on the timeline ( regions, markers, and automation control points ) must be attached to either the “audio” timeline or the “music” timeline. If an item is attached to the musical timeline, then it will move along with any changes to the BBT (bar-beat-tick) timeline. For example, if you increase the tempo, then those regions will move to occur earlier.
If you are assembling “samples” of a kick drum alongside MIDI sequences, then it is appropriate to attach those regions to the musical timeline.
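The effect of the attachment choice can be sketched as follows (hypothetical code, not Mixbus internals): an item attached to the musical timeline keeps its beat position, so its audio-time position shifts when the tempo changes, while an item attached to the audio timeline keeps its position in seconds no matter what the tempo map does.

```python
def audio_position(beat_position: float, tempo_bpm: float) -> float:
    """Audio-time position (seconds) of a beat position at a constant tempo."""
    return beat_position * 60.0 / tempo_bpm

# A kick-drum sample glued to the BBT timeline at beat 8:
old_pos = audio_position(8, 100)  # 4.8 s at 100 BPM
new_pos = audio_position(8, 120)  # 4.0 s at 120 BPM: the region moves earlier

# An item attached to the audio timeline simply stays at its recorded
# position in seconds; no recomputation happens when the tempo changes.
```

This is the behavior described above: raising the tempo moves music-attached regions earlier, while audio-attached items hold still.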
What are the default settings for the different timelines?
By default, MIDI regions are “glued” to the BBT musical timeline, while audio regions are attached to the audio timeline. You can right-click on an audio region (or a selection of regions) to “glue” it to the BBT musical timeline.
Tempo markers are glued to the BBT timeline by default, but they may be unglued (and therefore attached to the audio timeline) by right-clicking on them.
Automation control points are attached to the audio timeline, and cannot (currently) be changed.
Currently, even if a MIDI region is attached to the audio timeline so that it won’t move when the tempo map changes, its CONTENTS will still stretch. That is not desired, and we will likely fix it in a future version.
This is complicated stuff. Investigating other DAW manuals and videos will reveal that there are no easy, non-destructive ways to solve these issues.
So what is the preferred workflow for managing a project that uses both the audio and BBT timelines?
Generally speaking, Mixbus is optimized to work in one of these two ways:
1) Record a live performance on the audio timeline, (optionally) map the tempo, and then record MIDI instruments, but never move the tempo/BBT-map after you’ve started adding MIDI.
2) Arrange your song in MIDI, and adjust tempos/timeline as needed. Then record audio, and never again move the tempo/BBT-map after you’ve started recording audio.