A Brief Synopsis of the Theory of Holistic Tonality

Peter McClard
Feb 8, 2020

PDF Available HERE

Content Listing

Introduction
Postulates
Fundamental Principles
The Nomenclature of Holistic Tonality
Classes of Music
Uses of Music
Musical Realms, Genres and Styles
Classes of Instruments
Classes of Temporal Structure
Classes of Dynamics
Tonal Constructs in Music
The Classifications of Scales and Modes
Classes of Musicians
Sonic Contexts of Holistic Tonality
Generative Music
Cyclophonic Synthesis
Postulate 3: Explanations of Musical Axes
Operations
Composing Music Holistically
Performing Holistic Compositions
Using Hyperinstruments in Performance and Recording
On the Teaching of Music
Conclusion

Introduction

Many of the things and ideas in this treatise are not claimed as original; they are inclusions of prior music theories and terms for the sake of completeness. Forgive me for those times I seem to redefine an existing meaning of a word. I try to do so only in the context of my own thinking. If anything, think of these as alternate ways of seeing the same things.

This theory flows out of a natural set of circumstances that took place earlier in my life, at the age of 21. I was a musician (guitar since 8) and had earlier entered into a course of study in Acoustic Engineering in hopes of producing architecture that would optimize the musical experience. Becoming disillusioned with the engineering curriculum, I transferred to St. John’s College in Santa Fe, NM to study philosophy, where I was pleased to find a parallel music program that taught the very roots of western music, such as how polyphony evolved. Somewhere in the course of these studies my mother bought me a Casio calculator that had an interesting feature whereby it would play back the numbers in the display as musical intervals. This was a revelation, amplifying what I was learning about the ratios that made up musical intervals. I then bought a Casio VL-Tone, a handheld miniature keyboard with a primitive sequencer built in, and started creating sequences of notes and speeding them up as much as possible. I noted that the sequences became sounds of their own, independent of, yet related to, the notes they were composed of. I conducted a number of experiments where I created major scales made of minor scales, and all sorts of combinations of scales, to glean the differences. We were also studying Euclid and Apollonius, so I was primed for organizing these experiments into a cogent theoretical structure. The culmination of this was my Senior Essay (Thesis), Holistic Tonality: An Attempt to Expand Musical Possibilities in the Face of the Third Millennium, for I was fascinated by the future of music. It is still available in the St. John’s Library.
Before, during and after college I learned BASIC computer programming and began programming early PCs (TI-99, Commodore 64, Amiga) to further my experimentation. With family members I founded a company called Hologramophone Research, which later led to my other businesses, Gluon and Techné Media. Our first products were Pixound and HyperChord for the Amiga. HyperChord was featured in the MIT publication Computer Music Journal and in various electronic music publications of the time, as was Pixound. To this day I build upon my findings, and this is my attempt to document and share the discoveries I have made over the years.

All music theories seek to make sense of and codify the relationships created by variations in perceived pitches over time, usually in the form of timbre, notes, chords, rhythms and structures. Because computers and digital audio processors have given us supreme control over the fine details of sound, a modern music theory must include the panoply of new possibilities. The long tradition of theories going back to the Greeks, whose disciplined use of modes was geared toward creating a set of rules by which music could be made more logical, harmonious and, generally, more appropriate toward the goal of exalting God in the service of the Church, does not adequately equip modern composers and musicians for the future.

Early theories, such as those put forth by Boethius¹ or Tinctoris², were concerned with resolving the imperfections of musical notes, as the use of aliquot (whole number) ratios such as 1:3, 2:3, 4:5, etc. caused octave inequivalence and dissonance, which was thought to be unholy and undesired. Early theories were also concerned with the formal notation of music so that it could be conveyed from place to place and generation to generation. Music scholars of those times proscribed the use of unison and parallel motion, which was very limiting yet still produced sublime results such as Gregorian Chant and, later, the work of Machaut.

Later theories began to expand into the world of polyphony, whereby chords became more advanced over time until counterpoint and complex forms of musical structure evolved. Polyphony remained relatively simplistic through much of the Middle Ages and into the early Renaissance, leading to the advanced musical theory of Rameau³, a turning point on the cusp of equal temperament during the Baroque period, which led to an explosion of complex music that endures in the supreme examples of Bach, Vivaldi, Scarlatti and others.

The Theory of Holistic Tonality builds upon other music theories, most notably that of Schenker⁴ whose work on harmony and counterpoint is far more comprehensive. However, Schenker could not have anticipated computers, synthesizers and fractal mathematics so his theories could not be fully extended into the modern era.

In all traditional phases of music, up until the mid 20th Century, musicians were generally limited to instruments which relied on the timbral qualities produced from naturally vibrating materials such as wood, skins, strings, or columns of air, all of which were direct analogues of the most important instrument of all, the human eardrum. In other words, as the string vibrates, so does the eardrum. In the end, all instruments cause the air between the instrument and the listener to be set into motion, culminating in the physical vibration of the eardrum, which in turn converts the vibration, via a series of tiny bones and the cochlea, into electro-chemical signals to the auditory portion of the brain, which then imagines the sensation or perception of music.

Even modern synthesizers eventually drive the surface of a speaker cone (an even closer analog to the eardrum) or other device, which must vibrate according to the laws of physics. It is no surprise, then, that theories would center around vibrational patterns that would be perceived as harmonious or “pleasing to the ear.” It wasn’t until Helmholtz⁵ analyzed these vibrations more closely that it was understood that timbre was the sum of a complex “vibrational chemistry” in which some quantity of each overtone mixes with the others to create a unique tonal quality, such as that of a violin, flute or piano. Holistic Tonality simply states that: All music is only Timbre.
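This “vibrational chemistry” is easy to demonstrate with additive synthesis. In the following sketch (the function names and harmonic weights are mine, chosen for illustration, not taken from the text), the same 220 Hz fundamental is given two different overtone recipes, yielding two different timbres at the same pitch:

```python
import numpy as np

SAMPLE_RATE = 44_100  # samples per second (CD standard)

def additive_tone(freq_hz, amplitudes, duration=1.0, rate=SAMPLE_RATE):
    """Sum a fundamental and its overtones into one waveform.

    amplitudes[k] weights the (k+1)-th harmonic, so different
    amplitude lists produce different 'vibrational chemistries'
    at the same perceived pitch.
    """
    t = np.linspace(0.0, duration, int(rate * duration), endpoint=False)
    wave = sum(a * np.sin(2 * np.pi * freq_hz * (k + 1) * t)
               for k, a in enumerate(amplitudes))
    return wave / np.max(np.abs(wave))  # normalize to [-1, 1]

# Same 220 Hz pitch, two different overtone mixes (illustrative values):
hollow_tone = additive_tone(220, [1.0, 0.0, 0.5, 0.0, 0.3])   # odd harmonics
bright_tone = additive_tone(220, [1.0, 0.5, 0.33, 0.25, 0.2]) # all harmonics
```

Writing either array to a sound file (e.g. with a WAV library) makes the timbral difference audible even though the fundamental is identical.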

In a way the theory of Holistic Tonality is a sort of Unified Theory of Music, incorporating ideas from all existing music theories and traditions, expanding upon them and adding multiple levels of complexity. In another view it is a sort of Musical Calculus that examines the smallest differences in sound, dƒ/dt, and asks what ultimate musical control we can exert over these differences. In yet another view it has aspects of a Musical Relativity, where frames of reference change the meaning of musical events both large and small, our relationship to the music, and the internal relationships of the musical content. It requires a much deeper multi-disciplinary approach that involves physiology, psychology and audio synthesis seamlessly enmeshed with compositional discipline, but I would never dare to attempt a full exegesis of these many disciplines, which would take a lifetime to complete, if it is even possible.

I should mention, even though this is a “theory,” I have conducted years of experiments to prove to myself these concepts are solid. I have heard for myself the power of these ideas and that has compelled me to document some of my findings. Perhaps the reader would benefit from downloading some of my apps that make use of this theory such as HyperChord, Different Drummer and Pixound to confirm my findings for themselves.

I hope these musings have value to someone and can inspire others to explore further. But if they achieve one thing, getting musicians to listen more carefully in order to expand their thinking beyond, and deeper than, mere melodies, harmonies and rhythms, into the reality of the sound they hear and make, that would be enough. Even better would be for these ideas to lead to a new level of musical expression where pure emotions and inspirations are amplified and glorified and music is used as a healing force for the spirit, mind and body. I would be remiss not to mention the potential for harm or psychological injury that could be perpetrated by weaponizing these methods or overexerting their influences. I implore that they be utilized peacefully, applied subtly, and for the benefit of the arts, of therapy, and in pursuit of enlightenment, happiness and celebration of the great mystery of life.

Download the eBook version here.

Postulates

Postulate 1:

All music is timbre.

Postulate 2:

Timbre combined with timbre creates timbre.

Postulate 3:

Music is a multi-dimensional imaginary construct that operates on Three Primary Axes: Time (Temporal), Vibration (Harmonic) and Amplitude (Dynamic) and affects harmonic resonance within a listener, perceived as music.

Postulate 4:

All sentient beings can experience music.

Postulate 5:

Music does not exist outside of time or consciousness. Music must have an audience.

Fundamental Principles

1. All music is timbre.

Corollary: Music and timbre are interchangeable. Music is timbre applied to consciousness. Music and timbre do not exist outside of consciousness. Outside of consciousness, music is merely wave and particle motion through space-time. Music and timbre can exist in an entirely imaginary realm because it requires consciousness of some degree to have imagination. Imaginary timbre is only experienced by the imaginer.

Timbre is generally thought of as the way an instrument sounds, but when instruments combine they produce a different sound, which itself can be considered a more complex instrument with its own timbre. No matter how many instruments combine, in however many ways, our ears will react to the totality of the sound as a unified experience. Of course, the skilled listener can discern the individual parts, which is a miracle of human cognition, but if one were to record the music and place it into a digital sound editor, discerning the parts by looking at the recorded audio becomes almost impossible. The recording device has no preconceptions about what made the sound it is recording, and a single instrument playing might look quite similar to a full band or even a full orchestra. I posit that they are indeed to be considered as variations of the same thing: timbre.

Every musical composition is a drawn-out timbre on one time scale or another. But we must note that music is actually much more than timbre, for timbre is only the result of the work. The inspiration, intent, harmonies, melodies, rhythms, dynamics and artistic effort that go into creating music have many intangible qualities, and there will always be a psychological and subjective aspect to the musical experience. Traditional compositional structures such as Blues, Sonata, Symphony, Minuet, etc. are arbitrary, familiar formats for human-scale experiences. In holistic tonality, these are still considered timbres.

Timbre can be structured or unstructured, organic or synthetic, homogeneous or heterogeneous. Timbres can be, and usually are, composed of other timbres and, depending on the targeted time scale, can become recognizable compositions.

Musicological studies have found that timbre in the traditional sense, or tone quality, supersedes all other musical elements when determining which has the greatest emotional impact on the listener. In other words, a great composition or song with a crappy sound falls flat and is not appreciated. Therefore it makes sense to elevate timbre as the core element of music and to change our thinking as to its purpose and scope.

The fractal dimension of timbre is a useful form of classification. See Fractal Dimension and Classification of Music by M. Bigerelle, A. Iost, 1999.

2. Timbre operates independent of time scale.

Corollary: Perception of timbre is governed by the “sampling rate” of the auditory system of the listener.

The nature of Time itself remains to this day a mystery. It is even feasible with modern quantum physics that all possible music throughout the Universe is in the process of playing, or has already played, within the confines of the Grand Big Bang. Holistic Tonality seeks first to remove the human element from musical perception and suspend the bias we have for our time scale, linked as it is to the days and spans of our lives. Naturally, since we perceive time to flow at a certain pace, we will be attracted to musical events that occur on our scale. It is now known that some animals can not only hear higher or lower pitches than us, but can communicate complex social messages in timbre. For example, a particular birdsong may appear to us as a unified single sound but nonetheless contain rich details of pitch and harmonic changes and convey hidden messages to potential mates or adversaries. Rats have been discovered to laugh in a high frequency range that sounds to us like squeaks, and crickets slowed down sound like singing. One creature’s squeak is another’s laugh. Time scale is critical to the perception of music but not to the musicality itself, which is independent of time scale. Likewise, we can be assured that an entire long symphony, or our speech itself, may sound like a squeak to a consciousness that operates at a larger time scale, or like a ponderous slow-motion drone to a short-time-scale consciousness.

The band Bear In Heaven pre-released their album I Love You, It’s Cool as a stream lasting several months, slowed down 800,000 percent but kept at the original pitch (via FFT processing), which resulted in a long drone of ever-shifting timbral qualities that was as valid musically as the works played at 100% speed.

Going back to traditional music theory, it was the imperceptible “squeaks” of the higher frequencies that stealthily governed the need for a Dominant or Subdominant, since these were strong components of the overtones early musicians were hearing, and they dominate musical timbres to this day. Many overtones are perceived on a subliminal basis, no doubt linking us to our evolutionary ancestors that operated at a faster “speed of consciousness.” Very few people are aware of the individual tones that make up a timbre; we tend to unify these tones into a single sound. Holistic Tonality posits that timbre governs or suggests structure subliminally, and that since traditional timbres are structured in the classical Overtone Series, the larger structures that were suggested were similar to those found in the Human Music Catalog. This sort of mental and physical reverse-engineering of music from the sounds it is composed of gets at a core principle of the Holism at the center of the theory.

Holistic Tonality does not necessarily distinguish between sound (audio) and music except in regard to Musical Intent, because both are means of vibrating molecules of air and each can be perceived as music by the listener. For example, one person might merely hear a subway passing by, while another will hear a clickety-clackety rhythm they think is musical. The musical experience is left to the observer.

Note that the above audiogram is merely a PCM (Pulse Code Modulation) encoding of an analog signal and is not a pitch graph. Nonetheless, it becomes a useful graph of musical content as it relates to the vibrational physics of sound projection.

It is not until one starts to deconstruct the temporal aspects of musical structure that one can see clearly that timbre is unified with structure, if only on a different time scale. By understanding the relationships between timbre and structure, the modern composer is able to create new masterpieces where there are no boundaries between timbre and structure. In other words, one can compose timbre that freely morphs into musical structure, mixes with it, becomes it; and, vice versa, larger musical structures can wind themselves into timbre on another time scale. In a sense, this is no different than what composers have always done, using orchestration, dynamics and musical constructs to flow through time musically and tell their story to the listener.

This is similar in concept to, but expands upon, the theory of Granular Synthesis first explicated by the Greek composer Xenakis:

The composer Iannis Xenakis (1960) was the first to explicate a compositional theory for “grains” of sound. He began by adopting the following lemma: “All sound, even continuous musical variation, is conceived as an assemblage of a large number of elementary sounds adequately disposed in time. In the attack, body, and decline of a complex sound, thousands of pure sounds appear in a more or less short interval of time Δt.” Xenakis created granular sounds using analog tone generators and tape splicing. These appear in the composition Analogique A-B for string orchestra and tape (1959).⁶

Musical Structure is the architectural presence of Musical Intent that is conveyed by the composer to affect the conscious resonance of the listener and to elicit storytelling and an emotional journey within a given time constraint, generally less than a few hours, more typically a few minutes.

3. No two (biological) observers can experience the same music.

Corollary: Music is composed for a target audience of 1 or more that is expected to appreciate the composer’s musical intent. The smallest audience is oneself.

There are biological limits to the conscious perception of short sounds that should be taken into consideration; considerable research exists on the topic, including The Neurophysiological Bases of Auditory Perception by Enrique Lopez-Poveda, Alan R. Palmer and Ray Meddis, and the work of Al Bregman in auditory scene analysis. However, timbre strikes the mind on the subconscious level as well as the conscious, and little research is available on the effects of otherwise imperceptible “tones” on the subconscious, which itself is still largely misunderstood. It is obvious that our brains are not equipped with fine-tuned oscilloscopes that can discern each peak and valley of the waves that hit our eardrums and are decoded as “the sound” of an instrument, yet we can easily hear the difference between a violin and a cello. The question here is only whether a music composer could use this or that effect at such and such a time to elicit a desired emotional response in the target listener. Due to the nature of qualia, the impossibility of having another’s exact experience, and the fact that no two listeners can occupy the same space at the same time, it is safe to say the musical experience is unique to the listener.

Larger creatures, having larger sensory organs, tend to respond to lower frequencies, and smaller creatures to higher frequencies, though there are many exceptions, such as dolphins, and size itself is somewhat relativistic. By speeding up or slowing down other species’ communications we can better understand them, and they will tend to have the same fractal dimension as ours. Little is known about the “speed of consciousness,” which is somewhat akin to the flops (floating point operations per second) of computer vernacular, but it is obvious that some animals can discern small details of sound that slip by our perception. We also know that some animals seem to be innately musical, as anyone who has heard birdsong or whale song can tell. We still have much to learn about what is conveyed from mind to mind via air and water vibrations, whether it is purely informational or also contains emotional content as we ascribe to music. We can safely assume that billions of years of evolution did not make us the only musical creature, and that Nature has the ability to miniaturize to the point of making literal nanotechnology in the form of cells that communicate electrochemically at very high speeds, so we can’t assume music is only experienced by larger creatures with larger brains.

Frequency, measured in Hertz (Hz), determines the vibrational pitch of sound. The human hearing range is limited to between 20 Hz and 20 kHz, with the most sensitive range found near the speaking pitches, from 1 to 4 kHz. Timbre is composed of complex overlays of shifting frequencies. It is no surprise that oscillations on various time scales would provide a means to communicate musically, since oscillation is found throughout Nature, from heartbeats and seasons to the light and warmth of the Sun. Not only that, but there is evidence that the very foundations of reality are built up from unimaginably small vibrating “strings” in the fabric of space-time (String Theory).
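For reference, the mapping between discrete pitches and frequencies in the standard 12-tone equal temperament (used in most examples later in this theory) is a simple exponential law. A minimal sketch, using the conventional MIDI note numbering with A4 = note 69 tuned to 440 Hz:

```python
def midi_to_hz(note):
    """Frequency in Hz of a MIDI note number in 12-tone equal
    temperament, with A4 (note 69) tuned to 440 Hz. Each semitone
    multiplies frequency by the twelfth root of two."""
    return 440.0 * 2.0 ** ((note - 69) / 12)

midi_to_hz(69)  # 440.0 (A4)
midi_to_hz(60)  # ~261.63 (middle C)
midi_to_hz(81)  # 880.0 (A5, one octave up doubles the frequency)
```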

Music is a psychological phenomenon inextricably tied to the emotional state of the listener. A listener can be receptive or resistant to a given musical content at a given time and can change for reasons that might not be fully understood. Each listener gravitates toward the music of their time, most likely the music they were exposed to in their formative years so it is important to give children a wide range of music to listen to so they can enjoy the full musical palette throughout life.

4. Tempo operates independently of Timbre and governs the rate at which Musical Structure is revealed.

Generally measured in BPM (Beats Per Minute), the Tempo is like a master clock for Musical Measures, a quantum of musical content similar in purpose to a sentence in grammar. The Length in Beats of a given measure is determined by its Time Signature; its Duration is determined by its Tempo and number of Beats. In Holistic Tonality there are no fixed rules regarding aliquot beats, and in fact one could have an irrational number of beats, such as π beats per measure. Generally speaking, it is important to have an absolute value for a single beat, which by convention is a Quarter Note. But this name is prejudicial to any number of non-4-beat Time Signatures; in a 5-beat Measure world, for example, it would be called a Quintile Note. Therefore, in Holistic Tonality this unit is referred to as a Beat Quantum, and Tempo is measured in Beat Quanta Per Minute.

A Beat Quantum can be described as a unit of time that is used to divide the Musical Content into Units of Emphasis whereby Rhythm can be discerned except in Arrhythmic Music where rhythmic patterns and regular emphasis are avoided intentionally. Rhythm can be implicit or explicitly revealed by the musician or composer and there is a great deal of freedom in how one approaches rhythmic emphasis with respect to the Tempo. Musical Envelopes control dynamics which can be made to increase upon certain beats in a given measure creating accents and syncopation.
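The Beat Quantum arithmetic above is straightforward. As an illustrative sketch (function names are mine, not part of the theory's nomenclature), including the π-beat measure the text allows:

```python
import math

def beat_quantum_seconds(tempo_bpm):
    """Duration of one Beat Quantum at the given tempo:
    60 seconds divided by beats per minute."""
    return 60.0 / tempo_bpm

def measure_seconds(tempo_bpm, beats_per_measure):
    """Duration of a measure. The beat count need not be a whole
    number; the theory even permits irrational counts such as pi."""
    return beats_per_measure * beat_quantum_seconds(tempo_bpm)

measure_seconds(60, 4)         # 4.0 seconds: a 4-beat measure at 60 BPM
measure_seconds(120, math.pi)  # ~1.57 seconds: a pi-beat measure
```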

5. Any melody can be synthesized using sufficient oscillators.

Corollary: Harmony can only be synthesized using parallel systems of oscillators. This is a direct consequence of the Fourier Series, where any continuous curve fitting an arbitrary set of points (i.e. notes on a staff) can be fit according to the law: note = y = ƒ(x) = a₁ sin x + a₂ sin 2x + a₃ sin 3x + …, where x represents time relative to a system of measures. Corollary 2: Any system of synthesis can be used for the purpose of creating musical structure, and melody can thus be synthesized at will. A Melodic Oscillator is generative of one voice. Stacking them creates harmony. Compressing the time scale of a Melodic Oscillator changes its relationship to the musical structure.

It should be noted that many functions cannot be precisely represented by a Fourier Series, but a single-line melody plotted out on a piano-roll-type graph, with its points connected into a curve, can be synthesized in this way.

Lemma: Any Harmony can likewise be synthesized using sufficient oscillators operating in parallel with multiple melodic oscillators.
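The claim that a melody is equivalent to a sum of oscillators can be demonstrated concretely with the discrete Fourier transform: a looped melody, treated as one period of a function, decomposes exactly into a bank of sinusoidal oscillators that reconstruct it. A minimal sketch (melody and function names are illustrative):

```python
import numpy as np

def melody_as_oscillators(pitches):
    """Treat a looped melody as one period of a function and
    decompose it into complex oscillators (its DFT coefficients):
    one oscillator per frequency bin."""
    y = np.asarray(pitches, dtype=float)
    return np.fft.fft(y) / len(y)

def render_from_oscillators(coeffs):
    """Sum the oscillators back into the original melody contour.
    Each term is a_k * exp(2*pi*i*k*t/n), the inverse DFT."""
    n = len(coeffs)
    k = np.arange(n)
    t = np.arange(n)
    return np.real(coeffs @ np.exp(2j * np.pi * np.outer(k, t) / n))

melody = [60, 62, 64, 65, 67, 65, 64, 62]   # a simple up-down phrase
coeffs = melody_as_oscillators(melody)
reconstructed = render_from_oscillators(coeffs)
# reconstructed matches melody: the voice is literally a sum of oscillators
```

Stacking several such oscillator banks in parallel, one per voice, is the harmonic case the corollary describes.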

6. Playability is governed by the physical skill of the musician and the nature of physical or virtual instruments available.

Corollary: Not all music is playable, though it may be constructable and recordable.

There is an indeterminately large set of music which can be enjoyed yet is not possible to play without assistance. Playability is determined by biological limitations, such as the speed of signals from the brain to the fingers and the ability of muscles to contract and relax and to be sufficiently precise in placement and timing. Even the most skilled musician does not control the finest details of sound operating in the kilohertz time domain (1 millisecond or less), though they can guide timbre by employing vibrato and dynamics and through various techniques.

Playability is not dependent on any one aspect of music and some simple gestures may be as difficult to achieve as a complex musical phrase with lots of fast notes. For example, one might find it very difficult to play a single note on guitar with the same power and feeling that Jimi Hendrix made seem easy, producing a timbre that was interesting and pleasing to the audience while another guitarist could play hundreds of notes very fast and leave the audience bored or unimpressed.

The Nomenclature of Holistic Tonality

I found it necessary to create a new nomenclature in order to convey the principles of Holistic Tonality. While I don’t expect these terms to be adopted for common use and also don’t claim that these are the best terms for such, they will hopefully provide a cohesive language with which to talk about Holistic Tonality.

I have formulated numerous experiments which have led me to classify a new class of musical sub-structures I call: sonons, superintervals, hyperintervals, superscales, hyperscales, superchords, hyperchords, etc. In general, the use of the “super” prefix indicates a single voice and “hyper” refers to harmonic or multiple simultaneous voices.

A superchord is defined as any sequence of single pitches for a given voice over time that is used as timbre or effect. Technically, any melody is a superchord in one temporal frame of reference or another. Also, if one were to closely examine a single note with a “microscope” the sound is composed of superchord material. Thus the holism of Holistic Tonality. But even though we can rightly call these things superchords by analysis, that does not mean the composer intended them as such. Intentionally creating superchords for the targeted time frame and audience unleashes the compositional power of their use, which is a primary goal of this theory.

In these experiments, for simplicity, let us first use an idealized synthetic instrument that itself has very little color, essentially producing a pure sine wave. This instrument is also able to play a sequence of pitches at any speed, but we must have a reference so that we can make valid comparisons. Let’s use 60 BPM as a reference tempo so that our quarter note is exactly one second. In the world of holistic tonality, traditional notation proves limited because within a one-second span hundreds of “notes” can go by, so our quarter note is really only a durational placeholder for timbre. The work of Schillinger⁷ could lead to a functional notation for this sort of music, but he did not anticipate the extreme control over timbre that was later developed with digital synthesis, so his system would need to be modified to incorporate the deeper level of composition that is now possible.

Lemma: Adding superchords horizontally produces new superchords. Adding them vertically produces either hyperintervals or hyperchords.

For our first experiment we play a one-octave C Major scale in the span of our quarter note, producing a now-classic video game sound. This sound is a timbre. To prove this, change it to play 2 and then 3 octaves in the same span, producing a totally different sound yet with similar characteristics. Any musical scale can be used for this, and each scale of course takes on the tonal characteristics of the scale: e.g. chromatic, whole tone, gypsy, 19-tone, etc. Now, let us add another dimension of motion to the experiment by introducing a second “carrier scale.” To do this we simply transpose our original C Major scale along a C Major scale, thus: C Maj, D Maj, E Maj, F Maj, G Maj, A Maj and B Maj, each time with the entire inner scale playing within our quarter note, giving us 7 seconds of a C Major scale made of Major scales. This is a superscale. During these experiments it is important to swap in different scales and record the perception that occurs. For example, now play a C minor scale made of Majors, a C Major made of minors, a minor made of minors, etc., and record the differences perceived. Because there are too many permutations to consider, and it will be left to future generations to catalog the possibilities, we must construct computer-based HyperInstruments⁸ that can freely examine these and make them useful to musicians and composers. These experiments can be sped up so that individual tones are not perceived by the human ear but their effects are, very much as overtones behave.

This grid is based on 12-tone Equal Temperament but could have another base or mix them.
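The transposition experiment described above is easy to reproduce programmatically. A minimal sketch in Python, generating the "C Major scale made of Major scales" as MIDI note numbers (the function name and root choice are mine for illustration):

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale

def superscale(carrier, subscale, root=60):
    """Transpose `subscale` along each degree of `carrier`: the
    'scale made of scales' experiment, expressed as MIDI note
    numbers. root=60 is middle C."""
    return [root + c + s for c in carrier for s in subscale]

c_major_of_majors = superscale(MAJOR, MAJOR)
# 7 carrier degrees x 7 inner tones = 49 notes; the first inner scale
# is C major, the next is transposed to D, and so on up to B.
```

Swapping in a minor-scale offset list for either argument reproduces the "C minor made of Majors" and "C Major made of minors" variants from the text.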

Scales made of scales, being superscales (actually, scales made of superchords), can be self-similar, or homophonic where the same scale is transposed by itself recursively to any degree permissible by math in theory and physics in practice. Or they can be heterophonic where any number of scales can be transposed by any other scales. Indeed, any or all of the scales can exist in different tuning systems or temperaments since they are merely mechanisms of transposing. For simplicity, most examples are referring to the standard 12-tone Equal Temperament system.

The order of a superscale is the number of levels of scale employed. A 1st-order superscale is the same as a traditional scale with no subscales. A 2nd-order is a carrier scale and a subscale, etc.

For example, Pentatonic/Dorian/Ionian would be a 3rd-order superscale with an Ionian carrier scale made of Dorian carrier scales made of Pentatonic scales. One octave of such a superscale would contain 7 × 7 × 5 = 245 tones. Harmonic emphasis can be applied on the carrier scale by sustaining the root (or inversions thereof) of each carried scale, or not. The duration of each tone would be governed by the tempo and beat value of each carrier scale tone. For simplicity, we assume isorhythmic division, but tremendous expressional differences can be created by changing the internal rhythms of any of the scales. Likewise, the dynamics of superscales can be varied endlessly to many effects, but all this gets into the employment as musical content, not the ideal itself.
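The tone count of an n-th-order superscale is simply the product of the number of degrees at each level, which the 245-tone example above illustrates. A one-line sketch (the function name is mine):

```python
def tones_per_octave(scale_lengths):
    """Tones in one octave of an n-th-order superscale: the product
    of the number of degrees at each level of scale."""
    total = 1
    for n in scale_lengths:
        total *= n
    return total

# Pentatonic (5) carried by Dorian (7) carried by Ionian (7):
tones_per_octave([5, 7, 7])  # 245, matching the example in the text
```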

A superscale is a subclass of superchord: it is really a type of superchord, as is easily proven by changing the nature and/or the length of the carrier scale(s), which can even be 1 note. Below I will delve into the classes of scales and the various ways they can be constructed, which in Holistic Tonality is quite different from traditional music theory, owing in part to the nature of electronic music and its new possibilities.

Now let’s simplify our experiment to examine the temporal chemistry when only 1 or 2 tones are used. Play a sequence of 32 C tones in our quarter note (technically 1/128th notes). This will produce a straight C tone. Now introduce, at first, 1 G tone in the sequence, which will alter the timbre, though it will remain a C tone. This is a superinterval (a superfifth). Introduce more G tones in different patterns and it will take on qualities of a perfect 5th, yet no two patterns will sound identical because they do not have timbral equivalence.
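The superinterval experiment can be sketched as a simple pattern generator. The sketch below only builds the 32-step tone sequence (pitch names as strings; rendering them as rapid sine tones is left to a synthesis back end, and the pattern indices are illustrative):

```python
def superinterval(base, other, pattern, length=32):
    """A sequence of `length` rapid tones: `base` everywhere except
    at the indices in `pattern`, which get `other`. Played fast
    enough, the sequence is heard as a single timbre colored by
    the interval between the two tones."""
    hits = set(pattern)
    return [other if i in hits else base for i in range(length)]

pure_c     = superinterval('C', 'G', [])             # a straight C tone
superfifth = superinterval('C', 'G', [16])           # one G: timbre shifts
denser     = superinterval('C', 'G', [8, 16, 24])    # more 5th character
other_mix  = superinterval('C', 'G', [4, 12, 20, 28])
# denser and other_mix contain similar pitch material but different
# patterns, so they are not timbrally equivalent.
```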

Lemma: A superinterval is a subclass of superchord.

Introducing a parallel superinterval or superchord produces a hyperinterval.

Introducing 3 or more parallel superintervals or superchords produces a hyperchord.

Using tones that form a triadic or higher order chords (as opposed to two tone intervals) produces a superchord.

Multiple parallel superchords also produce hyperchords.

A single sub-timbral unit (the smallest possible) is referred to as a sonon. Each physical system has its own sonon. For example, a digital system sampled at 44,100 samples per second cannot represent a sound shorter than 1/44,100th of a second, which is well below the threshold of audibility.
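
As a quick calculation of scale (using the CD-standard rate of 44,100 samples per second):

```python
rate = 44100                    # CD-standard samples per second
sonon = 1 / rate                # one sample period: the digital sonon
print(f"{sonon * 1e6:.1f} microseconds")  # ~22.7 µs, far below audibility
```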

Sonons are not timbre, nor are they music, nor are they grains.

Sonons can have timbral qualities that are only recognized on other time scales, but not the time scale it is being used to compose on.

While a sonon requires time to exist, it can be thought of sonically as a single moment in time. Relative to atomic and photonic processes, the time scale of a sonon is enormous because it involves the physical motion of atoms in air.

The smallest unit of timbre is referred to as a timbron. A timbron is not musically useful except in combination with many other timbrons.

Sonons beget timbrons which beget audible pitches which beget superintervals which beget hyperintervals which produce superchords and hyperchords, etc.

All music is comprised of sonons which are the smallest musical units, or musical quanta. Sonons are the only musical units that are not timbre yet are paradoxically the things that timbre is made of.

Any temporally consecutive or superimposed sonons comprise timbre. Other musical quanta follow outward from the sonon, for example giving us recognizable discrete “notes” with independent envelope structures.

Natural Timbre is created by Natural Instruments and is governed by the perfect chaos of quantum physics.

Synthetic Timbre is created by electronic means employing a myriad of techniques including: analog, additive, FM, sampling, granular, modeling, etc. and combinations thereof. By employing quantum computers, natural timbre could be synthesized, i.e. natural synthesis.
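
As one concrete instance of the techniques listed, here is a minimal additive-synthesis sketch (the function name and partial structure are my own illustration): a timbre is built by summing sine partials of chosen frequencies and amplitudes.

```python
import math

def additive(partials, dur=0.5, rate=44100):
    """Additive synthesis: sum sine partials (freq, amp) into one timbre.
    A sketch of one listed technique, not a full synthesizer."""
    n = int(dur * rate)
    return [sum(a * math.sin(2 * math.pi * f * i / rate)
                for f, a in partials)
            for i in range(n)]

# A harmonic series on A2 (110 Hz) with 1/k amplitude rolloff yields a
# crude sawtooth-like timbre.
tone = additive([(110 * k, 1 / k) for k in range(1, 8)])
```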

Music that is composed holistically can be described with the adjective holosonic.

Classes of Music

Natural Music is music that is found in Nature, including all bird songs and other emanations of creatures great and small. It is the sound of the wind through the trees or a babbling brook. Natural music does not employ instruments.

Constructed Music is music that is created deliberately by means of inspiration, training, planning, memorization, notation or otherwise recorded and employs instruments of all types to perform or record. Generally speaking, all human music is constructed, even when spontaneous or improvised. A case can be made for human singing, clapping or other such instrument-free music as being Natural Music. However, if it makes use of musical scales or advanced techniques and does not come from an innate, naive ability, it is constructed.

Holistic Music is a subtype of Constructed Music where the structure is composed into the minute details of audio sound production as well as on the larger scale of traditional musical structure.

Subliminal Music is musical content that is not registered consciously but still has psychological impact. This can be created intentionally and also exists in almost all music naturally since many of music’s finer details are left for the subconscious or inner mind to process, especially when it comes to processing timbre. Indeed, a good Holistic Composer will be targeting the subconscious effects of timbral expression and should not even bother attempting to make all timbre accessible to the conscious.

Imaginary Music would seem to be free of the limitations of vibrational physics, but there is no current way to bypass the timbral limitations of the eardrum and inner ear for physical music. However, cognition is powered by electromagnetic phenomena with their own vibrational qualities, which makes one wonder whether imagining a C note vibrates something in the mind at that frequency, or whether we can imagine frequencies without need of the frequencies themselves.

Imaginary music can only be heard by the imaginer. However, it can sometimes be approximated by a skilled musician’s ability to “make it real,” especially the notes imagined, less so the timbre. Yet at the same time it is quite possible to imagine music so fantastical that it defies one’s ability to transcribe it or even explain it to another person. Many people, even non-musicians, have heard such things in their minds as “music of the spheres” or music that defies description.

Visual Music is where visual content is artfully constructed either using a musical sensibility of form, palette and dynamics or where it is used in conjunction with a musical performance to enhance the emotional or artistic impact.

Artificial Music is music that is conceived or produced entirely algorithmically using Artificial Intelligence or other programmatic means. It must still be listened to and appreciated by conscious beings, however.

Corporeal Music is music that does not get processed by audio receptors into the mind but flows through the body as vibrations and pulses. Deaf people can thus experience rhythm and the lower frequencies of musical content, and music can be composed to maximize this experience. Anyone who has danced in a loud club with a serious subwoofer has experienced this very direct form of music, perhaps the most ancient of all, since touch was an important first sense for survival. The ears in a sense are nothing more than a refined touch sensor optimized for higher frequencies. A frequency of 6 Hz is also known to produce nausea, so that should probably be avoided.

Hybrid Music is music comprised of any combination of the above types.

Atomic Music is music which manifests itself by virtue of octaves and other intervals in atomic and molecular interactions. H2O is a perfect example of atomic octaves, not only blending in a 2:1 ratio but where the atomic weights of 1:16 are also octaval. This might explain why water tastes like octaves sound—satisfying but not particularly interesting. Elements are governed by complex vibrational frequencies and have specific spectra which can be likened to musical scales.

Metaphysical Music is music applied to domains outside of tangible reality. For example, when we get a “bad vibe” we may be reacting to a sort of dissonance produced in a resonant field we don’t fully understand. Philosopher Rupert Sheldrake has written extensively on the topic of Morphic Resonance, a hidden field he says explains certain proven extrasensory perceptions observed in animals and humans such as the ability to feel when someone is staring at you from behind. Likewise, a “good vibe” could be thought of as a form of consonance.

Musical Performance is a musical event where constructed music is conveyed to an audience. Such performances are often combined with Visual Performance which can include: architecture, art, lighting, effects, dance, projection, real-time generative and other such expressions or combinations thereof.

We can never hear the full performance since we have only two ears and eyes with limited frequency response and observe it from one location at a time.

All music has a fractal dimension which can be calculated (usually between 1.65 and 1.68). Benoit Mandelbrot devised a means to calculate fractional dimensions between the integer dimensions. This is a very useful tool in musicology because it allows us to measure the complexity of timbre, and thus to understand the differences between various sonic experiences and to compare fractal dimension across different realms, genres and works.
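
Several estimators exist for the fractal dimension of a sampled signal. The sketch below uses Higuchi's method, one common choice for 1-D signals (my own illustration, not a procedure specified by Mandelbrot or this theory): curve lengths are measured at several scales k, and the slope of log-length versus log-scale gives the dimension estimate.

```python
import math

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension of a 1-D signal by Higuchi's
    method: average normalized curve length at each scale k, then fit
    the slope of log(L) against log(1/k)."""
    N = len(x)
    pts = []
    for k in range(1, kmax + 1):
        total = 0.0
        for m in range(k):                      # k interleaved subseries
            n = (N - m - 1) // k
            if n == 0:
                continue
            length = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                         for i in range(1, n + 1))
            total += length * (N - 1) / (n * k) / k
        pts.append((math.log(1.0 / k), math.log(total / k)))
    xs = [p for p, _ in pts]
    ys = [q for _, q in pts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    # least-squares slope = fractal dimension estimate
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

ramp = list(range(500))            # a smooth curve has dimension ~1
print(round(higuchi_fd(ramp), 2))  # 1.0
```

A white-noise signal scores near 2, and a typical musical waveform lands in between, which is what makes such a number useful for comparison.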

Musical Events are instances of specific musical content or intent performed at a given time within a composition.

Music Notation is a system of symbolic primitives and marks that can be used to represent musical events on a timeline. This can be likened to an alphabet or pictograms in writing. Each type of music and often types of instruments require their own notation systems in order to convey the possibilities of that music or instrument.

A Music Score is a roadmap, plan or instruction conveyed to musicians via a system of Music Notation to facilitate musical performance, analysis or reproduction of a composer’s musical intent. Scores can be printed or displayed digitally.

Musical Domain, Realms, Genres and Styles

The Musical Domain is the overarching container of all musical Realms, Genres and Styles. In a sense this could simply be the Universe, which seems to be governed by musical influences with respect to proportion, consonance and dissonance, but for the purposes of this theory it generally concerns musical content, expression and conscious musical intent.

Musical Realm is a large category of music that can be easily separated out from others and that can contain many genres and styles yet they remain within that realm. Examples of musical realms might be Classical, Folk, Jazz, Indigenous, Rock, etc.

Musical Genres are numerous in count and not appropriate to list here. Suffice it to say that they stem from historical and cultural evolution and changes that are ongoing. Music genres come and go and are born and die and are even reborn. We live in an extraordinary time when we can have ready access to a Musical Catalog of millions of songs, compositions, composers and works of every sort from ancient to cutting edge.

A Genre is distinguished by certain qualities of Musical Structure, instrumentation, tempo and other, intangible qualities that lend themselves to identification.

A Musical Style can be similar to a genre but generally is a subset of a given genre, identifiable perhaps as a particular artist or a difference in approach. For example, Chicago and St. Louis Blues both belong to the Blues Genre but are distinct styles that an expert in blues would discern. It can be difficult to determine whether a body of work deserves the Genre or the Style classification, and it generally falls to experts familiar with a genre to know where a particular work fits in.

Nothing prevents music from incorporating multiple realms, genres or styles, and indeed Fusion is a widely used term in music whereby two or more types of music are “fused” together, even possibly spawning an entirely new genre. The chart below could be applied to any and all realms of music and is a fascinating way to look at how music evolves from generation to generation. Holistic Tonality encourages all manner of fusion in the mission to create great timbre.

This demonstrates the tree-like evolution of musical genres and styles

Musical Presentation relates to the scope and the maximum number of parts and instruments used and include:

Simple—a single performer or main voice
Duet—two performers or main voices
Trio—three performers or main voices
Quartet—four performers or main voices
Ensemble—from 5 up to 12 performers or voices
Orchestral—from 13 to 128 performers or voices
Complex—unlimited in number of performers or voices

Any “-tet” can be used to specify the number of performers when known or important to do so, such as Quintet or Octet.

Any number of Subclasses of Presentation pertain to the instrumentation and role of performers and include for example:
Choir—the exclusive use of singing parts to perform a given work in harmony or unison (timbral harmony).
Consort—the use of a single type of non-vocal instrument by multiple players, eg. Robert Fripp and his League of Crafty Guitarists.
Concerto—the featuring of a main instrument within a larger presentation
Opera—combination of group and solo singing with orchestral work, often employing stagecraft and acting.
Symphonic—an orchestral presentation of many instruments, often using multiples of similar instruments in sections to amplify and enrich the sound.

Uses of Music

Music has many uses and purposes and is often used in ways not originally intended. These are the most common uses of music:

Artistic Expression—In its purest form, music stands on its own merits and is experienced as a supreme form of expression and creativity.

Entertainment—Much of music’s history is consumed in the pursuit of supporting entertainment, whether for the masses or the elite. This includes film and game soundtracks, concerts and small gigs, parties, raves, marching bands, parades and more.

Celebration—Music is an important part of most celebrations, one of the most common examples being the birthday song.

Rituals—Almost every religion makes ample use of very specific music for specific rites, occasions and religious cycles. Social rituals such as weddings and memorials also employ a variety of music depending on culture. Sporting events are often punctuated by anthems and musical emphasis.

Atmospheric—Music is often employed as background music where it is not expected to be noticed but provides ambience to a setting. This is a great use of ambient music in fact.

Therapy—The growing number of outlets for therapeutic music such as meditation apps and music therapists shows the tremendous potential for music to play a major role in medicine and psychotherapy. Massage therapists often employ relaxing music to enhance the experience and promote relaxation. Music can take one’s mind off of pain and promote healing.

Dance—Music is the prime mover of dance as it provides a backbone of rhythm and expression to propel and synchronize the dancers.

Education—Incorporating music into the educational process enhances memory and increases focus.

Social Bonding—Music can become a form of group interaction such as singing a song together or drum circles where it’s not really about the performance but the act of playing together.

Deterrent and Harassment—Music has been used to annoy people by playing music that is not liked, bad music or loud music in hopes that it will keep people from gathering or loitering in certain spots.

Warfare—Music has been used in wartime both to inspire the troops and to terrify the enemy.

Classes of Instruments

An instrument is any single named Voice used in a composition but can include the combinations of all voices or subgroups thereof, or the same, from various frames of reference as listed below. There are implied instruments between the musician and the listener such as the room and the ear which bring their own colorations to the musical perception. An audio instrument produces a sound directly from recorded audio and can be triggered by a physical or virtual controller.

It should be noted that the traditional connotation of timbre regards the sound and tone quality of instruments. This is still true, but only insofar as instruments produce timbre which is combined with other timbres to produce music; the instrument’s timbre is thus a subtimbre of the music’s, and the music is the overtimbre of the instrument’s.

The Auricular Instrument is the ear and the parts thereof such as the Eardrum, Ossicles and Cochlea. This instrument has its own physical characteristics which both color and limit human auditory experience.

The Corporeal Instrument is the body which can be used for generating musical rhythm such as by clapping or finger snapping or more often, singing, chanting and rapping. This instrument is also a receiver for Corporeal Music whereby the body can perceive certain lower frequencies by vibrating the body directly or from the surrounding air. Higher frequencies can also be perceived directly by using the bones as transducers. This was likely the very first human instrument. Elephants are known to communicate over large distances by low frequencies entering the feet.

The Common Instrument is the Medium between the ear and the music production elements, in effect, the listening space. In reality, this is the medium all music permeates in its final journey to our ears. Even if a listener has headphones or ear buds, the air in the ear canal is the medium of transmission. Each medium has its own vibrational constraints and colors and limits what we hear according to various characteristics of density, temperature, acoustics and more.

The Architectural Instrument is the space confining the musical performance. This can have a tremendous effect on the coloration and quality of the music by virtue of the shape of the room, the materials of reflection and even the audience members themselves. Natural Reverb occurs upon reflection of sound off of surfaces and can even produce echoes in some instances, even in outdoor venues. Recording studios will often try to deaden the space in order to record the driest sound so that reverb can be added in the mixing process and fully controlled.

Physical instruments are instruments that directly cause air to vibrate upon the physical actions of a player. To be more precise, these instruments cause air to vibrate by imparting their own vibrations to it, such as a plucked string, or by triggering waves to form, such as in a pipe. Often, such instruments have means by which the pitch can be changed by shortening or lengthening the vibrating component or changing its mass. However, some instruments are made for playing a very limited range of notes, perhaps only one, and may lack complications such as frets or valves.

Natural instruments are a subclass of physical instruments and are those that are non-electronic and composed of natural materials designed to actuate and amplify air vibration using mathematical divisions of the actuator, be it a string or a column of air or a percussion type. Examples of natural instruments include: Human voice, Violin, Guitar, Piano, Oboe, Trumpet, etc.

Percussive Instruments are a subclass of natural instruments specialized in articulating time. These instruments are likely our most ancient and primal, tying into the heartbeat itself and deeply rooted in the tribal cultures from which we arose. The sophistication around drumming and rhythm is an entire language unto itself, and the tonal variations of percussive instruments are endless, both in the instruments themselves and in the way they are played. In a way, the drum acts as a very low-frequency oscillator (LFO) of sound in the way it elevates beats and changes the envelope of the timbre on clearly delimited beats, in a range of cycles that remains on the lower end and is more closely linked to BPM, whereas other frequencies are measured in Hertz (cycles per second), making drums 60 times lower in frequency, some 6 to 7 octaves below the low end of audio. Percussive instruments by their nature have short attacks so as to accurately mark a moment of time with clarity. The sounds of drums are endlessly complex and utilize many high and low frequencies that cut through the soundscape momentarily. The power of drumming to evoke emotion and inspire people to move is a foundation of human culture. Examples include: Tabla, Conga, Snare, Bass Drum, Cymbals, Toms, Wood Drums and many, many more. Percussion is sometimes said to be the backbone of music, giving all other parts a clear rhythmic structure on which to build.

Artificial Instruments are another subclass of physical instruments that require circuitry or artificial materials which do not themselves vibrate air, such as a Theremin, which translates electromagnetic vibrations into air vibrations to produce haunting sounds on a continuum (unless it is a digital theremin, which can trigger MIDI, Musical Instrument Digital Interface, notes on a scale). Analog synthesizers such as the original Moog are also artificial instruments in that they employ non-digital circuitry to produce and combine oscillations, filters, envelopes, etc.

An Electric Guitar can be considered an artificial instrument of a sort that employs vibrating strings, wood and natural properties, yet reduces all of these factors through magnetic coil pickups responding to a vibrating metal string to produce a current that is then amplified by means of circuitry and often greatly enhanced by means of effects. This often results in a sound that bears little resemblance to the acoustic sound left behind and largely unheard.

Virtual instruments are those that cause air to vibrate based on software and hardware outputs. Examples of virtual instruments include: Synthesizers, generative software. Virtual instruments can be made to emulate natural instruments either through sampling, playing a pre-recorded note of a real instrument at various volumes (velocity) or via modeling, using algorithms to mimic the vibrational behavior of a natural instrument.
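
As a concrete instance of the modeling approach, the classic Karplus-Strong algorithm simulates a plucked string with a delay line and an averaging filter. This is a sketch of one well-known technique, not a claim about any particular virtual instrument:

```python
import random

def karplus_strong(freq, dur=1.0, rate=44100):
    """Karplus-Strong plucked-string model: a noise burst circulates
    through a delay line whose length sets the pitch; a two-point
    average simulates the string's damping."""
    n = int(rate / freq)                             # delay-line length
    buf = [random.uniform(-1, 1) for _ in range(n)]  # the "pluck"
    out = []
    for i in range(int(dur * rate)):
        out.append(buf[i % n])
        buf[i % n] = 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

string = karplus_strong(220.0, dur=0.5)  # a decaying A3 pluck
```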

Drum Machines and Digital Drums are a subclass of Virtual Instruments dedicated to playing percussive parts which tend to have shorter envelopes with a fast attack.

A Bridge Instrument is an emulator that can bridge between natural instruments to seamlessly delve into composed timbre while maintaining timbral aspects of the original instrument. This is a powerful tool for Holistic Composers who wish to integrate the beauty of traditional orchestration yet be able to explore more advanced possibilities of musical expression that would be impossible otherwise. It is theoretically possible to incorporate one such instrument that could handle multiple emulations simultaneously but it would be a bit unwieldy in a performance of a more complex work and thus may be more desirable to employ several as needed. Such an instrument could, for example, be queued to take a cello note and give it a special flourish or ornamentation and then pass the music back to the cello/cellist to continue.

Virtual instruments can be controlled via hardware or software User Interfaces or via HyperInstruments, physical or software interfaces that play virtual instruments. These are sometimes referred to as Controllers although I would classify the average MIDI controller as a more rudimentary Hyperinstrument. An Arpeggiator is a bit more along the lines of a hyperinstrument in that it takes a simpler physical input to produce a more complex result structurally unrelated to the musician’s physical actions, i.e. Hold down 4 keys on a keyboard and an arpeggiated pattern occurs.
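
The arpeggiator idea can be sketched simply (the function and its parameters are my own illustration; a real hyperinstrument would emit timed MIDI events rather than a list):

```python
from itertools import cycle, islice

def arpeggiate(held_notes, steps=8, mode="up"):
    """A rudimentary arpeggiator: a handful of held notes in, a cycling
    pattern structurally unrelated to the physical gesture out."""
    order = sorted(held_notes, reverse=(mode == "down"))
    return list(islice(cycle(order), steps))

# Hold down 4 keys (MIDI note numbers for a Cmaj7 chord)...
print(arpeggiate([60, 64, 67, 71]))
# -> [60, 64, 67, 71, 60, 64, 67, 71]
```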

Composed Instruments are a subclass of virtual instruments that have been crafted to fulfill a musical score whereby the finest details of the timbre have been intentionally made to sound a given way at a given time. This is an advanced Holistic Tonality technique.

Visual Instruments are those that are conceived to trigger visual effects and colors either based on a performer’s actions (eg. Color Organ) or in response to musical events by means of lighting, projection or other automations. Music Visualization is a means by which the musical content is directly reflected, in parallel, into a visual experience based on either the musical structure, dynamics or timbral aspects of the music or combinations thereof. A dancer is a Natural Visual Instrument.

An Imaginary Instrument is one that is used to imagine sounds. These are perhaps the least limited of all instruments because they are not bound to the rules of physics other than those that govern the brain. The amplitude of imagined instruments is indeterminate but it’s safe to say no one ever went deaf listening to imaginary instruments.

The Whole Instrument is the sum total of all instruments used in any combination of the above types to produce the sound that is taken in by the audience as music. This is another way to say that all music is timbre. We always hear the Whole Instrument, or the final result of everything in a cohesive whole of vibration, time and amplitude.

Sectional Instruments are best exemplified by a section of an orchestra such as the “string section.” Though comprised of individual instruments, these often act as a single instrument producing a desired timbre, eg. the sound of the string section. Oftentimes, parts are duplicated across multiple instances, producing not only a louder sound but a richer chorus of slightly different sounds at slightly different pitches, thus affecting the timbre. Proving that this is simply another form of instrument is easy, since a single electronic instrument can simulate an entire orchestra section.

All instruments except imaginary ones create timbre in the Common Instrument.

Every timbre heard is a composition, therefore compositions are independent of time scale too. In other words, Beethoven’s 5th Symphony could be used as a “note” in a larger symphony.

The Quality of an instrument impacts both its playability and its sound. An instrument is said to be of high quality if it is built well, has great intonation (tuning), produces a good sound and feels good to play. Creating quality instruments requires great expertise and attention to detail.

Oddly enough, in the right hands, a poor instrument can still be used to create quality music. Even slightly out-of-tune instruments are desired in some circumstances, such as a honky-tonk piano or a funky old guitar, and substituting a perfectly tuned Steinway would ruin the honky-tonk experience.

Electric and Electronic instruments of the 20th Century and beyond are intertwined in an ever-growing ecosystem of computers, patches, routers, interfaces, effects, amplifiers, cables and wires, Bluetooth, speakers, etc. Each year, new instruments, sounds and capabilities appear, and folks barely have time to master or even learn them before the next thing comes along. Some also die out and fade in a sort of ongoing Darwinian battle for the minds and hearts of musicians and their audiences. Much of this churn is driven by commercial enterprises seeking profits.

Classes of Temporal Structure

The Foundational Axis of Music is Time. Anything that happens in music requires a certain amount of elapsed time to register on the conscious mind. Below are categorizations of the temporal divisions in music.

Compositions are musical structures with a distinct beginning and end and which contain the desired Musical Content to be conveyed to the audience. The shortest composition would be composed of one sonon but might not be perceptible, nor can it be considered music, because it has no discernible content except perhaps relative to the performance-space. It could be made as a novelty. The shortest perceptible sound was measured at 1.2ms (Irwin & Purdy, 1982), so that is about how long the shortest composition could be.

A Composition is any music that is played, composed or otherwise created for the purpose of listening to it. Imaginary compositions only exist for the imaginer, though they can be realized for other listeners depending on the skill of the composer and the instruments available. A composition must utilize at least one instrument or sound. A null composition is silence, as in the example created by John Cage⁹.

Compositions can be broken into the following categories:

  • Structural makes use of traditional compositional techniques such as orchestration, arrangement, counterpoint, harmony and somewhat defined sections. Uses instruments to play notes and is somewhat timbre invariant. Examples: symphony, song, jazz standard, sonata, blues, etc.
  • Timbral focuses on instruments, sounds and textures with less defined structure and sections. Examples: ambient, electronic, trance, techno, noise.
  • Holistic seamlessly combines structural and timbral where musical structure is continued into the finer details of sound.
  • Null, a composition comprised entirely of rests or silence.
  • Hybrid, a combination of any of the above.

Compositions come in every imaginable form but can be more generally composed of two types of musical content in any combination:

  1. Articulated, where timbre is divided into distinct packages of notes and chords with clearly defined envelopes
  2. Timbral, where timbre is more amorphous and alters more gradually, as found for example in Laurie Spiegel’s Expanding Universe composition or the works of David First, who uses long, drawn-out guitar sounds. Also, certain apps such as Henry Löwengard’s Droneo are used to create interesting long sounds with rich overtones. Ancient drone instruments such as the Didgeridoo, Tambura and Hurdy-Gurdy produce timbral music.

Sections are composed of N Measures and are generally analogous to a paragraph or a chapter in writing. Compositions can use repeating sections or be structured into recognizable parts by stringing together N number of sections in arbitrary or organized patterns such as ABABCABA. This pattern is sometimes referred to as the Form of the music and can be so strict as to govern the Genre of the music, eg. EDM, Blues, Sonata, etc. or so lax as to have no recognizable independent sections at all, which is said to be Through-composed.

Measures are absolute units of time intervals and are an important element of Musical Structure. Measures can vary in length in the course of a composition or can remain constant. Measures can be delineated in beats or seconds. A measure delineated by beats is governed by the Active Time Signature at the beginning of the measure which determines how many beat units are in the measure. The length of a measure can change arbitrarily at any time within musical structure. The duration of a beat-delineated measure is determined by the time signature and the tempo. The duration of a time-delineated measure is absolute and not necessarily affected by tempo unless it is declared that 1 measure = N sec @ X BPM. A time signature of 5@90 means each measure is 5 seconds long at 90 BPM. Time-delineated measures are more freeform as to internal structure and can subdivide arbitrarily whereas beat-delineated measures will generally structurally subdivide into aliquot divisions of a beat or sub-beat. Even so, Holistic Tonality assumes nothing about the content of any portion of any measure which may or may not conform to any rules.
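
The arithmetic of a beat-delineated measure can be made explicit. The sketch below assumes the conventional reading in which BPM counts quarter notes; the function name is my own:

```python
def measure_seconds(beats, beat_unit, bpm):
    """Duration of a beat-delineated measure, assuming BPM counts
    quarter notes: each beat lasts (60/bpm) * (4/beat_unit) seconds."""
    return beats * 60.0 * 4.0 / (bpm * beat_unit)

print(measure_seconds(4, 4, 120))  # 4/4 at 120 BPM -> 2.0 seconds
print(measure_seconds(7, 8, 140))  # 7/8 at 140 BPM -> 1.5 seconds
```

A time-delineated measure such as 5@90 needs no such computation; it is 5 seconds long by declaration.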

Time Signature is a declaration of expected Beats Per Measure and in Holistic Tonality is not required to be humanly playable. Time signature can be declared globally or locally on a measure by measure basis.

Time signatures are time-domain independent. The traditional idea of time signature is still valid insofar as a composer desires to create envelope patterns relative to measures but other more subtle patterns may exist at other levels of the composition completely unrelated to the structure or time signature.

All Rhythmic and Dynamic structures are envelopes for timbre. Silence and rests are only in the context of a given sonic unit. For example, if musicians rest on a note and someone in the audience coughs during that rest, the rest only exists in the composition-space of the piece but not in the performance-space. The cough is included in the timbre of the performance. Also, during a rest there may be reverberation, ambient or ancillary musician-generated sound such as breathing that is also in the performance-space.

A Beat is an arbitrary division of a measure; most commonly, 1/4th of a measure is referred to as a quarter note. But this simple classification falls flat when you divide measures into anything but multiples of two. For example, we have no concept of an eleventh note. For this we use tuplet notation, i.e. an 11-tuplet, but when using computers we are limited only by clock speed and binary math. This bias toward time octaves is no doubt built into our inner brain, and for some reason we are good at halving, doubling or even tripling tempos, but we are less natural at odd divisions or multiples of time, where we become less accurate without electronic assistance.

Sub-beats are further divisions of the beat into useful segments for musical expression, accents, timbral changes etc. Sub-beats can be any number not exceeding the number of consecutive sonons that can be played within a beat.

Sub-beats are a way to think of N-tuplets, whereby a longer beat is divided into N parts spanning its duration. An eighth-note triplet, for example, divides a quarter note into three sub-beats.

Unit beat is the duration of a sonon.

All music is comprised of sonons which are the smallest musical units, or musical quanta. Sonons are the only musical units that are not timbre.

Any temporally consecutive sonons comprise timbre.

Swing is a temporal construct that shifts emphasis and accents toward a triplet feel and is often employed in jazz, blues and hip hop. Swing is applied at a beat or sub-beat level, most commonly where quarter notes are divided into eighth-note triplets and the middle note of each triplet is skipped. Likewise, eighth notes can be divided into 16th-note triplets and so on, although the perception of swing is generally felt when applied to slower notes such as quarters. Perfect swing is played exactly on the triplet, but partial swing can also be applied by sliding the play point to times before the perfect triplet while still maintaining a swing feel.
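
As a rough sketch in Python (a minimal illustration of the idea, not a prescribed algorithm; the function name and straight-eighth assumption are my own):

```python
def apply_swing(onsets, amount=1.0):
    """Shift off-beat eighth-note onsets toward the triplet point.

    onsets: note-on times in beats, where off-beats fall on x.5.
    amount: 0.0 = straight, 1.0 = perfect swing (off-beat at 2/3 of the beat).
    """
    swung = []
    for t in onsets:
        beat, frac = divmod(t, 1.0)
        if abs(frac - 0.5) < 1e-9:                   # an off-beat eighth note
            frac = 0.5 + amount * (2.0 / 3.0 - 0.5)  # slide toward the triplet
        swung.append(beat + frac)
    return swung

straight = [0.0, 0.5, 1.0, 1.5]
print(apply_swing(straight))        # perfect swing: off-beats land on 2/3
print(apply_swing(straight, 0.5))   # partial swing: off-beats land between
```

With `amount=1.0` each off-beat lands exactly on the triplet; fractional amounts give the partial swing described above.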

There exists in software the concept of a Groove Template which is generally a type of mapping or filter that alters the time location of music events based on a given algorithm. Such alteration could effectively make notes play slightly before or after a perfect idealized time location to either humanize a rhythm or make it emulate a certain style of playing. In Holistic Tonality, this would be considered an assistive compositional or performance tool.

Time domain is any arbitrary contiguous segment of a composition. Absolute time domain is marked by the beginning of a given time domain, in seconds, relative to the Big Bang and the duration of the time domain¹⁰. This places a composition into the historical record of the known universe but is of limited use and accuracy since for all we know the Universe is Eternal. Relative time domain is marked by the beginning of a time domain with respect to the beginning of a composition and duration. All compositions start at zero in the relative time domain, the beginning. Pick-up notes can be represented starting at a negative time. Time zero is beat 0 of any composition but can be referred to as beat 1 in traditional notation. A Time Stamp can be given to a musical event using a desired level of precision such as microseconds or milliseconds.

Tempo can be measured as seconds per measure, measures per second, beats per minute (traditional), minutes per beat, beats per second or seconds per beat. Tempo can change arbitrarily in the course of a composition. As we dive down into the audio level of holistic composition, tempo exists on a different order, where kilohertz, or thousands of cycles per second, is the unit. For most people anything above 15 kHz is inaudible, though it may still be perceived subconsciously.

Common terms are employed to label tempos including:

  • Larghissimo — very, very slow (20 bpm and below)
  • Grave — slow and solemn (20–40 bpm)
  • Lento — slowly (40–60 bpm)
  • Largo — broadly (40–60 bpm)
  • Larghetto — rather broadly (60–66 bpm)
  • Adagio — slow and stately (literally, “at ease”) (66–76 bpm)
  • Adagietto — rather slow (70–80 bpm)
  • Andante moderato — a bit slower than andante
  • Andante — at a walking pace (76–108 bpm)
  • Andantino — slightly faster than andante
  • Moderato — moderately (108–120 bpm)
  • Allegretto — moderately fast (but less so than allegro)
  • Allegro moderato — moderately quick (112–124 bpm)
  • Allegro — fast, quickly and bright (120–168 bpm)
  • Vivace — lively and fast (≈140 bpm; quicker than allegro)
  • Vivacissimo — very fast and lively
  • Allegrissimo — very fast
  • Presto — very fast (168–200 bpm)
  • Prestissimo — extremely fast (more than 200bpm)

Tempo is often used as an expressive component of music where it is either increased (accelerando) or decreased gradually (allargando) or abruptly. A change in tempo does not necessitate a perceived change and it can even be disguised by changing beat values to compensate, though usually it is meant to be heard.

In Holistic Tonality, tempo is a useful anchor to tie the larger structures to but becomes less meaningful on the audio or timbral level since timbre exists on many time scales as mentioned. In that sense it becomes a frame of reference for a given time scale which is very important in performance.

Common terms for tempo changes include:

  • Accelerando — speeding up (abbreviation: accel.)
  • Allargando — growing broader; decreasing tempo, usually near the end of a piece
  • Calando — going slower (and usually also softer)
  • Doppio movimento — double speed
  • Lentando — gradual slowing and softer
  • Meno mosso — less movement or slower
  • Mosso — movement, more lively, or quicker, much like più mosso, but not as extreme
  • Più mosso — more movement or faster
  • Precipitando — hurrying, going faster/forward
  • Rallentando — gradual slowing down (abbreviation: rall.)
  • Ritardando — less gradual slowing down (more sudden decrease in tempo than rallentando; abbreviation: rit. or more specifically, ritard.)
  • Ritenuto — slightly slower; temporarily holding back. (Note that the abbreviation for ritenuto can also be rit. Thus a more specific abbreviation is riten. Also sometimes ritenuto does not reflect a tempo change but a character change instead.)
  • Rubato — free adjustment of tempo for expressive purposes
  • Stretto — in faster tempo, often near the conclusion of a section. (Note that in fugal compositions, the term stretto refers to the imitation of the subject in close succession, before the subject is completed, and as such, suitable for the close of the fugue. Used in this context, the term is not necessarily related to tempo.)
  • Stringendo — pressing on faster (literally “tightening”)

In Holistic Tonality it becomes almost useless to name every type of tempo change since they are unlimited in variety. Changing tempo is like changing a magnetic field and re-aligning pixels on a screen which can be as varied as every image that has ever been on a television screen. However, we can categorize them in terms of the method applied:

  • Smooth—a gradual change of tempo
  • Abrupt—a quick change of tempo
  • Logarithmic—changing tempo based on a logarithmic scale
  • Geometric—changing tempo on a geometric scale
  • Algorithmic—using any algorithm to govern tempo changes
  • Random—altering tempo randomly
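
These categories can be sketched as interpolation methods between a starting and ending tempo (a hedged illustration; the method names simply mirror the list above, and the geometric and algorithmic cases would follow the same pattern):

```python
import random

def tempo_curve(start_bpm, end_bpm, steps, method="smooth"):
    """Return a list of per-step tempos moving from start_bpm to end_bpm."""
    out = []
    for i in range(steps):
        x = i / (steps - 1) if steps > 1 else 1.0
        if method == "smooth":          # gradual linear ramp
            bpm = start_bpm + (end_bpm - start_bpm) * x
        elif method == "abrupt":        # quick change: jump at the midpoint
            bpm = start_bpm if x < 0.5 else end_bpm
        elif method == "logarithmic":   # equal ratio per step
            bpm = start_bpm * (end_bpm / start_bpm) ** x
        elif method == "random":        # altering tempo randomly
            bpm = random.uniform(min(start_bpm, end_bpm), max(start_bpm, end_bpm))
        else:
            raise ValueError(method)
        out.append(bpm)
    return out

print([round(b) for b in tempo_curve(90, 180, 5, "logarithmic")])  # → [90, 107, 127, 151, 180]
```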

Microtempo is tempo as it applies inside of beats where thousands of musical events can take place in the context of holistic tonality. This can be unrelated to the tempo of the larger musical structure.

Supertempo is tempo applied to superchords independent of the Master Tempo. For example, one could trigger superchords with a tempo of 780 BPM on quarter notes ticking along at 90 BPM on a separate time line.

A metronome is an assistive tempo tool to give a musician an absolute reference of beat value and can employ a simple audible click or a more elaborate pattern that emphasizes certain beats and sub-beats. Today, a drum machine or click track can serve as a metronome and there are many apps that serve the purpose.

A ramp is where a tempo change is effected by means of changing beat values instead of the master clock. So one could have a measure of quarter notes followed by eighths, followed by sixteenths, etc., and it would have the effect of doubling the tempo each measure. I have created a ramp editor in HyperChord that takes this concept to an extreme and adds a y-axis component of fatness, where more chord notes are introduced over time, creating more intensity.

The sum of all beats in a composition comprises the Length of the composition for the compositional time domain. The Duration of the composition is the sum of all beats at the tempo they are played at and is measured in seconds.

Tempo Octaves are a doubling or halving of tempo.

Beat octaves are a doubling or halving of beats.

Computers give us new ways to divide time and I have experimented with various techniques such as Beat Spirals, where successive note durations are reduced or increased by a given algorithm, either subtractive or divisive. For example, a duration of one second followed by ones of 95/100ths, 90/100ths, etc. is subtractive, using 0.05 s intervals of diminishment. Use a multiplier instead, such as 0.61803, and you get a sort of rhythmic Golden Spiral. These constructs are quite pleasing and, depending on the note pattern used, make perfectly valid musical content that is unplayable by human hands.
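
A minimal sketch of both spiral types, assuming durations in seconds (the names here are illustrative, not taken from HyperChord):

```python
def beat_spiral(first, count, *, subtract=None, multiply=None):
    """Generate note durations (in seconds) for a beat spiral.

    subtract: fixed amount removed each step (subtractive spiral).
    multiply: ratio applied each step (divisive spiral).
    """
    durations, d = [], first
    for _ in range(count):
        durations.append(round(d, 6))
        if subtract is not None:
            d = max(d - subtract, 0.0)
        if multiply is not None:
            d *= multiply
    return durations

# Subtractive spiral with 0.05 s diminishment:
print(beat_spiral(1.0, 5, subtract=0.05))   # → [1.0, 0.95, 0.9, 0.85, 0.8]
# Rhythmic Golden Spiral: each duration ≈ 0.618 of the previous one
print(beat_spiral(1.0, 5, multiply=0.61803))
```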

This means there is an infinite set of algorithmic rhythm patterns to be explored by future composers and hyperinstrumentalists.

Music that is invariant in beats and tempo is isorhythmic and becomes quite boring very quickly. Electronic music can play so precisely that it becomes machinelike and robotic sounding and hits the ear with a thud, even when the notes are right. A certain amount of imprecision applied at the right time, keeps the ear happy and wanting what’s next, possibly because it speaks to our organic nature.

Likewise, music that is too imprecise rhythmically is annoying and rubs the ear wrong and is perceived as disorderly, sloppy or amateurish. The human ear generally tolerates only so many wrongly played notes before giving up on enjoyment (unless it’s your kid playing).

Rhythmic acuity is a crucial skill in musicianship and as they say, timing is everything.

Classes of Dynamics

Amplitude can be likened to volume or loudness at a given time. Human hearing operates within a limited dynamic range, from complete silence up to a deafening 150 dB (a jet engine). In wave theory, amplitude is essentially the height of the wave, whether ripples in a pond or 100 ft waves in the middle of the ocean, which are very dangerous, just like the jet engine.

Envelope is the amplitude of timbre at a given time. The musical analog to envelope is Dynamics. Each instrument or voice in a composition has its own envelopes that can combine with other instrument envelopes to create the Composition Envelope.

All rhythmic and dynamic structures in music are envelopes for timbre. Silence is a Null Envelope and exists only in the context of a given music-space: as described above, a rest exists in the composition-space of a piece while the performance-space may still contain coughs, reverberation or a musician's breathing.

Rests are parts of timbre with no (or near-zero) amplitude for a given instrument. Rests are of arbitrary duration applied to arbitrary voices within a composition. A Solo is when rests are applied to all but one instrument or voice in a composition.

Timbre with no amplitude is silence or global rest. Rests do not necessarily create silence in timbre since they may only apply to certain instruments and not others.

Rhythm is the employment of rests and envelopes to create metric patterns and syncopation. Many aspects of rhythm are intangible, such as swing, groove and feel and the human ear is highly attuned to small variations in rhythm. The possible theoretical permutations for rhythm are infinite since any time domain can mathematically be broken into an infinite number of subdivisions in combination with every possible envelope and accent pattern. However in practical terms the rhythmic possibilities and permutations for a given time domain would be finite but extremely large. Also, we humans tend to gravitate toward rhythmic patterns based on heartbeats and breathing, the most primal rhythms.

Crescendos and Decrescendos are dynamic envelopes applied at a measure level of a musical structure.

Accents are dynamic envelopes applied at a beat level.

Sub-accents are dynamics applied at a sub-beat level.

Envelopes in synthesis are broken into 4 parts: Attack, Decay, Sustain and Release (ADSR).

This exact same model can be applied to any time domain within a composition for a given voice, a given group of voices or all voices (Master Envelope) but it is best used for shorter duration events. For example, a staccato note is one that has a very short decay and little, if any sustain. However, straight lines are not sufficiently subtle and actual dynamics are much more complex mathematically. Cubic splines or trigonometric functions provide a wider range of continuous contours, however there needs to be room for discontinuity and glitches for full expression.
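
A linear ADSR can be sketched as a piecewise function of time. As the text notes, straight lines are only the basic model and real dynamics call for curves, so treat this as a sketch only:

```python
def adsr(t, *, attack, decay, sustain, release, note_len):
    """Amplitude (0..1) of a linear ADSR envelope at time t (seconds).

    sustain is a level; the other parameters are durations.
    note_len is the moment the note is released.
    """
    if t < 0:
        return 0.0
    if t < attack:                        # ramp up to full amplitude
        return t / attack
    if t < attack + decay:                # fall to the sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_len:                      # hold at the sustain level
        return sustain
    if t < note_len + release:            # fade out after release
        return sustain * (1.0 - (t - note_len) / release)
    return 0.0

# A staccato-like shape: very short decay, low sustain
env = dict(attack=0.01, decay=0.1, sustain=0.7, release=0.3, note_len=1.0)
print(adsr(0.005, **env), adsr(0.5, **env), round(adsr(1.15, **env), 3))
```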

The Range of a dynamic change is the difference between the highest amplitude and the lowest amplitude of a given envelope. A Diminuendo can be likened to a Decrescendo with a smaller range.

However, this is an arbitrary construct not suited to governing Holistic Composition, which may have thousands of such divisions, curves and ramps. Indeed, musical or audio envelopes need not be thought of as having discrete sections at all but rather simply as dynamic contours or shapes.

Legato is the amount by which notes overlap adjacent notes and tends to create a smoothing between notes.

Piano and Forte are the traditional Italian terms for soft and loud, and all good music uses changes in dynamics to increase musical expression. Each composer will make ample use of dynamic marks. In the end, dynamics serve the purpose of giving music room to breathe and giving the listener crucial differences to appreciate from passage to passage. They allow emphasis and punctuation to make sense of the music, tell a more interesting story and paint a rich musical landscape with hills and valleys, meadows and roaring rivers. They allow delicate, pastoral sensibilities to be juxtaposed with soul-crushing, thunderous highs and everything in between.

The two basic dynamic indications in music are:

  • p or piano, meaning “soft”.
  • f or forte, meaning “loud”.

More subtle degrees of loudness or softness are indicated by:

  • mp, standing for mezzo-piano, meaning “moderately soft”.
  • mf, standing for mezzo-forte, meaning “moderately loud”.
  • più p, standing for più piano and meaning “more soft”.
  • più f, standing for più forte and meaning “more loud”.

Use of up to three consecutive fs or ps is also common:

  • pp, standing for pianissimo and meaning “very soft”.
  • ff, standing for fortissimo and meaning “very loud”.
  • ppp, standing for pianississimo and meaning “very very soft”.
  • fff, standing for fortississimo and meaning “very very loud”.

Music without dynamics is isodynamic and will tend to tire or bore the listener eventually, especially if it is stuck at a high decibel level.

As with other musical constructs, Holistic Tonality requires an extension of the dynamic nomenclature. Indeed, even outside of Holistic Tonality, music and how it is produced have changed so much that the original meaning of dynamic labels fails to capture the changes. Amplification, for example, adds a huge multiplier to dynamics, and even a level of confusion, because the quietest gestures can be amplified to near-painful levels intentionally. Things that are nearly inaudible on an acoustic instrument can be made not only audible but a key element of timbre. One of the great powers of electric instruments is to bring to the fore the microscopic (microaudic) details of overtones and shine a spotlight on their beauty, which explains much of their popularity; if amplification merely magnified unpleasant sounds, it would not be so widely loved.

Furthermore, we need to distinguish between what the player does and what the amplification system does: a player may be at ppp while the amplifier is at fff, and this may be intended. So a separate instruction is required for how something is to be played and how it is finally to sound to the audience.

In a purely mathematical sense, therefore, dynamic values can be said to vary from 0.000 to 1.000, zero being silence and 1.0 the loudest one should go while keeping hearing intact. So ƒƒƒƒƒ could be called 1.0 and the rest could divide the unit loudness proportionally. Perhaps an italic FFFFF could indicate the final loudness and ƒƒƒƒƒ the way it is played, since one could play an electric guitar as loudly as possible while the output remains near inaudible, or PPPPP.
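
One possible, purely illustrative encoding of this unit-loudness scale, dividing it proportionally among the extended marks (the mark list and mapping are my own assumptions, not part of the theory):

```python
# Hypothetical extended dynamic marks, from softest to loudest
DYNAMICS = ["ppppp", "pppp", "ppp", "pp", "p", "mp",
            "mf", "f", "ff", "fff", "ffff", "fffff"]

def mark_to_level(mark):
    """Divide the unit loudness 0..1 proportionally among the marks."""
    i = DYNAMICS.index(mark)
    return (i + 1) / len(DYNAMICS)

print(mark_to_level("fffff"))          # → 1.0 (the loudest safe level)
print(round(mark_to_level("mf"), 3))   # → 0.583
```

A played level and a final amplified level would simply be two such values carried separately, as the text suggests.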

Many musical structures have direct analogues in the visual arts to be employed by visual musicians.

Tonal Constructs in Music

Musical Structure is the timbre and envelope of a composition relative to a system of measures.

Pitch is the prescribed fundamental frequency of timbre at a given time. The analog is a note on a staff. Perceived pitch is subjective. Pitch can be explicit, implicit or indeterminate at a given moment.

Scales and Modes are sets of pitches as described below in detail.

A Note is any timbre that can be perceived by a listener to have a distinct pitch. Notes have arbitrary duration left up to the composer or player.

Melody is any single-line composition for any instrument or sound that varies in pitch at least once. Melodies that have no significant variance in pitch are drones. Melody is often said to be memorable because we can repeat a strong melody that imprints itself upon us. Other parts of music are harder to remember such as harmony or complex rhythm and timbre itself is almost impossible to remember well.

Harmony is any composition that involves simultaneous, stacked melodies or melodies combined with drones. Both harmony and melody produce timbre, so technically harmony only produces a richer timbre; indeed, because timbre is of arbitrary complexity, a single complex drone could have more overtones than a stack of simpler melodies.

Harmony can also be likened to layers or overlays. This might actually be a better way to view harmonies, as they are often thought of in a “flattened” form existing in the same plane as the other notes, merely adding some notes above or below others on the staff. I prefer to see them as more 3-dimensional and having independence. We gain many more dynamic possibilities by using Harmonic Planes, wherein the harmonizing notes above or below can have an entirely separate algorithm governing, for instance, their amplitude or envelopes, which is a bit trickier to notate when they share a staff. This might traditionally be referred to as a Part with its own staff, but it is a bit different in that the same instrument is expected to alter its harmonies rather than a separate part.

A Part is a separate voice within the orchestration/arrangement which has its own melodies and harmonies and is also a type of layer of musical content and is expected to mesh in a meaningful way with the other parts. As a logical construct of holistic tonality a part is a subtimbre.

Harmonies provide the richness and depth of musical relationships and give us the sense of tension and release that moves the musical story along and keeps us engaged. Generally speaking these can be categorized thusly:

  • Unison where the same notes are played by different voices (timbral harmony)
  • Parallel where voices move in the same direction
  • Divergent where voices move away from and closer to each other, producing counterpoint

Motifs are Musical Patterns or repeating elements within Musical Structure and can be thematic in which case they play a central role in the musical statement of the work or incidental, in a secondary role. Motifs are often transformed by means of operations and imaginative treatments so they become more subtle or less obvious to the audience. This points to a natural subliminal quality to music which can say things without saying them. We feel the presence and the connection to transformed motifs but we don’t always consciously recognize them.

Holistic Motifs are those that not only repeat on the structural level of the music but are also embedded into the pre-timbral (superchord) level and the timbral level (audio). Therefore the lengths of holistic motifs tend to be shorter. The longer the holistic motif, the more it becomes a self-similar musical fractal. The depth of self-similarity is also the recursivity of the motif: for example, r = 3 means the motif (r = 0, the seed pattern) transposes itself once (r = 1), then that result is transposed (r = 2), then that result again (r = 3), so a three-note seed (s = 3) spawns a 3⁴ = 81-note superchord, or s^(r+1) notes in general.
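
The recursion can be sketched directly; the seed intervals below are an arbitrary example of my own choosing:

```python
def holistic_motif(seed, r):
    """Expand a seed interval pattern recursively r times.

    At each recursion, the whole current pattern is transposed by every
    interval in the seed, so a seed of s notes yields s**(r+1) notes.
    """
    pattern = list(seed)
    for _ in range(r):
        pattern = [p + s for s in seed for p in pattern]
    return pattern

seed = [0, 4, 7]                       # a major-triad seed, in semitones
superchord = holistic_motif(seed, 3)
print(len(superchord))                 # → 81, i.e. 3**(3+1) notes
```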

Modality is an arbitrary set of pitches within which melody or harmony operates in a given time domain. The analog for modality is musical scale such as chromatic or pentatonic though Modes are a subset of scales explained herein.

Synchronization is the alignment of one timbre of one instrument or music with another in the time-domain using a grid system of beats and measures.

Syncopation is the orderly misalignment of timbre and envelopes to produce interesting rhythmic effects.

Dissonance and consonance are subjective qualities or effects of musical harmony. One listener's consonance is another's dissonance, though we know that physics and mathematics play a strong role in this perception. Sonance is only possible when 2 or more pitches are perceived in proximity to each other. In general, due to the nature of hearing, consonance is experienced most around the Octave, the Fifth and the Third, the three notes of a regular triad. Notes that are close together produce more dissonance because of the complex interactions of their frequencies. Dissonance is used to create musical tension and consonance is used to resolve it.

Tuning is the science of adjusting absolute pitches relative to a standard frequency, such as A = 440 Hz, to reflect the desired temperament relative to that base frequency. Instruments should be well tuned to reflect the musical content as it was meant to be heard, though as one studies more, one realizes that most tuning is a compromise and is never quite perfect. Good musicians know when things are in tune, and it's no fun to listen to one who thinks they are in tune but is not. A poorly tuned instrument within a smaller ensemble can spoil the sound, but in a larger ensemble it could be heard as an interesting timbral effect. Indeed, sound synthesis makes ample use of detuning between oscillators to create more interesting sounds, and piano string triplets are purposefully detuned to make the sound more lively, like a natural chorus and resonance. Fretted instruments pose a challenge for tuners because string vibrations do not conform to the mathematical perfection of equal temperament, so one chord can be perfectly tuned and another slightly off.

Temperament is a System of Tuning of a set of pitches over a given octave range and determines the overall subjective level of consonance or dissonance perceived. Equal Temperament is a system whereby sequential notes in a scale are governed by the Nth root of 2 where the most common form is 12th root of two which produces the 12-note chromatic scale we are most familiar with. However, there are many types of temperaments which include: Pythagorean, Just, Mean Tone and Equal. In Holistic Tonality there are no limits on how these are combined or used.
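
The Nth-root-of-2 construction is easy to state in code (a generic sketch, not tied to any particular temperament library):

```python
def equal_temperament(base_freq, n, steps):
    """Frequencies of an N-tone equal temperament scale.

    Each successive step multiplies the frequency by the Nth root of 2,
    so step n lands exactly on the octave (double the base frequency).
    """
    return [base_freq * 2 ** (i / n) for i in range(steps + 1)]

# 12-tone equal temperament from A 440: one full octave
scale = equal_temperament(440.0, 12, 12)
print(round(scale[12]))   # → 880, the octave

# The same function yields a 19-tone temperament by changing n
nineteen = equal_temperament(440.0, 19, 19)
```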

Cacophony is the highest level of dissonance and results from superimposing unsynchronized timbres and sounds, the result of which is always timbre.

The tonal quality of music can be categorized as follows:

Tonal: music that makes more use of consonance within the chromatic 12-tone scale based on divisions of octaves by the 12th root of 2. In tonal music, often a key is implied or what holistic tonality refers to as a tonal center.

Atonal: music that makes more use of dissonance within the 12-tone scale.

Macrotonal: music that uses intervals greater than the twelfth root of 2 (i.e., wider than a semitone)

Microtonal: music that uses intervals smaller than the twelfth root of 2 (i.e., narrower than a semitone)

Crosstonal: music that combines any of the above tonal qualities.

The Classifications of Scales and Modes

Scales serve so many purposes in music, primarily as a relative set of ordered pitches. In modern music scales are also something to be indexed into algorithmically so Holistic Tonality requires a different approach to scales.

A scale is any organized set of 3 or more discrete pitches and comes in the following forms:

Octival: a finite set that exactly repeats in each octave (linear) with octaves of each note, e.g. the Chromatic scale, Ionian, Pentatonic, 19-Tone, etc.

Meta: a set that repeats over N octaves greater than 1 (linear). For example a Meta 4th could have the following sequence: C2, F2, B♭2, E♭3, A♭3, D♭4, G♭4, B4, E5, A5, D6, G6, C7 repeating on the 5th octave. Using octave equivalency on a meta scale is a valid type of operation that reduces the above scale to include all the chromatic notes of the C2 octave—a simple circle of 4ths tone row. However this loses the full musical power of a meta scale which is meant to have an expansive sound that sweeps across the octaves. Metascales do not partake of the rule of octave equivalency.
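
The Meta 4th above can be generated mechanically (a sketch using MIDI note numbers, where C2 = 36 and each step adds 5 semitones; the flat-based note names are my own labeling choice):

```python
NOTE_NAMES = ["C", "D♭", "D", "E♭", "E", "F", "G♭", "G", "A♭", "A", "B♭", "B"]

def meta_scale(start_midi, interval, count):
    """A meta scale: successive pitches a fixed number of semitones apart."""
    return [start_midi + i * interval for i in range(count)]

def name(midi):
    """Name a MIDI note, using the convention where middle C (60) is C4."""
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

# The Meta 4th from the text: steps of 5 semitones starting at C2
print([name(m) for m in meta_scale(36, 5, 13)])  # reproduces the sequence above
```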

Melodic: a set that changes per octave (nonlinear). Melodic scales are pitch sets to be indexed in various ways.

Natural: a set that retains octaves in the tonic only but where notes within the octaves are not octaves of each other

Unnatural: a set that has no octaves

Algorithmic: a scale whose pitches are calculated in real-time

Hybrid: a set that combines any or all of the above

A scale can be tonal, atonal, microtonal or combinations thereof. A continuum is not considered a musical scale, though it may share attributes. Some scales are only useful in the field of audification and musification (see below).

The scale, being the organized set of pitches, has nothing to do with the musical intent or the employment of a scale in a composition which is at the discretion of the composer and can involve instrumentation, rhythms, dynamics. When employed, it becomes a set of discrete musical events drawing from the scale elements.

The first element of a scale is said to have an integer index of E1 and the last is EN, where N is an arbitrary integer.

The minimum of a scale is the pitch value of the lowest element.

The maximum of a scale is the pitch value of the highest element.

The range of a scale, expressed as an integer, is the difference between its lowest pitch index and its highest pitch index (R = EN − E1).

The octave range of a scale (RO) is the number of octaves that can contain the scale. This can be expressed as an integer or as a floating point decimal for precision.

The domain of a scale is expressed as 2 integers (equal temperament) or 2 floats where Middle C = 1.0, the first being the offset from Middle C of the scale's bottom element E1 and the second that of its top element EN. The standard 88-key piano keyboard has a domain of {-39, 48}. If the domain of the scale is above Middle C then both numbers are positive. Domain maps roughly onto the traditional concepts of Bass, Tenor, Alto and Soprano.

The Tonal Center of a scale is an arbitrary anchor about which the relativity of the pitches is understood, most commonly the Root. However, a C Major scale with a Tonal Center of D is really a Dorian scale. The reason this is separated out is that a set of pitches in use need not start on the Root note any more than a piano's first note needs to be a C. A D Dorian scale with a Tonal Center of C is an Ionian scale. This merely completes a required logical symmetry built into the relativity of pitches. The Root is the default Tonal Center when none is specified. Even Atonal or Meta scales can be assigned a tonal center; this way any passages built on such scales can be easily transposed relative to it. The tonal center is not the key of the scale unless it is a tonal scale or mode.

The Size of a scale is the number of individual pitches it contains, S. Theoretically, scales can extend to frequency zero and infinity. However, for practical reasons they are limited to the range of hearing of the listener.

The Octavity of a scale (O) is the average number of pitches per octave. This can be expressed as an integer or as a floating point decimal for precision. Meta scales can have an octavity < 1.0. For example a Meta 9th has 14 half-steps per pitch. A meta double octave has an octavity of 0.5. A meta octave has an octavity of 1.0. The chromatic scale has an octavity of 12.

A scale can be Ordered, where each successive element is either higher (ascending) or lower (descending); unordered (melodic); or monotonous (flat).

The Direction of a scale can be flat, ascending, descending or mixed.

Scale indexing is the process of selecting the Nth element of a given array of pitches. A final note may be calculated by selecting the indexed element of a scale, adding a key offset, adding an octave offset and limiting to a specified range (domain) by use of octave subtraction or addition.
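
The indexing procedure described here might be sketched as follows (the wrap-around indexing and the piano-range default domain are my own assumptions for illustration):

```python
def index_scale(scale, i, key_offset=0, octave_offset=0, domain=(-39, 48)):
    """Select the i-th scale element, offset it, and fold it into the domain.

    scale: pitches as semitone offsets from Middle C.
    The result is limited to the domain by octave addition or subtraction.
    """
    pitch = scale[i % len(scale)] + key_offset + 12 * octave_offset
    lo, hi = domain
    while pitch < lo:
        pitch += 12
    while pitch > hi:
        pitch -= 12
    return pitch

c_major = [0, 2, 4, 5, 7, 9, 11]
print(index_scale(c_major, 4))                              # → 7 (G above Middle C)
print(index_scale(c_major, 2, key_offset=2, octave_offset=5))  # → 42, folded into range
```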

Superscale is a scale made of transposed superchords or superintervals.

Hyperscale is a harmonized superscale, which can also be a harmonized scale since a scale is a first-order superscale. Hyperscales need not be harmonized synchronously. In other words, they can be set out of phase with each other to create an endless set of possibilities, including the famous Shepard Scale, which creates an audio illusion of an endlessly ascending or descending scale, similar to a barber pole. Completing the illusion requires subtle dynamic changes within each harmonized scale, fading each in and out. By applying this to higher-order superscales, a greater range of effects can be created.

Audification is the process by which pure data or information is converted into audio. With computers it is possible to audify data from any format, most commonly CD, MP3 and DVD, but also including images, video, text and more. Any data can be audified. Audification is governed by algorithms.

Musification is a similar process to audification but where data is converted into musical structures, mapped into scales and generally made more suitable for musical performance. Technically, musification and audification both produce timbre and are therefore the same, differing only algorithmically. I wrote a program called Pixound which maps RGB information from images into scales and it makes extensive use of all scale types: Octival, Meta, Melodic and Hybrid. Because RGB are fundamental color components of our eyes, Pixound¹¹ has a claim to musical permanence for all future generations who wish to use color or light as a musical controller. Ideally, one would be able to seamlessly move between audification (synthesizer) and musification (sequencer), controlling musical output on several levels.

In the process of musification where data of higher ranges is mapped into scales of smaller ranges, scale expansion can be used. In this case, scales are padded by repeating adjacent pitches (usually isometrically) to make data mapping more convenient. Increasing the octave range of a scale is another method but can produce undesirable low or high notes.
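
A hedged sketch of scale expansion and a musification-style mapping (this is not Pixound's actual algorithm, just an illustration of padding a scale so a wide data range maps conveniently onto it):

```python
def expand_scale(scale, target_size):
    """Pad a scale by repeating adjacent pitches until it reaches target_size."""
    out = []
    for i in range(target_size):
        # map position i proportionally onto the original scale elements
        out.append(scale[i * len(scale) // target_size])
    return out

def musify(data, scale, data_max=255):
    """Map raw data values (e.g. 0-255 color components) into scale pitches."""
    expanded = expand_scale(scale, data_max + 1)
    return [expanded[v] for v in data]

pentatonic = [0, 2, 4, 7, 9]
print(musify([0, 64, 128, 192, 255], pentatonic))   # → [0, 2, 4, 7, 9]
```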

Modes are produced by operations performed on a Root Scale. For example, the Diatonic Modes of Ionian, Dorian, etc. are produced by following the Diatonic Root Scale (the Diatonic Ordered Set), starting from each successive tone and continuing up an octave.
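
Mode generation by rotation of the root scale can be sketched as:

```python
DIATONIC = [0, 2, 4, 5, 7, 9, 11]           # the Diatonic Root Scale, in semitones
MODE_NAMES = ["Ionian", "Dorian", "Phrygian", "Lydian",
              "Mixolydian", "Aeolian", "Locrian"]

def mode(root_scale, degree):
    """Rotate the root scale to start on the given degree, re-rooted at 0."""
    n = len(root_scale)
    start = root_scale[degree]
    return [(root_scale[(degree + i) % n] - start) % 12 for i in range(n)]

for d, mode_name in enumerate(MODE_NAMES):
    print(mode_name, mode(DIATONIC, d))     # Dorian → [0, 2, 3, 5, 7, 9, 10], etc.
```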

Any non-parallel superintervals, superchords or hyperchords comprise supercounterpoint.

Supercounterpoint is a subclass of timbre.

Any supercounterpoint can be composed, but among timbres only synthetic timbres can be composed. Natural timbres cannot currently be composed due to quantum physics and chaos. Theoretically, one could use a quantum computer to compose hypertimbres, which would be analogs to natural timbres. Natural timbres can be simulated to a high degree of authenticity.

Mixing any two or more supercounterpoints comprises a hypercounterpoint.

The effect of timbre on a listener cannot be fully controlled due to qualia; therefore music will always be subjective.

We can never experience the full performance of music since we have only two ears with limited frequency response.

Tonality is an arbitrary system of composition and modality within which a composer operates in a given time domain.

Pitch Sets can be discrete or non-discrete, and ordered or non-ordered. Ordered pitch sets are those where each successive pitch is higher or lower than the previous. The Direction of an Ordered Pitch Set can be ascending, descending or flat.
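As a small illustration, the Direction of an Ordered Pitch Set can be classified mechanically (the function name and the MIDI pitch encoding are assumptions):

```python
# Classifying the Direction of an Ordered Pitch Set.
# Pitches are given as MIDI note numbers; names are illustrative.

def direction(pitches):
    """Return the Direction of a pitch set: ascending, descending,
    flat, or unordered if it is not an Ordered Pitch Set at all."""
    pairs = list(zip(pitches, pitches[1:]))
    if all(b > a for a, b in pairs):
        return "ascending"
    if all(b < a for a, b in pairs):
        return "descending"
    if all(b == a for a, b in pairs):
        return "flat"
    return "unordered"

print(direction([60, 62, 64]))  # ascending
print(direction([64, 62, 60]))  # descending
print(direction([60, 60, 60]))  # flat
```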

Key Signature denotes a relatively small set of pitches which are expected to occur in a given time domain. In general, it is only useful in notation and performance but is not particularly helpful in composing timbre. However, the Tonal Center of a given passage of music should be known so that proper emphasis can be given to reinforce the desired harmonic structure. Key signature can be declared globally or locally on a per measure basis.

A Grain, in the vernacular of Granular Synthesis, is composed of many sonons, but a sonon is not a grain; if it were, it would be called a Unit Grain, an indivisible portion of sound.

Octaves represent tonal equivalence in different time scales. Octaves apply to all vibrational phenomena, including electromagnetic waves and matter. Physical matter can also be considered timbre. For example, water is a 2:1 ratio of Hydrogen to Oxygen, atoms whose masses stand in a 1:16 ratio. This is likely why water tastes like an octave sounds: very satisfying but not particularly interesting. It is no mistake that a good meal presents the palate with pleasing harmonies between the flavors; fragrances, likewise, can be “consonant” or “dissonant” to the sense of smell. This may very well be an interesting area of study, one which might also lead to a better understanding of the cross-sensory experiences known as synesthesia.

Classes of Musicians

A Musician is anybody who plays a musical instrument or composes music.

A Virtuoso is a musician who demonstrates extraordinary skill either by innate abilities or from dedicating a large portion of his or her life to the mastery of an instrument or instruments. A society without virtuosos is musically inactive.

A Composer is someone who imagines, plans, evolves or discovers musical ideas and then uses a means of recording, such as musical notation or digital recording to make their ideas available to others. Composers are often musicians and vice versa.

A composer’s or musician’s job, whether practical or artistic, is to create a sequence of sonons with a given envelope; that is, to create timbre.

An Instrumentalist is a musician who specializes in playing a certain type of instrument, whatever that might be. New types of instrumentalists arrive as new instruments do. If an instrument is around long enough, it becomes master-able, because there is sufficient time and interest to dedicate to its mastery, possibly producing virtuosi of the instrument. If an instrument dies off or goes out of fashion, it can become inert or extinct, and so will its mastery. There are those who study and learn such old instruments, even ancient ones that exist only in writings; the mastery thereof is hard to judge, yet it provides valuable insights into how music might have sounded in bygone times.

An Algorithmic Musician is one that thinks and composes nearly entirely using formulaic methods that can be programmed. In a way, such music is meta-composed because the algorithmic composer is not always interested in the details of what is produced but the potential for what might be produced. Such composers often employ liberal doses of randomization and stochastic methods so that many musical choices are left to the computer and “the gods.” The level of parameterization and randomization is within the control of the meta-composer where it can be tightly controlled for a more predictable range of results or loosely for a broader range. There are no limits on the complexity, quality or musical impact of algorithmically generated music.

Good instrumentalists will often do things that are intangible and can’t be put into notation, producing extra tones or expressions that other players of the same instrument can’t get. Some call this Touch, which is a good word for it because it involves the physical contact with the instrument in the form of timing, vibrato, glissando, tremolo, dampening, emphasis, embouchure, etc. in subtle combinations that are no easier to understand than human intuition, psychology or physiology are.

Besides touch, good musicians also tend to possess a musical mind that is well trained or innately talented, be it in the skills of harmonization, a rich imagination, intuition, listening skills, anticipation, audience resonance, rhythmic skills or many others. Some possess the Pied Piper skill of leadership, bringing the audience along on a musical journey. Others possess pure genius in the level of astounding complexity they can manage and are sometimes not fully appreciated until more study of their work is done, even after they pass away.

Musical Imagination is an essential trait for a good composer. There is a lot of mystery surrounding the human ability to synthesize new things out of nothing — to invent. Composers such as Mozart and Beethoven were known to imagine complete works in their minds, a sort of divine channeling as though they were merely transcribing what the gods had played in their heads. Sir Paul McCartney imagined the song Yesterday in a dream and woke up remembering it clearly, as have many musicians. Dreaming seems like a fertile ground for the imagination, unfettered by the conscious mind and all its distractions. Almost anybody can imagine a melody or rhythm. The discipline of capturing those fragments into a full work of music can take years to acquire, but it is something one can improve upon, a mental muscle one can exercise. Musical imagination can become stunted or even atrophied without practice, and many great musicians lack a rich imagination but are masterful performers of music that others have imagined for them to play. In fact, instrumental expression itself takes a type of inspiration and feeling all its own that allows musicians to evoke a composer’s intent or add to it.

Musical notation is a means by which timbre and envelopes are conveyed from one musician to the next. Roughly speaking, notes are analogous to sonons and dynamics to envelopes. Traditional musical notation is entirely geared toward the human time-scale and is not adequate for composing holistically though it can be borrowed from on a gross structural level. Notation Systems vary and can be developed as needed to convey a particular instruction for performing musical actions. In essence, notation is a guide for performers or theoreticians to replicate and actualize a composer’s intent or to analyze a work of music.

A Meta Musician is someone who works with pre-recorded or algorithmic musical generation but doesn’t necessarily play an instrument themselves, such as a DJ. Because the methods of musical playback have become so sophisticated, the artistic expression of playback and triggering has itself become a new form of musicianship. Technically, if you can “press play,” you are a meta musician.

A Meta Composer is someone who creates an unusual “instrument” that contains its own compositional powers, such as software containing creative algorithms, which may then be used by other composers in unique ways, creating music that maintains characteristics imprinted by the instrument maker’s algorithms. In a strange way, a master teacher can impart knowledge to an apprentice who carries on certain aspects of the master, and so forth, and so the master is a meta composer.

A Musical Theoretician is someone who is interested in, studies and understands the underpinnings of music, how it is made, why it has its effects and then creates models based on this knowledge. These theories can then become “instructional cookbooks” for musicians and composers to work with and expand upon with practical examples.

A Music Therapist is someone who utilizes music in order to effect a betterment of the condition of a patient/client, whether it be to improve mood, reduce anxiety, inspire hope or more. The benefits of music therapy are well known, and understanding of music’s potential to heal us is growing. Holistic music goes hand in hand with holistic medicine.

A Music Appreciator is someone who does not necessarily have musical skills, other than that they like to listen to music, enjoy its richness and are moved by timbre.

A Music Audience is one or more appreciators of music experiencing a musical performance together, though they are not required to appreciate the music they are listening to.

A Music Inert is a person who has no affinity for music and can’t enjoy it, no matter how sublime. Some people are forced by their society to forgo music for religious reasons and these are the unfortunate Music Deprived.

A Visual Musician is a person who specializes in creating visual musical experiences using various means of production.

An Artificial Musician is one that is created by means of Artificial Intelligence (AI) and robotics and performs physically on instruments either externally or built-in.

A Virtual Musician is one that lives in software, is governed by Artificial Intelligence (AI) and Machine Learning (ML), and can compose and/or play artificial music or execute musical scores.

A Telemusician is one that uses telepresence to perform in one place and then have that performance broadcast or carried out by artificial musicians in other places, remotely.

Sonic Contexts of Holistic Tonality

Context is perhaps the most important aspect of musical events, i.e. when and where a particular musical event takes place, both within the musical structure and the larger world. It is only through context that the ear and mind make sense of what is being heard, drawing contrasts with what was just heard and anticipating what will come next. The subjectivity of dissonance and consonance, tension and release has much to do with context, and that is why music taken out of its context loses its meaning, even more so than a single word isolated from a sentence, which at least has a definition.

Many theories of music spend most of their time addressing musical context, stressing mechanisms such as the Circle of 5ths, I-IV-V, II-V-I, etc. as ways to create logical contexts that please the listener or to solve the musical problem of traversing between and connecting passages in “expected ways.” Holistic Tonality assumes an infinite set of contexts yet to be discovered and so welcomes all of these constructs, their inversions and variations such as the Circle of 4ths or the Circle of m2nds (chromatic scale), or the Circle of Meta Intervals. Each will produce useful guides to new music that will engage listeners in new ways, especially when applied on a timbral level.
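A generalized interval circle of this kind can be generated mechanically. The sketch below walks any interval (in semitones) until it returns to its starting pitch class; the Circle of 5ths is then simply the circle of interval 7, and the Circle of m2nds the circle of interval 1:

```python
# Generalized interval circles: walk by a fixed interval (in semitones)
# until the starting pitch class recurs.

PC_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def circle(interval, start=0):
    """Return the pitch-class names visited before returning to `start`."""
    pcs, pc = [], start
    while True:
        pcs.append(PC_NAMES[pc % 12])
        pc = (pc + interval) % 12
        if pc == start:
            return pcs

print(circle(7))  # circle of 5ths
print(circle(5))  # circle of 4ths
print(circle(1))  # circle of m2nds (the chromatic scale)
```

Intervals that share a factor with 12 close early (e.g. `circle(4)` visits only three pitch classes, the augmented triad), which is itself a useful compositional property of such circles.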

Holistic Tonality seeks to place all musical experiences in proper context. This requires us to include larger structures of time and space within which music exists. I use the suffix sone appended to various structures in the Universe to name sonic realms, which may or may not be musical.

The largest musical unit, which is also The Timbre, is referred to as the Cosmosone (all sounds in the Universe). There is only one Cosmosone per Universe (or black hole).

All music takes place within the timbre of the Cosmosone.

There are an arbitrary number of musical units including but not limited to:

Cosmosone: all sound in cosmos
Galactisone: all sound in galaxy
Stellasone: all sound in star system
Terrasone: all sound on Earth
Biosone: all sound in biosphere
Hydrosone: all sound in water
Urbasone: all sound in a city
________sone: all sound of that type

Music can never be completely extracted out of its Musical Context. Any time you hear music, it is before one thing and after another. Musical Context itself is timbre. When you listen to a concert or an album, it is heavily colored by which song or work goes first, second, etc. There are many factors that contribute to this such as tempo, key, instruments, lyrics, etc. Whether intentional or not, every musical context is holistically related to every other context. Thus does a concert become one unified performance of music with changing dynamics. In fact, the entire life’s work of a given composer (composesone) has its own timbre on one time scale or another. The fractal dimensions and other analytical parameters of his or her music characterize a musical style which is more or less original and unique. A composer who is aware of the musical connection between everything they do is able to compose the greatest possible music, which is life itself.

Generative Music

Music that is created by means of algorithms, whether structural (sequencers) or timbral (synthesizers) in nature, is a vast class of music, most of which has not yet been made. Indeed, a single set of algorithms can present so many permutations that to hear them all could take countless eons of time. Therefore, algorithmic music has three main purposes:

  1. Goal based (functional) — to achieve a predetermined musical output based on a relatively small, well-understood set of parameters. Drum Machines are great examples of this, as they may employ algorithms to achieve a more human feel or to add fills. Arpeggiators are also good examples.
  2. Exploratory — to find new possibilities or combinations, which may or may not be useful or even listenable, based on a wide-ranging set of parameters and/or randomization
  3. Entertainment — to produce new musical passages suitable for gaming or dancing within certain allowed parameters

Cyclophonic Synthesis

Cyclophonic Synthesis employs trigonometric sinusoidal LFOs (Low Frequency Oscillators), also known as waves, to construct both musical structure and the musical sounds played by that structure. By taking advantage of the purity of the sine wave and its many variations, including square, triangle, sawtooth and step, one can explore a vast search space of cogent, orderly musical content.

As is well known, any musical parameter can be assigned an LFO, with additional additive LFOs combined in all sorts of ways. Many software steps are required to take a basic idea like this and incorporate it into a practical solution that others can use.

Different Drummer and Cyclophone (2014) are two software programs I have created to utilize this method of music production.

While there is no mathematical limit on how many waves one can add together in how many ways, practicality requires us to pick and choose the types of things we end up doing. Example parameters to be controlled parametrically and in parallel include but are not limited to:

  1. Note Wave: Changes note values within a given scale and key (see below)
  2. Rest Wave: Creates gaps in the harmonic pattern based on waves (see below)
  3. Tie Wave: Create staccato and legato variations on top of the note wave (see below)
  4. Dynamic Wave: Control the loudness or velocity of notes to create rhythmic or dynamic emphasis (see below)
  5. Pan Wave: Control the left-right stereo position of played notes. This can also be applied to 3-D spatial positioning of sound.
  6. Beat Wave: Changing beat value based on amplitude
  7. Modulation Waves: Changing various synthesizer modulations and controllers such as resonance or pitch bends
  8. Effect Wave: Changing effects
  9. Key Wave: Changing keys
  10. Harmony Wave: Changing harmony based on amplitude
  11. Mode Wave: Changing musical scales
  12. Variation Wave: Changing the amount of randomization
  13. Voice Wave: Any audio synthesis methods employing trigonometric parameters. Generally, voice synthesis goes well beyond LFO into higher frequencies.

Each of these component waves is composed of a Fundamental wave measured in Cycles Per Measure or for lower frequencies Measures Per Cycle and N partials of that fundamental, making for an astronomical set of variables that allow you to dial in a vast array of musical possibilities. The number of components (partials) that comprise a wave (Fourier Series) or the trigonometric algorithms used on a partial are nearly limitless, very similar to any and all methods used for sound synthesis.
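The Note Wave idea can be sketched as follows; the parameter names, the normalization and the quantization rule are illustrative assumptions, not Different Drummer's actual implementation:

```python
import math

# Sketch of a cyclophonic Note Wave: a low-frequency sine, measured in
# Cycles Per Measure, with optional partials, sampled on a beat grid and
# quantized into a scale. Illustrative only.

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, one octave (MIDI)

def note_wave(cycles_per_measure, beats=16, partials=((1, 1.0),)):
    """Sum N partials of the fundamental, then map amplitude to a pitch."""
    notes = []
    for b in range(beats):
        t = b / beats  # position within the measure, 0..1
        amp = sum(w * math.sin(2 * math.pi * cycles_per_measure * n * t)
                  for n, w in partials)
        amp /= sum(w for _, w in partials)  # normalize to -1..1
        idx = round((amp + 1) / 2 * (len(SCALE) - 1))
        notes.append(SCALE[idx])
    return notes

print(note_wave(1.0))                                  # pure fundamental
print(note_wave(2.0, partials=((1, 1.0), (3, 0.5))))   # add a 3rd partial
```

Rest, tie and dynamic waves would follow the same pattern, each sampled on the same beat grid and interpreted against its own threshold or range.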

Below are the principles illustrated in the figures accompanying this section:

  1. Amplitude of the note wave maps into a musical scale
  2. The rest wave creates rests above the x-axis
  3. The tie wave governs when notes are tied
  4. The dynamics wave controls volume or velocity

To recreate a cyclophonic composition only requires recalling the exact parameters used to create the waves and applying them at a given tempo.

Wave amplitudes can be calculated and applied discretely (on a beat value grid) or continuously for certain parameters such as dynamics.

Each wave has variants whereby playback is altered. For example, a note wave can be made to stop the previous note when a new note is triggered, or to allow the previous note to complete, which is more natural for percussive sounds. Likewise, the rest wave can clip the last note before the rest or allow it to complete into the rest.

Interface screenshot from Different Drummer on iPad

Postulate 3: Explanations of Musical Axes

Music is said to be imaginary because it originates in the imagination of the musician, whether they be a composer, meta-composer or performer. And though the air may have been vibrated to convey music to the listener, it is then decoded back into the listener’s imagination as perception of music. So all experience of music takes place in the mind.

Musical Axes represent dimensions of musical variation enabling the full range of expression of all possible musics. More complex than axes in a Cartesian grid, these axes are organized around the Primary Time Axis. Due to Postulate 5, no music can take place out of time, whether imagined or physical. Along the Time Axis, the other dimensions of music form a set of relationships to the timeline as variations of timbre. The Vibration Dimension is responsible for creating higher and lower vibrational pitches that are perceived as melody. The Amplitude Dimension governs dynamics and envelopes on different levels.

Secondary Axes can be considered as well: The Harmonic Dimension governs overlays of Vibrations to produce deeper and richer relationships perceived as Tonality or Harmony. Harmony can be implied or explicit. The Operations Dimension governs musical processing and is required for all electronic instruments and systems. It can also be used by composers to help construct musical ideas that are related. The Concrete Dimension governs the transmission of the music to the perceiver which can be through neurological activity in the case of imagined music or as a means to vibrate physical objects to convey sound to an auditory organ. The Conscious Resonance Dimension is the final axis whereby the perceiver is moved by a musical experience. This is the most complex of the dimensions because it can represent intangible feelings for example how one feels seeing and hearing a live performance vs. a pre-recorded one. There exists a sort of psychic resonance between the performer and the audience that is not well understood but has been noted many times by both performers and audience members.

Operations

Algorithms can be used to process musical information on all levels and are unlimited in number or complexity. Common examples of operations are:

Harmonization and counterpoint

Axial reflection or mirroring on any of the musical axes (e.g. negative melody or harmony, crab canon)

Transposition and superposition of musical elements (e.g. repeats, fugue)

Signal processing and effects (e.g. reverb, compressor, echo, etc.)

Musical structure effects such as fugue or rounds, musical range limiting (by means of octaves or compression), tone rows (arbitrary rules of note usage), etc.

Visualization (on audio or musical information or both)

Mapping is used to translate data from one type, such as color, to another, such as indexes of notes in a scale. This often requires some form of scaling or limiting to account for different gamuts.
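A minimal sketch of such a gamut-aware mapping operation (the function name and ranges are illustrative):

```python
# A generic gamut-mapping operation: linearly rescale an input range
# onto an output range of scale indices, clamping out-of-gamut values.

def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """Rescale x from [in_lo, in_hi] to [out_lo, out_hi], with clamping."""
    t = (x - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))  # limit to the input gamut
    return round(out_lo + t * (out_hi - out_lo))

# e.g. map an 8-bit color channel onto the 7 degrees of a diatonic scale
print([map_range(v, 0, 255, 0, 6) for v in (0, 64, 128, 255)])  # [0, 2, 3, 6]
```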

An operation generally operates on a set of input data, whether musical structure or audio, and produces an output data set according to the algorithm(s) of the operation. This can be done in a compositional or a performance (real-time) context. All real-time operations consume time and so have a given latency. Any latency under 6 milliseconds is considered musically insignificant.

Composers often use operations and tricks to create, vary and extend musical content. Schoenberg had his tone rows, which were based on a simple rule of utilizing all 12 tones of the chromatic scale within a given passage of music. Such a rule could be applied to any set of pitches to good effect, and 12 tones is completely arbitrary. Whatever the size of the set, the number of melodic permutations is the factorial of that size, so there are 12! = 479,001,600 possible sequences of 12 tones when the rule of octave equivalence is applied. Within those sequences an infinite set of variations can be made by means of rhythm, voicing, dynamics and harmonization.
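The row-counting arithmetic can be checked directly: a row of n distinct tones, with octaves folded into pitch classes, can be ordered in n! ways:

```python
from math import factorial

# Count the distinct orderings of an n-tone row (each tone used exactly
# once; octave equivalence folds all registers into one pitch class).

for n in (4, 7, 12):
    print(f"{n} tones -> {factorial(n)} orderings")
```

Even before rhythm, voicing and dynamics are considered, a full 12-tone row already admits nearly half a billion distinct orderings.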

Modern musicians can’t live without analog and digital effects such as reverbs, delays, chorus, EQ, harmonizers, autotune, etc., which are nothing more than various parameterized operations on audio signals and data, taking a given input and producing an output. If one wanted to, one could make a case that we ourselves are only performing operations on input to produce output. Anyone who has seen a good beatboxer can certainly see a demonstration of this in practice, with no additional equipment required!

Visualization of music can greatly enhance the musical experience. Most visualization used currently is merely synchronized graphical content where visual changes are made in conjunction with musical events and changes along the timeline and is only as limited as the imagination and the means used. However, there are more advanced visualizations that have a deeper link to the musical content reflecting harmonic content, timbre and other musical structure. Because light is also a vibrational, frequency-based phenomenon it is well-suited for displaying the intricacies of musical relationships and enhancing appreciation thereof.

Composing Music Holistically

While all music is already holistic by nature by virtue of Postulate 1, composers have not usually been aware of the fact. A true holistic composer is one who is not only an expert on the melodies, harmonies, rhythms and dynamics of composition but is also an expert in sound design with an imagination that flows freely between these levels. The closest we have to this are skilled electronic musicians with excellent musical training who are comfortable in a world of software, sequencers and synthesizers. These composers tend to be less versed in complex counterpoint and compositional arrangement and more interested in playing with sounds, so the larger musical structures tend to be repetitive (trance, rave and club oriented) or they forgo composition altogether. Traditional composers may have impressive orchestration skills but tend to rely on ready-made, traditional instruments and the various expressions those afford, and tend to be purists eschewing electronica. Holistic composers would put all these skills together, and the resulting compositions would have otherworldly qualities where melody and harmony seamlessly melt into timbre and interesting audio transformations, freely flowing between the different levels of musical structure, consequently affecting the listener on more levels at once. In an extreme case, the composer would create a body of work in the course of a lifetime that would itself be considered a composition of the highest order.

As of 2020, the tools are not quite in place to compose properly in a holistic manner, but the possibility is real if the right software and/or hardware were created. I predict by 2025 it will exist and several precursors do exist such as HyperScore from MIT and MAX.

Music that is composed or performed holistically is said to be holosonic.

Performing Holistic Compositions

First, let me say that performing holistically can be applied to any performance. All that is necessary is that you view the totality of the performance as the musical goal, not the individual parts. Having played in many bands with talented folks, I witnessed both in myself and others various degrees of self-interest and holistic thinking. Taken to an extreme, a self-absorbed musician can lack the ability to hear how the ensemble sounds together, how the parts blend and reinforce each other, and may not even really hear the other parts (which can be difficult anyway). Bobby Knotkoff, Neil Young’s violinist, at one point gave me a simple tip: less is more. Whether you are a solo singer, in a band or in a full orchestra (where it’s impossible to hear the other parts fully), stay focused on the product, the timbre you are collectively making. It is every musician’s main job to make this timbre true to the musical ideas and content. Some of this is handled by a sound technician if you have one, but often it is up to the players not to step on each other’s parts, not to bunch up too much in the same sounds and registers, and to listen!

Performing a holistic composition adds further technical challenges on top of that: managing and balancing the things that are played by musicians with the things that are played by machines. This requires technical skills that can be anathema to traditional musicians who are used to creating each note and chord themselves. But if we limited music to only what we could physically play, we would fall far short of what is musically possible.

It’s hard to imagine sight-reading a complex score that included instructions for how sounds were to be modified on a timescale of milliseconds or less, but it is possible with the use of assistive devices or a cleverly devised notation referring to presets that could be modified in real time. The live playability of a composition need not be considered a crucial factor in its artistic merit, any more than we might expect all artists to be speed painters, painting before our eyes to a catchy beat.

Because it is important that music not be restricted in any way, nothing dictates what instruments or sounds can be used or combined when composing or playing music. Therefore it should be considered perfectly natural to use traditional instruments, synthesizers of all types and any other sound in any measure, quantity, combination or arrangement since the result is always timbre and musical resonance with the audience.

At the same time, it is important that we not place restrictions of playability on music, so that we remain free to construct recorded music that can’t really be performed live. The Beatles famously recorded Sgt. Pepper with no intention of playing it live as a band, and this freed them up to add all sorts of new sounds and orchestration to produce a lasting and important work of art.

Since holistic composing requires digital assistance, it is well-suited to the use of visualization which can be used to amplify the emotional impact of the work.

It is not at all strange to imagine a Light Symphony which could strike the mind of the observer with every bit as much impact as a musical performance, even without an audio component. With advances in projection and VR, these experiences can rival the ability of music to reach directly a purely emotional part of our being and become masterpieces in their own right. Such techniques and methods would warrant an entirely separate treatise and theory to explore fully.

Using Hyperinstruments in Performance and Recording

Hyperinstruments perform operations used to create timbral results in time. These can be highly specialized such as in Hyperchord 2.0 where Sparklers represent “shiny” musical sparkles that can be mixed into a performance or Ramps that can create complex orchestral cadences or crescendos.

Sparklers perform specialized musical operations on frequencies of notes in specified patterns
Ramps are templates for musical structure used for emphatic introductions and dramatic breaks

The above program, HyperChord, provides numerous ways to play with superchords and superintervals and of course, hyperchords. Sparklers are in fact superchords based on various intervals, chords and scales. By changing the various parameters, one can hear very distinct changes in the sound, using the same pitches in different orders and octaves.

Hyperinstruments are a vast area of endeavor for present and future musicians and programmers and are only limited by imagination and physics. However, they are absolutely necessary for the performance of certain types of music which are unattainable by normal musicianship such as high density note structures and irrational times. Their use should be seen as natural and completely legitimate in the pursuit of musical artistry both in live performance and recording.

Numerous modern artists have made their reputation playing Hyperinstruments, often giving their unique instruments their name. Roy “Future Man” Wooten plays something he dubbed the Drumitar that he uses to play and trigger drums in various ways that free him up from the constraints of sitting in a fixed location and has a wide range of expression. Ben Neill plays his evolved Mutant Trumpet which has a dizzying array of switches and options for altering, looping and accompanying himself while using the trumpet as the “tone generator.” Tod Machover of MIT Media Lab is a pioneer of Hyperinstruments and has employed them in operas and numerous compositions and improvisations.

I designed a Hyperinstrument called the Supercorder in 1984 but never built it; I did go on to write Hyperchord 1.0 for the Amiga, a virtual version, and have made many recordings using it, including the Improvised Symphony series.

Synthesizing Musical Structure

Heretofore, synthesizers have been considered tools for constructing new sounds in the traditional meaning of timbre. Using various frequency and amplitude modulation techniques combined with envelopes we use synthesizers to perform the role of traditional instruments, playing such and such a note at such and such a time.

Holistic Tonality frees us to use the same synthesizer controls to create musical structure. An example of this put into practice is the iPad App, Different Drummer. This software borrows various LFO and Oscillator controls to govern the behavior of notes, rests, ties, dynamics and spatial positioning, independent of the instrument sounds used. In this software, a fundamental frequency is expressed as Cycles per Measure (ƒ >= 1) or Measures per Cycle (0 < ƒ <1). Musical structure that is created using trigonometric means is said to be Cyclophonic.

On the Teaching of Music

Musical instruction should start at an early age. Certain people may have a sort of “color blindness” to music and are not emotionally moved by the harmonic and rhythmic richness and are not prime candidates for music instruction. Those that enjoy music and show interest make the best students.

Musical Talent, whereby the individual shows exceptional ability at one or more aspects of musicianship such as rhythm, singing or perfect pitch, occurs fairly rarely. These are the prime candidates not just for music instruction but for musical careers.

Early instruction should focus on listening, enjoyment, learning songs and familiarity with the parts of music. It is important to get students playing actual complete works that can be performed and memorized so that they will be encouraged to continue. If we make it too difficult or arduous by inundating them with theory or exercises then they may get the wrong impression and be turned off. The sooner one feels like a musician, the more likely one is to become a musician or at least a musical person.

Some instruments are more frustrating to learn because they take more practice to get a satisfactory result. Percussion and chromatic percussion such as the piano are good first instruments because playing a note is a simple thing. Recorders and fretted instruments are next in simplicity, as they require less training to get started. Learning to drum out rhythms is a good way to teach the importance of timing in music and to develop rhythmic sensibility. Many great instrumentalists’ first instrument was drums.

Students should be encouraged to become familiar with at least two instruments, even if they are attracted to a primary instrument. Students should be taught about many instruments, if only to hear them, to understand the differences and to get the lay of the musical landscape. Overspecialization at an early age might create a musical bias that could hinder a student later.

Students should learn about timbre, color, tone and sound early on so they understand that the mission of music is to make good vibrations in the air, to produce timbre. This also trains them to be critical listeners, attentive to the details of music.

It is never too early to teach some music theory, although for most students it should be introduced only where needed or in small doses. The main thing is to spark a natural curiosity so that the student will ask questions.

Once a student is able to perform a number of works from memory, they are ready to learn more theory such as musical scales, chords and basic harmony, etc.

Learning to read and sight-read musical notation is a gradual process and should be encouraged using various pedagogical techniques. It should be noted that many famous musicians, such as Sir Paul McCartney, never learned to sight-read, and that overemphasis on this skill can lead to a lack of improvisational ability. Many classical virtuosos are completely hampered in improvisation because they are so trained to react to notes on a page rather than to imagine them, and so they have no confidence without the sheet music in front of them. The imagination must be exercised or it becomes flabby and weak, or worse, atrophies.

Improvisation, which can quickly morph into musical composition, should be encouraged early on. It is extremely important for students to exercise their musical imaginations so they understand that music has rules, and that rules can and should be stretched and broken. By working on improvisation, a student gains a musical identity and can find a lifetime of expression and enjoyment in creating new music.

Students should practice a minimum of a half hour per day. The goal of any teacher is to make their students want to practice, whether in order to improve or because they enjoy the activity. Requiring students to practice against their will is acceptable in the beginning to get them started, but if after a year or so they are not showing interest, it is best to let them pursue other activities; perhaps they will come back to it later with a better perspective.

All manner of assistive techniques and devices can and should be employed to make music more understandable, more enjoyable or easier to practice and learn, including apps and programs, videos, recordings, MIDI, the Circle of 5ths, chord wheels and books of all sorts.

Conclusion

It can be confusing to talk about timbre in all its glory and levels and to try to relate it to our everyday experience of music “down here on Earth.” It is also problematic to use the word timbre to represent all music, because it raises issues of self-reference, recursion or even tautology, creating a “chicken without joints,” not to mention that timbre has a well-established meaning already. Please keep in mind, however, that this does not change the meaning of timbre, only its scope. We should always strive to put music in the context of the entirety of existence and not limit it to our preferences or biases, except in cases of specialization.

It should not be construed from Holistic Tonality theory that any music is superior or inferior to another, or that any music is primitive or modern, but only that humans will constantly search for the next form of musical expression using the tools of the realm and the era. Nor should it be construed that a single drummer or flute player or any other instrumentalist should be required to explore computerization or synthesis; in fact, I would greatly encourage the sublime mixture of any and all forms of musical expression without limit whatsoever.

It is conceivable through timbral composition to create different musical systems which suggest different musical structures or emphases. For example, one could construct a system whereby the Dominant was based not on a perfect 5th but on a Tritone. But since the human ear is tuned to the 5th, we cannot fully appreciate such musical systems. The 12-tone equal temperament system is adjusted to our eardrums, though only approximately; only the octaves are exact. It is conceivable that if we were to develop a way to bypass the ears to perceive music, we could then overcome the ear’s inherent overtone bias.
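The claim that equal temperament only approximates the intervals the ear's overtone series favors, while octaves remain exact, is easy to verify numerically. The following minimal sketch compares 12-tone equal temperament (12-TET) frequency ratios against just ratios; the variable names are my own illustrations:

```python
import math

# 12-tone equal temperament divides the octave into 12 equal ratios of 2**(1/12).
just_fifth = 3 / 2                # the simple ratio the ear's overtone series favors
tet_fifth = 2 ** (7 / 12)         # ~1.4983: close to 3/2, but not exact
tet_tritone = 2 ** (6 / 12)       # sqrt(2), an irrational "dominant" candidate
tet_octave = 2 ** (12 / 12)       # exactly 2.0: the one interval 12-TET gets pure

# Deviation of the tempered fifth from the just fifth, in cents:
fifth_error_cents = 1200 * math.log2(tet_fifth / just_fifth)  # about -1.96 cents
```

The tempered fifth lands within two cents of 3/2, which is why the ear accepts it, while a tritone-based Dominant would rest on the irrational ratio √2 with no nearby overtone to anchor it.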

It would not be right merely to say we have reached the end of musical possibilities and that everything we do from now on will just be a variation on what has gone before. There are many more permutations of even a brief musical work than there are atoms in the known Universe, so it is unlikely every combination of possibilities will ever be reached except over eternal timeframes. One can only wonder what the music of the 22nd Century (let alone the 30th Century) might sound like, but perhaps the ideas presented here will give the next generation a clue on where to begin the next phase of musical evolution. It is entirely possible that music of the future will be conveyed directly to the mind, bypassing certain physical limitations, or that some strange, post-singularity cybernetic humans of the future will merely hear music on demand inside their quantum circuits.

It may very well be that we are on the cusp of an entirely new era of artificial music composed by AI, with such complexity as to be entirely unimaginable and unmanageable by us mere biological mortals, yet it will still be those with the consciousness to perceive and enjoy the music who are the final arbiters of musical quality and value. We may even find that the Universe has already tried this many times, and that in the end the finest music there is remains an old bluesman picking a guitar on a porch or a child singing a beautiful song.

[1] Boethius, De institutione musica, first printed between 1491 and 1492.
[2] Johannes Tinctoris, Diffinitorium musicae and Liber de arte contrapuncti (c. 1475).
[3] Jean-Philippe Rameau, Treatise on Harmony (1722).
[4] Harmonielehre ("Theory of Harmony"), published in 1906.
[5] Hermann von Helmholtz, Die Lehre von den Tonempfindungen als physiologische Grundlage für die Theorie der Musik (On the Sensations of Tone as a Physiological Basis for the Theory of Music), 1863.
[6] Iannis Xenakis, Analogique A–B (1959), on Alpha & Omega.
[7] Joseph Schillinger, The Schillinger System of Musical Composition.
[8] A term coined by Tod Machover at MIT to denote a sort of meta-instrument that plays things beyond what the player “inputs” into the system or otherwise amplifies their ability.
[9] John Cage was a 20th Century composer who wrote 4’33”, the world’s first official null composition.
[10] The theoretical exact beginning of the Universe is currently unknown.
[11] Pixound is musical software for converting color into musical information.

Author

Peter McClard has founded numerous companies including Hologramophone Research, Gluon, CaptureWorks and Techné Media and has been writing music and music software for several decades including: HyperChord, Pixound, Cyclophone, Different Drummer and others. peter.mcclard@gmail.com

© Copyright 2014–2020 Peter McClard. All rights reserved.
