Code-golf is all about reducing an algorithm to the absolute smallest size possible whilst still doing the same thing. Likewise, minimalism is the art of expressing an idea with as little information as possible, so it follows that code-golfing bytebeat to express a musical idea in the smallest space possible would itself be minimalist music.
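For anyone who hasn't run into the form before, a complete bytebeat 'composition' can be a single expression evaluated once per sample. A minimal sketch in C, using a well-known one-liner formula rather than anything from this post:

    #include <stdio.h>

    int main(void) {
        /* t is the sample counter; the expression is the entire piece,
           truncated to one unsigned byte per sample on output */
        for (unsigned t = 0;; t++)
            putchar(t * (42 & t >> 10));
    }

Piped into a raw 8-bit player at 8 kHz (for example ./a.out | aplay -r 8000 -f U8), those few bytes of source are the whole score and the whole performance.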
This raises some questions. If you were to program some minimalist music as bytebeat, is that the purest form of minimalism? Nothing is left to interpretation. The performance of the piece is set in stone, and the complexity of the piece is governed entirely by the complexity of the source file (or, more accurately, the stripped and optimised compiled binary). So what does it mean to compose minimalist bytebeat? Perhaps you could start with a musical idea and reduce it down to its simplest, and perhaps more importantly its most easily computed, form and call that the finished product.
Something that has fascinated me for the longest time is the Thue-Morse sequence. It is incredibly simple to compute and, more importantly, somewhat difficult to predict the next term of, which keeps it musically interesting and very minimal. You could construct the output waveform from the Thue-Morse sequence and derive the pitches, the volume, and the number of instruments entirely from the same sequence with very little additional code.
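To make that concrete, here is a small sketch of the sort of thing I mean. The n-th Thue-Morse term is just the parity of the set bits of n; in this sketch the sequence sampled at a slow rate picks the octave, and the same sequence sampled at a faster rate is the waveform itself. The specific shift amounts and output levels are arbitrary choices for illustration, not part of any finished piece:

    #include <stdio.h>

    /* n-th Thue-Morse term: parity of the number of set bits in n */
    static int tm(unsigned n) {
        int p = 0;
        while (n) { p ^= n & 1; n >>= 1; }
        return p;
    }

    int main(void) {
        for (unsigned t = 0;; t++) {
            /* slow Thue-Morse terms (roughly one per second at 8 kHz)
               switch the pitch between two octaves */
            unsigned shift = 5 - tm(t >> 13);
            /* faster Thue-Morse terms form the square-ish waveform itself */
            putchar(tm(t >> shift) ? 192 : 64);
        }
    }

Almost everything audible here, the rhythm of the octave jumps and the texture of the tone, falls out of the one two-line tm() function.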
In the spirit of minimalism and information theory, programming minimalist bytebeat would likely have to be done in assembly to grant full control over the output, as the way the computer performs it is, in my opinion, as important as the performance itself. Measuring the amount of information in a piece of bytebeat is better done from the size of the output binary than from the source code, because the compiler may pull in libraries or other pieces of code and so sort of 'cheat'. But to what level do we wish not to 'cheat'? A true manuscript perhaps leaves nothing for the performer to infer. Do you write the code for an open-source microcontroller and include the circuit diagram? Or, to be more extreme, supply the absolute minimal circuit diagram that would play the music you've written? Or, to be even more extreme, do you need to supply schematics for the machines that make the microchips? And maybe even for the machines that make those?
I think one of the purposes of this discussion is to ask where you draw the line with performance instructions. There is always infinitely more detail you can add to make sure that your music is performed exactly to spec and is perfectly repeatable. If an enormous amount of information is required just to specify even the simplest music fully, and we instead rely on human experience to fill in the blanks, does minimalism really exist?