Protracker Player, part 3: The music machine

In my previous posts, I talked about my approach to resampling and about organizing my classes by having the AudioGenerator manage the position in the song and send data to the ChannelAudioGenerators. In this post, I'm going to talk more in depth about the ChannelAudioGenerators.

The ChannelAudioGenerators are gophers. They get their instructions from the AudioGenerator and just do what they're told. They have no knowledge at all of where we are in the song, what the other ChannelAudioGenerators are doing, or when the song will end. They just receive their instructions and produce their output. They are music machines that do what they are told.



The AudioGenerator has several interfaces into the ChannelAudioGenerator. They can be boiled down to two groups: telling it what to do, and getting its output.

Tell it what to do:

  • setRowData
  • applyStartOfRowEffects
  • applyPerTickEffects

Get output:

  • getNextSample

Everything else is a private helper function to make the code better-organized.
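
To make that split concrete, here's a rough skeleton of the class in Kotlin. Treat it as a sketch: the RowData and Effect shapes, the field names, and the return type of getNextSample are placeholders I'm using for illustration, not my exact definitions.

```kotlin
// Illustrative skeleton only; RowData, Effect, and the return types are
// placeholder shapes rather than the exact definitions from my player.
data class RowData(val instrumentNumber: Int?, val period: Int?, val effect: Effect?)

sealed class Effect {
    data class VolumeSlide(val delta: Int) : Effect()
    data class PitchSlide(val delta: Int) : Effect()
}

class ChannelAudioGenerator(private val instruments: List<ShortArray>) {
    private var activeInstrument: ShortArray? = null
    private var activePeriod = 0
    private var activeEffect: Effect? = null
    private var volume = 64

    // Accept the new row's instrument, pitch, and effect (filled in below).
    fun setRowData(row: RowData) { }

    // Apply effects that take hold once, at the start of a row.
    fun applyStartOfRowEffects() { }

    // Apply effects that run on every subsequent tick (filled in below).
    fun applyPerTickEffects() { }

    // The channel's next output: a (left, right) pair, since each channel
    // knows its own fixed pan position.
    fun getNextSample(): Pair<Short, Short> = 0.toShort() to 0.toShort()
}
```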

setRowData is applied once per row, at the start of a new row. It simply hands the new row data to the generator, and the generator knows what to do with whatever it receives. If it receives a new instrument, it replaces the currently playing instrument. If it receives a new effect, it updates the current effect. If it receives a new pitch value, it replaces the current pitch value.
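
Inside the skeleton above, that "replace whatever the row provides" logic is essentially three null checks. Something like this (again, the field names are placeholders):

```kotlin
// Only the pieces actually present in the row replace the channel's state.
fun setRowData(row: RowData) {
    row.instrumentNumber?.let { activeInstrument = instruments[it] } // new instrument
    row.effect?.let { activeEffect = it }                            // new effect
    row.period?.let { activePeriod = it }                            // new pitch (period)
}
```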

If I'm being honest, the two effect functions could probably be refactored into a single one: both apply effects on a per-tick basis, and the only difference is that some effects are applied at the start of a row while the others are applied at the start of every subsequent tick.

The effects themselves are applied in several ways, but ultimately most of them boil down to one of two things: they either change the pitch or change the volume. The pitch effects include sliding the pitch up or down, sliding to a specific pitch, and playing an arpeggio. The volume effects include fading the volume in or out, or simply setting the volume to a different level.
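
As a concrete example, here's how per-tick handling of two representative effects, a volume slide and a pitch slide, might look inside the skeleton above. The clamping ranges (volume 0 to 64, periods 113 to 856) match ProTracker's usual limits, but the rest is a sketch rather than a full effect implementation.

```kotlin
// Per-tick handling for two representative effects; real ProTracker effects
// have more cases and edge conditions than shown here.
fun applyPerTickEffects() {
    when (val effect = activeEffect) {
        is Effect.VolumeSlide ->
            // Fade in or out by nudging the volume each tick, clamped to 0..64.
            volume = (volume + effect.delta).coerceIn(0, 64)
        is Effect.PitchSlide ->
            // Slide the pitch by adjusting the Amiga period each tick;
            // a smaller period means a higher pitch.
            activePeriod = (activePeriod + effect.delta).coerceIn(113, 856)
        null -> Unit // no effect on this row
    }
}
```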

Another factor is stereo panning. If you're doing stereo output, a single sample is actually a pair of values: one for the left speaker, one for the right. A sample meant to play on the left speaker would only have left values, with a right value of 0, and vice versa for the right speaker. A sample meant to play in the "middle" would have identical left and right values.

Unlike later trackers, ProTracker only had two stereo panning positions: left and right. Channels one and four were left, two and three were right. ProTracker did not support any effects to change the panning position (although other variations of the format did). So, the ChannelAudioGenerators only have to remember whether they are left or right, and adjust the output accordingly.
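
In code, that amounts to remembering a single left/right flag and zeroing the opposite side of the output pair. A sketch of the idea (the Pan enum and function names are mine, not from the actual player):

```kotlin
// ProTracker's fixed panning: channels 1 and 4 go left, channels 2 and 3 go right.
enum class Pan { LEFT, RIGHT }

fun panForChannel(channelNumber: Int): Pan =
    if (channelNumber == 1 || channelNumber == 4) Pan.LEFT else Pan.RIGHT

// Turn a channel's mono sample into the (left, right) pair it contributes.
fun applyPan(sample: Short, pan: Pan): Pair<Short, Short> =
    when (pan) {
        Pan.LEFT -> sample to 0.toShort()
        Pan.RIGHT -> 0.toShort() to sample
    }
```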

When the AudioGenerator generates samples, all it needs to do is add the channels' outputs together and send the result to the output device, taking care that the sums don't exceed the maximum or minimum sample values.
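
A sketch of that mixing step, assuming 16-bit signed output and the per-channel (left, right) pairs from the earlier sketches:

```kotlin
// Sum every channel's contribution for each speaker and clamp the totals
// to the 16-bit sample range before they go to the output device.
fun mix(channelSamples: List<Pair<Short, Short>>): Pair<Short, Short> {
    val left = channelSamples.sumOf { it.first.toInt() }
        .coerceIn(Short.MIN_VALUE.toInt(), Short.MAX_VALUE.toInt())
    val right = channelSamples.sumOf { it.second.toInt() }
        .coerceIn(Short.MIN_VALUE.toInt(), Short.MAX_VALUE.toInt())
    return left.toShort() to right.toShort()
}
```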

So, that's the gist of how I made my ProTracker player in Kotlin. Where do I go from here? Ultimately, I don't think Kotlin is the best language for writing an audio player. Kotlin does many things well, but it typically runs on the JVM, and for audio playback that involves a lot of processing, I'd rather have something that runs natively. Rust or Go might be better suited to the task.

I may try this project again in the future using Rust, but in the meantime I may be taking on a web project that's also related to mod music and the demoscene. I'll have more to report on that later.

I hope you've enjoyed this series.
