I'd been exploring the WebAudio API and the various things that people have made with it. I was blown away by awesome demos such as this Karplus-Strong synthesis demo and this Collection of WebAudio effects, and I wanted to try it out for myself.
Raw mouse and keyboard events are passed to Rust and handled entirely there. Keyboard shortcuts are implemented for a variety of actions such as moving notes, copy-pasting selections, and playing back the current composition. State is maintained for the set of selected notes, which users can build either by clicking and dragging while holding shift or by control-clicking individual notes. The goal was to make the editing experience as efficient as possible and give users access to higher-level methods of manipulating the note data.
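The selection state described above can be sketched roughly like this; the `NoteId` representation and method names here are hypothetical illustrations, not the project's actual types:

```rust
use std::collections::HashSet;

/// Hypothetical note identifier: (note line index, start beat in ticks).
type NoteId = (usize, u32);

#[derive(Default)]
struct SelectionState {
    selected: HashSet<NoteId>,
}

impl SelectionState {
    /// Control-click: toggle a single note in or out of the selection.
    fn toggle(&mut self, id: NoteId) {
        if !self.selected.insert(id) {
            self.selected.remove(&id);
        }
    }

    /// Shift-drag: add every note that falls inside the dragged box,
    /// given as inclusive (line, line) and (beat, beat) ranges.
    fn select_box(&mut self, notes: &[NoteId], lines: (usize, usize), beats: (u32, u32)) {
        for &(line, beat) in notes {
            if line >= lines.0 && line <= lines.1 && beat >= beats.0 && beat <= beats.1 {
                self.selected.insert((line, beat));
            }
        }
    }
}
```

Keeping the selection as a plain set makes the higher-level operations (move, copy, paste) simple iterations over `selected`.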
Note data is stored internally in skip lists, one for each note line. I chose this data structure because the majority of operations are random insertions and deletions, triggered when users modify notes in the middle of a composition. It also supports "stabbing" queries: given a beat, find either the note that intersects it or, if no note intersects, the notes that bound it on each side.
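The shape of a stabbing query can be sketched against any ordered map; here I use a `BTreeMap` as a stand-in for the skip list, since both support ordered neighbor lookups. The types and field layout are illustrative, not the project's actual ones:

```rust
use std::collections::BTreeMap;
use std::ops::Bound::{Excluded, Included, Unbounded};

/// One note line: notes keyed by start beat, mapping to their end beat.
/// The real project uses a skip list; a `BTreeMap` gives the same ordered
/// iteration and makes the query easy to show.
struct NoteLine {
    notes: BTreeMap<u32, u32>, // start beat -> end beat
}

enum Stab {
    /// A note covers the queried beat.
    Hit { start: u32, end: u32 },
    /// No note intersects; these are the neighbors on either side, if any.
    Miss { prev: Option<(u32, u32)>, next: Option<(u32, u32)> },
}

impl NoteLine {
    fn stab(&self, beat: u32) -> Stab {
        // Nearest note starting at or before `beat`.
        let prev = self.notes.range((Unbounded, Included(beat))).next_back();
        if let Some((&start, &end)) = prev {
            if beat < end {
                return Stab::Hit { start, end };
            }
        }
        // No hit: report the bounding notes on each side.
        let next = self.notes.range((Excluded(beat), Unbounded)).next();
        Stab::Miss {
            prev: prev.map(|(&s, &e)| (s, e)),
            next: next.map(|(&s, &e)| (s, e)),
        }
    }
}
```

In a skip list the same query is a single descending search to the predecessor, then one step forward along the bottom level.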
Along the same lines, I created a really neat text-based representation of the skip list, which is printed to the console in debug mode:
The UI for the synthesizer controls is implemented using
react-control-panel, my React port of the
control-panel project. Changing values affects the synthesizer live; each change is applied to all of the underlying voices individually.
As it turned out, the default polyphonic synthesizer built into Tone.js had some issues when used to play notes dynamically, such as when users selected them in the editor. Voices (the underlying monophonic synths) were getting re-used before the notes they last played had finished, leading to permanently sounding notes and audio artifacts. To get around this, I implemented my own polyphonic synth state manager in Rust that always uses the least-recently-used voice first. I combined this with a static scheduling algorithm that pre-calculates the optimal order in which to use voices, maximizing the time between each voice's release and its next attack. This is used in the playback feature to schedule attacks/releases on individual voices all at once.
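A minimal sketch of this kind of least-recently-used voice allocation might look like the following. The struct and method names are hypothetical, and the real manager also handles the pre-computed scheduling; this only shows the live-playback path:

```rust
use std::collections::VecDeque;

/// Sketch of LRU voice allocation over a fixed pool of monophonic voices.
/// Voice ids index into the real synth's voice array.
struct VoiceManager {
    /// Released voices, the one that has been free the longest at the front.
    free: VecDeque<usize>,
    /// Sounding voices, the one with the oldest attack at the front.
    busy: VecDeque<usize>,
    /// Which MIDI note each voice is currently playing, if any.
    playing: Vec<Option<u32>>,
}

impl VoiceManager {
    fn new(voice_count: usize) -> Self {
        VoiceManager {
            free: (0..voice_count).collect(),
            busy: VecDeque::new(),
            playing: vec![None; voice_count],
        }
    }

    /// Note-on: prefer the voice that has been free the longest, so the gap
    /// between its last release and this attack is maximized. If every voice
    /// is busy, steal the one with the oldest attack.
    fn note_on(&mut self, note: u32) -> usize {
        let voice = self
            .free
            .pop_front()
            .or_else(|| self.busy.pop_front())
            .expect("voice pool is empty");
        self.playing[voice] = Some(note);
        self.busy.push_back(voice);
        voice
    }

    /// Note-off: release the voice playing `note` back into the free queue.
    fn note_off(&mut self, note: u32) {
        if let Some(voice) = self.playing.iter().position(|p| *p == Some(note)) {
            self.playing[voice] = None;
            self.busy.retain(|&v| v != voice);
            self.free.push_back(voice);
        }
    }
}
```

The static playback scheduler can run the same policy ahead of time over the whole composition, emitting a fixed (voice, time) assignment for every attack and release.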
Whenever they are played, compositions are serialized into a binary format and Base64-encoded (all from Rust) and then saved into the browser's
localStorage. Saved compositions are loaded during application initialization if they exist. In the future, the goal is to allow import/export from MIDI files and perhaps even MIDI keyboards using WebMIDI.
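The save path could look something like the sketch below. The field layout is purely illustrative (not the project's actual format), and the hand-rolled Base64 encoder stands in for whatever encoding the real code uses:

```rust
/// Illustrative note record: line index, start beat, and length in ticks.
struct Note {
    line: u8,
    start: u32,
    length: u32,
}

/// Flatten notes into a compact little-endian binary format.
fn serialize(notes: &[Note]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(notes.len() * 9);
    for n in notes {
        buf.push(n.line);
        buf.extend_from_slice(&n.start.to_le_bytes());
        buf.extend_from_slice(&n.length.to_le_bytes());
    }
    buf
}

const B64: &[u8; 64] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/// Minimal standard Base64 encoder with `=` padding.
fn base64_encode(data: &[u8]) -> String {
    let mut out = String::new();
    for chunk in data.chunks(3) {
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = ((b[0] as u32) << 16) | ((b[1] as u32) << 8) | b[2] as u32;
        out.push(B64[(n >> 18) as usize & 63] as char);
        out.push(B64[(n >> 12) as usize & 63] as char);
        out.push(if chunk.len() > 1 { B64[(n >> 6) as usize & 63] as char } else { '=' });
        out.push(if chunk.len() > 2 { B64[n as usize & 63] as char } else { '=' });
    }
    out
}
```

The resulting string is what gets handed across the WebAssembly boundary and written to `localStorage`; loading simply reverses both steps.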
This project is still very much a WIP, and there's a lot left to do. For example, scrolling/zooming compositions is still unimplemented (and may be a performance bottleneck). As previously mentioned, import/export to MIDI is missing as well. A help guide, more UI controls for things like BPM, more ergonomic playback behavior, and a variety of other things are also missing. My goal with this isn't to create an all-inclusive web-based music production environment; I want to create an effective MIDI editor and synthesizer capable of allowing users to write and play back compositions. I may embed it into a larger application or add more features later on.