It's great that fluidsynth provides audio drivers, but for those of us who want to handle actually playing the audio ourselves (in my case, I'm mixing in other waveforms), what do you think about the idea of a "fluidsynth core" type of build config?
i.e. it would build just the bits required to synthesize the sample data, and no audio drivers. Is glib still a hard dependency at that point? I feel like it removes all threading concerns, or is the synthesizer itself multithreaded? The main system-level interfacing I see remaining is reading soundfonts from disk, which standard C FILE streams should handle portably. I expect I'm missing something, as I don't know much about fluidsynth internals.
FWIW, I wrote my own script¹ to compile a minimal fluidsynth on Windows. It was so long ago I can't remember whether that's because I struggled with CMake or because I chose not to install it (I've never had a good experience with CMake). And I'm biased in the sense that I'd love to see the glib dependency disappear: it's pretty annoying to build on non-Linux systems! Although it's only done once per platform, so it's not a huge cost overall.