I'm not familiar with the Web Audio API, but I am interested in audio APIs in general. Could you explain where the designers went wrong? The templates listed in this (very nice!) app seem to look relatively sane as these things go.
Any chance for an AudioWorkletNode example?
AudioWorkletNode support is on the roadmap, along with proper multichannel (merger/splitter) visualization. I'm prioritizing core graph stability first, then expanding node coverage.
If you're interested, you can subscribe for updates at the bottom of the landing page. I'll share progress there.
For instance, you can't restart an OscillatorNode. Once you call stop(), that node is done for good; a stopped node is also immediately disconnected from the graph, so you have to build a new node to play the sound again.
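A minimal sketch of the restriction (browser-only; the `playTone` helper is mine, not part of the API). Calling start() a second time on the same oscillator throws an InvalidStateError, so the idiomatic workaround is to construct a fresh node per playback:

```javascript
const ctx = new AudioContext();

function playTone(freq, seconds) {
  // A new OscillatorNode every call -- the old one cannot be reused
  // after stop(), and a stopped node drops out of the graph anyway.
  const osc = new OscillatorNode(ctx, { frequency: freq });
  osc.connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + seconds);
}

playTone(440, 0.5);
playTone(440, 0.5); // fine, because each call builds a new node
// osc.start() a second time on one node would throw InvalidStateError
```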
ScriptProcessorNode, broadly speaking, sucked, because it ran its audio-processing callbacks on the main thread, so any UI jank caused glitches:
https://developer.mozilla.org/en-US/docs/Web/API/ScriptProce...
so eventually it was replaced with AudioWorkletNode combined with AudioWorkletProcessor:
https://developer.mozilla.org/en-US/docs/Web/API/AudioWorkle...
https://developer.mozilla.org/en-US/docs/Web/API/AudioWorkle...
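To illustrate the split between the two: the processor runs on the audio rendering thread, and the node is its main-thread handle. A minimal sketch (browser-only; the module filename and registered name are my own placeholders):

```javascript
// passthrough-processor.js -- loaded onto the audio rendering thread.
class PassthroughProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    // Copy each input channel straight to the matching output channel.
    const input = inputs[0];
    const output = outputs[0];
    for (let ch = 0; ch < input.length; ch++) {
      output[ch].set(input[ch]);
    }
    return true; // keep the processor alive
  }
}
registerProcessor('passthrough-processor', PassthroughProcessor);
```

```javascript
// Main thread: load the module, then insert the node into the graph.
const ctx = new AudioContext();
await ctx.audioWorklet.addModule('passthrough-processor.js');
const node = new AudioWorkletNode(ctx, 'passthrough-processor');
// e.g. source.connect(node).connect(ctx.destination);
```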
Arguably people would have been happy having just these two from the get-go.
Compare this to an API that was already mainstream back when the Web Audio API was first developed:
https://steinbergmedia.github.io/vst3_dev_portal/pages/Techn...