WeSearch

Max/MSP external for running neural amplifier captures

Tags: neural~ · Max/MSP · amp modeling · real-time audio · neural networks
⚡ TL;DR · AI summary

The neural~ Max/MSP external allows real-time loading and processing of neural amplifier models, supporting formats like NAM and AIDA-X while managing sample rate resampling. It integrates with Max for Live via the Live Amp Modeler demo and provides detailed messaging for model status and audio parameters. The object handles audio signal input and output, model loading, and error reporting. Builds are tested on macOS with Windows cross-compilation support using specified tools and dependencies.

Original article: read the full README at GitHub →
Opening excerpt (first ~120 words)

neural~ — This Max/MSP object loads and runs neural amplifier models in real time. It supports NAM and AIDA-X models, and handles resampling to the host sample rate. Sound demo via Max for Live: Live Amp Modeler.

The object's inlet accepts the following messages:
- (signal): The mono audio signal.
- load <model path>: Load a neural amp model (.nam or .json/.aidax).
- clear: Unload the current model.
- prewarm: (NAM-only) Prewarm the model to avoid digital artifacts.
- bang: Report model status.

The object's first outlet outputs:
- (signal): The processed audio signal.

The object's second outlet outputs the following messages:
- loaded <model path>: Path to model upon successful load.
- latency <ms>: Audio latency (non-zero when model and host sample rates differ).

Excerpt limited to ~120 words for fair-use compliance. The full article is at GitHub.
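The inlet protocol above (load / clear / bang, with format detection by file extension) can be sketched as a simple dispatcher. This is an illustrative Python model of the messaging behavior only, not the external's actual C implementation; the class name `NeuralHost` and its reply strings are hypothetical.

```python
import os

class NeuralHost:
    """Illustrative sketch of neural~'s load/clear/bang message handling."""
    # Per the README excerpt: .nam files are NAM models,
    # .json/.aidax files are AIDA-X models.
    SUPPORTED = {".nam": "NAM", ".json": "AIDA-X", ".aidax": "AIDA-X"}

    def __init__(self):
        self.model_path = None
        self.model_kind = None

    def load(self, path):
        # Choose the backend from the file extension, as the object does.
        ext = os.path.splitext(path)[1].lower()
        if ext not in self.SUPPORTED:
            return f"error: unsupported model format {ext}"
        self.model_path, self.model_kind = path, self.SUPPORTED[ext]
        return f"loaded {path}"  # mirrors the 'loaded <model path>' message

    def clear(self):
        # 'clear' unloads the current model.
        self.model_path = self.model_kind = None
        return "cleared"

    def bang(self):
        # 'bang' reports model status.
        if self.model_path is None:
            return "no model loaded"
        return f"{self.model_kind} model: {self.model_path}"
```

In the real external these replies would be emitted from the second outlet; here they are returned as strings so the dispatch logic is easy to follow.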

