For real-time speech processing on iOS with minimal latency, focus on apps and setups that leverage iOS's low-latency audio pipeline (Core Audio + Audio Units). Since you're prioritising clear speech with noise reduction rather than audiophile-grade sound, the emphasis is on fast DSP and reliable voice isolation. With Audiobus 3 or AUM (by Kymatica) you can route live mic input → noise reduction plugin → headphones with very low latency. Both hosts support AUv3 plugins such as Brusfri (noise reduction) or FabFilter Pro-G (gate/expander), and both let you set the buffer size to keep latency minimal.
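If you'd rather prototype the same signal path yourself instead of going through a host app, here's a minimal sketch using AVAudioEngine. It assumes iOS 13+ and microphone permission already granted, and it uses Apple's built-in voice processing on the input node as a rough stand-in for a dedicated AUv3 noise reducer; the sample rate and buffer duration are just requests the hardware may not honour exactly.

```swift
import AVFoundation

// Minimal sketch: live mic → Apple voice processing → headphones.
// Assumes iOS 13+, headphones connected, and mic permission already granted.
func startLowLatencyMonitoring() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.allowBluetoothA2DP])
    try session.setPreferredSampleRate(48_000)               // request 48 kHz
    try session.setPreferredIOBufferDuration(64.0 / 48_000)  // request ~64-sample buffers (~1.3 ms)
    try session.setActive(true)

    let engine = AVAudioEngine()
    // Enable Apple's voice-processing chain (noise suppression, AGC, echo cancellation)
    // on the input/output pair — an assumption-level substitute for an AUv3 noise reducer.
    try engine.inputNode.setVoiceProcessingEnabled(true)

    // Route the processed mic signal straight to the output. Use headphones,
    // otherwise the speaker-to-mic path will feed back.
    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)

    try engine.start()
}
```

This skips the plugin-hosting flexibility you get from AUM or Audiobus 3, but it shows the same Core Audio pipeline those apps sit on top of.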
If you want the simplest iOS-only solution:
- Download AUM and Brusfri (or a free AUv3 noise gate).
- Use an audio interface (such as an iRig Pro or Focusrite iTrack) with direct monitoring switched off and the buffer size set to 64 samples.
- Connect your mic → iOS device → AUM chain → headphones.
This setup should give you clear speech with a round-trip latency of roughly 10 ms or less, which feels natural for live monitoring.
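As a rough sanity check on that figure, the arithmetic below shows where the latency goes. The converter/driver overhead numbers are assumptions and will vary by interface and iOS device.

```swift
// Back-of-the-envelope latency budget (overhead figures are assumptions;
// iOS may also add its own safety buffering on top of this).
let sampleRate   = 48_000.0
let bufferFrames = 64.0
let bufferMs     = bufferFrames / sampleRate * 1_000        // ≈ 1.3 ms per buffer
let converterMs  = 1.0                                      // assumed ADC/DAC + driver latency each way
let roundTripMs  = 2 * bufferMs + 2 * converterMs           // input side + output side
print(String(format: "≈ %.1f ms round trip", roundTripMs))  // ≈ 4.7 ms, comfortably under 10 ms
```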