The goal of VR and AR is to immerse the user in a created world by fooling the human perceptual system into perceiving rendered objects as real. This must be done without the brain experiencing fatigue, and accurate audio reproduction plays a crucial role in achieving it. Unlike vision, with its narrow foveated field of view, human hearing covers all directions in full 3D. When the rendered audio and visuals do not agree, the user falls out of the experience. The importance of audio for VR and AR is being increasingly recognized, and VisiSonics is developing a comprehensive toolset to address the needs of industry. We'll describe several products developed by VisiSonics that are based on over a decade of research. These include propagation engines embedded in standard authoring workflows for gaming (Unity, Unreal, Wwise, FMOD) and movie post-production (Adobe, Pro Tools); capture of audio into higher-order ambisonics and MPEG-H; and personalization of 3D audio to the individual's head shape via customization of the head-related transfer function, among others. We'll demonstrate workflow solutions designed to enrich audio immersion for gaming, video post-production, and capture in VR/AR.