Sample: Simultaneous Audio Playback via Waveform Audio (waveOut) API

This minimalistic sample demonstrates that the (deprecated) Waveform Audio API supports multiple simultaneous playback streams.

Depending on command-line parameters, the application starts threads, each of which opens the audio hardware with a separate waveOutOpen call and streams a generated sine wave:

  • 1,000 Hz sine wave at 22,050 Hz, Mono, 16-bit PCM (command-line parameter “a”)
  • 5,000 Hz sine wave at 32,000 Hz, Mono, 16-bit PCM (command-line parameter “b”)
  • 15,000 Hz sine wave at 44,100 Hz, Mono, 16-bit PCM (command-line parameter “c”)
// Open the default device (WAVE_MAPPER) for the requested PCM format
Check(waveOutOpen(&hWaveOut, WAVE_MAPPER, &WaveFormatEx, NULL, NULL, CALLBACK_NULL));
// Allocate a WAVEHDR followed by ten seconds worth of audio data
WAVEHDR* pWaveHeader;
HGLOBAL hWaveHeader = GlobalAlloc(GMEM_MOVEABLE | GMEM_SHARE, sizeof *pWaveHeader + WaveFormatEx.nAvgBytesPerSec * 10);
pWaveHeader = (WAVEHDR*) GlobalLock(hWaveHeader);
pWaveHeader->lpData = (LPSTR) (pWaveHeader + 1); // audio data immediately follows the header
pWaveHeader->dwBufferLength = WaveFormatEx.nAvgBytesPerSec * 10;
pWaveHeader->dwFlags = 0;
pWaveHeader->dwLoops = 0;
#pragma region Generate Actual Data
    // Fill the buffer with a sine wave of the requested frequency
    SHORT* pnData = (SHORT*) pWaveHeader->lpData;
    SIZE_T nDataCount = pWaveHeader->dwBufferLength / sizeof *pnData;
    for(SIZE_T nIndex = 0; nIndex < nDataCount; nIndex++)
        pnData[nIndex] = (SHORT) (32000 * sin(1.0 * nIndex / WaveFormatEx.nSamplesPerSec * nFrequency * 2 * M_PI));
#pragma endregion
// Hand the buffer over to the device; playback starts asynchronously
Check(waveOutPrepareHeader(hWaveOut, pWaveHeader, sizeof *pWaveHeader));
Check(waveOutWrite(hWaveOut, pWaveHeader, sizeof *pWaveHeader));

The operating system mixes the waves, which is easy to hear. It is possible to play multiple waveforms within one process (e.g. with the “abc” command-line parameter) and/or to start multiple instances of the application.

A binary [Win32] and partial Visual C++ 2010 source code are available from SVN.

3 Replies to “Sample: Simultaneous Audio Playback via Waveform Audio (waveOut) API”

  1. By remarkable (scary?) coincidence I am working on an audio mixer right now and was wondering whether to write my own (or use BASS) or let the OS do the mixing. I noticed OS audio mixing has very low latency, lower than I would deem reliable in user mode. Also the resampling seems to work very well. I could e.g. hook the audio stream to the sound card (Vista+) to read back the mix into my app, likely at the cost of some more latency. What do you think?

  2. There are a few APIs out there, so on the one hand there are choices, and on the other hand all APIs but one are wrappers over the primary one.

    If there are no special requirements as to latency and mixing accuracy, DirectShow or DirectSound look best (depending on which interface is better for app integration, and also on whether codecs are to be used). ACM/waveOut look really obsolete; they might only be good for something really simple.

    My understanding is that if one does not need to support anything older than Vista, WASAPI should be used as the underlying API; otherwise DirectSound, or both to better cover the variety of OSes.

    Only for ultra-low latency and/or ultra-precise accuracy would one prefer to do the mixing oneself; otherwise, why bother when the system can do the mixing.

    Windows provides good support for audio, including mixing, time accuracy, reasonable latency, codecs, resampling, even echo cancellation – why not leverage all this (unless again something very special is in question).
