A while back, we were working on a media subsystem for The Church of Jesus Christ of Latter-day Saints. They needed software-controlled multimedia playback with specific requirements for their temples worldwide.
Now, the attached image isn’t an exact representation of our work, but it captures the essence: LDS and technology go hand in hand.
Back in the day, we used #DirectShow as our multimedia framework, and boy, did we face some interesting challenges. One that sticks out in my memory involved audio delivery. Picture this: we had a multi-channel audio output card from AudioScience, Inc., and our task was to schedule audio delivery in perfect sync across multiple physical audio connectors. But wait, there's more! We also had to toggle outputs on and off while others were already belting out sound, and whenever we switched on a fresh audio stream, it had to blend seamlessly into the signal already in play. Oh, and don't forget: the video part of this signal was streaming nonstop and couldn't be interrupted.
Now, let me tell you, this wasn't a walk in the park. DirectShow was designed back in the '90s around the quaint notion that you build your filter graph once, hit play, and never touch the topology while the show is running.
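To give you a taste of what that meant in practice, here's a rough sketch, in the spirit of the era, of the dance DirectShow expected just to light up one more audio output: stop everything, rewire the graph, run it again. This is not our actual production code; the helper and filter names are made up for illustration, and error handling is trimmed down.

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Hypothetical helper (not from our real code): find the first unconnected
// pin of a given direction on a filter.
static IPin* FindFreePin(IBaseFilter* filter, PIN_DIRECTION wanted)
{
    IEnumPins* pins = nullptr;
    if (FAILED(filter->EnumPins(&pins))) return nullptr;

    IPin* pin = nullptr;
    while (pins->Next(1, &pin, nullptr) == S_OK) {
        PIN_DIRECTION dir;
        pin->QueryDirection(&dir);

        IPin* peer = nullptr;
        // ConnectedTo() fails with VFW_E_NOT_CONNECTED when the pin is free.
        if (dir == wanted && FAILED(pin->ConnectedTo(&peer))) {
            pins->Release();
            return pin;                    // caller releases
        }
        if (peer) peer->Release();
        pin->Release();
    }
    pins->Release();
    return nullptr;
}

// The '90s way to bring one more audio renderer online: stop *everything*,
// rewire the topology, then start over. Filter names are illustrative.
HRESULT EnableExtraAudioOutput(IGraphBuilder* graph, IMediaControl* control,
                               IBaseFilter* splitter, IBaseFilter* newRenderer)
{
    HRESULT hr = control->Stop();          // halts audio AND video
    if (FAILED(hr)) return hr;

    hr = graph->AddFilter(newRenderer, L"Extra audio out");
    if (FAILED(hr)) return hr;

    IPin* out = FindFreePin(splitter, PINDIR_OUTPUT);
    IPin* in  = FindFreePin(newRenderer, PINDIR_INPUT);
    if (!out || !in) {
        if (out) out->Release();
        if (in)  in->Release();
        return E_FAIL;
    }

    hr = graph->Connect(out, in);          // "intelligent connect" may insert helpers
    out->Release();
    in->Release();
    if (FAILED(hr)) return hr;

    return control->Run();                 // only now does playback resume
}
```

The catch, of course, is that very first call: stopping the graph stops the video too, which was exactly the thing we weren't allowed to do.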
But guess what? Our software spread its wings and flew to over a hundred locations worldwide. Many moons have passed, but who knows — it might still be chugging along out there.