Technical Sound Design Project



This project showcases techniques and possible approaches for dialogue, voice accessibility, footsteps, and 3D object-based audio in Unreal Engine with Wwise. The project features programming in Blueprints, C++, Python, and XML, as well as tools created to streamline implementation for audio teams, including WAAPI scripts, audio actor components in Blueprints and C++, and scriptable editor tools.



Archived repository at     

Play-through (no commentary)

Dialogue implementation and tools



Here the dialogue is implemented using Wwise external audio sources. The implementation required creating:
an XML script for Wwise to reference the external audio files and generate them as .wem files (github link)

A custom dialogue system using Unreal behavior trees

Events in Blueprints using the Set_External_Source_Media_with_Ids node
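For reference, the external sources list Wwise consumes is a small XML file. A minimal sketch, with hypothetical file names and a placeholder root path, could look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical external sources list; paths and names are placeholders. -->
<ExternalSourcesList SchemaVersion="1" Root="C:\Project\ExternalSources">
    <Source Path="VO\NPC_Intro_01.wav" Conversion="Vorbis"/>
    <Source Path="VO\NPC_Intro_02.wav" Conversion="Vorbis"/>
</ExternalSourcesList>
```

Pointing Wwise's external source conversion at a file like this produces the matching .wem files that the dialogue events reference at runtime.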


When the dialogue takes place in a noisy environment, the volume of the ambience sounds is attenuated via the Ambience_vol_RTPC. This game parameter is driven by a meter effect on the Ambience_sideChain bus, which receives its signal from the Dialogue actor mixer.



For this project I created an interactable actor class for the player to engage with.

The main character has a scan functionality: by holding down the right mouse button and moving the mouse around, the player can trigger a reaction from the interactable objects in the scene. When triggered, the interactable actors play a main-character monologue related to the specific interactable object in the scene.

The play audio function in the interactable actor has an exposed variable for the Media_Id. This Id sets the correct dialogue for the external audio source dialogue event. Since the variable is exposed, we can edit it whenever we place a new interactable actor in the scene.



In this prototype, I created a WAAPI script to load voice accessibility files, with their corresponding switch container, switch group, time stretch sound effect, and events.


By duplicating this script and changing its variables, I ended up with five accessibility voice commands. I can run each of these scripts with a single click and instantly generate all the necessary building blocks for the implementation in Wwise.
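The script itself isn't reproduced here, but a minimal sketch of the approach, assuming the waapi-client Python package, a running Wwise authoring instance with WAAPI enabled, and hypothetical object names, could look like this:

```python
# Sketch of a WAAPI generation script. Object names, the parent path, and the
# command list are hypothetical placeholders; a real script would also add the
# time stretch effect, switch group, and event objects.
try:
    from waapi import WaapiClient  # pip install waapi-client
except ImportError:                # lets the builder below be used offline
    WaapiClient = None

PARENT = "\\Actor-Mixer Hierarchy\\Default Work Unit"

def build_create_args(command_name):
    """Payload for ak.wwise.core.object.create: one switch container per command."""
    return {
        "parent": PARENT,
        "type": "SwitchContainer",
        "name": f"VA_{command_name}",
        "onNameConflict": "merge",
        "children": [
            {"type": "Sound", "name": f"VA_{command_name}_Voice"},
        ],
    }

def generate(commands):
    # Requires Wwise running with WAAPI enabled.
    with WaapiClient() as client:
        for name in commands:
            client.call("ak.wwise.core.object.create", build_create_args(name))

# Example usage (with Wwise open): generate(["OpenMenu", "SelectReply"])
```

Duplicating the script then amounts to changing the command list and names at the top, which is exactly what makes the one-click-per-command workflow possible.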

In Unreal, all the voice accessibility functions start by checking whether the accessibility features flag is true. Most of the voice accessibility logic is fairly basic, with the exception of the dialogue reply prompts. When the player is prompted to select a reply by pressing the X key, two scripts work in conjunction.

First, in the player blueprint, a looped MultiGate node posts the audio reply events in consecutive order and sets a different integer value for the Respond index variable.


Then, in the dialogue widget, we update the Respond index and post the correct dialogue reply based on the last Respond index.
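As a language-agnostic illustration of the two scripts working together (event names are hypothetical, and post_event stands in for Wwise's Post Event node), the cycling logic amounts to:

```python
from itertools import cycle

# Hypothetical reply prompts: (audio event name, Respond index).
REPLIES = [("Play_VA_Reply_0", 0), ("Play_VA_Reply_1", 1), ("Play_VA_Reply_2", 2)]

_reply_cycle = cycle(REPLIES)
respond_index = 0  # the widget reads this to post the chosen reply

def prompt_next_reply(post_event):
    """Player blueprint side: post the next reply prompt and remember its index."""
    global respond_index
    event_name, respond_index = next(_reply_cycle)
    post_event(event_name)
    return respond_index

def confirm_reply(post_event):
    """Dialogue widget side: post the reply matching the last Respond index."""
    post_event(REPLIES[respond_index][0])
```

The looped MultiGate plays the role of the cycle here: each prompt advances to the next reply and records its index, so the widget always knows which reply to confirm.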


Footsteps Implementation and tools




I created a Python script to generate the footsteps system in Wwise. This includes one switch container for surface material type, one switch container for wetness level, multiple random containers, switch groups, assigned switches, and a play_footsteps trigger event.

Although writing the script took longer than working directly in Wwise would have, I can now use it as a template for future projects, where it will only require one click of the mouse.
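The core of such a script is a nested create payload. A sketch with hypothetical surface and wetness names (the real script also creates switch groups, switch assignments, and the play_footsteps trigger event) might look like this:

```python
# Sketch of the footsteps hierarchy payload for WAAPI's
# ak.wwise.core.object.create call; all names are placeholders.
SURFACES = ["Grass", "Gravel", "Wood", "Stone"]
WETNESS = ["Dry", "Wet"]

def build_footsteps_hierarchy():
    """Surface switch container -> wetness switch containers -> random containers."""
    return {
        "parent": "\\Actor-Mixer Hierarchy\\Default Work Unit",
        "type": "SwitchContainer",
        "name": "Footsteps_Surface",
        "onNameConflict": "merge",
        "children": [
            {
                "type": "SwitchContainer",
                "name": f"Footsteps_{surface}",
                "children": [
                    {"type": "RandomSequenceContainer",
                     "name": f"Footsteps_{surface}_{wetness}"}
                    for wetness in WETNESS
                ],
            }
            for surface in SURFACES
        ],
    }
```

Adding a new surface or wetness level is then a one-line change to the lists at the top, which is what makes the script a reusable template.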

In Unreal, the footsteps are triggered by an animation notify from the character's animation blueprint. Once the notify is received, the footstep function performs a line trace and checks for actor component tags to identify and set the wetness-level RTPC. It then looks up the surface's physical material and, with a switch node, assigns the correct surface switch before posting the event.
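That Blueprint flow can be mirrored in a short sketch, where the tag names, material names, and the audio interface (set_rtpc, set_switch, post_event) are all hypothetical stand-ins for the corresponding Wwise Blueprint nodes:

```python
# Hypothetical mapping from a physical material to a Wwise surface switch.
SURFACE_SWITCH = {"PM_Grass": "Grass", "PM_Gravel": "Gravel", "PM_Wood": "Wood"}

def on_footstep_notify(trace_hit, audio):
    """Animation-notify handler: tag -> wetness RTPC, material -> surface switch."""
    # Component tag from the line trace drives the wetness RTPC.
    wetness = 100.0 if "Wet" in trace_hit["tags"] else 0.0
    audio.set_rtpc("Wetness_Level", wetness)
    # Physical material drives the surface switch, with a fallback surface.
    surface = SURFACE_SWITCH.get(trace_hit["material"], "Stone")
    audio.set_switch("Surface_Type", surface)
    audio.post_event("Play_Footsteps")
```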




I created an editor tool using Unreal's Scriptable Tools plugin. The tool lets us choose a static mesh, assign a component tag for the wetness level, and a physical material for the surface type. Then we can simply drag the mouse around the map to add and scale the static mesh in our scene. If we don't want the mesh to be visible, we can simply set its visibility to false.

 Scriptable editor blueprint:



After finishing the footsteps implementation for the main character, I thought it would be interesting to modify it into a modular approach. This led me to create a Blueprint actor component with the footsteps function and an AK_footsteps audio event variable. Once the footsteps component is added to a character blueprint, we can easily call the component function. The only prep work needed is to add an Ak audio component to our actor, assign a footsteps Wwise event to it, and set the footsteps component's event to the new Ak audio component.

With this simple tool we can easily add footstep audio functionality to any character in our game. Having the option to choose which event is triggered lets us use different audio depending on the character. For example, if our character is big, we can use a heavy footsteps event in Wwise and set it as the component's footsteps event.

Fly-by object implementation and C++ component



As you can see, there are three box colliders attached to this drone. Each collider triggers a different event switch in Wwise based on its position relative to the player. With Wwise's position editor we can automate the 3D position triggered by each switch, and we can also set an attenuation based on distance from the player.

In addition, the flying drone has a looping engine sound that is modulated to create a Doppler effect. The modulation is controlled by a distance RTPC that drives pitch, volume, a high-pass filter, and spread.



To calculate the flying drone's RTPC distance from the player in Unreal, I created a C++ component named Wwise_RealTime_RTPC. This component can run the SetRTPCValue function every frame.

However, running this function every frame is overkill; we don't need to update the sound that often. Because of that, I make sure to set the component's tick interval to a value greater than zero, in this case 0.07 seconds. With Blueprints we can set the RTPC value through our custom component, and since the component ticks less often than the actor's Tick function, we don't waste performance on unnecessary updates.
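For illustration, the throttling that the tick interval gives us for free can be expressed in plain code. This is a hypothetical sketch, not the actual component (the RTPC name is a placeholder, and in the real component Unreal's TickInterval does this bookkeeping for us):

```python
UPDATE_INTERVAL = 0.07  # seconds, matching the component's tick interval

class RealTimeRtpcSketch:
    """Accumulate frame time and push the RTPC only every UPDATE_INTERVAL."""

    def __init__(self, set_rtpc):
        self.set_rtpc = set_rtpc  # stand-in for Wwise's SetRTPCValue
        self._elapsed = 0.0

    def tick(self, delta_seconds, distance_to_player):
        self._elapsed += delta_seconds
        if self._elapsed >= UPDATE_INTERVAL:
            self._elapsed = 0.0
            self.set_rtpc("Drone_Distance", distance_to_player)  # placeholder name
```

At a 0.07 s interval the RTPC still updates roughly 14 times per second, which is more than enough for a smooth Doppler sweep while skipping most per-frame work.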

GitHub reference link

Without this component, we could add a delay at the end of the function and then loop back to update. This method is still very useful, but it requires a significant number of extra nodes, especially if we want to add Boolean conditions. I like the simplicity of using this custom component.

Object follow along spline



In this example you can hear the sound of the ocean following the player along the beach. In blueprints, the ocean sound actor is set to follow our main character along a spline. You can see how the actor follows the player by looking at the cube mesh attached to the audio object.
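Unreal's spline component provides FindLocationClosestToWorldLocation for exactly this, but the underlying idea can be sketched with a polyline and hypothetical 2D points: project the player's position onto each segment and move the sound actor to the nearest result.

```python
# Sketch of "follow the player along a spline" with a 2D polyline.
# Points and positions are hypothetical; Unreal's spline component
# does this natively via FindLocationClosestToWorldLocation.

def closest_point_on_segment(p, a, b):
    """Clamp the projection of p onto segment a-b to the segment."""
    ax, ay = a; bx, by = b; px, py = p
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom))
    return (ax + t * abx, ay + t * aby)

def follow_spline(player_pos, spline_points):
    """Return the polyline point nearest the player (where the ocean actor moves)."""
    candidates = (closest_point_on_segment(player_pos, a, b)
                  for a, b in zip(spline_points, spline_points[1:]))
    return min(candidates,
               key=lambda c: (c[0] - player_pos[0]) ** 2 + (c[1] - player_pos[1]) ** 2)
```

Running this every tick (or on a throttled interval, as with the RTPC component) keeps the audio object gliding along the beach as the player walks.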

To make the ocean sound more believable, I attached two more Ak audio events to the ocean actor. These events are triggered at different times to avoid unwanted phase cancellation and to enhance the effect of the waves moving in 3D space. Because they are attached to the same actor, all three audio events follow the player along the spline.