In this article, you’ll learn how to use TypeScript and Twilio Programmable Video to build a video chatting application with muting and unmuting controls. You’ll start from an existing base project that uses the Twilio Client Library (for front-end video) and the Twilio Server Library (for back-end authentication), and retrofit it to support muting and unmuting.
This article is an extension of my last article, Get Started with Twilio Programmable Video Authentication and Identity using TypeScript, and will build off the “adding-token-server” branch of this GitHub Repository. To see the final code, visit the “adding-mute-unmute” branch.
Twilio Programmable Video is a suite of tools for building real-time video apps that scale as you grow, from free 1:1 chats with WebRTC to larger group rooms with many participants. You can sign up for a free Twilio account to get started using Programmable Video.
TypeScript is an extension of pure JavaScript - a “superset” if you will - and adds static typing to the language. It enforces type safety, makes code easier to reason about, and permits the implementation of classic patterns in a more “traditional” manner. As a language extension, all JavaScript is valid TypeScript, and TypeScript is compiled down to JavaScript.
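As a quick illustration of the kind of error static typing catches at compile time:

```typescript
// The annotations on `name` and the return type are checked at compile time.
function greet(name: string): string {
  return `Hello, ${name}!`;
}

greet('Alice');   // OK
// greet(42);     // Compile-time error: Argument of type 'number' is not
//                // assignable to parameter of type 'string'.
```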
Parcel is a blazing-fast, zero-configuration web application bundler that supports hot module replacement and bundles and transforms your assets. You’ll use it in this article to work with TypeScript on the client without having to worry about transpilation, bundling, or configuration.
Requirements
- Node.js - Consider using a tool like nvm to manage Node.js versions.
- A Twilio Account for Programmable Video. If you are new to Twilio, you can create a free account. If you sign up using this link, we’ll both get $10 in free Twilio credit when you upgrade your account.
Project Configuration
Download the project files and install dependencies
You can begin by cloning the “adding-token-server” branch of the accompanying GitHub Repository with the command below:
```bash
git clone -b adding-token-server --single-branch https://github.com/JamieCorkhill/Twilio-Video-Series
```
Navigate to both the client and server directories, and install the dependencies:
```bash
cd Twilio-Video-Series
cd client && npm i
cd ../server && npm i
```
Configure Environment Variables
For the authentication server, you’ll need to specify three environment variables corresponding to your Twilio Account SID, your Twilio API Key, and your Twilio API Key Secret.
The Twilio Server Library will make use of these variables to generate Access Tokens. See my article Get Started with Twilio Programmable Video Authentication and Identity using TypeScript or the relevant section of the documentation to learn more about Access Tokens.
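The base project’s token server already implements this for you, but as a rough sketch of what Access Token generation with the Twilio Server Library looks like (the names and structure here are illustrative, not the project’s exact code):

```typescript
import * as twilio from 'twilio';

const AccessToken = twilio.jwt.AccessToken;
const VideoGrant = AccessToken.VideoGrant;

// Generates a Video Access Token for the given identity and room,
// using the three environment variables you're about to configure.
function generateToken(identity: string, roomName: string): string {
  const token = new AccessToken(
    process.env.TWILIO_ACCOUNT_SID!,
    process.env.TWILIO_API_KEY!,
    process.env.TWILIO_API_SECRET!,
    { identity }
  );

  token.addGrant(new VideoGrant({ room: roomName }));

  return token.toJwt();
}
```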
Navigate into the server folder, if you’re not already there from the prior step, and create a new folder called env. Add a single file called dev.env as shown below:
```bash
cd server
mkdir env
touch env/dev.env
```
Add the following variables to dev.env.
```
TWILIO_ACCOUNT_SID=[Your Key]
TWILIO_API_KEY=[Your Key]
TWILIO_API_SECRET=[Your Key]
```
You can find your Account SID on the Twilio Console, and you can create your API Key and API Secret here. Add these keys in their respective locations, overwriting the [Your Key] placeholder in its entirety each time.
Note that on the API dashboard of the Console, your API key will be referred to as the API SID. Also, be sure to take note of your API Key Secret before navigating away from the page - you won’t be able to access it again.
With the authentication set up for testing, you’re ready to move to the client and begin adding muting and unmuting controls.
Update the Client
Add buttons to index.html
Open the client project in your favorite code editor or IDE and find the index.html file. Underneath the existing <input> and <button> elements, add the two highlighted buttons below:
```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Twilio Video Development Demo</title>
    <style>
      .media-container {
        display: flex;
      }

      .media-container > * + * {
        margin-left: 1.5rem;
      }
    </style>
  </head>
  <body>
    <div class="media-container">
      <div id="local-media-container"></div>
      <div id="remote-media-container"></div>
    </div>
    <div>
      <input id="room-name-input" type="text" placeholder="Room Name"/>
      <input id="identity-input" type="text" placeholder="Your Name"/>
      <button id="join-button">Join Room</button>
      <button id="leave-button">Leave Room</button>
      <button id="mute-unmute-audio-button">Mute Audio</button>
      <button id="mute-unmute-video-button">Mute Video</button>
    </div>
    <script src="./src/video.ts"></script>
  </body>
</html>
```
Each button will handle both muting and unmuting, for audio and video respectively. You’ll implement the logic so that when a user clicks a button, clicking it again produces the opposite result of the first click.

To do this, you’ll need to add handles to both buttons as well as global state to keep track of what is muted and what isn’t.
Handle mute/unmute button click logic
In the src/video.ts file, introduce the two button handles and boolean flags shown in the highlighted section of the snippet below:
```typescript
import {
  connect,
  createLocalVideoTrack,
  RemoteAudioTrack,
  RemoteParticipant,
  RemoteTrack,
  RemoteVideoTrack,
  Room,
} from 'twilio-video';

import { tokenRepository } from './token-repository';
import { Nullable } from './types';

// UI Element Handles
const joinButton = document.querySelector('#join-button') as HTMLButtonElement;
const leaveButton = document.querySelector('#leave-button') as HTMLButtonElement;
const remoteMediaContainer = document.querySelector('#remote-media-container') as HTMLDivElement;
const localMediaContainer = document.querySelector('#local-media-container') as HTMLDivElement;
const roomNameInput = document.querySelector('#room-name-input') as HTMLInputElement;
const identityInput = document.querySelector('#identity-input') as HTMLInputElement;
const muteUnmuteAudioButton = document.querySelector('#mute-unmute-audio-button') as HTMLButtonElement;
const muteUnmuteVideoButton = document.querySelector('#mute-unmute-video-button') as HTMLButtonElement;

// Room reference
let room: Room;

// Global mute state
let isAudioMuted = false;
let isVideoMuted = false;

...
```
You also need to ensure that it isn’t possible to click either button before a user joins a room, so find the main() function inside the video.ts file and programmatically set both to disabled (you could also use the disabled attribute in the HTML):
```typescript
/**
 * Entry point.
 */
async function main() {
  // Initial state.
  leaveButton.disabled = true;
  joinButton.disabled = false;
  muteUnmuteAudioButton.disabled = true;
  muteUnmuteVideoButton.disabled = true;

  // Provides a camera preview window.
  const localVideoTrack = await createLocalVideoTrack({ width: 640 });
  localMediaContainer.appendChild(localVideoTrack.attach());
}
```
Additionally, add both buttons to the toggleInputs() function, found toward the bottom of video.ts. This function encapsulates automatic button toggling so that you don’t have to litter the codebase with it:
```typescript
/**
 * Toggles inputs into their opposite form in terms of whether they're disabled.
 */
function toggleInputs() {
  joinButton.disabled = !joinButton.disabled;
  leaveButton.disabled = !leaveButton.disabled;
  muteUnmuteAudioButton.disabled = !muteUnmuteAudioButton.disabled;
  muteUnmuteVideoButton.disabled = !muteUnmuteVideoButton.disabled;

  identityInput.value = '';
  roomNameInput.value = '';
}
```
Next, you’ll create a mute() and an unmute() function, which will do the work of muting and unmuting tracks respectively.

These two functions look very similar, so in a real-world application you’d want to do a little more work to consolidate the logic and avoid repeating code (you’ll see one possible consolidation sketched below). Here, the repetition is kept so you can see what’s happening more transparently.
Near the bottom of the file, right above the toggleInputs() function but below the trackExistsAndIsAttachable() function, add the following:
```typescript
/**
 * Granular track-control for which mute and unmute operations can be applied.
 */
interface IMuteUnmuteOptions {
  audio: boolean;
  video: boolean;
}
```
Now, right below that, add the mute() function:
```typescript
/**
 * Mutes the local participant's tracks based on the specified options.
 *
 * @param opts
 *  Specifies which kind of tracks to mute.
 */
function mute(opts: IMuteUnmuteOptions) {
  if (!room || !room.localParticipant)
    throw new Error('You must be connected to a room to mute tracks.');

  if (opts.audio) {
    room.localParticipant.audioTracks.forEach(
      publication => publication.track.disable()
    );
  }

  if (opts.video) {
    room.localParticipant.videoTracks.forEach(
      publication => publication.track.disable()
    );
  }
}
```
To perform muting, you loop through the published audio tracks or video tracks for the local participant (that is, the user who pressed the button) and call disable(). disable() is a function available on audio and video tracks, and calling it fires that track’s disabled event, which you can see here for audio and here for video.
Similarly, to unmute, you do the same, calling enable(). Add the unmute() function below the mute() function:
```typescript
/**
 * Unmutes the local participant's tracks based on the specified options.
 *
 * @param opts
 *  Specifies which kind of tracks to unmute.
 */
function unmute(opts: IMuteUnmuteOptions) {
  if (!room || !room.localParticipant)
    throw new Error('You must be connected to a room to unmute tracks.');

  if (opts.audio) {
    room.localParticipant.audioTracks.forEach(
      publication => publication.track.enable()
    );
  }

  if (opts.video) {
    room.localParticipant.videoTracks.forEach(
      publication => publication.track.enable()
    );
  }
}
```
When you call either function, you pass in an object specifying which kinds of tracks to perform the requested operation on. That is, mute({ audio: true, video: false }) would mute only the audio track, while unmute({ audio: true, video: true }) would unmute both audio and video tracks.
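As an aside, since mute() and unmute() differ only in the method they call on each track, one possible consolidation (a sketch, not part of the base project) is a single helper that takes the desired state:

```typescript
/**
 * Hypothetical consolidation of mute() and unmute(): enables or disables
 * the local participant's tracks based on the specified options.
 */
function setTrackState(enabled: boolean, opts: IMuteUnmuteOptions) {
  if (!room || !room.localParticipant)
    throw new Error('You must be connected to a room to change track state.');

  if (opts.audio) {
    room.localParticipant.audioTracks.forEach(
      publication => enabled ? publication.track.enable() : publication.track.disable()
    );
  }

  if (opts.video) {
    room.localParticipant.videoTracks.forEach(
      publication => enabled ? publication.track.enable() : publication.track.disable()
    );
  }
}

// Usage: setTrackState(false, { audio: true, video: false }) mutes audio only.
```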
So far, you have the functions which manipulate the tracks, but you don’t have the functions which respond to click events for the mute and unmute buttons.
Rather than create two separate functions for this, you’ll create one that accepts an enum value indicating which kind of track to mute or unmute. Once again, similar-looking logic is repeated here. While principles like DRY are important, attempting to follow them in every instance can be more trouble than it’s worth, leading to more complex code and less explicit logic.
Closer to the top of the file, underneath the onLeaveClick() function but above onParticipantConnected(), add the following:
```typescript
enum TrackType {
  Audio,
  Video
}

/**
 * Callback function for the click of a "mute" button.
 *
 * @param trackType
 *  The type of track to mute/unmute.
 */
function onMuteUnmuteClick(trackType: TrackType) {
  if (trackType === TrackType.Audio) {
    const opts = { audio: true, video: false };

    isAudioMuted ? unmute(opts) : mute(opts);
    isAudioMuted = !isAudioMuted;

    muteUnmuteAudioButton.textContent = isAudioMuted
      ? 'Unmute Audio'
      : 'Mute Audio';
  }

  if (trackType === TrackType.Video) {
    const opts = { audio: false, video: true };

    isVideoMuted ? unmute(opts) : mute(opts);
    isVideoMuted = !isVideoMuted;

    muteUnmuteVideoButton.textContent = isVideoMuted
      ? 'Unmute Video'
      : 'Mute Video';
  }
}
```
This function expects to know which track to operate on, and then uses two ternary expressions to perform muting and unmuting.
In the case of audio, if audio is not already muted (meaning the user has yet to click this button in the session), the false leg of the first ternary expression will execute, and mute() will be called on the audio track.

Once the audio is successfully muted, the isAudioMuted flag will flip to true (the opposite of what it was before, namely false). The second ternary expression will then set the text of the button to “Unmute Audio”.
This process works the same way for video, and it will manage itself across all button clicks since no state is hardcoded. That is, if you click mute once and then click it again, the true leg of the ternary will run, unmuting the audio/video. Thereafter, the flag will switch back to false, causing the text of the button to switch back to Mute Audio/Mute Video.
With the onMuteUnmuteClick() function complete, you might be wondering how to wire it up in a manner that correctly corresponds to the button pressed.
Scroll to the bottom of the file, right above the main() function invocation, and modify the “Button event handlers” block as per the highlighted section of code below:
```typescript
// Button event handlers.
joinButton.addEventListener('click', onJoinClick);
leaveButton.addEventListener('click', onLeaveClick);
muteUnmuteAudioButton.addEventListener('click', () => onMuteUnmuteClick(TrackType.Audio));
muteUnmuteVideoButton.addEventListener('click', () => onMuteUnmuteClick(TrackType.Video));
```
Notice that for onJoinClick() and onLeaveClick(), you passed a reference to each function as the second argument to addEventListener(). That’s the desired behavior - you want addEventListener() to receive a reference, and it’ll call the function that reference points to at a later time.
The onMuteUnmuteClick() function, however, needs to know the TrackType, which is an argument you’re required to provide.

Here, where you bind the function as the event listener, is the only place where you have all the information required to know which track type to pass; thus, you wrap the onMuteUnmuteClick() function in an arrow function instead.
That allows you to invoke onMuteUnmuteClick(), passing it the TrackType. The addEventListener() function will receive a reference to the arrow function instead, which it will call when the button is clicked. The arrow function, in turn, will call onMuteUnmuteClick(), passing it the track type. You’ll use this trick again later to handle track enabled/disabled events.
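To make the distinction concrete, here’s a minimal illustration; the commented-out line shows the common mistake of invoking the function at binding time:

```typescript
// Correct: pass a reference; addEventListener invokes it on each click.
joinButton.addEventListener('click', onJoinClick);

// Incorrect: this would call onMuteUnmuteClick() immediately, once, and pass
// its return value (undefined) to addEventListener:
// muteUnmuteAudioButton.addEventListener('click', onMuteUnmuteClick(TrackType.Audio));

// Correct: the arrow function is the reference; it forwards the TrackType
// only when a click actually occurs.
muteUnmuteAudioButton.addEventListener('click', () => onMuteUnmuteClick(TrackType.Audio));
```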
Manage track enabled/disabled events
Users now have the ability to mute and unmute their audio and video tracks, but they can’t yet react to mute and unmute events from other connected users.
If a user, Alice, is in a room with another user, Bob, and Bob mutes his audio, Alice’s client application should be able to display a notification or icon to her. When Bob mutes his audio, he is calling disable() on his audio tracks, so Alice’s client will want to listen for the enabled and disabled events that fire in response.
Underneath the onTrackUnsubscribed() function, add the following two functions, which handle track enabled and disabled events respectively:
```typescript
/**
 * Callback for when a track is enabled.
 *
 * @param track
 *  The remote track for which an `enabled` event has occurred.
 *
 * @param participant
 *  The remote participant who owns the track for which an `enabled` event has occurred.
 */
function onTrackEnabled(track: RemoteTrack, participant: RemoteParticipant) {
  alert(`Track type ${track.kind} enabled for participant ${participant.identity}`);
}

/**
 * Callback for when a track is disabled.
 *
 * @param track
 *  The remote track for which a `disabled` event has occurred.
 *
 * @param participant
 *  The remote participant who owns the track for which a `disabled` event has occurred.
 */
function onTrackDisabled(track: RemoteTrack, participant: RemoteParticipant) {
  alert(`Track type ${track.kind} disabled for participant ${participant.identity}`);
}
```
In a real application, you’d want to style these notifications and display them in a nicer format to the user, but alert messages will suffice for now. You’re passing the participant whose track was enabled or disabled in case that metadata is useful to you; here, you use it to display the participant’s name in the alert message.
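For example, a minimal sketch of a friendlier notification, assuming a hypothetical <div id="notification-container"> element added to index.html, might look like this:

```typescript
// Hypothetical helper - assumes index.html contains
// <div id="notification-container"></div>.
function showNotification(message: string) {
  const container = document.querySelector('#notification-container') as HTMLDivElement;

  const notification = document.createElement('p');
  notification.textContent = message;
  container.appendChild(notification);

  // Remove the notification after a few seconds.
  setTimeout(() => notification.remove(), 3000);
}
```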
To receive these events, you’ll need to wire up the handlers both for users already in the room and for new users who join it later. Since that means adding event listeners in two places, you’ll put the wiring code in one function and call it from both locations.
Add the following function underneath the attachTrack() function but above the trackExistsAndIsAttachable() function:
```typescript
/**
 * Attaches event handlers for enabled/disabled events.
 *
 * @param track
 *  The remote track for which enabled/disabled events will fire.
 *
 * @param participant
 *  The participant who owns the track.
 */
function attachTrackEnabledAndDisabledHandlers(track: RemoteTrack, participant: RemoteParticipant) {
  track.on('enabled', () => onTrackEnabled(track, participant));
  track.on('disabled', () => onTrackDisabled(track, participant));
}
```
Since the track enabled and disabled events don’t pass any track or participant metadata to the listener function, you pass it manually; that’s why you once again wrap both listeners in an arrow function that can invoke them with the necessary arguments.
To wire up these event handlers for participants already in the room, add the following function right above the attachTrack() function:
```typescript
/**
 * Handles mute and unmute events for all tracks of a given participant.
 *
 * @param participant
 *  The remote participant for which a mute/unmute event has occurred.
 */
function handleMuteAndUnmuteEventsForRemoteParticipant(participant: RemoteParticipant) {
  participant.tracks.forEach(publication => {
    if (!publication.isSubscribed)
      return;

    if (!publication.track)
      return;

    const track = publication.track;

    attachTrackEnabledAndDisabledHandlers(track, participant);
  });
}
```
And call it from the manageTracksForRemoteParticipant() function:
```typescript
/**
 * Manages track attachment and subscription for a remote participant.
 *
 * @param participant
 *  The remote participant
 */
function manageTracksForRemoteParticipant(participant: RemoteParticipant) {
  // Handle tracks that this participant has already published.
  attachAttachableTracksForRemoteParticipant(participant);

  // Handle mute and unmute events for tracks this participant has already published.
  handleMuteAndUnmuteEventsForRemoteParticipant(participant);

  // Handles tracks that this participant eventually publishes.
  participant.on('trackSubscribed', onTrackSubscribed);
  participant.on('trackUnsubscribed', onTrackUnsubscribed);
}
```
To handle enabled and disabled events for tracks you subscribe to belonging to participants who connect in the future, you’ll need to call the attachTrackEnabledAndDisabledHandlers() function within onTrackSubscribed(). To do so, modify the function’s signature and body as shown below:
```typescript
/**
 * Triggers when a remote track is subscribed to.
 *
 * @param track
 *  The remote track
 *
 * @param participant
 *  The remote participant who owns the track
 */
function onTrackSubscribed(track: RemoteTrack, participant: RemoteParticipant) {
  attachTrackEnabledAndDisabledHandlers(track, participant);

  if (!trackExistsAndIsAttachable(track))
    return;

  attachTrack(track);
}
```
This introduces a bug in that the participant is never actually passed to this handler - you’ll deal with that shortly.
Now that you’re passing a RemoteParticipant to onTrackSubscribed(), pass it to onTrackUnsubscribed() too, just to maintain interface/signature consistency, even though you don’t need to use it here:
```typescript
/**
 * Triggers when a remote track is unsubscribed from.
 *
 * @param track
 *  The remote track
 *
 * @param participant
 *  The remote participant who owns the track (unused here, kept for signature consistency)
 */
function onTrackUnsubscribed(track: RemoteTrack, participant: RemoteParticipant) {
  if (trackExistsAndIsAttachable(track))
    track.detach().forEach(element => element.remove());
}
```
To deal with this change, modify the manageTracksForRemoteParticipant() function as follows:
```typescript
/**
 * Manages track attachment and subscription for a remote participant.
 *
 * @param participant
 *  The remote participant
 */
function manageTracksForRemoteParticipant(participant: RemoteParticipant) {
  // Attach tracks that this participant has already published.
  attachAttachableTracksForRemoteParticipant(participant);

  // Handle mute and unmute events for tracks this participant has already published.
  handleMuteAndUnmuteEventsForRemoteParticipant(participant);

  // Handles tracks that this participant eventually publishes.
  participant.on('trackSubscribed', (track: RemoteTrack) => onTrackSubscribed(track, participant));
  participant.on('trackUnsubscribed', (track: RemoteTrack) => onTrackUnsubscribed(track, participant));
}
```
As before, you need to pass more information to the callback functions than the event provides as its payload. Because of that, you can’t just pass function references; you need to invoke the functions with the arguments they need, which requires wrapping them in arrow functions.
In this case, the trackSubscribed and trackUnsubscribed events provide the RemoteTrack as their payload, so you can simply accept it into the arrow function and pass it along to the event listener.
With that, you’re finished implementing all the logic for muting/unmuting and the correct handling of state. You can now test the new feature.
Run the Application
To demo the application, start your local backend server by running the following command from inside the server folder:
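Assuming the server’s package.json defines a start script (the exact script name may differ in your setup):

```bash
# Run from inside the server directory - adjust if your package.json
# uses a different script name.
npm start
```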
Open a second terminal window and navigate to your client folder. From this folder, run the following command to start your client’s server:
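Again, assuming a start script that runs Parcel against index.html:

```bash
# Run from inside the client directory - adjust if your package.json
# uses a different script name.
npm start
```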
With both running, you should be able to visit localhost:1234 in your browser (or whichever port Parcel chooses) and see a preview of your webcam stream, after granting the relevant permissions if prompted.
By opening two browser windows, you can connect both to the same room but with different identities, and you should see the remote streams.
You can click Mute Audio and/or Mute Video on either screen. When you do, you should see the button’s text change, indicating that clicking again will perform the opposite operation, and an alert message should pop up in the other browser window containing information about the event.
By placing both the client and the server behind ngrok, you can tunnel your localhost connections to public URLs and perform this demo on different machines, so you’re not stuck seeing the same video stream for both participants.
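For instance, to tunnel the client (assuming Parcel’s default port of 1234):

```bash
# Expose the local Parcel dev server on a public URL.
ngrok http 1234
```

The server would need its own tunnel as well, with the client’s token-fetching URL updated to point at it.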
Conclusion
In this project, you learned how to manage muting and unmuting events for your users with the Twilio client-side library for Programmable Video, using TypeScript. To view this project’s source code, visit the “adding-mute-unmute” branch at its GitHub Repository. Moving forward, consider adding proper styles, and try updating the code within the onTrackEnabled() and onTrackDisabled() event handlers to manipulate those styles, notifying users that a stream is muted in a nicer way than an alert box.
Jamie is an 18-year-old software developer located in Texas. He has particular interests in enterprise architecture (DDD/CQRS/ES), writing elegant and testable code, and Physics and Mathematics. He is currently working on a startup in the business automation and tech education space, and when not behind a computer, he enjoys reading and learning.
- Twitter: https://twitter.com/eithermonad
- Personal Site: https://jamiecorkhill.com/
- GitHub: https://github.com/JamieCorkhill
- LinkedIn: https://www.linkedin.com/in/jamie-corkhill-aaab76153/