Wednesday, 16 May, 2018 UTC


We’ve looked before at how to capture a user’s screen in Chrome and Firefox. Good news: another browser now has support too, Microsoft’s Edge.
Let’s see how we can capture the screen with Edge.

What you need

  • The latest version of Edge, which is currently version 42 with EdgeHTML version 17 (if you’re on a Mac like me, you can get a free virtual machine with Windows 10 and Edge installed to test on)
  • A text editor
  • A local web server – I like to use serve for things like this
  • Either ngrok or an equivalent tunnelling service or TLS certificates set up for localhost (we’ll see why later)

Screen Capture

Whereas Chrome required an extension to be built and Firefox used getUserMedia with a mediaSource constraint of "screen" to get a handle on the stream of the screen, once again Edge uses a different method. While yet another approach doesn’t sound great, it’s actually a point in Edge’s favour: it follows the proposed W3C spec for screen capture.
This support was shipped as part of the April update to Windows 10 and Edge and is part of the EdgeHTML engine version 17. So how does it work?

The code

To get access to a media stream of the screen in Edge, the code looks a bit like this:
navigator.getDisplayMedia().then(returnedStream => {
  // use the stream
});
If you compare this to the Firefox version it is a little simpler. Rather than calling navigator.mediaDevices.getUserMedia and passing a media constraint of { video: { mediaSource: 'screen' } } you just call getDisplayMedia. In this version, getDisplayMedia doesn’t take any media constraints, so the user gets to choose whether to display their application or their desktop.
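Because the two browsers expose different entry points, you may want a single helper that picks whichever one is available. This is a minimal sketch; the getScreenStream name and the nav parameter are my own inventions (nav stands in for window.navigator so the logic can be exercised anywhere):

```javascript
// Sketch: choose a screen capture call based on what the browser exposes.
// `nav` stands in for window.navigator; the function name is hypothetical.
function getScreenStream(nav) {
  if (typeof nav.getDisplayMedia === 'function') {
    // Edge (EdgeHTML 17+): no constraints; the user picks app or desktop.
    return nav.getDisplayMedia();
  }
  if (nav.mediaDevices && typeof nav.mediaDevices.getUserMedia === 'function') {
    // Firefox: request the screen via a mediaSource constraint.
    return nav.mediaDevices.getUserMedia({ video: { mediaSource: 'screen' } });
  }
  return Promise.reject(new Error('Screen capture is not supported here'));
}
```

Either branch resolves with a MediaStream, so the rest of the code can stay the same regardless of which browser the user is in.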
To explore how this all fits together, let’s build the same example application that we built for Chrome and Firefox, capturing the screen and then showing it in a <video> element.

Building screen capture

Create a new directory and an index.html file. We’re going to use the same HTML structure as the Chrome example. Add the following to your index.html file:
<!DOCTYPE html>
<html lang="en">

<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Show my screen</title>
</head>

<body>
  <h1>Show my screen</h1>

  <video autoplay id="screen-view" width="50%"></video>

  <button id="get-screen">Get the screen</button>
  <button id="stop-screen" style="display:none">Stop the screen</button>

  <script>
    // Fill in the rest here
  </script>
</body>

</html>

This gives us a simple HTML page with a <video> element and a couple of buttons.
The rest of our code goes between the <script> tags. We start by getting references to the elements we’re going to use. Then we define a variable for the stream of the screen and an event listener that stops the stream when you click the stop stream button.
(() => {
  const video = document.getElementById('screen-view');
  const getScreen = document.getElementById('get-screen');
  const stopScreen = document.getElementById('stop-screen');
  let stream;

  // Fill in the rest here

  stopScreen.addEventListener('click', event => {
    stream.getTracks().forEach(track => track.stop());
    video.src = '';
    stopScreen.style.display = 'none';
    getScreen.style.display = 'inline';
  });
})();
Now, when a user clicks on the “Get the screen” button we’ll call the getDisplayMedia function:
let stream;

getScreen.addEventListener('click', event => {
  navigator.getDisplayMedia();
});
Calling getDisplayMedia will show a prompt to the user asking for permission to use their screen. The user can then select the window or entire desktop they want to share. getDisplayMedia returns a promise that, once the user grants permission, resolves with a stream of the screen. We then need to set that as the source of the <video>:
let stream;

getScreen.addEventListener('click', event => {
  navigator.getDisplayMedia().then(returnedStream => {
    stream = returnedStream;
    video.src = URL.createObjectURL(stream);
    getScreen.style.display = 'none';
    stopScreen.style.display = 'inline';
  }).catch(err => {
    console.error('Could not get stream: ', err);
  });
});
We add a catch to the promise to deal with errors or if the user denies permission, and that is all the code we need.
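If you want the catch to be a little friendlier, you can distinguish a user denial from other failures. The screen capture spec names the user-denial rejection "NotAllowedError"; the messages and the describeCaptureError name below are my own:

```javascript
// Sketch: turn a capture rejection into a readable message.
// "NotAllowedError" is the name the spec gives a user denial;
// the function name and message wording are hypothetical.
function describeCaptureError(err) {
  if (err && err.name === 'NotAllowedError') {
    return 'Permission to capture the screen was denied.';
  }
  return 'Could not get stream: ' + (err && err.message ? err.message : String(err));
}
```

You could then write the handler as .catch(err => console.error(describeCaptureError(err))).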

Capture the screen

To run the example we need to serve the HTML from a local web server. I like to do this with an npm module called serve. If you have Node.js and npm installed, you can install it with:
npm install serve -g
You can then navigate using the command line to the directory where you saved your index.html file and serve it on localhost:5000 by entering:
serve .
If you have another method you use to serve static files on localhost, you can use that too.
We’re not done yet. Much like Firefox, Edge requires the site to be served over HTTPS in order to allow access to the screen capture API. Rather than trying to sort out self-signed certificates on your development machine, I suggest using ngrok to sidestep this issue. I normally use ngrok to test webhooks locally, but it has the added benefit of giving you an HTTPS URL that points at your local machine. Install ngrok (check out these instructions if you are installing on Windows) and start it up pointing at localhost:5000:
ngrok http 5000
Grab the HTTPS URL and enter that in Edge.
Press the “Get the screen” button and you will be able to get a stream of the user’s choice of application or desktop.
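One loose end worth handling: the user can also stop sharing from the browser’s own UI, bypassing our “Stop the screen” button. Tracks fire an "ended" event when that happens, so you can listen for it and reset the page. The watchStream and onStopped names in this sketch are my own:

```javascript
// Sketch: wire up each track's "ended" handler so the page can react
// when sharing stops outside our own Stop button. Names are hypothetical.
function watchStream(stream, onStopped) {
  stream.getTracks().forEach(track => {
    track.onended = () => onStopped(track);
  });
}
```

Inside the then handler you might call watchStream(stream, () => stopScreen.click()) to reuse the existing stop logic.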

Next steps

Now we’ve seen screen capture in Chrome, Firefox and Edge. If you want to take a look at the code for all three, check out the GitHub repo.
As you can see, getDisplayMedia is a much easier way of getting hold of a user’s screen than building an extension or checking specific versions of Firefox. If you think this spec should be implemented by those browsers, get in touch with them by raising or supporting their open bugs.
Do you have any ideas or plans for screen capture in browsers? Tell me what you’re thinking in the comments below. Or feel free to reach out on Twitter at @philnash or by email at [email protected]
Screen capture in Microsoft Edge