
Welcome to another blog entry. In this post we will build a browser-based screen capture utility. There are several reasons you might need something like this; screen sharing during a meeting is one that comes to mind.
Screen capture is also useful alongside WebRTC when building a video chat client, and there is the use case where you send the captured data to a backend server for further processing. I may go into WebRTC in a later post; in this one we will use HTML5 APIs to capture the user's screen.
Starting simple: taking a screenshot

We will start by taking a basic screenshot, using the html2canvas library. This library was created by Niklas von Hertzen and can be found on his GitHub page. It does not rely on WebRTC; it builds the screenshot by reading the DOM. We will create a simple page with one random image and a canvas area to show the generated screenshot.
I am using external libraries for this project (bulma.css is just for convenience). For the code, we are only showing the relevant lines; if you need the full source, please check my public GitHub page. The images used for this project were downloaded to assist in this demonstration. I do not own the copyright to any of these images; copyright stays with the respective owners.
<head>
  <link rel="stylesheet" href="libs/bulma-0.7.4/css/bulma.min.css" />
  <script type="text/javascript" src="libs/html2canvas.min.js"></script>
</head>
<body>
  <img style="border: 7px groove gold;" width="400" height="300" id="randomImage" />
  <canvas style="border: 7px groove grey;" width="400" height="300" id="scrCanvas"></canvas>
  <button class="button is-primary" onclick="screenshot();">Screenshot</button>
  <button class="button is-secondary" onclick="imageSwitch();">Random</button>
</body>
The following code switches images; it is called when Random is clicked. There is not much going on here.
function imageSwitch() {
    var images = [
        'images/scenery/img01.jpg',
        'images/scenery/img02.jpg',
        'images/scenery/img11.jpg',
        'images/scenery/img12.jpg'
    ];
    var image = images[Math.floor(Math.random() * images.length)];
    var himg = document.getElementById("randomImage");
    himg.src = image;
}
Now the screenshot part. This is as simple as calling the library: it returns a promise that resolves with a canvas object, which we copy over to the empty canvas we created above.
function screenshot() {
    html2canvas(document.body).then(function(canvas) {
        var cvs = document.getElementById("scrCanvas");
        var ctx = cvs.getContext('2d');
        ctx.clearRect(0, 0, cvs.width, cvs.height);
        ctx.drawImage(canvas, 0, 0, cvs.width, cvs.height);
    });
}
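If you would rather save the screenshot than display it, the canvas can be exported with the standard toDataURL API. The helper below is not part of the demo page, just a minimal sketch:

// Hypothetical helper: download the screenshot canvas as a PNG
function downloadScreenshot() {
    var cvs = document.getElementById("scrCanvas");
    var lnk = document.createElement('a');
    lnk.download = "screenshot.png";
    lnk.href = cvs.toDataURL("image/png");
    lnk.click();
}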
Recording a Screen Capture Video

For this project, we will create a very simple page that shows a video from YouTube. As always, this is for demonstration only, and I do not encourage recording videos from YouTube. However, since we are capturing video, a moving video makes testing easier than a static page. The page contains an input field to take a URL and an iframe to display the video.
<input id="insrc" class="input" value="https://www.youtube.com/embed/nVhNCTH8pDs?autoplay=1" />
<input class="button is-link is-rounded" type="submit" value="Go" onclick="loadvid();" />
<input class="button is-primary is-rounded is-light" type="button" value="Start" onclick="startRec();" />
<input class="button is-danger is-rounded is-light" type="button" value="Stop" onclick="stopRec();" />
<iframe id="player" style="width: 900px; height: 567px; margin-left: auto; margin-right: auto;"
        frameborder="0" allow="display-capture">
</iframe>
Loading a Video
Now let us walk through the different methods we are using here. We define some globals to hold values that are shared across methods. The first thing we do is load the video. Remember that you cannot load regular YouTube watch URLs in an iframe, because YouTube blocks framing them; instead, select Share → Embed for the video and copy the embed URL.
var recorder = "";
var stream = "";
var blob = [];

function loadvid() {
    var isrc = document.getElementById("insrc");
    var plyr = document.getElementById("player");
    plyr.src = isrc.value;
}
Starting Capture
We then come to the core of video capture. We will be using the getDisplayMedia API, part of the same family of media capture APIs that power WebRTC. It can capture both audio and video. For Chrome on Mac, make sure to check the audio checkbox in the share dialog. However, note that many browsers still do not support audio capture, so check the compatibility tables before relying on it. getDisplayMedia accepts a constraints object as a parameter, where you specify what should be captured (video/audio); audio is simply ignored in browsers that do not support it.
function startRec() {
    if (navigator.mediaDevices.getDisplayMedia) {
        navigator.mediaDevices.getDisplayMedia({ video: true, audio: true }).then(function(_stream) {
            stream = _stream;
            recorder = new MediaRecorder(stream);
            recorder.ondataavailable = function(e) {
                // Get data - you can also send to backend server
                blob.push(e.data);
            }
            // Push data every 10 seconds
            recorder.start(10000);
        });
    } else {
        alert("Screen Recording not supported");
    }
}
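One thing the function above does not handle: getDisplayMedia returns a promise that is rejected when the user cancels the screen picker or denies permission. A minimal sketch of catching that case (the logging is just an illustration):

navigator.mediaDevices.getDisplayMedia({ video: true, audio: true })
    .then(function(_stream) {
        // ... same recording setup as above ...
    })
    .catch(function(err) {
        // Fires when the user dismisses the picker or blocks access
        console.error("Screen capture failed: " + err.name);
    });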
We first verify that getDisplayMedia is supported. At a minimum we can then start a MediaRecorder and have it deliver data every n seconds; as soon as a chunk arrives, we add it to the array where we store all received data. MediaRecorder can also optionally take recording options, for example to specify which video format should be produced. The default seems to differ per browser: Safari records mp4, while Chrome seems to use Matroska (mkv). We can also request a specific flavour of WebM based on what is supported.

if (MediaRecorder.isTypeSupported) {
    if (MediaRecorder.isTypeSupported('video/webm;codecs=vp9')) {
        recordingOpts = { mimeType: 'video/webm; codecs=vp9' };
    } else if (MediaRecorder.isTypeSupported('video/webm;codecs=vp8')) {
        recordingOpts = { mimeType: 'video/webm; codecs=vp8' };
    } else {
        recordingOpts = { mimeType: 'video/mp4' };
    }
} else {
    recordingOpts = { mimeType: 'video/mp4' };
}
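The snippet above only builds the options object; to actually use it, pass it as the second argument when constructing the MediaRecorder (replacing the plain new MediaRecorder(stream) call in startRec). A minimal sketch, assuming the recordingOpts variable from above:

// Use the negotiated options when creating the recorder
recorder = new MediaRecorder(stream, recordingOpts);
console.log("Recording as: " + recorder.mimeType); // what the browser actually picked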
Stopping Capture
When recording is stopped, we merge all stored blobs into one. We also stop the media recorder, and then stop the media stream itself by stopping each of its tracks.
function stopRec() {
    if (recorder != "") {
        recorder.stop();
        stream.getTracks().forEach(function(track) {
            track.stop();
        });
        var blobf = new Blob(blob, { type: blob[0].type });
    }
}
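One caveat with the flow above: calling recorder.stop() queues one final dataavailable event, so the last chunk can arrive after the Blob has already been assembled. Moving the merge into the recorder's onstop handler avoids that; a minimal sketch:

recorder.onstop = function() {
    // By the time onstop fires, all dataavailable events have been delivered
    var blobf = new Blob(blob, { type: blob[0].type });
    // ... download or upload blobf here ...
};
recorder.stop();
stream.getTracks().forEach(function(track) { track.stop(); });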
Since we now have the captured data, we can decide what to do with it. We could send the captured blob to a backend server for processing; for this demo, we will simply download it locally.
// Create a link to download
var evt = document.createEvent('MouseEvents');
var lnk = document.createElement('a');
lnk.download = "Youtube.mkv"; // I am using Chrome, and know mkv is generated
lnk.href = window.URL.createObjectURL(blobf);
lnk.dataset.downloadurl = [blobf.type, lnk.download, lnk.href].join(':');
evt.initMouseEvent('click', true, false, window, 0, 0, 0, 0, 0, false, false, false, false, 0, null);
lnk.dispatchEvent(evt);
We create an anchor element pointing to the blob and dispatch a click event on it, which downloads the file to the default download location.
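Side note: document.createEvent and initMouseEvent are deprecated. On current browsers the same download can be triggered by simply calling click() on the anchor; a minimal sketch of that variant:

var lnk = document.createElement('a');
lnk.download = "Youtube.mkv";
lnk.href = window.URL.createObjectURL(blobf);
lnk.click(); // no synthetic MouseEvent needed
// Release the object URL once the download has been triggered
setTimeout(function() { window.URL.revokeObjectURL(lnk.href); }, 1000);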
Conclusion
We have seen two different ways to capture the screen: the first used a library to take a screenshot, and the second used the WebRTC-family screen capture API to record a video. Normally you would use these APIs together with a backend service, and WebSockets are a good option for sending the data. I hope you found this helpful. I will add the code to my public GitHub page.
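For that backend route, the browser's WebSocket API accepts Blob payloads directly, so the chunks from ondataavailable can be streamed to a server as they arrive. A minimal sketch, assuming a hypothetical endpoint at wss://example.com/capture:

// Hypothetical endpoint - replace with your own server
var ws = new WebSocket("wss://example.com/capture");

recorder.ondataavailable = function(e) {
    if (ws.readyState === WebSocket.OPEN) {
        ws.send(e.data); // send() accepts Blob chunks directly
    }
};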