We're going to visualise the audio input on a `<canvas>`. To get started, clone the getting-started branch of the project, install the dependencies, and start the dev server:

```shell
git clone -b getting-started https://github.com/philnash/react-web-audio.git
cd react-web-audio
npm install
npm start
```
When you press the button, the app will request access to the microphone using the `getUserMedia` API. Then, once permission is granted, it will add the audio to the state of the application.

Open `src/App.js` and set up the state object in the `App` component's constructor:

```javascript
class App extends Component {
  constructor(props) {
    super(props);
    this.state = {
      audio: null
    };
  }

  render() {
    // ...
  }
}
```
Next, write a method that uses `getUserMedia` to request access to the microphone and sets the audio stream in the state if it is successful. Add the following to the component:

```javascript
async getMicrophone() {
  const audio = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: false
  });
  this.setState({ audio });
}
```
We also need a method to stop the microphone. It loops over the `MediaStreamTrack`s associated with the `MediaStream` that `getUserMedia` returns and stops them, finally removing the stream from the state:

```javascript
stopMicrophone() {
  this.state.audio.getTracks().forEach(track => track.stop());
  this.setState({ audio: null });
}
```
Add a method to toggle the microphone on and off, depending on whether we already have a stream in the state:

```javascript
toggleMicrophone() {
  if (this.state.audio) {
    this.stopMicrophone();
  } else {
    this.getMicrophone();
  }
}
```
We'll use `toggleMicrophone` as an event handler, so bind it to the component in the constructor:

```javascript
constructor(props) {
  super(props);
  this.state = {
    audio: null
  };
  this.toggleMicrophone = this.toggleMicrophone.bind(this);
}
```
Update the `render` function so that the button toggles between getting and stopping the microphone input:

```javascript
render() {
  return (
    <div className="App">
      <main>
        <div className="controls">
          <button onClick={this.toggleMicrophone}>
            {this.state.audio ? 'Stop microphone' : 'Get microphone input'}
          </button>
        </div>
      </main>
    </div>
  );
}
```
Create a new component in the `src` directory for the analysis; call it `AudioAnalyser.js`. We're going to pass the audio stream to this component via the `props`. This component is going to be responsible for using the Web Audio API to analyse the audio stream and store that analysis in the state:

```javascript
import React, { Component } from 'react';

class AudioAnalyser extends Component {

}

export default AudioAnalyser;
```
When the component mounts we'll set up the Web Audio API objects. First we create a new `AudioContext` (Safari still only supports the webkit prefixed version of this, sadly). Then we'll create an `AnalyserNode` that will do the heavy lifting for us.

From the `AnalyserNode` we need to know the `frequencyBinCount` which, according to the documentation, generally equates to the number of data values that will be available to play with for a visualisation. We'll create an array of 8-bit unsigned integers, a `Uint8Array`, the length of the `frequencyBinCount`. This `dataArray` will be used to store the waveform data that the `AnalyserNode` will be creating.

To use the media stream from the microphone as an input to the Web Audio API, we call `createMediaStreamSource` on the `AudioContext` object, passing in the stream. Once we have the source we can then connect the analyser:

```javascript
componentDidMount() {
  this.audioContext = new (window.AudioContext || window.webkitAudioContext)();
  this.analyser = this.audioContext.createAnalyser();
  this.dataArray = new Uint8Array(this.analyser.frequencyBinCount);
  this.source = this.audioContext.createMediaStreamSource(this.props.audio);
  this.source.connect(this.analyser);
}
```
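As a quick check on the size of that `dataArray`: the Web Audio API defines `frequencyBinCount` as half the analyser's `fftSize`, and `fftSize` defaults to 2048 (an assumption here; you can confirm it by logging `this.analyser.fftSize`), so by default the array holds 1024 values. A minimal sketch of the arithmetic:

```javascript
// frequencyBinCount is defined as fftSize / 2 by the Web Audio API;
// an AnalyserNode's fftSize defaults to 2048 (assumed here).
const fftSize = 2048;
const frequencyBinCount = fftSize / 2;

// The same allocation the component makes in componentDidMount:
const dataArray = new Uint8Array(frequencyBinCount);
console.log(dataArray.length); // 1024
```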
This still doesn't give us any data, though. To retrieve it we need to call the `AnalyserNode`'s `getByteTimeDomainData` method every time we want to update the visualisation. Since we will be animating this visualisation, we'll call upon the browser's `requestAnimationFrame` API to pull the latest audio data from the `AnalyserNode`.

Create a `tick` function that will run every time `requestAnimationFrame` does. The function will copy the current waveform, as an array of integers, from the `AnalyserNode` into the `dataArray`. It will then update the `audioData` property in the component's state with the `dataArray`. Finally, it will call on `requestAnimationFrame` again to request the next update:

```javascript
tick() {
  this.analyser.getByteTimeDomainData(this.dataArray);
  this.setState({ audioData: this.dataArray });
  this.rafId = requestAnimationFrame(this.tick);
}
```
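The shape of `tick` (read the latest data, store it, schedule the next read) can be sketched independently of React and the browser. In this hypothetical stand-alone version, `scheduler` plays the role of `requestAnimationFrame` and runs synchronously a fixed number of times so the loop terminates:

```javascript
// Framework-free sketch of the tick loop.
function makeTicker(readData, store, scheduler) {
  function tick() {
    store(readData()); // like getByteTimeDomainData + setState
    scheduler(tick);   // like this.rafId = requestAnimationFrame(this.tick)
  }
  return tick;
}

// Run three "frames" synchronously.
let framesLeft = 3;
const frames = [];
const tick = makeTicker(
  () => Uint8Array.of(128, 128),        // fake analyser output (silence)
  data => frames.push(data),            // stand-in for setState
  cb => { if (--framesLeft > 0) cb(); } // stand-in for requestAnimationFrame
);
tick();
console.log(frames.length); // 3
```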
Kick off the animation loop from the bottom of the `componentDidMount` method, after we connect the source to the analyser:

```javascript
  this.source.connect(this.analyser);
  this.rafId = requestAnimationFrame(this.tick);
}
```
Initialise the state of the component with an empty `Uint8Array` and also bind the scope of the `tick` function to the component:

```javascript
constructor(props) {
  super(props);
  this.state = { audioData: new Uint8Array(0) };
  this.tick = this.tick.bind(this);
}
```
To release the resources when the component is removed, add a `componentWillUnmount` method that cancels the animation frame and disconnects the audio nodes:

```javascript
componentWillUnmount() {
  cancelAnimationFrame(this.rafId);
  this.analyser.disconnect();
  this.source.disconnect();
}
```
We haven't rendered anything from this component yet. To check the data we're receiving, add a `render` method to the component with the following:

```javascript
render() {
  return <textarea value={this.state.audioData} />;
}
```
Open `src/App.js` and import the `AudioAnalyser` component:

```javascript
import React, { Component } from 'react';
import AudioAnalyser from './AudioAnalyser';
```
In the `render` function, include the `<AudioAnalyser>` component only if the state contains the audio stream:

```javascript
render() {
  return (
    <div className="App">
      <div className="controls">
        <button onClick={this.toggleMicrophone}>
          {this.state.audio ? 'Stop microphone' : 'Get microphone input'}
        </button>
      </div>
      {this.state.audio ? <AudioAnalyser audio={this.state.audio} /> : ''}
    </div>
  );
}
```
Load the page, press the button and grant access to the microphone, and you will see a stream of numbers updating in the `<textarea>`. Looking at a bunch of numbers updating is no fun though, so let's add a new component to visualise this data.

Create a file called `AudioVisualiser.js` and fill in the boilerplate we need:

```javascript
import React, { Component } from 'react';

class AudioVisualiser extends Component {

}

export default AudioVisualiser;
```
Start with the `render` method. We want to draw onto a `<canvas>`, so we'll render one to the page:

```javascript
render() {
  return <canvas width="300" height="300" />;
}
```
We need a reference to the `<canvas>` element so that we can draw on it later. In the constructor create the reference using `React.createRef()` and add the `ref` attribute to the `<canvas>` element:

```javascript
constructor(props) {
  super(props);
  this.canvas = React.createRef();
}

render() {
  return <canvas width="300" height="300" ref={this.canvas} />;
}
```
To draw the waveform, we're going to take the `audioData` we created in the previous component and draw a line from left to right between each data point in the array.

Start with a new function called `draw`. This function will be called each time we get new data from the analyser. We begin by setting up the variables we want to use:

- the `audioData` from the `props` and its length
- the canvas, its height and width, and a 2d drawing context, obtained via the `ref`
- `x`, which will be used to track across the canvas
- `sliceWidth`, the amount we will move to the right every time we draw

```javascript
draw() {
  const { audioData } = this.props;
  const canvas = this.canvas.current;
  const height = canvas.height;
  const width = canvas.width;
  const context = canvas.getContext('2d');
  let x = 0;
  const sliceWidth = (width * 1.0) / audioData.length;
```
Now set the width and colour of the line we'll draw, and clear the previous frame from the canvas:

```javascript
  context.lineWidth = 2;
  context.strokeStyle = '#000000';
  context.clearRect(0, 0, width, height);
```

Begin a path and move the drawing position to halfway down the left side of the canvas:

```javascript
  context.beginPath();
  context.moveTo(0, height / 2);
```
Loop over the data in `audioData`. Each data point is between 0 and 255. To normalise this to our canvas we divide by 255 and then multiply by the height of the canvas. We then draw a line from the previous point to this one and increment `x` by the `sliceWidth`:

```javascript
  for (const item of audioData) {
    const y = (item / 255.0) * height;
    context.lineTo(x, y);
    x += sliceWidth;
  }
```

Finally, draw a line to the point halfway down the right side of the canvas and stroke the path so it appears on screen:

```javascript
  context.lineTo(x, height / 2);
  context.stroke();
}
```
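The mapping from byte samples to canvas coordinates can be checked without a browser. This sketch (not part of the app) recomputes the points the drawing loop would plot for a hypothetical 4-sample frame on the 300×300 canvas; note that silence, which sits at 128, lands near the vertical middle:

```javascript
// Compute the (x, y) points the draw loop would plot, minus the canvas calls.
function waveformPoints(audioData, width, height) {
  const sliceWidth = (width * 1.0) / audioData.length;
  const points = [];
  let x = 0;
  for (const item of audioData) {
    const y = (item / 255.0) * height; // normalise 0–255 to the canvas height
    points.push([x, y]);
    x += sliceWidth;
  }
  return points;
}

const points = waveformPoints(Uint8Array.from([0, 128, 255, 128]), 300, 300);
console.log(points[1]); // [75, ~150.59] — silence draws near the middle
```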
The `draw` function needs to run every time the `audioData` is updated. Add the following function to the component:

```javascript
componentDidUpdate() {
  this.draw();
}
```
Open `src/AudioAnalyser.js` and import the `AudioVisualiser` component:

```javascript
import React, { Component } from 'react';
import AudioVisualiser from './AudioVisualiser';
```
Then replace the `render` method so that it renders the `<AudioVisualiser>` and passes the `audioData` from the state as a property:

```javascript
render() {
  return <AudioVisualiser audioData={this.state.audioData} />;
}
```
Restart the app with `npm start`, if it's not running anymore, and open the browser to `localhost:3000` again. Click the button, make some noise, and watch the visualiser come to life.