
Guitar tuner

Published: Tue Sep 26 2023

[Interactive tuner demo. Example reading: estimated frequency 226.40 Hz, closest note A3, error 49.64 cents.]

How it works

Obtaining sound samples

The application leverages the AudioContext API for real-time audio processing.

const audioContext = new AudioContext();
const microphoneDevice = await navigator.mediaDevices.getUserMedia({
  audio: {
    noiseSuppression: false,
    echoCancellation: false,
    autoGainControl: false,
  },
});
const microphoneNode = audioContext.createMediaStreamSource(microphoneDevice);

A custom audio node is created by extending AudioWorkletProcessor. Its process method captures the sampled microphone data.

const audioSamplerDefaultOptions = {
  numSamples: 2048,
};
class AudioSampler extends AudioWorkletProcessor {
  constructor(options) {
    super();
    this.numSamples =
      options.processorOptions?.numSamples ??
      audioSamplerDefaultOptions.numSamples;
    this.updateIntervalMs =
      options.processorOptions?.updateIntervalMs ??
      // The time it takes to fill numSamples samples at the sample rate.
      (this.numSamples / sampleRate) * 1000;
    this.samples = new Float32Array(this.numSamples);
    this.currentSampleIndex = 0;
    this.lastUpdateTime = 0;
  }
  // Keep only the most recent numSamples samples.
  updateSamples(input) {
    const newLen = input.length + this.currentSampleIndex;
    if (newLen <= this.numSamples) {
      // Buffer not yet full: append at the current write position.
      this.samples.set(input, this.currentSampleIndex);
      this.currentSampleIndex = newLen;
    } else {
      // Buffer full: shift existing samples to make room for new ones.
      this.samples.copyWithin(0, input.length);
      // Insert the new samples at the end.
      this.samples.set(input, this.numSamples - input.length);
      this.currentSampleIndex = this.numSamples;
    }
  }
  process(inputs) {
    // Use a single channel for simplicity.
    const input = inputs[0][0];
    // If there is no input, skip processing.
    if (!input || input.length === 0) {
      return true; // Keep the processor alive, but skip processing.
    }
    this.updateSamples(input);
    if (Date.now() - this.lastUpdateTime > this.updateIntervalMs) {
      this.port.postMessage({
        samples: this.samples,
      });
      this.lastUpdateTime = Date.now();
    }
    return true; // Keep the processor alive.
  }
}
registerProcessor("audio-sampling-processor", AudioSampler);

The audio processor is loaded asynchronously and communicates with the main thread for further processing.

await audioContext.audioWorklet.addModule("/audio-sampling-processor.js");
const samplingNode = new AudioWorkletNode(
  audioContext,
  "audio-sampling-processor"
);
microphoneNode.connect(samplingNode);

Frequency Estimation

Guitar strings do not emit a single frequency. Rather, they emit a fundamental frequency and a series of harmonics. To accurately detect the fundamental frequency, the YIN algorithm is employed, which is based on the autocorrelation of the sampled audio.

An implementation of the YIN algorithm is available in this GitHub repository.
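To give a sense of the core idea, here is a rough, simplified sketch with the same shape as the estimateFundamentalFrequency function used below; it computes the squared difference function and its cumulative mean normalized version, then takes the first lag that dips below a fixed threshold as the period (the threshold value here is an assumption, and refinements such as parabolic interpolation are omitted):

// Simplified YIN sketch: estimate the fundamental frequency of a buffer of
// samples, or return null when no clear periodicity is found.
function estimateFundamentalFrequency(
  samples: Float32Array,
  sampleRate: number,
  threshold = 0.1
): number | null {
  const maxLag = Math.floor(samples.length / 2);
  // Step 1: squared difference function d(tau).
  const diff = new Float32Array(maxLag);
  for (let tau = 1; tau < maxLag; tau++) {
    for (let j = 0; j < maxLag; j++) {
      const delta = samples[j] - samples[j + tau];
      diff[tau] += delta * delta;
    }
  }
  // Step 2: cumulative mean normalized difference d'(tau).
  const cmnd = new Float32Array(maxLag);
  cmnd[0] = 1;
  let runningSum = 0;
  for (let tau = 1; tau < maxLag; tau++) {
    runningSum += diff[tau];
    cmnd[tau] = (diff[tau] * tau) / runningSum;
  }
  // Step 3: the first lag whose normalized difference dips below the
  // threshold is taken as the period; walk down to the bottom of the dip.
  for (let tau = 2; tau < maxLag; tau++) {
    if (cmnd[tau] < threshold) {
      while (tau + 1 < maxLag && cmnd[tau + 1] < cmnd[tau]) tau++;
      return sampleRate / tau;
    }
  }
  return null;
}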

The output of the YIN algorithm still tends to fluctuate. To stabilize the estimate for the tuner, the output is refined by taking the median of the most recent results, which filters out occasional octave errors (harmonic detections) and other outliers.

let mostRecentEstimates = [];
let estimatedFrequency = 0;
samplingNode.port.onmessage = (event) => {
  const samples = event.data.samples;
  const estimateForSamples = estimateFundamentalFrequency(
    samples,
    audioContext.sampleRate
  );
  if (estimateForSamples !== null) {
    mostRecentEstimates = [
      ...mostRecentEstimates,
      estimateForSamples,
    ].slice(-10);
    estimatedFrequency = median(mostRecentEstimates);
  }
};
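The median helper used above is not shown; a straightforward version (an assumed implementation, not necessarily the one in the app) could be:

// Median of a small array of numbers.
function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}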

Frequencies and Musical Notes

Guitars are typically tuned in the 12-tone equal temperament system (12-TET). In 12-TET, the frequency of each note is related to the frequency of the previous note by a factor of 2^(1/12). The frequency of the A note in the fourth octave (A4) is defined to be 440 Hz, and the frequency of every other note is calculated from there:

const A4_FREQUENCY = 440.0; // Reference frequency for A4
const notes = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];
export function frequencyForNote(note: string, octave: number): number {
  const keyNumber = notes.indexOf(note);
  return A4_FREQUENCY * Math.pow(2, (keyNumber - 9 + (octave - 4) * 12) / 12);
}
type Note = {
  note: string;
  octave: number;
  frequency: number;
};
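As a sanity check, the formula reproduces the open-string frequencies of a guitar in standard tuning:

// Open strings in standard tuning, lowest to highest (rounded values).
frequencyForNote("E", 2); // ≈ 82.41 Hz
frequencyForNote("A", 2); // 110.00 Hz
frequencyForNote("D", 3); // ≈ 146.83 Hz
frequencyForNote("G", 3); // ≈ 196.00 Hz
frequencyForNote("B", 3); // ≈ 246.94 Hz
frequencyForNote("E", 4); // ≈ 329.63 Hz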

For the estimated frequency, the closest note is found:

export function closestNoteToFrequency(frequency: number): Note {
  const closestNoteNumber = Math.round(
    12 * Math.log2(frequency / A4_FREQUENCY)
  );
  const closestNoteIndex = (closestNoteNumber + 9) % 12;
  const note = notes[(closestNoteIndex + 12) % 12];
  const octave = Math.floor((closestNoteNumber + 9) / 12) + 4;
  const noteFrequency = frequencyForNote(note, octave);
  return {
    note,
    octave,
    frequency: noteFrequency,
  };
}

Error in cents

The unit used to express the difference between two frequencies is the 'cent.' One cent represents 1/100th of a semitone in the 12-tone equal temperament system:

export function differenceInCents(
  frequency1: number,
  frequency2: number
): number {
  return 1200 * Math.log2(frequency2 / frequency1);
}
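As a worked example, using the reading shown at the top of the page:

// An estimate of 226.40 Hz is closest to A3 (220 Hz).
const closest = closestNoteToFrequency(226.4);
// -> { note: "A", octave: 3, frequency: 220 }
const errorInCents = differenceInCents(closest.frequency, 226.4);
// -> 1200 * log2(226.4 / 220) ≈ 49.6 cents, i.e. sharp of A3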

The difference between the estimated frequency and the frequency of the closest note is at most 50 cents, so the visualization spans ±50 cents around the closest note.

The bars in the tuner are colored according to the error in cents, as shown in the sketch below. And that is all there is to it!
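A simple mapping from the absolute error to a bar color might look like this (the thresholds and colors are illustrative assumptions, not the app's actual styling):

// Illustrative mapping from the error in cents to a bar color.
function colorForError(cents: number): string {
  const absCents = Math.abs(cents);
  if (absCents < 5) return "green"; // essentially in tune
  if (absCents < 20) return "yellow"; // slightly off
  return "red"; // far off
}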