Hello Tone.js
Learning notes from my first time trying Tone.js.
Official documentation: https://tonejs.github.io
Back story: after skimming through Web Audio + MIDI API material, I decided to start with Tone.js. While it has extensive API documentation, I am learning the basics from the readme to avoid getting overwhelmed.
What?
- "Web Audio framework for creating interactive music in the browser"
- Has common DAW features, such as:
- "transport" for synchronizing and scheduling events
- synths and effects
From this readme, it seems we do most things by creating a new instance of Tone.js classes. The ones discussed here:
- new Tone.Synth() -- same with other instruments, eg. Tone.FMSynth(), Tone.AMSynth(), Tone.PolySynth()
- new Tone.Loop()
- new Tone.Player()
- new Tone.Sampler()
- new Tone.Distortion() -- same with other effects, eg. Tone.Filter(), Tone.FeedbackDelay()
Important methods:
- toDestination() -- connects a node's output to the speakers.
- triggerAttack(), triggerRelease(), triggerAttackRelease()
  - attack = starts the note (amplitude rises)
  - release = ends the note (amplitude goes to 0)
  - attackRelease = combination of both methods above. It takes 3 arguments:
    - note (either a frequency in Hz or a note name as a string, eg. "D#2")
    - duration, in seconds or as a "tempo-relative value" ("8n" is a 1/8 note)
    - (optional) when to play, "along the AudioContext time" -- eg. now, now + 0.5, now + 1
- now() -- gets the current AudioContext time. The Web Audio API uses AudioContext time to schedule events, counting in seconds. Tone.js abstracts it away so we don't need to calculate/convert seconds based on the tempo. Thus we can use a value like "4n" to create a ¼ note.
- start() -- returns a promise. Resolved = ready.
- loaded() -- returns a promise. Resolved = all files have finished loading.
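To make those 3 arguments concrete, here is a minimal sketch (the variable names are placeholders):

// A minimal sketch of triggerAttackRelease()'s 3 arguments:
// note, duration, and (optionally) when to play.
const sketchSynth = new Tone.Synth().toDestination();
const now = Tone.now();
sketchSynth.triggerAttackRelease("D#2", "8n", now + 0.5);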
Tone.Transport is an object (a singleton, not a class or a method) on which we can call several methods, eg. start() and bpm.rampTo(800, 10).
- "like the arrangement view in a Digital Audio Workstation or channels in a Tracker"
- "can be started, stopped, looped and adjusted on the fly"
- I'm not sure exactly how it works yet, but it seems vital.
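A quick sketch of calling methods on it, reusing the values from the notes above:

// Start the timeline, then ramp the tempo to 800 bpm over 10 seconds.
Tone.Transport.start();
Tone.Transport.bpm.rampTo(800, 10);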
We can pass these types of tempo-relative values to any method that takes a time argument:
- "4n" -- a 1/4 note
- "1m" -- 1 measure (ie. one bar; four 1/4 notes in 4/4 time)
- "8t" -- a 1/8 note triplet
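For example, a hedged sketch reusing a mono synth like the one created in Usage below:

// Hold a note for one full measure instead of a fixed number of seconds.
mySynth.triggerAttackRelease("C4", "1m");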
I have not fully understood Signals yet, but basically they are properties that "have a few built in methods for creating automation curves." The example given is the frequency property of an Oscillator object.
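A small sketch of that example (the full oscillator example appears at the end of Usage below):

// frequency is a Signal, so we can automate it instead of setting it once.
const sigOsc = new Tone.Oscillator().toDestination();
sigOsc.frequency.rampTo("C2", 2); // glide down to C2 over 2 seconds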
Usage
Create an instance of the (mono) synth object.
// Create a new instance of the synth object.
// We can loop, combine, or simply play it.
const mySynth = new Tone.Synth().toDestination();
Play our synth.
// Play something--in this case a synth object created elsewhere.
mySynth.triggerAttackRelease("C4", "8n");
Create an instance of the polyphonic synth object.
// Create a new instance of the poly synth object.
// The parameter can be any mono synth.
const myPolySynth = new Tone.PolySynth(Tone.Synth).toDestination();
Play the poly synth. triggerRelease must be given a note or an array of notes.
❓ I think we can replace triggerRelease below with triggerAttackRelease, though that method also needs a duration argument. TODO: Try to rewrite the code below and confirm (a sketch follows after the snippet).
const now = Tone.now();
myPolySynth.triggerAttack("D4", now);
myPolySynth.triggerAttack("F4", now + 0.5);
myPolySynth.triggerAttack("A4", now + 1);
myPolySynth.triggerRelease(["D4", "F4", "A4"], now + 4);
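Here is an untested sketch of that TODO: the same staggered chord via triggerAttackRelease(), with durations chosen (my assumption) so every note still releases at now + 4.

// Untested sketch: each note starts at the same staggered time as above
// and holds long enough that all three release together at now + 4.
const now2 = Tone.now();
myPolySynth.triggerAttackRelease("D4", 4, now2);
myPolySynth.triggerAttackRelease("F4", 3.5, now2 + 0.5);
myPolySynth.triggerAttackRelease("A4", 3, now2 + 1);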
Create an instance of the player object to play a sample (an audio file).
const mySnare = new Tone.Player("https://foo.com/snare.mp3").toDestination();
Create an instance of the sampler object, which maps notes to multiple samples (audio files).
// Define the options first--a const can't be used before its declaration.
const myObj = {
  urls: {
    "C4": "C4.mp3",
    "F#4": "Fs4.mp3",
    "A4": "A4.mp3",
  },
  release: 1,
  baseUrl: "https://tonejs.github.io/audio/salamander/",
};
const mySampler = new Tone.Sampler(myObj).toDestination();
Play the sample (or do other things with it) after all files finish loading.
// Play a single sample from a Tone.Player().
Tone.loaded().then(() => { mySnare.start() });
// Play multiple samplers from Tone.Sampler().
// Even though we only supply samples for C4, F#4, and A4, Tone.js repitches them to cover other notes.
Tone.loaded().then(() => { mySampler.triggerAttackRelease(["Eb4", "G4", "Bb4"], 4) });
Create (but don't yet play) loops.
// Loops take a callback as the first argument; the callback receives the
// scheduled time, which we pass on as the 3rd argument of triggerAttackRelease().
const loopFn = (time) => { mySynth.triggerAttackRelease("C2", "8n", time) };
// Loop A: play the note every quarter note.
const loopA = new Tone.Loop(loopFn, "4n").start(0);
// Loop B: play the same note on the off-beats--note the start() param.
const loopB = new Tone.Loop(loopFn, "4n").start("8n");
Play the loops.
❓ I'm not sure whether we use Tone.Transport.start() to start all types of sounds (loops, synths, samplers?). Neither am I sure about the exact use case of Tone.Transport.start() vs Tone.start(). From the docs, Tone.start() resumes the AudioContext (browsers require a user gesture first), while Tone.Transport.start() starts the scheduling timeline; a sketch follows after the snippet.
// Play our loops defined above.
Tone.Transport.start();
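A hedged sketch of how the readme wires this up (the button selector is my assumption):

// Browsers only allow audio after a user gesture, so resume the
// AudioContext with Tone.start() inside a click handler, then start
// the Transport timeline that drives our loops.
document.querySelector("button").addEventListener("click", async () => {
  await Tone.start();
  Tone.Transport.start();
});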
Create an instance of an effect object (eg. Distortion) and connect a player object to it.
const myDistortion = new Tone.Distortion(0.4).toDestination();
// Connect the player we made above.
mySnare.connect(myDistortion);
// I haven't tried this but we should be able to do the same with a sampler object.
mySampler.connect(myDistortion);
We can connect to multiple effects (eg. Distortion, Filter). Calling connect() twice fans the signal out to both effects in parallel; see the chain() sketch after the snippet for serial routing.
// Assume myDistortion and myFilter have been instantiated.
mySampler.connect(myDistortion);
mySampler.connect(myFilter);
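If we want the effects in series instead, a hedged sketch using chain() (I haven't tried this):

// Route sampler -> distortion -> filter -> speakers in series.
mySampler.chain(myDistortion, myFilter, Tone.Destination);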
Create an oscillator object that smoothly ramps between two frequencies.
const myOsc = new Tone.Oscillator().toDestination();
// Define the starting frequency, then _ramp_ it down to C2 over 2 seconds.
myOsc.frequency.value = "C4";
myOsc.frequency.rampTo("C2", 2);
// Start the oscillator and stop it 3 seconds later.
myOsc.start().stop("+3");