Music · Technology · The Intersection
Where the music ends and the engineering begins is a question I stopped trying to answer a long time ago. This is what happens when both live in the same brain.
I spent my undergraduate years doing two things that everyone around me seemed to think were in tension: making music and studying computer science and engineering. They weren't in tension for me at all. They were the same impulse — I wanted to build things and hear them work. It just happened that sometimes the thing I was building made a sound, and sometimes it ran on a server. Both felt like the same kind of problem-solving, just expressed differently.
The intersection always excited me more than either side individually. I wasn't drawn to pure software engineering for its own sake, and I wasn't interested in music as just performance or composition. What I wanted was the place where the two touched — tools that didn't exist yet, instruments that no one had made, experiences that required both sets of skills to produce. That's the space I've always worked in, even when I didn't have a clean name for it.
The frustration that finally pushed me fully into browser-based audio was JUCE — the C++ framework that most professional audio software is built on. It's powerful, it's industry-standard, and it treated me like I owed it something. Hours of build configuration before a single sample played. Documentation written for people who already understood everything it was trying to explain. Every small change requiring a full rebuild cycle. I kept thinking: there has to be a better way to prototype audio ideas quickly, without a PhD in signal processing and a week of CMake wrestling just to hear a sine wave. And there was. The Web Audio API.
The moment I got a synthesizer running in a browser tab — no install, no build system, immediate feedback, shareable with a URL — something clicked. The node graph model maps almost one-to-one onto how a producer thinks about signal routing. You can hear the result of a change instantly. You can share what you've built with anyone. And the gap between "idea" and "working prototype" collapses to the time it takes to open a text editor. I've been building in it ever since, and I've made three tools I'm genuinely proud of.
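That node graph is small enough to sketch in a dozen lines. Here's a minimal illustration of the idea, assuming a browser environment where an `AudioContext` exists; the `makeBeep` helper and its default values are illustrative, not taken from any of the tools below.

```javascript
// Minimal Web Audio node graph: oscillator -> gain -> speakers.
// Assumes a browser AudioContext; helper name and defaults are illustrative.
function makeBeep(ctx, freq = 440, level = 0.2, seconds = 1) {
  const osc = ctx.createOscillator();   // the sound source
  const gain = ctx.createGain();        // the volume control
  osc.type = 'sine';
  osc.frequency.value = freq;
  gain.gain.value = level;
  // connect() returns the node you connect to, so the graph chains:
  osc.connect(gain).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + seconds);
  return osc;
}
```

Paste something like that into a browser console and you have sound; that is the entire idea-to-prototype gap.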
A fully browser-based subtractive synthesizer built with the Web Audio API. Three oscillators with individual waveform selection, octave shifting and detuning. Lowpass filter with cutoff and resonance. Full ADSR amplitude and filter envelopes. LFO with assignable destination. Everything you'd expect from a capable soft synth — running in a tab, with no install required.
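The detuning those oscillators rely on is standard equal-temperament math: a pitch offset in cents scales a frequency by 2^(cents/1200). A pure-math sketch of the idea; the function names and the three-voice spread are illustrative, not the synth's actual API.

```javascript
// Detune in cents: frequency scaled by 2^(cents / 1200).
// Standard equal-temperament math; names here are illustrative.
function detunedFreq(baseHz, cents) {
  return baseHz * Math.pow(2, cents / 1200);
}

// A three-oscillator voice: a slight symmetric detune thickens the tone.
function voiceFrequencies(baseHz, spreadCents) {
  return [-spreadCents, 0, spreadCents].map(c => detunedFreq(baseHz, c));
}
```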
Also known as the Chop Deck. A browser-based sampler designed around the workflow of chopping and flipping samples the way a producer would — not just playback, but slicing a sample into segments, triggering individual chops, and simulating the scratch and stutter techniques that define a particular school of sample-based music production. Built from scratch in the browser, no DAW required.
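At its simplest, chopping is just mapping a buffer into frame ranges. A sketch of equal-length slicing only, as one possible starting point; the real Chop Deck may well slice on transients or user-placed markers instead.

```javascript
// Slice a sample of `totalFrames` frames into N equal chops.
// Each chop is a [startFrame, endFrame) pair. Illustrative sketch only.
function equalChops(totalFrames, numChops) {
  const chops = [];
  for (let i = 0; i < numChops; i++) {
    const start = Math.floor((i * totalFrames) / numChops);
    const end = Math.floor(((i + 1) * totalFrames) / numChops);
    chops.push([start, end]);
  }
  return chops;
}
```

Each pair then becomes the offset and duration handed to a buffer source when that chop's pad is triggered.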
The one that started from a personal need. A dedicated hard dance kick drum synthesizer — because every hardstyle, rawstyle and hard techno producer knows the kick is everything, and building one from scratch gives you control that no sample pack ever will. Fenetrix is the full version. Fenelite is the stripped-back lite edition. Fenelite Pro is the oneshot-focused version built for drop-in production use. All three run entirely in the browser.
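The core of a hard dance kick reduces to one idea: a sine wave whose pitch sweeps exponentially from high to low while the amplitude decays. A pure-math sketch of that reduction, rendered sample by sample; the parameter values are illustrative defaults, not Fenetrix's actual algorithm.

```javascript
// A hard-dance kick reduced to its core: an exponential pitch sweep
// on a sine, with a linear amplitude decay. Illustrative defaults only.
function renderKick({ sampleRate = 44100, seconds = 0.3,
                      startHz = 200, endHz = 50 } = {}) {
  const n = Math.floor(sampleRate * seconds);
  const out = new Float32Array(n);
  let phase = 0;
  for (let i = 0; i < n; i++) {
    const t = i / n;                                      // 0..1 progress
    const freq = startHz * Math.pow(endHz / startHz, t);  // exponential sweep
    phase += (2 * Math.PI * freq) / sampleRate;           // accumulate phase
    out[i] = (1 - t) * Math.sin(phase);                   // linear decay
  }
  return out;
}
```

Everything that separates a rawstyle kick from a techno kick then lives in the sweep curve, the decay shape, and the distortion applied afterwards, which is exactly why per-parameter control beats a sample pack.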
After spending time building these tools and realising how little good material existed for producers who wanted to cross into audio programming — not academic DSP theory, not dry API documentation, but genuinely educational and interactive content written for people who already understand music — I built a course. Four lessons. Each one adds a new layer of complexity and a working interactive demo you can play with in the browser immediately. No prior audio programming experience is required, only basic JavaScript familiarity. By the end, you'll have the conceptual and practical foundation to design and ship your own web-based audio tools.
The Web Audio API from zero. AudioContext, OscillatorNode, GainNode — the entire node graph model explained through working instruments. Seven chapters building a complete synthesizer piece by piece: first sound, pitch and frequency, waveforms, ADSR envelopes, a playable keyboard, filters, and the full patch.
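The envelope chapter comes down to scheduling ramps on a gain node's `gain` parameter. A sketch of an ADSR using the real `AudioParam` scheduling methods (`setValueAtTime`, `linearRampToValueAtTime`); the function shape, linear segments, and default values are illustrative choices, not the course's exact code.

```javascript
// ADSR on a gain AudioParam via linear ramps. The scheduling methods
// are real Web Audio API; the envelope shape here is an illustrative sketch.
function applyADSR(param, now,
                   { attack = 0.01, decay = 0.1, sustain = 0.7, release = 0.3 },
                   holdTime) {
  param.setValueAtTime(0, now);                                  // start silent
  param.linearRampToValueAtTime(1, now + attack);                // attack to peak
  param.linearRampToValueAtTime(sustain, now + attack + decay);  // decay to sustain
  const releaseStart = now + holdTime;
  param.setValueAtTime(sustain, releaseStart);                   // hold sustain
  param.linearRampToValueAtTime(0, releaseStart + release);      // release to zero
}
```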
Every drum in your sample library is a mathematical operation on air pressure. Kick, snare, hi-hat and clap synthesized from nothing — no samples, no imports. Then all four assembled into a working 16-step drum sequencer using precise AudioContext scheduling. The reason your kicks have transients, why snares buzz, what makes a hi-hat metallic.
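The timing side of that sequencer is simple arithmetic on the audio clock: at four steps per beat (sixteenth notes), every step index maps to an exact `AudioContext` time. A minimal sketch of that mapping; the variable names are mine, not the lesson's.

```javascript
// When does step `stepIndex` land? At 4 steps per beat (sixteenths),
// each step lasts 60 / bpm / 4 seconds. Names are illustrative.
function stepTime(startTime, bpm, stepIndex, stepsPerBeat = 4) {
  const secondsPerStep = 60 / bpm / stepsPerBeat;
  return startTime + stepIndex * secondsPerStep;
}
```

Feeding these exact times to the scheduling methods, rather than firing sounds from JavaScript timers, is what keeps a 16-step pattern sample-accurate.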
Saturation, overdrive, bitcrushing and foldback distortion — all explained as mathematical functions applied to every sample. WaveShaperNode and transfer curves visualised in real time alongside the waveform they produce. Ends with a full four-stage character chain where you toggle each distortion type in and out while the signal runs through.
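A transfer curve is just an array: `WaveShaperNode.curve` takes a `Float32Array` mapping input values across [-1, 1] to output values. A sketch of one such curve, tanh saturation; the `drive` parameter and the curve length are illustrative choices.

```javascript
// Build a saturation transfer curve for WaveShaperNode.curve:
// input in [-1, 1] mapped through tanh, normalised so +/-1 stays +/-1.
// `drive` and `length` are illustrative defaults.
function makeSaturationCurve(drive = 4, length = 1024) {
  const curve = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    const x = (i / (length - 1)) * 2 - 1;                // index -> [-1, 1]
    curve[i] = Math.tanh(drive * x) / Math.tanh(drive);  // normalised tanh
  }
  return curve;
}
```

Swap the tanh for a hard clip, a staircase, or a folded sine and you have the other three distortion types in the chain; only the shape of the array changes.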
Tremolo, vibrato, filter wobble, autopan and ring modulation — five effects that are all the same oscillator connected to five different destinations. The lesson walks from sub-audio LFO rates all the way up into audio-rate modulation, at which point the modulator stops being an effect and starts generating new frequencies. The conceptual bridge into FM synthesis.
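Tremolo, the simplest of the five, is nothing but a multiply: an LFO scaling the carrier's gain. A pure-math sketch, assuming a mono sample buffer; push `lfoHz` into the audible range and this exact multiply starts behaving like ring modulation, producing sum and difference frequencies. Parameter names are mine.

```javascript
// Tremolo as amplitude modulation: an LFO dips the gain between
// 1 and (1 - depth). Pure-math sketch over a mono sample buffer.
function tremolo(carrier, sampleRate, lfoHz, depth) {
  const out = new Float32Array(carrier.length);
  for (let i = 0; i < carrier.length; i++) {
    const lfo = Math.sin((2 * Math.PI * lfoHz * i) / sampleRate);
    const gain = 1 - depth * 0.5 * (1 + lfo);  // 1 at LFO trough, 1 - depth at peak
    out[i] = carrier[i] * gain;
  }
  return out;
}
```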
Everything from the previous four lessons assembled into a complete two-oscillator subtractive synthesizer — with amplitude and filter envelopes, detuning, oscillator mix, and five presets from lead to bass. Then a fully interactive piano roll: place notes by clicking, drag to resize, click to delete, BPM control, animated playhead, and a lookahead scheduler that keeps everything in time. The synth settings carry directly into the piano roll.
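The lookahead scheduler mentioned there follows the standard Web Audio pattern: a frequent timer checks the audio clock and schedules any notes falling inside a short window ahead of it. A minimal sketch of one pass of that loop; the note shape and callback are illustrative, not the piano roll's actual code.

```javascript
// One pass of a lookahead scheduler: schedule every unscheduled note
// whose time falls within `lookahead` seconds of the audio clock.
// `notes` entries are { time, scheduled }; shape is illustrative.
function scheduleDue(notes, audioNow, lookahead, playNote) {
  for (const note of notes) {
    if (!note.scheduled && note.time < audioNow + lookahead) {
      playNote(note.time);    // hand the exact time to the audio clock
      note.scheduled = true;  // never schedule the same note twice
    }
  }
}
```

Calling this from a `setInterval` every 25 ms or so gives you a jitter-tolerant timer for the UI while the audio clock keeps the actual notes sample-accurate.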
The course is written assuming you know what a filter sounds like before you know how it works — which is to say, it's written for producers first, programmers second. Every concept is introduced with the musical context before the technical one. Every chapter has a working demo you can interact with before reading a single line of code. The code is shown to explain what you just heard, not the other way around.
Start the Course
Sound from Scratch →