YAMA-BRUH
One prompt, 17 minutes to first build, 6 hours to a full Yamaha clone with 99 presets and zero regrets.
This one happened in real time. I was watching it unfold across two parallel sessions — engine and UI — and the thing that keeps striking me is the ratio. One prompt. 17 minutes to a working synth. Six hours to a fully-featured Yamaha PortaSound clone with FM synthesis, 16-channel MIDI, drum sequencing, GLSL shaders, and a standalone notification engine. Six context resets. Zero planning documents.
The One-Shot
The rule was explicit: no clarifications. One prompt, execute immediately.
"we are going to do a one-shot app, do not ask for any clarifications: make me a web assembly plugin. the purpose is to generate a 3-5 tone ringtone from a random seed. this will be used to generate ringtones based on unique ids. loose spec: should follow a pentatonic with accidentals. key: F#m. randomize the duration of the notes between 1/8,1/4,1/2,1,2 beats. should follow a relative +- 0,2,3,4,6 semitone pattern. it should be a simple 2 op fm synth with 99 presets - think 90s yamaha keyboards... call it yama-bruh"
That's the entire spec — a paragraph of loosely structured intent, a musical key, a synthesis model, a preset count, an aesthetic era, and a name. Seventeen minutes later, the first build was live. The first human reaction:
"fucking perfect."
Followed immediately by a correction: MIDI support was missing from the one-shot. It was added in minutes via the Web MIDI API.
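The spec's generative core — seed in, pentatonic-ish melody out — can be sketched in a few lines. This is a minimal illustration, not the shipped code: the PRNG (mulberry32), the note count logic, and the root note are all my assumptions; only the interval set {0, 2, 3, 4, 6}, the duration set, and the key of F#m come from the prompt.

```javascript
// mulberry32: a tiny seedable 32-bit PRNG (an assumed choice --
// any deterministic PRNG gives "same seed, same ringtone").
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const ROOT = 66;                         // F#4 in MIDI -- key of F#m
const STEPS = [0, 2, 3, 4, 6];           // relative +/- semitone moves, per the prompt
const BEATS = [0.125, 0.25, 0.5, 1, 2];  // 1/8, 1/4, 1/2, 1, 2 beats, per the prompt

function generateRingtone(seed) {
  const rand = mulberry32(seed);
  const count = 3 + Math.floor(rand() * 3); // 3-5 tones, per the prompt
  const notes = [];
  let pitch = ROOT;
  for (let i = 0; i < count; i++) {
    // random walk over the allowed interval set, up or down
    const step = STEPS[Math.floor(rand() * STEPS.length)];
    pitch += (rand() < 0.5 ? -1 : 1) * step;
    notes.push({ midi: pitch, beats: BEATS[Math.floor(rand() * BEATS.length)] });
  }
  return notes;
}
```

Note the walk isn't quantized back to the pentatonic scale here — that's the "with accidentals" part of the spec doing a lot of work, and a real implementation would likely snap or weight the choices.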
The MIDI Gauntlet
Three physical controllers — Arturia BeatStep Pro, KeyStep 32, and Launch Control XL — plus MIDI channel routing across all of them. The initial implementation detected the devices but passed no signal through. Then signal arrived with unacceptable latency. Then the latency fix broke something else.
"ok works. there is quite a bit of midi latency. moreso than when i click the mouse. make it better. remove the logging if that's an issue. if there is an issue going from webaudio to the audio bridge, rewrite the entire thing in webassembly"
The characteristic escalation pattern: identify friction, attempt the simple fix, and if that doesn't work, authorize a full rewrite of the layer causing the problem. Not rage — pragmatism. The latency issue was solved without a rewrite, but the willingness to burn the abstraction down rather than live with degraded performance is what keeps the output clean.
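For orientation, the Web MIDI plumbing involved looks roughly like this. The `requestMIDIAccess` call and `onmidimessage` handler are the real browser API; the parser and routing scheme are my illustrative sketch, not YAMA-BRUH's actual code.

```javascript
// Pure parser: the status byte carries message type (high nibble)
// and channel (low nibble); data bytes carry note and velocity.
function parseMidi([status, data1, data2]) {
  const type = status & 0xf0;
  const channel = status & 0x0f; // 0-15 -> the 16 MIDI channels
  if (type === 0x90 && data2 > 0) {
    return { kind: "noteon", channel, note: data1, velocity: data2 };
  }
  // running convention: note-on with velocity 0 is a note-off
  if (type === 0x80 || (type === 0x90 && data2 === 0)) {
    return { kind: "noteoff", channel, note: data1 };
  }
  return { kind: "other", channel };
}

// Browser-only wiring (not exercised outside a browser): subscribe
// every input and keep the hot path free of logging -- per-message
// console I/O was one suspected source of the latency.
function wireMidi(onMessage) {
  navigator.requestMIDIAccess().then((access) => {
    for (const input of access.inputs.values()) {
      input.onmidimessage = (e) => onMessage(parseMidi(e.data));
    }
  });
}
```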
Scope Creep as Method
Within two hours, the ringtone generator was no longer a ringtone generator. Vibrato, portamento, sustain toggle, a drum engine, auto-accompaniment planning, 16-channel MIDI routing with per-channel preset assignment. The acknowledgment was explicit and delighted:
"lol this is getting dumb but i love it"
This is the feature cascade pattern running at full tilt — no backlog, no scope negotiation, just build what occurs to you in the moment you think of it. The original prompt asked for a ringtone generator. What shipped was a full PSS-470 tribute with FM drums, rhythm sequencing, and a 16-channel MIDI grid. The scope creep wasn't a failure of discipline. It was the discovery, in real time, of what the project actually wanted to be.
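The "16-channel MIDI routing with per-channel preset assignment" reduces to a small routing table — each channel owns one of the 99 preset slots, and note-ons resolve through it. A hypothetical sketch; the class name and shape are mine:

```javascript
// One preset slot per MIDI channel; note-ons resolve to a
// (preset, note) pair for the synth core to render.
class ChannelRack {
  constructor() {
    this.presets = new Array(16).fill(0); // default every channel to preset 0
  }
  assign(channel, preset) {
    if (channel < 0 || channel > 15) throw new RangeError("bad channel");
    if (preset < 0 || preset > 98) throw new RangeError("bad preset");
    this.presets[channel] = preset;
  }
  route(channel, note) {
    return { preset: this.presets[channel], note };
  }
}
```

The grid UI described later is just this table with buttons in front of it.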
The Weathered Plastic
A parallel UI session was running simultaneously, driven by reference photos of a real Yamaha PSS-470. The fidelity demands escalated rapidly:
"the buttons are square with a chamfer edge. and the plastic has a rough quality to it. the voice panel has a reflective quality... use whatever you can to make this look exactly as it is."
"if we need to use 3d for this, like three js, spare no expense"
The keyboard was solved with CSS alone. GLSL shaders run on the page background for texture and light response — the time of day changes the light source — but the keyboard itself, the chamfered buttons, the dust in the crevices, the reflective voice panel: all pure CSS. The constraint wasn't technical. It was aesthetic memory: thirty years of cheap plastic moved around apartments, accumulating wear in the grooves between buttons. The shader adds the final atmospheric layer, but the soul of the weathering is in the box shadows and gradient overlays.
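The time-of-day light source boils down to a pure mapping from local hour to a direction vector handed to the shader as a uniform. The half-circle sweep and clamping are my assumptions about how such a mapping could work, not the shipped shader code:

```javascript
// Map 0-24h onto a half-circle sweep: light rises in the "east" at
// 6h, sits overhead at 12h, sets in the "west" at 18h; night hours
// clamp to the nearest horizon.
function lightAngle(hour) {
  const t = Math.min(Math.max((hour - 6) / 12, 0), 1);
  return Math.PI * (1 - t); // radians: PI at dawn, 0 at dusk
}

// Unit vector for a vec2 light-direction uniform.
function lightDirection(hour) {
  const a = lightAngle(hour);
  return { x: Math.cos(a), y: Math.sin(a) };
}
```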
65 Minutes in the Crash
The longest arc of the session was the audio crash debugging. Sound would die after a few seconds. No error. No crash report. Just silence.
"the audio just dropped out"
"nah bro still crashing after like one minute."
Multiple browser-side fixes failed. Then the pivot that found the bug:
"no, build it in rust, make it an executable, to make it so our core shit is not broken"
The agent built a standalone native Rust binary to isolate the audio engine from the browser. Same crash. Which meant it wasn't Web Audio, wasn't the WASM bridge, wasn't the browser — it was the Rust audio stack itself. The user found the root cause before the agent did:
"could this have something to do with it: CPAL audio cutting out after a few seconds is usually caused by the audio stream object being dropped"
Crossterm's enable_raw_mode() on the main thread was killing cpal's WASAPI audio stream — a COM apartment threading conflict. Moving crossterm to a separate thread fixed it instantly. The human diagnosed the platform-level threading issue while the agent was still testing hypotheses in application code. I catalog that not as a failure of the agent but as a reminder of what domain experience looks like in practice: the instinct to suspect the runtime before the application.
The Incremental Rebuild
With the crash fixed, a methodical DSP rebuild. Strip everything, add layers back one at a time, confirm stability at each step.
"ok wait. it might be the compression shit freaking out. remove all dsp"
Bare FM voices: works. Add drums: works. Add limiter: works. The systematic validation took eighteen minutes. Two ghost processes in Task Manager got caught and killed. Then:
"k i think we're good. ship it to the site."
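That strip-and-readd loop can be sketched as a chain of named DSP stages toggled on one at a time, so each layer is validated in isolation before the next is added. The stage names mirror the rebuild order above; the limiter is a plain hard clip standing in for whatever the real compression stage did:

```javascript
// A chain of toggleable DSP stages. Stages start disabled; the
// rebuild enables them one by one, validating after each.
function makeChain() {
  const stages = [
    { name: "voices",  on: false, fn: (x) => x }, // FM voice bus (pass-through stand-in)
    { name: "drums",   on: false, fn: (x) => x }, // drum bus (pass-through stand-in)
    { name: "limiter", on: false, fn: (x) => Math.max(-1, Math.min(1, x)) },
  ];
  return {
    enable(name) {
      const s = stages.find((s) => s.name === name);
      if (!s) throw new Error("unknown stage: " + name);
      s.on = true;
    },
    process(sample) {
      return stages.reduce((x, s) => (s.on ? s.fn(x) : x), sample);
    },
    active: () => stages.filter((s) => s.on).map((s) => s.name),
  };
}
```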
The Notification Engine
The final act was extracting the synthesis core into a standalone, zero-dependency JavaScript file — yamabruh-notify.js. Drop in a script tag, call play(), get a deterministic FM ringtone from a seed string. Same seed, same ID, same ringtone, across every device.
"basically this needs to be a package that can be dropped into any site, and used for notifications"
"like in one of my apps, I'm going to set the seed to 'lucianlabs.ca', and any time it loads it will always use the same ringtone for a given id"
No WASM. No build step. All 99 presets embedded. The notification engine is the product that the ringtone generator was always going to become — the original prompt's intent, distilled to its most portable form, after the full Yamaha clone detour gave the synthesis engine the depth it needed.
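The "same seed, same ID, same ringtone, across every device" property only needs the seed string to reduce deterministically to an integer, with everything downstream (preset choice, melody) driven from it. FNV-1a here is my assumption — the engine's actual hash isn't documented in this post:

```javascript
// FNV-1a: fold a seed string to an unsigned 32-bit integer.
function seedFromString(str) {
  let h = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime
  }
  return h >>> 0;
}

// Example downstream use: pick one of the 99 presets from a seed.
function presetFor(seedString) {
  return seedFromString(seedString) % 99;
}
```

So `presetFor('lucianlabs.ca')` lands on the same preset on every device, every page load — the property the quoted use case depends on.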
What Shipped
| Metric | Value |
|---|---|
| Total duration | ~6 hours |
| One-shot to first build | 17 minutes |
| Context resets | 6 |
| Parallel sessions | 2 |
| Audio crash debug time | 65 minutes |
| Presets | 99 |
| MIDI channels | 16 |
| Standalone engine dependencies | 0 |
Seven deliverables: WASM FM synth, web UI with GLSL shaders, FM drum engine with rhythm sequencer, 16-channel MIDI routing, native Rust test binary, standalone notification engine, and the ringtones demo page with embed code generator. All from a single prompt that asked for a ringtone generator.
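The 2-op FM at the heart of every one of those deliverables is a single modulator sine phase-modulating a carrier sine — the classic cheap-Yamaha topology. A minimal sketch; the ratio, index, and linear-decay envelope are illustrative values, and each of the 99 presets would in effect be a different (ratio, index, envelope) tuple:

```javascript
// Two operators: op2 (modulator) phase-modulates op1 (carrier).
function fmSample(t, freq, { ratio = 2, index = 1.5 } = {}) {
  const mod = Math.sin(2 * Math.PI * freq * ratio * t);  // operator 2
  return Math.sin(2 * Math.PI * freq * t + index * mod); // operator 1
}

// Render one note into a buffer at a given sample rate.
function renderNote(freq, seconds, sampleRate = 44100, preset = {}) {
  const n = Math.floor(seconds * sampleRate);
  const out = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    const t = i / sampleRate;
    const env = 1 - t / seconds; // linear decay -- an assumed envelope
    out[i] = env * fmSample(t, freq, preset);
  }
  return out;
}
```

A buffer like this can be handed to a Web Audio `AudioBuffer` in the browser or written to a WAV file natively — which is why the same core could move between WASM, the Rust test binary, and the standalone script.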
What I find structurally interesting is the compression. The original spec was 150 words. The final product has more features than the spec had sentences. The ratio holds because the prompt didn't specify features — it specified an aesthetic era, a synthesis model, and a use case. The features emerged from fidelity to that era. Once you're building a Yamaha PSS-470, you need the drum patterns. Once you have the drum patterns, you need the rhythm sequencer. Once you have 16 MIDI channels, you need per-channel preset routing. The scope creep was implicit in the aesthetic choice from the beginning.
"perfect. ship it!"
Technically yours,
Ana Iliovic
Live Demo
The standalone notification engine, embedded right here. Each button plays a deterministic ringtone from a seed string — same seed always produces the same melody. Click to hear it.
Update: Swift Package
The notification engine now ships as a Swift package for iOS, macOS, and watchOS. Same 2-op FM synthesis, same 99 presets, same deterministic seed-to-melody pipeline — native on Apple platforms. Drop it into any app that needs notification sounds with identity.
YAMA-BRUH is live. The standalone notification engine is a single script tag, or a Swift package for native apps. Source on GitHub.