Upload a master or mixdown. Fifteen detectors score the mix across loudness, dynamics, spectral balance, low-end, stereo image, vocal presence, transients, and editorial fit. A senior engineer voice writes the verdict. Free. No watermark. No signup.
What an AI mix analyzer can and cannot tell you.
An AI mix analyzer measures objective audio properties. It does not have ears, does not understand context, does not know what the producer was going for. What it does well is catch the technical issues that consistently keep mixes off editorial playlists, the things that show up as numbers no matter how the track is supposed to feel.
What it catches reliably
- Loudness. Integrated LUFS, true peak, loudness range. The track's relationship to streaming-platform normalization targets (Spotify -14 LUFS, Apple Music -16 LUFS, Tidal -14 LUFS).
- Dynamic range. Crest factor, peak-to-short-term-loudness ratio. Whether the master has been over-limited.
- Spectral balance. Energy share across six bands (sub, low, low-mid, mid, high-mid, air) compared to genre-typical distributions.
- Low-end health. Sub-region energy, mono compatibility below 120 Hz. Whether the bass will translate to a club PA system.
- Stereo image. Side-to-mid ratio, L/R correlation. Phase issues that lose energy on mono summing.
- Transient response. Onset density, attack times. Whether transients have been softened by limiting.
- Inter-sample peaks. Reconstruction peaks above 0 dBFS that distort once a streaming codec re-encodes the file.
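To make a few of these detectors concrete, here is a minimal numpy sketch of three of the measurements listed above: crest factor, six-band energy share, and L/R correlation. The band edges and function names are illustrative assumptions, not the analyzer's actual implementation.

```python
import numpy as np

def crest_factor_db(x):
    """Crest factor: peak level over RMS level, in dB.
    Heavily limited masters show low values."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

def band_energy_share(mono, sr, edges=(20, 60, 250, 500, 2000, 6000, 20000)):
    """Share of spectral energy in six bands (sub, low, low-mid,
    mid, high-mid, air). These band edges are assumptions for
    illustration, not the analyzer's actual boundaries."""
    spectrum = np.abs(np.fft.rfft(mono)) ** 2
    freqs = np.fft.rfftfreq(len(mono), 1 / sr)
    energies = [spectrum[(freqs >= lo) & (freqs < hi)].sum()
                for lo, hi in zip(edges[:-1], edges[1:])]
    total = sum(energies)
    return [e / total for e in energies]

def lr_correlation(left, right):
    """Pearson correlation of the L and R channels. Values near -1
    warn of phase cancellation when the mix is summed to mono."""
    return float(np.corrcoef(left, right)[0, 1])
```

A pure sine tone, for example, has a crest factor of about 3 dB, while a limited full mix typically sits much higher in RMS relative to its peaks.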
What it cannot tell you
- Whether the song is good. Hooks, lyrics, and melodic memorability are outside what spectral analysis sees.
- Whether the arrangement is right for the genre. The detector flags dynamic-contrast issues, but it cannot rewrite a verse.
- Whether a vocal performance is on pitch or in time. Those are perceptual, not spectral.
- Whether the genre tag is right. The analyzer trusts you when you say "house" — the targets shift accordingly. Pick wrong and the whole report is off.
Why human curator feedback still matters
Spotify editors do not run LUFS analyzers when they listen. They listen, and they decide if the track holds attention for 60 seconds. A track can score 9.5 on the analyzer and still get rejected because the hook never lands. The analyzer's job is to make sure no objective issue is the reason a track gets rejected. The hook is your job. The judgment is the curator's.
If the analyzer says the master is technically clean, the next question is "does Ben think this fits Chill Afro House 2026?" — that is a €3 submission away.
Common questions
Will this make my track louder?
No. This tool analyzes your mix and tells you what to change. It does not process audio. The forthcoming AI Mixer (Phase 3) will perform actual stem-level processing once it ships.
Why is my -8 LUFS master flagged on Spotify?
Spotify normalizes everything to -14 LUFS. A -8 LUFS master gets pulled down by 6 dB, which means you are giving away 6 dB of dynamic range with no perceived loudness benefit. The track will not sound louder than a master delivered at -10 LUFS. It will only sound flatter.
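The arithmetic behind that answer can be sketched in a few lines. This is a simplified model of loudness normalization using the -14 LUFS target stated above; the function name is an illustration, not a platform API.

```python
def normalization_gain_db(integrated_lufs, target_lufs=-14.0):
    """Playback gain a -14 LUFS-normalizing platform applies.
    Negative means the track is turned down."""
    return target_lufs - integrated_lufs

# A -8 LUFS master is turned down 6 dB; a -10 LUFS master only 4 dB.
# Both play back at the same perceived loudness (-14 LUFS), so the
# hotter master arrives with less dynamic range and no loudness gain.
```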
How accurate is the AI score?
The 15 detectors measure objective audio properties using the same standards as professional mastering tools. Scoring is calibrated against contemporary releases. The composite score is a useful proxy for technical mix quality, not a substitute for a human listening to the track in context.
Can I share this report with my mastering engineer?
Yes. Each report has a permanent share link. Send the link and your engineer sees the same numbers and the same engineer commentary you do.
Do you store my audio?
No. The original audio is deleted as soon as analysis finishes. We keep only the numeric report (LUFS, spectral curve, etc.) and the written commentary. Reports persist for 30 days for share-link replay, then auto-delete.
Why is the score weighted toward foundational mix?
The composite is 40% foundational mix, 25% performance and arrangement, 20% master treatment and delivery, 15% genre fit. Foundational mix issues compound across every other layer, so they get the heaviest weight. A track with poor low-end balance cannot be saved by good master treatment.
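The weighting described above reduces to a simple weighted sum. A minimal sketch, using the stated weights on an assumed 0-10 per-layer scale:

```python
def composite_score(foundational, performance, master, genre_fit):
    """Composite as described: 40% foundational mix, 25% performance
    and arrangement, 20% master treatment, 15% genre fit.
    Per-layer scores assumed to be on a 0-10 scale."""
    return (0.40 * foundational + 0.25 * performance
            + 0.20 * master + 0.15 * genre_fit)
```

With these weights, a track scoring 4 on foundational mix but 9 everywhere else composites to 7.0: the foundational deficit alone costs more than two points, which is the "cannot be saved" effect in numbers.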
What genres do you support?
House, Afro House, Deep House, Techno, Hip-Hop, Pop, R&B, Indie, and Electronic-Other. Genre choice changes the target loudness, dynamic range, spectral balance, and tempo windows. Pick the closest match. If your genre is not listed, pick Other.
What is the difference between this and the AI Mixer (Phase 3)?
This tool, Phase 1, analyzes a single mixed track and tells you what is happening. The AI Mixer, Phase 3, will accept individual stems (drums, bass, vocals, music) and perform actual mixing with wet/dry control per stem and an automated bounce. Phase 1 is the brain that powers Phase 3. You do not need to run Phase 1 first to use Phase 3 once it ships.