Paste a Spotify or SoundCloud link, or upload an mp3, and we'll tell you if the track was made by a human or generated by Suno, Udio, or another AI model. Free. Three checks per IP per day. No signup.
Looking for AI fingerprints in vocal phonemes, micro-timing, and frequency artefacts.
Confidence: — / 100 AI
An AI music detector listens for the same things a trained ear listens for, only at scale. The signal is in the texture of the audio, not the lyrics or the mood. Three families of features do most of the work:
When a human sings the word solstice, the formant transitions between the /s/, the /ɒ/, the /l/, and the /s/ again carry tiny micro-timing variations rooted in physical anatomy: vocal fold tension, breath pressure, tongue position. AI vocal models reconstruct these transitions from training data and tend to produce slightly too-uniform phoneme durations and seam artefacts at consonant edges. Detectors look for unnatural smoothness.
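As a toy illustration of the duration check (not the detector's actual pipeline), assume phoneme durations have already been extracted by a forced aligner; the coefficient of variation then separates natural anatomical jitter from suspicious uniformity. The numbers below are invented for the example:

```python
from statistics import mean, stdev

def phoneme_uniformity(durations_ms):
    """Coefficient of variation of phoneme durations.

    Human singing carries anatomical jitter (higher CV);
    AI vocals tend toward too-uniform durations (lower CV).
    """
    return stdev(durations_ms) / mean(durations_ms)

# Hypothetical durations (ms) from a forced aligner.
human = [112, 87, 140, 95, 121, 78, 133]   # natural jitter
synth = [104, 101, 99, 103, 100, 102, 98]  # suspiciously even

assert phoneme_uniformity(human) > phoneme_uniformity(synth)
```

A real detector works on formant trajectories and consonant-edge spectra rather than bare durations, but the statistical idea, flagging distributions that are too tight, is the same.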
Human drummers drift. By a handful of milliseconds, but consistently. AI generators produce notes locked to a perfect grid, or with synthetic humanisation that adds randomness rather than the structured drift a real player produces. A detector measures the standard deviation of onset times against the inferred grid and flags rhythms that are either too perfect or too noisy in a way that sounds wrong to anyone who plays an instrument.
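A minimal sketch of that measurement, assuming onset times in milliseconds and a known tempo; the 16th-note grid and the example onsets are illustrative assumptions, not the detector's real configuration:

```python
from statistics import stdev

def grid_deviation_ms(onsets_ms, bpm):
    """Spread of note onsets around the nearest 16th-note grid line.

    Near-zero: quantised to a perfect grid.
    Large and unstructured: synthetic 'humanisation'.
    """
    step = 60_000 / bpm / 4  # 16th-note spacing in ms
    # Signed offset of each onset from its nearest grid line.
    offsets = [((t + step / 2) % step) - step / 2 for t in onsets_ms]
    return stdev(offsets)

# At 120 BPM the 16th-note grid step is 125 ms.
quantised = [0, 125, 250, 375, 500]  # locked to the grid
played    = [0, 128, 251, 374, 502]  # a few ms of human drift

print(grid_deviation_ms(quantised, 120))  # 0.0: suspiciously perfect
print(grid_deviation_ms(played, 120))     # small but non-zero
```

A production system would first detect onsets and infer the tempo from the audio itself; any thresholds for "too perfect" or "too noisy" would be learned from data rather than hard-coded.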
AI music generators leave statistical signatures in the spectrogram: tiny artefacts from the encoder, characteristic harmonic distributions in synthesised instruments, and reverb tails that decay with mathematical regularity rather than the chaotic mixing of a real room. None of these are audible to a casual listener, but they're consistent enough across thousands of generated tracks that a model trained on them spots them at >90% accuracy on clean cases.
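The reverb-tail regularity mentioned above can be sketched the same way: fit a line to the tail's energy in decibels and measure how perfectly linear (i.e. exponential) the decay is. This is a simplified stand-in for the detector's spectral features, with made-up energy values:

```python
from math import log10
from statistics import mean

def decay_linearity(energies):
    """R^2 of a linear fit to a reverb tail's dB decay curve.

    Algorithmic reverb decays almost perfectly exponentially,
    so its dB curve is near-perfectly linear (R^2 ~ 1.0).
    A real room's tail is rougher (R^2 noticeably below 1).
    """
    db = [10 * log10(e) for e in energies]
    xs = list(range(len(db)))
    mx, my = mean(xs), mean(db)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, db))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in db)
    return (sxy * sxy) / (sxx * syy)

algorithmic = [1.0, 0.5, 0.25, 0.125, 0.0625]  # halves each frame: pure exponential
room        = [1.0, 0.6, 0.2, 0.13, 0.05]      # rougher, real-room-like tail

print(decay_linearity(algorithmic))  # ~1.0: mathematically regular
print(decay_linearity(room))         # below 1.0
```

The encoder artefacts and harmonic-distribution cues are harder to show in a few lines; they come from a model trained on spectrogram features rather than a closed-form test like this.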
Listen at the boundary between two vowel sounds in a sung lyric. If you can hear a brief artefact like a tiny click or a smear of the wrong vowel, that's a vocal model fighting itself.
Drum hits landing at exactly the same place every bar. No swing, no human drift. AI rhythms either lock to grid or randomise without structure, neither of which a real drummer does.
Human songwriters break rhyme schemes for emphasis. AI models rhyme too often and too cleanly, especially on internal rhymes. Watch for rhymes that sound forced into the meter.
Real reverb tails are textured by the room they were captured in. AI-generated reverb decays smoothly and identically every time. If the tail of a snare sounds too clean, that's a tell.
AI tracks often sound like the average of a genre rather than a specific record from it. Afro house that sounds vaguely 2018-Black-Coffee-adjacent without ever committing to a clear reference is a yellow flag.
The singer's voice subtly changes timbre between verses. Real singers have a stable vocal signature across a song; AI vocals can drift between a few learned identities.
AI submissions to playlist services have grown roughly an order of magnitude between 2024 and 2026. Most playlist curators auto-decline detected AI tracks not because they hate the technology, but because their audience comes for human-made music and listener trust is the asset that takes years to build and seconds to lose. Labels use detection to filter incoming demos before A&R wastes time on a generated track. Distributors increasingly require disclosure for AI-assisted work.
For listeners, the question is more interesting: does it matter to you if the song you love was made by a person? There's no right answer. The tool exists so you can make that decision with the information in hand.
If your track is human-made and lands in afro house or deep house, send it to Ben. €3, listened to in full, written feedback within 72 hours. AI submissions are declined.
Pitch your track — €3 →