AI music detector

Is this song AI?

Paste a Spotify or SoundCloud link, or upload an mp3, and we'll tell you if the track was made by a human or generated by Suno, Udio, or another AI model. Free. Three checks per IP per day. No signup.

Up to 10 MB. mp3, wav, or m4a. The file is sent to the detection backend and deleted within seconds. Nothing is stored on our side beyond the result hash.


How AI music detection actually works.

An AI music detector listens for the same things a trained ear listens for, only at scale. The signal is in the texture of the audio, not the lyrics or the mood. Three families of features do most of the work:

1. Phoneme stability in vocals

When a human sings the word solstice, the formant transitions between the /s/, the /ɒ/, the /l/, and the /s/ again carry tiny micro-timing variations rooted in physical anatomy: vocal fold tension, breath pressure, tongue position. AI vocal models reconstruct these transitions from training data and tend to produce slightly too-uniform phoneme durations and seam artefacts at consonant edges. Detectors look for unnatural smoothness.
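As a rough illustration of that idea (not this tool's actual pipeline), suppose a forced aligner has already produced per-phoneme durations for a vocal take. The coefficient of variation is then a crude uniformity check:

```python
import numpy as np

def phoneme_uniformity(durations_ms):
    """Coefficient of variation of sung phoneme durations.

    Human vocals show meaningful spread; an unusually low value is the
    "too-uniform" timing described above. Assumes the durations come
    from a forced aligner upstream (hypothetical, not shown here).
    """
    d = np.asarray(durations_ms, dtype=float)
    return float(d.std() / d.mean())

# Uniform, AI-like durations score lower than varied, human-like ones
ai_like = phoneme_uniformity([120, 121, 119, 120, 122, 120])
human_like = phoneme_uniformity([95, 140, 110, 170, 88, 132])
```

A production detector would of course work on formant trajectories and consonant edges, not just durations, but the statistic it thresholds is the same kind of "too smooth to be anatomical" measure.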

2. Micro-timing in rhythm

Human drummers drift. By a handful of milliseconds, but consistently. AI generators produce notes locked to a perfect grid, or with synthetic humanisation that adds randomness rather than the structured drift a real player produces. A detector measures the standard deviation of onset times against the inferred grid and flags rhythms that are either too perfect or too noisy in a way that sounds wrong to anyone who plays an instrument.

3. Frequency-domain fingerprints

AI music generators leave statistical signatures in the spectrogram: tiny artefacts from the encoder, characteristic harmonic distributions in synthesised instruments, and reverb tails that decay with mathematical regularity rather than the chaotic mixing of a real room. None of these are audible to a casual listener, but they're consistent enough across thousands of generated tracks that a model trained on them spots them at >90% accuracy on clean cases.
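The reverb-tail point can be sketched in a few lines: in dB, a perfectly exponential decay is a straight line, so the residual of a linear fit measures how "textured" the tail is. This is a toy check on a raw amplitude envelope, not the spectrogram-level analysis a real detector runs:

```python
import numpy as np

def decay_regularity(tail, eps=1e-12):
    """How closely a decay envelope follows a perfect exponential.

    Converts the tail to dB, fits a straight line, and returns the
    residual std in dB. Near-zero means mathematically regular decay —
    one of the fingerprints described above. Illustrative only.
    """
    env_db = 20.0 * np.log10(np.abs(np.asarray(tail, dtype=float)) + eps)
    t = np.arange(len(env_db))
    slope, intercept = np.polyfit(t, env_db, 1)
    residual = env_db - (slope * t + intercept)
    return float(residual.std())  # small = suspiciously regular

# A pure exponential is a perfect line in dB; a real room roughens it
t = np.arange(500)
synthetic = np.exp(-0.01 * t)
rng = np.random.default_rng(0)
roomy = synthetic * (1.0 + 0.2 * rng.standard_normal(500))
```

The same fit-and-residual pattern generalises: a detector computes many such regularity statistics across bands and sections, then feeds them to a classifier rather than thresholding any single one.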

Telltale signs you can hear.

The vocal seam

Listen at the boundary between two vowel sounds in a sung lyric. If you can hear a brief artefact like a tiny click or a smear of the wrong vowel, that's a vocal model fighting itself.

Rhythm that's too perfect

Drum hits landing at exactly the same place every bar. No swing, no human drift. AI rhythms either lock to grid or randomise without structure, neither of which a real drummer does.

Lyrics that over-rhyme

Human songwriters break rhyme schemes for emphasis. AI models rhyme too often and too cleanly, especially on internal rhymes. Watch for rhymes that sound forced into the meter.

Reverb that decays wrong

Real reverb tails are textured by the room they were captured in. AI-generated reverb decays smoothly and identically every time. If the tail of a snare sounds too clean, that's a tell.

Genre averaging

AI tracks often sound like the average of a genre rather than a specific record from it. Afro house that sounds vaguely 2018-Black-Coffee-adjacent without ever committing to a clear reference is a yellow flag.

Vocal identity drift

The singer's voice subtly changes timbre between verses. Real singers have a stable vocal signature across a song; AI vocals can drift between a few learned identities.

Why it matters for curators and labels.

AI submissions to playlist services have grown by roughly an order of magnitude between 2024 and 2026. Most playlist curators auto-decline detected AI tracks not because they hate the technology, but because their audience comes for human-made music and listener trust is the asset that takes years to build and seconds to lose. Labels use detection to filter incoming demos before A&R wastes time on a generated track. Distributors increasingly require disclosure for AI-assisted work.

For listeners, the question is more interesting: does it matter to you if the song you love was made by a person? There's no right answer. The tool exists so you can make that call with the information in hand.

Common questions

How accurate is AI music detection?
On tracks clearly generated by Suno or Udio, current detectors score above 90 percent confidence. On hybrid tracks (human composition with AI vocal stems, or AI instrumentation under a human topline), accuracy drops to 60 to 75 percent. Always treat the result as a strong signal rather than a verdict.
Can the detector tell Suno from Udio?
It returns a single human-or-AI score, not a model attribution. Suno tends to leave more vocal seam artefacts, Udio tends to be cleaner instrumentally with stronger structural symmetry, but both are improving fast. Reliable model fingerprinting is not yet possible from a single 30-second clip.
Will Spotify remove AI music?
Spotify's stated policy as of 2026 is that fully AI-generated tracks are allowed if disclosed, but undisclosed AI tracks that infringe identity (deepfake voices) or game streaming may be removed. Editorial playlists generally avoid AI-only tracks. For curator-driven playlists like ours, AI submissions are auto-declined.
Is using AI in music cheating?
Depends who you ask. Most working curators draw the line at the creative core: AI-assisted mixing or stem separation is normal in 2026, fully AI-generated tracks submitted as human work are the issue. It is the misrepresentation that breaks trust, not the tool.
Do you store the audio I upload?
No. The audio is sent to the detection backend, the result is returned, and the file is deleted within seconds. Result records (confidence score, timestamp, anonymous track ID hash) are stored briefly so the share link works, then automatically purged after 30 days.
Why is the tool free?
It runs on a free tier of a detection API plus ad revenue from this page. Three checks per IP per day, hard cap on site-wide daily volume to keep costs sane. If you need unlimited checks for a label or distributor workflow, get in touch.
What if the result is wrong?
Possible, especially on the borderline range (40 to 60 percent). The detector is a tool, not an oracle. If you have evidence that a result is wrong, email us with the share link and a few seconds of context and we'll feed it back into model improvement.
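For anyone wiring the result into their own workflow, mapping the 0-100 score to a label is just a bucketing step; the 40/60 cutoffs below mirror the borderline range mentioned in the answer above and are illustrative, not an official API contract:

```python
def verdict(confidence):
    """Map a 0-100 AI-confidence score to a three-way label.

    The 40-60 borderline band is the range the FAQ flags as least
    reliable; exact cutoffs here are an assumption for illustration.
    """
    if not 0 <= confidence <= 100:
        raise ValueError("confidence must be in 0-100")
    if confidence < 40:
        return "Human"
    if confidence <= 60:
        return "Borderline"
    return "AI"
```

Anything landing in the borderline bucket is exactly the case where a second listen with the telltale signs above is worth more than the number.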

Made it yourself? Submit it.

If your track is human-made and lands in afro house or deep house, send it to Ben. €3, listened to in full, written feedback within 72 hours. AI submissions are declined.

Pitch your track — €3 →