Bit rates of compressed music files

David1961

I know there are a lot of audiophiles here. Not sure if my ears are just bad or what. I did a little experiment comparing lossless wav files of some songs with compressed mp3s of the same songs at a 128 kbps bitrate. I listened to each and tried to pick out which file was compressed, and basically could not tell the difference. First off, I want to say I have decent headphones, but certainly not top-of-the-line headphones. In only one song, I noticed that in the compressed file the background tambourine was a little less crisp, but I had to flip back and forth between the files to really notice.

For those who can tell a difference, what exactly do you listen for? Are there any songs where the differences are more apparent? What is your preferred bitrate for compressed music?
Also, is there any way to "subtract" the compressed file from the lossless file and hear the difference? Maybe that would help.
 
I have trouble picking out the difference myself in a straight A-B test (but.... see below), but I still keep everything in lossless format (FLAC). I'll make a compressed version from that 'master' if I need it for a portable player or some other casual use.

First, drive space is cheap; the difference between FLAC and 128 kbps mp3 just isn't much in $.

Second, if I ever need to convert again, I won't go through a second generation of loss.

While I said I can't easily tell in an A-B test, I do think that I 'tire' of the compressed music after listening for ten minutes or so. That would take a while to do in a blind test, and sometimes you just may tire of the music for other reasons. I think it would take a large number of samples to validate. But because of cheap drive space, why take the chance? I made the analogy in another thread that it is like a car seat that seems comfortable at first, but becomes a pain in the butt after 30 minutes. You just can't tell in a short time.

Further, we know there is a difference. The compression throws away the sounds that it determines we are least likely to hear. Just because they are the lesser sounds doesn't mean we don't notice their absence. I believe that this is what makes compressed files sound boring after 10 minutes or so - the music is just lacking a little something that keeps it interesting.

If you want to compare, install the sound program Audacity (open source, cross-platform). Take a direct CD rip in FLAC and make an mp3 from that file. Load both into Audacity in separate tracks, invert one track, then 'Mix and Render' to a new track. That track will be the difference components. Amplify it a bit, and you will hear a rather echo-ey, shimmery sound. This is the stuff missing from your music. I want it all!

Audacity: Free Audio Editor and Recorder
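
If you'd rather script that null test than click through Audacity, here's a rough Python sketch of the same idea. It assumes numpy and soundfile are installed, and that the mp3 has already been decoded back to a WAV (e.g. with ffmpeg) so both inputs are plain PCM at the same sample rate; the file names are just placeholders.

# Rough sketch of the "invert one track and mix" null test described above.
# Assumes: numpy + soundfile installed, and the mp3 already decoded back to
# WAV (e.g. "ffmpeg -i song.mp3 song_mp3_decoded.wav") so both inputs are
# PCM at the same sample rate and channel count. File names are placeholders.
import numpy as np
import soundfile as sf

original, rate1 = sf.read("song_lossless.wav")        # direct CD rip
lossy, rate2 = sf.read("song_mp3_decoded.wav")        # decoded 128 kbps mp3

assert rate1 == rate2, "sample rates must match for a meaningful null test"

# mp3 encoders usually add a little padding at the start, so the two files
# may not line up sample-for-sample; trimming to the shorter length is a
# crude alignment (a careful test would cross-correlate to find the offset).
n = min(len(original), len(lossy))
difference = original[:n] - lossy[:n]                  # subtract = invert + mix

peak = np.max(np.abs(difference))
if peak > 0:
    difference = difference * (0.5 / peak)             # amplify the residue so it's audible
sf.write("difference.wav", difference, rate1)

print(f"peak of raw difference: {20 * np.log10(max(peak, 1e-12)):.1f} dBFS")

Listening to difference.wav gives you the same echo-ey, shimmery residue described above; just keep the encoder-delay caveat in mind if the two files don't null perfectly.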

I was suspicious that this comparison was partially due to little phase offsets that might not really be that audible, but mathematically show up as differences. But when I zoom in bit-by-bit, every peak that I see is perfectly time-aligned with the original (until you get to really low mp3 bit rates).

Audacity is amazing, I've been doing all sorts of tests with it (my hearing is only good to ~13-14 kHz these days, which isn't too bad - DD could hear my test tone to just over 17 kHz, though - this was with a good DAC and Grado SR125 headphones). Last night, I finally sat down and just listened to a few hours of music (from Bach organ works to Janis Joplin to Norman Blake, to Stravinsky and Mussorgsky), and forgot about cables and bits for a while. It was great! ;)

-ERD50
 
You might want to look into this:

https://www.ponomusic.com/ccrz__CCPage?pageKey=aboutus

I'm one of the lucky ones - I can't hear the difference ;-)

Interesting, but IMO there is a huge difference between comparing 16-bit PCM to 24-bit PCM, and comparing 16-bit PCM to lossy compression like mp3.

There simply is no question that lossy mp3 throws away some of the sound. That is what it does, by design. Whether it is an audible difference can be debated, but it throws away audible parts of the sound (that may be masked by other sounds... or maybe not).

But I've done my own tests, and other sites have explained this - 16 bits really covers a very wide dynamic range. And due to dithering, sounds more than 96 dB below full scale are still reproduced (along with some noise). I remember the argument that vinyl could capture signals below the noise floor (you hear the tone, along with the surface noise), while digital was thought to be all or nothing - that below -96 dB the tone simply would not be reproduced at all, being below the threshold of the converter. But dithering brings this signal above the threshold as the signal rides the noise. Noise-shaped dithering pushes the noise up into a frequency range that we are less sensitive to, but still provides the benefit to the converter. It's close to magic!
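
If you want to see the 'signal riding the noise' effect with your own numbers, here's a small illustrative numpy sketch (my own toy example, not from any of the linked articles): a 1 kHz tone at -100 dBFS, well below one 16-bit step, disappears when simply rounded but survives when TPDF dither is added first.

# Toy demonstration of dither: a tone below one LSB vanishes with plain
# rounding, but is preserved (buried in broadband noise) with TPDF dither.
# Levels and names are arbitrary choices for illustration.
import numpy as np

rate = 44100
t = np.arange(rate) / rate                                 # one second of samples
tone = 10 ** (-100 / 20) * np.sin(2 * np.pi * 1000 * t)    # 1 kHz at -100 dBFS

step = 1 / 2 ** 15               # one LSB at 16 bits, with full scale = 1.0

# Plain rounding: the tone peaks at ~1/3 of an LSB, so it rounds to silence.
undithered = np.round(tone / step) * step

# TPDF dither: add triangular noise spanning +/- 1 LSB before rounding.
dither = (np.random.rand(rate) - np.random.rand(rate)) * step
dithered = np.round((tone + dither) / step) * step

# The spectrum tells the story: the dithered version keeps a clear 1 kHz
# component riding on the noise, the undithered version has nothing at all.
print("1 kHz bin, undithered:", abs(np.fft.rfft(undithered)[1000]))
print("1 kHz bin, dithered:  ", abs(np.fft.rfft(dithered)[1000]))

Noise shaping goes one step further by filtering that dither noise toward frequencies we hear least well, which is the trick the page linked below demonstrates.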

Bottom line, 16 bits captures more dynamic range than I can hear. So if 24-bit is 'better', it is better in a way that is already below my threshold. I've found I can barely detect a signal about 85 dB below full scale, with good headphones and a good DAC, using signals sweeping the most sensitive range of hearing (1,000-4,000 Hz). And a -105 dB signal is still reproduced by this system (I can amplify it by 48 dB, and it is there, among the noise). So going beyond 16 bits just doesn't seem advantageous (other than for recording and processing, which require more headroom).

I'm still testing myself on this, but I grow more confident with each test I do. 16 bits is good enough.

Here's a site with some excellent examples of what dithering can accomplish:

Dynamic Range, Dithering and Noise Shaping

They use 8-bit recordings, so the effects are more easily heard. You might be surprised just how good 8 bits sounds! Remember, 8 bits gives just 256 different levels to represent the sound, while 16 bits gives 65,536 levels. A major difference! Dithering helps tremendously, and shaped dithering makes the noise far less apparent.

As useful as those examples are, their example of 8-bit versus 16-bit 'music' is nearly worthless. They use 'songs' with almost no dynamic range, and tones that are all swishy-electronic sounding, so any distortion/noise is drowned out anyhow. 8 bits on a string quartet is going to be easy to identify, but you may still be surprised how 'not bad' it is.
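
For the level counts mentioned above (256 vs. 65,536), the rough dynamic range arithmetic is easy to sketch - each extra bit doubles the number of levels, which is worth about 6 dB:

# Rough dynamic range per bit depth: 20*log10(2) is about 6.02 dB per bit.
import math

for bits in (8, 16, 24):
    levels = 2 ** bits
    dyn_range_db = 20 * math.log10(levels)
    print(f"{bits:>2} bits: {levels:>10,} levels, ~{dyn_range_db:.0f} dB")

# Prints roughly 48 dB for 8 bits, 96 dB for 16 bits, and 144 dB for 24 bits -
# which is why well-dithered 8-bit can sound surprisingly decent, and why
# 16-bit already covers more range than most of us can actually hear.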

-ERD50
 
My hearing is so bad that I just need some kind of sound to remind me of the song and I sort of play it in my head with whatever I'm able to hear keeping time.
 
my experience

I've done many A/B and open side-by-side tests of the same music - my own - ripped lossless at 16/44 and compared to the same music at 24/96 (verified to actually be 24/96). On my very good equipment I cannot reliably tell the difference, say, 4 out of 5 times.

I'm convinced by my own experiences over years now that very well recorded 16/44 lossless music sounds superb if properly reproduced on fine equipment. I am not a headphone person, so I do all my home high-quality listening on a well-tuned high-end system. Further, I play the turntable-vs-digital game on occasion as well, and do so with friends. On my system, we also struggle to always identify which is which with a number of analog versus digital samples. Or maybe more correctly, we struggle to say which "sounds better" and why.

I absolutely cannot tell the difference between 24/96 and 24/192 encoding with my ears, and I don't believe the real hardliners who say they can tell the difference between lossless WAV/AIFF and ALAC (my preferred format, for storage reasons). You would really need golden ears for that, IMO.
 
Over time I've used various bitrate settings. I used to research it and listen to the experts. However, I now just let various programs do their best (or worst).

These days I use a FLAC to MP3 converter and fre:ac. They are both set to encode with automatic settings, so that could be just about anything.

I was using 128 kbps VBR years ago, and now shoot for 256 or 320 kbps if I have a choice. I rarely convert a CD anymore.

I am not playing these back on any kind of enhanced audio equipment. The MP3s get loaded to mobile playback devices, but I usually listen in my office through iTunes or VLC.

I probably could not tell the difference on my devices between 128 and 256 kbps. On a much better, more expensive setup I could probably distinguish more.
 
I can tell the difference between 128 kbps and 320 kbps MP3 on an iPod + decent pair of IEMs (in-ear monitors). Beyond that, I can't really tell the difference. To me, 320 kbps MP3 and lossless (e.g. FLAC) pretty much sound the same.

Still, I rip my audio CDs to FLAC just so I have the original quality in case I need to reconvert to some other format. While the quality difference of the initial lossless-to-lossy conversion may not be apparent, further lossy-to-lossy conversions can be pretty obvious.
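
If you want to hear that generational loss for yourself, a quick way is to round-trip the same audio through the encoder a few times. Here's an illustrative Python loop that assumes ffmpeg is installed and on your PATH; the file names and generation count are arbitrary.

# Illustrative: re-encode the same audio through 128 kbps mp3 several times
# to let generational loss accumulate. Assumes ffmpeg is on the PATH;
# file names are placeholders.
import subprocess

source = "song_lossless.wav"      # start from the FLAC/WAV master
current = source

for generation in range(1, 6):
    target = f"generation_{generation}.mp3"
    subprocess.run(
        ["ffmpeg", "-y", "-i", current, "-b:a", "128k", target],
        check=True,
    )
    current = target

# Compare the master with generation_1.mp3 and generation_5.mp3: the first
# generation may be hard to pick out, but the artifacts pile up with each
# re-encode - which is exactly why keeping a lossless master pays off.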
 