No.14425976
Is there a formal approach to estimating spectral efficiency from throughput and signal bandwidth? Could somebody with a communications or information theory background share their opinion?

I thought I could take the throughput and divide it by the 99% bandwidth of my signal, and that would be my spectral efficiency. However, the detector in my simulation always makes a decision on every bit, so the bit error rate never exceeds 0.5, the throughput never collapses to zero, and the "spectral efficiency" calculated this way exceeds capacity at low SNR.
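To make the quantities explicit (my notation, so treat the symbols as assumptions): with T the measured throughput in bit/s and B_99 the 99% power bandwidth in Hz, I'm computing

\eta = \frac{T}{B_{99}} \quad \text{[bit/s/Hz]},

and the bound I'm comparing against is the AWGN capacity per unit bandwidth,

\frac{C}{B} = \log_2(1 + \mathrm{SNR}),

where the theoretical BPSK bit error rate on AWGN is p_b = Q\!\left(\sqrt{2 E_b / N_0}\right).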

For example, I plotted the "spectral efficiency" of BPSK as I understand it, computed from the bit error rate and throughput, and compared it to the capacity of an AWGN channel; at low SNR my curve exceeds the capacity curve. Can I not relate bit error rate to spectral efficiency? What am I not understanding here?
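For concreteness, here is a minimal sketch of the comparison I mean. It assumes the throughput is taken as Rb*(1 - BER) and that B_99 is roughly Rb for BPSK; both are my simplifications and may be exactly where I'm going wrong:

import numpy as np
from scipy.special import erfc
import matplotlib.pyplot as plt

# Eb/N0 sweep in dB
ebn0_db = np.linspace(-10.0, 10.0, 200)
ebn0 = 10.0 ** (ebn0_db / 10.0)

# Theoretical BPSK bit error rate on AWGN: Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))
ber = 0.5 * erfc(np.sqrt(ebn0))

# Naive "spectral efficiency": throughput approximated as Rb*(1 - BER),
# normalized by an occupied bandwidth B99 ~ Rb (my assumption for BPSK),
# so the result is just (1 - BER) in bit/s/Hz.
eta_naive = 1.0 - ber

# Shannon limit for the same SNR, C/B = log2(1 + SNR); here I take
# SNR ~ Eb/N0, which is only exact when the spectral efficiency is 1.
capacity = np.log2(1.0 + ebn0)

plt.plot(ebn0_db, eta_naive, label='naive "spectral efficiency" of BPSK')
plt.plot(ebn0_db, capacity, label='AWGN capacity per unit bandwidth')
plt.xlabel('Eb/N0 (dB)')
plt.ylabel('bit/s/Hz')
plt.legend()
plt.grid(True)
plt.show()

In this sketch the naive curve flattens out near 0.5 bit/s/Hz as the SNR drops (since BER tops out at 0.5), while the capacity curve goes to zero, so the two inevitably cross at low SNR, which matches what I'm seeing.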