>>14138668
>gap
You have to understand what IQ actually is before you can understand what a gap in IQ is.
IQ is based on the theory that there is some general "intelligence factor" g associated with each person. Then, assume that children tend to have a g-factor somewhere near the average g-factor of their parents.
However, some children are smarter than their parents, and some dumber. Presume that this is effectively random, with g-factors closer to the average being more likely.
It turns out that if you follow a single lineage through a family tree where everyone's g-factor obeys these assumptions, then for most symmetrical, "natural" child g-factor distributions, the sequence of g-factors down that lineage is a one-dimensional random walk.
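A minimal sketch of that model in Python (the Gaussian step and its standard deviation are illustrative assumptions; the argument only needs a symmetric, zero-mean deviation from the parents, and I'm simplifying to one tracked parent per generation):

import random

def simulate_lineage(generations=50, start_g=0.0, step_sd=1.0):
    # Follow a single line of descent: each child's g-factor is the
    # tracked parent's g-factor plus a symmetric, zero-mean deviation.
    g = start_g
    lineage = [g]
    for _ in range(generations):
        g += random.gauss(0.0, step_sd)  # child = parent + random step
        lineage.append(g)
    return lineage  # reads exactly like a 1-D random walk

print(simulate_lineage(generations=10))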
This is what inspired IQ scales to use the normal distribution (bell curve): after many steps of a random walk, the distribution of possible positions converges on a normal distribution centered at the starting position (the central limit theorem). The absolute median g-factor may drift over time, but if you only care about an individual's g-factor relative to the general population, you can simply fit your IQ curve to the current population and the result is the same.
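Here is what both claims look like in a simulation, under the same assumptions as the sketch above; the mean-100 / SD-15 convention is how modern IQ scales are normed, while the number of generations and the step size are arbitrary:

import random
import statistics

def final_g(generations=30, step_sd=1.0):
    # Endpoint of one lineage-as-random-walk starting from g = 0.
    return sum(random.gauss(0.0, step_sd) for _ in range(generations))

# A "population": many independent lineages run for the same number of generations.
population = [final_g() for _ in range(100_000)]

# Fit the IQ scale to this population: its mean maps to 100, one SD maps to 15 points.
mu = statistics.mean(population)
sigma = statistics.stdev(population)

def iq(g):
    return 100 + 15 * (g - mu) / sigma

# The resulting IQs are approximately bell-curved around 100, no matter where
# the absolute median g-factor has drifted to.
iqs = [iq(g) for g in population]
print(round(statistics.mean(iqs)), round(statistics.stdev(iqs)))  # ~100 ~15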
Once you do this, in principle, a fixed difference in IQ should correspond to a fixed difference in g-factor. Of course, the assumptions are a little bit strict, and probably don't hold in the long term, but IQ is empirically a good predictor of various "success" metrics, and it's pretty stable over a person's life, so it's probably good enough for comparing g-factor-like "intelligence" between individuals within a single population at a single point in time.
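Concretely, under the fit above (sketch only; the 15-points-per-SD figure is the scale convention, not a property of g):

# With a linear fit, an IQ gap is just a g-factor gap measured in
# population standard deviations: IQ(a) - IQ(b) = 15 * (g_a - g_b) / sigma.
def g_gap_in_population_sds(iq_a, iq_b):
    return (iq_a - iq_b) / 15

print(g_gap_in_population_sds(145, 130))  # 1.0
print(g_gap_in_population_sds(100, 85))   # 1.0, the same g gap as 145 vs 130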
But comparing your IQ to, say, Richard Feynman's is pointless, because his test was taken decades ago, and an IQ of 125 normed against 1936 America and Europe is not comparable to an IQ of 125 normed against the 2022 global population.