In IQs, ratio IQ is the intelligence quotient or IQ (see: IQ key) of a person calculated as the ratio of the person's "mental age" [MA], i.e. one's intellectual age, determined by a standardized test, e.g. an IQ test, or other means, to their "chronological age" [CA], i.e. actual age:

 IQ = 100\frac{MA}{CA} \,
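
A minimal computational sketch of this formula (the function name and sample ages are illustrative assumptions, not part of any standardized test):

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    # Ratio IQ: 100 times mental age divided by chronological age.
    return 100.0 * mental_age / chronological_age

print(ratio_iq(mental_age=12, chronological_age=10))  # a ten-year-old testing at a twelve-year-old level: 120.0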

The ratio IQ is essentially synonymous with the term Terman IQ, although the latter would generally be understood as an IQ determined by American psychologist Lewis Terman or by one of his tests, such as the Stanford-Binet.

Issues
The ratio IQ formula was originally designed to test for normalcy, and in particular retardation, in school children; as such, the formula becomes inaccurate outside of this range and, in particular, when used with those of adult age (16+).

The issues associated with this formula become most noticeable when parents calculate extreme high-end IQs for their 4-9 year old children: all one needs to do is find a test geared toward eighteen-year-olds and get the child to pass it with scores on par with the average for that age, and thereby conclude, via the calculation, that the child has an IQ near or above the 200 range, which would incorrectly put the child above the Cox-Buzan IQ estimate of Isaac Newton's IQ of 193, the benchmark for a realistic IQ estimate in the genius ceiling range.
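
For instance, taking the upper end of this scenario (a nine-year-old scoring at the level of the average eighteen-year-old; the specific ages are chosen only to illustrate the arithmetic), the ratio formula yields:

 IQ = 100\frac{18}{9} = 200 \,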

All in all, IQs calculated via the age-ratio method have only a limited applicability, being loosely accurate in the 40-170 range, outside of which the calculated values become essentially meaningless; a case in point being the example of the child Adragon de Mello, who at "age 4" (chronological age) was determined, by his father, to be scoring on tests at the "mental age" of a sixteen-year-old; hence the result:

 IQ = 100\frac{16}{4} \,

and the concluding calculation:

 IQ = 400 \,

and the "reality check" result, when compared with the IQs of real historical geniuses, of nonsense of the 17 Feb 2010 (version 213) of the then active IQ:200+ table, i.e. the concluding point that a four year old boy is twice as smart or twice the genius that Newton was.

Deviation IQ
The "deviation IQ", an intelligence quotient determined by comparing a person's test score with other examinees of the same age, was in fact developed, from 1914 to 1949 by David Wechlser, owing to the inadequacies of the applicability of the ratio method when used with adults. [1] The gist of which being the premise that the norm or normal scores at the center of the bell curve will be made by people with average or normal IQs (100), that those scoring to the right of the curve will have higher IQs, as compared with the norm; something along the lines of the following:

[Image: bell curve of same-age test scores, with the norm (IQ 100) at the center and higher deviation IQs toward the right tail.]
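
A rough sketch of this comparison logic, assuming a Wechsler-style scale with mean 100 and standard deviation 15 (the function name and norming-group scores below are made up for illustration):

from statistics import mean, stdev

def deviation_iq(raw_score, same_age_scores, scale_mean=100, scale_sd=15):
    # Locate the raw score on the bell curve of same-age examinees,
    # then rescale that position to an IQ scale centered at 100.
    z = (raw_score - mean(same_age_scores)) / stdev(same_age_scores)
    return scale_mean + scale_sd * z

norm_group = [38, 42, 45, 47, 50, 52, 55, 58, 63]  # hypothetical same-age raw scores
print(round(deviation_iq(60, norm_group)))          # about 119: right of center, hence above 100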

In the years following, various people began what is called "norming" various tests, using essentially personally-invented means, so as to assign higher IQs, far into the genius range (140+), for various tests, e.g. the Wechsler IQ test, the Stanford-Binet, etc.; such as follows:

[Image: examples of norming scales mapping high scores on various tests to deviation IQs in the genius range (140+).]

Meaning, according to the so-called "deviation IQ" method, that if one scores perfectly, or in the top 1 to 2 percent of scores, then one has a deviation IQ, or simply IQ, of 130+ to 160+, as shown by the example above. The problem here is that geniuses tend not to take tests that have answers, but rather to take on tests that are unsolved or unsurmounted. In other words, scoring perfectly on the Stanford-Binet, Wechsler IQ test, Mensa IQ test, or Mega IQ test, etc., does not, by default, make one a genius.
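
As a rough sketch of how a percentile rank converts to a deviation IQ, assuming a normal model with mean 100 and standard deviation 15 (tests normed with a standard deviation of 16, or with looser home-made norming, yield higher figures, which accounts for part of the spread noted above):

from statistics import NormalDist

def iq_from_percentile(percentile, mean=100, sd=15):
    # Deviation IQ at a given percentile rank under a normal model.
    return NormalDist(mean, sd).inv_cdf(percentile / 100)

print(round(iq_from_percentile(98)))  # top 2 percent: roughly 131
print(round(iq_from_percentile(99)))  # top 1 percent: roughly 135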

References
1. Colangelo, Nicholas and Davis, Gary A. (1991). Handbook of Gifted Education (pg. 92). Allyn and Bacon.

External links
● DeLacey, Margaret. (2004). “Ratio and Deviation Test Scores”, Tagdpx.org.
