
Intelligence quotient

An intelligence quotient, or IQ, is a score derived from a set of standardized tests designed to measure a person's cognitive abilities ("intelligence") relative to their age group.

History

The modern field of IQ testing began with the Stanford-Binet test. Alfred Binet, who created the first such test in 1905, aimed to identify students who could benefit from extra help in school; his assumption was that lower IQ indicated the need for more teaching, not an inability to learn. This interpretation is still held by some modern experts. The term "intelligence quotient" comes from the original scoring method, in which each student's score was the quotient of his or her tested mental age and his or her chronological age, multiplied by 100. Modern IQ tests no longer calculate scores this way, but the term IQ remains in common use.

Online Tests

Although such tests have become wildly popular with the growth of the internet in recent years, online IQ tests are highly inaccurate. Comparing results across a large set of people reveals a common pattern: most scores are above 110. By their nature, such tests measure very few people in the 70 to 90 range, and the self-selected sample produces a strong upward distortion. Many of these websites also do not show the results immediately and instead attempt to sell certificates displaying them.

Distribution

IQ scores are expressed as a number normalized so that the average IQ in an age group is 100; in other words, an individual scoring 115 is above average when compared to similarly aged people. It is common, but not invariable, practice to standardize the scores so that their standard deviation (σ) is 15. Tests are designed so that the distribution of IQ scores is more or less Gaussian, that is, it follows a bell curve.

(The following numbers apply to IQ scales with a standard deviation σ = 15.) Roughly 68% of the population has an IQ between 85 and 115. The "normal" range, between -2 and +2 standard deviations from the mean, runs from 70 to 130 and contains about 95% of the population. A score below 70 may indicate mental retardation, and a score above 130 may indicate intellectual giftedness. Retardation may result from normal variation or from a genetic or developmental malady; analogously, some otherwise normal people are very short, while others have dwarfism. Giftedness appears to be normal variation; autistic savants often have astonishing cognitive powers but below-average IQs.

Some writers say that scores outside the range 55 to 145 must be cautiously interpreted because there have not been enough people tested in those ranges to make statistically sound statements. Moreover, at such extreme values, the normal distribution is a less accurate estimate of the IQ distribution.

Only about 1% of the population has an IQ of 135 or higher.
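These figures follow directly from the bell-curve assumption (mean 100, σ = 15). As a minimal sketch of the arithmetic, assuming SciPy's normal-distribution functions (the code is purely illustrative and not part of any actual scoring procedure), the percentages quoted above can be reproduced as follows:

    from scipy.stats import norm

    MEAN, SD = 100, 15  # the common IQ scale described above

    def fraction_between(lo, hi, mean=MEAN, sd=SD):
        """Fraction of a normal(mean, sd) population scoring between lo and hi."""
        return norm.cdf(hi, mean, sd) - norm.cdf(lo, mean, sd)

    print(fraction_between(85, 115))    # ~0.68: within one standard deviation
    print(fraction_between(70, 130))    # ~0.95: the "normal" range
    print(1 - norm.cdf(135, MEAN, SD))  # ~0.01: roughly the top 1%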

IQ and g

Modern IQ tests produce scores for different areas (e.g., language fluency, three-dimensional thinking, etc.), with the summary score calculated from subtest scores. Individual subtest scores tend to correlate with one another, even when seemingly disparate in content. Analyses of an individual's scores on a wide variety of tests (Stanford-Binet, WISC-R, Raven's Progressive Matrices and others) will reveal that they all measure a single common factor and various factors that are specific to each test. This kind of factor analysis has led to the theory that underlying these disparate cognitive tasks is a single factor, termed the g factor, that represents the common-sense concept of intelligence. In the normal population, g and IQ are roughly 90% correlated and are often used interchangeably.
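As a rough illustration of that factor-analytic reasoning, the sketch below extracts a first principal component from an invented subtest correlation matrix; principal-component extraction is used here as a simplified stand-in for the factor analysis described above, and the subtest names and correlation values are assumptions for illustration only.

    import numpy as np

    # Hypothetical correlation matrix for four subtests (verbal, spatial,
    # memory, arithmetic); the positive off-diagonal values are invented
    # to mimic the positive intercorrelations described above.
    R = np.array([
        [1.0, 0.5, 0.4, 0.5],
        [0.5, 1.0, 0.4, 0.4],
        [0.4, 0.4, 1.0, 0.3],
        [0.5, 0.4, 0.3, 1.0],
    ])

    eigenvalues, eigenvectors = np.linalg.eigh(R)   # eigenvalues in ascending order
    # The largest eigenvalue/eigenvector pair approximates a general factor.
    loadings = np.abs(eigenvectors[:, -1]) * np.sqrt(eigenvalues[-1])
    print(np.round(loadings, 2))                    # each subtest's loading on g
    print(round(eigenvalues[-1] / R.shape[0], 2))   # share of total variance explained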

Genetics vs environment

The role of genes and environment in determining IQ is reviewed in Plomin et al. (2001, 2003). The degree to which genetic variation contributes to observed variation in a trait is measured by a statistic called heritability. Heritability scores range from 0 to 1 and can be interpreted as the percentage of variation (e.g. in IQ) that is due to variation in genes. Twin studies and adoption studies are commonly used to estimate the heritability of a trait. Until recently heritability was mostly studied in children. These studies yield an estimate of heritability of 0.5; that is, half of the variation in IQ among the children studied was due to variation in their genes. The remaining half was thus due to environmental variation and measurement error. A heritability of 0.5 implies that IQ is "substantially" heritable.

Considerable research has focused on biological correlates of g; see g theory and the section on brain size below. For example, general intelligence and MRI brain volume measurements are correlated, and the effect is primarily determined by genetic factors.

Environment

For nearly all personality traits, and contrary to expectations, environmental effects actually cause siblings raised in the same family to be about as different from one another as children raised in different families (Harris, 1998; Plomin & Daniels, 1987). Put another way, shared environmental variation for personality is zero, and all environmental effects are nonshared. Intelligence is an exception to this rule, at least among children. The IQs of adoptive siblings, who share no genetic relation but do share a common family environment, correlate at about .32. Despite attempts to isolate them, the factors that cause adoptive siblings to be similar have not been identified. However, as explained below, shared family effects on IQ disappear after adolescence.

Active genotype-environment correlation, also called the "nature of nurture", is observed for IQ. This phenomenon is measured similarly to heritability, but instead of measuring the variation in IQ due to genes, the variation in environment due to genes is determined. One study found that 40% of the variation in measures of home environment is accounted for by genetic variation. This suggests that the way human beings craft their environment is due in part to genetic influences.

Environmental factors may play a larger role in determining IQ in situations where environmental conditions are more variable. Proper childhood nutrition appears critical for cognitive development; malnutrition can lower IQ. Other research indicates environmental factors such as prenatal exposure to toxins, duration of breastfeeding, and micronutrient deficiency can affect IQ.

Development

It would be reasonable to expect genetic influences on traits like IQ to become less important as people accumulate experience with age. Surprisingly, the opposite occurs. Heritability estimates are as low as 20% in infancy, around 40% in middle childhood, and as high as 80% in adulthood.

Shared family effects also seem to disappear by adulthood. Adoption studies show that, after adolescence, adopted siblings are no more similar in IQ than strangers (IQ correlation near zero), while full siblings show an IQ correlation of 0.6. Twin studies reinforce this pattern: monozygotic (identical) twins raised separately are highly similar in IQ (0.86), more so than dizygotic (fraternal) twins raised together (0.6) and much more than adopted siblings (~0.0).
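The correlations above can be turned into rough variance components using standard behavioral-genetics approximations; this is a back-of-the-envelope sketch, not the model fitting used in the original studies. The correlation between identical twins reared apart estimates heritability directly, and the adoptive-sibling correlation estimates the shared-environment share.

    # Adult correlations reported above (rounded)
    r_mz_apart = 0.86  # identical twins reared apart
    r_adoptive = 0.0   # adopted siblings after adolescence

    h2 = r_mz_apart          # heritability: genes' share of IQ variance
    c2 = max(r_adoptive, 0)  # shared (family) environment's share
    e2 = 1 - h2 - c2         # nonshared environment plus measurement error

    print(f"heritability ~ {h2:.2f}, shared environment ~ {c2:.2f}, "
          f"nonshared/error ~ {e2:.2f}")

The result (heritability around 0.86, shared environment near zero) is consistent with the adult estimates quoted in the preceding paragraphs.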

Most of the IQ studies described above were conducted in developed countries, such as the United States and Western Europe. However, a few studies have been conducted in Moscow, East Germany, Japan, and India, and those studies produce similar results. Any such investigation is limited to describing the genetic and environmental variation found within the populations studied. This is a caveat of any heritability study.

Mental retardation

Mild to severe mental retardation is a symptom of several hundred single-gene disorders and of many chromosomal abnormalities, including small deletions. Based on twin studies, moderate to severe mental retardation does not appear to be familial (to run in families), but mild mental retardation does. That is, the relatives of the moderately to severely mentally retarded have normal IQs, whereas the families of the mildly mentally retarded have low IQs.

Brain size and IQ

Modern MRI studies have shown that brain size correlates with IQ at roughly .35 to .40. In 1991, Willerman et al. used data from 40 White American university students and reported a correlation coefficient of .35. Other studies on samples of Caucasians show similar results, with Andreasen et al. (1993) determining a correlation of .38, while Raz et al. (1993) obtained a figure of .43 and Wickett et al. (1994) obtained a figure of .40. The correlation between brain size and IQ seems to hold for comparisons both between and within families (Gignac et al. 2003; Jensen 1994; Jensen & Johnson 1994). However, one study found no within-family correlation (Schoenemann et al. 2000). A study on twins (Thompson et al., 2001) showed that frontal gray matter volume was correlated with g and highly heritable. A related study reported that the correlation between brain size (reported to have a heritability of 0.85) and g is 0.4, and that the correlation is mediated entirely by genetic factors (Posthuma et al. 2002).

The Flynn effect

Worldwide, IQ scores appear to be slowly rising, a trend known as the Flynn effect, so that tests need repeated renormalization.

Gender and IQ

Most IQ tests are designed so that the average IQs of males and females are equal. However, men tend to score higher on the parts of the test that cover spatial and quantitative abilities, while women generally score higher on the verbal sections. Some research has shown that the variance in men's IQ scores is greater than the variance in women's, as is seen in other cognitive test scores. This may explain why more men than women are found in both the very high and the very low scoring groups.
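The step from "greater variance" to "more men at both extremes" follows from the shape of the normal distribution: with equal means, the group with the larger spread is overrepresented in both tails. A minimal sketch, where the two standard deviations are purely illustrative assumptions rather than measured values:

    from scipy.stats import norm

    MEAN = 100
    SD_WIDE, SD_NARROW = 16, 14  # hypothetical, illustrative spreads only

    for cutoff in (130, 70):
        if cutoff > MEAN:  # upper tail
            wide = norm.sf(cutoff, MEAN, SD_WIDE)
            narrow = norm.sf(cutoff, MEAN, SD_NARROW)
        else:              # lower tail
            wide = norm.cdf(cutoff, MEAN, SD_WIDE)
            narrow = norm.cdf(cutoff, MEAN, SD_NARROW)
        print(f"beyond {cutoff}: about {wide / narrow:.1f} times as many "
              f"from the higher-variance group")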

In 2005, Haier et al. reported that compared to men, women show more white matter and fewer gray matter areas related to intelligence. They also report that the brain areas correlated with IQ differ between the sexes. They conclude that men and women apparently achieve similar IQ results with different brain regions.

Race and IQ

IQ tests have been strongly criticized as biased, particularly against minorities. However, most researchers agree that the tests themselves are not biased in their construction. The source of the differences in average IQ scores is not known and remains an area of active research; most controversial is the theory that part or all of the differences can be explained by genetic factors. Complicating the research is the already mentioned rise in average IQ scores over time.

Practical importance

Research shows that intelligence plays an important role in many valued life outcomes. In addition to academic success, intelligence correlates with job performance (see below), socioeconomic advancement (e.g., level of education, occupation, and income), and "social pathology" (e.g., adult criminality, poverty, unemployment, dependence on welfare, children outside of marriage). Recent work has demonstrated links between intelligence and health, longevity, and functional literacy. Correlations between g and life outcomes are pervasive, though IQ and happiness do not correlate. IQ and g correlate highly with school performance and job performance, less so with occupational prestige, moderately with income, and only to a small degree with law-abidingness.

Some proponents of IQ have pointed to a number of studies showing a fairly close correlation between IQ and various life outcomes, particularly income. Research in Scotland has shown that people with an IQ 15 points below average had a fifth less chance of seeing their 76th birthday, while those with a 30-point disadvantage were 37% less likely than those with higher IQs to live that long. A controversial book, IQ and the Wealth of Nations, claims to show that the wealth of a nation correlates closely with its average IQ score. This claim has been both disputed and supported in peer-reviewed papers.

General intelligence (in the literature typically called "cognitive ability") is the best predictor of job performance by the standard measure, validity. Validity is the correlation between score (in this case cognitive ability, as measured, typically, by a paper-and-pencil test) and outcome (in this case job performance, as measured by a range of factors including supervisor ratings, promotions, training success, and tenure), and ranges between -1.0 (the score perfectly predicts the opposite of the outcome) and 1.0 (the score perfectly predicts the outcome). The validity of cognitive ability for job performance tends to increase with job complexity and varies across studies, ranging from 0.2 for unskilled jobs to 0.8 for the most complex jobs.
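Because validity is simply the correlation between test score and outcome, it can be computed directly. The sketch below uses invented example data; the scores and ratings are hypothetical and are not taken from any study cited here.

    import numpy as np

    # Hypothetical data: cognitive-ability test scores and supervisor
    # performance ratings for the same ten employees.
    test_scores = np.array([95, 110, 102, 125, 88, 131, 99, 117, 105, 121])
    ratings     = np.array([3.1, 3.8, 3.0, 4.5, 2.7, 4.2, 3.3, 3.9, 3.4, 4.4])

    # Validity is the Pearson correlation between score and outcome.
    validity = np.corrcoef(test_scores, ratings)[0, 1]
    print(round(validity, 2))  # 1.0 = perfect prediction, 0.0 = no relation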

A large meta-analysis (Hunter and Hunter, 1984) which pooled validity results across many studies encompassing thousands of workers (32,124 for cognitive ability), reports that the validity of cognitive ability for entry-level jobs is 0.54, larger than any other measure including job tryout (0.44), experience (0.18), interview (0.14), age (-0.01), education (0.10), and biographical inventory (0.37).

Because higher test validity allows more accurate prediction of job performance, companies have a strong incentive to use cognitive ability tests to select and promote employees. IQ thus has great practical importance in economic terms. The utility of using one measure over another is proportional to the difference in their validities, all else equal. This is one economic reason why companies use job interviews (validity 0.14) rather than randomly selecting employees (validity 0.0). Some researchers have echoed the popular claim that "in economic terms it appears that the IQ score measures something with decreasing marginal value. It is important to have enough of it, but having lots and lots does not buy you that much." (Detterman and Daniel, 1989). However, more recent studies suggest IQ continues to confer large benefits even at very high levels. In an analysis of hundreds of siblings, siblings in the 90th centile of IQ or above (brighter than 90% of the sample) earned more than $13,000 more in annual income than siblings in the 75th-89th centiles, who in turn earned $4,000 more than those in the 25th-74th centiles (Murray, 1998). Ability and job performance are linearly related, such that at all IQ levels an increase in IQ translates into a concomitant increase in performance (Coward and Sackett, 1990).

Legal barriers, most prominently the 1971 United States Supreme Court decision Griggs v. Duke Power Co., have prevented American employers from directly using cognitive ability tests to select employees, despite the tests' high validity. Using cognitive ability scores in selection adversely affects some minority groups, because different groups have different mean scores on tests of cognitive ability.

However, other studies question the real-world importance of whatever is measured by IQ tests. IQ correlates highly with school performance, but this correlation seems to decrease the closer one gets to real-world outcomes such as job performance, and it is lower still for income: IQ explains less than one sixth of the variance in income (consistent with a correlation of roughly 0.4, since the variance explained is the square of the correlation). Even for school grades, other factors explain most of the variance. Regarding economic inequality, one study found that if everyone could magically be given identical IQs, 90 to 95 percent of the inequality seen today would remain. Another study (2002) found that wealth, race and schooling are important to the inheritance of economic status, but that IQ is not a major contributor and the genetic transmission of IQ is even less important. Some argue that IQ scores are used as an excuse for not trying to reduce poverty or otherwise improve living standards for all. Claimed low intelligence has historically been used to justify the feudal system and the unequal treatment of women.

Social construct?

Some maintain that IQ is a social construct invented by the privileged classes, used to maintain their privilege. Others maintain that intelligence, measured by IQ or g, reflects a real ability, is a useful tool in performing life tasks and has a biological reality.

The social-construct and real-ability interpretations for IQ differences can also be distinguished because they make opposite predictions about what would happen if people were given equal opportunities. The social explanation predicts that equal treatment will eliminate differences, while the real-ability explanation predicts that equal treatment will accentuate differences. Evidence for both outcomes exists. Achievement gaps persist in socioeconomically advantaged, integrated, liberal, suburban school districts in the United States (see Noguera, 2001). Test-score gaps tend to be larger at higher socioeconomic levels (Gottfredson, 2003). Some studies have reported a narrowing of score gaps over time.

While public discourse on IQ testing is generally inflammatory, IQ tests are used ubiquitously in research and education. In general, there is a disparity between the public perception of IQ testing and the opinion of intelligence researchers.

The reduction of intelligence to a single score strikes many people as extreme and wrong. Opponents argue that it is much more useful to know a person's strengths and weaknesses than to know their overall IQ score, and they often cite the example of two people with the same overall IQ but very different ability profiles. However, most people have fairly balanced ability profiles. Differences in subscores are greatest among the most intelligent, which may contribute to this misconception.

IQ scores are not intended to gauge a person's worth, and in many situations, IQ may have little relevance.

The Mismeasure of Man

Many scientists disagree with the practice of psychometrics in general. In The Mismeasure of Man, Stephen Jay Gould disputes the basis of psychometrics, characterizing it as a form of scientific racism and objecting to:

...the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups—races, classes, or sexes—are innately inferior and deserve their status. (pp. 24-25).

Later editions of the book include a refutation of The Bell Curve.

The view of the American Psychological Association

In response to the controversy surrounding The Bell Curve, the American Psychological Association's Board of Scientific Affairs established a special task force to publish an investigative report on the research presented in the book.

The findings of the task force state that IQ scores do have high predictive validity for individual (but not necessarily population) differences in school achievement. They confirm the predictive validity of IQ for adult occupational status, even when variables such as education and family background have been statistically controlled. They agree that individual (again, not necessarily population) differences in intelligence are substantially influenced by genetics.

They state there is little evidence to show that childhood diet influences intelligence except in cases of severe malnutrition. They agree that there are no significant differences between the average IQ scores of males and females. The task force agrees that there are large differences between the average IQ scores of blacks and whites, and that these differences cannot be attributed to biases in test construction. While admitting there is no empirical evidence supporting it, the APA task force suggests that explanations based on social status and cultural differences may be possible. Regarding genetic causes, the task force noted that there is not much direct evidence on this point, but what little there is fails to support the genetic hypothesis.

The report was published in 1995 and thus does not reflect the subsequent decade of research.


The information above is not intended for and should not be used as a substitute for the diagnosis and/or treatment by a licensed, qualified, health-care professional. This article is licensed under the GNU Free Documentation License. It incorporates material originating from the Wikipedia article "Intelligence quotient".

Copyright © 2012 Anxiety Zone - Anxiety Disorders Forum. All Rights Reserved.