PASS THEORY OF PSYCHOLOGY, Study notes of Psychology

In recent years, Das and his colleagues (Das, 2002; Das, Kar, & Parrila, 1996; Das, Naglieri, & Kirby,
1994) have offered the planning, attention, simultaneous, and successive (PASS) theory as an alternative to the
conceptualization of intelligence as a general mental ability. Specifically, the PASS theory is based on the view
that intelligence is composed of multiple interdependent cognitive processes. From a more applied perspective,
the Das–Naglieri Cognitive Assessment System (CAS; Naglieri & Das, 1997) was developed to measure the
PASS processes. Together, the PASS theory and the CAS have enabled psychologists to make great progress in
the prediction of academic achievement as well as the diagnosis and treatment of learning disabilities (Das,
2002; Naglieri, 1999). Numerous validation studies of CAS scores have been conducted; however, criterion-
related validation has been limited to educational contexts where criterion measures primarily involve cognitive
learning outcomes (e.g., reading comprehension and mathematics reasoning). Although the PASS theory and
CAS were developed with an emphasis on academic achievement, it is our assertion that the PASS theory and
the CAS hold considerable promise for practical applications outside education, including industrial and
military settings. Accordingly, the purpose of the present study was to extend the validation literature by
examining the criterion-related validity of CAS scores in regard to the learning of a complex skill that has both
strong cognitive and psychomotor requirements. Specifically, we assessed the degree to which CAS scores
predicted knowledge and skill acquisition on a computer task that simulated the demands of a complex and
dynamic aviation environment.
Until recently, the practice of measuring cognitive ability through process-based models has been almost
nonexistent (Ackerman & Humphreys, 1990). However, it is now widely recognized that cognitive ability is a
multifaceted construct (Anastasi & Urbina, 1997), and attempts have been made to identify the multiple
processes underlying cognitive ability (e.g., Kaufman & Kaufman, 1983, 1993;
Naglieri & Das, 1997; Sternberg, 1988; Woodcock & Johnson, 1989). One noteworthy trend emerging from this
burgeoning literature is the distinction made between higher-order control processes and sensory-based
information-processing components. For example, Sternberg (1985, 1989) made distinctions between
metacomponents, performance components, and knowledge-acquisition components. In this theory,
metacomponents are higher-order control processes used in planning, monitoring, and evaluating task
performance. Performance components are mental processes used in encoding stimuli, inferring relations
between stimuli, and applying previously learned relations to new situations. Finally, knowledge-acquisition
components are processes involved in learning new information and storing it in memory.
Indeed, research on human learning and memory indicates there is a fundamental distinction between higher-
order control processes and sensory-based processes. For example, Brown (1978) offered a distinction between
metacognitive processes, which he defined as executive skills people use to control their own information
processing, and lower-order cognitive processes of nonexecutive, task-specific skills. In further support of such
a distinction, Carroll (1981) proposed a tentative list of ten basic cognitive processes, which include
metacognitive elements such as monitoring and attention and sensory-based elements such as apprehension and
encoding. However, with respect to the assessment of cognitive ability, no empirical research in the published
literature has addressed the distinction between higher-order and lower-order cognitive processes as it relates to
the acquisition of a complex skill. Therefore, the present study contributes to the extant literature by exploring
the predictive validity of a process-based test of cognitive ability, in which higher-order and lower-order
cognitive processes are distinguished, regarding the acquisition of a complex skill. Because a substantial body
of theoretical and empirical support exists in the educational literature for the CAS (Anastasi & Urbina, 1997;
Das, 2002), we considered it to be the most promising assessment tool for our investigation.
Overview. The CAS measures individual differences in cognition by examining the four distinct but interrelated
cognitive processes articulated in the PASS theory of intelligence: planning, attention, simultaneous processing,
and successive processing (Naglieri, 1999; Naglieri & Das, 1997). Components of the CAS battery reflect the
distinction between (a) higher-order control processes used in planning and monitoring task performance (i.e.,
planning and attention) and (b) information-processing components that involve the movement of information
through working memory (i.e., simultaneous and successive processing). For the present investigation we used
the Basic Battery, which consists of two subtests for each of the four PASS cognitive processes. Planning
subtests require individuals to engage in multiple self-regulatory processes such as creating, applying,
monitoring, and revising plans of action while solving novel tasks. The attention subtests require the detection
of particular stimuli and the inhibition of responses to distracting stimuli. Simultaneous processing subtests
require individuals to integrate separate stimuli into a conceptual group or whole. Successive processing
subtests require individuals to comprehend meaning as it is derived from the order of information.
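
To make the structure of the Basic Battery easier to picture, the sketch below shows one hypothetical way subtest scores could roll up into the four PASS scale scores and a Full Scale composite. The subtest names, the simple summation, and the unweighted composite are all illustrative assumptions; the published CAS subtests and norm tables are not reproduced here.

```python
# Hypothetical sketch of the Basic Battery structure described above: two
# subtests per PASS process, rolled up into four scale sums and a composite.
# Names and the scoring rule are placeholders, not the published CAS materials.

BASIC_BATTERY = {
    "Planning":     ["planning_subtest_1", "planning_subtest_2"],
    "Attention":    ["attention_subtest_1", "attention_subtest_2"],
    "Simultaneous": ["simultaneous_subtest_1", "simultaneous_subtest_2"],
    "Successive":   ["successive_subtest_1", "successive_subtest_2"],
}

def pass_scale_sums(subtest_scores: dict) -> dict:
    """Sum the two subtest scores contributing to each PASS scale."""
    return {
        scale: sum(subtest_scores[name] for name in subtests)
        for scale, subtests in BASIC_BATTERY.items()
    }

def full_scale_composite(scale_sums: dict) -> float:
    """Unweighted linear composite of the four scale sums (a stand-in for the normed Full Scale score)."""
    return float(sum(scale_sums.values()))

# Example with made-up subtest scores:
example_scores = {name: 10 for names in BASIC_BATTERY.values() for name in names}
sums = pass_scale_sums(example_scores)
print(sums)                        # {'Planning': 20, 'Attention': 20, 'Simultaneous': 20, 'Successive': 20}
print(full_scale_composite(sums))  # 80.0
```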

The PASS model is grounded in research (e.g., Luria, 1966; Posner, 1993) that illustrates the independence of multiple cognitive processes and their respective linkages to different regions in the human brain (Das, 2002). Although much of the applied research using the PASS model and the CAS has taken place in a clinical context, including the diagnosis and design of interventions for children and adolescents with dyslexia, attention deficit disorder, and mental retardation, the PASS model and the CAS were developed to explain both normal and atypical cognitive functioning (Das, 2002). Thus, the CAS can be viewed as an alternative to more traditional tests of cognitive ability and fittingly can be used to assess learning strengths and weaknesses by which decisions can be made regarding the appropriateness of instructional programs (Das, 2002).

Previous validation of CAS scores. During the last decade, the PASS theory and the CAS enabled psychologists to make great progress in the diagnosis of learning disabilities in children and adolescents and the design of reliable interventions for individuals with learning disabilities (Das, 2002; Naglieri, 1999; Naglieri & Das, 1997). The CAS has proven useful both as a diagnostic tool and as a method for designing interventions because special populations show different group profiles across CAS subtests. Validation studies also have demonstrated an appropriate progression of scores across age categories, and exploratory and confirmatory factor analyses (Naglieri & Das, 1987, 1997) support the four-factor PASS model (cf. Keith & Kranzler, 1999; Keith, Kranzler, & Flanagan, 2001). Additional studies have supported the criterion-related validity of the CAS. For example, results from a study of 1600 children showed that CAS scores were correlated with scores on the Woodcock–Johnson-Revised (WJ-R III) Test of Achievement (Naglieri, 1999). Academic skills measured by the WJ-R III include basic writing skills, reading comprehension, basic mathematics skills, and mathematics reasoning. Overall, correlations between WJ-R III subtests and the CAS scales ranged from 0.35 to 0.64 (Naglieri & Das, 1997).

Because it is grounded in both the process-based and multiple-abilities perspectives, the CAS holds considerable promise in the prediction and explanation of complex skill acquisition and training performance. Presently, criterion-related validation of CAS scores has been limited to educational contexts where criteria primarily involve measures of cognitive learning outcomes. Our goal was to further extend this body of research by examining the relationships between CAS scores and a variety of both cognitive and skill-based criteria with respect to a complex task that has both strong cognitive and psychomotor requirements. Considering that previous research has demonstrated that different training criteria are not substantially intercorrelated and consequently should not be considered proxies for each other (Alliger & Janak, 1989; Alliger, Tannenbaum, Bennett, Traver, & Shotland, 1997), we believe that using a variety of criteria is an important part of criterion-related validation. Moreover, although Das (2002) indicated that relevant behavioral outputs of the PASS model include physical movements as well as oral and written language, much of the validation of CAS scores has been restricted to language outputs.
By including criterion measures for a task that has strong psychomotor requirements, the present study offers a critical extension of previous validation studies. Accordingly, we examined the degree to which CAS scores were predictive of scores on measures of declarative knowledge, knowledge organization, skill acquisition, skill retention, skill reacquisition, and skill transfer. In general, we expected CAS Full scores (a linear composite of the four CAS scales) and scores on the individual CAS scales to correlate with all the learning criteria in this study. However, we were also interested in examining the extent to which CAS scores might be differentially related to various learning criteria. Therefore, the following research questions were examined:

  1. To what extent are CAS Full scores correlated with cognitive and skill-based learning criteria?
  2. To what extent are scores on each of the individual CAS scales correlated with cognitive and skill-based learning criteria?
  3. To what extent are scores on the individual CAS scales differentially related to learning criteria? That is, are scores for different learning criteria associated with scores on different CAS scales?

The performance task used in the present study was the computer task Space Fortress (SF; Donchin, 1989; Mane & Donchin, 1989). Space Fortress represents important information-processing demands that are present in aviation and other complex tasks (Gopher, Weil, & Bareket, 1994; Hart & Battiste, 1992). These processing demands include short- and long-term memory loading, high workload, dynamic attention allocation, decision-making, prioritization, resource management, discrete motor responses, and difficult manual control elements (Gopher, Weil, & Siegel, 1989).
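
To make the correlational analysis implied by the research questions above concrete, the sketch below computes zero-order correlations between hypothetical CAS scores and a set of Space Fortress learning criteria. The sample size, the simulated data, and all variable names are illustrative assumptions, not the study's actual measures or results.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 100  # hypothetical sample size

# Simulated predictor scores (CAS Full score plus the four PASS scales, assumed
# to be on a standard-score metric) and simulated Space Fortress learning criteria.
predictors = pd.DataFrame({
    "CAS_Full": rng.normal(100, 15, n),
    "Planning": rng.normal(100, 15, n),
    "Attention": rng.normal(100, 15, n),
    "Simultaneous": rng.normal(100, 15, n),
    "Successive": rng.normal(100, 15, n),
})
criteria = pd.DataFrame({
    "declarative_knowledge": rng.normal(0, 1, n),
    "knowledge_organization": rng.normal(0, 1, n),
    "skill_acquisition": rng.normal(0, 1, n),
    "skill_retention": rng.normal(0, 1, n),
    "skill_transfer": rng.normal(0, 1, n),
})

# Research questions 1 and 2: zero-order (Pearson) correlations of the Full
# score and each PASS scale with every learning criterion.
validity_matrix = pd.concat([predictors, criteria], axis=1).corr().loc[
    predictors.columns, criteria.columns
]
print(validity_matrix.round(2))

# Research question 3 (differential relations): for each criterion, which PASS
# scale shows the strongest absolute correlation?
print(validity_matrix.drop(index="CAS_Full").abs().idxmax())
```

In practice, an analysis along these lines would be supplemented with significance tests and, for the question of differential relations, formal comparisons of the dependent correlations rather than a simple inspection of which coefficient is largest.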

Critical Analysis. Current validation studies support both the construct and concurrent validity of the DAS. Factor-analytic data show the abilities measured by the DAS to be consistent with the instrument's theoretical structure. The literature on the concurrent validity of the DAS (Elliott, 1990; Dumont et al., 1996) finds that the instrument provides a good measure of psychometric "g" and that the DAS cluster scores have adequate convergent and divergent validity. In contrast, literature focusing on the CAS (Kranzler & Weng, 1995; Kranzler & Keith, 1999; Keith & Kranzler, 1999) suggests that the PASS model does not measure the abilities it purports to measure but rather fits best into a theoretical structure similar to that of the DAS. Although studies vary, the literature points to data suggesting that the CAS is a better measure of general intelligence than of distinct cognitive abilities. However, there is considerable controversy regarding the structure of the CAS. Its authors dispute claims that the CAS follows a hierarchical structure and maintain that the CAS is a measure of the correlated PASS model.

This study will investigate the concurrent validity between the broad scores, cluster scores, and subtest scores of the DAS and CAS. At this time, there are no published studies that have attempted to address the relationship between these instruments. Because both are recently developed assessments, it is crucial to establish their credibility through validation studies. Studies of concurrent validity add not only to the overall psychometric qualifications of an instrument but also to support for the constructs on which the assessment is based. This is especially important for the CAS, which has not undergone extensive scrutiny regarding its concurrent validity. In the case of the DAS, several studies have shown support for its factors, making it an adequate model against which to compare the CAS.

Another reason for conducting this study is to update the information presented on the DAS's validity. Since many of the earlier studies were performed with instruments that have relatively outdated norms, it is important to offer new information using scales currently in use. There is also the issue that many of the previous studies with the DAS were done with non-theory-based instruments. As theories of intellectual and cognitive development gain scientific favor, comparing theory-based assessment tools will offer information as to which abilities are being assessed and whether those abilities represent those of the theory on which the instruments were based. In addition, few studies have investigated these instruments with a non-special-education population of children. Several of the cited studies looked at special populations, such as cognitively or learning disabled children, which decreases the applicability of the data to average populations. In order to adequately validate an assessment of intelligence, there must be sufficient and reliable normative data on various populations. Using a group of non-special-education students provides the necessary basis from which discrepancy determinations are derived. It is the aim of this investigation to provide unbiased statistical analysis of the psychometric qualities of these instruments that can assist professionals in their decision to use a particular instrument.
As evidence of validity is a key element in determining the usefulness of any standardized instrument, it is also the intention of this study to provide necessary information as to the interpretative qualities of these instruments for making differential diagnoses regarding cognitive and academic functioning. Finally, investigating the concurrent validity of the instruments will provide information to either support or contradict previous findings regarding the fit of the theoretical models on which the instruments are based. Because the CAS and DAS are based on differing theoretical constructs, certain assumptions could be made regarding the relationship between broad factors. However, based on information from previous research regarding the construct and concurrent validity of each instrument, the following correlations are expected:

  • Based on the literature regarding the construct validity of these instruments, it is expected that moderate to high correlations will be found between the GCA of the DAS and the Full Scale score of the CAS. In essence, both instruments are expected to be adequate measures of general intelligence.
  • It is also expected that factors claiming to assess similar abilities on the CAS and DAS will correlate highly. Specifically, it is expected that the Verbal Ability cluster will correlate most strongly with the CAS Simultaneous factor. Because of its proposed ability to assess unique cognitive constructs, the CAS Planning factor would likely show the highest correlation with DAS tasks of fluid