
Methods

First, study staff drafted approximately 1200 self-report items representing the individual granular symptoms in the diagnostic criteria for the 8 primary SCID-CV modules. An expert panel iteratively reviewed, critiqued, and revised the items. The resulting items were then iteratively administered and revised through 3 rounds of cognitive interviewing with community mental health center participants. In the first 2 rounds, the SCID was also administered to participants to directly compare their Likert self-report and SCID responses. A second expert panel evaluated the final pool of items from cognitive interviewing, together with the criteria in the DSM-5, to construct the SAGE-SR, a computerized adaptive instrument that uses branching logic from a screener section to administer the follow-up questions needed to refine the differential diagnosis.

The SAGE-SR was administered to healthy controls and outpatient mental health clinic clients to assess test duration and test-retest reliability. Cutoff scores for screening into follow-up diagnostic sections and criteria for inclusion of diagnoses in the differential diagnosis were evaluated.

Results

The expert panel reduced the initial 1200 test items to 664 items that panel members agreed collectively represented the SCID items from the 8 targeted modules and the DSM criteria for the covered diagnoses. These 664 items were iteratively submitted to 3 rounds of cognitive interviewing with 50 community mental health center participants; the expert panel reviewed session summaries and agreed on a final set of 661 clear and concise self-report items representing the desired criteria in the DSM-5. The SAGE-SR constructed from this item pool took an average of 14 min to complete in a nonclinical sample versus 24 min in a clinical sample. Responses to individual items can be combined to generate DSM criteria endorsements and differential diagnoses, as well as to provide indices of individual symptom severity. Preliminary measures of test-retest reliability in a small, nonclinical sample were promising, with good to excellent reliability for screener items in 11 of 13 diagnostic screening modules (intraclass correlation coefficient [ICC] or kappa coefficients ranging from .60 to .90); mania achieved fair test-retest reliability (ICC=.50), and other substance use was endorsed too infrequently for analysis.

Introduction

The Structured Clinical Interview for DSM-5 (SCID-5) is currently accepted as the gold standard in psychiatric diagnosis and is regularly used in research settings where the accurate diagnosis of primary and comorbid disorders is required for the appropriate determination of study eligibility and assignment to a research condition. The SCID is also frequently used as the standard against which other diagnostic instruments are validated.

The structured format of the SCID, with its direct adherence to Diagnostic and Statistical Manual of Mental Disorders (DSM) criteria, accounts for its strong test-retest and inter-rater reliability for most diagnoses. Overall, the full SCID-5-Research Version (RV) covers 63 diagnoses, takes an average of 90 min to administer, and requires considerable clinician training. The Clinician Version of the SCID for DSM-5 (SCID-5-CV), released in 2014, consists of 10 modules that cover 39 of the most common diagnoses seen in clinical practice and allows screening for an additional 16 diagnoses.

Stage I: Self-Report Item Pool Development

As a first step, we authored a set of approximately 1200 unique self-report items that mirrored the questions in the SCID for DSM-IV and corresponded with criteria outlined in the DSM-IV-TR. In anticipation of the release of DSM-5, we also developed items intended to represent the few anticipated changes to diagnostic criteria between DSM-IV and DSM-5 (prospective changes were made available online before the DSM-5's publication date).

TeleSage staff developed these items using a rigorous methodology first developed and successfully implemented in our previous instrument development work. Self-report items were drafted for the 13 diagnostic categories judged by the developers of the SCID-CV to be the most commonly encountered in clinical practice: (1) depressive disorders, (2) manic and hypomanic disorders, (3) generalized anxiety disorder (GAD), (4) panic disorder, (5) agoraphobia, (6) social anxiety disorder, (7) obsessive-compulsive disorder (OCD), (8) posttraumatic stress disorder (PTSD), (9) adult attention-deficit/hyperactivity disorder (ADHD), (10) psychotic disorders, (11) alcohol use disorder, (12) cannabis use disorder, and (13) other substance use disorders.

Stage II: Cognitive Interviewing

The self-report item pool was divided into 2 halves, with 6 to 7 diagnostic categories (approximately 4 SCID modules) in each half. After engaging in an institutional review board (IRB)-approved informed consent process, participants were given the half of the item pool that corresponded with their individual chart diagnosis. Both halves were then tested and revised over 3 rounds of cognitive interviewing (CI). After each round of CI, session summaries were analyzed by TeleSage staff, and all items that posed difficulty for 20% or more of the participants were either omitted or rewritten for the next round of CI.

CI is a scientific technique that uses verbal probes and think-alouds to determine the perceived meaning of survey questions. For this study, the cognitive interviewer presented each participant with a block of self-report items that corresponded to a single diagnostic category at a time.

Item sets pertaining to each diagnostic category were presented in a balanced, randomized order to control for order effects and to ensure that the majority of the questions were completed. After reading an item aloud, participants marked their responses. In addition, participants were instructed to circle any item they perceived as unclear or confusing as they completed the self-report assessment. Participants were also encouraged to think aloud while they answered each item.
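As a sketch, the between-round revision rule described above (omit or rewrite any item that posed difficulty for 20% or more of participants) can be expressed as follows; the function and example data are illustrative assumptions, not the study's actual tooling:

```python
# Sketch of the between-round CI triage rule: items flagged as difficult,
# unclear, or confusing by 20% or more of participants were omitted or
# rewritten before the next round. Names and counts are hypothetical.

def triage_items(difficulty_counts: dict[str, int], n_participants: int,
                 cutoff: float = 0.20) -> dict[str, str]:
    """Label each item "keep" or "revise or omit" based on the share of
    participants who found it difficult, unclear, or confusing."""
    return {
        item: ("revise or omit" if count / n_participants >= cutoff else "keep")
        for item, count in difficulty_counts.items()
    }

# Eg, with 9 interviews per item in round 1, an item flagged by 2 of 9
# participants (22%) would be revised or omitted before the next round.
print(triage_items({"I felt anxious.": 2, "I felt sad.": 0}, n_participants=9))
```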



After participants completed the self-report items, the cognitive interviewer asked follow-up questions to further assess why the participant found each circled item unclear or confusing, while also confirming that the participant understood the meaning and intent of items that were not circled. For example, the cognitive interviewer would point out a specific word in the question and ask for its meaning (eg, "Can you tell me what irritable means to you?") or ask for examples of behaviors (eg, "You indicated that you 'often' feel sad. Can you give me some examples of how you have felt sad in the past two weeks?"). This process continued until the interviewer had probed all items. Interviews were recorded on a digital recorder, and the cognitive interviewer also took objective, not interpretive, notes during the session pertaining to the participant's responses.

After the interview, the cognitive interviewer listened to the audio file as needed and converted the notes from the session into a summary indicating items that were particularly difficult for the participant to answer or that caused confusion, as well as items for which the participant's interpretation did not reflect the item's intent. By having participants describe all their thoughts out loud as they work their way through questions, it is possible to identify many of the potential problems that could affect a patient's response in unintended ways.

Using CI to hone questions should improve the likelihood that individual items will ultimately show good psychometric characteristics during quantitative validation. Each of the 3 rounds of CI was conducted with unique participants who engaged in an individual interview; no participant was interviewed twice. Participants in the first 2 rounds of CI were also given a clinician-administered SCID. This SCID contained the same modules (diagnostic categories) that the participants completed in the self-report item pool and included the participant's specific chart diagnosis. To account for any learning effect, participants were randomized so that half took the SCID first and half completed the self-report items and CI first.

Stage III: Screening Assessment for Guiding Evaluation-Self-Report Instrument Construction and Initial Validation

An expert panel was convened for this next stage to convert the self-report item pool into the computerized adaptive Screening Assessment for Guiding Evaluation-Self-Report (SAGE-SR). The panel included 2 psychiatrists, 2 clinical psychologists, 1 physician, TeleSage staff members with backgrounds in psychology and expertise in mental health item development and SCID administration, and TeleSage computer programmers with expertise in computer-adaptive instrument development. To produce an easily understood instrument that could be administered in a time-efficient manner, the SAGE-SR was constructed with an initial 65-question screener that covered the same 13 diagnostic categories for which items were drafted in stage I.

Respondents would need to endorse screener items at a sufficient threshold (set by the expert panel) within each diagnosis to "screen in" and branch to the remaining self-report items necessary to determine whether that diagnosis should be included in the final differential diagnosis. Possible diagnoses that could be returned in this differential diagnosis are presented in, along with the corresponding representation of diagnoses in the SCID-5-CV. Both current and past episodes are covered.
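As an illustration of this screen-in branching, here is a minimal sketch in Python; the item names, Likert coding, and thresholds are hypothetical stand-ins, not the instrument's actual configuration (the real thresholds were set by the expert panel):

```python
# Minimal sketch of the SAGE-SR screen-in branching logic.
# Item names, Likert coding, and thresholds are illustrative assumptions.

LIKERT = {"never": 1, "rarely": 2, "sometimes": 3, "often": 4, "always": 5}

# Each diagnostic category maps to its screener items and a screen-in rule.
# Reverse-coded items (eg, "I enjoyed life") would be recoded before scoring.
SCREENER = {
    "depressive disorders": {
        "items": ["felt_sad", "felt_depressed", "felt_hopeless"],
        "threshold": LIKERT["sometimes"],  # any item at or above this level
    },
    # ... 12 further diagnostic categories
}

def categories_to_follow_up(responses):
    """Return the diagnostic categories whose follow-up item sections
    should be administered, given the screener responses."""
    screened_in = []
    for category, config in SCREENER.items():
        if any(responses.get(item, 0) >= config["threshold"]
               for item in config["items"]):
            screened_in.append(category)
    return screened_in

# A respondent endorsing "I felt sad" as "sometimes" screens in to the
# depressive disorders follow-up section.
print(categories_to_follow_up({"felt_sad": 3, "felt_depressed": 1}))
# -> ['depressive disorders']
```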

The expert panel examined the newly released DSM-5 criteria for each of the diagnoses covered by the self-report items to determine the most appropriate items for inclusion on the screener, using clinical judgment to select items that best fit criteria that were "essential" or central to each diagnosis. For example, to meet DSM-5 criteria for major depressive disorder, 5 or more of a series of 9 symptoms must be present during the same 2-week period and represent a change from previous functioning; moreover, 1 of these 5 symptoms must be either depressed mood or loss of interest or pleasure. Thus, the expert panel selected 3 self-report items for the screener to represent depressed mood ("I felt sad," "I felt depressed," and "I felt hopeless") and 3 self-report items to represent loss of interest or pleasure ("I enjoyed life" (reverse coded), "I had difficulty enjoying things that I used to enjoy," and "I was interested in my usual activities" (reverse coded)). If a respondent met the threshold set by the expert panel on these screener items, the adaptive SAGE-SR would present the remaining depressive disorder items after the respondent completed the screener to determine whether the respondent endorsed sufficient criteria for any depressive disorder to be considered for differential diagnosis. The expert panel also set the thresholds for determining whether respondents had endorsed sufficient criteria across the screener and follow-up questions for diagnoses to be reported for clinician consideration in the differential diagnosis.

Once the initial instrument was constructed and programmed for Web-based administration (via personal computer, tablet, or smartphone), TeleSage staff members piloted and tested the Web-based administration of the SAGE-SR to identify any programming glitches. Following this process, healthy participants were recruited to take the SAGE-SR for the purposes of measuring administration time, assessing the appropriateness of the thresholds for screening and differential diagnosis set by the expert panel, identifying any remaining areas of confusion regarding item administration, and conducting preliminary quantitative validation. A subset of these participants returned for a second session within 1 week to assess test-retest reliability and how consistently participants screened into follow-up sections and received diagnoses for differential diagnostic consideration.

Stage II: Cognitive Interviewing

A total of 50 adult community mental health outpatients, including individuals with severe and persistent mental illness, were recruited from 2 locations at Centerstone, a private nonprofit mental health organization, in Nashville, TN, and Bloomington, IN.

Participants were recruited to ensure that they (according to their chart diagnoses) represented all 13 diagnostic categories in the self-report items (or 8 SCID-5-CV modules); participants ranged in age from 18 to 68 years (mean 39.9) and were 60% female (30/50), 86% white (43/50), 12% African American (6/50), and 2% Native American (1/50).

For the first round of CI, a total of 18 participants responded to approximately half of the final item pool of 664 items. Thus, each self-report item was tested in 9 cognitive interviews in the first round.

After each interview, a staff member reviewed the recording of the interview and the cognitive interviewer's notes from the session, singling out the following: (1) items that were understood by everyone and (2) items that were difficult for some participants to answer or that were not interpreted as expected. Overall, by the end of the first round of testing, of the original 664 items, 157 tested very well, 2 items were omitted, 1 item was split into 2 items, and small modifications were made to many additional items to increase clarity.

Sample revised items are presented in, sample omitted items in, and sample retained items are presented in.

Sample revised items (with intended diagnostic domain) and reasons for revision:

- Original item: "I felt anxious." Revised item: "I had anxiety." (Anxiety) Reason: Participants, particularly those in the South, sometimes defined anxious in the context of "I felt anxious" as excited or eager (eg, "I was anxious to go to the fair"). The noun form did not carry the same additional connotation; therefore, the item was revised to use the noun form, anxiety.

- Original item: "I thought I might be God's personal messenger on Earth." Revised item: "I am the only person who can do God's work on Earth." (Psychotic disorders: religious delusions) Reason: The original item produced a high base rate of endorsement among devoutly religious participants. The revised item is distinct from the notion that all people are God's children or messengers.

- Original instructions: "Now I'm going to ask you about things you thought you might have seen while you were fully awake and it was light." Revised instructions: "Now I'm going to ask you about things you might have seen while you were fully awake and there was enough light to see clearly." (Psychotic disorders: visual hallucinations) Reason: Participant think-alouds and interviewer probing indicated high endorsement because of the appearance of shadows in dim light. The revised instructions clarify that visual hallucinations were present when there was enough light to see clearly (ie, eliminating shadows).

Sample omitted items (with intended diagnostic domain) and reasons for omission:

- "I felt the presence of evil around me." (Psychotic disorders: religious or persecutory delusion) Reason: Responses from participant think-alouds and interviewer probing indicated that participants interpreted the item as meaning there were "bad people" (a bad element) around them, which led to a higher base rate of endorsement than expected.

- "People said I did not show emotions." (Psychotic disorders: affective flattening) Reason: Participants stated that people did not say this.

For the second round of CI, the 157 items that were understood very clearly were set aside, and 22 participants responded to approximately half of the remaining 506 unique items.


Thus, each self-report item in the second round was tested in 11 more cognitive interviews. At the end of round 2, 1 more item was removed, and minor wording changes were made to several other items.

In the third round of CI, the 157 items that worked well in the first round were added back to the item pool to reassess the entire item pool.

In addition, 10 CI sessions were conducted, each covering half of the modules as before, so that each item received an additional 5 cognitive interviews. There were virtually no misunderstandings in this third round; less than 1% of items were described as confusing by any participant, and there was only 1 instance in which 2 people misunderstood the same item (this item had a content duplicate and was omitted).

On conclusion of all 3 rounds of cognitive interviews, the expert panel reviewed the session summaries and agreed on a final set of 661 items that they judged to be clear and concise and that covered all 13 diagnostic categories. In general, the expert panel erred on the side of keeping items that did well in CI, even when this made for some redundancy, because panel members knew that the quantitative analysis would enable identification of the most predictive items and allow future reduction of the item pool.

In the first and second rounds of CI, all 40 participants were also given a clinician-administered SCID.

This SCID contained the same modules (and diagnostic categories) that the participants completed in the self-report item pool, including the module corresponding to their specific chart diagnosis. The responses to all self-report items were compared with the same participant's responses to the corresponding SCID item(s) to see whether the self-report items would predict the SCID response for the same item or symptom in a real-life application. In all the cases tested, we found that we could identify 1 or more self-report items that predicted each SCID item endorsement.

More specifically, where participants selected a 4 ("often") or 5 ("always") on the SAGE-SR (or, for negatively scored items, a 1 ("never") or 2 ("rarely") on the Likert scale), the clinician independently endorsed the associated item on the clinician-administered SCID.
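This correspondence rule translates directly into a small helper; the sketch below is a hypothetical illustration of the observed pattern, not part of the instrument:

```python
# Sketch of the observed correspondence between SAGE-SR Likert responses
# and clinician SCID endorsements; the function name is hypothetical.

def predicts_scid_endorsement(response: int, negatively_scored: bool = False) -> bool:
    """A 4 ("often") or 5 ("always") predicted clinician endorsement of the
    associated SCID item; for negatively scored items, a 1 ("never") or
    2 ("rarely") did."""
    return response <= 2 if negatively_scored else response >= 4

print(predicts_scid_endorsement(5))                          # True
print(predicts_scid_endorsement(2, negatively_scored=True))  # True
```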

Stage III: Screening Assessment for Guiding Evaluation-Self-Report Instrument Construction and Initial Validation

Eighty-four participants who denied having sought treatment or received medication for a mental illness in the past 2 years were recruited in Chapel Hill, NC. To recruit participants, study staff passed out flyers describing the study near the campus of a large university and made calls to campus service organizations to describe the study; some participants were recruited directly by study staff through these efforts, and others called in to schedule appointments after learning about the study secondhand. The resulting sample ranged in age from 18 to 34 years (mean 20.2) and was 74% female (62/84), 5% African American (4/84), 14% Asian (12/84), 7% Hispanic (6/84), and 68% white (57/84). An additional 5% of participants reported being of more than one race (4/84), and 1 participant (1%, 1/84) declined to provide race information. All participants were asked to take the SAGE-SR using a tablet or laptop. A total of 42 participants returned within 7 days (mean 5.24 days) to take the SAGE-SR a second time.

The 65-item screener covering 13 domains took an average of 7.3 min to administer in this nonclinical sample (SD 2.4 min). With the follow-up items included, participants took an average of 14 min to complete the full SAGE-SR (SD 6.8 min). The Tennessee-based clinical sample was recruited via flyers posted in the clinic waiting room. This sample comprised 44 participants who ranged in age from 23 to 76 years (mean 47.7) and were 68% female (30/44). Race data were available for only 66% of this sample (29/44); of those who provided race information, 69% were African American (20/29), 3% Asian (1/29), 14% Hispanic (4/29), 10% white (3/29), and 3% other (1/29). As expected, the screener took participants from the clinical sample longer to complete (average completion time 9.4 min, SD 3.4 min). The full SAGE-SR took an average of 24 min to administer in the public sector clinical sample (SD 12.6 min).

In contrast, in research populations, the full NetSCID-CV takes 56 min to administer (SD 34 min).

Feedback from the nonclinical sample indicated that participants found the SAGE-SR easy to navigate and complete and found nearly all items clear; the one exception was the reference to "unwanted thoughts" in the section on obsessive-compulsive disorder, which participants indicated was too vague and confusing. To increase clarity, a definition was added to the display screen for this item: "Unwanted thoughts are thoughts that kept coming back to you even when you didn't want them to." The only other feedback regarding clarity concerned some lead prompts intended to prime participants to think of the particular period when they were experiencing the specific symptoms they endorsed during the screener, so as to assess the concurrence of the follow-up symptoms with the screener symptoms. For example, the lead prompt for the follow-up questions exploring generalized anxiety disorder initially read, "Because of my anxiety or worry," but participants responded that reverse-scored questions did not work with this phrase; the lead prompt was subsequently changed to "During the time(s) when I felt anxious." After this change, the related concurrency items were well understood.

The expert panel convened to review the results from the healthy sample to verify the appropriateness of the screening and diagnostic cutoff criteria. Relatively few of the nonclinical participants were expected to screen in to take the follow-up questions, and fewer still were expected to meet criteria for inclusion of a diagnosis within the differential. Any items that were endorsed above threshold more than 15% of the time were reviewed by the expert panel. Thresholds for follow-up item administration were intended to be more sensitive, whereas thresholds for diagnosis were intended to be more specific. Minor threshold modifications were made after this review.

For example, as mentioned earlier, 3 self-report items represented depressed mood on the screener ("I felt sad," "I felt depressed," and "I felt hopeless"); initially, the threshold for screening in to the follow-up depression items was endorsing any of these 3 items as happening at least "sometimes" in the last 30 days.

(Table: test-retest reliability coefficients, 95% CIs, and P values by diagnostic screening module; for example, the depressive disorders screener showed a reliability of .67, 95% CI 0.46-0.81.) Bootstrap methods could not generate a confidence interval for the kappa coefficient for the posttraumatic stress disorder screening question regarding exposure to trauma through work because of the low base rate of this occurrence in our primarily college student sample.

In determining how to interpret these measures of reliability, we used 2 relevant resources: (1) the rationale for interpreting reliability coefficients used by the researchers conducting the DSM-5 field trials and (2) the similar ranges and rationale suggested by Cicchetti. In each of these resources, scores below .60 are considered "fair" or "questionable," scores from .60 to .75 or .80 are considered "good," and scores above .75 or .80 are considered "excellent." Within this framework, test-retest reliabilities for agoraphobia, social anxiety disorder, cannabis use disorder, panic disorder, and 1 (to 3, depending on whether the .75 or .80 range endpoint is used) of the PTSD items were "excellent," whereas those for depression, GAD, OCD, ADHD, 1 (to 3) of the PTSD items, psychotic disorders, and the subdomains of hallucinations and delusions were "good." The only domain not to reach at least "good" test-retest reliability was mania or hypomania, which is consistent with previous attempts to develop self-report items for this diagnostic category.
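These interpretation bands translate directly into a small helper; this is a sketch of the bands described above, with the cutoff argument reflecting the two sources' differing "excellent" endpoints:

```python
# Sketch of the reliability interpretation bands used above (DSM-5 field
# trials rationale and Cicchetti's ranges); the function is illustrative.

def interpret_reliability(coefficient: float, excellent_cutoff: float = 0.75) -> str:
    """Map an ICC or kappa coefficient to the qualitative bands described
    in the text; excellent_cutoff may be .75 or .80 depending on the source."""
    if coefficient < 0.60:
        return "fair/questionable"
    if coefficient < excellent_cutoff:
        return "good"
    return "excellent"

print(interpret_reliability(0.67))  # depressive disorders screener: "good"
print(interpret_reliability(0.50))  # mania: "fair/questionable"
```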


Principal Findings

The SAGE-SR was developed as a self-report alternative to the SCID and NetSCID-CV. The development process used an expert panel to draft and iteratively review items and to review the results of CI regarding item clarity, ensuring that the criteria for 13 diagnostic categories commonly seen in clinical practice were well represented in a final pool of 661 well-understood self-report items. Using this item pool, we constructed the SAGE-SR as a 2-part computerized adaptive assessment with an initial 65-item screening instrument from which respondents who meet screening thresholds branch to follow-up questions that determine which diagnoses are returned for a clinician to consider for differential diagnosis.

Initial validation efforts with a nonclinical sample yielded promising results; qualitative feedback from participants indicated that items and instructions were well understood and that the tablet- or laptop-based administration was simple to complete and reasonable in length. Preliminary quantitative validation efforts suggest good consistency in screening algorithms across 2 administration times, as well as good to excellent test-retest reliability for the screening items across all but 1 diagnostic category in our small nonclinical sample.

The one domain for which test-retest reliability was weakest was mania or hypomania, which has also proven problematic for other researchers attempting to create self-report diagnostic screening assessments. The expert panel made minor revisions to the mania or hypomania self-report items and screening algorithms; whether these revisions improve the test-retest reliability of these items will be addressed in the results of the ongoing quantitative validation with a larger clinical sample.

Limitations

We believe that the item development and qualitative validation procedures described above were comprehensive; however, although the initial quantitative feedback indicates that the SAGE-SR has great promise, the quantitative results are preliminary and based on a small nonclinical sample.



Clearly, the results of this initial validation study will need replication in a larger clinical sample. Data collection in clinical samples is ongoing, and more extensive quantitative validation will be presented once that work is complete. In addition, as noted earlier, the SCID is typically the gold standard against which the accuracy of most diagnostic assessments is measured. A cross-validation of the SAGE-SR's differential diagnosis against the NetSCID-5-CV's diagnostic algorithms is currently underway.

Conclusions

The SAGE-SR has an initial diagnostic screener that branches to groups of follow-up items to efficiently produce a differential diagnosis. Because the assessment is self-report, it should be possible to use the SAGE-SR in routine clinical care, both in specialty behavioral health and in primary care settings. The SAGE-SR offers the promise of providing a clinician with a rigorous differential diagnosis based on the SCID-5-CV and DSM-5 before meeting with the client, so that face-to-face time can be focused on clarifying the diagnosis in a manner that builds the rapport so essential to a successful therapeutic relationship.

Indeed, an additional critique of using the SCID-5 or other structured clinical interviews in clinical settings is that, despite the diagnostic rigor they provide, it is difficult to build rapport while adhering to a strict and standardized administration protocol. The SAGE-SR helps address the field's concerns about the need for greater diagnostic rigor, and about the assessment of possible comorbidities that might be missed in unstructured clinical interviews, while doing so in a cost-effective and clinician time-effective manner. The SAGE-SR also fits into the health care movement exemplified by the personal health record, in which patients are empowered to provide information to their clinicians and to participate more actively in determining the most appropriate treatment. The SAGE-SR could help primary care practices satisfy the Affordable Care Act's mandate for depression and alcohol use screening as part of a more comprehensive screen for common behavioral health issues.

In addition to its utility in routine clinical care in primary care and specialty behavioral health settings, the SAGE-SR offers rigorous coverage of disorders for clinical researchers and for epidemiological studies evaluating large numbers of participants, where clinician-based interviewing is not feasible or is prohibitively expensive. The SAGE-SR covers the same diagnostic categories as the SCID-5-CV and all clinical diagnoses in these categories except psychiatric diagnoses due to another medical condition and substance-induced diagnoses (see ). Thus, the SAGE-SR covers 28 of the 35 disorders in the 8 primary modules of the SCID-5-CV while taking approximately half as long for respondents to complete and without imposing training and administration time burdens on the clinician.

Like the NetSCID-5-CV, responses to the SAGE-SR populate a detailed database, but the SAGE-SR gathers much more information that is then available for quantitative analysis. Rather than generating a series of binary criteria endorsements, the SAGE-SR generates a granular and complete inventory of individual symptoms with Likert-scale frequency ratings, thus offering both diagnostic and symptom severity information. This detailed electronic response set can be used to populate admission summaries, progress notes, and discharge summaries, as well as offer a wealth of information on treatment progress and response.
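A minimal sketch of how such granular Likert responses could yield both a binary criterion endorsement and a severity index follows; the item values and cutoffs are hypothetical:

```python
# Sketch: deriving a binary DSM criterion endorsement and a severity index
# from granular Likert frequency ratings (names and cutoffs hypothetical).

def criterion_endorsed(item_responses: list[int], cutoff: int = 4) -> bool:
    """Binary endorsement: any constituent item rated "often" (4) or above."""
    return any(r >= cutoff for r in item_responses)

def severity_index(item_responses: list[int]) -> float:
    """A simple severity summary: mean Likert frequency across the items."""
    return sum(item_responses) / len(item_responses)

# Eg, the 3 depressed-mood items rated 4, 3, and 5:
depressed_mood = [4, 3, 5]
print(criterion_endorsed(depressed_mood))  # True: counts toward the DSM rule
print(severity_index(depressed_mood))      # 4.0: severity information retained
```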


The Structured Clinical Interview for DSM-IV (SCID-I/SCID-II; First, Gibbon, Spitzer, Williams, & Benjamin, ) is a semi-structured clinical interview administered by trained clinicians and designed to yield psychiatric diagnoses consistent with DSM-IV/DSM-IV-TR (American Psychiatric Association, ) diagnostic criteria. The duration of administration ranges between 15 min and 2 h. The SCID is designed to begin with open-ended questions that introduce each content area (e.g., “Have you ever had?”), followed by a series of scripted questions to be asked verbatim. At the close of each module, the SCID directs interviewers to append as many additional questions as needed in order to be confident about the validity of their ratings. Interviewers are also encouraged to corroborate their assumptions with collateral data whenever possible.