During medical school, students are taught the knowledge, skills, and attitudes required to become competent physicians. Knowledge and skills are rigorously evaluated by written and oral exams, standardized patient scenarios, and ward evaluations. However, the evaluation of behaviors, including professionalism, is often implicit, unsystematic, and therefore inadequate. This is problematic for several reasons. First, medical schools are doing a disservice to future postgraduate training programs, as well as to society, by not explicitly and accurately evaluating this area during medical school. More complaints against physicians to medical societies relate to unprofessional conduct than to lack of knowledge or poor technical skills.1 Yet students who display unprofessional behavior may not be identified in the current system, and will be promoted academically on the basis of adequate performance on tests of knowledge and skills alone.2,3 Second, we are doing a disservice to our students by not providing explicit feedback in this domain, thereby missing valuable opportunities to bring about awareness and improvement. The American Board of Internal Medicine, in its report “Project Professionalism,” discussed the erosion of professionalism during medical training. While knowledge and skills improve markedly over the four years of medical school, there is ample anecdotal evidence, and substantial quantitative evidence, that professional behaviors can diminish over this period.4,5,6 There appears to be an unrealistic expectation that students will arrive at medical school lacking in knowledge and skills, but with a full complement of appropriate behaviors that require no further attention. In reality, all students are vulnerable to lapses in professional behavior and can benefit from explicit, systematic attention in this domain. The focus of medical education in the past century was on knowledge and skills; for the future of medicine, attention to the teaching and evaluation of professionalism is vital.

While the need to evaluate professionalism effectively has been recognized for some time, traditional methods of addressing the problem have not been particularly successful, for several reasons. The traditional approach has involved identifying and defining the attitudes and qualities that make up professionalism (such as altruism, accountability, excellence, duty, honor, integrity, and respect). Evaluation methods that rely on such abstract and idealized definitions lead us to discuss people, rather than their behaviors, as being honest or dishonest, professional or unprofessional. This implies that professionalism represents a set of stable traits. Interestingly, a large literature suggests the opposite: many studies in personality psychology have shown that the presence of specific personality traits does not predict behavior.7,8 For example, in one study of psychiatry residents, Minnesota Multiphasic Personality Inventory testing revealed serious personality disorders in the two individuals who eventually lost their licenses for professional misconduct.7 However, several other participants showed the same personality traits, yet had no difficulty reported in 15 years of follow-up. Thus, the identification of specific traits does not allow us to predict an individual's behavior. There are several reasons why this point matters when discussing the evaluation of professionalism.
Stable-trait measures fail to recognize that an enacted behavior often represents an effort to resolve a conflict between two (or more) equally worthy professional or personal values. For example, it is easy to say that one must always tell the truth and that one must always protect patient confidentiality; however, these values may occasionally come into conflict, and the ultimate choice a student makes will depend on the specifics of the situation.9,10 In addition, professional behaviors are known to be highly context-dependent.10,11 One can imagine a basically honest person lying to a patient in a particular context. This does not automatically mean that the person is dishonest, and therefore unprofessional; certainly in social situations, a decision to always tell the full truth would be considered highly inappropriate. Although conflict and context are separable at a theoretical level, in day-to-day practice they are likely to interact. One study found that 87% of physicians surveyed considered deception acceptable on rare occasions, for example, if the patient would be harmed by knowing the truth, in order to circumvent “ridiculous rules,” or to protect confidentiality.12 Yet when two specific professional values are in conflict, it is not always predictable which will take precedence. For example, while it is sometimes appropriate to lie in order to protect patient confidentiality, there are circumstances in which it would be considered more appropriate to break confidentiality than to tell a lie. As one participant stated, honesty is “usually” the best policy, but everything is taken case by case, and any action depends on the specifics of the people and the situation.12 Traditional ways of evaluating professionalism make no allowance for these gray areas.

Another element of evaluating professionalism involves the process of resolving the conflict. The ultimate choice an individual makes, manifested as the behavior witnessed, does not tell us how he or she arrived at the decision. We know nothing of whether the student recognized the professional values that were in conflict, or why the student chose to act in that particular way. So while focusing on behaviors rather than personality or character traits is important, we must also attempt to understand the process that led to the behavior. If our evaluation methods do not include conflict, context, and the process of resolution, we cannot conduct the most reliable, valid, and appropriate evaluation of these behaviors.

Another reason traditional approaches have not succeeded is that evaluators have been unwilling to label an individual unprofessional for actions that appear relatively minor. Lapses in professional behavior therefore tend to be ignored or suppressed, owing to an understandable reluctance to apply the broad, harsh label of “unprofessional.”13 In one study, clinician supervisors demonstrated this reluctance to give negative feedback regarding unprofessional behavior, even though in interviews they had stated strongly that they would do so.14 Even when faculty are willing, “difficulty in identifying problems, an inability to verify problems, and fear of litigation” inhibit their reporting of behavioral problems.2 This outcome arises, in part, because educators and researchers have traditionally approached the problem from an abstract perspective. The definitions and subcategories of the broader concept of professionalism describe an idealized person, the “consummate professional,” with no room for mistakes. On this theoretical basis, someone who tells a lie, even for a “good” reason, can abruptly be labeled “dishonest,” and therefore “unprofessional”; the only thing left for the evaluator to decide is how unprofessional the individual is. This top-down focus on professionalism as an abstraction, rather than a bottom-up focus on professionalism as a set of actions in context, is therefore flawed.

This paper elaborates on these issues. First, we review the literature on the types of evaluation instruments used for measuring professionalism in medical education. We then outline fundamental conceptual deficiencies in this literature, arguing that the three most important missing components are consideration of the contexts in which unprofessional behaviors occur, the conflicts that lead to these lapses, and the reasons students make the choices they make. Finally, we propose strategies for resolving these issues.

Method

We searched Medline, PsycLIT, and ERIC for literature published over the past 20 years. We included studies that contained original research on the assessment or evaluation of professionalism in medical education, or that included instruments to measure professional behavior, professionalism, humanism, values, or attitudes. After the initial articles were identified, their bibliographies were used to find additional references, and experts in the field were consulted for missing but relevant papers. This process uncovered few studies addressing specific efforts to evaluate professionalism, alongside an abundance of articles calling for new and better methods of evaluation and arguing that this area is important yet neglected. Some papers dealt with particular aspects of professionalism, for example, ethics, communication skills, interpersonal skills, and humanistic behavior, but without extrapolation to the larger notion of professionalism; these were included if they contained original research and either highlighted difficulties in evaluating professionalism or provided new insights or solutions.

Results

Evaluations by Faculty Supervisors

In 1979, the AAMC interviewed approximately 500 clerkship directors about “problem students.” The interviews identified 21 types of problem students; directors were then asked how often each type of problem was seen and how difficult it was. Among the results from the University of Washington School of Medicine, researchers found that “noncognitive” issues (e.g., bright but poor interpersonal skills) were “frequent and difficult,” whereas the truly disturbing ones (e.g., cannot be trusted, manipulative) were seen only infrequently.15 Though this survey was done many years ago, it provides an early glimpse of faculty members' concerns about the professional behaviors of students. Since then, various studies have analyzed the approaches faculty use to evaluate professionalism, including global rating scales, in-training evaluations, and encounter cards.

Ward rating forms, completed by the physician-supervisor, are the most commonly used instruments. In addition to assessing medical knowledge and clinical skills, many of these forms carry a single global item for professional behavior, which may be subject to extensive rater bias.16,17 A study by Woolliscroft et al. highlights some of the problems with this type of assessment. The authors found that, using a questionnaire, faculty could assess the humanistic qualities of internal medicine residents, at least for the item “doctor-patient relationships.”18 However, it would take 20–50 faculty raters per resident to achieve acceptable reproducibility, which calls the instrument's utility into question. It also suggests that “doctor-patient relationships” is probably not a stable trait but is instead sensitive to context: different evaluators might see different behaviors or make different interpretations. In a related study, Johnson found that physicians' and nurses' evaluations of intensive care unit residents correlated highly on all criteria except the assessment of humanistic qualities, further highlighting the importance of context.19

To compensate for the problem of infrequent observation, systems have been developed that encourage repeated observation and documentation of trainees' performance, often on a daily or weekly basis.20,21 Such systems allow knowledge, skills, and professional behaviors to be assessed with reasonable interrater reliability and construct validity, and these real-time evaluations permit early intervention, facilitate feedback, and guide remediation. However, in a study of encounter cards in the evaluation of anesthesia residents, only 1% of supervisors' numerous negative comments concerned unprofessional behaviors.22 Further, the residents who received these negative comments were only rarely rated overall as “performing below level” by their supervisors, despite all of them having had critical-incident reports and having scored lower on objective testing. This again highlights the difficulty faculty have in documenting unprofessional behavior.

Faculty can, in fact, be trained to observe and assess specific behaviors accurately. One group developed a reliable assessment of a very specific set of humanistic skills (e.g., introduced self to the patient, acknowledged the agenda from the last visit) by asking faculty to view videotapes of residents' interactions with patients.23 However, even if faculty can identify problematic behavior in a reliable way, they are often reluctant to record it.
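Before turning to that reluctance, the sampling problem noted above is worth making concrete. As a rough illustration (our own back-calculation, not an analysis reported in the cited study), the Spearman-Brown prophecy formula relates the reliability R_n of a score averaged over n independent raters to the single-rater reliability r:

    R_n = n r / (1 + (n - 1) r),   so that   n = R_n (1 - r) / (r (1 - R_n))

Assuming, purely for illustration, a single-rater reliability of r = 0.10 and a target reproducibility of R_n = 0.80, we obtain n = (0.80 × 0.90) / (0.10 × 0.20) = 36 raters, squarely within the 20–50 range reported; even doubling the assumed single-rater reliability to 0.20 still requires 16 faculty raters per resident.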
Burack, using a rigorous qualitative method, demonstrated that faculty have a marked reluctance to respond unambiguously to behaviors that indicate negative attitudes towards patients.14 In interviews, faculty stated that they would not tolerate “this sort of behavior” and would “definitely lay down the law” if such behavior were observed. In practice, however, they usually did not respond at all, or responded so obliquely that the message required interpretation by the learner and could easily be misread as permissive. To explain this dichotomy, clinicians cited sympathy for the learners' stress, as well as the penalties educators can face for giving negative feedback, such as receiving bad teaching evaluations and being exposed to personal and legal risks. They felt that if an observed behavior was only a lapse and the learner fundamentally “good,” corrective feedback might discourage or frustrate the resident; conversely, for fundamentally “bad” residents, corrective feedback was seen as futile.

The existing methods for faculty evaluation of professional behavior are therefore problematic. Evaluations that remain at a theoretical, abstract, or definitional level yield scales with poor reliability; numerous observations in various contexts are needed, yet attending physicians are present for only a small proportion of the time; and even when lapses in professional behavior are identified, there is great reluctance to report them.14

Nurses and Patients

Some of faculty members' reluctance to evaluate professional behavior stems from the potential conflict among their roles as teacher, mentor, and evaluator. Other groups, such as patients24,25 or nurses,18,26,27 may not be subject to these conflicts, and they may see students and residents more often and in different contexts. Woolliscroft's study included groups of nurses and patients; unfortunately, the patients' ratings were not reliable, and up to 50 patients' assessments would have been required to achieve a reproducible estimate of professional behavior.18 Nurses achieved good reproducibility with ten to 20 assessments per resident, but even this number may be impractical. Because professional behavior is so context-specific, it is not surprising that correlations between the ratings of these different assessors are only low to modest. Nurses and patients may also face pressures of their own that deter unbiased reporting of unprofessional behaviors; a patient, for example, may be reluctant to jeopardize the continuity of a relationship with a physician, even a problematic one. In addition to highlighting some of the difficulties in evaluating professional behavior, Woolliscroft et al.'s study provides a good example of an attempt to triangulate results from multiple sources as a measure of validity.

Peer Evaluation

Peers are in a good position to evaluate one another's professional behaviors because of their frequent, close, and varied contact, so peer assessment may solve many of the problems described for faculty assessment. However, several problems remain, and some new ones arise. On the positive side, there is some suggestion that medical students' peer evaluations may be the best available measures of interpersonal skills.28,29,30 Thomas et al. reported a pilot study of peer review in residency training using a ten-item questionnaire.31 The items clustered into two domains, “technical skills” and “interpersonal skills,” the latter including humanistic behaviors. Of particular interest, interns' peer evaluations on a composite “professionalism” domain correlated well with faculty evaluations of the same dimension (r = .57, p < .05). An interesting modification is a ranking system that forces students to discriminate among their peers on certain dimensions of professionalism.32 The authors suggest that such a system can identify the top 10–15% of the class but does not help discriminate among the rest, perhaps because students were asked for only positive nominations on the peer-evaluation form.

On the other hand, peers, like faculty, seem to have difficulty discriminating the abstract dimensions of professionalism from one another and from other skills. In a study of peer assessment of professional dimensions, Arnold found very high internal consistency (coefficient alpha) across the dimensions, suggesting a strong halo effect in the ratings of the separate dimensions.29 Further, scores were highly correlated with knowledge-based measures such as the National Board of Medical Examiners examinations (Parts I and II) and grade-point average, suggesting that dimensions other than professionalism were also contributing to the scores. As with faculty ratings, a fairly large number of ratings appears necessary to obtain stable measures across raters.33,34 Interestingly, the number of negative peer evaluations generated in small groups depended on the kind of faculty leadership exercised in each group,29 yet another example of the importance of context and social climate in peer (and other) assessment methods.

Indeed, the social climate of peers assessing peers may have negative consequences. While some studies report positive reception of peer feedback, others report marked resistance to peer evaluation even when the evaluations are anonymous and used for research purposes only.31,35 Helfer found that senior medical students were more accepting of peer evaluations than were junior students, who lacked confidence in the usefulness of the system.30 Van Rosendaal found that residents worried the process would undermine their working and personal relationships.35 In summary, peer evaluations hold promise for evaluating professionalism, but before they can be widely useful, many of the same problems facing faculty evaluation will have to be solved, and systems must be developed that overcome the reluctance of peers to rate one another.

Self Evaluation

Several early studies involved self-reports of attitude changes during medical training. To varying degrees, students reported increases in certain attitudes, such as cynicism; greater concern about making money; or a sense that their ethical principles had been eroded or lost.5,6,36,37 Some positive attitudes increased as well, for example, concern for patients and helpfulness.5 More recently, Clack studied gender differences in medical graduates' self-assessments of personal attributes and found that women generally felt more confident than men that they possessed nine of the 16 “ideal” attributes listed.38 These studies indicate that our understanding of students' attitudes, some of which may reflect aspects of “professionalism,” can benefit from self-report questionnaires. However, such studies compare groups and trends rather than assess the qualities of individuals, and for the latter purpose the utility of self-report may be much more severely limited.

Most studies of self-assessment in medicine focus on knowledge and skills rather than professional behavior, but they generally conclude that self-assessment is quite inaccurate.28,39 If physicians are inaccurate at self-assessment in relatively concrete domains (e.g., knowledge), they are likely to have even greater difficulty in a domain such as professionalism, which is less well defined and more socially value-laden. One recent line of research introduced a model of self-assessment described as the relative-ranking technique, in which each participant ranks a set of skills relative to each other, from the skill needing the most work to the one needing the least.40,41 Despite some success in the relatively constrained domain of interviewing skills, the technique proved far less useful when applied to residents' self-assessments of the standard components of a ward assessment form: although residents were quite willing to say they needed “the most work” on their surgical skills or their knowledge base, every resident responded that he or she needed “the least work” on colleague and/or team relationships.41 It appears that when statements are value-laden and abstract, as in issues of professionalism, the bias of social desirability is strong, and self-assessment becomes distorted and potentially misleading.

Clearly, using self-assessment to evaluate professionalism is difficult. The methods used do not take context into account, which makes them somewhat threatening. Perhaps a relative-ranking system could be attempted that included only elements of professionalism, such as interpersonal skills, communication skills, respect, and integrity; even then, a student would be unlikely to say that he or she needs more work on honesty. Again, behaviors rather than abstract definitions would need to be incorporated to overcome this limitation. Until further research clarifies the nature of self-assessment, its utility for assessing professional behaviors is likely to be limited to formative evaluation and the setting of personal goals.

Standardized Patients

There is an extensive body of literature on objective structured clinical examinations (OSCEs) and standardized patients (SPs) and their importance in the evaluation of clinical skills. No literature specifically addresses the role of either in the evaluation of professionalism or professional behaviors within medicine; however, there are areas in which issues of professionalism are touched on indirectly. Using an adaptation of the American Board of Internal Medicine's Patient Satisfaction Questionnaire, Klamen et al. found that SPs could reliably identify some professional characteristics of the doctor-patient interaction, including using understandable language and encouraging patients to ask questions.24,42 By contrast, Schnabel et al. asked SPs to assess empathy, interpersonal skills, and patient satisfaction on a 13-item checklist used in a senior-medical-student OSCE and found that up to 20 ratings were needed to generate reliable measures.43 At the extreme, research using OSCE stations to assess students' skills in dealing with ethical issues concluded that 41 stations would be required to achieve good reliability, even if the content domain were narrowed to one specific ethical dilemma.44,45,46

Part of the difficulty with OSCE scenarios is the ambiguity with which concepts are defined on the evaluation form. For example, one set of forms used anchors such as “major problems in demeanor or ethical standards resulting in inadequate ability to deal with the patient's problems” and “actions taken may harm the patient.”47,48 In both instances, the unacceptable behaviors are not specified, and judgment is left to the examiner. Relatedly, Arnold suggests that the OSCE, as it now exists, does not discriminate between ethical analysis of a problem and communication skills.49

Another issue with SP assessment is artificiality. Norman, for example, reported on a physicians' remediation program that used standardized patient scenarios.50 SPs in a simulated office practice, as well as in standard OSCE stations, rated physicians' interpersonal skills during each encounter. Compared with the office simulations, the OSCE stations had low reliability and were felt to be “artificial,” which may increase the likelihood that examinees act as they should rather than as they would. On the other hand, one study reported several professional lapses in the context of a psychiatry OSCE, the most extreme case involving a student who placed a fleeing SP in a headlock for the purpose of restraint.51 Hodges et al. argue that more demanding stations may well discriminate effectively on professional dimensions.51 Similarly, Vu et al. suggested that SPs' ratings were highly reliable and valid when compared with the comments real patients would be expected to make about the behaviors they witnessed.52

Again, context is important: methods of assessment that are truer to life may be more useful than those involving obviously artificial situations, since students may know there is a professionalism station and respond with the actions they assume are on the checklist. It would be interesting to build values conflicts into SP scenarios in order to assess students' awareness of the professional values involved and to evaluate their responses. In such cases there may be more than one right answer, so students' thought processes about their actions may be more important than the behaviors they actually display.
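The reliability figures above (up to 20 SP ratings, 41 OSCE stations) reflect the same sampling logic sketched earlier for faculty raters. As a hedged illustration, and assuming that averaging over stations behaves approximately like averaging over raters, the short calculation below shows how quickly the required number of stations grows as single-station reliability falls. The single-station reliabilities here are illustrative assumptions, not data from the cited studies.

    import math

    def n_needed(r_single: float, target: float) -> int:
        """Spearman-Brown: smallest n whose averaged score reaches the target reliability."""
        n = target * (1 - r_single) / (r_single * (1 - target))
        return math.ceil(n)

    # Illustrative single-station reliabilities (assumptions, not study data):
    for r in (0.30, 0.15, 0.09):
        print(f"single-station r = {r:.2f} -> {n_needed(r, 0.80):2d} stations for 0.80")
    # r = 0.30 -> 10 stations; r = 0.15 -> 23; r = 0.09 -> 41,
    # the order of magnitude reported for ethics OSCE stations.44,45,46

The point is not the exact numbers but the shape of the relationship: unless individual observations are far more consistent than the literature suggests, any judgment of professionalism based on one or two encounters is statistically fragile.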
The low reliability of OSCEs, even when limited to specific dimensions of professionalism, is concerning, and many authors have concluded that the greatest utility of this type of assessment may lie in the formative evaluation of students.

Longitudinal Observations

More recently, researchers have developed systems for assessing students' professionalism that are triggered by the observation of problematic behaviors.1,2,4 The evaluation instrument is a specific form completed by a clerkship director or faculty member when a student exhibits unprofessional behavior during a rotation. When more than one form has been completed for a given student, the student meets with an academic committee and remediation is instituted. These systems rest on the principle that students' professional behaviors must be assessed longitudinally, across numerous clinical rotations. Both studies describing this tool are qualitative descriptions of systems in place, and reliability and validity studies are anticipated. Despite the lack of rigorous evaluation so far, such systems are very promising and may work well for identifying students with significant lapses in professional behavior, although in their present state they may be less useful for evaluating all students. The important advance these authors have made is acknowledging that labeling a student “unprofessional” carries a far greater negative connotation than recording examples of unprofessional behavior.

Discussion: Future Directions in the Evaluation of Professional Behavior

It should be apparent from the preceding discussion that evaluating professionalism in medical students and residents has proved difficult. The definition-driven, abstract way of thinking about professionalism creates a dichotomy for faculty: apply a harsh label, or let the lapse go. Previous research shows that faculty are far more likely to let the lapse go, which effectively suppresses discussion, feedback, and attempts at remediation.14 By contrast, evaluation methods that treat behaviors, rather than individuals, as professional or unprofessional are much less threatening and more likely to gain acceptance by faculty and students. The systems reported by Papadakis et al. and Phelan et al. provide two good examples.1,2 Perhaps these methods will decrease faculty's reluctance to report behaviors that warrant remediation; this can only help promote students' professional development. As developed, these evaluation forms are intended to identify and document serious lapses in professional behavior, which fortunately occur in only a few students. Future research might focus on making such forms useful in the evaluation of all students. Some barriers to their use would likely remain; for example, faculty would still have to decide what constitutes a major or a minor infraction. These limitations might be minimized if the behavior is placed in context (of the person, the situation, and the harm caused to others), a fair process of review is used, and reasonable judgment is applied.53 Any decision made would then be justifiable and well supported. Arnold and colleagues take an interesting step in this direction with a measurement tool that hybridizes the behavioral and the abstract, attaching behavioral descriptors (such as “I have seen residents refer to patients in derogatory terms”) to abstract dimensions of professionalism (such as “respectfulness”).54

We have also argued that professional behavior is far more context-dependent than is usually acknowledged. All physicians face situations that challenge their ability to act professionally, and medical students and residents are no different; indeed, the nature of their training and environment may make them more vulnerable to lapses. It is crucial to understand the specific context in which a behavior occurs before attempting to evaluate it. For example, Christakis et al. found that the teaching students had received on ethical dilemmas seemed to lack real-life relevance, relating more to the context of a practicing physician.55 Focus groups described different dilemmas, unique to a third-year student's experience, highlighting conflicts among education, patient care, the desire to be a team player, and fear of a poor evaluation. One overriding feature was the construct of authority: students lack it and are wary of challenging it, which often places them in conflict.

It may be necessary to study these behaviors in context more closely to determine their frequency and severity. Since faculty, nurses, students, and residents all see different aspects of professionalism in students, the perspectives of each group are needed for a comprehensive picture. One approach would be to involve each group in focus-group discussions to determine what they consider professional and unprofessional behaviors; their distinct perspectives would inform the design of instruments for all forms of student assessment. Another would be an anonymous encounter-card system collecting information from students, residents, faculty, and nurses about the behaviors actually occurring. This could provide a more comprehensive set of behaviors on which to base future evaluation methods.

Conflict has long been identified as a critical component of professional development and is a dominant element in some measures of professional behavior.9,10,11 Although paper-and-pencil instruments are limited by their artificial nature, some researchers have found that professional behavior is best identified at the moment students are grappling with these conflicts. One implication is that students could be placed in situations involving a conflict of values, for example with a standardized patient, and the behaviors they display, based on the choices they make, could be evaluated. Even more informative would be an evaluation of the thought process by which a student arrives at his or her ultimate choice. Alternatively, students could be asked to write about professional conflicts they have encountered.56 The language of these accounts could then be subjected to linguistic or rhetorical analysis to uncover the underlying values of individual students and to explore how these values affect the resolution of professional conflicts.
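To make this idea concrete, the toy sketch below tallies value-laden terms in a student's written account of a conflict. It is our illustration only, and far cruder than the rhetorical analyses described next: the value categories, word lists, and sample text are all invented, and a real study would rely on trained qualitative or linguistic coding rather than fixed keyword counts.

    import re
    from collections import Counter

    # Hypothetical lexicon of value-laden terms; a real analysis would derive
    # its categories from qualitative coding, not from a fixed word list.
    VALUE_TERMS = {
        "honesty": {"honest", "truth", "lie", "lied", "disclose"},
        "confidentiality": {"confidential", "privacy", "private"},
        "authority": {"attending", "resident", "ordered", "hierarchy"},
        "patient welfare": {"harm", "safety", "benefit", "care"},
    }

    def tally_values(text: str) -> Counter:
        """Count how often each value category is invoked in a reflective essay."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter()
        for word in words:
            for value, terms in VALUE_TERMS.items():
                if word in terms:
                    counts[value] += 1
        return counts

    # Invented sample essay, for illustration only.
    essay = ("The attending ordered me not to disclose the error. I wanted to be "
             "honest with the patient, but I worried about harm and about privacy.")
    print(tally_values(essay))
    # Counter({'authority': 2, 'honesty': 2, 'patient welfare': 1, 'confidentiality': 1})

At best, such surface counts could flag essays for closer reading; the studies described next show what a genuinely rhetorical analysis can add.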
Lingard and Haber's studies use a rhetorical framework to explore how the structural patterns of case presentations inform medical students' developing attitudes towards patients and colleagues.57,58 The authors demonstrate that a rhetorical analysis of discourse patterns can reveal critical relationships between the stories novices learn to tell about patients and the decisions they make about how to act on behalf of, and in relation to, those patients. Other studies in a similar vein reinforce the potential usefulness of this method.59,60,61 However, the texts students generate may suffer from the same sense of artificiality that affects OSCE stations, and research in this area would have to be designed with this issue in mind.

It is unrealistic to think that one evaluation instrument could capture all that is important in the complex domain of professionalism. As with all high-stakes evaluations, reliability, which depends in part on sample size, is important. No student should receive a grade in cardiology on the basis of a single-item test; similarly, no student should receive a grade in professionalism without adequate sampling of the domain. Some of the measures outlined above involve large samples and are likely to be more useful (peer evaluation, encounter cards), while others rely on only one or a few reports (SP scenarios, ward evaluations). While the latter may help identify outliers, the former are more useful for the larger group of students who experience only occasional lapses in professional behavior. More than one measurement technique will certainly be needed, and the greatest validity may come from triangulating results from different sources.

Future efforts at understanding professionalism, and future methods of evaluating it, must focus on behaviors rather than personality traits or vague concepts of character. Our understanding and evaluation must include context and conflict in order to be relevant and valid. Ideally, methods of evaluation should include elements of peer assessment and self-assessment, which are becoming required elements in the continuing professional development of all practicing physicians. Finally, we should attempt to understand what drives students to demonstrate occasional lapses in professional behavior, in order to develop effective teaching and remediation in this domain.
