How Should We Assess Interviewing and Counseling Skills?

The need to teach interviewing and counseling skills has long been established among clinical legal educators. Even among our non-clinical colleagues, these skills are recognized as integral to competent lawyering. While there remains considerable difference of opinion within the United States as to whether teaching such skills should be in a required course or simply be available as an elective, there is no doubt that a twenty-first century American law school must include the teaching of these skills in its curricular array. This paper first briefly describes the structure of legal education in the United States (insofar as clinical and skills teaching is concerned) and the almost total absence of any bar admission training or apprenticeship requirements. If the law schools are not required to fully train all future lawyers and the bar admission authorities likewise disavow responsibility for doing so, should clinical law professors assume the burden? I then go on to discuss the primary clinical evaluation technique of directly observing the student's performance, sometimes referred to as the gold standard method of assessment. Against the backdrop of the assertion that it is beneficial to use multiple methods of assessment, I then describe the several methods I have used to address the question of how best to assess interviewing and counseling skills. As an aside, it becomes clear that much more empirical analysis is in order.

Accompanying the teaching of these skills has been the challenge of evaluating the student performance of such skills. Clinicians knew from the outset that assessment would be qualitatively different from the evaluation methods used for doctrinal teaching.5 We also knew there would be two fundamentally different functions of clinical assessments: the traditional grading or sorting purpose and the feedback/constructive learning function. It is the latter which is and always has been at the core of clinical teaching.6 In the competitive world of law students, however, as with anything that is taught, the formal assessment or grading label remains crucial.7 If it is not tested and graded, it will be devalued by the students.8 The question of how best to do that remains the challenge with which we are grappling.9

A central aspect of assessment is the learning goal. What are we trying to teach? What is the ideal outcome we would like a student to achieve? It is critical that the method of assessment be directly correlated with those learning objectives. An evaluation is valid if it accurately measures the student in terms of the instructional objectives.10 To state the obvious, if we want the student to learn how to draft advocacy briefs, we do not assess by conducting an oral exam with the student. And then, there is the level of proficiency. Are we talking about an introductory or very elementary level with respect to the skill involved? Or, is a much more advanced level of learning the desired goal? Thus, in discussing alternative methods of assessing interviewing and counseling skills, we need to be clear about what our objectives are. Different methods of evaluation may be more or less appropriate depending on the desired outcome. Thus, clinical evaluations of interviewing or counseling skills could include assessing the examinee's knowledge of interviewing models or theories, the method of preparation, the actual performance of the skill, a self-critique of her or his
performance, the resolution of ethical dilemmas or all of the above.11 This paper first briefly describes the structure of legal education in the United States (insofar as clinical and skills teaching is concerned) and the almost total absence of any bar admission training or apprenticeship requirements. If the law schools are not required to fully train all future lawyers and the bar admission authorities likewise disavow responsibility for doing so, should clinical law professors assume the burden?12 I then go on to discuss the primary clinical evaluation technique of directly observing the student's performance, sometimes referred to as the gold standard method of assessment. Against the backdrop of the assertion that it is beneficial to use multiple methods of assessment, I then describe the several methods I have used to address the question of how best to assess interviewing and counseling skills. As an aside, it becomes clear that much more empirical analysis is in order.13

5 Student performance and critique assessment were central to skills teaching. E.g.: "Intellectual mastery is not sufficient .... the goals must include both the student's ability to understand and ability to perform effectively." ... "It is essential that each student have at least one opportunity to be critiqued in conducting a full interview". David A. Binder & Susan C. Price, Instructor's Manual for Legal Interviewing and Counseling, 4 and 22 (1979).

6 See, e.g., Robert Dinerstein, "Report of the Committee on the Future of the In-house Clinic", 42 J. Legal Educ. 508 (1992); Peter Toll Hoffman, "Clinical Course Design and The Supervisory Process", 1982 Ariz. St. L.J. 277, 292 (1982); David F. Chavkin, "Matchmaker, Matchmaker: Student Collaboration in Clinical Programs", 1 Clinical L. Rev. 199, 202 (1994); Kenneth R. Kreiling, "Clinical Education and Lawyer Competency: The Process of Learning to Learn From Experience Through Properly Structured Clinical Supervision", 40 Md. L. Rev. 284, 330-334 (1981).

7 Stacy L. Brustin & David Chavkin, "Testing the Grades: Evaluating Grading Models in Clinical Legal Education", 3 Clinical L. Rev. 299 (1997) (a report on a survey of students as to whether they prefer a pass/fail grade or the usual number or letter grades, together with a recommendation for grades with an option for pass/fail).

8 See Steven Friedland, "A Critical Inquiry into the Traditional Uses of Law School Evaluation", 23 Pace L. Rev. 147, 171 (2002); Brustin & Chavkin, supra note 7, at 306; Lawrence M. Grosberg, "Should We Test for Interpersonal Lawyering Skills?", 2 Clinical L. Rev. 349, 351 (1996).

9 The broader subject of law school testing generally is, of course, worthy of re-examination and, as has been observed, "has been widely overlooked". Friedland, supra note 8, at 149.

I. U.S. Legal Education and Bar Admission Standards.
Whether you reside inside or outside the United States, it is important to recognize some key differences between American legal education and licensure and the comparable legal institutions abroad. Much of the rest of the world has a law school and licensing structure different from that in the United States. In the U.S.,14 almost all lawyers first obtain a four year undergraduate college degree (typically a BA or BS) and then go on to complete either a full-time three year law school program or a four year part-time program, both culminating with a JD degree.15 Following the acquisition of the JD, an applicant for admission to the bar must successfully pass a bar examination in the state in which the future lawyer wishes to practice.16 Once admitted to the bar, the new attorney is licensed to practice in any and all areas of law and inside or outside of the courtroom. While nearly all states have some form of post-graduate mandatory continuing education requirements,17 there are almost no apprenticeship requirements,18 as there frequently are elsewhere in the world. In the U.S., the American Bar Association Section on Legal Education and Admission to the Bar is the entity that is authorized by the U.S. Department of Education to accredit law schools. It is the ABA, therefore, that determines what a law school is required to do or not to do in order to be accredited. With the exception of a very small number of states, one cannot sit for a bar examination unless one is a graduate of an accredited law school. The ABA has a fairly detailed set of regulations covering everything from the size of the faculty and the physical

12 More specifically, should clinicians devise and recommend ways to teach and evaluate skills learning for that large group of law school graduates who elect not to take clinics or skills courses?

13 I am in the process of addressing this dearth of data by conducting various studies of the efficacy of various assessment methods.

14 There are exceptions to all of the following generalizations in the text; however, this abbreviated summary of the differences, I believe, will be useful in considering the main theme of this essay regarding methods of evaluating interviewing and counseling skills.

Journal of Clinical Legal Education
plant to guidelines on the curriculum.19 With respect to experiential learning, either in the form of students working on real cases under the supervision of law professors or practicing lawyers (cf. apprenticing), or in the form of simulation exercises, the ABA does not require that every law graduate complete such training. The accrediting standards do encourage law schools to develop clinical education and they do mandate that every law school actually maintain some kind of a clinical or skills program.20
Likewise, the state bar admission authorities, all of which are completely independent of the ABA and the law schools, do not require any type of apprenticeship or experiential testing as part of the requirements for admission to the bar.21 There are some paper-and-pencil bar exam tests that call on bar applicants to demonstrate some familiarity with lawyering skills such as investigation or interviewing or counseling or trial advocacy skills. But these questions, called "performance tests",22 do not in any way require the applicant to perform an oral skill.
What all of this means is that it is very possible for a law student to graduate law school in the U.S., pass a state bar exam and be admitted to practice law, without ever having interviewed or counseled a client (real or simulated) or tried a case or negotiated a transaction, or even without having observed such lawyering activities. Thus, in terms of assessment of the actual performance of interviewing or counseling skills, there is neither a formative nor a summative evaluation requirement in law school or after, prior to being licensed to practice law. I am not aware of any data that specifies precisely how many American law graduates fall into this category of persons bereft of any clinical or skills training. But, extrapolating from my own law school experience and my familiarity with many others, it is clear that there is a significant number in this category, perhaps even half or more of American law graduates. Moreover, the remaining students may have taken only a single clinical or skills course, hardly a basis upon which one might conclude that such a student is well versed or even minimally competent in those skills.
It is in this context that law schools must address their institutional responsibilities to produce competent lawyers. If the bar admission officials are not taking steps to assure the public that this is the case, then law schools, and particularly their clinical faculty, it seems to me, must confront this reality. Even if the ABA does not require that law schools address this need, the law schools certainly are not prohibited by the ABA standards from trying to remedy this deficiency. Assuming then that law schools want to address and, indeed, are addressing this need, we come full circle, back to the task of determining how best to teach and evaluate these skills, beginning with interviewing and counseling.

II. Professorial Observation of Student Performance.
An assessment might, for example, target interpersonal skills such as the effective use of empathy. What skills or knowledge are to be assessed is, therefore, the threshold issue for any teacher.
A second issue is whether the assessment is in the nature of a constructive critique24 or a "graded" evaluation that directly affects a law student's standing and progression in law school or entry into the profession. There has been a considerable amount of clinical scholarship that focuses on giving constructive feedback, but much less on graded evaluations. Among other things, the critique/feedback literature has produced the detailed criteria that clinicians invariably use in one-on-one supervision sessions.25 The critique literature also appropriately centers on self reflection and the life-long learning process that is so central to clinical teaching.26

A critical dimension of any graded evaluation of a skills performance is that it be fair. This raises the same kinds of basic testing issues - the reliability and the validity of the testing measures - that ought to be confronted by anyone who is administering a formative or summative assessment tool of any kind.27 Does the assessment tool accurately and consistently measure what it purports to measure? This holds true for an elementary school teacher giving an English exam to a fifth grade student as well as for a law professor grading a torts essay exam or a bar examiner testing an applicant for admission to the bar.28 Despite the many years that I have been evaluating law students' performance of interpersonal lawyering skills - in live client clinics, upper class simulation courses and first year introductory lawyering courses - I still have recurring bouts of concern as to whether I am being fair in applying commonly accepted assessment criteria. For example, in the case of counseling skills, can I apply the criterion that a client receive a "clear summary of options" in a manner that would justify giving one student a B+ and a second, a B? These doubts were fueled by a group of studies done by medical educators who examined the results of analogous clinical evaluations conducted by medical professors.29 What they found was that the medical professors were not consistent in their evaluations, as a result of which, the analysts concluded, those assessments were not fair. The same performance might be assessed differently by different medical professors. Or, the nature of the student-patient interactions observed was so different as to preclude consistent comparative grading of the students. Indeed, similar results were reached with respect to the inconsistency of law professor essay grading.30 One consequence of the medical findings was that medical educators began to resort to other techniques - one might say unorthodox or previously untried methods - of evaluating medical students' clinical performance.31 If the medical professors could not assess fairly the clinical interactions of their students, why do we think we are more able to do the same with our students?

30 ... 38 San Diego L. Rev. 463, 471-72 & n.37-38 (2001) (discussing first a study that showed that when a law professor re-read an exam paper, there was only a seventy-five percent chance that the second reading would produce the same pass/fail result; a second study showed a bar exam question had only a sixty-seven percent chance of being graded consistently as to pass/fail by a second reader).
Traditionally, clinicians have graded clinic students' interviewing and counseling skills by observing them performing these tasks. This continues to be the most generally accepted method of assessment. The clinical professor presumably is the most qualified person to assess whether a student's overall interviewing and counseling skills performance meets the standards of competent lawyering and to ensure that their evaluations are measuring what they purport to measure.32 This is the so-called "gold standard" evaluation.33 Its validity is also reinforced when the students receive the applicable criteria with adequate advance notice.34 Also, typically, the same criteria would be the learning vehicle for constructive non-graded feedback and student self-reflection as well as the bases upon which an ultimate grade would be determined.35 As stated above, however, while teacher assessments are quite valid, they may not be reliable in terms of consistency.
The professorial observations more often than not would be faculty reviews of videotaped simulations. Usually, in a typical clinic that I have taught, however, we would be lucky to have completed one videotaped simulated preparatory session prior to the student meeting with a real client or a witness. Thus, the notion of repetition as the most valuable aspect of preparation - practice and more practice - is difficult to achieve.36 Observing a student interviewing or counseling a real client might also take place, but the scheduling conflicts and unavoidable distractions and supervisory tensions often would interfere with the accuracy of these assessments or even prohibit these observations.37

III. Multiple Methods of Evaluation.
The question raised here is: Is the teacher's direct observation of a student's performance of interpersonal skills the most effective way or, indeed, the only way to grade such skills? Most importantly, is it fair? Or, to use testing terminology, is it reliable, accurate and consistent from student to student? Would another clinician give the same grade? And is it the most valid way to assess the overall levels of interviewing and counseling competency, or for that matter, specific components of the skills such as question form or the use of empathy? Why shouldn't we consider other potentially supplementary or complementary ways to assess a student's interviewing and counseling skills?
31 One of these is the "standardized patient" (using actors first to portray patients and then, second, to give evaluations of performances) ...

36 ... Rev. 191 (2003).

37 See Joshua D. Rosenberg, "Interpersonal Dynamics: Helping Lawyers Learn the Skills, and the Importance of Human Relationships in the Practice of Law", 58 U. Miami L. Rev. 1225, 1234 (2004) (noting that the absence of repetition and the stepped development of these skills necessarily affects the quality of both feedback and grading).
There are at least two rationales that favor using more than one method of evaluating a student's performance of interviewing and counseling. Assume, for example, that the objective is achieving an overall novice level in initial client interviewing. First, using one method - for example, observing a single live client-student interaction - could have, depending on the circumstances, a distorting effect for any one of several reasons and, therefore, an unfair impact on any grade. It could be an interaction with an extremely difficult client. It could involve a much more complicated legal situation. There could be unreasonable time constraints or difficult physical conditions under which the meeting took place. The student might be feeling bad that day, etc. For any of these reasons, a grade for interviewing based on this single interaction would be an inaccurate assessment of the student's overall competence level.38 Why not err on the side of caution by not relying too heavily on only one method of evaluation? How can it hurt to use multiple assessment methods? One can also give different weights to different methods.
Still another reason for using more than one kind of testing device (e.g. multiple choice as well as essay questions) is that different students learn differently and, therefore, do better or worse depending on the type of assessment tools used.39 Using a variety of methods enables the students to demonstrate their talents in at least some of the tests. This also improves the "validity" of the overall grade, because it accounts for different ways in which to register the multiple competencies that are necessary for lawyering.40
In addition, there is an element of repetition that favors giving students an opportunity to improve their abilities to both perform the same skill and reflect their full understanding of the skill and to improve their mastery of the testing device. The latter I would characterize - not in the pejorative phrases "teaching to the test" or teaching "test-taking skills" - but rather as the opportunity to become familiar and comfortable with a particular kind of exercise. Just as the typical law student learns through repetition how to handle the usual law school essay exam question, they are less prepared for some of these skills testing methods that they may encounter only once. For example, in the case of the standardized client exercises discussed below, the students conducting the second, third or fourth standardized client exercise know and understand the mechanics of administering the exercises better than they did for the first exercise. Their added comfort level often facilitates a higher level of performance. It likewise seems indisputable that a student's experience with repeated tests of an interactive skill such as interviewing or counseling will benefit greatly from such opportunities. The value of repetition was noted earlier.41 Another way to describe the repeated use of testing tools is that it is a way of giving stepped feedback to a student, thereby reflecting that student's progress, which is recognized as an "essential ingredient for

38 Cf. Friedland, supra note 8, at 185-86 (referring to this similar distorting impact when the grade is based solely on a single end of term exam and, even worse, if the single question exam deals with the 1 of 6 course topics the student didn't prepare for.) See also Glen, supra note 32, at 405 n.264; id. at 446. Likewise, with respect to evaluating the overall competency of an interviewing performance, one student might be superb in establishing rapport with a client but quite ineffective in the use of properly formed questions, whereas a second student would be the reverse, but in a way that totally undermines the efficacy of the interaction. Limiting the evaluation to a single event or to only one aspect of the skill may result in a misleading overall assessment.

39 See Ian Weinstein, "Testing Multiple Intelligences: Comparing Evaluation by Simulation and Written Exam", 8 Clinical L. Rev. 247 (2001). See also John M. Bauman, "Oral Examination as a Method of Evaluating Students", 51 J. Legal Educ. 130, 136 (2001) (students performed well on oral exams while others did better on written exams).

40 Friedland, supra note 8, at 196.

41 See Binder & Bergman, supra note 36. This would seem to be true whether it is repetition of overall assessments of complete client interviews or repetition (in the sense of drills) of more narrow skills (e.g. active listening or use of T-funnel questioning).
advancement".42 Professors Binder and Bergman have certainly persuasively made this "practice and more practice" point regarding interviewing, deposition and counseling skills.43 Finally, before reviewing specific new assessment tools, it is worth noting clinicians' dislike of the grading process generally. Brustin and Chavkin recognized this reality in constructing their study as to whether a clinic should be graded in the usual fashion (A, B, C, etc.) or with a Pass/Fail.44 While I very much believe in detailed feedback and constructive advice as to how students might improve their skills, I frequently find myself delaying the final calculation of grades until the last possible minute. In my experience, the notion of giving a clinic student a "test", therefore, has an even more unpleasant sound to most clinicians. Yet most of us ultimately do in fact generally give a formal grade of some sort. If giving clinic students something that might be characterized as a test would make our grading better, fairer and more accurate, why not use these other devices?
It is against this backdrop of more traditional methods of clinical assessment that I continue to experiment with different ways to evaluate students' interviewing and counseling skills. The following assessment tools are intended to complement or supplement, NOT replace, the traditional clinician's direct observation.45

[A] The Videotaped Performance Test.46 This is a written exam in which a student views a videotape depiction of a lawyering activity and is asked to analyze all or part of what is on the tape. The ability of a student to articulate (either or both verbally or in writing) why a lawyer effectively counseled a client would suggest a cognitive understanding of skills theory and its requisite components. It not only would reflect a student's knowledge of skills models and theories, but an understanding of how someone succeeded or failed in applying those theories. Assuming one's teaching objectives included an understanding of skills theories, this is a valid way to assess that understanding. Being able to parse the deficiencies of a golf swing will not make you a top golfer.47 But, it certainly cannot hurt in the development of the reviewer's swing. In simulation classes, we have asked students to view a videotape of someone else conducting a client interview or a counseling session and to write a critical analysis using those same criteria that I use in evaluating them.48 We have used this method in large skills classes both with an in-class exam (open book and closed book) as well as in a take-home exam. I have also given students a narrative of a factual situation or a transcript of a lawyer-client interaction, not unlike a traditional law school exam, and asked them to write an essay as to how they might handle counseling a client in that situation, or

42 Id. at 204.

43 Id. at 201; see also Rosenberg, supra note 37.

44 See Brustin & Chavkin, supra note 7, at 300-01.

45 I would be extremely reluctant to eliminate direct professorial observation of student performances even in the face of questionable reliability and accuracy data. Rather, in addition to implementing the theme of this paper (namely, using multiple methods of assessment) I would take steps to improve the observation method. See Grosberg, supra note 8, at 366-378. Contra Rosenberg, supra note 37, at 1234.

46 To make this assessment a more meaningful one, I give the students a case file that provides the context for the session. For example, for a counseling session, the file would include an initial client interview memo, as well as file memos and documents reflecting an investigation. It utilizes the same technique used on many bar exams (the "performance test") but adds the video component as part of the exam question. I have referred to these as "videotaped performance tests". Grosberg, supra note 8.

47 See Stephen P. Klein, "An Analysis of the Relationship Between Trial Practice Skills and Bar Examination Results," January 10, 1983, at 272 of Learning and Evaluation in Law Schools, supra n. 10. (In an analysis comparing the scores of a videotaped performance type exam and a traditional bar exam, the conclusion was that the former scores "were just as reliable" as the traditional essay questions. Id. at 293.)

in the case of a transcript, analyze the efficacy of the lawyer-client encounter. The former would call for an outline of a counseling session; the latter, a critique. These kinds of questions would more explicitly call for an effective application of the readings and videotaped lessons that I typically assign in any course involving interviewing and counseling.49 The desired teaching outcome is for the student to clearly reflect an understanding of the applicable theories and why the lawyer was effective or ineffective.
In many ways such a written critique of another's performance is comparable to a student's self appraisal, especially if it is facilitated by a videotape of the performance.50 In the ideal Binder and Bergman world, the student would have several opportunities to repeat performances of a skill. In that context, it seems to me, the ability to do an effective critique of another's performance would be transferable and identifiable in the student's next performance of the skill. For that reason, the critical analysis skill can play an important role in the student's development of the performance skill. This is in addition to using this assessment method for the independent purpose of testing the student's understanding of, and the ability to apply, the skills theories.
The term "performance test" (PT) has come to mean, in the world of bar exams and legal education, an opportunity to draft a written document that a lawyer might produce; e.g. a deposition outline, a jury summation, an opinion letter, etc. Thus far, bar examiners have not yet extended it to include any oral lawyering performance or, indeed, to include a video component.51 The PT typically gives the test-taker a file with original fact documents and some law (cases, statutes, etc.) and asks the student to produce the requested document in a stated period of time based on that file. The PT is in stark contrast to the typical traditional law school exam, which gives a one paragraph hypothetical and asks the student to write a judicial opinion resolving the dispute.
In the "videotaped PT", the student is given similar fact and law files and then asked to provide an analysis of both doctrine and the interpersonal skill. For example, the student would then observe a lawyer counseling a client in the case reflected in the case files. In addition to analyzing the case and possibly producing a practice document of some sort (e.g. a negotiation outline), the student would also be asked to provide a critical analysis of the lawyer's counseling skills performance.
In evaluating a student's response to a videotaped PT, the clinician can assess the students' legal reasoning and analysis skills as well as their understanding of what worked or didn't work in the interviewing or counseling performance and why. This depiction of a lawyer applying the law in context offers the students the opportunity to analyze the lawyer's attempt to synthesize the law and the facts and then explain the law and the options available to the client. To make this assessment tool even more realistic, a professor might dispense with the typical in-class short time limits. For example, instead of giving students two or three hours to complete a videotaped PT, they might be given 2-3 days.52 As far as the ultimate grade in a live-client clinic is concerned, the video PT could play a role there as well.53 This, of course, assumes that learning and understanding skills models and theories is an objective of the clinician. The notion of a written final exam in a clinic, and often even in a simulation course, is contrary to most clinicians' pedagogical instincts. Based on my experience, the use of any kind of a written exam in a clinic remains rare. Once again, however, the starting point or premise for this survey of alternative methods of evaluating interviewing and counseling skills is that what we're now doing might not be as fair - or accurate, for that matter - as we might like to think it is. If that is true, we should be considering ways to move it closer to that ideal.54 The question is whether we remain open to improving our method(s) of evaluation.
[B] Multiple Choice Questions. The use of multiple choice questions on law school exams or the bar exam has often elicited controversy.55 For some law teachers, this testing method constitutes a poor substitute for an exam that calls on the student to write an answer demonstrating the ability to use and apply the law to a factual situation. The essay question continues to be the dominant form of law school exams.56 I was among those who reacted skeptically (even negatively) to the suggested use of multiple choice questions both on the bar exam and for traditional doctrinal courses. I never even contemplated using them for assessing interpersonal skills proficiency. Yet some professors rely exclusively on multiple choice questions (evidence is one example57) and others are even now proposing their use for skills evaluations.58 Overcoming a certain amount of resistance, we first began to use this assessment method in our first-year Lawyering course. Initially, we used multiple choice questions as ungraded in-class quizzes and then later as part of a final exam. One set of questions was based on a transcript of a videotaped interview that the students had viewed.

58 Professor Sergienko discusses the use of multiple choice questions for "skills testing" in a recent article, but his focus is on "higher level cognitive skills" (Sergienko, supra n. 30, at 493-501) and not the kind of interpersonal verbal skills that are the focus of this paper.

Journal of Clinical Legal Education August 2006
The students were asked to choose the best of four assessments of an exchange.59 Or, they might have to select the applicable concept or rule of law.60 Still another multiple choice question we used asked students to select the best assessment(s) of a lawyer-client interaction.61 There is much to be said for using such quizzes simply because of the speed with which the students receive the test results. The quick turn-around can be extremely useful in helping students improve their performance.62
As with other methods of assessing interviewing and counseling skills, the threshold issue is whether this one assists us in achieving our teaching objectives and thus merits a place in our repertoire of evaluation tools. I think the answer is a modest yes. While it does not evaluate a student's ability to perform, it does enable us to evaluate a student's understanding of skills theories and lawyering models. One prerequisite, of course, is that the questions be valid, reliable, and efficiently administered.63 Do the questions measure what they purport to measure? In our case, did the questions measure a student's cognitive understanding of basic interviewing and counseling concepts? I believe the answer is yes. While intellectual understanding of applicable lawyering theories does not mean someone will be a good interviewer, it does mean something, and it could later aid someone's actual ability to perform. More importantly, if a lawyer understands why certain interpersonal concepts are valid, that lawyer is more apt to ground her behavior in those principles than to act instinctively or spontaneously in a very ad hoc manner.
Finally, the use of multiple choice questions simply adds an alternative evaluation device. Its use means that a student's entire assessment is not unduly placed on one evaluation tool.64 It enables us to minimize the deficiencies of any one particular method of assessment. To the extent such questions are used in mid-semester quizzes and then repeated in a summative final exam, this approach is an effective and efficient method both of providing ongoing or interim feedback (always useful from a learning perspective) and of giving students notice of the testing method to be used on the final. From an instructor's perspective, it is also much easier and speedier to grade such questions than essay or other textual test responses, an especially important factor in large classes.
[C] Self Reflection or Self Evaluation. This has certainly been one of the traditional ways in which clinicians have engaged students in the process of evaluation. Indeed, teaching the skill of self-critique is itself often a clinical teaching objective.65 To the extent that student development of life-long learning habits is a goal, self-evaluation can be a valuable tool. A student's self-evaluation/critique often is done in the form of a written document, typically without any (or much) time pressure. But it also could be done more informally and only verbally. For me, in clinics, this is often part of a semester-end one-on-one meeting with students. The care with which a student analyzes her or his own performance will necessarily reflect the student's understanding and application of interviewing and counseling theories to the specific situation that is the object of the critique.66 And as discussed above, the same would be true of an analysis of someone else's performance. Both provide additional components of what might be a student's final grade on her interviewing and counseling skills.
[D] Standardized Clients. Still another method of evaluating student skills performance is the "standardized client". This method is based on the medical education model, the "standardized patient", in which a lay person is trained to portray a patient and then to give written feedback to the student interviewing and examining her. I have been experimenting with my adaptation of this tool to law school in several contexts.67 The lay persons are trained, first, to role-play clients or witnesses and, second, to complete an assessment of the student's performance on a checklist68 that the professors prepare. While dispositive results are not yet in, the preliminary findings69 suggest that this is a valuable assessment tool which gives students multiple opportunities70 to perform a skill and to receive valid and reliable feedback on how they are doing.
In our first-year Lawyering course, these exercises facilitate achieving the objective of introducing students to the basics of interviewing, counseling, and fact analysis. We do not purport to do more. At most, a student might reach a novice level as an interviewer. On a more mundane level, the SC exercises also offer students some feedback on how a stranger might perceive them. They also afford students the opportunity to achieve a higher comfort level in trying to engage a client or a witness in a conversation. For many students, it is the first time they have had such a human interaction with a stranger.
As I suggested at the outset, a major concern is whether our overall assessments are fair, sufficiently objective, and, ultimately, accurate measures of a student's interviewing and counseling skills. One of the reasons this device began to be used in medical education was that the professors concluded that their direct observations of their students' clinical skills were too variable and unpredictable.71 Observing a student performing a tonsillectomy is not the same as observing that student operating on a broken back. Similarly, a medical professor might be called away from an observation of a student for an emergency. The use of standardized patients was one way to ensure that all students got the full, as well as the same, array of cases to deal with.72 Very similar obstacles (and others as well) get in the way of law school clinicians in observing student performances on real cases. Thus, the standardized client could be a useful supplement even in our live-client clinics and simulation courses. It can also be valuable as a vehicle for introducing students to skills learning in situations where it is practically or financially impossible to provide professorial feedback on skills performances. That in fact is the context in which I have been conducting most of my experiments with this technique. Our first-year required course in Lawyering is a large class (we have four sections of 110 students each) which has as its purpose the introduction to fact analysis, interviewing, and counseling.73 One-on-one teacher feedback is not financially feasible; providing each student with three standardized client opportunities is.

Professor Maharg has gone further still, developing a virtual world in which students can practice law and have their virtual world law work evaluated.
Practicing law in the virtual world gives Maharg's students the opportunity to be creative and to do things not possible in a clinic, or even in a traditional simulation course, and to do so without the concern that they will make a mistake with a real client. For example, they can take the initiative to bring a lawsuit or, alternatively, to propose an ADR device to resolve a dispute without litigation, just to name two possibilities. The opportunities in the virtual world provide a potentially richer base upon which to assess the students' strategic as well as interpersonal lawyering skills, and the computer records provide the basis for those assessments. While he has not yet developed avatars who could be interviewed and counseled by the students in an interactive and interpersonal sense, he has taken simulation several steps further than anything I have seen in the U.S. There is certainly room for some American pedagogical initiative here.

Conclusion
Evaluating interviewing and counseling skills calls on us to apply different measurement tools from those used to assess students' grasp of doctrinal law. We should approach this responsibility as clinicians in the same open-minded manner in which we would like to think we approach all educational issues. Numerous ways to evaluate skills have been developed, and we should use, or at least consider the potential use of, all of them, with an eye toward refining the most appropriate mix in each situation. While clinicians have made much headway in developing sophisticated critique and feedback tools, less effort has been expended on the more formal grading evaluation. To the extent we make progress in developing law school measuring tools, we should try to carry those developments over to the bar exam.