Interpreter Services

Most health providers struggle to determine whether their interpreter workforce is sufficiently skilled because no valid, standardized tools exist to assess language proficiency or interpreting skills in a health care context. In the absence of such tools, health providers across the country use a variety of locally devised methods to assess the interpreters they hire, or rely on informal measures such as the opinion of their staff or of a trusted heritage community member. Some providers substitute language proficiency tests developed for business or academic settings. More recently, several commercial products have been developed to fill this need, but none has been empirically designed or validated.

In response to this gap, Hablamos Juntos demonstration sites are helping to pilot the Language and Interpreter Skills Assessment (L&ISA) program: four computer-administered assessment tools, created by a team of experts led by Dr. Claudia Angelelli, that measure language and interpreting skills in a health care context. The program was developed to measure the Spanish proficiency and interpreting skills of individuals who interpret in health care settings. The computer-administered tests were designed to measure basic interpreting skills as well as Spanish language proficiency at the intermediate and advanced/superior levels.

L&ISA Test Centers

Each Hablamos Juntos demonstration site identified a project administrator to oversee test administration in the region and two raters to score local test results. The National Program Office (NPO) reviewed the qualifications and approved candidates for these positions.

Instruction manuals were prepared to guide implementation. One manual provided instructions for downloading the software and registering the test center computers. Another provided instructions for establishing test centers. Guidelines were provided for recruiting, screening, scheduling, registering and administering the four tests. Sample applications, consent forms and scripts for administering each test were included.

Thirty-seven test centers were established in ten states, and 23 are currently actively testing. Test center computers are registered with a central server, and test results are automatically uploaded to that server when a test is completed.

User Experience

To attract test candidates, each grantee developed marketing materials localized for the region where testing is taking place. Test center staff members provide an orientation to the testing process and to how the software program works. Test takers are told that they are participating in a pilot project and that the tools (administered through a computer-based audio-visual software program) are new. The need for confidentiality about specific test content is stressed, and test takers are asked to sign a confidentiality statement to ensure test security.

Scoring

Each test is scored twice by trained raters. Raters attended a two-day workshop to learn about test administration and scoring, and received a manual with scoring guidelines and a rubric for scoring the four tests online. Scoring is completed on a scoring rubric embedded with audio files: raters access test results, click within the rubric to listen to the recorded responses, score the test, and submit their scores via the Internet. The identity of the test taker is not made available to the raters.

Local raters access their region's test results from a central server, and National Program Office raters also score tests from all test centers across the country. Dual scoring enables the program to evaluate the quality and consistency of ratings by individual raters. National-level scores will also enable the development of benchmarks for ranking individual results.

Skills Measured

Candidates are asked to complete one language proficiency test (Intermediate or Advanced/Superior) and the Interpreter Readiness Pre-Test on the first visit. The Interpreter Readiness Final Test is scheduled for a subsequent date.

Each tool measures a set of skills at different levels. The intermediate language proficiency tool tests listening comprehension, reading comprehension, literacy in Spanish, and speaking. The advanced proficiency test measures the same four skills at more advanced levels, as well as speaking across different registers (ways of speaking). Both Interpreter Readiness tools test attention to detail and sequences, social and cultural appropriateness, general language ability, and terminology. In addition, the pre-test measures speaking, and the post-test includes a test of the ability to sight-translate a consent form. The table below shows the skills measured by each of the four tests.

SKILLS TESTED                          LP-Int   LP-Adv   IR-Pre   IR-Post
Attention to details and sequences                       X        X
Cultural/Social appropriateness                          X        X
General language ability                                 X        X
Listening comprehension                X        X
Spanish literacy                       X        X
Reading comprehension - general        X        X
Speaking                               X        X        X
Speaking + register                             X                 X
Terminology                                              X        X
Reading comprehension - consent forms                              X

TEST KEY: LP-Int = Language Proficiency Intermediate; LP-Adv = Language Proficiency Advanced; IR-Pre = Interpreter Readiness Pretest; IR-Post = Interpreter Readiness Posttest

Individualized Self Improvement Plans

Since this is a national pilot evaluating a newly developed tool, test results in the form of numeric scores were not provided initially. Instead, the National Program Office gave test takers an Individualized Improvement Plan for each assessment test taken. The plans were individualized based on the results of the assessment: each identifies strengths and weaknesses and provides suggestions for improving current skills. Individual results cannot be released without written consent.

Test Development Team

The tests used for this project were developed by Claudia Angelelli, Ph.D. Dr. Angelelli worked with a panel of experts including Dr. Guadalupe Valdés, Stanford University (Heritage Speakers, Bilingualism, and Testing); Dr. Edward Haertel, Stanford University (Design of Measurement Instruments, Item Analysis, and Statistics); Dr. Mary Ann Lyman-Hager, San Diego State University (Language Testing and Technology); Prof. Christian Degueldre, Monterey Institute of International Studies (Translation and Interpreting); and Dr. Jean Turner (Language Testing and Applied Linguistics) to design the Spanish language proficiency and pre/post interpreter readiness assessment tools used in the L&ISA program. Three of the tools were developed for Connecting Worlds with funding from The California Endowment. The fourth tool, developed for Hablamos Juntos, uses the same design to assess high, medium, and low competency at the intermediate Spanish level.