Monday, December 3, 2012

Instructor Credibility


 Theoretical Basis:  Communications Theory

Classroom Credibility

This topic is a nice change from the previous three process-type subjects (testing and test items).  It has been my experience that most instructional design models don’t address the topic.  Information on presentation techniques and instructor behavior is usually found in the communications theoretical basis.  The goal of this post is to acquaint you with the dimensions of instructor credibility and provide you with methods to enhance it.

Prior Learning

My post Introduction to Communications Theory provides the foundation to build from.  To make the most of this post you need a model of communication to reference and a working knowledge of the attributes and roles of the sender and receiver.

What is credibility?

Credibility refers to the objective and subjective components of the believability of a source or message.  Instructor credibility has to do with the ability to leverage personal conduct, social practices, professionalism, and content expertise to command attention and respect from learners.

Why Does It Matter?

Research indicates that instructor credibility is one of the most important instructor attributes affecting the instructional process (1).  Applying Cognitive Learning Theory, it has to do with our “executive control function”.  The role of the executive control system is to select incoming information, determine how best to process that information, construct meaning through organization and inference, and then either transfer the processed information to long-term memory or delete it from the memory system altogether.  In short, if the message is not credible, we trash it.  Credibility therefore directly affects students’ effort and behavior, and hence their learning.


The Dimensions of Credibility

There are four major dimensions of instructor credibility: trust, competence, dynamism and immediacy.



Trust is defined as "placing confidence in the other". Trust must be earned through the pedagogical communication process that teachers display with their students. Any violation of this trust can potentially rupture the professional relationship that teachers need to maintain if honest dialogues are to occur.

Competence involves more than simply being knowledgeable. It involves a perception that others have of people concerning their degree of knowledge on topics, abilities to command such knowledge, and abilities to communicate this knowledge clearly. Teachers constantly face being evaluated and tested by students, concerning their level of knowledge on a variety of subjects.

Dynamism is the degree to which the audience admires and identifies with the source's attractiveness, power or forcefulness, and energy. This dimension correlates strongly with a person's level of charisma.

Immediacy is the degree of physical and psychological closeness, or distance, between the instructor and the students.

The following lists, compiled from various books and articles (see references below), offer examples and suggestions for increasing your credibility in each dimension.


Trust

• Adapt messages to listeners by being genuinely sincere and honest in the presentation of information
• Always follow up with learners and keep promises (real or implied)
• Introduce the sources (which may be trusted by students) used to develop class material
• Explain the soundness of the evidence, which helps reinforce trust between teacher and student
• Earn trust by showing trust toward students in the educational process
• Admit mistakes or lack of knowledge, and apologize if necessary and appropriate
• Demonstrate acceptable social practices
• Handle sensitive issues discreetly
• Demonstrate consistency in your words and actions, during training and outside of training

Competence

• Project an image of professionalism
• Identify strengths and weaknesses in information (e.g., reliability, biases) to demonstrate your honesty in presenting messages
• Seek feedback about yourself
• Familiarize yourself with the training material and content
• Describe your professional and academic credentials (but don’t brag)
• Prepare, prepare, prepare for training delivery
• Answer questions accurately and thoroughly
• Use appropriate terminology and avoid jargon

Dynamism

• Carry yourself with smooth movements and exude confidence
• Demonstrate a relaxed and comfortable posture (don’t slouch)
• Develop a powerful style of speaking that uses few verbal or vocal hesitancies
• Vary physical movements to complement the message
• Use gestures to describe and reinforce
• Use a variety of evidence, stories, and visual aids that add interest to the message
• Avoid a monotonous communication style
• Speak in color, expressing life, emotion, and animation
• Have a strong ending

Immediacy

• Demonstrate openness to learners
• Show respect for all learners
• Accept differences of opinion and experience
• Provide all learners with equal amounts of attention and avoid favoritism
• Avoid inappropriate humor
• Establish eye contact by periodically scanning the entire class
• Smile to disarm and relax students
• Attempt to reduce distance when possible by moving away from barriers (e.g., desks, podiums)

 
Dispelling an Urban Legend – Or an Opportunity for Improvement

As the adage goes, you only get one chance to make a first impression. So don’t waste it!

In my opinion, the good practice of gaining your audience’s attention by stating an “interesting fact or surprising statistic” gets abused when presenters just spout facts.  I think of our typical safety meeting that starts with some obscure statistic about how many “whatevers” happen every year.  I guess they forget about the interesting and surprising part.

I think Heath & Heath (1) suggest a better alternative: “When we’re trying to build a case for something, most of us instinctively grasp for hard numbers. But in many cases this is exactly the wrong approach.”  Let the learner test your ideas.  Instead of spouting numbers, ask a simple question that allows the learner to test them for themselves.

Now you have it – now you don’t

I think one result of our information-rich society is that our learners are less impressed by credentials.  Allgeier suggests, “Your position, status, or roles in life have nothing to do with your personal credibility factor. Different people play different roles in their careers, jobs, and other activities—and some are roles of very high authority—however, there’s no lasting connection between higher status/power and personal credibility” (2).  You gain or lose credibility through your behavior.  Instructor credibility must be earned in the classroom.

Make a Plan

Credibility is one of 14 competencies identified by the International Board of Standards for Training, Performance and Instruction (IBSTPI).  Ask one of your peers to review the credibility dimension lists above and identify an area you can improve in.  Set a SMART goal (specific, measurable, achievable, relevant, and time-bound) to improve that area.

Happy Holidays!

Cj

 References

(1) Heath, C., & Heath, D. (2007). Made to Stick: Why Some Ideas Survive and Others Die. New York, NY: Random House.

(2) Allgeier, S. (2009). The Personal Credibility Factor: How to Get It, Keep It, and Get It Back (If You’ve Lost It). Upper Saddle River, NJ: Financial Times Press.

King, S. B., King, M., & Rothwell, W. J. (2001). The Complete Guide to Training Delivery: A Competency-Based Approach. New York, NY: American Management Association.

Haskins, W. (2000, March 3). Ethos and pedagogical communication: Suggestions for enhancing credibility in the classroom. Current Issues in Education [On-line], 3(4). Available: http://cie.ed.asu.edu/volume3/number4/.

Zhang, Q. (2009). Perceived teacher credibility and student learning: Development of a multicultural model. Western Journal of Communication, 73(3), 326-347.

 

Thursday, November 8, 2012

Test Item Selection

Choosing the Appropriate Test Item Type or Validity Continued

In the October 2012 post, Test Construction and Validity, we took an in-depth look at the quality of an instructor-developed test by examining the validity of test items. This post focuses on applying those concepts through the appropriate selection of test item types.

If this blog were a synchronous medium (all of us participating at the same time) and I could ask a question and receive immediate answers, I would ask you to share the different types of test items that you use. I’m confident that, as a group, you would name most of the common test item types.



Each test item type has inherent strengths and weaknesses, and choosing the appropriate item type is like choosing a vehicle to carry a load. To choose a suitable vehicle you must know the load specifications and the circumstances it will be operated in. The load specifications for your test item types come from your learning objectives.

The objective’s “action verb” provides the class of load and the “conditions” define the operating environment.


Some test conditions require a substantial vehicle
Classification Systems (Taxonomies)

Trucks come in different weight classes. You’re probably familiar with the classic half-ton pickup truck: good for small, dry loads and not really suited to carrying liquids. So how do you know what class of test item you need? There are multiple systems for classifying learning outcomes (think objectives). You’re probably most familiar with Bloom et al.’s taxonomy of learning domains. Think of the taxonomies as load classifications. For example, consider the following learning objective:

First aid students will be able to state the ten steps of cardio-pulmonary resuscitation (CPR) in the order of performance, from memory, with complete accuracy. The action verb is “state”.

Working from a list of action verbs, you would find this verb in the knowledge category within the Cognitive domain. Within the knowledge category are several possible test item types, including true/false, completion, multiple-choice, short answer, and interview.

To choose the appropriate test item type, you must also consider the learning objective’s conditions. To continue the vehicle analogy, it is like the difference between 2-wheel and 4-wheel drive: if you plan on getting your load (action verb) over muddy or icy ground, you will need a 4-wheel drive vehicle. Since the conditions specify “from memory”, I submit the only appropriate choice is an interview. It is the only test item type that supports both the action verb (behavior) “state” and the condition “from memory”. Some might ask, why not the ubiquitous multiple-choice format? Once you supply the correct answer to the student, the behavior changes to “recognize” and the condition becomes “given a list” rather than “solely from memory”.
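
If you like to tinker, here is a minimal Python sketch of that verb-plus-condition lookup. Everything in it is an illustrative assumption on my part: the verb list, the item-type list, and the “supplies the answer” rule are placeholders, not an authoritative taxonomy table.

# Illustrative sketch: pick candidate test item types for a knowledge-level
# objective from its action verb and condition. The mappings are examples only.
KNOWLEDGE_VERBS = {"state", "list", "define", "recall", "name"}
KNOWLEDGE_ITEM_TYPES = ["true/false", "completion", "multiple-choice",
                        "short answer", "interview"]
# Item types that hand the answer to the student, turning "from memory"
# into mere recognition
SUPPLIES_ANSWER = {"true/false", "multiple-choice"}

def candidate_item_types(action_verb, condition):
    """Return item types compatible with the objective's verb and condition."""
    if action_verb.lower() not in KNOWLEDGE_VERBS:
        return []  # other Bloom categories would need their own mapping rows
    candidates = list(KNOWLEDGE_ITEM_TYPES)
    if "from memory" in condition.lower():
        candidates = [t for t in candidates if t not in SUPPLIES_ANSWER]
    return candidates

# The CPR objective: verb "state", condition "...from memory"
print(candidate_item_types("state", "in order of performance, from memory"))
# -> ['completion', 'short answer', 'interview']; the spoken verb "state"
#    narrows the choice further to the interview, as argued above.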

Below is a table of suggested test item alignments with the various levels within Bloom’s Cognitive domain.



Source: Carnegie Mellon

Other Options

In my post on Learning Theories and Instructional Design, I suggested that each theory has strengths and weaknesses, and one of my goals for this blog is to expand your options. So I am going to include Gagne’s learning outcomes (which are grounded in cognitive theory, versus Bloom’s, which are grounded in behavioral theory) as an alternative set of domains. Why? Some day you may wish to include learning objectives (outcomes) that are outside the behavioral realm; “cognitive strategies”, for example.

Robert Gagne’s scheme of learning outcomes is:
• Intellectual Skills (How to do something. Generally what is learned is procedural knowledge)
• Cognitive Strategy (Skills that govern one’s own learning, remembering and thinking)
• Verbal Information (Knowledge of facts, events and rules. Something you can state)
• Motor skills (Physical skills: skate, steer an automobile, printing/writing, keyboarding)
• Attitude (amplifies a person’s positive or negative reactions to, or choices about, circumstances)

Note: Intellectual Skills have four distinct classes. The same logic for choosing appropriate test item types can be applied to Gagne’s learning outcomes (think domains). See a table of appropriate test item types based on Gagne’s learning outcomes.

Learning Aid:

From information to practice: offer to review one of your co-workers’ tests. Get a copy of the learning objectives and evaluate the test item types used against the action verb AND conditions of the associated learning objective, using one of the domain (outcome) schemes. Talk over any discrepancies and suggest alternatives. CAUTION: this may lead you to re-evaluate the quality of your objectives.

Best regards,

Cj

Wednesday, October 3, 2012

Test Construction and Validity


Theory Basis:  Psychometrics

In the August 2012 post, Evaluating Training – What’s It All About?, we looked at a method for determining whether completed training transferred to practical use on the job. This post focuses on the concept of validity as it applies to instructor-developed tests prepared by internal talent.

Validity Defined:

Something is “valid” (has validity) when it can actually support the intended point or claim; it is acceptable as cogent, as in “a valid conclusion”.

Narrowing Down the Topic

As in other posts, I began by mind mapping the topic.  The map grew at a geometric rate, and I realized that some of the topics on the mind map could yield maps of their own.  I narrowed the information down to support the content of this post and came up with the following:

To provide a frame of reference for this post, you can follow the topics in blue text. Beginning with the realm of educational evaluations, we are going to focus on evaluating the individual, specifically the student, and on the evaluation that takes place at the conclusion of instruction: the summative evaluation.  From the types of summative evaluations, we’ll concentrate on internal, instructor-developed, criterion-referenced tests and on how to ensure their quality by making them valid.

Clarification

The term criterion is commonly misunderstood. Many, if not most, criterion-referenced tests involve a cut-score: the examinee passes if their score meets or exceeds the cut-score and fails if it does not (often called a mastery test). The criterion is not the cut-score; the criterion is the domain of subject matter that the test is designed to assess.
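
To keep the two ideas straight, here is a tiny, purely illustrative Python sketch: the criterion is the content domain the items sample, while the cut-score (the 80 percent used here is an assumed value) is only the pass line applied to the resulting score.

# The criterion is the subject-matter domain the items sample; the cut-score
# is just the pass line applied to the score. Values here are illustrative.
def mastery_decision(raw_score, total_items, cut_score_pct=80.0):
    """Pass if the examinee's percent score meets or exceeds the cut-score."""
    pct = 100.0 * raw_score / total_items
    return "pass" if pct >= cut_score_pct else "fail"

print(mastery_decision(17, 20))  # 85% against the assumed 80% cut-score -> "pass"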

 
Making the Connection to ISD

For us, the subject matter is the set of tasks derived in the analysis phase of your chosen design approach.  It is the behavioral repertoire (the kinds of things each examinee can do).  This repertoire is found in the action verbs of the objectives developed from the tasks identified in the analysis process.

Making our test’s validity cogent depends on two qualities: content validity and construct validity.

Content validity is the extent to which the items on the test are representative of the domain or universe they are supposed to represent (the criterion).  To impart content validity to your test, you may only ask questions related to the objectives.

Construct validity is the extent to which the test measures the traits, attributes, or mental processes it is supposed to measure.  This comes from the construction of the test items.  To be valid, a test item’s action verb must be congruent (matching) with the verb in the learning objective.  Berk (1) refers to this as “item-objective congruence” and goes on to say it is “the most important item characteristic”.  Graphically, it could look like this:
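
As a rough, code-flavored illustration of the same idea (the data structures and the simple verb comparison are my own simplification, not a published procedure), you could flag items whose action verb is not congruent with the objective they reference:

# Flag test items whose action verb does not match the verb of the learning
# objective they claim to test. The objective and item data are illustrative.
objectives = {
    "OBJ-1": {"verb": "state"},
    "OBJ-2": {"verb": "demonstrate"},
}

test_items = [
    {"id": "Q1", "objective": "OBJ-1", "verb": "state"},      # congruent
    {"id": "Q2", "objective": "OBJ-1", "verb": "recognize"},  # verb mismatch
    {"id": "Q3", "objective": "OBJ-3", "verb": "list"},       # no such objective
]

for item in test_items:
    objective = objectives.get(item["objective"])
    if objective is None:
        print(item["id"], "- no matching objective (a content validity problem)")
    elif item["verb"] != objective["verb"]:
        print(item["id"], "- verb", repr(item["verb"]),
              "is not congruent with objective verb", repr(objective["verb"]))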

How good is good enough?

Is it necessary to test all the behaviors in a criterion?  In general, I would say “yes”.  For those of us who follow HM-FP-01, Section 3.2, Examination Preparation, Administration, and Control, the answer is “it depends”.  Part 4, Developing Examinations, provides guidance: if you’re developing items for an item bank, every learning objective gets three exam items; for individual exams, 80% of the learning objectives should be covered.  On top of that, a note in Step 8 indicates “All objectives should be adequately covered in the exam items.”  (I guess you get to define “adequately”!)
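
Here is a small sketch of how you might audit a test against those two numbers. The data structures are hypothetical and this is not an official HM-FP-01 tool; it just restates the arithmetic above in Python.

# Two checks from the guidance above: an item bank should hold at least three
# items per objective, and an individual exam should cover at least 80% of
# the objectives. All data below is made up for illustration.
from collections import Counter

def item_bank_shortfalls(bank_items, objectives, per_objective=3):
    """Objectives that have fewer than the required number of bank items."""
    counts = Counter(item["objective"] for item in bank_items)
    return [obj for obj in objectives if counts[obj] < per_objective]

def exam_coverage_pct(exam_items, objectives):
    """Percentage of the learning objectives touched by the exam items."""
    covered = {item["objective"] for item in exam_items} & set(objectives)
    return 100.0 * len(covered) / len(objectives)

objectives = ["OBJ-1", "OBJ-2", "OBJ-3", "OBJ-4", "OBJ-5"]
exam = [{"id": "Q1", "objective": "OBJ-1"},
        {"id": "Q2", "objective": "OBJ-2"},
        {"id": "Q3", "objective": "OBJ-3"},
        {"id": "Q4", "objective": "OBJ-4"}]

print("Exam coverage:", exam_coverage_pct(exam, objectives), "percent")  # 80.0
print("Bank shortfalls:", item_bank_shortfalls(exam, objectives))        # all five objectives short of 3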

Setting the cut-score is mostly a matter of judgment or negotiation. Your best bet is to follow your established standard; some standards can be found in the training program descriptions (TPDs), others in company procedures and guides.  If you would like to investigate this topic further, I suggest reading Methods for Setting Cut Scores in Criterion-Referenced Achievement Tests: A comparative analysis of six recent methods with an application to tests of reading in EFL, by Felianka Kaftandjieva.

Challenge

The next time you prepare a test, take the time to evaluate your questions for validity by ensuring that the questions come from the criterion and that there is congruence between each objective and its test item.

Regards,

Cj

References

Berk, R. A. (Ed.). (1982). Criterion-Referenced Measurement: The State of the Art. Baltimore, MD: Johns Hopkins University Press.

 

Tuesday, September 11, 2012

Conceptual Models for Instruction



ID Basis: Learning Theory and Systems Theory

Ever wonder what makes the difference between an average instructional session and an outstanding one?  Raw talent and luck may account for some of it, but to design outstanding instruction consistently, you need a conceptual model to work from.

Somewhere in your Instructional Design Process you will need to plan what is going to happen in the learning environment to aid learning; the “events of instruction”.  This post will focus on the tools available to the designer when creating a learning environment.

This post will build on the July 2012 post on Learning Theories and Instructional Design. Recall the mantra, “…designed instruction must be based on knowledge of how human beings learn.” (1)  A learning theory delineates the processes used to teach; it provides a conceptual model of learning.  The model specifies the strategies and tactics used within a learning event.  A model is like a road map: it lets you decide your path and helps you know where you are along the way to your destination.
Just as there are multiple theories about how humans learn, there are corresponding models for designing instruction under each theory.  So which one is the right one?  Just as each learning theory has its strengths and weaknesses, so does its corresponding learning model.  How do you choose a model?

First, find the models associated with your theory of choice. You will find a continuum of models from general to detailed, just as maps range from those showing only major highways and cities to detailed city street maps.  Then…

Cj’s Design Model Decision Heuristic:

1. Applying your chosen philosophies and theory of learning, and considering your experience, the terminal objective, the target population, and wider system needs, choose the conceptual learning model.

2. Applying your chosen conceptual learning model, and considering the detailed (enabling) objectives, entry-level skills, and the actual resources and constraints, choose the instructional strategies.

3. Applying your chosen instructional strategies, and considering the content, the types of learning outcomes, and knowledge and skills taxonomies, choose the instructional tactics.

Finding the one that works for you depends on how experienced you are at designing instruction, much as how well you know the route determines how detailed a map you need when traveling to a destination. The less experienced you are, the more helpful a detailed model will be.  In addition, the roles of the instructor and the learner can differ greatly depending on which theory you choose, and you may find yourself challenged to take on a different role.

Model Examples

An example of a general model is the one offered by Stolovitch & Keeps in their book Telling Ain’t Training (2).    It shows the steps to follow for each learning objective and offers general guidance on what should take place at each step.  See the graphic below.
Stolovitch & Keeps 5 Step Model

One of the more detailed models is the Smith & Ragan Model. It has fifteen “events of instruction” and offers detailed strategies and tactics depending on the type of learning outcomes you are planning for.

Instructional Strategies & Tactics Defined
Jonassen & Harris (2) offer the following to clarify the concepts of strategies and tactics. “In a general sense, strategies are a set of decisions that result in a plan, method, or series of activities aimed at obtaining a specific goal.  Tactics are specific plays or interventions in the process that are used to enact the strategy.”  For example, an instructional strategy would be to “gain the attention of the learner”.  A tactic for implementing the strategy would be to “pose a question to the learners”.
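
One low-tech way to keep a growing collection of strategies and tactics organized is a simple lookup like the Python sketch below. Only the “gain attention / pose a question” pairing comes from the example above; the other entries are placeholders you would replace with your own collection.

# A tiny catalog mapping instructional strategies to candidate tactics.
strategy_catalog = {
    "gain the attention of the learner": [
        "pose a question to the learners",        # example from the post
        "open with a short scenario or story",    # illustrative placeholder
    ],
    "give them things to do": [
        "assign a short case to work through in pairs",  # illustrative placeholder
    ],
}

def tactics_for(strategy):
    """Return the tactics recorded for a strategy (empty list if none yet)."""
    return strategy_catalog.get(strategy, [])

print(tactics_for("gain the attention of the learner"))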

The more detailed models usually offer specific strategies and tactics.  In addition, there are sources of strategies that are independent of any model; these can work well with a basic model like the Stolovitch & Keeps one above.  When you get to the step where you “give them things to do”, having other sources is helpful and being eclectic pays off.  Here are my top five resources for strategies and tactics:






 
They are all available via on-line book sources and/or in the HAMMER Safety Library.

Challenge

The next time you finish an analysis and have a good idea of what the content will be, review the different learning theories and determine which one best fits the learning outcomes you are going to design for.  Then find a corresponding model and try to apply it for at least the first four learning objectives.  Or take one strategy and try a different tactic.

Best regards,

Cj

References:

1. Gagne, R., Briggs, L., & Wager, W. (1992). Principles of Instructional Design (4th ed.). Orlando, FL: Harcourt Brace Jovanovich.

2. Stolovitch, H. D., & Keeps, E. J. (2002). Telling Ain’t Training. Alexandria, VA: ASTD Press.

Mayer, R. E. (Ed.). (2005). The Cambridge Handbook of Multimedia Learning. New York, NY: Cambridge University Press.