
CUE Forum 2008
Research design:
The backbone of academic
inquiry
Peter Neff – Doshisha University
Matthew Apple – Nara National College of Technology
David Beglar – Temple University Japan
Olympic Memorial Youth Center
November 1, 2008, 1:15 - 2:50 p.m.
Overview
 Introduction: The importance of good research design
 Approaching the study
 Developing the study design
 Break for Q&A
 Designing the right instrument
 Implementing your design
 Conclusion
 Further Q&A

Introduction: The importance
of good research design
Poor design vs. Good design

Poorly-designed study
Hit on an idea, dive right in
No background research
Throw together a survey, give to a group of
unwary participants
Collect data, then ponder how to analyze
Run to a colleague for help
Fish around for most “interesting” findings
Pray to get published
Poor design vs. Good design

Well-designed study
Hit on an idea, do background research
Formulate relevant, specific, practical RQs
Consider participants, context, data analysis in
advance
Decide/develop instrument; pilot and revise it
Decide on appropriate pre/post-test instruments
Plan stages and structure of data collection
Prepare participants adequately
…Then carry out the study
The importance of good design

A well-designed study provides many
benefits:
– Demonstrates researcher knowledge
– Ties the study to an underlying
philosophy
– Provides a clear path for the
researcher(s)
– Helps avoid mishaps of previous
studies
The importance of good design

Other benefits
– Leads to more concrete results,
more definitive conclusions
– Improves chances of publication
– Raises the status of SLA as a field of
inquiry
A word about mixed methods
designs
The great quan-qual debate
 Mixed methods – the “best” of both worlds
 Add a qualitative component to a quantitatively-oriented study:
– Participant interviews
– Observational, audiovisual data
– Open-ended survey questions
 Plan for qualitative analyses (text analysis, response coding)
Part 1
Approaching the Study
Approaching the study

Hitting upon research ideas

Review of the literature

Formulating research questions
Hitting upon research ideas
 Identify the topic in a few words
 Reflect on “doability” of research
– Can I research this?
– Should I research this?
– Am I interested in researching this?
 Review of the literature can help redefine and revise ideas
Identifying the topic:
Hints for starting to narrow

Pose a short question using “what” or
“how”

Write a short title that consists of one
sentence under 12 words

Ask a friend or colleague to read your
topic and gauge their reactions

Draft research questions to see if the
topic can be adequately explored
A “researchable” topic

“Can I do this in my current situation?”

“Does this concern people at other
institutions?”

“Does this add to the current body of
research related to this topic?”

“Does this study contribute something
from a unique perspective?”
Filtering “probably not so good”
ideas:

To boldly go where no research has gone
before… (The “Star Trek” idea)

My theory is clearly better than X
(The “Stephen Krashen is so wrong” idea)

My classroom is totally unique
(The “I don’t need theory” idea)

This is a really cool technology /
methodology / text book
(The “I am primarily a teacher” idea)
Filtering ideas: A few hints

Review research designs and
statistical techniques

Review teaching methods and overall
SLA research results

Evaluate access to potential study
participants

Plan time for material creation, study
design, and implementation
Review of the literature

Relate the study to continuing
“dialogue” in current research

Find a “gap” in the literature

Provide a framework for the
importance of the study
Review of the Literature:
Finding a “gap” in knowledge

“We do not know enough about X…”

“This way of looking at X has never
been done…”

“This way of learning about X has not
been duplicated in my context”

“Previous research has inadequately
explored X…”
Finding literature: Some hints
 Google Scholar using key words or researcher names
 Scour recent literature review articles
 Check for “cited” numbers online
 Get access to university databases
 Refer to recently published articles
– After the year 2000
– During the previous 2 to 3 years
 Examine “outside the field” articles
Finding literature:
Separating the wheat…
“Top tier” journal articles
 Most-often-cited articles
 Recent articles
 Research articles (not reviews)
 Books / Edited book-articles
 Major international conference papers
 Dissertations / dissertation abstracts

…from the chaff
“In-house” journal articles
 Articles from “proceedings” books
 Online journal articles with only .html
versions
 Unedited books from small publishers
 Newspaper and magazine articles
 Web pages
 Anecdotal evidence
 Your own previous papers for an MA
course

Research questions:
A few useful guidelines
Naturally flow from the literature review
 Strongly connected to the topic
 At least two or three (not one)…
 …but not five or six or more
 As specific as possible
 Directly concern variables in the study
 Do not contain yes/no question words

RQs: What not to ask
“Is X true/false?”
 “Will X happen if…?”
 “Does X cause Y?”

“What do participants think of X?”
 “Why does X happen?”

RQs: What to ask
“What differences exist between…”
 “Compared to X, how does Y…?”
 “To what degree do X and Y differ…?”
 “When X is controlled for Y…, how
does Z…?”

“What are underlying patterns
among…?”
 “To what degree does X predict Y?”

Part 2
Developing the Study Design
Research design

Cross-sectional design: A design in
which data are collected from a sample
at only one point in time.

Longitudinal design: A design in which
data are collected at more than one
point in time.
Randomized Control-Group
Pretest-Posttest Design
Experimental Group 1:   T1   Xa (Method a)   T2
Experimental Group 2:   T1   Xb (Method b)   T2
Control Group:          T1                   T2
Randomized Control-Group
Pretest-Posttest Design

Reasonably strong conclusions can be reached
about the effects of the treatments.
 Problem 1: Within session variation (e.g.,
different teachers or room conditions) may
intervene.
 The solution? Randomly assigning participants,
times, and places to the experimental and
control conditions.
 Problem 2: The pretest may interact with the
treatment. This potential problem is dealt with in
the next design.
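A minimal analysis sketch for this design (my addition, not part of the slides): once T2 minus T1 gain scores are computed for each randomly assigned group, a one-way ANOVA can compare the two methods against the control. The group lists below are hypothetical placeholder data; scipy is assumed to be available.

from scipy import stats

# Hypothetical gain scores (T2 - T1) for each randomly assigned group
gains_method_a = [12, 9, 15, 11, 8, 14]   # Experimental Group 1 (Xa)
gains_method_b = [7, 10, 6, 9, 11, 8]     # Experimental Group 2 (Xb)
gains_control  = [3, 5, 2, 6, 4, 3]       # Control Group

f_stat, p_value = stats.f_oneway(gains_method_a, gains_method_b, gains_control)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

An ANCOVA using the pretest as a covariate is a common alternative when pretest differences between groups are a concern.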
Randomized Solomon Six-Group
Design
Group 1 (pretested, random assignment):     T1   Xa (Method a)   T2
Group 2 (pretested, random assignment):     T1   Xb (Method b)   T2
Group 3 (pretested, random assignment):     T1                   T2
Group 4 (unpretested, random assignment):        Xa (Method a)   T2
Group 5 (unpretested, random assignment):        Xb (Method b)   T2
Group 6 (unpretested, random assignment):                        T2
Randomized Solomon Six-Group Design
This design amounts to doing the experiment twice – once with and once without pretesting.
 It is possible to know what effects, if any,
are associated with pretesting.
 If the results of the “two experiments” are
consistent, greater confidence can be
placed in the findings.

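A hedged sketch of one way to exploit the Solomon design (not shown in the original slides): a two-way ANOVA on the posttest (T2) scores, with method and pretesting status as factors, flags any pretest effect or pretest-by-treatment interaction. The DataFrame values below are invented placeholders; pandas and statsmodels are assumed.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical posttest scores for the six groups (two scores per cell, for brevity)
df = pd.DataFrame({
    "posttest":  [78, 82, 75, 80, 74, 79, 81, 77, 72, 76, 70, 73],
    "method":    ["a", "a", "b", "b", "none", "none"] * 2,
    "pretested": ["yes"] * 6 + ["no"] * 6,
})

model = ols("posttest ~ C(method) * C(pretested)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # the interaction row reflects pretest sensitization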
Counterbalanced Design

This design is useful when randomization is not
possible and intact groups must be used.
Replication   Xa   Xb   Xc   Xd
1             A    B    C    D
2             B    D    A    C
3             C    A    D    B
4             D    C    B    A
(A, B, C, D = the intact participant groups; each group receives every treatment once.)
Counterbalanced design

The counterbalanced design rotates out
the participants’ differences (e.g., one
group has more aptitude or motivation than
the other groups) by exposing each group
to all variations of the treatment.
 Order-of-presentation effects are controlled.
 Primary weakness: The possibility of
carryover effects from one treatment to the
next exists. Allowing time between
treatments can alleviate this problem.
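For illustration only (my own sketch, not from the presentation), a few lines of Python can generate a rotation in which every intact group meets every treatment exactly once; the table on the previous slide shows one such balanced arrangement.

groups = ["A", "B", "C", "D"]            # intact participant groups
treatments = ["Xa", "Xb", "Xc", "Xd"]    # treatment variations

for replication in range(len(groups)):
    # rotate the group order by one position for each replication
    order = groups[replication:] + groups[:replication]
    row = "  ".join(f"{t}:{g}" for t, g in zip(treatments, order))
    print(f"Replication {replication + 1}:  {row}")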
Control-Group Time-Series Design
Experimental Group 1:   T1 T2 T3 T4   Xa (Method a)   T5 T6 T7 T8
Experimental Group 2:   T1 T2 T3 T4   Xb (Method b)   T5 T6 T7 T8
Control Group:          T1 T2 T3 T4                   T5 T6 T7 T8
Control-Group Time-Series
Design

This design allows the researcher to
determine growth over time, and the effect
of an intervention.

The presence of a control group increases
the trustworthiness of the results because
the possibility of a contemporary event
causing any gains can be determined.
Control-Group Time-Series
Design


This design can be extended by exposing the
participants to the intervention on multiple occasions.
This approach is more sensitive to partial gains in
knowledge and tests the strength of the intervention
more than once, thus giving the researcher a more
accurate understanding of the effectiveness of the
intervention.
Experimental Group 1:   T1 T2   Xa   T3 T4   Xa   T5 T6
Experimental Group 2:   T1 T2   Xb   T3 T4   Xb   T5 T6
Control Group:          T1 T2        T3 T4        T5 T6
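As a rough, hypothetical illustration (not part of the slides), plotting each group's mean score at every test occasion makes growth over time and the effect of the repeated intervention easy to inspect. The means below are invented; matplotlib is assumed.

import matplotlib.pyplot as plt

occasions = ["T1", "T2", "T3", "T4", "T5", "T6"]
means = {
    "Experimental Group 1 (Xa)": [42, 43, 51, 53, 60, 61],
    "Experimental Group 2 (Xb)": [41, 42, 48, 49, 54, 55],
    "Control Group":             [42, 43, 44, 45, 46, 47],
}

for label, values in means.items():
    plt.plot(occasions, values, marker="o", label=label)

plt.ylabel("Mean test score")
plt.title("Group means across test occasions")
plt.legend()
plt.show()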
Q&A Break
Part 3
Designing the Right Instrument
Instrument Design
 Commonly used instruments in SLA research
– Scored tests
– Rater scores
– Surveys
– Interviews
 Consider your eventual data analysis when developing instruments
Instruments - Scored tests
Pluses
 Quantitative items (M/C, Cloze/C-tests)
– Simple to score large # of participants
– Easier to analyze
 Qualitative items (short answer, timed essays)
– Good complement to quantitative scores
– Can provide more in-depth assessment of participants’ abilities
Minuses
 Quantitative items
– Limited to one type of data
 Qualitative items
– Take more time/effort to score
– Rater bias
Instruments – Performance
ratings
An assessment of participants’
performance in an assigned task
 Tasks may include presentations,
interviews, written essays
 Performances can be scored using a
Likert-scale, a rubric, or holistically
 Usually scored by at least two “expert”
raters; sometimes also by peers

Performance ratings
Rating criteria should be concretely
established with little ambiguity
 Avoid including too many (or too few)
criteria for one performance task
 All raters should undergo a “normative”
training session prior to assessment
 Use models to train raters
 Avoid single-score holistic ratings

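One supplementary check that is not on the slide but often accompanies two-rater scoring: a quick correlation between the raters' scores as a rough index of consistency (more formal agreement indices also exist). The scores below are hypothetical; scipy is assumed.

from scipy import stats

rater_1 = [4, 3, 5, 2, 4, 3, 5, 4]   # Rater 1's rubric scores
rater_2 = [4, 3, 4, 2, 5, 3, 5, 3]   # Rater 2's scores on the same performances

r, p = stats.pearsonr(rater_1, rater_2)
print(f"Inter-rater correlation: r = {r:.2f} (p = {p:.3f})")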
Instruments - Surveys

Often used for:
– Collecting learner history data (L2 study experience, other background info)
– Assessing participants’ attitudes towards
a predetermined construct (language
learning motivation, anxiety using the L2)
– Determining reactions to an experimental
treatment (teaching methods, innovative
learning tasks)
Survey making
For non-advanced learners – surveys
should be in their L1
 Build in redundancy - Include multiple
questions for each concept area
 Questions should be simply worded –
avoid negative or confusing wording
 Depending on the purpose of the
survey, 20-40 items/session is a good
range to shoot for

Survey making
Any survey used in a serious study
should be piloted in advance
 It is acceptable to make adjustments to
an existing instrument
 Likert-scale items should usually have
between 4 and 6 choices
 A few qualitative questions can provide
a nice complement to quantitative
instruments

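Since piloting is recommended above, here is a minimal sketch (my addition) of one common pilot check: estimating internal consistency (Cronbach's alpha) for a block of Likert-scale items meant to tap the same concept. The response matrix is hypothetical, with rows as respondents and columns as items; numpy is assumed.

import numpy as np

responses = np.array([
    [4, 5, 4, 3],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 3, 4],
])

k = responses.shape[1]                          # number of items
item_vars = responses.var(axis=0, ddof=1)       # variance of each item
total_var = responses.sum(axis=1).var(ddof=1)   # variance of the summed scale
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")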
Instruments - Interviews
 Interviews can provide an excellent qualitative component to a larger study
 It is not necessary to interview all participants
– A subsample as small as 10-20% can be acceptable
 Use your best judgment on participants’ language ability
– For intermediate-and-above learners, L2 interviews are often fine
Conducting interviews
Inform students they are being
interviewed, obtain consent
 Record unobtrusively
 “Warm up” the participants before
getting into the heart of the interview
 Collect more data than you need

Validating Instruments
Instrument Validity
The construct = The heart of the matter
 What construct do you wish to
measure?
 How do you define the construct?
 What are its component parts? Do they
form a unified whole?

Operationalizing the
construct: The items
 Conceptualize the construct as a continuum: easy—difficult items and less able—more able persons.
 How have other researchers measured the construct?
 Write original or adapted items.
 Cover the estimated range of your participants.
 Write 50% more items than you intend to use. This will allow you to “cherry pick” the best items as well as items at various levels of difficulty.
Operationalizing the
Construct: The Items
More able persons   |   More difficult items
                    |
        x           |   item 1
        xx          |   item 2   item 3
        xxx         |   item 4   item 5
        xxxx        |   item 6   item 7   item 8
        xxxx        |   item 9   item 10   item 11
        xxx         |   item 12   item 13
        xx          |   item 14
        x           |   item 15
                    |
Less able persons   |   Less difficult items
Operationalizing the
Construct: The Items
After piloting the items, statistically
analyze the results.
 Examine dimensionality, item difficulty,
and item content.
 Select the best items to make an
efficient, highly reliable instrument.

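As an illustrative sketch of the post-pilot analysis described above (the numbers are invented, and dimensionality checks would need dedicated factor-analytic or Rasch software), basic item statistics can be computed directly: item difficulty as the proportion correct, plus a corrected item-total correlation as a rough discrimination index. numpy is assumed.

import numpy as np

# Hypothetical pilot data: rows = participants, columns = items (1 = correct)
scores = np.array([
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [1, 0, 0, 1, 1],
])

difficulty = scores.mean(axis=0)        # proportion correct per item
totals = scores.sum(axis=1)
for i in range(scores.shape[1]):
    rest = totals - scores[:, i]        # total score excluding the item itself
    discrimination = np.corrcoef(scores[:, i], rest)[0, 1]
    print(f"Item {i + 1}: difficulty = {difficulty[i]:.2f}, "
          f"discrimination = {discrimination:.2f}")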
Part 4
Implementing Your Design
Implementing the design

Including other researchers

Practical issues

Handling ethical concerns
Including other researchers in
the study
 The nature of the researchers involved
– Main researcher plus “helpers”
– One researcher plus “other participants”
 The nature of the instructors involved
– Teaching methods
– Students taught
– Course goals
– University program goals
Working with other researchers

Work with people you know and trust

Establish a schedule early

Define clear roles for each researcher

Decide definite research goals prior to
data collection

Keep in regular contact

The “band practice time” principle
Example research roles

Head researcher / contact person

Data entry specialist

Statistician

Interviewer

Literature analyst

Editor / proofreader
Heading off potential problems

Explain study commitments prior to starting
the study

Agree on “ownership” prior to data collection
and data entry
–
Who will keep the data?
–
Whose name comes first, second, etc.?

Keep everyone aware of deadlines

Include everyone in decision-making
processes and data analysis
Things to avoid in group
research
People you don’t know
 Research groups larger than four or
five members
 Using someone for language skills
 Involving others just to get more
participants
 Forgetting to thank others for their
assistance

Improving relations with
research helpers

Write clear instructions

Thank them profusely for their time and effort

Offer to send copies of final research
papers and/or results

Offer to assist in future research

Keep in touch after research ends
Practical Issues

Timing of implementation

Learning and research context

Participant consent

Financial considerations
Timing of the implementation

Beginning, middle, or end of semester

Day of the week

Time of day

Exams and exam preparation periods

“Culture Festivals” or other club-related
events

“Open classes” or “parents’ day”
Learning and research context

Differing course goals (e.g., listening class vs. reading class)

Different major field of study

Gender, age, year in school

Number of class meetings

Perception of the value of research by
institution heads
Participant consent

Always allow for “non-participation”
choice from potential participants

Write clear instructions for participants
asking for their cooperation

Ask co-researchers or helpers to briefly
inform participants about their choice
Financial considerations
Copies for questionnaires, exams, etc.
 Computer analysis software
 Mailing costs
 Interview travel costs
 Recording equipment

Reference books
 Journal article costs

Heading off lack of
cooperation problems

 Review requirements of the study
– How many items in the questionnaire?
– How many treatments?
 Recognize that students are busy, tired, etc.
 Plan to get 30-50% more data than you need
 Try to get more data at a later date

Ethical Considerations
Ethical considerations
 Students should not be exploited just because they are there
 In theory, they should derive benefit (directly or indirectly) from the research
 Explain the basic purpose of your research before collecting data
– 1-2 sentences should be fine
 It is also good to briefly explain this at the beginning of any survey instrument
Ethics - Consent
Provide students with the chance to opt
out of participation
 Verbal consent is usually acceptable
for surveys, ratings, and test scores
 A written consent form may be
necessary for more involved forms of
participation (interviews, essay
passages)
 When in doubt, check recent articles in
well-known journals for guidance

Ethics – Other considerations
The role of this study within your
institution
 Potential gender, proficiency or other
issues that may affect your data or
conclusions
 Have a plan for anonymizing the data
(and consider making this conspicuous
on your instruments)

Conclusion
In conclusion…
There are many factors to consider when
embarking on a serious study
Some points to take away…
 Most studies should fill a place (a “gap”)
within the current academic dialogue
 Research questions should reflect
continuity with the literature and should be
specific
 Carefully consider the design setup which
will work best with the participant groups
and the research and analytical goals
In conclusion
Further points…
 Different instruments work better in
different circumstances. Choose those
which best reflect your aims.
 Plan your analyses at the same time you
are developing your instruments
 Develop and pilot instruments which can
cover responses from a range of
participants
In conclusion
Even more points…
 Work with others who will be serious and
committed, and then be an organized
and conscientious leader
 Consider how practical, pedagogical, and
institutional concerns may affect your
study
 Do not forget current ethical guidelines
for carrying out participant research
Good luck with your
research!
Thank you for listening
Q&A
For a copy of this
presentation:
http://jaltcue-sig.org