
Computer-Assisted
Language Learning
Lectured by
Deng Gang
E-Mail: [email protected]
The Text Book Adopted:
Computer-Assisted Language Learning
Context and Conceptualization
By MICHAEL LEVY
CLARENDON PRESS • OXFORD
1997
The definition of Computer-Assisted Language
Learning (CALL)
Computer-Assisted Language Learning (CALL)
may be defined as ‘the search for and study of
applications of the computer in language
teaching and learning’.
The name is a fairly recent one: the existence of
CALL in the academic literature has been
recognizable for about the last thirty years (up to
1997). The subject is interdisciplinary in nature,
and it has evolved out of early efforts to find
ways of using the computer for teaching or for
instructional purposes across a wide variety of
subject areas, with the weight of knowledge and
breadth of application in language learning
ultimately resulting in a more specialized field of
study.
CALL has been made possible by the invention
and subsequent development of the computer. As a
result, the nature of CALL at any particular time
is, to a large degree, a reflection of the level of
development of the technology. In the early days,
when developers had access only to large
mainframe computers, they needed to know
a machine language to program the computer, and
they tended towards numerical applications
because such computations were more directly
compatible with the workings of the computer. In
more recent times, computers have become smaller,
faster, and easier for the non-specialist to use.
Developments in ‘user-friendly’ human-computer interfaces and higher-level
languages and authoring systems insulate
the developer from the lower-level
workings of the computer, allowing
comparatively complex applications to be
written with relative ease.
The speed with which technology has developed since
the invention of the computer has been both
extraordinary and surprisingly sustained. For
educators, the rapid and continuing introduction of
new technology into education has outpaced the ability
of teachers and developers to evaluate it properly. No
sooner do we come to terms with one machine and
develop some CALL materials for it than another,
‘better’ machine arrives to replace it. Nevertheless, it
would be irresponsible to be led purely by the latest
technological breakthrough. Somehow, we must try
and make sense of what is going on, in spite of the rate
of change, and invent reliable and cost-effective
mechanisms for dealing with it.
Set against this background of a rapid, continually
evolving technology, there are conceptual and practical
problems that all newcomers to CALL encounter in
one way or another. For all those who wish to create
new CALL materials, either privately or commercially,
independently or as a member of a team, even a
cursory glance at contemporary CALL activity shows
that there are a multitude of approaches. Points of
departure vary dramatically: some authors take top-down
approaches centred perhaps upon a theory of language
or language learning, or a curriculum specification,
while others develop CALL materials from the
bottom up, perhaps by using the computer to address
a learning strategy, a macroskill, computer
conferencing, or an exploration of aspects of the
technology itself.
Once the point of departure has been clarified, there
are immediately practical issues to consider – for
example, the selection of the hardware and software
development tools for the project. HyperCard,
Authorware, ToolBook, CALIS, C, and Visual Basic,
or a mark-up language to enable publishing on the
World Wide Web, such as the Hypertext Mark-up
Language (HTML) or the Virtual Reality Modelling
Language (VRML), are just a handful of the many
options now available.
Given that the way in which CALL is
conceptualized can be largely determined by the
hardware and software that is used, this initial
design choice is a most important one, and it can
have a sweeping influence on what is ultimately
created. This is a consequence of the development
process, where the strengths and limitations of the
development environment variously shape and
constrain the CALL materials that are produced.
The software then has to reach the students and be used
on a regular basis. Here there is a twofold problem: on
the one hand the equipment might have been
superseded by the end of the project; on the other hand,
the intended student group might not be able to get
access to the materials because the cost of the
equipment is prohibitive. If textbook materials
prove themselves they may be used for years with good
effect; if CALL materials are effective then often they
are discarded when the next model of computer comes
along – and for no other reason. In the twentieth
century, it takes a special kind of courage to continue to
use a particular technology once it is considered to be
outmoded, even if that technology is more than
adequate for the task at hand.
Within this volatile environment, a substantial
number of CALL materials have been produced,
especially over the last ten to fifteen years, and,
judging by the number of projects described in
the CALL journals and at conferences, there is
no sign that this interest is about to diminish.
Yet it has to be said that CALL remains a
peripheral interest in the language teaching
community as a whole, still largely the domain
of the CALL enthusiast, and there is scant
evidence to suggest that CALL has really been
absorbed into mainstream thinking, education,
and practice.
Of the CALL materials that have been produced,
there has been much criticism, most especially
directed at the software produced by language
teachers. In the 1980s particularly, the inferior
quality of CALL materials was blamed on
inexperienced language teacher-authors who may
not have known how to make appropriate use of
the medium (Hofmeister and Maggs 1984: 1-19;
Weible 1988: 67). As a result, questions have
arisen concerning the most appropriate role of
the language teacher in CALL materials
production (Smith 1988: 3; Last 1989: 34).
Whilst on the one hand leading writers on CALL appear
to want language teachers to be involved in CALL (e. g.
Farrington 1989: 70; Sussex 1991: 21), at the same time,
somewhat paradoxically, language teachers who have
become CALL authors have received much
unfavourable criticism. In this debate, it should not be
forgotten that were it not for the ambitious pioneering
efforts of language teachers in CALL, the whole endeavour might not have got off the ground. Arguably, within
the field of computers and education, especially within
humanities computing, it is teachers in the area of
English as a Foreign Language (EFL) and foreign
languages more generally that have been in the
vanguard.
For all the false starts and incomplete realizations
of CALL, the 1980s were a highly creative decade.
More recently, concerns have appeared to move
away from the question of the role of the language
teacher in CALL materials development, though
concerns are still expressed about the status of
CALL. In this respect, Kohn suggests that current
CALL is lacking because of poor linguistic
modeling, insufficient deployment of natural
language processing techniques, an emphasis on
special-purpose rather than general-purpose
technology, and a neglect of the ‘human’ dimension
of CALL (Kohn 1994: 32).
Although many of these criticisms may well be justified, a
lack of guidelines or standards for the current generation
of CALL materials has meant that CALL authors, be they
language teachers or otherwise, have no reliable
conceptual framework, or yardstick by which to measure
their work (Smith 1988: 5; Last 1989: 35). Emerging most
strongly in a review of the literature on CALL materials is
the lack of a generally accepted theoretical framework that
authors can use to guide their work. The absence of ‘a
standard for the industry’, a ‘generally agreed set of
criteria for the present generation of CALL’, or ‘guiding
principles’ is noted by Smith (1988: 3), Last (1989: 35),
and Ng and Olivier (1987: 1).
It appears that a clear, general theoretical framework
has not emerged for a number of reasons. There is some
anecdotal evidence to suggest that materials developers
fall into two broad bands in their approach to their work.
As early as 1977, for example, in computer-assisted
learning Kemmis et al. (1977: 391) observed that many
developers rely on their intuition as teachers rather
than on research on learning. They referred to
development being practitioner-led, not research-based.
A similar division is noticeable in the field of
artificial intelligence, where Ginsberg (1988) maintains
that the field is divided between those who are primarily
interested in solving problems by formulating theory
(formalists), and those who prefer to solve problems by
writing programs (proceduralists).
A perception of this division has remained and
more recently in 1995 it was reiterated in slightly
different terms at two CALL Conferences. First, in
a keynote address at the EUROCALL Conference
in Valencia, McCarty spoke of the path of
engineering versus the path of science in
CALL (McCarty 1993, 1995), and secondly, at the
CALL Conference in Exeter, Sussex, quite
independently, contrasted Engineering CALL with
Empirical CALL (Sussex 1995). Such divisions are
worthy of further investigation and reflection.
Where theory has been used as a point of
departure, the theoretical sources that have been
proposed and used have been diverse, not
surprisingly perhaps given the range of CALL
activities and the evolving nature of the field.
Theories emanating from psychology, especially
cognitive psychology and Second Language
Acquisition (SLA), are a frequent point of
departure (Schneider and Bennion 1984; Doughty
1991; Liou 1994).
The theories utilized from psychology are usually drawn
from a restricted set thought to be amenable to the
CALL context generally. For instance, Doughty (1991)
limits her focus to comprehension-based models of SLA
because of their suitability for the CALL environment.
Other theoretical bases include theories of language (e. g.
Demaiziere 1991; Catt 1991) and theories of instruction
(England 1989; Lawrason 1988/9). In addition,
integrated frameworks have been proposed, such as
Hubbard (1992, 1996), or Mitterer et al. (1990:136) who
suggest an integrated framework using theories from
instructional design, language teaching, language
learning, and knowledge of the applicability of the
technology. Integrated frameworks recognize the
multifaceted nature of CALL materials development.
There is also some evidence to suggest that a number of
CALL projects have not been driven directly by theory as
such. Although some projects clearly begin with a
theoretical orientation, others begin at a lower level of
abstraction more immediately determined by conditions
governing actual practice and problems arising directly
from it. CALL projects of this type as they are described
by their authors in the literature include vocational
language programs which begin with addressing student
needs (Keith and Lafford 1989), Kanji Card which uses a
specific language problem as a point of departure
(Nakajima 1988, 1990) and CAEF, where developing
grammar skills is the goal (Paramskas 1989, 1995).
In all, it is clear there are a number of possible
theoretical points of departure in CALL, either
utilizing a single theory or a mix of theoretical
perspectives. It also seems apparent that some
CALL projects do not begin with a theory at all,
reflecting the comment by Kemmis and his
colleagues about work that is practitioner-led as
opposed to research-based (Kemmis et al. 1977). To
help resolve this issue further, we need to have a
clearer idea of what CALL authors actually do
when they go about designing CALL materials.
Little is known about the conceptual frameworks
and working methods of CALL authors at present.
Sussex (1991: 26) stresses the importance of
investigating the processes of CALL materials
production and says:
At the present time rather little work has been done
on the question of how teachers become CAL authors:
how they objectify their knowledge domains, learning,
and teaching; how they conceptualize learning
materials and learning modes for transfer to the CAL
medium; how they achieve this transfer; how the
existence and use of CAL media influence theories of
CAL, and vice versa.
By carefully reviewing what has already been done, and
by exploring the ways in which CALL is conceptualized,
a clearer understanding of theory and practice will
emerge. This book attempts to address these areas of
concern, not by providing definitive answers, but by
shedding light on the nature of the problems. Such a
description has the potential to improve our
understanding of:
· the scope of CALL and prominent areas of focus
within it;
· the theoretical sources and conceptual frameworks of
CALL authors;
· the possible weaknesses or gaps between theory and
practice.
As yet the scope of CALL is not well defined, and
its relationship with other related fields is not
clear. For example, some writers see CALL as a
sub-domain of Applied Linguistics (e. g. Leen and
Candlin 1986: 204), while others challenge this
view (e.g. Fox 1986a: 235). A description of CALL
projects to date, together with the points of
departure their authors proclaim, can help situate
CALL in relation to cognate fields and disciplines,
both theoretical and practical.
Given the newness of CALL, when practitioners
do search for a theoretical foundation for their
work, they are likely to draw on theories from
the more established disciplines that surround it.
In attempting to make use of these theories, care
has to be taken to ensure that the theories are
applicable. At this time, it does CALL a great
disservice to try and force it into a single
epistemology or theoretical framework,
especially one that comes from a field where
language learning with the computer is not
foremost in mind.
It is tempting to approach the complexities of CALL in
this way, of course, because such a strategy provides a
well-trodden path for further research and
development. But what if the theory does not
encompass the unique qualities of learning with
the aid of a computer? Ideally, the use of non-CALL
theoretical frameworks should only occur if they are
sufficiently well articulated and powerful in
themselves, and if they are fully applicable to the
context of CALL. By reviewing the motivations for
CALL materials design, and by describing the CALL
programs that have been produced, the relationship
between theory and practice can be examined.
By describing what CALL authors actually do,
their conceptual frameworks and working
methods, their personal ‘theory’ of language
teaching and learning can be set against their
CALL programs, many of which are now in
circulation and can be described and evaluated
in their contexts of use. But first a description
of what has already been done is needed.
Historical and interdisciplinary perspectives can help
provide a context for CALL. An historical perspective
can help identify topics and themes that keep
reappearing over time, probably with good reason: for
example the question of the role of the teacher in CALL.
Also, it can help prevent CALL succumbing to the latest
technological advance in a way that is blindly accepting.
For example, multimedia is much in vogue at present,
not only in CALL but right across the educational
curriculum. While undoubtedly having much to offer,
multimedia is not new – it was available in a primitive
form in the TICCIT project in the 1970s, and in a form
rather similar to that of today in the Athena Project in
the late 1980s, albeit on workstations rather than
microcomputers (see Chapter 2).
Knowledge of the approaches taken in the design and
implementation of these early multimedia programs
provides insights for the contemporary multimedia author.
A historical view is also helpful in mapping the changing
relationship between approaches to language learning and
computing. Early in the history of CALL, a highly
structured view of language teaching and language
learning provided a straightforward path towards
materials development on the computer because the
principles behind the theory could be easily matched to the
qualities of the machine: lock-step drill and practice
software was, for example, easy to program. More recently,
with the advent of communicative views of language
teaching and learning, and with more eclectic approaches
to language teaching generally, the relationship between
pedagogy and the technology has become more tenuous
and more complex.
An interdisciplinary perspective on CALL shows
it to be a relatively new field of study that has
been subject to the influence of a number of
other disciplines. In addition to the fields of
computing and language teaching and learning,
real or potential influences in the development of
CALL have included aspects of psychology,
artificial intelligence, computational linguistics,
instructional design, and human-computer
interaction. Many of these disciplines are
relatively new in themselves, having developed
significantly since World War II.
They each have their own perspective and frame
of reference, they often overlap and interrelate,
and the extent to which any one discipline should
influence the development of CALL has not been
determined. At various times, CALL workers
have called upon each of these fields to guide
their own work in some way, and in Chapter 3 an
interdisciplinary perspective gives examples of
how these links have been utilized.
Having set forth a context for CALL, the book
continues with a description of how CALL authors
have conceptualized CALL. In broad terms
‘conceptualization’ is used as a label to signify the
mental picture a CALL author or a teacher has
when envisaging the use and role of the computer
in CALL. The term is used by Richards and
Rodgers (1986: 15) in discussing the evolution of
their model of language teaching method. As with a
discussion of approaches and methods in language
teaching, ‘conceptualization’ would seem the best
term to use for a discussion of similar issues in
CALL.
It is not immediately obvious how to go about
building a picture of how CALL has been
conceptualized. On reflection, the strategy finally
taken was that used by Hirschheim et al. (1990: 22)
in ascertaining the impact of microcomputer use in
the humanities. That team of researchers used a
number of component ‘indicators’, each
considered to represent a key factor that needed to
be examined if the phenomenon as a whole were to
be understood. The indicators that are held to
relate to how CALL is conceptualized are the:
· language teaching and learning philosophy;
· role of the computer;
· point of departure;
· hardware and software;
· role of the teacher (as contributor);
· development process;
· role of the teacher (as author);
· materials developed.
A CALL author’s views of language teaching and
learning are held to influence how that author
conceptualizes CALL, even if the author cannot explain
the effects or make them explicit. The role of the
computer contributes to the conceptualization in many
ways, the most important distinction perhaps being
whether the computer’s role is directive or non-directive.
The point of departure describes the CALL author’s
declared starting-point for a project. Often given when
CALL projects are written up and published, points of
departure may range from a theory of language or
language learning to a problem recognized by a language
teacher in the classroom, and that is considered amenable
to a solution via the computer.
The hardware and software, in their capabilities and
limitations, are considered variously to shape what is, and
what is not possible in a CALL project. The teacher may
contribute in a conceptualization of CALL, or the role of
the human teacher in the implementation of the program
may not be envisaged at all. The development process is
included as an indicator because of the way the process
may deform or shape the initial conceptualization leading
to an end-product that may be very different to the one
originally conceived. As well as contributing in some
way to the conceptualization, the teacher may also be
involved in developing CALL materials, that is, as a
CALL author.
The role of the teacher as developer of CALL
materials is included because of the ways in
which language teachers, through their CALL
development work, have contributed to CALL’s
conceptual frameworks. Finally, a description of
the CALL materials that have already been
created is included. The CALL materials that
are now available provide tangible evidence of
the ways in which the use of the computer in
language teaching and learning has been
conceptualized.
The ways in which CALL authors translate their
knowledge and experience of language teaching
and learning to the computer and produce CALL
materials is necessarily a complex and
multifaceted process. The major assumption in this
work is that these ‘indicators’ are valid. At this
stage in the development of CALL all that may be
said is that the indicators for conceptualization
have face validity, and there is a reasonable
likelihood that an investigation of these elements
will provide insights on how CALL is
conceptualized.
The indicators were investigated in both the
literature reporting CALL projects and through a
survey of CALL practitioners following the work
of Stolurow and Cubillos (1983), Ng and Olivier
(1987), and Fox et al. (1990). The component
indicators for conceptualization provide the
structural framework for Chapters 4, 5, and 6:
the indicators are examined in the literature in
Chapter 4 and through the CALL Survey in
Chapter 5. The international CALL Survey was
conducted in late 1991 and early 1992.
A total of 213 questionnaires were distributed and
104 (48.8%) usable responses were returned. The
questionnaire was sent to 23 different countries, and
key practitioners in CALL from 18 countries replied.
The key practitioners were chosen on the basis of
having written programs or published in the field of
CALL. The vast majority of respondents (i.e. CALL
authors) were practising language teachers (97.1%).
The questionnaire combined with the information
found in the literature gives a comprehensive
overview of how CALL has been conceptualized so
far.
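As a quick check, the reported response rate follows directly from the two figures given (213 distributed, 104 usable); a line of arithmetic confirms it:

```python
# Verify the CALL Survey response rate reported in the text:
# 104 usable responses out of 213 questionnaires distributed.
distributed = 213
usable = 104
rate = round(usable / distributed * 100, 1)
print(rate)  # 48.8
```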
This book is divided into eight chapters. The first
two chapters aim to set CALL in context in order
to provide a suitable background for the discussion
of CALL’s conceptual frameworks. Chapter 2
provides a historical perspective on CALL. This
chapter is by no means a full and detailed history
of CALL, but rather it is a perspective, a synopsis
of the field by decade, in the 1960s and 1970s, in
the 1980s, and in the 1990s. For each time period,
CALL projects are selected and described which
are representative of the thinking and the activity
of the period, and themes are introduced that have
contemporary relevance.
Particular emphasis is placed on some of CALL’s
more invariant qualities: topics and issues that
tend to recur in CALL over time, such as the role of
the computer in CALL and the role of the language
teacher in relation to it. An exploration of the
context of CALL continues in Chapter 3 where an
interdisciplinary perspective is provided. In this
chapter an attempt is made to establish links
between CALL and the disciplines that surround it,
and have variously influenced its development.
A short description of each of the related disciplines
accompanies the account. In this way these two chapters
on CALL in context provide a setting for the rest of the
book, and introduce many of the themes that are
explored in greater detail later on. Chapters 4, 5, and 6
focus in much more detail on how CALL has been
conceptualized, that is, how language teachers and
CALL authors have envisaged the use of the computer
in the realm of language teaching and learning. Using
the indicators that are held to influence conceptual
frameworks as an organizational framework, Chapter 4
looks at aspects and issues described in the literature on
CALL, and Chapter 5 presents the findings of the
international CALL Survey.
These two chapters approach the topic from
different angles, the two approaches complement
each other, and each perspective provides a
window onto the complex phenomena of
conceptualization. The threads of this description
are brought together for discussion in Chapter 6,
where particular themes are identified and
drawn out. These themes do not account for all
the ways in which CALL has been conceptualized
but they do represent recognizable patterns that
are discernible when CALL is viewed as a whole.
Chapter 7 looks at one conceptual framework in
particular: the tutor-tool framework. This framework is
presented as a potential means of conceptualizing CALL.
The framework is valuable in helping users and
developers recognize significant features in CALL from
the vast array of CALL projects that have occurred to
date. Other CALL models and frameworks are
accommodated within the tutor-tool framework, and
possible refinements to this framework are suggested also.
The implications of the tutor-tool framework are
considered by showing how the role of the computer, that
is, whether it is used as a tutor or as a tool, has profound
implications for methodology, integration into the
curriculum, evaluation, and the roles of the teacher and
the learner.
Finally, Chapter 8 on the nature of CALL
completes the book. Viewing CALL as a body of
work brings to light a number of issues such as the
relationship between theory and application, and
the effects the computer, and technology more
generally, may exert on the surrounding
educational environment. Finally, this chapter
concludes with some suggestions for the future,
reflecting on where the energy and the effort in
CALL might most appropriately be directed.
1. In this book the label ‘materials’ will be used to
encompass the different kinds of materials, software,
courseware, programs, packages, and learning
environments that are created in CALL. This label is
used to emphasize the connection between language
learning materials development in general – where
the term ‘materials’ is commonly used – and CALL
materials development in particular. Though in some
instances materials and learning environments will be
distinguished and treated separately, generally
learning environments on the computer are included
under the materials umbrella.
This follows the work of Breen et al. (1979: 5) who, in
the case of Communicative Language Teaching (CLT),
suggest the development of two kinds of materials:
content materials as sources of data and information;
and process materials to serve as ‘guidelines or
frameworks for the learner’s use of communicative
knowledge and abilities’ (Breen et al. 1979: 5). Thus,
learning environments on the computer are likened to
process materials in that they provide frameworks
within which learners can use and practice their
communicative skills. The notion of materials as
guidelines or frameworks for learning is reinforced by
Allwright who argues for materials to be related to the
‘cooperative management of language learning’
(Allwright 1981: 5). Learning environments on the
computer fit comfortably within this broad definition of
materials.
2. A mark-up language such as HTML (the
Hypertext Mark-up Language) is a set of
instructions that are inserted into a plain text file to
enable it to be published on the World Wide Web.
The set of instructions, or tags, defines exactly how
the Web document is displayed. The tags also
enable links to be made between documents. Once
on the Web, browsers such as Netscape can
interpret the file. VRML (Virtual Reality
Modelling Language) is an emerging standard for
creating three-dimensional spaces and objects that
can be transferred easily via the Internet, then
viewed by many users at the same time.
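The note above says that tags both control display and enable links between documents. As a small illustration (the HTML fragment and class name here are invented for the example, not taken from Levy), Python's standard-library html.parser can walk a tagged text and pull out the link targets the tags define:

```python
# Tags such as <b> control display; <a href="..."> creates a link
# to another document. This sketch extracts link targets from a
# small HTML fragment using only the standard library.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # An <a> tag with an href attribute marks a link to another document.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# An invented fragment of a CALL course page:
page = '<p>See the <a href="lesson2.html">next lesson</a> or the <b>glossary</b>.</p>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['lesson2.html']
```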
3. In the CALL Survey the initial orientation and
points of departure are distinguished to accommodate
more abstract and more precisely described initial
positions (see Ch. 5, Apps. A and B). For example, if a
CALL author describes the starting-point in a project
rather abstractly, as in ‘exploration of a new
technology’ perhaps, then this would be considered an
initial orientation; if ‘curriculum specifications’ were
the starting-point, however – a more concrete
beginning – then this would be considered a point of
departure. This distinction can only provide a rough
approximation, but it was included in the CALL
Survey because it allows for different degrees of clarity
at the outset.
Three time periods
1) the 1960s and 1970s
2) the 1980s
3) the 1990s.
To identify some key themes and issues that
remain important today.
1. for the 1960s and 1970s, the PLATO and
TICCIT projects;
2. for the 1980s, Storyboard, and the
Athena Language Learning Project
(ALLP);
3. for the 1990s, the International Email
Tandem Network, the CAMILLE/France
Inter Active project, and the Oral
Language Archive (OLA).
CALL in the 1960s and 1970s
Background
In the 1950s and early 1960s empiricist theory
was predominant in language teaching, a theory
described by Stern (1983:169) as ‘pedagogically
audiolingualism, psychologically behaviourism,
linguistically structuralism’. The principles
emanating from these three schools of
thought were mutually supportive when applied
to language teaching and learning.
WHAT IS AUDIOLINGUALISM?
Do you ever ask your students to repeat
phrases or whole sentences, for example?
Do you drill the pronunciation and
intonation of utterances? Do you ever use
drills? What about choral drilling?
Question and answer? If the answer to any
of these questions is yes, then, consciously
or unconsciously, you are using techniques
that are features of the audiolingual
approach.
This approach has its roots in the USA during
World War II, when there was a pressing need
to train key personnel quickly and effectively
in foreign language skills. The results of the
Army Specialized Training Program are
generally regarded as having been very
successful, with the caveat that the learners
were in small groups and were highly
motivated, which undoubtedly contributed to
the success of the approach.
The approach was theoretically underpinned by
structural linguistics, a movement in linguistics
that focused on the phonemic, morphological
and syntactic systems underlying the grammar of
a given language, rather than on the
traditional categories of Latin grammar. As such,
it was held that learning a language involved
mastering the building blocks of the language
and learning the rules by which these basic
elements are combined from the level of sound to
the level of sentence. The audiolingual approach
was also based on the behaviourist theory of
learning, which held that language, like other
aspects of human activity, is a form of behaviour.
In the behaviourist view, language is elicited
by a stimulus, and that stimulus then triggers a
response. The response in turn then produces some
kind of reinforcement, which, if positive, encourages
the repetition of the response in the future or, if
negative, its suppression. When transposed to the
classroom, this gives us the classic pattern drill:
Model: She went to the cinema yesterday.
Stimulus: Theatre.
Response: She went to the theatre yesterday.
Reinforcement: Good!
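The stimulus–response–reinforcement cycle of a pattern drill maps naturally onto a simple program, which is one reason early drill-and-practice CALL software was easy to write. The sketch below is illustrative only (the function names and messages are invented, not from Levy's text): it models a substitution drill in which the learner substitutes a cue word into the model sentence and the program returns positive or negative reinforcement.

```python
# A minimal substitution-drill sketch in the audiolingual style:
# the model sentence contains a slot, the stimulus is a cue word,
# and the learner's response is checked against the target sentence.

MODEL = "She went to the {place} yesterday."

def expected(place: str) -> str:
    """Build the target sentence for a given cue word."""
    return MODEL.format(place=place)

def drill(place: str, response: str) -> str:
    """Return positive or negative reinforcement for a learner response."""
    if response.strip() == expected(place):
        return "Good!"
    return "Try again: " + expected(place)

# One cycle of the drill shown above:
print(drill("theatre", "She went to the theatre yesterday."))  # Good!
print(drill("cinema", "She went to theatre yesterday."))
```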
In its purest form audiolingualism aims to promote
mechanical habit-formation through repetition of
basic patterns.
Accurate manipulation of structure leads to
eventual fluency. Spoken language comes
before written language. Dialogues and drill
are central to the approach. Accurate
pronunciation and control of structure are
paramount.
While some of this might seem amusingly
rigid in these enlightened times, it is worth
reflecting on actual classroom practice and
noticing when activities occur that can be
said to have their basis in the audiolingual
approach.
Most teachers will at some point require
learners to repeat examples of grammatical
structures in context with a number of aims
in mind: stress, rhythm, intonation,
"consolidating the structure", enabling
learners to use the structure accurately
through repetition, etc. Question and
answer in open class or closed pairs to
practise a particular form can also be
argued to have its basis in the audiolingual
approach, as can, without doubt, any kind
of drill.
Although the audiolingual approach in its
purest form has many weaknesses
(notably the difficulty of transferring learnt
patterns to real communication), to
dismiss the audiolingual approach as an
outmoded method of the 1960s is to ignore
the reality of current classroom practice
which is based on more than 2000 years of
collective wisdom.
There seems to be a widely held
perception amongst language teachers
that methods and approaches have finite
historical boundaries - that the Grammar-Translation
approach is dead, for example.
Similarly, audiolingualism was in vogue in
the 1960s but died out in the 70s after
Chomsky's famous attack on behaviourism in
language learning.
B. F. Skinner and Behaviorism
Burrhus Frederic Skinner was born March 20,
1904, in the small Pennsylvania town of
Susquehanna. Burrhus received his BA in English
from Hamilton College in upstate New York. He
didn’t fit in very well, not enjoying the fraternity
parties or the football games. He wrote for the
school paper, including articles critical of the
school, the faculty, and even Phi Beta Kappa! To
top it off, he was an atheist -- in a school that
required daily chapel attendance.
He wanted to be a writer and did try, sending off poetry
and short stories. When he graduated, he built a study in
his parents’ attic to concentrate, but it just wasn’t
working for him.
Ultimately, he resigned himself to writing newspaper
articles on labor problems, and lived for a while in
Greenwich Village in New York City as a
"bohemian." After some traveling, he decided to go back to
school, this time at Harvard. He got his masters in
psychology in 1930 and his doctorate in 1931, and stayed
there to do research until 1936.
That same year, he moved to Minneapolis to teach
at the University of Minnesota. There he met and
soon married Yvonne Blue. They had two daughters,
the second of whom became famous as the first
infant to be raised in one of Skinner’s inventions,
the air crib. Although it was nothing more than
a combination crib and playpen with glass sides
and air conditioning, it looked too much like
keeping a baby in an aquarium to catch on.
In 1945, he became the chairman of the psychology
department at Indiana University. In 1948, he was
invited to come to Harvard, where he remained for the
rest of his life. He was a very active man, doing research
and guiding hundreds of doctoral candidates as well as
writing many books. While not successful as a writer of
fiction and poetry, he became one of our best psychology
writers; his books include Walden Two, a fictional
account of a community run on his behaviorist principles.
On August 18, 1990, B. F. Skinner died of leukemia,
having become perhaps the most celebrated psychologist
since Sigmund Freud.
B. F. Skinner's entire system is based on operant
conditioning. The organism is in the process of
"operating" on the environment, which in ordinary
terms means it is bouncing around its world, doing
what it does. During this "operating," the
organism encounters a special kind of stimulus,
called a reinforcing stimulus, or simply a
reinforcer. This special stimulus has the effect
of increasing the operant -- that is, the behavior
occurring just before the reinforcer. This is
operant conditioning: "the behavior is followed by
a consequence, and the nature of the consequence
modifies the organism's tendency to repeat the
behavior in the future."
Imagine a rat in a cage. This is a special cage
(called, in fact, a "Skinner box") that has a bar or
pedal on one wall that, when pressed, causes a
little mechanism to release a food pellet into
the cage. The rat is bouncing around the cage,
doing whatever it is rats do, when he accidentally
presses the bar and -- hey, presto! -- a food pellet
falls into the cage! The operant is the behavior just
prior to the reinforcer, which is the food pellet, of
course. In no time at all, the rat is furiously
pedaling away at the bar, hoarding his pile of
pellets in the corner of the cage.
A behavior followed by a reinforcing
stimulus results in an increased probability
of that behavior occurring in the future.
What if you don't give the rat any more
pellets? Apparently, he's no fool, and after a few
futile attempts, he stops his bar-pressing
behavior. This is called extinction of the
operant behavior.
A behavior no longer followed by the reinforcing
stimulus results in a decreased probability
of that behavior occurring in the future.
Now, if you were to turn the pellet machine back on, so that
pressing the bar again provides the rat with pellets, the
behavior of bar-pushing will “pop” right back into existence,
much more quickly than it took for the rat to learn the
behavior the first time. This is because the return of the
reinforcer takes place in the context of a reinforcement
history that goes all the way back to the very first time the
rat was reinforced for pushing on the bar!
Skinner likes to tell about how he "accidentally" -- i.e., operantly -- came across his various
discoveries. For example, he talks about running
low on food pellets in the middle of a study. Now,
these were the days before “Purina rat chow” and
the like, so Skinner had to make his own rat pellets,
a slow and tedious task. So he decided to reduce
the number of reinforcements he gave his rats for
whatever behavior he was trying to condition, and,
lo and behold, the rats kept up their operant
behaviors, and at a stable rate, no less. This is how
Skinner discovered schedules of reinforcement!
The fixed ratio schedule was the first one
Skinner discovered: If the rat presses the pedal
three times, say, he gets a goodie. Or five
times. Or twenty times. Or “x” times. There is a
fixed ratio between behaviors and reinforcers: 3
to 1, 5 to 1, 20 to 1, etc. This is a little like
"piece rate" in the clothing manufacturing
industry: You get paid so much for so many
shirts.
The fixed interval schedule uses a timing device
of some sort. If the rat presses the bar at least
once during a particular stretch of time (say 20
seconds), then he gets a goodie. If he fails to do
so, he doesn’t get a goodie. But even if he hits
that bar a hundred times during that 20
seconds, he still only gets one goodie! One
strange thing that happens is that the rats tend
to “pace” themselves: They slow down the rate
of their behavior right after the reinforcer, and
speed up when the time for it gets close.
Skinner also looked at variable
schedules. Variable ratio means you change the
“x” each time -- first it takes 3 presses to get a
goodie, then 10, then 1, then 7 and so
on. Variable interval means you keep changing
the time period -- first 20 seconds, then 5, then 35,
then 10 and so on.
In both cases, it keeps the rats on their rat
toes. With the variable interval schedule, they
no longer “pace” themselves, because they now
can no longer establish a “rhythm” between
behavior and reward. Most importantly, these
schedules are very resistant to extinction. It
makes sense, if you think about it. If you
haven’t gotten a reinforcer for a while, well, it
could just be that you are at a particularly “bad”
ratio or interval! Just one more bar press,
maybe this’ll be the one!
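The schedules are simple enough to state as press-by-press decision rules. A small simulation sketch of a fixed-ratio and a variable-ratio schedule (the function names and parameters are my own, for illustration; this is not from Skinner's apparatus):

```python
import random

# Two of Skinner's schedules of reinforcement as decision rules:
# each call to press() reports whether that response earns a reinforcer.

def fixed_ratio(n):
    """Reinforce exactly every n-th response (FR-n)."""
    count = 0
    def press():
        nonlocal count
        count += 1
        return count % n == 0
    return press

def variable_ratio(n, rng=None):
    """Reinforce after a random number of responses averaging n (VR-n)."""
    rng = rng or random.Random(0)
    count = 0
    target = rng.randint(1, 2 * n - 1)
    def press():
        nonlocal count, target
        count += 1
        if count >= target:
            count = 0
            target = rng.randint(1, 2 * n - 1)
            return True
        return False
    return press

fr3 = fixed_ratio(3)
print([fr3() for _ in range(6)])   # [False, False, True, False, False, True]

# Under VR-3, rewards still arrive about once per three presses on average,
# but the rat can never predict which press will pay off:
vr3 = variable_ratio(3)
print(sum(vr3() for _ in range(300)))
```

The unpredictability of the variable schedule is visible in the code: the next `target` is redrawn after every reward, which is exactly why such schedules resist extinction.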
This, according to Skinner, is the mechanism of
gambling. You may not win very often, but you
never know whether and when you’ll win
again. It could be the very next time, and if you
don’t roll them dice骰子, or play that hand, or bet
on that number this once, you’ll miss on the score
of the century!
A question Skinner had to deal with was how we
get to more complex sorts of behaviors. He
responded with the idea of shaping, or “the
method of successive approximations.” Basically,
it involves first reinforcing a behavior only
vaguely similar to the one desired. Once that is
established, you look out for variations that come
a little closer to what you want, and so on, until
you have the animal performing a behavior that
would never show up in ordinary life. Skinner
and his students have been quite successful in
teaching simple animals to do some quite
extraordinary things. My favorite is teaching
pigeons to bowl!
The theory of B. F. Skinner is based upon the idea that
learning is a function of change in overt behavior.
Changes in behavior are the result of an individual's
response to events (stimuli) that occur in the environment.
A response produces a consequence such as defining a
word, hitting a ball, or solving a math problem. When a
particular Stimulus-Response (S-R) pattern is reinforced
(rewarded), the individual is conditioned to respond. The
distinctive characteristic of operant conditioning relative
to previous forms of behaviorism (e.g., Thorndike, Hull)
is that the organism can emit responses instead of only
eliciting responses due to an external stimulus.
Reinforcement is the key element in Skinner's S-R theory.
A reinforcer is anything that strengthens the desired
response. It could be verbal praise, a good grade or a
feeling of increased accomplishment or satisfaction. The
theory also covers negative reinforcers -- any stimulus that
results in the increased frequency of a response when it is
withdrawn (different from aversive stimuli -- punishment --
which result in reduced responses). A great deal of
attention was given to schedules of reinforcement
(e.g. interval versus ratio) and their effects on establishing
and maintaining behavior.
Structuralism
Beginning in the second decade of this century, a
school of linguistics, known as Structural
Linguistics, emerged as a flourishing linguistic
theory in the academic world and in language
pedagogy. Structuralism, as it is often called, is a
reaction against traditional grammars in that it is
able to set up precise and verifiable definitions
on formal and distributional criteria, addressing
problems which traditional grammars had long been
unable to solve.
Leonard Bloomfield 1887-1949
Leonard Bloomfield was born in 1887. He studied
at several different colleges; he graduated from
Harvard in 1906, and then went
on to graduate from the University of Wisconsin
in 1908. Then from there he went on to further
education and studied at the University of
Chicago where he later graduated. He spent most
of his time dealing with comparing and
contrasting Germanic languages. At the
University of Ohio, Bloomfield caught his first
break as an Assistant Professor of German. He
spent seven years under that title, and then
moved on to the University of Chicago. There he
was the head Professor of German, and spent a
lot of his time (1921-1928) teaching here.
After this, Leonard became more interested in the
description of languages, and how they pertained
to science. It was when he got into this aspect of
language that he wrote his masterpiece, Language.
It became a standard text, and had a
tremendous influence on other linguists. Until
very recently most United States linguists
considered themselves in some sense Bloomfield's
disciples, whether they actually studied under him
or not, and a great deal of American linguistic
work has taken the form of working out questions
raised and methods suggested by Bloomfield
(Online-Media: Important Linguists).
Leonard had six main publications during his
lifetime, and they too have had their own little
mark in the history of linguistics. His first main
book came in 1914, when he was an Assistant
Professor at the University of Illinois. It was
called An Introduction to the Study of Language;
this dealt with the overall aspect of language
and was just the beginning of Leonard's
profound career. After this, Leonard went into
the grammatical aspects of Tagalog, a Philippine
language, and wrote and published his next
main book, Tagalog Texts with Grammatical
Analysis (1917).
The next book was called Menomini Texts (1928), one of
Bloomfield's least favorable publications. In the middle of
his writing career came Language (1933), which was the
book he is renowned for. From here Leonard went deeper
into grammar, and wrote The Stressed Vowels of American
English (1935). The last main book of Leonard
Bloomfield's career came when he went back to the
scientific study of language. It dealt with the overall
relationship of language and science, and didn't get as
much publicity as Language. This book was called
Linguistic Aspects of Science (1939). At the end of
Leonard's writing
career, he tried to write about other languages (Dutch and
Russian) but couldn't really get the true feeling out of this,
like he did with his other books. In the end, Leonard
Bloomfield is not only considered one of the best Linguists
of his time, he is considered one of the best of all time.
Structuralism is a theory that uses culturally
interconnected signs to reconstruct systems of
relationships rather than studying isolated,
material things in themselves. This method
found wide use from the early 20th cent. in a
variety of fields, especially linguistics,
particularly as formulated by Ferdinand de
Saussure. Anthropologist Claude Lévi-Strauss
used structuralism to study the kinship systems
of different societies.
No single element in such a system has meaning except as
an integral part of a set of structural connections. These
interconnections are said to be binary in nature and are
viewed as the permanent, organizational categories of
experience. Structuralism has been influential in literary
criticism and history, as with the work of Roland Barthes
and Michel Foucault. In France after 1968 this search for
the deep structure of the mind was criticized by such
“poststructuralists” as Jacques Derrida, who abandoned
the goal of reconstructing reality scientifically in favor of
“deconstructing” the illusions of metaphysics.
The noun structuralism has 3 meanings:
Meaning 1: linguistics defined as the analysis of
formal structures in a text or discourse
Synonym: structural linguistics
Meaning 2: an anthropological theory that there
are unobservable social structures that generate
observable social phenomena
Synonym: structural anthropology
Meaning 3: a sociological theory based on the
premise that society comes before individuals
Synonym: structural sociology
Structuralism is an approach that grew to become
one of the most widely used methods of analyzing
language, culture, and society in the second half of
the 20th century. 'Structuralism', however, does
not refer to a clearly defined 'school' of authors,
although the work of Ferdinand de Saussure is
generally considered a starting point.
Structuralism is best seen as a general approach
with many different variations. As with any
cultural movement, the influences and
developments are complex.
Broadly, structuralism seeks to explore the interrelationships (the "structures") through which meaning is
produced within a culture. According to structural theory,
meaning within a culture is produced and reproduced
through various practices, phenomena and activities which
serve as systems of signification. A structuralist studies
activities as diverse as food preparation and serving rituals,
religious rites, games, literary and non-literary texts, and
other forms of entertainment to discover the deep
structures by which meaning is produced and reproduced
within a culture. For example, an early and prominent
practitioner of structuralism, anthropologist and
ethnographer Claude Levi-Strauss, analyzed cultural
phenomena including mythology, kinship, and food
preparation.
When used to examine literature, a structuralist critic will
examine the underlying relation of elements (the
'structure') in, say, a story, rather than focusing on its
content. A basic example is the similarity between West
Side Story and Romeo and Juliet. Even though the two
plays occur in different times and places, a structuralist
would argue that they are the same story because they
have a similar structure - in both cases, a girl and a boy
fall in love (or, as we might say, are +LOVE) despite the
fact that they belong to two groups that hate each other, a
conflict that is resolved by their death. Consider now the
story of two friendly families (+LOVE) that make an
arranged marriage between their children despite the fact
that they hate each other (-LOVE), and that the children
resolve this conflict by committing suicide to escape the
marriage.
A structuralist would argue this second story is an
'inversion' of the first, because the relationship
between the values of love and the two pairs of
parties involved have been reversed. In sum, a
structuralist would thus argue that the 'meaning'
of a story lies in uncovering this structure rather
than, say, discovering the intention of the author
who wrote it.
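The +LOVE/-LOVE analysis can be made concrete by encoding each story as a set of structural features. A hypothetical sketch (the feature names and the encoding itself are my own, not from the text):

```python
# Encode each story as structural features, following the text's
# +LOVE/-LOVE notation: +1 = LOVE, -1 = -LOVE. The feature names
# are my own hypothetical labels.

romeo_and_juliet = {
    "lovers_toward_each_other": +1,    # boy and girl are +LOVE
    "families_toward_each_other": -1,  # their two groups hate each other
}

arranged_marriage_story = {
    "lovers_toward_each_other": -1,    # the children hate each other
    "families_toward_each_other": +1,  # the two families are friendly
}

def invert(story):
    """A structural 'inversion' flips the value of every feature."""
    return {feature: -value for feature, value in story.items()}

# The second story is the structural inversion of the first:
print(invert(romeo_and_juliet) == arranged_marriage_story)   # True
```

On this view the "meaning" the structuralist is after lives in the feature dictionary and the `invert` relation, not in any particular author's wording.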
Some feel that a structuralist analysis helps
pierce through the confusing veil of life to reveal
the hidden, underlying, logically complete
structure. Others would argue that
structuralism simply reads too much into 'texts'
(in the widest sense) and allows clever
professors to invent meanings that aren't
actually there. There are a variety of positions in
between these two extremes, and in fact many of
the debates around structuralism focus on
trying to clarify issues of just this sort.
Saussure's Course
Ferdinand de Saussure's Course in General Linguistics
(1916) is generally seen as being the origin of
structuralism. Although Saussure was, like his
contemporaries, interested in historical linguistics, in
the Course he developed a more general theory of
semiology. This approach focused on examining how the
elements of language related to each other in the
present ('synchronically' rather than 'diachronically').
He thus focused not on the use of language (parole, or
talk) but the underlying system of language (langue) of
which any particular utterance was an expression.
Finally, he argued that linguistic signs were composed
of two parts, a 'signifier' (roughly, the sound of a word)
and a 'signified' (the concept or meaning of the word).
This was quite different from previous
approaches to language which focused on the
relationship between words and the things in the
world they designated. By focusing on the
internal constitution of signs rather than focusing
on their relationship to objects in the world,
Saussure made the anatomy and structure of
language something that could be analyzed and
studied.
Structuralism in linguistics
Saussure's Course influenced many linguists in
the period between WWI and WWII. In
America, for instance, Leonard Bloomfield
developed his own version of structural
linguistics, as did Louis Hjelmslev in
Scandinavia. In France Antoine Meillet and
Émile Benveniste would continue Saussure's
program. Most importantly, however, members
of the Prague School of linguistics such as
Roman Jakobson and Nikolai Trubetzkoy
conducted research that would be greatly
influential.
The clearest and most important example of Prague
School structuralism lies in phonemics. Rather than
simply compile a list of which sounds occur in a language,
the Prague School sought to examine how they were
related. They determined that the inventory of sounds in a
language could be analyzed in terms of a series of
contrasts. Thus in English the words 'pat' and 'bat' are
different because the 'p' and 'b' sounds contrast. The
difference between them is that you vocalize while saying a
'b' while you do not when saying a 'p'. Thus in English
there is a contrast between voiced and non-voiced
consonants. Analyzing sounds in terms of contrastive
features also opens up comparative scope - it makes clear,
for instance, that the difficulty Japanese speakers have
differentiating between 'r' and 'l' in English is due to the
fact that these two sounds are not contrastive in Japanese.
While this approach is now standard in
linguistics, it was revolutionary at the time.
Phonology would become the paradigmatic
basis for structuralism in a number of
different forms.
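The Prague School move of analyzing a sound inventory through contrasts can be sketched as a minimal-pair finder. This is a simplification for illustration only: it treats letters as segments, whereas real phonemic analysis works on transcribed sounds, not spelling.

```python
# Find minimal pairs: equal-length words differing in exactly one segment.
# Letters stand in for phonemes here -- a simplification.

def minimal_pairs(words):
    """Return pairs of equal-length words that differ in exactly one segment."""
    pairs = []
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            if len(w1) == len(w2):
                differences = sum(a != b for a, b in zip(w1, w2))
                if differences == 1:
                    pairs.append((w1, w2))
    return pairs

# 'pat'/'bat' isolates the p/b contrast; 'pat'/'pit' the vowel contrast.
print(minimal_pairs(["pat", "bat", "pit", "dog"]))
# [('pat', 'bat'), ('pat', 'pit')]
```

Each pair the function returns is evidence of one contrast in the system, which is exactly how 'pat'/'bat' establishes the voiced/voiceless opposition in English.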
Structuralism after the War
After WWII, and particularly in the 1960s,
structuralism surged to prominence in France, and it
was its initial popularity there which led it to
spread across the globe.
Throughout the 1940s and 1950s, existentialism such as
that practiced by Jean-Paul Sartre was the dominant
mood. Structuralism rejected existentialism's notion of
radical human freedom and focused instead on the way
that human behavior is determined by cultural, social,
and psychological structures.
The most important initial work on this score was
Claude Levi-Strauss's 1949 volume Elementary
Structures of Kinship. Levi-Strauss had known Jakobson
during their time together in New York during WWII
and was influenced by both Jakobson's structuralism as
well as the American anthropological tradition. In
Elementary Structures he examined kinship
systems from a structural point of view and
demonstrated how apparently different social
organizations were in fact different permutations of a
few basic kinship structures. In the late 1950s he
published Structural Anthropology, a collection of essays
outlining his program for structuralism.
By the early 1960s structuralism as a movement was
coming into its own and some believed that it offered a
single unified approach to human life that would embrace
all disciplines. Roland Barthes and Jacques Derrida
focused on how structuralism could be applied to
literature. Jacques Lacan (and, in a different way, Jean
Piaget) applied structuralism to the study of psychology,
blending Freud and Saussure. Michel Foucault's book The
Order of Things examined the history of science to study
how structures of epistemology, or epistemes,
shaped how people imagined knowledge and knowing
(though Foucault would later explicitly deny affiliation
with the structuralist movement). Louis Althusser
combined Marxism and structuralism to create his own
brand of social analysis. Other authors in France and
abroad have since extended structural analysis to
practically every discipline.
The definition of 'structuralism' also shifted as a result of
its popularity. As its popularity as a movement waxed
and waned, some authors considered themselves
'structuralists' only to later eschew the label. Additionally,
the term has slightly different meanings in French and
English. In the US, for instance, Derrida is considered the
paradigm of post-structuralism while in France he is
labeled a structuralist. Finally, some authors wrote in
several different styles. Barthes, for instance, wrote some
books which are clearly structuralist and others which
are clearly not.
Reactions to structuralism
Today structuralism has been superseded by approaches
such as post-structuralism and deconstruction. There are
many reasons for this. Structuralism has often been
criticized for being ahistorical and for favoring deterministic
structural forces over the ability of individual people to act.
As the political turbulence of the 1960s and 1970s (and
particularly the student uprisings of May 1968) began
affecting the academy, issues of power and political struggle
moved to the center of people's attention. In the 1980s,
deconstruction and its emphasis on the fundamental
ambiguity of language - rather than its crystalline logical
structure - became popular. By the end of the century
Structuralism was seen as a historically important school of
thought, but it was the movements it spawned, rather than
structuralism itself, which commanded attention.
A notable problem is the definitions of parts of
speech, e.g. traditional grammar defines that
'a pronoun stands for a noun', but there are
many words which can be used instead of a
noun and they are not necessarily pronouns
and follow different distributional criteria in a
sentence from those we commonly name as
'pronouns'.
Structuralist grammar, on the other hand, sees
languages in form as consisting of various
constituent structures, thus 'if words occur
regularly in the same patterns - the same positions
in sentences, we say that they belong to the same
form class...' (Paul Roberts, quoted in Roulet
1975:23).
The ideas of this kind led to the substitution
drills widely used in teaching English as a
foreign language in the 50s and 60s known as
the Audio-lingual Method which is still applied
today in some classrooms in the world. We see
that the idea is an elaboration of de Saussure's
'langue is form, not substance' and his
'associative relations'.
The movement of Structuralism is represented
by American linguists C. C. Fries and Robert
Lado at the University of Michigan. Fries'
influential publication The Structure of English
(1952) triggered off the enthusiasm for
Structuralism. Fries and Lado's English
Sentence Patterns (1957) is the representative
structuralist teaching material. The axioms
of the structuralist view on the nature of
language can be summarized as follows:
1. Language is speech;
2. Language is a system;
3. The language system is arbitrary;
4. Language is for communication (Bell
1981:92)
The structuralists, however, were not able to
develop the last axiom because they had paid less
attention to the study of meaning, i.e. the substance
of language. In psychology, the structuralists
based themselves on the Pavlov-Skinner
Stimulus-Response behaviourist model, which
produced their brainchild, the Audio-lingual
Method in language teaching.
One should add 'language is habit' to the axioms of
the language view of the structuralists. Other
American influential Audio-lingual language
teaching materials are Lado English Series (Lado
1977) and English 900 (English Language Services
1964).
In Europe, structuralist theories were mingled
with British linguists' version of structuralism
- the notion of 'situation'. It was first referred
to as the Oral Approach and termed
Structural-Situational Approach later. In
language theory, the Structural-Situational
Approach is not very much different from
American Structuralism. This is clear from a
statement made by A. W. Frisby:
Word order, Structural Words, the few
inflexions of English, and Content Words,
will form the material of our teaching
(Quoted in Richards & Rodgers 1986:35).
and also from H. E. Palmer's view on language learning:
... there are three processes in learning a
language - receiving the knowledge or
materials, fixing it in the memory by
repetition, and using it in actual practice
until it becomes a personal skill (op cit:
36).
It is again a habit-learning theory. What is
different is that the British structuralists put
emphasis on the close relationship between the
structure of language and the context and
situations in which language is used. In typical
Structural-Situational teaching materials we can
find units like 'At the Railway Station', 'In the
Restaurant' and so on. Sentence structures which
are assumed to be used in these situations are
practised intensively.
One can find the shortcomings of this approach:
communication patterns are not always the same
in the same situation, e.g. in a restaurant, besides
ordering food, one could appreciate or criticize
the service provided, therefore one needs
language items which fit these communication
patterns. Furthermore, the same language item
may have different communicative functions.
An everyday utterance 'Would you like...' may
be used to give an invitation or to ask for an
opinion. To introduce these items to students
according to the situations in which they may
occur may give the wrong impression that
these structures can only be used under certain
circumstances. The above facts underlie the
criticism of the structuralist theoretical
foundation that language is form, not substance.
The fact is that language is both form and
substance.
E. Roulet's comprehensive criticisms of
structuralist theories and their application in
language teaching can be summarized as follows:
First, the corpus of a language is necessarily
unlimited, so simply providing an inventory of
structures cannot lead to the mastery of a language;
e.g. having learned 900 sentences of a language will
not result in the ability to use that language freely
in various social interactions.
G. Sampson has a vivid analogy between
mathematics and language: language is just
like a circle on a sheet of graph-paper in
which one can find infinitely many points
(Sampson 1980:132). Syllabuses based on
language structures often lead teachers and
students to think that manipulation of
structures is an end in itself in language
learning.
Second, forms alone do not decide grammaticality;
they are of only secondary importance, while
meanings and functions are of primary importance.
A linguistically correct form or structure can be
socially or functionally unacceptable, considering
when, where and how it is used. So in determining degrees of
grammaticality of utterances, teachers have to
seek criteria or rules apart from rules of
forming structures. There is an implication here
that language is not the only variable in
language pedagogy: there are social, economic,
political and psychological variables.
Finally, structures are not isolated within a
language (or even between languages). There
exist certain relationships between forms and
between structures which allow the native
language user to shift from one pattern to
another, or to 'generate' many other useful
patterns. Structuralist grammars do not
consider these relations. This is a typical
criticism of Structuralism from Chomsky's
Transformational-Generative Grammar (TGG).
Although Structuralism has met with criticisms in
theory and in language pedagogy, its significant
influence on language teaching and learning
cannot be underestimated.
In today's rapidly changing language teaching
methodology, structural content is still the basic
section in language teaching. Linguistic theories
have, to put it simply, made two important
contributions to language teaching and learning
since Structuralism: one is the primacy of speech
and the other is the adequate description of
language. The former is inherited from the early
Reform Movement and is further applied by
structuralists.
The earliest computer-based education (CBE) applications
were developed on mainframes and minicomputers. Two of
the best-known systems still influence education today:
one is PLATO (which ran on a mainframe), the other
TICCIT (which ran on a minicomputer).
PLATO: In 1959, engineers, physicists, psychologists and
educators at the University of Illinois, led by Donald
Bitzer, began to develop a system intended to automate
individualized instruction. This system, known as PLATO,
was initially funded by the University of Illinois and
the Department of Defense, and gradually grew into a
powerful CBE system. The development work included an
authoring language for computer-assisted instruction,
TUTOR (designed to simplify the process of developing
CAI programs), as well as a dedicated computer terminal.
In its first six years the system grew from a single
terminal to 71, with up to 21 terminals able to operate
simultaneously. Nearly 200 pieces of courseware were
written on the system, which is evidence enough of its
instructional flexibility. In 1967 the University of
Illinois established the Computer-based Education
Research Laboratory and moved PLATO into this new
laboratory. During this period the experimental emphasis
was on the effective use of the system, on software
development for large CBE systems, and on hardware
development.
Over the years, cumulative investment in the development
of the PLATO system has reached several hundred million
dollars, drawn from a wide range of sources: besides CDC,
these included the National Science Foundation, the
federal education department, the National Institute of
Education, and the University of Illinois. PLATO is
perhaps the most famous CAI project in the world, so much
so that it has itself become an object of study. The
system has been effective, as demonstrated by a steady
stream of reports of improved user achievement and
attitudes. Micro-PLATO is no longer used in the United
States, but in Japan TDK continues to use it and to
develop it further.
PLATO's successor, NovaNET (a registered trademark of
University Communications, Inc.), offers higher-quality
graphics and greater efficiency. The system lowers costs
in three ways:
(1) it runs on a cluster of DEC Alpha computers that can
serve several thousand NovaNET terminals at once;
(2) personal systems (including DOS, Windows, Macintosh
and UNIX computers) can all use the service;
(3) the system can be reached over dial-up and leased
telephone lines as well as over the Internet.
TICCIT: The TICCIT (Time-shared, Interactive,
Computer-Controlled Information Television) system began
in 1971 as a joint project between engineers at the MITRE
Corporation in Virginia and educators at the CAI
laboratory of the University of Texas; the institute for
educational computing at Brigham Young University later
joined the project. Development of the system was funded
by the National Science Foundation. Under the direction
of C. V. Bunderson, the team set out to develop complete
lower-division college courses in mathematics and
English. Course development involved minicomputers, color
television sets, graphics, a dedicated authoring system,
and the expertise of specialists and psychologists in
instructional design. TICCIT systems were usually placed
in learning centers and could handle up to 128 terminals.
The instructional strategy TICCIT adopted differed from
earlier computer-assisted learning in several ways, the
most striking being that the student holds control over
the content: which lesson to study, and which part of the
lesson to work on.
CAI enthusiasts have often wished for a system that could
know a student's learning style, past achievement and
readiness, and then present the most suitable material
using the most appropriate strategy. The TICCIT designers
felt that if such efforts made students completely
dependent on the system, they would be counterproductive:
subsequent learning would become more difficult, because
the real world will never fit individual needs so
perfectly. A major goal of TICCIT was therefore to help
students become independent learners.
As students learn to choose among different displays on
the TICCIT system, they are also learning how to choose
their next step in learning when no computer assistance
is available. TICCIT was originally designed to provide
English and mathematics instruction to lower-division
college students; in practice, its heaviest use was in
teaching military personnel and college English and
algebra. TICCIT was phased out in the course of 1995.
Behavioristic CALL
The first phase of CALL, conceived in the
1950s and implemented in the 1960s and '70s,
was based on the then-dominant behaviorist
theories of learning. Programs of this phase
entailed repetitive language drills and can be
referred to as "drill and practice" (or, more
pejoratively, as "drill and kill").
Drill and practice courseware is based on the
model of computer as tutor (Taylor, 1980). In
other words the computer serves as a vehicle
for delivering instructional materials to the
student. The rationale behind drill and
practice was not totally spurious, which
explains in part the fact that CALL drills are
still used today. Briefly put, that rationale is as
follows:
* Repeated exposure to the same material is
beneficial or even essential to learning
* A computer is ideal for carrying out repeated
drills, since the machine does not get bored with
presenting the same material and since it can
provide immediate non-judgmental feedback
* A computer can present such material on an
individualized basis, allowing students to
proceed at their own pace and freeing up class
time for other activities
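The rationale above describes a simple loop: present an item, check the response, repeat until mastered, and move on at the learner's own pace. The following minimal sketch illustrates that loop in Python (a modern stand-in; historical systems used languages such as TUTOR, and the vocabulary list and simulated learner here are invented):

```python
# Minimal sketch of a drill-and-practice loop: repeated exposure,
# immediate non-judgmental feedback, learner-paced progress.
# The vocabulary list and learner are invented for illustration.
DRILL = [("perro", "dog"), ("gato", "cat"), ("casa", "house")]

def run_drill(respond):
    """Present each item until it is answered correctly.

    `respond(prompt, attempt)` stands in for learner input;
    returns a dict of attempts needed per item.
    """
    attempts = {}
    for prompt, correct in DRILL:
        tries = 0
        while True:
            tries += 1
            if respond(prompt, tries).strip().lower() == correct:
                break  # neutral feedback: simply move on to the next item
        attempts[prompt] = tries
    return attempts

# A simulated learner who misses 'gato' on the first try.
def learner(prompt, attempt):
    if prompt == "gato" and attempt == 1:
        return "hat"
    return dict(DRILL)[prompt]

print(run_drill(learner))  # {'perro': 1, 'gato': 2, 'casa': 1}
```

The machine never tires of re-presenting the same item, which is exactly the property the rationale claims for the computer as drill tutor.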
Based on these notions, a number of CALL
tutoring systems were developed for the
mainframe computers which were used at that
time. One of the most sophisticated of these was
the PLATO system, which ran on its own special
PLATO hardware, including central computers
and terminals. The PLATO system included
vocabulary drills, brief grammar explanations
and drills, and translation tests at various
intervals (Ahmad, Corbett, Rogers, & Sussex,
1985).
In the late 1970s and early 1980s, behavioristic
CALL was undermined by two important
factors. First, behavioristic approaches to
language learning had been rejected at both
the theoretical and the pedagogical level.
Secondly, the introduction of the microcomputer
allowed a whole new range of possibilities. The
stage was set for a new phase of CALL.
CALL in the 1980s
Background
Notable amongst the new methods that began to
appear in the 1970s were the humanistic methods
such as Community Language Learning (Curran 1976)
and Total Physical Response (Asher 1977).
Humanistic methods and techniques engaged the
whole person, their emotions and feelings, the
affective dimension (see Moskowitz 1978: 2). But
the most far-reaching approach to language
teaching to emerge at this time was Communicative
Language Teaching (CLT).
Communicative language teaching is an approach
that is based on communicative language use.
Richards and Rodgers summarized the following
theoretical premises deduced from the
consideration of CLT:
1. The communication principle: Activities that involve
communication promote language learning.
2. The task principle: Activities that involve the
completion of real-world tasks promote learning.
3. The meaningfulness principle: Learners must be
engaged in meaningful and authentic language use for
learning to take place (Richards & Rodgers, 1986, p.
72).
Communicative CALL
The second phase of CALL was based on the
communicative approach to teaching which
became prominent in the 1970s and 80s.
Proponents of this approach felt that the drill
and practice programs of the previous decade
did not allow enough authentic communication
to be of much value. One of the main
advocates of this new approach was John
Underwood, who in 1984 proposed a series of
"Premises for 'Communicative' CALL"
(Underwood, 1984, p. 52). According to
Underwood, communicative CALL:
* focuses more on using forms rather than on the
forms themselves;
* teaches grammar implicitly rather than
explicitly;
* allows and encourages students to generate
original utterances rather than just manipulate
prefabricated language;
* does not judge and evaluate everything the
students do, nor reward them with congratulatory
messages, lights, or bells;
* avoids telling students they are wrong and is
flexible to a variety of student responses;
* uses the target language exclusively and
creates an environment in which using the target
language feels natural, both on and off the screen;
and
* will never try to do anything that a book can do
just as well.
Another critic of behavioristic CALL, Vance
Stevens, contends that all CALL courseware
and activities should build on intrinsic
motivation and should foster interactivity, both
learner-computer and learner-learner
(Stevens, 1989).
Several types of CALL programs were developed and
used during this phase of communicative CALL.
First, there were a variety of programs to provide skill
practice, but in a non-drill format. Examples of these
types of programs include courseware for paced
reading, text reconstruction, and language games
(Healey & Johnson, 1995b). In these programs, like the
drill and practice programs mentioned above, the
computer remains the "knower-of-the-right-answer"
(Taylor & Perez, 1989, p. 3); thus this represents an
extension of the computer as tutor model. But--in
contrast to the drill and practice programs--the process
of finding the right answer involves a fair amount of
student choice, control, and interaction.
In addition to computer as tutor, another CALL model
used for communicative activities involves the computer as
stimulus (Taylor & Perez, 1989, p. 63). In this case, the
purpose of the CALL activity is not so much to have
students discover the right answer, but rather to stimulate
students' discussion, writing, or critical thinking. Software
used for these purposes include a wide variety of programs
which may not have been specifically designed for
language learners, programs such as SimCity, Sleuth, or
Where in the World is Carmen Sandiego? (Healey
& Johnson, 1995b).
The third model of computers in communicative
CALL involves the computer as tool (Brierley &
Kemble, 1991; Taylor, 1980), or, as sometimes called,
the computer as workhorse (Taylor & Perez, 1989). In
this role, the programs do not necessarily provide
any language material at all, but rather empower
the learner to use or understand language.
Examples of computer as tool include word
processors, spelling and grammar checkers, desk-top
publishing programs, and concordances.
Of course the distinction between these models is not
absolute. A skill practice program can be used as a
conversational stimulus, as can a paragraph written by
a student on a word processor. Likewise, there are a
number of drill and practice programs which could be
used in a more communicative fashion--if, for example,
students were assigned to work in pairs or small groups
and then compare and discuss their answers (or, as
Higgins, 1988, suggests, students can even discuss what
inadequacies they found in the computer program). In
other words, the dividing line between behavioristic and
communicative CALL involves not only which
software is used, but also how the software is put to use
by the teacher and students.
On the face of things communicative CALL seems like a
significant advance over its predecessor. But by the end of
the 1980s, many educators felt that CALL was still failing
to live up to its potential (Kenning & Kenning, 1990;
Pusack & Otto, 1990; Rüschoff, 1993). Critics pointed out
that the computer was being used in an ad hoc and
disconnected fashion and thus "finds itself making a
greater contribution to marginal rather than to central
elements" of the language teaching process (Kenning &
Kenning, 1990, p. 90).
These critiques of CALL dovetailed with broader
reassessments of the communicative
approach to language teaching. No longer satisfied
with teaching compartmentalized skills or
structures (even if taught in a communicative
manner), a number of educators were seeking ways to
teach in a more integrative manner, for example
using task- or project-based approaches . The
challenge for advocates of CALL was to develop
models which could help integrate the various aspects
of the language learning process. Fortunately,
advances in computer technology were providing the
opportunities to do just that.
In 1975 microcomputers were available in kit form, and
soon afterwards pre-assembled microcomputers, including
the Commodore PET, the Apple II, and the TRS-80, reached
American schools. The market this created brought
hundreds of thousands of microcomputers into American
homes and schools within just a few years. As the
computer developed, its impact on society became evident:
in 1982 Time magazine even broke with its custom of
naming a person of the year and named the computer its
news "person" of the year. In his opening letter,
publisher Meyers wrote: "Several human candidates might
have represented 1982, but none could symbolize the past
year more richly, or be more worthy of being recorded by
history, than this machine."
It was in the early 1980s that the language
teacher-programmer became prominent. With the widespread
availability of inexpensive microcomputers, often supplied
with a version of BASIC, the motivated language teacher
could write simple CALL programs. Programming in BASIC
for language teachers was encouraged through texts such as
Higgins and Johns (1984), Kenning and Kenning (1984), and
Davies (1985), which contained fragments of BASIC, or
complete programs in the language. Prior to microcomputer
CALL, most software development had resulted from
well-funded team efforts because of the complexity of the
task and limited access to mainframe computers. The
programming component of these projects was completed by
specialists in the field.
Now, in theory at least, language teachers were
free to develop their own conceptualization of
CALL on the microcomputer, the only major
constraint being their programming ability. The
range of software written by teacher-programmers
at this time was broad. It was often
centred around a single activity and examples
included text reconstruction, gap-filling,
speed-reading, simulation, and vocabulary games
(Wyatt 1984c; Underwood 1984).
In developing CALL software for the
microcomputer in the 1980s, teacher-programmers
often chose to learn a high-level
programming language such as BASIC to
design materials from scratch. Other language
teachers produced CALL materials using
authoring programs such as Storyboard. Two
other possible approaches to authoring were the
use of authoring systems and authoring
languages.
An authoring system that has had a resounding
influence across educational computing is
HyperCard. It is a good example of how
long-standing concepts can suddenly find expression
and widespread acceptance on the computer, even
though they have existed for many years. The
non-linear approach to text production and
consumption was derived from the work of Ted
Nelson, who first coined the word 'hypertext' in
1965 (see Nelson 1976, 1981).
This work itself was derived from the idea of the
‘memex’ first outlined by Vannevar Bush in 1945.
Other manifestations of the concept include
Notecards by Xerox PARC, Intermedia at Brown
University, and Guide at the University of Kent,
which was the first commercial implementation
of hypertext (Cooke and Williams 1993: 80).
More recently, of course, the phenomenal growth
in the use of hypertext started when the NCSA
Mosaic browser was released early in 1993, and
the hypertext concept has in part been
responsible for the extraordinary growth of the
Internet and the World Wide Web ever since.
The language teacher has not only played a role
in developing CALL materials, but also in using
them effectively with students. Many CALL
commentators have stressed the importance of
carefully integrating CALL work into the broader
curriculum (e.g. Farrington 1986; Hardisty and
Windeatt 1989; Garrett 1991). In achieving
successful integration, the teacher’s role is central,
not only in choosing materials to incorporate into
the programs, but also in integrating the
computer activity into the lesson as a whole
(Jones, C. 1996).
This point is emphasized in Jones’s influential
paper ‘It’s not so much the program, more what
you do with it: the importance of methodology in
CALL’ (Jones, C. 1986). Jones stresses the
intelligent combination of class work away from
the computer with work on the computer,
achieved by coordination and advanced planning
by the teacher. Thus, CALL materials are not
intended to stand alone, but to be integrated into
broader schemes of work (see also Hardisty and
Windeatt 1989).
The development of word processing on
microcomputers must be mentioned also
because of its widespread use in language
teaching (Wresch 1984). In 1978 MicroPro
announced WordMaster, the precursor to
WordStar, and Word and WordPerfect for a
variety of micros followed in 1983 and 1984
respectively (Smarte and Reinhardt 1990).
Storyboard
A typical example in the authoring program genre
of the 1980s is the Storyboard program written by
John Higgins. Storyboard is a text-reconstruction
program for the microcomputer in which the aim is to
reconstruct a text, word by word, using clues
such as the title, introductory material, and
the words already uncovered within the text. The program also falls
into the authoring program or authoring package
category, in that teachers (or students) can use the
authoring facility within the program to write, or
author, their own texts which are then incorporated
into the program for future use.
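The central mechanic of Storyboard-style total text reconstruction can be sketched in a few lines. The following is a hypothetical illustration, not Higgins's actual program (which was written in BASIC); the sample text is invented:

```python
# Sketch of total text reconstruction: the text is fully masked,
# and the learner uncovers it by guessing words. The title stays
# visible as a clue. Sample text invented for illustration.
import re

def mask(text):
    """Replace every letter with a dash, keeping punctuation and length."""
    return re.sub(r"[A-Za-z]", "-", text)

class Storyboard:
    def __init__(self, title, text):
        self.title = title                      # shown as a clue
        self.words = text.split()
        self.revealed = [False] * len(self.words)

    def guess(self, word):
        """Reveal every occurrence of the guessed word; return hit count."""
        hits = 0
        for i, w in enumerate(self.words):
            if w.strip(".,!?").lower() == word.lower() and not self.revealed[i]:
                self.revealed[i] = True
                hits += 1
        return hits

    def display(self):
        """Show the text with unguessed words still masked."""
        return " ".join(w if r else mask(w)
                        for w, r in zip(self.words, self.revealed))

game = Storyboard("The Cat", "The cat sat on the mat.")
game.guess("the")   # reveals both 'The' and 'the'
game.guess("cat")
print(game.display())  # The cat --- -- the ---.
```

An authoring facility, as described above, would simply let a teacher supply a new `title` and `text` pair to be stored for later use.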
Storyboard has an interesting history that gives
some indication of how a CALL software
program evolves as the concept and the
technology develop. The original total text
reconstruction idea for a microcomputer
probably emanates from Tim Johns at the
University of Birmingham, who wrote two
programs called Masker and Textbag that
variously exploited the general concept (Davies
1996). Both programs are described in Higgins
and Johns’s seminal book, Computers in
Language Learning (1984).
John Higgins wrote the original version of
Storyboard in 1981 for a Sharp CP/M computer
and for the Sinclair Spectrum in BASIC (Eclipse
manual: Higgins 1989: 18). Graham Davies
(1996) then worked collaboratively with Johns
to produce a version for the Commodore PET
and BBC microcomputers. Chris Jones
produced an Apple II version at the same time.
Early versions of Storyboard were published by
Wida Software, London in 1982. An agreement
with a second publisher, who insisted on certain
modifications to make the program more
user-friendly, led to a new version of the program
called Copy Write which was published in 1984.
Storyboard itself underwent further modification, and
new versions of the program were created for
different languages and for different microcomputers.
In the mid to late 1980s text reconstruction programs
proliferated, with many variations exploiting the
same central idea in different ways. They included
Developing Tray, TextPlay, Storyline, Quartext,
Storycorner, and a Swedish version called Memory.
Other programs such as Fun with Texts extend the
total text reconstruction idea considerably by adding
further activities. Versions of Storyboard are now
available for IBM style (DOS and Windows) and
Macintosh computers for English, French, Spanish,
and German.
The level of expertise and the amount of time required to
create such programs as Storyboard was within the reach of
individual language teacher-programmers. It is interesting
to note, however, that as the 1980s progressed expectations
grew, and in more recent versions of the text-reconstruction
idea professional programmers have usually been
employed to optimize the workings of the program
and to ensure that the programs are suitably user-friendly
and 'bomb-proof'. Last refers to programs like Storyboard
as first-generation CALL, and text reconstruction,
alongside gap-filling, text manipulation, and simulation,
provided the basis for many CALL activities created by
language teacher-programmers at this time (Last 1989: 47;
Scarborough 1988: 301).
Brett (1994) discusses the use and value of text
reconstruction programs. He emphasizes the use of
authentic material, and suggests that text reconstruction
activities are best exploited as one in a series of
communicative tasks. The careful integration of CALL work
and non-CALL work is apparent in the way Brett organizes
the learning environment. Legenhausen and Wolff (1991)
assessed the Storyboard program more formally, particularly
with regard to the learning strategies used by students. They
noted six strategy types: frequency strategies, form-oriented
strategies, and strategies related to grammatical knowledge,
semantic knowledge, textual knowledge, and world
knowledge. They conclude that regardless of the particular
learning strategy learners employ, the use of Storyboard is
valuable for promoting language awareness.
The Athena Language Learning Project
While many language teachers were becoming directly
involved in creating CALL software for the
microcomputer, the tradition of the larger-scale
project continued.
In 1983, the Massachusetts Institute of Technology (MIT)
established Project Athena as an eight-year research
program to explore innovative uses of the computer in
education. One focus of the project was to create an
experimental system for building multimedia learning
environments. Within this framework is the Athena
Language Learning Project (ALLP), whose aim is the
creation of communication-based prototypes for
beginning and intermediate courses in French, German,
Spanish, Russian, and English as a Second Language
(Morgenstern 1986).
The Athena Language Learning Project (ALLP) was
conceived within the communicative approach to language
learning. The educational principles underlying ALLP are
described by Murray et al. (1989: 98):
Language is seen as a negotiable system of meanings,
expressed and interpreted via the social interaction of reader
and text, or between speakers in a culturally coded situation
rather than as a closed system of formal lexical and
grammatical rules. Accordingly, the aim of the materials being
developed is not so much mastery of the grammatical and
syntactic code as the ability to use this code to perform certain
actions.
Project Athena began in 1983 at MIT with initial
funding of $50 million from Digital
Equipment Corporation and IBM, with the aim of
exploring innovative uses of the computer in
education (Lampe 1988). As of 1988, MIT had 450
computer workstations, interconnected using a
campus-wide network, on various sites around the
institute. Among these workstations is ‘a cluster of
32bit “Visual Workstation” machines which are
capable of combining full-motion digitized colour
videodisc, cable television, digital audio, high
resolution graphics and CD-ROM’(Lampe 1988).
Of the many new research initiatives associated with this
project, two are particularly noteworthy. The first is the
development of the MUSE multimedia authoring
environment. It uses the basic structure of hypertext and
hypermedia systems to provide for extensive
cross-referencing of video, audio, and graphic materials
(Lampe 1988). The second important initiative employed
in the ALLP is MIT-based artificial intelligence
techniques where the goal is to ‘develop a natural
language processing system that can intelligently “guess”
meanings intended from minimal clues, and check its
understanding with the user’ (Murray et al. 1989:98).
An example of an application of these techniques is No
Recuerdo, language learning materials for Spanish. No
Recuerdo is an interactive video narrative based on a
simulation game about an amnesiac Colombian
scientist (Murray 1987: 35). The video provides a series of
structured conversations with strong narrative interest
and a topic-based discourse structure (Murray et al. 1989:
106). As students explore and try to understand the plot,
they query people in the story by typing questions and
commands on the keyboard. The program uses artificial
intelligence techniques to parse the questions and
commands and thereby determine the flow of the action
through the story (Murray et al. 1989: 107). The goals of
the program are vocabulary learning in context, reading
and listening comprehension, cultural awareness, and
practice with conversational strategies (Morgenstern 1986:
31).
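The "intelligent guessing from minimal clues" described above can be loosely illustrated by keyword-driven intent matching. The sketch below is a toy stand-in, not the ALLP's actual natural language processing system; the intents and keyword sets are invented:

```python
# Toy intent matcher: map a typed question to a story action by
# scoring keyword overlap, and flag low-confidence guesses so the
# system can 'check its understanding with the user'.
INTENTS = {
    "ask_name":     {"como", "llama", "nombre"},
    "ask_location": {"donde", "esta", "lugar"},
    "goodbye":      {"adios", "hasta", "luego"},
}

def guess_intent(utterance):
    """Return (best intent, confident?) for a typed utterance."""
    words = set(utterance.lower().replace("?", "").split())
    scored = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    best = max(scored, key=scored.get)
    confident = scored[best] >= 2   # two keyword hits = confident guess
    return best, confident

print(guess_intent("Como se llama usted?"))  # ('ask_name', True)
```

A real parser of the kind Murray et al. describe goes far beyond keyword counting, but the control flow is the same: guess an intent, and route the story, or a confirmation request, accordingly.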
As the ALLP is intended as a prototype only, it is difficult
to assess the role of the teacher when these materials are in
use. Nevertheless, Morgenstern (1986: 24) asserts that the
software will 'certainly not supplant' the
teacher-learner relationship, as the materials are designed for use in
the language laboratory and in conjunction with classroom
activities. It is also significant that language teachers were
heavily involved in the ALLP and that their areas of interest and
expertise were utilized in its development (see Murray
1987: 34). Finally, since the ALLP has not been widely
implemented, extensive evaluative studies have not yet been
conducted.
Evaluation
When empiricist theory predominated there appeared
to be a perfect match between the qualities of the
computer and the requirements of language teaching
and learning. With the advent of the
communicative approach to language teaching, some
writers began to say that CALL methodology was 'out
of step' with current ideas on language teaching and
learning (Stevens et al. 1986: p. xi), that the ideologies
conflicted (Smith 1988: 5), and that CALL was not
adaptable to modern methodologies (Last 1989: 39).
Last commented that, ‘The potentiality of the computer
appears all the more restricted as a language teacher if
you couple that to the fact that communicative
competence is now increasingly playing a central role at
all levels of language learning’ (Last 1989: 37).
Aside from the question of matching
methodological demands with technological
capabilities, other critics of CALL have directed
their attention towards the dominance of the
microcomputer and, in some instances, specific
brands of microcomputer. For example, in 1989
Last (1989: 32) blamed the static state of the art
in CALL in the UK on the market dominance of
the BBC microcomputer. Lian also maintains that
conceptualizing CALL only within a
microcomputer framework is overly restrictive
(Lian 1991: 2).
Two major syntheses of research on CALL
in the 1980s have been completed by Pederson
(1988) and Dunkel (1991b). They summarize the
findings of effectiveness research on CAL and
CALL to date as 'limited' and 'somewhat
equivocal' (Dunkel 1991b: 24). Of the
research on CALL and education generally
conducted to 1988, Pederson summarizes the
research findings as follows:
1. Meaningful (as opposed to manipulative) CALL
practice is both possible and preferable.
2. The way CALL is designed to encourage the
development of language learning skills can result
in more learning.
3. Learner differences can be documented easily
and accurately through computer tally of
interactive learning strategies.
4. Learner differences can affect learner
strategies, learning gains, and attitude in CALL.
5. Students tend to demonstrate a more positive
attitude towards CALL written by their own
instructor.
6. Language teachers need to develop strategies
for manoeuvring effectively within the
culture of the learning laboratory and the
educational institution in order to secure needed
computer resources.
7. Despite the enthusiasm of language teachers
already using CALL, many language teachers are
dissatisfied with existing software and desire
training on how to integrate CALL into the
existing curriculum.
In suggesting research directions for the 1990s,
Carol Chapelle (1989a) describes how the
assumptions underpinning the CALL research
question of the 1970s – 'Is CALL effective in
improving students' second language
competence?' – have been invalidated during
the intervening period and gives the following
justification.
Firstly, it is now recognized that the term CALL covers a
range of activities, not just one type.
Next, 'second language competence' is now defined as a
complex set of interrelated competencies, making it more
difficult to test directly as a result.
Thirdly, researchers have recognized the importance of
studying the processes of learning, so research that
focuses on learning outcomes alone is inadequate. And
finally, individual student characteristics have been
shown to have a significant impact on SLA (Chapelle
1989a: 7-9).
CALL in the 1990s
Integrative CALL: Multimedia
Integrative approaches to CALL are based on two
important technological developments of the last decade-multimedia computers and the Internet. Multimedia
technology--exemplified today by the CD-ROM-- allows a
variety of media (text, graphics, sound, animation, and
video) to be accessed on a single machine. What makes
multimedia even more powerful is that it also entails
hypermedia. That means that the multimedia resources
are all linked together and that learners can navigate their
own path simply by pointing and clicking a mouse.
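The point-and-click navigation just described is, structurally, a walk over a graph of linked media nodes. A minimal sketch follows (the node names, media types, and link structure are invented for illustration):

```python
# Minimal hypermedia sketch: nodes hold content of various media
# types; links let the learner choose their own path through them.
nodes = {
    "lesson1":   {"type": "text",  "links": ["audio1", "grammar1"]},
    "audio1":    {"type": "sound", "links": ["lesson1"]},
    "grammar1":  {"type": "text",  "links": ["lesson1", "exercise1"]},
    "exercise1": {"type": "text",  "links": ["grammar1"]},
}

def navigate(start, choices):
    """Follow a learner's sequence of link choices; return the path taken."""
    path, current = [start], start
    for choice in choices:
        assert choice in nodes[current]["links"], "no such link here"
        current = choice
        path.append(current)
    return path

# One learner detours into grammar, does an exercise, and returns.
print(navigate("lesson1", ["grammar1", "exercise1", "grammar1"]))
```

Because every node lists its own outgoing links, two learners starting at the same lesson can take entirely different paths, which is the individual-path property claimed above.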
Hypermedia provides a number of advantages
for language learning. First of all, a more
authentic learning environment is created, since
listening is combined with seeing, just like in the
real world. Secondly, skills are easily integrated,
since the variety of media make it natural to
combine reading, writing, speaking and
listening in a single activity. Third, students
have great control over their learning, since they
can not only go at their own pace but even on
their own individual path, going forward and
backwards to different parts of the program,
homing in on particular aspects and skipping
other aspects altogether.
Finally, a major advantage of hypermedia is that
it facilitates a principal focus on the
content, without sacrificing a secondary focus on
language form or learning strategies. For
example, while the main lesson is in the
foreground, students can have access to a variety
of background links which will allow them rapid
access to grammatical explanations or exercises,
vocabulary glosses, pronunciation information,
or questions or prompts which encourage them
to adopt an appropriate learning strategy.
An example of how hypermedia can be used for
language learning is the program Dustin which is
being developed by the Institute for Learning
Sciences at Northwestern University (Schank &
Cleary, 1995). The program is a simulation of a
student arriving at a U.S. airport. The student
must go through customs, find transportation to
the city, and check in at a hotel.
The language learner using the program assumes the
role of the arriving student by interacting with
simulated people who appear in video clips and
responding to what they say by typing in responses. If
the responses are correct, the student is sent off to do
other things, such as meeting a roommate. If the
responses are incorrect, the program takes remedial
action by showing examples or breaking down the task
into smaller parts. At any time the student can control
the situation by asking what to do, asking what to say,
asking to hear again what was just said, requesting
a translation, or controlling the level of difficulty of the
lesson.
Yet in spite of the apparent advantages of
hypermedia for language learning, multimedia
software has so far failed to make a major
impact. Several major problems have surfaced
with regard to exploiting multimedia for
language teaching.
First, there is the question of quality of available
programs. While teachers themselves can conceivably
develop their own multimedia programs using
authoring software such as Hypercard (for the
Macintosh) or Toolbook (for the PC), the fact is that
most classroom teachers lack the training or the time
to make even simple programs, let alone more complex
and sophisticated ones such as Dustin. This has left the
field to commercial developers, who often fail to base
their programs on sound pedagogical principles. In
addition, the cost involved in developing quality
programs can put them beyond the reach of most
English teaching programs.
Beyond these lies perhaps a more fundamental
problem. Today's computer programs are not yet
intelligent enough to be truly interactive. A
program like Dustin should ideally be able to
understand a user's spoken input and evaluate it
not just for correctness but also for appropriateness.
It should be able to diagnose a student's
problems with pronunciation, syntax, or usage and
then intelligently decide among a range of options
(e.g., repeating, paraphrasing, slowing down,
correcting, or directing the student to background
explanations).
Computer programs with that degree of
intelligence do not exist, and are not expected to
exist for quite a long time. Artificial intelligence
(AI) of a more modest degree does exist, but few
funds are available to apply AI research to the
language classroom.
Thus while Intelligent CALL (Underwood, 1989)
may be the next and ultimate usage of computers
for language learning, that phase is clearly a long
way down the road.
Multimedia technology as it currently exists thus
only partially contributes to integrative CALL.
Using multimedia may involve an integration of
skills (e.g., listening with reading), but it too seldom
involves a more important type of
integration--integrating meaningful and authentic
communication into all aspects of the language
learning curriculum. Fortunately, though, another
technological breakthrough is helping make that
possible--electronic communication and the
Internet.
Integrative CALL: The Internet
Computer-mediated communication (CMC),
which has existed in primitive form since the
1960s but has only become widespread in the
last five years, is probably the single computer
application to date with the greatest impact on
language teaching. For the first time, language
learners can communicate directly,
inexpensively, and conveniently with other
learners or speakers of the target language 24
hours a day, from school, work, or home.
This communication can be asynchronous
(not simultaneous) through tools such as
electronic mail (e-mail), which allows each
participant to compose messages at their own time
and pace, or it can be synchronous ("real time"),
using programs such as MOOs,
which allow people all around the world to have
a simultaneous conversation by typing at their
keyboards. It also allows not only one-to-one
communication, but also one-to-many, allowing
a teacher or student to share a message with a
small group, the whole class, a partner class, or
an international discussion list of hundreds or
thousands of people.
Computer-mediated communication allows users
to share not only brief messages, but also lengthy
(formatted or unformatted) documents--thus
facilitating collaborative writing--and also
graphics, sounds, and video. Using the World
Wide Web (WWW), students can search through
millions of files around the world within minutes
to locate and access authentic materials (e.g.,
newspaper and magazine articles, radio
broadcasts, short videos, movie reviews, book
excerpts) exactly tailored 特 制 的 to their own
personal interests. They can also use the Web to
publish their texts or multimedia materials to
share with partner classes or with the general
public.
It is not hard to see how computer-mediated
communication and the Internet can facilitate an
integrative approach to using technology. The
following example illustrates well how the Internet
can be used to help create an environment where
authentic and creative communication is
integrated into all aspects of the course.
Students of English for Science and Technology
in La Paz, Mexico, don't just study general
examples and write homework for the teacher;
instead they use the Internet to actually become
scientific writers (Bowers, 1995; Bowers, in
press). First, the students search the World
Wide Web to find articles in their exact area of
specialty and then carefully read and study
those specific articles. They then write their
own drafts online;
the teacher critiques the drafts online and creates
electronic links to his own comments and to pages of
appropriate linguistic and technical explanation, so that
students can find additional background help at the
click of a mouse. Next, using this assistance, the students
prepare and publish their own articles on the World
Wide Web, together with reply forms to solicit
opinions from readers. They advertise their Web articles
on appropriate Internet sites (e.g., scientific newsgroups)
so that interested scientists around the world will know
about their articles and will be able to read and
comment on them. When they receive their comments
(by e-mail) they can take those into account in editing
their articles for republication on the Web or for
submission to scientific journals.
The above example illustrates an integrative
approach to using technology in a course based on
reading and writing. This perhaps is the most
common use of the Internet to date, since it is still
predominantly a text-based medium. This will
undoubtedly change in the future, not only due to
the transmission of audio-visual material (video
clips, sound files) over the World Wide Web, but also due
to the growing use of the Internet to carry out
real-time audio- and audio-visual chatting (this is
already possible with tools such as NetPhone and
CU-SeeMe, but is not yet widespread).
Nevertheless, it is not necessary to wait for further
technological developments in order to use the Internet in
a multi-skills class. The following example shows how the
Internet, combined with other technologies, was used to
help create an integrated communicative environment for
EFL students in Bulgaria--students who until recent years
had little contact with the English-speaking world and
were taught through a "discrete topic and skill
orientation" (Meskill & Rangelova, in press, n.p.). These
Bulgarian students now benefit from a high-tech/low-tech
combination to implement an integrated skills approach
in which a variety of language skills are practiced at the
same time with the goal of fostering communicative
competence. Their course is based on a collaborative,
interpreted study of contemporary American short stories,
assisted by three technological tools:
* E-mail communication. The Bulgarian students
correspond by e-mail with an American class of
TESOL graduate students to explore in detail the
nuances of American culture which are
expressed in the stories, and also to ask questions
about idioms, vocabulary, and grammar. The
American students, who are training to be
teachers, benefit from the concrete experience of
handling students' linguistic and cultural
questions.
* Concordancing. The Bulgarian students
further test out their hypotheses regarding the
lexical and grammatical meanings of expressions
they find in the stories by using concordancing
software to search for other uses of these
expressions in a variety of English language
corpora stored on CD-ROM.
* Audio tape. Selected scenes from the stories--dialogues, monologues, and descriptions--were
recorded by the American students and provide
both listening practice (inside and outside of
class) and also additional background materials
to help the Bulgarians construct their
interpretation of the stories.
These activities are supplemented by a range of
other classroom activities, such as in-class
discussions and dialogue journals, which assist the
students in developing their responses to the
stories' plots, themes, and characters--responses
which can be further discussed with their e-mail
partners in the U.S.
Conclusion
The history of CALL suggests that the computer can
serve a variety of uses for language teaching. It can
be a tutor which offers language drills or skill
practice; a stimulus for discussion and interaction; or
a tool for writing and research. With the advent of
the Internet, it can also be a medium of global
communication and a source of limitless authentic
materials. But as pointed out by Garrett (1991), "the
use of the computer does not constitute a method".
Rather, it is a "medium in which a variety of methods,
approaches, and pedagogical philosophies may be
implemented" (p. 75). The effectiveness of CALL
cannot reside in the medium itself but only in how it
is put to use.
As with the audio language lab "revolution" of
40 years ago, those who expect to get
magnificent results simply from the purchase
of expensive and elaborate systems will likely
be disappointed. But those who put computer
technology to use in the service of good
pedagogy will undoubtedly find ways to enrich
their educational program and the learning
opportunities of their students.
A Typology of CALL Programs and Applications
Computer as Tutor
Grammar
CALL Programs designed for teaching grammar include
drill and practice on a single topic (Irregular Verbs,
Definite and Indefinite Articles), drills on a variety of
topics (Advanced Grammar Series, English Grammar
Computerized I and II), games (Code Breaker, Jr. High
Grade Builder), and programs for test preparation (50
TOEFL SWE Grammar Tests). Grammar units are also
included in a number of comprehensive multimedia
packages (Dynamic English, Learn to Speak English
Series).
Listening
This category includes programs which are
specifically designed to promote second-language
listening (Listen!), multi-skill drill and practice
programs (TOEFL Mastery), multimedia programs
for second language learners (Accelerated English,
Rosetta Stone), and multimedia programs for
children or the general public (Aesop's Fables, The
Animals).
Pronunciation
Pronunciation programs (Sounds American,
Conversations) generally allow students to
record and play back their own voice and
compare it to a model. Several comprehensive
multimedia programs (Firsthand Access, The
Lost Secret) include similar features.
Reading
This category includes reading programs
designed for ESL learners (Reading Adventure
1 - ESL) and tutorials designed for children or
the general public (MacReader, Reading
Critically, Steps to Comprehension), and games
(HangWord). Also included are more general
educational programs which can assist reading
(Navajo Vacation, The Night Before Christmas)
and text reconstruction programs.
Text Reconstruction
Text reconstruction programs allow students to
manipulate letters, words, sentences, or
paragraphs in order to put texts together. They
are usually inexpensive and can be used to
support reading, writing, or discussion activities.
Popular examples include Eclipse, Gapmaster,
Super Cloze, Text Tanglers, and Double Up.
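The core mechanism behind cloze-style programs such as Super Cloze can be sketched in a few lines. The function below is a minimal illustration of the idea, not taken from any of the packages named above; the gap interval and blank marker are arbitrary assumptions:

```python
import re

def make_cloze(text, gap_every=7, blank="____"):
    """Create a simple cloze exercise by blanking every nth word.

    Returns the gapped text and the list of removed words (the answer key).
    """
    words = text.split()
    answers = []
    for i in range(gap_every - 1, len(words), gap_every):
        # Strip surrounding punctuation so the answer key holds clean words
        core = re.sub(r"^\W+|\W+$", "", words[i])
        if not core:
            continue
        answers.append(core)
        words[i] = words[i].replace(core, blank, 1)
    return " ".join(words), answers
```

A teacher (or, as suggested above, the students themselves) need only supply a new text to generate a fresh exercise, which is exactly what makes such programs cheap to author and long-lived.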
Vocabulary
This category includes drill and practice
programs (Synonyms), multimedia tutorials
(English Vocabulary), and games (Hangman,
Scrabble). Also useful are several reference and
searching tools (such as concordancers) which will
be described in the Computer as Tool section
below.
Writing
Most software for supporting writing falls under
the Computer as Tool category (see below).
Exceptions include tutorials such as Sentence
Combining, SentenceMaker, and Typing Tutor.
Comprehensive
A number of comprehensive multimedia
programs are designed to teach ESL students a
variety of skills. They range in price but many
are quite expensive. Among the better known are
Dynamic English, Ellis Mastery, English
Discoveries, and Rosetta Stone.
Computer as Stimulus
The computer as stimulus category includes
software which is used not so much as a tutorial
in itself but to generate analysis, critical thinking,
discussion, and writing. Of course a number of
the above-mentioned programs (e.g., The Animals,
Navajo Vacation, Night Before Christmas) can be
used as a stimulus. Especially effective as a
stimulus are programs which include simulations.
Examples of this latter group include London
Adventure, Oregon Trail, SimCity, Sleuth,
Crimelab, Amazon Trail, Cross Country
Canada/USA, and Where in the World is Carmen
Sandiego?
Computer as Tool
Word Processing
The most common use of computer as tool, and
probably the most common use overall of the
computer for language learning, is word
processing. High quality programs like Microsoft
Word can be useful for certain academic or
business settings (Healey & Johnson, 1995a).
Programs such as ClarisWorks and Microsoft
Works are cheaper and simpler to learn and still
have useful features. SimpleText and TeachText
are simpler yet and may be sufficient for many
learners.
Grammar Checkers
Grammar checkers (e.g., Grammatik) are designed
for native speakers and they typically point to
problems believed typical of native speaker
writing (e.g., too much use of passives). They are
usually very confusing to language learners and
are not recommended for an ESL/EFL context.
Concordancers
Concordancing software searches through huge
files of texts (called corpora, which is the plural of
corpus) in order to find all the uses of a particular
word (or collocation). While very confusing for
beginners, concordancers can be a wonderful tool
for advanced students of language, linguistics, or
literature. The best concordancer for language
students and teachers is Oxford's MicroConcord.
The program includes, as an optional extra, several
large corpora (totaling 1,000,000 words) taken from
British newspapers. This program, like other
concordancers, can also be used with any other
text files available in electronic form.
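At its heart, a concordancer performs a keyword-in-context (KWIC) search over a corpus. The sketch below assumes a small plain-text corpus held in memory (real tools such as MicroConcord work over far larger indexed files) and simply shows the basic idea:

```python
import re

def concordance(corpus, keyword, width=30):
    """Return KWIC lines: each occurrence of `keyword` shown with up to
    `width` characters of context on either side, hits aligned in a column."""
    hits = []
    # Word-boundary match so "cat" does not also match "catalogue"
    pattern = r"\b" + re.escape(keyword) + r"\b"
    for m in re.finditer(pattern, corpus, re.IGNORECASE):
        left = corpus[max(0, m.start() - width):m.start()]
        right = corpus[m.end():m.end() + width]
        hits.append(left.rjust(width) + "[" + m.group(0) + "]" + right)
    return hits
```

Printed one per line, the bracketed keywords line up in a column, which is what makes collocation patterns visible to a student at a glance.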
Collaborative Writing
A number of tools exist to help students work on
their writing collaboratively on computers linked
in a local area network. The most popular among
language teachers is Daedalus Integrated Writing
Environment, which includes modules for real-time
discussion, word processing, electronic mail,
and brainstorming, as well as citation
software and a dictionary. Other programs with
some similar features are Aspects and
MacCollaborator.
Reference
There are numerous CD versions of
encyclopedias and dictionaries. Two which have
been highly recommended (Healey & Johnson, 1995a)
for language learners are the encyclopedia
ENCARTA and the Longman Dictionary of
American English.
Internet
The three most popular uses of the Internet for language
teaching are electronic mail (e-mail), the World Wide
Web, and MOOs. Numerous programs exist for using
electronic mail. The Eudora program has several nice
features, including "point-and-click" word processing
capacity, easy attachment of formatted files, and ability
to include foreign characters and alphabets. The free
version (Eudora Light) is suitable for most purposes;
there is also a more powerful commercial version
(Eudora Pro). Eudora requires a direct connection to the
Internet. Additional programs which run through the
Unix system and do not require a direct Internet
connection are Pine and Elm.
To access the World Wide Web, one needs a special
program called a browser. By far the most popular
browser among educators is Netscape, which until
now has been free to teachers and students.
MOOs ("Multiple-user-domains Object Oriented")
allow for real time communication, simulation, and
role playing among participants throughout the world,
and a special MOO has been set up for ESL teachers
and students (schmOOze University homepage, 1995).
The use of MOOs is greatly facilitated if one uses a
special client software program such as TinyFugue
(for unix), MUDDweller (for Mac), or MUDwin (for
Windows).
Authoring
Authoring allows teachers to tailor software programs
either by inserting new texts or by modifying the
activities. Authoring runs on a spectrum from set
programs which allow slight modification (e.g., inclusion
of new texts) to complex authoring systems. Many of the
programs listed earlier (e.g., MacReader, Eclipse,
Gapmaster, Super Cloze, Text Tanglers, and Double Up)
allow teachers to insert their own texts and thus make
the programs more relevant to their own lessons (and
greatly extend their shelf life too). By allowing the
students themselves to develop and insert the texts, the
programs can be made even more communicative and
interactive.
On the other end of the spectrum, authoring
systems allow teachers to design their own
multimedia courseware. These can take a lot of time
and effort to master, and are most often used by
true enthusiasts. Some are specifically designed for
language teachers (CALIS, DASHER), others for
educators (Digital Chisel), and others for the
general public (HyperCard, HyperStudio, SuperCard,
ToolBook, MacroMind Director).