Research Article: Chunking or not chunking? How do we find words in artificial language learning?

Date Published: May 21, 2012

Publisher: University of Finance and Management in Warsaw

Author(s): Ana Franco, Arnaud Destrebecqz.

http://doi.org/10.2478/v10053-008-0111-3

Abstract

What is the nature of the representations acquired in implicit statistical
learning? Recent results in the field of language learning have shown that
adults and infants are able to find the words of an artificial language when
exposed to a continuous auditory sequence consisting of a random ordering of
these words. Such performance can only be based on processing the transitional
probabilities between sequence elements. Two different kinds of mechanisms may
account for these data: Participants may either parse the sequence into smaller
chunks corresponding to the words of the artificial language, or they may become
progressively sensitive to the actual values of the transitional probabilities
between syllables. The two accounts are difficult to differentiate because they
make similar predictions in comparable experimental settings. In this study, we
present two experiments that aimed at contrasting these two theories. In these
experiments, participants had to learn 2 sets of pseudo-linguistic regularities:
Language 1 (L1) and Language 2 (L2) presented in the context of a serial
reaction time task. L1 and L2 were either unrelated (none of the syllabic
transitions of L1 were present in L2), or partly related (some of the
intra-word transitions of L1 were used as inter-word transitions of L2). The
two accounts make opposite predictions in these two settings. Our results
indicate that the nature of the representations depends on the learning
condition. When cues were presented to facilitate parsing of the sequence,
participants learned the words of the artificial language. However, when no cues
were provided, performance was strongly influenced by the transitional
probabilities between syllables.
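
To make the notion of a transitional probability concrete, here is a minimal Python sketch (with invented two-syllable words, not the study's actual stimuli) that builds a continuous stream by randomly concatenating the words and then estimates P(next syllable | current syllable) from bigram counts. Within-word transitions approach 1.0, while transitions spanning a word boundary are markedly lower.

```python
import random
from collections import Counter, defaultdict

# Invented two-syllable "words" (illustration only, not the study's stimuli).
words = [["tu", "pi"], ["go", "la"], ["bi", "da"], ["ro", "ki"]]

# Continuous stream: a random ordering of the words, as in the exposure phase.
stream = [syll for _ in range(300) for syll in random.choice(words)]

pair_counts = Counter(zip(stream, stream[1:]))   # bigram frequencies
first_counts = Counter(stream[:-1])              # how often each syllable starts a bigram

# Transitional probability P(b | a) = count(a -> b) / count(a).
tp = defaultdict(dict)
for (a, b), n in pair_counts.items():
    tp[a][b] = n / first_counts[a]

# Within-word transition ("tu" -> "pi") is ~1.0; transitions out of a
# word-final syllable ("pi" -> ...) hover around 0.25 (four possible words).
print(tp["tu"]["pi"], {b: round(p, 2) for b, p in tp["pi"].items()})
```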

Partial Text

When faced with a complex structured domain, human learners tend to behave as if they
extract the underlying rules of the material. In an artificial grammar learning
experiment, for instance, participants are first requested to memorize a series of
letter strings following the rules of a finite-state grammar. They are not informed
of the existence of those rules, however. In a second phase of the experiment, when
asked to classify novel strings as grammatical or not, they usually perform above
chance level but remain largely unable to verbalize the rules. This
dissociation was initially attributed to the unconscious, or implicit, learning
of the underlying rules (Reber, 1967, 1989).

To contrast the predictions of chunking and transition-finding strategies, we used a
12-choice SRT task in which the succession of the visual targets implemented
statistical regularities similar to those found in artificial languages. We chose
to use a visuomotor task instead of presenting the artificial language in the
auditory modality in order to be able to track the development of statistical
learning through reaction times (see Misyak,
Christiansen, & Tomblin, 2010, for a recent similar attempt; see also
Conway & Christiansen, 2009, for a
systematic comparison between the auditory and visual modalities). In our version of
the task, participants had to learn two different artificial languages presented
successively. In our experiments, the first “language” (L1) was
composed of four “words”, or small two-element sequences, and the
second “language” (L2) was composed of four small three-element
sequences. In one (control) condition, the two ensembles were not related to each
other, but in the other (experimental) condition, the intra-sequence transitions of
L1 became inter-sequence transitions in L2 (see Figure 1 and Table 1).
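
As an illustration of that manipulation (the position indices below are invented; the actual sequences appear in Figure 1 and Table 1 of the paper), the following sketch lays out four two-element L1 sequences and four three-element L2 sequences over 12 target locations so that every intra-sequence transition of L1 resurfaces as an inter-sequence transition of L2, and then checks this on a randomly concatenated L2 stream.

```python
import random

# Hypothetical screen positions 0-11 (illustration only; see Figure 1 and
# Table 1 of the paper for the actual material).
l1_words = [(0, 1), (2, 3), (4, 5), (6, 7)]                # four 2-element L1 sequences
l2_words = [(1, 8, 2), (3, 9, 4), (5, 10, 6), (7, 11, 0)]  # four 3-element L2 sequences

# Concatenate L2 sequences in random order and collect the transitions that
# straddle a boundary between two consecutive sequences.
order = [random.choice(l2_words) for _ in range(200)]
boundary_pairs = {(a[-1], b[0]) for a, b in zip(order, order[1:])}

# In this experimental-style layout, each intra-sequence transition of L1
# (0->1, 2->3, 4->5, 6->7) also occurs as an inter-sequence transition of L2.
print(set(l1_words) <= boundary_pairs)   # True once every word pairing has occurred
```

In a control-style layout, by contrast, the L2 sequences would be composed so that none of the L1 transitions ever appears anywhere in the L2 stream.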

The goal of Experiment 1 was twofold. First, we wanted to make sure that participants
could learn statistical regularities similar to those used in artificial languages
in the context of an SRT task. Second, we wanted to establish whether they would be
able to recognize the L2 “words”, that is, the three-element sequences
presented in a random order during the SRT task. If learning is based on chunking,
recognition performance should be the same for non-sequences and part-sequences. If
performance is based on learning transitional probabilities, participants should be
more likely to accept part-sequences than non-sequences as L2 sequences. The chunking
hypothesis also predicts better L2 sequence-recognition in the control than in the
experimental condition.
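
A small numerical sketch (again with invented three-element words rather than the experimental items) shows why the two accounts diverge at recognition: scoring test items by their average learned transitional probability ranks words above part-sequences and part-sequences above non-sequences, whereas a learner that stores only whole chunks has no basis for preferring part-sequences over non-sequences.

```python
import random
from collections import Counter

# Invented three-element "words" standing in for L2 items (illustration only).
words = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I"), ("J", "K", "L")]
stream = [x for _ in range(500) for x in random.choice(words)]

pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])

def tp(a, b):
    """Estimated transitional probability P(b | a)."""
    return pairs[(a, b)] / firsts[a] if firsts[a] else 0.0

def mean_tp(item):
    """Average transitional probability over the item's two transitions."""
    return (tp(item[0], item[1]) + tp(item[1], item[2])) / 2

word = ("A", "B", "C")   # a word: both transitions ~1.0
part = ("B", "C", "D")   # part-sequence: one within-word plus one boundary transition
non  = ("A", "E", "I")   # non-sequence: transitions that never occur in the stream

# A TP-sensitive learner ranks word > part-sequence > non-sequence, so part-
# sequences should be falsely recognized more often than non-sequences.
print(round(mean_tp(word), 2), round(mean_tp(part), 2), round(mean_tp(non), 2))
```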

In this paper, we aimed at clarifying the nature of the representations involved in
implicit and statistical learning. The question was to assess whether participants
form chunks of the training material or merely develop a sensitivity to the
transitional probabilities present in the training sequence. In line with previous
studies showing that statistical learning of pseudolinguistic regularities can occur
in modalities other than audition, we showed, in the context of a
visuomotor RT task, that participants learn the statistical regularities present in
a random succession of sequences of visual targets. The RT results indicate that
participants were able to learn two different languages (L1 and L2) presented
successively. Moreover, they were also able to recognize L2 sequences in a
subsequent recognition task.

 

Source:

http://doi.org/10.2478/v10053-008-0111-3

 
