2009
Performance of Learning & Learning from Performance.
Conference talk given at: Workshop on Learning Meets Acquisition, DGfS meeting, March 4, 2009, Osnabrück, Germany. See also http://www.birot.hu/events/lma/. Slides.

Abstract:

Formal models of language learning assume that the learning data are produced by the target grammar. In other words, the learner is supposed to develop her linguistic competence by having direct access to the products of the teacher's linguistic competence. Some algorithms are robust to a reasonable amount of random noise, but they still rest on the idea that most of the learning data reflect the target grammar.
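
As a baseline, the following is a minimal sketch of such standard error-driven learning in Optimality Theory, in the spirit of Boersma's Gradual Learning Algorithm but without its stochastic evaluation; the constraints, candidates and violation profiles are invented for illustration, and every datum is taken to be the teacher's grammar-optimal form.

    # A minimal sketch of standard error-driven OT learning, assuming the
    # learner always receives the teacher's grammar-optimal form.
    # Constraints, candidates and violation profiles are hypothetical.

    CANDIDATES = ["pat", "pa"]
    VIOLATIONS = {                      # candidate -> violations per constraint
        "pat": {"NoCoda": 1, "Max": 0},
        "pa":  {"NoCoda": 0, "Max": 1},
    }

    def optimal(ranking):
        """The candidate that best satisfies the constraint hierarchy."""
        order = sorted(ranking, key=ranking.get, reverse=True)
        return min(CANDIDATES, key=lambda c: [VIOLATIONS[c][k] for k in order])

    def update(ranking, teacher_form, step=0.1):
        """On a mismatch, demote constraints the teacher's form violates more,
        and promote those the learner's wrong output violates more."""
        learner_form = optimal(ranking)
        if learner_form != teacher_form:
            for k in ranking:
                diff = VIOLATIONS[teacher_form][k] - VIOLATIONS[learner_form][k]
                ranking[k] -= step * (diff > 0)
                ranking[k] += step * (diff < 0)
        return ranking

    # Teacher's competence: Max >> NoCoda, so the teacher always says "pat".
    ranking = {"NoCoda": 1.0, "Max": 0.0}        # learner starts with the
    for _ in range(20):                          # opposite ranking
        ranking = update(ranking, "pat")
    print(ranking, optimal(ranking))             # learner converges on "pat"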

In reality, however, the learner is exposed to the teacher's linguistic performance, not to his competence. As long as performance errors are seen as random noise on competence, a view going back to Chomsky's introduction of the competence-performance distinction in his Aspects, the robust algorithms just mentioned may suffice. Yet performance can be argued to be more complex than mere noise. Both Bíró (2006) and Smolensky and Legendre (2006) argue that linguistic performance can be modelled as the algorithm that computationally implements the function describing competence. In particular, they suggest that performance errors are locally optimal candidates in an Optimality Theoretic candidate set, which can trap the performance algorithm, simulated annealing. Bíró (2006) also argues that in certain cases such error forms are produced at a significant rate even if simulated annealing is given ample computing time.
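
To illustrate the mechanism, here is a minimal runnable sketch of performance as simulated annealing over a candidate set, in the spirit of Bíró's SA-OT; the four candidates, their neighbourhood topology and the scalar harmony values are hypothetical, and SA-OT proper compares OT violation profiles lexicographically rather than using scalar harmonies.

    import math
    import random
    from collections import Counter

    # A minimal sketch of performance as simulated annealing over a candidate
    # set. The candidates w1..w4, the neighbourhood topology and the scalar
    # harmony values are hypothetical stand-ins.

    NEIGHBOURS = {"w1": ["w2", "w4"], "w2": ["w1", "w3"],
                  "w3": ["w2", "w4"], "w4": ["w3", "w1"]}
    HARMONY = {"w1": 0, "w2": -5, "w3": -1, "w4": -5}   # w1: global optimum,
                                                        # w3: local optimum

    def anneal(start, t_max=3.0, t_min=0.01, steps=500):
        """Random walk accepting worse neighbours with a probability that
        shrinks as the temperature cools; it may freeze in the locally
        optimal w3, producing a 'performance error'."""
        current = start
        for i in range(steps):
            t = t_max * (t_min / t_max) ** (i / (steps - 1))  # geometric cooling
            nxt = random.choice(NEIGHBOURS[current])
            delta = HARMONY[nxt] - HARMONY[current]
            if delta >= 0 or random.random() < math.exp(delta / t):
                current = nxt
        return current

    # Repeated runs approximate the output distribution: the grammatical w1
    # dominates, but the error form w3 retains a non-negligible share.
    print(Counter(anneal("w2") for _ in range(1000)))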

Consequently, this talk will argue, research on the learnability of linguistic models must take into account the divergences between competence and performance. The talk will first present how standard learning algorithms in Optimality Theory must be revised to accommodate this shift in the research paradigm (Bíró 2007). Second, the results of a few simple experiments will be reported, which can be evaluated against child language phenomena. Finally, we draw the consequences of the approach for language evolution models based on iterated learning.
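
As a closing illustration, a schematic iterated-learning loop under the performance-based view might look as follows; the two toy grammars, the single meaning and the 10% error rate are hypothetical placeholders, not figures from the talk.

    import random

    # A runnable toy of iterated learning in which each generation learns
    # from the previous generation's *performance*, not its competence.
    # The grammars and the error rate below are hypothetical.

    GRAMMARS = {"G1": "form-A", "G2": "form-B"}   # grammar -> optimal form
    ERROR_RATE = 0.10                             # chance of a locally optimal
                                                  # error form being produced

    def produce(grammar):
        """Performance: usually the competence-optimal form, occasionally
        the competing (locally optimal) form."""
        optimal_form = GRAMMARS[grammar]
        if random.random() < ERROR_RATE:
            return "form-B" if optimal_form == "form-A" else "form-A"
        return optimal_form

    def learn(data):
        """The learner adopts whichever grammar better explains the data."""
        count_a = sum(form == "form-A" for form in data)
        return "G1" if count_a >= len(data) / 2 else "G2"

    def iterate(grammar="G1", generations=10, utterances=50):
        for _ in range(generations):
            data = [produce(grammar) for _ in range(utterances)]
            grammar = learn(data)                 # only performance is observed
        return grammar

    print(iterate())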

References

Tamás Bíró. Finding the Right Words: Implementing Optimality Theory with Simulated Annealing. PhD thesis, University of Groningen, 2006. ROA-896.

Tamás Bíró. 'The benefits of errors: Learning an OT grammar with a structured candidate set'. In Proceedings of the Workshop on Cognitive Aspects of Computational Language Acquisition, pages 81–88, Prague, Czech Republic, June 2007.

Paul Smolensky and Géraldine Legendre (eds.). The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar. MIT Press, Cambridge, 2006.