Class OfflineLearningExamples

  extended by hu.birot.OTKit.learning.OfflineLearningExamples

public class OfflineLearningExamples
extends java.lang.Object

Examples of offline learning algorithms.

Constructor Summary
Method Summary
static OfflineLearning RCD(java.lang.String approach)
          Returns an OfflineLearning object that implements Tesar and Smolensky's Recursive Constraint Demotion (RCD) algorithm
static OfflineLearning RepeatOnline(OnlineLearning algorithm, Production P)
          This method returns an instance of an OfflineLearning based on the OnlineLearning algorithm.
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Constructor Detail


public OfflineLearningExamples()
Method Detail


public static OfflineLearning RCD(java.lang.String approach)

Returns an OfflineLearning object that implements Tesar and Smolensky's Recursive Constraint Demotion (RCD) algorithm

The learn(Grammar G, Vector<Candidate> cand) method of this OfflineLearning object receives a grammar G and a Vector of Candidates cand. The latter contains the learning data (the winner forms), that is, the input to the learning algorithm. The method returns false if the RCD algorithm has been unsuccessful, that is, if the data are inconsistent and no single hierarchy can account for them. Otherwise it returns true, and in this case the rank values in G.hierarchy will contain the ranking information. Consequently, G must contain a Hierarchy G.hierarchy, whose rank values will be initialized or overwritten by this method.

The RCD algorithm requires the generation of loser candidates that are less optimal than the observed learning data (specified in the Vector cand of the learn method). The parameter approach specifies how this is done. Its possible values at this point are:

  1. "all": In this case, all the candidates in the candidate set are used as suboptimal alternatives. Method G.gen.allCandidates(uf) will be applied (and so must be specified in advance) to the underlying form in each piece of learning data.
  2. "neighb": In this case, only the neighbors of the winner will be considered as suboptimal alternatives. Method G.topology.allNeighborsOf(w) is used, and hence, must be specified before using method RCD.learn(G, cand).
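The two approaches can be illustrated with a small, library-independent sketch. All names below are hypothetical stand-ins for the OTKit members mentioned above (G.gen.allCandidates and G.topology.allNeighborsOf), not the actual API:

```java
import java.util.ArrayList;
import java.util.List;

public class LoserGeneration {
    // Hypothetical stand-in for Gen: all candidates for an underlying form.
    static List<String> allCandidates(String uf) {
        return List.of(uf, uf + "-a", uf + "-b");
    }

    // Hypothetical stand-in for the topology: the neighbors of the winner only.
    static List<String> allNeighborsOf(String winner) {
        return List.of(winner + "-a");
    }

    // Collect the loser candidates for one piece of learning data,
    // depending on the chosen approach ("all" or "neighb").
    static List<String> losersFor(String uf, String winner, String approach) {
        List<String> pool = approach.equals("all")
                ? allCandidates(uf)        // "all": the whole candidate set
                : allNeighborsOf(winner);  // "neighb": only the winner's neighbors
        List<String> losers = new ArrayList<>(pool);
        losers.remove(winner);             // the winner itself is not a loser
        return losers;
    }

    public static void main(String[] args) {
        System.out.println(losersFor("ta", "ta", "all"));    // [ta-a, ta-b]
        System.out.println(losersFor("ta", "ta", "neighb")); // [ta-a]
    }
}
```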

After the method learn(Grammar G, Vector<Candidate> cand) has been run, the values of "rank" in G.hierarchy are set as follows: the constraints in the highest stratum (those not needing to be dominated) have rank value 0; the constraints in the second stratum (those needing to be dominated by some constraint in the highest stratum) have rank value -1; those in the third stratum have rank -2, and so forth. If the algorithm is unsuccessful (the return value of the method is false), then Double.NEGATIVE_INFINITY is the rank of the constraints that have not yet been ordered when the inconsistency of the data is established.
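The stratum-to-rank mapping can be sketched with a toy, library-independent version of RCD operating on winner–loser pairs over named constraints (all identifiers below are hypothetical, not the OTKit API):

```java
import java.util.*;

public class ToyRCD {
    // A winner-loser pair: for each constraint, which candidate it prefers.
    record Pair(Set<String> prefersWinner, Set<String> prefersLoser) {}

    // Returns constraint -> rank (0 for the top stratum, then -1, -2, ...).
    // Leftover constraints get Double.NEGATIVE_INFINITY if the data are inconsistent.
    static Map<String, Double> rcd(Set<String> constraints, List<Pair> data) {
        Map<String, Double> rank = new HashMap<>();
        Set<String> remaining = new HashSet<>(constraints);
        List<Pair> pairs = new ArrayList<>(data);
        double stratum = 0.0;
        while (!remaining.isEmpty()) {
            // Current stratum: constraints preferring no loser in any remaining pair.
            Set<String> stratumSet = new HashSet<>();
            for (String c : remaining) {
                boolean prefersALoser =
                        pairs.stream().anyMatch(p -> p.prefersLoser().contains(c));
                if (!prefersALoser) stratumSet.add(c);
            }
            if (stratumSet.isEmpty()) { // no constraint can be placed: inconsistency
                for (String c : remaining) rank.put(c, Double.NEGATIVE_INFINITY);
                return rank;
            }
            for (String c : stratumSet) rank.put(c, stratum);
            // Discard pairs now accounted for by a constraint in the new stratum.
            pairs.removeIf(p -> p.prefersWinner().stream().anyMatch(stratumSet::contains));
            remaining.removeAll(stratumSet);
            stratum -= 1.0;
        }
        return rank;
    }

    public static void main(String[] args) {
        // Toy data requiring C1 >> C2 and C2 >> C3.
        List<Pair> data = List.of(
                new Pair(Set.of("C1"), Set.of("C2")),
                new Pair(Set.of("C2"), Set.of("C3")));
        System.out.println(rcd(Set.of("C1", "C2", "C3"), data));
    }
}
```

On these data the three constraints end up in three strata, with ranks 0.0, -1.0 and -2.0 respectively, as described above.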

Note that after the learn method, G.hierarchy will most probably be "stratified", that is, some constraints will share the same rank value. Method G.hierarchy.sortByRank() yields an array corresponding to a particular refinement of this stratified hierarchy into a fully ranked hierarchy. Another solution is to add a small random noise (less than 1) to every rank value.
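The noise-based refinement can be sketched library-independently (the names below are hypothetical, not the OTKit API). Since adjacent strata are exactly one rank apart, noise drawn from [0, 1) breaks ties within a stratum at random but can never reorder two constraints from different strata:

```java
import java.util.*;

public class RefineStrata {
    // Add noise in [0, 1) to each rank: stratum 0 maps into [0, 1),
    // stratum -1 into [-1, 0), etc., so distinct strata never overlap.
    static Map<String, Double> refine(Map<String, Double> stratifiedRanks, Random rnd) {
        Map<String, Double> refined = new HashMap<>();
        stratifiedRanks.forEach((c, r) -> refined.put(c, r + rnd.nextDouble()));
        return refined;
    }

    public static void main(String[] args) {
        // C1 and C2 are tied in the top stratum; C3 sits one stratum lower.
        Map<String, Double> ranks = Map.of("C1", 0.0, "C2", 0.0, "C3", -1.0);
        Map<String, Double> refined = refine(ranks, new Random(42));
        // C3 stays below both C1 and C2, whichever order the noise picks for them.
        System.out.println(
                refined.get("C3") < Math.min(refined.get("C1"), refined.get("C2"))); // true
    }
}
```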

Parameters:
approach - This string describes the specific approach to be used to create loser candidates.
Returns:
An OfflineLearning object that realizes some version of the Recursive Constraint Demotion (RCD) algorithm.


public static OfflineLearning RepeatOnline(OnlineLearning algorithm,
                                           Production P)

This method returns an instance of OfflineLearning based on the given OnlineLearning algorithm. The learn(Grammar, Vector<Candidate>) method of this OfflineLearning reads the Candidates in the Vector and employs the OnlineLearning algorithm to improve the Grammar. The learn method returns false if and only if the Vector is empty.

Parameters:
algorithm - An instance of OnlineLearning that is used repeatedly in the learn method of the OfflineLearning.
P - A method of production to be used by the learn method of algorithm.
Returns:
An instance of OfflineLearning, as described above.
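The wrapping pattern behind RepeatOnline can be sketched library-independently (the interface and names below are hypothetical stand-ins, not the OTKit types, and the production parameter is omitted for brevity): the batch learner feeds each piece of learning data to the online update and reports false only for an empty batch.

```java
import java.util.List;
import java.util.Vector;

public class RepeatOnlineSketch {
    // Hypothetical stand-in for an online update: consumes one datum, adjusts state.
    interface OnlineStep<G, C> {
        void learn(G grammar, C datum);
    }

    // Wrap an online step into a batch learner: returns false iff the batch is empty.
    static <G, C> boolean repeatOnline(OnlineStep<G, C> step, G grammar, Vector<C> data) {
        if (data.isEmpty()) return false;
        for (C datum : data) step.learn(grammar, datum);
        return true;
    }

    public static void main(String[] args) {
        StringBuilder grammar = new StringBuilder();      // toy "grammar" state
        OnlineStep<StringBuilder, String> step = (g, d) -> g.append(d);
        Vector<String> data = new Vector<>(List.of("a", "b"));
        System.out.println(repeatOnline(step, grammar, data));                 // true
        System.out.println(grammar);                                           // ab
        System.out.println(repeatOnline(step, grammar, new Vector<String>())); // false
    }
}
```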