java.lang.Object
  hu.birot.OTKit.learning.OfflineLearningExamples
public class OfflineLearningExamples
Examples of offline learning algorithms.
Constructor Summary
OfflineLearningExamples()
Method Summary
static OfflineLearning RCD(java.lang.String approach)
          Returns an OfflineLearning object that implements Tesar and Smolensky's Recursive Constraint Demotion (RCD) algorithm.
static OfflineLearning RepeatOnline(OnlineLearning algorithm, Production P)
          Returns an instance of OfflineLearning based on the OnlineLearning algorithm.
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Constructor Detail
public OfflineLearningExamples()
Method Detail
public static OfflineLearning RCD(java.lang.String approach)
Returns an OfflineLearning object that implements Tesar and Smolensky's Recursive Constraint Demotion (RCD) algorithm.
The learn(Grammar G, Vector&lt;Candidate&gt; cand) method of this OfflineLearning object receives a grammar G and a Vector of Candidates cand. The latter contains the learning data (the winner forms), that is, the input to the learning algorithm. The method returns false if the RCD algorithm has been unsuccessful, that is, if the data are inconsistent and no single hierarchy can account for them. It returns true otherwise, and in this case the rank values in G.hierarchy will contain the ranking information. Consequently, G must contain a Hierarchy G.hierarchy, whose rank values will be initialized or overwritten by this method.
The RCD algorithm requires the generation of loser candidates that are less optimal than the observed learning data (specified in the Vector cand of the method learn). The parameter approach specifies how these loser candidates are generated (see the parameter description below).
After the method learn(Grammar G, Vector&lt;Candidate&gt; cand) has been run, the values of "rank" in G.hierarchy are set as follows: the constraints in the highest stratum (those not needing to be dominated) have rank value 0; the constraints in the second stratum (those needing to be dominated by some constraint in the highest stratum) have rank value -1; those in the third stratum have rank -2, and so forth. In case the algorithm is unsuccessful (the return value of the method is false), Double.NEGATIVE_INFINITY is the rank of the constraints that are not yet ordered when the inconsistency of the data is established.
Note that after the learn method has been run, G.hierarchy will most probably be "stratified", that is, some constraints will share the same rank value. Method G.hierarchy.sortByRank() will yield an array corresponding to a particular refinement of this stratified hierarchy into a fully ranked hierarchy. Another solution is to add a small random noise (less than 1) to every rank value.
Parameters:
approach - This string describes the specific approach to be used to create the loser candidates.
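The rank-value convention described above can be illustrated with a self-contained sketch of Recursive Constraint Demotion over precomputed winner-loser violation profiles. This is a hypothetical simplification, not OTKit code: the class and method names (RcdSketch, rcd) are invented, and the real method operates on a Grammar and a Vector of Candidates rather than on raw violation counts.

```java
import java.util.*;

// Hypothetical, self-contained sketch of Recursive Constraint Demotion.
// Input: per-constraint violation counts of each winner and of a matched
// loser. Output: rank values following the convention above (0 for the top
// stratum, -1 for the next, and so on), with Double.NEGATIVE_INFINITY left
// on constraints that cannot be ranked when the data are inconsistent.
public class RcdSketch {
    static double[] rcd(int[][] winners, int[][] losers) {
        int nCon = winners[0].length;
        double[] rank = new double[nCon];
        Arrays.fill(rank, Double.NEGATIVE_INFINITY);   // "not yet ranked"
        List<Integer> pairs = new ArrayList<>();
        for (int p = 0; p < winners.length; p++) pairs.add(p);
        Set<Integer> unranked = new HashSet<>();
        for (int c = 0; c < nCon; c++) unranked.add(c);
        double stratum = 0;
        while (!unranked.isEmpty()) {
            // A constraint may enter the current stratum iff it prefers
            // no loser among the remaining winner-loser pairs.
            List<Integer> installable = new ArrayList<>();
            for (int c : unranked) {
                boolean loserSafe = true;
                for (int p : pairs)
                    if (winners[p][c] > losers[p][c]) { loserSafe = false; break; }
                if (loserSafe) installable.add(c);
            }
            if (installable.isEmpty()) break;          // inconsistent data
            for (int c : installable) { rank[c] = stratum; unranked.remove(c); }
            // Discard pairs now accounted for by an installed
            // winner-preferring constraint.
            List<Integer> left = new ArrayList<>();
            for (int p : pairs) {
                boolean accounted = false;
                for (int c : installable)
                    if (winners[p][c] < losers[p][c]) { accounted = true; break; }
                if (!accounted) left.add(p);
            }
            pairs = left;
            stratum -= 1;
        }
        return rank;
    }
}
```

For instance, with two constraints and a single pair in which the winner violates the first constraint and the loser violates the second, the second constraint lands in the top stratum (rank 0) and the first below it (rank -1). The resulting hierarchy is stratified, exactly as noted above.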
public static OfflineLearning RepeatOnline(OnlineLearning algorithm, Production P)
This method returns an instance of OfflineLearning based on the OnlineLearning algorithm. The learn(Grammar, Vector&lt;Candidate&gt;) method of this OfflineLearning reads the Candidates in the Vector and employs the OnlineLearning algorithm to improve the Grammar. The learn method returns false if and only if the Vector is empty.
Parameters:
algorithm - An instance of OnlineLearning that is used repeatedly in the learn method of the OfflineLearning.
P - The production method to be used by the learn method of algorithm.
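The behaviour just described can be sketched with minimal stand-in types. The nested interfaces Online and Offline below are hypothetical simplifications of OTKit's OnlineLearning and OfflineLearning (the real method also threads a Production through each online step); only the wrapping logic is illustrated.

```java
import java.util.*;

public class RepeatOnlineSketch {
    // Hypothetical stand-ins for OTKit's OnlineLearning and OfflineLearning.
    interface Online<G, C>  { void step(G grammar, C winner); }
    interface Offline<G, C> { boolean learn(G grammar, List<C> data); }

    // Wrap an online update rule into an offline learner: apply the online
    // step once per learning datum; fail only on an empty data set, matching
    // the documented "false if and only if the Vector is empty" contract.
    static <G, C> Offline<G, C> repeatOnline(Online<G, C> algorithm) {
        return (grammar, data) -> {
            if (data.isEmpty()) return false;
            for (C winner : data) algorithm.step(grammar, winner);
            return true;
        };
    }
}
```

A usage sketch: any online update rule (here a trivial one that merely counts the data it sees, with an int array standing in for the Grammar) is applied once per Candidate in the learning data.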