Abstract

In Optimality Theory, a linguistic input is assigned a grammatical structural description by selecting, from an infinite set of candidate structural descriptions, the one that best satisfies a ranked set of universal constraints. Cross-linguistic variation is explained as different rankings of the same universal constraints. Two questions are of primary interest concerning the computational tractability of Optimality Theory. The first concerns the ability to compute optimal structural descriptions. The second concerns the learnability of the constraint rankings. Parsing algorithms are presented for the computation of optimal forms, using dynamic programming. These algorithms work for grammars in Optimality Theory employing universal constraints which may be evaluated on the basis of information local within the structural description. This approach exploits optimal substructure to construct the optimal description, rather than searching for the solution by moving from one entire description to another. A class of learning algorithms, the Constraint Demotion algorithms, is presented; these solve the problem of learning constraint rankings from hypothesized structural descriptions (an important subproblem of the general problem of language learning). Constraint Demotion exploits the implicit negative evidence available in the form of the competing (suboptimal) structural descriptions of the input. The data complexity of these algorithms is quadratic in the number of constraints.
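The dynamic-programming idea — building the optimal description incrementally by exploiting optimal substructure, with constraints evaluated locally — can be illustrated with a Viterbi-style sketch. This is a toy illustration, not the parsing algorithms of the source: the chain of positions, the stress-like states (`"s"`/`"u"`), the constraints `*Clash` and `*Lapse`, and the representation of violations as dicts are all assumptions made for the example.

```python
def optimal_parse(states, n, local_violations, ranking):
    """Viterbi-style dynamic programming over a chain of n positions.

    states: the candidate structures available at each position (toy assumption).
    local_violations(prev, cur): violations incurred by placing cur after prev
        (prev is None at the first position) -- the "local evaluation" idea.
    ranking: constraint names, highest-ranked first; violation vectors are
        compared lexicographically, mirroring strict domination in OT.
    Returns (total violations, optimal sequence of structures).
    """
    def cost(viol):
        # Order violation counts by constraint ranking for lexicographic comparison.
        return tuple(viol.get(c, 0) for c in ranking)

    def add(v1, v2):
        return {c: v1.get(c, 0) + v2.get(c, 0) for c in set(v1) | set(v2)}

    # best[s] = (violations, path) of the best partial description ending in s.
    best = {s: (local_violations(None, s), [s]) for s in states}
    for _ in range(1, n):
        new = {}
        for s in states:
            # Extend each best partial description by s; keep the most harmonic.
            candidates = [(add(v, local_violations(p, s)), path + [s])
                          for p, (v, path) in best.items()]
            new[s] = min(candidates, key=lambda t: cost(t[0]))
        best = new
    return min(best.values(), key=lambda t: cost(t[0]))

# Toy local constraints: *Clash penalizes adjacent stresses, *Lapse adjacent lapses.
def local_violations(prev, cur):
    if prev == "s" and cur == "s":
        return {"*Clash": 1}
    if prev == "u" and cur == "u":
        return {"*Lapse": 1}
    return {}

viol, path = optimal_parse(["s", "u"], 4, local_violations, ["*Clash", "*Lapse"])
print(path, viol)
```

Because each constraint here inspects only adjacent positions, the best description ending in each state summarizes everything needed about its prefix, which is exactly the optimal-substructure property the abstract refers to.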
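The core move of Constraint Demotion — using a suboptimal competitor as implicit negative evidence by demoting every constraint that prefers the loser below the highest-ranked constraint that prefers the winner — can be sketched as follows. This is a minimal sketch, not the source's exact formulation: the constraint names and the encoding of winner/loser mark data as violation-count dicts are assumptions for illustration.

```python
def constraint_demotion(pairs):
    """Learn a stratified constraint hierarchy from winner-loser pairs.

    pairs: list of (winner_marks, loser_marks), each a dict mapping a
        constraint name to its number of violations (assumed encoding).
    Returns a list of strata (sets of constraints), highest-ranked first.
    """
    constraints = set()
    for w, l in pairs:
        constraints |= set(w) | set(l)
    strata = [set(constraints)]  # start with everything in one top stratum
    changed = True
    while changed:
        changed = False
        for w_marks, l_marks in pairs:
            # After cancelling shared marks: who does each constraint prefer?
            w_pref = {c for c in constraints if l_marks.get(c, 0) > w_marks.get(c, 0)}
            l_pref = {c for c in constraints if w_marks.get(c, 0) > l_marks.get(c, 0)}
            if not w_pref:
                continue  # no constraint favors the winner; pair is uninformative here
            # Highest stratum containing a winner-preferring constraint.
            target = min(i for i, s in enumerate(strata) if s & w_pref)
            # Demote loser-preferring constraints ranked at or above it.
            to_demote = set()
            for s in strata[:target + 1]:
                to_demote |= s & l_pref
            if to_demote:
                for s in strata:
                    s -= to_demote
                while len(strata) <= target + 1:
                    strata.append(set())
                strata[target + 1] |= to_demote
                strata = [s for s in strata if s]  # drop emptied strata
                changed = True
    return strata

# Hypothetical mark data: each pair says "the winner beats this competitor".
pairs = [({"B": 1}, {"A": 1}),   # winner violates B, loser violates A: A must dominate B
         ({"C": 1}, {"B": 1})]   # likewise, B must dominate C
print(constraint_demotion(pairs))
```

Each pair only forces constraints downward, never upward, which is the property behind the quadratic data-complexity bound mentioned in the abstract.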