On Language Identities


“Japanese is the order, Chinese the question.”

– Willem Etsenmaker

Greek Proverbs


Κόρακας κοράκου μάτι δε βγάζει

“The crow does not take the eye out of another crow.”

  • Meaning: People who are the same do not hurt each other.
  • English equivalent: Hawks will not pick out hawks’ eyes.
  • Shqiptaro-Greke (1999). Albanohellenica. Albanian-Greek Philological Association. p. 22.

Η γλώσσα κόκαλα δεν έχει, αλλά κόκαλα τσακίζει.

“The tongue has no bones, yet it crushes bones.”

  • English equivalent: The pen is mightier than the sword.
  • Venizelos (1867). Paroimiai dēmōdeis. Ek tou typographeiou tēs “Patridos”. p. 95.

Καλή ζωή, κακή διαθήκη

“Good life, bad testament.”

  • Meaning: One who lives well will most likely leave little behind in their will.
  • Chakkas (1978). Hapanta. Kedros.

Ο πνιγμένος, από τα μαλλιά του πιάνεται

“The drowning man clutches at his own hair.”

  • Meaning: A person in a desperate situation will try the most desperate measures.
  • English equivalent: A drowning man will clutch at a straw.
  • Κριαράς (2007). Αλληλογραφία δύο. Εκδόσεις Πολύτυπο. p. 33.

Optimality Theory


Optimality theory is a linguistic model proposing that the observed forms of language arise from the interaction between conflicting constraints. There are three basic components of the theory:

1. GEN generates the list of possible outputs, or candidates.
2. CON provides the criteria, strictly ordered violable constraints, used to decide between candidates.
3. EVAL chooses the optimal candidate based on the constraints.

Optimality theory assumes that these components are universal. Differences in grammars reflect different rankings of the universal constraint set, CON. Part of language acquisition can then be described as the process of adjusting the ranking of these constraints.

Optimality theory is usually considered a development of generative grammar, which shares its focus on the investigation of universal principles, linguistic typology and language acquisition.

Optimality theory is often called a connectionist theory of language, because it has its roots in neural network research, though the relationship is now largely of historical interest. It arose in part as a successor to the theory of Harmonic Grammar.

Optimality theory supposes that there are no language-specific restrictions on the input. This is called richness of the base. Every grammar can handle every possible input. For example, a language without complex clusters must be able to deal with an input such as /flask/. Languages without complex clusters differ in how they resolve this problem; some will epenthesize (e.g. [falasak], or [falasaka] if all codas are banned) and some will delete (e.g. [fas], [fak], [las], [lak]). Given any input, GEN generates an infinite number of candidates, or possible realizations of that input. A language’s grammar (its ranking of constraints) determines which of the infinite candidates will be assessed as optimal by EVAL.
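The /flask/ scenario can be sketched in a few lines of Python. The candidate set, the length-based treatment of correspondence, and the constraint names below are illustrative simplifications (GEN is infinite in the theory, and real analyses use indexed correspondence), not part of the formal machinery:

```python
# Toy EVAL over a hand-picked candidate set for input /flask/.
# Candidates are compared by their violation profiles read left to right,
# so ranking Max above Dep yields an epenthesizing "language" and the
# reverse ranking a deleting one.

VOWELS = set("aeiou")

def no_complex(inp, out):
    # One violation per pair of adjacent consonants (a complex cluster).
    return sum(1 for a, b in zip(out, out[1:])
               if a not in VOWELS and b not in VOWELS)

def max_io(inp, out):
    # Deletion: input segments missing from the output (toy: length diff).
    return max(len(inp) - len(out), 0)

def dep_io(inp, out):
    # Insertion: output segments with no input source (toy: length diff).
    return max(len(out) - len(inp), 0)

def eval_ot(inp, candidates, ranking):
    # The optimal candidate has the lexicographically least profile.
    return min(candidates, key=lambda c: [con(inp, c) for con in ranking])

cands = ["flask", "falasak", "fas"]
print(eval_ot("flask", cands, [no_complex, max_io, dep_io]))  # -> falasak
print(eval_ot("flask", cands, [no_complex, dep_io, max_io]))  # -> fas
```

The same input and the same constraints thus surface differently in the two "languages", which is the point of richness of the base.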

In optimality theory, every constraint is universal. CON is the same in every language. There are two basic types of constraints. Faithfulness constraints require that the observed surface form (the output) match the underlying or lexical form (the input) in some particular way; that is, these constraints require identity between input and output forms. Markedness constraints impose requirements on the structural well-formedness of the output. Each plays a crucial role in the theory. Faithfulness constraints prevent every input from being realized as some unmarked form, and markedness constraints motivate changes from the underlying form.

The universal nature of CON makes some immediate predictions about language typology. If grammars differ only by having different rankings of CON, then the set of possible human languages is determined by the constraints that exist. Optimality theory predicts that there cannot be more grammars than there are permutations of the ranking of CON. The number of possible rankings is equal to the factorial of the total number of constraints, thus giving rise to the term factorial typology. However, it may not be possible to distinguish all of these potential grammars, since not every constraint is guaranteed to have an observable effect in every language. Two languages could generate the same range of input-output mappings, but differ in the relative ranking of two constraints which do not conflict with each other.
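The factorial bound is easy to make concrete: with n constraints there are at most n! total rankings, and so at most n! grammars (fewer observably distinct ones in practice, for the reason just given):

```python
import math

# Upper bound on the number of grammars under factorial typology:
# one grammar per total ranking of the constraint set CON.
for n in (3, 5, 10):
    print(n, "constraints ->", math.factorial(n), "rankings")
```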

Given two candidates, A and B, A is better than B on a constraint if A incurs fewer violations than B. Candidate A is better than B on an entire constraint hierarchy if A incurs fewer violations of the highest-ranked constraint distinguishing A and B. A is optimal in its candidate set if it is better on the constraint hierarchy than all other candidates. For example, given constraints C1, C2, and C3, where C1 dominates C2, which dominates C3 (C1 >> C2 >> C3), A is optimal if it does better than B on the highest ranking constraint which assigns them a different number of violations. If A and B tie on C1, but A does better than B on C2, A is optimal, even if A has 100 more violations of C3 than B. This comparison is often illustrated with a tableau. The pointing finger marks the optimal candidate, and each cell displays an asterisk for each violation for a given candidate and constraint. Once a candidate does worse than another candidate on the highest ranking constraint distinguishing them, it incurs a crucial violation (marked in the tableau by an exclamation mark). Once a candidate incurs a crucial violation, there is no way for it to be optimal, even if it outperforms the other candidates on the rest of CON.
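The comparison just described is a lexicographic order on violation profiles, which a few lines of Python make explicit. The violation counts are invented for illustration:

```python
# Candidates A and B tie on C1; A is better on C2; A is far worse on C3.
# Under strict domination C2 decides, and the 100 C3 violations never matter.
profiles = {
    "A": (2, 0, 100),
    "B": (2, 1, 0),
}

# Python compares tuples lexicographically, which is exactly EVAL's order.
optimal = min(profiles, key=profiles.get)
print(optimal)  # -> A
```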

Constraints are ranked in a hierarchy of strict domination. The strictness of strict domination means that a candidate that violates even one high-ranked constraint does worse on the hierarchy than one that does not, even if the second candidate fares worse on every lower-ranked constraint. This also means that constraints are violable; the winning candidate need not satisfy all constraints. Within a language, a constraint may be ranked high enough that it is always obeyed; it may be ranked low enough that it has no observable effects; or it may have some intermediate ranking. The term the emergence of the unmarked describes situations in which a markedness constraint has an intermediate ranking, so that it is violated in some forms but nonetheless has observable effects when higher-ranked constraints are irrelevant.

An early example proposed by McCarthy & Prince (1994) is the constraint NoCoda, which prohibits syllables from ending in consonants. In Balangao, NoCoda is not ranked high enough to be always obeyed, as witnessed in roots like taynan (faithfulness to the input prevents deletion of the final /n/). But in the reduplicated form ma-tayna-taynan ‘repeatedly be left behind’, the final /n/ is not copied. Under McCarthy & Prince’s analysis, this is because faithfulness to the input does not apply to reduplicated material, and NoCoda is thus free to prefer ma-tayna-taynan over hypothetical ma-taynan-taynan (which has an additional violation of NoCoda). The winning candidate need not satisfy every constraint: it is enough that, for any rival that does better than the winner on some constraint, there is a higher-ranked constraint on which the winner does better than that rival.

Some optimality theorists prefer the use of comparative tableaux, as described in Prince (2002). Comparative tableaux display the same information as the classic or “flyspeck” tableaux, but present it in a way that highlights the most crucial information.

Each row in a comparative tableau represents a winner-loser pair, rather than an individual candidate. In the cells where the constraints assess the winner-loser pairs, there is a W if the constraint in that column prefers the winner, an L if the constraint prefers the loser, and an e if the constraint does not differentiate between the pair. Presenting the data in this way makes it easier to make generalizations. For instance, in order to have a consistent ranking some W must dominate all L’s. Brasoveanu and Prince (2005) describe a process known as fusion and the various ways of presenting data in a comparative tableau in order to achieve the necessary and sufficient conditions for a given argument.
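The W/L/e marking can be sketched directly. The violation counts below are invented; only the marking convention follows Prince (2002):

```python
def compare_row(winner_viols, loser_viols):
    # One mark per constraint: W if the winner has fewer violations,
    # L if the loser does, e if the constraint does not distinguish them.
    return ["W" if w < l else "L" if w > l else "e"
            for w, l in zip(winner_viols, loser_viols)]

# Winner (0, 1, 3) vs loser (0, 2, 1) under C1 >> C2 >> C3:
# the W on C2 dominates the L on C3, so the ranking is consistent.
print(compare_row((0, 1, 3), (0, 2, 1)))  # -> ['e', 'W', 'L']
```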

As a simplified example, consider the manifestation of the English plural:

/ˈkæt/ + /z/ → [ˈkæts] (cats) (also smirks, hits, crepes)

/ˈdɒg/ + /z/ → [ˈdɒgz] (dogs) (also wugs, clubs, moms)

/ˈdɪʃ/ + /z/ → [ˈdɪʃɨz] (dishes) (also classes, glasses, bushes)

Also consider the following constraint set, in descending order of domination (M: markedness, F: faithfulness):

M: *SS – Sibilant-Sibilant clusters are ungrammatical: one violation for every pair of adjacent sibilants in the output.

M: Agree(Voi) – Agree in specification of [voi]: one violation for every pair of adjacent obstruents in the output which disagree in voicing.

F: Max – Maximize all input segments in the output: one violation for each segment in the input that doesn’t appear in the output (This constraint prevents deletion).

F: Dep – Output segments are dependent on having an input correspondent: one violation for each segment in the output that doesn’t appear in the input (This constraint prevents insertion).

F: Ident(Voi) – Maintain the identity of the [voi] specification: one violation for each segment that differs in voicing between the input and output.
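The plural analysis can be run end to end as a sketch. ASCII stand-ins replace IPA (S = ʃ, Z = ʒ, I = ɪ, i = ɨ, A = æ), correspondence is simplified to positional comparison, and the segment classes are rough, so this illustrates the ranking logic rather than a serious phonological implementation:

```python
# Evaluate plural candidates against *SS >> Agree(Voi) >> Max >> Dep >> Ident(Voi).
SIBILANTS = set("szSZ")          # S = ʃ, Z = ʒ
OBSTRUENTS = set("ptkbdgszSZfv")
VOICED = set("bdgvzZ")           # voiced obstruents (vowels never checked)

def violations(inp, out):
    pairs = list(zip(out, out[1:]))
    ss = sum(1 for a, b in pairs if a in SIBILANTS and b in SIBILANTS)
    agree = sum(1 for a, b in pairs
                if a in OBSTRUENTS and b in OBSTRUENTS
                and (a in VOICED) != (b in VOICED))
    # Toy correspondence: Max/Dep reduce to length differences, and
    # Ident(Voi) compares segments position by position.
    max_v = max(len(inp) - len(out), 0)
    dep = max(len(out) - len(inp), 0)
    ident = sum(1 for a, b in zip(inp, out)
                if a in OBSTRUENTS and b in OBSTRUENTS
                and (a in VOICED) != (b in VOICED))
    return (ss, agree, max_v, dep, ident)

def best_candidate(inp, candidates):
    # Strict domination = lexicographic order on the violation tuples.
    return min(candidates, key=lambda c: violations(inp, c))

# /dɪʃ/ + /z/: epenthesis wins, as in "dishes"
print(best_candidate("dISz", ["dISz", "dISs", "dISiz", "dIS"]))  # -> dISiz
# /kæt/ + /z/: devoicing wins, as in "cats"
print(best_candidate("kAtz", ["kAtz", "kAts", "kAtiz"]))         # -> kAts
```

The faithful candidate [dISz] loses on *SS, devoiced [dISs] still violates *SS, and deletion [dIS] fatally violates Max, leaving epenthetic [dISiz] optimal, matching the attested form.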