Monday, April 16, 2007

SRL Techniques (1 of 2)

I'm reading about approaches to semantic role labeling (SRL) that include linearly interpolated relative frequency models [1], HMMs [2], SVMs [3], decision trees [4], and log-linear models [5]. I'm hoping to learn which are best suited to resolving a sentence's syntax into an exhaustive set of predicates (ideally capturing all of a sentence's semantics) with simple nouns or noun phrases as arguments.
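To keep the first of these concrete for myself: the idea behind a linearly interpolated relative frequency model is to estimate P(role | features) by mixing relative-frequency counts from conditioning contexts of decreasing specificity, so sparse specific counts are backed off to broader ones. A minimal sketch, with toy data, made-up interpolation weights, and a simplified two-level feature hierarchy (the real model in [1] uses more feature subsets):

```python
from collections import Counter, defaultdict

# Hypothetical toy training data: (predicate, phrase type, role) triples.
examples = [
    ("give", "NP", "Agent"),
    ("give", "NP", "Theme"),
    ("give", "PP", "Recipient"),
    ("send", "NP", "Agent"),
    ("send", "PP", "Recipient"),
]

# Role counts at each level of conditioning specificity.
by_pred_pt = defaultdict(Counter)   # counts for (predicate, phrase type)
by_pred = defaultdict(Counter)      # counts for predicate alone
overall = Counter()                 # unconditioned role counts

for pred, pt, role in examples:
    by_pred_pt[(pred, pt)][role] += 1
    by_pred[pred][role] += 1
    overall[role] += 1

def rel_freq(counter, role):
    """Relative frequency of a role within one conditioning context."""
    total = sum(counter.values())
    return counter[role] / total if total else 0.0

def role_prob(pred, pt, role, lambdas=(0.5, 0.3, 0.2)):
    """Linearly interpolate relative frequencies, backing off from the
    most specific context to the most general. The weights are assumed
    here; in practice they would be tuned on held-out data."""
    l1, l2, l3 = lambdas
    return (l1 * rel_freq(by_pred_pt[(pred, pt)], role)
            + l2 * rel_freq(by_pred[pred], role)
            + l3 * rel_freq(overall, role))

# e.g. probability that an NP argument of "give" fills the Agent role
print(round(role_prob("give", "NP", "Agent"), 3))  # → 0.43
```

Even on this toy data the appeal is clear: an unseen (predicate, phrase type) pair still gets a nonzero estimate from the backed-off terms.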

As a rule, the more abstract roles have been proposed by linguists, who are more concerned with explaining generalizations across verbs in the syntactic realization of their arguments, while the more specific roles are more often proposed by computer scientists, who are more concerned with the details of the realization of the arguments of specific verbs. [1]

[1] Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288.

[2] Dayne Freitag and Andrew McCallum. 2000. Information extraction with HMM structures learned by stochastic optimization. In Proceedings of AAAI-2000, pages 584–589.

[3] Sameer Pradhan, Wayne Ward, Kadri Hacioglu, James Martin, and Dan Jurafsky. 2004. Shallow semantic parsing using support vector machines. In Proceedings of HLT/NAACL-2004.

[4] Mihai Surdeanu, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument structures for information extraction. In Proceedings of ACL-2003.

[5] Nianwen Xue and Martha Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of EMNLP-2004.
