# -*- coding: utf-8 -*-
# Natural Language Toolkit: IBM Model 5
#
# Copyright (C) 2001-2020 NLTK Project
# Author: Tah Wei Hoon <hoon.tw@gmail.com>
# URL: <http://nltk.org/>
# For license information, see LICENSE.TXT
  8. """
  9. Translation model that keeps track of vacant positions in the target
  10. sentence to decide where to place translated words.
  11. Translation can be viewed as a process where each word in the source
  12. sentence is stepped through sequentially, generating translated words
  13. for each source word. The target sentence can be viewed as being made
  14. up of ``m`` empty slots initially, which gradually fill up as generated
  15. words are placed in them.
  16. Models 3 and 4 use distortion probabilities to decide how to place
  17. translated words. For simplicity, these models ignore the history of
  18. which slots have already been occupied with translated words.
  19. Consider the placement of the last translated word: there is only one
  20. empty slot left in the target sentence, so the distortion probability
  21. should be 1.0 for that position and 0.0 everywhere else. However, the
  22. distortion probabilities for Models 3 and 4 are set up such that all
  23. positions are under consideration.
  24. IBM Model 5 fixes this deficiency by accounting for occupied slots
  25. during translation. It introduces the vacancy function v(j), the number
  26. of vacancies up to, and including, position j in the target sentence.
  27. Terminology:
  28. Maximum vacancy:
  29. The number of valid slots that a word can be placed in.
  30. This is not necessarily the same as the number of vacant slots.
  31. For example, if a tablet contains more than one word, the head word
  32. cannot be placed at the last vacant slot because there will be no
  33. space for the other words in the tablet. The number of valid slots
  34. has to take into account the length of the tablet.
  35. Non-head words cannot be placed before the head word, so vacancies
  36. to the left of the head word are ignored.
  37. Vacancy difference:
  38. For a head word: (v(j) - v(center of previous cept))
  39. Can be positive or negative.
  40. For a non-head word: (v(j) - v(position of previously placed word))
  41. Always positive, because successive words in a tablet are assumed to
  42. appear to the right of the previous word.
  43. Positioning of target words fall under three cases:
  44. (1) Words generated by NULL are distributed uniformly
  45. (2) For a head word t, its position is modeled by the probability
  46. v_head(dv | max_v,word_class_t(t))
  47. (3) For a non-head word t, its position is modeled by the probability
  48. v_non_head(dv | max_v,word_class_t(t))
  49. dv and max_v are defined differently for head and non-head words.
  50. The EM algorithm used in Model 5 is:
  51. E step - In the training data, collect counts, weighted by prior
  52. probabilities.
  53. (a) count how many times a source language word is translated
  54. into a target language word
  55. (b) for a particular word class and maximum vacancy, count how
  56. many times a head word and the previous cept's center have
  57. a particular difference in number of vacancies
  58. (b) for a particular word class and maximum vacancy, count how
  59. many times a non-head word and the previous target word
  60. have a particular difference in number of vacancies
  61. (d) count how many times a source word is aligned to phi number
  62. of target words
  63. (e) count how many times NULL is aligned to a target word
  64. M step - Estimate new probabilities based on the counts from the E step
  65. Like Model 4, there are too many possible alignments to consider. Thus,
  66. a hill climbing approach is used to sample good candidates. In addition,
  67. pruning is used to weed out unlikely alignments based on Model 4 scores.
  68. Notations:
  69. i: Position in the source sentence
  70. Valid values are 0 (for NULL), 1, 2, ..., length of source sentence
  71. j: Position in the target sentence
  72. Valid values are 1, 2, ..., length of target sentence
  73. l: Number of words in the source sentence, excluding NULL
  74. m: Number of words in the target sentence
  75. s: A word in the source language
  76. t: A word in the target language
  77. phi: Fertility, the number of target words produced by a source word
  78. p1: Probability that a target word produced by a source word is
  79. accompanied by another target word that is aligned to NULL
  80. p0: 1 - p1
  81. max_v: Maximum vacancy
  82. dv: Vacancy difference, Δv
  83. The definition of v_head here differs from GIZA++, section 4.7 of
  84. [Brown et al., 1993], and [Koehn, 2010]. In the latter cases, v_head is
  85. v_head(v(j) | v(center of previous cept),max_v,word_class(t)).
  86. Here, we follow appendix B of [Brown et al., 1993] and combine v(j) with
  87. v(center of previous cept) to obtain dv:
  88. v_head(v(j) - v(center of previous cept) | max_v,word_class(t)).
  89. References:
  90. Philipp Koehn. 2010. Statistical Machine Translation.
  91. Cambridge University Press, New York.
  92. Peter E Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and
  93. Robert L. Mercer. 1993. The Mathematics of Statistical Machine
  94. Translation: Parameter Estimation. Computational Linguistics, 19 (2),
  95. 263-311.
  96. """
import warnings
from collections import defaultdict
from math import factorial

from nltk.translate import AlignedSent
from nltk.translate import Alignment
from nltk.translate import IBMModel
from nltk.translate import IBMModel4
from nltk.translate.ibm_model import Counts
from nltk.translate.ibm_model import longest_target_sentence_length
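
# Worked example of the "maximum vacancy" terminology above (illustrative
# comment added for clarity; the numbers are hypothetical, not taken from the
# NLTK sources). Suppose a cept's tablet contains 3 words and 4 slots of the
# target sentence are still vacant. The head word cannot use the last 2 vacant
# slots, because the remaining tablet words must be placed to its right, so
# the maximum vacancy is max_v = 4 - 3 + 1 = 2. This mirrors the expression
# ``max_v = total_vacancies - tablet_length + 1`` used in the code below.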


class IBMModel5(IBMModel):
    """
    Translation model that keeps track of vacant positions in the target
    sentence to decide where to place translated words

    >>> bitext = []
    >>> bitext.append(AlignedSent(['klein', 'ist', 'das', 'haus'], ['the', 'house', 'is', 'small']))
    >>> bitext.append(AlignedSent(['das', 'haus', 'war', 'ja', 'groß'], ['the', 'house', 'was', 'big']))
    >>> bitext.append(AlignedSent(['das', 'buch', 'ist', 'ja', 'klein'], ['the', 'book', 'is', 'small']))
    >>> bitext.append(AlignedSent(['ein', 'haus', 'ist', 'klein'], ['a', 'house', 'is', 'small']))
    >>> bitext.append(AlignedSent(['das', 'haus'], ['the', 'house']))
    >>> bitext.append(AlignedSent(['das', 'buch'], ['the', 'book']))
    >>> bitext.append(AlignedSent(['ein', 'buch'], ['a', 'book']))
    >>> bitext.append(AlignedSent(['ich', 'fasse', 'das', 'buch', 'zusammen'], ['i', 'summarize', 'the', 'book']))
    >>> bitext.append(AlignedSent(['fasse', 'zusammen'], ['summarize']))
    >>> src_classes = {'the': 0, 'a': 0, 'small': 1, 'big': 1, 'house': 2, 'book': 2, 'is': 3, 'was': 3, 'i': 4, 'summarize': 5 }
    >>> trg_classes = {'das': 0, 'ein': 0, 'haus': 1, 'buch': 1, 'klein': 2, 'groß': 2, 'ist': 3, 'war': 3, 'ja': 4, 'ich': 5, 'fasse': 6, 'zusammen': 6 }

    >>> ibm5 = IBMModel5(bitext, 5, src_classes, trg_classes)

    >>> print(round(ibm5.head_vacancy_table[1][1][1], 3))
    1.0
    >>> print(round(ibm5.head_vacancy_table[2][1][1], 3))
    0.0
    >>> print(round(ibm5.non_head_vacancy_table[3][3][6], 3))
    1.0

    >>> print(round(ibm5.fertility_table[2]['summarize'], 3))
    1.0
    >>> print(round(ibm5.fertility_table[1]['book'], 3))
    1.0

    >>> print(ibm5.p1)
    0.033...

    >>> test_sentence = bitext[2]
    >>> test_sentence.words
    ['das', 'buch', 'ist', 'ja', 'klein']
    >>> test_sentence.mots
    ['the', 'book', 'is', 'small']
    >>> test_sentence.alignment
    Alignment([(0, 0), (1, 1), (2, 2), (3, None), (4, 3)])

    """

    MIN_SCORE_FACTOR = 0.2
    """
    Alignments with scores below this factor are pruned during sampling
    """

    def __init__(
        self,
        sentence_aligned_corpus,
        iterations,
        source_word_classes,
        target_word_classes,
        probability_tables=None,
    ):
        """
        Train on ``sentence_aligned_corpus`` and create a lexical
        translation model, vacancy models, a fertility model, and a
        model for generating NULL-aligned words.

        Translation direction is from ``AlignedSent.mots`` to
        ``AlignedSent.words``.

        :param sentence_aligned_corpus: Sentence-aligned parallel corpus
        :type sentence_aligned_corpus: list(AlignedSent)

        :param iterations: Number of iterations to run training algorithm
        :type iterations: int

        :param source_word_classes: Lookup table that maps a source word
            to its word class, the latter represented by an integer id
        :type source_word_classes: dict[str]: int

        :param target_word_classes: Lookup table that maps a target word
            to its word class, the latter represented by an integer id
        :type target_word_classes: dict[str]: int

        :param probability_tables: Optional. Use this to pass in custom
            probability values. If not specified, probabilities will be
            set to a uniform distribution, or some other sensible value.
            If specified, all the following entries must be present:
            ``translation_table``, ``alignment_table``,
            ``fertility_table``, ``p1``, ``head_distortion_table``,
            ``non_head_distortion_table``, ``head_vacancy_table``,
            ``non_head_vacancy_table``. See ``IBMModel``, ``IBMModel4``,
            and ``IBMModel5`` for the type and purpose of these tables.
        :type probability_tables: dict[str]: object
        """
        super(IBMModel5, self).__init__(sentence_aligned_corpus)
        self.reset_probabilities()
        self.src_classes = source_word_classes
        self.trg_classes = target_word_classes

        if probability_tables is None:
            # Get probabilities from IBM model 4
            ibm4 = IBMModel4(
                sentence_aligned_corpus,
                iterations,
                source_word_classes,
                target_word_classes,
            )
            self.translation_table = ibm4.translation_table
            self.alignment_table = ibm4.alignment_table
            self.fertility_table = ibm4.fertility_table
            self.p1 = ibm4.p1
            self.head_distortion_table = ibm4.head_distortion_table
            self.non_head_distortion_table = ibm4.non_head_distortion_table
            self.set_uniform_probabilities(sentence_aligned_corpus)
        else:
            # Set user-defined probabilities
            self.translation_table = probability_tables["translation_table"]
            self.alignment_table = probability_tables["alignment_table"]
            self.fertility_table = probability_tables["fertility_table"]
            self.p1 = probability_tables["p1"]
            self.head_distortion_table = probability_tables["head_distortion_table"]
            self.non_head_distortion_table = probability_tables[
                "non_head_distortion_table"
            ]
            self.head_vacancy_table = probability_tables["head_vacancy_table"]
            self.non_head_vacancy_table = probability_tables["non_head_vacancy_table"]

        for n in range(0, iterations):
            self.train(sentence_aligned_corpus)

    def reset_probabilities(self):
        super(IBMModel5, self).reset_probabilities()

        self.head_vacancy_table = defaultdict(
            lambda: defaultdict(lambda: defaultdict(lambda: self.MIN_PROB))
        )
        """
        dict[int][int][int]: float. Probability(vacancy difference |
        number of remaining valid positions,target word class).
        Values accessed as ``head_vacancy_table[dv][v_max][trg_class]``.
        """

        self.non_head_vacancy_table = defaultdict(
            lambda: defaultdict(lambda: defaultdict(lambda: self.MIN_PROB))
        )
        """
        dict[int][int][int]: float. Probability(vacancy difference |
        number of remaining valid positions,target word class).
        Values accessed as ``non_head_vacancy_table[dv][v_max][trg_class]``.
        """

    def set_uniform_probabilities(self, sentence_aligned_corpus):
        """
        Set vacancy probabilities uniformly to
        1 / cardinality of vacancy difference values
        """
        max_m = longest_target_sentence_length(sentence_aligned_corpus)

        # The maximum vacancy difference occurs when a word is placed in
        # the last available position m of the target sentence and the
        # previous word position has no vacancies.
        # The minimum is 1-max_v, when a word is placed in the first
        # available position and the previous word is placed beyond the
        # last available position.
        # Thus, the number of possible vacancy difference values is
        # (max_v) - (1-max_v) + 1 = 2 * max_v.
        if max_m > 0 and (1 / (2 * max_m)) < IBMModel.MIN_PROB:
            warnings.warn(
                "A target sentence is too long ("
                + str(max_m)
                + " words). Results may be less accurate."
            )

        for max_v in range(1, max_m + 1):
            for dv in range(1, max_m + 1):
                initial_prob = 1 / (2 * max_v)
                self.head_vacancy_table[dv][max_v] = defaultdict(lambda: initial_prob)
                self.head_vacancy_table[-(dv - 1)][max_v] = defaultdict(
                    lambda: initial_prob
                )
                self.non_head_vacancy_table[dv][max_v] = defaultdict(
                    lambda: initial_prob
                )
                self.non_head_vacancy_table[-(dv - 1)][max_v] = defaultdict(
                    lambda: initial_prob
                )

    def train(self, parallel_corpus):
        counts = Model5Counts()
        for aligned_sentence in parallel_corpus:
            l = len(aligned_sentence.mots)
            m = len(aligned_sentence.words)

            # Sample the alignment space
            sampled_alignments, best_alignment = self.sample(aligned_sentence)
            # Record the most probable alignment
            aligned_sentence.alignment = Alignment(
                best_alignment.zero_indexed_alignment()
            )

            # E step (a): Compute normalization factors to weigh counts
            total_count = self.prob_of_alignments(sampled_alignments)

            # E step (b): Collect counts
            for alignment_info in sampled_alignments:
                count = self.prob_t_a_given_s(alignment_info)
                normalized_count = count / total_count

                for j in range(1, m + 1):
                    counts.update_lexical_translation(
                        normalized_count, alignment_info, j
                    )

                slots = Slots(m)
                for i in range(1, l + 1):
                    counts.update_vacancy(
                        normalized_count, alignment_info, i, self.trg_classes, slots
                    )

                counts.update_null_generation(normalized_count, alignment_info)
                counts.update_fertility(normalized_count, alignment_info)

        # M step: Update probabilities with maximum likelihood estimates
        # If any probability is less than MIN_PROB, clamp it to MIN_PROB
        existing_alignment_table = self.alignment_table
        self.reset_probabilities()
        self.alignment_table = existing_alignment_table  # don't retrain

        self.maximize_lexical_translation_probabilities(counts)
        self.maximize_vacancy_probabilities(counts)
        self.maximize_fertility_probabilities(counts)
        self.maximize_null_generation_probabilities(counts)

    def sample(self, sentence_pair):
        """
        Sample the most probable alignments from the entire alignment
        space according to Model 4

        Note that Model 4 scoring is used instead of Model 5 because the
        latter is too expensive to compute.

        First, determine the best alignment according to IBM Model 2.
        With this initial alignment, use hill climbing to determine the
        best alignment according to IBM Model 4. Add this alignment and
        its neighbors to the sample set. Repeat this process with other
        initial alignments obtained by pegging an alignment point.
        Finally, prune alignments that have substantially lower Model 4
        scores than the best alignment.

        :param sentence_pair: Source and target language sentence pair
            to generate a sample of alignments from
        :type sentence_pair: AlignedSent

        :return: A set of best alignments represented by their ``AlignmentInfo``
            and the best alignment of the set for convenience
        :rtype: set(AlignmentInfo), AlignmentInfo
        """
        sampled_alignments, best_alignment = super(IBMModel5, self).sample(
            sentence_pair
        )
        return self.prune(sampled_alignments), best_alignment

    def prune(self, alignment_infos):
        """
        Removes alignments from ``alignment_infos`` that have
        substantially lower Model 4 scores than the best alignment

        :return: Pruned alignments
        :rtype: set(AlignmentInfo)
        """
        alignments = []
        best_score = 0

        for alignment_info in alignment_infos:
            score = IBMModel4.model4_prob_t_a_given_s(alignment_info, self)
            best_score = max(score, best_score)
            alignments.append((alignment_info, score))

        threshold = IBMModel5.MIN_SCORE_FACTOR * best_score
        alignments = [a[0] for a in alignments if a[1] > threshold]
        return set(alignments)

    def hillclimb(self, alignment_info, j_pegged=None):
        """
        Starting from the alignment in ``alignment_info``, look at
        neighboring alignments iteratively for the best one, according
        to Model 4

        Note that Model 4 scoring is used instead of Model 5 because the
        latter is too expensive to compute.

        There is no guarantee that the best alignment in the alignment
        space will be found, because the algorithm might be stuck in a
        local maximum.

        :param j_pegged: If specified, the search will be constrained to
            alignments where ``j_pegged`` remains unchanged
        :type j_pegged: int

        :return: The best alignment found from hill climbing
        :rtype: AlignmentInfo
        """
        alignment = alignment_info  # alias with shorter name
        max_probability = IBMModel4.model4_prob_t_a_given_s(alignment, self)

        while True:
            old_alignment = alignment
            for neighbor_alignment in self.neighboring(alignment, j_pegged):
                neighbor_probability = IBMModel4.model4_prob_t_a_given_s(
                    neighbor_alignment, self
                )
                if neighbor_probability > max_probability:
                    alignment = neighbor_alignment
                    max_probability = neighbor_probability

            if alignment == old_alignment:
                # Until there are no better alignments
                break

        alignment.score = max_probability
        return alignment

    def prob_t_a_given_s(self, alignment_info):
        """
        Probability of target sentence and an alignment given the
        source sentence
        """
        probability = 1.0
        MIN_PROB = IBMModel.MIN_PROB
        slots = Slots(len(alignment_info.trg_sentence) - 1)

        def null_generation_term():
            # Binomial distribution: B(m - null_fertility, p1)
            value = 1.0
            p1 = self.p1
            p0 = 1 - p1
            null_fertility = alignment_info.fertility_of_i(0)
            m = len(alignment_info.trg_sentence) - 1
            value *= pow(p1, null_fertility) * pow(p0, m - 2 * null_fertility)
            if value < MIN_PROB:
                return MIN_PROB

            # Combination: (m - null_fertility) choose null_fertility
            for i in range(1, null_fertility + 1):
                value *= (m - null_fertility - i + 1) / i
            return value

        def fertility_term():
            value = 1.0
            src_sentence = alignment_info.src_sentence
            for i in range(1, len(src_sentence)):
                fertility = alignment_info.fertility_of_i(i)
                value *= (
                    factorial(fertility)
                    * self.fertility_table[fertility][src_sentence[i]]
                )
                if value < MIN_PROB:
                    return MIN_PROB
            return value

        def lexical_translation_term(j):
            t = alignment_info.trg_sentence[j]
            i = alignment_info.alignment[j]
            s = alignment_info.src_sentence[i]
            return self.translation_table[t][s]

        def vacancy_term(i):
            value = 1.0
            tablet = alignment_info.cepts[i]
            tablet_length = len(tablet)
            total_vacancies = slots.vacancies_at(len(slots))

            # case 1: NULL-aligned words
            if tablet_length == 0:
                return value

            # case 2: head word
            j = tablet[0]
            previous_cept = alignment_info.previous_cept(j)
            previous_center = alignment_info.center_of_cept(previous_cept)
            dv = slots.vacancies_at(j) - slots.vacancies_at(previous_center)
            max_v = total_vacancies - tablet_length + 1
            trg_class = self.trg_classes[alignment_info.trg_sentence[j]]
            value *= self.head_vacancy_table[dv][max_v][trg_class]
            slots.occupy(j)  # mark position as occupied
            total_vacancies -= 1
            if value < MIN_PROB:
                return MIN_PROB

            # case 3: non-head words
            for k in range(1, tablet_length):
                previous_position = tablet[k - 1]
                previous_vacancies = slots.vacancies_at(previous_position)
                j = tablet[k]
                dv = slots.vacancies_at(j) - previous_vacancies
                max_v = total_vacancies - tablet_length + k + 1 - previous_vacancies
                trg_class = self.trg_classes[alignment_info.trg_sentence[j]]
                value *= self.non_head_vacancy_table[dv][max_v][trg_class]
                slots.occupy(j)  # mark position as occupied
                total_vacancies -= 1
                if value < MIN_PROB:
                    return MIN_PROB
            return value

        # end nested functions

        # Abort computation whenever probability falls below MIN_PROB at
        # any point, since MIN_PROB can be considered as zero
        probability *= null_generation_term()
        if probability < MIN_PROB:
            return MIN_PROB

        probability *= fertility_term()
        if probability < MIN_PROB:
            return MIN_PROB

        for j in range(1, len(alignment_info.trg_sentence)):
            probability *= lexical_translation_term(j)
            if probability < MIN_PROB:
                return MIN_PROB

        for i in range(1, len(alignment_info.src_sentence)):
            probability *= vacancy_term(i)
            if probability < MIN_PROB:
                return MIN_PROB

        return probability

    def maximize_vacancy_probabilities(self, counts):
        MIN_PROB = IBMModel.MIN_PROB

        head_vacancy_table = self.head_vacancy_table
        for dv, max_vs in counts.head_vacancy.items():
            for max_v, trg_classes in max_vs.items():
                for t_cls in trg_classes:
                    estimate = (
                        counts.head_vacancy[dv][max_v][t_cls]
                        / counts.head_vacancy_for_any_dv[max_v][t_cls]
                    )
                    head_vacancy_table[dv][max_v][t_cls] = max(estimate, MIN_PROB)

        non_head_vacancy_table = self.non_head_vacancy_table
        for dv, max_vs in counts.non_head_vacancy.items():
            for max_v, trg_classes in max_vs.items():
                for t_cls in trg_classes:
                    estimate = (
                        counts.non_head_vacancy[dv][max_v][t_cls]
                        / counts.non_head_vacancy_for_any_dv[max_v][t_cls]
                    )
                    non_head_vacancy_table[dv][max_v][t_cls] = max(estimate, MIN_PROB)


class Model5Counts(Counts):
    """
    Data object to store counts of various parameters during training.
    Includes counts for vacancies.
    """

    def __init__(self):
        super(Model5Counts, self).__init__()
        self.head_vacancy = defaultdict(
            lambda: defaultdict(lambda: defaultdict(lambda: 0.0))
        )
        self.head_vacancy_for_any_dv = defaultdict(lambda: defaultdict(lambda: 0.0))
        self.non_head_vacancy = defaultdict(
            lambda: defaultdict(lambda: defaultdict(lambda: 0.0))
        )
        self.non_head_vacancy_for_any_dv = defaultdict(lambda: defaultdict(lambda: 0.0))

    def update_vacancy(self, count, alignment_info, i, trg_classes, slots):
        """
        :param count: Value to add to the vacancy counts
        :param alignment_info: Alignment under consideration
        :param i: Source word position under consideration
        :param trg_classes: Target word classes
        :param slots: Vacancy states of the slots in the target sentence.
            Output parameter that will be modified as new words are placed
            in the target sentence.
        """
        tablet = alignment_info.cepts[i]
        tablet_length = len(tablet)
        total_vacancies = slots.vacancies_at(len(slots))

        # case 1: NULL aligned words
        if tablet_length == 0:
            return  # ignore zero fertility words

        # case 2: head word
        j = tablet[0]
        previous_cept = alignment_info.previous_cept(j)
        previous_center = alignment_info.center_of_cept(previous_cept)
        dv = slots.vacancies_at(j) - slots.vacancies_at(previous_center)
        max_v = total_vacancies - tablet_length + 1
        trg_class = trg_classes[alignment_info.trg_sentence[j]]
        self.head_vacancy[dv][max_v][trg_class] += count
        self.head_vacancy_for_any_dv[max_v][trg_class] += count
        slots.occupy(j)  # mark position as occupied
        total_vacancies -= 1

        # case 3: non-head words
        for k in range(1, tablet_length):
            previous_position = tablet[k - 1]
            previous_vacancies = slots.vacancies_at(previous_position)
            j = tablet[k]
            dv = slots.vacancies_at(j) - previous_vacancies
            max_v = total_vacancies - tablet_length + k + 1 - previous_vacancies
            trg_class = trg_classes[alignment_info.trg_sentence[j]]
            self.non_head_vacancy[dv][max_v][trg_class] += count
            self.non_head_vacancy_for_any_dv[max_v][trg_class] += count
            slots.occupy(j)  # mark position as occupied
            total_vacancies -= 1


class Slots(object):
    """
    Represents positions in a target sentence. Used to keep track of
    which slot (position) is occupied.
    """

    def __init__(self, target_sentence_length):
        self._slots = [False] * (target_sentence_length + 1)  # 1-indexed

    def occupy(self, position):
        """
        Mark slot at ``position`` as occupied
        """
        self._slots[position] = True

    def vacancies_at(self, position):
        """
        :return: Number of vacant slots up to, and including, ``position``
        """
        vacancies = 0
        for k in range(1, position + 1):
            if not self._slots[k]:
                vacancies += 1
        return vacancies

    def __len__(self):
        return len(self._slots) - 1  # exclude dummy zeroth element
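

# A minimal usage sketch (added for illustration; it is not part of the
# original NLTK module). It shows how the ``Slots`` helper realizes the
# vacancy function v(j) described in the module docstring, using a small
# hypothetical target sentence. The block is guarded so it only runs when
# this file is executed directly.
if __name__ == "__main__":
    demo_slots = Slots(5)  # a target sentence with m = 5 empty slots
    demo_slots.occupy(2)  # a translated word has been placed in position 2
    demo_slots.occupy(4)  # another translated word occupies position 4

    # v(j) counts the vacant slots up to and including position j
    print([demo_slots.vacancies_at(j) for j in range(1, len(demo_slots) + 1)])
    # Expected output: [1, 1, 2, 2, 3]

    # Vacancy difference for a head word placed at j = 5 when the previous
    # cept's center was position 2: dv = v(5) - v(2) = 3 - 1 = 2
    print(demo_slots.vacancies_at(5) - demo_slots.vacancies_at(2))
    # Expected output: 2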