Title
Cohesion in Arabic into English Translation:
Publisher
Ain Shams University.
Author
Al-Kashef, Yasmin Ossama.
Thesis Committee
Supervisor / Amal Mohammad Abdel Maqsoud
Supervisor / Mona Fouad Attia
Researcher / Yasmin Ossama Al-Kashef
Subject
Cohesion. Arabic into English Translation. Translation Universals.
Publication Date
2011
Number of Pages
203 p.
Language
English
Degree
Master's
Specialization
Language and Linguistics
Date of Approval
1/1/2011
Place of Approval
Ain Shams University - Faculty of Al-Alsun - English
Synopsis
Modern technology has opened new horizons for research in linguistics and, in turn, in translation studies. Rather than relying on manual analysis of a limited amount of text, electronic corpora put the investigation of texts comprising millions of words at every researcher's fingertips. In the nineties, this new tool started to invade the realms of descriptive translation studies, sparking a debate on the possible existence of universal features of translated texts.
The key problem facing the study of translation universals is the fact that universality is difficult to prove. Though the suggestion of universal features of translation can be traced back to the 1990s, the debate is still rife concerning the universality of the universals. More than ten years after Baker's seminal article (1993), Mauranen and Kujamäki (2004) question the concept in their book Translation Universals: Do They Exist? A single study can hardly be expected to prove the translation universals in all languages. Thus, the majority of studies focus on one language pair, bi-directionally or even mono-directionally (like the present study), whereas very few projects set out to investigate the translation of multiple languages into one language, as in the case of the studies conducted on the Translational English Corpus (TEC), a corpus of translated English with multiple source languages. It is expected that the universality of the universals will either be proved or disproved by the overall conclusions of the different studies conducted on the world's languages.
The study sets out to test the validity of the translation universals hypothesis, namely explicitation, simplification and normalization in translated texts, through a corpus of English news stories translated from Arabic and a comparable corpus of original English news stories. Recent literature has identified features such as explicitation, simplification and normalization in texts translated into major Indo-European languages; translation researchers are keen to know whether these phenomena also appear in non-European languages like Arabic. Applying a corpus-based methodology, the study focuses on cohesion in English target texts with Arabic as the source language. Reference, conjunctions and lexical cohesion are traced in the texts to reveal any disparity in the use of cohesive devices between original English texts and English target texts translated from Arabic, and to decide whether such disparity is due to those universal features (a rough sketch of such a frequency comparison follows the research questions below). The study raises a number of questions:
• Do the cohesive devices prove or disprove the translation universals in Arabic into English translations?
• Is there a relation between some cohesive devices and particular translation universals?
• Is the particular usage of cohesive ties in the English target texts, if found, a result of certain influences of the source text? If so, does this jeopardize the universality of the universals?
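As a purely illustrative sketch of the frequency comparison mentioned above (not part of the thesis; the file names and marker lists below are hypothetical), the following Python snippet counts a few cohesive markers per 1,000 words in a translated and a non-translated corpus so that their rates can be compared:

```python
# Hypothetical sketch: compare the rate (per 1,000 words) of a few
# cohesive markers in a translated corpus (TC) and a non-translated
# corpus (NTC). File names and marker lists are illustrative only.
import re
from collections import Counter

MARKERS = {
    "demonstrative reference": {"this", "that", "these", "those"},
    "causal conjunction": {"because", "therefore", "thus", "hence"},
    "additive conjunction": {"furthermore", "moreover", "also"},
}

def tokenize(text):
    """Lower-case word tokenizer; adequate for a rough frequency count."""
    return re.findall(r"[a-z]+", text.lower())

def marker_rates(tokens):
    """Occurrences of each marker group per 1,000 tokens."""
    counts = Counter(tokens)
    total = len(tokens) or 1
    return {group: 1000 * sum(counts[w] for w in words) / total
            for group, words in MARKERS.items()}

if __name__ == "__main__":
    for label, path in [("TC", "tc_translated_english.txt"),
                        ("NTC", "ntc_original_english.txt")]:
        with open(path, encoding="utf-8") as f:
            rates = marker_rates(tokenize(f.read()))
        print(label, {g: round(r, 2) for g, r in rates.items()})
```

Normalizing to a per-thousand-word rate keeps the comparison meaningful even when the two sub-corpora differ in size.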
Translation Universals
Baker (1993) finds in the availability of electronic corpora a resource for studying various translational features, including the distinctive nature of the translated text and the distinctive styles of individual translators (p.234). She suggests the idea of the universal features of translation, or translation universals, and defines them as "features which typically occur in translated text rather than original utterances and which are not the result of interference from specific linguistic systems" (p.234). To Baker, the focus is on patterns that are not the result of a specific target text or source text. Rather, these patterns are specific to translation, and thus typical of the translated language rather than the non-translated language. Early on, researchers, among whom are Blum-Kulka and Levenston (1983), Frawley (1984) and Blum-Kulka (1986), put forward a series of hypotheses suggesting universal features of translation, i.e. features recurring invariably in translated texts irrespective of the source and target languages involved. For instance, Laviosa (2002) points out that in a study conducted on English-French and French-English translations, Blum-Kulka (1986) notices shifts in the types of cohesion markers used in the target text and records instances where the translator adds to the target text by inserting words that are absent in the source text. She concludes that such phenomena render the target texts more explicit than their sources and suggests that this could also hold true for all translations, notwithstanding the language pair, i.e. the source and target languages (Laviosa, 2002, p.52). This hypothesis and similar assumptions are based on small-scale, manually conducted contrastive studies. Therefore, Baker sees in electronic corpora a useful proving ground for such hypotheses.
Baker (1993, 1995, 1996) identifies three universals, namely explicitation, normalization and simplification. Explicitation is "an overall tendency to spell things out rather than leave them implicit in translation" (Baker, 1996, p.180). This is seen in the fact that translations are sometimes longer than their originals. Translators sometimes prefer to avoid ambiguity; the vague is made precise and the ellipted is filled in (Vanderauwera, 1985, p.97). Normalization is the "tendency to conform to patterns and practices which are typical to the target language" (Baker, 1996, p.176-7). Instances of normalization are found in using common collocations, even if the original is unusual, and in using typical grammatical structures of the target language, often making grammatical what is not in the original (Helgegren, 2005, p.13). Simplification is "the tendency of translated texts to contain simplified language compared to the original text" (Baker, 1996, p.180). This phenomenon is reflected in various strategies, including the breaking up of long sentences and the omission of repeated information. The combined effect of the three universals is that translations are usually less complex than their originals.
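To make the notion of reduced complexity concrete, the sketch below computes two rough proxies that corpus-based work sometimes uses, mean sentence length and type-token ratio; these particular measures are offered purely as an illustration and are not claimed to be the measures used in this study.

```python
# Illustrative only: two rough complexity proxies sometimes used in
# corpus-based work (not necessarily the measures used in this thesis).
import re

def mean_sentence_length(text):
    """Average number of word tokens per sentence (naive splitting)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return sum(len(re.findall(r"\w+", s)) for s in sentences) / max(len(sentences), 1)

def type_token_ratio(text):
    """Distinct word forms divided by total word tokens."""
    tokens = re.findall(r"\w+", text.lower())
    return len(set(tokens)) / max(len(tokens), 1)

sample = "The minister arrived. He spoke briefly. Then he left the hall."
print(round(mean_sentence_length(sample), 2), round(type_token_ratio(sample), 2))
```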
Cohesion in English and Arabic
In the present study, cohesion is selected to be the linguistic phenomenon serving as a main parameter for the application of the translation universals to the Arabic-English translation. Hoey (1991) states, “Cohesion may be crudely defined as the way certain words or grammatical features of a sentence can connect that sentence to its predecessors (and successors) in text” (p.3). This relation between the different parts of the text is realized through cohesive ties or links, “signals of cohesion [that] indicate how the part of the text with which they occur links up conceptually with some other part” (Dooley and Levinsohn, 2001, p.27).
Halliday and Hasan (1976) distinguish two types of cohesive relations, i.e. grammatical cohesion and lexical cohesion. Grammatical cohesion is subdivided into the subclasses of reference, substitution, ellipsis and conjunction, whereas lexical cohesion is subdivided into the subclasses of reiteration and collocation. Shi (2004) points out that the cohesive tie of reference "occurs when the reader has to retrieve the identity of what is being talked about by referring to another expression in the immediate context" (p.1). Halliday and Hasan (1976) identify three types of reference: i) personal reference, expressed through pronouns and determiners (e.g. I, me, mine, you, your, ours, etc.), ii) demonstrative reference, expressed through determiners and adverbs (e.g. this, these, etc.), and iii) comparative reference, expressed "through adjectives and serves to compare items within a text in terms of identity and similarity" (Nunan, 1993, p.24) (e.g. same, similar, other, etc.). In substitution, an item is replaced by another, using words like one, do, or so (e.g. A: Do you like movies? B: I do) (Shi, 2004, p.2). Ellipsis, by contrast, is the process in which a certain item is not mentioned and is understood via a previous sentence or clause (Jackson, 1991, p. 112). Conjunctions do not require the reader to search elsewhere for information; rather, they signal that what follows is to be interpreted in relation to the previous text (Fine, 1994, p. 212). Halliday and Hasan (1976) identify four types of conjunctions: additive (and), adversative (yet, however), causal (therefore, because), and temporal (next, afterwards) (p. 238-61).
As regards lexical cohesion, two main devices are identified, namely reiteration and collocation. Trujillo (2004) defines reiteration as “a general lexical cohesive phenomenon which includes several different procedures of cohesion: the repetition of a lexical item, the use of a synonym, near-synonym or superordinate and the use of a general word” (p.1). As regards collocation, it “has long been the name given to the relationship a lexical item has with items that appear with greater than random probability in its (textual) context” (Hoey, 1991, p.7).
The study also sheds light on cohesion in the source language, Arabic, be it grammatical or lexical. To cite but a few examples, in an-Nahw al-Waafi, Hasan (1966) identifies ten coordinators, such as waw, faa, θumma, hattaa, bal, lakin and laa (p. 628-9). On the other hand, Hassaan (1979) identifies other devices such as (i) personal pronouns, (ii) demonstratives, (iii) reiteration or repetition, and (iv) reiteration of meaning. Furthermore, cohesion in Arabic is realized through adverbs of place and time and through conditionals.
The selection of cohesion in particular is inspired by the evidence for the translation universals already suggested by other scholars: breaking up long sentences, filling in ellipted forms, and the fact that translations are sometimes longer than their originals. In effect, the phenomenon has been studied before, but with different language pairs. Thus, investigating the English-Arabic language pair, even mono-directionally (from Arabic into English), proves worthwhile.
Corpus Linguistics
Kenny (2001) defines corpus linguistics simply as "the branch of linguistics that studies language on the basis of corpora" (p.50). Olohan (2004) explains that a corpus is "a collection of texts, selected and compiled according to specific criteria" (p.1). Kennedy (1998) contends that corpus linguistics finds in bodies of text a "domain of study" and a "source of evidence for linguistic description and argumentation" (p.7).
Herscovitch (2005) points out that the early 1960s witnessed the inception of a one-million-word computerized corpus, in reference to the Brown corpus (p.1). Kennedy (1998) explains that by the early 1980s the number of corpora was still very limited; they had been prepared over two decades on mainframe computers for the non-profit purpose of linguistic research at universities, for these were the institutions that could afford such technology. The time and effort involved in compiling the first group of electronic corpora is hard to imagine, especially compared to the capacities of today's computers. In the 1970s microcomputers emerged, and in the 1980s the CD-ROM came into vogue, making electronic corpora more accessible. Corpora started to expand in size, and processing became more advanced given the speed and capacity of more modern computers. All these developments led to a blooming field of research in the 1990s. In terms of size, researchers have started to think that a corpus of 10-20 million words may not be enough, and a number of researchers are currently working on infinite corpora (Kennedy, 1998, p.4-7). In terms of speed, Kennedy (1998) gives the example of a concordancer (software specialized in finding key words in context) that in the 1970s would take an hour to search for a word in a one-million-word corpus on a mainframe computer. In the 1990s, the same task could take a few minutes on a personal computer (p.7).
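By way of illustration, the basic idea of a concordancer can be approximated in a few lines; the keyword-in-context (KWIC) sketch below is not any of the software discussed in this study, merely a toy example of the kind of output such a tool produces.

```python
# Toy keyword-in-context (KWIC) concordance: print each occurrence of a
# node word together with a window of surrounding words.
import re

def kwic(text, node, window=4):
    tokens = re.findall(r"\w+", text.lower())
    lines = []
    for i, tok in enumerate(tokens):
        if tok == node.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left:>30}  [{tok}]  {right}")
    return lines

text = ("The committee met on Monday. However, the decision was postponed "
        "because the chairman was absent. However, talks will resume soon.")
for line in kwic(text, "however"):
    print(line)
```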
Unlike other branches of linguistics, corpus-based research is not built on a theory of language analysis at a particular level. Kennedy (1998) points out that corpus linguistics is not a theory of language that competes with other theories; rather, it can be used to analyze these theories (p.2). Corpus linguistics, however, is not "a mindless process of automatic language description" (Kennedy, 1998, p.2). It is interested in describing and explaining the "nature, structure and use of language and languages and particular matters such as language acquisition, variation and change" (Kennedy, 1998, p.8). Using corpus linguistics tools, research is conducted in lexis, syntax, pragmatics and other branches of linguistics.
Corpus-Based Translation Studies
Though corpus linguistics emerged in the 1960s, it was in the 1990s that this method of empirical investigation came to be considered useful in translation studies. According to Laviosa (1998), it was Baker (1993) who suggested that, with large corpora of original and translated texts made available and with the progress in corpus-based research, scholars might well reveal "the nature of translated text as a mediated communicative event" (Baker, 1993, p.243). But the goal is not merely uncovering the "third code" in itself; more importantly, it is fathoming the limits of and the forces driving the production of such language (Laviosa, 1998, p.1). Nonetheless, Olohan and Baker (2000) state that corpus-based translation studies focus on the translated text itself rather than on the equivalence of the translation to the source text (p.141). This type of research can also contribute to the identification of individual translators' linguistic behavior. In her groundbreaking article, "Corpus Linguistics and Translation Studies: Implications and Applications", Baker (1993) suggests that an excellent investigation tool can be found in the many large corpora now available in electronic form. This tool can test the linguistic nature of translations, either in contrast to their source texts or in contrast to non-translated target-language texts (p.235).
Corpus
In the present study, a comparable corpus is used. Herscovitch (2005) explains that a comparable corpus consists of two separate collections of texts in the same language, one being a corpus of original texts in a certain language and the other consisting of translations into that language from either a single language or a group of languages (p.5). Through this type of corpus, Baker (1995) contends, light is shed on "the nature of the translated text and the nature of the process of translation itself" (p.236). This has been the basis of Baker's attention to "the universal features of translations", Olohan (2004, p.37) comments. For the purpose of the study, two sub-corpora are used: a corpus of translated English and a corpus of original, or non-translated, English. The Translated Corpus (TC) is taken from the Linguistic Data Consortium (LDC) parallel corpus. The LDC corpus consists of Arabic news stories and their English translations, which LDC collected via the Ummah Press Service from January 2001 to September 2004. For the purpose of this research, only the translated English texts are used. The second sub-corpus, the Non-translated Corpus (NTC), is part of the LDC English Treebank (released in 1999), which is made up of news stories from the Wall Street Journal. In this way, a corpus of translated and non-translated English texts is ready for investigation. As a second step of analysis, a sample parallel corpus is used; it is made up of part of the TC and its original Arabic texts from the LDC parallel corpus.
Software
Two software packages are used in the present study. The first is the Simple Concordance Program (SCP), a free online program that provides concordancing utilities for text analysis. It is mainly used in the analysis of grammatical cohesion and, within lexical cohesion, reiteration only. The second is the N-gram Statistics Package (NSP), another free online program. It is a collocation extraction tool used in the present study to analyze collocation and compare it across the TC and the NTC.
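For illustration only (this does not reproduce NSP's actual interface or statistics), the sketch below extracts bigrams from a text and ranks them by pointwise mutual information, one family of association measures that collocation extraction tools of this kind typically implement.

```python
# Illustrative collocation extraction: count bigrams and rank them by
# pointwise mutual information (PMI). A toy stand-in for what a
# collocation extraction tool computes, not the NSP itself.
import math
import re
from collections import Counter

def pmi_bigrams(text, min_count=2):
    tokens = re.findall(r"\w+", text.lower())
    if len(tokens) < 2:
        return []
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scored = []
    for (w1, w2), c in bigrams.items():
        if c < min_count:
            continue
        p_pair = c / (n - 1)                               # observed bigram probability
        p_indep = (unigrams[w1] / n) * (unigrams[w2] / n)  # expected if words were independent
        scored.append(((w1, w2), round(math.log2(p_pair / p_indep), 2)))
    return sorted(scored, key=lambda item: item[1], reverse=True)

text = ("the prime minister said the prime minister will visit the region "
        "while the foreign minister stays at home the prime minister agreed")
print(pmi_bigrams(text))
```

Word pairs that co-occur more often than chance, such as a fixed collocation, receive higher scores; comparing such scores across the TC and the NTC is the general idea behind collocation-based analysis.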
Results and Conclusions
The results of the present study indicate that some cohesive devices are indeed more prevalent in the translated text, namely reiteration, demonstrative reference and causal conjunctions. This is in line with the translation universals hypothesis. In addition, other cohesive devices, namely ellipsis and substitution, are rarer in translated texts, yet still support the translation universals. Therefore, the results of the present study attest to the validity of the translation universals hypothesis. The explicitation hypothesis proves true with respect to reiteration, ellipsis and reference. The simplification hypothesis is found to be true with respect to reference and substitution. The normalization hypothesis follows suit with respect to conjunctions. However, some reservations must be taken into consideration. It is not wise to generalize when speaking of cohesion and the translation universals, for the different cohesive devices bear different relations to the translation universals hypothesis. Many ties are found to be less common in the translated text; however, it is how they are used that leads to simpler, normalized and more explicit translated texts. It is recommended to conduct similar studies on different language pairs, especially non-European language pairs. Likewise, a study similar to the present one but in the opposite direction of translation (English into Arabic) could shed significant light on the results and conclusions of the present study.