Audiovisual Translation in a Global Context


Palgrave Studies in Translating and Interpreting
Series Editor: Margaret Rogers, The Centre for Translation Studies, University of Surrey, UK

This series examines the crucial role which translation and interpreting in their myriad forms play at all levels of communication in today’s world, from the local to the global. Whilst this role is being increasingly recognised in some quarters (for example, through European Union legislation), in others it remains controversial for economic, political and social reasons. The rapidly changing landscape of translation and interpreting practice is accompanied by equally challenging developments in their academic study, often in an interdisciplinary framework and increasingly reflecting commonalities between what were once considered to be separate disciplines. The books in this series address specific issues in both translation and interpreting with the aim not only of charting but also of shaping the discipline with respect to contemporary practice and research.

Titles include:

Ann Corsellis, PUBLIC SERVICE INTERPRETING
C. K. Quah, TRANSLATION AND TECHNOLOGY
Jenny Williams, THEORIES OF TRANSLATION
Margaret Rogers, SPECIALISED TRANSLATION: Shedding the ‘Non-Literary’ Tag
Rocío Baños Piñero and Jorge Díaz Cintas (editors), AUDIOVISUAL TRANSLATION IN A GLOBAL CONTEXT: Mapping an Ever-changing Landscape

Palgrave Studies in Translating and Interpreting Series Standing Order ISBN 978–1–403–90393–8 Hardback (outside North America only) You can receive future titles in this series as they are published by placing a standing order. Please contact your bookseller or, in case of difficulty, write to us at the address below with your name and address, the title of the series and the ISBN quoted above. Customer Services Department, Macmillan Distribution Ltd, Houndmills, Basingstoke, Hampshire RG21 6XS, England

Also by Jorge Díaz Cintas

AUDIOVISUAL TRANSLATION: Language Transfer on Screen (co-editor)
AUDIOVISUAL TRANSLATION: Subtitling (co-author)
AUDIOVISUAL TRANSLATION – TAKING STOCK (co-editor)
MEDIA FOR ALL: Subtitling for the Deaf, Audio Description, and Sign Language (co-editor)
NEW INSIGHTS INTO AUDIOVISUAL TRANSLATION AND MEDIA ACCESSIBILITY: Media for All (co-editor)
NEW TRENDS IN AUDIOVISUAL TRANSLATION (editor)
THE DIDACTICS OF AUDIOVISUAL TRANSLATION (editor)

Audiovisual Translation in a Global Context Mapping an Ever-changing Landscape Edited by

Rocío Baños Piñero University College London, UK

and

Jorge Díaz Cintas University College London, UK

Selection and editorial content © Rocío Baños Piñero and Jorge Díaz Cintas 2015 Individual chapters © Respective authors 2015 Softcover reprint of the hardcover 1st edition 2015 978-1-137-55288-4 All rights reserved. No reproduction, copy or transmission of this publication may be made without written permission. No portion of this publication may be reproduced, copied or transmitted save with written permission or in accordance with the provisions of the Copyright, Designs and Patents Act 1988, or under the terms of any licence permitting limited copying issued by the Copyright Licensing Agency, Saffron House, 6–10 Kirby Street, London EC1N 8TS. Any person who does any unauthorized act in relation to this publication may be liable to criminal prosecution and civil claims for damages. The authors have asserted their rights to be identified as the authors of this work in accordance with the Copyright, Designs and Patents Act 1988. First published 2015 by PALGRAVE MACMILLAN Palgrave Macmillan in the UK is an imprint of Macmillan Publishers Limited, registered in England, company number 785998, of Houndmills, Basingstoke, Hampshire RG21 6XS. Palgrave Macmillan in the US is a division of St Martin’s Press LLC, 175 Fifth Avenue, New York, NY 10010. Palgrave Macmillan is the global academic imprint of the above companies and has companies and representatives throughout the world. Palgrave® and Macmillan® are registered trademarks in the United States, the United Kingdom, Europe and other countries. ISBN 978-1-349-55404-1 ISBN 978-1-137-55289-1 (eBook) DOI 10.1057/9781137552891 This book is printed on paper suitable for recycling and made from fully managed and sustained forest sources. Logging, pulping and manufacturing processes are expected to conform to the environmental regulations of the country of origin. A catalogue record for this book is available from the British Library. 
Library of Congress Cataloging-in-Publication Data Audiovisual translation in a global context : mapping an ever-changing landscape / edited by Rocío Baños Piñero, University College London, UK; Jorge Díaz Cintas, University College, London, UK. pages cm Summary: “Audiovisual Translation in a Global Context offers an up-to-date survey of the field of Audiovisual Translation (AVT). One of the main aims of the book is to document the changes taking place in this thriving discipline, by focusing not only on current projects and research being carried out in AVT but also on the professional practice in a wide range of contexts. The contributors to the collection cover a wide array of topics from subtitling, dubbing, and voiceover, to media accessibility practices like sign language, subtitling for the deaf and the hard-of-hearing, and audio description for the blind and visually impaired. In an accessible and engaging manner, the chapters discuss theoretical issues in close relation to real translation problems and empirical data, providing useful and practical insights into the personalised input that translators inevitably give to their work”— Provided by publisher. Includes bibliographical references. 1. Translating and interpreting 2. Dubbing of motion pictures 3. Motion pictures Titling 4. Dubbing of television programs 5. Television programs - Titling 6. Video recordings for the hearing impaired 7. Audio-visual equipment - Technological innovations. I. Baños Piñero, Rocío, 1980– editor. II. Díaz Cintas, Jorge, editor. TR886.7.A84 2015 777—dc23 2015019833 Typeset by MPS Limited, Chennai, India.

Contents

List of Figures and Tables  vii
Acknowledgements  x
Notes on Contributors  xi

1  Audiovisual Translation in a Global Context  1
   Rocío Baños Piñero and Jorge Díaz Cintas

Part I  Addressing Quality

2  Institutional Audiovisual Translation: A (Shop) Window on the World  13
   Adrián Fuentes-Luque

3  Accuracy Rate in Live Subtitling: The NER Model  28
   Pablo Romero-Fresco and Juan Martínez Pérez

4  Synchronized Subtitles in Live Television Programmes  51
   Mercedes de Castro, Luis Puente Rodríguez and Belén Ruiz Mezcua

5  Cross-fertilization between Reception Studies in Audio Description and Interpreting Quality Assessment: The Role of the Describer’s Voice  72
   Emilia Iglesias Fernández, Silvia Martínez Martínez and Antonio Javier Chica Núñez

Part II  Targeting the Audience

6  Audio Describing for an Audience with Learning Disabilities in Brazil: A Pilot Study  99
   Eliana P. C. Franco, Deise M. Medina Silveira and Bárbara C. dos Santos Carneiro

7  Analysing Redubs: Motives, Agents and Audience Response  110
   Serenella Zanotti

8  Subtitling in the Era of the Blu-ray  140
   Nicolas Sanchez

9  The MultilingualWeb (MLW) Project: A Collaborative Approach and a Challenge for Translation Studies  149
   Cristina Valdés

Part III  Mapping Professional Practices

10  Professional Realities of the Subtitling Industry: The Subtitlers’ Perspective  163
    Arista Szu-Yu Kuo

11  The Pros and Cons of Using Templates in Subtitling  192
    Kristijan Nikolić

12  Signing and Subtitling on Polish Television: A Case of (In)accessibility  203
    Renata Mliczak

13  Voiceover as Spoken Discourse  225
    Agata Hołobut

14  Dubbing Directors and Dubbing Actors: Co-authors of Translation for Dubbing  253
    Regina Mendes

15  Audio Description in Hong Kong  266
    Dawning Leung

Index  282

List of Figures and Tables

Figures

2.1   EuroparlTV: subtitle activation and language selection  18
2.2   EC webcast portal: audio languages  19
2.3   EC webcast portal: lengthy, unconventional subtitles  20
2.4   EC webcast portal: three-line subtitles  21
2.5   EC webcast portal: normalization  22
2.6   EbS: mobile phone version  23
2.7   UN webcast  25
2.8   UN YouTube channel: YouTube’s auto-captioning feature  25
4.1   Subsystems involved in real-time subtitling on live television  53
4.2   Delay of several seconds between audio/video and subtitles in the transcription process  55
4.3   Subtitle delay measurements taken from samples of TV magazines from the RTVE channel La 1  58
4.4   Subtitle delays in the live TV programme Las mañanas de La 1, RTVE, Spain  59
4.5   Variable transcription delays using direct ASR for 24 hours of non-controlled audio on Spanish TV channel La 1  60
4.6   Variable transcription delays using direct ASR for seven live TV programmes with controlled audio  60
4.7   Model to obtain synchronized subtitles from the audio in real-time subtitling  64
4.8   Modules for subtitle generation in TV live programmes where direct ASR is used to generate audio-to-text transcriptions  66
4.9   Keeping track of time references between audio and subtitles  67
4.10  Modules and data flow for TV channel buffering, subtitle synchronization, multiplexing and IPTV transmission  68
10.1  Frequency of working into languages other than the mother tongue  168
10.2  Frequency of timely payments from clients  174
10.3  Negotiation power vs. subtitling experience rates  176
10.4  Work premises  177
10.5  Royalty payments  178
10.6  Preference for acknowledgement  180
10.7  Professional practice as regards the inclusion of the subtitler’s name in the acknowledgement credits  180
10.8  Influence of tight deadlines on quality  182
10.9  Frequency of working with subtitling equipment  183
11.1  Example of a Swedish template provided for the translation of a film from English into Croatian  195
11.2  Example of an English template provided for the translation of an audiovisual programme into Croatian  197
12.1  Schedule of programmes on TVP1  211
12.2  TV Effatha  214
12.3  ONSI.tv  214

Tables

3.1   Examples of serious recognition errors  34
3.2   Examples of serious editing errors  35
3.3   Examples of standard recognition errors  37
3.4   Examples of standard editing errors  38
3.5   Examples of minor recognition errors  40
3.6   Examples of minor editing errors  40
4.1   Characterization of tests performed to evaluate delay variability  61
5.1   Methodology, material and subjects in Experiment 1  83
5.2   Methodology, material and subjects in Experiment 2  84
5.3   Experiment 1, results: vocal sonority ratings and their emotional correlates  86
5.4   Experiment 2, results: preferences and quality assessment of full AD clip  87
7.1   Films included in the audiovisual corpus  113
10.1  Respondents’ SLs  166
10.2  Respondents’ TLs  167
10.3  Ranges of average subtitling rates – only translating from a template  169
10.4  Ranges of average subtitling rates – only time-cueing  170
10.5  Ranges of average subtitling rates – time-cueing and translation  170
10.6  Ranges of average revision and proofreading rates  173
10.7  Deadline range  181
10.8  Supporting material provision  185
10.9  Quality of the provided supporting material  185
10.10 Influence of the quality of supporting material on the final translation  186
12.1  Levels of deafness  206
12.2  Programmes with SDH on Polish television  212
13.1  Characteristics of conversational register considered in the analysis  228
13.2  Dysfluency in the original and translated dialogues  230
13.3  Use of connectors in the original and translated dialogues  231
13.4  Use of gambits in the original and translated dialogues  235
13.5  Use of vocative types in the original and translated dialogues  237
13.6  Ritualized illocutions in the original and translated dialogues  238
13.7  Expletives in the original and translated dialogues  248
15.1  EOC audio described short videos (EOC 2013)  274

Acknowledgements

We would like to thank everyone who has been involved in the making of this volume, particularly the contributors for their generosity in sharing their work with us. Our thanks also go to Rebecca Brennan at Palgrave Macmillan for her support from inception through to production, and to Caroline Rees for having proofread the manuscript. Special thanks should be extended to the following colleagues for their time, expertise and help with the blind peer reviewing of the chapters contained in the book but also of other papers that have not made it to the final publication: Charlotte Bosseaux, Mary Carroll, Frederic Chaume, Agnieszka Chmiel, Carlo Eugeni, Federico Federici, Henrik Gottlieb, Jan-Louis Kruger, Carme Mangiron, José Luis Martí Ferriol, Anna Matamala, Laura McLoughlin-Incalcaterra, Josélia Neves, Pilar Orero, Jan Pedersen, Ana María Pereira Rodríguez, Aline Remael, Diana Sánchez, Noa Talaván Zanón, Adriana Tortoriello, Gert Vercauteren, Patrick Zabalbeascoa and Soledad Zárate. And last but not least, a very special thank you goes to our partners, families and friends for their unrelenting emotional support and remarkable patience.


Notes on Contributors

Editors

Rocío Baños Piñero is Senior Lecturer in Translation at the Centre for Translation Studies (CenTraS) at University College London, where she teaches audiovisual translation and translation technology. She holds a PhD from the University of Granada focusing on spoken Spanish in dubbed and domestic situation comedies. Her main research interests lie in the fields of audiovisual translation, translation technology and translation training. She has written numerous articles in these areas. She has also edited a dossier on dubbing in Trans: Revista de Traductología, and has co-edited a special issue of the journal Perspectives: Studies in Translatology entitled ‘Corpus Linguistics and Audiovisual Translation: In Search of an Integrated Approach’.

Jorge Díaz Cintas is Professor of Translation and Director of the Centre for Translation Studies (CenTraS) at University College London. He is the author of numerous articles, special issues and books on audiovisual translation, including Audiovisual Translation: Subtitling (with Aline Remael), The Manipulation of Audiovisual Translation (special issue of Meta) and Audiovisual Translation: Taking Stock (co-edited). He is one of the directors of the European Association for Studies in Screen Translation and, since 2010, has acted as Chief Editor of the Peter Lang series New Trends in Translation Studies. He is a member of the international research group TransMedia and a board member of LIND-Web. He was presented with the Jan Ivarsson Award for invaluable services to the field of audiovisual translation in 2014.

Contributors

Mercedes de Castro holds a BA in Physical Sciences from the Complutense University of Madrid, Spain. She has worked in telecommunications R&D for most of her professional career, in the fields of Voice over IP, IP and Ethernet communication networks, and real-time software development. Between 2008 and 2015 she was a technical coordinator and part-time lecturer at the Spanish Centre of Subtitling and Audiodescription of the Universidad Carlos III, Madrid. She has worked on research projects focusing on the accessibility of audiovisual media and her current research centres on accessibility to digital television, with a particular emphasis on live subtitling.

Bárbara C. dos Santos Carneiro holds a BA in Foreign Languages from the Federal University of Bahia, Salvador, Brazil, where she is also pursuing a master’s degree. She is a member of the research group TRAMAD (Tradução, Mídia e Audiodescrição, www.tramad.com.br) and is carrying out research on audio description for audiences with learning disabilities in three different cities in Brazil.

Eliana P. C. Franco is an associate professor at the Federal University of Bahia, Salvador, Brazil. She holds a PhD from the Catholic University of Leuven, Belgium, and has undertaken post-doctoral studies in audiovisual translation and accessibility at the Universitat Autònoma de Barcelona, Spain. She is the coordinator of the research group TRAMAD (Tradução, Mídia e Audiodescrição, www.tramad.com.br), where she has worked on various projects on the subtitling and audio description of films, plays and dance performances. Her current research focuses on audio description for audiences with learning disabilities. She has supervised a number of master’s and doctoral theses on the subject of audiovisual translation and is the co-author of the monograph Voiceover Translation: An Overview.

Adrián Fuentes-Luque is Senior Lecturer in Translation at the Universidad Pablo de Olavide in Seville, Spain, where he is also Director of the Master’s in International Communication, Translation and Interpreting. He has previously taught, among others, at the Universidad de Cádiz and the University of Granada, Spain, the University of Portsmouth, UK, and the University of Puerto Rico. His main interests include audiovisual translation, tourism translation, the translation of humour, advertising translation, and legal and institutional translation. He has worked as senior translator at the Australian Embassy in Spain as well as for a number of companies and institutions, including Cambridge University Press, the Museo Nacional del Prado and the British Council.

Agata Hołobut is a lecturer at the Institute of English Studies, Jagiellonian University, in Krakow, Poland. Her PhD research, carried out in cooperation with the Academy of Fine Arts in Krakow, explored product design and its authorial description from a cognitive perspective. Her main interests include audiovisual and literary translation, cognitive poetics and semiotics. She is a member of the editorial board of Przekładaniec: A Journal of Literary Translation.


Emilia Iglesias Fernández is Senior Lecturer in Interpreting at the University of Granada, Spain. She holds a PhD in Interpreting Training and has been involved in several state-funded research projects. She is a member of two permanent research groups, Quality Assessment in Simultaneous Interpreting (ECIS: Evaluación de la calidad en interpretación simultánea) and Media Access through Translation (AMATRA: Accesibilidad a los medios a través de la traducción), where she is studying the role of the nonverbal dimension of communication. She has authored books on the didactics of conference interpreting and on self-learning for liaison interpreting. Her contributions to the study of quality in interpreting can be found in The Encyclopedia of Applied Linguistics and The Routledge Encyclopedia of Interpreting Studies.

Arista Szu-Yu Kuo is Assistant Professor of Translation Studies at Nanyang Technological University, Singapore. She holds a PhD in subtitling from Imperial College London and has worked as a teaching fellow at the Centre for Translation Studies, University College London. Arista is also a freelance translator, interpreter and subtitler and has been involved in numerous translation projects, including finance, business and commerce, law, politics, innovation and technology, cultural and creative industries and films. Her research interests include audiovisual translation, translator training, translation quality assessment and cross-cultural communication.

Dawning Leung is a PhD student in Translation Studies at University College London. Her main research interests include audio description (AD), and media uses and gratifications. She has presented papers on these topics at international conferences. As an AD trainer, she has organized workshops, talks and demo sessions on AD at various universities and associations for the visually impaired in Hong Kong.
She has designed and taught undergraduate modules on AD for films, TV programmes and performing arts at Chu Hai College of Higher Education. She is also an audio describer for the Hong Kong Society for the Blind and for the Arts with the Disabled Association Hong Kong and a guide runner for Blind Sports Hong Kong.

Silvia Martínez Martínez is a visiting lecturer at the University of Seville and also a doctoral candidate at the University of Granada, where she attended a postgraduate course in subtitling for the deaf and the hard-of-hearing and audio description. She has taken part in the AMATRA (Media access through translation; Ref. T07-SEJ2660), TACTO, DESAM and Pra2 research projects. She is also co-founder of the translation agency STU Traductores.

Juan Martínez Pérez is a member of the research group Transmedia Catalonia at the Universitat Autònoma de Barcelona, Spain. He has worked with public and private television broadcasters in Europe as an advisor and instructor in speech recognition since 2008. He has both taught and presented research results on the topic of live subtitling at different European television broadcasters and conferences. He teaches respeaking at SDI, the Munich Institute of Language and Interpretation, and is working with Pablo Romero-Fresco, SWISS TXT and the German company Verbavoice on the development of a tool to allow the application of the NER model to live and automatic subtitles (www.nerstar.com). He is also a researcher on the EU-funded project HBB4ALL (www.hbb4all.eu).

Deise M. Medina Silveira holds a master’s in Letters and Linguistics from the Federal University of Bahia, Salvador, Brazil. She teaches at the Instituto Federal de Educação, Ciência e Tecnologia in Bahia state and acts as the vice-coordinator of the research group TRAMAD (Tradução, Mídia e Audiodescrição, www.tramad.com.br), where she is carrying out research on audio description and subtitling. Lately, her research has focused on audio description for audiences with learning disabilities. She also works as a translator and English teacher.

Regina Mendes holds an MA in Applied Linguistics. She has taught English as a Foreign Language for over 20 years and has also worked as a writer and editor of textbooks in the area of language and as an instructional designer for distance education. She teaches discourse analysis and English at the Faculdade CCAA, Rio de Janeiro, Brazil. She is particularly interested in professional practices in dubbing.
Renata Mliczak is a PhD student researching subtitling for the deaf and the hard-of-hearing in Poland at the Centre for Translation Studies (CenTraS), University College London. Her research interests lie in audiovisual translation, especially in accessibility to the media by people with sensory impairments. She is a member of the European Association for Studies in Screen Translation (ESIST) and the Audiovisual Translation Lab (AVT Lab), a research group at the University of Warsaw, Poland. She is working on a project focusing on the subtitling of screen adaptations of books, taking into account the needs and skills of students from the Institute for the Deaf in Warsaw, Poland.


Kristijan Nikolić is a senior lecturer at the University of Zagreb, Croatia, where he teaches English and American culture and translation at undergraduate and MA levels. He holds a PhD in Translation Studies from the University of Vienna, Austria. He also works as a freelance subtitler and is a member of the Executive Board of the European Association for Studies in Screen Translation (ESIST) and President of the Croatian Association of Audiovisual Translators. His research interests include interlingual subtitling and the study of culture. He has written extensively on subtitling.

Antonio Javier Chica Núñez is a member of the Department of Philology and Translation at the Universidad Pablo de Olavide, Seville, Spain, where he holds a research and teaching scholarship. He holds a PhD from the Universidad Pablo de Olavide. His thesis was titled ‘Parameters of Analysis for the Translation of Motion Images Applied to Audio Description’. He is undertaking post-doctoral studies at the same university. He is a member of the research group TRACCE (Ref. HUM-770) at the University of Granada. His main research interests lie in the fields of intersemiotic translation, audiovisual translation and accessibility.

Luis Puente Rodríguez is a telecommunications engineer from the Universidad Politécnica de Madrid, Spain. He holds an MA in Operation and Production Management from the Instituto de Empresa de Madrid, and an MA and PhD in Computer Science and Technology from the Universidad Carlos III de Madrid. He has worked on R&D projects in a wide range of fields, from RADAR to Information Society, and is Operations Deputy Director of the Centro Español del Subtitulado y la Audio Descripción, in Madrid, where he is also in charge of the Automatic Speech Processing Area and coordinates projects on the automation of accessibility to audiovisual media.

Pablo Romero-Fresco is Reader in Translation and Filmmaking at the University of Roehampton, London, UK.
He is the author of Subtitling through Speech Recognition: Respeaking (2011). As an external reviewer for Ofcom, he assesses the quality of live subtitles in the UK. He is a member of the first Focus Group on Media Accessibility at the United Nations’ ITU and of the research group TransMedia Catalonia, for which he coordinated part of the EU-funded project DTV4ALL. As a filmmaker, his first documentary, Joining the Dots, was screened during the 69th Venice Film Festival and at other festivals in the UK, Poland, Switzerland and Spain. His second documentary, Brothers and Sisters, on education in Kibera, Kenya, was broadcast by the Spanish newspaper El País along with the feature article ‘Levantarse en Kibera’ and the short film Joel.

Belén Ruiz Mezcua is a lecturer in the Department of Informatics at the University Carlos III, Madrid. She is also Director of the Spanish Centre of Captioning and Audio Description (CESyA), Co-director of the disability-EADS-Adecco Foundation UC3M and Co-director of the MA TADIS. She holds a PhD from the Universidad Politécnica de Madrid, Spain, and has worked at ICT companies such as Alcatel and Indra. She has led several national and international research projects on voice recognition, human-machine interfaces, software engineering, systems analysis, mobile communications and audiovisual accessibility. She is the author of more than 30 papers in the fields of biometrics and the application of technology to help people with disabilities. She is also the author of eight patents, four SW records and six trademark registrations.

Nicolas Sanchez is a former professional subtitler and dubbing author currently teaching English in Lyons, France. He has taught audiovisual translation at the University of Nice Sophia Antipolis, France, and wrote his doctoral thesis on the challenges of subtitling film adaptations of literary sources. His main research interests include Shakespeare and translation.

Cristina Valdés is Lecturer in English Studies at the University of Oviedo, Spain. Her main research has been carried out in the fields of advertising translation, website translation/localization and intercultural communication. She has taken part in several European projects on intercultural communication, language learning, the multilingual web and audiovisual translation, as well as national projects on the reception of the English translations of Don Quixote in the eighteenth century.
She has written numerous articles and chapters on translation, published La traducción publicitaria: comunicación y cultura (2004) and co-edited a special issue of The Translator entitled ‘Key Debates in the Translation of Advertising Material’.

Serenella Zanotti is Lecturer in English Language and Translation at Roma Tre University, Italy. She has published widely in the field of audiovisual translation, focusing on language varieties, orality markers, vague language, censorship and manipulation in both dubbing and subtitling. Her other interests include conversational narrative, translation theory and literary bilingualism. She is the author of Italian Joyce: A Journey through Language and Translation (2013) and co-editor of several volumes, including The Translator as Author (2011) and Observing Norms, Observing Usage: Lexis in Dictionaries and the Media (2014), as well as of the special issues ‘Corpus Linguistics and Audiovisual Translation’ (Perspectives: Studies in Translatology, 21:4, 2013) and ‘Translating Ethnicity’ (The European Journal of English Studies, 18:3, 2014).

1 Audiovisual Translation in a Global Context
Rocío Baños Piñero and Jorge Díaz Cintas

Today’s exposure to and interaction with audiovisual content is far greater than ever before, and this has obvious repercussions for audiovisual translation (AVT), both as a professional practice and as an academic discipline. A recent report by Ofcom (2014), the independent communications regulator in the UK, revealed that UK adults spend an average of eight hours and 41 minutes using media or communications every day, and that approximately half of that time is spent watching audiovisual content. Of course, audiovisual practices differ across the globe. Whereas a large proportion of the audiovisual content consumed in the UK and other English-speaking countries involves AVT in the form of audio description for the blind and the partially sighted or subtitling for the deaf and the hard-of-hearing (SDH), the situation is rather different in other countries where, owing to the need to make foreign programmes available to local audiences, interlingual AVT modes are more frequent. However, some trends seem not to recognize boundaries and could even be deemed universal. Far-reaching technological developments and new forms of communication have given consumers and audiences a great deal of power and autonomy, enabling them to decide and influence decisions related to the translation of audiovisual content. We are not only referring to the widespread practice of fansubbing (Díaz Cintas and Muñoz Sánchez 2006; Massidda 2015) or crowdsourced AVT (O’Hagan 2012), but also to the possibility of making an audiovisual programme go ‘viral’ in just minutes, thanks to communication tools such as Twitter and Facebook, increasing the need for AVT in different contexts, involving different audiovisual genres and into different languages.

The purpose of this volume is to offer an up-to-date survey of the present state of affairs in AVT, enabling a better understanding of the


global audiovisual landscape. One of its main aims is to gauge the pulse of the changes taking place in this thriving field by focusing not only on current research, but also on professional practices in a wide range of contexts. With these goals in mind, this volume brings together a group of scholars and academics of proven international reputation, who have been working in this field for many years in countries such as Brazil, China, Croatia, France, Italy, Poland, Singapore, Spain, Switzerland and the UK. In their contributions, theoretical issues are discussed in close connection to real translation problems and empirical data, providing useful and practical insights into the personalized input that translators inevitably give to their work. As the table of contents shows, the chapters have been grouped into three key areas, which are closely related: quality, audiences and professional practices.

Probably as a result of globalization, audiences seem to be more willing to disregard borders and language barriers, but are also growing increasingly impatient when it comes to AVT consumption. They want to enjoy their favourite videogames, TV series and the latest movies as soon as they are released. To this end, some viewers even seem prepared to sacrifice quality, a loose term which has come to have a different meaning and significance for scholars, industry members and audiences, and which cannot be mentioned without triggering long and often polemic discussions. As can be seen in the first part of this volume, Addressing Quality, scholars are not afraid of discussing quality-related issues in AVT. These discussions constitute a logical step forward in our field now that quantity is no longer the most pressing concern in most contexts and technology has provided us with appropriate tools to improve and even measure quality. In some contexts, however, the importance of high-quality AVT still seems to be underestimated.
As Adrián Fuentes-Luque’s contribution shows, this seems to be the case with some official institutions and non-profit organizations. Having identified the power of audiovisual and multimedia content as an effective communications and public relations tool, some of these institutions seem to be more concerned with asserting their multimedia presence than with the appropriateness of their audiovisual programmes. Yet, as Fuentes-Luque argues in ‘Institutional Audiovisual Translation: A (Shop) Window on the World’, international institutions using audiovisual material to communicate and increase their visibility should not be satisfied with being a virtual shop window on the world, and should instead adopt a diverse and integrating approach. The author discusses how ‘institutional audiovisual translation’ differs from traditional AVT and traditional institutional
translation and offers a descriptive overview of its characteristics through a wide range of interesting case studies of prominent institutions worldwide. Drawing on examples from the European Commission webcast portal and the United Nations webcast, among others, the paper outlines potential areas for improvement and identifies the need to establish adequate quality standards, appropriate linguistic conventions and consistent accessibility policies as far as audiovisual content is concerned.

As illustrated in the existing literature on the widely explored topic of translation quality (House 2015; O’Brien 2012, among many others), assessing it from an objective standpoint is extremely challenging, if not impossible. In AVT, the relevance and often constraining nature of the technical and semiotic dimensions, as well as the many factors influencing the translation process, hinder the evaluation of quality even further. In their contribution ‘Accuracy Rate in Live Subtitling: The NER Model’, Pablo Romero-Fresco and Juan Martínez Pérez reflect on the concept of quality assessment in live subtitling, emphasizing that, although many parameters are to be considered (e.g. subtitle speed, delay, positioning of the text, audience reception, etc.), linguistic accuracy and closeness to the original seem to be the main concern for broadcasters and regulators. Bearing this in mind, the authors put forward the NER model, which is designed to assess the accuracy of live subtitles regardless of the language and the country for which the subtitles are intended. This model addresses the deficiencies and weaknesses of previous models and is defined by the authors as viewer-centred since it distinguishes between different types of error (serious, standard and minor), at the same time as acknowledging that not all of them pose the same issues for viewers.
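The arithmetic behind the model can be illustrated with a short sketch. The NER formula and the error weights (serious 1, standard 0.5, minor 0.25) follow published accounts of the model; the function name and the sample figures below are purely illustrative and not taken from the chapter itself:

```python
# Sketch of the NER accuracy calculation for live subtitles:
# NER = (N - E - R) / N * 100, where N is the number of words in the
# subtitles, E the weighted edition errors and R the weighted
# recognition errors. Weights follow published accounts of the model;
# the sample data below is invented.

WEIGHTS = {"serious": 1.0, "standard": 0.5, "minor": 0.25}

def ner_accuracy(word_count, edition_errors, recognition_errors):
    """Return the NER accuracy rate as a percentage.

    edition_errors and recognition_errors are lists of severity labels.
    """
    e = sum(WEIGHTS[sev] for sev in edition_errors)
    r = sum(WEIGHTS[sev] for sev in recognition_errors)
    return (word_count - e - r) / word_count * 100

# A respoken segment of 200 words with one standard edition error
# and two minor recognition errors:
score = ner_accuracy(200, ["standard"], ["minor", "minor"])
print(f"{score:.2f}%")  # 99.50%
```

In published applications of the model, an accuracy rate of 98% is usually cited as the threshold of acceptable quality, which the hypothetical segment above would comfortably exceed.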
The authors illustrate the easy implementation of the model with examples of respoken live subtitles in different languages and discuss the reasons motivating its rapid success in the industry. Although the focus of this research is on respoken subtitles, Romero-Fresco and Martínez Pérez argue that the model can also be applied to subtitles produced with automatic speech recognition (ASR). In this case, they warn of the critical importance of other parameters such as speed and synchronization. The latter dimension is discussed in detail by Mercedes de Castro, Luis Puente Rodríguez and Belén Ruiz Mezcua, who in turn propose a model to correct the lack of synchronization between live subtitles, respoken and automatic, and video/audio. Their contribution, entitled ‘Synchronized Subtitles in Live Television Programmes’, describes the processes involved in live subtitling from the initial audio transcription to the
final reception, decoding and presentation on the viewer’s screen. They report on the results of three tests carried out to substantiate the significance and variability of the delays between subtitles and video/audio in live programmes. To compensate for potential delays, the proposed model relies on the assignation of time references to subtitles and their corresponding source audio fragments as well as on the application of a global buffering time for audio and video. Like Romero-Fresco and Martínez Pérez, the authors highlight the limitations of current ASR technology, concluding that such technology is ‘a long way from producing quality subtitles with negligible delay’. Quality can be perceived very differently by those involved in the production and consumption of translated audiovisual products depending on their needs and expectations. As such, translation scholars might criticize and question the appropriateness of some translated programmes deemed ‘fit for purpose’ by broadcasters and media companies. Likewise, audiences might be highly critical when appraising a subtitled product as a result of its vulnerability (Díaz Cintas 2003) or because of their very different set of expectations and understanding of quality criteria. The chapter ‘Cross-fertilization between Reception Studies in Audio Description and Interpreting Quality Assessment: The Role of the Describer’s Voice’, by Emilia Iglesias Fernández, Silvia Martínez Martínez and Antonio Javier Chica Núñez, examines the quality expectations of audio description (AD) users. The authors distinguish between the a priori preferences of users and their actual quality assessment of specific audio described scenes taken from the film The Hours and set out to compare both. This is achieved through a truly interdisciplinary approach, drawing on the methodology and findings from experimental studies in interpreting quality assessment. 
The focus of the reception study presented in this chapter is on the perception of the nonverbal qualities of the audio describer’s voice, in particular on vocal sonority qualities and its emotional correlates. With this emphasis, the aim of the authors is to question the assumption that the describer’s voice best serves the interests of the viewers if it is kept neutral, as is often argued in official guidelines and literature on AD. Their findings show that quality expectations do not match actual quality assessment in situated contexts and that users seem to favour non-neutral vocal quality in AD. The results of this study reveal other factors to be considered in the quality assessment provided by viewers, namely their previous exposure to the practices being evaluated and the general availability of such practices. Quality is closely related to the audience profile, an aspect that is discussed in detail in the second part of this volume entitled Targeting
the Audience. This part contains papers in which the heterogeneous profile of audiences and the challenges of providing solutions to satisfy the needs of such a diverse target group are emphasized. Also focusing on AD, the opening chapter of this second part has been co-authored by Eliana P. C. Franco, Deise M. Medina Silveira and Bárbara C. dos Santos Carneiro under the title ‘Audio Describing for an Audience with Learning Disabilities in Brazil: A Pilot Study’. As the title suggests, the target audience in this study is made up of viewers with learning disabilities. The authors remind the reader that, although AD is understood as an accessibility service targeting people with visual and cognitive impairment, research carried out to date does not seem to include those with learning disabilities as potential beneficiaries of this service. To bridge this gap, the authors designed a pilot study to observe whether current AD practices, aimed mainly at visually impaired audiences, are also able to meet the needs of people with learning disabilities. After showing participants a short film, both with and without AD, and asking them to complete a questionnaire, the authors conclude that further information would be needed in order to make this film fully accessible to this specific audience. The pilot study also resulted in interesting findings as regards the different degrees of learning disability and the inability of the available official classifications to define and categorize them adequately.

In her paper ‘Analysing Redubs: Motives, Agents, and Audience Response’, Serenella Zanotti also discusses audience perceptions and reactions, but in this case redubs in Italy constitute the object of study. Although some scholars have recently paid attention to retranslation in AVT (Chaume 2007), it is a fascinating area that remains relatively unexplored.
The corpus compiled by Zanotti for this study shows that redubbing is a common practice, with some films having been dubbed on as many as three different occasions. The aim of the analysis is to test the so-called retranslation hypothesis and to examine the process and effects of redubbing, the type of changes implemented in redubs, and the translational norms at work. Although it is very difficult to ascertain the motives behind some of these changes, decisions and norms and to determine the role played by all the various agents, the paper provides insights into how these could have influenced the final product. As regards audience response, this contribution claims that the practice of redubbing is often despised by viewers due to the poor quality of some retranslations or to a change of dubbing actors from one version to another. In this regard, audience preferences can be decisive when it comes to relegating a new redubbing, ‘compelling
distributors to revisit their marketing strategies’. Redubbing emerges as the result of shifting needs in the target audience, but technological developments allow different perceptions of a single product. As Zanotti states, more DVD and Blu-ray editions now include both the old and the new dubbed versions, giving viewers a choice, an aspect that is explored further in the next contribution. In ‘Subtitling in the Era of the Blu-ray’, Nicolas Sanchez reflects precisely on the customization of subtitles by viewers and customer satisfaction, which has become a priority for the audiovisual industry. He discusses the implementation of ‘remote subtitling’ in France, a technological development allowing viewers to decide the size and position of subtitles on screen in Blu-ray productions. In addition to examining the limitations and potential of this technology, the author asks whether such decisions should be left to viewers and whether audiences have the knowledge necessary to judge the best viewing conditions in subtitling. What would be the point of following specific subtitling conventions and guidelines if viewers are able to disregard these by pressing a couple of buttons on their remote control? Sanchez argues that, whereas there are many questions to consider as regards the suitability and success of this technology in a wider context, the potential of this development and the principles behind it should not be underestimated. Customization seems to be valued highly by French consumers of Blu-ray editions but, as Sanchez suggests, its importance is even greater considering the wide range of devices where audiovisual material is consumed nowadays, from large HD computer screens to tablets or smart phones. In this regard, a readable and accessible audiovisual programme would quickly become unusable and unreadable if it were not repurposed appropriately for its broadcast in a different medium. 
Readability, usability, accessibility and standards are also discussed by Cristina Valdés, but this time with a focus on the World Wide Web. Her chapter, entitled ‘The MultilingualWeb (MLW) Project: A Collaborative Approach and a Challenge for Translation Studies’, emphasizes the need for best practices and quality standards in the design, management and localization of multilingual web content. Valdés outlines the contribution of the EU-funded MultilingualWeb project and maintains that multilingual websites pose a challenge to Translation Studies from a quantitative as well as a qualitative point of view. Regarding the former, the challenges lie in the vast amount of web content that requires translation and in the recent role of users as producers, and also translators, of such content. As for the latter, Valdés posits the idea that
interdisciplinary collaboration and standards represent the way forward as far as enhancing the quality, accessibility and usability of the multilingual web is concerned. In line with the rest of the contributions in this section, this chapter argues that user and cultural variables are key aspects, and that reaching each target audience adequately is essential to make the World Wide Web fully international. The third part of this volume, Mapping Professional Practices, examines professional aspects and the working conditions of audiovisual translators and provides an insight into particular cultures and contexts around the globe. Arista Szu-Yu Kuo’s contribution opens this section with an extremely detailed overview of the subtitling industry worldwide from the point of view of subtitlers. ‘Professional Realities of the Subtitling Industry: The Subtitlers’ Perspective’ presents the findings of a survey on subtitlers’ working conditions carried out in an attempt to shed light on professional practices globally. Although the results seem to be more representative of European countries due to the responses gathered, they provide an initial and very valuable survey of working conditions in other continents. The discussion touches on key aspects such as deadlines, rates, use of software and negotiation power. One of the main findings of Kuo’s study is that it is very difficult to draw general conclusions concerning these aspects owing to the broad differences in practices between countries as well as within the same country. As far as subtitling rates are concerned, the author interprets the substantial disparity reported by respondents as a prevalent global phenomenon in the subtitling industry. One of the trends emerging from the survey is that the vulnerability of subtitlers seems to have increased in recent years and that divergences between respondents seem to be less marked in countries with strong subtitlers’ associations and unions. 
This chapter also explores themes recurrent throughout the volume, such as the part played by research findings in our understanding of the factors influencing subtitling quality in a professional environment. In his contribution, Kristijan Nikolić highlights the use of templates as one of these factors. In ‘The Pros and Cons of Using Templates in Subtitling’, Nikolić examines the reasons behind the use of templates in the industry and provides an account of the varied terminology used to refer to this approach to subtitling. The advantages and disadvantages are considered from a subtitler’s perspective, as well as that of subtitling companies, and the discussion brings to the table issues such as quality, subtitling rates and client awareness of the specificities of subtitling and the discrepancies between languages and cultures. The author illustrates the limitations of some subtitling software programmes through
a number of interesting examples, as well as the motives behind the objections to translating from templates made by some experienced subtitlers.

In the next contribution from this section, ‘Signing and Subtitling on Polish Television: A Case of (In)accessibility’, Renata Mliczak addresses the issue of accessible audiovisual programmes for the deaf and the hard-of-hearing on Polish television. The discussion revolves around the four groups influencing the provision of SDH and signing on Polish television: the audience, the actual providers, the promoters and the legislators. Although each of these groups is examined in detail, more attention is paid to the viewers since they represent the main reason for the services under study. The author concludes that, despite the fact that SDH and signing have been provided on Polish television for some time, these accessibility services are far from satisfactory and audiences tend to look for other options including the Internet, the DVD market, cinemas and theatres. The reflections on the use of an artificially created signing system, which most Polish deaf people find difficult to understand, are extremely interesting and contribute to our understanding of the AVT landscape in Poland.

This understanding is complemented by Agata Hołobut’s contribution, entitled ‘Voiceover as Spoken Discourse’, which investigates the use of one of the most frequently used AVT modes in Poland. The emphasis is placed here on the techniques used to translate the conversational features of audiovisual dialogue. This is done by comparing the subtitled and voiced-over versions of two episodes of the popular series Desperate Housewives. The analysis shows significant differences when dealing with features of orality in these two AVT modes and reveals how translators take advantage of the multimodality of audiovisual texts in different ways.
In line with the global approach taken in the present volume, this chapter establishes parallels with other AVT modes and contexts and argues that, despite being exclusive to Eastern Europe, the voiceover translation of fiction might also be of interest further afield. Mliczak’s and Hołobut’s contributions exemplify the prominence that television programmes have recently gained in studies on AVT. The cinema has traditionally been considered more prestigious than television and, therefore, more worthy of study. However, many experts on media studies nowadays would put TV series such as The Wire, Breaking Bad or Homeland on a par with high-quality cinema productions, arguing that their study is as relevant, if not more so. In her contribution, Regina Mendes also looks into the world of television fiction, this time from the point of view of dubbing from English to Brazilian Portuguese.


‘Dubbing Directors and Dubbing Actors: Co-authors of Translation for Dubbing’ examines to what extent the text provided by translators is changed by dubbing directors and voice talents during the actual recording. The author concludes that their participation in the translation process is substantial and argues that they should be considered as co-authors of the translated script. Ascertaining the degree of involvement of stakeholders in AVT is complicated and accounts of what really happens in the dubbing studio are invaluable. A similar approach is taken in Dawning Leung’s contribution, ‘Audio Description in Hong Kong’, providing first-hand information on the rapid development of this AVT mode in Hong Kong. An overview of the types of audio description services and training provided in Hong Kong is offered to this end. With a wide range of interesting examples and an up-to-date account of innovative academic and professional initiatives, Leung reflects on how the provision of AD services has been promoted by local non-governmental organizations and on how the role of audio describers has evolved in some contexts, where they may also act as tourist guides or even as sports trainers.

As the reader will realize, some topics are recurrent throughout the volume, although they are examined from different angles, leading to thought-provoking discussions showing the great diversity of the global audiovisual landscape. In this volume, existing guidelines, technology, old and new practices and models are questioned and scrutinized. Scholars openly plead for more interdisciplinarity as well as for adequate quality standards, user-friendly models and guidelines in order to improve the current audiovisual product.
Yet, at the same time, the urgent need to consider the requirements of a demanding, active and heterogeneous audience is also emphasized, along with the heterogeneity and complexity of the professional factors influencing the working conditions of audiovisual translators worldwide.

References

Chaume, Frederic. 2007. ‘La retraducción de textos audiovisuales: razones y repercusiones traductológicas’. In Juan Jesús Zaro Vera and Francisco Ruiz Noguera (eds) Retraducir: Una nueva mirada. La retraducción de textos literarios y audiovisuales (pp. 49–63). Málaga: Miguel Gómez Ediciones.
Díaz Cintas, Jorge. 2003. Teoría y práctica de la subtitulación: Inglés-español. Barcelona: Ariel.
Díaz Cintas, Jorge and Pablo Muñoz Sánchez. 2006. ‘Fansubs: audiovisual translation in an amateur environment’. The Journal of Specialised Translation 6: 37–52. www.jostrans.org/issue06/art_diaz_munoz.pdf.

House, Juliane. 2015. Translation Quality Assessment: Past and Present. Oxon/New York: Routledge.
Massidda, Serenella. 2015. Audiovisual Translation in the Digital Age: The Italian Fansubbing Phenomenon. Basingstoke: Palgrave Macmillan.
O’Brien, Sharon. 2012. ‘Towards a dynamic quality evaluation model for translation’. The Journal of Specialised Translation 17: 55–77.
O’Hagan, Minako. 2012. ‘From fan translation to crowdsourcing: consequences of Web 2.0 user empowerment in audiovisual translation’. In Aline Remael, Pilar Orero and Mary Carroll (eds) Audiovisual Translation and Media Accessibility at the Crossroads (pp. 25–41). Amsterdam: Rodopi.
Ofcom. 2014. The Communications Market Report. http://stakeholders.ofcom.org.uk/binaries/research/cmr/cmr14/2014_UK_CMR.pdf.

Part I Addressing Quality

2 Institutional Audiovisual Translation: A (Shop) Window on the World

Adrián Fuentes-Luque

2.1 Introduction

While audiovisual translation (AVT) has flourished in recent years, both in technological and academic circles, to date the focus has mainly been on the use, analysis, development and translation of commercial products, namely films, television series, documentaries, video games and, to a lesser extent, advertising and promotional material. Official institutions and non-profit organizations have also realized the need to use multimedia and audiovisual technologies as an effective public relations and image-building tool. This is at a time of international crisis and economic downturn (which is said to have started around 2008) when social, economic, environmental and human affairs have generated interest worldwide. The role of translation in institutional settings has been studied from different points of view (Koskinen 2008; Mossop 1988, 1990). Research on institutional translation mainly focuses on theoretical approaches and discursive, ideological and ethical practices. However, although such research is potentially useful, the use of translation – or lack of it – in audiovisual contexts by official bodies at both national and international level has received scant attention.

2.2 Institutional audiovisual translation: an overview

Initially, 17 international organizations (including the World Health Organization (WHO), the World Meteorological Organization (WMO), the United Nations Educational, Scientific and Cultural Organization (UNESCO), the International Telecommunication Union (ITU), the European Commission (EC) and the United Nations (UN), among others)
were contacted (three times in the case of the EC and the UN) in order to obtain information regarding their aims, criteria and procedures when using and managing audiovisual and multimedia content that has been translated. Since none of them responded, this paper contains a descriptive overview of a selection of examples of institutional AVT from some of the main international organizations and bodies. It is often believed that institutional translation is restricted purely to international bodies and organizations. However, it is also carried out at, or for, regional, national and international bodies (parliaments, congresses, senates, tourist boards, trade offices, diplomatic missions, etc.), non-governmental organizations (NGOs), public services (especially health and justice), political parties and educational institutions, to name but a few. In principle, as public (or public-oriented) bodies, such institutions have a duty to make all the information relating to their activities public and accessible in response to their citizens’ right to know and understand what is happening within them. In terms of distribution channels, the production and translation of printed material have undergone a considerable change over the past few years and have now been surpassed – and to a great extent replaced – by audiovisual or multimedia platforms. Corporate responsibility issues, the need to show a more environmentally friendly image, and the immediacy and widespread advantages of new media have fostered the shift from printed material (which has circulation, language and accessibility restrictions) to material in audiovisual and multimedia formats.
In this sense, the main platforms include: ad-hoc television channels (such as the European Union’s ‘Europe by Satellite’ and C-SPAN in the United States), dedicated portals and websites with live and archived material (such as the European Parliament’s ‘EuroparlTV’) and social media (the UN webcast, which can be accessed at http://webtv.un.org, as well as on Facebook, YouTube and Twitter). Audiovisual and multimedia platforms provide an attractive, immediate, user-friendly, readily updatable way of delivering information. I agree with Martín Ruano (2009) that institutional translation all too often seems to be content with what Koskinen (2000: 51) calls ‘existential equivalences’, underlining the actual existence or presence of translation, rather than how and to what extent it is carried out. In this sense, it is paradoxical that, in the diverse supranational contexts that institutions deal with, uniformity is desired as well as sought, preferably in the form of general, universal equivalents, as if the ubiquitous presence of such terms would turn any potential barriers or distances into universality and uniformity.


The scope, aims, channels and even the end-users of this type of translation appear not to be clearly defined. The customer/user/receiver is no longer an individual, or even a group, but rather a sometimes undefined, international, multicultural, multilingual collective, where culture is diluted in favour of a ‘common culture/knowledge’, which is actually a mere political and legal construct. This factor seems to be sometimes overlooked by institutions when faced with the transfer of contents to be distributed on audiovisual platforms, which often offer language-restricted versions of the material. Concepts and situations might even appear distorted to certain users, who may find the information or audiovisual material, albeit linguistically correct in their own respective languages (if such is the case), alien to their own cultural context. Institutional translation is one of the best examples in which interlinguistic equivalence does not imply content equivalence: ‘common’, universal terms such as ‘women’s rights’, ‘fair trial’ or ‘public health’ have very different definitions and represent different realities depending on the country in question and even within the same alleged common region (the EU, for example). As Martín Ruano (2009) points out, some authors – Appiah (2000) and Hermans (2007), for example – adhere to Geertz’s (1973: 3–30) anthropological concept, favouring ‘dense’ mediating, explicative translation practices that encourage intercultural dialogue and knowledge through intra- or extra-textual gloss. Just as societies and communities evolve through translation, as Koskinen (2008: 3) points out, institutions not only produce translations, but in doing so, they translate and establish themselves, and reaffirm their presence and power. This is surely enhanced by the use of audiovisual and multimedia content, an extremely penetrating, far-reaching and powerful tool for image-building and projected identity shaping.
Institutions, particularly supranational ones, often seem to be vested with a halo of neutrality. I concur with Martín Ruano (2009) that institutional translation is sooner or later forced to take sides for or against a particular view of a given people, a social group, or a particular set of values and principles, using words and images as a driving force. It could be argued, then, that the often extremely literal character of many institutional translations – including those in an audiovisual format – would hinder intercultural dialogue rather than foster image-building. This would, therefore, call for the serious planning and selection of material, the direct and integral participation of translators in the process (rather than simply passing the scripts to be translated on to in-house translators or even outsourcing them) and the establishment
of a check-list of clear, down-to-earth technical, linguistic, cultural, social and ethical norms. Unlike the usual practice in the translation of entertainment products (films, TV series, documentaries, etc.), AVT, in this sense, would not be a ‘one-stage process’ limited to interlinguistic/intercultural transfer, but rather a whole multistage, multilevel, interdisciplinary, creative process. I would also like to refer to another problem here, one that calls for an urgent and serious solution. Unfortunately, rather than considering this type of translation as ‘institutional translation’, with its own requirements, problems and particularities, most institutions themselves (particularly international bodies) see it just as a ‘multilingual press office’ (and act accordingly), inevitably putting an immediate social, cultural and linguistic distance between themselves and their users. International institutions, especially those using audiovisual and multimedia platforms to ensure visibility and deliver social, economic, political, cultural, scientific and environmental content, should not be satisfied with being a virtual shop window on the world. Instead of displaying a limited, restricted, one-way, sometimes inaccessible array of audiovisual material, they should do their utmost to shift from the shop-like, one-for-all perspective to an integrating, truly diverse open house. In terms of function, institutional AVT differs from ‘traditional’ or ‘general’ AVT mostly aimed at the translation of cinematic material for entertainment purposes (feature films, TV series, documentaries, etc.).
Function-wise, institutional AVT would be closer to audiovisual advertising, whose main function is usually persuasive, although it can also have additional intentions (Fuentes Luque 2010: 44), namely to provide information on a particular topic, to raise awareness concerning a specific problem (as in the case of NGOs or health-related bodies, for example) as well as to publicize events or activities organized by such institutions. However, institutional AVT is almost entirely devoid of the creativity that is usually found in AVT for advertising and film/TV. Translations tend to be very literal, which could mean a shift from a creative component to an almost word-for-word rendering (perhaps because of the nature of the source texts, mainly legislative, policy and administrative documents that are complex and highly formal both in form and content, and the ‘institutional’ character of such texts and their functions). In this sense, text reduction (be it partial, in the form of a condensation of the source text; or total, by deleting or omitting lexical items (Díaz Cintas and Remael 2007: 146)) is practically non-existent in institutional subtitling. Other reasons for literal translation could be consistent with the multilingual dimension of audiovisual

Institutional AVT: A (Shop) Window on the World 17

material in pan-national versus national contexts, the socio-cultural perception of a given institution by different audiences and the projected corporate image of some of these organizations.

2.3 Case studies: EuroparlTV, European Commission webcast portal, Europe by Satellite and United Nations webcast

This is, to my knowledge, the first study on the use of audiovisual material or audiovisual platforms in institutional settings and the role of translation. After compiling a selection of some 80 examples of translated audiovisual material from different regional, national and international bodies (parliaments, tourist boards, universities, international organizations, NGOs, political parties and educational institutions, for example), a sample of some of the most representative is analysed here. The main aim is to offer a descriptive starting point for how such institutions use audiovisual material for information, promotion and image-building purposes, as well as to analyse the different translation capabilities included on the institutions’ TV channels, websites or portals and social media. The focus will be on the conventions and technical aspects followed and applied. In any case, as has been noted above, this is a starting point for further and wider research on the topic.

2.3.1 EuroparlTV

The European Parliament’s dedicated online television channel is arguably the most complete, intuitive and user-friendly platform of all the institutions under analysis. The site www.europarltv.europa.eu offers access to live and archived audiovisual material on current and public affairs. Some videos are available in a number of official EU languages, although most of them are offered with an English or French soundtrack only. Not all videos (such as recordings of meetings and debates) are translated. When available, subtitles can be turned on and off. Optional subtitles are available in most of the official EU languages (Figure 2.1). However, this capability is usually available only for the latest videos and seldom for archived videos. Videos cannot currently be downloaded, although certain events, such as plenary and committee sessions, are expected to be made available for downloading, including both speeches and debates. In the meantime, videos can be readily linked or shared on social networks. Most of the site’s menu and interface is accessible in 23 languages, that is, all the official EU languages at the time of writing, except Irish. Subtitle language


Adrián Fuentes-Luque

Figure 2.1 EuroparlTV: subtitle activation and language selection

switching is very fast and flicker-free. In general, subtitles seem to respect cohesion, punctuation and segmentation conventions in the different languages on offer.

2.3.2 European Commission webcast portal

The EC webcast portal (http://webcast.ec.europa.eu) provides access to live internet broadcasts and video recordings of conferences in Brussels and elsewhere in Europe, as well as EC documentary videos on various policies. According to the ‘About this site’ section, ‘the portal features a user-friendly interface integrating video, audio, slides, speaker information, interactive chat, polling mechanism and other useful information on a single screen’. For this study, seven people (both sexes, aged 21–59, and with varying levels of education) in different locations throughout Europe were asked to test all the advertised capabilities. Many of them were either unavailable or seemed to be faulty: video downloading was not available in 60 per cent of the attempts (six out of ten attempts), video streaming failed even with a high-speed internet connection in 50 per cent of the attempts (five out of ten attempts) and language audio selection did not work 40 per cent of the time (four out of ten attempts). The site also informs us that ‘Conferences, Workshops, Programme Information Days, and other events are combined, at times, with


interactive chats or polls enabling online participants to ask questions at the actual event and to discuss related topics amongst themselves and with EC staff. This allows an easy to use, cost-effective and environmentally-friendly interactive communication’. However, on three different occasions, testers confirmed that the capabilities on offer were unavailable. The portal is translated into all the official languages of the EU. Upon accessing a specific video, a new window pops up. Written content is provided in the original language used during the event. The audio is available in all languages for which simultaneous interpretation was provided in the conference room: users can select the language from the language menu below the picture (Figure 2.2). Switching languages is slow, taking an average of 60 to 90 seconds, with the occasional freezing of the image. The revoiced version corresponds to the conference interpreting recording. Curiously enough, the site also includes a disclaimer in small print stating that ‘[t]he interpretation of the conferences available live or on demand via this portal is intended to facilitate communication and does not constitute an authentic record of the proceedings. Only the original speech is authentic. The EC and its interpretation service (DG Interpretation) cannot be held liable for damage or loss of any kind

Figure 2.2 EC webcast portal: audio languages


whatsoever arising out of the use of the interpretation into any other language’. This would perhaps be acceptable for a commercial or a personal website, but not for an institutional website of an international, prestigious, high-impact organization such as the EC. Certain videos are subtitled, with different space restrictions and language choices depending on the audience targeted. For example, the European Day of Languages video is offered as an abridged three-language version (French, German and Italian) and as a five-minute version featuring subtitles in 24 languages, including Russian, Norwegian and Turkish. Videos (whether revoiced or subtitled) cannot be downloaded or shared. No cohesion, punctuation and segmentation conventions are followed and the most basic rules of subtitling are not respected, making for extremely lengthy one- or two-liners of up to 56 characters per line (Figure 2.3). Sometimes, even three-line subtitles are used (Figure 2.4).

Figure 2.3 EC webcast portal: lengthy, unconventional subtitles1


Figure 2.4 EC webcast portal: three-line subtitles2
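Violations like these can be detected mechanically. As a minimal illustration (not any tool used by the EC), the Python sketch below flags subtitles that break two commonly cited conventions: a maximum of two lines and a maximum number of characters per line. The 42-character limit is an assumption made for the example; broadcasters’ guidelines vary, typically allowing somewhere between about 37 and 42 characters per line.

```python
MAX_LINES = 2
MAX_CHARS_PER_LINE = 42  # assumed threshold; conventions vary by broadcaster


def check_subtitle(subtitle: str) -> list[str]:
    """Return a list of convention violations for one subtitle,
    given as a string with '\n' separating the on-screen lines."""
    lines = subtitle.split("\n")
    problems = []
    if len(lines) > MAX_LINES:
        problems.append(f"{len(lines)} lines (max {MAX_LINES})")
    for i, line in enumerate(lines, start=1):
        if len(line) > MAX_CHARS_PER_LINE:
            problems.append(
                f"line {i} has {len(line)} characters (max {MAX_CHARS_PER_LINE})"
            )
    return problems


# A 56-character one-liner, like those found on the EC portal:
print(check_subtitle("x" * 56))          # ['line 1 has 56 characters (max 42)']
# A three-line subtitle:
print(check_subtitle("one\ntwo\nthree"))  # ['3 lines (max 2)']
```

A checker of this kind only covers the spatial conventions mentioned here; segmentation, punctuation and display-rate checks would need the subtitle timings as well.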

We might therefore be justified in imagining that automatic subtitling is being used here. However, EC staff confirmed that this kind of technology is not yet in use for institutional AVT at the EC. Subtitling seems to be carried out using very rudimentary subtitling software created by EC developers. No further information has been uncovered or disclosed regarding this. We might be led to suppose, however, that the EC is using Statistical Machine Translation to subtitle certain videos. This system would be based on hundreds of thousands of previous translations being fed into a translation engine (such as Google Translate or Moses) in order to train it. In this way, a translation supermemory would work in conjunction with machine translation, creating the type of subtitles that can be found on certain videos on this site. In the case of formal, ‘neutral’ speeches like those shown in Figures 2.3–2.5, given by Androulla Vassiliou, the EU Commissioner for


Figure 2.5 EC webcast portal: normalization3

Education, Culture, Multilingualism and Youth, the language is simple and controlled (in the sense of Somers’ (2003) definition of ‘controlled English’). In terms of image, users are also faced with an issue of normalization vs. subversion. The static, rather tight, formal posture of the speaker; the old-fashioned, almost Soviet-like pose and background (blue, in accordance with the EU colours); the medium camera shot and absence of camera movements; the sober attire and setting, are all coherent with a normalization policy. In terms of language use, the speaker uses a very formal register, with an unexpressive, monotonous intonation. The result, in any case, is the same normalized, uncritical, merely expository discourse and metadiscourse that is present in both printed and audiovisual texts, one that is devoid of creativity and content. The EU also has a dedicated channel on YouTube (called EUTube and available at www.youtube.com/eutube) as well as accounts on Facebook and Twitter. The EUTube channel contains a series of EU-policies-related


promotional videos. These are well-crafted, original, sometimes humorous videos, narrated (normally in English) with no dialogue and a closing written slogan also in English.4 Although the website’s language menu allows for a choice between English, French and German, no further revoicing or subtitling options are offered.

2.3.3 Europe by Satellite (EbS)

EbS is the European Union’s TV information service. It was launched in 1995 and provides EU-related audiovisual material via satellite (multiplexed into a package that can be received on the Astra 4A satellite) and via the Internet to media professionals. The European Commission’s Audiovisual Services, responsible for the EbS service, have recently launched a version of the service for smart mobile telephones that allows both viewing and listening capabilities for live audio and video, videos on demand, press briefings and press conferences. The mobile phone service is not a mobile phone application, but can be accessed through a mobile phone internet address (http://ec.europa.eu/avservices/m). The site includes dedicated icons to view or listen to EbS in English or in the original language (Figure 2.6).

Figure 2.6 EbS: mobile phone version


EbS’s programming consists of a mix of live events, news items and stockshots on a variety of EU policies and issues. It is transmitted using a non-encrypted signal, which can be received using readily available equipment. It can be viewed in Europe, North Africa, the Middle East, the East coast of the USA, Latin America and the Caribbean. Live events are generally covered in the original language, with simultaneous interpretation into all Community languages being provided when available. Occasionally, other languages are provided through additional audio channels as and when required. The original (international) audio sound is transmitted when a particular language is not available. Certain more technical transmissions, such as the European Commission’s midday briefing, are only available in the original, English and French language channels. News footage and video stockshot material are broadcast with natural sound only.

2.3.4 United Nations webcast

The UN webcast service (http://webtv.un.org/) is very similar in purpose and functions to the EC webcast portal. It features seven different channels offering live and prerecorded UN-related content (Figure 2.7). Videos can be selected on demand and are accessible in one of the six official languages (mostly English, with some samples including one or more of the remaining official languages,5 namely, Arabic, Chinese, French, Russian and Spanish). No subtitling is available in any language, except for some occasional onscreen intralingual subtitles in the case of videos with audio which is hard to understand because of the accent or pronunciation of the speaker (Figure 2.7). Language and video switching is sometimes very slow and faulty. Like the EU, the UN also has a dedicated channel on YouTube and accounts on Facebook and Twitter. The UN YouTube channel (www.youtube.com/user/unitednations) contains a number of UN-related videos. These are basically the same videos as those found on the UN webcast service. There are no revoicing or subtitling capabilities as such. However, YouTube’s auto-captioning (occasionally available and, apparently, only in English) and automatic subtitle translation features can be used (Figure 2.8). In this case, the subtitles are of a very low quality, mostly incoherent (probably the result of Google Translate or Google speech recognition technology) and do not follow any subtitling conventions whatsoever, other than keeping to a maximum of two lines. This results in an unintelligible translation, and one that is almost impossible to follow both in terms of comprehension and pace.


Figure 2.7 UN webcast

Figure 2.8 UN YouTube channel: YouTube’s auto-captioning feature

2.4 Conclusions

This chapter has provided a descriptive overview of some of the current main practices in institutional AVT, analysing, through case studies of prominent institutions worldwide, which AVT modes are being used and how, as well as pointing out some potential areas for improvement


(including, but not limited to, linguistic, technical, political and design issues). It is clear that institutions have realized that using audiovisual and multimedia platforms is paramount for their promotion and for the identity and image-building of the institution itself and of the people(s) they represent. The window on the world is no longer a one-way observation post, but a true multidirectional, multisociety shop window through which institutions interact with the rest of the world. However, it is crucial that such institutions (in necessary cooperation with linguistic and cultural experts) work together in establishing adequate technical standards for the different AVT modes and platforms in use, and develop clear and far-reaching AVT criteria, quality and accessibility standards, as well as policies to ensure the linguistic and cultural diversity of such pan-national and national contexts. Far from representing a mere tick on a checklist, an institutional or corporate website or a dedicated TV or internet channel has enormous potential that can, and must, be adequately and carefully crafted and maintained. It is, therefore, desirable that institutional audiovisual material, whether live or archived, should be offered in as many languages as there are target language communities covered by the institution in question or, at least, in the most widely spoken languages. In any case, fast-access, intuitive, user-friendly translation capabilities (different audio languages and subtitles) should always be at the top of the agenda in designing and programming audiovisual material in institutional contexts.
Compiling varied corpora of audiovisual texts for, and from, different audiovisual and multimedia platforms in institutional contexts is highly desirable, as is promoting and extending research based on empirical studies (observation, professional and user/consumer practices, experience of users, practitioners and reception studies, for example). In addition, interdisciplinarity would surely yield valuable and interesting findings for policy-makers at different levels, intercultural and linguistic mediators and translation trainers and experts.

Notes

1. Source: http://webcast.ec.europa.eu/subtitle/portal/player/index.html?id=32.
2. Source: http://webcast.ec.europa.eu/subtitle/portal/player/index.html?id=32.
3. Source: http://webcast.ec.europa.eu/subtitle/portal/player/index.html?id=34.
4. See, for instance, www.youtube.com/watch?v=MYt_FRwNu7w.
5. See, for instance, http://webtv.un.org/news-features/index.php/watch/globaltrafficking-waging-war-on-criminal-cargo/3431505092001.


References

Appiah, Kwame Anthony. 1993/2000. ‘Thick translation’. In Lawrence Venuti (ed.) The Translation Studies Reader (pp. 417–29). London: Routledge.
Díaz Cintas, Jorge and Aline Remael. 2007. Audiovisual Translation: Subtitling. Manchester: St. Jerome.
Fuentes Luque, Adrián. 2010. ‘Audiovisual advertising: “Don’t adapt to the text, be the text”’. In Jorge Díaz Cintas, Anna Matamala and Josélia Neves (eds) New Insights into Audiovisual Translation and Media Accessibility (pp. 41–50). Amsterdam: Rodopi.
Geertz, Clifford. 1973. The Interpretation of Cultures. New York: Basic Books.
Hermans, Theo. 2007. ‘Los estudios interculturales de traducción como traducción densa’. In Emilio Ortega Arjonilla (ed.) El giro cultural de la traducción (pp. 119–39). Bern: Peter Lang.
Koskinen, Kaisa. 2000. ‘Institutional illusions: translating in the EU Commission’. The Translator 6 (1): 49–65.
Koskinen, Kaisa. 2008. Translating Institutions: An Ethnographic Study of EU Translation. Manchester: St. Jerome.
Martín Ruano, M. Rosario. 2009. ‘Teorías y utopías: hacia nuevos vocabularios y prácticas de la traducción institucional’. http://ec.europa.eu/translation/bulletins/puntoycoma/117/pyc11722_es.htm.
Mossop, Brian. 1988. ‘Translating institutions: a missing factor in translation theory’. TTR: Traduction, Terminologie, Rédaction 1 (2): 65–71.
Mossop, Brian. 1990. ‘Translating institutions and “idiomatic” translation’. Meta 35 (2): 342–55.
Somers, Harold. 2003. Computers and Translation. Amsterdam: John Benjamins.

3 Accuracy Rate in Live Subtitling: The NER Model

Pablo Romero-Fresco and Juan Martínez Pérez

3.1 Introduction

Over the past few years, the focus of audiovisual translation (AVT) seems to have shifted from quantity to quality. As is demonstrated by international conferences, such as Media for All 3 in 2009 (www.mediaforall.eu/all3) and Media for All 4 in 2011 (www.imperial.ac.uk/humanities/translationgroup/mediaforall4), this shift applies to industry as well as to academia. In the case of live subtitling, and more specifically respeaking, the most common method used to evaluate the quality of subtitles produced in real time consists of assessing their accuracy. Needless to say, where quality is concerned there are also a number of other features to be considered, such as delay, positioning, character identification and speed, as well as factors relating to their reception by viewers (opinion, comprehension, perception). These issues have all been discussed by Romero-Fresco (2011) with particular reference to the UK market. Yet, what concerns broadcasters, regulators such as Ofcom and subtitling companies is the accuracy of live subtitles and it is this that constitutes the main focus of this chapter. Up until now, subtitling companies have tackled the issue of quality in very different ways. In some companies, trainers are in charge of error calculation, whereas in others this is done by the subtitlers themselves. The calculation methods vary greatly, some being much more ‘generous’ than others. In addition, the approach to live subtitling is different depending on the country in question. Whereas, in the UK, live subtitles are nearly always verbatim, in other countries, such as Germany or Switzerland, they are variously edited. This heterogeneous picture raises several questions, namely, are the accuracy rates obtained by different companies at all comparable? Do the methods used take

Accuracy Rate in Live Subtitling: The NER Model 29

into account differences between languages? Do they only provide a final score or do they also give an indication of the improvements necessary to obtain better results? The aim of this chapter is to present the NER model, a new model for assessing the accuracy of live subtitles in different countries and in different languages by analysing the extent to which errors affect the coherence of the subtitled text or modify its content. An emphasis will be placed on respoken subtitles, the type in most common use nowadays. The model is also applicable to automatic subtitles, which, given the rapid development of speech recognition technology, are likely to become more widespread in the near future.1 Following an introduction outlining the basic requirements that such a model might be expected to fulfil, an overview of the traditional methods used in what is known as word error rate (WER) is given. This is then followed by examples of the different types of error assessed by the NER model in English, Spanish, Italian and German with, finally, an explanation of the application of the NER model to real-life subtitles.

3.2 Basic requirements and traditional methods

Before presenting the NER model, it seemed important to outline the basic requirements of such a model in order to ensure its success in academia as well as in the industry as a whole. It also seemed essential to assess traditional methods so as to illustrate their deficiencies and, consequently, the areas that ought to be targeted by the new model.

3.2.1 Basic requirements

Models to assess the quality of live subtitling should meet the following basic requirements. They should:

1. Be functional and easy to apply. Although the use of multiple variables might conceivably be helpful for the researcher, respeakers and trainers should also be able to apply the model on a daily basis.
2. Include the tried and tested principles of WER calculations from speech recognition theory.
3. Take into account the different degrees of editing entailed/required by different programmes. For example, sports commentating is often heavily edited, whereas subtitles for news programmes, especially in the UK, are reproduced almost verbatim (Eugeni 2009).
4. Take into account the fact that the approach to live subtitling may differ from country to country, thus allowing for the possibility of assessing edited (summarized, expanded, etc.) and yet accurate respeaking. This is the reason why it is not possible to automate the assessment of accuracy in live subtitling as has been done in the US2 (Apone et al. 2010), at least when respeaking is being used.
5. Compare the original spoken text with the respoken subtitles to identify editing or recognition errors that might be classified as serious, standard or minor, depending on how they affect the processing of textual and visual information.
6. Include, whenever possible, other relevant information regarding live subtitling quality, such as delay, position, speed, character identification, etc.
7. Provide an overall idea, not only of quality in terms of accuracy, but also aspects to be improved (and perhaps even how to do so). Instead of a spot-the-error exercise, the model should provide food for thought as far as training is concerned.

3.2.2 Traditional WER methods

The US National Institute of Standards and Technology distinguishes between word correctness and word accuracy, both of which are presented as percentages using the following basic formula:

Accuracy rate = (N – Errors) / N × 100

In this model, N is the total number of words spoken by the user. As illustrated by Dumouchel et al. (2011), in the following example there are at least three different types of error that can occur with the use of speech recognition: deletion (a correct word is omitted in the recognized sentence), substitution (a correct word is replaced by an incorrect one) and insertion (an extra word is added).

Original:    Where   is   the   whole   wheat    flour
Recognized:  Where   is   –     hole    we eat   flower
Error type:               D     S       S + I    S

Taking this into account, the measure of word correctness proposed by the US National Institute of Standards and Technology, which includes deletion and substitution errors, would apply to the above utterance as follows:

Accuracy rate = (N(6) – D(1) – S(3)) / N(6) × 100 = 33.33%


The model to assess word accuracy is stricter, as it also takes into consideration insertion errors:

Accuracy rate = (N(6) – D(1) – S(3) – I(1)) / N(6) × 100 = 16.66%
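The two measures can be written as short functions. The sketch below simply encodes the NIST formulas (the function names are ours), taking pre-counted deletions, substitutions and insertions as input:

```python
def word_correctness(n: int, d: int, s: int) -> float:
    """NIST-style word correctness: (N - D - S) / N * 100.

    n: words in the original utterance; d: deletions; s: substitutions.
    """
    return (n - d - s) / n * 100


def word_accuracy(n: int, d: int, s: int, i: int) -> float:
    """NIST-style word accuracy, which also penalizes insertions (i)."""
    return (n - d - s - i) / n * 100


# Counts from the 'whole wheat flour' example: N=6, D=1, S=3, I=1.
print(round(word_correctness(6, 1, 3), 2))    # 33.33
print(round(word_accuracy(6, 1, 3, 1), 2))    # 16.67 (16.66 when truncated)
```

Note that the error counts themselves depend on how the two word sequences are aligned; automatic minimal-edit alignments may assign slightly different D/S/I counts than a human annotator would.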

Designed as they are for the use of speech recognition, these models pose a significant problem when applied to respeaking, as they do not account for instances in which a respeaker edits the original text without changing or losing meaning, as is shown below:

Original: Well, you know, you have to try and put out a good performance, I mean, yeah, it’s kind of a stepping stone, isn’t it, really?
Respoken subtitle: You have to try to put out a good performance. It’s a stepping stone.

Accuracy rate = (25 – 11 – 1 – 0) / 25 × 100 = 52%

The example given above features the omission of relatively unimportant asides (‘you know’, ‘I mean’, ‘kind of’), which constitutes a useful strategy commonly applied by respeakers to catch their breath and keep up with the original speaker. Traditional WER methods would yield an accuracy rate of 52 per cent, whereas a model suited to respeaking might consider this respoken subtitle as 100 per cent accurate.

3.2.3 The CRIM method

One of the first attempts to adapt traditional WER methods to the specificity of respeaking was carried out by the Centre de Recherche Informatique de Montréal (CRIM, www.crim.ca/en/r-d/reconnaissance_parole). The basis is still the word accuracy method described above, but a step is added in between: once the spoken and the respoken text have been automatically aligned, a human operator goes through the text and decides whether or not the deletions have caused loss of information. In this way, both verbatim and edited respeaking can be accounted for. A number of problems remain unsolved, however. First of all, the decision as to when deletion brings about loss of information is entirely subjective, varying thus from person to person. This issue will be dealt with in the next section. Secondly, while requirements 1–4 outlined above are met, 5, 6 and 7 are not. The accuracy rate obtained with this model may provide useful data, but the deletion figure could be perceived as ambiguous. Indeed, the formula does not give any indication as to whether the


deletions have been caused by misrecognitions or by poor editing strategies on the part of the respeaker. This is a very important distinction, as it requires two different remedial actions. If deletion errors are mostly misrecognitions, further work is needed on the software to improve the voice profile by fine-tuning the acoustic and language models. In contrast, if the deletions are caused by poor editing by the respeaker, the training should be based on providing the respeaker with the skills necessary to edit the original text without losing (excessive) information. Finally, this method does not seem to take into account the specificities of different languages or the occurrence of errors that are subsequently corrected on air by the respeaker. With regard to the latter, some companies, such as IMS in the UK, consider them as half an error, while others do not regard them as mistakes at all, obviously resulting in better overall accuracy rates.
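The alignment step described above can be approximated with standard sequence matching, as in this illustrative Python sketch (CRIM’s own aligner is not public; Python’s difflib stands in for it here). It returns the stretches of the original that are missing from the respoken subtitles, which a human operator would then judge as harmless edits or as loss of information:

```python
import difflib


def find_deletions(original: str, respoken: str) -> list[str]:
    """Align the spoken and respoken texts word by word and return
    the word sequences present in the original but absent from the
    respoken subtitles, for a human operator to review."""
    orig_words = original.lower().split()
    resp_words = respoken.lower().split()
    matcher = difflib.SequenceMatcher(a=orig_words, b=resp_words)
    deletions = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        # 'delete' and 'replace' opcodes both drop original words.
        if tag in ("delete", "replace"):
            deletions.append(" ".join(orig_words[i1:i2]))
    return deletions


spoken = ("well, you know, you have to try and put out a good performance, "
          "i mean, yeah, it's kind of a stepping stone, isn't it, really?")
respoken = "you have to try to put out a good performance. it's a stepping stone."
print(find_deletions(spoken, respoken))
```

As the chapter notes, the crucial step that cannot be automated, namely deciding whether each listed deletion loses information, still falls to the human reviewer.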

3.3 The NER model

This model is based on the NERD model illustrated in Romero-Fresco (2011: 150–161), which included a loose category for ‘Deductions’ (D), but made no distinction between different types of error. Drawing on the findings obtained in the EU-funded project DTV4ALL (www.psp-dtv4all.org) regarding viewers’ preferences, the NER model accounts for serious, standard and minor errors, thus acknowledging the fact that not all errors pose the same problems in terms of comprehension and highlighting the viewer-centred nature of the model.

3.3.1 Main components of the model

Good quality live subtitles may be expected to reach a 98 per cent accuracy rate using the following model:

Accuracy rate = (N – E – R) / N × 100
CE (correct editings)
Assessment

N: Number of words in the respoken subtitles, including commands (punctuation marks, speaker identification).
E: Editing errors, usually caused by strategies applied by the respeaker. In other words, these errors are the result of a judgement or decision on the part of the respeaker. The most common situation is that, in a given instance, and for whatever reason (e.g. because the original speech rate is too fast), the respeaker decides to omit something, thus losing an idea unit (a piece of information).3 The respeaker sometimes also adds idea units or paraphrases the original text, losing information or introducing wrong information, perhaps due to a misunderstanding of the original text. Editing errors are calculated by comparing the respoken subtitles and the original text and may be classified as serious, standard or minor, scoring 1, 0.5 and 0.25, respectively (specific examples are included in the next section). In the case of automatic subtitles, these errors are, among others, related to punctuation and speaker identification.
R: Recognition errors. These are usually misrecognitions caused by mispronunciation/mishearing or by the specific technology used to produce the subtitles. These errors may involve insertion, deletion or substitution, and are calculated by comparing the respoken subtitles and the original text. They may be classified as serious, standard or minor, scoring 1, 0.5 and 0.25 respectively (specific examples are included in the next section).
CE: Correct editings, that is, instances in which the respeaker’s editing has not led to a loss of information. This is calculated by comparing the respoken subtitles and the original text. Given the difficulty involved in producing verbatim live subtitles, the omission of redundancies and hesitations may be considered as cases of correct editing and not as errors as long as the coherence and cohesion of the original discourse are maintained.
Assessment: This section includes the assessment and analysis of the results as well as comments on different issues, such as the speed and delay of the subtitles, how the respeaker has coped with the original speech rate, the overall flow of the subtitles on screen, speaker identification, the audiovisual coherence between the original image/sound and the subtitles and whether too much time has been lost in the corrections, etc.
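The arithmetic of the model can be written out as a short function. This is an illustrative sketch of the formula and weights described above (the function name and the example figures are ours); deciding whether each error counts as serious, standard or minor remains a human judgement:

```python
# Severity weights as defined by the NER model: serious, standard
# and minor errors score 1, 0.5 and 0.25 respectively.
WEIGHTS = {"serious": 1.0, "standard": 0.5, "minor": 0.25}


def ner_accuracy(n_words, editing_errors, recognition_errors):
    """Accuracy rate = (N - E - R) / N * 100, where E and R are the
    weighted sums of editing and recognition errors, each given as a
    list of severity labels."""
    e = sum(WEIGHTS[severity] for severity in editing_errors)
    r = sum(WEIGHTS[severity] for severity in recognition_errors)
    return (n_words - e - r) / n_words * 100


# Hypothetical assessment: 500 respoken words, two standard editing
# errors, one serious and four minor recognition errors.
score = ner_accuracy(500, ["standard", "standard"], ["serious"] + ["minor"] * 4)
print(round(score, 2))  # 99.4
print(score >= 98)      # True: meets the 98 per cent benchmark
```

Correct editings (CE) do not enter the formula; they are recorded separately precisely because they are not errors.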
Given that it is difficult to describe the overall quality of a set of live subtitles by a single figure, this assessment, and not just the accuracy rate, should be given priority when determining the quality of subtitles in the NER model.

3.3.2 Types of error in English, Spanish, Italian and German

This section includes examples of types of error obtained from a 35,000-word corpus comprising live subtitles from some of the most watched news programmes in the UK (8,000 words), Spain (11,000 words), Italy (8,000 words) and Switzerland (8,000 words): BBC Six O’Clock News, Newsnight and SKY News (English), 59 segundos and Telediario TVE (Spanish), Telegiornale RAI (Italian), and 10vor10 and Tagesschau on SF1 (German). Of these subtitles, 90 per cent were produced by respeaking and 10 per cent by stenography.


As mentioned above, the approaches to live subtitling tend to vary from country to country and even from programme to programme. Subtitling companies or broadcasters will often provide clear indications of what is expected from respeakers in terms of editing, summarizing, etc. (see 3.2.3).

3.3.2.1 Serious errors

Serious errors change the meaning of the original text, creating a new meaning that could make sense in that particular context. Serious recognition errors are often caused by substitutions such as 'alms' instead of 'arms', or '15 per cent' instead of '50 per cent'. Serious editing errors are often caused by bad choices or confusion on the part of the respeaker (a mistake with figures or numbers, a change from an affirmative to a negative statement, etc.). From the viewers' point of view, serious errors not only omit information but also misinform, which is why some deaf viewers refer to them as 'lies'. More worryingly, since these errors make sense in the particular context in which they occur, they may not be noticed by viewers. Table 3.1 shows some examples of serious recognition errors in different languages, whereas Table 3.2 illustrates examples of serious editing errors.

Table 3.1 Examples of serious recognition errors

English
  Subtitles:     he's having problems with the Czechs
  Original text: he's having problems with the cheques

  Subtitles:     they allow young people to smoke pot only outside
  Original text: they allow young people to smoke but only outside

  Subtitles:     he never talks to Rudy
  Original text: he never talks dirty

  Subtitles:     he was born in 1986
  Original text: he was born in 1996

  Subtitles:     the driver must have had a view
  Original text: the driver must have had a few

Spanish
  Subtitles:     casas habitadas por humanos [houses inhabited by humans]
  Original text: casas habitadas por rumanos [houses inhabited by Romanians]

  Subtitles:     esto se llama asimetría [this is called asymmetry]
  Original text: esto se llama simetría [this is called symmetry]

  Subtitles:     siempre usa mal esa conjugación [always uses that conjugation wrong]
  Original text: siempre usa mal esa conjunción [always uses that conjunction wrong]

  Subtitles:     normalmente nos dan ejemplos [they normally provide examples for us]
  Original text: normalmente no se dan ejemplos [normally examples are not provided]

Italian
  Subtitles:     una richiesta di fiducia che nasce [a motion of confidence based on]
  Original text: una richiesta di sfiducia che nasce [a motion of rejection based on]

  Subtitles:     per tutte le forze armate [for all the armed forces]
  Original text: per tutte le forze alleate [for all the allied forces]

  Subtitles:     le norme [the norms]
  Original text: enorme [enormous]

  Subtitles:     se ci dicono anche che il fidanzato del PD [we are also told that PD's fiancé]
  Original text: se ci dicono anche chi è il fidanzato del PD [when they also tell us who PD's fiancé is]

  Subtitles:     governo internazionale [international governance]
  Original text: governo di unità nazionale [national unity government]

German
  Subtitles:     in Island ist der Zinssatz von 3,5 auf 2% gesunken [the interest rate in Iceland decreased from 3.5% to 2%]
  Original text: in Irland ist der Zinssatz von 3,5 auf 2% gesunken [the interest rate in Ireland decreased from 3.5% to 2%]

  Subtitles:     die Schwangerschaft ist ein Zustand, eine Krankheit, man sollte einfach normal weiterleben [pregnancy is a state, a disease, one should just carry on]
  Original text: die Schwangerschaft ist ein Zustand, keine Krankheit, man sollte einfach normal weiterleben [pregnancy is a state, not a disease, one should just carry on]

Table 3.2 Examples of serious editing errors

English
  Subtitles:     There are a number of questions in the UK, but the big one is whether it's about to slip into recession.
  Original text: There are a number of questions in the UK, but the big one is the US and whether it's about to slip into recession.

Spanish
  Subtitles:     En el interior hay problemas. Ahí sí que se necesita ayuda. [There are problems inland. This is where help is really needed.]
  Original text: En el interior hay problemas, pero no tantos como en la costa. Ahí sí que se necesita ayuda. [There are problems inland, but not as many as on the coast. This is where help is really needed.]

Italian
  Subtitles:     dopo la nomina è però arrivata una nota sull'opportunità politica [but after the nomination, a message arrived regarding whether it seems politically opportune]
  Original text: dopo la nomina è però arrivata una nota del Quirinale in cui si esprime una riserva del Presidente sull'opportunità politica [but after the nomination, a message from the Quirinale arrived in which the President had a reservation concerning whether it seems politically opportune]

German
  Subtitles:     es wird nützen, dass die USA bekannt gemacht haben, dass sie ihre Regierungscomputer besser schützen lassen will [it is useful that the USA announced that they are going to provide better protection for their government computers]
  Original text: es wird wenig nützen, dass die USA bekannt gemacht haben, dass sie ihre Regierungscomputer besser schützen lassen will [it is of little use that the USA announced that they are going to provide better protection for their government computers]

3.3.2.2 Standard errors

Although they do not create a new meaning, standard errors involve the omission of an information unit from the original text. Standard recognition errors disrupt the flow/meaning of the original and often cause surprise. They are identified as errors, but it is not always easy to figure out what was originally meant. For example, in the subtitle 'She has no big plans for hell of even this year', 'hell of even' (produced instead of 'Halloween') does not make sense in this context, making it difficult for viewers to understand what was originally meant. Table 3.3 includes some examples of standard recognition errors in different languages. The difference between standard editing errors and minor editing errors is based on the distinction between independent and dependent idea units. An independent idea unit, such as 'The blaze started this morning at the front of the house', is the oral equivalent of a sentence, makes sense as a full, independent message and may be composed of several dependent idea units, such as 'this morning' and 'at the front of the house'. A dependent idea unit is often a complement and provides information about the 'when', 'where', 'how', etc. of an independent idea unit. As shown in Table 3.4, standard editing errors often consist of

Table 3.3 Examples of standard recognition errors

English
  Subtitles:     way man Republican
  Original text: Weimar Republic

  Subtitles:     paid in full by pizza
  Original text: paid in full by Visa

  Subtitles:     he's a rats public and
  Original text: he's a Republican

  Subtitles:     he's a buy you a bull asset
  Original text: he's a valuable asset

  Subtitles:     attend Tatian
  Original text: a temptation

  Subtitles:     I couldn't hear Iran said
  Original text: I couldn't hear your answer

Spanish
  Subtitles:     los detalles que nadie de son importantes [the details nobody of are important]
  Original text: los detalles que nadie ve son importantes [the details nobody sees are important]

  Subtitles:     y los festival internacional [and the festivals international]
  Original text: 22 festival internacional [22 international festival]

  Subtitles:     dividan enfermedad [divide illness]
  Original text: debido a una enfermedad [due to an illness]

  Subtitles:     vida el gobierno [life the government]
  Original text: pide al gobierno [asks the government]

  Subtitles:     es la queja historia [it is the complaint story]
  Original text: es la vieja historia [it is the old story]

Italian
  Subtitles:     di un vero e proprio piano Marshall sulla lì [of a real Marshall Plan for lee]
  Original text: di un vero e proprio piano Marshall per la Libia [of a real Marshall Plan for Libya]

  Subtitles:     un'ala [a wing]
  Original text: un'aula [an assembly hall]

  Subtitles:     nella notte si legge che il Presidente ha proceduto [in the evening, one can read that the President had consulted]
  Original text: nella nota si legge che il Presidente ha proceduto [in the message, one can read that the President had consulted]

German
  Subtitles:     sie haben Maschinenpistolen eisig getragen [they were carrying icy machine guns]
  Original text: sie haben Maschinenpistolen bei sich getragen [they were carrying machine guns with them]

  Subtitles:     geschossen werden immer kleiner [shootings are diminishing constantly]
  Original text: die Chancen werden immer kleiner [the chances are diminishing constantly]

Table 3.4 Examples of standard editing errors

English
  Subtitles:     Birmingham's problems aren't solely of the council's making. There is a large population living in some of the most deprived communities in the country.
  Original text: Birmingham's problems aren't solely of the council's making. There is a huge demand for services in this city. There is a large population living in some of the most deprived communities in the country.

  Subtitles:     We'll be discussing tonight. Then we'll have some time at the end for football.
  Original text: We'll be discussing tonight the great start of the New Labour government. Then we'll have some time at the end for football.

Spanish
  Subtitles:     El Celta de Vigo ha cuajado una gran temporada. El Deportivo de la Coruña, sin embargo, sigue decepcionando. [Great season for Celta de Vigo. Deportivo de la Coruña, however, is still disappointing.]
  Original text: El Celta de Vigo ha cuajado una gran temporada. Pocos medios lo han mencionado hasta ahora. El Deportivo de la Coruña, sin embargo, sigue decepcionando. [Great season for Celta de Vigo. Only a few media have mentioned it so far. Deportivo de la Coruña, however, is still disappointing.]

  Subtitles:     El ministro anunció. Nadie se lo esperaba. [The minister announced. This is something no one expected.]
  Original text: El ministro anunció que dejará la política a finales de año. Nadie se lo esperaba. [The minister announced that she will be leaving politics at the end of the year. This is something no one expected.]

Italian
  Subtitles:     L'iniziativa verrebbe respinta anche nella Svizzera italiana. Va però detto che nella Svizzera italiana le campagne iniziano sempre tardi. [The initiative would also be rejected by the Italian-speaking part of Switzerland. It has to be said, though, that campaigns in the Italian-speaking part of Switzerland always start at a later date.]
  Original text: L'iniziativa verrebbe respinta anche nella Svizzera italiana. Regione che però si è sempre mostrata molto critica nei confronti della libera circolazione. Va però detto che nella Svizzera italiana le campagne iniziano sempre tardi. [The initiative would also be rejected by the Italian-speaking part of Switzerland. A region that has always been critical of the free movement of persons. It has to be said, though, that campaigns in the Italian-speaking part of Switzerland always start at a later date.]

German
  Subtitles:     Das neue Gesetz erlaubt den Anbau und Verkauf von Haschisch. Ebenso ist der Konsum von vierzig Gramm pro Monat gestattet. [The new law allows the cultivation and sale of hashish, as is the consumption of forty grammes per month.]
  Original text: Das neue Gesetz erlaubt den Anbau und Verkauf von Haschisch. Eine Behörde soll die Produktion und den Handel überwachen. Ebenso ist der Konsum von vierzig Gramm pro Monat gestattet. [The new law allows the cultivation and sale of hashish, as is the consumption of forty grammes per month. The production and sale would be monitored by a public authority.]

the omission of a full independent idea unit (which may not be noticed by viewers) or of a dependent idea unit that renders the remaining unit meaningless or nonsensical, e.g. the omission of 'the great start of the New Labour government' in 'We'll be discussing tonight the great start of the New Labour government'.

3.3.2.3 Minor errors

These errors allow viewers to follow the meaning/flow of the original text and sometimes even to reconstruct the original words. Typical cases of minor recognition errors are the presence/absence of capital letters, apostrophes, insertions of small words, etc. Minor editing errors often involve the omission of a dependent idea unit that does not render the remaining unit meaningless or nonsensical, e.g. the omission of 'this morning' in 'the blaze started this morning at the front of the house'. Minor editing errors depend largely on specific respeaking practices. In some countries, such as the UK, respeaking 'the former head of the Federal Reserve, Alan Greenspan, has stated that …' as 'the former head of the Federal Reserve has stated that' may be considered an editing error, whereas in countries adopting a less verbatim approach, Switzerland for example, it may be regarded as correct editing. Corrected errors may also be included in this category, although in many countries a correction is regarded as correct editing. From the viewers' point of view, minor errors may go unnoticed or may be detected without hindering the comprehension of the key elements of the original text. Table 3.5 shows some examples of minor recognition errors in different languages, whereas Table 3.6 contains minor editing errors.

Table 3.5 Examples of minor recognition errors

English
  Subtitles:     brown we're / it's a Ryan Giggs / for people were found / their / what you do then?
  Original text: Brown were / it's Ryan Giggs / four people were found / they're / what do you do then?

Spanish
  Subtitles:     va ser crucial / ayer estudie mucho / el presidente zapatero / todo satisfechos / tan poco lo tiene
  Original text: va a ser crucial / ayer estudié mucho / el presidente Zapatero / todos satisfechos / tampoco lo tiene

Italian
  Subtitles:     credo / là / ne meno / ha se si sono cinque / è caso questa / quello napolitani
  Original text: crede / della / nemmeno / ha se ci sono cinque / è il caso questo / è quello Napolitano

German
  Subtitles:     zweite / diese / bereit / haben dass die irische Regierung zwei weitere konkursreifer Banken Verstaatlichung muss
  Original text: zweiten / dieser / bereits / haben dass die irische Regierung zwei weitere konkursreife Banken verstaatlichen muss

Table 3.6 Examples of minor editing errors

English
  Subtitles:     The neighbours did all they could to try and get inside, but it was too difficult and dangerous a task.
  Original text: The neighbours did all they could to try and get inside, trying to knock down the door, but it was too difficult and dangerous a task.

Spanish
  Subtitles:     La playa de las Islas Cíes fue elegida como la mejor del mundo en una encuesta publicada en 2007. [The Cíes Islands beach was chosen best beach in the world in a survey published in 2007.]
  Original text: La playa de las Islas Cíes fue elegida como la mejor del mundo en una encuesta publicada por The Guardian en 2007. [The Cíes Islands beach was chosen best beach in the world in a survey published by The Guardian in 2007.]

Italian
  Subtitles:     Sono emerse alcune novità nell'inchiesta sull'incidente di Michael Schumacher comunicate nel corso di una conferenza stampa a Grenoble. [Some news has emerged about the investigation into Michael Schumacher's accident, provided in a press conference in Grenoble.]
  Original text: Sono emerse alcune novità nell'inchiesta sull'incidente di Michael Schumacher comunicate nel corso di una conferenza stampa a Grenoble dalla procura francese. [Some news has emerged about the investigation into Michael Schumacher's accident, provided in a press conference in Grenoble by the French public prosecutor.]

German
  Subtitles:     Die Studie hat für Unruhe unter den Parlamentariern gesorgt. [The study caused a stir among the parliamentarians.]
  Original text: Die Studie, die in der NZZ veröffentlicht wurde, hat für Unruhe unter den Parlamentariern gesorgt. [The study published in the NZZ caused a stir among the parliamentarians.]

3.4 Application of the NER model in English

This section includes three examples of how the NER model can be applied to the assessment of accuracy in live subtitling, in this case with respoken subtitles.

Example 1

In this example, although the respeaker has had to edit approximately 20 per cent of the original discourse due to the high speech rate, the key information has been included in the subtitles, which are very accurate.

Original text:
– What would it mean for triathlon in Britain if a male or female British athlete were able to go to Beijing and bring back a first triathlon Olympic medal?

Respoken subtitles:
– What would it mean for triathlon in Britain if a male or female British athlete were able to go to Beijing and bring back a first triathlon Olympic medal?

Original text:
– Oh, I think it'd be, be, be everything for, for the individual, of course, and, you know, I'm sure, you know, there'd be loads of people, you know, all the personal supporters but obviously, you know, the sport as a whole would be great. You know, I'm sure that if you speak to the rest of the guys that's what everyone is really, really trying to do is get a good performance and, you know, try to race for a medal and, you know, it's the pinnacle of your career, you know and, you know, hopefully and if it's one of the younger ones, or myself or whoever, you know, it'll be a stepping stone to, to, to, for experience leading up to 2012.

Respoken subtitles:
– It would be everything for the individual, of course. I'm sure there will be lots of people who will … The sport as a whole would be great. I'm sure if you speak to the rest of the guys, that's what everyone is trying to do. Get a good performance and try to race for a medal. It is the pinnacle of your career. Hopefully if it is one of the younger ones or myself, whoever, it will be a stepping stone for experience leading up to 2012.

Original text:
– Inevitably, we have to touch on the three missed tests and on the fact that you were temporarily banned by the BOA. Do you look back on it and feel that you were maybe a victim of a system that was in its infancy at the time?

Respoken subtitles:
– Inevitably, be have to touch on the three missed tests and at the fact that you were temporarily banned. Do you look back and it and feel that you were maybe a victim of a system that was in its infancy at the time?

Original text:
– Uh, to a degree, yes. I think that everyone learnt a lot from that. UK sport did, I did, the federation and hopefully the juniors have. Uh, but, you know, it happened and, you know, I'm not hiding behind the fact it didn't happen.

Respoken subtitles:
– Everyone learnt a lot from that, UK sport, the federation and hopefully the juniors. It happened. I am not hiding behind the fact it didn't happen.

Accuracy: (205 − 1 − 0.5) / 205 × 100 = 99.3%

N: 205 (186 words + 19 commands, namely commas, full stops and question marks)
E: 1 ('all the personal supporters' [0.25: dependent idea unit], 'by the BOA' [0.25: dependent idea unit], 'to a degree, yes' [0.5: independent idea unit])
R: 0.5 ('be have' instead of 'we have' [0.25], 'look back and it' instead of 'look back on it' [0.25])
CE: 40 ('oh, I think' [x2], 'be' [x2], 'for, and' [x5], 'you know' [x12], 'but obviously', 'that, really' [x2], 'or, to' [x3], 'I think that, I did, have, and, but, it would be, there will be, it is, it will be')

Assessment: Overall accuracy is very good. There is heavy editing (20.1 per cent of the original text) due to a very high speech rate (245 wpm) but, to the respeaker's credit, there are also 40 instances of correct editing vs. only three where some information is lost. The respeaker is very proficient in this regard. Recognition is good, but attention should be paid to 'be/we' and 'and/on'. In any case, when respeaking with Dragon, if the respeaker manages to deliver respeaking units such as 'we have to touch' or 'look back on it', the algorithm in the language model is unlikely to allow mistakes such as 'be have to touch' and 'look back and it'. None of the errors are serious.
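The worked examples in this section all instantiate the same formula, accuracy = (N − E − R) / N × 100, with each error weighted 0.25 (minor), 0.5 (standard) or 1 (serious). As a quick illustration, the calculation can be sketched in a few lines of Python (`ner_accuracy` is a hypothetical helper written for this chapter's figures, not part of any official NER tooling):

```python
def ner_accuracy(n: float, e: float, r: float) -> float:
    """NER accuracy rate: (N - E - R) / N * 100.

    n: number of words plus commands (punctuation, corrections) in the subtitles.
    e: weighted sum of editing errors; r: weighted sum of recognition errors.
    Each error scores 0.25 (minor), 0.5 (standard) or 1 (serious).
    """
    return (n - e - r) / n * 100

# Example 1 above: N = 205, E = 1, R = 0.5
print(round(ner_accuracy(205, 1, 0.5), 1))      # 99.3
# Examples 2 and 3 later in this section
print(round(ner_accuracy(220, 5.5, 0.25), 1))   # 97.4
print(round(ner_accuracy(259, 0.25, 8.75), 1))  # 96.5
```

Against the 98 per cent benchmark used in this chapter, only the first of the three examples passes.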

Example 2

Example 2 presents a very different situation. In this case, only 6 per cent of the original discourse has been edited, but while many irrelevant elements have been maintained in the subtitles, some key idea units have been lost.

Original text:
Everyone agrees the economy is gonna cool down in two years. The question is, will it be a deep freeze or just a bracing chill? Well, if I really knew the answer to that question, I'd be in the City, not standing here talking to you. But I do know what the key questions are: the big one is the US, and whether it's about to slip into recession.

Respoken subtitles:
Everyone agrees the economy is gonna cool down. The question is, will it be a deep freeze or just a bracing chill? Well, if I really knew the answer to that question, I'd be in the City, not standing here talking to you. But I do know what the key questions are, the big one is whether it's about to slip into recession.

Original text:
The former head of the Federal Reserve, Alan Greenspan, thinks the odds on a recession this year are fifty-fifty. The Fed has cut rates three times this year but has been surprised by the slowdown and plans to do more. The same goes for President Bush, who said yesterday he was thinking about a stimulus package of his own for 2009.

Respoken subtitles:
The former head of the Federal Reserve thinks the odds on a recession this year are high. The Fed has cut rates but has been surprised by the slowdown and plans to do more. The same goes for President Bush, who said today he was thinking about a stimulus package of his own.

Original text:
Now, the worse things get over there, the tougher it will be in Britain. Sure, people talk about all the growth in Asia and how the global economy can decouple from America, but Britain's credit crunch has been nearly as bad as America's. Worse, if you think Northern Rock. And even though Alistair Darling has announced reforms today that could prevent that kind of fiasco happening again at a macro level, it's hard to get round the fact that Britain shares many of the same big economic weaknesses as America, not least a habit of spending beyond our means.

Respoken subtitles:
Now, the worse things get over there, the tougher it will be in Britain. Sure, people talk about all the growth and how the global economy can decouple from America, but Britain's credit crunch has been nearly as bad as America's. Even though new reforms could prevent that kind of fiasco happening again, it's hard to get round the fact that Britain shares many of the same big economic weaknesses as America, not least a habit of spending beyond our mains.

Accuracy: (220 − 5.5 − 0.25) / 220 × 100 = 97.4%

N: 220 (196 words + 24 commands, namely commas, full stops and question marks)
E: 5.5 ('in two years' [0.25], 'the US' [1], 'Alan Greenspan' [0.25], 'fifty-fifty' [1], 'three times this year' [0.25], 'today' [1], 'for 2009' [0.25], 'in Asia' [0.25], 'Worse, if you think Northern Rock' [0.5], 'Alistair Darling has announced' [0.25], 'today' [0.25], 'at a macro level' [0.25])
R: 0.25 ('mains' instead of 'means' [0.25])
CE: 1 ('and [even though …]')

Assessment: Overall accuracy does not reach 98 per cent. Recognition is good: only one minor error. The problem lies in editing, not because of the amount (only 6 per cent edited) but because of the quality (12 instances of incorrect editing vs. one instance of correct editing). Facing a fairly normal speech rate in the original text (165 wpm), the respeaker has kept many 'irrelevant' elements ('well', 'but', 'sure', 'now', 'and') but has lost as many as 12 idea units. Many of these are dependent idea units made up of only one or two words and could have easily been maintained. Further training is needed to improve this respeaker's editing skills.

Example 3

Finally, example 3 is also unsuccessful. In this case, the problems are not related to editing, but to recognition, which means that the respeaker needs to undergo further training with his/her voice profile.

Original text:
– In my view, it is monetary policy that needs to act, not fiscal policy. And what we have at the moment is interest rates which are contractionary on the economy. Let's not forget that an interest rate of 15 per cent is higher than a neutral interest rate for this economy and yet there are all these forces pushing downwards, such as oil prices, the credit crunch, housing, housing vulnerabilities. In my view, the Bank of England needs to start cutting rates and start cutting them quickly. Now, if it doesn't, this economy is likely to stall next year.

Respoken subtitles:
– In my view, it is monetary police that needs to and not fiscal policy. What we have at the moment is interest rates which are confectionery on the economy. An interest rate of 50 per cent is higher than a neutral interest rate for this economy and yet there are all these forces pushing downwards, such as the credit card, housing, housing and inabilities. The Bank of England needs to start cutting rates and start cutting them quickly. If it doesn't, this economy is likely to still the next year.

Original text:
– So if you were still in the committee that is what you would be reckoning?
– Basically, I will be voting for a cut next year.

Respoken subtitles:
– So if he were still in the committee that is what you would be reckoning?
– I will be voting for a cup next year.

Original text:
– She's absolutely right. Look, sure there is a risk of inflation, we're not in the Weimar Republic, we're living in Great Britain in the United States. There is pain to cutting interest rates. I mean, you risk a little inflation but the question is what would you rather risk, a little more inflation or a major slowdown? Given the risks, you best take a chance on inflation.

Respoken subtitles:
– She's absolutely right. Sure there is a risk of inflation, were not in the way my Republican, were living in Great Britain in the United States. There is pain to cutting interest rates. You risk a little inflation but the question is what would you rather risk, a little more inflation or a major slowdown? Given the risks, you best take a chance on inflation.

Original text:
– I was just talking to some people at the White House today. It looks like we have room for fiscal policy adjustments. The president is considering increasing allowances for depreciation for instance to stimulate business investment. In this country, Gordon Brown did not leave you with that room.

Respoken subtitles:
– I was just talking to some people at the White House today. It looks like we have room for fiscal policy adjustments. The president is considering increasing allowances for deposition for instance to stimulate business investment. In this country, Gordon Brown did not leave you with that run – room.

Accuracy: (259 − 0.25 − 8.75) / 259 × 100 = 96.5%

N: 259 (232 words + 27 commands, namely commas, full stops, question marks and one correction)
E: 0.25 ('oil prices' [0.25])
R: 8.75 ('police' instead of 'policy' [1], 'and' instead of 'act' [0.5], 'confectionery' instead of 'contractionary' [0.5], '50 per cent' instead of '15 per cent' [1], 'credit card' instead of 'credit crunch' [1], 'and inabilities' instead of 'vulnerabilities' [0.5], 'to still' instead of 'to stall' [0.5], 'he' instead of 'you' [1], 'cup' instead of 'cut' [0.5], 'were' instead of 'we're' [0.25], 'were' instead of 'we're' [0.25], 'way my Republican' instead of 'Weimar Republic' [0.5], 'deposition' instead of 'depreciation' [1], 'run' corrected as 'room' [0.25])
CE: 7 ('and', 'let's not forget that', 'in my view', 'now', 'basically', 'look', 'I mean')

Assessment: Overall accuracy does not reach 98 per cent. Editing is good: only one dependent idea unit missing and seven instances of correct editing. Recognition is poor: 14 recognition errors (including five serious errors). Further training is needed to improve the voice profile. The errors occur not only with single words but also with phrases and contractions. The respeaker should thus be advised not to dictate contractions in order to avoid errors such as 'were' instead of 'we're'.

3.5 Final thoughts: the NER model and automatic subtitling

As mentioned above, the NER model can also be used to measure the quality of automatic subtitles, that is, those produced with ASR (automatic speech recognition). Given the rapid evolution of speaker-independent speech recognition, it makes sense to anticipate the use of this technology by subtitling companies once it has reached optimum levels of accuracy. At first, this software may be used with the intervention of a human operator, who will correct misrecognitions and errors of punctuation and speaker identification before sending the subtitles on air. Perhaps, in a more distant future, human intervention may even be excluded from the process altogether. Be that as it may, it is important to ensure that such subtitles are at least as accurate as those produced by respeaking, in this case reaching 98 per cent with the NER model, including punctuation and character identification errors. There is, however, one more element that becomes critical when dealing with automatic subtitles: speed. As highlighted in Romero-Fresco (2012), the speed of subtitles has a direct impact on the amount of time viewers can devote to the images. According to eye-tracking data obtained in Poland, the UK and Spain in the DTV4ALL project, and in South Africa by Hefer (2011), a speed of 150 wpm leads to an average distribution of 50 per cent of the time on the subtitles and 50 per cent on the images. A faster speed of 180 wpm yields an average of 60–65 per cent of the time on the subtitles and 40–35 per cent on the images, whereas 200 wpm only allows 20 per cent of the time on the images. As shown by González Lago (2011), the average speech rate of live programmes, such as the Spanish news, is 240–278 wpm, with peaks of 400 wpm and even 600 wpm in certain cases. Speech rates of over 220 wpm are also very common for presenters in the UK (Eugeni 2009).
Considering the fact that presenters are unlikely to slow down their speech rates and that automatic subtitles are by definition verbatim, the speed of automatic subtitles is likely to cause viewers to miss most of the images, unless: a) human intervention before launching the subtitles also includes editing, which is very complex and could lead to prolonged delays; b) an antenna delay is implemented so that the editor can have time to correct errors and edit the subtitles;4 or c) the technology used allows settings to be defined in order to achieve an optimum display mode/exposure time by automatically calculating a maximum and minimum duration.
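Option (a) can be made concrete with some rough arithmetic: if subtitles are to stay at or below a target speed, the share of the spoken words that must be edited out grows with the speech rate. A minimal sketch (`required_editing` is a hypothetical helper; the 180 wpm ceiling is the maximum subtitle speed discussed in this chapter):

```python
def required_editing(speech_wpm: float, target_wpm: float = 180) -> float:
    """Fraction of the original spoken words that must be dropped or condensed
    so that verbatim-leaning subtitles do not exceed the target subtitle speed."""
    if speech_wpm <= target_wpm:
        return 0.0
    return (speech_wpm - target_wpm) / speech_wpm

# Speech rates reported for Spanish news programmes (González Lago 2011):
print(round(required_editing(240), 2))  # 0.25: a quarter of the words must go
print(round(required_editing(400), 2))  # 0.55: over half at peak rates
```

At the 600 wpm peaks mentioned above, seventy per cent of the words would have to go, which illustrates why fully verbatim automatic subtitles are problematic at these speeds.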

3.6 Conclusions

Following a brief description of some of the traditional models used to assess accuracy in speech recognition and respeaking, the NER model has been introduced in this article in an attempt to provide a functional and easy-to-apply model to assess the accuracy of live subtitles, while also providing data on important subtitling factors such as delay, position, speed and character identification. The division between editing and recognition errors not only provides an indication of the accuracy rate of subtitles, but also gives us an idea of what must be improved and how. The model has been adopted by Ofcom to assess the quality of live subtitles on UK television (Ofcom 2013).5 Published in April 2014, the first Ofcom report on the quality of live subtitling measured with the NER model has helped to dispel one of the most common concerns regarding the application of quantitative and qualitative measures used to assess the quality of live subtitles, namely the existence of discrepancies and subjective evaluations, especially when it comes to analysing the loss of information in the subtitles and the impact this might conceivably have on viewers. The application of the NER model in the UK has proved very consistent according to the internal reviewers from the different broadcasters and subtitling companies (who were only given a few written instructions as to how to apply the model) as well as the external reviewers from the University of Roehampton (London). The average discrepancy with regard to the accuracy rates of the 66 programmes analysed was 0.09 per cent (Ofcom 2014). The NER model has also been endorsed by a white paper published by Media Access Australia (2014) and has been included in the official Spanish guidelines on subtitling for the deaf and the hard-of-hearing (AENOR 2012).
It is also being used by broadcasters, companies and training institutions in, amongst others, Spain, France, Italy, Switzerland, Germany, Belgium and Australia, as well as by the EU-funded projects SAVAS (www.fp7-savas.eu) and HBB4ALL (www.hbb4all.eu). Furthermore, NERstar, a semi-automatic tool, has been developed to ensure a quick and effective application of the NER model to live subtitles produced by respeaking or by ASR.6 Ten years after the introduction of respeaking, the collaboration between academia and the subtitling industry seems to be yielding interesting results, not least a set of parameters to ensure a certain degree of quality in respeaking, which in this case could attain a 98 per cent accuracy rate with the NER model, a block display mode for subtitles (Romero-Fresco 2011) and a maximum speed of 180 wpm. Now that the use of (semi-)automatic subtitles is becoming a possibility in live subtitling, it is all the more important to maintain these parameters in order to ensure that these new developments are not introduced at the expense of quality, that is to say, at the expense of viewers.

Acknowledgement

This research has been partly funded by the research group Transmedia Catalonia (a research group recognized by the Catalan Government with code number 2014GR027) and by the EU-funded project HBB4ALL, CIPICT-PSP.2013.5.1, ref number 621014 HBB4ALL.

Notes

1. In automatic subtitling, a speaker-independent speech recognition engine (commonly known as ASR or automatic speech recognition) transcribes what the speaker is saying without the need for a respeaker to act as an intermediary. This transcription may be shown directly as subtitles on screen with no delay (re-synchronization) and no correction, as was the case in a pilot project conducted by the Portuguese television company RTP, or by means of an operator who edits the transcription live (reviewing possible misrecognitions, errors of punctuation and character identification) before launching the subtitles on air with a slight delay, as is done by the Japanese broadcaster NHK.
2. Most live subtitles in the US are produced by stenography, with very little editing, which allows for a completely automatic comparison between the source text (spoken dialogue) and the target text (respoken subtitles).
3. According to Chafe (1985: 106), idea units are 'units of intonational and semantic closure'. They can be identified because they are spoken with a single coherent intonation contour, preceded and followed by some kind of hesitation, made up of one verb phrase along with whatever noun, prepositional or adverb phrase is appropriate, and usually consist of seven words and take about two seconds to produce.
4. This is carried out in Belgium by the Flemish public broadcaster VRT, which has managed to implement an antenna delay of up to ten minutes for some live programmes, chat shows for example. This enables subtitlers and respeakers to correct, edit and synchronize the subtitles before launching them on air for the viewers, who receive them as though they were produced live.


Pablo Romero-Fresco and Juan Martínez Pérez

5. Between 2014 and 2016, the BBC, ITV, Channel 4 and Sky, as well as their accessibility providers (at the time of writing, Red Bee Media and Deluxe), will use the NER model to assess the quality of one sample of their programmes every six months. This assessment will be reviewed by a team of researchers at the University of Roehampton, London.
6. The NERstar tool is available at www.nerstar.com.

References

AENOR. 2012. Subtitulado para personas sordas y personas con discapacidad auditiva. Madrid: AENOR.
Apone, Tom, Marcia Brooks and Trisha O'Connell. 2010. Caption Accuracy Metrics Project. Caption Viewer Survey: Error Ranking of Real-time Captions in Live Television News Programs. Report published by the WGBH National Center for Accessible Media. http://ncam.wgbh.org/invent_build/analog/caption-accuracy-metrics.
Chafe, Wallace. 1985. 'Linguistic differences produced by differences between speaking and writing'. In David Olson, Nancy Torrance and Angela Hildyard (eds) Literacy, Language, and Learning: The Nature and Consequences of Reading and Writing (pp. 105–22). Cambridge: Cambridge University Press.
Dumouchel, Pierre, Gilles Boulianne and Julie Brousseau. 2011. 'Measures for quality of closed captioning'. In Adriana Şerban, Anna Matamala and Jean-Marc Lavaur (eds) Audiovisual Translation in Close-up: Practical and Theoretical Approaches (pp. 161–72). Bern: Peter Lang.
Eugeni, Carlo. 2009. 'Respeaking the BBC News. A strategic analysis of respeaking on the BBC'. The Sign Language Translator and Interpreter 3(1): 29–68.
González Lago, María Dolores. 2011. Accuracy Analysis of Respoken Subtitles Broadcast by RTVE, the Spanish Public Television Channel. MA Dissertation. London: Roehampton University.
Hefer, Esté. 2011. Reading Second Language Subtitles: A Case Study of Afrikaans Viewers Reading in Afrikaans and English. MA Dissertation. Vaal Triangle Campus of the North-West University.
Media Access Australia. 2014. Caption Quality: International Approaches to Standards and Measurements. Sydney: Media Access Australia. www.mediaaccess.org.au/sites/default/files/files/MAA_CaptionQuality-Whitepaper.pdf.
Ofcom. 2013. Measuring the Quality of Live Subtitling: Statement. London: Ofcom. http://stakeholders.ofcom.org.uk/binaries/consultations/subtitling/statement/qos-statement.pdf.
Ofcom. 2014. Measuring Live Subtitling Quality: Results from the First Sampling Exercise. London: Ofcom. http://stakeholders.ofcom.org.uk/binaries/consultations/subtitling/statement/sampling-report.pdf.
Romero-Fresco, Pablo. 2011. Subtitling through Speech Recognition: Respeaking. Manchester: St Jerome.
Romero-Fresco, Pablo. 2012. 'Quality in live subtitling: the reception of respoken subtitles in the UK'. In Aline Remael, Pilar Orero and Mary Carroll (eds) Audiovisual Translation and Media Accessibility at the Crossroads (pp. 111–31). Amsterdam: Rodopi.

4 Synchronized Subtitles in Live Television Programmes

Mercedes de Castro, Luis Puente Rodríguez and Belén Ruiz Mezcua

4.1 Introduction

In recent years, a substantial increase in the demand for multimedia products has taken place, an increase that is being met by prerecorded or live multimedia programmes offered by broadcasters, IPTV or the Internet. At the same time, in the coming years, an increase is expected in the number of adults in Europe with problems accessing digital television, as has been highlighted by the DTV4All project (Looms 2009). For this part of the population, subtitles are needed to access the audio content of TV programmes and to ensure the compliance of broadcasters with regulatory standards currently in place worldwide (Romero-Fresco 2011). Subtitles not only benefit hearing impaired people, but are also beneficial in noisy environments or places where the audio must be turned off. Non-native speakers with limited knowledge of a local language, or for whom accent or speed is a problem, may also find subtitles helpful. These are some of the reasons why, at present, many films, TV series and prerecorded programmes are being produced with offline-generated subtitles. Live multimedia also requires subtitles, but real-time production implies technical difficulties and lower quality than in the case of prerecorded subtitles. This has led to a number of research projects and technological innovations, some of which are looking into the synchronization of the audio/video with the subtitles, for example the DTV4All project, the APyCA system (Álvarez et al. 2010) and AudioToText Synchronization (García et al. 2009). As will be explained, subtitling live events is a complex process, where the required immediacy limits the quality of the results in terms of content, accuracy and delay. The subtitling process consists of a number of


steps ranging from audio reception to the presentation of subtitles on the media players of users. This is the reason why, in live subtitling environments, subtitles are only available several seconds after the originating audio utterance. In the best-case scenario, the delay in creating text subtitles is of several seconds’ duration and, depending on the method used, a reduction in production time may result in low-quality subtitles, affecting overall comprehension. Unsynchronized subtitles tend, thus, to lead to audience dissatisfaction (Rander and Looms 2010: 158) due to their negative impact on comprehension (Mason and Salmon 2009) and the extra effort needed when trying to match the meaning of the text with the images, intonation or nonverbal communication which is no longer visible/audible on screen. This chapter presents the results of the research project Synchronized Subtitling in Live Television: Proof of Concept led by University Carlos III of Madrid in collaboration with the Spanish Centre of Subtitling and Audio Description, and partly funded by France Telecom. The objective of this project was to evaluate and measure actual delays in live subtitling in Spain and to develop and demonstrate ways of providing users with alternatives to avoid unsynchronized subtitles on live TV programmes.

4.2 Subtitling live TV programmes

The real-time subtitling of live television programmes is a multidisciplinary research field encompassing disciplines and technologies from, amongst others, the fields of audiovisual translation, automatic speech recognition, respeaking, natural language processing, computer science, network transmission and broadcasting. For a better understanding of the processes involved, we should take note of the steps outlined in Figure 4.1:

1. Audio transcription: manual, semi-automatic or automatic real-time transcription of speech from one or many speakers into text.
2. Subtitle generation: the text is split into subtitles according to standards (AENOR 2012). Editing/correction and natural language processing may be included in this step.
3. Coding and packetization: coding into the appropriate subtitle protocol (e.g. DVB Sub).
4. Transmission/broadcasting: network transmission/broadcasting, including video and audio.
5. Reception, decoding and presentation on the user's screen.
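Step 2 can be illustrated with a naive line-splitting routine (a sketch of our own: the 37-character line limit is a common convention in subtitling guidelines, not a value taken from this chapter, and the function name is hypothetical):

```python
def split_into_subtitle_lines(text, max_chars=37):
    """Naively split transcribed text into subtitle lines of at most
    max_chars characters, breaking only at word boundaries."""
    lines, line = [], ""
    for word in text.split():
        if line and len(line) + 1 + len(word) > max_chars:
            lines.append(line)
            line = word
        else:
            line = f"{line} {word}".strip()
    if line:
        lines.append(line)
    return lines

lines = split_into_subtitle_lines(
    "the text is split into subtitles according to standards"
)
# -> ['the text is split into subtitles', 'according to standards']
```

A production system would also apply segmentation rules (sense blocks, punctuation, reading speed) rather than a purely length-based split.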

Figure 4.1 Subsystems involved in real-time subtitling on live television (audio transcription by automatic speech recognition, respeaking + ASR or stenotype, with corrections, speaker colours and context info; subtitle generation; coding and packetization; transmission/broadcast over DTT, the Internet, IPTV or HbbTV; reception and presentation)

4.2.1 Audio transcription
An alternative to live speech transcription is the use of stenotypists, who produce the text transcription of the audio manually in real time. Although quality and speed are good, the cost of this process and the low availability of stenotypists constitute limitations as far as real-time mass subtitling is concerned. Another alternative is the use of automatic speech recognition (ASR) engines directly applied to the audio. Costs are drastically reduced, but available technology is not yet of an acceptable quality in areas requiring speaker independence and large dictionaries. Error rates are very sensitive to audio quality, signal to noise ratios and even the noise type. In order to obtain better results than with direct ASR, a technique known as respeaking may also be used. In respeaking, an intermediate respeaker will use ASR systems trained to his/her voice and specific vocabulary. The editing/correction of the generated subtitles is also common. Currently, respeaking is the normal practice in live subtitling for television and is the most common procedure in countries where live television subtitling is widely available, such as the UK, Spain, France, Germany and the USA (Eugeni 2009; Romero-Fresco 2011). ASR systems can also be used to minimize human intervention in certain television programmes  – the news for instance  –, where the text is available in advance (García et al. 2009), or can be adapted to support


multiple speakers simultaneously (Wald 2008). As is highlighted by Boulianne et al. (2008), the use of ASR as a remote captioning application is also a possibility and should have tremendous cost-saving benefits. A complete description of the different transcription methods used for the real-time audio transcription of live programmes can be found in Romero-Fresco (2011).

4.2.2 ASR in live subtitling

Today's ASR systems are able to recognize arbitrary sentences with a large, but finite, vocabulary. Typical vocabulary sizes are of the order of 10,000–100,000 word forms (Bisani and Ney 2005). A large-vocabulary speech-recognition system is mainly composed of the acoustic model, the language model and the decoder. The acoustic model assigns probabilities to phonetic elements (phonemes, triphones, sub-words, etc.) for every sequence of input observations. The language model evaluates the probability of the word sequences hypothesized from the acoustic model output. Combined, they result in multiple word sequence hypotheses (Ruokolainen 2009). To find the best recognition hypothesis, the decoder should try all possible transcripts and pick the one with the highest probability (Siivola 2007). As a consequence, ASR does not deliver transcriptions at the same pace as the audio, but uses current audio input to find more probable alternatives to the former fragments, thus increasing accuracy. Transcriptions are held back until there is confidence that new incoming words will not change the probability of former ones; this usually occurs during periods of silence. The longer the voice fragment, the lower the probability of a word at the beginning being changed in the final transcript. For this reason, many systems tend to use long speech fragments before issuing a final decision, increasing accuracy but penalizing response time, because no output is produced until a full fragment has been received.
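The hold-back behaviour just described can be sketched as a simple emission policy (illustrative code of our own, not an actual decoder API): a hypothesis word is released as final once a sufficient silence follows it, or once it has been pending longer than a configurable maximum waiting time.

```python
def finalize(words, now, max_wait=2.0, silence_gap=0.6):
    """words: (word, start_s, end_s) hypothesis entries in time order.
    A word is released as final when it is followed by a silence of at
    least silence_gap seconds, or when it has been pending longer than
    max_wait; everything after the first still-revisable word is held."""
    final = []
    for i, (word, _start, end) in enumerate(words):
        next_start = words[i + 1][1] if i + 1 < len(words) else now
        if next_start - end >= silence_gap or now - end >= max_wait:
            final.append(word)
        else:
            break  # this word and everything after it remain revisable
    return final

# 'hello' has exceeded max_wait, 'world' is followed by silence,
# and 'again' is still held back for possible revision.
released = finalize(
    [("hello", 0.0, 0.4), ("world", 0.5, 0.9), ("again", 2.8, 3.2)], now=3.3
)
```

Lowering `max_wait` reduces delay at the cost of more words being finalized before their full context is available.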
It is possible to balance accuracy and delay by setting a maximum waiting time before issuing a final hypothesis. This is a critical issue in live TV subtitling. To mitigate ASR quality problems resulting from speaker variability, noisy environments and low-quality audio, an intermediary respeaker is usually in charge of live TV subtitling. This approach allows the phonetic model to be adjusted to the respeaker, who usually works in an isolated room; all this results in better rates of accuracy.

4.2.3 Composing a digital multimedia stream

In DTT broadcast and IPTV, TV channels are delivered to users by way of MPEG (ISO 2007) and DVB (ETSI 2009) Transport Stream codification


techniques. Subtitles can be conveyed in the Transport Stream in the form of DVB Subtitle stream(s) (ETSI 2006) or as Teletext subtitles embedded in the Teletext stream (ETSI 2003). Video, audio and subtitles are assembled according to MPEG and DVB standards to create a multimedia service (the MPEG term for TV channel) transmitted over IPTV or DTT broadcast networks. The use of a common clock reference is essential to the process of multiplexing video, audio and data in the same Transport Stream (according to the MPEG structure, subtitles fall into the data category). MPEG presentation time-stamps are assigned to video, audio and subtitle packets when multiplexing takes place; for this reason, the delay between a subtitle packet and its corresponding audio packets, which is caused by the time spent on the audio transcription, editing and coding processes, is maintained during transmission and reproduction. Different coding and packetization are used for Internet TV, but the same principles apply. As is shown in Figure 4.2, real-time subtitle generation is a process parallel to the encoding and packetization of the audio and video input signals into transmission packets. The packets from the three sources (audio, video and subtitles) are finally multiplexed and transmitted. Irrespective of the method used (respeaking, direct ASR, stenotype), subtitle packets are available several seconds after the corresponding video and audio packets have been created and sent. As a result, the appearance of subtitles on the user's screen will be out of step with the audio/video by several seconds.

Figure 4.2 Delay of several seconds between audio/video and subtitles in the transcription process
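The way the delay survives end-to-end can be made concrete with the 90 kHz clock used for MPEG presentation time-stamps (a simplified sketch; real multiplexers operate on PES packets and programme clock references):

```python
PTS_CLOCK_HZ = 90_000  # MPEG presentation time-stamps count 90 kHz ticks

def to_pts(seconds):
    """Convert a presentation time in seconds to PTS clock ticks."""
    return round(seconds * PTS_CLOCK_HZ)

# An audio access unit multiplexed for presentation at t = 10.0 s and its
# subtitle, produced 12.2 s later, are stamped against the same clock
# reference, so their offset survives transmission and playback unchanged.
audio_pts = to_pts(10.0)
subtitle_pts = to_pts(10.0 + 12.2)
delay_s = (subtitle_pts - audio_pts) / PTS_CLOCK_HZ  # 12.2
```

Because both time-stamps derive from one common clock, no step of transmission or decoding can close the gap on its own; that is precisely what the synchronization model in Section 4.4 addresses.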


4.2.4 The quality of live subtitling

Spanish standards concerned with subtitle quality (AENOR 2012) not only address formal presentation aspects such as the use of colours, the number of lines or reading time, but also encompass the content of the subtitles and refer to parameters such as literality, density and synchronization. Literality reflects the closeness of the written text to the spoken words in the audio, whereas density measures the number of words per minute presented in the subtitles according to the assumed reading speed of the viewers (Romero-Fresco 2011). Synchronization is related to the ideal time-in and time-out settings, enabling subtitles to appear on the screen in synchrony with the audio and images. Density and literality are closely related and a compromise is necessary when the word rate of the speakers exceeds the viewers' reading speed. Lack of synchronization is highly disturbing as it creates dissociation between the essential elements within the audiovisual communication and tends to be the main reason for complaint.

4.2.5 Subtitle delays on live TV

In live subtitling, real-time constraints strongly affect all the quality parameters noted above. Literality and synchronization conflict with one another and, as will be shown in the following paragraphs, significant delays occur between audio/video and subtitles. In ASR-based live subtitling environments, the better the accuracy between the oral and written elements, the greater the delay, so any solution intended to minimize the negative impact that subtitle delays have on the audience must always take accuracy into account. According to certain studies, accuracy can, in some cases, be close to 97 per cent or 98 per cent (Lambourne et al. 2004). An exception to this occurs when the speech content is known in advance and pre-prepared subtitles can then be broadcast synchronously, either with human intervention or automatically (García et al. 2009; Gao et al. 2010).
Taking into account the fact that audio, video and subtitles undergo parallel coding, packetization and transmission processes, it is in the audio transcription subsystem (see Figure 4.1) where delays between audio/video and subtitles are generated. According to Romero-Fresco (2011), when a stenotype is used, delays are small if subtitles are emitted word by word, but may be significant if block subtitles are used. With respeaking, the main sources of delay with regard to the audio include the time needed by the respeaker to listen to an audio fragment, the time needed to respeak it into an ASR system and the time the ASR needs to produce the transcription. Respeakers can be trained to insert silences so that the ASR can generate shorter text strings and thus

Synchronized Subtitles in Live Television Programmes 57

reduce transcription time. The additional tasks of correcting errors and adding punctuation and colours also add to the overall time required before the final subtitles are obtained (Wald et al. 2007).
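With invented round figures, the delay components listed above simply add up (none of these values is a measurement from this chapter):

```python
# Invented figures for one respoken fragment (seconds)
listen_time = 1.5    # respeaker listens to the original audio fragment
respeak_time = 1.0   # respeaker dictates it into the ASR system
asr_time = 1.2       # ASR produces the transcription
edit_time = 0.8      # correcting errors, adding punctuation and colours
total_delay = listen_time + respeak_time + asr_time + edit_time  # about 4.5 s
```

Shortening any single component (for instance, inserting silences so the ASR works on shorter strings) reduces the total, which is why the measured delays in the next section vary so widely from subtitle to subtitle.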

4.3 The delays affecting subtitles on live TV

The first part of the research presented in this chapter focuses on two questions closely related to one of the quality aspects of live subtitles, namely:

1. Are the delays between subtitles and video/audio in live programmes significant?
2. Are the delays between subtitles and video/audio in live programmes variable, or can they be considered constant?

Three tests were performed to evaluate subtitle delay variability in cases where subtitles are generated in real-time for live programmes. Test 1 involved an evaluation of end-user-perceived delays in live TV programmes subtitled in real-time with respeaking techniques. In Test 2, direct ASR was applied to the continuous audio stream of a TV channel and the resulting delays analysed. A similar test was performed in Test 3, but what was evaluated here were the delays in the subtitles obtained by applying ASR directly to selected clips from TV programmes with 'good' audio quality. The main difference between Test 2 and Test 3 was that, in Test 2, the TV channel audio was transcribed using ASR for hours, regardless of the types of TV programme; in Test 3, seven samples of TV programmes were selected where the audio quality was good enough to obtain better ASR transcriptions. Test 2 and Test 3 were, in fact, laboratory experiments carried out to evaluate the delays produced by ASR, while Test 1 measured the actual delays experienced by the viewers.

4.3.1 Test 1

The actual delays were those experienced by users watching live TV programmes. Test 1 was devised to measure subtitle delays in respoken, live TV programmes and the corresponding audio/video; the delays were then measured on the DTT signal received by the user. Clips from different live TV programmes from the Spanish public service broadcaster RTVE were used, all of them captured on the reception side. These programmes were broadcast with live subtitles created in real-time with respeaking; the ASR tool used at RTVE was Dragon Naturally Speaking v11 and the clips were analysed with the help of SoundForge 9.0.


Although there are, in fact, different proposals for the measurement of real-time subtitle delays in live TV programmes (Luyckx et al. 2010; AENOR 2012), a simplified approach was used in this research, since the main goal was purely to evaluate delay variability. Only respoken sentences matching sentences spoken in the TV programme were considered in the sample and delays were measured using the following formula:

Δt_i = t_sub,i − t_i

where t_sub,i is the time at which subtitle i appears on the screen and t_i is the start time of the first word in the corresponding sentence of the original audio. The precision of SoundForge 9.0 is better than 0.1sec for video analysis, enabling subtitles appearing on screen to be identified with at least 100-millisecond accuracy. However, original audio identification requires manual stop/start and wave analysis, which is less accurate. The precision of every single delay measurement is ±250 milliseconds.

The results shown in Figure 4.3 correspond to delays measured over 12 hours of TV programme recordings from Las mañanas de La 1, España Directo and 59 segundos, all of them live programmes from the Spanish RTVE channel La 1 with respoken subtitles, recorded on 16, 17, 20 and 22 June 2011. The graph plots the percentage of subtitles falling within each 0.5sec delay range. It shows that subtitle delays range from 5.0sec to 22.0sec, with an average delay of 12.2sec.

Figure 4.3 Subtitle delay measurements taken from samples of TV magazines from the RTVE channel La 1 (percentage of subtitles per 0.5 sec delay bin; mean value = 12.2 sec)
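The measurement procedure can be reproduced in a few lines (our own sketch of the Δt_i = t_sub,i − t_i calculation with 0.5 sec binning; the timings below are invented, not the chapter's data):

```python
from collections import Counter

def delay_histogram(subtitle_times, audio_times, bin_width=0.5):
    """Delays Δt_i = t_sub,i − t_i for matched sentences; returns the mean
    delay and the percentage of subtitles falling into each bin_width bin."""
    delays = [ts - ta for ts, ta in zip(subtitle_times, audio_times)]
    bins = Counter(int(d // bin_width) * bin_width for d in delays)
    total = len(delays)
    mean = sum(delays) / total
    return mean, {b: 100.0 * c / total for b, c in sorted(bins.items())}

# Invented sample: four sentences starting at t = 0 whose subtitles appear
# 5.0, 5.2, 6.0 and 5.8 seconds later.
mean, hist = delay_histogram([5.0, 5.2, 6.0, 5.8], [0.0, 0.0, 0.0, 0.0])
```

Applied to the 12 hours of recordings described above, this kind of tabulation yields the distribution plotted in Figure 4.3.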

Figure 4.4 Subtitle delays in the live TV programme Las mañanas de La 1, RTVE, Spain (delay in seconds plotted against subtitle number)
A sample from the evolution of subtitle delays during a TV programme is shown in Figure 4.4, providing another view on delay variability. Although delays can be caused at different steps of the live subtitle creation process, Test 1 only shows resulting delays, that is, the delays experienced by users on their TV sets.

4.3.2 Test 2 and Test 3

Subtitle delays were calculated in Test 2 and Test 3, taking into account the difference between audio utterances and their corresponding text transcriptions, without considering further processing time spent on subtitle generation. Transcriptions were created by applying ASR to the audio extracted from a TV channel conveyed in DVB-T. Delays were calculated for this step only, without the further processing of transcriptions to create subtitles and without any other transmission or broadcasting steps. The main conclusions obtained relate to the variability of transcription delays when transcribing the audio element of TV programmes using direct ASR instead of respeaking and subtitle editing and transmission.

The results from Test 2, obtained from transcriptions of 24 hours of continuous audio from Spanish TV channel La 1, with direct ASR applied to the audio channel, are shown in Figure 4.5. In this case, subtitle delays range from 1.0sec to 40.0sec and the average delay of the sample is 7.5sec. The graph shows the occurrence of every measured delay, in percentage units.

Figure 4.5 Variable transcription delays using direct ASR for 24 hours of non-controlled audio on Spanish TV channel La 1 (9,200 audio fragments; mean value = 7.5 sec)

The results of Test 3 for the 60-minute transcription of seven live TV programmes with direct ASR applied to the audio as previously explained are shown in Figure 4.6.

Figure 4.6 Variable transcription delays using direct ASR for seven live TV programmes with controlled audio (mean value = 3.9 sec)

Table 4.1 summarizes the key variables of the tests and the samples used:

Table 4.1 Characterization of tests performed to evaluate delay variability

Transcription method / Language:
• Test 1: Respeaker + ASR / Spanish.
• Test 2: Direct ASR / Spanish.
• Test 3: Direct ASR / Spanish.

TV samples – Programme types:
• Test 1: Las mañanas de La 1 (RTVE channel La 1, live magazine); España Directo (RTVE channel La 1, live magazine); 59 segundos (RTVE channel La 1, live TV debate).
• Test 2: 24 h of continuous audio samples retrieved from RTVE channel La 1.
• Test 3: Telediario (RTVE channel La 1, interview with several speakers on TV set); Telediario (RTVE channel La 1, interview with correspondent); tennis match (RTVE channel La 1, speaker locution during match play); handball match (RTVE channel Teledeporte, speaker locution during match play); El hormiguero (channel Cuatro, live magazine); interview with J. L. Rodríguez Zapatero (RTVE channel La 1, interview with several speakers on TV set).

Live subtitling phases considered:
• Test 1: Respeaking, transcription, subtitle generation, DVB-Sub coding, DVB packetization and multiplexing, transmission, reception, reproduction on user TV receiver.
• Test 2: DVB audio retrieval on DVB-T reception side, transcription.
• Test 3: DVB audio retrieval on DVB-T reception side, transcription.

4.4 Compensation for variable subtitle delays in live events

Two of the most relevant characteristics of respeaking subtitle delays have now been described, namely their variability and the fact that they are significant in terms of length. The next phase of this research was to propose a model to compensate for these variable delays and to ensure that a live event could be reproduced with subtitles and audio/video synchronized on the user's screen. The model provides a way of synchronizing subtitles with the video/audio on live TV, available on user request, so as to minimize the disturbing effect caused by the lack of synchronization. The model proposed is a general one, applicable not only to live TV programmes but to all live events where subtitles are created from real-time transcription, including conferences, classes, etc. The fact that live events are reproduced several seconds after they take place necessitates an intermediate alignment step in order to synchronize subtitles with the audio/video. In TV broadcasting/transmission in particular, the existence of a transmission phase (the TV network, whether broadcast, IPTV, HbbTV or the Internet) and a presentation phase (the set top box or the player triggered from the Internet navigator) offers the possibility of compensating for the delays produced when creating subtitles from speech in real-time, so that users have the option of experiencing a synchronized alternative on the screen. This slightly delayed live programme is known in these pages as a 'quasi-live' version of the live TV programme. Such a model takes into account the following aspects:

• Subtitle delays are variable.
• Transcription from audio to text could be obtained from direct ASR, respeaking or stenotype.
• It is possible to register the time reference of the transcription with regard to the original audio.
• It is possible to design an algorithm to infer the time references of words/sentences obtained from the transcription of the respeaker audio with regard to the original audio of the TV programme.
• It is possible to reproduce a TV programme or audiovisual event with a slight delay by using an intermediate buffering step involving the video and audio and enabling subtitle synchronization.
• The number of seconds in the overall delay of the programme is configurable.
• This option is user selectable.
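Taken together, these aspects amount to a simple scheduling rule, sketched below under our own naming (an illustration, not the patented procedure itself): audio/video are shifted by a fixed, configurable delay, and each subtitle is presented at the time reference of its originating audio plus that same delay, which resynchronizes every subtitle whose production delay did not exceed it.

```python
def quasi_live_schedule(events, total_delay):
    """events: (kind, t_orig, production_delay) tuples, where t_orig is the
    time reference in the original audio and production_delay is the time
    the subtitling process took (0 for audio/video packets). Returns the
    presentation time assigned to each event in the quasi-live version."""
    schedule = []
    for kind, t_orig, production_delay in events:
        if kind == "subtitle" and production_delay > total_delay:
            # Subtitle not ready in time: present it as soon as it exists
            schedule.append((kind, t_orig + production_delay))
        else:
            schedule.append((kind, t_orig + total_delay))
    return schedule

# A subtitle produced 12 s late is realigned with its audio when the
# configurable quasi-live delay is 15 s.
plan = quasi_live_schedule([("audio", 0.0, 0.0), ("subtitle", 0.0, 12.0)], 15.0)
```

The choice of the overall delay is therefore a trade-off: large enough to absorb most measured subtitle delays, small enough to keep the programme acceptably close to live.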


This proposal takes into account variable delays and, although other alternatives exist where an overall constant delay is proposed (Looms 2010: 5–6), this would not be applicable to live subtitles where the variability of delays is significant, as in the case of the Spanish public service broadcaster.

4.4.1 Model for live subtitle synchronization

The model has been designed for application to different live subtitling environments and has a patent proposal provided by University Carlos III of Madrid (de Castro et al. 2010). One of the key characteristics of the procedure is to assign time references to subtitles and their corresponding source audio fragments so that the individual subtitle delays (Δt_i) can be compensated for regardless of the time spent on each step of the subtitling process. In respeaking scenarios, the time references provided by the ASR engine for every word transcript refer to the respeaker audio and not to the original TV channel audio. The procedure, therefore, predicts the correct time references for every subtitle whenever they cannot be measured accurately, as in the case of respeaking. The model proposed in Figure 4.7 is based on the compensation of the time spent on translating speech into subtitles, regardless of the method used. It uses the time reference of the transcriptions with respect to the audio along with the rest of the process until the written text appears on the user's screen. Somewhere in the process a buffering step is necessary in order to allow subtitles to catch up with the audio and video. The model proposes the creation of a real-time interrelation between audio and subtitles, based on the time references of the original audio. This is the opposite of current practice, where the time interrelationship of audio-to-subtitles is built at the packetization stage, just before it is transmitted or broadcast onto the network (ISO 2007; ETSI 2009).
A two-step process is applied where different sub-steps depend on deployment alternatives and involve technology for the delivery of the audiovisual event. Phase 1 calculates the time reference for each individual subtitle with regard to the audio and video, whilst Phase 2 refers to the creation of a synchronized version by shifting the audio/video to match the subtitles temporally. This can be done by using resources on the transmission side, on the reception side, or indeed a combination of both. A synchronized version may also be created simply for storage, for offline distribution or prerecorded broadcasting. The model provides a generic framework for the synchronization of subtitles when they are created in real-time, before they are reproduced

64

Mercedes de Castro et al. Audiovisual Event Video and Audio generation

TEXT PLUS INDIVIDUAL TEXT SUBTITLES

BUFFERIZATION AND AUDIO, VIDEO AND SUBTITLE SYNCHRONIZATION TO THE ORIGINAL TIMING

Audio to Text Transcription AUDIO, VIDEO AND DELAYED SUBTITLES WITH TIMING INFORMATION Subtitle to Audio Synchronization AUDIO, VIDEO AND SYNCHRONIZED SUBTITLES Reproduction

Figure 4.7 Model to obtain synchronized subtitles from the audio in real-time subtitling

on the user’s screen. It allows for different audio-to-text transcription processes (direct ASR, respeaking), architectures, coding and packetization protocols, and different distribution networks. Once the delay of each individual subtitle is known, two options exist to synchronize the subtitles and audio: compensating for the subtitling delays before transmission, or upon reception. Compensating for the delay before transmission is technically the simplest option, although there are drawbacks precluding its general application. It will affect all users, unless a bandwidth penalty is accepted so that an extra TV service can carry the delayed version, an option that is reasonable only in the case of IPTV channels or Internet TV. Legal considerations might prevent its application in many countries. There are, however, ‘quasi-live’ programmes where immediacy is not too important and synchronization upon emission might be acceptable. The other alternative is to correct the delay upon reception, which requires an enhanced set top box or app for users of this new facility. The procedure comprises the following functional steps:

a) Monitoring the audio time reference when every subtitle is generated.
b) Handling, together with the subtitles, the time references needed to realign them.

Synchronized Subtitles in Live Television Programmes 65

c) Transmitting either a synchronized version of the audiovisual programme, or a version that is user-selectable upon reception, using enhanced compatible MPEG/DVB formats.
d) Depending on the case in question, transmitting a potential synchronized subtitling service to end-users.
e) Showing, in the onscreen menus of the IPTV set top box, DTT or HbbTV receiver, the option to reproduce the whole event synchronized, with the several-second delay this entails. A similar approach exists for Internet TV.
f) When the user selects the option to reproduce a slightly delayed TV channel in synchronized mode, the set top box receives packets from all the streams, buffers them in appropriate queues and reproduces the subtitles according to the variable delays of each individual subtitle. Video and audio packets are buffered for a fixed, configurable amount of time.
g) Optionally storing a version of the audiovisual programme with either synchronized subtitles or ready-to-synchronize subtitles.

Some of the options require a DTT receiver, IPTV or HbbTV set top box able to perform the synchronization between video/audio and subtitles based on the variable delays provided from the emission side. Users without such devices would not be affected. With the other options described here, existing receivers would be able to reproduce an audiovisual programme that has been synchronized before broadcasting. In some cases, an extension of the DVB standards (Reimers 2006) is proposed to carry the alternate presentation time-stamps of every subtitle, together with the associated signalling, allowing receivers to offer viewers an optional quasi-live reproduction alternative.

4.4.2 Proof of concept with delayed transmission over IPTV

The third phase of this research involved the design and implementation of one of the possible variants foreseen in the model for subtitle synchronization previously described.
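Before turning to the proof of concept, the reception-side behaviour described in steps (e) and (f) above can be sketched: the set top box delays audio/video packets by a fixed buffer and schedules each subtitle at its original-audio time on that delayed timeline, never before the subtitle has actually arrived (a hypothetical Python illustration; packet labels and timings are invented).

```python
def receiver_playout_schedule(av_packets, subtitles, buffer_s):
    """Merge audio/video packets (delayed by a fixed buffer) with subtitles
    scheduled at their original-audio times on the delayed timeline.
    av_packets: list of (t, label); subtitles: list of (t_audio, t_ready, text).
    Returns the playout schedule sorted by presentation time."""
    events = [(t + buffer_s, label) for t, label in av_packets]
    for t_audio, t_ready, text in subtitles:
        # A subtitle can never be shown before it exists.
        events.append((max(t_audio + buffer_s, t_ready), "SUB: " + text))
    return sorted(events)

sched = receiver_playout_schedule(
    av_packets=[(0.0, "AV-0"), (1.0, "AV-1")],
    subtitles=[(0.0, 12.0, "hola")],  # created 12 s after its audio
    buffer_s=20.0,
)
print(sched[0])  # (20.0, 'AV-0')
```

With a 20-second buffer the 12-second subtitle delay is fully absorbed, so the subtitle plays out at the same delayed instant as its source audio.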
The project Synchronized Subtitling in Live Television: Proof of Concept provided real-time subtitling for TV programmes, using ASR techniques to generate subtitles from the audio of TV channels. In this project, the model for subtitle synchronization was based on synchronizing the subtitles with the audio/video before broadcasting the TV channel over IPTV. The system is able to generate a DVB/MPEG signal with subtitles running in sync with the original audio/video, which is then broadcast with a slight delay compared with the original signal via an additional,
user-selectable IPTV channel. The slight delay is calculated to allow all individual subtitle delays to be compensated for. Typical values in Spain would be around 20 seconds, according to the measured subtitle delays. The objectives of the project also included research into applied ASR in selected live TV scenarios. The functional steps in this proof of concept are as follows:

a) Detecting broadcast terrestrial TV input channels.
b) Extracting the audio signal from the input channel.
c) Generating an audio-to-text transcription by applying the Dragon NaturallySpeaking ASR system, previously trained for the selected programmes, and obtaining the temporal information necessary for the subsequent synchronization of subtitles.
d) Dividing textual transcriptions into subtitles, taking into account the Spanish guidelines (AENOR 2012) and the timing information required for further synchronization.
e) Encoding subtitles according to DVB-Sub closed caption standards.
f) Generating a DVB stream that includes both the original video and audio and the subtitles as DVB-Sub closed captions, after applying synchronization techniques between the video/audio and the real-time generated subtitles.
g) Delivering the new TV channel as a new IPTV channel.

A representation of the main parts of the process is shown in Figure 4.8 and Figure 4.10. The model selector makes it possible to select the ASR recognizer model best suited to the DTT programme characteristics. The speech recognizer module transcribes the audio stream and submits a sequence of strings to the subtitle maker, each one labelled with the start and end time of the original audio. The subtitle maker processes these

[Figure 4.8 here: model selector, speech recognizer, subtitle maker and DVB-Sub generator modules.]

Figure 4.8 Modules for subtitle generation in TV live programmes where direct ASR is used to generate audio-to-text transcriptions


strings, converting them into subtitles and calculating their ‘in’ and ‘out’ times by referring to the original audio. Figure 4.9 shows the internal process in which time references for the audio utterances in the speech recognizer module are calculated. Each audio fragment processed is assigned a tuple of three variables, namely, its transcribed audio text (Txi) and the beginning (tiB) and ending times (tiE) with reference to the corresponding audio fragment. With these time references, the DVB-Sub generator is able to calculate the duration of each caption, create packets containing DVB-Sub subtitles and deliver each subtitle in real-time to the muxer using times that correspond with the original audio. In addition, the original streams in the audio and video undergo a buffering step before being input into the muxer. The amount of time buffered is configurable and its value is used by the DVB-Sub generator to calculate the delivery time of each synchronized subtitle. This results in an effective compensation for the delay in the creation of each individual subtitle ΔtM between the moment when a caption is delivered and the time when the corresponding audio fragment occurred. As this system is based on applying ASR to the audio stream of the TV channel without an intermediate respeaker, the transcriptions (and therefore the subtitles) obtained are of poor quality in terms of accuracy, but this is enough to demonstrate the viability of the proposed synchronization model.
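The tuple bookkeeping described above can be sketched as follows (a simplified hypothetical illustration; real DVB-Sub packets carry presentation time-stamps rather than a dictionary of times, and the function name is invented):

```python
def captions_from_asr(tuples):
    """Turn ASR output tuples (Tx_i, t_iB, t_iE) into caption records:
    each caption's 'in' time is the beginning time of its audio fragment,
    its 'out' time the ending time, and its duration the difference."""
    captions = []
    for text, t_begin, t_end in tuples:
        captions.append({"text": text, "in": t_begin, "out": t_end,
                         "duration": t_end - t_begin})
    return captions

caps = captions_from_asr([("TX1", 0.0, 2.5), ("TX2", 3.1, 5.0)])
print(caps[0]["duration"])  # 2.5
```

Because every caption carries times referred to the original audio, the generator can later deliver it on the delayed timeline without knowing how long the transcription step itself took.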

[Figure 4.9 here: against a real-time clock reference, the original audio is divided into fragments 1–4 (with a silence between fragments 3 and 4); the transcription process (automatic speech recognition) outputs the texts TX1–TX4, each carrying the beginning (tiB) and ending (tiE) times of its source fragment.]

Figure 4.9 Keeping track of time references between audio and subtitles

Figure 4.10 shows subsystems dealing with the creation of a DVB signal where subtitles are synchronized. The muxer is responsible for

[Figure 4.10 here: the system time-stamped transcription feeds the DVB-sub generator; the delayer holds the audio/video and outputs delayed PES packets; the muxer (DVB-sync) combines the DVB-sub stream, the delayed PES packets and the PMT produced by the PMT maker, and hands the result to the IPTV transmitter.]

Figure 4.10 Modules and data flow for TV channel buffering, subtitle synchronization, multiplexing and IPTV transmission

creating the final synchronized signal. It multiplexes the streams to create a complete MPEG transport stream including audio, video and synchronized subtitles, together with a new, dynamically created PMT containing signalling information for the synchronized channel. The contribution of the delayer is needed to hold back the video and audio of the selected TV channel by several seconds (typical values are around 20 seconds). The system runs on two central processing units that share a common clock reference.
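The delayer's role can be sketched as a fixed-hold FIFO queue (a hypothetical illustration; the class name, hold time and packet labels are invented):

```python
from collections import deque

class Delayer:
    """Holds audio/video packets for a fixed, configurable number of
    seconds before releasing them to the muxer."""
    def __init__(self, hold_s: float):
        self.hold_s = hold_s
        self._queue = deque()  # items are (release_time, packet)

    def push(self, now: float, packet) -> None:
        self._queue.append((now + self.hold_s, packet))

    def pop_ready(self, now: float):
        """Release every packet whose hold time has elapsed, in FIFO order."""
        out = []
        while self._queue and self._queue[0][0] <= now:
            out.append(self._queue.popleft()[1])
        return out

d = Delayer(hold_s=20.0)
d.push(0.0, "PES-0")
d.push(1.0, "PES-1")
print(d.pop_ready(19.0))  # []
print(d.pop_ready(20.5))  # ['PES-0']
print(d.pop_ready(21.0))  # ['PES-1']
```

Because packets arrive in timestamp order, a simple deque suffices; the configurable hold time is the same global buffering value that the DVB-Sub generator uses when computing subtitle delivery times.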

4.5 Results

An example of the results, available at www.youtube.com/watch?v=WzCENDbqf6I, shows how the system behaves when it is applied to a sample from the news programme Telediario on the Spanish RTVE channel La 1, subtitled in the laboratory by applying direct ASR. A synchronized version of the programme, aligned in real-time, is offered 20 seconds later via IPTV. The screen on the left shows the live TV channel, whereas the screen on the right shows the subtitled and synchronized channel, namely the ‘quasi-live’ channel. It is important to note that the use of direct ASR gives poor results in terms of transcription accuracy, although this is not the point here. The main results of the first part of this research show that live subtitles created with respeaking have the following characteristics:
• Block subtitles are presented on the user’s screen with a delay ranging from five to 22 seconds for the sample analysed.
• Subtitle delay fluctuation is continuous throughout the same TV programme.
• The use of ASR is one of the sources of subtitle delays and is also one of the causes of delay variability.

In the second part of this research, a model to compensate for variable subtitle delays, and one of its possible implementations, has been evaluated. The system was developed as a proof of concept for the individual synchronization of real-time subtitles, with direct ASR applied to the live TV programme audio and IPTV as the delivery option. These are the main results:

• It is possible to measure the correct presentation times for real-time subtitles created with the help of ASR engines.
• The synchronization process is operative and able to synchronize 95 per cent of the ASR-generated subtitles individually, with a maximum error of one second, if a global buffering time of 20 seconds is applied to the audio and video of the TV channel. Improvements over non-synchronized subtitles are also obtained when lower values are set for the buffering time.

These results take into consideration only those situations in which the ASR provides transcriptions with at least a 70 per cent level of accuracy. Although direct ASR has been used to validate the synchronization model, it is not considered to be a practical option for live TV programme subtitling.
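The trade-off between the global buffering time and the proportion of subtitles that can be synchronized can be illustrated with a small calculation (the delay values below are invented, within the 5–22 second range reported for the sample analysed; the function name is hypothetical):

```python
def sync_coverage(delays, buffer_s, max_error_s=1.0):
    """Fraction of subtitles that can be presented within max_error_s of
    their original audio time when playout is delayed by buffer_s seconds.
    A subtitle whose creation delay exceeds the buffer is late by
    (delay - buffer_s) seconds on the delayed timeline."""
    ok = sum(1 for d in delays if max(0.0, d - buffer_s) <= max_error_s)
    return ok / len(delays)

# Illustrative per-subtitle delays (seconds) spanning the reported range:
measured = [5, 8, 11, 14, 16, 18, 19, 20, 21, 22]
print(sync_coverage(measured, buffer_s=20))  # 0.9
```

With a 20-second buffer, only the 22-second subtitle misses the one-second tolerance here; shrinking the buffer trades end-to-end channel delay against synchronization coverage.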

4.6 Conclusions

In practice, current ASR technology is a long way from producing quality subtitles with negligible delay and, as there is a trade-off between speed and quality, even longer delays can occur when priority is given to quality. Subtitle delays vary significantly across the different scenarios involving subtitled live TV programmes in Spain, and their variation ranges preclude a solution based on applying a fixed delay to the video/audio. A solution to enhance the quality of live subtitles should take into account the variable delays produced when creating subtitles in real-time. The model proposed to solve this problem is only one among many, and its implementation in the first scenario demonstrates the viability
of individual subtitle synchronization with audio and video when subtitles are created in real-time. Taking into account the continuous growth of IPTV markets worldwide, the delivery of a quasi-live version of a TV channel via IPTV is a realistic possibility. The results are good enough to justify the continuation of this research baseline. There are several avenues open to further research; these are mainly related to the accuracy and quality of the model when subtitles are obtained by applying ASR to the audio signal of the TV programme. Project objectives are being extended to include other scenarios involving subtitle synchronization in environments based on direct ASR with further editing for transcription enhancement and live TV subtitling based on ASR respeaking and editing. Broadcasting alternatives where there is an option for synchronization upon reception may also be considered.

Acknowledgements

This research project has been led by University Carlos III of Madrid in collaboration with the Spanish Centre of Subtitling and Audio Description.

References

AENOR. 2012. Subtitulado para personas sordas y personas con discapacidad auditiva [Subtitling for deaf and hard-of-hearing people]. Madrid: AENOR.
Álvarez, Aitor, Arantza del Pozo and Andoni Arruti. 2010. ‘Apyca: Towards the automatic subtitling of television content in Spanish’. In International Multiconference on Computer Science and Information Technology – IMCSIT (pp. 567–74). Wisla, Poland. www.informatik.uni-trier.de/~ley/db/conf/imcsit/imcsit2010.html.
Bisani, Maximilian and Hermann Ney. 2005. ‘Open vocabulary speech recognition with flat hybrid models’. Proceedings of Interspeech 2005: 725–8.
Boulianne, Gilles, Maryse Boisvert and Frederic Osterrath. 2008. ‘Real-time speech recognition captioning of events and meetings’. IEEE Spoken Language Technology Workshop, SLT 2008: 197–200.
de Castro, Mercedes, Manuel de Pedro, Belén Ruiz and Javier Jiménez. 2010. Procedimiento y dispositivo para sincronizar subtítulos con audio en subtitulación en directo [Procedure and device for synchronizing subtitles with audio in live subtitling]. Oficina Española de Patentes y Marcas, Patent Id. P201030758.
ETSI. 2003. Digital Video Broadcasting (DVB); Specification for conveying ITU-R System B Teletext in DVB bitstreams. European Telecommunications Standards Institute, ETSI EN 300 472 V1.3.1.
ETSI. 2006. Digital Video Broadcasting (DVB); Subtitling systems. European Telecommunications Standards Institute, ETSI EN 300 743 V1.3.1.
ETSI. 2009. Digital Video Broadcasting (DVB); Specification for Service Information in DVB Systems. European Telecommunications Standards Institute, (DVB-SI) ETSI EN 300 468 V1.11.1.

Eugeni, Carlo. 2009. ‘Respeaking the BBC news’. The Sign Language Translator and Interpreter 3(1): 29–68.
Gao, Jie, Qingwei Zhao and Yonghong Yan. 2010. ‘Automatic synchronization of live speech and its transcripts based on a frame-synchronous likelihood ratio test’. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2010): 1622–5.
García, José E., Alfonso Ortega, Eduardo Lleida, Tomas Lozano, Emiliano Bernues and Daniel Sanchez. 2009. ‘Audio and text synchronization for TV news subtitling based on automatic speech recognition’. IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, BMSB ’09: 1–6.
ISO. 2007. Information Technology – Generic Coding of Moving Pictures and Associated Audio Information: Systems. International Organization for Standardization, ISO/IEC 13818-1:2007.
Lambourne, Andrew, Jill Hewitt, Caroline Lyon and Sandra Warren. 2004. ‘Speech-based real-time subtitling services’. International Journal of Speech Technology 7: 269–79.
Luyckx, Bieke, Tijs Delbeke, Luuk Van Waes, Mariëlle Leijten and Aline Remael. 2010. Live Subtitling with Speech Recognition: Causes and Consequences of Text Reduction. Artesis VT Working Papers in Translation Studies 2010–2011. Antwerp: Artesis University College Antwerp.
Looms, Peter O. 2009. ‘E-inclusiveness and digital television in Europe – A holistic model’. In C. Stephanidis (ed.) Universal Access in Human-Computer Interaction: Part I (pp. 550–8). Berlin: Springer Verlag.
Looms, Peter O. 2010. ‘The production and delivery of access services’. EBU Technical Review, 2010 Q3.
Mason, Andrew and Robert A. Salmon. 2009. ‘Factors Affecting Perception of Audio-video Synchronisation in Television’. BBC R&D White Paper WHP176.
Rander, Annie and Peter O. Looms. 2010. ‘The accessibility of television news with live subtitling on digital television’. In Proceedings of the 8th International Interactive Conference on Interactive TV&Video (EuroITV ’10) (pp. 155–60). New York: ACM.
Reimers, Ulrich H. 2006. ‘DVB – The family of international standards for digital video broadcasting’. Proceedings of the IEEE 94(1): 173–82.
Romero-Fresco, Pablo. 2011. Subtitling through Speech Recognition: Respeaking. Manchester: St Jerome.
Ruokolainen, Teemu. 2009. Topic Adaptation for Speech Recognition in Multimodal Environment. MA thesis. Helsinki University of Technology.
Siivola, Vesa. 2007. Language Models for Automatic Speech Recognition: Construction and Complexity Control. PhD thesis. Helsinki University of Technology.
Wald, Mike, John-Mark Bell, Philip Boulain, Karl Doody and Jim Gerrard. 2007. ‘Correcting automatic speech recognition captioning errors in real time’. International Journal of Speech Technology 10(1): 1–15.
Wald, Mike. 2008. ‘Captioning multiple speakers using speech recognition to assist disabled people’. Computers Helping People with Special Needs, Lecture Notes in Computer Science 5105: 617–23.

5 Cross-fertilization between Reception Studies in Audio Description and Interpreting Quality Assessment: The Role of the Describer’s Voice Emilia Iglesias Fernández, Silvia Martínez Martínez and Antonio Javier Chica Núñez

5.1 Introduction

Scholars, such as Braun (2007) and Gambier (2006), have been calling for a cross-fertilization of the disciplines concerned with the study of human communication, in an attempt to gain a fuller understanding of the complex nature of audio description (AD). The AD script is a hyposemiotic text that uses the aural medium as the sole vehicle for relaying the input derived from visual stimuli, both iconic and verbal. The nonvocal visual and/or verbal input is conveyed as aural output consisting of vocal stimuli (both verbal and nonverbal). Given how prominent the aural medium is in AD, one would expect research in this field to pay more attention to the aural dimension, and to attract attention from disciplines related to spoken communication and media studies, such as radio broadcasting, television, advertising and film theory.

A perusal of the existing literature on media communication reveals the potential of the aural dimension, from the characters’ vocal expression of emotion to the soundtrack, ambient sounds and special effects. While it is true that some scholars have highlighted the critical role of the aural medium in audiovisual products and their translation (Chion 1994; Díaz Cintas et al. 2007; Gottlieb 2005; Matamala 2005; Orero 2005; Salway and Palmer 2007), the true potential of the aural dimension has not yet been fully explored. Claims have also been made as to the need to extend the notion of text, moving beyond the ‘hypertrophy of this concept’ (Gambier
2006: 3) when working within multimodal environments. It is, thus, important to look at the properties and effects of the various vocal and prosodic features of AD narration, such as the audio describer’s pitch, voice quality, timbre, intonation, rhythm, vocal intensity and accent. As the AD script is heavily constrained by temporal limitations, scholars have tried to find answers to questions such as what and when and where to audio describe (Benecke 2007; Braun 2007, 2008; Hyks 2005), as well as to how to do it (Braun 2008; Gambier 2006; Gerzymisch-Arbogast 2007). Many of the solutions put forward to overcome these limitations have come from linguistic and sociolinguistic approaches to AD (Fels et al. 2006; Matamala 2005). Linguistic and sociolinguistic components are of fundamental importance, but insufficient if we want to account for a cross-modal instance of translation conveyed through the aural medium. This is where research on the quality assessment of interpreting can be of use, particularly as regards methodology. One way of exploring the effect of the audio described product on the target audience is to conduct reception studies amongst visually impaired users. Reception studies aim to match a text’s intentions with the outcome interpretations of the target audience. In Culler’s (1981: 13) words, this kind of study represents ‘an attempt to understand the object of study’s intelligibility and offer(s) the possibility of identifying the codes and interpretative assumptions that give their meaning for different audiences’. The aim does not involve so much a hermeneutics of the meaning of the text as an analysis of what the text means and for whom. This approach has been pursued in research concerning the quality assessment of interpreting by authors like Collados Aís (1998, 2002), Collados Aís et al. (2007) and Iglesias Fernández (2007). Reception studies on AD are extremely scarce in countries where, by contrast, the practice of AD goes back decades. 
This is the case in Spain, as well as in other countries with a relatively long history of AD. There are a few exceptions that are worth noting, however. For instance, studies on the perception and comprehension of audio described films by blind and visually impaired users have been conducted since the mid-1990s in the USA and China. Yeung (2007: 238) notes that some studies have aimed to map out the profile of AD audiences (Packer and Kirchner 1997), with some being concerned with evaluating the benefits of AD services (Peli et al. 1996; Petré 2005; Schmeidler and Kirchner 2001), while others have looked into ways of measuring possible improvements in film AD. This is the case of Romero-Fresco and Fryer’s (2013)
study, which has examined the advantages and disadvantages of audio described introductions, and contains discussions concerning the cinematic and technical aspects of film and the style of direction. The results of these studies have been extremely positive, as they have revealed the key contribution of AD in fostering social integration and education among visually impaired users. In particular, they have shown the usefulness of audio introductions. In Spain, Bourne and Lachat (2010) have looked into the effectiveness of the Spanish AD guidelines (AENOR 2005) on the basis of data gathered from an overall assessment carried out among AD users. They found the guidelines lacking in specifications related to aural stimuli in the AD. No mention was made of the audio describer’s intonation, rhythm, voice quality and the volume of the soundtrack. Using two AD excerpts from the feature films Master and Commander (Peter Weir 2003) and Elizabeth (Shekhar Kapur 1998), audio described according to the official AENOR guidelines, the authors elicited user preferences as well as overall quality assessment for each of the film excerpts by means of five questionnaires. Interestingly, findings from this reception study have identified preferences as regards the audio describer’s gender and issues related to the intonation and rhythm of the narration as possible causes for negative assessment. By the same token, issues regarding the sound intensity (volume) of the music soundtrack and the AD track have also been raised as having a bearing on the effect of the AD experience (Bourne and Lachat 2010). 
We strongly believe that reception studies in AD require thoughtful and rigorous research in order to gauge the social and cognitive needs of blind and visually impaired users, but we also think that it is necessary to take stock of the findings in reception studies in related disciplines, such as interpreting, media communication and social psychology in order to broaden the methodological scope used to investigate the reception of AD products. Experimental studies in the quality assessment related to interpreting have revealed the remarkable role played by prosodic issues such as intonation, fluency and voice quality in favourable assessments of interpreting quality (Collados Aís 1998, 2002; Collados Aís et al. 2007; Iglesias Fernández 2007; Pradas Macías 2003, 2007). Research in this field has shown that an interpreter’s melodic intonation and smooth rhythm, coupled with a pleasant voice, can lead users to consider that the message is more consistent with the original. In contrast, a flat and monotonous rhythm and an unpleasant voice have often resulted in a negative perception as regards the consistency and overall quality of the message’s content.


5.2 Audio description and interpreting: two complete strangers?

Since both AD and interpreting have much in common, the study of AD could benefit from insights into the methodology of interpreting quality assessment, especially in the case of simultaneous interpreting. They are both instances of translation: iconic and verbal images are translated into words in the former, and verbal and vocal information is translated into words in the latter. They are both shaped and determined by temporal constraints at input and output levels and, more importantly, they both share the aural dimension, which is the cornerstone of a communication medium in which auditory stimuli are foregrounded. These instances of aural production belong, mutatis mutandis, to the realm of acousmatics (Iglesias Fernández 2010). In this sense, AD users and simultaneous interpreting users share a similar experience, as they lack access to the source of the translated aural message1 and auditory stimuli are presented in a manner that leads to an individual subjective effect (Schaeffer 2004: 76–77). It, therefore, seems only natural to investigate what quality assessment research into interpreting can teach us about quality reception in AD.

The concept of quality in interpreting goes hand in hand with the evolution of the discipline, with the first approaches being informed by the predominant linguistic paradigms of the time and, hence, adopting a narrow lexico-semantic approach based on linguistic equivalence. Quality was tantamount to equivalence at the propositional level. This mathematical, quantitative perspective has largely treated quality as separate from the other communicative dimensions associated with the act of interpretation. The interpreters’ rendition has been seen solely in terms of the transfer of linguistic information, with their role being to transfer the semantic load from one language into another, affecting neither the interpreting process nor the product.
The role of interpreters as ‘invisible’ or ‘ghost’ participants is also found in the literature on AD, in which the audio describer is presented as someone who should not attract attention (Benecke 2004). Reception studies in simultaneous interpreting are among the most robust and cohesive research fields in Translation Studies (Gile 2003). They have been consistently conducted since the end of the 1980s, yielding very productive results. Scholars have embarked on the study of the factors enhancing or undermining interpreting quality by looking at the end-recipients’ views on quality. It has been argued that, in order to strengthen the position of interpreters when negotiating

with employers and thus improve training, a better knowledge of what end-users expect is needed (Kurz 1989). With this purpose in mind, the interpretation of the products in question has been surveyed from the point of view of the end-recipient and research has found that the quality of interpreting services is evaluated by users in terms of what they perceive and that that perception is very much modelled by their expectations (Bühler 1986; Kurz 1989). Findings from research on user expectations as regards interpreting quality have shown that these are not the same for everyone, with differences in priorities being related to issues such as the recipient’s background (technical vs. academic users’ profiles), the setting (TV or cinema interpreting vs. conference interpreting), and the geographical origin of the user, their gender and age, amongst other factors. Despite variations in their priorities as to the features affecting quality in interpreting, all users agreed on a series of factors most affecting quality, namely an accurate and complete rendition of the message, the use of appropriate terminology and cohesive speech. A major difference in this trend was observed when user expectations of TV and cinema interpreting were elicited, as greater importance was placed in these settings on prosodic features, such as intonation, fluency and voice quality (Kurz and Pöchhacker 1995). As a result, a picture of quality as a dynamic concept emerged. Intrigued by the seemingly little importance given by users to the nonverbal features in interpreting expectations surveys, a group of scholars embarked on a series of experimental studies that, used in conjunction with survey research, they hoped would contribute to ascertaining the role of nonverbal elements in the assessment of quality among users. 
In these experiments, users were exposed to accurate interpreting rendered with a poor delivery marked by a slightly monotonous intonation, an unpleasant voice, a broad foreign accent or a lack of fluency (Collados Aís 1998, 2002; Collados Aís et al. 2007; Iglesias Fernández 2007; Pradas Macías 2003, 2007). Their findings revealed a wide gap between users’ abstract conceptualizations of what they thought good interpretation should be ‒ namely, an accurate and complete rendering of the original message ‒ and their actual judgements. Users were very poor judges of accuracy, for instance. They thought they had perceived inaccuracies and a diminished overall quality when they were exposed to accurate yet monotonous or vocally unpleasant interpretations lacking in fluency. It appeared that the fidelity of the message content could be easily compromised if a minimum standard of vocal quality was not guaranteed. Two relevant findings emerged. Firstly, users are very poor judges of what constitutes quality and rely on traditional, idealized notions
for their preferences. Secondly, the aural medium shapes perception in particular ways so that nonverbal weaknesses in delivery spill over with a marked effect on intelligibility and the perception of overall quality. In sum, the nonverbal vocal dimension emerged as a key element affecting the perceived quality of interpreting and, thus, as an indispensable ally in quality assurance. On the basis of the research findings with regard to interpreting, Iglesias Fernández (2010) has noted that, because of a similar background to simultaneous interpreting, reception studies in AD should not stop at the preconceived preferences of users, but progress to an elicitation of an assessment of actual instances of AD on the part of its users. Indeed, if the relevant weight of each quality component is to be ascertained, users should assess variations of quality of each component in order to understand the role of each parameter, the relations between them and their combined effect.

5.3 The aural medium in the narration of AD scripts

A perusal of the literature on AD shows most professional describers and guidelines taking the view that the narrator’s voice should not detract either from the scenes or the characters’ dialogues. This is also the stance adopted by the Spanish guidelines on AD (AENOR 2005), which recommend that the describer’s voice should be neutral and devoid of emotion. Recommendations in this regard range from the voice not detracting from the users’ attention (Benecke 2004) or stealing the scene (Hyks 2005), to articulatory features related to the describer’s clarity of enunciation (Orero 2005) or to sociolinguistic properties such as dialect or accent. A more dynamic definition of the role of the audio describer is found in Hyks’s (2005: 7) account of AD skills:

    A competent describer can summarise effectively, describe colourfully and accurately and convey the verbal picture in a vivid yet objective manner. This applies to both the writing and the delivery. An effective audio describer delivers the text in a tone that matches the programme material, at a measured pace, distinctly but never stealing the scene.

Very possibly, these definitions derive from the describer’s or researcher’s own experience or from reception studies. Definitions that are too vague present us with methodological problems. For instance, how can we operationalize the notion of ‘the voice matching the programme material’? How can we go about establishing the degree of ‘vividness’ required

78

Emilia Iglesias Fernández et al.

for a particular AD product? How can we realize Crook's (1999: 13, quoted in Fryer 2010) aspiration that 'AD products should be appealing to the ear as the images are appealing to the eye'? The answer lies in reception studies, as well as in the interrelation of a lively use of language, correct balance and a congruent prosodic and vocal quality in the narration. Findings in film studies have highlighted the crucial role of the aural dimension in cognitive processes, steering attention and reinforcing comprehension. Chion (1994) has explored the so-called phenomenon of 'audiovisual illusion' deriving from the relationship between image and sound. Sound adds value to the cinematic image, since it 'enriches a given image so as to create the definite impression, in the immediate or remembered experience one has of it, that this information or expression naturally comes from what is seen, and is already contained in the image itself' (ibid.: 5). In fact, experimental studies have substantiated the relation between aural stimuli in films – the soundtrack, ambient music or sound effects – and the visual content, drawing attention to temporally and structurally congruent visual elements (Bolivar et al. 1994). Likewise, we believe that the describer's voice can enhance as well as intensify a film's plot and the characters' intentions by fitting in with the emotional landscape. Voice in AD can serve as the communicating link between the screen and the audience, condensing all stimuli into a single experience. It is the purpose of this study to challenge the assumption made in the literature on AD – particularly the codes of practice – that the describer's voice best serves the interests of AD users if it is kept neutral.
Likewise, it aims to apply the research findings from the quality assessment of interpreting, with their emphasis on the mismatch between a priori expectations and actual judgements in situated contexts, where both vocal and nonverbal signals seem to play a crucial role. For AD to meet the needs of end-users and achieve its maximum potential, reception studies should explore the a priori expectations of visually impaired users as well as their priorities regarding audio described products, and tally them with their actual assessment of these products. This is the approach taken in this study, where visually impaired users' a priori judgements concerning quality issues in AD are tallied with their actual assessment of an excerpt of an AD film.

5.4 Methodology

This study has been conducted within the framework of the University of Granada’s AMATRA2 and PRA23 research projects. It was developed in

Cross-fertilization between AD and Interpreting Studies 79

two stages. During the first stage, an exploratory study was carried out with five visually impaired AD users affiliated to the Spanish national association for the blind (ONCE) in Madrid. The purpose of this exploratory study was to test the experimental material ‒ a three-minute excerpt from the feature film The Hours (Stephen Daldry 2002) ‒ at the same time as our hypothesis, namely the distinct nature of situated judgement vis-à-vis decontextualised abstract preferences and the significant role of the aural dimension in situated quality assessment. Once the validity of the material had been successfully tested in this first stage, the study was subsequently put to a larger sample of participants: ten visually impaired AD users from ONCE in Granada. Since the methodology used in both stages was identical, the findings of both studies are reported here.

5.4.1 The subjects

The exploratory study, conducted at the ONCE headquarters in Madrid, involved five subjects, four females and one male, aged between 45 and 60. They volunteered to be interviewed concerning their preferences and experience of AD in feature films, as well as to take part in an experimental study. Two participants were congenitally blind and three were partially sighted (with various degrees of sight). The subsequent study, conducted at the ONCE headquarters in Granada, involved ten participants (seven females and three males), distributed within various age groups (one was under 20 years of age, two under 30, three between 30 and 45, and four were over 60). Three were congenitally blind and seven, partially sighted. In total, 15 participants took part in the two studies. Their exposure to AD was varied: five of them listened to AD feature films on a weekly basis, three did so a couple of times a month and seven of them said that they had been exposed to films with AD a couple of times a year. In order to test different types of AD material, participants were divided into two randomly assigned subgroups both in the exploratory and the final study (one group of three and one of two in the exploratory study, and of five people in each group in the study carried out in Granada).

5.4.2 Material used in the experiment

5.4.2.1 Questionnaire on expectations of audio description

Participants were asked to complete a multiple-choice questionnaire in order to rank the AD features that, in their opinion, affected quality the most. Given the particular profile of our participants, the questionnaire
was replaced with a semi-structured interview where questions were read aloud by the researchers. The quality features to be rated were as follows: the length of the AD script (short and concise or longer and more descriptive), the quality of the music soundtrack, the quality of the ambient sound, the quality of special effects, and the quality of the audio describer's voice. General questions were also put to participants, such as their favourite film genre and their recollections of instances of good and/or bad experiences involving the AD of feature films. Users were also asked to rate the most enhancing and the most irritating features they had personally experienced when listening to AD feature films. We used a five-point Likert scale (one being 'null effect' and five being 'maximum effect'). This information would then be compared with the answers provided to the questions asked after the experiments had been carried out, as explained below. Socio-personal data were also elicited regarding the participants' age, sex, educational background and their previous exposure to AD in feature films.

5.4.2.2 Audio described material

The chosen material was the feature film The Hours. The dramatic and foreboding atmosphere of the film was considered to make it suitable material on which to test our hypothesis that congruent aural stimuli could contribute to an enhanced experience of an AD film and perhaps to a better understanding of the plot. We selected the opening scene (minutes 00:57–03:47) because of its high visual input and emotional load. The excerpt was also chosen because two of the aural stimuli, namely the music score and ambient sound, were congruent with the character's emotional state, as we believed that these would enhance the narration. The scene chosen was set in the 1940s and contained no dialogue, only an off-screen narration. It depicted an anguished woman, Virginia Woolf, leaving a cottage and walking towards a river bank, where she would eventually drown herself. These images were interspersed with images of her husband entering the cottage, dropping his boots on the floor and finding a letter addressed to him, in which he would find out about his wife's sombre intentions. In addition to these visual stimuli, Virginia's off-screen voice could be heard reading the farewell note that she had left for her husband. The film was officially audio described in Spanish by a female describer working for ARISTIA S.L., a company that commissions AD projects for ONCE in Spain. We focused on a sub-excerpt from the opening scene (00:06–01:45) because it was particularly suitable for the purposes of the research. It also allowed us to erase the off-screen narration without
altering the other layers of information, as will be explained below. The corresponding AD script runs as follows: Sussex, Inglaterra. 1941 [Sussex, England. 1941]; una mujer se anuda el cinturón del abrigo y sale de una casa campestre [a woman ties the belt of her coat and steps out of a cottage]; atraviesa encorvada el jardín de la casa [she crosses, with her back hunched, the house's garden]; abre una valla y sale a un camino [she opens a fence and walks on to a pathway]; minutos antes había redactado una carta [minutes earlier she had written a letter]; se detiene en la orilla del río [she stops at the river bank]; Leonard entra en la casa campestre y deja unas botas en el suelo [Leonard enters the cottage and drops a pair of boots on the floor]. A trained female phonetician with wide experience in media voice coaching was hired and exposed to the original describer's voice.4 With the help of the free speech analysis software Praat (Boersma and Weenink 2000), the phonetician measured the acoustic and dynamic qualities of the original audio describer's voice. The measurements revealed high energy (high potency), high intensity (loud volume) and a rising pitch contour (vivid intonation). These acoustic features correlate with a confident and optimistic voice and, in psychological terms, they reveal a confident, extrovert and lively personality (Scherer 1979, 2003). The audio describer's voice was described as 'institutional', 'detached' and 'confident' by the professional phonetician. As these vocalizations did not seem to be congruent with the emotional landscape of the scene, it was decided to run an alternative recording of the same AD script, but this time with more congruent vocal material, using the voice of the female professional phonetician. We then investigated the vocal expression of emotion in the literature on social psychology, particularly the vocal correlates of sadness.
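As an aside for readers unfamiliar with this kind of acoustic analysis: the parameters measured in Praat here (intensity and pitch contour) have simple signal-processing analogues. The sketch below is only an illustration of what such measurements involve, not the study's procedure; it computes an RMS level and a crude autocorrelation pitch estimate on synthetic tones, with the 'loud' and 'soft' voice profiles entirely invented.

```python
# Illustrative sketch: rough analogues of two parameters measured in Praat,
# computed on synthetic sine tones (no real recordings are used here).
import math

SAMPLE_RATE = 16000  # samples per second


def synth_tone(f0, seconds, amplitude):
    """Generate a plain sine tone standing in for a voiced segment."""
    n = int(SAMPLE_RATE * seconds)
    return [amplitude * math.sin(2 * math.pi * f0 * i / SAMPLE_RATE)
            for i in range(n)]


def rms_db(samples):
    """RMS level in dB relative to full scale (amplitude 1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)


def estimate_f0(samples, fmin=75, fmax=500):
    """Crude autocorrelation pitch estimate over the fmin..fmax Hz range."""
    lo, hi = int(SAMPLE_RATE / fmax), int(SAMPLE_RATE / fmin)
    best_lag, best_score = lo, float("-inf")
    for lag in range(lo, hi + 1):
        score = sum(samples[i] * samples[i + lag]
                    for i in range(len(samples) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return SAMPLE_RATE / best_lag


# Two invented voice profiles: a loud, higher-pitched register versus a
# hushed, lower-pitched one (the direction of contrast described in the text).
loud = synth_tone(220, 0.2, 0.8)
soft = synth_tone(180, 0.2, 0.2)

loud_db, soft_db = rms_db(loud), rms_db(soft)
loud_f0, soft_f0 = estimate_f0(loud), estimate_f0(soft)
```

A real analysis would, of course, track these values frame by frame over a recording, which is what Praat's pitch and intensity objects provide.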
We found that relaxed and hushed vocal qualities combined with low intensity and a falling pitch contour helped to conjure up the listeners' impressions of sadness (Scherer 2003). With the help of Praat, we controlled the phonetician's vocal output as she narrated the AD script for the selected clip, so that the AD voice would fall within the vocal and dynamic parameters of sadness in a controlled environment. We refer to this recording as 'experimental material'. To ensure the ecological validity of our material, the mood-incongruent (original) AD narration was recorded again by the same phonetician who had recorded the mood-congruent AD narration (experimental material), so as to be able to contrast the two narrations. We refer to this recording as 'control material'. It should be
noted that the linguistic content of the AD narration was the same in both the control and the experimental materials. The ultimate goal was to contrast the response of the participants to the control material (the original, mood-incongruent voice) with their response to the experimental material (the mood-congruent voice of the same narrator). Both narrations were produced and used for the assessment of the audio describer’s vocal features, as well as for the assessment of the personality and emotional correlates of her voice. Two sets of clips were created for both the control and the experimental material: in the first set, used in Experiment 1, we purposely excluded the vocal stimuli pertaining to the character’s voice. Virginia’s off-screen narration expressing her anguish was erased from the track because our objective was to elicit the participants’ perceptions of the emotional load in the audio describer’s voice only and we wanted to avoid the bias the linguistic content of Virginia’s foreboding utterances might produce. The material retained the soundtrack and ambient sound of the turbulent river. A  second set of material, used in Experiment 2, was produced with the ultimate goal of exposing users to an authentic product of the sort that would be released in a professional context. This material involved the full aural stimuli: soundtrack, sound effects, AD narration and Virginia’s off-screen voice reading her farewell letter. As we had already produced two AD narrations (one congruent and another one incongruent with the emotional ambiance), we inserted each of these AD tracks in two clips containing Virginia Woolf’s off-screen narration. This resulted in two clips: (1) a congruent and full AD experience; and (2) an incongruent, full AD experience, both with Virginia Woolf’s off-screen thoughts. All in all, we worked with four clips. 
The first two explored the effect of vocal sonority and emotional load in a congruent and incongruent AD narration, whereas the other two clips explored the effect in the final AD product.

5.4.3 Experiment 1: assessment of the audio describer's vocal sonority and its corresponding emotional correlates

In order to tally the abstract preferences of AD users with their actual judgements of the AD clips, we followed the methodology employed by Collados Aís (1998, 2002) and Collados Aís et al. (2007) and, in particular, that used by Iglesias Fernández (2007) in their quality assessment studies on simultaneous interpreting for the quality criteria 'intonation' and 'pleasant voice'. The purpose of Experiment 1 was two-fold: (1) to test the emotional load in the audio describer's voice as perceived by
participants; and (2) to observe whether the mood-congruent AD narration helped to guide participants towards the retrieval of the character's final intentions when compared to the mood-incongruent AD narration. To ascertain this, we designed a semi-structured interview to be carried out after showing the clips to the study subjects. The interview asked about the attributes of emotional states as well as the character's intentions as identified in the AD narration only. Participants were asked to rate the sound quality of the AD voice and the emotional load on a five-point scale using a set of bipolar adjectives. These adjective pairs were based on Russell's (1980) multidimensional scaling model and represented the dimensions of activity/inactivity and pleasantness/unpleasantness. Participants were also asked to predict the outcome of the scene. It should be noted that none of the participants had been previously exposed to the film. Participants were divided into two randomly assigned subgroups. Subgroup A (eight participants in total) was exposed to the experimental recording, where the describer's voice was mood-congruent. Subgroup B (seven participants in total) was exposed to the control recording, where the describer's voice emulated the original ARISTIA narrator's incongruent voice, as shown in Table 5.1.

Table 5.1 Methodology, material and subjects in Experiment 1

Expectations study: AD quality expectations questionnaire (all participants)

Experiment 1:
                Study 1 ONCE Madrid            Study 2 ONCE Granada
                (5 participants)               (10 participants)
                Subgroup A     Subgroup B      Subgroup A     Subgroup B
Participants    3              2               5              5
Material        Experimental   Control clip 1  Experimental   Control clip 1
                clip 1 (mood-  (mood-          clip 1 (mood-  (mood-
                congruent)     incongruent)    congruent)     incongruent)

5.4.4 Experiment 2: quality assessment of the final AD product

Experiment 2 involved the participants in assessing the quality of the two clips providing a full AD experience. The purpose of this experiment was also two-fold. On the one hand, we wanted to examine whether the mood-congruent AD narration was thought to enhance the quality of the AD experience; and, on the other hand, we wanted
participants to be able to draw comparisons between the congruent and incongruent AD clips as to their quality. To ascertain this, we designed a semi-structured interview to be carried out after showing the clips to the study subjects. The quality assessment of the participants was elicited so that those who had been exposed to the congruent AD narration-only clip in Experiment 1 (subgroup A) were asked to assess the incongruent full AD clip in Experiment 2, and vice versa. Thus, the participants assessed the full AD material only once, but could still compare the describer’s voice in Experiment 1 with her voice in Experiment 2, as is shown in Table 5.2.
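The cross-over assignment just described can be restated programmatically; the sketch below merely encodes the design as reported in the chapter (the dictionary structure and labels are our own, not part of the study's materials):

```python
# Restating the cross-over design: each subgroup hears one voice condition
# in Experiment 1 and the opposite condition in Experiment 2.
DESIGN = {
    "A": {"experiment_1": "experimental clip 1 (mood-congruent)",
          "experiment_2": "control clip 2 (mood-incongruent)"},
    "B": {"experiment_1": "control clip 1 (mood-incongruent)",
          "experiment_2": "experimental clip 2 (mood-congruent)"},
}

# Participant counts per study and subgroup, as reported in the chapter.
COUNTS = {("Madrid", "A"): 3, ("Madrid", "B"): 2,
          ("Granada", "A"): 5, ("Granada", "B"): 5}


def total_participants():
    """Total sample size across both studies."""
    return sum(COUNTS.values())


def condition(subgroup, experiment):
    """Look up which clip a subgroup hears in a given experiment."""
    return DESIGN[subgroup][f"experiment_{experiment}"]
```

Counterbalancing in this way means every participant assesses the full AD material only once while still hearing both voice conditions across the two experiments.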

Table 5.2 Methodology, material and subjects in Experiment 2

Experiment 2:
                Study 1 ONCE Madrid            Study 2 ONCE Granada
                (5 participants)               (10 participants)
                Subgroup A     Subgroup B      Subgroup A     Subgroup B
Participants    3              2               5              5
Material        Control clip 2 Experimental    Control clip 2 Experimental
                (mood-         clip 2 (mood-   (mood-         clip 2 (mood-
                incongruent)   congruent)      incongruent)   congruent)

Preferences between the two clips were elicited through a ‘yes’ or ‘no’ answer, and participants were asked to justify the reasons for their preference through an open-ended question. They were also asked if they thought the AD voice had contributed to a better understanding of the character’s state of mind and intentions. Finally, their quality assessment ratings were elicited using a five-point scale (one being ‘null quality’ and five being ‘maximum quality’).
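Ratings elicited on five-point scales like these reduce to simple frequency counts and percentages over small subgroups. A minimal sketch, using an invented seven-person distribution rather than the study's raw data:

```python
# Tallying five-point Likert ratings into counts and percentages.
from collections import Counter


def tally(ratings, scale=range(1, 6)):
    """Frequency and percentage of each point on a five-point Likert scale."""
    counts = Counter(ratings)
    n = len(ratings)
    return {point: (counts[point], round(100 * counts[point] / n, 1))
            for point in scale}


# Hypothetical quality ratings from a seven-person subgroup
# (1 = 'null quality', 5 = 'maximum quality').
ratings = [5, 5, 5, 5, 5, 4, 3]
summary = tally(ratings)
```

With such small subgroups (seven and eight people), a single non-response shifts the percentages by over 12 points, which is worth bearing in mind when reading the results below.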

5.5 Results

5.5.1 Audio description expectations questionnaire

When asked to rank the features that most affected quality in an AD film, all the participants attributed the highest importance to the AD script, with their ratings falling within the highest marks (five and four). Eight participants preferred the AD script to be longer and more descriptive, whereas for seven of them the ideal AD script should be kept short and concise. The AD script was followed in importance
by ambient sounds and the audio describer's voice. The musical soundtrack and special sound effects were placed in third and fourth place respectively. When eliciting the participants' prior pleasant and irritating experiences in terms of AD feature films, results showed that 15 out of 15 subjects had thoroughly enjoyed each of the AD film experiences they had been exposed to and that none of them had had a negative or unpleasant experience in the past.

5.5.2 Experiment 1: assessment of the audio describer's vocal sonority and its corresponding emotional correlates

The results of the interviews conducted on the subgroup exposed to the mood-congruent AD narration (experimental clip 1) showed that the majority (five out of the eight participants) regarded the audio describer's voice quality as relaxed and frail, whereas two out of eight thought the sound of the voice was strong. Only one out of eight identified some tension in the vocal stimuli. The emotional correlates of these sonority cues were primarily related to sadness (five out of eight participants) and fear (four out of eight participants), and to a lesser extent to calmness (three out of eight). No participant attributed joy to the describer's vocal expression in the congruent pairing. When participants in this subgroup were asked what mental representations were conjured up as they listened to the narration-only AD clip, all participants (eight out of eight) concurred on a feeling of foreboding. When asked if they could imagine how the scene might end, all the participants came to the conclusion that something ominous was going to happen to the character, most probably death. As regards the subgroup exposed to the mood-incongruent AD narration (control clip 1), more than half of the sample (four out of seven subjects) perceived a fairly strong vocal sonority and the same number of participants attached a relaxed vocal sound quality to the describer's voice.
Two participants thought the vocal sound was weak and only one subject perceived it to be tense. The emotional correlates for the vocal stimuli in this subgroup were related to dominance (four out of seven participants), calmness (two out of seven), sadness (two out of seven) and, to a lesser extent, fear (one out of seven). Over half of the participants in this subgroup imagined the main character attacking somebody (four out of seven), two anticipated the main character’s suicide and one thought she would end up by crying to purge her despair. Table 5.3 contains detailed information concerning the results of Experiment 1.


Table 5.3 Experiment 1, results: vocal sonority ratings and their emotional correlates

Vocal sonority        Mood-congruent     Mood-incongruent
qualities             Subgroup A         Subgroup B
Relaxed               5/8 (62.5%)        4/7 (57.1%)
Potent                2/8 (25%)          4/7 (57.1%)
Frail                 5/8 (62.5%)        2/7 (28.6%)
Tense                 1/8 (12.5%)        1/7 (14.3%)

Emotional correlates  Mood-congruent     Mood-incongruent
of vocal stimuli      Subgroup A         Subgroup B
Sadness               5/8 (62.5%)        2/7 (28.6%)
Dominance             0/8 (0%)           4/7 (57.1%)
Calmness              3/8 (37.5%)        2/7 (28.6%)
Fear                  4/8 (50%)          1/7 (14.3%)
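Each percentage in Table 5.3 is simply the corresponding count divided by the subgroup size (eight in subgroup A, seven in subgroup B). As a transparency check, the figures can be regenerated from the raw counts:

```python
# Reproducing the percentage columns of Table 5.3 from the raw counts.
def pct(count, group_size):
    """Percentage of a subgroup, rounded to one decimal place."""
    return round(100 * count / group_size, 1)


# Raw counts from Table 5.3 as (subgroup A of 8, subgroup B of 7) pairs.
sonority = {"relaxed": (5, 4), "potent": (2, 4), "frail": (5, 2), "tense": (1, 1)}
emotions = {"sadness": (5, 2), "dominance": (0, 4), "calmness": (3, 2), "fear": (4, 1)}

table = {label: (pct(a, 8), pct(b, 7))
         for label, (a, b) in {**sonority, **emotions}.items()}
```

Running this reproduces the published figures (for example, 5/8 gives 62.5 per cent and 4/7 gives 57.1 per cent), confirming the arithmetic behind the table.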

5.5.3 Experiment 2: quality assessment of the final AD product

Experiment 2 assessed the quality of the full AD experience and the extent to which it contributed to a better understanding of the emotional atmosphere and the character's mental state and intentions. The findings revealed a marked trend in the subgroup exposed to the full mood-congruent AD experience (subgroup B). All the participants in this group preferred the mood-congruent clip they had been exposed to in Experiment 2. A large number (71.4 per cent) believed that the voice quality of the AD had enhanced their understanding of the scene's emotional landscape and only 28.6 per cent did not regard it as particularly helpful. The reasons for their preference for the mood-congruent clip varied, but most of them attributed this to a 'more emotionally loaded AD voice' (57.1 per cent) and to 'more detailed and better description in the AD script' (42.8 per cent). Two participants did not answer this question (28.6 per cent) and thus did not explain the reasons for their preference. The majority of subgroup B (five people, 71.4 per cent) rated the quality of the AD product as 'very good' (five points out of five on the Likert scale), one person (14.3 per cent) regarded it as 'good' (four points out of five), and another one (14.3 per cent) thought it was 'slightly good' (three points out of five). The results from the subgroup exposed to the full mood-incongruent AD experience (subgroup A) were less conclusive, as the response pattern displayed variations. Four participants (50 per cent) preferred the mood-incongruent clip, three people (37.5 per cent) preferred the mood-congruent clip, and one participant (12.5 per cent) did not answer this question. As regards the reasons for their preference, the majority referred to a more detailed and more informative AD script (six people, 75 per cent).
Two participants did not answer this question (25 per cent) and thus did not explain the reasons for their preference.

Table 5.4 Experiment 2, results: preferences and quality assessment of full AD clip

Mood-congruent voice and full AD (subgroup B):
  Difference with previous clip:      Yes: 7 (100%)   No: 0
  Preference:                         Previous clip: 0   Present clip: 7 (100%)
  Reasons behind preference:
    1. Audio description is better and voice quality relays more feelings
    2. AD is more detailed. Voice is touching: sadness of character
    3. AD is more detailed
    4. No response
    5. Audio describer seemed part of the sad experience
    6. Audio describer relays more emotion
    7. No response
  Positive role of AD voice quality:  Yes: 5 (71.4%)   No: 2 (28.6%)
  Quality assessment of AD product (1 = very bad, 5 = very good):
    5 participants: 5 out of 5; 1 participant: 4 out of 5; 1 participant: 3 out of 5

Mood-incongruent voice and full AD (subgroup A):
  Difference with previous clip:      Yes: 8 (100%)   No: 0
  Preference:                         Previous clip: 3 (37.5%)   Present clip: 4 (50%)
  Reasons behind preference:
    1. Description is more detailed
    2. No response
    3. Description is more informative
    4. No response
    5. AD is more detailed
    6. Description is more informative
    7. Description gives more information
    8. AD is more detailed
  Positive role of AD voice quality:  Yes: 6 (75%)   No: 2 (25%)
  Quality assessment of AD product (1 = very bad, 5 = very good):
    3 participants: 4 out of 5; 2 participants: 5 out of 5; 1 participant: 3 out of 5

Six participants (75 per cent) believed that the AD voice quality had enhanced their understanding of the scene's emotional landscape and only two people (25 per cent) did not regard it as particularly helpful. Quality assessment ratings were also less homogeneous and more scattered in this group: three participants (37.5 per cent) thought the AD was 'good' (four points out of five), two of them (25 per cent) regarded it as 'very good' (five points out of five), and only one (12.5 per cent) thought it was 'slightly good' (three points out of five). Two participants in this group (25 per cent) did not rate the quality of the second clip. Table 5.4 contains detailed information on the results of Experiment 2.

5.6 Discussion

Regardless of their previous degree of exposure to AD films, the participants seemed to find no fault with their past AD experiences. They claimed never to have experienced a bad or unpleasant AD. As to their expectations regarding the components of AD impinging on quality, the highest importance was attributed to the AD script. The AD narrator's voice and ambient sound were placed in second place, followed in third position by the musical soundtrack. Special effects came last. As to the findings of Experiment 1 concerning the emotional correlates of the AD voice, the results from subgroup A were very revealing. These participants, who had been exposed to experimental clip 1, showed a very homogeneous response pattern. The vocal correlates of relaxation and frailty were clearly identified by over half of the participants (five people, 62.5 per cent). In addition, 62.5 per cent of the sample in subgroup A identified sadness, and 50 per cent (four out of eight) identified fear in the vocal cues. The eight participants' mental images of the scene's denouement were also very similar: a sense of foreboding and the anticipation of the main character's death by suicide were shared by all the participants in this group. These results contrast sharply with the answers given by the participants in subgroup B, as they yielded a different, somewhat contradictory, response pattern. Four participants (57.1 per cent) in subgroup B attached the same ratings to two opposite vocal properties: a potent voice and a relaxed voice. It came as no surprise that these acoustic misperceptions would lead to some confusion. This is the case with the relaxed vocal quality, which four participants (57.1 per cent) correlated with dominance, an emotional state that a relaxed voice does not naturally evoke. The relaxed vocal quality was perceived by two participants (28.6 per cent) as a correlate of calmness and by one (14.3 per cent) as a correlate of fear. However, these
perceptions of emotion were at odds with the main character’s own emotional experience and so were participants’ mental representations and their anticipation of events. For four participants (57.1 per cent) from subgroup B, the main character was very likely to inflict some kind of violence on somebody. Only one participant (14.3 per cent) imagined Virginia to be crying and two (28.6 per cent) imagined her committing suicide. It could thus be said that the mood-incongruent clip resulted in confusion, whereas the mood-congruent clip contributed to a better understanding of the plot. As regards Experiment 2, all the participants from subgroup B who had previously listened to the mood-incongruent AD clip and had difficulties identifying emotions in the AD voice showed a preference for the full mood-congruent AD clip. In addition, when we analysed their answers, their preferences seemed to be rather more influenced by the enhanced emotional landscape relayed by the AD voice (four participants, 57.1 per cent) and less by a more detailed and more informative AD script (two participants, 28.6 per cent), which slightly contradicted their expectations as outlined in the initial questionnaire. Five participants (71.4 per cent) regarded the second clip as having played a major role in enhancing their understanding of the scene’s emotional landscape and of the main character’s future intentions. Quality ratings for experimental clip 2 were also very positive. In contrast, the participants of subgroup A, who had been exposed to the mood-incongruent clip in Experiment 2, showed a very heterogeneous answer pattern. Preferences were split between those who regarded the mood-incongruent clip as better (four participants, 50 per cent), and those who preferred the mood-congruent one (three participants, 37.5 per cent). Unfortunately, one participant (12.5 per cent) did not answer and this preference could have contributed to tipping the balance in favour of a particular direction. 
The preference for the mood-incongruent clip was largely attributed to ‘a more informative and more detailed AD script’, as stated by six respondents (75 per cent). This response could be explained by the fact that the second clip included additional information, that is, the main character’s off-screen narration. Unfortunately, two participants (25 per cent) did not provide a reason for their preference, and this precluded us from drawing a full picture of this subgroup’s preferences. A  contradiction was observed in subgroup A  when we compared the reasons behind participants’ preferences (a more detailed AD script) and their judgements concerning the role played by the AD voice. The high ratings attached to the latter were extremely compelling as, for six participants (75 per cent), the AD voice had been crucial in enhancing
their understanding of the scene’s emotional landscape and the character’s future intentions and only two participants (25 per cent) did not attach this role to the voice. Quality ratings for the mood-incongruent clip also showed some contradictions. Despite the fact that participants considered this clip to be better due to its more informative and detailed nature, assessment ratings were disparate and fell in the middle of the scale. It is noteworthy that two participants (25 per cent) did not answer this question or had difficulty rating the quality of the AD product.

5.7 Conclusions

The participants in this study showed great enthusiasm for the quality of the AD feature films they had been exposed to at the official ONCE headquarters. When asked about their positive and negative experiences of AD films, they found nothing to fault. In fact, they were very grateful that AD feature films were available in the first place, so their abstract quality expectations should be taken with a pinch of salt. It seems that the mere existence of the service provided to ONCE-affiliated members could have biased their expectations. It is also noteworthy that the majority of the participants were not frequent users of AD feature films, so their experience was limited. Despite these provisos, the results from this study support our hypothesis that quality expectations do not match quality assessment in situated contexts when users are exposed to genuine experimental material. With regard to their a priori judgements, all the participants attached the highest importance to the linguistic content of the AD script, whereas the quality of the audio describer's voice was only considered of marginal importance. However, when exposed to the full mood-congruent AD clip in Experiment 2, their perceptions of these two factors seemed to change. Accordingly, the majority of participants in subgroup B and 37.5 per cent of the participants in subgroup A regarded the mood-congruent clip as much better than the mood-incongruent one. This was mostly due to the AD voice having contributed to an enhancement of their understanding of the main character's emotional state and her future intentions. Quality assessment ratings for the mood-congruent AD voice clip were very high and were endorsed by most participants.
This confirms that the congruence of the audio describer's voice with the scene's visual stimuli, together with the quality of the voice, can play a much more important role than previously envisaged, both in the favourable assessment of AD quality in general and of feature film AD quality in particular. This finding replicates the outcomes of the quality assessment

Cross-fertilization between AD and Interpreting Studies 91

relating to simultaneous interpreting, where users' a priori expectations attached very little importance to interpreters' voice qualities, but the situation was reversed when users had listened to, and assessed, an interpreter's unpleasant voice in a genuine interpreting situation (Collados Aís 1998, 2002; Collados Aís et al. 2007; Iglesias Fernández 2007).

Among the participants who had listened to the full mood-congruent AD clip, the preference for this clip was clearly determined by the AD voice. A large number of participants exposed to the full mood-incongruent AD clip showed a preference for that clip, perhaps because they had been exposed to the full content of the AD clip, including the main character's off-screen narration of her feelings of despair. We are inclined to believe that their appreciation of the additional explicit semantic, verbal information contained in the full AD clip could have eclipsed the implicit, nonverbal, incongruent information relayed by the AD voice. We also base this assumption on the very high ratings attached by the majority of participants in this subgroup to the role of the AD voice in enhancing their understanding of the scene and the character's future intentions. Although quality ratings were lower for the mood-incongruent AD voice clip, with preferences being very close, users who were exposed to this clip seemed to be happy with its quality. We believe that the participants' limited previous exposure to AD feature films, and their gratitude that AD services are provided at all, could have biased their quality rating for the mood-incongruent AD voice clip. A very large number of participants in subgroup B (71.4 per cent) and subgroup A (75 per cent) concurred that the audio describer's vocal sonority had led them to conjure up emotional states that reinforced their understanding of the character's mood and had contributed to anticipating the plot.
These results are in line with findings in film studies, which have revealed the crucial role played by mood congruency between aural and visual stimuli in cognitive processes, as this pairing of perceptive cues seems to steer attention and reinforce comprehension. The crucial role of the nonverbal dimension in the assessment of the linguistic content of the interpretation, observed in quality assessment studies relating to interpreting, inspired us to replicate these experimental studies in AD reception research. As with the findings on quality assessment in simultaneous interpreting, in which prosodic features in the interpreters' delivery were thought to have a fundamental bearing on quality perception, the congruent audio description (i.e. the clips in which the audio describer's voice quality was relaxed, with slow vocal dynamics and low vocal intensity, congruent


Emilia Iglesias Fernández et al.

with the sadness of the scene) seems to have led to a more precise identification of affective states and the scene's outcome in Experiment 1, and resulted in higher quality assessments by participants in Experiment 2.

A cautious approach should be taken to the results, however, as the sample was small and the participants' previous exposure to AD films varied. Additionally, regardless of the frequency of their previous exposure to AD films, the participants were highly tolerant of AD in terms of their abstract preferences. Further reception and assessment studies involving a larger sample of participants sharing a more homogeneous pattern of exposure to AD feature films would contribute to a fuller understanding of the role of the audio describer's vocal qualities, their emotional correlates and the congruency in mood between visual and aural stimuli. In the meantime, this study tentatively shows that AD users favour mood-congruent vocal quality in AD. This is in sharp contrast to the traditional assumptions made in the literature on AD, in which the audio describer's voice is seen as serving the interests of the users only if it is kept neutral. The audio describer's affective voice can enhance and intensify a film's plot and the characters' intentions by being congruent with the emotional landscape. Voice in AD can serve as the communicating link between the screen and the audience, creating a single, cohesive experience and bearing out McLuhan's assumption (1964, quoted in Gottlieb 2005: 21) that 'the medium is also the message'.

Notes

1. Simultaneous interpreters' booths are very often placed at the very back of the venue, while users sit in front of the speaker, with their backs to the interpreters. This spatial disposition precludes users from having clear visual access to the source of the aural input in their language.
2. AMATRA, Accesibilidad a los medios de comunicación a través de la traducción, is a research project funded by the Andalusian Government (P07-SEJ-2660).
3. PRA2, Plataforma de Recursos Audiovisuales Accesibles, Investigación, Formación y Profesionalización, is a research project funded by the Spanish Ministry of Science and Innovation (FTI2010-1614).
4. We would like to express our gratitude to Dr Carolina Pérez Sanz, a phonetician working at the CSIC Laboratorio de Fonética in Madrid, for her disinterested participation in this study.

References

AENOR. 2005. Audiodescripción para personas con discapacidad visual. Requisitos para la audiodescripción y elaboración de audioguías. UNE 153020. Madrid: AENOR.

Benecke, Bernd. 2004. 'Audio-Description'. Meta 49(1): 78–80.
Benecke, Bernd. 2007. 'Audio description: phenomena of information sequencing'. In Heidrun Gerzymisch-Arbogast and Gerhard Budin (eds) MuTra 2007 – LSP Translation Scenarios: Conference Proceedings. Vienna: MuTra. www.euroconferences.info/proceedings/2007_Proceedings/2007_Benecke_Bernd.pdf.
Boersma, Paul and David Weenink. 2000. Praat, a System for Doing Phonetics by Computer. www.praat.org.
Bolivar, Valerie, Annabel Cohen and John Fentress. 1994. 'Semantic and formal congruency in music and motion pictures: effects on the interpretation of visual action'. Psychomusicology: Music, Mind and Brain 13(1–2): 28–59.
Bourne, Julian and Cristina Lachat. 2010. 'Impacto de la norma AENOR: valoración del usuario'. In Catalina Jiménez Hurtado, Claudia Seibel and Ana Rodríguez Domínguez (eds) Un corpus de cine: Fundamentos teóricos y aplicados de la audiodescripción (pp. 315–33). Granada: Tragacanto.
Braun, Sabine. 2007. 'Audio description from a discourse perspective: a socially relevant framework for research and training'. Linguistica Antverpiensia 6: 357–69.
Braun, Sabine. 2008. 'Audiodescription research: state of the art and beyond'. Languages and Translation: Papers from the Centre for Translation Studies. Guildford: University of Surrey. http://epubs.surrey.ac.uk/translation/13.
Bühler, Hildegund. 1986. 'Linguistic (semantic) and extra-linguistic (pragmatic) criteria for evaluation of conference interpretation and interpreters'. Multilingua 5(4): 231–35.
Chion, Michel. 1994. La audiovisión. Barcelona: Ediciones Paidós.
Collados Aís, Ángela. 1998. La evaluación de la calidad en interpretación simultánea. La importancia de la comunicación no verbal. Granada: Comares.
Collados Aís, Ángela. 2002. 'Quality assessment in simultaneous interpreting: the importance of nonverbal communication'.
In Franz Pöchhacker and Miriam Shlesinger (eds) The Interpreting Studies Reader (pp. 327–36). London: Routledge.
Collados Aís, Ángela, Esperanza Pradas Macías, Elisabeth Stévaux and Olalla García Becerra (eds). 2007. La evaluación de la calidad en interpretación simultánea: Parámetros de incidencia. Granada: Comares.
Culler, Jonathan. 1981. The Pursuit of Signs. London: Routledge.
Díaz Cintas, Jorge, Pilar Orero and Aline Remael (eds). 2007. Media for All. Subtitling for the Deaf, Audio Description and Sign Language. Amsterdam: Rodopi.
Fels, Deborah, John Patrick Udo, Jonas Diamond and Jeremy Diamond. 2006. 'Comparison of alternative narrative approaches to video description for animated comedy'. Journal of Visual Impairment & Blindness 100(5): 295–305.
Fryer, Louise. 2010. 'Audio description as audio drama: a practitioner's point of view'. Perspectives: Studies in Translatology 18(3): 205–13.
Gambier, Yves. 2006. 'Multimodality and audiovisual translation'. In Mary Carroll, Heidrun Gerzymisch-Arbogast and Sandra Nauert (eds) MuTra 2006 – Audiovisual Translation Scenarios: Conference Proceedings. Copenhagen: MuTra. http://euroconferences.info/proceedings/2006_Proceedings/2006_Gambier_Yves.pdf.
Gerzymisch-Arbogast, Heidrun. 2007. 'Workshop audio description'. Summer School Forlì: Screen Translation, 25 May 2007. www.translationconcepts.org/pdf/audiodescription_forli.pdf.


Gile, Daniel. 2003. 'Quality assessment in conference interpreting: methodological issues'. In Ángela Collados Aís, María Manuela Fernández Sánchez and Daniel Gile (eds) La evaluación de la calidad en interpretación: Investigación (pp. 109–23). Granada: Comares.
Gottlieb, Henrik. 2005. 'Multidimensional translation: semantics turned semiotics'. In Heidrun Gerzymisch-Arbogast and Sandra Nauert (eds) MuTra 2005 – Challenges of Multidimensional Translation: Conference Proceedings. Saarbrücken: MuTra. http://euroconferences.info/proceedings/2005_Proceedings/2005_Gottlieb_Henrik.pdf.
Hyks, Veronika. 2005. 'Audio description and translation. Two related but different skills'. Translating Today 4: 6–8.
Iglesias Fernández, Emilia. 2007. 'La incidencia del parámetro agradabilidad de la voz'. In Ángela Collados Aís, Esperanza Pradas Macías, Elisabeth Stévaux and Olalla García Becerra (eds) La evaluación de la calidad en interpretación simultánea: Parámetros de incidencia (pp. 37–51). Granada: Comares.
Iglesias Fernández, Emilia. 2010. 'La dimensión paralingüística de la audiodescripción: un acercamiento multidisciplinar'. In Catalina Jiménez Hurtado, Claudia Seibel and Ana Rodríguez Domínguez (eds) Un corpus de cine: Fundamentos teóricos y aplicados de la audiodescripción (pp. 205–22). Granada: Tragacanto.
Kurz, Ingrid. 1989. 'Conference interpreting: user expectations'. In Deanna Hammond (ed.) Coming of Age. Proceedings of the 30th Conference of the ATA (pp. 143–48). Medford, NJ: Learned Information.
Kurz, Ingrid and Franz Pöchhacker. 1995. 'Quality in TV interpreting'. Translatio – Nouvelles de la FIT / FIT Newsletter 14(3–4): 350–58.
Matamala, Anna. 2005. 'Live audio description in Catalonia'. Translating Today 4: 9–11.
Orero, Pilar. 2005. 'Audio description: professional recognition, practice and standards in Spain'. Translation Watch Quarterly 1: 7–18.
Packer, Jaclyn and Corinne Kirchner. 1997.
Who's Watching: A Profile of the Blind and Visually Impaired Audience for Television and Video. New York: American Foundation for the Blind.
Peli, Eli, Elisabeth M. Fine and Angela T. Labianca. 1996. 'Evaluating visual information provided by audio description'. Journal of Visual Impairment & Blindness 90(5): 378–85.
Petré, Leen. 2005. User Feedback on Audio Description and the Case for Increasing Audiodescription Targets. www.mib.org/uk/xpedio/groups/public/documents/publicwebsite/public_userfeedback.doc.
Pradas Macías, Esperanza M. 2003. Repercusión del intraparámetro pausas silenciosas en la fluidez: Influencia en las expectativas y en la evaluación de la calidad en interpretación simultánea. Unpublished PhD thesis. Granada: University of Granada.
Pradas Macías, Esperanza M. 2007. 'La incidencia del parámetro fluidez'. In Ángela Collados Aís, Esperanza Pradas Macías, Elisabeth Stévaux and Olalla García Becerra (eds) La evaluación de la calidad en interpretación simultánea: Parámetros de incidencia (pp. 53–70). Granada: Comares.
Romero-Fresco, Pablo and Louise Fryer. 2013. 'Could audio described films benefit from Audio Introductions? An audience response study'. Journal of Visual Impairment and Blindness 107(4): 287–5.

Russell, James. 1980. 'A circumplex model of affect'. Journal of Personality and Social Psychology 39: 1161–78.
Salway, Andrew and Alan Palmer. 2007. 'Describing actions and thoughts'. Paper presented at the Advanced Seminar: Audiodescription – Towards an Interdisciplinary Research Agenda. University of Surrey, 28–29 June.
Schaeffer, Pierre. 2004. 'Acousmatics'. In Christoph Cox and Daniel Warner (eds) Audio Culture. Readings in Modern Music (pp. 76–81). London: Continuum.
Scherer, Klaus R. 1979. 'Personality markers in speech'. In Klaus R. Scherer and Howard Giles (eds) Social Markers in Speech (pp. 147–209). Cambridge: Cambridge University Press.
Scherer, Klaus R. 2003. 'Vocal communication of emotion: a review of research paradigms'. Speech Communication 40: 227–56.
Schmeidler, Emilie and Corinne Kirchner. 2001. 'Adding audio description. Does it make a difference?' Journal of Visual Impairment & Blindness 95(4): 197–212.
Yeung, Jessica. 2007. 'Audio description in the Chinese world'. In Jorge Díaz Cintas, Pilar Orero and Aline Remael (eds) Media for All. Subtitling for the Deaf, Audio Description, and Sign Language (pp. 231–43). Amsterdam: Rodopi.

Part II

Targeting the Audience

6 Audio Describing for an Audience with Learning Disabilities in Brazil: A Pilot Study

Eliana P. C. Franco, Deise M. Medina Silveira and Bárbara C. dos Santos Carneiro

6.1 Introduction

For almost a decade now, audio description (AD) has been a key topic at many audiovisual translation (AVT) conferences and in publications focusing on the issue of accessibility. As a relatively new audiovisual mode of translation, AD has been the subject of heated debates on both the relevance of images and how these should be put into words, so that the target audience – visually impaired addressees – can make better sense of the original film soundtrack which, in the case of the blind, is the only source of information on the audiovisual product to which they have access. Regional and individual practices and research have resulted in a discussion concerning the adoption of norms nationally, norms that have ended up vying with each other.

At the 2011 Media for All conference, held in London, it became clear that some scholars had begun to realize the futility of the wish to nationalize or homogenize AD norms, believing that each audiovisual product should be unique, given that the audiences concerned are heterogeneous, both within and beyond national borders. The universality of norms is now out of the question, and concerns about the uniqueness of the audiovisual product have become more important than simply following instructions on how to audio describe. This is not to say that norms or guidelines are not useful or should not be discussed. They are necessary for those who train audio describers and for those who are in the process of being trained, since every trainee needs standards to which to work. Norms are a general starting point from which local practices can refine and modulate the audio described discourse according to the genre and narrative of the audiovisual product
in question, as well as fitting it to suit local expectations and preferences. These, in turn, can only be discovered on the basis of consistent reception research. As Díaz Cintas (2005: 5) puts it: 'it is necessary to undertake empirical studies that will provide us with a more complete and detailed idea of what the audience needs'. These studies will provide the strongest argument to validate any set of guidelines. An audio described script may appear to be good from the perspective of a non-visually impaired viewer, but it still has to meet the needs of an audience to which the audio describer is essentially an outsider.

This current trend of abandoning any attempt to standardize practices, whether at national or international level, together with this new perspective in AD research, which is rooted in narratology and cinematography, means that discussions on the topic and professional practice implicitly assume that the main (and only) target audience to benefit from AD is the partially sighted and the blind. Thus, apart from a very few articles on the usefulness of this AVT mode for audiences with learning disabilities, the observations made by scholars working in this field seem largely to have been ignored:

Audio description also benefits people with cognitive-perceptual issues. In addition, in some cases, non-visually impaired audiences can also enjoy AD in situations where no visual information is provided, namely, in audio guides, audio described films to be 'watched' while driving, audio books, etc. (Díaz Cintas 2007: 49, our translation)

AD could represent an additional acoustic source of information, which would help this audience to grasp the audiovisual product in a more straightforward and independent manner.
Because the so-called learning disabled audience has different needs and expectations from those of the blind and visually impaired, and because all that is offered in terms of AD nowadays is targeted towards the latter, this study represents a first (small) step to investigate the extent to which AD intended for the visually impaired audience meets the needs of an audience with learning disabilities. One of the motivations behind this study is the fact that, at least in Brazil, no secondary AD track targeting such a specific audience is offered in audiovisual products, although the UN Convention on the Rights of Persons with Disabilities (United Nations 2006) has established that all audiences (which includes all disabilities) have the right of equal access to communication and cultural products. This study was, in addition, mainly motivated by the unexpected audience that the AVT research group TRAMAD (www.tramad.com.br) encountered when audio describing a dance performance in the small
town of Santo Amaro da Purificação, in the northeast of Brazil. There was not a single visually impaired person in the audience, which consisted of a group of ten enthusiastic students from APAE, a Brazilian institution that has contributed to the education of people with learning disabilities for over 60 years. This definitely attracted the attention of our research group to this segment of the audience, which has been forgotten in studies on AD.

AD was finally made compulsory on Brazilian open television channels from 1 July 2011, after a struggle of 11 years since the introduction of the law of accessibility (Law 10098/2000) and six years after Decree 5.371/2005 determined the implementation of AD on Brazilian TV. However, this was done on a much smaller scale than originally envisaged, that is, only two hours per week instead of two hours per day. The official launch of AD in Brazil has contributed to the spread of specialized courses and to increased interest in AD from the dubbing industry, and has prompted discussions regarding AD norms among the ABNT (the Brazilian Association of Technical Norms), a group of practitioners, representatives for the blind, scholars from a number of disciplines and amateurs. The discussion, available online, makes no mention of a target audience other than the blind and visually impaired, although those with learning disabilities have been mentioned occasionally in petitions used to reinforce arguments in favour of the final implementation of AD in Brazil.

Taking all this into consideration, this study focuses on people with learning disabilities as a potential audience that would benefit from AD and attempts to answer the following two questions:

• Does AD intended for the blind and visually impaired also promote a better understanding of the audiovisual product in an audience with learning disabilities?
• If so, to what extent is AD intended for the blind and visually impaired suitable for an audience with learning disabilities?

6.2 Defining learning disabilities

According to the American Association on Intellectual and Developmental Disabilities (AAIDD) and the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), mental disability is defined as: the observable reduction of intellectual functioning to a level significantly lower than average, associated with limitations in at least two aspects of adaptive functioning, such as: communication and personal care,
domestic competence, social skills, use of community resources, autonomy, health and safety, education skills, entertainment and work (Fundação Oswaldo Cruz 2014, our translation). As stated by the psychiatrist Ballone (2004), the World Health Organization still defines intellectual functioning quantitatively according to the intelligence quotient (IQ) of the individual and classifies it officially, as shown in the ICD or International Statistical Classification of Diseases and Related Health Problems (www.who.int/classifications/icd), whose tenth revision was published in 2010. According to this classification, IQs range from deep impairment (below 20) to superficial impairment (IQ 52–67) and slow thinking (IQ above 76). Mental disorders, in turn, are classified from superficial (F70) to deep (F79), while Down’s syndrome is characterized by a type of chromosomal abnormality, ranging from Q90–Q99. Despite all these classifications, reflecting a medical diagnosis that has labelled people with learning disabilities for too long, the UN Convention on the Rights of Persons with Disabilities (2006) took a decisive step in proposing a new definition for the term ‘disability’ or ‘disabled person’. According to this convention (ibid.), ‘[p]eople with disabilities include those who have long-term physical, mental, intellectual or sensory impairments which in interaction with various barriers may hinder their full and effective participation in society on an equal basis with others’. As the commented Brazilian Portuguese version of the convention aptly remarks (Resende and Vital 2008), this new perspective acknowledges the fact that barriers are not imposed by the disability itself, but by environments that do not promote equality.
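The quantitative bands quoted above can be sketched, purely for illustration, as a small lookup function. The cut-offs and labels below are those reported in the text (after Ballone 2004), not an authoritative implementation of the ICD or DSM; note that the quoted bands leave gaps, which the sketch reports as unclassified.

```python
# Illustrative sketch of the IQ bands and ICD-10 code ranges as quoted
# in this chapter (after Ballone 2004). NOT an authoritative ICD/DSM
# implementation: the cut-offs are the ones given in the text.

def classify_iq(iq: int) -> str:
    """Map an IQ score to the impairment label quoted in the chapter."""
    if iq < 20:
        return "deep impairment"          # below 20
    if 52 <= iq <= 67:
        return "superficial impairment"   # IQ 52-67
    if iq > 76:
        return "slow thinking"            # above 76
    return "unclassified in the quoted bands"

# ICD-10 code ranges mentioned in the text.
ICD_CODES = {
    "mental disorders": ("F70", "F79"),          # superficial (F70) to deep (F79)
    "chromosomal abnormalities": ("Q90", "Q99"),  # includes Down's syndrome
}

print(classify_iq(15))   # deep impairment
print(classify_iq(60))   # superficial impairment
```

The gaps between the quoted bands (e.g. IQ 20–51 and 68–76) reflect how the chapter reports the classification, which is one reason the authors treat such medical labelling with caution.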

6.3 Context of the study

6.3.1 The institution

The institution chosen to be part of the study was the Association of Parents and Friends of Exceptional People (APAE, Associação de Pais e Amigos de Excepcionais, www.apaebrasil.org.com.br). The APAE is the longest-standing educational institution in Brazil for people with multiple disabilities, including learning disabilities. The first APAE was founded in Rio de Janeiro in December 1954 by Beatrice Bermin, a member of the diplomatic committee from the United States in Brazil and the mother of a child with Down's syndrome. Since then, many APAEs have opened, and today there are almost 2,000 local branches in Brazil.


The APAE chosen for this study is located in the small town of Santo Amaro da Purificação in the interior of Bahia state; it was created in 1999,1 long after many other branches of the APAE had been established. Santo Amaro da Purificação is a small town with scant resources, which explains the late arrival of such a specialized association. This APAE is a much needed institution for historical reasons: a large number of the town's inhabitants were born with physical and learning disabilities because an old, now defunct, factory had contaminated the soil with lead for years, affecting all the resources deriving from it that were consumed by the population.

6.3.2 The subjects

The first encounter with our potential subjects was after the audio description of a dance performance in December 2010, in the only theatre in Santo Amaro da Purificação, when ten APAE students came to congratulate the TRAMAD team on the novelty known as audio description, which had enhanced their appreciation of the dance performance. At the beginning of 2011, a second contact was made in order to explain the prospective study, to meet and screen subjects who might be willing to participate in the experiment and to start the process of collecting official parental and institutional permission to be sent to an ethical committee. Four students were chosen for this first study with the help of the team from the APAE and according to their view on the students’ profiles and their disability record, as defined by the above-mentioned ICD. The subjects included two teenagers (one male and one female, aged between 17 and 20) with Down’s syndrome and learning disabilities (F70 and Q90), and two male adults (aged between 30 and 35) with learning disabilities only (F70). During a second visit to the institution (our actual third contact with them), we spent more time with three of the subjects so that a stronger bond could be established. The process to be followed during the study was explained to each of the subjects separately so that they knew what to expect in the following visit when the study was scheduled to take place. The APAE team was in charge of explaining the process to the remaining student, the female teenager, whom we only met once before the experiment, after the abovementioned audio described dance performance. A conversation with the students’ psychoanalyst and speech therapist revealed that, although the subjects had similar records as regards their official disability classification, their background was quite different.
For example, one of them was brought up in a poor family and was extremely over-protected by his mother, who prevented him from taking part in the simplest day-to-day activities, whereas another student had been adopted by a lawyer who encouraged her child to take part in as many interesting activities as possible, including a drama course. That these different environments had had an impact on the subjects' personalities could be observed from the very first encounter. Despite their differences, they all loved to watch soap operas on TV (the famous Brazilian telenovelas) and hardly ever went to the movies, since there are no cinemas in Santo Amaro da Purificação and people have to travel to see a film on the big screen. The only exception was the subject who attended drama classes, who was often taken to the movies by his mother in Feira de Santana, the closest town.

6.3.3 The audio described film

In order to assess the effectiveness of AD on our group of four subjects with learning disabilities, a short 15-minute film produced in the state of Ceará in 2002 was chosen: Águas de Romanza (Waters of Romanza). Directed by Gláucia Soares and Patrícia Baía, two local filmmakers, this film had been audio described as part of a research project on media accessibility carried out by a research group from the State University of Ceará. The story of Águas de Romanza takes place in the infertile backlands of the northeast of Brazil, where a six-year-old girl called Romanza lives with her sick grandmother and dreams of seeing rain for the first time. The grandmother wants to make the girl's dream come true as soon as possible and asks for the help of her friend Persival, a travelling salesman, who seems to be their only hope. The choice of this film was based on the fact that some scenes required analytical thinking and close attention on the part of the audience.
In some scenes, the grandmother sees spirits or ghosts that are portrayed in the same way as the live characters in the film. This is the case with her husband, Antonio, and her daughter and Romanza’s mother, both dead. It is only in one scene that the latter appears to Romanza as a shadowy figure, indicating that she is, in fact, different from the other characters. In this scene, Romanza is sleeping in a hammock when she suddenly opens her eyes and sees images projected on the wall, suggesting that she is only dreaming, and not really awake. The last and most important scene in the film shows the grandmother, Persival and Romanza arriving at a sunny place; soon after, the girl is told by the grandmother to run after the rain. In the following shot,
Romanza is shown close-up, dancing in the middle of green plants under water coming down from above. When the camera opens up the frame from above, the audience finds out that the rain is in fact water coming from irrigation pipes. As it was originally intended for the blind and visually impaired, the AD accompanying these scenes does not really explain that the grandmother sees spirits, that the images on the wall are merely Romanza’s dreams, that she is still asleep although her eyes are open, or that the falling water is not real rain. Thus, our experiment aimed to find out how our subject group would cope with the meanings constructed through images that had not been made explicit in the AD. Would the AD, intended for the blind, be of some use to them? The names of the characters were also considered interesting, since these were either a little unusual or confusing, particularly as they were sometimes mentioned in the film when other characters appeared on screen. The experiment was, therefore, also intended to determine whether the repetition of these names in the AD helped the audience to memorize them.

6.4 Methodology

Due to the lack of relevant reception studies on the topic and the lack of literature on which to base the study, the methodology followed was based on previous reception research with blind and visually impaired audiences. Each subject went through the same steps individually, in a room prepared just for the study. Every single step was filmed with the permission of the participant, his/her parent, and the institutional representative. The methodology used in the study can be summarized as follows:

1. The subjects watched the film Águas de Romanza without the AD.
2. The subjects were invited to talk freely about the film before answering any questions. This gave us a good idea of their understanding of the story.
3. The subjects were asked questions to determine content understanding. Originally, 21 questions were prepared, but these were amended throughout the experiment to accord with what had been mentioned by each subject during the previous phase. It is worth highlighting that, for some of the questions, particularly those related to the areas of difficulty mentioned above, the scene was shown a second time so that participants could be sure of the answers given. For example, in the rain scene, three out of four did not understand that the rain was not real despite the image of the irrigation tubes. The scene was shown again and the subjects were asked: 'Is this real rain?'
4. The participants watched the film Águas de Romanza with AD.
5. The participants' reactions were checked during the film projection. These were very significant in terms of the impact of the AD.
6. The subjects were invited to talk freely about the film with AD before answering any questions. As a result, it was possible to observe how much had been added to the original understanding of the film.
7. The subjects were asked the same questions as in point 3 above to determine content understanding. This was also intended to check any improvement in the understanding of the film.
8. Participants were asked specific questions about AD, such as: Do you think audio description helped you to understand more things about the film? What did you understand with the audio description that you did not understand without it? Did you like the voice describing the images? Did you like the way she spoke? Would you like to watch more films with audio description?

6.5 Preliminary observations

Even without explicitating the meaning implied in the visual sequences of the short film Águas de Romanza, it can be said that its AD, intended for a blind and visually impaired audience, did help the subjects with learning disabilities to understand the film better, especially some of the scenes discussed above, including the key one. The three subjects who had not been able to perceive that the rain was not real in the non-audio described version did ascertain that the rain actually came from irrigation tubes when watching the AD version, proving that the AD somehow directed their attention to the tubes. In the AD version, all the participants realized that some of the people in the film were not real but spirits, for example in the scene where Romanza is in the hammock and her mother appears as a shadowy image, audio described in Brazilian Portuguese as 'a transparent woman'.

Audio Describing for an Audience with Learning Disabilities

In some cases, however, the subjects were not able to grasp the implicit meaning of a scene from the AD alone, without further interpretation. A good example was when Romanza is in the hammock and opens her eyes suddenly while images are being projected on the wall. Hearing in the AD that the girl opened her eyes was, for all the subjects, a clear indication that she was awake. Aside from the implicit meanings, the AD also seemed to provide little help to the audience as regards the association of characters with scenes in the film. For instance, although the spirits seen by the grandmother appear in many scenes throughout the film (in the grandmother's house, along the road and on top of the cart where she, Romanza and Persival are travelling), just one of the subjects was able to make the necessary connection between these scenes, perceiving that they were actually the same people/spirits accompanying the grandmother all the time.

In addition, although the AD seemed to guide the focus of attention to certain visual aspects, like the irrigation tubes, it did not prove to be acoustically efficient. That is to say, the repetition of the characters' names throughout the AD script did not necessarily help the subjects to remember them. As an example, none of the subjects was able to remember the protagonist's name, Romanza, although it was mentioned many times in both the film and the AD. When asked what the name of the girl was, two of them said 'Larissa', a name that appears in the credits at the end of the film and refers to the person to whom the film is dedicated. Would this point to the fact that the audience with learning disabilities is more visually than auditorily inclined, or that auditory information is impaired by their lack of concentration when there are questions to be answered?

A rather important observation concerns the subjects' reactions while watching the film with AD. Whereas the facial expressions of the two subjects with Down's syndrome remained mostly the same when both versions were watched, the two male subjects without Down's syndrome frequently reacted physically, and sometimes also verbally, to the presence of the AD. It could be argued that these reactions had two different functions: on the one hand, they worked to corroborate the answers previously given to the content-related questions about the non-audio described version.
The participants appeared to be happy when they got the answers right, with one of the subjects even remarking: 'You see? I was right'. On the other hand, their reactions also revealed that the subjects were comparing the AD with the images shown on screen, that is, they were checking whether the AD was accurate and really described the images they were seeing. One of the subjects made noises of approval and corroboration throughout the whole film whenever acoustic verbal signs coincided with visual nonverbal signs. A question that could be asked in later studies concerned with the elaboration of AD scripts for audiences with learning disabilities is how such an audience reacts to the frequent, and often necessary, anticipation and delay in an AD script, and whether these are welcomed or rejected.


As regards questions relating to the voice and rhythm of the AD, as well as about the participants’ willingness to watch more films with audio description, all the answers showed a positive response on the part of the participants in the study.

6.6 Concluding remarks

Based on the observations made during the study described above, it is possible to draw one major conclusion: despite being of great help, an AD originally intended for the visually impaired would not be enough to enable an audience with learning disabilities to derive a full understanding of a film's plot and narrative. Interpretation of the implicit meanings encoded by images, as well as of the connections among them, would be needed in order to make the film as accessible as possible to this very specific audience.

Another minor but still important observation, meriting further development and testing as part of a larger-scale study, is that, according to official classifications, the definitions of degrees of impairment are often too general and do not constitute an adequate representation of the differences between subjects as regards their educational and social background or, indeed, the environment in which they are brought up. These could prove more important in the analysis of their performance than a mere medical diagnosis. Thus, pre-defined types of disability, such as those based on official classifications, might generate assumptions that do not prove true in practice. For example, in our experiment, the only participant to perceive that the rain was not real in the non-audio described version had Down's syndrome, whereas the participant considered to be the cleverest of the group, who did not have Down's syndrome, could not tell that the rain was not real, even after watching the scene for a second time.

Filming subjects is a primary condition for observing the impact of AD through bodily reactions. Just after the completion of the study, we realized that we had not recorded the specific scenes of the film to which the subjects reacted. With a camera directed only at the faces of the subjects, there was no record of the scenes to which they reacted, although film timing provided some clues.
Ideally, one camera should be facing the subject while another should be positioned behind him or her, focusing on the screen where the film is being projected.

The study also revealed that previous meetings between researchers and subjects are of the utmost importance in order to create a bond of trust and to make subjects feel at ease during the study. It became clear that having only one encounter with the female subject, at the institution on the day the study was conducted, affected results negatively, either because she did not concentrate enough or because she was not really interested in the film or the study. To conclude, this short initial study on AD for people with learning disabilities has provided suggestions so that further research can be conducted in this field in an appropriate manner.

Notes

1. The authors are deeply grateful to the team of APAE Santo Amaro (apaesantoamaro.blogspot.com.br), especially to Ms Alessandra Gomes Reis e Silva do Carmo, head of APAE Santo Amaro, Mr Divino Marcos França, psychoanalyst and coordinator, and Ms Analu Valladares Vasconcelos, speech therapist, for welcoming us into the institution and providing invaluable help to make this study possible.


7 Analysing Redubs: Motives, Agents and Audience Response

Serenella Zanotti

7.1 The retranslation of audiovisual products

The retranslation of audiovisual products (movies, animated feature films, TV series) has become common practice nowadays, but remains a largely under-researched phenomenon. As well as constituting fascinating subjects for study in their own right, film retranslations offer a unique opportunity to investigate the evolution of translational norms and practices in the field of audiovisual translation. According to Nornes (2007: 16), changes in viewers' habits and cultural expectations require new forms of translation. The retranslations of audiovisual texts are indeed evidence of new forms of audiovisual translation (AVT).

When applied to audiovisual texts, the term retranslation denotes a second or subsequent translation of the same source text in the same target language (Chaume 2007: 50). Retranslation may occur when the translation modality changes, that is, when an audiovisual text is retranslated using a different modality from the one chosen initially (dubbing, subtitling and voiceover, for example); when the modality is the same as in the first translation, retranslation takes the form of redubbing, resubtitling, etc. The existence of multiple subtitled versions of the same film is largely acknowledged; for instance, subtitles produced for film festival projections are seldom used for cinema distribution, and resubtitling may also be required by different TV channels or for release on DVD. The circulation of different dubbed versions of the same feature film or TV series within the same country is also widely reported and is starting to attract the attention of scholars (Chaume 2007; Khris 2006; Maraschio 1982; Valoroso 2006; Votisky 2007; Wehn 1998). In fact, redubbing has become a much debated issue among dubbing professionals, who lament the fact that poor-quality redubs


are invading the market, thus leading to a lowering of current dubbing standards. According to Paolinelli (2004: 177–8):

There are some players who, for the most disparate reasons, want to pay peanuts for dubbing jobs, so allowing unscrupulous nonprofessionals to undercut the market. Some of them, with complete impunity, are 're-dubbing' the great films of the past, works that made the history of the cinema (and of dubbing), working on the lowest levels.

It will be argued, however, that redubs should not automatically be associated with low quality. Redubbing, a widespread, complex and much debated phenomenon, constitutes the focus of this chapter. The term is used here to refer to the existence of a second, or subsequent, dubbed version of the same audiovisual text in the same target language. The interest in redubbing resides, among other things, in the challenge it poses to the so-called 'retranslation hypothesis', which maintains that: (a) translated texts age and need updating; (b) retranslations occur more frequently with canonical texts; (c) retranslation is a process of improvement (Berman 1990; Gambier 1994). According to this hypothesis, subsequent translations are more source-oriented, and thus closer to the original, than first translations, which are more target-oriented and inherently assimilative. Moreover, retranslation is seen as a process intended to achieve greater accuracy and lead to an improvement on previous versions. It should be pointed out, however, that research on retranslation has traditionally focused on literary material and has, therefore, omitted the parameters and factors affecting the translation of audiovisual texts.

Redubs seem to challenge the retranslation hypothesis on a number of points. First of all, there is a considerable difference between audiovisual and literary retranslation in terms of perception.
While retranslation in the field of literature is 'usually regarded as a positive phenomenon, leading to diversity and a broadening of the available interpretations of the source text' (Gürçağlar 2009: 233), retranslation in the audiovisual field tends either to be neglected or is negatively received. More precisely, while resubtitling is seen as inevitable and is hardly ever noticed or remarked upon, redubbing almost inevitably attracts attention and is often subjected to negative judgment. Furthermore, one should not forget that profit is a decisive factor when it comes to films and, therefore, given the cost and laboriousness of the dubbing process, it is clear that the decision to redub must be grounded in the assumption that the endeavour is worth the effort.

Can textual ageing and the need for updating thus be considered as decisive factors justifying redubbing? Is it true that only the classics get redubbed? Do redubbed films truly mark a return to the source text? And do they represent an improvement on the previous versions? The aim of this chapter is to find an answer to these questions by examining redubbing in its various forms. The reasons advanced as a justification of redubs will be discussed and some core issues addressed, such as the impact of censorship and local regulations, the impact of commercial factors and viewers' perceptions concerning the decisions made by distributors.

7.2 Reasons for redubbing according to existing literature

There seems to be a general consensus concerning the reasons for redubbing in the case of films. Redubbing, which is associated particularly with great movie classics (Chiaro 2008: 247), is ascribed to a number of factors including the linguistic ageing of the old version, a damaged soundtrack and lack of awareness regarding the existence of a previous dubbed version (Chaume 2007: 56–61). Technical and commercial factors can also constitute fairly strong reasons for redubbing. Regarding the former, with Dolby Digital 5.1 as an essential ingredient in high-quality DVDs, old mono- and stereophonic soundtracks are often turned into a multichannel format and are gradually disappearing. Technical reasons are strictly connected to marketability and, since films are first and foremost marketable products, one of the driving forces behind redubbing seems to be commercial success.

Another important factor is cost-effectiveness. As is well known, one of the problems with audiovisual material is copyright. As Chaume (2007: 60) points out, purchasing an existing dubbing may be very complicated or expensive, so that it is often easier, and even cheaper, to commission a new one. Finally, there are purely commercial reasons to explain the number of film re-edits that are invading the market. Re-editions providing additional footage and higher audio quality encourage consumers to buy DVDs and Blu-ray Discs (BDs). This often leads to a redub because a newly dubbed soundtrack is needed to accompany the additional scenes.

It can be inferred from the above that the textual profiles of redubbed films are more likely to correspond to financial interests than to cultural and aesthetic criteria. Cost cutting is paramount as far as the home video industry is concerned and redubbing may contribute to reducing publishing costs as well as to meeting commercial needs. Redubbing is most commonly associated in existing literature with DVDs and there is no doubt that the expansion of the DVD market has brought about a new wave of redubs. BD technology is now opening up more space for multiple film editions, with new material waiting to be translated, dubbed or redubbed. It must be pointed out, however, that the practice of redubbing is not specific to DVDs or BDs. In fact, it was fairly common in the past. For instance, prior to the expansion of the home video market, most redubs were commissioned in Italy by the national television network (RAI), apparently because the soundtrack of the original dubbed version was missing or damaged (Comuzio 2000: 100–1). In the past, films were also redubbed before a new cinema release, or in order to replace the soundtrack of multiple language version films, that is, the simultaneous remaking of the same film in different language versions.

7.3 The corpus

According to the website www.ciakhollywood.com/antiridoppiaggio, there are over 350 redubbed feature films on the Italian home video market. The vast majority are old classic films (about 300), but the list includes some 50 recent films that have also been redubbed. Based on reports by film enthusiasts, these figures are far from exhaustive and, of course, need reviewing, but they nevertheless provide a sense of the extent of the phenomenon. It is true that the lack of reliable and thorough information limits the possibility of drawing up an accurate overview of a phenomenon that is still to be properly mapped out. This chapter presents the results of an analysis carried out on a corpus of 18 feature films from the 1930s to the 1980s, as is shown in Table 7.1. The selected films cover different eras and film genres, as well as a variety of production companies, distributors and dubbing companies.

Table 7.1 Films included in the audiovisual corpus1

Original film | First dubbed version | Redubbed version(s)
The Adventures of Robin Hood (Michael Curtiz 1938) | C.D.C. | DVD edition (2003)
Amadeus (Miloš Forman 1984) | S.A.S. (1984), 157'. Dubbing director: Fede Arnaud | Director's cut (178'), Angriservices Edizioni (2002). Dubbing director: Filippo Ottoni
Apocalypse Now (Francis Ford Coppola 1979) | C.D. (1979). Dubbing director: Renato Izzo | Director's cut (Apocalypse Now Redux), PUMAISdue (2001). Dubbing director: Fiamma Izzo
E.T. the Extraterrestrial (Steven Spielberg 1982) | S.A.S. (1982). Dubbing director: Fede Arnaud | Home-video edition, PUMAISdue (2003). Dubbing director: Fiamma Izzo
For Whom the Bell Tolls (Sam Wood 1943) | C.D.C. (1948) | 2nd edition: C.D. (1978). Dubbing director: Riccardo Cucciolla; 3rd edition: CVD (2003). Dubbing director: Oreste Rizzini
The Godfather (Francis Ford Coppola 1972) | C.D. (1972). Dubbing director: Ettore Giannini | DVD edition, Dubbing Brothers Int. Italia (2007). Dubbing director: Rodolfo Bianchi
Gone with the Wind (Victor Fleming 1939) | C.D.C. (1950). Dubbing director: Franco Schirato | CVD (1977). Dubbing director: Mario Maldesi
Grease (Randal Kleiser 1978) | DEFIS (1978) | DVD edition, C.D. (2002). Dubbing director: Maria Pia di Meo
Jaws (Steven Spielberg 1975) | C.D. (1975). Dubbing director: Renato Izzo | DVD edition, Dubbing Brothers Int. Italia (2004). Dubbing director: Teo Bellia
Jezebel (William Wyler 1938) | C.D.C. | Edition for RAI (1970s)
Lady and the Tramp (Clyde Geronimi, Wilfred Jackson and Hamilton Luske 1955) | Fono Roma/C.D.C. (1955). Dubbing director: Giulio Panicali | Royfilm/Angriservices Edizioni (1997). Dubbing director: Francesco Vairano
The Meaning of Life (Terry Jones 1983) | C.D. (1983). Dubbing director: Renato Izzo | SEFIT-CDC (2004). Dubbing director: Angelo Nicotra
The Mummy (Karl Freund 1932) | C.D.C. | DVD edition (2002)
Once Upon a Time in America (Sergio Leone 1984) | C.D.C. (1984). Dubbing director: Riccardo Cucciolla | DVD edition, Dubbing Brothers Int. Italia (2003). Dubbing director: Fabrizia Castagnoli
Saturday Night Fever (John Badham 1977) | (1977) | DVD edition (2002)
Superman (Richard Donner 1978) | C.D. (1978). Dubbing director: Renato Izzo | DVD edition (2003). Dubbing director: Ludovica Modugno
Touch of Evil (Orson Welles 1958) | TITANUS/C.D.C. (1958) | Director's cut, CVD (2003). Dubbing director: Solvejg D'Assunta
Wuthering Heights (William Wyler 1939) | C.D.C. (1940) | Edition for RAI (1970s)

An analysis of first and subsequent dubbings of the films in this corpus has been carried out in order to uncover the reasons behind the redubs, to provide insights into the process, and to analyse its effects: the relationship between first and subsequent dubbings, the type of changes made and, finally, the translational norms at work.

7.4 Towards a typology of redubs

The analysis of the film corpus has shown that redubs can be grouped into three broad categories according to the type of changes implemented:

1. revoicing, when the redubbing is little more than a restaging of a previous or original dubbing script;
2. revision, when the original dubbing script is revised at different levels;
3. retranslation, when the source language text is retranslated and a new dubbing script is used for recording.

7.4.1 Redubbing as revoicing

In this case, a new soundtrack relying on the previously translated script is produced. New dubbing actors and a new dubbing director are normally selected, even though it is also possible to make use of the same voices in the new recording. There is very little variation on a textual level, although occasional changes may be made at the moment of recording owing to improvisations made by the dubbing actors. The alterations mainly concern non-textual features such as acting style, interpretation and voice quality, which play a crucial role in the transfer of meaning.

A case in point is the DVD version of Once Upon a Time in America (Sergio Leone 1984). This is a particularly interesting case, as the first Italian edition had the seal of authorial approval in that it was supervised by the director Sergio Leone himself, who personally chose the actors used in the dubbing. A new 5.1 Dolby Digital soundtrack was recorded in 2003 with new dubbing actors as a replacement for the old monophonic format. As shown in Example 1, minimal changes can be detected in the redubbed dialogues. However, these changes do not indicate that a genuine process of revision was undertaken. Nonetheless, it is worth noting that the new voices and sound quality resulted in negative reactions from viewers.2

Example 1

Source Text (ST): Fat Moe: Yeah, the synagogue sent these out if you wanted to relocate your loved ones.
First Dubbed Version (FDV): Fat Moe: Ma sì. È l'invito che ha spedito la sinagoga per spostare le salme. [Fat Moe: Oh, yeah. It's the request sent out by the synagogue about moving the remains (of their loved ones).]
Redubbed Version (RV): Fat Moe: Ma sì. È l'invito che ha fatto la sinagoga per spostare le salme. [Fat Moe: Oh, yeah. It's the request made by the synagogue about moving the remains (of their loved ones).]

7.4.2 Redubbing as revision

Revision ‘involves making changes to an existing TT [target text] whilst retaining the major part, including the overall structure and tone of the former version’ (Vanderschelden 2000: 1). Redubbing as revision encompasses a variety of activities, ranging from correcting inaccuracies or mistranslations and introducing minimal stylistic changes, to extensive rewriting. Intervention is therefore present in varying degrees. In the redubbed version of The Godfather changes were kept to a minimum: apart from occasional lexical changes (e.g. undertaker > beccamorto > becchino), what makes the redubbed version different is the stronger dialectal characterization of the Italo-American characters at the phonetic and prosodic level.


More extensive revision is detectable in the case of Jaws (Spielberg 1975), also an example of redubbing as revision, as is illustrated by the following example:

Example 2

ST: Mayor Vaughn: You yell shark, we've got a panic on our hands on the Fourth of July.
FDV: Mayor Vaughn: Ma se uno grida: 'Squalo!', qui ci ritroviamo il finimondo all'apertura della stagione. [Mayor Vaughn: But if someone yells 'shark!' there will be bedlam at the start of the season.]
RV: Mayor Vaughn: Ma se uno grida: 'Squalo!', significa il finimondo per il quattro luglio. [Mayor Vaughn: But if someone yells 'shark!', there will be bedlam on the Fourth of July.]

Concerns as to the accuracy of the first rendering must have motivated the revision. In the redubbed version, the extra-linguistic culture-specific reference 'the Fourth of July', which had been replaced by the more general 'at the start of the season' in the first dubbing, is brought back into the translated dialogue, perhaps based on the assumption that the Italian audience was familiar enough with American traditions.

Revision can be a cost-effective strategy for film distributors, as the recycling of previously translated scripts contributes to reducing production time and costs. Revising is preferable to translating anew, even when it entails extensive rewriting, as it is both cheaper and faster than commissioning a new translation. When it comes to big-budget productions, the original dubbing script is usually thoroughly revisited and carefully selected changes allowing for greater accuracy and acceptability are made. This was the case with the blockbuster film Superman (Richard Donner 1978). Different translational norms seem to be in operation in the two dubbings: a radically source-oriented strategy is adopted in the 1978 version, often providing a word-for-word translation of the original lines of dialogue, while the redub seems to be primarily concerned with clarity and effectiveness, thus opting for target-oriented solutions, as shown in Example 3. Lois's attempt to persuade her editor at the Daily Planet to publish her piece, on the grounds that 'it's got everything: sex, violence, the ethnic angle', is frustrated by the man's humorous retort, which is translated literally in the old dubbed version so that the intended meaning remains obscure. However, the humorous effect of the original is recreated in the new dubbing through a non-literal translation, as is shown below:

Example 3

ST: Lois: It's got everything: sex, violence, the ethnic angle. Perry: So does a lady wrestler with a foreign accent.
FDV: Lois: C'è proprio tutto. C'è sesso, c'è violenza, c'è il lato razziale. Perry: Sì, quanto una lottatrice con l'accento straniero. [Lois: It's got everything: sex, violence, the ethnic angle. Perry: Yes, like a lady wrestler with a foreign accent.]
RV: Lois: C'è proprio tutto. C'è sesso, c'è violenza, c'è il problema razziale. Perry: Sì, come in un bidone aspiratutto. [Lois: It's got everything: sex, violence, the ethnic issue. Perry: Yes, like in a hoover.]

A similar case is found in Example 4, which goes to show that the strong orientation towards the source text in the first dubbed version leads to foreignizing translation solutions such as impervio al dolore, a literal rendition of the English 'impervious to pain', which was replaced by a more natural-sounding equivalent in the second dubbed version:

Example 4

ST: Lois: Is it true [...] that you're totally impervious to pain?
FDV: Lois: È vero che [...] che è del tutto impervio al dolore? [Is it true that you're totally impervious to pain?]
RV: Lois: È vero che [...] è del tutto insensibile al dolore? [Is it true that you're totally insensitive to pain?]

A more complex revision process was carried out on E.T. (Steven Spielberg 1982). The 20th anniversary edition that came out in 2002 offered an extended version of the film with altered special effects, computer-generated imagery, digitally manipulated shots and newly added scenes. A few changes were made to the original dialogues by the director, Steven Spielberg; for example, the word 'terrorist' was changed to 'hippie'. When the film was redubbed for the Italian audience, the revisions made to the old script were both meticulous and numerous. Quite clearly, the translators' main concern was to produce a more accurate source-oriented translation, often involving changes to the creative solutions of the previous dubbing. This is evident in examples 5 and 6, where the revisions clearly mark a return to the source text:

Example 5

ST: Elliott: Swear as my only brother on our lives.
FDV: Elliott: Giura che ti caschi un occhio nel buco del lavandino. [Elliott: Swear should your eye drop into the sink.]
RV: Elliott: Giura come mio unico fratello sulla nostra vita. [Elliott: Swear as my only brother on our lives.]

Example 6

ST: Michael: Maybe he's like a worker bee who only knows how to push buttons or something.
FDV: Michael: Forse è solo un robot che sa solo spingere bottoni, o altro. [Michael: Maybe he's just a robot who only knows how to push buttons or something.]
RV: Michael: Forse è una specie di ape operaia che sa solo spingere bottoni. [Michael: Maybe he's a sort of worker bee who only knows how to push buttons.]

The same approach was also adopted to deal with the memorable line 'E.T. phone home':

Example 7

ST: E.T.: E... T... home... phone.
FDV: E.T.: E.T. telefono casa. [E.T.: E.T. phone home.]
RV: E.T.: E.T. casa telefono. [E.T.: E.T. home phone.]

The rendering of this line has been the subject of innumerable comments made by E.T. fans on the Internet. The reactions were generally negative, probably due to the popularity of the quote, which had attained almost legendary status among Italian speakers. As with family films in general, strong language represents an important area of intervention.


In the redubbed version, the expletives occasionally surfacing in Elliott's brother's speech are toned down:

Example 8

ST: Michael: What's all this shit?
FDV: Michael: Che cos'è questa stronzata. [Michael: What's this shit?]
RV: Michael: Che cos'è tutta questa roba. [Michael: What's all this stuff?]

Some key lexical choices were subject to revision owing to a different interpretation imposed on the film in the new dubbing. In the original dialogue E.T. is referred to as 'the creature' by the scientists who provide medical assistance. The first dubbing adopted the word essere [being], suggestive of an unemotional attitude, whereas the new version opted for the word creatura [creature], which instead carries a positive emotional connotation:

Example 9

ST: Doctor: Boy and creature are separating.
FDV: Doctor: Il bambino e l'essere si separano. [Doctor: The boy and the being are separating.]
RV: Doctor: Il bambino e la creatura si stanno separando. [Doctor: The boy and the creature are separating.]

Concern with accuracy and linguistic naturalness was probably behind the revisions made to Elliott's words in the resuscitation scene, where ti amo [I love you / I am fond of you] was replaced by a less loaded lexical variant (ti voglio bene [I love you / I care for you]), which was evidently considered more appropriate:

Example 10

ST: Elliott: I'll believe in you all my life. Every day. E.T. ... I love you.
FDV: Elliott: Io penserò a te per tutta la vita, ogni giorno. E.T., io ti amo. [Elliott: I'll think of you all my life, every day. E.T. I love you.]
RV: Elliott: Crederò in te per tutta la mia vita, ogni giorno. E.T., io ti voglio bene. [Elliott: I'll believe in you all my life, every day. E.T. I care for you.]


As is shown above, in the case of blockbuster films, the process of revision aims to comply with target culture norms. The toning down of potentially disturbing elements found in redubbed family films such as E.T. should thus be ascribed to the impact of current norms in the target culture, which are naturally subject to change. However, as Vanderschelden (2000: 3) points out, '[w]hether or not [revision] really improves quality is debatable, as the procedure can be perceived as a minimal cost revamping exercise, in the same way as it can be considered an updating of the TT in order to improve its quality'.

7.4.3 Redubbing as retranslation

Revising an old dubbing script is of course cheaper than retranslating, but sometimes retranslation becomes necessary, as with film re-edits. For Touch of Evil (Orson Welles 1958), for instance, a new dubbing based on retranslated dialogues was carried out in 2003 for the DVD release of the director's cut edition. A new translation and adaptation of a film is costly, however, so when new footage is added, cheaper solutions may be chosen, such as dubbing only the new sequences, as in Alien (Ridley Scott 1979), or subtitling them, as in Citizen Kane (Orson Welles 1941).

It is nevertheless true that there are diverse reasons behind retranslation. It may be chosen not only to fill the gaps when new footage is added, but also to update dialogues, to offer a fresh look at a popular film or simply to present the product as though it were new. A case in point is Gone with the Wind (Victor Fleming 1939), which was first dubbed in 1950 and redubbed in 1977 when the restored version was ready for cinema release. The redubbed dialogues feature greater linguistic realism and a source-oriented approach to translation indicative of changed translational norms in Italian dubbing, as illustrated by examples 11 and 12:

Example 11
ST: Ashley: You always had mine [heart]. You cut your teeth on it.
FDV: Ashley: Sapete che il mio vi appartiene già. [Ashley: You know my heart already belongs to you.]
RV: Ashley: Hai sempre avuto il mio. Ti ci sei svezzata. [Ashley: You’ve always had mine. You were weaned on it.]

122

Serenella Zanotti

Example 12
ST: Rhett: Never in any crisis of your life have I known you to have a handkerchief.
FDV: Rhett: […] nei momenti più gravi non hai mai il fazzoletto. [Rhett: At the worst times you never have a handkerchief.]
RV: Rhett: In tutte le numerose crisi della tua vita non hai mai avuto il fazzoletto con te. [Rhett: In all of the many crises in your life you have never had a handkerchief on you.]

The retranslated text follows the original closely, sometimes even opting for a word-for-word translation of the source text, as in Example 13:

Example 13
ST: Scarlett: But Melly’s having her baby!
FDV: Scarlett: Ma Melania ha le doglie! [Scarlett: Melly is in labour!]
RV: Scarlett: Melania sta avendo il bambino! [Scarlett: Melly’s having her baby!]

Greater realism is detectable in the use of pronouns of address in the redubbing of Gone with the Wind, in line with the social conventions of the target culture. While in the original dubbing Scarlett and Ashley use the formal pronoun voi, they resort to a more familiar form of address (tu) in the redubbed version, as their life-long relationship allows for greater intimacy:

Example 14
ST: Scarlett: Well, Ashley, Ashley... I love you.
FDV: Scarlett: Oh, Ashley, Ashley... Vi amo. [Scarlett: Oh, Ashley, Ashley... I love you.]
RV: Scarlett: Oh, Ashley, Ashley... Ti amo. [Scarlett: Oh, Ashley, Ashley... I love you.]

An accurate approach is also found in the treatment of the following dialogue exchange (Example 15). Rhett’s passion and his breaking of social conventions are indicated in the redubbed version by his shift


from the formal pronominal address form lei [formal ‘you’] to the more intimate tu [informal ‘you’], whereas Scarlett sticks to the formal variant lei as an emphasis of her detachment and disapproval:

Example 15
ST: Rhett: I love you, Scarlett. Scarlett: Don’t hold me like that.
FDV: Rhett: Vi amo, Rossella […] Scarlett: Non stringetemi così. [Rhett: I love you, Scarlett. Scarlett: Don’t hold me like that.]
RV: Rhett: Io ti amo […] Scarlett: Non mi stringa così. [Rhett: I love you, Scarlett. Scarlett: Don’t hold me like that.]

The redubbed version’s predilection for greater linguistic realism is exemplified by the rendering of Scarlett’s line in Example 16, showing greater accuracy in dealing with linguistic crudity:

Example 16
ST: Scarlett: I’ll whip the hide off you.
FDV: Scarlett: Se no, ti frusto! [Scarlett: Otherwise I’ll whip you!]
RV: Scarlett: O ti frusterò a sangue! [Scarlett: Otherwise I’ll whip you until you bleed!]

Curiously enough, the retranslation of Rhett’s last words to Scarlett goes in the opposite direction, as the second dubbing mitigates the strength of Rhett’s assertion by neutralizing the mildly profane overtones of the word ‘damn’, as is shown below:

Example 17
ST: Rhett: Frankly, my dear, I don’t give a damn.
FDV: Rhett: Francamente me ne infischio. [Rhett: Frankly, I couldn’t care less.]
RV: Rhett: Francamente, cara, non me ne importa niente. [Rhett: Frankly, my dear, I don’t care at all.]

The most remarkable change in the 1977 redubbed version concerns the linguistic representation of African-American characters. The new dubbing director’s intention was to reject the racist discourse of the


previous edition, and he therefore replaced the caricatural pidgin-like Italian used by the African-American servants in the first dubbed version with a standard variety in the redub (Zanotti 2012a). Interestingly, despite the high quality of both dialogue and acting, the redubbed version met with a rather cold reception from film critics and audiences alike.3 In the end, the old dubbed version superseded the new and is currently the only version available on television and home video.

Similar trends can be seen in the redubbing of William Wyler’s film Wuthering Heights (1939). First dubbed in 1940, the film was circulated under two different titles and in two different dubbed versions: La voce nella tempesta [The voice in the storm] and Cime tempestose [Wuthering heights]. The redubbing was commissioned by the Italian national television network RAI in the 1970s. In line with new target culture norms, the redub opted for a more source-oriented approach, as is shown by Example 18:

Example 18
ST: Heathcliff: I stayed just to be near you. Even as a dog! I’ll stay till the end. I’ll live and I’ll die under this rock.
FDV: Heathcliff: E resto proprio per essere sempre vicino a te. Ø Ci resterò fino alla fine e morirò su queste mie rocce. [Heathcliff: I’m staying just to always be near you. I’ll stay till the end and I’ll die on these rocks.]
RV: Heathcliff: Solo per restarti accanto, sia pure come un cane. E ci resterò fino alla fine. Vivrò e morirò sotto questa roccia. [Heathcliff: Just to always be close to you, even as a dog! And I’ll stay till the end. I’ll live and I’ll die under this rock.]

In the 1940 dubbing, part of the line was deleted (‘even as a dog’), while the last sentence was shortened and slightly changed in meaning. The later version is more accurate, however, and shows no deletion. On the whole, the first dubbing tends to be more linguistically manipulative, while the second provides a more accurate rendition, closer to the source text. This is exemplified in the scene where Cathy explains to her maid that ‘there’s nothing to be gained by just looking pretty’, illustrated in Example 19. In the first dubbing, the key words ‘thought’ and ‘humour’ were omitted and replaced by capriccetto [whim], thus giving Cathy’s words a coquettish overtone, which was absent in the original. The redub restores the original tone with a more accurate translation, containing the words pensiero [thought] and spirito [wit].


Example 19
ST: Cathy: Every beauty mark must conceal a thought and every curl be full of humour... as well as brilliantine.
FDV: Cathy: Gli uomini, vedi, non si interessano a noi se in ogni nostro ricciolo non trovano almeno un capriccetto. [Cathy: Men, you know, do not take an interest in us unless they find a whim for each curl.]
RV: Cathy: Ogni neo di bellezza deve nascondere un pensiero e ogni ricciolo deve essere pieno di spirito, non solo di profumo. [Cathy: Each beauty mark must conceal a thought and every curl must be full of wit, not just perfume.]

Francis Ford Coppola’s film Apocalypse Now (1979) is another interesting example. In this case, retranslation was necessary owing to significant additions of footage and new editing. The extended edition, which restored about 50 minutes of cut scenes, was distributed in film theatres in 2001 and later in DVD format under the title Apocalypse Now Redux. One of the many problematic points was the rendering of the slang word ‘hairy’ (i.e. dangerous, fraught with difficulties), illustrated in Example 20. The lexical choice made in the first dubbing creates an effect of linguistic unnaturalness, neutralized in the redubbed version through the choice of a less marked target-language equivalent.

Example 20
ST: Kilgore: That village you’re pointing at is kinda hairy, Willard. Willard: What do you mean hairy, sir? Kilgore: It’s hairy. Got some pretty heavy ordnance there.
FDV: Kilgore: Quel villaggio che stai indicando è un po’ cazzuto, Willard. Willard: Cosa intende per cazzuto? Kilgore: Cazzuto. C’è artiglieria pesante, lì. [Kilgore: That village you’re pointing at is a bit cocky, Willard. Willard: What do you mean cocky, sir? Kilgore: It’s cocky. Got some pretty heavy ordnance there.]
RV: Kilgore: Quel villaggio che mi indica è rognoso, capitano. Willard: In che senso rognoso, signore? Kilgore: Rognoso. Quelli hanno un po’ d’artiglieria pesante. [Kilgore: That village you’re pointing at is pesky, Captain. Willard: What do you mean pesky, sir? Kilgore: It’s pesky. They’ve got some pretty heavy ordnance.]


One of the issues at stake in retranslation is the relationship between the first and subsequent translations. In the case of Apocalypse Now Redux, we have a brand-new translation of the original dialogue, yet in some cases the previous adaptation of rather crucial lines of dialogue made its way into the new translation, as in Example 21, which shows that the notion of retranslations as palimpsests reflecting traces of previous translations can easily be applied to redubs (Chaume 2007; Paloposki and Koskinen 2010):

Example 21
ST: Photojournalist: … and he’ll say: Do you know that if is the middle word in life?
FDV: Photojournalist: … e ti dice: Lo sai che se è la sillaba centrale di essere? [Photojournalist: …and he says: Do you know that if is the central syllable in essere (being)?]
RV: Photojournalist: … e ti dice: Lo sai che se è la sillaba al centro di essere? [Photojournalist: …and he says: Do you know that if is the syllable in the middle of essere (being)?]

The area where the two Italian dubbed versions of Apocalypse Now Redux differ the most is in the treatment of intertextual references. In Example 22, explicitation is the strategy adopted in the first dubbing to deal with a quotation from T.S. Eliot’s poem The Love Song of J. Alfred Prufrock: the poet is mentioned explicitly as a means of identifying the quotation, whereas the redubbed version keeps the reference implicit. This reveals not only a different approach to translating intertextual references, but also a changing perception of audience familiarity with Anglophone cultures.4

Example 22
ST: Photojournalist: I should have been a pair of ragged claws scuttling across floors of silent seas.
FDV: Photojournalist: E, come dice Eliot, io sono soltanto un paio di ruvide chele che fuggono attraverso distese di mari silenziosi. [Photojournalist: And, as T.S. Eliot says, I am just a pair of ragged claws running away across stretches of silent seas.]
RV: Photojournalist: Ah, beh, per esempio, dice potevo essere un paio di ruvidi artigli che fuggono sul fondo di mari silenziosi. [Photojournalist: Oh well, for instance, he says I could have been a pair of ragged claws running away on the bottom of silent seas.]


The analysis carried out so far has shown that redubbing should be seen as an umbrella term used to encapsulate a wide range of strategies and practices, in a continuum from re-acting a previously translated script, through a more radical reworking, to translating anew. Section 7.5 below focuses on the variables and issues affecting film retranslation and considers the following questions: Do film rating and regulations regarding the circulation of audiovisual material prevent redubbed dialogues from being changed more radically? Given that different versions circulate on different media (TV, pay channels, DVDs, BDs and the Internet), are they to be considered as truly competing versions, as is the case with literary translations? The present analysis has pointed out that there can be more than one reason for redubbing, but is improved quality the main concern? And is it true that retranslation is a more source-oriented process? The ageing of translations is, moreover, often regarded as a major motive for retranslation, but is linguistic updating to be considered a truly crucial issue in redubbing? And if not, what are the forces that decide what is to be redubbed and why? These points will be addressed below through a number of further examples.

7.5 Understanding redubbing: variables and issues

7.5.1 Redubbing and censorship

The ‘retranslation hypothesis’ maintains that first translations are perforce deficient, whereas retranslations are ameliorative (Berman 1990) in so far as they mark a return to the source text (Gambier 1994). But what happens when the issue of censorship comes into play? Do redubs make up for censorial interventions in previous dubbings?5 An interesting case in point is offered by For Whom the Bell Tolls (Sam Wood 1943). This film has been the subject of three different dubbings, carried out in 1948, 1978 and 2003. The original dialogue was censored in the earliest dubbed version, as is illustrated by Example 23, where Maria’s explicit reference to lovemaking in her exchange with Jordan is replaced by a vaguer allusion to a ‘night visit’. The censored line was restored in the later redubs.

Example 23
ST: Maria: … and in the nights we’ll make love.
FDV: Maria: E la notte verrò a trovarti. [Maria: And in the night I’ll come and see you.]
1978 RV: Maria: E la notte faremo l’amore. [Maria: And in the night we’ll make love.]
2003 RV: Maria: E di notte faremo l’amore. [Maria: And at night we’ll make love.]


This example shows that redubbing may serve as a chance for translators to recover previously censored content and to restore the integrity and meaning of the work (Nornes 2007: 2). A similar case is found in Miloš Forman’s Amadeus (1984), of which a new edition containing 20 minutes of additional footage was released in 2002. The distributor decided to have the whole film redubbed and, in the process of revision, no substantial changes were made to the old dubbing script, except perhaps for one line containing a potentially disturbing reference to Italians as ‘musical idiots’, which had not been included in the first dubbing.

Example 24
ST: Amadeus: Naturally, the Italians! Of course, always the Italians! They’re all musical idiots!
FDV: Amadeus: Naturalmente, i cortigiani. Ma si capisce. Sempre i cortigiani. Gente musicalmente idiota! [Amadeus: Naturally, the courtiers! Of course, always the courtiers! They’re all musical idiots!]
RV: Amadeus: Naturalmente, gli italiani! Ma si capisce. Sempre gli italiani. Gente musicalmente idiota! [Amadeus: Naturally, the Italians! Of course, always the Italians! They’re all musical idiots!]

The meaning of the line was distorted in the first dubbed version by replacing the word ‘Italians’ with cortigiani [courtiers], with the translators avoiding a potentially offensive allusion. The 2002 redubbed version restored the missing content, thus showing that redubbing can indeed be used as an opportunity to recover the original intentio operis. However, there are cases where this does not apply, as with John Badham’s film Saturday Night Fever (1977). A new dubbing was commissioned for the DVD edition and the original dubbing script underwent a substantial revision process in which, however, some previously censored lines of dialogue were not amended. The first dubbed version adopted a rather liberal attitude towards strong language, but showed a censorial treatment of blasphemy. Although the redubbing engaged in extensive rewriting of the original dialogue, it continued to avoid religious taboo, as illustrated by Example 25:

Example 25
ST: Double J.: If you do it in five, you get a medal of honor with rubies and a piece of the Pope’s ass. Joey: The Pope don’t got no ass. That’s why he’s the Pope.
FDV: Double J.: E se te ne fai 5 ti danno una bella medaglia con su inciso un bel culo. Joey: Il culo di chi? Double J.: Il culo di tua sorella. È la zona più bella. [Double J.: And if you make more than five, you get a nice medal engraved with an ass. Joey: Whose ass? Double J.: Your sister’s ass. That’s the best part.]
RV: Double J.: E se la metti incinta ti danno anche una medaglia e te la ficchi su per il culo, eh? [Double J.: And if you get her pregnant, they’ll also give you a medal and you can stick it up your ass, eh?]

Obscene references were also censored in the first dubbing, with the new dubbing continuing along the same lines, as is illustrated by Example 26, where the obscenity contained in Tony’s line is still omitted. Considerations concerning both target audience sensitivity and film rating were perhaps behind the decision to leave the censored lines unchanged.

Example 26
ST: Tony: Oh, forget it. Fuck! Just give me a blow job. Right? Come on.
FDV: Tony: Ah, lasciamo perdere. Chiudiamo. OK? Coraggio. [Tony: Oh, forget about it. Let’s give up. OK? Come on.]
RV: Tony: Ah, lasciamo perdere. Chiudiamo. OK? Coraggio. [Tony: Oh, forget about it. Let’s give up. OK? Come on.]

In the case of Grease (Randal Kleiser 1978), redubbing only involved revoicing the previous dubbing script. The dialogue in the first dubbed version underwent substantial censorship on both a linguistic and a content level (Zanotti 2012b) so that the film would receive a favourable rating, bypassing age restrictions, and this policy was maintained in the second edition. Reconsidering the dubbed film’s rating profile was not the distributor’s intention, as it would have meant submitting the new version to the revision commission in charge of the examination and rating of films. The cases examined in this section show that, as far as censorship is concerned, redubbing represents a chance for improvement when it allows censored content to be restored; but when no attempt is made to make up for previous censorial intervention, it can be regarded as a missed opportunity. It is true, though, that redubbing can also lead to greater linguistic restraint, as seen in family films such as E.T. It can, therefore, be concluded that much depends on target culture norms, which change through time and are affected by the socio-cultural context in which translation takes place.

7.5.2 Redubbing and translation quality

The examples illustrated above seem to confirm the notion that retranslation often leads to an improvement in translation quality. However, can the need for greater accuracy be considered a viable and universally valid reason for retranslating a film? The examples discussed in this section paint a less clear-cut picture, since in these cases redubbing does not entail a better-quality translation; on the contrary, it is the earlier dubbed version that is more accurate. This is more likely to happen with lower-budget productions, as exemplified by William Wyler’s film Jezebel (1938). Commissioned by the Italian national television network RAI, the new version was evidently based on the previous one, from which it nevertheless departs in a rather intermittent, disorderly and inconsistent manner. The result is a pastiche constructed as a patchwork of old and newly translated dialogues. The redubbing of Jezebel also challenges the idea that subsequent translations are more source-oriented, since it is the second version that constitutes a radical departure from the source-language dialogue. This can be observed in Example 27, where Julie’s words are totally misinterpreted in the more recent dubbed version:

Example 27
ST: Aunt Belle: That creature, Julie. You heard what Madame Poulard said? That infamous Vickers woman. Julie: Mary Vickers couldn’t possibly do it justice.
FDV: Aunt Belle: Di quella donna, Giulia. Hai capito quello che dice Madame Poulard. Di quella indegna signora. Julie: Mary Vickers non gli renderebbe giustizia. [Aunt Belle: Of that woman, Julie. Did you understand what Madame Poulard said? That despicable lady. Julie: Mary Vickers wouldn’t do it justice.]
RV: Aunt Belle: Addirittura? Hai sentito cos’ha detto Madame Calotte. Quella svergognata della Vicker. Julie: La svergognata Vickers farà le mie vendette. [Aunt Belle: Really? Did you hear what Madame Calotte said? That shameless Vickers woman. Julie: Shameless Vickers will do me justice.]

Michael Curtiz’s film The Adventures of Robin Hood (1938) is another case in point. Once again, it is the original dubbing that provides a more accurate translation, whereas the redubbed version gives a poor, if not actually inaccurate, rendering of the original dialogue:

Example 28
ST: Robin: He’s too thin. I might miss him altogether.
FDV: Robin: No, è troppo magro e la mira è troppo difficile. [Robin: No, he is too thin and too difficult to hit.]
RV: Robin: Oh, è troppo magro e non è facile infilzarmi. [Robin: Oh, he is too thin and it is not easy to get me.]

Example 29 shows that the redubbed version relies heavily on the previous dubbing script and, even when textual differences occur, the impression is that the dubbing script for the later version was not prepared on the basis of the original English script, from which it often departs:

Example 29
ST: Marian: And when my real guardian, King Richard, finds out about your being in love with me... Robin: I know, he’ll make me court jester. Marian: He won’t. He’ll stick your funny head on London’s Gate.
FDV: Marian: E quando il mio vero sovrano, re Riccardo, scoprirà quante pazzie abbiamo fatto… Robin: Eh, allora ci proteggerà. Marian: Lo so. E ci ringrazierà per la nostra fedeltà. [Marian: And when my real sovereign, King Richard, finds out the foolish things that we did… Robin: He will give us protection. Marian: He will thank us for being loyal.]
RV: Marian: E quando il nostro sovrano, re Riccardo, scoprirà le pazzie che abbiamo fatto… Robin: Allora ti darà un premio. Marian: Sì, e ci ringrazierà per esserci battuti tanto per lui. [Marian: And when our sovereign, King Richard, finds out the foolish things that we did… Robin: He will give you a reward. Marian: He will thank us for fighting for him.]

Redubs based on poor-quality translations such as the ones discussed above seem to confirm Vanderschelden’s (2000: 8) point that the notion that retranslation improves the quality of a translation ‘cannot be confirmed nor dismissed’. We may therefore conclude that redubbing includes a variety of practices and situations so that generalizations cannot be made.

7.5.3 Redubbing as linguistic updating

Linguistic updating is often regarded as a major reason for both revision and retranslation (Gambier 1994: 413). But does this hold true for redubbing? In big-budget family films, linguistic modernization seems to play a crucial ‒ although not unique ‒ role in the revision process, as distributors aim to address as large an audience as possible and therefore resort to redubbing in order to update their products so that they will appeal to younger generations. This was the case with E.T., where the revision of the original dubbing script involved, among other things, both the review and the linguistic updating of the dialogue. For instance, the word ‘cowgirl’ in Example 30 had been rendered as ‘cowboy’ in the 1984 version, since cowgirl was not used in Italian at the time; in 2002 the female form was evidently considered acceptable and was therefore used in the redubbed version:

Example 30
ST: Gertie: I’m going as a cowgirl.
FDV: Gertie: Però vengo vestita da cowboy. [Gertie: But I’m coming dressed as a cowboy.]
RV: Gertie: Però vengo vestita da cowgirl. [Gertie: But I’m coming dressed as a cowgirl.]

In the redubbing of E.T., the substitution of outdated lexis is consistent throughout the dialogue. For example, the word ragazzino, which was routinely used in the late 1970s to translate ‘kid’, is abandoned in favour of bambino:

Example 31
ST: Elliott: Only little kids can see him.
FDV: Elliott: Solo noi ragazzini possiamo vederlo. [Elliott: Only us kids can see him.]
RV: Elliott: Possono vederlo soltanto i bambini. [Elliott: Only children can see him.]

The same approach was also adopted in dealing with slang lexis such as spillato for ‘lame’, which was evidently perceived as old-fashioned in 2002 and therefore replaced with a more up-to-date synonym in the redub:

Example 32
ST: Michael: You’re so lame, Elliott.
FDV: Michael: Sei uno spillato, Elliott. [Michael: You are an idiot, Elliott.]
RV: Michael: Elliott, sei un povero fesso. [Michael: Elliott, you are a poor fool.]

One of the most striking examples of linguistic modernization by means of redubbing is the 1997 version of the animated feature film Lady and the Tramp (Geronimi, Jackson, Luske 1955; see Valoroso 2000 and Rossi 2006). This ambitious commercial operation was intended to re-launch an old Disney classic. For this purpose, the original dubbing script was updated at different levels and mainstream Italian actors were hired as voices for the characters. The revisions included both dialogue and lyrics, and were primarily intended to refresh old-fashioned language, as is exemplified below:

Example 33
ST: Tramp: It’s what they do to your happy home. Move it over, will ya, friend? Home wreckers, that’s what they are.
FDV: Tramp: Portan lo scompiglio in casa. Fatti più in là, cocco bello. Distruttori di focolai, sono. [Tramp: They create confusion at home. Shove over, honey! Home destroyers, that’s what they are.]
RV: Tramp: Ti puoi scordare la tua pace. Fatti più in là, piccoletto. Sono distruttori di focolare. [Tramp: Forget your peace. Shove over, shorty. They’re home destroyers.]

When redubbing Lady and the Tramp, the strategy of acculturation was adopted to deal with culture-specific references, while lexical selection was guided by the concern to replace words potentially unfamiliar to young viewers with more familiar ones, as in examples 34 and 35:

Example 34
ST: Toughy: Well, well, look youse guys, Miss Park Avenue herself.
FDV: Toughy: Guarda, guarda chi abbiamo qui. Miss Parioli in persona. [Toughy: Why, look who’s here. Miss Parioli herself.]
RV: Toughy: Guarda chi abbiamo qui. Miss Quartieri alti in persona. [Toughy: Look who’s here. Miss Uptown herself.]

Example 35
ST: Boris: Besides, little ‘bublichki’, wearing license here, that is like waving, you should excuse the expression, red flag in front of bull.
FDV: Boris: E inoltre, piccola ‘bublichki’, portare la piastrina qui è come sventolare, con rispetto parlando, bandiera rossa di fronte a toro. [Boris: Besides, little ‘bublichki’, wearing one’s tag here is like waving a red flag in front of a bull.]
RV: Boris: E inoltre, piccola ‘matrioska’, portare la piastrina qui è come sventolare, con rispetto parlando, bandiera rossa di fronte a toro. [Boris: Besides, little ‘matrioska’ (i.e. Russian doll), wearing one’s tag here is like waving a red flag in front of a bull.]

Curiously enough, the redubbed version was not devoid of errors. Given the prestige of the commissioner, mistranslations such as the one shown in Example 36 are indeed fairly surprising:

Example 36
ST: Dr Jones: Cheer up, Jim. Old Doc Jones has never lost a father yet.
FDV: Dr Jones: Coraggio, Gianni. Il vecchio dottor Jones non ha mai deluso un papà. [Dr Jones: Cheer up, Gianni. Old Doctor Jones has never let a father down.]
RV: Dr Jones: Coraggio, Gianni. Il vecchio dottor Jones non ha mai tenuto un poppante. [Dr Jones: Cheer up, Gianni. Old Doctor Jones has never held a baby in his arms.]

In the redub, all the lyrics were changed extensively in terms of both rhyming and lexical selection. In line with current target cultural norms, truncated rhymes were avoided and lexical choices were updated somewhat, as illustrated in Example 37:

Example 37
ST: Si & Am: We are Siamese if you please. We are Siamese if you don’t please. We are from a residence of Siam. There is no finer cat than I am.
FDV: Si & Am: Siam siam siam del Siam siam siamesi. Siam flatelli ma non siamesi. Questa nuova casa ispezional dobbiam. Se ci galba folse un pezzo ci lestiam. [Si & Am: We come come come from Siam we’re Siamese. We’re blothels though not Siamese. This new home we must inspect. If we like it we might stay awhile.]
RV: Si & Am: Siam siam siam del Siam siam siamesi. Siam gemelli siam monosiamesi. Questa casa ispezionale noi dovlemo. Se ci piace molto a lungo lestelemo. [Si & Am: We come come come from Siam we’re Siamese. We’re twins, we’re monosiamese. This new home we must inspect. If we like it we might stay long.]

The analysis of the strategies adopted in retranslating family films seems to suggest that, even though linguistic modernization cannot be considered a major reason for redubbing, ‘it does seem pertinent to draw attention to the cultural dynamism that makes cultures constantly evolve and periodically refresh their inventory of cultural elements’ (Martínez-Sierra 2010: 128). In the case of blockbusters aimed at young audiences, the updating of cultural references becomes essential in order to ensure commercial success.

7.5.4 Commercial factors and audience response: a field open to investigation

As is well known, in the home-video industry decisions are always dictated by market issues. The redubbed version of Lady and the Tramp sought commercial success by replacing the old dubbing voices with those of contemporary mainstream Italian actors. However, due to a negative reception from Disney fans and viewers, the distributor was forced to go back to the old dubbing for the new version, which came out in DVD format in 2006. The cases of Gone with the Wind and Lady and the Tramp suggest that audience response is an important factor. Even though the issue is complex and beyond the scope of this chapter, some tentative observations can be made from this material.

Motion pictures are as much part of popular culture as film quotes are part of our daily lives and language. Because the power of auditory memory is immense, viewers are alert to changes in the dialogues of films they have grown up with. Changing whole lines of dialogue together with the voices of the original dubbing actors is thus relatively risky, especially with popular films. Viewers’ reactions to new dubbing voices appear to be generally negative as they tend to ‘identify the visible actor on screen and his [or her] dubbed voice as one and the same person’ (Wehn 1998: 186). This is one of the reasons why redubs often have a negative reception. A further reason is that redubbed films lose their vintage flavour if the redubbing is too distant in time from the original release. Over the years, websites and discussion groups voicing discontent concerning redubbing in Italy have proliferated, but it is only recently that distributors have started to respond to criticism. It is perhaps for this reason that DVD and BD editions featuring the original dubbed version alongside the new one are now available on the Italian market (Grease, Saturday Night Fever and The Godfather, for instance). We are thus in a situation where two (or even more) dubbed versions of the same film are in circulation within the country at any one time, with multitrack DVDs and BDs virtually inviting viewers ‘to critically compare and analyse multiple translations’ (Nornes 2007: 16).

7.6 Conclusions

This chapter has shown that the study of redubbing has the potential to yield new insights into AVT theory and practice. Analysing redubs allows us not only to compare different translation strategies of the same material, thus fostering research into the way imported films are brought in line with current target cultural values and aesthetic norms, but also to study operational norms from a diachronic perspective, thus providing material regarding cultural-historical approaches to AVT. Research into the phenomenon of redubbing offers us an opportunity to measure the impact of economic and commercial factors on the adaptation and reception of audiovisual products. This may also help to shed light on the issue of manipulation and censorship in dubbing translation. Finally, the study of redubs adds to the ongoing debate on retranslation. The analysis has shown that the theoretical framework provided by retranslation theory is insufficient to account for its complexity, firstly, because in the case of audiovisual texts the very notion of retranslation is far more complex, as it encompasses not only subsequent translations of the same source text, but also the reprocessing of texts in multiple modalities and, secondly, because the factors involved are not the same as those involved in the retranslation of literary texts.

Analysing Redubs


Research has shown that the decision to redub is primarily motivated by economic and commercial factors, which in turn relate to copyright issues and marketing strategies. On the other hand, redubbing can also be regarded as the result of shifting needs and changing perceptions in the target culture. New dubbings are made in response to the cultural policy of particular institutions (e.g. national television channels) and, most particularly, as fuel for the audiovisual market. Studies on literary retranslation have highlighted the importance of ‘supplementarity’, i.e. ‘the targeting of different versions to different audiences’ (Koskinen and Paloposki 2003: 22–3). This notion also seems applicable to the field of AVT, since more and more DVD and BD editions are coming out with optional soundtracks containing both the old and the new dubbing versions. Thus, the very existence of redubs belies the enduring assumption that films ‘can only be translated once rather than being brought out in different versions for different receptors’ (Fawcett 2003: 162). One of the tenets of retranslation theory is that retranslation causes competition and tension within the target system, as different translations offer different interpretations of the same work (Pym 1998; Susam-Sarajeva 2003). This is not always true of redubbing. However, the cases of Gone with the Wind and For Whom the Bell Tolls suggest that film translations offering competing interpretations do exist. Based on the points outlined above, it can be concluded that the translational issues emerging from the specific field of AVT do indeed add to the notion of retranslation itself and contribute to the ongoing debate on the subject. To sum up, this study has shown that the reasons behind the practice of redubbing are as diverse as the strategies adopted in its actual process, with both changing according to the commissioner, the purpose and the target audience.
Medium and time are also important variables. As marketable products, films are subject to commercial factors, which often prevail over translational, aesthetic and philological considerations. However, given the artistic nature of films in general, a balance between cultural values and financial interests is desirable. Viewer response has proved to be another decisive factor compelling distributors to revisit their marketing strategies. It appears that the practice of redubbing is often despised by viewers for several reasons, including the fact that the onscreen actor and voice are inextricably connected. Another reason is the existence of poor-quality translations, often attributable to the working conditions of dubbing professionals, which are in turn determined by the distributors’ concern with cost-effectiveness.


Serenella Zanotti

This study was carried out to demonstrate the interest and complexity of redubbing, the study of which may further our understanding of AVT practice from both a synchronic and a diachronic perspective. The conclusion reached is that redubbing should not be regarded in terms of loss or betrayal, but rather as an opportunity for gain, from the perspective of scholars and viewers alike.

Notes
1. Information on distributors and dubbing directors has been included wherever possible.
2. See for instance http://it.wikipedia.org/wiki/C’era_una_volta_in_America or customers’ reviews on Amazon’s website for Italy (www.amazon.it/product-reviews/B004DMG8V6).
3. Mario Maldesi, personal communication with the author.
4. The two versions offer different translations of the line from T.S. Eliot’s poem. This line is translated freely in the first dubbed version, while in the second dubbing the rendering is based on Roberto Sanesi’s translation.
5. Nornes (2007: 2) addresses the issue of retranslation and censorship with reference to the English edition of Bergman’s Persona (1966), which was retranslated for high-quality DVD release. As he points out, the previous ‘subtitles censored the film, which was far more sexually explicit than British and American spectators imagined’.

References
Berman, Antoine. 1990. ‘La retraduction comme espace de la traduction’. Palimpsestes 4: 1–7.
Chaume, Frederic. 2007. ‘La retraducción de textos audiovisuales: razones y repercusiones traductológicas’. In Juan Jesús Zaro Vera and Francisco Ruiz Noguera (eds) Retraducir: una nueva mirada. La retraducción de textos literarios y audiovisuales (pp. 49–63). Málaga: Miguel Gómez Ediciones.
Chiaro, Delia. 2008. ‘Issues of quality in screen translation’. In Delia Chiaro, Christine Heiss and Chiara Bucaria (eds) Between Text and Image: Updating Research in Screen Translation (pp. 241–56). Amsterdam: John Benjamins.
Comuzio, Ermanno. 2000. ‘Quando le voci non appartengono ai volti’. In Alberto Castellano (ed.) Il doppiaggio. Profilo e storia di un’arte negata (pp. 96–102). Roma: AIDAC.
Fawcett, Peter. 2003. ‘The manipulation of language and culture in film translation’. In María Calzada Pérez (ed.) Apropos of Ideology: Translation Studies on Ideology – Ideologies in Translation Studies (pp. 45–63). Manchester: St Jerome.
Gambier, Yves. 1994. ‘La retraduction, retour et détour’. Meta 39(3): 413–17.
Gürçağlar, Şehnaz Tahir. 2009. ‘Retranslation’. In Mona Baker and Gabriela Saldanha (eds) Routledge Encyclopedia of Translation Studies (pp. 233–36). London: Routledge.
Khris, Smail. 2006. ‘The whys of redubbing. Toward a theoretical approach to redubbing’. Paper presented at MuTra – Multidimensional Translation Conference (University of Copenhagen, 1–5 May 2006).
Koskinen, Kaisa and Outi Paloposki. 2003. ‘Retranslations in the age of digital reproduction’. Cadernos de Tradução 11(1): 19–38.
Maraschio, Nicoletta. 1982. ‘L’italiano del doppiaggio’. In Accademia della Crusca (ed.) La lingua italiana in movimento (pp. 137–58). Firenze: Accademia della Crusca.
Martínez-Sierra, Juan José. 2010. ‘Building bridges between cultural studies and translation studies: with reference to the audiovisual field’. Journal of Language & Translation 11(1): 115–36.
Nornes, Abé Mark. 2007. Cinema Babel. Translating Global Cinema. Minneapolis: University of Minnesota Press.
Paloposki, Outi and Kaisa Koskinen. 2010. ‘Reprocessing texts. The fine line between retranslating and revising’. Across Languages and Cultures 11(1): 29–49.
Paolinelli, Mario. 2004. ‘Nodes and boundaries of global communications: notes on the translation and dubbing of audiovisuals’. Meta 49(1): 172–81.
Pym, Anthony. 1998. Method in Translation History. Manchester: St Jerome.
Rossi, Fabio. 2006. Il linguaggio cinematografico. Roma: Aracne.
Susam-Sarajeva, Şebnem. 2003. ‘Multiple-entry visa to travelling theory: retranslations of literary and cultural theories’. Target 15(1): 1–36.
Valoroso, Nunziante. 2000. ‘Il doppiaggio nel cinema d’animazione: I lungometraggi animati Disney’. In Alberto Castellano (ed.) Il doppiaggio. Profilo e storia di un’arte negata (pp. 109–14). Roma: AIDAC.
Valoroso, Nunziante. 2006. ‘Il problema del ridoppiaggio’. Disneyrama. www.disneyrama.com/index.php?Itemid=31&id=60&option=com_content&task=view.
Vanderschelden, Isabelle. 2000. ‘Why retranslate the French classics? The impact of retranslation on quality’. In Myriam Salama-Carr (ed.) On Translating French Literature and Film (pp. 1–18). Amsterdam: Rodopi.
Votisky, Anna. 2007. ‘The questions of redubbing films (in Hungary)’. Paper presented at MuTra – Multidimensional Translation Conference: LSP Translation Scenarios (University of Vienna, 30 April–4 May 2007).
Wehn, Karin. 1998. ‘Re-dubbings of US-American television series for the German television: the case of Magnum, P.I.’. In Yves Gambier (ed.) Translating for the Media (pp. 185–99). Turku: University of Turku.
Zanotti, Serenella. 2012a. ‘Racial stereotypes on screen: dubbing strategies from past to present’. In Silvia Bruti, Elena Di Giovanni and Pilar Orero (eds) Audiovisual Translation across Europe: An Ever-changing Landscape (pp. 153–70). Bern: Peter Lang.
Zanotti, Serenella. 2012b. ‘Censorship or profit? The manipulation of dialogue in dubbed youth films’. Meta 57(2): 351–68.

8 Subtitling in the Era of the Blu-ray Nicolas Sanchez

8.1 Introduction

We ought not to be over anxious to encourage innovation, in case of doubtful improvement, for an old system must ever have two advantages over a new one; it is established and it is understood. (Colton 1845)

For many decades, subtitles have been considered a useful but ungraceful tool. Indeed, while decoding foreign dialogue, they leave an unsightly mark on screen spoiling the viewer’s pleasure; a double-edged sword that Marleau (1982: 271) once defined as ‘un mal nécessaire’, a necessary evil. Nonetheless, the cinema is an industry and all industries are oriented to consumer satisfaction. It therefore seems logical to expect that someday, someone would find a way of making captions more palatable. But how could a succession of words become acceptable, or even desirable? Should subtitles become totally unobtrusive or, conversely, more aesthetically pleasing and thus more visible? The answer should be left to the discretion of each viewer. Indeed, today’s society has embraced the notion of customization and a successful caption seems to be tailor-made. This option has now become possible thanks to the Blu-ray optical disc, which enables the activation of ‘Remote Subtitle’, a French application that allows captions to be reshaped at will to suit the viewer’s preferences. Their size and positioning on screen are no longer decided by authoring studios and the subtitles can now be adapted by a series of clicks on a remote control. As Díaz Cintas (2005: 1) puts it, subtitling ‘has an umbilical relationship with technology […] The technical advances taking place in this area can have an immediate and considerable impact both on the subtitling practice from the practitioner’s perspective, and also on the perception of subtitling we have as spectators and consumers’. With Remote Subtitle, the once inflexible subtitle has finally become a malleable entity in what might be regarded as one of the most significant transformations in its whole history. And yet, however satisfying it seems from the audience’s viewpoint, this evolution must be carefully scrutinized. Arguably, viewers may not have the knowledge required to judge the best viewing conditions, and the shaping of the subtitles should perhaps remain within the competence of distributors. Though a real consensus has never been reached on a global scale, there are some common technical rules that have not been set gratuitously, but are the result of many years of experience and observation. To assess the legitimacy of Remote Subtitle, we must look into its origin, its foundations and its mechanism. Then, its performance must be assessed against the set of rules that is generally applied in cinemas and on DVD, as well as against the ideal standards championed by most specialists. When minimized, do these subtitles remain legible? When maximized, do they overwhelm the picture? Is there any real benefit in moving them? Ultimately, to what extent should the audience have control over subtitles? This chapter seeks to discuss the drawbacks and benefits of resizable and movable subtitles so as to determine their validity and viability.

8.2 Origins

Without deviation from the norm, progress is not possible. (Zappa 1990: 185)

Over the decades, audiovisual translators have been asked to embrace simplification and to produce subtitles that are accessible to the largest audience possible, both in terms of form and content. Although the film industry has gone to great lengths to please viewers as far as subtitling content is concerned, not much has happened to the form in which the subtitles are presented. In the second half of the twentieth century, there were no major differences between the multiple screens on the market and subtitles followed the same layout irrespective of the equipment used. The situation has changed, however, and, as Géroult (2009: 62, my translation) puts it, more screens have appeared, ‘from the small TV set installed in the bedroom to the majestic flat panel display that has pride of place in the lounge, not to mention the pull-down screen that stretches over several meters in the home theatre’.


As a result, the very same subtitles can no longer suit all configurations and although, as elements within the picture, their proportions are adapted to the screen, on tiny monitors they are hard to read and on wide sets they distract the viewer’s attention. Size is not the only issue, as caption placement is also a problem. Indeed, the aspect ratio of films varies considerably – from Movietone to Ultra Panavision, including the iconic Cinemascope – but video publishers hardly bother to tailor subtitles to each movie and end up placing them in some pre-determined space. Depending on the programme and the setup, subtitles appear either in the active frame or in the scope bar in the lower part of the screen. On occasion, the first subtitle line spreads out over the picture while the second is included in the bottom black margin, and for the increasing number of people using a video projector or a 21:9 cinema display, the second line is doomed to disappear with the scope bars, because the native frame of Blu-ray (1.78:1) is narrower than the aspect ratio of Cinemascope movies (2.20:1 or more). Unsurprisingly, some of the growing discontent among cinephiles has been aired over the Internet. According to a poll on Blu-ray.com, one of the largest video databases, 94 per cent of voters would like to be given the option to select the position of subtitles. Some solutions have surfaced sporadically, but none has brought total satisfaction. For instance, some viewers duplicate commercially available DVDs on their computers and alter subtitles with software such as SubEdit or MPC, but the manoeuvre is too complicated for most ordinary people. Two manufacturers – Oppo and Philips – have included ‘subtitle shift’ options on their playback devices to let viewers select their favourite caption positioning both on DVDs and Blu-rays.
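The arithmetic behind the vanishing second line is simple. The sketch below is illustrative only (the function name and the idea of computing bar height are mine, not part of any player's firmware): it takes the native 1920×1080 Blu-ray frame and a film's aspect ratio, and returns the height of each letterbox bar, i.e. the zone that a 21:9 display or fixed-height projector crops away.

```python
# Illustrative sketch: letterbox geometry on a 16:9 Blu-ray frame.
# A film wider than 16:9 (1.78:1) is letterboxed; any subtitle rendered
# in the bottom black bar is cropped away on a 21:9 display.
FRAME_W, FRAME_H = 1920, 1080          # native Blu-ray frame (16:9)

def letterbox_bar_height(film_aspect: float) -> int:
    """Pixel height of each black bar for a film wider than 16:9."""
    active_h = round(FRAME_W / film_aspect)   # picture height inside the frame
    return (FRAME_H - active_h) // 2

print(letterbox_bar_height(2.35))   # 131 px top and bottom (Cinemascope-style)
print(letterbox_bar_height(2.20))   # 103 px (Ultra Panavision-style ratios)
```

A subtitle line placed inside those bottom 131 pixels survives on an ordinary 16:9 television but disappears entirely once the bars are cropped, which is precisely the complaint voiced by projector owners.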
On Oppo’s BDP-83, captions can be moved while watching the movie and, thanks to the up and down arrows on the remote control, the user only has to keep the subtitles button pressed to trigger the procedure. There are ten possible positions, the subtitles can be moved five pixels up or five pixels down at a time, and the shift position may be saved in the ‘display options’ of the setup menu. Philips reacted by introducing a similar feature on their BDP9100 and even used it as a key asset in their advertising brochure. Yet, without the necessary equipment, the vast majority of spectators are still excluded. Both the hardware and software have to be examined in order to satisfy more customers. As the ageing DVD is no longer able to evolve, it is up to Blu-ray to meet new expectations. A promising medium that can store five times as much data as the DVD (25 GB vs. 4.7 GB) and offers a 3D-ready format with crisp sound and picture, it has managed to attract a large number of viewers. The Digital Entertainment Group estimates that the number of Blu-ray playback devices in US households soared to 27.5 million in 2010, up 62 per cent (Schaefer 2011). An early attempt to capitalize on the potential of Blu-ray was made by Sony Pictures Home Entertainment for the release of the film Immortal Beloved (Bernard Rose 1994). According to Williams (2007: online), ‘[d]ue to overwhelming requests from users of fixed height front projection systems, Sony has included an option to move the subtitles of Immortal Beloved out of the black bars of the widescreen presentation and into the actual picture area’. A survey was immediately conducted on Blu-ray.com to find out what people thought of the subtitle re-positioning feature and the response was fairly positive. A total of 17 per cent answered that they had used and ‘liked’ this feature and more than 70 per cent stated that, even though they had not tested it, they thought it was ‘a good idea’ and would look for other titles with the same feature. Some of the comments found on message boards give an idea of the viewers’ expectations and the potential evolution of subtitling, with statements showing satisfaction – ‘I hope Sony sets a new trend here!’ – as well as more anticipatory remarks – ‘The color of the font is what I would like to change. That’s far more important to me than the location’ or ‘Moveable subtitles still lack the option to change the font-size and/or type. That will be the next logical enhancement then. Size at least.’ However encouraging the reactions were in the USA, the system has never been extended to more Blu-rays, perhaps because in the USA captions appeal more to the deaf and the hard-of-hearing than to general audiences. The European market may offer more opportunities and a better testing ground, however, as it is more accustomed to foreign movies and subtitles.
In 2008, a French company decided to launch a feature called Remote Subtitle, which allows subtitles not only to be repositioned but also maximized and minimized ad infinitum. The brainchild of Laurent Jaconelli, the founder of Mastery International Pictures and a pioneer of high-definition technology, the project is the materialization of his desire to develop interactive captions able to adapt to all kinds of screen and eyesight. The first Blu-ray to come out with the Remote Subtitle feature was the French edition of Jon Favreau’s film, Iron Man, in November 2008 and, although it went unnoticed globally, it created some sensation in the local press. The application relies on Java and enhances interactivity by allowing users to modify the captions during the viewing of the programme. This is possible because ‘as opposed to a classic subtitle displayed as a fixed picture, the [remote] subtitle […] is computed and inlaid real-time’ (Géroult 2008: 69, my translation). Due to its technical requirements, when it was launched, Remote Subtitle could only be run on a few Blu-ray players, but since then it has appeared on various French classics and more recent releases – Les Tontons flingueurs (Georges Lautner 1963), Le Fabuleux destin d’Amélie Poulain (Jean-Pierre Jeunet 2001), OSS 117 Rio ne répond plus (Michel Hazanavicius 2009) – as well as on some US blockbusters like Underworld 3 (Patrick Tatopoulos 2009) and From Paris with Love (Pierre Morel 2010). All recent Mastery discs are compatible with the new-generation players known as ‘profile 2.0 players’, which have an internet connection and among which the most famous is Sony’s PlayStation. Remote Subtitle is available in the audio and subtitle setup and it only takes a basic remote control to activate it, although it also works with more elaborate devices such as iPhones or PlayStation controllers. Some boards instantly appear with very detailed and didactic notes, describing the system’s goal of enhancing viewing comfort, and explaining how to move and resize subtitles thanks to the navigation arrows on the remote control: the ‘up’ and ‘down’ arrows allow for the placement of captions anywhere in the picture, while ‘left’ and ‘right’ increase or decrease their size and ‘enter’ reverts to default subtitles. But even though the process is very efficient in practical terms, circumspection remains appropriate. Is Remote Subtitle a genuine help to viewers or does it just give an illusion of comfort? Does it represent a turning point in the history of subtitling – something decisive that may change the way viewers understand captions and translations? Or is it a mere gadget that may not live up to expectations and will never transcend the boundaries of France?

8.3 Limitations and potential

Progress imposes not only new possibilities for the future but new restrictions. (Wiener 1954/88: 46)

In many regards, Remote Subtitle seems to go against some of the guidelines proposed by specialists to ensure quality in subtitling. For instance, Díaz Cintas and Remael (2007: 82) state that ‘subtitling is a type of translation that should not attract attention to itself’ and, in that respect, the ease with which subtitles can now be expanded risks overwhelming the picture. By masking the frame, not only do they interfere with the aesthetics of the film but, more worryingly, they can hinder the viewer’s comprehension of some of the information. A subtitle has to remain subtle since it is only a part of the visual message that has been added a posteriori. This is the reason why Ivarsson and Carroll (1998: 158) insist on keeping ‘as much of the image as free as possible’ and authors like Karamitroglou (1998) suggest that subtitles should not occupy more than one sixth of the screen image. In addition to this respect for the original images, there is also the need to consider the translation act itself, as abiding by the technical constraints is key to the readability of the subtitle. One of the main downsides of Remote Subtitle is that, in the absence of boundaries, the words can be enlarged to such an extent that some of them threaten to disappear off screen, leaving only large partial letters within the frame. To avoid this, there should be a limit to the maximum font size so that all the subtitles will always appear in full. Such a limit would have to be preset for the entire movie, as subtitles tend to vary in length and there should not be a constant need for the viewer to change the size. The option to move captions at will also goes against traditional guidelines, which recommend that ‘subtitles should be positioned at the lower part of the screen, so that they cover an area usually occupied by image action which is of lesser importance’ (Karamitroglou 1998: online). Indeed, moving the subtitles to the centre of the picture may result in important elements being obscured, the actor’s lips and face, for example. However, in spite of these flaws, Remote Subtitle should not be dismissed. It is definitely in line with the digital era and new consumer expectations. As Díaz Cintas (2005: 15) puts it, the audience has changed in recent decades and we are now ‘dealing with an active rather than passive viewer. […] Interactivity is a buzzword and its potential is enormous’.
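Such a preset cap is straightforward to express. The following sketch is hypothetical (the frame height comes from the Blu-ray standard; the two-line worst case, the leading factor and the legibility floor are assumptions of mine, not Remote Subtitle's actual parameters): it clamps any requested font size so that a two-line subtitle stays within Karamitroglou's one-sixth-of-screen guideline and never runs off the frame.

```python
# Hypothetical sketch of a font-size cap; values are assumptions, not
# Remote Subtitle's real parameters.
FRAME_H = 1080
MAX_BAND = FRAME_H // 6      # one sixth of the image, per Karamitroglou (1998)
LINES = 2                    # worst case: a two-line subtitle
LEADING = 1.2                # assumed line-spacing factor

def clamp_font_size(requested_px: int) -> int:
    """Largest size at which both lines still fit inside the band."""
    cap = int(MAX_BAND / (LINES * LEADING))   # 75 px with these assumptions
    return max(8, min(requested_px, cap))     # 8 px floor keeps text legible

print(clamp_font_size(200))   # capped to 75
print(clamp_font_size(40))    # 40, unchanged
```

Because the cap is derived once from the frame and the worst-case line count, it can be fixed for the whole film, exactly as the guideline discussion suggests.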
Subtitling is indeed becoming increasingly interactive. Rather inflexible in its early years, it was transformed by the invention of the DVD, which changed viewer and subtitling dynamics forever, enabling users to control the subtitles more easily, to decide whether to show them on screen or not and to select a given language among several. With the advent of Blu-ray, the format and layout can be changed at the viewer’s will, marking another milestone in the evolution of subtitling. In terms of accessibility to audiovisual programmes, Remote Subtitle represents a genuine revolution. Though much has been done for the deaf and the hard-of-hearing, most studios and publishers have neglected the partially sighted. Although some DVDs and Blu-rays include audio description for the blind, with a voice narrating the events on screen, this approach cannot address the large variety of visually impaired users. When combined with captioning for the deaf and the hard-of-hearing, the system can also be very useful to those who are hard-of-hearing and have some visual impairment. Indeed, some spectators may have difficulty in reading subtitles, but not to the extent that they need audio support, in which case enlarged captions could prove to be an excellent solution. Remote Subtitle can also bring relief to people who are slightly short-sighted and prone to eye fatigue after concentrating too hard on captions that look blurred. With this new facility, they can enjoy the film without suffering from burning eyes or headaches. Blu-ray relies on a more efficient video compression and a better picture resolution than DVD, with frames now being 1920 pixels wide and 1080 pixels high (a major advance on the previous 720×480 standard). As a result, the image is clearer and sharper with better contrast, ensuring a superior definition for captions, even in black and white films, thus enhancing legibility. Remote Subtitle is based on a vector font that can be increased or decreased in size without creating any blur, enabling the text to be manipulated to suit the viewer’s sight and to increase reading comfort. As opposed to traditional pixel-based fonts, vector typefaces are built from points, lines and curves that, thanks to mathematical equations, can be scaled to any size without being distorted.
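The difference between the two font models can be made concrete with a toy example (a deliberate simplification: real glyphs use Bézier curves and hinting, and the control points below are invented): scaling a vector outline only multiplies coordinates, so the shape is preserved exactly at any size.

```python
# Toy illustration of vector-font scaling: a glyph outline is a list of
# control points; scaling multiplies each coordinate and the curves are
# re-evaluated at full precision, so no detail is lost at any size.
def scale_outline(points, factor):
    """Uniformly scale a glyph outline given as (x, y) control points."""
    return [(x * factor, y * factor) for x, y in points]

glyph = [(0.0, 0.0), (0.2, 0.7), (0.5, 1.0)]   # made-up control points
print(scale_outline(glyph, 2))
# [(0.0, 0.0), (0.4, 1.4), (1.0, 2.0)]: same shape, larger size
```

A pixel-based font, by contrast, stores a fixed grid of dots, so enlarging it requires interpolating new pixels, which is where the blur comes from.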

8.4 Closing remarks

The most exciting breakthroughs of the 21st century will not occur because of technology but because of an expanding concept of what it means to be human. (Naisbitt and Aburdene 1990: 16)

Though it might appear to be a superfluous widget at first, Remote Subtitle is arguably a major innovation in audiovisual translation. Not only does it boost interactivity but it also increases accessibility for the visually impaired, strengthening the viewer’s confidence in subtitles. These are no longer sequences of words that pop up on screen in a predetermined form, but have become a flexible tool to help viewers address their individual needs and enhance their comfort. It may come as a surprise that such a concept was born in France, mainly a dubbing country, but it clearly shows that the country is now catching up in the subtitling field by offering viewers the opportunity to modify the formal aspect of the subtitles at will.


Despite its upsides, Remote Subtitle still offers scope for improvement. For instance, the translation could adapt to the reshaping of the subtitles. So, if the viewer selects a larger font than the default, there should be an alternative subtitle track with different spotting or a shorter translation in order to avoid spoiling the picture. Conversely, if the viewer chooses a much smaller font than the default, two-line subtitles could turn into one-liners as a result of the space saved, and this might ease reading, as ‘there is no need to make eyes travel from one line to the next when all the information can be presented in a single line that viewers can read at a glance’ (Díaz Cintas and Remael 2007: 86). All in all, Blu-ray tends to favour quicker decoding and a better reading speed, thanks to its crisp, anti-aliased image and better definition. Nonetheless, such a revolutionary concept is not necessarily destined for success and it remains to be seen whether Remote Subtitle will stay a purely French phenomenon or whether it will be embraced by other countries, becoming the standard for (most) commercial Blu-rays sold worldwide. For the time being, not all Blu-ray players are efficient enough to support it and the system is rather expensive. Indeed, interactive discs incur a 25 per cent cost overrun and not all media publishers are prepared to invest more money in a hi-tech Blu-ray. However, the idea of viewers being able to customize subtitles is here to stay. Even if it does not triumph on Blu-ray, it may well be applied to the increasingly popular field of video-on-demand and devices like smart phones, for which there is already an application called SubMovie, which allows the size and colour of subtitles to be changed on some programmes.
The other field amenable to these changes is that of stereoscopic subtitles and, with the inclusion of a ‘Z’ function, Remote Subtitle would allow viewers of 3D movies to select the depth of their captions, that is, their distance from the screen.

References
Colton, Charles Caleb. 1845. Lacon, or Many Things in a Few Words I (DXXI). s.l.: s.n.
Díaz Cintas, Jorge. 2005. ‘Back to the Future in Subtitling’. In Heidrun Gerzymisch-Arbogast and Sandra Nauert (eds) MuTra 2005 – Challenges of Multidimensional Translation: Conference Proceedings. Saarbrücken: MuTra. www.euroconferences.info/proceedings/2005_Proceedings/2005_DiazCintas_Jorge.pdf.
Díaz Cintas, Jorge and Aline Remael. 2007. Audiovisual Translation: Subtitling. Manchester: St Jerome.
Géroult, François-Cyril. 2008. ‘Iron Man révolutionne le Blu-ray’. Les Années Laser 146: 68–9.
Géroult, François-Cyril. 2009. ‘Comment déplacer les sous-titres’. Les Années Laser 152: 62.
Ivarsson, Jan and Mary Carroll. 1998. Subtitling. Simrishamn: TransEdit.
Karamitroglou, Fotios. 1998. ‘A proposed set of subtitling standards in Europe’. The Translation Journal 2(2). http://translationjournal.net/journal/04stndrd.htm.
Marleau, Lucien. 1982. ‘Les sous-titres … un mal nécessaire’. Meta 27(3): 271–85. www.erudit.org/revue/meta/1982/v27/n3/003577ar.pdf.
Naisbitt, John and Patricia Aburdene. 1990. Megatrends 2000. New York: Avon Books.
Schaefer, Lyndsey. 2011. DEG Year-End 2010 Home Entertainment Report. Las Vegas: The Digital Entertainment Group. www.degonline.org/wp-content/uploads/2014/02/f_Q410.pdf.
Wiener, Norbert. 1954/1988. The Human Use of Human Beings. Cambridge: Da Capo Press.
Williams, Ben. 2007. ‘Immortal Beloved Blu-ray’. www.blu-ray.com/movies/Immortal-Beloved-Blu-ray/473/#Review.
Zappa, Frank. 1990. The Real Frank Zappa Book. New York: Simon & Schuster.

9

The MultilingualWeb (MLW) Project: A Collaborative Approach and a Challenge for Translation Studies Cristina Valdés

9.1 Introduction

This chapter refers to a project called MultilingualWeb – Advancing the Multilingual Web, funded by the European Commission through ICT PSP Grant Agreement No. 250500, as part of the Competitiveness and Innovation Framework Programme, Theme 5: Multilingual Web (http://ec.europa.eu/information_society/apps/projects/factsheet/index.cfm?project_ref=250500). The reasons for including this topic in a publication on audiovisual translation (AVT) are various. The first is that the multilingual web naturally requires translation activities that share many characteristics with other audiovisual media. Secondly, the inclusiveness of the multilingual web is a topic that has lately received much attention, promoting debate on its accessibility and availability to different audiences. Thirdly, the collaboration between Translation Studies and other disciplines is essential, especially if we consider that: (a) web translation or localization has challenged the traditional notion of what constitutes a ‘text’, foregrounding the need for a more interdisciplinary approach; and (b) interdisciplinary collaboration and research should account for a new approach to translation initiatives and improvements relating to the social impact of the web. A final reason concerns the dissemination of the project among scholars working in AVT who might be interested in the topic. It is also expected that the multilingual web community would benefit from the expertise offered by translation researchers and practitioners.


9.2 The MLW Project

The project was launched to ensure that the multilingual nature of the World Wide Web remains paramount, given its communicative role in all walks of life and the decreasing share of web content written in English compared to that of other languages spoken in the European Union and around the world. The web has undeniably become more and more international and, in order to build on its internationalization, it is essential to raise awareness of existing best practices and standards related to the management of multilingual web content and to anticipate what remains to be done. The project was coordinated by the World Wide Web Consortium (W3C), made up of around 400 member organizations worldwide, including research and industry partners, and headed by Sir Tim Berners-Lee, the inventor of the web. Twenty-two partners, representing a wide range of stakeholders, also helped to run the project. These included several universities (Politécnica de Madrid, Oviedo, Limerick), various organizations and institutions in the field of translation and localization (the Localization Research Centre, TAUS, the EC Directorate-General for Translation) and a wide range of industry partners (Lionbridge Belgium, Microsoft Ireland, Facebook Ireland). One of the project’s core elements has entailed the organization of four public workshops over a two-year period aiming to foster discussions on existing standards and best practices. In parallel with the project, but not funded by it, the W3C has also developed some practical tools, including an internationalization checker for HTML (http://validator.w3.org/i18n-checker) and a proposed outline for training, and has published the results obtained from some of the tests conducted in the field of internationalization.
Their purpose was to facilitate the implementation of standards and to define good practices as regards internationalization, understood as the process of developing or designing content, applications or specifications to ensure their correct functioning or easy adaptation to suit users from any culture, region or language (W3C 2013). The project can be followed on social media like Twitter and Facebook, and there is currently a public distribution list (http://lists.w3.org/Archives/Public/public-multilingualweb) for the discussion of any aspects related to it, which has fostered collaboration and consultation since the project's inception. The following aims have been singled out in the project's rationale:


• to contribute to a better awareness of standards and best practices in the area of the multilingual web;
• to provide a catalyst for future projects in the areas of multilingual web standardization, best practices and tool development;
• to develop relationships and awareness of shared issues across organizations, scientific disciplines and academic/industrial boundaries;
• to improve the use of multilingual standards and best practices in the creation of pages using (X)HTML and CSS by content developers;
• to improve the support for multilingual features in web user agents.

The importance of standards related to the project should not be underestimated. According to the W3C (2011: online), 'standards and best practices enable interoperability of data, which in turn maximises the potential for access to information, ensures longevity and usability of data, and improves the efficiency of processes for producing, localizing and disseminating information'. In this respect, the MLW Project is concerned particularly with increasing this interoperability and encouraging coherence across the multilingual web. Standards provide targets so that applications are urged to consider the requirements for supporting multilingual aspects of the web involving the creation, display and management of content. Important standardization work has already been done – or is in progress – in order to establish a base for the multilingual deployment of the web. Organizations such as the W3C, for example, have worked on the use of Unicode in web technologies and the development of standardized language tags. Such efforts are essential to 'support the worldwide interchange, processing, and display of the written texts of the diverse languages and technical disciplines of the modern world' (Unicode Consortium 2014), thus enabling the translated and localized text to appear on browsers, media platforms or other types of machine.
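The internationalization checker mentioned earlier automates precisely this kind of inspection. As a rough sketch of the idea (not the W3C tool's actual rule set), a script might verify that a page declares its character encoding and carries a well-formed language tag on the html element:

```python
import re
from html.parser import HTMLParser

# Loose well-formedness check for a language tag: a primary subtag
# of 2-3 letters, optionally followed by further subtags (e.g. 'es-ES').
TAG_RE = re.compile(r"^[A-Za-z]{2,3}(-[A-Za-z0-9]{2,8})*$")

class I18nCheck(HTMLParser):
    """Collects the declarations a simple i18n checker would look for."""
    def __init__(self):
        super().__init__()
        self.lang = None
        self.charset = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "html" and "lang" in attrs:
            self.lang = attrs["lang"]
        if tag == "meta" and "charset" in attrs:
            self.charset = attrs["charset"]

def check(html):
    """Return a list of problems found in the page's i18n declarations."""
    parser = I18nCheck()
    parser.feed(html)
    problems = []
    if parser.charset is None or parser.charset.lower() != "utf-8":
        problems.append("declare <meta charset='utf-8'>")
    if parser.lang is None:
        problems.append("declare a lang attribute on <html>")
    elif not TAG_RE.match(parser.lang):
        problems.append(f"'{parser.lang}' is not a well-formed language tag")
    return problems

print(check("<html lang='es-ES'><head><meta charset='utf-8'></head></html>"))
# prints [] : both declarations are present and well-formed
```

A real checker tests many more conditions (text direction, encoding mismatches between HTTP headers and markup, and so on), but the principle is the same: machine-verifiable declarations are what make multilingual content interoperable.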
Nevertheless, professionals producing multilingual web content feel that there are still a number of impediments preventing the full multilingual roll-out of information and tools, and that these still need to be identified. These impediments arise in a range of areas, reducing efficiency or standing in the way of the work carried out by those attempting to provide a truly multilingual web experience, and affecting the ability to produce, localize, manage and share information and applications on the web. In this respect, Translation Studies can surely provide the expertise necessary to overcome some of these barriers, which are not merely technical.


MLW Project members have explored a variety of possible definitions for the necessary standards and examined alternatives to these, which are intended to contribute to the advance of the multilingual web. Some preliminary decisions concerning the general aim and approach of the workshops were taken at the kick-off meeting in Bucharest in June 2010. The responsibilities of the programme committee and other partners in delivering successful workshops also came up for discussion. It was decided to organize the first workshop in Madrid and the second in Pisa, while the third and fourth were to be held in Limerick and Luxembourg. Before the first workshop, it was thought essential to publicize the project and attract researchers, professionals and the general public to the meeting so that a wide range of specialists could contribute to the project. At this first meeting, the internationalization checker, the internationalization curriculum and the tests were revised so that improvements could be made. All the workshops were organized as two-day events: on the first day, partners and stakeholders had the chance to present papers and, on the second, an open space discussion was held to allow for the identification of a significant number of issues. Some of these were related to translation in particular and, most specifically, to AVT. Discussions also took place concerning the role of minority languages, their translation on the multilingual web and their importance to users. The results of the project were published on the project website and presented in the final report for evaluation. Amongst the various outcomes of the MultilingualWeb project, the MultilingualWeb-LT Project (MLW-LT) merits particular emphasis; it was set up by the Multilingual Web Language Technologies (LT) working group and focuses on the technical side of the original project.
Likewise, in order to foster further collaboration among the MLW participants, the Multilingual Web Sites Community Group (www.w3.org/community/mws) was also founded with the aim of producing specifications to facilitate the use and creation of multilingual websites.

9.3 Translation Studies and the multilingual web

Some of the preliminary questions that come to mind if we compare the work carried out within Translation Studies – particularly in the fields of localization and AVT – with that carried out within the MLW Project concern issues like the social impact of web localization, the rapid growth of translation activity, the increasing demand for qualified translators and the barriers – both technical and non-technical – to online multilingual communication. As Pym (2011: 410) points out:

The translation and localization of websites has thus become a lucrative, dynamic, and inter-professional field, often involving marketing, design, and software engineering, as well as linguistic processes. At the same time, the development of the Internet as an interactive medium is giving rise to a series of creative non-professional translation practices.

In the overview of Theme 5: Multilingual Web, the European Union states that cross-lingual access to web resources is suffering from a lack of language-friendly conventions. Since translators are proven experts in overcoming language barriers and enabling cross-lingual access to content in a wide range of media, this particular issue is especially pertinent to Translation Studies. Within this discipline – and most specifically within AVT studies – scholars have developed methodologies to study translation processes, procedures and products. Holmes's (1988) distinction between product-oriented and process-oriented approaches is still valid in terms of translation research and may also be useful in the study of web content translation and localization to suggest improvements and define best practices. In a process-oriented approach, for instance, the industry would benefit from a study of all the stages and participants involved in the creation and localization of multilingual websites, investigating issues such as who decides which languages are used in localization or whether or not the website in question is part of a global marketing campaign. A product-oriented approach, on the other hand, would facilitate improvements connected with the readability or usability of multilingual websites, taking into account their function.
The reconstruction of intercultural and intertextual relations between websites and webpages may be attained through the analysis of the products involved. Delabastita (2008: 244) refers to text relations through the notions of the status and origin of multilingual corpora, which can also be applied to multilingual online content:

In the case of multilingual corpora, the series of texts in different languages that are compared never pretend to be 'translations' of each other (status) and their production has followed entirely autonomous genetic lines (origin). Yet their comparative study may reveal interlingual and intercultural patterns that are directly relevant to the study of translation, highlighting features that may or may not occur between texts that are 'translations' in the genetic sense.

As far as the multilingual web is concerned, translation scholars are experts in textual analysis, both at a pre-translation stage and at the subsequent editing and revision stages. As a multilingual website is, in a narrow sense, a collection of translated texts cohesively linked by textual devices such as hyperlinks, keys and commands highlighted on screen, textual expertise is very valuable, enabling the selection of the right terms to take the user to a different page within the site or, indeed, to another website. Likewise, as textual experts, translators can help to identify suitable topic sentences or keywords to draw users to the key or command in the target language or the language of the website (Valdés 2008). One of the hypotheses of the MLW Project is that the multilingual web is not used from a multilingual point of view; that is to say, users are unaware that the website is produced in several languages and with Unicode support. On the contrary, they experience a website in their target language, and this experience should be as local, and in most cases as monolingual, as possible. This points to a second contribution from Translation Studies, since translators and localizers are the best candidates to make the web readable for those accessing it in their target language. To ensure readability, attention should be paid not only to word choice or error avoidance, but also to the discourse and stylistic conventions of non-linear hypertexts on websites. Examples may be found in Valdés (2008) and Declercq (2011), both of whom have written on the localization of promotional discourse, advertising and localization on the Internet. Both papers provide interesting examples of the challenges and participants involved in the localization of multilingual content aimed at promoting a company or service online.
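This 'as local as possible' experience usually rests on language negotiation: the server matches the browser's Accept-Language header against the languages in which the site is published, so that users land on a page in their own language without ever seeing the multilingual machinery. A minimal sketch, assuming a site published in English, Finnish and Dutch (real servers handle more header edge cases than this):

```python
def parse_accept_language(header):
    """Parse an Accept-Language header into (tag, quality) pairs, best first."""
    langs = []
    for part in header.split(","):
        piece = part.strip().split(";q=")
        tag = piece[0].strip()
        quality = float(piece[1]) if len(piece) > 1 else 1.0
        langs.append((tag, quality))
    return sorted(langs, key=lambda pair: -pair[1])

def pick_site(header, available, default="en"):
    """Return the best available site language for the request."""
    for tag, _quality in parse_accept_language(header):
        primary = tag.split("-")[0].lower()  # 'en-GB' falls back to 'en'
        if primary in available:
            return primary
    return default

# A Danish-speaking user whose browser also accepts British English:
print(pick_site("da, en-GB;q=0.8, en;q=0.7", {"en", "fi", "nl"}))  # prints en
```

The user is simply served the English site; the negotiation itself, like the translation behind it, stays invisible.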
Two more principles, also related to content management and display, determine the success of multilingual websites. One of them is the principle of usability which, as the name indicates, refers to the ease with which a website is used. General usability guidelines advise that broken links and an excessive number of links should be avoided, and that users should not have to scroll down the page to read the full text. Some of these issues require editing and revising and are better tackled by translation practitioners, as translators have the competences necessary to deal with textual condensation, a skill that requires the ability to summarize and rephrase in the target language, as well as to identify the most relevant information in the source text to be preserved in each of the language sites within a multilingual website. This condensation task will result in shorter and more functional texts for the web. Linguists and translators may use their expertise to choose the right terms for the action keys that prompt users to act and to comply with character restrictions. Another necessary principle on multilingual websites is accessibility, which is one of the main concerns of the MLW Project and the W3C. In a broad sense, accessibility is about opening the web to different kinds of people with their own languages and cultural backgrounds, thus overcoming linguistic, technical and social barriers. Since accessibility is one of the main challenges facing the multilingual web today, efforts are being made to internationalize websites, that is, to design them so as to reach international audiences from both a technical and a cultural point of view and to implement standards to guarantee wider access in different languages. The web of translated texts through which multilingual web users navigate serves to enhance the invisibility of translation, which has become one of the main goals for web experts. The evident increase in the number of localized websites creates the illusion that users are reading original texts in their own language rather than translations from a different language. The application of standards and web technologies is important when it comes to publishing a multilingual collection of texts on a website, but good practices related to how users gain optimal access to the web should also be identified. Functionality, therefore, represents one of the goals of localization since, as O'Hagan and Ashworth (2002: 12) state: 'Web localisation means that the given site is provided in a specified language so that users can read text and navigate in their own language when they access the localised site.
In other words, a localised Web site retains the same functionality as the original site.' In this sense, AVT research and practice may help to explore the relationship between multilingual websites and web users, who interact with the website through a machine and a screen. AVT and visual semiotics can play a key role in improving the display and organization of online content and can thus identify and recommend some best practices. Textual content should be displayed on websites clearly, using an appropriate font size or employing colours to enhance the readability of the text. In this sense, readability standards regarding font size and on-screen type may be drawn from subtitling conventions. A case of an excessively small font can be seen on Spain's official tourism portal (www.spain.info), a design which has been reproduced on all its language sites.
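Character restrictions of the kind familiar from subtitling can also be enforced mechanically on the action keys and labels of a multilingual site, flagging the language versions that need the translator's condensation skills. A toy sketch, in which the label translations and the 12-character budget are invented for illustration:

```python
def over_budget(labels, limit):
    """Return the language versions of a UI label that exceed the character budget."""
    return {lang: text for lang, text in labels.items() if len(text) > limit}

# Hypothetical translations of a 'Download' action key, checked against
# a 12-character budget imposed by the button's on-screen width.
download = {
    "en": "Download",
    "es": "Descargar",
    "fi": "Lataa",
    "de": "Herunterladen",   # 13 characters: needs condensation
}
print(over_budget(download, 12))  # prints {'de': 'Herunterladen'}
```

The check only finds the problem; deciding how to shorten 'Herunterladen' without losing the call to action remains a translation task.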


A truly multilingual experience on the web involves the application of suitable translation strategies for textual and audiovisual material. The varied work already carried out by AVT scholars to reconstruct translation norms in audiovisual texts could also be used to improve the efficiency of web design and localization. More and more multilingual websites include videos as well as other material to enable user interaction with online content, and AVT modes can be used to ensure its accessibility to specific groups of users. Websites may also be part of more general communication campaigns run by companies or institutions, and these demand a coherent approach to all the materials used in the campaign with respect to translation and localization. Most international brands, for example, launch their international promotional campaigns by designing texts for printed, audiovisual and online media. The same branding strategies (themes, actors, pictures and songs) should, therefore, be applied throughout. Within the MLW Project, the main contribution from the perspective of Translation Studies has been the consideration of user and cultural variables as key aspects, so that making the web fully international not only involves making it multilingual, but also requires adequate accessibility for every target audience and group. Reading entails interpreting meaning, measuring the textual elements, colours, pictures and ideological references on a website against our own expectations and mental associations. Previous work on this topic has focused on country-user comparisons (Singh and Baack 2004), on specific product promotions (Chiaro 2004), on the need for localization efforts that go beyond translation (de Bortoli et al. 2003) and on multilingual institutional websites (Valdés 2010). Translators are necessarily experts both in languages and in the cultural frameworks in which these languages are embedded.
Since no choice of linguistic terms is neutral, standards should accommodate diversity in order truly to facilitate access and understanding. Web experts have identified the need for standards for content interoperability, or at least for the exploration of localized knowledge repositories. Such an approach bears similarities with past debates in Translation Studies on 'equivalence', since it might make us consider determining equivalent units in the source and target languages to enable functionally communicative processes on the Internet. Both in a physical library filled with books and on the World Wide Web, knowledge can be managed through repositories, presented in different forms (books or journals in the former, websites or applications in the latter). In translation, 'content also needs to be localised when it should be presented to other cultures' (Budin 2008: 125). This content conveys knowledge that is presented in a particular 'container' (Budin 2008) made up of words, symbols, and so on. When the localization process requires the retrieval of these containers of knowledge from a repository, it actually involves a data selection process based on criteria of equivalence, whether partial or total, between a source language item and the same item in the target language. Given their familiarity with different types of equivalence, whether semantic or communicative, functional or pragmatic, translators, as experts in language, text and culture, could contribute to improving localization and the transfer of knowledge (Wittner 2004). When searching for shared projects and collaborative work, we should consider that translation scholars and practitioners have the knowledge and expertise to produce uniform representations of texts across the different languages and media that are embedded within a multilingual website. In order to apply speech recognition tools or real-time subtitling effectively, there should be collaboration between web designers, localizers and audiovisual translators, thus achieving better practices and easier processes on multilingual websites. Another essential shared task would be the mining of data across languages in order to anchor the terminology used in different languages to a common conceptual backbone or field, with the ultimate goal of improving the structure and localization of knowledge at a semantic level. Linguistic expertise in register and discourse, as well as in specialized terminology, would be of great value here, giving shape to the knowledge repositories or databases required by multilingual web designers and localizers. To conclude, there is another crucial area of interest to which translation scholars and localization experts have started to pay attention when referring to multilingual websites, namely, the choice of language.
The predominance of English as the language of the Internet has been called into question, as users speaking other languages have started to demand a greater presence of their languages on the web (Jenkins 2009; Quilty-Harper 2012; Zuckerman 2013). Developing countries have rapidly begun to adopt languages other than English for use online. This shows that narrow-minded views are currently being questioned and that voices are being raised against 'the arrogant approach' (de Bortoli et al. 2003: online), in which English is favoured as the only global language. There should also be ethical and ideological debates when improvements for the multilingual web are proposed. So far, global corporations and institutions have presented their own view of the world, dividing it into market areas or regions in an arbitrary way and, thus, revealing the manner in which these companies conceive of languages and users. Some companies and organizations like Nestlé (www.nestle.com), for example, have already started to promote a different and more inclusive strategy, trying to localize their global websites not only in terms of content but also in terms of form. This strategy involves preserving the formal organization of elements on the various websites in question and adapting their content to suit a particular target market and its users, for example visuals that change according to the product range or current events taking place in the target culture, or headlines and banners connected with promotions aimed at a particular target culture. Another interesting issue relates to the way global companies or institutions perceive the multilingual world on the web. While some, such as the car manufacturer Audi (www.audi.com), invite users to choose a local site employing geographical criteria and respecting official country divisions, others combine a country and language selection, as on the homepages of L'Oreal (www.loreal.com) and Lancôme (www2.lancome.com). On the other hand, international organizations like the United Nations (www.un.org) or the European Commission (http://ec.europa.eu/) tend to prefer language as the identifying factor of a site.
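The two gateway designs described here differ mainly in the key used to route users: a country code or a language tag. A schematic contrast, with invented paths (not the actual structure of any of the sites mentioned):

```python
# Two routing strategies for a multilingual site gateway (paths are invented).
COUNTRY_SITES = {"DE": "/de_de/", "FR": "/fr_fr/", "BE": "/be_fr/"}    # geographical key
LANGUAGE_SITES = {"de": "/german/", "fr": "/french/", "nl": "/dutch/"}  # linguistic key

def route_by_country(country_code):
    """Geographical routing: official country divisions decide the site."""
    return COUNTRY_SITES.get(country_code.upper(), "/international/")

def route_by_language(lang_tag):
    """Linguistic routing: the language tag decides the site, ignoring borders."""
    return LANGUAGE_SITES.get(lang_tag.split("-")[0].lower(), "/en/")

# A French speaker in Belgium lands on different pages under each strategy.
print(route_by_country("BE"), route_by_language("fr-BE"))  # prints /be_fr/ /french/
```

The choice between the two keys is therefore ideological as much as technical: the first maps users onto markets, the second onto speech communities.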

9.4 Conclusions

If professional standards become the way forward in the attempt to enhance the quality of multilingual websites, translation scholars and practitioners, including localizers, can offer a different viewpoint, alternative approaches and, ultimately, a genuine human touch. These involve the consideration of pragmatic and cultural aspects related to communication, content generation, and the processes of perception and reception. One of the most recent concerns is how to deal with the vast amounts of user-generated web content that is increasingly being translated by users, clearly indicating that best online practices cannot be explored in isolation from user action. In a narrow sense, a multilingual website is a web of translated texts through which multilingual users navigate and interact as if they were original texts in the target language, thus enhancing the invisibility of translation on the web. In a broader sense, multilingual websites pose a challenge to Translation Studies, not only in quantitative terms but also from a qualitative point of view, since they foreground the need to implement production and translation quality standards related to web design and internationalization-friendliness, and to find best practices as regards usability, accessibility and readability. Collaboration between Translation Studies experts and experts from other disciplines is a must, and further workshops and activities will follow on from the MLW Project presented here. Opportunities such as this should be promoted in order to bring together users, software developers, market agents, companies, academics, and political and social institutions with the common goal of developing the multilingual web.

References

Budin, Gerhard. 2008. 'Global content management: challenges and opportunities for creating and using digital translation resources'. In Elia Yuste Rodrigo (ed.) Topics in Language Resources for Translation and Localization (pp. 121–34). Amsterdam: John Benjamins.
Chiaro, Delia. 2004. 'Translational and marketing communication: a comparison of print and web advertising of Italian agro food products'. The Translator 10(2): 313–28.
De Bortoli, Mario, Robert Gillham and Jesús Maroto. 2003. 'Cross-cultural interactive marketing & website usability'. Global Propaganda. www.globalpropaganda.com/articles/InternationalWebsiteUsability.pdf.
Declercq, Christophe. 2011. 'Advertising and localization'. In Kirsten Malmkjaer and Kevin Windle (eds) The Oxford Handbook of Translation Studies (pp. 262–72). Oxford: Oxford University Press.
Delabastita, Dirk. 2008. 'Status, origin, features: translation and beyond'. In Anthony Pym, Miriam Shlesinger and Daniel Simeoni (eds) Beyond Descriptive Translation Studies. Investigations in Homage to Gideon Toury (pp. 233–46). Amsterdam: John Benjamins.
Holmes, James. 1988/2000. 'The name and nature of translation studies'. In Lawrence Venuti (ed.) The Translation Studies Reader (pp. 172–85). London: Routledge.
Jenkins, Jennifer. 2009. World Englishes: A Resource Book for Students. London: Routledge.
O'Hagan, Minako and David Ashworth. 2002. Translation-Mediated Communication in a Digital World. Facing the Challenges of Globalization and Localization. Clevedon: Multilingual Matters.
Pym, Anthony. 2011. 'Website localization'. In Kirsten Malmkjaer and Kevin Windle (eds) The Oxford Handbook of Translation Studies (pp. 410–23). Oxford: Oxford University Press.
Quilty-Harper, Conrad. 2012. 'Chinese internet users to overtake English language users by 2015'. The Telegraph. 26 September. www.telegraph.co.uk/technology/broadband/9567934/Chinese-internet-users-to-overtake-English-language-users-by-2015.html.
Singh, Nitish and Daniel W. Baack. 2004. 'Web site adaptation: a cross-cultural comparison of US and Mexican web sites'. Journal of Computer-Mediated Communication 9(4). http://onlinelibrary.wiley.com/doi/10.1111/j.1083-6101.2004.tb00298.x/full.
Unicode Consortium. 2014. About the Unicode Standard. www.unicode.org/standard/standard.html.
Valdés, Cristina. 2008. 'The localization of promotional discourse on the Internet'. In Delia Chiaro, Christine Heiss and Chiara Bucaria (eds) Between Text and Image: Updating Research in Screen Translation (pp. 227–40). Amsterdam: John Benjamins.
Valdés, Cristina. 2010. 'Institutional translation: when values matter'. Paper presented at the EST Conference Tracks and Treks in Translation Studies. University of Leuven.
W3C. 2011. MultilingualWeb Project. www.w3.org/International/multilingualweb/site/about-the-project.
W3C. 2013. Internationalization. www.w3.org/standards/webdesign/i18n.
Wittner, Janaina. 2004. 'Strategic knowledge management for the localisation industry'. MultiLingual Computing 15(4): 41–44.
Zuckerman, Ethan. 2013. Rewire. Digital Cosmopolitans in the Age of Connection. New York: W.W. Norton & Company.

Part III Mapping Professional Practices

10 Professional Realities of the Subtitling Industry: The Subtitlers' Perspective

Arista Szu-Yu Kuo

10.1 Introduction

In the past, the working conditions of audiovisual translators in general, and subtitlers in particular, have tended to be veiled in mystery, with very few works written on the topic. In the case of subtitling, this is mainly because the majority of subtitlers tend to work on a freelance basis. Since contractors work alone, independently and very frequently in isolation, it is difficult for outsiders to gain an understanding of their profession and working conditions. This has also restricted the circulation of information among the professionals themselves. In order to present a clearer picture of the working environment in which subtitlers operate, as well as an overview of the realities defining the interlingual subtitling industry, an online survey was distributed among subtitlers across the world. During the revision of the questionnaire structure and questions, the survey received significant support from members of the European Association for Studies in Screen Translation (ESIST, www.esist.org/) and the Subtitlers' Association (Subtle, www.subtitlers.org.uk/) in the UK. A significant number of translators' and subtitlers' associations joined in later on and provided substantial assistance by recommending the survey and distributing the questionnaire to other professionals in the industry. These associations include the Spanish Asociación de Traducción y Adaptación Audiovisual de España (ATRAE, www.atrae.org), the Italian Associazione Italiana Dialoghisti Adattatori Cinetelevisivi (AIDAC, www.aidac.it), the French Association des Traducteurs et Adaptateurs de l'Audiovisuel (ATAA, www.traducteursav.org), the Danish Union of Journalists, the Dutch association of subtitlers Beroepsvereniging van Zelfstandige Ondertitelaars (BZO, http://bzo-ondertitelaars.nl), the Forum for Finnish Subtitlers (www.avkaantajat.fi) and the Polish association of audiovisual translators Stowarzyszenie Tłumaczy Audiowizualnych (STAW, www.staw.org.pl). The methodology used in this study will be discussed in the following section, followed by an overview of the background information supplied by the participants and, finally, a presentation of the survey's findings.

10.2 Research methodology

In an attempt to explore the professional realities of the interlingual subtitling industry, as well as the working environment of subtitlers, an online survey was used as the main research instrument in this study. Questionnaire surveys tend to be popular research tools because they constitute a relatively objective and efficient means of collecting information concerning people's knowledge, beliefs, attitudes and behaviours (Oppenheim 1992; Sapsford 1999). With the advancement of telecommunications technology and the prevalence of the Internet, online surveys have gradually replaced traditional data-gathering methods such as paper-and-pencil interviewing, mail surveys, telephone surveys and so on. According to Wright (2005), there are three main advantages to online survey research – access, time and cost – which will be further elaborated below. The Internet can provide access to unique populations, particularly those who would be difficult to reach through other channels, as is the case with subtitlers, who usually work independently with no fixed working hours or permanent places of work. Online surveys can also shorten the time needed by researchers for data collection. Even if it were possible to find an equivalent number of subtitlers in one location, the process would still be very time consuming. In addition, this method also makes it possible for researchers to collect and monitor data while working on other tasks, thus saving time (Andrews et al. 2003). Using an electronic tool can save money not only on paper, but also on other costs that might be incurred through travel, printing, postage, and so on (Ilieva et al. 2002). The design of the questionnaire survey was based on a number of relevant survey reports on the translation industry, such as the Comparative Income of Literary Translators in Europe, published in 2008 by the Conseil Européen des Associations de Traducteurs Littéraires (Fock et al. 2008), and an unpublished MA dissertation entitled A Quantitative Study on Subtitling Rates, written by Reyntjens in 2005.


The survey began by gathering basic information from respondents, and the broad questions asked at the beginning became increasingly narrow in scope, in keeping with the 'funnel approach' suggested by Oppenheim (1992: 110). Open-ended questions and comment boxes were used in moderation in order to elicit more specific answers. As Frazer and Lawley (2000) state, open-ended questions allow respondents to express themselves freely without limiting their responses. This method seemed to suit the respondents, who preferred to answer in their own words. The survey underwent various pre-tests, with the questions being revisited, developed and further enhanced through meetings and correspondence with a dozen scholars and experienced subtitlers from ESIST and Subtle, ensuring the effectiveness and efficiency of the questionnaire. A pilot test was later conducted on a small sample of five freelance subtitlers to ensure applicability. As recommended by Ballinger and Davey (1998: 549), this phase was included to test whether the questionnaire was easy to complete, and to verify that the questions could be understood and that the required time frame for completion was realistic. The survey was then launched on SurveyGizmo, an online survey site, in May 2010.

10.3 Survey respondents' background information

The survey was open to all subtitlers, irrespective of their country of operation or language combination(s). The final population of respondents comprised 429 professionals located in the following 39 countries by the end of October 2010: Argentina (1.9 per cent), Australia (0.9 per cent), Austria (1.2 per cent), Belgium (2.6 per cent), Brazil (2.6 per cent), Canada (2.3 per cent), Chile (0.2 per cent), China (including Hong Kong) (0.9 per cent), Croatia (0.7 per cent), Czech Republic (1.2 per cent), Denmark (6.3 per cent), Estonia (0.2 per cent), Finland (9.8 per cent), France (6.5 per cent), Germany (5.8 per cent), Greece (2.8 per cent), Hungary (0.2 per cent), Iran (0.5 per cent), Israel (0.2 per cent), Italy (4.0 per cent), Netherlands (12.3 per cent), New Zealand (0.2 per cent), Norway (5.6 per cent), Poland (1.4 per cent), Portugal (1.6 per cent), Republic of Ireland (0.2 per cent), Romania (1.4 per cent), Serbia (0.5 per cent), Slovakia (1.9 per cent), Slovenia (0.5 per cent), Spain (6.3 per cent), Sweden (1.9 per cent), Switzerland (0.2 per cent), Taiwan (0.5 per cent), Thailand (0.2 per cent), Turkey (0.9 per cent), United Kingdom (11.9 per cent), United States (1.4 per cent) and Venezuela (0.5 per cent). The overwhelming majority of subtitlers (87.7 per cent) were from Europe and the rest were distributed all over the world with the exception of Africa.


Arista Szu-Yu Kuo

The clients, i.e. translation agencies, subtitling studios or direct clients, identified by the respondents also came from numerous geographical areas, reflecting the sample distribution of the subtitlers. The results also revealed that the main clients and commissioners of 17.5 per cent of the respondents were based in more than one country, in line with globalization trends found in many other professions nowadays. The respondents translated from 20 source languages (SLs); 83 per cent mainly translated from English, foregrounding the predominant position of this language in the audiovisual industry. A total of 36 respondents worked from more than one main SL, that is, they usually translated from a couple of languages, giving a total count of 465 instead of 429. Details concerning the SLs are shown in Table 10.1.

Table 10.1  Respondents' SLs

Language     Count    Language     Count
English        357    Chinese          2
French          31    Finnish          2
German          22    Portuguese       2
Spanish         13    Russian          2
Italian          9    Czech            1
Dutch            6    Frisian          1
Swedish          5    Hindi            1
Danish           3    Japanese         1
Norwegian        3    Persian          1
Arabic           2    Turkish          1

As for target languages (TLs), respondents translated into 30 languages in total. Eight respondents used two main working languages so that the total count was 437. Details concerning the TLs are shown in Table 10.2. Translation into the subtitlers’ mother tongues or main languages seemed to be the norm, with 418 (97.4 per cent) of the 429 respondents subtitling routinely in this direction. This supports the traditional view that translators should translate only into their mother tongue, as the essence and flavour of the target language is more likely to be attained by native speakers. As Newmark (1988: 3) argues, translating into one’s language of habitual use is ‘the only way [one can] translate naturally, accurately and with maximum effectiveness’, a mantra that has been traditionally embraced by numerous scholars and professionals.

Table 10.2  Respondents' TLs

Language     Count    Language     Count
Dutch           66    Czech            5
English         57    Chinese          4
Finnish         41    Catalan          3
French          35    Croatian         3
Spanish         35    Turkish          3
Danish          28    Arabic           2
German          27    Persian          2
Norwegian       25    Serbian          2
Italian         21    Slovenian        2
Portuguese      20    Estonian         1
Swedish         15    Flemish          1
Greek           14    Galician         1
Romanian         8    Hebrew           1
Slovak           7    Hungarian        1
Polish           6    Russian          1

Translation into a non-mother tongue is in fact frowned upon and considered to be doomed to failure by many scholars (Dickins et al. 2002; Duff 1981). However, according to the survey results, the situation may be changing in the audiovisual industry, as a substantial 37.8 per cent of the respondents highlighted the fact that they were asked to work outside their mother tongues or main languages to a varying degree (from 'sometimes' to 'always'), as shown in Figure 10.1. The gender ratio of the respondents was close to 25 per cent male and 75 per cent female. Among participants, 55.7 per cent were aged 25–40, 34.5 per cent were aged 41–55 and the remaining respondents were either younger than 25 (2.6 per cent) or older than 55 (7.2 per cent). Over 50 per cent of respondents held a university degree, 35.4 per cent had a postgraduate degree and 5.8 per cent a high school degree. Only one participant's level of study was lower than high school, and the remaining 5.3 per cent mostly either held a diploma or had studied, but not to degree level. Regarding qualifications and specialization, 72.3 per cent of the respondents indicated that they possessed a qualification in translation and only 32.9 per cent stated that they had achieved a specialized qualification in subtitling, tallying with the fact that

Figure 10.1 Frequency of working into languages other than the mother tongue (always 1.9%; often 11.0%; sometimes 24.9%; rarely 40.3%; never 21.9%)

audiovisual translation (AVT) training is a relatively new area in most educational institutions throughout the world. In terms of subtitling experience, 85.1 per cent of the respondents had been working in subtitling for at least two years, of whom 27.7 per cent had done so for 2–5 years, 25.2 per cent for 6–10 years, 18.4 per cent for 11–15 years and 13.8 per cent for more than 15 years. A low 22.4 per cent of the participants specialized and worked exclusively in subtitling, while 77.6 per cent did it as part of their portfolio and also accepted other forms of translation assignments. With regard to the respondents’ main fields of work in subtitling, the top three were, in decreasing order, ‘TV series and sitcoms’, ‘films’ and ‘documentaries’.

10.4 Survey findings

Now that the background information on the survey respondents has been outlined, the key findings of the survey will be discussed. The discussion will begin with the results regarding rates and will be followed by an analysis of a wide range of issues, including: negotiation power, royalties, acknowledgement credits, notices and deadlines, contracts and materialization of jobs, the use of software, the provision
and quality of supporting materials, and the changes experienced since the economic downturn.

10.4.1 Subtitling rates

The pay rate is a fundamental criterion behind accepting or declining a job offer and is, therefore, a crucial component of translators' working conditions. Questions concerning remuneration were included in the survey, with results indicating that rates varied greatly not only from country to country, but also from person to person. The respondents were paid in 24 currencies in total, the main forms of legal tender being Euros (EUR: 54 per cent), US Dollars (USD: 14 per cent) and Pound Sterling (GBP: 9 per cent). The Euro was the currency in which more than half of the respondents were regularly paid and the one displaying the widest range of rates; the following analysis will, therefore, be mainly based on the data reported in Euros. According to the various types of task performed, respondents were asked to provide their average rates where relevant, based on one of the per-unit prices normally applied in the industry, that is, per programme minute, per working hour, per subtitle, per 1,000 SL words and per 1,000 TL words. Other variables have also been taken into account to reflect the complexity of this field, such as whether the script is provided by the commissioner, whether the subtitles are originated and the subtitler is responsible for time-cueing, or whether a template with the master subtitles is provided by the client and the technical dimension is therefore not required. A summary of the average subtitling rate ranges elicited in the survey is shown in the following tables. The results are displayed according to the type of task in the following order: (1) only translating from a template (Table 10.3); (2) only time-cueing (Table 10.4); and (3) time-cueing and translation (Table 10.5).

Table 10.3  Ranges of average subtitling rates – only translating from a template

Unit                          Highest    Lowest
Per programme minute (ppm)    €15        €0.12
Per working hour              €100       €15
Per subtitle                  €3         €0.02
Per 1,000 SL words            €120       €15
Per 1,000 TL words            €280       €50


Table 10.4  Ranges of average subtitling rates – only time-cueing

Unit                                Highest    Lowest
With script, ppm                    €12        €0.3
With script, per working hour       €30        €19.5
With script, per subtitle           €2         €0.05
Without script, ppm                 €16        €0.27
Without script, per working hour    €30        €15
Without script, per subtitle        €3         €0.05

Table 10.5  Ranges of average subtitling rates – time-cueing and translation

Unit                                  Highest    Lowest
With script, ppm                      €28.5      €1
With script, per working hour         €35        €20
With script, per subtitle             €2         €0.2
With script, per 1,000 SL words       €130       n/a
With script, per 1,000 TL words       €130       n/a
Without script, ppm                   €28.5      €0.18
Without script, per working hour      €28.6      n/a
Without script, per subtitle          €1.9       €0.21
Without script, per 1,000 SL words    €130       €15
Without script, per 1,000 TL words    €130       €50

Table 10.3 shows the ranges in average subtitling rates received by respondents when they translated subtitles from a template into their main languages; that is, the task did not include time-cueing or spotting and they focused solely on the linguistic transfer. If attention is paid to the range of per-programme-minute (ppm) rates, it can be seen, for example, that the highest average rate reported was 15 euros and that 12 cents was the lowest. The respondent who was paid the highest average rate in the above example (known hereafter as respondent F) and her typical client were both located in France, while the respondent who received the lowest average rate (known hereafter as respondent P) was based in Portugal, the same country as her client. Both respondents fell into the same age category, namely 25–40 years, and TV series and sitcoms were the main genre in which they had been working. We will now look at the main differences regarding their
backgrounds in order to find out potential reasons for this striking discrepancy in the rates offered. Firstly, in terms of their level of education, respondent F had achieved a doctoral degree in subtitling while respondent P held a qualification in subtitling at degree level. Secondly, regarding work experience, respondent F had more than ten years’ work experience in subtitling, while respondent P reported less than two. Thirdly, when looking at the ratio of subtitling to other translation work, the questionnaires show that respondent F worked exclusively in subtitling, while the percentage of respondent P’s total output in subtitling was in the 21–40 per cent range, with a tendency to increase. Fourthly, in terms of the typical deadlines set by the clients, for a subtitling assignment requiring translation from a template, a typical client usually allowed respondent F five days for a 60-minute programme, but respondent P usually worked to very tight deadlines and was given only 12 hours to complete the subtitling of a 40-minute programme. Considering the acquired level of qualification in subtitling, the number of years of subtitling experience, as well as the ratio of subtitling work to total translation output of both respondents, a higher level of pay would naturally be expected for respondent F. In addition, the level of income in France is also generally higher than in Portugal.1 Although the survey results confirm this logical deduction, in this case the average rate regularly received by respondent F is some 125 times higher than the typical rate paid to respondent P. The difference is staggering, even if we take into account not only their work experience and qualifications, but also the economic status of the two countries where the respondents and their usual clients were based. 
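As a minimal arithmetic check, the scale of this gap can be reproduced directly from the figures quoted above; the snippet below uses only numbers reported in the text, and the variable names are illustrative rather than drawn from the survey itself:

```python
# Sanity-check the ratios discussed above, using only figures quoted in the text.

rate_f_ppm = 15.00   # respondent F: highest average rate, euros per programme minute
rate_p_ppm = 0.12    # respondent P: lowest average rate, euros per programme minute

# The pay gap between the two respondents ("some 125 times higher")
pay_gap = rate_f_ppm / rate_p_ppm
print(f"pay gap: {pay_gap:.0f}x")

# GDP per capita in 2010 (US dollars), France vs Portugal
gdp_per_capita_fr = 39_186
gdp_per_capita_pt = 21_382
print(f"GDP per capita gap: {gdp_per_capita_fr / gdp_per_capita_pt:.1f}x")
```

A roughly 125-fold pay gap set against a roughly 1.8-fold economic gap is the crux of the argument: national income differences alone cannot account for the discrepancy in subtitling rates.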
From the perspective of economic status, France had the world's fifth largest national economy by nominal GDP as of 2010.2 Portugal, on the other hand, occupied thirty-seventh place. In the same year, the French GDP per capita was 39,186 US dollars, whereas in Portugal it was 21,382 US dollars.3 Despite the fact that France has a GDP per capita 1.8 times that of Portugal, both countries are categorized by the World Bank as high-income economies.4 Among the respondents located in France and paid in Euros, average ppm rates fell in a range between 2.3 and 15 euros. As for the responses from Portuguese subtitlers regarding their remuneration in Euros, the range was between 12 cents and 2.8 euros ppm; one aspect of note is that the second lowest average rate in Portugal was 1.2 euros, ten times higher than the lowest of 12 cents. These ranges covering maximal and minimal rates imply the existence of extreme swings in terms of rates,
potentially symptomatic of a somewhat dysfunctional business, even though some of these discrepancies might represent rare cases. The responses from respondents F and P have been included in the discussion not only because they highlight some of the most pertinent issues in this industry, but also because the case of respondent P is not isolated: other respondents from Portugal were paid similar rates for other types of assignment. The extreme discrepancies revealed by comparing the answers provided by respondents F and P represent just some of the many found in the survey results. It is undeniable that the country in which the subtitler works and where the typical client is located, together with the work experience and qualifications of the subtitler, the genre being subtitled, the level of difficulty of the task at hand and the urgency of the delivery, among other variables, all contribute to the level of pay. However, the enormous discrepancies observed, not only among but also within countries, indicate that turmoil in the rates offered in the subtitling industry is a prevalent global phenomenon. The cases of respondents P and F may also suggest a potential connection between levels of pay and the support of unions and/or related professional bodies. In France, subtitlers are supported by the Association des Traducteurs et Adaptateurs de l'Audiovisuel (www.traducteurs-av.org), a rather active and visible AVT association, whilst in Portugal no such support exists for subtitlers. This might also explain why the range of rates paid in France is narrower than in Portugal. The rates displayed in Table 10.4 are for the task of time-cueing only, as subtitlers are sometimes asked to work only on this technical aspect without being involved in the translation.
Time-cueing, also known as cueing, originating or spotting, consists of 'determining the in and out times of subtitles, i.e. the exact moment when a subtitle should appear on screen and when it should disappear' (Díaz Cintas and Remael 2007: 88). Subtitlers are occasionally asked to do both, i.e. time-cueing and translation, and charge accordingly. The average subtitling rates reported by respondents for a combination of these tasks are shown in Table 10.5. In addition to translating and time-cueing, revision and proofreading are also part of the subtitling process. According to Gouadec (2007: 24), proofreading 'consists in correcting any kind of blatant defects (spelling or grammar mistakes, missing bits, faulty formatting) and pointing out any apparent defects, discrepancies or translation errors'. Although for some professionals revision involves making appropriate amendments to improve translation quality while proofreading is limited to making only necessary corrections, both terms are often used interchangeably. The rates for revision and proofreading are usually much lower than for translating subtitles, as illustrated in Table 10.6.

Table 10.6  Ranges of average revision and proofreading rates

Unit                  Highest    Lowest
ppm                   €4         €0.03
Per working hour      €56        €10
Per subtitle          €0.3       €0.15
Per 1,000 SL words    €40        €10
Per 1,000 TL words    €40        n/a

Tables 10.3–10.6 refer to the activities most frequently carried out by subtitlers, although not all respondents performed every one of them (revision and proofreading, for example). It should also be noted that these tasks could well change in the near future due to developments under way in this field. For example, as a result of the increasing interest in the application of machine translation to subtitling, reduced rates for the post-editing of machine-translated output may very soon become a reality, posing a further challenge to subtitlers. In addition to the type of assignment, there are various potential reasons for the differences in rates. As indicated in the previous discussion, since the educational level and qualifications of subtitlers may, to some extent, influence their level of pay, an examination of the information provided by the respondents in this respect may be of interest. Participants who were paid the highest average rates in each category were analysed further in order to ascertain any possible correlation in the information provided and to establish whether there were any similarities in their background characteristics. The results, however, indicate a lack of homogeneous features in education and qualification levels, as well as in work experience, among these respondents. This also points to a worrying state of affairs in the subtitling industry in terms of market pay rates and the recognition of the backgrounds and expertise of professionals. Apart from qualifications and work experience, the type of product subtitled is another factor that might contribute to differences in rates: 50.1 per cent of the respondents indicated that they were paid different
rates according to the programme genre, and 11.2 per cent declared that they were always paid differently; however, the pay of the remaining 38.7 per cent never differed according to the type of programme. Among those whose rates depended on the product subtitled, the highest paid genre was 'documentaries', followed in descending order by 'films', 'corporate videos', 'TV series and sitcoms' and 'DVD bonus material'. It would seem logical to assume that, in general, tasks with a more urgent deadline are better paid than those with generous time allowances. And yet, according to the information provided by subtitlers, no correlation was identified between the urgency of delivery of a task and the level of pay awarded, although some respondents did acknowledge that they received a bonus for working to urgent deadlines. In a rather ironic twist, many respondents who habitually worked to tight deadlines were paid a relatively low rate compared to those who were usually given more generous deadlines. In addition to the level of remuneration paid, the terms and conditions regulating payment also play a crucial role in terms of cash flow and are very important to subtitlers, particularly those working as freelancers. Among the respondents, 46.2 per cent received payment within 30 days of submitting work, 45.2 per cent within 60 days, 6.5 per cent within 90 days and the remaining 2.1 per cent usually received payment only after 90 days. Payments, however, could be further delayed: only 43.1 per cent of the respondents 'always' received punctual payments; 40.6 per cent 'often' received payments on time, whilst 11.4 per cent stated they 'sometimes' did. A further 3.5 per cent of the participants commented that their clients 'rarely' made timely payments and 1.4 per cent claimed that their clients 'never' paid on time (Figure 10.2).

Figure 10.2 Frequency of timely payments from clients

10.4.2 Negotiation power

Following from the above discussion on rates, this section will analyse the survey results regarding who sets the rates, as well as the payment terms and conditions. Only 4.9 per cent of the respondents set their own rates, 8.9 per cent mentioned that their unions negotiated on their behalf, 17.3 per cent negotiated the rates with their clients, and 69 per cent usually accepted the rates offered by their clients without further negotiation.5 In some countries, notably those where the (audiovisual) translation sector is more mature, subtitlers were able to ask their unions to negotiate rates on their behalf. This was the case for respondents from Canada, Denmark, Finland, Greece, Italy, Norway, Portugal, Spain and Venezuela. The results regarding terms and conditions of payment echo the findings on rate negotiation: 69.5 per cent of the respondents accepted the conditions of payment set by their clients, 15.2 per cent negotiated with clients, 8.6 per cent asked their unions to negotiate on their behalf, and only 6.8 per cent set conditions of payment themselves. The relationship between negotiation power and the respondents' age, level of education, qualifications and subtitling experience has been examined in further detail, with the results showing no apparent correlation among the variables studied. However, the information provided by respondents did highlight a trend among those who negotiated with their clients, shown in Figure 10.3: the proportion increased markedly with subtitling experience. Meanwhile, the proportion of those whose clients set the rates decreased as subtitling experience increased. This tendency suggests that more experienced subtitlers are more likely to negotiate with their clients. Nevertheless, this might only reveal an inclination to negotiate and not necessarily an increase in negotiation power.
The different ways in which the respondents engaged professionally with their clients might shed some light on the reasons behind the weak negotiation power shown, particularly when it comes to setting rates and conditions of payment. Firstly, working through translation agencies might decrease the possibilities for negotiation. According to the survey results, a meagre 20 per cent of the respondents only worked directly with clients, while more than double that figure (43.6 per cent) took assignments solely through translation agencies. Some of these agencies tended to operate globally and had fixed policies in place for determining rates and terms of payment, thus leaving

Figure 10.3 Negotiation power vs. subtitling experience (x-axis: years of subtitling experience, from 'Less than 2 years' to 'More than 15 years'; y-axis: percentage of respondents)

no room for negotiation for the translators, who could only accept or decline. Even if the companies were willing to negotiate with translators who were more experienced and competent, the margin for manoeuvre was normally very limited, as the intermediaries also needed to retain profits. Secondly, 'client concentration' might also have contributed to the decrease in negotiating power experienced by some translators, as they relied heavily on a limited source of work assignments and might have risked losing clients if they had been perceived to be belligerent. In this respect, the survey results indicate that 30.8 per cent of the respondents only worked with a single client on a regular basis, 47.3 per cent with two or three clients, 10.3 per cent with four or five clients, 5.1 per cent with more than five clients and a final 6.5 per cent stated that they did not work with any clients on a regular basis. Although working with only a couple of clients did not necessarily lead to an unstable source of income, depending of course on the volume of work commissioned, the nature of such an attachment could increase respondents' vulnerability in terms of negotiating power. Thus, translators sometimes have no choice but to accept unsatisfactory rates and payment terms if they want to continue working with their clients. Of course, this dependency can also be observed in the case of novice subtitlers without
sufficient market experience, or of those who are prepared to work for any rate in the hope of gaining experience and increasing their income in the future. Factors such as working habits also tended to influence subtitlers' negotiation power. In this regard, 92.3 per cent of the respondents claimed that they 'always' (72.5 per cent) or 'mostly' (19.8 per cent) worked for their typical clients from home. The results are summarized in Figure 10.4.

Figure 10.4 Work premises (response options ranging from 'Always from home' to 'Always at client's premises')

Working from home, and in isolation, can be seen as a third factor influencing subtitlers' negotiation power, as it makes it more difficult for them to stay informed about the rates charged by colleagues, given the market's deliberate opacity regarding these issues. This can be particularly problematic in the case of inexperienced translators, who may not know the parameters of a reasonable price range and therefore risk undercutting the market. Additionally, it is highly unlikely that they would turn down assignments paid at lower than average rates when they do not receive other, better-paid job offers.

10.4.3 Royalties

As Downey (2008: 119) states, 'cinematic subtitling is often performed after the fact on a piece [by piece] basis by subcontracted firms or individuals not connected with the initial production process and who do not reap royalties from subsequent distribution'. It is, indeed, uncommon for subtitlers to share the royalties generated by the programmes they have subtitled or to possess the copyright for their work. The situation seems to be different in the case of literary
translators, who are more likely to sign contracts in which their right to receive royalties is acknowledged. However, even in Europe, where the translation industry is in general more mature, the practice does not seem to be commonplace and only exists ‘in the major countries where publishers sell large numbers of books (10,000 copies and over)’ (Fock et al. 2008: 30). Returning to the field of subtitling, when asked about receiving royalties, only 2.8 per cent of the respondents stated that this was ‘normal’, and their typical clients and commissioners were from countries such as Australia, Denmark, Finland, Norway, Thailand and the United States. An additional 2.6 per cent of respondents claimed to have received royalties ‘often’, and their typical clients and commissioners were based in countries such as Denmark, France, Finland, Norway, Slovenia and the United States. The majority (84.2 per cent) of the respondents indicated that they had ‘never’ received royalties from the re-sale of a programme they had subtitled. The results concerning the practice of royalty payments are summarized in Figure 10.5. Among those who stated that they had received royalties at different levels, 15 per cent admitted to receiving a 50 per cent royalty rate6

Figure 10.5 Royalty payments (always 2.8%; often 2.6%; sometimes 4.9%; rarely 5.6%; never 84.2%)


and the countries in which their typical clients and commissioners were based were Denmark, Finland, Germany and Sweden; one respondent, however, received a 30 per cent royalty rate from typical clients and commissioners in Greece. The situation facing the remaining respondents varied greatly, with royalty rates being mostly under 15 per cent. Some participants stated that they received rates on a negotiated basis, others mentioned that different rates were applied depending on the type of programme, while others said they were not sure, as they rarely received royalties or were unaware of the rates because they had always received lump-sum payments. Only 6.5 per cent of the sample indicated that they had 'always' received royalties from secondary use through collecting societies such as Copydan, Copyswede and Norwaco; 3.7 per cent claimed that they had done so 'often', 7 per cent 'sometimes', 3.5 per cent 'rarely' and the remaining 79.3 per cent confirmed that they had never had such an experience. Among those who had received royalties through these societies, most rates ranged from 5 per cent or below to 11–15 per cent. Some, however, did not know the amount or the rates they had received (16.9 per cent), while others had received a lump-sum payment on an annual basis, which varied according to the society. Based on the survey results, the royalty rights of subtitlers seem to depend more on the willingness shown by some companies operating from certain countries (e.g. Finland, Norway, Denmark, France) than on any legislative safeguard.

10.4.4 Acknowledgement credits

Despite the campaign led by some translators and subtitling associations to raise the visibility of professional subtitlers, the fact remains that, for many companies and viewers, the best subtitles are those that are not noticed. In this respect, the reality is that subtitlers are more often perceived when they make mistakes than when they produce successful translations. On the part of the professionals themselves, the survey results reveal, as demonstrated in Figure 10.6, that there was a strong desire among the respondents to raise their visibility both socially and professionally. This was attested by 74.4 per cent of respondents who claimed that they ‘always’ preferred to have their names appear on the credits of subtitled programmes. However, professional practice seems to point in a different direction. As is demonstrated in Figure 10.7, the survey results show that only 24.7 per cent had ‘always’ been credited for their work, whereas 14.7 per cent had ‘never’ been given credit for their contributions and 5.1 per cent

Figure 10.6 Preference for acknowledgement (response options: 'Yes, always'; 'Yes, sometimes'; 'No, never')

Figure 10.7 Professional practice as regards the inclusion of the subtitler's name in the acknowledgement credits

did not know if they had been credited. The remaining 55.5 per cent had been credited at different frequency levels from ‘rarely’ to ‘often’. It should be noted that the percentage of respondents who had ‘always’ been credited for their work in Scandinavian countries such as Denmark, Finland, Norway and Sweden was much higher than the rest of the countries under analysis (either close to or above 50 per cent). This might be because ‘subtitlers’ credit’ is either required by law, or because subtitlers are supported by strong unions in these countries. Despite the preference of the majority, not all subtitlers were equally vocal or enthusiastic about their visibility. In fact, 6.5 per cent did not want to be credited, with 19.1 per cent preferring to be credited ‘only under certain conditions’ (see Figure 10.6). The reasons for such a preference varied from respondent to respondent, yet on the whole, they wanted to be credited only when they had been given enough time to work on the assignments, had been given the chance to agree with the revised version, personally liked the programme and when they had worked directly with clients instead of working through agencies. In

Professional Realities of the Subtitling Industry


other words, the confidence level that the respondents had in their own work strongly influenced their attitude towards being (in)visible. Once again, this approach seems to be different from the practices observed in the world of literary translation, where the name of the person who has carried out the translation is published in most works.

10.4.5 Notices and deadlines

Short notice and tight deadlines for delivering work were a common reality for the respondents: 17.5 per cent of the participants were usually given assignments with less than 24 hours’ notice, 24.5 per cent had 1–2 days, 32.4 per cent received 3–7 days and 11.9 per cent were given 7–10 days. The situation varied among the remainder of the sample, with some stating that they worked on a monthly plan, some declaring that they proactively asked their clients for assignments when they had the time and some stating that they were simply given a different length of notice every time they were offered a job. Concerning the time given to complete an assignment, the answers showed a marked variation, primarily based on the type of work and the duration of the programme to be subtitled, as is shown in Table 10.7. The tightest deadline was reported by a Turkish respondent, with a typical client/commissioner also based in Turkey. The reported answers not only cover a wide range of programme lengths and given times, but also vary greatly from person to person. Tellingly, only one Swiss respondent, with a typical client/commissioner in Switzerland, was able to decide her own deadlines, usually allowing more than two weeks to subtitle a 60-minute programme. The fact that many respondents were forced to work to tight deadlines to some extent explains their attitudes towards their (in)visibility, as discussed in Section 10.4.4.

Table 10.7  Deadline range

                               Most urgent                 Least urgent
Type of work                   Programme     Time given    Programme    Time given
                               length                      length
Translating from template      20 mins       3–4 hours     Varies       As much as needed
Time-cueing and translation    20–35 mins    8 hours       Varies       As much as needed
Proofreading/revision          25 mins       1 hour        Varies       As much as needed


Figure 10.8 Influence of tight deadlines on quality (bar chart; response options: ‘Very much’, ‘Moderately’, ‘Just a little’, ‘Not at all’)

The survey results also highlight the tension that exists between tight deadlines and output quality, an issue for the majority of respondents. A total of 67.6 per cent of participants considered that tight deadlines affected the overall quality of their output, with 28.7 per cent admitting that this was the case to a ‘strong’ degree and 38.9 per cent to a ‘moderate’ degree, as is shown in Figure 10.8. In addition, 38.5 per cent of the respondents were convinced that generous deadlines would ‘very much’ help to increase the quality of the results they delivered, as more time could be devoted not only to documentation and finding appropriate solutions, but also to conducting a final quality check and revision.

10.4.6 Contracts and materialization of jobs

Rules and regulations governing freelance practice vary from company to company and from professional to professional, with some professionals requiring clients, or being asked by clients, to sign contracts and purchase orders, whereas others tend to carry out the work on the basis of verbal agreements. Among the respondents, 20.3 per cent stated that


they had ‘always’ signed a contract with their clients before proceeding to translate the programme, while 24 per cent had ‘never’ signed one; the remaining 55.7 per cent signed with varying frequency: 16.3 per cent ‘often’, 21.4 per cent ‘sometimes’ and 18 per cent ‘rarely’. The survey results indicate that signing a contract with clients is not yet routine practice in the industry. One of the risks inherent in the profession is the failure of an assignment to materialize even when a contract has already been signed between the parties concerned, although, according to the survey results, this does not seem to be very common. Among respondents, 1.9 per cent declared they had ‘often’ encountered such a situation, with their typical client/commissioner based in Argentina, Brazil, Germany, Italy, Portugal, Spain and the United Kingdom; 19.3 per cent had failed to receive assignments ‘sometimes’, whilst 50.6 per cent confirmed that the situation had happened to them ‘rarely’. On the flip side, 28.2 per cent of the respondents affirmed that their assignments always materialized following a contract.

10.4.7 Use of subtitling software

In practice, the support received by a subtitler in terms of dedicated software and working files such as scripts and consistency sheets is likely to increase in direct proportion to the professionalism and standing of their clients. According to the survey results, a substantial 73.7 per cent of participants declared that they had ‘always’ worked with subtitling equipment, whilst 6.1 per cent had ‘never’ worked with it. The remaining 20.2 per cent had worked with subtitling equipment at different frequency levels, from ‘rarely’ to ‘often’. Among those who worked with subtitling equipment, the majority used professional software. These results are summarized in Figure 10.9:

Figure 10.9 Frequency of working with subtitling equipment (bar chart; response options: ‘Always’, ‘Often’, ‘Sometimes’, ‘Rarely’, ‘Never, I always work with templates’)


Among those who used subtitling software, 61 per cent used only a single subtitling program, 30.3 per cent made use of two, 5.3 per cent used three, 1.7 per cent used four and the remaining 1.7 per cent used more than four programs. The professional subtitling programs mentioned by survey participants included WinCAPS, Spot, Swift, Titlevision, EZTitles, GTS, Ayato, Tempo, FAB, Eddie, Monal, Screen, TextYle and Polyscript. As for freeware, the most popular programs were Subtitle Workshop, VobSub, VisualSubSync and offline versions of some commercial programs such as Belle Nuit. Among those who used professional subtitling software, 46.6 per cent bought the software themselves, 34.3 per cent used fully-fledged professional software provided for free by their clients, 10.9 per cent used a freelancer version with reduced functionality, also provided for free by their clients, 7.7 per cent paid to use a fully-fledged version provided by their clients, and 0.5 per cent paid to use a freelance version with reduced functionality provided by their clients. As regards technical support, among the respondents who used subtitling programs, 47.6 per cent did not receive any, 40 per cent received support financed by the clients and 12.4 per cent paid for it themselves. Among the latter, six respondents indicated that they did not pay extra for the service as it was already included in the price of the software. The fees that respondents paid for technical support varied greatly, from €60 per hour to €1,000 per year.

10.4.8 Provision and quality of supporting material

In addition to subtitling software, supporting material is also an essential part of the subtitling process.
In an ideal situation, material such as dialogue lists, templates, audiovisual programmes and consistency/terminology sheets should be provided to subtitlers in order to facilitate their task and to boost the consistency and quality of the output. For example, when a subtitler has to translate everything from the soundtrack without a script, the time needed will be significantly prolonged and the efficiency and accuracy substantially reduced. The chance of mishearing some words or expressions from the original also increases considerably, sometimes leading to mistranslation. Table 10.8 refers to the frequency with which respondents received different types of supporting material in real-life scenarios. It should be noted that some respondents mentioned that they had to translate without access to audiovisual material, i.e. video clips. This approach endangers the quality of the subtitling output, as images are a crucial part of the audiovisual programme and should be taken into


consideration when producing subtitles that are coherent and cohesive with the overall semiotic complex. The quality of the supporting material provided to subtitlers can also substantially influence the quality of the subtitling output. For instance, the audiovisual material may reach the subtitler in low resolution owing to the fear of films or programmes being leaked before their official launch. The downside of this approach is that the resulting images are sometimes so blurred and fuzzy that the picture is unable to provide the subtitler with useful details. On the whole, the majority of respondents found the quality of the received materials either satisfactory or acceptable (albeit to varying degrees), that is, when they were available. The survey results concerning respondents’ perceptions of the quality of the main supporting material they had received from their typical clients are presented in Table 10.9. The survey was set up to explore the correlation between the quality of the subtitling output and the quality of the supporting material. Although such appreciation is necessarily subjective, the results show that there is, indeed, a clear correlation. The impact of different types of

Table 10.8  Supporting material provision

Type of material                 Always   Often   Sometimes   Rarely   Never
Guidelines                       26.3%    27%     18.2%       15.9%    12.6%
Dialogue lists/Scripts           31%      47.3%   11.2%       5.4%     5.1%
Templates                        20%      21.9%   20.3%       14.9%    22.9%
Audiovisual material             71.8%    9.3%    4%          4.2%     10.7%
Consistency/Terminology sheets   6.5%     13.1%   21%         27.5%    32%

Table 10.9  Quality of the provided supporting material

Type of material                   Very good   Good    Average   Fair   Poor   N/A
Dialogue lists/Scripts             16.3%       36.4%   29.1%     9.1%   2.6%   6.5%
Templates                          12.6%       23.1%   21.7%     6.1%   7.7%   28.9%
Audiovisual material               21.5%       42.4%   23.3%     3.5%   2.3%   7.0%
Supplementary material             4.2%        19.1%   19.8%     7.9%   8.2%   40.8%
(e.g. consistency sheets, glossaries, etc.)


Table 10.10  Influence of the quality of supporting material on the final translation

Type of material                   Very much   Moderate   Just a little   Not at all
Dialogue lists/Scripts             23.3%       36.4%      28%             12.3%
Templates                          24.7%       31.7%      19.1%           24.5%
Audiovisual material               35.7%       36.8%      15.6%           11.9%
Supplementary material             11.2%       27.7%      25.9%           35.2%
(e.g. consistency sheets, glossaries, etc.)

supporting material on output quality, according to the subtitlers who completed the survey, is shown in Table 10.10. In this regard, respondents seem to appreciate most the quality of audiovisual material, dialogue lists and templates.

10.4.9 Changes since the economic downturn

The financial troubles that started in 2007–2008, affecting countries worldwide, developed rapidly into a profound economic crisis, which has had a tremendous impact on demand, investment and growth in many sectors. The subtitling industry has been no exception. This is why the survey also examined the changes induced by the global economic crisis in the industry, with results grouped under four categories: (a) rates, (b) volume of work/clients, (c) payment terms and (d) procedures, all of which are discussed in detail below.7

10.4.9.1 Rates

A total of 32.4 per cent of the respondents noted that they had experienced diminished rates of pay, with cuts varying from respondent to respondent, ranging from ‘slightly’ and ‘lowered by a third’ up to ‘50 per cent of the original rate’. Some respondents also pointed out that cuts in rates had begun even before the economic downturn and that rates had already been lowered several times by the time they filled in the survey. Cuts were implemented in many ways, with some companies negotiating lower prices with subtitlers, asking for discounts, lowering the price and adopting a take-it-or-leave-it policy, or changing the payment policy from pay-per-subtitle to ppm. In addition, according to a couple of respondents, some subtitling studios had stopped offering bonuses for rushed assignments or programmes with high lexical density and a large number of subtitles. A Polish respondent commented that ‘[t]he rates are lower and there is no extra payment for a


short notice assignment’. At the time of the survey, three respondents were either in the process of negotiations with clients or had just been informed about potential cuts in their rates. Despite the fact that more than 30 per cent of the respondents had experienced cuts, there were still five respondents (0.1 per cent) whose rates had slightly increased. There was also one respondent who remarked that her rate had been ‘initially reduced, but through negotiation the agreed final rate was even higher’ than the one suggested originally.

10.4.9.2 Volume of work/clients

Changes affecting the volume of work varied from respondent to respondent, with some having suffered a decrease in job offers, while others had seen an increase in their workload. Overall, the results tend to be rather positive for freelance subtitlers. In the five years prior to the completion of the survey, a total of 35.7 per cent of the respondents reported that their subtitling output had decreased ‘considerably’ (21.2 per cent) or ‘slightly’ (14.5 per cent), whilst for a substantial 42.6 per cent it had increased ‘slightly’ (22.6 per cent) or ‘considerably’ (20 per cent); for the remaining 21.7 per cent it had ‘remained the same’. In the opinion of some respondents, the increase in work was due to their establishment as credible professionals over time. It is worth noting that, on occasion, decreased output was to some extent a matter of personal choice, as respondents had turned down job offers owing to their unwillingness to compromise on new rates. As a respondent based in Finland commented: ‘[t]hey lowered the rate paid for programmes of a certain TV channel. I have refused to translate for this channel since then’. There were also involuntary job losses and situations where respondents were simply assigned fewer contracts.
The situation was gloomier for those who worked as in-house subtitlers: some were forced to shorten their working hours or days, whilst others lost their jobs altogether because their clients went bankrupt. All in all, losses and gains in the job market clearly foreground the phenomenon of workload redistribution: those subtitlers who compromised on rates tended to benefit from a boost in their workload, whereas those who refused to accept lower rates were given fewer assignments.

10.4.9.3 Payment terms

Delays in payment were another issue reported by respondents as a collateral consequence of the cash-flow problems that had become more


acute with the economic crisis. Nonetheless, the percentages are relatively low: only 5.1 per cent of respondents mentioned experiencing delayed payments and/or extended payment terms, e.g. from 30 days to 60 days, or from 60 days to 90 days. Some respondents further indicated that they had even started to request upfront payment because some of their clients had gone bankrupt before making payments, thus causing them losses. A respondent based in France also added that she had compromised on lower rates, but had ‘ask[ed] for payment on or before delivery’.

10.4.9.4 Procedures

A small fraction of respondents had also been confronted with procedural changes: 2.8 per cent pointed out that their typical clients had made procedural changes over the five years prior to the time they had replied to the questionnaire. Some companies had chosen to reduce their budgets for procedures such as second proofreading/final editing, as well as requiring subtitlers to perform time-cueing. In relation to the task of time-cueing subtitles, concerns about the declining quality of master files due to budget cuts were also expressed. The reasons put forward were that clients no longer had enough manpower to take care of all the tasks in a satisfactory manner or that they had started to hire cheap, low-quality labour to reduce costs. As one respondent based in the Czech Republic stated, ‘[my clients] decreased the rates [and] started cooperating with cheaper, less qualified translators’. Along these lines, some of the respondents reported that they had been asked to take charge of more research and documentation tasks, such as creating consistency guidelines and compiling vocabulary lists. According to the survey findings, the economic crisis has certainly compounded the difficulties with respect to the working conditions of many subtitlers and the state of affairs within the industry. However, further research is required to investigate the development of the ensuing situation.

10.5 Conclusion

Given the broad differences affecting both practice and rates, not only between countries but also within the same country, drawing general conclusions on the working conditions of subtitlers is a difficult task. Nonetheless, the purpose of this chapter is to present empirical findings in order to offer an overview of the subtitling industry. Although the research scope is limited to the perspective of subtitlers, this study has


yielded rich findings regarding the working conditions under which they perform their activities in different countries. One caveat is that the survey results are more representative of Europe, where the majority of the respondents were based, even though some were from North and South America, Asia, the Middle East and Oceania. Subtitling, despite its century-old history and its presence in everyday life, remains an often neglected profession. Subtitles, the so-called ‘necessary evil’, make television programmes accessible to the deaf and the hard-of-hearing, enable audiences to understand the content of a programme without knowledge of the SL, and are widely used as a tool to aid language learning. There is, in fact, an undiminished need for them in the market. However, the vulnerability of subtitlers seems to have increased as the industry has developed. Facing competition from the seemingly unlimited cheap labour available on the Internet, and challenged by profit-oriented clients who endeavour to boost their turnover while offering lower rates, many subtitlers have been struggling with trade-offs in time, earnings and performance quality. The disparity between the rates reported by survey respondents, not only among but also within countries, indicates that turmoil over rates in the subtitling industry is a prevalent global phenomenon. Results have also highlighted worrying trends, such as the lack of correlation between higher remuneration rates and urgent deadlines, higher qualifications or levels of experience, as well as the lack of recognition of subtitlers’ work in general. The current situation is set to deteriorate further unless awareness is raised and the necessary measures put in place. There is no one-size-fits-all solution, nor a shortcut to noticeable improvement, though progress might be possible if a more prominent subtitlers’ network were built up.
Such a network would hopefully contribute to the reinforcement of the subtitling community, resulting in greater recognition for the profession. Nevertheless, this effort should be made not only at an international but also at a national level. The survey has revealed differences between countries with strong subtitlers’ associations and unions and those without. The working conditions of subtitlers in the former were more homogeneous and more likely to remain at a certain level, particularly as regards rates, royalties and credits, while the situation in the latter was more disparate, thus pushing the two extremes even further apart. If a solid network were built within countries, cooperation at international level would also become more effective.


To improve the working conditions of subtitlers, the scope for collaboration should be widened, rather than being limited to trade unions and associations of subtitlers. While translation agencies are blamed for ignoring quality in order to increase turnover, the crux of the problem may, in fact, lie in the attitudes of filmmakers, producers and the audience towards the quality of subtitles. If the audience were to understand the impact that the quality of subtitles can have on their viewing experience, they might demand better quality. If other stakeholders also became aware of these advantages, they might place more emphasis on subtitling and increase the budget allocated to it to ensure its quality. If effective communication channels could be implemented between the relevant industries and practising subtitlers, the working conditions of subtitlers might improve somewhat with time. Although there is no shortcut to improving the status quo, small changes could be made to enable professionals to move, step by step, towards greener pastures, where subtitlers working for a reasonable wage and under decent working conditions would no longer be a dream, but a reality. Only when the need for good-quality subtitles is created can the ‘supply chain’, the subtitling profession, become well established.

Notes

1. For further information, please refer to the Income section under the OECD’s Better Life Index: www.oecdbetterlifeindex.org/topics/income.
2. For further information, please see the report published by the World Bank: http://siteresources.worldbank.org/DATASTATISTICS/Resources/GDP.pdf.
3. For further information, please refer to the database of the World Bank: http://data.worldbank.org/indicator/NY.GDP.PCAP.CD?order=wbapi_data_value_2010+wbapi_data_value&sort=asc.
4. For further information regarding Income Levels, please refer to the world development report by the World Bank: http://wdronline.worldbank.org/worldbank/a/incomelevel.
5. Due to rounding, some of the totals in the survey findings do not add up to 100 per cent.
6. Some of the respondents further indicated that, despite being legally entitled to a royalty rate of 50 per cent of the original translation fee (e.g. under Finnish copyright law), not all the translation agencies adhered to the law. Sometimes commissioners only grant royalties under certain conditions, i.e. if they are the owners of the copyright themselves. In addition, the gross royalty to which subtitlers are entitled is still subject to some expense deductions.
7. The time frame for the changes indicated by respondents is between 2005 and 2010.


References

Andrews, Dorine, Blair Nonnecke and Jennifer Preece. 2003. ‘Electronic survey methodology: a case study in reaching hard-to-involve internet users’. International Journal of Human-Computer Interaction 16(2): 185–210.
Ballinger, Claire and Christine Davey. 1998. ‘Designing a questionnaire: an overview’. British Journal of Occupational Therapy 61(12): 547–50.
Díaz Cintas, Jorge and Aline Remael. 2007. Audiovisual Translation: Subtitling. Manchester: St Jerome.
Dickins, James, Sándor G. J. Hervey and Ian Higgins. 2002. Thinking Arabic Translation: A Course in Translation Method: Arabic to English. Oxford: Routledge.
Downey, Gregory J. 2008. Closed Captioning: Subtitling, Stenography, and the Digital Convergence of Text with Television. Baltimore: Johns Hopkins University Press.
Duff, Alan. 1981. Third Language: Recurrent Problems of Translation into English. Oxford: Pergamon Press.
Fock, Holger, Martin de Haan and Alena Lhotová. 2008. ‘Comparative income of literary translators in Europe’. Conseil Européen des Associations de Traducteurs Littéraires, www.ceatl.eu/docs/surveyuk.pdf.
Frazer, Lorelle and Meredith Lawley. 2000. Questionnaire Design and Administration: A Practical Guide. Brisbane: John Wiley & Sons Australia.
Gouadec, Daniel. 2007. Translation as a Profession. Amsterdam: John Benjamins.
Ilieva, Janet, Steve Baron and Nigel M. Healey. 2002. ‘Online surveys in marketing research: pros and cons’. International Journal of Market Research 44(3): 361–76.
Newmark, Peter. 1988. A Textbook of Translation. London: Prentice Hall.
Oppenheim, Abraham N. 1992. Questionnaire Design, Interviewing and Attitude Measurement. London: Pinter.
Reyntjens, Marie-Noëlle. 2005. A Quantitative Study on Subtitling Rates. Unpublished MA Dissertation. Brussels: Institut Supérieur de Traducteurs et Interprètes.
Sapsford, Roger. 1999. Survey Research. London: Sage.
Wright, Kevin B. 2005. ‘Researching Internet-based populations: advantages and disadvantages of online survey research, online questionnaire authoring software packages, and web survey services’. Journal of Computer-Mediated Communication 10(3), www.onlinelibrary.wiley.com/doi/10.1111/j.1083-6101.2005.tb00259.x/full.

11 The Pros and Cons of Using Templates in Subtitling

Kristijan Nikolić

11.1 Introduction

According to Bywood et al. (forthcoming), ‘[t]he subtitling industry has experienced many seismic shifts in the course of the past three decades. The first of these was the cable and satellite TV revolution in the late 1980s which greatly increased the amount of content to be subtitled for television viewers across the globe’. One of the consequences of these shifts is the use of templates which, according to Georgakopolou (2012), were widely introduced because of the DVD boom in the late 1990s. She defines a template as ‘a subtitle file consisting of the spotted subtitles of a film done in the SL [Source Language], usually English, with specific settings in terms of words per minute and number of characters in a row, which is then translated into as many languages as necessary’ (Georgakopolou 2003: 220). In a similar vein, Díaz Cintas and Remael (2007: 253) define a subtitling template as ‘a list of master (sub)titles with the in and out times already spotted’. These definitions suggest that the technical task known as spotting or time-cueing (i.e. deciding the ‘in’ and ‘out’ times of subtitles, taking into account spatial and temporal constraints) may be conducted by a person other than the translator. The terms used to refer to templates are relatively varied. In the industry, for example, they are sometimes referred to as a ‘first translation’ or ‘pivot translation’. These two terms imply that templates are not primarily or only created for the production of subtitles in various languages. They suggest that this first or pivot translation is produced in one target language, broadcast in the target language territory, and then used by the subtitling company for the production of translations into different languages for that programme in other territories. This


will be explained further below. Templates can, however, also be created in the source language of the audiovisual material, in which case there is no translation involved and they are never broadcast. If this type of template contains no text, it may be referred to as ‘empty timecodes’ or a ‘blank template’. According to Georgakopolou (2012), templates are also called ‘genesis files’, ‘masterfiles’ and ‘transfiles’. The term ‘spotting list’ is also used by some subtitling companies. This very varied terminology has resulted in considerable confusion. All in all, a wide variety of terms is used to refer to this process, which involves the production of subtitles with an already existing timecoded subtitle list or file that may or may not be in the source language of the original audiovisual programme. The introduction of templates has also caused slight terminological confusion regarding the term ‘subtitler’. Subtitling companies that use templates will call the person who has produced the template a ‘subtitler’, while those who produce translations using that template are likely to be called ‘translators’, and their products may be referred to as ‘subsequent translations’. However, translators working in subtitling call themselves ‘subtitlers’ regardless of whether they produce subtitles in their target language using a template or not, that is to say, whether or not they are also in charge of spotting. For ease of understanding, in this chapter the term ‘template maker’ will be introduced and used to indicate a person responsible for the production of templates in the source language of the original audiovisual material.

11.2 The use of templates in the subtitling industry

In their seminal book Subtitling, published at the onset of the digital revolution in 1998, Ivarsson and Carroll do not mention the word ‘template’ on a single occasion. However, five years later Georgakopolou (2003: 221) discusses templates and their influence on the subtitling process, which shows that substantial changes had taken place in the interim. Before the arrival of templates, skilled and unskilled subtitlers were differentiated on the basis of the quality of their spotting, amongst other criteria, since, to spot well, a subtitler had to learn an extensive range of technical skills, meticulously described by Ivarsson and Carroll (1998: 79–103). Traditionally, subtitlers were all involved in what is referred to as a ‘first translation’. They would be provided with a video, a dialogue list or a film script, and use a piece of subtitling software for the translation and spotting of subtitles. Some international subtitling companies still commission such work, but that first


translation is often used later on as a template to produce subtitles in another language for another market in order to save time and money. Given the swiftly changing nature of the subtitling process (Ivarsson 1992; Ivarsson and Carroll 1998; Nikolić 2005, 2010), it is evident that it is important for subtitlers to follow developments in subtitling technology, which have proved to be fairly extensive over the last decade. However, some subtitlers are not very keen to make these substantial changes to the way they work, since it takes time and therefore money; subtitlers are not paid to learn to work with new software, as they mostly work on a freelance basis. They might, in addition, not be able to spare the time and, as cutting costs is very important for subtitling companies, subtitlers need to work quickly and efficiently to meet deadlines. As regards the use of templates, subtitlers sometimes feel as if a part of the subtitling work has been taken away from them. Whether this is just a matter of ‘old habits dying hard’ is not clear. What is apparent is that subtitling companies do not pay the same rates for the production of a first translation (translation and spotting) as for the production of subsequent translations (translation only). This is one of the reasons why some subtitlers, if they have a choice, are not keen to work on subsequent translations. Although they often see the introduction of templates as an excuse for cutting rates, their reasons against them can be of an altogether different, not only financial, nature, as will be explained below. Another aspect to be considered is the type of template on which subtitlers are asked to work. As mentioned above, a template can be produced in the language of the original audiovisual programme or in a different one. For instance, subtitlers may be given templates in English to translate a US production into their mother tongue, but they may also be provided with a template in Swedish, Danish or Romanian. 
This is illustrated in Figure 11.1, where a Swedish template is used to subtitle a film from English into Croatian. In this case, subtitlers are not expected to translate from Swedish (unless they understand the language), but to use the timings of the already spotted subtitles to speed up their translation by not having to deal with the technical dimension of subtitling. On these occasions, the translation is carried out directly from the soundtrack or with the support of a dialogue list in the original language of the programme (English in this case). Some subtitling companies have introduced the practice of using so-called ‘blank templates’, which contain no text, only time-cued blank boxes in which to insert the translation. Similarly, where the source language of the audiovisual programme is English,

The Pros and Cons of Using Templates in Subtitling


Figure 11.1 Example of a Swedish template provided for the translation of a film from English into Croatian

subtitlers may be presented with templates in English to be used as a masterfile when producing their subtitles into their respective target languages. These English templates may be verbatim transcriptions of what is said in the audiovisual material or they may already have been edited. The reasons for employing templates are numerous and mostly depend on the circumstances in which subtitling companies and TV channels operate and/or on the markets in which audiovisual products are broadcast. If, for instance, a Swedish TV channel decides to enter new markets and broadcast overseas via cable TV systems, and if it already owns the Swedish subtitles of its mostly English-language programmes, the Swedish translations may be used as templates through which translations will be made into the languages spoken in those new markets. As explained above, in such instances, templates will only be used for their timecodes. In an attempt to keep costs low, if the Swedish TV station decides to export programmes shot originally in Swedish, and for which no subtitle files exist, they may hire professionals in a freelance capacity to produce templates in Swedish in countries where both translating and spotting are cheaper. If this Swedish template is used as a masterfile in other Scandinavian countries, such as Norway or Denmark, where the languages are similar, it is worth asking whether, and to what extent, the template influences the translations. The same


Kristijan Nikolić

could be said for the use of Croatian templates in Serbia or Slovenia – an issue which will be discussed briefly later on.

11.3 The advantages of using templates

Templates were introduced as a response to the growing demand for subtitles (Georgakopoulou 2012) and for greater control over the subtitling process. The production of subtitles in various languages for a DVD would have been impossible if the subtitle files had not been identical for all languages in terms of timecodes, number of subtitles, etc. Furthermore, as DVD subtitling is permanent, there is greater pressure on subtitling companies to eliminate mistakes (Bywood, personal communication, 24 November 2014). However, using templates in an advantageous manner requires a certain modus operandi. For subtitling companies, using templates, either in the SL or as a first translation, represents a considerable advantage in terms of saving money and time. This time saving should also apply to translators, although this is not always the case, as will be discussed in the next section. When templates are prepared in the source language of the audiovisual programme, template makers produce them according to a specific country’s subtitling standards, and these may vary from country to country or might not even exist at all. This is the case with some dubbing countries, such as the Czech Republic, which do not have specific subtitling standards, or at least did not have them when the DVD industry experienced its boom (Bywood, personal communication, 24 November 2014). The use of templates in these cases could again be seen as an advantage for subtitling companies as it facilitates the subtitling process. Modern subtitling software allows the template to be loaded and the timecodes copied onto a blank file into which the translation is then inserted. By using the original template synchronously with the video and the blank subtitle file, new subtitles may be produced relatively quickly. Figure 11.2 illustrates this process.
On the right-hand side, a template in English can be seen; the translation into Croatian is shown in the middle, at the top and the bottom. The video is also located in the middle of the screen, and shows subtitle number three, the so-called ‘working subtitle’. The original template is normally used together with the video, since copying and pasting names and numbers saves time and makes it easier to assess how much text can be inserted into the subtitles of the subsequent translation. This procedure should result in a quicker translation, provided, that is, that the template is correct. For a template to


Figure 11.2 Example of an English template provided for the translation of an audiovisual programme into Croatian

be considered ‘correct’, names, numbers, measurements, etc. should be written correctly, to match the original dialogue. Although correct and well-prepared templates are not all that common, they do exist. Given the above, such an approach to subtitling could be expected to make up for the ever-decreasing subtitling rates. Some subtitlers concur that a good and flexible template may indeed save time. Regarding flexibility, some companies allow subtitlers to merge and split subtitles, a practice that is welcomed, especially when more time and space are needed, as is the case in the subtitling of some documentaries. Merging and splitting subtitles may be quicker than trying to fit the translation into a locked timecode but, if too many subtitles need to be split or merged, no time is, in fact, saved. The use of blank templates or empty timecodes is often perceived as problematic by subtitlers, since the absence of text slows down and complicates the subtitling process. In this case, instead of using existing text as a reference, the subtitler has to focus both on the duration of the subtitle and on the timecode at the top of the screen in order to assess the amount of text that may be inserted. The use of templates is of considerable benefit to subtitling companies, enabling better control of the whole process, which is essential since they are usually in charge of the production of subtitles in several


countries, and thus often in more than one language. Since less is paid for subsequent translations, the resultant financial savings are also seen as an advantage. For instance, if subtitling company X has a contract with TV channel Y to subtitle its programmes for all the European markets in which they are broadcast, X may hire a template maker to produce a template in the SL. This will then be sent out to freelancers to be translated into the relevant European languages. Alternatively, the company may decide to hire an Estonian subtitler, for instance, to produce the first translation (including both translation and spotting), which will then be sent out to all their European freelancers to be used as a template. As far as these two approaches are concerned, some subtitlers might in fact prefer to work with a template in a language other than the source language of the programme, as these are actually first translations produced by professional subtitlers, and therefore often of better quality. Original master templates (often in English) are usually produced only for use as templates and may contain odd ‘in’ and ‘out’ times with an awkward organization of subtitles. In addition, they usually contain too much text, as they tend to be verbatim accounts of what is being said in the audiovisual text. Subtitling companies also benefit from templates, as they do not have to invest time and money to train all their translators to spot. In the previous example, they only need to invest in the training of the Estonian subtitler or the English template maker, who can be based in a country with lower fees. This is one of the effects of globalization discussed by Georgakopoulou (2006) and Díaz Cintas and Remael (2007: 37). Subtitling companies are under pressure to produce high volumes of subtitles in many languages and the use of templates seems to respond to this need as well as to promote control and standardization.
The subtitling industry is characterized by fierce competition, and this should be borne in mind when discussing the introduction of templates (see Georgakopoulou 2003: 214–232).
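The timecode-reuse workflow described in this section (loading a template and copying its timecodes onto a blank file for the translator to fill in) can be sketched in a few lines of code. This is only an illustrative sketch, not any vendor’s actual software: it assumes the template is stored in the widespread SubRip (.srt) format, and the helper name `make_blank_template` is invented for the example.

```python
import re

def make_blank_template(srt_text: str) -> str:
    """Strip the dialogue from an .srt template, keeping only the
    subtitle numbers and the 'in'/'out' timecodes, so a translation
    can be typed into the empty, pre-spotted boxes."""
    # .srt blocks are separated by blank lines
    blocks = [b for b in re.split(r"\n\s*\n", srt_text.strip()) if b.strip()]
    blank_blocks = []
    for block in blocks:
        lines = block.splitlines()
        # keep the subtitle number and the timecode line, drop the text
        blank_blocks.append("\n".join(lines[:2]))
    return "\n\n".join(blank_blocks) + "\n"

# A hypothetical two-subtitle Swedish template
swedish_template = """\
1
00:00:01,000 --> 00:00:03,200
Var är du?

2
00:00:03,400 --> 00:00:05,000
Jag är här.
"""

print(make_blank_template(swedish_template))
```

Run on the sample template, this yields the two numbered, time-cued but textless boxes into which, say, a Croatian translation would be inserted; the spotting is inherited wholesale from the first translation.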

11.4 The disadvantages of using templates

Globalization, along with the economic downturn of recent years, has forced subtitling companies to research ways of saving money and time in order to be able to compete with each other. Georgakopoulou (2006: 120) discusses subtitling and globalization and notes that ‘[e]very time the word “digital” is used, studios see this as an opportunity for centrally produced work’, which is indeed facilitated by the use of templates. Subtitlers do not seem to criticize this approach as such. What


subtitlers seem to see as a disadvantage of the introduction of templates is the reduced rates for which they have to work. Subtitling companies argue, however, that cuts are imposed by the TV broadcasters that hire them and not necessarily by the use of templates. Whether it is, in fact, that straightforward should not be a topic for scholarly discourse, although joint research by experts in economics and subtitling would be interesting. As regards further disadvantages, Kapsaskis (2011: 174–5) emphasizes that:

it is arguable that template files effectively indicate what to translate and how to translate it. To a significant extent, they dictate specific or strategic choices that are often debatable as far as the TL [Target Language] is concerned, and ultimately they tend to replace the audiovisual material as the source text of the translation.

This view requires further investigation, since it is indeed possible that templates will become the source text of translation. However, this does not necessarily have to be problematic, especially if they are correct. The question is whether the template, when it is not a verbatim representation of the original, influences the condensation of text in other target languages. On the other hand, verbatim templates can be problematic for subtitlers in countries with different subtitling conventions. A verbatim English template may sometimes contain 1,700 subtitles for a 100-minute film, while the same film would only contain between 700 and 900 subtitles in countries with different standards. If these 1,700 subtitles are locked and the translator is not allowed to change timecodes or to split and merge subtitles, the process will take longer than subtitling from scratch. Even if these 1,700 subtitles are unlocked, the subtitling process might become lengthy and complicated.
Croatian, for instance, tends to be more long-winded than English, which means that more space is needed for some of the subtitles compared to their English originals. The translation of abbreviations, for example, may require more space than in an English template, as a full explanation may be needed. This is the case with the term MP (Member of Parliament), which does not have an equivalent abbreviation in Croatian and could in fact be translated as Zastupnik (u Donjem domu Britanskoga parlamenta) [representative in the British House of Commons]. Perhaps, when it comes to templates and to the policy of not allowing any change in the timecodes, what subtitling companies tend


to ignore are the lexical, syntactic and cultural differences between languages. As a result of these, translation often requires the use of paraphrase and this inevitably takes more space. This issue could, to a certain extent, be addressed by allowing timecode changes and the merging or splitting of subtitles in existing templates. Subtitling software should be user-friendly enough to enable these changes, since a subtitler may end up working longer hours with the template than without it. Some programs do not allow the creation of a blank template to be used for translation and subtitlers may need to delete the text from the existing template and then insert the translation, a process which is far from time saving. Templates created in the source language of the original audiovisual programme can be of dubious quality, especially if they are produced by the ‘template makers’ described above. In these templates, proper names might be misspelt. For instance, the name ‘John’ in a template may actually be Jon in the dialogue list and, because of time constraints, someone translating using that template would probably just copy and paste the name without checking the dialogue list. While it could be argued that this would be unprofessional, it should be borne in mind that subtitlers are required to meet tight deadlines and work quickly. This lack of time affects quality, a fact that seems all too often to be disregarded by subtitling companies. This raises another issue: when templates are used between closely related languages, for instance when Slovenian templates are used to produce Croatian subtitles, the copy-paste method used in order to save time also tends to cause problems because of the many, sometimes partial, false friends in these languages. Even if a subtitler is aware of these, it is easy to slip up when copy-pasting from a template. Some examples are included below.

Slovenian: predor [a tunnel]
Croatian: prijedor [a canyon]

Slovenian: slab [bad, poor, for instance slab dogovor meaning bad, poor agreement]
Croatian: slab [fragile, meaning ‘physically fragile’]

Slovenian: lahko [can, may – a modal]
Croatian: lako [easy, as in lak zadatak, an easy task]

Some of the consequences or characteristics associated with working with templates, that is, poor quality and pressing deadlines, are clearly considered by subtitlers as having a negative impact on their work. Becoming accustomed to new working methods can also be disadvantageous for some trained subtitlers. Some of them might prefer


to translate a whole film or TV episode first, or a large part of it, and only then go back and do the spotting, a task that serves as a respite from translating. It seems, thus, that templates can have a significant impact on subtitlers’ work, affecting key aspects such as rates, skills, workflow and quality.

11.5 Concluding remarks

Templates are here to stay, with some subtitling companies using them nearly exclusively. As they are under pressure to increase their output, subtitling companies are in a position to contribute to the timely production of high-quality subtitles from templates, which is in everyone’s interest. They should promote the adoption of user-friendly subtitling software to work with templates and allow subtitlers to change timecodes and to merge and split subtitles. They should also familiarize themselves with differences between languages and audiovisual genres, acknowledging that some languages or programmes might require more space than others. Companies should also avoid using templates as a basis for cutting subtitling rates, since the time that a subtitler saves on spotting is not always as substantial as it might appear or, indeed, as they would wish it to be. Even if they work with templates, subtitlers still need enough time to finish a project. There is a limit to the amount of time that can be saved by using templates. Exceeding that limit is irresponsible and unfair to everybody involved in the process, from the producer of an audiovisual text down to the viewer. Subtitling a film in a day is rarely possible, since even the simplest of films requires time. Subtitlers, on the other hand, should accept the reality of working with templates, as well as other technological developments. This may be hard, as learning new skills can take up valuable time and unlearning skills acquired after many years is also difficult and time consuming. As far as the use of templates is concerned, better communication and understanding among subtitlers, subtitling companies and TV broadcasters would result in better products for those for whom subtitles are made, namely, the viewers.

References

Bywood, Lindsay, Panayota Georgakopoulou and Thierry Etchegoyhen. Forthcoming. ‘Embracing the threat: Machine translation as a solution for subtitling’. In Maike Oergel and Pierre-Alexis Mével (eds) Subtitling: A Collective Approach. London: Bloomsbury.
Díaz Cintas, Jorge and Aline Remael. 2007. Audiovisual Translation: Subtitling. Manchester: St. Jerome.
Georgakopoulou, Panayota. 2003. Redundancy Levels in Subtitling. DVD Subtitling: A Compromise of Trends. Unpublished PhD thesis. Guildford: University of Surrey.
Georgakopoulou, Panayota. 2006. ‘Subtitling and globalisation’. The Journal of Specialised Translation 6: 115–20.
Georgakopoulou, Panayota. 2012. ‘Challenges for the audiovisual industry in the digital age: the ever-changing needs of subtitle production’. The Journal of Specialised Translation 17: 78–103.
Ivarsson, Jan. 1992. Subtitling for the Media: A Handbook of an Art. Stockholm: TransEdit.
Ivarsson, Jan and Mary Carroll. 1998. Subtitling. Simrishamn: TransEdit.
Kapsaskis, Dionysis. 2011. ‘Professional identity and training of translators in the context of globalisation: the example of subtitling’. The Journal of Specialised Translation 16: 162–84.
Nikolić, Kristijan. 2005. ‘Differences in subtitling for public and commercial TV’. Translating Today 4: 33–6.
Nikolić, Kristijan. 2010. ‘The subtitling profession in Croatia’. In Jorge Díaz Cintas, Anna Matamala and Josélia Neves (eds) New Trends in Audiovisual Translation and Media Accessibility (pp. 99–108). Amsterdam: Rodopi.

12 Signing and Subtitling on Polish Television: A Case of (In)accessibility

Renata Mliczak

12.1 Introduction

Accessibility to audiovisual media for the hearing impaired audience in Poland has received considerable attention in recent years (Künstler 2008; Szarkowska 2010). This is part and parcel of a greater awareness concerning the needs of the deaf and hard-of-hearing at both European and national levels. Indeed, Directive 2007/65/EC of the European Parliament and the Council (Web 1) – an amendment to Council Directive 89/552/EEC on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning television broadcasting – encourages EU broadcasters to cater for minority groups, including the hearing impaired. In Article 3c, it states that, ‘Member States shall encourage media service providers under their jurisdiction to ensure that their services are gradually made accessible to people with a visual or hearing disability’. In 2011, the Polish Council of Ministers approved the draft amendment to the National Broadcasting Council Act as part of the implementation of the EU Audiovisual Media Services Directive. This event marked the beginning of the provision of accessible services regulated at national level. From an academic perspective, more research is being undertaken in the area of Deaf Studies in Poland, especially at the University of Warsaw, where researchers have been engaged in a number of sign language projects. Examples include a grammatical categorization through space and movement in Polish Sign Language (PSL) [Polski Język Migowy – PJM], the iconicity of its grammar and lexis, and the compilation of a corpus intended as the basis for the first dictionary of PSL in Poland (www.plm.uw.edu.pl/en/projects). More research is underway into subtitling for the deaf and the hard-of-hearing (SDH), including


Renata Mliczak

projects on the reception of SDH, SDH in multilingual films or the provision of subtitles on digital television (http://avt.ils.uw.edu.pl/en/sdh). New developments in technology also represent a step forward, helping the hearing impaired to gain better access to information. The use of videophones to establish contact with a sign language interpreter enables deaf people to use services, such as banks or post offices, independently. Systems like Thetos, which allows the conversion of text into sign language (http://sun.aei.polsl.pl/sign/#a), are able to ease the flow of communication between the deaf and hearing communities in Poland. Since deaf people do not always read Polish fluently, the program offers a translation of what is being said in the form of an animated virtual person. Although there are many positive changes taking place in the country, there are also some remaining issues with a negative influence on accessibility to audiovisual media. From a legislative point of view, for instance, the Act of 25 March 2011 on the Amendment to the Broadcasting Council Act and Other Acts (Web 2) and the Act of 19 August 2011 on Sign Language and Other Means of Communication (Web 3) raise many questions. By way of illustration, the former obliges broadcasters to make 10 per cent of services accessible to the sensory impaired, but, crucially, does not specify the percentages that should be devoted to subtitled, signed or audio described programmes, whereas the latter avoids any reference to the use of PSL in education, an issue that many deaf people feel very strongly about. Given this state of affairs, the audience is right to complain about the lack of accessible services on television and is looking to other media, such as the Internet, DVDs and cinemas and theatres, where films and performances are increasingly being offered with appropriate provision for those with hearing impairments.
The use of an artificially created system, known as Signed Polish (SP) [System Językowo-Migowy – SJM], on public service television Telewizja Polska (TVP) is also proving controversial, as the provision offered at present is not satisfactory for the Deaf, who claim that sign language interpreting (SLI) should be delivered in the form of PSL – their natural language – rather than SP.

12.2 Accessibility for people with hearing impairments: the stakeholders

As has already been mentioned, there are four groups influencing the way in which audiovisual media are made accessible to the hearing impaired in Poland: the audience, the providers, the promoters and the legislators. Although they all represent distinct categories in their own

Signing and Subtitling on Polish Television 205

right, we should bear in mind that their roles overlap to some extent. Viewers, for instance, might, and often do, promote accessibility, while providers can sometimes also be part of the audience. Sensory impaired viewers are obviously the most affected since their access to audiovisual material depends entirely on the state of accessible services within the country. They are the reason why accessible services should be offered in the first place and they also contribute, to a certain degree, to the shape that these assume by commenting on the quantity and the quality of the programmes provided with SDH or either of the two existing types of SLI. Then, there are the providers from both the public and private sectors. Their job is to ensure that a certain number of audiovisual productions are fully accessible to the target audience. They are either morally obliged, legally bound or economically driven to provide their services to this section of the community. In the case of Poland, public service television, TVP, is the only broadcaster to have provided 8–10 per cent of SDH long before it was made compulsory through the Amendment to the National Broadcasting Act in 2011. Legislators play the most important role in bridging the gap between the hearing impaired and the rest of the population concerning the right of access to information. Indeed, equal access to all types of media, including audiovisual media, should be legally established by governments as it is the most efficient way of ensuring that everyone has the same opportunities concerning access to information, culture and entertainment. Finally, promoters spread information concerning the need for accessible services and, through their work and campaigns, raise awareness among the wider public concerning the deaf and hard-of-hearing, their culture, their community and their right to equal treatment.
Academics and members of associations, to name but a few, are often experts in accessibility or closely related fields. This category also encompasses professionals and providers who consider it their job to contribute to a more evenly balanced society. They may be hearing impaired themselves and they usually have a genuine and enduring interest in the culture and social habits of this specific group.

12.3 Audience

12.3.1 General information

Hearing impaired people are not a unified and homogeneous community because they can be affected by different levels of hearing loss, ranging from mild to profound. Their cultural affiliation, as well as


their first language (whether phonic or sign language), are other defining features with an important role to play in the place occupied by these citizens within society. As Neves is at pains to stress (2005: 84): Given that hearing loss can be found in various degrees and can be classified according to various parameters, there is often difficulty in drawing a line between hard-of-hearing and being deaf. Deafness may be defined in terms of audiological measurements, focusing on the causes and severity of the impairment, but it can also be seen in terms of social integrations and language usage. In terms of audiological measurements, and according to the Royal National Institute for Deaf People (RNID, Web 4), ‘deafness’ can be divided into four different categories (Table 12.1). People suffering from mild to severe deafness are referred to as ‘hard-of-hearing’ in English, whereas in Poland they are described as słabosłyszący or niedosłyszący, which translates as ‘poor hearers’ (Szarkowska 2010). When it comes to people described as ‘deaf’, the terminology is less consistent. In the USA and Canada, for instance, it means people who have a total loss of hearing (Shield 2006) and, similarly in Poland, głuchy [deaf] refers to people with profound deafness. In the UK, however, the word ‘deaf’ refers to people with any degree of hearing loss and it is used interchangeably with the term ‘hearing impaired’, which in the USA is regarded as a rather derogatory description. ‘Deafened’, ogłuchły in Polish, is used to describe people who have lost their hearing gradually or suddenly, but always after having learnt to speak. 
Although the term ‘deaf’ is normally used in English as a medical term and refers generally to those with profound deafness, some deaf people prefer to describe themselves as ‘Deaf’, with a capital ‘D’, in order to stress the fact that they are not only medically deaf but that they also belong to the Deaf community, which has different values and habits from members of the hearing community. These individuals are usually,

Table 12.1 Levels of deafness

Mild deafness: 25–39 decibels
Moderate deafness: 40–69 decibels
Severe deafness: 70–94 decibels
Profound deafness: over 95 decibels


though not always, prelingually deaf, that is to say, they were born deaf or lost their hearing before the acquisition of speech and, as a result, sign language is their primary means of communication. In this article, the following distinction is made between these terms:

• deaf and hard-of-hearing, hearing impaired and people with hearing loss are expressions used to refer to people with any kind of hearing impairment, including the Deaf;
• deaf is used when talking about profoundly deaf people, from a medical perspective and also including the Deaf;
• the Deaf is used in these pages as opposed to deaf in audiological terms.

Statistically speaking, it is notoriously difficult to obtain reliable data regarding the total number of deaf and hard-of-hearing people in Poland as various sources provide different figures that tend to be mere approximations. Quoted on the Gallaudet University Library website (Web 5), the Polish Deaf Association (PZG) states that there are 100,000 deaf people living in the country (including the hard-of-hearing). The Signall report (Web 6) estimates that the number of Polish people suffering from any kind of hearing impairment is between 2.5 and 3.5 million. Taking into account that, according to the latest official statistics (GUS 2012), the population of Poland in 2011 numbered 38,511,800 inhabitants, the percentage of citizens affected by hearing problems is therefore between 7 and 9 per cent. These figures show that the number of Polish citizens with hearing loss is very close to the 10 per cent of people affected around the world (Traynor 2011). According to this author (ibid.: online), the number of hearing impaired people is constantly growing and it is predicted that, ‘[d]ue to the aging population in Europe, the overall prevalence of hearing impairment is likely to rise to some 30 per cent of people in the next century’.
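The audiological bands quoted in Table 12.1 can be expressed as a small classification function. This is purely illustrative: the function name is invented here, the thresholds follow the RNID figures quoted above, and the handling of the 94–95 dB boundary (which the table leaves open) is an assumption of this sketch.

```python
def deafness_category(hearing_loss_db: float) -> str:
    """Classify a level of hearing loss (in decibels) using the RNID
    bands from Table 12.1. Treating 95 dB as the start of the
    'profound' band is a choice made for this sketch."""
    if hearing_loss_db >= 95:
        return "profound deafness"
    if hearing_loss_db >= 70:
        return "severe deafness"
    if hearing_loss_db >= 40:
        return "moderate deafness"
    if hearing_loss_db >= 25:
        return "mild deafness"
    return "below the mild-deafness threshold"

print(deafness_category(30))   # mild deafness
print(deafness_category(100))  # profound deafness
```

In audiological practice the boundaries are of course applied to measured audiograms rather than single figures; the function simply makes the banding in the table explicit.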
12.3.2 Language issues: sign language, phonic language and signed system

Attempts to change and alter sign languages have been made in every country and for different reasons. Some of the reasons for devising systems to teach deaf people are discussed by Sutton-Spence and Woll (2003: 37) in the following terms:

One of the causes of change in sign languages has been language planning. Ever since public education of deaf people has existed,


hearing people have attempted to alter the language used by deaf people. Even the great sign language enthusiasts of the eighteenth and nineteenth centuries, such as the Abbé de l’Epée in France, and Thomas and Edward Gallaudet in America, tried to alter the ‘natural signs’ of the deaf children they taught, to match the structure of the spoken language of the country. […] Unfortunately for the language planners, the changes have not been as great as they would have liked. Hearing people often try to invent new signs or sign systems for deaf people […] but these have never been totally accepted. In the case of Poland, apart from PSL, there are various forms of communication with the deaf, including SP and Pidgin Sign Polish (PSP). PSL is the natural visual-spatial language of the Polish Deaf community. Unlike the other forms of communicating with the deaf, it is characterized by its own grammar in the form of facial expressions, body posture and pantomime (www.plm.uw.edu.pl/en/node/239). It is believed that PSL started to develop informally among the students of the Institute for the Deaf, founded in Warsaw in 1817 by Rev. Jakub Falkowski (Tomaszewski and Czajkowska-Kisil 2006). This is also when the first observations of the language used by the deaf were made. They resulted in the publication of ‘The Mimic dictionary for the deaf-mute and people communicating with them’ [Słownik mimiczny dla głuchoniemych i osób z nimi styczność mających] in 1879 (ibid.). However, the global focus on teaching the deaf how to speak prevented studies from being carried out on PSL, creating assumptions about sign languages as purely visual codes for spoken languages (http://gupress.gallaudet.edu/stokoe.html). When William Stokoe suggested that American Sign Language (ASL) was ‘a fully formed human language in the same sense as spoken languages like English’ (ibid.) 
and published his findings in the monograph Sign Language Structure in 1960, sign languages began to be recognized all over the world.

In Poland, an alternative to the purely oral method used at that time in most schools was SP, introduced by Bogdan Szczepankowski in the 1960s in an attempt to enable deaf children to learn Polish phonic language (Świdziński and Gałkowski 2003). SP is based on standard Polish grammar with lexical items from PSL. In a linear way, it combines signs into sentences according to the grammatical rules of spoken Polish and, whenever necessary, the prefixes and suffixes are fingerspelt. There are two versions of the system: the full version, known as pełny wariant systemu językowo-migowego (reflecting the Polish language with the use of inflected endings), and the basic version, wariant użytkowy systemu językowo-migowego (the spoken text is presented by signs without endings), also known as the functional version (ibid.).

Another system used in communication with the deaf is PSP. This system includes elements of both PSL and Polish phonic language but, crucially, also includes simplified language forms that do not exist in either and deviates from the grammatical structures of both languages. It is used in schools and in conversations with hearing people who have a limited knowledge of the Deaf community and their language (ibid.).

Even though all the above-mentioned systems can be used in different contexts for different purposes, it needs to be noted that PSL is the only mother tongue of the Deaf. Its users include the Polish Deaf minority living within the boundaries of Poland and, even though PSL may show some influences from Polish phonic language, it has a clearly defined grammar and lexicon of its own.

12.3.3 Education

Regarding education in Poland, teachers in most schools for the deaf continue to teach in SP. As has already been mentioned, hybrids of phonic and sign languages started to be formed in the 1960s and, in 1964–1965, Professor Bogdan Szczepankowski came up with a system that combined signing with Polish phonic language grammar. Some years later, in 1985, the Ministry of Education agreed that this system should be introduced in schools for the deaf and, a year later, the PZG started to train teachers in its use (Web 6).

The introduction of SP into schools was considered a success after a period of oralism, which had been in operation since the Second Congress of Educators of Deaf Mutes in Milan in 1880. At the congress, a group of hearing teachers decided to banish sign languages and to teach the deaf using only the oral method. Despite the fact that the resolution had no legal status, it became the biggest influence on the education of deaf children for the next hundred years (www.deafinfo.org.uk/history/education.html).

Although SP allowed the deaf to use signing when communicating with hearing people, it had the negative effect of preventing them from learning and developing in their natural language, PSL. The situation continues today, with most schools still using SP as the vehicular language in their teaching. However, studies conducted in other countries (Gregory and Swanwick n.d.; Geeslin 2007) show that children learn better and faster when they have been exposed to sign language from earliest infancy and when they are able to follow a bilingual education, that is to say, when they are given the opportunity to learn the oral language by using their sign language mother tongue. The need for a bilingual education is also recognized amongst academics in Poland. An experimental project at the University of Wrocław on teaching Polish as a foreign language to a group of deaf students showed that a bilingual method in the education of the deaf brings better results than the methods currently in use (Kowal 2011). However, in order to benefit fully from a bilingual education, deaf students should be taught by teachers who are fluent in PSL and trained to teach Polish as a foreign language.

Inclusive education, which means that deaf students are taught in mainstream school settings, is growing in popularity as an alternative to educating students in schools for the deaf. However, some researchers (Czajkowska-Kisil and Klimczewska 2009), as well as the deaf themselves (Web 7), argue that Polish mainstream schools are not really prepared to offer an appropriate level of education to the deaf, owing to a lack of specialist teacher training. Deaf students might also feel isolated in a mainstream school where they do not have friends with whom to communicate in sign language.

All the developments discussed above show that the education system for the deaf in Poland should be improved, both in schools for the deaf, where many teachers do not know PSL, and in inclusive mainstream schools, where deaf students might be subject to unintended social exclusion. Students should be provided with optimal conditions in which to develop their natural language – PSL – and to make progress in terms of learning standard Polish. The way in which deaf students are taught Polish (especially reading skills) is bound to be very valuable to providers regarding the delivery of a quality subtitling service to their hearing impaired audience.

12.4 Providers

12.4.1 Accessibility on Polish television

Subtitling for the deaf and the hard-of-hearing in Poland dates back to 1994, when the first film, Rio Grande (John Ford, 1950), was broadcast on Polish public service television, TVP. Until very recently, only two TVP channels, TVP1 and TVP2, offered SDH. During the first decade after its introduction, SDH on Polish TV was scant and limited to films; the subtitles were prerecorded and, in the analogue era, the audience accessed them via Teletext page 777 (Künstler 2008). In 2003, TVP started to subtitle two of its most widely watched news programmes, Wiadomości and Teleexpress, the former of which is still broadcast with SDH today. Wiadomości is shown with semi-live subtitles: the files are prepared before the actual broadcast and the subtitler launches the subtitles live, adjusting them to the speed at which the journalists speak (ibid.).

Signing and Subtitling on Polish Television 211

Subtitled programmes and films are shown during peak hours, namely in the late afternoons and early evenings; these include popular soap operas, current affairs programmes and films shown immediately after the main news broadcast. Programmes with SDH are easy to identify as they are marked with a symbol on the schedule displayed on the TVP website (Figure 12.1). It is also worth mentioning here that TVP is, as of late 2012, the only provider offering subtitles for viewers with hearing loss on their website (Web 8).

Figure 12.1 Schedule of programmes on TVP1

Since 2011, private stations have also been required by law to provide programmes accessible to people with sensory impairments, including the deaf and the hard-of-hearing. A few months after the implementation of this requirement, the National Broadcasting Council collected reports from all the stations so as to ascertain the number of hours and programmes broadcast with accessible services. Regarding SDH, Table 12.2 shows the percentages of programmes broadcast by the three largest television stations in the country during the period August to October 2011, according to research conducted by the National Broadcasting Council and presented during the conference on ‘The role of television in breaking down the barriers’ held on 26 March 2012 (Web 9).

Table 12.2 Programmes with SDH on Polish television

Station                                                        Subtitled programmes
Public service television: TVP, including TVP1, TVP2,          15%
  TVP Kultura [Culture], TVP Seriale [Series], TV Polonia,
  TVP Historia [History] and TVP Sport
Private television: Polsat                                     10%
Private television: TV Nova (known as TVN)                     4.1%

As we can see from these figures, public service television is the leader in the provision of subtitled programmes. Astonishingly, the research also reveals that some stations failed to understand the meaning of SDH and, as a means of boosting their accessible output, one of them even included scrolling text at the bottom of the screen as part of their assisted services.

Another way of providing the hearing impaired with access to audiovisual materials is by using SLI, whether in the form of PSL or SP. According to Szczepankowski (1997), SLI first appeared on Polish television in the late 1970s and, in 1980, TVP started to broadcast a programme for hearing impaired people entitled W świecie ciszy [In the World of Silence] in SP. From 1994 until 1996, Ewa Juchniewicz and Marta Boruń presented a series of children’s programmes, also in SP and broadcast on Mondays, which played not only an entertaining but also an educational role in the lives of deaf children and their hearing counterparts. A year later, in 1997, TVP2 showed the series Dlaczego to my? [Why Us?], in which some of the roles were played by hearing impaired teenagers (ibid.).

Nowadays, SP tends to be used mainly with news programmes, such as Echa Panoramy [Echoes of Panorama], broadcast on TVP2; and Serwis Info Dzień [News Bulletin Day], Serwis Info Dzień Weekend [News Bulletin Day Weekend] and Serwis Info Wieczór Weekend [News Bulletin Evening Weekend], shown on the different regional branches of TVP. Programmes for disabled people, like Spróbujmy razem [Let’s Try Together], and of a religious nature, like Słowo na niedzielę [Word for Sunday], also come with SP. SLI is only starting to appear on private stations, with companies like Canal+ providing PSL in cartoons for children in a clear attempt to distinguish themselves from TVP, which persistently continues to broadcast in SP. The situation could well change soon in the light of the consultation carried out in May–June 2012 by the National Broadcasting Council (Web 10) with providers and representatives of the audience, which showed that the deaf want to watch programmes signed in PSL rather than in SP.
At the moment, there are no reliable statistics regarding the provision of SLI on Polish television stations, although it is commonly accepted that there are fewer programmes supported by SLI than with SDH. This situation is similar to that in other countries, where the amount of signing on television has always been significantly lower than that of subtitled programmes. The reasons behind this state of affairs are multifarious. One of them might be the fact that the number of people who actually know sign language is relatively small – around 50,000 in Poland (Web 5). According to research carried out in 2005 by Ofcom, the independent regulator and competition authority for the UK communications industries, ‘about 66,000 people in [the UK] know sign language well enough to use it to watch television, but many of these people preferred to watch TV with subtitles rather than signing’ (Web 11). Another factor limiting the provision of SLI is that, as SLI is only available in an open format – that is, viewers are not able to switch it off – hearing audiences complain of the distraction it causes. The result is that programmes with SLI are rather limited and tend to be broadcast late at night. As has been mentioned previously, in the case of Poland the situation is compounded by the fact that many of these few signed programmes, particularly on TVP, are broadcast in SP rather than PSL, further reducing the number of viewers.

12.4.2 Solutions to insufficient accessible services on Polish television

As the services offered on television fail to meet the needs of the deaf and the hard-of-hearing in Poland, viewers tend to resort to other media, notably the Internet. Some websites provide signing, such as Effatha.pl (Figure 12.2), an online TV station where religious programmes are signed in both SP and PSL. Others, like ONSI.tv (Figure 12.3), a website run by deaf people, offer news programmes signed only in PSL.
The size of the sign language interpreter on screen is extremely important to deaf people since, as Szarkowska (2010: 150) highlights, ‘sign language makes use of three-dimensional space and facial expressions’ that need to be fully visible if the communication is to be successfully conveyed. Deaf viewers, disappointed with the signing services offered on public service television, decided to create their own television station, ONSI.tv, and to broadcast their own news programmes in a way that would meet their needs, that is, by giving prominence to the figure and the actual size of the interpreter on screen (Figure 12.3) and by using PSL as their main language of communication.

Figure 12.2 TV Effatha

Figure 12.3 ONSI.tv

ONSI.tv has another important role to play. For historical reasons, PSL is a rather fragmented language, without much visibility in the past, but its greater presence now on the Internet, thanks to the efforts of stakeholders such as ONSI.tv, might serve as a way of standardizing it. There are currently many local variations of PSL and, as Szarkowska (2010: 151) points out, ‘it would be interesting to see if the development of nationwide television broadcasting in Polish Sign Language will also bring the unification of PSL itself’. This is what happened in the UK where, as Woll (1991; quoted in Lucas 2001) argues, approximately ten years after the broadcast of the first programme with British Sign Language (BSL) on television in 1980, signing on TV had exerted a significant influence on signing communication among deaf people.

Regarding subtitling on the Internet, Polish deaf and hard-of-hearing viewers resort to material available on websites such as napisy.info, napisy24.pl or napisy.com.pl, to mention but a few. These portals – repositories of subtitle files – do not provide SDH as such, but rather standard interlingual subtitles that, although not tailor-made to suit the needs of the hearing impaired, give them access to some foreign films. Fansubbing, or amateur subtitling (Bogucki 2009), gained popularity in the mid-1990s, when free subtitling software became easily accessible on the Internet (Díaz Cintas and Muñoz Sánchez 2006). In Poland, this practice is still very popular, despite issues connected with the illegality of distributing subtitle files online. In an attempt to clear their image and to ensure that legal requirements are met, the moderators of these forums clearly state that the subtitle files hosted on their servers are to be used with legal copies of films and that they are for personal use only.

Apart from interlingual subtitles found on the Internet, the hearing impaired often resort to the interlingual subtitles in foreign films screened in cinemas, primarily devised for hearing audiences. On Polish television, voiceover is the most popular modality used to translate foreign productions, whereas films shown in cinemas are always subtitled.
Polish films, on the other hand, are shown without subtitles, which means that, ironically, hearing impaired viewers have better access to foreign productions than to national ones, with all the important cultural considerations that this entails.

Finally, people with sensory impairments may also benefit from projects organized by the foundation Kultura bez Barier [Culture without Barriers, www.pcic.dzieciom.pl/oprojekcie.html], whose aim is to provide SDH as well as audio description (AD) for the blind and partially sighted in theatres, cinemas and museums. In May 2011, the project Poza Ciszą i Ciemnością [Beyond Silence and Darkness], which has now become the foundation Culture without Barriers, was awarded the Warsaw Citizens Prize, showing that the wider public appreciates and supports actions aiming to make cultural activities more inclusive. This foundation also contributes to the release of DVDs with SDH and AD.

Thanks to these initiatives, as well as to the growing awareness of the needs of the deaf and hard-of-hearing, an increasing number of DVDs are being released in Poland with intralingual SDH (Polish into Polish). DVDs with interlingual SDH, that is, with SDH tracks to accompany foreign productions in other languages, are very limited in number, and hearing impaired audiences have to make do with interlingual subtitles devised for hearing audiences. It seems that distributors struggle to acknowledge the specific needs of the deaf as regards the description of sounds or the identification of characters in the subtitles, believing that standard interlingual subtitles should suffice.

12.5 Legislators

In some ways, this group of stakeholders can be the most influential when it comes to the provision of accessible audiovisual programmes. The reality is that very few providers are willing to offer subtitled or signed programmes unless they are legally obliged to do so, hence the obvious need for legislators to propose laws regulating the access of deaf audiences to audiovisual media. International as well as national regulations affecting signing and SDH on television are discussed in the subsections that follow.

12.5.1 Supra-national level

At an international level, the 2007 UN Convention on the Rights of Persons with Disabilities, which came into force on 3 May 2008, is ‘the first international document to mention sign language explicitly and therefore safeguarding the rights of sign language users’ (Wheatley and Pabsch 2010: 18). The convention calls for equal treatment of the deaf in society and education ‘in the most appropriate languages’ (ibid.: 18) and recommends that states employ teachers with enough knowledge of sign language to teach deaf students. Poland signed the convention in the same year, but only ratified it very recently, on 6 September 2012. This means that Poland will need to introduce and endorse bilingual education in schools for the deaf, promoting teaching and learning in PSL and the mastering of standard Polish as a foreign language.

At the EU level, the European Parliament issued the Resolution on Sign Languages for the Deaf in 1988, reiterated in 1998, acknowledging that the majority of the Deaf are not fluent in spoken languages and that their preferred means of communication is sign language (ibid.).


This resolution strengthens the position of sign languages as languages in their own right. The Council of Europe, for its part, issued the Recommendation Regarding the Protection of Sign Languages in the Member States of the Council of Europe in 2003. However, according to Wheatley and Pabsch (ibid.: 21), ‘[a]lthough there have been a number of reports and recommendations, a legal instrument has not (yet) been implemented at European level’.

Directive 2010/13/EU, issued jointly by the European Parliament and the European Council in March 2010, is the first piece of legislation at European level to mention accessibility to audiovisual programmes specifically. It encourages providers to ensure that their services are made accessible to sensory impaired people by broadcasting programmes with SLI and SDH. Poland implemented the directive in 2011, as is discussed below.

12.5.2 National level

The first legal attempts to ensure the provision of accessible services in Poland were very general and tended to deal more with disabilities and the right of all citizens to equal access to information than with the more specific issue of access to audiovisual media. Article 32 of the Act of 2 April 1997 from the Polish Constitution states that all citizens are equal before the law, that they have the right to equal treatment by public institutions, and that nobody can be discriminated against in public, social or economic life. The Act of 27 August 1997, which focuses on vocational and social rehabilitation as well as on the employment of people with disabilities, introduced the concept of the disabled person into the Polish legal system and established a system of benefits for the disabled.

2011 can be considered a milestone as regards accessibility to audiovisual media in that it saw the first piece of legislation regulating the provision of accessibility services on television. The Amendment to the National Broadcasting Council Act (Web 2) was implemented on 23 May 2011 and, in its Article 18a, states that television broadcasters are obliged to provide at least 10 per cent of their broadcasting time with accessible services, including AD, SLI and SDH. It also states that the National Broadcasting Council can establish lower percentages depending on, for instance, the technical possibilities of the broadcasters or the nature and variety of programmes on show. However optimistic the Act may seem, it has its sceptics, who claim that, as the Act groups all accessible services together, there is in fact no clear indication of how much of each of these services providers are required to deliver. Another worry is that broadcasters might prefer to pay the fines resulting from lack of compliance rather than provide accessible programming. At the time of writing, the National Broadcasting Council was leading consultations with broadcasters and audience representatives in order to reach a feasible solution to satisfy all parties involved.

On 25 May 2011, the Act on Sign Language and Other Systems of Communication (Web 3) was passed. It is a first step towards giving sign language more prominence and deaf citizens more rights. The document is the first legal act to mention PSL as the natural language of the Deaf and states that they have the right to use SLI services when dealing with public administration, although it does not make any reference to the use of PSL in education. As the above-mentioned 2007 UN convention – ratified on 6 September 2012 – requires nations to address the role and use of sign language in the classroom, it can only be expected that Polish legislators will have to pass another national act in order to resolve this debate. All in all, many positive changes are taking place on the legal front and, hopefully, they are the first stepping stones towards fully accessible services for the deaf and the hard-of-hearing in Poland.

12.6 Promoters

12.6.1 Associations

There are numerous state as well as non-governmental associations that care for deaf and hard-of-hearing people in Poland. Their expertise ranges from medical and physiological topics, through sport and education, to cultural matters. The oldest and most visible outside of Poland is Polski Związek Głuchych (PZG) [Polish Deaf Association, www.pzg.org.pl], established back in 1946. PZG is a member of both the World Federation of the Deaf and the European Union of the Deaf. Alongside PZG, which is by far the biggest organization for the deaf in the country, other associations have also appeared. They are often orientated towards groups of people with shared interests and, by promoting a wide range of activities, they have been able to increase their membership substantially.

In recent years, new organizations have been founded, such as Polska Sekcja Młodzieży Głuchej (PSMG) [Polish Section of Deaf Youth, www.psmg.info.pl], which generally promotes Deaf culture and, more specifically, motivates and helps young deaf people to organize cultural events and conferences. Other associations are Polska Federacja Sportu Niesłyszących (PFSN) [Polish Federation of Sport for the Deaf, www.pfsn.pl], Fundacja Promocji Kultury Głuchych (KOKON) [Foundation Promoting Deaf Culture, www.fundacjakokon.pl] and Stowarzyszenie Tłumaczy Polskiego Języka Migowego (STPJM) [Association of Polish Sign Language Interpreters, www.stpjm.org.pl]. These new associations share the common goal of raising awareness regarding Deaf culture, and they all work towards the promotion of Polish Sign Language as the natural language of the Deaf, used not only to communicate among themselves but also with the rest of the population. Hard-of-hearing people can now join the recently established Polska Fundacja Osób Słabosłyszących (PFOS) [Polish Foundation of Hard-of-Hearing People, www.pfos.org.pl], which is working towards raising awareness concerning the hard-of-hearing and supports their active integration within the wider community.

Through their work, these organizations are attempting to overcome some of the misconceptions that still persist concerning the language and culture of the Deaf, as well as trying to encourage their members to make their culture more visible. An example of a joint action among associations was the third conference, Głusi Mają Głos [The Deaf Have a Voice], co-organized by KOKON, the Institute of Sign Language and the Section for Sign Linguistics. At the conference, which took place in Warsaw on 27–28 August 2011, all the presentations were delivered in PSL and simultaneously interpreted into Polish, whilst lip speakers repeated the words of the delegates so that hard-of-hearing attendees could easily understand what was taking place. It was an event that acted as a platform enabling three different groups of people – the hearing, the deaf and the hard-of-hearing – to exchange views and ideas. The Culture without Barriers foundation is the biggest organization in Poland to be working specifically to bring culture closer to those with sensory impairments.
On 3–4 September 2012, this foundation organized a forum where representatives of organizations for deaf and blind people, together with professionals and academics, could discuss issues regarding accessibility to audiovisual media for the sensory impaired.

12.6.2 Academia

As Czajkowska-Kisil (2006) notes, there are still many misconceptions associated with PSL, such as the belief that it is only a set of gestures lacking any grammatical cohesion, that it is merely a signed version of Polish phonic language or that it is not elaborate enough to express complex concepts. The lack of a written representation of PSL further adds to the poor knowledge that the general public has of sign languages. Only relatively recently have scholars in countries like the UK, Australia and the Netherlands started to collect corpora that can be of help in analysing the structure of sign languages. In 2010, a team from the University of Warsaw and the Institute of Polish Sign Language began work on a project involving the compilation of a PSL corpus (Bezubik 2011). They record conversations in PSL and analyse them using specialized software to describe and translate them into Polish, enabling linguists who do not know PSL to benefit from the corpus. This is pioneering research on a national scale and is contributing to the investigation of the linguistics of sign languages globally.

Another valuable use of the information gained from studying the PSL corpus might be in the preparation of SDH, an approach already trialled in Spain, where researchers have tried to include some characteristics of Spanish Sign Language (SSL) in the formulation of SDH. They analysed some of the rules shared between SSL and spoken Spanish and tried to incorporate them into SDH so that subtitles would be more readily understood by native users of SSL (Pereira 2010). The researchers involved in this project attempted to reflect the syntactical and lexical features of SSL in subtitles by following the chronological order of events and presenting information from the most general to the most specific. They also tried to maintain a subject-verb-object structure, to indicate space and time at the beginning of a sentence and, at a lexical level, they suggested using more common synonyms or an adjective in place of an abstract noun (ibid.). This kind of research shows how subtitles might be adapted to suit the needs of deaf people.

The University of Warsaw is the leading institution regarding Polish research into SDH, with Dr Agnieszka Szarkowska leading the research group ‘AVT Lab’ (http://avt.ils.uw.edu.pl/), which focuses on accessibility to the media.
With its studies on the reception of SDH, this group has made substantial contributions to research in this field. One of the most significant studies was the project Digital Television for All in Poland – part of the pan-European project Digital Television for All (DTV4ALL) – which was conducted collaboratively by the University of Warsaw and the Warsaw School of Social Sciences and Humanities. It is the largest study of this kind to be carried out in Poland to date and the results have provided valuable insight into the subtitling preferences of deaf and hard-of-hearing audiences. Another project conducted by the group was a reception study on SDH in multilingual films. The AVT Lab, in collaboration with the Interdisciplinary Centre for Applied Cognitive Studies at the School of Social Sciences and Humanities in Warsaw, also carried out research on SDH on digital television. Using eye-tracking equipment, this project examined how people who had lost their hearing at different stages of language acquisition read verbatim and simplified subtitles, and which parts of the subtitles (colloquial speech and forms of address, for example) were more or less significant to each group of respondents. The purpose of the study was to analyse the application of verbatim or simplified subtitles to different audiovisual genres. The details of all the AVT Lab projects can be found at http://avt.ils.uw.edu.pl/en/sdh.

12.7 Conclusion

Even though issues relating to accessibility to the media for the deaf and hard-of-hearing in Poland have developed relatively slowly over the years, recent changes to legal regulations seem to promise a brighter future. After many years of watching subtitled and signed programmes only on public service television, people with hearing loss can now access audiovisual material on private TV stations. There is also an increase in accessibility services in cinemas, on DVDs and on the Internet. The recognition of PSL as the natural language of the Deaf, as well as action taken by the deaf themselves – for example through consultations with the National Broadcasting Council – are the first steps towards discontinuing programmes offering SP and replacing them with those providing PSL on television and elsewhere. These consultations with broadcasters and representatives of the target groups will hopefully result in the publication of a document stipulating a gradual increase in the percentage of subtitled and signed programming on television in an attempt to satisfy the needs of deaf and hard-of-hearing communities. Increased research in connection with PSL, as well as SDH, is adding to a growing awareness of the issues resulting from hearing loss amongst a wider public.

It is worth highlighting that all the different stakeholders described in this chapter have benefited greatly from the accelerated development that social media have experienced in recent years. This evolution is definitely contributing to more instant communication and to the spread of information among all the parties concerned, as well as to publicizing ideas concerning the lives of the deaf and hard-of-hearing. It is also a tool that helps providers, promoters and legislators publicize their actions, encouraging the target audience to take an active part in them and comment on new developments.
A recent example is the initiative taken by the Culture without Barriers foundation, which, using social media tools, has managed to gather professionals, academics, associations and target audiences at a forum where all the parties had a chance to exchange observations and discuss ways of improving accessible services for the sensory impaired in Poland. In order to ensure the continued cooperation of the participants in the forum, the organizers have created a mailing list and are in the process of designing a website that will include up-to-date information related to SDH and AD. The organization of this joint event shows that the associations and agencies involved in the welfare of people with sensory impairments are increasingly willing to cooperate with one another, acknowledging each other’s strengths and realizing that they all share the same objective, namely, to improve the social integration of their members and to make sure that the wider public is aware of their needs, strengths and potential.


13
Voiceover as Spoken Discourse
Agata Hołobut

13.1 Introduction

Despite its prevalence in news programmes and on documentary channels, voiceover translation has long failed to make itself heard in audiovisual translation (AVT) studies (Franco et al. 2010; Orero 2004; Woźniak 2008). This academic neglect has recently begun to be remedied by the publication of the first monograph on this AVT modality, with its specific focus on non-fictional programmes (Franco et al. 2010). The post-Soviet voiceover translation of fiction, however, still remains uncharted territory. Drawing a map of this territory would enrich both genre- and method-specific approaches to AVT. It would require a joint effort from Russian, Polish, Lithuanian, Latvian, Georgian, Ukrainian and Bulgarian cartographers, exploring the norms of voiceover translation and delivery specific to each country, with single or multiple voice artists interpreting the lines with different levels of emotional involvement. Unfortunately, only a handful of enthusiasts have registered an interest in this type of project to date. In Eastern Europe, despite the growing popularity of AVT studies, ‘little or no research’ is being done into voiceover practices (Grigaravičiūtė and Gottlieb 1999: 45–46). As far as Poland is concerned, the few publications dealing with this modality to date include Bogucki (2004), Garcarz (2006a, 2006b, 2007), Garcarz and Widawski (2008), Szarkowska (2009), Tomaszkiewicz (2008) and, most notably, Woźniak (2008, 2012), who actively champions its in-depth investigation. Generally, however, this AVT modality has either been ignored (Garcarz 2007: 139) or accused of ugliness and anachronism (Woźniak 2008, 2012), despite its omnipresence and long-standing tradition. Indeed, the voiceover translation of foreign films emerged on Polish national television at the end


of the 1950s (Kozieł 2003: 40), following the Russian Gavrilov model (Bogucki 2004), and has dominated the small screen ever since, being used in both fictional and non-fictional programmes. In fiction, a flat male voice ‘interprets the lines of the entire cast’ while ‘the volume of the original soundtrack is turned down’ (Gottlieb 1998: 246). In non-fiction, unlike in fiction, either male or female voice artists are heard. They are usually professional actors or radio journalists (Garcarz 2007: 144), expected to combine perfect diction with perfect indifference to the interpreted material. Since the 1980s, voiceover translation has expanded beyond television: it has conquered the video market and become a standard option on DVDs, along with subtitles. A mystery to foreign observers, it remains popular with Polish audiences, who – according to the surveys discussed in Garcarz (2007: 131) – prefer it to subtitles (favoured by only 8.1 per cent of respondents in 2002 and by 5 per cent in 2005) and value it as highly as dubbing (favoured by 43 per cent in 2002 and 46 per cent in 2005).

13.2 Fictional dialogue as conversational discourse

The Poles’ regard for voiceover translation may imply that it has some merits, uncharacteristic of subtitling or dubbing, that so far remain completely unexplored. To make up for this lack, I conducted a preliminary case study aiming to characterize the Polish voiceover translation of fiction. The focus is specifically on its capacity to retain the conversational features of film dialogue. The issue has already been raised and discussed in the fields of subtitling (Assis Rosa 2001; Guillot 2008) and dubbing (Baños-Piñero and Chaume 2009; Pavesi 2008). It seemed pertinent to voiceover research too, as I suspected that this capacity might account for the modality’s appeal among the Polish audience. Audiovisual products model the fictional reality of our off-screen experience. Dialogue plays an important part in the process, imitating face-to-face interactions. In many film and television genres it can be described in terms of ‘prefabricated orality’ (Baños-Piñero and Chaume 2009), because it captures and tames the ‘raw realities of spoken discourse’ (Leech and Short 2007: 129). Linguists list several principles distinguishing these realities from the processed realities of writing. A detailed analysis of the problem can be found in Biber et al. (1999: 15), who describe conversational registers as very different from written registers such as fiction, news and academic discourse. Each register is defined by distinct situational characteristics with their own


functional and formal correlates (ibid.). Conversation is characterized by its spoken mode, direct interaction, spontaneous production, shared immediate situation and personal range of topics (ibid.: 16). These traits influence its typical linguistic make-up, which leans towards grammatical reduction (ellipsis and a preference for simple and compound structures, for example). Interactive and unspecific, conversation abounds in pronominal forms, hedges, discourse markers, stance adverbials, response forms and interjections. Being spontaneous, it often involves hesitations, repetitions and false starts; and being personal, it often employs an informal style, rich in colloquialisms, expletives and lexical bundles (ibid.: 1038–50). Composed on the fly, it seems ‘messy’ and ‘formless’ (Leech and Short 2007: 129) in comparison with most written communication. Fictional dialogue is like everyday conversation only in some respects. As Kozloff (2000: 19) emphasizes, it is ‘scripted, written and rewritten, censored, polished, rehearsed, and performed’, with the customary ‘hesitations, repetitions, digressions, grunts, interruptions, and mutterings of everyday speech’ either ‘pruned away’ or ‘deliberately included’. These deliberately included conversational features of fictional dialogue certainly deserve the attention of translators. However, their recreation in the target language may be unwelcome or even impossible because of the specific constraints of the various AVT modalities. Subtitling and voiceover open up different possibilities in this respect. In subtitling, speech is transformed into writing in synchronous captions. Space and time permitting, these may reflect selected conversational features verbally and graphically. Non-synchronous voiceover superimposes speech on speech, encrusting the original with its translation.
Unrestricted by lip synchrony (Szarkowska 2009: 185–6), voiceover translation can retain certain conversational features, within the limits of the voice talent’s low-profile, monophonic performance.

13.3 Research question and methodology

To investigate how Polish translators approach the ‘prefabricated orality’ of audiovisual dialogue, especially in voiceover translation, this study sets out to compare the voiced-over and subtitled versions of two episodes of Desperate Housewives Season II: ‘Next’ and ‘That’s Good, That’s Bad’, both directed by Larry Shaw in 2005. Famous for its intricate plot and vibrant dialogue, the TV series seemed to be promising research material. Aired by ABC from 2004 to 2012, Desperate Housewives has been acclaimed by both critics and international audiences. A rich


blend of comedy, mystery and drama, the story follows the adventures of four neighbours – Bree Van de Kamp (Marcia Cross), Susan Mayer (Teri Hatcher), Lynette Scavo (Felicity Huffman) and Gabrielle Solis (Eva Longoria) – who live in the luxurious suburbs of the imaginary US town of Fairview. All eight seasons of the series were broadcast on Polish television and subsequently released on DVD. The edition analysed was produced by Touchstone Television and distributed in Poland by Buena Vista Home Entertainment and Imperial CinePix as Gotowe na wszystko. Kompletny sezon drugi. It contains two target language versions, adapted to the requirements of voiceover and subtitling respectively. The episodes chosen were translated for voiceover by Agnieszka Kamińska and revoiced by Andrzej Matul. The aim of comparing the voiced-over and subtitled versions is to find out how each capitalizes on its potential to reflect the original conversational register, given the specificities of each modality. Particular attention will be paid to the oral features of voiceover translation, delivered through the spoken mode. The original dialogues, the voice artist’s lines and the Polish subtitles were transcribed for this purpose. The transcripts were then divided into utterances, i.e. dialogue turns with recognizable pragmatic functions. A total of 1,164 utterances were identified in the original version, 972 in the voiced-over version and 1,141 in the subtitles. A parallel corpus was then created for each of the target versions, matching the original utterances with their corresponding Polish renditions. Subsequently, a comparative analysis of the material was conducted, focusing on specific features of the conversational register, as shown in Table 13.1. These combine the situational and formal characteristics of conversational register highlighted by Biber et al. (1999) with the functional characteristics identified by Edmondson and House (1981):

Table 13.1 Characteristics of conversational register considered in the analysis

Situational characteristics | Functional characteristics | Formal characteristics analysed
online production | dysfluency | retrace and repair sequences
online production | add-on strategy | preference for simple and paratactic structures; conjunctions and coordinators in sentence-initial positions
direct interaction | discourse management | starters (well, now, okay, yes); uptakers (yeah, okay, all right, oh); clarifiers (I mean, you know, you see, look); appealers (question tags, okay, right); vocatives and address forms
direct interaction | pragmatics of interaction | verbal shape of greets, welcomes, leave-takes, apologies, thanks, requests, commands, threats, suggestions, opines
shared context | grammatical reduction | ellipsis
shared context | context dependence, avoidance of meaning specification | deictic items (this, that, these, those, now, here); substitute proforms (things, doing)
personal communication | informal style | colloquialisms, expletives, idioms, lexical bundles
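The corpus-building procedure described in this section can be sketched in code. The snippet below is not part of the original study; it is a minimal, hypothetical illustration (with invented toy data loosely modelled on Example 1) of how transcribed dialogue turns might be aligned into a parallel corpus and how an utterance retention rate could be computed for each target version. The chapter itself reports 1,164 original, 972 voiced-over and 1,141 subtitled utterances.

```python
# Illustrative sketch (not from the study): align transcribed dialogue turns
# into a parallel corpus and compute how many original utterances receive
# a rendition in each target version.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Utterance:
    speaker: str
    text: str

@dataclass
class AlignedUnit:
    original: Utterance
    voiceover: Optional[str]   # None when the voice artist stays silent
    subtitle: Optional[str]    # None when the caption omits the turn

def retention(corpus: list[AlignedUnit], version: str) -> float:
    """Share of original utterances rendered in the given target version."""
    rendered = sum(1 for unit in corpus if getattr(unit, version) is not None)
    return rendered / len(corpus)

# Toy parallel corpus of three aligned units (invented data).
corpus = [
    AlignedUnit(Utterance("Susan", "Isn't there any way..."),
                voiceover="Nie możemy...", subtitle="Czy nie możemy jakoś..."),
    AlignedUnit(Utterance("Mike", "No. No, I... so sorry."),
                voiceover="Nie. Przykro mi.", subtitle="Nie. Nie. Przykro mi."),
    AlignedUnit(Utterance("Mike", "I'll... I'll, uhm, I'll see you around."),
                voiceover=None, subtitle="Do... do zobaczenia."),
]

print(f"voiceover retention: {retention(corpus, 'voiceover'):.0%}")  # 67%
print(f"subtitle retention:  {retention(corpus, 'subtitle'):.0%}")   # 100%
```

On real data, each `AlignedUnit` would be produced by manually matching a transcribed original turn with its (possibly absent) voiced-over and subtitled renditions, exactly as the chapter describes.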

13.4 Analysis

It should be noted that the sample will first be analysed qualitatively, paying special attention to the differences between the two Polish representations of the original spoken discourse. It is hoped that the analysis will lead to the identification of trends specific to each modality and that it will reveal the tricks of the trade of the voiceover translation of fictional programmes. These observations will subsequently be verified by an additional quantitative analysis of the transcripts. The results are presented below in four sections, each focusing on one of the situational characteristics of the original dialogues.

13.4.1 Spontaneity and online production

One of the defining factors of conversational register is online production. As Biber et al. (1999: 1066) point out, conversation ‘takes place in real time and is subject to the limitations of working memory’, which influences its typical linguistic make-up. Speakers avoid elaborate grammatical structures and opt for syntactic minimalism. Miscalculations result in


hesitations and false starts, which are examples of dysfluencies. Utterances are mostly composed of single clauses or clause-like forms, or based on juxtaposition and coordination, thus reflecting what Biber et al. (ibid.) call the ‘add-on’ strategy of syntactic organization. To facilitate production and processing, speakers accumulate clause-like chunks, each expressing a single idea, and connect them by means of simple coordinators (and, but) and simple conjunctions (because and so). Altogether, the grammar of conversation is ‘dynamic’, as it is produced and interpreted instantaneously, while the grammar of writing is ‘architectural’, since it can be elaborated and interpreted over an extended period of time (ibid.: 1067). In the original English dialogue, the characters of Desperate Housewives appear to be spontaneous, because they often fumble for words, making false starts, hesitating and using repetitions. They also apply the above-mentioned ‘add-on’ strategy, creating linear, paratactic structures. These phenomena elicit different translation responses, as shown in Table 13.2.

Table 13.2 Dysfluency in the original and translated dialogues

Dysfluency | Original | Voiceover | Subtitles
Unfinished sentences | 7 | 3 (43%) | 7 (100%)
Repeats following hesitation pauses | 11 | – | 2 (18%)
Repeats for emphasis | 10 | – | 7 (70%)
Total | 28 (100%) | 3 (11%) | 16 (57%)

The analysis shows that subtitles carefully reproduce dysfluency, marking hesitations by means of ellipsis, retaining all the unfinished sentences and some repeats. Voiceover, by contrast, seems more fluent and resorts to dysfluent features only to show the speaker’s emotions, as illustrated in example 1. Generally, when the characters hesitate or search for words, the voice artist remains silent. His performance is therefore organically incorporated into the original interactions, whereas the subtitles mirror them more closely.

Example 1
Original: - Isn’t there any way … - No. No, I … so sorry. I’ll … I’ll, uhm, I’ll see you around.
Voiceover: - Nie możemy … [Can’t we …] - Nie. Przykro mi. [No. I’m sorry.]
Subtitles: Czy nie możemy jakoś … Nie. Nie. Przykro mi. Do … do zobaczenia. [Can’t we somehow … // No. No. I’m sorry. // See … See you around.]

When it comes to incremental paratactic structures, both target versions tend to break them into simple sentences, as shown in example 2. They differ, however, in their approach to connectors (and, but, so and because), which are omitted more frequently in the voiced-over version (see Table 13.3). The voiceover translation generally relies on the simple juxtaposition of ideas, keeping logical connections implicit and utterances as short as possible. The subtitles, however, mark causality and contrast more overtly, as shown in Table 13.3.

Table 13.3 Use of connectors in the original and translated dialogues

Coordinators and conjunctions | Original | Polish equivalents | Voiceover | Subtitles
but | 64 | ale, jednak, a | 44 (69%) | 54 (84%)
so | 22 | więc, dlatego | 3 (14%) | 8 (36%)
because | 20 | bo, ponieważ | 8 (40%) | 11 (55%)
and | 105 | i, oraz, a | 32 (30%) | 62 (59%)
Total | 211 (100%) | | 87 (41%) | 135 (64%)
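The percentages in Table 13.3 are straightforward retention ratios (44 of the 64 occurrences of but kept in voiceover is roughly 69 per cent). Purely as a hypothetical illustration of the arithmetic, the table’s figures can be recomputed from the raw counts reported in the chapter:

```python
# Recompute the retention percentages of Table 13.3 from raw counts.
# The counts come from the chapter; the code itself is only illustrative.

counts = {          # connector: (original, voiceover, subtitles)
    "but":     (64, 44, 54),
    "so":      (22, 3, 8),
    "because": (20, 8, 11),
    "and":     (105, 32, 62),
}

for connector, (orig, vo, sub) in counts.items():
    print(f"{connector:8} voiceover {vo / orig:4.0%}   subtitles {sub / orig:4.0%}")

# Column totals reproduce the table's bottom row.
total_orig = sum(o for o, _, _ in counts.values())   # 211
total_vo = sum(v for _, v, _ in counts.values())     # 87  -> 41%
total_sub = sum(s for _, _, s in counts.values())    # 135 -> 64%
print(f"total    voiceover {total_vo / total_orig:4.0%}   subtitles {total_sub / total_orig:4.0%}")
```

Rounding each ratio to the nearest whole per cent yields exactly the figures printed in the table (69%, 14%, 40%, 30% for voiceover; 84%, 36%, 55%, 59% for subtitles; 41% and 64% overall).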

Coordinate clauses are often rendered as several simple clauses in subtitles, but they tend to retain the connectors in sentence-initial position, as illustrated in examples 2 and 3.

Example 2
Original: George, I know what you did to Dr. Goldfine and I can see now just how sick you really are, so please just turn yourself in, and that way you can get the help you really need.
Voiceover: Wiem, że to ty napadłeś na doktora Goldfine’a. Jesteś chory. Oddaj się w ręce policji. Pomoc jest ci bardzo potrzebna. [I know that it was you who attacked Dr. Goldfine. You’re sick. Turn yourself in. You badly need help.]
Subtitles: George, wiem, co zrobiłeś doktorowi Goldfine’owi. Widzę teraz, że jesteś naprawdę chory. Proszę, oddaj się w ich ręce i może będą ci mogli udzielić odpowiedniej pomocy. [George, I know what you did // to Dr. Goldfine. // I can see now, // that you’re really sick. // Please, turn yourself in // and perhaps they will be able // to offer you appropriate help.]


Example 3
Original: And in exchange I get your silence?
Voiceover: W zamian gwarantujesz milczenie? [In exchange you guarantee silence?]
Subtitles: I w zamian za to będziesz milczeć? [And in exchange for that you will keep silent?]

Another significant difference between the two modalities under examination lies in the syntactic complexity of dialogue lines. Voiceover translation avoids elaborate structures that remind us of the ‘architectural’ grammar of writing. Simple utterances usually take a non-clausal, elliptical form, whereas composite utterances juxtapose or coordinate units of meaning, facilitating their delivery by the voice talent. This dynamic flow often results from simplifying and reorganizing the syntax of the original, for example by means of fronting, which is characteristic of spoken discourse in Polish (Bartmiński and Niebrzegowska-Bartmińska 2009: 105–6). The examples provided below show the differences between the voiceover version, which employs front focus and intonational stress to highlight information, and the subtitles, which prefer the end focus effective in written discourse.

Example 4
4a
Original: I look like hell. I need a hairbrush.
Voiceover: Okropnie wyglądam. Muszę się uczesać. [Terrible I look. I need to brush my hair.]
Subtitles: Wyglądam okropnie. Muszę uczesać włosy. [I look terrible. // I need to brush my hair.]
4b
Original: He had a gun in my face for six hours.
Voiceover: Sześć godzin trzymał mnie na muszce. [Six hours he held me at gunpoint.]
Subtitles: Trzymał mnie na muszce przez sześć godzin. [He held me at gunpoint // for six hours.]
4c
Original: Ed’s been known to fire people for that sort of thing.
Voiceover: Za coś takiego Ed wyrzuca z pracy. [For a thing like that Ed fires people.]
Subtitles: Ed zwalnia ludzi za takie rzeczy. [Ed fires people for such things.]

By contrast, the elaborate structures acceptable in written registers are not avoided in the subtitles. They sometimes complicate the original dialogue, employing multiple complex sentences, inversions, participial clauses and de-verbal nouns, typical of academic discourse. They also use literary and archaic conjunctions and particles, such as aby [in order to], lub [or], bowiem [because], gdy [while] or wraz [along with], which are hardly ever employed in voiceover. These tendencies are illustrated in the examples given below, with example 5a demonstrating the use of a literary conjunction (aby), and 5b the use of an inversion and complex structures in the subtitles. Example 5c shows the use of a participial clause uncommon in conversation, and examples 5d and 5e present ‘architectural’ structures with de-verbal nouns. Moreover, the quoted captions manifest lexical sophistication (see section 13.4.4), which contrasts sharply with the simplicity of voiceover.

Example 5
5a
Original: So it looks like Mary Alice killed Zach’s birth mother in order to keep her from taking Zach away.
Voiceover: Mary Alice zabiła matkę Zacha, kiedy ta chciała odebrać jej dziecko. [Mary Alice killed Zach’s mother, when she tried to take the baby away.]
Subtitles: Wygląda na to, że Mary Alice zabiła biologiczną matkę Zacha, aby ta nie mogła go im odebrać. [It seems that Mary Alice // killed Zach’s biological mother // so that – literary – she wouldn’t be able to take him away from them.]
5b
Original: Well, you’re not really required, but it’s something you should really consider when a handgun is involved.
Voiceover: Nie ma takiego obowiązku, ale radziłbym to zrobić. [There’s no obligation, but I would recommend it.]
Subtitles: Nie jest to obowiązkowe, ale z uwagi na to, że w grę wchodzi broń, należałoby to rozważyć. [It isn’t – inversion – obligatory // but on account of the fact that a handgun // is involved, one should consider that.]
5c
Original: All this time my father has been right across town running the feed store?
Voiceover: Mój ojciec żyje i prowadzi sklep? [My father lives and runs the store?]
Subtitles: Przez cały czas mój ojciec mieszkał w tym mieście, prowadząc sklep? [All that time my father has been living // in this town running the store?]


Example 5 (continued)
5d
Original: Nice is a luxury that I gave up along with vacations and relationships and eating at home.
Voiceover: Miłe zachowanie to luksus, z którego musiałam zrezygnować, tak jak z urlopu, rodziny, domowych posiłków. [Nice behaviour is a luxury I had to give up, just like holidays, family, home meals.]
Subtitles: Bycie miłą to luksus, którego się wyrzekłam wraz z wakacjami, związkami i jedzeniem w domu. [Being nice is a luxury // I renounced along // with – literary – holidays, // relationships and eating at home.]
5e
Original: It’s just that it’s gonna be my job to evaluate those who are team players and those who are not.
Voiceover: Do mnie należy ocena, kto się nadaje do naszego zespołu. [It is my task to judge who’s fit for our team.]
Subtitles: Moim zadaniem będzie wydanie oceny na temat tego, kto potrafi pracować w zespole, a kto nie. [My task will be // to issue an assessment on the question // of who is able to work in a team and who isn’t.]

Regarding the features associated with online production, then, each version recreates different aspects of conversational spontaneity. The subtitles contain elements of dysfluency, with false starts and repeats, and use linking words typical of conversation, but they sometimes resort to convoluted structures typical of writing. By contrast, the utterances in the voiceover translation are drier, with the verbal links between them removed, but they retain the grammatical simplicity of a conversational register.

13.4.2 Interactivity

Another feature characteristic of conversation is its immediate interactivity: it is ‘co-constructed by two or more interlocutors, dynamically adapting their expression to the ongoing exchange’ (Biber et al. 1999: 1045). Each utterance has a dual function, revealing the speaker’s ideas, attitudes and intentions (it has an illocutionary value) and contributing to the ongoing exchange (it has an interactional value) (Edmondson and House 1981: 36). Describing the typical illocutionary values of utterances, that is, communicative intents expressed by speakers, scholars distinguish several categories: requests, suggestions, complaints, excuses, forgiving, thanks, etc. (ibid.: 98). These intents can be realized by various verbal and nonverbal means.


In order to achieve their communicative goals, speakers adjust their stylistic choices to their recipients’ needs and also monitor the conversation by means of gambits, that is, elements of discourse that ‘lubricate’ the exchange and serve as ‘time-gaining routines during speech production’ (House 1997: 81). Edmondson and House (1981: 61–5) distinguish several types of these short expressions: starters, which signal the speaker’s willingness to take the floor (well, oh); uptakers, which acknowledge the preceding utterance (yes, yeah, uhm, oh, great); cajolers, which secure sympathy (I mean, you know, you see); underscorers, which highlight the point being made (look, listen); and appealers, which invite the hearer’s agreement (question tags, right, okay). Hence, spoken discourse abounds with devices that help co-construct and negotiate meaning: discourse markers, response forms, vocatives and address forms (Biber et al. 1999: 1046–7). Screenwriters aiming to achieve conversational realism need to recreate the dynamics of typical face-to-face interaction, both in terms of utterance and exchange structure. The screenwriters of Desperate Housewives handle this aspect with real mastery. They adjust the style of particular illocutions to the characters’ social and psychological status, and they also mimic typical techniques of discourse management, employing a wide variety of gambits. In order to find out how the Polish translators dealt with dialogue interactivity, the translation of gambits and vocatives was analysed in the voiced-over and subtitled versions. The illocutionary value of the English and Polish utterances was then examined, focusing on differences in the expression of communicative intents. Gambits are generally omitted in both versions, being maintained in only 8 per cent of cases in the voiceover and in 30 per cent in the subtitles, as shown in Table 13.4:

Table 13.4 Use of gambits in the original and translated dialogues

Gambit types | Examples | Original | Polish equivalents | Voiceover | Subtitles
Starters | well, now | 46 | cóż, no | 2 (4%) | 5 (11%)
Uptakers | yeah, okay, all right, oh | 51 | tak, dobrze, aha | 5 (10%) | 24 (47%)
Cajolers | I mean | 11 | znaczy | 0 | 0
Cajolers | you know, you see, you gotta understand | 7 | no wiesz, musi pan wiedzieć | 3 (43%) | 4 (57%)
Underscorers | see, look, you know what | 9 | posłuchaj | 0 | 2 (22%)
Appealers | question tags, okay, right | 7 | co, mam rację, prawda? | 1 (14%) | 4 (57%)
Total | | 131 (100%) | | 11 (8%) | 39 (30%)

The subtitles mark the characters’ conversational cooperation more overtly, retaining almost half of the uptakers and most of the appealers, as shown in example 6:

Example 6
6a
Original: Oh, good. Good.
Voiceover: To dobrze. [That’s good.]
Subtitles: Och, to dobrze, to dobrze. [Oh, that’s good, that’s good.]
6b
Original: We’re not very nice people, are we?
Voiceover: Nie jesteśmy fajnymi ludźmi. [We aren’t nice people.]
Subtitles: Nie jesteśmy dobrymi ludźmi, co? [We aren’t nice people, are we?]

A similar tendency is identified in the analysis of vocatives, used in the original dialogue 88 times, mostly to attract attention and indicate a particular emotional stance towards the interlocutor. The subtitled and voiced-over versions differ significantly in their treatment of these, as illustrated in Table 13.5.

Voiceover as Spoken Discourse 237

Table 13.5 Use of vocative types in the original and translated dialogues

Vocative type | Examples | Original | Polish equivalents | Voiceover | Subtitles
Family names | Mike, Mrs. Scavo | 59 | Mike, Pani Scavo | 13 (22%) | 37 (63%)
Familiarizers | ladies, guys, folks, everybody | 7 | Panie | 0 | 1 (14%)
Endearments | honey, baby, sweetie | 9 | skarbie, kochanie, złotko | 1 (11%) | 6 (67%)
Occupational/status vocatives | Reverend, sister, Doctor Goldfine, operator | 8 | pastorze, siostro, doktorze, central | 4 (50%) | 5 (62%)
Family terms | Mommy, mom | 5 | mamusiu, mamo | 1 (20%) | 2 (40%)
Total | | 88 (100%) | | 19 (22%) | 51 (58%)

Whereas the voiced-over version retains 22 per cent of the original examples, omitting those which are audible on screen, the subtitles preserve every second vocative, possibly to facilitate the ascription of the captioned utterances to particular speakers and addressees. Thus, they seem more overtly interactive than the voiceover translation. However, they often fail to sound natural, owing to foreign address forms calqued from the original in the same syntactic position. In example 7a the most sophisticated character, Bree, addresses her neighbour as Pani Applewhite (Mrs. Applewhite), although the combination of an honorific title with a surname is considered plebeian by Polish standards (Szarkowska 2006: 215). Similarly, on several occasions the subtitles place the vocative in final position, although these units are typically used in initial position or omitted altogether in the voiceover version, as shown in examples 7b and 7c:

Example 7

7a
Original: Mrs. Applewhite, hi.
Voiceover: (omitted)
Subtitles: Dzień dobry, pani Applewhite. [Good morning, Mrs. Applewhite.]

7b
Original: Could you change the attitude, Nina?
Voiceover: Nina, może zmienisz ton? [Nina, maybe you change the tone?]
Subtitles: Możesz zmienić nastawienie, Nina? [Can you change the attitude, Nina?]

7c
Original: John! How did you get in here?
Voiceover: Jak tu wszedłeś? [How did you get in here?]
Subtitles: Jak się tu dostałeś, John? [How did you get here, John?]
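The positional contrast in examples 7b and 7c can be checked mechanically. A toy sketch, assuming one utterance per line; the function `vocative_position` and the sample strings are illustrative only, not part of the study's method:

```python
import re

def vocative_position(line: str, name: str) -> str:
    """Classify where a vocative (here, a first name) sits in an utterance:
    'initial', 'final', 'medial', or 'absent'."""
    # Tokenize on word characters, dropping punctuation such as the
    # final question mark before testing the last position.
    tokens = re.findall(r"\w+", line)
    if name not in tokens:
        return "absent"
    if tokens[0] == name:
        return "initial"
    if tokens[-1] == name:
        return "final"
    return "medial"

# The calqued subtitle keeps the English final position (example 7b),
# while the voiceover moves the vocative to the front.
print(vocative_position("Możesz zmienić nastawienie, Nina?", "Nina"))  # final
print(vocative_position("Nina, może zmienisz ton?", "Nina"))           # initial
print(vocative_position("Jak tu wszedłeś?", "John"))                   # absent
```

Tallying such labels over a whole transcript would give the positional distribution that the manual analysis reports here.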


These examples suggest that the ostensible interactivity may have been achieved mechanically in subtitling, by calquing the original structures. Voiceover, by contrast, seems to abide by the rules of Polish conversational discourse: although most gambits and vocatives are omitted, those that are actually used always conform to target culture norms.

Gambits and vocatives obviously contribute to the realism of film dialogues. Yet a successful imitation of typical face-to-face interaction requires above all a convincing expression of the protagonists' communicative intent. The realization of the speech acts performed by the characters in the Polish versions was analysed with this in mind. I concentrated first on ritual illocutions, such as greetings, welcomes, okays and leave-takings, which are highly predictable and formulaic (Edmondson and House 1981: 59–60) and do not leave much room for creativity in translation. The focus then shifted to attitudinal and informative illocutions, such as requests, suggestions, commands, threats, complaints, disclosures and opinions, which can take various stylistic forms. The transformations they underwent in translation were examined in both cases. The analysis revealed significant differences regarding ritualized forms, as demonstrated in Table 13.6.

Table 13.6 Ritualized illocutions in the original and translated dialogues

Ritual illocutions | Examples | Original | Polish equivalents | Voiceover | Subtitles
greetings | hi, hello, good morning | 18 | cześć; witam; dzień dobry | 0 | 17 (94%)
leave-takings | bye | 1 | do widzenia; do zobaczenia | 0 | 1 (100%)
thanks | thanks, thank you | 21 | dzięki; dziękuję | 9 (43%) | 18 (86%)
okays | yeah, yes, okay, of course, great, sure | 48 | tak; OK; oczywiście; świetnie; super | 16 (33%) | 42 (88%)
disagreements | no | 34 | nie | 12 (35%) | 28 (82%)
apologies, sympathy | sorry, excuse me | 25 | przepraszam; wybacz; przykro mi; współczuję | 15 (60%) | 22 (88%)
Total | | 147 (100%) | | 52 (35%) | 128 (87%)


Most of the ritualized forms included in the original dialogue were omitted in the voiceover translation. This reductionism does not necessarily imply that the interactions were lost to the audience. On the contrary, it could be argued that their absence directed the viewers' attention to the verbal and visual communication on screen. The subtitles, by contrast, retained most of these formulaic expressions, although they were easily recoverable from the context. The difference between the two modalities can be observed in example 8, in which the new neighbours introduce themselves to each other, with all the greetings being omitted in the voiceover version, but retained in the subtitles.

Example 8

Original: - Mrs. Applewhite, hi. I am Bree Van de Kamp, and this is my daughter Danielle. - Hello. - Hi. - Hey, I'm Matthew. - This is my son. - Hi.
Voiceover: - Jestem Bree Van de Kamp, a to moja córka Danielle. [I am Bree Van de Kamp and this is my daughter Danielle.] - Jestem Matthew. [I'm Matthew.] - Mój syn. [My son.]
Subtitles: - Dzień dobry, pani Applewhite. Jestem Bree Van De Kamp. To moja córka Danielle. [Good morning, Mrs. Applewhite. // I am Bree Van de Kamp. // This is my daughter Danielle.] - Witam. [Hello.] - Dzień dobry. [Good morning.] - Witam. [Hello.] - To mój syn. [That's my son.] - Cześć. [Hi.]

When it comes to other speech acts, one can see both similarities and differences. Both target versions share a clear tendency to transform indirect speech acts into direct ones; in other words, they make the characters' implicit communicative intentions more explicit. For example, when the characters use questions to express opinions (9a), make reproaches (9b) or invite guests to the table (9c), both versions transform them into assertions or directives, expressing the communicative intentions more briefly and overtly.

Example 9

9a
Original: Isn't that what you always wanted?
Voiceover: Marzyłeś o tym. [You've dreamt about it.]
Subtitles: Przecież tego chciałeś. [This is what you wanted.]

9b
Original: How many times did you go off on your little business trips and leave me alone?
Voiceover: Ciągle byłeś w rozjazdach. [You were always away.]
Subtitles: Setki razy wyjeżdżałeś w interesach i zostawiałeś mnie samą w domu. [You went on business trips a hundred times // and left me home alone.]

9c
Original: Why don't you all take your seats? Dessert is about to be served.
Voiceover: Usiądźcie, zaraz podam deser. [Take your seats, I will serve the dessert shortly.]
Subtitles: Proszę zająć miejsca. Już podaję deser. [Please, take your seats. // I'm about to serve the dessert.]

A similar device was applied in 31 cases of illocution in the voiceovers and 20 cases in the subtitles, predominantly involving requests, commands, suggestions and opinions. It seems, in these cases, that the translators expressed illocutions more directly to economize, thus observing the spatio-temporal constraints of subtitling and the isochrony requirements of voiceover. At the same time, however, this resulted in a change of tone and thus in blunter, less polite and less sophisticated dialogue. What merits attention is the preference for directives in the voiceover version: most requests, suggestions, commands and pieces of advice are expressed in the imperative form, which is the briefest and most direct form in Polish. This can be clearly seen in examples 10a–10c, where indirect requests and suggestions are transformed into directives, more familiar and unceremonious than the original.

Example 10

10a
Original: I do want you to come to the funeral.
Voiceover: Przyjdź na pogrzeb. [Come to the funeral.]
Subtitles: Chcę, żebyś przyszła na pogrzeb. [I want you to come to the funeral.]

10b
Original: Carlos, wait! Come back. Look, I wanna start over.
Voiceover: Carlos, zaczekaj. Zacznijmy od początku. [Carlos, wait. Let's start over.]
Subtitles: Zaczekaj! Chodź tu. Chcę zacząć od początku. [Wait! Come back. // I want to start over.]

10c
Original: So if you could just slip me someone else's test results, I could make my own. I got Photoshop for Christmas.
Voiceover: Daj mi cudze wyniki, które pokażę jako własne. [Give me someone else's results, which I will show as my own.]
Subtitles: Proszę mi tylko dać wyniki testów kogoś innego, a stworzę sobie swoje. Dostałam na Gwiazdkę Photoshop. [Please, // just give me // someone else's test results // and I will create my own. // I got Photoshop for Christmas.]

These examples clearly demonstrate the most important difference between Polish voiceover and subtitling in their approach to dialogue pragmatics. Voiceover translation opts for simplicity and brevity of expression, rendering the utterances more direct, assertive and categorical. In this version, unlike the original, the voice talent rarely uses the irrealis mood to hypothesize or indicate politeness: the voiceover translation uses this grammatical mood only eight times, whereas the subtitles do so 25 times, as exemplified in 11a and 11b.

Example 11

11a
Original: The kind anyone would want living right next door.
Voiceover: Tacy, o jakich wszyscy marzą. [The kind everyone dreams about.]
Subtitles: Tacy, których każdy by sobie życzył. [The kind anyone would wish.]

11b
Original: So it would really help our marriage if you just backed off for a while, okay?
Voiceover: Będzie lepiej, kiedy siostra się usunie. [It will be better when you back off.]
Subtitles: Nasze małżeństwo skorzystałoby na tym, gdyby nas siostra zostawiła w spokoju. W porządku? [Our marriage would benefit // if you left us alone, Sister Mary. // Is that all right?]

Further grammatical discrepancies distinguishing voiceover from subtitling concern the simplification of the characters' deontic and epistemic stance towards reality. Characters often express their views on what can, should or must be done, using optative verbs (such as wish or want) and modality markers. They also convey various shades of certainty, qualifying their statements by means of cognitive verbs (know, suppose, guess, believe) and conversational hedges (obviously, apparently, kind of, sort of). All these markers are typical of conversational discourse, which is expressive of attitude and averse to the specification of meaning (Biber et al. 1999: 1044–8). In the voiceover version, deontic and epistemic modality markers are often omitted, as shown in example 12c. Optative and cognitive verbs are also consistently avoided and all the hedges omitted, as demonstrated in examples 12a and 12b. Thus, this version portrays the characters as being certain of their social obligations, opinions and facts. The subtitles, in contrast, reflect the nuances present in the original spoken discourse more faithfully, as illustrated in the examples provided:

Example 12

12a
Original: I know. I should probably get my thyroid checked out.
Voiceover: Muszę zbadać tarczycę. [I must have my thyroid tested.]
Subtitles: Wiem. Powinienem pójść na badanie tarczycy. [I know, I should get // my thyroid tested.]

12b
Original: See, I don't think that's why you got fired.
Voiceover: Nie dlatego cię wyrzucili. [This is not why you got fired.]
Subtitles: Nie sądzę, że wylali cię z tego powodu. [I don't think they fired you because of that.]

12c
Original: Well, he may have loved being a father, but your marriage was a disaster.
Voiceover: Był szczęśliwym ojcem, ale wasze małżeństwo to była katastrofa. [He was a happy father, but your marriage was a disaster.]
Subtitles: Może i był oddanym ojcem, ale wasze małżeństwo było katastrofą. [He may have been a devoted father, // but your marriage was a disaster.]

All these shifts reinforce the impression that voiceover involves a more radical reworking of the original dialogues than subtitles, arguably to adapt the script to the voice artist's needs by rendering it as concise as possible. It presents interaction as less cooperative (dispensing with most gambits and vocatives), more direct, assertive and categorical. Concise and fragmentary, the voiceover translation depends on the original audio and images to supplement the characters' portrayal with signals of empathy, concern or doubt. The subtitles, by contrast, retain more overt signals of interaction and, in this respect, they follow the norms of conversational registers more closely.

13.4.3 Shared context

Conversation unfolds in a shared context, which accounts for another of its distinguishing features: grammatical reduction. Sharing the same spatio-temporal and cognitive background, interlocutors economize on words and structures. They seem to prefer pronouns to nouns and use substitute proforms (one/ones or do it/that) and deictic items (this, that, there, then, now) to avoid excessive specification of meaning (Biber et al. 1999: 1042–4). In addition, they often resort to ellipsis (ibid.: 1043), omitting subjects or operators, reducing their verbal exchanges to single-word questions and replies. In Desperate Housewives, the screenwriters resort to these features to make the dialogue concise and dynamic.

The two Polish translations differ in their approach to deictic references and ellipsis. The subtitles retain most deictic markers and proforms, as exemplified in 13b–13e. However, references are explicated occasionally, as illustrated in 13a, with those things described as those cruel things. The voiceover translation, on the other hand, not only mirrors the original, but exceeds it as regards the use of deictic reference. To save space, the translator chooses more concise, context-dependent forms than the screenwriters. This is clearly visible in the examples given below, where those things are rendered as this (13a), and making up this tray as doing that (13b). Indeed, on numerous occasions deictic reference is considered redundant and dispensed with in the voiced-over version, as demonstrated in examples 13c–13e.

Example 13

13a
Original: Well then, why did you say those things to me?
Voiceover: To czemu mi to mówisz? [So why are you telling this to me?]
Subtitles: To dlaczego powiedziałaś te okrutne rzeczy? [So why did you say // those cruel things?]

13b
Original: I make up this tray every night.
Voiceover: Robię to codziennie. [I do that every day.]
Subtitles: Przygotowuję tę tacę co wieczór. [I prepare that tray every evening.]

13c
Original: Oh, these are for you. How sweet.
Voiceover: Dla pani. [For you.] Jak miło. [How nice.]
Subtitles: To dla pani. [That's for you.] Dziękuję. [Thank you.]

13d
Original: Really? When did they start doing that?
Voiceover: Serio? Od kiedy? [Really? Since when?]
Subtitles: Serio? Kiedy zaczęli to robić? [Really? When did they start doing that?]

13e
Original: Okay, obviously, I'm gonna clean that up.
Voiceover: Posprzątam. [I'll clean up.]
Subtitles: Posprzątam to. [I'll clean that up.]

Omitted structures are often made explicit in subtitles, as shown in examples 14a–14c, where the original economy is disregarded. On other occasions, however, the subtitles compensate for these explicitations, exceeding the original number of elliptical utterances by 22 per cent. The voiceovers, on the other hand, use ellipsis much more often than the original. This regularity can be seen in example 14d, which portrays a conversation between Lynette, who attended a job interview with a crying child, and her potential employer, who is late for his plane. In the original, only the last utterance is elliptical (Sorry. Plane.). In the voiceover translation, this applies to every line: the little lady becomes the little [one], a request for more time (Could you just give me two minutes?) is transformed into an assertion (two minutes), and the plane retains its elliptical, context-dependent form. In the subtitles, by contrast, each of these utterances is expanded. On the whole, the number of elliptical utterances revoiced by the voice talent exceeds the original characters' by 66 per cent, including condensed questions, assertions, offers and elliptical statements absent from the original, as illustrated in examples 14e–14h:

Example 14

14a
Original: Hot muffins. Any takers?
Voiceover: Muffinki. Kto chce? [Muffins. Who wants?]
Subtitles: Są chętni na gorące babeczki? [Is there anyone interested in hot fairy cakes?]

14b
Original: Oh, my cell phone.
Voiceover: Komórka! [Mobile!]
Subtitles: Zapomniałam komórki! [I forgot my mobile!]

14c
Original: Here's your tea, Phyllis.
Voiceover: Herbata. [Tea.]
Subtitles: Herbata gotowa, Phyllis. [Tea ready, Phyllis.]

14d
Original: - Any chance we can get that little lady to be quiet? - Not until I change her diaper. Could you just give me two minutes? - Sorry. Plane. Well …
Voiceover: - Można uspokoić małą? [Can you quiet the little?] - Zmienię jej pieluchę. Dwie minuty. [I'm gonna change her nappy. Two minutes.] - Samolot. [The plane.]
Subtitles: - Da się ją jakoś uciszyć? [Is it possible to quiet her somehow?] - Jak jej zmienię pieluszkę. Mogę prosić o dwie minuty? [When I change her nappy. Can I ask for two minutes?] - Nie da rady. Muszę zdążyć na samolot. [Won't manage. I must catch the plane.]

14e
Original: I'm Lynette Scavo. I work here. Who are you?
Voiceover: Lynette Scavo. Pracuję tu. A pani? [Lynette Scavo. I work here. And you?]
Subtitles: Jestem Lynette Scavo. Pracuję tu. A kim pani jest? [I'm Lynette Scavo. I work here. And who are you?]

14f
Original: But that was a stupid promise.
Voiceover: Głupia obietnica. [A stupid promise.]
Subtitles: To była głupia obietnica. [That was a stupid promise.]

14g
Original: It really is a lovely gesture.
Voiceover: Piękny gest. [Nice gesture.]
Subtitles: To naprawdę miły gest. [That's a really nice gesture.]

14h
Original: You are a hypocrite.
Voiceover: Hipokryta. [Hypocrite.]
Subtitles: Jesteś hipokrytą. [You're a hypocrite.]

Thus, the context-dependence characteristic of spoken discourse is exploited more often in the voiceover translation than in the original and in the subtitles. The revoiced version is markedly elliptical and deictic, which brings it closer to face-to-face interaction. In this respect, both the original and the subtitles follow the norms of written discourse more closely: by using fewer elliptical utterances, they spare viewers the need to look for contextual clues in the action on screen and the preceding verbal discourse.

13.4.4 Personal communication: colloquial style

As conversation is a private form of communication, its style is 'overwhelmingly informal' (Biber et al. 1999: 1050), abounding in contractions, colloquial words and expressions, expletives and lexical bundles (ibid.). Since Desperate Housewives portrays characters that are intimately acquainted with each other, their interactions are predominantly informal, and so are their stylistic choices. The analysis shows that both Polish translations acknowledge the original informality overall, substituting some colloquialisms and idioms with functional equivalents and neutralizing others. The versions under examination differ in terms of stylistic consistency and lexical economy. As regards the former, the voiceover translation shows higher consistency and control over neutral or colloquial lexical choices; any departure from this rule is deliberate. For instance, when the characters are markedly sophisticated, as Mrs. Applewhite is in example 15a, or crude, as Susan's father is in example 15b, the translator signals this with a stylistic shift. She thus reflects their psychological and socio-cultural idiosyncrasies with fine-tuned lexical choices and levels of formality, possibly to compensate for the fact that their utterances will all be delivered in the same expressionless voice. The subtitles, by contrast, reveal stylistic inconsistencies that are not justified by the original: while they generally adhere to the colloquialism of the original dialogue, they often depart from it, using formal, literary expressions. Consequently, they seem less sensitive to the stylistic variations of the original and less concerned with the individual portrayal of characters, who sometimes sound excessively colloquial (15a) or overformal and literary (15b–15d) in the subtitles.

Example 15

15a
Original: I would be honoured to play for you. Why don't I bring some sheet music over later and we can plan some selections?
Voiceover: Będę zaszczycona mogąc zagrać. Wspólnie wybierzemy utwory. [I will be honoured to be able to play. We shall make some selections together.]
Subtitles: To będzie dla mnie zaszczyt. Wpadnę później i coś wybierzemy. [It will be an honour. // I'll drop by later and we'll pick something.]

15b
Original: But believe me it was nice to finally get to meet you. You're a lovely woman.
Voiceover: Miło było cię poznać, jesteś fajną babką. [It was nice to meet you, you're a fine babe.]
Subtitles: Ale miło mi było cię w końcu poznać. Jesteś czarującą kobietą. [But it was nice to finally meet you. // You're a charming woman.]

15c
Original: I said I was sorry and I love you!
Voiceover: Przeprosiłam. Powiedziałam, że cię kocham. [I said sorry. I said I love you.]
Subtitles: Przeprosiłam cię, zapewniłam o swojej miłości! [I apologized to you, I assured you // of my love.]

15d
Original: Why do we try to define people as simply good or simply evil?
Voiceover: Czemu lubimy dzielić ludzi na dobrych i złych? [Why do we like to divide people into good and evil?]
Subtitles: Czemu staramy się kategoryzować ludzi jako dobrych lub złych? [Why do we try to categorize people // as good or evil?]

As far as lexical economy is concerned, the voiceover translation tends to render the semantic and stylistic value of the original lexical choices more economically, as demonstrated in examples 16a and 16b; the subtitles, however, do not share a similar economy-driven inventiveness, preferring closer equivalents and periphrases to short, creative substitutes.

Example 16

16a
Original: She gives me all my best ideas.
Voiceover: To moja muza. [That's my muse.]
Subtitles: Moja skarbnica pomysłów. [My treasury of ideas.]

16b
Original: At 7:43 this morning, your husband held two guards at gunpoint and successfully escaped.
Voiceover: Dziś o 7.43 pani mąż sterroryzował dwóch strażników i uciekł. [Today at 7.43 your husband terrorized two guards and escaped.]
Subtitles: O 7.43 dziś rano pani mąż napadł z bronią na dwóch strażników, a potem uciekł. [Today at 7.43 in the morning // your husband attacked // two guards with a gun, // and then escaped.]

These trends towards reduction and retention, typical of voiceovers and subtitles respectively, are also visible in their treatment of expletives (see Table 13.7).

Table 13.7 Expletives in the original and translated dialogues

Expletive type | Examples | Original | Polish equivalents | Voiceover | Subtitles
Taboo expletives | oh, my God; oh, for God's sake; damn; the hell | 9 | na litość boską, na miłość boską, Boże, u diabła | 1 (11%) | 6 (67%)
Moderate expletives | Jeez | 2 | Jezu | 0 | 1 (50%)
Swearwords | bitch; screw up; screw around | 5 | suka, spieprzyć | 1 (20%) | 3 (60%)
Total | | 16 (100%) | | 2 (13%) | 10 (63%)

The voice talent omits almost all taboo references to religion and swearwords, maintaining emotional neutrality and allowing the original interjections to reach the target audience. Still, as a consequence, the characters' verbal interactions seem flatter and more moderate. The subtitles, by contrast, retain most religion-related taboo expletives and selected swearwords, imitating the colloquialism of the fictional dialogue. Despite these omissions, the voiceover translation is more consistent and deliberate in its stylistic choices, combining conciseness with colloquialism. The subtitles seem less premeditated and less concerned with economy of expression, which sometimes results in awkward effects.
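The counts and percentages in Table 13.7 can be cross-checked for internal consistency. A small sketch (figures transcribed from the table; the half-up rounding is an inference from the reported totals, since 2 of 16 is exactly 12.5% but the table prints 13%):

```python
def pct(kept: int, original: int) -> int:
    # Round halves up (12.5 -> 13), rather than using Python's
    # built-in round(), which rounds halves to even (12.5 -> 12).
    return int(100 * kept / original + 0.5)

# category: (original, voiceover kept, voiceover %, subtitles kept, subtitles %)
rows = {
    "Taboo expletives": (9, 1, 11, 6, 67),
    "Moderate expletives": (2, 0, 0, 1, 50),
    "Swearwords": (5, 1, 20, 3, 60),
}

for name, (orig, vo, vo_p, sub, sub_p) in rows.items():
    assert pct(vo, orig) == vo_p, name
    assert pct(sub, orig) == sub_p, name

orig_total = sum(r[0] for r in rows.values())  # 16
vo_total = sum(r[1] for r in rows.values())    # 2
sub_total = sum(r[3] for r in rows.values())   # 10
assert pct(vo_total, orig_total) == 13         # '2 (13%)' in the table
assert pct(sub_total, orig_total) == 63        # '10 (63%)' in the table
print("Table 13.7 counts and percentages are internally consistent")
```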

13.5 Conclusions

The comparison of the Polish voiced-over and subtitled episodes of Desperate Housewives showed significant differences in terms of conversational linguistic features. The voiceover translation showed several conversational characteristics, such as syntactic simplicity and a preference for ellipsis and informal style. In the voiceover, the context-dependence and intimacy of onscreen exchanges were emphasized, whereas spontaneity, tentativeness and interactivity, inferable from the characters' behaviour on screen, remained unstressed. This AVT modality seemed clearly designed to be incorporated organically into the audiovisual programme.

The subtitles, by contrast, focused on other aspects of spoken discourse, reflecting spontaneity and interactivity by means of false starts,


repetitions, hedges, gambits and vocatives. What they failed to stress were the simplicity and reductionism of conversational grammar: compared with the voiceover translation, the subtitles seemed syntactically complex and rich in verbal padding, which is unusual in this AVT modality (Georgakopoulou 2009: 26–7). This approach may suggest that the translator calqued the original interactions, while showing a concern for their dynamics.

How should the differences between the two versions be interpreted? Both recognized the naturalness characteristic of speech in the fictional dialogue, and yet each reconstructed it in a different way. The voiceover translation makes the most of the voice artist's potential, following the model of whispered interpreting, or chuchotage. Indeed, both voiceover translation (Garcarz 2007; Szarkowska 2009) and whispered interpreting are known as szeptanka among Polish professionals. According to this model, the artist accompanies the viewers as they follow the original utterances, remaining unobtrusive and informative; spontaneous, but not overemotional; sensitive to the dynamics of the original interactions and willing to let the characters speak for themselves. To achieve this effect, the voice talent should integrate the lines into the original audiovisual programme, balancing interference with silence.

This type of performance requires a carefully adapted script. As Woźniak (2008, 2012) argues, this adaptation should involve maximal text reduction, decreasing the length and number of intrusions to increase access to the original dialogue. Selective and fragmentary, the translation is supplemented by the information available on screen (Woźniak 2008: 77). Another important implication of the model concerns the stylistics of the voiceover script. Capitalizing on the spoken mode of delivery, it should reveal features of 'false oral language' (Franco et al. 2010: 178) and be adjusted to the voice artist's unobtrusive, impassive performance. Utterances should remain conversational and concise, colloquial and polished, flat and singular, yet representative of the polyphony of well-rounded characters. As in non-fictional genres, it is essential to 'maintain the tone and the degree of formality of the original' (ibid.: 76), as this accounts for the artistic value of the audiovisual programme.

The analysis of the corpus showed that Agnieszka Kamińska, who translated the voiceover script, reconciled these paradoxes, intuitively following the chuchotage model. She combined brevity with conversational ease, radically reducing the script through paraphrase, condensation and omission, retaining 84 per cent of the original utterances identified in the corpus (972 out of 1,164) and 53 per cent of the


original word-count (4,886 out of 9,190 words transcribed). The translation capitalizes on the interaction between the voiceover script and the original audiovisual text, leaving out verbal messages recoverable from the screen, including ritualized illocutions, response forms, gambits, vocatives and hedges (Franco et al. 2010: 75). The focus seems to be on discourse rather than on textual coherence, which enhances the brevity, but also the conversational realism, of the script, characterized by syntactic simplicity and context-dependence. Kamińska reflected the characters' linguistic idiosyncrasies in the translation, although she also simplified their epistemic and emotional commitment. Altogether, she extracted the essence of the original dialogues, depriving them of hedges and redundancies and adapting them to the needs of the voice artist.

As far as the subtitles are concerned, they followed a different model: that of a personal navigation device. They aimed to provide a clear map of the audiovisual reality on screen, preparing the viewers for every sharp turn in the dialogue, without overloading them with excessive detail. They retained 98 per cent of the utterances (1,141 out of 1,164), reducing the original word-count by 31 per cent. Instead of 'imitating spontaneous-sounding conversation in the target language' (Baños-Piñero and Chaume 2009) as is done in dubbing, the subtitles provided a legible diagram of the original interactions, marking their most important landmarks, time and space permitting. They reflected every communicative turn, including some speech redundancies, such as false starts, hesitations and verbal padding, which are rarely reflected in subtitling practice (Díaz Cintas and Remael 2007: 162–6). They emphasized textual rather than verbo-visual coherence, explicating logical links and context-dependent references.
They conveyed the semantic intricacies of the original, sometimes at the cost of excessive lexico-grammatical sophistication. They therefore reflected interactional rather than syntactic aspects of conversational register, combining the grammatical complexity of writing with the redundancy of impromptu speech.

These preliminary results should certainly be verified on a larger and more varied corpus. Yet, despite its limited scope, the study already invites two general observations. Firstly, each mode of transfer seems to adopt a different approach to orality and requires a different strategy for the recreation of audiovisual dialogue. Read out by the voice artist, the subtitles would sound artificial and redundant. Used as subtitles, the voice artist's lines would appear fragmentary and incoherent. Hence, it is alarming that more and more DVD distributors in Poland use subtitles


as a voiceover script, thus impoverishing the aesthetic unity of the final product. Secondly, despite its idiosyncrasy and cultural specificity, the voiceover translation of fiction certainly raises issues ‘related to other modalities’, such as dubbing, subtitling or audio description (Franco et al. 2010: 14–15), as well as to other genres (the voiceover translation of non-fictional programmes). As such, it may not only be of interest to descriptive translation studies in the post-Soviet bloc, but it may also offer new comparative insights into other modes of audiovisual transfer outside Eastern Europe.
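The retention rates quoted in these conclusions follow directly from the corpus counts. Recomputing them (figures taken from the text above):

```python
# Retention rates for the two Polish versions, recomputed from the
# corpus figures quoted in the conclusions.
ORIG_UTTERANCES, ORIG_WORDS = 1164, 9190

vo_utt, vo_words = 972, 4886   # voiceover script
sub_utt = 1141                 # subtitles

vo_utt_pct = round(100 * vo_utt / ORIG_UTTERANCES)    # 84
vo_word_pct = round(100 * vo_words / ORIG_WORDS)      # 53
sub_utt_pct = round(100 * sub_utt / ORIG_UTTERANCES)  # 98

print(f"Voiceover: {vo_utt_pct}% of utterances, {vo_word_pct}% of words retained")
print(f"Subtitles: {sub_utt_pct}% of utterances retained")
```

The contrast is stark: the subtitles keep nearly every utterance while trimming words within them, whereas the voiceover drops whole utterances and nearly half of the word-count.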

References

Assis Rosa, Alessandra. 2001. ‘Features of oral and written communication in subtitling’. In Yves Gambier and Henrik Gottlieb (eds) (Multi) Media Translation: Concepts, Practices and Research (pp. 213–23). Amsterdam: John Benjamins.
Baños-Piñero, Rocío and Frederic Chaume. 2009. ‘Prefabricated orality: a challenge in audiovisual translation’. inTRAlinea, Special Issue: The Translation of Dialects in Multimedia, www.intralinea.it/specials/dialectrans/eng_more.php?id=761_0_49_0.
Bartmiński, Jerzy and Stanisława Niebrzegowska-Bartmińska. 2009. Tekstologia. Warsaw: PWN.
Biber, Douglas, Stig Johansson, Geoffrey Leech, Susan Conrad and Edward Finegan. 1999. Longman Grammar of Spoken and Written English. London: Longman.
Bogucki, Łukasz. 2004. ‘The constraint of relevance in subtitling’. The Journal of Specialised Translation 1: 71–88, www.jostrans.org/issue01/art_bogucki_pl.pdf.
Díaz Cintas, Jorge and Aline Remael. 2007. Audiovisual Translation: Subtitling. Manchester: St Jerome.
Edmondson, Willis and Juliane House. 1981. Let’s Talk, and Talk about It. Munich: Urban & Schwarzenberg.
Franco, Eliana, Anna Matamala and Pilar Orero. 2010. Voice-over Translation: An Overview. Bern: Peter Lang.
Garcarz, Michał. 2006a. ‘Polskie tłumaczenie filmowe’. The Journal of Specialised Translation 5: 110–9, www.jostrans.org/issue05/art_garcarz.pdf.
Garcarz, Michał. 2006b. ‘Tłumaczenie telewizyjne w Polsce. Teoria przekuta w praktykę’. In Andrzej Pławski and Lech Zieliński (eds) Medius Currens I (pp. 97–108). Toruń: Wydawnictwo Uniwersytetu Toruńskiego.
Garcarz, Michał. 2007. Przekład slangu w filmie. Cracow: Tertium.
Garcarz, Michał and Maciej Widawski. 2008. ‘Przełamując bariery przekładu audiowizualnego: o tłumaczu telewizyjnym jako twórcy i tworzywie’. Przekładaniec 20: 40–9.
Georgakopoulou, Panayota. 2009. ‘Subtitling for the DVD industry’. In Jorge Díaz Cintas and Gunilla Anderman (eds) Audiovisual Translation. Language Transfer on Screen (pp. 21–35). Basingstoke: Palgrave Macmillan.
Gottlieb, Henrik. 1998. ‘Subtitling’. In Mona Baker (ed.) Routledge Encyclopaedia of Translation Studies (pp. 244–6). London: Routledge.


Agata Hołobut

Grigaravičiūtė, Ieva and Henrik Gottlieb. 1999. ‘Danish voices, Lithuanian voiceover: the mechanics of non-synchronous translation’. Perspectives: Studies in Translatology 7(1): 41–80.
Guillot, Marie-Noëlle. 2008. ‘Orality and film subtitling: the riches of punctuation’. The Sign Language Translator and Interpreter 2(2): 127–47.
House, Juliane. 1997. Translation Quality Assessment. A Model Revisited. Tübingen: Narr.
Kozieł, Andrzej. 2003. Za chwilę dalszy ciąg programu… Telewizja Polska czterech dekad 1952–1989. Warsaw: ASPRA-JR.
Kozloff, Sarah. 2000. Overhearing Film Dialogue. Berkeley: University of California Press.
Leech, Geoffrey and Mick Short. 2007. Style in Fiction. Harlow: Pearson Education.
Orero, Pilar. 2004. ‘The pretended easiness of voice-over translation of TV interviews’. The Journal of Specialised Translation 2: 76–96, www.jostrans.org/issue02/art_orero.pdf.
Pavesi, Maria. 2008. ‘Spoken language in film dubbing: target language norms, interference and translational routines’. In Delia Chiaro, Christine Heiss and Chiara Bucaria (eds) Between Text and Image. Updating Research in Screen Translation (pp. 79–99). Amsterdam: John Benjamins.
Szarkowska, Agnieszka. 2006. ‘Formy adresatywne w przekładzie z języka angielskiego na polski’. In Lech Zieliński and Maciej Pławski (eds) Rocznik przekładoznawczy 2. Studia nad teorią, praktyką i dydaktyką przekładu (pp. 211–21). Toruń: Wydawnictwo Naukowe Uniwersytetu Mikołaja Kopernika.
Szarkowska, Agnieszka. 2009. ‘The audiovisual landscape in Poland at the dawn of the 21st century’. In Angelika Goldstein and Biljana Golubović (eds) Foreign Language Movies – Dubbing vs. Subtitling (pp. 185–201). Hamburg: Verlag.
Tomaszkiewicz, Teresa. 2008. Przekład audiowizualny. Warsaw: PWN.
Woźniak, Monika. 2008. ‘Jak rozmawiać z kosmitami? Kilka uwag o tłumaczeniu lektorskim telewizyjnych filmów fantastyczno-naukowych’. Przekładaniec 1(20): 50–88.
Woźniak, Monika. 2012. ‘Voice-over or voice-in-between? Some considerations about voiceover translation of feature films on Polish television’. In Aline Remael, Pilar Orero and Mary Carroll (eds) Audiovisual Translation and Media Accessibility at the Crossroads (pp. 209–28). Amsterdam: Rodopi.

14 Dubbing Directors and Dubbing Actors: Co-authors of Translation for Dubbing

Regina Mendes

14.1 Introduction

The present study is based on observations made in a dubbing studio in Brazil in an attempt to understand how and why dubbing professionals change the translations of the scripts submitted to them by translators. In Brazil, translators are also responsible for adapting the dubbing script, leaving the final adjustments to the dubbing director and actors during the recording of the translated dialogue. I therefore attended a series of dubbing sessions in order to understand the type of suggestions made at this stage and to ascertain the part played by all the participants in the dubbing process. The sessions involved the dubbing of the US police series Cold Case (2003–2010), season 3, episodes 7 and 8, entitled ‘Start Up’ (James Whitmore Jr. 2005) and ‘Honor’ (Paris Barclay 2005), respectively. Thirty dubbing actors, a dubbing director and a sound technician were involved in the process.

The data analysis was divided into two phases. The first phase involved categorizing the most common strategies used by dubbing professionals to alter the translated script. In the second phase, these strategies were compared with those commonly used by translators, following Baker’s (1992) classification, in order to identify the similarities and discrepancies between the two processes. At this stage, my working hypothesis was that dubbing professionals could be viewed, to some extent, as co-translators. The results of the second phase of the research are considered in this chapter.




14.2 Strategies used by dubbing professionals

This section describes the translation strategies most frequently employed by dubbing professionals when synchronizing the translation with the original audiovisual text. Each of the examples provided focuses on a single strategy; other strategies used in the same utterance are not discussed. The following information is included:

1. source text (ST), which refers to the onscreen actor’s original line;
2. translated text (TT), that is, the original translation of the script;
3. dubbed text (DT), which refers to the actual dubbed dialogue included in the final product;
4. back translation (BT), which provides a literal translation of both the TT and the DT.

The ST is accompanied by the name of the character who speaks the line, and the title of the episode of the TV series from which the example was taken is provided at the beginning. Changes made by the dubbing professionals have been underlined in the examples.

The two most common issues encountered by the dubbing director and actors during the dubbing process related to isochrony (Chaume 2012: 68), that is, the need to adjust translated utterances so that they are of approximately the same length as the original, and to lip synchrony, that is, the need to ‘fill in’ each articulation of the onscreen actor’s mouth with a sound, preferably providing a perfect match, above all in close-ups. Changes were sometimes also made to synchronize the text with the onscreen characters’ gestures (kinesic synchrony) and to make the dialogue sound more intelligible or natural to the viewer. To solve these problems, dubbing professionals frequently resorted to one of the various translation strategies described below, sometimes combining several strategies in the same utterance. In the case of the script in question, it was the dubbing director who intervened the most.

14.2.1 Translation using a loan word and explanation

According to Baker (1992: 34), translation with a loan word on its own, or a loan word followed by an explanation, commonly occurs with culture-specific items, modern concepts and buzz words. Baker (ibid.) also points out that an explanation provided alongside the loan word helps the reader to understand the meaning of the foreign word the first time it appears, and makes it possible for the word to appear on its own in subsequent occurrences, as the reader is no longer distracted by long explanations. This strategy was observed in Example 1, when the Cold Case team encounters a code name (His Girl Friday) that they begin to investigate. The translator retained the source language code name in the translation, probably because in the story it refers to a real person and is an allusion to the title of the comedy film directed by Howard Hawks in 1940. The intertextual reference was not mentioned during the dubbing session, but the dubbing director agreed to retain the English loan. She decided, however, to intervene in the translated script, including information about the language of the code name and thus helping non-English speaking viewers understand the foreign pronunciation in the utterance. She inserted the words em inglês [in English] in the script before the actual loan words as an indication that the code name was in English:

Example 1 – Start Up
(ST) Rush: Who that is. ‘His Girl Friday’? A certain clever secretary with a 9.8-million-dollar nest egg.
(TT) Quem … é. ‘His Girl Friday’? Uma certa secretária inteligente com um pé de meia de nove vírgula oito milhões.
(BT) Who … is. ‘His Girl Friday’? A certain clever secretary with a 9.8-million nest egg.
(DT) Quem é isso. Em inglês, ‘His Girl Friday’. Uma certa secretária inteligente com um pé de meia de nove vírgula oito milhões.
(BT) Who is that. In English, ‘His Girl Friday’. A certain clever secretary with a 9.8-million nest egg.

14.2.2 Translation by paraphrase

Baker (1992: 37–40) discusses the strategy of translation by paraphrase at word level as well as above word level, dividing the former into two types: paraphrase using a related word and paraphrase using unrelated words. According to Baker (ibid.: 37), paraphrase using a related word is a solution when translating a concept in the source language that is ‘lexicalized in the target language but in a different form, and when the frequency with which a certain form is used in the source text is significantly higher than would be natural in the target language’. Translation using unrelated words is the preferred option when a concept in the source language is not lexicalized in the target language and, particularly, when it is semantically complex (ibid.: 38).

Translation by paraphrase was one of the most frequent strategies used by both the dubbing director and the dubbing actors to solve issues with the translated script during the recording. Depending on the kind of problem involved, certain utterances had to be entirely paraphrased – an infrequent strategy – or partially paraphrased, a recurring one. To simplify the discussion, only these two basic forms are analysed here; other distinctions made by Baker (ibid.: 37–40; 74–77) are not considered.

14.2.2.1 Paraphrasing the entire utterance

In some cases, an entire translation had to be paraphrased for better isochrony and lip synchrony, which, despite having distinct purposes, are types of synchrony that necessarily complement each other. This is the case in Example 2, where detective Rush is talking with Coleman, who is trying to give her a good reason for having smashed Scott’s car window with a golf club:

Example 2 – Start Up
(ST) Rush: And the golf club, that was your way of saying it?
(TT) E o taco de golfe foi seu modo de dizer.
(BT) And the golf club was your way of saying it.
(DT) E falou isso através do seu taco de golfe?
(BT) And you said that through your golf club?

14.2.2.2 Paraphrasing part of the utterance

The dubbing director and the dubbing actors often had to paraphrase part of an utterance for the purposes of synchrony, a strategy observed in Examples 3 and 4. In Example 3, the dubbing director changed the expression vamos em frente [let’s go ahead] to another that could be more readily pronounced while still transmitting the same idea, vamos seguir [let’s follow (it)].

Example 3 – Honor
(ST) Stillman: He’s a POW, Scotty. We got anything, we’re gonna go with it.
(TT) Ele foi prisioneiro de guerra, Scotty … Temos uma pista, vamos em frente.
(BT) He was a prisoner of war, Scotty … We have a clue, let’s go ahead.
(DT) Ele foi prisioneiro de guerra … Temos uma pista, vamos seguir.
(BT) He was a prisoner of war … We have a clue, let’s follow (it).


In Example 4, the dubbing director realized that the TT needed to be shortened slightly in order to synchronize it with the original, and suggested changing the first sentence, Não tem muita comida [There’s not much food], to a shorter one similar in meaning:

Example 4 – Honor
(ST) Carl: There’s not a lot to eat. Water makes you sick.
(TT) Não tem muita comida. A água deixa você doente.
(BT) There’s not much food. Water makes you sick.
(DT) Tem pouca comida. A água deixa você doente.
(BT) There’s little food. Water makes you sick.

14.2.3 Translation by omission

Baker (1992: 40) identifies translation by omission as a possible strategy when the meaning conveyed by a certain item or expression is not essential to the development of the text. In the material analysed in this study, this was a recurrent strategy employed to deal with synchrony constraints. Several linguistic elements considered irrelevant to the understanding of the utterance and the story were candidates for omission, especially personal pronouns that were not necessary for the correct identification of personal references.

In Example 5, the length of the TT had to be reduced to achieve isochrony, so the dubbing director omitted the adverb bem [right], used in the translation for emphasis. The deletion did not change the meaning of the utterance; it only lessened the degree of emphasis, which the dubbing actress compensated for by using an emphatic intonation:

Example 5 – Honor
(ST) Janet: I didn’t know where Carl was, truly. Even when he was right in front of me.
(TT) Eu não sabia onde Carl estava, ainda que ele estivesse bem na minha frente.
(BT) I didn’t know where Carl was, even if he was right in front of me.
(DT) Eu não sabia onde Carl estava, ainda que ele estivesse na minha frente.
(BT) I didn’t know where Carl was, even if he was in front of me.



In Example 6, the dubbing director instructed the dubbing actor to omit Morre de [Dies of] from the TT because the onscreen actor pronounces these words with his mouth almost closed, which meant the whole TT utterance would not match Vera’s lip movements.

Example 6 – Start Up
(ST) Vera: Dies of a heart attack, 24 years old.
(TT) Morre de ataque cardíaco. 24 anos.
(BT) Dies of a heart attack. 24 years.
(DT) Ataque cardíaco. 24 anos.
(BT) Heart attack. 24 years.

14.2.4 Translation of idioms by adaptation

Translators often have difficulties when they encounter idioms or fixed expressions. Baker (1992: 63–78) describes some of these difficulties and suggests some helpful solutions, such as using an idiom of similar meaning and form, using an idiom of similar meaning but dissimilar form, translating by paraphrase, translating by omission and translating by compensation. During the dubbing of Cold Case, the dubbing professionals sometimes found themselves in a position where they had to adapt the idioms in the TT for the purpose of synchronization. Two examples are given below.

14.2.4.1 Using an idiom of similar meaning and form

According to Baker (ibid.: 72), this strategy calls for the use of an idiom in the target language with the same basic meaning as that of the idiom in the source language and consisting of equivalent lexical items. This is the strategy implemented in Example 7, where the dubbing director and the dubbing actors negotiated a change in the TT and decided to substitute the idiomatic expression ir em cima de alguém [to call someone on something] with another idiom of similar meaning and form, partir para cima, to achieve isochrony and lip synchrony, which were particularly relevant since the scene was a close-up.

Example 7 – Start Up
(ST) Vera: So maybe Amy finds out, calls him on it.
(TT) Talvez Amy tenha descoberto e ido em cima dele.
(BT) Maybe Amy found out and called him on it.
(DT) Amy pode ter descoberto e partido pra cima.
(BT) Amy might have found out and called him on it.


14.2.4.2 Using an idiom of similar meaning but dissimilar form

This strategy involves using an idiom or fixed expression in the target language with a meaning similar to that of the source language one, but with different lexical items (ibid.: 74). In Example 8, the dubbing director intervened by replacing the expression ficar de fora [to be left out] with one more in keeping with the interpretation of the onscreen actor. In this scene, an irritated Coleman makes a gesture with his hand raised to his neck to show that Scott deserves to be ‘cut out’ of the partnership with Amy. The meaning of the expression chosen for the DT, ser cortado [to be cut], is similar to that of the TT although it consists of different lexical items. In addition, it is not only more reflective of the onscreen actor’s body language (kinesic synchrony), but also more emphatic, conveying the sense of ‘elimination’ more strongly:

Example 8 – Start Up
(ST) Coleman: He’s seduced by trappings; he deserves to be cut out.
(TT) Se deixa seduzir por armadilhas. Merece ficar de fora.
(BT) He’s seduced by trappings. Deserves to be left out.
(DT) Se deixa seduzir por armadilhas. Merece ser cortado.
(BT) He’s seduced by trappings. Deserves to be cut.

14.2.5 Translation by compensation

Baker (1992: 78) describes the strategy of compensation as the process of omitting or playing down a feature such as idiomaticity ‘at the point where it occurs in the source text and introduc[ing] it elsewhere in the target text’. Because of the considerable amount of space required to illustrate this strategy, Baker does not provide any examples herself, but she points out that it can also be used ‘to make up for any loss of meaning, emotional force, or stylistic effect which may not be possible to reproduce directly at a given point in the target text’ (ibid.). Indeed, translation by compensation is a complex strategy that can be divided into different subtypes, but here it will be treated as a general strategy, one that is often used to make the translation sound more natural. In the cases shown below, the DT is a recreation of the TT rather than a paraphrase, as it involves adding new information.



In Example 9, Carl has just told his son, Ned, that when he was a POW, he could only talk with his friends by using a secret code. Carl explains that they spelled out words on the walls between their rooms, using a certain number of taps for each letter. Then, tapping on a wall at the zoo, Carl demonstrates the way to say ‘Ned’ in their secret code. Next, he spells out ‘bear’, but Ned cannot figure out what the word is. This is why Carl tells Ned, ‘I said bear’. The TT here is a literal translation of the ST, but the dubbing director wanted the characters’ dialogue to sound more spontaneous since the father and his son are having fun, playing a kind of game. In her view, Fácil. Urso, Ned [Easy. Bear, Ned] sounded more natural in this context than Eu disse urso [I said bear].

Example 9 – Honor
(ST) Carl: I said bear. Wanna go see the bears?
(TT) Eu disse urso. Quer ir ver os ursos?
(BT) I said bear. Wanna go see the bears?
(DT) Fácil. Urso, Ned. Quer ver os ursos?
(BT) Easy. Bear, Ned. Wanna see the bears?

To achieve better synchrony when uttering the translation of ‘and hold all calls’ in Example 10, the dubbing director and the dubbing actor first suggested a partial rewording of the TT with linguistic elements similar to those chosen by the translator. However, after a couple of failed attempts, they finally decided to use an expression that, although not an equivalent translation, preserved the character’s intention and made the dialogue sound more natural and spontaneous.

Example 10 – Start Up
(ST) Coleman: Geraldine, thank you, and hold all calls.
(TT) Geraldine, obrigado e segure os telefonemas.
(BT) Geraldine, thank you, and hold the calls.
(DT) Geraldine, obrigado. Não estou pra ninguém.
(BT) Geraldine, thank you. I’m not here for anybody.

14.2.6 Translation by grammatical equivalence

Differences in the grammatical structures of the source and target languages may call for a change in the information of the ST during translation (Baker 1992: 86). In the material investigated, this type of change often occurred in the TT because of grammatical differences between English and Portuguese as regards the use of personal references (personal pronouns and possessive adjectives, for example). The dubbing professionals of Cold Case changed the TT to avoid reference issues that could lead to an incorrect interpretation on the part of the spectator. The personal pronoun ‘you’ and the possessive adjectives ‘his’ and ‘her’ were the most frequent causes of such problems.

According to the dubbing director, context should always be considered when translating the personal pronoun ‘you’, because in English ‘you’ can have a singular or a plural referent, whereas Portuguese has distinct singular and plural forms (você, vocês), and the verb inflects accordingly. For instance, in Example 11, ‘you can’ could be translated as (você) pode (singular) or as (vocês) podem (plural). The translator opted for the first option. However, the dubbing director realized that the plural form should, in fact, have been used because the scene involved four characters. Taylor, a boy, is searching for a document on the computer to show to detectives Valens and Vera. They are behind the boy, together with his mother (Nancy). At a certain point, the boy, observing the computer screen, says: ‘You can look’. The dubbing director pointed out that the pronoun ‘you’ in the ST should be rendered in the plural because the boy seemed to be referring to everybody present:

Example 11 – Start Up
(ST) Nancy: No. I just can’t do PE. It’s up. You can look.
(TT) Não. Só não posso fazer educação física. Pronto, já pode olhar.
(BT) No. I just can’t do PE. Ready, you (singular) can already look.
(DT) Não. Só não posso fazer educação física. Pronto, já podem olhar.
(BT) No. I just can’t do PE. Ready, you (plural) can already look.

The dubbing director also explained that ambiguous words such as the possessive adjectives seu, sua, seus and suas should be avoided in the TT. For instance, the question Onde estão suas canetas? can be translated as ‘Where are your/his/her/their pens?’; the correct interpretation depends solely on context. To avoid ambiguity in Portuguese, these more specific forms can be used instead: teu, tua, teus, tuas [your], dele [his], dela [her] or deles, delas [their]. In Example 12, the dubbing director and the dubbing actor agreed to modify the TT for a clearer understanding of the pronoun ‘his’ in the ST, using dele instead of the potentially ambiguous seu, so that there would be no doubt that Carl was not referring to the person he was talking to. Although the viewer would probably not be confused, the use of seu might sound somewhat strange at first. In the scene, Carl is at a restaurant, speaking to a waitress, and his son is sitting opposite him:

Example 12 – Honor
(ST) Carl: I’m his dad. Carl Burton. Hi.
(TT) Sou seu pai. Carl Burton. Oi.
(BT) I’m his dad. Carl Burton. Hi.
(DT) Eu sou o pai dele. Carl Burton. Oi.
(BT) I’m his dad. Carl Burton. Hi.

14.2.7 Achieving textual equivalence: cohesion

Cohesion, as Baker (1992: 180) puts it, is ‘the network of lexical, grammatical, and other relations which provide links between various parts of a text. These relations or ties organize and, to some extent, create a text, for instance by requiring the reader to interpret words and expressions by reference to other words and expressions in the surrounding sentences and paragraphs.’

Strategies to obtain textual equivalence by cohesion were also employed by the dubbing professionals. In the ensuing sections, some of the cohesive devices cited by Baker (ibid.) are described and interpreted within the context of this study.

14.2.7.1 Reference

When the dubbing director and the dubbing actors amended the TT because of a problem related to reference, understood as ‘a device which allows the reader/hearer to trace participants, entities, events, etc., in a text’ (Baker 1992: 181), they usually did so by including or omitting a personal pronoun, which resulted in better synchronization. In other cases, changes were mainly made to avoid an incorrect interpretation, as shown in Example 13. In this scene, detective Vera wants Coleman to tell him whether or not he cares about having lost his investment. Since the personal referent of the verb custar [cost] was not specified in the TT, the dubbing director included the pronoun lhe [you] in the translation so that there would be no doubt that the investment was Coleman’s and no-one else’s. In addition, the inclusion meant that the translation fitted better with the gestures made by the onscreen actor:

Example 13 – Start Up
(ST) Vera: And the millions they cost you? No feelings about that?
(TT) E os milhões que custaram … Não se importou?
(BT) And the millions they cost … You didn’t care?
(DT) E os milhões que lhe custaram … Não teve importância?
(BT) And the millions they cost you … They weren’t important?

14.2.7.2 Substitution

The strategy of grammatical substitution – the replacement of certain items by others – was often used by the dubbing professionals when they needed to shorten the duration of the TT. For instance, in Example 14, e nem [nor] was used as a substitute for the longer textual fragment e não agüentaria [and I couldn’t face] with no impact on meaning:

Example 14 – Start Up
(ST) Scott: I couldn’t face jail. And I couldn’t face her finding out.
(TT) Eu não agüentaria a cadeia. E não agüentaria que ela descobrisse.
(BT) I couldn’t face jail. And I couldn’t face her finding out.
(DT) Eu não agüentaria a cadeia. E nem que ela descobrisse.
(BT) I couldn’t face jail. Nor her finding out.

14.2.7.3 Conjunction

Conjunctions, ‘formal markers to relate sentences, clauses and paragraphs to each other’ (Baker 1992: 190), were included by the dubbing professionals not only to improve synchrony, but also to make the dubbed dialogue more cohesive and clearer, as shown in Example 15. Ken is answering Stillman’s question: Então, Carl aparece nesse funeral a que foi desconvidado. Como ele está? [So, Carl turns up at this funeral reception where he’s not invited. What’s he like?]. The dubbing director inserted the conjunction como [like] at the beginning of the TT to make it more evident that Ken’s response identifies Carl as an antagonist:

Example 15 – Honor
(ST) Ken: Antagonistic. He’s looking for trouble.
(TT) Antagonista. Procurando problemas.
(BT) Antagonistic. Looking for trouble.
(DT) Como antagonista. Atrás de problemas.
(BT) Like antagonistic. Looking for trouble.

The examples discussed above show that dubbing directors and dubbing actors regularly apply certain strategies, largely analogous to those used by translators, to solve the problems they encounter when adapting the translation of a script to the original audiovisual text. The purpose of the changes is often to achieve lip synchrony, isochrony and/or kinesic synchrony, and to make the dialogue sound more realistic and less ‘foreign’.

The examples discussed above show that dubbing directors and dubbing actors regularly apply certain strategies, largely analogous to those used by translators, to solve the problems encountered by them when adapting the translations of a script to the original audiovisual text. The purpose of the changes is often to achieve lip synchrony, isochrony and/or kinesic synchrony, and to make the dialogue sound more realistic and less ‘foreign’.

14.3 Further considerations

One of the most important tasks facing the dubbing director is to adapt the translation of the original script so that it meets the restrictions imposed by the dubbing process. The textual manipulation of the translation in the dubbing studio is inevitable because, as explained by the dubbing director of Cold Case, the translator cannot predict and measure all the difficulties potentially encountered by the dubbing actors when recording the translated dialogue. It is only during rehearsals that it is possible to gauge whether or not a certain translation will work. Thus, the changes made by the dubbing director and dubbing actors to the translated script are their collaborative contribution to the process, intended to achieve better synchronization between the utterances and the images on screen.

Since the dubbing directors’ job requires them to adjust the dubbing script, some familiarity with the source language is desirable, as the dubbing director of Cold Case pointed out. She commented that mistranslations can sometimes occur, but that, in the case of this particular series, her knowledge of English had allowed her to detect and modify inadequate translations. In her opinion, the dubbing actors should also have a relatively good command of English because it can help them in the same way as it helps dubbing directors.

The recording process can be seen as a decision-making process for dubbing professionals, especially when they encounter difficulties in preserving the translation provided. The solution is an adaptation or recreation of the translation, arrived at after discussing the problems and testing the suggested changes. The dubbed text is not complete when it leaves the translator, and the responsibility for its final form lies with the dubbing professionals. Recognizing this responsibility, these professionals know that they have to take care, when solving translation problems, to make the dubbing more realistic and to avoid conveying a meaning different from that of the original in the foreign language.

Since dubbing professionals collaborate in the production of the translation for dubbing, employing strategies comparable to those used by professional translators, it is my view that they take on the role of co-authors of the translation of the original script, at least in the Brazilian dubbing industry. Nevertheless, as they often make only slight changes to the utterances, their participation as co-authors can be said to be less significant than the original translator’s. Others may argue that they are mere ‘adapters’ or ‘editors’ of the dialogues. However, since they manipulate translations according to the needs of the medium and must compare them with what was said in the original language before they can decide on the kind of modification to make, it seems legitimate to consider them co-authors of the translated script for dubbing.

References

Baker, Mona. 1992. In Other Words: A Coursebook on Translation. London: Routledge.
Chaume, Frederic. 2012. Audiovisual Translation: Dubbing. Manchester: St Jerome.

15 Audio Description in Hong Kong

Dawning Leung

15.1 Introduction

Audio description (AD) has been in existence in the Western world for more than 25 years and, in Europe, professional practices are being regulated through legislation and/or guidelines and codes of good practice (Orero 2007; Puigdomènech et al. 2010). In contrast, the development of AD in China is still at an early stage. According to the World Health Organization (2010: 5), with a total population of more than 1.3 billion inhabitants, China has approximately 75 million people with some kind of visual impairment, of whom 8 million are blind and 67 million have poor vision. Since AD services are not regulated by law, the provision of AD in mainland China is very limited at institutional level. Extensive research suggests that only one public library, the China Braille Library (www.blc.org.cn), founded in Beijing in 2011, provides AD to cater for the special needs of those with some kind of visual impairment. The Audio Description Centre in this library consists of only three members of staff, who are responsible for both the writing and the delivery of audio description scripts. Audio described films are shown weekly, but only those who live in the city have ready access to these services. Nevertheless, the library uploads the audio described material to its official website to increase accessibility.

In addition to this public library in Beijing, an organization called Beijing Hongdandan Education & Culture Exchange Centre (www.hongdandan.org) has offered AD services on a regular basis since 2005. Films with live AD are shown every week and made available on several radio channels. The centre expanded its service to include Tianjin and Chengdu in 2009 and 2010 respectively. In Shanghai, an accessible film service was launched in 2009, serving both the visually impaired and the deaf. Since 2012, audio described films have been available on a monthly basis at the Cathay Theatre cinema (China Disabled Persons’ Federation 2012). Thus, AD services in mainland China seem to be very limited and are restricted to films.

By comparison, the provision of AD in Hong Kong is more advanced and has developed rapidly during the past three years. Since this chapter concerns AD in Hong Kong, the following sections will discuss some of these developments in detail, focusing on AD services and training.

15.2 Background information on Hong Kong

Hong Kong, a Special Administrative Region of the People’s Republic of China, is a melting pot of East and West with a population of 7.15 million inhabitants. One of the most densely populated cities in the world, Hong Kong has a population density of around 6,620 inhabitants per square kilometre (Information Services Department 2013). Hong Kong is also one of the world’s wealthiest cities, ranking fourth in terms of GDP per capita in 2010; according to The Wealth Report 2012 (Knight Frank Research 2012), it is expected to rank second by 2050. Once a British colony, this international city was returned to China on 1 July 1997. Since then, it has enjoyed a high degree of autonomy under the principle of ‘One Country, Two Systems’.

After the 1997 handover, Hong Kong’s government started to promote Putonghua (Mandarin Chinese), the official language of mainland China, as an essential language in Hong Kong. As established by the Hong Kong Basic Law, both Chinese and English are Hong Kong’s official languages. In addition, the majority of the population speaks Cantonese. Traditional Chinese characters are widely used in writing, whereas Cantonese and English are used in daily life and in announcements made in public places. In addition to official documentation, notices, menus, signs and business correspondence are commonly provided in Traditional Chinese and English. Chinese and English are compulsory language subjects in both primary and secondary education. Putonghua has been a subject in the Hong Kong Certificate of Education Examination since 2000 and was included in the education policy to encourage local students to be ‘biliterate and trilingual’ and to ‘master written Chinese and English [and] speak fluent Cantonese, Putonghua and English’ (Education and Manpower Bureau 2005: 1). Nowadays, it is also used in announcements made in public places.

Dawning Leung

15.3 Disability-related legislation in Hong Kong

The Disability Discrimination Ordinance was implemented and the Equal Opportunities Commission (EOC) was established in Hong Kong in 1996. The EOC is a statutory body responsible for implementing the above-mentioned ordinance to promote equality and diversity and to foster an inclusive society in Hong Kong. The law protects people with disabilities and their associates against discrimination, harassment or vilification in the following areas: employment, education, access to and management of public premises, provision of goods, services and facilities, clubs and sporting activities, etc. (EOC n.d.). However, there is no mention of any obligation for broadcasters, producers or exhibitors to provide access to their media for people with sensory impairments. In other words, there is no stipulation to ensure the provision of media access for audiences with varying sensory ability, such as AD for the visually impaired or subtitling for the deaf and the hard-of-hearing. In November 2002, the Legislative Council (LegCo) and the Broadcasting Authority (BA) proposed some subtitling requirements in an attempt to cater for the needs of the hearing impaired. These requirements started to be included on domestic free television programmes from December 2003 (Broadcasting Authority 2002). According to these, the licensees should provide Chinese subtitles ‘on the Cantonese channels for all news, current affairs, weather programmes and emergency announcements, as well as all programmes shown during prime time (7:00–11:00 p.m.)’ (ibid.: 4). As regards the English channels, licensees should provide English subtitles for all news, weather, current affairs programmes and emergency announcements, as well as for the compulsory two hours of educational programmes per week targeting teenagers (ibid.: 3–4). 
The visually impaired have not received as much attention as the hearing impaired and, to date, the Hong Kong authorities have not imposed any broadcasting requirements for this particular group. In this regard, Hong Kong is lagging behind some Western countries like the UK, the USA, Australia, Canada and Spain, where legislation requiring a minimum percentage of audio described material on public television has already been implemented or soon will be. In the UK, under the Broadcasting Act 1996 and the Communications Act 2003, at least 10 per cent of TV programmes should be broadcast with AD each week (The National Archives n.d.-a and n.d.-b). In the USA, under the Twenty-First Century Communications and Video Accessibility Act of 2010, four hours of audio described programmes should be available on nine TV channels per week, and AD is expected to be expanded to reach 100 per cent nationwide coverage within ten years (American Council of the Blind 2014a). In Spain, the White Paper of the Spanish Audiovisual Law (Ley General del Audiovisual) proposes that, in 2015, at least 10 per cent of the TV programmes broadcast on government-owned channels should be aired with AD (Díaz Cintas 2010; López Vera 2006).

15.4 The provision of AD in Hong Kong

According to the latest official data from the government, in 2008, the total population in Hong Kong was around 7 million (Information Services Department 2012), and the number of people with some kind of visual impairment was 122,600, thus amounting to 1.8 per cent of the total population. Nevertheless, the provision of AD in Hong Kong is non-existent on television channels and there is no regular provision for AD in cinemas, museums or theatres. In addition, as discussed above, there are no legislative requirements for the provision of AD on any media. Awareness of the particular needs of visually impaired people as regards their access to the media began to increase in Hong Kong only from 2009. It was at that time that AD services started to be introduced by local non-governmental organizations (NGOs) with the purpose of meeting the needs of visually impaired people and helping them to integrate into society better. Two local NGOs are at present dedicated to providing AD services: The Hong Kong Society for the Blind (HKSB), which mainly provides AD for films, visits and outings and, occasionally, for plays, exhibitions and performing arts; and the Arts with the Disabled Association Hong Kong (ADAHK), which primarily offers AD for plays, performing arts and exhibitions. The HKSB (www.hksb.org.hk/en) was founded in 1956, and since then has provided eye care and related services for those with poor vision, rehabilitation and vocational training, educational support, employment guidance, an adaptive technology advisory service, technology applications for information and communication purposes and rehabilitation services for people with Multiple Disabilities and Visual Impairment (MDVI), as well as offering residential care for people with MDVI and aged blind people. 
In order to provide more educational opportunities and community support, the HKSB has set up several centres for its members, such as the Centralized Braille Production Centre, the Parents Resource Centre for Visually Impaired Children, and the
Information Accessibility Centre, which runs a physical library as well as an online digital library for the visually impaired. The HKSB also provides AD for cultural and leisure activities (HKSB n.d.). At present, it is the only organization in Hong Kong to show films with AD on a regular basis. In addition, it regularly loans accessible DVDs to its members and has arranged a few screening sessions with AD in the cinema in the past. An AD study group was set up in July 2012 within this organization and, since then, a group meeting is held every month so that audio describers can share their experiences, exchange ideas to improve AD services, and receive some AD and voice training. In addition, group members can suggest arrangements for further AD training and activities. Established in 1986, the ADAHK has been devoted to the development and the promotion of the arts among people with disabilities (PWDs) with the motto ‘arts are for everyone’. The ADAHK (n.d.-a) has a two-directional approach – horizontal and vertical – to achieve its goal, which is explained as follows: The horizontal development is a general education and public sensitivity campaign to reach as many people (with and without disabilities) as possible to provide equal opportunity for PWDs to participate in the learning and creation of art, and to sensitise the community of the rights and needs of PWDs to engage in the arts. The vertical development is a series of programs to provide opportunities for professional training for PWDs to develop and nurture their artistic talents, and to promote excellence in their work. One of the ADAHK’s priorities is to train trainers to work with PWDs, and it has invited overseas specialists to visit Hong Kong to hold a wide range of training activities and to share adaptive techniques to work with PWDs. 
With the support of The Hong Kong Jockey Club Charities Trust, the ADAHK launched the five-year Jockey Club Arts Accessibility Scheme and set up the first local arts accessibility service centre, known as the Jockey Club Arts Accessibility Service Centre (JCAASC). The Centre provides consultation, training and services to create a barrier-free arts environment for the disabled (ADAHK 2008). As will be shown below when discussing the different types of AD provision in Hong Kong, the HKSB and ADAHK have played a key role in the development of AD practices there, especially as regards access to audiovisual material and the performing and visual arts. In addition, efforts have been made by these and other associations to grant the visually impaired access to other activities, such as outdoor outings and sports.

15.4.1 Film showing sessions with live AD

Since March 2009, the HKSB has been offering a regular film showing service, held two or three times a month at its headquarters. Due to copyright issues, only live – instead of prerecorded – AD can be provided. Up until now, more than 120 film showing sessions with live Cantonese AD have been organized, covering some 90 different films. The majority are dramas and romantic films, whereas the rest are action films, martial arts films, comedies, historical films and thrillers. These films are mainly in Cantonese, with a few in Putonghua and English. As regards the provision of AD in these sessions, the HKSB usually books a theatre with a capacity for at least 60 people, where a copy of the film on DVD/VCD is screened. The audio describer sits in the middle of the last row in the theatre to perform the AD using a microphone, which means the AD is heard by all those present on the premises. The volume of the audio describer’s microphone is tested in advance and adjusted so that it does not interfere with the sound from the DVD/VCD. Audio describers are requested to prepare and rehearse their AD at home. A ten- to fifteen-minute introduction is usually provided at the beginning of the session by the audio describer, who takes the opportunity to talk about the cast, the costumes and the settings if they are special. In addition, s/he will remind the audience about flashbacks or fast-changing scenes in order to prepare them for these. A discussion is usually held immediately after the screening for the audience to ask questions about the film and to express their opinions on the AD provided. Thanks to the increasing awareness of the provision of AD for films, some visually impaired elderly persons have shown an interest in the service and, as a result, the HKSB also offers it at a few old people’s homes.
Before the professional audio description training workshop held in July 2011 (see section 15.5.1), audio describers were allowed to conduct live AD in their own way. Some did not use an AD script, which meant there was a good chance of occasionally missing out important information. This was not the only issue encountered: since only live AD was provided, a single audio describer was responsible for the AD of the whole film, which meant that there was usually no break during the session. Information risked being lost whenever there was a slip of the tongue or when any other undesirable physical reaction occurred (coughs or hiccups, for example). After the above-mentioned workshop,
and in an attempt to minimize any potential downsides, the HKSB decided to request audio describers to prepare AD scripts in advance. This has been a big step forward and has contributed to the increased professionalism of the AD services provided, although some issues may not be solved until the advent of prerecorded AD.

15.4.2 Live AD in cinemas

In some Western countries, such as the UK and the USA, the provision of AD is readily found in cinemas. In the UK, for instance, over 300 cinemas are equipped with either a Digital Theatre System (DTS) or a Dolby delivery unit to provide AD (World Blind Union 2011; Your Local Cinema.com n.d.). In the USA, the practice of offering AD in cinemas has spread across at least nine states, with AD provided by MoPix, DTS Access, Fidelio and Sony (American Council of the Blind 2014b). In contrast to the increasing attention that AD has received in some countries, none of the cinemas in Hong Kong is currently equipped with a system to support proper AD, as this mode of translation is new to the Hong Kong film industry. As a result, no prerecorded AD is provided in cinemas. The only exception seems to be Ip Man: The Final Fight [葉問: 終極一戰], a martial arts film shown with prerecorded AD in a reserved cinema in April 2013. The film director and screenwriter decided to add an AD soundtrack before screening because delivering live AD for fighting scenes would have been relatively difficult. Apart from this exception, so far only live AD is provided in cinemas and only on a handful of occasions, when the cinema is booked by an organization for visually impaired patrons to enjoy a film. For this purpose, as was the case with the previously mentioned film shows, the audio describer uses an open microphone to perform the AD live. As the sound comes from speakers, the volume has to be carefully adjusted so that it does not interfere with the original soundtrack.
According to the HKSB (2011a), 4 September 2011 was the very first time that a film was audio described by a trained audio describer in a reserved cinema in Hong Kong. Around 100 visually impaired people and their families enjoyed watching the film Overheard 2 [竊聽風雲2] with live AD (ibid.).

15.4.3 AD on DVDs

Thanks to a greater social awareness of the needs of the visually impaired over the past few years, a few audio described films have been released on DVD. In September 2010, the first DVD with a Cantonese AD soundtrack, corresponding to the film 唐山大地震 [After Shock] (Feng Xiaogang 2010), was launched in Hong Kong (HKSB 2010). Since then,
more Chinese films with AD have become available on DVD: 單身男女 [Don’t Go Breaking My Heart] (Johnnie To and Ka-Fai Wai 2011) (HKSB 2011b), 奪命金 [Life Without Principle] (Johnnie To 2011), 桃姐 [A Simple Life] (Ann Hui 2012) (HKSB 2012a), DIVA華麗之後 [DIVA] (Heiward Mak 2012), 大上海 [The Last Tycoon] (Jing Wong 2012), 葉問: 終極一戰 [Ip Man: The Final Fight] (Herman Yau 2013) (HKSB 2013) and 盲探 [Blind Detective] (Johnnie To 2013). A local production company called ‘Best & Original Production Limited’ was responsible for producing the first four DVDs with AD for the Chinese market. According to AV Bi-Weekly (Zound 2012), the company has put a huge amount of effort not only into the actual AD, but also into designing an audio menu on the DVD so that it can be used independently by the visually impaired. The DVD of A Simple Life is a clear example. Once the DVD is inserted, the audience hears a narration prompting them to select either the ‘AD soundtrack’ or the ‘non-AD soundtrack’. If the AD soundtrack is selected, the next page of the menu informs the audience orally that they can choose between the ‘Cantonese AD’ or the ‘Putonghua AD’ before the actual film starts. After the release of each of the DVDs, the company collected comments from HKSB members for further improvement. The discussion included aspects such as how many buttons there should be in the menu to simplify the selection process. Drawing on such feedback, the audio menus on the DVDs of Life Without Principle and A Simple Life have been revised and improved. Audio described DVDs may also reach audiences through other media. The public service broadcaster Radio Television Hong Kong (RTHK) introduced the programme Audio Cinema [光影無限LIKE – 電影/舞台劇] on its digital audio broadcasting Channel 35 in March 2013. According to the programme description, the soundtrack of either a film or a stage drama in Cantonese is broadcast with audio narration on the last Sunday of each month.
Sometimes, Cantonese films with Cantonese AD available on DVD are also played in this programme (RTHK n.d.).

15.4.4 Short audio described videos

In June 2013, in collaboration with the HKSB, the Equal Opportunities Commission made six videos with prerecorded AD available online. Five of these, each lasting about 23 minutes, were episodes from various series of the EOC’s TV docudrama A Mission for Equal Opportunities, whilst the other video was an award-winning entry from a short video competition organized by this commission. All six videos were uploaded to the EOC YouTube Channel (www.youtube.com/user/HKEOC) in order to raise awareness of discrimination and to promote information about equal opportunities for the visually impaired. The audio described materials, listed in Table 15.1, were carefully chosen for their relevance to the target audience.

Table 15.1 EOC audio described short videos (EOC 2013)

Series: EOC’s TV docudrama A Mission for Equal Opportunities
沒有眼神的微笑 [A Sightless Smile] – Episode 2, 1st Series (1998). Aim: to promote equal opportunities among the visually impaired
網路盲點 [Internet Blindspot] – Episode 3, 2nd Series (2000). Aim: to promote equal opportunities among the visually impaired when accessing the Internet
緣來自平等 [Destined Equality] – Episode 1, 3rd Series (2003). Aim: to promote equal opportunities among the visually impaired
雙生兒 [Twins] – Episode 6, 3rd Series (2003). Aim: to promote equal opportunities among students with specific learning disabilities
按摩有罪 [Harassment] – Episode 1, 6th Series (2009). Aim: to prevent sexual harassment

Series: Special Merit Award winner of the EOC’s video competition
Acceptance (2009). Aim: to promote inclusion and diversity

15.4.5 AD for performing and visual arts

In the late 2000s the ADAHK introduced Playback for ALL, an improvisational theatre with the provision of live AD. The theatre usually stages three to four plays a year, each time on a different theme. During the two-hour play, a host invites some members of the audience to tell a story of their own, related to the theme of the play. The performers then improvise and act out those same stories (hence, the ‘playback’ in the title). Since May 2009, 20 performances of Playback for ALL with AD have been organized, with an overall attendance of over 2,200 people (ADAHK n.d.-b). The ADAHK also provides AD services for other types of performing and visual art, such as the performance of the Hong Kong Ballet Open Dress Rehearsal of Swan Lake, the musical Animal Farm and the painting
exhibition Best Wishes for the Family: Traditional Chinese Woodblock (ADAHK n.d.-c). The ADAHK is not the only association promoting AD for the performing and visual arts, since the HKSB has also provided AD services for several stage dramas and live performances. During these events, when there are sighted people at the venue, handsets are offered to the visually impaired audience so that they can listen to the AD without disturbing their companions.

15.4.6 AD for visits and outings

The HKSB has organized outdoor activities with the provision of AD for its visually impaired members since 2011. To enrich their knowledge of local culture and nature, three types of visits/tours have been arranged so far: a visit to the Legislative Council Building, culture tours on the tram and visits to the Hong Kong Wetland Park. The role of the audio describer was different for each event. While, for the first two activities, audio describers also served as tour guides, for the third the audio describer worked together with a guide. The two-storey granite LegCo Building, built in the neo-classical style, has a well-known blindfolded statue of Justice, represented by the Greek goddess Themis. Before its closure in August 2011, the HKSB organized a visit so that the visually impaired would have a chance to visit this historical building. AD was provided throughout and the audio describer also served as a tour guide. This was also the case with the two culture tours on the tram arranged for about 60 of its members by the HKSB in 2012. To understand the relevance of this activity, it should be noted that Hong Kong has the largest fleet of double-decker tramcars in the world, which have been serving the city for more than a century and have thus become an iconic symbol. In this case, two people were responsible for different parts of the tour: an expert with extensive knowledge of trams and their history, and an audio describer, who also fulfilled the role of tour guide and provided information about the main places and buildings along the way. All participants received a headset to listen to the AD and the information about the trams. On the upper deck, participants listened to the audio describer, who talked about the places and buildings they were passing, whilst on the lower deck, the expert introduced the tram and its history to enrich the participants’ cultural knowledge.
The latter brought along small model trams and sample tickets so that members could feel the shape of the tram and the size of the tram ticket. The visitors were also given the opportunity to take part in a touch tour and were allowed to touch different parts of the tram (the wooden seats, the ceiling) and to feel textures
and shapes. The tram went from one terminus to another and, when it arrived at the second terminus, the two groups swapped places and the tram went back to its starting point. As regards the third audio described outing mentioned above, in 2013 the HKSB organized three visits to the Hong Kong Wetland Park for its members. The Park preserves the wetland habitat and ecosystem within a 61-hectare suburban area and raises public awareness by educating visitors about its significance, value and biodiversity. Guided tours are provided for visitors to learn about the animals and plants living in the wetland, but there is no official guided tour with AD for the visually impaired. The audio described visits organized by the HKSB represented the first time that the Wetland Park tour guides had cooperated with audio describers. The main duties of the tour guides remained the same as usual: they passed on knowledge of the features of the wetland and introduced facts about the animals and plants to be found in this type of ecosystem, whereas the audio describers focused on the appearance of animals and plants, the landscape and its attractions. In each visit, a group of around 45 participants was divided into three subgroups of 15 people, with one audio describer assigned to each subgroup. The visit started with a touch tour at the entrance, during which visitors were allowed to touch and compare life-size statues of a wide range of wetland creatures and small bunches of real plants. Since the participants needed to be able to move around, headsets, instead of loudhailers, were used.

15.4.7 AD for sports: the guide runners (applied AD)

The Hong Kong Blind Sports Federation (HKBSF, 香港盲人體育總會) runs six sports courses for the visually impaired, including swimming, golf, bowling, dragon boating, football and marathon running, the last of which seems to be the most popular. Other organizations, such as Blind Sports Hong Kong (BSHK, 香港失明人健體會), offer regular running training sessions. This organization arranges three long-distance running training sessions every week to encourage the visually impaired to exercise and stay healthy. A number of its members take part in marathon running competitions in Hong Kong and abroad (for example, in Taiwan). During the training session, a guide runner pairs up with a visually impaired runner. They start with warm-up exercises and move on to long-distance running training according to the visually impaired runner’s ability. Both the guide and the runner hold the same strip and the guide takes the lead. The visually impaired runner can sense the
direction, motion and speed of the guide runner through the strip. The session finishes with stretching and cool-down exercises. AD can also be provided during these training sessions. In this case, the guide runner describes the details of the stretching movements during the warm-up and cool-down exercises. The conditions of the road and any obstacles are described during the running session and, sometimes, the guide runner might even describe the scenery, the buildings and objects to be found along the way.

15.5 AD training in Hong Kong

The development of AD in Hong Kong is at a very early stage, so there is not much training available. However, some interesting, pioneering initiatives have been implemented, both by professional associations and by Higher Education institutions, as will be shown below.

15.5.1 Professional AD training workshop

Supported and funded by Create Hong Kong (CreateHK),1 a year-long AD development programme called ‘Hong Kong Audio Description in Films Development Scheme’ was implemented from March 2011 to April 2012 (Create Hong Kong 2011; Government Information Centre 2011; HKSB 2011c). Trainees were invited to attend seminars, workshops and a conference as part of this programme. In late July and early August 2011, two international AD specialists from the USA and Taiwan were invited to give a five-day professional AD workshop to train local audio describers in Hong Kong. These experts taught participants how to write AD scripts and how to describe films and images (photos). Around 100 participants attended the workshop and about 60 of them received certificates in two categories: Basic Theory and Practicum. From those who had obtained both certificates, 15 trainees were selected to perform AD in the film-showing service held by the HKSB (HKSB 2012b).

15.5.2 Training the trainers workshop: developing AD skills

Organized by the ADAHK in early April 2013, the ‘Train-the-Trainers Workshop: Developing Audio Description Skills’ focused on the AD of visual and performing arts. A specialist from the USA was invited to share her experience in providing AD, and participants were taught how to write AD scripts and how to describe paintings, photos, plays and operas. In one of the sessions, participants observed how live AD was performed for the above-mentioned theatre performance Playback for ALL. After the workshop, a study group was established in June 2013
in an attempt to provide more opportunities to discuss AD issues. Since then, the audio describers who took part in this workshop and contributed to the ADAHK have met once a month to practise AD and share their experiences.

15.5.3 AD Training in Higher Education

Several elective courses on AD have been introduced in Higher Education institutions in Hong Kong. In 2005, Yeung (2007: 234) developed a 30-hour set of AD teaching materials as part of a translation and interpreting course at Hong Kong Baptist University. The material includes exercises for dramas, dances, xiqu (Chinese operas), films, paintings, museum tours and public events. Since there was no separate AD course to train her translation and interpreting students, in 2006 she incorporated AD training into her ‘Translating across Media’ class, a module on intersemiotic translation also covering interpreting and adaptation. Hence, only nine hours of original AD material (dramas and paintings) were used (Yeung 2007; Yeung, personal communication, 8 January 2014). Yeung (2005: 3) regards AD skills as ‘highly transferable’, especially for interpreting training. In particular, she emphasizes the usefulness of ‘vocal skills’ and ‘command of language’, and the ability to provide an overall and focused description of pictures (ibid.: 4). Since 2011, three elective undergraduate courses on AD have been designed, developed and taught at Chu Hai College of Higher Education: ‘Convergent Translation’, ‘Audiovisual Translation’ and ‘Media Communication for Performing Arts and Entertainments’. Whereas the first two courses focus only on AD for films, the last one is devoted to AD for the performing arts, as well as for entertainment activities including films and television programmes. The majority of students who choose these modules belong to the Department of Journalism and Communication at Chu Hai College.

15.6 Conclusion

The number of visually impaired people in Hong Kong is estimated at around 122,000; yet, the provision of AD services in this wealthy city is limited, with most of the effort being led by local NGOs. When compared to Western countries, the development of AD in Hong Kong is relatively young. However, as this chapter has shown, the provision of AD services has experienced a rapid growth over the past few years. AD provision now covers various products, ranging from films (including live AD sessions in cinemas and prerecorded AD available on DVDs)
to performing and visual arts. Although lagging behind, Hong Kong’s government has also shown some awareness of the special needs of the visually impaired by producing short videos with AD. AD is also being provided for sports training, visits and outings, where the role of audio describers is sometimes extended, as they can also serve as tour guides. Such activities not only enable visually impaired people to integrate within society by taking part in cultural and sports activities, but also raise awareness of their needs among the rest of the population, and of the usefulness of AD within the visually impaired community itself. Training also plays an essential role in raising awareness and in the provision of high-quality AD services. As was the case with the provision of AD, AD training has developed in Hong Kong in academic as well as professional circles. AD training has been organized by professional associations and has also recently been introduced into the curriculum of some degree programmes, where elective AD courses have been designed to train new blood and to equip future professionals with the relevant skills. Although there is still a long way to go, these initiatives highlight the increasing attention that AD is receiving in Hong Kong, a trend that will hopefully continue in the years to come.

Notes

1. Create Hong Kong is an agency set up under the Commerce and Economic Development Bureau of the Hong Kong Special Administrative Region, in partnership with the HKSB.

References

ADAHK. n.d.-a. About us, www.adahk.org.hk/en/about_us/index.html.
ADAHK. n.d.-b. School of Playback Theatre (Hong Kong), www.adahk.org.hk/en/whats_news/upcoming_events/index_id_275.html.
ADAHK. n.d.-c. Accessible Event Calendar, www.jcaasc.hk/?a=group&id=date_us.
ADAHK. 2008. ADA Brochure and Milestones. Hong Kong: ADAHK.
American Council of the Blind. 2014a. What the Twenty-First Century Communications and Video Accessibility Act of 2010 Will Do for People Who Are Blind or Visually Impaired, www.acb.org/adp/commact.html.
American Council of the Blind. 2014b. Movie Theaters Offering Audio Description, www.acb.org/adp/moviesbystate.html.
Broadcasting Authority. 2002. The Renewal of the Domestic Free Television Programme Service Licences of Asia Television Limited and Television Broadcasts Limited, www.cedb.gov.hk/ctb/eng/legco/pdf/Legco_brief-12Nov(r).pdf.
China Disabled Persons’ Federation. 2012. 上海首家无障碍电影院揭牌 [The first cinema providing accessible films in Shanghai], www.cdpf.org.cn/dfgt/content/2012-07/02/content_30400562.htm.
Create Hong Kong. 2011. CreateHK Drives Development of Audio Description in Films, www.createhk.gov.hk/text_only/en/news/wn_110317.htm.
Díaz Cintas, Jorge. 2010. ‘La accesibilidad a los medios de comunicación audiovisual a través del subtitulado y de la audiodescripción’. In Luis González and Pollux Hernúñez (coords) El español, lengua de traducción para la cooperación y el diálogo (pp. 157–80). Madrid: Instituto Cervantes.
Education and Manpower Bureau, HKSAR. 2005. Education for Non-Chinese Speaking Children, www.hkhrm.org.hk/racial%20discrimination/database/eng/nc%20facilities%2004(final).pdf.
EOC. n.d. The Disability Discrimination Ordinance and People with a Visual Impairment, www.eoc.org.hk/EOC/GraphicsFolder/showcontent.aspx?itemid=10268.
EOC. 2013. Press Release: Towards Improving Universal Design and Accessibility – EOC Launches iPhone App and Videos with Audio Descriptions, www.eoc.org.hk/eoc/GraphicsFolder/ShowContent.aspx?ItemID=11430.
Government Information Centre. 2011. CreateHK Drives Development of Audio Description in Films, www.info.gov.hk/gia/general/201103/17/P201103170112.htm.
HKSB. n.d. About us, www.hksb.org.hk/en/index.php?option=com_content&view=article&id=2&Itemid=3.
HKSB. 2010. 《唐山大地震》口述影像影碟 新聞發佈會 [‘After Shock’ DVD with Audio Description: Press Conference], www.hksb.org.hk/images/contents/media/news/after%20shock-Press%20Release%20-%20final.doc.
HKSB. 2011a. 《竊聽風雲2》口述影像電影欣賞會 [‘Overheard 2’ Movie Show with Audio Description], www.hksb.org.hk/images/contents/media/news/%A1m%C5%D1%C5%A5%AD%B7%B6%B32%A1n%A4f%ADz%BCv%B9%B3%B9q%BCv%AAY%BD%E0%B7%7C%20%B7s%BBD%BDZ.doc.
HKSB. 2011b. 《單身男女》首張附有粵語及普通話口述影像的華語電影光碟 [‘Don’t Go Breaking My Heart’: the First Chinese DVD with Cantonese and Putonghua Audio Description], www.hksb.org.hk/images/contents/media/news/%A1u%B3%E6%A8%AD%A8k%A4k%A1vPress%20Release.doc.
HKSB. 2011c. 香港電影口述影像發展計劃 [Development of Audio Description Services for Hong Kong Films], www.hksb.org.hk/images/contents/media/news/%AD%BB%B4%E4%B9q%BCv%A4f%ADz%BCv%B9%B3%B5o%AEi%ADp%B9%BA-%20press%20release.doc.
HKSB. 2012a. Development of Audio Description Services for Hong Kong Films Project. Closing Ceremony cum ‘A Simple Life’ Movie Show with Audio Description, www.hksb.org.hk/en/index.php?option=com_content&view=article&id=345&Itemid=4.
HKSB. 2012b. Development of Audio Description Services for Hong Kong Films Project 2011–2012. Hong Kong: HKSB and Create Hong Kong.
HKSB. 2013. 《葉問-終極一戰》導演與編劇破天荒攜手協助視障人士欣賞電影 [‘Ip Man: The Final Fight’ Movie Show with Audio Description], www.hksb.org.hk/images/contents/media/news/%A1m%B8%AD%B0%DD%A1%D0%B2%D7%B7%A5%[email protected]%BE%D4%A1n_%B7s%BBD%BDZ.pdf.
Information Services Department, HKSAR. 2012. Hong Kong: The Facts – Population, www.gov.hk/en/about/abouthk/factsheets/docs/population.pdf.
Information Services Department, HKSAR. 2013. Hong Kong: The Facts – Population, www.gov.hk/en/about/abouthk/factsheets/docs/population.pdf.

Audio Description in Hong Kong


Knight Frank Research. 2012. The Wealth Report 2012, www.thewealthreport.net/The-Wealth-Report-2012.pdf.
López Vera, Juan Francisco. 2006. ‘Translating audio description scripts: the way forward? – Tentative first stage project results’. In Mary Carroll, Heidrun Gerzymisch-Arbogast and Sandra Nauert (eds) MuTra – Audio Visual Translation Scenarios: Conference Proceedings. Paper presented at the EU High Level Scientific Conferences, Marie Curie Euroconferences, Copenhagen, 1–5 May 2006 (pp. 148–57). Saarland: Advanced Translation Research Center, Saarland University.
The National Archives. n.d.-a. Broadcasting Act 1996, www.legislation.gov.uk/ukpga/1996/55/contents.
The National Archives. n.d.-b. The Communications Act 2003, www.legislation.gov.uk/ukpga/2003/21/contents.
Orero, Pilar. 2007. ‘Sampling audio description in Europe’. In Jorge Díaz Cintas, Pilar Orero and Aline Remael (eds) Media for All: Subtitling for the Deaf, Audio Description, and Sign Language (pp. 111–26). Amsterdam: Rodopi.
Puigdomènech, Laura, Anna Matamala and Pilar Orero. 2010. ‘Audio description of films: state of the art and protocol proposal’. In Łukasz Bogucki and Krzysztof Kredens (eds) Perspectives on Audiovisual Translation (pp. 27–44). Frankfurt: Peter Lang.
RTHK. n.d. 光影無限LIKE—電影/舞台劇 [Audio Cinema], http://programme.rthk.hk/channel/radio/programme.php?p=5792.
World Blind Union. 2011. World Blind Union Toolkit on Providing, Delivering and Campaigning for Audio Description on Television and Film, www.rnib.org.uk/livingwithsightloss/tvradiofilm/tvradiofilmnews/Pages/wbu_audio_description_toolkit.aspx.
World Health Organization. 2010. Global Data on Visual Impairments 2010, www.who.int/blindness/GLOBALDATAFINALforweb.pdf.
Yeung, Jessica. 2005. Developing Teaching Material of Audio Description for Interpreting Subjects, http://libproject.hkbu.edu.hk/was40/detail?channelid=49501&searchword=Code=’TDG-0405-I-04.
Yeung, Jessica. 2007. ‘Audio description in the Chinese world’. In Jorge Díaz Cintas, Pilar Orero and Aline Remael (eds) Media for All: Subtitling for the Deaf, Audio Description, and Sign Language (pp. 231–44). Amsterdam: Rodopi.
Your Local Cinema.com. n.d. Audio Described Cinema and DVD Information, www.yourlocalcinema.com/ad.about.html.
Zound. 2012. 視障人士也可「看」《桃姐》 [The visually impaired can watch ‘A Simple Life’ too]. AV 雙周 [AV Bi-Weekly] 190: 22–25.

Index

10vor10, 33 59 segundos, 33, 58, 61 A Aburdene, P., 146–7 accent, 24, 51, 73, 76–7, 118 accessibility, 3, 5–8, 10, 14, 26–7, 50, 71, 99, 101, 104, 145–6, 149, 155–6, 159, 202, 203–4, 205, 217, 219–21, 223, 252, 266, 268, 270, 279–80 policies, 3 acculturation, 133 accuracy, 3, 28–33, 41–50, 51, 54, 56, 58, 67–70, 76, 111, 117, 120, 123, 130, 184 rate, 1, 28, 30–3, 48–9 adaptation, 71, 121, 126, 136, 150, 159, 249, 258, 265, 278 adaptive techniques, 270 The Adventures of Robin Hood, 130 AENOR, 48, 50, 52, 56, 58, 66, 70, 74, 77, 92, 93 Águas de Romanza, 104 AIDAC, 138, 139, 163 allusion, 127–8, 225 Álvarez, A., 70 Amadeus, 128 AMATRA research project, 78, 92 American Association on Intellectual and Developmental Disabilities, 101 American Council of the Blind, 272, 279 Andrews, D., 191 Animal Farm, 274 APAE, 101–3, 109 Apocalypse Now, 114, 125–6 Apone, T., 50 Appiah, K. A., 15, 27 APyCA system, 51

Arts with the Disabled Association Hong Kong (ADAHK), 269–70, 274–5, 277–9 Ashworth, D., 155, 159 Assis Rosa, A., 226, 251 ATAA, 163 ATRAE, 163 Audi, 158 audience perception(s), 5, 137 see also viewers’ perception(s) audience preferences, 5 see also user preferences see also viewers’ habits audience response, 5, 94, 110, 135 audio channel, 24, 59 audio describer, 73–5, 77, 80–2, 85, 87, 90–2, 99–100, 270–2, 275–6, 278 see also describer audio description (AD), 1–3, 5, 9, 52, 72, 73–4, 75, 77–8, 79–83, 84, 85–7, 91–5, 99–101, 104–9, 143, 145, 215, 251, 266–79, 279–81 applied AD, 276 guidelines, 4, 74, 77, 99–100, 266 live AD, 266, 271–2, 274, 277–8 prerecorded AD, 272–3, 278 script, 72–3, 77, 80–1, 84, 86, 88–90, 107, 271, 272, 277, 281 track / soundtrack, 74, 100, 272–3 audio introduction(s), 74, 94 AudioToText Synchronization Project, 51 audiovisual platform(s), 15, 17 see also multimedia platform(s) auditory stimuli, 75 aural dimension, 72, 75, 79 aural stimuli, 74, 78, 80, 82, 92 auto-captioning, 24–5 automatic speech recognition (ASR), 3–4, 47, 49, 53–7, 59–70 see also speech recognition

B Baack, D.W., 156, 159 Baker, M., 138, 251, 253–5, 257–9, 261–3, 265 Ballinger, C., 165, 191 Ballone, G.J., 102, 109 Baños Piñero, R., 226, 250–1 Bartmiński, J., 232, 251 BBC Six O’Clock News, 33 Benecke, B., 73, 75, 77, 93 Berman, A., 111, 127, 138 Bermin, B., 102 best practices, 6, 150–1, 153, 155, 159 see also good practice(s) Best Wishes for the Family: Traditional Chinese Woodblock, 275 Bezubik, M., 220, 222 Biber, D., 226, 228–30, 234–5, 242–3, 245, 251 Bisani, M., 54, 70 blank template(s), 193–4, 197, 200 blind, 1, 73–4, 79, 93–5, 99–101, 105–6, 145, 215, 219, 266, 269, 272–6, 279, 281 Blind Sports Hong Kong (BSHK), 276 block subtitles, 56, 69 blu-ray, 140–3, 145, 147, 148 Boersma, P., 81, 93 Bogucki, Ł., 215, 222, 225–6, 251, 281 Bolivar, V., 78, 93 Boulianne, G., 50, 54, 70 Bourne, J., 56, 71, 74, 93 Braun, S., 72–3, 93 Breaking Bad, 8 British Broadcasting Corporation (BBC), 33, 50, 71 broadcaster(s), 1, 4, 28, 34, 48–9, 51, 63, 199, 203–5, 217–18, 221, 268, 273 Broadcasting Authority, 268, 279 Budin, G., 93, 156–7, 159 Bühler, H., 76, 93 Bywood, L., 192, 196, 201 BZO, 163 C caption(s), 50, 54, 66–7, 70–1, 140, 142–7, 191, 227, 233, 236 see also subtitle(s) / subtitling


Carroll, M., 10, 50, 93, 145, 148, 193–4, 202, 252, 281 censorship, 127, 129, 136, 138–9 see also manipulation Chafe, W., 49–50 character, 15–16, 28, 30, 47–9, 85, 89, 155, 254 identification, 47–9 Chaume, F., 5, 9, 110, 112, 126, 138, 226, 250–1, 254, 265 Chiaro, D., 112, 138, 156, 159–60, 252 children, 132, 208–9, 212, 222, 269, 280 China Disabled Persons’ Federation, 267, 279 Chion, M., 72, 78, 93 cinema, 8, 16, 74, 76, 78, 100, 104, 110–11, 121, 139–42, 177, 204, 215, 221, 267, 269–70, 272–3, 278–9, 281 coherence, 29, 33, 151, 250 cohesion, 18, 20, 33, 219, 262 Cold Case, 253, 258, 261, 264 Collados Aís, Á., 73–4, 76, 82, 91, 93–4 colloquialism, 227, 229, 246, 248 Colton, C.C., 140, 147 commercial factor(s), 112, 135, 137 compensation, 62–3, 67, 258–9 Comuzio, E., 113, 138 condensation, 16, 154–5, 199 Conseil Européen des Associations de Traducteurs Littéraires, 164, 191 context-dependence, 243–5, 250 convention(s), 6, 17–18, 20, 24, 100, 102, 109, 122, 153–5, 199, 216, 218 conversation(al), 8, 103, 209, 220, 226–9, 233–6, 242–5, 248–50 discourse, 226 features, 8, 227 markers, 242 register, 226, 228–9, 234, 243, 250 compare fictional dialogue copyright, 112, 137, 177, 190, 271 Create Hong Kong, 277, 279–80 CRIM method, 31 Crosara de Resende, A.P., 109 crowdsourced AVT, 1


Culler, J., 73, 93 cultural reference(s), 135 culture-specific reference(s) / item(s), 117, 133, 251, 254 see also extra-linguistic culturespecific reference(s) current affairs programmes, 211, 268 customization, 6, 140 Czajkowska-Kisil, M., 208, 210, 219, 222–3 D The Danish Union of Journalists, 163 de Bortoli, M., 156–7, 159 de Castro, M., 3, 51, 63, 70 de Paiva Vital, F.M., 109 deadlines, 168, 171, 174, 181–2, 189, 194, 200 deaf, 1, 8, 34, 48, 93, 95, 143, 145–6, 189, 203–13, 215–16, 218–24, 267–8, 281 see also hard-of-hearing see also hearing impaired see also prelingually deaf see also subtitling for the deaf and the hard-of-hearing deafness, 206, 212, 223 see also levels of deafness Declercq, C., 154, 159 deictic, 229, 243, 245 items, 229, 243 markers, 243 Delabastita, D., 153, 159 deletion, 30–3, 124, 257 error, 32 density, 56, 186, 267 describer, 4, 72–5, 77–8, 80–5, 87, 90–2, 99–100, 270–2, 275–9 see also audio describer Desperate Housewives, 8, 227, 230, 235, 243, 245, 248 Diagnostic and Statistical Manual of Mental Disorders (DSM), 101 dialect(al), 77, 251 dialogue, 8, 15, 23, 49, 77, 80, 116–18, 120–2, 124, 126–133, 135, 139, 140, 184–6, 193–4, 197, 200, 226–33, 235–43, 246, 248–50, 252–4, 260, 263–5 list, 184–6, 200

Díaz Cintas, J., 4, 9, 16, 27, 72, 93, 100, 109, 140, 144–5, 147, 172, 191–2, 198, 202, 215, 222–3, 250–1, 269, 280–1 Dickins, J., 167 Digital Television for All (DTV4All), 220 disability, 5, 101–3, 108, 203, 268, 280 hearing disability, 203 learning disability, 5 visual disability, 203 discourse, 8, 22, 33, 41, 43, 93, 123, 154, 157, 160, 199, 225–7, 229, 232–3, 235, 238, 242, 245, 250 distributor(s), 6, 112–13, 128, 132, 135–8, 141, 216, 250 documentary(ies), 13, 16, 18, 168, 174, 197, 225 Down’s syndrome, 102–3, 107–8 Downey, G., 177, 191 Dragon Naturally Speaking, 57 DTT broadcast network, 54–5, 57, 65–6 dubbing, 5, 8–9, 111–33, 135–9, 146, 226, 250–2, 253–65 actor, 5, 9, 115, 253, 255–9, 261–5 see also voice artist see also voice talent director, 9, 113–15, 123, 138, 253–65 script, 115, 117, 121, 128–9, 131–2, 253 Duff, A., 167, 191 Dumouchel, P., 30, 50 dysfluency, 228, 230, 234 E E.T. the Extraterrestrial, 154 EC Directorate-General for Translation, 150 editing, 29–30, 32–6, 38–40, 43, 45–6, 48–9, 52–3, 55, 59, 70, 125, 154, 173, 188 error, 32–6, 38–40 editor(s), 48, 117, 265 Edmondson, W., 228, 234–5, 238, 251 Education and Manpower Bureau, HKSAR, 280

educational programmes, 268 Elizabeth, 74 ellipsis, 227, 229–30, 243–4, 248 EOC, 268, 273–4, 280 equivalence, 14–15, 75, 156–7, 260, 262 error calculation, 28 ESIST, 163, 165 España Directo, 58 ETSI, 54–5, 63, 70 Eugeni, C., 29, 47, 50, 53, 71 EuroparlTV, 14, 17–18 Europe by Satellite, 14, 17, 23 European Commission webcast portal, 3, 17–22, 24 expletive(s), 120, 227, 229, 245, 248 see also swearword(s) see also taboo explicitation, 126, 244 extra-linguistic culture-specific reference(s), 117 F family films, 119, 121, 130, 132, 135 fansubbing, 1, 10, 215 Fawcett, P., 137–8 Fels, D., 73, 93 fictional dialogue, 226–7, 248 fictional programmes, 226, 229 compare non-fictional programmes films, 5, 13, 16, 51, 73–4, 78–80, 85, 88, 90–2, 94, 100, 106, 108, 110–13, 115, 119, 121, 129–30, 132, 135–7, 139, 142, 146, 168, 174, 185, 201, 204, 210–11, 215, 220, 225–6, 252, 266–7, 269–73, 277–81 first translation, 110, 111, 127, 193, 194, 196, 198 fluency, 74, 76, 228, 230, 234 For Whom the Bell Tolls, 114, 127, 137 foreignizing / foreignization, 118 formulaic expressions, 238–9 see also ritual illocutions The Forum for Finnish Subtitlers, 164 Franco, E., 5, 99, 225, 249–51 Frazer, L., 165, 191 From Paris with Love, 144


Fryer, L., 73, 78, 93–4 Fuentes-Luque, A., 2, 13 functionality, 155, 184 Fundação Oswaldo Cruz, 102, 109 G Gałkowski, T., 208, 223 Gambier, Y., 72–3, 93, 111, 127, 132, 138–9, 251 gambits, 235, 238, 242, 249–50 Gao, J., 56, 71 Garcarz, M., 225–6, 249, 251 García, J.E., 51, 53, 56, 71 Geertz, C., 15, 27 Geeslin, J.D., 209, 222 genesis files, 193 Georgakopoulou, P., 198, 201–2, 249, 251 Géroult, F.C., 141, 144, 147–8 Gerzymisch-Arbogast, H., 73, 93–4, 147, 281 Gile, D., 75, 94–5, 200 globalization, 2, 159, 166, 198 The Godfather, 116 Gone with the Wind, 121, 135, 137 González Lago, M.D., 47, 50 good practice(s), 150, 155 see also best practices Gottlieb, H., 72, 92, 94, 225, 226, 251, 252 Gouadec, D., 172, 191 grammatical reduction, 227, 243 Grease, 114, 129, 136 Gregory, S., 209, 222 Grigaravičiūtė, I., 225, 252 guidelines, 4, 6, 9, 48, 66, 74, 77, 99–100, 144–5, 154, 185, 188, 266 Guillot, M.N., 226, 252 Gürçağlar, Ş.T., 111, 138 H hard-of-hearing (HoH), 203, 219, 221 see also deaf see also hearing impaired see also subtitling for the deaf and the hard-of-hearing HBB4All Project, 48–9 hearing impaired, 51, 203–5, 207, 212, 215–16, 223, 268


hearing loss, 206–7, 211, 221, 223 see also levels of deafness Hefer, E., 47, 50 Hermans, T., 15, 27 Holmes, J., 153, 159 Homeland, 8 Hong Kong Blind Sports Federation (HKBSF), 276 Hong Kong Society for the Blind (HKSB), 269–73, 275–7, 279–80 The Hours, 4, 79–80 House, J., 3, 228, 234–5, 238, 252 Hyks, V., 73, 77, 94 I idea units, 33, 36, 43, 45 Iglesias Fernández, E., 4, 72–7, 82, 91, 94 Ilieva, J., 164, 191 illocutions, 235, 238, 240, 250 see also ritual illocutions idiom, 229, 246, 258–9 Immortal Beloved, 143, 148 implicit meaning(s), 106 Information Services Department, HKSAR, 267, 280 insertion, 30–1, 33, 39 institution(al), 2–3, 13–17, 20–2, 25–7, 48, 81, 101–3, 105, 109, 137, 150, 156–60, 168, 217, 220, 266, 277–8 audiovisual translation, 2, 13–14, 16, 21, 25 translation, 3, 13–16 interactive captions, 143 see also movable subtitles see also resizable subtitles interactivity, 143, 145–6, 234–5, 238, 248 intercultural dialogue, 15 intercultural relations, 153 International Statistical Classification of Diseases and Related Health Problems (ICD), 102–3 internationalization, 150, 152, 159–60 checker, 150 interoperability, 15, 156 interpreter(s), 50, 71, 74–5, 91–3, 204, 213, 219, 252

interpreting, 4, 19, 72–8, 82, 91, 93–4, 156, 204, 225, 249, 278, 281 intersemiotic, 278 see also semiotic intertextual reference(s), 126, 255 intertextual relations, 153 intonation, 22, 49, 52, 73–4, 76, 81–2, 257 IPTV broadcast network, 51, 53–5, 62, 64–6, 68–70 Iron Man, 143, 147 ISO, 54, 63 isochrony, 240, 254, 256, 258, 264 see also kinesic synchrony see also lip synchrony Ivarsson, J., 145, 148 J Jaconelli, L., 143 Jaws, 114, 117 Jenkins, J., 157, 159 Jezebel, 114, 130 K Kapsaskis, D., 199, 202 Karamitroglou, F., 145, 148 Khris, S., 110, 138 kinesic synchrony, 254, 259, 264 see also isochrony see also lip synchrony Kirchner, C., 73, 94 Knight Frank Research, 267, 281 Koskinen, K., 13–15, 27, 126, 137, 139 Kowal, J., 210, 222 Kozieł, A., 226, 252 Kozloff, S., 227, 252 Künstler, I., 203, 210, 223 Kurz, I., 76, 94 L L’Oreal, 158 Lachat, C., 74, 93 Lady and the Tramp, 114, 133, 135 Lambourne, A., 56, 71 Lancôme, 158 Las mañanas de La 1, 58–9, 61 Lawley, M., 165, 191 Le Fabuleux destin d’Amélie Poulain, 144

learning disabilities, 5, 99–104, 106–9, 274 Leech, G., 226–7, 251–2 Les Tontons flingueurs, 144 levels of deafness, 206 lexical bundles, 227, 229, 245 linguistic modernization, 132–3, 135 lip synchrony, 227, 254, 256, 258, 264 literality, 56 live programmes, 4, 47, 49, 54, 57–8, 66 live subtitling, 3, 28–30, 34, 41, 48, 50, 52–3, 54–6, 63, 71 see also quasi-live subtitling see also real-time subtitling loan word, 254–5 localization, 6, 149–50, 152–4, 156–7, 159–60 The Localization Research Centre, 150 Looms, P.O., 51–2, 63, 71 López Vera, J.F., 269, 281 Lucas, C., 215, 223 Luyckx, B., 58, 71 M machine translation, 21, 173, 201 manipulation, 136, 138–9, 264 see also censorship Maraschio, N., 110, 139 Marleau, L., 140, 148 Martín Ruano, M.R., 14–15, 27 Martínez-Sierra, J.J., 135, 139 Mason, A., 52, 71 Massidda, S., 1, 10 Master and Commander, 74 masterfiles, 193 Matamala, A., 27, 50, 72–3, 94, 202, 223, 251, 281 The Meaning of Life, 114 Media Access Australia, 48, 50 mental disability, 101 minor error, 32, 39, 45 misrecognition, 32–3, 47, 49 see also recognition error Mossop, B., 13, 27 movable subtitles, 141 MPC, 142 multilingual web, 149–55, 157, 159 MultilingualWeb project, 6, 149–50, 152, 160


multimedia platform(s), 14, 26 multimodality, 8, 93 The Mummy, 114 Muñoz Sánchez, P., 1, 9, 215, 222 music, 74, 78, 80, 85, 88, 93, 95, 128, 246, 274 see also song(s) N Naisbitt, J., 146, 148 narration, 73–4, 77–8, 80–5, 89, 91, 273 narrative, 93, 99, 108 natural language processing, 52 negotiation power, 7, 168, 175–7 NER model, 3, 28–9, 32–3, 41, 47–50 NERD model, 32 NERstar, 48, 50 Nestlé, 158 neutral AD, 4, 77 Neves, J., 27, 202, 206, 223 Newmark, P., 166 Newsnight, 33 Ney, H., 54, 70 Niebrzegowska-Bartmińska, S., 232, 251 Nikolić, K., 7, 192, 194, 202 noise(s), 53, 107 non-fictional programmes, 225–6, 251 compare fictional programmes non-governmental organizations (NGOs), 14, 16–17, 269, 278 nonverbal communication, 52, 93 nonverbal qualities, 4, 77, 91 nonverbal signs, 72, 76, 91, 107 norm(s), 5, 16, 22, 34–5, 45, 53, 93, 99, 101, 110, 115, 117, 121, 124, 130, 134, 136, 141, 156, 166, 169, 176, 196, 206, 225, 238, 245, 252 see also operational norm(s) see also target culture norm(s) see also translation(al) norm(s) O O’Brien, S., 3, 10 O’Hagan, M., 1, 10, 155, 159 Ofcom, 1, 10, 28, 50, 213, 224 off-screen narration, 80, 82, 89, 91


omission(s), 31, 33, 36, 39, 248–9, 257–8 ONCE (Spanish national association for the blind), 79–80, 83–4, 90 Once Upon a Time in America, 114, 116 operational norm(s), 136 see also norm(s) Oppenheim, A., 164–5, 191 orality, 8, 226, 250–2 Orero, P., 10, 50, 72, 77, 93–5, 139, 223, 225, 251–2, 266, 281 OSS 117 Rio ne répond plus, 144 P Pabsch, A., 216–17, 223 Packer, J., 73, 94 Palmer, A., 72, 95 Paloposki, O., 126, 137, 139 Paolinelli, M., 111, 139 paraphrase, 33, 200, 249, 255–6, 258–9 Pavesi, M., 226, 252 Peli, E., 73, 94 Pereira, A., 220, 223 performing arts, 269–70, 274–5, 277–9 see also visual arts Petré, L., 73, 94 phonetic level, 54, 116 phonic language, 206, 207–9, 219 compare sign language pilot study, 5, 49, 99, 165 pivot translation, 192 Playback for ALL, 274, 277 Pöchhacker, F., 76, 93–4 Polish Public Service Television (TVP), 204–5, 210–13, 224 Polish sign language (PSL), 203–4, 208–10, 212–13, 215–16, 218–21 see also Signed Polish poorly sighted, 100 see also visually impaired post-editing, 173 PRA research project, 78, 92 Pradas Macías, E., 74, 76, 93–4 pragmatic(s), 157–8, 228–9, 241 PRATT software prelingually deaf, 207 prerecorded programmes, 24, 51, 63, 210

process-oriented, 153 product-oriented, 153 professional practices, 1–2, 7, 100, 161, 179–80, 266 proofreading, 172–3, 181, 188 prosodic features, 73–4, 76, 78 prosodic level, 116 Puigdomènech, L., 266, 281 punctuation, 18, 20, 32 Pym, A., 137, 139, 153, 159 Q qualities, 4 nonverbal, 4 vocal, 4, 81, 86, 91–2 quality, 2–10, 11, 24, 26, 28–30, 32–3, 45, 47–54, 56–7, 67, 69–70, 72–80, 82–5, 86–94, 110–12, 116, 121, 124, 127, 130–1, 137–9, 144, 158, 169, 173, 182, 184–6, 188–90, 193, 198, 200–1, 205, 210, 252 assessment, 3–4, 10, 72–5, 77–9, 82–4, 86–8, 90–4, 252, 279 expectations, 4, 90 standards, 3, 6, 9, 158 quasi-live subtitling, 62, 64–5, 68 see also live subtitling see also real-time subtitling questionnaire, 5, 74, 79, 83, 84, 89, 163–5, 171, 188, 191 Quilty-Harper, C., 157, 159 R Radio Television Hong Kong (RTHK), 273 Rander, A., 52, 71 readability, 6, 145, 153–5, 159 reading speed(s), 56, 147 real-time subtitling, 52–3, 55, 58, 65, 69, 71, 157 see also live subtitling see also quasi-live subtitling recognition error, 30, 33–4, 36–7, 39–40, 47–8 see also misrecognition redub / redubbing, 5–6, 110–18, 120–9, 130–1, 132–9 see also retranslation

reduction, 16, 52, 71, 227, 229, 239, 243, 247, 249 redundancy, 202, 250 register, 22, 62, 157, 225–6, 228–9, 232, 234, 243, 250 Reimers, U.H., 65, 71 relevance, 3, 99, 251, 275 Remael, A., 10, 16, 27, 50, 71, 93, 95, 144, 147, 172, 191–2, 198, 202, 250–2, 281 Remote Subtitling, 6, 140–1, 143–7 resizable subtitles, 141 respeaker, 29, 31–4, 41, 43, 45, 47, 49, 53, 54, 56, 61–3, 67 respeaking, 28, 30–1, 39, 43, 47–50, 52–3, 55–7, 59, 61–4, 68, 70–1 resubtitling, 110–11 retention, 247 retranslation, 110–11, 121, 123, 125, 126, 127, 131–2, 136–9 hypothesis, 5, 111, 127 see also redub revision, 102, 115–21, 128–9, 132–3, 154, 163, 172–3, 181–2 revoicing, 23–4, 115, 129 rewriting, 116–17, 128 Reyntjens, M-N., 164, 191 Rio Grande, 210 ritual illocutions, 238, 250 see also formulaic expressions Romero-Fresco, P., 4, 32, 56 Rossi, F., 133, 139 royalties, 168, 177–9, 189–90 The Royal National Institute for Deaf People (RNID), 206 Ruokolainen, T., 54, 71 Russell, J., 83, 95 S Salmon, R.A., 52, 71 Salway, A., 72, 95 Sapsford, R., 164, 191 Saturday Night Fever, 115, 128, 136 SAVAS Project, 48 Schaefer, L., 143, 148 Schaeffer, P., 75, 95 Scherer, K. R., 81, 95 Schmeidler, E., 73, 95


segmentation, 18, 20 semiotic(s), 3, 72, 155, 185 see also intersemiotic sensory impaired, 102, 204–5, 211, 215, 217, 219, 222, 268 see also hearing impaired see also visually impaired serious error, 34–5, 47 Shield, B., 206, 223 Short, M., 226–7, 252 sign language interpreting (SLI), 212–13, 217–18 sign language, 50, 71, 93, 95, 203–4, 206–10, 213, 215–20, 223, 252, 281 see also Polish sign language (PSL) see also Signed Polish (SP) see also Spanish sign language (SSL) compare phonic language Signed Polish (SP), 204, 208–9, 212–13, 221 see also Polish sign language signing system, 8 Siivola, V., 54, 71 simplicity, 233–4, 241, 248–50 Singh, N., 156, 159 sitcom(s), 168, 170, 174 situated quality assessment, 79 see also quality assessment SKY News, 33, 50 slang, 125, 132, 251 smart phone, 6, 147 soap opera(s), 104, 211 social media, 14, 17, 150, 221 Somers, H., 22, 27 song(s), 126, 156 see also music sound effect(s), 78, 82, 85 sound intensity, 74 SoundForge, 57–8 soundtrack(s), 17, 72, 74, 78, 80, 82, 85, 88, 99, 112–13, 115–16, 137, 184, 194, 226, 272–3 source-oriented, 111, 119, 121, 124, 127, 130 compare target-oriented Spanish sign language (SSL), 220 speech act, 238–9


speech recognition, 29–31, 47–50, 53, 67, 70–1, 157 technology / engine, 24, 29, 49, 53, 63, 69 see also automatic speech recognition (ASR) spontaneity, 229, 234, 248 spontaneous-sounding dialogue, 227, 230, 249–50, 260 spotting, 147, 170, 192–3, 195, 201 see also time-cueing standard error, 36 standardization, 71, 100, 151, 198, 215 standards, 3, 6–7, 9, 26, 30, 50–1, 55–6, 65–6, 70–1, 94, 99, 111, 141, 148, 150–2, 155–6, 158, 160, 196, 199, 237 STAW, Polish association of audiovisual translators, 164 stenography, 33, 49, 191 stenotypist, 53 stereoscopic subtitles, 147 SubEdit, 142 SubMovie, 147 subtitle(s) / subtitling, 1, 3–4, 6–8, 16–18, 20–1, 23–4, 26–7, 28–50, 51–9, 61, 62–71, 93, 95, 110–11, 121, 138, 140–8, 155, 157, 163–81, 183–91, 192–202, 203–14, 210–13, 215–16, 220–3, 226–8, 230–52, 268, 281 automatic, 21, 24, 29, 33, 47, 49, 70 delay, 56–9, 62–3, 66, 69 for the deaf and the hard-of-hearing (SDH), 8, 203–5, 210–13, 215–17, 220–2 interlingual, 215–16 intralingual, 24 respoken, 3, 29–33, 42–6, 49 software / equipment, 7, 21, 183–4, 196, 201, 215 see also caption(s) Subtle, UK Subtitlers’ Association, 163, 165 Superman, 115, 117 Susam-Sarajeva, Ş., 137, 139 Sutton-Spence, R., 207, 223 Swan Lake, 274 Swanwick, R., 209, 222

swearword(s), 247–8 see also expletive(s) see also taboo Świdziński, M., 208, 223 synchronization / synchrony / sync, 3, 49, 51–2, 56, 62, 63–71, 196, 227, 246, 250–2, 254, 256–60, 262–4 see also isochrony see also kinesic synchrony see also lip synchrony see also unsynchronized subtitles Synchronized Subtitling in Live Television: Proof of Concept project, 65 Szarkowska, A., 203, 206, 213, 215, 220, 223, 225, 227, 237, 249, 252 Szczepankowski, B., 208–9, 212, 223 T tablet, 6 taboo, 128, 247–8 Tagesschau, 33 target culture norm(s), 121–2, 124, 130, 238 see also norm(s) target-oriented, 111, 117, 125 compare source-oriented TAUS, 150 Telediario TVE, 33, 61, 68 Telegiornale RAI, 33 television (TV), 3, 8, 13–14, 17, 48–50, 51–3, 65, 70–2, 94, 101, 113, 124, 130, 137, 139, 191–2, 203–5, 207, 209, 210–17, 220–1, 225–6, 228, 268–9, 273, 278, 279, 281 template maker, 193, 196, 198 template(s), 7–8, 169–71, 181, 183–6, 192–201 terminology, 7, 157, 184–5, 206 time-cueing, 169–70, 172, 181, 188, 192, 194 see also spotting timing(s), 64, 66, 108, 194 Tomaszewski, P., 208, 223 Tomaszkiewicz, T., 225, 252 Touch of Evil, 115, 121 touch tour, 275–6

training, 30, 32, 45, 47–8, 76, 93, 150, 168, 198, 202, 210, 267, 269–71, 276–7, 278–9 transcription(s), 3, 49, 52, 53–7, 59–64, 66–9, 195 Translation Studies, 6, 27, 71, 75, 93, 138–9, 149, 151, 152–4, 156, 158–60, 251 translation(al) norm(s), 5, 110, 115, 117, 121, 156 see also norm(s) Traynor, R., 207, 223 TV series, 8, 16, 51, 110, 168, 170, 174, 227, 254 TVE / RTVE (Televisión Española), 33, 50, 57–9, 61, 68 U UN Convention on the Rights of Persons with Disabilities, 100, 102, 109, 216 Underworld 3, 144 Unicode, 151, 154 Unicode Consortium, 151, 160 United Nations webcast, 3, 17, 24 unnaturalness, 125 unsynchronized subtitles, 52 see also synchronization usability, 6–7, 151, 153–4, 159 usability guidelines, 154 user preferences, 74 see also audience preferences user-generated, 158 V Valdés, C., 6, 149, 154, 156, 160 Valoroso, N., 110, 133, 139 Vanderschelden, I., 116, 121, 131, 139 video-on-demand, 147 viewers’ habits, 110 see also audience / users / viewer’s preferences viewers’ perception(s), 112 see also audience perception(s) viewer’s preferences, 140 see also audience preferences visibility (of translators), 2, 16, 179, 213


visual arts, 270, 274–5, 279 see also performing arts visually impaired, 5, 73–4, 78–9, 84, 100–1, 105–6, 108, 146, 267–71, 273–6, 278–81 see also poorly sighted vocal stimuli, 72, 82, 85–6 vocatives, 229, 235–8, 242, 249–50 voice artist, 226, 228, 230, 242, 249–50 see also voice talent voice quality, 73–4, 76, 85, 87–8, 91, 116 voice talent, 9, 227, 232, 241, 244, 247, 249 see also voice artist voiceover, 8, 110, 215, 225–52 volume, 74, 81, 226, 271–2 Votisky, A., 110, 139 W Wald, M., 54, 57, 71 Weenink, D., 81, 93 Wehn, K., 110, 136, 139 Wheatley, M., 216–17, 223 Wiener, N., 144, 148 Williams, B., 143, 148 The Wire, 8 Wittner, J., 157, 160 Woll, B., 207, 215, 223 word error rate (WER), 29, 30–1 working conditions, 7, 137, 163, 188–90 World Blind Union, 272, 281 World Health Organisation, 13, 102, 281 World Wide Web consortium (W3C), 150–1, 155, 160 Woźniak, M., 225, 249, 252 Wright, K., 164, 191 Wuthering Heights, 115, 124 Y Yeung, J., 73, 95, 278, 281 YouTube, 14, 22, 24–6, 28, 273 Z Zanotti, S., 5–6, 110, 124, 129, 139 Zappa, F., 141, 148 Zuckerman, E., 157, 160
