Seminar Series

Technologies for Translation and Interpreting: Challenges and Latest Developments 2020/21

Please find below details of the Technologies for Translation and Interpreting: Challenges and Latest Developments seminar series. Due to the current world situation these seminars are taking place online; if you would like to join us, please email April Harper for a link. {April [dot] Harper2 [at]}

Friday, 23 October

Dr Elizabeth Deysel: An Overview of Technology for Interpreters – the what and the why?

Elizabeth Deysel has been working in the field of interpreting for the past ten years. She is currently employed as an interpreter in the National Parliament of South Africa, where she has been interpreting for the past six years. She previously lectured and trained interpreters at the University of the Free State before moving to Stellenbosch, where she worked as an educational interpreter. She completed her Master’s in Interpreting, which focused on computer-assisted interpreter training (CAIT) and how it may be used to improve the self-assessment skills of professional interpreters. She is currently pursuing her PhD in Interpreting at Stellenbosch University with a specific focus on interpreting technology and its implementation in the curriculum for training interpreters. As you may have guessed, she is a lover of “gadgets” and all things tech-related, especially technology for interpreters.

Short abstract:

The webinar starts with a brief introduction to the history of technology and interpreting. It then provides a broad overview of technology and interpreting and of the tools currently available and most frequently used in practice. The two types of technologies to be discussed are: 1) process-oriented technologies, which provide support to the interpreter, and 2) setting-oriented technologies, which shape and change the way interpreting is delivered.

Dr Maria Kunilovskaya: Linguistic Resources in Practical Translation – part 1

Wednesday, 4 November, 11:00-12:30

Dr Maria Kunilovskaya: Linguistic Resources in Practical Translation – part 2

Friday, 6 November, 13:00-14:30

This is a two-part hands-on session designed to introduce a range of freely available corpus-based tools and online resources that can be useful for addressing some of the most typical problems in human translation and in text generation at large. The session will open with the results of the survey offered to the EMTTI 2020 students to explore their interests and technical competence and to fine-tune the content of the Special Seminar to their needs. I will highlight the cognitive affinity between the translation process and corpus use, and the typical problems in practical translation that can be solved with corpora. The main part of the session will cover some basic corpus-linguistic terms and demonstrate several online query interfaces. Most of this part of the session is designed to allow shadowing, to facilitate a “learning-by-doing” approach in education.
The second part of the session provides details on more sophisticated types of queries, but is mostly a follow-up on the practical assignments offered for independent study. The session aims to provide working query skills, to encourage further exploration of corpora and to whet your appetite for more.
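As a minimal illustration of the kind of corpus query such hands-on sessions build on, here is a hypothetical keyword-in-context (KWIC) concordancer in Python. The corpus and function are invented for illustration; real query interfaces such as Sketch Engine offer far richer functionality.

```python
def kwic(corpus, keyword, width=3):
    """Return keyword-in-context lines: `width` tokens either side of each hit."""
    hits = []
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, tok in enumerate(tokens):
            if tok == keyword:
                left = " ".join(tokens[max(0, i - width):i])
                right = " ".join(tokens[i + 1:i + 1 + width])
                hits.append(f"{left} [{tok}] {right}")
    return hits

# Toy corpus, purely illustrative
corpus = [
    "The translator consulted a parallel corpus before translating",
    "A comparable corpus helps check collocations in the target language",
]

for line in kwic(corpus, "corpus"):
    print(line)
```

Each output line shows the keyword bracketed in its immediate context, which is the core of what a concordancing interface displays.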

Suggested background reading:

Thomas, James: Discovering English with Sketch Engine: A Corpus-Based Approach to Language Exploration. 2nd ed. Brno: Versatile, 2016. 228 pp. (Reviewed in Kunilovskaya, Maria and Kovyazina, Marina (2017). Sketch Engine: a toolbox for linguistic discovery. Journal of Linguistics, Slovak Academy of Sciences, Vol. 68, No. 3, pp. 503–507. DOI: 10.2478/jazcas-2018-0005.)

Speaker’s bio

Maria Kunilovskaya has been engaged in translator education for more than 10 years in her role as an Associate Professor in Contrastive Linguistics and Translation Studies at the University of Tyumen, Russia. Lecturing in Translation Studies, Corpus Linguistics and Text Linguistics, she has also been involved with teaching practical translation classes. She is a strong believer in promoting practical corpus skills that can be immediately applied in the everyday activities of a language professional. Her research interests include the construction and exploitation of parallel corpora and corpus-based research into translation competence and translationese, most recently with a strong pull towards computational research methods, especially in the area of human translation quality estimation.

Wednesday, 11 November, 11:00-12:30

Dr Frédéric Blain: Shared tasks in NLP

Wednesday, 25 November, 10:00-11:00 

Prof Mark Shuttleworth, Hong Kong Baptist University: Free translation memory tools: a comparison of some well-known systems.

The use of translation memory tools is now fairly well embedded within the profession. While many translators are obliged to use one or other well-known system, others who are able to choose for themselves are perhaps confused by the sheer choice of systems available. In this talk I will demonstrate Memsource, Wordfast and Matecat and attempt to answer the following two questions: 1) to what extent does a free tool provide you with the functions needed to work at a professional level, and 2) what are the strengths and weaknesses of each of these three systems?

Speaker’s bio:

Mark Shuttleworth has been involved in translation studies research and teaching since 1993, at the University of Leeds, Imperial College London, University College London and, most recently, Hong Kong Baptist University. His publications include the Dictionary of Translation Studies, as well as articles on metaphor in translation, translation technology, translator training, translation and the web, and Wikipedia translation. More recently he has become interested in the use of digital methodologies in translation studies research. His monograph on scientific metaphor in translation, Studying Scientific Metaphor in Translation, was published in 2017 and he is currently working on a second edition of the Dictionary.

Keynote speaker engagements have included translation studies conferences in Poland, China and Malaysia. He has also addressed industry conferences in the UK, Italy and Brazil on the subject of translation technology and has provided training in the same area in Spain, Italy, Portugal, Finland, Tunisia and Malaysia.

Mark Shuttleworth is a fluent speaker of Russian, German, Polish and French and has some knowledge of a number of other languages, including some Chinese. As and when time permits, he is active as a translator.

27 November 2020, 13:00-14:00

Dr Juan José Arevalillo Doval, CEO/Managing Director of Hermes: Quality standardisation in the language industry

Abstract: Quality in the language services industry is a very blurred term, but omnipresent in all activities from the moment a customer requests a translation service to the delivery and final closing of the project. In this process everything is measured, and compliance with all requirements is usually a guarantee of success with the customer. In addition, there are numerous quality standards under ISO’s umbrella covering different services and aspects of this industry, which are applied on a daily basis and also form the basis of numerous academic programmes. Knowing this environment is essential for future professionals so that they know where they fit into the process and how to act within it.


Bio: PhD in Translation from the University of Malaga, MA in Specialised Translation from the Institute of Modern Languages and Translators at Madrid Complutense University, and BA in English Language and Literature from Madrid Complutense University.

In the translation industry since 1980, he is the Managing Director at Hermes Traducciones y Servicios Lingüísticos. He previously worked as a freelance translator and as a language specialist and localiser at Digital Equipment Corporation.

A lecturer in Translation at Alfonso X University (Madrid) and the International University of Valencia (Spain), he is also the professional advisor for future graduates at the former university. He works with other Spanish higher-education institutions such as the Autonomous University of Madrid, the Autonomous University of Barcelona and ISTRAD in Seville.

Formerly Vice-President and Treasurer of the European Union of Associations of Translation Companies (EUATC), he is now the EUATC Youth Ambassador, working to bridge the gap between university and industry and to help new graduates join the professional world. He is also the Chairman of the Spanish Association of LSPs (ASPROSET).

He is Chairman of the Spanish Committee for Translation Services at UNE (the Spanish Standardisation Association) and one of the creators of the EN 15038 and ISO 17100 standards. He is also a member of the ISO TC 37 committee for translation services.

Wednesday, 2 December, 09:00-10:00 

Stephen Doherty, University of New South Wales: Eye movements, cognitive load, and human-computer interaction with translation and interpreting technologies

Technological advances have led to unprecedented changes to translation and interpreting (see Doherty, 2016), chiefly in how we access and use translation and interpreting technologies for a diverse and growing range of professional and personal activities. Previous empirical research on translation and interpreting technologies has yielded a wealth of evidence to advance our understanding and usage of these technologies in addition to making them more visible and accessible. Of particular value amongst this growing body of work is the use of eye tracking in exploring and understanding the psychological and cognitive aspects of translation and interpreting technologies by analysing our eye movements as we interact with these technologies and use their outputs.

In this paper, I will consolidate this work by presenting a critical review of empirical studies of translation and interpreting technologies which have employed eye tracking, including my own recent work in the Language Processing Lab at the University of New South Wales. I will categorise previous research into areas of application, namely: computer-assisted translation tools, quality assessment of machine translation, post-editing machine-translated output, audio-visual translation, and remote interpreting. In closing, I will discuss the strengths and limitations of eye tracking in such studies and outline current and future research.

Suggested background reading:

Doherty, S. (2016). The impact of translation technology on the process and product of translation. International Journal of Communication, 10, 947–969.

Speaker’s bio

I am Associate Professor in Linguistics, Interpreting, and Translation, and lead of the HAL Language Processing Research Lab at UNSW. With a focus on the psychology of language and technology, my research investigates human and machine language processing using natural language processing techniques and combinations of online and offline methods, mainly eye tracking and psychometrics. My research has been supported by the Australian Research Council, Science Foundation Ireland, the European Union, the Federal Bureau of Investigation, the National Accreditation Authority for Translators and Interpreters, the New South Wales Ministry of Health, Enterprise Ireland, and a range of industry collaborations. As a Chief Investigator, I have attracted a career total of $1.5 million in competitive research grants.

Prior to my appointment at UNSW Sydney (2014), my doctoral (2008–2012) and post-doctoral research positions (2012–2013) were funded by Science Foundation Ireland and supervised by Prof Sharon O’Brien, Prof Dorothy Kenny, and Prof Andy Way at the CNGL Centre for Global Intelligent Content in Dublin City University, a multi-million euro, cross-institutional centre now known as the ADAPT Centre for Digital Content Technology. My subsequent post-doctoral position (2013–2014), supervised by Prof Josef Van Genabith, was based in the School of Computing and the National Centre for Language Technology at Dublin City University as part of the QTLaunchPad project, a €2.2 million project funded by the European Union through its Seventh Framework Programme (FP7) for research and technological development.


Friday, 4 December, 16:00-17:00

Prof William D Lewis, University of Washington and Microsoft Translator: Automatic Speech Transcription and Translation in the Classroom and Lecture Setting: The Technologies, How They’re Being Used, and Where We’re Going


We have witnessed significant progress in Automated Speech Recognition (ASR) and Machine Translation (MT) in recent years, so much so that Speech Translation, itself a combination of these underlying technologies, is becoming a viable technology in its own right. Although it is not perfect, many have called what they’ve seen of the current technology the “Universal Translator” or the “mini-UN on a phone”. But we’re not done, and there are many problems to solve. For example, for Speech Translation to work well, it is not sufficient to stitch together the two underlying technologies of ASR and MT and call it done. People are amazingly disfluent, which can have profound negative impacts on transcripts and translations. We need to make the output of ASR more “fluent”; this has the effect of improving the quality of downstream translations. Further, since “fluent” output is much more readable and “caption-like” than disfluent output, it is also more easily consumable by same-language users. This opens doors to broader accessibility scenarios.

Speech Translation is currently being used in a variety of scenarios, nowhere more so than in education. It sees its greatest uptake in settings where one or more speakers need to communicate with a multilingual population. The classroom is a perfect example, but we also see its use in parent-teacher conferences. The underlying technologies can be enhanced further by giving users some control over customizing the underlying models, e.g. to domain-specific vocabulary or speaker accents, significantly improving user experiences. In this talk I will demonstrate the technology in action.
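To illustrate the disfluency-removal idea described above, here is a toy sketch. The filler list and rules are invented for illustration only and bear no relation to Microsoft's actual models, which learn such patterns from data rather than hard-coding them.

```python
import re

# Illustrative filler patterns; a production system learns these from data,
# this sketch just hard-codes a few common English ones.
FILLERS = re.compile(r"\b(um+|uh+|er+|you know|i mean)\b[,]?\s*", re.IGNORECASE)
# Word repetitions such as "the the" -> "the"
REPEATS = re.compile(r"\b(\w+)( \1\b)+", re.IGNORECASE)

def make_fluent(transcript: str) -> str:
    """Remove simple disfluencies so the transcript reads like a caption."""
    text = FILLERS.sub("", transcript)
    text = REPEATS.sub(r"\1", text)
    return re.sub(r"\s+", " ", text).strip()

print(make_fluent("So um the the lecture will uh start at nine you know"))
# -> "So the lecture will start at nine"
```

Cleaning the ASR output in this way benefits both same-language captions and any downstream MT step, since the translation engine no longer sees the fillers and restarts.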

Dr. William Lewis is an Affiliate Assistant Professor at the University of Washington, and until recently, a Principal PM Architect with the Microsoft Translator team.  He has led the Microsoft Translator team’s efforts to build Machine Translation engines for a variety of the world’s languages, including threatened and endangered languages, and has been working with the Translator team on Speech Translation.  He has been leading the efforts to support the features that allow students to use Microsoft Translator in the classroom, both for multilingual and deaf and hard of hearing audiences. 



Before joining Microsoft, Will was Assistant Professor and founding faculty for the Computational Linguistics Master’s Program at the University of Washington. Before that, he was faculty at CSU Fresno, where he helped found the Computational Linguistic and Cognitive Science Programs at the university. 

He received a Bachelor’s degree in Linguistics from the University of California, Davis, and a Master’s and Doctorate in Linguistics, with an emphasis in Computational Linguistics, from the University of Arizona in Tucson. In addition to regularly publishing in the fields of Natural Language Processing and Machine Translation, Will is on the editorial board for the Journal of Machine Translation, has previously served on the board for the Association for Machine Translation in the Americas (AMTA), served as a program chair for the North American Chapter of the Association for Computational Linguistics (NAACL) conference, served as a program chair for the Machine Translation Summit, regularly reviews papers for a number of Computational Linguistics conferences, and has served multiple times as a panelist for the National Science Foundation.

9 December 2020

Prof. Antoni Oliver González, The Open University of Catalonia

Automatic Terminology Extraction

16 December, time tbc

Andrzej Zydroń, XTM International (CTO)

AI and Language Technology: De-demonizing AI


AI gets a lot of attention generally due to the stunning results that can be achieved in fields such as medicine, automotive technology, diagnostic systems and, of course, translation. AI systems can seemingly outperform human beings in a wide range of tasks, from playing chess, Go or even poker, to face and voice recognition. What is often lacking, though, is a more realistic understanding of what intelligence is and of the actual limitations that exist given the computing tools at our disposal.

The reality is much more prosaic: most of the mathematical basis of what is termed AI is not complicated and is generally rooted in 18th-century mathematics, namely work done by Euler and Bayes.

Although some of the achievements of AI-based systems may seem phenomenal, they are achieved through the processing of gigantic amounts of data, which would normally be beyond human capability. The presentation looks at what actually constitutes AI, how it relates to general human intelligence and what implications this has for the translation industry in general.

Andrzej Zydroń MBCS CITP

CTO and co-founder @ XTM International, Andrzej Zydroń is one of the leading IT experts on Localization and related Open Standards. Zydroń sits/has sat on the following Open Standard Technical Committees:


  1. LISA OSCAR xml:tm
  2. W3C ITS
  3. OASIS Translation Web Services
  4. OASIS DITA Translation
  5. DITA Localization
  6. Interoperability Now!
  7. Linport

Zydroń has been responsible for the architecture of the essential word and character count standard GMX-V (Global Information Management Metrics eXchange), as well as the revolutionary xml:tm standard, which will change the way in which we view and use translation memory. Zydroń is also head of the OASIS OAXAL (Open Architecture for XML Authoring and Localization) technical committee.

Zydroń has worked in IT since 1976 and has been responsible for major successful projects at Xerox, SDL, Oxford University Press, Ford of Europe, DocZone and Lingo24.

Friday, 18 December, 15:00-16:00

Lynne Bowker: Machine Translation Literacy

Friday 8 January 2021, 13:00-14:00

Dr Arda Tezcan, Ghent University: Neural Fuzzy Repair: Integrating Fuzzy Matches into Neural Machine Translation


Even though Machine Translation (MT) quality may have increased considerably over the past years, most notably with advances in the field of Neural Machine Translation (NMT), Translation Memories (TMs) still offer some advantages over MT systems. They are not only able to translate previously seen sentences ‘perfectly’, but they also offer ‘near-perfect’ translation quality when highly similar source sentences are retrieved from the TM. As a result, in Computer-Assisted Translation (CAT) workflows, the MT system is often used as a back-off mechanism when the TM fails to retrieve high fuzzy matches above a certain threshold, even though it has been shown that this basic integration method is not always the optimal TM-MT combination strategy.

We present a simple yet powerful data augmentation method for boosting Neural Machine Translation (NMT) performance by leveraging information retrieved from a Translation Memory (TM). Tests on the DGT-TM data set for multiple language pairs show consistent and substantial improvements over a range of baseline systems. The results suggest that this method is promising for any translation environment in which a sizeable TM is available and a certain amount of repetition across translations is to be expected, especially considering its ease of implementation.    
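The back-off strategy described above depends on fuzzy match retrieval from the TM. As an illustrative sketch only, the retrieval step might look as follows, with Python's difflib similarity ratio standing in for whatever match metric a real CAT tool or the presented system uses, and with invented toy data:

```python
from difflib import SequenceMatcher

# Toy translation memory: (source, target) pairs. Illustrative data only.
tm = [
    ("The committee approved the annual report.", "Le comité a approuvé le rapport annuel."),
    ("The meeting is adjourned.", "La séance est levée."),
]

def fuzzy_matches(source, tm, threshold=0.7):
    """Retrieve TM entries whose source side is similar enough to `source`.

    difflib's character-level ratio stands in for the fuzzy match metric
    a real TM system would use.
    """
    scored = []
    for tm_src, tm_tgt in tm:
        score = SequenceMatcher(None, source.lower(), tm_src.lower()).ratio()
        if score >= threshold:
            scored.append((score, tm_src, tm_tgt))
    return sorted(scored, reverse=True)

query = "The committee approved the final report."
for score, src, tgt in fuzzy_matches(query, tm):
    print(f"{score:.2f}  {tgt}")
```

Roughly speaking, the data augmentation idea then feeds the retrieved target side to the NMT system as additional input context alongside the source sentence; this sketch covers only the retrieval step.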

13 January, time tbc

Julie Giguère, Business Director, Asian Absolute

‘Working with Intelligent Machines’ 

Abstract – tbc

Julie C. Giguere 

Holder of degrees in Specialised Translation and Law, Julie has managed translation and communication for BMO Bank of Montreal as well as financial and legal translation projects at major Language Service Providers in France and in the UK. These roles were a natural fit for Julie, a passionate communicator who speaks fluent French, Spanish and English. For the last six years Julie has led Asian Absolute, a boutique language service provider specialising in Asian and ASEAN languages, heading its global Sales and Operations teams. She grew the operation from a handful of staff to a team of more than thirty and oversaw a 100% increase in the acquisition rate of new clients, such as UN Women, headquartered in Bangkok, and WIPO (the World Intellectual Property Organization). She also personally led the start-up of the company’s operations in Bangkok and Panama City. She has over 10 years’ professional experience in multilingual communications and AI applications in linguistics, and recently completed the Oxford University Saïd Business School AI Programme. She is a regular guest speaker at localisation and tech/AI events, having recently spoken at the ATC Summits 2017 and 2018, EUATC 2018, the Connected World Summit 2018, the AI & Big Data Innovation Summit 2018 in Beijing and IP EXPO Manchester 2019.

27 January 2021, 11:00-12:00

Dr Claudia Lecci, University of Bologna: Creating terminological projects for the detection of in-domain terminology: a workflow for interpreters.

Interpreters, as well as translators, need to master the skills and abilities necessary to create a terminological project and to detect the terminology belonging to specific domains of the orality genre.

The typical approach to detecting specialized terminology can be described as a workflow combining different stages, starting from an information-mining stage and ending with the creation of a terminological resource, namely a glossary or a terminology database that, when needed, can also be combined with Computer-Assisted Translation tools.

The workflow presented in this talk starts with the definition of a domain and with the collection of reference materials on the Internet. The second step consists in the construction of specialized comparable corpora from the web using a dedicated tool. The third stage is the corpus-based extraction of simple or complex terms with the help of a concordancing tool. The last step of the workflow is the creation of terminological entries organized in the form of glossaries and/or TermBases.
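The third stage of this workflow (corpus-based term extraction) can be sketched in miniature. The sketch below is hypothetical and assumes a simple frequency-based approach over candidate bigrams; the dedicated concordancing tools used in practice add part-of-speech patterns, statistical association measures and comparison against a reference corpus.

```python
from collections import Counter

# A small hand-picked stopword list, purely for illustration.
STOPWORDS = {"the", "a", "an", "of", "is", "was", "in", "and", "to", "for", "by"}

def candidate_terms(corpus, min_freq=2):
    """Rank stopword-free bigrams by frequency as candidate domain terms."""
    counts = Counter()
    for sentence in corpus:
        tokens = [t.lower().strip(".,;") for t in sentence.split()]
        for w1, w2 in zip(tokens, tokens[1:]):
            if w1 not in STOPWORDS and w2 not in STOPWORDS:
                counts[f"{w1} {w2}"] += 1
    return [(term, n) for term, n in counts.most_common() if n >= min_freq]

# Toy specialized "corpus" in the financial domain, invented for illustration
corpus = [
    "The interest rate was raised by the central bank.",
    "The central bank kept the interest rate unchanged.",
]
print(candidate_terms(corpus))
# -> [('interest rate', 2), ('central bank', 2)]
```

The extracted candidates would then be reviewed and turned into glossary or TermBase entries in the final stage of the workflow.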

The insights gained from this presentation will help trainee and professional interpreters prepare terminological resources both for specific assignments and for more general topics.


Claudia Lecci graduated in Specialised Translation and Translation for the Publishing Industry at the Advanced School of Modern Languages for Interpreters and Translators (now Department of Interpreting and Translation – DIT).

She currently teaches the MA modules “Computer-assisted Translation and web localization” and “Machine Translation and Post-Editing” within the Master’s in Specialized Translation (DIT – Forlì) and coordinates the MA course “Methods and Technologies for Interpreting” within the Master’s in Interpreting (DIT- Forlì). She also teaches the course “Traduzione in Italiano dall’Inglese (assistita)” as part of the Bachelor’s in Intercultural and Linguistic Mediation (DIT – Forlì).

She is an SDL Trados Authorised Trainer for SDL Trados Studio 2021 and SDL MultiTerm 2021.

Date & Time tbc

Professor Ruslan Mitkov, University of Wolverhampton, Don’t get me wrong…What’s next? 

Intelligent Translation Memory Systems…

We witnessed the birth of the modern computer between 1943 and 1946; it was not long after that Warren Weaver wrote his famous memorandum in 1949, suggesting that translation by machine would be possible. Weaver’s dream did not quite come true: while automatic translation went on to work reasonably well in some scenarios and to do well for gisting purposes, even today, against the background of the recent promising results delivered by statistical Machine Translation (MT) systems such as Google Translate and the latest developments in Neural Machine Translation and in Deep Learning for MT in general, automatic translation often gets it wrong and is not good enough for professional translation.

Consequently, there has been a pressing need for a new generation of tools to assist professional translators reliably and to speed up the translation process. Krollman first put forward the reuse of existing human translations in 1971. A few years later, in 1979, Arthern went further and proposed the retrieval and reuse not only of identical text fragments (exact matches) but also of similar source sentences and their translations (fuzzy matches). It took another decade before the ideas sketched by Krollman and Arthern were commercialised through the development of various computer-aided translation (CAT) tools, such as Translation Memory (TM) systems, in the early 1990s. These translation tools revolutionised the work of translators, and the last two decades have seen dramatic changes in the translation workflow.

TM systems have indeed revolutionised the work of translators, and the translators not benefiting from these tools are now a tiny minority. However, while these tools have proven very efficient for repetitive and voluminous texts, are they intelligent enough? Unfortunately, they mostly operate on fuzzy (surface) matching, cannot benefit from already translated texts which are synonymous with (or paraphrased versions of) the text to be translated, and can be ‘fooled’ on numerous occasions.

What is next in the translation world? We cannot get it wrong, as we cannot let the translation go wrong: it is obvious that the next generation of TM systems will have to be more intelligent. A way forward would be to equip TM tools with Natural Language Processing (NLP) capabilities, and NLP can propose solutions towards this objective. The invited talk will present recent and latest work by the speaker and his research group towards achieving this. More specifically, the speaker will explain how two NLP methods/tasks, namely paraphrasing and clause splitting, make it possible for TM systems to identify semantically equivalent sentences which are not necessarily identical or syntactically close, and thus enhance performance. The first evaluation results of this new generation of TM matching technology are already promising…
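The limitation of surface matching can be seen in a toy example. The sentences and scores below are invented for illustration, with Python's difflib character-level ratio standing in for a classic TM surface metric:

```python
from difflib import SequenceMatcher

def surface_match(a, b):
    """Surface (character-level) fuzzy score, as classic TM systems use."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# A paraphrase pair: same meaning, very different surface form.
seen = "The deadline for submissions has been extended by two weeks."
paraphrase = "Submissions will now be accepted for a further fortnight."
# A near-identical pair: trivially different surface form.
near_identical = "The deadline for submissions has been extended by two days."

print(f"paraphrase:     {surface_match(seen, paraphrase):.2f}")      # low score
print(f"near-identical: {surface_match(seen, near_identical):.2f}")  # high score
```

The paraphrase scores far below any useful fuzzy threshold even though its translation could be reused, while the near-identical sentence (whose translation differs in meaning) scores very high; this is exactly the gap that semantically aware, NLP-enhanced matching aims to close.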


Professor Ruslan Mitkov: short bio

Prof Dr Ruslan Mitkov has been working in Natural Language Processing (NLP), Computational Linguistics, Corpus Linguistics, Machine Translation, Translation Technology and related areas since the early 1980s. Whereas Prof Mitkov is best known for his seminal contributions to the areas of anaphora resolution and automatic generation of multiple-choice tests, his extensively cited research (more than 260 publications including 17 books, 40 journal articles and 40 book chapters) also covers topics such as machine translation, translation memory and translation technology in general, bilingual term extraction, automatic identification of cognates and false friends, natural language generation, automatic summarisation, computer-aided language processing, centering, evaluation, corpus annotation, NLP-driven corpus-based study of translation universals, text simplification, NLP for people with language disabilities and computational phraseology. Current topics of research interest include the employment of deep learning techniques in translation and interpreting technology as well as conceptual difficulty for text processing and translation. Mitkov is author of the monograph Anaphora resolution (Longman) and Editor of the most successful Oxford University Press Handbook – The Oxford Handbook of Computational Linguistics. Current prestigious projects include his role as Executive Editor of the Journal of Natural Language Engineering published by Cambridge University Press and Editor-in-Chief of the Natural Language Processing book series of John Benjamins publishers. Dr Mitkov is also working on the forthcoming Oxford Dictionary of Computational Linguistics (Oxford University Press, co-authored with Patrick Hanks) and the forthcoming second, substantially revised edition of the Oxford Handbook of Computational Linguistics. 
Prof Mitkov designed the first international Erasmus Mundus Master programme on Technology for Translation and Interpreting which was awarded competitive EC funding and which he leads as Project Coordinator. Dr Mitkov has been invited as a keynote speaker at a number of international conferences including conferences on translation and translation technology; he has acted as Programme Chair of various international conferences on Natural Language Processing (NLP), Machine Translation, Translation Technology (including the annual London conference ‘Translation and the Computer’), Translation Studies, Corpus Linguistics and Anaphora Resolution. Dr Mitkov is asked on a regular basis to review for leading international funding bodies and organisations and to act as a referee for applications for Professorships both in North America and Europe. Ruslan Mitkov is regularly asked to review for leading journals, publishers and conferences and serve as a member of Programme Committees or Editorial Boards. Prof Mitkov has been an external examiner of many doctoral theses and curricula in the UK and abroad, including Master’s programmes related to NLP, Translation and Translation Technology. Dr Mitkov has considerable external funding to his credit (more than €25,000,000) and is currently acting as Principal Investigator of several large projects, some of which are funded by UK research councils, by the EC as well as by companies and users from the UK and USA. Ruslan Mitkov received his MSc from the Humboldt University in Berlin, his PhD from the Technical University in Dresden and worked as a Research Professor at the Institute of Mathematics, Bulgarian Academy of Sciences, Sofia. Mitkov is Professor of Computational Linguistics and Language Engineering at the University of Wolverhampton which he joined in 1995 and where he set up the Research Group in Computational Linguistics.
His Research Group has emerged as an internationally leading unit in applied Natural Language Processing and has been ranked world No. 1 in different international NLP competitions. In addition to being Head of the Research Group in Computational Linguistics, Prof Mitkov is also Director of the Research Institute in Information and Language Processing. The Research Institute consists of the Research Group in Computational Linguistics and the Research Group in Statistical Cybermetrics, which is another top performer internationally. Ruslan Mitkov is Vice President of ASLING, an international Association for promoting Language Technology. Dr Mitkov is Fellow of the Alexander von Humboldt Foundation, Germany, Marie Curie Fellow and Distinguished Visiting Professor at the University of Franche-Comté in Besançon, France; he also serves as Vice-Chair for the prestigious EC funding programme ‘Future and Emerging Technologies’. In recognition of his outstanding professional/research achievements, Prof Mitkov was awarded the title of Doctor Honoris Causa at Plovdiv University in November 2011. At the end of October 2014 Dr Mitkov was also conferred Professor Honoris Causa at Veliko Tarnovo University.


Date & Time tbc

Tharindu Ranasinghe, University of Wolverhampton

‘Intelligent Translation Memory Matching and Retrieval with Sentence Encoders’