Seminar Series Archive 2020-21


Technologies for Translation and Interpreting: Challenges and Latest Developments

This vibrant seminar series hosts leading scholars and company CEOs, who report on their work and vision for technology for translators and interpreters, covering, among other topics, translation and interpreting tools and resources as well as Natural Language Processing and Artificial Intelligence solutions. The seminar series has both a strong research focus and a strong industrial focus, and as such it serves not only as a forum showcasing the latest research, professional practices, software and business developments, but also as a bridge between academia and the industry.

This seminar series is hosted by Professor Ruslan Mitkov.

Friday, 23 October

Dr Elizabeth Deysel: An Overview of Technology for Interpreters – the what and the why?

Elizabeth Deysel has been working in the field of interpreting for the past ten years. She is currently employed as an interpreter in the National Parliament of South Africa, where she has been interpreting for the past six years. She previously lectured and trained interpreters at the University of the Free State before moving to Stellenbosch, where she worked as an educational interpreter. She completed her Master's in Interpreting, which focused on computer-assisted interpreter training (CAIT) and how it may be used to improve the self-assessment skills of professional interpreters. She is currently pursuing her PhD in Interpreting at Stellenbosch University, with a specific focus on interpreting technology and its implementation in the curriculum for training interpreters. As you may have guessed, she is a lover of “gadgets” and all things tech-related, especially technology for interpreters.

Short abstract:

The webinar starts with a brief introduction to the history of technology and interpreting. It then provides a broad overview of technology and interpreting and of which tools are currently available and used most frequently in practice. The two types of technologies to be discussed are: 1) process-oriented technologies, which provide support to the interpreter, and 2) setting-oriented technologies, which shape and change the way interpreting is delivered.

Dr Maria Kunilovskaya: Linguistic Resources in Practical Translation – part 1

Wednesday, 4 November, 11:00-12:30

Dr Maria Kunilovskaya: Linguistic Resources in Practical Translation – part 2

Friday, 6 November, 13:00-14:30

This is a two-part hands-on session designed to introduce a range of freely available corpus-based tools and online resources that can be useful for addressing some of the most typical problems in human translation and in text generation at large. The session will open with the results of the survey offered to the EMTTI 2020 students to explore their interests and technical competence and to fine-tune the content of the Special Seminar to their needs. I will highlight the cognitive affinity between the translation process and corpus use, and the typical problems in practical translation that can be solved with corpora. The main part of the session will cover some basic corpus-linguistic terms and demonstrate several online query interfaces. Most of this part of the session is designed to allow shadowing, to facilitate a “learning-by-doing” approach.
The second part of the session provides details on more sophisticated types of queries, but it is mostly a follow-up on a few practical assignments offered for independent study. The session aims to provide working query skills, to encourage further research into using corpora and to whet your appetite for more.
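The concordance queries demonstrated in corpus interfaces such as Sketch Engine can be illustrated with a minimal keyword-in-context (KWIC) concordancer. This is a toy sketch only; the sample corpus and the context window size are illustrative assumptions, not part of any particular tool.

```python
# Minimal keyword-in-context (KWIC) concordance: find every occurrence
# of a keyword and show a few words of context on either side.

def kwic(tokens, keyword, window=3):
    """Return (left context, keyword, right context) for each hit."""
    hits = []
    for i, tok in enumerate(tokens):
        if tok.lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append((left, tok, right))
    return hits

corpus = "The translator checked the corpus before the translator delivered".split()
for left, kw, right in kwic(corpus, "translator"):
    print(f"{left:>30} | {kw} | {right}")
```

Real corpus interfaces add lemmatisation, part-of-speech filters and much larger indexes, but the underlying query shape is the same.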

Suggested background reading:

Thomas, James: Discovering English with Sketch Engine: A Corpus-Based Approach to Language Exploration. 2nd ed. Brno: Versatile, 2016. 228 pp. (Reviewed in Kunilovskaya, Maria and Kovyazina, Marina (2017). Sketch Engine: a toolbox for linguistic discovery. Journal of Linguistics, Slovak Academy of Sciences, Vol. 68, No. 3, 503-507. DOI: 10.2478/jazcas-2018-0005.)

Speaker’s bio

Maria Kunilovskaya has been engaged in translator education for more than 10 years in her role as an Associate Professor in Contrastive Linguistics and Translation Studies at the University of Tyumen, Russia. Lecturing in Translation Studies, Corpus Linguistics and Text Linguistics, she has also been involved in teaching practical translation classes. She is a strong believer in promoting practical corpus skills that can be immediately applied in the everyday activities of a language professional. Her research interests include the construction and exploitation of parallel corpora and corpus-based research into translation competence and translationese, most recently with a strong pull towards computational research methods, especially in the area of human translation quality estimation.


Dr Frédéric Blain, University of Wolverhampton

Wednesday, 11 November, 11:00-12:30

Shared tasks in NLP


Shared tasks play an important role in identifying interest in complex problems in a field, as well as in quantifying the progress made during a given period of time. In this seminar, we will revisit the history and key aspects of shared tasks in the field of Natural Language Processing. I will share with you my experience as co-organiser of the Quality Estimation shared task at the Conference on Machine Translation (WMT). Finally, we will discuss some ethical considerations arising in the field with regard to the organisation of, and participation in, such challenges.



Fred Blain is a Senior Lecturer of Translation Technology at the University of Wolverhampton and a member of the Research Group on Computational Linguistics (RGCL).

Prior to joining RGCL, Fred was a research associate in Machine Translation in Prof. Lucia Specia's group at the University of Sheffield. There he worked on discriminative training algorithms for Statistical Machine Translation and continuous adaptation from post-editing workflows within the scope of the EU H2020 QT21 project. He then turned to Quality Estimation for Machine Translation, a topic he has been working on since, in close collaboration with Lucia Specia. Together, they successfully secured several research grants (an Amazon Research Award grant, an EAMT grant and, more recently, Bergamot, an EU H2020 project), leading to many publications and to DeepQuest, the first open-source toolkit for quality estimation for neural Machine Translation.

Fred holds a PhD in Computer Science from Le Mans Université (France), which he defended in 2013. As a PhD student, he worked on post-editing, continuous adaptation as well as domain and project adaptations for Machine Translation under the supervision of Holger Schwenk (Facebook) and Jean Senellart (Systran). He pursued his PhD work as a postdoctoral researcher at LIUM, the Computer Science Laboratory of Le Mans Université, by joining the EU FP7 MateCAT project. He also has experience in industry having held a research engineer position at Systran during his PhD. 

Prof Mark Shuttleworth, Hong Kong Baptist University: Free translation memory tools: a comparison of some well-known systems.

Wednesday, 25 November, 10:00-11:00 

The use of translation memory tools is now fairly well embedded within the profession. While many translators are obliged to use one or other well-known system, others who are able to choose for themselves are perhaps confused by the sheer number of systems available. In this talk I will demonstrate Memsource, Wordfast and Matecat and attempt to answer the following two questions: 1) to what extent does a free tool provide the functions needed to work at a professional level, and 2) what are the strengths and weaknesses of each of these three systems?

Speaker’s bio:

Mark Shuttleworth has been involved in translation studies research and teaching since 1993, at the University of Leeds, Imperial College London, University College London and, most recently, Hong Kong Baptist University. His publications include the Dictionary of Translation Studies, as well as articles on metaphor in translation, translation technology, translator training, translation and the web, and Wikipedia translation. More recently he has become interested in the use of digital methodologies in translation studies research. His monograph on scientific metaphor in translation, Studying Scientific Metaphor in Translation, was published in 2017 and he is currently working on a second edition of the Dictionary.

Keynote speaker engagements have included translation studies conferences in Poland, China and Malaysia. He has also addressed industry conferences in the UK, Italy and Brazil on the subject of translation technology and has provided training in the same area in Spain, Italy, Portugal, Finland, Tunisia and Malaysia.

Mark Shuttleworth is a fluent speaker of Russian, German, Polish and French and has some knowledge of a number of other languages, including some Chinese. As and when time permits, he is active as a translator.

Dr Juan José Arevalillo Doval, CEO/Managing Director of Hermes: Quality standardisation in the language industry

27 November 2020, 13:00-14:00

Abstract: Quality in the language services industry is a very blurred term, yet omnipresent in all activities, from the moment a customer requests a translation service to the delivery and final closing of the project. In this process everything is measured, and compliance with all requirements is usually a guarantee of success with the customer. In addition, there are numerous quality standards under ISO's umbrella covering different services and aspects of this industry; these are applied on a daily basis and also form the basis of numerous academic programmes. Knowing this environment is essential for future professionals so that they know where they fit into the process and how to act within it.


Bio: PhD in Translation from the University of Malaga, MA in Specialised Translation from the Institute of Modern Languages and Translators at the Complutense University of Madrid, and BA in English Language and Literature from the Complutense University of Madrid.

In the translation industry since 1980, he is the Managing Director of Hermes Traducciones y Servicios Lingüísticos. He previously worked as a freelance translator and as a language specialist and localiser at Digital Equipment Corporation.

A lecturer in Translation at Alfonso X University (Madrid) and the International University of Valencia (Spain), he is also the professional advisor for future graduates at the former university. He works with other Spanish higher-education institutions such as the Autonomous University of Madrid, the Autonomous University of Barcelona and ISTRAD in Seville.

Formerly Vice-President and Treasurer of the European Union of Associations of Translation Companies (EUATC), he is now the EUATC Youth Ambassador, aiming to bridge the gap between university and industry and to help new graduates join the professional world. He is also the Chairman of the Spanish Association of LSPs (ASPROSET).

Chairman of the Spanish Committee for Translation Services at UNE (the Spanish Standardisation Association), he was one of the creators of the EN 15038 and ISO 17100 standards. He is also a member of the ISO TC 37 committee for translation services.

Stephen Doherty, University of New South Wales: Eye movements, cognitive load, and human-computer interaction with translation and interpreting technologies

Wednesday, 2 December, 09:00-10:00 

Technological advances have led to unprecedented changes to translation and interpreting (see Doherty, 2016), chiefly in how we access and use translation and interpreting technologies for a diverse and growing range of professional and personal activities. Previous empirical research on translation and interpreting technologies has yielded a wealth of evidence to advance our understanding and usage of these technologies in addition to making them more visible and accessible. Of particular value amongst this growing body of work is the use of eye tracking in exploring and understanding the psychological and cognitive aspects of translation and interpreting technologies by analysing our eye movements as we interact with these technologies and use their outputs.

In this paper, I will consolidate this work by presenting a critical review of empirical studies of translation and interpreting technologies which have employed eye tracking, including my own recent work in the Language Processing Lab at the University of New South Wales. I will categorise previous research into areas of application, namely: computer-assisted translation tools, quality assessment of machine translation, post-editing machine-translated output, audio-visual translation, and remote interpreting. In closing, I will discuss the strengths and limitations of eye tracking in such studies and outline current and future research.

Suggested background reading:

Doherty, S. (2016). The impact of translation technology on the process and product of translation. International Journal of Communication, 10, 947–969.

Speaker’s bio

I am Associate Professor in Linguistics, Interpreting, and Translation, and lead of the HAL Language Processing Research Lab at UNSW. With a focus on the psychology of language and technology, my research investigates human and machine language processing using natural language processing techniques and combinations of online and offline methods, mainly eye tracking and psychometrics. My research has been supported by the Australian Research Council, Science Foundation Ireland, the European Union, the Federal Bureau of Investigation, the National Accreditation Authority for Translators and Interpreters, New South Wales Ministry of Health, Enterprise Ireland, and a range of industry collaborations. As a Chief Investigator, I have secured a career total of $1.5 million in competitive research grants.

Prior to my appointment at UNSW Sydney (2014), my doctoral (2008–2012) and post-doctoral research positions (2012–2013) were funded by Science Foundation Ireland and supervised by Prof Sharon O’Brien, Prof Dorothy Kenny, and Prof Andy Way at the CNGL Centre for Global Intelligent Content in Dublin City University, a multi-million euro, cross-institutional centre now known as the ADAPT Centre for Digital Content Technology. My subsequent post-doctoral position (2013–2014), supervised by Prof Josef Van Genabith, was based in the School of Computing and the National Centre for Language Technology at Dublin City University as part of the QTLaunchPad project, a €2.2 million project funded by the European Union through its Seventh Framework Programme (FP7) for research and technological development.

Prof William D Lewis, University of Washington and Microsoft Translator

Automatic Speech Transcription and Translation in the Classroom and Lecture Setting: The Technologies, How They’re Being Used, and Where We’re Going        

Friday, 4 December, 16:00-17:00


We have witnessed significant progress in Automated Speech Recognition (ASR) and Machine Translation (MT) in recent years, so much so that Speech Translation, itself a combination of these underlying technologies, is becoming a viable technology in its own right. Although not perfect, many have called what they've seen of the current technology the “Universal Translator” or the “mini-UN on a phone”. But we're not done, and there are many problems to solve. For example, for Speech Translation to work well, it is not sufficient to stitch together the two underlying technologies of ASR and MT and call it done. People are amazingly disfluent, which can have profound negative impacts on transcripts and translations. We need to make the output of ASR more “fluent”; this improves the quality of downstream translations. Further, since “fluent” output is much more readable and “caption-like” than disfluent output, it is also more easily consumable by same-language users, which opens doors to broader accessibility scenarios.

Speech Translation is currently being used in a variety of scenarios, nowhere more so than in education. It sees its greatest uptake in settings where one or more speakers need to communicate with a multilingual population. The classroom is a perfect example, but we also see its use in parent-teacher conferences. The underlying technologies can be enhanced further by giving users some control over customising the underlying models, e.g. adapting them to domain-specific vocabulary or speaker accents, significantly improving user experiences. In this talk I will demonstrate the technology in action.
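The "make ASR output more fluent" step described above can be sketched, in heavily simplified form, as a token-level disfluency filter. The filler list and the repetition rule here are illustrative assumptions for the sketch, not Microsoft Translator's actual pipeline.

```python
# Toy disfluency filter: drop filler words and collapse immediate word
# repetitions in raw ASR output before it is translated or captioned.

FILLERS = {"um", "uh", "er", "erm"}

def make_fluent(asr_text: str) -> str:
    tokens = asr_text.lower().split()
    out = []
    for tok in tokens:
        if tok in FILLERS:            # strip filler words
            continue
        if out and out[-1] == tok:    # collapse "at at" -> "at"
            continue
        out.append(tok)
    return " ".join(out)

print(make_fluent("um the the meeting uh starts at at nine"))
# -> "the meeting starts at nine"
```

Production systems model disfluency with far richer sequence models (restarts, repairs, punctuation restoration), but even this crude pass shows why cleaned ASR output reads more like captions and translates better.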

Dr. William Lewis is an Affiliate Assistant Professor at the University of Washington, and until recently, a Principal PM Architect with the Microsoft Translator team.  He has led the Microsoft Translator team’s efforts to build Machine Translation engines for a variety of the world’s languages, including threatened and endangered languages, and has been working with the Translator team on Speech Translation.  He has been leading the efforts to support the features that allow students to use Microsoft Translator in the classroom, both for multilingual and deaf and hard of hearing audiences. 



Before joining Microsoft, Will was Assistant Professor and founding faculty for the Computational Linguistics Master’s Program at the University of Washington. Before that, he was faculty at CSU Fresno, where he helped found the Computational Linguistic and Cognitive Science Programs at the university. 

He received a Bachelor's degree in Linguistics from the University of California, Davis and a Master's and Doctorate in Linguistics, with an emphasis in Computational Linguistics, from the University of Arizona in Tucson. In addition to regularly publishing in the fields of Natural Language Processing and Machine Translation, Will is on the editorial board of the Journal of Machine Translation, has previously served on the board of the Association for Machine Translation in the Americas (AMTA), served as a program chair for the North American Chapter of the Association for Computational Linguistics (NAACL) conference, served as a program chair for the Machine Translation Summit, regularly reviews papers for a number of Computational Linguistics conferences, and has served multiple times as a panelist for the National Science Foundation.

Dr Antoni Oliver González, The Open University of Catalonia

9 December 2020, 10:00 – 11:00

Techniques for automatic terminology extraction: implementation into TBXTools

This talk presents the main techniques for automatic terminology extraction and for the automatic detection of translation equivalents of terms. It includes an explanation of how these techniques are implemented in TBXTools, a free tool for terminology extraction. We will explore statistical and linguistic methodologies for terminology extraction, and we will also present the implementation of the automatic search for translation equivalents of terms in parallel corpora and in statistical machine translation phrase tables.
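A purely statistical term-candidate extractor of the kind the talk covers can be sketched by scoring bigrams with pointwise mutual information (PMI). This is a generic illustration, not TBXTools' actual implementation; the toy corpus and frequency threshold are assumptions, and real extractors add linguistic (POS-pattern) filters on top.

```python
from collections import Counter
from math import log2

# Score bigram term candidates by pointwise mutual information:
# PMI(w1, w2) = log2( P(w1, w2) / (P(w1) * P(w2)) ).

def pmi_bigrams(tokens, min_count=2):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scored = {}
    for (w1, w2), c in bigrams.items():
        if c < min_count:             # drop rare, unreliable candidates
            continue
        pmi = log2((c / (n - 1)) / ((unigrams[w1] / n) * (unigrams[w2] / n)))
        scored[(w1, w2)] = pmi
    return sorted(scored.items(), key=lambda kv: -kv[1])

text = ("translation memory systems store translation memory segments "
        "while machine translation systems generate new segments").split()
for (w1, w2), score in pmi_bigrams(text):
    print(w1, w2, round(score, 2))
```

On this tiny sample only "translation memory" recurs often enough to surface as a candidate; on a real corpus the ranked list is then filtered by part-of-speech patterns such as NOUN+NOUN.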

Dr. Antoni Oliver

Antoni Oliver is a lecturer at the Universitat Oberta de Catalunya (UOC, Barcelona, Spain) and the director of the master's degree in Translation and Technologies. His main areas of research are machine translation and automatic terminology extraction. He is developing TBXTools, a free terminology extraction tool, which is available online.

Diana Ballard, STAR Group

11 December 2020, 12:00

Scaling language technologies to overcome growing challenges and deliver new value

To tackle growing complexity, volumes, languages, channels and media, how are language technologies shaping up to meet today's challenges, and what developments are on the horizon? In this session, we will explore the importance of workflow automation in blending the language technology ecosystem to tackle growing complexity, showcasing STAR CLM (Corporate Language Management). Secondly, we will discuss a real-world case where blending language technologies can answer new challenges. Our example will focus on unlocking the value of Big Data to deliver reliable Translation Memory through a process of alignment and Machine Translation.


For over 20 years, Diana has worked in international business development and global account management at language service companies. Prior to this, she was technical communications manager at a Japanese manufacturing company migrating its technical information operations from Japan to the UK, where she supervised the end-to-end information process from authoring, translation and approval to printing and delivery to the assembly line. Previously, Diana gained early experience at a business development consultancy, having graduated from the University of Liverpool, where she read English Literature and French.

Andrzej Zydroń, XTM International (CTO)

16 December, 11:00-12:00

AI and Language Technology: De-demonizing AI

AI gets a lot of attention, generally due to the stunning results that can be achieved in fields such as medicine, automotive technology, diagnostic systems and, of course, translation. AI systems can seemingly outperform human beings in a wide range of tasks, from playing chess, Go or even poker, to face and voice recognition. What is often lacking, though, is a more realistic understanding of what intelligence is and of the actual limitations that exist given the computing tools at our disposal.

The reality is much more prosaic: most of the mathematical basis of what is termed AI is not complicated and is generally rooted in 18th-century mathematics, namely work done by Euler and Bayes.

Although some of the achievements of AI-based systems may seem phenomenal, they are achieved through the processing of gigantic amounts of data, which would normally be beyond human capability. The presentation looks at what actually constitutes AI, how it relates to general human intelligence, and what implications this has for the translation industry in general.
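The 18th-century arithmetic the abstract alludes to really is simple; a single application of Bayes' rule, with made-up probabilities for a toy word-based classification, fits in a few lines:

```python
# Bayes' rule on one feature: P(spam | word) =
#   P(word | spam) * P(spam) / P(word).
# All probabilities below are invented for the illustration.

p_spam = 0.2                  # prior P(spam)
p_word_given_spam = 0.6       # P("free" | spam)
p_word_given_ham = 0.05       # P("free" | not spam)

# Total probability of seeing the word at all.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior belief after observing the word.
p_spam_given_word = p_word_given_spam * p_spam / p_word

print(round(p_spam_given_word, 3))  # -> 0.75
```

The "phenomenal" results come not from deeper mathematics than this, but from repeating such updates over gigantic amounts of data.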

Andrzej Zydroń MBCS CITP

CTO and co-founder @ XTM International, Andrzej Zydroń is one of the leading IT experts on Localization and related Open Standards. Zydroń sits/has sat on the following Open Standard Technical Committees:


  1. LISA OSCAR xml:tm
  2. W3C ITS
  3. OASIS Translation Web Services
  4. OASIS DITA Translation
  5. DITA Localization
  6. Interoperability Now!
  7. Linport

Zydroń has been responsible for the architecture of the essential word- and character-count standard GMX-V (Global Information Management Metrics eXchange), as well as the revolutionary xml:tm standard, which will change the way in which we view and use translation memory. Zydroń is also head of the OASIS OAXAL (Open Architecture for XML Authoring and Localization) technical committee.

Zydroń has worked in IT since 1976 and has been responsible for major successful projects at Xerox, SDL, Oxford University Press, Ford of Europe, DocZone and Lingo24.

Lynne Bowker, School of Translation & Interpretation, University of Ottawa

Friday, 18 December, 15:00-16:00

Machine translation literacy in the context of non-professional translation

We recently passed the 70th anniversary of Weaver's Memorandum (1949), which is widely acknowledged as having launched machine translation (MT) research. A lot has happened in that 70-year period, including the introduction of free, online machine translation accessible to anyone with an internet connection. Through university-based translator education programs and professional development opportunities offered by professional translators' associations, language professionals have numerous opportunities to learn more about how to interact effectively with MT tools. But these tools are no longer solely in the hands of language professionals; they are also “in the wild”. How and why are non-professional users employing MT? What do they need to be aware of to use it effectively? What support is available to non-professional users of MT? Why should developers care about non-professional users? In this talk, we will explore the notion of “machine translation literacy”, examine some of the needs of non-professional MT users, consider the social responsibility of translators toward non-professional users, and discuss the results of two different efforts to deliver MT literacy training to non-professional users (one as part of a workshop offered through a university library, and one as part of a first-year university course on translation for non-language professionals).



Lynne Bowker is a certified (French-English) translator and holds a PhD in Language Engineering from the University of Manchester Institute of Science and Technology (UK). She is Full Professor at the School of Translation and Interpretation at the University of Ottawa (Canada), with a cross-appointment to the School of Information Studies. She is the author of Computer-Aided Translation Technology (University of Ottawa Press, 2002) and co-author of both Working with Specialized Language: A Practical Guide to Using Corpora (Routledge, 2002) and Machine Translation and Global Research (Emerald, 2019). In 2020, she was elected to the Royal Society of Canada in recognition of her contributions to research in translation technologies.

Dr Arda Tezcan, Ghent University: Neural Fuzzy Repair: Integrating Fuzzy Matches into Neural Machine Translation

Friday 8 January 2021, 13:00-14:00


Even though Machine Translation (MT) quality has increased considerably over the past years, most notably with advances in the field of Neural Machine Translation (NMT), Translation Memories (TMs) still offer some advantages over MT systems. Not only can they translate previously seen sentences ‘perfectly’, but they also offer ‘near-perfect’ translation quality when highly similar source sentences are retrieved from the TM. As a result, in Computer-Assisted Translation (CAT) workflows, the MT system is often used as a back-off mechanism when the TM fails to retrieve high fuzzy matches above a certain threshold, even though it has been shown that this basic integration method is not always the optimal TM-MT combination strategy.

We present a simple yet powerful data augmentation method for boosting Neural Machine Translation (NMT) performance by leveraging information retrieved from a Translation Memory (TM). Tests on the DGT-TM data set for multiple language pairs show consistent and substantial improvements over a range of baseline systems. The results suggest that this method is promising for any translation environment in which a sizeable TM is available and a certain amount of repetition across translations is to be expected, especially considering its ease of implementation.    
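The core augmentation idea can be sketched as follows. This is a simplified illustration of the approach described in the abstract, not the paper's actual implementation: difflib's character-level similarity stands in for the fuzzy-match metrics used in the work, and the tiny TM, threshold and separator token are assumptions for the sketch.

```python
from difflib import SequenceMatcher

# Sketch of TM-based data augmentation for NMT: for each source sentence,
# retrieve the most similar TM source above a fuzzy threshold and append
# its target translation to the NMT input, separated by a marker token.

TM = [
    ("the committee approved the report", "le comité a approuvé le rapport"),
    ("the meeting starts at nine", "la réunion commence à neuf heures"),
]

def augment(src, tm=TM, threshold=0.6, sep=" ||| "):
    best_score, best_tgt = 0.0, None
    for tm_src, tm_tgt in tm:
        score = SequenceMatcher(None, src, tm_src).ratio()
        if score > best_score:
            best_score, best_tgt = score, tm_tgt
    if best_score >= threshold:
        return src + sep + best_tgt   # fuzzy match found: enrich the input
    return src                        # no usable match: plain NMT input

print(augment("the committee approved the new report"))
```

The NMT model is then trained on such augmented inputs, so at inference time it can copy and adapt the retrieved translation rather than generate everything from scratch.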

Jochen Hummel, co-founder and CEO of Coreon

13 January 2021, 9.30-10.30

Sunsetting CAT – Threat and Opportunity

For decades the basic architecture of Computer Assisted Translation (CAT) has remained unchanged, and with it the industry's business model. The advances in Neural Machine Translation (NMT) have now made CAT tools as we know them obsolete. For more and more projects, NMT is achieving “human parity”. That changes everything. Different actors using innovative tools in re-engineered workflows demand new business models, but also offer opportunities for innovative service providers.


Jochen Hummel is co-founder and CEO of Coreon, the leading SaaS solution for multilingual knowledge systems. He is also CEO of ESTeam AB, a provider of language technology and semantic solutions to EU organisations and corporations, and serves as vice-chairman of LT-Innovate, the Forum for Europe's Language Technology Industry. He has a software development background and grew his first company, TRADOS, into the world leader in translation memory and terminology software. In 2006 he founded Metaversum, the inventor of the virtual online world Twinity, and was its CEO until 2010. He is a well-known, internationally respected software executive and serial entrepreneur, serves on boards, and is a mentor and angel investor for several start-ups in Berlin.

Julie Giguère, Business Director, Asian Absolute

13 January 2021, 11am

‘Working with Intelligent Machines’ 


Although Artificial Intelligence development in machine translation is leading to lower prices, higher efficiency and increasing speed of translation for businesses, much remains to be done to create a robust system that covers all known languages and all specialised subject areas with the same level of quality. This is why having a data quality management system is key to using the technology safely. We like the analogy of “coaching” the machine: NMT engines are “intelligent” machines, and they will learn not just from the translation that the human translator produced, but also from other feedback. NMT engines learn from bilingual and monolingual data; the goal is to learn from each “segment + its post-edit” pair and induce the model to better translate the next input segment. This means that, in time, the task of the linguist will involve less fixing of grammatical errors and more checking whether the translation is correct, making post-editing more enjoyable. We will look at the upstream and downstream tasks that the human editor can perform to improve the machine output, and at the mechanism of the intelligent machine.

Julie C. Giguere 

Holder of degrees in Specialised Translation and Law, Julie managed translation and communication for BMO Bank of Montreal as well as financial and legal translation projects at major Language Service Providers in France and the UK. These roles were a natural fit for Julie, a passionate communicator who speaks fluent French, Spanish and English. For the last six years she has led Asian Absolute, a boutique language service provider specialising in Asian and ASEAN languages, heading its global Sales and Operations teams. She grew the operation from a handful of staff to a team of more than thirty and oversaw a 100% increase in the acquisition rate of new clients, such as UN Women, headquartered in Bangkok, and WIPO (the World Intellectual Property Organization). She also personally led the start-up of the company's operations in Bangkok and Panama City. She has over 10 years' professional experience in multilingual communications and AI applications in linguistics, and recently completed the University of Oxford Saïd Business School AI Programme. She is a regular guest speaker at localisation and tech/AI events, having recently spoken at the ATC Summits 2017 and 2018, EUATC 2018, Connected World Summit 2018, the AI & Big Data Innovation Summit 2018 in Beijing and IP EXPO Manchester 2019.

Professor Ruslan Mitkov, University of Wolverhampton,

15 January 2021, 11am

Don’t get me wrong…What’s next? 

Intelligent Translation Memory Systems…

We witnessed the birth of the modern computer between 1943 and 1946; not long after, in 1949, Warren Weaver wrote his famous memorandum suggesting that translation by machine would be possible. Weaver's dream did not quite come true: automatic translation went on to work reasonably well in some scenarios and to do well for gisting purposes, but even today, against the background of the promising results delivered by statistical Machine Translation (MT) systems such as Google Translate and the latest developments in Neural Machine Translation and in Deep Learning for MT in general, automatic translation often gets it wrong and is not good enough for professional translation.

Consequently, there has been a pressing need for a new generation of tools for professional translators to assist them reliably and speed up the translation process. Krollman first put forward the reuse of existing human translations in 1971. A few years later, in 1979, Arthern went further and proposed the retrieval and reuse not only of identical text fragments (exact matches) but also of similar source sentences and their translations (fuzzy matches). It took another decade before the ideas sketched by Krollman and Arthern were commercialised, through the development of various computer-aided translation (CAT) tools such as Translation Memory (TM) systems in the early 1990s. These translation tools revolutionised the work of translators, and the last two decades have seen dramatic changes in the translation workflow.

TM systems have indeed revolutionised the work of translators, and translators who do not benefit from these tools are now a tiny minority. However, while these tools have proven very efficient for repetitive and voluminous texts, are they intelligent enough? Unfortunately, they mostly operate on fuzzy (surface) matching, cannot benefit from already translated texts which are synonymous with (or paraphrased versions of) the text to be translated, and can be ‘fooled’ on numerous occasions.
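As a toy illustration of this limitation, the following sketch approximates a TM fuzzy-match score with Python’s standard-library `difflib` (commercial TM tools use their own edit-distance variants, so this is an assumption-laden stand-in, not any product’s actual algorithm). A clear paraphrase of a stored segment scores well below the 70–85% thresholds at which TM tools typically offer a match:

```python
from difflib import SequenceMatcher

def fuzzy_match(source: str, tm_segment: str) -> float:
    """Surface-level similarity in [0, 1], roughly as TM tools compute it."""
    return SequenceMatcher(None, source.lower().split(),
                           tm_segment.lower().split()).ratio()

# A paraphrase of a stored translation unit...
new_segment = "The minister postponed the vote until Friday."
tm_segment = "The vote was postponed until Friday by the minister."

# ...scores far below typical reuse thresholds, even though the stored
# human translation is largely reusable.
print(f"fuzzy match: {fuzzy_match(new_segment, tm_segment):.2f}")
```

The surface matcher sees only shared word sequences, so reordering and rephrasing destroy the score even when the meaning is preserved.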

What is next in the translation world? We cannot get it wrong, as we cannot let the translation go wrong: it is obvious that the next generation of TM systems will have to be more intelligent. A way forward would be to equip TM tools with Natural Language Processing (NLP) capabilities, and NLP can indeed propose solutions towards this objective. The invited talk will present recent work by the speaker and his research group towards achieving this. More specifically, the speaker will explain how two NLP methods/tasks, namely paraphrasing and clause splitting, make it possible for TM systems to identify semantically equivalent sentences which are not necessarily identical or syntactically close, and thus to enhance performance. The first evaluation results of this new generation of TM matching technology are already promising.

Professor Ruslan Mitkov: short bio

Prof Dr Ruslan Mitkov has been working in Natural Language Processing (NLP), Computational Linguistics, Corpus Linguistics, Machine Translation, Translation Technology and related areas since the early 1980s. Whereas Prof Mitkov is best known for his seminal contributions to the areas of anaphora resolution and automatic generation of multiple-choice tests, his extensively cited research (more than 260 publications including 17 books, 40 journal articles and 40 book chapters) also covers topics such as machine translation, translation memory and translation technology in general, bilingual term extraction, automatic identification of cognates and false friends, natural language generation, automatic summarisation, computer-aided language processing, centering, evaluation, corpus annotation, NLP-driven corpus-based study of translation universals, text simplification, NLP for people with language disabilities and computational phraseology. Current topics of research interest include the employment of deep learning techniques in translation and interpreting technology as well as conceptual difficulty for text processing and translation. Mitkov is author of the monograph Anaphora resolution (Longman) and Editor of the most successful Oxford University Press Handbook – The Oxford Handbook of Computational Linguistics. Current prestigious projects include his role as Executive Editor of the Journal of Natural Language Engineering published by Cambridge University Press and Editor-in-Chief of the Natural Language Processing book series of John Benjamins publishers. Dr Mitkov is also working on the forthcoming Oxford Dictionary of Computational Linguistics (Oxford University Press, co-authored with Patrick Hanks) and the forthcoming second, substantially revised edition of the Oxford Handbook of Computational Linguistics. 
Prof Mitkov designed the first international Erasmus Mundus Master programme on Technology for Translation and Interpreting which was awarded competitive EC funding and which he leads as Project Coordinator. Dr Mitkov has been invited as a keynote speaker at a number of international conferences including conferences on translation and translation technology; he has acted as Programme Chair of various international conferences on Natural Language Processing (NLP), Machine Translation, Translation Technology (including the annual London conference ‘Translation and the Computer’), Translation Studies, Corpus Linguistics and Anaphora Resolution. Dr Mitkov is asked on a regular basis to review for leading international funding bodies and organisations and to act as a referee for applications for Professorships both in North America and Europe. Ruslan Mitkov is regularly asked to review for leading journals, publishers and conferences and serve as a member of Programme Committees or Editorial Boards. Prof Mitkov has been an external examiner of many doctoral theses and curricula in the UK and abroad, including Master’s programmes related to NLP, Translation and Translation Technology. Dr Mitkov has considerable external funding to his credit (more than €25,000,000) and is currently acting as Principal Investigator of several large projects, some of which are funded by UK research councils, by the EC as well as by companies and users from the UK and USA. Ruslan Mitkov received his MSc from the Humboldt University in Berlin, his PhD from the Technical University in Dresden and worked as a Research Professor at the Institute of Mathematics, Bulgarian Academy of Sciences, Sofia. Mitkov is Professor of Computational Linguistics and Language Engineering at the University of Wolverhampton which he joined in 1995 and where he set up the Research Group in Computational Linguistics.
His Research Group has emerged as an internationally leading unit in applied Natural Language Processing and has been ranked world No. 1 in different international NLP competitions. In addition to being Head of the Research Group in Computational Linguistics, Prof Mitkov is also Director of the Research Institute in Information and Language Processing. The Research Institute consists of the Research Group in Computational Linguistics and the Research Group in Statistical Cybermetrics, which is another top performer internationally. Ruslan Mitkov is Vice President of ASLING, an international Association for promoting Language Technology. Dr Mitkov is a Fellow of the Alexander von Humboldt Foundation, Germany, a Marie Curie Fellow and Distinguished Visiting Professor at the University of Franche-Comté in Besançon, France; he also serves as Vice-Chair for the prestigious EC funding programme ‘Future and Emerging Technologies’. In recognition of his outstanding professional/research achievements, Prof Mitkov was awarded the title of Doctor Honoris Causa at Plovdiv University in November 2011. At the end of October 2014 Dr Mitkov was also conferred the title of Professor Honoris Causa at Veliko Tarnovo University.

Tharindu Ranasinghe, University of Wolverhampton

20 January 2021, 11am

‘Semantic Textual Similarity based on Deep Learning: Can it improve matching and retrieval for Translation Memory tools?’

Matching and retrieving previously translated segments from a Translation Memory is the key functionality of Translation Memory systems. However, this matching and retrieval process is still limited to algorithms based on edit distance, which we have identified as a major drawback of Translation Memory systems. In this talk, I will present our research [1,2] on sentence encoders to improve the matching and retrieval process in Translation Memory systems – an effective and efficient solution to replace edit-distance-based algorithms.
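The idea can be sketched as follows. This is a hypothetical illustration, not the authors’ system: where their work [1,2] uses trained neural sentence encoders, the toy `encode` function below just builds a bag-of-words vector, and the two-entry TM and its French translations are invented. The point is the retrieval mechanism: rank TM entries by cosine similarity between sentence vectors instead of edit distance.

```python
import math
from collections import Counter

def encode(sentence: str) -> Counter:
    # Toy stand-in for a neural sentence encoder: a sparse bag-of-words
    # vector. A real system would return a dense embedding from a model.
    return Counter(sentence.lower().split())

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(u[w] * v[w] for w in u)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

# A tiny Translation Memory: source segments mapped to their translations.
tm = {
    "The vote was postponed until Friday.": "Le vote a été reporté à vendredi.",
    "Payment is due within thirty days.": "Le paiement est dû sous trente jours.",
}

def retrieve(query: str):
    """Return the TM entry whose source is closest to the query in vector space."""
    return max(tm.items(), key=lambda kv: cosine(encode(query), encode(kv[0])))

src, tgt = retrieve("Until Friday the vote was postponed.")
print(tgt)
```

Even this crude vector comparison is robust to word reordering, which pure edit distance is not; a neural encoder extends the same retrieval step to synonyms and paraphrases.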

[1] Ranasinghe, T., Mitkov, R., Orasan, C. and Caro, R., 2020. Semantic Textual Similarity based on Deep Learning: Can it improve matching and retrieval for Translation Memory tools? In Parallel Corpora: Creation and Applications. John Benjamins.

[2] Ranasinghe, T., Orasan, C. and Mitkov, R., 2020. Intelligent Translation Memory Matching and Retrieval with Sentence Encoders. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (pp. 175–184).

Josep Crego (Systran, Head of Research)

22 January 2021, 11:00-12:00

Neural Machine Translation – an industrial perspective on a complete technological revolution and further expected research evolutions.

Bio: A graduate of the Technical University of Catalonia, where he also earned his PhD in statistical machine translation, Josep Crego joined SYSTRAN in 2011 as a research engineer. Before joining SYSTRAN he worked as a research associate at the LIMSI/CNRS laboratory. He has conducted research on MT for more than 15 years.

Dr Claudia Lecci, University of Bologna

27 January 2021, 11:00-12:00

Creating terminological projects for the detection of in-domain terminology: a workflow for interpreters.

Interpreters, like translators, need to be familiar with the skills and abilities necessary to create a terminological project and to detect the terminology belonging to specific domains of orality.

The typical approach for detecting specialized terminology can be described through a workflow which combines different stages, starting from an information mining stage and ending with the creation of a terminological resource, namely a glossary or a terminology database that, when needed, can also be combined with Computer Assisted Translation tools.

The workflow presented in this talk starts with the definition of a domain and the collection of reference materials on the Internet. The second step consists of the construction of specialized comparable corpora from the web using a dedicated tool. The third stage is the corpus-based extraction of simple or complex terms with the help of a concordancing tool. The last step of the workflow is the creation of terminological entries organized in the form of glossaries and/or termbases.
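The term-extraction stage of such a workflow can be sketched with a simple keyness measure: rank words by how much more frequent they are in the domain corpus than in a general reference corpus. This is a minimal assumption-laden sketch — the one-line corpora are invented, and real concordancing tools work over full corpora with statistics such as log-likelihood:

```python
from collections import Counter

# Invented one-line "corpora" for illustration only.
domain_corpus = "the stent is inserted into the coronary artery the stent expands".split()
reference_corpus = "the cat sat on the mat and the dog barked at the cat".split()

dom, ref = Counter(domain_corpus), Counter(reference_corpus)
dom_total, ref_total = sum(dom.values()), sum(ref.values())

def keyness(word):
    """Relative-frequency ratio, with add-one smoothing on the reference side."""
    return (dom[word] / dom_total) / ((ref[word] + 1) / (ref_total + 1))

# Candidate terms ranked by keyness: domain terms outrank function words.
candidates = sorted(set(domain_corpus), key=keyness, reverse=True)
print(candidates[0])
```

Words like ‘stent’ surface at the top because they are frequent in the domain corpus but absent from general language, which is exactly the property a terminologist exploits when building a glossary.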

The insights gained from this presentation will help trainee and professional interpreters prepare terminological resources both for specific assignments and for more general topics.


Claudia Lecci graduated in Specialised Translation and Translation for the Publishing Industry at the Advanced School of Modern Languages for Interpreters and Translators (now Department of Interpreting and Translation – DIT).

She currently teaches the MA modules “Computer-assisted Translation and web localization” and “Machine Translation and Post-Editing” within the Master’s in Specialized Translation (DIT – Forlì) and coordinates the MA course “Methods and Technologies for Interpreting” within the Master’s in Interpreting (DIT- Forlì). She also teaches the course “Traduzione in Italiano dall’Inglese (assistita)” as part of the Bachelor’s in Intercultural and Linguistic Mediation (DIT – Forlì).

She is an SDL Trados Authorised Trainer for SDL Trados Studio 2021 and SDL MultiTerm 2021.

Speaker: Dr Vilelmini Sosoni, Ionian University

29 January 2021 at 11.00-12.30.

Title: MT and creative texts: a study of translations, translators’ attitude and readers’ views


Many of the translation tools in use today were initially designed to cater for technical, repetitive texts, and this is still their main niche 25 years after the first versions of these tools appeared. Computer-aided translation (CAT) and machine translation (MT) were long regarded as unsuitable for the translation of creative texts, which have a predominantly expressive or operative function. This means that they exploit the expressive and associative possibilities of language in order to communicate the writer’s thoughts in an artistic, creative way or to induce behavioural responses, as stimuli to action or reaction on the part of the reader. Their translation is anything but straightforward, given that it is not sufficient to merely preserve the meaning; the reading experience of the original text must also be preserved (Toral and Way, 2018). In other words, the translation of creative texts should “undo the original” (de Man, 1986) to deal with the uniqueness of the source and target languages and the source and target audiences. This undoing requires uniquely human skills and does not seem to fit within the dominant translation workflow, where a text is fed into an MT engine and then post-edited by a translator (Lommel and DePalma, 2016).


Lately, advances in Neural Machine Translation (NMT) have led to improved quality of MT output, especially at the level of fluency (Castilho et al., 2017a; 2017b), even for lexically rich texts (Bentivogli et al., 2016), and as a result its use for the translation of creative texts is increasingly being put to the test. In this talk, I will compare the quality of creative texts, i.e. literary and promotional texts, translated from scratch with their quality in an MT and post-editing (PE) scenario, based on a fine-grained human error analysis. I will also investigate translators’ attitudes and perceptions vis-à-vis MT and PE of creative texts, and the texts’ reception by average readers.



Bentivogli, Luisa, Andriana Bisazza, Mauro Cettolo, Marcello Federico. 2016. “Neural versus phrase-based machine translation quality: a case study.” In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pp. 257–267.


Castilho, Sheila, Joss Moorkens, Federico Gaspari, Rico Sennrich, Vilelmini Sosoni, Panayota Georgakopoulou, Pintu Lohar, Andy Way, Antonio Valerio Miceli Barone, Maria Gialama. 2017a. “A Comparative Quality Evaluation of PBSMT and NMT using Professional Translators.” MT Summit 2017, Nagoya, Japan.

Castilho, Sheila, Joss Moorkens, Federico Gaspari, Iacer Calixto, John Tinsley, Andy Way. 2017b. “Is Neural Machine Translation the New State of the Art?” The Prague Bulletin of Mathematical Linguistics 108: 109-120.

Lommel, Arle, Donald A. DePalma. 2016. Europe’s leading role in Machine Translation: How Europe is driving the shift to MT. Technical report. Common Sense Advisory, Boston.



Dr Vilelmini Sosoni is Senior Lecturer at the Department of Foreign Languages, Translation and Interpreting at the Ionian University in Corfu, Greece, where she teaches Legal and Economic Translation, EU Texts Translation and Terminology, Translation Technology, Translation Project Management and Audiovisual Translation (AVT). In the past, she taught Specialised Translation in the UK at the University of Surrey, the University of Westminster and Roehampton University, and in Greece at the National and Kapodistrian University of Athens and the Institut Français d’Athènes. She also has extensive industrial experience, having worked as a translator, editor, subtitler and interpreter. She holds a BA in English Language and Linguistics from the National and Kapodistrian University of Athens, an MA in Translation and a PhD in Translation and Text Linguistics from the University of Surrey. Her research interests lie in the areas of Translation of Institutional Texts, Machine Translation (MT), Corpus Linguistics, Cognitive Studies, and AVT. She is a founding member of the Research Lab “Language and Politics” of the Ionian University and a member of the “Centre for Research in Translation and Transcultural Studies” of Roehampton University. She has participated in several EU-funded projects, notably TraMOOC, Eurolect Observatory and Training Action for Legal Practitioners: Linguistic Skills and Translation in EU Competition Law, and she has edited several volumes and books on translation and published numerous articles in international journals and collective volumes.

Prof Gloria Corpas, University of Malaga

5 February 2021, 11:00-12:00


This talk will revolve around language technologies applied to interpreting. Nowadays there is a pressing need to develop interpreting-related technologies, with practitioners and other end-users increasingly calling for tools tailored to their needs and their new interpreting scenarios. With the advent of new technology, interpreters can work remotely, deliver interpreting in different modes and contexts, on many devices (phones, tablets, laptops, etc.), and even manage bookings and invoice clients with ease. But interpreting as a human activity has resisted complete automation for various reasons, such as fear, unawareness, communication complexities, lack of dedicated tools, etc.

Several computer-assisted interpreting tools and resources for interpreters have been developed, mainly terminology management tools, corpora, and note-taking applications, but they are rather modest in terms of the support they provide. In the same vein, and despite the pressing need to aid multilingual mediation, machine interpreting is still under development, with the exception of a few success stories so far.

In this talk, I will present recent R&D projects on interpreting technologies in action. The first is a speech-to-text system for automating communication with English- and Arabic-speaking patients in Spanish hospital triage scenarios at A&E services (in progress). The second is already close to completion: it comprises a suite of NLP-enhanced tools and resources for interpreters and trainees, including, but not limited to, terminology tools, corpora building and processing, automatic glossary building, automatic speech recognition and training tools. The final discussion will go back to the two idioms blended in the title of this talk…



BA in German Philology (English) from the University of Malaga. PhD in English Philology from the Universidad Complutense de Madrid (1994). Professor in Translation Technology at the Research Institute in Information and Language Processing (RIILP) of the University of Wolverhampton, UK (since 2007). Professor in Translation and Interpreting at the University of Malaga, Spain (since 2008).  Honorary Adjunct Professor at Xi’an Jiaotong-Liverpool University, China (since 2020). Published and cited extensively, member of several international and national editorial and scientific committees. Her research lines cover computational and corpus-based phraseology, lexicography, corpus-based translation, and language technologies applied to translation and interpreting. Spanish delegate for AEN/CTN 174 and CEN/BTTF 138, actively involved in the development of the UNE-EN 15038:2006 and currently involved in various ISO Standards (ISO TC37/SC2-WG6 “Translation and Interpreting”). Extensive experience in evaluation, validation and quality assurance of University degrees (BA, MA, and Doctorate). Chair of the Evaluation and Verification Commission of the Arts and Humanities field for Madri+d. Consultant for the Spanish Ministry of Research and other University programmes evaluation bodies (ANECA, AQU, ACCUEE, DEVA). President of AIETI (Iberian Association of Translation and Interpreting Studies, 2015-2017), Vice-President of AMIT-A (Association of Women in Science and Technology of Andalusia, 2014-2017), Director of the Department of Translation and Interpreting of the University of Malaga (2016-2021), she is currently Board member of the Advisory council of EUROPHRAS (European Society of Phraseology) and member of the Presidential Committee of AIETI, which is an advisory body of the association.

Dr Anna Zaretskaya, TransPerfect

12 February 2021, 11:00-12:00

How Good is our MT? A Glimpse into MT Evaluation Challenges in Commercial Settings



In this presentation we will talk about how machine translation quality is typically evaluated in commercial settings. While there are many good-quality MT systems available for casual and commercial use, the translation industry has started to pay much more attention to MT evaluation. TransPerfect’s customers, typically large organisations, often struggle to find the best way of choosing between different MT providers. In addition, they also want to be sure that the translation quality of the chosen provider is the best they can get, that it brings them maximum ROI, and that the MT quality is improving over time. These are only some of the typical cases that require well-defined MT evaluation methods. While such methods already exist in research (e.g. the WMT shared tasks), they are not always directly applicable in commercial scenarios.
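The gap between research metrics and commercial needs can be made concrete with a toy example. Below is a token-overlap F1 score — a deliberately crude, hypothetical stand-in for automatic metrics such as BLEU or chrF used in WMT-style evaluations (it is not a metric TransPerfect uses). The example sentences are invented:

```python
from collections import Counter

def f1_overlap(hypothesis, reference):
    """Token-overlap F1 between an MT output and a human reference —
    a crude stand-in for automatic metrics such as BLEU or chrF."""
    hyp = Counter(hypothesis.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((hyp & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = f1_overlap("the contract is valid for two years",
                   "the agreement is valid for two years")
# The score is high, yet 'contract' vs 'agreement' could be a critical
# error in a legal text — the kind of distinction clients care about.
print(f"{score:.2f}")
```

A single automatic score of this kind says nothing about ROI, quality trends over time, or the business impact of individual errors, which is why commercial evaluation needs additional, well-defined methods.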



Anna Zaretskaya currently works at TransPerfect as a Senior MT Implementation Manager. In this role she is responsible for designing and implementing MT solutions for TransPerfect clients. She joined the company in 2016 after finishing her PhD in translation technologies and user needs. She has a background in general linguistics (undergraduate studies) and computational linguistics (MS).

Elena Murgolo, Aglatech 14

19 February 2021, 11:00-12:00

Introducing MT. An LSP workflow


Abstract: Have you ever wondered how MT is deployed in an LSP? How do they choose it and test it?

Typically, LSPs are not exactly end-users: the real end-users are either professional translators who need to post-edit MT output or non-language professionals who use it in their daily life, sometimes without even realizing it, as in social media.

Language service providers and translation companies find themselves between developers and end-users. But they are also among the most likely to implement costly solutions, such as trained engines, hosting, and subscriptions, to be able to use cutting-edge technology and high-quality products.

LSPs are also probably the owners of the best data to be used during training and creation of new systems.

Implementing this relatively new technology, however, presents new challenges together with the numerous advantages.

In this talk I will summarize the process and give an overview of what an LSP might face when approaching MT and PE, along with suggestions based on the lessons we have learned.

Bio: After graduating in Conference Interpreting and working for a couple of years as a simultaneous and liaison interpreter at fairs and events, Elena Murgolo began working at Aglatech14 as a technical translator for German and English. She led the company’s Post-Editing department and is now Language Technology Team Lead, managing Language Technology deployment and use within the LSP. She and her team are responsible for centralizing and developing LT expertise within the company and with external resources, including freelance translators, vendors, clients, and partner universities.

She is also involved in training new resources in the field of MT, PE, CAT tools and Language Technology in general.

Prof Rozane Rebechi, Federal University of Rio Grande do Sul

26 February 2021, 11:00-12:00

‘Small comparable corpora for the retrieval of culture-related elements and their impact for translation’


This talk shares findings of research I have carried out over the past years on the English-Portuguese translation of cultural references. Much has been discussed about the difficulties of translating culture-related elements; after all, “[t]he worlds in which different societies live are distinct worlds, not merely the same world with different labels” (Sapir, 1949: 162). However, semiautomatic investigation of simple and compound keywords retrieved from (small) comparable corpora demonstrates that functional translation (Nord, 2006) of texts of a similar genre in different languages and cultures goes far beyond linguistic equivalence, as the specificities of the genre in both languages and cultures should also be considered by translators. To produce texts that work properly for the target reader, the translator should be aware of the domain conventions in both languages and cultures before deciding which aspects should be maintained, adapted, or omitted. Using examples from the analysis of obituaries, cooking recipes, and restaurant reviews (Rebechi, 2018, 2020, 2021), the presentation seeks to demonstrate that the assumptions underlying corpus linguistics may not only help translators interpret source language texts, but also assist them in finding solutions for the translation process (Stewart, 2000).


Nord, C. (2006). Loyalty and fidelity in specialized translation. Confluências: Revista de Tradução Científica e Técnica, 29–41.

Rebechi, R. R., & da Silva, M. M. (2018). Obituaries in translation: a corpus-based study. Cadernos de Tradução, 38(3), 298–318.

Rebechi, R., & Tagnin, S. (2020). Brazilian cultural markers in translation: A model for a corpus-based glossary. Research in Corpus Linguistics, 8, 65–85.

Rebechi, R. R., Schabbach, G. R., & Freitag, P. H. (2021). Sobre a busca por equivalentes funcionais em um corpus comparável português-inglês de críticas gastronômicas. TradTerm, 37(2), 430–459.

Sapir, E. (1949). Culture, Language and Personality. Los Angeles: University of California.

Stewart, D. (2000). Conventionality, Creativity and Translated Text: The Implications of Electronic Corpora in Translation. In M. Olohan (Ed.), Intercultural Faultlines (pp. 73–91). Manchester/Northampton: St. Jerome.


Rozane Rebechi is a professor and researcher at the Federal University of Rio Grande do Sul. She holds Master’s and PhD degrees in English Language and Literature from the University of São Paulo (Brazil). Her main areas of research are Translation, Terminology, and Discourse, to which she applies Corpus Linguistics as a methodology. She is currently chair of the Brazilian Association of Researchers in Translation (ABRAPT) and an Associated Partner of the European Masters in Technology for Translation and Interpreting (EM TTI). She was recently appointed vice-dean for International Affairs and director of the academic mobility department of UFRGS. Rozane has published several papers in national and international journals and edited volumes on translation and terminology.

Yves Champollion, Founder of Wordfast

5 March 2021, 11-12:00

‘Machine Translation for us Human Translators: Good, Bad, or Ugly?’



The author starts by defining the limits and scope of MT as used by translators, as compared with other uses of MT.

Then he briefly gives an overview of the various implementations of MT for translators over the past 25 years, and the typical reactions from translators.

He will review the situation with every generation of MT, discussing MT gains, acceptance, but also pain points and fears.

The last part will focus on the current state of MT in the translation industry: the strategic aspect of MT for agencies and corporations, the economics of MT, the gain or pain on a translator’s side.



Yves Champollion has over 30 years’ experience as a pioneer in the translation and localization industry. Since 1982, Champollion has worked in his native France as a freelance translator in the science field. His languages include French, Latin, German, English, Spanish, Portuguese and Russian, as well as some Japanese and Shangana, a Zulu-related language of southern Mozambique. In 1996, Champollion began working as a project manager and consultant for leading translation agencies, handling large-scale projects for IBM and Microsoft. He began developing software in the 1980s, and in 1999 he developed Wordfast, an MS Word-based translation memory tool. As the author of several articles on translation and a speaker at a number of high-profile industry events, Champollion is an esteemed voice and well-respected figure in the language services industry.

Fardad Zabetian, CEO of Kudo

12 March 2021, 15:00 – 16:00

How multilingual communication is evolving and how KUDO is part of the evolution



During this 45-minute talk, Fardad will share his journey in the world of multilingual meetings over the last 20 years, how he sees the market evolving, and the new opportunities for businesses and interpreters. He will cover the new KUDO Marketplace and how this new system is going to remove friction points in accessing interpreting services. He will also cover the challenges in new use cases, such as short notice for assignments, and how technology and AI will assist interpreters to prepare in a shorter time.



A visionary entrepreneur, Fardad has founded and placed two companies among the fastest-growing businesses in America. He has also expanded to key markets across Europe and Asia. Fardad is no stranger to big challenges. In 2012, he was part of the design and roll-out of a complete makeover of the United Nations’ meeting facilities, including the General Assembly hall in New York. He has also played a key supporting role as a high-end equipment provider to various iterations of the IMF/World Bank Annual Meetings and several European institutions. In 2016, Fardad co-founded AVAtronics, a Swiss technology company focusing on active noise-cancelling technology. In 2017, Fardad founded KUDO, where he now takes the meeting experience beyond the room to connect businesses and people in true borderless fashion, without language or geographic constraints.

Dr Konstantinos Chatzitheodorou, Strategic Agenda

19 March 2021, 11:00-12:00

‘Using technology in translation quality assessment’


Abstract: The process of determining translation quality is subjective and relies on human judgments. Translation quality is affected by a variety of factors that are weighted differently in each translation task and can be viewed from different perspectives. Hence, it is not equally measurable or assessable (Almutairi, 2018). The talk will emphasize the importance of measuring translation quality and how this can be accomplished. The first part of the presentation will introduce different frameworks and software used in the process of translation evaluation, focusing on error classification schemes available in both the professional and the academic world. The second part of the presentation will include a demonstration of Træval, a Software-as-a-Service (SaaS) application that allows humans to evaluate translation outputs. By providing an easy-to-use graphical interface, it assists researchers and users in this process. In particular, three different scenarios will be presented using the Dynamic Quality Framework – Multidimensional Quality Metrics (DQF-MQM) error typology (Lommel, 2018): an evaluation of a simple translation task, an evaluation task focused on the assessment of multi-word units, and, finally, a technology-aided evaluation task aiming to reduce subjectivity.
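The general shape of such error-typology-based evaluation can be sketched as follows. This is a hypothetical illustration: the error categories and penalty weights below are invented for the example and are not the official DQF-MQM values, which the framework defines itself.

```python
from dataclasses import dataclass

@dataclass
class ErrorAnnotation:
    category: str   # e.g. 'accuracy/mistranslation', 'fluency/grammar'
    severity: str   # 'minor', 'major' or 'critical'

# Illustrative weights only — not the official DQF-MQM penalties.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def mqm_score(errors, word_count):
    """Quality score per 100 words: 100 minus the weighted error penalty."""
    penalty = sum(SEVERITY_WEIGHTS[e.severity] for e in errors)
    return max(0.0, 100.0 - 100.0 * penalty / word_count)

# An evaluator annotates two errors in a 250-word translation.
errors = [ErrorAnnotation("accuracy/mistranslation", "major"),
          ErrorAnnotation("fluency/grammar", "minor")]
print(mqm_score(errors, word_count=250))
```

Turning annotated errors into a weighted score is what makes evaluations comparable across evaluators and tasks, although the human annotation step remains the subjective part the talk addresses.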

Dr Konstantinos Chatzitheodorou is a postdoctoral researcher at the Department of Foreign Languages, Translation and Interpreting, Ionian University. He received his PhD in Applied Translation Studies and Computational Linguistics from the Aristotle University of Thessaloniki. He holds a BA in Italian Language and Literature from the School of Italian Language and Literature, Aristotle University of Thessaloniki and an MSc in Informatics in Humanities from the Department of Informatics, Ionian University. He is also ECQA Certified Terminology Manager – Engineering. He is employed as a Computational Linguist in the private sector, assisting organizations to use language data to gain strategic insights. He has also worked as a Machine Translation Expert and Terminologist at the European Parliament – Directorate-General for Translation in Luxembourg. Over the years, Konstantinos has also contributed as a researcher to several EU projects in areas of his interest.


Almutairi, M.O.L., 2018. The objectivity of the two main academic approaches of translation quality assessment: Arab Spring presidential speeches as a case study (Doctoral dissertation, University of Leicester).

Chatzitheodorou, K. and Chatzistamatis, S., 2013. COSTA MT evaluation tool: An open toolkit for human machine translation evaluation. The Prague Bulletin of Mathematical Linguistics, 100(2013), pp. 83–89.

Lommel, A., 2018. Metrics for translation quality assessment: a case for standardising error typologies. In Translation Quality Assessment (pp. 109-127). Springer, Cham.

Secară, A., 2005, March. Translation evaluation: A state of the art survey. In Proceedings of the eCoLoRe/MeLLANGE workshop, Leeds (Vol. 39, p. 44).

Prof Ricardo Muñoz Martín, University of Bologna

16 April 2021, 11:00-12:00

‘Do translators dream of electric brains?’


We cannot know whether Artificial Intelligence exists because we do not yet know what intelligence is. The way computers process natural languages is not really the way humans do it. Artificial neural networks bear only a small, distant resemblance to our brains’ biological structures. But why should that matter? Researchers studied the flight of birds to develop the first planes. Birds and planes were never too similar, and time has only separated them further. Biomimicry is inspiring, but evolutionary solutions are not necessarily the best for our machines. Furthermore, it leads to notions of competitiveness between humans and machines, not to symbiosis. A large portion of CAT research has focused on why people mistrust or dislike applications and systems. But shouldn’t we be asking where we went wrong? What do we know about the effects of digital tools on translators and their working ways? In order to develop practical applications with potential real-world use for translators, we need to approach the tasks in their natural(istic) environments, with a view to avoiding cognitive friction and to implementing human-in-the-loop testing that will ensure better pairings of humans and their digital tools.


Ricardo Muñoz Martín has been a (now seldom practising) freelance translator since 1987, ATA-certified for English-Spanish since 1991. He studied at six European and American universities until 1993, when he was granted a PhD by UC Berkeley. Prof Muñoz lectured at seven American and Spanish universities before joining the Department of Interpreting & Translation of the University of Bologna, Italy. There, he directs the Laboratory for Multilectal Mediated Communication & Cognition (MC2 Lab), devoted to empirical research on multilectal mediated communication events from the perspective of Cognitive Translatology, a theoretical framework drawing on situated cognition. As a visiting scholar or guest speaker, Prof Muñoz has travelled widely in Europe, America and China. He is also a member of the TREC and HAL networks and co-editor of the journal Translation, Cognition & Behavior.

Dr. Claudio Fantinuoli, University of Mainz and Head of Innovation at KUDO Inc.

11 June 2021, 11:00-12:00 BST

Title: Making sense of AI in the interpreter workstation

Speaker’s short bio: Dr. Claudio Fantinuoli is a researcher and lecturer at the University of Mainz and Head of Innovation at KUDO Inc. His research centres on Natural Language Processing applied to human and machine interpreting. He lectures on Language Technologies and Conference Interpreting at the University of Mainz and at the Postgraduate Center of the University of Vienna. He is the founder of InterpretBank.

Abstract: Interpreting is on the verge of a third technical revolution, one that will bring about a deeper integration of technology into the interpreter workstation. Artificial Intelligence is becoming an integral part of computer-assisted interpreting (CAI) tools and is now allowing machine learning to enter the workflow of professional interpreters. CAI tools can create ad-hoc linguistic resources, suggest translations, numbers and proper names in real time, and automate several aspects of service provision. In this presentation I will discuss the promise of AI for the interpreting profession, along with its potential and its risks.

Prof Jan-Louis Kruger, Macquarie University

18 June 2021, 09:00-10:30

Title: Studying subtitle reading using eye tracking


The world of audiovisual media has changed on a scale last seen in the shift from print to digital photography. VOD has moved from an expensive concept limited by technology and bandwidth to the norm across most of the developed world, and it is acting as an accelerated equaliser in developing countries. This has increased the reach and potential of audiovisual translation.

While the skills required to create AVT have come within reach of a large group of practitioners thanks to advances in editing software and technology, with many processes from transcription to cueing now automated, research on the reception and processing of multimodal texts has also developed rapidly. This has given us new insights into the way viewers, for example, process the text of subtitles while also attending to auditory input and the rich visual code of film. The multimodality of film, although acknowledged as one of the unique qualities of translation in this context, is often overlooked in technological advances. When the emphasis is on the cheapest and simplest way of transferring spoken dialogue to written text, or visual scenes to auditory descriptions, the complex interplay between language and other signs is often neglected.

Eye tracking provides a powerful tool for investigating the cognitive processing of viewers watching subtitled film, with research in this area drawing on cognitive science, psycholinguistics and psychology. I will present a brief description of eye tracking in AVT, as well as the findings of some recent studies on subtitle reading at different presentation rates and in the presence of secondary visual tasks.


Jan-Louis Kruger is professor and Head of the Department of Linguistics at Macquarie University. He started his research career in English literature with a particular interest in the way in which Modernist poets and novelists manipulate language, and in the construction of narrative point of view. From there he started exploring the creation of narrative in film and how audiovisual translation (subtitling and audio description) facilitates the immersion of audiences in the fictional reality of film.

In the past decade his attention has shifted to the multimodal integration of language in video where auditory and visual sources of information supplement and compete with text in the processing of subtitles. His research uses eye tracking experiments (combined with psychometric instruments and performance measures) to investigate the cognitive processing of language in multimodal contexts. His current work looks at the impact of redundant and competing sources of information on the reading of subtitles at different presentation rates and in the presence of different languages. 

Prof Barry Slaughter Olsen, Middlebury Institute of International Studies at Monterey 

16 July 2021, 17:00-18:30


RSI has taken the world by storm. So, what have we learned and where do we go from here?


No one could have foreseen the effects of the COVID-19 pandemic on the interpreting profession or its accompanying effects on the adoption rate of remote simultaneous interpretation (RSI) all over the world. In a matter of weeks, international organizations, national governments, non-governmental organizations, and private corporations were meeting, negotiating, and conducting business online at a scale never seen before, often in multiple languages. But this abrupt adoption of web conferencing with RSI was not entirely smooth or without its challenges. We are now at a stage where we can compile a list of lessons learned during this unprecedented shift in professional practice and turn our sights toward the future, toward the new digital world of multilingual communication and interpretation technology’s place in it. This presentation will share some of those lessons learned and some thoughts about what the future of RSI may hold.


Barry Slaughter Olsen is a veteran conference interpreter and technophile with over twenty-five years of experience interpreting, training interpreters, and organizing language services. He is a professor at the Middlebury Institute of International Studies at Monterey (MIIS) and the Vice-President of Client Success at KUDO, a multilingual web conferencing platform. He was co-president of InterpretAmerica from 2009 to 2020. A pioneer in the field of remote simultaneous interpretation (RSI), he is co-inventor on two patents on RSI technologies. He is a member of the International Association of Conference Interpreters (AIIC). Barry has been interviewed numerous times by international media (CNN, CBC, MSNBC, NPR, and PBS) about interpreting and translation. For updates on interpreting, technology, and training, follow him on Twitter @ProfessorOlsen.

Prof Ruslan Mitkov, University of Wolverhampton

22 July 2021

What does the future hold for humans, computers, translators, and interpreters?

A non-clairvoyant’s view.

(60-min introduction to Natural Language Processing)

Abstract:  Computers are ubiquitous – they can be found and used everywhere. But how good are computers at understanding, producing, and translating natural languages? In other words, what is the level of their linguistic intelligence? This presentation will examine the linguistic intelligence of computers and will ask the question of how far advances in Artificial Intelligence (AI) can go. Illustrations will be provided through key applications addressing parts of the translation process such as machine translation and translation memory systems and the challenges ahead will be commented on …

The presentation begins with a brief historical flashback, plotting the timeline of the linguistic intelligence of computers against that of humans. It then gives another snapshot in time depicting early work on Machine Translation. Over the last 20 years, as will be discussed in the presentation, advances in Natural Language Processing (NLP) have significantly increased the linguistic intelligence of computers but this intelligence still lags behind that of humans.

The presentation will go on to explain why it is so difficult for computers to understand, translate and, in general, to process natural languages; it is a steep road, and a long and winding one, for both computers and researchers. The talk will briefly present well-established NLP techniques that computers use when ‘learning’ to speak our languages, including initial rule-based and knowledge-based methods and more recent machine learning as well as deep learning methods, which are regarded as highly promising. A selection of Natural Language Processing applications will be outlined after that. In particular, the talk will look at the recent advances in Machine Translation and will assess the claims that Neural Machine Translation has reached parity with human translation.

The speaker will express his views on the potential of MT, and the latest research on ‘intelligent’ Translation Memory systems will be outlined along with expected developments. The future of Interpreting Technology and its impact on interpreters will also be touched on.

I am no clairvoyant, but during my plenary talks I am often asked to predict how far computers will go in their ability to learn and translate language. At the end of my presentation I shall share with you my predictions and, in general, my vision for the future of translation and interpreting technologies. These predictions, though tentative, will be relevant to the impact that AI advances can have on the work of translators and interpreters in the future.

Dr Joss Moorkens, Dublin City University

23 July 2021

Title: Digital Taylorism in the Translation Industry



Translators have worked with the assistance of computers for many years, usually translating whole texts divided into segments but in sequential order. In order to maximise efficiency, and inspired by similar moves in the tech industry and predictions for Industry 4.0, large translation companies have begun to break tasks down into smaller chunks and to rigidly define and monitor translation processes. This is particularly true of platform-mediated work, highly collaborative workflows, and multimedia work that requires near-live turnaround times. This talk considers such workflows in the context of measures of job satisfaction and discussions of sustainable work systems, proposing that companies prioritise long-term returns and attempt to balance the needs of all stakeholders in the translation process. Translators and translator trainers also have a role to play in achieving this balance.



Joss Moorkens is an Associate Professor and Chair of postgraduate translation programmes at the School of Applied Language and Intercultural Studies at Dublin City University. He is also a Funded Investigator with the ADAPT Centre and a member of the Centre for Translation and Textual Studies. He has authored over 50 journal articles, book chapters, and conference papers on translation technology, user interaction with and evaluation of machine translation, translator precarity, and translation ethics. He is General Co-editor of the journal Translation Spaces with Prof. Dorothy Kenny, and co-edited the book ‘Translation Quality Assessment: From Principles to Practice’, published in 2018 by Springer, and special issues of Machine Translation (2019) and Translation Spaces (2020). He leads the Technology working group (with Prof. Tomas Svoboda of Charles University) as a board member of the European Masters in Translation network and sits on the advisory board of the Journal of Specialised Translation.