Machine translation software

Machine translation software automates the process of translating text or audio from one language to another.

Microsoft Translator

Microsoft Translator is embedded in Microsoft Office 365 and enables quick and easy translation of key words or whole sections of text.

Clicking ‘Translate’ under the ‘Review’ tab opens the Translator pane on the right of the screen. The source and target languages are selected from drop-down menus, and either a highlighted selection or the full document can be translated.

Hovering over any of the words in the Translator pane brings up a bilingual dictionary, but users should also be encouraged to check an English or discipline-specific dictionary to ensure they understand the word in the context of the subject.

Please note that Microsoft Office 365 is accessible from the top right-hand corner of the UWE intranet homepage.

Image of the Microsoft Office 365 Translator

Rowling, J.K. (1997) Harry Potter and the Philosopher’s Stone. Bloomsbury.
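The same Microsoft Translator service can also be called programmatically, which may be useful for translating larger batches of text. Below is a minimal sketch using the Translator Text REST API (version 3.0) in Python; the subscription key, region and Spanish target language are placeholders and assume you have your own Azure Translator resource.

    import uuid
    import requests

    # Placeholders – an Azure Translator resource supplies your own key and region.
    SUBSCRIPTION_KEY = "YOUR_TRANSLATOR_KEY"
    REGION = "uksouth"
    ENDPOINT = "https://api.cognitive.microsofttranslator.com"

    def translate(text, to_lang="es", from_lang="en"):
        """Translate a string with the Translator Text REST API (v3.0)."""
        response = requests.post(
            ENDPOINT + "/translate",
            params={"api-version": "3.0", "from": from_lang, "to": to_lang},
            headers={
                "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
                "Ocp-Apim-Subscription-Region": REGION,
                "X-ClientTraceId": str(uuid.uuid4()),
            },
            json=[{"Text": text}],
        )
        response.raise_for_status()
        return response.json()[0]["translations"][0]["text"]

    print(translate("The Translator pane sits under the Review tab."))

This is only an illustration of the underlying service; for everyday use the built-in Office tool described above is sufficient.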

Closed captions

Closed captions enable users to pause recorded presentations and look up key words that they do not understand or whose pronunciation they are struggling with. Below is the location of the closed caption button on Panopto.

Image of the closed caption button on Panopto

For a session delivered using Blackboard Collaborate Ultra, users can produce ‘live’ closed captions or subtitles using the Microsoft Translator app on a second device (see the ‘Live translation’ section).

Transcripts

The Office 365 version of Word offers a transcription tool under the ‘Home’ tab: click the down arrow to the right of the microphone symbol.

Image of the Microsoft Word transcribe menu

So long as the file is in a supported audio or video format, it can be uploaded using the ‘Upload audio’ button and a timed transcript produced. The transcript can then be translated, and the timings will help users follow the presentation. In Panopto it is best to download an ‘Audio Podcast’ to keep the file size under the 200 MB upload limit.

Once the transcript is produced, it can be translated into the user’s first language.

Image of a transcript produced using Microsoft Word

Rowling, J.K. (1997) Harry Potter and the Philosopher’s Stone. Bloomsbury.
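If the transcript has been saved as plain text, the translation step can be automated while keeping the timings. The sketch below is only an illustration built on assumptions: it uses the same Translator Text REST API as in the sketch above, assumes Spanish as the target language, and assumes each transcript line starts with a timestamp such as 00:01:23 followed by the spoken text (the exact export format from Word or Panopto may differ).

    import re
    import requests

    # Placeholders – replace with your own Azure Translator key and region.
    KEY, REGION = "YOUR_TRANSLATOR_KEY", "uksouth"
    URL = "https://api.cognitive.microsofttranslator.com/translate"

    def translate(text, to_lang="es"):
        response = requests.post(
            URL,
            params={"api-version": "3.0", "to": to_lang},
            headers={"Ocp-Apim-Subscription-Key": KEY,
                     "Ocp-Apim-Subscription-Region": REGION},
            json=[{"Text": text}],
        )
        response.raise_for_status()
        return response.json()[0]["translations"][0]["text"]

    # Assumed line layout: "hh:mm:ss spoken text". Adjust the pattern if your
    # transcript export looks different.
    TIMED_LINE = re.compile(r"^(\d{1,2}:\d{2}:\d{2})\s+(.+)$")

    with open("transcript.txt", encoding="utf-8") as src, \
         open("transcript_es.txt", "w", encoding="utf-8") as dst:
        for line in src:
            match = TIMED_LINE.match(line.strip())
            if match:
                timestamp, text = match.groups()
                dst.write(f"{timestamp} {translate(text)}\n")
            else:
                dst.write(line)  # keep headings and blank lines unchanged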

Live translation

Although modern translation apps can translate in ‘real time’, there is a valid argument that users may miss the non-verbal elements of communication if they are focussed on a screen. It is therefore essential that all activities are recorded so that users can revisit any elements that were not accessible in the ‘live’ session.

Either way, both the Google Translate and Microsoft Translator (see below) apps can ‘listen’ to audio and translate it ‘live’ into the language chosen by the user.

Image of Microsoft Translator

The Microsoft Translator app displays the English in the lower part of the screen and the target language at the top.

For key word translation, select ‘SINGLE’; for continuous speech, use ‘AUTO’.

The languages can be changed using the selection tools on either side of the microphone button.
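The apps above need no code, but the same ‘listen and translate’ behaviour is also exposed through the Azure Speech SDK, which may be useful for anyone building their own tools. The sketch below is a minimal illustration in Python (pip install azure-cognitiveservices-speech); the key and region are placeholders, and English-to-Spanish is assumed purely as an example.

    import azure.cognitiveservices.speech as speechsdk

    # Placeholders – an Azure Speech resource supplies your own key and region.
    SPEECH_KEY = "YOUR_SPEECH_KEY"
    REGION = "uksouth"

    config = speechsdk.translation.SpeechTranslationConfig(
        subscription=SPEECH_KEY, region=REGION)
    config.speech_recognition_language = "en-GB"   # language being spoken
    config.add_target_language("es")               # language to translate into

    recognizer = speechsdk.translation.TranslationRecognizer(
        translation_config=config)

    print("Speak into the microphone...")
    result = recognizer.recognize_once()           # captures one utterance

    if result.reason == speechsdk.ResultReason.TranslatedSpeech:
        print("Heard (en):", result.text)
        print("Translated (es):", result.translations["es"])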

Live translation (multiple concurrent languages)

Using Microsoft Translator, it is possible for multiple users to communicate in real time, each in their own first language.

Under the conversation button there are options to either start or participate in a conversation. The person starting the conversation will need to share the ‘Conversation code’ with other participants.

Image of a live translation using Microsoft Translator

Once the conversation has begun, each participant talks by clicking on the microphone. The app then simultaneously translates into the first language of each participant.

In this case the iPhone version has been used to communicate with my Spanish friend Bob (on a Windows laptop).

Image of the web-based version of Microsoft Translator

Image translation

If text forms part of a flat image (e.g. a poster or other printed material), the Microsoft Translator app can translate it using the device’s camera (in this case, a Lidl cereal packet).

Image of Lidl cereal ingredients

Image of the translation of a Lidl cereal packet

Optimising audio input – tips from Jisc

Aiming for the best audio input you can achieve will optimise the performance of automatic speech recognition (ASR), and it will be appreciated by all users listening to the content. Here are some tips from the Jisc accessibility community:

  • Always use a microphone. Wired is more reliable than Bluetooth.
  • Do a test recording before the session to check the quality.
  • Having headset mics too close to your mouth picks up more speech distortion and sibilance. Try lowering or raising the mic relative to your mouth.
  • Minimise background noise. This can be as simple as closing the door to your office.
  • Use the full version of abbreviations and acronyms.
  • Speak at a steady pace. Many users will appreciate this too, e.g. those with processing difficulties or difficulties with focussing.

This is taken from the Jisc advice guide Video captioning and accessibility regulations.

Image from Pexels.