Evaluating Translation Technology Partners
The Lab will seek out partners that are qualified, available, and aligned.
Qualified is defined as having the necessary and proven expertise to contribute to the work in the areas needed.
Available is defined as having sufficient capacity and resources to execute the work at the pace the Lab is aiming for.
Aligned is defined as sharing the ETEN vision and mission of the Lab’s commission: to take risks, test the untested, prove the unproven, equip the unequipped, and advance the things that cannot advance without special attention and intentionality.
The following three elements are important when evaluating a new or ongoing relationship or partnership: Accessibility, Quality, and Value.
Accessibility
Where the Lab sponsors translation technology projects, we look at the following types of transparency and access:
Non-negotiable
The product of the translation work (Scripture or Scripture-related material) must be made available under the Creative Commons Attribution-ShareAlike license (CC BY-SA).
The Lab will receive a copy of auxiliary target-language data used in the translation process (including, but not limited to, word lists, translation memory, grammars, normalized versions of writing system scripts, and word/text embeddings), for use at its own discretion. This data should also be made available under one of the Creative Commons license options.
Negotiable
When the Lab sponsors development, preference will go to partners whose software is distributed with its source code, making it available for use, modification, and distribution. Ideal licensing would be MIT or a similar permissive open-source license.
Quality
The Lab will seek partners that are qualified, available, and aligned with our process standards for quality review of Machine Translation projects.
Partners will be evaluated for alignment with the following quality review processes:
Augmented and automated quality checks
Community assessment
Specialty checks where necessary
The process standards establish a relative set of measures against a benchmark that:
Is grounded in similar assessments of known, widely published translations
Has its measured criteria and benchmark score for each selected category determined by the agency itself
Provides the average baseline used to measure new machine-assisted translations
As a default safeguard, we also suggest a community spot-check of the text.
Value
Our ultimate aim is to increase the number of translations that will be completed by the 2033 All-Access Goal deadline—while not compromising quality. We will place the highest value on partners who can help achieve these goals. We’re not simply looking for “cheap” technology, but cost-effective technology that adds value. We will measure these three categories independently and together to determine value:
Quality – Does the quality of the final product meet or exceed that of established Bible translation methods? We have outlined a framework for quality assessment in the above section. Quality measurement is ultimately validated by community acceptance.
In addition to the above, we will determine whether quality is a result of the technology or of the people involved. If a machine translation produces a sub-par draft that requires a massive overhaul by translators, this calls into question the efficacy of the machine translation approach. Can the partner show that the technology improved the final draft quality rather than hindered it?
Speed/Capacity – Does this enable us to complete a final translation at a more rapid pace than established Bible translation methods? We are looking for exponential speed improvements, not incremental ones. For example, if machine translation accelerates translation by 10% but costs 20% more, then it is not a great value. We’re hoping for improvements of at least 50% on speed (5-7 years vs. 12-15 in traditional Bible translation).
The Innovation Lab is currently prioritizing speed over cost in order to achieve the 2033 All-Access Goal. But this cannot come at the expense of quality.
We will also look at the capacity of each partner and their ability to scale.
Cost – How does the cost compare to established Bible translation methods? If it costs more, it needs to dramatically improve speed (without compromising quality). If it costs significantly less but doesn’t improve speed, its value is negotiable.
Considering the large number of projects to be done, we also recognize that cost has to be a factor. In other words, we need cost efficiencies to achieve the scale we are looking at to finish the goal.
When we assess cost, we look at the entire translation process, considering quality and speed, not only the cost of the technology provider. For example, if the service provider only produces a first draft and the translation agency still needs to provide editorial and project management services, those services are added in when calculating the total cost of a translation project.
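As a rough illustration of how the speed and cost factors above might be weighed together, the sketch below compares a technology-assisted project against a traditional baseline. All dollar figures, durations, and the evaluation rule itself are hypothetical assumptions for illustration, not official Lab metrics or thresholds.

```python
# Hypothetical sketch: weighing total cost and speed of a technology-assisted
# translation project against a traditional baseline. All numbers and the
# scoring rule are invented for illustration only.

def evaluate_value(tech_costs, tech_years, baseline_cost, baseline_years):
    """Summarize cost ratio and speed gain versus a traditional baseline."""
    # Total cost includes the whole process, not just the technology
    # provider: first-draft generation, editorial work, project management.
    total_cost = sum(tech_costs.values())
    cost_ratio = total_cost / baseline_cost       # > 1.0 means more expensive
    speed_gain = 1 - tech_years / baseline_years  # 0.5 means 50% faster
    return {
        "total_cost": total_cost,
        "cost_ratio": round(cost_ratio, 2),
        "speed_gain": round(speed_gain, 2),
        # Hypothetical check against the "at least 50% faster" aspiration.
        "meets_speed_target": speed_gain >= 0.5,
    }

# Illustrative figures only (both the cost breakdown and years are invented):
result = evaluate_value(
    tech_costs={"mt_provider": 120_000, "editorial": 200_000, "project_mgmt": 80_000},
    tech_years=6,
    baseline_cost=500_000,
    baseline_years=13,
)
print(result)
```

Under these invented numbers, the project comes in at 80% of the baseline cost and roughly 54% faster, so it would pass the hypothetical speed threshold; a real evaluation would weigh quality alongside these figures rather than relying on cost and speed alone.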
Approach
We will need multiple solutions and approaches to leverage advanced technology toward the 2033 goals. We call this the “toolbox” approach: we need many tools in our toolbox for the significant amount of work that remains. We favor an approach with a strong interplay between technology and humans, that is, advanced technology solutions that assist humans through suggestions, editing, and quality assurance, as well as increased engagement in community checking and participation. This is opposed to a batch approach where outputs are produced with little or no human input.

