MT-Telescope
MT-Telescope lifts the hood on the quality of machine translation (MT) systems, building on COMET's learned quality predictions. This open-source tool delivers a zoomed-in view of performance, so the best MT system can be deployed for the right reasons.
Greater visibility into performance drivers
MT-Telescope further enhances our ability to evaluate machine translation performance. Where COMET predicts the quality scores that a human evaluation (for example, MQM) would assign, MT-Telescope adds a range of comparative metrics that look deeper into a system's quality. This added context supports better-informed decisions about system deployment.
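For orientation, scoring a hypothesis with the open-source COMET library looks roughly like the minimal sketch below (assuming the unbabel-comet Python package; the model name and the exact return type of predict() differ between releases, so treat the details as version-dependent):

```python
from comet import download_model, load_from_checkpoint

# Fetch a trained COMET checkpoint ("wmt20-comet-da" was the default
# reference-based model in early releases; newer versions use other names).
model_path = download_model("wmt20-comet-da")
model = load_from_checkpoint(model_path)

# COMET scores each hypothesis ("mt") against the source ("src")
# and a human reference ("ref").
data = [
    {
        "src": "Dem Feuer konnte Einhalt geboten werden",
        "mt": "The fire could be stopped",
        "ref": "They were able to stop the fire",
    },
]

# COMET 1.x returns (segment_scores, system_score); COMET 2.x instead
# returns a Prediction object with .scores and .system_score.
seg_scores, sys_score = model.predict(data, batch_size=8, gpus=0)
print(seg_scores, sys_score)
```

It is these segment-level scores, rather than a single corpus number, that make a zoomed-in comparison possible.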
Visualized system comparison
MT-Telescope provides three visualizations that compare two MT systems side by side.
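To make a claim like "system X beats system Y" concrete, a head-to-head comparison of segment-level scores is usually backed by paired bootstrap resampling (Koehn, 2004). The standalone sketch below is an illustrative reconstruction with toy scores, not MT-Telescope's internal code:

```python
import random

def paired_bootstrap(scores_x, scores_y, n_samples=1000, sample_ratio=0.5):
    """Estimate how often each system wins when the test set is resampled.

    scores_x / scores_y hold segment-level quality scores (e.g. from COMET)
    for the same segments under two systems; returns each system's win rate.
    """
    assert len(scores_x) == len(scores_y)
    n = len(scores_x)
    k = max(1, int(n * sample_ratio))
    wins_x = wins_y = 0
    for _ in range(n_samples):
        # Draw a subsample of segment indices with replacement.
        idx = [random.randrange(n) for _ in range(k)]
        mean_x = sum(scores_x[i] for i in idx) / k
        mean_y = sum(scores_y[i] for i in idx) / k
        if mean_x > mean_y:
            wins_x += 1
        elif mean_y > mean_x:
            wins_y += 1
    return wins_x / n_samples, wins_y / n_samples

# Toy segment scores for two systems over the same five segments.
x = [0.71, 0.42, 0.55, 0.63, 0.38]
y = [0.65, 0.47, 0.55, 0.51, 0.44]
print(paired_bootstrap(x, y))  # e.g. (0.62, 0.36): X wins most resamples
```

If one system wins the large majority of resamples, the visualized gap between the two systems is unlikely to be an artifact of a handful of segments.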
Filters that zoom in on the detail
In addition to the overall comparison, MT-Telescope allows filtered views across several categories of segments.
These granular, filtered comparisons give organizations concrete evidence for deployment decisions, as the sketch below illustrates.
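As an example of what such a filter can look like, the standalone sketch below restricts a comparison to segments whose source mentions a named entity. It is a hypothetical reconstruction using the stanza NLP library and toy scores, not MT-Telescope's own implementation:

```python
# pip install stanza
import stanza

stanza.download("en", verbose=False)  # one-time model download
nlp = stanza.Pipeline("en", processors="tokenize,ner", verbose=False)

# Toy test set: source segments plus segment-level scores for two systems.
src      = ["Barack Obama visited Lisbon.", "The weather was pleasant."]
scores_x = [0.61, 0.55]
scores_y = [0.48, 0.58]

# Keep only segments whose source mentions at least one named entity.
keep = [i for i, seg in enumerate(src) if nlp(seg).ents]

def mean(values):
    return sum(values) / len(values)

print("named-entity subset:",
      "X =", mean([scores_x[i] for i in keep]),
      "Y =", mean([scores_y[i] for i in keep]))
```

A system that looks stronger overall may still lag on entity-heavy segments, which is exactly the kind of difference an aggregate score hides.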
MT-Telescope helps our LangOps specialists and development teams make smarter decisions on behalf of our customers about which MT system is best suited to their needs, and enables the MT research community to easily use best-practice analysis tools to rigorously benchmark their advances.
Alon Lavie,
VP of Language Technologies, Unbabel
Unbabel’s MT engineering team has already adopted MT-Telescope as part of our customer onboarding and continuous learning process.