How To Write An Effective Annotation Specification Document for Machine Learning

Innotescus LLC
4 min read · Sep 9, 2020

Let’s face it: most data scientists dread annotation, the process of manually labelling items in supervised machine learning training examples to establish the ground truth used to train the model. The challenges of defining data annotation specifications stem from the nature of the task, not its complexity. Why is that?

  • Accept the iterative nature of annotation specifications. You’re at an advantage if you can anticipate every requirement at the start of a project, but don’t aim for perfection on the first pass. Getting in the ballpark is a good start; set team and management expectations around iteration: move quickly, break models quicker, and come back to refine where needed.
  • It’s easy to over-annotate (waste time labelling details that are not critical to the identification process) or under-annotate (neglect to label details that turn out to be important).
  • In general, there are few guideposts available. Consult other data scientists on your team and across the company when defining requirements. If a third-party annotation service allows for detailed discussion and transparency, working with them can also help define your requirements.

In this post, we recommend key requirements for an annotation specification plan, with the intent of giving you a jump start on the annotation specification document and quickly getting you to a position where you can begin annotation activity, such as requesting quotes from third-party annotators or scheduling and budgeting for in-house work. These recommendations are based on our experience at Innotescus™, an “AnnotationOps” platform for computer vision-based ML.

Elements of an Annotation Specification

A specification is equally valuable whether you perform annotation in-house or outsource the activity. The plan should:

  1. Declare screening requirements for the annotators, including domain expertise or life experience, education, cultural background, and seniority.
  2. Enumerate each entity categorization and aspect that can be labelled, along with acceptable values, and explain the criteria by which confidence levels are assigned (a minimal schema sketch follows this list).
  3. Indicate acceptable data specifications, such as file size or image format.
  4. Allow for associating the user ID and timestamp for each annotation submitted by an annotator.
  5. Make preparations for a training document and repository. Include a robust set of examples with proper labelling. Work with the annotation team to identify ambiguous situations and document the correct approach to each of these.
  6. Incorporate quality assessment throughout the project. State corrective actions that may need to be taken, such as rewriting or re-evaluating labelling guidelines, or retraining annotators. Being explicit about these alternatives upfront makes it more acceptable to introduce them as needed to ensure project success.
  7. Assign a senior project member to serve as the escalation point for new labelling questions and for reporting on whether the ML model features should be reconsidered based on the experiences of the annotators.
  8. Based on the above requirements, the annotation team should provide a forecasted duration and cost quote. (Other outsourcing contractual terms will be addressed in a future post).
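
To make elements 2 through 4 concrete, here is a minimal sketch of how such a specification might be encoded alongside the project, assuming a hypothetical vehicle/pedestrian taxonomy; the class names, attribute values, thresholds, and field names are illustrative assumptions, not prescribed requirements.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Element 2: hypothetical label taxonomy with acceptable attribute values.
LABEL_SCHEMA = {
    "vehicle":    {"type": ["car", "truck", "bus", "motorcycle", "unknown"]},
    "pedestrian": {"posture": ["standing", "walking", "cycling"]},
}

# Element 2 (continued): criteria by which confidence levels are assigned.
CONFIDENCE_CRITERIA = {
    "high":   "object fully visible and unoccluded",
    "medium": "object partially occluded, blurred, or small",
    "low":    "object barely visible (e.g. low light); escalate if unsure",
}

# Element 3: acceptable data specifications.
DATA_SPEC = {
    "image_formats": ["jpg", "png"],
    "max_file_size_mb": 25,
    "min_resolution": (640, 480),
}

# Element 4: every submitted annotation is traceable to its author and time.
@dataclass
class Annotation:
    image_id: str
    label: str            # must be a key of LABEL_SCHEMA
    attributes: dict      # values drawn from LABEL_SCHEMA[label]
    confidence: str       # one of CONFIDENCE_CRITERIA
    annotator_id: str
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

Versioning a file like this alongside the training document and example repository (element 5) keeps annotators and reviewers working from the same definitions as the specification evolves.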

Annotation is essential when dealing with data formats ranging from written text and speech recordings to still images and videos. Those outside the field of vision recognition may think that “a dog is always a dog” when it comes to visual identification. However, vision has its sizable share of ambiguities to resolve, including:

  • Facial expressions strongly reflect a person’s current thinking and emotional state, but correctly labeling the difference between “amazed” and “surprised” would need careful consideration before asking annotators to take on that task. It’s also worth considering how cultural norms affect interpretations of expression.
  • Especially if your training set includes nighttime or other low-light images, instruction needs to be provided on how to treat hard-to-interpret elements, such as vehicle types; a simple agreement spot-check (sketched after this list) can flag where annotators diverge.
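
Ambiguous cases like these are where annotator disagreement surfaces first. As a rough illustration of the quality-assessment element above, the hypothetical spot-check below compares labels from two annotators on a shared subset of images; the image IDs, labels, and 80% threshold are assumptions, not recommended values.

```python
# Hypothetical spot-check: per-image label agreement between two annotators
# who labelled the same subset of images.
def label_agreement(labels_a: dict, labels_b: dict) -> float:
    """Fraction of shared images on which both annotators chose the same label."""
    shared = set(labels_a) & set(labels_b)
    if not shared:
        return 0.0
    matches = sum(labels_a[i] == labels_b[i] for i in shared)
    return matches / len(shared)

annotator_1 = {"img_001": "car", "img_002": "truck", "img_003": "unknown"}
annotator_2 = {"img_001": "car", "img_002": "bus",   "img_003": "unknown"}

print(f"Agreement: {label_agreement(annotator_1, annotator_2):.0%}")  # 67%
```

A persistently low score on a class (say, below 80%) is a signal to revisit the guidelines or retrain annotators rather than to keep annotating.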

Leveraging an Annotation Platform

In summary, many data scientists under-commit to annotation planning and oversight because the questions that arise during the annotation process are not clear at the outset. This results in an approach that is more improvised than carefully planned. In our experience, applying a structured framework — with a supporting set of annotation software tools — makes for more consistent project success, which in turn enhances the return on your investment in ML.

Innotescus is an innovative platform enabling better data, faster annotation, and deeper insights for impactful computer vision applications. Think of the platform as pioneering the “AnnotationOps” field. It delivers:

  • An easy-to-adopt user interface for intuitive, accurate and fast labeling.
  • Quality assurance, through monitoring results and progress metrics.
  • Scalability, including means of inviting internal annotators, colleagues or third parties to participate.
  • Efficient iteration, by identifying data bias early through statistical visualization and feature engineering tools.

We are conducting a pilot program of the platform with organizations that benefit from leading-edge vision recognition capabilities with high-quality, efficient annotation. To learn more, email us or request to participate in the pilot.

Originally published at https://innotescus.io on September 9, 2020.
