This blog post outlines the objectives, tools, challenges, and suggested solutions for making an internal data annotation process more efficient. Airtable was chosen as the tool to use and a data structure was proposed that would allow annotators to assign a unique numeric value to each progress update in order to rank it for relative importance.
Using the “Save view ordering to field” script from the Airtable Marketplace was proposed in order to leverage the drag-and-drop ranking functionality while avoiding any issues with potential duplicates. In conclusion, Airtable has proven to be an amazing and flexible solution for internal data annotation tasks.
[Disclaimer: I am in no way affiliated with Airtable – just really like the tool 🤓]
Objectives
Make our internal data annotation process for a ranking exercise more efficient and less mundane – the specific annotation task consisted of ranking a fixed number of grouped progress updates (short text snippets in natural language) by their relative importance, in order to extend our training data set for a model that helps us extractively summarise large amounts of updates (selecting the most salient ones, as one of multiple steps to create valuable summaries for our users)
Figure out how we can make our internal annotation process more efficient by potentially moving away from a relatively complicated & noisy set-up with Google Forms
Test whether we can create a more efficient, beautiful, and pristine flow with Airtable
Create a cleaner overview of who has to work on which annotation tasks and which tasks are still outstanding
Find a solution that makes the current Slack notification process leaner (currently a lot of notifications are sent whenever new Google Forms with annotation tasks are created, which can lead to people – i.e. myself – muting the channel and not doing their annotation homework!)
Tools
Trying to keep it simple: using Airtable, plus connections to and automations triggered from it
TBD whether we can integrate Airtable directly with pseudonymised testing and production data to automate the process further
Challenges
Finding the best data structure for Airtable, which can encompass all necessary dimensions (underlying data, grouped data for certain annotation tasks, annotators, etc.)
Figuring out the best + most practicable way to actually rank something (not completely straightforward – e.g. you cannot easily use the drag-and-drop functionality and then compute a field based on the row number)
How to make sure that each “rank” is only used once and there are no duplicates?
Is it necessary to include a separate deduplication task, as it is hard to convey this information solely through ranking?
Suggested Solution
Suggested Data Structure in Airtable
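To make the structure more tangible, here is a rough sketch of the three tables as TypeScript types – the table and field names are my shorthand for this post (based on the flow described below), not the exact schema of our base:

```typescript
// A minimal sketch of the proposed base structure, expressed as TypeScript
// types. Field names are assumptions for illustration, not the exact schema.

interface Annotator {
  name: string;
  email: string;             // used for Slack / email notifications
  annotationTasks: string[]; // linked records in the Annotation Tasks table
}

interface Update {
  text: string;              // the progress update snippet to be ranked
  workspace: string;
  project: string;
  calendarWeek: number;      // CW the update was created in, used for grouping
  annotationTask: string;    // linked record in the Annotation Tasks table
  rankingNo: number | null;  // assigned by the annotator, null until ranked
  status: "Unranked" | "Ranked";
}

interface AnnotationTask {
  annotator: string;         // linked record in the Annotators table
  updates: string[];         // the group of updates to be ranked in this task
  workspace: string;
  project: string;
  calendarWeek: number;
  status: "Open" | "Done";
}
```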
Suggested Annotation Flow
Annotators
User accounts for each annotator; automations can send Slack or email updates when an annotator is tagged (i.e. when a new Annotation Task is created for a user) – see the sketch below
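As a rough illustration, a “Run a script” automation step along these lines could ping an annotator via a Slack incoming webhook whenever a task is created for them – the webhook URL and the input variable names are placeholders, not our actual setup:

```typescript
// Sketch of an Airtable automation script that notifies an annotator on Slack
// when a new Annotation Task is created for them. The input variables
// (configured in the automation UI) and the webhook URL are placeholders.
const { annotatorName, taskName } = input.config();

// Slack incoming webhooks accept a simple JSON payload with a "text" field
const SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"; // placeholder

await fetch(SLACK_WEBHOOK_URL, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    text: `Hey ${annotatorName}, a new annotation task is waiting for you: ${taskName} 📝`,
  }),
});
```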
Updates
are either imported manually, via a scheduled run, or through a direct connection to the production RDS (a sketch of the scheduled-run variant follows below)
are automatically grouped based on the imported criteria
an Airtable automation creates these groups based on the calendar week (CW) the update was created in and its project / workspace
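For the scheduled-run variant, a small import job could push pseudonymised updates into Airtable via its REST API and compute the calendar week for grouping on the way in – a minimal sketch (Node 18+ for the built-in fetch; base ID and field names are placeholders, and the real job would read the updates from our RDS):

```typescript
// Sketch of a scheduled import job pushing updates into Airtable's REST API.
const AIRTABLE_API = "https://api.airtable.com/v0";
const BASE_ID = "appXXXXXXXXXXXXXX"; // placeholder
const TABLE = "Updates";

// ISO-8601 calendar week, used for grouping updates into annotation tasks
function calendarWeek(date: Date): number {
  const d = new Date(Date.UTC(date.getFullYear(), date.getMonth(), date.getDate()));
  d.setUTCDate(d.getUTCDate() + 4 - (d.getUTCDay() || 7)); // shift to Thursday
  const yearStart = new Date(Date.UTC(d.getUTCFullYear(), 0, 1));
  return Math.ceil(((d.getTime() - yearStart.getTime()) / 86400000 + 1) / 7);
}

async function importUpdates(
  updates: { text: string; workspace: string; project: string; createdAt: Date }[]
) {
  // The Airtable REST API accepts at most 10 records per create request
  for (let i = 0; i < updates.length; i += 10) {
    const records = updates.slice(i, i + 10).map((u) => ({
      fields: {
        Text: u.text,
        Workspace: u.workspace,
        Project: u.project,
        "Calendar Week": calendarWeek(u.createdAt),
      },
    }));
    await fetch(`${AIRTABLE_API}/${BASE_ID}/${encodeURIComponent(TABLE)}`, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.AIRTABLE_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ records }),
    });
  }
}
```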
Annotation Tasks
created for the different groups of updates (based on workspace, project, week of submission)
one annotation task per annotator and group
this could also be done via an automation / API call to Airtable – see the sketch below
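To sketch that automation / API variant: a script step could fan out one task record per annotator for each new group – table and field names are assumptions matching the data structure above:

```typescript
// Sketch of an Airtable automation script creating one Annotation Task per
// annotator for a newly imported group of updates. Names are assumptions.
const tasksTable = base.getTable("Annotation Tasks");
const annotatorsTable = base.getTable("Annotators");

// The group key would come from the automation trigger,
// e.g. the record that kicked the automation off.
const { workspace, project, calendarWeek } = input.config();

const annotators = (await annotatorsTable.selectRecordsAsync({ fields: ["Name"] })).records;

// One task per annotator and group, so every annotator ranks the same set
// (createRecordsAsync takes up to 50 records per call – plenty for our team)
await tasksTable.createRecordsAsync(
  annotators.map((annotator) => ({
    fields: {
      Annotator: [{ id: annotator.id }], // linked record field
      Workspace: workspace,
      Project: project,
      "Calendar Week": calendarWeek,
      Status: { name: "Open" },          // single select field
    },
  }))
);
```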
First approach:
Individual records are created for each progress update to be annotated by 1 Annotator via a unique Annotation Task
these can be grouped by Task and Status
via the # Ranking No. column, the annotator can manually assign a numeric value
this marks the update as ranked and sorts it accordingly (through a formula that updates the status, and by sorting on the # rank field within the group of updates)
in order to make sure there are no duplicate ranks, the MVP solution would be to check the percentage of unique rank values per task – see the sketch below
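That MVP check could live in a small scripting block that computes the share of unique rank values per task and flags anything below 100% – again just a sketch with assumed field names:

```typescript
// Sketch of the MVP duplicate check: for each annotation task, compare the
// number of distinct rank values with the number of ranked updates.
const updatesTable = base.getTable("Updates");
const query = await updatesTable.selectRecordsAsync({
  fields: ["Annotation Task", "Ranking No."],
});

// Collect the assigned rank values per task
const ranksByTask = new Map<string, number[]>();
for (const record of query.records) {
  const task = record.getCellValueAsString("Annotation Task");
  const rank = record.getCellValue("Ranking No.") as number | null;
  if (task && rank !== null) {
    ranksByTask.set(task, [...(ranksByTask.get(task) ?? []), rank]);
  }
}

// Flag any task where the share of unique ranks is below 100%
for (const [task, ranks] of ranksByTask) {
  const uniqueShare = new Set(ranks).size / ranks.length;
  if (uniqueShare < 1) {
    output.text(`⚠️ ${task}: only ${Math.round(uniqueShare * 100)}% of ranks are unique`);
  }
}
```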
Second approach:
After some additional research in the great Airtable Community and with the help of the amazing people on the Airtable Support team, I figured out that this painful, manual ranking process can actually be avoided!
by adding the “Save view ordering to field” script to our Airtable base (you can find it in the Airtable Marketplace), we can make use of the fun drag-and-drop functionality and won’t encounter any issues with potential duplicates, b-e-a-utiful 😍 (duplicates here refers to duplicated ranking numbers; flagging duplicate records will be handled in a separate flow, cf. the suggested data structure above)
see an example of the smooth process of ranking + logging those ranks below:
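To give a rough idea of what such a script does under the hood, here is my own simplified illustration (not the Marketplace script itself): records are read in their current drag-and-drop view order and the position is written back into the rank field.

```typescript
// Simplified illustration of the "save view ordering to a field" idea – NOT
// the Marketplace script. View and field names are assumptions.
const table = base.getTable("Updates");
const view = table.getView("Ranking view");

// selectRecordsAsync on a view returns records in the view's current order
const query = await view.selectRecordsAsync({ fields: [] });

const updates = query.records.map((record, index) => ({
  id: record.id,
  fields: { "Ranking No.": index + 1 }, // rank 1 = top of the view
}));

// updateRecordsAsync accepts at most 50 records per call
while (updates.length > 0) {
  await table.updateRecordsAsync(updates.splice(0, 50));
}
```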
Conclusion
Challenging your internal (painful) processes can always lead to great outcomes, and Airtable has proven to be an amazing, lightweight, and flexible solution for our internal data annotation tasks (and many others as well – we are also running our entire CRM set-up via Airtable and enjoying most minutes of it, almost unbelievable)!
If you are struggling with any internal annotation tasks and are thinking about trying out a tool like Airtable, or just want to chat about Data Operations in general, always feel free to reach out to me via LinkedIn with any questions! 🤝