Last week, we talked about our recommendations on how to make your visual data an accurate representation of the real world. However, having your data match your production environment isn’t the only thing you need to get your AI projects out of R&D. You also need to involve the right Subject Matter Experts (SMEs) early and often throughout the project.
We can’t stress the importance of this enough! Think of it this way: your SMEs have the expertly trained eyes that you’re trying to digitize into a computer vision model.
So when creating a new model, consider this: who are the people who can already solve this problem using their eyes? Maybe it’s your quality assurance team, or maybe your most senior intel analysts. Find out who these people are, and invite them to be a part of your AI project as soon as possible!
This is important because your SMEs know what they’re looking for, and they know whether the AI model will actually help them. The criteria and definitions you or your data science team set may be quite different from what an SME would expect. These differences then get baked into your model, creating a cascade of errors throughout the entire AI workflow.
So now, how do you productize SME knowledge and needs?
Understand what model your SMEs really want
What are you trying to detect using computer vision? Talk with the SMEs who know that domain of the business and make sure your goal lines up with their expectations and needs. Ask them: if AI could give you just one data point, what would it be? What piece of information would make your work easier? Work backwards from there to scope and design your AI project to meet that need, even if only as an initial solution.
Misalignment in this step is one of the most common reasons we see AI that never leaves R&D.
Create clear definitions with SMEs to use for annotation
Because SMEs are your most expert eyes for visual problems, they have the most intuitive and robust sense of what an AI model should and shouldn’t detect. Too often, we’ve seen projects get derailed because the ontology (that is, the specific definitions of what you want to train the model to find) is decided without input from the SMEs. If you want your model to ultimately reflect what your SMEs need, get their input from the very start!
When doing so, work to create a definitions document that outlines what the model needs to detect and explicitly lists what does and doesn’t count for each category.
Try not to use vague categories that might be easy to misinterpret. For example, “damage” is not a great category for computer vision, because “damage” means something different to everyone.
Let’s say you’re an aluminum can manufacturer and you want to train a model to detect damaged cans as they come off the production line. Creating one catch-all “not a good can” category is going to lead to a lot of problems when it comes time to annotate examples. You want categories that are as specific and narrowly scoped as possible; try starting with just “dent” or “scratch”. Work with your SMEs to understand what the most common damage types are, how they happen on the production line, and what problems they cause in a real production environment.
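To make that concrete, it can help to keep the definitions document in a machine-readable form so the same categories flow straight into your annotation tool. Below is a minimal Python sketch for the hypothetical can example; every category name and criterion here is illustrative, not a real annotation spec:

```python
# A minimal, machine-readable sketch of a definitions document for the
# hypothetical can-inspection example. All names and criteria are
# illustrative placeholders to work out with your SMEs.
DEFECT_ONTOLOGY = {
    "dent": {
        "definition": "A visible inward deformation of the can body or lid.",
        "counts": [
            "Creases or depressions in the sidewall",
            "Deformed rims that no longer sit flush",
        ],
        "does_not_count": [
            "Printing defects on the label",
            "Surface marks with no deformation (see 'scratch')",
        ],
    },
    "scratch": {
        "definition": "A linear surface mark that removes coating or metal.",
        "counts": [
            "Visible score lines cutting through the printed label",
        ],
        "does_not_count": [
            "Dust or smudges that wipe off",
            "Inward deformations (see 'dent')",
        ],
    },
}
```

Note how each category points at its neighbor (“see ‘scratch’”): explicit cross-references like these help annotators pick exactly one label when a defect could plausibly fit two.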
Feedback, feedback, feedback
Machine learning models can be a lot like us: they need lots of feedback if they’re going to learn how to do their jobs right.
After you’ve successfully trained a model using the definitions you created together with your SMEs, it’s time to close the loop by getting feedback from those same SMEs on model outputs.
Take a random sample of the output from your model and show it to the SMEs. Ask them to thumbs-up / thumbs-down each image or video. Does this meet their needs and expectations? If not, why not?
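If your model’s outputs live in something like a list of per-image predictions, a few lines of code are enough to draw that sample and turn it into a simple review sheet. Here’s a minimal Python sketch; the field names (`image`, `label`, `score`) are assumptions to adapt to whatever your pipeline actually produces:

```python
import csv
import random

def sample_for_review(predictions, k=50, seed=7):
    """Draw a reproducible random sample of model outputs for SME review."""
    rng = random.Random(seed)
    return rng.sample(predictions, min(k, len(predictions)))

def write_review_sheet(sample, path="sme_review.csv"):
    """Write the sample to a CSV with blank 'verdict' (up/down) and
    free-text 'why' columns for the SMEs to fill in."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["image", "label", "score", "verdict", "why"]
        )
        writer.writeheader()
        for row in sample:
            writer.writerow({**row, "verdict": "", "why": ""})

# Example usage with made-up predictions; adapt to your own pipeline.
predictions = [
    {"image": f"can_{i:04d}.jpg", "label": "dent", "score": 0.9}
    for i in range(500)
]
write_review_sheet(sample_for_review(predictions))
```

The free-text “why” column is just as valuable as the verdict itself: a few words on why a detection missed the mark often point straight at a definition that needs tightening.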
Both positive and negative feedback are useful here as a means to fine-tune the model and make it even better. Use the feedback to update the definitions document you created with them, then go back and annotate more data to the updated standards.