If you stuck to the first three rules but still drifted from your ground truth generation plan, I feel you. I’ve been there. At the beginning of a project, I would have a plan: I knew exactly what I wanted to annotate and how I wanted to do it. I wanted the annotation plan fixed before the project started and followed throughout. However, in my deep learning tissue image analysis projects, I had to change my approach in the middle of model development, and that annoyed me.
Then I realized that the “fixed annotation plan” approach works well for classical computer vision projects, but not necessarily for deep learning-based projects. When developing a deep learning model, you end up adjusting your ground truth along the way, and that’s OK. This brings us to the: