MIC
Abstract
• Unsupervised Domain Adaptation (UDA) adapts models trained on annotated source data to an unlabeled target domain. Previous methods struggle with classes that look visually similar in the target domain. The Masked Image Consistency (MIC) module enhances UDA by exploiting spatial context relations, improving performance across various tasks and achieving state-of-the-art results in different UDA scenarios.
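Concretely, MIC enforces that predictions on a masked target image stay consistent with pseudo-labels obtained from the full image (the MIC method uses an EMA teacher for this). Below is a minimal PyTorch sketch of that idea, not the authors' implementation; the helper names, patch size, and mask ratio are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def random_patch_mask(images, patch_size=64, mask_ratio=0.7):
    """Zero out a random subset of square patches of the target images."""
    b, _, h, w = images.shape
    keep = (torch.rand(b, 1, h // patch_size, w // patch_size,
                       device=images.device) > mask_ratio).float()
    keep = F.interpolate(keep, size=(h, w), mode="nearest")
    return images * keep

def masked_consistency_loss(student, teacher, target_images):
    """Match student predictions on the masked image to pseudo-labels
    produced by the teacher on the full (unmasked) image."""
    with torch.no_grad():                      # teacher sees the full image
        pseudo_labels = teacher(target_images).argmax(dim=1)
    masked = random_patch_mask(target_images)  # student sees only partial context
    logits = student(masked)
    return F.cross_entropy(logits, pseudo_labels)
```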
For semantic segmentation, we use the DAFormer and DeepLabV2 networks, starting from ImageNet pretraining. For classification, MIC builds on SDAT, which combines CDAN with MCC and a smoothness-enhancing loss. The classification training setup uses SGD with a learning rate of 0.002, a batch size of 32, and a smoothness parameter of 0.02.
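For reference, the optimizer portion of that setup could look like the following sketch; only the learning rate, batch size, and smoothness parameter come from the text above, while the placeholder model and the momentum/weight-decay values are assumptions to check against the repository configs.

```python
import torch

batch_size = 32
rho = 0.02                              # smoothness parameter used by SDAT
model = torch.nn.Linear(2048, 65)       # placeholder for the actual classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.002,
                            momentum=0.9, weight_decay=1e-3)  # momentum/WD assumed
```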
For object detection, we use an FPN-based detector together with SADA for adaptation. The setup follows previous work, with an initial learning rate of 0.0025 and a batch size of 2. We report results as mean Average Precision (mAP) at a 0.5 IoU threshold.
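To illustrate what mAP at a 0.5 IoU threshold measures, here is a small sketch using torchmetrics; the library choice and the toy boxes are assumptions for illustration, not part of the actual evaluation pipeline.

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

metric = MeanAveragePrecision(iou_thresholds=[0.5])   # single 0.5 IoU threshold
preds = [{"boxes": torch.tensor([[10., 10., 50., 50.]]),
          "scores": torch.tensor([0.9]),
          "labels": torch.tensor([1])}]
targets = [{"boxes": torch.tensor([[12., 12., 48., 52.]]),
            "labels": torch.tensor([1])}]
metric.update(preds, targets)
print(metric.compute()["map"])                         # mAP@0.5 for this toy example
```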
5. Conclusion:
We presented Masked Image Consistency (MIC) to improve how models learn from new, unlabeled data. By enforcing that predictions on partially visible images match those on the fully visible images, MIC boosts performance across different tasks and UDA scenarios.
● We started working on the cls repository, which was originally set up for Python 3.8.5, but we ran it on Python 3.8.10 by creating a virtual environment named mic-cls
● We then installed the requirements from requirements.txt
● We then downloaded two separate datasets into ‘/group3/data_cls’ and placed them under ‘examples/data/’:
○ Dataset 1: Office Dataset
○ Dataset 2: VisDA-2017
● We then ran MIC for Domain Adaptive Classification using the provided script
● We track our experiments and results with Weights & Biases (wandb) under a project named dl-proj3, as sketched after this list
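A minimal sketch of the logging calls, assuming the standard wandb Python API; the run name and metric keys are illustrative, only the project name dl-proj3 comes from our setup.

```python
import wandb

run = wandb.init(project="dl-proj3", name="mic-cls-run")  # run name is an assumption
for step, loss in enumerate([0.9, 0.7, 0.5]):             # placeholder loss values
    wandb.log({"train/loss": loss}, step=step)
run.finish()
```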
D. Problems faced:
● Downloading image_list