
Evaluating SOP with the pretrained model: R@1 much lower than the number in README.md #27

@junwei-h

Description


Hello,

I guess the README reports training results, not evaluation results, for SOP? I used your pre-trained model, ran evaluate.py, and got a much lower result. Is this expected?

| Method | Backbone | R@1 | R@10 | R@100 | R@1000 |
|---|---|---|---|---|---|
| Proxy-Anchor512 | Inception-BN | 79.2 | 90.7 | 96.2 | 98.6 |
| Run code/evaluate.py | Inception-BN | 49.4 | 65.0 | 78.8 | 91.3 |

Here is the command I ran:

    python Proxy-Anchor-CVPR2020/code/evaluate.py --gpu-id -1 --batch-size 120 --model bn_inception --embedding-size 512 --dataset SOP --resume ../pretrained/SOP_bn_inception_best.pth --workers 4

To get it to run on CPU (Ubuntu 20.04 on WSL, torch==1.13.1), I changed the CUDA-related code:

    # model = model.cuda()
    if args.gpu_id != -1:
        model = model.cuda()

    checkpoint = torch.load(args.resume, map_location=torch.device('cpu'))
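For reference, a minimal device-agnostic sketch of the same change, assuming the script's convention that `--gpu-id -1` means CPU (the `Args` class and the toy model here are stand-ins, not the repository's code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the script's parsed arguments.
class Args:
    gpu_id = -1  # -1 selects CPU, following the evaluate.py convention

args = Args()

# Pick one device up front instead of sprinkling .cuda() calls around.
device = torch.device("cpu" if args.gpu_id == -1 else f"cuda:{args.gpu_id}")

# Toy model in place of bn_inception; .to(device) replaces model.cuda().
model = nn.Linear(4, 2).to(device)

# map_location remaps CUDA-saved tensors onto the chosen device, so the
# same line works on both CPU and GPU machines:
# checkpoint = torch.load(args.resume, map_location=device)
```

This keeps a single `device` object as the source of truth, so the CPU path needs no commented-out lines.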

I also fixed a state_dict mismatch error by adding strict=False:

    model.load_state_dict(checkpoint['model_state_dict'], strict=False)

Otherwise, I hit this error raised by torch:

        if strict:
            if len(unexpected_keys) > 0:
                error_msgs.insert(
                    0, 'Unexpected key(s) in state_dict: {}. '.format(
                        ', '.join('"{}"'.format(k) for k in unexpected_keys)))
            if len(missing_keys) > 0:
                error_msgs.insert(
                    0, 'Missing key(s) in state_dict: {}. '.format(
                        ', '.join('"{}"'.format(k) for k in missing_keys)))
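Note that `strict=False` silently skips every mismatched parameter, leaving those weights at their random initialization, which by itself could explain a large R@1 drop. A sketch of how to diagnose the mismatch instead of suppressing it, using a toy model in place of bn_inception (a common cause is a checkpoint saved from `nn.DataParallel`, whose keys carry a `module.` prefix):

```python
import torch
import torch.nn as nn

# Toy stand-in for the real model; the actual issue uses bn_inception.
model = nn.Sequential(nn.Linear(4, 4))

# Simulate a checkpoint saved from nn.DataParallel: every key is prefixed
# with "module.", so none of them match the bare model's keys.
saved = {"module." + k: v for k, v in model.state_dict().items()}

# Show exactly what strict=False would hide.
model_keys = set(model.state_dict().keys())
ckpt_keys = set(saved.keys())
print("missing from checkpoint:", sorted(model_keys - ckpt_keys))
print("unexpected in checkpoint:", sorted(ckpt_keys - model_keys))

# Strip the prefix and load strictly, so no weights are silently dropped.
fixed = {k[len("module."):] if k.startswith("module.") else k: v
         for k, v in saved.items()}
model.load_state_dict(fixed, strict=True)
```

If the printed key sets line up after stripping the prefix, `strict=True` loads cleanly and the evaluation uses the actual pretrained weights.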
