
eagleeye's People

Contributors

ankitvashisht12, anonymous47823493, bezorro, bowenwu1, dependabot[bot]


eagleeye's Issues

Question about pruning MobileFaceNet on face recognition.

Hi~
I have successfully re-implemented your baseline. Now I want to prune MobileFaceNet for face recognition using your idea, but when I search for the best score, it is always 0. My score-calculation script is the same as yours.

Question about the full-size baseline model performance in the paper's Table 4 and Table 5

Hello,
What is your full-size baseline model's performance in Table 4 and Table 5?
In your paper there are only compressed-model results, e.g. the 0.75× ResNet-50's accuracy of 74.8%, but other works usually use the full-size baseline model as a comparison.
So I wonder what your full-size baseline is.
Looking forward to your reply.
Thank you.

Problems with the FLOPs ratios when searching ResNet-50

FLOPs ratio: 0.98236
FLOPs ratio: 0.97435
FLOPs ratio: 0.96765
FLOPs ratio: 0.99743
FLOPs ratio: 0.97125
FLOPs ratio: 0.95743
............
Today we wanted to follow the code from your paper, but when we use res50_50flops.sh we find that the generated strategies always have FLOPs ratios in the range 0.96–1, far from the target ratio. We used the parameters provided in the code. Can you help us reproduce the work?

search candidate problems

Great paper! I am trying to reproduce it, but I can't seem to find good enough candidates. Two questions:

  • What is the reasoning behind the while loop around main() in search.py? It causes part of the dataset preparation to be recomputed on every iteration.

  • Besides that, while running search.py I could not find any candidate scoring above 0.05, even after more than 1 GPU-day. Personally, I feel the random-search strategy cannot guarantee finding a good enough candidate within a fixed amount of time.

I moved the while loop inside main() to speed things up; the modified code is as follows:
```python
# (imports as in the repo's original search.py: torch, tqdm, distiller,
# ModelWrapper, custom_get_dataloaders, model_summary, etc.)

def main(opt):
    # basic settings
    # os.environ["CUDA_VISIBLE_DEVICES"] = str(opt.gpu_ids)[1:-1]

    if torch.cuda.is_available():
        device = "cuda"
        torch.backends.cudnn.benchmark = True
    else:
        device = "cpu"

    #####################  Get Dataloader  ####################
    dataloader_train, dataloader_val = custom_get_dataloaders(opt)

    # Cache ~100 training batches once, so every candidate reuses them
    # for adaptive BN instead of rebuilding the dataset each iteration.
    train_data = []
    for index, sample in enumerate(tqdm(dataloader_train, leave=False)):
        train_data.append(sample)
        if index > 100:
            break

    # dummy_input is a sample input of the dataloaders
    if hasattr(dataloader_val, "dataset"):
        dummy_input = dataloader_val.dataset.__getitem__(0)
        dummy_input = dummy_input[0]
        dummy_input = dummy_input.unsqueeze(0)
    else:
        # for the ImageNet DALI loader
        dummy_input = torch.rand(1, 3, 224, 224)

    while True:
        #####################  Create Baseline Model  ####################
        net = ModelWrapper(opt)
        net.load_checkpoint(opt.checkpoint)
        flops_before, params_before = model_summary(net.get_compress_part(), dummy_input)

        #####################  Pruning Strategy Generation  ###############
        compression_scheduler = distiller.file_config(
            net.get_compress_part(), net.optimizer, opt.compress_schedule_path
        )
        num_layer = len(compression_scheduler.policies[1])

        channel_config = get_pruning_strategy(opt, num_layer)  # pruning strategy

        compression_scheduler = random_compression_scheduler(
            compression_scheduler, channel_config
        )

        ###### Adaptive-BN-based Candidate Evaluation of Pruning Strategy ###
        try:
            thinning(net, compression_scheduler, input_tensor=dummy_input)
        except Exception:
            print("[WARNING] This pruning strategy is invalid for the distiller thinning module, skip it.")
            continue

        flops_after, params_after = model_summary(net.get_compress_part(), dummy_input)
        ratio = flops_after / flops_before
        print("FLOPs ratio:", ratio)
        if ratio < opt.flops_target - 0.01 or ratio > opt.flops_target + 0.01:
            # illegal pruning strategy: FLOPs ratio outside the target band
            continue

        net = net.to(device)
        net.parallel(opt.gpu_ids)
        net.get_compress_part().train()  # train mode, so BN re-registers its statistics
        with torch.no_grad():
            for index, sample in enumerate(tqdm(train_data, leave=True)):
                _ = net.get_loss(sample)

        strategy_score = net.get_eval_scores(dataloader_val)["accuracy"]

        ####################  Save Pruning Strategy and Score  #########
        log_file = open(opt.output_file, "a+")
        log_file.write("{} {} ".format(strategy_score, ratio))
        for item in channel_config:
            log_file.write("{} ".format(str(item)))
        log_file.write("\n")
        log_file.close()
        print("Eval Score: {}".format(strategy_score))

        if strategy_score >= 0.141:
            return
```

about cifar10

Hello, could you release the CIFAR-10 part of the code and the experimental results?

When running search with ResNet-50, errors sometimes occur

@anonymous47823493
During strategy generation, I sometimes get this error:

USE PART OF TRAIN SET WITH UNIFORM SPLIT
len(train_dataset) 12724
FLOPs ratio: 0.4226975629989995
USE PART OF TRAIN SET WITH UNIFORM SPLIT
len(train_dataset) 12724
FLOPs ratio: 0.3743268191985986
USE PART OF TRAIN SET WITH UNIFORM SPLIT
len(train_dataset) 12724
Traceback (most recent call last):
  File "/media/jie/Work/EagleEye-master/search.py", line 104, in <module>
    main(opt)
  File "/media/jie/Work/EagleEye-master/search.py", line 70, in main
    thinning(net, compression_scheduler, input_tensor=dummy_input)
  File "/media/jie/Work/EagleEye-master/thinning/__init__.py", line 12, in thinning
    scheduler.on_epoch_begin(1)
  File "/media/jie/Work/EagleEye-master/distiller/scheduler.py", line 129, in on_epoch_begin
    policy.on_epoch_begin(self.model, self.zeros_mask_dict, meta, **kwargs)
  File "/media/jie/Work/EagleEye-master/distiller/policy.py", line 197, in on_epoch_begin
    self.pruner.set_param_mask(param, param_name, zeros_mask_dict, meta)
  File "/media/jie/Work/EagleEye-master/distiller/pruning/ranked_structures_pruner.py", line 63, in set_param_mask
    param, param_name, zeros_mask_dict, fraction_to_prune, model
  File "/media/jie/Work/EagleEye-master/distiller/pruning/ranked_structures_pruner.py", line 85, in prune_to_target_sparsity
    assert self.leader_binary_map is not None
AssertionError

Eval score very low in the search stage

Hello, I am trying to reproduce your work, but the eval scores of the searched models on the ImageNet-1K validation data come out around 0.001–0.002 and do not increase any further. Can you suggest any possible reasons?

Knowledge Distillation

Hello, I can see there is a distiller folder which contains scripts for knowledge distillation. Could you please explain how to perform knowledge distillation with the obtained pruned model? Thanks!
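
Independent of how the vendored distiller library exposes it, the standard knowledge-distillation objective would use the full-size checkpoint as a frozen teacher and the pruned network as the student. A minimal sketch; the temperature `T` and mixing weight `alpha` are illustrative hyperparameters, not values from this repo:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft-target term: KL divergence between the temperature-softened
    # teacher and student distributions (scaled by T^2, as in Hinton et al.).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

In practice you would forward each batch through both networks, compute teacher logits under torch.no_grad(), and optimize only the student's parameters with this loss.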

Mismatch between the loaded weights and the model?

Another problem: when I run finetune.py with your provided scripts and checkpoint, I get the following error:

Traceback (most recent call last):
  File "finetune.py", line 173, in <module>
    main(opt)
  File "finetune.py", line 90, in main
    net.load_checkpoint(opt.checkpoint)
  File "/home/yeluyue/lz/program/EagleEye/models/wrapper.py", line 122, in load_checkpoint
    module.weight = torch.nn.Parameter(checkpoint[key + ".weight"])
KeyError: 'layers.0.conv1.weight'

Originally posted by @BlossomingL in #18 (comment)

Search for ResNet-50

I also searched for the maximum accuracy on ResNet-50; the top-5 candidates by score are listed below. Can I prune and finetune ResNet-50 with these parameters?

strategy index:49, score:0.133
strategy index:165, score:0.122
strategy index:94, score:0.117
strategy index:290, score:0.102
strategy index:22, score:0.101

Originally posted by @BlossomingL in #19 (comment)

while True

I see `while True:` in finetune.py; how can we break out of it?

Question about sub-validation set

Thanks for your great research and code.
I'm just curious why you use a sub-validation set drawn from a small amount of training data, instead of the same amount of validation data.
Isn't it likely that the model will overfit to the training data and therefore be measured with misleadingly high accuracy?

Finetuning pruned models (ResNet-50 and MobileNetV1)

Hi~
I'm finetuning ResNet-50 (pruned to 50% FLOPs, generated by your scripts) with my searched parameters; my script and TensorBoard curve are below. Training is at epoch 60 of 120 so far, but the accuracy is low (0.58, vs. 0.742 in your paper). Is anything wrong?

python3 finetune.py \
--model_name resnet50 \
--num_classes 1000 \
--checkpoint models/ckpt/imagenet_resnet50_full_model.pth \
--gpu_ids 0 \
--batch_size 128 \
--dataset_path /home/linx/dataset/ImageNet2012 \
--dataset_name imagenet \
--exp_name resnet50_50flops_3 \
--search_result search_results/pruning_strategies_resnet50.txt \
--strategy_id 49 \
--epoch 120 \
--lr 1e-2 \
--weight_decay 5e-4 \
--compress_schedule_path compress_config/res50_imagenet.yaml

strategy index:49, score:0.133

(TensorBoard screenshot attached)

I have an idea as an extension of your work.

I have just read your paper, and I do not have enough GPU resources or coding ability to test my idea myself. I hope you can try it if you think it is reasonable.

In your work, you re-register the pruned candidate network's mean and standard deviation of each convolution layer's output before the scale and bias terms of a batch-normalization layer (you call this adaptive batch normalization). In my understanding, it can only be used for a network with batch-normalization layers.

My idea is simple, but I think it can attach to any convolutional network: before pruning, you can apply a pseudo batch-norm to your network.

For an original conv layer's output y (without a following batch-norm layer), you can pseudo batch-norm the layer's output as

ŷ = (y − μ₀) / σ₀

where (μ₀, σ₀) are the statistics of the original network (before pruning). For the pruned candidate network, I think you can just re-register the std and mean with the following equation:

ŷ = (y − μₚ) / σₚ

where (μₚ, σₚ) are the statistics of the pruned network.

The reason for my adjustment is this: if the pruned model has the same statistics (mean & std) as the old model, as in your work, then it should give the same result as your method, but it can also be used for a model without batch-norm layers.
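
To make the proposal concrete, here is a hypothetical minimal PyTorch sketch of such a pseudo batch-norm module (the class name PseudoBN and the set_stats helper are illustrations of the idea above, not code from this repo): register (μ₀, σ₀) from the original network before pruning, then re-register (μₚ, σₚ) measured on the pruned candidate.

```python
import torch
import torch.nn as nn

class PseudoBN(nn.Module):
    """Normalize a conv layer's output with fixed per-channel statistics,
    so adaptive-BN-style re-registration works without real BN layers."""

    def __init__(self, num_channels):
        super().__init__()
        # Buffers, not parameters: they are re-registered, never trained.
        self.register_buffer("mu", torch.zeros(num_channels))
        self.register_buffer("sigma", torch.ones(num_channels))

    @torch.no_grad()
    def set_stats(self, activations):
        # activations: (N, C, H, W) samples of this layer's output,
        # taken from the original net first, later from the pruned candidate.
        self.mu = activations.mean(dim=(0, 2, 3))
        self.sigma = activations.std(dim=(0, 2, 3)) + 1e-5

    def forward(self, x):
        # Per-channel normalization with the registered statistics.
        return (x - self.mu.view(1, -1, 1, 1)) / self.sigma.view(1, -1, 1, 1)
```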

Key error while running 'inference.py'

Hello, I am trying to reproduce your results. I have finetuned the pruned models and the checkpoints were saved. The problem occurs when I try to run inference in inference.py: loading the finetuned pruned model's checkpoint results in a KeyError. Can you please suggest a possible solution?
(screenshot of the KeyError traceback attached)

Search results on CIFAR-10

Thanks for your nice paper and code. However, I cannot find the search results for ResNet-56 and MobileNetV1 on CIFAR-10 (reported in Table 3 of the EagleEye paper). Could you please provide these search results?

batch_norm error

Hi, when I run search.py, the following error is raised. Do you have any idea? Thanks!
(screenshot of the error attached)

a question about adaptive bn?

HI , @anonymous47823493

After reading EagleEye, I understand that adaptive BN makes the accuracy on the sub-validation set a more reliable indicator of how good the pruned network will be after finetuning. But I have a question about the adaptive-BN technique: it has been proposed before, and works such as BigNAS and slimmable networks call it BN calibration. So I just don't see the innovation.
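
For concreteness, BN calibration / adaptive BN boils down to re-estimating only the BN running statistics on a small number of batches while all learned weights stay frozen. A minimal generic PyTorch sketch (an illustration of the technique, not this repo's exact code path; it assumes a loader yielding (images, labels) pairs):

```python
import torch

def recalibrate_bn(model, calib_loader, num_batches=100, device="cuda"):
    # Wipe each BN layer's running mean/var so they are re-estimated
    # from the pruned architecture's own activations.
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None  # None = cumulative moving average in PyTorch
    model.train()            # BN only updates running stats in train mode
    with torch.no_grad():    # weights stay fixed; only BN buffers change
        for i, (images, _) in enumerate(calib_loader):
            model(images.to(device))
            if i + 1 >= num_batches:
                break
    model.eval()
    return model
```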

Select Number of Filters to be pruned?

Hello,

Is it possible with this tool to specify how many filters per layer are to be pruned?
For example: layer X should prune 32 filters.

Thank you in advance!
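
As far as I can tell, not directly as a filter count, but the distiller-style schedules in compress_config/ express pruning per weight tensor as a sparsity fraction, so pruning 32 of 64 filters in layer X corresponds to a desired sparsity of 0.5. A hedged sketch, with an illustrative pruner name and weight path:

```yaml
version: 1
pruners:
  layer_x_pruner:
    class: L1RankedStructureParameterPruner
    group_type: Filters
    desired_sparsity: 0.5   # prune 32 of this layer's 64 filters
    weights: [module.layer_x.conv.weight]

policies:
  - pruner:
      instance_name: layer_x_pruner
    starting_epoch: 0
    ending_epoch: 1
    frequency: 1
```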

list index out of range error

The setting --compress_schedule_path compress_config/res50_imagenet.yaml is needed for the finetuning commands in ./scripts/res50_25flops.sh, ./scripts/res50_50flops.sh, and ./scripts/res50_75flops.sh; otherwise a "list index out of range" error occurs, because the default path is compress_config/mbv1_imagenet.yaml.
