【Paper Reading】

This post reviews important papers in computer vision and deep learning, covering classification, object detection, segmentation, and other directions, and also summarizes the development of data augmentation techniques.


Levers are simple too, but they can move the world.



Harry Shum (沈向洋): the three levels of reading a paper

There is a disconnect between the author's intent and the reader's learning.

For example, to this day people still argue about what the texts Confucius wrote actually mean.

Reading is better understood as an iterative process of comprehension: reading is equivalent to understanding, and different levels of reading correspond to different levels of understanding.

1. Categories


【Classification】

  1. 【NIN】《Network In Network》(arXiv-2013)

  2. 【Mixed Pooling】《Mixed Pooling for Convolutional Neural Networks》(RSKT-2014)

  3. 【Distilling】《Distilling the Knowledge in a Neural Network》(arXiv-2015; NIPS Deep Learning Workshop, 2014)

  4. 【Highway network】《Training Very Deep Networks》(NIPS-2015)

  5. 【Inception-v1】《Going Deeper with Convolutions》(CVPR-2015)

  6. 【Inception-v2】《Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift》(ICML-2015)

  7. 【Inception-v3】《Rethinking the Inception Architecture for Computer Vision》(CVPR-2016)

  8. 【WRNs】《Wide Residual Networks》(arXiv-2016)

  9. 【Stochastic Depth】《Deep Networks with Stochastic Depth》(ECCV-2016)

  10. 【Compression】《Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding》(ICLR-2016 Best Paper)

  11. 【SGDR】《SGDR:Stochastic Gradient Descent with Warm Restarts》(arXiv-2016)

  12. 【CLR】《Cyclical Learning Rates for Training Neural Networks》(WACV-2017)

  13. 【Distilling】《Learning Efficient Object Detection Models with Knowledge Distillation》(NIPS-2017)

  14. 【RSCM】《RSCM:Region selection and concurrency model for multi-class weather recognition》(TIP-2017)

  15. 【Inception-v4, Inception-ResNet-v1/v2】《Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning》(AAAI-2017)

  16. 【SqueezeNet】《SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <0.5MB model size》(ICLR-2017)

  17. 【Snapshot Ensembles】《Snapshot Ensembles:Train 1,Get M for Free》(ICLR-2017)

  18. 【DenseNet】《Densely Connected Convolutional Networks》(CVPR-2017)

  19. 【Xception】《Xception: Deep Learning with Depthwise Separable Convolutions》(CVPR-2017)

  20. 【ResNext】《Aggregated Residual Transformations for Deep Neural Networks》(CVPR-2017)

  21. 【MobileNet】《MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications》(CVPR-2017)

  22. 【NasNet】《Learning Transferable Architectures for Scalable Image Recognition》(CVPR-2018)

  23. 【SENet】《Squeeze-and-Excitation Networks》(CVPR-2018) (see the SE-block sketch after this list)

  24. 【ShuffleNet】《ShuffleNet:An Extremely Efficient Convolutional Neural Network for Mobile Devices》(CVPR-2018)

  25. 【MobileNet V2】《MobileNetV2:Inverted Residuals and Linear Bottlenecks》(CVPR-2018)

  26. 【ShuffleNet V2】《ShuffleNet V2:Practical Guidelines for Efficient CNN Architecture Design》(ECCV-2018)

  27. 【CBAM】《CBAM: Convolutional Block Attention Module》(ECCV-2018)

  28. 【Bilinear Pooling】《A Novel DR Classification Scheme based on Compact Bilinear Pooling CNN and GBDT》(JIH-MSP-2018)

  29. 【FD-MobileNet】《FD-MobileNet:Improved MobileNet with a Fast Downsampling Strategy》(ICIP-2018)

  30. 【SKNet】《Selective Kernel Networks》(CVPR-2019)

  31. 【BoT】《Bag of Tricks for Image Classification with Convolutional Neural Networks》(CVPR-2019)

  32. 【C3AE】《C3AE:Exploring the Limits of Compact Model for Age Estimation》(CVPR-2019)

  33. 【MnasNet】《MnasNet:Platform-Aware Neural Architecture Search for Mobile》(CVPR-2019)

  34. 【EfficientNet】《EfficientNet:Rethinking Model Scaling for Convolutional Neural Networks》(ICML-2019)

  35. 【MobileNet V3】《Searching for MobileNetV3》(ICCV-2019)

  36. 【RegNet】《Designing Network Design Spaces》(CVPR-2020)

  37. 【GhostNet】《GhostNet:More Features from Cheap Operations》(CVPR-2020)

  38. 【CSPNet】《CSPNet:A New Backbone that can Enhance Learning Capability of CNN》(CVPRW-2020)

  39. 【RepVGG】《RepVGG:Making VGG-style ConvNets Great Again》(CVPR-2021)

  40. 【CA】《Coordinate Attention for Efficient Mobile Network Design》(CVPR-2021)

  41. 【Shuffle Attention】《SA-Net:Shuffle Attention for Deep Convolutional Neural Networks》(ICASSP-2021)

  42. 【NAM】《NAM:Normalization-based Attention Module》(NeurIPS-2021 workshop)

  43. 【GAM】《Global Attention Mechanism:Retain Information to Enhance Channel-Spatial Interactions》(arXiv-2021)

  44. 【EfficientNetV2】《EfficientNetV2: Smaller Models and Faster Training》(ICML-2021)

  45. 【SPD-Conv】《No More Strided Convolutions or Pooling:A New CNN Building Block for Low-Resolution Images and Small Objects》(ECML-PKDD-2022)

  46. 【Transformer】Introduction to Transformer (learning notes)
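
Several entries above (SENet, CBAM, SKNet, Coordinate Attention, Shuffle Attention, NAM, GAM) revolve around lightweight channel or spatial attention modules. As a quick reference, here is a minimal PyTorch sketch of the squeeze-and-excitation block from the SENet paper; the class name `SEBlock` and the reduction ratio of 16 follow common usage rather than the authors' released code.

```python
# Minimal squeeze-and-excitation (SE) block: an illustrative sketch, not the
# authors' reference implementation. reduction=16 is the paper's default ratio.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pooling
        self.fc = nn.Sequential(                     # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        w = self.pool(x).view(n, c)                  # (N, C) channel descriptor
        w = self.fc(w).view(n, c, 1, 1)              # per-channel gates in [0, 1]
        return x * w                                 # recalibrate the feature map


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    print(SEBlock(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```

The same gating pattern (global pooling, bottleneck MLP, sigmoid, channel-wise rescaling) is the starting point that CBAM and Coordinate Attention extend with spatial or positional information.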
