Levers are simple too, but they can move the world.
Table of Contents
There is a disconnect between the author's intent and the reader's learning. For example, people still argue to this day about what the words Confucius wrote actually mean. Reading is therefore more of an iterative process of understanding: reading amounts to understanding, and different levels of reading correspond to different levels of understanding.
1. Categories
【Classification】
- 【NIN】《Network In Network》(arXiv-2013)
- 【Mixed Pooling】《Mixed Pooling for Convolutional Neural Networks》(RSKT-2014)
- 【Distilling】《Distilling the Knowledge in a Neural Network》(arXiv-2015, in NIPS Deep Learning Workshop, 2014)
- 【Inception-v3】《Rethinking the Inception Architecture for Computer Vision》(CVPR-2016)
- 【WRNs】《Wide Residual Networks》(arXiv-2016)
- 【Stochastic Depth】《Deep Networks with Stochastic Depth》(ECCV-2016)
- 【Compression】《Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding》(ICLR-2016 Best Paper)
- 【SGDR】《SGDR: Stochastic Gradient Descent with Warm Restarts》(arXiv-2016)
- 【CLR】《Cyclical Learning Rates for Training Neural Networks》(WACV-2017)
- 【Distilling】《Learning Efficient Object Detection Models with Knowledge Distillation》(NIPS-2017)
- 【RSCM】《RSCM: Region Selection and Concurrency Model for Multi-class Weather Recognition》(TIP-2017)
- 【SqueezeNet】《SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <0.5MB model size》(ICLR-2017)
- 【Snapshot Ensembles】《Snapshot Ensembles: Train 1, Get M for Free》(ICLR-2017)
- 【DenseNet】《Densely Connected Convolutional Networks》(CVPR-2017)
- 【Xception】《Xception: Deep Learning with Depthwise Separable Convolutions》(CVPR-2017)
- 【ResNeXt】《Aggregated Residual Transformations for Deep Neural Networks》(CVPR-2017)
- 【MobileNet】《MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications》(CVPR-2017) (its core building block is sketched after this list)
- 【NASNet】《Learning Transferable Architectures for Scalable Image Recognition》(CVPR-2018)
- 【SENet】《Squeeze-and-Excitation Networks》(CVPR-2018) (its SE block is sketched after this list)
- 【ShuffleNet】《ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices》(CVPR-2018)
- 【MobileNet V2】《MobileNetV2: Inverted Residuals and Linear Bottlenecks》(CVPR-2018)
- 【ShuffleNet V2】《ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design》(ECCV-2018)
- 【CBAM】《CBAM: Convolutional Block Attention Module》(ECCV-2018)
- 【Bilinear Pooling】《A Novel DR Classification Scheme based on Compact Bilinear Pooling CNN and GBDT》(JIH-MSP-2018)
- 【FD-MobileNet】《FD-MobileNet: Improved MobileNet with a Fast Downsampling Strategy》(ICIP-2018)
- 【SKNet】《Selective Kernel Networks》(CVPR-2019)
- 【BoT】《Bag of Tricks for Image Classification with Convolutional Neural Networks》(CVPR-2019)
- 【C3AE】《C3AE: Exploring the Limits of Compact Model for Age Estimation》(CVPR-2019)
- 【MnasNet】《MnasNet: Platform-Aware Neural Architecture Search for Mobile》(CVPR-2019)
- 【EfficientNet】《EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks》(ICML-2019)
- 【MobileNet V3】《Searching for MobileNetV3》(ICCV-2019)
- 【RegNet】《Designing Network Design Spaces》(CVPR-2020)
- 【GhostNet】《GhostNet: More Features from Cheap Operations》(CVPR-2020)
- 【CSPNet】《CSPNet: A New Backbone that can Enhance Learning Capability of CNN》(CVPRW-2020)
- 【RepVGG】《RepVGG: Making VGG-style ConvNets Great Again》(CVPR-2021)
- 【CA】《Coordinate Attention for Efficient Mobile Network Design》(CVPR-2021)
- 【Shuffle Attention】《SA-Net: Shuffle Attention for Deep Convolutional Neural Networks》(ICASSP-2021)
- 【NAM】《NAM: Normalization-based Attention Module》(NeurIPS-2021 Workshop)
- 【GAM】《Global Attention Mechanism: Retain Information to Enhance Channel-Spatial Interactions》(arXiv-2021)
- 【EfficientNetV2】《EfficientNetV2: Smaller Models and Faster Training》(ICML-2021)
- 【SPD-Conv】《No More Strided Convolutions or Pooling: A New CNN Building Block for Low-Resolution Images and Small Objects》(ECML-PKDD-2022)
- 【Transformer】Introduction to Transformer (learning notes)
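Several entries above (Xception, MobileNet, FD-MobileNet, MobileNet V2) are built around depthwise separable convolutions: a per-channel 3×3 convolution followed by a 1×1 pointwise convolution, in place of one dense 3×3 convolution. Below is a minimal PyTorch sketch of that block, purely as an illustration of the idea rather than any paper's official code; the layer names and demo shapes are my own.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Illustrative depthwise separable convolution (MobileNet/Xception style):
    a per-channel 3x3 conv followed by a 1x1 pointwise conv, replacing one
    dense 3x3 conv at a fraction of the parameters and FLOPs."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # groups=in_ch makes the 3x3 conv operate on each channel independently
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # the 1x1 pointwise conv mixes information across channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        x = self.relu(self.bn2(self.pointwise(x)))
        return x

if __name__ == "__main__":
    x = torch.randn(1, 32, 56, 56)               # hypothetical feature map
    print(DepthwiseSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 56, 56])
```

For a 3×3 kernel, this factorization cuts the multiply-accumulates by roughly a factor of 8 to 9 relative to a standard convolution, which is the main source of MobileNet's efficiency.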
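Likewise, the attention-module entries (SENet, CBAM, SKNet, CA, Shuffle Attention, NAM, GAM) all descend in some way from the squeeze-and-excitation idea: pool a global descriptor per channel, pass it through a small bottleneck, and rescale the channels. Here is a minimal PyTorch sketch of the original SE block, assuming the SENet paper's default reduction ratio of 16; again an illustrative sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Illustrative Squeeze-and-Excitation block: global-average-pool each
    channel ("squeeze"), run the result through a small bottleneck MLP
    ("excitation"), and rescale the channels by the learned weights."""
    def __init__(self, channels, reduction=16):  # reduction=16 is the SENet default
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: (B, C) global descriptor
        w = self.fc(w).view(b, c, 1, 1)  # excitation: per-channel weights in (0, 1)
        return x * w                     # reweight each channel of the input

if __name__ == "__main__":
    x = torch.randn(2, 64, 28, 28)       # hypothetical feature map
    print(SEBlock(64)(x).shape)          # torch.Size([2, 64, 28, 28])
```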