1. In 2015, researchers from Microsoft Research Asia took first place in all five main tracks of the ImageNet (ILSVRC) and COCO visual recognition competitions (ImageNet classification, detection, and localization; COCO detection and segmentation) using very deep residual neural networks (ResNets) with up to 152 layers.
2. Earlier networks such as AlexNet and VGG had improved accuracy by adding depth, but simply stacking more layers ran into a degradation problem: accuracy saturates and then worsens as depth grows. ResNets address this by adding identity skip (shortcut) connections, so the stacked layers learn a residual function F(x) = H(x) - x with reference to the layer input x, and each block outputs F(x) + x, rather than having to fit the unreferenced mapping H(x) directly (a minimal code sketch follows this list).
3. ResNets achieved substantially better accuracy on visual recognition tasks than previous networks while enabling much greater depth: a 152-layer ResNet reached lower top-5 error on ImageNet than earlier 8- to 22-layer networks such as AlexNet, VGG, and GoogLeNet, with the winning ensemble reporting 3.57% top-5 error on the ImageNet test set.
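
Below is a minimal sketch of the residual-block idea described in point 2, written in PyTorch. It is an illustration under simplifying assumptions (the class name `ResidualBlock`, the channel sizes, and the `__main__` usage are chosen for the example, not taken from the paper's released code); the original architecture adds further details such as bottleneck blocks for the deeper models.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Residual block: output = ReLU(F(x) + shortcut(x)).

    F(x) is two 3x3 convolutions with batch norm; the shortcut is the
    identity when shapes match, otherwise a 1x1 projection.
    """

    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

        # Identity skip connection when shapes match; 1x1 projection otherwise.
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )
        else:
            self.shortcut = nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # F(x): the residual the stacked layers must learn.
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        # F(x) + x: the identity path lets gradients and features flow unchanged.
        return self.relu(residual + self.shortcut(x))


if __name__ == "__main__":
    block = ResidualBlock(64, 64)          # hypothetical sizes for illustration
    y = block(torch.randn(1, 64, 56, 56))
    print(y.shape)                          # torch.Size([1, 64, 56, 56])
```

Because the block only needs to learn the residual F(x), pushing F(x) toward zero recovers the identity mapping, which is why very deep stacks of such blocks can be optimized without the degradation seen in plain networks.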