This document summarizes a research paper on scaling multinomial logistic regression (MLR) via hybrid parallelism. The paper proposes DS-MLR, a method that achieves hybrid parallelism, i.e., partitioning both the data and the model parameters across workers. DS-MLR first reformulates the MLR objective function into a doubly separable form, one that decomposes over both data points and classes, so that it can be optimized in a distributed manner. It then presents an asynchronous distributed algorithm that optimizes the reformulated objective across multiple workers. Empirical results on large real-world datasets show that DS-MLR trains MLR models efficiently in a hybrid-parallel manner and outperforms other parallelization approaches.
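
For context, the following is a sketch (in notation introduced here, not taken from the paper) of the kind of variational rewrite that can make the MLR objective doubly separable; the exact reformulation used by DS-MLR may differ in details. The standard L2-regularized MLR objective over N examples (x_i, y_i) and K classes with weight vectors w_1, ..., w_K couples all classes through the log-partition term:

\[
L(W) = \frac{\lambda}{2}\sum_{k=1}^{K}\lVert w_k \rVert^2
+ \sum_{i=1}^{N}\Big[-w_{y_i}^{\top}x_i + \log\sum_{k=1}^{K}\exp\big(w_k^{\top}x_i\big)\Big].
\]

Using the identity \(\log u = \min_{a>0}\,(a u - \log a - 1)\), each log-partition term can be replaced by a per-example auxiliary variable a_i, giving an objective whose every term depends on a single (example, class) pair:

\[
L(W, a) = \sum_{i=1}^{N}\sum_{k=1}^{K}
\Big[\frac{\lambda}{2N}\lVert w_k \rVert^2
- \mathbb{1}[y_i = k]\,w_k^{\top}x_i
+ a_i \exp\big(w_k^{\top}x_i\big)
- \frac{\log a_i + 1}{K}\Big].
\]

Because the summand factors over (i, k), subsets of examples and subsets of classes can be assigned to different workers, which is what enables simultaneous data and model parallelism.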