Given a very large dataset of moderate-to-high dimensionality, how can one mine useful patterns from it? In such cases, dimensionality reduction is essential to overcome the "curse of dimensionality". Although algorithms exist to reduce the dimensionality of Big Data, unfortunately, they all fail to identify and eliminate non-linear correlations between attributes. This paper tackles the problem by exploring concepts of Fractal Theory and massively parallel processing to present Curl-Remover, a novel dimensionality reduction technique for very large datasets. Our contributions are: Curl-Remover eliminates both linear and non-linear attribute correlations as well as irrelevant attributes; it is unsupervised and suits analytical tasks in general, not only classification; it presents linear scale-up; it does not require the user to guess the number of attributes to be removed; and it preserves the attributes' semantics. We performed experiments on synthetic and real data spanning up to 1.1 billion points, and Curl-Remover outperformed a PCA-based algorithm, being up to 8% more accurate.