How should data be preprocessed for use in machine learning algorithms? How do you identify the most predictive attributes of a dataset? What features can be generated to improve the accuracy of a model?
Feature Engineering is the process of extracting and selecting, from raw data, features that can be used effectively in predictive models. As the quality of the features greatly influences the quality of the results, knowing the main techniques and pitfalls will help you to succeed in the use of machine learning in your projects.
In this talk, we will present methods and techniques that allow us to extract the maximum potential of the features of a dataset, increasing the flexibility, simplicity and accuracy of the models. We cover the analysis of feature distributions and their correlations, and the transformation of numeric attributes (such as scaling, normalization, log-based transformation, binning), categorical attributes (such as one-hot encoding, feature hashing), temporal (date/time) attributes, and free-text attributes (text vectorization, topic modeling).
Examples in Python, Scikit-learn, and Spark SQL will be presented, along with guidance on how to use domain knowledge and intuition to select and generate features relevant to predictive models.
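To make the transformations above concrete, here is a minimal scikit-learn sketch of such a preprocessing pipeline; the column names, the toy data, and the final classifier are illustrative assumptions, not material from the talk.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder, FunctionTransformer
from sklearn.linear_model import LogisticRegression

# Hypothetical raw data: a numeric column, a heavy-tailed numeric column, and a categorical column.
df = pd.DataFrame({
    "age": [25, 40, 31, 58],
    "income": [30_000, 120_000, 45_000, 300_000],   # skewed -> log transform before scaling
    "city": ["SP", "RJ", "SP", "BH"],
    "churned": [0, 1, 0, 1],
})

preprocess = ColumnTransformer([
    ("scaled", StandardScaler(), ["age"]),
    ("log_scaled", Pipeline([("log", FunctionTransformer(np.log1p)),
                             ("scale", StandardScaler())]), ["income"]),
    ("onehot", OneHotEncoder(handle_unknown="ignore"), ["city"]),
])

model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
model.fit(df.drop(columns="churned"), df["churned"])
```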
One of the most important, yet often overlooked, aspects of predictive modeling is the transformation of data to create model inputs, better known as feature engineering (FE). This talk will go into the theoretical background behind FE, showing how it leverages existing data to produce better modeling results. It will then detail some important FE techniques that should be in every data scientist’s tool kit.
Winning Kaggle competitions involves getting a good score as fast as possible using versatile machine learning libraries and models like Scikit-learn, XGBoost, and Keras. It also involves model ensembling techniques like voting, averaging, bagging and boosting to improve scores. The document provides tips for approaches like feature engineering, algorithm selection, and stacked generalization/stacking to develop strong ensemble models for competitions.
In this talk, Dmitry shares his approach to feature engineering, which he used successfully in various Kaggle competitions. He covers common techniques used to convert features into the numeric representations used by ML algorithms.
Kaggle Winning Solution XGBoost algorithm -- Let us learn from its author - Vivian S. Zhang
This document provides an overview of XGBoost, an open-source gradient boosting framework. It begins with introductions to machine learning algorithms and XGBoost specifically. The document then walks through using XGBoost with R, including loading data, running models, cross-validation, and prediction. It discusses XGBoost's use in winning the Higgs Boson machine learning competition and provides code to replicate its solution. Finally, it briefly covers XGBoost's model specification and training objectives.
t-SNE is a modern visualization algorithm that presents high-dimensional data in 2 or 3 dimensions according to some desired distances. If you have some data and you can measure their pairwise differences, t-SNE visualization can help you identify various clusters.
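As a hedged illustration of this idea, the following sketch embeds scikit-learn's built-in digits dataset (a stand-in chosen only for this example) into two dimensions with t-SNE.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)          # 64-dimensional handwritten digit images
emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(X)  # 2-D embedding that preserves local structure
print(emb.shape)  # (1797, 2): plot emb[:, 0] vs emb[:, 1], colored by y, to see digit clusters
```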
How to Win Machine Learning Competitions? - HackerEarth
This presentation was given by Marios Michailidis (a.k.a. Kazanova), current Kaggle Rank #3, to help the community learn machine learning better. It comprises useful ML tips and techniques for performing better in machine learning competitions. Read the full blog: http://blog.hackerearth.com/winning-tips-machine-learning-competitions-kazanova-current-kaggle-3
Feature Engineering for ML - Dmitry Larko, H2O.ai - Sri Ambati
This talk was given at H2O World 2018 NYC and can be viewed here: https://youtu.be/wcFdmQSX6hM
Description:
In this talk, Dmitry shares his approach to feature engineering, which he used successfully in various Kaggle competitions. He covers common techniques used to convert features into the numeric representations used by ML algorithms.
Speaker's Bio:
Dmitry has more than 10 years of experience in IT, starting in data warehousing and BI and now working in big data and data science. He has extensive experience developing predictive analytics software for different domains and tasks. He is also a Kaggle Grandmaster who loves to use his machine learning and data science skills in Kaggle competitions.
Jeong-Yoon Lee has extensive experience winning data science competitions, taking first place in KDD Cup 2012 and 2015 and placing in the top 10 in several others. He competes for fun, experience, learning, and networking. Some best practices for competitions include thorough feature engineering, using diverse machine learning algorithms, cross-validation, ensemble methods, and collaboration. While competitions may seem limited, they provide valuable experience in data wrangling, exploration, and pipeline development applicable to real-world work.
Winning Kaggle 101: Introduction to Stacking - Ted Xiao
This document provides an introduction to stacking, an ensemble machine learning method. Stacking involves training a "metalearner" to optimally combine the predictions from multiple "base learners". The stacking algorithm was developed in the 1990s and improved upon with techniques like cross-validation and the "Super Learner" which combines models in a way that is provably asymptotically optimal. H2O implements an efficient stacking method called H2O Ensemble which allows for easily finding the best combination of algorithms like GBM, DNNs, and more to improve predictions.
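As a rough sketch of the stacking idea in code, here is a scikit-learn version using StackingClassifier; note this is a different implementation from the H2O Ensemble described above, and the choice of base learners and dataset is purely illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("gbm", GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),  # the "metalearner"
    cv=5,                                               # out-of-fold predictions become meta-features
)
print(cross_val_score(stack, X, y, cv=5).mean())
```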
Feature Engineering - Getting most out of data for predictive models - TDC 2017 - Gabriel Moreira
This document summarizes a machine learning workshop on feature selection. It discusses typical feature selection methods like single-feature evaluation using metrics such as mutual information and the Gini index. It also covers subset selection techniques like sequential forward selection and sequential backward selection. Examples are provided showing how feature selection improves performance for logistic regression on large datasets with more features than samples. The document outlines the workshop agenda and provides details on when and why feature selection is important for machine learning models.
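A minimal sketch of the two families of methods mentioned, single-feature evaluation and subset selection, assuming a recent scikit-learn (SequentialFeatureSelector requires version 0.24 or later); the dataset and model are stand-ins for the example.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif, SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# Single-feature evaluation: rank features by mutual information with the target.
top10 = SelectKBest(mutual_info_classif, k=10).fit(X, y)
print(top10.get_support(indices=True))

# Subset selection: greedy sequential forward selection wrapped around a model.
sfs = SequentialFeatureSelector(LogisticRegression(max_iter=5000),
                                n_features_to_select=5, direction="forward", cv=3)
sfs.fit(X, y)
print(sfs.get_support(indices=True))
```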
This document discusses unsupervised learning and clustering. It defines unsupervised learning as modeling the underlying structure or distribution of input data without corresponding output variables. Clustering is described as organizing unlabeled data into groups of similar items called clusters. The document focuses on k-means clustering, describing it as a method that partitions data into k clusters by minimizing distances between points and cluster centers. It provides details on the k-means algorithm and gives examples of its steps. Strengths and weaknesses of k-means clustering are also summarized.
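A short sketch of the k-means procedure described above, using synthetic data for illustration only:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)  # synthetic unlabeled data

km = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = km.fit_predict(X)       # assign each point to the nearest cluster center
print(km.cluster_centers_)       # the learned cluster centers
print(km.inertia_)               # sum of squared distances that k-means minimizes
```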
This document provides an overview of different techniques for hyperparameter tuning in machine learning models. It begins with introductions to grid search and random search, then discusses sequential model-based optimization techniques like Bayesian optimization and Tree-structured Parzen Estimators. Evolutionary algorithms like CMA-ES and particle-based methods like particle swarm optimization are also covered. Multi-fidelity methods like successive halving and Hyperband are described, along with recommendations on when to use different techniques. The document concludes by listing several popular libraries for hyperparameter tuning.
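As an illustrative sketch of the simplest of these techniques, random search, here is a scikit-learn example; the model, search ranges, and dataset are assumptions made for the example rather than recommendations from the document.

```python
from scipy.stats import randint, uniform
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": randint(100, 500),
                         "max_depth": randint(2, 12),
                         "max_features": uniform(0.1, 0.9)},  # sampled fractions of features
    n_iter=20, cv=3, random_state=0, n_jobs=-1,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```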
Winning data science competitions, presented by Owen Zhang - Vivian S. Zhang
Meetup event hosted by NYC Open Data Meetup, NYC Data Science Academy. Speaker: Owen Zhang. Event info: http://www.meetup.com/NYC-Open-Data/events/219370251/
This document discusses machine learning concepts including supervised vs. unsupervised learning, clustering algorithms, and specific clustering methods like k-means and k-nearest neighbors. It provides examples of how clustering can be used for applications such as market segmentation and astronomical data analysis. Key clustering algorithms covered are hierarchical methods, partitioning methods, k-means, which groups data by assigning objects to the closest cluster center, and k-nearest neighbors, which classifies new data based on its closest training examples.
The document discusses tips for winning data science competitions. It outlines the typical structure of competitions, sources of competitive advantage like feature engineering and modeling techniques. It emphasizes using gradient boosted machines (GBM) and blending models. Specific technical tips are provided for handling different data types and tuning GBM. The document stresses applying lessons from competitions to real-world problems by selecting the right problem and using models appropriately.
Overfitting and underfitting are modeling errors related to how well a model fits training data. Overfitting occurs when a model is too complex and fits the training data too closely, resulting in poor performance on new data. Underfitting occurs when a model is too simple and does not fit the training data well. The bias-variance tradeoff aims to balance these issues by finding a model complexity that minimizes total error.
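One way to see this tradeoff in code is to sweep a complexity parameter and compare training and validation scores; the sketch below uses a decision tree's depth on scikit-learn's diabetes dataset purely as an illustration.

```python
from sklearn.datasets import load_diabetes
from sklearn.model_selection import validation_curve
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)
depths = range(1, 11)
train_scores, test_scores = validation_curve(
    DecisionTreeRegressor(random_state=0), X, y,
    param_name="max_depth", param_range=depths, cv=5)

for d, tr, te in zip(depths, train_scores.mean(axis=1), test_scores.mean(axis=1)):
    # Small depth: both scores low (underfitting). Large depth: train high, test drops (overfitting).
    print(f"depth={d:2d}  train R^2={tr:.2f}  test R^2={te:.2f}")
```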
Hacking Predictive Modeling - RoadSec 2018 - HJ van Veen
This document provides an overview of machine learning and predictive modeling techniques for hackers and data scientists. It discusses foundational concepts in machine learning like functionalism, connectionism, and black box modeling. It also covers practical techniques like feature engineering, model selection, evaluation, optimization, and popular Python libraries. The document encourages an experimental approach to hacking predictive models through techniques like brute forcing hyperparameters, fuzzing with data permutations, and social engineering within data science communities.
Anomaly detection (or outlier analysis) is the identification of items, events or observations which do not conform to an expected pattern or to other items in a dataset. It is used in applications such as intrusion detection, fraud detection, fault detection, and the monitoring of processes in various domains including energy, healthcare and finance. In this talk, we will introduce anomaly detection and discuss the various analytical and machine learning techniques used in this field. Through a case study, we will discuss how anomaly detection techniques could be applied to energy data sets. We will also demonstrate, using R and Apache Spark, an application to help reinforce concepts in anomaly detection and best practices in analyzing and reviewing results.
This document is a slide presentation on recent advances in deep learning. It discusses self-supervised learning, which involves using unlabeled data to learn representations by predicting structural information within the data. The presentation covers pretext tasks, invariance-based approaches, and generation-based approaches for self-supervised learning in computer vision and natural language processing. It provides examples of specific self-supervised methods like predicting image rotations, clustering representations to generate pseudo-labels, and masked language modeling.
This document provides an introduction to XGBoost, including:
1. XGBoost is an important machine learning library that is commonly used by winners of Kaggle competitions.
2. A quick example is shown using XGBoost to predict diabetes based on patient data, achieving good results with only 20 lines of simple code.
3. XGBoost works by creating an ensemble of decision trees through boosting; the document focuses on explaining the concepts at a high level rather than the detailed algorithms.
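In the same spirit as the quick example described above (but not the document's actual code), a sketch might look like the following; the CSV file name and the "Outcome" column are assumptions based on the classic Pima Indians diabetes dataset.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

df = pd.read_csv("pima-indians-diabetes.csv")      # hypothetical local copy of the dataset
X, y = df.drop(columns="Outcome"), df["Outcome"]    # "Outcome" = 1 if diabetic (assumed column name)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                      eval_metric="logloss")         # boosted ensemble of shallow trees
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```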
Regularization helps address the problem of overfitting in machine learning models. It works by adding a penalty term to the cost function that discourages large values for the model's coefficients, which encourages simpler models that generalize better to new data. Regularization can be applied to both linear and logistic regression by modifying the cost function and using gradient descent or the normal equation to find the optimal parameters that minimize the new regularized cost function. The regularization parameter controls the tradeoff between model complexity and fitting the training data.
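A small sketch of this effect, using ridge regression in scikit-learn: as the regularization parameter alpha grows, the coefficient norm shrinks. The dataset is a stand-in chosen for the example.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True)

# Larger alpha = stronger penalty on coefficient size = simpler (more biased) model.
for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]:
    coefs = Ridge(alpha=alpha).fit(X, y).coef_
    print(f"alpha={alpha:6.2f}  ||w||_2 = {np.linalg.norm(coefs):8.1f}")
```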
Machine Learning is a subset of artificial intelligence that allows computers to learn without being explicitly programmed. It uses algorithms to recognize patterns in data and make predictions. The document discusses common machine learning algorithms like linear regression, logistic regression, decision trees, and k-means clustering. It also provides examples of machine learning applications such as face detection, speech recognition, fraud detection, and smart cars. Machine learning is expected to have an increasingly important role in the future.
TDC2017 | São Paulo - Trilha Java EE How we figured out we had a SRE team at ... - tdc-globalcode
This document discusses various techniques for feature engineering raw data to improve machine learning model performance. It describes transforming data through techniques like handling missing values, aggregation, binning, encoding categorical features, and feature selection. The goal of feature engineering is to represent the underlying problem to models in a way that results in better accuracy on new data.
This document discusses data preprocessing techniques for machine learning. It covers common preprocessing steps like normalization, encoding categorical features, and handling outliers. Normalization techniques like StandardScaler, MinMaxScaler and RobustScaler are described. Label encoding and one-hot encoding are covered for processing categorical variables. The document also discusses polynomial features, custom transformations, and preprocessing text and image data. The goal of preprocessing is to prepare data so it can be better consumed by machine learning algorithms.
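A brief sketch comparing the scalers mentioned above on a toy feature containing an outlier; the values are made up for illustration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler

# One feature with an outlier, to show how each scaler reacts.
x = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])

print(StandardScaler().fit_transform(x).ravel())  # zero mean, unit variance (outlier dominates)
print(MinMaxScaler().fit_transform(x).ravel())    # squashed into [0, 1] by the outlier
print(RobustScaler().fit_transform(x).ravel())    # median/IQR based, less sensitive to the outlier
```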
Machine learning is the hacker art of describing the features of instances that we want to make predictions about, then fitting the data that describes those instances to a model form. Applied machine learning has come a long way from its beginnings in academia, and with tools like Scikit-Learn, it's easier than ever to generate operational models for a wide variety of applications. Thanks to the ease and variety of the tools in Scikit-Learn, the primary job of the data scientist is model selection. Model selection involves performing feature engineering, hyperparameter tuning, and algorithm selection. These dimensions of machine learning often lead computer scientists towards automatic model selection via optimization (maximization) of a model's evaluation metric. However, the search space is large, and grid search approaches to machine learning can easily lead to failure and frustration. Human intuition is still essential to machine learning, and visual analysis in concert with automatic methods can allow data scientists to steer model selection towards better-fitted models, faster. In this talk, we will discuss interactive visual methods for better understanding, steering, and tuning machine learning models.
Lecture 8 - Feature Engineering and Optimization, a lecture in subject module... - Maninda Edirisooriya
This lesson covers the core data science content required for applying ML. It was one of the lectures of a full course I taught at the University of Moratuwa, Sri Lanka, in the second half of 2023.
This document provides an overview of machine learning algorithms and scikit-learn. It begins with an introduction and table of contents. Then it covers topics like dataset loading from files, pandas, scikit-learn datasets, preprocessing data like handling missing values, feature selection, dimensionality reduction, training and test sets, supervised and unsupervised learning models, and saving/loading machine learning models. For each topic, it provides code examples and explanations.
How, when, and why should feature scaling be performed?
Different types of feature scaling techniques.
When to perform feature scaling?
Why perform feature scaling?
Min-max feature scaling techniques.
Unit vector scaling (see the sketch below).
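A minimal sketch of the last two items, min-max scaling and unit vector scaling, using scikit-learn on toy data:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, Normalizer

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])

# Min-max scaling: rescale each column (feature) into the [0, 1] range.
print(MinMaxScaler().fit_transform(X))

# Unit vector scaling: rescale each row (sample) to unit L2 norm.
print(Normalizer(norm="l2").fit_transform(X))
```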
Scikit-Learn is a powerful machine learning library implemented in Python on top of the numeric and scientific computing powerhouses NumPy, SciPy, and matplotlib, enabling extremely fast analysis of small to medium-sized data sets. It is open source, commercially usable, and contains many modern machine learning algorithms for classification, regression, clustering, feature extraction, and optimization. For this reason, Scikit-Learn is often the first tool in a data scientist's toolkit for machine learning on incoming data sets.
The purpose of this one-day course is to serve as an introduction to Machine Learning with Scikit-Learn. We will explore several clustering, classification, and regression algorithms for a variety of machine learning tasks and learn how to implement these tasks with our data using Scikit-Learn and Python. In particular, we will structure our machine learning models as though we were producing a data product: an actionable model that can be used in larger programs or algorithms, rather than simply a research or investigation methodology.
This document provides an overview of machine learning concepts including feature selection, dimensionality reduction techniques like principal component analysis and singular value decomposition, feature encoding, normalization and scaling, dataset construction, feature engineering, data exploration, machine learning types and categories, model selection criteria, popular Python libraries, tuning techniques like cross-validation and hyperparameter search, and performance analysis metrics like the confusion matrix, accuracy, F1 score, ROC curve, and the bias-variance tradeoff.
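As a compact sketch tying a few of these concepts together (scaling, PCA, cross-validation, and a confusion matrix), assuming scikit-learn and an illustrative built-in dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Scale -> reduce to 5 principal components -> classify.
pipe = make_pipeline(StandardScaler(), PCA(n_components=5), LogisticRegression(max_iter=1000))

# Out-of-fold predictions from 5-fold cross-validation, summarized as a confusion matrix.
pred = cross_val_predict(pipe, X, y, cv=5)
print(confusion_matrix(y, pred))
```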
Building a performing Machine Learning model from A to Z - Charles Vestur
A 1-hour read to become highly knowledgeable about Machine learning and the machinery underneath, from scratch!
A presentation introducing to all fundamental concepts of Machine Learning step by step, following a classical approach to build a performing model. Simple examples and illustrations are used all along the presentation to make the concepts easier to grasp.
This document discusses feature engineering, which is the process of transforming raw data into features that better represent the underlying problem for predictive models. It covers feature engineering categories like feature selection, feature transformation, and feature extraction. Specific techniques covered include imputation, handling outliers, binning, log transforms, scaling, and feature subset selection methods like filter, wrapper, and embedded methods. The goal of feature engineering is to improve machine learning model performance by preparing proper input data compatible with algorithm requirements.
Feature Engineering in Machine Learning - Knoldus Inc.
In this Knolx session we explore data preprocessing and feature engineering techniques. We also look at what feature engineering is, its importance in machine learning, and how feature engineering can help in getting the best results from the algorithms.
The document provides an overview of the process for making predictions using machine learning models. It discusses the key steps including data cleaning, feature engineering, model training/testing, and model evaluation. Specifically, it covers preprocessing tasks like data cleaning, transformation, and reduction. It also discusses splitting data into training and test sets, exploratory data analysis, feature encoding of different data types, and popular machine learning algorithms like linear models, tree-based models, and support vector machines. The document aims to outline the machine learning workflow and highlight important considerations at each step.
Art of Feature Engineering for Data Science with Nabeel Sarwar - Spark Summit
We will discuss what feature engineering is all about, various techniques to use, and how to scale to 20,000-column datasets using random forests, SVD, and PCA. We also demonstrate how to build a service around these techniques to save time and effort when building hundreds of models. We will share how we did all this using Spark ML to build logistic regression, neural networks, Bayesian networks, etc.
THE IMPLICATION OF STATISTICAL ANALYSIS AND FEATURE ENGINEERING FOR MODEL BUI... - IJCSES Journal
Scrutiny for presage is the era of advance statistics where accuracy matter the most. Commensurate between algorithms with statistical implementation provides better consequence in terms of accurate prediction by using data sets. Prolific usage of algorithms lead towards the simplification of mathematical models, which provide less manual calculations. Presage is the essence of data science and machine learning requisitions that impart control over situations. Implementation of any dogmas require proper feature extraction which helps in the proper model building that assist in precision. This paper is predominantly based on different statistical analysis which includes correlation significance and proper categorical data distribution using feature engineering technique that unravel accuracy of different models of machine learning algorithms.
THE IMPLICATION OF STATISTICAL ANALYSIS AND FEATURE ENGINEERING FOR MODEL BUI... - ijcseit
This document discusses various statistical analysis and feature engineering techniques that can be used for model building in machine learning algorithms. It describes how proper feature extraction through techniques like correlation analysis, principal component analysis, recursive feature elimination, and feature importance can help improve the accuracy of machine learning models. The document provides examples of applying different feature selection methods like univariate selection, recursive feature elimination, and principal component analysis on a diabetes dataset. It also explains the mathematics behind principal component analysis and how feature importance is estimated using an extra trees classifier. Overall, the document emphasizes how statistical analysis and feature engineering are important for effective model building in machine learning.
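A hedged sketch of the selection methods mentioned (recursive feature elimination, feature importance from an extra-trees classifier, and PCA); scikit-learn's built-in breast cancer dataset is used here as a stand-in, since the paper's diabetes data is not reproduced in this document.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# Recursive feature elimination: repeatedly drop the weakest feature of a wrapped model.
rfe = RFE(LogisticRegression(max_iter=5000), n_features_to_select=5).fit(X, y)
print("RFE keeps features:", rfe.get_support(indices=True))

# Feature importance estimated by an extra-trees ensemble.
importances = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X, y).feature_importances_
print("top features by importance:", importances.argsort()[::-1][:5])

# Principal component analysis: variance explained by the leading components.
print("explained variance:", PCA(n_components=3).fit(X).explained_variance_ratio_)
```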
[PhD Thesis Defense] CHAMELEON: A Deep Learning Meta-Architecture for News Re... - Gabriel Moreira
Presentation of the PhD thesis defense of Gabriel de Souza Pereira Moreira at Instituto Tecnológico de Aeronáutica (ITA), on Dec. 09, 2019, in São José dos Campos, Brazil.
Abstract:
Recommender systems have been increasingly popular in assisting users with their choices, thus enhancing their engagement and overall satisfaction with online services. Over the last decade, recommender systems have become a topic of increasing interest among machine learning, human-computer interaction, and information retrieval researchers.
News recommender systems aim to personalize users' experiences and help them discover relevant articles from a large and dynamic search space, making news a challenging scenario for recommendations. Large publishers release hundreds of news articles daily, implying that they must deal with fast-growing numbers of items that quickly become outdated and irrelevant to most readers. News readers exhibit more unstable consumption behavior than users in other domains such as entertainment, and external events, like breaking news, affect readers' interests. In addition, the news domain experiences extreme levels of sparsity, as most users are anonymous, with no past behavior tracked.
Since 2016, Deep Learning methods and techniques have been explored in Recommender Systems research. In general, they can be divided into methods for: Deep Collaborative Filtering, Learning Item Embeddings, Session-based Recommendations using Recurrent Neural Networks (RNN), and Feature Extraction from Items' Unstructured Data such as text, images, audio, and video.
The main contribution of this research is named CHAMELEON, a meta-architecture designed to tackle the specific challenges of news recommendation. It consists of a modular reference architecture which can be instantiated using different neural building blocks.
As information about users' past interactions is scarce in the news domain, information such as the user context (e.g., time, location, device, the sequence of clicks within the session) and static and dynamic article features, like the article's textual content and its popularity and recency, are explicitly modeled in a hybrid session-based recommendation approach using RNNs.
The recommendation task addressed in this work is the next-item prediction for user sessions, i.e., "what is the next most likely article a user might read in a session?". A temporal offline evaluation is used for a realistic offline evaluation of such task, considering factors that affect global readership interests like popularity, recency, and seasonality.
Experiments performed with two large datasets have shown the effectiveness of CHAMELEON for news recommendation on many quality factors, such as accuracy, item coverage, novelty, and a reduced item cold-start problem, when compared to other traditional and state-of-the-art session-based algorithms.
PAPIs LATAM 2019 - Training and deploying ML models with Kubeflow and TensorF... - Gabriel Moreira
The document discusses training and deploying machine learning models with Kubeflow and TensorFlow Extended (TFX). It provides an overview of Kubeflow as a platform for building ML products using containers and Kubernetes. It then describes key TFX components like TensorFlow Data Validation (TFDV) for data exploration and validation, TensorFlow Transform (TFT) for preprocessing, and TensorFlow Estimators for training and evaluation. The document demonstrates these components in a Kubeflow pipeline for a session-based news recommender system, covering data validation, transformation, training, and deployment.
Deep Learning for Recommender Systems @ TDC SP 2019 - Gabriel Moreira
This document provides an overview of deep learning for recommender systems. It discusses how deep learning can be used to extract features from content like text, images, and audio for recommendations. It also describes how deep learning models like convolutional and recurrent neural networks can learn complex representations of users and items for collaborative filtering. The document then presents CHAMELEON, a meta-architecture for news recommendations that uses different deep learning techniques for tasks like article embedding, metadata prediction, and next-article recommendation. It evaluates CHAMELEON on a real-world news dataset and finds it outperforms other baseline methods on metrics like hit rate and mean reciprocal rank.
PAPIs LATAM 2019 - Training and deploying ML models with Kubeflow and TensorF... - Gabriel Moreira
For real-world ML systems, it is crucial to have scalable and flexible platforms to build ML workflows. In this workshop, we will demonstrate how to build an ML DevOps pipeline using Kubeflow and TensorFlow Extended (TFX). Kubeflow is a flexible environment to implement ML workflows on top of Kubernetes - an open-source platform for managing containerized workloads and services, which can be deployed either on-premises or on a Cloud platform. TFX has a special integration with Kubeflow and provides tools for data pre-processing, model training, evaluation, deployment, and monitoring.
In this workshop, we will demonstrate a pipeline for training and deploying an RNN-based Recommender System model using Kubeflow.
https://papislatam2019.sched.com/event/OV1M/training-and-deploying-ml-models-with-kubeflow-and-tensorflow-extended-tfx-sponsored-by-cit
This document provides an introduction to data science, including:
- Why data science has gained popularity due to advances in AI research and commoditized hardware.
- Examples of where data science is applied, such as e-commerce, healthcare, and marketing.
- Definitions of data science, data scientists, and their roles.
- Overviews of machine learning techniques like supervised learning, unsupervised learning, deep learning and examples of their applications.
- How data science can be used by businesses to understand customers, create personalized experiences, and optimize processes.
In this talk at the GDG DataFest event, I presented a practical introduction to the main techniques of recommender systems, including recent Deep Learning-based architectures. Examples using Python, TensorFlow and Google ML Engine were presented, and datasets were provided so that we could exercise an article and news recommendation scenario.
Deep Recommender Systems - PAPIs.io LATAM 2018 - Gabriel Moreira
In this talk, we provide an overview of how Deep Learning techniques have recently been applied to Recommender Systems. Furthermore, I provide a brief view of my ongoing PhD research on News Recommender Systems with Deep Learning.
CI&T Tech Summit 2017 - Machine Learning para Sistemas de Recomendação - Gabriel Moreira
This document discusses recommender systems, presenting the two main types: collaborative filtering and content-based filtering. Collaborative filtering makes recommendations based on the similarity between users, while content-based filtering analyzes item attributes to make recommendations. The document also provides examples of how to implement these systems using tools such as Mahout and scikit-learn.
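As a generic illustration of content-based filtering (not the code from this talk), a TF-IDF plus cosine similarity sketch in scikit-learn might look like this; the item catalog is made up for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical item catalog described by short texts.
items = ["machine learning with python", "deep learning for images",
         "cooking pasta at home", "python data science handbook"]

tfidf = TfidfVectorizer().fit_transform(items)   # vectorize item content
sims = cosine_similarity(tfidf)                  # item-to-item similarity matrix

query = 0                                        # recommend items similar to item 0
ranking = sims[query].argsort()[::-1][1:]        # skip the item itself
print([items[i] for i in ranking])
```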
Discovering User's Topics of Interest in Recommender Systems @ Meetup Machine... - Gabriel Moreira
This talk introduces the main techniques of Recommender Systems and Topic Modeling. Then, we present a case of how we've combined those techniques to build Smart Canvas, a SaaS that allows people to bring, create and curate content relevant to their organization, and also helps to tear down knowledge silos.
We give a deep dive into the design of our large-scale recommendation algorithms, giving special attention to a content-based approach that uses topic modeling techniques (like LDA and NMF) to discover people’s topics of interest from unstructured text, and social-based algorithms using a graph database connecting content, people and teams around topics.
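A small sketch of topic discovery with NMF in scikit-learn (an illustration only, not the Smart Canvas implementation); the documents are toy examples, and a recent scikit-learn is assumed for get_feature_names_out.

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["neural networks for image recognition",
        "stock market prices and trading strategies",
        "convolutional networks classify images",
        "investing in stocks and bonds"]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

nmf = NMF(n_components=2, random_state=0).fit(X)   # discover 2 latent topics
terms = vec.get_feature_names_out()
for k, topic in enumerate(nmf.components_):
    print(f"topic {k}:", [terms[i] for i in topic.argsort()[::-1][:4]])
```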
Our typical data pipeline includes the ingestion of millions of user events (using Google PubSub and BigQuery), the batch processing of the models (with PySpark, MLlib, and Scikit-learn), the online recommendations (with Google App Engine, Titan Graph Database and Elasticsearch), and the data-driven evaluation of UX and algorithms through A/B testing experimentation. We also touch on the non-functional requirements of a software-as-a-service, like scalability, performance, availability, reliability and multi-tenancy, and how we addressed them in a robust architecture deployed on Google Cloud Platform.
Short bio: Gabriel Moreira is a scientist passionate about solving problems with data. He is Head of Machine Learning at CI&T and a doctoral student at Instituto Tecnológico de Aeronáutica (ITA), where he also got his Master of Science degree. His current research interests are recommender systems and deep learning.
https://www.meetup.com/pt-BR/machine-learning-big-data-engenharia/events/239037949/
Smart Canvas is a machine learning platform that delivers personalized recommendations for web and mobile content using a hybrid recommender system. It analyzes user interactions and ingests content from various sources to provide recommendations using algorithms like collaborative filtering, content-based filtering, and popularity rankings. The system is evaluated using metrics like nDCG, CTR, coverage, and user engagement to analyze recommendation quality and make improvements.
Discovering User's Topics of Interest in Recommender Systems - Gabriel Moreira
This talk introduces the main techniques of Recommender Systems and Topic Modeling.
Then, we present a case of how we've combined those techniques to build Smart Canvas (www.smartcanvas.com), a service that allows people to bring, create and curate content relevant to their organization, and also helps to tear down knowledge silos.
We present some of Smart Canvas features powered by its recommender system, such as:
- Highlight relevant content, explaining to users which of their topics of interest generated each recommendation.
- Associate tags to users’ profiles based on topics discovered from content they have contributed. These tags become searchable, allowing users to find experts or people with specific interests.
- Recommend people with similar interests, explaining which topics bring them together.
We give a deep dive into the design of our large-scale recommendation algorithms, giving special attention to our content-based approach that uses topic modeling techniques (like LDA and NMF) to discover people’s topics of interest from unstructured text, and social-based algorithms using a graph database connecting content, people and teams around topics.
Our typical data pipeline includes the ingestion of millions of user events (using Google PubSub and BigQuery), the batch processing of the models (with PySpark, MLlib, and Scikit-learn), the online recommendations (with Google App Engine, Titan Graph Database and Elasticsearch), and the data-driven evaluation of UX and algorithms through A/B testing experimentation. We also touch on the non-functional requirements of a software-as-a-service, like scalability, performance, availability, reliability and multi-tenancy, and how we addressed them in a robust architecture deployed on Google Cloud Platform.
Python for Data Science - Python Brasil 11 (2015) - Gabriel Moreira
This talk demonstrates a complete Data Science process, involving Obtaining, Scrubbing, Exploring, Modeling and Interpreting data using Python ecosystem tools, like IPython Notebook, Pandas, Matplotlib, NumPy, SciPy and Scikit-learn.
In this talk, we introduce the Data Scientist role, differentiate investigative and operational analytics, and demonstrate a complete Data Science process using Python ecosystem tools, like IPython Notebook, Pandas, Matplotlib, NumPy, SciPy and Scikit-learn. We also touch on the usage of Python in a Big Data context, using Hadoop and Spark.
This presentation gives an introduction to Data Science, the Data Scientist role and its characteristics, and how the Python ecosystem provides great tools for the Data Science process (Obtain, Scrub, Explore, Model, Interpret).
For that, an attached IPython Notebook ( http://bit.ly/python4datascience_nb ) exemplifies the full process of a corporate network analysis, using Pandas, Matplotlib, Scikit-learn, NumPy and SciPy.
Using Neural Networks and 3D sensors data to model LIBRAS gestures recognitio... - Gabriel Moreira
Paper entitled "Using Neural Networks and 3D sensors data to model LIBRAS gestures recognition", presented at II Symposium on Knowledge Discovery, Mining and Learning – KDMILE, USP, São Carlos, SP, Brazil.
Developing GeoGames for Education with Kinect and Android for ArcGIS Runtime - Gabriel Moreira
This presentation is about Where Is That, a game developed for geography and history education. There are two versions, one for Android, available on Google Play, and the other for Windows.
The document discusses a programming meetup where developers work together on challenges. They meet to have fun and to improve their programming and teamwork skills through a pragmatic methodology. The document also describes a Tic-Tac-Toe game project for Android with different user stories.
The document presents an introduction to agile testing, focusing on values, types of tests, and examples of user stories and acceptance criteria. The speakers discuss how to implement testing in agile software development, including TDD, and provide references on the topic.
The document discusses the ArcGIS Runtime for Android SDK: version 1.0 was released in December 2011 and version 2.0 is scheduled for the summer. It provides an overview of dependencies, supported Android platforms, environment setup, map layer types, and demos of editing and offline functionality. Samples and documentation are available on Esri's website and developer forums.
EARLY-FIX: Um Framework para Predição de Manutenção Corretiva de Software uti... - Gabriel Moreira
This document presents the EARLY-FIX framework for predicting corrective software maintenance using product metrics. The framework includes conceptual models for volume indicators and volume prediction; methods for product measurement, maintenance history and predictive model calibration; and techniques for detecting defect-prone modules. The framework was implemented and tested on two industry projects to validate its applicability.
With the introduction of Claude Opus 4 and Sonnet 4, Anthropic's newest generation of AI models is not just an incremental step but a pivotal moment, fundamentally reshaping what's possible in software development, complex problem-solving, and intelligent business automation.
SAP Sapphire 2025 ERP1612 Enhancing User Experience with SAP Fiori and AIPeter Spielvogel
Explore how AI in SAP Fiori apps enhances productivity and collaboration. Learn best practices for SAPUI5, Fiori elements, and tools to build enterprise-grade apps efficiently. Discover practical tips to deploy apps quickly, leveraging AI, and bring your questions for a deep dive into innovative solutions.
Offshore IT Support: Balancing In-House and Offshore Help Desk Techniciansjohn823664
In today's always-on digital environment, businesses must deliver seamless IT support across time zones, devices, and departments. This SlideShare explores how companies can strategically combine in-house expertise with offshore talent to build a high-performing, cost-efficient help desk operation.
From the benefits and challenges of offshore support to practical models for integrating global teams, this presentation offers insights, real-world examples, and key metrics for success. Whether you're scaling a startup or optimizing enterprise support, discover how to balance cost, quality, and responsiveness with a hybrid IT support strategy.
Perfect for IT managers, operations leads, and business owners considering global help desk solutions.
Fully Open-Source Private Clouds: Freedom, Security, and ControlShapeBlue
In this presentation, Swen Brüseke introduced proIO's strategy for 100% open-source driven private clouds. proIO leverage the proven technologies of CloudStack and LINBIT, complemented by professional maintenance contracts, to provide you with a secure, flexible, and high-performance IT infrastructure. He highlighted the advantages of private clouds compared to public cloud offerings and explain why CloudStack is in many cases a superior solution to Proxmox.
--
The CloudStack European User Group 2025 took place on May 8th in Vienna, Austria. The event once again brought together open-source cloud professionals, contributors, developers, and users for a day of deep technical insights, knowledge sharing, and community connection.
New Ways to Reduce Database Costs with ScyllaDBScyllaDB
How ScyllaDB’s latest capabilities can reduce your infrastructure costs
ScyllaDB has been obsessed with price-performance from day 1. Our core database is architected with low-level engineering optimizations that squeeze every ounce of power from the underlying infrastructure. And we just completed a multi-year effort to introduce a set of new capabilities for additional savings.
Join this webinar to learn about these new capabilities: the underlying challenges we wanted to address, the workloads that will benefit most from each, and how to get started. We’ll cover ways to:
- Avoid overprovisioning with “just-in-time” scaling
- Safely operate at up to ~90% storage utilization
- Cut network costs with new compression strategies and file-based streaming
We’ll also highlight a “hidden gem” capability that lets you safely balance multiple workloads in a single cluster. To conclude, we will share the efficiency-focused capabilities on our short-term and long-term roadmaps.
Maxx nft market place new generation nft marketing placeusersalmanrazdelhi
PREFACE OF MAXXNFT
MaxxNFT: Powering the Future of Digital Ownership
MaxxNFT is a cutting-edge Web3 platform designed to revolutionize how
digital assets are owned, traded, and valued. Positioned at the forefront of the
NFT movement, MaxxNFT views NFTs not just as collectibles, but as the next
generation of internet equity—unique, verifiable digital assets that unlock new
possibilities for creators, investors, and everyday users alike.
Through strategic integrations with OKT Chain and OKX Web3, MaxxNFT
enables seamless cross-chain NFT trading, improved liquidity, and enhanced
user accessibility. These collaborations make it easier than ever to participate
in the NFT ecosystem while expanding the platform’s global reach.
With a focus on innovation, user rewards, and inclusive financial growth,
MaxxNFT offers multiple income streams—from referral bonuses to liquidity
incentives—creating a vibrant community-driven economy. Whether you
'
re
minting your first NFT or building a digital asset portfolio, MaxxNFT empowers
you to participate in the future of decentralized value exchange.
https://maxxnft.xyz/
2. Agenda
● Machine Learning Pipeline
● Data Munging
● Feature Engineering
○ Numerical features
○ Categorical features
○ Temporal and Spatial features
○ Textual features
● Feature Selection
4. "Feature engineering is the process of
transforming raw data into features that better
represent the underlying problem to the
predictive models, resulting in improved
model accuracy on unseen data."
– Jason Brownlee
5. “Coming up with features is difficult,
time-consuming,
requires expert knowledge.
'Applied machine learning' is basically
feature engineering.”
– Andrew Ng
10. Outbrain Click Prediction - Kaggle competition
Dataset
● A sample of user page views and clicks over 14 days in June 2016
● 2 Billion page views
● 17 million click records
● 700 Million unique users
● 560 sites
Can you predict which recommended content each user will click?
11. I got 19th position
from about
1000 competitors
(top 2%),
mostly due to
Feature Engineering
techniques.
13. ● What does the data model look like?
● What is the features distribution?
● What are the features with missing
or inconsistent values?
● What are the most predictive features?
● Conduct an Exploratory Data Analysis (EDA)
First of all … take a closer look at your data
15. ML-Ready Dataset
Fields (Features)
Instances
Tabular data (rows and columns)
● Usually denormalized in a single file/dataset
● Each row contains information about one instance
● Each column is a feature that describes a property of the instance
16. Data Cleansing
Homogenize missing values and mixed value types within the same feature, fix input errors, data types, etc.
Original data
Cleaned data
17. Aggregating
Necessary when the entity to model is an aggregation from the provided data.
Original data (list of playbacks)
Aggregated data (list of users)
18. Pivoting
Necessary when the entity to model is an aggregation and the values of a column must be spread into separate columns (one per category).
Aggregated data with pivoted columns
Original data
# playbacks by device Play duration by device
20. Numerical features
● Usually easy to ingest by mathematical
models, but feature engineering is indeed
necessary.
● Can be floats, counts, ...
● Easier to impute missing data
● Distribution and scale matters to some
models
21. Imputation for missing values
● Datasets contain missing values, often encoded as blanks, NaNs or other
placeholders
● Ignoring rows and/or columns with missing values is possible, but at the price of
losing data which might be valuable
● Better strategy is to infer them from the known part of data
● Strategies
○ Mean: Basic approach
○ Median: More robust to outliers
○ Mode: Most frequent value
○ Using a model: Can expose algorithmic bias
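A minimal sketch (not from the original deck) of the mean/median/mode strategies above, using scikit-learn's SimpleImputer; the matrix X is a made-up example with NaN placeholders.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1., 2.], [np.nan, 3.], [7., np.nan]])
imputer = SimpleImputer(strategy='mean')   # also 'median' or 'most_frequent'
imputer.fit_transform(X)
# array([[1. , 2. ],
#        [4. , 3. ],
#        [7. , 2.5]])
Imputation of missing values with scikit-learn (illustrative)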
23. Rounding
● Form of lossy compression: retain most significant features of the data.
● Sometimes too much precision is just noise
● Rounded variables can be treated as categorical variables
● Example:
Some models, like Association Rules, work only with categorical features. It is
possible to convert a percentage into a categorical feature this way, as in the sketch below.
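A quick illustration (my own, with hypothetical values) of rounding or coarsening a percentage-like feature so it can be treated as categorical:
import numpy as np

pct = np.array([0.231, 0.487, 0.952, 0.049])
np.round(pct, 1)          # keep one decimal: 0.2, 0.5, 1.0, 0.0
(pct * 10).astype(int)    # or map to 10 coarse buckets: 2, 4, 9, 0
Rounding a numeric feature into coarse categories (illustrative)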
24. Binarization
● Transform discrete or continuous numeric features in binary features
Example: Number of user views of the same document
>>> from sklearn import preprocessing
>>> X = [[ 1., -1., 2.],
... [ 2., 0., 0.],
... [ 0., 1., -1.]]
>>> binarizer = preprocessing.Binarizer(threshold=1.0)
>>> binarizer.transform(X)
array([[ 1., 0., 1.],
[ 1., 0., 0.],
[ 0., 1., 0.]])
Binarization with scikit-learn
25. Binning
● Split numerical values into bins and encode with a bin ID
● Can be set arbitrarily or based on distribution
● Fixed-width binning
Does fixed-width binning make sense for this long-tailed distribution?
Most users (458,234,809 ~ 5*10^8) had only 1 pageview during the period.
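A small sketch (mine, with hypothetical pageview counts) contrasting fixed-width and quantile-based binning with pandas; for a long-tailed distribution like the one above, quantile bins usually spread instances more evenly:
import pandas as pd

views = pd.Series([1, 1, 1, 2, 3, 5, 8, 40, 250, 10000])
pd.cut(views, bins=4, labels=False)                    # fixed-width: the tail dominates the ranges
pd.qcut(views, q=4, labels=False, duplicates='drop')   # quantile bins follow the distribution
Fixed-width vs. quantile binning with pandas (illustrative)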
27. Log transformation
Compresses the range of large numbers and expand the range of small numbers.
Eg. The larger x is, the slower log(x) increments.
28. Log transformation
Histograms of # views by user: raw vs. smoothed by log(1+x)
Smoothing long-tailed data with log
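A one-liner sketch (hypothetical counts) of the log(1 + x) smoothing illustrated above:
import numpy as np

views = np.array([1, 3, 10, 250, 10000])
np.log1p(views)   # array([0.69, 1.39, 2.40, 5.53, 9.21]) - the long tail is compressed
Log transformation with NumPy (illustrative)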
29. Scaling
● Models that are smooth functions of input features are sensitive to the scale
of the input (eg. Linear Regression)
● Scale numerical variables into a certain range, dividing values by a
normalization constant (no changes in single-feature distribution)
● Popular techniques
○ MinMax Scaling
○ Standard (Z) Scaling
30. Min-max scaling
● Squeezes (or stretches) all values into the range [0, 1], adding robustness to very small standard deviations and preserving zeros in sparse data.
>>> from sklearn import preprocessing
>>> import numpy as np
>>> X_train = np.array([[ 1., -1., 2.],
...                     [ 2., 0., 0.],
...                     [ 0., 1., -1.]])
>>> min_max_scaler = preprocessing.MinMaxScaler()
>>> X_train_minmax = min_max_scaler.fit_transform(X_train)
>>> X_train_minmax
array([[ 0.5 , 0. , 1. ],
[ 1. , 0.5 , 0.33333333],
[ 0. , 1. , 0. ]])
Min-max scaling with scikit-learn
31. Standard (Z) Scaling
After Standardization, a feature has mean of 0 and variance of 1 (assumption of
many learning algorithms)
>>> from sklearn import preprocessing
>>> import numpy as np
>>> X = np.array([[ 1., -1., 2.],
... [ 2., 0., 0.],
... [ 0., 1., -1.]])
>>> X_scaled = preprocessing.scale(X)
>>> X_scaled
array([[ 0. ..., -1.22..., 1.33...],
[ 1.22..., 0. ..., -0.26...],
[-1.22..., 1.22..., -1.06...]])
>>> X_scaled.mean(axis=0)
array([ 0., 0., 0.])
>>> X_scaled.std(axis=0)
array([ 1., 1., 1.])
Standardization with scikit-learn
32. Normalization
● Scales individual samples (rows) to have unit norm, dividing values by the vector's L2 norm, a.k.a. the Euclidean norm: x_normalized = x / ||x||_2
● Useful for quadratic forms (like the dot product) or any other kernel that quantifies the similarity of pairs of samples. This assumption is the base of the Vector Space Model often used in text classification and clustering contexts
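A minimal sketch of L2 row normalization with scikit-learn (toy matrix, not from the slides):
from sklearn import preprocessing
import numpy as np

X = np.array([[3., 4.],
              [1., 0.]])
preprocessing.normalize(X, norm='l2')
# array([[0.6, 0.8],
#        [1. , 0. ]])   each row now has Euclidean norm 1
Normalization with scikit-learn (illustrative)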
34. Interaction Features
● Simple linear models use a linear combination of the individual input features x1, x2, ..., xn to predict the outcome y:
y = w1*x1 + w2*x2 + ... + wn*xn
● An easy way to increase the complexity of the linear model is to create feature combinations (nonlinear features).
● Example:
Degree-2 interaction features for the vector x = (x1, x2):
y = w1*x1 + w2*x2 + w3*x1*x2 + w4*x1^2 + w5*x2^2
35. Interaction Features
>>> import numpy as np
>>> from sklearn.preprocessing import PolynomialFeatures
>>> X = np.arange(6).reshape(3, 2)
>>> X
array([[0, 1],
[2, 3],
[4, 5]])
>>> poly = PolynomialFeatures(degree=2, interaction_only=False, include_bias=True)
>>> poly.fit_transform(X)
array([[ 1., 0., 1., 0., 0., 1.],
[ 1., 2., 3., 4., 6., 9.],
[ 1., 4., 5., 16., 20., 25.]])
Polynomial features with scikit-learn
36. Interaction Features - Vowpal Wabbit
vw --loss_function logistic --link=logistic --ftrl --ftrl_alpha 0.005 --ftrl_beta 0.1
-q cc -q zc -q zm
-l 0.01 --l1 1.0 --l2 1.0 -b 28 --hash all
--compressed -d data/train_fv.vw -f output.model
Feature interactions with VW
Interacting (quadratic) features of some namespaces
vw_line = '{} |i {} |m {} |z {} |c {}\n'.format(
label,
' '.join(integer_features),
' '.join(ctr_features),
' '.join(similarity_features),
' '.join(categorical_features))
Separating features in namespaces in Vowpal Wabbit (VW) sparse format
1 |i 12:5 18:126 |m 2:0.015 45:0.123 |z 32:0.576 17:0.121 |c 16:1 295:1 3554:1
Sample data point (line in VW format file)
38. Categorical Features
● Nearly always need some treatment to be suitable for models
● High cardinality can create very sparse data
● Difficult to impute missing values
● Examples:
Platform: [“desktop”, “tablet”, “mobile”]
Document_ID or User_ID: [121545, 64845, 121545]
39. One-Hot Encoding (OHE)
● Transform a categorical feature with m possible values into m binary features.
● If the variable cannot be multiple categories at once, then only one bit in the
group can be on.
● Sparse format is memory-friendly
● Example: “platform=tablet” can be sparsely encoded as “2:1”
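An illustrative sketch (not the author's code) of one-hot encoding the platform feature with scikit-learn; the sparse output keeps memory usage low for high-cardinality columns:
from sklearn.preprocessing import OneHotEncoder

platforms = [['desktop'], ['tablet'], ['mobile'], ['tablet']]
encoder = OneHotEncoder()              # returns a sparse matrix by default
encoded = encoder.fit_transform(platforms)
encoder.categories_                    # [array(['desktop', 'mobile', 'tablet'], dtype=object)]
encoded.toarray()                      # dense view, only for inspection
One-Hot Encoding with scikit-learn (illustrative)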
41. Large Categorical Variables
● Common in applications like targeted advertising and fraud detection
● Example:
Some large categorical features from Outbrain Click Prediction competition
42. Feature hashing
● Hashes categorical values into fixed-length vectors.
● Lower sparsity and higher compression compared to OHE
● Deals with new and rare categorical values (eg: new user-agents)
● May introduce collisions
100 hashed columns
43. Feature hashing
import hashlib

def hashstr(s, nr_bins):
    return int(hashlib.md5(s.encode('utf8')).hexdigest(), 16) % (nr_bins - 1) + 1
CATEGORICAL_VALUE='ad_id=354424'
MAX_BINS=100000
>>> hashstr(CATEGORICAL_VALUE, MAX_BINS)
49389
Feature hashing with pure Python
Original category
Hashed category
import tensorflow as tf
ad_id_hashed = tf.contrib.layers.sparse_column_with_hash_bucket('ad_id', hash_bucket_size=250000, dtype=tf.int64, combiner="sum")
Feature hashing with TensorFlow
44. Feature hashing
vw --loss_function logistic --link=logistic --ftrl --ftrl_alpha 0.005 --ftrl_beta 0.1
-q cc -q zc -q zm -l 0.01 --l1 1.0 --l2 1.0
-b 18 --hash all
--compressed -d data/train_fv.vw -f output.model
Feature hashing with Vowpal Wabbit
Hashes values to a feature space of 2^18 positions (columns)
45. Bin-counting
● Instead of using the actual categorical value, use a global statistic of this
category on historical data.
● Useful for both linear and non-linear algorithms
● May give collisions (same encoding for different categories)
● Be careful about leakage
● Strategies
○ Count
○ Average CTR
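A hedged sketch of bin-counting with pandas: the statistics (count and average CTR) should be computed on historical/training data only and then joined back, to avoid leakage. The column names (ad_id, clicked) are hypothetical:
import pandas as pd

train = pd.DataFrame({'ad_id':   [1, 1, 1, 2, 2, 3],
                      'clicked': [0, 1, 0, 1, 1, 0]})
stats = (train.groupby('ad_id')['clicked']
              .agg(ad_count='count', ad_ctr='mean')
              .reset_index())
train_enc = train.merge(stats, on='ad_id', how='left')   # apply the same mapping to test data
Bin-counting (count and average CTR) with pandas (illustrative)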
47. LabelCount encoding
● Rank categorical variables by count in train set
● Useful for both linear and non-linear algorithms (eg: decision trees)
● Not sensitive to outliers
● Won’t give same encoding to different variables
48. Category Embedding
● Use a Neural Network to create dense embeddings from categorical
variables.
● Map categorical variables in a function approximation problem into Euclidean
spaces
● Faster model training.
● Less memory overhead.
● Can give better accuracy than one-hot encoding.
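A rough sketch, assuming TensorFlow/Keras, of learning a dense embedding for a high-cardinality categorical feature inside a small network; the vocabulary size, embedding dimension, and target are arbitrary:
import tensorflow as tf

n_categories, embedding_dim = 10000, 16   # hypothetical sizes
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=n_categories, output_dim=embedding_dim),
    tf.keras.layers.GlobalAveragePooling1D(),        # collapse the sequence dimension
    tf.keras.layers.Dense(1, activation='sigmoid'),  # eg. click / no-click
])
model.compile(optimizer='adam', loss='binary_crossentropy')
# model.layers[0].get_weights()[0] holds the learned embedding matrix after training
Category embedding with Keras (illustrative sketch)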
51. Time Zone conversion
Factors to consider:
● Multiple time zones in some countries
● Daylight Saving Time (DST)
○ Start and end DST dates
52. ● Apply binning on time data to make it categorical and more general.
● Bin the time into hours or periods of the day, like below (see the sketch after the table).
● Extraction: weekday/weekend, weeks, months, quarters, years...
Hour range Bin ID Bin Description
[5, 8) 1 Early Morning
[8, 11) 2 Morning
[11, 14) 3 Midday
[14, 19) 4 Afternoon
[19, 22) 5 Evening
[22, 24) and [0, 5) 6 Night
Time binning
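A small sketch (hypothetical timestamps) of mapping the hour of day into the period bins from the table above with pandas:
import pandas as pd

ts = pd.to_datetime(pd.Series(['2016-06-14 06:30', '2016-06-14 12:10', '2016-06-14 23:45']))
bins   = [0, 5, 8, 11, 14, 19, 22, 24]
labels = ['Night', 'Early Morning', 'Morning', 'Midday', 'Afternoon', 'Evening', 'Night']
pd.cut(ts.dt.hour, bins=bins, labels=labels, right=False, ordered=False)
# Early Morning, Midday, Night
Time binning with pandas (illustrative)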
53. ● Instead of encoding total spend, encode things like:
spend in last week, spend in last month, spend in last year.
● Gives a trend to the algorithm: two customers with equal total
spend can have wildly different behavior: one customer may be
starting to spend more, while the other is starting to decline.
Trendlines
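A hedged pandas sketch of time-windowed spend features ("spend in last 7 / 30 days") per customer; the column names and values are invented:
import pandas as pd

df = pd.DataFrame({'customer': ['a', 'a', 'a', 'b'],
                   'date': pd.to_datetime(['2016-06-01', '2016-06-20', '2016-06-28', '2016-06-27']),
                   'spend': [50.0, 20.0, 30.0, 80.0]}).sort_values('date').set_index('date')
df['spend_last_7d']  = df.groupby('customer')['spend'].transform(lambda s: s.rolling('7D').sum())
df['spend_last_30d'] = df.groupby('customer')['spend'].transform(lambda s: s.rolling('30D').sum())
Trendline (time-windowed) features with pandas (illustrative)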
54. ● Hardcode categorical features from dates
● Example: Factors that might have major influence on spending behavior
● Proximity to major events (holidays, major sports events)
○ Eg. date_X_days_before_holidays
● Proximity to wages payment date (monthly seasonality)
○ Eg. first_saturday_of_the_month
Closeness to major events
55. ● Differences between dates might be relevant
● Examples:
○ user_interaction_date - published_doc_date
To model how recent the ad was when the user viewed it.
Hypothesis: user interests on a topic may decay over time
○ last_user_interaction_date - user_interaction_date
To model how old a given user interaction was compared to the user's last
interaction
Time differences
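A minimal sketch of date-difference features with pandas; the column names follow the examples above but the values are invented:
import pandas as pd

df = pd.DataFrame({
    'user_interaction_date': pd.to_datetime(['2016-06-20', '2016-06-25']),
    'published_doc_date':    pd.to_datetime(['2016-06-01', '2016-06-24']),
})
df['doc_age_days'] = (df['user_interaction_date'] - df['published_doc_date']).dt.days   # 19, 1
Time-difference features with pandas (illustrative)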
57. Spatial Variables
● Spatial variables encode a location in space, like:
○ GPS-coordinates (lat. / long.) - sometimes require projection to a different
coordinate system
○ Street Addresses - require geocoding
○ ZipCodes, Cities, States, Countries - usually enriched with the centroid
coordinate of the polygon (from external GIS data)
● Derived features
○ Distance between a user location and searched hotels (Expedia competition)
○ Impossible travel speed (fraud detection)
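A hedged sketch of one derived spatial feature: the great-circle (haversine) distance in kilometers between two lat/long points, e.g. a user location and a hotel; the coordinates below are arbitrary:
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(np.radians, [lat1, lon1, lat2, lon2])
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))   # Earth radius ~6371 km

haversine_km(-23.55, -46.63, 40.71, -74.01)     # e.g. São Paulo -> New York
Haversine distance as a derived spatial feature (illustrative)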
58. Spatial Enrichment
Usually useful to enrich with external geographic data (eg. Census demographics)
Beverage Containers Redemption Fraud Detection: Usage of # containers redeemed (red circles) by
store and Census households median income by Census Tracts
60. Natural Language Processing
Cleaning
• Lowercasing
• Convert accented characters
• Removing non-alphanumeric
• Repairing
Tokenizing
• Encode punctuation marks
• Tokenize
• N-Grams
• Skip-grams
• Char-grams
• Affixes
Removing
• Stopwords
• Rare words
• Common words
Roots
• Spelling correction
• Chop
• Stem
• Lemmatize
Enrich
• Entity Insertion / Extraction
• Parse Trees
• Reading Level
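A small sketch (my own) combining a few of the cleaning steps above: lowercasing, removing non-alphanumeric characters, tokenizing, and dropping stopwords using scikit-learn's built-in English stop word list:
import re
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS

def clean(text):
    text = text.lower()
    text = re.sub(r'[^a-z0-9\s]', ' ', text)   # remove non-alphanumeric characters
    return [t for t in text.split() if t not in ENGLISH_STOP_WORDS]

clean('The cat sleeps by the pool!')   # ['cat', 'sleeps', 'pool']
Basic text cleaning and tokenization (illustrative)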
61. Represent each document as a feature vector in the vector space, where each
position represents a word (token) and the contained value is its relevance in the
document.
● BoW (Bag of words)
● TF-IDF (Term Frequency - Inverse Document Frequency)
● Embeddings (eg. Word2Vec, Glove)
● Topic models (e.g LDA)
Document Term Matrix - Bag of Words
Text vectorization
63. from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(max_df=0.5, max_features=1000,
min_df=2, stop_words='english')
tfidf_corpus = vectorizer.fit_transform(text_corpus)
face person guide lock cat dog sleep micro pool gym
0 1 2 3 4 5 6 7 8 9
D1 0.05 0.25
D2 0.02 0.32 0.45
...
...
tokens
documents
TF-IDF sparse matrix example
Text vectorization - TF-IDF
TF-IDF with scikit-learn
64. Similarity metric between two vectors is the cosine of the angle between them
from sklearn.metrics.pairwise import cosine_similarity
cosine_similarity(tfidf_matrix[0:1], tfidf_matrix)
Cosine Similarity with scikit-learn
Cosine Similarity
65. Textual Similarities
• Token similarity: count the number of tokens that appear in
both texts.
• Levenshtein/Hamming/Jaccard Distance: check the
similarity between two strings by looking at the number of
operations needed to transform one into the other.
• Word2Vec / Glove: Check cosine similarity between two
word embedding vectors
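A tiny sketch (mine) of token-level Jaccard similarity between two texts:
def jaccard_similarity(text_a, text_b):
    tokens_a, tokens_b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

jaccard_similarity('The cat sleeps', 'the dog sleeps')   # 0.5
Jaccard token similarity in pure Python (illustrative)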
69. Feature Selection
Reduces model complexity and training time
● Filtering - Eg. Correlation or Mutual Information between
each feature and the response variable
● Wrapper methods - Expensive, trying to optimize the best
subset of features (eg. Stepwise Regression)
● Embedded methods - Feature selection as part of the model
training process (eg. Feature Importances of Decision Trees or
Tree Ensembles)
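A hedged sketch of two of the strategies above on a synthetic dataset: a mutual-information filter and embedded selection via tree feature importances, both with scikit-learn:
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=42)

X_filtered = SelectKBest(mutual_info_classif, k=5).fit_transform(X, y)    # filtering

forest = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)
importances = forest.feature_importances_                                  # embedded method
Filter and embedded feature selection with scikit-learn (illustrative)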
70. “More data beats clever algorithms,
but better data beats more data.”
– Peter Norvig
71. Diverse set of Features and Models leads to different results!
Outbrain Click Prediction - Leaderboard score of my approaches
73. “...some machine learning projects
succeed and some fail.
Where is the difference?
Easily the most important factor is the
features used.”
– Pedro Domingos