SigOpt. Confidential.
Tuning 2.0
Advanced Optimization Techniques
Scott Clark, PhD — Founder and CEO
Tuesday, September 10, 2019
Accelerate and amplify the
impact of modelers everywhere
Enterprise AI
[Chart: value per model by modeler segment: Analytics 2.0 models (1x), tailored models (100x), differentiated models (10,000x)]
Goals: differentiate products, generate revenue
Requirements: modelers with expertise, best-in-class solutions
Iterative, automated optimization
[Diagram: training data, testing data, and your AI, ML, DL, or simulation model stay behind your firewall; SigOpt sends new configurations (parameters or hyperparameters) over a REST API and receives back the objective metric from model evaluation or backtest, producing better results]
● EXPERIMENT INSIGHTS: track, organize, analyze and reproduce any model
● ENTERPRISE PLATFORM: built to fit any stack and scale with your needs
● OPTIMIZATION ENGINE: explore and exploit with a variety of techniques
Your data and models stay private. Integrates with any modeling stack.
Current SigOpt algorithmic trading customers represent $300B+ in assets under management.
Current SigOpt enterprise customers across six industries represent $500B+ in market capitalization.
The modeling lifecycle:
Data Engineering → Feature Engineering → Metric Definition → Model Search → Model Training → Model Tuning → Model Evaluation → Model Deployment

Tuning opportunities across the lifecycle:
● Feature Engineering: tuning transformations
● Metric Definition: multimetric HPO (balancing metrics)
● Model Search: HPO with conditional parameters (tuning architecture)
● Model Training: early stopping, convergence monitoring
● Model Tuning: hyperparameter optimization, including long training cycles
● Model Deployment: re-tuning

Opportunity
Iteratively tune at all stages of the modeling lifecycle
Benefits
Learn fast, fail fast
Give yourself the best chance at finding good use
cases while avoiding false negatives
Connect outputs to outcomes
Define, select and iterate on your metrics
with end-to-end evaluation
Find the global maximum
Early non-optimized decisions in the process limit
your ability to maximize performance
Boost productivity
Automate modeling tasks so modelers spend
more time applying their expertise
Focus for today
Metric Definition, Model Search, Long Training Cycles
Techniques
1. Metric definition: multimetric optimization
2. Model search: conditional parameters
3. Long training cycles: multitask optimization
1. Metric definition with multimetric optimization
How it works: Multimetric optimization (with thresholds)
● Define two metrics instead of one
● Optimize against both metrics
automatically and simultaneously
● Set thresholds on each individual metric to
reflect business or modeling needs
● Compare a Pareto frontier of best model
configurations that balance these two
metrics
● Relevant docs
● Relevant blog post
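The bullets above map directly onto an experiment definition. A minimal sketch in Python in the general shape of SigOpt's experiment schema; the metric names, thresholds, parameters, and budget here are illustrative, not the webinar's actual values:

```python
# Sketch of a multimetric experiment definition with thresholds.
# All names and numbers below are illustrative.
experiment_config = {
    "name": "accuracy-vs-inference-time",
    # Two metrics, optimized simultaneously; each threshold encodes
    # a business or modeling requirement.
    "metrics": [
        {"name": "accuracy", "objective": "maximize", "threshold": 0.90},
        {"name": "inference_time_ms", "objective": "minimize", "threshold": 50.0},
    ],
    "parameters": [
        {"name": "log_learning_rate", "type": "double",
         "bounds": {"min": -9.2, "max": 0.0}},
        {"name": "num_filters", "type": "int",
         "bounds": {"min": 16, "max": 256}},
    ],
    "observation_budget": 120,
}

# With the real client this would be submitted roughly as
#   conn = sigopt.Connection(client_token=...)
#   experiment = conn.experiments().create(**experiment_config)
# and each training run would report both metric values back.
```

Each completed run then lands somewhere in the accuracy/inference-time plane, and the optimizer builds a Pareto frontier of the non-dominated ones.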
Potential applications of multimetric optimization
● Balance Competing Objectives: https://sigopt.com/blog/intro-to-multicriteria-optimization/
● Define and Select Metrics: https://sigopt.com/blog/multimetric-updates-in-the-experiment-insights-dashboard/
● Connect Metrics to Outcomes: https://sigopt.com/blog/metric-thresholds-a-new-feature-to-supercharge-multimetric-optimization/
Use Case: Balancing Speed & Accuracy in Deep Learning
Multimetric Use Case 1
● Category: Time Series
● Task: Sequence Classification
● Model: CNN
● Data: Diatom Images
● Analysis: Accuracy-Time Tradeoff
● Result: Similar accuracy at 33% of the inference time
Multimetric Use Case 2
● Category: NLP
● Task: Sentiment Analysis
● Model: CNN
● Data: Rotten Tomatoes
● Analysis: Accuracy-Time Tradeoff
● Result: ~2% accuracy gain at 50% of the training time
Learn more: https://devblogs.nvidia.com/sigopt-deep-learning-hyperparameter-optimization/
Experiment Design for Sequence Classification
Data
● Diatom Images
● Source: UCR Time Series Classification
Model
● Convolutional Neural Network
● Source: Wang et al. (paper, code)
● TensorFlow via Keras
Metrics
● Inference Time
● Accuracy
HPO Methods (Implemented via SigOpt)
● Random Search
● Bayesian Optimization
Note: Experiment code available here
Process: Tune a variety of parameters to maximize both metrics
[Diagram: tuned parameters span the network architecture and stochastic gradient descent]
Result: Bayesian optimization outperforms random search
● Both methods were executed via the SigOpt API
● Bayesian optimization required 90% fewer training runs than random search
● Bayesian optimization found 85.7% of the combined Pareto frontier of optimal model configurations, almost 6x as many choices
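The Pareto frontier reported above is simply the set of non-dominated configurations. A minimal Python sketch with made-up (accuracy, inference time) observations, maximizing the first and minimizing the second:

```python
def pareto_frontier(observations):
    """Return the observations not dominated by any other.

    Each observation is (accuracy, inference_time). A point dominates
    another if its accuracy is >= and its inference time is <=, and it
    is strictly better on at least one of the two.
    """
    frontier = []
    for acc, t in observations:
        dominated = any(
            (a2 >= acc and t2 <= t) and (a2 > acc or t2 < t)
            for a2, t2 in observations
        )
        if not dominated:
            frontier.append((acc, t))
    return frontier

# Illustrative observations, not real experiment data.
obs = [(0.91, 120), (0.90, 80), (0.88, 40), (0.85, 35), (0.84, 60)]
print(pareto_frontier(obs))
# → [(0.91, 120), (0.90, 80), (0.88, 40), (0.85, 35)]
```

Only (0.84, 60) is dominated here (by (0.88, 40), which is both more accurate and faster); the rest form the frontier the modeler chooses from.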
Result: Minimal accuracy loss for a 66% inference-time reduction
[Chart: Pareto frontier of configurations that maximize accuracy, minimize inference time, or balance both]
2. Model search with conditional parameters
How it works: Conditional parameters
Take into account the conditionality of certain parameter types in the optimization process
● Establish conditionality between various parameters
● Use this conditionality to improve the Bayesian optimization process
● Boost results from the hyperparameter optimization process
● Example: Architecture parameters for deep learning models
● Example: Parameter types for SGD variants
● Relevant docs
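The SGD-variants example above can be sketched as a conditional search space: the choice of optimizer gates which hyperparameters are active. The field names below follow the general shape of SigOpt's conditionals feature but should be read as illustrative, not the exact API schema; the helper is hypothetical:

```python
# Illustrative conditional search space: "optimizer" gates which
# parameters apply to a given suggestion.
search_space = {
    "conditionals": [
        {"name": "optimizer", "values": ["sgd", "momentum", "adam"]},
    ],
    "parameters": [
        {"name": "log_learning_rate", "type": "double",
         "bounds": {"min": -9.2, "max": 0.0}},
        # Only meaningful when the momentum variant is chosen.
        {"name": "momentum", "type": "double",
         "bounds": {"min": 0.0, "max": 0.9},
         "conditions": {"optimizer": ["momentum"]}},
        # Only meaningful for Adam.
        {"name": "beta2", "type": "double",
         "bounds": {"min": 0.9, "max": 0.999},
         "conditions": {"optimizer": ["adam"]}},
    ],
}

def active_parameters(space, assignment):
    """Return the parameter names active under a conditional assignment."""
    active = []
    for p in space["parameters"]:
        conds = p.get("conditions", {})
        if all(assignment.get(k) in allowed for k, allowed in conds.items()):
            active.append(p["name"])
    return active

print(active_parameters(search_space, {"optimizer": "adam"}))
# → ['log_learning_rate', 'beta2']
```

The optimizer can then model only the active subspace for each suggestion instead of treating inactive parameters as noise.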
Use Case: Effective and Efficient NLP Optimization
● Category: NLP
● Task: Question Answering
● Model: MemN2N
● Data: bAbI
● Analysis: Performance benchmark
● Result: 4.84% gain at 30% of the cost
Learn more: https://devblogs.nvidia.com/optimizing-end-to-end-memory-networks-using-sigopt-gpus/
Design: Question answering data and memory networks
Sources:
● Facebook AI Research (FAIR) bAbI dataset: https://research.fb.com/downloads/babi/
● Sukhbaatar et al.: https://arxiv.org/abs/1503.08895
Hyperparameter Optimization Experiment Setup
Comparison of Bayesian optimization and random search, over both standard and conditional parameter spaces
Result: Significant boost in consistency and accuracy, with highly cost-efficient gains (SigOpt is 18.5x as efficient)
Comparison of random search versus Bayesian optimization with conditionals
3. Long training cycles with multitask optimization in parallel
How it works: Multitask optimization
● Introduce a variety of cheap (partial) and expensive (full) tasks in a hyperparameter optimization experiment
● Use cheaper tasks earlier in the tuning process (explore) to inform more expensive tasks later (exploit)
● In the process, reduce the full time required to tune an expensive model
● Relevant docs
Sources:
● Matthias Poloczek, Jialei Wang, Peter I. Frazier: https://arxiv.org/abs/1603.00389
● Aaron Klein, Frank Hutter, et al.: https://arxiv.org/abs/1605.07079
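A sketch of what a multitask experiment definition along these lines might look like; the task names, relative costs, and budget are illustrative rather than the exact values used in this talk:

```python
# Illustrative multitask experiment: cheap tasks (fewer epochs, or a
# data subsample) inform the expensive full-cost runs.
experiment_config = {
    "name": "multitask-cnn-tuning",
    "metrics": [{"name": "accuracy", "objective": "maximize"}],
    "parameters": [
        {"name": "log_learning_rate", "type": "double",
         "bounds": {"min": -9.2, "max": 0.0}},
    ],
    # Cost is relative to the full task (cost = 1.0). The optimizer
    # spends cheap evaluations exploring early and reserves full-cost
    # evaluations for exploiting promising regions later.
    "tasks": [
        {"name": "quarter_epochs", "cost": 0.25},
        {"name": "half_epochs", "cost": 0.5},
        {"name": "full_training", "cost": 1.0},
    ],
    "observation_budget": 220,
}

# A worker receiving a suggestion would train according to the task it
# was assigned (e.g. epochs scaled by the task cost), then report the
# observed metric along with which task produced it.
```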
How it works: Combine multitask with parallelization
[Diagram: the same optimization loop as before, with multiple parallel workers behind your firewall, each evaluating suggested configurations and reporting objective metrics via the REST API]
Use Case: Image Classification on a Budget
● Category: Computer Vision
● Task: Image Classification
● Model: CNN
● Data: Stanford Cars
● Analysis: Architecture Comparison
● Result: 2.4% accuracy gain for a much cheaper model
Learn more: https://mlconf.com/blog/insights-for-building-high-performing-image-classification-models/
Data: Cars image classification
● Stanford Cars dataset: 16,185 images, 196 classes
● Labels: make, model, year
Source: https://ai.stanford.edu/~jkrause/cars/car_dataset.html
Architecture: Classifying images of cars using ResNet
[Diagram: input image → ResNet (convolutions, classification) → output label, e.g. "Acura TLX 2015"]
Source: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun: https://arxiv.org/abs/1512.03385
Experiment design scenarios
Architecture comparison and training setup comparison, baseline versus SigOpt multitask:
ResNet 50
● Scenario 1a (baseline): pre-train on ImageNet, tune the fully connected layer
● Scenario 1b (SigOpt multitask): optimize hyperparameters to tune the fully connected layer
ResNet 18
● Scenario 2a (baseline): fine-tune the full network
● Scenario 2b (SigOpt multitask): optimize hyperparameters to fine-tune the full network
Training setup comparison
● Feature extractor: keep the ImageNet-pretrained convolutional layers fixed and tune only the fully connected classification layer
● Fine-tuning: tune both the ImageNet-pretrained convolutional layers and the fully connected layer
Hyperparameter setup

Hyperparameter           | Lower Bound | Upper Bound
Log Learning Rate        | 1.2e-4      | 1.0
Learning Rate Scheduler  | 0           | 0.99
Batch Size (powers of 2) | 16          | 256
Nesterov                 | False       | True
Log Weight Decay         | 1.2e-5      | 1.0
Momentum                 | 0           | 0.9
Scheduler Step           | 1           | 20
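The table above can be turned into a sampler for a random-search baseline. A sketch, assuming the "Log" rows mean log-uniform sampling between the stated bounds and that batch size is restricted to powers of 2 in its range:

```python
import math
import random

def sample_configuration(rng):
    """Draw one configuration from the hyperparameter table above."""
    return {
        # Log-uniform between 1.2e-4 and 1.0.
        "learning_rate": math.exp(rng.uniform(math.log(1.2e-4), math.log(1.0))),
        "lr_scheduler": rng.uniform(0.0, 0.99),
        # Powers of 2 from 16 to 256.
        "batch_size": 2 ** rng.randint(4, 8),
        "nesterov": rng.choice([False, True]),
        # Log-uniform between 1.2e-5 and 1.0.
        "weight_decay": math.exp(rng.uniform(math.log(1.2e-5), math.log(1.0))),
        "momentum": rng.uniform(0.0, 0.9),
        "scheduler_step": rng.randint(1, 20),
    }

config = sample_configuration(random.Random(0))
assert 1.2e-4 <= config["learning_rate"] <= 1.0
assert config["batch_size"] in (16, 32, 64, 128, 256)
```

A Bayesian optimizer searches the same space; only the suggestion mechanism differs.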
Results: Optimizing and tuning the full network outperforms
● Fine-tuning the smaller network significantly outperforms feature extraction on a bigger network
● Multitask optimization drives significant performance gains: +3.92% (fine-tuning ResNet 18) and +1.58% (feature extraction on ResNet 50)
Insight: Multitask efficiency at the hyperparameter level
Example: learning rate accuracy and values by cost of task over time
[Charts: progression of observations over time; accuracy and value for each observation; parameter importance analysis]
Insight: Parallelization further accelerates wall-clock time
● 928 total hours to optimize ResNet 18
● 220 observations per experiment
● 20 p2.xlarge AWS EC2 instances
● 45 hours actual wall-clock time
Implication: Fine-tuning significantly outperforms
Cost breakdown for multitask optimization

Cost efficiency             | Feature Extractor ResNet 50 | Fine-Tuning ResNet 18
Hours per training          | 4.08                        | 4.2
Observations                | 220                         | 220
Number of runs              | 1                           | 1
Total compute hours         | 898                         | 924
Cost per GPU-hour           | $0.90                       | $0.90
% improvement               | 1.58%                       | 3.92%
Total compute cost          | $808                        | $832
Cost ($) per % improvement  | $509                        | $212

Takeaways: similar compute cost and similar wall-clock time, but fine-tuning is significantly more efficient and effective.
Implication: Multiple benefits from multitask
Tuning ResNet-18

Cost efficiency     | Multitask | Bayesian | Random
Hours per training  | 4.2       | 4.2      | 4.2
Observations        | 220       | 646      | 646
Number of runs      | 1         | 1        | 20
Total compute hours | 924       | 2,713    | 54,264
Cost per GPU-hour   | $0.90     | $0.90    | $0.90
Total compute cost  | $832      | $2,442   | $48,838

Time to optimize      | Multitask | Bayesian | Random
Total compute hours   | 924       | 2,713    | 54,264
# of machines         | 20        | 20       | 20
Wall-clock time (hrs) | 46        | 136      | 2,713

Takeaways: 1.7% the cost of random search to achieve similar performance; 58x faster wall-clock time to optimize with multitask versus random search.
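The figures in these tables follow from simple arithmetic on the stated inputs (total compute hours = observations x hours per training x runs; wall-clock time = compute hours / machines), which the snippet below reproduces:

```python
# Reproduce the compute, cost, and wall-clock figures from the tables.
hours_per_training = 4.2
machines = 20
cost_per_gpu_hour = 0.90

def totals(observations, runs):
    """Return (compute hours, compute cost, wall-clock hours)."""
    compute_hours = observations * hours_per_training * runs
    return (compute_hours,
            compute_hours * cost_per_gpu_hour,
            compute_hours / machines)

multitask = totals(220, 1)        # ≈ (924 h, $832, 46 h)
bayesian = totals(646, 1)         # ≈ (2,713 h, $2,442, 136 h)
random_search = totals(646, 20)   # ≈ (54,264 h, $48,838, 2,713 h)

print(round(multitask[1] / random_search[1] * 100, 1))
# → 1.7 (the "1.7% the cost" figure)
print(round(random_search[2] / multitask[2], 1))
# → 58.7 (the "58x faster" figure)
```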
Techniques
1. Metric definition: multimetric optimization
Read the blog here.
2. Model search: conditional parameters
Read the blog here.
3. Long training cycles: multitask optimization
Read the blog here.
Try our solution: sign up at sigopt.com/try-it today.
The AI Summit San Francisco, September 25-26, 2019: https://sanfrancisco.theaisummit.com (register with code 1SFSPON25)
TWIMLcon, October 1-2, 2019: https://twimlcon.com/ (register with code SIGOPT20)
Download the eBook: https://twimlai.com/announcing-our-ai-platforms-series-and-ebooks/