Machine Learning & Deep Learning Bootcamp - IIT Alumni

Top Rated Bootcamp from IIT Alumni with Hands-on Practical Projects

Bestseller

About This Course

You're looking for a complete Machine Learning and Deep Learning Bootcamp that can help you launch a flourishing career in the field of Data Science & Machine Learning, right?

You've found the right Machine Learning and Deep Learning Bootcamp!

After completing this Bootcamp you will be able to:

· Confidently build predictive Machine Learning and Deep Learning models to solve business problems and create business strategy

· Answer Machine Learning related interview questions

· Participate and perform in online Data Analytics competitions such as Kaggle competitions

Check out the table of contents below to see which Machine Learning and Deep Learning models you are going to learn.

How will this Bootcamp help you?

A verifiable Certificate of Completion is presented to all students who complete this Machine Learning bootcamp.

If you are a business manager, an executive, or a student who wants to learn and apply machine learning to real-world business problems, this bootcamp will give you a solid base by teaching you the most popular machine learning techniques.

Why should you choose this bootcamp?

This bootcamp covers all the steps that one should take while solving a business problem through techniques such as linear regression.

Most bootcamps focus only on teaching how to run the analysis, but we believe that what happens before and after running the analysis is even more important. Before running the analysis, it is very important that you have the right data and do some pre-processing on it. And after running the analysis, you should be able to judge how good your model is and interpret the results so that you can actually help your business.

What makes us qualified to teach you?

The bootcamp is taught by Abhishek and Pukhraj. As managers in a global analytics consulting firm, we have helped businesses solve their problems using machine learning techniques, and we have used that experience to include the practical aspects of data analysis in this bootcamp.

We are also the creators of some of the most popular online bootcamps - with over 1,800,000 enrollments and thousands of 5-star reviews like these ones:

"This is very good, I love the fact that all explanations given can be understood by a layman." - Joshua

"Thank you Author for this wonderful bootcamp. You are the best and this bootcamp is worth any price." - Daisy

Our Promise

Teaching our students is our job and we are committed to it. If you have any questions about the bootcamp content, practice sheet or anything related to any topic, you can always post a question in the bootcamp or send us a direct message.

Download Practice Files and Complete Assignments

With each lecture, there are class notes attached for you to follow along. Each section contains a practice assignment for you to practically implement your learning.

Table of Contents

  • Section 1 - Python Basics

This section gets you started with Python.

It will help you set up Python and the Jupyter environment on your system, and it will teach you how to perform some basic operations in Python. We will understand the importance of different libraries such as NumPy, Pandas & Seaborn.
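
As a taste of what this section covers, here is a minimal sketch (toy numbers, not from the course exercises) of the kind of basic operations you will perform with NumPy and Pandas:

```python
import numpy as np
import pandas as pd

# Purely illustrative data, not from the course materials
arr = np.array([1, 2, 3, 4])                      # a NumPy array
df = pd.DataFrame({"price": [10.0, 12.5, 11.0]})  # a Pandas table

print(arr.mean())         # average of the array values
print(df["price"].max())  # largest value in the "price" column
```

Seaborn, the third library mentioned, is used for visualization and is best explored interactively, so it is omitted from this sketch.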

  • Section 2 - R Basics

This section will help you set up R and RStudio on your system, and it will teach you how to perform some basic operations in R.

  • Section 3 - Basics of Statistics

This section is divided into five lectures, starting from types of data, then types of statistics, then graphical representations to describe the data, then a lecture on measures of center like mean, median and mode, and lastly measures of dispersion like range and standard deviation.
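
All of these measures can be computed with Python's built-in statistics module; the sketch below (a toy sample, not course data) covers the measures of center and dispersion named above:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # toy sample, purely illustrative

# Measures of center
mean = statistics.mean(data)      # 5
median = statistics.median(data)  # 4.5
mode = statistics.mode(data)      # 4 (the most frequent value)

# Measures of dispersion
value_range = max(data) - min(data)  # 7
std_dev = statistics.pstdev(data)    # population standard deviation, 2.0

print(mean, median, mode, value_range, std_dev)
```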

  • Section 4 - Introduction to Machine Learning

In this section we will learn what Machine Learning means and what the different terms associated with it mean. You will see some examples so that you understand what machine learning actually is. It also covers the steps involved in building a machine learning model, not just linear models but any machine learning model.

  • Section 5 - Data Preprocessing

In this section you will learn what actions you need to take, step by step, to get the data and then prepare it for analysis. These steps are very important.

We start with understanding the importance of business knowledge, then we see how to do data exploration. We learn how to do uni-variate and bi-variate analysis, and then we cover topics like outlier treatment, missing value imputation, variable transformation and correlation.
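
As a rough illustration of two of these steps, here is a hypothetical Pandas sketch of median imputation and outlier capping. The Q3 + 3*IQR fence used below is one common convention, not necessarily the one the course uses:

```python
import numpy as np
import pandas as pd

# Hypothetical column with one missing value and one extreme outlier
s = pd.Series([10.0, 12.0, np.nan, 11.0, 200.0])

# Missing value imputation: fill with the median of the observed values
s = s.fillna(s.median())

# Outlier treatment: cap values above an upper fence of Q3 + 3*IQR
q1, q3 = s.quantile(0.25), s.quantile(0.75)
upper_fence = q3 + 3 * (q3 - q1)
s = s.clip(upper=upper_fence)

print(s.tolist())  # missing value filled, outlier capped at the fence
```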

  • Section 6 - Regression Model

This section starts with simple linear regression and then covers multiple linear regression.

We have covered the basic theory behind each concept without getting too mathematical about it, so that you understand where the concept comes from and why it is important. But even if you don't understand it, that is okay as long as you learn how to run and interpret the results as taught in the practical lectures.

We also look at how to quantify a model's accuracy, what the F-statistic means, how categorical variables among the independent variables are interpreted in the results, what variations on the ordinary least squares method exist, and how we finally interpret the results to answer a business problem.
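
The core OLS computation for simple linear regression can be written out by hand. This minimal sketch (toy numbers, not the course dataset) recovers the slope and intercept from the standard formulas:

```python
# Simple linear regression by ordinary least squares (OLS).
# Toy data chosen to lie exactly on y = 2x + 1.
x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# OLS slope: covariance of x and y divided by variance of x
slope = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
        / sum((xi - x_bar) ** 2 for xi in x)
intercept = y_bar - slope * x_bar

print(slope, intercept)  # 2.0 1.0
```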

  • Section 7 - Classification Models

This section starts with Logistic regression and then covers Linear Discriminant Analysis and K-Nearest Neighbors.

We have covered the basic theory behind each concept without getting too mathematical about it, so that you understand where the concept comes from and why it is important. But even if you don't understand it, that is okay as long as you learn how to run and interpret the results as taught in the practical lectures.

We also look at how to quantify a model's performance using the confusion matrix, how categorical variables among the independent variables are interpreted in the results, the test-train split, and how we finally interpret the results to answer a business problem.
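
As a small illustration of the confusion matrix idea (hypothetical predictions, not course data), the four cells and the resulting accuracy can be computed directly:

```python
# Confusion matrix for a binary classifier, built by hand.
# Toy labels and predictions, purely illustrative.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))  # true positives
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))  # true negatives
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # false positives
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # false negatives

accuracy = (tp + tn) / len(actual)
print(tp, tn, fp, fn, accuracy)  # 3 3 1 1 0.75
```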

  • Section 8 - Decision trees

In this section, we will start with the basic theory of decision trees, then we will create and plot a simple regression decision tree. Then we will extend our knowledge of regression decision trees to classification trees, and we will also learn how to create a classification tree in Python and R.

  • Section 9 - Ensemble technique
    In this section, we will start our discussion of advanced ensemble techniques for decision trees. Ensemble techniques are used to improve the stability and accuracy of machine learning algorithms. We will discuss Random Forest, Bagging, Gradient Boosting, AdaBoost and XGBoost.
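
The common thread in these techniques is combining many models' predictions. A bagging-style majority vote can be sketched in a few lines (hypothetical model outputs, not course data):

```python
from collections import Counter

# Hypothetical class predictions from three models on four samples
model_preds = [
    [1, 0, 1, 1],   # model A
    [1, 1, 1, 0],   # model B
    [0, 0, 1, 1],   # model C
]

# Ensemble prediction: the majority vote per sample
ensemble = [Counter(votes).most_common(1)[0][0]
            for votes in zip(*model_preds)]
print(ensemble)  # [1, 0, 1, 1]
```

Real bagging also trains each model on a different bootstrap sample of the data; the vote shown here is only the aggregation step.
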
  • Section 10 - Support Vector Machines
    SVMs are unique models that stand out in terms of their concept. In this section, we will discuss support vector classifiers and support vector machines.
  • Section 11 - ANN Theoretical Concepts

This part will give you a solid understanding of concepts involved in Neural Networks.

In this section you will learn about single cells, or Perceptrons, and how Perceptrons are stacked to create a network architecture. Once the architecture is set, we understand the gradient descent algorithm, which finds the minimum of a function, and learn how it is used to optimize our network model.
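
Gradient descent itself fits in a few lines. This sketch minimizes a toy one-dimensional function rather than a network loss, but the update rule is the same idea:

```python
# Gradient descent on f(x) = (x - 3)^2, whose minimum is at x = 3.
x = 0.0              # starting point
learning_rate = 0.1  # step size

for _ in range(100):
    grad = 2 * (x - 3)           # derivative of (x - 3)^2
    x -= learning_rate * grad    # step downhill against the gradient

print(round(x, 4))  # converges to 3.0
```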

  • Section 12 - Creating ANN model in Python and R

In this part you will learn how to create ANN models in Python and R.

We will start this section by creating an ANN model using the Sequential API to solve a classification problem. We learn how to define the network architecture, configure the model and train it. Then we evaluate the performance of our trained model and use it to predict on new data. Lastly we learn how to save and restore models.

We also understand the importance of libraries such as Keras and TensorFlow in this part.

  • Section 13 - CNN Theoretical Concepts

In this part you will learn about convolutional and pooling layers, which are the building blocks of CNN models.

In this section, we will start with the basic theory of the convolutional layer: stride, filters and feature maps. We also explain how gray-scale images differ from colored images. Lastly we discuss the pooling layer, which brings computational efficiency to our model.
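
The sliding-filter idea can be sketched in plain Python. Below, a 2x2 filter moves over a 3x3 "image" with stride 1 (no padding), producing a 2x2 feature map (toy numbers, not course data):

```python
# A "valid" 2D convolution with stride 1 on a tiny gray-scale image.
image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, -1]]   # a simple 2x2 difference-style filter

# Slide the kernel over every valid position and sum the products
out = [[sum(image[i + a][j + b] * kernel[a][b]
            for a in range(2) for b in range(2))
        for j in range(2)]
       for i in range(2)]

print(out)  # [[-4, -4], [-4, -4]]
```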

  • Section 14 - Creating CNN model in Python and R
    In this part you will learn how to create CNN models in Python and R.

We will take the same problem of recognizing fashion objects and apply a CNN model to it. We will compare the performance of our CNN model with our ANN model and notice that accuracy increases by 9-10% when we use a CNN. However, this is not the end of it. We can further improve accuracy using certain techniques, which we explore in the next part.

  • Section 15 - End-to-End Image Recognition project in Python and R
    In this section we build a complete image recognition project on colored images.

We take a Kaggle image recognition competition and build a CNN model to solve it. With a simple model we achieve nearly 70% accuracy on the test set. Then we learn concepts like Data Augmentation and Transfer Learning, which help us improve the accuracy from 70% to nearly 97% (as good as the winners of that competition).

  • Section 16 - Pre-processing Time Series Data

In this section, you will learn how to visualize time series, perform feature engineering, re-sample data, and use various other tools to analyze and prepare the data for models.

  • Section 17 - Time Series Forecasting

In this section, you will learn common time series models such as Auto-regression (AR), Moving Average (MA), ARMA, ARIMA, SARIMA and SARIMAX. 
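
As the simplest possible taste of forecasting, a naive moving-average forecast predicts the next value as the mean of the last k observations. (Note that the MA model listed above is a regression on past forecast errors, a more sophisticated idea than this smoother.) Toy numbers below, not course data:

```python
# Naive moving-average forecast: next value = mean of the last k points.
series = [10, 12, 11, 13, 12, 14]  # hypothetical time series
k = 3                              # window size

forecast = sum(series[-k:]) / k
print(forecast)  # mean of the last 3 observations: 13.0
```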

By the end of this bootcamp, your confidence in creating a Machine Learning or Deep Learning model in Python and R will soar. You'll have a thorough understanding of how to use ML/DL models to create predictive models and solve real-world business problems.

Below is a list of popular FAQs from students who want to start their Machine Learning journey:

What is Machine Learning?

Machine Learning is a field of computer science which gives the computer the ability to learn without being explicitly programmed. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention.

Why use Python for Machine Learning?

Understanding Python is one of the valuable skills needed for a career in Machine Learning.

Though it hasn’t always been, Python is the programming language of choice for data science. Here’s a brief history:

    In 2016, it overtook R on Kaggle, the premier platform for data science competitions.

    In 2017, it overtook R on KDnuggets' annual poll of data scientists' most-used tools.

    In 2018, 66% of data scientists reported using Python daily, making it the number one tool for analytics professionals.

Machine Learning experts expect this trend to continue with increasing development in the Python ecosystem. And while your journey to learn Python programming may be just beginning, it’s nice to know that employment opportunities are abundant (and growing) as well.

Why use R for Machine Learning?

Understanding R is one of the valuable skills needed for a career in Machine Learning. Below are some reasons why you should learn Machine Learning in R:

1. It’s a popular language for Machine Learning at top tech firms. Almost all of them hire data scientists who use R. Facebook, for example, uses R to do behavioral analysis with user post data. Google uses R to assess ad effectiveness and make economic forecasts. And by the way, it’s not just tech firms: R is in use at analysis and consulting firms, banks and other financial institutions, academic institutions and research labs, and pretty much everywhere else data needs analyzing and visualizing.

2. Learning the data science basics is arguably easier in R. R has a big advantage: it was designed specifically with data manipulation and analysis in mind. 

3. Amazing packages that make your life easier. Because R was designed with statistical analysis in mind, it has a fantastic ecosystem of packages and other resources that are great for data science. 

4. Robust, growing community of data scientists and statisticians. As the field of data science has exploded, R has exploded with it, becoming one of the fastest-growing languages in the world (as measured by StackOverflow). That means it’s easy to find answers to questions and community guidance as you work your way through projects in R.

5. Put another tool in your toolkit. No one language is going to be the right tool for every job. Adding R to your repertoire will make some projects easier, and of course, it'll also make you a more flexible and marketable employee when you're looking for jobs in data science.

What is the difference between Data Mining, Machine Learning, and Deep Learning?

Put simply, machine learning uses the same algorithms and techniques as data mining, except the kinds of predictions vary. While data mining discovers previously unknown patterns and knowledge, machine learning reproduces known patterns and knowledge, and further automatically applies that information to data, decision-making, and actions.

Deep learning, on the other hand, uses advanced computing power and special types of neural networks and applies them to large amounts of data to learn, understand, and identify complicated patterns. Automatic language translation and medical diagnoses are examples of deep learning.

Other Information

  • Certificate will be provided on completion of this course
  • Full lifetime access
  • Available on Mobile & Laptop

What Students Will Learn In Your Course?

  • Learn how to solve real-life problems using Machine Learning techniques
  • Machine Learning models such as Linear Regression, Logistic Regression, KNN etc.
  • Advanced Machine Learning models such as Decision Trees, XGBoost, Random Forest, SVM etc.
  • Understanding of the basics of statistics and concepts of Machine Learning
  • How to do statistical operations and run ML models in Python and R
  • In-depth knowledge of data collection and data pre-processing for Machine Learning problems
  • How to convert a business problem into a Machine Learning problem

Are There Any Course Requirements Or Prerequisites?

  • Students will need to install Anaconda software, but we have a separate lecture to guide you through installing it.

Who Are Your Target Students?

  • People pursuing a career in data science
  • Students who want to learn Machine Learning and Deep Learning from scratch
  • Working professionals beginning their data journey
  • Statisticians needing more practical experience

Course Content

  • 277 lectures
  • 34:56:28
  • 001 Introduction
    00:04:12
  • 003 Installing Python and Anaconda
    00:03:04
  • 004 Opening Jupyter Notebook
    00:09:06
  • 005 Introduction to Jupyter
    00:13:26
  • 006 Arithmetic operators in Python_ Python Basics
    00:04:28
  • 007 Strings in Python_ Python Basics
    00:19:07
  • 008 Lists, Tuples and Dictionaries_ Python Basics
    00:18:41
  • 009 Working with Numpy Library of Python
    00:11:54
  • 010 Working with Pandas Library of Python
    00:09:15
  • 011 Working with Seaborn Library of Python
    00:08:57
  • 012 Installing R and R studio
    00:05:52
  • 013 Basics of R and R studio
    00:10:47
  • 014 Packages in R
    00:10:52
  • 015 Inputting data part 1_ Inbuilt datasets of R
    00:04:21
  • 016 Inputting data part 2_ Manual data entry
    00:03:11
  • 017 Inputting data part 3_ Importing from CSV or Text files
    00:06:49
  • 018 Creating Barplots in R
    00:13:42
  • 019 Creating Histograms in R
    00:06:01
  • 020 Types of Data
    00:04:04
  • 021 Types of Statistics
    00:02:45
  • 022 Describing data Graphically
    00:11:37
  • 023 Measures of Centers
    00:07:05
  • 024 Measures of Dispersion
    00:04:37
  • 025 Introduction to Machine Learning
    00:16:03
  • 026 Building a Machine Learning Model
    00:08:42
  • 027 Gathering Business Knowledge
    00:03:26
  • 028 Data Exploration
    00:03:19
  • 029 The Dataset and the Data Dictionary
    00:07:31
  • 030 Importing Data in Python
    00:06:03
  • 031 Importing the dataset into R
    00:03:00
  • 032 Univariate analysis and EDD
    00:03:33
  • 033 EDD in Python
    00:12:11
  • 034 EDD in R
    00:12:43
  • 035 Outlier Treatment
    00:04:15
  • 036 Outlier Treatment in Python
    00:14:18
  • 037 Outlier Treatment in R
    00:04:49
  • 038 Missing Value Imputation
    00:03:36
  • 039 Missing Value Imputation in Python
    00:04:57
  • 040 Missing Value imputation in R
    00:03:49
  • 041 Seasonality in Data
    00:03:34
  • 042 Bi-variate analysis and Variable transformation
    00:16:14
  • 043 Variable transformation and deletion in Python
    00:09:21
  • 044 Variable transformation in R
    00:09:37
  • 045 Non-usable variables
    00:04:44
  • 046 Dummy variable creation_ Handling qualitative data
    00:04:50
  • 047 Dummy variable creation in Python
    00:05:45
  • 048 Dummy variable creation in R
    00:05:01
  • 049 Correlation Analysis
    00:10:05
  • 050 Correlation Analysis in Python
    00:07:07
  • 051 Correlation Matrix in R
    00:08:09
  • 052 The Problem Statement
    00:01:25
  • 053 Basic Equations and Ordinary Least Squares (OLS) method
    00:08:13
  • 054 Assessing accuracy of predicted coefficients
    00:14:40
  • 055 Assessing Model Accuracy_ RSE and R squared
    00:07:19
  • 056 Simple Linear Regression in Python
    00:14:06
  • 057 Simple Linear Regression in R
    00:07:40
  • 058 Multiple Linear Regression
    00:04:57
  • 059 The F - statistic
    00:08:22
  • 060 Interpreting results of Categorical variables
    00:05:04
  • 061 Multiple Linear Regression in Python
    00:14:13
  • 062 Multiple Linear Regression in R
    00:07:50
  • 063 Test-train split
    00:09:32
  • 064 Bias Variance trade-off
    00:06:01
  • 065 Test train split in Python
    00:10:19
  • 066 Test-Train Split in R
    00:08:44
  • 067 Linear models other than OLS
    00:04:18
  • 068 Subset selection techniques
    00:11:34
  • 069 Subset selection in R
    00:07:38
  • 070 Shrinkage methods_ Ridge and Lasso
    00:07:14
  • 071 Ridge regression and Lasso in Python
    00:23:50
  • 072 Ridge regression and Lasso in R
    00:12:51
  • 073 Heteroscedasticity
    00:02:30
  • 074 The Data and the Data Dictionary
    00:08:14
  • 075 Data Import in Python
    00:04:56
  • 076 Importing the dataset into R
    00:03:00
  • 077 EDD in Python
    00:18:01
  • 078 EDD in R
    00:11:26
  • 079 Outlier treatment in Python
    00:09:53
  • 080 Outlier Treatment in R
    00:04:49
  • 081 Missing Value Imputation in Python
    00:04:49
  • 082 Missing Value imputation in R
    00:03:49
  • 083 Variable transformation and Deletion in Python
    00:04:55
  • 084 Variable transformation in R
    00:06:27
  • 085 Dummy variable creation in Python
    00:05:45
  • 086 Dummy variable creation in R
    00:05:19
  • 087 Three Classifiers and the problem statement
    00:03:17
  • 088 Why can't we use Linear Regression
    00:04:32
  • 089 Logistic Regression
    00:07:55
  • 090 Training a Simple Logistic Model in Python
    00:12:25
  • 091 Training a Simple Logistic model in R
    00:03:34
  • 092 Result of Simple Logistic Regression
    00:05:11
  • 093 Logistic with multiple predictors
    00:02:23
  • 094 Training multiple predictor Logistic model in Python
    00:06:05
  • 095 Training multiple predictor Logistic model in R
    00:01:49
  • 096 Confusion Matrix
    00:03:47
  • 097 Creating Confusion Matrix in Python
    00:09:55
  • 098 Evaluating performance of model
    00:07:40
  • 099 Evaluating model performance in Python
    00:02:21
  • 100 Predicting probabilities, assigning classes and making Confusion Matrix in R
    00:06:23
  • 101 Linear Discriminant Analysis
    00:09:42
  • 102 LDA in Python
    00:02:30
  • 103 Linear Discriminant Analysis in R
    00:09:10
  • 104 Test-Train Split
    00:09:30
  • 105 Test-Train Split in Python
    00:06:46
  • 106 Test-Train Split in R
    00:09:27
  • 107 K-Nearest Neighbors classifier
    00:08:41
  • 108 K-Nearest Neighbors in Python_ Part 1
    00:05:51
  • 109 K-Nearest Neighbors in Python_ Part 2
    00:07:00
  • 110 K-Nearest Neighbors in R
    00:08:50
  • 111 Understanding the results of classification models
    00:06:06
  • 112 Summary of the three models
    00:04:32
  • 113 Basics of Decision Trees
    00:10:10
  • 114 Understanding a Regression Tree
    00:10:17
  • 115 The stopping criteria for controlling tree growth
    00:03:15
  • 116 The Data set for this part
    00:02:59
  • 117 Importing the Data set into Python
    00:05:40
  • 118 Importing the Data set into R
    00:06:26
  • 119 Dependent- Independent Data split in Python
    00:04:02
  • 120 Test-Train split in Python
    00:06:04
  • 121 Splitting Data into Test and Train Set in R
    00:05:30
  • 122 Creating Decision tree in Python
    00:03:47
  • 123 Building a Regression Tree in R
    00:14:18
  • 124 Evaluating model performance in Python
    00:04:10
  • 125 Plotting decision tree in Python
    00:04:58
  • 126 Pruning a tree
    00:04:16
  • 127 Pruning a tree in Python
    00:10:37
  • 128 Pruning a Tree in R
    00:09:18
  • 129 Classification tree
    00:06:05
  • 130 The Data set for Classification problem
    00:01:38
  • 131 Classification tree in Python _ Preprocessing
    00:08:25
  • 132 Classification tree in Python _ Training
    00:13:13
  • 133 Building a classification Tree in R
    00:08:59
  • 134 Advantages and Disadvantages of Decision Trees
    00:01:34
  • 135 Ensemble technique 1 - Bagging
    00:06:39
  • 136 Ensemble technique 1 - Bagging in Python
    00:11:05
  • 137 Bagging in R
    00:06:20
  • 138 Ensemble technique 2 - Random Forests
    00:03:56
  • 139 Ensemble technique 2 - Random Forests in Python
    00:06:06
  • 140 Using Grid Search in Python
    00:12:14
  • 141 Random Forest in R
    00:03:58
  • 142 Boosting
    00:07:10
  • 143 Ensemble technique 3a - Boosting in Python
    00:05:08
  • 144 Gradient Boosting in R
    00:07:10
  • 145 Ensemble technique 3b - AdaBoost in Python
    00:04:00
  • 146 AdaBoosting in R
    00:09:44
  • 147 Ensemble technique 3c - XGBoost in Python
    00:11:07
  • 148 XGBoosting in R
    00:16:09
  • 149 Content flow
    00:01:34
  • 150 The Concept of a Hyperplane
    00:04:55
  • 151 Maximum Margin Classifier
    00:03:18
  • 152 Limitations of Maximum Margin Classifier
    00:02:28
  • 153 Support Vector classifiers
    00:10:00
  • 154 Limitations of Support Vector Classifiers
    00:01:34
  • 155 Kernel Based Support Vector Machines
    00:06:45
  • 156 Regression and Classification Models
    00:00:46
  • 157 The Data set for the Regression problem
    00:02:59
  • 158 Importing data for regression model
    00:05:40
  • 159 Missing value treatment
    00:03:38
  • 160 Dummy Variable creation
    00:04:58
  • 161 X-y Split
    00:04:02
  • 162 Test-Train Split
    00:06:04
  • 163 Standardizing the data
    00:06:28
  • 164 SVM based Regression Model in Python
    00:10:08
  • 165 The Data set for the Classification problem
    00:01:38
  • 166 Classification model - Preprocessing
    00:08:25
  • 167 Classification model - Standardizing the data
    00:01:57
  • 168 SVM Based classification model
    00:11:28
  • 169 Hyper Parameter Tuning
    00:09:47
  • 170 Polynomial Kernel with Hyperparameter Tuning
    00:04:07
  • 171 Radial Kernel with Hyperparameter Tuning
    00:06:31
  • 172 Importing Data into R
    00:08:00
  • 173 Test-Train Split
    00:05:29
  • 174 Classification SVM model using Linear Kernel
    00:16:11
  • 175 Hyperparameter Tuning for Linear Kernel
    00:06:28
  • 176 Polynomial Kernel with Hyperparameter Tuning
    00:10:19
  • 177 Radial Kernel with Hyperparameter Tuning
    00:06:31
  • 178 SVM based Regression Model in R
    00:11:14
  • 179 Introduction to Neural Networks and Course flow
    00:04:38
  • 180 Perceptron
    00:09:47
  • 181 Activation Functions
    00:07:30
  • 182 Python - Creating Perceptron model
    00:14:10
  • 183 Basic Terminologies
    00:09:47
  • 184 Gradient Descent
    00:12:17
  • 185 Back Propagation
    00:22:27
  • 186 Some Important Concepts
    00:12:44
  • 187 Hyperparameter
    00:08:19
  • 188 Keras and Tensorflow
    00:03:04
  • 189 Installing Tensorflow and Keras
    00:04:04
  • 190 Dataset for classification
    00:07:19
  • 191 Normalization and Test-Train split
    00:05:59
  • 192 Different ways to create ANN using Keras
    00:01:58
  • 193 Building the Neural Network using Keras
    00:12:24
  • 194 Compiling and Training the Neural Network model
    00:10:34
  • 195 Evaluating performance and Predicting using Keras
    00:09:21
  • 196 Building Neural Network for Regression Problem
    00:22:10
  • 197 Using Functional API for complex architectures
    00:12:40
  • 198 Saving - Restoring Models and Using Callbacks
    00:19:49
  • 199 Hyperparameter Tuning
    00:09:05
  • 200 Installing Keras and Tensorflow
    00:02:54
  • 201 Data Normalization and Test-Train Split
    00:12:00
  • 202 Building, Compiling and Training
    00:14:57
  • 203 Evaluating and Predicting
    00:09:46
  • 204 ANN with NeuralNets Package
    00:08:07
  • 205 Building Regression Model with Functional API
    00:12:34
  • 206 Complex Architectures using Functional API
    00:08:50
  • 207 Saving - Restoring Models and Using Callbacks
    00:20:16
  • 208 CNN Introduction
    00:07:42
  • 209 Stride
    00:02:51
  • 210 Padding
    00:05:07
  • 211 Filters and Feature maps
    00:07:48
  • 212 Channels
    00:06:31
  • 213 Pooling Layer
    00:05:32
  • 214 CNN model in Python - Preprocessing
    00:05:42
  • 215 CNN model in Python - structure and Compile
    00:06:24
  • 216 CNN model in Python - Training and results
    00:06:50
  • 217 Comparison - Pooling vs Without Pooling in Python
    00:06:20
  • 218 CNN on MNIST Fashion Dataset - Model Architecture
    00:02:04
  • 219 Data Preprocessing
    00:07:08
  • 220 Creating Model Architecture
    00:06:04
  • 221 Compiling and training
    00:02:53
  • 222 Model Performance
    00:06:26
  • 223 Comparison - Pooling vs Without Pooling in R
    00:04:33
  • 224 Project - Introduction
    00:07:04
  • 226 Project - Data Preprocessing in Python
    00:09:19
  • 227 Project - Training CNN model in Python
    00:09:05
  • 228 Project in Python - model results
    00:03:07
  • 229 Project in R - Data Preprocessing
    00:10:28
  • 230 CNN Project in R - Structure and Compile
    00:04:59
  • 231 Project in R - Training
    00:02:57
  • 232 Project in R - Model Performance
    00:02:22
  • 233 Project in R - Data Augmentation
    00:07:12
  • 234 Project in R - Validation Performance
    00:02:24
  • 235 Project - Data Augmentation Preprocessing
    00:06:46
  • 236 Project - Data Augmentation Training and Results
    00:06:26
  • 237 ILSVRC
    00:04:10
  • 238 LeNET
    00:01:31
  • 239 VGG16NET
    00:02:00
  • 240 GoogLeNet
    00:02:52
  • 241 Transfer Learning
    00:05:15
  • 242 Project - Transfer Learning - VGG16
    00:19:40
  • 243 Project - Transfer Learning - VGG16 (Implementation)
    00:12:44
  • 244 Project - Transfer Learning - VGG16 (Performance)
    00:08:02
  • 245 Introduction
    00:03:12
  • 246 Time Series Forecasting - Use cases
    00:02:25
  • 247 Forecasting model creation - Steps
    00:02:46
  • 248 Forecasting model creation - Steps 1 (Goal)
    00:06:03
  • 249 Time Series - Basic Notations
    00:09:02
  • 250 Data Loading in Python
    00:17:51
  • 251 Time Series - Visualization Basics
    00:09:28
  • 252 Time Series - Visualization in Python
    00:27:10
  • 253 Time Series - Feature Engineering Basics
    00:11:03
  • 254 Time Series - Feature Engineering in Python
    00:18:01
  • 255 Time Series - Upsampling and Downsampling
    00:04:17
  • 256 Time Series - Upsampling and Downsampling in Python
    00:16:45
  • 257 Time Series - Power Transformation
    00:02:32
  • 258 Moving Average
    00:07:12
  • 259 Exponential Smoothing
    00:02:07
  • 260 White Noise
    00:02:29
  • 261 Random Walk
    00:04:23
  • 262 Decomposing Time Series in Python
    00:09:41
  • 263 Differencing
    00:06:16
  • 264 Differencing in Python
    00:15:07
  • 265 Test Train Split in Python
    00:11:28
  • 266 Naive (Persistence) model in Python
    00:07:54
  • 267 Auto Regression Model - Basics
    00:03:29
  • 268 Auto Regression Model creation in Python
    00:09:22
  • 269 Auto Regression with Walk Forward validation in Python
    00:08:20
  • 270 Moving Average model -Basics
    00:04:33
  • 271 Moving Average model in Python
    00:08:58
  • 272 ACF and PACF
    00:08:07
  • 273 ARIMA model - Basics
    00:04:43
  • 274 ARIMA model in Python
    00:13:15
  • 275 ARIMA model with Walk Forward Validation in Python
    00:05:24
  • 276 SARIMA model
    00:07:26
  • 277 SARIMA model in Python
    00:10:40
  • 278 Stationary time Series
    00:01:42

Start-Tech Academy

  • 4.9 (101)
  • 15 Reviews
  • 101 Students
  • 3 Courses

Start-Tech Academy is a technology-based analytics education company that aims at bringing together analytics companies and interested learners.
Our top-quality training content, along with internships and project opportunities, helps students launch their analytics journey.

Founded by Abhishek Bansal and Pukhraj Parikh. 

Working as a project manager in an analytics consulting firm, Pukhraj has multiple years of experience working with analytics tools and software. He is competent in MS Office suites, cloud computing, SQL, Tableau, SAS, Google Analytics and Python.

Abhishek worked as an Acquisition Process owner in a leading telecom company before moving on to learning and teaching technologies like Machine Learning and Artificial Intelligence.