Machine Learning Fundamentals
Description
This course covers the major aspects of Machine Learning, diving into essential concepts and methodologies. Starting with the basics, you'll build a solid foundation in the core principles of Machine Learning. The course then guides you through the world of distributed computing with Apache Spark, showing you how to harness the power of large data sets.
You'll also explore Neural Networks and the pivotal role they play in the realm of Machine Learning, before venturing into the depths of Deep Learning, where you'll learn how to construct and train complex models. The course delves into key algorithms, including Decision Trees, Naive Bayes, and Logistic Regression, demonstrating their use cases and how to implement them in real-world scenarios. We will cap our journey with an exploration of Clustering, a powerful technique for data segmentation and pattern recognition.
Overall, the course is designed to equip you with the knowledge and tools to unlock the potential of Machine Learning in a wide array of contexts.
Certification
A certificate is issued on the Luxoft Training form.
Objectives
After completion of the course, students will better understand:
- Machine Learning basics
- Apache Spark usage
- Deep Learning
- Decision Trees
- Naive Bayes
- Logistic Regression
- Neural Networks
- Clustering
Target Audience
ML developers, architects, and testers who need to automate part of their work.
Prerequisites
- Understanding of the principles of object-oriented programming
- At least one year's experience working with object-oriented languages
- Knowledge of Java is advisable
Roadmap
- Machine learning tasks (Supervised/Unsupervised, Inputs, Outputs)
- Feature Engineering (Types, Selection, Discretization, Multi-class to binary, Normalizing, Projections/dimensionality reduction/PCA, Sampling, Cleansing)
- Model evaluation (Metrics: MLE, Precision/Recall, ROC, Loss functions, Cross-validation)
- Classification (Naïve Bayes, Decision Trees, Logistic Regression, SVM, Neural Networks, RBF); see the short sketch after this roadmap
- Predictions (Linear regression, CART, Bayesian Networks)
- Clustering (Hierarchical, Spectral, Mixture models and EM, Clustering for classification, LDA)
- Ensembles (Bagging, Random Forests, Boosting, Stacking)
- Recommendations
- Deep Learning (Restricted Boltzmann Machines, Convolutional Networks, LSTM)
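To give a flavour of how the Spark and Classification topics above come together in practice, here is a minimal sketch, assuming PySpark and Spark MLlib, of training a logistic regression classifier. The dataset, column names, and parameters are invented for illustration and are not part of the official course materials.

```python
# Minimal illustrative sketch (not official course material):
# training a logistic regression classifier with Spark MLlib.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("ml-fundamentals-demo").getOrCreate()

# Toy dataset: two numeric features and a binary label (invented values).
df = spark.createDataFrame(
    [(0.0, 1.1, 0.0), (2.0, 1.0, 1.0), (2.5, 3.3, 1.0), (0.5, 0.2, 0.0)],
    ["f1", "f2", "label"],
)

# Feature engineering: assemble the raw columns into a single feature vector.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
train = assembler.transform(df)

# Fit the classifier and inspect the learned coefficients.
lr = LogisticRegression(featuresCol="features", labelCol="label", maxIter=10)
model = lr.fit(train)
print(model.coefficients, model.intercept)

spark.stop()
```

The VectorAssembler step corresponds to the Feature Engineering item in the roadmap, and the fitted model would then be assessed with the evaluation metrics listed above (e.g. precision/recall, ROC, cross-validation).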