What You Will Learn:
- Differentiate between AI, ML, and DL; compare and select appropriate AI techniques; identify the components of DL applications; and recognize the features and components of the Intel® Distribution of OpenVINO™ toolkit and Intel® DevCloud.
- Understand why deep learning models are optimized and tuned for inference, use tools such as the Model Optimizer and the Post-Training Optimization Tool (POT), and make informed decisions when choosing an optimization strategy.
- Use the Inference Engine workflow, implement synchronous (SYNC) and asynchronous (ASYNC) execution modes, and run the Inference Engine to deploy optimized, future-ready AI applications.
- Evaluate different hardware platforms for AI inference, and differentiate between and select among the hardware platforms available in the Intel ecosystem.
- Work with the DL Workbench workflow, use the benchmarking tools it provides, and quickly prototype DL applications using Jupyter* Notebooks on the Intel® DevCloud.
Who Should Attend?
- AI developers and researchers
- AI enthusiasts
- Data scientists
- ML engineers
Prerequisites:
- Experience with Python
- Basic understanding of data processing and deep learning