If you aren’t learning about AI, then you’re falling behind.
Luckily for you, Google just made learning AI easier than ever with 10 free short courses.
1. Introduction to Generative AI
This introductory-level microlearning course explains what Generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers the Google tools that help you develop your own Gen AI apps.
2. Introduction to Large Language Models
This introductory-level microlearning course explores what large language models (LLMs) are, the use cases where they can be applied, and how you can use prompt tuning to enhance LLM performance.
3. Introduction to Responsible AI
This introductory-level microlearning course explains what responsible AI is, why it's important, and how Google implements responsible AI in its products.
4. Generative AI Fundamentals
This course tests your knowledge of Generative AI, large language models, and responsible AI.
5. Introduction to Image Generation
This course introduces diffusion models, a family of machine learning models that have recently shown promise in the image generation space. Diffusion models underpin many state-of-the-art image generation models.
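To get a feel for the core idea before you start, here is a minimal NumPy sketch of the forward "noising" process that diffusion models learn to reverse. It is an illustration only, not course material; the linear noise schedule, step count, and image stand-in are all assumptions.

```python
import numpy as np

# Minimal sketch of the forward (noising) process in a diffusion model.
# At each step a little Gaussian noise is mixed into the image; after enough
# steps the image is pure noise. Training teaches a network to reverse this.

rng = np.random.default_rng(0)
num_steps = 1000
betas = np.linspace(1e-4, 0.02, num_steps)   # assumed linear noise schedule
alphas_cumprod = np.cumprod(1.0 - betas)     # cumulative fraction of signal kept

def noisy_image_at_step(x0, t):
    """Jump straight to step t: x_t = sqrt(a_bar_t)*x0 + sqrt(1 - a_bar_t)*noise."""
    a_bar = alphas_cumprod[t]
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * rng.standard_normal(x0.shape)

x0 = rng.standard_normal((64, 64))                   # stand-in for a normalized 64x64 image
halfway = noisy_image_at_step(x0, num_steps // 2)    # already mostly noise by this point
```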
6. Encoder-Decoder Architecture
This course gives you a synopsis of the encoder-decoder architecture, which is a powerful and prevalent machine learning architecture for sequence-to-sequence tasks such as machine translation, text summarization, and question answering.
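If you like to see the shape of an idea in code first, here is a rough sketch of the encoder-decoder pattern in PyTorch (my choice of framework, not necessarily the course's); the GRU layers, vocabulary size, and dimensions are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of an encoder-decoder (seq2seq) model with GRUs.
# The encoder compresses the source sequence into a context vector; the decoder
# generates the target sequence token by token, conditioned on that context.

class Encoder(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, src):                  # src: (batch, src_len)
        _, hidden = self.rnn(self.embed(src))
        return hidden                        # (1, batch, hidden_dim) context vector

class Decoder(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt, hidden):          # tgt: (batch, tgt_len)
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden      # logits over the target vocabulary

# Toy forward pass: "translate" a batch of 2 made-up source sequences.
src = torch.randint(0, 1000, (2, 7))
tgt = torch.randint(0, 1000, (2, 5))
context = Encoder()(src)
logits, _ = Decoder()(tgt, context)
print(logits.shape)                          # torch.Size([2, 5, 1000])
```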
7. Attention Mechanism
This course will introduce you to the attention mechanism, a powerful technique that allows neural networks to focus on specific parts of an input sequence. You will learn how attention works, and how it can be used to improve the performance of a variety of machine learning tasks, including machine translation, text summarization, and question answering.
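The heart of the mechanism fits in a few lines. Below is a hedged sketch of scaled dot-product attention in PyTorch; the tensor shapes are made up for illustration, and the course may present the idea differently.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of scaled dot-product attention: each query scores every key,
# the scores become weights via softmax, and the output is a weighted sum of
# the values. This is how a network "focuses" on parts of an input sequence.

def scaled_dot_product_attention(query, key, value):
    d_k = query.size(-1)
    scores = query @ key.transpose(-2, -1) / d_k ** 0.5   # (..., q_len, k_len)
    weights = F.softmax(scores, dim=-1)                    # attention weights sum to 1
    return weights @ value, weights

q = torch.randn(1, 4, 8)   # 4 query positions, dimension 8
k = torch.randn(1, 6, 8)   # 6 key/value positions
v = torch.randn(1, 6, 8)
output, weights = scaled_dot_product_attention(q, k, v)
print(output.shape, weights.shape)   # torch.Size([1, 4, 8]) torch.Size([1, 4, 6])
```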
8. Transformer Models and BERT Model
This course introduces you to the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how it is used to build the BERT model. You also learn about the different tasks that BERT can be used for, such as text classification, question answering, and natural language inference.
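If you want to poke at BERT before (or after) the course, the Hugging Face `transformers` library (an outside tool, not necessarily what the course uses) makes it a few lines to try the masked-language-modeling task BERT was pretrained on:

```python
from transformers import pipeline

# BERT was pretrained to fill in masked words using context from both
# directions, which is what makes it "bidirectional".
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("The attention mechanism lets the model [MASK] on relevant words."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```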
9. Create Image Captioning Models
This course teaches you how to create an image captioning model by using deep learning. You learn about the different components of an image captioning model, such as the encoder and decoder, and how to train and evaluate your model. By the end of this course, you will be able to create your own image captioning models and use them to generate captions for images.
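As a rough preview, an image captioning model is the encoder-decoder idea again with a vision encoder in front. The sketch below wires a tiny CNN encoder to a GRU decoder in PyTorch; every size, layer choice, and the vocabulary are assumptions for illustration, not the course's architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of an image captioning model: a small CNN encoder turns the
# image into a feature vector, and a GRU decoder generates the caption token
# by token conditioned on that vector.

class CaptionModel(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(                        # image -> feature vector
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, hidden_dim),
        )
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        context = self.encoder(images).unsqueeze(0)          # (1, batch, hidden_dim)
        hidden_states, _ = self.decoder(self.embed(captions), context)
        return self.out(hidden_states)                       # logits per caption position

model = CaptionModel()
images = torch.randn(2, 3, 64, 64)                           # toy batch of 2 RGB images
captions = torch.randint(0, 5000, (2, 10))                   # their caption token IDs
print(model(images, captions).shape)                         # torch.Size([2, 10, 5000])
```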
10. Introduction to Generative AI Studio
This course introduces Generative AI Studio, a product on Vertex AI that helps you prototype and customize generative AI models so you can use their capabilities in your applications. In this course, you learn what Generative AI Studio is, its features and options, and how to use it by walking through demos of the product.
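Generative AI Studio itself is a point-and-click console, but the same Vertex AI foundation models can also be called from the Python SDK. The snippet below reflects the SDK's text-generation API as I understand it at the time of writing; the project ID, region, model name, and parameters are placeholders, so check the current Vertex AI docs before relying on it.

```python
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholder project/region; both come from your own Google Cloud setup.
vertexai.init(project="my-project-id", location="us-central1")

# "text-bison" was the PaLM 2 text model surfaced in Generative AI Studio;
# newer models may have superseded it by the time you read this.
model = TextGenerationModel.from_pretrained("text-bison")
response = model.predict(
    "Write a one-sentence tagline for a coffee shop run by robots.",
    temperature=0.2,          # lower = more deterministic output
    max_output_tokens=64,
)
print(response.text)
```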
I want to learn more about AI
If you want to stay up to date with the latest AI tools and updates (and how to use them to your advantage), make sure you are subscribed to the WGMI newsletter.