If you’ve taken any of the first five Microsoft Cognitive Services courses taught by MCT Scott J. Peterson, you know that’s true! And now Scott has three more courses (with others in the works) for developers interested in taking advantage of this easy-to-use suite of APIs, SDKs, and services for integrating machine learning and artificial intelligence into their apps.

In this round, Scott explores Translation APIs, which can help translate text and speech into other languages—often required for collaboration in today’s business world—or take text abbreviations and spell them out. Create your own chat scenarios, and learn about the many possible applications for speech translation—at the speed of speech—as you use these APIs to build apps that bridge the language barrier and enable users to communicate in real time.
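
If you want a feel for how simple a call can be, here’s a minimal sketch in Python of translating a sentence with the Translator Text REST API. The subscription key and target language below are placeholders, and the API version shown is an assumption; it may differ from what Scott uses in the course.

```python
# Minimal sketch: translating a sentence with the Translator Text REST API.
# The subscription key is a placeholder for the one you provision in Azure.
import requests

SUBSCRIPTION_KEY = "YOUR_TRANSLATOR_KEY"  # placeholder
ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def translate(text, to_language="de"):
    """Translate text into the target language and return the result."""
    response = requests.post(
        ENDPOINT,
        params={"api-version": "3.0", "to": to_language},
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
                 "Content-Type": "application/json"},
        json=[{"Text": text}],
    )
    response.raise_for_status()
    return response.json()[0]["translations"][0]["text"]

print(translate("Hello, how are you?"))  # e.g. "Hallo, wie geht es dir?"
```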

Then he shows off Video Indexer, an impressive cloud service that lets you extract insights from videos. Video Indexer opens up huge opportunities for enterprise applications: it creates transcripts and indexes them, and it supports multilingual and multicultural scenarios. You can easily edit the transcripts and indexes, too! It pulls out people and places and maps them to specific spots in the video, and it can generate search metadata from videos. Scott explores how you can use Video Indexer and the Video Indexer API to increase user engagement and to make each video that you publish more than just a video. Be sure to check out the demos!
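
For a rough idea of what working with the Video Indexer API looks like, here’s a sketch in Python that pulls the insights for a video you’ve already uploaded and indexed. It assumes the v2 REST endpoints; the key, location, account ID, and video ID are all placeholders.

```python
# Minimal sketch: fetching the insights (transcript, people, keywords) that
# Video Indexer extracted for a video. All IDs and the key are placeholders.
import requests

API_KEY = "YOUR_VIDEO_INDEXER_KEY"  # placeholder
LOCATION = "trial"                  # placeholder account location
ACCOUNT_ID = "YOUR_ACCOUNT_ID"      # placeholder
VIDEO_ID = "YOUR_VIDEO_ID"          # placeholder, an already-indexed video
BASE = "https://api.videoindexer.ai"

# Exchange the subscription key for a short-lived access token.
token = requests.get(
    f"{BASE}/auth/{LOCATION}/Accounts/{ACCOUNT_ID}/AccessToken",
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
).json()

# Retrieve the video's index: the transcript plus the people, keywords,
# and other insights mapped to timestamps in the video.
index = requests.get(
    f"{BASE}/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos/{VIDEO_ID}/Index",
    params={"accessToken": token},
).json()

for line in index["videos"][0]["insights"]["transcript"]:
    print(line["text"])
```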

Scott wraps up this set with a look at Custom Vision Service, which you can use to build sophisticated image-classification models backed by neural networks that recognize objects in photos, identify defective parts rolling off an assembly line, and much more. With more interesting demos (think national landmarks), see how to use the Custom Vision API to create apps that use the models you build and even to train models programmatically.
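
For a flavor of the prediction side, here’s a minimal Python sketch that sends an image URL to a trained Custom Vision model and prints the tags it predicts. The prediction key, project ID, and regional endpoint are placeholders; in practice you’d copy the prediction URL that the Custom Vision portal shows for your project.

```python
# Minimal sketch: classifying an image with a trained Custom Vision model.
# The key, project ID, region, and image URL are placeholders.
import requests

PREDICTION_KEY = "YOUR_PREDICTION_KEY"  # placeholder
PROJECT_ID = "YOUR_PROJECT_ID"          # placeholder
ENDPOINT = ("https://southcentralus.api.cognitive.microsoft.com/"
            f"customvision/v2.0/Prediction/{PROJECT_ID}/url")

def classify(image_url):
    """Return (tag, probability) pairs predicted for the image."""
    response = requests.post(
        ENDPOINT,
        headers={"Prediction-Key": PREDICTION_KEY,
                 "Content-Type": "application/json"},
        json={"Url": image_url},
    )
    response.raise_for_status()
    return [(p["tagName"], p["probability"])
            for p in response.json()["predictions"]]

for tag, prob in classify("https://example.com/landmark.jpg"):
    print(f"{tag}: {prob:.2%}")
```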

Each hour-long course is packed with information, starting with the nuts and bolts: how to get and provision the service or API. From there, Scott explores what it does and why, highlights the main features, and offers helpful demos. He even walks through the code in Visual Studio, explains best practices, and gives you practical design tips, so you can use Cognitive Services to build smarter, richer, and more sophisticated apps.

Follow along with Visual Studio 2017 and an Azure trial account. And watch this space for additional courses in the Cognitive Services series, including one on Project Cuzco.

Did you miss any of the first five? Here’s a quick summary of each one. Check them out!

  • Part 1: Computer Vision API. Get the details on how this API can recognize objects in photos, caption photos, extract text from images, and more. Filter out images with adult content, and create apps that allow photos to be searched using computer-generated keywords. (A minimal sketch of this API call appears after the list.)
  • Part 2: Face API. Detect faces in images and identify "features" of those faces. See how to use it to build intelligent apps that treat faces as just another type of data.
  • Part 3: Emotion API and Text Analytics API. Use these APIs to detect emotion in faces in photos and videos, analyze written communications, such as tweets and e-mails, for sentiment, and extract topics and key phrases from text documents.
  • Part 4: LUIS and QnA Maker. See how LUIS enables developers to build apps that respond to natural-language commands and how, combined with QnA Maker, it can be used to build bots.
  • Part 5: Search API. Find out how to use the Bing Search API to incorporate rich search functionality into your apps. Explore web, image, news, and video search, and enhance your search with Autosuggest.
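
And if you’d like a taste of Part 1 before you watch, here’s a minimal Python sketch of the Computer Vision API call behind photo captioning and adult-content screening. The key, region, API version, and image URL are placeholders.

```python
# Minimal sketch: captioning a photo and screening it for adult content
# with the Computer Vision API. Key, region, and image URL are placeholders.
import requests

SUBSCRIPTION_KEY = "YOUR_COMPUTER_VISION_KEY"  # placeholder
ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"

response = requests.post(
    ENDPOINT,
    params={"visualFeatures": "Description,Adult"},
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
             "Content-Type": "application/json"},
    json={"url": "https://example.com/photo.jpg"},  # placeholder image
)
response.raise_for_status()
analysis = response.json()

print(analysis["description"]["captions"][0]["text"])   # machine caption
print("Adult content:", analysis["adult"]["isAdultContent"])
```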