Feb 5, 2019

What is ML Kit?

ML Kit is a mobile SDK that brings Google’s machine learning expertise to Android and iOS apps in a powerful yet easy-to-use package. Whether you’re new or experienced in machine learning, you can implement the functionality you need in just a few lines of code. There’s no need to have deep knowledge of neural networks or model optimization to get started.

Machine learning gives computers the ability to “learn” through a process which trains a model with a set of inputs that produce known outputs. By feeding a machine learning algorithm a bunch of data, the resulting model is able to make predictions, such as whether or not a cute cat is in a photo. When you don’t have the help of awesome libraries, the machine learning training process takes lots of math and specialized knowledge.

At Google I/O 2018, Google announced a new library, ML Kit, for developers to easily leverage machine learning on mobile. With it, you can now add some common machine learning functionality to your app without necessarily being a machine learning expert!

Mobile machine learning for all skill levels: ML Kit lets you bring powerful machine learning features to your app, whether it’s for Android or iOS, and whether you’re an experienced machine learning developer or just getting started.

Key capabilities

Production-ready for common use cases: ML Kit comes with a set of ready-to-use APIs for common mobile use cases: recognizing text, detecting faces, identifying landmarks, scanning barcodes, and labeling images. Simply pass in data to the ML Kit library and it gives you the information you need.
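As a concrete sketch, here is roughly what "pass in data and get information back" looks like for on-device text recognition, using the class names from the ML Kit for Firebase Android SDK of that era (exact names may differ in later versions):

```java
// Sketch of on-device text recognition with ML Kit for Firebase.
// Assumes the firebase-ml-vision dependency is configured and that
// `bitmap` holds the image to analyze.
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
FirebaseVisionTextRecognizer detector =
        FirebaseVision.getInstance().getOnDeviceTextRecognizer();
detector.processImage(image)
        .addOnSuccessListener(result -> {
            // Each block is a paragraph-like group of recognized text.
            for (FirebaseVisionText.TextBlock block : result.getTextBlocks()) {
                Log.d("MLKit", block.getText());
            }
        })
        .addOnFailureListener(e -> Log.e("MLKit", "Recognition failed", e));
```

Note that the result arrives asynchronously via a Task listener, so the UI thread is never blocked while the model runs.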

On-device or in the cloud: ML Kit’s APIs run on-device or in the cloud. Our on-device APIs can process your data quickly and work even when there’s no network connection. Our cloud-based APIs, on the other hand, leverage the power of Google Cloud Platform’s machine learning technology to give you an even higher level of accuracy.
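Because both variants sit behind the same interface, switching between them is essentially a one-line change. A hedged sketch, again assuming the same-era API:

```java
// On-device: fast, works offline.
FirebaseVisionTextRecognizer onDevice =
        FirebaseVision.getInstance().getOnDeviceTextRecognizer();

// Cloud-based: needs a network connection, but recognizes a much
// wider range of scripts with higher accuracy.
FirebaseVisionTextRecognizer cloud =
        FirebaseVision.getInstance().getCloudTextRecognizer();

// Both expose the same processImage(image) call, so the rest of the
// code stays identical whichever detector you choose.
```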

Deploy custom models: If ML Kit’s APIs don’t cover your use cases, you can always bring your own existing TensorFlow Lite models. Just upload your model to Firebase, and we’ll take care of hosting and serving it to your app. ML Kit acts as an API layer to your custom model, making it simpler to run and use.
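The hosting-and-serving flow can be sketched roughly as below. The class names follow the ML Kit custom-model API of early 2019 and may have changed in later SDK releases, and the model name is a placeholder chosen in the Firebase console — treat all of it as illustrative:

```java
// Sketch: download and run a custom TensorFlow Lite model hosted on
// Firebase. "my_hosted_model" is a hypothetical model name.
FirebaseModelDownloadConditions conditions =
        new FirebaseModelDownloadConditions.Builder()
                .requireWifi()   // only fetch the model over Wi-Fi
                .build();
FirebaseRemoteModel remoteModel =
        new FirebaseRemoteModel.Builder("my_hosted_model")
                .enableModelUpdates(true)
                .setInitialDownloadConditions(conditions)
                .build();
FirebaseModelManager.getInstance().registerRemoteModel(remoteModel);

// ML Kit then acts as the API layer: an interpreter runs the model.
FirebaseModelOptions options = new FirebaseModelOptions.Builder()
        .setRemoteModelName("my_hosted_model")
        .build();
FirebaseModelInterpreter interpreter =
        FirebaseModelInterpreter.getInstance(options);
```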

The advantages of ML Kit

  1. Text recognition
  2. Face detection
  3. Barcode scanning
  4. Image labeling
  5. Landmark recognition

How does it work?

ML Kit makes it easy to apply ML techniques in your apps by bringing Google’s ML technologies, such as the Google Cloud Vision API, Mobile Vision, and TensorFlow Lite, together in a single SDK. Whether you need the power of cloud-based processing, the real-time capabilities of Mobile Vision’s on-device models, or the flexibility of custom TensorFlow Lite models, ML Kit makes it possible with just a few lines of code.
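"A single SDK" is visible in the Gradle setup: one dependency covers the Vision use cases. The version number below is illustrative only — use whatever the Firebase documentation currently lists:

```groovy
dependencies {
    // Pulls in the Vision APIs (text, face, barcode, labeling,
    // landmarks) in one artifact. Version shown is an assumption.
    implementation 'com.google.firebase:firebase-ml-vision:19.0.3'
}
```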

What you will build

You’re going to build an Android app with Firebase ML Kit. Your app will:

  1. Use the ML Kit Text Recognition API to detect text in images.
  2. Use the ML Kit Face Contour API to identify facial features in images.
  3. (Optional) Use the ML Kit Cloud Text Recognition API to expand text recognition capabilities (such as non-Latin alphabets) when the device has internet connectivity.
  4. Learn how to host a custom pre-trained TensorFlow Lite model using Firebase.
  5. Use the ML Kit Custom Model API to download the pre-trained TensorFlow Lite model to your app.
  6. Use the downloaded model to run inference and label images.
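Step 2 above, face contour detection, can be sketched as follows. The names come from the ML Kit for Firebase Android SDK of the time; treat the exact identifiers as assumptions to be checked against the current docs:

```java
// Sketch of ML Kit face contour detection on a bitmap.
FirebaseVisionFaceDetectorOptions options =
        new FirebaseVisionFaceDetectorOptions.Builder()
                .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
                .build();
FirebaseVisionFaceDetector detector =
        FirebaseVision.getInstance().getVisionFaceDetector(options);
detector.detectInImage(FirebaseVisionImage.fromBitmap(bitmap))
        .addOnSuccessListener(faces -> {
            for (FirebaseVisionFace face : faces) {
                // Contour points trace features such as the face oval.
                List<FirebaseVisionPoint> oval =
                        face.getContour(FirebaseVisionFaceContour.FACE).getPoints();
                Log.d("MLKit", "Face oval has " + oval.size() + " points");
            }
        });
```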

What you’ll need

  1. A recent version of Android Studio (v3.0+)
  2. Android Studio Emulator or a physical Android device
  3. The sample code
  4. Basic knowledge of Android development in Java
  5. Basic understanding of machine learning models