
Let's reveal the code!

CoreML and coremltools

The Core ML framework, as Apple's documentation states, enables you to integrate trained machine learning models into your app.

Models should be in the Core ML model format (models with a .mlmodel file extension). This is a new public format for describing models, developed by Apple.

Apple provides ready-to-use Core ML models that have already been converted to the Core ML model format from popular open source trained models.

In one of the WWDC 2017 sessions, Introducing Core ML, there is a demonstration of how to use a Core ML format model in your project.

In another interesting session, Core ML in Depth, there is a walkthrough of how to convert a trained model to the Core ML model format using Core ML Tools, a Python package that supports converting models from these popular training libraries: Keras, Caffe, scikit-learn, libsvm, and XGBoost.

To be able to install Core ML Tools, you need a Python environment with Python version 2.7; otherwise you will get an error stating “No matching distribution found for coremltools”.

I use Anaconda to manage packages and environments.
It comes with conda (a package and environment manager), Python, and over 150 scientific packages and their dependencies.
If you don’t need all of that, you can use Miniconda, a smaller distribution that includes only conda and Python; you can then install any packages you need individually.

On the other hand, we have pip, which is the default package manager for Python libraries.
We may still use pip alongside conda to install packages, because the packages available from conda are focused on data science, while pip is for general use.

But why use environments? Environments let you isolate the packages you use for different projects, so you can have both Python 3 and Python 2 on the same machine.

So after installing Anaconda, you can create an environment with Python 3 installed by typing the following command in the terminal.
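For example, assuming you name the environment “python3” (the name is up to you):

    conda create -n python3 python=3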

and another one for Python version 2.
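Again, assuming the name “python2”:

    conda create -n python2 python=2.7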

You can also list all the environments you have:
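    conda env list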

You will get a list of them, with the current environment marked with an asterisk (root is the default environment).

So before installing coremltools, you need to activate the environment you created with Python 2.
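Assuming the environment is named “python2” as above:

    source activate python2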

Now use pip to install the coremltools package:
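With the Python 2 environment active:

    pip install coremltools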

Now we are ready to use this tool to convert models.

So let’s suppose you want to create an app that can predict emotions from a facial photo. You can try this using the open source Emotion Recognition trained model.
You will begin by converting it to the Core ML model format. To do that, you will need the following files:

  1. A .caffemodel file, which contains the learned weights of the network as it was trained. You can use VGG_S_rgb/EmotiW_VGG_S.caffemodel.
  2. A .prototxt file, which defines the network design or structure. You can use deploy.txt, but you should change its extension to .prototxt.
  3. A labels.txt file, which contains the list of named emotions in the specific order mentioned by the author in the comments.

Now you can create a Python script that will convert those files to a Core ML format model. You can put that script in a file called “conversion.py”.
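Here is a minimal sketch of such a script, assuming the three files listed above sit in the same directory (the file names are taken from that list):

    import coremltools

    # Convert the Caffe model (weights + network definition) to Core ML,
    # treating the "data" input as an image and attaching the class labels.
    coreml_model = coremltools.converters.caffe.convert(
        ('EmotiW_VGG_S.caffemodel', 'deploy.prototxt'),
        image_input_names='data',
        class_labels='labels.txt'
    )

    # Save the converted model to disk.
    coreml_model.save('EmotiW_VGG_S.mlmodel')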

The image_input_names parameter means that we want the model to take an image as input instead of a multi-array.

Then, in Terminal, you run the script:
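    python conversion.py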

You will see a message stating “Starting Conversion from Caffe to CoreML”, and after some time, depending on the model size, you will get the output model file EmotiW_VGG_S.mlmodel.

Now you have the Core ML format model, which you can drag into Xcode and start using.
When you select the model in the Project navigator in Xcode, you can see what kind of parameters this model has under the “Model Evaluation Parameters” section.
As input, it expects a parameter “data” of type Image with 224 width, 224 height, and RGB color space.
As output, you can use the parameter “classLabel” of type String, whose value is the most likely class label.