Deep residual networks, or ResNets for short, introduced the breakthrough idea of identity mappings to enable the training of very deep convolutional neural networks. Don't worry, you don't need to understand exactly what that means in order to follow this guide successfully.
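If you are curious anyway, here is a minimal, purely illustrative sketch of a residual block written with tf.keras. It is our own simplification and not the code you will train below; it assumes the input x already has the same number of channels as filters:

import tensorflow as tf

def residual_block(x, filters):
    shortcut = x  # the identity mapping: the input is passed along unchanged
    y = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(y)
    y = tf.keras.layers.Add()([shortcut, y])  # output = F(x) + x
    return tf.keras.layers.ReLU()(y)

The shortcut lets gradients flow directly through the addition, which is what makes very deep networks trainable.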
In only 5 simple steps you'll train your own ResNet on the CIFAR-10 dataset (60,000 32x32 colour images in 10 classes).
Let's get started!
Step 1: Launch a TensorFlow Docker Container on Genesis Cloud
Just follow the steps that we've outlined here for you.
Step 2: Clone the official TensorFlow Models repository
Our Ubuntu instances already have git installed. If you are missing git you can install it with:
sudo apt-get install git
Using git, we need to clone the TensorFlow Models repository from GitHub to a folder of our choice:
git clone https://github.com/tensorflow/models.git
Step 3: Prepare to run the models
We're running the official models, a collection of models that use TensorFlow's high-level APIs and are well maintained and tested, which makes them perfect for our example.
Before we get started, we need to add the folder of the ResNet model we're using to the Python path. For ResNet that path is "models/official/r1/resnet/":
export PYTHONPATH="$PYTHONPATH:your-path/models/official/r1/resnet/"
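To check that the export worked, you can optionally run this small Python sanity check. The "your-path" placeholder is the same as above and must be replaced with the folder you actually cloned into:

import os

resnet_dir = "your-path/models/official/r1/resnet/"
print(resnet_dir in os.environ.get("PYTHONPATH", ""))  # True if the export above took effect
print(os.path.isfile(os.path.join(resnet_dir, "cifar10_main.py")))  # True if the training script is where we expect it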
We also need to install certain dependencies via pip:
pip install --user -r official/requirements.txt
Step 4: Download the CIFAR-10 data
To download and extract the CIFAR-10 data, we can simply run the following script. The --data_dir flag specifies where the dataset should be downloaded and extracted:
python cifar10_download_and_extract.py --data_dir /path/to/CIFAR-10-data
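If you want to convince yourself that the download worked, you can optionally peek at one record of the binary files. The script stores them in a cifar-10-batches-bin subfolder (see the training command in Step 5), and each record is 1 label byte followed by 3072 pixel bytes, i.e. one 3x32x32 image; the file name data_batch_1.bin and the path below simply mirror the placeholder used above:

import numpy as np

with open("/path/to/CIFAR-10-data/cifar-10-batches-bin/data_batch_1.bin", "rb") as f:
    record = f.read(3073)  # 1 label byte + 32*32*3 pixel bytes
label = record[0]  # class index between 0 and 9
image = np.frombuffer(record[1:], dtype=np.uint8).reshape(3, 32, 32)  # channels first: R, G, B
print(label, image.shape)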
Step 5: Train the model
Training the model is simple: just launch cifar10_main.py and specify the folder where you saved the dataset (see above: /path/to/CIFAR-10-data) and another one where the model parameters should be saved (here: /path/to/model-parameters):
python cifar10_main.py --data_dir /path/to/CIFAR-10-data/cifar-10-batches-bin --model_dir /path/to/model-parameters
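Optionally, once training is underway you can verify from a second terminal that checkpoints are being written to the model directory. This minimal check only uses the placeholder path from the command above:

import tensorflow as tf

print(tf.train.latest_checkpoint("/path/to/model-parameters"))  # prints the newest checkpoint path, or None if none has been written yet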
Once the command is running, you'll see the training of the model progressing, at a speed that depends on the hardware you chose: