Example of keras prediction in 5 python web frameworks

Dmytro Kisil
Dec 14, 2019 · 13 min read

With Flask as the example and starting point

TL;DR: all code is placed in this repo.

Introduction

To check that the API is working properly we will use Postman. If you don't have it on your machine, don't forget to install it.

Flask

This is the starting point: the code is picked up right from the Keras docs and updated to a Tensorflow 2.0 version (using tensorflow.keras instead of keras).

install dependencies:

pip3 install django djangorestframework numpy pillow pyramid sanic tornado flask gevent requests aiohttp tensorflow

create python script:

mkdir flask-keras && cd flask-keras && touch flask-keras-prediction.py

Fill script with the code below.

Note: I suggest using tf.keras instead of keras. If you want to use keras, add threaded=False, because otherwise the app can fail with the error 'thread._local' object has no attribute 'value':

# import the necessary packages
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.applications import imagenet_utils
from PIL import Image
import numpy as np
import flask
import io

# initialize our Flask application and the Keras model
app = flask.Flask(__name__)
model = None

def load_model():
    # load the pre-trained Keras model (here we are using a model
    # pre-trained on ImageNet and provided by Keras, but you can
    # substitute in your own networks just as easily)
    global model
    model = ResNet50(weights="imagenet")

def prepare_image(image, target):
    # if the image mode is not RGB, convert it
    if image.mode != "RGB":
        image = image.convert("RGB")

    # resize the input image and preprocess it
    image = image.resize(target)
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)
    image = imagenet_utils.preprocess_input(image)

    # return the processed image
    return image

@app.route("/predict/", methods=["POST"])
def predict():
    # initialize the data dictionary that will be returned from the
    # view
    data = {"success": False}

    # ensure an image was properly uploaded to our endpoint
    if flask.request.method == "POST":
        if flask.request.files.get("image"):
            # read the image in PIL format
            image = flask.request.files["image"].read()
            image = Image.open(io.BytesIO(image))

            # preprocess the image and prepare it for classification
            image = prepare_image(image, target=(224, 224))

            # classify the input image and then initialize the list
            # of predictions to return to the client
            preds = model.predict(image)
            results = imagenet_utils.decode_predictions(preds)
            data["predictions"] = []

            # loop over the results and add them to the list of
            # returned predictions
            for (imagenetID, label, prob) in results[0]:
                r = {"label": label, "probability": float(prob)}
                data["predictions"].append(r)

            # indicate that the request was a success
            data["success"] = True

    # return the data dictionary as a JSON response
    return flask.jsonify(data)

# if this is the main thread of execution first load the model and
# then start the server
if __name__ == "__main__":
    print(("* Loading Keras model and Flask starting server..."
           "please wait until server has fully started"))
    load_model()
    # add threaded=False if you want to use keras instead of tensorflow.keras
    app.run(host="0.0.0.0", port=8000, threaded=False)

Run:

python3 flask-keras-prediction.py

Results:

Here we provide an image field which contains the image data, sent as form data.

To run Flask: first, initialize the app with app = flask.Flask(__name__); second, call app.run() in the if __name__ == "__main__": part of the code.

In Flask, to get at the image you work with:

flask.request.files[“image”].read()

flask.request.files is a dictionary keyed by the field names received in the POST request. So, in Flask, to get a file's data you need to know the field where that file is located in the request, and that should be enough. In our case the field is image, so the image data sits in flask.request.files["image"]. To receive the raw bytes, call the .read() method. After that you get a really long byte string on which the prediction can be made.
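To see this mechanism in isolation, here is a minimal sketch that swaps the model out for a byte counter and uses Flask's built-in test client, so it runs without a server or Tensorflow (the echo endpoint and field contents are made up for illustration):

```python
import io

import flask

app = flask.Flask(__name__)

@app.route("/predict/", methods=["POST"])
def predict():
    # grab the uploaded file from the "image" field and read its raw bytes
    upload = flask.request.files.get("image")
    if upload is None:
        return flask.jsonify({"success": False})
    raw = upload.read()
    return flask.jsonify({"success": True, "size": len(raw)})

# exercise the endpoint in-process, no running server needed
client = app.test_client()
resp = client.post(
    "/predict/",
    data={"image": (io.BytesIO(b"fake image bytes"), "cat.png")},
)
print(resp.get_json())
```

The same request shape is what Postman (or curl with -F "image=@cat.png") produces, so the real app in this section can be smoke-tested the same way.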

Here is a screenshot from Postman:

Notice that if you are using Tensorflow, the first run will take around 3.0 seconds. Don't worry: try a second run and you get around 234 ms. Why is this so? As far as I understand, on the first run model.predict() performs heavy calculations and creates new data structures. After the first run the model reuses those calculations and data structures without recreating them. A Tensorflow Session (also used as tf.Session()) has similar behaviour.
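A common way to hide that first-run cost from users is to warm the model up right after load_model(), before the server starts accepting requests. A sketch of the idea (warm_up is a hypothetical helper; FakeModel stands in for ResNet50 so the snippet runs without Tensorflow):

```python
import numpy as np

def warm_up(model, shape=(1, 224, 224, 3)):
    # run one dummy prediction so the expensive first call happens
    # at startup instead of on the first real request
    model.predict(np.zeros(shape, dtype="float32"))

# stand-in for the real Keras model, just to demonstrate the call
class FakeModel:
    def __init__(self):
        self.calls = 0

    def predict(self, batch):
        self.calls += 1
        return batch

m = FakeModel()
warm_up(m)
print(m.calls)  # 1
```

In the Flask script above, the real call would be warm_up(model) right after load_model() in the __main__ block.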

Pyramid

create script:

cd .. && mkdir pyramid-keras && cd pyramid-keras && touch pyramid-keras-prediction.py

fill with the code below:

# import the necessary packages
# for predictions
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.applications import imagenet_utils
from PIL import Image
import numpy as np
import io
# for web framework
from wsgiref.simple_server import make_server
from pyramid.config import Configurator


model = None

def load_model():
    # load the pre-trained Keras model (here we are using a model
    # pre-trained on ImageNet and provided by Keras, but you can
    # substitute in your own networks just as easily)
    global model
    model = ResNet50(weights="imagenet")

def prepare_image(image, target):
    # if the image mode is not RGB, convert it
    if image.mode != "RGB":
        image = image.convert("RGB")

    # resize the input image and preprocess it
    image = image.resize(target)
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)
    image = imagenet_utils.preprocess_input(image)

    # return the processed image
    return image


def predict(request):
    # initialize the data dictionary that will be returned from the
    # view
    data = {"success": False}

    # ensure an image was properly uploaded to our endpoint
    if request.method == 'POST':
        if request.POST.get("image", None) is not None:
            # read the image in PIL format
            image = request.POST["image"].file.read()
            image = Image.open(io.BytesIO(image))

            # preprocess the image and prepare it for classification
            image = prepare_image(image, target=(224, 224))

            # classify the input image and then initialize the list
            # of predictions to return to the client
            preds = model.predict(image)
            results = imagenet_utils.decode_predictions(preds)
            data["predictions"] = []
            # loop over the results and add them to the list of
            # returned predictions
            for (imagenetID, label, prob) in results[0]:
                r = {"label": label, "probability": float(prob)}
                data["predictions"].append(r)

            # indicate that the request was a success
            data["success"] = True

    return data

if __name__ == '__main__':
    print(("* Loading Keras model and Pyramid starting server..."
           "please wait until server has fully started"))
    load_model()
    with Configurator() as config:
        config.add_route('predict', '/predict/')
        config.add_view(predict, route_name='predict', renderer='json')
        app = config.make_wsgi_app()
    server = make_server('0.0.0.0', 8000, app)
    print("Pyramid is up!")
    server.serve_forever()

Run:

python3 pyramid-keras-prediction.py

Difference:

To run it you first create the app: app = config.make_wsgi_app(); then make a server: server = make_server('0.0.0.0', 8000, app); and start it by calling server.serve_forever(). Routes and views are registered on the config. Routes tell Pyramid which URL to expose for the endpoint ('/predict/') and give it a name ('predict'). Views are the functions where you specify the required actions (what to do when a POST request arrives, for example). In add_view the first argument is the function (predict) that operates on the data, and route_name tells Pyramid which route triggers the function (route_name='predict'). Pyramid also needs to know how to serialize the function's return value, so renderer='json' is set.
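Under the hood, config.make_wsgi_app() returns a plain WSGI callable, and make_server() will serve any such callable. That contract can be sketched with the standard library alone (a hypothetical minimal app, no Pyramid involved):

```python
import json
from wsgiref.util import setup_testing_defaults

# the same shape of object that config.make_wsgi_app() returns:
# a callable taking (environ, start_response) and yielding body bytes
def app(environ, start_response):
    body = json.dumps({"success": True}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

# invoke the app directly, the way a WSGI server would
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status

result = b"".join(app(environ, start_response))
print(captured["status"], result)
```

This is why the Pyramid script can use wsgiref's make_server unchanged: the framework only builds the callable, the serving loop is generic WSGI.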

For reading image data use:

request.POST["image"].file.read()

Inside, request.POST is MultiDict([('image', FieldStorage('image', 'Screenshot 2019-12-04 22:33:49.png'))]). Again, you see here the field 'image' and the name of the image. To access the data use .file, and then .read() to get the bytes.

Sanic

create script:

cd .. && mkdir sanic-keras && cd sanic-keras && touch sanic-keras-prediction.py

fill with the code below:

# import the necessary packages
# for predictions
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.applications import imagenet_utils
from PIL import Image
import numpy as np
import io
# for web framework
from sanic import Sanic
from sanic.response import json

app = Sanic(__name__)

model = None

def load_model():
    # load the pre-trained Keras model (here we are using a model
    # pre-trained on ImageNet and provided by Keras, but you can
    # substitute in your own networks just as easily)
    global model
    model = ResNet50(weights="imagenet")

def prepare_image(image, target):
    # if the image mode is not RGB, convert it
    if image.mode != "RGB":
        image = image.convert("RGB")

    # resize the input image and preprocess it
    image = image.resize(target)
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)
    image = imagenet_utils.preprocess_input(image)

    # return the processed image
    return image


@app.route("/predict", methods=["POST"])
async def predict(request):
    # initialize the data dictionary that will be returned from the
    # view
    data = {"success": False}

    # ensure an image was properly uploaded to our endpoint
    if request.method == 'POST':
        if request.files.get("image"):
            # read the image in PIL format; request.files["image"] looks like
            # [File(type='', body=b'123', name='screenshot.jpg')]
            image = request.files["image"][0].body
            image = Image.open(io.BytesIO(image))

            # preprocess the image and prepare it for classification
            image = prepare_image(image, target=(224, 224))

            # classify the input image and then initialize the list
            # of predictions to return to the client
            preds = model.predict(image)
            results = imagenet_utils.decode_predictions(preds)
            data["predictions"] = []
            # loop over the results and add them to the list of
            # returned predictions
            for (imagenetID, label, prob) in results[0]:
                r = {"label": label, "probability": float(prob)}
                data["predictions"].append(r)

            # indicate that the request was a success
            data["success"] = True

    return json(data)


if __name__ == '__main__':
    print(("* Loading Keras model and Sanic starting server..."
           "please wait until server has fully started"))
    load_model()
    app.run(host='0.0.0.0', port=8000)

Run:

python3 sanic-keras-prediction.py

Difference:

To run a server: first initialize the app with app = Sanic(__name__), then run it simply with app.run(). This pattern looks really similar to Flask. The important difference is the decorator @app.route("/predict", methods=["POST"]) before the predict function: it tells Sanic to route the URL "/predict" to that function, and that the function should be used only when the server receives a POST request.

To access image data:

request.files["image"][0].body

request.files, as before, is a dict keyed by the field names of the request. request.files['image'] is [File(type='', body=b'123', name='screenshot.jpg')]. This is a list of File tuples, from which you can get the type, body and name. So you could add print(request.files['image'][0].name) and Sanic will print the name of your image. [0] is needed to get the first item of the list. Importantly, the .read() method is not needed here, because Sanic already gives you the bytes in the body attribute.
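The File objects Sanic hands you behave like named tuples, so the access pattern can be illustrated with a stand-in built from collections.namedtuple (the bytes and filename below are made up):

```python
from collections import namedtuple

# stand-in for the File tuples Sanic puts in request.files
File = namedtuple("File", ["type", "body", "name"])

files = {
    "image": [File(type="image/png", body=b"raw image bytes", name="screenshot.jpg")]
}

upload = files["image"][0]   # [0] gets inside the list
print(upload.name)           # the uploaded filename
print(len(upload.body))      # no .read() needed, body is already bytes
```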

Tornado

create script:

cd .. && mkdir tornado-keras && cd tornado-keras && touch tornado-keras-prediction.py

fill with the code below:

# import the necessary packages
# for predictions
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.applications import imagenet_utils
from PIL import Image
import numpy as np
# for web framework
from tornado.web import Application, RequestHandler
from tornado.ioloop import IOLoop
import io

model = None

def load_model():
    # load the pre-trained Keras model (here we are using a model
    # pre-trained on ImageNet and provided by Keras, but you can
    # substitute in your own networks just as easily)
    global model
    model = ResNet50(weights="imagenet")

def prepare_image(image, target):
    # if the image mode is not RGB, convert it
    if image.mode != "RGB":
        image = image.convert("RGB")

    # resize the input image and preprocess it
    image = image.resize(target)
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)
    image = imagenet_utils.preprocess_input(image)

    # return the processed image
    return image


class Predict(RequestHandler):
    def post(self):
        # initialize the data dictionary that will be returned from the
        # view
        data = {"success": False}

        # ensure an image was properly uploaded to our endpoint
        if self.request.files.get("image"):
            # read the image in PIL format; no .read() here,
            # [0] takes the first uploaded file
            image = self.request.files["image"][0].body
            image = Image.open(io.BytesIO(image))

            # preprocess the image and prepare it for classification
            image = prepare_image(image, target=(224, 224))

            # classify the input image and then initialize the list
            # of predictions to return to the client
            preds = model.predict(image)
            results = imagenet_utils.decode_predictions(preds)
            data["predictions"] = []
            # loop over the results and add them to the list of
            # returned predictions
            for (imagenetID, label, prob) in results[0]:
                r = {"label": label, "probability": float(prob)}
                data["predictions"].append(r)

            # indicate that the request was a success
            data["success"] = True
        self.write(data)


def make_app():
    urls = [
        ("/predict/", Predict),
    ]
    return Application(urls, debug=True)


if __name__ == '__main__':
    app = make_app()
    app.listen(8000)
    print("Load the model")
    load_model()
    print("Tornado server is up and listening on port 8000")
    IOLoop.instance().start()

Run:

python3 tornado-keras-prediction.py

Difference:

To run the Tornado server: build the app with make_app(); bind it to a port with app.listen(8000); and start the server with IOLoop.instance().start(). Inside make_app() Tornado maps URLs to handlers. Note that the POST method is handled by the post method inside the Predict class: in Tornado a handler is a class derived from RequestHandler, not a function. In my opinion, this setup looks harder to understand than the previous frameworks. Let's look at how to get the image data in Tornado:

self.request.files["image"][0].body

self.request.files["image"] is a list with one item, represented as a dict with the keys 'filename', 'body' and 'content_type'. So you can check the type of the image with .content_type (getting something like 'image/png') and the filename with .filename.

This way is very close to Sanic (self is used because the handler is a class, not a function). And you don't need to call .read(): Tornado has already prepared the image bytes for you.
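Since .content_type is available, one practical use is rejecting non-image uploads before running the model. A small sketch (is_image is a hypothetical helper; the upload dicts below mimic the keys Tornado exposes):

```python
def is_image(upload):
    # accept only uploads whose declared content type is image/*
    return upload.get("content_type", "").startswith("image/")

png = {"filename": "shot.png", "body": b"...", "content_type": "image/png"}
txt = {"filename": "notes.txt", "body": b"...", "content_type": "text/plain"}

print(is_image(png))  # True
print(is_image(txt))  # False
```

In the handler, this check would sit right after self.request.files["image"][0] is fetched, returning an error response for non-images instead of feeding garbage to prepare_image().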

Aiohttp

create script:

cd .. && mkdir aiohttp-keras && cd aiohttp-keras && touch aiohttp-keras-prediction.py

Fill with the code below:

# import the necessary packages
# for predictions
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.applications import imagenet_utils
from PIL import Image
import numpy as np
import io
# for web framework
from aiohttp import web, MultipartReader, hdrs

model = None

def load_model():
    # load the pre-trained Keras model (here we are using a model
    # pre-trained on ImageNet and provided by Keras, but you can
    # substitute in your own networks just as easily)
    global model
    model = ResNet50(weights="imagenet")
    return model

def prepare_image(image, target):
    # if the image mode is not RGB, convert it
    if image.mode != "RGB":
        image = image.convert("RGB")

    # resize the input image and preprocess it
    image = image.resize(target)
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)
    image = imagenet_utils.preprocess_input(image)

    # return the processed image
    return image

routes = web.RouteTableDef()

@routes.post("/predict/")
async def post_request(request):
    # initialize the data dictionary that will be returned from the
    # view
    data = {"success": False}

    reader = MultipartReader.from_response(request)
    while True:
        part = await reader.next()
        if part is None:
            break
        # skip JSON metadata parts, keep the file part
        if part.headers[hdrs.CONTENT_TYPE] == 'application/json':
            metadata = await part.json()
            continue
        filedata = await part.read(decode=False)

        # read the image in PIL format
        image = Image.open(io.BytesIO(filedata))

        # preprocess the image and prepare it for classification
        image = prepare_image(image, target=(224, 224))

        # classify the input image and then initialize the list
        # of predictions to return to the client
        preds = model.predict(image)
        results = imagenet_utils.decode_predictions(preds)
        data["predictions"] = []
        # loop over the results and add them to the list of
        # returned predictions
        for (imagenetID, label, prob) in results[0]:
            r = {"label": label, "probability": float(prob)}
            data["predictions"].append(r)

        # indicate that the request was a success
        data["success"] = True

    return web.json_response(data)

if __name__ == '__main__':
    print("Load model")
    model = load_model()
    app = web.Application()
    app.add_routes(routes)
    print("Aiohttp is up!")
    web.run_app(app, host="localhost", port=8000)

Run:

python3 aiohttp-keras-prediction.py

Difference:

To run the server: create an app with app = web.Application(); initiate the routes and attach them with app.add_routes(routes); set the decorator that specifies the POST method and the URL to route (@routes.post('/predict/')); don't forget to add async before the handler function; and run the server with web.run_app().

To receive the image data, create a MultipartReader and iterate over its parts: parts whose content type is 'application/json' are skipped as metadata, and the file part is read with .read(decode=False), which gives you the raw bytes.
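The control flow of that reader loop can be sketched against stand-in parts (pure asyncio, no aiohttp; FakePart and FakeReader are made-up classes mimicking the next()/read() interface of MultipartReader):

```python
import asyncio

class FakePart:
    # mimics one part of a multipart body
    def __init__(self, data):
        self._data = data

    async def read(self, decode=False):
        return self._data

class FakeReader:
    # next() yields each part in turn, then None, like MultipartReader
    def __init__(self, parts):
        self._parts = list(parts)

    async def next(self):
        return self._parts.pop(0) if self._parts else None

async def collect(reader):
    chunks = []
    while True:
        part = await reader.next()
        if part is None:  # no more parts: stop, like the break in the handler
            break
        chunks.append(await part.read(decode=False))
    return chunks

out = asyncio.run(collect(FakeReader([FakePart(b"image-bytes")])))
print(out)  # [b'image-bytes']
```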

Django

start project django_keras and app keras_prediction:

cd .. && django-admin startproject django_keras && cd django_keras && python3 manage.py startapp keras_prediction

change keras_prediction/views.py:

# import packages for framework part
from rest_framework.views import APIView
from django.conf import settings
from django.http import JsonResponse
# import the necessary packages for prediction
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.applications import imagenet_utils
from PIL import Image
import numpy as np
import io


class Predict(APIView):

    def post(self, request, format=None):
        data = {"success": False}

        def prepare_image(image, target):
            # if the image mode is not RGB, convert it
            if image.mode != "RGB":
                image = image.convert("RGB")

            # resize the input image and preprocess it
            image = image.resize(target)
            image = img_to_array(image)
            image = np.expand_dims(image, axis=0)
            image = imagenet_utils.preprocess_input(image)  # ResNet
            # return the processed image
            return image

        # ensure an image was properly uploaded to our endpoint
        if request.FILES.get("image"):
            # read the image in PIL format
            image = request.FILES["image"].read()
            image = Image.open(io.BytesIO(image))
            # preprocess the image and prepare it for classification
            image = prepare_image(image, target=(224, 224))  # for ResNet
            # use the already loaded model from settings
            model = settings.MODEL
            preds = model.predict(image)
            # after prediction
            results = imagenet_utils.decode_predictions(preds)
            data["predictions"] = []
            # loop over the results and add them to the list of
            # returned predictions
            for (imagenetID, label, prob) in results[0]:
                r = {"label": label, "probability": float(prob)}
                data["predictions"].append(r)

            # indicate that the request was a success
            data["success"] = True

        return JsonResponse(data)

add a load_model function in django_keras/settings.py:

from keras.applications import ResNet50

def load_model():
    # load the pre-trained Keras model (here we are using a model
    # pre-trained on ImageNet and provided by Keras, but you can
    # substitute in your own networks just as easily)
    global model
    model = ResNet50(weights="imagenet")
    # required when using keras (not tensorflow.keras) in a threaded server
    model._make_predict_function()
    return model

# Load our model when the server starts
MODEL = load_model()

add keras_prediction and rest_framework to INSTALLED_APPS in django_keras/settings.py:

# Application definition

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',  # new
    'keras_prediction',  # new
]

create urls.py in the keras_prediction folder:

touch keras_prediction/urls.py

add the URL for the view in keras_prediction/urls.py:

from django.urls import path

from .views import Predict

urlpatterns = [
    path('', Predict.as_view(), name='prediction'),
]

add the URL for the keras_prediction app in django_keras/urls.py:

from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('predict/', include('keras_prediction.urls')),
]

Run the server (if you use keras instead of tensorflow.keras, pass --nothreading and --noreload to avoid https://github.com/keras-team/keras/issues/13353):

python3 manage.py runserver --nothreading --noreload

Difference:

Django is a heavy, production-ready framework, and doing simple things with it for the first time may look a little overwhelming. The server is run via manage.py; the URLs and routes for the predict view live in urls.py and views.py. To load the model when the server starts, put the model loading in settings.py. Then you can use the already loaded model via settings.MODEL in views.py.
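The settings.MODEL trick is an instance of the load-once pattern: create the expensive object a single time at startup and hand every request the same instance. The idea in isolation (get_model is a hypothetical helper; object() stands in for ResNet50):

```python
_model = None

def get_model():
    # create the heavy object on first use, then reuse the same instance
    global _model
    if _model is None:
        _model = object()  # stand-in for ResNet50(weights="imagenet")
    return _model

print(get_model() is get_model())  # True
```

Loading in settings.py achieves the same thing eagerly: the module is imported once per process, so MODEL = load_model() runs exactly once.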

To get the image data call:

request.FILES["image"].read()

This looks similar to the Flask way. In Django you need to call the .read() method to get the bytes of the image for further prediction.

Conclusion

Sanic and Tornado are almost similar, and Flask is not so different either. Django differs because of its complexity (it's a production-ready framework, so it can't be as simple as Flask, which suits pet projects perfectly). And aiohttp differs because of its asynchronous way of doing things.

Wow! I hope you can now see that, for a task as simple as receiving a request and returning JSON, the different Python web frameworks are not so different at all. They all use common techniques to handle the same events in similar ways. So you can build a proper understanding of the technique once, and then, with some small changes, apply that knowledge to get the desired result with a different tool.
