Deploy an ML model using APIs, locally and on AWS
What are APIs?
In very simple terms, an API call is a function call. Put another way, you are calling a function that most likely lives on a different box, which is nothing but a server. The output dataset comes back in either JSON or XML format.
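To make the "function call that returns JSON" idea concrete, here is a toy local stand-in for such an API. The function name and the response fields are illustrative only, not from the real code:

```python
import json

# A toy "API": a function that takes an input and returns a JSON
# response, just like a remote prediction endpoint would.
def predict_api(review_text):
    # Trivial rule in place of a real model, for illustration only.
    label = "positive" if "good" in review_text.lower() else "negative"
    return json.dumps({"review": review_text, "prediction": label})

print(predict_api("This food is really good!"))
```

The only difference with a real API is that the call travels over the network to a server instead of staying in the same process.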
So there are two parts: one section of this blog tells you how to deploy your model on your own box, and the second tells you how to deploy it on AWS.
Steps used:
1. Build a model on your local box (Amazon Fine Food Reviews) and store the model and other key model-related variables in .pkl files.
2. Launch a micro instance on AWS.
3. Connect to the AWS box [ssh]
4. Move the files to an AWS EC2 instance/box [scp]
5. Install all packages needed on the AWS box.
6. Run app.py on the AWS box.
7. Check the output in the browser.
Software needed:
1. Anaconda:
a. Windows 64 bit: https://repo.continuum.io/archive/Anaconda3-5.2.0-Windows-x86_64.exe
b. Windows 32 bit: https://repo.continuum.io/archive/Anaconda3-5.2.0-Windows-x86.exe
c. Mac : https://repo.continuum.io/archive/Anaconda3-5.2.0-MacOSX-x86_64.sh
d. Linux 64 bit: https://repo.continuum.io/archive/Anaconda3-5.2.0-Linux-x86_64.sh
e. Linux 32 bit: https://repo.continuum.io/archive/Anaconda3-5.2.0-Linux-x86.sh
f. Check the previous Archives of Anaconda: https://repo.continuum.io/archive/
Packages needed:
pip3, pandas, NumPy, scikit-learn (sklearn), beautifulsoup4, lxml, and Flask (re ships with Python's standard library, so it needs no installation)
You can install all of these packages with a single pip command, as shown here: https://stackoverflow.com/a/15593865/4084039
Dataset: we are using the Amazon Fine Food Reviews dataset.
[1] Code on the local box (local machine)
Open the Anaconda Prompt and follow the steps below. The code used is available at the GitHub link, along with a cropped screenshot of the directory.
1. Change to the code directory (cd "path of directory")
2. Run "python3 app.py"
3. Browser: http://localhost:8080/index
A text box will appear in your browser; paste the review text inside the box, and you will get the prediction in your local browser:
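If you prefer the command line to the browser form, you can also hit the API directly with curl. This sketch assumes the form field is named review_text (as in index.html) and that the predict function is mounted at /predict; check app.py on GitHub for the exact route:

```shell
# POST a review to the locally running Flask app
curl -X POST -d "review_text=This product tasted great" http://localhost:8080/predict
```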
→ Now let us explore each file in the directory and what it does at the backend.
First: model.py (please go through the code available at my GitHub link)
Process: it loads data from SQLite (database.sqlite) > trains the model > stores the model in .pkl files (model.pkl, count_vect.pkl)
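The training-and-pickling step can be sketched as follows. This is a minimal stand-in, not the actual model.py: it uses a tiny in-memory sample instead of database.sqlite, and assumes a bag-of-words model with binary sentiment labels; only the output file names (model.pkl, count_vect.pkl) follow the post:

```python
import pickle
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# In the real script the reviews come from database.sqlite; here we use
# a tiny in-memory sample so the sketch is self-contained.
reviews = ["great taste, loved it", "terrible, stale and bland",
           "good value, will buy again", "awful smell, waste of money"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Vectorize the text and train a simple classifier
count_vect = CountVectorizer()
X = count_vect.fit_transform(reviews)
model = LogisticRegression()
model.fit(X, labels)

# Persist the trained artifacts under the file names the post uses
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
with open("count_vect.pkl", "wb") as f:
    pickle.dump(count_vect, f)
```

Pickling both the model and the vectorizer matters: at prediction time the incoming text must be transformed with the exact same vocabulary the model was trained on.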
Second: index.html
Whatever you paste into the review_text box is sent via POST as input to the predict function after you hit the submit button.
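A minimal form of this kind might look like the sketch below. The field name review_text comes from the post; the form action /predict is an assumption, so check the real index.html on GitHub:

```html
<!DOCTYPE html>
<html>
  <body>
    <!-- The pasted review is POSTed to the predict function -->
    <form action="/predict" method="POST">
      <textarea name="review_text" rows="6" cols="60"></textarea>
      <input type="submit" value="Submit">
    </form>
  </body>
</html>
```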
Third: app.py
It uses Flask to build the API. It basically loads the models created in model.py, imports some preprocessing functions, and takes the input from the HTML form to predict the output using the predict function. (Please go through the code.)
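A hedged sketch of what app.py does is below. The route names and the fallback rule are illustrative; the real code on GitHub is the reference. The sketch also falls back to a trivial rule when the .pkl files are absent, purely so it runs standalone:

```python
import os
import pickle
from flask import Flask, request, jsonify, render_template

app = Flask(__name__)

# Load the pickled artifacts once at import time (as globals), not on
# every request, so repeated hits do not reload them from disk.
if os.path.exists("model.pkl") and os.path.exists("count_vect.pkl"):
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)
    with open("count_vect.pkl", "rb") as f:
        count_vect = pickle.load(f)
else:
    model = count_vect = None  # fallback so this sketch runs standalone

@app.route("/index")
def index():
    # Serves the review_text form described above
    return render_template("index.html")

@app.route("/predict", methods=["POST"])
def predict():
    text = request.form.get("review_text", "")
    if model is None:
        # Illustrative fallback rule, used only when no .pkl files exist
        pred = 1 if "good" in text.lower() else 0
    else:
        pred = model.predict(count_vect.transform([text]))[0]
    return jsonify({"prediction": "positive" if pred == 1 else "negative"})

if __name__ == "__main__":
    # 0.0.0.0 so the same script also answers when deployed on the AWS box
    app.run(host="0.0.0.0", port=8080)
```

Binding to 0.0.0.0 rather than 127.0.0.1 is what lets the same script serve requests from outside the box once it is on EC2.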
Congratulations: you will see the output in your local web browser via the HTML page.
[2] Launch a micro instance on AWS.
Here you basically want to lease a box on AWS for computing. The leased box is an Elastic Compute Cloud (EC2) instance, and EC2 lets us lease more than one box for computing.
Here we follow all the steps from the previous case, but deploy on the AWS server.
Follow these steps to create an instance:
1. Create an AWS account https://aws.amazon.com ,
https://portal.aws.amazon.com/billing/signup#/start
2. Login: https://console.aws.amazon.com
After login:
Now launch the EC2 instance
3. Choose the Ubuntu free tier
Amazon provides a free-tier Ubuntu machine for experiments. This is only for the experiment; for a real deployment you would select a more powerful machine and pay for it. Click on Select.
4. Choose t2.micro (free tier eligible)
Check the t2.micro instance for the experiment; select a box based on your deployment requirements. Click on Review and Launch.
Now click on launch
Key pair: gives you an encrypted key for safe login to the remote instance (box) on AWS. Download the key pair and launch the instance. Save the key pair ".pem" file in a location you will remember; it will be used to connect your local box with the AWS box.
You will see this screen once you have successfully launched the EC2 instance; now we need to launch the Flask API on it.
5. Create a security group
Select "Network & Security" -> "Security Groups", then click "Create Security Group". A security group defines who can access the instance (box).
Leave everything at the defaults and click Create. Make sure an inbound rule allows TCP traffic on port 8080, so the Flask app is reachable from outside.
6. Then attach this security group to the instance's network interface.
[3] Connect to the AWS box
Here you need to connect your local box with the AWS box using the ssh command from your local cmd. Open your system command prompt (cmd) and change directory to where the .pem file is located.
Please follow the example command in the image below; you can get this command by right-clicking the created instance.
After hitting Enter, you are logged in remotely to the AWS box at a Linux command prompt. Now you have to securely transfer your files to the AWS box.
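The connect command typically looks like the sketch below, using the example instance host name from this post. On Linux/Mac the key's permissions must be restricted first, or ssh will refuse to use it:

```shell
# Restrict key permissions (required on Linux/Mac; not needed in Windows cmd)
chmod 400 for_live.pem

# Log in to the AWS box as the default "ubuntu" user of an Ubuntu AMI
ssh -i "for_live.pem" ubuntu@ec2-13-59-191-237.us-east-2.compute.amazonaws.com
```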
[4] Move the files to an AWS EC2 instance/box [scp]
Open another cmd on your local system and run the following command to copy the files:
C:\Users\Asus\OneDrive\Desktop> scp -r -i "for_live.pem" ./AFR ubuntu@ec2-13-59-191-237.us-east-2.compute.amazonaws.com:~/
[5] Install all packages needed on the AWS box.
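On the AWS box, the packages listed earlier can be installed roughly like this. This is a sketch assuming the free-tier Ubuntu AMI, which ships with python3 but not with pip3:

```shell
# Install pip for Python 3 on the Ubuntu box
sudo apt-get update
sudo apt-get install -y python3-pip

# Install the packages the app needs (re is in the standard library)
pip3 install pandas numpy scikit-learn beautifulsoup4 lxml flask
```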
[6] Run app.py on the AWS box.
We don't need database.sqlite and model.py on the AWS box, because they are only used to train the model. But keep any datasets or dictionaries that the model needs for prediction.
Since model.pkl and count_vect.pkl are used on every request, it is better to load them once as global variables, to avoid reloading them on each hit.
https://github.com/ranasingh-gkp/Deployment_Models_LOCAL_AWS/blob/master/AFR/app.py
It's now running on the AWS box.
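Starting the API on the AWS box looks like this (a sketch, assuming the files were copied into ~/AFR as in the scp command above):

```shell
cd ~/AFR
python3 app.py

# Or, to keep the app running after you close the ssh session:
nohup python3 app.py &
```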
[7] Check the output in the browser.
Open http://ec2-13-59-191-237.us-east-2.compute.amazonaws.com:8080/index in your browser, paste a review, and you should see the same prediction you got locally.
===============end=====================
Reference:
1. Code used in the above case
2. Images used from Google Images.
3. AWS documentation