Interest in generative AI has surged recently, and a variety of services now make it easy to experiment.
Still, students and industry practitioners (developers and planners alike) often worry about implementing even a simple service:
How do I build it? Where do I deploy it? How much will it cost? Will it run stably?
In this post, I will walk through creating a simple sample and configuring a free API server for anyone with these concerns.
We will write an application that exposes OpenAI's ChatGPT service in API form.
To provide it as an API, we will build an API server using Python FastAPI.
The finished program is pushed to GitHub and runs on a "free server".
The knowledge required is as follows:
- Understanding Python FastAPI
- Understanding GitHub
- Understanding OpenAI services
The entire process proceeds through the following steps.
1. Create ChatGPT Call App
2. Create API Server (call ChatGPT App)
3. Github upload & integration
4. Create Server (Python free server)
5. Test
1. Create ChatGPT Call App
In order to call OpenAI's service, you must register an account with OpenAI in advance and obtain a key for calling the API.
This part will not be covered here.
Now let's configure your Python development environment. I recommend creating a virtualenv environment and working locally.
Development environments are provided by various tools, such as PyCharm and VS Code.
If you are unsure, a quick web search will point you to a setup guide.
Once the Python development environment is configured, install the library for calling the OpenAI service.
pip install openai
Next, create a .env file and register the OpenAI service key you were issued.
This file can cause problems if it is leaked, so it must be managed with care.
For example, even when managing your code on GitHub, you must make sure the .env file is never committed.
[.env]
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxx
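To keep the key out of version control, add the .env file to a .gitignore before pushing to GitHub. A minimal sketch (extend it with whatever else your project should exclude):

```
# .gitignore
.env
__pycache__/
venv/
```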
[ getChatGPT.py ]
Below is sample code for calling the ChatGPT service.
Modify it to suit your needs.
import openai
from dotenv import load_dotenv

load_dotenv()  # reads OPENAI_API_KEY from .env
client = openai.OpenAI()

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message.content.strip()
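The messages list above carries a single user turn, but the chat completions API accepts full conversation history. As a sketch of how you might extend this (build_messages is a hypothetical helper, not part of OpenAI's library):

```python
# Hypothetical helper: assemble a multi-turn message list for the chat API.
# The "system" role sets overall behavior; "user" and "assistant" alternate.
def build_messages(history, prompt, system="You are a helpful assistant."):
    messages = [{"role": "system", "content": system}]
    for user_msg, assistant_msg in history:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    messages.append({"role": "user", "content": prompt})
    return messages

msgs = build_messages([("Hi", "Hello! How can I help?")], "Tell me a joke.")
print([m["role"] for m in msgs])  # ['system', 'user', 'assistant', 'user']
```

Passing a list built this way as the `messages` argument lets the model see earlier turns when answering the new prompt.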
2. Create API Server (call ChatGPT App)
Write code to expose the get_completion function from the previous step as an external API.
There are various ways to provide API.
- Flask: Flask is one of the most popular lightweight web frameworks in Python. It is used for a variety of purposes, from simple RESTful APIs to complex web applications.
- Django REST Framework: Django is one of the most famous web frameworks in Python, and the Django REST Framework is a powerful tool built on top of Django, making it easy to build complex APIs.
- FastAPI: FastAPI is a modern, high-performance web framework that has recently gained popularity, based on Python 3.6+. It supports asynchronous programming and is particularly optimized for API development.
- Tornado: Tornado is a web framework and asynchronous networking library that supports non-blocking network I/O, capable of handling thousands of concurrent users.
- Bottle: Bottle is a lightweight web framework similar to Flask. It is suitable for quickly developing small web applications.
- Falcon: Falcon is another Python framework designed for high-performance API development. It is suitable for large-scale applications and microservices.
The example here uses FastAPI.
Install the Python libraries:
pip install fastapi uvicorn
[app.py]
I have added POST and GET endpoints for testing the API service using FastAPI.
from fastapi import FastAPI
from getChatGPT import get_completion

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: str = None):
    return {"item_id": item_id, "q": q}

@app.post("/getchatgpt")
def process_text(text: str):
    result = get_completion(text)
    return {"result": result}
[ Run app.py ]
uvicorn app:app --reload
If you run the above command, the API server starts and the connection URL is displayed in the log.
With the default settings, that will be http://127.0.0.1:8000.
While the server is running, a web screen for testing is also provided:
FastAPI serves an interactive test environment for the developed APIs at the /docs path.
http://127.0.0.1:8000/docs
You now have an API that calls ChatGPT via the /getchatgpt endpoint.
It is written very simply, and you can freely expand it to match your imagination and needs.
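One subtlety worth knowing: because process_text declares a bare `text: str` parameter, FastAPI reads it from the query string rather than the request body, so a client must append the text to the URL. Building such a request URL with only the standard library looks like this:

```python
from urllib.parse import urlencode

# The /getchatgpt endpoint takes its input as a query parameter,
# so the prompt text must be URL-encoded into the request URL.
base = "http://127.0.0.1:8000/getchatgpt"
url = f"{base}?{urlencode({'text': 'Hello, ChatGPT!'})}"
print(url)  # http://127.0.0.1:8000/getchatgpt?text=Hello%2C+ChatGPT%21
```

A POST request to this URL (with an empty body) reaches the endpoint; curl or the /docs page behave the same way.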
3. Github upload & integration
Register all the code you have written on GitHub.
If you are using PyCharm or VS Code, refer to how to integrate with GitHub in your development environment.
This will not be covered in detail here.
If you find it difficult to integrate GitHub with your development tool, there is also an option to manually upload all the code written in your development tool directly to GitHub (excluding .env files).
However, I do not particularly recommend this method.
Even if it's challenging at first, proceed with the GitHub integration setup.
4. Create Server (Python free server)
There are various free servers available from external providers.
In addition to the list below, there are many other services available. For example, services like Azure, GCP, etc., also offer partially free tiers.
I use Render.com in this case.
- PythonAnywhere: A Python-specific hosting service that allows you to host web apps, scripts, Jupyter notebooks, and more. It is beginner-friendly and offers a free account option.
- Glitch: A platform for creating and hosting web applications. It supports several languages including Python, and provides an environment that is particularly friendly for beginners and students.
- Vercel: Primarily a platform for static sites and Jamstack applications, but also supports serverless functions including Python. It offers strong integration features with GitHub.
- Netlify: Supports static sites and serverless backends, and allows you to set up automatic deployment in conjunction with GitHub. You can run serverless functions using Python.
- Repl.it: An online IDE that supports multiple programming languages. You can develop and immediately host web applications using Python.
- Render: Render is a cloud hosting service that supports a variety of development stacks. It supports web applications, backend services, and more, including Python applications.
First, sign up at render.com.
I will not provide separate explanations for this part. :)
(Note that this site has no affiliation with me. It was arbitrarily chosen as an example for deploying development code.)
If you have registered, you can now proceed with deploying your server.
The server deployment will be conducted by integrating with GitHub.
- First, before deploying, make sure the requirements.txt file in your project is properly defined.
All libraries required to run the Python program must be listed there.
[ requirements.txt ]
openai
python-dotenv
fastapi
uvicorn
Now, let's proceed with the actual process of registering the server.
- Log in to the render.com site and select the 'New' button at the top.
- Choose 'Web Service'.
- For server configuration, select 'Build and deploy from a Git repository'.
- Choose the Git repository to integrate.
- Enter a 'Name'. Input the name you want to be displayed.
- Select a 'Region'. This is the location of the server.
- Choose 'master' for the Branch, but it can be changed if necessary.
- Select 'python3' as the Runtime.
- Choose 'Free' for the Instance Type.
- Complete the process by selecting 'Create Web Service'.
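Render also asks for Build and Start commands during this setup. Assuming Render's usual convention of supplying the listening port through the PORT environment variable, commands along these lines should work (treat them as a sketch to adapt):

```
# Build Command
pip install -r requirements.txt

# Start Command
uvicorn app:app --host 0.0.0.0 --port $PORT
```

Binding to 0.0.0.0 matters here: a server listening only on 127.0.0.1 would not be reachable from outside the container.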
Now, all the deployment processes are complete.
The server will install the necessary libraries based on the information you registered, deploy the source, and start running.
5. Test
The address of the deployed service follows a fixed pattern based on the name you entered.
https://api-name.onrender.com/
All preparations are now complete.
Now, proceed with testing in the same way you did locally.
Although the process looks lengthy when written out, the amount of code is small, and with basic knowledge of Python and OpenAI you will find it even easier to follow.
This is a method I needed myself, and I hope it proves helpful to someone else as well.