Welcome to the G4F Requests API Guide. The G4F Requests API is a powerful tool for leveraging AI capabilities directly from your Python applications using HTTP requests. This guide walks you through setting up requests to interact with AI models for a variety of tasks, from text generation to image creation.
Ensure you have the `requests` library installed in your environment. You can install it via pip if needed:

```bash
pip install requests
```
This guide provides examples of how to make API requests using Python's `requests` library, focusing on tasks such as text and image generation, as well as retrieving available models.

Before diving into specific functionality, it's essential to understand how to structure your API requests. All endpoints assume that your server is running locally at `http://localhost`. If your server is running on a different port, adjust the URLs accordingly (e.g., `http://localhost:8000`).
To generate text responses using the chat completions endpoint, follow this example:
```python
import requests

# Define the payload
payload = {
    "model": "gpt-4o",
    "temperature": 0.9,
    "messages": [{"role": "user", "content": "Hello, how are you?"}]
}

# Send the POST request to the chat completions endpoint
response = requests.post("http://localhost/v1/chat/completions", json=payload)

# Check if the request was successful
if response.status_code == 200:
    # Print the response text
    print(response.text)
else:
    print(f"Request failed with status code {response.status_code}")
    print("Response:", response.text)
```
**Explanation:**
- The request sends a JSON payload to the `/v1/chat/completions` endpoint.
- The `temperature` parameter controls the randomness of the output; higher values produce more varied text.
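If your server returns an OpenAI-compatible response body (an assumption worth verifying against your server's actual schema), you can parse out just the generated text instead of printing the raw response:

```python
# Continues from the example above. Assumes an OpenAI-style schema:
# {"choices": [{"message": {"content": "..."}}], ...}
data = response.json()
print(data["choices"][0]["message"]["content"])
```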
For scenarios where you want to receive partial responses or stream data as it's generated, you can utilize the streaming capabilities of the API. Here's how you can implement streaming text generation using Python's `requests` library:
```python
import requests
import json
from queue import Queue

def fetch_response(url, model, messages):
    """
    Sends a POST request to the streaming chat completions endpoint.

    Args:
        url (str): The API endpoint URL.
        model (str): The model to use for text generation.
        messages (list): A list of message dictionaries.

    Returns:
        requests.Response: The streamed response object.
    """
    payload = {"model": model, "messages": messages}
    headers = {
        "Content-Type": "application/json",
        "Accept": "text/event-stream",
    }
    response = requests.post(url, headers=headers, json=payload, stream=True)
    if response.status_code != 200:
        raise Exception(
            f"Failed to send message: {response.status_code} {response.text}"
        )
    return response

def process_stream(response, output_queue):
    """
    Processes the streamed response and extracts messages.

    Args:
        response (requests.Response): The streamed response object.
        output_queue (Queue): A queue to store the extracted messages.
    """
    for line in response.iter_lines():
        if line:
            line = line.decode("utf-8")
            if line == "data: [DONE]":
                break
            if line.startswith("data: "):
                try:
                    data = json.loads(line[6:])
                    message = data.get("message", "")
                    if message:
                        output_queue.put(message)
                except json.JSONDecodeError:
                    continue

# Define the API endpoint
chat_url = "http://localhost/v1/chat/completions"

# Define the payload
model = "gpt-4o"
messages = [{"role": "user", "content": "Hello, how are you?"}]

# Initialize the queue to store output messages
output_queue = Queue()

try:
    # Fetch the streamed response
    response = fetch_response(chat_url, model, messages)

    # Process the streamed response
    process_stream(response, output_queue)

    # Retrieve messages from the queue
    while not output_queue.empty():
        msg = output_queue.get()
        print(msg)
except Exception as e:
    print(f"An error occurred: {e}")
```
**Explanation:**
- **`fetch_response` function:** sets the `Accept` header to `text/event-stream` to enable streaming, and raises an exception on non-200 responses.
- **`process_stream` function:** reads the stream line by line, stops at `"data: [DONE]"`, parses lines prefixed with `"data: "` to extract the message content, and puts each message into `output_queue` for further processing.
- **Main execution:** initializes a `Queue` to store incoming messages, then fetches and processes the streamed response.

**Usage Tips:**
- Set the `Accept` header appropriately (`text/event-stream` when you expect a streamed response).
- Adjust `chat_url` if your local server runs on a different port or path.

To retrieve a list of available models, you can use the following function:
```python
import requests

def fetch_models():
    """
    Retrieves the list of available models from the API.

    Returns:
        dict: A dictionary containing available models or an error message.
    """
    url = "http://localhost/v1/models/"
    try:
        response = requests.get(url)
        response.raise_for_status()  # Raise an error for HTTP issues
        return response.json()  # Parse and return the JSON response
    except Exception as e:
        return {"error": str(e)}  # Return an error message if something goes wrong

models = fetch_models()
print(models)
```
**Explanation:**
- The `fetch_models` function makes a GET request to the models endpoint and returns the parsed JSON.
- HTTP and connection errors are caught and returned as an `{"error": ...}` dictionary instead of raising.
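The exact response schema depends on your server; assuming an OpenAI-style model list (`{"data": [{"id": ...}, ...]}`), you could print just the model IDs:

```python
# Continues from the example above; the "data"/"id" shape is an assumption.
for entry in models.get("data", []):
    print(entry.get("id"))
```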
The following function demonstrates how to generate images using a specified model:
```python
import requests

def generate_image(prompt: str, model: str = "flux-4o"):
    """
    Generates an image based on the provided text prompt.

    Args:
        prompt (str): The text prompt for image generation.
        model (str, optional): The model to use for image generation. Defaults to "flux-4o".

    Returns:
        tuple: A tuple containing the image URL, caption, and the full response.
    """
    payload = {
        "model": model,
        "temperature": 0.9,
        "prompt": prompt.replace(" ", "+"),
    }
    try:
        response = requests.post("http://localhost/v1/images/generate", json=payload)
        response.raise_for_status()
        res = response.json()

        data = res.get("data")
        if not data or not isinstance(data, list):
            raise ValueError("Invalid 'data' in response")

        image_url = data[0].get("url")
        if not image_url:
            raise ValueError("No 'url' found in response data")

        timestamp = res.get("created")
        caption = f"Prompt: {prompt}\nCreated: {timestamp}\nModel: {model}"

        return image_url, caption, res
    except Exception as e:
        return None, f"Error: {e}", None

prompt = "A tiger in a forest"
image_url, caption, res = generate_image(prompt)
print("API Response:", res)
print("Image URL:", image_url)
print("Caption:", caption)
```
**Explanation:**
- The `generate_image` function constructs a request to create an image based on a text prompt.
- On success it returns the image URL, a formatted caption, and the full JSON response; on failure it returns `None` along with an error message.
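Once you have `image_url`, a follow-up GET request can save the image locally. This is a minimal sketch, assuming the URL points directly to a fetchable image file:

```python
# Continues from the example above; assumes image_url is directly downloadable.
if image_url:
    img = requests.get(image_url, timeout=30)
    img.raise_for_status()
    with open("generated_image.png", "wb") as f:
        f.write(img.content)
    print("Saved to generated_image.png")
```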
This guide has demonstrated basic usage scenarios for the G4F Requests API. The API provides robust capabilities for integrating advanced AI into your applications. You can expand upon these examples to fit more complex workflows and tasks, ensuring your applications are built with cutting-edge AI features.
For applications requiring high performance and non-blocking operations, consider using asynchronous programming libraries such as `aiohttp` or `httpx`. Here's an example using `aiohttp`:
```python
import aiohttp
import asyncio
import json
from queue import Queue

async def fetch_response_async(url, model, messages, output_queue):
    """
    Asynchronously sends a POST request to the streaming chat completions
    endpoint and processes the stream.

    Args:
        url (str): The API endpoint URL.
        model (str): The model to use for text generation.
        messages (list): A list of message dictionaries.
        output_queue (Queue): A queue to store the extracted messages.
    """
    payload = {"model": model, "messages": messages}
    headers = {
        "Content-Type": "application/json",
        "Accept": "text/event-stream",
    }
    async with aiohttp.ClientSession() as session:
        async with session.post(url, headers=headers, json=payload) as resp:
            if resp.status != 200:
                text = await resp.text()
                raise Exception(f"Failed to send message: {resp.status} {text}")
            async for line in resp.content:
                decoded_line = line.decode("utf-8").strip()
                if decoded_line == "data: [DONE]":
                    break
                if decoded_line.startswith("data: "):
                    try:
                        data = json.loads(decoded_line[6:])
                        message = data.get("message", "")
                        if message:
                            output_queue.put(message)
                    except json.JSONDecodeError:
                        continue

async def main():
    chat_url = "http://localhost/v1/chat/completions"
    model = "gpt-4o"
    messages = [{"role": "user", "content": "Hello, how are you?"}]
    output_queue = Queue()
    try:
        await fetch_response_async(chat_url, model, messages, output_queue)
        while not output_queue.empty():
            msg = output_queue.get()
            print(msg)
    except Exception as e:
        print(f"An error occurred: {e}")

# Run the asynchronous main function
asyncio.run(main())
```
**Explanation:**
- **`aiohttp` library:** facilitates asynchronous HTTP requests, allowing your application to handle multiple requests concurrently without blocking.
- **`fetch_response_async` function:** sends the streaming POST request, parses each `"data: "` line, and puts extracted messages into `output_queue`.
- **`main` function:** initializes a `Queue` to store incoming messages, awaits the streaming call, and prints each message.
- **Benefits:** non-blocking I/O frees your application to handle other work while waiting on the server, which matters for high-performance applications.
**Note:** Ensure you have `aiohttp` installed:

```bash
pip install aiohttp
```
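Since `httpx` was also mentioned as an alternative, here is a comparable sketch using its streaming API (a hedged example, not from the original guide; it assumes the same endpoint and `"data: "` SSE line format as above, and requires `pip install httpx`):

```python
import asyncio
import httpx

async def stream_with_httpx():
    # Minimal sketch using httpx's streaming API; the endpoint and SSE
    # line format are assumed to match the aiohttp example above.
    payload = {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello, how are you?"}]}
    headers = {"Accept": "text/event-stream"}
    async with httpx.AsyncClient() as client:
        async with client.stream(
            "POST",
            "http://localhost/v1/chat/completions",
            json=payload,
            headers=headers,
        ) as resp:
            resp.raise_for_status()
            async for line in resp.aiter_lines():
                if line == "data: [DONE]":
                    break
                if line.startswith("data: "):
                    print(line[6:])  # raw JSON chunk; parse as in the examples above

asyncio.run(stream_with_httpx())
```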
By following this guide, you can effectively integrate the G4F Requests API into your Python applications, enabling powerful AI-driven functionalities such as text and image generation, model retrieval, and handling streaming data. Whether you're building simple scripts or complex, high-performance applications, the examples provided offer a solid foundation to harness the full potential of AI in your projects.
Feel free to customize and expand upon these examples to suit your specific needs. If you encounter any issues or have further questions, don't hesitate to seek assistance or refer to additional resources.
**Adjusting the Base URL:**

The guide assumes your API server is accessible at `http://localhost`. If your server runs on a different port (e.g., `8000`), update the URLs accordingly:

```python
# Example for port 8000
chat_url = "http://localhost:8000/v1/chat/completions"
```
**Environment Variables (Optional):**

You can read the base URL from an environment variable instead of hard-coding it:

```python
import os

BASE_URL = os.getenv("API_BASE_URL", "http://localhost")
chat_url = f"{BASE_URL}/v1/chat/completions"
```
**Error Handling:** Wrap requests in `try`/`except` blocks and call `response.raise_for_status()`, as the examples above do, so HTTP failures surface as clear errors.

**Security Considerations:** Avoid exposing your local API server to untrusted networks without authentication, and keep any credentials out of source control.

**Testing:** Verify each endpoint with a small request (the model list is a cheap smoke test) before wiring it into a larger application.

**Logging:** Prefer Python's `logging` module over bare `print` calls in production code; see the sketch below.
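A minimal sketch of the logging suggestion above (the logger name and log messages are illustrative, not part of the API):

```python
import logging

import requests

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("g4f_client")  # illustrative name

chat_url = "http://localhost/v1/chat/completions"
payload = {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}

try:
    response = requests.post(chat_url, json=payload, timeout=30)
    response.raise_for_status()
    logger.info("Received %d characters", len(response.text))
except requests.RequestException as e:
    logger.error("Request failed: %s", e)
```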