Using Gemini with the OpenAI Library
According to this article, we can now use Gemini through the OpenAI library, so I decided to give it a try in this post.
At the moment, only the Chat Completions API and the Embeddings API are available.
In this post, I try it with both Python and JavaScript.
Python
First, let's set up the environment.
pip install openai python-dotenv
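The code below reads the API key from a .env file via python-dotenv, so create one next to the script. GOOGLE_API_KEY is simply the variable name the sample code looks up; the value should be your own Gemini API key:

GOOGLE_API_KEY=your_gemini_api_key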
Next, let's run the following code.
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "Explain briefly (less than 30 words) to me how AI works."
        }
    ]
)

print(response.choices[0].message.content)
The following response was returned.
AI mimics human intelligence by learning patterns from data, using algorithms to solve problems and make decisions.
In the content field, you can pass either a plain string or a list of parts that use "type": "text".
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Explain briefly (less than 30 words) to me how AI works.",
                },
            ]
        }
    ]
)

print(response.choices[0].message.content)
However, image and audio input both return errors.
Sample code for image input:
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# PNG to base64 text
import base64

with open("test.png", "rb") as image:
    b64str = base64.b64encode(image.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    # model="gpt-4o",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe the image in the image below.",
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/png;base64,{b64str}"
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)
Sample code for audio input:
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# WAV to base64 text
import base64

with open("test.wav", "rb") as audio:
    b64str = base64.b64encode(audio.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    # model="gpt-4o-audio-preview",
    n=1,
    modalities=["text"],
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What does he say?",
                },
                {
                    "type": "input_audio",
                    "input_audio": {
                        "data": b64str,
                        "format": "wav",
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)
The following error response was returned.
openai.BadRequestError: Error code: 400 - [{'error': {'code': 400, 'message': 'Request contains an invalid argument.', 'status': 'INVALID_ARGUMENT'}}]
For now, only text input is supported, but it looks like image and audio input will be supported later.
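The Embeddings API mentioned at the beginning can be called through the same client. Below is a minimal sketch; I'm assuming text-embedding-004 is the embedding model exposed through this endpoint, so swap in a different model name if yours differs.

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

# Same client setup as in the chat examples above.
client = OpenAI(
    api_key=os.getenv("GOOGLE_API_KEY"),
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# Assumption: "text-embedding-004" is reachable via this compatibility endpoint.
response = client.embeddings.create(
    model="text-embedding-004",
    input="Explain briefly to me how AI works."
)

print(len(response.data[0].embedding))  # dimensionality of the returned vector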
JavaScript
Let's look at the JavaScript sample code.
First, let's set up the environment.
npm init -y
npm install openai
npm pkg set type=module
Next, let's run the following code.
import OpenAI from "openai";

const GOOGLE_API_KEY = process.env.GOOGLE_API_KEY;

const openai = new OpenAI({
    apiKey: GOOGLE_API_KEY,
    baseURL: "https://generativelanguage.googleapis.com/v1beta/"
});

const response = await openai.chat.completions.create({
    model: "gemini-1.5-flash",
    messages: [
        { role: "system", content: "You are a helpful assistant." },
        {
            role: "user",
            content: "Explain briefly (less than 30 words) to me how AI works",
        },
    ],
});

console.log(response.choices[0].message.content);
When running the code, make sure your API key is included in the .env file. The .env file is loaded at run time.
node --env-file=.env run.js
The following response was returned.
AI systems learn from data, identify patterns, and make predictions or decisions based on those patterns.
It's great that we can use other models through the same library.
Personally, I'm happy about this because the OpenAI library makes it easy to edit the conversation history.
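Here is a rough sketch of that last point, reusing the client setup from the Python examples above: the conversation history is just a list of message dicts that you can append to, trim, or rewrite before each call.

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI(
    api_key=os.getenv("GOOGLE_API_KEY"),
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# The history is a plain Python list, so editing it is just list manipulation.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain briefly (less than 30 words) to me how AI works."},
]

first = client.chat.completions.create(model="gemini-1.5-flash", messages=history)

# Keep the assistant's reply, then add a follow-up question and call again.
history.append({"role": "assistant", "content": first.choices[0].message.content})
history.append({"role": "user", "content": "Give one concrete example."})

second = client.chat.completions.create(model="gemini-1.5-flash", messages=history)
print(second.choices[0].message.content)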