Project Background#
ChatGPT has been popular for quite a while now, and related products of every type and scale keep emerging. As a web developer, after browsing so many open source front-end projects in this space, I couldn't help but itch to try building one myself. After mulling it over for a few days, I decided to give it a shot. My original motivations were the following:
- Technical learning: I had never written a project with Next.js before. After following a wave of experts on Twitter, I noticed that Next.js seems to be very popular abroad, so I decided to learn it.
- Broadening horizons: many GPT-related front-end projects use technologies and frameworks I wasn't familiar with before. Implementing them myself and then digging into how they work felt like a win-win.
- Keeping up with the trend: I have seen many open source projects on GitHub that can be deployed with one click (quite a few early birds made good money with them). With my own API key I could simply use one of those, but instead of relying on someone else's work I thought, "I can do this too," and that settled it.
Relevant Links#
- Demo (currently free to use): https://gpt.ltopx.com
- Github (please star✨): https://github.com/Peek-A-Booo/L-GPT
- Bento homepage: https://bento.me/peek
- Twitter: https://twitter.com/peekbomb
Project Introduction#
L-GPT is an open source project that helps you improve the efficiency of your learning, work, and daily life by providing access to different AI models. More features will be added continuously in the future.
Project Features#
- Supports one-click free deployment to Vercel
- Supports responsive design and dark mode
- Secure and open source, all data is stored locally
- Supports i18n
- Supports Azure OpenAI Service
- Supports configuration and use of custom prompts
Technology Stack#
- Main framework: next.js
- UI framework: radix-ui
- CSS: tailwindcss
- State management: zustand
- Error collection: Sentry
Knowledge Sharing#
1. API Account Application#
First of all, you need the relevant accounts and API permissions before proceeding to the next step. Here are the registration/application addresses and guides for several platforms:
- OpenAI: Registration Address
- Azure OpenAI: Application Tutorial
- Claude AI: Application Address
2. Service Integration#
Taking the official OpenAI service as an example, there are roughly two integration approaches. This project uses the second one, which offers more flexibility and is more convenient to use behind a proxy.
- Integrate via the official library: openai-node
  - Installation

    ```shell
    npm install openai
    ```

  - Usage

    ```typescript
    import { Configuration, OpenAIApi } from "openai";

    // Configure your personal OpenAI API Key
    const configuration = new Configuration({ apiKey });
    const openai = new OpenAIApi(configuration);

    const completion: any = await openai.createChatCompletion(
      {
        // To achieve the official streaming "typewriter" effect, you must use stream
        stream: true,
        // Choose the corresponding GPT model. After successful account registration,
        // you can use gpt-3.5 for free, while gpt-4 requires queuing for an application
        model: "gpt-3.5-turbo",
        messages: [
          {
            role: "system",
            content: "Content",
          },
          // Chat list content
          ...msg_list,
        ],
      },
      { responseType: "stream" },
    );
    ```
- Call the endpoint directly: https://api.openai.com/v1/chat/completions

  ```typescript
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    headers: {
      "Content-Type": "application/json",
      // Configure your own OpenAI API Key
      Authorization: `Bearer ${apiKey}`,
    },
    method: "POST",
    body: JSON.stringify({
      stream: true,
      model: "gpt-3.5-turbo",
      messages: [
        {
          role: "system",
          content: "Content",
        },
        // Chat list content
        ...chat_list,
      ],
    }),
  });
  ```
3. Handling Streamed Information#
ChatGPT returns results in two ways. One returns the entire answer only after it is complete, which makes the user wait too long and gives a poor experience. The other streams the result back incrementally as server-sent events, which enables a typewriter-like effect.
```typescript
// `response` is the fetch response from the previous section
const reader = response.body!.getReader();
const textDecoder = new TextDecoder();
let decoderDone = false;

const handleFragment = (fragment: string) => {
  const lines = fragment.split("\n").filter((item) => item.trim());
  for (const line of lines) {
    const message = line.replace(/^data: /, "");
    if (message === "[DONE]") {
      decoderDone = true;
      continue;
    }
    try {
      const content = JSON.parse(message).choices[0].delta.content;
      if (content) {
        console.log(content, "Received streamed information");
      }
    } catch {}
  }
};

return new Promise(async (resolve) => {
  while (!decoderDone) {
    const { done, value } = await reader.read();
    if (done) decoderDone = true;
    const fragment = textDecoder.decode(value);
    handleFragment(fragment);
    if (decoderDone) resolve(true);
  }
});
```
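The fragment-parsing part of this logic can also be factored into a pure function, which makes it easy to unit test in isolation. Below is a minimal sketch under that idea; the `parseSSEChunk` name is my own and not from the project:

```typescript
// Parse one decoded chunk of the chat completions stream.
// Each event line has the form `data: {...}`, and the stream
// ends with a literal `data: [DONE]` marker.
function parseSSEChunk(chunk: string): { deltas: string[]; done: boolean } {
  const deltas: string[] = [];
  let done = false;
  for (const line of chunk.split("\n")) {
    const message = line.replace(/^data: /, "").trim();
    if (!message) continue;
    if (message === "[DONE]") {
      done = true;
      continue;
    }
    try {
      const content = JSON.parse(message).choices?.[0]?.delta?.content;
      if (content) deltas.push(content);
    } catch {
      // Ignore lines that are not valid JSON
    }
  }
  return { deltas, done };
}

// Example: two content deltas followed by the end-of-stream marker
const chunk =
  'data: {"choices":[{"delta":{"content":"Hel"}}]}\n' +
  'data: {"choices":[{"delta":{"content":"lo"}}]}\n' +
  "data: [DONE]\n";
console.log(parseSSEChunk(chunk)); // deltas "Hel", "lo"; done: true
```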
4. Service Proxy#
For certain reasons, api.openai.com cannot be accessed normally from a mainland China IP address. Although there are workarounds, they are troublesome and carry some risk of the account being blocked. For most people, a service that is directly reachable from a domestic network is the best option. This project uses Cloudflare to proxy the OpenAI API so that it is accessible.
You can refer to the specific process here: https://github.com/noobnooc/noobnooc/discussions/9
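The proxy itself can be as small as a single Cloudflare Worker that rewrites the request host. The snippet below is my own illustration of the idea described in the linked discussion, not the project's actual worker:

```typescript
// worker.ts — a minimal sketch of a Cloudflare Worker (module syntax)
// that forwards every incoming request to api.openai.com, keeping the
// path, headers, and body intact.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    url.host = "api.openai.com";
    // Re-issue the original request against the OpenAI host
    return fetch(new Request(url.toString(), request));
  },
};
```

Once the worker is bound to a custom domain, that domain can be used as the `NEXT_PUBLIC_OPENAI_API_PROXY` value described below.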
5. Error Reporting#
The project integrates Sentry, which can quickly collect synchronous and asynchronous error logs and report them to the backend so problems can be fixed promptly.
Just fill in your Sentry DSN in the environment variable configuration.
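For a Next.js app, the Sentry setup typically lives in a small config file. The snippet below is a minimal sketch, assuming the `@sentry/nextjs` package; the file name and sample rate are my own illustrative choices, not taken from the project:

```typescript
// sentry.client.config.ts — minimal sketch; the DSN comes from the
// NEXT_PUBLIC_SENTRY_DSN environment variable described below.
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  // Sample only a fraction of transactions to limit overhead (illustrative value)
  tracesSampleRate: 0.1,
});
```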
Project Deployment#
If you want your own ChatGPT-style service, you can quickly deploy this project to Vercel with one click. Configure your own API key and proxy address, and you can use it yourself or share it with friends and family.
One-click Deployment. This project supports the following environment variables:
| Environment Variable | Description | Required | Default Value |
| --- | --- | --- | --- |
| NEXT_PUBLIC_OPENAI_API_KEY | Your personal OpenAI API key | No | |
| NEXT_PUBLIC_OPENAI_API_PROXY | Your personal OpenAI API proxy address | No | https://api.openai.com |
| NEXT_PUBLIC_AZURE_OPENAI_API_KEY | Your personal Azure OpenAI API key. See example | No | |
| NEXT_PUBLIC_AZURE_OPENAI_RESOURCE_NAME | Your personal Azure OpenAI API service resource name. See example | No | |
| NEXT_PUBLIC_SENTRY_DSN | Your Sentry DSN address. If empty, errors will not be reported to Sentry | No | |
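On Vercel these are set in the project's environment variable settings; for local development, a `.env.local` file in the project root works the same way. A hypothetical example (the key values below are placeholders, not real credentials):

```
NEXT_PUBLIC_OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxx
NEXT_PUBLIC_OPENAI_API_PROXY=https://api.openai.com
NEXT_PUBLIC_SENTRY_DSN=
```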
More#
For more details, please refer to the project README.md
Reference Links:#
- https://atlassc.net/2023/04/25/azure-openai-service
- https://github.com/noobnooc/noobnooc/discussions/9
Feel free to exchange ideas with everyone 😝