Artificial Intelligence is rapidly transforming how we build modern web applications. With the release of GPT-4 Turbo by OpenAI, developers can now add powerful language understanding and generation capabilities directly into their apps, with reduced latency and lower costs compared to earlier GPT-4 models.
In this tutorial, we’ll walk through how to integrate GPT-4 Turbo into a full-stack web application using React on the frontend and Express.js on the backend. You’ll build a simple but functional chat interface that sends user input to OpenAI’s API, receives intelligent responses, and displays them in a user-friendly UI.
By the end of this tutorial, you will have:
- A React-based chat interface
- An Express API that securely proxies GPT-4 Turbo requests
- A working end-to-end conversation system powered by AI
💡 Use Case Examples: AI assistants, support bots, content generators, language translators, educational tools, and more.
Prerequisites
Before we start, ensure you have the following installed and ready:
🛠️ System Requirements
- Node.js (v18 or newer recommended)
- npm (comes with Node.js)
- A modern code editor, such as VS Code
📦 Dependencies & Services
- OpenAI API key: sign up or log in at https://platform.openai.com/ and generate an API key.
👩‍💻 Knowledge Prerequisites
- Basic understanding of JavaScript/TypeScript
- Familiarity with React and Express.js
- Basic API request/response handling (REST)
1. Setting Up the Backend with Express and TypeScript
We’ll start by creating a TypeScript-based Express server that acts as a secure proxy between your frontend and OpenAI’s GPT-4 Turbo API.
1.1 Initialize the Backend Project
mkdir gpt4-backend
cd gpt4-backend
npm init -y
1.2 Install Required Dependencies
npm install express axios dotenv cors
npm install -D typescript ts-node-dev @types/node @types/express @types/cors
1.3 Create tsconfig.json
{
  "compilerOptions": {
    "target": "es2020",
    "module": "commonjs",
    "outDir": "dist",
    "rootDir": "src",
    "strict": true,
    "esModuleInterop": true
  }
}
1.4 Setup Project Structure
mkdir src
touch src/index.ts
touch .env
1.5 Create .env
OPENAI_API_KEY=your-openai-api-key-here
PORT=5000
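Since the .env file now holds your secret key, make sure it never reaches version control. Assuming your backend folder is (or will become) a git repository, one line takes care of it:

```shell
# Tell git to ignore the .env file so the API key is never committed
echo ".env" >> .gitignore
```

If you already committed a key by accident, rotate it in the OpenAI dashboard; removing the file afterward does not revoke the leaked key.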
1.6 Express Server in src/index.ts
import express, { Request, Response } from 'express';
import cors from 'cors';
import axios from 'axios';
import dotenv from 'dotenv';

dotenv.config();

const app = express();
const port = process.env.PORT || 5000;

app.use(cors());
app.use(express.json());

app.post('/api/chat', async (req: Request, res: Response) => {
  const { messages } = req.body;

  try {
    const response = await axios.post(
      'https://api.openai.com/v1/chat/completions',
      {
        model: 'gpt-4-turbo',
        messages,
      },
      {
        headers: {
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
      }
    );
    res.json(response.data);
  } catch (error: any) {
    console.error('Error communicating with OpenAI:', error.message);
    res.status(500).json({ error: 'Failed to get response from GPT-4 Turbo' });
  }
});

app.listen(port, () => {
  console.log(`Server is running on http://localhost:${port}`);
});
1.7 Run the Server
Add this to package.json:

"scripts": {
  "dev": "ts-node-dev src/index.ts"
}
Then run:
npm run dev
Your backend is now ready to receive chat messages and forward them to GPT-4 Turbo!
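Before exposing this endpoint anywhere public, it is worth validating the messages payload instead of forwarding req.body to OpenAI blindly. Here is a minimal sketch; validMessages and isChatMessage are our own helper names, not part of the server code above:

```typescript
// Minimal request-body validation for the /api/chat route.
// validMessages is a hypothetical helper, not part of the tutorial code.
type Role = 'user' | 'assistant' | 'system';

interface ChatMessage {
  role: Role;
  content: string;
}

function isChatMessage(value: unknown): value is ChatMessage {
  if (typeof value !== 'object' || value === null) return false;
  const msg = value as Record<string, unknown>;
  return (
    (msg.role === 'user' || msg.role === 'assistant' || msg.role === 'system') &&
    typeof msg.content === 'string' &&
    msg.content.length > 0
  );
}

// Returns the messages array if the body is well-formed, otherwise null.
function validMessages(body: unknown): ChatMessage[] | null {
  if (typeof body !== 'object' || body === null) return null;
  const { messages } = body as { messages?: unknown };
  if (!Array.isArray(messages) || messages.length === 0) return null;
  return messages.every(isChatMessage) ? (messages as ChatMessage[]) : null;
}
```

In the route handler you would then respond with a 400 status when validMessages(req.body) returns null, rather than passing a malformed array through to the API.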
2. Building the React Frontend with TypeScript
We’ll use Vite to quickly scaffold a React + TypeScript app and create a chat interface to interact with our Express backend.
2.1 Create the React App with Vite
npm create vite@latest gpt4-frontend -- --template react-ts
cd gpt4-frontend
npm install
2.2 Install Axios
npm install axios
2.3 Create a Chat Message Interface
Create a new file: src/types.ts
export interface ChatMessage {
  role: 'user' | 'assistant' | 'system';
  content: string;
}
2.4 Build the Chat Component
Update src/App.tsx:
import { useState } from 'react';
import axios from 'axios';
import type { ChatMessage } from './types';
import './App.css';

function App() {
  const [messages, setMessages] = useState<ChatMessage[]>([]);
  const [input, setInput] = useState('');
  const [loading, setLoading] = useState(false);

  const sendMessage = async () => {
    if (!input.trim()) return;

    const userMessage: ChatMessage = {
      role: 'user',
      content: input,
    };
    const updatedMessages = [...messages, userMessage];
    setMessages(updatedMessages);
    setInput('');
    setLoading(true);

    try {
      const res = await axios.post('http://localhost:5000/api/chat', {
        messages: updatedMessages,
      });
      const reply: ChatMessage = res.data.choices[0].message;
      setMessages([...updatedMessages, reply]);
    } catch (err) {
      console.error('Error sending message:', err);
    } finally {
      setLoading(false);
    }
  };

  return (
    <div className="chat-container">
      <h1>GPT-4 Turbo Chat</h1>
      <div className="chat-box">
        {messages.map((msg, i) => (
          <div key={i} className={`message ${msg.role}`}>
            <strong>{msg.role}:</strong> {msg.content}
          </div>
        ))}
        {loading && <div className="message assistant">Typing...</div>}
      </div>
      <div className="input-area">
        <input
          type="text"
          value={input}
          onChange={e => setInput(e.target.value)}
          placeholder="Ask something..."
          onKeyDown={e => e.key === 'Enter' && sendMessage()}
        />
        <button onClick={sendMessage}>Send</button>
      </div>
    </div>
  );
}

export default App;
2.5 Basic Styling in src/App.css
.chat-container {
  max-width: 600px;
  margin: 50px auto;
  font-family: sans-serif;
}

.chat-box {
  border: 1px solid #ccc;
  padding: 15px;
  height: 400px;
  overflow-y: auto;
  background: #fafafa;
  margin-bottom: 10px;
}

.message {
  margin: 8px 0;
}

.message.user {
  text-align: right;
  color: #007bff;
}

.message.assistant {
  text-align: left;
  color: #28a745;
}

.input-area {
  display: flex;
  gap: 10px;
}

input {
  flex: 1;
  padding: 8px;
  font-size: 16px;
}

button {
  padding: 8px 16px;
}
Result
You now have a fully working React + TypeScript chat UI that:
- Sends user messages to your Express backend
- Forwards them to OpenAI’s GPT-4 Turbo
- Displays the assistant’s response
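One gap worth noting: the component's catch branch only logs to the console, so the user sees nothing when a request fails. A small helper can turn the failure into text you could render as an assistant-style message in the chat box. This is our own addition, not part of the tutorial's code, and errorMessage is a hypothetical name:

```typescript
// Convert an unknown thrown value into a user-facing string.
// errorMessage is a hypothetical helper, not part of the tutorial code.
function errorMessage(err: unknown): string {
  // Axios rejections are Error instances with a descriptive message.
  if (err instanceof Error && err.message) {
    return `Request failed: ${err.message}`;
  }
  return 'Request failed: unknown error';
}
```

In the catch block you could then append { role: 'assistant', content: errorMessage(err) } to the messages state so the failure shows up in the conversation instead of disappearing silently.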
3. Improving the UX with Scroll, Loading, and Styles
We’ll enhance the chat app to feel more dynamic and responsive by:
- Auto-scrolling to the latest message
- Displaying a typing indicator while waiting for GPT-4 Turbo's response
- Improving the UI styling for clarity and usability
3.1 Auto-Scroll to Latest Message
Update App.tsx:
import { useState, useRef, useEffect } from 'react';
// ... other imports

function App() {
  // ...
  const bottomRef = useRef<HTMLDivElement | null>(null);

  useEffect(() => {
    bottomRef.current?.scrollIntoView({ behavior: 'smooth' });
  }, [messages, loading]);

  // inside return()
  return (
    <div className="chat-container">
      {/* ... */}
      <div className="chat-box">
        {messages.map((msg, i) => (
          <div key={i} className={`message ${msg.role}`}>
            <strong>{msg.role}:</strong> {msg.content}
          </div>
        ))}
        {loading && <div className="message assistant">Typing...</div>}
        <div ref={bottomRef} />
      </div>
      {/* ... */}
    </div>
  );
}
3.2 Improve Visual Clarity
Replace src/App.css with this upgraded styling:
body {
  background: #f0f2f5;
  margin: 0;
  padding: 0;
}

.chat-container {
  max-width: 640px;
  margin: 40px auto;
  padding: 20px;
  background: white;
  border-radius: 12px;
  box-shadow: 0 4px 14px rgba(0, 0, 0, 0.1);
  font-family: 'Segoe UI', sans-serif;
}

h1 {
  text-align: center;
  margin-bottom: 20px;
}

.chat-box {
  border: 1px solid #ddd;
  border-radius: 8px;
  padding: 15px;
  height: 400px;
  overflow-y: auto;
  background: #fcfcfc;
  margin-bottom: 16px;
}

.message {
  padding: 8px 12px;
  border-radius: 8px;
  margin-bottom: 10px;
  max-width: 75%;
}

.message.user {
  background-color: #d0eaff;
  align-self: flex-end;
  margin-left: auto;
  text-align: right;
}

.message.assistant {
  background-color: #e6f9e6;
  align-self: flex-start;
  margin-right: auto;
}

.input-area {
  display: flex;
  gap: 10px;
}

input {
  flex: 1;
  padding: 10px;
  font-size: 16px;
  border: 1px solid #ccc;
  border-radius: 8px;
}

button {
  padding: 10px 16px;
  background: #007bff;
  color: white;
  border: none;
  border-radius: 8px;
  cursor: pointer;
  transition: background 0.3s ease;
}

button:hover {
  background: #0056b3;
}
Make sure your OpenAI account has sufficient API quota; otherwise, GPT-4 Turbo requests will fail.
Result
You now have:
- Smooth scrolling to the latest message
- A friendly typing indicator
- Clean, styled chat bubbles for both user and assistant messages
4. Connecting Frontend and Backend in Production (CORS, Proxy, Deployment Tips)
In development, your React frontend (localhost:5173) and Express backend (localhost:5000) run on separate ports. But in production, you'll typically serve them from the same domain or route. Here’s how to handle both cases cleanly.
4.1 Development: Configure CORS in Express
Your backend is already using CORS:
import cors from 'cors';
app.use(cors());
To restrict CORS in production, you can do:
const allowedOrigins = ['http://localhost:5173', 'https://yourfrontenddomain.com'];

app.use(cors({
  origin: allowedOrigins,
}));
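Rather than hardcoding the list, you can drive the allowed origins from an environment variable so deployments don't require a code change. A sketch, where CORS_ORIGINS is our own variable name and not an Express or cors convention:

```typescript
// Parse a comma-separated CORS_ORIGINS env var into an origins array,
// e.g. CORS_ORIGINS=http://localhost:5173,https://yourfrontenddomain.com
// CORS_ORIGINS is a hypothetical variable name chosen for this example.
function parseOrigins(raw: string | undefined): string[] {
  if (!raw) return ['http://localhost:5173']; // sensible dev default
  return raw
    .split(',')
    .map(origin => origin.trim())
    .filter(origin => origin.length > 0);
}
```

You would then call app.use(cors({ origin: parseOrigins(process.env.CORS_ORIGINS) })) and set the variable per environment.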
4.2 Development: Set Up Vite Proxy
To avoid CORS issues during development, configure Vite to proxy /api calls to your Express server.
Edit vite.config.ts in the frontend:
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  server: {
    proxy: {
      '/api': {
        target: 'http://localhost:5000',
        changeOrigin: true,
      },
    },
  },
});
Then update your frontend API call from:
await axios.post('http://localhost:5000/api/chat', {...})
to:
await axios.post('/api/chat', {...})
This works cleanly in development and prevents leaking your backend URL to the browser.
4.3 Production: Serve React from Express (Optional)
If you want a single deployment (e.g., on Render, Vercel, or DigitalOcean), you can serve the built React app directly from Express.
- In the React frontend:

npm run build

- In the Express backend (index.ts):
import path from 'path';

// Resolve paths from the working directory (gpt4-backend); avoid shadowing
// the built-in __dirname, which already exists in CommonJS builds.
const rootDir = path.resolve();

app.use(express.static(path.join(rootDir, '../gpt4-frontend/dist')));

app.get('*', (_req, res) => {
  res.sendFile(path.join(rootDir, '../gpt4-frontend/dist/index.html'));
});
- Make sure your folder structure looks like this, with the two projects as siblings:

/gpt4-backend
├── /src
└── /dist
/gpt4-frontend
└── /dist
4.4 Deployment Tips
- Environment variables: never hardcode your OpenAI API key. Use .env and tools like dotenv, or your deployment platform's secrets.
- Free hosting options:
  - Frontend: Vercel, Netlify
  - Backend: Render, Railway, Fly.io
- Logging & errors: add proper error handling and rate limiting before going public.
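To make the rate-limiting point concrete, here is a naive fixed-window limiter keyed by client IP. The names and numbers are our own, and in real deployments you would reach for an established middleware such as express-rate-limit rather than rolling your own:

```typescript
// Naive fixed-window rate limiter keyed by IP. Fine for a demo;
// use a battle-tested middleware (e.g. express-rate-limit) in production.
const WINDOW_MS = 60_000;  // 1-minute window (our choice for the example)
const MAX_REQUESTS = 20;   // per IP per window (our choice for the example)

const hits = new Map<string, { count: number; windowStart: number }>();

function allowRequest(ip: string, now: number = Date.now()): boolean {
  const entry = hits.get(ip);
  // First request from this IP, or the previous window has expired:
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS;
}
```

In the /api/chat handler you would check allowRequest(req.ip ?? 'unknown') first and respond with a 429 status when it returns false, so one misbehaving client cannot burn through your OpenAI quota.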
With this, your app is ready for both development and production environments!
5. Conclusion
In this tutorial, you learned how to integrate GPT-4 Turbo into a modern full-stack web application using React (with Vite + TypeScript) on the frontend and Express.js on the backend. Along the way, you built a functional chatbot UI that:
✅ Sends messages to OpenAI’s GPT-4 Turbo
✅ Receives intelligent AI responses
✅ Displays a smooth, styled conversation flow
✅ Works in both development and production environments
This architecture gives you a flexible foundation to build powerful AI-enabled applications, such as:
- Customer support chatbots
- Personalized writing assistants
- Educational tutoring tools
- Natural language interfaces for business apps
You can find the full source code on our GitHub.
That's just the basics. If you want to learn the MERN stack, React.js, or React Native in more depth, you can take one of the following affordable courses:
- Mastering React JS
- Master React Native Animations
- React: React Native Mobile Development: 3-in-1
- MERN Stack Front To Back: Full Stack React, Redux & Node.js
- Learning React Native Development
Thanks!