DEPLOYMENT.md
This guide covers deploying the Hasher application to production.

## Prerequisites

- Node.js 18.x or higher
- Redis 6.x or higher
- Domain name (optional, for custom domain)
- SSL certificate (recommended for production)
Vercel provides seamless deployment for Next.js applications.

4. **Set Environment Variables**:
   - Go to your project settings on Vercel
   - Add environment variables:
     - `REDIS_HOST=your-redis-host.com`
     - `REDIS_PORT=6379`
     - `REDIS_PASSWORD=your-secure-password` (if using authentication)
   - Redeploy: `vercel --prod`
#### Important Notes:

- Ensure Redis is accessible from Vercel's servers
- Consider using [Upstash](https://upstash.com) or [Redis Cloud](https://redis.com/try-free/) for managed Redis
- Use environment variables for sensitive configuration
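Before redeploying, it can save a failed deploy to confirm the Redis endpoint is reachable from your side at all. A minimal sketch, assuming bash with `/dev/tcp` support; `check_redis_reachable` is a hypothetical helper, not part of the project, and it verifies network reachability only, not authentication:

```bash
# check_redis_reachable: plain TCP probe of a Redis host/port.
# Hypothetical helper -- checks reachability only, not auth.
check_redis_reachable() {
  local host="${1:-localhost}" port="${2:-6379}"
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# Example (hostname is a placeholder):
check_redis_reachable your-redis-host.com 6379
```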
---

#### Dockerfile:

Excerpted below; lines omitted from this excerpt are marked `# …`.

```dockerfile
# …
FROM base AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app

COPY package.json package-lock.json* ./
RUN npm ci

# Rebuild the source code only when needed
# …
COPY --from=deps /app/node_modules ./node_modules
COPY . .

ENV NEXT_TELEMETRY_DISABLED=1
RUN npm run build

# Production image, copy all the files and run next
FROM base AS runner
WORKDIR /app

ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

# …
USER nextjs

EXPOSE 3000

ENV PORT=3000
ENV HOSTNAME="0.0.0.0"

CMD ["node", "server.js"]
```
#### Update next.config.ts:

```typescript
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  output: 'standalone',
};

export default nextConfig;
```
#### Build and Run:

```bash
docker build -t hasher:latest .

# Run the container
docker run -d \
  -p 3000:3000 \
  -e REDIS_HOST=redis \
  -e REDIS_PORT=6379 \
  -e REDIS_PASSWORD=your-password \
  --name hasher \
  hasher:latest
```
#### Docker Compose:

```yaml
services:
  hasher:
    # …
    ports:
      - "3000:3000"
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - REDIS_PASSWORD=your-secure-password
    depends_on:
      - redis
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass your-secure-password --appendonly yes
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    restart: unless-stopped

volumes:
  redis-data:
```

Run with `docker compose up -d` (or `docker-compose up -d` with the standalone v1 binary).
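Optionally, Compose can gate dependent services on Redis actually answering, not just the container starting. A sketch of a `healthcheck` for the `redis` service (key names follow the Compose specification; the password must match the `command` line above):

```yaml
  redis:
    # … (image, command, ports, volumes as above)
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "your-secure-password", "ping"]
      interval: 10s
      timeout: 3s
      retries: 5
```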
#### 1. Install Node.js:

```bash
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs
```

#### 2. Install Redis:

```bash
sudo apt-get update
sudo apt-get install redis-server

# Configure Redis
sudo nano /etc/redis/redis.conf
# Set: requirepass your-strong-password

# Start Redis
sudo systemctl start redis-server
sudo systemctl enable redis-server
```

#### 3. Install PM2 (Process Manager):

```bash
sudo npm install -g pm2
```

#### 4. Clone and Build:

```bash
cd /var/www
# … (clone the repository)
npm install
npm run build
```
#### 5. Configure Environment:

```bash
cat > .env.local << EOF
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=your-strong-password
NODE_ENV=production
EOF
```

#### 6. Start with PM2:

```bash
pm2 start npm --name "hasher" -- start
pm2 save
pm2 startup
```
#### 7. Configure Nginx (Optional):

```nginx
server {
    # … (proxy configuration elided)
}
```

```bash
sudo systemctl reload nginx
```

---
## Redis Setup

### Option 1: Managed Redis (Recommended)

#### Upstash (Serverless Redis)

1. Sign up at [Upstash](https://upstash.com)
2. Create a database
3. Copy connection details
4. Update environment variables

#### Redis Cloud

1. Sign up at [Redis Cloud](https://redis.com/try-free/)
2. Create a database
3. Note the endpoint and password
4. Update `REDIS_HOST`, `REDIS_PORT`, and `REDIS_PASSWORD`

### Option 2: Self-Hosted Redis

```bash
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install redis-server

# Configure Redis security
sudo nano /etc/redis/redis.conf

# Important settings:
# bind 127.0.0.1 ::1            # Only local connections (remove for remote)
# requirepass your-strong-password
# maxmemory 256mb
# maxmemory-policy allkeys-lru

# Start Redis
sudo systemctl start redis-server
sudo systemctl enable redis-server

# Test connection
redis-cli -a your-strong-password ping
```
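The settings called out in the comments above can be collected into a single `/etc/redis/redis.conf` fragment (values are illustrative, not tuned for any particular workload):

```
bind 127.0.0.1 ::1
requirepass your-strong-password
maxmemory 256mb
maxmemory-policy allkeys-lru
appendonly yes
```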
---

## Security Considerations

### 1. Redis Security

- **Always** use a strong password with `requirepass`
- Bind Redis to localhost if possible (`bind 127.0.0.1`)
- Use TLS/SSL for remote connections (Redis 6+)
- Restrict network access with firewall rules
- Update credentials regularly
- Disable dangerous commands:
  ```
  rename-command FLUSHDB ""
  rename-command FLUSHALL ""
  rename-command CONFIG ""
  ```
- Set memory limits to prevent OOM
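A quick way to generate a strong `requirepass` value, assuming `openssl` is installed:

```bash
# 32 random bytes, base64-encoded: a ~44-character password
openssl rand -base64 32
```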
### 2. Application Security

```bash
# Example UFW firewall rules
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow from YOUR_IP to any port 6379  # Redis (if remote)
sudo ufw enable
```
```bash
pm2 monit
pm2 logs hasher
```

### Redis Monitoring

```bash
# Test connection
redis-cli ping

# Get server info
redis-cli INFO

# Monitor commands
redis-cli MONITOR

# Check memory usage
redis-cli INFO memory

# Check stats
redis-cli INFO stats
```
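One derived metric worth watching in the `INFO stats` output is the keyspace hit ratio. A small sketch (fed canned sample text here so it runs without a live Redis; in real use, pipe `redis-cli INFO stats` into it instead):

```bash
# hit_ratio: compute keyspace hit ratio from INFO stats lines on stdin
hit_ratio() {
  awk -F: '/^keyspace_hits/{h=$2} /^keyspace_misses/{m=$2} END{printf "%.2f\n", h/(h+m)}'
}

# Canned sample input (real usage: redis-cli INFO stats | hit_ratio)
printf 'keyspace_hits:90\nkeyspace_misses:10\n' | hit_ratio   # prints 0.90
```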
---

## Backup and Recovery

### Redis Persistence

Redis offers two persistence options:

#### RDB (Redis Database Backup)

```bash
# Configure in redis.conf
save 900 1      # Save if 1 key changed in 15 minutes
save 300 10     # Save if 10 keys changed in 5 minutes
save 60 10000   # Save if 10000 keys changed in 1 minute

# Manual snapshot
redis-cli SAVE

# Backup file location
/var/lib/redis/dump.rdb
```

#### AOF (Append Only File)

```bash
# Enable in redis.conf
appendonly yes
appendfilename "appendonly.aof"

# Sync options
appendfsync everysec   # Good balance

# Backup file location
/var/lib/redis/appendonly.aof
```

### Backup Script

```bash
#!/bin/bash
# backup-redis.sh

BACKUP_DIR="/backup/redis"
DATE=$(date +%Y%m%d_%H%M%S)

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Trigger Redis save (SAVE blocks the server while writing; BGSAVE is
# non-blocking, but the copy below would then have to wait for it to finish)
redis-cli -a your-password SAVE

# Copy RDB file
cp /var/lib/redis/dump.rdb "$BACKUP_DIR/dump_$DATE.rdb"

# Keep only last 7 days
find "$BACKUP_DIR" -name "dump_*.rdb" -mtime +7 -delete

echo "Backup completed: dump_$DATE.rdb"
```
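The `find … -mtime +7 -delete` retention rule can be sanity-checked against a throwaway directory, with no Redis involved (the paths here are temporary, not the real backup location; `touch -d` assumes GNU coreutils):

```bash
# Demonstrate the 7-day retention rule on dummy .rdb files
demo_dir=$(mktemp -d)
touch "$demo_dir/dump_new.rdb"
touch "$demo_dir/dump_old.rdb"
touch -d '10 days ago' "$demo_dir/dump_old.rdb"   # age one file past the cutoff

find "$demo_dir" -name "dump_*.rdb" -mtime +7 -delete

ls "$demo_dir"   # only dump_new.rdb remains
```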
### Restore from Backup

```bash
# Stop Redis
sudo systemctl stop redis-server

# Replace dump file
sudo cp /backup/redis/dump_YYYYMMDD_HHMMSS.rdb /var/lib/redis/dump.rdb
sudo chown redis:redis /var/lib/redis/dump.rdb

# Start Redis
sudo systemctl start redis-server
```

---
### Horizontal Scaling

1. Deploy multiple Next.js instances
2. Use a load balancer (nginx, HAProxy, Cloudflare)
3. Share the same Redis instance

### Redis Scaling Options

#### 1. Redis Cluster

- Automatic sharding across multiple nodes
- High availability with automatic failover
- Good for very large datasets

#### 2. Redis Sentinel

- High availability without sharding
- Automatic failover
- Monitoring and notifications

#### 3. Read Replicas

- Separate read and write operations
- Scale read capacity

---
```bash
pm2 status
pm2 logs hasher --lines 100
```

### Check Redis

```bash
# Test connection
redis-cli ping

# Check memory
redis-cli INFO memory

# Count keys
redis-cli DBSIZE

# Get stats
redis-cli INFO stats
```

### Common Issues

**Issue**: Cannot connect to Redis

- Check if Redis is running: `sudo systemctl status redis-server`
- Verify firewall rules
- Check `REDIS_HOST` and `REDIS_PORT` environment variables
- Verify password is correct

**Issue**: Out of memory

- Increase Node.js memory: `NODE_OPTIONS=--max-old-space-size=4096`
- Configure Redis `maxmemory`
- Set an appropriate eviction policy

**Issue**: Slow searches

- Check Redis memory usage
- Verify O(1) key lookups are being used
- Monitor Redis with `redis-cli MONITOR`
- Consider Redis Cluster for very large datasets

---
1. **Enable Next.js Static Optimization**
2. **Use CDN for static assets**
3. **Configure Redis pipelining** (already implemented)
4. **Set appropriate maxmemory and eviction policy**
5. **Use SSD storage for Redis persistence**
6. **Enable connection pooling** (already implemented)
7. **Monitor and optimize Redis memory usage**

---
## Environment Variables

| Variable | Description | Default | Required |
|----------|-------------|---------|----------|
| `REDIS_HOST` | Redis server hostname | `localhost` | No |
| `REDIS_PORT` | Redis server port | `6379` | No |
| `REDIS_PASSWORD` | Redis authentication password | - | No* |
| `NODE_ENV` | Node environment | `development` | No |
| `PORT` | Application port | `3000` | No |

\*Required if Redis has authentication enabled
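The defaults in the table can be mirrored in shell when composing a connection URL for scripts or health checks. A sketch (`redis_url` is a hypothetical helper; the application itself reads these variables directly):

```bash
# redis_url: build a redis:// URL from the environment, applying the
# same defaults as the table above (hypothetical helper).
redis_url() {
  local host="${REDIS_HOST:-localhost}" port="${REDIS_PORT:-6379}"
  if [ -n "${REDIS_PASSWORD:-}" ]; then
    echo "redis://:${REDIS_PASSWORD}@${host}:${port}"
  else
    echo "redis://${host}:${port}"
  fi
}

(unset REDIS_HOST REDIS_PORT REDIS_PASSWORD; redis_url)   # prints redis://localhost:6379
```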
---

For deployment issues, check:

- [Next.js Deployment Docs](https://nextjs.org/docs/deployment)
- [Redis Documentation](https://redis.io/docs/)
- [Upstash Documentation](https://docs.upstash.com/)
- Project GitHub Issues
---

## Deployment Checklist

Before going live:

- [ ] Redis is secured with a password
- [ ] Environment variables are configured
- [ ] SSL/TLS certificates are installed
- [ ] Firewall rules are configured
- [ ] Monitoring is set up
- [ ] Backup strategy is in place
- [ ] Load testing completed
- [ ] Error logging configured
- [ ] Redis persistence (RDB/AOF) configured
- [ ] Rate limiting implemented (if needed)
- [ ] Documentation is up to date

---

**Ready to deploy! 🚀**