DEPLOYMENT.md
This guide covers deploying the Hasher application to production.

## Prerequisites

- Node.js 18.x or higher
- Redis 6.x or higher
- Domain name (optional, for custom domain)
- SSL certificate (recommended for production)
Vercel provides seamless deployment for Next.js applications.

4. **Set Environment Variables**:
   - Go to your project settings on Vercel
   - Add environment variables:
     - `REDIS_HOST=your-redis-host`
     - `REDIS_PORT=6379`
     - `REDIS_PASSWORD=your-password` (if using authentication)
     - `REDIS_DB=0`
   - Redeploy: `vercel --prod`

#### Important Notes:
- Ensure Redis is accessible from Vercel's servers
- Consider a managed service such as Redis Cloud or Upstash, or another publicly accessible Redis instance
- Use environment variables for sensitive configuration
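These variables are read by the app's Redis client at startup. As an illustrative sketch (assuming ioredis-style options; the helper itself is not part of the project), turning them into a connection config might look like:

```typescript
// Sketch: build ioredis-style connection options from environment variables.
// Variable names match this guide; defaults mirror local development.
function redisConfigFromEnv(env: Record<string, string | undefined>) {
  return {
    host: env.REDIS_HOST ?? "localhost",
    port: Number(env.REDIS_PORT ?? 6379),
    password: env.REDIS_PASSWORD, // undefined when authentication is disabled
    db: Number(env.REDIS_DB ?? 0),
  };
}

// Example with values shaped like the Vercel settings above
const config = redisConfigFromEnv({ REDIS_HOST: "your-redis-host", REDIS_PORT: "6379" });
```

Centralizing the defaults in one helper keeps local and production behavior consistent.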
---
```bash
# Build the image
docker build -t hasher:latest .

# Run the container
docker run -d \
  -p 3000:3000 \
  -e REDIS_HOST=redis \
  -e REDIS_PORT=6379 \
  --name hasher \
  hasher:latest
```
```yaml
services:
  hasher:
    image: hasher:latest
    ports:
      - "3000:3000"
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
    depends_on:
      - redis
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    restart: unless-stopped
    command: redis-server --appendonly yes

volumes:
  redis-data:
```

Run with:

```bash
docker-compose up -d
```
After building (`npm run build`), create the environment file:

```bash
cat > .env.local << EOF
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=your-password
REDIS_DB=0
NODE_ENV=production
EOF
```
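A missing variable in `.env.local` usually surfaces later as a connection timeout. A small sketch of failing fast at startup instead (hypothetical helper, not part of the project):

```typescript
// Sketch: report which required variables are missing or blank, so the app
// can fail fast instead of letting the Redis client hang later.
function requireEnv(env: Record<string, string | undefined>, names: string[]): string[] {
  return names.filter((name) => !env[name] || env[name]!.trim() === "");
}

// Example: REDIS_PORT has not been set yet
const missing = requireEnv({ REDIS_HOST: "localhost" }, ["REDIS_HOST", "REDIS_PORT"]);
```

In the app this check would run once at boot, before the first Redis call.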
---
## Redis Setup

### Option 1: Redis Cloud (Managed)

1. Sign up at [Redis Cloud](https://redis.com/try-free/) or [Upstash](https://upstash.com/)
2. Create a database
3. Note the connection details (host, port, password)
4. Update the `REDIS_HOST`, `REDIS_PORT`, and `REDIS_PASSWORD` environment variables

### Option 2: Self-Hosted

```bash
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install redis-server

# Configure
sudo nano /etc/redis/redis.conf
# Set: bind 0.0.0.0 (to allow remote connections)
# Set: requirepass your-strong-password (for security)

# Start
sudo systemctl start redis-server
sudo systemctl enable redis-server
```
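Both settings above are single `directive value` lines, so an edited config can be sanity-checked in code. An illustrative sketch (hypothetical helper, not a full redis.conf parser; it ignores quoting and repeated directives):

```typescript
// Sketch: extract "directive value" pairs from redis.conf-style text,
// skipping blank lines and comments.
function parseRedisConf(conf: string): Map<string, string> {
  const out = new Map<string, string>();
  for (const raw of conf.split("\n")) {
    const line = raw.trim();
    if (!line || line.startsWith("#")) continue; // blanks and comments
    const space = line.indexOf(" ");
    if (space === -1) continue; // the directives used here all take a value
    out.set(line.slice(0, space), line.slice(space + 1).trim());
  }
  return out;
}

const conf = parseRedisConf("bind 0.0.0.0\n# a comment\nrequirepass your-strong-password");
```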
---
## Security Considerations

### 1. Redis Security

- Enable authentication with `requirepass`
- Use TLS for Redis connections (Redis 6+)
- Restrict network access with firewall rules
- Rotate credentials regularly
- Disable dangerous commands (FLUSHDB, FLUSHALL, etc.)
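One common way to disable commands is `rename-command` in `redis.conf`; renaming a command to the empty string removes it entirely (newer deployments may prefer ACLs instead). For example:

```
# redis.conf: disable dangerous commands by renaming them to the empty string
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command CONFIG ""
```

Note that clients relying on any renamed command (including admin tooling that uses CONFIG) will break, so apply this only to commands the application never issues.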
### 2. Application Security
```bash
# Example UFW firewall rules
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow from YOUR_IP to any port 6379  # Redis
sudo ufw enable
```
```bash
pm2 monit
pm2 logs hasher
```

### Redis Monitoring

```bash
# Health check
redis-cli ping

# Server info
redis-cli INFO

# Database stats
redis-cli INFO stats

# Memory usage
redis-cli INFO memory
```
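`INFO` output is line-oriented: `key:value` pairs grouped under `# Section` headers, which makes it easy to post-process in a monitoring script. An illustrative sketch:

```typescript
// Sketch: parse the "key:value" lines of redis-cli INFO output into a map,
// skipping the "# Section" header lines.
function parseInfo(info: string): Record<string, string> {
  const fields: Record<string, string> = {};
  for (const line of info.split("\r\n").join("\n").split("\n")) {
    if (!line || line.startsWith("#")) continue; // section headers and blanks
    const colon = line.indexOf(":");
    if (colon === -1) continue;
    fields[line.slice(0, colon)] = line.slice(colon + 1);
  }
  return fields;
}

// Example with a fragment of real INFO formatting (CRLF line endings)
const mem = parseInfo("# Memory\r\nused_memory:1024\r\nmaxmemory:0\r\n");
```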
---
## Backup and Recovery

### Redis Backups

```bash
# Enable AOF (Append Only File) persistence
redis-cli CONFIG SET appendonly yes

# Save an RDB snapshot manually
redis-cli SAVE

# Configure automatic snapshots in redis.conf:
# save 900 1      (save if at least 1 key changed in 15 minutes)
# save 300 10     (save if at least 10 keys changed in 5 minutes)
# save 60 10000   (save if at least 10000 keys changed in 1 minute)

# Backup file locations (default)
# RDB: /var/lib/redis/dump.rdb
# AOF: /var/lib/redis/appendonly.aof

# Restore from backup
sudo systemctl stop redis-server
sudo cp /backup/dump.rdb /var/lib/redis/
sudo systemctl start redis-server
```
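The three `save` rules fire independently: a snapshot is triggered as soon as any one rule's elapsed time and change count are both met. A sketch of that trigger logic (illustrative, not the Redis implementation):

```typescript
// Sketch: "save <seconds> <changes>" triggers a snapshot once at least
// <changes> writes have occurred and <seconds> have passed since the last save.
type SaveRule = { seconds: number; changes: number };

function shouldSnapshot(rules: SaveRule[], elapsedSeconds: number, changesSinceSave: number): boolean {
  return rules.some((r) => elapsedSeconds >= r.seconds && changesSinceSave >= r.changes);
}

// The rules from redis.conf above
const rules: SaveRule[] = [
  { seconds: 900, changes: 1 },
  { seconds: 300, changes: 10 },
  { seconds: 60, changes: 10000 },
];
```

This is why a mostly idle instance still snapshots within 15 minutes of a single write, while a busy one snapshots roughly every minute.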
---
1. Deploy multiple Next.js instances
2. Use a load balancer (nginx, HAProxy)
3. Share the same Redis instance or cluster

### Redis Scaling

1. Use Redis Cluster for horizontal scaling
2. Set up Redis Sentinel for high availability
3. Use read replicas for read-heavy workloads
4. Consider Redis Enterprise for advanced features
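For the Sentinel option, a minimal `sentinel.conf` names the master to watch and the quorum of sentinels that must agree before a failover. An illustrative fragment (the master name and address are placeholders):

```
# sentinel.conf: watch one master with a quorum of 2 sentinels
sentinel monitor hasher-master 127.0.0.1 6379 2
sentinel down-after-milliseconds hasher-master 5000
sentinel failover-timeout hasher-master 60000
```

Run at least three sentinels on separate hosts so a quorum can survive the loss of any single machine.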
---
```bash
pm2 status
pm2 logs hasher --lines 100
```

### Check Redis

```bash
redis-cli ping
redis-cli DBSIZE
redis-cli INFO stats
```

### Common Issues

**Issue**: Cannot connect to Redis
- Check firewall rules
- Verify Redis is running: `redis-cli ping`
- Check the `REDIS_HOST`, `REDIS_PORT`, and `REDIS_PASSWORD` environment variables

**Issue**: Out of memory
- Increase Node.js memory: `NODE_OPTIONS=--max-old-space-size=4096`
- Configure Redis `maxmemory` and an eviction policy
- Use Redis persistence (RDB/AOF) carefully

**Issue**: Slow searches
- Verify O(1) lookups are being used (direct key access)
- Check Redis memory and CPU usage
- Consider using Redis Cluster for distribution
- Optimize key patterns
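Direct key access means the key is computed from the input itself, so a lookup is a single `GET` rather than a scan over the keyspace. A sketch of one such scheme (the `hash:<sha256>` key pattern here is an assumption for illustration, not necessarily the project's actual schema):

```typescript
import { createHash } from "node:crypto";

// Sketch: derive a deterministic Redis key from the input. Any client can
// recompute the same key, so a lookup is one O(1) GET, never a SCAN.
function hashKey(input: string): string {
  const digest = createHash("sha256").update(input).digest("hex");
  return `hash:${digest}`;
}

const key = hashKey("hello");
```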
---
1. **Enable Next.js Static Optimization**
2. **Use a CDN for static assets**
3. **Enable Redis pipelining for bulk operations**
4. **Configure an appropriate `maxmemory` for Redis**
5. **Use SSD storage for Redis persistence**
6. **Enable Redis connection pooling (already implemented)**
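Pipelining amortizes network round-trips by sending commands in batches (ioredis exposes this as `pipeline()`). A sketch of the batching side, which is the part worth getting right for very large jobs (the chunk helper is illustrative):

```typescript
// Sketch: split a large list of operations into fixed-size batches; each
// batch would then be sent to Redis as a single pipeline.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// 2500 keys in pipelines of 1000 commands each
const batches = chunk(Array.from({ length: 2500 }, (_, i) => `key:${i}`), 1000);
```

Bounding the batch size keeps any single pipeline's memory and blocking time predictable on both client and server.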
---
For deployment issues, check:

- [Next.js Deployment Docs](https://nextjs.org/docs/deployment)
- [Redis Setup Guide](https://redis.io/docs/getting-started/)
- [ioredis Documentation](https://github.com/redis/ioredis)
- Project GitHub Issues