.env.example (53 lines, normal file)
@@ -0,0 +1,53 @@

# Elasticsearch Configuration
ES_NODE=http://localhost:9200
ES_USERNAME=elastic
ES_PASSWORD=changeme
ES_INDEX=network-packets

# Capture Configuration
# Comma-separated list of interfaces (leave empty for all)
CAPTURE_INTERFACES=

# Enable promiscuous mode
PROMISCUOUS_MODE=false

# Buffer size in bytes
BUFFER_SIZE=10485760

# Custom BPF filter (leave empty to use the filter configuration below)
CAPTURE_FILTER=

# Filter Configuration
# Comma-separated protocols: tcp,udp,icmp
FILTER_PROTOCOLS=

# Comma-separated ports to exclude
EXCLUDE_PORTS=

# Port ranges to exclude (JSON array format)
# Example: [[8000,9000],[3000,3100]]
EXCLUDE_PORT_RANGES=[]

# Comma-separated ports to include (takes precedence over excludes)
INCLUDE_PORTS=

# Content Configuration
# Maximum content size to index in bytes (1MB default)
MAX_CONTENT_SIZE=1048576

# Index readable content
INDEX_READABLE_CONTENT=true

# Cache Configuration (for Elasticsearch failover)
# Maximum documents to keep in memory when ES is down
CACHE_MAX_SIZE=10000

# Check ES availability interval in milliseconds
CACHE_CHECK_INTERVAL=5000

# Logging Configuration
# Log level: debug, info, warn, error
LOG_LEVEL=info

# Statistics interval in seconds
STATS_INTERVAL=60
.gitignore (42 lines, vendored, normal file)
@@ -0,0 +1,42 @@

# Dependencies
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# Environment variables
.env
.env.local
.env.*.local

# Editor directories and files
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store

# Logs
logs/
*.log

# Runtime data
pids/
*.pid
*.seed
*.pid.lock

# Coverage directory
coverage/
.nyc_output/

# Optional npm cache
.npm/

# Optional eslint cache
.eslintcache

# Build output
dist/
build/
README.md (346 lines, normal file)
@@ -0,0 +1,346 @@

# Network Packet Capture & Elasticsearch Indexer

A Node.js-based network packet capture tool that captures packets from network interfaces and indexes them into Elasticsearch for analysis and monitoring.

## Features

- 🔍 **Multi-interface capture**: Capture from one or multiple network interfaces simultaneously
- 🎯 **Flexible filtering**: Filter by protocol (TCP/UDP/ICMP), ports, and port ranges
- 🔒 **Promiscuous mode support**: Optionally capture all packets on the network segment
- 📊 **Elasticsearch integration**: Automatic indexing with optimized mapping
- 💾 **Failover cache**: In-memory cache for packets when Elasticsearch is unavailable
- 📝 **Content extraction**: Captures and indexes readable (ASCII) packet content
- 🚀 **Smart content handling**: Automatically skips large binary content while preserving packet metadata
- 📈 **Real-time statistics**: Track capture performance and statistics
- ⚙️ **Highly configurable**: Environment variables and config file support

## Prerequisites

- Node.js >= 14.0.0
- Elasticsearch 7.x or 8.x
- Root/Administrator privileges (required for packet capture)
- Linux: libpcap-dev (`apt-get install libpcap-dev`)
- macOS: Xcode Command Line Tools

### Installing System Dependencies

**Ubuntu/Debian:**
```bash
sudo apt-get update
sudo apt-get install libpcap-dev build-essential
```

**CentOS/RHEL:**
```bash
sudo yum install libpcap-devel gcc-c++ make
```

**macOS:**
```bash
xcode-select --install
```

## Installation

1. Clone or navigate to the project directory:
```bash
cd /path/to/netpcap
```

2. Install Node.js dependencies:
```bash
npm install
```

3. Copy the example environment file and configure it:
```bash
cp .env.example .env
# Edit .env with your configuration
```

## Configuration

Configuration can be done via environment variables or by editing the `config.js` file directly.

### Elasticsearch Configuration

```bash
ES_NODE=http://localhost:9200
ES_USERNAME=elastic
ES_PASSWORD=your_password
ES_INDEX=network-packets
```

### Capture Settings

**Interfaces:**
```bash
# Capture from specific interfaces
CAPTURE_INTERFACES=eth0,wlan0

# Leave empty to capture from all available interfaces
CAPTURE_INTERFACES=
```

**Promiscuous Mode:**
```bash
# Enable to capture all packets on the network segment
PROMISCUOUS_MODE=true
```

### Filtering

**Protocol Filtering:**
```bash
# Only capture specific protocols
FILTER_PROTOCOLS=tcp,udp

# Capture all protocols (leave empty)
FILTER_PROTOCOLS=
```

**Port Filtering:**
```bash
# Exclude specific ports (e.g., SSH, HTTP, HTTPS)
EXCLUDE_PORTS=22,80,443

# Exclude port ranges
EXCLUDE_PORT_RANGES=[[8000,9000],[3000,3100]]

# Only capture specific ports (takes precedence)
INCLUDE_PORTS=3306,5432
```

**Custom BPF Filter:**
```bash
# Use custom Berkeley Packet Filter syntax
CAPTURE_FILTER="tcp and not port 22"
```
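When `CAPTURE_FILTER` is empty, a BPF expression is assembled from the protocol and port settings (see `buildBPFFilter()` in `index.js`). A simplified sketch of that assembly; the helper name and argument shape here are illustrative:

```javascript
// Sketch: combine protocol/port settings into a single BPF expression,
// mirroring the logic of buildBPFFilter() in index.js.
function buildFilter({ protocols = [], excludePorts = [], excludePortRanges = [], includePorts = [] }) {
  const parts = [];
  if (protocols.length) parts.push(`(${protocols.join(' or ')})`);           // e.g. (tcp or udp)
  for (const p of excludePorts) parts.push(`not port ${p}`);                 // e.g. not port 22
  for (const [a, b] of excludePortRanges) parts.push(`not portrange ${a}-${b}`);
  if (includePorts.length) parts.push(`(${includePorts.map(p => `port ${p}`).join(' or ')})`);
  return parts.join(' and ');
}

// FILTER_PROTOCOLS=tcp with EXCLUDE_PORTS=22 yields: "(tcp) and not port 22"
```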
### Content Indexing

```bash
# Maximum content size to index (1MB default)
MAX_CONTENT_SIZE=1048576

# Enable/disable content indexing
INDEX_READABLE_CONTENT=true
```

### Cache System (Elasticsearch Failover)

The application includes an in-memory cache system to handle Elasticsearch outages:

```bash
# Maximum documents to cache in memory (default: 10000)
CACHE_MAX_SIZE=10000

# ES availability check interval in milliseconds (default: 5000)
CACHE_CHECK_INTERVAL=5000
```

**How it works:**
- When Elasticsearch is unavailable, packets are stored in the in-memory cache
- The system periodically checks ES availability (every 5 seconds by default)
- When ES comes back online, cached documents are automatically flushed
- If the cache reaches its maximum size, the oldest documents are removed (FIFO)
- On graceful shutdown (SIGINT/SIGTERM), the system attempts to flush all cached documents
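The FIFO behavior can be sketched in a few lines; this is a simplified model of the `documentCache` / `addToCache` logic in `index.js`, not the actual implementation:

```javascript
// Simplified model of the failover cache: a bounded FIFO buffer that
// evicts the oldest document when the size limit is reached.
class FifoCache {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.items = [];
    this.overflows = 0;
  }
  push(doc) {
    if (this.items.length >= this.maxSize) {
      this.items.shift();      // evict oldest (FIFO)
      this.overflows++;
    }
    this.items.push(doc);
  }
  drain() {                    // e.g. flushed when ES comes back online
    return this.items.splice(0, this.items.length);
  }
}
```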
## Usage

### Basic Usage

Run with the default configuration:
```bash
sudo npm start
```

Or directly:
```bash
sudo node index.js
```

### Capture from a Specific Interface

```bash
sudo CAPTURE_INTERFACES=eth0 node index.js
```

### Capture Only HTTP/HTTPS Traffic

```bash
sudo INCLUDE_PORTS=80,443 FILTER_PROTOCOLS=tcp node index.js
```

### Exclude SSH and High Ports

```bash
sudo EXCLUDE_PORTS=22 EXCLUDE_PORT_RANGES=[[8000,65535]] node index.js
```

### Enable Promiscuous Mode

```bash
sudo PROMISCUOUS_MODE=true node index.js
```

### Debug Mode

```bash
sudo LOG_LEVEL=debug node index.js
```

## Elasticsearch Index Structure

The tool creates an index with the following document structure (`content_type: "binary"` is set instead of `content` when the payload is not readable):

```json
{
  "@timestamp": "2026-02-11T10:30:00.000Z",
  "interface": {
    "name": "eth0",
    "ip": "192.168.1.100",
    "mac": "aa:bb:cc:dd:ee:ff"
  },
  "ethernet": {
    "src": "aa:bb:cc:dd:ee:ff",
    "dst": "11:22:33:44:55:66",
    "type": 2048
  },
  "ip": {
    "version": 4,
    "src": "192.168.1.100",
    "dst": "8.8.8.8",
    "protocol": 6,
    "ttl": 64,
    "length": 60
  },
  "tcp": {
    "src_port": 54321,
    "dst_port": 443,
    "flags": {
      "syn": true,
      "ack": false,
      "fin": false,
      "rst": false,
      "psh": false
    },
    "seq": 123456789,
    "ack_seq": 0,
    "window": 65535
  },
  "content": "GET / HTTP/1.1\r\nHost: example.com\r\n",
  "content_length": 1024
}
```

## Querying Captured Data

### Example Elasticsearch Queries

**Find all packets from a specific IP:**
```bash
curl -X GET "localhost:9200/network-packets/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "term": {
      "ip.src": "192.168.1.100"
    }
  }
}
'
```

**Find all SYN packets (connection attempts):**
```bash
curl -X GET "localhost:9200/network-packets/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must": [
        { "term": { "tcp.flags.syn": true } },
        { "term": { "tcp.flags.ack": false } }
      ]
    }
  }
}
'
```

**Find packets with readable content:**
```bash
curl -X GET "localhost:9200/network-packets/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "exists": {
      "field": "content"
    }
  }
}
'
```
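The same queries can be issued from Node.js. The sketch below only builds the query object for the SYN-scan example above; the commented usage assumes the 8.x `@elastic/elasticsearch` client, where `client.search` takes a top-level `query` (the 7.x client expects it under `body` instead):

```javascript
// Query body for "SYN without ACK" (connection attempts), matching the
// curl example above. Usable with the @elastic/elasticsearch client.
function synScanQuery() {
  return {
    bool: {
      must: [
        { term: { 'tcp.flags.syn': true } },
        { term: { 'tcp.flags.ack': false } }
      ]
    }
  };
}

// With an 8.x client:
// const { Client } = require('@elastic/elasticsearch');
// const client = new Client({ node: process.env.ES_NODE });
// const res = await client.search({ index: 'network-packets', query: synScanQuery() });
```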
## Performance Considerations

- **Promiscuous mode** can generate high packet volumes on busy networks
- **Content indexing** increases storage requirements significantly
- Use **port filters** to reduce the captured packet volume
- Adjust `MAX_CONTENT_SIZE` based on your storage capacity
- Monitor Elasticsearch cluster health when capturing high-volume traffic
- The **cache system** protects against data loss during ES outages but consumes memory
- Adjust `CACHE_MAX_SIZE` based on available RAM (each packet takes roughly 1-5 KB in memory)
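As a rough sizing check for the last point (the helper name is illustrative, and the ~5 KB/packet figure is the upper bound noted above):

```javascript
// Rough worst-case RAM estimate for the failover cache.
function cacheMemoryEstimateMB(maxDocs, bytesPerDoc = 5 * 1024) {
  return (maxDocs * bytesPerDoc) / (1024 * 1024);
}

// The default CACHE_MAX_SIZE=10000 at ~5 KB/doc is just under 49 MB worst case.
```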
## Troubleshooting

### Permission Denied Errors

Packet capture requires root privileges:
```bash
sudo node index.js
```

### Interface Not Found

List available interfaces:
```bash
ip link show   # Linux
ifconfig       # macOS/Unix
```

### Elasticsearch Connection Failed

Verify Elasticsearch is running:
```bash
curl -X GET "localhost:9200"
```

### No Packets Being Captured

1. Check that the interface is up and receiving traffic
2. Verify the filter configuration isn't too restrictive
3. Try running without filters first
4. Check system firewall settings

## Security Considerations

⚠️ **Important Security Notes:**

- This tool captures network traffic, which may contain sensitive information
- Store Elasticsearch credentials securely
- Restrict access to the Elasticsearch index
- Be aware of privacy and legal implications when capturing network traffic
- Use encryption for Elasticsearch connections in production
- Comply with applicable laws and regulations

## License

MIT

## Author

ale

## Contributing

Contributions are welcome! Please feel free to submit issues or pull requests.
config.js (81 lines, normal file)
@@ -0,0 +1,81 @@

/**
 * Network Packet Capture Configuration
 * Adjust these settings according to your environment and requirements
 */

module.exports = {
  // Elasticsearch configuration
  elasticsearch: {
    node: process.env.ES_NODE || 'http://localhost:9200',
    auth: {
      username: process.env.ES_USERNAME || 'elastic',
      password: process.env.ES_PASSWORD || 'changeme'
    },
    index: process.env.ES_INDEX || 'network-packets'
  },

  // Network capture settings
  capture: {
    // Network interfaces to capture from (empty array = all available interfaces)
    // Example: ['eth0', 'wlan0']
    interfaces: process.env.CAPTURE_INTERFACES ? process.env.CAPTURE_INTERFACES.split(',') : [],

    // Enable promiscuous mode (capture all packets on the network segment)
    promiscuousMode: process.env.PROMISCUOUS_MODE === 'true',

    // Buffer size in bytes for packet capture
    bufferSize: parseInt(process.env.BUFFER_SIZE, 10) || 10 * 1024 * 1024, // 10 MB

    // Capture filter (BPF syntax)
    // If not set, a filter is built dynamically from the filters below
    filter: process.env.CAPTURE_FILTER || null
  },

  // Packet filtering options
  filters: {
    // Protocols to capture (empty array = all protocols)
    // Options: 'tcp', 'udp', 'icmp'
    protocols: process.env.FILTER_PROTOCOLS ? process.env.FILTER_PROTOCOLS.split(',') : [],

    // Ports to exclude from capture
    // Example: [22, 80, 443]
    excludePorts: process.env.EXCLUDE_PORTS ? process.env.EXCLUDE_PORTS.split(',').map(Number) : [],

    // Port ranges to exclude from capture
    // Example: [[8000, 9000], [3000, 3100]]
    excludePortRanges: process.env.EXCLUDE_PORT_RANGES ?
      JSON.parse(process.env.EXCLUDE_PORT_RANGES) : [],

    // Ports to include (if specified, only these ports will be captured)
    includePorts: process.env.INCLUDE_PORTS ? process.env.INCLUDE_PORTS.split(',').map(Number) : []
  },

  // Content indexing settings
  content: {
    // Maximum content size to index (in bytes)
    // Content larger than this will not be indexed
    maxContentSize: parseInt(process.env.MAX_CONTENT_SIZE, 10) || 1024 * 1024, // 1 MB

    // Try to detect and index ASCII/readable content
    indexReadableContent: process.env.INDEX_READABLE_CONTENT !== 'false'
  },

  // Cache settings for Elasticsearch failover
  cache: {
    // Maximum number of documents to keep in the memory cache
    // when Elasticsearch is unavailable
    maxSize: parseInt(process.env.CACHE_MAX_SIZE, 10) || 10000,

    // Interval to check ES availability and flush the cache (in milliseconds)
    checkInterval: parseInt(process.env.CACHE_CHECK_INTERVAL, 10) || 5000
  },

  // Logging options
  logging: {
    // Log level: 'debug', 'info', 'warn', 'error'
    level: process.env.LOG_LEVEL || 'info',

    // Log packet statistics every N seconds
    statsInterval: parseInt(process.env.STATS_INTERVAL, 10) || 60
  }
};
index.js (687 lines, normal file)
@@ -0,0 +1,687 @@

#!/usr/bin/env node

/**
 * Network Packet Capture and Elasticsearch Indexer
 * Captures network packets and indexes them to Elasticsearch
 */

const Cap = require('cap').Cap;
const decoders = require('cap').decoders;
const PROTOCOL = decoders.PROTOCOL;
const { Client } = require('@elastic/elasticsearch');
const os = require('os');
const config = require('./config');

// Initialize Elasticsearch client
const esClient = new Client({
  node: config.elasticsearch.node,
  auth: config.elasticsearch.auth
});

// Memory cache for failed indexing
const documentCache = [];
let maxCacheSize = config.cache.maxSize;
let esAvailable = true;
let lastESCheckTime = Date.now();
const ES_CHECK_INTERVAL = config.cache.checkInterval;

// Statistics tracking
const stats = {
  packetsProcessed: 0,
  packetsIndexed: 0,
  packetsSkipped: 0,
  contentSkipped: 0,
  cachedDocuments: 0,
  cacheOverflows: 0,
  errors: 0,
  startTime: Date.now()
};

/**
 * Logger utility
 */
const logger = {
  debug: (...args) => config.logging.level === 'debug' && console.log('[DEBUG]', ...args),
  info: (...args) => ['debug', 'info'].includes(config.logging.level) && console.log('[INFO]', ...args),
  warn: (...args) => ['debug', 'info', 'warn'].includes(config.logging.level) && console.warn('[WARN]', ...args),
  error: (...args) => console.error('[ERROR]', ...args)
};

/**
 * Get network interface information
 */
function getInterfaceInfo(interfaceName) {
  const interfaces = os.networkInterfaces();
  const iface = interfaces[interfaceName];

  if (!iface) return null;

  // Find IPv4 address
  const ipv4 = iface.find(addr => addr.family === 'IPv4');
  return {
    name: interfaceName,
    ip: ipv4 ? ipv4.address : null,
    mac: ipv4 ? ipv4.mac : null
  };
}

/**
 * Get all available network interfaces
 */
function getAvailableInterfaces() {
  const interfaces = os.networkInterfaces();
  return Object.keys(interfaces).filter(name => {
    const iface = interfaces[name];
    // Keep only interfaces with a non-internal IPv4 address (filters out loopback)
    return iface.some(addr => addr.family === 'IPv4' && !addr.internal);
  });
}

/**
 * Build BPF filter string based on configuration
 */
function buildBPFFilter() {
  if (config.capture.filter) {
    return config.capture.filter;
  }

  const filters = [];

  // Protocol filter
  if (config.filters.protocols.length > 0) {
    const protoFilter = config.filters.protocols.map(p => p.toLowerCase()).join(' or ');
    filters.push(`(${protoFilter})`);
  }

  // Port exclusion filter
  if (config.filters.excludePorts.length > 0) {
    const portFilters = config.filters.excludePorts.map(port =>
      `not port ${port}`
    );
    filters.push(...portFilters);
  }

  // Port range exclusion filter
  if (config.filters.excludePortRanges.length > 0) {
    const rangeFilters = config.filters.excludePortRanges.map(([start, end]) =>
      `not portrange ${start}-${end}`
    );
    filters.push(...rangeFilters);
  }

  // Port inclusion filter (takes precedence)
  if (config.filters.includePorts.length > 0) {
    const includeFilter = config.filters.includePorts.map(port =>
      `port ${port}`
    ).join(' or ');
    filters.push(`(${includeFilter})`);
  }

  return filters.length > 0 ? filters.join(' and ') : '';
}

/**
 * Check if content is ASCII/readable
 */
function isReadableContent(buffer) {
  if (!buffer || buffer.length === 0) return false;

  let readableChars = 0;
  const sampleSize = Math.min(buffer.length, 100); // Sample first 100 bytes

  for (let i = 0; i < sampleSize; i++) {
    const byte = buffer[i];
    // Check for printable ASCII characters and common whitespace
    if ((byte >= 32 && byte <= 126) || byte === 9 || byte === 10 || byte === 13) {
      readableChars++;
    }
  }

  // Consider readable if more than 70% are printable characters
  return (readableChars / sampleSize) > 0.7;
}

/**
 * Extract readable content from buffer
 */
function extractContent(buffer, maxSize) {
  if (!buffer || buffer.length === 0) {
    return null;
  }

  // Skip if too large
  if (buffer.length > maxSize) {
    stats.contentSkipped++;
    return null;
  }

  if (!config.content.indexReadableContent) {
    return null;
  }

  // Check if content is readable
  if (isReadableContent(buffer)) {
    try {
      return buffer.toString('utf8', 0, Math.min(buffer.length, maxSize));
    } catch (e) {
      logger.debug('Failed to convert buffer to string:', e.message);
      return null;
    }
  }

  return null;
}

/**
 * Add document to cache
 */
function addToCache(document) {
  if (documentCache.length >= maxCacheSize) {
    // Remove oldest document if cache is full
    documentCache.shift();
    stats.cacheOverflows++;
    logger.warn(`Cache overflow: removed oldest document (cache size: ${maxCacheSize})`);
  }
  documentCache.push(document);
  stats.cachedDocuments = documentCache.length;
  logger.debug(`Document added to cache (total: ${documentCache.length})`);
}

/**
 * Try to flush cached documents to Elasticsearch
 */
async function flushCache() {
  if (documentCache.length === 0) return;

  logger.info(`Attempting to flush ${documentCache.length} cached documents...`);

  const documentsToFlush = [...documentCache];
  let flushedCount = 0;

  for (const document of documentsToFlush) {
    try {
      await esClient.index({
        index: config.elasticsearch.index,
        document: document
      });

      // Remove from cache on success
      const index = documentCache.indexOf(document);
      if (index > -1) {
        documentCache.splice(index, 1);
      }

      flushedCount++;
      stats.packetsIndexed++;

    } catch (error) {
      logger.debug(`Failed to flush cached document: ${error.message}`);
      // Stop trying if ES is still unavailable
      break;
    }
  }

  stats.cachedDocuments = documentCache.length;

  if (flushedCount > 0) {
    logger.info(`Successfully flushed ${flushedCount} documents. Remaining in cache: ${documentCache.length}`);
  }

  return flushedCount > 0;
}

/**
 * Check Elasticsearch availability
 */
async function checkESAvailability() {
  try {
    await esClient.ping();

    if (!esAvailable) {
      logger.info('Elasticsearch connection restored!');
      esAvailable = true;

      // Try to flush cache
      await flushCache();
    }

    return true;
  } catch (error) {
    if (esAvailable) {
      logger.error('Elasticsearch connection lost!');
      esAvailable = false;
    }
    return false;
  }
}

/**
 * Index document to Elasticsearch with cache fallback
 */
async function indexDocument(document) {
  // First, try to flush the cache if we have pending documents
  if (documentCache.length > 0 && esAvailable) {
    const now = Date.now();
    if (now - lastESCheckTime > ES_CHECK_INTERVAL) {
      await flushCache();
      lastESCheckTime = now;
    }
  }

  try {
    await esClient.index({
      index: config.elasticsearch.index,
      document: document
    });

    stats.packetsIndexed++;
    esAvailable = true;
    logger.debug('Document indexed successfully');

  } catch (error) {
    logger.warn(`Failed to index document: ${error.message}. Adding to cache.`);
    esAvailable = false;
    addToCache(document);
    stats.errors++;
  }
}

/**
 * Parse and index a packet
 */
async function processPacket(buffer, interfaceInfo) {
  stats.packetsProcessed++;

  try {
    // Decode Ethernet layer
    const ret = decoders.Ethernet(buffer);

    if (!ret || !ret.info) {
      stats.packetsSkipped++;
      return;
    }

    const packet = {
      '@timestamp': new Date().toISOString(),
      date: new Date().toISOString(),
      interface: {
        name: interfaceInfo.name,
        ip: interfaceInfo.ip,
        mac: interfaceInfo.mac
      },
      ethernet: {
        src: ret.info.srcmac,
        dst: ret.info.dstmac,
        type: ret.info.type
      }
    };

    // Decode IP layer
    if (ret.info.type === PROTOCOL.ETHERNET.IPV4) {
      const ipRet = decoders.IPV4(buffer, ret.offset);

      if (ipRet) {
        packet.ip = {
          version: 4,
          src: ipRet.info.srcaddr,
          dst: ipRet.info.dstaddr,
          protocol: ipRet.info.protocol,
          ttl: ipRet.info.ttl,
          length: ipRet.info.totallen
        };

        // Decode TCP
        if (ipRet.info.protocol === PROTOCOL.IP.TCP) {
          const tcpRet = decoders.TCP(buffer, ipRet.offset);

          if (tcpRet) {
            packet.tcp = {
              src_port: tcpRet.info.srcport,
              dst_port: tcpRet.info.dstport,
              flags: {
                syn: !!(tcpRet.info.flags & 0x02),
                ack: !!(tcpRet.info.flags & 0x10),
                fin: !!(tcpRet.info.flags & 0x01),
                rst: !!(tcpRet.info.flags & 0x04),
                psh: !!(tcpRet.info.flags & 0x08)
              },
              seq: tcpRet.info.seqno,
              ack_seq: tcpRet.info.ackno,
              window: tcpRet.info.window
            };

            // Extract payload
            if (tcpRet.offset < buffer.length) {
              const payload = buffer.slice(tcpRet.offset);
              const content = extractContent(payload, config.content.maxContentSize);
              if (content) {
                packet.content = content;
                packet.content_length = payload.length;
              } else if (payload.length > 0) {
                packet.content_length = payload.length;
                packet.content_type = 'binary';
              }
            }
          }
        }
        // Decode UDP
        else if (ipRet.info.protocol === PROTOCOL.IP.UDP) {
          const udpRet = decoders.UDP(buffer, ipRet.offset);

          if (udpRet) {
            packet.udp = {
              src_port: udpRet.info.srcport,
              dst_port: udpRet.info.dstport,
              length: udpRet.info.length
            };

            // Extract payload
            if (udpRet.offset < buffer.length) {
              const payload = buffer.slice(udpRet.offset);
              const content = extractContent(payload, config.content.maxContentSize);
              if (content) {
                packet.content = content;
                packet.content_length = payload.length;
              } else if (payload.length > 0) {
                packet.content_length = payload.length;
                packet.content_type = 'binary';
              }
            }
          }
        }
        // Handle ICMP
        else if (ipRet.info.protocol === PROTOCOL.IP.ICMP) {
          packet.icmp = {
            protocol: 'icmp'
          };
        }
      }
    }
    // Handle IPv6
    else if (ret.info.type === PROTOCOL.ETHERNET.IPV6) {
      const ipv6Ret = decoders.IPV6(buffer, ret.offset);

      if (ipv6Ret) {
        packet.ip = {
          version: 6,
          src: ipv6Ret.info.srcaddr,
          dst: ipv6Ret.info.dstaddr,
          protocol: ipv6Ret.info.protocol,
          hop_limit: ipv6Ret.info.hoplimit
        };
      }
    }

    // Index to Elasticsearch (with cache fallback)
    await indexDocument(packet);

  } catch (error) {
    stats.errors++;
    logger.error('Error processing packet:', error.message);
  }
}

/**
 * Setup packet capture for an interface
 */
function setupCapture(interfaceName) {
  const interfaceInfo = getInterfaceInfo(interfaceName);

  if (!interfaceInfo || !interfaceInfo.ip) {
    logger.warn(`Interface ${interfaceName} not found or has no IPv4 address`);
    return null;
  }

  const cap = new Cap();
  const device = interfaceName;
  const filter = buildBPFFilter();
  const bufferSize = config.capture.bufferSize;
  // Packets are written into this buffer by the cap library
  const captureBuffer = Buffer.alloc(65535);

  try {
    const linkType = cap.open(device, filter, bufferSize, captureBuffer);

    logger.info(`Capturing on interface: ${interfaceName} (${interfaceInfo.ip})`);
    logger.info(`Promiscuous mode: ${config.capture.promiscuousMode ? 'enabled' : 'disabled'}`);
    if (filter) {
      logger.info(`BPF filter: ${filter}`);
    }
    logger.info(`Link type: ${linkType}`);

    cap.setMinBytes(0);

    cap.on('packet', (nbytes, trunc) => {
      if (linkType === 'ETHERNET') {
        const buffer = captureBuffer.slice(0, nbytes);
        processPacket(buffer, interfaceInfo).catch(err => {
          logger.error('Failed to process packet:', err.message);
        });
      }
    });

    return cap;

  } catch (error) {
    logger.error(`Failed to setup capture on ${interfaceName}:`, error.message);
    return null;
  }
}

/**
 * Initialize Elasticsearch index with mapping
 */
async function initializeElasticsearch() {
  try {
    // Check if index exists
    const indexExists = await esClient.indices.exists({
      index: config.elasticsearch.index
    });

    if (!indexExists) {
      logger.info(`Creating Elasticsearch index: ${config.elasticsearch.index}`);

      await esClient.indices.create({
        index: config.elasticsearch.index,
        body: {
          mappings: {
            properties: {
              '@timestamp': { type: 'date' },
              date: { type: 'date' },
              interface: {
                properties: {
                  name: { type: 'keyword' },
                  ip: { type: 'ip' },
                  mac: { type: 'keyword' }
                }
              },
              ethernet: {
                properties: {
                  src: { type: 'keyword' },
                  dst: { type: 'keyword' },
                  type: { type: 'integer' }
                }
              },
              ip: {
                properties: {
                  version: { type: 'integer' },
                  src: { type: 'ip' },
                  dst: { type: 'ip' },
                  protocol: { type: 'integer' },
                  ttl: { type: 'integer' },
                  length: { type: 'integer' },
                  hop_limit: { type: 'integer' }
                }
              },
              tcp: {
                properties: {
                  src_port: { type: 'integer' },
                  dst_port: { type: 'integer' },
                  flags: {
                    properties: {
                      syn: { type: 'boolean' },
                      ack: { type: 'boolean' },
                      fin: { type: 'boolean' },
                      rst: { type: 'boolean' },
                      psh: { type: 'boolean' }
                    }
                  },
                  seq: { type: 'long' },
                  ack_seq: { type: 'long' },
                  window: { type: 'integer' }
                }
              },
              udp: {
                properties: {
                  src_port: { type: 'integer' },
                  dst_port: { type: 'integer' },
                  length: { type: 'integer' }
                }
              },
              icmp: {
                properties: {
                  protocol: { type: 'keyword' }
                }
              },
              content: { type: 'text' },
              content_length: { type: 'integer' },
              content_type: { type: 'keyword' }
            }
          }
        }
      });

      logger.info('Elasticsearch index created successfully');
    } else {
      logger.info(`Using existing Elasticsearch index: ${config.elasticsearch.index}`);
    }

  } catch (error) {
    logger.error('Failed to initialize Elasticsearch:', error.message);
    throw error;
  }
}

/**
 * Print statistics
 */
function printStats() {
  const uptime = Math.floor((Date.now() - stats.startTime) / 1000);
  const rate = uptime > 0 ? (stats.packetsProcessed / uptime).toFixed(2) : 0;

  logger.info('=== Packet Capture Statistics ===');
  logger.info(`Uptime: ${uptime}s`);
  logger.info(`Packets processed: ${stats.packetsProcessed}`);
  logger.info(`Packets indexed: ${stats.packetsIndexed}`);
  logger.info(`Packets skipped: ${stats.packetsSkipped}`);
  logger.info(`Content skipped (too large): ${stats.contentSkipped}`);
  logger.info(`Cached documents: ${stats.cachedDocuments}`);
  logger.info(`Cache overflows: ${stats.cacheOverflows}`);
  logger.info(`Elasticsearch status: ${esAvailable ? 'connected' : 'disconnected'}`);
  logger.info(`Errors: ${stats.errors}`);
  logger.info(`Processing rate: ${rate} packets/sec`);
  logger.info('================================');
}

/**
 * Main function
 */
async function main() {
  logger.info('Network Packet Capture Starting...');

  // Check for root/admin privileges
  if (process.getuid && process.getuid() !== 0) {
    logger.warn('Warning: Not running as root. Packet capture may fail.');
    logger.warn('Consider running with: sudo node index.js');
  }

  // Initialize Elasticsearch
  try {
    await initializeElasticsearch();
  } catch (error) {
    logger.error('Failed to initialize Elasticsearch. Exiting.');
    process.exit(1);
  }

  // Determine interfaces to capture
  let interfaces = config.capture.interfaces;

  if (interfaces.length === 0) {
    interfaces = getAvailableInterfaces();
    logger.info(`No interfaces specified. Using all available: ${interfaces.join(', ')}`);
  }

  if (interfaces.length === 0) {
    logger.error('No network interfaces available for capture');
    process.exit(1);
  }

  // Setup capture on each interface
  const captures = [];
  for (const iface of interfaces) {
|
||||
const cap = setupCapture(iface);
|
||||
if (cap) {
|
||||
captures.push(cap);
|
||||
}
|
||||
}
|
||||
|
||||
if (captures.length === 0) {
|
||||
logger.error('Failed to setup capture on any interface');
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Setup statistics reporting
|
||||
setInterval(printStats, config.logging.statsInterval * 1000);
|
||||
|
||||
// Setup periodic ES availability check and cache flush
|
||||
setInterval(async () => {
|
||||
await checkESAvailability();
|
||||
}, ES_CHECK_INTERVAL);
|
||||
|
||||
// Graceful shutdown
|
||||
process.on('SIGINT', async () => {
|
||||
logger.info('\nShutting down...');
|
||||
|
||||
// Try to flush remaining cached documents
|
||||
if (documentCache.length > 0) {
|
||||
logger.info(`Attempting to flush ${documentCache.length} cached documents before exit...`);
|
||||
try {
|
||||
await flushCache();
|
||||
if (documentCache.length > 0) {
|
||||
logger.warn(`Warning: ${documentCache.length} documents remain in cache and will be lost`);
|
||||
}
|
||||
} catch (error) {
|
||||
logger.error('Failed to flush cache on shutdown:', error.message);
|
||||
}
|
||||
}
|
||||
|
||||
printStats();
|
||||
process.exit(0);
|
||||
});
|
||||
|
||||
process.on('SIGTERM', async () => {
|
||||
logger.info('\nShutting down...');
|
||||
|
||||
// Try to flush remaining cached documents
|
||||
if (documentCache.length > 0) {
|
||||
logger.info(`Attempting to flush ${documentCache.length} cached documents before exit...`);
|
||||
try {
|
||||
await flushCache();
|
||||
if (documentCache.length > 0) {
|
||||
logger.warn(`Warning: ${documentCache.length} documents remain in cache and will be lost`);
|
||||
}
|
||||
} catch (error) {
|
||||
logger.error('Failed to flush cache on shutdown:', error.message);
|
||||
}
|
||||
}
|
||||
|
||||
printStats();
|
||||
process.exit(0);
|
||||
});
|
||||
|
||||
logger.info('Packet capture running. Press Ctrl+C to stop.');
|
||||
}
|
||||
|
||||
// Run the application
|
||||
main().catch(error => {
|
||||
logger.error('Fatal error:', error);
|
||||
process.exit(1);
|
||||
});
|
||||
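Once packets are indexed under the mapping above, they can be queried with the same `@elastic/elasticsearch` client the capture uses. A minimal sketch of building a search request that finds bare-SYN packets (SYN set, ACK clear — a common port-scan signature) over the last 15 minutes, grouped by source IP; the `buildSynScanQuery` helper and the 15-minute window are illustrative, not part of the project, but the field names come straight from the index mapping:

```javascript
// Sketch: build a search request body against the mapping above.
// Finds bare-SYN packets in the last 15 minutes, grouped by source IP.
function buildSynScanQuery(index) {
  return {
    index,
    size: 0, // aggregation only; no hits needed
    query: {
      bool: {
        filter: [
          { term: { 'tcp.flags.syn': true } },
          { term: { 'tcp.flags.ack': false } },
          { range: { '@timestamp': { gte: 'now-15m' } } }
        ]
      }
    },
    aggs: {
      by_source: {
        terms: { field: 'ip.src', size: 10 }
      }
    }
  };
}

// Hypothetical usage with the client:
// const res = await esClient.search(buildSynScanQuery('network-packets'));
```

Note that `ip.src` is mapped as type `ip`, so the terms aggregation buckets by address directly, and `tcp.flags.*` are booleans, so exact `term` filters apply.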
76
install.sh
Executable file
@@ -0,0 +1,76 @@
#!/bin/bash

# Network Packet Capture - Installation Script
# This script installs system dependencies and Node.js packages

set -e

echo "=================================="
echo "Network Packet Capture - Installer"
echo "=================================="
echo ""

# Use sudo only when not already running as root
if [ "$EUID" -eq 0 ]; then
  SUDO=""
else
  SUDO="sudo"
fi

# Detect OS
if [ -f /etc/os-release ]; then
  . /etc/os-release
  OS=$ID
else
  OS=$(uname -s)
fi

echo "Detected OS: $OS"
echo ""

# Install system dependencies
echo "Installing system dependencies..."
case $OS in
  ubuntu|debian|linuxmint)
    echo "Installing libpcap-dev and build-essential..."
    $SUDO apt-get update
    $SUDO apt-get install -y libpcap-dev build-essential
    ;;
  fedora|rhel|centos)
    echo "Installing libpcap-devel and development tools..."
    $SUDO yum install -y libpcap-devel gcc-c++ make
    ;;
  arch|manjaro)
    echo "Installing libpcap and base-devel..."
    $SUDO pacman -S --noconfirm libpcap base-devel
    ;;
  Darwin)
    echo "Installing Xcode Command Line Tools..."
    xcode-select --install || echo "Xcode tools already installed"
    ;;
  *)
    echo "Warning: Unknown OS. Please install libpcap development libraries manually."
    echo "For Debian/Ubuntu: sudo apt-get install libpcap-dev build-essential"
    echo "For RHEL/CentOS: sudo yum install libpcap-devel gcc-c++ make"
    exit 1
    ;;
esac

echo ""
echo "System dependencies installed successfully!"
echo ""

# Install Node.js dependencies
echo "Installing Node.js dependencies..."
npm install

echo ""
echo "=================================="
echo "Installation completed successfully!"
echo "=================================="
echo ""
echo "Next steps:"
echo "1. Configure your settings: cp .env.example .env && nano .env"
echo "2. Make sure Elasticsearch is running"
echo "3. Run the capture: sudo npm start"
echo ""
28
package.json
Normal file
@@ -0,0 +1,28 @@
{
  "name": "netpcap",
  "version": "1.0.0",
  "description": "Network packet capture tool with Elasticsearch indexing",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "packet",
    "capture",
    "pcap",
    "network",
    "elasticsearch",
    "monitoring"
  ],
  "author": "ale",
  "license": "MIT",
  "private": true,
  "dependencies": {
    "@elastic/elasticsearch": "^8.11.0",
    "cap": "^0.2.1"
  },
  "engines": {
    "node": ">=14.0.0"
  }
}
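The failover cache that the statistics report on (`CACHE_MAX_SIZE` in `.env.example`, the `cachedDocuments` and `cacheOverflows` counters) is implemented in a part of index.js not shown in this diff. A minimal sketch of the likely bounded-buffer behavior — buffer documents in memory while Elasticsearch is down, and once the cap is reached drop the oldest entry and count an overflow. The helper name `cacheDocument` and the stats shape are illustrative assumptions, not the project's actual code:

```javascript
// Hypothetical sketch of a bounded failover cache: documents are buffered
// up to a cap; once full, the oldest entry is dropped and an overflow counted.
const CACHE_MAX_SIZE = 3; // tiny cap for illustration; .env.example defaults to 10000

const documentCache = [];
const stats = { cachedDocuments: 0, cacheOverflows: 0 };

function cacheDocument(doc) {
  if (documentCache.length >= CACHE_MAX_SIZE) {
    documentCache.shift(); // drop the oldest document to stay within the cap
    stats.cacheOverflows++;
  }
  documentCache.push(doc);
  stats.cachedDocuments = documentCache.length;
}

// Simulate five documents arriving while Elasticsearch is unreachable
for (let i = 1; i <= 5; i++) {
  cacheDocument({ seq: i });
}
// The cache now holds the 3 newest documents; two overflows were recorded.
```

Dropping the oldest entries favors recent traffic when memory is constrained; an alternative policy would be to reject new documents instead, which preserves the earliest packets of an incident at the cost of losing current ones.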