initial commit

Signed-off-by: ale <ale@manalejandro.com>
This commit is contained in:
ale
2025-12-12 22:13:54 +01:00
commit 0c913a770f
18 files changed, 3334 insertions(+), 0 deletions(-)

.gitignore (new file, 56 lines)

@@ -0,0 +1,56 @@
# Dependencies
node_modules/
# Build output
dist/
build/
# Temporary files
*.tmp
*.temp
.tmp/
tmp/
# OS files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# IDE
.idea/
.vscode/
*.swp
*.swo
*~
# Logs
logs/
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Test coverage
coverage/
.nyc_output/
# Environment files
.env
.env.local
.env.*.local
# Audio test files (to avoid large files in repo)
*.mp3
*.wav
*.flac
*.ogg
*.m4a
!tests/fixtures/*.mp3
# Package lock files (optional - uncomment if you want to ignore)
# package-lock.json
# yarn.lock

CHANGELOG.md (new file, 43 lines)

@@ -0,0 +1,43 @@
# Changelog
All notable changes to AutoMixer will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.0.0] - 2024-12-11
### Added
- Initial release of AutoMixer
- **BPM Detection**: Automatic tempo detection using music-tempo algorithm
- **Beat Synchronization**: Alignment of beats between consecutive tracks
- **Pitch-Preserving Tempo Adjustment**: Time-stretching using FFmpeg rubberband filter
- **Smooth Crossfades**: Equal-power crossfade mixing between tracks
- **CLI Interface**: Command-line tool for easy mixing
- `mix` command for mixing multiple tracks
- `analyze` command for track analysis
- `check` command for system requirements verification
- **Programmatic API**: Full Node.js library for integration
- AutoMixer class for complete mixing workflow
- BPMDetector for tempo analysis
- AudioAnalyzer for metadata extraction
- PitchShifter for tempo/pitch adjustments
- TrackMixer for crossfade operations
- **Event System**: Progress tracking through EventEmitter
- **Multiple Crossfade Curves**: linear, log, sqrt, sine, exponential
### Technical Details
- Uses FFmpeg for audio processing
- Supports MP3, WAV, FLAC, and other common formats
- Node.js 18+ required
- ES Modules support
## [Unreleased]
### Planned
- Key detection and harmonic mixing
- Automatic intro/outro detection
- Energy-based transition point selection
- Web interface
- Real-time preview
- Batch processing improvements

CONTRIBUTING.md (new file, 114 lines)

@@ -0,0 +1,114 @@
# Contributing to AutoMixer
First off, thank you for considering contributing to AutoMixer! It's people like you that make AutoMixer such a great tool.
## Code of Conduct
This project and everyone participating in it is governed by our Code of Conduct. By participating, you are expected to uphold this code.
## How Can I Contribute?
### Reporting Bugs
Before creating bug reports, please check existing issues as you might find out that you don't need to create one. When you are creating a bug report, please include as many details as possible:
- **Use a clear and descriptive title**
- **Describe the exact steps to reproduce the problem**
- **Provide specific examples** (including sample audio files if possible)
- **Describe the behavior you observed and what you expected**
- **Include your environment details** (OS, Node.js version, FFmpeg version)
### Suggesting Enhancements
Enhancement suggestions are tracked as GitHub issues. When creating an enhancement suggestion, please include:
- **Use a clear and descriptive title**
- **Provide a detailed description of the suggested enhancement**
- **Explain why this enhancement would be useful**
- **List any alternative solutions you've considered**
### Pull Requests
1. Fork the repo and create your branch from `main`
2. If you've added code that should be tested, add tests
3. If you've changed APIs, update the documentation
4. Ensure the test suite passes
5. Make sure your code follows the existing style
6. Issue that pull request!
## Development Setup
```bash
# Clone your fork
git clone https://github.com/your-username/automixer.git
cd automixer
# Install dependencies
npm install
# Run tests
npm test
# Run linter
npm run lint
```
## Project Structure
```
automixer/
├── bin/
│   └── cli.js               # CLI entry point
├── src/
│   ├── index.js             # Main exports
│   ├── core/
│   │   └── AutoMixer.js     # Main orchestrator
│   ├── audio/
│   │   ├── BPMDetector.js   # BPM detection
│   │   ├── AudioAnalyzer.js # Metadata extraction
│   │   ├── TrackMixer.js    # Crossfade mixing
│   │   └── PitchShifter.js  # Tempo/pitch adjustment
│   └── utils/
│       └── index.js         # Utility functions
├── tests/
│   └── ...                  # Test files
└── package.json
```
## Coding Guidelines
### JavaScript Style
- Use ES modules (`import`/`export`)
- Use `async`/`await` for asynchronous code
- Document functions with JSDoc comments
- Use meaningful variable and function names
### Commit Messages
- Use the present tense ("Add feature" not "Added feature")
- Use the imperative mood ("Move cursor to..." not "Moves cursor to...")
- Limit the first line to 72 characters
- Reference issues and pull requests when relevant
### Documentation
- Update README.md for any user-facing changes
- Update JSDoc comments for API changes
- Add inline comments for complex logic
## Testing
```bash
# Run all tests
npm test
# Run with coverage
npm run test:coverage
```
## Questions?
Feel free to open an issue with your question or reach out to the maintainers.
Thank you for contributing! 🎵

LICENSE (new file, 21 lines)

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2024 ale
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md (new file, 370 lines)

@@ -0,0 +1,370 @@
# AutoMixer 🎵
Automatic DJ-style audio mixer that sequentially blends MP3 files with BPM detection, pitch adjustment, and beat synchronization.
## Features
- 🎯 **Automatic BPM Detection** - Analyzes audio files to detect tempo using beat detection algorithms
- 🔄 **Beat Synchronization** - Aligns beats between tracks for seamless transitions
- 🎚️ **Pitch-Preserving Tempo Adjustment** - Adjusts tempo while maintaining original pitch
- 🌊 **Smooth Crossfades** - Creates professional equal-power crossfades between tracks
- 📊 **Audio Analysis** - Extracts metadata, duration, and audio characteristics
- 🖥️ **CLI & API** - Use from command line or integrate into your Node.js projects
## Prerequisites
- **Node.js** >= 18.0.0
- **FFmpeg** with FFprobe (required for audio processing)
### Installing FFmpeg
```bash
# macOS
brew install ffmpeg
# Ubuntu/Debian
sudo apt install ffmpeg
# Windows (with Chocolatey)
choco install ffmpeg
# Windows (with winget)
winget install FFmpeg
```
## Installation
### Global Installation (CLI)
```bash
npm install -g automixer
```
### Local Installation (Library)
```bash
npm install automixer
```
## CLI Usage
### Mix Multiple Tracks
```bash
# Basic usage - mix tracks in order
automixer mix track1.mp3 track2.mp3 track3.mp3 -o my_mix.mp3
# Specify crossfade duration (default: 8 seconds)
automixer mix track1.mp3 track2.mp3 -c 12 -o output.mp3
# Set a specific target BPM
automixer mix track1.mp3 track2.mp3 -b 128 -o output.mp3
# Allow pitch to change with tempo (faster processing)
automixer mix track1.mp3 track2.mp3 --no-preserve-pitch -o output.mp3
```
### Analyze Tracks
```bash
# Analyze a single track
automixer analyze track.mp3
# Analyze multiple tracks
automixer analyze track1.mp3 track2.mp3
# Output as JSON
automixer analyze track.mp3 --json
```
### Check System Requirements
```bash
automixer check
```
### CLI Options
| Option | Description | Default |
|--------|-------------|---------|
| `-o, --output <file>` | Output file path | `mix_output.mp3` |
| `-c, --crossfade <seconds>` | Crossfade duration | `8` |
| `-b, --bpm <number>` | Target BPM | Auto-detect |
| `--max-bpm-change <percent>` | Maximum BPM change allowed | `8` |
| `--no-preserve-pitch` | Allow pitch to change with tempo | Pitch preserved |
| `-q, --quiet` | Suppress progress output | Show progress |
## API Usage
### Basic Example
```javascript
import AutoMixer from 'automixer';
const mixer = new AutoMixer({
  crossfadeDuration: 8, // seconds
  preservePitch: true,
  maxBPMChange: 8 // percent
});

// Mix multiple tracks
await mixer.mix(
  ['track1.mp3', 'track2.mp3', 'track3.mp3'],
  'output.mp3'
);
```
### Advanced Usage with Events
```javascript
import { AutoMixer } from 'automixer';
const mixer = new AutoMixer({
  crossfadeDuration: 10,
  targetBPM: 128,
  preservePitch: true
});

// Listen to events
mixer.on('analysis:track:start', ({ index, filepath }) => {
  console.log(`Analyzing track ${index + 1}: ${filepath}`);
});

mixer.on('analysis:track:complete', ({ index, trackInfo }) => {
  console.log(`Track ${index + 1}: ${trackInfo.bpm} BPM`);
});

mixer.on('mix:bpm', ({ targetBPM }) => {
  console.log(`Target BPM: ${targetBPM}`);
});

mixer.on('mix:render:progress', ({ current, total, message }) => {
  console.log(`Progress: ${current}/${total}`);
});

mixer.on('mix:complete', ({ outputPath }) => {
  console.log(`Mix saved to: ${outputPath}`);
});
// Run the mix
await mixer.mix(['track1.mp3', 'track2.mp3'], 'output.mp3');
```
### Step-by-Step Processing
```javascript
import { AutoMixer } from 'automixer';
const mixer = new AutoMixer();
// Add tracks
mixer.addTracks(['track1.mp3', 'track2.mp3', 'track3.mp3']);
// Analyze tracks
const analyzedTracks = await mixer.analyzeTracks();
// Log track information
for (const track of analyzedTracks) {
  console.log(`${track.filename}: ${track.bpm} BPM, ${track.duration}s`);
}
// Get optimal BPM
const targetBPM = mixer.calculateOptimalBPM();
console.log(`Optimal BPM: ${targetBPM}`);
// Create the mix
await mixer.createMix('output.mp3');
```
### Using Individual Components
```javascript
import { BPMDetector, AudioAnalyzer, PitchShifter } from 'automixer';
// Detect BPM
const detector = new BPMDetector();
const { bpm, beats, confidence } = await detector.detect('track.mp3');
console.log(`BPM: ${bpm} (confidence: ${confidence})`);
// Get audio metadata
const analyzer = new AudioAnalyzer();
const metadata = await analyzer.getMetadata('track.mp3');
console.log(`Duration: ${metadata.duration}s`);
// Adjust tempo
const shifter = new PitchShifter();
const adjustedPath = await shifter.adjustTempo('track.mp3', 1.1, true);
console.log(`Tempo-adjusted file: ${adjustedPath}`);
```
## API Reference
### AutoMixer
Main class that orchestrates the mixing process.
#### Constructor Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `crossfadeDuration` | number | `8` | Crossfade duration in seconds |
| `targetBPM` | number | `null` | Target BPM (auto-detect if null) |
| `preservePitch` | boolean | `true` | Preserve pitch when changing tempo |
| `maxBPMChange` | number | `8` | Maximum BPM change percentage |
| `outputFormat` | string | `'mp3'` | Output format |
| `outputBitrate` | number | `320` | Output bitrate in kbps |
#### Methods
- `addTracks(filepaths)` - Add tracks to the queue
- `clearTracks()` - Clear all tracks
- `analyzeTracks()` - Analyze all tracks (returns track info)
- `calculateOptimalBPM()` - Calculate the optimal target BPM
- `createMix(outputPath)` - Create the final mix
- `mix(inputFiles, outputPath)` - Full process in one call
#### Events
- `analysis:start` - Analysis started
- `analysis:track:start` - Individual track analysis started
- `analysis:track:complete` - Individual track analysis completed
- `analysis:complete` - All analysis completed
- `mix:start` - Mixing started
- `mix:bpm` - Target BPM calculated
- `mix:prepare:start` - Track preparation started
- `mix:prepare:complete` - Track preparation completed
- `mix:render:start` - Rendering started
- `mix:render:progress` - Rendering progress update
- `mix:complete` - Mixing completed
### BPMDetector
Detects BPM and beat positions in audio files.
```javascript
const detector = new BPMDetector({ minBPM: 60, maxBPM: 200 });
const { bpm, beats, confidence } = await detector.detect('track.mp3');
```
### AudioAnalyzer
Extracts metadata and analyzes audio characteristics.
```javascript
const analyzer = new AudioAnalyzer();
const metadata = await analyzer.getMetadata('track.mp3');
// { duration, sampleRate, channels, bitrate, codec, format, tags }
```
### PitchShifter
Adjusts tempo and pitch of audio files.
```javascript
const shifter = new PitchShifter();
// Adjust tempo (preserving pitch)
await shifter.adjustTempo('input.mp3', 1.1, true);
// Shift pitch (in semitones)
await shifter.shiftPitch('input.mp3', 2);
// Adjust both
await shifter.adjustTempoAndPitch('input.mp3', 1.1, 2);
```
### TrackMixer
Handles crossfading and track mixing.
```javascript
const mixer = new TrackMixer({
  crossfadeDuration: 8,
  crossfadeCurve: 'log' // 'linear', 'log', 'sqrt', 'sine', 'exponential'
});
```
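The equal-power idea behind these curves can be illustrated with a small standalone sketch (the helper name `equalPowerGains` is hypothetical, not part of the library's API): the fade-out and fade-in gains are chosen so that the sum of their squares is always 1, which keeps perceived loudness roughly constant through the transition.

```javascript
// Equal-power crossfade gains: fadeOut² + fadeIn² === 1 at every
// point of the transition, so the summed signal keeps constant
// perceived loudness. `t` runs from 0 (start) to 1 (end).
function equalPowerGains(t) {
  return {
    fadeOut: Math.cos(t * Math.PI / 2), // gain of the outgoing track
    fadeIn: Math.sin(t * Math.PI / 2)   // gain of the incoming track
  };
}

// At the midpoint both tracks sit at ~0.707 (-3 dB) rather than 0.5,
// which is what distinguishes equal-power from a linear crossfade.
const { fadeOut, fadeIn } = equalPowerGains(0.5);
console.log(fadeOut.toFixed(3), fadeIn.toFixed(3)); // 0.707 0.707
```

A linear crossfade (0.5 + 0.5 at the midpoint) produces a noticeable loudness dip because uncorrelated signals add in power, not amplitude.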
## How It Works
### BPM Detection
AutoMixer uses the `music-tempo` library combined with FFmpeg for BPM detection:
1. Audio is decoded to raw PCM samples using FFmpeg
2. A 30-second analysis window is extracted (skipping intro)
3. Beat detection algorithm identifies tempo and beat positions
4. BPM is normalized to a standard range (60-200)
5. Beats are extrapolated across the full track
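Step 4 can be sketched as follows (a hypothetical helper shown only to illustrate octave normalization; the library's actual implementation may differ): a raw tempo estimate is doubled or halved until it lands in the configured range, since detectors often report half- or double-time and both describe the same beat grid.

```javascript
// Fold a raw tempo estimate into a target range (default 60–200 BPM)
// by doubling or halving. With the default range (max/min > 2) the
// two loops cannot conflict.
function normalizeBPM(bpm, minBPM = 60, maxBPM = 200) {
  let result = bpm;
  while (result < minBPM) result *= 2; // half-time estimate: fold up
  while (result > maxBPM) result /= 2; // double-time estimate: fold down
  return result;
}

console.log(normalizeBPM(45));  // 90
console.log(normalizeBPM(256)); // 128
console.log(normalizeBPM(128)); // 128
```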
### Beat Matching
The mixing algorithm:
1. Analyzes all input tracks to detect BPM
2. Calculates optimal target BPM (median of all tracks)
3. Adjusts each track's tempo to match target BPM
4. Finds beat-aligned transition points
5. Creates equal-power crossfades at transition points
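Step 2, choosing the median of the detected tempos as the target, can be sketched like this (hypothetical helper, not the library's code):

```javascript
// Target BPM = median of all detected track BPMs. The median is
// preferred over the mean because one outlier (e.g. a 70 BPM
// interlude among 128 BPM tracks) would drag a mean target away
// from where most tracks already sit.
function medianBPM(bpms) {
  const sorted = [...bpms].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 !== 0
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

console.log(medianBPM([126, 124, 70, 128, 130])); // 126
console.log(medianBPM([120, 130]));               // 125
```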
### Tempo Adjustment
Tempo adjustment uses FFmpeg's audio filters:
- **With pitch preservation**: Uses the rubberband filter for high-quality time-stretching
- **Without pitch preservation**: Uses the atempo filter for simple speed changes
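The filter choice can be sketched as a small string builder (a hypothetical helper; the library's actual FFmpeg invocation may differ). One real constraint worth noting: FFmpeg's `atempo` filter only accepts ratios between 0.5 and 2.0 per instance, so larger changes must be expressed as a chain, and `rubberband` requires an FFmpeg build compiled with librubberband.

```javascript
// Build the FFmpeg -af filter expression for a tempo change.
// ratio > 1 speeds the track up, ratio < 1 slows it down.
function buildTempoFilter(ratio, preservePitch) {
  if (preservePitch) {
    // rubberband time-stretches without changing pitch.
    return `rubberband=tempo=${ratio}`;
  }
  // atempo changes speed and pitch together, but each instance only
  // accepts 0.5–2.0, so out-of-range ratios are split into a chain.
  const stages = [];
  let r = ratio;
  while (r > 2.0) { stages.push('atempo=2.0'); r /= 2.0; }
  while (r < 0.5) { stages.push('atempo=0.5'); r /= 0.5; }
  stages.push(`atempo=${r}`);
  return stages.join(',');
}

console.log(buildTempoFilter(1.06, true)); // rubberband=tempo=1.06
console.log(buildTempoFilter(4, false));   // atempo=2.0,atempo=2
```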
## Performance Tips
- **Use consistent BPM tracks**: Mixing tracks with similar BPMs produces better results
- **Allow larger BPM changes**: Increase `maxBPMChange` if tracks have very different tempos
- **Longer crossfades**: Use longer crossfades (10-15s) for smoother transitions
- **Skip pitch preservation**: Use `--no-preserve-pitch` for faster processing when pitch shift is acceptable
## Troubleshooting
### FFmpeg Not Found
Make sure FFmpeg is installed and available in your PATH:
```bash
ffmpeg -version
```
### Poor BPM Detection
Some tracks may have ambiguous tempos. You can:
- Set a specific target BPM with `-b` option
- Increase `maxBPMChange` to allow larger adjustments
### Audio Quality Issues
- Ensure source files are high quality
- Use higher bitrate: `outputBitrate: 320`
- Use pitch-preserved tempo adjustment
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Repository
Project repository: [https://github.com/manalejandro/automixer](https://github.com/manalejandro/automixer)
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## Changelog
### v1.0.0
- Initial release
- BPM detection and beat synchronization
- Pitch-preserving tempo adjustment
- Smooth crossfade mixing
- CLI and API interfaces

bin/cli.js (new file, 338 lines)

@@ -0,0 +1,338 @@
#!/usr/bin/env node
/**
* AutoMixer CLI
*
* Command-line interface for the automixer library.
* Provides an easy way to mix multiple audio files from the terminal.
*
* Usage:
* automixer mix track1.mp3 track2.mp3 -o output.mp3
* automixer analyze track.mp3
*/
import { Command } from 'commander';
import chalk from 'chalk';
import ora from 'ora';
import cliProgress from 'cli-progress';
import path from 'path';
import fs from 'fs/promises';
import { AutoMixer } from '../src/core/AutoMixer.js';
import { BPMDetector } from '../src/audio/BPMDetector.js';
import { AudioAnalyzer } from '../src/audio/AudioAnalyzer.js';
const program = new Command();
// Package info
const pkg = JSON.parse(
await fs.readFile(new URL('../package.json', import.meta.url), 'utf-8')
);
program
.name('automixer')
.description('Automatic DJ-style audio mixer with BPM detection and beat synchronization')
.version(pkg.version);
/**
* Mix command - Main mixing functionality
*/
program
.command('mix')
.description('Mix multiple audio files with automatic beat matching')
.argument('<files...>', 'Audio files to mix (in order)')
.option('-o, --output <file>', 'Output file path', 'mix_output.mp3')
.option('-c, --crossfade <seconds>', 'Crossfade duration in seconds', '8')
.option('-b, --bpm <number>', 'Target BPM (auto-detect if not specified)')
.option('--max-bpm-change <percent>', 'Maximum BPM change allowed', '8')
.option('--no-preserve-pitch', 'Allow pitch to change with tempo')
.option('-q, --quiet', 'Suppress progress output')
.action(async (files, options) => {
const spinner = ora();
try {
// Validate input files
console.log(chalk.cyan('\n🎵 AutoMixer - Automatic DJ Mixer\n'));
if (files.length < 2) {
console.log(chalk.yellow('⚠️ At least 2 files are required for mixing.'));
console.log(chalk.gray(' Use "automixer analyze" to analyze a single file.\n'));
process.exit(1);
}
// Check files exist
spinner.start('Validating input files...');
const validatedFiles = [];
for (const file of files) {
const fullPath = path.resolve(file);
try {
await fs.access(fullPath);
validatedFiles.push(fullPath);
} catch {
spinner.fail(`File not found: ${file}`);
process.exit(1);
}
}
spinner.succeed(`Found ${validatedFiles.length} audio files`);
// Create mixer instance
const mixer = new AutoMixer({
crossfadeDuration: parseInt(options.crossfade, 10),
targetBPM: options.bpm ? parseFloat(options.bpm) : null,
maxBPMChange: parseFloat(options.maxBpmChange),
preservePitch: options.preservePitch
});
// Progress bar for analysis
const analysisBar = new cliProgress.SingleBar({
format: chalk.cyan('Analyzing') + ' |{bar}| {percentage}% | {track}',
hideCursor: true
}, cliProgress.Presets.shades_classic);
if (!options.quiet) {
console.log(chalk.gray('\n📊 Analyzing tracks...\n'));
analysisBar.start(validatedFiles.length, 0, { track: '' });
}
// Set up event listeners
let currentTrackIndex = 0;
mixer.on('analysis:track:start', ({ index, filepath }) => {
if (!options.quiet) {
analysisBar.update(index, { track: path.basename(filepath) });
}
});
mixer.on('analysis:track:complete', ({ index }) => {
currentTrackIndex = index + 1;
if (!options.quiet) {
analysisBar.update(currentTrackIndex);
}
});
// Analyze tracks
mixer.addTracks(validatedFiles);
const analyzedTracks = await mixer.analyzeTracks();
if (!options.quiet) {
analysisBar.stop();
console.log();
}
// Display track info
if (!options.quiet) {
console.log(chalk.cyan('📋 Track Information:\n'));
for (const track of analyzedTracks) {
console.log(chalk.white(` ${track.filename}`));
console.log(chalk.gray(` BPM: ${chalk.yellow(track.bpm)} | Duration: ${formatDuration(track.duration)}`));
}
console.log();
}
// Calculate target BPM
const targetBPM = mixer.calculateOptimalBPM();
if (!options.quiet) {
console.log(chalk.cyan(`🎯 Target BPM: ${chalk.yellow(targetBPM)}\n`));
}
// Create the mix
const outputPath = path.resolve(options.output);
if (!options.quiet) {
spinner.start('Creating mix...');
}
mixer.on('mix:prepare:complete', ({ index, tempoRatio }) => {
if (!options.quiet && Math.abs(tempoRatio - 1) > 0.001) {
const change = ((tempoRatio - 1) * 100).toFixed(1);
const sign = Number(change) > 0 ? '+' : '';
spinner.text = `Adjusting tempo for track ${index + 1} (${sign}${change}%)`;
}
});
mixer.on('mix:render:progress', ({ current, total, message }) => {
if (!options.quiet) {
spinner.text = message || `Mixing ${current}/${total}...`;
}
});
await mixer.createMix(outputPath);
if (!options.quiet) {
spinner.succeed('Mix created successfully!');
console.log(chalk.green(`\n✅ Output saved to: ${chalk.white(outputPath)}\n`));
}
} catch (error) {
spinner.fail(chalk.red(`Error: ${error.message}`));
if (process.env.DEBUG) {
console.error(error);
}
process.exit(1);
}
});
/**
* Analyze command - Analyze a single track
*/
program
.command('analyze')
.description('Analyze audio file(s) and display BPM and other information')
.argument('<files...>', 'Audio files to analyze')
.option('-j, --json', 'Output as JSON')
.action(async (files, options) => {
const spinner = ora();
const results = [];
try {
for (const file of files) {
const fullPath = path.resolve(file);
if (!options.json) {
spinner.start(`Analyzing ${path.basename(file)}...`);
}
try {
await fs.access(fullPath);
} catch {
if (options.json) {
results.push({ file, error: 'File not found' });
} else {
spinner.fail(`File not found: ${file}`);
}
continue;
}
const bpmDetector = new BPMDetector();
const audioAnalyzer = new AudioAnalyzer();
const [bpmResult, metadata] = await Promise.all([
bpmDetector.detect(fullPath),
audioAnalyzer.getMetadata(fullPath)
]);
const result = {
file: path.basename(file),
path: fullPath,
bpm: bpmResult.bpm,
confidence: bpmResult.confidence,
duration: metadata.duration,
durationFormatted: formatDuration(metadata.duration),
sampleRate: metadata.sampleRate,
channels: metadata.channels,
bitrate: metadata.bitrate,
format: metadata.format,
codec: metadata.codec
};
results.push(result);
if (!options.json) {
spinner.succeed(`${chalk.white(result.file)}`);
console.log(chalk.gray(` BPM: ${chalk.yellow(result.bpm)} (confidence: ${(result.confidence * 100).toFixed(0)}%)`));
console.log(chalk.gray(` Duration: ${result.durationFormatted}`));
console.log(chalk.gray(` Format: ${result.codec} ${result.sampleRate}Hz ${result.channels}ch ${Math.round(result.bitrate / 1000)}kbps`));
console.log();
}
}
if (options.json) {
console.log(JSON.stringify(results, null, 2));
}
} catch (error) {
if (options.json) {
console.log(JSON.stringify({ error: error.message }));
} else {
spinner.fail(chalk.red(`Error: ${error.message}`));
}
process.exit(1);
}
});
/**
* Check command - Verify FFmpeg installation
*/
program
.command('check')
.description('Check system requirements (FFmpeg installation)')
.action(async () => {
const spinner = ora();
console.log(chalk.cyan('\n🔍 Checking system requirements...\n'));
// Check FFmpeg
spinner.start('Checking FFmpeg...');
try {
const { spawn } = await import('child_process');
await new Promise((resolve, reject) => {
const ffmpeg = spawn('ffmpeg', ['-version']);
let output = '';
ffmpeg.stdout.on('data', (data) => {
output += data.toString();
});
ffmpeg.on('close', (code) => {
if (code === 0) {
const version = output.match(/ffmpeg version (\S+)/)?.[1] || 'unknown';
spinner.succeed(`FFmpeg installed (version: ${version})`);
resolve();
} else {
reject(new Error('FFmpeg not working'));
}
});
ffmpeg.on('error', () => {
reject(new Error('FFmpeg not found'));
});
});
} catch {
spinner.fail('FFmpeg not found');
console.log(chalk.yellow('\n Please install FFmpeg:'));
console.log(chalk.gray(' - macOS: brew install ffmpeg'));
console.log(chalk.gray(' - Ubuntu: sudo apt install ffmpeg'));
console.log(chalk.gray(' - Windows: choco install ffmpeg'));
console.log();
}
// Check FFprobe
spinner.start('Checking FFprobe...');
try {
const { spawn } = await import('child_process');
await new Promise((resolve, reject) => {
const ffprobe = spawn('ffprobe', ['-version']);
ffprobe.on('close', (code) => {
if (code === 0) {
spinner.succeed('FFprobe installed');
resolve();
} else {
reject(new Error('FFprobe not working'));
}
});
ffprobe.on('error', () => {
reject(new Error('FFprobe not found'));
});
});
} catch {
spinner.fail('FFprobe not found (usually included with FFmpeg)');
}
console.log();
});
/**
* Format duration in mm:ss format
*/
function formatDuration(seconds) {
const mins = Math.floor(seconds / 60);
const secs = Math.floor(seconds % 60);
return `${mins}:${secs.toString().padStart(2, '0')}`;
}
// Parse and run
program.parse();

docs/QUICK_START.md (new file, 32 lines)

@@ -0,0 +1,32 @@
# AutoMixer
> Automatic DJ-style audio mixer with BPM detection and beat synchronization
[![npm version](https://badge.fury.io/js/automixer.svg)](https://www.npmjs.com/package/automixer)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
## Quick Start
```bash
# Install globally
npm install -g automixer
# Mix tracks
automixer mix track1.mp3 track2.mp3 track3.mp3 -o my_mix.mp3
# Analyze a track
automixer analyze track.mp3
```
## Requirements
- Node.js >= 18.0.0
- FFmpeg installed on your system
## Documentation
See [README.md](README.md) for full documentation.
## License
MIT

eslint.config.js (new file, 47 lines)

@@ -0,0 +1,47 @@
import js from '@eslint/js';

export default [
  js.configs.recommended,
  {
    languageOptions: {
      ecmaVersion: 2022,
      sourceType: 'module',
      globals: {
        console: 'readonly',
        process: 'readonly',
        Buffer: 'readonly',
        URL: 'readonly',
        setTimeout: 'readonly',
        clearTimeout: 'readonly',
        setInterval: 'readonly',
        clearInterval: 'readonly'
      }
    },
    rules: {
      'no-unused-vars': ['warn', { argsIgnorePattern: '^_' }],
      'no-console': 'off',
      'prefer-const': 'error',
      'no-var': 'error',
      'eqeqeq': ['error', 'always'],
      'curly': ['error', 'all'],
      'brace-style': ['error', '1tbs'],
      'indent': ['error', 2],
      'quotes': ['error', 'single', { avoidEscape: true }],
      'semi': ['error', 'always'],
      'comma-dangle': ['error', 'never'],
      'arrow-spacing': 'error',
      'keyword-spacing': 'error',
      'space-before-blocks': 'error',
      'space-infix-ops': 'error',
      'object-curly-spacing': ['error', 'always'],
      'array-bracket-spacing': ['error', 'never']
    }
  },
  {
    ignores: [
      'node_modules/**',
      'coverage/**',
      'dist/**'
    ]
  }
];

jsconfig.json (new file, 29 lines)

@@ -0,0 +1,29 @@
{
  "compilerOptions": {
    "module": "ESNext",
    "moduleResolution": "node",
    "target": "ES2022",
    "checkJs": true,
    "allowJs": true,
    "noEmit": true,
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "resolveJsonModule": true,
    "declaration": true,
    "declarationMap": true,
    "baseUrl": ".",
    "paths": {
      "automixer": ["./src/index.js"]
    }
  },
  "include": [
    "src/**/*",
    "bin/**/*"
  ],
  "exclude": [
    "node_modules",
    "tests"
  ]
}

package.json (new file, 63 lines)

@@ -0,0 +1,63 @@
{
  "name": "automixer",
  "version": "1.0.0",
  "description": "Automatic DJ-style audio mixer that sequentially blends MP3 files with BPM detection, pitch adjustment, and beat synchronization",
  "main": "src/index.js",
  "bin": {
    "automixer": "./bin/cli.js"
  },
  "type": "module",
  "scripts": {
    "start": "node src/index.js",
    "cli": "node bin/cli.js",
    "test": "node --test tests/*.test.js",
    "lint": "eslint src/ bin/",
    "prepublishOnly": "npm test"
  },
  "keywords": [
    "audio",
    "mixer",
    "dj",
    "bpm",
    "beat-detection",
    "mp3",
    "music",
    "crossfade",
    "pitch-shift",
    "tempo",
    "beat-matching"
  ],
  "author": "ale",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "https://github.com/manalejandro/automixer.git"
  },
  "bugs": {
    "url": "https://github.com/manalejandro/automixer/issues"
  },
  "homepage": "https://github.com/manalejandro/automixer#readme",
  "engines": {
    "node": ">=18.0.0"
  },
  "files": [
    "src/",
    "bin/",
    "README.md",
    "LICENSE"
  ],
  "dependencies": {
    "commander": "^12.1.0",
    "fluent-ffmpeg": "^2.1.3",
    "music-tempo": "^1.0.3",
    "ora": "^8.1.1",
    "chalk": "^5.3.0",
    "cli-progress": "^3.12.0"
  },
  "devDependencies": {
    "eslint": "^9.16.0"
  },
  "peerDependencies": {
    "ffmpeg": "*"
  }
}

src/audio/AudioAnalyzer.js (new file, 248 lines)

@@ -0,0 +1,248 @@
/**
* AudioAnalyzer - Audio file metadata and analysis utilities
*
* Provides methods for extracting audio metadata,
* duration, sample rate, and other technical information.
*
* @class AudioAnalyzer
*/
import { spawn } from 'child_process';
export class AudioAnalyzer {
/**
* Create an AudioAnalyzer instance
*/
constructor() {
this.ffprobeCache = new Map();
}
/**
* Get audio file metadata using FFprobe
* @param {string} filepath - Path to the audio file
* @returns {Promise<Object>} - Audio metadata
*/
async getMetadata(filepath) {
// Check cache first
if (this.ffprobeCache.has(filepath)) {
return this.ffprobeCache.get(filepath);
}
return new Promise((resolve, reject) => {
const ffprobe = spawn('ffprobe', [
'-v', 'quiet',
'-print_format', 'json',
'-show_format',
'-show_streams',
filepath
]);
let stdout = '';
let stderr = '';
ffprobe.stdout.on('data', (data) => {
stdout += data.toString();
});
ffprobe.stderr.on('data', (data) => {
stderr += data.toString();
});
ffprobe.on('close', (code) => {
if (code !== 0) {
reject(new Error(`FFprobe failed: ${stderr}`));
return;
}
try {
const info = JSON.parse(stdout);
const audioStream = info.streams?.find(s => s.codec_type === 'audio');
if (!audioStream) {
reject(new Error('No audio stream found in file'));
return;
}
const metadata = {
duration: parseFloat(info.format?.duration || audioStream.duration || 0),
sampleRate: parseInt(audioStream.sample_rate, 10),
channels: audioStream.channels,
bitrate: parseInt(info.format?.bit_rate || audioStream.bit_rate || 0, 10),
codec: audioStream.codec_name,
format: info.format?.format_name,
tags: info.format?.tags || {}
};
// Cache the result
this.ffprobeCache.set(filepath, metadata);
resolve(metadata);
} catch (error) {
reject(new Error(`Failed to parse FFprobe output: ${error.message}`));
}
});
ffprobe.on('error', (error) => {
reject(new Error(`Failed to spawn FFprobe: ${error.message}. Make sure FFmpeg is installed.`));
});
});
}
/**
* Get audio duration in seconds
* @param {string} filepath - Path to the audio file
* @returns {Promise<number>} - Duration in seconds
*/
async getDuration(filepath) {
const metadata = await this.getMetadata(filepath);
return metadata.duration;
}
/**
* Analyze audio energy/loudness at specific points
* @param {string} filepath - Path to the audio file
* @param {number} startTime - Start time in seconds
* @param {number} duration - Duration to analyze in seconds
* @returns {Promise<Object>} - Energy analysis
*/
async analyzeEnergy(filepath, startTime, duration) {
return new Promise((resolve, reject) => {
const ffmpeg = spawn('ffmpeg', [
'-i', filepath,
'-ss', startTime.toString(),
'-t', duration.toString(),
'-af', 'volumedetect',
'-f', 'null',
'-'
]);
let stderr = '';
ffmpeg.stderr.on('data', (data) => {
stderr += data.toString();
});
ffmpeg.on('close', () => {
// FFmpeg outputs to stderr even on success
const meanMatch = stderr.match(/mean_volume:\s*(-?\d+\.?\d*)\s*dB/);
const maxMatch = stderr.match(/max_volume:\s*(-?\d+\.?\d*)\s*dB/);
resolve({
meanVolume: meanMatch ? parseFloat(meanMatch[1]) : null,
maxVolume: maxMatch ? parseFloat(maxMatch[1]) : null
});
});
ffmpeg.on('error', (error) => {
reject(new Error(`Failed to analyze energy: ${error.message}`));
});
});
}
/**
* Detect silence at the beginning and end of a track
* @param {string} filepath - Path to the audio file
* @param {number} [threshold=-50] - Silence threshold in dB
* @returns {Promise<Object>} - Silence detection results
*/
async detectSilence(filepath, threshold = -50) {
return new Promise((resolve, reject) => {
const ffmpeg = spawn('ffmpeg', [
'-i', filepath,
'-af', `silencedetect=noise=${threshold}dB:d=0.5`,
'-f', 'null',
'-'
]);
let stderr = '';
ffmpeg.stderr.on('data', (data) => {
stderr += data.toString();
});
ffmpeg.on('close', () => {
const silenceStarts = [];
const silenceEnds = [];
const startMatches = stderr.matchAll(/silence_start:\s*(\d+\.?\d*)/g);
const endMatches = stderr.matchAll(/silence_end:\s*(\d+\.?\d*)/g);
for (const match of startMatches) {
silenceStarts.push(parseFloat(match[1]));
}
for (const match of endMatches) {
silenceEnds.push(parseFloat(match[1]));
}
resolve({
silenceStarts,
silenceEnds,
hasLeadingSilence: silenceStarts.length > 0 && silenceStarts[0] < 0.1,
leadingSilenceEnd: silenceEnds.length > 0 ? silenceEnds[0] : 0
});
});
ffmpeg.on('error', (error) => {
reject(new Error(`Failed to detect silence: ${error.message}`));
});
});
}
/**
* Get the waveform peaks for visualization
* @param {string} filepath - Path to the audio file
* @param {number} [samples=100] - Number of samples to return
* @returns {Promise<number[]>} - Array of peak values (0-1)
*/
async getWaveformPeaks(filepath, samples = 100) {
const metadata = await this.getMetadata(filepath);
const duration = metadata.duration;
const interval = duration / samples;
return new Promise((resolve, reject) => {
const ffmpeg = spawn('ffmpeg', [
'-i', filepath,
'-af', `asetnsamples=${Math.floor(metadata.sampleRate * interval)},astats=metadata=1:reset=1`,
'-f', 'null',
'-'
]);
let stderr = '';
const peaks = [];
ffmpeg.stderr.on('data', (data) => {
stderr += data.toString();
});
ffmpeg.on('close', () => {
// Parse once on close: matching inside the data handler would
// re-scan the accumulated stderr and push duplicate peaks
const matches = stderr.matchAll(/Peak level dB:\s*(-?\d+\.?\d*)/g);
for (const match of matches) {
const db = parseFloat(match[1]);
// Convert dB to linear amplitude (0-1 scale)
const linear = Math.pow(10, db / 20);
peaks.push(Math.min(1, Math.max(0, linear)));
}
// If we didn't get enough peaks, pad with zeros
while (peaks.length < samples) {
peaks.push(0);
}
resolve(peaks.slice(0, samples));
});
ffmpeg.on('error', (error) => {
reject(new Error(`Failed to get waveform: ${error.message}`));
});
});
}
/**
* Clear the metadata cache
*/
clearCache() {
this.ffprobeCache.clear();
}
}
export default AudioAnalyzer;
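The peak values returned by `getWaveformPeaks` come from converting astats "Peak level dB" readings to linear amplitude. A minimal standalone sketch of that conversion (the `dbToPeak` name is illustrative, not part of the class):

```javascript
// Convert a decibel peak level to linear amplitude, clamped to [0, 1],
// mirroring the scaling inside AudioAnalyzer.getWaveformPeaks
function dbToPeak(db) {
  const linear = Math.pow(10, db / 20);
  return Math.min(1, Math.max(0, linear));
}

dbToPeak(0);   // full scale: 1
dbToPeak(-20); // 0.1
dbToPeak(6);   // clamped back down to 1
```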

src/audio/BPMDetector.js (normal file)

@@ -0,0 +1,403 @@
/**
* BPMDetector - Tempo and beat detection for audio files
*
* Uses multiple detection methods for improved accuracy:
* 1. music-tempo for BPM estimation
* 2. FFmpeg's ebur128 for onset detection
* 3. Autocorrelation for beat phase alignment
*
* @class BPMDetector
*/
import { spawn } from 'child_process';
import fs from 'fs/promises';
import path from 'path';
import os from 'os';
import MusicTempo from 'music-tempo';
export class BPMDetector {
/**
* Create a BPMDetector instance
* @param {Object} options - Detection options
* @param {number} [options.minBPM=60] - Minimum expected BPM
* @param {number} [options.maxBPM=200] - Maximum expected BPM
*/
constructor(options = {}) {
this.options = {
minBPM: 60,
maxBPM: 200,
...options
};
}
/**
* Extract raw PCM audio data from a file using FFmpeg
* @param {string} filepath - Path to the audio file
* @returns {Promise<Float32Array>} - Mono audio samples
*/
async extractAudioData(filepath) {
const tempFile = path.join(os.tmpdir(), `automixer_${Date.now()}.raw`);
return new Promise((resolve, reject) => {
// Use FFmpeg to convert to raw PCM mono audio
const ffmpeg = spawn('ffmpeg', [
'-i', filepath,
'-ac', '1', // Mono
'-ar', '44100', // 44.1kHz sample rate
'-f', 'f32le', // 32-bit float little-endian
'-y', // Overwrite output
tempFile
]);
let stderr = '';
ffmpeg.stderr.on('data', (data) => {
stderr += data.toString();
});
ffmpeg.on('close', async (code) => {
if (code !== 0) {
reject(new Error(`FFmpeg failed: ${stderr}`));
return;
}
try {
const buffer = await fs.readFile(tempFile);
await fs.unlink(tempFile);
// Convert buffer to Float32Array
const samples = new Float32Array(buffer.length / 4);
for (let i = 0; i < samples.length; i++) {
samples[i] = buffer.readFloatLE(i * 4);
}
resolve(samples);
} catch (error) {
reject(error);
}
});
ffmpeg.on('error', (error) => {
reject(new Error(`Failed to spawn FFmpeg: ${error.message}`));
});
});
}
/**
* Detect onsets (transients/kicks) using full-spectrum energy analysis
* Detects attack transients across all frequencies for better precision
* @param {Float32Array} samples - Audio samples
* @param {number} sampleRate - Sample rate
* @returns {number[]} - Array of onset timestamps in seconds
*/
detectOnsets(samples, sampleRate = 44100) {
const hopSize = Math.floor(sampleRate * 0.005); // 5ms windows
const frameSize = Math.floor(sampleRate * 0.020); // 20ms frame
const onsets = [];
// Do NOT filter - use all frequencies.
// The kick's attack has transient energy across the whole spectrum,
// especially in the mids (200-800 Hz), where the beater "click" is strongest.
// Compute frame energies (a jump in energy marks an onset)
// using simple but effective RMS
const energies = [];
for (let i = 0; i < samples.length - frameSize; i += hopSize) {
let energy = 0;
for (let j = 0; j < frameSize; j++) {
energy += samples[i + j] * samples[i + j];
}
energies.push(Math.sqrt(energy / frameSize)); // RMS
}
// Compute the onset strength function
// Sudden increases in energy indicate transients
const onsetStrength = [];
for (let i = 1; i < energies.length; i++) {
// Keep only positive changes (energy increases)
const diff = energies[i] - energies[i - 1];
onsetStrength.push(Math.max(0, diff));
}
// Adaptive thresholding using a moving average
const windowSize = 40; // 200ms window
const threshold = 3.0; // an onset must be 3x the local average
for (let i = windowSize; i < onsetStrength.length - windowSize; i++) {
// Local mean
let localMean = 0;
for (let j = i - windowSize; j < i + windowSize; j++) {
localMean += onsetStrength[j];
}
localMean /= (windowSize * 2);
// Peak detection: must be a local maximum AND exceed the threshold
const isPeak = onsetStrength[i] > onsetStrength[i - 1] &&
onsetStrength[i] > onsetStrength[i + 1] &&
onsetStrength[i] > onsetStrength[i - 2] &&
onsetStrength[i] > onsetStrength[i + 2];
if (isPeak && onsetStrength[i] > localMean * threshold && localMean > 0) {
const time = ((i + 1) * hopSize) / sampleRate;
// Avoid detections that are too close together (minimum 250 ms, i.e. 240 BPM in quarter notes)
if (onsets.length === 0 || time - onsets[onsets.length - 1] > 0.25) {
onsets.push(time);
}
}
}
return onsets;
}
/**
* Find the phase offset (first beat position) from detected onsets
* @param {number[]} onsets - Detected onset times
* @param {number} bpm - Detected BPM
* @returns {number} - Phase offset in seconds
*/
findBeatPhase(onsets, bpm) {
if (onsets.length < 4) {
return 0;
}
const beatInterval = 60 / bpm;
// Count how many onsets align with each possible phase
// Use higher resolution (5ms) for more precision
const phaseResolution = 0.005; // 5ms resolution for better precision
const phaseCounts = {};
for (const onset of onsets) {
// Calculate the phase of this onset
const phase = onset % beatInterval;
const quantizedPhase = Math.round(phase / phaseResolution) * phaseResolution;
phaseCounts[quantizedPhase] = (phaseCounts[quantizedPhase] || 0) + 1;
}
// Find the phase with most onsets
let bestPhase = 0;
let maxCount = 0;
for (const [phase, count] of Object.entries(phaseCounts)) {
if (count > maxCount) {
maxCount = count;
bestPhase = parseFloat(phase);
}
}
return bestPhase;
}
/**
* Detect BPM and beat positions in an audio file
* @param {string} filepath - Path to the audio file
* @returns {Promise<{bpm: number, beats: number[]}>} - BPM and beat timestamps
*/
async detect(filepath) {
// Extract audio samples
const audioData = await this.extractAudioData(filepath);
const sampleRate = 44100;
const trackDuration = audioData.length / sampleRate;
// Use a single, larger section from the middle of the track (most stable rhythm)
// Skip first 15 seconds (intro) and take 45 seconds from there
const analysisStart = Math.min(15 * sampleRate, Math.floor(audioData.length * 0.1));
const analysisLength = Math.min(45 * sampleRate, Math.floor(audioData.length * 0.5));
const analysisSamples = audioData.slice(analysisStart, analysisStart + analysisLength);
let bpm = null;
// Primary detection with analysis section
try {
const tempo = new MusicTempo(analysisSamples);
if (tempo.tempo && !isNaN(tempo.tempo) && tempo.tempo > 0) {
bpm = this.normalizeBPM(tempo.tempo);
}
} catch {
// Primary analysis failed
}
// Fallback: try full track if section analysis failed
if (bpm === null) {
try {
const fullTempo = new MusicTempo(audioData);
if (fullTempo.tempo && !isNaN(fullTempo.tempo) && fullTempo.tempo > 0) {
bpm = this.normalizeBPM(fullTempo.tempo);
}
} catch {
// Full track analysis also failed
}
}
// Final fallback
if (bpm === null || isNaN(bpm) || bpm <= 0) {
bpm = 128;
}
bpm = Math.round(bpm * 10) / 10;
// Detect onsets (kicks) across the first 60 seconds for phase detection
const onsetSectionLength = Math.min(60 * sampleRate, audioData.length);
const onsetSection = audioData.slice(0, onsetSectionLength);
const rawOnsets = this.detectOnsets(onsetSection, sampleRate);
// Find the phase using the FIRST onset
// This is the simplest and most consistent approach
const beatInterval = 60 / bpm;
let phase = 0;
if (rawOnsets.length > 0) {
// Normalize the phase into the range [0, beatInterval)
phase = rawOnsets[0] % beatInterval;
}
// Generate a perfect beat grid aligned to the detected phase
const beats = this.generateAlignedBeats(rawOnsets[0] || 0, bpm, trackDuration);
return {
bpm,
beats,
confidence: rawOnsets.length > 10 ? 0.9 : 0.7,
phase,
onsets: rawOnsets
};
}
/**
* Filter detected onsets to match expected BPM
* Removes onsets that don't align with the beat grid
* @param {number[]} onsets - Raw detected onsets
* @param {number} beatInterval - Expected beat interval in seconds
* @returns {number[]} - Filtered onsets aligned to beat grid
*/
filterOnsetsToBPM(onsets, beatInterval) {
if (onsets.length < 2) return onsets;
const filtered = [];
const tolerance = beatInterval * 0.20; // 20% tolerance
// Start with the first onset
filtered.push(onsets[0]);
let lastFilteredOnset = onsets[0];
for (let i = 1; i < onsets.length; i++) {
const timeSinceLast = onsets[i] - lastFilteredOnset;
// Check if this onset is roughly on the beat grid (an integer number of beats away)
let isOnBeat = false;
for (let beats = 1; beats <= 8; beats++) { // Allow up to 8 beats gap
const expectedTime = beatInterval * beats;
if (Math.abs(timeSinceLast - expectedTime) < tolerance) {
isOnBeat = true;
break;
}
}
if (isOnBeat) {
filtered.push(onsets[i]);
lastFilteredOnset = onsets[i];
}
}
return filtered;
}
/**
* Generate beat grid aligned with detected phase
* @param {number} phase - Phase offset in seconds
* @param {number} bpm - BPM
* @param {number} trackDuration - Track duration in seconds
* @returns {number[]} - Array of beat timestamps
*/
generateAlignedBeats(phase, bpm, trackDuration) {
const beatInterval = 60 / bpm;
const beats = [];
// Start from the phase offset
let time = phase;
// If phase is too large, find the first beat
while (time > beatInterval) {
time -= beatInterval;
}
// Generate all beats
while (time < trackDuration) {
if (time >= 0) {
beats.push(Math.round(time * 1000) / 1000);
}
time += beatInterval;
}
return beats;
}
/**
* Normalize BPM to a standard range
* Handles cases where detected BPM is half or double the actual tempo
* @param {number} bpm - Detected BPM
* @returns {number} - Normalized BPM
*/
normalizeBPM(bpm) {
// Normalize to our expected range
while (bpm < this.options.minBPM && bpm > 0) {
bpm *= 2;
}
while (bpm > this.options.maxBPM) {
bpm /= 2;
}
return bpm;
}
/**
* Extrapolate beats across the full track
* @param {number[]} detectedBeats - Beats detected in the analysis section
* @param {number} bpm - Detected BPM
* @param {number} trackDuration - Total track duration in seconds
* @returns {number[]} - Full array of beat timestamps
*/
extrapolateBeats(detectedBeats, bpm, trackDuration) {
if (detectedBeats.length < 2) {
// Generate beats from scratch based on BPM
const beatInterval = 60 / bpm;
const beats = [];
let time = 0;
while (time < trackDuration) {
beats.push(time);
time += beatInterval;
}
return beats;
}
const beatInterval = 60 / bpm;
const beats = [];
// Find the first beat position
const firstBeat = detectedBeats[0] % beatInterval;
// Generate beats for the full track
let time = firstBeat;
while (time < trackDuration) {
beats.push(Math.round(time * 1000) / 1000); // Round to milliseconds
time += beatInterval;
}
return beats;
}
/**
* Quick BPM detection using a smaller sample
* Useful for getting a rough estimate quickly
* @param {string} filepath - Path to the audio file
* @returns {Promise<number>} - Estimated BPM
*/
async quickDetect(filepath) {
const { bpm } = await this.detect(filepath);
return bpm;
}
}
export default BPMDetector;
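Tempo detectors often report half- or double-time, which `normalizeBPM` folds back into range. A standalone sketch of that folding (defaults mirror the class options; this is an illustration, not the exported API):

```javascript
// Fold half/double-tempo detections into the [minBPM, maxBPM] range,
// as BPMDetector.normalizeBPM does
function normalizeBPM(bpm, minBPM = 60, maxBPM = 200) {
  while (bpm < minBPM && bpm > 0) {
    bpm *= 2; // half-time detection: double it
  }
  while (bpm > maxBPM) {
    bpm /= 2; // double-time detection: halve it
  }
  return bpm;
}

normalizeBPM(55);  // 110: half-time detection folded up
normalizeBPM(260); // 130: double-time detection folded down
```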

src/audio/PitchShifter.js (normal file)

@@ -0,0 +1,332 @@
/**
* PitchShifter - Tempo and pitch adjustment for audio files
*
* Uses FFmpeg's audio filters to adjust tempo while optionally
* preserving pitch, enabling beat-matched mixing.
*
* @class PitchShifter
*/
import { spawn } from 'child_process';
import path from 'path';
import os from 'os';
import fs from 'fs/promises';
export class PitchShifter {
/**
* Create a PitchShifter instance
* @param {Object} options - Shifter options
*/
constructor(options = {}) {
this.options = {
tempDir: os.tmpdir(),
outputFormat: 'mp3',
outputBitrate: 320,
...options
};
}
/**
* Adjust tempo of an audio file
* @param {string} inputPath - Input file path
* @param {number} tempoRatio - Tempo multiplier (1.0 = no change, 1.1 = 10% faster)
* @param {boolean} preservePitch - Whether to preserve pitch
* @returns {Promise<string>} - Path to the processed file
*/
async adjustTempo(inputPath, tempoRatio, preservePitch = true) {
// Use uncompressed WAV for intermediate files (better beat synchronization)
const outputPath = path.join(
this.options.tempDir,
`automixer_tempo_${Date.now()}_${Math.random().toString(36).slice(2, 11)}.wav`
);
// FFmpeg's atempo filter only accepts values between 0.5 and 2.0
// For larger changes, we need to chain multiple atempo filters
const atempoFilters = this.buildAtempoChain(tempoRatio);
let filterComplex;
if (preservePitch) {
// Use rubberband for high-quality pitch-preserved tempo change
// Fall back to atempo if rubberband is not available
filterComplex = `rubberband=tempo=${tempoRatio}:pitch=1`;
} else {
// Simple atempo changes pitch along with tempo
filterComplex = atempoFilters;
}
return new Promise((resolve, reject) => {
// Use uncompressed PCM for maximum timing precision
const args = [
'-i', inputPath,
'-af', filterComplex,
'-c:a', 'pcm_s24le',
'-ar', '48000',
'-y',
outputPath
];
const ffmpeg = spawn('ffmpeg', args);
let stderr = '';
ffmpeg.stderr.on('data', (data) => {
stderr += data.toString();
});
ffmpeg.on('close', async (code) => {
if (code !== 0) {
// If rubberband failed, try with atempo
if (preservePitch && stderr.includes('rubberband')) {
try {
const result = await this.adjustTempoWithAtempo(inputPath, tempoRatio, outputPath);
resolve(result);
return;
} catch (e) {
reject(new Error(`Tempo adjustment failed: ${e.message}`));
return;
}
}
reject(new Error(`Tempo adjustment failed: ${stderr}`));
return;
}
resolve(outputPath);
});
ffmpeg.on('error', (error) => {
reject(new Error(`Failed to spawn FFmpeg: ${error.message}`));
});
});
}
/**
* Adjust tempo using atempo filter (fallback)
* @param {string} inputPath - Input file path
* @param {number} tempoRatio - Tempo multiplier
* @param {string} outputPath - Output file path
* @returns {Promise<string>} - Path to the processed file
*/
async adjustTempoWithAtempo(inputPath, tempoRatio, outputPath) {
const atempoFilters = this.buildAtempoChain(tempoRatio);
return new Promise((resolve, reject) => {
// Use uncompressed PCM for maximum timing precision
const args = [
'-i', inputPath,
'-af', atempoFilters,
'-c:a', 'pcm_s24le',
'-ar', '48000',
'-y',
outputPath
];
const ffmpeg = spawn('ffmpeg', args);
let stderr = '';
ffmpeg.stderr.on('data', (data) => {
stderr += data.toString();
});
ffmpeg.on('close', (code) => {
if (code !== 0) {
reject(new Error(`Atempo adjustment failed: ${stderr}`));
return;
}
resolve(outputPath);
});
ffmpeg.on('error', (error) => {
reject(new Error(`Failed to spawn FFmpeg: ${error.message}`));
});
});
}
/**
* Build a chain of atempo filters for the given ratio
* Each atempo filter can only handle 0.5-2.0 range
* @param {number} ratio - Target tempo ratio
* @returns {string} - FFmpeg atempo filter chain
*/
buildAtempoChain(ratio) {
const filters = [];
let remaining = ratio;
while (remaining > 2.0 || remaining < 0.5) {
if (remaining > 2.0) {
filters.push('atempo=2.0');
remaining /= 2.0;
} else if (remaining < 0.5) {
filters.push('atempo=0.5');
remaining /= 0.5;
}
}
filters.push(`atempo=${remaining}`);
return filters.join(',');
}
/**
* Shift pitch without changing tempo
* @param {string} inputPath - Input file path
* @param {number} semitones - Pitch shift in semitones
* @returns {Promise<string>} - Path to the processed file
*/
async shiftPitch(inputPath, semitones) {
// Use uncompressed WAV for intermediate files
const outputPath = path.join(
this.options.tempDir,
`automixer_pitch_${Date.now()}_${Math.random().toString(36).slice(2, 11)}.wav`
);
// Calculate pitch ratio from semitones
const pitchRatio = Math.pow(2, semitones / 12);
return new Promise((resolve, reject) => {
// Use rubberband for pitch shifting with uncompressed PCM
const args = [
'-i', inputPath,
'-af', `rubberband=pitch=${pitchRatio}:tempo=1`,
'-c:a', 'pcm_s24le',
'-ar', '48000',
'-y',
outputPath
];
const ffmpeg = spawn('ffmpeg', args);
// Capture stderr for debugging (used implicitly in error fallback)
ffmpeg.stderr.on('data', () => {
// Stderr captured but not logged - used only for debugging
});
ffmpeg.on('close', async (code) => {
if (code !== 0) {
// Fallback: use asetrate + atempo combination
try {
const result = await this.shiftPitchFallback(inputPath, semitones, outputPath);
resolve(result);
return;
} catch (err) {
reject(new Error(`Pitch shift failed: ${err.message}`));
return;
}
}
resolve(outputPath);
});
ffmpeg.on('error', (error) => {
reject(new Error(`Failed to spawn FFmpeg: ${error.message}`));
});
});
}
/**
* Fallback pitch shifting using asetrate + atempo
* @param {string} inputPath - Input file path
* @param {number} semitones - Pitch shift in semitones
* @param {string} outputPath - Output file path
* @returns {Promise<string>} - Path to the processed file
*/
async shiftPitchFallback(inputPath, semitones, outputPath) {
const pitchRatio = Math.pow(2, semitones / 12);
const atempoFilters = this.buildAtempoChain(1 / pitchRatio);
return new Promise((resolve, reject) => {
// asetrate changes pitch, then atempo corrects the tempo
// Use uncompressed PCM for maximum precision
const args = [
'-i', inputPath,
'-af', `asetrate=44100*${pitchRatio},${atempoFilters},aresample=48000`,
'-c:a', 'pcm_s24le',
'-y',
outputPath
];
const ffmpeg = spawn('ffmpeg', args);
let stderr = '';
ffmpeg.stderr.on('data', (data) => {
stderr += data.toString();
});
ffmpeg.on('close', (code) => {
if (code !== 0) {
reject(new Error(`Pitch shift fallback failed: ${stderr}`));
return;
}
resolve(outputPath);
});
ffmpeg.on('error', (error) => {
reject(new Error(`Failed to spawn FFmpeg: ${error.message}`));
});
});
}
/**
* Adjust both tempo and pitch simultaneously
* @param {string} inputPath - Input file path
* @param {number} tempoRatio - Tempo multiplier
* @param {number} pitchSemitones - Pitch shift in semitones
* @returns {Promise<string>} - Path to the processed file
*/
async adjustTempoAndPitch(inputPath, tempoRatio, pitchSemitones) {
// Use uncompressed WAV for intermediate files
const outputPath = path.join(
this.options.tempDir,
`automixer_both_${Date.now()}_${Math.random().toString(36).slice(2, 11)}.wav`
);
const pitchRatio = Math.pow(2, pitchSemitones / 12);
return new Promise((resolve, reject) => {
// Use uncompressed PCM for maximum precision
const args = [
'-i', inputPath,
'-af', `rubberband=tempo=${tempoRatio}:pitch=${pitchRatio}`,
'-c:a', 'pcm_s24le',
'-ar', '48000',
'-y',
outputPath
];
const ffmpeg = spawn('ffmpeg', args);
let stderr = '';
ffmpeg.stderr.on('data', (data) => {
stderr += data.toString();
});
ffmpeg.on('close', (code) => {
if (code !== 0) {
reject(new Error(`Tempo and pitch adjustment failed: ${stderr}`));
return;
}
resolve(outputPath);
});
ffmpeg.on('error', (error) => {
reject(new Error(`Failed to spawn FFmpeg: ${error.message}`));
});
});
}
/**
* Clean up temporary files
* @param {string[]} files - Array of file paths to delete
*/
async cleanup(files) {
for (const file of files) {
try {
await fs.unlink(file);
} catch {
// Ignore cleanup errors
}
}
}
}
export default PitchShifter;
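Two pieces of arithmetic in PitchShifter are easy to verify in isolation: the atempo chaining (FFmpeg's atempo filter only accepts ratios in [0.5, 2.0], so larger changes are expressed as a product of chained filters) and the semitone-to-ratio formula. A standalone sketch mirroring both:

```javascript
// Express an arbitrary tempo ratio as a chain of atempo filters,
// each within FFmpeg's allowed [0.5, 2.0] range
function buildAtempoChain(ratio) {
  const filters = [];
  let remaining = ratio;
  while (remaining > 2.0 || remaining < 0.5) {
    if (remaining > 2.0) {
      filters.push('atempo=2.0');
      remaining /= 2.0;
    } else {
      filters.push('atempo=0.5');
      remaining /= 0.5;
    }
  }
  filters.push(`atempo=${remaining}`);
  return filters.join(',');
}

// Equal-temperament pitch ratio: one semitone = 2^(1/12)
function semitonesToRatio(semitones) {
  return Math.pow(2, semitones / 12);
}

buildAtempoChain(2.5); // 'atempo=2.0,atempo=1.25'
buildAtempoChain(0.3); // 'atempo=0.5,atempo=0.6'
semitonesToRatio(12);  // 2 (one octave up)
```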

src/audio/TrackMixer.js (normal file)

@@ -0,0 +1,507 @@
/**
* TrackMixer - Audio track mixing and crossfading
*
* Handles the actual audio mixing process using FFmpeg,
* including crossfades, volume adjustments, and track concatenation.
*
* @class TrackMixer
*/
import { spawn } from 'child_process';
import path from 'path';
import os from 'os';
import fs from 'fs/promises';
export class TrackMixer {
/**
* Create a TrackMixer instance
* @param {Object} options - Mixer options
*/
constructor(options = {}) {
this.options = {
crossfadeDuration: 8, // Shorter crossfade for faster transitions
outputFormat: 'mp3',
outputBitrate: 320,
crossfadeCurve: 'log', // 'linear', 'log', 'sqrt'
introFadeDuration: 3, // Fade in at the start
outroFadeDuration: 3, // Fade out at the end
...options
};
}
/**
* Mix multiple tracks with crossfades
* @param {Object[]} tracks - Array of track objects with processedPath
* @param {Object[]} transitions - Array of transition points
* @param {string} outputPath - Output file path
* @param {Function} progressCallback - Progress update callback
* @returns {Promise<string>} - Path to output file
*/
async mixTracks(tracks, transitions, outputPath, progressCallback = () => {}) {
if (tracks.length === 0) {
throw new Error('No tracks to mix');
}
if (tracks.length === 1) {
// Single track - just add fades
await this.addFades(tracks[0].processedPath, outputPath);
return outputPath;
}
// For complex mixing with multiple tracks, we'll chain pairs
let currentMix = tracks[0].processedPath;
let currentDuration = tracks[0].adjustedDuration || tracks[0].duration;
const tempFiles = [];
const crossfadeDuration = this.options.crossfadeDuration;
for (let i = 0; i < tracks.length - 1; i++) {
const trackB = tracks[i + 1];
const transition = transitions[i];
// Calculate effective transition points
let effectiveTransition;
if (i === 0) {
// First mix: use calculated transition points directly (beat-aligned)
effectiveTransition = {
outPoint: transition.outPoint,
inPoint: transition.inPoint,
beatIntervalA: transition.beatIntervalA,
beatIntervalB: transition.beatIntervalB
};
} else {
// Subsequent mixes: outPoint is near the end of current combined mix
// The current mix ends at a beat-aligned position from previous iteration
const mixZoneStart = currentDuration * 0.65;
const mixZoneEnd = currentDuration * 0.80;
// Use the target BPM's beat interval for alignment
const beatInterval = transition.beatIntervalB || (60 / 130);
const targetPoint = (mixZoneStart + mixZoneEnd) / 2;
const alignedOutPoint = Math.round(targetPoint / beatInterval) * beatInterval;
// For inPoint, use the same phase offset calculated for this track
// This ensures beats align even when chaining
effectiveTransition = {
outPoint: Math.max(mixZoneStart, Math.min(mixZoneEnd, alignedOutPoint)),
inPoint: transition.inPoint,
beatIntervalA: beatInterval,
beatIntervalB: transition.beatIntervalB
};
}
const isFirst = i === 0;
const isLast = i === tracks.length - 2;
// Use uncompressed WAV for intermediate files (better synchronization)
// Only the final file is encoded to MP3
const tempOutput = isLast
? outputPath
: path.join(os.tmpdir(), `automixer_mix_${Date.now()}_${i}.wav`);
if (!isLast) {
tempFiles.push(tempOutput);
}
progressCallback({
stage: 'mixing',
current: i + 1,
total: tracks.length - 1,
message: `Mixing track ${i + 1} with track ${i + 2}`
});
const trackBDuration = trackB.adjustedDuration || trackB.duration;
await this.crossfadeTracks(
currentMix,
trackB.processedPath,
tempOutput,
effectiveTransition,
{
trackADuration: currentDuration,
trackBDuration: trackBDuration,
isFirst,
isLast
}
);
// Update current mix path and calculate new combined duration
// Duration = (track A up to outPoint) + crossfade + (track B from inPoint)
const trackBRemainder = trackBDuration - effectiveTransition.inPoint;
currentDuration = effectiveTransition.outPoint + crossfadeDuration + trackBRemainder;
currentMix = tempOutput;
}
// Cleanup temp files
for (const tempFile of tempFiles) {
try {
await fs.unlink(tempFile);
} catch {
// Ignore cleanup errors
}
}
return outputPath;
}
/**
* Add fade in/out to a single track
* @param {string} inputPath - Input file path
* @param {string} outputPath - Output file path
* @returns {Promise<void>}
*/
async addFades(inputPath, outputPath) {
const introFade = this.options.introFadeDuration || 3;
const outroFade = this.options.outroFadeDuration || 5;
return new Promise((resolve, reject) => {
const args = [
'-i', inputPath,
// areverse trick: reversing, fading in, and reversing again applies the
// fade-out at the end without needing to know the track duration
'-af', `afade=t=in:st=0:d=${introFade},areverse,afade=t=in:st=0:d=${outroFade},areverse`,
'-c:a', 'libmp3lame',
'-b:a', `${this.options.outputBitrate}k`,
'-y',
outputPath
];
const ffmpeg = spawn('ffmpeg', args);
let stderr = '';
ffmpeg.stderr.on('data', (data) => {
stderr += data.toString();
});
ffmpeg.on('close', (code) => {
if (code !== 0) {
reject(new Error(`FFmpeg fade failed: ${stderr}`));
return;
}
resolve();
});
ffmpeg.on('error', (error) => {
reject(new Error(`Failed to spawn FFmpeg: ${error.message}`));
});
});
}
/**
* Crossfade two tracks together with beat synchronization
* @param {string} trackAPath - Path to first track
* @param {string} trackBPath - Path to second track
* @param {string} outputPath - Path for output
* @param {Object} transition - Transition points { outPoint, inPoint, beatOffset }
* @param {Object} options - Additional options (isFirst, isLast)
* @returns {Promise<void>}
*/
async crossfadeTracks(trackAPath, trackBPath, outputPath, transition, options = {}) {
const { outPoint, inPoint } = transition;
const { isLast = true } = options;
const crossfadeDuration = this.options.crossfadeDuration;
// Calculate the actual crossfade start point in track A
const fadeOutStart = Math.max(0, outPoint);
// Build the FFmpeg filter complex
const filterComplex = this.buildCrossfadeFilter(
fadeOutStart,
inPoint,
crossfadeDuration,
this.options.crossfadeCurve,
{ isLast }
);
return new Promise((resolve, reject) => {
const timestamp = Math.floor(Date.now() / 1000);
// Determine whether the output is the final file (MP3) or an intermediate (WAV)
const isFinalOutput = outputPath.toLowerCase().endsWith('.mp3');
const args = [
'-i', trackAPath,
'-i', trackBPath,
'-filter_complex', filterComplex,
'-map', '[out]'
];
if (isFinalOutput) {
// Final file: encode to MP3 with metadata
args.push(
'-c:a', 'libmp3lame',
'-b:a', `${this.options.outputBitrate}k`,
'-metadata', `title=automixer-${timestamp}`,
'-metadata', 'artist=automixer',
'-metadata', `album=automixer-${timestamp}`,
'-metadata', `comment=Generated by automixer at ${timestamp}`
);
} else {
// Intermediate file: use uncompressed PCM WAV for maximum precision
args.push(
'-c:a', 'pcm_s24le',
'-ar', '48000'
);
}
args.push('-y', outputPath);
const ffmpeg = spawn('ffmpeg', args);
let stderr = '';
ffmpeg.stderr.on('data', (data) => {
stderr += data.toString();
});
ffmpeg.on('close', (code) => {
if (code !== 0) {
reject(new Error(`FFmpeg crossfade failed: ${stderr}`));
return;
}
resolve();
});
ffmpeg.on('error', (error) => {
reject(new Error(`Failed to spawn FFmpeg: ${error.message}`));
});
});
}
/**
* Build FFmpeg filter complex for DJ-style EQ crossfade
*
* Real DJ mixing technique:
* 1. Track A plays in full (bass, mids, highs)
* 2. Track B enters WITHOUT BASS (mids/highs only), beat-synchronized
* 3. Gradually: cut the bass on A while raising the bass on B (EQ crossfade)
* 4. Track A fades out smoothly while B already has control
*
* @param {number} fadeStart - Start of crossfade in track A (seconds from start)
* @param {number} inPoint - Start point in track B to begin crossfade
* @param {number} duration - Crossfade duration
* @param {string} curve - Fade curve type
* @param {Object} options - { isLast }
* @returns {string} - FFmpeg filter complex string
*/
buildCrossfadeFilter(fadeStart, inPoint, duration, curve = 'log', options = {}) {
const curveType = this.getCurveType(curve);
const { isLast = true } = options;
// Cutoff frequency separating bass from mids/highs
const bassFreq = 180;
// Crossfade phases:
// Phase 1 (0-15%): Track B enters without bass (hi-hats and melody only)
// Phase 2 (15-25%): brief EQ crossfade of the bass
// Phase 3: Track A fades out gradually from the start
const phase1End = duration * 0.15;
const phase2End = duration * 0.25;
const fadeOutStart = 0; // Start the fade-out at the beginning of the crossfade
const fadeOutDuration = duration * 0.7; // Fade out over 70% of the crossfade
// The transition points already include detection-latency compensation,
// so the times are applied directly here
const aFadeStart = fadeStart;
const aFadeEnd = fadeStart + duration;
const bFadeStart = inPoint;
const bFadeEnd = inPoint + duration;
const bPostStart = bFadeEnd;
const filters = [];
// === TRACK A: section before the crossfade ===
filters.push(`[0:a]atrim=0:${aFadeStart},asetpts=PTS-STARTPTS[a_pre]`);
// === TRACK A during the crossfade ===
// A's bass: gradual fade-out during phases 2-3
filters.push(
`[0:a]atrim=${aFadeStart}:${aFadeEnd},asetpts=PTS-STARTPTS,` +
`lowpass=f=${bassFreq},` +
`afade=t=out:st=${phase1End}:d=${phase2End - phase1End}:curve=${curveType}[a_bass]`
);
// A's mids/highs: smooth, extended fade-out in phase 3
filters.push(
`[0:a]atrim=${aFadeStart}:${aFadeEnd},asetpts=PTS-STARTPTS,` +
`highpass=f=${bassFreq},` +
`afade=t=out:st=${fadeOutStart}:d=${fadeOutDuration}:curve=${curveType}[a_mid_high]`
);
// === TRACK B during the crossfade ===
// B's mids/highs: enter from the start
filters.push(
`[1:a]atrim=${bFadeStart}:${bFadeEnd},asetpts=PTS-STARTPTS,` +
`highpass=f=${bassFreq},` +
`afade=t=in:st=0:d=${phase1End}:curve=${curveType}[b_mid_high]`
);
// B's bass: enters in phase 2
filters.push(
`[1:a]atrim=${bFadeStart}:${bFadeEnd},asetpts=PTS-STARTPTS,` +
`lowpass=f=${bassFreq},` +
`afade=t=in:st=${phase1End}:d=${phase2End - phase1End}:curve=${curveType}[b_bass]`
);
// === TRACK B: section after the crossfade ===
filters.push(`[1:a]atrim=${bPostStart},asetpts=PTS-STARTPTS[b_post]`);
// === MIX during the crossfade ===
filters.push(
'[a_bass][a_mid_high][b_mid_high][b_bass]amix=inputs=4:duration=longest:normalize=0[crossfade_mix]'
);
// === FINAL CONCATENATION ===
if (isLast) {
const outroFade = this.options.outroFadeDuration || 5;
filters.push('[a_pre][crossfade_mix][b_post]concat=n=3:v=0:a=1[pre_out]');
filters.push(`[pre_out]areverse,afade=t=in:st=0:d=${outroFade}:curve=log,areverse[out]`);
} else {
filters.push('[a_pre][crossfade_mix][b_post]concat=n=3:v=0:a=1[out]');
}
return filters.join(';');
}
/**
* Get FFmpeg curve type name
* @param {string} curve - Curve type
* @returns {string} - FFmpeg curve name
*/
getCurveType(curve) {
const curves = {
linear: 'tri',
log: 'log',
sqrt: 'qsin',
sine: 'hsin',
exponential: 'exp'
};
return curves[curve] || 'log';
}
/**
* Apply volume normalization to a track
* @param {string} inputPath - Input file path
* @param {string} outputPath - Output file path
* @param {number} targetLUFS - Target loudness in LUFS
* @returns {Promise<void>}
*/
async normalizeVolume(inputPath, outputPath, targetLUFS = -14) {
return new Promise((resolve, reject) => {
// First pass: analyze loudness
const analyzeArgs = [
'-i', inputPath,
'-af', `loudnorm=I=${targetLUFS}:TP=-2:LRA=11:print_format=json`,
'-f', 'null',
'-'
];
const analyze = spawn('ffmpeg', analyzeArgs);
let stderr = '';
analyze.stderr.on('data', (data) => {
stderr += data.toString();
});
analyze.on('close', async (_code) => {
// Parse loudness info from output
const inputI = stderr.match(/"input_i"\s*:\s*"(-?\d+\.?\d*)"/);
const inputTP = stderr.match(/"input_tp"\s*:\s*"(-?\d+\.?\d*)"/);
const inputLRA = stderr.match(/"input_lra"\s*:\s*"(-?\d+\.?\d*)"/);
const inputThresh = stderr.match(/"input_thresh"\s*:\s*"(-?\d+\.?\d*)"/);
if (!inputI) {
// If analysis failed, just copy the file
await fs.copyFile(inputPath, outputPath);
resolve();
return;
}
// Second pass: apply normalization
const normalizeArgs = [
'-i', inputPath,
'-af', `loudnorm=I=${targetLUFS}:TP=-2:LRA=11:measured_I=${inputI[1]}:measured_TP=${inputTP?.[1] || '-1'}:measured_LRA=${inputLRA?.[1] || '11'}:measured_thresh=${inputThresh?.[1] || '-40'}:offset=0:linear=true`,
'-c:a', 'libmp3lame',
'-b:a', `${this.options.outputBitrate}k`,
'-y',
outputPath
];
const normalize = spawn('ffmpeg', normalizeArgs);
let normStderr = '';
normalize.stderr.on('data', (data) => {
normStderr += data.toString();
});
normalize.on('close', (normCode) => {
if (normCode !== 0) {
reject(new Error(`Volume normalization failed: ${normStderr}`));
return;
}
resolve();
});
normalize.on('error', (error) => {
reject(new Error(`Failed to normalize: ${error.message}`));
});
});
analyze.on('error', (error) => {
reject(new Error(`Failed to analyze loudness: ${error.message}`));
});
});
}
/**
* Simple concatenation without crossfade
* @param {string[]} trackPaths - Array of track file paths
* @param {string} outputPath - Output file path
* @returns {Promise<void>}
*/
async concatenateTracks(trackPaths, outputPath) {
// Create a temporary file list
const listFile = path.join(os.tmpdir(), `automixer_list_${Date.now()}.txt`);
const listContent = trackPaths.map(p => `file '${p}'`).join('\n');
await fs.writeFile(listFile, listContent);
return new Promise((resolve, reject) => {
const args = [
'-f', 'concat',
'-safe', '0',
'-i', listFile,
'-c:a', 'libmp3lame',
'-b:a', `${this.options.outputBitrate}k`,
'-y',
outputPath
];
const ffmpeg = spawn('ffmpeg', args);
let stderr = '';
ffmpeg.stderr.on('data', (data) => {
stderr += data.toString();
});
ffmpeg.on('close', async (code) => {
try {
await fs.unlink(listFile);
} catch {
// Ignore
}
if (code !== 0) {
reject(new Error(`FFmpeg concat failed: ${stderr}`));
return;
}
resolve();
});
ffmpeg.on('error', (error) => {
reject(new Error(`Failed to spawn FFmpeg: ${error.message}`));
});
});
}
}
export default TrackMixer;
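The two-pass `normalizeVolume` flow above first runs FFmpeg with `loudnorm=...:print_format=json` and then scrapes the measured values out of stderr with regexes. A minimal standalone sketch of that extraction step (the stderr payload below is invented for illustration, shaped like loudnorm's first-pass JSON report):

```javascript
// Hypothetical stderr fragment, shaped like loudnorm's first-pass JSON output.
const sampleStderr = `
[Parsed_loudnorm_0 @ 0x0] {
  "input_i" : "-23.50",
  "input_tp" : "-4.20",
  "input_lra" : "7.80",
  "input_thresh" : "-34.10"
}`;

// Same regex idea as normalizeVolume: quoted key, optional sign, decimal value.
function readMeasurement(stderr, key) {
  const m = stderr.match(new RegExp(`"${key}"\\s*:\\s*"(-?\\d+\\.?\\d*)"`));
  return m ? parseFloat(m[1]) : null;
}

const measuredI = readMeasurement(sampleStderr, 'input_i');   // -23.5
const measuredTP = readMeasurement(sampleStderr, 'input_tp'); // -4.2
```

Missing keys come back as `null`, which is why the class falls back to a plain file copy when `input_i` cannot be parsed.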

src/core/AutoMixer.js (new file, 386 lines)

@@ -0,0 +1,386 @@
/**
* AutoMixer - Main orchestrator class
*
* Coordinates the entire mixing process:
* 1. Analyzes all input tracks
* 2. Calculates optimal BPM transitions
* 3. Applies pitch/tempo adjustments
* 4. Creates seamless crossfades between tracks
*
* @class AutoMixer
*/
import { EventEmitter } from 'events';
import path from 'path';
import fs from 'fs/promises';
import { BPMDetector } from '../audio/BPMDetector.js';
import { AudioAnalyzer } from '../audio/AudioAnalyzer.js';
import { TrackMixer } from '../audio/TrackMixer.js';
import { PitchShifter } from '../audio/PitchShifter.js';
/**
* @typedef {Object} MixOptions
* @property {number} [crossfadeDuration=8] - Duration of crossfade in seconds
* @property {number} [targetBPM=null] - Target BPM for all tracks (null = auto-detect)
* @property {boolean} [preservePitch=true] - Preserve pitch when changing tempo
* @property {number} [maxBPMChange=8] - Maximum BPM change percentage allowed
* @property {string} [outputFormat='mp3'] - Output format (mp3, wav, flac)
* @property {number} [outputBitrate=320] - Output bitrate in kbps
*/
/**
* @typedef {Object} TrackInfo
* @property {string} filepath - Path to the audio file
* @property {number} bpm - Detected BPM
* @property {number} duration - Duration in seconds
* @property {number[]} beats - Array of beat timestamps
* @property {string} key - Musical key (if detectable)
*/
export class AutoMixer extends EventEmitter {
/**
* Create an AutoMixer instance
* @param {MixOptions} options - Mixer configuration options
*/
constructor(options = {}) {
super();
this.options = {
crossfadeDuration: 8,
targetBPM: null,
preservePitch: true,
maxBPMChange: 15, // Increased to allow bigger tempo changes
outputFormat: 'mp3',
outputBitrate: 320,
tempDir: null,
...options
};
this.bpmDetector = new BPMDetector();
this.audioAnalyzer = new AudioAnalyzer();
this.trackMixer = new TrackMixer(this.options);
this.pitchShifter = new PitchShifter();
this.tracks = [];
this.analyzedTracks = [];
}
/**
* Add tracks to the mix queue
* @param {string[]} filepaths - Array of paths to audio files
* @returns {AutoMixer} - Returns this for chaining
*/
addTracks(filepaths) {
this.tracks.push(...filepaths);
return this;
}
/**
* Clear all tracks from the queue
* @returns {AutoMixer} - Returns this for chaining
*/
clearTracks() {
this.tracks = [];
this.analyzedTracks = [];
return this;
}
/**
* Analyze all tracks in the queue
* Detects BPM, beats, and other audio characteristics
* @returns {Promise<TrackInfo[]>} - Array of analyzed track information
*/
async analyzeTracks() {
this.emit('analysis:start', { totalTracks: this.tracks.length });
this.analyzedTracks = [];
for (let i = 0; i < this.tracks.length; i++) {
const filepath = this.tracks[i];
this.emit('analysis:track:start', { index: i, filepath });
try {
// Validate file exists
await fs.access(filepath);
// Get audio duration and metadata
const metadata = await this.audioAnalyzer.getMetadata(filepath);
// Detect BPM and beats
const detection = await this.bpmDetector.detect(filepath);
const trackInfo = {
filepath,
filename: path.basename(filepath),
bpm: detection.bpm,
beats: detection.beats,
onsets: detection.onsets || [], // Store raw onsets for sync
phase: detection.phase || 0,
duration: metadata.duration,
sampleRate: metadata.sampleRate,
channels: metadata.channels
};
this.analyzedTracks.push(trackInfo);
this.emit('analysis:track:complete', { index: i, trackInfo });
} catch (error) {
this.emit('analysis:track:error', { index: i, filepath, error });
throw new Error(`Failed to analyze track "${filepath}": ${error.message}`);
}
}
this.emit('analysis:complete', { tracks: this.analyzedTracks });
return this.analyzedTracks;
}
/**
* Calculate the optimal target BPM for the mix
* Uses the median BPM to minimize required tempo changes
* @returns {number} - Optimal target BPM
*/
calculateOptimalBPM() {
if (this.analyzedTracks.length === 0) {
throw new Error('No tracks analyzed. Call analyzeTracks() first.');
}
if (this.options.targetBPM) {
return this.options.targetBPM;
}
// Use median BPM to minimize overall tempo changes
const bpms = this.analyzedTracks.map(t => t.bpm).sort((a, b) => a - b);
const mid = Math.floor(bpms.length / 2);
return bpms.length % 2 !== 0
? bpms[mid]
: (bpms[mid - 1] + bpms[mid]) / 2;
}
/**
* Calculate tempo adjustment ratio for a track
* @param {number} sourceBPM - Original BPM
* @param {number} targetBPM - Target BPM
* @returns {number} - Tempo ratio (1.0 = no change)
*/
calculateTempoRatio(sourceBPM, targetBPM) {
const ratio = targetBPM / sourceBPM;
const changePercent = Math.abs(ratio - 1) * 100;
// Check if change exceeds maximum allowed
if (changePercent > this.options.maxBPMChange) {
// Try halving or doubling the source BPM to find better match
const halfRatio = targetBPM / (sourceBPM / 2);
const doubleRatio = targetBPM / (sourceBPM * 2);
const halfChange = Math.abs(halfRatio - 1) * 100;
const doubleChange = Math.abs(doubleRatio - 1) * 100;
// Pick the option with smallest change
if (halfChange < doubleChange && halfChange <= this.options.maxBPMChange) {
return halfRatio;
}
if (doubleChange <= this.options.maxBPMChange) {
return doubleRatio;
}
// If neither works, still apply the original ratio (better than no adjustment)
// This ensures all tracks match tempo even with big differences
}
return ratio;
}
/**
* Find the optimal transition point between two tracks
*
 * SIMPLIFIED APPROACH:
 * 1. outPoint is a point inside track A's mix zone
 * 2. inPoint is a point near the start of track B
 * 3. THE KEY: the difference (outPoint - inPoint) must be an EXACT
 *    multiple of the beat interval so the kicks line up.
*
* @param {TrackInfo} trackA - Outgoing track
* @param {TrackInfo} trackB - Incoming track
* @returns {Object} - Transition points { outPoint, inPoint }
*/
findTransitionPoints(trackA, trackB) {
const duration = trackA.adjustedDuration || trackA.duration;
const bpmA = trackA.adjustedBPM || trackA.bpm || 128;
const bpmB = trackB.adjustedBPM || trackB.bpm || 128;
// Both tracks already share the same target BPM
const beatInterval = 60 / bpmA; // ~461 ms at 130 BPM
const halfBeat = beatInterval / 2;
// Use the FIRST detected onset as the absolute reference
const firstOnsetA = (trackA.onsets && trackA.onsets.length > 0) ? trackA.onsets[0] : 0;
const firstOnsetB = (trackB.onsets && trackB.onsets.length > 0) ? trackB.onsets[0] : 0;
// Adjust phases if the tempo was changed
const tempoRatioA = trackA.tempoRatio || 1;
const tempoRatioB = trackB.tempoRatio || 1;
const adjustedOnsetA = firstOnsetA / tempoRatioA;
const adjustedOnsetB = firstOnsetB / tempoRatioB;
// Compute the normalized phase in [0, beatInterval)
let phaseA = adjustedOnsetA % beatInterval;
let phaseB = adjustedOnsetB % beatInterval;
// Mix zone: 65-80% of track A
const mixZoneStart = duration * 0.65;
const mixZoneEnd = duration * 0.80;
const mixZoneMid = (mixZoneStart + mixZoneEnd) / 2;
// STEP 1: outPoint must land on a beat of A (aligned to its phase)
// Find the beat closest to mixZoneMid
const beatsFromStartA = Math.round((mixZoneMid - adjustedOnsetA) / beatInterval);
let outPoint = adjustedOnsetA + beatsFromStartA * beatInterval;
// Ensure it stays within the mix zone
while (outPoint < mixZoneStart) outPoint += beatInterval;
while (outPoint > mixZoneEnd) outPoint -= beatInterval;
// STEP 2: inPoint must land on a beat of B
// Use the detected onsets directly to find a real beat
const targetInPoint = 4; // ~4 seconds
const beatsFromStartB = Math.round((targetInPoint - adjustedOnsetB) / beatInterval);
let inPoint = adjustedOnsetB + beatsFromStartB * beatInterval;
// Ensure it is within the valid range (2-8 seconds)
while (inPoint < 2) inPoint += beatInterval;
while (inPoint > 8) inPoint -= beatInterval;
// STEP 3: Align the phases
// outPoint already sits on a beat of A (phase = phaseA)
// inPoint already sits on a beat of B (phase = phaseB)
// To make them coincide, shift inPoint by the phase difference
let phaseDiff = (outPoint % beatInterval) - (inPoint % beatInterval);
// Normalize to [-halfBeat, +halfBeat]
while (phaseDiff > halfBeat) phaseDiff -= beatInterval;
while (phaseDiff < -halfBeat) phaseDiff += beatInterval;
// Shift inPoint so its phase matches outPoint
inPoint += phaseDiff;
// Re-check that inPoint is valid
while (inPoint < 1) inPoint += beatInterval;
while (inPoint > 10) inPoint -= beatInterval;
return {
outPoint,
inPoint,
beatIntervalA: beatInterval,
beatIntervalB: beatInterval
};
}
/**
* Create the final mix from all analyzed tracks
* @param {string} outputPath - Path for the output file
* @returns {Promise<string>} - Path to the created mix
*/
async createMix(outputPath) {
if (this.analyzedTracks.length === 0) {
throw new Error('No tracks analyzed. Call analyzeTracks() first.');
}
if (this.analyzedTracks.length === 1) {
// Just copy the single track
await fs.copyFile(this.analyzedTracks[0].filepath, outputPath);
return outputPath;
}
this.emit('mix:start', { totalTracks: this.analyzedTracks.length, outputPath });
const targetBPM = this.calculateOptimalBPM();
this.emit('mix:bpm', { targetBPM });
// Prepare tracks with tempo adjustments
const preparedTracks = [];
for (let i = 0; i < this.analyzedTracks.length; i++) {
const track = this.analyzedTracks[i];
this.emit('mix:prepare:start', { index: i, track });
const tempoRatio = this.calculateTempoRatio(track.bpm, targetBPM);
// If tempo adjustment needed, create adjusted version
let processedPath = track.filepath;
// Always adjust tempo if ratio differs from 1 (even small differences matter for beat sync)
if (Math.abs(tempoRatio - 1) > 0.001) {
console.log(` -> Adjusting tempo for track ${i + 1}`);
processedPath = await this.pitchShifter.adjustTempo(
track.filepath,
tempoRatio,
this.options.preservePitch
);
}
preparedTracks.push({
...track,
processedPath,
tempoRatio,
adjustedBPM: track.bpm * tempoRatio,
adjustedDuration: track.duration / tempoRatio,
adjustedBeats: track.beats.map(b => b / tempoRatio),
adjustedOnsets: (track.onsets || []).map(o => o / tempoRatio)
});
this.emit('mix:prepare:complete', { index: i, tempoRatio });
}
// Calculate transition points for all track pairs
const transitions = [];
for (let i = 0; i < preparedTracks.length - 1; i++) {
const transition = this.findTransitionPoints(
preparedTracks[i],
preparedTracks[i + 1]
);
transitions.push(transition);
}
// Create the final mix
this.emit('mix:render:start');
await this.trackMixer.mixTracks(
preparedTracks,
transitions,
outputPath,
(progress) => this.emit('mix:render:progress', progress)
);
// Cleanup temporary files
for (const track of preparedTracks) {
if (track.processedPath !== track.filepath) {
try {
await fs.unlink(track.processedPath);
} catch {
// Ignore cleanup errors
}
}
}
this.emit('mix:complete', { outputPath });
return outputPath;
}
/**
* Run the complete mixing process
* Analyzes tracks and creates the mix in one call
* @param {string[]} inputFiles - Array of input file paths
* @param {string} outputPath - Path for the output file
* @returns {Promise<string>} - Path to the created mix
*/
async mix(inputFiles, outputPath) {
this.clearTracks();
this.addTracks(inputFiles);
await this.analyzeTracks();
return await this.createMix(outputPath);
}
}
export default AutoMixer;
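The phase step in `findTransitionPoints` reduces to one piece of modular arithmetic: take the phase difference between the two cut points on a shared beat grid, fold it into a half-beat window, and nudge `inPoint` by the result. A self-contained sketch of just that arithmetic (the numbers are illustrative, not from a real track):

```javascript
// Fold the phase difference between two cut points into
// [-beatInterval/2, +beatInterval/2], as findTransitionPoints does.
function phaseOffset(outPoint, inPoint, beatInterval) {
  let diff = (outPoint % beatInterval) - (inPoint % beatInterval);
  const halfBeat = beatInterval / 2;
  while (diff > halfBeat) diff -= beatInterval;
  while (diff < -halfBeat) diff += beatInterval;
  return diff;
}

const beatInterval = 60 / 128; // 0.46875 s per beat at 128 BPM
const outPoint = 180;          // sits exactly on A's grid (384 beats in)
const inPoint = 4.1;           // off-grid cut near the start of B
const aligned = inPoint + phaseOffset(outPoint, inPoint, beatInterval);
// aligned is now a whole number of beats away from outPoint
```

Because the offset is at most half a beat in magnitude, the correction never moves `inPoint` far from where the mix-zone heuristic placed it.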

src/index.js (new file, 24 lines)

@@ -0,0 +1,24 @@
/**
* AutoMixer - Automatic DJ-style audio mixer
*
* Main entry point for the automixer library.
* Provides programmatic access to audio mixing functionality.
*
* @module automixer
*/
import { AutoMixer } from './core/AutoMixer.js';
import { BPMDetector } from './audio/BPMDetector.js';
import { AudioAnalyzer } from './audio/AudioAnalyzer.js';
import { TrackMixer } from './audio/TrackMixer.js';
import { PitchShifter } from './audio/PitchShifter.js';
export {
AutoMixer,
BPMDetector,
AudioAnalyzer,
TrackMixer,
PitchShifter
};
export default AutoMixer;
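`AutoMixer.calculateTempoRatio` (re-exported here) handles octave errors in BPM detection by also trying the half- and double-BPM readings of the source tempo before giving up and applying the raw ratio. A standalone sketch of that selection logic, assuming the same 15% default cap as the class:

```javascript
// Pick the tempo ratio whose deviation from 1.0 stays under maxChangePct,
// trying the raw, half-BPM and double-BPM readings of the source tempo.
function octaveAwareRatio(sourceBPM, targetBPM, maxChangePct = 15) {
  const direct = targetBPM / sourceBPM;
  if (Math.abs(direct - 1) * 100 <= maxChangePct) return direct;
  const half = targetBPM / (sourceBPM / 2);
  const double = targetBPM / (sourceBPM * 2);
  const halfChange = Math.abs(half - 1) * 100;
  const doubleChange = Math.abs(double - 1) * 100;
  if (halfChange < doubleChange && halfChange <= maxChangePct) return half;
  if (doubleChange <= maxChangePct) return double;
  return direct; // fall back to the raw ratio, as the class does
}

octaveAwareRatio(120, 132); // 1.1 -- plain 10% speed-up
octaveAwareRatio(70, 128);  // 128/140 -- treats 70 BPM as half-time 140 BPM
```

The fallback to `direct` is deliberate: a large tempo change still beats leaving two tracks at mismatched tempos during a crossfade.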

src/utils/index.js (new file, 173 lines)

@@ -0,0 +1,173 @@
/**
* Utility functions for AutoMixer
*
* @module utils
*/
/**
* Format seconds to mm:ss string
* @param {number} seconds - Duration in seconds
* @returns {string} - Formatted time string
*/
export function formatDuration(seconds) {
const mins = Math.floor(seconds / 60);
const secs = Math.floor(seconds % 60);
return `${mins}:${secs.toString().padStart(2, '0')}`;
}
/**
* Format seconds to hh:mm:ss string
* @param {number} seconds - Duration in seconds
* @returns {string} - Formatted time string
*/
export function formatLongDuration(seconds) {
const hours = Math.floor(seconds / 3600);
const mins = Math.floor((seconds % 3600) / 60);
const secs = Math.floor(seconds % 60);
if (hours > 0) {
return `${hours}:${mins.toString().padStart(2, '0')}:${secs.toString().padStart(2, '0')}`;
}
return `${mins}:${secs.toString().padStart(2, '0')}`;
}
/**
* Convert BPM to beat interval in seconds
* @param {number} bpm - Beats per minute
* @returns {number} - Interval between beats in seconds
*/
export function bpmToInterval(bpm) {
return 60 / bpm;
}
/**
* Convert beat interval to BPM
* @param {number} interval - Interval in seconds
* @returns {number} - Beats per minute
*/
export function intervalToBpm(interval) {
return 60 / interval;
}
/**
* Calculate tempo ratio between two BPMs
* @param {number} sourceBPM - Source BPM
* @param {number} targetBPM - Target BPM
* @returns {number} - Tempo ratio
*/
export function calculateTempoRatio(sourceBPM, targetBPM) {
return targetBPM / sourceBPM;
}
/**
* Find the nearest beat to a given timestamp
* @param {number[]} beats - Array of beat timestamps
* @param {number} timestamp - Target timestamp
* @returns {number} - Nearest beat timestamp
*/
export function findNearestBeat(beats, timestamp) {
if (!beats || beats.length === 0) {
return timestamp;
}
return beats.reduce((closest, beat) => {
return Math.abs(beat - timestamp) < Math.abs(closest - timestamp)
? beat
: closest;
});
}
/**
* Calculate percentage change between two values
* @param {number} original - Original value
* @param {number} updated - New value
* @returns {number} - Percentage change
*/
export function percentageChange(original, updated) {
return ((updated - original) / original) * 100;
}
/**
* Clamp a value between min and max
* @param {number} value - Value to clamp
* @param {number} min - Minimum value
* @param {number} max - Maximum value
* @returns {number} - Clamped value
*/
export function clamp(value, min, max) {
return Math.min(Math.max(value, min), max);
}
/**
* Linear interpolation between two values
* @param {number} a - Start value
* @param {number} b - End value
* @param {number} t - Interpolation factor (0-1)
* @returns {number} - Interpolated value
*/
export function lerp(a, b, t) {
return a + (b - a) * t;
}
/**
* Generate a unique ID
* @returns {string} - Unique ID string
*/
export function generateId() {
return `${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;
}
/**
* Delay execution for specified milliseconds
* @param {number} ms - Milliseconds to delay
* @returns {Promise<void>}
*/
export function delay(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
/**
* Check if a file path has an audio extension
* @param {string} filepath - File path to check
* @returns {boolean} - True if audio file
*/
export function isAudioFile(filepath) {
const audioExtensions = ['.mp3', '.wav', '.flac', '.ogg', '.m4a', '.aac', '.wma'];
const ext = filepath.toLowerCase().slice(filepath.lastIndexOf('.'));
return audioExtensions.includes(ext);
}
/**
* Convert decibels to linear amplitude
* @param {number} db - Value in decibels
* @returns {number} - Linear amplitude (0-1)
*/
export function dbToLinear(db) {
return Math.pow(10, db / 20);
}
/**
* Convert linear amplitude to decibels
* @param {number} linear - Linear amplitude (0-1)
* @returns {number} - Value in decibels
*/
export function linearToDb(linear) {
return 20 * Math.log10(Math.max(linear, 0.00001));
}
export default {
formatDuration,
formatLongDuration,
bpmToInterval,
intervalToBpm,
calculateTempoRatio,
findNearestBeat,
percentageChange,
clamp,
lerp,
generateId,
delay,
isAudioFile,
dbToLinear,
linearToDb
};
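The two gain helpers above are inverses of each other down to the `0.00001` floor (-100 dB), which keeps `linearToDb` from returning `-Infinity` on silence. A quick round-trip check of the same formulas:

```javascript
// Round trip between decibels and linear amplitude, mirroring
// dbToLinear/linearToDb above (including the 1e-5 silence floor).
const toLinear = db => Math.pow(10, db / 20);
const toDb = lin => 20 * Math.log10(Math.max(lin, 0.00001));

toLinear(-6);        // ~0.501 -- -6 dB is roughly half amplitude
toDb(toLinear(-14)); // ~-14 -- round trip is lossless above the floor
toDb(0);             // -100 -- silence clamps at the 1e-5 floor
```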

tests/automixer.test.js (new file, 148 lines)

@@ -0,0 +1,148 @@
/**
* AutoMixer Tests
*
* Basic test suite for the automixer library
*/
import { describe, it } from 'node:test';
import assert from 'node:assert';
import { AutoMixer, BPMDetector, AudioAnalyzer, PitchShifter } from '../src/index.js';
describe('AutoMixer', () => {
it('should create an instance with default options', () => {
const mixer = new AutoMixer();
assert.ok(mixer);
assert.strictEqual(mixer.options.crossfadeDuration, 8);
assert.strictEqual(mixer.options.preservePitch, true);
});
it('should create an instance with custom options', () => {
const mixer = new AutoMixer({
crossfadeDuration: 12,
targetBPM: 128,
preservePitch: false
});
assert.strictEqual(mixer.options.crossfadeDuration, 12);
assert.strictEqual(mixer.options.targetBPM, 128);
assert.strictEqual(mixer.options.preservePitch, false);
});
it('should add and clear tracks', () => {
const mixer = new AutoMixer();
mixer.addTracks(['track1.mp3', 'track2.mp3']);
assert.strictEqual(mixer.tracks.length, 2);
mixer.addTracks(['track3.mp3']);
assert.strictEqual(mixer.tracks.length, 3);
mixer.clearTracks();
assert.strictEqual(mixer.tracks.length, 0);
});
it('should calculate tempo ratio correctly', () => {
const mixer = new AutoMixer({ maxBPMChange: 10 });
// Simple ratio
const ratio1 = mixer.calculateTempoRatio(120, 132);
assert.strictEqual(ratio1, 1.1);
// Identity
const ratio2 = mixer.calculateTempoRatio(128, 128);
assert.strictEqual(ratio2, 1);
});
it('should throw error when analyzing without tracks', async () => {
const mixer = new AutoMixer();
await assert.rejects(
async () => mixer.createMix('output.mp3'),
{ message: 'No tracks analyzed. Call analyzeTracks() first.' }
);
});
});
describe('BPMDetector', () => {
it('should create an instance with default options', () => {
const detector = new BPMDetector();
assert.ok(detector);
assert.strictEqual(detector.options.minBPM, 60);
assert.strictEqual(detector.options.maxBPM, 200);
});
it('should normalize BPM correctly', () => {
const detector = new BPMDetector({ minBPM: 60, maxBPM: 200 });
// Normal BPM
assert.strictEqual(detector.normalizeBPM(120), 120);
// Half BPM (should double)
assert.strictEqual(detector.normalizeBPM(55), 110);
// Double BPM (should halve)
assert.strictEqual(detector.normalizeBPM(240), 120);
});
it('should extrapolate beats correctly', () => {
const detector = new BPMDetector();
// 120 BPM = 0.5s per beat
const beats = detector.extrapolateBeats([0, 0.5, 1.0], 120, 5);
assert.ok(beats.length > 0);
assert.ok(beats.every(b => b >= 0 && b < 5));
});
});
describe('AudioAnalyzer', () => {
it('should create an instance', () => {
const analyzer = new AudioAnalyzer();
assert.ok(analyzer);
});
it('should have cache management', () => {
const analyzer = new AudioAnalyzer();
analyzer.clearCache();
assert.strictEqual(analyzer.ffprobeCache.size, 0);
});
});
describe('PitchShifter', () => {
it('should create an instance with default options', () => {
const shifter = new PitchShifter();
assert.ok(shifter);
assert.strictEqual(shifter.options.outputFormat, 'mp3');
assert.strictEqual(shifter.options.outputBitrate, 320);
});
it('should build atempo chain correctly', () => {
const shifter = new PitchShifter();
// Simple ratio within range
const chain1 = shifter.buildAtempoChain(1.5);
assert.strictEqual(chain1, 'atempo=1.5');
// Ratio above 2.0 (needs chaining)
const chain2 = shifter.buildAtempoChain(3.0);
assert.ok(chain2.includes('atempo=2.0'));
assert.ok(chain2.includes('atempo=1.5'));
// Ratio below 0.5 (needs chaining)
const chain3 = shifter.buildAtempoChain(0.25);
assert.ok(chain3.includes('atempo=0.5'));
});
});
describe('Module Exports', () => {
it('should export all components', async () => {
const module = await import('../src/index.js');
assert.ok(module.AutoMixer);
assert.ok(module.BPMDetector);
assert.ok(module.AudioAnalyzer);
assert.ok(module.TrackMixer);
assert.ok(module.PitchShifter);
assert.ok(module.default);
});
});
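The `buildAtempoChain` behavior exercised above follows from FFmpeg's constraint that a single `atempo` stage only accepts ratios in [0.5, 2.0]; anything outside that range must be factored into a chain of stages. A hedged sketch of one way to do that factoring (not necessarily PitchShifter's exact formatting, which the tests show emits `atempo=2.0`):

```javascript
// Factor a tempo ratio into atempo stages, each within FFmpeg's
// supported [0.5, 2.0] range for a single filter instance.
function atempoChain(ratio) {
  const stages = [];
  while (ratio > 2.0) { stages.push(2.0); ratio /= 2.0; }
  while (ratio < 0.5) { stages.push(0.5); ratio /= 0.5; }
  stages.push(ratio); // residual factor, now guaranteed in range
  return stages.map(v => `atempo=${v}`).join(',');
}

atempoChain(1.5);  // "atempo=1.5"
atempoChain(3.0);  // "atempo=2,atempo=1.5"
atempoChain(0.25); // "atempo=0.5,atempo=0.5"
```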