Migration Gitea

This commit is contained in: parent d4ac6f5859, commit ce32ae3134
@@ -1,9 +1,9 @@
{
  "permissions": {
    "allow": [
      "Bash(npm run server:*)"
    ],
    "deny": [],
    "ask": []
  }
}
256 CLAUDE.md
@@ -1,128 +1,128 @@
# Video to MP3 Transcriptor - Instructions for Claude

## About the project

This project is a Node.js/Express API for downloading YouTube videos as MP3, then transcribing, translating, and summarizing them.

## Documentation

### API documentation

The complete API documentation lives in **`docs/API.md`**.

**IMPORTANT**: This documentation must ALWAYS be kept up to date. Whenever an endpoint is modified, added, or removed, the documentation must be updated accordingly.

### Documentation maintenance responsibilities

When you modify the code, you MUST update `docs/API.md` if:
- A new endpoint is added
- An existing endpoint is modified (parameters, responses, etc.)
- An endpoint is removed
- The default models change
- New parameters are added
- The response format changes

## Project structure

```
videotoMP3Transcriptor/
├── docs/
│   └── API.md               # Complete API documentation
├── src/
│   ├── server.js            # Express server and API routes
│   ├── services/
│   │   ├── youtube.js       # YouTube download
│   │   ├── transcription.js # OpenAI transcription
│   │   ├── translation.js   # GPT translation
│   │   └── summarize.js     # GPT-5.1 summarization
│   └── cli.js               # Command-line interface
├── public/                  # Web interface (if present)
├── output/                  # Default output directory
├── .env                     # Environment variables
└── package.json
```

## Configuration

### Server port
- Default port: **8888**
- Configurable via `process.env.PORT` in `.env`

### Default models
- **Transcription**: `gpt-4o-mini-transcribe`
- **Summarization**: `gpt-5.1`
- **Translation**: `gpt-4o-mini` (hardcoded)

### Required environment variables
```env
OPENAI_API_KEY=sk-...
PORT=8888            # optional
OUTPUT_DIR=./output  # optional
```

## Important commands

```bash
# Start the server
npm run server

# Run the CLI
npm run cli

# Install dependencies
npm install
```

## Points of attention

### The outputPath parameter
All endpoints now support an optional `outputPath` parameter that specifies a custom output directory. If it is not provided, the default `OUTPUT_DIR` directory is used.
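A minimal sketch of that fallback order (the `resolveOutputDir` helper is hypothetical, not part of the codebase; only the precedence described above is taken from the docs):

```javascript
// Hypothetical helper: pick the client-supplied outputPath if present,
// otherwise fall back to OUTPUT_DIR, then to the ./output default.
function resolveOutputDir(body, env) {
  return body.outputPath || env.OUTPUT_DIR || './output';
}

console.log(resolveOutputDir({ outputPath: '/tmp/out' }, {})); // '/tmp/out'
console.log(resolveOutputDir({}, {}));                         // './output'
```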
### Available transcription models
- `gpt-4o-mini-transcribe` (default) - Fast and inexpensive
- `gpt-4o-transcribe` - Higher quality
- `whisper-1` - Original Whisper model (supports more output formats)

### Output formats
- **Transcription**: txt, json, srt, vtt (depending on the model)
- **Translation**: txt
- **Summary**: txt

## Development rules

1. **Documentation first**: Before modifying an endpoint, check `docs/API.md`
2. **After a change**: Update `docs/API.md` immediately
3. **Testing**: Restart the server after every change
4. **Consistency**: Keep the same response structure across similar endpoints

## Endpoint architecture

### Streaming endpoints (SSE)
- `/download-stream`
- `/process-stream`
- `/summarize-stream`

These endpoints use Server-Sent Events to push progress updates in real time.
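On the wire, SSE events are `data:` lines separated by blank lines. A sketch of extracting the payloads from a buffered stream (the field syntax follows the SSE spec; the JSON payload shape is an assumption, not taken from this project):

```javascript
// Split an SSE text buffer into events and extract each `data:` payload.
// Assumes one `data:` line per event, as most progress streams send.
function parseSSE(buffer) {
  return buffer
    .split('\n\n')                    // events are separated by a blank line
    .filter((block) => block.trim())
    .map((block) => {
      const line = block.split('\n').find((l) => l.startsWith('data:'));
      return line ? line.slice(5).trim() : null;
    })
    .filter((payload) => payload !== null);
}

const sample = 'data: {"progress":10}\n\ndata: {"progress":100}\n\n';
console.log(parseSSE(sample)); // [ '{"progress":10}', '{"progress":100}' ]
```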
### Non-streaming endpoints
- `/download`
- `/process`
- All POST endpoints with file uploads

These endpoints return a single response once processing is complete.

## Maintenance

When adding a new feature:
1. Implement the feature in the appropriate service (`src/services/`)
2. Add the routes in `src/server.js`
3. **Update `docs/API.md` IMMEDIATELY**
4. Test the endpoint with curl or Postman
5. Check that the documentation is clear and complete

## Important notes

- The server must always run on port **8888**
- An OpenAI API key is required for transcription/translation/summarization
- The `output/` directory is created automatically if it does not exist
- Uploaded files are stored in `OUTPUT_DIR`
- YouTube videos are automatically downloaded as MP3
1122  docs/API.md (file diff suppressed because it is too large)
2258  public/app.js (file diff suppressed because it is too large)
1154  public/index.html (file diff suppressed because it is too large)
1432  public/style.css (file diff suppressed because it is too large)
2274  src/server.js (file diff suppressed because it is too large)
@@ -1,145 +1,145 @@
import { exec } from 'child_process';
import { promisify } from 'util';
import path from 'path';
import fs from 'fs';

const execPromise = promisify(exec);

/**
 * Convert a video/audio file to MP3 using FFmpeg
 * @param {string} inputPath - Path to input file
 * @param {object} options - Conversion options
 * @param {string} options.outputDir - Output directory (default: same as input)
 * @param {string} options.bitrate - Audio bitrate (default: 192k)
 * @param {string} options.quality - Audio quality 0-9 (default: 2, where 0 is best)
 * @returns {Promise<object>} Conversion result with output path
 */
export async function convertToMP3(inputPath, options = {}) {
  const {
    outputDir = path.dirname(inputPath),
    bitrate = '192k',
    quality = '2',
  } = options;

  // Ensure input file exists
  if (!fs.existsSync(inputPath)) {
    throw new Error(`Input file not found: ${inputPath}`);
  }

  // Generate output path
  const inputFilename = path.basename(inputPath, path.extname(inputPath));
  const outputPath = path.join(outputDir, `${inputFilename}.mp3`);

  // Check if output already exists
  if (fs.existsSync(outputPath)) {
    // Add timestamp to make it unique
    const timestamp = Date.now();
    const uniqueOutputPath = path.join(outputDir, `${inputFilename}_${timestamp}.mp3`);
    return convertToMP3Internal(inputPath, uniqueOutputPath, bitrate, quality);
  }

  return convertToMP3Internal(inputPath, outputPath, bitrate, quality);
}

/**
 * Internal conversion function
 */
async function convertToMP3Internal(inputPath, outputPath, bitrate, quality) {
  try {
    // FFmpeg command to convert to MP3
    // -i: input file
    // -vn: no video (audio only)
    // -ar 44100: audio sample rate 44.1kHz
    // -ac 2: stereo
    // -b:a: audio bitrate
    // -q:a: audio quality (VBR)
    const command = `ffmpeg -i "${inputPath}" -vn -ar 44100 -ac 2 -b:a ${bitrate} -q:a ${quality} "${outputPath}"`;

    console.log(`Converting: ${path.basename(inputPath)} -> ${path.basename(outputPath)}`);

    const { stdout, stderr } = await execPromise(command);

    // Verify output file was created
    if (!fs.existsSync(outputPath)) {
      throw new Error('Conversion failed: output file not created');
    }

    const stats = fs.statSync(outputPath);

    return {
      success: true,
      inputPath,
      outputPath,
      filename: path.basename(outputPath),
      size: stats.size,
      sizeHuman: formatBytes(stats.size),
    };
  } catch (error) {
    console.error(`Conversion error: ${error.message}`);
    throw new Error(`FFmpeg conversion failed: ${error.message}`);
  }
}

/**
 * Convert multiple files to MP3
 * @param {string[]} inputPaths - Array of input file paths
 * @param {object} options - Conversion options
 * @returns {Promise<object>} Batch conversion results
 */
export async function convertMultipleToMP3(inputPaths, options = {}) {
  const results = [];
  let successCount = 0;
  let failCount = 0;

  for (let i = 0; i < inputPaths.length; i++) {
    const inputPath = inputPaths[i];
    console.log(`[${i + 1}/${inputPaths.length}] Converting: ${path.basename(inputPath)}`);

    try {
      const result = await convertToMP3(inputPath, options);
      results.push({ ...result, index: i });
      successCount++;
    } catch (error) {
      results.push({
        success: false,
        inputPath,
        error: error.message,
        index: i,
      });
      failCount++;
      console.error(`Failed to convert ${path.basename(inputPath)}: ${error.message}`);
    }
  }

  return {
    totalFiles: inputPaths.length,
    successCount,
    failCount,
    results,
  };
}

/**
 * Format bytes to human readable format
 */
function formatBytes(bytes, decimals = 2) {
  if (bytes === 0) return '0 Bytes';

  const k = 1024;
  const dm = decimals < 0 ? 0 : decimals;
  const sizes = ['Bytes', 'KB', 'MB', 'GB'];

  const i = Math.floor(Math.log(bytes) / Math.log(k));

  return parseFloat((bytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes[i];
}

/**
 * Get supported input formats
 */
export function getSupportedFormats() {
  return {
    video: ['.mp4', '.avi', '.mkv', '.mov', '.wmv', '.flv', '.webm', '.m4v'],
    audio: ['.m4a', '.wav', '.flac', '.ogg', '.aac', '.wma', '.opus'],
  };
}
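The collision-avoidance naming in `convertToMP3` can be sketched in isolation (the `uniqueMp3Name` helper and the `exists` set are illustrative stand-ins for the `fs.existsSync` check; they are not in the source):

```javascript
// Pick `${name}.mp3`, or `${name}_${timestamp}.mp3` when that file already
// exists, mirroring the collision check in convertToMP3 above.
function uniqueMp3Name(name, exists, timestamp) {
  const candidate = `${name}.mp3`;
  return exists.has(candidate) ? `${name}_${timestamp}.mp3` : candidate;
}

console.log(uniqueMp3Name('talk', new Set(), 1700000000000));             // 'talk.mp3'
console.log(uniqueMp3Name('talk', new Set(['talk.mp3']), 1700000000000)); // 'talk_1700000000000.mp3'
```

Note this only avoids one collision per call; a second conversion in the same millisecond would still clash, which the real code accepts as unlikely.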
@@ -1,195 +1,195 @@
import OpenAI from 'openai';
import fs from 'fs';
import path from 'path';

let openai = null;

// Max characters per chunk for summarization
const MAX_CHUNK_CHARS = 30000;

/**
 * Get OpenAI client (lazy initialization)
 */
function getOpenAI() {
  if (!openai) {
    if (!process.env.OPENAI_API_KEY) {
      throw new Error('OPENAI_API_KEY environment variable is not set');
    }
    openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  }
  return openai;
}

/**
 * Summarize text using the configured GPT model (default: gpt-5.1)
 */
export async function summarizeText(text, options = {}) {
  const {
    model = 'gpt-5.1', // GPT-5.1 - latest OpenAI model (Nov 2025)
    language = 'same', // 'same' = same as input, or specify language code
    style = 'concise', // 'concise', 'detailed', 'bullet'
    maxLength = null,  // optional max length in words
  } = options;

  const client = getOpenAI();

  let styleInstruction = '';
  switch (style) {
    case 'detailed':
      styleInstruction = 'Provide a detailed summary that captures all important points and nuances.';
      break;
    case 'bullet':
      styleInstruction = 'Provide the summary as bullet points, highlighting the key points.';
      break;
    case 'concise':
    default:
      styleInstruction = 'Provide a concise summary that captures the main points.';
  }

  let languageInstruction = '';
  if (language === 'same') {
    languageInstruction = 'Write the summary in the same language as the input text.';
  } else {
    languageInstruction = `Write the summary in ${language}.`;
  }

  let lengthInstruction = '';
  if (maxLength) {
    lengthInstruction = `Keep the summary under ${maxLength} words.`;
  }

  const systemPrompt = `You are an expert summarizer. ${styleInstruction} ${languageInstruction} ${lengthInstruction}
Focus on the most important information and main ideas. Be accurate and objective.`;

  // Handle long texts by chunking
  if (text.length > MAX_CHUNK_CHARS) {
    return await summarizeLongText(text, { model, systemPrompt, style });
  }

  const response = await client.chat.completions.create({
    model,
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: `Please summarize the following text:\n\n${text}` },
    ],
    temperature: 0.3,
  });

  return {
    summary: response.choices[0].message.content,
    model,
    style,
    inputLength: text.length,
    chunks: 1,
  };
}

/**
 * Summarize long text by chunking and combining summaries
 */
async function summarizeLongText(text, options) {
  const { model, systemPrompt, style } = options;
  const client = getOpenAI();

  // Split into chunks
  const chunks = [];
  let currentChunk = '';
  const sentences = text.split(/(?<=[.!?。!?\n])\s*/);

  for (const sentence of sentences) {
    if ((currentChunk + sentence).length > MAX_CHUNK_CHARS && currentChunk) {
      chunks.push(currentChunk.trim());
      currentChunk = sentence;
    } else {
      currentChunk += ' ' + sentence;
    }
  }
  if (currentChunk.trim()) {
    chunks.push(currentChunk.trim());
  }

  console.log(`Summarizing ${chunks.length} chunks...`);

  // Summarize each chunk
  const chunkSummaries = [];
  for (let i = 0; i < chunks.length; i++) {
    console.log(`[${i + 1}/${chunks.length}] Summarizing chunk...`);
    const response = await client.chat.completions.create({
      model,
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: `Please summarize the following text (part ${i + 1} of ${chunks.length}):\n\n${chunks[i]}` },
      ],
      temperature: 0.3,
    });
    chunkSummaries.push(response.choices[0].message.content);
  }

  // Combine summaries if multiple chunks
  if (chunkSummaries.length === 1) {
    return {
      summary: chunkSummaries[0],
      model,
      style,
      inputLength: text.length,
      chunks: 1,
    };
  }

  // Create final combined summary
  const combinedText = chunkSummaries.join('\n\n---\n\n');
  const finalResponse = await client.chat.completions.create({
    model,
    messages: [
      { role: 'system', content: `You are an expert summarizer. Combine and synthesize the following partial summaries into a single coherent ${style} summary. Remove redundancy and ensure a smooth flow.` },
      { role: 'user', content: `Please combine these summaries into one:\n\n${combinedText}` },
    ],
    temperature: 0.3,
  });

  return {
    summary: finalResponse.choices[0].message.content,
    model,
    style,
    inputLength: text.length,
    chunks: chunks.length,
  };
}

/**
 * Summarize a text file
 */
export async function summarizeFile(filePath, options = {}) {
  if (!fs.existsSync(filePath)) {
    throw new Error(`File not found: ${filePath}`);
  }

  const { outputDir, ...otherOptions } = options;

  const text = fs.readFileSync(filePath, 'utf-8');
  const result = await summarizeText(text, otherOptions);

  // Save summary to file
  const dir = outputDir || path.dirname(filePath);
  const baseName = path.basename(filePath, path.extname(filePath));
  const summaryPath = path.join(dir, `${baseName}_summary.txt`);

  fs.writeFileSync(summaryPath, result.summary, 'utf-8');

  return {
    ...result,
    filePath,
    summaryPath,
  };
}

/**
 * Get available summary styles
 */
export function getSummaryStyles() {
  return {
    concise: 'A brief summary capturing main points',
    detailed: 'A comprehensive summary with nuances',
    bullet: 'Key points as bullet points',
  };
}
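The greedy sentence packing in `summarizeLongText` can be exercised in isolation (the `chunkSentences` helper is an extracted sketch; the tiny limit below is only to keep the example readable, where the real code uses `MAX_CHUNK_CHARS = 30000` and a wider sentence-boundary regex):

```javascript
// Greedy sentence packing: start a new chunk when adding the next
// sentence would exceed the limit, mirroring summarizeLongText above.
function chunkSentences(text, limit) {
  const chunks = [];
  let current = '';
  for (const sentence of text.split(/(?<=[.!?\n])\s*/)) {
    if ((current + sentence).length > limit && current) {
      chunks.push(current.trim());
      current = sentence;
    } else {
      current += ' ' + sentence;
    }
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}

console.log(chunkSentences('One. Two. Three. Four. Five.', 12));
// [ 'One. Two.', 'Three. Four.', 'Five.' ]
```

Note the limit is soft: a single sentence longer than `limit` still becomes its own oversized chunk rather than being split mid-sentence.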
|
||||
import OpenAI from 'openai';
|
||||
import fs from 'fs';
|
||||
import path from 'path';
|
||||
|
||||
let openai = null;
|
||||
|
||||
// Max characters per chunk for summarization
|
||||
const MAX_CHUNK_CHARS = 30000;
|
||||
|
||||
/**
|
||||
* Get OpenAI client (lazy initialization)
|
||||
*/
|
||||
function getOpenAI() {
|
||||
if (!openai) {
|
||||
if (!process.env.OPENAI_API_KEY) {
|
||||
throw new Error('OPENAI_API_KEY environment variable is not set');
|
||||
}
|
||||
openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
|
||||
}
|
||||
return openai;
|
||||
}
|
||||
|
||||
/**
|
||||
* Summarize text using GPT-4o
|
||||
*/
|
||||
export async function summarizeText(text, options = {}) {
|
||||
const {
|
||||
model = 'gpt-5.1', // GPT-5.1 - latest OpenAI model (Nov 2025)
|
||||
language = 'same', // 'same' = same as input, or specify language code
|
||||
style = 'concise', // 'concise', 'detailed', 'bullet'
|
||||
maxLength = null, // optional max length in words
|
||||
} = options;
|
||||
|
||||
const client = getOpenAI();
|
||||
|
||||
let styleInstruction = '';
|
||||
switch (style) {
|
||||
case 'detailed':
|
||||
styleInstruction = 'Provide a detailed summary that captures all important points and nuances.';
|
||||
break;
|
||||
case 'bullet':
|
||||
styleInstruction = 'Provide the summary as bullet points, highlighting the key points.';
|
||||
break;
|
||||
case 'concise':
|
||||
default:
|
||||
styleInstruction = 'Provide a concise summary that captures the main points.';
|
||||
}
|
||||
|
||||
let languageInstruction = '';
|
||||
if (language === 'same') {
|
||||
languageInstruction = 'Write the summary in the same language as the input text.';
|
||||
} else {
|
||||
languageInstruction = `Write the summary in ${language}.`;
|
||||
}
|
||||
|
||||
let lengthInstruction = '';
|
||||
if (maxLength) {
|
||||
lengthInstruction = `Keep the summary under ${maxLength} words.`;
|
||||
}
|
||||
|
||||
const systemPrompt = `You are an expert summarizer. ${styleInstruction} ${languageInstruction} ${lengthInstruction}
|
||||
Focus on the most important information and main ideas. Be accurate and objective.`;
|
||||
|
||||
// Handle long texts by chunking
|
||||
if (text.length > MAX_CHUNK_CHARS) {
|
||||
return await summarizeLongText(text, { model, systemPrompt, style });
|
||||
}
|
||||
|
||||
const response = await client.chat.completions.create({
|
||||
model,
|
||||
messages: [
|
||||
{ role: 'system', content: systemPrompt },
|
||||
{ role: 'user', content: `Please summarize the following text:\n\n${text}` },
|
||||
],
|
||||
temperature: 0.3,
|
||||
});
|
||||
|
||||
return {
|
||||
summary: response.choices[0].message.content,
|
||||
model,
|
||||
style,
|
||||
inputLength: text.length,
|
||||
chunks: 1,
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Summarize long text by chunking and combining summaries
|
||||
*/
|
||||
async function summarizeLongText(text, options) {
|
||||
const { model, systemPrompt, style } = options;
|
||||
const client = getOpenAI();
|
||||
|
||||
// Split into chunks
|
||||
const chunks = [];
|
||||
let currentChunk = '';
|
||||
const sentences = text.split(/(?<=[.!?。!?\n])\s*/);
|
||||
|
||||
for (const sentence of sentences) {
|
||||
if ((currentChunk + sentence).length > MAX_CHUNK_CHARS && currentChunk) {
|
||||
      chunks.push(currentChunk.trim());
      currentChunk = sentence;
    } else {
      currentChunk += ' ' + sentence;
    }
  }
  if (currentChunk.trim()) {
    chunks.push(currentChunk.trim());
  }

  console.log(`Summarizing ${chunks.length} chunks...`);

  // Summarize each chunk
  const chunkSummaries = [];
  for (let i = 0; i < chunks.length; i++) {
    console.log(`[${i + 1}/${chunks.length}] Summarizing chunk...`);
    const response = await client.chat.completions.create({
      model,
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: `Please summarize the following text (part ${i + 1} of ${chunks.length}):\n\n${chunks[i]}` },
      ],
      temperature: 0.3,
    });
    chunkSummaries.push(response.choices[0].message.content);
  }

  // Combine summaries if multiple chunks
  if (chunkSummaries.length === 1) {
    return {
      summary: chunkSummaries[0],
      model,
      style,
      inputLength: text.length,
      chunks: 1,
    };
  }

  // Create final combined summary
  const combinedText = chunkSummaries.join('\n\n---\n\n');
  const finalResponse = await client.chat.completions.create({
    model,
    messages: [
      { role: 'system', content: `You are an expert summarizer. Combine and synthesize the following partial summaries into a single coherent ${style} summary. Remove redundancy and ensure a smooth flow.` },
      { role: 'user', content: `Please combine these summaries into one:\n\n${combinedText}` },
    ],
    temperature: 0.3,
  });

  return {
    summary: finalResponse.choices[0].message.content,
    model,
    style,
    inputLength: text.length,
    chunks: chunks.length,
  };
}

/**
 * Summarize a text file
 */
export async function summarizeFile(filePath, options = {}) {
  if (!fs.existsSync(filePath)) {
    throw new Error(`File not found: ${filePath}`);
  }

  const { outputDir, ...otherOptions } = options;

  const text = fs.readFileSync(filePath, 'utf-8');
  const result = await summarizeText(text, otherOptions);

  // Save summary to file
  const dir = outputDir || path.dirname(filePath);
  const baseName = path.basename(filePath, path.extname(filePath));
  const summaryPath = path.join(dir, `${baseName}_summary.txt`);

  fs.writeFileSync(summaryPath, result.summary, 'utf-8');

  return {
    ...result,
    filePath,
    summaryPath,
  };
}

/**
 * Get available summary styles
 */
export function getSummaryStyles() {
  return {
    concise: 'A brief summary capturing main points',
    detailed: 'A comprehensive summary with nuances',
    bullet: 'Key points as bullet points',
  };
}
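
// Usage sketch (hypothetical, not part of the module): summarizing a saved
// transcript file. Assumes OPENAI_API_KEY is set; the path is illustrative.
// The `style` option is forwarded to summarizeText via the options spread.
//
//   const result = await summarizeFile('./output/talk.txt', { style: 'bullet' });
//   console.log(result.summaryPath); // e.g. './output/talk_summary.txt'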
@ -1,178 +1,178 @@
import OpenAI from 'openai';
import fs from 'fs';
import path from 'path';

let openai = null;

// Available transcription models
const MODELS = {
  'gpt-4o-transcribe': {
    name: 'gpt-4o-transcribe',
    formats: ['json', 'text'],
    supportsLanguage: true,
  },
  'gpt-4o-mini-transcribe': {
    name: 'gpt-4o-mini-transcribe',
    formats: ['json', 'text'],
    supportsLanguage: true,
  },
  'whisper-1': {
    name: 'whisper-1',
    formats: ['json', 'text', 'srt', 'vtt', 'verbose_json'],
    supportsLanguage: true,
  },
};

const DEFAULT_MODEL = 'gpt-4o-mini-transcribe';

/**
 * Get OpenAI client (lazy initialization)
 */
function getOpenAI() {
  if (!openai) {
    if (!process.env.OPENAI_API_KEY) {
      throw new Error('OPENAI_API_KEY environment variable is not set');
    }
    openai = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
    });
  }
  return openai;
}

/**
 * Get available models
 */
export function getAvailableModels() {
  return Object.keys(MODELS);
}

/**
 * Transcribe an audio file using OpenAI API
 * @param {string} filePath - Path to audio file
 * @param {Object} options - Transcription options
 * @param {string} options.language - Language code (e.g., 'en', 'fr', 'es', 'zh')
 * @param {string} options.responseFormat - Output format: 'json' or 'text' (gpt-4o models), or 'srt'/'vtt' (whisper-1 only)
 * @param {string} options.prompt - Optional context prompt for better accuracy
 * @param {string} options.model - Model to use (default: gpt-4o-mini-transcribe)
 */
export async function transcribeFile(filePath, options = {}) {
  const {
    language = null, // Auto-detect if null
    responseFormat = 'text', // json or text for gpt-4o models
    prompt = null, // Optional context prompt
    model = DEFAULT_MODEL,
  } = options;

  if (!fs.existsSync(filePath)) {
    throw new Error(`File not found: ${filePath}`);
  }

  const modelConfig = MODELS[model] || MODELS[DEFAULT_MODEL];
  const actualModel = modelConfig.name;

  // Validate response format for model
  let actualFormat = responseFormat;
  if (!modelConfig.formats.includes(responseFormat)) {
    console.warn(`Format '${responseFormat}' not supported by ${actualModel}, using 'text'`);
    actualFormat = 'text';
  }

  try {
    const transcriptionOptions = {
      file: fs.createReadStream(filePath),
      model: actualModel,
      response_format: actualFormat,
    };

    if (language) {
      transcriptionOptions.language = language;
    }

    if (prompt) {
      transcriptionOptions.prompt = prompt;
    }

    console.log(`Using model: ${actualModel}, format: ${actualFormat}${language ? `, language: ${language}` : ''}`);

    const transcription = await getOpenAI().audio.transcriptions.create(transcriptionOptions);

    return {
      success: true,
      filePath,
      text: actualFormat === 'json' || actualFormat === 'verbose_json'
        ? transcription.text
        : transcription,
      format: actualFormat,
      model: actualModel,
    };
  } catch (error) {
    throw new Error(`Transcription failed: ${error.message}`);
  }
}

/**
 * Transcribe and save to file
 */
export async function transcribeAndSave(filePath, options = {}) {
  const { outputFormat = 'txt', outputDir = null } = options;

  const result = await transcribeFile(filePath, options);

  // Determine output path
  const baseName = path.basename(filePath, path.extname(filePath));
  const outputPath = path.join(
    outputDir || path.dirname(filePath),
    `${baseName}.${outputFormat}`
  );

  // Save transcription
  fs.writeFileSync(outputPath, result.text, 'utf-8');

  return {
    ...result,
    transcriptionPath: outputPath,
  };
}

/**
 * Transcribe multiple files
 */
export async function transcribeMultiple(filePaths, options = {}) {
  const { onProgress, onFileComplete } = options;
  const results = [];

  for (let i = 0; i < filePaths.length; i++) {
    const filePath = filePaths[i];

    if (onProgress) {
      onProgress({ current: i + 1, total: filePaths.length, filePath });
    }

    console.log(`[${i + 1}/${filePaths.length}] Transcribing: ${path.basename(filePath)}`);

    try {
      const result = await transcribeAndSave(filePath, options);
      results.push(result);

      if (onFileComplete) {
        onFileComplete(result);
      }
    } catch (error) {
      console.error(`Failed to transcribe ${filePath}: ${error.message}`);
      results.push({
        success: false,
        filePath,
        error: error.message,
      });
    }
  }

  return {
    success: true,
    results,
    totalFiles: filePaths.length,
    successCount: results.filter(r => r.success).length,
    failCount: results.filter(r => !r.success).length,
  };
}
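
// Usage sketch (hypothetical, not part of the module): transcribing one MP3
// to SRT subtitles. Assumes OPENAI_API_KEY is set; the path is illustrative.
//
//   const result = await transcribeAndSave('./output/talk.mp3', {
//     model: 'whisper-1',     // only whisper-1 supports 'srt'/'vtt'
//     responseFormat: 'srt',
//     outputFormat: 'srt',
//   });
//   console.log(result.transcriptionPath);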
@ -1,271 +1,271 @@
import OpenAI from 'openai';
import fs from 'fs';
import path from 'path';

let openai = null;

// Max characters per chunk (~5000 tokens ≈ 20000 characters for most languages)
const MAX_CHUNK_CHARS = 20000;

const LANGUAGES = {
  en: 'English',
  fr: 'French',
  es: 'Spanish',
  de: 'German',
  it: 'Italian',
  pt: 'Portuguese',
  zh: 'Chinese',
  ja: 'Japanese',
  ko: 'Korean',
  ru: 'Russian',
  ar: 'Arabic',
  hi: 'Hindi',
  nl: 'Dutch',
  pl: 'Polish',
  tr: 'Turkish',
  vi: 'Vietnamese',
  th: 'Thai',
  sv: 'Swedish',
  da: 'Danish',
  fi: 'Finnish',
  no: 'Norwegian',
  cs: 'Czech',
  el: 'Greek',
  he: 'Hebrew',
  id: 'Indonesian',
  ms: 'Malay',
  ro: 'Romanian',
  uk: 'Ukrainian',
};

// Sentence ending patterns for different languages
const SENTENCE_ENDINGS = /[.!?。!?\n]/g;

/**
 * Get OpenAI client (lazy initialization)
 */
function getOpenAI() {
  if (!openai) {
    if (!process.env.OPENAI_API_KEY) {
      throw new Error('OPENAI_API_KEY environment variable is not set');
    }
    openai = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
    });
  }
  return openai;
}

/**
 * Split text into chunks at sentence boundaries
 * @param {string} text - Text to split
 * @param {number} maxChars - Maximum characters per chunk
 * @returns {string[]} Array of text chunks
 */
function splitIntoChunks(text, maxChars = MAX_CHUNK_CHARS) {
  if (text.length <= maxChars) {
    return [text];
  }

  const chunks = [];
  let currentPos = 0;

  while (currentPos < text.length) {
    let endPos = currentPos + maxChars;

    // If we're at the end, just take the rest
    if (endPos >= text.length) {
      chunks.push(text.slice(currentPos));
      break;
    }

    // Find the last sentence ending before maxChars
    const searchText = text.slice(currentPos, endPos);
    let lastSentenceEnd = -1;

    // Find all sentence endings in the search range
    let match;
    SENTENCE_ENDINGS.lastIndex = 0;
    while ((match = SENTENCE_ENDINGS.exec(searchText)) !== null) {
      lastSentenceEnd = match.index + 1; // Include the punctuation
    }

    // If we found a sentence ending past the halfway point, cut there.
    // Otherwise, look for the next sentence ending after maxChars (up to 20% more).
    if (lastSentenceEnd > maxChars * 0.5) {
      endPos = currentPos + lastSentenceEnd;
    } else {
      // Look forward for a sentence ending (up to 20% more characters)
      const extendedSearch = text.slice(endPos, endPos + maxChars * 0.2);
      SENTENCE_ENDINGS.lastIndex = 0;
      const forwardMatch = SENTENCE_ENDINGS.exec(extendedSearch);
      if (forwardMatch) {
        endPos = endPos + forwardMatch.index + 1;
      }
      // If still no sentence ending found, just cut at maxChars
    }

    chunks.push(text.slice(currentPos, endPos).trim());
    currentPos = endPos;

    // Skip any leading whitespace for the next chunk
    while (currentPos < text.length && /\s/.test(text[currentPos])) {
      currentPos++;
    }
  }

  return chunks.filter(chunk => chunk.length > 0);
}

/**
 * Get available languages
 */
export function getLanguages() {
  return LANGUAGES;
}

/**
 * Translate a single chunk of text
 */
async function translateChunk(text, targetLanguage, sourceLanguage) {
  const prompt = sourceLanguage
    ? `Translate the following text from ${sourceLanguage} to ${targetLanguage}. Only output the translation, nothing else:\n\n${text}`
    : `Translate the following text to ${targetLanguage}. Only output the translation, nothing else:\n\n${text}`;

  const response = await getOpenAI().chat.completions.create({
    model: 'gpt-4o-mini',
    max_tokens: 16384,
    messages: [
      {
        role: 'user',
        content: prompt,
      },
    ],
  });

  return response.choices[0].message.content;
}

/**
 * Translate text using GPT-4o-mini with chunking for long texts
 * @param {string} text - Text to translate
 * @param {string} targetLang - Target language code (e.g., 'en', 'fr')
 * @param {string} sourceLang - Source language code (optional, auto-detect if null)
 */
export async function translateText(text, targetLang, sourceLang = null) {
  if (!text || !text.trim()) {
    throw new Error('No text provided for translation');
  }

  const targetLanguage = LANGUAGES[targetLang] || targetLang;
  const sourceLanguage = sourceLang ? (LANGUAGES[sourceLang] || sourceLang) : null;

  try {
    // Split text into chunks
    const chunks = splitIntoChunks(text);

    if (chunks.length === 1) {
      // Single chunk - translate directly
      const translation = await translateChunk(text, targetLanguage, sourceLanguage);
      return {
        success: true,
        originalText: text,
        translatedText: translation,
        targetLanguage: targetLanguage,
        sourceLanguage: sourceLanguage || 'auto-detected',
        chunks: 1,
      };
    }

    // Multiple chunks - translate each and combine
    console.log(`Splitting text into ${chunks.length} chunks for translation...`);
    const translations = [];

    for (let i = 0; i < chunks.length; i++) {
      console.log(`  Translating chunk ${i + 1}/${chunks.length} (${chunks[i].length} chars)...`);
      const translation = await translateChunk(chunks[i], targetLanguage, sourceLanguage);
      translations.push(translation);
    }

    const combinedTranslation = translations.join('\n\n');

    return {
      success: true,
      originalText: text,
      translatedText: combinedTranslation,
      targetLanguage: targetLanguage,
      sourceLanguage: sourceLanguage || 'auto-detected',
      chunks: chunks.length,
    };
  } catch (error) {
    throw new Error(`Translation failed: ${error.message}`);
  }
}

/**
 * Translate a text file
 * @param {string} filePath - Path to text file
 * @param {string} targetLang - Target language code
 * @param {string} sourceLang - Source language code (optional)
 * @param {string} outputDir - Output directory (optional)
 */
export async function translateFile(filePath, targetLang, sourceLang = null, outputDir = null) {
  if (!fs.existsSync(filePath)) {
    throw new Error(`File not found: ${filePath}`);
  }

  const text = fs.readFileSync(filePath, 'utf-8');
  const result = await translateText(text, targetLang, sourceLang);

  // Save translation
  const baseName = path.basename(filePath, path.extname(filePath));
  const outputPath = path.join(
    outputDir || path.dirname(filePath),
    `${baseName}_${targetLang}.txt`
  );

  fs.writeFileSync(outputPath, result.translatedText, 'utf-8');

  return {
    ...result,
    originalPath: filePath,
    translationPath: outputPath,
  };
}

/**
 * Translate multiple files
 */
export async function translateMultiple(filePaths, targetLang, sourceLang = null, outputDir = null, onProgress = null) {
  const results = [];

  for (let i = 0; i < filePaths.length; i++) {
    const filePath = filePaths[i];

    if (onProgress) {
      onProgress({ current: i + 1, total: filePaths.length, filePath });
    }

    console.log(`[${i + 1}/${filePaths.length}] Translating: ${path.basename(filePath)}`);

    try {
      const result = await translateFile(filePath, targetLang, sourceLang, outputDir);
      results.push(result);
    } catch (error) {
      console.error(`Failed to translate ${filePath}: ${error.message}`);
      results.push({
        success: false,
        originalPath: filePath,
        error: error.message,
      });
    }
  }

  return {
    success: true,
    results,
    totalFiles: filePaths.length,
    successCount: results.filter(r => r.success).length,
    failCount: results.filter(r => !r.success).length,
  };
}
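
// Usage sketch (hypothetical, not part of the module): translating a long
// transcript to French. Assumes OPENAI_API_KEY is set.
//
//   const { translatedText, chunks } = await translateText(longTranscript, 'fr');
//   // Texts over MAX_CHUNK_CHARS are split at sentence boundaries and the
//   // per-chunk translations are joined with blank lines.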
@ -1,291 +1,291 @@
import { createRequire } from 'module';
import path from 'path';
import fs from 'fs';
import { spawn } from 'child_process';

// Use system yt-dlp binary (check common paths)
const YTDLP_PATH = process.env.YTDLP_PATH || 'yt-dlp';

/**
 * Execute yt-dlp command and return parsed JSON
 */
async function ytdlp(url, args = []) {
  return new Promise((resolve, reject) => {
    const proc = spawn(YTDLP_PATH, [...args, url]);
    let stdout = '';
    let stderr = '';

    proc.stdout.on('data', (data) => { stdout += data; });
    proc.stderr.on('data', (data) => { stderr += data; });

    proc.on('close', (code) => {
      if (code === 0) {
        try {
          resolve(JSON.parse(stdout));
        } catch {
          resolve(stdout);
        }
      } else {
        reject(new Error(stderr || `yt-dlp exited with code ${code}`));
      }
    });
  });
}

/**
 * Execute yt-dlp command with progress callback
 */
function ytdlpExec(url, args = [], onProgress) {
  return new Promise((resolve, reject) => {
    const proc = spawn(YTDLP_PATH, [...args, url]);
    let stderr = '';

    proc.stdout.on('data', (data) => {
      const line = data.toString();
      if (onProgress) {
        const progressMatch = line.match(/\[download\]\s+(\d+\.?\d*)%/);
        const etaMatch = line.match(/ETA\s+(\d+:\d+)/);
        const speedMatch = line.match(/at\s+([\d.]+\w+\/s)/);

        if (progressMatch) {
          onProgress({
            percent: parseFloat(progressMatch[1]),
            eta: etaMatch ? etaMatch[1] : null,
            speed: speedMatch ? speedMatch[1] : null,
          });
        }
      }
    });

    proc.stderr.on('data', (data) => { stderr += data; });

    proc.on('close', (code) => {
      if (code === 0) {
        resolve();
      } else {
        reject(new Error(stderr || `yt-dlp exited with code ${code}`));
      }
    });
  });
}

const OUTPUT_DIR = process.env.OUTPUT_DIR || './output';

/**
 * Sanitize filename to remove invalid characters
 */
function sanitizeFilename(filename) {
  return filename
    .replace(/[<>:"/\\|?*]/g, '')
    .replace(/\s+/g, '_')
    .substring(0, 200);
}

/**
 * Check if URL contains a playlist parameter
 */
function hasPlaylistParam(url) {
  try {
    const urlObj = new URL(url);
    return urlObj.searchParams.has('list');
  } catch {
    return false;
  }
}

/**
 * Extract playlist URL if present in the URL
 */
function extractPlaylistUrl(url) {
  const urlObj = new URL(url);
  const listId = urlObj.searchParams.get('list');
  if (listId) {
    return `https://www.youtube.com/playlist?list=${listId}`;
  }
  return null;
}

/**
 * Get video/playlist info without downloading
 */
export async function getInfo(url, forcePlaylist = false) {
  try {
    // If URL contains a playlist ID and we want to force playlist mode
    const playlistUrl = extractPlaylistUrl(url);
    const targetUrl = (forcePlaylist && playlistUrl) ? playlistUrl : url;

    const info = await ytdlp(targetUrl, [
      '--dump-single-json',
      '--no-download',
      '--no-warnings',
      '--flat-playlist',
    ]);
    return info;
  } catch (error) {
    throw new Error(`Failed to get info: ${error.message}`);
  }
}

/**
 * Check if URL is a playlist
 */
export async function isPlaylist(url) {
  const info = await getInfo(url);
  return info._type === 'playlist';
}

/**
 * Download a single video as MP3
 */
export async function downloadVideo(url, options = {}) {
  const { outputDir = OUTPUT_DIR, onProgress, onDownloadProgress } = options;

  // Ensure output directory exists
  if (!fs.existsSync(outputDir)) {
    fs.mkdirSync(outputDir, { recursive: true });
  }

  try {
    // Get video info first
    const info = await ytdlp(url, [
      '--dump-single-json',
      '--no-download',
      '--no-warnings',
    ]);

    const title = sanitizeFilename(info.title);
    const outputPath = path.join(outputDir, `${title}.mp3`);

    // Download and convert to MP3 with progress
|
||||
await ytdlpExec(url, [
|
||||
'--extract-audio',
|
||||
'--audio-format', 'mp3',
|
||||
'--audio-quality', '0',
|
||||
'-o', outputPath,
|
||||
'--no-warnings',
|
||||
'--newline',
|
||||
], (progress) => {
|
||||
if (onDownloadProgress) {
|
||||
onDownloadProgress({
|
||||
...progress,
|
||||
title: info.title,
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
return {
|
||||
success: true,
|
||||
title: info.title,
|
||||
duration: info.duration,
|
||||
filePath: outputPath,
|
||||
url: url,
|
||||
};
|
||||
} catch (error) {
|
||||
throw new Error(`Failed to download: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Download all videos from a playlist as MP3
|
||||
*/
|
||||
export async function downloadPlaylist(url, options = {}) {
|
||||
const { outputDir = OUTPUT_DIR, onProgress, onVideoComplete, onDownloadProgress, forcePlaylist = false } = options;
|
||||
|
||||
// Ensure output directory exists
|
||||
if (!fs.existsSync(outputDir)) {
|
||||
fs.mkdirSync(outputDir, { recursive: true });
|
||||
}
|
||||
|
||||
try {
|
||||
// Get playlist info (force playlist mode if URL has list= param)
|
||||
const info = await getInfo(url, forcePlaylist || hasPlaylistParam(url));
|
||||
|
||||
if (info._type !== 'playlist') {
|
||||
// Single video, redirect to downloadVideo
|
||||
const result = await downloadVideo(url, { ...options, onDownloadProgress });
|
||||
return {
|
||||
success: true,
|
||||
playlistTitle: result.title,
|
||||
videos: [result],
|
||||
totalVideos: 1,
|
||||
};
|
||||
}
|
||||
|
||||
const results = [];
|
||||
const entries = info.entries || [];
|
||||
|
||||
console.log(`Playlist: ${info.title} (${entries.length} videos)`);
|
||||
|
||||
for (let i = 0; i < entries.length; i++) {
|
||||
const entry = entries[i];
|
||||
const videoUrl = entry.url || `https://www.youtube.com/watch?v=${entry.id}`;
|
||||
|
||||
try {
|
||||
if (onProgress) {
|
||||
onProgress({ current: i + 1, total: entries.length, title: entry.title });
|
||||
}
|
||||
|
||||
console.log(`[${i + 1}/${entries.length}] Downloading: ${entry.title}`);
|
||||
|
||||
// Wrap progress callback to include playlist context
|
||||
const wrappedProgress = onDownloadProgress ? (progress) => {
|
||||
onDownloadProgress({
|
||||
...progress,
|
||||
videoIndex: i + 1,
|
||||
totalVideos: entries.length,
|
||||
playlistTitle: info.title,
|
||||
});
|
||||
} : undefined;
|
||||
|
||||
const result = await downloadVideo(videoUrl, { outputDir, onDownloadProgress: wrappedProgress });
|
||||
results.push(result);
|
||||
|
||||
if (onVideoComplete) {
|
||||
onVideoComplete(result);
|
||||
}
|
||||
} catch (error) {
|
||||
console.error(`Failed to download ${entry.title}: ${error.message}`);
|
||||
results.push({
|
||||
success: false,
|
||||
title: entry.title,
|
||||
url: videoUrl,
|
||||
error: error.message,
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
success: true,
|
||||
playlistTitle: info.title,
|
||||
videos: results,
|
||||
totalVideos: entries.length,
|
||||
successCount: results.filter(r => r.success).length,
|
||||
failCount: results.filter(r => !r.success).length,
|
||||
};
|
||||
} catch (error) {
|
||||
throw new Error(`Failed to download playlist: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Smart download - detects if URL is video or playlist
|
||||
*/
|
||||
export async function download(url, options = {}) {
|
||||
// If URL contains list= parameter, treat it as a playlist
|
||||
const isPlaylistUrl = hasPlaylistParam(url);
|
||||
const info = await getInfo(url, isPlaylistUrl);
|
||||
|
||||
if (info._type === 'playlist') {
|
||||
return downloadPlaylist(url, { ...options, forcePlaylist: true });
|
||||
} else {
|
||||
const result = await downloadVideo(url, options);
|
||||
return {
|
||||
success: true,
|
||||
playlistTitle: null,
|
||||
videos: [result],
|
||||
totalVideos: 1,
|
||||
successCount: 1,
|
||||
failCount: 0,
|
||||
};
|
||||
}
|
||||
}
|
||||
import { createRequire } from 'module';
import path from 'path';
import fs from 'fs';
import { spawn } from 'child_process';

// Use system yt-dlp binary (check common paths)
const YTDLP_PATH = process.env.YTDLP_PATH || 'yt-dlp';

/**
 * Execute yt-dlp command and return parsed JSON
 */
async function ytdlp(url, args = []) {
  return new Promise((resolve, reject) => {
    const proc = spawn(YTDLP_PATH, [...args, url]);
    let stdout = '';
    let stderr = '';

    proc.stdout.on('data', (data) => { stdout += data; });
    proc.stderr.on('data', (data) => { stderr += data; });

    proc.on('close', (code) => {
      if (code === 0) {
        try {
          resolve(JSON.parse(stdout));
        } catch {
          resolve(stdout);
        }
      } else {
        reject(new Error(stderr || `yt-dlp exited with code ${code}`));
      }
    });
  });
}

/**
 * Execute yt-dlp command with progress callback
 */
function ytdlpExec(url, args = [], onProgress) {
  return new Promise((resolve, reject) => {
    const proc = spawn(YTDLP_PATH, [...args, url]);
    let stderr = '';

    proc.stdout.on('data', (data) => {
      const line = data.toString();
      if (onProgress) {
        const progressMatch = line.match(/\[download\]\s+(\d+\.?\d*)%/);
        const etaMatch = line.match(/ETA\s+(\d+:\d+)/);
        const speedMatch = line.match(/at\s+([\d.]+\w+\/s)/);

        if (progressMatch) {
          onProgress({
            percent: parseFloat(progressMatch[1]),
            eta: etaMatch ? etaMatch[1] : null,
            speed: speedMatch ? speedMatch[1] : null,
          });
        }
      }
    });

    proc.stderr.on('data', (data) => { stderr += data; });

    proc.on('close', (code) => {
      if (code === 0) {
        resolve();
      } else {
        reject(new Error(stderr || `yt-dlp exited with code ${code}`));
      }
    });
  });
}
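The progress regexes in ytdlpExec can be exercised in isolation against a sample line as printed by yt-dlp with `--newline`. A minimal sketch — `parseProgressLine` is an illustrative helper, not something this module exports:

```javascript
// Illustrative helper mirroring the three regexes used in ytdlpExec.
function parseProgressLine(line) {
  const progressMatch = line.match(/\[download\]\s+(\d+\.?\d*)%/);
  if (!progressMatch) return null; // not a progress line
  const etaMatch = line.match(/ETA\s+(\d+:\d+)/);
  const speedMatch = line.match(/at\s+([\d.]+\w+\/s)/);
  return {
    percent: parseFloat(progressMatch[1]),
    eta: etaMatch ? etaMatch[1] : null,
    speed: speedMatch ? speedMatch[1] : null,
  };
}

const sample = '[download]  42.7% of 10.00MiB at 1.25MiB/s ETA 00:05';
console.log(parseProgressLine(sample));
// → { percent: 42.7, eta: '00:05', speed: '1.25MiB/s' }
```

Lines without a `[download] …%` prefix (e.g. ffmpeg post-processing output) yield `null`, which matches ytdlpExec only invoking onProgress when progressMatch succeeds.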
const OUTPUT_DIR = process.env.OUTPUT_DIR || './output';

/**
 * Sanitize filename to remove invalid characters
 */
function sanitizeFilename(filename) {
  return filename
    .replace(/[<>:"/\\|?*]/g, '')
    .replace(/\s+/g, '_')
    .substring(0, 200);
}
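sanitizeFilename drops characters that are invalid in Windows filenames, collapses whitespace runs to underscores, and caps the length at 200 characters. A standalone copy, for illustration:

```javascript
// Standalone copy of sanitizeFilename, for illustration.
function sanitizeFilename(filename) {
  return filename
    .replace(/[<>:"/\\|?*]/g, '') // drop characters invalid on Windows
    .replace(/\s+/g, '_')         // collapse whitespace runs to underscores
    .substring(0, 200);           // cap the length
}

console.log(sanitizeFilename('My Video: Part 1/2'));
// → "My_Video_Part_12"
```

Note that path separators are removed rather than replaced, so distinct titles can collide ("1/2" and "12" map to the same name); callers that need uniqueness would have to add their own disambiguation.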
/**
 * Check if URL contains a playlist parameter
 */
function hasPlaylistParam(url) {
  try {
    const urlObj = new URL(url);
    return urlObj.searchParams.has('list');
  } catch {
    return false;
  }
}

/**
 * Extract playlist URL if present in the URL
 */
function extractPlaylistUrl(url) {
  const urlObj = new URL(url);
  const listId = urlObj.searchParams.get('list');
  if (listId) {
    return `https://www.youtube.com/playlist?list=${listId}`;
  }
  return null;
}

/**
 * Get video/playlist info without downloading
 */
export async function getInfo(url, forcePlaylist = false) {
  try {
    // If URL contains a playlist ID and we want to force playlist mode
    const playlistUrl = extractPlaylistUrl(url);
    const targetUrl = (forcePlaylist && playlistUrl) ? playlistUrl : url;

    const info = await ytdlp(targetUrl, [
      '--dump-single-json',
      '--no-download',
      '--no-warnings',
      '--flat-playlist',
    ]);
    return info;
  } catch (error) {
    throw new Error(`Failed to get info: ${error.message}`);
  }
}

/**
 * Check if URL is a playlist
 */
export async function isPlaylist(url) {
  const info = await getInfo(url);
  return info._type === 'playlist';
}

/**
 * Download a single video as MP3
 */
export async function downloadVideo(url, options = {}) {
  const { outputDir = OUTPUT_DIR, onProgress, onDownloadProgress } = options;

  // Ensure output directory exists
  if (!fs.existsSync(outputDir)) {
    fs.mkdirSync(outputDir, { recursive: true });
  }

  try {
    // Get video info first
    const info = await ytdlp(url, [
      '--dump-single-json',
      '--no-download',
      '--no-warnings',
    ]);

    const title = sanitizeFilename(info.title);
    const outputPath = path.join(outputDir, `${title}.mp3`);

    // Download and convert to MP3 with progress
    await ytdlpExec(url, [
      '--extract-audio',
      '--audio-format', 'mp3',
      '--audio-quality', '0',
      '-o', outputPath,
      '--no-warnings',
      '--newline',
    ], (progress) => {
      if (onDownloadProgress) {
        onDownloadProgress({
          ...progress,
          title: info.title,
        });
      }
    });

    return {
      success: true,
      title: info.title,
      duration: info.duration,
      filePath: outputPath,
      url: url,
    };
  } catch (error) {
    throw new Error(`Failed to download: ${error.message}`);
  }
}

/**
 * Download all videos from a playlist as MP3
 */
export async function downloadPlaylist(url, options = {}) {
  const { outputDir = OUTPUT_DIR, onProgress, onVideoComplete, onDownloadProgress, forcePlaylist = false } = options;

  // Ensure output directory exists
  if (!fs.existsSync(outputDir)) {
    fs.mkdirSync(outputDir, { recursive: true });
  }

  try {
    // Get playlist info (force playlist mode if URL has list= param)
    const info = await getInfo(url, forcePlaylist || hasPlaylistParam(url));

    if (info._type !== 'playlist') {
      // Single video, redirect to downloadVideo
      const result = await downloadVideo(url, { ...options, onDownloadProgress });
      return {
        success: true,
        playlistTitle: result.title,
        videos: [result],
        totalVideos: 1,
      };
    }

    const results = [];
    const entries = info.entries || [];

    console.log(`Playlist: ${info.title} (${entries.length} videos)`);

    for (let i = 0; i < entries.length; i++) {
      const entry = entries[i];
      const videoUrl = entry.url || `https://www.youtube.com/watch?v=${entry.id}`;

      try {
        if (onProgress) {
          onProgress({ current: i + 1, total: entries.length, title: entry.title });
        }

        console.log(`[${i + 1}/${entries.length}] Downloading: ${entry.title}`);

        // Wrap progress callback to include playlist context
        const wrappedProgress = onDownloadProgress ? (progress) => {
          onDownloadProgress({
            ...progress,
            videoIndex: i + 1,
            totalVideos: entries.length,
            playlistTitle: info.title,
          });
        } : undefined;

        const result = await downloadVideo(videoUrl, { outputDir, onDownloadProgress: wrappedProgress });
        results.push(result);

        if (onVideoComplete) {
          onVideoComplete(result);
        }
      } catch (error) {
        console.error(`Failed to download ${entry.title}: ${error.message}`);
        results.push({
          success: false,
          title: entry.title,
          url: videoUrl,
          error: error.message,
        });
      }
    }

    return {
      success: true,
      playlistTitle: info.title,
      videos: results,
      totalVideos: entries.length,
      successCount: results.filter(r => r.success).length,
      failCount: results.filter(r => !r.success).length,
    };
  } catch (error) {
    throw new Error(`Failed to download playlist: ${error.message}`);
  }
}

/**
 * Smart download - detects if URL is video or playlist
 */
export async function download(url, options = {}) {
  // If URL contains list= parameter, treat it as a playlist
  const isPlaylistUrl = hasPlaylistParam(url);
  const info = await getInfo(url, isPlaylistUrl);

  if (info._type === 'playlist') {
    return downloadPlaylist(url, { ...options, forcePlaylist: true });
  } else {
    const result = await downloadVideo(url, options);
    return {
      success: true,
      playlistTitle: null,
      videos: [result],
      totalVideos: 1,
      successCount: 1,
      failCount: 0,
    };
  }
}
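Whether the input is a single video or a playlist, download() resolves to one uniform shape: `{ playlistTitle, videos, totalVideos, successCount, failCount }`. A sketch of how a caller might consume that result (the caller and formatSummary are illustrative; only the result shape comes from the module):

```javascript
// Hypothetical consumer of the object download() resolves to.
// In the app, the call would be: const result = await download(url);
function formatSummary(result) {
  const failed = result.videos.filter((v) => !v.success).map((v) => v.title);
  return `${result.successCount}/${result.totalVideos} downloaded` +
    (failed.length ? ` (failed: ${failed.join(', ')})` : '');
}

// Result shape returned for a single video (not a playlist):
const single = {
  success: true,
  playlistTitle: null,
  videos: [{ success: true, title: 'My Video', filePath: './output/My_Video.mp3' }],
  totalVideos: 1,
  successCount: 1,
  failCount: 0,
};
console.log(formatSummary(single)); // → "1/1 downloaded"
```

Because downloadPlaylist catches per-video failures and records them as `{ success: false, title, url, error }` entries instead of rethrowing, this summary pattern works for partially failed playlists as well.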
start-server.bat
@@ -1,61 +1,61 @@
@echo off
REM Video to MP3 Transcriptor Server Starter
REM This script starts the API server on port 8888

echo ==========================================
echo Video to MP3 Transcriptor API
echo ==========================================
echo.

REM Check if node is installed
where node >nul 2>nul
if %ERRORLEVEL% NEQ 0 (
    echo Error: Node.js is not installed
    echo Please install Node.js from https://nodejs.org/
    pause
    exit /b 1
)

REM Check if npm is installed
where npm >nul 2>nul
if %ERRORLEVEL% NEQ 0 (
    echo Error: npm is not installed
    echo Please install npm
    pause
    exit /b 1
)

REM Check if .env file exists
if not exist .env (
    echo Warning: .env file not found
    echo Creating .env file...
    (
        echo OPENAI_API_KEY=
        echo PORT=8888
        echo OUTPUT_DIR=./output
    ) > .env
    echo.
    echo Please edit .env and add your OPENAI_API_KEY
    echo.
)

REM Check if node_modules exists
if not exist node_modules (
    echo Installing dependencies...
    call npm install
    echo.
)

REM Kill any process using port 8888
echo Checking port 8888...
npx kill-port 8888 >nul 2>nul

echo.
echo Starting server on http://localhost:8888
echo Press Ctrl+C to stop the server
echo.
echo ==========================================
echo.

REM Start the server
call npm run server
start-server.sh
@@ -1,58 +1,58 @@
#!/bin/bash

# Video to MP3 Transcriptor Server Starter
# This script starts the API server on port 8888

echo "=========================================="
echo "Video to MP3 Transcriptor API"
echo "=========================================="
echo ""

# Check if node is installed
if ! command -v node &> /dev/null
then
    echo "Error: Node.js is not installed"
    echo "Please install Node.js from https://nodejs.org/"
    exit 1
fi

# Check if npm is installed
if ! command -v npm &> /dev/null
then
    echo "Error: npm is not installed"
    echo "Please install npm"
    exit 1
fi

# Check if .env file exists
if [ ! -f .env ]; then
    echo "Warning: .env file not found"
    echo "Creating .env file..."
    echo "OPENAI_API_KEY=" > .env
    echo "PORT=8888" >> .env
    echo "OUTPUT_DIR=./output" >> .env
    echo ""
    echo "Please edit .env and add your OPENAI_API_KEY"
    echo ""
fi

# Check if node_modules exists
if [ ! -d "node_modules" ]; then
    echo "Installing dependencies..."
    npm install
    echo ""
fi

# Kill any process using port 8888
echo "Checking port 8888..."
npx kill-port 8888 2>/dev/null

echo ""
echo "Starting server on http://localhost:8888"
echo "Press Ctrl+C to stop the server"
echo ""
echo "=========================================="
echo ""

# Start the server
npm run server