# Local Installation

This guide walks through installing and running AnythingLLM locally from source. This approach suits developers, advanced users, and any scenario that requires custom configuration.
## Prerequisites

### System Requirements

- **Operating system**: Linux, macOS, or Windows
- **Memory**: 4 GB RAM minimum, 8 GB+ recommended
- **Storage**: at least 5 GB of free space
- **CPU**: 2+ cores

### Software Requirements

- **Node.js**: version 18.x or later
- **npm**: version 8.x or later (usually bundled with Node.js)
- **Git**: for cloning the repository
- **Python**: version 3.8+ (required by some dependencies)
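Before proceeding, you can verify that the required tools are on your `PATH`. This check script is not part of AnythingLLM, just a quick sketch:

```shell
# Report the version of each required tool, or flag it as missing
for tool in node npm git python3; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version 2>&1 | head -n 1)"
  else
    echo "$tool: NOT FOUND" >&2
  fi
done
```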
## Installing Node.js

### Using Node Version Manager (recommended)

**Linux/macOS:**

```bash
# Install nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash

# Reload your terminal, or run
source ~/.bashrc

# Install the latest LTS release
nvm install --lts
nvm use --lts

# Verify the installation
node --version
npm --version
```
**Windows:**

```powershell
# Download and install nvm-windows:
# https://github.com/coreybutler/nvm-windows/releases

# Install Node.js
nvm install lts
nvm use lts

# Verify the installation
node --version
npm --version
```
### Direct Installation

**Linux (Ubuntu/Debian):**

```bash
# Add the NodeSource repository
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -

# Install Node.js
sudo apt-get install -y nodejs

# Verify the installation
node --version
npm --version
```
**macOS:**

```bash
# Using Homebrew
brew install node

# Or download the installer:
# https://nodejs.org/en/download/
```
**Windows:**

```powershell
# Using Chocolatey
choco install nodejs

# Or download the installer:
# https://nodejs.org/en/download/
```
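Whichever method you use, it is worth confirming that the installed version meets the 18.x requirement. A small sketch that parses the output of `node --version` in plain shell:

```shell
# Extract the major version from output like "v18.19.0" -> 18
ver=$(node --version)
major=${ver#v}
major=${major%%.*}
if [ "$major" -ge 18 ]; then
  echo "Node.js $ver meets the 18.x requirement"
else
  echo "Node.js $ver is too old; install 18.x or later" >&2
fi
```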
## Quick Start

### 1. Clone the Repository

```bash
# Clone the main repository
git clone https://github.com/Mintplex-Labs/anything-llm.git
cd anything-llm

# List available branches
git branch -a

# Check out the stable branch (optional)
git checkout main
```
### 2. Install Dependencies

```bash
# Install the frontend dependencies
cd frontend
npm install

# Switch to the server directory and install the backend dependencies
cd ../server
npm install

# Install global tools (optional)
npm install -g concurrently nodemon
```
### 3. Environment Configuration

```bash
# Create the environment file in the server directory
cd server
cp .env.example .env

# Edit the environment file
nano .env
```
### 4. Basic Environment Settings

```bash
# .env file contents

# Base configuration
NODE_ENV=development
SERVER_PORT=3001
FRONTEND_PORT=3000

# JWT secret (change this to a random string)
JWT_SECRET=your-super-secret-jwt-key-change-this

# Database configuration (SQLite for development)
DATABASE_TYPE=sqlite
SQLITE_DB_PATH=./storage/anythingllm.db

# AI model configuration
OPENAI_API_KEY=your-openai-api-key-here
OPENAI_MODEL=gpt-3.5-turbo

# Vector database configuration (built-in LanceDB)
VECTOR_DB=lancedb
LANCE_DB_PATH=./storage/lancedb

# Storage configuration
STORAGE_DIR=./storage
```
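The `JWT_SECRET` placeholder above must be replaced with a genuinely random value. One way to generate one (assuming `openssl` is installed, or using the Node.js runtime you already set up):

```shell
# Print a 64-character random hex string suitable for JWT_SECRET
openssl rand -hex 32

# Or with Node.js
node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
```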
### 5. Start the Application

**Development mode (recommended for development):**

```bash
# From the project root
npm run dev

# Or start the backend and frontend separately:
# Terminal 1 - start the backend
cd server
npm run dev

# Terminal 2 - start the frontend
cd frontend
npm start
```
**Production mode:**

```bash
# Build the frontend
cd frontend
npm run build

# Start the backend (production mode)
cd ../server
npm start
```
### 6. Access the Application

- Frontend: port 3000 by default
- Backend API: port 3001 by default
- Health check: the `/health` API endpoint
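With both processes running, a quick smoke test from another terminal (assuming the default ports above and that `curl` is installed):

```shell
# The frontend should answer on port 3000
curl -s -o /dev/null -w "frontend: %{http_code}\n" http://localhost:3000

# The backend health endpoint should return a JSON status payload
curl -s http://localhost:3001/health
```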
## Detailed Configuration

### Database Configuration

**SQLite (default; suited to development and small deployments):**

```bash
# .env
DATABASE_TYPE=sqlite
SQLITE_DB_PATH=./storage/anythingllm.db
```
**PostgreSQL (recommended for production):**

```bash
# Install PostgreSQL
# Ubuntu/Debian
sudo apt-get install postgresql postgresql-contrib
# macOS
brew install postgresql

# Create the database and user
sudo -u postgres psql
CREATE DATABASE anythingllm;
CREATE USER anythingllm WITH ENCRYPTED PASSWORD 'your_password';
GRANT ALL PRIVILEGES ON DATABASE anythingllm TO anythingllm;
\q

# .env configuration
DATABASE_TYPE=postgres
DB_HOST=localhost
DB_PORT=5432
DB_USERNAME=anythingllm
DB_PASSWORD=your_password
DB_NAME=anythingllm
```
**MySQL/MariaDB:**

```bash
# Install MySQL
# Ubuntu/Debian
sudo apt-get install mysql-server
# macOS
brew install mysql

# Create the database and user
mysql -u root -p
CREATE DATABASE anythingllm;
CREATE USER 'anythingllm'@'localhost' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON anythingllm.* TO 'anythingllm'@'localhost';
FLUSH PRIVILEGES;
EXIT;

# .env configuration
DATABASE_TYPE=mysql
DB_HOST=localhost
DB_PORT=3306
DB_USERNAME=anythingllm
DB_PASSWORD=your_password
DB_NAME=anythingllm
```
### Vector Database Configuration

**LanceDB (default; no extra installation required):**

```bash
# .env
VECTOR_DB=lancedb
LANCE_DB_PATH=./storage/lancedb
```
**Chroma (self-hosted):**

```bash
# Install Chroma
pip install chromadb

# Start the Chroma server
chroma run --host localhost --port 8000

# .env configuration
VECTOR_DB=chroma
CHROMA_ENDPOINT=http://localhost:8000
CHROMA_COLLECTION=anythingllm
```
**Pinecone (cloud service):**

```bash
# .env configuration
VECTOR_DB=pinecone
PINECONE_API_KEY=your-pinecone-api-key
PINECONE_ENVIRONMENT=your-pinecone-environment
PINECONE_INDEX=anythingllm
```
**Weaviate (local or cloud):**

```bash
# Local Docker deployment
docker run -d \
  --name weaviate \
  -p 8080:8080 \
  -e QUERY_DEFAULTS_LIMIT=25 \
  -e AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true \
  -e PERSISTENCE_DATA_PATH='/var/lib/weaviate' \
  semitechnologies/weaviate:latest

# .env configuration
VECTOR_DB=weaviate
WEAVIATE_ENDPOINT=http://localhost:8080
WEAVIATE_API_KEY=your-api-key  # if authentication is enabled
```
### AI Model Configuration

**OpenAI:**

```bash
# .env
LLM_PROVIDER=openai
OPENAI_API_KEY=your-openai-api-key
OPENAI_MODEL=gpt-4
OPENAI_BASE_URL=https://api.openai.com/v1  # optional custom endpoint
```
**Azure OpenAI:**

```bash
# .env
LLM_PROVIDER=azure
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
AZURE_OPENAI_KEY=your-azure-key
AZURE_OPENAI_DEPLOYMENT_NAME=your-deployment-name
AZURE_OPENAI_VERSION=2023-05-15
```
**Anthropic Claude:**

```bash
# .env
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=your-anthropic-api-key
ANTHROPIC_MODEL=claude-3-sonnet-20240229
```
**Local models (Ollama):**

```bash
# Install Ollama
# Linux/macOS
curl -fsSL https://ollama.ai/install.sh | sh
# Windows
# Download the installer: https://ollama.ai/download

# Start the Ollama service
ollama serve

# Pull models
ollama pull llama2
ollama pull codellama

# .env configuration
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama2
```
**LM Studio:**

```bash
# Download and install LM Studio
# https://lmstudio.ai/

# Start the local server inside LM Studio
# Default port: 1234

# .env configuration
LLM_PROVIDER=lmstudio
LMSTUDIO_BASE_URL=http://localhost:1234/v1
LMSTUDIO_MODEL=your-model-name
```
### Embedding Model Configuration

**OpenAI Embeddings:**

```bash
# .env
EMBEDDING_PROVIDER=openai
OPENAI_EMBEDDING_MODEL=text-embedding-ada-002
```
**Local embedding models:**

```bash
# .env
EMBEDDING_PROVIDER=local
LOCAL_EMBEDDING_MODEL=all-MiniLM-L6-v2
LOCAL_EMBEDDING_PATH=./models/embeddings
```
**Azure OpenAI Embeddings:**

```bash
# .env
EMBEDDING_PROVIDER=azure
AZURE_EMBEDDING_ENDPOINT=https://your-resource.openai.azure.com
AZURE_EMBEDDING_KEY=your-azure-key
AZURE_EMBEDDING_DEPLOYMENT=your-embedding-deployment
```
## Advanced Configuration

### Full Environment Variable Reference

```bash
# .env.production

# ================================
# Base service configuration
# ================================
NODE_ENV=production
SERVER_PORT=3001
SERVER_HOST=0.0.0.0
FRONTEND_PORT=3000

# ================================
# Security
# ================================
JWT_SECRET=your-super-secret-jwt-key-change-this-in-production
FORCE_HTTPS=true
CORS_ORIGINS=https://yourdomain.com,https://app.yourdomain.com

# ================================
# Database
# ================================
DATABASE_TYPE=postgres
DB_HOST=localhost
DB_PORT=5432
DB_USERNAME=anythingllm
DB_PASSWORD=your-secure-database-password
DB_NAME=anythingllm_prod
DB_SSL=true
DB_POOL_SIZE=20

# ================================
# Vector database
# ================================
VECTOR_DB=pinecone
PINECONE_API_KEY=your-pinecone-api-key
PINECONE_ENVIRONMENT=us-west1-gcp
PINECONE_INDEX=anythingllm-prod

# ================================
# AI models
# ================================
LLM_PROVIDER=openai
OPENAI_API_KEY=your-production-openai-key
OPENAI_MODEL=gpt-4
OPENAI_MAX_TOKENS=4000
OPENAI_TEMPERATURE=0.7

# Embedding model
EMBEDDING_PROVIDER=openai
OPENAI_EMBEDDING_MODEL=text-embedding-ada-002

# ================================
# Storage
# ================================
STORAGE_DIR=./storage
MAX_FILE_SIZE=25MB
ALLOWED_FILE_TYPES=.txt,.pdf,.docx,.md,.csv

# ================================
# Caching
# ================================
CACHE_PROVIDER=redis
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=your-redis-password
REDIS_DB=0

# ================================
# Email
# ================================
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USERNAME=your-email@gmail.com
SMTP_PASSWORD=your-app-password
SMTP_FROM=noreply@yourdomain.com

# ================================
# Monitoring and logging
# ================================
LOG_LEVEL=info
LOG_FILE=./logs/anythingllm.log
MONITORING_ENABLED=true
METRICS_PORT=9090

# ================================
# Feature flags
# ================================
ENABLE_REGISTRATION=false
ENABLE_PASSWORD_RESET=true
ENABLE_MULTI_WORKSPACE=true
ENABLE_API_ACCESS=true

# ================================
# Limits
# ================================
RATE_LIMIT_ENABLED=true
RATE_LIMIT_REQUESTS=100
RATE_LIMIT_WINDOW=15  # minutes
MAX_WORKSPACES_PER_USER=5
MAX_DOCUMENTS_PER_WORKSPACE=1000
```
### Development Environment

```bash
# .env.development
NODE_ENV=development
SERVER_PORT=3001
FRONTEND_PORT=3000

# Development database
DATABASE_TYPE=sqlite
SQLITE_DB_PATH=./storage/dev.db

# Development vector database
VECTOR_DB=lancedb
LANCE_DB_PATH=./storage/dev_lancedb

# Development AI configuration
LLM_PROVIDER=openai
OPENAI_API_KEY=your-dev-openai-key
OPENAI_MODEL=gpt-3.5-turbo

# Development features
ENABLE_REGISTRATION=true
LOG_LEVEL=debug
HOT_RELOAD=true
```
### Test Environment

```bash
# .env.test
NODE_ENV=test
SERVER_PORT=3002

# Test database
DATABASE_TYPE=sqlite
SQLITE_DB_PATH=:memory:

# Test vector database
VECTOR_DB=lancedb
LANCE_DB_PATH=./storage/test_lancedb

# Mock AI services
LLM_PROVIDER=mock
EMBEDDING_PROVIDER=mock

# Test settings
ENABLE_REGISTRATION=true
LOG_LEVEL=error
```
## Development Tools and Scripts

### Development Scripts

```json
// package.json (repository root)
{
  "scripts": {
    "dev": "concurrently \"npm run server:dev\" \"npm run frontend:dev\"",
    "server:dev": "cd server && npm run dev",
    "frontend:dev": "cd frontend && npm start",
    "build": "cd frontend && npm run build",
    "start": "cd server && npm start",
    "test": "npm run test:server && npm run test:frontend",
    "test:server": "cd server && npm test",
    "test:frontend": "cd frontend && npm test",
    "lint": "npm run lint:server && npm run lint:frontend",
    "lint:server": "cd server && npm run lint",
    "lint:frontend": "cd frontend && npm run lint",
    "setup": "npm run setup:server && npm run setup:frontend",
    "setup:server": "cd server && npm install",
    "setup:frontend": "cd frontend && npm install"
  }
}
```
### Database Migration Script

```bash
#!/bin/bash
# scripts/migrate.sh
# Database migration helper

cd server

# Run pending migrations
npm run migrate

# To roll back:
# npm run migrate:rollback

# To create a new migration:
# npm run migrate:make migration_name
```
### Development Environment Reset Script

```bash
#!/bin/bash
# scripts/reset-dev.sh

echo "Resetting the development environment..."

# Stop running processes
pkill -f "node.*server"
pkill -f "node.*frontend"

# Remove development databases
rm -f server/storage/dev.db
rm -rf server/storage/dev_lancedb

# Clear caches
rm -rf server/node_modules/.cache
rm -rf frontend/node_modules/.cache

# Reinstall dependencies
cd server && npm install
cd ../frontend && npm install

# Run migrations
cd ../server && npm run migrate

echo "Development environment reset complete"
```
### Build Script

```bash
#!/bin/bash
# scripts/build.sh

echo "Building the production release..."

# Remove old build output
rm -rf frontend/build
rm -rf server/dist

# Build the frontend
cd frontend
npm run build

# Build the backend (if using TypeScript)
cd ../server
npm run build

# Copy the static frontend files into the server's public directory
cp -r ../frontend/build ./public

echo "Build complete"
```
## Performance Tuning

### Node.js

```bash
# Raise the Node.js heap limit
export NODE_OPTIONS="--max-old-space-size=4096"

# Favor lower memory usage in V8
export NODE_OPTIONS="--optimize-for-size"
```

On multi-core machines, the server can be run in cluster mode:

```javascript
// server/cluster.js
const cluster = require('cluster');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per CPU core
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  // Replace workers that exit unexpectedly
  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} died`);
    cluster.fork();
  });
} else {
  require('./app.js');
}
```
### Database

```sql
-- PostgreSQL tuning (postgresql.conf)
shared_buffers = 256MB
effective_cache_size = 1GB
maintenance_work_mem = 64MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200

-- Create indexes without blocking writes
CREATE INDEX CONCURRENTLY idx_documents_workspace_id ON documents(workspace_id);
CREATE INDEX CONCURRENTLY idx_messages_workspace_id ON messages(workspace_id);
CREATE INDEX CONCURRENTLY idx_messages_created_at ON messages(created_at);
```
### Caching

```javascript
// server/config/cache.js
const redis = require('redis');

// node-redis v4: connection details go under `socket`
const client = redis.createClient({
  socket: {
    host: process.env.REDIS_HOST || 'localhost',
    port: Number(process.env.REDIS_PORT) || 6379,
  },
  password: process.env.REDIS_PASSWORD,
  database: Number(process.env.REDIS_DB) || 0,
});

client.on('error', (err) => console.error('Redis error:', err));
client.connect().catch(console.error);

// Response-caching middleware: serve cached JSON when available,
// otherwise cache the outgoing response body for `duration` seconds.
const cacheMiddleware = (duration = 300) => {
  return async (req, res, next) => {
    const key = `cache:${req.originalUrl}`;
    try {
      const cached = await client.get(key);
      if (cached) {
        return res.json(JSON.parse(cached));
      }
      res.sendResponse = res.json;
      res.json = (body) => {
        client.setEx(key, duration, JSON.stringify(body)).catch(() => {});
        return res.sendResponse(body);
      };
      next();
    } catch (error) {
      // On any cache failure, fall through to the uncached handler
      next();
    }
  };
};

module.exports = { client, cacheMiddleware };
```
## Monitoring and Logging

### Logger Configuration

```javascript
// server/config/logger.js
const winston = require('winston');
const path = require('path');

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  defaultMeta: { service: 'anythingllm' },
  transports: [
    new winston.transports.File({
      filename: path.join(process.env.STORAGE_DIR || './storage', 'logs/error.log'),
      level: 'error'
    }),
    new winston.transports.File({
      filename: path.join(process.env.STORAGE_DIR || './storage', 'logs/combined.log')
    })
  ]
});

// Also log to the console outside production
if (process.env.NODE_ENV !== 'production') {
  logger.add(new winston.transports.Console({
    format: winston.format.simple()
  }));
}

module.exports = logger;
```
### Health Check Endpoint

```javascript
// server/routes/health.js
const express = require('express');
const router = express.Router();
const { client: redisClient } = require('../config/cache');
const { sequelize } = require('../models');

router.get('/health', async (req, res) => {
  const health = {
    status: 'ok',
    timestamp: new Date().toISOString(),
    services: {}
  };

  try {
    // Check the database connection
    await sequelize.authenticate();
    health.services.database = 'ok';
  } catch (error) {
    health.services.database = 'error';
    health.status = 'error';
  }

  try {
    // Check the Redis connection
    await redisClient.ping();
    health.services.redis = 'ok';
  } catch (error) {
    health.services.redis = 'error';
  }

  // Report memory usage
  const memUsage = process.memoryUsage();
  health.memory = {
    rss: Math.round(memUsage.rss / 1024 / 1024) + 'MB',
    heapTotal: Math.round(memUsage.heapTotal / 1024 / 1024) + 'MB',
    heapUsed: Math.round(memUsage.heapUsed / 1024 / 1024) + 'MB'
  };

  res.status(health.status === 'ok' ? 200 : 503).json(health);
});

module.exports = router;
```
## Troubleshooting

### Common Problems

**Port conflicts:**

```bash
# Find the process holding the port
lsof -i :3001
netstat -tulpn | grep :3001

# Kill the process
kill -9 <PID>

# Or run on a different port
export SERVER_PORT=3002
```
**Dependency installation failures:**

```bash
# Clear the npm cache
npm cache clean --force

# Delete node_modules and reinstall
rm -rf node_modules package-lock.json
npm install

# Or use yarn instead of npm
npm install -g yarn
yarn install
```
**Database connection failures:**

```bash
# Check the database service status
sudo systemctl status postgresql
sudo systemctl status mysql

# Test the connection directly
psql -h localhost -U anythingllm -d anythingllm
mysql -h localhost -u anythingllm -p anythingllm

# Check firewall rules
sudo ufw status
sudo iptables -L
```
**Out of memory:**

```bash
# Raise the Node.js heap limit
export NODE_OPTIONS="--max-old-space-size=8192"

# Monitor memory usage
top -p $(pgrep -f node)
htop

# Enable swap space
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```
### Debugging Tools

```bash
# Start Node.js with the inspector enabled
node --inspect server/app.js

# Then attach Chrome DevTools via chrome://inspect

# Enable verbose logging
DEBUG=* npm run dev

# CPU profiling
node --prof server/app.js
node --prof-process isolate-*.log > processed.txt
```
### Recommended Development Tools

```bash
# Install development tools
npm install -g nodemon
npm install -g pm2
npm install -g clinic

# Auto-restart on file changes with nodemon
nodemon server/app.js

# Process management with PM2
pm2 start ecosystem.config.js
pm2 monit

# Performance diagnosis with Clinic.js
clinic doctor -- node server/app.js
clinic bubbleprof -- node server/app.js
```
A local installation gives you the most flexibility and control, making it a good fit for development and for production deployments that need deep customization. Choose the configuration options that match your specific needs.