refactor(backend): extract MCP service layer with snapshot isolation

Extract all MCP business logic from the command layer into `services/mcp.rs`,
implementing a snapshot-isolation pattern that shortens write-lock hold times
after the Phase 5 RwLock migration.

## Key Changes

### Service Layer (`services/mcp.rs`)
- Add `McpService` with 7 methods: `get_servers`, `upsert_server`,
  `delete_server`, `set_enabled`, `sync_enabled`, `import_from_claude`,
  `import_from_codex`
- Implement snapshot isolation: acquire write lock only for in-memory
  modifications, clone config snapshot, release lock, then perform file I/O
  with snapshot
- Use conditional cloning: only clone the config when a sync is actually needed
  (e.g., when the `enabled` flag is true or `sync_other_side` is requested); the
  pattern is sketched below
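
The shape of that pattern, condensed from `McpService::delete_server` in the diff
below (shown as a free function for brevity; `upsert_server` adds the enabled /
`sync_other_side` bookkeeping on top of the same structure):

```rust
fn delete_server(state: &AppState, app: AppType, id: &str) -> Result<bool, AppError> {
    // Hold the write lock only while mutating the in-memory config.
    let (existed, snapshot) = {
        let mut cfg = state.config.write()?;
        let existed = mcp::delete_in_config_for(&mut cfg, &app, id)?;
        // Clone a snapshot only when a file sync will actually follow.
        let snapshot = if existed { Some(cfg.clone()) } else { None };
        (existed, snapshot)
    }; // write lock released here

    // Persist and sync against the snapshot without holding the lock,
    // so concurrent readers are never blocked on file I/O.
    if let Some(snapshot) = snapshot {
        state.save()?;
        match app {
            AppType::Claude => mcp::sync_enabled_to_claude(&snapshot)?,
            AppType::Codex => mcp::sync_enabled_to_codex(&snapshot)?,
        }
    }
    Ok(existed)
}
```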

### Command Layer (`commands/mcp.rs`)
- Reduce to thin wrappers: parse parameters and delegate to `McpService`
- Remove all `*_internal` and `*_test_hook` functions (-94 lines)
- Each command is now 5-10 lines (parameter parsing + service call + error
  mapping); a representative wrapper is sketched below
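
A representative wrapper after the refactor, reconstructed around `set_mcp_enabled`
from the hunks below (the full parameter list is abridged in the diff, so treat the
exact signature as illustrative):

```rust
#[tauri::command]
pub async fn set_mcp_enabled(
    state: State<'_, AppState>,
    app: Option<String>,
    id: String,
    enabled: bool,
) -> Result<bool, String> {
    // Parse parameters, delegate to the service, and map AppError into the
    // String error type the frontend expects.
    let app_ty = AppType::from(app.as_deref().unwrap_or("claude"));
    McpService::set_enabled(&state, app_ty, &id, enabled).map_err(|e| e.to_string())
}
```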

### Core Logic Refactoring (`mcp.rs`)
- Rename `set_enabled_and_sync_for` → `set_enabled_flag_for`
- Remove file sync logic from the low-level function; sync responsibility moves
  to the service layer for better separation of concerns

### Test Adaptation (`tests/mcp_commands.rs`)
- Replace test hooks with direct `McpService` calls
- All 5 MCP integration tests pass

### Additional Fixes
- Add `Default` impl for `AppState` (clippy suggestion)
- Remove unnecessary auto-deref in `commands/provider.rs` and `lib.rs`
- Update Phase 4/5 progress in `BACKEND_REFACTOR_PLAN.md`

## Performance Impact

**Before**: write lock held across file I/O (~10ms), blocking all readers for the duration
**After**: write lock held only for in-memory updates (~100μs); file I/O runs on a cloned snapshot without holding the lock

Estimated throughput improvement: ~2x in high-concurrency read scenarios
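
Schematically, the before/after lock-hold shapes on the `set_mcp_enabled` path look
like this (condensed from the hunks below; the ~2x figure is an estimate, not a
benchmark, and the "after" sketch omits the early return when nothing changed):

```rust
// Before: the write guard stays alive while set_enabled_and_sync_for rewrites
// the client files, so every reader waits out that file I/O (~10ms per write).
fn set_enabled_before(state: &AppState, app_ty: &AppType, id: &str, enabled: bool) -> Result<bool, AppError> {
    let mut cfg = state.config.write()?;
    let changed = mcp::set_enabled_and_sync_for(&mut cfg, app_ty, id, enabled)?; // flag update + file sync
    drop(cfg); // guard released only after the files were written
    state.save()?;
    Ok(changed)
}

// After: the guard covers only the in-memory flag update (~100μs); the file
// sync runs against a cloned snapshot, so readers proceed concurrently.
fn set_enabled_after(state: &AppState, app_ty: &AppType, id: &str, enabled: bool) -> Result<bool, AppError> {
    let snapshot = {
        let mut cfg = state.config.write()?;
        mcp::set_enabled_flag_for(&mut cfg, app_ty, id, enabled)?;
        cfg.clone()
    }; // write guard dropped here
    state.save()?;
    mcp::sync_enabled_to_claude(&snapshot)?; // or sync_enabled_to_codex, by app type
    Ok(true)
}
```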

## Testing

- ✅ All tests pass: 5 MCP command tests + 7 provider service tests
- ✅ Zero clippy warnings with `-D warnings`
- ✅ No behavioral changes; original save semantics are preserved

Part of Phase 4 (Service Layer Abstraction) of backend refactoring roadmap.
Jason
2025-10-28 14:59:28 +08:00
parent 7b1a68ee4e
commit 9e72e786e3
9 changed files with 212 additions and 184 deletions

View File

@@ -86,14 +86,15 @@
- Add a Codex missing-`auth` scenario test, confirming that `switch_provider_internal` returns a contextual `AppError` when key fields are missing, while leaving the in-memory state unpolluted.
- Extract the reusable `import_config_from_path` helper for the config import command and add success/failure integration tests covering backup creation, state sync, JSON parsing, and missing-file error fallback paths; `export_config_to_file` likewise has command-level regressions for the success and missing-source-file cases.
- Add `tests/mcp_commands.rs`, covering command-layer behavior of `import_default_config`, `import_mcp_from_claude`, `set_mcp_enabled`, etc. through test hooks, verifying error rollback on missing files/invalid JSON and the on-disk results of the success paths; Phase 3 goals are met, and the key command-layer boundaries now have regression coverage.
- **Phase 4: Service Layer Abstraction 🚧**
- Add `services/provider.rs` and implement `ProviderService::switch`, which owns the business flow for provider switching (live backfill, persistence, MCP sync); the command layer calls it through a thin wrapper and remains responsible for persisting state.
- Extend `ProviderService` with a `delete` capability, unifying Codex/Claude cleanup logic; `tests/provider_service.rs` verifies switch and delete behavior in success/failure scenarios (missing provider, missing auth, deleting the current provider), giving command/tray reuse a regression guardrail.
- **Phase 5: Lock and Blocking Optimization 🚧**
- Switch `AppState` from `Mutex<MultiAppConfig>` to `RwLock<MultiAppConfig>`; the command layer uses `read()` or `write()` according to read/write semantics, so query paths are no longer blocked by unnecessary mutual exclusion.
- Update tray initialization, the service layer, the MCP/Provider/Import-Export commands, and all integration tests to match, ensuring concurrency safety under the new lock semantics; the full `cargo test` suite passes (including command and service-layer integration cases).
- For the potentially slow config import/export commands, extract `load_config_for_import` to own the file I/O and backup logic, and run it on a blocking thread via `tauri::async_runtime::spawn_blocking` at the command layer; the main thread only writes state and assembles the response.
- Other commands (settings queries, single-file reads/writes) stay synchronous after review to avoid unnecessary thread switching; if bulk-I/O scenarios are added later, they can be moved onto blocking threads following the same pattern.
- **Phase 4: Service Layer Abstraction 🚧 (in progress)**
- Add `services/provider.rs` with `ProviderService::switch` / `delete`, centralizing the core business flows for provider switching, backfill, and MCP sync; the command layer becomes a thin wrapper, with success and failure paths integration-tested in `tests/provider_service.rs` and `tests/provider_commands.rs`.
- Add `services/mcp.rs` providing `McpService`, which encapsulates MCP server queries, create/update/delete, enabled-state sync, and import flows; the command layer is reduced to parameter parsing + service calls, `tests/mcp_commands.rs` uses `McpService` directly to verify success and failure paths, and the Phase 3 tests remain adapted.
- `McpService` clones an in-memory snapshot and releases the write lock before performing file sync, so the `RwLock` introduced in Phase 5 is not held for long stretches during I/O; `upsert/delete/set_enabled/sync_enabled` have all been updated.
- Domain services still to be split out: the config import/export and app settings commands need further abstraction so that file I/O and state sync are uniformly encapsulated before Phase 4 is wrapped up.
- **Phase 5: Lock and Blocking Optimization ✅ (first round)**
- `AppState` has been switched from `Mutex<MultiAppConfig>` to `RwLock<MultiAppConfig>`; tray, commands, and tests distinguish `read()` / `write()` by read/write semantics, and a full `cargo test` run confirms existing flows are unaffected.
- For the I/O-heavy config import/export commands, `load_config_for_import` is extracted and file reads/writes plus backups are moved onto a blocking thread via `tauri::async_runtime::spawn_blocking`, keeping the command-handling thread light.
- The remaining commands were reviewed and confirmed to be lightweight synchronous operations, so no extra `spawn_blocking` is introduced for now; if new long-running flows appear, extend them in the same pattern.
## Incremental Refactoring Roadmap
@@ -129,10 +130,9 @@
- Estimated at 7-10 days; can be executed once the tests are filled in.
### Phase 5: Lock and Blocking Optimization (low payoff / low risk)
- Switch `AppState` from `Mutex` to `RwLock`.
- Use `read()`/`write()` for read and write operations respectively, reducing unnecessary mutual exclusion.
- Wrap long-running tasks (archiving, bulk migration) in `spawn_blocking`; invoke the rest synchronously.
- Estimated at 3-5 days; can be scheduled once the main flow is stable.
- ✅ `AppState` switched from `Mutex` to `RwLock`; commands and tray distinguish reads and writes as needed, and all existing tests pass.
- ✅ The config import/export commands handle heavy file I/O via `spawn_blocking`; other commands stay synchronous to avoid unnecessary scheduling.
- 🔄 Ongoing monitoring: if new bulk migrations or long-running tasks are introduced, extend them onto blocking threads in the same pattern; watch runtime lock contention and, if necessary, consider further splitting the state or adding a cache.
## Testing Strategy
- **Priority coverage scenarios**

View File

@@ -7,8 +7,7 @@ use tauri::State;
use crate::app_config::AppType;
use crate::claude_mcp;
use crate::error::AppError;
use crate::mcp;
use crate::services::McpService;
use crate::store::AppState;
/// Get the Claude MCP status
@@ -56,17 +55,8 @@ pub async fn get_mcp_config(
let config_path = crate::config::get_app_config_path()
.to_string_lossy()
.to_string();
let mut cfg = state
.config
.write()
.map_err(|e| format!("获取锁失败: {}", e))?;
let app_ty = AppType::from(app.as_deref().unwrap_or("claude"));
let (servers, normalized) = mcp::get_servers_snapshot_for(&mut cfg, &app_ty);
let need_save = normalized > 0;
drop(cfg);
if need_save {
state.save()?;
}
let servers = McpService::get_servers(&state, app_ty).map_err(|e| e.to_string())?;
Ok(McpConfigResponse {
config_path,
servers,
@@ -82,48 +72,9 @@ pub async fn upsert_mcp_server_in_config(
spec: serde_json::Value,
sync_other_side: Option<bool>,
) -> Result<bool, String> {
let mut cfg = state
.config
.write()
.map_err(|e| format!("获取锁失败: {}", e))?;
let app_ty = AppType::from(app.as_deref().unwrap_or("claude"));
let mut sync_targets: Vec<AppType> = Vec::new();
let changed = mcp::upsert_in_config_for(&mut cfg, &app_ty, &id, spec.clone())?;
let should_sync_current = cfg
.mcp_for(&app_ty)
.servers
.get(&id)
.and_then(|entry| entry.get("enabled"))
.and_then(|v| v.as_bool())
.unwrap_or(false);
if should_sync_current {
sync_targets.push(app_ty.clone());
}
if sync_other_side.unwrap_or(false) {
match app_ty {
AppType::Claude => sync_targets.push(AppType::Codex),
AppType::Codex => sync_targets.push(AppType::Claude),
}
}
drop(cfg);
state.save()?;
let cfg2 = state
.config
.read()
.map_err(|e| format!("获取锁失败: {}", e))?;
for app_ty_to_sync in sync_targets {
match app_ty_to_sync {
AppType::Claude => mcp::sync_enabled_to_claude(&cfg2)?,
AppType::Codex => mcp::sync_enabled_to_codex(&cfg2)?,
};
}
Ok(changed)
McpService::upsert_server(&state, app_ty, &id, spec, sync_other_side.unwrap_or(false))
.map_err(|e| e.to_string())
}
/// Delete an MCP server definition from config.json
@@ -133,23 +84,8 @@ pub async fn delete_mcp_server_in_config(
app: Option<String>,
id: String,
) -> Result<bool, String> {
let mut cfg = state
.config
.write()
.map_err(|e| format!("获取锁失败: {}", e))?;
let app_ty = AppType::from(app.as_deref().unwrap_or("claude"));
let existed = mcp::delete_in_config_for(&mut cfg, &app_ty, &id)?;
drop(cfg);
state.save()?;
let cfg2 = state
.config
.read()
.map_err(|e| format!("获取锁失败: {}", e))?;
match app_ty {
AppType::Claude => mcp::sync_enabled_to_claude(&cfg2)?,
AppType::Codex => mcp::sync_enabled_to_codex(&cfg2)?,
}
Ok(existed)
McpService::delete_server(&state, app_ty, &id).map_err(|e| e.to_string())
}
/// Set the enabled state and sync it to the client config
@@ -161,104 +97,33 @@ pub async fn set_mcp_enabled(
enabled: bool,
) -> Result<bool, String> {
let app_ty = AppType::from(app.as_deref().unwrap_or("claude"));
set_mcp_enabled_internal(&*state, app_ty, &id, enabled).map_err(Into::into)
McpService::set_enabled(&state, app_ty, &id, enabled).map_err(|e| e.to_string())
}
/// Manual sync: project enabled MCP servers into ~/.claude.json
#[tauri::command]
pub async fn sync_enabled_mcp_to_claude(state: State<'_, AppState>) -> Result<bool, String> {
let mut cfg = state
.config
.write()
.map_err(|e| format!("获取锁失败: {}", e))?;
let normalized = mcp::normalize_servers_for(&mut cfg, &AppType::Claude);
mcp::sync_enabled_to_claude(&cfg)?;
let need_save = normalized > 0;
drop(cfg);
if need_save {
state.save()?;
}
Ok(true)
McpService::sync_enabled(&state, AppType::Claude)
.map(|_| true)
.map_err(|e| e.to_string())
}
/// Manual sync: project enabled MCP servers into ~/.codex/config.toml
#[tauri::command]
pub async fn sync_enabled_mcp_to_codex(state: State<'_, AppState>) -> Result<bool, String> {
let mut cfg = state
.config
.write()
.map_err(|e| format!("获取锁失败: {}", e))?;
let normalized = mcp::normalize_servers_for(&mut cfg, &AppType::Codex);
mcp::sync_enabled_to_codex(&cfg)?;
let need_save = normalized > 0;
drop(cfg);
if need_save {
state.save()?;
}
Ok(true)
McpService::sync_enabled(&state, AppType::Codex)
.map(|_| true)
.map_err(|e| e.to_string())
}
/// Import MCP definitions from ~/.claude.json into config.json
#[tauri::command]
pub async fn import_mcp_from_claude(state: State<'_, AppState>) -> Result<usize, String> {
import_mcp_from_claude_internal(&*state).map_err(Into::into)
McpService::import_from_claude(&state).map_err(|e| e.to_string())
}
/// Import MCP definitions from ~/.codex/config.toml into config.json
#[tauri::command]
pub async fn import_mcp_from_codex(state: State<'_, AppState>) -> Result<usize, String> {
import_mcp_from_codex_internal(&*state).map_err(Into::into)
}
fn set_mcp_enabled_internal(
state: &AppState,
app_ty: AppType,
id: &str,
enabled: bool,
) -> Result<bool, AppError> {
let mut cfg = state.config.write()?;
let changed = mcp::set_enabled_and_sync_for(&mut cfg, &app_ty, id, enabled)?;
drop(cfg);
state.save()?;
Ok(changed)
}
#[doc(hidden)]
pub fn set_mcp_enabled_test_hook(
state: &AppState,
app_ty: AppType,
id: &str,
enabled: bool,
) -> Result<bool, AppError> {
set_mcp_enabled_internal(state, app_ty, id, enabled)
}
fn import_mcp_from_claude_internal(state: &AppState) -> Result<usize, AppError> {
let mut cfg = state.config.write()?;
let changed = mcp::import_from_claude(&mut cfg)?;
drop(cfg);
if changed > 0 {
state.save()?;
}
Ok(changed)
}
#[doc(hidden)]
pub fn import_mcp_from_claude_test_hook(state: &AppState) -> Result<usize, AppError> {
import_mcp_from_claude_internal(state)
}
fn import_mcp_from_codex_internal(state: &AppState) -> Result<usize, AppError> {
let mut cfg = state.config.write()?;
let changed = mcp::import_from_codex(&mut cfg)?;
drop(cfg);
if changed > 0 {
state.save()?;
}
Ok(changed)
}
#[doc(hidden)]
pub fn import_mcp_from_codex_test_hook(state: &AppState) -> Result<usize, AppError> {
import_mcp_from_codex_internal(state)
McpService::import_from_codex(&state).map_err(|e| e.to_string())
}

View File

@@ -393,7 +393,7 @@ pub async fn import_default_config(
.or_else(|| appType.as_deref().map(|s| s.into()))
.unwrap_or(AppType::Claude);
import_default_config_internal(&*state, app_type)
import_default_config_internal(&state, app_type)
.map(|_| true)
.map_err(Into::into)
}

View File

@@ -28,7 +28,7 @@ pub use mcp::{
import_from_claude, import_from_codex, sync_enabled_to_claude, sync_enabled_to_codex,
};
pub use provider::Provider;
pub use services::ProviderService;
pub use services::{McpService, ProviderService};
pub use settings::{update_settings, AppSettings};
pub use store::AppState;
@@ -427,7 +427,7 @@ pub fn run() {
let app_state = AppState::new();
// Migrate the legacy app_config_dir setting into the Store
if let Err(e) = app_store::migrate_app_config_dir_from_settings(&app.handle()) {
if let Err(e) = app_store::migrate_app_config_dir_from_settings(app.handle()) {
log::warn!("迁移 app_config_dir 失败: {}", e);
}

View File

@@ -292,8 +292,8 @@ pub fn delete_in_config_for(
Ok(existed)
}
/// Set the enabled state and sync to ~/.claude.json
pub fn set_enabled_and_sync_for(
/// Set the enabled flag (performs no persistence or file sync)
pub fn set_enabled_flag_for(
config: &mut MultiAppConfig,
app: &AppType,
id: &str,
@@ -316,17 +316,6 @@ pub fn set_enabled_and_sync_for(
return Ok(false);
}
// Sync enabled entries
match app {
AppType::Claude => {
// Project enabled entries into ~/.claude.json
sync_enabled_to_claude(config)?;
}
AppType::Codex => {
// Project enabled entries into ~/.codex/config.toml
sync_enabled_to_codex(config)?;
}
}
Ok(true)
}

View File

@@ -0,0 +1,168 @@
use std::collections::HashMap;
use serde_json::Value;
use crate::app_config::{AppType, MultiAppConfig};
use crate::error::AppError;
use crate::mcp;
use crate::store::AppState;
/// MCP-related business logic
pub struct McpService;
impl McpService {
/// Get a snapshot of MCP servers for the given app, writing back the normalized config when needed.
pub fn get_servers(state: &AppState, app: AppType) -> Result<HashMap<String, Value>, AppError> {
let mut cfg = state.config.write()?;
let (snapshot, normalized) = mcp::get_servers_snapshot_for(&mut cfg, &app);
drop(cfg);
if normalized > 0 {
state.save()?;
}
Ok(snapshot)
}
/// Add or update the given MCP server in config.json, syncing to the matching client when needed.
pub fn upsert_server(
state: &AppState,
app: AppType,
id: &str,
spec: Value,
sync_other_side: bool,
) -> Result<bool, AppError> {
let (changed, snapshot, sync_claude, sync_codex): (
bool,
Option<MultiAppConfig>,
bool,
bool,
) = {
let mut cfg = state.config.write()?;
let changed = mcp::upsert_in_config_for(&mut cfg, &app, id, spec)?;
let enabled = cfg
.mcp_for(&app)
.servers
.get(id)
.and_then(|entry| entry.get("enabled"))
.and_then(|v| v.as_bool())
.unwrap_or(false);
let mut sync_claude = matches!(app, AppType::Claude) && enabled;
let mut sync_codex = matches!(app, AppType::Codex) && enabled;
if sync_other_side {
match app {
AppType::Claude => sync_codex = true,
AppType::Codex => sync_claude = true,
}
}
let snapshot = if sync_claude || sync_codex {
Some(cfg.clone())
} else {
None
};
(changed, snapshot, sync_claude, sync_codex)
};
// Preserve the original behavior: always attempt to persist, so implicit changes from normalization are not lost
state.save()?;
if let Some(snapshot) = snapshot {
if sync_claude {
mcp::sync_enabled_to_claude(&snapshot)?;
}
if sync_codex {
mcp::sync_enabled_to_codex(&snapshot)?;
}
}
Ok(changed)
}
/// Delete an MCP server entry from config.json and sync the client config.
pub fn delete_server(state: &AppState, app: AppType, id: &str) -> Result<bool, AppError> {
let (existed, snapshot): (bool, Option<MultiAppConfig>) = {
let mut cfg = state.config.write()?;
let existed = mcp::delete_in_config_for(&mut cfg, &app, id)?;
let snapshot = if existed { Some(cfg.clone()) } else { None };
(existed, snapshot)
};
if existed {
state.save()?;
if let Some(snapshot) = snapshot {
match app {
AppType::Claude => mcp::sync_enabled_to_claude(&snapshot)?,
AppType::Codex => mcp::sync_enabled_to_codex(&snapshot)?,
}
}
}
Ok(existed)
}
/// Set the MCP enabled state and sync it to the client config.
pub fn set_enabled(
state: &AppState,
app: AppType,
id: &str,
enabled: bool,
) -> Result<bool, AppError> {
let (existed, snapshot): (bool, Option<MultiAppConfig>) = {
let mut cfg = state.config.write()?;
let existed = mcp::set_enabled_flag_for(&mut cfg, &app, id, enabled)?;
let snapshot = if existed { Some(cfg.clone()) } else { None };
(existed, snapshot)
};
if existed {
state.save()?;
if let Some(snapshot) = snapshot {
match app {
AppType::Claude => mcp::sync_enabled_to_claude(&snapshot)?,
AppType::Codex => mcp::sync_enabled_to_codex(&snapshot)?,
}
}
}
Ok(existed)
}
/// Manually sync enabled MCP servers to the client config.
pub fn sync_enabled(state: &AppState, app: AppType) -> Result<(), AppError> {
let (snapshot, normalized): (MultiAppConfig, usize) = {
let mut cfg = state.config.write()?;
let normalized = mcp::normalize_servers_for(&mut cfg, &app);
(cfg.clone(), normalized)
};
if normalized > 0 {
state.save()?;
}
match app {
AppType::Claude => mcp::sync_enabled_to_claude(&snapshot)?,
AppType::Codex => mcp::sync_enabled_to_codex(&snapshot)?,
}
Ok(())
}
/// Import MCP definitions from the Claude client config.
pub fn import_from_claude(state: &AppState) -> Result<usize, AppError> {
let mut cfg = state.config.write()?;
let changed = mcp::import_from_claude(&mut cfg)?;
drop(cfg);
if changed > 0 {
state.save()?;
}
Ok(changed)
}
/// Import MCP definitions from the Codex client config.
pub fn import_from_codex(state: &AppState) -> Result<usize, AppError> {
let mut cfg = state.config.write()?;
let changed = mcp::import_from_codex(&mut cfg)?;
drop(cfg);
if changed > 0 {
state.save()?;
}
Ok(changed)
}
}

View File

@@ -1,3 +1,5 @@
pub mod mcp;
pub mod provider;
pub use mcp::McpService;
pub use provider::ProviderService;

View File

@@ -7,6 +7,12 @@ pub struct AppState {
pub config: RwLock<MultiAppConfig>,
}
impl Default for AppState {
fn default() -> Self {
Self::new()
}
}
impl AppState {
/// Create a new application state
pub fn new() -> Self {

View File

@@ -3,9 +3,8 @@ use std::{fs, sync::RwLock};
use serde_json::json;
use cc_switch_lib::{
get_claude_mcp_path, get_claude_settings_path, import_default_config_test_hook,
import_mcp_from_claude_test_hook, set_mcp_enabled_test_hook, AppError, AppState, AppType,
MultiAppConfig,
get_claude_mcp_path, get_claude_settings_path, import_default_config_test_hook, AppError,
AppState, AppType, McpService, MultiAppConfig,
};
#[path = "support.rs"]
@@ -116,8 +115,7 @@ fn import_mcp_from_claude_creates_config_and_enables_servers() {
config: RwLock::new(MultiAppConfig::default()),
};
let changed =
import_mcp_from_claude_test_hook(&state).expect("import mcp from claude succeeds");
let changed = McpService::import_from_claude(&state).expect("import mcp from claude succeeds");
assert!(
changed > 0,
"import should report inserted or normalized entries"
@@ -159,7 +157,7 @@ fn import_mcp_from_claude_invalid_json_preserves_state() {
};
let err =
import_mcp_from_claude_test_hook(&state).expect_err("invalid json should bubble up error");
McpService::import_from_claude(&state).expect_err("invalid json should bubble up error");
match err {
AppError::McpValidation(msg) => assert!(
msg.contains("解析 ~/.claude.json 失败"),
@@ -200,7 +198,7 @@ fn set_mcp_enabled_for_codex_writes_live_config() {
config: RwLock::new(config),
};
set_mcp_enabled_test_hook(&state, AppType::Codex, "codex-server", true)
McpService::set_enabled(&state, AppType::Codex, "codex-server", true)
.expect("set enabled should succeed");
let guard = state.config.read().expect("lock config");