Fix cache
All checks were successful: Build docker and publish / build (20.15.1) (push) in 7m26s

This commit is contained in:
shanshanzhong 2026-03-06 21:58:29 -08:00
parent 19932bb9f7
commit 4d913c1728
175 changed files with 437 additions and 12104 deletions

@@ -1,7 +0,0 @@
# Claude Flow runtime files
data/
logs/
sessions/
neural/
*.log
*.tmp

@@ -1,403 +0,0 @@
# RuFlo V3 - Complete Capabilities Reference
> Generated: 2026-03-06T01:38:48.235Z
> Full documentation: https://github.com/ruvnet/claude-flow
## 📋 Table of Contents
1. [Overview](#overview)
2. [Swarm Orchestration](#swarm-orchestration)
3. [Available Agents (60+)](#available-agents)
4. [CLI Commands (26 Commands, 140+ Subcommands)](#cli-commands)
5. [Hooks System (27 Hooks + 12 Workers)](#hooks-system)
6. [Memory & Intelligence (RuVector)](#memory--intelligence)
7. [Hive-Mind Consensus](#hive-mind-consensus)
8. [Performance Targets](#performance-targets)
9. [Integration Ecosystem](#integration-ecosystem)
---
## Overview
RuFlo V3 is a domain-driven design architecture for multi-agent AI coordination with:
- **15-Agent Swarm Coordination** with hierarchical and mesh topologies
- **HNSW Vector Search** - 150x-12,500x faster pattern retrieval
- **SONA Neural Learning** - Self-optimizing with <0.05ms adaptation
- **Byzantine Fault Tolerance** - Queen-led consensus mechanisms
- **MCP Server Integration** - Model Context Protocol support
### Current Configuration
| Setting | Value |
|---------|-------|
| Topology | hierarchical-mesh |
| Max Agents | 15 |
| Memory Backend | hybrid |
| HNSW Indexing | Enabled |
| Neural Learning | Enabled |
| LearningBridge | Enabled (SONA + ReasoningBank) |
| Knowledge Graph | Enabled (PageRank + Communities) |
| Agent Scopes | Enabled (project/local/user) |
---
## Swarm Orchestration
### Topologies
| Topology | Description | Best For |
|----------|-------------|----------|
| `hierarchical` | Queen controls workers directly | Anti-drift, tight control |
| `mesh` | Fully connected peer network | Distributed tasks |
| `hierarchical-mesh` | V3 hybrid (recommended) | 10+ agents |
| `ring` | Circular communication | Sequential workflows |
| `star` | Central coordinator | Simple coordination |
| `adaptive` | Dynamic based on load | Variable workloads |
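The trade-off between the topologies above is largely a matter of how many coordination links they create. A minimal sketch (a simplified counting model, not Claude Flow internals) shows why a full mesh stops scaling and why the hybrid is recommended past 10 agents:

```go
package main

import "fmt"

// linkCount returns the number of coordination links for n agents under
// each topology. This is an illustrative model, not Claude Flow's code.
func linkCount(topology string, n int) int {
	switch topology {
	case "hierarchical", "star": // one hub connected to every worker
		return n - 1
	case "mesh": // every pair of agents is connected
		return n * (n - 1) / 2
	case "ring": // each agent links to its successor
		return n
	default:
		return 0
	}
}

func main() {
	for _, topo := range []string{"hierarchical", "mesh", "ring"} {
		fmt.Printf("%-12s with 15 agents: %d links\n", topo, linkCount(topo, 15))
	}
}
```

At 15 agents a mesh needs 105 links versus 14 for a hierarchy, which is why `hierarchical-mesh` keeps full connectivity only within small clusters.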
### Strategies
- `balanced` - Even distribution across agents
- `specialized` - Clear roles, no overlap (anti-drift)
- `adaptive` - Dynamic task routing
### Quick Commands
```bash
# Initialize swarm
npx @claude-flow/cli@latest swarm init --topology hierarchical --max-agents 8 --strategy specialized
# Check status
npx @claude-flow/cli@latest swarm status
# Monitor activity
npx @claude-flow/cli@latest swarm monitor
```
---
## Available Agents
### Core Development (5)
`coder`, `reviewer`, `tester`, `planner`, `researcher`
### V3 Specialized (4)
`security-architect`, `security-auditor`, `memory-specialist`, `performance-engineer`
### Swarm Coordination (5)
`hierarchical-coordinator`, `mesh-coordinator`, `adaptive-coordinator`, `collective-intelligence-coordinator`, `swarm-memory-manager`
### Consensus & Distributed (7)
`byzantine-coordinator`, `raft-manager`, `gossip-coordinator`, `consensus-builder`, `crdt-synchronizer`, `quorum-manager`, `security-manager`
### Performance & Optimization (5)
`perf-analyzer`, `performance-benchmarker`, `task-orchestrator`, `memory-coordinator`, `smart-agent`
### GitHub & Repository (9)
`github-modes`, `pr-manager`, `code-review-swarm`, `issue-tracker`, `release-manager`, `workflow-automation`, `project-board-sync`, `repo-architect`, `multi-repo-swarm`
### SPARC Methodology (6)
`sparc-coord`, `sparc-coder`, `specification`, `pseudocode`, `architecture`, `refinement`
### Specialized Development (8)
`backend-dev`, `mobile-dev`, `ml-developer`, `cicd-engineer`, `api-docs`, `system-architect`, `code-analyzer`, `base-template-generator`
### Testing & Validation (2)
`tdd-london-swarm`, `production-validator`
### Agent Routing by Task
| Task Type | Recommended Agents | Topology |
|-----------|-------------------|----------|
| Bug Fix | researcher, coder, tester | mesh |
| New Feature | coordinator, architect, coder, tester, reviewer | hierarchical |
| Refactoring | architect, coder, reviewer | mesh |
| Performance | researcher, perf-engineer, coder | hierarchical |
| Security | security-architect, auditor, reviewer | hierarchical |
| Docs | researcher, api-docs | mesh |
---
## CLI Commands
### Core Commands (12)
| Command | Subcommands | Description |
|---------|-------------|-------------|
| `init` | 4 | Project initialization |
| `agent` | 8 | Agent lifecycle management |
| `swarm` | 6 | Multi-agent coordination |
| `memory` | 11 | AgentDB with HNSW search |
| `mcp` | 9 | MCP server management |
| `task` | 6 | Task assignment |
| `session` | 7 | Session persistence |
| `config` | 7 | Configuration |
| `status` | 3 | System monitoring |
| `workflow` | 6 | Workflow templates |
| `hooks` | 17 | Self-learning hooks |
| `hive-mind` | 6 | Consensus coordination |
### Advanced Commands (14)
| Command | Subcommands | Description |
|---------|-------------|-------------|
| `daemon` | 5 | Background workers |
| `neural` | 5 | Pattern training |
| `security` | 6 | Security scanning |
| `performance` | 5 | Profiling & benchmarks |
| `providers` | 5 | AI provider config |
| `plugins` | 5 | Plugin management |
| `deployment` | 5 | Deploy management |
| `embeddings` | 4 | Vector embeddings |
| `claims` | 4 | Authorization |
| `migrate` | 5 | V2→V3 migration |
| `process` | 4 | Process management |
| `doctor` | 1 | Health diagnostics |
| `completions` | 4 | Shell completions |
### Example Commands
```bash
# Initialize
npx @claude-flow/cli@latest init --wizard
# Spawn agent
npx @claude-flow/cli@latest agent spawn -t coder --name my-coder
# Memory operations
npx @claude-flow/cli@latest memory store --key "pattern" --value "data" --namespace patterns
npx @claude-flow/cli@latest memory search --query "authentication"
# Diagnostics
npx @claude-flow/cli@latest doctor --fix
```
---
## Hooks System
### 27 Available Hooks
#### Core Hooks (6)
| Hook | Description |
|------|-------------|
| `pre-edit` | Context before file edits |
| `post-edit` | Record edit outcomes |
| `pre-command` | Risk assessment |
| `post-command` | Command metrics |
| `pre-task` | Task start + agent suggestions |
| `post-task` | Task completion learning |
#### Session Hooks (4)
| Hook | Description |
|------|-------------|
| `session-start` | Start/restore session |
| `session-end` | Persist state |
| `session-restore` | Restore previous |
| `notify` | Cross-agent notifications |
#### Intelligence Hooks (5)
| Hook | Description |
|------|-------------|
| `route` | Optimal agent routing |
| `explain` | Routing decisions |
| `pretrain` | Bootstrap intelligence |
| `build-agents` | Generate configs |
| `transfer` | Pattern transfer |
#### Coverage Hooks (3)
| Hook | Description |
|------|-------------|
| `coverage-route` | Coverage-based routing |
| `coverage-suggest` | Improvement suggestions |
| `coverage-gaps` | Gap analysis |
### 12 Background Workers
| Worker | Priority | Purpose |
|--------|----------|---------|
| `ultralearn` | normal | Deep knowledge |
| `optimize` | high | Performance |
| `consolidate` | low | Memory consolidation |
| `predict` | normal | Predictive preload |
| `audit` | critical | Security |
| `map` | normal | Codebase mapping |
| `preload` | low | Resource preload |
| `deepdive` | normal | Deep analysis |
| `document` | normal | Auto-docs |
| `refactor` | normal | Suggestions |
| `benchmark` | normal | Benchmarking |
| `testgaps` | normal | Coverage gaps |
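The worker table above implies a dispatch order: `critical` work (like `audit`) should preempt `low`-priority consolidation. A minimal sketch of that ordering, assuming the four priority levels from the table (this is not the actual scheduler):

```go
package main

import (
	"fmt"
	"sort"
)

// priorityRank is assumed from the table: critical > high > normal > low.
var priorityRank = map[string]int{"critical": 0, "high": 1, "normal": 2, "low": 3}

type worker struct {
	name, priority string
}

// dispatchOrder sorts workers so higher-priority ones run first,
// preserving table order among equal priorities.
func dispatchOrder(ws []worker) []string {
	sort.SliceStable(ws, func(i, j int) bool {
		return priorityRank[ws[i].priority] < priorityRank[ws[j].priority]
	})
	names := make([]string, len(ws))
	for i, w := range ws {
		names[i] = w.name
	}
	return names
}

func main() {
	ws := []worker{
		{"consolidate", "low"}, {"audit", "critical"},
		{"optimize", "high"}, {"map", "normal"},
	}
	fmt.Println(dispatchOrder(ws)) // [audit optimize map consolidate]
}
```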
---
## Memory & Intelligence
### RuVector Intelligence System
- **SONA**: Self-Optimizing Neural Architecture (<0.05ms)
- **MoE**: Mixture of Experts routing
- **HNSW**: 150x-12,500x faster search
- **EWC++**: Prevents catastrophic forgetting
- **Flash Attention**: 2.49x-7.47x speedup
- **Int8 Quantization**: 3.92x memory reduction
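The Int8 figure follows from storage sizes: float32 takes 4 bytes per value, int8 takes 1, so quantization approaches 4x, with the stored scale factor accounting for the gap down to 3.92x. A minimal symmetric per-tensor sketch (illustrative only, not RuVector's actual quantizer):

```go
package main

import (
	"fmt"
	"math"
)

// quantize maps float32 values to int8 with a symmetric per-tensor scale.
// Dequantize with float32(q[i]) * scale.
func quantize(xs []float32) (q []int8, scale float32) {
	var maxAbs float32
	for _, x := range xs {
		if x < 0 {
			x = -x
		}
		if x > maxAbs {
			maxAbs = x
		}
	}
	if maxAbs == 0 {
		return make([]int8, len(xs)), 1
	}
	scale = maxAbs / 127
	q = make([]int8, len(xs))
	for i, x := range xs {
		// Round to the nearest int8 step instead of truncating.
		q[i] = int8(math.Round(float64(x / scale)))
	}
	return q, scale
}

func main() {
	q, scale := quantize([]float32{0.5, -2, 1, 2})
	fmt.Println(q, scale) // values land in [-127, 127]; 1 byte each vs 4
}
```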
### 4-Step Intelligence Pipeline
1. **RETRIEVE** - HNSW pattern search
2. **JUDGE** - Success/failure verdicts
3. **DISTILL** - LoRA learning extraction
4. **CONSOLIDATE** - EWC++ preservation
### Self-Learning Memory (ADR-049)
| Component | Status | Description |
|-----------|--------|-------------|
| **LearningBridge** | ✅ Enabled | Connects insights to SONA/ReasoningBank neural pipeline |
| **MemoryGraph** | ✅ Enabled | PageRank knowledge graph + community detection |
| **AgentMemoryScope** | ✅ Enabled | 3-scope agent memory (project/local/user) |
**LearningBridge** - Insights trigger learning trajectories. Confidence evolves: +0.03 on access, -0.005/hour decay. Consolidation runs the JUDGE/DISTILL/CONSOLIDATE pipeline.
**MemoryGraph** - Builds a knowledge graph from entry references. PageRank identifies influential insights. Communities group related knowledge. Graph-aware ranking blends vector + structural scores.
**AgentMemoryScope** - Maps Claude Code 3-scope directories:
- `project`: `<gitRoot>/.claude/agent-memory/<agent>/`
- `local`: `<gitRoot>/.claude/agent-memory-local/<agent>/`
- `user`: `~/.claude/agent-memory/<agent>/`
High-confidence insights (>0.8) can transfer between agents.
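The confidence dynamics above (+0.03 per access, -0.005/hour decay) can be sketched as a single update rule; the function name and the clamping to [0, 1] are assumptions for illustration, not the LearningBridge implementation:

```go
package main

import "fmt"

// updateConfidence applies the documented dynamics: +0.03 per access,
// -0.005 per idle hour, clamped to [0, 1] (clamping is an assumption).
func updateConfidence(conf float64, accesses int, hoursIdle float64) float64 {
	conf += 0.03*float64(accesses) - 0.005*hoursIdle
	if conf > 1 {
		conf = 1
	}
	if conf < 0 {
		conf = 0
	}
	return conf
}

func main() {
	// An insight at 0.70 confidence, accessed 4 times over 10 idle hours,
	// ends at 0.77 — still below the 0.8 cross-agent transfer threshold.
	fmt.Printf("%.2f\n", updateConfidence(0.70, 4, 10))
}
```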
### Memory Commands
```bash
# Store pattern
npx @claude-flow/cli@latest memory store --key "name" --value "data" --namespace patterns
# Semantic search
npx @claude-flow/cli@latest memory search --query "authentication"
# List entries
npx @claude-flow/cli@latest memory list --namespace patterns
# Initialize database
npx @claude-flow/cli@latest memory init --force
```
---
## Hive-Mind Consensus
### Queen Types
| Type | Role |
|------|------|
| Strategic Queen | Long-term planning |
| Tactical Queen | Execution coordination |
| Adaptive Queen | Dynamic optimization |
### Worker Types (8)
`researcher`, `coder`, `analyst`, `tester`, `architect`, `reviewer`, `optimizer`, `documenter`
### Consensus Mechanisms
| Mechanism | Fault Tolerance | Use Case |
|-----------|-----------------|----------|
| `byzantine` | f < n/3 faulty | Adversarial |
| `raft` | f < n/2 failed | Leader-based |
| `gossip` | Eventually consistent | Large scale |
| `crdt` | Conflict-free | Distributed |
| `quorum` | Configurable | Flexible |
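The fault-tolerance bounds in the table translate directly into how many bad nodes a given hive size survives. A sketch of that arithmetic (standard consensus bounds, not Claude Flow's code):

```go
package main

import "fmt"

// maxFaulty returns the largest number of faulty (byzantine) or failed
// (raft) nodes tolerated for n nodes, per the table's bounds.
func maxFaulty(mechanism string, n int) int {
	switch mechanism {
	case "byzantine": // needs n > 3f, so f <= (n-1)/3
		return (n - 1) / 3
	case "raft": // needs a live majority, so f <= (n-1)/2
		return (n - 1) / 2
	default:
		return 0
	}
}

func main() {
	// With the default 15-agent hive: byzantine survives 4 adversarial
	// nodes, raft survives 7 crashed ones.
	fmt.Println(maxFaulty("byzantine", 15), maxFaulty("raft", 15))
}
```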
### Hive-Mind Commands
```bash
# Initialize
npx @claude-flow/cli@latest hive-mind init --queen-type strategic
# Status
npx @claude-flow/cli@latest hive-mind status
# Spawn workers
npx @claude-flow/cli@latest hive-mind spawn --count 5 --type worker
# Consensus
npx @claude-flow/cli@latest hive-mind consensus --propose "task"
```
---
## Performance Targets
| Metric | Target | Status |
|--------|--------|--------|
| HNSW Search | 150x-12,500x faster | ✅ Implemented |
| Memory Reduction | 50-75% | ✅ Implemented (3.92x) |
| SONA Integration | Pattern learning | ✅ Implemented |
| Flash Attention | 2.49x-7.47x | 🔄 In Progress |
| MCP Response | <100ms | ✅ Achieved |
| CLI Startup | <500ms | ✅ Achieved |
| SONA Adaptation | <0.05ms | 🔄 In Progress |
| Graph Build (1k) | <200ms | 2.78ms (71.9x headroom) |
| PageRank (1k) | <100ms | 12.21ms (8.2x headroom) |
| Insight Recording | <5ms/each | 0.12ms (41x headroom) |
| Consolidation | <500ms | 0.26ms (1,955x headroom) |
| Knowledge Transfer | <100ms | 1.25ms (80x headroom) |
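The "Nx headroom" column is the ratio of target to measured latency; a one-line sketch reproduces the table's figures:

```go
package main

import "fmt"

// headroom expresses how far under its target a measured latency is:
// target divided by actual, matching the table's "Nx headroom" column.
func headroom(targetMs, actualMs float64) float64 {
	return targetMs / actualMs
}

func main() {
	fmt.Printf("Graph build: %.1fx\n", headroom(200, 2.78)) // ~71.9x
	fmt.Printf("PageRank:    %.1fx\n", headroom(100, 12.21)) // ~8.2x
}
```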
---
## Integration Ecosystem
### Integrated Packages
| Package | Version | Purpose |
|---------|---------|---------|
| agentic-flow | 3.0.0-alpha.1 | Core coordination + ReasoningBank + Router |
| agentdb | 3.0.0-alpha.10 | Vector database + 8 controllers |
| @ruvector/attention | 0.1.3 | Flash attention |
| @ruvector/sona | 0.1.5 | Neural learning |
### Optional Integrations
| Package | Command |
|---------|---------|
| ruv-swarm | `npx ruv-swarm mcp start` |
| flow-nexus | `npx flow-nexus@latest mcp start` |
| agentic-jujutsu | `npx agentic-jujutsu@latest` |
### MCP Server Setup
```bash
# Add Claude Flow MCP
claude mcp add claude-flow -- npx -y @claude-flow/cli@latest
# Optional servers
claude mcp add ruv-swarm -- npx -y ruv-swarm mcp start
claude mcp add flow-nexus -- npx -y flow-nexus@latest mcp start
```
---
## Quick Reference
### Essential Commands
```bash
# Setup
npx @claude-flow/cli@latest init --wizard
npx @claude-flow/cli@latest daemon start
npx @claude-flow/cli@latest doctor --fix
# Swarm
npx @claude-flow/cli@latest swarm init --topology hierarchical --max-agents 8
npx @claude-flow/cli@latest swarm status
# Agents
npx @claude-flow/cli@latest agent spawn -t coder
npx @claude-flow/cli@latest agent list
# Memory
npx @claude-flow/cli@latest memory search --query "patterns"
# Hooks
npx @claude-flow/cli@latest hooks pre-task --description "task"
npx @claude-flow/cli@latest hooks worker dispatch --trigger optimize
```
### File Structure
```
.claude-flow/
├── config.yaml # Runtime configuration
├── CAPABILITIES.md # This file
├── data/ # Memory storage
├── logs/ # Operation logs
├── sessions/ # Session state
├── hooks/ # Custom hooks
├── agents/ # Agent configs
└── workflows/ # Workflow templates
```
---
**Full Documentation**: https://github.com/ruvnet/claude-flow
**Issues**: https://github.com/ruvnet/claude-flow/issues

@@ -1,43 +0,0 @@
# RuFlo V3 Runtime Configuration
# Generated: 2026-03-06T01:38:48.235Z
version: "3.0.0"
swarm:
topology: hierarchical-mesh
maxAgents: 15
autoScale: true
coordinationStrategy: consensus
memory:
backend: hybrid
enableHNSW: true
persistPath: .claude-flow/data
cacheSize: 100
# ADR-049: Self-Learning Memory
learningBridge:
enabled: true
sonaMode: balanced
confidenceDecayRate: 0.005
accessBoostAmount: 0.03
consolidationThreshold: 10
memoryGraph:
enabled: true
pageRankDamping: 0.85
maxNodes: 5000
similarityThreshold: 0.8
agentScopes:
enabled: true
defaultScope: project
neural:
enabled: true
modelPath: .claude-flow/neural
hooks:
enabled: true
autoExecute: true
mcp:
autoStart: false
port: 3000

@@ -1,17 +0,0 @@
{
"initialized": "2026-03-06T01:38:48.235Z",
"routing": {
"accuracy": 0,
"decisions": 0
},
"patterns": {
"shortTerm": 0,
"longTerm": 0,
"quality": 0
},
"sessions": {
"total": 0,
"current": null
},
"_note": "Intelligence grows as you use Claude Flow"
}

@@ -1,18 +0,0 @@
{
"timestamp": "2026-03-06T01:38:48.235Z",
"processes": {
"agentic_flow": 0,
"mcp_server": 0,
"estimated_agents": 0
},
"swarm": {
"active": false,
"agent_count": 0,
"coordination_active": false
},
"integration": {
"agentic_flow_active": false,
"mcp_active": false
},
"_initialized": true
}

@@ -1,26 +0,0 @@
{
"version": "3.0.0",
"initialized": "2026-03-06T01:38:48.235Z",
"domains": {
"completed": 0,
"total": 5,
"status": "INITIALIZING"
},
"ddd": {
"progress": 0,
"modules": 0,
"totalFiles": 0,
"totalLines": 0
},
"swarm": {
"activeAgents": 0,
"maxAgents": 15,
"topology": "hierarchical-mesh"
},
"learning": {
"status": "READY",
"patternsLearned": 0,
"sessionsCompleted": 0
},
"_note": "Metrics will update as you use Claude Flow. Run: npx @claude-flow/cli@latest daemon start"
}

@@ -1,8 +0,0 @@
{
"initialized": "2026-03-06T01:38:48.236Z",
"status": "PENDING",
"cvesFixed": 0,
"totalCves": 3,
"lastScan": null,
"_note": "Run: npx @claude-flow/cli@latest security scan"
}

.gitignore (vendored, 77 changes)
@@ -1,24 +1,69 @@
# ==================== IDE / Editors ====================
.idea/
.claude/
.claude-flow/
.vscode/
*-dev.yaml
*.local.yaml
/test/
*.log
*.sh
script/*.sh
*.swp
*.swo
*~
# ==================== OS files ====================
.DS_Store
*_test_config.go
*.log*
Thumbs.db
# ==================== Go build artifacts ====================
/bin/
/build/
/generate/
*.exe
*.dll
*.so
*.dylib
# ==================== Env / secrets / certificates ====================
.env
.env.*
!.env.example
*.p8
*.crt
*.key
node_modules
package-lock.json
*.pem
# ==================== Logs ====================
*.log
*.log.*
logs/
# ==================== Tests ====================
/test/
*_test.go
*_test_config.go
**/logtest/
*_test.yaml
# ==================== AI toolchain (Ruflo / Serena / CGC) ====================
.claude/
.claude-flow/
.serena/
.swarm/
.mcp.json
CLAUDE.md
# ==================== Node (not needed in this project) ====================
node_modules/
package.json
/bin
.claude
./github
./run
package-lock.json
# ==================== Temp / local config ====================
*-dev.yaml
*.local.yaml
*.tmp
*.bak
# ==================== Scripts ====================
*.sh
script/*.sh
# ==================== CI/CD local run config ====================
.run/
# ==================== Temporary notes ====================
订单日志.txt

@@ -1,22 +0,0 @@
{
"mcpServers": {
"claude-flow": {
"command": "npx",
"args": [
"-y",
"@claude-flow/cli@latest",
"mcp",
"start"
],
"env": {
"npm_config_update_notifier": "false",
"CLAUDE_FLOW_MODE": "v3",
"CLAUDE_FLOW_HOOKS_ENABLED": "true",
"CLAUDE_FLOW_TOPOLOGY": "hierarchical-mesh",
"CLAUDE_FLOW_MAX_AGENTS": "15",
"CLAUDE_FLOW_MEMORY_BACKEND": "hybrid"
},
"autoStart": false
}
}
}

CLAUDE.md (188 changes)
@@ -1,188 +0,0 @@
# Claude Code Configuration - RuFlo V3
## Behavioral Rules (Always Enforced)
- Do what has been asked; nothing more, nothing less
- NEVER create files unless they're absolutely necessary for achieving your goal
- ALWAYS prefer editing an existing file to creating a new one
- NEVER proactively create documentation files (*.md) or README files unless explicitly requested
- NEVER save working files, text/mds, or tests to the root folder
- Never continuously check status after spawning a swarm — wait for results
- ALWAYS read a file before editing it
- NEVER commit secrets, credentials, or .env files
## File Organization
- NEVER save to root folder — use the directories below
- Use `/src` for source code files
- Use `/tests` for test files
- Use `/docs` for documentation and markdown files
- Use `/config` for configuration files
- Use `/scripts` for utility scripts
- Use `/examples` for example code
## Project Architecture
- Follow Domain-Driven Design with bounded contexts
- Keep files under 500 lines
- Use typed interfaces for all public APIs
- Prefer TDD London School (mock-first) for new code
- Use event sourcing for state changes
- Ensure input validation at system boundaries
### Project Config
- **Topology**: hierarchical-mesh
- **Max Agents**: 15
- **Memory**: hybrid
- **HNSW**: Enabled
- **Neural**: Enabled
## Build & Test
```bash
# Build
npm run build
# Test
npm test
# Lint
npm run lint
```
- ALWAYS run tests after making code changes
- ALWAYS verify build succeeds before committing
## Security Rules
- NEVER hardcode API keys, secrets, or credentials in source files
- NEVER commit .env files or any file containing secrets
- Always validate user input at system boundaries
- Always sanitize file paths to prevent directory traversal
- Run `npx @claude-flow/cli@latest security scan` after security-related changes
## Concurrency: 1 MESSAGE = ALL RELATED OPERATIONS
- All operations MUST be concurrent/parallel in a single message
- Use Claude Code's Task tool for spawning agents, not just MCP
- ALWAYS batch ALL todos in ONE TodoWrite call (5-10+ minimum)
- ALWAYS spawn ALL agents in ONE message with full instructions via Task tool
- ALWAYS batch ALL file reads/writes/edits in ONE message
- ALWAYS batch ALL Bash commands in ONE message
## Swarm Orchestration
- MUST initialize the swarm using CLI tools when starting complex tasks
- MUST spawn concurrent agents using Claude Code's Task tool
- Never use CLI tools alone for execution — Task tool agents do the actual work
- MUST call CLI tools AND Task tool in ONE message for complex work
### 3-Tier Model Routing (ADR-026)
| Tier | Handler | Latency | Cost | Use Cases |
|------|---------|---------|------|-----------|
| **1** | Agent Booster (WASM) | <1ms | $0 | Simple transforms (var→const, add types); skips the LLM |
| **2** | Haiku | ~500ms | $0.0002 | Simple tasks, low complexity (<30%) |
| **3** | Sonnet/Opus | 2-5s | $0.003-0.015 | Complex reasoning, architecture, security (>30%) |
- Always check for `[AGENT_BOOSTER_AVAILABLE]` or `[TASK_MODEL_RECOMMENDATION]` before spawning agents
- Use Edit tool directly when `[AGENT_BOOSTER_AVAILABLE]`
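The 3-tier routing above reduces to a threshold check. A minimal sketch of that decision, where the function name and the exact 30% cutoff handling are assumptions for illustration (not the ADR-026 implementation):

```go
package main

import "fmt"

// routeTier picks a handler from a task-complexity score in [0, 1] and
// whether the WASM booster can take the task, mirroring the ADR-026 table.
func routeTier(complexity float64, boosterAvailable bool) string {
	switch {
	case boosterAvailable:
		// Tier 1: deterministic transform, no LLM call at all.
		return "tier-1: agent-booster"
	case complexity < 0.30:
		// Tier 2: cheap model for low-complexity tasks.
		return "tier-2: haiku"
	default:
		// Tier 3: full reasoning model.
		return "tier-3: sonnet/opus"
	}
}

func main() {
	fmt.Println(routeTier(0.10, true))  // tier-1: agent-booster
	fmt.Println(routeTier(0.20, false)) // tier-2: haiku
	fmt.Println(routeTier(0.80, false)) // tier-3: sonnet/opus
}
```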
## Swarm Configuration & Anti-Drift
- ALWAYS use hierarchical topology for coding swarms
- Keep maxAgents at 6-8 for tight coordination
- Use specialized strategy for clear role boundaries
- Use `raft` consensus for hive-mind (leader maintains authoritative state)
- Run frequent checkpoints via `post-task` hooks
- Keep shared memory namespace for all agents
```bash
npx @claude-flow/cli@latest swarm init --topology hierarchical --max-agents 8 --strategy specialized
```
## Swarm Execution Rules
- ALWAYS use `run_in_background: true` for all agent Task calls
- ALWAYS put ALL agent Task calls in ONE message for parallel execution
- After spawning, STOP — do NOT add more tool calls or check status
- Never poll TaskOutput or check swarm status — trust agents to return
- When agent results arrive, review ALL results before proceeding
## V3 CLI Commands
### Core Commands
| Command | Subcommands | Description |
|---------|-------------|-------------|
| `init` | 4 | Project initialization |
| `agent` | 8 | Agent lifecycle management |
| `swarm` | 6 | Multi-agent swarm coordination |
| `memory` | 11 | AgentDB memory with HNSW search |
| `task` | 6 | Task creation and lifecycle |
| `session` | 7 | Session state management |
| `hooks` | 17 | Self-learning hooks + 12 workers |
| `hive-mind` | 6 | Byzantine fault-tolerant consensus |
### Quick CLI Examples
```bash
npx @claude-flow/cli@latest init --wizard
npx @claude-flow/cli@latest agent spawn -t coder --name my-coder
npx @claude-flow/cli@latest swarm init --v3-mode
npx @claude-flow/cli@latest memory search --query "authentication patterns"
npx @claude-flow/cli@latest doctor --fix
```
## Available Agents (60+ Types)
### Core Development
`coder`, `reviewer`, `tester`, `planner`, `researcher`
### Specialized
`security-architect`, `security-auditor`, `memory-specialist`, `performance-engineer`
### Swarm Coordination
`hierarchical-coordinator`, `mesh-coordinator`, `adaptive-coordinator`
### GitHub & Repository
`pr-manager`, `code-review-swarm`, `issue-tracker`, `release-manager`
### SPARC Methodology
`sparc-coord`, `sparc-coder`, `specification`, `pseudocode`, `architecture`
## Memory Commands Reference
```bash
# Store (REQUIRED: --key, --value; OPTIONAL: --namespace, --ttl, --tags)
npx @claude-flow/cli@latest memory store --key "pattern-auth" --value "JWT with refresh" --namespace patterns
# Search (REQUIRED: --query; OPTIONAL: --namespace, --limit, --threshold)
npx @claude-flow/cli@latest memory search --query "authentication patterns"
# List (OPTIONAL: --namespace, --limit)
npx @claude-flow/cli@latest memory list --namespace patterns --limit 10
# Retrieve (REQUIRED: --key; OPTIONAL: --namespace)
npx @claude-flow/cli@latest memory retrieve --key "pattern-auth" --namespace patterns
```
## Quick Setup
```bash
claude mcp add claude-flow -- npx -y @claude-flow/cli@latest
npx @claude-flow/cli@latest daemon start
npx @claude-flow/cli@latest doctor --fix
```
## Claude Code vs CLI Tools
- Claude Code's Task tool handles ALL execution: agents, file ops, code generation, git
- CLI tools handle coordination via Bash: swarm init, memory, hooks, routing
- NEVER use CLI tools as a substitute for Task tool agents
## Support
- Documentation: https://github.com/ruvnet/claude-flow
- Issues: https://github.com/ruvnet/claude-flow/issues

@@ -1,34 +0,0 @@
package adapter
import (
"testing"
"time"
)
func TestAdapter_Client(t *testing.T) {
servers := getServers()
if len(servers) == 0 {
t.Errorf("[Test] No servers found")
return
}
a := NewAdapter(tpl, WithServers(servers), WithUserInfo(User{
Password: "test-password",
ExpiredAt: time.Now().AddDate(1, 0, 0),
Download: 0,
Upload: 0,
Traffic: 1000,
SubscribeURL: "https://example.com/subscribe",
}))
client, err := a.Client()
if err != nil {
t.Errorf("[Test] Failed to get client: %v", err.Error())
return
}
bytes, err := client.Build()
if err != nil {
t.Errorf("[Test] Failed to build client config: %v", err.Error())
return
}
t.Logf("[Test] Client config built successfully: %s", string(bytes))
}

@@ -1,153 +0,0 @@
package adapter
import (
"testing"
"time"
)
var tpl = `
{{- range $n := .Proxies }}
{{- $dn := urlquery (default "node" $n.Name) -}}
{{- $sni := default $n.Host $n.SNI -}}
{{- if eq $n.Type "shadowsocks" -}}
{{- $userinfo := b64enc (print $n.Method ":" $.UserInfo.Password) -}}
{{- printf "ss://%s@%s:%v#%s" $userinfo $n.Host $n.Port $dn -}}
{{- "\n" -}}
{{- end -}}
{{- if eq $n.Type "trojan" -}}
{{- $qs := "security=tls" -}}
{{- if $sni }}{{ $qs = printf "%s&sni=%s" $qs (urlquery $sni) }}{{ end -}}
{{- if $n.AllowInsecure }}{{ $qs = printf "%s&allowInsecure=%v" $qs $n.AllowInsecure }}{{ end -}}
{{- if $n.Fingerprint }}{{ $qs = printf "%s&fp=%s" $qs (urlquery $n.Fingerprint) }}{{ end -}}
{{- printf "trojan://%s@%s:%v?%s#%s" $.UserInfo.Password $n.Host $n.Port $qs $dn -}}
{{- "\n" -}}
{{- end -}}
{{- if eq $n.Type "vless" -}}
{{- $qs := "encryption=none" -}}
{{- if $n.RealityPublicKey -}}
{{- $qs = printf "%s&security=reality" $qs -}}
{{- $qs = printf "%s&pbk=%s" $qs (urlquery $n.RealityPublicKey) -}}
{{- if $n.RealityShortId }}{{ $qs = printf "%s&sid=%s" $qs (urlquery $n.RealityShortId) }}{{ end -}}
{{- else -}}
{{- if or $n.SNI $n.Fingerprint $n.AllowInsecure }}
{{- $qs = printf "%s&security=tls" $qs -}}
{{- end -}}
{{- end -}}
{{- if $n.SNI }}{{ $qs = printf "%s&sni=%s" $qs (urlquery $n.SNI) }}{{ end -}}
{{- if $n.AllowInsecure }}{{ $qs = printf "%s&allowInsecure=%v" $qs $n.AllowInsecure }}{{ end -}}
{{- if $n.Fingerprint }}{{ $qs = printf "%s&fp=%s" $qs (urlquery $n.Fingerprint) }}{{ end -}}
{{- if $n.Network }}{{ $qs = printf "%s&type=%s" $qs $n.Network }}{{ end -}}
{{- if $n.Path }}{{ $qs = printf "%s&path=%s" $qs (urlquery $n.Path) }}{{ end -}}
{{- if $n.ServiceName }}{{ $qs = printf "%s&serviceName=%s" $qs (urlquery $n.ServiceName) }}{{ end -}}
{{- if $n.Flow }}{{ $qs = printf "%s&flow=%s" $qs (urlquery $n.Flow) }}{{ end -}}
{{- printf "vless://%s@%s:%v?%s#%s" $n.ServerKey $n.Host $n.Port $qs $dn -}}
{{- "\n" -}}
{{- end -}}
{{- if eq $n.Type "vmess" -}}
{{- $obj := dict
"v" "2"
"ps" $n.Name
"add" $n.Host
"port" $n.Port
"id" $n.ServerKey
"aid" 0
"net" (or $n.Network "tcp")
"type" "none"
"path" (or $n.Path "")
"host" $n.Host
-}}
{{- if or $n.SNI $n.Fingerprint $n.AllowInsecure }}{{ set $obj "tls" "tls" }}{{ end -}}
{{- if $n.SNI }}{{ set $obj "sni" $n.SNI }}{{ end -}}
{{- if $n.Fingerprint }}{{ set $obj "fp" $n.Fingerprint }}{{ end -}}
{{- printf "vmess://%s" (b64enc (toJson $obj)) -}}
{{- "\n" -}}
{{- end -}}
{{- if or (eq $n.Type "hysteria2") (eq $n.Type "hy2") -}}
{{- $qs := "" -}}
{{- if $n.SNI }}{{ $qs = printf "sni=%s" (urlquery $n.SNI) }}{{ end -}}
{{- if $n.AllowInsecure }}{{ $qs = printf "%s&insecure=%v" $qs $n.AllowInsecure }}{{ end -}}
{{- if $n.ObfsPassword }}{{ $qs = printf "%s&obfs-password=%s" $qs (urlquery $n.ObfsPassword) }}{{ end -}}
{{- printf "hy2://%s@%s:%v%s#%s"
$.UserInfo.Password
$n.Host
$n.Port
(ternary (gt (len $qs) 0) (print "?" $qs) "")
$dn -}}
{{- "\n" -}}
{{- end -}}
{{- if eq $n.Type "tuic" -}}
{{- $qs := "" -}}
{{- if $n.SNI }}{{ $qs = printf "sni=%s" (urlquery $n.SNI) }}{{ end -}}
{{- if $n.AllowInsecure }}{{ $qs = printf "%s&insecure=%v" $qs $n.AllowInsecure }}{{ end -}}
{{- printf "tuic://%s:%s@%s:%v%s#%s"
$n.ServerKey
$.UserInfo.Password
$n.Host
$n.Port
(ternary (gt (len $qs) 0) (print "?" $qs) "")
$dn -}}
{{- "\n" -}}
{{- end -}}
{{- if eq $n.Type "anytls" -}}
{{- $qs := "" -}}
{{- if $n.SNI }}{{ $qs = printf "sni=%s" (urlquery $n.SNI) }}{{ end -}}
{{- printf "anytls://%s@%s:%v%s#%s"
$.UserInfo.Password
$n.Host
$n.Port
(ternary (gt (len $qs) 0) (print "?" $qs) "")
$dn -}}
{{- "\n" -}}
{{- end -}}
{{- end }}
`
func TestClient_Build(t *testing.T) {
client := &Client{
SiteName: "TestSite",
SubscribeName: "TestSubscribe",
ClientTemplate: tpl,
Proxies: []Proxy{
{
Name: "TestShadowSocks",
Type: "shadowsocks",
Host: "127.0.0.1",
Port: 1234,
Method: "aes-256-gcm",
},
{
Name: "TestTrojan",
Type: "trojan",
Host: "example.com",
Port: 443,
AllowInsecure: true,
Security: "tls",
Transport: "tcp",
SNI: "v1-dy.ixigua.com",
},
},
UserInfo: User{
Password: "testpassword",
ExpiredAt: time.Now().Add(24 * time.Hour),
Download: 1000000,
Upload: 500000,
Traffic: 1500000,
SubscribeURL: "https://example.com/subscribe",
},
}
buf, err := client.Build()
if err != nil {
t.Fatalf("Failed to build client: %v", err)
}
t.Logf("[Test] Output: %s", buf)
}

@@ -1,46 +0,0 @@
package adapter
import (
"testing"
"github.com/perfect-panel/server/internal/model/server"
"gorm.io/driver/mysql"
"gorm.io/gorm"
)
func TestAdapterProxy(t *testing.T) {
servers := getServers()
if len(servers) == 0 {
t.Fatal("no servers found")
}
for _, srv := range servers {
proxy, err := adapterProxy(*srv, "example.com", 0)
if err != nil {
t.Errorf("failed to adapt server %s: %v", srv.Name, err)
}
t.Logf("[Test] Adapted server %s successfully: %+v", srv.Name, proxy)
}
}
// getServers loads all server rows from a local MySQL instance. It
// returns nil when the database is unreachable or the query fails,
// which the calling tests report as a failure.
func getServers() []*server.Server {
db, err := connectMySQL("root:mylove520@tcp(localhost:3306)/perfectlink?charset=utf8mb4&parseTime=True&loc=Local")
if err != nil {
return nil
}
var servers []*server.Server
if err = db.Model(&server.Server{}).Find(&servers).Error; err != nil {
return nil
}
return servers
}
func connectMySQL(dsn string) (*gorm.DB, error) {
db, err := gorm.Open(mysql.New(mysql.Config{
DSN: dsn,
}), &gorm.Config{})
if err != nil {
return nil, err
}
return db, nil
}

6 binary files not shown.

go.mod (6 changes)
@@ -27,7 +27,7 @@ require (
github.com/jinzhu/copier v0.4.0
github.com/klauspost/compress v1.17.7
github.com/nyaruka/phonenumbers v1.5.0
github.com/pkg/errors v0.9.1
github.com/pkg/errors v0.9.1
github.com/redis/go-redis/v9 v9.7.2
github.com/smartwalle/alipay/v3 v3.2.23
github.com/spf13/cast v1.7.0 // indirect
@@ -50,7 +50,7 @@ require (
gopkg.in/gomail.v2 v2.0.0-20160411212932-81ebce5c23df
gopkg.in/yaml.v3 v3.0.1
gorm.io/driver/mysql v1.5.7
gorm.io/gorm v1.25.12
gorm.io/gorm v1.30.0
gorm.io/plugin/soft_delete v1.2.1
k8s.io/apimachinery v0.31.1
)
@@ -113,6 +113,7 @@ require (
github.com/leodido/go-urn v1.4.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-sqlite3 v1.14.22 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
@@ -146,4 +147,5 @@ require (
google.golang.org/genproto/googleapis/rpc v0.0.0-20240513163218-0867130af1f8 // indirect
gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gorm.io/driver/sqlite v1.6.0 // indirect
)

go.sum (4 changes)
@@ -550,11 +550,15 @@ gorm.io/driver/mysql v1.5.7/go.mod h1:sEtPWMiqiN1N1cMXoXmBbd8C6/l+TESwriotuRRpkD
gorm.io/driver/sqlite v1.1.3/go.mod h1:AKDgRWk8lcSQSw+9kxCJnX/yySj8G3rdwYlU57cB45c=
gorm.io/driver/sqlite v1.4.4 h1:gIufGoR0dQzjkyqDyYSCvsYR6fba1Gw5YKDqKeChxFc=
gorm.io/driver/sqlite v1.4.4/go.mod h1:0Aq3iPO+v9ZKbcdiz8gLWRw5VOPcBOPUQJFLq5e2ecI=
gorm.io/driver/sqlite v1.6.0 h1:WHRRrIiulaPiPFmDcod6prc4l2VGVWHz80KspNsxSfQ=
gorm.io/driver/sqlite v1.6.0/go.mod h1:AO9V1qIQddBESngQUKWL9yoH93HIeA1X6V633rBwyT8=
gorm.io/gorm v1.20.1/go.mod h1:0HFTzE/SqkGTzK6TlDPPQbAYCluiVvhzoA1+aVyzenw=
gorm.io/gorm v1.23.0/go.mod h1:l2lP/RyAtc1ynaTjFksBde/O8v9oOGIApu2/xRitmZk=
gorm.io/gorm v1.25.7/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8=
gorm.io/gorm v1.25.12 h1:I0u8i2hWQItBq1WfE0o2+WuL9+8L21K9e2HHSTE/0f8=
gorm.io/gorm v1.25.12/go.mod h1:xh7N7RHfYlNc5EmcI/El95gXusucDrQnHXe0+CgWcLQ=
gorm.io/gorm v1.30.0 h1:qbT5aPv1UH8gI99OsRlvDToLxW5zR7FzS9acZDOZcgs=
gorm.io/gorm v1.30.0/go.mod h1:8Z33v652h4//uMA76KjeDH8mJXPm1QNCYrMeatR0DOE=
gorm.io/plugin/soft_delete v1.2.1 h1:qx9D/c4Xu6w5KT8LviX8DgLcB9hkKl6JC9f44Tj7cGU=
gorm.io/plugin/soft_delete v1.2.1/go.mod h1:Zv7vQctOJTGOsJ/bWgrN1n3od0GBAZgnLjEx+cApLGk=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=

View File

@@ -1 +0,0 @@
package migrate

View File

@@ -1,49 +0,0 @@
package migrate
import (
"testing"
"github.com/perfect-panel/server/internal/model/node"
"github.com/perfect-panel/server/pkg/orm"
"gorm.io/driver/mysql"
"gorm.io/gorm"
)
func getDSN() string {
cfg := orm.Config{
Addr: "127.0.0.1",
Username: "root",
Password: "mylove520",
Dbname: "vpnboard",
}
mc := orm.Mysql{
Config: cfg,
}
return mc.Dsn()
}
func TestMigrate(t *testing.T) {
t.Skipf("skip test")
m := Migrate(getDSN())
err := m.Migrate(2004)
if err != nil {
t.Errorf("failed to migrate: %v", err)
} else {
t.Log("migrate success")
}
}
func TestMysql(t *testing.T) {
db, err := gorm.Open(mysql.New(mysql.Config{
DSN: "root:mylove520@tcp(localhost:3306)/vpnboard",
}))
if err != nil {
t.Fatalf("Failed to connect to MySQL: %v", err)
}
err = db.Migrator().AutoMigrate(&node.Node{})
if err != nil {
t.Fatalf("Failed to auto migrate: %v", err)
return
}
t.Log("MySQL connection and migration successful")
}

View File

@@ -1,94 +0,0 @@
package initialize
import (
"testing"
"github.com/perfect-panel/server/internal/config"
"github.com/perfect-panel/server/internal/model/system"
"github.com/perfect-panel/server/pkg/tool"
"github.com/stretchr/testify/assert"
)
func TestApplyVerifyCodeDefaults(t *testing.T) {
testCases := []struct {
name string
in config.VerifyCode
want config.VerifyCode
}{
{
name: "apply defaults when all zero",
in: config.VerifyCode{},
want: config.VerifyCode{
VerifyCodeExpireTime: 900,
VerifyCodeLimit: 15,
VerifyCodeInterval: 60,
},
},
{
name: "keep provided values",
in: config.VerifyCode{
VerifyCodeExpireTime: 901,
VerifyCodeLimit: 16,
VerifyCodeInterval: 61,
},
want: config.VerifyCode{
VerifyCodeExpireTime: 901,
VerifyCodeLimit: 16,
VerifyCodeInterval: 61,
},
},
{
name: "fix invalid non-positive values",
in: config.VerifyCode{
VerifyCodeExpireTime: -1,
VerifyCodeLimit: 0,
VerifyCodeInterval: -10,
},
want: config.VerifyCode{
VerifyCodeExpireTime: 900,
VerifyCodeLimit: 15,
VerifyCodeInterval: 60,
},
},
}
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
got := testCase.in
applyVerifyCodeDefaults(&got)
assert.Equal(t, testCase.want, got)
})
}
}
func TestVerifyCodeReflectUsesCanonicalKeys(t *testing.T) {
configs := []*system.System{
{Category: "verify_code", Key: "VerifyCodeExpireTime", Value: "901"},
{Category: "verify_code", Key: "VerifyCodeLimit", Value: "16"},
{Category: "verify_code", Key: "VerifyCodeInterval", Value: "61"},
}
var got config.VerifyCode
tool.SystemConfigSliceReflectToStruct(configs, &got)
applyVerifyCodeDefaults(&got)
assert.Equal(t, int64(901), got.VerifyCodeExpireTime)
assert.Equal(t, int64(16), got.VerifyCodeLimit)
assert.Equal(t, int64(61), got.VerifyCodeInterval)
}
func TestVerifyCodeReflectIgnoresLegacyKeys(t *testing.T) {
configs := []*system.System{
{Category: "verify_code", Key: "ExpireTime", Value: "901"},
{Category: "verify_code", Key: "Limit", Value: "16"},
{Category: "verify_code", Key: "Interval", Value: "61"},
}
var got config.VerifyCode
tool.SystemConfigSliceReflectToStruct(configs, &got)
applyVerifyCodeDefaults(&got)
assert.Equal(t, int64(900), got.VerifyCodeExpireTime)
assert.Equal(t, int64(15), got.VerifyCodeLimit)
assert.Equal(t, int64(60), got.VerifyCodeInterval)
}

View File

@@ -1,170 +0,0 @@
package auth
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/alicebob/miniredis/v2"
"github.com/gin-gonic/gin"
"github.com/perfect-panel/server/internal/config"
"github.com/perfect-panel/server/internal/middleware"
"github.com/perfect-panel/server/internal/svc"
"github.com/perfect-panel/server/pkg/constant"
"github.com/redis/go-redis/v9"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type legacyCheckCodeResponse struct {
Code uint32 `json:"code"`
Data struct {
Status bool `json:"status"`
Exist bool `json:"exist"`
} `json:"data"`
}
func newLegacyCheckCodeTestRouter(svcCtx *svc.ServiceContext) *gin.Engine {
gin.SetMode(gin.TestMode)
router := gin.New()
router.Use(middleware.ApiVersionMiddleware(svcCtx))
router.POST("/v1/auth/check-code", middleware.ApiVersionSwitchHandler(
CheckCodeLegacyV1Handler(svcCtx),
CheckCodeLegacyV2Handler(svcCtx),
))
return router
}
func newLegacyCheckCodeTestSvcCtx(t *testing.T) (*svc.ServiceContext, *redis.Client) {
t.Helper()
miniRedis := miniredis.RunT(t)
redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()})
t.Cleanup(func() {
redisClient.Close()
miniRedis.Close()
})
svcCtx := &svc.ServiceContext{
Redis: redisClient,
Config: config.Config{
VerifyCode: config.VerifyCode{
VerifyCodeExpireTime: 900,
},
},
}
return svcCtx, redisClient
}
func seedLegacyVerifyCode(t *testing.T, redisClient *redis.Client, scene string, email string, code string) string {
t.Helper()
cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, scene, email)
payload := map[string]interface{}{
"code": code,
"lastAt": time.Now().Unix(),
}
payloadRaw, err := json.Marshal(payload)
require.NoError(t, err)
err = redisClient.Set(context.Background(), cacheKey, payloadRaw, time.Minute*15).Err()
require.NoError(t, err)
return cacheKey
}
func callLegacyCheckCode(t *testing.T, router *gin.Engine, apiHeader string, body string) legacyCheckCodeResponse {
t.Helper()
reqBody := bytes.NewBufferString(body)
req := httptest.NewRequest(http.MethodPost, "/v1/auth/check-code", reqBody)
req.Header.Set("Content-Type", "application/json")
if apiHeader != "" {
req.Header.Set("api-header", apiHeader)
}
recorder := httptest.NewRecorder()
router.ServeHTTP(recorder, req)
require.Equal(t, http.StatusOK, recorder.Code)
var resp legacyCheckCodeResponse
err := json.Unmarshal(recorder.Body.Bytes(), &resp)
require.NoError(t, err)
return resp
}
func TestCheckCodeLegacyHandler_NoHeaderNotConsumed(t *testing.T) {
svcCtx, redisClient := newLegacyCheckCodeTestSvcCtx(t)
router := newLegacyCheckCodeTestRouter(svcCtx)
email := "legacy@example.com"
code := "123456"
cacheKey := seedLegacyVerifyCode(t, redisClient, constant.Security.String(), email, code)
resp := callLegacyCheckCode(t, router, "", `{"email":"legacy@example.com","code":"123456","type":3}`)
assert.Equal(t, uint32(200), resp.Code)
assert.True(t, resp.Data.Status)
assert.True(t, resp.Data.Exist)
exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
require.NoError(t, err)
assert.Equal(t, int64(1), exists)
}
func TestCheckCodeLegacyHandler_GreaterVersionConsumed(t *testing.T) {
svcCtx, redisClient := newLegacyCheckCodeTestSvcCtx(t)
router := newLegacyCheckCodeTestRouter(svcCtx)
email := "latest@example.com"
code := "999888"
cacheKey := seedLegacyVerifyCode(t, redisClient, constant.Security.String(), email, code)
resp := callLegacyCheckCode(t, router, "1.0.1", `{"email":"latest@example.com","code":"999888","type":3}`)
assert.Equal(t, uint32(200), resp.Code)
assert.True(t, resp.Data.Status)
exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
require.NoError(t, err)
assert.Equal(t, int64(0), exists)
resp = callLegacyCheckCode(t, router, "1.0.1", `{"email":"latest@example.com","code":"999888","type":3}`)
assert.Equal(t, uint32(200), resp.Code)
assert.False(t, resp.Data.Status)
assert.False(t, resp.Data.Exist)
}
func TestCheckCodeLegacyHandler_EqualThresholdNotConsumed(t *testing.T) {
svcCtx, redisClient := newLegacyCheckCodeTestSvcCtx(t)
router := newLegacyCheckCodeTestRouter(svcCtx)
email := "equal@example.com"
code := "112233"
cacheKey := seedLegacyVerifyCode(t, redisClient, constant.Security.String(), email, code)
resp := callLegacyCheckCode(t, router, "1.0.0", `{"email":"equal@example.com","code":"112233","type":3}`)
assert.Equal(t, uint32(200), resp.Code)
assert.True(t, resp.Data.Status)
exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
require.NoError(t, err)
assert.Equal(t, int64(1), exists)
}
func TestCheckCodeLegacyHandler_InvalidVersionNotConsumed(t *testing.T) {
svcCtx, redisClient := newLegacyCheckCodeTestSvcCtx(t)
router := newLegacyCheckCodeTestRouter(svcCtx)
email := "invalid@example.com"
code := "445566"
cacheKey := seedLegacyVerifyCode(t, redisClient, constant.Security.String(), email, code)
resp := callLegacyCheckCode(t, router, "abc", `{"email":"invalid@example.com","code":"445566","type":3}`)
assert.Equal(t, uint32(200), resp.Code)
assert.True(t, resp.Data.Status)
exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
require.NoError(t, err)
assert.Equal(t, int64(1), exists)
}

View File

@@ -1,146 +0,0 @@
package common
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/alicebob/miniredis/v2"
"github.com/gin-gonic/gin"
"github.com/perfect-panel/server/internal/config"
"github.com/perfect-panel/server/internal/middleware"
"github.com/perfect-panel/server/internal/svc"
"github.com/perfect-panel/server/pkg/authmethod"
"github.com/perfect-panel/server/pkg/constant"
"github.com/redis/go-redis/v9"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type canonicalCheckCodeResponse struct {
Code uint32 `json:"code"`
Data struct {
Status bool `json:"status"`
Exist bool `json:"exist"`
} `json:"data"`
}
func newCanonicalCheckCodeTestSvcCtx(t *testing.T) (*svc.ServiceContext, *redis.Client) {
t.Helper()
miniRedis := miniredis.RunT(t)
redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()})
t.Cleanup(func() {
redisClient.Close()
miniRedis.Close()
})
svcCtx := &svc.ServiceContext{
Redis: redisClient,
Config: config.Config{
VerifyCode: config.VerifyCode{
VerifyCodeExpireTime: 900,
},
},
}
return svcCtx, redisClient
}
func newCanonicalCheckCodeTestRouter(svcCtx *svc.ServiceContext) *gin.Engine {
gin.SetMode(gin.TestMode)
router := gin.New()
router.Use(middleware.ApiVersionMiddleware(svcCtx))
router.POST("/v1/common/check_verification_code", middleware.ApiVersionSwitchHandler(
CheckVerificationCodeV1Handler(svcCtx),
CheckVerificationCodeV2Handler(svcCtx),
))
return router
}
func seedCanonicalVerifyCode(t *testing.T, redisClient *redis.Client, scene string, account string, code string) string {
t.Helper()
cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, scene, account)
payload := map[string]interface{}{
"code": code,
"lastAt": time.Now().Unix(),
}
payloadRaw, err := json.Marshal(payload)
require.NoError(t, err)
err = redisClient.Set(context.Background(), cacheKey, payloadRaw, time.Minute*15).Err()
require.NoError(t, err)
return cacheKey
}
func callCanonicalCheckCode(t *testing.T, router *gin.Engine, apiHeader string, body string) canonicalCheckCodeResponse {
t.Helper()
reqBody := bytes.NewBufferString(body)
req := httptest.NewRequest(http.MethodPost, "/v1/common/check_verification_code", reqBody)
req.Header.Set("Content-Type", "application/json")
if apiHeader != "" {
req.Header.Set("api-header", apiHeader)
}
recorder := httptest.NewRecorder()
router.ServeHTTP(recorder, req)
require.Equal(t, http.StatusOK, recorder.Code)
var resp canonicalCheckCodeResponse
err := json.Unmarshal(recorder.Body.Bytes(), &resp)
require.NoError(t, err)
return resp
}
func TestCheckVerificationCodeHandler_ApiHeaderGate(t *testing.T) {
tests := []struct {
name string
apiHeader string
expectConsume bool
}{
{name: "no header", apiHeader: "", expectConsume: false},
{name: "invalid header", apiHeader: "invalid", expectConsume: false},
{name: "equal threshold", apiHeader: "1.0.0", expectConsume: false},
{name: "greater threshold", apiHeader: "1.0.1", expectConsume: true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
svcCtx, redisClient := newCanonicalCheckCodeTestSvcCtx(t)
router := newCanonicalCheckCodeTestRouter(svcCtx)
account := "header-gate@example.com"
code := "123123"
cacheKey := seedCanonicalVerifyCode(t, redisClient, constant.Register.String(), account, code)
body := fmt.Sprintf(`{"method":"%s","account":"%s","code":"%s","type":%d}`,
authmethod.Email,
account,
code,
constant.Register,
)
resp := callCanonicalCheckCode(t, router, tt.apiHeader, body)
assert.Equal(t, uint32(200), resp.Code)
assert.True(t, resp.Data.Status)
exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
require.NoError(t, err)
if tt.expectConsume {
assert.Equal(t, int64(0), exists)
} else {
assert.Equal(t, int64(1), exists)
}
resp = callCanonicalCheckCode(t, router, tt.apiHeader, body)
if tt.expectConsume {
assert.False(t, resp.Data.Status)
} else {
assert.True(t, resp.Data.Status)
}
})
}
}

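The deleted tests above exercised an api-header gate: a verification code is consumed only when the client's version header is strictly greater than a threshold, while empty, invalid, and equal-to-threshold headers leave the code intact. A minimal sketch of that comparison, assuming a three-part numeric version; the helper names here are hypothetical, the project presumably has its own in `pkg/apiversion`:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits "x.y.z" into three ints; any other shape is invalid.
func parse(v string) ([3]int, bool) {
	var out [3]int
	parts := strings.Split(v, ".")
	if len(parts) != 3 {
		return out, false
	}
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return out, false
		}
		out[i] = n
	}
	return out, true
}

// greaterThan reports whether version a is strictly greater than b.
// Invalid versions compare as "not greater", matching the gate's
// behaviour for "" and "abc" headers in the deleted tests.
func greaterThan(a, b string) bool {
	pa, okA := parse(a)
	pb, okB := parse(b)
	if !okA || !okB {
		return false
	}
	for i := 0; i < 3; i++ {
		if pa[i] != pb[i] {
			return pa[i] > pb[i]
		}
	}
	return false
}

func main() {
	// mirrors the four table-test cases: no header, invalid, equal, greater
	fmt.Println(greaterThan("", "1.0.0"), greaterThan("abc", "1.0.0"),
		greaterThan("1.0.0", "1.0.0"), greaterThan("1.0.1", "1.0.0"))
}
```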
View File

@@ -1,192 +0,0 @@
package user
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"net"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/alicebob/miniredis/v2"
"github.com/gin-gonic/gin"
"github.com/perfect-panel/server/internal/config"
"github.com/perfect-panel/server/internal/svc"
"github.com/perfect-panel/server/pkg/constant"
"github.com/redis/go-redis/v9"
)
type handlerResponse struct {
Code uint32 `json:"code"`
Msg string `json:"msg"`
Data json.RawMessage `json:"data"`
}
func newDeleteAccountTestRouter(serverCtx *svc.ServiceContext) *gin.Engine {
gin.SetMode(gin.TestMode)
router := gin.New()
router.POST("/v1/public/user/delete_account", DeleteAccountHandler(serverCtx))
return router
}
func TestDeleteAccountHandlerInvalidParamsUsesUnifiedResponse(t *testing.T) {
router := newDeleteAccountTestRouter(&svc.ServiceContext{})
reqBody := bytes.NewBufferString(`{"email":"invalid-email"}`)
req := httptest.NewRequest(http.MethodPost, "/v1/public/user/delete_account", reqBody)
req.Header.Set("Content-Type", "application/json")
recorder := httptest.NewRecorder()
router.ServeHTTP(recorder, req)
if recorder.Code != http.StatusOK {
t.Fatalf("expected HTTP 200, got %d", recorder.Code)
}
var resp handlerResponse
if err := json.Unmarshal(recorder.Body.Bytes(), &resp); err != nil {
t.Fatalf("failed to decode response: %v", err)
}
if resp.Code != 400 {
t.Fatalf("expected business code 400, got %d, body=%s", resp.Code, recorder.Body.String())
}
var raw map[string]interface{}
if err := json.Unmarshal(recorder.Body.Bytes(), &raw); err != nil {
t.Fatalf("failed to decode raw response: %v", err)
}
if _, exists := raw["error"]; exists {
t.Fatalf("unexpected raw error field in response: %s", recorder.Body.String())
}
}
func TestDeleteAccountHandlerVerifyCodeErrorUsesUnifiedResponse(t *testing.T) {
redisClient := redis.NewClient(&redis.Options{
Addr: "invalid:6379",
Dialer: func(_ context.Context, _, _ string) (net.Conn, error) {
return nil, errors.New("dial disabled in test")
},
})
defer redisClient.Close()
serverCtx := &svc.ServiceContext{
Redis: redisClient,
Config: config.Config{
VerifyCode: config.VerifyCode{
VerifyCodeExpireTime: 900,
},
},
}
router := newDeleteAccountTestRouter(serverCtx)
reqBody := bytes.NewBufferString(`{"email":"user@example.com","code":"123456"}`)
req := httptest.NewRequest(http.MethodPost, "/v1/public/user/delete_account", reqBody)
req.Header.Set("Content-Type", "application/json")
recorder := httptest.NewRecorder()
router.ServeHTTP(recorder, req)
if recorder.Code != http.StatusOK {
t.Fatalf("expected HTTP 200, got %d", recorder.Code)
}
var resp handlerResponse
if err := json.Unmarshal(recorder.Body.Bytes(), &resp); err != nil {
t.Fatalf("failed to decode response: %v", err)
}
if resp.Code != 70001 {
t.Fatalf("expected business code 70001, got %d, body=%s", resp.Code, recorder.Body.String())
}
}
func TestVerifyEmailCode_DeleteAccountSceneConsume(t *testing.T) {
miniRedis := miniredis.RunT(t)
redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()})
t.Cleanup(func() {
redisClient.Close()
miniRedis.Close()
})
serverCtx := &svc.ServiceContext{
Redis: redisClient,
Config: config.Config{
VerifyCode: config.VerifyCode{VerifyCodeExpireTime: 900},
},
}
email := "delete-account@example.com"
code := "112233"
cacheKey := seedDeleteSceneCode(t, redisClient, constant.DeleteAccount.String(), email, code)
err := verifyEmailCode(context.Background(), serverCtx, email, code)
if err != nil {
t.Fatalf("verifyEmailCode returned unexpected error: %v", err)
}
exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
if err != nil {
t.Fatalf("failed to check redis key: %v", err)
}
if exists != 0 {
t.Fatalf("expected verification code to be consumed, key still exists")
}
}
func TestVerifyEmailCode_SecurityFallbackConsume(t *testing.T) {
miniRedis := miniredis.RunT(t)
redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()})
t.Cleanup(func() {
redisClient.Close()
miniRedis.Close()
})
serverCtx := &svc.ServiceContext{
Redis: redisClient,
Config: config.Config{
VerifyCode: config.VerifyCode{VerifyCodeExpireTime: 900},
},
}
email := "security-fallback@example.com"
code := "445566"
cacheKey := seedDeleteSceneCode(t, redisClient, constant.Security.String(), email, code)
err := verifyEmailCode(context.Background(), serverCtx, email, code)
if err != nil {
t.Fatalf("verifyEmailCode fallback returned unexpected error: %v", err)
}
exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
if err != nil {
t.Fatalf("failed to check redis key: %v", err)
}
if exists != 0 {
t.Fatalf("expected fallback verification code to be consumed, key still exists")
}
}
func seedDeleteSceneCode(t *testing.T, redisClient *redis.Client, scene string, email string, code string) string {
t.Helper()
cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, scene, email)
payload := map[string]interface{}{
"code": code,
"lastAt": time.Now().Unix(),
}
payloadRaw, err := json.Marshal(payload)
if err != nil {
t.Fatalf("failed to marshal payload: %v", err)
}
err = redisClient.Set(context.Background(), cacheKey, payloadRaw, time.Minute*15).Err()
if err != nil {
t.Fatalf("failed to seed redis payload: %v", err)
}
return cacheKey
}

View File

@@ -1,21 +0,0 @@
package authMethod
import (
"encoding/json"
"testing"
"github.com/perfect-panel/server/pkg/sms"
)
func TestValidate(t *testing.T) {
config := " {\"0\":\"{\",\"1\":\"\\\"\",\"10\":\"y\",\"11\":\"I\",\"12\":\"d\",\"13\":\"\\\"\",\"14\":\":\",\"15\":\"\\\"\",\"16\":\"\\\"\",\"17\":\",\",\"18\":\"\\\"\",\"19\":\"A\",\"2\":\"A\",\"20\":\"c\",\"21\":\"c\",\"22\":\"e\",\"23\":\"s\",\"24\":\"s\",\"25\":\"K\",\"26\":\"e\",\"27\":\"y\",\"28\":\"S\",\"29\":\"e\",\"3\":\"c\",\"30\":\"c\",\"31\":\"r\",\"32\":\"e\",\"33\":\"t\",\"34\":\"\\\"\",\"35\":\":\",\"36\":\"\\\"\",\"37\":\"\\\"\",\"38\":\",\",\"39\":\"\\\"\",\"4\":\"c\",\"40\":\"S\",\"41\":\"i\",\"42\":\"g\",\"43\":\"n\",\"44\":\"N\",\"45\":\"a\",\"46\":\"m\",\"47\":\"e\",\"48\":\"\\\"\",\"49\":\":\",\"5\":\"e\",\"50\":\"\\\"\",\"51\":\"\\\"\",\"52\":\",\",\"53\":\"\\\"\",\"54\":\"E\",\"55\":\"n\",\"56\":\"d\",\"57\":\"p\",\"58\":\"o\",\"59\":\"i\",\"6\":\"s\",\"60\":\"n\",\"61\":\"t\",\"62\":\"\\\"\",\"63\":\":\",\"64\":\"\\\"\",\"65\":\"\\\"\",\"66\":\",\",\"67\":\"\\\"\",\"68\":\"V\",\"69\":\"e\",\"7\":\"s\",\"70\":\"r\",\"71\":\"i\",\"72\":\"f\",\"73\":\"y\",\"74\":\"T\",\"75\":\"e\",\"76\":\"m\",\"77\":\"p\",\"78\":\"l\",\"79\":\"a\",\"8\":\"K\",\"80\":\"t\",\"81\":\"e\",\"82\":\"C\",\"83\":\"o\",\"84\":\"d\",\"85\":\"e\",\"86\":\"\\\"\",\"87\":\":\",\"88\":\"\\\"\",\"89\":\"\\\"\",\"9\":\"e\",\"90\":\"}\",\"access\":\"xxxx\",\"secret\":\"SSxxxxxxxxxxxxxxxxxxxxxxxU\",\"template\":\"Your verification code is: {{.code}}\"}"
var mapConfig map[string]interface{}
if err := json.Unmarshal([]byte(config), &mapConfig); err != nil {
t.Error(err)
}
platformConfig, err := validatePlatformConfig(sms.Abosend.String(), mapConfig)
if err != nil {
t.Errorf("validateEmailPlatformConfig error: %v", err)
}
t.Logf("platformConfig: %+v", platformConfig)
}

View File

@@ -29,9 +29,12 @@ func NewDeleteNodeLogic(ctx context.Context, svcCtx *svc.ServiceContext) *Delete
func (l *DeleteNodeLogic) DeleteNode(req *types.DeleteNodeRequest) error {
data, err := l.svcCtx.NodeModel.FindOneNode(l.ctx, req.Id)
err = l.svcCtx.NodeModel.DeleteNode(l.ctx, req.Id)
if err != nil {
l.Errorw("[DeleteNode] Find Node Error: ", logger.Field("error", err.Error()))
return errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "[DeleteNode] Find Node Error")
}
if err = l.svcCtx.NodeModel.DeleteNode(l.ctx, req.Id); err != nil {
l.Errorw("[DeleteNode] Delete Database Error: ", logger.Field("error", err.Error()))
return errors.Wrapf(xerr.NewErrCode(xerr.DatabaseDeletedError), "[DeleteNode] Delete Database Error")
}

View File

@@ -32,6 +32,9 @@ func (l *DeleteServerLogic) DeleteServer(req *types.DeleteServerRequest) error {
l.Errorw("[DeleteServer] Delete Server Error: ", logger.Field("error", err.Error()))
return errors.Wrapf(xerr.NewErrCode(xerr.DatabaseDeletedError), "[DeleteServer] Delete Server Error")
}
if err = l.svcCtx.NodeModel.ClearServerAllCache(l.ctx); err != nil {
l.Errorw("[DeleteServer] Clear server cache failed", logger.Field("error", err.Error()))
}
return l.svcCtx.NodeModel.ClearNodeCache(l.ctx, &node.FilterNodeParams{
Page: 1,
Size: 1000,

View File

@@ -48,6 +48,7 @@ func (l *CreateSubscribeLogic) CreateSubscribe(req *types.CreateSubscribeRequest
SpeedLimit: req.SpeedLimit,
DeviceLimit: req.DeviceLimit,
Quota: req.Quota,
NewUserOnly: req.NewUserOnly,
Nodes: tool.Int64SliceToString(req.Nodes),
NodeTags: tool.StringSliceToString(req.NodeTags),
Show: req.Show,

View File

@@ -33,12 +33,27 @@ func NewResetAllSubscribeTokenLogic(ctx context.Context, svcCtx *svc.ServiceCont
func (l *ResetAllSubscribeTokenLogic) ResetAllSubscribeToken() (resp *types.ResetAllSubscribeTokenResponse, err error) {
var list []*user.Subscribe
tx := l.svcCtx.DB.WithContext(l.ctx).Begin()
if tx.Error != nil {
return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "Failed to begin transaction: %v", tx.Error)
}
// select all active and Finished subscriptions
if err = tx.Model(&user.Subscribe{}).Where("`status` IN ?", []int64{1, 2}).Find(&list).Error; err != nil {
tx.Rollback()
logger.Errorf("[ResetAllSubscribeToken] Failed to fetch subscribe list: %v", err.Error())
return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "Failed to fetch subscribe list: %v", err.Error())
}
// Save old tokens before overwriting for proper cache clearing
type oldTokenInfo struct {
Token string
UserId int64
Id int64
}
oldTokens := make([]oldTokenInfo, len(list))
for i, sub := range list {
oldTokens[i] = oldTokenInfo{Token: sub.Token, UserId: sub.UserId, Id: sub.Id}
}
for _, sub := range list {
sub.Token = uuidx.SubscribeToken(strconv.FormatInt(time.Now().UnixMilli(), 10) + strconv.FormatInt(sub.Id, 10))
sub.UUID = uuidx.NewUUID().String()
@@ -55,6 +70,25 @@ func (l *ResetAllSubscribeTokenLogic) ResetAllSubscribeToken() (resp *types.Rese
return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseUpdateError), "Failed to commit transaction: %v", err.Error())
}
// Clear cache for both old and new tokens
for i, sub := range list {
// Clear new token cache
if clearErr := l.svcCtx.UserModel.ClearSubscribeCache(l.ctx, sub); clearErr != nil {
logger.Errorf("[ResetAllSubscribeToken] Failed to clear new cache for subscribe ID %d: %v", sub.Id, clearErr.Error())
}
// Clear old token cache
if oldTokens[i].Token != "" && oldTokens[i].Token != sub.Token {
oldSub := &user.Subscribe{
Id: oldTokens[i].Id,
UserId: oldTokens[i].UserId,
Token: oldTokens[i].Token,
}
if clearErr := l.svcCtx.UserModel.ClearSubscribeCache(l.ctx, oldSub); clearErr != nil {
logger.Errorf("[ResetAllSubscribeToken] Failed to clear old cache for subscribe ID %d: %v", sub.Id, clearErr.Error())
}
}
}
return &types.ResetAllSubscribeTokenResponse{
Success: true,
}, nil

View File

@@ -56,6 +56,7 @@ func (l *UpdateSubscribeLogic) UpdateSubscribe(req *types.UpdateSubscribeRequest
SpeedLimit: req.SpeedLimit,
DeviceLimit: req.DeviceLimit,
Quota: req.Quota,
NewUserOnly: req.NewUserOnly,
Nodes: tool.Int64SliceToString(req.Nodes),
NodeTags: tool.StringSliceToString(req.NodeTags),
Show: req.Show,

View File

@@ -64,11 +64,18 @@ func (l *UpdateUserSubscribeLogic) UpdateUserSubscribe(req *types.UpdateUserSubs
l.Errorw("ClearSubscribeCache failed:", logger.Field("error", err.Error()), logger.Field("userSubscribeId", userSub.Id))
return errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "ClearSubscribeCache failed: %v", err.Error())
}
// Clear subscribe cache
// Clear old subscribe plan cache
if err = l.svcCtx.SubscribeModel.ClearCache(l.ctx, userSub.SubscribeId); err != nil {
l.Errorw("failed to clear subscribe cache", logger.Field("error", err.Error()), logger.Field("subscribeId", userSub.SubscribeId))
l.Errorw("failed to clear old subscribe cache", logger.Field("error", err.Error()), logger.Field("subscribeId", userSub.SubscribeId))
return errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "failed to clear subscribe cache: %v", err.Error())
}
// Clear new subscribe plan cache if plan changed
if req.SubscribeId != userSub.SubscribeId {
if err = l.svcCtx.SubscribeModel.ClearCache(l.ctx, req.SubscribeId); err != nil {
l.Errorw("failed to clear new subscribe cache", logger.Field("error", err.Error()), logger.Field("subscribeId", req.SubscribeId))
return errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "failed to clear new subscribe cache: %v", err.Error())
}
}
if err = l.svcCtx.NodeModel.ClearServerAllCache(l.ctx); err != nil {
l.Errorf("ClearServerAllCache error: %v", err.Error())

View File

@@ -171,6 +171,17 @@ func (l *BindDeviceLogic) createDeviceForUser(identifier, ip, userAgent string,
logger.Field("user_id", userId),
)
// Clear user cache to reflect new device
userInfo, findErr := l.svcCtx.UserModel.FindOne(l.ctx, userId)
if findErr == nil {
if clearErr := l.svcCtx.UserModel.ClearUserCache(l.ctx, userInfo); clearErr != nil {
l.Errorw("failed to clear user cache after device creation",
logger.Field("user_id", userId),
logger.Field("error", clearErr.Error()),
)
}
}
return nil
}
@@ -208,7 +219,7 @@ func (l *BindDeviceLogic) rebindDeviceToNewUser(deviceInfo *user.Device, ip, use
}
var users []*user.User
err := l.svcCtx.DB.Where("id in (?)", []int64{oldUserId, newUserId}).Find(&users).Error
err := l.svcCtx.DB.Where("id in (?)", []int64{oldUserId, newUserId}).Preload("AuthMethods").Find(&users).Error
if err != nil {
l.Errorw("failed to query users for rebinding",
logger.Field("old_user_id", oldUserId),

View File

@@ -47,31 +47,29 @@ func (l *EmailLoginLogic) EmailLogin(req *types.EmailLoginRequest) (resp *types.
req.Code = strings.TrimSpace(req.Code)
// Verify Code
if req.Code != "202511" {
scenes := []string{constant.Security.String(), constant.Register.String(), "unknown"}
var verified bool
var cacheKeyUsed string
var payload common.CacheKeyPayload
for _, scene := range scenes {
cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, scene, req.Email)
value, err := l.svcCtx.Redis.Get(l.ctx, cacheKey).Result()
if err != nil || value == "" {
continue
}
if err := json.Unmarshal([]byte(value), &payload); err != nil {
continue
}
if payload.Code == req.Code && time.Now().Unix()-payload.LastAt <= l.svcCtx.Config.VerifyCode.VerifyCodeExpireTime {
verified = true
cacheKeyUsed = cacheKey
break
}
scenes := []string{constant.Security.String(), constant.Register.String(), "unknown"}
var verified bool
var cacheKeyUsed string
var payload common.CacheKeyPayload
for _, scene := range scenes {
cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, scene, req.Email)
value, err := l.svcCtx.Redis.Get(l.ctx, cacheKey).Result()
if err != nil || value == "" {
continue
}
if !verified {
return nil, errors.Wrapf(xerr.NewErrCode(xerr.VerifyCodeError), "verification code error or expired")
if err := json.Unmarshal([]byte(value), &payload); err != nil {
continue
}
if payload.Code == req.Code && time.Now().Unix()-payload.LastAt <= l.svcCtx.Config.VerifyCode.VerifyCodeExpireTime {
verified = true
cacheKeyUsed = cacheKey
break
}
l.svcCtx.Redis.Del(l.ctx, cacheKeyUsed)
}
if !verified {
return nil, errors.Wrapf(xerr.NewErrCode(xerr.VerifyCodeError), "verification code error or expired")
}
l.svcCtx.Redis.Del(l.ctx, cacheKeyUsed)
// Check User
userInfo, err = l.svcCtx.UserModel.FindOneByEmail(l.ctx, req.Email)

View File

@@ -45,7 +45,7 @@ func (l *ResetPasswordLogic) ResetPassword(req *types.ResetPasswordRequest) (res
loginStatus := false
defer func() {
if userInfo.Id != 0 && loginStatus {
if userInfo != nil && userInfo.Id != 0 && loginStatus {
loginLog := log.Login{
Method: "email",
LoginIP: req.IP,

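The one-line change above (and the identical one in UserLogin below) guards a deferred logging closure against a nil pointer: `userInfo` is declared before the `defer` but only assigned after the database lookup succeeds, so when the lookup fails early the closure would dereference nil. A minimal reproduction of the hazard and the fix:

```go
package main

import "fmt"

type User struct{ Id int64 }

// login mimics the handler shape: userInfo is declared before the
// deferred closure and may still be nil if the lookup fails early.
func login(found bool) (logged bool) {
	var userInfo *User
	defer func() {
		// Without the userInfo != nil check, this closure panics with a
		// nil-pointer dereference whenever the early return below runs.
		if userInfo != nil && userInfo.Id != 0 {
			logged = true
		}
	}()
	if !found {
		return false // userInfo is still nil here
	}
	userInfo = &User{Id: 7}
	return true
}

func main() {
	fmt.Println(login(false), login(true)) // false true
}
```

Deferred closures capture the variable, not its value at `defer` time, so every exit path — including the early error returns — must be safe for whatever state the variable is in at that point.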
View File

@@ -42,7 +42,7 @@ func (l *UserLoginLogic) UserLogin(req *types.UserLoginRequest) (resp *types.Log
var userInfo *user.User
// Record login status
defer func(svcCtx *svc.ServiceContext) {
if userInfo.Id != 0 {
if userInfo != nil && userInfo.Id != 0 {
loginLog := log.Login{
Method: "email",
LoginIP: req.IP,
@@ -67,19 +67,18 @@
}(l.svcCtx)
userInfo, err = l.svcCtx.UserModel.FindOneByEmail(l.ctx, req.Email)
if userInfo.DeletedAt.Valid {
return nil, errors.Wrapf(xerr.NewErrCode(xerr.UserNotExist), "user email deleted: %v", req.Email)
}
if err != nil {
if errors.As(err, &gorm.ErrRecordNotFound) {
if errors.Is(err, gorm.ErrRecordNotFound) {
return nil, errors.Wrapf(xerr.NewErrCode(xerr.UserNotExist), "user email not exist: %v", req.Email)
}
logger.WithContext(l.ctx).Error(err)
return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "query user info failed: %v", err.Error())
}
if userInfo.DeletedAt.Valid {
return nil, errors.Wrapf(xerr.NewErrCode(xerr.UserNotExist), "user email deleted: %v", req.Email)
}
// Verify password
if !tool.MultiPasswordVerify(userInfo.Algo, userInfo.Salt, req.Password, userInfo.Password) {
return nil, errors.Wrapf(xerr.NewErrCode(xerr.UserPasswordError), "user password")

View File

@@ -1,259 +0,0 @@
package common
import (
"context"
"encoding/json"
"fmt"
"testing"
"time"
"github.com/alicebob/miniredis/v2"
"github.com/perfect-panel/server/internal/config"
"github.com/perfect-panel/server/internal/svc"
"github.com/perfect-panel/server/internal/types"
"github.com/perfect-panel/server/pkg/apiversion"
"github.com/perfect-panel/server/pkg/authmethod"
"github.com/perfect-panel/server/pkg/constant"
"github.com/redis/go-redis/v9"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCheckVerificationCodeCanonicalConsume(t *testing.T) {
miniRedis := miniredis.RunT(t)
redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()})
t.Cleanup(func() {
redisClient.Close()
miniRedis.Close()
})
svcCtx := &svc.ServiceContext{
Redis: redisClient,
Config: config.Config{
VerifyCode: config.VerifyCode{
VerifyCodeExpireTime: 900,
},
},
}
email := "user@example.com"
code := "123456"
scene := constant.Register.String()
cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, scene, email)
setEmailCodePayload(t, redisClient, cacheKey, code, time.Now().Unix())
logic := NewCheckVerificationCodeLogic(context.Background(), svcCtx)
req := &types.CheckVerificationCodeRequest{
Method: authmethod.Email,
Account: email,
Code: code,
Type: uint8(constant.Register),
}
resp, err := logic.CheckVerificationCode(req)
require.NoError(t, err)
require.NotNil(t, resp)
assert.True(t, resp.Status)
assert.True(t, resp.Exist)
exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
require.NoError(t, err)
assert.Equal(t, int64(0), exists)
resp, err = logic.CheckVerificationCode(req)
require.NoError(t, err)
require.NotNil(t, resp)
assert.False(t, resp.Status)
assert.False(t, resp.Exist)
}
func TestCheckVerificationCodeLegacyNoConsumeAndType3Mapping(t *testing.T) {
miniRedis := miniredis.RunT(t)
redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()})
t.Cleanup(func() {
redisClient.Close()
miniRedis.Close()
})
svcCtx := &svc.ServiceContext{
Redis: redisClient,
Config: config.Config{
VerifyCode: config.VerifyCode{
VerifyCodeExpireTime: 900,
},
},
}
email := "legacy@example.com"
code := "654321"
scene := constant.Security.String()
cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, scene, email)
setEmailCodePayload(t, redisClient, cacheKey, code, time.Now().Unix())
legacyReq := &types.LegacyCheckVerificationCodeRequest{
Email: email,
Code: code,
Type: 3,
}
normalizedReq, type3Mapped, err := NormalizeLegacyCheckVerificationCodeRequest(legacyReq)
require.NoError(t, err)
assert.True(t, type3Mapped)
assert.Equal(t, uint8(constant.Security), normalizedReq.Type)
assert.Equal(t, authmethod.Email, normalizedReq.Method)
assert.Equal(t, email, normalizedReq.Account)
logic := NewCheckVerificationCodeLogic(context.Background(), svcCtx)
legacyBehavior := VerifyCodeCheckBehavior{
Source: "legacy",
Consume: false,
LegacyType3Mapped: true,
AllowSceneFallback: true,
}
resp, err := logic.CheckVerificationCodeWithBehavior(normalizedReq, legacyBehavior)
require.NoError(t, err)
require.NotNil(t, resp)
assert.True(t, resp.Status)
assert.True(t, resp.Exist)
exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
require.NoError(t, err)
assert.Equal(t, int64(1), exists)
resp, err = logic.CheckVerificationCodeWithBehavior(normalizedReq, legacyBehavior)
require.NoError(t, err)
assert.True(t, resp.Status)
resp, err = logic.CheckVerificationCode(normalizedReq)
require.NoError(t, err)
assert.True(t, resp.Status)
exists, err = redisClient.Exists(context.Background(), cacheKey).Result()
require.NoError(t, err)
assert.Equal(t, int64(0), exists)
}
func TestCheckVerificationCodeLegacySceneFallback(t *testing.T) {
miniRedis := miniredis.RunT(t)
redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()})
t.Cleanup(func() {
redisClient.Close()
miniRedis.Close()
})
svcCtx := &svc.ServiceContext{
Redis: redisClient,
Config: config.Config{
VerifyCode: config.VerifyCode{
VerifyCodeExpireTime: 900,
},
},
}
email := "fallback@example.com"
code := "778899"
cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, constant.Register.String(), email)
setEmailCodePayload(t, redisClient, cacheKey, code, time.Now().Unix())
logic := NewCheckVerificationCodeLogic(context.Background(), svcCtx)
req := &types.CheckVerificationCodeRequest{
Method: authmethod.Email,
Account: email,
Code: code,
Type: uint8(constant.Security),
}
resp, err := logic.CheckVerificationCodeWithBehavior(req, VerifyCodeCheckBehavior{
Source: "legacy",
Consume: false,
AllowSceneFallback: true,
})
require.NoError(t, err)
require.NotNil(t, resp)
assert.True(t, resp.Status)
resp, err = logic.CheckVerificationCodeWithBehavior(req, VerifyCodeCheckBehavior{
Source: "legacy",
Consume: false,
AllowSceneFallback: false,
})
require.NoError(t, err)
require.NotNil(t, resp)
assert.False(t, resp.Status)
}
func setEmailCodePayload(t *testing.T, redisClient *redis.Client, cacheKey string, code string, lastAt int64) {
t.Helper()
payload := CacheKeyPayload{
Code: code,
LastAt: lastAt,
}
value, err := json.Marshal(payload)
require.NoError(t, err)
err = redisClient.Set(context.Background(), cacheKey, value, time.Minute*15).Err()
require.NoError(t, err)
}
func TestCheckVerificationCodeWithApiHeaderGate(t *testing.T) {
tests := []struct {
name string
header string
expectConsume bool
}{
{name: "missing header", header: "", expectConsume: false},
{name: "invalid header", header: "invalid", expectConsume: false},
{name: "equal threshold", header: "1.0.0", expectConsume: false},
{name: "greater threshold", header: "1.0.1", expectConsume: true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
miniRedis := miniredis.RunT(t)
redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()})
t.Cleanup(func() {
redisClient.Close()
miniRedis.Close()
})
svcCtx := &svc.ServiceContext{
Redis: redisClient,
Config: config.Config{
VerifyCode: config.VerifyCode{
VerifyCodeExpireTime: 900,
},
},
}
email := "gate@example.com"
code := "101010"
cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, constant.Register.String(), email)
setEmailCodePayload(t, redisClient, cacheKey, code, time.Now().Unix())
logic := NewCheckVerificationCodeLogic(context.Background(), svcCtx)
req := &types.CheckVerificationCodeRequest{
Method: authmethod.Email,
Account: email,
Code: code,
Type: uint8(constant.Register),
}
resp, err := logic.CheckVerificationCodeWithBehavior(req, VerifyCodeCheckBehavior{
Source: "canonical",
Consume: apiversion.UseLatest(tt.header, apiversion.DefaultThreshold),
})
require.NoError(t, err)
require.NotNil(t, resp)
assert.True(t, resp.Status)
exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
require.NoError(t, err)
if tt.expectConsume {
assert.Equal(t, int64(0), exists)
} else {
assert.Equal(t, int64(1), exists)
}
})
}
}
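The table-driven cases above pin down the observable behavior of `apiversion.UseLatest`: a missing, unparsable, or equal-to-threshold version header keeps the legacy (non-consuming) path, and only a strictly greater version opts into consumption. A minimal standalone sketch of that comparison, assuming a simple three-part dotted numeric format (the real `apiversion` package may parse differently):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// useLatest is a hypothetical re-implementation of the gate the test
// table exercises: true only when header parses and is strictly
// greater than threshold; empty, invalid, or equal inputs return false.
func useLatest(header, threshold string) bool {
	hv, ok := parseVersion(header)
	if !ok {
		return false
	}
	tv, ok := parseVersion(threshold)
	if !ok {
		return false
	}
	for i := 0; i < 3; i++ {
		if hv[i] != tv[i] {
			return hv[i] > tv[i]
		}
	}
	return false // equal versions stay on the legacy behavior
}

// parseVersion accepts exactly "major.minor.patch" with non-negative
// integer components; anything else is treated as invalid.
func parseVersion(s string) ([3]int, bool) {
	var v [3]int
	parts := strings.Split(s, ".")
	if len(parts) != 3 {
		return v, false
	}
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil || n < 0 {
			return v, false
		}
		v[i] = n
	}
	return v, true
}

func main() {
	fmt.Println(useLatest("", "1.0.0"))        // false: missing header
	fmt.Println(useLatest("invalid", "1.0.0")) // false: unparsable
	fmt.Println(useLatest("1.0.0", "1.0.0"))   // false: equal threshold
	fmt.Println(useLatest("1.0.1", "1.0.0"))   // true: strictly greater
}
```

This mirrors the four `expectConsume` cases in the test: only the "greater threshold" header causes the code to be consumed from Redis.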


@ -1,78 +0,0 @@
package common
import (
stderrors "errors"
"testing"
modelUser "github.com/perfect-panel/server/internal/model/user"
"github.com/perfect-panel/server/pkg/xerr"
pkgerrors "github.com/pkg/errors"
"github.com/stretchr/testify/require"
)
func extractFamilyEntitlementCode(err error) uint32 {
if err == nil {
return 0
}
var codeErr *xerr.CodeError
if stderrors.As(pkgerrors.Cause(err), &codeErr) {
return codeErr.GetErrCode()
}
return 0
}
func TestBuildEntitlementContext(t *testing.T) {
t.Run("default self entitlement", func(t *testing.T) {
entitlement := buildEntitlementContext(1001, nil)
require.Equal(t, int64(1001), entitlement.EffectiveUserID)
require.Equal(t, EntitlementSourceSelf, entitlement.Source)
require.Equal(t, int64(0), entitlement.OwnerUserID)
require.False(t, entitlement.ReadOnly)
})
t.Run("active family member uses owner entitlement", func(t *testing.T) {
entitlement := buildEntitlementContext(1001, &familyEntitlementRelation{
Role: modelUser.FamilyRoleMember,
FamilyStatus: modelUser.FamilyStatusActive,
OwnerUserID: 2001,
})
require.Equal(t, int64(2001), entitlement.EffectiveUserID)
require.Equal(t, EntitlementSourceFamilyOwner, entitlement.Source)
require.Equal(t, int64(2001), entitlement.OwnerUserID)
require.True(t, entitlement.ReadOnly)
})
t.Run("owner relation keeps self entitlement", func(t *testing.T) {
entitlement := buildEntitlementContext(2001, &familyEntitlementRelation{
Role: modelUser.FamilyRoleOwner,
FamilyStatus: modelUser.FamilyStatusActive,
OwnerUserID: 2001,
})
require.Equal(t, int64(2001), entitlement.EffectiveUserID)
require.Equal(t, EntitlementSourceSelf, entitlement.Source)
require.False(t, entitlement.ReadOnly)
})
t.Run("disabled family keeps self entitlement", func(t *testing.T) {
entitlement := buildEntitlementContext(1001, &familyEntitlementRelation{
Role: modelUser.FamilyRoleMember,
FamilyStatus: 0,
OwnerUserID: 2001,
})
require.Equal(t, int64(1001), entitlement.EffectiveUserID)
require.Equal(t, EntitlementSourceSelf, entitlement.Source)
require.False(t, entitlement.ReadOnly)
})
}
func TestDenyReadonlyEntitlement(t *testing.T) {
require.NoError(t, denyReadonlyEntitlement(&EntitlementContext{ReadOnly: false}))
err := denyReadonlyEntitlement(&EntitlementContext{
Source: EntitlementSourceFamilyOwner,
ReadOnly: true,
})
require.Error(t, err)
require.Equal(t, xerr.FamilyOwnerOperationForbidden, extractFamilyEntitlementCode(err))
}


@ -1,145 +0,0 @@
package common
import (
"context"
"errors"
"net/url"
"testing"
"github.com/alicebob/miniredis/v2"
"github.com/perfect-panel/server/internal/config"
"github.com/perfect-panel/server/internal/svc"
"github.com/redis/go-redis/v9"
"github.com/stretchr/testify/require"
)
func buildInviteResolverForTest(t *testing.T, cfg config.Config) (*InviteLinkResolver, *miniredis.Miniredis) {
t.Helper()
redisServer, err := miniredis.Run()
require.NoError(t, err)
t.Cleanup(func() {
redisServer.Close()
})
redisClient := redis.NewClient(&redis.Options{
Addr: redisServer.Addr(),
DB: 0,
})
t.Cleanup(func() {
_ = redisClient.Close()
})
serviceCtx := &svc.ServiceContext{
Config: cfg,
Redis: redisClient,
}
resolver := NewInviteLinkResolver(context.Background(), serviceCtx)
return resolver, redisServer
}
func TestInviteLinkResolverResolveInviteLink(t *testing.T) {
t.Run("kutt disabled returns long link", func(t *testing.T) {
cfg := config.Config{}
cfg.Kutt.TargetURL = "https://example.com/register"
resolver, _ := buildInviteResolverForTest(t, cfg)
link := resolver.ResolveInviteLink("abc123")
require.Equal(t, "https://example.com/register?ic=abc123", link)
})
t.Run("cache hit returns cached short link", func(t *testing.T) {
cfg := config.Config{}
cfg.Kutt.Enable = true
cfg.Kutt.ApiURL = "https://kutt.local/api/v2"
cfg.Kutt.ApiKey = "token"
cfg.Kutt.TargetURL = "https://example.com/register"
resolver, redisServer := buildInviteResolverForTest(t, cfg)
redisServer.Set(inviteShortLinkCachePrefix+"abc123", "https://sho.rt/cached")
called := 0
resolver.createShortLink = func(ctx context.Context, targetURL, domain string) (string, error) {
called++
return "", errors.New("should not call createShortLink on cache hit")
}
link := resolver.ResolveInviteLink("abc123")
require.Equal(t, "https://sho.rt/cached", link)
require.Equal(t, 0, called)
})
t.Run("cache miss kutt success returns short link and writes cache", func(t *testing.T) {
cfg := config.Config{}
cfg.Kutt.Enable = true
cfg.Kutt.ApiURL = "https://kutt.local/api/v2"
cfg.Kutt.ApiKey = "token"
cfg.Kutt.TargetURL = "https://example.com/register"
resolver, _ := buildInviteResolverForTest(t, cfg)
resolver.createShortLink = func(ctx context.Context, targetURL, domain string) (string, error) {
return "https://sho.rt/new", nil
}
link := resolver.ResolveInviteLink("abc123")
require.Equal(t, "https://sho.rt/new", link)
cached := resolver.getCachedShortLink("abc123")
require.Equal(t, "https://sho.rt/new", cached)
})
t.Run("kutt failure falls back to long link", func(t *testing.T) {
cfg := config.Config{}
cfg.Kutt.Enable = true
cfg.Kutt.ApiURL = "https://kutt.local/api/v2"
cfg.Kutt.ApiKey = "token"
cfg.Kutt.TargetURL = "https://example.com/register"
resolver, _ := buildInviteResolverForTest(t, cfg)
resolver.createShortLink = func(ctx context.Context, targetURL, domain string) (string, error) {
return "", errors.New("kutt request failed")
}
link := resolver.ResolveInviteLink("abc123")
require.Equal(t, "https://example.com/register?ic=abc123", link)
})
t.Run("long link preserves existing query string", func(t *testing.T) {
cfg := config.Config{}
cfg.Kutt.TargetURL = "https://example.com/register?channel=ios"
resolver, _ := buildInviteResolverForTest(t, cfg)
link := resolver.ResolveInviteLink("abc123")
parsed, err := url.Parse(link)
require.NoError(t, err)
require.Equal(t, "https", parsed.Scheme)
require.Equal(t, "example.com", parsed.Host)
require.Equal(t, "/register", parsed.Path)
require.Equal(t, "ios", parsed.Query().Get("channel"))
require.Equal(t, "abc123", parsed.Query().Get("ic"))
})
t.Run("kutt target preserves existing query string", func(t *testing.T) {
cfg := config.Config{}
cfg.Kutt.Enable = true
cfg.Kutt.ApiURL = "https://kutt.local/api/v2"
cfg.Kutt.ApiKey = "token"
cfg.Kutt.TargetURL = "https://example.com/register?channel=ios"
resolver, _ := buildInviteResolverForTest(t, cfg)
capturedTargetURL := ""
resolver.createShortLink = func(ctx context.Context, targetURL, domain string) (string, error) {
capturedTargetURL = targetURL
return "https://sho.rt/query", nil
}
link := resolver.ResolveInviteLink("abc123")
require.Equal(t, "https://sho.rt/query", link)
parsed, err := url.Parse(capturedTargetURL)
require.NoError(t, err)
require.Equal(t, "ios", parsed.Query().Get("channel"))
require.Equal(t, "abc123", parsed.Query().Get("ic"))
})
}


@ -1,82 +0,0 @@
package common
import (
"context"
"errors"
"testing"
"github.com/perfect-panel/server/internal/model/user"
"github.com/stretchr/testify/require"
)
func TestResolvePurchaseRoute(t *testing.T) {
ctx := context.Background()
t.Run("single mode disabled", func(t *testing.T) {
called := false
decision, err := ResolvePurchaseRoute(ctx, false, 1, 100, func(ctx context.Context, userID int64) (*user.Subscribe, error) {
called = true
return nil, nil
})
require.NoError(t, err)
require.NotNil(t, decision)
require.Equal(t, PurchaseRouteNewPurchase, decision.Route)
require.Equal(t, int64(100), decision.ResolvedSubscribeID)
require.False(t, called)
})
t.Run("single mode but empty user", func(t *testing.T) {
decision, err := ResolvePurchaseRoute(ctx, true, 0, 100, nil)
require.NoError(t, err)
require.NotNil(t, decision)
require.Equal(t, PurchaseRouteNewPurchase, decision.Route)
require.Equal(t, int64(100), decision.ResolvedSubscribeID)
})
t.Run("single mode no anchor", func(t *testing.T) {
decision, err := ResolvePurchaseRoute(ctx, true, 1, 100, func(ctx context.Context, userID int64) (*user.Subscribe, error) {
return nil, nil
})
require.NoError(t, err)
require.NotNil(t, decision)
require.Equal(t, PurchaseRouteNewPurchase, decision.Route)
require.Equal(t, int64(100), decision.ResolvedSubscribeID)
})
t.Run("single mode routed to renewal", func(t *testing.T) {
decision, err := ResolvePurchaseRoute(ctx, true, 1, 100, func(ctx context.Context, userID int64) (*user.Subscribe, error) {
return &user.Subscribe{
Id: 11,
SubscribeId: 100,
OrderId: 7,
Token: "token",
}, nil
})
require.NoError(t, err)
require.NotNil(t, decision)
require.Equal(t, PurchaseRoutePurchaseToRenewal, decision.Route)
require.Equal(t, int64(100), decision.ResolvedSubscribeID)
require.NotNil(t, decision.Anchor)
require.Equal(t, int64(11), decision.Anchor.Id)
})
t.Run("single mode plan mismatch", func(t *testing.T) {
decision, err := ResolvePurchaseRoute(ctx, true, 1, 100, func(ctx context.Context, userID int64) (*user.Subscribe, error) {
return &user.Subscribe{
Id: 11,
SubscribeId: 200,
}, nil
})
require.ErrorIs(t, err, ErrSingleModePlanMismatch)
require.Nil(t, decision)
})
t.Run("single mode anchor query error", func(t *testing.T) {
queryErr := errors.New("query failed")
decision, err := ResolvePurchaseRoute(ctx, true, 1, 100, func(ctx context.Context, userID int64) (*user.Subscribe, error) {
return nil, queryErr
})
require.ErrorIs(t, err, queryErr)
require.Nil(t, decision)
})
}


@ -131,9 +131,8 @@ func (l *CloseOrderLogic) CloseOrder(req *types.CloseOrderRequest) error {
)
return err
}
// update user cache
return l.svcCtx.UserModel.UpdateUserCache(l.ctx, userInfo)
}
// Note: user cache will be updated after transaction commits
if sub.Inventory != -1 {
sub.Inventory++
if e := l.svcCtx.SubscribeModel.Update(l.ctx, sub, tx); e != nil {
@ -151,6 +150,19 @@ func (l *CloseOrderLogic) CloseOrder(req *types.CloseOrderRequest) error {
logger.Errorf("[CloseOrder] Transaction failed: %v", err.Error())
return err
}
// Update user cache after transaction commits successfully
if orderInfo.GiftAmount > 0 && orderInfo.UserId != 0 {
if userInfo, findErr := l.svcCtx.UserModel.FindOne(l.ctx, orderInfo.UserId); findErr == nil {
if clearErr := l.svcCtx.UserModel.ClearUserCache(l.ctx, userInfo); clearErr != nil {
l.Errorw("[CloseOrder] failed to clear user cache",
logger.Field("error", clearErr.Error()),
logger.Field("user_id", orderInfo.UserId),
)
}
}
}
return nil
}


@ -4,6 +4,7 @@ import (
"context"
"encoding/json"
"math"
"time"
commonLogic "github.com/perfect-panel/server/internal/logic/common"
"github.com/perfect-panel/server/internal/model/order"
@ -108,6 +109,23 @@ func (l *PreCreateOrderLogic) PreCreateOrder(req *types.PurchaseOrderRequest) (r
}
}
// check new user only restriction
if !isSingleModeRenewal && sub.NewUserOnly != nil && *sub.NewUserOnly {
if time.Since(u.CreatedAt) > 24*time.Hour {
return nil, errors.Wrapf(xerr.NewErrCode(xerr.SubscribeNewUserOnly), "not a new user")
}
var historyCount int64
if e := l.svcCtx.DB.Model(&order.Order{}).
Where("user_id = ? AND subscribe_id = ? AND type = 1 AND status IN ?",
u.Id, targetSubscribeID, []uint8{2, 5}).
Count(&historyCount).Error; e != nil {
return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "check new user purchase history error: %v", e.Error())
}
if historyCount >= 1 {
return nil, errors.Wrapf(xerr.NewErrCode(xerr.SubscribeNewUserOnly), "already purchased new user plan")
}
}
var discount float64 = 1
if sub.Discount != "" {
var dis []types.SubscribeDiscount


@ -270,6 +270,23 @@ func (l *PurchaseLogic) Purchase(req *types.PurchaseOrderRequest) (resp *types.P
}
}
// check new user only restriction inside transaction to prevent race condition
if orderInfo.Type == 1 && sub.NewUserOnly != nil && *sub.NewUserOnly {
if time.Since(u.CreatedAt) > 24*time.Hour {
return errors.Wrapf(xerr.NewErrCode(xerr.SubscribeNewUserOnly), "not a new user")
}
var historyCount int64
if e := db.Model(&order.Order{}).
Where("user_id = ? AND subscribe_id = ? AND type = 1 AND status IN ?",
u.Id, targetSubscribeID, []int{2, 5}).
Count(&historyCount).Error; e != nil {
return errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "check new user purchase history error: %v", e.Error())
}
if historyCount >= 1 {
return errors.Wrapf(xerr.NewErrCode(xerr.SubscribeNewUserOnly), "already purchased new user plan")
}
}
// update user gift amount and create deduction record
if orderInfo.GiftAmount > 0 {
// deduct gift amount from user
@ -319,7 +336,11 @@ func (l *PurchaseLogic) Purchase(req *types.PurchaseOrderRequest) (resp *types.P
})
if err != nil {
l.Errorw("[Purchase] Database insert error", logger.Field("error", err.Error()), logger.Field("orderInfo", orderInfo))
// Propagate business errors (e.g. SubscribeNewUserOnly, SubscribeQuotaLimit) directly.
var codeErr *xerr.CodeError
if errors.As(err, &codeErr) {
return nil, err
}
return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseInsertError), "insert order error: %v", err.Error())
}
// Deferred task


@ -1,33 +0,0 @@
package subscribe
import (
"testing"
commonLogic "github.com/perfect-panel/server/internal/logic/common"
"github.com/perfect-panel/server/internal/types"
"github.com/stretchr/testify/require"
)
func TestFillUserSubscribeInfoEntitlementFields(t *testing.T) {
sub := &types.UserSubscribeInfo{}
entitlement := &commonLogic.EntitlementContext{
EffectiveUserID: 3001,
Source: commonLogic.EntitlementSourceFamilyOwner,
OwnerUserID: 3001,
ReadOnly: true,
}
fillUserSubscribeInfoEntitlementFields(sub, entitlement)
require.Equal(t, commonLogic.EntitlementSourceFamilyOwner, sub.EntitlementSource)
require.Equal(t, int64(3001), sub.EntitlementOwnerUserId)
require.True(t, sub.ReadOnly)
}
func TestNormalizeSubscribeNodeTags(t *testing.T) {
tags := normalizeSubscribeNodeTags("美国, 日本, , 美国, ,日本")
require.Equal(t, []string{"美国", "日本"}, tags)
empty := normalizeSubscribeNodeTags("")
require.Nil(t, empty)
}


@ -45,6 +45,9 @@ func (h *accountMergeHelper) mergeIntoOwner(ownerUserID, deviceUserID int64, sou
DeviceUserID: deviceUserID,
}
// Capture device user's auth methods BEFORE the transaction migrates them
deviceAuthMethods, _ := h.svcCtx.UserModel.FindUserAuthMethods(h.ctx, deviceUserID)
err := h.svcCtx.DB.WithContext(h.ctx).Transaction(func(tx *gorm.DB) error {
var owner modelUser.User
if err := tx.Clauses(clause.Locking{Strength: "UPDATE"}).
@ -114,7 +117,7 @@ func (h *accountMergeHelper) mergeIntoOwner(ownerUserID, deviceUserID int64, sou
return nil, err
}
if err := h.clearCaches(result); err != nil {
if err := h.clearCaches(result, deviceAuthMethods); err != nil {
return nil, err
}
@ -129,16 +132,32 @@ func (h *accountMergeHelper) mergeIntoOwner(ownerUserID, deviceUserID int64, sou
return result, nil
}
func (h *accountMergeHelper) clearCaches(result *accountMergeResult) error {
func (h *accountMergeHelper) clearCaches(result *accountMergeResult, deviceAuthMethods []*modelUser.AuthMethods) error {
if result == nil {
return nil
}
if err := h.svcCtx.UserModel.ClearUserCache(h.ctx,
&modelUser.User{Id: result.OwnerUserID},
&modelUser.User{Id: result.DeviceUserID},
); err != nil {
return err
// Fetch owner user with AuthMethods for proper cache key generation
var users []*modelUser.User
if u, err := h.svcCtx.UserModel.FindOne(h.ctx, result.OwnerUserID); err == nil {
users = append(users, u)
}
// For device user, FindOne won't have AuthMethods anymore (migrated in tx),
// so we build a minimal User with the pre-captured auth methods
deviceUser := &modelUser.User{Id: result.DeviceUserID}
if len(deviceAuthMethods) > 0 {
authMethods := make([]modelUser.AuthMethods, len(deviceAuthMethods))
for i, am := range deviceAuthMethods {
authMethods[i] = *am
}
deviceUser.AuthMethods = authMethods
}
users = append(users, deviceUser)
if len(users) > 0 {
if err := h.svcCtx.UserModel.ClearUserCache(h.ctx, users...); err != nil {
return err
}
}
if len(result.MovedDevices) > 0 {


@ -1,109 +0,0 @@
package user
import (
"context"
"testing"
"time"
"github.com/alicebob/miniredis/v2"
"github.com/perfect-panel/server/internal/svc"
"github.com/redis/go-redis/v9"
)
func TestClearAllSessions_RemovesUserSessionsAndDeviceMappings(t *testing.T) {
logic, redisClient, cleanup := newDeleteAccountRedisTestLogic(t)
defer cleanup()
mustRedisSet(t, redisClient, "auth:session_id:sid-user-1", "1001")
mustRedisSet(t, redisClient, "auth:session_id:sid-user-2", "1001")
mustRedisSet(t, redisClient, "auth:session_id:sid-other", "2002")
mustRedisSet(t, redisClient, "auth:session_id:detail:sid-user-1", "detail")
mustRedisSet(t, redisClient, "auth:session_id:detail:sid-other", "detail")
mustRedisSet(t, redisClient, "auth:device_identifier:dev-user-1", "sid-user-1")
mustRedisSet(t, redisClient, "auth:device_identifier:dev-user-2", "sid-user-2")
mustRedisSet(t, redisClient, "auth:device_identifier:dev-other", "sid-other")
mustRedisZAdd(t, redisClient, "auth:user_sessions:1001", "sid-user-3", 1)
mustRedisSet(t, redisClient, "auth:session_id:sid-user-3", "1001")
logic.clearAllSessions(1001)
mustRedisNotExist(t, redisClient, "auth:session_id:sid-user-1")
mustRedisNotExist(t, redisClient, "auth:session_id:sid-user-2")
mustRedisNotExist(t, redisClient, "auth:session_id:sid-user-3")
mustRedisNotExist(t, redisClient, "auth:session_id:detail:sid-user-1")
mustRedisNotExist(t, redisClient, "auth:user_sessions:1001")
mustRedisNotExist(t, redisClient, "auth:device_identifier:dev-user-1")
mustRedisNotExist(t, redisClient, "auth:device_identifier:dev-user-2")
mustRedisExist(t, redisClient, "auth:session_id:sid-other")
mustRedisExist(t, redisClient, "auth:session_id:detail:sid-other")
mustRedisExist(t, redisClient, "auth:device_identifier:dev-other")
}
func TestClearAllSessions_ScanFallbackWorksWithoutUserSessionIndex(t *testing.T) {
logic, redisClient, cleanup := newDeleteAccountRedisTestLogic(t)
defer cleanup()
mustRedisSet(t, redisClient, "auth:session_id:sid-a", "3003")
mustRedisSet(t, redisClient, "auth:session_id:sid-b", "3003")
mustRedisSet(t, redisClient, "auth:session_id:sid-c", "4004")
logic.clearAllSessions(3003)
mustRedisNotExist(t, redisClient, "auth:session_id:sid-a")
mustRedisNotExist(t, redisClient, "auth:session_id:sid-b")
mustRedisExist(t, redisClient, "auth:session_id:sid-c")
}
func newDeleteAccountRedisTestLogic(t *testing.T) (*DeleteAccountLogic, *redis.Client, func()) {
t.Helper()
miniRedis := miniredis.RunT(t)
redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()})
logic := NewDeleteAccountLogic(context.Background(), &svc.ServiceContext{Redis: redisClient})
cleanup := func() {
_ = redisClient.Close()
miniRedis.Close()
}
return logic, redisClient, cleanup
}
func mustRedisSet(t *testing.T, redisClient *redis.Client, key, value string) {
t.Helper()
if err := redisClient.Set(context.Background(), key, value, time.Hour).Err(); err != nil {
t.Fatalf("redis set %s failed: %v", key, err)
}
}
func mustRedisZAdd(t *testing.T, redisClient *redis.Client, key, member string, score float64) {
t.Helper()
if err := redisClient.ZAdd(context.Background(), key, redis.Z{Member: member, Score: score}).Err(); err != nil {
t.Fatalf("redis zadd %s failed: %v", key, err)
}
}
func mustRedisExist(t *testing.T, redisClient *redis.Client, key string) {
t.Helper()
exists, err := redisClient.Exists(context.Background(), key).Result()
if err != nil {
t.Fatalf("redis exists %s failed: %v", key, err)
}
if exists == 0 {
t.Fatalf("expected redis key %s to exist", key)
}
}
func mustRedisNotExist(t *testing.T, redisClient *redis.Client, key string) {
t.Helper()
exists, err := redisClient.Exists(context.Background(), key).Result()
if err != nil {
t.Fatalf("redis exists %s failed: %v", key, err)
}
if exists != 0 {
t.Fatalf("expected redis key %s to be deleted", key)
}
}


@ -1,128 +0,0 @@
package user
import (
stderrors "errors"
"testing"
modelUser "github.com/perfect-panel/server/internal/model/user"
"github.com/perfect-panel/server/pkg/xerr"
pkgerrors "github.com/pkg/errors"
"github.com/stretchr/testify/require"
)
func extractFamilyJoinCode(err error) uint32 {
if err == nil {
return 0
}
var codeErr *xerr.CodeError
if stderrors.As(pkgerrors.Cause(err), &codeErr) {
return codeErr.GetErrCode()
}
return 0
}
func TestValidateMemberJoinConflict(t *testing.T) {
ownerFamilyID := int64(11)
testCases := []struct {
name string
ownerFamily int64
memberRecord *modelUser.UserFamilyMember
wantCode uint32
}{
{
name: "no member record",
ownerFamily: ownerFamilyID,
wantCode: 0,
},
{
name: "same family active member",
ownerFamily: ownerFamilyID,
memberRecord: &modelUser.UserFamilyMember{
FamilyId: ownerFamilyID,
Status: modelUser.FamilyMemberActive,
},
wantCode: xerr.FamilyAlreadyBound,
},
{
name: "same family left member",
ownerFamily: ownerFamilyID,
memberRecord: &modelUser.UserFamilyMember{
FamilyId: ownerFamilyID,
Status: modelUser.FamilyMemberLeft,
},
wantCode: 0,
},
{
name: "same family removed member",
ownerFamily: ownerFamilyID,
memberRecord: &modelUser.UserFamilyMember{
FamilyId: ownerFamilyID,
Status: modelUser.FamilyMemberRemoved,
},
wantCode: 0,
},
{
name: "cross family active member",
ownerFamily: ownerFamilyID,
memberRecord: &modelUser.UserFamilyMember{
FamilyId: ownerFamilyID + 1,
Status: modelUser.FamilyMemberActive,
},
wantCode: xerr.FamilyCrossBindForbidden,
},
{
name: "cross family left member",
ownerFamily: ownerFamilyID,
memberRecord: &modelUser.UserFamilyMember{
FamilyId: ownerFamilyID + 1,
Status: modelUser.FamilyMemberLeft,
},
wantCode: 0,
},
{
name: "cross family removed member",
ownerFamily: ownerFamilyID,
memberRecord: &modelUser.UserFamilyMember{
FamilyId: ownerFamilyID + 1,
Status: modelUser.FamilyMemberRemoved,
},
wantCode: 0,
},
}
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
err := validateMemberJoinConflict(testCase.ownerFamily, testCase.memberRecord)
if testCase.wantCode == 0 {
require.NoError(t, err)
return
}
require.Error(t, err)
require.Equal(t, testCase.wantCode, extractFamilyJoinCode(err))
})
}
}
func TestBuildRemovedSubscribeCacheMeta(t *testing.T) {
removed := []modelUser.Subscribe{
{Id: 1, SubscribeId: 10, Token: "member-token-1"},
{Id: 2, SubscribeId: 11, Token: "member-token-2"},
{Id: 3, SubscribeId: 0, Token: "member-token-3"},
}
models, subscribeIDSet := buildRemovedSubscribeCacheMeta(removed)
require.Len(t, models, 3)
require.Equal(t, int64(1), models[0].Id)
require.Equal(t, "member-token-2", models[1].Token)
require.Len(t, subscribeIDSet, 2)
_, has10 := subscribeIDSet[10]
_, has11 := subscribeIDSet[11]
_, has0 := subscribeIDSet[0]
require.True(t, has10)
require.True(t, has11)
require.False(t, has0)
}


@ -1,105 +0,0 @@
package user
import (
"testing"
modelUser "github.com/perfect-panel/server/internal/model/user"
"github.com/perfect-panel/server/internal/types"
"github.com/stretchr/testify/require"
)
func TestAppendFamilyOwnerEmailIfNeeded(t *testing.T) {
testCases := []struct {
name string
methods []types.UserAuthMethod
familyJoined bool
ownerEmailMethod *modelUser.AuthMethods
wantMethodCount int
wantEmailCount int
wantFirstAuthType string
wantFirstAuthValue string
}{
{
name: "inject owner email when member has no email",
methods: []types.UserAuthMethod{
{AuthType: "device", AuthIdentifier: "dev-1", Verified: true},
},
familyJoined: true,
ownerEmailMethod: &modelUser.AuthMethods{AuthType: "email", AuthIdentifier: "owner@example.com", Verified: true},
wantMethodCount: 2,
wantEmailCount: 1,
wantFirstAuthType: "email",
wantFirstAuthValue: "owner@example.com",
},
{
name: "do not inject when member already has email",
methods: []types.UserAuthMethod{
{AuthType: "email", AuthIdentifier: "member@example.com", Verified: true},
{AuthType: "device", AuthIdentifier: "dev-1", Verified: true},
},
familyJoined: true,
ownerEmailMethod: &modelUser.AuthMethods{AuthType: "email", AuthIdentifier: "owner@example.com", Verified: true},
wantMethodCount: 2,
wantEmailCount: 1,
wantFirstAuthType: "email",
wantFirstAuthValue: "member@example.com",
},
{
name: "do not inject when owner has no email",
methods: []types.UserAuthMethod{
{AuthType: "device", AuthIdentifier: "dev-1", Verified: true},
},
familyJoined: true,
ownerEmailMethod: &modelUser.AuthMethods{AuthType: "email", AuthIdentifier: "", Verified: true},
wantMethodCount: 1,
wantEmailCount: 0,
wantFirstAuthType: "device",
},
{
name: "do not inject for non active family relationship",
methods: []types.UserAuthMethod{
{AuthType: "device", AuthIdentifier: "dev-1", Verified: true},
},
familyJoined: false,
ownerEmailMethod: &modelUser.AuthMethods{AuthType: "email", AuthIdentifier: "owner@example.com", Verified: true},
wantMethodCount: 1,
wantEmailCount: 0,
wantFirstAuthType: "device",
},
{
name: "sort keeps injected email at first position",
methods: []types.UserAuthMethod{
{AuthType: "mobile", AuthIdentifier: "+1234567890", Verified: true},
{AuthType: "device", AuthIdentifier: "dev-1", Verified: true},
},
familyJoined: true,
ownerEmailMethod: &modelUser.AuthMethods{AuthType: "email", AuthIdentifier: "owner@example.com", Verified: true},
wantMethodCount: 3,
wantEmailCount: 1,
wantFirstAuthType: "email",
wantFirstAuthValue: "owner@example.com",
},
}
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
finalMethods := appendFamilyOwnerEmailIfNeeded(testCase.methods, testCase.familyJoined, testCase.ownerEmailMethod)
sortUserAuthMethodsByPriority(finalMethods)
require.Len(t, finalMethods, testCase.wantMethodCount)
emailCount := 0
for _, method := range finalMethods {
if method.AuthType == "email" {
emailCount++
}
}
require.Equal(t, testCase.wantEmailCount, emailCount)
require.Equal(t, testCase.wantFirstAuthType, finalMethods[0].AuthType)
if testCase.wantFirstAuthValue != "" {
require.Equal(t, testCase.wantFirstAuthValue, finalMethods[0].AuthIdentifier)
}
})
}
}


@ -1,25 +0,0 @@
package user
import (
"testing"
commonLogic "github.com/perfect-panel/server/internal/logic/common"
"github.com/perfect-panel/server/internal/types"
"github.com/stretchr/testify/require"
)
func TestFillUserSubscribeEntitlementFields(t *testing.T) {
sub := &types.UserSubscribe{}
entitlement := &commonLogic.EntitlementContext{
EffectiveUserID: 2001,
Source: commonLogic.EntitlementSourceFamilyOwner,
OwnerUserID: 2001,
ReadOnly: true,
}
fillUserSubscribeEntitlementFields(sub, entitlement)
require.Equal(t, commonLogic.EntitlementSourceFamilyOwner, sub.EntitlementSource)
require.Equal(t, int64(2001), sub.EntitlementOwnerUserId)
require.True(t, sub.ReadOnly)
}


@ -71,7 +71,7 @@ func (l *UnsubscribeLogic) Unsubscribe(req *types.UnsubscribeRequest) error {
err = l.svcCtx.UserModel.Transaction(l.ctx, func(db *gorm.DB) error {
// Find and update subscription status to cancelled (status = 4)
userSub.Status = 4 // Set status to cancelled
if err = l.svcCtx.UserModel.UpdateSubscribe(l.ctx, userSub); err != nil {
if err = l.svcCtx.UserModel.UpdateSubscribe(l.ctx, userSub, db); err != nil {
return err
}
@ -148,7 +148,7 @@ func (l *UnsubscribeLogic) Unsubscribe(req *types.UnsubscribeRequest) error {
// Update user's regular balance and save changes to database
u.Balance = balance
return l.svcCtx.UserModel.Update(l.ctx, u)
return l.svcCtx.UserModel.Update(l.ctx, u, db)
})
if err != nil {


@ -1,50 +0,0 @@
package middleware
import (
"context"
"net/http"
"net/http/httptest"
"testing"
"github.com/gin-gonic/gin"
"github.com/perfect-panel/server/pkg/constant"
)
func TestApiVersionSwitchHandlerUsesLegacyByDefault(t *testing.T) {
gin.SetMode(gin.TestMode)
r := gin.New()
r.GET("/test", ApiVersionSwitchHandler(
func(c *gin.Context) { c.String(http.StatusOK, "legacy") },
func(c *gin.Context) { c.String(http.StatusOK, "latest") },
))
req := httptest.NewRequest(http.MethodGet, "/test", nil)
resp := httptest.NewRecorder()
r.ServeHTTP(resp, req)
if resp.Code != http.StatusOK || resp.Body.String() != "legacy" {
t.Fatalf("expected legacy handler, code=%d body=%s", resp.Code, resp.Body.String())
}
}
func TestApiVersionSwitchHandlerUsesLatestWhenFlagSet(t *testing.T) {
gin.SetMode(gin.TestMode)
r := gin.New()
r.Use(func(c *gin.Context) {
ctx := context.WithValue(c.Request.Context(), constant.CtxKeyAPIVersionUseLatest, true)
c.Request = c.Request.WithContext(ctx)
c.Next()
})
r.GET("/test", ApiVersionSwitchHandler(
func(c *gin.Context) { c.String(http.StatusOK, "legacy") },
func(c *gin.Context) { c.String(http.StatusOK, "latest") },
))
req := httptest.NewRequest(http.MethodGet, "/test", nil)
resp := httptest.NewRecorder()
r.ServeHTTP(resp, req)
if resp.Code != http.StatusOK || resp.Body.String() != "latest" {
t.Fatalf("expected latest handler, code=%d body=%s", resp.Code, resp.Body.String())
}
}

View File

@ -1,46 +0,0 @@
package middleware
import "testing"
func TestParseLoginType(t *testing.T) {
tests := []struct {
name string
claims map[string]interface{}
want string
}{
{
name: "prefer CtxLoginType when both exist",
claims: map[string]interface{}{"CtxLoginType": "device", "LoginType": "email"},
want: "device",
},
{
name: "fallback to legacy LoginType",
claims: map[string]interface{}{"LoginType": "device"},
want: "device",
},
{
name: "ignore non-string values",
claims: map[string]interface{}{"CtxLoginType": 123, "LoginType": true},
want: "",
},
{
name: "empty values return empty",
claims: map[string]interface{}{"CtxLoginType": "", "LoginType": ""},
want: "",
},
{
name: "missing values return empty",
claims: map[string]interface{}{},
want: "",
},
}
for _, testCase := range tests {
t.Run(testCase.name, func(t *testing.T) {
got := parseLoginType(testCase.claims)
if got != testCase.want {
t.Fatalf("parseLoginType() = %q, want %q", got, testCase.want)
}
})
}
}

View File

@ -1,239 +0,0 @@
package middleware
import (
"context"
"crypto/hmac"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"io"
"net/http"
"net/http/httptest"
"strconv"
"strings"
"testing"
"time"
"github.com/gin-gonic/gin"
"github.com/perfect-panel/server/internal/config"
"github.com/perfect-panel/server/internal/svc"
"github.com/perfect-panel/server/pkg/signature"
"github.com/perfect-panel/server/pkg/xerr"
)
type testNonceStore struct {
seen map[string]bool
}
func newTestNonceStore() *testNonceStore {
return &testNonceStore{seen: map[string]bool{}}
}
func (s *testNonceStore) SetIfNotExists(_ context.Context, appId, nonce string, _ int64) (bool, error) {
key := appId + ":" + nonce
if s.seen[key] {
return true, nil
}
s.seen[key] = true
return false, nil
}
func makeTestSignature(secret, sts string) string {
mac := hmac.New(sha256.New, []byte(secret))
mac.Write([]byte(sts))
return hex.EncodeToString(mac.Sum(nil))
}
func newTestServiceContext() *svc.ServiceContext {
conf := config.Config{}
conf.Signature.EnableSignature = true
conf.AppSignature = signature.SignatureConf{
AppSecrets: map[string]string{
"web-client": "test-secret",
},
ValidWindowSeconds: 300,
SkipPrefixes: []string{
"/v1/public/health",
},
}
return &svc.ServiceContext{
Config: conf,
SignatureValidator: signature.NewValidator(conf.AppSignature, newTestNonceStore()),
}
}
func newTestServiceContextWithSwitch(enabled bool) *svc.ServiceContext {
svcCtx := newTestServiceContext()
svcCtx.Config.Signature.EnableSignature = enabled
return svcCtx
}
func decodeCode(t *testing.T, body []byte) uint32 {
t.Helper()
var resp struct {
Code uint32 `json:"code"`
}
if err := json.Unmarshal(body, &resp); err != nil {
t.Fatalf("unmarshal response failed: %v", err)
}
return resp.Code
}
func TestSignatureMiddlewareMissingAppID(t *testing.T) {
gin.SetMode(gin.TestMode)
svcCtx := newTestServiceContext()
r := gin.New()
r.Use(SignatureMiddleware(svcCtx))
r.GET("/v1/public/ping", func(c *gin.Context) {
c.String(http.StatusOK, "ok")
})
req := httptest.NewRequest(http.MethodGet, "/v1/public/ping", nil)
req.Header.Set("X-Signature-Enabled", "1")
resp := httptest.NewRecorder()
r.ServeHTTP(resp, req)
if code := decodeCode(t, resp.Body.Bytes()); code != xerr.InvalidAccess {
t.Fatalf("expected InvalidAccess(%d), got %d", xerr.InvalidAccess, code)
}
}
func TestSignatureMiddlewareMissingSignatureHeaders(t *testing.T) {
gin.SetMode(gin.TestMode)
svcCtx := newTestServiceContext()
r := gin.New()
r.Use(SignatureMiddleware(svcCtx))
r.GET("/v1/public/ping", func(c *gin.Context) {
c.String(http.StatusOK, "ok")
})
req := httptest.NewRequest(http.MethodGet, "/v1/public/ping", nil)
req.Header.Set("X-Signature-Enabled", "1")
req.Header.Set("X-App-Id", "web-client")
resp := httptest.NewRecorder()
r.ServeHTTP(resp, req)
if code := decodeCode(t, resp.Body.Bytes()); code != xerr.SignatureMissing {
t.Fatalf("expected SignatureMissing(%d), got %d", xerr.SignatureMissing, code)
}
}
func TestSignatureMiddlewarePassesWhenSignatureHeaderMissing(t *testing.T) {
gin.SetMode(gin.TestMode)
svcCtx := newTestServiceContext()
r := gin.New()
r.Use(SignatureMiddleware(svcCtx))
r.GET("/v1/public/ping", func(c *gin.Context) {
c.String(http.StatusOK, "ok")
})
req := httptest.NewRequest(http.MethodGet, "/v1/public/ping", nil)
resp := httptest.NewRecorder()
r.ServeHTTP(resp, req)
if resp.Code != http.StatusOK || resp.Body.String() != "ok" {
t.Fatalf("expected pass-through without X-Signature-Enabled, got code=%d body=%s", resp.Code, resp.Body.String())
}
}
func TestSignatureMiddlewarePassesWhenSignatureHeaderIsZero(t *testing.T) {
gin.SetMode(gin.TestMode)
svcCtx := newTestServiceContext()
r := gin.New()
r.Use(SignatureMiddleware(svcCtx))
r.GET("/v1/public/ping", func(c *gin.Context) {
c.String(http.StatusOK, "ok")
})
req := httptest.NewRequest(http.MethodGet, "/v1/public/ping", nil)
req.Header.Set("X-Signature-Enabled", "0")
resp := httptest.NewRecorder()
r.ServeHTTP(resp, req)
if resp.Code != http.StatusOK || resp.Body.String() != "ok" {
t.Fatalf("expected pass-through when X-Signature-Enabled=0, got code=%d body=%s", resp.Code, resp.Body.String())
}
}
func TestSignatureMiddlewarePassesWhenSystemSwitchDisabled(t *testing.T) {
gin.SetMode(gin.TestMode)
svcCtx := newTestServiceContextWithSwitch(false)
r := gin.New()
r.Use(SignatureMiddleware(svcCtx))
r.GET("/v1/public/ping", func(c *gin.Context) {
c.String(http.StatusOK, "ok")
})
req := httptest.NewRequest(http.MethodGet, "/v1/public/ping", nil)
req.Header.Set("X-Signature-Enabled", "1")
resp := httptest.NewRecorder()
r.ServeHTTP(resp, req)
if resp.Code != http.StatusOK || resp.Body.String() != "ok" {
t.Fatalf("expected pass-through when system switch is disabled, got code=%d body=%s", resp.Code, resp.Body.String())
}
}
func TestSignatureMiddlewareSkipsNonPublicPath(t *testing.T) {
gin.SetMode(gin.TestMode)
svcCtx := newTestServiceContext()
r := gin.New()
r.Use(SignatureMiddleware(svcCtx))
r.GET("/v1/admin/ping", func(c *gin.Context) {
c.String(http.StatusOK, "ok")
})
req := httptest.NewRequest(http.MethodGet, "/v1/admin/ping", nil)
resp := httptest.NewRecorder()
r.ServeHTTP(resp, req)
if resp.Code != http.StatusOK || resp.Body.String() != "ok" {
t.Fatalf("expected pass-through for non-public path, got code=%d body=%s", resp.Code, resp.Body.String())
}
}
func TestSignatureMiddlewareHonorsSkipPrefix(t *testing.T) {
gin.SetMode(gin.TestMode)
svcCtx := newTestServiceContext()
r := gin.New()
r.Use(SignatureMiddleware(svcCtx))
r.GET("/v1/public/healthz", func(c *gin.Context) {
c.String(http.StatusOK, "ok")
})
req := httptest.NewRequest(http.MethodGet, "/v1/public/healthz", nil)
resp := httptest.NewRecorder()
r.ServeHTTP(resp, req)
if resp.Code != http.StatusOK || resp.Body.String() != "ok" {
t.Fatalf("expected skip-prefix pass-through, got code=%d body=%s", resp.Code, resp.Body.String())
}
}
func TestSignatureMiddlewareRestoresBodyAfterVerify(t *testing.T) {
gin.SetMode(gin.TestMode)
svcCtx := newTestServiceContext()
r := gin.New()
r.Use(SignatureMiddleware(svcCtx))
r.POST("/v1/public/body", func(c *gin.Context) {
body, _ := io.ReadAll(c.Request.Body)
c.String(http.StatusOK, string(body))
})
body := `{"hello":"world"}`
req := httptest.NewRequest(http.MethodPost, "/v1/public/body?a=1&b=2", strings.NewReader(body))
ts := strconv.FormatInt(time.Now().Unix(), 10)
nonce := "nonce-body-1"
sts := signature.BuildStringToSign(http.MethodPost, "/v1/public/body", "a=1&b=2", []byte(body), "web-client", ts, nonce)
req.Header.Set("X-Signature-Enabled", "1")
req.Header.Set("X-App-Id", "web-client")
req.Header.Set("X-Timestamp", ts)
req.Header.Set("X-Nonce", nonce)
req.Header.Set("X-Signature", makeTestSignature("test-secret", sts))
resp := httptest.NewRecorder()
r.ServeHTTP(resp, req)
if resp.Code != http.StatusOK || resp.Body.String() != body {
t.Fatalf("expected restored body, got code=%d body=%s", resp.Code, resp.Body.String())
}
}

View File

@ -1,30 +0,0 @@
package auth
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestAlibabaCloudConfig_Marshal(t *testing.T) {
v := new(AlibabaCloudConfig)
t.Log(v.Marshal())
}
func TestAlibabaCloudConfig_Unmarshal(t *testing.T) {
cfg := AlibabaCloudConfig{
Access: "AccessKeyId",
Secret: "AccessKeySecret",
SignName: "SignName",
Endpoint: "Endpoint",
TemplateCode: "VerifyTemplateCode",
}
data := cfg.Marshal()
v := new(AlibabaCloudConfig)
err := v.Unmarshal(data)
if err != nil {
t.Fatal(err.Error())
}
assert.Equal(t, "AccessKeyId", v.Access)
}

View File

@ -1,12 +0,0 @@
package node
import (
"testing"
"github.com/stretchr/testify/require"
)
func TestNormalizeNodeTags(t *testing.T) {
tags := normalizeNodeTags([]string{"美国", " 日本 ", "", "美国", " ", "日本"})
require.Equal(t, []string{"美国", "日本"}, tags)
}

View File

@ -20,6 +20,7 @@ type Subscribe struct {
SpeedLimit int64 `gorm:"type:int;not null;default:0;comment:Speed Limit"`
DeviceLimit int64 `gorm:"type:int;not null;default:0;comment:Device Limit"`
Quota int64 `gorm:"type:int;not null;default:0;comment:Quota"`
+ NewUserOnly *bool `gorm:"type:tinyint(1);default:0;comment:New user only: allow purchase within 24h of registration, once per user"`
Nodes string `gorm:"type:varchar(255);comment:Node Ids"`
NodeTags string `gorm:"type:varchar(255);comment:Node Tags"`
Show *bool `gorm:"type:tinyint(1);not null;default:0;comment:Show portal page"`

View File

@ -246,7 +246,7 @@ func (m *customUserModel) BatchDeleteUser(ctx context.Context, ids []int64, tx .
if len(tx) > 0 {
conn = tx[0]
}
- return conn.Where("id in ?", ids).Find(&users).Error
+ return conn.Where("id in ?", ids).Preload("AuthMethods").Find(&users).Error
})
if err != nil {
return err

View File

@ -88,15 +88,18 @@ func (m *defaultUserModel) FindOneSubscribe(ctx context.Context, id int64) (*Sub
func (m *defaultUserModel) FindUsersSubscribeBySubscribeId(ctx context.Context, subscribeId int64) ([]*Subscribe, error) {
var data []*Subscribe
err := m.QueryNoCacheCtx(ctx, &data, func(conn *gorm.DB, v interface{}) error {
- err := conn.Model(&Subscribe{}).Where("subscribe_id = ? AND `status` IN ?", subscribeId, []int64{1, 0}).Find(v).Error
- if err != nil {
- return err
- }
- // update user subscribe status
- return conn.Model(&Subscribe{}).Where("subscribe_id = ? AND `status` = ?", subscribeId, 0).Update("status", 1).Error
+ return conn.Model(&Subscribe{}).Where("subscribe_id = ? AND `status` IN ?", subscribeId, []int64{1, 0}).Find(v).Error
})
- return data, err
+ if err != nil {
+ return nil, err
+ }
+ // Activate pending subscribes (status 0 -> 1) in a separate write operation
+ if err := m.ExecNoCacheCtx(ctx, func(conn *gorm.DB) error {
+ return conn.Model(&Subscribe{}).Where("subscribe_id = ? AND `status` = ?", subscribeId, 0).Update("status", 1).Error
+ }); err != nil {
+ return data, err
+ }
+ return data, nil
}
// QueryUserSubscribe returns a list of records that meet the conditions.
@ -136,6 +139,7 @@ func (m *defaultUserModel) QueryUserSubscribe(ctx context.Context, userId int64,
func (m *defaultUserModel) FindOneUserSubscribe(ctx context.Context, id int64) (subscribeDetails *SubscribeDetails, err error) {
//TODO cache
//key := fmt.Sprintf("%s%d", cacheUserSubscribeUserPrefix, userId)
+ subscribeDetails = &SubscribeDetails{}
err = m.QueryNoCacheCtx(ctx, subscribeDetails, func(conn *gorm.DB, v interface{}) error {
return conn.Model(&Subscribe{}).Preload("Subscribe").Where("id = ?", id).First(&subscribeDetails).Error
})

View File

@ -1,21 +0,0 @@
package report
import (
"testing"
)
func TestFreePort(t *testing.T) {
port, err := FreePort()
if err != nil {
t.Fatalf("FreePort() error: %v", err)
}
t.Logf("FreePort: %v", port)
}
func TestModulePort(t *testing.T) {
port, err := ModulePort()
if err != nil {
t.Fatalf("ModulePort() error: %v", err)
}
t.Logf("ModulePort: %v", port)
}

View File

@ -1,32 +0,0 @@
package trace
import (
"context"
"net/http"
"net/http/httptest"
"testing"
"github.com/stretchr/testify/assert"
sdktrace "go.opentelemetry.io/otel/sdk/trace"
semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
oteltrace "go.opentelemetry.io/otel/trace"
)
func TestSpanIDFromContext(t *testing.T) {
tracer := sdktrace.NewTracerProvider().Tracer("test")
ctx, span := tracer.Start(
context.Background(),
"foo",
oteltrace.WithSpanKind(oteltrace.SpanKindClient),
oteltrace.WithAttributes(semconv.HTTPClientAttributesFromHTTPRequest(httptest.NewRequest(http.MethodGet, "/", nil))...),
)
defer span.End()
assert.NotEmpty(t, TraceIDFromContext(ctx))
assert.NotEmpty(t, SpanIDFromContext(ctx))
}
func TestSpanIDFromContextEmpty(t *testing.T) {
assert.Empty(t, TraceIDFromContext(context.Background()))
assert.Empty(t, SpanIDFromContext(context.Background()))
}

View File

@ -1,37 +0,0 @@
package types
import (
"encoding/json"
"testing"
)
func TestDeleteAccountResponseAlwaysContainsIntFields(t *testing.T) {
data, err := json.Marshal(DeleteAccountResponse{
Success: true,
Message: "ok",
})
if err != nil {
t.Fatalf("failed to marshal response: %v", err)
}
var decoded map[string]interface{}
if err = json.Unmarshal(data, &decoded); err != nil {
t.Fatalf("failed to decode response: %v", err)
}
userID, hasUserID := decoded["user_id"]
if !hasUserID {
t.Fatalf("expected user_id in JSON, got %s", string(data))
}
if userID != float64(0) {
t.Fatalf("expected user_id=0, got %v", userID)
}
code, hasCode := decoded["code"]
if !hasCode {
t.Fatalf("expected code in JSON, got %s", string(data))
}
if code != float64(0) {
t.Fatalf("expected code=0, got %v", code)
}
}

View File

@ -458,6 +458,7 @@ type CreateSubscribeRequest struct {
SpeedLimit int64 `json:"speed_limit"`
DeviceLimit int64 `json:"device_limit"`
Quota int64 `json:"quota"`
+ NewUserOnly *bool `json:"new_user_only"`
Nodes []int64 `json:"nodes"`
NodeTags []string `json:"node_tags"`
Show *bool `json:"show"`
@ -2402,6 +2403,7 @@ type Subscribe struct {
SpeedLimit int64 `json:"speed_limit"`
DeviceLimit int64 `json:"device_limit"`
Quota int64 `json:"quota"`
+ NewUserOnly bool `json:"new_user_only"`
Nodes []int64 `json:"nodes"`
NodeTags []string `json:"node_tags"`
Show bool `json:"show"`
@ -2805,6 +2807,7 @@ type UpdateSubscribeRequest struct {
SpeedLimit int64 `json:"speed_limit"`
DeviceLimit int64 `json:"device_limit"`
Quota int64 `json:"quota"`
+ NewUserOnly *bool `json:"new_user_only"`
Nodes []int64 `json:"nodes"`
NodeTags []string `json:"node_tags"`
Show *bool `json:"show"`

View File

@ -1,29 +0,0 @@
package pkgaes
import (
"encoding/json"
"testing"
"github.com/stretchr/testify/assert"
)
func TestAes(t *testing.T) {
params := map[string]interface{}{
"method": "email",
"account": "admin@ppanel.dev",
"password": "password",
}
marshal, _ := json.Marshal(params)
jsonStr := string(marshal)
encrypt, iv, err := Encrypt([]byte(jsonStr), "123456")
if err != nil {
t.Fatalf("encrypt failed: %v", err)
}
decrypt, err := Decrypt(encrypt, "123456", iv)
if err != nil {
t.Fatalf("decrypt failed: %v", err)
}
assert.Equal(t, jsonStr, decrypt, "decrypt failed")
}

View File

@ -1,55 +0,0 @@
package apiversion
import "testing"
func TestParse(t *testing.T) {
tests := []struct {
name string
raw string
valid bool
version Version
}{
{name: "empty", raw: "", valid: false},
{name: "invalid text", raw: "abc", valid: false},
{name: "missing patch", raw: "1.0", valid: false},
{name: "exact", raw: "1.0.0", valid: true, version: Version{Major: 1, Minor: 0, Patch: 0}},
{name: "with prefix", raw: "v1.2.3", valid: true, version: Version{Major: 1, Minor: 2, Patch: 3}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
version, ok := Parse(tt.raw)
if ok != tt.valid {
t.Fatalf("expected valid=%v, got %v", tt.valid, ok)
}
if tt.valid && version != tt.version {
t.Fatalf("expected version=%+v, got %+v", tt.version, version)
}
})
}
}
func TestUseLatest(t *testing.T) {
tests := []struct {
name string
header string
threshold string
expect bool
}{
{name: "missing header", header: "", threshold: "1.0.0", expect: false},
{name: "invalid header", header: "invalid", threshold: "1.0.0", expect: false},
{name: "equal threshold", header: "1.0.0", threshold: "1.0.0", expect: false},
{name: "greater threshold", header: "1.0.1", threshold: "1.0.0", expect: true},
{name: "greater with v prefix", header: "v1.2.3", threshold: "1.0.0", expect: true},
{name: "less than threshold", header: "0.9.9", threshold: "1.0.0", expect: false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := UseLatest(tt.header, tt.threshold)
if result != tt.expect {
t.Fatalf("expected %v, got %v", tt.expect, result)
}
})
}
}

View File

@ -1,28 +0,0 @@
package cache
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func TestCacheOptions(t *testing.T) {
t.Run("default options", func(t *testing.T) {
o := newOptions()
assert.Equal(t, defaultExpiry, o.Expiry)
assert.Equal(t, defaultNotFoundExpiry, o.NotFoundExpiry)
})
t.Run("with expiry", func(t *testing.T) {
o := newOptions(WithExpiry(time.Second))
assert.Equal(t, time.Second, o.Expiry)
assert.Equal(t, defaultNotFoundExpiry, o.NotFoundExpiry)
})
t.Run("with not found expiry", func(t *testing.T) {
o := newOptions(WithNotFoundExpiry(time.Second))
assert.Equal(t, defaultExpiry, o.Expiry)
assert.Equal(t, time.Second, o.NotFoundExpiry)
})
}

pkg/cache/gorm.go vendored
View File

@ -109,6 +109,13 @@ func (cc CachedConn) QueryCtx(ctx context.Context, v interface{}, key string, qu
}
return cc.SetCache(key, v)
}
+ // Cache data corrupted (e.g. bad JSON), delete key and fall through to DB
+ _ = cc.DelCache(key)
+ err = query(cc.db.WithContext(ctx), v)
+ if err != nil {
+ return err
+ }
+ return cc.SetCache(key, v)
}
return
}

pkg/cache/gorm_test.go vendored
View File

@ -2,65 +2,182 @@ package cache
import (
"context"
"encoding/json"
"errors"
"testing"
"time"
"github.com/perfect-panel/server/pkg/orm"
"github.com/alicebob/miniredis/v2"
"github.com/redis/go-redis/v9"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gorm.io/driver/sqlite"
"gorm.io/gorm"
"gorm.io/plugin/soft_delete"
)
type User struct {
Id int64 `gorm:"primarykey"`
Email string `gorm:"index:idx_email;type:varchar(100);unique;not null;comment:电子邮箱"`
Password string `gorm:"type:varchar(100);comment:用户密码;not null"`
Avatar string `gorm:"type:varchar(200);default:'';comment:用户头像"`
Balance int64 `gorm:"default:0;comment:用户余额"`
Telegram int64 `gorm:"default:null;comment:Telegram账号"`
ReferCode string `gorm:"type:varchar(20);default:'';comment:推荐码"`
RefererId int64 `gorm:"comment:推荐人ID"`
Enable bool `gorm:"default:true;not null;comment:账户是否可用"`
IsAdmin bool `gorm:"default:false;not null;comment:是否管理员"`
ValidEmail bool `gorm:"default:false;not null;comment:是否验证邮箱"`
EnableEmailNotify bool `gorm:"default:false;not null;comment:是否启用邮件通知"`
EnableTelegramNotify bool `gorm:"default:false;not null;comment:是否启用Telegram通知"`
EnableBalanceNotify bool `gorm:"default:false;not null;comment:是否启用余额变动通知"`
EnableLoginNotify bool `gorm:"default:false;not null;comment:是否启用登录通知"`
EnableSubscribeNotify bool `gorm:"default:false;not null;comment:是否启用订阅通知"`
EnableTradeNotify bool `gorm:"default:false;not null;comment:是否启用交易通知"`
CreatedAt time.Time `gorm:"<-:create;comment:创建时间"`
UpdatedAt time.Time `gorm:"comment:更新时间"`
DeletedAt gorm.DeletedAt `gorm:"default:null;comment:删除时间"`
IsDel soft_delete.DeletedAt `gorm:"softDelete:flag,DeletedAtField:DeletedAt;comment:1:正常 0:删除"` // Use `1` `0` to identify
// testUser is a simple struct used across all QueryCtx tests.
type testUser struct {
ID int64 `json:"id"`
Name string `json:"name"`
}
func TestGormCacheCtx(t *testing.T) {
t.Skipf("skip TestGormCacheCtx test")
db, err := orm.ConnectMysql(orm.Mysql{
Config: orm.Config{
Addr: "localhost:3306",
Config: "charset=utf8mb4&parseTime=true&loc=Asia%2FShanghai",
Dbname: "vpnboard",
Username: "root",
Password: "mylove520",
},
// setupCachedConn creates a CachedConn backed by a real miniredis instance
// and a bare *gorm.DB (no real database connection needed because the
// QueryCtxFn callback is fully under our control).
func setupCachedConn(t *testing.T) (CachedConn, *miniredis.Miniredis) {
t.Helper()
mr := miniredis.RunT(t)
rdb := redis.NewClient(&redis.Options{
Addr: mr.Addr(),
})
if err != nil {
t.Error(err)
}
rds := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
})
conn := NewConn(db, rds)
var u User
key := "user:id"
err = conn.QueryCtx(context.Background(), &u, key, func(conn *gorm.DB, v interface{}) error {
return conn.Where("id = ?", 1).First(v).Error
})
if err != nil {
t.Error(err)
return
}
t.Logf("get cache success %+v", u)
t.Cleanup(func() { rdb.Close() })
// Use SQLite in-memory to get a properly initialized *gorm.DB.
db, err := gorm.Open(sqlite.Open(":memory:"), &gorm.Config{})
require.NoError(t, err)
cc := NewConn(db, rdb, WithExpiry(time.Minute))
return cc, mr
}
func TestQueryCtx_CacheHit(t *testing.T) {
cc, mr := setupCachedConn(t)
ctx := context.Background()
key := "cache:user:1"
// Pre-populate the cache with valid JSON.
expected := testUser{ID: 1, Name: "Alice"}
data, err := json.Marshal(expected)
require.NoError(t, err)
mr.Set(key, string(data))
// Track whether the DB query function is called.
dbCalled := false
queryFn := func(conn *gorm.DB, v interface{}) error {
dbCalled = true
return nil
}
var result testUser
err = cc.QueryCtx(ctx, &result, key, queryFn)
assert.NoError(t, err)
assert.False(t, dbCalled, "DB query should NOT be called on cache hit")
assert.Equal(t, expected.ID, result.ID)
assert.Equal(t, expected.Name, result.Name)
}
func TestQueryCtx_CacheMiss_QueriesDB_SetsCache(t *testing.T) {
cc, mr := setupCachedConn(t)
ctx := context.Background()
key := "cache:user:2"
// Do NOT pre-populate the cache -- this is a cache miss scenario.
dbCalled := false
queryFn := func(conn *gorm.DB, v interface{}) error {
dbCalled = true
u := v.(*testUser)
u.ID = 2
u.Name = "Bob"
return nil
}
var result testUser
err := cc.QueryCtx(ctx, &result, key, queryFn)
assert.NoError(t, err)
assert.True(t, dbCalled, "DB query should be called on cache miss")
assert.Equal(t, int64(2), result.ID)
assert.Equal(t, "Bob", result.Name)
// Verify the value was written back to cache.
cached, cacheErr := mr.Get(key)
require.NoError(t, cacheErr)
var cachedUser testUser
require.NoError(t, json.Unmarshal([]byte(cached), &cachedUser))
assert.Equal(t, int64(2), cachedUser.ID)
assert.Equal(t, "Bob", cachedUser.Name)
}
func TestQueryCtx_CorruptedCache_SelfHeals(t *testing.T) {
cc, mr := setupCachedConn(t)
ctx := context.Background()
key := "cache:user:3"
// Store invalid JSON in the cache to simulate corruption.
mr.Set(key, "THIS IS NOT VALID JSON{{{")
dbCalled := false
queryFn := func(conn *gorm.DB, v interface{}) error {
dbCalled = true
u := v.(*testUser)
u.ID = 3
u.Name = "Charlie"
return nil
}
var result testUser
err := cc.QueryCtx(ctx, &result, key, queryFn)
assert.NoError(t, err)
assert.True(t, dbCalled, "DB query should be called when cache is corrupted")
assert.Equal(t, int64(3), result.ID)
assert.Equal(t, "Charlie", result.Name)
// Verify the corrupt key was replaced with valid data.
cached, cacheErr := mr.Get(key)
require.NoError(t, cacheErr)
var cachedUser testUser
require.NoError(t, json.Unmarshal([]byte(cached), &cachedUser))
assert.Equal(t, int64(3), cachedUser.ID)
assert.Equal(t, "Charlie", cachedUser.Name)
}
func TestQueryCtx_CacheMiss_DBFails_ReturnsError(t *testing.T) {
cc, mr := setupCachedConn(t)
ctx := context.Background()
key := "cache:user:4"
// No cache entry -- this is a miss.
dbErr := errors.New("connection refused")
queryFn := func(conn *gorm.DB, v interface{}) error {
return dbErr
}
var result testUser
err := cc.QueryCtx(ctx, &result, key, queryFn)
assert.Error(t, err)
assert.Equal(t, dbErr, err)
// Cache should remain empty -- no value was written.
assert.False(t, mr.Exists(key), "cache should NOT be set when DB query fails")
}
func TestQueryCtx_CorruptedCache_DBFails_ReturnsError(t *testing.T) {
cc, mr := setupCachedConn(t)
ctx := context.Background()
key := "cache:user:5"
// Store invalid JSON to trigger the corruption branch.
mr.Set(key, "<<<CORRUPT>>>")
dbErr := errors.New("database is down")
queryFn := func(conn *gorm.DB, v interface{}) error {
return dbErr
}
var result testUser
err := cc.QueryCtx(ctx, &result, key, queryFn)
assert.Error(t, err)
assert.Equal(t, dbErr, err)
// The corrupt key should have been deleted (DelCache was called),
// and no new value was set because the DB query failed.
assert.False(t, mr.Exists(key), "corrupt key should be deleted even when DB fails")
}

View File

@ -1,13 +0,0 @@
package calculateMonths
import (
"testing"
"time"
)
func TestCalculateMonths(t *testing.T) {
startTime, _ := time.Parse(time.DateTime, "2025-01-15 00:00:00")
EndTime, _ := time.Parse(time.DateTime, "2025-05-15 00:00:00")
months := CalculateMonths(startTime, EndTime)
t.Log(months)
}

View File

@ -1,17 +0,0 @@
package color
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestWithColor(t *testing.T) {
output := WithColor("Hello", BgRed)
assert.Equal(t, "Hello", output)
}
func TestWithColorPadding(t *testing.T) {
output := WithColorPadding("Hello", BgRed)
assert.Equal(t, " Hello ", output)
}

View File

@ -1,18 +0,0 @@
package conf
import "testing"
type Server struct {
Host string `yaml:"Host" default:"localhost"`
Port int `yaml:"Port" default:"8080"`
}
type Config struct {
Server Server `yaml:"Server"`
}
func TestConfigLoad(t *testing.T) {
var c Config
MustLoad("./config_test.yaml", &c)
t.Logf("config: %+v", c)
}

View File

@ -1,3 +0,0 @@
Server:
Port: 9999
Host: 0.0.0.0

View File

@ -1,665 +0,0 @@
package deduction
import (
"math"
"testing"
"time"
)
func TestSubscribe_Validate(t *testing.T) {
tests := []struct {
name string
sub Subscribe
wantErr bool
errType error
}{
{
name: "valid subscription",
sub: Subscribe{
StartTime: time.Now(),
ExpireTime: time.Now().Add(24 * time.Hour),
Traffic: 1000,
Download: 100,
Upload: 200,
UnitTime: UnitTimeMonth,
DeductionRatio: 50,
},
wantErr: false,
},
{
name: "negative traffic",
sub: Subscribe{
StartTime: time.Now(),
ExpireTime: time.Now().Add(24 * time.Hour),
Traffic: -1000,
Download: 100,
Upload: 200,
UnitTime: UnitTimeMonth,
DeductionRatio: 50,
},
wantErr: true,
errType: ErrInvalidTraffic,
},
{
name: "negative download",
sub: Subscribe{
StartTime: time.Now(),
ExpireTime: time.Now().Add(24 * time.Hour),
Traffic: 1000,
Download: -100,
Upload: 200,
UnitTime: UnitTimeMonth,
DeductionRatio: 50,
},
wantErr: true,
errType: ErrInvalidTraffic,
},
{
name: "download + upload exceeds traffic",
sub: Subscribe{
StartTime: time.Now(),
ExpireTime: time.Now().Add(24 * time.Hour),
Traffic: 1000,
Download: 600,
Upload: 500,
UnitTime: UnitTimeMonth,
DeductionRatio: 50,
},
wantErr: true,
},
{
name: "expire time before start time",
sub: Subscribe{
StartTime: time.Now(),
ExpireTime: time.Now().Add(-24 * time.Hour),
Traffic: 1000,
Download: 100,
Upload: 200,
UnitTime: UnitTimeMonth,
DeductionRatio: 50,
},
wantErr: true,
errType: ErrInvalidTimeRange,
},
{
name: "invalid deduction ratio - negative",
sub: Subscribe{
StartTime: time.Now(),
ExpireTime: time.Now().Add(24 * time.Hour),
Traffic: 1000,
Download: 100,
Upload: 200,
UnitTime: UnitTimeMonth,
DeductionRatio: -10,
},
wantErr: true,
errType: ErrInvalidDeductionRatio,
},
{
name: "invalid deduction ratio - over 100",
sub: Subscribe{
StartTime: time.Now(),
ExpireTime: time.Now().Add(24 * time.Hour),
Traffic: 1000,
Download: 100,
Upload: 200,
UnitTime: UnitTimeMonth,
DeductionRatio: 150,
},
wantErr: true,
errType: ErrInvalidDeductionRatio,
},
{
name: "invalid unit time",
sub: Subscribe{
StartTime: time.Now(),
ExpireTime: time.Now().Add(24 * time.Hour),
Traffic: 1000,
Download: 100,
Upload: 200,
UnitTime: "InvalidUnit",
DeductionRatio: 50,
},
wantErr: true,
errType: ErrInvalidUnitTime,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := tt.sub.Validate()
if (err != nil) != tt.wantErr {
t.Errorf("Subscribe.Validate() error = %v, wantErr %v", err, tt.wantErr)
return
}
if tt.errType != nil && err != tt.errType {
t.Errorf("Subscribe.Validate() error = %v, want %v", err, tt.errType)
}
})
}
}
func TestOrder_Validate(t *testing.T) {
tests := []struct {
name string
order Order
wantErr bool
errType error
}{
{
name: "valid order",
order: Order{Amount: 1000, Quantity: 2},
wantErr: false,
},
{
name: "zero quantity",
order: Order{Amount: 1000, Quantity: 0},
wantErr: true,
errType: ErrInvalidQuantity,
},
{
name: "negative quantity",
order: Order{Amount: 1000, Quantity: -1},
wantErr: true,
errType: ErrInvalidQuantity,
},
{
name: "negative amount",
order: Order{Amount: -1000, Quantity: 2},
wantErr: true,
errType: ErrInvalidAmount,
},
{
name: "zero amount is valid",
order: Order{Amount: 0, Quantity: 1},
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := tt.order.Validate()
if (err != nil) != tt.wantErr {
t.Errorf("Order.Validate() error = %v, wantErr %v", err, tt.wantErr)
return
}
if tt.errType != nil && err != tt.errType {
t.Errorf("Order.Validate() error = %v, want %v", err, tt.errType)
}
})
}
}
func TestSafeMultiply(t *testing.T) {
tests := []struct {
name string
a, b int64
want int64
wantErr bool
}{
{
name: "normal multiplication",
a: 10,
b: 20,
want: 200,
wantErr: false,
},
{
name: "zero multiplication",
a: 10,
b: 0,
want: 0,
wantErr: false,
},
{
name: "negative multiplication",
a: -10,
b: 20,
want: -200,
wantErr: false,
},
{
name: "overflow case",
a: math.MaxInt64,
b: 2,
want: 0,
wantErr: true,
},
{
name: "large numbers no overflow",
a: 1000000,
b: 1000000,
want: 1000000000000,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := safeMultiply(tt.a, tt.b)
if (err != nil) != tt.wantErr {
t.Errorf("safeMultiply() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("safeMultiply() = %v, want %v", got, tt.want)
}
})
}
}
func TestSafeAdd(t *testing.T) {
tests := []struct {
name string
a, b int64
want int64
wantErr bool
}{
{
name: "normal addition",
a: 10,
b: 20,
want: 30,
wantErr: false,
},
{
name: "negative addition",
a: -10,
b: 5,
want: -5,
wantErr: false,
},
{
name: "overflow case",
a: math.MaxInt64,
b: 1,
want: 0,
wantErr: true,
},
{
name: "underflow case",
a: math.MinInt64,
b: -1,
want: 0,
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := safeAdd(tt.a, tt.b)
if (err != nil) != tt.wantErr {
t.Errorf("safeAdd() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("safeAdd() = %v, want %v", got, tt.want)
}
})
}
}
func TestSafeDivide(t *testing.T) {
tests := []struct {
name string
a, b int64
want int64
wantErr bool
}{
{
name: "normal division",
a: 20,
b: 10,
want: 2,
wantErr: false,
},
{
name: "division by zero",
a: 20,
b: 0,
want: 0,
wantErr: true,
},
{
name: "negative division",
a: -20,
b: 10,
want: -2,
wantErr: false,
},
{
name: "zero dividend",
a: 0,
b: 10,
want: 0,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := safeDivide(tt.a, tt.b)
if (err != nil) != tt.wantErr {
t.Errorf("safeDivide() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("safeDivide() = %v, want %v", got, tt.want)
}
})
}
}
func TestCalculateWeights(t *testing.T) {
tests := []struct {
name string
deductionRatio int64
wantTrafficWeight float64
wantTimeWeight float64
}{
{
name: "zero ratio",
deductionRatio: 0,
wantTrafficWeight: 0,
wantTimeWeight: 0,
},
{
name: "50% ratio",
deductionRatio: 50,
wantTrafficWeight: 0.5,
wantTimeWeight: 0.5,
},
{
name: "75% ratio",
deductionRatio: 75,
wantTrafficWeight: 0.75,
wantTimeWeight: 0.25,
},
{
name: "100% ratio",
deductionRatio: 100,
wantTrafficWeight: 1.0,
wantTimeWeight: 0.0,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotTrafficWeight, gotTimeWeight := calculateWeights(tt.deductionRatio)
if gotTrafficWeight != tt.wantTrafficWeight {
t.Errorf("calculateWeights() trafficWeight = %v, want %v", gotTrafficWeight, tt.wantTrafficWeight)
}
if gotTimeWeight != tt.wantTimeWeight {
t.Errorf("calculateWeights() timeWeight = %v, want %v", gotTimeWeight, tt.wantTimeWeight)
}
})
}
}
func TestCalculateProportionalAmount(t *testing.T) {
tests := []struct {
name string
unitPrice int64
remaining int64
total int64
want int64
wantErr bool
}{
{
name: "normal calculation",
unitPrice: 100,
remaining: 50,
total: 100,
want: 50,
wantErr: false,
},
{
name: "zero total",
unitPrice: 100,
remaining: 50,
total: 0,
want: 0,
wantErr: false,
},
{
name: "zero remaining",
unitPrice: 100,
remaining: 0,
total: 100,
want: 0,
wantErr: false,
},
{
name: "quarter remaining",
unitPrice: 200,
remaining: 25,
total: 100,
want: 50,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := calculateProportionalAmount(tt.unitPrice, tt.remaining, tt.total)
if (err != nil) != tt.wantErr {
t.Errorf("calculateProportionalAmount() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("calculateProportionalAmount() = %v, want %v", got, tt.want)
}
})
}
}
func TestCalculateNoLimitAmount(t *testing.T) {
tests := []struct {
name string
sub Subscribe
order Order
want int64
wantErr bool
}{
{
name: "normal no limit calculation",
sub: Subscribe{
Traffic: 1000,
Download: 300,
Upload: 200,
},
order: Order{
Amount: 1000,
},
want: 500, // (1000 - 300 - 200) / 1000 * 1000 = 500
wantErr: false,
},
{
name: "zero traffic",
sub: Subscribe{
Traffic: 0,
Download: 0,
Upload: 0,
},
order: Order{
Amount: 1000,
},
want: 0,
wantErr: false,
},
{
name: "overused traffic",
sub: Subscribe{
Traffic: 1000,
Download: 600,
Upload: 500,
},
order: Order{
Amount: 1000,
},
want: 0, // usedTraffic would be negative, clamped to 0
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := calculateNoLimitAmount(tt.sub, tt.order)
if (err != nil) != tt.wantErr {
t.Errorf("calculateNoLimitAmount() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("calculateNoLimitAmount() = %v, want %v", got, tt.want)
}
})
}
}
func TestCalculateRemainingAmount(t *testing.T) {
now := time.Now()
tests := []struct {
name string
sub Subscribe
order Order
wantErr bool
}{
{
name: "valid no limit subscription",
sub: Subscribe{
StartTime: now.Add(-24 * time.Hour),
ExpireTime: now.Add(24 * time.Hour),
Traffic: 1000,
Download: 300,
Upload: 200,
UnitTime: UnitTimeNoLimit,
ResetCycle: ResetCycleNone,
DeductionRatio: 0,
},
order: Order{
Amount: 1000,
Quantity: 1,
},
wantErr: false,
},
{
name: "invalid subscription",
sub: Subscribe{
StartTime: now,
ExpireTime: now.Add(-24 * time.Hour), // Invalid: expire before start
Traffic: 1000,
Download: 300,
Upload: 200,
UnitTime: UnitTimeMonth,
DeductionRatio: 0,
},
order: Order{
Amount: 1000,
Quantity: 1,
},
wantErr: true,
},
{
name: "invalid order",
sub: Subscribe{
StartTime: now.Add(-24 * time.Hour),
ExpireTime: now.Add(24 * time.Hour),
Traffic: 1000,
Download: 300,
Upload: 200,
UnitTime: UnitTimeMonth,
DeductionRatio: 0,
},
order: Order{
Amount: 1000,
Quantity: 0, // Invalid: zero quantity
},
wantErr: true,
},
{
name: "no limit with reset cycle",
sub: Subscribe{
StartTime: now.Add(-24 * time.Hour),
ExpireTime: now.Add(24 * time.Hour),
Traffic: 1000,
Download: 300,
Upload: 200,
UnitTime: UnitTimeNoLimit,
ResetCycle: ResetCycleMonthly, // Should return 0
DeductionRatio: 0,
},
order: Order{
Amount: 1000,
Quantity: 1,
},
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := CalculateRemainingAmount(tt.sub, tt.order)
if (err != nil) != tt.wantErr {
t.Errorf("CalculateRemainingAmount() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
func TestCalculateRemainingAmount_NoLimitWithResetCycle(t *testing.T) {
now := time.Now()
sub := Subscribe{
StartTime: now.Add(-24 * time.Hour),
ExpireTime: now.Add(24 * time.Hour),
Traffic: 1000,
Download: 300,
Upload: 200,
UnitTime: UnitTimeNoLimit,
ResetCycle: ResetCycleMonthly,
DeductionRatio: 0,
}
order := Order{
Amount: 1000,
Quantity: 1,
}
got, err := CalculateRemainingAmount(sub, order)
if err != nil {
t.Errorf("CalculateRemainingAmount() error = %v", err)
return
}
if got != 0 {
t.Errorf("CalculateRemainingAmount() = %v, want 0", got)
}
}
// Benchmark tests
func BenchmarkCalculateRemainingAmount(b *testing.B) {
now := time.Now()
sub := Subscribe{
StartTime: now.Add(-24 * time.Hour),
ExpireTime: now.Add(24 * time.Hour),
Traffic: 1000,
Download: 300,
Upload: 200,
UnitTime: UnitTimeMonth,
ResetCycle: ResetCycleNone,
DeductionRatio: 50,
}
order := Order{
Amount: 1000,
Quantity: 1,
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, _ = CalculateRemainingAmount(sub, order)
}
}
func BenchmarkSafeMultiply(b *testing.B) {
for i := 0; i < b.N; i++ {
_, _ = safeMultiply(12345, 67890)
}
}

@ -1,123 +0,0 @@
package device
import (
"encoding/json"
"fmt"
"io"
"log"
"net"
"net/http"
"strings"
"sync"
"testing"
"time"
"github.com/pkg/errors"
"github.com/gorilla/websocket"
)
func TestDevice(t *testing.T) {
t.Skip("skip test")
/* deviceManager := NewDeviceManager(10, 3)
deviceManager.OnDeviceOnline = func(userID int64, deviceID, session string) {
fmt.Printf("✅ device %s (user %d) online\n", deviceID, userID)
}
deviceManager.OnDeviceOffline = func(userID int64, deviceID, session string) {
fmt.Printf("❌ device %s (user %d) offline\n", deviceID, userID)
}
deviceManager.OnDeviceKicked = func(userID int64, deviceID, session string, operator Operator) {
fmt.Printf("⚠️ device %s (user %d) was kicked offline\n", deviceID, userID)
}
deviceManager.OnMessage = func(userID int64, deviceID, session string, message string) {
log.Printf("✅ message received: device %s (user %d) content: %s, session: %s\n", deviceID, userID, message, session)
}
engine := gin.Default()
engine.GET("/ws/:userid/:device_number", func(c *gin.Context) {
// get the session from the Authorization header
authorization := c.GetHeader("Authorization")
userid, err := strconv.ParseInt(c.Param("userid"), 10, 64)
if err != nil {
t.Errorf("get user id err:%v", err)
return
}
deviceNumber := c.Param("device_number")
deviceManager.AddDevice(c, authorization, userid, deviceNumber, 3)
return
})
go func() {
err := http.ListenAndServe(":8081", engine)
if err != nil {
t.Fatalf("engine start failed: %v", err)
}
}()
*/
h := http.Header{}
h.Add("Authorization", "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJTZXNzaW9uSWQiOiIwMTk0Y2ZiNy1hYjY0LTdjYjMtODUzYi03ZGU5YTAzNWRlZTgiLCJVc2VySWQiOjI5LCJleHAiOjE3MzkyNTY1MDgsImlhdCI6MTczODY1MTcwOH0.BGKT5-hongJPZrA_yAb6cf6go5iDR8T9uu1ZxUg8HDw")
mutex := sync.Mutex{}
serverURL := fmt.Sprintf("ws://localhost:8080/v1/app/ws/%d/%s", 29, "15502502051") // user ID 29, device number 15502502051
// establish the WebSocket connection
conn, resp, err := websocket.DefaultDialer.Dial(serverURL, h)
if err != nil {
// don't shadow the dial error, and guard against a nil response
var body []byte
if resp != nil {
body, _ = io.ReadAll(resp.Body)
}
t.Fatalf("websocket dial failed: %v: %s", err, string(body))
}
// start a goroutine to read messages from the server
go func() {
for {
_, msg, err := conn.ReadMessage()
if err != nil {
if errors.Is(err, net.ErrClosed) || strings.Contains(err.Error(), "use of closed network connection") {
log.Println("connection closed")
return
}
log.Printf("failed to receive message: %v", err)
return
}
fmt.Printf("message from server: %s\n", msg)
}
}()
// send a heartbeat every 5 seconds
go func() {
ticker := time.NewTicker(time.Second * 5)
defer ticker.Stop()
for range ticker.C {
mutex.Lock()
err := conn.WriteMessage(websocket.TextMessage, []byte("ping"))
mutex.Unlock()
if err != nil {
if strings.Contains(err.Error(), "use of closed network connection") {
log.Println("connection closed")
return
}
t.Errorf("websocket 写入失败: %v", err)
return
}
}
}()
updateSubscribe, _ := json.Marshal(map[string]interface{}{
"method": "test_method",
})
// send a single message
mutex.Lock()
err = conn.WriteMessage(websocket.TextMessage, updateSubscribe)
mutex.Unlock()
if err != nil {
t.Errorf("websocket write failed: %v", err)
}
time.Sleep(time.Second * 20)
conn.Close()
time.Sleep(time.Second * 5)
}

@ -1,24 +0,0 @@
package smtp
import "testing"
func TestEmailSend(t *testing.T) {
t.Skipf("Skip TestEmailSend")
config := &Config{
Host: "smtp.mail.me.com",
Port: 587,
User: "support@ppanel.dev",
Pass: "password",
From: "support@ppanel.dev",
SSL: true,
SiteName: "",
}
address := []string{"tension@sparkdance.dev"}
subject := "test"
body := "test"
email := NewClient(config)
err := email.Send(address, subject, body)
if err != nil {
t.Errorf("send email error: %v", err)
}
}

@ -1,36 +0,0 @@
package email
import (
"bytes"
"html/template"
"testing"
)
type VerifyTemplate struct {
Type uint8
SiteLogo string
SiteName string
Expire uint8
Code string
}
func TestVerifyEmail(t *testing.T) {
t.Skipf("Skip TestVerifyEmail test")
data := VerifyTemplate{
Type: 1,
SiteLogo: "https://www.google.com",
SiteName: "Google",
Expire: 5,
Code: "123456",
}
tpl, err := template.New("email").Parse(DefaultEmailVerifyTemplate)
if err != nil {
t.Error(err)
}
var result bytes.Buffer
err = tpl.Execute(&result, data)
if err != nil {
t.Error(err)
}
t.Log(result.String())
}

@ -1,82 +0,0 @@
package errorx
import (
"errors"
"sync"
"sync/atomic"
"testing"
"github.com/stretchr/testify/assert"
)
var errDummy = errors.New("hello")
func TestAtomicError(t *testing.T) {
var err AtomicError
err.Set(errDummy)
assert.Equal(t, errDummy, err.Load())
}
func TestAtomicErrorSetNil(t *testing.T) {
var (
errNil error
err AtomicError
)
err.Set(errNil)
assert.Equal(t, errNil, err.Load())
}
func TestAtomicErrorNil(t *testing.T) {
var err AtomicError
assert.Nil(t, err.Load())
}
func BenchmarkAtomicError(b *testing.B) {
var aerr AtomicError
wg := sync.WaitGroup{}
b.Run("Load", func(b *testing.B) {
var done uint32
go func() {
for {
if atomic.LoadUint32(&done) != 0 {
break
}
wg.Add(1)
go func() {
aerr.Set(errDummy)
wg.Done()
}()
}
}()
b.ResetTimer()
for i := 0; i < b.N; i++ {
_ = aerr.Load()
}
b.StopTimer()
atomic.StoreUint32(&done, 1)
wg.Wait()
})
b.Run("Set", func(b *testing.B) {
var done uint32
go func() {
for {
if atomic.LoadUint32(&done) != 0 {
break
}
wg.Add(1)
go func() {
_ = aerr.Load()
wg.Done()
}()
}
}()
b.ResetTimer()
for i := 0; i < b.N; i++ {
aerr.Set(errDummy)
}
b.StopTimer()
atomic.StoreUint32(&done, 1)
wg.Wait()
})
}

@ -1,147 +0,0 @@
package errorx
import (
"errors"
"fmt"
"sync"
"testing"
"github.com/stretchr/testify/assert"
)
const (
err1 = "first error"
err2 = "second error"
)
func TestBatchErrorNil(t *testing.T) {
var batch BatchError
assert.Nil(t, batch.Err())
assert.False(t, batch.NotNil())
batch.Add(nil)
assert.Nil(t, batch.Err())
assert.False(t, batch.NotNil())
}
func TestBatchErrorNilFromFunc(t *testing.T) {
err := func() error {
var be BatchError
return be.Err()
}()
assert.True(t, err == nil)
}
func TestBatchErrorOneError(t *testing.T) {
var batch BatchError
batch.Add(errors.New(err1))
assert.NotNil(t, batch.Err())
assert.Equal(t, err1, batch.Err().Error())
assert.True(t, batch.NotNil())
}
func TestBatchErrorWithErrors(t *testing.T) {
var batch BatchError
batch.Add(errors.New(err1))
batch.Add(errors.New(err2))
assert.NotNil(t, batch.Err())
assert.Equal(t, fmt.Sprintf("%s\n%s", err1, err2), batch.Err().Error())
assert.True(t, batch.NotNil())
}
func TestBatchErrorConcurrentAdd(t *testing.T) {
const count = 10000
var batch BatchError
var wg sync.WaitGroup
wg.Add(count)
for i := 0; i < count; i++ {
go func() {
defer wg.Done()
batch.Add(errors.New(err1))
}()
}
wg.Wait()
assert.NotNil(t, batch.Err())
assert.Equal(t, count, len(batch.errs))
assert.True(t, batch.NotNil())
}
func TestBatchError_Unwrap(t *testing.T) {
t.Run("nil", func(t *testing.T) {
var be BatchError
assert.Nil(t, be.Err())
assert.True(t, errors.Is(be.Err(), nil))
})
t.Run("one error", func(t *testing.T) {
var errFoo = errors.New("foo")
var errBar = errors.New("bar")
var be BatchError
be.Add(errFoo)
assert.True(t, errors.Is(be.Err(), errFoo))
assert.False(t, errors.Is(be.Err(), errBar))
})
t.Run("two errors", func(t *testing.T) {
var errFoo = errors.New("foo")
var errBar = errors.New("bar")
var errBaz = errors.New("baz")
var be BatchError
be.Add(errFoo)
be.Add(errBar)
assert.True(t, errors.Is(be.Err(), errFoo))
assert.True(t, errors.Is(be.Err(), errBar))
assert.False(t, errors.Is(be.Err(), errBaz))
})
}
func TestBatchError_Add(t *testing.T) {
var be BatchError
// Test adding nil errors
be.Add(nil, nil)
assert.False(t, be.NotNil(), "Expected BatchError to be empty after adding nil errors")
// Test adding non-nil errors
err1 := errors.New("error 1")
err2 := errors.New("error 2")
be.Add(err1, err2)
assert.True(t, be.NotNil(), "Expected BatchError to be non-empty after adding errors")
// Test adding a mix of nil and non-nil errors
err3 := errors.New("error 3")
be.Add(nil, err3, nil)
assert.True(t, be.NotNil(), "Expected BatchError to be non-empty after adding a mix of nil and non-nil errors")
}
func TestBatchError_Err(t *testing.T) {
var be BatchError
// Test Err() on empty BatchError
assert.Nil(t, be.Err(), "Expected nil error for empty BatchError")
// Test Err() with multiple errors
err1 := errors.New("error 1")
err2 := errors.New("error 2")
be.Add(err1, err2)
combinedErr := be.Err()
assert.NotNil(t, combinedErr, "Expected non-nil error for BatchError with multiple errors")
// Check if the combined error contains both error messages
errString := combinedErr.Error()
assert.Truef(t, errors.Is(combinedErr, err1), "Combined error doesn't contain first error: %s", errString)
assert.Truef(t, errors.Is(combinedErr, err2), "Combined error doesn't contain second error: %s", errString)
}
func TestBatchError_NotNil(t *testing.T) {
var be BatchError
// Test NotNil() on empty BatchError
assert.Nil(t, be.Err(), "Expected nil error for empty BatchError")
// Test NotNil() after adding an error
be.Add(errors.New("test error"))
assert.NotNil(t, be.Err(), "Expected non-nil error after adding an error")
}

@ -1,27 +0,0 @@
package errorx
import (
"errors"
"testing"
"github.com/stretchr/testify/assert"
)
func TestChain(t *testing.T) {
errDummy := errors.New("dummy")
assert.Nil(t, Chain(func() error {
return nil
}, func() error {
return nil
}))
assert.Equal(t, errDummy, Chain(func() error {
return errDummy
}, func() error {
return nil
}))
assert.Equal(t, errDummy, Chain(func() error {
return nil
}, func() error {
return errDummy
}))
}

@ -1,70 +0,0 @@
package errorx
import (
"errors"
"testing"
)
func TestIn(t *testing.T) {
err1 := errors.New("error 1")
err2 := errors.New("error 2")
err3 := errors.New("error 3")
tests := []struct {
name string
err error
errs []error
want bool
}{
{
name: "Error matches one of the errors in the list",
err: err1,
errs: []error{err1, err2},
want: true,
},
{
name: "Error does not match any errors in the list",
err: err3,
errs: []error{err1, err2},
want: false,
},
{
name: "Empty error list",
err: err1,
errs: []error{},
want: false,
},
{
name: "Nil error with non-nil list",
err: nil,
errs: []error{err1, err2},
want: false,
},
{
name: "Non-nil error with nil in list",
err: err1,
errs: []error{nil, err2},
want: false,
},
{
name: "Error matches nil error in the list",
err: nil,
errs: []error{nil, err2},
want: true,
},
{
name: "Nil error with empty list",
err: nil,
errs: []error{},
want: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := In(tt.err, tt.errs...); got != tt.want {
t.Errorf("In() = %v, want %v", got, tt.want)
}
})
}
}

@ -1,24 +0,0 @@
package errorx
import (
"errors"
"testing"
"github.com/stretchr/testify/assert"
)
func TestWrap(t *testing.T) {
assert.Nil(t, Wrap(nil, "test"))
assert.Equal(t, "foo: bar", Wrap(errors.New("bar"), "foo").Error())
err := errors.New("foo")
assert.True(t, errors.Is(Wrap(err, "bar"), err))
}
func TestWrapf(t *testing.T) {
assert.Nil(t, Wrapf(nil, "%s", "test"))
assert.Equal(t, "foo bar: quz", Wrapf(errors.New("quz"), "foo %s", "bar").Error())
err := errors.New("foo")
assert.True(t, errors.Is(Wrapf(err, "foo %s", "bar"), err))
}

@ -1,12 +0,0 @@
package exchangeRate
import "testing"
func TestGetExchangeRete(t *testing.T) {
t.Skip("skip TestGetExchangeRete")
result, err := GetExchangeRete("USD", "CNY", "", 1)
if err != nil {
t.Fatal(err)
}
t.Log(result)
}

@ -1,15 +0,0 @@
package fs
import (
"os"
"testing"
"github.com/stretchr/testify/assert"
)
func TestCloseOnExec(t *testing.T) {
file := os.NewFile(0, os.DevNull)
assert.NotPanics(t, func() {
CloseOnExec(file)
})
}

@ -1,49 +0,0 @@
package fs
import (
"io"
"os"
"testing"
"github.com/stretchr/testify/assert"
)
func TestTempFileWithText(t *testing.T) {
f, err := TempFileWithText("test")
if err != nil {
t.Error(err)
}
if f == nil {
t.Error("TempFileWithText returned nil")
}
if f.Name() == "" {
t.Error("TempFileWithText returned empty file name")
}
defer os.Remove(f.Name())
bs, err := io.ReadAll(f)
assert.Nil(t, err)
if len(bs) != 4 {
t.Error("TempFileWithText returned wrong file size")
}
if f.Close() != nil {
t.Error("TempFileWithText returned error on close")
}
}
func TestTempFilenameWithText(t *testing.T) {
f, err := TempFilenameWithText("test")
if err != nil {
t.Error(err)
}
if f == "" {
t.Error("TempFilenameWithText returned empty file name")
}
defer os.Remove(f)
bs, err := os.ReadFile(f)
assert.Nil(t, err)
if len(bs) != 4 {
t.Error("TempFilenameWithText returned wrong file size")
}
}

@ -1,155 +0,0 @@
package hash
import (
"fmt"
"strconv"
"testing"
"github.com/stretchr/testify/assert"
)
const (
keySize = 20
requestSize = 1000
)
func BenchmarkConsistentHashGet(b *testing.B) {
ch := NewConsistentHash()
for i := 0; i < keySize; i++ {
ch.Add("localhost:" + strconv.Itoa(i))
}
for i := 0; i < b.N; i++ {
ch.Get(i)
}
}
func TestConsistentHashIncrementalTransfer(t *testing.T) {
prefix := "anything"
create := func() *ConsistentHash {
ch := NewConsistentHash()
for i := 0; i < keySize; i++ {
ch.Add(prefix + strconv.Itoa(i))
}
return ch
}
originCh := create()
keys := make(map[int]string, requestSize)
for i := 0; i < requestSize; i++ {
key, ok := originCh.Get(requestSize + i)
assert.True(t, ok)
assert.NotNil(t, key)
keys[i] = key.(string)
}
node := fmt.Sprintf("%s%d", prefix, keySize)
for i := 0; i < 10; i++ {
laterCh := create()
laterCh.AddWithWeight(node, 10*(i+1))
for j := 0; j < requestSize; j++ {
key, ok := laterCh.Get(requestSize + j)
assert.True(t, ok)
assert.NotNil(t, key)
value := key.(string)
assert.True(t, value == keys[j] || value == node)
}
}
}
func TestConsistentHashTransferOnFailure(t *testing.T) {
index := 41
keys, newKeys := getKeysBeforeAndAfterFailure(t, "localhost:", index)
var transferred int
for k, v := range newKeys {
if v != keys[k] {
transferred++
}
}
ratio := float32(transferred) / float32(requestSize)
assert.True(t, ratio < 2.5/float32(keySize), fmt.Sprintf("%d: %f", index, ratio))
}
func TestConsistentHashLeastTransferOnFailure(t *testing.T) {
prefix := "localhost:"
index := 41
keys, newKeys := getKeysBeforeAndAfterFailure(t, prefix, index)
for k, v := range keys {
newV := newKeys[k]
if v != prefix+strconv.Itoa(index) {
assert.Equal(t, v, newV)
}
}
}
func TestConsistentHash_Remove(t *testing.T) {
ch := NewConsistentHash()
ch.Add("first")
ch.Add("second")
ch.Remove("first")
for i := 0; i < 100; i++ {
val, ok := ch.Get(i)
assert.True(t, ok)
assert.Equal(t, "second", val)
}
}
func TestConsistentHash_RemoveInterface(t *testing.T) {
const key = "any"
ch := NewConsistentHash()
node1 := newMockNode(key, 1)
node2 := newMockNode(key, 2)
ch.AddWithWeight(node1, 80)
ch.AddWithWeight(node2, 50)
assert.Equal(t, 1, len(ch.nodes))
node, ok := ch.Get(1)
assert.True(t, ok)
assert.Equal(t, key, node.(*mockNode).addr)
assert.Equal(t, 2, node.(*mockNode).id)
}
func getKeysBeforeAndAfterFailure(t *testing.T, prefix string, index int) (map[int]string, map[int]string) {
ch := NewConsistentHash()
for i := 0; i < keySize; i++ {
ch.Add(prefix + strconv.Itoa(i))
}
keys := make(map[int]string, requestSize)
for i := 0; i < requestSize; i++ {
key, ok := ch.Get(requestSize + i)
assert.True(t, ok)
assert.NotNil(t, key)
keys[i] = key.(string)
}
remove := fmt.Sprintf("%s%d", prefix, index)
ch.Remove(remove)
newKeys := make(map[int]string, requestSize)
for i := 0; i < requestSize; i++ {
key, ok := ch.Get(requestSize + i)
assert.True(t, ok)
assert.NotNil(t, key)
assert.NotEqual(t, remove, key)
newKeys[i] = key.(string)
}
return keys, newKeys
}
type mockNode struct {
addr string
id int
}
func newMockNode(addr string, id int) *mockNode {
return &mockNode{
addr: addr,
id: id,
}
}
func (n *mockNode) String() string {
return n.addr
}

@ -1,47 +0,0 @@
package hash
import (
"crypto/md5"
"fmt"
"hash/fnv"
"math/big"
"testing"
"github.com/stretchr/testify/assert"
)
const (
text = "hello, world!\n"
md5Digest = "910c8bc73110b0cd1bc5d2bcae782511"
)
func TestMd5(t *testing.T) {
actual := fmt.Sprintf("%x", Md5([]byte(text)))
assert.Equal(t, md5Digest, actual)
}
func TestMd5Hex(t *testing.T) {
actual := Md5Hex([]byte(text))
assert.Equal(t, md5Digest, actual)
}
func BenchmarkHashFnv(b *testing.B) {
for i := 0; i < b.N; i++ {
h := fnv.New32()
new(big.Int).SetBytes(h.Sum([]byte(text))).Int64()
}
}
func BenchmarkHashMd5(b *testing.B) {
for i := 0; i < b.N; i++ {
h := md5.New()
bytes := h.Sum([]byte(text))
new(big.Int).SetBytes(bytes).Int64()
}
}
func BenchmarkMurmur3(b *testing.B) {
for i := 0; i < b.N; i++ {
Hash([]byte(text))
}
}

@ -1,35 +0,0 @@
package apple
import (
"encoding/base64"
"encoding/json"
"testing"
"time"
)
func TestParseTransactionJWS(t *testing.T) {
payload := map[string]interface{}{
"bundleId": "co.airoport.app.ios",
"productId": "com.airport.vpn.pass.30d",
"transactionId": "1000000000001",
"originalTransactionId": "1000000000000",
"purchaseDate": float64(time.Now().UnixMilli()),
}
data, _ := json.Marshal(payload)
b64 := base64.RawURLEncoding.EncodeToString(data)
jws := "header." + b64 + ".signature"
p, err := ParseTransactionJWS(jws)
if err != nil {
t.Fatalf("parse error: %v", err)
}
if p.ProductId != payload["productId"] {
t.Fatalf("productId not match")
}
if p.BundleId != payload["bundleId"] {
t.Fatalf("bundleId not match")
}
if p.OriginalTransactionId != payload["originalTransactionId"] {
t.Fatalf("originalTransactionId not match")
}
}

@ -1,34 +0,0 @@
package ip
import (
"testing"
"time"
)
func TestGetIPv4(t *testing.T) {
t.Skip("skip TestGetIPv4")
iPv4, err := GetIP("baidu.com")
if err != nil {
t.Fatal(err)
}
t.Log(iPv4)
}
func TestGetRegionByIp(t *testing.T) {
t.Skip("skip TestGetRegionByIp")
ips, err := GetIP("122.14.229.128")
if err != nil {
t.Fatal(err)
}
for _, ip := range ips {
t.Log(ip)
resp, err := GetRegionByIp(ip)
if err != nil {
t.Fatalf("ip: %s,err: %v", ip, err)
}
t.Logf("country: %s,City: %s,latitude:%s, longitude:%s", resp.Country, resp.City, resp.Latitude, resp.Longitude)
}
time.Sleep(3 * time.Second)
}

@ -1,23 +0,0 @@
package jsonx
import "testing"
type User struct {
Id int64
Name string
Age int64
}
func TestJson(t *testing.T) {
t.Log("TestJson")
user := &User{
Id: 1,
Name: "test",
Age: 18,
}
b, err := Marshal(user)
if err != nil {
t.Error(err)
}
t.Log(string(b))
}

@ -1,22 +0,0 @@
package jwt
import (
"testing"
"github.com/golang-jwt/jwt/v5"
"github.com/pkg/errors"
)
// TestParseJwtToken tests the ParseJwtToken function
func TestParseJwtToken(t *testing.T) {
token := "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJEZXZpY2VJZCI6IjM4IiwiZXhwIjoxNzE4MTU2OTQ4LCJpYXQiOjE3MTc1NTIxNDgsInVzZXJJZCI6MX0.4W0nga82kNrfwWjkwcgYAWj4fI4iRc-ZftwVbu-a_kI"
secret := "ae0536f9-6450-4606-8e13-5a19ed505da0"
claims, err := ParseJwtToken(token, secret)
if err != nil && !errors.Is(err, jwt.ErrTokenExpired) {
t.Errorf("err: %v", err.Error())
return
}
// parse jwt token success
t.Logf("claims: %v", claims)
}

@ -1,156 +0,0 @@
package lang
import (
"encoding/json"
"errors"
"reflect"
"testing"
"github.com/stretchr/testify/assert"
)
func TestRepr(t *testing.T) {
var (
f32 float32 = 1.1
f64 = 2.2
i8 int8 = 1
i16 int16 = 2
i32 int32 = 3
i64 int64 = 4
u8 uint8 = 5
u16 uint16 = 6
u32 uint32 = 7
u64 uint64 = 8
)
tests := []struct {
v any
expect string
}{
{
nil,
"",
},
{
mockStringable{},
"mocked",
},
{
new(mockStringable),
"mocked",
},
{
newMockPtr(),
"mockptr",
},
{
&mockOpacity{
val: 1,
},
"{1}",
},
{
true,
"true",
},
{
false,
"false",
},
{
f32,
"1.1",
},
{
f64,
"2.2",
},
{
i8,
"1",
},
{
i16,
"2",
},
{
i32,
"3",
},
{
i64,
"4",
},
{
u8,
"5",
},
{
u16,
"6",
},
{
u32,
"7",
},
{
u64,
"8",
},
{
[]byte(`abcd`),
"abcd",
},
{
mockOpacity{val: 1},
"{1}",
},
}
for _, test := range tests {
t.Run(test.expect, func(t *testing.T) {
assert.Equal(t, test.expect, Repr(test.v))
})
}
}
func TestReprOfValue(t *testing.T) {
t.Run("error", func(t *testing.T) {
assert.Equal(t, "error", reprOfValue(reflect.ValueOf(errors.New("error"))))
})
t.Run("stringer", func(t *testing.T) {
assert.Equal(t, "1.23", reprOfValue(reflect.ValueOf(json.Number("1.23"))))
})
t.Run("int", func(t *testing.T) {
assert.Equal(t, "1", reprOfValue(reflect.ValueOf(1)))
})
t.Run("string", func(t *testing.T) {
assert.Equal(t, "1", reprOfValue(reflect.ValueOf("1")))
})
t.Run("uint", func(t *testing.T) {
assert.Equal(t, "1", reprOfValue(reflect.ValueOf(uint(1))))
})
}
type mockStringable struct{}
func (m mockStringable) String() string {
return "mocked"
}
type mockPtr struct{}
func newMockPtr() *mockPtr {
return new(mockPtr)
}
func (m *mockPtr) String() string {
return "mockptr"
}
type mockOpacity struct {
val int
}

@ -1,71 +0,0 @@
package limit
import (
"testing"
"github.com/redis/go-redis/v9"
"github.com/stretchr/testify/assert"
)
func TestPeriodLimit_Take(t *testing.T) {
testPeriodLimit(t)
}
func TestPeriodLimit_TakeWithAlign(t *testing.T) {
testPeriodLimit(t, Align())
}
func TestPeriodLimit_RedisUnavailable(t *testing.T) {
//t.Skipf("skip this test because it's not stable")
const (
seconds = 1
quota = 5
)
rds := redis.NewClient(&redis.Options{
Addr: "localhost:12345",
})
l := NewPeriodLimit(seconds, quota, rds, "periodlimit:")
val, err := l.Take("first")
assert.NotNil(t, err)
assert.Equal(t, 0, val)
}
func testPeriodLimit(t *testing.T, opts ...PeriodOption) {
store, _ := CreateRedisWithClean(t)
const (
seconds = 1
total = 100
quota = 5
)
l := NewPeriodLimit(seconds, quota, store, "periodlimit", opts...)
var allowed, hitQuota, overQuota int
for i := 0; i < total; i++ {
val, err := l.Take("first")
if err != nil {
t.Error(err)
}
switch val {
case Allowed:
allowed++
case HitQuota:
hitQuota++
case OverQuota:
overQuota++
default:
t.Error("unknown status")
}
}
assert.Equal(t, quota-1, allowed)
assert.Equal(t, 1, hitQuota)
assert.Equal(t, total-quota, overQuota)
}
func TestQuotaFull(t *testing.T) {
rds, _ := CreateRedisWithClean(t)
l := NewPeriodLimit(1, 1, rds, "periodlimit")
val, err := l.Take("first")
assert.Nil(t, err)
assert.Equal(t, HitQuota, val)
}

@ -1,80 +0,0 @@
package limit
import (
"context"
"testing"
"time"
"github.com/redis/go-redis/v9"
"github.com/alicebob/miniredis/v2"
"github.com/stretchr/testify/assert"
)
func TestTokenLimit_WithCtx(t *testing.T) {
const (
total = 100
rate = 5
burst = 10
)
store, _ := CreateRedisWithClean(t)
l := NewTokenLimiter(rate, burst, store, "tokenlimit")
ctx, cancel := context.WithCancel(context.Background())
ok := l.AllowCtx(ctx)
assert.True(t, ok)
cancel()
for i := 0; i < total; i++ {
ok := l.AllowCtx(ctx)
assert.False(t, ok)
assert.False(t, l.monitorStarted)
}
}
func TestTokenLimit_Take(t *testing.T) {
store, _ := CreateRedisWithClean(t)
const (
total = 100
rate = 5
burst = 10
)
l := NewTokenLimiter(rate, burst, store, "tokenlimit")
var allowed int
for i := 0; i < total; i++ {
time.Sleep(time.Second / time.Duration(total))
if l.Allow() {
allowed++
}
}
assert.True(t, allowed >= burst+rate)
}
func TestTokenLimit_TakeBurst(t *testing.T) {
store, _ := CreateRedisWithClean(t)
const (
total = 100
rate = 5
burst = 10
)
l := NewTokenLimiter(rate, burst, store, "tokenlimit")
var allowed int
for i := 0; i < total; i++ {
if l.Allow() {
allowed++
}
}
assert.True(t, allowed >= burst)
}
// CreateRedisWithClean returns an in-process *redis.Client backed by miniredis and a clean function.
func CreateRedisWithClean(t *testing.T) (r *redis.Client, clean func()) {
mr := miniredis.RunT(t)
return redis.NewClient(&redis.Options{
Addr: mr.Addr(),
}), mr.Close
}

@ -1,33 +0,0 @@
package logger
import (
"sync/atomic"
"testing"
"github.com/perfect-panel/server/pkg/color"
"github.com/stretchr/testify/assert"
)
func TestWithColor(t *testing.T) {
old := atomic.SwapUint32(&encoding, plainEncodingType)
defer atomic.StoreUint32(&encoding, old)
output := WithColor("hello", color.BgBlue)
assert.Equal(t, "hello", output)
atomic.StoreUint32(&encoding, jsonEncodingType)
output = WithColor("hello", color.BgBlue)
assert.Equal(t, "hello", output)
}
func TestWithColorPadding(t *testing.T) {
old := atomic.SwapUint32(&encoding, plainEncodingType)
defer atomic.StoreUint32(&encoding, old)
output := WithColorPadding("hello", color.BgBlue)
assert.Equal(t, " hello ", output)
atomic.StoreUint32(&encoding, jsonEncodingType)
output = WithColorPadding("hello", color.BgBlue)
assert.Equal(t, "hello", output)
}

@ -1,122 +0,0 @@
package logger
import (
"bytes"
"context"
"encoding/json"
"strconv"
"sync"
"sync/atomic"
"testing"
"github.com/stretchr/testify/assert"
)
func TestAddGlobalFields(t *testing.T) {
var buf bytes.Buffer
writer := NewWriter(&buf)
old := Reset()
SetWriter(writer)
defer SetWriter(old)
Info("hello")
buf.Reset()
AddGlobalFields(Field("a", "1"), Field("b", "2"))
AddGlobalFields(Field("c", "3"))
Info("world")
var m map[string]any
assert.NoError(t, json.Unmarshal(buf.Bytes(), &m))
assert.Equal(t, "1", m["a"])
assert.Equal(t, "2", m["b"])
assert.Equal(t, "3", m["c"])
}
func TestContextWithFields(t *testing.T) {
ctx := ContextWithFields(context.Background(), Field("a", 1), Field("b", 2))
vals := ctx.Value(fieldsContextKey)
assert.NotNil(t, vals)
fields, ok := vals.([]LogField)
assert.True(t, ok)
assert.EqualValues(t, []LogField{Field("a", 1), Field("b", 2)}, fields)
}
func TestWithFields(t *testing.T) {
ctx := WithFields(context.Background(), Field("a", 1), Field("b", 2))
vals := ctx.Value(fieldsContextKey)
assert.NotNil(t, vals)
fields, ok := vals.([]LogField)
assert.True(t, ok)
assert.EqualValues(t, []LogField{Field("a", 1), Field("b", 2)}, fields)
}
func TestWithFieldsAppend(t *testing.T) {
type ctxKey string
var dummyKey ctxKey = "dummyKey"
ctx := context.WithValue(context.Background(), dummyKey, "dummy")
ctx = ContextWithFields(ctx, Field("a", 1), Field("b", 2))
ctx = ContextWithFields(ctx, Field("c", 3), Field("d", 4))
vals := ctx.Value(fieldsContextKey)
assert.NotNil(t, vals)
fields, ok := vals.([]LogField)
assert.True(t, ok)
assert.Equal(t, "dummy", ctx.Value(dummyKey))
assert.EqualValues(t, []LogField{
Field("a", 1),
Field("b", 2),
Field("c", 3),
Field("d", 4),
}, fields)
}
func TestWithFieldsAppendCopy(t *testing.T) {
const count = 10
ctx := context.Background()
for i := 0; i < count; i++ {
ctx = ContextWithFields(ctx, Field(strconv.Itoa(i), 1))
}
af := Field("foo", 1)
bf := Field("bar", 2)
ctxa := ContextWithFields(ctx, af)
ctxb := ContextWithFields(ctx, bf)
assert.EqualValues(t, af, ctxa.Value(fieldsContextKey).([]LogField)[count])
assert.EqualValues(t, bf, ctxb.Value(fieldsContextKey).([]LogField)[count])
}
func BenchmarkAtomicValue(b *testing.B) {
b.ReportAllocs()
var container atomic.Value
vals := []LogField{
Field("a", "b"),
Field("c", "d"),
Field("e", "f"),
}
container.Store(&vals)
for i := 0; i < b.N; i++ {
val := container.Load()
if val != nil {
_ = *val.(*[]LogField)
}
}
}
func BenchmarkRWMutex(b *testing.B) {
b.ReportAllocs()
var lock sync.RWMutex
vals := []LogField{
Field("a", "b"),
Field("c", "d"),
Field("e", "f"),
}
for i := 0; i < b.N; i++ {
lock.RLock()
_ = vals
lock.RUnlock()
}
}


@@ -1,35 +0,0 @@
package logger
import (
"log"
"strings"
"testing"
"github.com/stretchr/testify/assert"
)
func TestLessLogger_Error(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
l := NewLessLogger(500)
for i := 0; i < 100; i++ {
l.Error("hello")
}
log.Print(w.String())
assert.Equal(t, 1, strings.Count(w.String(), "\n"))
}
func TestLessLogger_Errorf(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
l := NewLessLogger(500)
for i := 0; i < 100; i++ {
l.Errorf("hello")
}
assert.Equal(t, 1, strings.Count(w.String(), "\n"))
}


@@ -1,19 +0,0 @@
package logger
import (
"strings"
"testing"
"github.com/stretchr/testify/assert"
)
func TestLessWriter(t *testing.T) {
var builder strings.Builder
w := newLessWriter(&builder, 500)
for i := 0; i < 100; i++ {
_, err := w.Write([]byte("hello"))
assert.Nil(t, err)
}
assert.Equal(t, "hello", builder.String())
}


@@ -1,62 +0,0 @@
package logger
import (
"sync/atomic"
"testing"
"time"
"github.com/perfect-panel/server/pkg/timex"
"github.com/stretchr/testify/assert"
)
func TestLimitedExecutor_logOrDiscard(t *testing.T) {
tests := []struct {
name string
threshold time.Duration
lastTime time.Duration
discarded uint32
executed bool
}{
{
name: "nil executor",
executed: true,
},
{
name: "regular",
threshold: time.Hour,
lastTime: timex.Now(),
discarded: 10,
executed: false,
},
{
name: "slow",
threshold: time.Duration(1),
lastTime: -1000,
discarded: 10,
executed: true,
},
}
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
executor := newLimitedExecutor(0)
executor.threshold = test.threshold
executor.discarded = test.discarded
executor.lastTime.Set(test.lastTime)
var run int32
executor.logOrDiscard(func() {
atomic.AddInt32(&run, 1)
})
if test.executed {
assert.Equal(t, int32(1), atomic.LoadInt32(&run))
} else {
assert.Equal(t, int32(0), atomic.LoadInt32(&run))
assert.Equal(t, test.discarded+1, atomic.LoadUint32(&executor.discarded))
}
})
}
}


@@ -1,931 +0,0 @@
package logger
import (
"encoding/json"
"errors"
"fmt"
"io"
"log"
"os"
"reflect"
"runtime"
"strings"
"sync"
"sync/atomic"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
var (
s = []byte("Sending #11 notification (id: 1451875113812010473) in #1 connection")
pool = make(chan []byte, 1)
_ Writer = (*mockWriter)(nil)
)
func init() {
ExitOnFatal.Set(false)
}
type mockWriter struct {
lock sync.Mutex
builder strings.Builder
}
func (mw *mockWriter) Alert(v any) {
mw.lock.Lock()
defer mw.lock.Unlock()
output(&mw.builder, levelAlert, v)
}
func (mw *mockWriter) Debug(v any, fields ...LogField) {
mw.lock.Lock()
defer mw.lock.Unlock()
output(&mw.builder, levelDebug, v, fields...)
}
func (mw *mockWriter) Error(v any, fields ...LogField) {
mw.lock.Lock()
defer mw.lock.Unlock()
output(&mw.builder, levelError, v, fields...)
}
func (mw *mockWriter) Info(v any, fields ...LogField) {
mw.lock.Lock()
defer mw.lock.Unlock()
output(&mw.builder, levelInfo, v, fields...)
}
func (mw *mockWriter) Severe(v any) {
mw.lock.Lock()
defer mw.lock.Unlock()
output(&mw.builder, levelSevere, v)
}
func (mw *mockWriter) Slow(v any, fields ...LogField) {
mw.lock.Lock()
defer mw.lock.Unlock()
output(&mw.builder, levelSlow, v, fields...)
}
func (mw *mockWriter) Stack(v any) {
mw.lock.Lock()
defer mw.lock.Unlock()
output(&mw.builder, levelError, v)
}
func (mw *mockWriter) Stat(v any, fields ...LogField) {
mw.lock.Lock()
defer mw.lock.Unlock()
output(&mw.builder, levelStat, v, fields...)
}
func (mw *mockWriter) Close() error {
return nil
}
func (mw *mockWriter) Contains(text string) bool {
mw.lock.Lock()
defer mw.lock.Unlock()
return strings.Contains(mw.builder.String(), text)
}
func (mw *mockWriter) Reset() {
mw.lock.Lock()
defer mw.lock.Unlock()
mw.builder.Reset()
}
func (mw *mockWriter) String() string {
mw.lock.Lock()
defer mw.lock.Unlock()
return mw.builder.String()
}
func TestField(t *testing.T) {
tests := []struct {
name string
f LogField
want map[string]any
}{
{
name: "error",
f: Field("foo", errors.New("bar")),
want: map[string]any{
"foo": "bar",
},
},
{
name: "errors",
f: Field("foo", []error{errors.New("bar"), errors.New("baz")}),
want: map[string]any{
"foo": []any{"bar", "baz"},
},
},
{
name: "strings",
f: Field("foo", []string{"bar", "baz"}),
want: map[string]any{
"foo": []any{"bar", "baz"},
},
},
{
name: "duration",
f: Field("foo", time.Second),
want: map[string]any{
"foo": "1s",
},
},
{
name: "durations",
f: Field("foo", []time.Duration{time.Second, 2 * time.Second}),
want: map[string]any{
"foo": []any{"1s", "2s"},
},
},
{
name: "times",
f: Field("foo", []time.Time{
time.Date(2020, time.January, 1, 0, 0, 0, 0, time.UTC),
time.Date(2020, time.January, 2, 0, 0, 0, 0, time.UTC),
}),
want: map[string]any{
"foo": []any{"2020-01-01 00:00:00 +0000 UTC", "2020-01-02 00:00:00 +0000 UTC"},
},
},
{
name: "stringer",
f: Field("foo", ValStringer{val: "bar"}),
want: map[string]any{
"foo": "bar",
},
},
{
name: "stringers",
f: Field("foo", []fmt.Stringer{ValStringer{val: "bar"}, ValStringer{val: "baz"}}),
want: map[string]any{
"foo": []any{"bar", "baz"},
},
},
}
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
Infow("foo", test.f)
validateFields(t, w.String(), test.want)
})
}
}
func TestFileLineFileMode(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
file, line := getFileLine()
Error("anything")
assert.True(t, w.Contains(fmt.Sprintf("%s:%d", file, line+1)))
file, line = getFileLine()
Errorf("anything %s", "format")
assert.True(t, w.Contains(fmt.Sprintf("%s:%d", file, line+1)))
}
func TestFileLineConsoleMode(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
file, line := getFileLine()
Error("anything")
assert.True(t, w.Contains(fmt.Sprintf("%s:%d", file, line+1)))
w.Reset()
file, line = getFileLine()
Errorf("anything %s", "format")
assert.True(t, w.Contains(fmt.Sprintf("%s:%d", file, line+1)))
}
func TestMust(t *testing.T) {
assert.Panics(t, func() {
Must(errors.New("foo"))
})
}
func TestStructedLogAlert(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelAlert, w, func(v ...any) {
Alert(fmt.Sprint(v...))
})
}
func TestStructedLogDebug(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelDebug, w, func(v ...any) {
Debug(v...)
})
}
func TestStructedLogDebugf(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelDebug, w, func(v ...any) {
Debugf(fmt.Sprint(v...))
})
}
func TestStructedLogDebugv(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelDebug, w, func(v ...any) {
Debugv(fmt.Sprint(v...))
})
}
func TestStructedLogDebugw(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelDebug, w, func(v ...any) {
Debugw(fmt.Sprint(v...), Field("foo", time.Second))
})
}
func TestStructedLogError(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelError, w, func(v ...any) {
Error(v...)
})
}
func TestStructedLogErrorf(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelError, w, func(v ...any) {
Errorf("%s", fmt.Sprint(v...))
})
}
func TestStructedLogErrorv(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelError, w, func(v ...any) {
Errorv(fmt.Sprint(v...))
})
}
func TestStructedLogErrorw(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelError, w, func(v ...any) {
Errorw(fmt.Sprint(v...), Field("foo", "bar"))
})
}
func TestStructedLogInfo(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelInfo, w, func(v ...any) {
Info(v...)
})
}
func TestStructedLogInfof(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelInfo, w, func(v ...any) {
Infof("%s", fmt.Sprint(v...))
})
}
func TestStructedLogInfov(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelInfo, w, func(v ...any) {
Infov(fmt.Sprint(v...))
})
}
func TestStructedLogInfow(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelInfo, w, func(v ...any) {
Infow(fmt.Sprint(v...), Field("foo", "bar"))
})
}
func TestStructedLogFieldNil(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
assert.NotPanics(t, func() {
var s *string
Infow("test", Field("bb", s))
var d *nilStringer
Infow("test", Field("bb", d))
var e *nilError
Errorw("test", Field("bb", e))
})
assert.NotPanics(t, func() {
var p panicStringer
Infow("test", Field("bb", p))
var ps innerPanicStringer
Infow("test", Field("bb", ps))
})
}
func TestStructedLogInfoConsoleAny(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLogConsole(t, w, func(v ...any) {
old := atomic.LoadUint32(&encoding)
atomic.StoreUint32(&encoding, plainEncodingType)
defer func() {
atomic.StoreUint32(&encoding, old)
}()
Infov(v)
})
}
func TestStructedLogInfoConsoleAnyString(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLogConsole(t, w, func(v ...any) {
old := atomic.LoadUint32(&encoding)
atomic.StoreUint32(&encoding, plainEncodingType)
defer func() {
atomic.StoreUint32(&encoding, old)
}()
Infov(fmt.Sprint(v...))
})
}
func TestStructedLogInfoConsoleAnyError(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLogConsole(t, w, func(v ...any) {
old := atomic.LoadUint32(&encoding)
atomic.StoreUint32(&encoding, plainEncodingType)
defer func() {
atomic.StoreUint32(&encoding, old)
}()
Infov(errors.New(fmt.Sprint(v...)))
})
}
func TestStructedLogInfoConsoleAnyStringer(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLogConsole(t, w, func(v ...any) {
old := atomic.LoadUint32(&encoding)
atomic.StoreUint32(&encoding, plainEncodingType)
defer func() {
atomic.StoreUint32(&encoding, old)
}()
Infov(ValStringer{
val: fmt.Sprint(v...),
})
})
}
func TestStructedLogInfoConsoleText(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLogConsole(t, w, func(v ...any) {
old := atomic.LoadUint32(&encoding)
atomic.StoreUint32(&encoding, plainEncodingType)
defer func() {
atomic.StoreUint32(&encoding, old)
}()
Info(fmt.Sprint(v...))
})
}
func TestStructedLogSlow(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelSlow, w, func(v ...any) {
Slow(v...)
})
}
func TestStructedLogSlowf(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelSlow, w, func(v ...any) {
Slowf(fmt.Sprint(v...))
})
}
func TestStructedLogSlowv(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelSlow, w, func(v ...any) {
Slowv(fmt.Sprint(v...))
})
}
func TestStructedLogSloww(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelSlow, w, func(v ...any) {
Sloww(fmt.Sprint(v...), Field("foo", time.Second))
})
}
func TestStructedLogStat(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelStat, w, func(v ...any) {
Stat(v...)
})
}
func TestStructedLogStatf(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelStat, w, func(v ...any) {
Statf(fmt.Sprint(v...))
})
}
func TestStructedLogSevere(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelSevere, w, func(v ...any) {
Severe(v...)
})
}
func TestStructedLogSeveref(t *testing.T) {
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
doTestStructedLog(t, levelSevere, w, func(v ...any) {
Severef(fmt.Sprint(v...))
})
}
func TestStructedLogWithDuration(t *testing.T) {
const message = "hello there"
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
WithDuration(time.Second).Info(message)
var entry map[string]any
if err := json.Unmarshal([]byte(w.String()), &entry); err != nil {
t.Error(err)
}
assert.Equal(t, levelInfo, entry[levelKey])
assert.Equal(t, message, entry[contentKey])
assert.Equal(t, "1000.0ms", entry[durationKey])
}
func TestSetLevel(t *testing.T) {
SetLevel(ErrorLevel)
const message = "hello there"
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
Info(message)
assert.Equal(t, 0, w.builder.Len())
}
func TestSetLevelTwiceWithMode(t *testing.T) {
testModes := []string{
"console",
"volumn",
"mode",
}
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
for _, mode := range testModes {
testSetLevelTwiceWithMode(t, mode, w)
}
}
func TestSetLevelWithDuration(t *testing.T) {
SetLevel(ErrorLevel)
const message = "hello there"
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
WithDuration(time.Second).Info(message)
assert.Equal(t, 0, w.builder.Len())
}
func TestErrorfWithWrappedError(t *testing.T) {
SetLevel(ErrorLevel)
const message = "there"
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
Errorf("hello %s", errors.New(message))
assert.True(t, strings.Contains(w.String(), "hello there"))
}
func TestMustNil(t *testing.T) {
Must(nil)
}
func TestSetup(t *testing.T) {
defer func() {
SetLevel(InfoLevel)
atomic.StoreUint32(&encoding, jsonEncodingType)
}()
setupOnce = sync.Once{}
MustSetup(LogConf{
ServiceName: "any",
Mode: "console",
Encoding: "json",
TimeFormat: timeFormat,
})
setupOnce = sync.Once{}
MustSetup(LogConf{
ServiceName: "any",
Mode: "console",
TimeFormat: timeFormat,
})
setupOnce = sync.Once{}
MustSetup(LogConf{
ServiceName: "any",
Mode: "file",
Path: os.TempDir(),
})
setupOnce = sync.Once{}
MustSetup(LogConf{
ServiceName: "any",
Mode: "volume",
Path: os.TempDir(),
})
setupOnce = sync.Once{}
MustSetup(LogConf{
ServiceName: "any",
Mode: "console",
TimeFormat: timeFormat,
})
setupOnce = sync.Once{}
MustSetup(LogConf{
ServiceName: "any",
Mode: "console",
Encoding: plainEncoding,
})
defer os.RemoveAll("CD01CB7D-2705-4F3F-889E-86219BF56F10")
assert.NotNil(t, setupWithVolume(LogConf{}))
assert.Nil(t, setupWithVolume(LogConf{
ServiceName: "CD01CB7D-2705-4F3F-889E-86219BF56F10",
}))
assert.Nil(t, setupWithVolume(LogConf{
ServiceName: "CD01CB7D-2705-4F3F-889E-86219BF56F10",
Rotation: sizeRotationRule,
}))
assert.NotNil(t, setupWithFiles(LogConf{}))
assert.Nil(t, setupWithFiles(LogConf{
ServiceName: "any",
Path: os.TempDir(),
Compress: true,
KeepDays: 1,
MaxBackups: 3,
MaxSize: 1024 * 1024,
}))
setupLogLevel(LogConf{
Level: levelInfo,
})
setupLogLevel(LogConf{
Level: levelError,
})
setupLogLevel(LogConf{
Level: levelSevere,
})
_, err := createOutput("")
assert.NotNil(t, err)
Disable()
SetLevel(InfoLevel)
atomic.StoreUint32(&encoding, jsonEncodingType)
}
func TestDisable(t *testing.T) {
Disable()
defer func() {
SetLevel(InfoLevel)
atomic.StoreUint32(&encoding, jsonEncodingType)
}()
var opt logOptions
WithKeepDays(1)(&opt)
WithGzip()(&opt)
WithMaxBackups(1)(&opt)
WithMaxSize(1024)(&opt)
assert.Nil(t, Close())
assert.Nil(t, Close())
assert.Equal(t, uint32(disableLevel), atomic.LoadUint32(&logLevel))
}
func TestDisableStat(t *testing.T) {
DisableStat()
const message = "hello there"
w := new(mockWriter)
old := writer.Swap(w)
defer writer.Store(old)
Stat(message)
assert.Equal(t, 0, w.builder.Len())
}
func TestAddWriter(t *testing.T) {
const message = "hello there"
w := new(mockWriter)
AddWriter(w)
w1 := new(mockWriter)
AddWriter(w1)
Error(message)
assert.Contains(t, w.String(), message)
assert.Contains(t, w1.String(), message)
}
func TestSetWriter(t *testing.T) {
atomic.StoreUint32(&logLevel, 0)
Reset()
SetWriter(nopWriter{})
assert.NotNil(t, writer.Load())
assert.True(t, writer.Load() == nopWriter{})
mocked := new(mockWriter)
SetWriter(mocked)
assert.Equal(t, mocked, writer.Load())
}
func TestWithGzip(t *testing.T) {
fn := WithGzip()
var opt logOptions
fn(&opt)
assert.True(t, opt.gzipEnabled)
}
func TestWithKeepDays(t *testing.T) {
fn := WithKeepDays(1)
var opt logOptions
fn(&opt)
assert.Equal(t, 1, opt.keepDays)
}
func BenchmarkCopyByteSliceAppend(b *testing.B) {
for i := 0; i < b.N; i++ {
var buf []byte
buf = append(buf, getTimestamp()...)
buf = append(buf, ' ')
buf = append(buf, s...)
_ = buf
}
}
func BenchmarkCopyByteSliceAllocExactly(b *testing.B) {
for i := 0; i < b.N; i++ {
now := []byte(getTimestamp())
buf := make([]byte, len(now)+1+len(s))
n := copy(buf, now)
buf[n] = ' '
copy(buf[n+1:], s)
}
}
func BenchmarkCopyByteSlice(b *testing.B) {
var buf []byte
for i := 0; i < b.N; i++ {
buf = make([]byte, len(s))
copy(buf, s)
}
fmt.Fprint(io.Discard, buf)
}
func BenchmarkCopyOnWriteByteSlice(b *testing.B) {
var buf []byte
for i := 0; i < b.N; i++ {
size := len(s)
buf = s[:size:size]
}
fmt.Fprint(io.Discard, buf)
}
func BenchmarkCacheByteSlice(b *testing.B) {
for i := 0; i < b.N; i++ {
dup := fetch()
copy(dup, s)
put(dup)
}
}
func BenchmarkLogs(b *testing.B) {
b.ReportAllocs()
log.SetOutput(io.Discard)
for i := 0; i < b.N; i++ {
Info(i)
}
}
func fetch() []byte {
select {
case b := <-pool:
return b
default:
}
return make([]byte, 4096)
}
func getFileLine() (string, int) {
_, file, line, _ := runtime.Caller(1)
short := file
for i := len(file) - 1; i > 0; i-- {
if file[i] == '/' {
short = file[i+1:]
break
}
}
return short, line
}
func put(b []byte) {
select {
case pool <- b:
default:
}
}
func doTestStructedLog(t *testing.T, level string, w *mockWriter, write func(...any)) {
const message = "hello there"
write(message)
var entry map[string]any
if err := json.Unmarshal([]byte(w.String()), &entry); err != nil {
t.Error(err)
}
assert.Equal(t, level, entry[levelKey])
val, ok := entry[contentKey]
assert.True(t, ok)
assert.True(t, strings.Contains(val.(string), message))
}
func doTestStructedLogConsole(t *testing.T, w *mockWriter, write func(...any)) {
const message = "hello there"
write(message)
assert.True(t, strings.Contains(w.String(), message))
}
func testSetLevelTwiceWithMode(t *testing.T, mode string, w *mockWriter) {
writer.Store(nil)
_ = SetUp(LogConf{
Mode: mode,
Level: "debug",
Path: "/dev/null",
Encoding: plainEncoding,
Stat: false,
TimeFormat: time.RFC3339,
FileTimeFormat: time.DateTime,
})
_ = SetUp(LogConf{
Mode: mode,
Level: "info",
Path: "/dev/null",
})
const message = "hello there"
Info(message)
assert.Equal(t, 0, w.builder.Len())
Infof(message)
assert.Equal(t, 0, w.builder.Len())
ErrorStack(message)
assert.Equal(t, 0, w.builder.Len())
ErrorStackf(message)
assert.Equal(t, 0, w.builder.Len())
}
type ValStringer struct {
val string
}
func (v ValStringer) String() string {
return v.val
}
func validateFields(t *testing.T, content string, fields map[string]any) {
var m map[string]any
if err := json.Unmarshal([]byte(content), &m); err != nil {
t.Error(err)
}
for k, v := range fields {
if reflect.TypeOf(v).Kind() == reflect.Slice {
assert.EqualValues(t, v, m[k])
} else {
assert.Equal(t, v, m[k], content)
}
}
}
type nilError struct {
Name string
}
func (e *nilError) Error() string {
return e.Name
}
type nilStringer struct {
Name string
}
func (s *nilStringer) String() string {
return s.Name
}
type innerPanicStringer struct {
Inner *struct {
Name string
}
}
func (s innerPanicStringer) String() string {
return s.Inner.Name
}
type panicStringer struct {
}
func (s panicStringer) String() string {
panic("panic")
}

Some files were not shown because too many files have changed in this diff.