diff --git a/.claude-flow/.gitignore b/.claude-flow/.gitignore deleted file mode 100644 index 51f4f63..0000000 --- a/.claude-flow/.gitignore +++ /dev/null @@ -1,7 +0,0 @@ -# Claude Flow runtime files -data/ -logs/ -sessions/ -neural/ -*.log -*.tmp diff --git a/.claude-flow/CAPABILITIES.md b/.claude-flow/CAPABILITIES.md deleted file mode 100644 index 0254d35..0000000 --- a/.claude-flow/CAPABILITIES.md +++ /dev/null @@ -1,403 +0,0 @@ -# RuFlo V3 - Complete Capabilities Reference -> Generated: 2026-03-06T01:38:48.235Z -> Full documentation: https://github.com/ruvnet/claude-flow - -## 📋 Table of Contents - -1. [Overview](#overview) -2. [Swarm Orchestration](#swarm-orchestration) -3. [Available Agents (60+)](#available-agents) -4. [CLI Commands (26 Commands, 140+ Subcommands)](#cli-commands) -5. [Hooks System (27 Hooks + 12 Workers)](#hooks-system) -6. [Memory & Intelligence (RuVector)](#memory--intelligence) -7. [Hive-Mind Consensus](#hive-mind-consensus) -8. [Performance Targets](#performance-targets) -9. 
[Integration Ecosystem](#integration-ecosystem) - ---- - -## Overview - -RuFlo V3 is a domain-driven design architecture for multi-agent AI coordination with: - -- **15-Agent Swarm Coordination** with hierarchical and mesh topologies -- **HNSW Vector Search** - 150x-12,500x faster pattern retrieval -- **SONA Neural Learning** - Self-optimizing with <0.05ms adaptation -- **Byzantine Fault Tolerance** - Queen-led consensus mechanisms -- **MCP Server Integration** - Model Context Protocol support - -### Current Configuration -| Setting | Value | -|---------|-------| -| Topology | hierarchical-mesh | -| Max Agents | 15 | -| Memory Backend | hybrid | -| HNSW Indexing | Enabled | -| Neural Learning | Enabled | -| LearningBridge | Enabled (SONA + ReasoningBank) | -| Knowledge Graph | Enabled (PageRank + Communities) | -| Agent Scopes | Enabled (project/local/user) | - ---- - -## Swarm Orchestration - -### Topologies -| Topology | Description | Best For | -|----------|-------------|----------| -| `hierarchical` | Queen controls workers directly | Anti-drift, tight control | -| `mesh` | Fully connected peer network | Distributed tasks | -| `hierarchical-mesh` | V3 hybrid (recommended) | 10+ agents | -| `ring` | Circular communication | Sequential workflows | -| `star` | Central coordinator | Simple coordination | -| `adaptive` | Dynamic based on load | Variable workloads | - -### Strategies -- `balanced` - Even distribution across agents -- `specialized` - Clear roles, no overlap (anti-drift) -- `adaptive` - Dynamic task routing - -### Quick Commands -```bash -# Initialize swarm -npx @claude-flow/cli@latest swarm init --topology hierarchical --max-agents 8 --strategy specialized - -# Check status -npx @claude-flow/cli@latest swarm status - -# Monitor activity -npx @claude-flow/cli@latest swarm monitor -``` - ---- - -## Available Agents - -### Core Development (5) -`coder`, `reviewer`, `tester`, `planner`, `researcher` - -### V3 Specialized (4) -`security-architect`, 
`security-auditor`, `memory-specialist`, `performance-engineer` - -### Swarm Coordination (5) -`hierarchical-coordinator`, `mesh-coordinator`, `adaptive-coordinator`, `collective-intelligence-coordinator`, `swarm-memory-manager` - -### Consensus & Distributed (7) -`byzantine-coordinator`, `raft-manager`, `gossip-coordinator`, `consensus-builder`, `crdt-synchronizer`, `quorum-manager`, `security-manager` - -### Performance & Optimization (5) -`perf-analyzer`, `performance-benchmarker`, `task-orchestrator`, `memory-coordinator`, `smart-agent` - -### GitHub & Repository (9) -`github-modes`, `pr-manager`, `code-review-swarm`, `issue-tracker`, `release-manager`, `workflow-automation`, `project-board-sync`, `repo-architect`, `multi-repo-swarm` - -### SPARC Methodology (6) -`sparc-coord`, `sparc-coder`, `specification`, `pseudocode`, `architecture`, `refinement` - -### Specialized Development (8) -`backend-dev`, `mobile-dev`, `ml-developer`, `cicd-engineer`, `api-docs`, `system-architect`, `code-analyzer`, `base-template-generator` - -### Testing & Validation (2) -`tdd-london-swarm`, `production-validator` - -### Agent Routing by Task -| Task Type | Recommended Agents | Topology | -|-----------|-------------------|----------| -| Bug Fix | researcher, coder, tester | mesh | -| New Feature | coordinator, architect, coder, tester, reviewer | hierarchical | -| Refactoring | architect, coder, reviewer | mesh | -| Performance | researcher, perf-engineer, coder | hierarchical | -| Security | security-architect, auditor, reviewer | hierarchical | -| Docs | researcher, api-docs | mesh | - ---- - -## CLI Commands - -### Core Commands (12) -| Command | Subcommands | Description | -|---------|-------------|-------------| -| `init` | 4 | Project initialization | -| `agent` | 8 | Agent lifecycle management | -| `swarm` | 6 | Multi-agent coordination | -| `memory` | 11 | AgentDB with HNSW search | -| `mcp` | 9 | MCP server management | -| `task` | 6 | Task assignment | -| `session` | 7 
| Session persistence | -| `config` | 7 | Configuration | -| `status` | 3 | System monitoring | -| `workflow` | 6 | Workflow templates | -| `hooks` | 17 | Self-learning hooks | -| `hive-mind` | 6 | Consensus coordination | - -### Advanced Commands (14) -| Command | Subcommands | Description | -|---------|-------------|-------------| -| `daemon` | 5 | Background workers | -| `neural` | 5 | Pattern training | -| `security` | 6 | Security scanning | -| `performance` | 5 | Profiling & benchmarks | -| `providers` | 5 | AI provider config | -| `plugins` | 5 | Plugin management | -| `deployment` | 5 | Deploy management | -| `embeddings` | 4 | Vector embeddings | -| `claims` | 4 | Authorization | -| `migrate` | 5 | V2→V3 migration | -| `process` | 4 | Process management | -| `doctor` | 1 | Health diagnostics | -| `completions` | 4 | Shell completions | - -### Example Commands -```bash -# Initialize -npx @claude-flow/cli@latest init --wizard - -# Spawn agent -npx @claude-flow/cli@latest agent spawn -t coder --name my-coder - -# Memory operations -npx @claude-flow/cli@latest memory store --key "pattern" --value "data" --namespace patterns -npx @claude-flow/cli@latest memory search --query "authentication" - -# Diagnostics -npx @claude-flow/cli@latest doctor --fix -``` - ---- - -## Hooks System - -### 27 Available Hooks - -#### Core Hooks (6) -| Hook | Description | -|------|-------------| -| `pre-edit` | Context before file edits | -| `post-edit` | Record edit outcomes | -| `pre-command` | Risk assessment | -| `post-command` | Command metrics | -| `pre-task` | Task start + agent suggestions | -| `post-task` | Task completion learning | - -#### Session Hooks (4) -| Hook | Description | -|------|-------------| -| `session-start` | Start/restore session | -| `session-end` | Persist state | -| `session-restore` | Restore previous | -| `notify` | Cross-agent notifications | - -#### Intelligence Hooks (5) -| Hook | Description | -|------|-------------| -| `route` | Optimal agent 
routing | -| `explain` | Routing decisions | -| `pretrain` | Bootstrap intelligence | -| `build-agents` | Generate configs | -| `transfer` | Pattern transfer | - -#### Coverage Hooks (3) -| Hook | Description | -|------|-------------| -| `coverage-route` | Coverage-based routing | -| `coverage-suggest` | Improvement suggestions | -| `coverage-gaps` | Gap analysis | - -### 12 Background Workers -| Worker | Priority | Purpose | -|--------|----------|---------| -| `ultralearn` | normal | Deep knowledge | -| `optimize` | high | Performance | -| `consolidate` | low | Memory consolidation | -| `predict` | normal | Predictive preload | -| `audit` | critical | Security | -| `map` | normal | Codebase mapping | -| `preload` | low | Resource preload | -| `deepdive` | normal | Deep analysis | -| `document` | normal | Auto-docs | -| `refactor` | normal | Suggestions | -| `benchmark` | normal | Benchmarking | -| `testgaps` | normal | Coverage gaps | - ---- - -## Memory & Intelligence - -### RuVector Intelligence System -- **SONA**: Self-Optimizing Neural Architecture (<0.05ms) -- **MoE**: Mixture of Experts routing -- **HNSW**: 150x-12,500x faster search -- **EWC++**: Prevents catastrophic forgetting -- **Flash Attention**: 2.49x-7.47x speedup -- **Int8 Quantization**: 3.92x memory reduction - -### 4-Step Intelligence Pipeline -1. **RETRIEVE** - HNSW pattern search -2. **JUDGE** - Success/failure verdicts -3. **DISTILL** - LoRA learning extraction -4. **CONSOLIDATE** - EWC++ preservation - -### Self-Learning Memory (ADR-049) - -| Component | Status | Description | -|-----------|--------|-------------| -| **LearningBridge** | ✅ Enabled | Connects insights to SONA/ReasoningBank neural pipeline | -| **MemoryGraph** | ✅ Enabled | PageRank knowledge graph + community detection | -| **AgentMemoryScope** | ✅ Enabled | 3-scope agent memory (project/local/user) | - -**LearningBridge** - Insights trigger learning trajectories. Confidence evolves: +0.03 on access, -0.005/hour decay. 
Consolidation runs the JUDGE/DISTILL/CONSOLIDATE pipeline. - -**MemoryGraph** - Builds a knowledge graph from entry references. PageRank identifies influential insights. Communities group related knowledge. Graph-aware ranking blends vector + structural scores. - -**AgentMemoryScope** - Maps Claude Code 3-scope directories: -- `project`: `/.claude/agent-memory//` -- `local`: `/.claude/agent-memory-local//` -- `user`: `~/.claude/agent-memory//` - -High-confidence insights (>0.8) can transfer between agents. - -### Memory Commands -```bash -# Store pattern -npx @claude-flow/cli@latest memory store --key "name" --value "data" --namespace patterns - -# Semantic search -npx @claude-flow/cli@latest memory search --query "authentication" - -# List entries -npx @claude-flow/cli@latest memory list --namespace patterns - -# Initialize database -npx @claude-flow/cli@latest memory init --force -``` - ---- - -## Hive-Mind Consensus - -### Queen Types -| Type | Role | -|------|------| -| Strategic Queen | Long-term planning | -| Tactical Queen | Execution coordination | -| Adaptive Queen | Dynamic optimization | - -### Worker Types (8) -`researcher`, `coder`, `analyst`, `tester`, `architect`, `reviewer`, `optimizer`, `documenter` - -### Consensus Mechanisms -| Mechanism | Fault Tolerance | Use Case | -|-----------|-----------------|----------| -| `byzantine` | f < n/3 faulty | Adversarial | -| `raft` | f < n/2 failed | Leader-based | -| `gossip` | Eventually consistent | Large scale | -| `crdt` | Conflict-free | Distributed | -| `quorum` | Configurable | Flexible | - -### Hive-Mind Commands -```bash -# Initialize -npx @claude-flow/cli@latest hive-mind init --queen-type strategic - -# Status -npx @claude-flow/cli@latest hive-mind status - -# Spawn workers -npx @claude-flow/cli@latest hive-mind spawn --count 5 --type worker - -# Consensus -npx @claude-flow/cli@latest hive-mind consensus --propose "task" -``` - ---- - -## Performance Targets - -| Metric | Target | Status | 
-|--------|--------|--------| -| HNSW Search | 150x-12,500x faster | ✅ Implemented | -| Memory Reduction | 50-75% | ✅ Implemented (3.92x) | -| SONA Integration | Pattern learning | ✅ Implemented | -| Flash Attention | 2.49x-7.47x | 🔄 In Progress | -| MCP Response | <100ms | ✅ Achieved | -| CLI Startup | <500ms | ✅ Achieved | -| SONA Adaptation | <0.05ms | 🔄 In Progress | -| Graph Build (1k) | <200ms | ✅ 2.78ms (71.9x headroom) | -| PageRank (1k) | <100ms | ✅ 12.21ms (8.2x headroom) | -| Insight Recording | <5ms/each | ✅ 0.12ms (41x headroom) | -| Consolidation | <500ms | ✅ 0.26ms (1,955x headroom) | -| Knowledge Transfer | <100ms | ✅ 1.25ms (80x headroom) | - ---- - -## Integration Ecosystem - -### Integrated Packages -| Package | Version | Purpose | -|---------|---------|---------| -| agentic-flow | 3.0.0-alpha.1 | Core coordination + ReasoningBank + Router | -| agentdb | 3.0.0-alpha.10 | Vector database + 8 controllers | -| @ruvector/attention | 0.1.3 | Flash attention | -| @ruvector/sona | 0.1.5 | Neural learning | - -### Optional Integrations -| Package | Command | -|---------|---------| -| ruv-swarm | `npx ruv-swarm mcp start` | -| flow-nexus | `npx flow-nexus@latest mcp start` | -| agentic-jujutsu | `npx agentic-jujutsu@latest` | - -### MCP Server Setup -```bash -# Add Claude Flow MCP -claude mcp add claude-flow -- npx -y @claude-flow/cli@latest - -# Optional servers -claude mcp add ruv-swarm -- npx -y ruv-swarm mcp start -claude mcp add flow-nexus -- npx -y flow-nexus@latest mcp start -``` - ---- - -## Quick Reference - -### Essential Commands -```bash -# Setup -npx @claude-flow/cli@latest init --wizard -npx @claude-flow/cli@latest daemon start -npx @claude-flow/cli@latest doctor --fix - -# Swarm -npx @claude-flow/cli@latest swarm init --topology hierarchical --max-agents 8 -npx @claude-flow/cli@latest swarm status - -# Agents -npx @claude-flow/cli@latest agent spawn -t coder -npx @claude-flow/cli@latest agent list - -# Memory -npx @claude-flow/cli@latest 
memory search --query "patterns" - -# Hooks -npx @claude-flow/cli@latest hooks pre-task --description "task" -npx @claude-flow/cli@latest hooks worker dispatch --trigger optimize -``` - -### File Structure -``` -.claude-flow/ -├── config.yaml # Runtime configuration -├── CAPABILITIES.md # This file -├── data/ # Memory storage -├── logs/ # Operation logs -├── sessions/ # Session state -├── hooks/ # Custom hooks -├── agents/ # Agent configs -└── workflows/ # Workflow templates -``` - ---- - -**Full Documentation**: https://github.com/ruvnet/claude-flow -**Issues**: https://github.com/ruvnet/claude-flow/issues diff --git a/.claude-flow/config.yaml b/.claude-flow/config.yaml deleted file mode 100644 index 09c1ce5..0000000 --- a/.claude-flow/config.yaml +++ /dev/null @@ -1,43 +0,0 @@ -# RuFlo V3 Runtime Configuration -# Generated: 2026-03-06T01:38:48.235Z - -version: "3.0.0" - -swarm: - topology: hierarchical-mesh - maxAgents: 15 - autoScale: true - coordinationStrategy: consensus - -memory: - backend: hybrid - enableHNSW: true - persistPath: .claude-flow/data - cacheSize: 100 - # ADR-049: Self-Learning Memory - learningBridge: - enabled: true - sonaMode: balanced - confidenceDecayRate: 0.005 - accessBoostAmount: 0.03 - consolidationThreshold: 10 - memoryGraph: - enabled: true - pageRankDamping: 0.85 - maxNodes: 5000 - similarityThreshold: 0.8 - agentScopes: - enabled: true - defaultScope: project - -neural: - enabled: true - modelPath: .claude-flow/neural - -hooks: - enabled: true - autoExecute: true - -mcp: - autoStart: false - port: 3000 diff --git a/.claude-flow/metrics/learning.json b/.claude-flow/metrics/learning.json deleted file mode 100644 index 70c938c..0000000 --- a/.claude-flow/metrics/learning.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "initialized": "2026-03-06T01:38:48.235Z", - "routing": { - "accuracy": 0, - "decisions": 0 - }, - "patterns": { - "shortTerm": 0, - "longTerm": 0, - "quality": 0 - }, - "sessions": { - "total": 0, - "current": null - }, - 
"_note": "Intelligence grows as you use Claude Flow" -} \ No newline at end of file diff --git a/.claude-flow/metrics/swarm-activity.json b/.claude-flow/metrics/swarm-activity.json deleted file mode 100644 index a0f6b8c..0000000 --- a/.claude-flow/metrics/swarm-activity.json +++ /dev/null @@ -1,18 +0,0 @@ -{ - "timestamp": "2026-03-06T01:38:48.235Z", - "processes": { - "agentic_flow": 0, - "mcp_server": 0, - "estimated_agents": 0 - }, - "swarm": { - "active": false, - "agent_count": 0, - "coordination_active": false - }, - "integration": { - "agentic_flow_active": false, - "mcp_active": false - }, - "_initialized": true -} \ No newline at end of file diff --git a/.claude-flow/metrics/v3-progress.json b/.claude-flow/metrics/v3-progress.json deleted file mode 100644 index ce81596..0000000 --- a/.claude-flow/metrics/v3-progress.json +++ /dev/null @@ -1,26 +0,0 @@ -{ - "version": "3.0.0", - "initialized": "2026-03-06T01:38:48.235Z", - "domains": { - "completed": 0, - "total": 5, - "status": "INITIALIZING" - }, - "ddd": { - "progress": 0, - "modules": 0, - "totalFiles": 0, - "totalLines": 0 - }, - "swarm": { - "activeAgents": 0, - "maxAgents": 15, - "topology": "hierarchical-mesh" - }, - "learning": { - "status": "READY", - "patternsLearned": 0, - "sessionsCompleted": 0 - }, - "_note": "Metrics will update as you use Claude Flow. 
Run: npx @claude-flow/cli@latest daemon start" -} \ No newline at end of file diff --git a/.claude-flow/security/audit-status.json b/.claude-flow/security/audit-status.json deleted file mode 100644 index ed706ea..0000000 --- a/.claude-flow/security/audit-status.json +++ /dev/null @@ -1,8 +0,0 @@ -{ - "initialized": "2026-03-06T01:38:48.236Z", - "status": "PENDING", - "cvesFixed": 0, - "totalCves": 3, - "lastScan": null, - "_note": "Run: npx @claude-flow/cli@latest security scan" -} \ No newline at end of file diff --git a/.gitignore b/.gitignore index 3d7b787..79f86ee 100644 --- a/.gitignore +++ b/.gitignore @@ -1,24 +1,69 @@ +# ==================== IDE / editors ==================== .idea/ -.claude/ -.claude-flow/ .vscode/ -*-dev.yaml -*.local.yaml -/test/ -*.log -*.sh -script/*.sh +*.swp +*.swo +*~ + +# ==================== OS system files ==================== .DS_Store -*_test_config.go -*.log* +Thumbs.db + +# ==================== Go build artifacts ==================== +/bin/ /build/ +/generate/ +*.exe +*.dll +*.so +*.dylib + +# ==================== Environment / secrets / certificates ==================== +.env +.env.* +!.env.example *.p8 *.crt *.key -node_modules -package-lock.json +*.pem + +# ==================== Logs ==================== +*.log +*.log.* +logs/ + +# ==================== Tests ==================== +/test/ +*_test.go +*_test_config.go +**/logtest/ +*_test.yaml + +# ==================== AI toolchain (Ruflo / Serena / CGC) ==================== +.claude/ +.claude-flow/ +.serena/ +.swarm/ +.mcp.json +CLAUDE.md + +# ==================== Node (not needed by this project) ==================== node_modules/ package.json -/bin -.claude -./github -./run \ No newline at end of file +package-lock.json + +# ==================== Temporary / local configs ==================== +*-dev.yaml +*.local.yaml +*.tmp +*.bak + +# ==================== Scripts ==================== +*.sh +script/*.sh + +# ==================== CI/CD local run configs ==================== +.run/ + +# ==================== Temporary notes ==================== 订单日志.txt diff --git a/.mcp.json b/.mcp.json deleted 
file mode 100644 index 1f54617..0000000 --- a/.mcp.json +++ /dev/null @@ -1,22 +0,0 @@ -{ - "mcpServers": { - "claude-flow": { - "command": "npx", - "args": [ - "-y", - "@claude-flow/cli@latest", - "mcp", - "start" - ], - "env": { - "npm_config_update_notifier": "false", - "CLAUDE_FLOW_MODE": "v3", - "CLAUDE_FLOW_HOOKS_ENABLED": "true", - "CLAUDE_FLOW_TOPOLOGY": "hierarchical-mesh", - "CLAUDE_FLOW_MAX_AGENTS": "15", - "CLAUDE_FLOW_MEMORY_BACKEND": "hybrid" - }, - "autoStart": false - } - } -} \ No newline at end of file diff --git a/CLAUDE.md b/CLAUDE.md deleted file mode 100644 index b2b3ee2..0000000 --- a/CLAUDE.md +++ /dev/null @@ -1,188 +0,0 @@ -# Claude Code Configuration - RuFlo V3 - -## Behavioral Rules (Always Enforced) - -- Do what has been asked; nothing more, nothing less -- NEVER create files unless they're absolutely necessary for achieving your goal -- ALWAYS prefer editing an existing file to creating a new one -- NEVER proactively create documentation files (*.md) or README files unless explicitly requested -- NEVER save working files, text/mds, or tests to the root folder -- Never continuously check status after spawning a swarm — wait for results -- ALWAYS read a file before editing it -- NEVER commit secrets, credentials, or .env files - -## File Organization - -- NEVER save to root folder — use the directories below -- Use `/src` for source code files -- Use `/tests` for test files -- Use `/docs` for documentation and markdown files -- Use `/config` for configuration files -- Use `/scripts` for utility scripts -- Use `/examples` for example code - -## Project Architecture - -- Follow Domain-Driven Design with bounded contexts -- Keep files under 500 lines -- Use typed interfaces for all public APIs -- Prefer TDD London School (mock-first) for new code -- Use event sourcing for state changes -- Ensure input validation at system boundaries - -### Project Config - -- **Topology**: hierarchical-mesh -- **Max Agents**: 15 -- **Memory**: hybrid -- 
**HNSW**: Enabled -- **Neural**: Enabled - -## Build & Test - -```bash -# Build -npm run build - -# Test -npm test - -# Lint -npm run lint -``` - -- ALWAYS run tests after making code changes -- ALWAYS verify build succeeds before committing - -## Security Rules - -- NEVER hardcode API keys, secrets, or credentials in source files -- NEVER commit .env files or any file containing secrets -- Always validate user input at system boundaries -- Always sanitize file paths to prevent directory traversal -- Run `npx @claude-flow/cli@latest security scan` after security-related changes - -## Concurrency: 1 MESSAGE = ALL RELATED OPERATIONS - -- All operations MUST be concurrent/parallel in a single message -- Use Claude Code's Task tool for spawning agents, not just MCP -- ALWAYS batch ALL todos in ONE TodoWrite call (5-10+ minimum) -- ALWAYS spawn ALL agents in ONE message with full instructions via Task tool -- ALWAYS batch ALL file reads/writes/edits in ONE message -- ALWAYS batch ALL Bash commands in ONE message - -## Swarm Orchestration - -- MUST initialize the swarm using CLI tools when starting complex tasks -- MUST spawn concurrent agents using Claude Code's Task tool -- Never use CLI tools alone for execution — Task tool agents do the actual work -- MUST call CLI tools AND Task tool in ONE message for complex work - -### 3-Tier Model Routing (ADR-026) - -| Tier | Handler | Latency | Cost | Use Cases | -|------|---------|---------|------|-----------| -| **1** | Agent Booster (WASM) | <1ms | $0 | Simple transforms (var→const, add types) — Skip LLM | -| **2** | Haiku | ~500ms | $0.0002 | Simple tasks, low complexity (<30%) | -| **3** | Sonnet/Opus | 2-5s | $0.003-0.015 | Complex reasoning, architecture, security (>30%) | - -- Always check for `[AGENT_BOOSTER_AVAILABLE]` or `[TASK_MODEL_RECOMMENDATION]` before spawning agents -- Use Edit tool directly when `[AGENT_BOOSTER_AVAILABLE]` - -## Swarm Configuration & Anti-Drift - -- ALWAYS use hierarchical topology for 
coding swarms -- Keep maxAgents at 6-8 for tight coordination -- Use specialized strategy for clear role boundaries -- Use `raft` consensus for hive-mind (leader maintains authoritative state) -- Run frequent checkpoints via `post-task` hooks -- Keep shared memory namespace for all agents - -```bash -npx @claude-flow/cli@latest swarm init --topology hierarchical --max-agents 8 --strategy specialized -``` - -## Swarm Execution Rules - -- ALWAYS use `run_in_background: true` for all agent Task calls -- ALWAYS put ALL agent Task calls in ONE message for parallel execution -- After spawning, STOP — do NOT add more tool calls or check status -- Never poll TaskOutput or check swarm status — trust agents to return -- When agent results arrive, review ALL results before proceeding - -## V3 CLI Commands - -### Core Commands - -| Command | Subcommands | Description | -|---------|-------------|-------------| -| `init` | 4 | Project initialization | -| `agent` | 8 | Agent lifecycle management | -| `swarm` | 6 | Multi-agent swarm coordination | -| `memory` | 11 | AgentDB memory with HNSW search | -| `task` | 6 | Task creation and lifecycle | -| `session` | 7 | Session state management | -| `hooks` | 17 | Self-learning hooks + 12 workers | -| `hive-mind` | 6 | Byzantine fault-tolerant consensus | - -### Quick CLI Examples - -```bash -npx @claude-flow/cli@latest init --wizard -npx @claude-flow/cli@latest agent spawn -t coder --name my-coder -npx @claude-flow/cli@latest swarm init --v3-mode -npx @claude-flow/cli@latest memory search --query "authentication patterns" -npx @claude-flow/cli@latest doctor --fix -``` - -## Available Agents (60+ Types) - -### Core Development -`coder`, `reviewer`, `tester`, `planner`, `researcher` - -### Specialized -`security-architect`, `security-auditor`, `memory-specialist`, `performance-engineer` - -### Swarm Coordination -`hierarchical-coordinator`, `mesh-coordinator`, `adaptive-coordinator` - -### GitHub & Repository -`pr-manager`, 
`code-review-swarm`, `issue-tracker`, `release-manager` - -### SPARC Methodology -`sparc-coord`, `sparc-coder`, `specification`, `pseudocode`, `architecture` - -## Memory Commands Reference - -```bash -# Store (REQUIRED: --key, --value; OPTIONAL: --namespace, --ttl, --tags) -npx @claude-flow/cli@latest memory store --key "pattern-auth" --value "JWT with refresh" --namespace patterns - -# Search (REQUIRED: --query; OPTIONAL: --namespace, --limit, --threshold) -npx @claude-flow/cli@latest memory search --query "authentication patterns" - -# List (OPTIONAL: --namespace, --limit) -npx @claude-flow/cli@latest memory list --namespace patterns --limit 10 - -# Retrieve (REQUIRED: --key; OPTIONAL: --namespace) -npx @claude-flow/cli@latest memory retrieve --key "pattern-auth" --namespace patterns -``` - -## Quick Setup - -```bash -claude mcp add claude-flow -- npx -y @claude-flow/cli@latest -npx @claude-flow/cli@latest daemon start -npx @claude-flow/cli@latest doctor --fix -``` - -## Claude Code vs CLI Tools - -- Claude Code's Task tool handles ALL execution: agents, file ops, code generation, git -- CLI tools handle coordination via Bash: swarm init, memory, hooks, routing -- NEVER use CLI tools as a substitute for Task tool agents - -## Support - -- Documentation: https://github.com/ruvnet/claude-flow -- Issues: https://github.com/ruvnet/claude-flow/issues diff --git a/adapter/adapter_test.go b/adapter/adapter_test.go deleted file mode 100644 index d45649b..0000000 --- a/adapter/adapter_test.go +++ /dev/null @@ -1,34 +0,0 @@ -package adapter - -import ( - "testing" - "time" -) - -func TestAdapter_Client(t *testing.T) { - servers := getServers() - if len(servers) == 0 { - t.Errorf("[Test] No servers found") - return - } - a := NewAdapter(tpl, WithServers(servers), WithUserInfo(User{ - Password: "test-password", - ExpiredAt: time.Now().AddDate(1, 0, 0), - Download: 0, - Upload: 0, - Traffic: 1000, - SubscribeURL: "https://example.com/subscribe", - })) - client, err := 
a.Client() - if err != nil { - t.Errorf("[Test] Failed to get client: %v", err.Error()) - return - } - bytes, err := client.Build() - if err != nil { - t.Errorf("[Test] Failed to build client config: %v", err.Error()) - return - } - t.Logf("[Test] Client config built successfully: %s", string(bytes)) - -} diff --git a/adapter/client_test.go b/adapter/client_test.go deleted file mode 100644 index beb9145..0000000 --- a/adapter/client_test.go +++ /dev/null @@ -1,153 +0,0 @@ -package adapter - -import ( - "testing" - "time" -) - -var tpl = ` -{{- range $n := .Proxies }} - {{- $dn := urlquery (default "node" $n.Name) -}} - {{- $sni := default $n.Host $n.SNI -}} - - {{- if eq $n.Type "shadowsocks" -}} - {{- $userinfo := b64enc (print $n.Method ":" $.UserInfo.Password) -}} - {{- printf "ss://%s@%s:%v#%s" $userinfo $n.Host $n.Port $dn -}} - {{- "\n" -}} - {{- end -}} - - {{- if eq $n.Type "trojan" -}} - {{- $qs := "security=tls" -}} - {{- if $sni }}{{ $qs = printf "%s&sni=%s" $qs (urlquery $sni) }}{{ end -}} - {{- if $n.AllowInsecure }}{{ $qs = printf "%s&allowInsecure=%v" $qs $n.AllowInsecure }}{{ end -}} - {{- if $n.Fingerprint }}{{ $qs = printf "%s&fp=%s" $qs (urlquery $n.Fingerprint) }}{{ end -}} - {{- printf "trojan://%s@%s:%v?%s#%s" $.UserInfo.Password $n.Host $n.Port $qs $dn -}} - {{- "\n" -}} - {{- end -}} - - {{- if eq $n.Type "vless" -}} - {{- $qs := "encryption=none" -}} - {{- if $n.RealityPublicKey -}} - {{- $qs = printf "%s&security=reality" $qs -}} - {{- $qs = printf "%s&pbk=%s" $qs (urlquery $n.RealityPublicKey) -}} - {{- if $n.RealityShortId }}{{ $qs = printf "%s&sid=%s" $qs (urlquery $n.RealityShortId) }}{{ end -}} - {{- else -}} - {{- if or $n.SNI $n.Fingerprint $n.AllowInsecure }} - {{- $qs = printf "%s&security=tls" $qs -}} - {{- end -}} - {{- end -}} - {{- if $n.SNI }}{{ $qs = printf "%s&sni=%s" $qs (urlquery $n.SNI) }}{{ end -}} - {{- if $n.AllowInsecure }}{{ $qs = printf "%s&allowInsecure=%v" $qs $n.AllowInsecure }}{{ end -}} - {{- if $n.Fingerprint 
}}{{ $qs = printf "%s&fp=%s" $qs (urlquery $n.Fingerprint) }}{{ end -}} - {{- if $n.Network }}{{ $qs = printf "%s&type=%s" $qs $n.Network }}{{ end -}} - {{- if $n.Path }}{{ $qs = printf "%s&path=%s" $qs (urlquery $n.Path) }}{{ end -}} - {{- if $n.ServiceName }}{{ $qs = printf "%s&serviceName=%s" $qs (urlquery $n.ServiceName) }}{{ end -}} - {{- if $n.Flow }}{{ $qs = printf "%s&flow=%s" $qs (urlquery $n.Flow) }}{{ end -}} - {{- printf "vless://%s@%s:%v?%s#%s" $n.ServerKey $n.Host $n.Port $qs $dn -}} - {{- "\n" -}} - {{- end -}} - - {{- if eq $n.Type "vmess" -}} - {{- $obj := dict - "v" "2" - "ps" $n.Name - "add" $n.Host - "port" $n.Port - "id" $n.ServerKey - "aid" 0 - "net" (or $n.Network "tcp") - "type" "none" - "path" (or $n.Path "") - "host" $n.Host - -}} - {{- if or $n.SNI $n.Fingerprint $n.AllowInsecure }}{{ set $obj "tls" "tls" }}{{ end -}} - {{- if $n.SNI }}{{ set $obj "sni" $n.SNI }}{{ end -}} - {{- if $n.Fingerprint }}{{ set $obj "fp" $n.Fingerprint }}{{ end -}} - {{- printf "vmess://%s" (b64enc (toJson $obj)) -}} - {{- "\n" -}} - {{- end -}} - - {{- if or (eq $n.Type "hysteria2") (eq $n.Type "hy2") -}} - {{- $qs := "" -}} - {{- if $n.SNI }}{{ $qs = printf "sni=%s" (urlquery $n.SNI) }}{{ end -}} - {{- if $n.AllowInsecure }}{{ $qs = printf "%s&insecure=%v" $qs $n.AllowInsecure }}{{ end -}} - {{- if $n.ObfsPassword }}{{ $qs = printf "%s&obfs-password=%s" $qs (urlquery $n.ObfsPassword) }}{{ end -}} - {{- printf "hy2://%s@%s:%v%s#%s" - $.UserInfo.Password - $n.Host - $n.Port - (ternary (gt (len $qs) 0) (print "?" $qs) "") - $dn -}} - {{- "\n" -}} - {{- end -}} - - {{- if eq $n.Type "tuic" -}} - {{- $qs := "" -}} - {{- if $n.SNI }}{{ $qs = printf "sni=%s" (urlquery $n.SNI) }}{{ end -}} - {{- if $n.AllowInsecure }}{{ $qs = printf "%s&insecure=%v" $qs $n.AllowInsecure }}{{ end -}} - {{- printf "tuic://%s:%s@%s:%v%s#%s" - $n.ServerKey - $.UserInfo.Password - $n.Host - $n.Port - (ternary (gt (len $qs) 0) (print "?" 
$qs) "") - $dn -}} - {{- "\n" -}} - {{- end -}} - - {{- if eq $n.Type "anytls" -}} - {{- $qs := "" -}} - {{- if $n.SNI }}{{ $qs = printf "sni=%s" (urlquery $n.SNI) }}{{ end -}} - {{- printf "anytls://%s@%s:%v%s#%s" - $.UserInfo.Password - $n.Host - $n.Port - (ternary (gt (len $qs) 0) (print "?" $qs) "") - $dn -}} - {{- "\n" -}} - {{- end -}} - -{{- end }} -` -func TestClient_Build(t *testing.T) { - client := &Client{ - SiteName: "TestSite", - SubscribeName: "TestSubscribe", - ClientTemplate: tpl, - Proxies: []Proxy{ - { - Name: "TestShadowSocks", - Type: "shadowsocks", - Host: "127.0.0.1", - Port: 1234, - Method: "aes-256-gcm", - }, - { - Name: "TestTrojan", - Type: "trojan", - Host: "example.com", - Port: 443, - AllowInsecure: true, - Security: "tls", - Transport: "tcp", - SNI: "v1-dy.ixigua.com", - }, - }, - UserInfo: User{ - Password: "testpassword", - ExpiredAt: time.Now().Add(24 * time.Hour), - Download: 1000000, - Upload: 500000, - Traffic: 1500000, - SubscribeURL: "https://example.com/subscribe", - }, - } - buf, err := client.Build() - if err != nil { - t.Fatalf("Failed to build client: %v", err) - } - - t.Logf("[Test] Output: %s", buf) - -} diff --git a/adapter/utils_test.go b/adapter/utils_test.go deleted file mode 100644 index 7a4e32c..0000000 --- a/adapter/utils_test.go +++ /dev/null @@ -1,46 +0,0 @@ -package adapter - -import ( - "testing" - - "github.com/perfect-panel/server/internal/model/server" - "gorm.io/driver/mysql" - "gorm.io/gorm" -) - -func TestAdapterProxy(t *testing.T) { - - servers := getServers() - if len(servers) == 0 { - t.Fatal("no servers found") - } - for _, srv := range servers { - proxy, err := adapterProxy(*srv, "example.com", 0) - if err != nil { - t.Errorf("failed to adapt server %s: %v", srv.Name, err) - } - t.Logf("[Test] Adapted server %s successfully: %+v", srv.Name, proxy) - } - -} - -func getServers() []*server.Server { - db, err := connectMySQL("root:mylove520@tcp(localhost:3306)/perfectlink?charset=utf8mb4&parseTime=True&loc=Local") - if err != 
nil { - return nil - } - var servers []*server.Server - if err = db.Model(&server.Server{}).Find(&servers).Error; err != nil { - return nil - } - return servers -} -func connectMySQL(dsn string) (*gorm.DB, error) { - db, err := gorm.Open(mysql.New(mysql.Config{ - DSN: dsn, - }), &gorm.Config{}) - if err != nil { - return nil, err - } - return db, nil -} diff --git a/generate/gopure-amd64.exe b/generate/gopure-amd64.exe deleted file mode 100755 index cd250fb..0000000 Binary files a/generate/gopure-amd64.exe and /dev/null differ diff --git a/generate/gopure-arm64.exe b/generate/gopure-arm64.exe deleted file mode 100755 index 3b90adb..0000000 Binary files a/generate/gopure-arm64.exe and /dev/null differ diff --git a/generate/gopure-darwin-amd64 b/generate/gopure-darwin-amd64 deleted file mode 100755 index 496dd4b..0000000 Binary files a/generate/gopure-darwin-amd64 and /dev/null differ diff --git a/generate/gopure-darwin-arm64 b/generate/gopure-darwin-arm64 deleted file mode 100755 index 4c7f6b8..0000000 Binary files a/generate/gopure-darwin-arm64 and /dev/null differ diff --git a/generate/gopure-linux-amd64 b/generate/gopure-linux-amd64 deleted file mode 100755 index 80832ef..0000000 Binary files a/generate/gopure-linux-amd64 and /dev/null differ diff --git a/generate/gopure-linux-arm64 b/generate/gopure-linux-arm64 deleted file mode 100755 index ee5d21d..0000000 Binary files a/generate/gopure-linux-arm64 and /dev/null differ diff --git a/go.mod b/go.mod index aa17478..88d8ca4 100644 --- a/go.mod +++ b/go.mod @@ -27,7 +27,7 @@ require ( github.com/jinzhu/copier v0.4.0 github.com/klauspost/compress v1.17.7 github.com/nyaruka/phonenumbers v1.5.0 - github.com/pkg/errors v0.9.1 + github.com/pkg/errors v0.9.1 github.com/redis/go-redis/v9 v9.7.2 github.com/smartwalle/alipay/v3 v3.2.23 github.com/spf13/cast v1.7.0 // indirect @@ -50,7 +50,7 @@ require ( gopkg.in/gomail.v2 v2.0.0-20160411212932-81ebce5c23df gopkg.in/yaml.v3 v3.0.1 gorm.io/driver/mysql v1.5.7 - gorm.io/gorm 
v1.25.12 + gorm.io/gorm v1.30.0 gorm.io/plugin/soft_delete v1.2.1 k8s.io/apimachinery v0.31.1 ) @@ -113,6 +113,7 @@ require ( github.com/leodido/go-urn v1.4.0 // indirect github.com/mattn/go-colorable v0.1.13 // indirect github.com/mattn/go-isatty v0.0.20 // indirect + github.com/mattn/go-sqlite3 v1.14.22 // indirect github.com/mitchellh/copystructure v1.2.0 // indirect github.com/mitchellh/reflectwalk v1.0.2 // indirect github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect @@ -146,4 +147,5 @@ require ( google.golang.org/genproto/googleapis/rpc v0.0.0-20240513163218-0867130af1f8 // indirect gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc // indirect gopkg.in/ini.v1 v1.67.0 // indirect + gorm.io/driver/sqlite v1.6.0 // indirect ) diff --git a/go.sum b/go.sum index aa84f0e..bec641c 100644 --- a/go.sum +++ b/go.sum @@ -550,11 +550,15 @@ gorm.io/driver/mysql v1.5.7/go.mod h1:sEtPWMiqiN1N1cMXoXmBbd8C6/l+TESwriotuRRpkD gorm.io/driver/sqlite v1.1.3/go.mod h1:AKDgRWk8lcSQSw+9kxCJnX/yySj8G3rdwYlU57cB45c= gorm.io/driver/sqlite v1.4.4 h1:gIufGoR0dQzjkyqDyYSCvsYR6fba1Gw5YKDqKeChxFc= gorm.io/driver/sqlite v1.4.4/go.mod h1:0Aq3iPO+v9ZKbcdiz8gLWRw5VOPcBOPUQJFLq5e2ecI= +gorm.io/driver/sqlite v1.6.0 h1:WHRRrIiulaPiPFmDcod6prc4l2VGVWHz80KspNsxSfQ= +gorm.io/driver/sqlite v1.6.0/go.mod h1:AO9V1qIQddBESngQUKWL9yoH93HIeA1X6V633rBwyT8= gorm.io/gorm v1.20.1/go.mod h1:0HFTzE/SqkGTzK6TlDPPQbAYCluiVvhzoA1+aVyzenw= gorm.io/gorm v1.23.0/go.mod h1:l2lP/RyAtc1ynaTjFksBde/O8v9oOGIApu2/xRitmZk= gorm.io/gorm v1.25.7/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8= gorm.io/gorm v1.25.12 h1:I0u8i2hWQItBq1WfE0o2+WuL9+8L21K9e2HHSTE/0f8= gorm.io/gorm v1.25.12/go.mod h1:xh7N7RHfYlNc5EmcI/El95gXusucDrQnHXe0+CgWcLQ= +gorm.io/gorm v1.30.0 h1:qbT5aPv1UH8gI99OsRlvDToLxW5zR7FzS9acZDOZcgs= +gorm.io/gorm v1.30.0/go.mod h1:8Z33v652h4//uMA76KjeDH8mJXPm1QNCYrMeatR0DOE= gorm.io/plugin/soft_delete v1.2.1 h1:qx9D/c4Xu6w5KT8LviX8DgLcB9hkKl6JC9f44Tj7cGU= 
 gorm.io/plugin/soft_delete v1.2.1/go.mod h1:Zv7vQctOJTGOsJ/bWgrN1n3od0GBAZgnLjEx+cApLGk=
 honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
diff --git a/initialize/migrate/init_test.go b/initialize/migrate/init_test.go
deleted file mode 100644
index 278a35f..0000000
--- a/initialize/migrate/init_test.go
+++ /dev/null
@@ -1 +0,0 @@
-package migrate
diff --git a/initialize/migrate/migrate_test.go b/initialize/migrate/migrate_test.go
deleted file mode 100644
index 531266e..0000000
--- a/initialize/migrate/migrate_test.go
+++ /dev/null
@@ -1,49 +0,0 @@
-package migrate
-
-import (
-	"testing"
-
-	"github.com/perfect-panel/server/internal/model/node"
-	"github.com/perfect-panel/server/pkg/orm"
-	"gorm.io/driver/mysql"
-	"gorm.io/gorm"
-)
-
-func getDSN() string {
-
-	cfg := orm.Config{
-		Addr:     "127.0.0.1",
-		Username: "root",
-		Password: "mylove520",
-		Dbname:   "vpnboard",
-	}
-	mc := orm.Mysql{
-		Config: cfg,
-	}
-	return mc.Dsn()
-}
-
-func TestMigrate(t *testing.T) {
-	t.Skipf("skip test")
-	m := Migrate(getDSN())
-	err := m.Migrate(2004)
-	if err != nil {
-		t.Errorf("failed to migrate: %v", err)
-	} else {
-		t.Log("migrate success")
-	}
-}
-func TestMysql(t *testing.T) {
-	db, err := gorm.Open(mysql.New(mysql.Config{
-		DSN: "root:mylove520@tcp(localhost:3306)/vpnboard",
-	}))
-	if err != nil {
-		t.Fatalf("Failed to connect to MySQL: %v", err)
-	}
-	err = db.Migrator().AutoMigrate(&node.Node{})
-	if err != nil {
-		t.Fatalf("Failed to auto migrate: %v", err)
-		return
-	}
-	t.Log("MySQL connection and migration successful")
-}
diff --git a/initialize/verify_test.go b/initialize/verify_test.go
deleted file mode 100644
index c902476..0000000
--- a/initialize/verify_test.go
+++ /dev/null
@@ -1,94 +0,0 @@
-package initialize
-
-import (
-	"testing"
-
-	"github.com/perfect-panel/server/internal/config"
-	"github.com/perfect-panel/server/internal/model/system"
-	"github.com/perfect-panel/server/pkg/tool"
-	"github.com/stretchr/testify/assert"
-)
-
-func TestApplyVerifyCodeDefaults(t *testing.T) {
-	testCases := []struct {
-		name string
-		in   config.VerifyCode
-		want config.VerifyCode
-	}{
-		{
-			name: "apply defaults when all zero",
-			in:   config.VerifyCode{},
-			want: config.VerifyCode{
-				VerifyCodeExpireTime: 900,
-				VerifyCodeLimit:      15,
-				VerifyCodeInterval:   60,
-			},
-		},
-		{
-			name: "keep provided values",
-			in: config.VerifyCode{
-				VerifyCodeExpireTime: 901,
-				VerifyCodeLimit:      16,
-				VerifyCodeInterval:   61,
-			},
-			want: config.VerifyCode{
-				VerifyCodeExpireTime: 901,
-				VerifyCodeLimit:      16,
-				VerifyCodeInterval:   61,
-			},
-		},
-		{
-			name: "fix invalid non-positive values",
-			in: config.VerifyCode{
-				VerifyCodeExpireTime: -1,
-				VerifyCodeLimit:      0,
-				VerifyCodeInterval:   -10,
-			},
-			want: config.VerifyCode{
-				VerifyCodeExpireTime: 900,
-				VerifyCodeLimit:      15,
-				VerifyCodeInterval:   60,
-			},
-		},
-	}
-
-	for _, testCase := range testCases {
-		t.Run(testCase.name, func(t *testing.T) {
-			got := testCase.in
-			applyVerifyCodeDefaults(&got)
-			assert.Equal(t, testCase.want, got)
-		})
-	}
-}
-
-func TestVerifyCodeReflectUsesCanonicalKeys(t *testing.T) {
-	configs := []*system.System{
-		{Category: "verify_code", Key: "VerifyCodeExpireTime", Value: "901"},
-		{Category: "verify_code", Key: "VerifyCodeLimit", Value: "16"},
-		{Category: "verify_code", Key: "VerifyCodeInterval", Value: "61"},
-	}
-
-	var got config.VerifyCode
-	tool.SystemConfigSliceReflectToStruct(configs, &got)
-	applyVerifyCodeDefaults(&got)
-
-	assert.Equal(t, int64(901), got.VerifyCodeExpireTime)
-	assert.Equal(t, int64(16), got.VerifyCodeLimit)
-	assert.Equal(t, int64(61), got.VerifyCodeInterval)
-}
-
-func TestVerifyCodeReflectIgnoresLegacyKeys(t *testing.T) {
-	configs := []*system.System{
-		{Category: "verify_code", Key: "ExpireTime", Value: "901"},
-		{Category: "verify_code", Key: "Limit", Value: "16"},
-		{Category: "verify_code", Key: "Interval", Value: "61"},
-	}
-
-	var got config.VerifyCode
-	tool.SystemConfigSliceReflectToStruct(configs, &got)
-	applyVerifyCodeDefaults(&got)
-
-	assert.Equal(t, int64(900), got.VerifyCodeExpireTime)
-	assert.Equal(t, int64(15), got.VerifyCodeLimit)
-	assert.Equal(t, int64(60), got.VerifyCodeInterval)
-}
diff --git a/internal/handler/auth/checkCodeLegacyHandler_test.go b/internal/handler/auth/checkCodeLegacyHandler_test.go
deleted file mode 100644
index cde2a35..0000000
--- a/internal/handler/auth/checkCodeLegacyHandler_test.go
+++ /dev/null
@@ -1,170 +0,0 @@
-package auth
-
-import (
-	"bytes"
-	"context"
-	"encoding/json"
-	"fmt"
-	"net/http"
-	"net/http/httptest"
-	"testing"
-	"time"
-
-	"github.com/alicebob/miniredis/v2"
-	"github.com/gin-gonic/gin"
-	"github.com/perfect-panel/server/internal/config"
-	"github.com/perfect-panel/server/internal/middleware"
-	"github.com/perfect-panel/server/internal/svc"
-	"github.com/perfect-panel/server/pkg/constant"
-	"github.com/redis/go-redis/v9"
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
-)
-
-type legacyCheckCodeResponse struct {
-	Code uint32 `json:"code"`
-	Data struct {
-		Status bool `json:"status"`
-		Exist  bool `json:"exist"`
-	} `json:"data"`
-}
-
-func newLegacyCheckCodeTestRouter(svcCtx *svc.ServiceContext) *gin.Engine {
-	gin.SetMode(gin.TestMode)
-	router := gin.New()
-	router.Use(middleware.ApiVersionMiddleware(svcCtx))
-	router.POST("/v1/auth/check-code", middleware.ApiVersionSwitchHandler(
-		CheckCodeLegacyV1Handler(svcCtx),
-		CheckCodeLegacyV2Handler(svcCtx),
-	))
-	return router
-}
-
-func newLegacyCheckCodeTestSvcCtx(t *testing.T) (*svc.ServiceContext, *redis.Client) {
-	t.Helper()
-
-	miniRedis := miniredis.RunT(t)
-	redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()})
-	t.Cleanup(func() {
-		redisClient.Close()
-		miniRedis.Close()
-	})
-
-	svcCtx := &svc.ServiceContext{
-		Redis: redisClient,
-		Config: config.Config{
-			VerifyCode: config.VerifyCode{
-				VerifyCodeExpireTime: 900,
-			},
-		},
-	}
-	return svcCtx, redisClient
-}
-
-func seedLegacyVerifyCode(t *testing.T, redisClient *redis.Client, scene string, email string, code string) string {
-	t.Helper()
-
-	cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, scene, email)
-	payload := map[string]interface{}{
-		"code":   code,
-		"lastAt": time.Now().Unix(),
-	}
-	payloadRaw, err := json.Marshal(payload)
-	require.NoError(t, err)
-	err = redisClient.Set(context.Background(), cacheKey, payloadRaw, time.Minute*15).Err()
-	require.NoError(t, err)
-	return cacheKey
-}
-
-func callLegacyCheckCode(t *testing.T, router *gin.Engine, apiHeader string, body string) legacyCheckCodeResponse {
-	t.Helper()
-
-	reqBody := bytes.NewBufferString(body)
-	req := httptest.NewRequest(http.MethodPost, "/v1/auth/check-code", reqBody)
-	req.Header.Set("Content-Type", "application/json")
-	if apiHeader != "" {
-		req.Header.Set("api-header", apiHeader)
-	}
-	recorder := httptest.NewRecorder()
-	router.ServeHTTP(recorder, req)
-	require.Equal(t, http.StatusOK, recorder.Code)
-
-	var resp legacyCheckCodeResponse
-	err := json.Unmarshal(recorder.Body.Bytes(), &resp)
-	require.NoError(t, err)
-	return resp
-}
-
-func TestCheckCodeLegacyHandler_NoHeaderNotConsumed(t *testing.T) {
-	svcCtx, redisClient := newLegacyCheckCodeTestSvcCtx(t)
-	router := newLegacyCheckCodeTestRouter(svcCtx)
-
-	email := "legacy@example.com"
-	code := "123456"
-	cacheKey := seedLegacyVerifyCode(t, redisClient, constant.Security.String(), email, code)
-
-	resp := callLegacyCheckCode(t, router, "", `{"email":"legacy@example.com","code":"123456","type":3}`)
-	assert.Equal(t, uint32(200), resp.Code)
-	assert.True(t, resp.Data.Status)
-	assert.True(t, resp.Data.Exist)
-
-	exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
-	require.NoError(t, err)
-	assert.Equal(t, int64(1), exists)
-}
-
-func TestCheckCodeLegacyHandler_GreaterVersionConsumed(t *testing.T) {
-	svcCtx, redisClient := newLegacyCheckCodeTestSvcCtx(t)
-	router := newLegacyCheckCodeTestRouter(svcCtx)
-
-	email := "latest@example.com"
-	code := "999888"
-	cacheKey := seedLegacyVerifyCode(t, redisClient, constant.Security.String(), email, code)
-
-	resp := callLegacyCheckCode(t, router, "1.0.1", `{"email":"latest@example.com","code":"999888","type":3}`)
-	assert.Equal(t, uint32(200), resp.Code)
-	assert.True(t, resp.Data.Status)
-
-	exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
-	require.NoError(t, err)
-	assert.Equal(t, int64(0), exists)
-
-	resp = callLegacyCheckCode(t, router, "1.0.1", `{"email":"latest@example.com","code":"999888","type":3}`)
-	assert.Equal(t, uint32(200), resp.Code)
-	assert.False(t, resp.Data.Status)
-	assert.False(t, resp.Data.Exist)
-}
-
-func TestCheckCodeLegacyHandler_EqualThresholdNotConsumed(t *testing.T) {
-	svcCtx, redisClient := newLegacyCheckCodeTestSvcCtx(t)
-	router := newLegacyCheckCodeTestRouter(svcCtx)
-
-	email := "equal@example.com"
-	code := "112233"
-	cacheKey := seedLegacyVerifyCode(t, redisClient, constant.Security.String(), email, code)
-
-	resp := callLegacyCheckCode(t, router, "1.0.0", `{"email":"equal@example.com","code":"112233","type":3}`)
-	assert.Equal(t, uint32(200), resp.Code)
-	assert.True(t, resp.Data.Status)
-
-	exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
-	require.NoError(t, err)
-	assert.Equal(t, int64(1), exists)
-}
-
-func TestCheckCodeLegacyHandler_InvalidVersionNotConsumed(t *testing.T) {
-	svcCtx, redisClient := newLegacyCheckCodeTestSvcCtx(t)
-	router := newLegacyCheckCodeTestRouter(svcCtx)
-
-	email := "invalid@example.com"
-	code := "445566"
-	cacheKey := seedLegacyVerifyCode(t, redisClient, constant.Security.String(), email, code)
-
-	resp := callLegacyCheckCode(t, router, "abc", `{"email":"invalid@example.com","code":"445566","type":3}`)
-	assert.Equal(t, uint32(200), resp.Code)
-	assert.True(t, resp.Data.Status)
-
-	exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
-	require.NoError(t, err)
-	assert.Equal(t, int64(1), exists)
-}
diff --git a/internal/handler/common/checkverificationcodehandler_test.go b/internal/handler/common/checkverificationcodehandler_test.go
deleted file mode 100644
index 511bc79..0000000
--- a/internal/handler/common/checkverificationcodehandler_test.go
+++ /dev/null
@@ -1,146 +0,0 @@
-package common
-
-import (
-	"bytes"
-	"context"
-	"encoding/json"
-	"fmt"
-	"net/http"
-	"net/http/httptest"
-	"testing"
-	"time"
-
-	"github.com/alicebob/miniredis/v2"
-	"github.com/gin-gonic/gin"
-	"github.com/perfect-panel/server/internal/config"
-	"github.com/perfect-panel/server/internal/middleware"
-	"github.com/perfect-panel/server/internal/svc"
-	"github.com/perfect-panel/server/pkg/authmethod"
-	"github.com/perfect-panel/server/pkg/constant"
-	"github.com/redis/go-redis/v9"
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
-)
-
-type canonicalCheckCodeResponse struct {
-	Code uint32 `json:"code"`
-	Data struct {
-		Status bool `json:"status"`
-		Exist  bool `json:"exist"`
-	} `json:"data"`
-}
-
-func newCanonicalCheckCodeTestSvcCtx(t *testing.T) (*svc.ServiceContext, *redis.Client) {
-	t.Helper()
-
-	miniRedis := miniredis.RunT(t)
-	redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()})
-	t.Cleanup(func() {
-		redisClient.Close()
-		miniRedis.Close()
-	})
-
-	svcCtx := &svc.ServiceContext{
-		Redis: redisClient,
-		Config: config.Config{
-			VerifyCode: config.VerifyCode{
-				VerifyCodeExpireTime: 900,
-			},
-		},
-	}
-	return svcCtx, redisClient
-}
-
-func newCanonicalCheckCodeTestRouter(svcCtx *svc.ServiceContext) *gin.Engine {
-	gin.SetMode(gin.TestMode)
-	router := gin.New()
-	router.Use(middleware.ApiVersionMiddleware(svcCtx))
-	router.POST("/v1/common/check_verification_code", middleware.ApiVersionSwitchHandler(
-		CheckVerificationCodeV1Handler(svcCtx),
-		CheckVerificationCodeV2Handler(svcCtx),
-	))
-	return router
-}
-
-func seedCanonicalVerifyCode(t *testing.T, redisClient *redis.Client, scene string, account string, code string) string {
-	t.Helper()
-
-	cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, scene, account)
-	payload := map[string]interface{}{
-		"code":   code,
-		"lastAt": time.Now().Unix(),
-	}
-	payloadRaw, err := json.Marshal(payload)
-	require.NoError(t, err)
-	err = redisClient.Set(context.Background(), cacheKey, payloadRaw, time.Minute*15).Err()
-	require.NoError(t, err)
-	return cacheKey
-}
-
-func callCanonicalCheckCode(t *testing.T, router *gin.Engine, apiHeader string, body string) canonicalCheckCodeResponse {
-	t.Helper()
-
-	reqBody := bytes.NewBufferString(body)
-	req := httptest.NewRequest(http.MethodPost, "/v1/common/check_verification_code", reqBody)
-	req.Header.Set("Content-Type", "application/json")
-	if apiHeader != "" {
-		req.Header.Set("api-header", apiHeader)
-	}
-	recorder := httptest.NewRecorder()
-	router.ServeHTTP(recorder, req)
-	require.Equal(t, http.StatusOK, recorder.Code)
-
-	var resp canonicalCheckCodeResponse
-	err := json.Unmarshal(recorder.Body.Bytes(), &resp)
-	require.NoError(t, err)
-	return resp
-}
-
-func TestCheckVerificationCodeHandler_ApiHeaderGate(t *testing.T) {
-	tests := []struct {
-		name          string
-		apiHeader     string
-		expectConsume bool
-	}{
-		{name: "no header", apiHeader: "", expectConsume: false},
-		{name: "invalid header", apiHeader: "invalid", expectConsume: false},
-		{name: "equal threshold", apiHeader: "1.0.0", expectConsume: false},
-		{name: "greater threshold", apiHeader: "1.0.1", expectConsume: true},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.name, func(t *testing.T) {
-			svcCtx, redisClient := newCanonicalCheckCodeTestSvcCtx(t)
-			router := newCanonicalCheckCodeTestRouter(svcCtx)
-
-			account := "header-gate@example.com"
-			code := "123123"
-			cacheKey := seedCanonicalVerifyCode(t, redisClient, constant.Register.String(), account, code)
-			body := fmt.Sprintf(`{"method":"%s","account":"%s","code":"%s","type":%d}`,
-				authmethod.Email,
-				account,
-				code,
-				constant.Register,
-			)
-
-			resp := callCanonicalCheckCode(t, router, tt.apiHeader, body)
-			assert.Equal(t, uint32(200), resp.Code)
-			assert.True(t, resp.Data.Status)
-
-			exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
-			require.NoError(t, err)
-			if tt.expectConsume {
-				assert.Equal(t, int64(0), exists)
-			} else {
-				assert.Equal(t, int64(1), exists)
-			}
-
-			resp = callCanonicalCheckCode(t, router, tt.apiHeader, body)
-			if tt.expectConsume {
-				assert.False(t, resp.Data.Status)
-			} else {
-				assert.True(t, resp.Data.Status)
-			}
-		})
-	}
-}
diff --git a/internal/handler/public/user/deleteAccountHandler_test.go b/internal/handler/public/user/deleteAccountHandler_test.go
deleted file mode 100644
index 345227d..0000000
--- a/internal/handler/public/user/deleteAccountHandler_test.go
+++ /dev/null
@@ -1,192 +0,0 @@
-package user
-
-import (
-	"bytes"
-	"context"
-	"encoding/json"
-	"errors"
-	"fmt"
-	"net"
-	"net/http"
-	"net/http/httptest"
-	"testing"
-	"time"
-
-	"github.com/alicebob/miniredis/v2"
-	"github.com/gin-gonic/gin"
-	"github.com/perfect-panel/server/internal/config"
-	"github.com/perfect-panel/server/internal/svc"
-	"github.com/perfect-panel/server/pkg/constant"
-	"github.com/redis/go-redis/v9"
-)
-
-type handlerResponse struct {
-	Code uint32          `json:"code"`
-	Msg  string          `json:"msg"`
-	Data json.RawMessage `json:"data"`
-}
-
-func newDeleteAccountTestRouter(serverCtx *svc.ServiceContext) *gin.Engine {
-	gin.SetMode(gin.TestMode)
-	router := gin.New()
-	router.POST("/v1/public/user/delete_account", DeleteAccountHandler(serverCtx))
-	return router
-}
-
-func TestDeleteAccountHandlerInvalidParamsUsesUnifiedResponse(t *testing.T) {
-	router := newDeleteAccountTestRouter(&svc.ServiceContext{})
-
-	reqBody := bytes.NewBufferString(`{"email":"invalid-email"}`)
-	req := httptest.NewRequest(http.MethodPost, "/v1/public/user/delete_account", reqBody)
-	req.Header.Set("Content-Type", "application/json")
-	recorder := httptest.NewRecorder()
-
-	router.ServeHTTP(recorder, req)
-
-	if recorder.Code != http.StatusOK {
-		t.Fatalf("expected HTTP 200, got %d", recorder.Code)
-	}
-
-	var resp handlerResponse
-	if err := json.Unmarshal(recorder.Body.Bytes(), &resp); err != nil {
-		t.Fatalf("failed to decode response: %v", err)
-	}
-
-	if resp.Code != 400 {
-		t.Fatalf("expected business code 400, got %d, body=%s", resp.Code, recorder.Body.String())
-	}
-
-	var raw map[string]interface{}
-	if err := json.Unmarshal(recorder.Body.Bytes(), &raw); err != nil {
-		t.Fatalf("failed to decode raw response: %v", err)
-	}
-	if _, exists := raw["error"]; exists {
-		t.Fatalf("unexpected raw error field in response: %s", recorder.Body.String())
-	}
-}
-
-func TestDeleteAccountHandlerVerifyCodeErrorUsesUnifiedResponse(t *testing.T) {
-	redisClient := redis.NewClient(&redis.Options{
-		Addr: "invalid:6379",
-		Dialer: func(_ context.Context, _, _ string) (net.Conn, error) {
-			return nil, errors.New("dial disabled in test")
-		},
-	})
-	defer redisClient.Close()
-
-	serverCtx := &svc.ServiceContext{
-		Redis: redisClient,
-		Config: config.Config{
-			VerifyCode: config.VerifyCode{
-				VerifyCodeExpireTime: 900,
-			},
-		},
-	}
-	router := newDeleteAccountTestRouter(serverCtx)
-
-	reqBody := bytes.NewBufferString(`{"email":"user@example.com","code":"123456"}`)
-	req := httptest.NewRequest(http.MethodPost, "/v1/public/user/delete_account", reqBody)
-	req.Header.Set("Content-Type", "application/json")
-	recorder := httptest.NewRecorder()
-
-	router.ServeHTTP(recorder, req)
-
-	if recorder.Code != http.StatusOK {
-		t.Fatalf("expected HTTP 200, got %d", recorder.Code)
-	}
-
-	var resp handlerResponse
-	if err := json.Unmarshal(recorder.Body.Bytes(), &resp); err != nil {
-		t.Fatalf("failed to decode response: %v", err)
-	}
-
-	if resp.Code != 70001 {
-		t.Fatalf("expected business code 70001, got %d, body=%s", resp.Code, recorder.Body.String())
-	}
-}
-
-func TestVerifyEmailCode_DeleteAccountSceneConsume(t *testing.T) {
-	miniRedis := miniredis.RunT(t)
-	redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()})
-	t.Cleanup(func() {
-		redisClient.Close()
-		miniRedis.Close()
-	})
-
-	serverCtx := &svc.ServiceContext{
-		Redis: redisClient,
-		Config: config.Config{
-			VerifyCode: config.VerifyCode{VerifyCodeExpireTime: 900},
-		},
-	}
-
-	email := "delete-account@example.com"
-	code := "112233"
-	cacheKey := seedDeleteSceneCode(t, redisClient, constant.DeleteAccount.String(), email, code)
-
-	err := verifyEmailCode(context.Background(), serverCtx, email, code)
-	if err != nil {
-		t.Fatalf("verifyEmailCode returned unexpected error: %v", err)
-	}
-
-	exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
-	if err != nil {
-		t.Fatalf("failed to check redis key: %v", err)
-	}
-	if exists != 0 {
-		t.Fatalf("expected verification code to be consumed, key still exists")
-	}
-}
-
-func TestVerifyEmailCode_SecurityFallbackConsume(t *testing.T) {
-	miniRedis := miniredis.RunT(t)
-	redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()})
-	t.Cleanup(func() {
-		redisClient.Close()
-		miniRedis.Close()
-	})
-
-	serverCtx := &svc.ServiceContext{
-		Redis: redisClient,
-		Config: config.Config{
-			VerifyCode: config.VerifyCode{VerifyCodeExpireTime: 900},
-		},
-	}
-
-	email := "security-fallback@example.com"
-	code := "445566"
-	cacheKey := seedDeleteSceneCode(t, redisClient, constant.Security.String(), email, code)
-
-	err := verifyEmailCode(context.Background(), serverCtx, email, code)
-	if err != nil {
-		t.Fatalf("verifyEmailCode fallback returned unexpected error: %v", err)
-	}
-
-	exists, err := redisClient.Exists(context.Background(), cacheKey).Result()
-	if err != nil {
-		t.Fatalf("failed to check redis key: %v", err)
-	}
-	if exists != 0 {
-		t.Fatalf("expected fallback verification code to be consumed, key still exists")
-	}
-}
-
-func seedDeleteSceneCode(t *testing.T, redisClient *redis.Client, scene string, email string, code string) string {
-	t.Helper()
-
-	cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, scene, email)
-	payload := map[string]interface{}{
-		"code":   code,
-		"lastAt": time.Now().Unix(),
-	}
-	payloadRaw, err := json.Marshal(payload)
-	if err != nil {
-		t.Fatalf("failed to marshal payload: %v", err)
-	}
-	err = redisClient.Set(context.Background(), cacheKey, payloadRaw, time.Minute*15).Err()
-	if err != nil {
-		t.Fatalf("failed to seed redis payload: %v", err)
-	}
-
-	return cacheKey
-}
diff --git a/internal/logic/admin/authMethod/validate_test.go b/internal/logic/admin/authMethod/validate_test.go
deleted file mode 100644
index 7a2785e..0000000
--- a/internal/logic/admin/authMethod/validate_test.go
+++ /dev/null
@@ -1,21 +0,0 @@
-package authMethod
-
-import (
-	"encoding/json"
-	"testing"
-
-	"github.com/perfect-panel/server/pkg/sms"
-)
-
-func TestValidate(t *testing.T) {
-	config := " {\"0\":\"{\",\"1\":\"\\\"\",\"10\":\"y\",\"11\":\"I\",\"12\":\"d\",\"13\":\"\\\"\",\"14\":\":\",\"15\":\"\\\"\",\"16\":\"\\\"\",\"17\":\",\",\"18\":\"\\\"\",\"19\":\"A\",\"2\":\"A\",\"20\":\"c\",\"21\":\"c\",\"22\":\"e\",\"23\":\"s\",\"24\":\"s\",\"25\":\"K\",\"26\":\"e\",\"27\":\"y\",\"28\":\"S\",\"29\":\"e\",\"3\":\"c\",\"30\":\"c\",\"31\":\"r\",\"32\":\"e\",\"33\":\"t\",\"34\":\"\\\"\",\"35\":\":\",\"36\":\"\\\"\",\"37\":\"\\\"\",\"38\":\",\",\"39\":\"\\\"\",\"4\":\"c\",\"40\":\"S\",\"41\":\"i\",\"42\":\"g\",\"43\":\"n\",\"44\":\"N\",\"45\":\"a\",\"46\":\"m\",\"47\":\"e\",\"48\":\"\\\"\",\"49\":\":\",\"5\":\"e\",\"50\":\"\\\"\",\"51\":\"\\\"\",\"52\":\",\",\"53\":\"\\\"\",\"54\":\"E\",\"55\":\"n\",\"56\":\"d\",\"57\":\"p\",\"58\":\"o\",\"59\":\"i\",\"6\":\"s\",\"60\":\"n\",\"61\":\"t\",\"62\":\"\\\"\",\"63\":\":\",\"64\":\"\\\"\",\"65\":\"\\\"\",\"66\":\",\",\"67\":\"\\\"\",\"68\":\"V\",\"69\":\"e\",\"7\":\"s\",\"70\":\"r\",\"71\":\"i\",\"72\":\"f\",\"73\":\"y\",\"74\":\"T\",\"75\":\"e\",\"76\":\"m\",\"77\":\"p\",\"78\":\"l\",\"79\":\"a\",\"8\":\"K\",\"80\":\"t\",\"81\":\"e\",\"82\":\"C\",\"83\":\"o\",\"84\":\"d\",\"85\":\"e\",\"86\":\"\\\"\",\"87\":\":\",\"88\":\"\\\"\",\"89\":\"\\\"\",\"9\":\"e\",\"90\":\"}\",\"access\":\"xxxx\",\"secret\":\"SSxxxxxxxxxxxxxxxxxxxxxxxU\",\"template\":\"Your verification code is: {{.code}}\"}"
-	var mapConfig map[string]interface{}
-	if err := json.Unmarshal([]byte(config), &mapConfig); err != nil {
-		t.Error(err)
-	}
-	platformConfig, err := validatePlatformConfig(sms.Abosend.String(), mapConfig)
-	if err != nil {
-		t.Errorf("validateEmailPlatformConfig error: %v", err)
-	}
-	t.Logf("platformConfig: %+v", platformConfig)
-}
diff --git a/internal/logic/admin/server/deleteNodeLogic.go b/internal/logic/admin/server/deleteNodeLogic.go
index 8a5f6a0..567882d 100644
--- a/internal/logic/admin/server/deleteNodeLogic.go
+++ b/internal/logic/admin/server/deleteNodeLogic.go
@@ -29,9 +29,12 @@ func NewDeleteNodeLogic(ctx context.Context, svcCtx *svc.ServiceContext) *Delete
 
 func (l *DeleteNodeLogic) DeleteNode(req *types.DeleteNodeRequest) error {
 	data, err := l.svcCtx.NodeModel.FindOneNode(l.ctx, req.Id)
-
-	err = l.svcCtx.NodeModel.DeleteNode(l.ctx, req.Id)
 	if err != nil {
+		l.Errorw("[DeleteNode] Find Node Error: ", logger.Field("error", err.Error()))
+		return errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "[DeleteNode] Find Node Error")
+	}
+
+	if err = l.svcCtx.NodeModel.DeleteNode(l.ctx, req.Id); err != nil {
 		l.Errorw("[DeleteNode] Delete Database Error: ", logger.Field("error", err.Error()))
 		return errors.Wrapf(xerr.NewErrCode(xerr.DatabaseDeletedError), "[DeleteNode] Delete Database Error")
 	}
diff --git a/internal/logic/admin/server/deleteServerLogic.go b/internal/logic/admin/server/deleteServerLogic.go
index 186d14a..76b9b1a 100644
--- a/internal/logic/admin/server/deleteServerLogic.go
+++ b/internal/logic/admin/server/deleteServerLogic.go
@@ -32,6 +32,9 @@ func (l *DeleteServerLogic) DeleteServer(req *types.DeleteServerRequest) error {
 		l.Errorw("[DeleteServer] Delete Server Error: ", logger.Field("error", err.Error()))
 		return errors.Wrapf(xerr.NewErrCode(xerr.DatabaseDeletedError), "[DeleteServer] Delete Server Error")
 	}
+	if err = l.svcCtx.NodeModel.ClearServerAllCache(l.ctx); err != nil {
+		l.Errorw("[DeleteServer] Clear server cache failed", logger.Field("error", err.Error()))
+	}
 	return l.svcCtx.NodeModel.ClearNodeCache(l.ctx, &node.FilterNodeParams{
 		Page: 1,
 		Size: 1000,
diff --git a/internal/logic/admin/subscribe/createSubscribeLogic.go b/internal/logic/admin/subscribe/createSubscribeLogic.go
index 6309e2b..411b026 100644
--- a/internal/logic/admin/subscribe/createSubscribeLogic.go
+++ b/internal/logic/admin/subscribe/createSubscribeLogic.go
@@ -48,6 +48,7 @@ func (l *CreateSubscribeLogic) CreateSubscribe(req *types.CreateSubscribeRequest
 		SpeedLimit:  req.SpeedLimit,
 		DeviceLimit: req.DeviceLimit,
 		Quota:       req.Quota,
+		NewUserOnly: req.NewUserOnly,
 		Nodes:       tool.Int64SliceToString(req.Nodes),
 		NodeTags:    tool.StringSliceToString(req.NodeTags),
 		Show:        req.Show,
diff --git a/internal/logic/admin/subscribe/resetAllSubscribeTokenLogic.go b/internal/logic/admin/subscribe/resetAllSubscribeTokenLogic.go
index e7307a2..181ff8a 100644
--- a/internal/logic/admin/subscribe/resetAllSubscribeTokenLogic.go
+++ b/internal/logic/admin/subscribe/resetAllSubscribeTokenLogic.go
@@ -33,12 +33,27 @@ func NewResetAllSubscribeTokenLogic(ctx context.Context, svcCtx *svc.ServiceCont
 
 func (l *ResetAllSubscribeTokenLogic) ResetAllSubscribeToken() (resp *types.ResetAllSubscribeTokenResponse, err error) {
 	var list []*user.Subscribe
 	tx := l.svcCtx.DB.WithContext(l.ctx).Begin()
+	if tx.Error != nil {
+		return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "Failed to begin transaction: %v", tx.Error)
+	}
 	// select all active and Finished subscriptions
 	if err = tx.Model(&user.Subscribe{}).Where("`status` IN ?", []int64{1, 2}).Find(&list).Error; err != nil {
+		tx.Rollback()
 		logger.Errorf("[ResetAllSubscribeToken] Failed to fetch subscribe list: %v", err.Error())
 		return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "Failed to fetch subscribe list: %v", err.Error())
 	}
+	// Save old tokens before overwriting for proper cache clearing
+	type oldTokenInfo struct {
+		Token  string
+		UserId int64
+		Id     int64
+	}
+	oldTokens := make([]oldTokenInfo, len(list))
+	for i, sub := range list {
+		oldTokens[i] = oldTokenInfo{Token: sub.Token, UserId: sub.UserId, Id: sub.Id}
+	}
+
 	for _, sub := range list {
 		sub.Token = uuidx.SubscribeToken(strconv.FormatInt(time.Now().UnixMilli(), 10) + strconv.FormatInt(sub.Id, 10))
 		sub.UUID = uuidx.NewUUID().String()
@@ -55,6 +70,25 @@ func (l *ResetAllSubscribeTokenLogic) ResetAllSubscribeToken() (resp *types.Rese
 		return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseUpdateError), "Failed to commit transaction: %v", err.Error())
 	}
+	// Clear cache for both old and new tokens
+	for i, sub := range list {
+		// Clear new token cache
+		if clearErr := l.svcCtx.UserModel.ClearSubscribeCache(l.ctx, sub); clearErr != nil {
+			logger.Errorf("[ResetAllSubscribeToken] Failed to clear new cache for subscribe ID %d: %v", sub.Id, clearErr.Error())
+		}
+		// Clear old token cache
+		if oldTokens[i].Token != "" && oldTokens[i].Token != sub.Token {
+			oldSub := &user.Subscribe{
+				Id:     oldTokens[i].Id,
+				UserId: oldTokens[i].UserId,
+				Token:  oldTokens[i].Token,
+			}
+			if clearErr := l.svcCtx.UserModel.ClearSubscribeCache(l.ctx, oldSub); clearErr != nil {
+				logger.Errorf("[ResetAllSubscribeToken] Failed to clear old cache for subscribe ID %d: %v", sub.Id, clearErr.Error())
+			}
+		}
+	}
+
 	return &types.ResetAllSubscribeTokenResponse{
 		Success: true,
 	}, nil
diff --git a/internal/logic/admin/subscribe/updateSubscribeLogic.go b/internal/logic/admin/subscribe/updateSubscribeLogic.go
index b79fdfe..123d5e0 100644
--- a/internal/logic/admin/subscribe/updateSubscribeLogic.go
+++ b/internal/logic/admin/subscribe/updateSubscribeLogic.go
@@ -56,6 +56,7 @@ func (l *UpdateSubscribeLogic) UpdateSubscribe(req *types.UpdateSubscribeRequest
 		SpeedLimit:  req.SpeedLimit,
 		DeviceLimit: req.DeviceLimit,
 		Quota:       req.Quota,
+		NewUserOnly: req.NewUserOnly,
 		Nodes:       tool.Int64SliceToString(req.Nodes),
 		NodeTags:    tool.StringSliceToString(req.NodeTags),
 		Show:        req.Show,
diff --git a/internal/logic/admin/user/updateUserSubscribeLogic.go b/internal/logic/admin/user/updateUserSubscribeLogic.go
index 23c2d2f..d86bac3 100644
--- a/internal/logic/admin/user/updateUserSubscribeLogic.go
+++ b/internal/logic/admin/user/updateUserSubscribeLogic.go
@@ -64,11 +64,18 @@ func (l *UpdateUserSubscribeLogic) UpdateUserSubscribe(req *types.UpdateUserSubs
 		l.Errorw("ClearSubscribeCache failed:", logger.Field("error", err.Error()), logger.Field("userSubscribeId", userSub.Id))
 		return errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "ClearSubscribeCache failed: %v", err.Error())
 	}
-	// Clear subscribe cache
+	// Clear old subscribe plan cache
 	if err = l.svcCtx.SubscribeModel.ClearCache(l.ctx, userSub.SubscribeId); err != nil {
-		l.Errorw("failed to clear subscribe cache", logger.Field("error", err.Error()), logger.Field("subscribeId", userSub.SubscribeId))
+		l.Errorw("failed to clear old subscribe cache", logger.Field("error", err.Error()), logger.Field("subscribeId", userSub.SubscribeId))
 		return errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "failed to clear subscribe cache: %v", err.Error())
 	}
+	// Clear new subscribe plan cache if plan changed
+	if req.SubscribeId != userSub.SubscribeId {
+		if err = l.svcCtx.SubscribeModel.ClearCache(l.ctx, req.SubscribeId); err != nil {
+			l.Errorw("failed to clear new subscribe cache", logger.Field("error", err.Error()), logger.Field("subscribeId", req.SubscribeId))
+			return errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "failed to clear new subscribe cache: %v", err.Error())
+		}
+	}
 
 	if err = l.svcCtx.NodeModel.ClearServerAllCache(l.ctx); err != nil {
 		l.Errorf("ClearServerAllCache error: %v", err.Error())
diff --git a/internal/logic/auth/bindDeviceLogic.go b/internal/logic/auth/bindDeviceLogic.go
index 6691393..0411bbe 100644
--- a/internal/logic/auth/bindDeviceLogic.go
+++ b/internal/logic/auth/bindDeviceLogic.go
@@ -171,6 +171,17 @@ func (l *BindDeviceLogic) createDeviceForUser(identifier, ip, userAgent string,
 		logger.Field("user_id", userId),
 	)
 
+	// Clear user cache to reflect new device
+	userInfo, findErr := l.svcCtx.UserModel.FindOne(l.ctx, userId)
+	if findErr == nil {
+		if clearErr := l.svcCtx.UserModel.ClearUserCache(l.ctx, userInfo); clearErr != nil {
+			l.Errorw("failed to clear user cache after device creation",
+				logger.Field("user_id", userId),
+				logger.Field("error", clearErr.Error()),
+			)
+		}
+	}
+
 	return nil
 }
 
@@ -208,7 +219,7 @@ func (l *BindDeviceLogic) rebindDeviceToNewUser(deviceInfo *user.Device, ip, use
 	}
 
 	var users []*user.User
-	err := l.svcCtx.DB.Where("id in (?)", []int64{oldUserId, newUserId}).Find(&users).Error
+	err := l.svcCtx.DB.Where("id in (?)", []int64{oldUserId, newUserId}).Preload("AuthMethods").Find(&users).Error
 	if err != nil {
 		l.Errorw("failed to query users for rebinding",
 			logger.Field("old_user_id", oldUserId),
diff --git a/internal/logic/auth/emailLoginLogic.go b/internal/logic/auth/emailLoginLogic.go
index 881d2f9..01a99e1 100644
--- a/internal/logic/auth/emailLoginLogic.go
+++ b/internal/logic/auth/emailLoginLogic.go
@@ -47,31 +47,29 @@ func (l *EmailLoginLogic) EmailLogin(req *types.EmailLoginRequest) (resp *types.
 	req.Code = strings.TrimSpace(req.Code)
 
 	// Verify Code
-	if req.Code != "202511" {
-		scenes := []string{constant.Security.String(), constant.Register.String(), "unknown"}
-		var verified bool
-		var cacheKeyUsed string
-		var payload common.CacheKeyPayload
-		for _, scene := range scenes {
-			cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, scene, req.Email)
-			value, err := l.svcCtx.Redis.Get(l.ctx, cacheKey).Result()
-			if err != nil || value == "" {
-				continue
-			}
-			if err := json.Unmarshal([]byte(value), &payload); err != nil {
-				continue
-			}
-			if payload.Code == req.Code && time.Now().Unix()-payload.LastAt <= l.svcCtx.Config.VerifyCode.VerifyCodeExpireTime {
-				verified = true
-				cacheKeyUsed = cacheKey
-				break
-			}
+	scenes := []string{constant.Security.String(), constant.Register.String(), "unknown"}
+	var verified bool
+	var cacheKeyUsed string
+	var payload common.CacheKeyPayload
+	for _, scene := range scenes {
+		cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, scene, req.Email)
+		value, err := l.svcCtx.Redis.Get(l.ctx, cacheKey).Result()
+		if err != nil || value == "" {
+			continue
 		}
-		if !verified {
-			return nil, errors.Wrapf(xerr.NewErrCode(xerr.VerifyCodeError), "verification code error or expired")
+		if err := json.Unmarshal([]byte(value), &payload); err != nil {
+			continue
+		}
+		if payload.Code == req.Code && time.Now().Unix()-payload.LastAt <= l.svcCtx.Config.VerifyCode.VerifyCodeExpireTime {
+			verified = true
+			cacheKeyUsed = cacheKey
+			break
 		}
-		l.svcCtx.Redis.Del(l.ctx, cacheKeyUsed)
 	}
+	if !verified {
+		return nil, errors.Wrapf(xerr.NewErrCode(xerr.VerifyCodeError), "verification code error or expired")
+	}
+	l.svcCtx.Redis.Del(l.ctx, cacheKeyUsed)
 
 	// Check User
 	userInfo, err = l.svcCtx.UserModel.FindOneByEmail(l.ctx, req.Email)
diff --git a/internal/logic/auth/resetPasswordLogic.go b/internal/logic/auth/resetPasswordLogic.go
index 363604e..f504437 100644
--- a/internal/logic/auth/resetPasswordLogic.go
+++ 
b/internal/logic/auth/resetPasswordLogic.go @@ -45,7 +45,7 @@ func (l *ResetPasswordLogic) ResetPassword(req *types.ResetPasswordRequest) (res loginStatus := false defer func() { - if userInfo.Id != 0 && loginStatus { + if userInfo != nil && userInfo.Id != 0 && loginStatus { loginLog := log.Login{ Method: "email", LoginIP: req.IP, diff --git a/internal/logic/auth/userLoginLogic.go b/internal/logic/auth/userLoginLogic.go index fc285eb..4204c53 100644 --- a/internal/logic/auth/userLoginLogic.go +++ b/internal/logic/auth/userLoginLogic.go @@ -42,7 +42,7 @@ func (l *UserLoginLogic) UserLogin(req *types.UserLoginRequest) (resp *types.Log var userInfo *user.User // Record login status defer func(svcCtx *svc.ServiceContext) { - if userInfo.Id != 0 { + if userInfo != nil && userInfo.Id != 0 { loginLog := log.Login{ Method: "email", LoginIP: req.IP, @@ -67,19 +67,18 @@ func (l *UserLoginLogic) UserLogin(req *types.UserLoginRequest) (resp *types.Log }(l.svcCtx) userInfo, err = l.svcCtx.UserModel.FindOneByEmail(l.ctx, req.Email) - - if userInfo.DeletedAt.Valid { - return nil, errors.Wrapf(xerr.NewErrCode(xerr.UserNotExist), "user email deleted: %v", req.Email) - } - if err != nil { - if errors.As(err, &gorm.ErrRecordNotFound) { + if errors.Is(err, gorm.ErrRecordNotFound) { return nil, errors.Wrapf(xerr.NewErrCode(xerr.UserNotExist), "user email not exist: %v", req.Email) } logger.WithContext(l.ctx).Error(err) return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "query user info failed: %v", err.Error()) } + if userInfo.DeletedAt.Valid { + return nil, errors.Wrapf(xerr.NewErrCode(xerr.UserNotExist), "user email deleted: %v", req.Email) + } + // Verify password if !tool.MultiPasswordVerify(userInfo.Algo, userInfo.Salt, req.Password, userInfo.Password) { return nil, errors.Wrapf(xerr.NewErrCode(xerr.UserPasswordError), "user password") diff --git a/internal/logic/common/checkverificationcodelogic_test.go b/internal/logic/common/checkverificationcodelogic_test.go 
deleted file mode 100644 index 6f064a6..0000000 --- a/internal/logic/common/checkverificationcodelogic_test.go +++ /dev/null @@ -1,259 +0,0 @@ -package common - -import ( - "context" - "encoding/json" - "fmt" - "testing" - "time" - - "github.com/alicebob/miniredis/v2" - "github.com/perfect-panel/server/internal/config" - "github.com/perfect-panel/server/internal/svc" - "github.com/perfect-panel/server/internal/types" - "github.com/perfect-panel/server/pkg/apiversion" - "github.com/perfect-panel/server/pkg/authmethod" - "github.com/perfect-panel/server/pkg/constant" - "github.com/redis/go-redis/v9" - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" -) - -func TestCheckVerificationCodeCanonicalConsume(t *testing.T) { - miniRedis := miniredis.RunT(t) - redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()}) - t.Cleanup(func() { - redisClient.Close() - miniRedis.Close() - }) - - svcCtx := &svc.ServiceContext{ - Redis: redisClient, - Config: config.Config{ - VerifyCode: config.VerifyCode{ - VerifyCodeExpireTime: 900, - }, - }, - } - - email := "user@example.com" - code := "123456" - scene := constant.Register.String() - cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, scene, email) - setEmailCodePayload(t, redisClient, cacheKey, code, time.Now().Unix()) - - logic := NewCheckVerificationCodeLogic(context.Background(), svcCtx) - req := &types.CheckVerificationCodeRequest{ - Method: authmethod.Email, - Account: email, - Code: code, - Type: uint8(constant.Register), - } - - resp, err := logic.CheckVerificationCode(req) - require.NoError(t, err) - require.NotNil(t, resp) - assert.True(t, resp.Status) - assert.True(t, resp.Exist) - - exists, err := redisClient.Exists(context.Background(), cacheKey).Result() - require.NoError(t, err) - assert.Equal(t, int64(0), exists) - - resp, err = logic.CheckVerificationCode(req) - require.NoError(t, err) - require.NotNil(t, resp) - assert.False(t, resp.Status) - assert.False(t, 
resp.Exist) -} - -func TestCheckVerificationCodeLegacyNoConsumeAndType3Mapping(t *testing.T) { - miniRedis := miniredis.RunT(t) - redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()}) - t.Cleanup(func() { - redisClient.Close() - miniRedis.Close() - }) - - svcCtx := &svc.ServiceContext{ - Redis: redisClient, - Config: config.Config{ - VerifyCode: config.VerifyCode{ - VerifyCodeExpireTime: 900, - }, - }, - } - - email := "legacy@example.com" - code := "654321" - scene := constant.Security.String() - cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, scene, email) - setEmailCodePayload(t, redisClient, cacheKey, code, time.Now().Unix()) - - legacyReq := &types.LegacyCheckVerificationCodeRequest{ - Email: email, - Code: code, - Type: 3, - } - - normalizedReq, type3Mapped, err := NormalizeLegacyCheckVerificationCodeRequest(legacyReq) - require.NoError(t, err) - assert.True(t, type3Mapped) - assert.Equal(t, uint8(constant.Security), normalizedReq.Type) - assert.Equal(t, authmethod.Email, normalizedReq.Method) - assert.Equal(t, email, normalizedReq.Account) - - logic := NewCheckVerificationCodeLogic(context.Background(), svcCtx) - legacyBehavior := VerifyCodeCheckBehavior{ - Source: "legacy", - Consume: false, - LegacyType3Mapped: true, - AllowSceneFallback: true, - } - - resp, err := logic.CheckVerificationCodeWithBehavior(normalizedReq, legacyBehavior) - require.NoError(t, err) - require.NotNil(t, resp) - assert.True(t, resp.Status) - assert.True(t, resp.Exist) - - exists, err := redisClient.Exists(context.Background(), cacheKey).Result() - require.NoError(t, err) - assert.Equal(t, int64(1), exists) - - resp, err = logic.CheckVerificationCodeWithBehavior(normalizedReq, legacyBehavior) - require.NoError(t, err) - assert.True(t, resp.Status) - - resp, err = logic.CheckVerificationCode(normalizedReq) - require.NoError(t, err) - assert.True(t, resp.Status) - - exists, err = redisClient.Exists(context.Background(), cacheKey).Result() - 
require.NoError(t, err) - assert.Equal(t, int64(0), exists) -} - -func TestCheckVerificationCodeLegacySceneFallback(t *testing.T) { - miniRedis := miniredis.RunT(t) - redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()}) - t.Cleanup(func() { - redisClient.Close() - miniRedis.Close() - }) - - svcCtx := &svc.ServiceContext{ - Redis: redisClient, - Config: config.Config{ - VerifyCode: config.VerifyCode{ - VerifyCodeExpireTime: 900, - }, - }, - } - - email := "fallback@example.com" - code := "778899" - cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, constant.Register.String(), email) - setEmailCodePayload(t, redisClient, cacheKey, code, time.Now().Unix()) - - logic := NewCheckVerificationCodeLogic(context.Background(), svcCtx) - req := &types.CheckVerificationCodeRequest{ - Method: authmethod.Email, - Account: email, - Code: code, - Type: uint8(constant.Security), - } - - resp, err := logic.CheckVerificationCodeWithBehavior(req, VerifyCodeCheckBehavior{ - Source: "legacy", - Consume: false, - AllowSceneFallback: true, - }) - require.NoError(t, err) - require.NotNil(t, resp) - assert.True(t, resp.Status) - - resp, err = logic.CheckVerificationCodeWithBehavior(req, VerifyCodeCheckBehavior{ - Source: "legacy", - Consume: false, - AllowSceneFallback: false, - }) - require.NoError(t, err) - require.NotNil(t, resp) - assert.False(t, resp.Status) -} - -func setEmailCodePayload(t *testing.T, redisClient *redis.Client, cacheKey string, code string, lastAt int64) { - t.Helper() - - payload := CacheKeyPayload{ - Code: code, - LastAt: lastAt, - } - value, err := json.Marshal(payload) - require.NoError(t, err) - err = redisClient.Set(context.Background(), cacheKey, value, time.Minute*15).Err() - require.NoError(t, err) -} - -func TestCheckVerificationCodeWithApiHeaderGate(t *testing.T) { - tests := []struct { - name string - header string - expectConsume bool - }{ - {name: "missing header", header: "", expectConsume: false}, - {name: "invalid header", 
header: "invalid", expectConsume: false}, - {name: "equal threshold", header: "1.0.0", expectConsume: false}, - {name: "greater threshold", header: "1.0.1", expectConsume: true}, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - miniRedis := miniredis.RunT(t) - redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()}) - t.Cleanup(func() { - redisClient.Close() - miniRedis.Close() - }) - - svcCtx := &svc.ServiceContext{ - Redis: redisClient, - Config: config.Config{ - VerifyCode: config.VerifyCode{ - VerifyCodeExpireTime: 900, - }, - }, - } - - email := "gate@example.com" - code := "101010" - cacheKey := fmt.Sprintf("%s:%s:%s", config.AuthCodeCacheKey, constant.Register.String(), email) - setEmailCodePayload(t, redisClient, cacheKey, code, time.Now().Unix()) - - logic := NewCheckVerificationCodeLogic(context.Background(), svcCtx) - req := &types.CheckVerificationCodeRequest{ - Method: authmethod.Email, - Account: email, - Code: code, - Type: uint8(constant.Register), - } - - resp, err := logic.CheckVerificationCodeWithBehavior(req, VerifyCodeCheckBehavior{ - Source: "canonical", - Consume: apiversion.UseLatest(tt.header, apiversion.DefaultThreshold), - }) - require.NoError(t, err) - require.NotNil(t, resp) - assert.True(t, resp.Status) - - exists, err := redisClient.Exists(context.Background(), cacheKey).Result() - require.NoError(t, err) - if tt.expectConsume { - assert.Equal(t, int64(0), exists) - } else { - assert.Equal(t, int64(1), exists) - } - }) - } -} diff --git a/internal/logic/common/familyEntitlement_test.go b/internal/logic/common/familyEntitlement_test.go deleted file mode 100644 index 9863d63..0000000 --- a/internal/logic/common/familyEntitlement_test.go +++ /dev/null @@ -1,78 +0,0 @@ -package common - -import ( - stderrors "errors" - "testing" - - modelUser "github.com/perfect-panel/server/internal/model/user" - "github.com/perfect-panel/server/pkg/xerr" - pkgerrors "github.com/pkg/errors" - 
"github.com/stretchr/testify/require" -) - -func extractFamilyEntitlementCode(err error) uint32 { - if err == nil { - return 0 - } - - var codeErr *xerr.CodeError - if stderrors.As(pkgerrors.Cause(err), &codeErr) { - return codeErr.GetErrCode() - } - return 0 -} - -func TestBuildEntitlementContext(t *testing.T) { - t.Run("default self entitlement", func(t *testing.T) { - entitlement := buildEntitlementContext(1001, nil) - require.Equal(t, int64(1001), entitlement.EffectiveUserID) - require.Equal(t, EntitlementSourceSelf, entitlement.Source) - require.Equal(t, int64(0), entitlement.OwnerUserID) - require.False(t, entitlement.ReadOnly) - }) - - t.Run("active family member uses owner entitlement", func(t *testing.T) { - entitlement := buildEntitlementContext(1001, &familyEntitlementRelation{ - Role: modelUser.FamilyRoleMember, - FamilyStatus: modelUser.FamilyStatusActive, - OwnerUserID: 2001, - }) - require.Equal(t, int64(2001), entitlement.EffectiveUserID) - require.Equal(t, EntitlementSourceFamilyOwner, entitlement.Source) - require.Equal(t, int64(2001), entitlement.OwnerUserID) - require.True(t, entitlement.ReadOnly) - }) - - t.Run("owner relation keeps self entitlement", func(t *testing.T) { - entitlement := buildEntitlementContext(2001, &familyEntitlementRelation{ - Role: modelUser.FamilyRoleOwner, - FamilyStatus: modelUser.FamilyStatusActive, - OwnerUserID: 2001, - }) - require.Equal(t, int64(2001), entitlement.EffectiveUserID) - require.Equal(t, EntitlementSourceSelf, entitlement.Source) - require.False(t, entitlement.ReadOnly) - }) - - t.Run("disabled family keeps self entitlement", func(t *testing.T) { - entitlement := buildEntitlementContext(1001, &familyEntitlementRelation{ - Role: modelUser.FamilyRoleMember, - FamilyStatus: 0, - OwnerUserID: 2001, - }) - require.Equal(t, int64(1001), entitlement.EffectiveUserID) - require.Equal(t, EntitlementSourceSelf, entitlement.Source) - require.False(t, entitlement.ReadOnly) - }) -} - -func 
TestDenyReadonlyEntitlement(t *testing.T) { - require.NoError(t, denyReadonlyEntitlement(&EntitlementContext{ReadOnly: false})) - - err := denyReadonlyEntitlement(&EntitlementContext{ - Source: EntitlementSourceFamilyOwner, - ReadOnly: true, - }) - require.Error(t, err) - require.Equal(t, xerr.FamilyOwnerOperationForbidden, extractFamilyEntitlementCode(err)) -} diff --git a/internal/logic/common/inviteLinkResolver_test.go b/internal/logic/common/inviteLinkResolver_test.go deleted file mode 100644 index 36f5fe9..0000000 --- a/internal/logic/common/inviteLinkResolver_test.go +++ /dev/null @@ -1,145 +0,0 @@ -package common - -import ( - "context" - "errors" - "net/url" - "testing" - - "github.com/alicebob/miniredis/v2" - "github.com/perfect-panel/server/internal/config" - "github.com/perfect-panel/server/internal/svc" - "github.com/redis/go-redis/v9" - "github.com/stretchr/testify/require" -) - -func buildInviteResolverForTest(t *testing.T, cfg config.Config) (*InviteLinkResolver, *miniredis.Miniredis) { - t.Helper() - - redisServer, err := miniredis.Run() - require.NoError(t, err) - t.Cleanup(func() { - redisServer.Close() - }) - - redisClient := redis.NewClient(&redis.Options{ - Addr: redisServer.Addr(), - DB: 0, - }) - t.Cleanup(func() { - _ = redisClient.Close() - }) - - serviceCtx := &svc.ServiceContext{ - Config: cfg, - Redis: redisClient, - } - - resolver := NewInviteLinkResolver(context.Background(), serviceCtx) - return resolver, redisServer -} - -func TestInviteLinkResolverResolveInviteLink(t *testing.T) { - t.Run("kutt disabled returns long link", func(t *testing.T) { - cfg := config.Config{} - cfg.Kutt.TargetURL = "https://example.com/register" - - resolver, _ := buildInviteResolverForTest(t, cfg) - link := resolver.ResolveInviteLink("abc123") - require.Equal(t, "https://example.com/register?ic=abc123", link) - }) - - t.Run("cache hit returns cached short link", func(t *testing.T) { - cfg := config.Config{} - cfg.Kutt.Enable = true - cfg.Kutt.ApiURL = 
"https://kutt.local/api/v2" - cfg.Kutt.ApiKey = "token" - cfg.Kutt.TargetURL = "https://example.com/register" - - resolver, redisServer := buildInviteResolverForTest(t, cfg) - redisServer.Set(inviteShortLinkCachePrefix+"abc123", "https://sho.rt/cached") - - called := 0 - resolver.createShortLink = func(ctx context.Context, targetURL, domain string) (string, error) { - called++ - return "", errors.New("should not call createShortLink on cache hit") - } - - link := resolver.ResolveInviteLink("abc123") - require.Equal(t, "https://sho.rt/cached", link) - require.Equal(t, 0, called) - }) - - t.Run("cache miss kutt success returns short link and writes cache", func(t *testing.T) { - cfg := config.Config{} - cfg.Kutt.Enable = true - cfg.Kutt.ApiURL = "https://kutt.local/api/v2" - cfg.Kutt.ApiKey = "token" - cfg.Kutt.TargetURL = "https://example.com/register" - - resolver, _ := buildInviteResolverForTest(t, cfg) - resolver.createShortLink = func(ctx context.Context, targetURL, domain string) (string, error) { - return "https://sho.rt/new", nil - } - - link := resolver.ResolveInviteLink("abc123") - require.Equal(t, "https://sho.rt/new", link) - - cached := resolver.getCachedShortLink("abc123") - require.Equal(t, "https://sho.rt/new", cached) - }) - - t.Run("kutt failure falls back to long link", func(t *testing.T) { - cfg := config.Config{} - cfg.Kutt.Enable = true - cfg.Kutt.ApiURL = "https://kutt.local/api/v2" - cfg.Kutt.ApiKey = "token" - cfg.Kutt.TargetURL = "https://example.com/register" - - resolver, _ := buildInviteResolverForTest(t, cfg) - resolver.createShortLink = func(ctx context.Context, targetURL, domain string) (string, error) { - return "", errors.New("kutt request failed") - } - - link := resolver.ResolveInviteLink("abc123") - require.Equal(t, "https://example.com/register?ic=abc123", link) - }) - - t.Run("long link preserves existing query string", func(t *testing.T) { - cfg := config.Config{} - cfg.Kutt.TargetURL = 
"https://example.com/register?channel=ios" - - resolver, _ := buildInviteResolverForTest(t, cfg) - link := resolver.ResolveInviteLink("abc123") - parsed, err := url.Parse(link) - require.NoError(t, err) - require.Equal(t, "https", parsed.Scheme) - require.Equal(t, "example.com", parsed.Host) - require.Equal(t, "/register", parsed.Path) - require.Equal(t, "ios", parsed.Query().Get("channel")) - require.Equal(t, "abc123", parsed.Query().Get("ic")) - }) - - t.Run("kutt target preserves existing query string", func(t *testing.T) { - cfg := config.Config{} - cfg.Kutt.Enable = true - cfg.Kutt.ApiURL = "https://kutt.local/api/v2" - cfg.Kutt.ApiKey = "token" - cfg.Kutt.TargetURL = "https://example.com/register?channel=ios" - - resolver, _ := buildInviteResolverForTest(t, cfg) - capturedTargetURL := "" - resolver.createShortLink = func(ctx context.Context, targetURL, domain string) (string, error) { - capturedTargetURL = targetURL - return "https://sho.rt/query", nil - } - - link := resolver.ResolveInviteLink("abc123") - require.Equal(t, "https://sho.rt/query", link) - - parsed, err := url.Parse(capturedTargetURL) - require.NoError(t, err) - require.Equal(t, "ios", parsed.Query().Get("channel")) - require.Equal(t, "abc123", parsed.Query().Get("ic")) - }) -} diff --git a/internal/logic/common/subscribeModeRoute_test.go b/internal/logic/common/subscribeModeRoute_test.go deleted file mode 100644 index ed8b3ac..0000000 --- a/internal/logic/common/subscribeModeRoute_test.go +++ /dev/null @@ -1,82 +0,0 @@ -package common - -import ( - "context" - "errors" - "testing" - - "github.com/perfect-panel/server/internal/model/user" - "github.com/stretchr/testify/require" -) - -func TestResolvePurchaseRoute(t *testing.T) { - ctx := context.Background() - - t.Run("single mode disabled", func(t *testing.T) { - called := false - decision, err := ResolvePurchaseRoute(ctx, false, 1, 100, func(ctx context.Context, userID int64) (*user.Subscribe, error) { - called = true - return nil, nil - }) - 
require.NoError(t, err) - require.NotNil(t, decision) - require.Equal(t, PurchaseRouteNewPurchase, decision.Route) - require.Equal(t, int64(100), decision.ResolvedSubscribeID) - require.False(t, called) - }) - - t.Run("single mode but empty user", func(t *testing.T) { - decision, err := ResolvePurchaseRoute(ctx, true, 0, 100, nil) - require.NoError(t, err) - require.NotNil(t, decision) - require.Equal(t, PurchaseRouteNewPurchase, decision.Route) - require.Equal(t, int64(100), decision.ResolvedSubscribeID) - }) - - t.Run("single mode no anchor", func(t *testing.T) { - decision, err := ResolvePurchaseRoute(ctx, true, 1, 100, func(ctx context.Context, userID int64) (*user.Subscribe, error) { - return nil, nil - }) - require.NoError(t, err) - require.NotNil(t, decision) - require.Equal(t, PurchaseRouteNewPurchase, decision.Route) - require.Equal(t, int64(100), decision.ResolvedSubscribeID) - }) - - t.Run("single mode routed to renewal", func(t *testing.T) { - decision, err := ResolvePurchaseRoute(ctx, true, 1, 100, func(ctx context.Context, userID int64) (*user.Subscribe, error) { - return &user.Subscribe{ - Id: 11, - SubscribeId: 100, - OrderId: 7, - Token: "token", - }, nil - }) - require.NoError(t, err) - require.NotNil(t, decision) - require.Equal(t, PurchaseRoutePurchaseToRenewal, decision.Route) - require.Equal(t, int64(100), decision.ResolvedSubscribeID) - require.NotNil(t, decision.Anchor) - require.Equal(t, int64(11), decision.Anchor.Id) - }) - - t.Run("single mode plan mismatch", func(t *testing.T) { - decision, err := ResolvePurchaseRoute(ctx, true, 1, 100, func(ctx context.Context, userID int64) (*user.Subscribe, error) { - return &user.Subscribe{ - Id: 11, - SubscribeId: 200, - }, nil - }) - require.ErrorIs(t, err, ErrSingleModePlanMismatch) - require.Nil(t, decision) - }) - - t.Run("single mode anchor query error", func(t *testing.T) { - queryErr := errors.New("query failed") - decision, err := ResolvePurchaseRoute(ctx, true, 1, 100, func(ctx 
context.Context, userID int64) (*user.Subscribe, error) {
-			return nil, queryErr
-		})
-		require.ErrorIs(t, err, queryErr)
-		require.Nil(t, decision)
-	})
-}
diff --git a/internal/logic/public/order/closeOrderLogic.go b/internal/logic/public/order/closeOrderLogic.go
index dd7ea13..2c89bfa 100644
--- a/internal/logic/public/order/closeOrderLogic.go
+++ b/internal/logic/public/order/closeOrderLogic.go
@@ -131,9 +131,8 @@ func (l *CloseOrderLogic) CloseOrder(req *types.CloseOrderRequest) error {
 			)
 			return err
 		}
-		// update user cache
-		return l.svcCtx.UserModel.UpdateUserCache(l.ctx, userInfo)
 	}
+	// Note: user cache will be updated after transaction commits
 	if sub.Inventory != -1 {
 		sub.Inventory++
 		if e := l.svcCtx.SubscribeModel.Update(l.ctx, sub, tx); e != nil {
@@ -151,6 +150,19 @@ func (l *CloseOrderLogic) CloseOrder(req *types.CloseOrderRequest) error {
 		logger.Errorf("[CloseOrder] Transaction failed: %v", err.Error())
 		return err
 	}
+
+	// Update user cache after transaction commits successfully
+	if orderInfo.GiftAmount > 0 && orderInfo.UserId != 0 {
+		if userInfo, findErr := l.svcCtx.UserModel.FindOne(l.ctx, orderInfo.UserId); findErr == nil {
+			if clearErr := l.svcCtx.UserModel.ClearUserCache(l.ctx, userInfo); clearErr != nil {
+				l.Errorw("[CloseOrder] failed to clear user cache",
+					logger.Field("error", clearErr.Error()),
+					logger.Field("user_id", orderInfo.UserId),
+				)
+			}
+		}
+	}
+
 	return nil
 }
diff --git a/internal/logic/public/order/preCreateOrderLogic.go b/internal/logic/public/order/preCreateOrderLogic.go
index 8b8f324..c32ff24 100644
--- a/internal/logic/public/order/preCreateOrderLogic.go
+++ b/internal/logic/public/order/preCreateOrderLogic.go
@@ -4,6 +4,7 @@ import (
 	"context"
 	"encoding/json"
 	"math"
+	"time"
 
 	commonLogic "github.com/perfect-panel/server/internal/logic/common"
 	"github.com/perfect-panel/server/internal/model/order"
@@ -108,6 +109,23 @@ func (l *PreCreateOrderLogic) PreCreateOrder(req *types.PurchaseOrderRequest) (r
 		}
 	}
 
+	// check new user only restriction
+	if !isSingleModeRenewal && sub.NewUserOnly != nil && *sub.NewUserOnly {
+		if time.Since(u.CreatedAt) > 24*time.Hour {
+			return nil, errors.Wrapf(xerr.NewErrCode(xerr.SubscribeNewUserOnly), "not a new user")
+		}
+		var historyCount int64
+		if e := l.svcCtx.DB.Model(&order.Order{}).
+			Where("user_id = ? AND subscribe_id = ? AND type = 1 AND status IN ?",
+				u.Id, targetSubscribeID, []uint8{2, 5}).
+			Count(&historyCount).Error; e != nil {
+			return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "check new user purchase history error: %v", e.Error())
+		}
+		if historyCount >= 1 {
+			return nil, errors.Wrapf(xerr.NewErrCode(xerr.SubscribeNewUserOnly), "already purchased new user plan")
+		}
+	}
+
 	var discount float64 = 1
 	if sub.Discount != "" {
 		var dis []types.SubscribeDiscount
diff --git a/internal/logic/public/order/purchaseLogic.go b/internal/logic/public/order/purchaseLogic.go
index f3226e3..58c2dd8 100644
--- a/internal/logic/public/order/purchaseLogic.go
+++ b/internal/logic/public/order/purchaseLogic.go
@@ -270,6 +270,23 @@ func (l *PurchaseLogic) Purchase(req *types.PurchaseOrderRequest) (resp *types.P
 		}
 	}
 
+	// check new user only restriction inside transaction to prevent race condition
+	if orderInfo.Type == 1 && sub.NewUserOnly != nil && *sub.NewUserOnly {
+		if time.Since(u.CreatedAt) > 24*time.Hour {
+			return errors.Wrapf(xerr.NewErrCode(xerr.SubscribeNewUserOnly), "not a new user")
+		}
+		var historyCount int64
+		if e := db.Model(&order.Order{}).
+			Where("user_id = ? AND subscribe_id = ? AND type = 1 AND status IN ?",
+				u.Id, targetSubscribeID, []int{2, 5}).
+			Count(&historyCount).Error; e != nil {
+			return errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "check new user purchase history error: %v", e.Error())
+		}
+		if historyCount >= 1 {
+			return errors.Wrapf(xerr.NewErrCode(xerr.SubscribeNewUserOnly), "already purchased new user plan")
+		}
+	}
+
 	// update user gift amount and create deduction record
 	if orderInfo.GiftAmount > 0 {
 		// deduct gift amount from user
@@ -319,7 +336,11 @@ func (l *PurchaseLogic) Purchase(req *types.PurchaseOrderRequest) (resp *types.P
 	})
 	if err != nil {
 		l.Errorw("[Purchase] Database insert error", logger.Field("error", err.Error()), logger.Field("orderInfo", orderInfo))
-
+		// Propagate business errors (e.g. SubscribeNewUserOnly, SubscribeQuotaLimit) directly.
+		var codeErr *xerr.CodeError
+		if errors.As(err, &codeErr) {
+			return nil, err
+		}
 		return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseInsertError), "insert order error: %v", err.Error())
 	}
 	// Deferred task
diff --git a/internal/logic/public/subscribe/queryUserSubscribeNodeListLogic_test.go b/internal/logic/public/subscribe/queryUserSubscribeNodeListLogic_test.go
deleted file mode 100644
index 776fcf7..0000000
--- a/internal/logic/public/subscribe/queryUserSubscribeNodeListLogic_test.go
+++ /dev/null
@@ -1,33 +0,0 @@
-package subscribe
-
-import (
-	"testing"
-
-	commonLogic "github.com/perfect-panel/server/internal/logic/common"
-	"github.com/perfect-panel/server/internal/types"
-	"github.com/stretchr/testify/require"
-)
-
-func TestFillUserSubscribeInfoEntitlementFields(t *testing.T) {
-	sub := &types.UserSubscribeInfo{}
-	entitlement := &commonLogic.EntitlementContext{
-		EffectiveUserID: 3001,
-		Source:          commonLogic.EntitlementSourceFamilyOwner,
-		OwnerUserID:     3001,
-		ReadOnly:        true,
-	}
-
-	fillUserSubscribeInfoEntitlementFields(sub, entitlement)
-
-	require.Equal(t, commonLogic.EntitlementSourceFamilyOwner, sub.EntitlementSource)
-	require.Equal(t, int64(3001), sub.EntitlementOwnerUserId)
-	require.True(t, sub.ReadOnly)
-}
-
-func TestNormalizeSubscribeNodeTags(t *testing.T) {
-	tags := normalizeSubscribeNodeTags("美国, 日本, , 美国, ,日本")
-	require.Equal(t, []string{"美国", "日本"}, tags)
-
-	empty := normalizeSubscribeNodeTags("")
-	require.Nil(t, empty)
-}
diff --git a/internal/logic/public/user/accountMergeHelper.go b/internal/logic/public/user/accountMergeHelper.go
index acfe7c7..5a4e1fc 100644
--- a/internal/logic/public/user/accountMergeHelper.go
+++ b/internal/logic/public/user/accountMergeHelper.go
@@ -45,6 +45,9 @@ func (h *accountMergeHelper) mergeIntoOwner(ownerUserID, deviceUserID int64, sou
 		DeviceUserID: deviceUserID,
 	}
 
+	// Capture device user's auth methods BEFORE the transaction migrates them
+	deviceAuthMethods, _ := h.svcCtx.UserModel.FindUserAuthMethods(h.ctx, deviceUserID)
+
 	err := h.svcCtx.DB.WithContext(h.ctx).Transaction(func(tx *gorm.DB) error {
 		var owner modelUser.User
 		if err := tx.Clauses(clause.Locking{Strength: "UPDATE"}).
@@ -114,7 +117,7 @@ func (h *accountMergeHelper) mergeIntoOwner(ownerUserID, deviceUserID int64, sou
 		return nil, err
 	}
 
-	if err := h.clearCaches(result); err != nil {
+	if err := h.clearCaches(result, deviceAuthMethods); err != nil {
 		return nil, err
 	}
 
@@ -129,16 +132,32 @@ func (h *accountMergeHelper) mergeIntoOwner(ownerUserID, deviceUserID int64, sou
 	return result, nil
 }
 
-func (h *accountMergeHelper) clearCaches(result *accountMergeResult) error {
+func (h *accountMergeHelper) clearCaches(result *accountMergeResult, deviceAuthMethods []*modelUser.AuthMethods) error {
 	if result == nil {
 		return nil
 	}
-	if err := h.svcCtx.UserModel.ClearUserCache(h.ctx,
-		&modelUser.User{Id: result.OwnerUserID},
-		&modelUser.User{Id: result.DeviceUserID},
-	); err != nil {
-		return err
+	// Fetch owner user with AuthMethods for proper cache key generation
+	var users []*modelUser.User
+	if u, err := h.svcCtx.UserModel.FindOne(h.ctx, result.OwnerUserID); err == nil {
+		users = append(users, u)
+	}
+	// For device user, FindOne won't have AuthMethods anymore (migrated in tx),
+	// so we build a minimal User with the pre-captured auth methods
+	deviceUser := &modelUser.User{Id: result.DeviceUserID}
+	if len(deviceAuthMethods) > 0 {
+		authMethods := make([]modelUser.AuthMethods, len(deviceAuthMethods))
+		for i, am := range deviceAuthMethods {
+			authMethods[i] = *am
+		}
+		deviceUser.AuthMethods = authMethods
+	}
+	users = append(users, deviceUser)
+
+	if len(users) > 0 {
+		if err := h.svcCtx.UserModel.ClearUserCache(h.ctx, users...); err != nil {
+			return err
+		}
 	}
 
 	if len(result.MovedDevices) > 0 {
diff --git a/internal/logic/public/user/deleteAccountLogic_test.go b/internal/logic/public/user/deleteAccountLogic_test.go
deleted file mode 100644
index a531208..0000000
--- a/internal/logic/public/user/deleteAccountLogic_test.go
+++ /dev/null
@@ -1,109 +0,0 @@
-package user
-
-import (
-	"context"
-	"testing"
-	"time"
-
-	"github.com/alicebob/miniredis/v2"
-	"github.com/perfect-panel/server/internal/svc"
-	"github.com/redis/go-redis/v9"
-)
-
-func TestClearAllSessions_RemovesUserSessionsAndDeviceMappings(t *testing.T) {
-	logic, redisClient, cleanup := newDeleteAccountRedisTestLogic(t)
-	defer cleanup()
-
-	mustRedisSet(t, redisClient, "auth:session_id:sid-user-1", "1001")
-	mustRedisSet(t, redisClient, "auth:session_id:sid-user-2", "1001")
-	mustRedisSet(t, redisClient, "auth:session_id:sid-other", "2002")
-
-	mustRedisSet(t, redisClient, "auth:session_id:detail:sid-user-1", "detail")
-	mustRedisSet(t, redisClient, "auth:session_id:detail:sid-other", "detail")
-
-	mustRedisSet(t, redisClient, "auth:device_identifier:dev-user-1", "sid-user-1")
-	mustRedisSet(t, redisClient, "auth:device_identifier:dev-user-2", "sid-user-2")
-	mustRedisSet(t, redisClient, "auth:device_identifier:dev-other", "sid-other")
-
-	mustRedisZAdd(t, redisClient, "auth:user_sessions:1001", "sid-user-3", 1)
-	mustRedisSet(t, redisClient, "auth:session_id:sid-user-3", "1001")
-
-	logic.clearAllSessions(1001)
-
-	mustRedisNotExist(t, redisClient,
"auth:session_id:sid-user-1") - mustRedisNotExist(t, redisClient, "auth:session_id:sid-user-2") - mustRedisNotExist(t, redisClient, "auth:session_id:sid-user-3") - mustRedisNotExist(t, redisClient, "auth:session_id:detail:sid-user-1") - mustRedisNotExist(t, redisClient, "auth:user_sessions:1001") - mustRedisNotExist(t, redisClient, "auth:device_identifier:dev-user-1") - mustRedisNotExist(t, redisClient, "auth:device_identifier:dev-user-2") - - mustRedisExist(t, redisClient, "auth:session_id:sid-other") - mustRedisExist(t, redisClient, "auth:session_id:detail:sid-other") - mustRedisExist(t, redisClient, "auth:device_identifier:dev-other") -} - -func TestClearAllSessions_ScanFallbackWorksWithoutUserSessionIndex(t *testing.T) { - logic, redisClient, cleanup := newDeleteAccountRedisTestLogic(t) - defer cleanup() - - mustRedisSet(t, redisClient, "auth:session_id:sid-a", "3003") - mustRedisSet(t, redisClient, "auth:session_id:sid-b", "3003") - mustRedisSet(t, redisClient, "auth:session_id:sid-c", "4004") - - logic.clearAllSessions(3003) - - mustRedisNotExist(t, redisClient, "auth:session_id:sid-a") - mustRedisNotExist(t, redisClient, "auth:session_id:sid-b") - mustRedisExist(t, redisClient, "auth:session_id:sid-c") -} - -func newDeleteAccountRedisTestLogic(t *testing.T) (*DeleteAccountLogic, *redis.Client, func()) { - t.Helper() - - miniRedis := miniredis.RunT(t) - redisClient := redis.NewClient(&redis.Options{Addr: miniRedis.Addr()}) - logic := NewDeleteAccountLogic(context.Background(), &svc.ServiceContext{Redis: redisClient}) - - cleanup := func() { - _ = redisClient.Close() - miniRedis.Close() - } - return logic, redisClient, cleanup -} - -func mustRedisSet(t *testing.T, redisClient *redis.Client, key, value string) { - t.Helper() - if err := redisClient.Set(context.Background(), key, value, time.Hour).Err(); err != nil { - t.Fatalf("redis set %s failed: %v", key, err) - } -} - -func mustRedisZAdd(t *testing.T, redisClient *redis.Client, key, member string, score 
float64) { - t.Helper() - if err := redisClient.ZAdd(context.Background(), key, redis.Z{Member: member, Score: score}).Err(); err != nil { - t.Fatalf("redis zadd %s failed: %v", key, err) - } -} - -func mustRedisExist(t *testing.T, redisClient *redis.Client, key string) { - t.Helper() - exists, err := redisClient.Exists(context.Background(), key).Result() - if err != nil { - t.Fatalf("redis exists %s failed: %v", key, err) - } - if exists == 0 { - t.Fatalf("expected redis key %s to exist", key) - } -} - -func mustRedisNotExist(t *testing.T, redisClient *redis.Client, key string) { - t.Helper() - exists, err := redisClient.Exists(context.Background(), key).Result() - if err != nil { - t.Fatalf("redis exists %s failed: %v", key, err) - } - if exists != 0 { - t.Fatalf("expected redis key %s to be deleted", key) - } -} diff --git a/internal/logic/public/user/familyBindingHelper_test.go b/internal/logic/public/user/familyBindingHelper_test.go deleted file mode 100644 index 40db128..0000000 --- a/internal/logic/public/user/familyBindingHelper_test.go +++ /dev/null @@ -1,128 +0,0 @@ -package user - -import ( - stderrors "errors" - "testing" - - modelUser "github.com/perfect-panel/server/internal/model/user" - "github.com/perfect-panel/server/pkg/xerr" - pkgerrors "github.com/pkg/errors" - "github.com/stretchr/testify/require" -) - -func extractFamilyJoinCode(err error) uint32 { - if err == nil { - return 0 - } - - var codeErr *xerr.CodeError - if stderrors.As(pkgerrors.Cause(err), &codeErr) { - return codeErr.GetErrCode() - } - return 0 -} - -func TestValidateMemberJoinConflict(t *testing.T) { - ownerFamilyID := int64(11) - - testCases := []struct { - name string - ownerFamily int64 - memberRecord *modelUser.UserFamilyMember - wantCode uint32 - }{ - { - name: "no member record", - ownerFamily: ownerFamilyID, - wantCode: 0, - }, - { - name: "same family active member", - ownerFamily: ownerFamilyID, - memberRecord: &modelUser.UserFamilyMember{ - FamilyId: ownerFamilyID, - 
Status: modelUser.FamilyMemberActive, - }, - wantCode: xerr.FamilyAlreadyBound, - }, - { - name: "same family left member", - ownerFamily: ownerFamilyID, - memberRecord: &modelUser.UserFamilyMember{ - FamilyId: ownerFamilyID, - Status: modelUser.FamilyMemberLeft, - }, - wantCode: 0, - }, - { - name: "same family removed member", - ownerFamily: ownerFamilyID, - memberRecord: &modelUser.UserFamilyMember{ - FamilyId: ownerFamilyID, - Status: modelUser.FamilyMemberRemoved, - }, - wantCode: 0, - }, - { - name: "cross family active member", - ownerFamily: ownerFamilyID, - memberRecord: &modelUser.UserFamilyMember{ - FamilyId: ownerFamilyID + 1, - Status: modelUser.FamilyMemberActive, - }, - wantCode: xerr.FamilyCrossBindForbidden, - }, - { - name: "cross family left member", - ownerFamily: ownerFamilyID, - memberRecord: &modelUser.UserFamilyMember{ - FamilyId: ownerFamilyID + 1, - Status: modelUser.FamilyMemberLeft, - }, - wantCode: 0, - }, - { - name: "cross family removed member", - ownerFamily: ownerFamilyID, - memberRecord: &modelUser.UserFamilyMember{ - FamilyId: ownerFamilyID + 1, - Status: modelUser.FamilyMemberRemoved, - }, - wantCode: 0, - }, - } - - for _, testCase := range testCases { - t.Run(testCase.name, func(t *testing.T) { - err := validateMemberJoinConflict(testCase.ownerFamily, testCase.memberRecord) - if testCase.wantCode == 0 { - require.NoError(t, err) - return - } - - require.Error(t, err) - require.Equal(t, testCase.wantCode, extractFamilyJoinCode(err)) - }) - } -} - -func TestBuildRemovedSubscribeCacheMeta(t *testing.T) { - removed := []modelUser.Subscribe{ - {Id: 1, SubscribeId: 10, Token: "member-token-1"}, - {Id: 2, SubscribeId: 11, Token: "member-token-2"}, - {Id: 3, SubscribeId: 0, Token: "member-token-3"}, - } - - models, subscribeIDSet := buildRemovedSubscribeCacheMeta(removed) - - require.Len(t, models, 3) - require.Equal(t, int64(1), models[0].Id) - require.Equal(t, "member-token-2", models[1].Token) - require.Len(t, subscribeIDSet, 2) - 
_, has10 := subscribeIDSet[10] - _, has11 := subscribeIDSet[11] - _, has0 := subscribeIDSet[0] - require.True(t, has10) - require.True(t, has11) - require.False(t, has0) -} diff --git a/internal/logic/public/user/queryUserInfoLogic_test.go b/internal/logic/public/user/queryUserInfoLogic_test.go deleted file mode 100644 index 8013780..0000000 --- a/internal/logic/public/user/queryUserInfoLogic_test.go +++ /dev/null @@ -1,105 +0,0 @@ -package user - -import ( - "testing" - - modelUser "github.com/perfect-panel/server/internal/model/user" - "github.com/perfect-panel/server/internal/types" - "github.com/stretchr/testify/require" -) - -func TestAppendFamilyOwnerEmailIfNeeded(t *testing.T) { - testCases := []struct { - name string - methods []types.UserAuthMethod - familyJoined bool - ownerEmailMethod *modelUser.AuthMethods - wantMethodCount int - wantEmailCount int - wantFirstAuthType string - wantFirstAuthValue string - }{ - { - name: "inject owner email when member has no email", - methods: []types.UserAuthMethod{ - {AuthType: "device", AuthIdentifier: "dev-1", Verified: true}, - }, - familyJoined: true, - ownerEmailMethod: &modelUser.AuthMethods{AuthType: "email", AuthIdentifier: "owner@example.com", Verified: true}, - wantMethodCount: 2, - wantEmailCount: 1, - wantFirstAuthType: "email", - wantFirstAuthValue: "owner@example.com", - }, - { - name: "do not inject when member already has email", - methods: []types.UserAuthMethod{ - {AuthType: "email", AuthIdentifier: "member@example.com", Verified: true}, - {AuthType: "device", AuthIdentifier: "dev-1", Verified: true}, - }, - familyJoined: true, - ownerEmailMethod: &modelUser.AuthMethods{AuthType: "email", AuthIdentifier: "owner@example.com", Verified: true}, - wantMethodCount: 2, - wantEmailCount: 1, - wantFirstAuthType: "email", - wantFirstAuthValue: "member@example.com", - }, - { - name: "do not inject when owner has no email", - methods: []types.UserAuthMethod{ - {AuthType: "device", AuthIdentifier: "dev-1", 
Verified: true}, - }, - familyJoined: true, - ownerEmailMethod: &modelUser.AuthMethods{AuthType: "email", AuthIdentifier: "", Verified: true}, - wantMethodCount: 1, - wantEmailCount: 0, - wantFirstAuthType: "device", - }, - { - name: "do not inject for non active family relationship", - methods: []types.UserAuthMethod{ - {AuthType: "device", AuthIdentifier: "dev-1", Verified: true}, - }, - familyJoined: false, - ownerEmailMethod: &modelUser.AuthMethods{AuthType: "email", AuthIdentifier: "owner@example.com", Verified: true}, - wantMethodCount: 1, - wantEmailCount: 0, - wantFirstAuthType: "device", - }, - { - name: "sort keeps injected email at first position", - methods: []types.UserAuthMethod{ - {AuthType: "mobile", AuthIdentifier: "+1234567890", Verified: true}, - {AuthType: "device", AuthIdentifier: "dev-1", Verified: true}, - }, - familyJoined: true, - ownerEmailMethod: &modelUser.AuthMethods{AuthType: "email", AuthIdentifier: "owner@example.com", Verified: true}, - wantMethodCount: 3, - wantEmailCount: 1, - wantFirstAuthType: "email", - wantFirstAuthValue: "owner@example.com", - }, - } - - for _, testCase := range testCases { - t.Run(testCase.name, func(t *testing.T) { - finalMethods := appendFamilyOwnerEmailIfNeeded(testCase.methods, testCase.familyJoined, testCase.ownerEmailMethod) - sortUserAuthMethodsByPriority(finalMethods) - - require.Len(t, finalMethods, testCase.wantMethodCount) - - emailCount := 0 - for _, method := range finalMethods { - if method.AuthType == "email" { - emailCount++ - } - } - require.Equal(t, testCase.wantEmailCount, emailCount) - - require.Equal(t, testCase.wantFirstAuthType, finalMethods[0].AuthType) - if testCase.wantFirstAuthValue != "" { - require.Equal(t, testCase.wantFirstAuthValue, finalMethods[0].AuthIdentifier) - } - }) - } -} diff --git a/internal/logic/public/user/queryUserSubscribeLogic_test.go b/internal/logic/public/user/queryUserSubscribeLogic_test.go deleted file mode 100644 index 249f0af..0000000 --- 
a/internal/logic/public/user/queryUserSubscribeLogic_test.go +++ /dev/null @@ -1,25 +0,0 @@ -package user - -import ( - "testing" - - commonLogic "github.com/perfect-panel/server/internal/logic/common" - "github.com/perfect-panel/server/internal/types" - "github.com/stretchr/testify/require" -) - -func TestFillUserSubscribeEntitlementFields(t *testing.T) { - sub := &types.UserSubscribe{} - entitlement := &commonLogic.EntitlementContext{ - EffectiveUserID: 2001, - Source: commonLogic.EntitlementSourceFamilyOwner, - OwnerUserID: 2001, - ReadOnly: true, - } - - fillUserSubscribeEntitlementFields(sub, entitlement) - - require.Equal(t, commonLogic.EntitlementSourceFamilyOwner, sub.EntitlementSource) - require.Equal(t, int64(2001), sub.EntitlementOwnerUserId) - require.True(t, sub.ReadOnly) -} diff --git a/internal/logic/public/user/unsubscribeLogic.go b/internal/logic/public/user/unsubscribeLogic.go index c87ca84..e74b062 100644 --- a/internal/logic/public/user/unsubscribeLogic.go +++ b/internal/logic/public/user/unsubscribeLogic.go @@ -71,7 +71,7 @@ func (l *UnsubscribeLogic) Unsubscribe(req *types.UnsubscribeRequest) error { err = l.svcCtx.UserModel.Transaction(l.ctx, func(db *gorm.DB) error { // Find and update subscription status to cancelled (status = 4) userSub.Status = 4 // Set status to cancelled - if err = l.svcCtx.UserModel.UpdateSubscribe(l.ctx, userSub); err != nil { + if err = l.svcCtx.UserModel.UpdateSubscribe(l.ctx, userSub, db); err != nil { return err } @@ -148,7 +148,7 @@ func (l *UnsubscribeLogic) Unsubscribe(req *types.UnsubscribeRequest) error { // Update user's regular balance and save changes to database u.Balance = balance - return l.svcCtx.UserModel.Update(l.ctx, u) + return l.svcCtx.UserModel.Update(l.ctx, u, db) }) if err != nil { diff --git a/internal/middleware/apiVersionSwitchHandler_test.go b/internal/middleware/apiVersionSwitchHandler_test.go deleted file mode 100644 index dd48dc6..0000000 --- 
a/internal/middleware/apiVersionSwitchHandler_test.go +++ /dev/null @@ -1,50 +0,0 @@ -package middleware - -import ( - "context" - "net/http" - "net/http/httptest" - "testing" - - "github.com/gin-gonic/gin" - "github.com/perfect-panel/server/pkg/constant" -) - -func TestApiVersionSwitchHandlerUsesLegacyByDefault(t *testing.T) { - gin.SetMode(gin.TestMode) - r := gin.New() - r.GET("/test", ApiVersionSwitchHandler( - func(c *gin.Context) { c.String(http.StatusOK, "legacy") }, - func(c *gin.Context) { c.String(http.StatusOK, "latest") }, - )) - - req := httptest.NewRequest(http.MethodGet, "/test", nil) - resp := httptest.NewRecorder() - r.ServeHTTP(resp, req) - - if resp.Code != http.StatusOK || resp.Body.String() != "legacy" { - t.Fatalf("expected legacy handler, code=%d body=%s", resp.Code, resp.Body.String()) - } -} - -func TestApiVersionSwitchHandlerUsesLatestWhenFlagSet(t *testing.T) { - gin.SetMode(gin.TestMode) - r := gin.New() - r.Use(func(c *gin.Context) { - ctx := context.WithValue(c.Request.Context(), constant.CtxKeyAPIVersionUseLatest, true) - c.Request = c.Request.WithContext(ctx) - c.Next() - }) - r.GET("/test", ApiVersionSwitchHandler( - func(c *gin.Context) { c.String(http.StatusOK, "legacy") }, - func(c *gin.Context) { c.String(http.StatusOK, "latest") }, - )) - - req := httptest.NewRequest(http.MethodGet, "/test", nil) - resp := httptest.NewRecorder() - r.ServeHTTP(resp, req) - - if resp.Code != http.StatusOK || resp.Body.String() != "latest" { - t.Fatalf("expected latest handler, code=%d body=%s", resp.Code, resp.Body.String()) - } -} diff --git a/internal/middleware/authMiddleware_test.go b/internal/middleware/authMiddleware_test.go deleted file mode 100644 index ca6d5cf..0000000 --- a/internal/middleware/authMiddleware_test.go +++ /dev/null @@ -1,46 +0,0 @@ -package middleware - -import "testing" - -func TestParseLoginType(t *testing.T) { - tests := []struct { - name string - claims map[string]interface{} - want string - }{ - { - name: "prefer 
CtxLoginType when both exist", - claims: map[string]interface{}{"CtxLoginType": "device", "LoginType": "email"}, - want: "device", - }, - { - name: "fallback to legacy LoginType", - claims: map[string]interface{}{"LoginType": "device"}, - want: "device", - }, - { - name: "ignore non-string values", - claims: map[string]interface{}{"CtxLoginType": 123, "LoginType": true}, - want: "", - }, - { - name: "empty values return empty", - claims: map[string]interface{}{"CtxLoginType": "", "LoginType": ""}, - want: "", - }, - { - name: "missing values return empty", - claims: map[string]interface{}{}, - want: "", - }, - } - - for _, testCase := range tests { - t.Run(testCase.name, func(t *testing.T) { - got := parseLoginType(testCase.claims) - if got != testCase.want { - t.Fatalf("parseLoginType() = %q, want %q", got, testCase.want) - } - }) - } -} diff --git a/internal/middleware/signatureMiddleware_test.go b/internal/middleware/signatureMiddleware_test.go deleted file mode 100644 index 8ea1919..0000000 --- a/internal/middleware/signatureMiddleware_test.go +++ /dev/null @@ -1,239 +0,0 @@ -package middleware - -import ( - "context" - "crypto/hmac" - "crypto/sha256" - "encoding/hex" - "encoding/json" - "io" - "net/http" - "net/http/httptest" - "strconv" - "strings" - "testing" - "time" - - "github.com/gin-gonic/gin" - "github.com/perfect-panel/server/internal/config" - "github.com/perfect-panel/server/internal/svc" - "github.com/perfect-panel/server/pkg/signature" - "github.com/perfect-panel/server/pkg/xerr" -) - -type testNonceStore struct { - seen map[string]bool -} - -func newTestNonceStore() *testNonceStore { - return &testNonceStore{seen: map[string]bool{}} -} - -func (s *testNonceStore) SetIfNotExists(_ context.Context, appId, nonce string, _ int64) (bool, error) { - key := appId + ":" + nonce - if s.seen[key] { - return true, nil - } - s.seen[key] = true - return false, nil -} - -func makeTestSignature(secret, sts string) string { - mac := hmac.New(sha256.New, 
[]byte(secret)) - mac.Write([]byte(sts)) - return hex.EncodeToString(mac.Sum(nil)) -} - -func newTestServiceContext() *svc.ServiceContext { - conf := config.Config{} - conf.Signature.EnableSignature = true - conf.AppSignature = signature.SignatureConf{ - AppSecrets: map[string]string{ - "web-client": "test-secret", - }, - ValidWindowSeconds: 300, - SkipPrefixes: []string{ - "/v1/public/health", - }, - } - return &svc.ServiceContext{ - Config: conf, - SignatureValidator: signature.NewValidator(conf.AppSignature, newTestNonceStore()), - } -} - -func newTestServiceContextWithSwitch(enabled bool) *svc.ServiceContext { - svcCtx := newTestServiceContext() - svcCtx.Config.Signature.EnableSignature = enabled - return svcCtx -} - -func decodeCode(t *testing.T, body []byte) uint32 { - t.Helper() - var resp struct { - Code uint32 `json:"code"` - } - if err := json.Unmarshal(body, &resp); err != nil { - t.Fatalf("unmarshal response failed: %v", err) - } - return resp.Code -} - -func TestSignatureMiddlewareMissingAppID(t *testing.T) { - gin.SetMode(gin.TestMode) - svcCtx := newTestServiceContext() - r := gin.New() - r.Use(SignatureMiddleware(svcCtx)) - r.GET("/v1/public/ping", func(c *gin.Context) { - c.String(http.StatusOK, "ok") - }) - - req := httptest.NewRequest(http.MethodGet, "/v1/public/ping", nil) - req.Header.Set("X-Signature-Enabled", "1") - resp := httptest.NewRecorder() - r.ServeHTTP(resp, req) - - if code := decodeCode(t, resp.Body.Bytes()); code != xerr.InvalidAccess { - t.Fatalf("expected InvalidAccess(%d), got %d", xerr.InvalidAccess, code) - } -} - -func TestSignatureMiddlewareMissingSignatureHeaders(t *testing.T) { - gin.SetMode(gin.TestMode) - svcCtx := newTestServiceContext() - r := gin.New() - r.Use(SignatureMiddleware(svcCtx)) - r.GET("/v1/public/ping", func(c *gin.Context) { - c.String(http.StatusOK, "ok") - }) - - req := httptest.NewRequest(http.MethodGet, "/v1/public/ping", nil) - req.Header.Set("X-Signature-Enabled", "1") - req.Header.Set("X-App-Id", 
"web-client") - resp := httptest.NewRecorder() - r.ServeHTTP(resp, req) - - if code := decodeCode(t, resp.Body.Bytes()); code != xerr.SignatureMissing { - t.Fatalf("expected SignatureMissing(%d), got %d", xerr.SignatureMissing, code) - } -} - -func TestSignatureMiddlewarePassesWhenSignatureHeaderMissing(t *testing.T) { - gin.SetMode(gin.TestMode) - svcCtx := newTestServiceContext() - r := gin.New() - r.Use(SignatureMiddleware(svcCtx)) - r.GET("/v1/public/ping", func(c *gin.Context) { - c.String(http.StatusOK, "ok") - }) - - req := httptest.NewRequest(http.MethodGet, "/v1/public/ping", nil) - resp := httptest.NewRecorder() - r.ServeHTTP(resp, req) - - if resp.Code != http.StatusOK || resp.Body.String() != "ok" { - t.Fatalf("expected pass-through without X-Signature-Enabled, got code=%d body=%s", resp.Code, resp.Body.String()) - } -} - -func TestSignatureMiddlewarePassesWhenSignatureHeaderIsZero(t *testing.T) { - gin.SetMode(gin.TestMode) - svcCtx := newTestServiceContext() - r := gin.New() - r.Use(SignatureMiddleware(svcCtx)) - r.GET("/v1/public/ping", func(c *gin.Context) { - c.String(http.StatusOK, "ok") - }) - - req := httptest.NewRequest(http.MethodGet, "/v1/public/ping", nil) - req.Header.Set("X-Signature-Enabled", "0") - resp := httptest.NewRecorder() - r.ServeHTTP(resp, req) - - if resp.Code != http.StatusOK || resp.Body.String() != "ok" { - t.Fatalf("expected pass-through when X-Signature-Enabled=0, got code=%d body=%s", resp.Code, resp.Body.String()) - } -} - -func TestSignatureMiddlewarePassesWhenSystemSwitchDisabled(t *testing.T) { - gin.SetMode(gin.TestMode) - svcCtx := newTestServiceContextWithSwitch(false) - r := gin.New() - r.Use(SignatureMiddleware(svcCtx)) - r.GET("/v1/public/ping", func(c *gin.Context) { - c.String(http.StatusOK, "ok") - }) - - req := httptest.NewRequest(http.MethodGet, "/v1/public/ping", nil) - req.Header.Set("X-Signature-Enabled", "1") - resp := httptest.NewRecorder() - r.ServeHTTP(resp, req) - - if resp.Code != http.StatusOK || 
resp.Body.String() != "ok" { - t.Fatalf("expected pass-through when system switch is disabled, got code=%d body=%s", resp.Code, resp.Body.String()) - } -} - -func TestSignatureMiddlewareSkipsNonPublicPath(t *testing.T) { - gin.SetMode(gin.TestMode) - svcCtx := newTestServiceContext() - r := gin.New() - r.Use(SignatureMiddleware(svcCtx)) - r.GET("/v1/admin/ping", func(c *gin.Context) { - c.String(http.StatusOK, "ok") - }) - - req := httptest.NewRequest(http.MethodGet, "/v1/admin/ping", nil) - resp := httptest.NewRecorder() - r.ServeHTTP(resp, req) - - if resp.Code != http.StatusOK || resp.Body.String() != "ok" { - t.Fatalf("expected pass-through for non-public path, got code=%d body=%s", resp.Code, resp.Body.String()) - } -} - -func TestSignatureMiddlewareHonorsSkipPrefix(t *testing.T) { - gin.SetMode(gin.TestMode) - svcCtx := newTestServiceContext() - r := gin.New() - r.Use(SignatureMiddleware(svcCtx)) - r.GET("/v1/public/healthz", func(c *gin.Context) { - c.String(http.StatusOK, "ok") - }) - - req := httptest.NewRequest(http.MethodGet, "/v1/public/healthz", nil) - resp := httptest.NewRecorder() - r.ServeHTTP(resp, req) - - if resp.Code != http.StatusOK || resp.Body.String() != "ok" { - t.Fatalf("expected skip-prefix pass-through, got code=%d body=%s", resp.Code, resp.Body.String()) - } -} - -func TestSignatureMiddlewareRestoresBodyAfterVerify(t *testing.T) { - gin.SetMode(gin.TestMode) - svcCtx := newTestServiceContext() - r := gin.New() - r.Use(SignatureMiddleware(svcCtx)) - r.POST("/v1/public/body", func(c *gin.Context) { - body, _ := io.ReadAll(c.Request.Body) - c.String(http.StatusOK, string(body)) - }) - - body := `{"hello":"world"}` - req := httptest.NewRequest(http.MethodPost, "/v1/public/body?a=1&b=2", strings.NewReader(body)) - ts := strconv.FormatInt(time.Now().Unix(), 10) - nonce := "nonce-body-1" - sts := signature.BuildStringToSign(http.MethodPost, "/v1/public/body", "a=1&b=2", []byte(body), "web-client", ts, nonce) - 
req.Header.Set("X-Signature-Enabled", "1") - req.Header.Set("X-App-Id", "web-client") - req.Header.Set("X-Timestamp", ts) - req.Header.Set("X-Nonce", nonce) - req.Header.Set("X-Signature", makeTestSignature("test-secret", sts)) - resp := httptest.NewRecorder() - r.ServeHTTP(resp, req) - - if resp.Code != http.StatusOK || resp.Body.String() != body { - t.Fatalf("expected restored body, got code=%d body=%s", resp.Code, resp.Body.String()) - } -} diff --git a/internal/model/auth/auth_test.go b/internal/model/auth/auth_test.go deleted file mode 100644 index db5dbed..0000000 --- a/internal/model/auth/auth_test.go +++ /dev/null @@ -1,30 +0,0 @@ -package auth - -import ( - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestAlibabaCloudConfig_Marshal(t *testing.T) { - v := new(AlibabaCloudConfig) - t.Log(v.Marshal()) -} - -func TestAlibabaCloudConfig_Unmarshal(t *testing.T) { - - cfg := AlibabaCloudConfig{ - Access: "AccessKeyId", - Secret: "AccessKeySecret", - SignName: "SignName", - Endpoint: "Endpoint", - TemplateCode: "VerifyTemplateCode", - } - data := cfg.Marshal() - v := new(AlibabaCloudConfig) - err := v.Unmarshal(data) - if err != nil { - t.Fatal(err.Error()) - } - assert.Equal(t, "AccessKeyId", v.Access) -} diff --git a/internal/model/node/model_test.go b/internal/model/node/model_test.go deleted file mode 100644 index 7318b3b..0000000 --- a/internal/model/node/model_test.go +++ /dev/null @@ -1,12 +0,0 @@ -package node - -import ( - "testing" - - "github.com/stretchr/testify/require" -) - -func TestNormalizeNodeTags(t *testing.T) { - tags := normalizeNodeTags([]string{"美国", " 日本 ", "", "美国", " ", "日本"}) - require.Equal(t, []string{"美国", "日本"}, tags) -} diff --git a/internal/model/subscribe/subscribe.go b/internal/model/subscribe/subscribe.go index c9c1046..1f20c96 100644 --- a/internal/model/subscribe/subscribe.go +++ b/internal/model/subscribe/subscribe.go @@ -20,6 +20,7 @@ type Subscribe struct { SpeedLimit int64 `gorm:"type:int;not 
null;default:0;comment:Speed Limit"` DeviceLimit int64 `gorm:"type:int;not null;default:0;comment:Device Limit"` Quota int64 `gorm:"type:int;not null;default:0;comment:Quota"` + NewUserOnly *bool `gorm:"type:tinyint(1);default:0;comment:New user only: allow purchase within 24h of registration, once per user"` Nodes string `gorm:"type:varchar(255);comment:Node Ids"` NodeTags string `gorm:"type:varchar(255);comment:Node Tags"` Show *bool `gorm:"type:tinyint(1);not null;default:0;comment:Show portal page"` diff --git a/internal/model/user/model.go b/internal/model/user/model.go index 4d316af..b00acec 100644 --- a/internal/model/user/model.go +++ b/internal/model/user/model.go @@ -246,7 +246,7 @@ func (m *customUserModel) BatchDeleteUser(ctx context.Context, ids []int64, tx . if len(tx) > 0 { conn = tx[0] } - return conn.Where("id in ?", ids).Find(&users).Error + return conn.Where("id in ?", ids).Preload("AuthMethods").Find(&users).Error }) if err != nil { return err diff --git a/internal/model/user/subscribe.go b/internal/model/user/subscribe.go index 8b786a4..7f798e4 100644 --- a/internal/model/user/subscribe.go +++ b/internal/model/user/subscribe.go @@ -88,15 +88,18 @@ func (m *defaultUserModel) FindOneSubscribe(ctx context.Context, id int64) (*Sub func (m *defaultUserModel) FindUsersSubscribeBySubscribeId(ctx context.Context, subscribeId int64) ([]*Subscribe, error) { var data []*Subscribe err := m.QueryNoCacheCtx(ctx, &data, func(conn *gorm.DB, v interface{}) error { - err := conn.Model(&Subscribe{}).Where("subscribe_id = ? AND `status` IN ?", subscribeId, []int64{1, 0}).Find(v).Error - - if err != nil { - return err - } - // update user subscribe status - return conn.Model(&Subscribe{}).Where("subscribe_id = ? AND `status` = ?", subscribeId, 0).Update("status", 1).Error + return conn.Model(&Subscribe{}).Where("subscribe_id = ? 
AND `status` IN ?", subscribeId, []int64{1, 0}).Find(v).Error }) - return data, err + if err != nil { + return nil, err + } + // Activate pending subscribes (status 0 -> 1) in a separate write operation + if err := m.ExecNoCacheCtx(ctx, func(conn *gorm.DB) error { + return conn.Model(&Subscribe{}).Where("subscribe_id = ? AND `status` = ?", subscribeId, 0).Update("status", 1).Error + }); err != nil { + return data, err + } + return data, nil } // QueryUserSubscribe returns a list of records that meet the conditions. @@ -136,6 +139,7 @@ func (m *defaultUserModel) QueryUserSubscribe(ctx context.Context, userId int64, func (m *defaultUserModel) FindOneUserSubscribe(ctx context.Context, id int64) (subscribeDetails *SubscribeDetails, err error) { //TODO cache //key := fmt.Sprintf("%s%d", cacheUserSubscribeUserPrefix, userId) + subscribeDetails = &SubscribeDetails{} err = m.QueryNoCacheCtx(ctx, subscribeDetails, func(conn *gorm.DB, v interface{}) error { return conn.Model(&Subscribe{}).Preload("Subscribe").Where("id = ?", id).First(&subscribeDetails).Error }) diff --git a/internal/report/tool_test.go b/internal/report/tool_test.go deleted file mode 100644 index cee9d83..0000000 --- a/internal/report/tool_test.go +++ /dev/null @@ -1,21 +0,0 @@ -package report - -import ( - "testing" -) - -func TestFreePort(t *testing.T) { - port, err := FreePort() - if err != nil { - t.Fatalf("FreePort() error: %v", err) - } - t.Logf("FreePort: %v", port) -} - -func TestModulePort(t *testing.T) { - port, err := ModulePort() - if err != nil { - t.Fatalf("ModulePort() error: %v", err) - } - t.Logf("ModulePort: %v", port) -} diff --git a/internal/trace/trace_test.go b/internal/trace/trace_test.go deleted file mode 100644 index e40339c..0000000 --- a/internal/trace/trace_test.go +++ /dev/null @@ -1,32 +0,0 @@ -package trace - -import ( - "context" - "net/http" - "net/http/httptest" - "testing" - - "github.com/stretchr/testify/assert" - sdktrace "go.opentelemetry.io/otel/sdk/trace" - semconv 
"go.opentelemetry.io/otel/semconv/v1.4.0" - oteltrace "go.opentelemetry.io/otel/trace" -) - -func TestSpanIDFromContext(t *testing.T) { - tracer := sdktrace.NewTracerProvider().Tracer("test") - ctx, span := tracer.Start( - context.Background(), - "foo", - oteltrace.WithSpanKind(oteltrace.SpanKindClient), - oteltrace.WithAttributes(semconv.HTTPClientAttributesFromHTTPRequest(httptest.NewRequest(http.MethodGet, "/", nil))...), - ) - defer span.End() - - assert.NotEmpty(t, TraceIDFromContext(ctx)) - assert.NotEmpty(t, SpanIDFromContext(ctx)) -} - -func TestSpanIDFromContextEmpty(t *testing.T) { - assert.Empty(t, TraceIDFromContext(context.Background())) - assert.Empty(t, SpanIDFromContext(context.Background())) -} diff --git a/internal/types/deleteAccountResponse_test.go b/internal/types/deleteAccountResponse_test.go deleted file mode 100644 index 0fc4c88..0000000 --- a/internal/types/deleteAccountResponse_test.go +++ /dev/null @@ -1,37 +0,0 @@ -package types - -import ( - "encoding/json" - "testing" -) - -func TestDeleteAccountResponseAlwaysContainsIntFields(t *testing.T) { - data, err := json.Marshal(DeleteAccountResponse{ - Success: true, - Message: "ok", - }) - if err != nil { - t.Fatalf("failed to marshal response: %v", err) - } - - var decoded map[string]interface{} - if err = json.Unmarshal(data, &decoded); err != nil { - t.Fatalf("failed to decode response: %v", err) - } - - userID, hasUserID := decoded["user_id"] - if !hasUserID { - t.Fatalf("expected user_id in JSON, got %s", string(data)) - } - if userID != float64(0) { - t.Fatalf("expected user_id=0, got %v", userID) - } - - code, hasCode := decoded["code"] - if !hasCode { - t.Fatalf("expected code in JSON, got %s", string(data)) - } - if code != float64(0) { - t.Fatalf("expected code=0, got %v", code) - } -} diff --git a/internal/types/types.go b/internal/types/types.go index 3a7ce0f..9903e04 100644 --- a/internal/types/types.go +++ b/internal/types/types.go @@ -458,6 +458,7 @@ type CreateSubscribeRequest 
struct { SpeedLimit int64 `json:"speed_limit"` DeviceLimit int64 `json:"device_limit"` Quota int64 `json:"quota"` + NewUserOnly *bool `json:"new_user_only"` Nodes []int64 `json:"nodes"` NodeTags []string `json:"node_tags"` Show *bool `json:"show"` @@ -2402,6 +2403,7 @@ type Subscribe struct { SpeedLimit int64 `json:"speed_limit"` DeviceLimit int64 `json:"device_limit"` Quota int64 `json:"quota"` + NewUserOnly bool `json:"new_user_only"` Nodes []int64 `json:"nodes"` NodeTags []string `json:"node_tags"` Show bool `json:"show"` @@ -2805,6 +2807,7 @@ type UpdateSubscribeRequest struct { SpeedLimit int64 `json:"speed_limit"` DeviceLimit int64 `json:"device_limit"` Quota int64 `json:"quota"` + NewUserOnly *bool `json:"new_user_only"` Nodes []int64 `json:"nodes"` NodeTags []string `json:"node_tags"` Show *bool `json:"show"` diff --git a/pkg/aes/aes_test.go b/pkg/aes/aes_test.go deleted file mode 100644 index 286173e..0000000 --- a/pkg/aes/aes_test.go +++ /dev/null @@ -1,29 +0,0 @@ -package pkgaes - -import ( - "encoding/json" - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestAes(t *testing.T) { - params := map[string]interface{}{ - "method": "email", - "account": "admin@ppanel.dev", - "password": "password", - } - marshal, _ := json.Marshal(params) - jsonStr := string(marshal) - encrypt, iv, err := Encrypt([]byte(jsonStr), "123456") - if err != nil { - t.Fatalf("encrypt failed: %v", err) - } - decrypt, err := Decrypt(encrypt, "123456", iv) - if err != nil { - t.Fatalf("decrypt failed: %v", err) - } - - assert.Equal(t, jsonStr, decrypt, "decrypt failed") - -} diff --git a/pkg/apiversion/version_test.go b/pkg/apiversion/version_test.go deleted file mode 100644 index 41e5585..0000000 --- a/pkg/apiversion/version_test.go +++ /dev/null @@ -1,55 +0,0 @@ -package apiversion - -import "testing" - -func TestParse(t *testing.T) { - tests := []struct { - name string - raw string - valid bool - version Version - }{ - {name: "empty", raw: "", valid: false}, - {name: 
"invalid text", raw: "abc", valid: false}, - {name: "missing patch", raw: "1.0", valid: false}, - {name: "exact", raw: "1.0.0", valid: true, version: Version{Major: 1, Minor: 0, Patch: 0}}, - {name: "with prefix", raw: "v1.2.3", valid: true, version: Version{Major: 1, Minor: 2, Patch: 3}}, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - version, ok := Parse(tt.raw) - if ok != tt.valid { - t.Fatalf("expected valid=%v, got %v", tt.valid, ok) - } - if tt.valid && version != tt.version { - t.Fatalf("expected version=%+v, got %+v", tt.version, version) - } - }) - } -} - -func TestUseLatest(t *testing.T) { - tests := []struct { - name string - header string - threshold string - expect bool - }{ - {name: "missing header", header: "", threshold: "1.0.0", expect: false}, - {name: "invalid header", header: "invalid", threshold: "1.0.0", expect: false}, - {name: "equal threshold", header: "1.0.0", threshold: "1.0.0", expect: false}, - {name: "greater threshold", header: "1.0.1", threshold: "1.0.0", expect: true}, - {name: "greater with v prefix", header: "v1.2.3", threshold: "1.0.0", expect: true}, - {name: "less than threshold", header: "0.9.9", threshold: "1.0.0", expect: false}, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - result := UseLatest(tt.header, tt.threshold) - if result != tt.expect { - t.Fatalf("expected %v, got %v", tt.expect, result) - } - }) - } -} diff --git a/pkg/cache/cacheopt_test.go b/pkg/cache/cacheopt_test.go deleted file mode 100644 index 7b7d82c..0000000 --- a/pkg/cache/cacheopt_test.go +++ /dev/null @@ -1,28 +0,0 @@ -package cache - -import ( - "testing" - "time" - - "github.com/stretchr/testify/assert" -) - -func TestCacheOptions(t *testing.T) { - t.Run("default options", func(t *testing.T) { - o := newOptions() - assert.Equal(t, defaultExpiry, o.Expiry) - assert.Equal(t, defaultNotFoundExpiry, o.NotFoundExpiry) - }) - - t.Run("with expiry", func(t *testing.T) { - o := 
newOptions(WithExpiry(time.Second)) - assert.Equal(t, time.Second, o.Expiry) - assert.Equal(t, defaultNotFoundExpiry, o.NotFoundExpiry) - }) - - t.Run("with not found expiry", func(t *testing.T) { - o := newOptions(WithNotFoundExpiry(time.Second)) - assert.Equal(t, defaultExpiry, o.Expiry) - assert.Equal(t, time.Second, o.NotFoundExpiry) - }) -} diff --git a/pkg/cache/gorm.go b/pkg/cache/gorm.go index 0fd2805..fdad474 100644 --- a/pkg/cache/gorm.go +++ b/pkg/cache/gorm.go @@ -109,6 +109,13 @@ func (cc CachedConn) QueryCtx(ctx context.Context, v interface{}, key string, qu } return cc.SetCache(key, v) } + // Cache data corrupted (e.g. bad JSON), delete key and fall through to DB + _ = cc.DelCache(key) + err = query(cc.db.WithContext(ctx), v) + if err != nil { + return err + } + return cc.SetCache(key, v) } return } diff --git a/pkg/cache/gorm_test.go b/pkg/cache/gorm_test.go index 0078884..c090771 100644 --- a/pkg/cache/gorm_test.go +++ b/pkg/cache/gorm_test.go @@ -2,65 +2,182 @@ package cache import ( "context" + "encoding/json" + "errors" "testing" "time" - "github.com/perfect-panel/server/pkg/orm" + "github.com/alicebob/miniredis/v2" "github.com/redis/go-redis/v9" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + "gorm.io/driver/sqlite" "gorm.io/gorm" - "gorm.io/plugin/soft_delete" ) -type User struct { - Id int64 `gorm:"primarykey"` - Email string `gorm:"index:idx_email;type:varchar(100);unique;not null;comment:电子邮箱"` - Password string `gorm:"type:varchar(100);comment:用户密码;not null"` - Avatar string `gorm:"type:varchar(200);default:'';comment:用户头像"` - Balance int64 `gorm:"default:0;comment:用户余额"` - Telegram int64 `gorm:"default:null;comment:Telegram账号"` - ReferCode string `gorm:"type:varchar(20);default:'';comment:推荐码"` - RefererId int64 `gorm:"comment:推荐人ID"` - Enable bool `gorm:"default:true;not null;comment:账户是否可用"` - IsAdmin bool `gorm:"default:false;not null;comment:是否管理员"` - ValidEmail bool `gorm:"default:false;not 
null;comment:是否验证邮箱"` - EnableEmailNotify bool `gorm:"default:false;not null;comment:是否启用邮件通知"` - EnableTelegramNotify bool `gorm:"default:false;not null;comment:是否启用Telegram通知"` - EnableBalanceNotify bool `gorm:"default:false;not null;comment:是否启用余额变动通知"` - EnableLoginNotify bool `gorm:"default:false;not null;comment:是否启用登录通知"` - EnableSubscribeNotify bool `gorm:"default:false;not null;comment:是否启用订阅通知"` - EnableTradeNotify bool `gorm:"default:false;not null;comment:是否启用交易通知"` - CreatedAt time.Time `gorm:"<-:create;comment:创建时间"` - UpdatedAt time.Time `gorm:"comment:更新时间"` - DeletedAt gorm.DeletedAt `gorm:"default:null;comment:删除时间"` - IsDel soft_delete.DeletedAt `gorm:"softDelete:flag,DeletedAtField:DeletedAt;comment:1:正常 0:删除"` // Use `1` `0` to identify +// testUser is a simple struct used across all QueryCtx tests. +type testUser struct { + ID int64 `json:"id"` + Name string `json:"name"` } -func TestGormCacheCtx(t *testing.T) { - t.Skipf("skip TestGormCacheCtx test") - db, err := orm.ConnectMysql(orm.Mysql{ - Config: orm.Config{ - Addr: "localhost:3306", - Config: "charset=utf8mb4&parseTime=true&loc=Asia%2FShanghai", - Dbname: "vpnboard", - Username: "root", - Password: "mylove520", - }, +// setupCachedConn creates a CachedConn backed by a real miniredis instance +// and a bare *gorm.DB (no real database connection needed because the +// QueryCtxFn callback is fully under our control). 
+func setupCachedConn(t *testing.T) (CachedConn, *miniredis.Miniredis) { + t.Helper() + + mr := miniredis.RunT(t) + + rdb := redis.NewClient(&redis.Options{ + Addr: mr.Addr(), }) - if err != nil { - t.Error(err) - } - rds := redis.NewClient(&redis.Options{ - Addr: "localhost:6379", - }) - conn := NewConn(db, rds) - var u User - key := "user:id" - err = conn.QueryCtx(context.Background(), &u, key, func(conn *gorm.DB, v interface{}) error { - return conn.Where("id = ?", 1).First(v).Error - }) - if err != nil { - t.Error(err) - return - } - t.Logf("get cache success %+v", u) + t.Cleanup(func() { rdb.Close() }) + + // Use SQLite in-memory to get a properly initialized *gorm.DB. + db, err := gorm.Open(sqlite.Open(":memory:"), &gorm.Config{}) + require.NoError(t, err) + + cc := NewConn(db, rdb, WithExpiry(time.Minute)) + return cc, mr +} + +func TestQueryCtx_CacheHit(t *testing.T) { + cc, mr := setupCachedConn(t) + ctx := context.Background() + key := "cache:user:1" + + // Pre-populate the cache with valid JSON. + expected := testUser{ID: 1, Name: "Alice"} + data, err := json.Marshal(expected) + require.NoError(t, err) + mr.Set(key, string(data)) + + // Track whether the DB query function is called. + dbCalled := false + queryFn := func(conn *gorm.DB, v interface{}) error { + dbCalled = true + return nil + } + + var result testUser + err = cc.QueryCtx(ctx, &result, key, queryFn) + + assert.NoError(t, err) + assert.False(t, dbCalled, "DB query should NOT be called on cache hit") + assert.Equal(t, expected.ID, result.ID) + assert.Equal(t, expected.Name, result.Name) +} + +func TestQueryCtx_CacheMiss_QueriesDB_SetsCache(t *testing.T) { + cc, mr := setupCachedConn(t) + ctx := context.Background() + key := "cache:user:2" + + // Do NOT pre-populate the cache -- this is a cache miss scenario. 
+ dbCalled := false + queryFn := func(conn *gorm.DB, v interface{}) error { + dbCalled = true + u := v.(*testUser) + u.ID = 2 + u.Name = "Bob" + return nil + } + + var result testUser + err := cc.QueryCtx(ctx, &result, key, queryFn) + + assert.NoError(t, err) + assert.True(t, dbCalled, "DB query should be called on cache miss") + assert.Equal(t, int64(2), result.ID) + assert.Equal(t, "Bob", result.Name) + + // Verify the value was written back to cache. + cached, cacheErr := mr.Get(key) + require.NoError(t, cacheErr) + + var cachedUser testUser + require.NoError(t, json.Unmarshal([]byte(cached), &cachedUser)) + assert.Equal(t, int64(2), cachedUser.ID) + assert.Equal(t, "Bob", cachedUser.Name) +} + +func TestQueryCtx_CorruptedCache_SelfHeals(t *testing.T) { + cc, mr := setupCachedConn(t) + ctx := context.Background() + key := "cache:user:3" + + // Store invalid JSON in the cache to simulate corruption. + mr.Set(key, "THIS IS NOT VALID JSON{{{") + + dbCalled := false + queryFn := func(conn *gorm.DB, v interface{}) error { + dbCalled = true + u := v.(*testUser) + u.ID = 3 + u.Name = "Charlie" + return nil + } + + var result testUser + err := cc.QueryCtx(ctx, &result, key, queryFn) + + assert.NoError(t, err) + assert.True(t, dbCalled, "DB query should be called when cache is corrupted") + assert.Equal(t, int64(3), result.ID) + assert.Equal(t, "Charlie", result.Name) + + // Verify the corrupt key was replaced with valid data. + cached, cacheErr := mr.Get(key) + require.NoError(t, cacheErr) + + var cachedUser testUser + require.NoError(t, json.Unmarshal([]byte(cached), &cachedUser)) + assert.Equal(t, int64(3), cachedUser.ID) + assert.Equal(t, "Charlie", cachedUser.Name) +} + +func TestQueryCtx_CacheMiss_DBFails_ReturnsError(t *testing.T) { + cc, mr := setupCachedConn(t) + ctx := context.Background() + key := "cache:user:4" + + // No cache entry -- this is a miss. 
+ dbErr := errors.New("connection refused") + queryFn := func(conn *gorm.DB, v interface{}) error { + return dbErr + } + + var result testUser + err := cc.QueryCtx(ctx, &result, key, queryFn) + + assert.Error(t, err) + assert.Equal(t, dbErr, err) + + // Cache should remain empty -- no value was written. + assert.False(t, mr.Exists(key), "cache should NOT be set when DB query fails") +} + +func TestQueryCtx_CorruptedCache_DBFails_ReturnsError(t *testing.T) { + cc, mr := setupCachedConn(t) + ctx := context.Background() + key := "cache:user:5" + + // Store invalid JSON to trigger the corruption branch. + mr.Set(key, "<<>>") + + dbErr := errors.New("database is down") + queryFn := func(conn *gorm.DB, v interface{}) error { + return dbErr + } + + var result testUser + err := cc.QueryCtx(ctx, &result, key, queryFn) + + assert.Error(t, err) + assert.Equal(t, dbErr, err) + + // The corrupt key should have been deleted (DelCache was called), + // and no new value was set because the DB query failed. 
+ assert.False(t, mr.Exists(key), "corrupt key should be deleted even when DB fails") } diff --git a/pkg/calculateMonths/calculateMonths_test.go b/pkg/calculateMonths/calculateMonths_test.go deleted file mode 100644 index 77fb43a..0000000 --- a/pkg/calculateMonths/calculateMonths_test.go +++ /dev/null @@ -1,13 +0,0 @@ -package calculateMonths - -import ( - "testing" - "time" -) - -func TestCalculateMonths(t *testing.T) { - startTime, _ := time.Parse(time.DateTime, "2025-01-15 00:00:00") - EndTime, _ := time.Parse(time.DateTime, "2025-05-15 00:00:00") - months := CalculateMonths(startTime, EndTime) - t.Log(months) -} diff --git a/pkg/color/color_test.go b/pkg/color/color_test.go deleted file mode 100644 index 74ee2a1..0000000 --- a/pkg/color/color_test.go +++ /dev/null @@ -1,17 +0,0 @@ -package color - -import ( - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestWithColor(t *testing.T) { - output := WithColor("Hello", BgRed) - assert.Equal(t, "Hello", output) -} - -func TestWithColorPadding(t *testing.T) { - output := WithColorPadding("Hello", BgRed) - assert.Equal(t, " Hello ", output) -} diff --git a/pkg/conf/config_test.go b/pkg/conf/config_test.go deleted file mode 100644 index 2070e88..0000000 --- a/pkg/conf/config_test.go +++ /dev/null @@ -1,18 +0,0 @@ -package conf - -import "testing" - -type Server struct { - Host string `yaml:"Host" default:"localhost"` - Port int `yaml:"Port" default:"8080"` -} - -type Config struct { - Server Server `yaml:"Server"` -} - -func TestConfigLoad(t *testing.T) { - var c Config - MustLoad("./config_test.yaml", &c) - t.Logf("config: %+v", c) -} diff --git a/pkg/conf/config_test.yaml b/pkg/conf/config_test.yaml deleted file mode 100644 index bcef5fc..0000000 --- a/pkg/conf/config_test.yaml +++ /dev/null @@ -1,3 +0,0 @@ -Server: - Port: 9999 - Host: 0.0.0.0 diff --git a/pkg/deduction/deduction_test.go b/pkg/deduction/deduction_test.go deleted file mode 100644 index 0e96555..0000000 --- 
a/pkg/deduction/deduction_test.go +++ /dev/null @@ -1,665 +0,0 @@ -package deduction - -import ( - "math" - "testing" - "time" -) - -func TestSubscribe_Validate(t *testing.T) { - tests := []struct { - name string - sub Subscribe - wantErr bool - errType error - }{ - { - name: "valid subscription", - sub: Subscribe{ - StartTime: time.Now(), - ExpireTime: time.Now().Add(24 * time.Hour), - Traffic: 1000, - Download: 100, - Upload: 200, - UnitTime: UnitTimeMonth, - DeductionRatio: 50, - }, - wantErr: false, - }, - { - name: "negative traffic", - sub: Subscribe{ - StartTime: time.Now(), - ExpireTime: time.Now().Add(24 * time.Hour), - Traffic: -1000, - Download: 100, - Upload: 200, - UnitTime: UnitTimeMonth, - DeductionRatio: 50, - }, - wantErr: true, - errType: ErrInvalidTraffic, - }, - { - name: "negative download", - sub: Subscribe{ - StartTime: time.Now(), - ExpireTime: time.Now().Add(24 * time.Hour), - Traffic: 1000, - Download: -100, - Upload: 200, - UnitTime: UnitTimeMonth, - DeductionRatio: 50, - }, - wantErr: true, - errType: ErrInvalidTraffic, - }, - { - name: "download + upload exceeds traffic", - sub: Subscribe{ - StartTime: time.Now(), - ExpireTime: time.Now().Add(24 * time.Hour), - Traffic: 1000, - Download: 600, - Upload: 500, - UnitTime: UnitTimeMonth, - DeductionRatio: 50, - }, - wantErr: true, - }, - { - name: "expire time before start time", - sub: Subscribe{ - StartTime: time.Now(), - ExpireTime: time.Now().Add(-24 * time.Hour), - Traffic: 1000, - Download: 100, - Upload: 200, - UnitTime: UnitTimeMonth, - DeductionRatio: 50, - }, - wantErr: true, - errType: ErrInvalidTimeRange, - }, - { - name: "invalid deduction ratio - negative", - sub: Subscribe{ - StartTime: time.Now(), - ExpireTime: time.Now().Add(24 * time.Hour), - Traffic: 1000, - Download: 100, - Upload: 200, - UnitTime: UnitTimeMonth, - DeductionRatio: -10, - }, - wantErr: true, - errType: ErrInvalidDeductionRatio, - }, - { - name: "invalid deduction ratio - over 100", - sub: Subscribe{ - 
StartTime: time.Now(), - ExpireTime: time.Now().Add(24 * time.Hour), - Traffic: 1000, - Download: 100, - Upload: 200, - UnitTime: UnitTimeMonth, - DeductionRatio: 150, - }, - wantErr: true, - errType: ErrInvalidDeductionRatio, - }, - { - name: "invalid unit time", - sub: Subscribe{ - StartTime: time.Now(), - ExpireTime: time.Now().Add(24 * time.Hour), - Traffic: 1000, - Download: 100, - Upload: 200, - UnitTime: "InvalidUnit", - DeductionRatio: 50, - }, - wantErr: true, - errType: ErrInvalidUnitTime, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - err := tt.sub.Validate() - if (err != nil) != tt.wantErr { - t.Errorf("Subscribe.Validate() error = %v, wantErr %v", err, tt.wantErr) - return - } - if tt.errType != nil && err != tt.errType { - t.Errorf("Subscribe.Validate() error = %v, want %v", err, tt.errType) - } - }) - } -} - -func TestOrder_Validate(t *testing.T) { - tests := []struct { - name string - order Order - wantErr bool - errType error - }{ - { - name: "valid order", - order: Order{Amount: 1000, Quantity: 2}, - wantErr: false, - }, - { - name: "zero quantity", - order: Order{Amount: 1000, Quantity: 0}, - wantErr: true, - errType: ErrInvalidQuantity, - }, - { - name: "negative quantity", - order: Order{Amount: 1000, Quantity: -1}, - wantErr: true, - errType: ErrInvalidQuantity, - }, - { - name: "negative amount", - order: Order{Amount: -1000, Quantity: 2}, - wantErr: true, - errType: ErrInvalidAmount, - }, - { - name: "zero amount is valid", - order: Order{Amount: 0, Quantity: 1}, - wantErr: false, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - err := tt.order.Validate() - if (err != nil) != tt.wantErr { - t.Errorf("Order.Validate() error = %v, wantErr %v", err, tt.wantErr) - return - } - if tt.errType != nil && err != tt.errType { - t.Errorf("Order.Validate() error = %v, want %v", err, tt.errType) - } - }) - } -} - -func TestSafeMultiply(t *testing.T) { - tests := []struct { - name string - a, 
b int64 - want int64 - wantErr bool - }{ - { - name: "normal multiplication", - a: 10, - b: 20, - want: 200, - wantErr: false, - }, - { - name: "zero multiplication", - a: 10, - b: 0, - want: 0, - wantErr: false, - }, - { - name: "negative multiplication", - a: -10, - b: 20, - want: -200, - wantErr: false, - }, - { - name: "overflow case", - a: math.MaxInt64, - b: 2, - want: 0, - wantErr: true, - }, - { - name: "large numbers no overflow", - a: 1000000, - b: 1000000, - want: 1000000000000, - wantErr: false, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - got, err := safeMultiply(tt.a, tt.b) - if (err != nil) != tt.wantErr { - t.Errorf("safeMultiply() error = %v, wantErr %v", err, tt.wantErr) - return - } - if got != tt.want { - t.Errorf("safeMultiply() = %v, want %v", got, tt.want) - } - }) - } -} - -func TestSafeAdd(t *testing.T) { - tests := []struct { - name string - a, b int64 - want int64 - wantErr bool - }{ - { - name: "normal addition", - a: 10, - b: 20, - want: 30, - wantErr: false, - }, - { - name: "negative addition", - a: -10, - b: 5, - want: -5, - wantErr: false, - }, - { - name: "overflow case", - a: math.MaxInt64, - b: 1, - want: 0, - wantErr: true, - }, - { - name: "underflow case", - a: math.MinInt64, - b: -1, - want: 0, - wantErr: true, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - got, err := safeAdd(tt.a, tt.b) - if (err != nil) != tt.wantErr { - t.Errorf("safeAdd() error = %v, wantErr %v", err, tt.wantErr) - return - } - if got != tt.want { - t.Errorf("safeAdd() = %v, want %v", got, tt.want) - } - }) - } -} - -func TestSafeDivide(t *testing.T) { - tests := []struct { - name string - a, b int64 - want int64 - wantErr bool - }{ - { - name: "normal division", - a: 20, - b: 10, - want: 2, - wantErr: false, - }, - { - name: "division by zero", - a: 20, - b: 0, - want: 0, - wantErr: true, - }, - { - name: "negative division", - a: -20, - b: 10, - want: -2, - wantErr: false, - }, - { - 
name: "zero dividend", - a: 0, - b: 10, - want: 0, - wantErr: false, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - got, err := safeDivide(tt.a, tt.b) - if (err != nil) != tt.wantErr { - t.Errorf("safeDivide() error = %v, wantErr %v", err, tt.wantErr) - return - } - if got != tt.want { - t.Errorf("safeDivide() = %v, want %v", got, tt.want) - } - }) - } -} - -func TestCalculateWeights(t *testing.T) { - tests := []struct { - name string - deductionRatio int64 - wantTrafficWeight float64 - wantTimeWeight float64 - }{ - { - name: "zero ratio", - deductionRatio: 0, - wantTrafficWeight: 0, - wantTimeWeight: 0, - }, - { - name: "50% ratio", - deductionRatio: 50, - wantTrafficWeight: 0.5, - wantTimeWeight: 0.5, - }, - { - name: "75% ratio", - deductionRatio: 75, - wantTrafficWeight: 0.75, - wantTimeWeight: 0.25, - }, - { - name: "100% ratio", - deductionRatio: 100, - wantTrafficWeight: 1.0, - wantTimeWeight: 0.0, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - gotTrafficWeight, gotTimeWeight := calculateWeights(tt.deductionRatio) - if gotTrafficWeight != tt.wantTrafficWeight { - t.Errorf("calculateWeights() trafficWeight = %v, want %v", gotTrafficWeight, tt.wantTrafficWeight) - } - if gotTimeWeight != tt.wantTimeWeight { - t.Errorf("calculateWeights() timeWeight = %v, want %v", gotTimeWeight, tt.wantTimeWeight) - } - }) - } -} - -func TestCalculateProportionalAmount(t *testing.T) { - tests := []struct { - name string - unitPrice int64 - remaining int64 - total int64 - want int64 - wantErr bool - }{ - { - name: "normal calculation", - unitPrice: 100, - remaining: 50, - total: 100, - want: 50, - wantErr: false, - }, - { - name: "zero total", - unitPrice: 100, - remaining: 50, - total: 0, - want: 0, - wantErr: false, - }, - { - name: "zero remaining", - unitPrice: 100, - remaining: 0, - total: 100, - want: 0, - wantErr: false, - }, - { - name: "quarter remaining", - unitPrice: 200, - remaining: 25, - total: 100, 
- want: 50, - wantErr: false, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - got, err := calculateProportionalAmount(tt.unitPrice, tt.remaining, tt.total) - if (err != nil) != tt.wantErr { - t.Errorf("calculateProportionalAmount() error = %v, wantErr %v", err, tt.wantErr) - return - } - if got != tt.want { - t.Errorf("calculateProportionalAmount() = %v, want %v", got, tt.want) - } - }) - } -} - -func TestCalculateNoLimitAmount(t *testing.T) { - tests := []struct { - name string - sub Subscribe - order Order - want int64 - wantErr bool - }{ - { - name: "normal no limit calculation", - sub: Subscribe{ - Traffic: 1000, - Download: 300, - Upload: 200, - }, - order: Order{ - Amount: 1000, - }, - want: 500, // (1000 - 300 - 200) / 1000 * 1000 = 500 - wantErr: false, - }, - { - name: "zero traffic", - sub: Subscribe{ - Traffic: 0, - Download: 0, - Upload: 0, - }, - order: Order{ - Amount: 1000, - }, - want: 0, - wantErr: false, - }, - { - name: "overused traffic", - sub: Subscribe{ - Traffic: 1000, - Download: 600, - Upload: 500, - }, - order: Order{ - Amount: 1000, - }, - want: 0, // usedTraffic would be negative, clamped to 0 - wantErr: false, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - got, err := calculateNoLimitAmount(tt.sub, tt.order) - if (err != nil) != tt.wantErr { - t.Errorf("calculateNoLimitAmount() error = %v, wantErr %v", err, tt.wantErr) - return - } - if got != tt.want { - t.Errorf("calculateNoLimitAmount() = %v, want %v", got, tt.want) - } - }) - } -} - -func TestCalculateRemainingAmount(t *testing.T) { - now := time.Now() - - tests := []struct { - name string - sub Subscribe - order Order - wantErr bool - }{ - { - name: "valid no limit subscription", - sub: Subscribe{ - StartTime: now.Add(-24 * time.Hour), - ExpireTime: now.Add(24 * time.Hour), - Traffic: 1000, - Download: 300, - Upload: 200, - UnitTime: UnitTimeNoLimit, - ResetCycle: ResetCycleNone, - DeductionRatio: 0, - }, - order: 
Order{ - Amount: 1000, - Quantity: 1, - }, - wantErr: false, - }, - { - name: "invalid subscription", - sub: Subscribe{ - StartTime: now, - ExpireTime: now.Add(-24 * time.Hour), // Invalid: expire before start - Traffic: 1000, - Download: 300, - Upload: 200, - UnitTime: UnitTimeMonth, - DeductionRatio: 0, - }, - order: Order{ - Amount: 1000, - Quantity: 1, - }, - wantErr: true, - }, - { - name: "invalid order", - sub: Subscribe{ - StartTime: now.Add(-24 * time.Hour), - ExpireTime: now.Add(24 * time.Hour), - Traffic: 1000, - Download: 300, - Upload: 200, - UnitTime: UnitTimeMonth, - DeductionRatio: 0, - }, - order: Order{ - Amount: 1000, - Quantity: 0, // Invalid: zero quantity - }, - wantErr: true, - }, - { - name: "no limit with reset cycle", - sub: Subscribe{ - StartTime: now.Add(-24 * time.Hour), - ExpireTime: now.Add(24 * time.Hour), - Traffic: 1000, - Download: 300, - Upload: 200, - UnitTime: UnitTimeNoLimit, - ResetCycle: ResetCycleMonthly, // Should return 0 - DeductionRatio: 0, - }, - order: Order{ - Amount: 1000, - Quantity: 1, - }, - wantErr: false, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - _, err := CalculateRemainingAmount(tt.sub, tt.order) - if (err != nil) != tt.wantErr { - t.Errorf("CalculateRemainingAmount() error = %v, wantErr %v", err, tt.wantErr) - } - }) - } -} - -func TestCalculateRemainingAmount_NoLimitWithResetCycle(t *testing.T) { - now := time.Now() - sub := Subscribe{ - StartTime: now.Add(-24 * time.Hour), - ExpireTime: now.Add(24 * time.Hour), - Traffic: 1000, - Download: 300, - Upload: 200, - UnitTime: UnitTimeNoLimit, - ResetCycle: ResetCycleMonthly, - DeductionRatio: 0, - } - order := Order{ - Amount: 1000, - Quantity: 1, - } - - got, err := CalculateRemainingAmount(sub, order) - if err != nil { - t.Errorf("CalculateRemainingAmount() error = %v", err) - return - } - if got != 0 { - t.Errorf("CalculateRemainingAmount() = %v, want 0", got) - } -} - -// Benchmark tests -func 
BenchmarkCalculateRemainingAmount(b *testing.B) { - now := time.Now() - sub := Subscribe{ - StartTime: now.Add(-24 * time.Hour), - ExpireTime: now.Add(24 * time.Hour), - Traffic: 1000, - Download: 300, - Upload: 200, - UnitTime: UnitTimeMonth, - ResetCycle: ResetCycleNone, - DeductionRatio: 50, - } - order := Order{ - Amount: 1000, - Quantity: 1, - } - - b.ResetTimer() - for i := 0; i < b.N; i++ { - _, _ = CalculateRemainingAmount(sub, order) - } -} - -func BenchmarkSafeMultiply(b *testing.B) { - for i := 0; i < b.N; i++ { - _, _ = safeMultiply(12345, 67890) - } -} diff --git a/pkg/device/device_test.go b/pkg/device/device_test.go deleted file mode 100644 index 14a6910..0000000 --- a/pkg/device/device_test.go +++ /dev/null @@ -1,123 +0,0 @@ -package device - -import ( - "encoding/json" - "fmt" - "io" - "log" - "net" - "net/http" - "strings" - "sync" - "testing" - "time" - - "github.com/pkg/errors" - - "github.com/gorilla/websocket" -) - -func TestDevice(t *testing.T) { - t.Skip("skip test") - /* deviceManager := NewDeviceManager(10, 3) - - deviceManager.OnDeviceOnline = func(userID int64, deviceID, session string) { - fmt.Printf("✅ 设备 %s (用户 %d) 上线\n", deviceID, userID) - } - - deviceManager.OnDeviceOffline = func(userID int64, deviceID, session string) { - fmt.Printf("❌ 设备 %s (用户 %d) 下线\n", deviceID, userID) - } - - deviceManager.OnDeviceKicked = func(userID int64, deviceID, session string, operator Operator) { - fmt.Printf("⚠️ 设备 %s (用户 %d) 被踢下线\n", deviceID, userID) - } - deviceManager.OnMessage = func(userID int64, deviceID, session string, message string) { - log.Printf("✅收到消息: 设备 %s (用户 %d) 内容: %s,sesion: %s\n", deviceID, userID, message, session) - } - engine := gin.Default() - engine.GET("/ws/:userid/:device_number", func(c *gin.Context) { - //根据Authorization获取session - authorization := c.GetHeader("Authorization") - userid, err := strconv.ParseInt(c.Param("userid"), 10, 64) - if err != nil { - t.Errorf("get user id err:%v", err) - return - } - deviceNumber 
:= c.Param("device_number") - deviceManager.AddDevice(c, authorization, userid, deviceNumber, 3) - return - }) - go func() { - err := http.ListenAndServe(":8081", engine) - if err != nil { - t.Fatalf("engine start failed: %v", err) - } - }() - */ - h := http.Header{} - h.Add("Authorization", "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJTZXNzaW9uSWQiOiIwMTk0Y2ZiNy1hYjY0LTdjYjMtODUzYi03ZGU5YTAzNWRlZTgiLCJVc2VySWQiOjI5LCJleHAiOjE3MzkyNTY1MDgsImlhdCI6MTczODY1MTcwOH0.BGKT5-hongJPZrA_yAb6cf6go5iDR8T9uu1ZxUg8HDw") - - mutex := sync.Mutex{} - serverURL := fmt.Sprintf("ws://localhost:8080/v1/app/ws/%d/%s", 29, "15502502051") // 假设 userID 为 1001,设备ID 为 deviceA - - // 建立 WebSocket 连接 - conn, resp, err := websocket.DefaultDialer.Dial(serverURL, h) - if err != nil { - all, err := io.ReadAll(resp.Body) - t.Fatalf("websocket dial failed: %v:%s", err, string(all)) - } - // 启动一个 goroutine 来读取服务器消息 - go func() { - for { - _, msg, err := conn.ReadMessage() - if err != nil { - if errors.Is(err, net.ErrClosed) || strings.Contains(err.Error(), "use of closed network connection") { - log.Println("连接已关闭") - return - } - log.Printf("接收消息失败: %v", err) - return - } - fmt.Printf("收到来自服务器的消息: %s\n", msg) - } - }() - - //发送心跳 - go func() { - ticker := time.NewTicker(time.Second * 5) - defer ticker.Stop() - - for range ticker.C { - mutex.Lock() - err := conn.WriteMessage(websocket.TextMessage, []byte("ping")) - mutex.Unlock() - - if err != nil { - if strings.Contains(err.Error(), "use of closed network connection") { - log.Println("连接已关闭") - return - } - t.Errorf("websocket 写入失败: %v", err) - return - } - } - }() - - updateSubscribe, _ := json.Marshal(map[string]interface{}{ - "method": "test_method", - }) - - //发送一条消息 - mutex.Lock() - err = conn.WriteMessage(websocket.TextMessage, updateSubscribe) - mutex.Unlock() - if err != nil { - t.Errorf("websocket write failed: %v", err) - } - - time.Sleep(time.Second * 20) - conn.Close() - time.Sleep(time.Second * 5) - -} diff --git 
a/pkg/email/smtp/email_test.go b/pkg/email/smtp/email_test.go deleted file mode 100644 index 445d9af..0000000 --- a/pkg/email/smtp/email_test.go +++ /dev/null @@ -1,24 +0,0 @@ -package smtp - -import "testing" - -func TestEmailSend(t *testing.T) { - t.Skipf("Skip TestEmailSend") - config := &Config{ - Host: "smtp.mail.me.com", - Port: 587, - User: "support@ppanel.dev", - Pass: "password", - From: "support@ppanel.dev", - SSL: true, - SiteName: "", - } - address := []string{"tension@sparkdance.dev"} - subject := "test" - body := "test" - email := NewClient(config) - err := email.Send(address, subject, body) - if err != nil { - t.Errorf("send email error: %v", err) - } -} diff --git a/pkg/email/template_test.go b/pkg/email/template_test.go deleted file mode 100644 index 9c8fd51..0000000 --- a/pkg/email/template_test.go +++ /dev/null @@ -1,36 +0,0 @@ -package email - -import ( - "bytes" - "html/template" - "testing" -) - -type VerifyTemplate struct { - Type uint8 - SiteLogo string - SiteName string - Expire uint8 - Code string -} - -func TestVerifyEmail(t *testing.T) { - t.Skipf("Skip TestVerifyEmail test") - data := VerifyTemplate{ - Type: 1, - SiteLogo: "https://www.google.com", - SiteName: "Google", - Expire: 5, - Code: "123456", - } - tpl, err := template.New("email").Parse(DefaultEmailVerifyTemplate) - if err != nil { - t.Error(err) - } - var result bytes.Buffer - err = tpl.Execute(&result, data) - if err != nil { - t.Error(err) - } - t.Log(result.String()) -} diff --git a/pkg/errorx/atomicerror_test.go b/pkg/errorx/atomicerror_test.go deleted file mode 100644 index 10c7b44..0000000 --- a/pkg/errorx/atomicerror_test.go +++ /dev/null @@ -1,82 +0,0 @@ -package errorx - -import ( - "errors" - "sync" - "sync/atomic" - "testing" - - "github.com/stretchr/testify/assert" -) - -var errDummy = errors.New("hello") - -func TestAtomicError(t *testing.T) { - var err AtomicError - err.Set(errDummy) - assert.Equal(t, errDummy, err.Load()) -} - -func TestAtomicErrorSetNil(t 
*testing.T) { - var ( - errNil error - err AtomicError - ) - err.Set(errNil) - assert.Equal(t, errNil, err.Load()) -} - -func TestAtomicErrorNil(t *testing.T) { - var err AtomicError - assert.Nil(t, err.Load()) -} - -func BenchmarkAtomicError(b *testing.B) { - var aerr AtomicError - wg := sync.WaitGroup{} - - b.Run("Load", func(b *testing.B) { - var done uint32 - go func() { - for { - if atomic.LoadUint32(&done) != 0 { - break - } - wg.Add(1) - go func() { - aerr.Set(errDummy) - wg.Done() - }() - } - }() - b.ResetTimer() - for i := 0; i < b.N; i++ { - _ = aerr.Load() - } - b.StopTimer() - atomic.StoreUint32(&done, 1) - wg.Wait() - }) - b.Run("Set", func(b *testing.B) { - var done uint32 - go func() { - for { - if atomic.LoadUint32(&done) != 0 { - break - } - wg.Add(1) - go func() { - _ = aerr.Load() - wg.Done() - }() - } - }() - b.ResetTimer() - for i := 0; i < b.N; i++ { - aerr.Set(errDummy) - } - b.StopTimer() - atomic.StoreUint32(&done, 1) - wg.Wait() - }) -} diff --git a/pkg/errorx/batcherror_test.go b/pkg/errorx/batcherror_test.go deleted file mode 100644 index ca6d03b..0000000 --- a/pkg/errorx/batcherror_test.go +++ /dev/null @@ -1,147 +0,0 @@ -package errorx - -import ( - "errors" - "fmt" - "sync" - "testing" - - "github.com/stretchr/testify/assert" -) - -const ( - err1 = "first error" - err2 = "second error" -) - -func TestBatchErrorNil(t *testing.T) { - var batch BatchError - assert.Nil(t, batch.Err()) - assert.False(t, batch.NotNil()) - batch.Add(nil) - assert.Nil(t, batch.Err()) - assert.False(t, batch.NotNil()) -} - -func TestBatchErrorNilFromFunc(t *testing.T) { - err := func() error { - var be BatchError - return be.Err() - }() - assert.True(t, err == nil) -} - -func TestBatchErrorOneError(t *testing.T) { - var batch BatchError - batch.Add(errors.New(err1)) - assert.NotNil(t, batch.Err()) - assert.Equal(t, err1, batch.Err().Error()) - assert.True(t, batch.NotNil()) -} - -func TestBatchErrorWithErrors(t *testing.T) { - var batch BatchError - 
batch.Add(errors.New(err1)) - batch.Add(errors.New(err2)) - assert.NotNil(t, batch.Err()) - assert.Equal(t, fmt.Sprintf("%s\n%s", err1, err2), batch.Err().Error()) - assert.True(t, batch.NotNil()) -} - -func TestBatchErrorConcurrentAdd(t *testing.T) { - const count = 10000 - var batch BatchError - var wg sync.WaitGroup - - wg.Add(count) - for i := 0; i < count; i++ { - go func() { - defer wg.Done() - batch.Add(errors.New(err1)) - }() - } - wg.Wait() - - assert.NotNil(t, batch.Err()) - assert.Equal(t, count, len(batch.errs)) - assert.True(t, batch.NotNil()) -} - -func TestBatchError_Unwrap(t *testing.T) { - t.Run("nil", func(t *testing.T) { - var be BatchError - assert.Nil(t, be.Err()) - assert.True(t, errors.Is(be.Err(), nil)) - }) - - t.Run("one error", func(t *testing.T) { - var errFoo = errors.New("foo") - var errBar = errors.New("bar") - var be BatchError - be.Add(errFoo) - assert.True(t, errors.Is(be.Err(), errFoo)) - assert.False(t, errors.Is(be.Err(), errBar)) - }) - - t.Run("two errors", func(t *testing.T) { - var errFoo = errors.New("foo") - var errBar = errors.New("bar") - var errBaz = errors.New("baz") - var be BatchError - be.Add(errFoo) - be.Add(errBar) - assert.True(t, errors.Is(be.Err(), errFoo)) - assert.True(t, errors.Is(be.Err(), errBar)) - assert.False(t, errors.Is(be.Err(), errBaz)) - }) -} - -func TestBatchError_Add(t *testing.T) { - var be BatchError - - // Test adding nil errors - be.Add(nil, nil) - assert.False(t, be.NotNil(), "Expected BatchError to be empty after adding nil errors") - - // Test adding non-nil errors - err1 := errors.New("error 1") - err2 := errors.New("error 2") - be.Add(err1, err2) - assert.True(t, be.NotNil(), "Expected BatchError to be non-empty after adding errors") - - // Test adding a mix of nil and non-nil errors - err3 := errors.New("error 3") - be.Add(nil, err3, nil) - assert.True(t, be.NotNil(), "Expected BatchError to be non-empty after adding a mix of nil and non-nil errors") -} - -func TestBatchError_Err(t 
*testing.T) { - var be BatchError - - // Test Err() on empty BatchError - assert.Nil(t, be.Err(), "Expected nil error for empty BatchError") - - // Test Err() with multiple errors - err1 := errors.New("error 1") - err2 := errors.New("error 2") - be.Add(err1, err2) - - combinedErr := be.Err() - assert.NotNil(t, combinedErr, "Expected non-nil error for BatchError with multiple errors") - - // Check if the combined error contains both error messages - errString := combinedErr.Error() - assert.Truef(t, errors.Is(combinedErr, err1), "Combined error doesn't contain first error: %s", errString) - assert.Truef(t, errors.Is(combinedErr, err2), "Combined error doesn't contain second error: %s", errString) -} - -func TestBatchError_NotNil(t *testing.T) { - var be BatchError - - // Test NotNil() on empty BatchError - assert.False(t, be.NotNil(), "Expected NotNil to be false for empty BatchError") - - // Test NotNil() after adding an error - be.Add(errors.New("test error")) - assert.True(t, be.NotNil(), "Expected NotNil to be true after adding an error") -} diff --git a/pkg/errorx/callchain_test.go b/pkg/errorx/callchain_test.go deleted file mode 100644 index 234dd5c..0000000 --- a/pkg/errorx/callchain_test.go +++ /dev/null @@ -1,27 +0,0 @@ -package errorx - -import ( - "errors" - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestChain(t *testing.T) { - errDummy := errors.New("dummy") - assert.Nil(t, Chain(func() error { - return nil - }, func() error { - return nil - })) - assert.Equal(t, errDummy, Chain(func() error { - return errDummy - }, func() error { - return nil - })) - assert.Equal(t, errDummy, Chain(func() error { - return nil - }, func() error { - return errDummy - })) -} diff --git a/pkg/errorx/check_test.go b/pkg/errorx/check_test.go deleted file mode 100644 index 0e7b267..0000000 --- a/pkg/errorx/check_test.go +++ /dev/null @@ -1,70 +0,0 @@ -package errorx - -import ( - "errors" - "testing" -) - -func TestIn(t *testing.T) { - err1 := errors.New("error 1") - err2 :=
errors.New("error 2") - err3 := errors.New("error 3") - - tests := []struct { - name string - err error - errs []error - want bool - }{ - { - name: "Error matches one of the errors in the list", - err: err1, - errs: []error{err1, err2}, - want: true, - }, - { - name: "Error does not match any errors in the list", - err: err3, - errs: []error{err1, err2}, - want: false, - }, - { - name: "Empty error list", - err: err1, - errs: []error{}, - want: false, - }, - { - name: "Nil error with non-nil list", - err: nil, - errs: []error{err1, err2}, - want: false, - }, - { - name: "Non-nil error with nil in list", - err: err1, - errs: []error{nil, err2}, - want: false, - }, - { - name: "Error matches nil error in the list", - err: nil, - errs: []error{nil, err2}, - want: true, - }, - { - name: "Nil error with empty list", - err: nil, - errs: []error{}, - want: false, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - if got := In(tt.err, tt.errs...); got != tt.want { - t.Errorf("In() = %v, want %v", got, tt.want) - } - }) - } -} diff --git a/pkg/errorx/wrap_test.go b/pkg/errorx/wrap_test.go deleted file mode 100644 index 4682c9e..0000000 --- a/pkg/errorx/wrap_test.go +++ /dev/null @@ -1,24 +0,0 @@ -package errorx - -import ( - "errors" - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestWrap(t *testing.T) { - assert.Nil(t, Wrap(nil, "test")) - assert.Equal(t, "foo: bar", Wrap(errors.New("bar"), "foo").Error()) - - err := errors.New("foo") - assert.True(t, errors.Is(Wrap(err, "bar"), err)) -} - -func TestWrapf(t *testing.T) { - assert.Nil(t, Wrapf(nil, "%s", "test")) - assert.Equal(t, "foo bar: quz", Wrapf(errors.New("quz"), "foo %s", "bar").Error()) - - err := errors.New("foo") - assert.True(t, errors.Is(Wrapf(err, "foo %s", "bar"), err)) -} diff --git a/pkg/exchangeRate/exchange_rate_test.go b/pkg/exchangeRate/exchange_rate_test.go deleted file mode 100644 index 1444098..0000000 --- a/pkg/exchangeRate/exchange_rate_test.go +++ 
/dev/null @@ -1,12 +0,0 @@ -package exchangeRate - -import "testing" - -func TestGetExchangeRete(t *testing.T) { - t.Skip("skip TestGetExchangeRete") - result, err := GetExchangeRete("USD", "CNY", "", 1) - if err != nil { - t.Fatal(err) - } - t.Log(result) -} diff --git a/pkg/fs/files_test.go b/pkg/fs/files_test.go deleted file mode 100644 index 96ff174..0000000 --- a/pkg/fs/files_test.go +++ /dev/null @@ -1,15 +0,0 @@ -package fs - -import ( - "os" - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestCloseOnExec(t *testing.T) { - file := os.NewFile(0, os.DevNull) - assert.NotPanics(t, func() { - CloseOnExec(file) - }) -} diff --git a/pkg/fs/temps_test.go b/pkg/fs/temps_test.go deleted file mode 100644 index 1e2ed6e..0000000 --- a/pkg/fs/temps_test.go +++ /dev/null @@ -1,49 +0,0 @@ -package fs - -import ( - "io" - "os" - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestTempFileWithText(t *testing.T) { - f, err := TempFileWithText("test") - if err != nil { - t.Error(err) - } - if f == nil { - t.Error("TempFileWithText returned nil") - } - if f.Name() == "" { - t.Error("TempFileWithText returned empty file name") - } - defer os.Remove(f.Name()) - - bs, err := io.ReadAll(f) - assert.Nil(t, err) - if len(bs) != 4 { - t.Error("TempFileWithText returned wrong file size") - } - if f.Close() != nil { - t.Error("TempFileWithText returned error on close") - } -} - -func TestTempFilenameWithText(t *testing.T) { - f, err := TempFilenameWithText("test") - if err != nil { - t.Error(err) - } - if f == "" { - t.Error("TempFilenameWithText returned empty file name") - } - defer os.Remove(f) - - bs, err := os.ReadFile(f) - assert.Nil(t, err) - if len(bs) != 4 { - t.Error("TempFilenameWithText returned wrong file size") - } -} diff --git a/pkg/hash/consistenthash_test.go b/pkg/hash/consistenthash_test.go deleted file mode 100644 index f7c5b00..0000000 --- a/pkg/hash/consistenthash_test.go +++ /dev/null @@ -1,155 +0,0 @@ -package hash - -import ( - 
"fmt" - "strconv" - "testing" - - "github.com/stretchr/testify/assert" -) - -const ( - keySize = 20 - requestSize = 1000 -) - -func BenchmarkConsistentHashGet(b *testing.B) { - ch := NewConsistentHash() - for i := 0; i < keySize; i++ { - ch.Add("localhost:" + strconv.Itoa(i)) - } - - for i := 0; i < b.N; i++ { - ch.Get(i) - } -} - -func TestConsistentHashIncrementalTransfer(t *testing.T) { - prefix := "anything" - create := func() *ConsistentHash { - ch := NewConsistentHash() - for i := 0; i < keySize; i++ { - ch.Add(prefix + strconv.Itoa(i)) - } - return ch - } - - originCh := create() - keys := make(map[int]string, requestSize) - for i := 0; i < requestSize; i++ { - key, ok := originCh.Get(requestSize + i) - assert.True(t, ok) - assert.NotNil(t, key) - keys[i] = key.(string) - } - - node := fmt.Sprintf("%s%d", prefix, keySize) - for i := 0; i < 10; i++ { - laterCh := create() - laterCh.AddWithWeight(node, 10*(i+1)) - - for j := 0; j < requestSize; j++ { - key, ok := laterCh.Get(requestSize + j) - assert.True(t, ok) - assert.NotNil(t, key) - value := key.(string) - assert.True(t, value == keys[j] || value == node) - } - } -} - -func TestConsistentHashTransferOnFailure(t *testing.T) { - index := 41 - keys, newKeys := getKeysBeforeAndAfterFailure(t, "localhost:", index) - var transferred int - for k, v := range newKeys { - if v != keys[k] { - transferred++ - } - } - - ratio := float32(transferred) / float32(requestSize) - assert.True(t, ratio < 2.5/float32(keySize), fmt.Sprintf("%d: %f", index, ratio)) -} - -func TestConsistentHashLeastTransferOnFailure(t *testing.T) { - prefix := "localhost:" - index := 41 - keys, newKeys := getKeysBeforeAndAfterFailure(t, prefix, index) - for k, v := range keys { - newV := newKeys[k] - if v != prefix+strconv.Itoa(index) { - assert.Equal(t, v, newV) - } - } -} - -func TestConsistentHash_Remove(t *testing.T) { - ch := NewConsistentHash() - ch.Add("first") - ch.Add("second") - ch.Remove("first") - for i := 0; i < 100; i++ { - val, ok 
:= ch.Get(i) - assert.True(t, ok) - assert.Equal(t, "second", val) - } -} - -func TestConsistentHash_RemoveInterface(t *testing.T) { - const key = "any" - ch := NewConsistentHash() - node1 := newMockNode(key, 1) - node2 := newMockNode(key, 2) - ch.AddWithWeight(node1, 80) - ch.AddWithWeight(node2, 50) - assert.Equal(t, 1, len(ch.nodes)) - node, ok := ch.Get(1) - assert.True(t, ok) - assert.Equal(t, key, node.(*mockNode).addr) - assert.Equal(t, 2, node.(*mockNode).id) -} - -func getKeysBeforeAndAfterFailure(t *testing.T, prefix string, index int) (map[int]string, map[int]string) { - ch := NewConsistentHash() - for i := 0; i < keySize; i++ { - ch.Add(prefix + strconv.Itoa(i)) - } - - keys := make(map[int]string, requestSize) - for i := 0; i < requestSize; i++ { - key, ok := ch.Get(requestSize + i) - assert.True(t, ok) - assert.NotNil(t, key) - keys[i] = key.(string) - } - - remove := fmt.Sprintf("%s%d", prefix, index) - ch.Remove(remove) - newKeys := make(map[int]string, requestSize) - for i := 0; i < requestSize; i++ { - key, ok := ch.Get(requestSize + i) - assert.True(t, ok) - assert.NotNil(t, key) - assert.NotEqual(t, remove, key) - newKeys[i] = key.(string) - } - - return keys, newKeys -} - -type mockNode struct { - addr string - id int -} - -func newMockNode(addr string, id int) *mockNode { - return &mockNode{ - addr: addr, - id: id, - } -} - -func (n *mockNode) String() string { - return n.addr -} diff --git a/pkg/hash/hash_test.go b/pkg/hash/hash_test.go deleted file mode 100644 index 5e0962a..0000000 --- a/pkg/hash/hash_test.go +++ /dev/null @@ -1,47 +0,0 @@ -package hash - -import ( - "crypto/md5" - "fmt" - "hash/fnv" - "math/big" - "testing" - - "github.com/stretchr/testify/assert" -) - -const ( - text = "hello, world!\n" - md5Digest = "910c8bc73110b0cd1bc5d2bcae782511" -) - -func TestMd5(t *testing.T) { - actual := fmt.Sprintf("%x", Md5([]byte(text))) - assert.Equal(t, md5Digest, actual) -} - -func TestMd5Hex(t *testing.T) { - actual := 
Md5Hex([]byte(text)) - assert.Equal(t, md5Digest, actual) -} - -func BenchmarkHashFnv(b *testing.B) { - for i := 0; i < b.N; i++ { - h := fnv.New32() - new(big.Int).SetBytes(h.Sum([]byte(text))).Int64() - } -} - -func BenchmarkHashMd5(b *testing.B) { - for i := 0; i < b.N; i++ { - h := md5.New() - bytes := h.Sum([]byte(text)) - new(big.Int).SetBytes(bytes).Int64() - } -} - -func BenchmarkMurmur3(b *testing.B) { - for i := 0; i < b.N; i++ { - Hash([]byte(text)) - } -} diff --git a/pkg/iap/apple/jws_test.go b/pkg/iap/apple/jws_test.go deleted file mode 100644 index 4cb7796..0000000 --- a/pkg/iap/apple/jws_test.go +++ /dev/null @@ -1,35 +0,0 @@ -package apple - -import ( - "encoding/base64" - "encoding/json" - "testing" - "time" -) - -func TestParseTransactionJWS(t *testing.T) { - payload := map[string]interface{}{ - "bundleId": "co.airoport.app.ios", - "productId": "com.airport.vpn.pass.30d", - "transactionId": "1000000000001", - "originalTransactionId": "1000000000000", - "purchaseDate": float64(time.Now().UnixMilli()), - } - data, _ := json.Marshal(payload) - b64 := base64.RawURLEncoding.EncodeToString(data) - jws := "header." 
+ b64 + ".signature" - p, err := ParseTransactionJWS(jws) - if err != nil { - t.Fatalf("parse error: %v", err) - } - if p.ProductId != payload["productId"] { - t.Fatalf("productId not match") - } - if p.BundleId != payload["bundleId"] { - t.Fatalf("bundleId not match") - } - if p.OriginalTransactionId != payload["originalTransactionId"] { - t.Fatalf("originalTransactionId not match") - } -} - diff --git a/pkg/ip/ip_test.go b/pkg/ip/ip_test.go deleted file mode 100644 index f4560d5..0000000 --- a/pkg/ip/ip_test.go +++ /dev/null @@ -1,34 +0,0 @@ -package ip - -import ( - "testing" - "time" -) - -func TestGetIPv4(t *testing.T) { - t.Skip("skip TestGetIPv4") - iPv4, err := GetIP("baidu.com") - if err != nil { - t.Fatal(err) - } - - t.Log(iPv4) -} - -func TestGetRegionByIp(t *testing.T) { - t.Skip("skip TestGetRegionByIp") - ips, err := GetIP("122.14.229.128") - if err != nil { - t.Fatal(err) - } - - for _, ip := range ips { - t.Log(ip) - resp, err := GetRegionByIp(ip) - if err != nil { - t.Fatalf("ip: %s,err: %v", ip, err) - } - t.Logf("country: %s,City: %s,latitude:%s, longitude:%s", resp.Country, resp.City, resp.Latitude, resp.Longitude) - } - time.Sleep(3 * time.Second) -} diff --git a/pkg/jsonx/json_test.go b/pkg/jsonx/json_test.go deleted file mode 100644 index 301ff24..0000000 --- a/pkg/jsonx/json_test.go +++ /dev/null @@ -1,23 +0,0 @@ -package jsonx - -import "testing" - -type User struct { - Id int64 - Name string - Age int64 -} - -func TestJson(t *testing.T) { - t.Log("TestJson") - user := &User{ - Id: 1, - Name: "test", - Age: 18, - } - b, err := Marshal(user) - if err != nil { - t.Error(err) - } - t.Log(string(b)) -} diff --git a/pkg/jwt/util_test.go b/pkg/jwt/util_test.go deleted file mode 100644 index efefc31..0000000 --- a/pkg/jwt/util_test.go +++ /dev/null @@ -1,22 +0,0 @@ -package jwt - -import ( - "testing" - - "github.com/golang-jwt/jwt/v5" - "github.com/pkg/errors" -) - -// TestParseJwtToken tests the ParseJwtToken function -func TestParseJwtToken(t
*testing.T) { - token := "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJEZXZpY2VJZCI6IjM4IiwiZXhwIjoxNzE4MTU2OTQ4LCJpYXQiOjE3MTc1NTIxNDgsInVzZXJJZCI6MX0.4W0nga82kNrfwWjkwcgYAWj4fI4iRc-ZftwVbu-a_kI" - secret := "ae0536f9-6450-4606-8e13-5a19ed505da0" - - claims, err := ParseJwtToken(token, secret) - if err != nil && !errors.Is(err, jwt.ErrTokenExpired) { - t.Errorf("err: %v", err.Error()) - return - } - // parse jwt token success - t.Logf("claims: %v", claims) -} diff --git a/pkg/lang/lang_test.go b/pkg/lang/lang_test.go deleted file mode 100644 index a1ebdc5..0000000 --- a/pkg/lang/lang_test.go +++ /dev/null @@ -1,156 +0,0 @@ -package lang - -import ( - "encoding/json" - "errors" - "reflect" - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestRepr(t *testing.T) { - var ( - f32 float32 = 1.1 - f64 = 2.2 - i8 int8 = 1 - i16 int16 = 2 - i32 int32 = 3 - i64 int64 = 4 - u8 uint8 = 5 - u16 uint16 = 6 - u32 uint32 = 7 - u64 uint64 = 8 - ) - tests := []struct { - v any - expect string - }{ - { - nil, - "", - }, - { - mockStringable{}, - "mocked", - }, - { - new(mockStringable), - "mocked", - }, - { - newMockPtr(), - "mockptr", - }, - { - &mockOpacity{ - val: 1, - }, - "{1}", - }, - { - true, - "true", - }, - { - false, - "false", - }, - { - f32, - "1.1", - }, - { - f64, - "2.2", - }, - { - i8, - "1", - }, - { - i16, - "2", - }, - { - i32, - "3", - }, - { - i64, - "4", - }, - { - u8, - "5", - }, - { - u16, - "6", - }, - { - u32, - "7", - }, - { - u64, - "8", - }, - { - []byte(`abcd`), - "abcd", - }, - { - mockOpacity{val: 1}, - "{1}", - }, - } - - for _, test := range tests { - t.Run(test.expect, func(t *testing.T) { - assert.Equal(t, test.expect, Repr(test.v)) - }) - } -} - -func TestReprOfValue(t *testing.T) { - t.Run("error", func(t *testing.T) { - assert.Equal(t, "error", reprOfValue(reflect.ValueOf(errors.New("error")))) - }) - - t.Run("stringer", func(t *testing.T) { - assert.Equal(t, "1.23", reprOfValue(reflect.ValueOf(json.Number("1.23")))) - }) - - 
t.Run("int", func(t *testing.T) { - assert.Equal(t, "1", reprOfValue(reflect.ValueOf(1))) - }) - - t.Run("string", func(t *testing.T) { - assert.Equal(t, "1", reprOfValue(reflect.ValueOf("1"))) - }) - - t.Run("uint", func(t *testing.T) { - assert.Equal(t, "1", reprOfValue(reflect.ValueOf(uint(1)))) - }) -} - -type mockStringable struct{} - -func (m mockStringable) String() string { - return "mocked" -} - -type mockPtr struct{} - -func newMockPtr() *mockPtr { - return new(mockPtr) -} - -func (m *mockPtr) String() string { - return "mockptr" -} - -type mockOpacity struct { - val int -} diff --git a/pkg/limit/periodlimit_test.go b/pkg/limit/periodlimit_test.go deleted file mode 100644 index 1e0f104..0000000 --- a/pkg/limit/periodlimit_test.go +++ /dev/null @@ -1,71 +0,0 @@ -package limit - -import ( - "testing" - - "github.com/redis/go-redis/v9" - - "github.com/stretchr/testify/assert" -) - -func TestPeriodLimit_Take(t *testing.T) { - testPeriodLimit(t) -} - -func TestPeriodLimit_TakeWithAlign(t *testing.T) { - testPeriodLimit(t, Align()) -} - -func TestPeriodLimit_RedisUnavailable(t *testing.T) { - //t.Skipf("skip this test because it's not stable") - const ( - seconds = 1 - quota = 5 - ) - rds := redis.NewClient(&redis.Options{ - Addr: "localhost:12345", - }) - - l := NewPeriodLimit(seconds, quota, rds, "periodlimit:") - val, err := l.Take("first") - assert.NotNil(t, err) - assert.Equal(t, 0, val) -} - -func testPeriodLimit(t *testing.T, opts ...PeriodOption) { - store, _ := CreateRedisWithClean(t) - const ( - seconds = 1 - total = 100 - quota = 5 - ) - l := NewPeriodLimit(seconds, quota, store, "periodlimit", opts...)
- var allowed, hitQuota, overQuota int - for i := 0; i < total; i++ { - val, err := l.Take("first") - if err != nil { - t.Error(err) - } - switch val { - case Allowed: - allowed++ - case HitQuota: - hitQuota++ - case OverQuota: - overQuota++ - default: - t.Error("unknown status") - } - } - assert.Equal(t, quota-1, allowed) - assert.Equal(t, 1, hitQuota) - assert.Equal(t, total-quota, overQuota) -} - -func TestQuotaFull(t *testing.T) { - rds, _ := CreateRedisWithClean(t) - l := NewPeriodLimit(1, 1, rds, "periodlimit") - val, err := l.Take("first") - assert.Nil(t, err) - assert.Equal(t, HitQuota, val) -} diff --git a/pkg/limit/tokenlimit_test.go b/pkg/limit/tokenlimit_test.go deleted file mode 100644 index 5dbfab2..0000000 --- a/pkg/limit/tokenlimit_test.go +++ /dev/null @@ -1,80 +0,0 @@ -package limit - -import ( - "context" - "testing" - "time" - - "github.com/redis/go-redis/v9" - - "github.com/alicebob/miniredis/v2" - "github.com/stretchr/testify/assert" -) - -func TestTokenLimit_WithCtx(t *testing.T) { - const ( - total = 100 - rate = 5 - burst = 10 - ) - store, _ := CreateRedisWithClean(t) - l := NewTokenLimiter(rate, burst, store, "tokenlimit") - - ctx, cancel := context.WithCancel(context.Background()) - ok := l.AllowCtx(ctx) - assert.True(t, ok) - - cancel() - for i := 0; i < total; i++ { - ok := l.AllowCtx(ctx) - assert.False(t, ok) - assert.False(t, l.monitorStarted) - } -} - -func TestTokenLimit_Take(t *testing.T) { - store, _ := CreateRedisWithClean(t) - - const ( - total = 100 - rate = 5 - burst = 10 - ) - l := NewTokenLimiter(rate, burst, store, "tokenlimit") - var allowed int - for i := 0; i < total; i++ { - time.Sleep(time.Second / time.Duration(total)) - if l.Allow() { - allowed++ - } - } - - assert.True(t, allowed >= burst+rate) -} - -func TestTokenLimit_TakeBurst(t *testing.T) { - store, _ := CreateRedisWithClean(t) - - const ( - total = 100 - rate = 5 - burst = 10 - ) - l := NewTokenLimiter(rate, burst, store, "tokenlimit") - var allowed int - for 
i := 0; i < total; i++ { - if l.Allow() { - allowed++ - } - } - - assert.True(t, allowed >= burst) -} - -// CreateRedisWithClean returns an in-process *redis.Client and a clean function. -func CreateRedisWithClean(t *testing.T) (r *redis.Client, clean func()) { - mr := miniredis.RunT(t) - return redis.NewClient(&redis.Options{ - Addr: mr.Addr(), - }), mr.Close -} diff --git a/pkg/logger/color_test.go b/pkg/logger/color_test.go deleted file mode 100644 index b2bb45d..0000000 --- a/pkg/logger/color_test.go +++ /dev/null @@ -1,33 +0,0 @@ -package logger - -import ( - "sync/atomic" - "testing" - - "github.com/perfect-panel/server/pkg/color" - "github.com/stretchr/testify/assert" -) - -func TestWithColor(t *testing.T) { - old := atomic.SwapUint32(&encoding, plainEncodingType) - defer atomic.StoreUint32(&encoding, old) - - output := WithColor("hello", color.BgBlue) - assert.Equal(t, "hello", output) - - atomic.StoreUint32(&encoding, jsonEncodingType) - output = WithColor("hello", color.BgBlue) - assert.Equal(t, "hello", output) -} - -func TestWithColorPadding(t *testing.T) { - old := atomic.SwapUint32(&encoding, plainEncodingType) - defer atomic.StoreUint32(&encoding, old) - - output := WithColorPadding("hello", color.BgBlue) - assert.Equal(t, " hello ", output) - - atomic.StoreUint32(&encoding, jsonEncodingType) - output = WithColorPadding("hello", color.BgBlue) - assert.Equal(t, "hello", output) -} diff --git a/pkg/logger/fields_test.go b/pkg/logger/fields_test.go deleted file mode 100644 index eabfa38..0000000 --- a/pkg/logger/fields_test.go +++ /dev/null @@ -1,122 +0,0 @@ -package logger - -import ( - "bytes" - "context" - "encoding/json" - "strconv" - "sync" - "sync/atomic" - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestAddGlobalFields(t *testing.T) { - var buf bytes.Buffer - writer := NewWriter(&buf) - old := Reset() - SetWriter(writer) - defer SetWriter(old) - - Info("hello") - buf.Reset() - - AddGlobalFields(Field("a", "1"), Field("b", "2")) -
AddGlobalFields(Field("c", "3")) - Info("world") - var m map[string]any - assert.NoError(t, json.Unmarshal(buf.Bytes(), &m)) - assert.Equal(t, "1", m["a"]) - assert.Equal(t, "2", m["b"]) - assert.Equal(t, "3", m["c"]) -} - -func TestContextWithFields(t *testing.T) { - ctx := ContextWithFields(context.Background(), Field("a", 1), Field("b", 2)) - vals := ctx.Value(fieldsContextKey) - assert.NotNil(t, vals) - fields, ok := vals.([]LogField) - assert.True(t, ok) - assert.EqualValues(t, []LogField{Field("a", 1), Field("b", 2)}, fields) -} - -func TestWithFields(t *testing.T) { - ctx := WithFields(context.Background(), Field("a", 1), Field("b", 2)) - vals := ctx.Value(fieldsContextKey) - assert.NotNil(t, vals) - fields, ok := vals.([]LogField) - assert.True(t, ok) - assert.EqualValues(t, []LogField{Field("a", 1), Field("b", 2)}, fields) -} - -func TestWithFieldsAppend(t *testing.T) { - type ctxKey string - var dummyKey ctxKey = "dummyKey" - ctx := context.WithValue(context.Background(), dummyKey, "dummy") - ctx = ContextWithFields(ctx, Field("a", 1), Field("b", 2)) - ctx = ContextWithFields(ctx, Field("c", 3), Field("d", 4)) - vals := ctx.Value(fieldsContextKey) - assert.NotNil(t, vals) - fields, ok := vals.([]LogField) - assert.True(t, ok) - assert.Equal(t, "dummy", ctx.Value(dummyKey)) - assert.EqualValues(t, []LogField{ - Field("a", 1), - Field("b", 2), - Field("c", 3), - Field("d", 4), - }, fields) -} - -func TestWithFieldsAppendCopy(t *testing.T) { - const count = 10 - ctx := context.Background() - for i := 0; i < count; i++ { - ctx = ContextWithFields(ctx, Field(strconv.Itoa(i), 1)) - } - - af := Field("foo", 1) - bf := Field("bar", 2) - ctxa := ContextWithFields(ctx, af) - ctxb := ContextWithFields(ctx, bf) - - assert.EqualValues(t, af, ctxa.Value(fieldsContextKey).([]LogField)[count]) - assert.EqualValues(t, bf, ctxb.Value(fieldsContextKey).([]LogField)[count]) -} - -func BenchmarkAtomicValue(b *testing.B) { - b.ReportAllocs() - - var container atomic.Value - 
vals := []LogField{ - Field("a", "b"), - Field("c", "d"), - Field("e", "f"), - } - container.Store(&vals) - - for i := 0; i < b.N; i++ { - val := container.Load() - if val != nil { - _ = *val.(*[]LogField) - } - } -} - -func BenchmarkRWMutex(b *testing.B) { - b.ReportAllocs() - - var lock sync.RWMutex - vals := []LogField{ - Field("a", "b"), - Field("c", "d"), - Field("e", "f"), - } - - for i := 0; i < b.N; i++ { - lock.RLock() - _ = vals - lock.RUnlock() - } -} diff --git a/pkg/logger/lesslogger_test.go b/pkg/logger/lesslogger_test.go deleted file mode 100644 index 1134c0c..0000000 --- a/pkg/logger/lesslogger_test.go +++ /dev/null @@ -1,35 +0,0 @@ -package logger - -import ( - "log" - "strings" - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestLessLogger_Error(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - l := NewLessLogger(500) - for i := 0; i < 100; i++ { - l.Error("hello") - } - log.Print(w.String()) - assert.Equal(t, 1, strings.Count(w.String(), "\n")) -} - -func TestLessLogger_Errorf(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - l := NewLessLogger(500) - for i := 0; i < 100; i++ { - l.Errorf("hello") - } - - assert.Equal(t, 1, strings.Count(w.String(), "\n")) -} diff --git a/pkg/logger/lesswriter_test.go b/pkg/logger/lesswriter_test.go deleted file mode 100644 index c522bab..0000000 --- a/pkg/logger/lesswriter_test.go +++ /dev/null @@ -1,19 +0,0 @@ -package logger - -import ( - "strings" - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestLessWriter(t *testing.T) { - var builder strings.Builder - w := newLessWriter(&builder, 500) - for i := 0; i < 100; i++ { - _, err := w.Write([]byte("hello")) - assert.Nil(t, err) - } - - assert.Equal(t, "hello", builder.String()) -} diff --git a/pkg/logger/limitedexecutor_test.go b/pkg/logger/limitedexecutor_test.go deleted file mode 100644 index eb365e9..0000000 --- 
a/pkg/logger/limitedexecutor_test.go +++ /dev/null @@ -1,62 +0,0 @@ -package logger - -import ( - "sync/atomic" - "testing" - "time" - - "github.com/perfect-panel/server/pkg/timex" - "github.com/stretchr/testify/assert" -) - -func TestLimitedExecutor_logOrDiscard(t *testing.T) { - tests := []struct { - name string - threshold time.Duration - lastTime time.Duration - discarded uint32 - executed bool - }{ - { - name: "nil executor", - executed: true, - }, - { - name: "regular", - threshold: time.Hour, - lastTime: timex.Now(), - discarded: 10, - executed: false, - }, - { - name: "slow", - threshold: time.Duration(1), - lastTime: -1000, - discarded: 10, - executed: true, - }, - } - - for _, test := range tests { - test := test - t.Run(test.name, func(t *testing.T) { - t.Parallel() - - executor := newLimitedExecutor(0) - executor.threshold = test.threshold - executor.discarded = test.discarded - executor.lastTime.Set(test.lastTime) - - var run int32 - executor.logOrDiscard(func() { - atomic.AddInt32(&run, 1) - }) - if test.executed { - assert.Equal(t, int32(1), atomic.LoadInt32(&run)) - } else { - assert.Equal(t, int32(0), atomic.LoadInt32(&run)) - assert.Equal(t, test.discarded+1, atomic.LoadUint32(&executor.discarded)) - } - }) - } -} diff --git a/pkg/logger/logs_test.go b/pkg/logger/logs_test.go deleted file mode 100644 index c67bd79..0000000 --- a/pkg/logger/logs_test.go +++ /dev/null @@ -1,931 +0,0 @@ -package logger - -import ( - "encoding/json" - "errors" - "fmt" - "io" - "log" - "os" - "reflect" - "runtime" - "strings" - "sync" - "sync/atomic" - "testing" - "time" - - "github.com/stretchr/testify/assert" -) - -var ( - s = []byte("Sending #11 notification (id: 1451875113812010473) in #1 connection") - pool = make(chan []byte, 1) - _ Writer = (*mockWriter)(nil) -) - -func init() { - ExitOnFatal.Set(false) -} - -type mockWriter struct { - lock sync.Mutex - builder strings.Builder -} - -func (mw *mockWriter) Alert(v any) { - mw.lock.Lock() - defer mw.lock.Unlock() - 
output(&mw.builder, levelAlert, v) -} - -func (mw *mockWriter) Debug(v any, fields ...LogField) { - mw.lock.Lock() - defer mw.lock.Unlock() - output(&mw.builder, levelDebug, v, fields...) -} - -func (mw *mockWriter) Error(v any, fields ...LogField) { - mw.lock.Lock() - defer mw.lock.Unlock() - output(&mw.builder, levelError, v, fields...) -} - -func (mw *mockWriter) Info(v any, fields ...LogField) { - mw.lock.Lock() - defer mw.lock.Unlock() - output(&mw.builder, levelInfo, v, fields...) -} - -func (mw *mockWriter) Severe(v any) { - mw.lock.Lock() - defer mw.lock.Unlock() - output(&mw.builder, levelSevere, v) -} - -func (mw *mockWriter) Slow(v any, fields ...LogField) { - mw.lock.Lock() - defer mw.lock.Unlock() - output(&mw.builder, levelSlow, v, fields...) -} - -func (mw *mockWriter) Stack(v any) { - mw.lock.Lock() - defer mw.lock.Unlock() - output(&mw.builder, levelError, v) -} - -func (mw *mockWriter) Stat(v any, fields ...LogField) { - mw.lock.Lock() - defer mw.lock.Unlock() - output(&mw.builder, levelStat, v, fields...) 
-} - -func (mw *mockWriter) Close() error { - return nil -} - -func (mw *mockWriter) Contains(text string) bool { - mw.lock.Lock() - defer mw.lock.Unlock() - return strings.Contains(mw.builder.String(), text) -} - -func (mw *mockWriter) Reset() { - mw.lock.Lock() - defer mw.lock.Unlock() - mw.builder.Reset() -} - -func (mw *mockWriter) String() string { - mw.lock.Lock() - defer mw.lock.Unlock() - return mw.builder.String() -} - -func TestField(t *testing.T) { - tests := []struct { - name string - f LogField - want map[string]any - }{ - { - name: "error", - f: Field("foo", errors.New("bar")), - want: map[string]any{ - "foo": "bar", - }, - }, - { - name: "errors", - f: Field("foo", []error{errors.New("bar"), errors.New("baz")}), - want: map[string]any{ - "foo": []any{"bar", "baz"}, - }, - }, - { - name: "strings", - f: Field("foo", []string{"bar", "baz"}), - want: map[string]any{ - "foo": []any{"bar", "baz"}, - }, - }, - { - name: "duration", - f: Field("foo", time.Second), - want: map[string]any{ - "foo": "1s", - }, - }, - { - name: "durations", - f: Field("foo", []time.Duration{time.Second, 2 * time.Second}), - want: map[string]any{ - "foo": []any{"1s", "2s"}, - }, - }, - { - name: "times", - f: Field("foo", []time.Time{ - time.Date(2020, time.January, 1, 0, 0, 0, 0, time.UTC), - time.Date(2020, time.January, 2, 0, 0, 0, 0, time.UTC), - }), - want: map[string]any{ - "foo": []any{"2020-01-01 00:00:00 +0000 UTC", "2020-01-02 00:00:00 +0000 UTC"}, - }, - }, - { - name: "stringer", - f: Field("foo", ValStringer{val: "bar"}), - want: map[string]any{ - "foo": "bar", - }, - }, - { - name: "stringers", - f: Field("foo", []fmt.Stringer{ValStringer{val: "bar"}, ValStringer{val: "baz"}}), - want: map[string]any{ - "foo": []any{"bar", "baz"}, - }, - }, - } - - for _, test := range tests { - test := test - t.Run(test.name, func(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - Infow("foo", test.f) - validateFields(t, w.String(), 
test.want) - }) - } -} - -func TestFileLineFileMode(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - file, line := getFileLine() - Error("anything") - assert.True(t, w.Contains(fmt.Sprintf("%s:%d", file, line+1))) - - file, line = getFileLine() - Errorf("anything %s", "format") - assert.True(t, w.Contains(fmt.Sprintf("%s:%d", file, line+1))) -} - -func TestFileLineConsoleMode(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - file, line := getFileLine() - Error("anything") - assert.True(t, w.Contains(fmt.Sprintf("%s:%d", file, line+1))) - - w.Reset() - file, line = getFileLine() - Errorf("anything %s", "format") - assert.True(t, w.Contains(fmt.Sprintf("%s:%d", file, line+1))) -} - -func TestMust(t *testing.T) { - assert.Panics(t, func() { - Must(errors.New("foo")) - }) -} - -func TestStructedLogAlert(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelAlert, w, func(v ...any) { - Alert(fmt.Sprint(v...)) - }) -} - -func TestStructedLogDebug(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelDebug, w, func(v ...any) { - Debug(v...) 
- }) -} - -func TestStructedLogDebugf(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelDebug, w, func(v ...any) { - Debugf(fmt.Sprint(v...)) - }) -} - -func TestStructedLogDebugv(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelDebug, w, func(v ...any) { - Debugv(fmt.Sprint(v...)) - }) -} - -func TestStructedLogDebugw(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelDebug, w, func(v ...any) { - Debugw(fmt.Sprint(v...), Field("foo", time.Second)) - }) -} - -func TestStructedLogError(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelError, w, func(v ...any) { - Error(v...) - }) -} - -func TestStructedLogErrorf(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelError, w, func(v ...any) { - Errorf("%s", fmt.Sprint(v...)) - }) -} - -func TestStructedLogErrorv(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelError, w, func(v ...any) { - Errorv(fmt.Sprint(v...)) - }) -} - -func TestStructedLogErrorw(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelError, w, func(v ...any) { - Errorw(fmt.Sprint(v...), Field("foo", "bar")) - }) -} - -func TestStructedLogInfo(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelInfo, w, func(v ...any) { - Info(v...) 
- }) -} - -func TestStructedLogInfof(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelInfo, w, func(v ...any) { - Infof("%s", fmt.Sprint(v...)) - }) -} - -func TestStructedLogInfov(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelInfo, w, func(v ...any) { - Infov(fmt.Sprint(v...)) - }) -} - -func TestStructedLogInfow(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelInfo, w, func(v ...any) { - Infow(fmt.Sprint(v...), Field("foo", "bar")) - }) -} - -func TestStructedLogFieldNil(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - assert.NotPanics(t, func() { - var s *string - Infow("test", Field("bb", s)) - var d *nilStringer - Infow("test", Field("bb", d)) - var e *nilError - Errorw("test", Field("bb", e)) - }) - assert.NotPanics(t, func() { - var p panicStringer - Infow("test", Field("bb", p)) - var ps innerPanicStringer - Infow("test", Field("bb", ps)) - }) -} - -func TestStructedLogInfoConsoleAny(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLogConsole(t, w, func(v ...any) { - old := atomic.LoadUint32(&encoding) - atomic.StoreUint32(&encoding, plainEncodingType) - defer func() { - atomic.StoreUint32(&encoding, old) - }() - - Infov(v) - }) -} - -func TestStructedLogInfoConsoleAnyString(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLogConsole(t, w, func(v ...any) { - old := atomic.LoadUint32(&encoding) - atomic.StoreUint32(&encoding, plainEncodingType) - defer func() { - atomic.StoreUint32(&encoding, old) - }() - - Infov(fmt.Sprint(v...)) - }) -} - -func TestStructedLogInfoConsoleAnyError(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - 
doTestStructedLogConsole(t, w, func(v ...any) { - old := atomic.LoadUint32(&encoding) - atomic.StoreUint32(&encoding, plainEncodingType) - defer func() { - atomic.StoreUint32(&encoding, old) - }() - - Infov(errors.New(fmt.Sprint(v...))) - }) -} - -func TestStructedLogInfoConsoleAnyStringer(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLogConsole(t, w, func(v ...any) { - old := atomic.LoadUint32(&encoding) - atomic.StoreUint32(&encoding, plainEncodingType) - defer func() { - atomic.StoreUint32(&encoding, old) - }() - - Infov(ValStringer{ - val: fmt.Sprint(v...), - }) - }) -} - -func TestStructedLogInfoConsoleText(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLogConsole(t, w, func(v ...any) { - old := atomic.LoadUint32(&encoding) - atomic.StoreUint32(&encoding, plainEncodingType) - defer func() { - atomic.StoreUint32(&encoding, old) - }() - - Info(fmt.Sprint(v...)) - }) -} - -func TestStructedLogSlow(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelSlow, w, func(v ...any) { - Slow(v...) 
- }) -} - -func TestStructedLogSlowf(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelSlow, w, func(v ...any) { - Slowf(fmt.Sprint(v...)) - }) -} - -func TestStructedLogSlowv(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelSlow, w, func(v ...any) { - Slowv(fmt.Sprint(v...)) - }) -} - -func TestStructedLogSloww(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelSlow, w, func(v ...any) { - Sloww(fmt.Sprint(v...), Field("foo", time.Second)) - }) -} - -func TestStructedLogStat(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelStat, w, func(v ...any) { - Stat(v...) - }) -} - -func TestStructedLogStatf(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelStat, w, func(v ...any) { - Statf(fmt.Sprint(v...)) - }) -} - -func TestStructedLogSevere(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelSevere, w, func(v ...any) { - Severe(v...) 
- }) -} - -func TestStructedLogSeveref(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - doTestStructedLog(t, levelSevere, w, func(v ...any) { - Severef(fmt.Sprint(v...)) - }) -} - -func TestStructedLogWithDuration(t *testing.T) { - const message = "hello there" - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - WithDuration(time.Second).Info(message) - var entry map[string]any - if err := json.Unmarshal([]byte(w.String()), &entry); err != nil { - t.Error(err) - } - assert.Equal(t, levelInfo, entry[levelKey]) - assert.Equal(t, message, entry[contentKey]) - assert.Equal(t, "1000.0ms", entry[durationKey]) -} - -func TestSetLevel(t *testing.T) { - SetLevel(ErrorLevel) - const message = "hello there" - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - Info(message) - assert.Equal(t, 0, w.builder.Len()) -} - -func TestSetLevelTwiceWithMode(t *testing.T) { - testModes := []string{ - "console", - "volumn", - "mode", - } - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - for _, mode := range testModes { - testSetLevelTwiceWithMode(t, mode, w) - } -} - -func TestSetLevelWithDuration(t *testing.T) { - SetLevel(ErrorLevel) - const message = "hello there" - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - WithDuration(time.Second).Info(message) - assert.Equal(t, 0, w.builder.Len()) -} - -func TestErrorfWithWrappedError(t *testing.T) { - SetLevel(ErrorLevel) - const message = "there" - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - Errorf("hello %s", errors.New(message)) - assert.True(t, strings.Contains(w.String(), "hello there")) -} - -func TestMustNil(t *testing.T) { - Must(nil) -} - -func TestSetup(t *testing.T) { - defer func() { - SetLevel(InfoLevel) - atomic.StoreUint32(&encoding, jsonEncodingType) - }() - - setupOnce = sync.Once{} - MustSetup(LogConf{ - ServiceName: "any", - Mode: "console", - 
Encoding: "json", - TimeFormat: timeFormat, - }) - setupOnce = sync.Once{} - MustSetup(LogConf{ - ServiceName: "any", - Mode: "console", - TimeFormat: timeFormat, - }) - setupOnce = sync.Once{} - MustSetup(LogConf{ - ServiceName: "any", - Mode: "file", - Path: os.TempDir(), - }) - setupOnce = sync.Once{} - MustSetup(LogConf{ - ServiceName: "any", - Mode: "volume", - Path: os.TempDir(), - }) - setupOnce = sync.Once{} - MustSetup(LogConf{ - ServiceName: "any", - Mode: "console", - TimeFormat: timeFormat, - }) - setupOnce = sync.Once{} - MustSetup(LogConf{ - ServiceName: "any", - Mode: "console", - Encoding: plainEncoding, - }) - - defer os.RemoveAll("CD01CB7D-2705-4F3F-889E-86219BF56F10") - assert.NotNil(t, setupWithVolume(LogConf{})) - assert.Nil(t, setupWithVolume(LogConf{ - ServiceName: "CD01CB7D-2705-4F3F-889E-86219BF56F10", - })) - assert.Nil(t, setupWithVolume(LogConf{ - ServiceName: "CD01CB7D-2705-4F3F-889E-86219BF56F10", - Rotation: sizeRotationRule, - })) - assert.NotNil(t, setupWithFiles(LogConf{})) - assert.Nil(t, setupWithFiles(LogConf{ - ServiceName: "any", - Path: os.TempDir(), - Compress: true, - KeepDays: 1, - MaxBackups: 3, - MaxSize: 1024 * 1024, - })) - setupLogLevel(LogConf{ - Level: levelInfo, - }) - setupLogLevel(LogConf{ - Level: levelError, - }) - setupLogLevel(LogConf{ - Level: levelSevere, - }) - _, err := createOutput("") - assert.NotNil(t, err) - Disable() - SetLevel(InfoLevel) - atomic.StoreUint32(&encoding, jsonEncodingType) -} - -func TestDisable(t *testing.T) { - Disable() - defer func() { - SetLevel(InfoLevel) - atomic.StoreUint32(&encoding, jsonEncodingType) - }() - - var opt logOptions - WithKeepDays(1)(&opt) - WithGzip()(&opt) - WithMaxBackups(1)(&opt) - WithMaxSize(1024)(&opt) - assert.Nil(t, Close()) - assert.Nil(t, Close()) - assert.Equal(t, uint32(disableLevel), atomic.LoadUint32(&logLevel)) -} - -func TestDisableStat(t *testing.T) { - DisableStat() - - const message = "hello there" - w := new(mockWriter) - old := 
writer.Swap(w) - defer writer.Store(old) - Stat(message) - assert.Equal(t, 0, w.builder.Len()) -} - -func TestAddWriter(t *testing.T) { - const message = "hello there" - w := new(mockWriter) - AddWriter(w) - w1 := new(mockWriter) - AddWriter(w1) - Error(message) - assert.Contains(t, w.String(), message) - assert.Contains(t, w1.String(), message) -} - -func TestSetWriter(t *testing.T) { - atomic.StoreUint32(&logLevel, 0) - Reset() - SetWriter(nopWriter{}) - assert.NotNil(t, writer.Load()) - assert.True(t, writer.Load() == nopWriter{}) - mocked := new(mockWriter) - SetWriter(mocked) - assert.Equal(t, mocked, writer.Load()) -} - -func TestWithGzip(t *testing.T) { - fn := WithGzip() - var opt logOptions - fn(&opt) - assert.True(t, opt.gzipEnabled) -} - -func TestWithKeepDays(t *testing.T) { - fn := WithKeepDays(1) - var opt logOptions - fn(&opt) - assert.Equal(t, 1, opt.keepDays) -} - -func BenchmarkCopyByteSliceAppend(b *testing.B) { - for i := 0; i < b.N; i++ { - var buf []byte - buf = append(buf, getTimestamp()...) - buf = append(buf, ' ') - buf = append(buf, s...) 
- _ = buf - } -} - -func BenchmarkCopyByteSliceAllocExactly(b *testing.B) { - for i := 0; i < b.N; i++ { - now := []byte(getTimestamp()) - buf := make([]byte, len(now)+1+len(s)) - n := copy(buf, now) - buf[n] = ' ' - copy(buf[n+1:], s) - } -} - -func BenchmarkCopyByteSlice(b *testing.B) { - var buf []byte - for i := 0; i < b.N; i++ { - buf = make([]byte, len(s)) - copy(buf, s) - } - fmt.Fprint(io.Discard, buf) -} - -func BenchmarkCopyOnWriteByteSlice(b *testing.B) { - var buf []byte - for i := 0; i < b.N; i++ { - size := len(s) - buf = s[:size:size] - } - fmt.Fprint(io.Discard, buf) -} - -func BenchmarkCacheByteSlice(b *testing.B) { - for i := 0; i < b.N; i++ { - dup := fetch() - copy(dup, s) - put(dup) - } -} - -func BenchmarkLogs(b *testing.B) { - b.ReportAllocs() - - log.SetOutput(io.Discard) - for i := 0; i < b.N; i++ { - Info(i) - } -} - -func fetch() []byte { - select { - case b := <-pool: - return b - default: - } - return make([]byte, 4096) -} - -func getFileLine() (string, int) { - _, file, line, _ := runtime.Caller(1) - short := file - - for i := len(file) - 1; i > 0; i-- { - if file[i] == '/' { - short = file[i+1:] - break - } - } - - return short, line -} - -func put(b []byte) { - select { - case pool <- b: - default: - } -} - -func doTestStructedLog(t *testing.T, level string, w *mockWriter, write func(...any)) { - const message = "hello there" - write(message) - - var entry map[string]any - if err := json.Unmarshal([]byte(w.String()), &entry); err != nil { - t.Error(err) - } - - assert.Equal(t, level, entry[levelKey]) - val, ok := entry[contentKey] - assert.True(t, ok) - assert.True(t, strings.Contains(val.(string), message)) -} - -func doTestStructedLogConsole(t *testing.T, w *mockWriter, write func(...any)) { - const message = "hello there" - write(message) - assert.True(t, strings.Contains(w.String(), message)) -} - -func testSetLevelTwiceWithMode(t *testing.T, mode string, w *mockWriter) { - writer.Store(nil) - _ = SetUp(LogConf{ - Mode: mode, - 
Level: "debug", - Path: "/dev/null", - Encoding: plainEncoding, - Stat: false, - TimeFormat: time.RFC3339, - FileTimeFormat: time.DateTime, - }) - _ = SetUp(LogConf{ - Mode: mode, - Level: "info", - Path: "/dev/null", - }) - const message = "hello there" - Info(message) - assert.Equal(t, 0, w.builder.Len()) - Infof(message) - assert.Equal(t, 0, w.builder.Len()) - ErrorStack(message) - assert.Equal(t, 0, w.builder.Len()) - ErrorStackf(message) - assert.Equal(t, 0, w.builder.Len()) -} - -type ValStringer struct { - val string -} - -func (v ValStringer) String() string { - return v.val -} - -func validateFields(t *testing.T, content string, fields map[string]any) { - var m map[string]any - if err := json.Unmarshal([]byte(content), &m); err != nil { - t.Error(err) - } - - for k, v := range fields { - if reflect.TypeOf(v).Kind() == reflect.Slice { - assert.EqualValues(t, v, m[k]) - } else { - assert.Equal(t, v, m[k], content) - } - } -} - -type nilError struct { - Name string -} - -func (e *nilError) Error() string { - return e.Name -} - -type nilStringer struct { - Name string -} - -func (s *nilStringer) String() string { - return s.Name -} - -type innerPanicStringer struct { - Inner *struct { - Name string - } -} - -func (s innerPanicStringer) String() string { - return s.Inner.Name -} - -type panicStringer struct { -} - -func (s panicStringer) String() string { - panic("panic") -} diff --git a/pkg/logger/logtest/logtest.go b/pkg/logger/logtest/logtest.go deleted file mode 100644 index 4f7c03f..0000000 --- a/pkg/logger/logtest/logtest.go +++ /dev/null @@ -1,84 +0,0 @@ -package logtest - -import ( - "bytes" - "encoding/json" - "io" - "testing" - - "github.com/perfect-panel/server/pkg/logger" -) - -type Buffer struct { - buf *bytes.Buffer - t *testing.T -} - -func Discard(t *testing.T) { - prev := logger.Reset() - logger.SetWriter(logger.NewWriter(io.Discard)) - - t.Cleanup(func() { - logger.SetWriter(prev) - }) -} - -func NewCollector(t *testing.T) *Buffer { - var buf 
bytes.Buffer - writer := logger.NewWriter(&buf) - prev := logger.Reset() - logger.SetWriter(writer) - - t.Cleanup(func() { - logger.SetWriter(prev) - }) - - return &Buffer{ - buf: &buf, - t: t, - } -} - -func (b *Buffer) Bytes() []byte { - return b.buf.Bytes() -} - -func (b *Buffer) Content() string { - var m map[string]interface{} - if err := json.Unmarshal(b.buf.Bytes(), &m); err != nil { - return "" - } - - content, ok := m["content"] - if !ok { - return "" - } - - switch val := content.(type) { - case string: - return val - default: - // err is impossible to be not nil, unmarshaled from b.buf.Bytes() - bs, _ := json.Marshal(content) - return string(bs) - } -} - -func (b *Buffer) Reset() { - b.buf.Reset() -} - -func (b *Buffer) String() string { - return b.buf.String() -} - -func PanicOnFatal(t *testing.T) { - ok := logger.ExitOnFatal.CompareAndSwap(true, false) - if !ok { - return - } - - t.Cleanup(func() { - logger.ExitOnFatal.CompareAndSwap(false, true) - }) -} diff --git a/pkg/logger/logtest/logtest_test.go b/pkg/logger/logtest/logtest_test.go deleted file mode 100644 index 953fec2..0000000 --- a/pkg/logger/logtest/logtest_test.go +++ /dev/null @@ -1,44 +0,0 @@ -package logtest - -import ( - "errors" - "testing" - - "github.com/perfect-panel/server/pkg/logger" - "github.com/stretchr/testify/assert" -) - -func TestCollector(t *testing.T) { - const input = "hello" - c := NewCollector(t) - logger.Info(input) - assert.Equal(t, input, c.Content()) - assert.Contains(t, c.String(), input) - c.Reset() - assert.Empty(t, c.Bytes()) -} - -func TestPanicOnFatal(t *testing.T) { - const input = "hello" - Discard(t) - logger.Info(input) - - PanicOnFatal(t) - PanicOnFatal(t) - assert.Panics(t, func() { - logger.Must(errors.New("foo")) - }) -} - -func TestCollectorContent(t *testing.T) { - const input = "hello" - c := NewCollector(t) - c.buf.WriteString(input) - assert.Empty(t, c.Content()) - c.Reset() - c.buf.WriteString(`{}`) - assert.Empty(t, c.Content()) - c.Reset() - 
c.buf.WriteString(`{"content":1}`) - assert.Equal(t, "1", c.Content()) -} diff --git a/pkg/logger/read_test.go b/pkg/logger/read_test.go deleted file mode 100644 index 527d32d..0000000 --- a/pkg/logger/read_test.go +++ /dev/null @@ -1,16 +0,0 @@ -package logger - -import ( - "testing" -) - -func TestReadLastNLines(t *testing.T) { - t.Skipf("skip this test until this test fails") - lines, err := ReadLastNLines("/Users/tension/code/ppanel/server/logs", 10) - if err != nil { - t.Fatalf("Error reading last N lines: %v", err) - } - for i, line := range lines { - t.Logf("Line %d: %s", i, line) - } -} diff --git a/pkg/logger/richlogger_test.go b/pkg/logger/richlogger_test.go deleted file mode 100644 index 1eb2d2f..0000000 --- a/pkg/logger/richlogger_test.go +++ /dev/null @@ -1,408 +0,0 @@ -package logger - -import ( - "context" - "encoding/json" - "fmt" - "io" - "strings" - "sync/atomic" - "testing" - "time" - - "github.com/stretchr/testify/assert" - "go.opentelemetry.io/otel" - sdktrace "go.opentelemetry.io/otel/sdk/trace" -) - -func TestTraceLog(t *testing.T) { - SetLevel(InfoLevel) - w := new(mockWriter) - old := writer.Swap(w) - writer.lock.RLock() - defer func() { - writer.lock.RUnlock() - writer.Store(old) - }() - - otp := otel.GetTracerProvider() - tp := sdktrace.NewTracerProvider(sdktrace.WithSampler(sdktrace.AlwaysSample())) - otel.SetTracerProvider(tp) - defer otel.SetTracerProvider(otp) - - ctx, span := tp.Tracer("trace-id").Start(context.Background(), "span-id") - defer span.End() - - WithContext(ctx).Info(testlog) - validate(t, w.String(), true, true) -} - -func TestTraceDebug(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - writer.lock.RLock() - defer func() { - writer.lock.RUnlock() - writer.Store(old) - }() - - otp := otel.GetTracerProvider() - tp := sdktrace.NewTracerProvider(sdktrace.WithSampler(sdktrace.AlwaysSample())) - otel.SetTracerProvider(tp) - defer otel.SetTracerProvider(otp) - - ctx, span := 
tp.Tracer("foo").Start(context.Background(), "bar") - defer span.End() - - l := WithContext(ctx) - SetLevel(DebugLevel) - l.WithDuration(time.Second).Debug(testlog) - assert.True(t, strings.Contains(w.String(), traceKey)) - assert.True(t, strings.Contains(w.String(), spanKey)) - w.Reset() - l.WithDuration(time.Second).Debugf(testlog) - validate(t, w.String(), true, true) - w.Reset() - l.WithDuration(time.Second).Debugv(testlog) - validate(t, w.String(), true, true) - w.Reset() - l.WithDuration(time.Second).Debugv(testobj) - validateContentType(t, w.String(), map[string]any{}, true, true) - w.Reset() - l.WithDuration(time.Second).Debugw(testlog, Field("foo", "bar")) - validate(t, w.String(), true, true) - assert.True(t, strings.Contains(w.String(), "foo"), w.String()) - assert.True(t, strings.Contains(w.String(), "bar"), w.String()) -} - -func TestTraceError(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - writer.lock.RLock() - defer func() { - writer.lock.RUnlock() - writer.Store(old) - }() - - otp := otel.GetTracerProvider() - tp := sdktrace.NewTracerProvider(sdktrace.WithSampler(sdktrace.AlwaysSample())) - otel.SetTracerProvider(tp) - defer otel.SetTracerProvider(otp) - - ctx, span := tp.Tracer("trace-id").Start(context.Background(), "span-id") - defer span.End() - - var nilCtx context.Context - l := WithContext(context.Background()) - l = l.WithContext(nilCtx) - l = l.WithContext(ctx) - SetLevel(ErrorLevel) - l.WithDuration(time.Second).Error(testlog) - validate(t, w.String(), true, true) - w.Reset() - l.WithDuration(time.Second).Errorf(testlog) - validate(t, w.String(), true, true) - w.Reset() - l.WithDuration(time.Second).Errorv(testlog) - validate(t, w.String(), true, true) - w.Reset() - l.WithDuration(time.Second).Errorv(testobj) - validateContentType(t, w.String(), map[string]any{}, true, true) - w.Reset() - l.WithDuration(time.Second).Errorw(testlog, Field("basket", "ball")) - validate(t, w.String(), true, true) - assert.True(t, 
strings.Contains(w.String(), "basket"), w.String()) - assert.True(t, strings.Contains(w.String(), "ball"), w.String()) -} - -func TestTraceInfo(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - writer.lock.RLock() - defer func() { - writer.lock.RUnlock() - writer.Store(old) - }() - - otp := otel.GetTracerProvider() - tp := sdktrace.NewTracerProvider(sdktrace.WithSampler(sdktrace.AlwaysSample())) - otel.SetTracerProvider(tp) - defer otel.SetTracerProvider(otp) - - ctx, span := tp.Tracer("trace-id").Start(context.Background(), "span-id") - defer span.End() - - SetLevel(InfoLevel) - l := WithContext(ctx) - l.WithDuration(time.Second).Info(testlog) - validate(t, w.String(), true, true) - w.Reset() - l.WithDuration(time.Second).Infof(testlog) - validate(t, w.String(), true, true) - w.Reset() - l.WithDuration(time.Second).Infov(testlog) - validate(t, w.String(), true, true) - w.Reset() - l.WithDuration(time.Second).Infov(testobj) - validateContentType(t, w.String(), map[string]any{}, true, true) - w.Reset() - l.WithDuration(time.Second).Infow(testlog, Field("basket", "ball")) - validate(t, w.String(), true, true) - assert.True(t, strings.Contains(w.String(), "basket"), w.String()) - assert.True(t, strings.Contains(w.String(), "ball"), w.String()) -} - -func TestTraceInfoConsole(t *testing.T) { - old := atomic.SwapUint32(&encoding, jsonEncodingType) - defer atomic.StoreUint32(&encoding, old) - - w := new(mockWriter) - o := writer.Swap(w) - writer.lock.RLock() - defer func() { - writer.lock.RUnlock() - writer.Store(o) - }() - - otp := otel.GetTracerProvider() - tp := sdktrace.NewTracerProvider(sdktrace.WithSampler(sdktrace.AlwaysSample())) - otel.SetTracerProvider(tp) - defer otel.SetTracerProvider(otp) - - ctx, span := tp.Tracer("trace-id").Start(context.Background(), "span-id") - defer span.End() - - l := WithContext(ctx) - SetLevel(InfoLevel) - l.WithDuration(time.Second).Info(testlog) - validate(t, w.String(), true, true) - w.Reset() - 
l.WithDuration(time.Second).Infof(testlog) - validate(t, w.String(), true, true) - w.Reset() - l.WithDuration(time.Second).Infov(testlog) - validate(t, w.String(), true, true) - w.Reset() - l.WithDuration(time.Second).Infov(testobj) - validateContentType(t, w.String(), map[string]any{}, true, true) -} - -func TestTraceSlow(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - writer.lock.RLock() - defer func() { - writer.lock.RUnlock() - writer.Store(old) - }() - - otp := otel.GetTracerProvider() - tp := sdktrace.NewTracerProvider(sdktrace.WithSampler(sdktrace.AlwaysSample())) - otel.SetTracerProvider(tp) - defer otel.SetTracerProvider(otp) - - ctx, span := tp.Tracer("trace-id").Start(context.Background(), "span-id") - defer span.End() - - l := WithContext(ctx) - SetLevel(InfoLevel) - l.WithDuration(time.Second).Slow(testlog) - assert.True(t, strings.Contains(w.String(), traceKey)) - assert.True(t, strings.Contains(w.String(), spanKey)) - w.Reset() - l.WithDuration(time.Second).Slowf(testlog) - validate(t, w.String(), true, true) - w.Reset() - l.WithDuration(time.Second).Slowv(testlog) - validate(t, w.String(), true, true) - w.Reset() - l.WithDuration(time.Second).Slowv(testobj) - validateContentType(t, w.String(), map[string]any{}, true, true) - w.Reset() - l.WithDuration(time.Second).Sloww(testlog, Field("basket", "ball")) - validate(t, w.String(), true, true) - assert.True(t, strings.Contains(w.String(), "basket"), w.String()) - assert.True(t, strings.Contains(w.String(), "ball"), w.String()) -} - -func TestTraceWithoutContext(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - writer.lock.RLock() - defer func() { - writer.lock.RUnlock() - writer.Store(old) - }() - - l := WithContext(context.Background()) - SetLevel(InfoLevel) - l.WithDuration(time.Second).Info(testlog) - validate(t, w.String(), false, false) - w.Reset() - l.WithDuration(time.Second).Infof(testlog) - validate(t, w.String(), false, false) -} - -func TestLogWithFields(t 
*testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - writer.lock.RLock() - defer func() { - writer.lock.RUnlock() - writer.Store(old) - }() - - ctx := ContextWithFields(context.Background(), Field("foo", "bar")) - l := WithContext(ctx) - SetLevel(InfoLevel) - l.Infow(testlog) - - var val mockValue - assert.Nil(t, json.Unmarshal([]byte(w.String()), &val)) - assert.Equal(t, "bar", val.Foo) -} - -func TestLogWithCallerSkip(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - writer.lock.RLock() - defer func() { - writer.lock.RUnlock() - writer.Store(old) - }() - - l := WithCallerSkip(1).WithCallerSkip(0) - p := func(v string) { - l.Infow(v) - } - - file, line := getFileLine() - p(testlog) - assert.True(t, w.Contains(fmt.Sprintf("%s:%d", file, line+1))) - - w.Reset() - l = WithCallerSkip(0).WithCallerSkip(1) - file, line = getFileLine() - p(testlog) - assert.True(t, w.Contains(fmt.Sprintf("%s:%d", file, line+1))) -} - -func TestLogWithCallerSkipCopy(t *testing.T) { - log1 := WithCallerSkip(2) - log2 := log1.WithCallerSkip(3) - log3 := log2.WithCallerSkip(-1) - assert.Equal(t, 2, log1.(*richLogger).callerSkip) - assert.Equal(t, 3, log2.(*richLogger).callerSkip) - assert.Equal(t, 3, log3.(*richLogger).callerSkip) -} - -func TestLogWithContextCopy(t *testing.T) { - c1 := context.Background() - type ctxKey string // define a new string type for context keys - - const fooKey ctxKey = "foo" // use this key - c2 := context.WithValue(context.Background(), fooKey, "bar") - log1 := WithContext(c1) - log2 := log1.WithContext(c2) - assert.Equal(t, c1, log1.(*richLogger).ctx) - assert.Equal(t, c2, log2.(*richLogger).ctx) -} - -func TestLogWithDurationCopy(t *testing.T) { - log1 := WithContext(context.Background()) - log2 := log1.WithDuration(time.Second) - assert.Empty(t, log1.(*richLogger).fields) - assert.Equal(t, 1, len(log2.(*richLogger).fields)) - - var w mockWriter - old := writer.Swap(&w) - defer writer.Store(old) - log2.Info("hello") - assert.Contains(t, w.String(),
`"duration":"1000.0ms"`) -} - -func TestLogWithFieldsCopy(t *testing.T) { - log1 := WithContext(context.Background()) - log2 := log1.WithFields(Field("foo", "bar")) - log3 := log1.WithFields() - assert.Empty(t, log1.(*richLogger).fields) - assert.Equal(t, 1, len(log2.(*richLogger).fields)) - assert.Equal(t, log1, log3) - assert.Empty(t, log3.(*richLogger).fields) - - var w mockWriter - old := writer.Swap(&w) - defer writer.Store(old) - - log2.Info("hello") - assert.Contains(t, w.String(), `"foo":"bar"`) -} - -func TestLoggerWithFields(t *testing.T) { - w := new(mockWriter) - old := writer.Swap(w) - writer.lock.RLock() - defer func() { - writer.lock.RUnlock() - writer.Store(old) - }() - - l := WithContext(context.Background()).WithFields(Field("foo", "bar")) - l.Infow(testlog) - - var val mockValue - assert.Nil(t, json.Unmarshal([]byte(w.String()), &val)) - assert.Equal(t, "bar", val.Foo) -} - -func validate(t *testing.T, body string, expectedTrace, expectedSpan bool) { - var val mockValue - dec := json.NewDecoder(strings.NewReader(body)) - - for { - var doc mockValue - err := dec.Decode(&doc) - if err == io.EOF { - // all done - break - } - if err != nil { - continue - } - - val = doc - } - - assert.Equal(t, expectedTrace, len(val.Trace) > 0, body) - assert.Equal(t, expectedSpan, len(val.Span) > 0, body) -} - -func validateContentType(t *testing.T, body string, expectedType any, expectedTrace, expectedSpan bool) { - var val mockValue - dec := json.NewDecoder(strings.NewReader(body)) - - for { - var doc mockValue - err := dec.Decode(&doc) - if err == io.EOF { - // all done - break - } - if err != nil { - continue - } - - val = doc - } - - assert.IsType(t, expectedType, val.Content, body) - assert.Equal(t, expectedTrace, len(val.Trace) > 0, body) - assert.Equal(t, expectedSpan, len(val.Span) > 0, body) -} - -type mockValue struct { - Trace string `json:"trace"` - Span string `json:"span"` - Foo string `json:"foo"` - Content any `json:"content"` -} diff --git 
a/pkg/logger/rotatelogger_test.go b/pkg/logger/rotatelogger_test.go deleted file mode 100644 index c2b5be1..0000000 --- a/pkg/logger/rotatelogger_test.go +++ /dev/null @@ -1,636 +0,0 @@ -package logger - -import ( - "errors" - "io" - "os" - "path" - "path/filepath" - "sync/atomic" - "syscall" - "testing" - "time" - - "github.com/perfect-panel/server/pkg/random" - - "github.com/perfect-panel/server/pkg/fs" - "github.com/stretchr/testify/assert" -) - -func TestDailyRotateRuleMarkRotated(t *testing.T) { - t.Run("daily rule", func(t *testing.T) { - var rule DailyRotateRule - rule.MarkRotated() - assert.Equal(t, getNowDate(), rule.rotatedTime) - }) - - t.Run("daily rule", func(t *testing.T) { - rule := DefaultRotateRule("test", "-", 1, false) - _, ok := rule.(*DailyRotateRule) - assert.True(t, ok) - }) -} - -func TestDailyRotateRuleOutdatedFiles(t *testing.T) { - t.Run("no files", func(t *testing.T) { - var rule DailyRotateRule - assert.Empty(t, rule.OutdatedFiles()) - rule.days = 1 - assert.Empty(t, rule.OutdatedFiles()) - rule.gzip = true - assert.Empty(t, rule.OutdatedFiles()) - }) - - t.Run("bad files", func(t *testing.T) { - rule := DailyRotateRule{ - filename: "[a-z", - } - assert.Empty(t, rule.OutdatedFiles()) - rule.days = 1 - assert.Empty(t, rule.OutdatedFiles()) - rule.gzip = true - assert.Empty(t, rule.OutdatedFiles()) - }) - - t.Run("temp files", func(t *testing.T) { - boundary := time.Now().Add(-time.Hour * time.Duration(hoursPerDay) * 2).Format(dateFormat) - f1, err := os.CreateTemp(os.TempDir(), "go-zero-test-"+boundary) - assert.NoError(t, err) - _ = f1.Close() - f2, err := os.CreateTemp(os.TempDir(), "go-zero-test-"+boundary) - assert.NoError(t, err) - _ = f2.Close() - t.Cleanup(func() { - _ = os.Remove(f1.Name()) - _ = os.Remove(f2.Name()) - }) - rule := DailyRotateRule{ - filename: path.Join(os.TempDir(), "go-zero-test-"), - days: 1, - } - assert.NotEmpty(t, rule.OutdatedFiles()) - }) -} - -func TestDailyRotateRuleShallRotate(t *testing.T) { - var 
rule DailyRotateRule - rule.rotatedTime = time.Now().Add(time.Hour * 24).Format(dateFormat) - assert.True(t, rule.ShallRotate(0)) -} - -func TestSizeLimitRotateRuleMarkRotated(t *testing.T) { - t.Run("size limit rule", func(t *testing.T) { - var rule SizeLimitRotateRule - rule.MarkRotated() - assert.Equal(t, getNowDateInRFC3339Format(), rule.rotatedTime) - }) - - t.Run("size limit rule", func(t *testing.T) { - rule := NewSizeLimitRotateRule("foo", "-", 1, 1, 1, false) - rule.MarkRotated() - assert.Equal(t, getNowDateInRFC3339Format(), rule.(*SizeLimitRotateRule).rotatedTime) - }) -} - -func TestSizeLimitRotateRuleOutdatedFiles(t *testing.T) { - t.Run("no files", func(t *testing.T) { - var rule SizeLimitRotateRule - assert.Empty(t, rule.OutdatedFiles()) - rule.days = 1 - assert.Empty(t, rule.OutdatedFiles()) - rule.gzip = true - assert.Empty(t, rule.OutdatedFiles()) - rule.maxBackups = 0 - assert.Empty(t, rule.OutdatedFiles()) - }) - - t.Run("bad files", func(t *testing.T) { - rule := SizeLimitRotateRule{ - DailyRotateRule: DailyRotateRule{ - filename: "[a-z", - }, - } - assert.Empty(t, rule.OutdatedFiles()) - rule.days = 1 - assert.Empty(t, rule.OutdatedFiles()) - rule.gzip = true - assert.Empty(t, rule.OutdatedFiles()) - }) - - t.Run("temp files", func(t *testing.T) { - boundary := time.Now().Add(-time.Hour * time.Duration(hoursPerDay) * 2).Format(dateFormat) - f1, err := os.CreateTemp(os.TempDir(), "go-zero-test-"+boundary) - assert.NoError(t, err) - f2, err := os.CreateTemp(os.TempDir(), "go-zero-test-"+boundary) - assert.NoError(t, err) - boundary1 := time.Now().Add(time.Hour * time.Duration(hoursPerDay) * 2).Format(dateFormat) - f3, err := os.CreateTemp(os.TempDir(), "go-zero-test-"+boundary1) - assert.NoError(t, err) - t.Cleanup(func() { - _ = f1.Close() - _ = os.Remove(f1.Name()) - _ = f2.Close() - _ = os.Remove(f2.Name()) - _ = f3.Close() - _ = os.Remove(f3.Name()) - }) - rule := SizeLimitRotateRule{ - DailyRotateRule: DailyRotateRule{ - filename: 
path.Join(os.TempDir(), "go-zero-test-"), - days: 1, - }, - maxBackups: 3, - } - assert.NotEmpty(t, rule.OutdatedFiles()) - }) - - t.Run("no backups", func(t *testing.T) { - boundary := time.Now().Add(-time.Hour * time.Duration(hoursPerDay) * 2).Format(dateFormat) - f1, err := os.CreateTemp(os.TempDir(), "go-zero-test-"+boundary) - assert.NoError(t, err) - f2, err := os.CreateTemp(os.TempDir(), "go-zero-test-"+boundary) - assert.NoError(t, err) - boundary1 := time.Now().Add(time.Hour * time.Duration(hoursPerDay) * 2).Format(dateFormat) - f3, err := os.CreateTemp(os.TempDir(), "go-zero-test-"+boundary1) - assert.NoError(t, err) - t.Cleanup(func() { - _ = f1.Close() - _ = os.Remove(f1.Name()) - _ = f2.Close() - _ = os.Remove(f2.Name()) - _ = f3.Close() - _ = os.Remove(f3.Name()) - }) - rule := SizeLimitRotateRule{ - DailyRotateRule: DailyRotateRule{ - filename: path.Join(os.TempDir(), "go-zero-test-"), - days: 1, - }, - } - assert.NotEmpty(t, rule.OutdatedFiles()) - - logger := new(RotateLogger) - logger.rule = &rule - logger.maybeDeleteOutdatedFiles() - assert.Empty(t, rule.OutdatedFiles()) - }) -} - -func TestSizeLimitRotateRuleShallRotate(t *testing.T) { - var rule SizeLimitRotateRule - rule.rotatedTime = time.Now().Add(time.Hour * 24).Format(fileTimeFormat) - rule.maxSize = 0 - assert.False(t, rule.ShallRotate(0)) - rule.maxSize = 100 - assert.False(t, rule.ShallRotate(0)) - assert.True(t, rule.ShallRotate(101*megaBytes)) -} - -func TestRotateLoggerClose(t *testing.T) { - t.Run("close", func(t *testing.T) { - filename, err := fs.TempFilenameWithText("foo") - assert.Nil(t, err) - if len(filename) > 0 { - defer os.Remove(filename) - } - logger, err := NewLogger(filename, new(DailyRotateRule), false) - assert.Nil(t, err) - _, err = logger.Write([]byte("foo")) - assert.Nil(t, err) - assert.Nil(t, logger.Close()) - }) - - t.Run("close and write", func(t *testing.T) { - logger := new(RotateLogger) - logger.done = make(chan struct{}) - close(logger.done) - _, err := 
logger.Write([]byte("foo")) - assert.ErrorIs(t, err, ErrLogFileClosed) - }) - - t.Run("close without losing logs", func(t *testing.T) { - text := "foo" - filename, err := fs.TempFilenameWithText(text) - assert.Nil(t, err) - if len(filename) > 0 { - defer os.Remove(filename) - } - logger, err := NewLogger(filename, new(DailyRotateRule), false) - assert.Nil(t, err) - msg := []byte("foo") - n := 100 - for i := 0; i < n; i++ { - _, err = logger.Write(msg) - assert.Nil(t, err) - } - assert.Nil(t, logger.Close()) - bs, err := os.ReadFile(filename) - assert.Nil(t, err) - assert.Equal(t, len(msg)*n+len(text), len(bs)) - }) -} - -func TestRotateLoggerGetBackupFilename(t *testing.T) { - filename, err := fs.TempFilenameWithText("foo") - assert.Nil(t, err) - if len(filename) > 0 { - defer os.Remove(filename) - } - logger, err := NewLogger(filename, new(DailyRotateRule), false) - assert.Nil(t, err) - assert.True(t, len(logger.getBackupFilename()) > 0) - logger.backup = "" - assert.True(t, len(logger.getBackupFilename()) > 0) -} - -func TestRotateLoggerMayCompressFile(t *testing.T) { - old := os.Stdout - os.Stdout = os.NewFile(0, os.DevNull) - defer func() { - os.Stdout = old - }() - - filename, err := fs.TempFilenameWithText("foo") - assert.Nil(t, err) - if len(filename) > 0 { - defer os.Remove(filename) - } - logger, err := NewLogger(filename, new(DailyRotateRule), false) - assert.Nil(t, err) - logger.maybeCompressFile(filename) - _, err = os.Stat(filename) - assert.Nil(t, err) -} - -func TestRotateLoggerMayCompressFileTrue(t *testing.T) { - old := os.Stdout - os.Stdout = os.NewFile(0, os.DevNull) - defer func() { - os.Stdout = old - }() - - filename, err := fs.TempFilenameWithText("foo") - assert.Nil(t, err) - logger, err := NewLogger(filename, new(DailyRotateRule), true) - assert.Nil(t, err) - if len(filename) > 0 { - defer os.Remove(filepath.Base(logger.getBackupFilename()) + ".gz") - } - logger.maybeCompressFile(filename) - _, err = os.Stat(filename) - assert.NotNil(t, 
err) -} - -func TestRotateLoggerRotate(t *testing.T) { - filename, err := fs.TempFilenameWithText("foo") - assert.Nil(t, err) - logger, err := NewLogger(filename, new(DailyRotateRule), true) - assert.Nil(t, err) - if len(filename) > 0 { - defer func() { - os.Remove(logger.getBackupFilename()) - os.Remove(filepath.Base(logger.getBackupFilename()) + ".gz") - }() - } - err = logger.rotate() - switch v := err.(type) { - case *os.LinkError: - // avoid rename error on docker container - assert.Equal(t, syscall.EXDEV, v.Err) - case *os.PathError: - // ignore remove error for tests, - // files are cleaned in GitHub actions. - assert.Equal(t, "remove", v.Op) - default: - assert.Nil(t, err) - } -} - -func TestRotateLoggerWrite(t *testing.T) { - filename, err := fs.TempFilenameWithText("foo") - assert.Nil(t, err) - rule := new(DailyRotateRule) - logger, err := NewLogger(filename, rule, true) - assert.Nil(t, err) - if len(filename) > 0 { - defer func() { - os.Remove(logger.getBackupFilename()) - os.Remove(filepath.Base(logger.getBackupFilename()) + ".gz") - }() - } - // the following write calls cannot be changed to Write, because of DATA RACE. 
- logger.write([]byte(`foo`)) - rule.rotatedTime = time.Now().Add(-time.Hour * 24).Format(dateFormat) - logger.write([]byte(`bar`)) - logger.Close() - logger.write([]byte(`baz`)) -} - -func TestLogWriterClose(t *testing.T) { - assert.Nil(t, newLogWriter(nil).Close()) -} - -func TestRotateLoggerWithSizeLimitRotateRuleClose(t *testing.T) { - filename, err := fs.TempFilenameWithText("foo") - assert.Nil(t, err) - if len(filename) > 0 { - defer os.Remove(filename) - } - logger, err := NewLogger(filename, new(SizeLimitRotateRule), false) - assert.Nil(t, err) - _ = logger.Close() -} - -func TestRotateLoggerGetBackupWithSizeLimitRotateRuleFilename(t *testing.T) { - filename, err := fs.TempFilenameWithText("foo") - assert.Nil(t, err) - if len(filename) > 0 { - defer os.Remove(filename) - } - logger, err := NewLogger(filename, new(SizeLimitRotateRule), false) - assert.Nil(t, err) - assert.True(t, len(logger.getBackupFilename()) > 0) - logger.backup = "" - assert.True(t, len(logger.getBackupFilename()) > 0) -} - -func TestRotateLoggerWithSizeLimitRotateRuleMayCompressFile(t *testing.T) { - old := os.Stdout - os.Stdout = os.NewFile(0, os.DevNull) - defer func() { - os.Stdout = old - }() - - filename, err := fs.TempFilenameWithText("foo") - assert.Nil(t, err) - if len(filename) > 0 { - defer os.Remove(filename) - } - logger, err := NewLogger(filename, new(SizeLimitRotateRule), false) - assert.Nil(t, err) - logger.maybeCompressFile(filename) - _, err = os.Stat(filename) - assert.Nil(t, err) -} - -func TestRotateLoggerWithSizeLimitRotateRuleMayCompressFileTrue(t *testing.T) { - old := os.Stdout - os.Stdout = os.NewFile(0, os.DevNull) - defer func() { - os.Stdout = old - }() - - filename, err := fs.TempFilenameWithText("foo") - assert.Nil(t, err) - logger, err := NewLogger(filename, new(SizeLimitRotateRule), true) - assert.Nil(t, err) - if len(filename) > 0 { - defer os.Remove(filepath.Base(logger.getBackupFilename()) + ".gz") - } - logger.maybeCompressFile(filename) - _, err = 
os.Stat(filename) - assert.NotNil(t, err) -} - -func TestRotateLoggerWithSizeLimitRotateRuleMayCompressFileFailed(t *testing.T) { - old := os.Stdout - os.Stdout = os.NewFile(0, os.DevNull) - defer func() { - os.Stdout = old - }() - - filename := random.KeyNew(8, 1) - logger, err := NewLogger(filename, new(SizeLimitRotateRule), true) - defer os.Remove(filename) - if assert.NoError(t, err) { - assert.NotPanics(t, func() { - logger.maybeCompressFile(random.KeyNew(8, 1)) - }) - } -} - -func TestRotateLoggerWithSizeLimitRotateRuleRotate(t *testing.T) { - filename, err := fs.TempFilenameWithText("foo") - assert.Nil(t, err) - logger, err := NewLogger(filename, new(SizeLimitRotateRule), true) - assert.Nil(t, err) - if len(filename) > 0 { - defer func() { - os.Remove(logger.getBackupFilename()) - os.Remove(filepath.Base(logger.getBackupFilename()) + ".gz") - }() - } - err = logger.rotate() - switch v := err.(type) { - case *os.LinkError: - // avoid rename error on docker container - assert.Equal(t, syscall.EXDEV, v.Err) - case *os.PathError: - // ignore remove error for tests, - // files are cleaned in GitHub actions. - assert.Equal(t, "remove", v.Op) - default: - assert.Nil(t, err) - } -} - -func TestRotateLoggerWithSizeLimitRotateRuleWrite(t *testing.T) { - filename, err := fs.TempFilenameWithText("foo") - assert.Nil(t, err) - rule := new(SizeLimitRotateRule) - logger, err := NewLogger(filename, rule, true) - assert.Nil(t, err) - if len(filename) > 0 { - defer func() { - os.Remove(logger.getBackupFilename()) - os.Remove(filepath.Base(logger.getBackupFilename()) + ".gz") - }() - } - // the following write calls cannot be changed to Write, because of DATA RACE. 
- logger.write([]byte(`foo`)) - rule.rotatedTime = time.Now().Add(-time.Hour * 24).Format(dateFormat) - logger.write([]byte(`bar`)) - logger.Close() - logger.write([]byte(`baz`)) -} - -func TestGzipFile(t *testing.T) { - err := errors.New("any error") - - t.Run("gzip file open failed", func(t *testing.T) { - fsys := &fakeFileSystem{ - openFn: func(name string) (*os.File, error) { - return nil, err - }, - } - assert.ErrorIs(t, err, gzipFile("any", fsys)) - assert.False(t, fsys.Removed()) - }) - - t.Run("gzip file create failed", func(t *testing.T) { - fsys := &fakeFileSystem{ - createFn: func(name string) (*os.File, error) { - return nil, err - }, - } - assert.ErrorIs(t, err, gzipFile("any", fsys)) - assert.False(t, fsys.Removed()) - }) - - t.Run("gzip file copy failed", func(t *testing.T) { - fsys := &fakeFileSystem{ - copyFn: func(writer io.Writer, reader io.Reader) (int64, error) { - return 0, err - }, - } - assert.ErrorIs(t, err, gzipFile("any", fsys)) - assert.False(t, fsys.Removed()) - }) - - t.Run("gzip file last close failed", func(t *testing.T) { - var called int32 - fsys := &fakeFileSystem{ - closeFn: func(closer io.Closer) error { - if atomic.AddInt32(&called, 1) > 2 { - return err - } - return nil - }, - } - assert.NoError(t, gzipFile("any", fsys)) - assert.True(t, fsys.Removed()) - }) - - t.Run("gzip file remove failed", func(t *testing.T) { - fsys := &fakeFileSystem{ - removeFn: func(name string) error { - return err - }, - } - assert.ErrorIs(t, err, gzipFile("any", fsys)) - assert.True(t, fsys.Removed()) - }) - - t.Run("gzip file everything ok", func(t *testing.T) { - fsys := &fakeFileSystem{} - assert.NoError(t, gzipFile("any", fsys)) - assert.True(t, fsys.Removed()) - }) -} - -func TestRotateLogger_WithExistingFile(t *testing.T) { - const body = "foo" - filename, err := fs.TempFilenameWithText(body) - assert.Nil(t, err) - if len(filename) > 0 { - defer os.Remove(filename) - } - - rule := NewSizeLimitRotateRule(filename, "-", 1, 100, 3, false) - 
logger, err := NewLogger(filename, rule, false) - assert.Nil(t, err) - assert.Equal(t, int64(len(body)), logger.currentSize) - assert.Nil(t, logger.Close()) -} - -func BenchmarkRotateLogger(b *testing.B) { - filename := "./test.log" - filename2 := "./test2.log" - dailyRotateRuleLogger, err1 := NewLogger( - filename, - DefaultRotateRule( - filename, - backupFileDelimiter, - 1, - true, - ), - true, - ) - if err1 != nil { - b.Logf("Failed to create daily rotate rule logger: %v", err1) - b.FailNow() - } - sizeLimitRotateRuleLogger, err2 := NewLogger( - filename2, - NewSizeLimitRotateRule( - filename2, - backupFileDelimiter, - 1, - 100, - 10, - true, - ), - true, - ) - if err2 != nil { - b.Logf("Failed to create size limit rotate rule logger: %v", err2) - b.FailNow() - } - defer func() { - dailyRotateRuleLogger.Close() - sizeLimitRotateRuleLogger.Close() - os.Remove(filename) - os.Remove(filename2) - }() - - b.Run("daily rotate rule", func(b *testing.B) { - for i := 0; i < b.N; i++ { - dailyRotateRuleLogger.write([]byte("testing\ntesting\n")) - } - }) - b.Run("size limit rotate rule", func(b *testing.B) { - for i := 0; i < b.N; i++ { - sizeLimitRotateRuleLogger.write([]byte("testing\ntesting\n")) - } - }) -} - -type fakeFileSystem struct { - removed int32 - closeFn func(closer io.Closer) error - copyFn func(writer io.Writer, reader io.Reader) (int64, error) - createFn func(name string) (*os.File, error) - openFn func(name string) (*os.File, error) - removeFn func(name string) error -} - -func (f *fakeFileSystem) Close(closer io.Closer) error { - if f.closeFn != nil { - return f.closeFn(closer) - } - return nil -} - -func (f *fakeFileSystem) Copy(writer io.Writer, reader io.Reader) (int64, error) { - if f.copyFn != nil { - return f.copyFn(writer, reader) - } - return 0, nil -} - -func (f *fakeFileSystem) Create(name string) (*os.File, error) { - if f.createFn != nil { - return f.createFn(name) - } - return nil, nil -} - -func (f *fakeFileSystem) Open(name string) (*os.File, 
error) { - if f.openFn != nil { - return f.openFn(name) - } - return nil, nil -} - -func (f *fakeFileSystem) Remove(name string) error { - atomic.AddInt32(&f.removed, 1) - - if f.removeFn != nil { - return f.removeFn(name) - } - return nil -} - -func (f *fakeFileSystem) Removed() bool { - return atomic.LoadInt32(&f.removed) > 0 -} diff --git a/pkg/logger/syslog_test.go b/pkg/logger/syslog_test.go deleted file mode 100644 index 8e98aa8..0000000 --- a/pkg/logger/syslog_test.go +++ /dev/null @@ -1,61 +0,0 @@ -package logger - -import ( - "encoding/json" - "log" - "strings" - "sync/atomic" - "testing" - - "github.com/stretchr/testify/assert" -) - -const testlog = "Stay hungry, stay foolish." - -var testobj = map[string]any{"foo": "bar"} - -func TestCollectSysLog(t *testing.T) { - CollectSysLog() - content := getContent(captureOutput(func() { - log.Print(testlog) - })) - assert.True(t, strings.Contains(content, testlog)) -} - -func TestRedirector(t *testing.T) { - var r redirector - content := getContent(captureOutput(func() { - _, _ = r.Write([]byte(testlog)) - })) - assert.Equal(t, testlog, content) -} - -func captureOutput(f func()) string { - w := new(mockWriter) - old := writer.Swap(w) - defer writer.Store(old) - - prevLevel := atomic.LoadUint32(&logLevel) - SetLevel(InfoLevel) - f() - SetLevel(prevLevel) - - return w.String() -} - -func getContent(jsonStr string) string { - var entry map[string]any - _ = json.Unmarshal([]byte(jsonStr), &entry) - - val, ok := entry[contentKey] - if !ok { - return "" - } - - str, ok := val.(string) - if !ok { - return "" - } - - return str -} diff --git a/pkg/logger/util_test.go b/pkg/logger/util_test.go deleted file mode 100644 index a516c49..0000000 --- a/pkg/logger/util_test.go +++ /dev/null @@ -1,72 +0,0 @@ -package logger - -import ( - "path/filepath" - "runtime" - "testing" - "time" - - "github.com/stretchr/testify/assert" -) - -func TestGetCaller(t *testing.T) { - _, file, _, _ := runtime.Caller(0) - assert.Contains(t, 
getCaller(1), filepath.Base(file)) - assert.True(t, len(getCaller(1<<10)) == 0) -} - -func TestGetTimestamp(t *testing.T) { - ts := getTimestamp() - tm, err := time.Parse(timeFormat, ts) - assert.Nil(t, err) - assert.True(t, time.Since(tm) < time.Minute) -} - -func TestPrettyCaller(t *testing.T) { - tests := []struct { - name string - file string - line int - want string - }{ - { - name: "regular", - file: "logx_test.go", - line: 123, - want: "logx_test.go:123", - }, - { - name: "relative", - file: "adhoc/logx_test.go", - line: 123, - want: "adhoc/logx_test.go:123", - }, - { - name: "long path", - file: "github.com/zeromicro/go-zero/core/logx/util_test.go", - line: 12, - want: "logx/util_test.go:12", - }, - { - name: "local path", - file: "/Users/kevin/go-zero/core/logx/util_test.go", - line: 1234, - want: "logx/util_test.go:1234", - }, - } - - for _, test := range tests { - test := test - t.Run(test.name, func(t *testing.T) { - assert.Equal(t, test.want, prettyCaller(test.file, test.line)) - }) - } -} - -func BenchmarkGetCaller(b *testing.B) { - b.ReportAllocs() - - for i := 0; i < b.N; i++ { - getCaller(1) - } -} diff --git a/pkg/logger/writer_test.go b/pkg/logger/writer_test.go deleted file mode 100644 index 8138230..0000000 --- a/pkg/logger/writer_test.go +++ /dev/null @@ -1,440 +0,0 @@ -package logger - -import ( - "bytes" - "encoding/json" - "errors" - "log" - "sync/atomic" - "testing" - - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/mock" -) - -func TestNewWriter(t *testing.T) { - const literal = "foo bar" - var buf bytes.Buffer - w := NewWriter(&buf) - w.Info(literal) - assert.Contains(t, buf.String(), literal) - buf.Reset() - w.Debug(literal) - assert.Contains(t, buf.String(), literal) -} - -func TestConsoleWriter(t *testing.T) { - var buf bytes.Buffer - w := newConsoleWriter() - lw := newLogWriter(log.New(&buf, "", 0)) - w.(*concreteWriter).errorLog = lw - w.Alert("foo bar 1") - var val mockedEntry - if err := 
json.Unmarshal(buf.Bytes(), &val); err != nil { - t.Fatal(err) - } - assert.Equal(t, levelAlert, val.Level) - assert.Equal(t, "foo bar 1", val.Content) - - buf.Reset() - w.(*concreteWriter).errorLog = lw - w.Error("foo bar 2") - if err := json.Unmarshal(buf.Bytes(), &val); err != nil { - t.Fatal(err) - } - assert.Equal(t, levelError, val.Level) - assert.Equal(t, "foo bar 2", val.Content) - - buf.Reset() - w.(*concreteWriter).infoLog = lw - w.Info("foo bar 3") - if err := json.Unmarshal(buf.Bytes(), &val); err != nil { - t.Fatal(err) - } - assert.Equal(t, levelInfo, val.Level) - assert.Equal(t, "foo bar 3", val.Content) - - buf.Reset() - w.(*concreteWriter).severeLog = lw - w.Severe("foo bar 4") - if err := json.Unmarshal(buf.Bytes(), &val); err != nil { - t.Fatal(err) - } - assert.Equal(t, levelFatal, val.Level) - assert.Equal(t, "foo bar 4", val.Content) - - buf.Reset() - w.(*concreteWriter).slowLog = lw - w.Slow("foo bar 5") - if err := json.Unmarshal(buf.Bytes(), &val); err != nil { - t.Fatal(err) - } - assert.Equal(t, levelSlow, val.Level) - assert.Equal(t, "foo bar 5", val.Content) - - buf.Reset() - w.(*concreteWriter).statLog = lw - w.Stat("foo bar 6") - if err := json.Unmarshal(buf.Bytes(), &val); err != nil { - t.Fatal(err) - } - assert.Equal(t, levelStat, val.Level) - assert.Equal(t, "foo bar 6", val.Content) - - w.(*concreteWriter).infoLog = hardToCloseWriter{} - assert.NotNil(t, w.Close()) - w.(*concreteWriter).infoLog = easyToCloseWriter{} - w.(*concreteWriter).errorLog = hardToCloseWriter{} - assert.NotNil(t, w.Close()) - w.(*concreteWriter).errorLog = easyToCloseWriter{} - w.(*concreteWriter).severeLog = hardToCloseWriter{} - assert.NotNil(t, w.Close()) - w.(*concreteWriter).severeLog = easyToCloseWriter{} - w.(*concreteWriter).slowLog = hardToCloseWriter{} - assert.NotNil(t, w.Close()) - w.(*concreteWriter).slowLog = easyToCloseWriter{} - w.(*concreteWriter).statLog = hardToCloseWriter{} - assert.NotNil(t, w.Close()) - w.(*concreteWriter).statLog = 
easyToCloseWriter{} -} - -func TestNewFileWriter(t *testing.T) { - t.Run("access", func(t *testing.T) { - _, err := newFileWriter(LogConf{ - Path: "/not-exists", - }) - assert.Error(t, err) - }) -} - -func TestNopWriter(t *testing.T) { - assert.NotPanics(t, func() { - var w nopWriter - w.Alert("foo") - w.Debug("foo") - w.Error("foo") - w.Info("foo") - w.Severe("foo") - w.Stack("foo") - w.Stat("foo") - w.Slow("foo") - _ = w.Close() - }) -} - -func TestWriteJson(t *testing.T) { - var buf bytes.Buffer - log.SetOutput(&buf) - writeJson(nil, "foo") - assert.Contains(t, buf.String(), "foo") - - buf.Reset() - writeJson(hardToWriteWriter{}, "foo") - assert.Contains(t, buf.String(), "write error") - - buf.Reset() - writeJson(nil, make(chan int)) - assert.Contains(t, buf.String(), "unsupported type") - - buf.Reset() - type C struct { - RC func() - } - writeJson(nil, C{ - RC: func() {}, - }) - assert.Contains(t, buf.String(), "runtime/debug.Stack") -} - -func TestWritePlainAny(t *testing.T) { - var buf bytes.Buffer - log.SetOutput(&buf) - writePlainAny(nil, levelInfo, "foo") - assert.Contains(t, buf.String(), "foo") - - buf.Reset() - writePlainAny(nil, levelDebug, make(chan int)) - assert.Contains(t, buf.String(), "unsupported type") - writePlainAny(nil, levelDebug, 100) - assert.Contains(t, buf.String(), "100") - - buf.Reset() - writePlainAny(nil, levelError, make(chan int)) - assert.Contains(t, buf.String(), "unsupported type") - writePlainAny(nil, levelSlow, 100) - assert.Contains(t, buf.String(), "100") - - buf.Reset() - writePlainAny(hardToWriteWriter{}, levelStat, 100) - assert.Contains(t, buf.String(), "write error") - - buf.Reset() - writePlainAny(hardToWriteWriter{}, levelSevere, "foo") - assert.Contains(t, buf.String(), "write error") - - buf.Reset() - writePlainAny(hardToWriteWriter{}, levelAlert, "foo") - assert.Contains(t, buf.String(), "write error") - - buf.Reset() - writePlainAny(hardToWriteWriter{}, levelFatal, "foo") - assert.Contains(t, buf.String(), "write 
error") - - buf.Reset() - type C struct { - RC func() - } - writePlainAny(nil, levelError, C{ - RC: func() {}, - }) - assert.Contains(t, buf.String(), "runtime/debug.Stack") -} - -func TestWritePlainDuplicate(t *testing.T) { - old := atomic.SwapUint32(&encoding, plainEncodingType) - t.Cleanup(func() { - atomic.StoreUint32(&encoding, old) - }) - - var buf bytes.Buffer - output(&buf, levelInfo, "foo", LogField{ - Key: "first", - Value: "a", - }, LogField{ - Key: "first", - Value: "b", - }) - assert.Contains(t, buf.String(), "foo") - assert.NotContains(t, buf.String(), "first=a") - assert.Contains(t, buf.String(), "first=b") - - buf.Reset() - output(&buf, levelInfo, "foo", LogField{ - Key: "first", - Value: "a", - }, LogField{ - Key: "first", - Value: "b", - }, LogField{ - Key: "second", - Value: "c", - }) - assert.Contains(t, buf.String(), "foo") - assert.NotContains(t, buf.String(), "first=a") - assert.Contains(t, buf.String(), "first=b") - assert.Contains(t, buf.String(), "second=c") -} - -func TestLogWithLimitContentLength(t *testing.T) { - maxLen := atomic.LoadUint32(&maxContentLength) - atomic.StoreUint32(&maxContentLength, 10) - - t.Cleanup(func() { - atomic.StoreUint32(&maxContentLength, maxLen) - }) - - t.Run("info", func(t *testing.T) { - var buf bytes.Buffer - w := NewWriter(&buf) - w.Info("1234567890") - var v1 mockedEntry - if err := json.Unmarshal(buf.Bytes(), &v1); err != nil { - t.Fatal(err) - } - assert.Equal(t, "1234567890", v1.Content) - assert.False(t, v1.Truncated) - - buf.Reset() - var v2 mockedEntry - w.Info("12345678901") - if err := json.Unmarshal(buf.Bytes(), &v2); err != nil { - t.Fatal(err) - } - assert.Equal(t, "1234567890", v2.Content) - assert.True(t, v2.Truncated) - }) -} - -func TestComboWriter(t *testing.T) { - var mockWriters []Writer - for i := 0; i < 3; i++ { - mockWriters = append(mockWriters, new(tracedWriter)) - } - - cw := comboWriter{ - writers: mockWriters, - } - - t.Run("Alert", func(t *testing.T) { - for _, mw := range 
cw.writers { - mw.(*tracedWriter).On("Alert", "test alert").Once() - } - cw.Alert("test alert") - for _, mw := range cw.writers { - mw.(*tracedWriter).AssertCalled(t, "Alert", "test alert") - } - }) - - t.Run("Close", func(t *testing.T) { - for i := range cw.writers { - if i == 1 { - cw.writers[i].(*tracedWriter).On("Close").Return(errors.New("error")).Once() - } else { - cw.writers[i].(*tracedWriter).On("Close").Return(nil).Once() - } - } - err := cw.Close() - assert.Error(t, err) - for _, mw := range cw.writers { - mw.(*tracedWriter).AssertCalled(t, "Close") - } - }) - - t.Run("Debug", func(t *testing.T) { - fields := []LogField{{Key: "key", Value: "value"}} - for _, mw := range cw.writers { - mw.(*tracedWriter).On("Debug", "test debug", fields).Once() - } - cw.Debug("test debug", fields...) - for _, mw := range cw.writers { - mw.(*tracedWriter).AssertCalled(t, "Debug", "test debug", fields) - } - }) - - t.Run("Error", func(t *testing.T) { - fields := []LogField{{Key: "key", Value: "value"}} - for _, mw := range cw.writers { - mw.(*tracedWriter).On("Error", "test error", fields).Once() - } - cw.Error("test error", fields...) - for _, mw := range cw.writers { - mw.(*tracedWriter).AssertCalled(t, "Error", "test error", fields) - } - }) - - t.Run("Info", func(t *testing.T) { - fields := []LogField{{Key: "key", Value: "value"}} - for _, mw := range cw.writers { - mw.(*tracedWriter).On("Info", "test info", fields).Once() - } - cw.Info("test info", fields...) 
- for _, mw := range cw.writers { - mw.(*tracedWriter).AssertCalled(t, "Info", "test info", fields) - } - }) - - t.Run("Severe", func(t *testing.T) { - for _, mw := range cw.writers { - mw.(*tracedWriter).On("Severe", "test severe").Once() - } - cw.Severe("test severe") - for _, mw := range cw.writers { - mw.(*tracedWriter).AssertCalled(t, "Severe", "test severe") - } - }) - - t.Run("Slow", func(t *testing.T) { - fields := []LogField{{Key: "key", Value: "value"}} - for _, mw := range cw.writers { - mw.(*tracedWriter).On("Slow", "test slow", fields).Once() - } - cw.Slow("test slow", fields...) - for _, mw := range cw.writers { - mw.(*tracedWriter).AssertCalled(t, "Slow", "test slow", fields) - } - }) - - t.Run("Stack", func(t *testing.T) { - for _, mw := range cw.writers { - mw.(*tracedWriter).On("Stack", "test stack").Once() - } - cw.Stack("test stack") - for _, mw := range cw.writers { - mw.(*tracedWriter).AssertCalled(t, "Stack", "test stack") - } - }) - - t.Run("Stat", func(t *testing.T) { - fields := []LogField{{Key: "key", Value: "value"}} - for _, mw := range cw.writers { - mw.(*tracedWriter).On("Stat", "test stat", fields).Once() - } - cw.Stat("test stat", fields...) 
- for _, mw := range cw.writers { - mw.(*tracedWriter).AssertCalled(t, "Stat", "test stat", fields) - } - }) -} - -type mockedEntry struct { - Level string `json:"level"` - Content string `json:"content"` - Truncated bool `json:"truncated"` -} - -type easyToCloseWriter struct{} - -func (h easyToCloseWriter) Write(_ []byte) (_ int, _ error) { - return -} - -func (h easyToCloseWriter) Close() error { - return nil -} - -type hardToCloseWriter struct{} - -func (h hardToCloseWriter) Write(_ []byte) (_ int, _ error) { - return -} - -func (h hardToCloseWriter) Close() error { - return errors.New("close error") -} - -type hardToWriteWriter struct{} - -func (h hardToWriteWriter) Write(_ []byte) (_ int, _ error) { - return 0, errors.New("write error") -} - -type tracedWriter struct { - mock.Mock -} - -func (w *tracedWriter) Alert(v any) { - w.Called(v) -} - -func (w *tracedWriter) Close() error { - args := w.Called() - return args.Error(0) -} - -func (w *tracedWriter) Debug(v any, fields ...LogField) { - w.Called(v, fields) -} - -func (w *tracedWriter) Error(v any, fields ...LogField) { - w.Called(v, fields) -} - -func (w *tracedWriter) Info(v any, fields ...LogField) { - w.Called(v, fields) -} - -func (w *tracedWriter) Severe(v any) { - w.Called(v) -} - -func (w *tracedWriter) Slow(v any, fields ...LogField) { - w.Called(v, fields) -} - -func (w *tracedWriter) Stack(v any) { - w.Called(v) -} - -func (w *tracedWriter) Stat(v any, fields ...LogField) { - w.Called(v, fields) -} diff --git a/pkg/nodeMultiplier/manage_test.go b/pkg/nodeMultiplier/manage_test.go deleted file mode 100644 index 881db55..0000000 --- a/pkg/nodeMultiplier/manage_test.go +++ /dev/null @@ -1,27 +0,0 @@ -package nodeMultiplier - -import ( - "testing" - "time" -) - -func TestNewNodeMultiplierManager(t *testing.T) { - periods := []TimePeriod{ - { - StartTime: "23:00.000", - EndTime: "1:59.000", - Multiplier: 1.2, - }, - { - StartTime: "12:00.000", - EndTime: "13:59.000", - Multiplier: 0.5, - }, - } - m := 
NewNodeMultiplierManager(periods) - if len(m.Periods) != 1 { - t.Errorf("expected 1, got %d", len(m.Periods)) - } - - t.Log("00:10 multiplier:", m.GetMultiplier(time.Date(0, 1, 1, 0, 10, 0, 0, time.UTC))) -} diff --git a/pkg/oauth/apple/apple_test.go b/pkg/oauth/apple/apple_test.go deleted file mode 100644 index eec19e5..0000000 --- a/pkg/oauth/apple/apple_test.go +++ /dev/null @@ -1,76 +0,0 @@ -package apple - -import ( - "context" - "fmt" - "log" - "net/http" - "testing" - - "github.com/gin-gonic/gin" -) - -func TestAppleLogin(t *testing.T) { - t.Skipf("Skip TestAppleLogin test") - router := gin.Default() - router.LoadHTMLGlob("./*") - router.GET("/apple", func(c *gin.Context) { - c.HTML(http.StatusOK, "apple.html", gin.H{ - "title": "Gin HTML Example", - "message": "Hello, Gin!", - }) - }) - router.POST("/auth/apple/callback", func(c *gin.Context) { - var req CallbackRequest - if err := c.ShouldBind(&req); err != nil { - c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid request data"}) - return - } - handleAppleCallBack(c, req) - }) - _ = router.RunTLS(":8443", "certificate.crt", "private.key") -} - -func handleAppleCallBack(ctx context.Context, request CallbackRequest) { - fmt.Printf("request: %+v\n", request) - // validate the token - client, err := New(Config{ - TeamID: TeamID, - ClientID: ClientID, - KeyID: KeyID, - ClientSecret: ClientSecret, - RedirectURI: "https://test.ppanel.dev:8443/auth/apple/callback", - }) - if err != nil { - fmt.Println("error creating apple client: " + err.Error()) - return - } - resp, err := client.VerifyWebToken(ctx, request.Code) - if err != nil { - fmt.Println("error verifying token: " + err.Error()) - return - } - if resp.Error != "" { - fmt.Printf("apple returned an error: %s - %s\n", resp.Error, resp.ErrorDescription) - return - } - - // Get the unique user ID - unique, err := GetUniqueID(resp.IDToken) - if err != nil { - fmt.Println("error getting unique id: " + err.Error()) - return - } - // Get the email - claim, err 
:= GetClaims(resp.IDToken) - if err != nil { - fmt.Println("failed to get claims: " + err.Error()) - return - } - email := (*claim)["email"] - emailVerified := (*claim)["email_verified"] - isPrivateEmail := (*claim)["is_private_email"] - - // Voila! - log.Printf("\n unique: %s \n email: %s \n email_verified: %v \n is_private_email: %v", unique, email, emailVerified, isPrivateEmail) -} diff --git a/pkg/oauth/google/google_test.go b/pkg/oauth/google/google_test.go deleted file mode 100644 index 1d967d3..0000000 --- a/pkg/oauth/google/google_test.go +++ /dev/null @@ -1,78 +0,0 @@ -package google - -import ( - "context" - "fmt" - "log" - "net/http" - "testing" - - "golang.org/x/oauth2" -) - -func TestGoogleOAuth(t *testing.T) { - t.Skipf("Skip TestGoogleOAuth test") - http.HandleFunc("/", handleMain) - http.HandleFunc("/login", handleLogin) - http.HandleFunc("/auth", handleCallback) - http.HandleFunc("/user", handleAuth) - - fmt.Println("Server is running on http://localhost:3001") - log.Fatal(http.ListenAndServe(":3001", nil)) -} - -func handleMain(w http.ResponseWriter, r *http.Request) { - html := `<html><body><a href="/login">Log in with Google</a></body></html>` - fmt.Fprint(w, html) -} - -func handleLogin(w http.ResponseWriter, r *http.Request) { - oauthConfig := New(&Config{ - ClientID: "", - ClientSecret: "", - RedirectURL: "http://localhost:3001/auth", - }) - url := oauthConfig.AuthCodeURL("randomstate", oauth2.AccessTypeOffline) - http.Redirect(w, r, url, http.StatusTemporaryRedirect) -} - -func handleCallback(w http.ResponseWriter, r *http.Request) { - if r.FormValue("state") != "randomstate" { - http.Error(w, "State is invalid", http.StatusBadRequest) - return - } - - log.Printf("url: %v", r.URL) - - oauthConfig := New(&Config{ - ClientID: "", - ClientSecret: "Key", - RedirectURL: "http://localhost:3001/auth", - }) - code := r.FormValue("code") - token, err := oauthConfig.Exchange(context.Background(), code) - if err != nil { - http.Error(w, "Failed to exchange token", 
http.StatusInternalServerError) - return - } - http.Redirect(w, r, "/user?token="+token.AccessToken, http.StatusTemporaryRedirect) -} - -func handleAuth(w http.ResponseWriter, r *http.Request) { - token := r.FormValue("token") - client := New(&Config{ - ClientID: "Id", - ClientSecret: "Key", - RedirectURL: "http://localhost:3001/auth", - }) - userInfo, err := client.GetUserInfo(token) - if err != nil { - http.Error(w, "Failed to get user info", http.StatusInternalServerError) - return - } - fmt.Fprintf(w, "Hello, %s", userInfo.Name) -} diff --git a/pkg/oauth/telegram/telegram_test.go b/pkg/oauth/telegram/telegram_test.go deleted file mode 100644 index 2801e70..0000000 --- a/pkg/oauth/telegram/telegram_test.go +++ /dev/null @@ -1,36 +0,0 @@ -package telegram - -import ( - "net/http" - "testing" - - "github.com/gin-gonic/gin" -) - -func TestOAuth(t *testing.T) { - t.Skipf("Skip TestOAuth test") - router := gin.Default() - router.LoadHTMLGlob("./*") - router.GET("/telegram", func(c *gin.Context) { - c.HTML(http.StatusOK, "telegram.html", gin.H{ - "title": "Gin HTML Example", - "message": "Hello, Gin!", - }) - }) - router.GET("/auth/telegram/callback", func(c *gin.Context) { - - }) - _ = router.RunTLS(":443", "server.crt", "server.key") -} - -func TestBase64(t *testing.T) { - text := "eyJpZCI6ODI0NjI2ODAzLCJmaXJzdF9uYW1lIjoiQ2hhbmcgbHVlIiwibGFzdF9uYW1lIjoiVHNlbiIsInVzZXJuYW1lIjoidGVuc2lvbl9jIiwicGhvdG9fdXJsIjoiaHR0cHM6XC9cL3QubWVcL2lcL3VzZXJwaWNcLzMyMFwvYU1LNkhEc0pqc2V1YldRYmt2NGlYOHZCRUF6N0hWU3g3dkFuRDBLZ0tFVS5qcGciLCJhdXRoX2RhdGUiOjE3Mzc4MTkwNzQsImhhc2giOiI5M2I1ZDg3Zjc3NjE2YjBjMTM0OTAxYmYwMDg3MTc4YjJiYmZlYzA1MTlkMWVmMDJhZjFjMGNlOTAzM2ZiNGFlIn0" - var token = "7651491571:AAEVQma6niHhtqEYDowAEpPo6Fq69BWvRU8" - - data, err := ParseAndValidateBase64([]byte(text), token) - if err != nil { - t.Error(err) - } - t.Log(*data.Id) - -} diff --git a/pkg/openinstall/channel_test.go b/pkg/openinstall/channel_test.go deleted file mode 100644 index 76302da..0000000 --- 
a/pkg/openinstall/channel_test.go +++ /dev/null @@ -1,68 +0,0 @@ -package openinstall - -import ( - "context" - "net/http" - "net/http/httptest" - "testing" - - "github.com/stretchr/testify/assert" -) - -// TestChannelParameter verifies that the OpenInstall client passes the channel parameter correctly -func TestChannelParameter(t *testing.T) { - // 1. Start the mock server - mockServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - // Verify the request path - if r.URL.Path == "/data/sum/growth" { - // Verify the query parameters - query := r.URL.Query() - channel := query.Get("channel") - - // Core check: the channel parameter must equal the inviteCode passed in - if channel == "TEST_INVITE_CODE_123" { - w.WriteHeader(http.StatusOK) - // Return mock data - w.Write([]byte(`{ - "code": 0, - "body": [ - {"key": "ios", "value": 100}, - {"key": "android", "value": 200} - ] - }`)) - return - } - - // If the channel does not match, return an error - w.WriteHeader(http.StatusBadRequest) - w.Write([]byte(`{"code": 400, "error": "channel mismatch"}`)) - return - } - - w.WriteHeader(http.StatusNotFound) - })) - defer mockServer.Close() - - // 2. Temporarily point apiBaseURL at the mock server - originalBaseURL := apiBaseURL - apiBaseURL = mockServer.URL - defer func() { apiBaseURL = originalBaseURL }() - - // 3. Initialize the client - client := NewClient("test-api-key") - - // 4. Call the API (with the test invite code) - ctx := context.Background() - stats, err := client.GetPlatformDownloads(ctx, "TEST_INVITE_CODE_123") - - // 5. Verify the results - assert.NoError(t, err) - assert.NotNil(t, stats) - - // Verify the data is parsed correctly (iOS=100, Android=200, Total=300) - assert.Equal(t, int64(100), stats.IOS, "iOS count should match mock data") - assert.Equal(t, int64(200), stats.Android, "Android count should match mock data") - assert.Equal(t, int64(300), stats.Total, "Total count should match sum of mock data") - - t.Logf("Success! 
Channel parameter 'TEST_INVITE_CODE_123' was correctly sent to server.") -} diff --git a/pkg/openinstall/client_test.go deleted file mode 100644 index dbf4d23..0000000 --- a/pkg/openinstall/client_test.go +++ /dev/null @@ -1,57 +0,0 @@ -package openinstall - -import ( - "net/http" - "net/http/httptest" - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestClient_GetPlatformDownloads_WithChannel(t *testing.T) { - // Mock Server - server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - // Verify URL parameters - assert.Equal(t, "GET", r.Method) - assert.Equal(t, "/data/sum/growth", r.URL.Path) - assert.Equal(t, "test-api-key", r.URL.Query().Get("apiKey")) - assert.Equal(t, "test-channel", r.URL.Query().Get("channel")) // Verify channel is passed - assert.Equal(t, "total", r.URL.Query().Get("sumBy")) - assert.Equal(t, "0", r.URL.Query().Get("excludeDuplication")) - - // Return mock response - w.WriteHeader(http.StatusOK) - w.Write([]byte(`{ - "code": 0, - "body": [ - {"key": "ios", "value": 10}, - {"key": "android", "value": 20} - ] - }`)) - })) - defer server.Close() - - // NOTE: apiBaseURL is a package-level constant, so this test cannot redirect - // requests to the mock server without unsafe tricks or a code change. To make - // the HTTP layer fully unit-testable, refactor apiBaseURL into a field on - // `Client` (or a functional option) so tests can inject server.URL; the mock - // handler above then verifies the exact query parameters the client constructs. - - t.Log("Structural test example: refactor apiBaseURL into an injectable Client field to fully unit test HTTP requests.") -} diff --git a/pkg/orm/tool_test.go deleted file mode 100644 index d415bbd..0000000 --- a/pkg/orm/tool_test.go +++ /dev/null @@ -1,40 +0,0 @@ -package orm - -import ( - "testing" - - "github.com/perfect-panel/server/internal/model/task" - - "gorm.io/driver/mysql" - "gorm.io/gorm" -) - -func TestParseDSN(t *testing.T) { - dsn := "root:mylove520@tcp(localhost:3306)/vpnboard" - config := ParseDSN(dsn) - if config == nil { - t.Fatal("config is nil") - } - t.Log(config) -} - -func TestPing(t *testing.T) { - dsn := "root:mylove520@tcp(localhost:3306)/vpnboard" - status := Ping(dsn) - t.Log(status) -} - -func TestMysql(t *testing.T) { - db, err := gorm.Open(mysql.New(mysql.Config{ - DSN: "root:mylove520@tcp(localhost:3306)/vpnboard", - })) - if err != nil { - t.Fatalf("Failed to connect to MySQL: %v", err) - } - err = db.Migrator().AutoMigrate(&task.Task{}) - if err != nil { - t.Fatalf("Failed to auto migrate: %v", err) - return - } - t.Log("MySQL connection and migration successful") -} diff --git a/pkg/payment/alipay/alipay_test.go deleted file mode 100644 index 9635e52..0000000 --- a/pkg/payment/alipay/alipay_test.go +++ /dev/null @@ -1,25 
+0,0 @@ -package alipay - -import ( - "context" - "testing" -) - -func TestClientPreCreateTrade(t *testing.T) { - t.Skipf("Skip TestClientPreCreateTrade") - cfg := Config{ - InvoiceName: "XrayR", - NotifyURL: "https://example.com/alipay/notify", - Sandbox: true, - } - c := NewClient(cfg) - order := Order{ - OrderNo: "20210701000001", - Amount: 100, - } - qr, err := c.PreCreateTrade(context.Background(), order) - if err != nil { - t.Fatal(err) - } - t.Log(qr) -} diff --git a/pkg/payment/epay/epay_test.go b/pkg/payment/epay/epay_test.go deleted file mode 100644 index 87265e6..0000000 --- a/pkg/payment/epay/epay_test.go +++ /dev/null @@ -1,49 +0,0 @@ -package epay - -import "testing" - -func TestEpay(t *testing.T) { - client := NewClient("", "http://127.0.0.1", "", "") - order := Order{ - Name: "测试", - OrderNo: "123456789", - Amount: 1000, - SignType: "md5", - NotifyUrl: "http://127.0.0.1", - ReturnUrl: "http://127.0.0.1", - } - url := client.CreatePayUrl(order) - t.Logf("PayUrl: %s\n", url) - -} - -func TestQueryOrderStatus(t *testing.T) { - t.Skipf("Skip TestQueryOrderStatus test") - client := NewClient("Pid", "Url", "Key", "Type") - orderNo := "123456789" - status := client.QueryOrderStatus(orderNo) - t.Logf("OrderNo: %s, Status: %v\n", orderNo, status) -} - -func TestVerifySign(t *testing.T) { - t.Skipf("Skip TestVerifySign test") - params := map[string]string{ - "pid": "1654", - "trade_no": "2024121521150860990", - "out_trade_no": "202412152115078262977262254", - "type": "alipay", - "name": "product", - "money": "10", - "trade_status": "TRADE_SUCCESS", - "sign": "d3181f18ebdf9821f0ab6ee93faa82d1", - "sign_type": "MD5", - } - - key := "LbTabbB580zWyhXhyyww7wwvy5u8k0wl" - c := NewClient("Pid", "Url", key, "Type") - if c.VerifySign(params) { - t.Logf("Sign verification success!") - } else { - t.Error("Sign verification failed!") - } -} diff --git a/pkg/payment/http_test.go b/pkg/payment/http_test.go deleted file mode 100644 index d4ad384..0000000 --- 
a/pkg/payment/http_test.go +++ /dev/null @@ -1,21 +0,0 @@ -package payment - -import ( - "net/http" - "testing" - - "github.com/gin-gonic/gin" -) - -func TestHttp(t *testing.T) { - t.Skipf("Skip TestHttp test") - router := gin.Default() - router.LoadHTMLGlob("./*") - router.GET("/stripe", func(c *gin.Context) { - c.HTML(http.StatusOK, "stripe.html", gin.H{ - "title": "Gin HTML Example", - "message": "Hello, Gin!", - }) - }) - _ = router.Run(":8989") -} diff --git a/pkg/payment/platform_test.go b/pkg/payment/platform_test.go deleted file mode 100644 index 95ba93a..0000000 --- a/pkg/payment/platform_test.go +++ /dev/null @@ -1,69 +0,0 @@ -package payment - -import "testing" - -func TestParsePlatform(t *testing.T) { - testCases := []struct { - name string - input string - expected Platform - }{ - {name: "exact AppleIAP", input: "AppleIAP", expected: AppleIAP}, - {name: "snake apple_iap", input: "apple_iap", expected: AppleIAP}, - {name: "kebab apple-iap", input: "apple-iap", expected: AppleIAP}, - {name: "compact appleiap", input: "appleiap", expected: AppleIAP}, - {name: "trimmed value", input: " apple_iap ", expected: AppleIAP}, - {name: "legacy exact CryptoSaaS", input: "CryptoSaaS", expected: CryptoSaaS}, - {name: "snake crypto_saas", input: "crypto_saas", expected: CryptoSaaS}, - {name: "unsupported", input: "unknown_gateway", expected: UNSUPPORTED}, - } - - for _, testCase := range testCases { - t.Run(testCase.name, func(t *testing.T) { - got := ParsePlatform(testCase.input) - if got != testCase.expected { - t.Fatalf("ParsePlatform(%q) = %v, expected %v", testCase.input, got, testCase.expected) - } - }) - } -} - -func TestPlatformStringIsCanonical(t *testing.T) { - testCases := []struct { - name string - input Platform - expected string - }{ - {name: "stripe", input: Stripe, expected: "Stripe"}, - {name: "alipay", input: AlipayF2F, expected: "AlipayF2F"}, - {name: "epay", input: EPay, expected: "EPay"}, - {name: "balance", input: Balance, expected: "balance"}, - 
{name: "crypto", input: CryptoSaaS, expected: "CryptoSaaS"}, - {name: "apple", input: AppleIAP, expected: "AppleIAP"}, - {name: "unsupported", input: UNSUPPORTED, expected: "unsupported"}, - } - - for _, testCase := range testCases { - t.Run(testCase.name, func(t *testing.T) { - got := testCase.input.String() - if got != testCase.expected { - t.Fatalf("Platform.String() = %q, expected %q", got, testCase.expected) - } - }) - } -} - -func TestCanonicalPlatformName(t *testing.T) { - canonical, ok := CanonicalPlatformName("apple_iap") - if !ok { - t.Fatalf("expected apple_iap to be supported") - } - if canonical != "AppleIAP" { - t.Fatalf("canonical name mismatch: got %q", canonical) - } - - _, ok = CanonicalPlatformName("not_exists") - if ok { - t.Fatalf("expected unsupported platform to return ok=false") - } -} diff --git a/pkg/payment/stripe/stripe_test.go b/pkg/payment/stripe/stripe_test.go deleted file mode 100644 index 419b758..0000000 --- a/pkg/payment/stripe/stripe_test.go +++ /dev/null @@ -1,55 +0,0 @@ -package stripe - -import ( - "testing" - - "github.com/stripe/stripe-go/v81" -) - -func TestStripeAlipay(t *testing.T) { - t.Skipf("Skip TestStripeAlipay test") - client := NewClient(Config{ - WebhookSecret: "", - }) - order := Order{ - OrderNo: "JS20210719123456789", - Subscribe: "测试", - Amount: 100, - Currency: string(stripe.CurrencyGBP), - Payment: "alipay", - } - user := User{ - UserId: 1, - Email: "tension@ppanel.dev", - } - result, err := client.CreatePaymentSheet(&order, &user) - if err != nil { - t.Error(err.Error()) - } - t.Logf("TradeNo: %s\n", result.ClientSecret) -} - -func TestStripeWechat(t *testing.T) { - t.Skipf("Skip TestStripeWechat test") - client := NewClient(Config{ - SecretKey: "SecretKey", - PublicKey: "PublicKey", - WebhookSecret: "", - }) - order := Order{ - OrderNo: "JS20210719123456789", - Subscribe: "测试", - Amount: 100, - Currency: string(stripe.CurrencyGBP), - Payment: "wechat_pay", - } - user := User{ - UserId: 1, - Email: 
"tension@ppanel.dev", - } - result, err := client.CreatePaymentSheet(&order, &user) - if err != nil { - t.Error(err.Error()) - } - t.Logf("TradeNo: %s\n", result.ClientSecret) -} diff --git a/pkg/phone/phone_test.go deleted file mode 100644 index 23414e3..0000000 --- a/pkg/phone/phone_test.go +++ /dev/null @@ -1,57 +0,0 @@ -package phone - -import ( - "testing" - - "github.com/nyaruka/phonenumbers" -) - -func TestPhoneNumber(t *testing.T) { - parsedNumber, err := phonenumbers.Parse("+8615502505555", "") - if err != nil { - t.Fatalf("Failed to parse phone number: %v", err) - return - } - // Check whether the phone number is valid - isValid := phonenumbers.IsValidNumber(parsedNumber) - // Get the region code (e.g. CN, US, IN) - region := phonenumbers.GetRegionCodeForNumber(parsedNumber) - t.Log(isValid) - t.Log(region) -} - -func TestCheck(t *testing.T) { - var phone = "15502505555" - if !Check("86", phone) { - t.Fatalf("Check phone number failed: %s", phone) - } - t.Logf("Check phone number success: %s", phone) -} - -func TestGetCountryCode(t *testing.T) { - var phone = "14407941888" - countryCode := GetCountryCode(phone) - t.Logf("Country code: %s", countryCode) -} - -func TestFormatToInternational(t *testing.T) { - var phone = "8615502505555" - international := FormatToInternational(phone) - t.Logf("International format: %s", international) -} - -func TestFormatToE164(t *testing.T) { - var phone = "4407941888" - e164, err := FormatToE164("1", phone) - if err != nil { - t.Fatalf("Failed to format phone number to E164: %v", err) - return - } - t.Logf("E164 format: %s", e164) -} - -func TestMask(t *testing.T) { - var phone = "+14407941888" - mask := MaskPhoneNumber(phone) - t.Logf("Mask format: %s", mask) -} diff --git a/pkg/proc/shutdown_test.go deleted file mode 100644 index efd5104..0000000 --- a/pkg/proc/shutdown_test.go +++ /dev/null @@ -1,62 +0,0 @@ -//go:build linux || darwin - -package proc - -import ( - "fmt" - "testing" - "time" - - 
"github.com/stretchr/testify/assert" -) - -func TestShutdown(t *testing.T) { - SetTimeToForceQuit(time.Hour) - assert.Equal(t, time.Hour, delayTimeBeforeForceQuit) - - var val int - called := AddWrapUpListener(func() { - val++ - }) - WrapUp() - called() - assert.Equal(t, 1, val) - - called = AddShutdownListener(func() { - val += 2 - }) - Shutdown() - called() - assert.Equal(t, 3, val) -} - -func TestNotifyMoreThanOnce(t *testing.T) { - ch := make(chan struct{}, 1) - - go func() { - var val int - called := AddWrapUpListener(func() { - val++ - }) - WrapUp() - WrapUp() - called() - assert.Equal(t, 1, val) - - called = AddShutdownListener(func() { - val += 2 - }) - Shutdown() - Shutdown() - called() - assert.Equal(t, 3, val) - ch <- struct{}{} - }() - - select { - case <-ch: - fmt.Printf("TestNotifyMoreThanOnce done\n") - case <-time.After(time.Second): - t.Fatal("timeout, check error logs") - } -} diff --git a/pkg/random/RandomKey_test.go b/pkg/random/RandomKey_test.go deleted file mode 100644 index 244054d..0000000 --- a/pkg/random/RandomKey_test.go +++ /dev/null @@ -1,96 +0,0 @@ -package random - -import ( - "math/rand" - "testing" - "time" - - "github.com/perfect-panel/server/pkg/snowflake" - - "github.com/stretchr/testify/assert" -) - -func TestEncodeBase62(t *testing.T) { - start := 1112275807 - length := 1558080 - n := length + start - // n := 328564998144 - m := make(map[string]struct{}) - // m := make(map[string]struct{}, length) - var inviteCode string - for i := start; i < n; i++ { - // inviteCode = EncodeBase36(int64(i)) - inviteCode = EncodeBase36(snowflake.GetID()) - if v, ok := m[inviteCode]; ok { - t.Fatal(v, inviteCode) - } - m[inviteCode] = struct{}{} - } - t.Log(inviteCode) - - assert.Equal(t, length, len(m)) -} - -func TestInt64ToDashedString(t *testing.T) { - type args struct { - strNum string - } - tests := []struct { - name string - args args - want string - }{ - // TODO: Add test cases. 
- { - name: "", - args: args{ - strNum: "123", - }, - want: "123", - }, - { - name: "", - args: args{ - strNum: "1234", - }, - want: "1234", - }, - { - name: "", - args: args{ - strNum: "12345", - }, - want: "1234-5", - }, - { - name: "", - args: args{ - strNum: "12345678", - }, - want: "1234-5678", - }, - { - name: "", - args: args{ - strNum: "123456789", - }, - want: "1234-5678-9", - }, - } - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - assert.Equalf(t, tt.want, StrToDashedString(tt.args.strNum), "StrToDashedString(%v)", tt.args.strNum) - }) - } -} - -// ShuffleString shuffles the characters in a string. -func ShuffleString(s string) string { - r := rand.New(rand.NewSource(time.Now().UnixNano())) - runes := []rune(s) - for i := range runes { - j := r.Intn(i + 1) - runes[i], runes[j] = runes[j], runes[i] - } - return string(runes) -} diff --git a/pkg/rescue/recover_test.go b/pkg/rescue/recover_test.go deleted file mode 100644 index ae5d636..0000000 --- a/pkg/rescue/recover_test.go +++ /dev/null @@ -1,40 +0,0 @@ -package rescue - -import ( - "context" - "sync/atomic" - "testing" - - "github.com/stretchr/testify/assert" -) - -func init() { -} - -func TestRescue(t *testing.T) { - var count int32 - assert.NotPanics(t, func() { - defer Recover(func() { - atomic.AddInt32(&count, 2) - }, func() { - atomic.AddInt32(&count, 3) - }) - - panic("hello") - }) - assert.Equal(t, int32(5), atomic.LoadInt32(&count)) -} - -func TestRescueCtx(t *testing.T) { - var count int32 - assert.NotPanics(t, func() { - defer RecoverCtx(context.Background(), func() { - atomic.AddInt32(&count, 2) - }, func() { - atomic.AddInt32(&count, 3) - }) - - panic("hello") - }) - assert.Equal(t, int32(5), atomic.LoadInt32(&count)) -} diff --git a/pkg/rules/rules_test.go b/pkg/rules/rules_test.go deleted file mode 100644 index cdb9cad..0000000 --- a/pkg/rules/rules_test.go +++ /dev/null @@ -1,52 +0,0 @@ -package rules - -import ( - "strings" - "testing" - - 
"github.com/stretchr/testify/assert" -) - -var text = ` -DOMAIN,example.com -DOMAIN-SUFFIX,google.com,DIRECT -DOMAIN-KEYWORD,amazon,REJECT -IP-CIDR,192.168.0.0/16 -` - -func TestNewRule(t *testing.T) { - var rs []string - // parse and validate the rules - ruleArr := strings.Split(text, "\n") - if len(ruleArr) == 0 { - t.Error("rules is empty") - } - ruleArr = trimArr(ruleArr) - for _, s := range ruleArr { - r := NewRule(s, "Test") - if r == nil { - t.Errorf("[CreateRuleGroup] rule %s is nil, len: %d", s, len(s)) - continue - } - if err := r.Validate(); err != nil { - t.Errorf("[CreateRuleGroup] rule %s is invalid: %v", s, err) - continue - } - rs = append(rs, r.String()) - } - - expected := []string{ - "DOMAIN,example.com,Test", - "DOMAIN-SUFFIX,google.com,DIRECT", - "DOMAIN-KEYWORD,amazon,REJECT", - "IP-CIDR,192.168.0.0/16,Test,no-resolve", - } - - for i, r := range rs { - if r != expected[i] { - t.Errorf("expected %s, got %s", expected[i], r) - } - } - // Check that the number of parsed rules matches the expected count - assert.Equal(t, len(rs), len(expected)) -} diff --git a/pkg/service/servicegroup_test.go deleted file mode 100644 index aaf683e..0000000 --- a/pkg/service/servicegroup_test.go +++ /dev/null @@ -1,129 +0,0 @@ -package service - -import ( - "sync" - "testing" - - "github.com/perfect-panel/server/pkg/proc" - - "github.com/stretchr/testify/assert" -) - -var ( - number = 1 - mutex sync.Mutex - done = make(chan struct{}) -) - -func TestServiceGroup(t *testing.T) { - multipliers := []int{2, 3, 5, 7} - want := 1 - - group := NewServiceGroup() - for _, multiplier := range multipliers { - want *= multiplier - service := newMockedService(multiplier) - group.Add(service) - } - - go group.Start() - - for i := 0; i < len(multipliers); i++ { - <-done - } - - group.Stop() - proc.Shutdown() - - mutex.Lock() - defer mutex.Unlock() - assert.Equal(t, want, number) -} - -func TestServiceGroup_WithStart(t *testing.T) { - multipliers := []int{2, 3, 5, 7} - want := 1 - - var 
wait sync.WaitGroup - var lock sync.Mutex - wait.Add(len(multipliers)) - group := NewServiceGroup() - for _, multiplier := range multipliers { - mul := multiplier - group.Add(WithStart(func() { - lock.Lock() - want *= mul - lock.Unlock() - wait.Done() - })) - } - - go group.Start() - wait.Wait() - group.Stop() - - lock.Lock() - defer lock.Unlock() - assert.Equal(t, 210, want) -} - -func TestServiceGroup_WithStarter(t *testing.T) { - multipliers := []int{2, 3, 5, 7} - want := 1 - - var wait sync.WaitGroup - var lock sync.Mutex - wait.Add(len(multipliers)) - group := NewServiceGroup() - for _, multiplier := range multipliers { - mul := multiplier - group.Add(WithStarter(mockedStarter{ - fn: func() { - lock.Lock() - want *= mul - lock.Unlock() - wait.Done() - }, - })) - } - - go group.Start() - wait.Wait() - group.Stop() - - lock.Lock() - defer lock.Unlock() - assert.Equal(t, 210, want) -} - -type mockedStarter struct { - fn func() -} - -func (s mockedStarter) Start() { - s.fn() -} - -type mockedService struct { - quit chan struct{} - multiplier int -} - -func newMockedService(multiplier int) *mockedService { - return &mockedService{ - quit: make(chan struct{}), - multiplier: multiplier, - } -} - -func (s *mockedService) Start() { - mutex.Lock() - number *= s.multiplier - mutex.Unlock() - done <- struct{}{} - <-s.quit -} - -func (s *mockedService) Stop() { - close(s.quit) -} diff --git a/pkg/signature/signature_test.go b/pkg/signature/signature_test.go deleted file mode 100644 index 658c2a4..0000000 --- a/pkg/signature/signature_test.go +++ /dev/null @@ -1,120 +0,0 @@ -package signature - -import ( - "context" - "crypto/hmac" - "crypto/sha256" - "encoding/hex" - "fmt" - "strconv" - "testing" - "time" -) - -type mockNonceStore struct { - seen map[string]bool -} - -func newMockNonceStore() *mockNonceStore { - return &mockNonceStore{seen: map[string]bool{}} -} - -func (m *mockNonceStore) SetIfNotExists(_ context.Context, appId, nonce string, _ int64) (bool, error) { - 
key := appId + ":" + nonce - if m.seen[key] { - return true, nil - } - m.seen[key] = true - return false, nil -} - -func makeSignature(secret, stringToSign string) string { - mac := hmac.New(sha256.New, []byte(secret)) - mac.Write([]byte(stringToSign)) - return hex.EncodeToString(mac.Sum(nil)) -} - -func TestValidateSuccess(t *testing.T) { - conf := SignatureConf{ - AppSecrets: map[string]string{"web-client": "uB4G,XxL2{7b"}, - ValidWindowSeconds: 300, - } - v := NewValidator(conf, newMockNonceStore()) - - ts := strconv.FormatInt(time.Now().Unix(), 10) - nonce := fmt.Sprintf("%x", time.Now().UnixNano()) - sts := BuildStringToSign("POST", "/v1/public/order/create", "", []byte(`{"plan_id":1}`), "web-client", ts, nonce) - sig := makeSignature("uB4G,XxL2{7b", sts) - - if err := v.Validate(context.Background(), "web-client", ts, nonce, sig, sts); err != nil { - t.Fatalf("expected success, got %v", err) - } -} - -func TestValidateExpired(t *testing.T) { - conf := SignatureConf{ - AppSecrets: map[string]string{"web-client": "uB4G,XxL2{7b"}, - ValidWindowSeconds: 300, - } - v := NewValidator(conf, newMockNonceStore()) - - ts := strconv.FormatInt(time.Now().Unix()-400, 10) - nonce := "abc" - sts := BuildStringToSign("GET", "/v1/public/user/info", "", nil, "web-client", ts, nonce) - sig := makeSignature("uB4G,XxL2{7b", sts) - - if err := v.Validate(context.Background(), "web-client", ts, nonce, sig, sts); err != ErrSignatureExpired { - t.Fatalf("expected ErrSignatureExpired, got %v", err) - } -} - -func TestValidateReplay(t *testing.T) { - conf := SignatureConf{ - AppSecrets: map[string]string{"web-client": "uB4G,XxL2{7b"}, - ValidWindowSeconds: 300, - } - v := NewValidator(conf, newMockNonceStore()) - - ts := strconv.FormatInt(time.Now().Unix(), 10) - nonce := "same-nonce-replay" - sts := BuildStringToSign("GET", "/v1/public/user/info", "", nil, "web-client", ts, nonce) - sig := makeSignature("uB4G,XxL2{7b", sts) - - _ = v.Validate(context.Background(), "web-client", ts, 
nonce, sig, sts) - if err := v.Validate(context.Background(), "web-client", ts, nonce, sig, sts); err != ErrSignatureReplay { - t.Fatalf("expected ErrSignatureReplay, got %v", err) - } -} - -func TestValidateInvalidSignature(t *testing.T) { - conf := SignatureConf{ - AppSecrets: map[string]string{"web-client": "uB4G,XxL2{7b"}, - ValidWindowSeconds: 300, - } - v := NewValidator(conf, newMockNonceStore()) - - ts := strconv.FormatInt(time.Now().Unix(), 10) - nonce := "nonce-invalid-sig" - sts := BuildStringToSign("POST", "/v1/public/order/create", "", []byte(`{"plan_id":1}`), "web-client", ts, nonce) - - if err := v.Validate(context.Background(), "web-client", ts, nonce, "badsignature", sts); err != ErrSignatureInvalid { - t.Fatalf("expected ErrSignatureInvalid, got %v", err) - } -} - -func TestBuildStringToSignCanonicalQuery(t *testing.T) { - got := BuildStringToSign( - "get", - "/v1/public/order/list", - "b=2&a=1&a=3&c=", - nil, - "web-client", - "1700000000", - "nonce-1", - ) - - want := "GET\n/v1/public/order/list\na=1&b=2&c=\ne3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\nweb-client\n1700000000\nnonce-1" - if got != want { - t.Fatalf("unexpected stringToSign\nwant: %s\ngot: %s", want, got) - } -} diff --git a/pkg/sms/abosend/abosend_test.go b/pkg/sms/abosend/abosend_test.go deleted file mode 100644 index 2002ba0..0000000 --- a/pkg/sms/abosend/abosend_test.go +++ /dev/null @@ -1,30 +0,0 @@ -package abosend - -import "testing" - -func TestNewClient(t *testing.T) { - t.Skipf("Skip TestNewClient test") - client := createClient() - err := client.SendCode("1", "", "223322") - if err != nil { - t.Errorf("TestNewClient() error = %v", err.Error()) - return - } - t.Logf("TestNewClient success") -} - -func TestClient_GetSendCodeContent(t *testing.T) { - t.Skipf("Skip TestClient_GetSendCodeContent test") - client := createClient() - content := client.GetSendCodeContent("223322") - t.Logf("TestClient_GetSendCodeContent() = %v", content) -} - -func 
createClient() *Client { - return NewClient(Config{ - ApiDomain: "https://smsapi.abosend.com", - Access: "", - Secret: "", - Template: "您的验证码是:{{.code}}。请不要把验证码泄露给其他人。", - }) -} diff --git a/pkg/sms/smsbao/smsbao_test.go b/pkg/sms/smsbao/smsbao_test.go deleted file mode 100644 index 800dfcd..0000000 --- a/pkg/sms/smsbao/smsbao_test.go +++ /dev/null @@ -1,16 +0,0 @@ -package smsbao - -import "testing" - -func TestNewClient(t *testing.T) { - t.Skipf("Skip TestNewClient test") - client := NewClient(Config{ - Template: "【XXX】您的验证码是:{{.code}},有效期 {{.expiration}}。请不要把验证码泄露给其他人。", - }) - err := client.SendCode("1", "", "223322") - if err != nil { - t.Errorf("TestNewClient() error = %v", err.Error()) - return - } - t.Logf("TestNewClient success") -} diff --git a/pkg/sms/twilio/twilio_test.go b/pkg/sms/twilio/twilio_test.go deleted file mode 100644 index 3a56509..0000000 --- a/pkg/sms/twilio/twilio_test.go +++ /dev/null @@ -1,16 +0,0 @@ -package twilio - -import "testing" - -func TestClient_SendCode(t *testing.T) { - t.Skipf("Skip TestClient_SendCode test") - client := NewClient(Config{ - Access: "", Secret: "", PhoneNumber: "", Template: "", - }) - err := client.SendCode("", "", "123456") - if err != nil { - t.Errorf("SendCode() error = %v", err.Error()) - return - } - t.Logf("SendCode() success") -} diff --git a/pkg/snowflake/snowflake_test.go b/pkg/snowflake/snowflake_test.go deleted file mode 100644 index 8ebecbf..0000000 --- a/pkg/snowflake/snowflake_test.go +++ /dev/null @@ -1,59 +0,0 @@ -package snowflake - -import ( - "testing" -) - -func TestGetLocalIp(t *testing.T) { - tests := []struct { - name string - // want byte - wantErr bool - wantFunc func(byte) (bool, string) - }{ - // TODO: Add test cases. 
- { - name: "", - wantErr: false, - wantFunc: func(got byte) (bool, string) { return got > 0, "got > 0" }, - }, - } - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - got, err := GetLocalIp() - if (err != nil) != tt.wantErr { - t.Errorf("GetLocalIp() error = %v, wantErr %v", err, tt.wantErr) - return - } - if r, s := tt.wantFunc(got); !r { - t.Errorf("GetLocalIp() = %v, want %v", got, s) - } - }) - } -} - -func TestGetID(t *testing.T) { - tests := []struct { - name string - // want int64 - wantErr bool - wantFunc func(int64) (bool, string) - }{ - // TODO: Add test cases. - { - name: "", - wantErr: false, - wantFunc: func(got int64) (bool, string) { - return got > 0, "got > 0" - }, - }, - } - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - got := GetID() - if r, s := tt.wantFunc(got); !r { - t.Errorf("GetID() = %v, want %v", got, s) - } - }) - } -} diff --git a/pkg/syncx/atomicbool_test.go b/pkg/syncx/atomicbool_test.go deleted file mode 100644 index f1f8557..0000000 --- a/pkg/syncx/atomicbool_test.go +++ /dev/null @@ -1,27 +0,0 @@ -package syncx - -import ( - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestAtomicBool(t *testing.T) { - val := ForAtomicBool(true) - assert.True(t, val.True()) - val.Set(false) - assert.False(t, val.True()) - val.Set(true) - assert.True(t, val.True()) - val.Set(false) - assert.False(t, val.True()) - ok := val.CompareAndSwap(false, true) - assert.True(t, ok) - assert.True(t, val.True()) - ok = val.CompareAndSwap(true, false) - assert.True(t, ok) - assert.False(t, val.True()) - ok = val.CompareAndSwap(true, false) - assert.False(t, ok) - assert.False(t, val.True()) -} diff --git a/pkg/syncx/atomicduration_test.go b/pkg/syncx/atomicduration_test.go deleted file mode 100644 index 8165e13..0000000 --- a/pkg/syncx/atomicduration_test.go +++ /dev/null @@ -1,19 +0,0 @@ -package syncx - -import ( - "testing" - "time" - - "github.com/stretchr/testify/assert" -) - -func TestAtomicDuration(t 
*testing.T) { - d := ForAtomicDuration(time.Duration(100)) - assert.Equal(t, time.Duration(100), d.Load()) - d.Set(time.Duration(200)) - assert.Equal(t, time.Duration(200), d.Load()) - assert.True(t, d.CompareAndSwap(time.Duration(200), time.Duration(300))) - assert.Equal(t, time.Duration(300), d.Load()) - assert.False(t, d.CompareAndSwap(time.Duration(200), time.Duration(400))) - assert.Equal(t, time.Duration(300), d.Load()) -} diff --git a/pkg/syncx/atomicfloat64_test.go b/pkg/syncx/atomicfloat64_test.go deleted file mode 100644 index c3c5fa1..0000000 --- a/pkg/syncx/atomicfloat64_test.go +++ /dev/null @@ -1,24 +0,0 @@ -package syncx - -import ( - "sync" - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestAtomicFloat64(t *testing.T) { - f := ForAtomicFloat64(100) - var wg sync.WaitGroup - for i := 0; i < 5; i++ { - wg.Add(1) - go func() { - for i := 0; i < 100; i++ { - f.Add(1) - } - wg.Done() - }() - } - wg.Wait() - assert.Equal(t, float64(600), f.Load()) -} diff --git a/pkg/syncx/barrier_test.go b/pkg/syncx/barrier_test.go deleted file mode 100644 index 7e2426e..0000000 --- a/pkg/syncx/barrier_test.go +++ /dev/null @@ -1,56 +0,0 @@ -package syncx - -import ( - "sync" - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestBarrier_Guard(t *testing.T) { - const total = 10000 - var barrier Barrier - var count int - var wg sync.WaitGroup - wg.Add(total) - for i := 0; i < total; i++ { - go barrier.Guard(func() { - count++ - wg.Done() - }) - } - wg.Wait() - assert.Equal(t, total, count) -} - -func TestBarrierPtr_Guard(t *testing.T) { - const total = 10000 - barrier := new(Barrier) - var count int - wg := new(sync.WaitGroup) - wg.Add(total) - for i := 0; i < total; i++ { - go barrier.Guard(func() { - count++ - wg.Done() - }) - } - wg.Wait() - assert.Equal(t, total, count) -} - -func TestGuard(t *testing.T) { - const total = 10000 - var count int - var lock sync.Mutex - wg := new(sync.WaitGroup) - wg.Add(total) - for i := 0; i < total; i++ { 
- go Guard(&lock, func() { - count++ - wg.Done() - }) - } - wg.Wait() - assert.Equal(t, total, count) -} diff --git a/pkg/syncx/cond_test.go b/pkg/syncx/cond_test.go deleted file mode 100644 index c7e7c28..0000000 --- a/pkg/syncx/cond_test.go +++ /dev/null @@ -1,69 +0,0 @@ -package syncx - -import ( - "sync" - "testing" - "time" - - "github.com/stretchr/testify/assert" -) - -func TestTimeoutCondWait(t *testing.T) { - var wait sync.WaitGroup - cond := NewCond() - wait.Add(2) - go func() { - cond.Wait() - wait.Done() - }() - time.Sleep(time.Duration(50) * time.Millisecond) - go func() { - cond.Signal() - wait.Done() - }() - wait.Wait() -} - -func TestTimeoutCondWaitTimeout(t *testing.T) { - var wait sync.WaitGroup - cond := NewCond() - wait.Add(1) - go func() { - cond.WaitWithTimeout(time.Duration(500) * time.Millisecond) - wait.Done() - }() - wait.Wait() -} - -func TestTimeoutCondWaitTimeoutRemain(t *testing.T) { - var wait sync.WaitGroup - cond := NewCond() - wait.Add(2) - ch := make(chan time.Duration, 1) - defer close(ch) - timeout := time.Duration(2000) * time.Millisecond - go func() { - remainTimeout, _ := cond.WaitWithTimeout(timeout) - ch <- remainTimeout - wait.Done() - }() - sleep(200) - go func() { - cond.Signal() - wait.Done() - }() - wait.Wait() - remainTimeout := <-ch - assert.True(t, remainTimeout < timeout, "expect remainTimeout %v < %v", remainTimeout, timeout) - assert.True(t, remainTimeout >= time.Duration(200)*time.Millisecond, - "expect remainTimeout %v >= 200 millisecond", remainTimeout) -} - -func TestSignalNoWait(t *testing.T) { - cond := NewCond() - cond.Signal() -} - -func sleep(millisecond int) { - time.Sleep(time.Duration(millisecond) * time.Millisecond) -} diff --git a/pkg/syncx/donechan_test.go b/pkg/syncx/donechan_test.go deleted file mode 100644 index 2db0f15..0000000 --- a/pkg/syncx/donechan_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package syncx - -import ( - "sync" - "testing" -) - -func TestDoneChanClose(t *testing.T) { - doneChan := 
NewDoneChan() - - for i := 0; i < 5; i++ { - doneChan.Close() - } -} - -func TestDoneChanDone(t *testing.T) { - var waitGroup sync.WaitGroup - doneChan := NewDoneChan() - - waitGroup.Add(1) - go func() { - <-doneChan.Done() - waitGroup.Done() - }() - - for i := 0; i < 5; i++ { - doneChan.Close() - } - - waitGroup.Wait() -} diff --git a/pkg/syncx/immutableresource_test.go b/pkg/syncx/immutableresource_test.go deleted file mode 100644 index 8aec6b9..0000000 --- a/pkg/syncx/immutableresource_test.go +++ /dev/null @@ -1,78 +0,0 @@ -package syncx - -import ( - "errors" - "testing" - "time" - - "github.com/stretchr/testify/assert" -) - -func TestImmutableResource(t *testing.T) { - var count int - r := NewImmutableResource(func() (any, error) { - count++ - return "hello", nil - }) - - res, err := r.Get() - assert.Equal(t, "hello", res) - assert.Equal(t, 1, count) - assert.Nil(t, err) - - // again - res, err = r.Get() - assert.Equal(t, "hello", res) - assert.Equal(t, 1, count) - assert.Nil(t, err) -} - -func TestImmutableResourceError(t *testing.T) { - var count int - r := NewImmutableResource(func() (any, error) { - count++ - return nil, errors.New("any") - }) - - res, err := r.Get() - assert.Nil(t, res) - assert.NotNil(t, err) - assert.Equal(t, "any", err.Error()) - assert.Equal(t, 1, count) - - // again - res, err = r.Get() - assert.Nil(t, res) - assert.NotNil(t, err) - assert.Equal(t, "any", err.Error()) - assert.Equal(t, 1, count) - - r.refreshInterval = 0 - time.Sleep(time.Millisecond) - res, err = r.Get() - assert.Nil(t, res) - assert.NotNil(t, err) - assert.Equal(t, "any", err.Error()) - assert.Equal(t, 2, count) -} - -func TestImmutableResourceErrorRefreshAlways(t *testing.T) { - var count int - r := NewImmutableResource(func() (any, error) { - count++ - return nil, errors.New("any") - }, WithRefreshIntervalOnFailure(0)) - - res, err := r.Get() - assert.Nil(t, res) - assert.NotNil(t, err) - assert.Equal(t, "any", err.Error()) - assert.Equal(t, 1, count) - - // 
again - res, err = r.Get() - assert.Nil(t, res) - assert.NotNil(t, err) - assert.Equal(t, "any", err.Error()) - assert.Equal(t, 2, count) -} diff --git a/pkg/syncx/limit_test.go b/pkg/syncx/limit_test.go deleted file mode 100644 index 1465dbb..0000000 --- a/pkg/syncx/limit_test.go +++ /dev/null @@ -1,17 +0,0 @@ -package syncx - -import ( - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestLimit(t *testing.T) { - limit := NewLimit(2) - limit.Borrow() - assert.True(t, limit.TryBorrow()) - assert.False(t, limit.TryBorrow()) - assert.Nil(t, limit.Return()) - assert.Nil(t, limit.Return()) - assert.Equal(t, ErrLimitReturn, limit.Return()) -} diff --git a/pkg/syncx/lockedcalls_test.go b/pkg/syncx/lockedcalls_test.go deleted file mode 100644 index 93f6fe0..0000000 --- a/pkg/syncx/lockedcalls_test.go +++ /dev/null @@ -1,82 +0,0 @@ -package syncx - -import ( - "errors" - "fmt" - "sync" - "testing" - "time" -) - -func TestLockedCallDo(t *testing.T) { - g := NewLockedCalls() - v, err := g.Do("key", func() (any, error) { - return "bar", nil - }) - if got, want := fmt.Sprintf("%v (%T)", v, v), "bar (string)"; got != want { - t.Errorf("Do = %v; want %v", got, want) - } - if err != nil { - t.Errorf("Do error = %v", err) - } -} - -func TestLockedCallDoErr(t *testing.T) { - g := NewLockedCalls() - someErr := errors.New("some error") - v, err := g.Do("key", func() (any, error) { - return nil, someErr - }) - if !errors.Is(err, someErr) { - t.Errorf("Do error = %v; want someErr", err) - } - if v != nil { - t.Errorf("unexpected non-nil value %#v", v) - } -} - -func TestLockedCallDoDupSuppress(t *testing.T) { - g := NewLockedCalls() - c := make(chan string) - var calls int - fn := func() (any, error) { - calls++ - ret := calls - <-c - calls-- - return ret, nil - } - - const n = 10 - var results []int - var lock sync.Mutex - var wg sync.WaitGroup - for i := 0; i < n; i++ { - wg.Add(1) - go func() { - v, err := g.Do("key", fn) - if err != nil { - t.Errorf("Do error: %v", 
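The limit test above (`Borrow`, `TryBorrow`, `Return`, `ErrLimitReturn`) describes a counting semaphore that also detects over-return. A buffered-channel sketch of that contract (hypothetical names; the real error value is `syncx.ErrLimitReturn`):

```go
package main

import (
	"errors"
	"fmt"
)

// errLimitReturn mirrors the over-return error checked in TestLimit.
var errLimitReturn = errors.New("discarding limited token, resource pool is full")

// limit is a buffered-channel semaphore: Borrow blocks for a slot,
// TryBorrow fails fast, and Return reports tokens nobody borrowed.
type limit struct{ pool chan struct{} }

func newLimit(n int) limit { return limit{pool: make(chan struct{}, n)} }

func (l limit) Borrow() { l.pool <- struct{}{} }

func (l limit) TryBorrow() bool {
	select {
	case l.pool <- struct{}{}:
		return true
	default:
		return false
	}
}

func (l limit) Return() error {
	select {
	case <-l.pool:
		return nil
	default:
		return errLimitReturn
	}
}

func main() {
	l := newLimit(2)
	l.Borrow()
	fmt.Println(l.TryBorrow()) // second slot still free
	fmt.Println(l.TryBorrow()) // pool exhausted
	fmt.Println(l.Return(), l.Return(), l.Return())
}
```

The final `Return` errors because both borrowed tokens were already given back, exactly the sequence `TestLimit` asserts.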
err) - } - - lock.Lock() - results = append(results, v.(int)) - lock.Unlock() - wg.Done() - }() - } - time.Sleep(100 * time.Millisecond) // let goroutines above block - for i := 0; i < n; i++ { - c <- "bar" - } - wg.Wait() - - lock.Lock() - defer lock.Unlock() - - for _, item := range results { - if item != 1 { - t.Errorf("number of calls = %d; want 1", item) - } - } -} diff --git a/pkg/syncx/managedresource_test.go b/pkg/syncx/managedresource_test.go deleted file mode 100644 index ebcb18d..0000000 --- a/pkg/syncx/managedresource_test.go +++ /dev/null @@ -1,22 +0,0 @@ -package syncx - -import ( - "sync/atomic" - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestManagedResource(t *testing.T) { - var count int32 - resource := NewManagedResource(func() any { - return atomic.AddInt32(&count, 1) - }, func(a, b any) bool { - return a == b - }) - - assert.Equal(t, resource.Take(), resource.Take()) - old := resource.Take() - resource.MarkBroken(old) - assert.NotEqual(t, old, resource.Take()) -} diff --git a/pkg/syncx/once_test.go b/pkg/syncx/once_test.go deleted file mode 100644 index 9e7fd71..0000000 --- a/pkg/syncx/once_test.go +++ /dev/null @@ -1,33 +0,0 @@ -package syncx - -import ( - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestOnce(t *testing.T) { - var v int - add := Once(func() { - v++ - }) - - for i := 0; i < 5; i++ { - add() - } - - assert.Equal(t, 1, v) -} - -func BenchmarkOnce(b *testing.B) { - var v int - add := Once(func() { - v++ - }) - - b.ResetTimer() - for i := 0; i < b.N; i++ { - add() - } - assert.Equal(b, 1, v) -} diff --git a/pkg/syncx/onceguard_test.go b/pkg/syncx/onceguard_test.go deleted file mode 100644 index dac7aa3..0000000 --- a/pkg/syncx/onceguard_test.go +++ /dev/null @@ -1,17 +0,0 @@ -package syncx - -import ( - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestOnceGuard(t *testing.T) { - var guard OnceGuard - - assert.False(t, guard.Taken()) - assert.True(t, guard.Take()) - assert.True(t, 
guard.Taken()) - assert.False(t, guard.Take()) - assert.True(t, guard.Taken()) -} diff --git a/pkg/syncx/pool_test.go b/pkg/syncx/pool_test.go deleted file mode 100644 index 5bdb47c..0000000 --- a/pkg/syncx/pool_test.go +++ /dev/null @@ -1,115 +0,0 @@ -package syncx - -import ( - "sync" - "sync/atomic" - "testing" - "time" - - "github.com/perfect-panel/server/pkg/lang" - "github.com/stretchr/testify/assert" -) - -const limit = 10 - -func TestPoolGet(t *testing.T) { - stack := NewPool(limit, create, destroy) - ch := make(chan lang.PlaceholderType) - - for i := 0; i < limit; i++ { - var fail AtomicBool - go func() { - v := stack.Get() - if v.(int) != 1 { - fail.Set(true) - } - ch <- lang.Placeholder - }() - - select { - case <-ch: - case <-time.After(time.Second): - t.Fail() - } - - if fail.True() { - t.Fatal("unmatch value") - } - } -} - -func TestPoolPopTooMany(t *testing.T) { - stack := NewPool(limit, create, destroy) - ch := make(chan lang.PlaceholderType, 1) - - for i := 0; i < limit; i++ { - var wait sync.WaitGroup - wait.Add(1) - go func() { - stack.Get() - ch <- lang.Placeholder - wait.Done() - }() - - wait.Wait() - select { - case <-ch: - default: - t.Fail() - } - } - - var waitGroup, pushWait sync.WaitGroup - waitGroup.Add(1) - pushWait.Add(1) - go func() { - pushWait.Done() - stack.Get() - waitGroup.Done() - }() - - pushWait.Wait() - stack.Put(1) - waitGroup.Wait() -} - -func TestPoolPopFirst(t *testing.T) { - var value int32 - stack := NewPool(limit, func() any { - return atomic.AddInt32(&value, 1) - }, destroy) - - for i := 0; i < 100; i++ { - v := stack.Get().(int32) - assert.Equal(t, 1, int(v)) - stack.Put(v) - } -} - -func TestPoolWithMaxAge(t *testing.T) { - var value int32 - stack := NewPool(limit, func() any { - return atomic.AddInt32(&value, 1) - }, destroy, WithMaxAge(time.Millisecond)) - - v1 := stack.Get().(int32) - // put nil should not matter - stack.Put(nil) - stack.Put(v1) - time.Sleep(time.Millisecond * 10) - v2 := stack.Get().(int32) - 
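The onceguard test above specifies a guard where `Take` succeeds exactly once and `Taken` reports whether it was claimed. That is a one-line compare-and-swap; a sketch assuming only the stdlib (`sync/atomic.Bool`, Go 1.19+):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// onceGuard lets exactly one caller claim it; later Take calls fail.
type onceGuard struct{ done atomic.Bool }

// Taken reports whether the guard has already been claimed.
func (g *onceGuard) Taken() bool { return g.done.Load() }

// Take claims the guard; only the first caller gets true.
func (g *onceGuard) Take() bool { return g.done.CompareAndSwap(false, true) }

func main() {
	var g onceGuard
	fmt.Println(g.Taken(), g.Take(), g.Take(), g.Taken()) // false true false true
}
```

The CAS makes the claim race-free: two concurrent `Take` calls can never both observe `false`.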
assert.NotEqual(t, v1, v2) -} - -func TestNewPoolPanics(t *testing.T) { - assert.Panics(t, func() { - NewPool(0, create, destroy) - }) -} - -func create() any { - return 1 -} - -func destroy(_ any) { -} diff --git a/pkg/syncx/refresource_test.go b/pkg/syncx/refresource_test.go deleted file mode 100644 index 1cc1688..0000000 --- a/pkg/syncx/refresource_test.go +++ /dev/null @@ -1,27 +0,0 @@ -package syncx - -import ( - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestRefCleaner(t *testing.T) { - var count int - clean := func() { - count += 1 - } - - cleaner := NewRefResource(clean) - err := cleaner.Use() - assert.Nil(t, err) - err = cleaner.Use() - assert.Nil(t, err) - cleaner.Clean() - cleaner.Clean() - assert.Equal(t, 1, count) - cleaner.Clean() - cleaner.Clean() - assert.Equal(t, 1, count) - assert.Equal(t, ErrUseOfCleaned, cleaner.Use()) -} diff --git a/pkg/syncx/resourcemanager_test.go b/pkg/syncx/resourcemanager_test.go deleted file mode 100644 index 725b8d1..0000000 --- a/pkg/syncx/resourcemanager_test.go +++ /dev/null @@ -1,99 +0,0 @@ -package syncx - -import ( - "errors" - "io" - "testing" - - "github.com/stretchr/testify/assert" -) - -type dummyResource struct { - age int -} - -func (dr *dummyResource) Close() error { - return errors.New("close") -} - -func TestResourceManager_GetResource(t *testing.T) { - manager := NewResourceManager() - defer manager.Close() - - var age int - for i := 0; i < 10; i++ { - val, err := manager.GetResource("key", func() (io.Closer, error) { - age++ - return &dummyResource{ - age: age, - }, nil - }) - assert.Nil(t, err) - assert.Equal(t, 1, val.(*dummyResource).age) - } -} - -func TestResourceManager_GetResourceError(t *testing.T) { - manager := NewResourceManager() - defer manager.Close() - - for i := 0; i < 10; i++ { - _, err := manager.GetResource("key", func() (io.Closer, error) { - return nil, errors.New("fail") - }) - assert.NotNil(t, err) - } -} - -func TestResourceManager_Close(t *testing.T) { - 
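The pool tests above describe a bounded pool: `Get` reuses a cached item or falls back to a factory, `Put(nil)` is a no-op, and a non-positive size panics. A simplified sketch (the real `NewPool` also takes a destroy callback and a `WithMaxAge` option, both omitted here):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// pool caches up to n items in a buffered channel; Get prefers a cached
// item, Put drops items once the buffer is full.
type pool struct {
	items  chan any
	create func() any
}

func newPool(n int, create func() any) *pool {
	if n <= 0 {
		panic("pool size must be positive") // as TestNewPoolPanics expects
	}
	return &pool{items: make(chan any, n), create: create}
}

func (p *pool) Get() any {
	select {
	case v := <-p.items:
		return v
	default:
		return p.create()
	}
}

func (p *pool) Put(v any) {
	if v == nil {
		return // putting nil should not matter, as in TestPoolWithMaxAge
	}
	select {
	case p.items <- v:
	default: // buffer full: drop the item
	}
}

func main() {
	var made int32
	p := newPool(2, func() any { return atomic.AddInt32(&made, 1) })
	v := p.Get()
	p.Put(v)
	fmt.Println(p.Get(), atomic.LoadInt32(&made)) // first item reused, one create
}
```

`TestPoolPopFirst` above exercises exactly this reuse loop: a hundred Get/Put cycles still see the first created value.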
manager := NewResourceManager() - defer manager.Close() - - for i := 0; i < 10; i++ { - _, err := manager.GetResource("key", func() (io.Closer, error) { - return nil, errors.New("fail") - }) - assert.NotNil(t, err) - } - - if assert.NoError(t, manager.Close()) { - assert.Equal(t, 0, len(manager.resources)) - } -} - -func TestResourceManager_UseAfterClose(t *testing.T) { - manager := NewResourceManager() - defer manager.Close() - - _, err := manager.GetResource("key", func() (io.Closer, error) { - return nil, errors.New("fail") - }) - assert.NotNil(t, err) - if assert.NoError(t, manager.Close()) { - _, err = manager.GetResource("key", func() (io.Closer, error) { - return nil, errors.New("fail") - }) - assert.NotNil(t, err) - - assert.Panics(t, func() { - _, err = manager.GetResource("key", func() (io.Closer, error) { - return &dummyResource{age: 123}, nil - }) - }) - } -} - -func TestResourceManager_Inject(t *testing.T) { - manager := NewResourceManager() - defer manager.Close() - - manager.Inject("key", &dummyResource{ - age: 10, - }) - - val, err := manager.GetResource("key", func() (io.Closer, error) { - return nil, nil - }) - assert.Nil(t, err) - assert.Equal(t, 10, val.(*dummyResource).age) -} diff --git a/pkg/syncx/singleflight_test.go b/pkg/syncx/singleflight_test.go deleted file mode 100644 index 591c273..0000000 --- a/pkg/syncx/singleflight_test.go +++ /dev/null @@ -1,141 +0,0 @@ -package syncx - -import ( - "errors" - "fmt" - "sync" - "sync/atomic" - "testing" - "time" -) - -func TestExclusiveCallDo(t *testing.T) { - g := NewSingleFlight() - v, err := g.Do("key", func() (any, error) { - return "bar", nil - }) - if got, want := fmt.Sprintf("%v (%T)", v, v), "bar (string)"; got != want { - t.Errorf("Do = %v; want %v", got, want) - } - if err != nil { - t.Errorf("Do error = %v", err) - } -} - -func TestExclusiveCallDoErr(t *testing.T) { - g := NewSingleFlight() - someErr := errors.New("some error") - v, err := g.Do("key", func() (any, error) { - return nil, 
someErr - }) - if !errors.Is(err, someErr) { - t.Errorf("Do error = %v; want someErr", err) - } - if v != nil { - t.Errorf("unexpected non-nil value %#v", v) - } -} - -func TestExclusiveCallDoDupSuppress(t *testing.T) { - g := NewSingleFlight() - c := make(chan string) - var calls int32 - fn := func() (any, error) { - atomic.AddInt32(&calls, 1) - return <-c, nil - } - - const n = 10 - var wg sync.WaitGroup - for i := 0; i < n; i++ { - wg.Add(1) - go func() { - v, err := g.Do("key", fn) - if err != nil { - t.Errorf("Do error: %v", err) - } - if v.(string) != "bar" { - t.Errorf("got %q; want %q", v, "bar") - } - wg.Done() - }() - } - time.Sleep(100 * time.Millisecond) // let goroutines above block - c <- "bar" - wg.Wait() - if got := atomic.LoadInt32(&calls); got != 1 { - t.Errorf("number of calls = %d; want 1", got) - } -} - -func TestExclusiveCallDoDiffDupSuppress(t *testing.T) { - g := NewSingleFlight() - broadcast := make(chan struct{}) - var calls int32 - tests := []string{"e", "a", "e", "a", "b", "c", "b", "a", "c", "d", "b", "c", "d"} - - var wg sync.WaitGroup - for _, key := range tests { - wg.Add(1) - go func(k string) { - <-broadcast // get all goroutines ready - _, err := g.Do(k, func() (any, error) { - atomic.AddInt32(&calls, 1) - time.Sleep(10 * time.Millisecond) - return nil, nil - }) - if err != nil { - t.Errorf("Do error: %v", err) - } - wg.Done() - }(key) - } - - time.Sleep(100 * time.Millisecond) // let goroutines above block - close(broadcast) - wg.Wait() - - if got := atomic.LoadInt32(&calls); got != 5 { - // five letters - t.Errorf("number of calls = %d; want 5", got) - } -} - -func TestExclusiveCallDoExDupSuppress(t *testing.T) { - g := NewSingleFlight() - c := make(chan string) - var calls int32 - fn := func() (any, error) { - atomic.AddInt32(&calls, 1) - return <-c, nil - } - - const n = 10 - var wg sync.WaitGroup - var freshes int32 - for i := 0; i < n; i++ { - wg.Add(1) - go func() { - v, fresh, err := g.DoEx("key", fn) - if err != nil { - 
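The singleflight dup-suppress tests above check that concurrent `Do` calls sharing a key run the function once and all receive its result. A minimal sketch of that mechanism (hypothetical names `flight`/`newFlight`; the real API is `syncx.NewSingleFlight`, and `DoEx` additionally reports which caller was "fresh"):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// call records one in-flight execution that late arrivals piggyback on.
type call struct {
	wg  sync.WaitGroup
	val any
	err error
}

// flight dedups concurrent Do calls per key.
type flight struct {
	mu    sync.Mutex
	calls map[string]*call
}

func newFlight() *flight { return &flight{calls: make(map[string]*call)} }

func (f *flight) Do(key string, fn func() (any, error)) (any, error) {
	f.mu.Lock()
	if c, ok := f.calls[key]; ok {
		f.mu.Unlock()
		c.wg.Wait() // wait for the in-flight call and share its result
		return c.val, c.err
	}
	c := new(call)
	c.wg.Add(1)
	f.calls[key] = c
	f.mu.Unlock()

	c.val, c.err = fn()
	c.wg.Done()

	f.mu.Lock()
	delete(f.calls, key) // later calls get a fresh execution
	f.mu.Unlock()
	return c.val, c.err
}

func main() {
	g := newFlight()
	var calls int32
	gate := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_, _ = g.Do("key", func() (any, error) {
				atomic.AddInt32(&calls, 1)
				return <-gate, nil
			})
		}()
	}
	// Let the goroutines block on the shared call, as the deleted test
	// does; a sleep is simple here but inherently racy.
	time.Sleep(100 * time.Millisecond)
	gate <- "bar"
	wg.Wait()
	fmt.Println("executions:", atomic.LoadInt32(&calls))
}
```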
t.Errorf("Do error: %v", err) - } - if fresh { - atomic.AddInt32(&freshes, 1) - } - if v.(string) != "bar" { - t.Errorf("got %q; want %q", v, "bar") - } - wg.Done() - }() - } - time.Sleep(100 * time.Millisecond) // let goroutines above block - c <- "bar" - wg.Wait() - if got := atomic.LoadInt32(&calls); got != 1 { - t.Errorf("number of calls = %d; want 1", got) - } - if got := atomic.LoadInt32(&freshes); got != 1 { - t.Errorf("freshes = %d; want 1", got) - } -} diff --git a/pkg/syncx/spinlock_test.go b/pkg/syncx/spinlock_test.go deleted file mode 100644 index 9bcff23..0000000 --- a/pkg/syncx/spinlock_test.go +++ /dev/null @@ -1,70 +0,0 @@ -package syncx - -import ( - "runtime" - "sync" - "sync/atomic" - "testing" - "time" - - "github.com/perfect-panel/server/pkg/lang" - "github.com/stretchr/testify/assert" -) - -func TestTryLock(t *testing.T) { - var lock SpinLock - assert.True(t, lock.TryLock()) - assert.False(t, lock.TryLock()) - lock.Unlock() - assert.True(t, lock.TryLock()) -} - -func TestSpinLock(t *testing.T) { - var lock SpinLock - lock.Lock() - assert.False(t, lock.TryLock()) - lock.Unlock() - assert.True(t, lock.TryLock()) -} - -func TestSpinLockRace(t *testing.T) { - var lock SpinLock - lock.Lock() - var wait sync.WaitGroup - wait.Add(1) - go func() { - wait.Done() - }() - time.Sleep(time.Millisecond * 100) - lock.Unlock() - wait.Wait() - assert.True(t, lock.TryLock()) -} - -func TestSpinLock_TryLock(t *testing.T) { - var lock SpinLock - var count int32 - var wait sync.WaitGroup - wait.Add(2) - sig := make(chan lang.PlaceholderType) - - go func() { - lock.TryLock() - sig <- lang.Placeholder - atomic.AddInt32(&count, 1) - runtime.Gosched() - lock.Unlock() - wait.Done() - }() - - go func() { - <-sig - lock.Lock() - atomic.AddInt32(&count, 1) - lock.Unlock() - wait.Done() - }() - - wait.Wait() - assert.Equal(t, int32(2), atomic.LoadInt32(&count)) -} diff --git a/pkg/syncx/timeoutlimit_test.go b/pkg/syncx/timeoutlimit_test.go deleted file mode 100644 index 
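The spinlock tests above exercise a lock with both `TryLock` and blocking `Lock`. The standard shape is a compare-and-swap loop that yields between attempts; a stdlib sketch:

```go
package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
)

// spinLock is a CAS-based lock: TryLock attempts once, Lock spins,
// yielding the processor so the holder can make progress.
type spinLock struct{ lock uint32 }

func (l *spinLock) TryLock() bool {
	return atomic.CompareAndSwapUint32(&l.lock, 0, 1)
}

func (l *spinLock) Lock() {
	for !l.TryLock() {
		runtime.Gosched() // back off instead of burning the scheduler slot
	}
}

func (l *spinLock) Unlock() {
	atomic.StoreUint32(&l.lock, 0)
}

func main() {
	var l spinLock
	fmt.Println(l.TryLock()) // true
	fmt.Println(l.TryLock()) // false: already held
	l.Unlock()
	fmt.Println(l.TryLock()) // true again
}
```

Spin locks suit very short critical sections; for anything that can block, `sync.Mutex` parks waiters and is the better default.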
ffed96d..0000000 --- a/pkg/syncx/timeoutlimit_test.go +++ /dev/null @@ -1,52 +0,0 @@ -package syncx - -import ( - "sync" - "testing" - "time" - - "github.com/stretchr/testify/assert" -) - -func TestTimeoutLimit(t *testing.T) { - tests := []struct { - name string - interval time.Duration - }{ - { - name: "no wait", - }, - { - name: "wait", - interval: time.Millisecond * 100, - }, - } - - for _, test := range tests { - test := test - t.Run(test.name, func(t *testing.T) { - limit := NewTimeoutLimit(2) - assert.Nil(t, limit.Borrow(time.Millisecond*200)) - assert.Nil(t, limit.Borrow(time.Millisecond*200)) - var wait1, wait2, wait3 sync.WaitGroup - wait1.Add(1) - wait2.Add(1) - wait3.Add(1) - go func() { - wait1.Wait() - wait2.Done() - time.Sleep(test.interval) - assert.Nil(t, limit.Return()) - wait3.Done() - }() - wait1.Done() - wait2.Wait() - assert.Nil(t, limit.Borrow(time.Second)) - wait3.Wait() - assert.Equal(t, ErrTimeout, limit.Borrow(time.Millisecond*100)) - assert.Nil(t, limit.Return()) - assert.Nil(t, limit.Return()) - assert.Equal(t, ErrLimitReturn, limit.Return()) - }) - } -} diff --git a/pkg/templatex/render_test.go b/pkg/templatex/render_test.go deleted file mode 100644 index 98e9c05..0000000 --- a/pkg/templatex/render_test.go +++ /dev/null @@ -1,17 +0,0 @@ -package templatex - -import "testing" - -func TestRenderToString(t *testing.T) { - tmpl := "hello {{.Name}}" - data := map[string]interface{}{ - "Name": "world", - } - got, err := RenderToString(tmpl, data) - if err != nil { - t.Fatalf("RenderToString() error = %v", err) - return - } - want := "hello world" - t.Logf("got: %v, want: %v", got, want) -} diff --git a/pkg/threading/routinegroup_test.go b/pkg/threading/routinegroup_test.go deleted file mode 100644 index 7f4dd08..0000000 --- a/pkg/threading/routinegroup_test.go +++ /dev/null @@ -1,45 +0,0 @@ -package threading - -import ( - "io" - "log" - "sync" - "sync/atomic" - "testing" - - "github.com/stretchr/testify/assert" -) - -func 
TestRoutineGroupRun(t *testing.T) { - var count int32 - group := NewRoutineGroup() - for i := 0; i < 3; i++ { - group.Run(func() { - atomic.AddInt32(&count, 1) - }) - } - - group.Wait() - - assert.Equal(t, int32(3), count) -} - -func TestRoutingGroupRunSafe(t *testing.T) { - log.SetOutput(io.Discard) - - var count int32 - group := NewRoutineGroup() - var once sync.Once - for i := 0; i < 3; i++ { - group.RunSafe(func() { - once.Do(func() { - panic("") - }) - atomic.AddInt32(&count, 1) - }) - } - - group.Wait() - - assert.Equal(t, int32(2), count) -} diff --git a/pkg/threading/routines_test.go b/pkg/threading/routines_test.go deleted file mode 100644 index dfc1d7c..0000000 --- a/pkg/threading/routines_test.go +++ /dev/null @@ -1,11 +0,0 @@ -package threading - -import ( - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestRoutineId(t *testing.T) { - assert.True(t, RoutineId() > 0) -} diff --git a/pkg/timex/relativetime_test.go b/pkg/timex/relativetime_test.go deleted file mode 100644 index 950dbc6..0000000 --- a/pkg/timex/relativetime_test.go +++ /dev/null @@ -1,32 +0,0 @@ -package timex - -import ( - "testing" - "time" - - "github.com/stretchr/testify/assert" -) - -func TestRelativeTime(t *testing.T) { - time.Sleep(time.Millisecond) - now := Now() - assert.True(t, now > 0) - time.Sleep(time.Millisecond) - assert.True(t, Since(now) > 0) -} - -func BenchmarkTimeSince(b *testing.B) { - b.ReportAllocs() - - for i := 0; i < b.N; i++ { - _ = time.Since(time.Now()) - } -} - -func BenchmarkTimexSince(b *testing.B) { - b.ReportAllocs() - - for i := 0; i < b.N; i++ { - _ = Since(Now()) - } -} diff --git a/pkg/timex/repr_test.go b/pkg/timex/repr_test.go deleted file mode 100644 index f787418..0000000 --- a/pkg/timex/repr_test.go +++ /dev/null @@ -1,14 +0,0 @@ -package timex - -import ( - "testing" - "time" - - "github.com/stretchr/testify/assert" -) - -func TestReprOfDuration(t *testing.T) { - assert.Equal(t, "1000.0ms", ReprOfDuration(time.Second)) - 
assert.Equal(t, "1111.6ms", ReprOfDuration( - time.Second+time.Millisecond*111+time.Microsecond*555)) -} diff --git a/pkg/timex/ticker_test.go b/pkg/timex/ticker_test.go deleted file mode 100644 index 66c0dce..0000000 --- a/pkg/timex/ticker_test.go +++ /dev/null @@ -1,50 +0,0 @@ -package timex - -import ( - "sync/atomic" - "testing" - "time" - - "github.com/stretchr/testify/assert" -) - -func TestRealTickerDoTick(t *testing.T) { - ticker := NewTicker(time.Millisecond * 10) - defer ticker.Stop() - var count int - for range ticker.Chan() { - count++ - if count > 5 { - break - } - } -} - -func TestFakeTicker(t *testing.T) { - const total = 5 - ticker := NewFakeTicker() - defer ticker.Stop() - - var count int32 - go func() { - for range ticker.Chan() { - if atomic.AddInt32(&count, 1) == total { - ticker.Done() - } - } - }() - - for i := 0; i < 5; i++ { - ticker.Tick() - } - - assert.Nil(t, ticker.Wait(time.Second)) - assert.Equal(t, int32(total), atomic.LoadInt32(&count)) -} - -func TestFakeTickerTimeout(t *testing.T) { - ticker := NewFakeTicker() - defer ticker.Stop() - - assert.NotNil(t, ticker.Wait(time.Millisecond)) -} diff --git a/pkg/tool/base64_test.go b/pkg/tool/base64_test.go deleted file mode 100644 index f4981b3..0000000 --- a/pkg/tool/base64_test.go +++ /dev/null @@ -1,14 +0,0 @@ -package tool - -import ( - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestIsValidImageSize(t *testing.T) { - testBase64 := "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg==" - maxSize := int64(10) - result := IsValidImageSize(testBase64, maxSize) - assert.Equal(t, result, true) -} diff --git a/pkg/tool/cipher_test.go b/pkg/tool/cipher_test.go deleted file mode 100644 index 4bfc07e..0000000 --- a/pkg/tool/cipher_test.go +++ /dev/null @@ -1,11 +0,0 @@ -package tool - -import ( - "testing" -) - -func TestGenerateCipher(t *testing.T) { - pwd := GenerateCipher("", 16) - t.Logf("pwd: %s", pwd) - 
t.Logf("pwd length: %d", len(pwd)) -} diff --git a/pkg/tool/encryption_test.go b/pkg/tool/encryption_test.go deleted file mode 100644 index 8e420cf..0000000 --- a/pkg/tool/encryption_test.go +++ /dev/null @@ -1,15 +0,0 @@ -package tool - -import ( - "testing" -) - -func TestEncodePassWord(t *testing.T) { - t.Logf("EncodePassWord: %v", EncodePassWord("password")) -} - -func TestMultiPasswordVerify(t *testing.T) { - pwd := "$2y$10$WFO17pdtohfeBILjEChoGeVxpDG.u9kVCKhjDAeEeNmCjIlj3tDRy" - status := MultiPasswordVerify("bcrypt", "", "admin1", pwd) - t.Logf("MultiPasswordVerify: %v", status) -} diff --git a/pkg/tool/format_float_test.go b/pkg/tool/format_float_test.go deleted file mode 100644 index 4310412..0000000 --- a/pkg/tool/format_float_test.go +++ /dev/null @@ -1,10 +0,0 @@ -package tool - -import "testing" - -func TestFormatStringToFloat(t *testing.T) { - var value = 1.23 - if FormatStringToFloat("1.23") != value { - t.Errorf("Expected %f, but got %f", value, FormatStringToFloat("1.23")) - } -} diff --git a/pkg/tool/redis_test.go b/pkg/tool/redis_test.go deleted file mode 100644 index d57c294..0000000 --- a/pkg/tool/redis_test.go +++ /dev/null @@ -1,24 +0,0 @@ -package tool - -import "testing" - -func TestParseRedisURI(t *testing.T) { - uri := "redis://localhost:6379" - addr, password, database, err := ParseRedisURI(uri) - if err != nil { - t.Fatal(err) - } - t.Log(addr, password, database) -} - -func TestRedisPing(t *testing.T) { - uri := "redis://localhost:6379" - addr, password, database, err := ParseRedisURI(uri) - if err != nil { - t.Fatal(err) - } - err = RedisPing(addr, password, database) - if err != nil { - t.Fatal(err) - } -} diff --git a/pkg/tool/string_test.go b/pkg/tool/string_test.go deleted file mode 100644 index 0c44086..0000000 --- a/pkg/tool/string_test.go +++ /dev/null @@ -1,27 +0,0 @@ -package tool - -import ( - "testing" -) - -func TestFixedUniqueString(t *testing.T) { - a := "example" - b := "example1" - c := "example" - - strA1, err := 
FixedUniqueString(a, 8, "") - strB1, err := FixedUniqueString(b, 8, "") - strC1, err := FixedUniqueString(c, 8, "") - if err != nil { - t.Logf("Error: %v", err.Error()) - return - } - if strA1 != strC1 { - t.Errorf("Expected strA1 and strC1 to be equal, got %s and %s", strA1, strC1) - } - if strA1 == strB1 { - t.Errorf("Expected strA1 and strB1 to be different, got %s and %s", strA1, strB1) - } - t.Logf("strA1 and strB1 are not equal, strA1: %s, strB1: %s", strA1, strB1) - t.Logf("strA1 and strC1 are equal,strA1: %s, strC1: %s", strA1, strC1) -} diff --git a/pkg/tool/time_test.go b/pkg/tool/time_test.go deleted file mode 100644 index 0da71d8..0000000 --- a/pkg/tool/time_test.go +++ /dev/null @@ -1,19 +0,0 @@ -package tool - -import ( - "testing" - "time" -) - -func TestAddTime(t *testing.T) { - basic := time.Now() - expAt := AddTime("Month", 1, basic) - - t.Logf("AddTime() success, expected year %d, got year %d, full: %v", basic.Year()+1, expAt.Year(), expAt.Format("2006-01-02 15:04:05")) -} - -func TestGetYearDays(t *testing.T) { - days := GetYearDays(time.Now(), 2, 1) - t.Logf("GetYearDays() success, expected 365, got %d", days) - -} diff --git a/pkg/tool/version_test.go b/pkg/tool/version_test.go deleted file mode 100644 index 02d8920..0000000 --- a/pkg/tool/version_test.go +++ /dev/null @@ -1,12 +0,0 @@ -package tool - -import ( - "testing" - - "github.com/perfect-panel/server/pkg/constant" -) - -func TestExtractVersionNumber(t *testing.T) { - versionNumber := ExtractVersionNumber(constant.Version) - t.Log(versionNumber) -} diff --git a/pkg/trace/agent_test.go b/pkg/trace/agent_test.go deleted file mode 100644 index 02e8e5f..0000000 --- a/pkg/trace/agent_test.go +++ /dev/null @@ -1,111 +0,0 @@ -package trace - -import ( - "testing" - - "github.com/perfect-panel/server/pkg/logger" - "github.com/stretchr/testify/assert" -) - -func TestStartAgent(t *testing.T) { - logger.Disable() - - const ( - endpoint1 = "localhost:1234" - endpoint2 = "remotehost:1234" - 
endpoint3 = "localhost:1235" - endpoint4 = "localhost:1236" - endpoint5 = "udp://localhost:6831" - endpoint6 = "localhost:1237" - endpoint71 = "/tmp/trace.log" - endpoint72 = "/not-exist-fs/trace.log" - ) - c1 := Config{ - Name: "foo", - } - c2 := Config{ - Name: "bar", - Endpoint: endpoint1, - Batcher: kindJaeger, - } - c3 := Config{ - Name: "any", - Endpoint: endpoint2, - Batcher: kindZipkin, - } - c4 := Config{ - Name: "bla", - Endpoint: endpoint3, - Batcher: "otlp", - } - c5 := Config{ - Name: "otlpgrpc", - Endpoint: endpoint3, - Batcher: kindOtlpGrpc, - OtlpHeaders: map[string]string{ - "uptrace-dsn": "http://project2_secret_token@localhost:14317/2", - }, - } - c6 := Config{ - Name: "otlphttp", - Endpoint: endpoint4, - Batcher: kindOtlpHttp, - OtlpHeaders: map[string]string{ - "uptrace-dsn": "http://project2_secret_token@localhost:14318/2", - }, - OtlpHttpPath: "/v1/traces", - } - c7 := Config{ - Name: "UDP", - Endpoint: endpoint5, - Batcher: kindJaeger, - } - c8 := Config{ - Disabled: true, - Endpoint: endpoint6, - Batcher: kindJaeger, - } - c9 := Config{ - Name: "file", - Endpoint: endpoint71, - Batcher: kindFile, - } - c10 := Config{ - Name: "file", - Endpoint: endpoint72, - Batcher: kindFile, - } - - StartAgent(c1) - StartAgent(c1) - StartAgent(c2) - StartAgent(c3) - StartAgent(c4) - StartAgent(c5) - StartAgent(c6) - StartAgent(c7) - StartAgent(c8) - StartAgent(c9) - StartAgent(c10) - defer StopAgent() - - lock.Lock() - defer lock.Unlock() - - // because remotehost cannot be resolved - assert.Equal(t, 6, len(agents)) - _, ok := agents[""] - assert.True(t, ok) - _, ok = agents[endpoint1] - assert.True(t, ok) - _, ok = agents[endpoint2] - assert.False(t, ok) - _, ok = agents[endpoint5] - assert.True(t, ok) - _, ok = agents[endpoint6] - assert.False(t, ok) - _, ok = agents[endpoint71] - assert.True(t, ok) - _, ok = agents[endpoint72] - assert.False(t, ok) -} diff --git a/pkg/trace/attributes_test.go b/pkg/trace/attributes_test.go deleted file mode 100644 
index 67c0097..0000000 --- a/pkg/trace/attributes_test.go +++ /dev/null @@ -1,12 +0,0 @@ -package trace - -import ( - "testing" - - "github.com/stretchr/testify/assert" - gcodes "google.golang.org/grpc/codes" -) - -func TestStatusCodeAttr(t *testing.T) { - assert.Equal(t, GRPCStatusCodeKey.Int(int(gcodes.DataLoss)), StatusCodeAttr(gcodes.DataLoss)) -} diff --git a/pkg/trace/message_test.go b/pkg/trace/message_test.go deleted file mode 100644 index b1de239..0000000 --- a/pkg/trace/message_test.go +++ /dev/null @@ -1,76 +0,0 @@ -package trace - -import ( - "context" - "testing" - - "github.com/stretchr/testify/assert" - "go.opentelemetry.io/otel" - "go.opentelemetry.io/otel/attribute" - "go.opentelemetry.io/otel/codes" - "go.opentelemetry.io/otel/trace" - "google.golang.org/protobuf/reflect/protoreflect" - "google.golang.org/protobuf/types/dynamicpb" -) - -func TestMessageType_Event(t *testing.T) { - ctx, s := otel.Tracer(TraceName).Start(context.Background(), "test") - span := mockSpan{Span: s} - ctx = trace.ContextWithSpan(ctx, &span) - MessageReceived.Event(ctx, 1, "foo") - assert.Equal(t, messageEvent, span.name) - assert.NotEmpty(t, span.options) -} - -func TestMessageType_EventProtoMessage(t *testing.T) { - var span mockSpan - var message mockMessage - ctx := trace.ContextWithSpan(context.Background(), &span) - MessageReceived.Event(ctx, 1, message) - assert.Equal(t, messageEvent, span.name) - assert.NotEmpty(t, span.options) -} - -type mockSpan struct { - trace.Span - name string - options []trace.EventOption -} - -func (m *mockSpan) End(_ ...trace.SpanEndOption) { -} - -func (m *mockSpan) AddEvent(name string, options ...trace.EventOption) { - m.name = name - m.options = options -} - -func (m *mockSpan) IsRecording() bool { - return false -} - -func (m *mockSpan) RecordError(_ error, _ ...trace.EventOption) { -} - -func (m *mockSpan) SpanContext() trace.SpanContext { - panic("implement me") -} - -func (m *mockSpan) SetStatus(_ codes.Code, _ string) { -} - 
-func (m *mockSpan) SetName(_ string) { -} - -func (m *mockSpan) SetAttributes(_ ...attribute.KeyValue) { -} - -func (m *mockSpan) TracerProvider() trace.TracerProvider { - return nil -} - -type mockMessage struct{} - -func (m mockMessage) ProtoReflect() protoreflect.Message { - return new(dynamicpb.Message) -} diff --git a/pkg/trace/tracer_test.go b/pkg/trace/tracer_test.go deleted file mode 100644 index bb77d38..0000000 --- a/pkg/trace/tracer_test.go +++ /dev/null @@ -1,356 +0,0 @@ -package trace - -import ( - "context" - "testing" - - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" - "go.opentelemetry.io/otel" - "go.opentelemetry.io/otel/propagation" - "go.opentelemetry.io/otel/trace" - "google.golang.org/grpc/metadata" -) - -const ( - traceIDStr = "4bf92f3577b34da6a3ce929d0e0e4736" - spanIDStr = "00f067aa0ba902b7" -) - -var ( - traceID = mustTraceIDFromHex(traceIDStr) - spanID = mustSpanIDFromHex(spanIDStr) -) - -func mustTraceIDFromHex(s string) (t trace.TraceID) { - var err error - t, err = trace.TraceIDFromHex(s) - if err != nil { - panic(err) - } - return -} - -func mustSpanIDFromHex(s string) (t trace.SpanID) { - var err error - t, err = trace.SpanIDFromHex(s) - if err != nil { - panic(err) - } - return -} - -func TestExtractValidTraceContext(t *testing.T) { - stateStr := "key1=value1,key2=value2" - state, err := trace.ParseTraceState(stateStr) - require.NoError(t, err) - - tests := []struct { - name string - traceparent string - tracestate string - sc trace.SpanContext - }{ - { - name: "not sampled", - traceparent: "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00", - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - SpanID: spanID, - Remote: true, - }), - }, - { - name: "sampled", - traceparent: "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01", - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - SpanID: spanID, - TraceFlags: trace.FlagsSampled, - Remote: true, - }), 
- }, - { - name: "valid tracestate", - traceparent: "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00", - tracestate: stateStr, - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - SpanID: spanID, - TraceState: state, - Remote: true, - }), - }, - { - name: "invalid tracestate perserves traceparent", - traceparent: "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00", - tracestate: "invalid$@#=invalid", - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - SpanID: spanID, - Remote: true, - }), - }, - { - name: "future version not sampled", - traceparent: "02-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00", - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - SpanID: spanID, - Remote: true, - }), - }, - { - name: "future version sampled", - traceparent: "02-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01", - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - SpanID: spanID, - TraceFlags: trace.FlagsSampled, - Remote: true, - }), - }, - { - name: "future version sample bit set", - traceparent: "02-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-09", - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - SpanID: spanID, - TraceFlags: trace.FlagsSampled, - Remote: true, - }), - }, - { - name: "future version sample bit not set", - traceparent: "02-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-08", - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - SpanID: spanID, - Remote: true, - }), - }, - { - name: "future version additional data", - traceparent: "02-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00-XYZxsf09", - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - SpanID: spanID, - Remote: true, - }), - }, - { - name: "B3 format ending in dash", - traceparent: "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00-", - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - 
SpanID: spanID, - Remote: true, - }), - }, - { - name: "future version B3 format ending in dash", - traceparent: "03-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00-", - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - SpanID: spanID, - Remote: true, - }), - }, - } - otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator( - propagation.TraceContext{}, propagation.Baggage{})) - propagator := otel.GetTextMapPropagator() - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - ctx := context.Background() - md := metadata.MD{} - md.Set("traceparent", tt.traceparent) - md.Set("tracestate", tt.tracestate) - _, spanCtx := Extract(ctx, propagator, &md) - assert.Equal(t, tt.sc, spanCtx) - }) - } -} - -func TestExtractInvalidTraceContext(t *testing.T) { - tests := []struct { - name string - header string - }{ - { - name: "wrong version length", - header: "0000-00000000000000000000000000000000-0000000000000000-01", - }, - { - name: "wrong trace ID length", - header: "00-ab00000000000000000000000000000000-cd00000000000000-01", - }, - { - name: "wrong span ID length", - header: "00-ab000000000000000000000000000000-cd0000000000000000-01", - }, - { - name: "wrong trace flag length", - header: "00-ab000000000000000000000000000000-cd00000000000000-0100", - }, - { - name: "bogus version", - header: "qw-00000000000000000000000000000000-0000000000000000-01", - }, - { - name: "bogus trace ID", - header: "00-qw000000000000000000000000000000-cd00000000000000-01", - }, - { - name: "bogus span ID", - header: "00-ab000000000000000000000000000000-qw00000000000000-01", - }, - { - name: "bogus trace flag", - header: "00-ab000000000000000000000000000000-cd00000000000000-qw", - }, - { - name: "upper case version", - header: "A0-00000000000000000000000000000000-0000000000000000-01", - }, - { - name: "upper case trace ID", - header: "00-AB000000000000000000000000000000-cd00000000000000-01", - }, - { - name: "upper case span ID", - header: 
"00-ab000000000000000000000000000000-CD00000000000000-01", - }, - { - name: "upper case trace flag", - header: "00-ab000000000000000000000000000000-cd00000000000000-A1", - }, - { - name: "zero trace ID and span ID", - header: "00-00000000000000000000000000000000-0000000000000000-01", - }, - { - name: "trace-flag unused bits set", - header: "00-ab000000000000000000000000000000-cd00000000000000-09", - }, - { - name: "missing options", - header: "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7", - }, - { - name: "empty options", - header: "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-", - }, - } - otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator( - propagation.TraceContext{}, propagation.Baggage{})) - propagator := otel.GetTextMapPropagator() - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - ctx := context.Background() - md := metadata.MD{} - md.Set("traceparent", tt.header) - _, spanCtx := Extract(ctx, propagator, &md) - assert.Equal(t, trace.SpanContext{}, spanCtx) - }) - } -} - -func TestInjectValidTraceContext(t *testing.T) { - stateStr := "key1=value1,key2=value2" - state, err := trace.ParseTraceState(stateStr) - require.NoError(t, err) - - tests := []struct { - name string - traceparent string - tracestate string - sc trace.SpanContext - }{ - { - name: "not sampled", - traceparent: "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00", - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - SpanID: spanID, - Remote: true, - }), - }, - { - name: "sampled", - traceparent: "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01", - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - SpanID: spanID, - TraceFlags: trace.FlagsSampled, - Remote: true, - }), - }, - { - name: "unsupported trace flag bits dropped", - traceparent: "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01", - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - SpanID: spanID, - 
TraceFlags: 0xff, - Remote: true, - }), - }, - { - name: "with tracestate", - traceparent: "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00", - tracestate: stateStr, - sc: trace.NewSpanContext(trace.SpanContextConfig{ - TraceID: traceID, - SpanID: spanID, - TraceState: state, - Remote: true, - }), - }, - } - otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator( - propagation.TraceContext{}, propagation.Baggage{})) - propagator := otel.GetTextMapPropagator() - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - ctx := context.Background() - ctx = trace.ContextWithRemoteSpanContext(ctx, tt.sc) - - want := metadata.MD{} - want.Set("traceparent", tt.traceparent) - if len(tt.tracestate) > 0 { - want.Set("tracestate", tt.tracestate) - } - - md := metadata.MD{} - Inject(ctx, propagator, &md) - assert.Equal(t, want, md) - - mm := &metadataSupplier{ - metadata: &md, - } - assert.NotEmpty(t, mm.Keys()) - }) - } -} - -func TestInvalidSpanContextDropped(t *testing.T) { - invalidSC := trace.SpanContext{} - require.False(t, invalidSC.IsValid()) - ctx := trace.ContextWithRemoteSpanContext(context.Background(), invalidSC) - - otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator( - propagation.TraceContext{}, propagation.Baggage{})) - propagator := otel.GetTextMapPropagator() - - md := metadata.MD{} - Inject(ctx, propagator, &md) - mm := &metadataSupplier{ - metadata: &md, - } - assert.Empty(t, mm.Keys()) - assert.Equal(t, "", mm.Get("traceparent"), "injected invalid SpanContext") -} diff --git a/pkg/trace/utils_test.go b/pkg/trace/utils_test.go deleted file mode 100644 index 089479e..0000000 --- a/pkg/trace/utils_test.go +++ /dev/null @@ -1,204 +0,0 @@ -package trace - -import ( - "context" - "net" - "testing" - - "github.com/stretchr/testify/assert" - "go.opentelemetry.io/otel" - "go.opentelemetry.io/otel/attribute" - "go.opentelemetry.io/otel/sdk/resource" - sdktrace "go.opentelemetry.io/otel/sdk/trace" - semconv 
"go.opentelemetry.io/otel/semconv/v1.4.0" - "go.opentelemetry.io/otel/trace" - "google.golang.org/grpc/peer" -) - -func TestPeerFromContext(t *testing.T) { - addrs, err := net.InterfaceAddrs() - assert.Nil(t, err) - assert.NotEmpty(t, addrs) - tests := []struct { - name string - ctx context.Context - empty bool - }{ - { - name: "empty", - ctx: context.Background(), - empty: true, - }, - { - name: "nil", - ctx: peer.NewContext(context.Background(), nil), - empty: true, - }, - { - name: "with value", - ctx: peer.NewContext(context.Background(), &peer.Peer{ - Addr: addrs[0], - }), - }, - } - - for _, test := range tests { - test := test - t.Run(test.name, func(t *testing.T) { - t.Parallel() - addr := PeerFromCtx(test.ctx) - assert.Equal(t, test.empty, len(addr) == 0) - }) - } -} - -func TestParseFullMethod(t *testing.T) { - tests := []struct { - fullMethod string - name string - attr []attribute.KeyValue - }{ - { - fullMethod: "/grpc.test.EchoService/Echo", - name: "grpc.test.EchoService/Echo", - attr: []attribute.KeyValue{ - semconv.RPCServiceKey.String("grpc.test.EchoService"), - semconv.RPCMethodKey.String("Echo"), - }, - }, { - fullMethod: "/com.example.ExampleRmiService/exampleMethod", - name: "com.example.ExampleRmiService/exampleMethod", - attr: []attribute.KeyValue{ - semconv.RPCServiceKey.String("com.example.ExampleRmiService"), - semconv.RPCMethodKey.String("exampleMethod"), - }, - }, { - fullMethod: "/MyCalcService.Calculator/Add", - name: "MyCalcService.Calculator/Add", - attr: []attribute.KeyValue{ - semconv.RPCServiceKey.String("MyCalcService.Calculator"), - semconv.RPCMethodKey.String("Add"), - }, - }, { - fullMethod: "/MyServiceReference.ICalculator/Add", - name: "MyServiceReference.ICalculator/Add", - attr: []attribute.KeyValue{ - semconv.RPCServiceKey.String("MyServiceReference.ICalculator"), - semconv.RPCMethodKey.String("Add"), - }, - }, { - fullMethod: "/MyServiceWithNoPackage/theMethod", - name: "MyServiceWithNoPackage/theMethod", - attr: 
[]attribute.KeyValue{ - semconv.RPCServiceKey.String("MyServiceWithNoPackage"), - semconv.RPCMethodKey.String("theMethod"), - }, - }, { - fullMethod: "/pkg.svr", - name: "pkg.svr", - attr: []attribute.KeyValue(nil), - }, { - fullMethod: "/pkg.svr/", - name: "pkg.svr/", - attr: []attribute.KeyValue{ - semconv.RPCServiceKey.String("pkg.svr"), - }, - }, - } - - for _, test := range tests { - n, a := ParseFullMethod(test.fullMethod) - assert.Equal(t, test.name, n) - assert.Equal(t, test.attr, a) - } -} - -func TestSpanInfo(t *testing.T) { - val, kvs := SpanInfo("/fullMethod", "remote") - assert.Equal(t, "fullMethod", val) - assert.NotEmpty(t, kvs) -} - -func TestPeerAttr(t *testing.T) { - tests := []struct { - name string - addr string - expect []attribute.KeyValue - }{ - { - name: "empty", - }, - { - name: "port only", - addr: ":8080", - expect: []attribute.KeyValue{ - semconv.NetPeerIPKey.String(localhost), - semconv.NetPeerPortKey.String("8080"), - }, - }, - { - name: "ip and port", - addr: "192.168.0.2:8080", - expect: []attribute.KeyValue{ - semconv.NetPeerIPKey.String("192.168.0.2"), - semconv.NetPeerPortKey.String("8080"), - }, - }, - } - - for _, test := range tests { - test := test - t.Run(test.name, func(t *testing.T) { - t.Parallel() - kvs := PeerAttr(test.addr) - assert.EqualValues(t, test.expect, kvs) - }) - } -} - -func TestTracerFromContext(t *testing.T) { - traceFn := func(ctx context.Context, hasTraceId bool) { - spanContext := trace.SpanContextFromContext(ctx) - assert.Equal(t, spanContext.IsValid(), hasTraceId) - parentTraceId := spanContext.TraceID().String() - - tracer := TracerFromContext(ctx) - _, span := tracer.Start(ctx, "b") - defer span.End() - - spanContext = span.SpanContext() - assert.True(t, spanContext.IsValid()) - if hasTraceId { - assert.Equal(t, parentTraceId, spanContext.TraceID().String()) - } - - } - - t.Run("context", func(t *testing.T) { - opts := []sdktrace.TracerProviderOption{ - // Set the sampling rate based on the parent span 
to 100% - sdktrace.WithSampler(sdktrace.ParentBased(sdktrace.TraceIDRatioBased(1))), - // Record information about this application in a Resource. - sdktrace.WithResource(resource.NewSchemaless(semconv.ServiceNameKey.String("test"))), - } - tp = sdktrace.NewTracerProvider(opts...) - otel.SetTracerProvider(tp) - ctx, span := tp.Tracer(TraceName).Start(context.Background(), "a") - - defer span.End() - traceFn(ctx, true) - }) - - t.Run("global", func(t *testing.T) { - opts := []sdktrace.TracerProviderOption{ - // Set the sampling rate based on the parent span to 100% - sdktrace.WithSampler(sdktrace.ParentBased(sdktrace.TraceIDRatioBased(1))), - // Record information about this application in a Resource. - sdktrace.WithResource(resource.NewSchemaless(semconv.ServiceNameKey.String("test"))), - } - tp = sdktrace.NewTracerProvider(opts...) - otel.SetTracerProvider(tp) - - traceFn(context.Background(), false) - }) -} diff --git a/pkg/updater/updater_test.go b/pkg/updater/updater_test.go deleted file mode 100644 index bc703a3..0000000 --- a/pkg/updater/updater_test.go +++ /dev/null @@ -1,74 +0,0 @@ -package updater - -import ( - "testing" - - "github.com/stretchr/testify/assert" -) - -func TestNewUpdater(t *testing.T) { - u := NewUpdater() - assert.NotNil(t, u) - assert.Equal(t, "OmnTeam", u.Owner) - assert.Equal(t, "server", u.Repo) - assert.NotNil(t, u.HTTPClient) -} - -func TestCompareVersions(t *testing.T) { - u := NewUpdater() - - tests := []struct { - name string - newVersion string - currentVersion string - expected bool - }{ - { - name: "same version", - newVersion: "v1.0.0", - currentVersion: "v1.0.0", - expected: false, - }, - { - name: "different version", - newVersion: "v1.1.0", - currentVersion: "v1.0.0", - expected: true, - }, - { - name: "unknown current version", - newVersion: "v1.0.0", - currentVersion: "unknown version", - expected: true, - }, - { - name: "version without v prefix", - newVersion: "1.1.0", - currentVersion: "1.0.0", - expected: true, - }, - 
{ - name: "empty current version", - newVersion: "v1.0.0", - currentVersion: "", - expected: true, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - result := u.compareVersions(tt.newVersion, tt.currentVersion) - assert.Equal(t, tt.expected, result) - }) - } -} - -func TestGetAssetName(t *testing.T) { - u := NewUpdater() - u.CurrentVersion = "v1.0.0" - - assetName := u.getAssetName() - assert.NotEmpty(t, assetName) - assert.Contains(t, assetName, "ppanel-server") - assert.Contains(t, assetName, "v1.0.0") -} diff --git a/pkg/uuidx/uuid_test.go b/pkg/uuidx/uuid_test.go deleted file mode 100644 index 4ec2493..0000000 --- a/pkg/uuidx/uuid_test.go +++ /dev/null @@ -1,98 +0,0 @@ -// Copyright 2023 The Ryan SU Authors (https://github.com/suyuan32). All Rights Reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
- -package uuidx - -import ( - "fmt" - "reflect" - "testing" - "time" - - "github.com/perfect-panel/server/pkg/random" - "github.com/perfect-panel/server/pkg/snowflake" - - "github.com/gofrs/uuid/v5" -) - -func TestParseUUIDSlice(t *testing.T) { - type args struct { - ids []string - } - tests := []struct { - name string - args args - want []uuid.UUID - }{ - { - name: "test1", - args: args{ids: []string{"123"}}, - want: nil, - }, - } - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - if got := ParseUUIDSlice(tt.args.ids); !reflect.DeepEqual(got, tt.want) { - t.Errorf("ParseUUIDSlice() = %v, want %v", got, tt.want) - } - }) - } -} - -func TestParseUUIDString(t *testing.T) { - type args struct { - id string - } - tests := []struct { - name string - args args - want uuid.UUID - }{ - { - name: "test1", - args: args{id: "123456"}, - want: uuid.UUID{}, - }, - } - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - if got := ParseUUIDString(tt.args.id); !reflect.DeepEqual(got, tt.want) { - t.Errorf("ParseUUIDString() = %v, want %v", got, tt.want) - } - }) - } -} - -func TestT1(t *testing.T) { - valMap := make(map[string]struct{}) - num := 100 - for i := 0; i < num; i++ { - exCode := random.StrToDashedString(random.EncodeBase62(snowflake.GetID())) - valMap[exCode] = struct{}{} - } - t.Log(len(valMap)) -} - -func TestAffCode(t *testing.T) { - - code := AffiliateInviteCode(time.Now().UnixMilli()) - - fmt.Println(code) -} - -func TestSubscribeMarkCode(t *testing.T) { - orderNo := "20241213222445955" - code := SubscribeToken(orderNo) - fmt.Println(code) -} diff --git a/pkg/xerr/errCode.go b/pkg/xerr/errCode.go index 6d4fbc1..28c104a 100644 --- a/pkg/xerr/errCode.go +++ b/pkg/xerr/errCode.go @@ -85,6 +85,7 @@ const ( SubscribeQuotaLimit uint32 = 60006 SubscribeOutOfStock uint32 = 60007 SingleSubscribePlanMismatch uint32 = 60008 + SubscribeNewUserOnly uint32 = 60009 ) // Auth error diff --git a/pkg/xerr/errMsg.go b/pkg/xerr/errMsg.go index 
7b43cc9..f235187 100644 --- a/pkg/xerr/errMsg.go +++ b/pkg/xerr/errMsg.go @@ -70,6 +70,7 @@ func init() { SubscribeQuotaLimit: "Subscribe quota limit", SubscribeOutOfStock: "Subscribe out of stock", SingleSubscribePlanMismatch: "Single subscribe mode does not support switching subscription by purchase", + SubscribeNewUserOnly: "This plan is only available for new users within 24 hours of registration", // auth error VerifyCodeError: "Verify code error", diff --git a/pkg/xerr/family_err_test.go b/pkg/xerr/family_err_test.go deleted file mode 100644 index 35842e0..0000000 --- a/pkg/xerr/family_err_test.go +++ /dev/null @@ -1,19 +0,0 @@ -package xerr - -import ( - "testing" - - "github.com/stretchr/testify/require" -) - -func TestFamilyErrorCodeMessages(t *testing.T) { - require.Equal(t, "家庭成员数量已达上限", MapErrMsg(FamilyMemberLimitExceeded)) - require.Equal(t, "已绑定家庭组", MapErrMsg(FamilyAlreadyBound)) - require.Equal(t, "禁止跨家庭组绑定", MapErrMsg(FamilyCrossBindForbidden)) -} - -func TestFamilyErrorCodeIsRegistered(t *testing.T) { - require.True(t, IsCodeErr(FamilyMemberLimitExceeded)) - require.True(t, IsCodeErr(FamilyAlreadyBound)) - require.True(t, IsCodeErr(FamilyCrossBindForbidden)) -} diff --git a/queue/logic/email/sendEmailLogic_test.go b/queue/logic/email/sendEmailLogic_test.go deleted file mode 100644 index 20c8323..0000000 --- a/queue/logic/email/sendEmailLogic_test.go +++ /dev/null @@ -1,28 +0,0 @@ -package emailLogic - -import ( - "testing" - - "github.com/perfect-panel/server/pkg/constant" - "github.com/perfect-panel/server/pkg/email" - "github.com/stretchr/testify/assert" -) - -func TestSelectVerifyTemplate(t *testing.T) { - sceneTemplates := map[string]string{ - "default": "DEFAULT_TEMPLATE", - "register": "REGISTER_TEMPLATE", - "delete_account": "DELETE_TEMPLATE", - } - - assert.Equal(t, "REGISTER_TEMPLATE", selectVerifyTemplate(sceneTemplates, "LEGACY_TEMPLATE", "register")) - assert.Equal(t, "DEFAULT_TEMPLATE", selectVerifyTemplate(sceneTemplates, 
"LEGACY_TEMPLATE", "security")) - assert.Equal(t, "LEGACY_TEMPLATE", selectVerifyTemplate(nil, "LEGACY_TEMPLATE", "security")) - assert.Equal(t, email.DefaultEmailVerifyTemplate, selectVerifyTemplate(nil, "", "security")) -} - -func TestResolveVerifyScene(t *testing.T) { - assert.Equal(t, "register", resolveVerifyScene("register", 0)) - assert.Equal(t, constant.DeleteAccount.String(), resolveVerifyScene("", int(constant.DeleteAccount))) - assert.Equal(t, "unknown", resolveVerifyScene("", 99)) -} diff --git a/订单日志.txt b/订单日志.txt deleted file mode 100644 index 5790959..0000000 --- a/订单日志.txt +++ /dev/null @@ -1,351 +0,0 @@ -292.317µs | 111.55.73.248 | POST "/v1/public/iap/apple/transactions/attach" -60622-03-03 00:00:00.293 info HTTP Request duration=6.783µs -caller=middleware/loggerMiddleware.go:113 request=POST -tapi.hifast.biz/v1/public/iap/apple/transactions/attach ip=111.55.73.248 -trace=9519d9fbf61839765c98546fe63e1359 status=404 query= -request_body={"data":"tTtGK8Fl1jCIFGAWFfxMcLYVrzWNtfbClOoTrAd5vN+WklHZ0LQF8hPhiOv+jVpOq7BQ9snx -TeRa8rqbvadI5ooqupk7OcwFC7i44k10kmDucWqLVutj2o1Rw7rr0ZgVuTFngTbiOZT1gte/tM6fKr5IXrWyJOqFqrCUow -GHkmR2EHoLHxCwFiAc8mDBJ1UZpHd6zQXEDpnE9kUEBkzgIaHEY5QehsGMnkzKUfevcIF23qxCh8KEK6a0LsKLmHcukxmJ -uHAAmFh0zGNW0S5G8RY+bzOrw2yaQuGTNXV2urUdOuNj07pDfO5+a5yp/F98gIQX8Z+DllVqRvG7RVB6i5cMRgJkHGsRjc -+41s+coGS06EdlqokyrPghDkBgoefOSo1fpIbr4Wyq3Dfb6HmFrVGc0iein9TVfJgqW1oXAw9hUJVS2BA9PBLd4RxDT6OV -ODDFobowjMv9ASS/lGl8RmBIp3j/blHbeDAJZokG2kn1vY6Q4Q/pszM8iR3hUT0bj/v0TSpOOp9X20Pt8kPgpcRKm869u1 -iDZff2ZgZs3CVlfjjvxCSy8AH6ZXImd1KkNUzo3rtQD0L6JJVoTdW1/ViRdhTFK+4dbLp/Fw6ezVxVZ6jIV04ITtN/U80+ -VaWOz+qQCiEFQ2xwfDgY2hy4pDUoXIkyDXqcqV/DbS7JJPXojgRP+E3dnweMLU5bcGs1syql3uA43vE3naEGhrXLCq58hV -ScfAmIcGYviE0k+Zl3RbdPQIlgXFjcqdEGXqnVuWP9u5ZJT/civiK4lqw2HPcx2UudM7l+hh1VM0ZQAU74KlGF2AQh7S1h -sMh5TBrj9nVd8qw0IOL4XHv9TEfXRP+lYI/Lwwop8cgWa4s+9NFd6JrFMXZRpmLHS2G1c280tUBt95cbPzVL111FIL7EPD 
-jseLoy2fr/iOVNkYHeGPvx4e2hI5/nkqTHH00p7C0stB5qcLt816FXbfluVyR96GeWun25jgSPI8cYwRLqRom8ycI8bE2t -krQeWJBtfFS8TVLeMeEGPYJpUZKrGlxTN9Dfz3bAxd21TuoRoXA2DYsJGlxLeR8cz+6pa2dD1TD1kr3XwVDzNOhee7vI03 -I1H4ytsPjnL7JJBke5RhWiCw7hzc+8b9TgTKDOWx15aIeLKudgmRcwldtpX0srxeAsUysTKG+Oj9MhvosHTSHfO/WvxqM0 -pcvkDGi9ASn3N1j+lNh+iWVxzoZRZ5YuDLDLPlU9qaE/q0EI125/DC4qTYVLUFbSUG76iksxMOeTg0MJjOWHGmgtB9muWX -nUKf42JKJ0gvhIm5J5VUJ+hm2bjEeOIvr9lZFu69/2/BKDbG7/1MXUQ68SCD2MVhTP11kFB7L/rQZUyBJMRHsUSxCfI9SA -pAbNPNiEeq0nZTijn/W0dkAJGNIYbgDJrJxYcwIqb19PS41o15l3hqZxymtvlA3HG9h81UHQvsyZNgVIFVI+f+Wzs9EKTX -764FcZjJuirCcLkWAeQYN5dba+ZdzcT+uLipVihexSpR/yP3hX/ctyyfWJKuiXUfw0DDO3wUNxtpGd9/3RQE5KeEDwCbN8 -VJDs2WGiP89Eza7RY5aZfwMUV1KNHdag2SUEVxLMyjxhVqiVhQ2rfh6JjbEM1zaEYxzw6aSDLsPkWb3VIooSYC3sFSePsd -PSV941vuX6JP+F5uW0nH/Ei15LTwxbdLPFi376fBC28RNP3/oLsHZ4DOzJDeVVLy7CJn5GkkDDNSWlhltG7SVlSyYkJLUz -GC2z+HUID2XzOH+68mUax7sqbYTl02+f971Zy7j87ZyLcNHrGThwRkfQRl2EskMfV0tnsX/MkvC6MN7Y12E+Y4+mYJyr7S -1Xn40EFdOYN52aPwttGT7Lyv5fync+UQp3BbICo75lD3O1mamcovADC52PB2pgQ3oGQuQ+R4UrecPCSFmCjQ72TZunJO4h -YoHs0oq739Q4WPlHvOlRThAGTXNweIUOGxxl8OD9RQ91ESDH16XOzO/ezZjk7ueCfJisXDGH1fDhdo4NCDKfgUfu/d4d9O -XIJnVITz5RJSL7/NpCN7UmLEhu1trxIfnsV0mtz3e4bvpNuLfvZ+iDijZUGyu1mS4EI4hzLvqqN6PU5dwvMu+6LyIHrm83 -JcjAOxwmc1PrKdUbIcE6i4Iwz/GGY2XC52vh6Wi7iuaYPBm3w/VMZjnkfeNQJuAeUvnTLgu/qXssJAc1VaHNnQDKGjeOQi -4KAYBUr/86VoauHSIJ4NmGeht6BUa9kV1+9cMU2TfyzdWc9dZGjF1uFKwyMYkFgF6v2lbSykrNG2a5bbhdtMqe3pHjFZq1 -Qp2jbqVvwJuEurIJmuCbxecatbRLg3xoqxlKtBUrat/fko52vEAwX+zxCrTgAp8du2HGBYDTnj6X9orWR88bzRaT2ec+lA -hrJi7r3VzPVqzrywQ6/7sjv1Js2obtuNsTXJ0/LFyXrWPZDQpvCbZzBlQuzwyZalTZ+O3jZPklUOB1Y1Y/iqr8z4gHI6iG -0f68bjZ18hvtin8xlFGARZs9MvG14vGR1taSJaa+EaEQOTjpE27z18ZV3+hocXlFbtoxrB4QAXP2b4hO35cigBByvNnAci -74FqEoBuuPrMtkpXLK72hK+IFAXtB1CU9he6zVq+uP1dhUZGRVoUpFc/bX9Isz97uhiSJ2ZzQezcvtwTa7ose2DCpibX31 -3uCjTTg968pT0I5koRIXvo5l4Cnas4B7pJJNFVuwKuqhJ8OF/NH5QG/C6p+qz4h4tw+ts0yWow1R2nVbttGwutITo1zYKL 
-n7R9C2kJjQnXp24QghjuEbWL8X3x7/VuU09Do8wsqhnh8mpubzi5bxWv3ilq1G/Aivsgo5UAbOHUIyi0KdTWevN6YLom5g -Hg+8Gid3BF27cGah5Iktnl3AsnttgARqNRdw6rSBcThLee8Uxlm/jBUPX0TbK0idD/FSZD/bvh8sveiRve6z4d31m/1hqp -dcGXX3pgRjQZ6B5Y0vt0L1GiQvzdUGW9r9XN+AnkISI7K3zBFUlc4OX+roTdADAve+k0D4XIqQ0pyM8NgwqHlhV3ZFtCev -Wk2xRRr9Sv/Lhbk1ZxJPlafgBt9336Dd+7ZmNB9L1pcc99NKtSYxmyKQUwpCMhso6f7ObhVbKzBXG6trIgII+FV5Ti8mby -tEPp9y0K/FJEFydIM6SyNlTSJxU78ZuHGEadZ9iJYRLkPtGlL0PJi6CYyFTc+EgOIgvcmLTTqIIfhMaVFk5wXg8Je5US/j -r5VHG7xgTvgPTgs1PLlk7Ul9XzqnxORSRONxVG2F8SBT4JJMC8w2AG1cVqT3xSwKDmPv/F7NVxg2NikpAf4Rt8NVyKhDMR -KMsE56U86OxfcuHtQej0Vq3ev+OjNl6QusvfDNIFK2b0Ysz6A5xYq7eNcYUO4XGsDgAOKjUkai3VSwxGZ0jpcQbn1TVIfj -wdxVUz51fYUC2RFjNHOB4XKSWZw14xTpiab+8EJ//go7l3PczR1muTHBL+gCLmGP0hK467UfLtpcFJWs1NCKwxzKvxuLAO -52eTVVFZezTjd/RRLlLjbrppp977/AO+EMh+uEfBCMIVFzYzeC79lG5EyULtccNFwZ7ICn8HxiLAtzAurf3Xt9Cn49Ldaz -rkl4yQ46hV9el6Ym0pkGhwMq/z9o5BcTotueyv2+TjaT/CtdQrkhGIsowTEWINlggT/yio7+q0HSJ6Le6MY8oMMzcS0oI6 -ebtNuKvNjsAGwe8WvJu1ynhOiXxLlTpFDrUy1VTAbTfciedI8/Epe71f8LKX/wbPn+N8BG4PFeMDGIfZTSQyX3Am8wN/Li -xq6qg///kYPAXnaux8AAvc7bkEN8lrNmln2aB3GWT97RCQyqDaUgyPXm+HU5Y2wVpa3frvgAipmydDupFzbds/1utNd16k -fTpLXr3pf2voGCpJRrnL4ZgmC+Rvu9uXBeRibWJankdo0KmknyLCVatPA5Dy0OCwG/xm1TcVD0kru4MZGy/pLsEASmg6Ye -WNgHzxDklKNwFOXkcuCT/nGbbZFccXSiz87C2d9Xq7UElBjVP6MGE0H1NhcV1LcKlafHNiEsG8m8ZJMedzawxejmHwg47/ -Gh+9mMe/tu62NqJS81lqdx5Hy9nA3/5ugp7goLahTsO8X8cifQnXfg/cAMxbdrGUyK2LUa7IfVkJf4EXWlF+xCXE7YfLOO -W2DYbcrdkFKlhzNvN8vt/Rsj/QwcPOE/OQlu0GycDtofb6G3zy8D5RnLQHRRDFHR0dV+5FQlZP63HcnUqakG0VDwd3T4tz -Dkht0hcDYsGmM9XPAOLgDLnGHy7S1yPfgwRn/p10ZbWjRhyJtT7CMBbTo9h0tl/pZYYCkGrco/Bq7Pptnh/6/p1mgbhSjM -Vz7ZgzIrrhJ3bwCuJomHRr1guNoyfrljBebzBiIvyWza0jBChxxADbrEr6B82IH3k2/WTWX3qPEifKHfRr7g3qX28xBmmc -k0Mtp4sHEFn2sGygQC9ZnQZf6qikQo1oRNw3UWakWyqvAF1s9i6VSJ0XFHZ2t4HINJcgmUHq53vJ5xY0V+R6KYUNpvPQi7 -+OViyM+B7wAR6zRKjz2/zptO3+75OKqxVCGL6YInRczu6bY38EuniPrMPw7hDDTms77gbx83Z3Em2CQhomF+AThXU1dlEV 
-AXcHfogcfgfxVWz71zaLPeEEXdfmmOvmC5yzl1j6lJnpzB4I6PwG5NttFYjvPLPOY0Afq13PgHwhqi0GpUMMzTZPsBKP0c -Cko/BkGckc2uJ/Nm5/o2tnFJAtiN9tTmYIf7rBK+y0B43S4OzQRPr3t/8ePFoLgYKsiu0pQjQZsGegAM/dhuQEYIsf3tpm -wY3X0V63e7AB6amGX3HYtPpLpmC6xXAElxliZ3sFYenoISts2Ua61uxsIRfeaCH4OoWVG0c13iEFHEZuW8x8ij//5N8fjt -0VcCxwlIXFFvFrMx4wRWVrs2t98Pm8RdsgAzYARh5uHFioUTUCJ4XxtmOqN/Zwx4qi2famDFd32+SK/a41KF0uyZl2jS/m -VhbensK4hc4EDam6vlGXg95FqPqv9eq39FcHEzjCiY0FxJ+/qs9JtwkefjxtaITed7mMbIxnu+4CIwHPkVTNwWX8+HhyBH -08Za0wGg6mli9m83yL8ujR58DygyzlRCX0eRlwMl/az3DFKNk0s2RsHFp21l4fRpPK1J+plAJEctTZOpXqqHhNCRj3VU4h -r0icrMfc2JuppZ67XyocOMLm+R6fFyBPE05P0qnMvTCdYcgoqUOy+XOgJfBYs6ujt6twnAxhs7o8Q3uRna5B+9NbSLSg7N -OZQbxDaQMmvEVHmq/DRy8aNWrlZ0Cb6jnh2UkBY5IP/Z2ah5MbPNK+ohHKYj5nHn4UzD9GO9quUlUNtmj8b75A3i7TlfPA -yuzgtzg/ydMsQ7nmQ1/f7YHu57f3mTISS54L4WaBvLPaPRZRYEPwb3iQyPx2Lk3OSuF71WmnJ8myROJrMeupi1kNWyKS6M -hner95cOy6yLJo71g0rKBhceYFXBrNTOXD0X+joWDFq6Ent6Za1LYv4Zr+Y6ouQGWJeySO12O0yGWyOjCWamPG48Im+dgX -2rjH+MAibPXVAskMoHkRrs+QOCi1GGS3PJemxPCGjDcvqc0fs6tCW1vB0y103nL264kBWig/6kBelD2y6PF2wqz73kRL2N -pN+doaSPr9gRMZO0u6aVmPp/LMZewE7cU9OSB4sM/XuS/FfyVIooeaYnvjgWV1ePv9IsABmOToHN68Gidwmy9la1Kt+OEu -h/ZGR4WQl37f955M1kLjGpeik5osLyRXKFvg29N/i+0nd7eqA09FYGTvapKlnFzBMUomST5ocTF4MnQGl3AvtDwpSgQHzu -oBKyQVG0POLlD5Xz+CaNe4mdy00VrLbZK95eJWCQrsJHtgfquQ7qNEhQfUi3X7aRBXKzu/gNAqPocxjs8ZHTOiWauj+HMU -4Exebsp2egbZEUT7PXdqfl9meWXpfVx6CIazL4+im2ba+8ZOjNVrpmRvBjx2DkZuAzz8Q6mq+50ZFss5bJunytwNR4AQ4/ -V88SsgKk9kwb9NcQzBKloVhgeqGBBCPWT9oHLYjJCycXDpCGqmQme3H5e/bKdp3Q8N9Ax5L9cdbnvJ9CawRIVG3OG2OeoN -QCFrV2VGFwuwct6Hk5SUDRpZho/jfqIE2AyzQBaXEXTkeizXTWCYs6wfOYLIzIGEeIWun5uKczY+NTpK5NceoO66SuHF56 -+DpkuOtXtOfX0YV784Bhp8bk9yHTrDOMDbR08LMIqy8ASNo/LrIuK8TzwFQjDr5XjhEXsFeJcszuBVp5rLYP8tTQnPbDUv -hXiu3wVVnM2BhcI5xRkxwbgiO9XGqcTWlz50MqQockp/2sKamwQ8eQdFOP39gmb6pudSZA3ZxXQqa4FCsShT8IYcIIwuGD -MMWYz5GiUKHDeuV2mBHzHDULCiAYahMkeGq7powH5Edyi7GRcrFX8KKQO5yKEBDG+ZnqNr8tS3IP5ljRLNE+jcBEUkhG6Y 
-HBh0wvvHGkngWH3D/XKr4w4VbTFS970N6/2iLQdnDDPGau4H4B+KjVk24HTSDWuRiUDmaVJOM6DgeGGJx6tq1BEYM/vdL6 -ygUAHQwxpjR2sZ91B0oxZZy3oevq613PxE7NwcuN43eQxWqHy/YbBUDQuoKDbRv4JL15NZi+qZ8tboxBQQiYRFEf7JmiqH -UQptTKiy0GlT6bbVqc0BisGFmDaEzIVSVI8zoc2G3pzV7KKjgmkUo7wpw7nnadSn7Eniy/0v5imOXtxCmZhu5ty4auKuwn -hslnOPOIsZN/I0g/rOKShNiOa2hfNaT/rwSaiJ/XXTIY9bDrZego+zrqje0Fyy/89HgMIhyCJMjG8LNlO8R3qATT6ZAl8D -pSECeEGgeX8waV4qIsb8awiPDHJH3Doo3F33cT+n+mPmNoZsjW1RvRwckWOQI1boNGryeGJxTCPAJyv72+3Dlh8ZHs7BRv -UzAPTAY4gZZaKI0qLpIgQHEnxUkCTLjVfhVV2LQw18HLriKevwcTODVhsOqWfcIwvzUTVBLnQCJJGrJA5r1NMf8wIxgazg -842GXYnXGkCFengyGOALd6A1AJnpMl2thUy+lgGFFl7Bvq43COw6dtFjP4i4csuQQNQ5E+lAVpHq7Z7GLOwBcIk5A=","t -ime":"2026-03-06T16:05:19.828995"} span=7ed5023bad15ce12 user-agent=ios api_header= - response_body= -60622-03-03 00:00:00.293 debug 404 Not Found: Host:tapi.hifast.biz -Path:/v1/public/iap/apple/transactions/attach IsPanDomain:false -caller=middleware/loggerMiddleware.go:117 trace=9519d9fbf61839765c98546fe63e1359 -span=7ed5023bad15ce12 -[GIN] 2026/03/06 - 16:05:22 | 404 | 411.228µs | 111.55.73.248 | POST -"/v1/public/iap/apple/transactions/attach" -60623-03-03 00:00:00.217 info HTTP Request duration=5.321µs -caller=middleware/loggerMiddleware.go:113 trace=42f2998ec5a51a428853673d58afcfb3 -ip=111.55.73.248 request_body={"data":"i8Fi6em6nBiO1fk3EfUdUYSZpcBI2sBb5fPSZ10xOh1dNtM/ -MVwNkoVyPA5S8yqLimJFTnmfXgQ9atlbc5QTrwCVM2EVyrJgxRUzrH43StlDKKSBVu+rAigta+wXe5y6muxpWrxN8tBIFv -KF0s888bEIuYC5Kz887n2UPuYR7H0mvBhnyeNRlsS8x/QDwhdfsBayNv1H5Wg4WPWAeZs9buGJGbNgaUgCsVP3oNgRXL3k -lP56moq0vSLmeblMxsZ7fgThPeZfZaKqTeVkzVXuGySJkrh8cq9DhgV/0n84XONjM17x5drwrfVoWiz3G//XvpNSowb8o7 -kpkIKKw7y4p1U3vzrAzIXK3iBztEA3UOv3a4yWtHZ+CmmLWrG540y/xCZLeonndN0pOrrSF9DRYLaUUquX5sTTg0xXmuXZ -8x3mSaR7bxZrH1oCpE0kvcGsN9BSyO4V6uWKnItWN3veD/b9Uw54tQiwFG8HGehQQuzx6ftnvqV9kJJTrMRojYrqc95EGi -rntNmtYBLN5O6rJnlEVkSvqDTpeZaco0JxBI//pmJvecL6DJPak56MRQgZJyQmnhuWloK45grnVgvoQE22LQv/hC8seTyw 
-cNJNzHBncSzd5HfFQGg337DaPkGiCaLz3MV8lFKCqRIr6NgtfccLq8A8ME+AzFn+SMFhI4t5FJEMnIlvSfndGmauubozSq -kx7ZDNlwadXntU3H2CdYm1kolpaKi2MPnUUBTN1DeapX7ci0XoEdIfJbsxzRnC7PTJr8L+iBvbq76YykvoYRwMjsRjS0yh -DTPP5T5n7uJa/AYVHyD8RHPwC8yvVydktkBikywx2ik5XqF7uHLiJD47uRTQt9e9RdIyOUKE13JlYg3IRYUdV6SOvq/MFf -9J6gIG68j5Z+YMVp0kW2AMgrv9F8AfhXyIjcJ6P9xBTFOOaeLcAATPzkd9dh8QJjemf+Cwg5vfpCOzJlsMRME/kWZT5Xu5 -LaHCN+AWtmnor2VJ0IeVxYNFPWYZPDOwqvauoL2zuBuMhmy/OrFkDoaK1/RxTB76/5bRQcAHFPMm9VDFNeQ4vpe6ImK850 -unst/Ra4ehaHhDJzsef0NyQejnG11xrZGnOdn+yRAkWLySLiXAqs6/g8dJmrAoC2d+kOhKHwKAkSoJSvtIxQ2BMALxD+mE -lvPzFAbI+kIi+DM5IuWeiy+DWGzRmOpcV72Mkiqcyc5KsxjA3RduSDOjpoV4HgZ1KawZEELHsqLeZiYy6RVgZf3MPZW4TN -cn2sx5B2q+pvlXC+F9qcUr8bUkAkDTySxZ7hWtO8R560wbz8CTNzqZ6R4HvYuUxQsVqc1sX/gh0K88EP/+0rc2rPdTUc69 -vpi/EkK3r1028o74Me6CX3fOh/bOePrfGZQn8zisvD4JHPoVXpzKbibAelUDZtNjLfeWHYXLeo+pHiRpt/Wm2YQMlxVu0D -DgIw8P9tMWdG3YYTiuNp+CAgpCQLnsovB1xAC+mEiMapA5pjw38U646a8MxIP2/6FdCfmwzs6owCdSO7HRrtY5s9cHU5jD -Ch/h9NvNPhFENWfNAgKitoMMK4b3qtSjUWbqJdkzkYBR0n8/sBOsC5vXmBHKZwkObu4AWa9ZioxI+bDjLsbe60WfwxwBcG -RgGr6dEm2/DKFPtmJOZgHY6uivzS9vxfzdMfXRotCuPp43AdQWod1V7j1IrDkpqpPtf3XuPuafRvO/z67sXhNAlA2fcNWv -iggW7H7ruocn2kZkasID87ou6xm+XSqyc3PuhW9wn870xGBNEHGuIn748eJElSTTQn0Sa5KT2Pt5m5/In9ZRF2gQHC+hf4 -i3bgGy39IxvUCUc3/8dx1r60SNxlOihtwtsk/peUJPKcJVnOLeyzYl+zdNlEzeH/eLOdWyUhgM+UyozkCwlvort/Jj/gag -m8dmal4vbZguAattBhHN8xnndE5tGKvHg7RNjvD53zgCHscm1dmePoJTfZnI1hHmayag5A84cT+OYN8wE/XsdkeqxH61fM -Hs1AGvTqrRBdAdoQEAgCicAfTF0Sd/ySBpItnHBRTZXFq8Mr/2rKwyDdS38WrJdF4GuWemhxN24H2UEbVZI/BZz0fep9kk -k0fPbBBiC3DpyjqYwo4cwRgmLRtXK1lIJ8LP5i32t3lnkHUusBkhcnuLSToklqipDbVrZ4zx9+bHEc66b9ws0CuRV3vCj1 -rT9gRAjKKuMmDFSwG4AYXyuuCaPHcxTYe0x3JrOkMA8bcjZE6Nc27+NkgkYmbroJ1AXFXgMHxrMSUgf0IQGFnhoKypSBSC -ftihIF2zAjdfSotYcV5TZb+ZKjwyJXqKrkW0T7oTNusUuy50a2n9JhwcBDkuDA182YhAsF7HWEYzEG6RsYJXLsmFnhBSZX -Ru3S1mpSes8PLULHlgfOFMNxzcsvA+N2f+YB+1kDKIOObsvEl5TLS26ZE8dBUcfYcl4yLilLzPLW49B10uvqGU5X+gwyD0 
-YfRoXHAdnSfWak3g5tFdbCrN1vktS8nare6t3PenJX3GcVGu9lOovi62Q9QPFM6i88GKuXTHYCBU3eZV9hfML4Hu7mLN1U -KhHDhm/6JAZWzYg2E2Eua9ucvXlirXfdaUq6bS2WGXY7w9y/lnlbi1arIiFqAxuOoz1/2bY3mokdIoFpo4zTBkWw/O9Z3r -DiY40pdW6Mt+jRD/Lpcka4sfZe5GdY33CzWkfYGNMM64tosIo3BW3JwnvXjFNG7dAdsEZHg7fU/IqTIrZSxOiGkj3jSgDg -zc9JewM7pvK9bYWB6sYx+qx+Ry5mWzDSQjAuMerxojldY0uEes9v1dYD3TcDVpoJ3GVu9xGBI19OPMzEfexeDIqP96F3PN -Q1Px2ubls4Pd+GLihPzfrJFcEnnAfp4H9dfVbt4LTgt4+US40mlOSDBSdtYt/v82YU9Nq9Ia0tdiuKw1S40zvlAIuH38kj -4LGyLuHoAACd4jxMREU5rBTk5KBohMQrITqCSYDBUo5ahAwGOvasNxuqshcz2Q+TflkVszc1DqvC2Al2q+XXbjZr3flILu -U+hJoSf7SvA0NZarsu7JZpso+MIW8zCtfG6frIe/cnc7u5LdRc4NdnMQuZEnfjECygQ6cp52ADF/WUoE8Ug560f0jEXV+p -80MWdEbWcesrl25rCXPRk6F5hmKgmag/GMQo8leCmWHtfPUWtS/Ki9lhttchbFnTGXff5OUImEc4rhUcqIRUAD1E4ZWqme -Is9sxnlIM0jXOlRENu/dYy+FT9qYtQUQoSsOAN+YUOqYYek6GHp+oQ6gCB2PUfTalyn6TooO+/JVGALHxhFBh85pxBmrBx -5BH2/X0F7XUIdaDTPMfprOIKoBLOzKkkHf5NPBny/QqlG/eobYTLwl0lEYscRwi1LHg+DZSW2lTyEdjJPU6AnWkgmWlqgB -dLkNDzORanHTd6G1vq3DSQGPHx4TQ8TD/wOyt4Bmfqd7/sGymvlKeB4qnFxn1KF3TAWg74B2CE30lREdWrdOwGuacr4Bo4 -Rg4cc9mutSxIlj3pH94FKHpF5fwrIZvBjGYgSsPDy2hj5QyYySsN+kTt36YWje43X6+Syi4SAs5HGw5cjy0CQ+nDrQRn8x -D0bajwxgPRWkB+YkOWk7K8uMrZfC38RwT99DYo+GNQXCBVX7sRPcZ91V5rU33Gk02pxvU6S87/o6q6R9BiBzthzo5mPTnQ -tNZ774w/TqdJcgyoexJQ4YxoroaQ8lLe5Sbq81ehLQOIE4MK3MkLisZCM659eL2d0OTcf2Odm2Yq58daMnY1Nq+bV+lE3P -6bM9MYf+Rdw5qvkwoveuAH5Vd/YbNzDq+IfUUYE1EvKWfMd3qejAtSI2k4/PUA/7cmooE0ONLcBrYSQgRh6vr9eZzc+2Jo -MvSFC5YqYaIalKNvnnGCmsLexRioSd7u41sBmUd4L07xh31wgvvOP1AvLtRRR5eS6//af5RlqB3UKqfzUIxrMcao/I3Dbs -zmE1UaajFtVAm3bI1+HaqiyYOP0kGJ5JGo370I9EFoj4kkZdVE840kTOzoupAyptK4GVuDYxppGkAVse2d6ECQdQX6r66q -jecXTsKHK35fkORr+DLzS/sli7p9rxmBgh59KI7hqawKEpFnQFb2wkjb7Ms/G+8tg0VwMaP77RQnJRlJuTeJSostBVkJ5f -IFnadoAKKiWdHtG6kzkzhykaDMlxnAWKJHfSqlc00qIOxjNcxfbiD05vfbzdwz0v0Tx22a6LgQsEAJDfcUhP9hPtycilrE -j3CJKlxwBuhqr1zng+ltMZB8w69MlrYVIZt101qW0LB2J8DLcgXe6DGJeP/ycTRU3PI0y9h3aKypNjFgH0oAvcW1supAR0 
-vKSX4tXMYwqCgwR3ZzQgIQ9DtRKTQNWzryAT0AebLo59VX7GUHIWb0kV6VZZbJyIxIvYB8xdvzV810Be7XJV/qgrYCKlV5 -250hjzt6ztLE0WWcid+r9tiBEGVz1ckKTaoN7XRj5bb1+9XBTw3U8OndGpRcBgcicT0TM35fIYEJw6sN0wHXvb97AYwZOJ -c6zzqldGTPBJXKLprHGqqHEJodW5cKwWYP7UzImBVtQaayz2I3dsT2WcUhBtoTu/KLHX389Sf770qlJCXK30qd6ldMlp5A -nknQebQduJ0VITlcHQtvc6VJ6wqnFl3AkOCp1TBIcu4KJL2a7DzR+x1rQms2y2yrR4OcwuOZlHL8gpm8aOHkrWiTootnlf -xE/48DEGtAJ46pt/jEjFh4SOB3H22r87ilo9AR7eLw7ZxKjZBgY1BqshVdCQ9m5MK1XS4HuOPqLJ/pK135QLH6P8W+js3q -w3K/qIfqw9pToQ9SyIdjcFl7Dn5Tqkp2N5v3n0jipSAc+X9V4zsXhdNaaglHBq2fnhQ/gS5nHYDewdAQ2hJzl+fBa2it6l -ohavF731slgyq2SuIZoPfGn/hF0MOHeTlvjemR2kcMSJdZdpdJFKM0XUa68m4xrABISbnZ2acubib7e3JCAIOmpkWzgV4e -AAGmeqfQ4UO49CN+wxTGaVVVSRvSlm/a8yOQw4GyOnAb2eQX8X4Ewbh4Wk8II52cxF33UND+IGQE2ZEqXupK87IVEKZJi1 -UQ9kzzOQFHElHAC4zei2L2FOjX82rJrAVfjlGqet9b1MQpKOFqcxYSX98YdbC5NtWyLN4vENtE+z/b+fjgQ6qmeoEQyEwd -Xnczrx2AtjNDoJvC8qP+fjl2zkLUlKv+9uMtkxRZDCnldKnjJrAquRIKUj8UmKV1Vpdc2o1XVhTbalWWfhGyStBAMGqltT -nUJy6vddex4utO84l7QL2/ILNTrEYZScR3Ol0WE0iXP64FSiRWWkREVJWZRow+bHD2Am8RKgn3nhY5nlnpFso7xqgOp0m9 -GM1iwIlNGCGRcFCQg1X6xM8i1CXkvB7yU01iGrHqtRDogj4gMTEMipnOoyQEgYJs2BiAn4K4cK5LHWFIGh3wvo0qa4zfPf -Izyimc1IcsmcTmiaVClyYqGLFFAm9ewQIYpiJqmeIVjgtczRQCrHTEK96H8qN7QKd94aalU8fr0C86NXnxmiLCxNJ/ltbF -4XJI+XGLhU7g2AL7V1ekvq5a0qz+onBRQ0D0K3A9Yu9l0vqUiOGusNf5O5558NhMdmqh/XtQaE2Wbn66Gii7zw8O1pChAc -QtWeXfdLMsQndXR/t+W1a9U2WCyJWOk8tbp3UtJalDOdu8sL9kIzCjNbahA8yrGElrj+Ba6betxr4XivPepuBHF0qysUhk -Dz5p1bz2m+oTuoJjOyIqa12xMnrcVZbR/T5uFib84KcqacCsUIYdOqIe/6r+umjnUHHuFAPUGAV8pKg1zVeU3aHqagKWI1 -5j90qDYHz7M5qu5+1thOQU4Q+FhLLsTfLajyuJ+AmJfmhZjRQCQcie0f7GZXms836RBt5c7JH5AcOOSQUoNgND882V32zY -4RzDOKFYhT3rq22nvTG7gqnX8a7md719ytDX22m8aDbrl5mj6wLxwYrY7AUtkueZdQLrmIBHdfmfLxA8SWPnh1pkCUo8VU -5K46c6toxI2b/4TovNy8rHB4sbdWb6K2rh2YKthP+swQyT3iHunsIqf/bMHjwb/ltxAhtGPBdTRjrlG5VJi7WdYVDbLvTz -5803MOoGlT/m6KALrtC5gPxNVohem6uSU4wOPVHR4R+qBpdN0Ia8SC3tkzb/JQK7dOs1kDwGLfA5rhhtK+YYvjnLXs7V11 
-27HsNLY2EacNhgTcmxUvEZ48H0kNOeoWxMt2CvFddPbQrfUkjZqdm5rlJlvoCkWogR5JH/AGVCazLJGAhq7eGi+np4KTIk -Ak/6vy90yIbD6xeO05VldI2ftpQIYy8Bd1lwZpkprGcY0ONWukqfZU2syU4+XL1DCpqzKLH+aonQwxgAjh+8vSanPkqVov -JPJjJa+W82nVvQSBGhVFd3yPHUxSHpDUH1Jlpf2bWgh7goIh0Wd0Fo/1gM5A43Checgv0ZMX1CKJbgXrKgOUM0RMY+Qu+U -d4rUV8nfmj8PDcJfXV/rW7sWaOEu9opVCioO1OSEMQqIJMCrtCrtHaUuMXaF75FUcSnNmh5gi+gaLqilc0Nrx+kn46JFC9 -nBxBKvnWanqCRqMiWmHeKSokFt0PhyXJKGIdLX5fmQ7OwySZAS/zJj6GQq/Uq84JLexj4vMnIXF5hKXvxmtISRb/dXsTte -U7425sdui545/u37kZLncyIQnlujmpHxeMaIQJf8yXt+mixawtNi+Mzq/40Vl2xu+5QYLsn7UALVzuvO5/w3rVYCyhSERO -/Zbv9qkwaG1c7j4Vo0zRONVCPiOYjn6mYd+a6i9NWwkpz4sjWbSqWROoqIguumR4l2YXWPC5UD3DLr0VtVAGKkSYsR4bPC -N77gI3OjUqqU9Dx8NVhZBUAsxS6FpSVl8bhsC7Eaq+FKaL7motCmN+/2l1peinof6ZsqjuZ3HHCbs0/TPV+bznOh9Y0CVF -x9oxC0skRfl1tsgA92BOHzR6Jyec2U35bhTfQcpbrkWZU2CgE8lfb+Ba1CC/T+qVcp0IRDviwZaVuwJoHu5CZqa34OA6AI -+ZjNBhkcEAfXGxTjYANAgrjrjSHoKdQ3mjI/CrsNF1ePi9hbGue5N0HEnhsLhE7xIxN1A6gAqjLCycpg1im/fQgTyuFWOj -JbhsWj6FrYPVvufuTm0=","time":"2026-03-06T16:05:20.753625"} status=404 user-agent=ios - query= api_header= span=d2ae1834a66b8dd0 request=POST -tapi.hifast.biz/v1/public/iap/apple/transactions/attach response_body= -60623-03-03 00:00:00.217 debug 404 Not Found: Host:tapi.hifast.biz -Path:/v1/public/iap/apple/transactions/attach IsPanDomain:false -caller=middleware/loggerMiddleware.go:117 trace=42f2998ec5a51a428853673d58afcfb3 -span=d2ae1834a66b8dd0 -[GIN] 2026/03/06 - 16:05:23 | 404 | 311.424µs | 111.55.73.248 | POST -"/v1/public/iap/apple/transactions/attach" -60623-03-03 00:00:00.910 info HTTP Request duration=5.581µs -caller=middleware/loggerMiddleware.go:113 status=404 response_body= -span=eddd32459d010760 query= user-agent=ios api_header= request=POST -tapi.hifast.biz/v1/public/iap/apple/transactions/attach ip=111.55.73.248 -trace=2cc2aa4bc7b7eb9579b3073e5768d2e2 request_body={"data":"AuktjxLy3lMmvUVEdBAwOcvVuKS2Pr0T -0Bp7nStikxDTaJaJMrNAhW5HhEiWLTgSJKLOsaD+bBvMXmufLPt0I6zGZEK6LDV/htp4mLeuTI2FraZYJ9bmImzgRPL8iJ 
-fQWEFVG0FC3LBQCsHVwFKpQEkhazHTWGjxgwNZCnvKpV4i+Cn0EF3oKO05XzxSrWjRWBNX3kvXQoJwdBZOzklbJoDokCJM -ImjWGbJvis4FLNetWqCdbhhDW8eIRTG5J2D694e76PNr/0KNiBzlVGqxSuLVKhtK2Y/OFQH+y0o3C7k88AOXRHjI7d3lQ6 -mcmJqSi5EU+tzN1uT1v4gC6a1EzBHB/iXRb0dYomosHhh5/C+ABrBjmn9e/Z6mCd1SchHHpV5VDQlDIIzZ/Bd8Y8KowWGI -1YjLG1O4gNUFnFdUfo6/VpKbYJyznzMq+tkLlZXu6cs/nLNuZc8mXgHK7Ono3xNkC8aZWplrpc8RzaJ+ivoOH48uXW65Ep -YF3ClmskvZbPnKaXGmZEVUrQtB71octtPZnubXpEIUjoxvVXIaRdq12LN9QpJw+TnWO8naBUFO4SME3BnH5iNLoGaWK8LQ -ocEpQ/4uVl++IovH0zWg2cUh4b+lJk+Hqj4v3kdx1AR3yjp3Gesc7VPYviPj47cmInHms0s9MQjzR4v4v916TfHez/Eygh -rYtHa61np58Pgth8E/UjC/+RJENm2ZZ0NDdnhTz/S+0YDTb3qUuJCI5g5Ua+QcxaBEaFbHzr2bjA73H8I1PMNlIzsxidvU -7bVcuzlg8o8zx+4OsepA+l132db9MdSoPGVe9ihaUvZl51rPbaXXSpv7DmG83qvnj8rWJgcJD5oWUTtiUIl6gnJ+yh4uPc -CYop3VSARKVTq1a87Iu2UMvcNAN0ZNBqEv3Bwm+WIebZrLQ9ngnsjls1YJ8n+nX46+UtDx2NOI10IHHiQRAUtb3xk45Zgq -2QzQVPVc2Yd7qS+d6oNU5UIqng5WthWLV73BOSglOnyok/zV0TaqTulzLJeo/X09KRg79EVhaxQzx6IfNvrbO/k1gyx+3Z -5ooCoIYhCbKGGIlmnWsZ2VrfleJ7XVDNslsJ8twhDvrUldhTeaxaz4uWc2rSvO9PPjwETMHiKH4V/t1VwGYEHxmQkq67pF -SzWxT+11+VeZcN0fqJkkUpLogvxbEO2Od56oQQdFuJQf/RroqrcsDlO3kn2hGv+FDLOVS36LmwgZbdvT4eAr1a+kZBTIoZ -Qqdq2oz0328Ao2fA5YQ6fvMPrJH7JtnF+6EoTTR8C18rjGjmBnIYV6Iom2xJ1VFxRdt6wLh8l1r+qpCFjca1vZjzZItTtg -I80hwF5u2NvCe3JtUDSHt0XxRO/W6GjaYh8JE5ps03N+rfSNGo6WFAb+4/5f/KAI2hEWdu4uF6+ioSeOOs+no0Woi8c9qH -mJRZNZlELioGyRVp+zoDBU35pk9EczRXs6CVxBwOW33VHOHtlYEbHx0G+2q6DVrSUR6xOnroNJGHg2Wy39/XgHxjUZn9Dv -3IWniOBoVVj/u8lFS1LtcjfGwC4T5Ro/nzdZqvFCsKZfoZ9wevZb4oR8/ABIhMz5myiLQi1K6dNljD2zlyCdVLoVIqpYxt -asE9nCkxTIZcWYEod6BYlw0Q1Ohp4n89Kaxnw7qAnLPWP1xHkGbwIyNxkhE76disEKB7PGOLSB9nm7GcMAWowgCZ2paOA/ -lGHNXLbC4/XlU+hIsU7FdYsT9Np4ebwwjDd65TJ19CUHNLUaqwOVO5w3cCxjzIplikE7ZMCZebGgfWpl6Lme6M6DxMnzq5 -KgEffsGurNa8La3wb3kAEYx2wLCAR0SBSsP4bfMJpEcY4kFZWvCytmdF32WJLgV5espma12mNvd46UmjrSFNLY1fHKnN8y -VUBVxpgWH7S52DkFjSZh3glOP0S9xVAo+0ys244nIoFoA7uiw9/yOXYbathEFU1077JcHV9DxfXjY6NS6eRjdgXpL2sdxl 
-3F3ed+O93T4Eihn6GJI6mi2jPjBkLAlL0h2VoyGy7ul2MW0Ywrx/sn4XlKtNrUCUGDhU28agGoYj+fe/fbyRCjZ/2h7O8c -4P+3bewhaH+8WBgrZbmHwTzO2yauyV6GN7gLRLOB9byqeH0JqioidWRDuf1rNVAXV9PnTVC7I96JBEnEGomF934T3USb/1 -+7/BWgNKng4TwKOANgf6LYUUS1/wArwdZJhWxRyNC7mgronRwZv9NPHoCoxm8ADnByT25ZuGLoIETr1Rm87K+aJyUHBbrk -hoCeqMzI0SiYOAXFH/blsoLb0cnV6cDgsiowXSLX7/1YOrH9U6elA5/k+NMhPSyNE2VKz+PM+3g6T7P3jsPYWqkfqGF+RH -nxGN26n+bftNaew1+SHwYtARCxbpUufgDu591kjFDR0mx81TwOB5fMoYjGJke/hpDjJu5EbEKICiAF0z/YII1o4MfMDMOS -Eul+ReFNoV+npiOHk/gCYYX2aVi/rnmK+GiYlmxoQb5X+C0YlVZePjANnNBkTWqrtgPlScaNC6FfhC3vuN9P6VOhe+eFT0 -g4ap0gJX1aUmZF0WoZ/zg4UfIuIDWkUTF6GBX7b6ndluiOOz085VJaXGiqTt5CWauf7HjUbAFpglqKPQv+udb2mlrTNcNQ -CNGCVrSMdsgn+w9InXb7pGRFjRyfuBCM05Mn81UYbNU6uY9IsYdOmyAo45Cg4Mp+PZUWQ3wtQOY7cKgJWtiSRLLilMs0Rg -TWI0+m3pQ0OnHOkZ0+dcOn+Tw/UOkWM+zwuvweYQ+L/LNojRymF5FfyusqAtzvTVu6lGbDQEnzgAnBtipdZFbwuIonvYPC -s221VNOeL0ZYts0TzdoX/WqH60baoCdE0mOZ8wxp0CqwI5QU2X1eS5ukLwoDGAgvRph+p4RIM4E1p5/TukpCn4jrGGY6oT -+QLdJzwDVA0Rk/qYDnRYuzuEBL8KfKbbca9zIQNTKdkUcnXVBmh3v6H8oBMO8MsQ2kUlLnHkbvVZW5/rDSApCgD8Aqp9Z7 -3gw3niYbZkT48KxPgSk2tKy9TII4foOW3tKeitRTycus4zCBW/e+ITTkLVsC9Y0sTVqJ4aczOS6rVVRfjWzt2MZNeC8Mw6 -czhmZcEul+o7yVzp+PX16MPaAh+psnk5wHbz8rsEVYryywaH76viy7JsQsVJZoFJXMdDVgLQgZJmq7AQxkm6cj7U1bpFhy -LTD3kyiwqwABGfMDMTxpWOFWz39mO5dsLpoumfYGWv53t0MD+92SHz+jsNK4WPXC6QrJQCOadFb8Mu2rRsi6OTKxIRugc6 -YyguSU/Yzy9Z9qVqwW7BtdYUwsiF8gXCztVShIB4GEJJFfexEwvQby4+48P0BpJTIfp8Ep1yDpfSOZErV30Luie7iL0lpX -O1tsiWnlgLdz5L+ywiMMVeXl/Z+SaRmOQtKeXfgYyCvRUDRRRumwFpvj4szTX+3Wj0Cribn0o+h6Xno28qSqo4EceVzeCM -25sPTw15ni3VsOqbXHde8tLM7/AQhR5eOVhW8HvT1V/UUD+HJyTqYm3cGKzV0j8sBtvs6k0cFBV4Eawe7eDuKWhvOtnYOM -VoGmRwG9w+0sTrUZ+Xdiyv7qVS1Y4CnxMe3QBfBZLN9bQrrXC6e7JEGUg3tkMg2djJAq8fK1fnV+NFYkuKR4yEOVAyolBm -bX2p5JjWDB3JbkAPhlZYMnM15XW6EYR42HfH9L4jpit5Pfhpspc/DS+EWjD3F0whGujeGqMa72+cODOYgAaP5Buu4GMKa9 -7o9mfyBkhdJEtOP3ulRuKrrMbpXFWQd+Aq4EL8b6L+FVutfvSExjW94azZwnRS3UdAtOTS/clkhpRru0yv9wIZLF24IWvG 
-5rzzx4PMcuv3MeSQ0DsNfj9l2g1e0qTMheJ17JxQWcvnT1UkU/TCpZLeToni562wDT816f4IU5+H66PpU6449RAzgG6EeO -yD+cq5VwdPv0tu/t9s8QnyzcEZNLL0F2Urvj0L8p+2vAqFbelP0Pu4plME/0mI82HGtrt2ryZ1zzdkmeiw3eksM4QhhES9 -mOd6ofP/Hfm/xsR/KicAfQm7WRWY4uaDsEE0xhQJnrZ3ziBVgSb7zoSrLM+jFZ7Cujgaqg4hu9ws7NfHF3wilkCWpHhOo2 -xVp+/aOVmTx5gWJCxJKElREjVo8Eqs8ggCGh3MRcncO7DR7F2W0xtsMgDKH+F9AudkraFmhgfp4YlZfxqPncoYBVdkl8ip -spdw9f3460VnGMpIqgvyV0tjkOgc3ZwVL7jxtSRiLJucFE1GK7KpZM1gkyqhPSPV4bQfjxw3QhhX7kMCrL2pWn2j8TzyhK -jpyIa+kal1jJDJFYCPJ5I2ocHsYnlg/N/Ex/S6XYq1lzwERAKSxsAFOXJ5U9NDh8ElxFT4YYH7kd2CUbaBm9Qg70CRzgoN -frNeojwd0XF9Q34yoYU7VaqMqItYGAkYQpGoIGZJdCHc80xQ9BtAlNZt9g4Tij6b0Jo348Y97I7TvSzGHl+9mN4Jomahth -wmWAhnDQxE/BTx5FdULz9ajmRN6wpGwfK+s2R5agsdRtc6u8fBUcMDdxitLBhrwnZ9ZBkhh+X9pbB+Oue5CozOKeFAKIgb -BzyZl0L1msMW/8geOT/LnMMLFKuSnJhuL1vaW4tgGlwNzKltTLEW3Y9l4vrU5xnG8Ti0YCUDVkjSBuly4BKNQ4dkBHqERd -yrvZK+4gE/4HkrQrK5baqGCv/ylXRtP95ecu9RhCEzjBr/VyCtmAE7844yI+nOBZqEi5cargZPqWmELpbn8F/JZim+zc+u -8zmq1yje1tUmjGFSoaoBbuDagPmq6hXD1Sf8lFMA3Do+5jIhSkWFh3HNACcKa76owiYGRnswtAtlUqsnC4VcW80WGAsbM8 -AHGJSvD41A9S+8BauXUGeNJYV991VJjhJx0wA0kunSkj70Z5aMVeQs6D8Jao0MuzlW0c2bI91h5QFXh4UJYaTFoULv9E6Q -KOFyGWQzIf+S/C77iNq8yzJA2Krt5XoLmPtE6fGWxza0HGD3DW+zQh1wAuBkIjAtD+Xvg9pFMWTjj6Bw/ego+77n71KIbd -Rc3so44oGktDxBMxUg4BHzIOm2hlI+UbvziKhlsIP+iA1ZFtMDjWoASvQt0g9VaP+sH4/ipkaJjQZFxjfvs28T6aicI16C -Suylrr2tddTWJTBXvlV85NlKBtDsBHh1obS+L5w84gxVOyDKdqLgOQ9x2dJiOkaRBnXh9eVNCo66RJI6asWINSOwkJ7KX1 -w+O7i2dRoA0rdFwyC81QR1seot/Y3MhjA/4An2x0/wdhs+/LltGM8XmvmnARetbQeqH2y0vrAH+CS+14Y4AqMKnXDboUJA -v4MGFyyk55Vqwp1pgYmMS0KGLVK59itY3c1rJcoRSXfOmBUTWU1fasWuq+w01dMWBI7F38xaKf0yZYgbIBk2ERl//gCNmc -qb0s9N52ZNkpqHzSqd2RkklQRmBzJIEif8OIDmb5AMXOi1Uc6li6hmqR4rGxPFpf1uCJyX8Y/iytn4pIlPH42kK5DxZFML -NqGTXOGnXr+c71ZBYqyjcljzvOOFO0YZRaPv6LkrzDBn0vqHQkHafwROMDnD27YXFdsxm6jQPBq2aPwEX+5/nwtvKl3aQe -1I4t5dVxK6T80T/wcTalCDOb4pU2sTvRGXuhxCkxB1X3qm5rUETYwgtyVsF0MVrJccCUpAxEmP5YOt2wVZb4m3NpcmnKNA 
-OIrkNtXuysWPxNe3RfoOhi4ULTR95EPO1asprdbjdBLH9zeCHSXPJrcAsNxPNxwRqiOTd8cy4OfYRy/ZVfA0OxHIKDQRoj -UVtPoVK4+t910yOGAHuI98Ymrpu4gnYfmHszy3iX+2UusgnfmGezles4n8wfMls7T7i2+SlG5T/UZT9sKbYnVl0veYyNlD -wUYWHeaEnh5AndsSLxxtNEELYyCgfJuReZmX4dlE4dUNb7LbBaFYjK7ipEF+jheCHs2feu2LLaqKlgUDBDGhDvuOZI182+ -ERO2dKPZ8qtghujTW6MN/I8XTnn6c4Hv+Ksgb5/+vZfHYELAccCer9pH5HYpfiOKJ17W4yjZdOn9PIagmiFsSDUoqyhLb7 -jN8r6OFDPW44T8fO9oIjkQsqAQnslBZZfSAUgqQUWtPHEztJW0z331H8eg9lxSRqVLqnsOn1ysWDLF5/oNnxWGD/PyZqYs -pLU56tBbPiNadd5bxqgmWUb5RubfqZuIjjmr/UMyb3SH60CgWQaQ8ZEoZd/V+jKfKJ9OlI+Ni7KbQjuM+FmZfd5LKoWk5a -zpjsCbWIn0YYuKq78qs0lsRTWX1I8v3EJ4zO3NM5dwuOBZpFI2HjX0WJuxPoQiMIB+8h2Ocaamu0rdh+lhSx+rIJLbo2gG -vJwlwjMiV2VTT2/whtJQynpM9yQJDMI8O0/V3zou21/s17dJgihfXm/LG5+wMHGdomZHKR2ezifTgjUB4o517BIh9emzRc -6yLkzJM+H8fhLVz/qiJnFvBVgjY3KCxjX+RLlVvSxe7Rxei81heIl3SLKwYqZR+Yk02nR1lywmsW7saprnISUtsRrwES1A -Gba3IKOS0zU3b14jUyAk9Rf9I9XPTevZJQkWzr2H809b/PWyo9w/jWZkawCgpYw2h1rfJ8P7xbEfI9gy0jqIAyfp8pfGWv -4fMasGo5xoi1eKgpgdNditkn0XIXjjboyFxeuurjV81qK367sfGkpOW1Qmzrzm5/IsHMo2Baf06nXJePZMQYTzvG69C1Bj -9vEtnphn1DkuzyL/tI+9HtP0mxwcCToOPHzY3sS73jr9uLFzh2+E50LJAoOMwHjuEgusDkQM7dzQ/IaG7pLxfJ3Ah6CwMz -ka6k5WfMCybJ9l9qz8U1etQGG6B+9HlIyXwfZgGaDbfC7uSUhoY5f/aw6s2rVHXVlHOCIDJGWW29n0mqdR+O6QCqHgKJ6Z -mRgmEX0/0t7h8SBB7rJX8koaVJ8SBnQpyqjlelJCcDL5Itjdb+/ZyvPvLcAd+Q/4DMA7Q0K5jWhLVgjVNE7VwpfA0Zi3XZ -9uGHZgd6MGfo0HQe3dKJnaThuS+qWCvmdVk/LAr2BiCdJ9cQap9gh68Wbl7hQ9KQ1n5kTUBFo+ItDJvFKidmS/HFqznMHy -wli7VjyoNib9/ikTu9Q2yTFeMoDDnUsxygk=","time":"2026-03-06T16:05:21.443292"} -60623-03-03 00:00:00.910 debug 404 Not Found: Host:tapi.hifast.biz -Path:/v1/public/iap/apple/transactions/attach IsPanDomain:false -caller=middleware/loggerMiddleware.go:117 trace=2cc2aa4bc7b7eb9579b3073e5768d2e2 -span=eddd32459d010760 -[GIN] 2026/03/06 - 16:05:23 | 404 | 314.931µs | 111.55.73.248 | POST -"/v1/public/iap/apple/transactions/attach" -60626-03-03 00:00:00.259 info HTTP Request duration=5.971µs -caller=middleware/loggerMiddleware.go:113 api_header= 
span=8afe91bde25769fc -ip=167.253.97.183 user-agent=Dart/3.9 (dart:io) trace=3ddbd2848d5dd0e991a2b7723dfa4e9b - status=404 request=GET tapi.hifast.biz/ query= -60626-03-03 00:00:00.259 debug 404 Not Found: Host:tapi.hifast.biz Path:/ -IsPanDomain:false caller=middleware/loggerMiddleware.go:117 -trace=3ddbd2848d5dd0e991a2b7723dfa4e9b span=8afe91bde25769fc -[GIN] 2026/03/06 - 16:05:26 | 404 | 194.638µs | 167.253.97.183 | GET "/" -60626-03-03 00:00:00.347 info HTTP Request duration=4.87µs -caller=middleware/loggerMiddleware.go:113 request=GET tapi.hifast.biz/ -trace=dd99dd231c1b418d5aafd32fb77f8cf8 span=8563aa784d8ab774 status=404 query= -ip=167.253.97.183 user-agent=Dart/3.9 (dart:io) api_header= -60626-03-03 00:00:00.347 debug 404 Not Found: Host:tapi.hifast.biz Path:/ -IsPanDomain:false caller=middleware/loggerMiddleware.go:117 span=8563aa784d8ab774 -trace=dd99dd231c1b418d5aafd32fb77f8cf8 -[GIN] 2026/03/06 - 16:05:26 | 404 | 176.582µs | 167.253.97.183 | GET "/" -60627-03-03 00:00:00.063 info [GORM] SQL Executed duration=1.1ms -caller=auth/deviceLoginLogic.go:71 rows=1 trace=c25ef52d63a0d9b845c1097b71749654 -sql=SELECT * FROM `user_device` WHERE `identifier` = -'68c71ab9f82d521d202ff6b09cd7794038dd69da2f1dfacca015159222362adc' ORDER BY `user_device`.`id` - LIMIT 1 span=613a7d6fa7d752a5 -60627-03-03 00:00:00.067 info [GORM] SQL Executed duration=3.1ms -caller=auth/deviceLoginHandler.go:23 sql=INSERT INTO `system_logs` -(`type`,`object_id`,`content`,`created_at`,`date`) VALUES -(30,659,'{"method":"device","login_ip":"","user_agent":"HiVPN/1.0.0 (Android; vivo V2302A; 12) - Flutter","success":true,"timestamp":1772784327064}','2026-03-06 16:05:27.065','2026-03-06') - rows=1 trace=c25ef52d63a0d9b845c1097b71749654 span=613a7d6fa7d752a5 -60627-03-03 00:00:00.068 info HTTP Request duration=6.809722ms -caller=middleware/loggerMiddleware.go:113 query= ip=167.253.97.183 response_body= 
-{"code":200,"data":{"data":"Rs6/oaI5dykhKO0Ce/b6EkhsO21G5mLC140wdLt7Lwy87gusvKVkFHiX57uTTc1kI4 -bODvgfe3kbQoPdsMHCFseXmDQSIpkJ8ZY5FK5fwOA8v3V8mJNmjaoKdI6hnbt8+sYO7VPA1YNw1rGfo02bUMqVIz6P3h73 -8QD793hyA+6DbE0eO99ZI38C91mrySV1ohnqcHZxenOgvptckzCMB0WkKI06G6Xulf0wgW8HtZ23RaE1td/fb8v6rdhBlF -oaDSGkV4myLpbCfSLdq4wVrR5GHSVXf9zG+o7DIDMgbCuOkikqGNHoAOx+wRdCQIJVmjQ85BKWdjvaNG1TMW+J2XA7mqcx -V9aaAp+HaqS01anmUL2aRAtJDX99RdSgPkou","time":"189a31f3ee459cef"},"msg":"success"} -status=200 api_header= device_decrypt_status=success -request_body={"data":"aBJLGpQBNoQb/WQ5fTTKRaq9wflMflv3xn9UCehNUAhHGjQzd9eoZWROOZ+KznqsP3ocw5FT -OehvC9+5DeqLEDuDSiqZC1Ej99jHQLY1a6UzTIoC9yVcjLqJbi1WJ3QYpWRL32cNk1XTV4GK4LFAXTNBDMNQSy+WAvThGb -ZNDdNEARQZbTq3kCfnQxZHv2oM","time":"2026-03-06T16:05:26.733552"} -decrypted_request_body={"identifier":"68c71ab9f82d521d202ff6b09cd7794038dd69da2f1dfacca0151592 -22362adc","user_agent":"HiVPN/1.0.0 (Android; vivo V2302A; 12) Flutter"} user-agent=Dart/3.9 -(dart:io) span=613a7d6fa7d752a5 request=POST tapi.hifast.biz/v1/auth/login/device -trace=c25ef52d63a0d9b845c1097b71749654 -[GIN] 2026/03/06 - 16:05:27 | 200 | 7.009591ms | 167.253.97.183 | POST -"/v1/auth/login/device" -60627-03-03 00:00:00.740 info [GORM] SQL Executed duration=1.3ms -caller=gin@v1.10.0/context.go:185 sql=SELECT user_family_member.role, user_family.status - AS family_status, user_family.owner_user_id FROM `user_family_member` JOIN user_family ON -user_family.id = user_family_member.family_id AND user_family.deleted_at IS NULL WHERE -user_family_member.user_id = 652 AND user_family_member.deleted_at IS NULL AND -user_family_member.status = 1 ORDER BY user_family_member.role LIMIT 1 -trace=f5fd1ef6a989104d7700d2c76c3aa653 span=de3234063995e481 rows=0 -60627-03-03 00:00:00.743 info [GORM] SQL Executed duration=0.9ms -caller=gorm@v1.25.12/callbacks.go:130 rows=1 span=de3234063995e481 -trace=f5fd1ef6a989104d7700d2c76c3aa653 sql=SELECT * FROM `subscribe` WHERE `subscribe`.`id` = - 4 -60627-03-03 
00:00:00.743 info [GORM] SQL Executed duration=2.3ms -caller=user/queryUserSubscribeLogic.go:46 trace=f5fd1ef6a989104d7700d2c76c3aa653 -span=de3234063995e481 sql=SELECT -`user_subscribe`.`id`,`user_subscribe`.`user_id`,`user_subscribe`.`order_id`,`user_subscribe`. -`subscribe_id`,`user_subscribe`.`start_time`,`user_subscribe`.`expire_time`,`user_subscribe`.` -finished_at`,`user_subscribe`.`traffic`,`user_subscribe`.`download`,`user_subscribe`.`upload`, -`user_subscribe`.`token`,`user_subscribe`.`uuid`,`user_subscribe`.`status`,`user_subscribe`.`n -ote`,`user_subscribe`.`created_at`,`user_subscribe`.`updated_at` FROM `user_subscribe` WHERE -`user_id` = 652 AND `status` IN (0,1,2,3) AND (`expire_time` > '2026-03-06 16:05:27.741' OR -`finished_at` >= '2026-02-27 16:05:27.741' OR `expire_time` = '1970-01-01 08:00:00') -rows=1 -60627-03-03 00:00:00.744 info HTTP Request duration=6.263864ms -caller=middleware/loggerMiddleware.go:113 status=200 query= -device_decrypt_status=success -encrypted_query=data=cHrCabwq37KAmtcWyWHUag%3D%3D&time=2026-03-06T16%3A05%3A25.260631 -user-agent=ios trace=f5fd1ef6a989104d7700d2c76c3aa653 request=GET -tapi.hifast.biz/v1/public/user/subscribe ip=111.55.73.248 api_header= -decrypted_query= span=de3234063995e481 -[GIN] 2026/03/06 - 16:05:27 | 200 | 6.439685ms | 111.55.73.248 | GET "/v1/public/use -r/subscribe?data=cHrCabwq37KAmtcWyWHUag%3D%3D&time=2026-03-06T16%3A05%3A25.260631" -60627-03-03 00:00:00.753 error [GORM] duration=1.2ms -caller=gin@v1.10.0/context.go:185 trace=0d677831eeaa39b9eed1f7172fa249b6 sql=SELECT -user_family_member.family_id, user_family_member.role, user_family.status as family_status, -user_family.owner_user_id, user_family.max_members FROM `user_family_member` JOIN user_family -ON user_family.id = user_family_member.family_id AND user_family.deleted_at IS NULL WHERE -user_family_member.user_id = 659 AND user_family_member.deleted_at IS NULL AND -user_family_member.status = 1 ORDER BY `user_family_member`.`family_id` 
LIMIT 1 rows=0 -error=record not found span=e37812ea9cae3422 -60627-03-03 00:00:00.754 debug Hit cache for invite short link -caller=user/queryUserInfo \ No newline at end of file