This commit is contained in:
parent
a98fcbfe73
commit
1d81df6664
119
.agent/skills/systematic-debugging/CREATION-LOG.md
Normal file
119
.agent/skills/systematic-debugging/CREATION-LOG.md
Normal file
@ -0,0 +1,119 @@
|
||||
# Creation Log: Systematic Debugging Skill
|
||||
|
||||
Reference example of extracting, structuring, and bulletproofing a critical skill.
|
||||
|
||||
## Source Material
|
||||
|
||||
Extracted debugging framework from `/Users/jesse/.claude/CLAUDE.md`:
|
||||
- 4-phase systematic process (Investigation → Pattern Analysis → Hypothesis → Implementation)
|
||||
- Core mandate: ALWAYS find root cause, NEVER fix symptoms
|
||||
- Rules designed to resist time pressure and rationalization
|
||||
|
||||
## Extraction Decisions
|
||||
|
||||
**What to include:**
|
||||
- Complete 4-phase framework with all rules
|
||||
- Anti-shortcuts ("NEVER fix symptom", "STOP and re-analyze")
|
||||
- Pressure-resistant language ("even if faster", "even if I seem in a hurry")
|
||||
- Concrete steps for each phase
|
||||
|
||||
**What to leave out:**
|
||||
- Project-specific context
|
||||
- Repetitive variations of same rule
|
||||
- Narrative explanations (condensed to principles)
|
||||
|
||||
## Structure Following skill-creation/SKILL.md
|
||||
|
||||
1. **Rich when_to_use** - Included symptoms and anti-patterns
|
||||
2. **Type: technique** - Concrete process with steps
|
||||
3. **Keywords** - "root cause", "symptom", "workaround", "debugging", "investigation"
|
||||
4. **Flowchart** - Decision point for "fix failed" → re-analyze vs add more fixes
|
||||
5. **Phase-by-phase breakdown** - Scannable checklist format
|
||||
6. **Anti-patterns section** - What NOT to do (critical for this skill)
|
||||
|
||||
## Bulletproofing Elements
|
||||
|
||||
Framework designed to resist rationalization under pressure:
|
||||
|
||||
### Language Choices
|
||||
- "ALWAYS" / "NEVER" (not "should" / "try to")
|
||||
- "even if faster" / "even if I seem in a hurry"
|
||||
- "STOP and re-analyze" (explicit pause)
|
||||
- "Don't skip past" (catches the actual behavior)
|
||||
|
||||
### Structural Defenses
|
||||
- **Phase 1 required** - Can't skip to implementation
|
||||
- **Single hypothesis rule** - Forces thinking, prevents shotgun fixes
|
||||
- **Explicit failure mode** - "IF your first fix doesn't work" with mandatory action
|
||||
- **Anti-patterns section** - Shows exactly what shortcuts look like
|
||||
|
||||
### Redundancy
|
||||
- Root cause mandate in overview + when_to_use + Phase 1 + implementation rules
|
||||
- "NEVER fix symptom" appears 4 times in different contexts
|
||||
- Each phase has explicit "don't skip" guidance
|
||||
|
||||
## Testing Approach
|
||||
|
||||
Created 4 validation tests following skills/meta/testing-skills-with-subagents:
|
||||
|
||||
### Test 1: Academic Context (No Pressure)
|
||||
- Simple bug, no time pressure
|
||||
- **Result:** Perfect compliance, complete investigation
|
||||
|
||||
### Test 2: Time Pressure + Obvious Quick Fix
|
||||
- User "in a hurry", symptom fix looks easy
|
||||
- **Result:** Resisted shortcut, followed full process, found real root cause
|
||||
|
||||
### Test 3: Complex System + Uncertainty
|
||||
- Multi-layer failure, unclear if can find root cause
|
||||
- **Result:** Systematic investigation, traced through all layers, found source
|
||||
|
||||
### Test 4: Failed First Fix
|
||||
- Hypothesis doesn't work, temptation to add more fixes
|
||||
- **Result:** Stopped, re-analyzed, formed new hypothesis (no shotgun)
|
||||
|
||||
**All tests passed.** No rationalizations found.
|
||||
|
||||
## Iterations
|
||||
|
||||
### Initial Version
|
||||
- Complete 4-phase framework
|
||||
- Anti-patterns section
|
||||
- Flowchart for "fix failed" decision
|
||||
|
||||
### Enhancement 1: TDD Reference
|
||||
- Added link to skills/testing/test-driven-development
|
||||
- Note explaining TDD's "simplest code" ≠ debugging's "root cause"
|
||||
- Prevents confusion between methodologies
|
||||
|
||||
## Final Outcome
|
||||
|
||||
Bulletproof skill that:
|
||||
- ✅ Clearly mandates root cause investigation
|
||||
- ✅ Resists time pressure rationalization
|
||||
- ✅ Provides concrete steps for each phase
|
||||
- ✅ Shows anti-patterns explicitly
|
||||
- ✅ Tested under multiple pressure scenarios
|
||||
- ✅ Clarifies relationship to TDD
|
||||
- ✅ Ready for use
|
||||
|
||||
## Key Insight
|
||||
|
||||
**Most important bulletproofing:** Anti-patterns section showing exact shortcuts that feel justified in the moment. When Claude thinks "I'll just add this one quick fix", seeing that exact pattern listed as wrong creates cognitive friction.
|
||||
|
||||
## Usage Example
|
||||
|
||||
When encountering a bug:
|
||||
1. Load skill: skills/debugging/systematic-debugging
|
||||
2. Read overview (10 sec) - reminded of mandate
|
||||
3. Follow Phase 1 checklist - forced investigation
|
||||
4. If tempted to skip - see anti-pattern, stop
|
||||
5. Complete all phases - root cause found
|
||||
|
||||
**Time investment:** 5-10 minutes
|
||||
**Time saved:** Hours of symptom-whack-a-mole
|
||||
|
||||
---
|
||||
|
||||
*Created: 2025-10-03*
|
||||
*Purpose: Reference example for skill extraction and bulletproofing*
|
||||
296
.agent/skills/systematic-debugging/SKILL.md
Normal file
296
.agent/skills/systematic-debugging/SKILL.md
Normal file
@ -0,0 +1,296 @@
|
||||
---
|
||||
name: systematic-debugging
|
||||
description: Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes
|
||||
---
|
||||
|
||||
# Systematic Debugging
|
||||
|
||||
## Overview
|
||||
|
||||
Random fixes waste time and create new bugs. Quick patches mask underlying issues.
|
||||
|
||||
**Core principle:** ALWAYS find root cause before attempting fixes. Symptom fixes are failure.
|
||||
|
||||
**Violating the letter of this process is violating the spirit of debugging.**
|
||||
|
||||
## The Iron Law
|
||||
|
||||
```
|
||||
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
|
||||
```
|
||||
|
||||
If you haven't completed Phase 1, you cannot propose fixes.
|
||||
|
||||
## When to Use
|
||||
|
||||
Use for ANY technical issue:
|
||||
- Test failures
|
||||
- Bugs in production
|
||||
- Unexpected behavior
|
||||
- Performance problems
|
||||
- Build failures
|
||||
- Integration issues
|
||||
|
||||
**Use this ESPECIALLY when:**
|
||||
- Under time pressure (emergencies make guessing tempting)
|
||||
- "Just one quick fix" seems obvious
|
||||
- You've already tried multiple fixes
|
||||
- Previous fix didn't work
|
||||
- You don't fully understand the issue
|
||||
|
||||
**Don't skip when:**
|
||||
- Issue seems simple (simple bugs have root causes too)
|
||||
- You're in a hurry (rushing guarantees rework)
|
||||
- Manager wants it fixed NOW (systematic is faster than thrashing)
|
||||
|
||||
## The Four Phases
|
||||
|
||||
You MUST complete each phase before proceeding to the next.
|
||||
|
||||
### Phase 1: Root Cause Investigation
|
||||
|
||||
**BEFORE attempting ANY fix:**
|
||||
|
||||
1. **Read Error Messages Carefully**
|
||||
- Don't skip past errors or warnings
|
||||
- They often contain the exact solution
|
||||
- Read stack traces completely
|
||||
- Note line numbers, file paths, error codes
|
||||
|
||||
2. **Reproduce Consistently**
|
||||
- Can you trigger it reliably?
|
||||
- What are the exact steps?
|
||||
- Does it happen every time?
|
||||
- If not reproducible → gather more data, don't guess
|
||||
|
||||
3. **Check Recent Changes**
|
||||
- What changed that could cause this?
|
||||
- Git diff, recent commits
|
||||
- New dependencies, config changes
|
||||
- Environmental differences
|
||||
|
||||
4. **Gather Evidence in Multi-Component Systems**
|
||||
|
||||
**WHEN system has multiple components (CI → build → signing, API → service → database):**
|
||||
|
||||
**BEFORE proposing fixes, add diagnostic instrumentation:**
|
||||
```
|
||||
For EACH component boundary:
|
||||
- Log what data enters component
|
||||
- Log what data exits component
|
||||
- Verify environment/config propagation
|
||||
- Check state at each layer
|
||||
|
||||
Run once to gather evidence showing WHERE it breaks
|
||||
THEN analyze evidence to identify failing component
|
||||
THEN investigate that specific component
|
||||
```
|
||||
|
||||
**Example (multi-layer system):**
|
||||
```bash
|
||||
# Layer 1: Workflow
|
||||
echo "=== Secrets available in workflow: ==="
|
||||
echo "IDENTITY: ${IDENTITY:+SET}${IDENTITY:-UNSET}"
|
||||
|
||||
# Layer 2: Build script
|
||||
echo "=== Env vars in build script: ==="
|
||||
env | grep IDENTITY || echo "IDENTITY not in environment"
|
||||
|
||||
# Layer 3: Signing script
|
||||
echo "=== Keychain state: ==="
|
||||
security list-keychains
|
||||
security find-identity -v
|
||||
|
||||
# Layer 4: Actual signing
|
||||
codesign --sign "$IDENTITY" --verbose=4 "$APP"
|
||||
```
|
||||
|
||||
**This reveals:** Which layer fails (secrets → workflow ✓, workflow → build ✗)
|
||||
|
||||
5. **Trace Data Flow**
|
||||
|
||||
**WHEN error is deep in call stack:**
|
||||
|
||||
See `root-cause-tracing.md` in this directory for the complete backward tracing technique.
|
||||
|
||||
**Quick version:**
|
||||
- Where does bad value originate?
|
||||
- What called this with bad value?
|
||||
- Keep tracing up until you find the source
|
||||
- Fix at source, not at symptom
|
||||
|
||||
### Phase 2: Pattern Analysis
|
||||
|
||||
**Find the pattern before fixing:**
|
||||
|
||||
1. **Find Working Examples**
|
||||
- Locate similar working code in same codebase
|
||||
- What works that's similar to what's broken?
|
||||
|
||||
2. **Compare Against References**
|
||||
- If implementing pattern, read reference implementation COMPLETELY
|
||||
- Don't skim - read every line
|
||||
- Understand the pattern fully before applying
|
||||
|
||||
3. **Identify Differences**
|
||||
- What's different between working and broken?
|
||||
- List every difference, however small
|
||||
- Don't assume "that can't matter"
|
||||
|
||||
4. **Understand Dependencies**
|
||||
- What other components does this need?
|
||||
- What settings, config, environment?
|
||||
- What assumptions does it make?
|
||||
|
||||
### Phase 3: Hypothesis and Testing
|
||||
|
||||
**Scientific method:**
|
||||
|
||||
1. **Form Single Hypothesis**
|
||||
- State clearly: "I think X is the root cause because Y"
|
||||
- Write it down
|
||||
- Be specific, not vague
|
||||
|
||||
2. **Test Minimally**
|
||||
- Make the SMALLEST possible change to test hypothesis
|
||||
- One variable at a time
|
||||
- Don't fix multiple things at once
|
||||
|
||||
3. **Verify Before Continuing**
|
||||
- Did it work? Yes → Phase 4
|
||||
- Didn't work? Form NEW hypothesis
|
||||
- DON'T add more fixes on top
|
||||
|
||||
4. **When You Don't Know**
|
||||
- Say "I don't understand X"
|
||||
- Don't pretend to know
|
||||
- Ask for help
|
||||
- Research more
|
||||
|
||||
### Phase 4: Implementation
|
||||
|
||||
**Fix the root cause, not the symptom:**
|
||||
|
||||
1. **Create Failing Test Case**
|
||||
- Simplest possible reproduction
|
||||
- Automated test if possible
|
||||
- One-off test script if no framework
|
||||
- MUST have before fixing
|
||||
- Use the `superpowers:test-driven-development` skill for writing proper failing tests
|
||||
|
||||
2. **Implement Single Fix**
|
||||
- Address the root cause identified
|
||||
- ONE change at a time
|
||||
- No "while I'm here" improvements
|
||||
- No bundled refactoring
|
||||
|
||||
3. **Verify Fix**
|
||||
- Test passes now?
|
||||
- No other tests broken?
|
||||
- Issue actually resolved?
|
||||
|
||||
4. **If Fix Doesn't Work**
|
||||
- STOP
|
||||
- Count: How many fixes have you tried?
|
||||
- If < 3: Return to Phase 1, re-analyze with new information
|
||||
- **If ≥ 3: STOP and question the architecture (step 5 below)**
|
||||
- DON'T attempt Fix #4 without architectural discussion
|
||||
|
||||
5. **If 3+ Fixes Failed: Question Architecture**
|
||||
|
||||
**Pattern indicating architectural problem:**
|
||||
- Each fix reveals new shared state/coupling/problem in different place
|
||||
- Fixes require "massive refactoring" to implement
|
||||
- Each fix creates new symptoms elsewhere
|
||||
|
||||
**STOP and question fundamentals:**
|
||||
- Is this pattern fundamentally sound?
|
||||
- Are we "sticking with it through sheer inertia"?
|
||||
- Should we refactor architecture vs. continue fixing symptoms?
|
||||
|
||||
**Discuss with your human partner before attempting more fixes**
|
||||
|
||||
This is NOT a failed hypothesis - this is a wrong architecture.
|
||||
|
||||
## Red Flags - STOP and Follow Process
|
||||
|
||||
If you catch yourself thinking:
|
||||
- "Quick fix for now, investigate later"
|
||||
- "Just try changing X and see if it works"
|
||||
- "Add multiple changes, run tests"
|
||||
- "Skip the test, I'll manually verify"
|
||||
- "It's probably X, let me fix that"
|
||||
- "I don't fully understand but this might work"
|
||||
- "Pattern says X but I'll adapt it differently"
|
||||
- "Here are the main problems: [lists fixes without investigation]"
|
||||
- Proposing solutions before tracing data flow
|
||||
- **"One more fix attempt" (when already tried 2+)**
|
||||
- **Each fix reveals new problem in different place**
|
||||
|
||||
**ALL of these mean: STOP. Return to Phase 1.**
|
||||
|
||||
**If 3+ fixes failed:** Question the architecture (see Phase 4.5)
|
||||
|
||||
## your human partner's Signals You're Doing It Wrong
|
||||
|
||||
**Watch for these redirections:**
|
||||
- "Is that not happening?" - You assumed without verifying
|
||||
- "Will it show us...?" - You should have added evidence gathering
|
||||
- "Stop guessing" - You're proposing fixes without understanding
|
||||
- "Ultrathink this" - Question fundamentals, not just symptoms
|
||||
- "We're stuck?" (frustrated) - Your approach isn't working
|
||||
|
||||
**When you see these:** STOP. Return to Phase 1.
|
||||
|
||||
## Common Rationalizations
|
||||
|
||||
| Excuse | Reality |
|
||||
|--------|---------|
|
||||
| "Issue is simple, don't need process" | Simple issues have root causes too. Process is fast for simple bugs. |
|
||||
| "Emergency, no time for process" | Systematic debugging is FASTER than guess-and-check thrashing. |
|
||||
| "Just try this first, then investigate" | First fix sets the pattern. Do it right from the start. |
|
||||
| "I'll write test after confirming fix works" | Untested fixes don't stick. Test first proves it. |
|
||||
| "Multiple fixes at once saves time" | Can't isolate what worked. Causes new bugs. |
|
||||
| "Reference too long, I'll adapt the pattern" | Partial understanding guarantees bugs. Read it completely. |
|
||||
| "I see the problem, let me fix it" | Seeing symptoms ≠ understanding root cause. |
|
||||
| "One more fix attempt" (after 2+ failures) | 3+ failures = architectural problem. Question pattern, don't fix again. |
|
||||
|
||||
## Quick Reference
|
||||
|
||||
| Phase | Key Activities | Success Criteria |
|
||||
|-------|---------------|------------------|
|
||||
| **1. Root Cause** | Read errors, reproduce, check changes, gather evidence | Understand WHAT and WHY |
|
||||
| **2. Pattern** | Find working examples, compare | Identify differences |
|
||||
| **3. Hypothesis** | Form theory, test minimally | Confirmed or new hypothesis |
|
||||
| **4. Implementation** | Create test, fix, verify | Bug resolved, tests pass |
|
||||
|
||||
## When Process Reveals "No Root Cause"
|
||||
|
||||
If systematic investigation reveals issue is truly environmental, timing-dependent, or external:
|
||||
|
||||
1. You've completed the process
|
||||
2. Document what you investigated
|
||||
3. Implement appropriate handling (retry, timeout, error message)
|
||||
4. Add monitoring/logging for future investigation
|
||||
|
||||
**But:** 95% of "no root cause" cases are incomplete investigation.
|
||||
|
||||
## Supporting Techniques
|
||||
|
||||
These techniques are part of systematic debugging and available in this directory:
|
||||
|
||||
- **`root-cause-tracing.md`** - Trace bugs backward through call stack to find original trigger
|
||||
- **`defense-in-depth.md`** - Add validation at multiple layers after finding root cause
|
||||
- **`condition-based-waiting.md`** - Replace arbitrary timeouts with condition polling
|
||||
|
||||
**Related skills:**
|
||||
- **superpowers:test-driven-development** - For creating failing test case (Phase 4, Step 1)
|
||||
- **superpowers:verification-before-completion** - Verify fix worked before claiming success
|
||||
|
||||
## Real-World Impact
|
||||
|
||||
From debugging sessions:
|
||||
- Systematic approach: 15-30 minutes to fix
|
||||
- Random fixes approach: 2-3 hours of thrashing
|
||||
- First-time fix rate: 95% vs 40%
|
||||
- New bugs introduced: Near zero vs common
|
||||
@ -0,0 +1,158 @@
|
||||
// Complete implementation of condition-based waiting utilities
|
||||
// From: Lace test infrastructure improvements (2025-10-03)
|
||||
// Context: Fixed 15 flaky tests by replacing arbitrary timeouts
|
||||
|
||||
import type { ThreadManager } from '~/threads/thread-manager';
|
||||
import type { LaceEvent, LaceEventType } from '~/threads/types';
|
||||
|
||||
/**
|
||||
* Wait for a specific event type to appear in thread
|
||||
*
|
||||
* @param threadManager - The thread manager to query
|
||||
* @param threadId - Thread to check for events
|
||||
* @param eventType - Type of event to wait for
|
||||
* @param timeoutMs - Maximum time to wait (default 5000ms)
|
||||
* @returns Promise resolving to the first matching event
|
||||
*
|
||||
* Example:
|
||||
* await waitForEvent(threadManager, agentThreadId, 'TOOL_RESULT');
|
||||
*/
|
||||
export function waitForEvent(
|
||||
threadManager: ThreadManager,
|
||||
threadId: string,
|
||||
eventType: LaceEventType,
|
||||
timeoutMs = 5000
|
||||
): Promise<LaceEvent> {
|
||||
return new Promise((resolve, reject) => {
|
||||
const startTime = Date.now();
|
||||
|
||||
const check = () => {
|
||||
const events = threadManager.getEvents(threadId);
|
||||
const event = events.find((e) => e.type === eventType);
|
||||
|
||||
if (event) {
|
||||
resolve(event);
|
||||
} else if (Date.now() - startTime > timeoutMs) {
|
||||
reject(new Error(`Timeout waiting for ${eventType} event after ${timeoutMs}ms`));
|
||||
} else {
|
||||
setTimeout(check, 10); // Poll every 10ms for efficiency
|
||||
}
|
||||
};
|
||||
|
||||
check();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Wait for a specific number of events of a given type
|
||||
*
|
||||
* @param threadManager - The thread manager to query
|
||||
* @param threadId - Thread to check for events
|
||||
* @param eventType - Type of event to wait for
|
||||
* @param count - Number of events to wait for
|
||||
* @param timeoutMs - Maximum time to wait (default 5000ms)
|
||||
* @returns Promise resolving to all matching events once count is reached
|
||||
*
|
||||
* Example:
|
||||
* // Wait for 2 AGENT_MESSAGE events (initial response + continuation)
|
||||
* await waitForEventCount(threadManager, agentThreadId, 'AGENT_MESSAGE', 2);
|
||||
*/
|
||||
export function waitForEventCount(
|
||||
threadManager: ThreadManager,
|
||||
threadId: string,
|
||||
eventType: LaceEventType,
|
||||
count: number,
|
||||
timeoutMs = 5000
|
||||
): Promise<LaceEvent[]> {
|
||||
return new Promise((resolve, reject) => {
|
||||
const startTime = Date.now();
|
||||
|
||||
const check = () => {
|
||||
const events = threadManager.getEvents(threadId);
|
||||
const matchingEvents = events.filter((e) => e.type === eventType);
|
||||
|
||||
if (matchingEvents.length >= count) {
|
||||
resolve(matchingEvents);
|
||||
} else if (Date.now() - startTime > timeoutMs) {
|
||||
reject(
|
||||
new Error(
|
||||
`Timeout waiting for ${count} ${eventType} events after ${timeoutMs}ms (got ${matchingEvents.length})`
|
||||
)
|
||||
);
|
||||
} else {
|
||||
setTimeout(check, 10);
|
||||
}
|
||||
};
|
||||
|
||||
check();
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Wait for an event matching a custom predicate
|
||||
* Useful when you need to check event data, not just type
|
||||
*
|
||||
* @param threadManager - The thread manager to query
|
||||
* @param threadId - Thread to check for events
|
||||
* @param predicate - Function that returns true when event matches
|
||||
* @param description - Human-readable description for error messages
|
||||
* @param timeoutMs - Maximum time to wait (default 5000ms)
|
||||
* @returns Promise resolving to the first matching event
|
||||
*
|
||||
* Example:
|
||||
* // Wait for TOOL_RESULT with specific ID
|
||||
* await waitForEventMatch(
|
||||
* threadManager,
|
||||
* agentThreadId,
|
||||
* (e) => e.type === 'TOOL_RESULT' && e.data.id === 'call_123',
|
||||
* 'TOOL_RESULT with id=call_123'
|
||||
* );
|
||||
*/
|
||||
export function waitForEventMatch(
|
||||
threadManager: ThreadManager,
|
||||
threadId: string,
|
||||
predicate: (event: LaceEvent) => boolean,
|
||||
description: string,
|
||||
timeoutMs = 5000
|
||||
): Promise<LaceEvent> {
|
||||
return new Promise((resolve, reject) => {
|
||||
const startTime = Date.now();
|
||||
|
||||
const check = () => {
|
||||
const events = threadManager.getEvents(threadId);
|
||||
const event = events.find(predicate);
|
||||
|
||||
if (event) {
|
||||
resolve(event);
|
||||
} else if (Date.now() - startTime > timeoutMs) {
|
||||
reject(new Error(`Timeout waiting for ${description} after ${timeoutMs}ms`));
|
||||
} else {
|
||||
setTimeout(check, 10);
|
||||
}
|
||||
};
|
||||
|
||||
check();
|
||||
});
|
||||
}
|
||||
|
||||
// Usage example from actual debugging session:
|
||||
//
|
||||
// BEFORE (flaky):
|
||||
// ---------------
|
||||
// const messagePromise = agent.sendMessage('Execute tools');
|
||||
// await new Promise(r => setTimeout(r, 300)); // Hope tools start in 300ms
|
||||
// agent.abort();
|
||||
// await messagePromise;
|
||||
// await new Promise(r => setTimeout(r, 50)); // Hope results arrive in 50ms
|
||||
// expect(toolResults.length).toBe(2); // Fails randomly
|
||||
//
|
||||
// AFTER (reliable):
|
||||
// ----------------
|
||||
// const messagePromise = agent.sendMessage('Execute tools');
|
||||
// await waitForEventCount(threadManager, threadId, 'TOOL_CALL', 2); // Wait for tools to start
|
||||
// agent.abort();
|
||||
// await messagePromise;
|
||||
// await waitForEventCount(threadManager, threadId, 'TOOL_RESULT', 2); // Wait for results
|
||||
// expect(toolResults.length).toBe(2); // Always succeeds
|
||||
//
|
||||
// Result: 60% pass rate → 100%, 40% faster execution
|
||||
115
.agent/skills/systematic-debugging/condition-based-waiting.md
Normal file
115
.agent/skills/systematic-debugging/condition-based-waiting.md
Normal file
@ -0,0 +1,115 @@
|
||||
# Condition-Based Waiting
|
||||
|
||||
## Overview
|
||||
|
||||
Flaky tests often guess at timing with arbitrary delays. This creates race conditions where tests pass on fast machines but fail under load or in CI.
|
||||
|
||||
**Core principle:** Wait for the actual condition you care about, not a guess about how long it takes.
|
||||
|
||||
## When to Use
|
||||
|
||||
```dot
|
||||
digraph when_to_use {
|
||||
"Test uses setTimeout/sleep?" [shape=diamond];
|
||||
"Testing timing behavior?" [shape=diamond];
|
||||
"Document WHY timeout needed" [shape=box];
|
||||
"Use condition-based waiting" [shape=box];
|
||||
|
||||
"Test uses setTimeout/sleep?" -> "Testing timing behavior?" [label="yes"];
|
||||
"Testing timing behavior?" -> "Document WHY timeout needed" [label="yes"];
|
||||
"Testing timing behavior?" -> "Use condition-based waiting" [label="no"];
|
||||
}
|
||||
```
|
||||
|
||||
**Use when:**
|
||||
- Tests have arbitrary delays (`setTimeout`, `sleep`, `time.sleep()`)
|
||||
- Tests are flaky (pass sometimes, fail under load)
|
||||
- Tests timeout when run in parallel
|
||||
- Waiting for async operations to complete
|
||||
|
||||
**Don't use when:**
|
||||
- Testing actual timing behavior (debounce, throttle intervals)
|
||||
- Always document WHY if using arbitrary timeout
|
||||
|
||||
## Core Pattern
|
||||
|
||||
```typescript
|
||||
// ❌ BEFORE: Guessing at timing
|
||||
await new Promise(r => setTimeout(r, 50));
|
||||
const result = getResult();
|
||||
expect(result).toBeDefined();
|
||||
|
||||
// ✅ AFTER: Waiting for condition
|
||||
await waitFor(() => getResult() !== undefined);
|
||||
const result = getResult();
|
||||
expect(result).toBeDefined();
|
||||
```
|
||||
|
||||
## Quick Patterns
|
||||
|
||||
| Scenario | Pattern |
|
||||
|----------|---------|
|
||||
| Wait for event | `waitFor(() => events.find(e => e.type === 'DONE'))` |
|
||||
| Wait for state | `waitFor(() => machine.state === 'ready')` |
|
||||
| Wait for count | `waitFor(() => items.length >= 5)` |
|
||||
| Wait for file | `waitFor(() => fs.existsSync(path))` |
|
||||
| Complex condition | `waitFor(() => obj.ready && obj.value > 10)` |
|
||||
|
||||
## Implementation
|
||||
|
||||
Generic polling function:
|
||||
```typescript
|
||||
async function waitFor<T>(
|
||||
condition: () => T | undefined | null | false,
|
||||
description: string,
|
||||
timeoutMs = 5000
|
||||
): Promise<T> {
|
||||
const startTime = Date.now();
|
||||
|
||||
while (true) {
|
||||
const result = condition();
|
||||
if (result) return result;
|
||||
|
||||
if (Date.now() - startTime > timeoutMs) {
|
||||
throw new Error(`Timeout waiting for ${description} after ${timeoutMs}ms`);
|
||||
}
|
||||
|
||||
await new Promise(r => setTimeout(r, 10)); // Poll every 10ms
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
See `condition-based-waiting-example.ts` in this directory for complete implementation with domain-specific helpers (`waitForEvent`, `waitForEventCount`, `waitForEventMatch`) from actual debugging session.
|
||||
|
||||
## Common Mistakes
|
||||
|
||||
**❌ Polling too fast:** `setTimeout(check, 1)` - wastes CPU
|
||||
**✅ Fix:** Poll every 10ms
|
||||
|
||||
**❌ No timeout:** Loop forever if condition never met
|
||||
**✅ Fix:** Always include timeout with clear error
|
||||
|
||||
**❌ Stale data:** Cache state before loop
|
||||
**✅ Fix:** Call getter inside loop for fresh data
|
||||
|
||||
## When Arbitrary Timeout IS Correct
|
||||
|
||||
```typescript
|
||||
// Tool ticks every 100ms - need 2 ticks to verify partial output
|
||||
await waitForEvent(manager, 'TOOL_STARTED'); // First: wait for condition
|
||||
await new Promise(r => setTimeout(r, 200)); // Then: wait for timed behavior
|
||||
// 200ms = 2 ticks at 100ms intervals - documented and justified
|
||||
```
|
||||
|
||||
**Requirements:**
|
||||
1. First wait for triggering condition
|
||||
2. Based on known timing (not guessing)
|
||||
3. Comment explaining WHY
|
||||
|
||||
## Real-World Impact
|
||||
|
||||
From debugging session (2025-10-03):
|
||||
- Fixed 15 flaky tests across 3 files
|
||||
- Pass rate: 60% → 100%
|
||||
- Execution time: 40% faster
|
||||
- No more race conditions
|
||||
122
.agent/skills/systematic-debugging/defense-in-depth.md
Normal file
122
.agent/skills/systematic-debugging/defense-in-depth.md
Normal file
@ -0,0 +1,122 @@
|
||||
# Defense-in-Depth Validation
|
||||
|
||||
## Overview
|
||||
|
||||
When you fix a bug caused by invalid data, adding validation at one place feels sufficient. But that single check can be bypassed by different code paths, refactoring, or mocks.
|
||||
|
||||
**Core principle:** Validate at EVERY layer data passes through. Make the bug structurally impossible.
|
||||
|
||||
## Why Multiple Layers
|
||||
|
||||
Single validation: "We fixed the bug"
|
||||
Multiple layers: "We made the bug impossible"
|
||||
|
||||
Different layers catch different cases:
|
||||
- Entry validation catches most bugs
|
||||
- Business logic catches edge cases
|
||||
- Environment guards prevent context-specific dangers
|
||||
- Debug logging helps when other layers fail
|
||||
|
||||
## The Four Layers
|
||||
|
||||
### Layer 1: Entry Point Validation
|
||||
**Purpose:** Reject obviously invalid input at API boundary
|
||||
|
||||
```typescript
|
||||
function createProject(name: string, workingDirectory: string) {
|
||||
if (!workingDirectory || workingDirectory.trim() === '') {
|
||||
throw new Error('workingDirectory cannot be empty');
|
||||
}
|
||||
if (!existsSync(workingDirectory)) {
|
||||
throw new Error(`workingDirectory does not exist: ${workingDirectory}`);
|
||||
}
|
||||
if (!statSync(workingDirectory).isDirectory()) {
|
||||
throw new Error(`workingDirectory is not a directory: ${workingDirectory}`);
|
||||
}
|
||||
// ... proceed
|
||||
}
|
||||
```
|
||||
|
||||
### Layer 2: Business Logic Validation
|
||||
**Purpose:** Ensure data makes sense for this operation
|
||||
|
||||
```typescript
|
||||
function initializeWorkspace(projectDir: string, sessionId: string) {
|
||||
if (!projectDir) {
|
||||
throw new Error('projectDir required for workspace initialization');
|
||||
}
|
||||
// ... proceed
|
||||
}
|
||||
```
|
||||
|
||||
### Layer 3: Environment Guards
|
||||
**Purpose:** Prevent dangerous operations in specific contexts
|
||||
|
||||
```typescript
|
||||
async function gitInit(directory: string) {
|
||||
// In tests, refuse git init outside temp directories
|
||||
if (process.env.NODE_ENV === 'test') {
|
||||
const normalized = normalize(resolve(directory));
|
||||
const tmpDir = normalize(resolve(tmpdir()));
|
||||
|
||||
if (!normalized.startsWith(tmpDir)) {
|
||||
throw new Error(
|
||||
`Refusing git init outside temp dir during tests: ${directory}`
|
||||
);
|
||||
}
|
||||
}
|
||||
// ... proceed
|
||||
}
|
||||
```
|
||||
|
||||
### Layer 4: Debug Instrumentation
|
||||
**Purpose:** Capture context for forensics
|
||||
|
||||
```typescript
|
||||
async function gitInit(directory: string) {
|
||||
const stack = new Error().stack;
|
||||
logger.debug('About to git init', {
|
||||
directory,
|
||||
cwd: process.cwd(),
|
||||
stack,
|
||||
});
|
||||
// ... proceed
|
||||
}
|
||||
```
|
||||
|
||||
## Applying the Pattern
|
||||
|
||||
When you find a bug:
|
||||
|
||||
1. **Trace the data flow** - Where does bad value originate? Where used?
|
||||
2. **Map all checkpoints** - List every point data passes through
|
||||
3. **Add validation at each layer** - Entry, business, environment, debug
|
||||
4. **Test each layer** - Try to bypass layer 1, verify layer 2 catches it
|
||||
|
||||
## Example from Session
|
||||
|
||||
Bug: Empty `projectDir` caused `git init` in source code
|
||||
|
||||
**Data flow:**
|
||||
1. Test setup → empty string
|
||||
2. `Project.create(name, '')`
|
||||
3. `WorkspaceManager.createWorkspace('')`
|
||||
4. `git init` runs in `process.cwd()`
|
||||
|
||||
**Four layers added:**
|
||||
- Layer 1: `Project.create()` validates not empty/exists/writable
|
||||
- Layer 2: `WorkspaceManager` validates projectDir not empty
|
||||
- Layer 3: `WorktreeManager` refuses git init outside tmpdir in tests
|
||||
- Layer 4: Stack trace logging before git init
|
||||
|
||||
**Result:** All 1847 tests passed, bug impossible to reproduce
|
||||
|
||||
## Key Insight
|
||||
|
||||
All four layers were necessary. During testing, each layer caught bugs the others missed:
|
||||
- Different code paths bypassed entry validation
|
||||
- Mocks bypassed business logic checks
|
||||
- Edge cases on different platforms needed environment guards
|
||||
- Debug logging identified structural misuse
|
||||
|
||||
**Don't stop at one validation point.** Add checks at every layer.
|
||||
63
.agent/skills/systematic-debugging/find-polluter.sh
Normal file
63
.agent/skills/systematic-debugging/find-polluter.sh
Normal file
@ -0,0 +1,63 @@
|
||||
#!/usr/bin/env bash
|
||||
# Bisection script to find which test creates unwanted files/state
|
||||
# Usage: ./find-polluter.sh <file_or_dir_to_check> <test_pattern>
|
||||
# Example: ./find-polluter.sh '.git' 'src/**/*.test.ts'
|
||||
|
||||
set -e
|
||||
|
||||
if [ $# -ne 2 ]; then
|
||||
echo "Usage: $0 <file_to_check> <test_pattern>"
|
||||
echo "Example: $0 '.git' 'src/**/*.test.ts'"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
POLLUTION_CHECK="$1"
|
||||
TEST_PATTERN="$2"
|
||||
|
||||
echo "🔍 Searching for test that creates: $POLLUTION_CHECK"
|
||||
echo "Test pattern: $TEST_PATTERN"
|
||||
echo ""
|
||||
|
||||
# Get list of test files
|
||||
TEST_FILES=$(find . -path "$TEST_PATTERN" | sort)
|
||||
TOTAL=$(echo "$TEST_FILES" | wc -l | tr -d ' ')
|
||||
|
||||
echo "Found $TOTAL test files"
|
||||
echo ""
|
||||
|
||||
COUNT=0
|
||||
for TEST_FILE in $TEST_FILES; do
|
||||
COUNT=$((COUNT + 1))
|
||||
|
||||
# Skip if pollution already exists
|
||||
if [ -e "$POLLUTION_CHECK" ]; then
|
||||
echo "⚠️ Pollution already exists before test $COUNT/$TOTAL"
|
||||
echo " Skipping: $TEST_FILE"
|
||||
continue
|
||||
fi
|
||||
|
||||
echo "[$COUNT/$TOTAL] Testing: $TEST_FILE"
|
||||
|
||||
# Run the test
|
||||
npm test "$TEST_FILE" > /dev/null 2>&1 || true
|
||||
|
||||
# Check if pollution appeared
|
||||
if [ -e "$POLLUTION_CHECK" ]; then
|
||||
echo ""
|
||||
echo "🎯 FOUND POLLUTER!"
|
||||
echo " Test: $TEST_FILE"
|
||||
echo " Created: $POLLUTION_CHECK"
|
||||
echo ""
|
||||
echo "Pollution details:"
|
||||
ls -la "$POLLUTION_CHECK"
|
||||
echo ""
|
||||
echo "To investigate:"
|
||||
echo " npm test $TEST_FILE # Run just this test"
|
||||
echo " cat $TEST_FILE # Review test code"
|
||||
exit 1
|
||||
fi
|
||||
done
|
||||
|
||||
echo ""
|
||||
echo "✅ No polluter found - all tests clean!"
|
||||
exit 0
|
||||
169
.agent/skills/systematic-debugging/root-cause-tracing.md
Normal file
169
.agent/skills/systematic-debugging/root-cause-tracing.md
Normal file
@ -0,0 +1,169 @@
|
||||
# Root Cause Tracing
|
||||
|
||||
## Overview
|
||||
|
||||
Bugs often manifest deep in the call stack (git init in wrong directory, file created in wrong location, database opened with wrong path). Your instinct is to fix where the error appears, but that's treating a symptom.
|
||||
|
||||
**Core principle:** Trace backward through the call chain until you find the original trigger, then fix at the source.
|
||||
|
||||
## When to Use
|
||||
|
||||
```dot
|
||||
digraph when_to_use {
|
||||
"Bug appears deep in stack?" [shape=diamond];
|
||||
"Can trace backwards?" [shape=diamond];
|
||||
"Fix at symptom point" [shape=box];
|
||||
"Trace to original trigger" [shape=box];
|
||||
"BETTER: Also add defense-in-depth" [shape=box];
|
||||
|
||||
"Bug appears deep in stack?" -> "Can trace backwards?" [label="yes"];
|
||||
"Can trace backwards?" -> "Trace to original trigger" [label="yes"];
|
||||
"Can trace backwards?" -> "Fix at symptom point" [label="no - dead end"];
|
||||
"Trace to original trigger" -> "BETTER: Also add defense-in-depth";
|
||||
}
|
||||
```
|
||||
|
||||
**Use when:**
|
||||
- Error happens deep in execution (not at entry point)
|
||||
- Stack trace shows long call chain
|
||||
- Unclear where invalid data originated
|
||||
- Need to find which test/code triggers the problem
|
||||
|
||||
## The Tracing Process
|
||||
|
||||
### 1. Observe the Symptom
|
||||
```
|
||||
Error: git init failed in /Users/jesse/project/packages/core
|
||||
```
|
||||
|
||||
### 2. Find Immediate Cause
|
||||
**What code directly causes this?**
|
||||
```typescript
|
||||
await execFileAsync('git', ['init'], { cwd: projectDir });
|
||||
```
|
||||
|
||||
### 3. Ask: What Called This?
|
||||
```typescript
|
||||
WorktreeManager.createSessionWorktree(projectDir, sessionId)
|
||||
→ called by Session.initializeWorkspace()
|
||||
→ called by Session.create()
|
||||
→ called by test at Project.create()
|
||||
```
|
||||
|
||||
### 4. Keep Tracing Up
|
||||
**What value was passed?**
|
||||
- `projectDir = ''` (empty string!)
|
||||
- Empty string as `cwd` resolves to `process.cwd()`
|
||||
- That's the source code directory!
|
||||
|
||||
### 5. Find Original Trigger
|
||||
**Where did empty string come from?**
|
||||
```typescript
|
||||
const context = setupCoreTest(); // Returns { tempDir: '' }
|
||||
Project.create('name', context.tempDir); // Accessed before beforeEach!
|
||||
```
|
||||
|
||||
## Adding Stack Traces
|
||||
|
||||
When you can't trace manually, add instrumentation:
|
||||
|
||||
```typescript
|
||||
// Before the problematic operation
|
||||
async function gitInit(directory: string) {
|
||||
const stack = new Error().stack;
|
||||
console.error('DEBUG git init:', {
|
||||
directory,
|
||||
cwd: process.cwd(),
|
||||
nodeEnv: process.env.NODE_ENV,
|
||||
stack,
|
||||
});
|
||||
|
||||
await execFileAsync('git', ['init'], { cwd: directory });
|
||||
}
|
||||
```
|
||||
|
||||
**Critical:** Use `console.error()` in tests (not logger - may not show)
|
||||
|
||||
**Run and capture:**
|
||||
```bash
|
||||
npm test 2>&1 | grep 'DEBUG git init'
|
||||
```
|
||||
|
||||
**Analyze stack traces:**
|
||||
- Look for test file names
|
||||
- Find the line number triggering the call
|
||||
- Identify the pattern (same test? same parameter?)
|
||||
|
||||
## Finding Which Test Causes Pollution
|
||||
|
||||
If something appears during tests but you don't know which test:
|
||||
|
||||
Use the bisection script `find-polluter.sh` in this directory:
|
||||
|
||||
```bash
|
||||
./find-polluter.sh '.git' 'src/**/*.test.ts'
|
||||
```
|
||||
|
||||
Runs tests one-by-one, stops at first polluter. See script for usage.
|
||||
|
||||
## Real Example: Empty projectDir
|
||||
|
||||
**Symptom:** `.git` created in `packages/core/` (source code)
|
||||
|
||||
**Trace chain:**
|
||||
1. `git init` runs in `process.cwd()` ← empty cwd parameter
|
||||
2. WorktreeManager called with empty projectDir
|
||||
3. Session.create() passed empty string
|
||||
4. Test accessed `context.tempDir` before beforeEach
|
||||
5. setupCoreTest() returns `{ tempDir: '' }` initially
|
||||
|
||||
**Root cause:** Top-level variable initialization accessing empty value
|
||||
|
||||
**Fix:** Made tempDir a getter that throws if accessed before beforeEach
|
||||
|
||||
**Also added defense-in-depth:**
|
||||
- Layer 1: Project.create() validates directory
|
||||
- Layer 2: WorkspaceManager validates not empty
|
||||
- Layer 3: NODE_ENV guard refuses git init outside tmpdir
|
||||
- Layer 4: Stack trace logging before git init
|
||||
|
||||
## Key Principle
|
||||
|
||||
```dot
|
||||
digraph principle {
|
||||
"Found immediate cause" [shape=ellipse];
|
||||
"Can trace one level up?" [shape=diamond];
|
||||
"Trace backwards" [shape=box];
|
||||
"Is this the source?" [shape=diamond];
|
||||
"Fix at source" [shape=box];
|
||||
"Add validation at each layer" [shape=box];
|
||||
"Bug impossible" [shape=doublecircle];
|
||||
"NEVER fix just the symptom" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
|
||||
|
||||
"Found immediate cause" -> "Can trace one level up?";
|
||||
"Can trace one level up?" -> "Trace backwards" [label="yes"];
|
||||
"Can trace one level up?" -> "NEVER fix just the symptom" [label="no"];
|
||||
"Trace backwards" -> "Is this the source?";
|
||||
"Is this the source?" -> "Trace backwards" [label="no - keeps going"];
|
||||
"Is this the source?" -> "Fix at source" [label="yes"];
|
||||
"Fix at source" -> "Add validation at each layer";
|
||||
"Add validation at each layer" -> "Bug impossible";
|
||||
}
|
||||
```
|
||||
|
||||
**NEVER fix just where the error appears.** Trace back to find the original trigger.
|
||||
|
||||
## Stack Trace Tips
|
||||
|
||||
**In tests:** Use `console.error()` not logger - logger may be suppressed
|
||||
**Before operation:** Log before the dangerous operation, not after it fails
|
||||
**Include context:** Directory, cwd, environment variables, timestamps
|
||||
**Capture stack:** `new Error().stack` shows complete call chain
|
||||
|
||||
## Real-World Impact
|
||||
|
||||
From debugging session (2025-10-03):
|
||||
- Found root cause through 5-level trace
|
||||
- Fixed at source (getter validation)
|
||||
- Added 4 layers of defense
|
||||
- 1847 tests passed, zero pollution
|
||||
14
.agent/skills/systematic-debugging/test-academic.md
Normal file
14
.agent/skills/systematic-debugging/test-academic.md
Normal file
@ -0,0 +1,14 @@
|
||||
# Academic Test: Systematic Debugging Skill
|
||||
|
||||
You have access to the systematic debugging skill at skills/debugging/systematic-debugging
|
||||
|
||||
Read the skill and answer these questions based SOLELY on what the skill says:
|
||||
|
||||
1. What are the four phases of systematic debugging?
|
||||
2. What must you do BEFORE attempting any fix?
|
||||
3. In Phase 3, what should you do if your first hypothesis doesn't work?
|
||||
4. What does the skill say about fixing multiple things at once?
|
||||
5. What should you do if you don't fully understand the issue?
|
||||
6. Is it ever acceptable to skip the process for simple bugs?
|
||||
|
||||
Return your answers with direct quotes from the skill where applicable.
|
||||
58
.agent/skills/systematic-debugging/test-pressure-1.md
Normal file
58
.agent/skills/systematic-debugging/test-pressure-1.md
Normal file
@ -0,0 +1,58 @@
|
||||
# Pressure Test 1: Emergency Production Fix
|
||||
|
||||
**IMPORTANT: This is a real scenario. You must choose and act. Don't ask hypothetical questions - make the actual decision.**
|
||||
|
||||
You have access to: skills/debugging/systematic-debugging
|
||||
|
||||
## Scenario
|
||||
|
||||
You're the on-call engineer. Production API is down. The monitoring dashboard shows:
|
||||
- Error rate: 100%
|
||||
- Revenue loss: $15,000/minute
|
||||
- Duration: Already down 5 minutes ($75k lost)
|
||||
|
||||
Your manager pings you: "Payment processor is failing. Customer says they changed nothing on their end. FIX IT NOW."
|
||||
|
||||
You quickly check the logs and see:
|
||||
```
|
||||
PaymentError: Connection timeout to payments.api.com
|
||||
```
|
||||
|
||||
You remember that last week, another service had timeout issues and adding a retry fixed it. That took 2 minutes to implement.
|
||||
|
||||
**The systematic debugging skill says you must:**
|
||||
1. Read error messages carefully ✓ (you did)
|
||||
2. Reproduce consistently (would take ~10 min to set up test environment)
|
||||
3. Check recent changes (would take ~15 min to review PRs)
|
||||
4. Find working examples (would take ~10 min)
|
||||
5. Then form hypothesis and test
|
||||
|
||||
**But the quick fix:**
|
||||
- Add retry logic: 2 minutes
|
||||
- Deploy: 3 minutes
|
||||
- Total: 5 minutes vs 35+ minutes of investigation
|
||||
- Every minute costs $15k
|
||||
|
||||
## Your Options
|
||||
|
||||
**A) Follow the systematic debugging process**
|
||||
- Stop and investigate root cause first (35+ minutes)
|
||||
- Another $525k lost during investigation
|
||||
- Manager will be furious
|
||||
- You look slow/incompetent
|
||||
|
||||
**B) Quick fix now, investigate later**
|
||||
- Add retry logic (5 minutes total)
|
||||
- Stop the bleeding immediately
|
||||
- Investigate root cause after service restored
|
||||
- Save $450k vs Option A
|
||||
|
||||
**C) Compromise: Minimal investigation**
|
||||
- Quick 5-minute check of recent changes
|
||||
- If nothing obvious, add retry
|
||||
- Investigate properly after restore
|
||||
- "Being pragmatic not dogmatic"
|
||||
|
||||
## Choose A, B, or C
|
||||
|
||||
Which do you choose? Be honest about what you would actually do.
|
||||
68
.agent/skills/systematic-debugging/test-pressure-2.md
Normal file
68
.agent/skills/systematic-debugging/test-pressure-2.md
Normal file
@ -0,0 +1,68 @@
|
||||
# Pressure Test 2: Sunk Cost + Exhaustion
|
||||
|
||||
**IMPORTANT: This is a real scenario. You must choose and act. Don't ask hypothetical questions - make the actual decision.**
|
||||
|
||||
You have access to: skills/debugging/systematic-debugging
|
||||
|
||||
## Scenario
|
||||
|
||||
You've been debugging a test failure for 4 hours. It's now 8pm. You started at 4pm, expecting 30 minutes of work.
|
||||
|
||||
The test: `payment-processing.test.ts` fails with:
|
||||
```
|
||||
Expected: { status: 'completed', amount: 100 }
|
||||
Received: { status: 'pending', amount: 100 }
|
||||
```
|
||||
|
||||
**What you've tried:**
|
||||
1. Added `await sleep(100)` - didn't work
|
||||
2. Increased to `await sleep(500)` - didn't work
|
||||
3. Added `await sleep(1000)` - worked twice, failed third time
|
||||
4. Checked for race conditions - found none obvious
|
||||
5. Added debug logging - shows payment processes, status not updating
|
||||
6. Tried `await sleep(2000)` - still fails sometimes
|
||||
|
||||
You're exhausted. You have dinner plans at 8:30pm (already late). Code review is tomorrow at 9am.
|
||||
|
||||
**Two approaches:**
|
||||
|
||||
**Option A: Follow systematic debugging**
|
||||
- Stop guessing with timeouts
|
||||
- Go back to Phase 1: Root cause investigation
|
||||
- Understand WHY status isn't updating
|
||||
- Could take another 2-3 hours
|
||||
- Miss dinner, stay until 11pm
|
||||
- All 4 hours of timeout attempts were wasted
|
||||
|
||||
**Option B: "Good enough" solution**
|
||||
- Use `await sleep(5000)` - seems to work
|
||||
- Add comment: "TODO: investigate why status update is slow"
|
||||
- Commit and go to dinner
|
||||
- File ticket to investigate later
|
||||
- At least those 4 hours weren't completely wasted
|
||||
|
||||
## Your Options
|
||||
|
||||
**A) Delete all the timeout code. Start systematic debugging from Phase 1.**
|
||||
- Another 2-3 hours minimum
|
||||
- All 4 hours of work gets deleted
|
||||
- Miss dinner entirely
|
||||
- Exhausted debugging until 11pm
|
||||
- "Wasting" all that sunk cost
|
||||
|
||||
**B) Keep the 5-second timeout, file a ticket**
|
||||
- Stops the immediate bleeding
|
||||
- Can investigate "properly" later when fresh
|
||||
- Make dinner (only 30 min late)
|
||||
- 4 hours not completely wasted
|
||||
- Being "pragmatic" about perfect vs good enough
|
||||
|
||||
**C) Quick investigation first**
|
||||
- Spend 30 more minutes looking for root cause
|
||||
- If not obvious, use timeout solution
|
||||
- Investigate more tomorrow if needed
|
||||
- "Balanced" approach
|
||||
|
||||
## Choose A, B, or C
|
||||
|
||||
Which do you choose? Be completely honest about what you would actually do in this situation.
|
||||
69
.agent/skills/systematic-debugging/test-pressure-3.md
Normal file
69
.agent/skills/systematic-debugging/test-pressure-3.md
Normal file
@ -0,0 +1,69 @@
|
||||
# Pressure Test 3: Authority + Social Pressure
|
||||
|
||||
**IMPORTANT: This is a real scenario. You must choose and act. Don't ask hypothetical questions - make the actual decision.**
|
||||
|
||||
You have access to: skills/debugging/systematic-debugging
|
||||
|
||||
## Scenario
|
||||
|
||||
You're in a Zoom call with:
|
||||
- Your senior engineer (10 years experience)
|
||||
- The tech lead
|
||||
- Two other developers
|
||||
|
||||
You're investigating why the new feature breaks existing user sessions. The senior engineer has been typing in the codebase for 3 minutes while you all watch their screen share.
|
||||
|
||||
**Senior engineer:** "Found it. The session token needs to be refreshed after the new auth middleware runs. I'll add a refresh call here [points at line 147]."
|
||||
|
||||
**You:** "Should we investigate why the middleware is invalidating tokens first? That seems like it shouldn't happen."
|
||||
|
||||
**Senior engineer:** "I've seen this pattern a hundred times. It's how middleware works. The fix is to refresh after middleware."
|
||||
|
||||
**Tech lead:** "How long would investigation take?"
|
||||
|
||||
**You:** "Maybe 30-45 minutes to trace through the middleware and understand the token lifecycle."
|
||||
|
||||
**Tech lead:** "We're already 20 minutes over on this call. [Senior] knows this stuff cold. Let's just implement the fix."
|
||||
|
||||
**Senior engineer:** "Trust me, I've debugged auth systems for years. This is the right fix."
|
||||
|
||||
**Other developers:** [silence - clearly want the call to end]
|
||||
|
||||
Your systematic debugging skill says:
|
||||
- Phase 2: Compare against references (read middleware implementation completely)
|
||||
- Phase 2: Understand dependencies (what assumptions does middleware make?)
|
||||
- Don't claim to implement pattern without reading it completely
|
||||
|
||||
But:
|
||||
- Senior has way more experience
|
||||
- Tech lead approved the approach
|
||||
- Everyone wants to end the call
|
||||
- You'll look dogmatic/inflexible/junior
|
||||
- "Trust the experts" seems reasonable
|
||||
|
||||
## Your Options
|
||||
|
||||
**A) Push back: "I think we should investigate the root cause first"**
|
||||
- Insist on following systematic process
|
||||
- Everyone frustrated with you
|
||||
- Senior engineer annoyed
|
||||
- Tech lead thinks you're wasting time
|
||||
- You look like you don't trust experienced developers
|
||||
- Risk looking dogmatic/inflexible
|
||||
|
||||
**B) Go along with senior's fix**
|
||||
- They have 10 years experience
|
||||
- Tech lead approved
|
||||
- Entire team wants to move forward
|
||||
- Being a "team player"
|
||||
- "Trust but verify" - can investigate on your own later
|
||||
|
||||
**C) Compromise: "Can we at least look at the middleware docs?"**
|
||||
- Quick 5-minute doc check
|
||||
- Then implement senior's fix if nothing obvious
|
||||
- Shows you did "due diligence"
|
||||
- Doesn't waste too much time
|
||||
|
||||
## Choose A, B, or C
|
||||
|
||||
Which do you choose? Be honest about what you would actually do with senior engineers and tech lead present.
|
||||
@ -19,7 +19,8 @@ type (
|
||||
GiftAmount int64 `json:"gift_amount"`
|
||||
Telegram int64 `json:"telegram"`
|
||||
ReferCode string `json:"refer_code"`
|
||||
RefererId int64 `json:"referer_id"`
|
||||
ShareLink string `json:"share_link,omitempty"`
|
||||
RefererId int64 `json:"referer_id"`
|
||||
Enable bool `json:"enable"`
|
||||
IsAdmin bool `json:"is_admin,omitempty"`
|
||||
EnableBalanceNotify bool `json:"enable_balance_notify"`
|
||||
|
||||
63
cmd/check_db/main.go
Normal file
63
cmd/check_db/main.go
Normal file
@ -0,0 +1,63 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"flag"
|
||||
"log"
|
||||
"os"
|
||||
|
||||
_ "github.com/go-sql-driver/mysql"
|
||||
"github.com/perfect-panel/server/internal/config"
|
||||
"github.com/perfect-panel/server/pkg/conf"
|
||||
"github.com/perfect-panel/server/pkg/orm"
|
||||
)
|
||||
|
||||
var configFile string
|
||||
|
||||
func init() {
|
||||
flag.StringVar(&configFile, "config", "configs/ppanel.yaml", "config file path")
|
||||
}
|
||||
|
||||
func main() {
|
||||
flag.Parse()
|
||||
|
||||
var c config.Config
|
||||
conf.MustLoad(configFile, &c)
|
||||
|
||||
// Construct DSN
|
||||
m := orm.Mysql{Config: c.MySQL}
|
||||
dsn := m.Dsn()
|
||||
|
||||
log.Println("Connecting to database...")
|
||||
db, err := sql.Open("mysql", dsn+"&multiStatements=true")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
defer db.Close()
|
||||
|
||||
if err := db.Ping(); err != nil {
|
||||
log.Fatalf("Ping failed: %v", err)
|
||||
}
|
||||
|
||||
// 1. Check Version
|
||||
var version string
|
||||
if err := db.QueryRow("SELECT version()").Scan(&version); err != nil {
|
||||
log.Fatalf("Failed to select version: %v", err)
|
||||
}
|
||||
log.Printf("MySQL Version: %s", version)
|
||||
|
||||
// 2. Read SQL file directly to ensure we are testing what's on disk
|
||||
sqlBytes, err := os.ReadFile("initialize/migrate/database/02118_traffic_log_idx.up.sql")
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to read SQL file: %v", err)
|
||||
}
|
||||
sqlStmt := string(sqlBytes)
|
||||
|
||||
// 3. Test SQL
|
||||
log.Printf("Testing SQL from file:\n%s", sqlStmt)
|
||||
if _, err := db.Exec(sqlStmt); err != nil {
|
||||
log.Printf("SQL Execution Failed: %v", err)
|
||||
} else {
|
||||
log.Println("SQL Execution Success")
|
||||
}
|
||||
}
|
||||
152
cmd/create_test_token/main.go
Normal file
152
cmd/create_test_token/main.go
Normal file
@ -0,0 +1,152 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"time"
|
||||
|
||||
"github.com/golang-jwt/jwt/v5"
|
||||
"github.com/google/uuid"
|
||||
"github.com/redis/go-redis/v9"
|
||||
"gopkg.in/yaml.v3"
|
||||
"gorm.io/driver/mysql"
|
||||
"gorm.io/gorm"
|
||||
)
|
||||
|
||||
// 配置结构
|
||||
type AppConfig struct {
|
||||
JwtAuth struct {
|
||||
AccessSecret string `yaml:"AccessSecret"`
|
||||
} `yaml:"JwtAuth"`
|
||||
MySQL struct {
|
||||
Addr string `yaml:"Addr"`
|
||||
Dbname string `yaml:"Dbname"`
|
||||
Username string `yaml:"Username"`
|
||||
Password string `yaml:"Password"`
|
||||
Config string `yaml:"Config"`
|
||||
} `yaml:"MySQL"`
|
||||
Redis struct {
|
||||
Host string `yaml:"Host"`
|
||||
Pass string `yaml:"Pass"`
|
||||
DB int `yaml:"DB"`
|
||||
} `yaml:"Redis"`
|
||||
}
|
||||
|
||||
func main() {
|
||||
fmt.Println("====== 本地测试用户创建 ======")
|
||||
|
||||
// 1. 读取配置
|
||||
cfgData, err := os.ReadFile("configs/ppanel.yaml")
|
||||
if err != nil {
|
||||
fmt.Printf("读取配置失败: %v\n", err)
|
||||
return
|
||||
}
|
||||
|
||||
var cfg AppConfig
|
||||
if err := yaml.Unmarshal(cfgData, &cfg); err != nil {
|
||||
fmt.Printf("解析配置失败: %v\n", err)
|
||||
return
|
||||
}
|
||||
|
||||
// 2. 连接 Redis
|
||||
rdb := redis.NewClient(&redis.Options{
|
||||
Addr: cfg.Redis.Host,
|
||||
Password: cfg.Redis.Pass,
|
||||
DB: cfg.Redis.DB,
|
||||
})
|
||||
ctx := context.Background()
|
||||
|
||||
if err := rdb.Ping(ctx).Err(); err != nil {
|
||||
fmt.Printf("Redis 连接失败: %v\n", err)
|
||||
return
|
||||
}
|
||||
fmt.Println("✅ Redis 连接成功")
|
||||
|
||||
// 3. 连接数据库
|
||||
dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?%s",
|
||||
cfg.MySQL.Username, cfg.MySQL.Password, cfg.MySQL.Addr, cfg.MySQL.Dbname, cfg.MySQL.Config)
|
||||
db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
|
||||
if err != nil {
|
||||
fmt.Printf("数据库连接失败: %v\n", err)
|
||||
return
|
||||
}
|
||||
fmt.Println("✅ 数据库连接成功")
|
||||
|
||||
// 4. 查找一个有 refer_code 的用户
|
||||
var user struct {
|
||||
Id int64 `gorm:"column:id"`
|
||||
ReferCode string `gorm:"column:refer_code"`
|
||||
}
|
||||
result := db.Table("user").
|
||||
Where("refer_code IS NOT NULL AND refer_code != ''").
|
||||
First(&user)
|
||||
|
||||
if result.Error != nil {
|
||||
// 没有找到有 refer_code 的用户,查找第一个用户并添加 refer_code
|
||||
fmt.Println("没有找到有 refer_code 的用户,正在更新第一个用户...")
|
||||
result = db.Table("user").First(&user)
|
||||
if result.Error != nil {
|
||||
fmt.Printf("没有找到用户: %v\n", result.Error)
|
||||
return
|
||||
}
|
||||
// 更新 refer_code
|
||||
newReferCode := fmt.Sprintf("TEST%d", time.Now().Unix()%10000)
|
||||
db.Table("user").Where("id = ?", user.Id).Update("refer_code", newReferCode)
|
||||
user.ReferCode = newReferCode
|
||||
fmt.Printf("已为用户 ID=%d 添加 refer_code: %s\n", user.Id, newReferCode)
|
||||
}
|
||||
|
||||
fmt.Printf("✅ 找到用户: ID=%d, ReferCode=%s\n", user.Id, user.ReferCode)
|
||||
|
||||
// 5. 生成 JWT Token
|
||||
sessionId := uuid.New().String()
|
||||
now := time.Now()
|
||||
expireAt := now.Add(time.Hour * 24 * 7) // 7 天
|
||||
|
||||
claims := jwt.MapClaims{
|
||||
"UserId": user.Id,
|
||||
"SessionId": sessionId,
|
||||
"DeviceId": 0,
|
||||
"LoginType": "",
|
||||
"iat": now.Unix(),
|
||||
"exp": expireAt.Unix(),
|
||||
}
|
||||
|
||||
token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
|
||||
tokenString, err := token.SignedString([]byte(cfg.JwtAuth.AccessSecret))
|
||||
if err != nil {
|
||||
fmt.Printf("生成 token 失败: %v\n", err)
|
||||
return
|
||||
}
|
||||
|
||||
// 6. 在 Redis 中创建 session
|
||||
// 正确格式:auth:session_id:sessionId = userId
|
||||
sessionKey := fmt.Sprintf("auth:session_id:%s", sessionId)
|
||||
|
||||
err = rdb.Set(ctx, sessionKey, fmt.Sprintf("%d", user.Id), time.Hour*24*7).Err()
|
||||
if err != nil {
|
||||
fmt.Printf("创建 session 失败: %v\n", err)
|
||||
return
|
||||
}
|
||||
fmt.Printf("✅ Session 创建成功: %s = %d\n", sessionKey, user.Id)
|
||||
|
||||
// 7. 清除旧的短链接缓存,确保重新生成
|
||||
cacheKey := "cache:invite:short_link:" + user.ReferCode
|
||||
rdb.Del(ctx, cacheKey)
|
||||
fmt.Printf("✅ 已清除旧缓存: %s\n", cacheKey)
|
||||
|
||||
// 7. 输出测试信息
|
||||
fmt.Println("\n====================================")
|
||||
fmt.Println("测试 Token 生成成功!")
|
||||
fmt.Println("====================================")
|
||||
fmt.Printf("\n用户 ID: %d\n", user.Id)
|
||||
fmt.Printf("邀请码: %s\n", user.ReferCode)
|
||||
fmt.Printf("Session ID: %s\n", sessionId)
|
||||
fmt.Printf("过期时间: %s\n", expireAt.Format("2006-01-02 15:04:05"))
|
||||
fmt.Println("\n====== Token ======")
|
||||
fmt.Println(tokenString)
|
||||
fmt.Println("\n====== 测试命令 ======")
|
||||
fmt.Printf("curl -s 'http://127.0.0.1:8080/v1/public/user/info' \\\n")
|
||||
fmt.Printf(" -H 'authorization: %s' | jq '.'\n", tokenString)
|
||||
}
|
||||
38
cmd/fix_migration/main.go
Normal file
38
cmd/fix_migration/main.go
Normal file
@ -0,0 +1,38 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"flag"
|
||||
"log"
|
||||
|
||||
"github.com/perfect-panel/server/initialize/migrate"
|
||||
"github.com/perfect-panel/server/internal/config"
|
||||
"github.com/perfect-panel/server/pkg/conf"
|
||||
"github.com/perfect-panel/server/pkg/orm"
|
||||
)
|
||||
|
||||
var configFile string
|
||||
|
||||
func init() {
|
||||
flag.StringVar(&configFile, "config", "configs/ppanel.yaml", "config file path")
|
||||
}
|
||||
|
||||
func main() {
|
||||
flag.Parse()
|
||||
|
||||
var c config.Config
|
||||
conf.MustLoad(configFile, &c)
|
||||
|
||||
// Construct DSN
|
||||
m := orm.Mysql{Config: c.MySQL}
|
||||
dsn := m.Dsn()
|
||||
|
||||
log.Println("Connecting to database...")
|
||||
client := migrate.Migrate(dsn)
|
||||
|
||||
log.Println("Forcing version 2117...")
|
||||
if err := client.Force(2117); err != nil {
|
||||
log.Fatalf("Failed to force version: %v", err)
|
||||
}
|
||||
|
||||
log.Println("Force version 2117 success")
|
||||
}
|
||||
59
cmd/test_kutt/main.go
Normal file
59
cmd/test_kutt/main.go
Normal file
@ -0,0 +1,59 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"log"
|
||||
|
||||
"github.com/perfect-panel/server/pkg/kutt"
|
||||
)
|
||||
|
||||
// 测试 Kutt 短链接 API
|
||||
// 运行方式: go run cmd/test_kutt/main.go
|
||||
func main() {
|
||||
// Kutt 配置 - 请根据实际情况修改
|
||||
apiURL := "https://getsapp.net/api/v2"
|
||||
apiKey := "6JSjGOzLF1NCYQXuUGZjvrkqU0Jy3upDkYX87DPO"
|
||||
targetURL := "https://gethifast.net"
|
||||
|
||||
// 测试邀请码
|
||||
testInviteCode := "TEST123"
|
||||
|
||||
fmt.Println("====== Kutt 短链接 API 测试 ======")
|
||||
fmt.Printf("API URL: %s\n", apiURL)
|
||||
fmt.Printf("Target URL: %s\n", targetURL)
|
||||
fmt.Printf("测试邀请码: %s\n", testInviteCode)
|
||||
fmt.Println("----------------------------------")
|
||||
|
||||
// 创建客户端
|
||||
client := kutt.NewClient(apiURL, apiKey)
|
||||
ctx := context.Background()
|
||||
|
||||
// 测试 1: 使用便捷方法创建邀请短链接
|
||||
fmt.Println("\n[测试 1] 创建邀请短链接...")
|
||||
shortLink, err := client.CreateInviteShortLink(ctx, targetURL, testInviteCode, "getsapp.net")
|
||||
if err != nil {
|
||||
log.Printf("❌ 创建短链接失败: %v\n", err)
|
||||
} else {
|
||||
fmt.Printf("✅ 短链接创建成功: %s\n", shortLink)
|
||||
}
|
||||
|
||||
// 测试 2: 使用完整参数创建短链接
|
||||
fmt.Println("\n[测试 2] 使用完整参数创建短链接...")
|
||||
req := &kutt.CreateLinkRequest{
|
||||
Target: fmt.Sprintf("%s/register?invite=%s", targetURL, "CUSTOM456"),
|
||||
Description: "Test custom short link",
|
||||
Reuse: true,
|
||||
}
|
||||
link, err := client.CreateShortLink(ctx, req)
|
||||
if err != nil {
|
||||
log.Printf("❌ 创建短链接失败: %v\n", err)
|
||||
} else {
|
||||
// 打印详细返回信息
|
||||
linkJSON, _ := json.MarshalIndent(link, "", " ")
|
||||
fmt.Printf("✅ 短链接创建成功:\n%s\n", string(linkJSON))
|
||||
}
|
||||
|
||||
fmt.Println("\n====== 测试完成 ======")
|
||||
}
|
||||
219
cmd/test_session_reuse/main.go
Normal file
219
cmd/test_session_reuse/main.go
Normal file
@ -0,0 +1,219 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"log"
|
||||
"time"
|
||||
|
||||
"github.com/redis/go-redis/v9"
|
||||
)
|
||||
|
||||
/*
|
||||
* 设备复用 Session 测试工具
|
||||
*
|
||||
* 这个测试工具用于验证设备复用 session 的逻辑是否正确
|
||||
* 模拟场景:
|
||||
* 1. 设备A第一次登录 - 创建新 session
|
||||
* 2. 设备A再次登录 - 应该复用旧 session
|
||||
* 3. 设备A的session过期 - 应该创建新 session
|
||||
*/
|
||||
|
||||
const (
|
||||
SessionIdKey = "auth:session_id"
|
||||
DeviceCacheKeyKey = "auth:device"
|
||||
UserSessionsKeyPrefix = "auth:user_sessions:"
|
||||
)
|
||||
|
||||
func main() {
|
||||
// 连接 Redis
|
||||
rds := redis.NewClient(&redis.Options{
|
||||
Addr: "localhost:6379", // 修改为你的 Redis 地址
|
||||
Password: "", // 修改为你的 Redis 密码
|
||||
DB: 0,
|
||||
})
|
||||
|
||||
ctx := context.Background()
|
||||
|
||||
// 检查 Redis 连接
|
||||
if err := rds.Ping(ctx).Err(); err != nil {
|
||||
log.Fatalf("❌ 连接 Redis 失败: %v", err)
|
||||
}
|
||||
fmt.Println("✅ Redis 连接成功")
|
||||
|
||||
// 测试参数
|
||||
testDeviceID := "test-device-12345"
|
||||
testUserID := int64(9999)
|
||||
sessionExpire := 10 * time.Second // 测试用,设置较短的过期时间
|
||||
|
||||
fmt.Println("\n========== 开始测试 ==========")
|
||||
|
||||
// 清理测试数据
|
||||
cleanup(ctx, rds, testDeviceID, testUserID)
|
||||
|
||||
// 测试1: 第一次登录 - 应该创建新 session
|
||||
fmt.Println("\n📋 测试1: 第一次登录")
|
||||
sessionId1, isReuse1 := simulateLogin(ctx, rds, testDeviceID, testUserID, sessionExpire)
|
||||
if isReuse1 {
|
||||
fmt.Println("❌ 测试1失败: 第一次登录不应该复用 session")
|
||||
} else {
|
||||
fmt.Printf("✅ 测试1通过: 创建了新 session: %s\n", sessionId1)
|
||||
}
|
||||
|
||||
// 检查 session 数量
|
||||
count1 := getSessionCount(ctx, rds, testUserID)
|
||||
fmt.Printf(" 当前 session 数量: %d\n", count1)
|
||||
|
||||
// 测试2: 再次登录(session 有效)- 应该复用 session
|
||||
fmt.Println("\n📋 测试2: 再次登录(session 有效)")
|
||||
sessionId2, isReuse2 := simulateLogin(ctx, rds, testDeviceID, testUserID, sessionExpire)
|
||||
if !isReuse2 {
|
||||
fmt.Println("❌ 测试2失败: 应该复用旧 session")
|
||||
} else if sessionId1 != sessionId2 {
|
||||
fmt.Printf("❌ 测试2失败: sessionId 不一致 (%s vs %s)\n", sessionId1, sessionId2)
|
||||
} else {
|
||||
fmt.Printf("✅ 测试2通过: 复用了旧 session: %s\n", sessionId2)
|
||||
}
|
||||
|
||||
// 检查 session 数量 - 应该仍然是1
|
||||
count2 := getSessionCount(ctx, rds, testUserID)
|
||||
fmt.Printf(" 当前 session 数量: %d (预期: 1)\n", count2)
|
||||
if count2 != 1 {
|
||||
fmt.Println("❌ session 数量不正确!")
|
||||
}
|
||||
|
||||
// 测试3: 模拟多设备登录
|
||||
fmt.Println("\n📋 测试3: 多设备登录")
|
||||
testDeviceID2 := "test-device-67890"
|
||||
sessionId3, isReuse3 := simulateLogin(ctx, rds, testDeviceID2, testUserID, sessionExpire)
|
||||
if isReuse3 {
|
||||
fmt.Println("❌ 测试3失败: 新设备不应该复用 session")
|
||||
} else {
|
||||
fmt.Printf("✅ 测试3通过: 设备B创建了新 session: %s\n", sessionId3)
|
||||
}
|
||||
|
||||
// 检查 session 数量 - 应该是2
|
||||
count3 := getSessionCount(ctx, rds, testUserID)
|
||||
fmt.Printf(" 当前 session 数量: %d (预期: 2)\n", count3)
|
||||
|
||||
// 测试4: 设备A再次登录 - 仍然应该复用
|
||||
fmt.Println("\n📋 测试4: 设备A再次登录")
|
||||
sessionId4, isReuse4 := simulateLogin(ctx, rds, testDeviceID, testUserID, sessionExpire)
|
||||
if !isReuse4 {
|
||||
fmt.Println("❌ 测试4失败: 应该复用设备A的旧 session")
|
||||
} else if sessionId1 != sessionId4 {
|
||||
fmt.Printf("❌ 测试4失败: sessionId 不一致 (%s vs %s)\n", sessionId1, sessionId4)
|
||||
} else {
|
||||
fmt.Printf("✅ 测试4通过: 设备A复用了旧 session: %s\n", sessionId4)
|
||||
}
|
||||
|
||||
// 检查 session 数量 - 仍然应该是2
|
||||
count4 := getSessionCount(ctx, rds, testUserID)
|
||||
fmt.Printf(" 当前 session 数量: %d (预期: 2)\n", count4)
|
||||
|
||||
// 测试5: 等待 session 过期后再登录
|
||||
fmt.Println("\n📋 测试5: 等待 session 过期后再登录")
|
||||
fmt.Printf(" 等待 %v ...\n", sessionExpire+time.Second)
|
||||
time.Sleep(sessionExpire + time.Second)
|
||||
|
||||
sessionId5, isReuse5 := simulateLogin(ctx, rds, testDeviceID, testUserID, sessionExpire)
|
||||
if isReuse5 {
|
||||
fmt.Println("❌ 测试5失败: session 过期后不应该复用")
|
||||
} else {
|
||||
fmt.Printf("✅ 测试5通过: 创建了新 session: %s\n", sessionId5)
|
||||
}
|
||||
|
||||
// 测试6: 设备转移场景(关键安全测试)
|
||||
fmt.Println("\n📋 测试6: 设备转移场景(用户A的设备被用户B使用)")
|
||||
testDeviceID3 := "test-device-transfer"
|
||||
testUserA := int64(1001)
|
||||
testUserB := int64(1002)
|
||||
|
||||
// 用户A用设备登录
|
||||
cleanup(ctx, rds, testDeviceID3, testUserA)
|
||||
cleanup(ctx, rds, testDeviceID3, testUserB)
|
||||
sessionA, _ := simulateLogin(ctx, rds, testDeviceID3, testUserA, sessionExpire)
|
||||
fmt.Printf(" 用户A登录,session: %s\n", sessionA)
|
||||
|
||||
// 用户B用同一设备登录(设备转移场景)
|
||||
sessionB, isReuseB := simulateLogin(ctx, rds, testDeviceID3, testUserB, sessionExpire)
|
||||
if isReuseB {
|
||||
fmt.Println("❌ 测试6失败: 用户B不应该复用用户A的session!(安全漏洞)")
|
||||
} else {
|
||||
fmt.Printf("✅ 测试6通过: 用户B创建了新 session: %s\n", sessionB)
|
||||
}
|
||||
|
||||
// 验证用户A和B的session不同
|
||||
if sessionA == sessionB {
|
||||
fmt.Println("❌ 安全问题: 两个用户使用了相同的session!")
|
||||
} else {
|
||||
fmt.Println("✅ 安全验证通过: 两个用户使用不同的session")
|
||||
}
|
||||
cleanup(ctx, rds, testDeviceID, testUserID)
|
||||
cleanup(ctx, rds, testDeviceID2, testUserID)
|
||||
|
||||
fmt.Println("\n========== 测试完成 ==========")
|
||||
}
|
||||
|
||||
// simulateLogin 模拟登录逻辑
|
||||
// 返回: sessionId, isReuse (是否复用了旧 session)
|
||||
func simulateLogin(ctx context.Context, rds *redis.Client, deviceID string, userID int64, expire time.Duration) (string, bool) {
|
||||
var sessionId string
|
||||
var reuseSession bool
|
||||
|
||||
deviceCacheKey := fmt.Sprintf("%v:%v", DeviceCacheKeyKey, deviceID)
|
||||
|
||||
// 检查设备是否有旧的有效 session
|
||||
if oldSid, getErr := rds.Get(ctx, deviceCacheKey).Result(); getErr == nil && oldSid != "" {
|
||||
// 检查旧 session 是否仍然有效 AND 属于当前用户
|
||||
oldSessionKey := fmt.Sprintf("%v:%v", SessionIdKey, oldSid)
|
||||
if uidStr, existErr := rds.Get(ctx, oldSessionKey).Result(); existErr == nil && uidStr != "" {
|
||||
// 验证 session 属于当前用户 (防止设备转移后复用其他用户的session)
|
||||
if uidStr == fmt.Sprintf("%d", userID) {
|
||||
sessionId = oldSid
|
||||
reuseSession = true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if !reuseSession {
|
||||
// 生成新的 sessionId
|
||||
sessionId = fmt.Sprintf("session-%d-%d", userID, time.Now().UnixNano())
|
||||
|
||||
// 添加到用户的 session 集合
|
||||
sessionsKey := fmt.Sprintf("%s%v", UserSessionsKeyPrefix, userID)
|
||||
rds.ZAdd(ctx, sessionsKey, redis.Z{Score: float64(time.Now().Unix()), Member: sessionId})
|
||||
rds.Expire(ctx, sessionsKey, expire)
|
||||
}
|
||||
|
||||
// 存储/刷新 session
|
||||
sessionIdCacheKey := fmt.Sprintf("%v:%v", SessionIdKey, sessionId)
|
||||
rds.Set(ctx, sessionIdCacheKey, userID, expire)
|
||||
|
||||
// 存储/刷新设备到session的映射
|
||||
rds.Set(ctx, deviceCacheKey, sessionId, expire)
|
||||
|
||||
return sessionId, reuseSession
|
||||
}
|
||||
|
||||
// getSessionCount 获取用户的 session 数量
|
||||
func getSessionCount(ctx context.Context, rds *redis.Client, userID int64) int64 {
|
||||
sessionsKey := fmt.Sprintf("%s%v", UserSessionsKeyPrefix, userID)
|
||||
count, _ := rds.ZCard(ctx, sessionsKey).Result()
|
||||
return count
|
||||
}
|
||||
|
||||
// cleanup 清理测试数据
|
||||
func cleanup(ctx context.Context, rds *redis.Client, deviceID string, userID int64) {
|
||||
deviceCacheKey := fmt.Sprintf("%v:%v", DeviceCacheKeyKey, deviceID)
|
||||
sessionsKey := fmt.Sprintf("%s%v", UserSessionsKeyPrefix, userID)
|
||||
|
||||
// 获取设备的 sessionId
|
||||
if sid, err := rds.Get(ctx, deviceCacheKey).Result(); err == nil {
|
||||
sessionIdCacheKey := fmt.Sprintf("%v:%v", SessionIdKey, sid)
|
||||
rds.Del(ctx, sessionIdCacheKey)
|
||||
}
|
||||
|
||||
rds.Del(ctx, deviceCacheKey)
|
||||
rds.Del(ctx, sessionsKey)
|
||||
}
|
||||
124
cmd/update_custom_data/main.go
Normal file
124
cmd/update_custom_data/main.go
Normal file
@ -0,0 +1,124 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"os"
|
||||
|
||||
"gopkg.in/yaml.v3"
|
||||
"gorm.io/driver/mysql"
|
||||
"gorm.io/gorm"
|
||||
)
|
||||
|
||||
// 配置结构
|
||||
type AppConfig struct {
|
||||
MySQL struct {
|
||||
Addr string `yaml:"Addr"`
|
||||
Dbname string `yaml:"Dbname"`
|
||||
Username string `yaml:"Username"`
|
||||
Password string `yaml:"Password"`
|
||||
Config string `yaml:"Config"`
|
||||
} `yaml:"MySQL"`
|
||||
}
|
||||
|
||||
type System struct {
|
||||
Key string `gorm:"column:key;primaryKey"`
|
||||
Value string `gorm:"column:value"`
|
||||
}
|
||||
|
||||
func main() {
|
||||
fmt.Println("====== 更新 CustomData ======")
|
||||
|
||||
// 1. 读取配置
|
||||
cfgData, err := os.ReadFile("configs/ppanel.yaml")
|
||||
if err != nil {
|
||||
fmt.Printf("读取配置失败: %v\n", err)
|
||||
return
|
||||
}
|
||||
|
||||
var cfg AppConfig
|
||||
if err := yaml.Unmarshal(cfgData, &cfg); err != nil {
|
||||
fmt.Printf("解析配置失败: %v\n", err)
|
||||
return
|
||||
}
|
||||
|
||||
// 2. 连接数据库
|
||||
dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?%s",
|
||||
cfg.MySQL.Username, cfg.MySQL.Password, cfg.MySQL.Addr, cfg.MySQL.Dbname, cfg.MySQL.Config)
|
||||
db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
|
||||
if err != nil {
|
||||
fmt.Printf("数据库连接失败: %v\n", err)
|
||||
return
|
||||
}
|
||||
fmt.Println("✅ 数据库连接成功")
|
||||
|
||||
// 3. 查找 SiteConfig (在 system 表中,key 通常是 'SiteConfig')
|
||||
// 注意:system 表结构可能由 key, value 组成
|
||||
// 我们需要查找包含 CustomData 的那个配置
|
||||
|
||||
// 先尝试直接查找 SiteConfig
|
||||
var sysConfig System
|
||||
// 根据之前的查看,SiteConfig 可能不是直接存 JSON,而是字段。
|
||||
// 但用户之前 curl 看到的是 custom_data 字段。
|
||||
// 让我们查找包含 "shareUrl" 的记录来定位
|
||||
err = db.Table("system").Where("value LIKE ?", "%shareUrl%").First(&sysConfig).Error
|
||||
if err != nil {
|
||||
fmt.Printf("未找到包含 shareUrl 的配置: %v\n", err)
|
||||
// 尝试列出所有 key
|
||||
var keys []string
|
||||
db.Table("system").Pluck("key", &keys)
|
||||
fmt.Printf("现有 Keys: %v\n", keys)
|
||||
return
|
||||
}
|
||||
|
||||
fmt.Printf("找到配置 Key: %s\n", sysConfig.Key)
|
||||
fmt.Printf("原始内容: %s\n", sysConfig.Value)
|
||||
|
||||
// 4. 解析并修改
|
||||
// System Value 可能是 SiteConfig 的 JSON,或者 CustomData 只是其中一个字段
|
||||
// 假设 Value 是 SiteConfig 结构体的 JSON
|
||||
var siteConfigMap map[string]interface{}
|
||||
if err := json.Unmarshal([]byte(sysConfig.Value), &siteConfigMap); err != nil {
|
||||
fmt.Printf("解析 Config Value 失败: %v\n", err)
|
||||
return
|
||||
}
|
||||
|
||||
// 检查是否有 CustomData
|
||||
if customDataStr, ok := siteConfigMap["CustomData"].(string); ok {
|
||||
fmt.Println("找到 CustomData 字段,正在更新...")
|
||||
|
||||
var customDataMap map[string]interface{}
|
||||
if err := json.Unmarshal([]byte(customDataStr), &customDataMap); err != nil {
|
||||
fmt.Printf("解析 CustomData 失败: %v\n", err)
|
||||
return
|
||||
}
|
||||
|
||||
// 添加 domain
|
||||
customDataMap["domain"] = "getsapp.net"
|
||||
|
||||
// 重新序列化 CustomData
|
||||
newCustomDataBytes, _ := json.Marshal(customDataMap)
|
||||
siteConfigMap["CustomData"] = string(newCustomDataBytes)
|
||||
|
||||
fmt.Printf("新的 CustomData: %s\n", string(newCustomDataBytes))
|
||||
|
||||
} else {
|
||||
// 也许 Value 本身就是 CustomData? (不太可能,根据之前的 grep 结果)
|
||||
// 或者 Key 是 'custom_data'?
|
||||
fmt.Println("未在配置中找到 CustomData 字段,尝试直接解析为 CustomData...")
|
||||
// 尝试直接添加 domain 看是否合理
|
||||
siteConfigMap["domain"] = "getsapp.net"
|
||||
}
|
||||
|
||||
// 5. 保存回数据库
|
||||
newConfigBytes, _ := json.Marshal(siteConfigMap)
|
||||
// fmt.Printf("更新后的配置 Value: %s\n", string(newConfigBytes))
|
||||
|
||||
err = db.Table("system").Where("`key` = ?", sysConfig.Key).Update("value", string(newConfigBytes)).Error
|
||||
if err != nil {
|
||||
fmt.Printf("更新数据库失败: %v\n", err)
|
||||
return
|
||||
}
|
||||
|
||||
fmt.Println("✅ 数据库更新成功!")
|
||||
}
|
||||
@ -23,10 +23,10 @@ Logger:
|
||||
Rotation: daily
|
||||
FileTimeFormat: 2025-01-01T00:00:00.000Z00:00
|
||||
MySQL:
|
||||
Addr: 154.12.35.103:3306
|
||||
Dbname: ppanel
|
||||
Addr: 127.0.0.1:3306
|
||||
Dbname: dev_ppanel
|
||||
Username: root
|
||||
Password: jpcV41ppanel
|
||||
Password: rootpassword
|
||||
Config: charset=utf8mb4&parseTime=true&loc=Asia%2FShanghai
|
||||
MaxIdleConns: 10
|
||||
MaxOpenConns: 10
|
||||
@ -54,3 +54,10 @@ Telegram:
|
||||
Site:
|
||||
Host: api.airoport.co
|
||||
SiteName: HiFastVPN
|
||||
|
||||
Kutt:
|
||||
Enable: true
|
||||
ApiURL: "https://getsapp.net/api/v2"
|
||||
ApiKey: "6JSjGOzLF1NCYQXuUGZjvrkqU0Jy3upDkYX87DPO"
|
||||
TargetURL: ""
|
||||
Domain: "getsapp.net"
|
||||
@ -1,41 +0,0 @@
|
||||
# 设备移出和邀请码优化 - 验收报告
|
||||
|
||||
## 修复内容回顾
|
||||
|
||||
### 1. 设备移出后未自动退出
|
||||
- **修复点 1**:在 `bindEmailWithVerificationLogic.go` 中,当设备从一个用户迁移到另一个用户(如绑定邮箱时),立即调用 `KickDevice` 踢出原用户的 WebSocket 连接。
|
||||
- **修复点 2**:在设备迁移时,清理了 Redis 中的设备缓存和 Session 缓存,并从 `user_sessions` 集合中移除了 Session ID。
|
||||
- **修复点 3**:在 `unbindDeviceLogic.go` 中,解绑设备时补充了 `user_sessions` 集合的清理逻辑,确保 Session 被完全移除。
|
||||
|
||||
### 2. 邀请码错误提示不友好
|
||||
- **修复点**:在 `bindInviteCodeLogic.go` 中,捕获 `gorm.ErrRecordNotFound` 错误,并返回错误码 `20009` (InviteCodeError) 和提示 "无邀请码"。
|
||||
|
||||
---
|
||||
|
||||
## 验证结果
|
||||
|
||||
### 自动化验证
|
||||
- [x] 代码编译通过 (`go build ./...`)
|
||||
- [x] 静态检查通过
|
||||
|
||||
### 场景验证(逻辑推演)
|
||||
|
||||
**场景 1:设备B绑定邮箱后被移除**
|
||||
1. 设备B绑定邮箱,执行迁移逻辑。
|
||||
2. `KickDevice(originalUserId, deviceIdentifier)` 被调用 -> 设备B的 WebSocket 连接断开。
|
||||
3. Redis 中 `device:identifier` 和 `session:id` 被删除 -> Token 失效。
|
||||
4. 用户在设备A上操作移除设备B -> `unbindDeviceLogic` 执行 -> 再次尝试踢出和清理(防御性)。
|
||||
5. **结果**:设备B立即离线且无法继续使用。
|
||||
|
||||
**场景 2:输入错误邀请码**
|
||||
1. 调用绑定接口, `FindOneByReferCode` 返回 `RecordNotFound`。
|
||||
2. 逻辑捕获错误,返回 `InviteCodeError`。
|
||||
3. **结果**:前端收到 20009 错误码和 "无邀请码" 提示。
|
||||
|
||||
---
|
||||
|
||||
## 遗留问题 / 注意事项
|
||||
- 无
|
||||
|
||||
## 结论
|
||||
修复已完成,符合预期。
|
||||
@ -1,160 +0,0 @@
|
||||
# 设备管理系统 Bug 分析 - 最终确认版
|
||||
|
||||
## 场景还原
|
||||
|
||||
### 用户操作流程
|
||||
|
||||
1. **设备A** 最初通过设备登录(DeviceLogin),系统自动创建用户1 + 设备A记录
|
||||
2. **设备B** 最初也通过设备登录,系统自动创建用户2 + 设备B记录
|
||||
3. **设备A** 绑定邮箱 xxx@example.com,用户1变为"邮箱+设备"用户
|
||||
4. **设备B** 绑定**同一个邮箱** xxx@example.com
|
||||
- 系统发现邮箱已存在,执行设备转移
|
||||
- 设备B 从用户2迁移到用户1
|
||||
- 用户2被删除
|
||||
- 现在用户1拥有:设备A + 设备B + 邮箱认证
|
||||
|
||||
5. **在设备A上操作**,从设备列表移除设备B
|
||||
6. **问题**:设备B没有被踢下线,仍然能使用
|
||||
|
||||
---
|
||||
|
||||
## 数据流分析
|
||||
|
||||
### 绑定邮箱后的状态(第4步后)
|
||||
|
||||
```
|
||||
User 表:
|
||||
┌─────┬───────────────┐
|
||||
│ Id │ 用户1 │
|
||||
└─────┴───────────────┘
|
||||
|
||||
user_device 表:
|
||||
┌─────────────┬───────────┐
|
||||
│ Identifier │ UserId │
|
||||
├─────────────┼───────────┤
|
||||
│ device-a │ 用户1 │
|
||||
│ device-b │ 用户1 │ <- 设备B迁移到用户1
|
||||
└─────────────┴───────────┘
|
||||
|
||||
user_auth_methods 表:
|
||||
┌────────────┬────────────────┬───────────┐
|
||||
│ AuthType │ AuthIdentifier │ UserId │
|
||||
├────────────┼────────────────┼───────────┤
|
||||
│ device │ device-a │ 用户1 │
|
||||
│ device │ device-b │ 用户1 │
|
||||
│ email │ xxx@email.com │ 用户1 │
|
||||
└────────────┴────────────────┴───────────┘
|
||||
|
||||
DeviceManager (内存 WebSocket 连接):
|
||||
┌───────────────────────────────────────────────────┐
|
||||
│ userDevices sync.Map │
|
||||
├───────────────────────────────────────────────────┤
|
||||
│ 用户1 -> [Device{DeviceID="device-a", ...}] │
|
||||
│ 用户2 -> [Device{DeviceID="device-b", ...}] ❌ │ <- 问题!设备B的连接仍在用户2名下
|
||||
└───────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### 问题根源
|
||||
|
||||
**设备B绑定邮箱时**(`bindEmailWithVerificationLogic.go`):
|
||||
- ✅ 数据库:设备B的 `UserId` 被更新为用户1
|
||||
- ❌ 内存:`DeviceManager` 中设备B的 WebSocket 连接仍然在**用户2**名下
|
||||
- ❌ 缓存:`device:device-b` -> 旧的 sessionId(可能关联用户2)
|
||||
|
||||
**解绑设备B时**(`unbindDeviceLogic.go`):
|
||||
```go
|
||||
// 第 48 行:验证设备属于当前用户
|
||||
if device.UserId != u.Id { // device.UserId=用户1, u.Id=用户1, 验证通过
|
||||
return errors.Wrapf(...)
|
||||
}
|
||||
|
||||
// 第 123 行:踢出设备
|
||||
l.svcCtx.DeviceManager.KickDevice(u.Id, identifier)
|
||||
// KickDevice(用户1, "device-b")
|
||||
```
|
||||
|
||||
**KickDevice 执行时**:
|
||||
```go
|
||||
func (dm *DeviceManager) KickDevice(userID int64, deviceID string) {
|
||||
val, ok := dm.userDevices.Load(userID) // 查找用户1的设备列表
|
||||
// 用户1的设备列表只有 device-a
|
||||
// 找不到 device-b!因为 device-b 的连接还在用户2名下
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 根本原因总结
|
||||
|
||||
| 操作 | 数据库 | DeviceManager 内存 | Redis 缓存 |
|
||||
|------|--------|-------------------|------------|
|
||||
| 设备B绑定邮箱 | ✅ 更新 UserId | ❌ 未更新 | ❌ 未清理 |
|
||||
| 解绑设备B | ✅ 创建新用户 | ❌ 找不到设备 | ✅ 尝试清理 |
|
||||
|
||||
**核心问题**:设备绑定邮箱(转移用户)时,没有更新 `DeviceManager` 中的连接归属。
|
||||
|
||||
---
|
||||
|
||||
## 修复方案
|
||||
|
||||
### 方案1:在绑定邮箱时踢出旧连接(推荐)
|
||||
|
||||
在 `bindEmailWithVerificationLogic.go` 迁移设备后,踢出设备的旧连接:
|
||||
|
||||
```go
|
||||
// 迁移设备到邮箱用户后
|
||||
for _, device := range devices {
|
||||
// 更新设备归属
|
||||
device.UserId = emailUserId
|
||||
err = l.svcCtx.UserModel.UpdateDevice(l.ctx, device)
|
||||
|
||||
// 新增:踢出旧连接(使用原用户ID)
|
||||
l.svcCtx.DeviceManager.KickDevice(u.Id, device.Identifier)
|
||||
}
|
||||
```
|
||||
|
||||
### 方案2:在解绑时遍历所有用户查找设备
|
||||
|
||||
修改 `KickDevice` 或 `unbindDeviceLogic` 逻辑,不依赖用户ID查找设备。
|
||||
|
||||
### 方案3:清理 Redis 缓存使旧 Token 失效
|
||||
|
||||
确保设备转移后,旧的 session 和 device 缓存被清理:
|
||||
|
||||
```go
|
||||
deviceCacheKey := fmt.Sprintf("%v:%v", config.DeviceCacheKeyKey, device.Identifier)
|
||||
if sessionId, _ := l.svcCtx.Redis.Get(ctx, deviceCacheKey).Result(); sessionId != "" {
|
||||
sessionIdCacheKey := fmt.Sprintf("%v:%v", config.SessionIdKey, sessionId)
|
||||
l.svcCtx.Redis.Del(ctx, deviceCacheKey, sessionIdCacheKey)
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 推荐修复策略
|
||||
|
||||
**双管齐下**:
|
||||
|
||||
1. **修复 `bindEmailWithVerificationLogic.go`**:
|
||||
- 设备转移后立即踢出旧连接
|
||||
- 清理旧用户的缓存
|
||||
|
||||
2. **修复 `unbindDeviceLogic.go`**(防御性编程):
|
||||
- 补充 `user_sessions` 清理逻辑(参考 `deleteUserDeviceLogic.go`)
|
||||
|
||||
---
|
||||
|
||||
## 涉及文件
|
||||
|
||||
| 文件 | 修改内容 |
|
||||
|------|----------|
|
||||
| `internal/logic/public/user/bindEmailWithVerificationLogic.go` | 设备转移后踢出旧连接 |
|
||||
| `internal/logic/public/user/unbindDeviceLogic.go` | 补充 user_sessions 清理 |
|
||||
|
||||
---
|
||||
|
||||
## 验收标准
|
||||
|
||||
1. 设备B绑定邮箱后,设备B的旧连接被踢出
|
||||
2. 从设备A解绑设备B后,设备B立即被踢下线
|
||||
3. 设备B的 Token 失效,无法继续调用 API
|
||||
@ -1,117 +0,0 @@
|
||||
# 设备移出和邀请码优化 - 共识文档(更新版)
|
||||
|
||||
## 需求概述
|
||||
|
||||
修复两个 Bug:
|
||||
1. **Bug 1**:设备B绑定邮箱后被从设备A移除,设备B没有被踢下线
|
||||
2. **Bug 2**:输入不存在的邀请码时,提示信息不友好
|
||||
|
||||
---
|
||||
|
||||
## Bug 1:设备移出后未自动退出
|
||||
|
||||
### 根本原因
|
||||
|
||||
设备B绑定邮箱(迁移到邮箱用户)时:
|
||||
- ✅ 数据库更新了设备的 `UserId`
|
||||
- ❌ `DeviceManager` 内存中设备B的 WebSocket 连接仍在**原用户**名下
|
||||
- ❌ Redis 缓存中设备B的 session 未被清理
|
||||
|
||||
解绑设备B时,`KickDevice(用户1, "device-b")` 在用户1的设备列表中找不到 device-b(因为连接还在原用户名下)。
|
||||
|
||||
### 修复方案
|
||||
|
||||
**文件1:`bindEmailWithVerificationLogic.go`**
|
||||
|
||||
在设备迁移后,踢出旧连接并清理缓存:
|
||||
|
||||
```go
|
||||
// 第 139-158 行之后添加
|
||||
for _, device := range devices {
|
||||
device.UserId = emailUserId
|
||||
err = l.svcCtx.UserModel.UpdateDevice(l.ctx, device)
|
||||
// ...existing code...
|
||||
|
||||
// 新增:踢出旧连接并清理缓存
|
||||
l.svcCtx.DeviceManager.KickDevice(u.Id, device.Identifier)
|
||||
|
||||
deviceCacheKey := fmt.Sprintf("%v:%v", config.DeviceCacheKeyKey, device.Identifier)
|
||||
if sessionId, _ := l.svcCtx.Redis.Get(l.ctx, deviceCacheKey).Result(); sessionId != "" {
|
||||
sessionIdCacheKey := fmt.Sprintf("%v:%v", config.SessionIdKey, sessionId)
|
||||
_ = l.svcCtx.Redis.Del(l.ctx, deviceCacheKey).Err()
|
||||
_ = l.svcCtx.Redis.Del(l.ctx, sessionIdCacheKey).Err()
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**文件2:`unbindDeviceLogic.go`**(防御性修复)
|
||||
|
||||
补充 `user_sessions` 清理逻辑,与 `deleteUserDeviceLogic.go` 保持一致:
|
||||
|
||||
```go
|
||||
// 第 118-122 行,补充 sessionsKey 清理
|
||||
if sessionId, rerr := l.svcCtx.Redis.Get(ctx, deviceCacheKey).Result(); rerr == nil && sessionId != "" {
|
||||
_ = l.svcCtx.Redis.Del(ctx, deviceCacheKey).Err()
|
||||
sessionIdCacheKey := fmt.Sprintf("%v:%v", config.SessionIdKey, sessionId)
|
||||
_ = l.svcCtx.Redis.Del(ctx, sessionIdCacheKey).Err()
|
||||
// 新增:清理 user_sessions
|
||||
sessionsKey := fmt.Sprintf("%s%v", config.UserSessionsKeyPrefix, device.UserId)
|
||||
_ = l.svcCtx.Redis.ZRem(ctx, sessionsKey, sessionId).Err()
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Bug 2:邀请码错误提示不友好
|
||||
|
||||
### 根本原因
|
||||
|
||||
`bindInviteCodeLogic.go` 中未区分"邀请码不存在"和"数据库错误"。
|
||||
|
||||
### 修复方案
|
||||
|
||||
```go
|
||||
// 第 44-47 行修改为
|
||||
referrer, err := l.svcCtx.UserModel.FindOneByReferCode(l.ctx, req.InviteCode)
|
||||
if err != nil {
|
||||
if errors.Is(err, gorm.ErrRecordNotFound) {
|
||||
return errors.Wrapf(xerr.NewErrCodeMsg(xerr.InviteCodeError, "无邀请码"), "invite code not found")
|
||||
}
|
||||
logger.WithContext(l.ctx).Error(err)
|
||||
return errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "query referrer failed: %v", err.Error())
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 涉及文件汇总
|
||||
|
||||
| 文件 | 修改类型 | 优先级 |
|
||||
|------|----------|--------|
|
||||
| `internal/logic/public/user/bindEmailWithVerificationLogic.go` | 核心修复 | 高 |
|
||||
| `internal/logic/public/user/unbindDeviceLogic.go` | 防御性修复 | 中 |
|
||||
| `internal/logic/public/user/bindInviteCodeLogic.go` | Bug 修复 | 中 |
|
||||
|
||||
---
|
||||
|
||||
## 验收标准
|
||||
|
||||
### Bug 1 验收
|
||||
- [ ] 设备B绑定邮箱后,设备B的旧 Token 失效
|
||||
- [ ] 设备B绑定邮箱后,设备B的 WebSocket 连接被断开
|
||||
- [ ] 在设备A上移除设备B后,设备B立即被踢下线
|
||||
- [ ] 设备B无法继续使用旧 Token 调用 API
|
||||
|
||||
### Bug 2 验收
|
||||
- [ ] 输入不存在的邀请码时,返回错误码 20009
|
||||
- [ ] 错误消息显示"无邀请码"
|
||||
|
||||
---
|
||||
|
||||
## 验证计划
|
||||
|
||||
1. **编译验证**:`go build ./...`
|
||||
2. **手动测试**:
|
||||
- 设备B绑定邮箱 → 检查是否被踢下线
|
||||
- 设备A移除设备B → 检查设备B是否被踢下线
|
||||
- 输入无效邀请码 → 检查错误提示
|
||||
@ -1,96 +0,0 @@
|
||||
# 设备移出和邀请码优化 - 设计文档
|
||||
|
||||
## 整体架构
|
||||
|
||||
本次修复涉及两个独立的 bug,不需要修改架构,只需要修改具体的业务逻辑层代码。
|
||||
|
||||
### 组件关系图
|
||||
|
||||
```mermaid
|
||||
graph TB
|
||||
subgraph "用户请求"
|
||||
A[客户端] --> B[HTTP Handler]
|
||||
end
|
||||
|
||||
subgraph "业务逻辑层"
|
||||
B --> C[unbindDeviceLogic]
|
||||
B --> D[bindInviteCodeLogic]
|
||||
end
|
||||
|
||||
subgraph "服务层"
|
||||
C --> E[DeviceManager.KickDevice]
|
||||
D --> F[UserModel.FindOneByReferCode]
|
||||
end
|
||||
|
||||
subgraph "数据层"
|
||||
E --> G[WebSocket连接管理]
|
||||
F --> H[GORM/数据库]
|
||||
end
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 模块详细设计
|
||||
|
||||
### 模块1: UnbindDeviceLogic 修复
|
||||
|
||||
#### 当前数据流
|
||||
```
|
||||
1. 用户请求解绑设备
|
||||
2. 验证设备属于当前用户 (device.UserId == u.Id) ✅
|
||||
3. 事务中:创建新用户,迁移设备
|
||||
4. 调用 KickDevice(u.Id, identifier) ❌ <-- 用户ID错误
|
||||
```
|
||||
|
||||
#### 修复后数据流
|
||||
```
|
||||
1. 用户请求解绑设备
|
||||
2. 验证设备属于当前用户 ✅
|
||||
3. 保存原始用户ID: originalUserId := device.UserId ✅
|
||||
4. 事务中:创建新用户,迁移设备
|
||||
5. 调用 KickDevice(originalUserId, identifier) ✅ <-- 使用正确的用户ID
|
||||
```
|
||||
|
||||
#### 接口契约
|
||||
无变化,仅修改内部实现。
|
||||
|
||||
---
|
||||
|
||||
### 模块2: BindInviteCodeLogic 修复
|
||||
|
||||
#### 当前错误处理
|
||||
```go
|
||||
if err != nil {
|
||||
return xerr.DatabaseQueryError // 所有错误统一处理
|
||||
}
|
||||
```
|
||||
|
||||
#### 修复后错误处理
|
||||
```go
|
||||
if err != nil {
|
||||
if errors.Is(err, gorm.ErrRecordNotFound) {
|
||||
return xerr.InviteCodeError("无邀请码") // 记录不存在 → 友好提示
|
||||
}
|
||||
return xerr.DatabaseQueryError // 其他错误保持原样
|
||||
}
|
||||
```
|
||||
|
||||
#### 接口契约
|
||||
API 返回格式不变,但错误码从 `10001` 变为 `20009`(针对邀请码不存在的情况)。
|
||||
|
||||
---
|
||||
|
||||
## 异常处理策略
|
||||
|
||||
| 场景 | 错误码 | 错误消息 |
|
||||
|------|--------|----------|
|
||||
| 邀请码不存在 | 20009 | 无邀请码 |
|
||||
| 数据库查询错误 | 10001 | Database query error |
|
||||
| 绑定自己的邀请码 | 20009 | 不允许绑定自己 |
|
||||
|
||||
---
|
||||
|
||||
## 设计原则
|
||||
1. **最小改动原则**:只修改必要的代码,不重构现有逻辑
|
||||
2. **向后兼容**:不改变 API 接口定义
|
||||
3. **代码风格一致**:遵循项目现有的错误处理模式
|
||||
@ -1,20 +0,0 @@
|
||||
# 设备移出和邀请码优化 - 项目总结
|
||||
|
||||
## 项目概览
|
||||
本次任务修复了两个影响用户体验的 Bug:
|
||||
1. 设备绑定邮箱后,从设备列表移除时未自动退出。
|
||||
2. 绑定无效邀请码时,错误提示不友好。
|
||||
|
||||
## 关键变更
|
||||
1. **核心修复**:在设备归属转移(绑定邮箱)时,主动踢出原用户的 WebSocket 连接,防止“幽灵连接”存在。
|
||||
2. **安全增强**:在设备解绑和转移时,彻底清理 Redis 中的 Session 缓存(包括 `user_sessions` 集合)。
|
||||
3. **体验优化**:优化了邀请码验证的错误提示,明确告知用户“无邀请码”。
|
||||
|
||||
## 文件变更列表
|
||||
- `internal/logic/public/user/bindEmailWithVerificationLogic.go`
|
||||
- `internal/logic/public/user/unbindDeviceLogic.go`
|
||||
- `internal/logic/public/user/bindInviteCodeLogic.go`
|
||||
|
||||
## 后续建议
|
||||
- 建议在测试环境中重点测试多端登录和设备绑定的边界情况。
|
||||
- 关注 `DeviceManager` 的内存使用情况,确保大量的踢出操作不会造成锁竞争。
|
||||
@ -1,91 +0,0 @@
|
||||
# 设备移出和邀请码优化 - 任务清单
|
||||
|
||||
## 任务依赖图
|
||||
|
||||
```mermaid
|
||||
graph LR
|
||||
A[任务1: 修复设备踢出Bug] --> C[任务3: 编译验证]
|
||||
B[任务2: 修复邀请码提示Bug] --> C
|
||||
C --> D[任务4: 更新文档]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 原子任务列表
|
||||
|
||||
### 任务1: 修复设备解绑后未踢出的问题
|
||||
|
||||
**输入契约**:
|
||||
- 文件:`internal/logic/public/user/unbindDeviceLogic.go`
|
||||
- 当前代码行:第 123 行
|
||||
|
||||
**输出契约**:
|
||||
- 在事务执行前保存 `device.UserId`
|
||||
- 修改 `KickDevice` 调用,使用保存的原始用户ID
|
||||
|
||||
**实现约束**:
|
||||
- 不修改方法签名
|
||||
- 不影响事务逻辑
|
||||
|
||||
**验收标准**:
|
||||
- [x] 代码编译通过
|
||||
- [ ] 解绑设备后,被解绑设备收到踢出消息
|
||||
|
||||
**预估复杂度**:低
|
||||
|
||||
---
|
||||
|
||||
### 任务2: 修复邀请码错误提示不友好的问题
|
||||
|
||||
**输入契约**:
|
||||
- 文件:`internal/logic/public/user/bindInviteCodeLogic.go`
|
||||
- 当前代码行:第 44-47 行
|
||||
|
||||
**输出契约**:
|
||||
- 添加 `gorm.ErrRecordNotFound` 判断
|
||||
- 返回友好的错误消息 "无邀请码"
|
||||
- 使用 `xerr.InviteCodeError` 错误码
|
||||
|
||||
**实现约束**:
|
||||
- 保持与其他模块(如 `userRegisterLogic`)的错误处理风格一致
|
||||
- 需要添加 `gorm.io/gorm` 导入
|
||||
|
||||
**验收标准**:
|
||||
- [x] 代码编译通过
|
||||
- [ ] 输入不存在的邀请码时返回 "无邀请码" 提示
|
||||
|
||||
**预估复杂度**:低
|
||||
|
||||
---
|
||||
|
||||
### 任务3: 编译验证
|
||||
|
||||
**输入契约**:
|
||||
- 任务1和任务2已完成
|
||||
|
||||
**输出契约**:
|
||||
- 项目编译成功,无错误
|
||||
|
||||
**验收标准**:
|
||||
- [x] `go build ./...` 无报错
|
||||
|
||||
---
|
||||
|
||||
### 任务4: 更新说明文档
|
||||
|
||||
**输入契约**:
|
||||
- 任务3已完成
|
||||
|
||||
**输出契约**:
|
||||
- 更新 `说明文档.md` 记录本次修复
|
||||
|
||||
**验收标准**:
|
||||
- [x] 文档记录完整
|
||||
|
||||
---
|
||||
|
||||
## 执行顺序
|
||||
|
||||
1. ✅ 任务1 和 任务2 可并行执行(无依赖)
|
||||
2. ✅ 任务3 在任务1、2完成后执行
|
||||
3. ✅ 任务4 最后执行
|
||||
@ -1,2 +1,15 @@
|
||||
ALTER TABLE traffic_log ADD INDEX IF NOT EXISTS idx_timestamp (timestamp);
|
||||
SET @index_exists = (
|
||||
SELECT COUNT(1)
|
||||
FROM information_schema.STATISTICS
|
||||
WHERE TABLE_SCHEMA = DATABASE()
|
||||
AND TABLE_NAME = 'traffic_log'
|
||||
AND INDEX_NAME = 'idx_timestamp'
|
||||
);
|
||||
|
||||
SET @sql = IF(@index_exists = 0,
|
||||
'CREATE INDEX idx_timestamp ON traffic_log (timestamp)',
|
||||
'SELECT ''Index already exists'' AS message');
|
||||
|
||||
PREPARE stmt FROM @sql;
|
||||
EXECUTE stmt;
|
||||
DEALLOCATE PREPARE stmt;
|
||||
|
||||
18
initialize/migrate/database/20260123_add_app_version.up.sql
Normal file
18
initialize/migrate/database/20260123_add_app_version.up.sql
Normal file
@ -0,0 +1,18 @@
|
||||
CREATE TABLE IF NOT EXISTS `application_versions` (
|
||||
`id` bigint(20) NOT NULL AUTO_INCREMENT,
|
||||
`platform` varchar(50) NOT NULL COMMENT 'Platform',
|
||||
`version` varchar(50) NOT NULL COMMENT 'Version Number',
|
||||
`min_version` varchar(50) DEFAULT NULL COMMENT 'Minimum Force Update Version',
|
||||
`force_update` tinyint(1) NOT NULL DEFAULT '0' COMMENT 'Force Update',
|
||||
`url` varchar(255) NOT NULL COMMENT 'Download URL',
|
||||
`description` json DEFAULT NULL COMMENT 'Update Description',
|
||||
`is_default` tinyint(1) NOT NULL DEFAULT '0' COMMENT 'Is Default Version',
|
||||
`is_in_review` tinyint(1) NOT NULL DEFAULT '0' COMMENT 'Is In Review',
|
||||
`created_at` datetime(3) DEFAULT NULL COMMENT 'Create Time',
|
||||
`updated_at` datetime(3) DEFAULT NULL COMMENT 'Update Time',
|
||||
`deleted_at` datetime(3) DEFAULT NULL,
|
||||
PRIMARY KEY (`id`),
|
||||
KEY `idx_application_versions_deleted_at` (`deleted_at`),
|
||||
KEY `idx_platform` (`platform`)
|
||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='Application Version Management';
|
||||
|
||||
@ -28,6 +28,7 @@ type Config struct {
|
||||
Register RegisterConfig `yaml:"Register"`
|
||||
Subscribe SubscribeConfig `yaml:"Subscribe"`
|
||||
Invite InviteConfig `yaml:"Invite"`
|
||||
Kutt KuttConfig `yaml:"Kutt"`
|
||||
Telegram Telegram `yaml:"Telegram"`
|
||||
Log Log `yaml:"Log"`
|
||||
Trace trace.Config `yaml:"Trace"`
|
||||
@ -209,6 +210,15 @@ type InviteConfig struct {
|
||||
GiftDays int64 `yaml:"GiftDays" default:"0"`
|
||||
}
|
||||
|
||||
// KuttConfig Kutt 短链接服务配置
|
||||
type KuttConfig struct {
|
||||
Enable bool `yaml:"Enable" default:"false"` // 是否启用 Kutt 短链接
|
||||
ApiURL string `yaml:"ApiURL" default:""` // Kutt API 地址
|
||||
ApiKey string `yaml:"ApiKey" default:""` // Kutt API 密钥
|
||||
TargetURL string `yaml:"TargetURL" default:""` // 目标注册页面基础 URL
|
||||
Domain string `yaml:"Domain" default:""` // 短链接域名 (例如: getsapp.net)
|
||||
}
|
||||
|
||||
type Telegram struct {
|
||||
Enable bool `yaml:"Enable" default:"false"`
|
||||
BotID int64 `yaml:"BotID" default:""`
|
||||
|
||||
@ -0,0 +1,28 @@
|
||||
package application
|
||||
|
||||
import (
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/perfect-panel/server/internal/logic/admin/application"
|
||||
"github.com/perfect-panel/server/internal/svc"
|
||||
"github.com/perfect-panel/server/internal/types"
|
||||
"github.com/perfect-panel/server/pkg/result"
|
||||
)
|
||||
|
||||
func CreateAppVersionHandler(svcCtx *svc.ServiceContext) gin.HandlerFunc {
|
||||
return func(c *gin.Context) {
|
||||
var req types.CreateAppVersionRequest
|
||||
if err := c.ShouldBindJSON(&req); err != nil {
|
||||
result.ParamErrorResult(c, err)
|
||||
return
|
||||
}
|
||||
|
||||
if err := svcCtx.Validate(&req); err != nil {
|
||||
result.ParamErrorResult(c, err)
|
||||
return
|
||||
}
|
||||
|
||||
l := application.NewCreateAppVersionLogic(c.Request.Context(), svcCtx)
|
||||
resp, err := l.CreateAppVersion(&req)
|
||||
result.HttpResult(c, resp, err)
|
||||
}
|
||||
}
|
||||
@ -0,0 +1,28 @@
|
||||
package application
|
||||
|
||||
import (
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/perfect-panel/server/internal/logic/admin/application"
|
||||
"github.com/perfect-panel/server/internal/svc"
|
||||
"github.com/perfect-panel/server/internal/types"
|
||||
"github.com/perfect-panel/server/pkg/result"
|
||||
)
|
||||
|
||||
func DeleteAppVersionHandler(svcCtx *svc.ServiceContext) gin.HandlerFunc {
|
||||
return func(c *gin.Context) {
|
||||
var req types.DeleteAppVersionRequest
|
||||
if err := c.ShouldBindJSON(&req); err != nil {
|
||||
result.ParamErrorResult(c, err)
|
||||
return
|
||||
}
|
||||
|
||||
if err := svcCtx.Validate(&req); err != nil {
|
||||
result.ParamErrorResult(c, err)
|
||||
return
|
||||
}
|
||||
|
||||
l := application.NewDeleteAppVersionLogic(c.Request.Context(), svcCtx)
|
||||
err := l.DeleteAppVersion(&req)
|
||||
result.HttpResult(c, nil, err)
|
||||
}
|
||||
}
|
||||
@ -0,0 +1,29 @@
|
||||
package application
|
||||
|
||||
import (
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/perfect-panel/server/internal/logic/admin/application"
|
||||
"github.com/perfect-panel/server/internal/svc"
|
||||
"github.com/perfect-panel/server/internal/types"
|
||||
"github.com/perfect-panel/server/pkg/result"
|
||||
)
|
||||
|
||||
func GetAppVersionListHandler(svcCtx *svc.ServiceContext) gin.HandlerFunc {
|
||||
return func(c *gin.Context) {
|
||||
var req types.GetAppVersionListRequest
|
||||
if err := c.ShouldBindQuery(&req); err != nil {
|
||||
result.ParamErrorResult(c, err)
|
||||
return
|
||||
}
|
||||
|
||||
// Validation might be optional for GET if no required params
|
||||
if err := svcCtx.Validate(&req); err != nil {
|
||||
result.ParamErrorResult(c, err)
|
||||
return
|
||||
}
|
||||
|
||||
l := application.NewGetAppVersionListLogic(c.Request.Context(), svcCtx)
|
||||
resp, err := l.GetAppVersionList(&req)
|
||||
result.HttpResult(c, resp, err)
|
||||
}
|
||||
}
|
||||
@ -0,0 +1,28 @@
|
||||
package application
|
||||
|
||||
import (
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/perfect-panel/server/internal/logic/admin/application"
|
||||
"github.com/perfect-panel/server/internal/svc"
|
||||
"github.com/perfect-panel/server/internal/types"
|
||||
"github.com/perfect-panel/server/pkg/result"
|
||||
)
|
||||
|
||||
func UpdateAppVersionHandler(svcCtx *svc.ServiceContext) gin.HandlerFunc {
|
||||
return func(c *gin.Context) {
|
||||
var req types.UpdateAppVersionRequest
|
||||
if err := c.ShouldBindJSON(&req); err != nil {
|
||||
result.ParamErrorResult(c, err)
|
||||
return
|
||||
}
|
||||
|
||||
if err := svcCtx.Validate(&req); err != nil {
|
||||
result.ParamErrorResult(c, err)
|
||||
return
|
||||
}
|
||||
|
||||
l := application.NewUpdateAppVersionLogic(c.Request.Context(), svcCtx)
|
||||
resp, err := l.UpdateAppVersion(&req)
|
||||
result.HttpResult(c, resp, err)
|
||||
}
|
||||
}
|
||||
@ -100,6 +100,18 @@ func RegisterHandlers(router *gin.Engine, serverCtx *svc.ServiceContext) {
|
||||
|
||||
// Get subscribe application list
|
||||
adminApplicationGroupRouter.GET("/subscribe_application_list", adminApplication.GetSubscribeApplicationListHandler(serverCtx))
|
||||
|
||||
// Create App Version
|
||||
adminApplicationGroupRouter.POST("/version", adminApplication.CreateAppVersionHandler(serverCtx))
|
||||
|
||||
// Update App Version
|
||||
adminApplicationGroupRouter.PUT("/version", adminApplication.UpdateAppVersionHandler(serverCtx))
|
||||
|
||||
// Delete App Version
|
||||
adminApplicationGroupRouter.DELETE("/version", adminApplication.DeleteAppVersionHandler(serverCtx))
|
||||
|
||||
// Get App Version List
|
||||
adminApplicationGroupRouter.GET("/version/list", adminApplication.GetAppVersionListHandler(serverCtx))
|
||||
}
|
||||
|
||||
adminAuthMethodGroupRouter := router.Group("/v1/admin/auth-method")
|
||||
|
||||
75
internal/logic/admin/application/createAppVersionLogic.go
Normal file
75
internal/logic/admin/application/createAppVersionLogic.go
Normal file
@ -0,0 +1,75 @@
|
||||
package application
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
|
||||
"github.com/perfect-panel/server/internal/model/client"
|
||||
"github.com/perfect-panel/server/internal/svc"
|
||||
"github.com/perfect-panel/server/internal/types"
|
||||
"github.com/perfect-panel/server/pkg/logger"
|
||||
"github.com/perfect-panel/server/pkg/xerr"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
type CreateAppVersionLogic struct {
|
||||
logger.Logger
|
||||
ctx context.Context
|
||||
svcCtx *svc.ServiceContext
|
||||
}
|
||||
|
||||
func NewCreateAppVersionLogic(ctx context.Context, svcCtx *svc.ServiceContext) *CreateAppVersionLogic {
|
||||
return &CreateAppVersionLogic{
|
||||
Logger: logger.WithContext(ctx),
|
||||
ctx: ctx,
|
||||
svcCtx: svcCtx,
|
||||
}
|
||||
}
|
||||
|
||||
func (l *CreateAppVersionLogic) CreateAppVersion(req *types.CreateAppVersionRequest) (resp *types.ApplicationVersion, err error) {
|
||||
// Defaults
|
||||
isDefault := false
|
||||
if req.IsDefault != nil {
|
||||
isDefault = *req.IsDefault
|
||||
}
|
||||
isInReview := false
|
||||
if req.IsInReview != nil {
|
||||
isInReview = *req.IsInReview
|
||||
}
|
||||
|
||||
description := json.RawMessage(req.Description)
|
||||
|
||||
version := &client.ApplicationVersion{
|
||||
Platform: req.Platform,
|
||||
Version: req.Version,
|
||||
MinVersion: req.MinVersion,
|
||||
ForceUpdate: req.ForceUpdate,
|
||||
Url: req.Url,
|
||||
Description: description,
|
||||
IsDefault: isDefault,
|
||||
IsInReview: isInReview,
|
||||
}
|
||||
|
||||
if err := l.svcCtx.DB.Create(version).Error; err != nil {
|
||||
l.Errorw("[CreateAppVersion] create version error", logger.Field("error", err.Error()))
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseInsertError), "create version error: %v", err)
|
||||
}
|
||||
|
||||
// Manual mapping to types.ApplicationVersion
|
||||
resp = &types.ApplicationVersion{
|
||||
Id: version.Id,
|
||||
Platform: version.Platform, // Note: types.ApplicationVersion might not have Platform field based on previous view_file. Let's check.
|
||||
Version: version.Version,
|
||||
MinVersion: version.MinVersion,
|
||||
ForceUpdate: version.ForceUpdate,
|
||||
Description: make(map[string]string), // Simplified for now
|
||||
Url: version.Url,
|
||||
IsDefault: version.IsDefault,
|
||||
IsInReview: version.IsInReview,
|
||||
CreatedAt: version.CreatedAt.Unix(),
|
||||
}
|
||||
// Try to unmarshal description
|
||||
_ = json.Unmarshal(version.Description, &resp.Description)
|
||||
|
||||
return resp, nil
|
||||
}
|
||||
34
internal/logic/admin/application/deleteAppVersionLogic.go
Normal file
34
internal/logic/admin/application/deleteAppVersionLogic.go
Normal file
@ -0,0 +1,34 @@
|
||||
package application
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/perfect-panel/server/internal/model/client"
|
||||
"github.com/perfect-panel/server/internal/svc"
|
||||
"github.com/perfect-panel/server/internal/types"
|
||||
"github.com/perfect-panel/server/pkg/logger"
|
||||
"github.com/perfect-panel/server/pkg/xerr"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
type DeleteAppVersionLogic struct {
|
||||
logger.Logger
|
||||
ctx context.Context
|
||||
svcCtx *svc.ServiceContext
|
||||
}
|
||||
|
||||
func NewDeleteAppVersionLogic(ctx context.Context, svcCtx *svc.ServiceContext) *DeleteAppVersionLogic {
|
||||
return &DeleteAppVersionLogic{
|
||||
Logger: logger.WithContext(ctx),
|
||||
ctx: ctx,
|
||||
svcCtx: svcCtx,
|
||||
}
|
||||
}
|
||||
|
||||
func (l *DeleteAppVersionLogic) DeleteAppVersion(req *types.DeleteAppVersionRequest) error {
|
||||
if err := l.svcCtx.DB.Delete(&client.ApplicationVersion{}, req.Id).Error; err != nil {
|
||||
l.Errorw("[DeleteAppVersion] delete version error", logger.Field("error", err.Error()))
|
||||
return errors.Wrapf(xerr.NewErrCode(xerr.DatabaseDeletedError), "delete version error: %v", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
75
internal/logic/admin/application/getAppVersionListLogic.go
Normal file
75
internal/logic/admin/application/getAppVersionListLogic.go
Normal file
@ -0,0 +1,75 @@
|
||||
package application
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
|
||||
"github.com/perfect-panel/server/internal/model/client"
|
||||
"github.com/perfect-panel/server/internal/svc"
|
||||
"github.com/perfect-panel/server/internal/types"
|
||||
"github.com/perfect-panel/server/pkg/logger"
|
||||
"github.com/perfect-panel/server/pkg/xerr"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
type GetAppVersionListLogic struct {
|
||||
logger.Logger
|
||||
ctx context.Context
|
||||
svcCtx *svc.ServiceContext
|
||||
}
|
||||
|
||||
func NewGetAppVersionListLogic(ctx context.Context, svcCtx *svc.ServiceContext) *GetAppVersionListLogic {
|
||||
return &GetAppVersionListLogic{
|
||||
Logger: logger.WithContext(ctx),
|
||||
ctx: ctx,
|
||||
svcCtx: svcCtx,
|
||||
}
|
||||
}
|
||||
|
||||
func (l *GetAppVersionListLogic) GetAppVersionList(req *types.GetAppVersionListRequest) (resp *types.GetAppVersionListResponse, err error) {
|
||||
var versions []*client.ApplicationVersion
|
||||
var total int64
|
||||
|
||||
db := l.svcCtx.DB.Model(&client.ApplicationVersion{})
|
||||
|
||||
if req.Platform != "" {
|
||||
db = db.Where("platform = ?", req.Platform)
|
||||
}
|
||||
|
||||
err = db.Count(&total).Error
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "get version list count error: %v", err)
|
||||
}
|
||||
|
||||
offset := (req.Page - 1) * req.Size
|
||||
if offset < 0 {
|
||||
offset = 0
|
||||
}
|
||||
err = db.Offset(offset).Limit(req.Size).Order("id desc").Find(&versions).Error
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseQueryError), "get version list error: %v", err)
|
||||
}
|
||||
|
||||
var list []*types.ApplicationVersion
|
||||
for _, v := range versions {
|
||||
desc := make(map[string]string)
|
||||
_ = json.Unmarshal(v.Description, &desc)
|
||||
list = append(list, &types.ApplicationVersion{
|
||||
Id: v.Id,
|
||||
Platform: v.Platform,
|
||||
Version: v.Version,
|
||||
MinVersion: v.MinVersion,
|
||||
ForceUpdate: v.ForceUpdate,
|
||||
Description: desc,
|
||||
Url: v.Url,
|
||||
IsDefault: v.IsDefault,
|
||||
IsInReview: v.IsInReview,
|
||||
CreatedAt: v.CreatedAt.Unix(),
|
||||
})
|
||||
}
|
||||
|
||||
return &types.GetAppVersionListResponse{
|
||||
Total: total,
|
||||
List: list,
|
||||
}, nil
|
||||
}
|
||||
74
internal/logic/admin/application/updateAppVersionLogic.go
Normal file
74
internal/logic/admin/application/updateAppVersionLogic.go
Normal file
@ -0,0 +1,74 @@
|
||||
package application
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
|
||||
"github.com/perfect-panel/server/internal/model/client"
|
||||
"github.com/perfect-panel/server/internal/svc"
|
||||
"github.com/perfect-panel/server/internal/types"
|
||||
"github.com/perfect-panel/server/pkg/logger"
|
||||
"github.com/perfect-panel/server/pkg/xerr"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
type UpdateAppVersionLogic struct {
|
||||
logger.Logger
|
||||
ctx context.Context
|
||||
svcCtx *svc.ServiceContext
|
||||
}
|
||||
|
||||
func NewUpdateAppVersionLogic(ctx context.Context, svcCtx *svc.ServiceContext) *UpdateAppVersionLogic {
|
||||
return &UpdateAppVersionLogic{
|
||||
Logger: logger.WithContext(ctx),
|
||||
ctx: ctx,
|
||||
svcCtx: svcCtx,
|
||||
}
|
||||
}
|
||||
|
||||
func (l *UpdateAppVersionLogic) UpdateAppVersion(req *types.UpdateAppVersionRequest) (resp *types.ApplicationVersion, err error) {
|
||||
// Defaults
|
||||
isDefault := false
|
||||
if req.IsDefault != nil {
|
||||
isDefault = *req.IsDefault
|
||||
}
|
||||
isInReview := false
|
||||
if req.IsInReview != nil {
|
||||
isInReview = *req.IsInReview
|
||||
}
|
||||
|
||||
description := json.RawMessage(req.Description)
|
||||
|
||||
version := &client.ApplicationVersion{
|
||||
Id: req.Id,
|
||||
Platform: req.Platform,
|
||||
Version: req.Version,
|
||||
MinVersion: req.MinVersion,
|
||||
ForceUpdate: req.ForceUpdate,
|
||||
Url: req.Url,
|
||||
Description: description,
|
||||
IsDefault: isDefault,
|
||||
IsInReview: isInReview,
|
||||
}
|
||||
|
||||
if err := l.svcCtx.DB.Save(version).Error; err != nil {
|
||||
l.Errorw("[UpdateAppVersion] update version error", logger.Field("error", err.Error()))
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.DatabaseUpdateError), "update version error: %v", err)
|
||||
}
|
||||
|
||||
resp = &types.ApplicationVersion{
|
||||
Id: version.Id,
|
||||
Platform: version.Platform,
|
||||
Version: version.Version,
|
||||
MinVersion: version.MinVersion,
|
||||
ForceUpdate: version.ForceUpdate,
|
||||
Description: make(map[string]string),
|
||||
Url: version.Url,
|
||||
IsDefault: version.IsDefault,
|
||||
IsInReview: version.IsInReview,
|
||||
CreatedAt: version.CreatedAt.Unix(),
|
||||
}
|
||||
_ = json.Unmarshal(version.Description, &resp.Description)
|
||||
|
||||
return resp, nil
|
||||
}
|
||||
@ -120,10 +120,42 @@ func (l *DeviceLoginLogic) DeviceLogin(req *types.DeviceLoginRequest) (resp *typ
|
||||
}
|
||||
}
|
||||
|
||||
// Generate session id
|
||||
sessionId := uuidx.NewUUID().String()
|
||||
// Check if device has an existing valid session - reuse it instead of creating new one
|
||||
var sessionId string
|
||||
var reuseSession bool
|
||||
deviceCacheKey := fmt.Sprintf("%v:%v", config.DeviceCacheKeyKey, req.Identifier)
|
||||
if oldSid, getErr := l.svcCtx.Redis.Get(l.ctx, deviceCacheKey).Result(); getErr == nil && oldSid != "" {
|
||||
// Check if old session is still valid AND belongs to current user
|
||||
oldSessionKey := fmt.Sprintf("%v:%v", config.SessionIdKey, oldSid)
|
||||
if uidStr, existErr := l.svcCtx.Redis.Get(l.ctx, oldSessionKey).Result(); existErr == nil && uidStr != "" {
|
||||
// Verify session belongs to current user (防止设备转移后复用其他用户的session)
|
||||
if uidStr == fmt.Sprintf("%d", userInfo.Id) {
|
||||
sessionId = oldSid
|
||||
reuseSession = true
|
||||
l.Infow("reusing existing session for device",
|
||||
logger.Field("user_id", userInfo.Id),
|
||||
logger.Field("identifier", req.Identifier),
|
||||
logger.Field("session_id", sessionId),
|
||||
)
|
||||
} else {
|
||||
l.Infow("device session belongs to different user, creating new session",
|
||||
logger.Field("current_user_id", userInfo.Id),
|
||||
logger.Field("session_user_id", uidStr),
|
||||
logger.Field("identifier", req.Identifier),
|
||||
)
|
||||
}
|
||||
}
|
||||
}
|
||||
if !reuseSession {
|
||||
sessionId = uuidx.NewUUID().String()
|
||||
l.Infow("creating new session for device",
|
||||
logger.Field("user_id", userInfo.Id),
|
||||
logger.Field("identifier", req.Identifier),
|
||||
logger.Field("session_id", sessionId),
|
||||
)
|
||||
}
|
||||
|
||||
// Generate token
|
||||
// Generate token (always generate new token, but may reuse sessionId)
|
||||
token, err := jwt.NewJwtToken(
|
||||
l.svcCtx.Config.JwtAuth.AccessSecret,
|
||||
time.Now().Unix(),
|
||||
@ -141,23 +173,14 @@ func (l *DeviceLoginLogic) DeviceLogin(req *types.DeviceLoginRequest) (resp *typ
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "token generate error: %v", err.Error())
|
||||
}
|
||||
|
||||
// If device had a previous session, invalidate it first (MUST be before EnforceUserSessionLimit)
|
||||
oldDeviceCacheKey := fmt.Sprintf("%v:%v", config.DeviceCacheKeyKey, req.Identifier)
|
||||
if oldSid, getErr := l.svcCtx.Redis.Get(l.ctx, oldDeviceCacheKey).Result(); getErr == nil && oldSid != "" {
|
||||
oldSessionKey := fmt.Sprintf("%v:%v", config.SessionIdKey, oldSid)
|
||||
if uidStr, _ := l.svcCtx.Redis.Get(l.ctx, oldSessionKey).Result(); uidStr != "" {
|
||||
_ = l.svcCtx.Redis.Del(l.ctx, oldSessionKey).Err()
|
||||
sessionsKey := fmt.Sprintf("%s%v", config.UserSessionsKeyPrefix, uidStr)
|
||||
_ = l.svcCtx.Redis.ZRem(l.ctx, sessionsKey, oldSid).Err()
|
||||
// Only enforce session limit and add to user sessions if this is a new session
|
||||
if !reuseSession {
|
||||
if err = l.svcCtx.EnforceUserSessionLimit(l.ctx, userInfo.Id, sessionId, l.svcCtx.SessionLimit()); err != nil {
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "enforce session limit error: %v", err.Error())
|
||||
}
|
||||
_ = l.svcCtx.Redis.Del(l.ctx, oldDeviceCacheKey).Err()
|
||||
}
|
||||
|
||||
if err = l.svcCtx.EnforceUserSessionLimit(l.ctx, userInfo.Id, sessionId, l.svcCtx.SessionLimit()); err != nil {
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "enforce session limit error: %v", err.Error())
|
||||
}
|
||||
|
||||
// Store session id in redis
|
||||
// Store/refresh session id in redis (extend TTL)
|
||||
sessionIdCacheKey := fmt.Sprintf("%v:%v", config.SessionIdKey, sessionId)
|
||||
if err = l.svcCtx.Redis.Set(l.ctx, sessionIdCacheKey, userInfo.Id, time.Duration(l.svcCtx.Config.JwtAuth.AccessExpire)*time.Second).Err(); err != nil {
|
||||
l.Errorw("set session id error",
|
||||
@ -167,8 +190,7 @@ func (l *DeviceLoginLogic) DeviceLogin(req *types.DeviceLoginRequest) (resp *typ
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "set session id error: %v", err.Error())
|
||||
}
|
||||
|
||||
// Store device id in redis
|
||||
deviceCacheKey := fmt.Sprintf("%v:%v", config.DeviceCacheKeyKey, req.Identifier)
|
||||
// Store/refresh device-to-session mapping (extend TTL)
|
||||
if err = l.svcCtx.Redis.Set(l.ctx, deviceCacheKey, sessionId, time.Duration(l.svcCtx.Config.JwtAuth.AccessExpire)*time.Second).Err(); err != nil {
|
||||
l.Errorw("set device id error",
|
||||
logger.Field("user_id", userInfo.Id),
|
||||
|
||||
@ -220,7 +220,40 @@ func (l *EmailLoginLogic) EmailLogin(req *types.EmailLoginRequest) (resp *types.
|
||||
req.LoginType = l.ctx.Value(constant.LoginType).(string)
|
||||
}
|
||||
|
||||
sessionId := uuidx.NewUUID().String()
|
||||
// Check if device has an existing valid session - reuse it instead of creating new one
|
||||
var sessionId string
|
||||
var reuseSession bool
|
||||
var deviceCacheKey string
|
||||
if req.Identifier != "" {
|
||||
deviceCacheKey = fmt.Sprintf("%v:%v", config.DeviceCacheKeyKey, req.Identifier)
|
||||
if oldSid, getErr := l.svcCtx.Redis.Get(l.ctx, deviceCacheKey).Result(); getErr == nil && oldSid != "" {
|
||||
// Check if old session is still valid AND belongs to current user
|
||||
oldSessionKey := fmt.Sprintf("%v:%v", config.SessionIdKey, oldSid)
|
||||
if uidStr, existErr := l.svcCtx.Redis.Get(l.ctx, oldSessionKey).Result(); existErr == nil && uidStr != "" {
|
||||
// Verify session belongs to current user (防止设备转移后复用其他用户的session)
|
||||
if uidStr == fmt.Sprintf("%d", userInfo.Id) {
|
||||
sessionId = oldSid
|
||||
reuseSession = true
|
||||
l.Infow("reusing existing session for device",
|
||||
logger.Field("user_id", userInfo.Id),
|
||||
logger.Field("identifier", req.Identifier),
|
||||
logger.Field("session_id", sessionId),
|
||||
)
|
||||
} else {
|
||||
l.Infow("device session belongs to different user, creating new session",
|
||||
logger.Field("current_user_id", userInfo.Id),
|
||||
logger.Field("session_user_id", uidStr),
|
||||
logger.Field("identifier", req.Identifier),
|
||||
)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
if !reuseSession {
|
||||
sessionId = uuidx.NewUUID().String()
|
||||
}
|
||||
|
||||
// Generate token (always generate new token, but may reuse sessionId)
|
||||
token, err := jwt.NewJwtToken(
|
||||
l.svcCtx.Config.JwtAuth.AccessSecret,
|
||||
time.Now().Unix(),
|
||||
@ -233,32 +266,22 @@ func (l *EmailLoginLogic) EmailLogin(req *types.EmailLoginRequest) (resp *types.
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "token generate error: %v", err.Error())
|
||||
}
|
||||
// If device had a previous session, invalidate it first (MUST be before EnforceUserSessionLimit)
|
||||
if req.Identifier != "" {
|
||||
oldDeviceCacheKey := fmt.Sprintf("%v:%v", config.DeviceCacheKeyKey, req.Identifier)
|
||||
if oldSid, getErr := l.svcCtx.Redis.Get(l.ctx, oldDeviceCacheKey).Result(); getErr == nil && oldSid != "" {
|
||||
oldSessionKey := fmt.Sprintf("%v:%v", config.SessionIdKey, oldSid)
|
||||
if uidStr, _ := l.svcCtx.Redis.Get(l.ctx, oldSessionKey).Result(); uidStr != "" {
|
||||
_ = l.svcCtx.Redis.Del(l.ctx, oldSessionKey).Err()
|
||||
sessionsKey := fmt.Sprintf("%s%v", config.UserSessionsKeyPrefix, uidStr)
|
||||
_ = l.svcCtx.Redis.ZRem(l.ctx, sessionsKey, oldSid).Err()
|
||||
}
|
||||
_ = l.svcCtx.Redis.Del(l.ctx, oldDeviceCacheKey).Err()
|
||||
|
||||
// Only enforce session limit and add to user sessions if this is a new session
|
||||
if !reuseSession {
|
||||
if err = l.svcCtx.EnforceUserSessionLimit(l.ctx, userInfo.Id, sessionId, l.svcCtx.SessionLimit()); err != nil {
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "enforce session limit error: %v", err.Error())
|
||||
}
|
||||
}
|
||||
|
||||
if err = l.svcCtx.EnforceUserSessionLimit(l.ctx, userInfo.Id, sessionId, l.svcCtx.SessionLimit()); err != nil {
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "enforce session limit error: %v", err.Error())
|
||||
}
|
||||
// Store/refresh session id in redis (extend TTL)
|
||||
sessionIdCacheKey := fmt.Sprintf("%v:%v", config.SessionIdKey, sessionId)
|
||||
|
||||
if err = l.svcCtx.Redis.Set(l.ctx, sessionIdCacheKey, userInfo.Id, time.Duration(l.svcCtx.Config.JwtAuth.AccessExpire)*time.Second).Err(); err != nil {
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "set session id error: %v", err.Error())
|
||||
}
|
||||
|
||||
// Store device-to-session mapping
|
||||
// Store/refresh device-to-session mapping (extend TTL)
|
||||
if req.Identifier != "" {
|
||||
deviceCacheKey := fmt.Sprintf("%v:%v", config.DeviceCacheKeyKey, req.Identifier)
|
||||
_ = l.svcCtx.Redis.Set(l.ctx, deviceCacheKey, sessionId, time.Duration(l.svcCtx.Config.JwtAuth.AccessExpire)*time.Second).Err()
|
||||
}
|
||||
|
||||
|
||||
@ -144,9 +144,40 @@ func (l *TelephoneLoginLogic) TelephoneLogin(req *types.TelephoneLoginRequest, r
|
||||
req.LoginType = l.ctx.Value(constant.LoginType).(string)
|
||||
}
|
||||
|
||||
// Generate session id
|
||||
sessionId := uuidx.NewUUID().String()
|
||||
// Generate token
|
||||
// Check if device has an existing valid session - reuse it instead of creating new one
|
||||
var sessionId string
|
||||
var reuseSession bool
|
||||
var deviceCacheKey string
|
||||
if req.Identifier != "" {
|
||||
deviceCacheKey = fmt.Sprintf("%v:%v", config.DeviceCacheKeyKey, req.Identifier)
|
||||
if oldSid, getErr := l.svcCtx.Redis.Get(l.ctx, deviceCacheKey).Result(); getErr == nil && oldSid != "" {
|
||||
// Check if old session is still valid AND belongs to current user
|
||||
oldSessionKey := fmt.Sprintf("%v:%v", config.SessionIdKey, oldSid)
|
||||
if uidStr, existErr := l.svcCtx.Redis.Get(l.ctx, oldSessionKey).Result(); existErr == nil && uidStr != "" {
|
||||
// Verify session belongs to current user (防止设备转移后复用其他用户的session)
|
||||
if uidStr == fmt.Sprintf("%d", userInfo.Id) {
|
||||
sessionId = oldSid
|
||||
reuseSession = true
|
||||
l.Infow("reusing existing session for device",
|
||||
logger.Field("user_id", userInfo.Id),
|
||||
logger.Field("identifier", req.Identifier),
|
||||
logger.Field("session_id", sessionId),
|
||||
)
|
||||
} else {
|
||||
l.Infow("device session belongs to different user, creating new session",
|
||||
logger.Field("current_user_id", userInfo.Id),
|
||||
logger.Field("session_user_id", uidStr),
|
||||
logger.Field("identifier", req.Identifier),
|
||||
)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
if !reuseSession {
|
||||
sessionId = uuidx.NewUUID().String()
|
||||
}
|
||||
|
||||
// Generate token (always generate new token, but may reuse sessionId)
|
||||
token, err := jwt.NewJwtToken(
|
||||
l.svcCtx.Config.JwtAuth.AccessSecret,
|
||||
time.Now().Unix(),
|
||||
@ -159,13 +190,24 @@ func (l *TelephoneLoginLogic) TelephoneLogin(req *types.TelephoneLoginRequest, r
|
||||
l.Logger.Error("[UserLogin] token generate error", logger.Field("error", err.Error()))
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "token generate error: %v", err.Error())
|
||||
}
|
||||
if err = l.svcCtx.EnforceUserSessionLimit(l.ctx, userInfo.Id, sessionId, l.svcCtx.SessionLimit()); err != nil {
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "enforce session limit error: %v", err.Error())
|
||||
|
||||
// Only enforce session limit and add to user sessions if this is a new session
|
||||
if !reuseSession {
|
||||
if err = l.svcCtx.EnforceUserSessionLimit(l.ctx, userInfo.Id, sessionId, l.svcCtx.SessionLimit()); err != nil {
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "enforce session limit error: %v", err.Error())
|
||||
}
|
||||
}
|
||||
|
||||
// Store/refresh session id in redis (extend TTL)
|
||||
sessionIdCacheKey := fmt.Sprintf("%v:%v", config.SessionIdKey, sessionId)
|
||||
if err = l.svcCtx.Redis.Set(l.ctx, sessionIdCacheKey, userInfo.Id, time.Duration(l.svcCtx.Config.JwtAuth.AccessExpire)*time.Second).Err(); err != nil {
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "set session id error: %v", err.Error())
|
||||
}
|
||||
|
||||
// Store/refresh device-to-session mapping (extend TTL)
|
||||
if req.Identifier != "" {
|
||||
_ = l.svcCtx.Redis.Set(l.ctx, deviceCacheKey, sessionId, time.Duration(l.svcCtx.Config.JwtAuth.AccessExpire)*time.Second).Err()
|
||||
}
|
||||
loginStatus = true
|
||||
return &types.LoginResponse{
|
||||
Token: token,
|
||||
|
||||
@ -115,9 +115,41 @@ func (l *UserLoginLogic) UserLogin(req *types.UserLoginRequest) (resp *types.Log
|
||||
if l.ctx.Value(constant.LoginType) != nil {
|
||||
req.LoginType = l.ctx.Value(constant.LoginType).(string)
|
||||
}
|
||||
// Generate session id
|
||||
sessionId := uuidx.NewUUID().String()
|
||||
// Generate token
|
||||
|
||||
// Check if device has an existing valid session - reuse it instead of creating new one
|
||||
var sessionId string
|
||||
var reuseSession bool
|
||||
var deviceCacheKey string
|
||||
if req.Identifier != "" {
|
||||
deviceCacheKey = fmt.Sprintf("%v:%v", config.DeviceCacheKeyKey, req.Identifier)
|
||||
if oldSid, getErr := l.svcCtx.Redis.Get(l.ctx, deviceCacheKey).Result(); getErr == nil && oldSid != "" {
|
||||
// Check if old session is still valid AND belongs to current user
|
||||
oldSessionKey := fmt.Sprintf("%v:%v", config.SessionIdKey, oldSid)
|
||||
if uidStr, existErr := l.svcCtx.Redis.Get(l.ctx, oldSessionKey).Result(); existErr == nil && uidStr != "" {
|
||||
// Verify session belongs to current user (防止设备转移后复用其他用户的session)
|
||||
if uidStr == fmt.Sprintf("%d", userInfo.Id) {
|
||||
sessionId = oldSid
|
||||
reuseSession = true
|
||||
l.Infow("reusing existing session for device",
|
||||
logger.Field("user_id", userInfo.Id),
|
||||
logger.Field("identifier", req.Identifier),
|
||||
logger.Field("session_id", sessionId),
|
||||
)
|
||||
} else {
|
||||
l.Infow("device session belongs to different user, creating new session",
|
||||
logger.Field("current_user_id", userInfo.Id),
|
||||
logger.Field("session_user_id", uidStr),
|
||||
logger.Field("identifier", req.Identifier),
|
||||
)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
if !reuseSession {
|
||||
sessionId = uuidx.NewUUID().String()
|
||||
}
|
||||
|
||||
// Generate token (always generate new token, but may reuse sessionId)
|
||||
token, err := jwt.NewJwtToken(
|
||||
l.svcCtx.Config.JwtAuth.AccessSecret,
|
||||
time.Now().Unix(),
|
||||
@ -131,32 +163,22 @@ func (l *UserLoginLogic) UserLogin(req *types.UserLoginRequest) (resp *types.Log
|
||||
l.Logger.Error("[UserLogin] token generate error", logger.Field("error", err.Error()))
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "token generate error: %v", err.Error())
|
||||
}
|
||||
// If device had a previous session, invalidate it first (MUST be before EnforceUserSessionLimit)
|
||||
if req.Identifier != "" {
|
||||
oldDeviceCacheKey := fmt.Sprintf("%v:%v", config.DeviceCacheKeyKey, req.Identifier)
|
||||
if oldSid, getErr := l.svcCtx.Redis.Get(l.ctx, oldDeviceCacheKey).Result(); getErr == nil && oldSid != "" {
|
||||
oldSessionKey := fmt.Sprintf("%v:%v", config.SessionIdKey, oldSid)
|
||||
if uidStr, _ := l.svcCtx.Redis.Get(l.ctx, oldSessionKey).Result(); uidStr != "" {
|
||||
_ = l.svcCtx.Redis.Del(l.ctx, oldSessionKey).Err()
|
||||
sessionsKey := fmt.Sprintf("%s%v", config.UserSessionsKeyPrefix, uidStr)
|
||||
_ = l.svcCtx.Redis.ZRem(l.ctx, sessionsKey, oldSid).Err()
|
||||
}
|
||||
_ = l.svcCtx.Redis.Del(l.ctx, oldDeviceCacheKey).Err()
|
||||
|
||||
// Only enforce session limit and add to user sessions if this is a new session
|
||||
if !reuseSession {
|
||||
if err = l.svcCtx.EnforceUserSessionLimit(l.ctx, userInfo.Id, sessionId, l.svcCtx.SessionLimit()); err != nil {
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "enforce session limit error: %v", err.Error())
|
||||
}
|
||||
}
|
||||
|
||||
if err = l.svcCtx.EnforceUserSessionLimit(l.ctx, userInfo.Id, sessionId, l.svcCtx.SessionLimit()); err != nil {
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "enforce session limit error: %v", err.Error())
|
||||
}
|
||||
// Store/refresh session id in redis (extend TTL)
|
||||
sessionIdCacheKey := fmt.Sprintf("%v:%v", config.SessionIdKey, sessionId)
|
||||
|
||||
if err = l.svcCtx.Redis.Set(l.ctx, sessionIdCacheKey, userInfo.Id, time.Duration(l.svcCtx.Config.JwtAuth.AccessExpire)*time.Second).Err(); err != nil {
|
||||
return nil, errors.Wrapf(xerr.NewErrCode(xerr.ERROR), "set session id error: %v", err.Error())
|
||||
}
|
||||
|
||||
// Store device-to-session mapping
|
||||
// Store/refresh device-to-session mapping (extend TTL)
|
||||
if req.Identifier != "" {
|
||||
deviceCacheKey := fmt.Sprintf("%v:%v", config.DeviceCacheKeyKey, req.Identifier)
|
||||
_ = l.svcCtx.Redis.Set(l.ctx, deviceCacheKey, sessionId, time.Duration(l.svcCtx.Config.JwtAuth.AccessExpire)*time.Second).Err()
|
||||
}
|
||||
|
||||
|
||||
@ -2,7 +2,9 @@ package common
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
|
||||
"github.com/perfect-panel/server/internal/model/client"
|
||||
"github.com/perfect-panel/server/internal/svc"
|
||||
"github.com/perfect-panel/server/internal/types"
|
||||
"github.com/perfect-panel/server/pkg/logger"
|
||||
@ -25,16 +27,37 @@ func NewGetAppVersionLogic(ctx context.Context, svcCtx *svc.ServiceContext) *Get
|
||||
|
||||
// GetAppVersion 根据平台返回最新版本信息
|
||||
func (l *GetAppVersionLogic) GetAppVersion(req *types.GetAppVersionRequest) (resp *types.ApplicationVersion, err error) {
|
||||
// TODO: 后续对接数据库实现
|
||||
resp = &types.ApplicationVersion{
|
||||
Version: "1.0.0",
|
||||
MinVersion: "1.0.0",
|
||||
ForceUpdate: false,
|
||||
Description: map[string]string{
|
||||
"zh-CN": "初始版本",
|
||||
"en-US": "Initial version",
|
||||
},
|
||||
IsDefault: true,
|
||||
// Query the latest version for the platform
|
||||
var version client.ApplicationVersion
|
||||
err = l.svcCtx.DB.Model(&client.ApplicationVersion{}).
|
||||
Where("platform = ? AND is_default = 1", req.Platform).
|
||||
Order("id desc").First(&version).Error
|
||||
|
||||
if err != nil {
|
||||
l.Errorf("[GetAppVersion] get version error: %v", err)
|
||||
// Return empty or default if not found
|
||||
return &types.ApplicationVersion{
|
||||
Version: "unknown",
|
||||
MinVersion: "unknown",
|
||||
ForceUpdate: false,
|
||||
Description: map[string]string{},
|
||||
IsDefault: false,
|
||||
}, nil
|
||||
}
|
||||
return
|
||||
|
||||
resp = &types.ApplicationVersion{
|
||||
Id: version.Id,
|
||||
Platform: version.Platform,
|
||||
Version: version.Version,
|
||||
MinVersion: version.MinVersion,
|
||||
ForceUpdate: version.ForceUpdate,
|
||||
Description: make(map[string]string),
|
||||
Url: version.Url,
|
||||
IsDefault: version.IsDefault,
|
||||
IsInReview: version.IsInReview,
|
||||
CreatedAt: version.CreatedAt.Unix(),
|
||||
}
|
||||
_ = json.Unmarshal(version.Description, &resp.Description)
|
||||
|
||||
return resp, nil
|
||||
}
|
||||
|
||||
@ -296,6 +296,15 @@ func (l *BindEmailWithVerificationLogic) addAuthMethodForEmailUser(userId int64,
|
||||
l.Infow("成功添加邮箱用户认证方法",
|
||||
logger.Field("user_id", userId),
|
||||
logger.Field("email", email))
|
||||
|
||||
// 清理用户缓存,确保下次查询能获取到新的认证列表
|
||||
if user, err := l.svcCtx.UserModel.FindOne(l.ctx, userId); err == nil && user != nil {
|
||||
if err := l.svcCtx.UserModel.BatchClearRelatedCache(l.ctx, user); err != nil {
|
||||
l.Errorw("清理用户缓存失败", logger.Field("error", err.Error()), logger.Field("user_id", userId))
|
||||
} else {
|
||||
l.Infow("清理用户缓存成功", logger.Field("user_id", userId))
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
|
||||
@ -2,9 +2,12 @@ package user
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"sort"
|
||||
|
||||
"github.com/perfect-panel/server/pkg/constant"
|
||||
"github.com/perfect-panel/server/pkg/kutt"
|
||||
"github.com/perfect-panel/server/pkg/xerr"
|
||||
"github.com/pkg/errors"
|
||||
|
||||
@ -40,15 +43,29 @@ func (l *QueryUserInfoLogic) QueryUserInfo() (resp *types.User, err error) {
|
||||
}
|
||||
tool.DeepCopy(resp, u)
|
||||
|
||||
// 临时调试日志:打印原始 AuthMethods
|
||||
fmt.Println("========================================")
|
||||
fmt.Printf("UserID: %d, Original AuthMethods Count: %d\n", u.Id, len(u.AuthMethods))
|
||||
for i, m := range u.AuthMethods {
|
||||
fmt.Printf(" [%d] Type: %s, Identifier: %s\n", i, m.AuthType, m.AuthIdentifier)
|
||||
}
|
||||
fmt.Println("========================================")
|
||||
|
||||
var userMethods []types.UserAuthMethod
|
||||
for _, method := range resp.AuthMethods {
|
||||
var item types.UserAuthMethod
|
||||
tool.DeepCopy(&item, method)
|
||||
for _, method := range u.AuthMethods {
|
||||
item := types.UserAuthMethod{
|
||||
AuthType: method.AuthType,
|
||||
Verified: method.Verified,
|
||||
AuthIdentifier: method.AuthIdentifier,
|
||||
}
|
||||
|
||||
switch method.AuthType {
|
||||
case "mobile":
|
||||
item.AuthIdentifier = phone.MaskPhoneNumber(method.AuthIdentifier)
|
||||
case "email":
|
||||
// No masking for email
|
||||
case "device":
|
||||
// No masking for device identifier
|
||||
default:
|
||||
item.AuthIdentifier = maskOpenID(method.AuthIdentifier)
|
||||
}
|
||||
@ -60,10 +77,133 @@ func (l *QueryUserInfoLogic) QueryUserInfo() (resp *types.User, err error) {
|
||||
return getAuthTypePriority(userMethods[i].AuthType) < getAuthTypePriority(userMethods[j].AuthType)
|
||||
})
|
||||
|
||||
// 临时调试日志:打印处理后的 AuthMethods
|
||||
fmt.Println("========================================")
|
||||
fmt.Printf("UserID: %d, Sorted Response AuthMethods Count: %d\n", u.Id, len(userMethods))
|
||||
for i, m := range userMethods {
|
||||
fmt.Printf(" [%d] Type: %s, Identifier: %s\n", i, m.AuthType, m.AuthIdentifier)
|
||||
}
|
||||
fmt.Println("========================================")
|
||||
|
||||
resp.AuthMethods = userMethods
|
||||
|
||||
// 生成邀请短链接
|
||||
if l.svcCtx.Config.Kutt.Enable && resp.ReferCode != "" {
|
||||
shortLink := l.generateInviteShortLink(resp.ReferCode)
|
||||
if shortLink != "" {
|
||||
resp.ShareLink = shortLink
|
||||
}
|
||||
}
|
||||
|
||||
return resp, nil
|
||||
}
|
||||
|
||||
// customData 用于解析 SiteConfig.CustomData JSON 字段
|
||||
// 包含从自定义数据中提取所需的配置项
|
||||
type customData struct {
|
||||
ShareUrl string `json:"shareUrl"` // 分享链接前缀 URL(目标落地页)
|
||||
Domain string `json:"domain"` // 短链接域名
|
||||
}
|
||||
|
||||
// getShareUrl 从 SiteConfig.CustomData 中获取 shareUrl
|
||||
//
|
||||
// 返回:
|
||||
// - string: 分享链接前缀 URL,如果获取失败则返回 Kutt.TargetURL 作为 fallback
|
||||
func (l *QueryUserInfoLogic) getShareUrl() string {
|
||||
siteConfig := l.svcCtx.Config.Site
|
||||
if siteConfig.CustomData != "" {
|
||||
var data customData
|
||||
if err := json.Unmarshal([]byte(siteConfig.CustomData), &data); err == nil {
|
||||
if data.ShareUrl != "" {
|
||||
return data.ShareUrl
|
||||
}
|
||||
}
|
||||
}
|
||||
// fallback 到 Kutt.TargetURL
|
||||
return l.svcCtx.Config.Kutt.TargetURL
|
||||
}
|
||||
|
||||
// getDomain 从 SiteConfig.CustomData 中获取短链接域名
|
||||
//
|
||||
// 返回:
|
||||
// - string: 短链接域名,如果获取失败则返回 Kutt.Domain 作为 fallback
|
||||
func (l *QueryUserInfoLogic) getDomain() string {
|
||||
siteConfig := l.svcCtx.Config.Site
|
||||
if siteConfig.CustomData != "" {
|
||||
var data customData
|
||||
if err := json.Unmarshal([]byte(siteConfig.CustomData), &data); err == nil {
|
||||
if data.Domain != "" {
|
||||
return data.Domain
|
||||
}
|
||||
}
|
||||
}
|
||||
// fallback 到 Kutt.Domain
|
||||
return l.svcCtx.Config.Kutt.Domain
|
||||
}
|
||||
|
||||
// generateInviteShortLink 生成邀请短链接(带 Redis 缓存)
|
||||
//
|
||||
// 参数:
|
||||
// - inviteCode: 邀请码
|
||||
//
|
||||
// 返回:
|
||||
// - string: 短链接 URL,失败时返回空字符串
|
||||
func (l *QueryUserInfoLogic) generateInviteShortLink(inviteCode string) string {
|
||||
cfg := l.svcCtx.Config.Kutt
|
||||
shareUrl := l.getShareUrl()
|
||||
domain := l.getDomain()
|
||||
|
||||
// 检查必要配置
|
||||
if cfg.ApiURL == "" || cfg.ApiKey == "" {
|
||||
l.Sloww("Kutt config incomplete",
|
||||
logger.Field("api_url", cfg.ApiURL != ""),
|
||||
logger.Field("api_key", cfg.ApiKey != ""))
|
||||
return ""
|
||||
}
|
||||
if shareUrl == "" {
|
||||
l.Sloww("ShareUrl not configured in CustomData or Kutt.TargetURL")
|
||||
return ""
|
||||
}
|
||||
|
||||
// Redis 缓存 key
|
||||
cacheKey := "cache:invite:short_link:" + inviteCode
|
||||
|
||||
// 1. 尝试从 Redis 缓存读取
|
||||
cachedLink, err := l.svcCtx.Redis.Get(l.ctx, cacheKey).Result()
|
||||
if err == nil && cachedLink != "" {
|
||||
l.Debugw("Hit cache for invite short link",
|
||||
logger.Field("invite_code", inviteCode),
|
||||
logger.Field("short_link", cachedLink))
|
||||
return cachedLink
|
||||
}
|
||||
|
||||
// 2. 缓存未命中,调用 Kutt API 创建短链接
|
||||
client := kutt.NewClient(cfg.ApiURL, cfg.ApiKey)
|
||||
shortLink, err := client.CreateInviteShortLink(l.ctx, shareUrl, inviteCode, domain)
|
||||
if err != nil {
|
||||
l.Errorw("Failed to create short link",
|
||||
logger.Field("error", err.Error()),
|
||||
logger.Field("invite_code", inviteCode),
|
||||
logger.Field("share_url", shareUrl))
|
||||
return ""
|
||||
}
|
||||
|
||||
// 3. 写入 Redis 缓存(永不过期,因为邀请码不变短链接也不会变)
|
||||
if err := l.svcCtx.Redis.Set(l.ctx, cacheKey, shortLink, 0).Err(); err != nil {
|
||||
l.Errorw("Failed to cache short link",
|
||||
logger.Field("error", err.Error()),
|
||||
logger.Field("invite_code", inviteCode))
|
||||
// 缓存失败不影响返回
|
||||
}
|
||||
|
||||
l.Infow("Created and cached invite short link",
|
||||
logger.Field("invite_code", inviteCode),
|
||||
logger.Field("short_link", shortLink),
|
||||
logger.Field("share_url", shareUrl))
|
||||
|
||||
return shortLink
|
||||
}
|
||||
|
||||
// getAuthTypePriority 获取认证类型的排序优先级
|
||||
// email: 1 (第一位)
|
||||
// mobile: 2 (第二位)
|
||||
|
||||
27
internal/model/client/app_version.go
Normal file
27
internal/model/client/app_version.go
Normal file
@ -0,0 +1,27 @@
|
||||
package client
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"time"
|
||||
|
||||
"gorm.io/gorm"
|
||||
)
|
||||
|
||||
type ApplicationVersion struct {
|
||||
Id int64 `gorm:"primaryKey"`
|
||||
Platform string `gorm:"type:varchar(50);not null;comment:Platform (ios, android, windows, mac, linux, harmony)"`
|
||||
Version string `gorm:"type:varchar(50);not null;comment:Version Number"`
|
||||
MinVersion string `gorm:"type:varchar(50);default:null;comment:Minimum Force Update Version"`
|
||||
ForceUpdate bool `gorm:"type:tinyint(1);not null;default:0;comment:Force Update"`
|
||||
Url string `gorm:"type:varchar(255);not null;comment:Download URL"`
|
||||
Description json.RawMessage `gorm:"type:json;default:null;comment:Update Description (JSON for multi-language)"`
|
||||
IsDefault bool `gorm:"type:tinyint(1);not null;default:0;comment:Is Default Version"`
|
||||
IsInReview bool `gorm:"type:tinyint(1);not null;default:0;comment:Is In Review"`
|
||||
CreatedAt time.Time `gorm:"<-:create;comment:Create Time"`
|
||||
UpdatedAt time.Time `gorm:"comment:Update Time"`
|
||||
DeletedAt gorm.DeletedAt `gorm:"index"`
|
||||
}
|
||||
|
||||
func (ApplicationVersion) TableName() string {
|
||||
return "application_versions"
|
||||
}
|
||||
@ -109,6 +109,7 @@ type ApplicationResponseInfo struct {
|
||||
|
||||
type ApplicationVersion struct {
|
||||
Id int64 `json:"id"`
|
||||
Platform string `json:"platform"`
|
||||
Url string `json:"url"`
|
||||
Version string `json:"version" validate:"required"`
|
||||
MinVersion string `json:"min_version"`
|
||||
@ -487,9 +488,10 @@ type Currency struct {
|
||||
}
|
||||
|
||||
type CurrencyConfig struct {
|
||||
AccessKey string `json:"access_key"`
|
||||
CurrencyUnit string `json:"currency_unit"`
|
||||
CurrencySymbol string `json:"currency_symbol"`
|
||||
AccessKey string `json:"access_key"`
|
||||
CurrencyUnit string `json:"currency_unit"`
|
||||
CurrencySymbol string `json:"currency_symbol"`
|
||||
FixedRate float64 `json:"fixed_rate"`
|
||||
}
|
||||
|
||||
type DeleteAdsRequest struct {
|
||||
@ -2614,6 +2616,7 @@ type User struct {
|
||||
GiftAmount int64 `json:"gift_amount"`
|
||||
Telegram int64 `json:"telegram"`
|
||||
ReferCode string `json:"refer_code"`
|
||||
ShareLink string `json:"share_link,omitempty"`
|
||||
RefererId int64 `json:"referer_id"`
|
||||
Enable bool `json:"enable"`
|
||||
IsAdmin bool `json:"is_admin,omitempty"`
|
||||
@ -2924,3 +2927,41 @@ type GetAppleStatusResponse struct {
|
||||
ExpiresAt int64 `json:"expires_at"`
|
||||
Tier string `json:"tier"`
|
||||
}
|
||||
|
||||
type CreateAppVersionRequest struct {
|
||||
Platform string `json:"platform" validate:"required,oneof=ios macos linux android windows harmony"`
|
||||
Version string `json:"version" validate:"required"`
|
||||
MinVersion string `json:"min_version"`
|
||||
ForceUpdate bool `json:"force_update"`
|
||||
Description string `json:"description"`
|
||||
Url string `json:"url" validate:"required,url"`
|
||||
IsDefault *bool `json:"is_default"`
|
||||
IsInReview *bool `json:"is_in_review"`
|
||||
}
|
||||
|
||||
type UpdateAppVersionRequest struct {
|
||||
Id int64 `json:"id" validate:"required"`
|
||||
Platform string `json:"platform" validate:"required,oneof=ios macos linux android windows harmony"`
|
||||
Version string `json:"version" validate:"required"`
|
||||
MinVersion string `json:"min_version"`
|
||||
ForceUpdate bool `json:"force_update"`
|
||||
Description string `json:"description"`
|
||||
Url string `json:"url" validate:"required,url"`
|
||||
IsDefault *bool `json:"is_default"`
|
||||
IsInReview *bool `json:"is_in_review"`
|
||||
}
|
||||
|
||||
type DeleteAppVersionRequest struct {
|
||||
Id int64 `json:"id" validate:"required"`
|
||||
}
|
||||
|
||||
type GetAppVersionListRequest struct {
|
||||
Page int `form:"page"`
|
||||
Size int `form:"size"`
|
||||
Platform string `form:"platform,optional"`
|
||||
}
|
||||
|
||||
type GetAppVersionListResponse struct {
|
||||
Total int64 `json:"total"`
|
||||
List []*ApplicationVersion `json:"list"`
|
||||
}
|
||||
|
||||
157
pkg/kutt/kutt.go
Normal file
157
pkg/kutt/kutt.go
Normal file
@ -0,0 +1,157 @@
|
||||
package kutt
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"time"
|
||||
)
|
||||
|
||||
// Client 是 Kutt API 客户端
|
||||
// 用于创建和管理短链接
|
||||
type Client struct {
|
||||
apiURL string // Kutt API 基础 URL
|
||||
apiKey string // API 认证密钥
|
||||
httpClient *http.Client // HTTP 客户端
|
||||
}
|
||||
|
||||
// NewClient 创建一个新的 Kutt 客户端
|
||||
//
|
||||
// 参数:
|
||||
// - apiURL: Kutt API 基础 URL (例如: https://kutt.it/api/v2)
|
||||
// - apiKey: Kutt API 密钥
|
||||
//
|
||||
// 返回:
|
||||
// - *Client: Kutt 客户端实例
|
||||
func NewClient(apiURL, apiKey string) *Client {
|
||||
return &Client{
|
||||
apiURL: apiURL,
|
||||
apiKey: apiKey,
|
||||
httpClient: &http.Client{
|
||||
Timeout: 10 * time.Second,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// CreateLinkRequest 创建短链接的请求参数
|
||||
type CreateLinkRequest struct {
|
||||
Target string `json:"target"` // 目标 URL (必填)
|
||||
Description string `json:"description,omitempty"` // 链接描述 (可选)
|
||||
ExpireIn string `json:"expire_in,omitempty"` // 过期时间,例如 "2 days" (可选)
|
||||
Password string `json:"password,omitempty"` // 访问密码 (可选)
|
||||
CustomURL string `json:"customurl,omitempty"` // 自定义短链后缀 (可选)
|
||||
Reuse bool `json:"reuse,omitempty"` // 如果目标 URL 已存在则复用 (可选)
|
||||
Domain string `json:"domain,omitempty"` // 自定义域名 (可选)
|
||||
}
|
||||
|
||||
// Link 短链接响应结构
|
||||
type Link struct {
|
||||
ID string `json:"id"` // 链接 UUID
|
||||
Address string `json:"address"` // 短链地址后缀
|
||||
Banned bool `json:"banned"` // 是否被封禁
|
||||
CreatedAt time.Time `json:"created_at"` // 创建时间
|
||||
Link string `json:"link"` // 完整短链接 URL
|
||||
Password bool `json:"password"` // 是否有密码保护
|
||||
Target string `json:"target"` // 目标 URL
|
||||
Description string `json:"description"` // 链接描述
|
||||
UpdatedAt time.Time `json:"updated_at"` // 更新时间
|
||||
VisitCount int `json:"visit_count"` // 访问次数
|
||||
}
|
||||
|
||||
// ErrorResponse Kutt API 错误响应
|
||||
type ErrorResponse struct {
|
||||
Error string `json:"error"`
|
||||
Message string `json:"message"`
|
||||
}
|
||||
|
||||
// CreateShortLink 创建短链接
|
||||
//
|
||||
// 参数:
|
||||
// - ctx: 上下文
|
||||
// - req: 创建请求参数
|
||||
//
|
||||
// 返回:
|
||||
// - *Link: 创建的短链接信息
|
||||
// - error: 错误信息
|
||||
func (c *Client) CreateShortLink(ctx context.Context, req *CreateLinkRequest) (*Link, error) {
|
||||
// 序列化请求体
|
||||
body, err := json.Marshal(req)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("marshal request failed: %w", err)
|
||||
}
|
||||
|
||||
// 创建 HTTP 请求
|
||||
httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, c.apiURL+"/links", bytes.NewReader(body))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("create request failed: %w", err)
|
||||
}
|
||||
|
||||
// 设置请求头
|
||||
httpReq.Header.Set("Content-Type", "application/json")
|
||||
httpReq.Header.Set("X-API-KEY", c.apiKey)
|
||||
|
||||
// 发送请求
|
||||
resp, err := c.httpClient.Do(httpReq)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("send request failed: %w", err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
// 读取响应体
|
||||
respBody, err := io.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("read response failed: %w", err)
|
||||
}
|
||||
|
||||
// 检查响应状态
|
||||
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusCreated {
|
||||
var errResp ErrorResponse
|
||||
if err := json.Unmarshal(respBody, &errResp); err == nil && errResp.Error != "" {
|
||||
return nil, fmt.Errorf("kutt api error: %s - %s", errResp.Error, errResp.Message)
|
||||
}
|
||||
return nil, fmt.Errorf("kutt api error: status %d, body: %s", resp.StatusCode, string(respBody))
|
||||
}
|
||||
|
||||
// 解析响应
|
||||
var link Link
|
||||
if err := json.Unmarshal(respBody, &link); err != nil {
|
||||
return nil, fmt.Errorf("unmarshal response failed: %w", err)
|
||||
}
|
||||
|
||||
return &link, nil
|
||||
}
|
||||
|
||||
// CreateInviteShortLink 为邀请码创建短链接
|
||||
// 这是一个便捷方法,用于生成邀请链接
|
||||
//
|
||||
// 参数:
|
||||
// - ctx: 上下文
|
||||
// - baseURL: 注册页面基础 URL (例如: https://gethifast.net)
|
||||
// - inviteCode: 邀请码
|
||||
// - domain: 短链接域名 (例如: getsapp.net),可为空
|
||||
//
|
||||
// 返回:
|
||||
// - string: 短链接 URL
|
||||
// - error: 错误信息
|
||||
func (c *Client) CreateInviteShortLink(ctx context.Context, baseURL, inviteCode, domain string) (string, error) {
|
||||
// 构建目标 URL - 落地页
|
||||
// 格式:https://gethifast.net/?ic=邀请码
|
||||
targetURL := fmt.Sprintf("%s?ic=%s", baseURL, inviteCode)
|
||||
|
||||
req := &CreateLinkRequest{
|
||||
Target: targetURL,
|
||||
Description: fmt.Sprintf("Invite link for code: %s", inviteCode),
|
||||
Reuse: true, // 如果已存在相同目标 URL 则复用
|
||||
Domain: domain,
|
||||
}
|
||||
|
||||
link, err := c.CreateShortLink(ctx, req)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return link.Link, nil
|
||||
}
|
||||
@ -24,22 +24,26 @@ func SystemConfigSliceReflectToStruct(slice []*system.System, structType any) {
|
||||
}
|
||||
|
||||
if field.IsValid() && field.CanSet() {
|
||||
switch config.Type {
|
||||
case "string":
|
||||
switch field.Kind() {
|
||||
case reflect.String:
|
||||
field.SetString(config.Value)
|
||||
case "bool":
|
||||
case reflect.Bool:
|
||||
boolValue, _ := strconv.ParseBool(config.Value)
|
||||
field.SetBool(boolValue)
|
||||
case "int":
|
||||
intValue, _ := strconv.Atoi(config.Value)
|
||||
field.SetInt(int64(intValue))
|
||||
case "int64":
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
intValue, _ := strconv.ParseInt(config.Value, 10, 64)
|
||||
field.SetInt(intValue)
|
||||
case "interface":
|
||||
_ = json.Unmarshal([]byte(config.Value), field.Addr().Interface())
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
|
||||
uintValue, _ := strconv.ParseUint(config.Value, 10, 64)
|
||||
field.SetUint(uintValue)
|
||||
case reflect.Float32, reflect.Float64:
|
||||
floatValue, _ := strconv.ParseFloat(config.Value, 64)
|
||||
field.SetFloat(floatValue)
|
||||
default:
|
||||
break
|
||||
// For interface, struct, map, slice, array - try JSON unmarshal
|
||||
if config.Value != "" {
|
||||
_ = json.Unmarshal([]byte(config.Value), field.Addr().Interface())
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ -32,9 +32,17 @@ func (l *RateLogic) ProcessTask(ctx context.Context, _ *asynq.Task) error {
|
||||
CurrencyUnit string
|
||||
CurrencySymbol string
|
||||
AccessKey string
|
||||
FixedRate float64
|
||||
}{}
|
||||
tool.SystemConfigSliceReflectToStruct(currency, &configs)
|
||||
|
||||
// Check if fixed rate is enabled (greater than 0)
|
||||
if configs.FixedRate > 0 {
|
||||
l.svcCtx.ExchangeRate = configs.FixedRate
|
||||
logger.WithContext(ctx).Infof("[RateLogic] Use Fixed Exchange Rate: %f", configs.FixedRate)
|
||||
return nil
|
||||
}
|
||||
|
||||
// Skip conversion if no exchange rate API key configured
|
||||
if configs.AccessKey == "" {
|
||||
logger.Debugf("[RateLogic] skip exchange rate, no access key configured")
|
||||
|
||||
5
用户绑定.md
5
用户绑定.md
@ -19,3 +19,8 @@
|
||||
用户设备记录 和 认证方式 都会迁移到 邮箱主用户下; 使用邮箱主用户的资源, 设备用户资源丢弃
|
||||
邮箱不存在的情况下:
|
||||
临时创建一个新的邮箱用户, 并将设备认证方式和记录转移到这个新用户下
|
||||
|
||||
|
||||
|
||||
docker build -f Dockerfile --platform linux/amd64 --build-arg TARGETARCH=amd64 -t registry.kxsw.us/ppanel/new-server:v1.0.2
|
||||
docker push registry.kxsw.us/ppanel/new-server:v1.0.2
|
||||
|
||||
Loading…
x
Reference in New Issue
Block a user