L3 Implementation Roadmap & Checklist
Based on: L3_ARCHITECTURE_PLAN.md v2.0
Start Date: 2026-01-28
Estimated Duration: 8-12 days (the phases below run through Day 12)
Quick Start Checklist
✅ Pre-requisites
- L1 database complete (208 matches)
- L2 database complete (100% coverage, 51,860 rows)
- L2 schema documented
- Profile requirements analyzed
- L3 architecture designed
🎯 Implementation Phases
Phase 1: Schema & Infrastructure (Day 1-2)
1.1 Create L3 Database Schema
- Create database/L3/schema.sql
  - dm_player_features (207 columns)
  - dm_player_match_history
  - dm_player_map_stats
  - dm_player_weapon_stats
  - All indexes
1.2 Initialize L3 Database
- Update init_db() in database/L3/L3_Builder.py
- Run schema creation
- Verify tables created
1.3 Processor Base Classes
- Create database/L3/processors/__init__.py
- Create database/L3/processors/base_processor.py
  - BaseFeatureProcessor interface
  - SafeAggregator utility class
  - Z-score normalization functions
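The base layer above can be sketched as follows. `SafeAggregator` and `BaseFeatureProcessor` match the names in the plan, but their exact signatures are assumptions until base_processor.py is written:

```python
import sqlite3


class SafeAggregator:
    """Division helper that tolerates empty aggregates and NULLs."""

    @staticmethod
    def ratio(numerator, denominator, default=0.0):
        # SUM()/AVG() over an empty set yields None; guard both cases.
        if not denominator:
            return default
        return (numerator or 0) / denominator


def z_score(value, mean, std):
    """Standard z-score; 0.0 when the population has zero spread."""
    return 0.0 if std == 0 else (value - mean) / std


class BaseFeatureProcessor:
    """Interface every tier processor implements."""

    prefix = ""  # feature-name prefix, e.g. 'core_' or 'tac_'

    @classmethod
    def calculate(cls, steam_id: str, conn_l2: sqlite3.Connection) -> dict:
        """Return {feature_name: value} for one player."""
        raise NotImplementedError
```

Keeping these in one base module lets every processor share the same NULL-safety behavior, which matters for the "no critical NULL values" success criterion.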
Acceptance criteria:
sqlite3 database/L3/L3.db ".tables"
# Expected output: dm_player_features, dm_player_match_history, dm_player_map_stats, dm_player_weapon_stats
Phase 2: Tier 1 - Core Processors (Day 3-4)
2.1 BasicProcessor Implementation
- Create database/L3/processors/basic_processor.py
Sub-tasks:
- calculate_basic_stats() - 15 columns
  - AVG(rating, rating2, kd, adr, kast, rws) from fact_match_players
  - AVG(headshot_count), hs_rate = SUM(hs)/SUM(kills)
  - total_kills, total_deaths, total_assists
  - kpr, dpr, survival_rate
- calculate_match_stats() - 8 columns
  - win_rate, wins, losses
  - avg_match_duration from fact_matches
  - avg_mvps, mvp_rate
  - avg_elo_change, total_elo_gained from fact_match_teams
- calculate_weapon_stats() - 12 columns
  - avg_awp_kills, awp_usage_rate
  - avg_knife_kills, avg_zeus_kills, zeus_buy_rate
  - top_weapon (GROUP BY weapon in fact_round_events)
  - weapon_diversity (Shannon entropy)
  - rifle/pistol/smg hs_rates
- calculate_objective_stats() - 6 columns
  - avg_plants, avg_defuses, avg_flash_assists
  - plant_success_rate, defuse_success_rate
  - objective_impact (weighted score)
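A minimal sketch of calculate_basic_stats() covering a subset of the 15 columns; the column names on fact_match_players follow the sub-task list above and are assumptions until checked against the L2 schema:

```python
import sqlite3


def calculate_basic_stats(steam_id: str, conn_l2: sqlite3.Connection) -> dict:
    """Core aggregates for one player; all divisions are guarded so a
    player with no rows (or zero kills) yields defaults, not crashes."""
    cur = conn_l2.cursor()
    cur.execute(
        """
        SELECT AVG(rating), AVG(kd), AVG(adr),
               SUM(headshot_count), SUM(kills), SUM(deaths), SUM(assists)
        FROM fact_match_players
        WHERE steam_id_64 = ?
        """,
        (steam_id,),
    )
    avg_rating, avg_kd, avg_adr, hs, kills, deaths, assists = cur.fetchone()
    return {
        "core_avg_rating": avg_rating or 0.0,
        "core_avg_kd": avg_kd or 0.0,
        "core_avg_adr": avg_adr or 0.0,
        "core_total_kills": kills or 0,
        "core_total_deaths": deaths or 0,
        "core_total_assists": assists or 0,
        # hs_rate = SUM(hs)/SUM(kills), guarded against zero kills.
        "core_hs_rate": (hs or 0) / kills if kills else 0.0,
    }
```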
Test case:
features = BasicProcessor.calculate('76561198012345678', conn_l2)
assert 'core_avg_rating' in features
assert features['core_total_kills'] > 0
assert 0 <= features['core_hs_rate'] <= 1
Phase 3: Tier 2 - Tactical Processors (Day 4-5)
3.1 TacticalProcessor Implementation
- Create database/L3/processors/tactical_processor.py
Sub-tasks:
- calculate_opening_impact() - 8 columns
  - avg_fk, avg_fd from fact_match_players
  - fk_rate, fd_rate
  - fk_success_rate (team win when FK)
  - entry_kill_rate, entry_death_rate
  - opening_duel_winrate
- calculate_multikill() - 6 columns
  - avg_2k, avg_3k, avg_4k, avg_5k
  - multikill_rate
  - ace_count (5k count)
- calculate_clutch() - 10 columns
  - clutch_1v1/1v2 attempts/wins/rate
  - clutch_1v3_plus aggregated
  - clutch_impact_score (weighted)
- calculate_utility() - 12 columns
  - util_X_per_round for flash/smoke/molotov/he
  - util_usage_rate
  - nade_dmg metrics
  - flash_efficiency, smoke_timing_score
  - util_impact_score
- calculate_economy() - 8 columns
  - dmg_per_1k from fact_round_player_economy
  - kpr/kd for eco/force/full rounds
  - save_discipline, force_success_rate
  - eco_efficiency_score
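The eco/force/full split needs a round-type classifier. A minimal sketch: the dollar thresholds below are illustrative assumptions, not values from the plan, and should be tuned against fact_round_player_economy:

```python
def classify_round_buy(equipment_value: int) -> str:
    """Bucket a round by equipment value. Thresholds are assumed
    placeholders (eco < $2000, force < $4000, else full buy)."""
    if equipment_value < 2000:
        return "eco"
    if equipment_value < 4000:
        return "force"
    return "full"


def dmg_per_1k(total_damage: float, total_spent: float) -> float:
    """Damage dealt per $1000 spent; 0.0 when nothing was spent."""
    return 1000.0 * total_damage / total_spent if total_spent else 0.0
```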
Test:
features = TacticalProcessor.calculate('76561198012345678', conn_l2)
assert 'tac_fk_rate' in features
assert features['tac_multikill_rate'] >= 0
Phase 4: Tier 3 - Intelligence Processors (Day 5-7)
4.1 IntelligenceProcessor Implementation
- Create database/L3/processors/intelligence_processor.py
Sub-tasks:
- calculate_high_iq_kills() - 8 columns
  - wallbang/smoke/blind/noscope kills from fact_round_events flags
  - Rates: X_kills / total_kills
  - high_iq_score (weighted formula)
- calculate_timing_analysis() - 12 columns
  - early/mid/late kills by event_time bins (0-30s, 30-60s, 60s+)
  - timing shares
  - avg_kill_time, avg_death_time
  - aggression_index, patience_score
  - first_contact_time (MIN(event_time) per round)
- calculate_pressure_performance() - 10 columns
  - comeback_kd/rating (when down 4+ rounds)
  - losing_streak_kd (3+ round loss streak)
  - matchpoint_kpr/rating (at 15-X or 12-X)
  - clutch_composure, entry_in_loss
  - pressure_performance_index, big_moment_score
  - tilt_resistance
- calculate_position_mastery() - 15 columns ⚠️ Complex
  - site_a/b/mid_control_rate from xyz clustering
  - favorite_position (most common cluster)
  - position_diversity (entropy)
  - rotation_speed (distance between kills)
  - map_coverage, defensive/aggressive positioning
  - lurk_tendency, site_anchor_score
  - spatial_iq_score
- calculate_trade_network() - 8 columns
  - trade_kill_count (kills within 5s of teammate death)
  - trade_kill_rate
  - trade_response_time (AVG seconds)
  - trade_given (deaths traded by teammate)
  - trade_balance, trade_efficiency
  - teamwork_score
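The 5-second trade window can be computed with a time-bounded self-JOIN on fact_round_events (the same approach the risk section recommends). Column names such as event_time, victim_steam_id, and victim_team are assumptions about the L2 schema:

```python
import sqlite3

TRADE_WINDOW_S = 5.0  # a kill within 5s of a teammate's death counts as a trade


def count_trade_kills(steam_id: str, conn_l2: sqlite3.Connection) -> int:
    """Count kills by `steam_id` whose victim had just killed one of
    `steam_id`'s teammates within the previous TRADE_WINDOW_S seconds."""
    cur = conn_l2.cursor()
    cur.execute(
        """
        SELECT COUNT(*)
        FROM fact_round_events k
        JOIN fact_round_events d
          ON d.round_id = k.round_id
         AND d.attacker_steam_id = k.victim_steam_id  -- our victim was the killer
         AND d.victim_team = k.attacker_team          -- ...of one of our teammates
         AND k.event_time - d.event_time BETWEEN 0 AND ?
        WHERE k.attacker_steam_id = ?
        """,
        (TRADE_WINDOW_S, steam_id),
    )
    return cur.fetchone()[0]
```

An index on fact_round_events(round_id, event_time) keeps this self-JOIN from going quadratic per round.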
Special note on Position Mastery:
# Requires sklearn DBSCAN clustering
from sklearn.cluster import DBSCAN

def cluster_player_positions(steam_id, conn_l2):
    """Extract xyz coordinates from fact_round_events and cluster them."""
    cursor = conn_l2.cursor()
    cursor.execute("""
        SELECT attacker_pos_x, attacker_pos_y, attacker_pos_z
        FROM fact_round_events
        WHERE attacker_steam_id = ?
          AND attacker_pos_x IS NOT NULL
    """, (steam_id,))
    coords = cursor.fetchall()
    # DBSCAN clustering...
Test:
features = IntelligenceProcessor.calculate('76561198012345678', conn_l2)
assert 'int_high_iq_score' in features
assert features['int_timing_early_kill_share'] + features['int_timing_mid_kill_share'] + features['int_timing_late_kill_share'] <= 1.1 # Allow rounding
Phase 5: Tier 4 - Meta Processors (Day 7-8)
5.1 MetaProcessor Implementation
- Create database/L3/processors/meta_processor.py
Sub-tasks:
- calculate_stability() - 8 columns
  - rating_volatility (STDDEV of last 20 matches)
  - recent_form_rating (AVG last 10)
  - win/loss_rating
  - rating_consistency (100 - volatility_norm)
  - time_rating_correlation (CORR(duration, rating))
  - map_stability, elo_tier_stability
- calculate_side_preference() - 14 columns
  - side_ct/t_rating from fact_match_players_ct/t
  - side_ct/t_kd, win_rate, fk_rate, kast
  - side_rating_diff, side_kd_diff
  - side_preference ('CT'/'T'/'Balanced')
  - side_balance_score
- calculate_opponent_adaptation() - 12 columns
  - vs_lower/similar/higher_elo_rating/kd
  - Based on fact_match_teams.group_origin_elo difference
  - elo_adaptation, stomping_score, upset_score
  - consistency_across_elos, rank_resistance
  - smurf_detection
- calculate_map_specialization() - 10 columns
  - best/worst_map, best/worst_rating
  - map_diversity (entropy)
  - map_pool_size (maps with 5+ matches)
  - map_specialist_score, map_versatility
  - comfort_zone_rate, map_adaptation
- calculate_session_pattern() - 8 columns
  - avg_matches_per_day
  - longest_streak (consecutive days)
  - weekend/weekday_rating
  - morning/afternoon/evening/night_rating (based on timestamp)
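SQLite ships no STDDEV aggregate, so rating_volatility is easiest to compute in Python over the last-N window. A sketch, where match_date as the ordering column is an assumption:

```python
import sqlite3
import statistics


def rating_volatility(steam_id: str, conn_l2: sqlite3.Connection,
                      window: int = 20) -> float:
    """Population stddev of the player's last `window` match ratings;
    0.0 when fewer than two rated matches exist."""
    cur = conn_l2.cursor()
    cur.execute(
        """
        SELECT p.rating
        FROM fact_match_players p
        JOIN fact_matches m ON m.match_id = p.match_id
        WHERE p.steam_id_64 = ?
        ORDER BY m.match_date DESC
        LIMIT ?
        """,
        (steam_id, window),
    )
    ratings = [r[0] for r in cur.fetchall() if r[0] is not None]
    return statistics.pstdev(ratings) if len(ratings) >= 2 else 0.0
```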
Test:
features = MetaProcessor.calculate('76561198012345678', conn_l2)
assert 'meta_rating_volatility' in features
assert features['meta_side_preference'] in ['CT', 'T', 'Balanced']
Phase 6: Tier 5 - Composite Processors (Day 8)
6.1 CompositeProcessor Implementation
- Create database/L3/processors/composite_processor.py
Sub-tasks:
- normalize_and_standardize() helper
  - Z-score normalization function
  - Global mean/std calculation from all players
  - Map Z-score to 0-100 range
- calculate_radar_scores() - 8 scores
  - score_aim: 25% Rating + 20% KD + 15% ADR + 10% DuelWin + 10% HighEloKD + 20% MultiKill
  - score_clutch: 25% 1v3+ + 20% MatchPtWin + 20% ComebackKD + 15% PressureEntry + 20% Rating
  - score_pistol: 30% PistolKills + 30% PistolWin + 20% PistolKD + 20% PistolHS%
  - score_defense: 35% CT_Rating + 35% T_Rating + 15% CT_FK + 15% T_FK
  - score_utility: 35% UsageRate + 25% NadeDmg + 20% FlashEff + 20% FlashEnemy
  - score_stability: 30% (100-Volatility) + 30% LossRating + 20% WinRating + 20% Consistency
  - score_economy: 50% Dmg/$1k + 30% EcoKPR + 20% SaveRoundKD
  - score_pace: 40% EntryTiming + 30% TradeSpeed + 30% AggressionIndex
- calculate_overall_score() - AVG of 8 scores
- classify_tier() - Performance tier
  - Elite: overall > 75
  - Advanced: 60-75
  - Intermediate: 40-60
  - Beginner: < 40
- calculate_percentile() - Rank among all players
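A sketch of the Z-score-to-0-100 mapping and the tier cutoffs. The ±2.5σ clamp (so one σ is worth 20 points and the population mean lands on 50) is an assumption, and shared tier boundaries are assigned to the higher tier since the ranges above overlap at 60 and 40:

```python
def z_to_score(z: float, clamp: float = 2.5) -> float:
    """Map a z-score to 0-100: clamp at +/-2.5 sigma, 20 points per
    sigma. Clamp width is an assumed tuning parameter."""
    z = max(-clamp, min(clamp, z))
    return 50.0 + 20.0 * z


def classify_tier(overall: float) -> str:
    """Tier cutoffs from the plan; boundary values (60, 40) go to the
    higher tier, a choice the plan leaves open."""
    if overall > 75:
        return "Elite"
    if overall >= 60:
        return "Advanced"
    if overall >= 40:
        return "Intermediate"
    return "Beginner"
```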
Dependency:
def calculate(steam_id: str, conn_l2: sqlite3.Connection, pre_features: dict) -> dict:
    """
    Requires the features from the previous four tiers as input.
    Args:
        pre_features: all Tier 1-4 features
    """
    pass
Test:
# All prerequisite (Tier 1-4) features must be calculated first
features = {}
features.update(BasicProcessor.calculate(steam_id, conn_l2))
features.update(TacticalProcessor.calculate(steam_id, conn_l2))
features.update(IntelligenceProcessor.calculate(steam_id, conn_l2))
features.update(MetaProcessor.calculate(steam_id, conn_l2))
composite = CompositeProcessor.calculate(steam_id, conn_l2, features)
assert 0 <= composite['score_aim'] <= 100
assert composite['tier_classification'] in ['Elite', 'Advanced', 'Intermediate', 'Beginner']
Phase 7: L3_Builder Integration (Day 8-9)
7.1 Main Builder Logic
- Update database/L3/L3_Builder.py
  - Import all processors
  - Main loop: iterate all players from dim_players
  - Call processors in order
  - _upsert_features() helper
  - Batch commit every 100 players
  - Progress logging
def main():
    logger.info("Starting L3 Builder...")
    # 1. Init DB
    init_db()
    # 2. Connect
    conn_l2 = sqlite3.connect(L2_DB_PATH)
    conn_l3 = sqlite3.connect(L3_DB_PATH)
    # 3. Get all players
    cursor = conn_l2.cursor()
    cursor.execute("SELECT DISTINCT steam_id_64 FROM dim_players")
    players = cursor.fetchall()
    logger.info(f"Processing {len(players)} players...")
    for idx, (steam_id,) in enumerate(players, 1):
        try:
            # 4. Calculate features tier by tier
            features = {}
            features.update(BasicProcessor.calculate(steam_id, conn_l2))
            features.update(TacticalProcessor.calculate(steam_id, conn_l2))
            features.update(IntelligenceProcessor.calculate(steam_id, conn_l2))
            features.update(MetaProcessor.calculate(steam_id, conn_l2))
            features.update(CompositeProcessor.calculate(steam_id, conn_l2, features))
            # 5. Upsert to L3
            _upsert_features(conn_l3, steam_id, features)
            # 6. Commit batch
            if idx % 100 == 0:
                conn_l3.commit()
                logger.info(f"Processed {idx}/{len(players)} players")
        except Exception as e:
            logger.error(f"Error processing {steam_id}: {e}")
    conn_l3.commit()
    logger.info("Done!")
7.2 Auxiliary Tables Population
- Populate dm_player_match_history
  - FROM fact_match_players JOIN fact_matches
  - ORDER BY match date
  - Calculate match_sequence, rolling averages
- Populate dm_player_map_stats
  - GROUP BY steam_id, map_name
  - FROM fact_match_players
- Populate dm_player_weapon_stats
  - GROUP BY steam_id, weapon_name
  - FROM fact_round_events
  - TOP 10 weapons per player
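The rolling averages for dm_player_match_history are simplest to compute in Python after sorting by match date; the window size of 5 here is an assumption:

```python
def rolling_averages(values, window=5):
    """Trailing-window mean per match. `values` must already be ordered
    by match date; early matches use a shorter window instead of NULL."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```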
7.3 Full Build Test
- Run: python database/L3/L3_Builder.py
- Verify: All players processed
- Check: Row counts in all L3 tables
- Validate: Sample features make sense
Acceptance criteria:
SELECT COUNT(*) FROM dm_player_features; -- should equal the dim_players count
SELECT AVG(core_avg_rating) FROM dm_player_features; -- should be close to 1.0
SELECT COUNT(*) FROM dm_player_features WHERE score_aim > 0; -- most players should have scores
Phase 8: Web Services Refactoring (Day 9-10)
8.1 Create PlayerService
- Create web/services/player_service.py
class PlayerService:
    @staticmethod
    def get_player_features(steam_id: str) -> dict:
        """Fetch the full feature set (dm_player_features)."""
        pass

    @staticmethod
    def get_player_radar_data(steam_id: str) -> dict:
        """Fetch the 8-dimension radar chart data."""
        pass

    @staticmethod
    def get_player_core_stats(steam_id: str) -> dict:
        """Fetch the core dashboard stats."""
        pass

    @staticmethod
    def get_player_history(steam_id: str, limit: int = 20) -> list:
        """Fetch historical trend data."""
        pass

    @staticmethod
    def get_player_map_stats(steam_id: str) -> list:
        """Fetch per-map statistics."""
        pass

    @staticmethod
    def get_player_weapon_stats(steam_id: str, top_n: int = 10) -> list:
        """Fetch the top N weapons."""
        pass

    @staticmethod
    def get_players_ranking(order_by: str = 'core_avg_rating',
                            limit: int = 100,
                            offset: int = 0) -> list:
        """Fetch the leaderboard."""
        pass
- Implement all methods
- Add error handling
- Add caching (optional)
8.2 Refactor Routes
- Update the /profile/<steam_id> route in web/routes/players.py
  - Use PlayerService instead of direct DB queries
  - Pass features dict to template
- Add API endpoints
  - /api/players/<steam_id>/features
  - /api/players/ranking
  - /api/players/<steam_id>/history
8.3 Update feature_service.py
- Mark old rebuild methods as DEPRECATED
- Redirect to L3_Builder.py
- Keep query methods for backward compatibility
Phase 9: Frontend Integration (Day 10-11)
9.1 Update profile.html Template
- Dashboard cards: use features.core_*
- Radar chart: use features.score_*
- Trend chart: use history data
- Core Performance section
- Gunfight section
- Opening Impact section
- Clutch section
- High IQ Kills section
- Map stats table
- Weapon stats table
9.2 JavaScript Integration
- Radar chart rendering (Chart.js)
- Trend chart rendering
- Dynamic data loading
9.3 UI Polish
- Responsive design
- Loading states
- Error handling
- Tooltips for complex metrics
Phase 10: Testing & Validation (Day 11-12)
10.1 Unit Tests
- Test each processor independently
- Mock L2 data
- Verify calculation correctness
10.2 Integration Tests
- Full L3_Builder run
- Verify all tables populated
- Check data consistency
10.3 Performance Tests
- Benchmark L3_Builder runtime
- Profile slow queries
- Optimize if needed
10.4 Data Quality Checks
- Verify no NULL values where expected
- Check value ranges (e.g., 0 <= rate <= 1)
- Validate composite scores (0-100)
- Cross-check with L2 source data
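The value-range check above can be scripted directly against L3. A sketch that flags any stored rate violating the 0-1 convention from the notes (the caller supplies the list of rate columns, since the schema is not finalized here):

```python
import sqlite3


def check_rate_ranges(conn_l3: sqlite3.Connection, rate_columns) -> list:
    """Return the names of rate columns holding any value outside [0, 1].
    NULLs are ignored; they are a separate quality check."""
    bad = []
    for col in rate_columns:
        n = conn_l3.execute(
            f"SELECT COUNT(*) FROM dm_player_features "
            f"WHERE {col} IS NOT NULL AND ({col} < 0 OR {col} > 1)"
        ).fetchone()[0]
        if n:
            bad.append(col)
    return bad
```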
Success Criteria
✅ L3 Database
- All 4 tables created with correct schemas
- dm_player_features has 207 columns
- All players from L2 have corresponding L3 rows
- No critical NULL values
✅ Feature Calculation
- All 5 processors implemented and tested
- 207 features calculated correctly
- Composite scores in 0-100 range
- Tier classification working
✅ Services & Routes
- PlayerService provides all query methods
- Routes use services correctly
- API endpoints return valid JSON
- No direct DB queries in routes
✅ Frontend
- Profile page renders correctly
- Radar chart displays 8 dimensions
- Trend chart shows history
- All sections populated with data
✅ Performance
- L3_Builder completes in < 20 min for 1000 players
- Profile page loads in < 200ms
- No N+1 query problems
Risk Mitigation
🔴 High Risk Items
- Position Mastery (xyz clustering)
  - Mitigation: Start with simple grid-based approach, defer ML clustering
- Composite Score Standardization
  - Mitigation: Use simple percentile-based normalization as fallback
- Performance at Scale
  - Mitigation: Implement incremental updates, add indexes
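The grid-based fallback for Position Mastery can be this simple; the 250-unit cell size is an assumed starting point to tune per map:

```python
from collections import Counter


def grid_cell(x: float, y: float, cell_size: float = 250.0) -> tuple:
    """Snap a position to a coarse grid cell (z is ignored here)."""
    return (int(x // cell_size), int(y // cell_size))


def favorite_position(positions, cell_size: float = 250.0):
    """Most-visited grid cell: a stand-in for DBSCAN's densest cluster,
    good enough to unblock favorite_position and map_coverage."""
    counts = Counter(grid_cell(x, y, cell_size) for x, y in positions)
    return counts.most_common(1)[0][0] if counts else None
```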
🟡 Medium Risk Items
- Time Window Calculations (trades)
  - Mitigation: Use efficient self-JOIN with time bounds
- Missing Data Handling
  - Mitigation: Comprehensive NULL handling, default values
🟢 Low Risk Items
- Basic aggregations (AVG, SUM, COUNT)
- Service layer refactoring
- Template updates
Next Actions
Immediate (Today):
- Create schema.sql
- Initialize L3.db
- Create processor base classes
Tomorrow:
- Implement BasicProcessor
- Test with sample player
- Start TacticalProcessor
This Week:
- Complete all 5 processors
- Full L3_Builder run
- Service refactoring
Next Week:
- Frontend integration
- Testing & validation
- Documentation
Notes
- Keep each processor independent so it can be unit-tested on its own
- Use dynamic SQL to avoid column-count errors
- Store all rates/percentages in the 0-1 range; multiply by 100 for UI display
- Use Unix timestamps (INTEGER) consistently
- Follow the "query, don't compute" principle: the web layer only SELECTs and never aggregates