
Commit 8674911033 by Harun CAN (2026-01-30 02:52:42 +03:00)
110 changed files with 23247 additions and 0 deletions

View File

@@ -0,0 +1,209 @@
---
name: code-reviewer
description: Comprehensive code review skill for TypeScript, JavaScript, Python, Swift, Kotlin, Go. Includes automated code analysis, best practice checking, security scanning, and review checklist generation. Use when reviewing pull requests, providing code feedback, identifying issues, or ensuring code quality standards.
---
# Code Reviewer
A complete code-review toolkit built on modern tools and best practices.
## Quick Start
### Main Capabilities
This skill provides three core capabilities through automated scripts:
```bash
# Script 1: PR Analyzer
python scripts/pr_analyzer.py [options]
# Script 2: Code Quality Checker
python scripts/code_quality_checker.py [options]
# Script 3: Review Report Generator
python scripts/review_report_generator.py [options]
```
## Core Capabilities
### 1. PR Analyzer
Automated tool for pull-request analysis tasks.
**Features:**
- Automated scaffolding
- Best practices built-in
- Configurable templates
- Quality checks
**Usage:**
```bash
python scripts/pr_analyzer.py <project-path> [options]
```
### 2. Code Quality Checker
Comprehensive analysis and optimization tool.
**Features:**
- Deep analysis
- Performance metrics
- Recommendations
- Automated fixes
**Usage:**
```bash
python scripts/code_quality_checker.py <target-path> [--verbose]
```
### 3. Review Report Generator
Advanced tooling for specialized tasks.
**Features:**
- Expert-level automation
- Custom configurations
- Integration ready
- Production-grade output
**Usage:**
```bash
python scripts/review_report_generator.py [arguments] [options]
```
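All three scripts share the same argparse contract (a positional `target` plus `--verbose`, `--json`, and `--output`, as defined in the scripts later in this commit), so a small wrapper can chain them. A minimal sketch, not part of the commit; the intermediate filenames are arbitrary:
```python
import json
import subprocess
import sys

SCRIPTS = [
    "scripts/pr_analyzer.py",
    "scripts/code_quality_checker.py",
    "scripts/review_report_generator.py",
]

def run_all(target: str) -> list:
    """Run every script against target, collecting each JSON result."""
    results = []
    for script in SCRIPTS:
        out = script.rsplit("/", 1)[-1] + ".json"  # e.g. pr_analyzer.py.json
        # --json --output makes each script write its results dict to a file
        subprocess.run(
            [sys.executable, script, target, "--json", "--output", out],
            check=True,
        )
        with open(out) as f:
            results.append(json.load(f))
    return results

if __name__ == "__main__":
    for r in run_all("."):
        print(r["target"], r["status"], f"{len(r['findings'])} findings")
```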
## Reference Documentation
### Code Review Checklist
Comprehensive guide available in `references/code_review_checklist.md`:
- Detailed patterns and practices
- Code examples
- Best practices
- Anti-patterns to avoid
- Real-world scenarios
### Coding Standards
Complete workflow documentation in `references/coding_standards.md`:
- Step-by-step processes
- Optimization strategies
- Tool integrations
- Performance tuning
- Troubleshooting guide
### Common Antipatterns
Technical reference guide in `references/common_antipatterns.md`:
- Technology stack details
- Configuration examples
- Integration patterns
- Security considerations
- Scalability guidelines
## Tech Stack
**Languages:** TypeScript, JavaScript, Python, Go, Swift, Kotlin
**Frontend:** React, Next.js, React Native, Flutter
**Backend:** Node.js, Express, GraphQL, REST APIs
**Database:** PostgreSQL, Prisma, NeonDB, Supabase
**DevOps:** Docker, Kubernetes, Terraform, GitHub Actions, CircleCI
**Cloud:** AWS, GCP, Azure
## Development Workflow
### 1. Setup and Configuration
```bash
# Install dependencies
npm install
# or
pip install -r requirements.txt
# Configure environment
cp .env.example .env
```
### 2. Run Quality Checks
```bash
# Use the analyzer script
python scripts/code_quality_checker.py .
# Review recommendations
# Apply fixes
```
### 3. Implement Best Practices
Follow the patterns and practices documented in:
- `references/code_review_checklist.md`
- `references/coding_standards.md`
- `references/common_antipatterns.md`
## Best Practices Summary
### Code Quality
- Follow established patterns
- Write comprehensive tests
- Document decisions
- Review regularly
### Performance
- Measure before optimizing
- Use appropriate caching
- Optimize critical paths
- Monitor in production
### Security
- Validate all inputs
- Use parameterized queries
- Implement proper authentication
- Keep dependencies updated
### Maintainability
- Write clear code
- Use consistent naming
- Add helpful comments
- Keep it simple
## Common Commands
```bash
# Development
npm run dev
npm run build
npm run test
npm run lint
# Analysis
python scripts/code_quality_checker.py .
python scripts/review_report_generator.py --analyze
# Deployment
docker build -t app:latest .
docker-compose up -d
kubectl apply -f k8s/
```
## Troubleshooting
### Common Issues
Check the comprehensive troubleshooting section in `references/common_antipatterns.md`.
### Getting Help
- Review reference documentation
- Check script output messages
- Consult tech stack documentation
- Review error logs
## Resources
- Pattern Reference: `references/code_review_checklist.md`
- Workflow Guide: `references/coding_standards.md`
- Technical Guide: `references/common_antipatterns.md`
- Tool Scripts: `scripts/` directory

View File

@@ -0,0 +1,103 @@
# Code Review Checklist
## Overview
This reference guide provides comprehensive information for the code-reviewer skill.
## Patterns and Practices
### Pattern 1: Best Practice Implementation
**Description:**
Detailed explanation of the pattern.
**When to Use:**
- Scenario 1
- Scenario 2
- Scenario 3
**Implementation:**
```typescript
// Example code implementation
export class Example {
// Implementation details
}
```
**Benefits:**
- Benefit 1
- Benefit 2
- Benefit 3
**Trade-offs:**
- Consider 1
- Consider 2
- Consider 3
### Pattern 2: Advanced Technique
**Description:**
Another important pattern for the code-reviewer skill.
**Implementation:**
```typescript
// Advanced example
async function advancedExample() {
// Code here
}
```
## Guidelines
### Code Organization
- Clear structure
- Logical separation
- Consistent naming
- Proper documentation
### Performance Considerations
- Optimization strategies
- Bottleneck identification
- Monitoring approaches
- Scaling techniques
### Security Best Practices
- Input validation
- Authentication
- Authorization
- Data protection
## Common Patterns
### Pattern A
Implementation details and examples.
### Pattern B
Implementation details and examples.
### Pattern C
Implementation details and examples.
## Anti-Patterns to Avoid
### Anti-Pattern 1
What not to do and why.
### Anti-Pattern 2
What not to do and why.
## Tools and Resources
### Recommended Tools
- Tool 1: Purpose
- Tool 2: Purpose
- Tool 3: Purpose
### Further Reading
- Resource 1
- Resource 2
- Resource 3
## Conclusion
Key takeaways for using this reference guide effectively.

View File

@@ -0,0 +1,103 @@
# Coding Standards
## Overview
This reference guide provides comprehensive information for the code-reviewer skill.
## Patterns and Practices
### Pattern 1: Best Practice Implementation
**Description:**
Detailed explanation of the pattern.
**When to Use:**
- Scenario 1
- Scenario 2
- Scenario 3
**Implementation:**
```typescript
// Example code implementation
export class Example {
// Implementation details
}
```
**Benefits:**
- Benefit 1
- Benefit 2
- Benefit 3
**Trade-offs:**
- Consider 1
- Consider 2
- Consider 3
### Pattern 2: Advanced Technique
**Description:**
Another important pattern for the code-reviewer skill.
**Implementation:**
```typescript
// Advanced example
async function advancedExample() {
// Code here
}
```
## Guidelines
### Code Organization
- Clear structure
- Logical separation
- Consistent naming
- Proper documentation
### Performance Considerations
- Optimization strategies
- Bottleneck identification
- Monitoring approaches
- Scaling techniques
### Security Best Practices
- Input validation
- Authentication
- Authorization
- Data protection
## Common Patterns
### Pattern A
Implementation details and examples.
### Pattern B
Implementation details and examples.
### Pattern C
Implementation details and examples.
## Anti-Patterns to Avoid
### Anti-Pattern 1
What not to do and why.
### Anti-Pattern 2
What not to do and why.
## Tools and Resources
### Recommended Tools
- Tool 1: Purpose
- Tool 2: Purpose
- Tool 3: Purpose
### Further Reading
- Resource 1
- Resource 2
- Resource 3
## Conclusion
Key takeaways for using this reference guide effectively.

View File

@@ -0,0 +1,103 @@
# Common Antipatterns
## Overview
This reference guide provides comprehensive information for the code-reviewer skill.
## Patterns and Practices
### Pattern 1: Best Practice Implementation
**Description:**
Detailed explanation of the pattern.
**When to Use:**
- Scenario 1
- Scenario 2
- Scenario 3
**Implementation:**
```typescript
// Example code implementation
export class Example {
// Implementation details
}
```
**Benefits:**
- Benefit 1
- Benefit 2
- Benefit 3
**Trade-offs:**
- Consider 1
- Consider 2
- Consider 3
### Pattern 2: Advanced Technique
**Description:**
Another important pattern for the code-reviewer skill.
**Implementation:**
```typescript
// Advanced example
async function advancedExample() {
// Code here
}
```
## Guidelines
### Code Organization
- Clear structure
- Logical separation
- Consistent naming
- Proper documentation
### Performance Considerations
- Optimization strategies
- Bottleneck identification
- Monitoring approaches
- Scaling techniques
### Security Best Practices
- Input validation
- Authentication
- Authorization
- Data protection
## Common Patterns
### Pattern A
Implementation details and examples.
### Pattern B
Implementation details and examples.
### Pattern C
Implementation details and examples.
## Anti-Patterns to Avoid
### Anti-Pattern 1
What not to do and why.
### Anti-Pattern 2
What not to do and why.
## Tools and Resources
### Recommended Tools
- Tool 1: Purpose
- Tool 2: Purpose
- Tool 3: Purpose
### Further Reading
- Resource 1
- Resource 2
- Resource 3
## Conclusion
Key takeaways for using this reference guide effectively.

View File

@@ -0,0 +1,114 @@
#!/usr/bin/env python3
"""
Code Quality Checker
Automated tool for code-review tasks
"""
import sys
import json
import argparse
from pathlib import Path
from typing import Dict
class CodeQualityChecker:
"""Main class for code quality checker functionality"""
def __init__(self, target_path: str, verbose: bool = False):
self.target_path = Path(target_path)
self.verbose = verbose
self.results = {}
def run(self) -> Dict:
"""Execute the main functionality"""
print(f"🚀 Running {self.__class__.__name__}...")
print(f"📁 Target: {self.target_path}")
try:
self.validate_target()
self.analyze()
self.generate_report()
print("✅ Completed successfully!")
return self.results
except Exception as e:
print(f"❌ Error: {e}")
sys.exit(1)
def validate_target(self):
"""Validate the target path exists and is accessible"""
if not self.target_path.exists():
raise ValueError(f"Target path does not exist: {self.target_path}")
if self.verbose:
print(f"✓ Target validated: {self.target_path}")
def analyze(self):
"""Perform the main analysis or operation"""
if self.verbose:
print("📊 Analyzing...")
# Main logic here
self.results['status'] = 'success'
self.results['target'] = str(self.target_path)
self.results['findings'] = []
# Add analysis results
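        # Hypothetical example of the shape a finding could take once real
        # checks are added; the generated template ships with none:
        #   self.results['findings'].append({
        #       'file': 'src/app.py',
        #       'line': 42,
        #       'severity': 'warning',
        #       'message': 'function exceeds 50 lines',
        #   })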
if self.verbose:
print(f"✓ Analysis complete: {len(self.results.get('findings', []))} findings")
def generate_report(self):
"""Generate and display the report"""
print("\n" + "="*50)
print("REPORT")
print("="*50)
print(f"Target: {self.results.get('target')}")
print(f"Status: {self.results.get('status')}")
print(f"Findings: {len(self.results.get('findings', []))}")
print("="*50 + "\n")
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Code Quality Checker"
)
parser.add_argument(
'target',
help='Target path to analyze or process'
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help='Enable verbose output'
)
parser.add_argument(
'--json',
action='store_true',
help='Output results as JSON'
)
parser.add_argument(
'--output', '-o',
help='Output file path'
)
args = parser.parse_args()
tool = CodeQualityChecker(
args.target,
verbose=args.verbose
)
results = tool.run()
if args.json:
output = json.dumps(results, indent=2)
if args.output:
with open(args.output, 'w') as f:
f.write(output)
print(f"Results written to {args.output}")
else:
print(output)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,114 @@
#!/usr/bin/env python3
"""
PR Analyzer
Automated tool for code-review tasks
"""
import sys
import json
import argparse
from pathlib import Path
from typing import Dict
class PrAnalyzer:
    """Main class for PR analyzer functionality"""
def __init__(self, target_path: str, verbose: bool = False):
self.target_path = Path(target_path)
self.verbose = verbose
self.results = {}
def run(self) -> Dict:
"""Execute the main functionality"""
print(f"🚀 Running {self.__class__.__name__}...")
print(f"📁 Target: {self.target_path}")
try:
self.validate_target()
self.analyze()
self.generate_report()
print("✅ Completed successfully!")
return self.results
except Exception as e:
print(f"❌ Error: {e}")
sys.exit(1)
def validate_target(self):
"""Validate the target path exists and is accessible"""
if not self.target_path.exists():
raise ValueError(f"Target path does not exist: {self.target_path}")
if self.verbose:
print(f"✓ Target validated: {self.target_path}")
def analyze(self):
"""Perform the main analysis or operation"""
if self.verbose:
print("📊 Analyzing...")
# Main logic here
self.results['status'] = 'success'
self.results['target'] = str(self.target_path)
self.results['findings'] = []
# Add analysis results
if self.verbose:
print(f"✓ Analysis complete: {len(self.results.get('findings', []))} findings")
def generate_report(self):
"""Generate and display the report"""
print("\n" + "="*50)
print("REPORT")
print("="*50)
print(f"Target: {self.results.get('target')}")
print(f"Status: {self.results.get('status')}")
print(f"Findings: {len(self.results.get('findings', []))}")
print("="*50 + "\n")
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Pr Analyzer"
)
parser.add_argument(
'target',
help='Target path to analyze or process'
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help='Enable verbose output'
)
parser.add_argument(
'--json',
action='store_true',
help='Output results as JSON'
)
parser.add_argument(
'--output', '-o',
help='Output file path'
)
args = parser.parse_args()
tool = PrAnalyzer(
args.target,
verbose=args.verbose
)
results = tool.run()
if args.json:
output = json.dumps(results, indent=2)
if args.output:
with open(args.output, 'w') as f:
f.write(output)
print(f"Results written to {args.output}")
else:
print(output)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,114 @@
#!/usr/bin/env python3
"""
Review Report Generator
Automated tool for code-review tasks
"""
import sys
import json
import argparse
from pathlib import Path
from typing import Dict
class ReviewReportGenerator:
"""Main class for review report generator functionality"""
def __init__(self, target_path: str, verbose: bool = False):
self.target_path = Path(target_path)
self.verbose = verbose
self.results = {}
def run(self) -> Dict:
"""Execute the main functionality"""
print(f"🚀 Running {self.__class__.__name__}...")
print(f"📁 Target: {self.target_path}")
try:
self.validate_target()
self.analyze()
self.generate_report()
print("✅ Completed successfully!")
return self.results
except Exception as e:
print(f"❌ Error: {e}")
sys.exit(1)
def validate_target(self):
"""Validate the target path exists and is accessible"""
if not self.target_path.exists():
raise ValueError(f"Target path does not exist: {self.target_path}")
if self.verbose:
print(f"✓ Target validated: {self.target_path}")
def analyze(self):
"""Perform the main analysis or operation"""
if self.verbose:
print("📊 Analyzing...")
# Main logic here
self.results['status'] = 'success'
self.results['target'] = str(self.target_path)
self.results['findings'] = []
# Add analysis results
if self.verbose:
print(f"✓ Analysis complete: {len(self.results.get('findings', []))} findings")
def generate_report(self):
"""Generate and display the report"""
print("\n" + "="*50)
print("REPORT")
print("="*50)
print(f"Target: {self.results.get('target')}")
print(f"Status: {self.results.get('status')}")
print(f"Findings: {len(self.results.get('findings', []))}")
print("="*50 + "\n")
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Review Report Generator"
)
parser.add_argument(
'target',
help='Target path to analyze or process'
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help='Enable verbose output'
)
parser.add_argument(
'--json',
action='store_true',
help='Output results as JSON'
)
parser.add_argument(
'--output', '-o',
help='Output file path'
)
args = parser.parse_args()
tool = ReviewReportGenerator(
args.target,
verbose=args.verbose
)
results = tool.run()
if args.json:
output = json.dumps(results, indent=2)
if args.output:
with open(args.output, 'w') as f:
f.write(output)
print(f"Results written to {args.output}")
else:
print(output)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,209 @@
---
name: receiving-code-review
description: Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technical rigor and verification, not performative agreement or blind implementation
---
# Code Review Reception
## Overview
Code review requires technical evaluation, not emotional performance.
**Core principle:** Verify before implementing. Ask before assuming. Technical correctness over social comfort.
## The Response Pattern
```
WHEN receiving code review feedback:
1. READ: Complete feedback without reacting
2. UNDERSTAND: Restate requirement in own words (or ask)
3. VERIFY: Check against codebase reality
4. EVALUATE: Technically sound for THIS codebase?
5. RESPOND: Technical acknowledgment or reasoned pushback
6. IMPLEMENT: One item at a time, test each
```
## Forbidden Responses
**NEVER:**
- "You're absolutely right!" (explicit CLAUDE.md violation)
- "Great point!" / "Excellent feedback!" (performative)
- "Let me implement that now" (before verification)
**INSTEAD:**
- Restate the technical requirement
- Ask clarifying questions
- Push back with technical reasoning if wrong
- Just start working (actions > words)
## Handling Unclear Feedback
```
IF any item is unclear:
STOP - do not implement anything yet
ASK for clarification on unclear items
WHY: Items may be related. Partial understanding = wrong implementation.
```
**Example:**
```
your human partner: "Fix 1-6"
You understand 1,2,3,6. Unclear on 4,5.
❌ WRONG: Implement 1,2,3,6 now, ask about 4,5 later
✅ RIGHT: "I understand items 1,2,3,6. Need clarification on 4 and 5 before proceeding."
```
## Source-Specific Handling
### From your human partner
- **Trusted** - implement after understanding
- **Still ask** if scope unclear
- **No performative agreement**
- **Skip to action** or technical acknowledgment
### From External Reviewers
```
BEFORE implementing:
1. Check: Technically correct for THIS codebase?
2. Check: Breaks existing functionality?
3. Check: Reason for current implementation?
4. Check: Works on all platforms/versions?
5. Check: Does reviewer understand full context?
IF suggestion seems wrong:
Push back with technical reasoning
IF can't easily verify:
Say so: "I can't verify this without [X]. Should I [investigate/ask/proceed]?"
IF conflicts with your human partner's prior decisions:
Stop and discuss with your human partner first
```
**your human partner's rule:** "External feedback - be skeptical, but check carefully"
## YAGNI Check for "Professional" Features
```
IF reviewer suggests "implementing properly":
grep codebase for actual usage
IF unused: "This endpoint isn't called. Remove it (YAGNI)?"
IF used: Then implement properly
```
**your human partner's rule:** "You and reviewer both report to me. If we don't need this feature, don't add it."
## Implementation Order
```
FOR multi-item feedback:
1. Clarify anything unclear FIRST
2. Then implement in this order:
- Blocking issues (breaks, security)
- Simple fixes (typos, imports)
- Complex fixes (refactoring, logic)
3. Test each fix individually
4. Verify no regressions
```
## When To Push Back
Push back when:
- Suggestion breaks existing functionality
- Reviewer lacks full context
- Violates YAGNI (unused feature)
- Technically incorrect for this stack
- Legacy/compatibility reasons exist
- Conflicts with your human partner's architectural decisions
**How to push back:**
- Use technical reasoning, not defensiveness
- Ask specific questions
- Reference working tests/code
- Involve your human partner if architectural
**Signal if uncomfortable pushing back out loud:** "Strange things are afoot at the Circle K"
## Acknowledging Correct Feedback
When feedback IS correct:
```
✅ "Fixed. [Brief description of what changed]"
✅ "Good catch - [specific issue]. Fixed in [location]."
✅ [Just fix it and show in the code]
❌ "You're absolutely right!"
❌ "Great point!"
❌ "Thanks for catching that!"
❌ "Thanks for [anything]"
❌ ANY gratitude expression
```
**Why no thanks:** Actions speak. Just fix it. The code itself shows you heard the feedback.
**If you catch yourself about to write "Thanks":** DELETE IT. State the fix instead.
## Gracefully Correcting Your Pushback
If you pushed back and were wrong:
```
✅ "You were right - I checked [X] and it does [Y]. Implementing now."
✅ "Verified this and you're correct. My initial understanding was wrong because [reason]. Fixing."
❌ Long apology
❌ Defending why you pushed back
❌ Over-explaining
```
State the correction factually and move on.
## Common Mistakes
| Mistake | Fix |
|---------|-----|
| Performative agreement | State requirement or just act |
| Blind implementation | Verify against codebase first |
| Batch without testing | One at a time, test each |
| Assuming reviewer is right | Check if breaks things |
| Avoiding pushback | Technical correctness > comfort |
| Partial implementation | Clarify all items first |
| Can't verify, proceed anyway | State limitation, ask for direction |
## Real Examples
**Performative Agreement (Bad):**
```
Reviewer: "Remove legacy code"
❌ "You're absolutely right! Let me remove that..."
```
**Technical Verification (Good):**
```
Reviewer: "Remove legacy code"
✅ "Checking... build target is 10.15+, this API needs 13+. Need legacy for backward compat. Current impl has wrong bundle ID - fix it or drop pre-13 support?"
```
**YAGNI (Good):**
```
Reviewer: "Implement proper metrics tracking with database, date filters, CSV export"
✅ "Grepped codebase - nothing calls this endpoint. Remove it (YAGNI)? Or is there usage I'm missing?"
```
**Unclear Item (Good):**
```
your human partner: "Fix items 1-6"
You understand 1,2,3,6. Unclear on 4,5.
✅ "Understand 1,2,3,6. Need clarification on 4 and 5 before implementing."
```
## The Bottom Line
**External feedback = suggestions to evaluate, not orders to follow.**
Verify. Question. Then implement.
No performative agreement. Technical rigor always.

View File

@@ -0,0 +1,209 @@
---
name: senior-backend
description: Comprehensive backend development skill for building scalable backend systems using NodeJS, Express, Go, Python, Postgres, GraphQL, REST APIs. Includes API scaffolding, database optimization, security implementation, and performance tuning. Use when designing APIs, optimizing database queries, implementing business logic, handling authentication/authorization, or reviewing backend code.
---
# Senior Backend
A complete toolkit for senior backend development, built on modern tools and best practices.
## Quick Start
### Main Capabilities
This skill provides three core capabilities through automated scripts:
```bash
# Script 1: API Scaffolder
python scripts/api_scaffolder.py [options]
# Script 2: Database Migration Tool
python scripts/database_migration_tool.py [options]
# Script 3: API Load Tester
python scripts/api_load_tester.py [options]
```
## Core Capabilities
### 1. API Scaffolder
Automated tool for API scaffolding tasks.
**Features:**
- Automated scaffolding
- Best practices built-in
- Configurable templates
- Quality checks
**Usage:**
```bash
python scripts/api_scaffolder.py <project-path> [options]
```
### 2. Database Migration Tool
Comprehensive analysis and optimization tool.
**Features:**
- Deep analysis
- Performance metrics
- Recommendations
- Automated fixes
**Usage:**
```bash
python scripts/database_migration_tool.py <target-path> [--verbose]
```
### 3. API Load Tester
Advanced tooling for specialized tasks.
**Features:**
- Expert-level automation
- Custom configurations
- Integration ready
- Production-grade output
**Usage:**
```bash
python scripts/api_load_tester.py [arguments] [options]
```
## Reference Documentation
### API Design Patterns
Comprehensive guide available in `references/api_design_patterns.md`:
- Detailed patterns and practices
- Code examples
- Best practices
- Anti-patterns to avoid
- Real-world scenarios
### Database Optimization Guide
Complete workflow documentation in `references/database_optimization_guide.md`:
- Step-by-step processes
- Optimization strategies
- Tool integrations
- Performance tuning
- Troubleshooting guide
### Backend Security Practices
Technical reference guide in `references/backend_security_practices.md`:
- Technology stack details
- Configuration examples
- Integration patterns
- Security considerations
- Scalability guidelines
## Tech Stack
**Languages:** TypeScript, JavaScript, Python, Go, Swift, Kotlin
**Frontend:** React, Next.js, React Native, Flutter
**Backend:** Node.js, Express, GraphQL, REST APIs
**Database:** PostgreSQL, Prisma, NeonDB, Supabase
**DevOps:** Docker, Kubernetes, Terraform, GitHub Actions, CircleCI
**Cloud:** AWS, GCP, Azure
## Development Workflow
### 1. Setup and Configuration
```bash
# Install dependencies
npm install
# or
pip install -r requirements.txt
# Configure environment
cp .env.example .env
```
### 2. Run Quality Checks
```bash
# Use the analyzer script
python scripts/database_migration_tool.py .
# Review recommendations
# Apply fixes
```
### 3. Implement Best Practices
Follow the patterns and practices documented in:
- `references/api_design_patterns.md`
- `references/database_optimization_guide.md`
- `references/backend_security_practices.md`
## Best Practices Summary
### Code Quality
- Follow established patterns
- Write comprehensive tests
- Document decisions
- Review regularly
### Performance
- Measure before optimizing
- Use appropriate caching
- Optimize critical paths
- Monitor in production
### Security
- Validate all inputs
- Use parameterized queries (see the sketch after this list)
- Implement proper authentication
- Keep dependencies updated
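For the parameterized-queries point above, a minimal sketch using the stdlib `sqlite3` driver (illustrative only; with this stack's PostgreSQL/Prisma the same placeholder principle applies):
```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice'; DROP TABLE users; --"  # hostile input

# Unsafe: interpolation lets the input rewrite the query.
# conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# Safe: the driver binds the value, so it is never parsed as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] - the hostile string is treated as a literal name
```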
### Maintainability
- Write clear code
- Use consistent naming
- Add helpful comments
- Keep it simple
## Common Commands
```bash
# Development
npm run dev
npm run build
npm run test
npm run lint
# Analysis
python scripts/database_migration_tool.py .
python scripts/api_load_tester.py --analyze
# Deployment
docker build -t app:latest .
docker-compose up -d
kubectl apply -f k8s/
```
## Troubleshooting
### Common Issues
Check the comprehensive troubleshooting section in `references/backend_security_practices.md`.
### Getting Help
- Review reference documentation
- Check script output messages
- Consult tech stack documentation
- Review error logs
## Resources
- Pattern Reference: `references/api_design_patterns.md`
- Workflow Guide: `references/database_optimization_guide.md`
- Technical Guide: `references/backend_security_practices.md`
- Tool Scripts: `scripts/` directory

View File

@@ -0,0 +1,103 @@
# API Design Patterns
## Overview
This reference guide provides comprehensive information for the senior-backend skill.
## Patterns and Practices
### Pattern 1: Best Practice Implementation
**Description:**
Detailed explanation of the pattern.
**When to Use:**
- Scenario 1
- Scenario 2
- Scenario 3
**Implementation:**
```typescript
// Example code implementation
export class Example {
// Implementation details
}
```
**Benefits:**
- Benefit 1
- Benefit 2
- Benefit 3
**Trade-offs:**
- Consider 1
- Consider 2
- Consider 3
### Pattern 2: Advanced Technique
**Description:**
Another important pattern for the senior-backend skill.
**Implementation:**
```typescript
// Advanced example
async function advancedExample() {
// Code here
}
```
## Guidelines
### Code Organization
- Clear structure
- Logical separation
- Consistent naming
- Proper documentation
### Performance Considerations
- Optimization strategies
- Bottleneck identification
- Monitoring approaches
- Scaling techniques
### Security Best Practices
- Input validation
- Authentication
- Authorization
- Data protection
## Common Patterns
### Pattern A
Implementation details and examples.
### Pattern B
Implementation details and examples.
### Pattern C
Implementation details and examples.
## Anti-Patterns to Avoid
### Anti-Pattern 1
What not to do and why.
### Anti-Pattern 2
What not to do and why.
## Tools and Resources
### Recommended Tools
- Tool 1: Purpose
- Tool 2: Purpose
- Tool 3: Purpose
### Further Reading
- Resource 1
- Resource 2
- Resource 3
## Conclusion
Key takeaways for using this reference guide effectively.

View File

@@ -0,0 +1,103 @@
# Backend Security Practices
## Overview
This reference guide provides comprehensive information for the senior-backend skill.
## Patterns and Practices
### Pattern 1: Best Practice Implementation
**Description:**
Detailed explanation of the pattern.
**When to Use:**
- Scenario 1
- Scenario 2
- Scenario 3
**Implementation:**
```typescript
// Example code implementation
export class Example {
// Implementation details
}
```
**Benefits:**
- Benefit 1
- Benefit 2
- Benefit 3
**Trade-offs:**
- Consider 1
- Consider 2
- Consider 3
### Pattern 2: Advanced Technique
**Description:**
Another important pattern for the senior-backend skill.
**Implementation:**
```typescript
// Advanced example
async function advancedExample() {
// Code here
}
```
## Guidelines
### Code Organization
- Clear structure
- Logical separation
- Consistent naming
- Proper documentation
### Performance Considerations
- Optimization strategies
- Bottleneck identification
- Monitoring approaches
- Scaling techniques
### Security Best Practices
- Input validation
- Authentication
- Authorization
- Data protection
## Common Patterns
### Pattern A
Implementation details and examples.
### Pattern B
Implementation details and examples.
### Pattern C
Implementation details and examples.
## Anti-Patterns to Avoid
### Anti-Pattern 1
What not to do and why.
### Anti-Pattern 2
What not to do and why.
## Tools and Resources
### Recommended Tools
- Tool 1: Purpose
- Tool 2: Purpose
- Tool 3: Purpose
### Further Reading
- Resource 1
- Resource 2
- Resource 3
## Conclusion
Key takeaways for using this reference guide effectively.

View File

@@ -0,0 +1,103 @@
# Database Optimization Guide
## Overview
This reference guide provides comprehensive information for the senior-backend skill.
## Patterns and Practices
### Pattern 1: Best Practice Implementation
**Description:**
Detailed explanation of the pattern.
**When to Use:**
- Scenario 1
- Scenario 2
- Scenario 3
**Implementation:**
```typescript
// Example code implementation
export class Example {
// Implementation details
}
```
**Benefits:**
- Benefit 1
- Benefit 2
- Benefit 3
**Trade-offs:**
- Consider 1
- Consider 2
- Consider 3
### Pattern 2: Advanced Technique
**Description:**
Another important pattern for the senior-backend skill.
**Implementation:**
```typescript
// Advanced example
async function advancedExample() {
// Code here
}
```
## Guidelines
### Code Organization
- Clear structure
- Logical separation
- Consistent naming
- Proper documentation
### Performance Considerations
- Optimization strategies
- Bottleneck identification
- Monitoring approaches
- Scaling techniques
### Security Best Practices
- Input validation
- Authentication
- Authorization
- Data protection
## Common Patterns
### Pattern A
Implementation details and examples.
### Pattern B
Implementation details and examples.
### Pattern C
Implementation details and examples.
## Anti-Patterns to Avoid
### Anti-Pattern 1
What not to do and why.
### Anti-Pattern 2
What not to do and why.
## Tools and Resources
### Recommended Tools
- Tool 1: Purpose
- Tool 2: Purpose
- Tool 3: Purpose
### Further Reading
- Resource 1
- Resource 2
- Resource 3
## Conclusion
Key takeaways for using this reference guide effectively.

View File

@@ -0,0 +1,114 @@
#!/usr/bin/env python3
"""
API Load Tester
Automated tool for senior-backend tasks
"""
import sys
import json
import argparse
from pathlib import Path
from typing import Dict
class ApiLoadTester:
    """Main class for API load tester functionality"""
def __init__(self, target_path: str, verbose: bool = False):
self.target_path = Path(target_path)
self.verbose = verbose
self.results = {}
def run(self) -> Dict:
"""Execute the main functionality"""
print(f"🚀 Running {self.__class__.__name__}...")
print(f"📁 Target: {self.target_path}")
try:
self.validate_target()
self.analyze()
self.generate_report()
print("✅ Completed successfully!")
return self.results
except Exception as e:
print(f"❌ Error: {e}")
sys.exit(1)
def validate_target(self):
"""Validate the target path exists and is accessible"""
if not self.target_path.exists():
raise ValueError(f"Target path does not exist: {self.target_path}")
if self.verbose:
print(f"✓ Target validated: {self.target_path}")
def analyze(self):
"""Perform the main analysis or operation"""
if self.verbose:
print("📊 Analyzing...")
# Main logic here
self.results['status'] = 'success'
self.results['target'] = str(self.target_path)
self.results['findings'] = []
# Add analysis results
if self.verbose:
print(f"✓ Analysis complete: {len(self.results.get('findings', []))} findings")
def generate_report(self):
"""Generate and display the report"""
print("\n" + "="*50)
print("REPORT")
print("="*50)
print(f"Target: {self.results.get('target')}")
print(f"Status: {self.results.get('status')}")
print(f"Findings: {len(self.results.get('findings', []))}")
print("="*50 + "\n")
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Api Load Tester"
)
parser.add_argument(
'target',
help='Target path to analyze or process'
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help='Enable verbose output'
)
parser.add_argument(
'--json',
action='store_true',
help='Output results as JSON'
)
parser.add_argument(
'--output', '-o',
help='Output file path'
)
args = parser.parse_args()
tool = ApiLoadTester(
args.target,
verbose=args.verbose
)
results = tool.run()
if args.json:
output = json.dumps(results, indent=2)
if args.output:
with open(args.output, 'w') as f:
f.write(output)
print(f"Results written to {args.output}")
else:
print(output)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,114 @@
#!/usr/bin/env python3
"""
API Scaffolder
Automated tool for senior-backend tasks
"""
import sys
import json
import argparse
from pathlib import Path
from typing import Dict
class ApiScaffolder:
    """Main class for API scaffolder functionality"""
def __init__(self, target_path: str, verbose: bool = False):
self.target_path = Path(target_path)
self.verbose = verbose
self.results = {}
def run(self) -> Dict:
"""Execute the main functionality"""
print(f"🚀 Running {self.__class__.__name__}...")
print(f"📁 Target: {self.target_path}")
try:
self.validate_target()
self.analyze()
self.generate_report()
print("✅ Completed successfully!")
return self.results
except Exception as e:
print(f"❌ Error: {e}")
sys.exit(1)
def validate_target(self):
"""Validate the target path exists and is accessible"""
if not self.target_path.exists():
raise ValueError(f"Target path does not exist: {self.target_path}")
if self.verbose:
print(f"✓ Target validated: {self.target_path}")
def analyze(self):
"""Perform the main analysis or operation"""
if self.verbose:
print("📊 Analyzing...")
# Main logic here
self.results['status'] = 'success'
self.results['target'] = str(self.target_path)
self.results['findings'] = []
# Add analysis results
if self.verbose:
print(f"✓ Analysis complete: {len(self.results.get('findings', []))} findings")
def generate_report(self):
"""Generate and display the report"""
print("\n" + "="*50)
print("REPORT")
print("="*50)
print(f"Target: {self.results.get('target')}")
print(f"Status: {self.results.get('status')}")
print(f"Findings: {len(self.results.get('findings', []))}")
print("="*50 + "\n")
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Api Scaffolder"
)
parser.add_argument(
'target',
help='Target path to analyze or process'
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help='Enable verbose output'
)
parser.add_argument(
'--json',
action='store_true',
help='Output results as JSON'
)
parser.add_argument(
'--output', '-o',
help='Output file path'
)
args = parser.parse_args()
tool = ApiScaffolder(
args.target,
verbose=args.verbose
)
results = tool.run()
if args.json:
output = json.dumps(results, indent=2)
if args.output:
with open(args.output, 'w') as f:
f.write(output)
print(f"Results written to {args.output}")
else:
print(output)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,114 @@
#!/usr/bin/env python3
"""
Database Migration Tool
Automated tool for senior-backend tasks
"""
import sys
import json
import argparse
from pathlib import Path
from typing import Dict
class DatabaseMigrationTool:
"""Main class for database migration tool functionality"""
def __init__(self, target_path: str, verbose: bool = False):
self.target_path = Path(target_path)
self.verbose = verbose
self.results = {}
def run(self) -> Dict:
"""Execute the main functionality"""
print(f"🚀 Running {self.__class__.__name__}...")
print(f"📁 Target: {self.target_path}")
try:
self.validate_target()
self.analyze()
self.generate_report()
print("✅ Completed successfully!")
return self.results
except Exception as e:
print(f"❌ Error: {e}")
sys.exit(1)
def validate_target(self):
"""Validate the target path exists and is accessible"""
if not self.target_path.exists():
raise ValueError(f"Target path does not exist: {self.target_path}")
if self.verbose:
print(f"✓ Target validated: {self.target_path}")
def analyze(self):
"""Perform the main analysis or operation"""
if self.verbose:
print("📊 Analyzing...")
# Main logic here
self.results['status'] = 'success'
self.results['target'] = str(self.target_path)
self.results['findings'] = []
# Add analysis results
if self.verbose:
print(f"✓ Analysis complete: {len(self.results.get('findings', []))} findings")
def generate_report(self):
"""Generate and display the report"""
print("\n" + "="*50)
print("REPORT")
print("="*50)
print(f"Target: {self.results.get('target')}")
print(f"Status: {self.results.get('status')}")
print(f"Findings: {len(self.results.get('findings', []))}")
print("="*50 + "\n")
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Database Migration Tool"
)
parser.add_argument(
'target',
help='Target path to analyze or process'
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help='Enable verbose output'
)
parser.add_argument(
'--json',
action='store_true',
help='Output results as JSON'
)
parser.add_argument(
'--output', '-o',
help='Output file path'
)
args = parser.parse_args()
tool = DatabaseMigrationTool(
args.target,
verbose=args.verbose
)
results = tool.run()
if args.json:
output = json.dumps(results, indent=2)
if args.output:
with open(args.output, 'w') as f:
f.write(output)
print(f"Results written to {args.output}")
else:
print(output)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,209 @@
---
name: senior-fullstack
description: Comprehensive fullstack development skill for building complete web applications with React, Next.js, Node.js, GraphQL, and PostgreSQL. Includes project scaffolding, code quality analysis, architecture patterns, and complete tech stack guidance. Use when building new projects, analyzing code quality, implementing design patterns, or setting up development workflows.
---
# Senior Fullstack
A complete toolkit for senior fullstack development, built on modern tools and best practices.
## Quick Start
### Main Capabilities
This skill provides three core capabilities through automated scripts:
```bash
# Script 1: Fullstack Scaffolder
python scripts/fullstack_scaffolder.py [options]
# Script 2: Project Scaffolder
python scripts/project_scaffolder.py [options]
# Script 3: Code Quality Analyzer
python scripts/code_quality_analyzer.py [options]
```
## Core Capabilities
### 1. Fullstack Scaffolder
Automated tool for fullstack scaffolding tasks.
**Features:**
- Automated scaffolding
- Best practices built-in
- Configurable templates
- Quality checks
**Usage:**
```bash
python scripts/fullstack_scaffolder.py <project-path> [options]
```
### 2. Project Scaffolder
Comprehensive analysis and optimization tool.
**Features:**
- Deep analysis
- Performance metrics
- Recommendations
- Automated fixes
**Usage:**
```bash
python scripts/project_scaffolder.py <target-path> [--verbose]
```
### 3. Code Quality Analyzer
Advanced tooling for specialized tasks.
**Features:**
- Expert-level automation
- Custom configurations
- Integration ready
- Production-grade output
**Usage:**
```bash
python scripts/code_quality_analyzer.py [arguments] [options]
```
## Reference Documentation
### Tech Stack Guide
Comprehensive guide available in `references/tech_stack_guide.md`:
- Detailed patterns and practices
- Code examples
- Best practices
- Anti-patterns to avoid
- Real-world scenarios
### Architecture Patterns
Complete workflow documentation in `references/architecture_patterns.md`:
- Step-by-step processes
- Optimization strategies
- Tool integrations
- Performance tuning
- Troubleshooting guide
### Development Workflows
Technical reference guide in `references/development_workflows.md`:
- Technology stack details
- Configuration examples
- Integration patterns
- Security considerations
- Scalability guidelines
## Tech Stack
**Languages:** TypeScript, JavaScript, Python, Go, Swift, Kotlin
**Frontend:** React, Next.js, React Native, Flutter
**Backend:** Node.js, Express, GraphQL, REST APIs
**Database:** PostgreSQL, Prisma, NeonDB, Supabase
**DevOps:** Docker, Kubernetes, Terraform, GitHub Actions, CircleCI
**Cloud:** AWS, GCP, Azure
## Development Workflow
### 1. Setup and Configuration
```bash
# Install dependencies
npm install
# or
pip install -r requirements.txt
# Configure environment
cp .env.example .env
```
### 2. Run Quality Checks
```bash
# Use the analyzer script
python scripts/project_scaffolder.py .
# Review recommendations
# Apply fixes
```
### 3. Implement Best Practices
Follow the patterns and practices documented in:
- `references/tech_stack_guide.md`
- `references/architecture_patterns.md`
- `references/development_workflows.md`
## Best Practices Summary
### Code Quality
- Follow established patterns
- Write comprehensive tests
- Document decisions
- Review regularly
### Performance
- Measure before optimizing (see the timing sketch after this list)
- Use appropriate caching
- Optimize critical paths
- Monitor in production
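For "measure before optimizing", a minimal timing sketch using only the stdlib; `slow_path` is a hypothetical stand-in for the code under test:
```python
import time

def slow_path(n: int) -> int:
    # Hypothetical hot path standing in for real application code.
    return sum(i * i for i in range(n))

def best_time(fn, *args, repeats: int = 5) -> float:
    """Return the fastest of several runs; the minimum is the least noisy."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return min(times)

print(f"slow_path(1_000_000): {best_time(slow_path, 1_000_000):.4f}s")
```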
### Security
- Validate all inputs
- Use parameterized queries
- Implement proper authentication
- Keep dependencies updated
### Maintainability
- Write clear code
- Use consistent naming
- Add helpful comments
- Keep it simple
## Common Commands
```bash
# Development
npm run dev
npm run build
npm run test
npm run lint
# Analysis
python scripts/project_scaffolder.py .
python scripts/code_quality_analyzer.py --analyze
# Deployment
docker build -t app:latest .
docker-compose up -d
kubectl apply -f k8s/
```
## Troubleshooting
### Common Issues
Check the comprehensive troubleshooting section in `references/development_workflows.md`.
### Getting Help
- Review reference documentation
- Check script output messages
- Consult tech stack documentation
- Review error logs
## Resources
- Pattern Reference: `references/tech_stack_guide.md`
- Workflow Guide: `references/architecture_patterns.md`
- Technical Guide: `references/development_workflows.md`
- Tool Scripts: `scripts/` directory

View File

@@ -0,0 +1,103 @@
# Architecture Patterns
## Overview
This reference guide provides comprehensive information for the senior-fullstack skill.
## Patterns and Practices
### Pattern 1: Best Practice Implementation
**Description:**
Detailed explanation of the pattern.
**When to Use:**
- Scenario 1
- Scenario 2
- Scenario 3
**Implementation:**
```typescript
// Example code implementation
export class Example {
// Implementation details
}
```
**Benefits:**
- Benefit 1
- Benefit 2
- Benefit 3
**Trade-offs:**
- Consider 1
- Consider 2
- Consider 3
### Pattern 2: Advanced Technique
**Description:**
Another important pattern for the senior-fullstack skill.
**Implementation:**
```typescript
// Advanced example
async function advancedExample() {
// Code here
}
```
## Guidelines
### Code Organization
- Clear structure
- Logical separation
- Consistent naming
- Proper documentation
### Performance Considerations
- Optimization strategies
- Bottleneck identification
- Monitoring approaches
- Scaling techniques
### Security Best Practices
- Input validation
- Authentication
- Authorization
- Data protection
## Common Patterns
### Pattern A
Implementation details and examples.
### Pattern B
Implementation details and examples.
### Pattern C
Implementation details and examples.
## Anti-Patterns to Avoid
### Anti-Pattern 1
What not to do and why.
### Anti-Pattern 2
What not to do and why.
## Tools and Resources
### Recommended Tools
- Tool 1: Purpose
- Tool 2: Purpose
- Tool 3: Purpose
### Further Reading
- Resource 1
- Resource 2
- Resource 3
## Conclusion
Key takeaways for using this reference guide effectively.

View File

@@ -0,0 +1,103 @@
# Development Workflows
## Overview
This reference guide provides comprehensive information for the senior-fullstack skill.
## Patterns and Practices
### Pattern 1: Best Practice Implementation
**Description:**
Detailed explanation of the pattern.
**When to Use:**
- Scenario 1
- Scenario 2
- Scenario 3
**Implementation:**
```typescript
// Example code implementation
export class Example {
// Implementation details
}
```
**Benefits:**
- Benefit 1
- Benefit 2
- Benefit 3
**Trade-offs:**
- Consider 1
- Consider 2
- Consider 3
### Pattern 2: Advanced Technique
**Description:**
Another important pattern for the senior-fullstack skill.
**Implementation:**
```typescript
// Advanced example
async function advancedExample() {
// Code here
}
```
## Guidelines
### Code Organization
- Clear structure
- Logical separation
- Consistent naming
- Proper documentation
### Performance Considerations
- Optimization strategies
- Bottleneck identification
- Monitoring approaches
- Scaling techniques
### Security Best Practices
- Input validation
- Authentication
- Authorization
- Data protection
## Common Patterns
### Pattern A
Implementation details and examples.
### Pattern B
Implementation details and examples.
### Pattern C
Implementation details and examples.
## Anti-Patterns to Avoid
### Anti-Pattern 1
What not to do and why.
### Anti-Pattern 2
What not to do and why.
## Tools and Resources
### Recommended Tools
- Tool 1: Purpose
- Tool 2: Purpose
- Tool 3: Purpose
### Further Reading
- Resource 1
- Resource 2
- Resource 3
## Conclusion
Key takeaways for using this reference guide effectively.

View File

@@ -0,0 +1,103 @@
# Tech Stack Guide
## Overview
This reference guide provides comprehensive information for the senior-fullstack skill.
## Patterns and Practices
### Pattern 1: Best Practice Implementation
**Description:**
Detailed explanation of the pattern.
**When to Use:**
- Scenario 1
- Scenario 2
- Scenario 3
**Implementation:**
```typescript
// Example code implementation
export class Example {
// Implementation details
}
```
**Benefits:**
- Benefit 1
- Benefit 2
- Benefit 3
**Trade-offs:**
- Consider 1
- Consider 2
- Consider 3
### Pattern 2: Advanced Technique
**Description:**
Another important pattern for the senior-fullstack skill.
**Implementation:**
```typescript
// Advanced example
async function advancedExample() {
// Code here
}
```
## Guidelines
### Code Organization
- Clear structure
- Logical separation
- Consistent naming
- Proper documentation
### Performance Considerations
- Optimization strategies
- Bottleneck identification
- Monitoring approaches
- Scaling techniques
### Security Best Practices
- Input validation
- Authentication
- Authorization
- Data protection
## Common Patterns
### Pattern A
Implementation details and examples.
### Pattern B
Implementation details and examples.
### Pattern C
Implementation details and examples.
## Anti-Patterns to Avoid
### Anti-Pattern 1
What not to do and why.
### Anti-Pattern 2
What not to do and why.
## Tools and Resources
### Recommended Tools
- Tool 1: Purpose
- Tool 2: Purpose
- Tool 3: Purpose
### Further Reading
- Resource 1
- Resource 2
- Resource 3
## Conclusion
Key takeaways for using this reference guide effectively.

View File

@@ -0,0 +1,114 @@
#!/usr/bin/env python3
"""
Code Quality Analyzer
Automated tool for senior-fullstack tasks
"""
import sys
import json
import argparse
from pathlib import Path
from typing import Dict
class CodeQualityAnalyzer:
"""Main class for code quality analyzer functionality"""
def __init__(self, target_path: str, verbose: bool = False):
self.target_path = Path(target_path)
self.verbose = verbose
self.results = {}
def run(self) -> Dict:
"""Execute the main functionality"""
print(f"🚀 Running {self.__class__.__name__}...")
print(f"📁 Target: {self.target_path}")
try:
self.validate_target()
self.analyze()
self.generate_report()
print("✅ Completed successfully!")
return self.results
except Exception as e:
print(f"❌ Error: {e}")
sys.exit(1)
def validate_target(self):
"""Validate the target path exists and is accessible"""
if not self.target_path.exists():
raise ValueError(f"Target path does not exist: {self.target_path}")
if self.verbose:
print(f"✓ Target validated: {self.target_path}")
def analyze(self):
"""Perform the main analysis or operation"""
if self.verbose:
print("📊 Analyzing...")
# Main logic here
self.results['status'] = 'success'
self.results['target'] = str(self.target_path)
self.results['findings'] = []
# Add analysis results
if self.verbose:
print(f"✓ Analysis complete: {len(self.results.get('findings', []))} findings")
def generate_report(self):
"""Generate and display the report"""
print("\n" + "="*50)
print("REPORT")
print("="*50)
print(f"Target: {self.results.get('target')}")
print(f"Status: {self.results.get('status')}")
print(f"Findings: {len(self.results.get('findings', []))}")
print("="*50 + "\n")
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Code Quality Analyzer"
)
parser.add_argument(
'target',
help='Target path to analyze or process'
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help='Enable verbose output'
)
parser.add_argument(
'--json',
action='store_true',
help='Output results as JSON'
)
parser.add_argument(
'--output', '-o',
help='Output file path'
)
args = parser.parse_args()
tool = CodeQualityAnalyzer(
args.target,
verbose=args.verbose
)
results = tool.run()
if args.json:
output = json.dumps(results, indent=2)
if args.output:
with open(args.output, 'w') as f:
f.write(output)
print(f"Results written to {args.output}")
else:
print(output)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,114 @@
#!/usr/bin/env python3
"""
Fullstack Scaffolder
Automated tool for senior-fullstack tasks
"""
import sys
import json
import argparse
from pathlib import Path
from typing import Dict
class FullstackScaffolder:
"""Main class for fullstack scaffolder functionality"""
def __init__(self, target_path: str, verbose: bool = False):
self.target_path = Path(target_path)
self.verbose = verbose
self.results = {}
def run(self) -> Dict:
"""Execute the main functionality"""
print(f"🚀 Running {self.__class__.__name__}...")
print(f"📁 Target: {self.target_path}")
try:
self.validate_target()
self.analyze()
self.generate_report()
print("✅ Completed successfully!")
return self.results
except Exception as e:
print(f"❌ Error: {e}")
sys.exit(1)
def validate_target(self):
"""Validate the target path exists and is accessible"""
if not self.target_path.exists():
raise ValueError(f"Target path does not exist: {self.target_path}")
if self.verbose:
print(f"✓ Target validated: {self.target_path}")
def analyze(self):
"""Perform the main analysis or operation"""
if self.verbose:
print("📊 Analyzing...")
# Main logic here
self.results['status'] = 'success'
self.results['target'] = str(self.target_path)
self.results['findings'] = []
# Add analysis results
if self.verbose:
print(f"✓ Analysis complete: {len(self.results.get('findings', []))} findings")
def generate_report(self):
"""Generate and display the report"""
print("\n" + "="*50)
print("REPORT")
print("="*50)
print(f"Target: {self.results.get('target')}")
print(f"Status: {self.results.get('status')}")
print(f"Findings: {len(self.results.get('findings', []))}")
print("="*50 + "\n")
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Fullstack Scaffolder"
)
parser.add_argument(
'target',
help='Target path to analyze or process'
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help='Enable verbose output'
)
parser.add_argument(
'--json',
action='store_true',
help='Output results as JSON'
)
parser.add_argument(
'--output', '-o',
help='Output file path'
)
args = parser.parse_args()
tool = FullstackScaffolder(
args.target,
verbose=args.verbose
)
results = tool.run()
if args.json:
output = json.dumps(results, indent=2)
if args.output:
with open(args.output, 'w') as f:
f.write(output)
print(f"Results written to {args.output}")
else:
print(output)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,114 @@
#!/usr/bin/env python3
"""
Project Scaffolder
Automated tool for senior-fullstack tasks
"""
import sys
import json
import argparse
from pathlib import Path
from typing import Dict
class ProjectScaffolder:
"""Main class for project scaffolder functionality"""
def __init__(self, target_path: str, verbose: bool = False):
self.target_path = Path(target_path)
self.verbose = verbose
self.results = {}
def run(self) -> Dict:
"""Execute the main functionality"""
print(f"🚀 Running {self.__class__.__name__}...")
print(f"📁 Target: {self.target_path}")
try:
self.validate_target()
self.analyze()
self.generate_report()
print("✅ Completed successfully!")
return self.results
except Exception as e:
print(f"❌ Error: {e}")
sys.exit(1)
def validate_target(self):
"""Validate the target path exists and is accessible"""
if not self.target_path.exists():
raise ValueError(f"Target path does not exist: {self.target_path}")
if self.verbose:
print(f"✓ Target validated: {self.target_path}")
def analyze(self):
"""Perform the main analysis or operation"""
if self.verbose:
print("📊 Analyzing...")
# Main logic here
self.results['status'] = 'success'
self.results['target'] = str(self.target_path)
self.results['findings'] = []
# Add analysis results
if self.verbose:
print(f"✓ Analysis complete: {len(self.results.get('findings', []))} findings")
def generate_report(self):
"""Generate and display the report"""
print("\n" + "="*50)
print("REPORT")
print("="*50)
print(f"Target: {self.results.get('target')}")
print(f"Status: {self.results.get('status')}")
print(f"Findings: {len(self.results.get('findings', []))}")
print("="*50 + "\n")
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Project Scaffolder"
)
parser.add_argument(
'target',
help='Target path to analyze or process'
)
parser.add_argument(
'--verbose', '-v',
action='store_true',
help='Enable verbose output'
)
parser.add_argument(
'--json',
action='store_true',
help='Output results as JSON'
)
parser.add_argument(
'--output', '-o',
help='Output file path'
)
args = parser.parse_args()
tool = ProjectScaffolder(
args.target,
verbose=args.verbose
)
results = tool.run()
if args.json:
output = json.dumps(results, indent=2)
if args.output:
with open(args.output, 'w') as f:
f.write(output)
print(f"Results written to {args.output}")
else:
print(output)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,226 @@
---
name: senior-ml-engineer
description: World-class ML engineering skill for productionizing ML models, MLOps, and building scalable ML systems. Expertise in PyTorch, TensorFlow, model deployment, feature stores, model monitoring, and ML infrastructure. Includes LLM integration, fine-tuning, RAG systems, and agentic AI. Use when deploying ML models, building ML platforms, implementing MLOps, or integrating LLMs into production systems.
---
# Senior ML/AI Engineer
World-class senior ML/AI engineering skill for production-grade AI/ML/data systems.
## Quick Start
### Main Capabilities
```bash
# Core Tool 1
python scripts/model_deployment_pipeline.py --input data/ --output results/
# Core Tool 2
python scripts/rag_system_builder.py --target project/ --analyze
# Core Tool 3
python scripts/ml_monitoring_suite.py --config config.yaml --deploy
```
## Core Expertise
This skill covers world-class capabilities in:
- Advanced production patterns and architectures
- Scalable system design and implementation
- Performance optimization at scale
- MLOps and DataOps best practices
- Real-time processing and inference
- Distributed computing frameworks
- Model deployment and monitoring
- Security and compliance
- Cost optimization
- Team leadership and mentoring
## Tech Stack
**Languages:** Python, SQL, R, Scala, Go
**ML Frameworks:** PyTorch, TensorFlow, Scikit-learn, XGBoost
**Data Tools:** Spark, Airflow, dbt, Kafka, Databricks
**LLM Frameworks:** LangChain, LlamaIndex, DSPy
**Deployment:** Docker, Kubernetes, AWS/GCP/Azure
**Monitoring:** MLflow, Weights & Biases, Prometheus
**Databases:** PostgreSQL, BigQuery, Snowflake, Pinecone
## Reference Documentation
### 1. MLOps Production Patterns
Comprehensive guide available in `references/mlops_production_patterns.md` covering:
- Advanced patterns and best practices
- Production implementation strategies
- Performance optimization techniques
- Scalability considerations
- Security and compliance
- Real-world case studies
### 2. LLM Integration Guide
Complete workflow documentation in `references/llm_integration_guide.md` including:
- Step-by-step processes
- Architecture design patterns
- Tool integration guides
- Performance tuning strategies
- Troubleshooting procedures
### 3. RAG System Architecture
Technical reference guide in `references/rag_system_architecture.md` with:
- System design principles
- Implementation examples
- Configuration best practices
- Deployment strategies
- Monitoring and observability
## Production Patterns
### Pattern 1: Scalable Data Processing
Enterprise-scale data processing with distributed computing:
- Horizontal scaling architecture
- Fault-tolerant design
- Real-time and batch processing
- Data quality validation
- Performance monitoring
### Pattern 2: ML Model Deployment
Production ML system with high availability:
- Model serving with low latency
- A/B testing infrastructure
- Feature store integration
- Model monitoring and drift detection
- Automated retraining pipelines
### Pattern 3: Real-Time Inference
High-throughput inference system (a micro-batching sketch follows this list):
- Batching and caching strategies
- Load balancing
- Auto-scaling
- Latency optimization
- Cost optimization
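A minimal sketch of the micro-batching idea above, assuming a hypothetical `predict_batch` standing in for whatever model call a real serving stack provides:
```python
import time
from queue import Queue, Empty
from threading import Thread

def predict_batch(inputs):
    # Hypothetical stand-in for a real batched model call
    return [f"prediction-for-{x}" for x in inputs]

def batching_worker(requests: Queue, max_batch: int = 32, max_wait_s: float = 0.01):
    # Collect requests until the batch fills or the latency budget expires
    while True:
        batch = []
        deadline = time.monotonic() + max_wait_s
        while len(batch) < max_batch and time.monotonic() < deadline:
            try:
                batch.append(requests.get(timeout=max_wait_s))
            except Empty:
                break
        if batch:
            inputs = [item for item, _ in batch]
            for (_, reply), pred in zip(batch, predict_batch(inputs)):
                reply.put(pred)

requests: Queue = Queue()
Thread(target=batching_worker, args=(requests,), daemon=True).start()
reply: Queue = Queue()
requests.put(("example-input", reply))
print(reply.get(timeout=1.0))  # each handler blocks on its own reply queue
```
The trade-off is explicit: `max_wait_s` bounds the latency added in exchange for larger, cheaper batches.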
## Best Practices
### Development
- Test-driven development
- Code reviews and pair programming
- Documentation as code
- Version control everything
- Continuous integration
### Production
- Monitor everything critical
- Automate deployments
- Feature flags for releases
- Canary deployments
- Comprehensive logging
### Team Leadership
- Mentor junior engineers
- Drive technical decisions
- Establish coding standards
- Foster learning culture
- Cross-functional collaboration
## Performance Targets
**Latency:**
- P50: < 50ms
- P95: < 100ms
- P99: < 200ms
**Throughput:**
- Requests/second: > 1000
- Concurrent users: > 10,000
**Availability:**
- Uptime: 99.9%
- Error rate: < 0.1%
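One hedged way to spot-check these targets offline, given a list of recorded per-request latencies (nearest-rank percentiles, no external dependencies; the thresholds mirror the targets above):
```python
def percentile(samples, p):
    # Nearest-rank percentile; adequate for SLO spot checks
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

latencies_ms = [12, 48, 51, 47, 95, 180, 43, 60, 55, 49]  # example data
for label, p, limit in [("P50", 50, 50), ("P95", 95, 100), ("P99", 99, 200)]:
    value = percentile(latencies_ms, p)
    print(f"{label}: {value}ms (target < {limit}ms) -> {'OK' if value < limit else 'BREACH'}")
```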
## Security & Compliance
- Authentication & authorization
- Data encryption (at rest & in transit)
- PII handling and anonymization (see the redaction sketch after this list)
- GDPR/CCPA compliance
- Regular security audits
- Vulnerability management
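A minimal sketch of the PII-handling step flagged above: regex redaction of obvious identifiers before text leaves the trust boundary. The patterns are illustrative only; real compliance work layers NER models, audits, and data maps on top.
```python
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    # Replace each match with a typed placeholder so downstream logs stay useful
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-867-5309, SSN 123-45-6789."))
```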
## Common Commands
```bash
# Development
python -m pytest tests/ -v --cov
python -m black src/
python -m pylint src/
# Training
python scripts/train.py --config prod.yaml
python scripts/evaluate.py --model best.pth
# Deployment
docker build -t service:v1 .
kubectl apply -f k8s/
helm upgrade service ./charts/
# Monitoring
kubectl logs -f deployment/service
python scripts/health_check.py
```
## Resources
- Advanced Patterns: `references/mlops_production_patterns.md`
- Implementation Guide: `references/llm_integration_guide.md`
- Technical Reference: `references/rag_system_architecture.md`
- Automation Scripts: `scripts/` directory
## Senior-Level Responsibilities
As a world-class senior professional:
1. **Technical Leadership**
- Drive architectural decisions
- Mentor team members
- Establish best practices
- Ensure code quality
2. **Strategic Thinking**
- Align with business goals
- Evaluate trade-offs
- Plan for scale
- Manage technical debt
3. **Collaboration**
- Work across teams
- Communicate effectively
- Build consensus
- Share knowledge
4. **Innovation**
- Stay current with research
- Experiment with new approaches
- Contribute to community
- Drive continuous improvement
5. **Production Excellence**
- Ensure high availability
- Monitor proactively
- Optimize performance
- Respond to incidents

View File

@@ -0,0 +1,80 @@
# LLM Integration Guide
## Overview
World-class LLM integration guide for the senior ML/AI engineer.
## Core Principles
### Production-First Design
Always design with production in mind:
- Scalability: Handle 10x current load
- Reliability: 99.9% uptime target
- Maintainability: Clear, documented code
- Observability: Monitor everything
### Performance by Design
Optimize from the start:
- Efficient algorithms
- Resource awareness
- Strategic caching
- Batch processing
### Security & Privacy
Build security in:
- Input validation
- Data encryption
- Access control
- Audit logging
## Advanced Patterns
### Pattern 1: Distributed Processing
Enterprise-scale data processing with fault tolerance.
### Pattern 2: Real-Time Systems
Low-latency, high-throughput systems.
### Pattern 3: ML at Scale
Production ML with monitoring and automation.
## Best Practices
### Code Quality
- Comprehensive testing
- Clear documentation
- Code reviews
- Type hints
### Performance
- Profile before optimizing
- Monitor continuously
- Cache strategically
- Batch operations
### Reliability
- Design for failure
- Implement retries (see the backoff sketch after this list)
- Use circuit breakers
- Monitor health
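A minimal sketch of the retry advice above, assuming a hypothetical flaky `call_llm` (any real client call goes in its place); a circuit breaker would additionally stop calling the upstream after repeated failures:
```python
import random
import time

def call_llm(prompt: str) -> str:
    # Hypothetical flaky upstream call, used only for illustration
    if random.random() < 0.3:
        raise TimeoutError("upstream timeout")
    return f"response to: {prompt}"

def call_with_retries(prompt: str, max_attempts: int = 4, base_delay: float = 0.5) -> str:
    for attempt in range(1, max_attempts + 1):
        try:
            return call_llm(prompt)
        except TimeoutError:
            if attempt == max_attempts:
                raise  # let the caller's circuit breaker or fallback take over
            # Exponential backoff with jitter to avoid synchronized retries
            time.sleep(base_delay * 2 ** (attempt - 1) * (0.5 + random.random()))

print(call_with_retries("summarize this document"))
```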
## Tools & Technologies
Essential tools for this domain:
- Development frameworks
- Testing libraries
- Deployment platforms
- Monitoring solutions
## Further Reading
- Research papers
- Industry blogs
- Conference talks
- Open source projects

View File

@@ -0,0 +1,80 @@
# MLOps Production Patterns
## Overview
World-class MLOps production patterns for the senior ML/AI engineer.
## Core Principles
### Production-First Design
Always design with production in mind:
- Scalability: Handle 10x current load
- Reliability: 99.9% uptime target
- Maintainability: Clear, documented code
- Observability: Monitor everything
### Performance by Design
Optimize from the start:
- Efficient algorithms
- Resource awareness
- Strategic caching
- Batch processing
### Security & Privacy
Build security in:
- Input validation
- Data encryption
- Access control
- Audit logging
## Advanced Patterns
### Pattern 1: Distributed Processing
Enterprise-scale data processing with fault tolerance.
### Pattern 2: Real-Time Systems
Low-latency, high-throughput systems.
### Pattern 3: ML at Scale
Production ML with monitoring and automation.
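As a concrete slice of the monitoring in Pattern 3, a hedged sketch of feature-drift detection via a Population Stability Index between a training sample and live traffic (pure Python with naive equal-width bins; production systems would use a monitoring library):
```python
import math
import random

def psi(expected, actual, bins: int = 10) -> float:
    # Population Stability Index over equal-width bins; > 0.2 is a common drift alarm
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def frac(sample, i):
        count = sum(1 for x in sample if lo + i * width <= x < lo + (i + 1) * width)
        return max(count / len(sample), 1e-6)  # floor to avoid log(0)
    return sum(
        (frac(actual, i) - frac(expected, i)) * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

train = [random.gauss(0.0, 1.0) for _ in range(1000)]
live = [random.gauss(0.5, 1.2) for _ in range(1000)]  # deliberately shifted
score = psi(train, live)
print(f"PSI = {score:.3f} -> {'drift suspected' if score > 0.2 else 'stable'}")
```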
## Best Practices
### Code Quality
- Comprehensive testing
- Clear documentation
- Code reviews
- Type hints
### Performance
- Profile before optimizing
- Monitor continuously
- Cache strategically
- Batch operations
### Reliability
- Design for failure
- Implement retries
- Use circuit breakers
- Monitor health
## Tools & Technologies
Essential tools for this domain:
- Development frameworks
- Testing libraries
- Deployment platforms
- Monitoring solutions
## Further Reading
- Research papers
- Industry blogs
- Conference talks
- Open source projects

View File

@@ -0,0 +1,80 @@
# RAG System Architecture
## Overview
World-class RAG system architecture for the senior ML/AI engineer.
## Core Principles
### Production-First Design
Always design with production in mind:
- Scalability: Handle 10x current load
- Reliability: 99.9% uptime target
- Maintainability: Clear, documented code
- Observability: Monitor everything
### Performance by Design
Optimize from the start:
- Efficient algorithms
- Resource awareness
- Strategic caching
- Batch processing
### Security & Privacy
Build security in:
- Input validation
- Data encryption
- Access control
- Audit logging
## Advanced Patterns
### Pattern 1: Distributed Processing
Enterprise-scale data processing with fault tolerance.
### Pattern 2: Real-Time Systems
Low-latency, high-throughput systems.
### Pattern 3: ML at Scale
Production ML with monitoring and automation.
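To ground this architecture in something runnable, here is a minimal retrieve-then-read skeleton with a toy bag-of-words embedding; a real stack swaps in a learned embedding model and a vector database (e.g. Pinecone from this skill's stack), but the shape — embed, score, take top-k, pass as context — is the same:
```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words stand-in for a real embedding model
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

corpus = [
    "feature stores serve offline and online features",
    "vector databases index embeddings for similarity search",
    "canary deployments reduce release risk",
]
query = "how do vector databases search embeddings"
top_k = sorted(corpus, key=lambda d: cosine(embed(query), embed(d)), reverse=True)[:2]
print(top_k)  # documents that would be stuffed into the LLM prompt as context
```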
## Best Practices
### Code Quality
- Comprehensive testing
- Clear documentation
- Code reviews
- Type hints
### Performance
- Profile before optimizing
- Monitor continuously
- Cache strategically
- Batch operations
### Reliability
- Design for failure
- Implement retries
- Use circuit breakers
- Monitor health
## Tools & Technologies
Essential tools for this domain:
- Development frameworks
- Testing libraries
- Deployment platforms
- Monitoring solutions
## Further Reading
- Research papers
- Industry blogs
- Conference talks
- Open source projects

View File

@@ -0,0 +1,100 @@
#!/usr/bin/env python3
"""
ML Monitoring Suite
Production-grade tool for the senior ML/AI engineer
"""
import sys
import json
import logging
import argparse
from typing import Dict
from datetime import datetime
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
class MlMonitoringSuite:
"""Production-grade ml monitoring suite"""
def __init__(self, config: Dict):
self.config = config
self.results = {
'status': 'initialized',
'start_time': datetime.now().isoformat(),
'processed_items': 0
}
logger.info(f"Initialized {self.__class__.__name__}")
def validate_config(self) -> bool:
"""Validate configuration"""
logger.info("Validating configuration...")
# Add validation logic
logger.info("Configuration validated")
return True
def process(self) -> Dict:
"""Main processing logic"""
logger.info("Starting processing...")
try:
self.validate_config()
            # Main processing; store the payload instead of discarding it
            self.results['result'] = self._execute()
            self.results['status'] = 'completed'
self.results['end_time'] = datetime.now().isoformat()
logger.info("Processing completed successfully")
return self.results
except Exception as e:
self.results['status'] = 'failed'
self.results['error'] = str(e)
logger.error(f"Processing failed: {e}")
raise
def _execute(self) -> Dict:
"""Execute main logic"""
# Implementation here
return {'success': True}
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Ml Monitoring Suite"
)
parser.add_argument('--input', '-i', required=True, help='Input path')
parser.add_argument('--output', '-o', required=True, help='Output path')
parser.add_argument('--config', '-c', help='Configuration file')
parser.add_argument('--verbose', '-v', action='store_true', help='Verbose output')
args = parser.parse_args()
if args.verbose:
logging.getLogger().setLevel(logging.DEBUG)
try:
        config = {
            'input': args.input,
            'output': args.output
        }
        if args.config:
            # Merge the optional config file (assumed JSON here) instead of ignoring --config
            with open(args.config) as f:
                config.update(json.load(f))
processor = MlMonitoringSuite(config)
results = processor.process()
print(json.dumps(results, indent=2))
sys.exit(0)
except Exception as e:
logger.error(f"Fatal error: {e}")
sys.exit(1)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,100 @@
#!/usr/bin/env python3
"""
Model Deployment Pipeline
Production-grade tool for the senior ML/AI engineer
"""
import sys
import json
import logging
import argparse
from typing import Dict
from datetime import datetime
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
class ModelDeploymentPipeline:
"""Production-grade model deployment pipeline"""
def __init__(self, config: Dict):
self.config = config
self.results = {
'status': 'initialized',
'start_time': datetime.now().isoformat(),
'processed_items': 0
}
logger.info(f"Initialized {self.__class__.__name__}")
def validate_config(self) -> bool:
"""Validate configuration"""
logger.info("Validating configuration...")
# Add validation logic
logger.info("Configuration validated")
return True
def process(self) -> Dict:
"""Main processing logic"""
logger.info("Starting processing...")
try:
self.validate_config()
            # Main processing; store the payload instead of discarding it
            self.results['result'] = self._execute()
            self.results['status'] = 'completed'
self.results['end_time'] = datetime.now().isoformat()
logger.info("Processing completed successfully")
return self.results
except Exception as e:
self.results['status'] = 'failed'
self.results['error'] = str(e)
logger.error(f"Processing failed: {e}")
raise
def _execute(self) -> Dict:
"""Execute main logic"""
# Implementation here
return {'success': True}
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Model Deployment Pipeline"
)
parser.add_argument('--input', '-i', required=True, help='Input path')
parser.add_argument('--output', '-o', required=True, help='Output path')
parser.add_argument('--config', '-c', help='Configuration file')
parser.add_argument('--verbose', '-v', action='store_true', help='Verbose output')
args = parser.parse_args()
if args.verbose:
logging.getLogger().setLevel(logging.DEBUG)
try:
        config = {
            'input': args.input,
            'output': args.output
        }
        if args.config:
            # Merge the optional config file (assumed JSON here) instead of ignoring --config
            with open(args.config) as f:
                config.update(json.load(f))
processor = ModelDeploymentPipeline(config)
results = processor.process()
print(json.dumps(results, indent=2))
sys.exit(0)
except Exception as e:
logger.error(f"Fatal error: {e}")
sys.exit(1)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,100 @@
#!/usr/bin/env python3
"""
RAG System Builder
Production-grade tool for the senior ML/AI engineer
"""
import sys
import json
import logging
import argparse
from typing import Dict
from datetime import datetime
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
class RagSystemBuilder:
"""Production-grade rag system builder"""
def __init__(self, config: Dict):
self.config = config
self.results = {
'status': 'initialized',
'start_time': datetime.now().isoformat(),
'processed_items': 0
}
logger.info(f"Initialized {self.__class__.__name__}")
def validate_config(self) -> bool:
"""Validate configuration"""
logger.info("Validating configuration...")
# Add validation logic
logger.info("Configuration validated")
return True
def process(self) -> Dict:
"""Main processing logic"""
logger.info("Starting processing...")
try:
self.validate_config()
            # Main processing; store the payload instead of discarding it
            self.results['result'] = self._execute()
            self.results['status'] = 'completed'
self.results['end_time'] = datetime.now().isoformat()
logger.info("Processing completed successfully")
return self.results
except Exception as e:
self.results['status'] = 'failed'
self.results['error'] = str(e)
logger.error(f"Processing failed: {e}")
raise
def _execute(self) -> Dict:
"""Execute main logic"""
# Implementation here
return {'success': True}
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Rag System Builder"
)
parser.add_argument('--input', '-i', required=True, help='Input path')
parser.add_argument('--output', '-o', required=True, help='Output path')
parser.add_argument('--config', '-c', help='Configuration file')
parser.add_argument('--verbose', '-v', action='store_true', help='Verbose output')
args = parser.parse_args()
if args.verbose:
logging.getLogger().setLevel(logging.DEBUG)
try:
        config = {
            'input': args.input,
            'output': args.output
        }
        if args.config:
            # Merge the optional config file (assumed JSON here) instead of ignoring --config
            with open(args.config) as f:
                config.update(json.load(f))
processor = RagSystemBuilder(config)
results = processor.process()
print(json.dumps(results, indent=2))
sys.exit(0)
except Exception as e:
logger.error(f"Fatal error: {e}")
sys.exit(1)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,226 @@
---
name: senior-prompt-engineer
description: World-class prompt engineering skill for LLM optimization, prompt patterns, structured outputs, and AI product development. Expertise in Claude, GPT-4, prompt design patterns, few-shot learning, chain-of-thought, and AI evaluation. Includes RAG optimization, agent design, and LLM system architecture. Use when building AI products, optimizing LLM performance, designing agentic systems, or implementing advanced prompting techniques.
---
# Senior Prompt Engineer
World-class senior prompt engineer skill for production-grade AI/ML/Data systems.
## Quick Start
### Main Capabilities
```bash
# Core Tool 1
python scripts/prompt_optimizer.py --input data/ --output results/
# Core Tool 2
python scripts/rag_evaluator.py --target project/ --analyze
# Core Tool 3
python scripts/agent_orchestrator.py --config config.yaml --deploy
```
## Core Expertise
This skill covers world-class capabilities in:
- Advanced production patterns and architectures
- Scalable system design and implementation
- Performance optimization at scale
- MLOps and DataOps best practices
- Real-time processing and inference
- Distributed computing frameworks
- Model deployment and monitoring
- Security and compliance
- Cost optimization
- Team leadership and mentoring
## Tech Stack
**Languages:** Python, SQL, R, Scala, Go
**ML Frameworks:** PyTorch, TensorFlow, Scikit-learn, XGBoost
**Data Tools:** Spark, Airflow, dbt, Kafka, Databricks
**LLM Frameworks:** LangChain, LlamaIndex, DSPy
**Deployment:** Docker, Kubernetes, AWS/GCP/Azure
**Monitoring:** MLflow, Weights & Biases, Prometheus
**Databases:** PostgreSQL, BigQuery, Snowflake, Pinecone
## Reference Documentation
### 1. Prompt Engineering Patterns
Comprehensive guide available in `references/prompt_engineering_patterns.md` covering:
- Advanced patterns and best practices
- Production implementation strategies
- Performance optimization techniques
- Scalability considerations
- Security and compliance
- Real-world case studies
### 2. LLM Evaluation Frameworks
Complete workflow documentation in `references/llm_evaluation_frameworks.md` including:
- Step-by-step processes
- Architecture design patterns
- Tool integration guides
- Performance tuning strategies
- Troubleshooting procedures
### 3. Agentic System Design
Technical reference guide in `references/agentic_system_design.md` with:
- System design principles
- Implementation examples
- Configuration best practices
- Deployment strategies
- Monitoring and observability
## Production Patterns
### Pattern 1: Scalable Data Processing
Enterprise-scale data processing with distributed computing:
- Horizontal scaling architecture
- Fault-tolerant design
- Real-time and batch processing
- Data quality validation
- Performance monitoring
### Pattern 2: ML Model Deployment
Production ML system with high availability:
- Model serving with low latency
- A/B testing infrastructure
- Feature store integration
- Model monitoring and drift detection
- Automated retraining pipelines
### Pattern 3: Real-Time Inference
High-throughput inference system (a response-caching sketch follows this list):
- Batching and caching strategies
- Load balancing
- Auto-scaling
- Latency optimization
- Cost optimization
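As a concrete instance of the caching strategy above, a minimal sketch of content-addressed response caching for deterministic prompts; `generate` is a hypothetical model call, and a production version would add TTLs and restrict caching to temperature-0 requests:
```python
import hashlib

_cache: dict = {}

def generate(prompt: str) -> str:
    # Hypothetical expensive model call
    return f"completion for: {prompt}"

def cached_generate(prompt: str, model: str = "example-model") -> str:
    # Key on everything that affects output: model id plus the full prompt
    key = hashlib.sha256(f"{model}\n{prompt}".encode()).hexdigest()
    if key not in _cache:
        _cache[key] = generate(prompt)
    return _cache[key]

print(cached_generate("classify this ticket"))
print(cached_generate("classify this ticket"))  # second call is served from cache
```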
## Best Practices
### Development
- Test-driven development
- Code reviews and pair programming
- Documentation as code
- Version control everything
- Continuous integration
### Production
- Monitor everything critical
- Automate deployments
- Feature flags for releases
- Canary deployments
- Comprehensive logging
### Team Leadership
- Mentor junior engineers
- Drive technical decisions
- Establish coding standards
- Foster learning culture
- Cross-functional collaboration
## Performance Targets
**Latency:**
- P50: < 50ms
- P95: < 100ms
- P99: < 200ms
**Throughput:**
- Requests/second: > 1000
- Concurrent users: > 10,000
**Availability:**
- Uptime: 99.9%
- Error rate: < 0.1%
## Security & Compliance
- Authentication & authorization
- Data encryption (at rest & in transit)
- PII handling and anonymization
- GDPR/CCPA compliance
- Regular security audits
- Vulnerability management
## Common Commands
```bash
# Development
python -m pytest tests/ -v --cov
python -m black src/
python -m pylint src/
# Training
python scripts/train.py --config prod.yaml
python scripts/evaluate.py --model best.pth
# Deployment
docker build -t service:v1 .
kubectl apply -f k8s/
helm upgrade service ./charts/
# Monitoring
kubectl logs -f deployment/service
python scripts/health_check.py
```
## Resources
- Advanced Patterns: `references/prompt_engineering_patterns.md`
- Implementation Guide: `references/llm_evaluation_frameworks.md`
- Technical Reference: `references/agentic_system_design.md`
- Automation Scripts: `scripts/` directory
## Senior-Level Responsibilities
As a world-class senior professional:
1. **Technical Leadership**
- Drive architectural decisions
- Mentor team members
- Establish best practices
- Ensure code quality
2. **Strategic Thinking**
- Align with business goals
- Evaluate trade-offs
- Plan for scale
- Manage technical debt
3. **Collaboration**
- Work across teams
- Communicate effectively
- Build consensus
- Share knowledge
4. **Innovation**
- Stay current with research
- Experiment with new approaches
- Contribute to community
- Drive continuous improvement
5. **Production Excellence**
- Ensure high availability
- Monitor proactively
- Optimize performance
- Respond to incidents

View File

@@ -0,0 +1,80 @@
# Agentic System Design
## Overview
World-class agentic system design for the senior prompt engineer.
## Core Principles
### Production-First Design
Always design with production in mind:
- Scalability: Handle 10x current load
- Reliability: 99.9% uptime target
- Maintainability: Clear, documented code
- Observability: Monitor everything
### Performance by Design
Optimize from the start:
- Efficient algorithms
- Resource awareness
- Strategic caching
- Batch processing
### Security & Privacy
Build security in:
- Input validation
- Data encryption
- Access control
- Audit logging
## Advanced Patterns
### Pattern 1: Distributed Processing
Enterprise-scale data processing with fault tolerance.
### Pattern 2: Real-Time Systems
Low-latency, high-throughput systems.
### Pattern 3: ML at Scale
Production ML with monitoring and automation.
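A minimal sketch of the loop at the heart of an agentic system — decide, call a tool, observe, repeat under a hard step budget. `llm_decide` is a hypothetical stand-in for the model's policy; real designs add structured tool schemas, guardrails, and cost budgets:
```python
from typing import Callable, Dict, List, Tuple

TOOLS: Dict[str, Callable[[str], str]] = {
    "search": lambda q: f"search results for '{q}'",
    "echo": lambda s: s.upper(),
}

def llm_decide(goal: str, history: List[str]) -> Tuple[str, str]:
    # Hypothetical policy: returns (tool_name, tool_input) or ("finish", answer)
    if not history:
        return "search", goal
    return "finish", f"answer based on: {history[-1]}"

def run_agent(goal: str, max_steps: int = 5) -> str:
    history: List[str] = []
    for _ in range(max_steps):  # hard budget: the loop must terminate
        tool, arg = llm_decide(goal, history)
        if tool == "finish":
            return arg
        history.append(TOOLS[tool](arg))
    return "step budget exhausted"

print(run_agent("latest retrieval benchmarks"))
```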
## Best Practices
### Code Quality
- Comprehensive testing
- Clear documentation
- Code reviews
- Type hints
### Performance
- Profile before optimizing
- Monitor continuously
- Cache strategically
- Batch operations
### Reliability
- Design for failure
- Implement retries
- Use circuit breakers
- Monitor health
## Tools & Technologies
Essential tools for this domain:
- Development frameworks
- Testing libraries
- Deployment platforms
- Monitoring solutions
## Further Reading
- Research papers
- Industry blogs
- Conference talks
- Open source projects

View File

@@ -0,0 +1,80 @@
# LLM Evaluation Frameworks
## Overview
World-class LLM evaluation frameworks for the senior prompt engineer.
## Core Principles
### Production-First Design
Always design with production in mind:
- Scalability: Handle 10x current load
- Reliability: 99.9% uptime target
- Maintainability: Clear, documented code
- Observability: Monitor everything
### Performance by Design
Optimize from the start:
- Efficient algorithms
- Resource awareness
- Strategic caching
- Batch processing
### Security & Privacy
Build security in:
- Input validation
- Data encryption
- Access control
- Audit logging
## Advanced Patterns
### Pattern 1: Distributed Processing
Enterprise-scale data processing with fault tolerance.
### Pattern 2: Real-Time Systems
Low-latency, high-throughput systems.
### Pattern 3: ML at Scale
Production ML with monitoring and automation.
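The smallest useful evaluation harness, as a hedged sketch: run the system under test over labeled cases and report exact-match accuracy. `model` here is a hypothetical callable; real frameworks layer rubric scoring, semantic similarity, and CI regression gates on top:
```python
def model(prompt: str) -> str:
    # Hypothetical system under test
    return {"2+2=": "4", "capital of France?": "Paris"}.get(prompt, "unknown")

cases = [
    ("2+2=", "4"),
    ("capital of France?", "Paris"),
    ("largest planet?", "Jupiter"),
]
passed = 0
for prompt, expected in cases:
    got = model(prompt).strip()
    ok = got == expected
    passed += ok
    print(f"[{'PASS' if ok else 'FAIL'}] {prompt!r}: expected {expected!r}, got {got!r}")
print(f"accuracy: {passed}/{len(cases)} = {passed / len(cases):.0%}")
```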
## Best Practices
### Code Quality
- Comprehensive testing
- Clear documentation
- Code reviews
- Type hints
### Performance
- Profile before optimizing
- Monitor continuously
- Cache strategically
- Batch operations
### Reliability
- Design for failure
- Implement retries
- Use circuit breakers
- Monitor health
## Tools & Technologies
Essential tools for this domain:
- Development frameworks
- Testing libraries
- Deployment platforms
- Monitoring solutions
## Further Reading
- Research papers
- Industry blogs
- Conference talks
- Open source projects

View File

@@ -0,0 +1,80 @@
# Prompt Engineering Patterns
## Overview
World-class prompt engineering patterns for the senior prompt engineer.
## Core Principles
### Production-First Design
Always design with production in mind:
- Scalability: Handle 10x current load
- Reliability: 99.9% uptime target
- Maintainability: Clear, documented code
- Observability: Monitor everything
### Performance by Design
Optimize from the start:
- Efficient algorithms
- Resource awareness
- Strategic caching
- Batch processing
### Security & Privacy
Build security in:
- Input validation
- Data encryption
- Access control
- Audit logging
## Advanced Patterns
### Pattern 1: Distributed Processing
Enterprise-scale data processing with fault tolerance.
### Pattern 2: Real-Time Systems
Low-latency, high-throughput systems.
### Pattern 3: ML at Scale
Production ML with monitoring and automation.
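A runnable instance of the few-shot pattern: a small builder that interleaves labeled examples before the query. The `Input:`/`Output:` labels are an illustrative convention, not a required format:
```python
def build_few_shot_prompt(task, examples, query):
    # Show the model the task, a few labeled examples, then the real input
    parts = [f"Task: {task}"]
    for text, label in examples:
        parts += [f"Input: {text}", f"Output: {label}"]
    parts += [f"Input: {query}", "Output:"]
    return "\n".join(parts)

examples = [
    ("The app crashes on login", "bug"),
    ("Please add dark mode", "feature-request"),
]
print(build_few_shot_prompt(
    "Classify the support ticket as 'bug' or 'feature-request'.",
    examples,
    "Export to CSV fails with a 500 error",
))
```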
## Best Practices
### Code Quality
- Comprehensive testing
- Clear documentation
- Code reviews
- Type hints
### Performance
- Profile before optimizing
- Monitor continuously
- Cache strategically
- Batch operations
### Reliability
- Design for failure
- Implement retries
- Use circuit breakers
- Monitor health
## Tools & Technologies
Essential tools for this domain:
- Development frameworks
- Testing libraries
- Deployment platforms
- Monitoring solutions
## Further Reading
- Research papers
- Industry blogs
- Conference talks
- Open source projects

View File

@@ -0,0 +1,100 @@
#!/usr/bin/env python3
"""
Agent Orchestrator
Production-grade tool for the senior prompt engineer
"""
import sys
import json
import logging
import argparse
from typing import Dict
from datetime import datetime
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
class AgentOrchestrator:
"""Production-grade agent orchestrator"""
def __init__(self, config: Dict):
self.config = config
self.results = {
'status': 'initialized',
'start_time': datetime.now().isoformat(),
'processed_items': 0
}
logger.info(f"Initialized {self.__class__.__name__}")
def validate_config(self) -> bool:
"""Validate configuration"""
logger.info("Validating configuration...")
# Add validation logic
logger.info("Configuration validated")
return True
def process(self) -> Dict:
"""Main processing logic"""
logger.info("Starting processing...")
try:
self.validate_config()
            # Main processing; store the payload instead of discarding it
            self.results['result'] = self._execute()
            self.results['status'] = 'completed'
self.results['end_time'] = datetime.now().isoformat()
logger.info("Processing completed successfully")
return self.results
except Exception as e:
self.results['status'] = 'failed'
self.results['error'] = str(e)
logger.error(f"Processing failed: {e}")
raise
def _execute(self) -> Dict:
"""Execute main logic"""
# Implementation here
return {'success': True}
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Agent Orchestrator"
)
parser.add_argument('--input', '-i', required=True, help='Input path')
parser.add_argument('--output', '-o', required=True, help='Output path')
parser.add_argument('--config', '-c', help='Configuration file')
parser.add_argument('--verbose', '-v', action='store_true', help='Verbose output')
args = parser.parse_args()
if args.verbose:
logging.getLogger().setLevel(logging.DEBUG)
try:
        config = {
            'input': args.input,
            'output': args.output
        }
        if args.config:
            # Merge the optional config file (assumed JSON here) instead of ignoring --config
            with open(args.config) as f:
                config.update(json.load(f))
processor = AgentOrchestrator(config)
results = processor.process()
print(json.dumps(results, indent=2))
sys.exit(0)
except Exception as e:
logger.error(f"Fatal error: {e}")
sys.exit(1)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,100 @@
#!/usr/bin/env python3
"""
Prompt Optimizer
Production-grade tool for the senior prompt engineer
"""
import sys
import json
import logging
import argparse
from typing import Dict
from datetime import datetime
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
class PromptOptimizer:
"""Production-grade prompt optimizer"""
def __init__(self, config: Dict):
self.config = config
self.results = {
'status': 'initialized',
'start_time': datetime.now().isoformat(),
'processed_items': 0
}
logger.info(f"Initialized {self.__class__.__name__}")
def validate_config(self) -> bool:
"""Validate configuration"""
logger.info("Validating configuration...")
# Add validation logic
logger.info("Configuration validated")
return True
def process(self) -> Dict:
"""Main processing logic"""
logger.info("Starting processing...")
try:
self.validate_config()
            # Main processing; store the payload instead of discarding it
            self.results['result'] = self._execute()
            self.results['status'] = 'completed'
self.results['end_time'] = datetime.now().isoformat()
logger.info("Processing completed successfully")
return self.results
except Exception as e:
self.results['status'] = 'failed'
self.results['error'] = str(e)
logger.error(f"Processing failed: {e}")
raise
def _execute(self) -> Dict:
"""Execute main logic"""
# Implementation here
return {'success': True}
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Prompt Optimizer"
)
parser.add_argument('--input', '-i', required=True, help='Input path')
parser.add_argument('--output', '-o', required=True, help='Output path')
parser.add_argument('--config', '-c', help='Configuration file')
parser.add_argument('--verbose', '-v', action='store_true', help='Verbose output')
args = parser.parse_args()
if args.verbose:
logging.getLogger().setLevel(logging.DEBUG)
try:
        config = {
            'input': args.input,
            'output': args.output
        }
        if args.config:
            # Merge the optional config file (assumed JSON here) instead of ignoring --config
            with open(args.config) as f:
                config.update(json.load(f))
processor = PromptOptimizer(config)
results = processor.process()
print(json.dumps(results, indent=2))
sys.exit(0)
except Exception as e:
logger.error(f"Fatal error: {e}")
sys.exit(1)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,100 @@
#!/usr/bin/env python3
"""
RAG Evaluator
Production-grade tool for the senior prompt engineer
"""
import sys
import json
import logging
import argparse
from typing import Dict
from datetime import datetime
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
class RagEvaluator:
"""Production-grade rag evaluator"""
def __init__(self, config: Dict):
self.config = config
self.results = {
'status': 'initialized',
'start_time': datetime.now().isoformat(),
'processed_items': 0
}
logger.info(f"Initialized {self.__class__.__name__}")
def validate_config(self) -> bool:
"""Validate configuration"""
logger.info("Validating configuration...")
# Add validation logic
logger.info("Configuration validated")
return True
def process(self) -> Dict:
"""Main processing logic"""
logger.info("Starting processing...")
try:
self.validate_config()
            # Main processing; store the payload instead of discarding it
            self.results['result'] = self._execute()
            self.results['status'] = 'completed'
self.results['end_time'] = datetime.now().isoformat()
logger.info("Processing completed successfully")
return self.results
except Exception as e:
self.results['status'] = 'failed'
self.results['error'] = str(e)
logger.error(f"Processing failed: {e}")
raise
def _execute(self) -> Dict:
"""Execute main logic"""
# Implementation here
return {'success': True}
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(
description="Rag Evaluator"
)
parser.add_argument('--input', '-i', required=True, help='Input path')
parser.add_argument('--output', '-o', required=True, help='Output path')
parser.add_argument('--config', '-c', help='Configuration file')
parser.add_argument('--verbose', '-v', action='store_true', help='Verbose output')
args = parser.parse_args()
if args.verbose:
logging.getLogger().setLevel(logging.DEBUG)
try:
        config = {
            'input': args.input,
            'output': args.output
        }
        if args.config:
            # Merge the optional config file (assumed JSON here) instead of ignoring --config
            with open(args.config) as f:
                config.update(json.load(f))
processor = RagEvaluator(config)
results = processor.process()
print(json.dumps(results, indent=2))
sys.exit(0)
except Exception as e:
logger.error(f"Fatal error: {e}")
sys.exit(1)
if __name__ == '__main__':
main()