---
name: senior-ml-engineer
description: World-class ML engineering skill for productionizing ML models, MLOps, and building scalable ML systems. Expertise in PyTorch, TensorFlow, model deployment, feature stores, model monitoring, and ML infrastructure. Includes LLM integration, fine-tuning, RAG systems, and agentic AI. Use when deploying ML models, building ML platforms, implementing MLOps, or integrating LLMs into production systems.
---

# Senior ML/AI Engineer

World-class senior ML/AI engineering skill for building production-grade AI, ML, and data systems.

## Quick Start

### Main Capabilities

```bash
# Deploy a trained model through the deployment pipeline
python scripts/model_deployment_pipeline.py --input data/ --output results/

# Build a RAG system from a project's documents
python scripts/rag_system_builder.py --input project/ --output results/

# Set up model monitoring from a configuration file
python scripts/ml_monitoring_suite.py --input data/ --output results/ --config config.yaml
```

## Core Expertise

This skill covers world-class capabilities in:

- Advanced production patterns and architectures
- Scalable system design and implementation
- Performance optimization at scale
- MLOps and DataOps best practices
- Real-time processing and inference
- Distributed computing frameworks
- Model deployment and monitoring
- Security and compliance
- Cost optimization
- Team leadership and mentoring

## Tech Stack

**Languages:** Python, SQL, R, Scala, Go
**ML Frameworks:** PyTorch, TensorFlow, scikit-learn, XGBoost
**Data Tools:** Spark, Airflow, dbt, Kafka, Databricks
**LLM Frameworks:** LangChain, LlamaIndex, DSPy
**Deployment:** Docker, Kubernetes, AWS/GCP/Azure
**Monitoring:** MLflow, Weights & Biases, Prometheus
**Databases:** PostgreSQL, BigQuery, Snowflake, Pinecone

## Reference Documentation

### 1. MLOps Production Patterns

Comprehensive guide available in `references/mlops_production_patterns.md` covering:

- Advanced patterns and best practices
- Production implementation strategies
- Performance optimization techniques
- Scalability considerations
- Security and compliance
- Real-world case studies

### 2. LLM Integration Guide

Complete workflow documentation in `references/llm_integration_guide.md` including:

- Step-by-step processes
- Architecture design patterns
- Tool integration guides
- Performance tuning strategies
- Troubleshooting procedures

### 3. RAG System Architecture

Technical reference guide in `references/rag_system_architecture.md` with:

- System design principles
- Implementation examples
- Configuration best practices
- Deployment strategies
- Monitoring and observability

## Production Patterns

### Pattern 1: Scalable Data Processing

Enterprise-scale data processing with distributed computing (see the sketch after this list):

- Horizontal scaling architecture
- Fault-tolerant design
- Real-time and batch processing
- Data quality validation
- Performance monitoring
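
The sketch below illustrates this pattern under stated assumptions: a PySpark environment, a hypothetical `events` dataset with `user_id`, `amount`, and `event_time` columns, and placeholder S3 paths. It is a minimal illustration of horizontal scaling plus a data-quality gate, not a drop-in pipeline.

```python
# Hypothetical sketch: distributed batch job with a basic data-quality gate.
# Paths, column names, and thresholds are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scalable-batch-processing").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")  # partitioned input

# Data quality validation: reject the run if too many rows are malformed.
total = events.count()
valid = events.filter(F.col("user_id").isNotNull() & (F.col("amount") >= 0))
if total and valid.count() / total < 0.99:
    raise ValueError("Data quality check failed: more than 1% of rows are invalid")

# Fault tolerance comes from Spark's lineage-based recomputation; the aggregation
# itself scales horizontally with the number of executors.
daily_totals = (
    valid.groupBy(F.to_date("event_time").alias("day"))
         .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("events"))
)
daily_totals.write.mode("overwrite").parquet("s3://example-bucket/aggregates/daily/")
```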

### Pattern 2: ML Model Deployment

Production ML system with high availability (see the drift-detection sketch after this list):

- Low-latency model serving
- A/B testing infrastructure
- Feature store integration
- Model monitoring and drift detection
- Automated retraining pipelines
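
A minimal sketch of the drift-detection piece, assuming SciPy and NumPy are available: a two-sample Kolmogorov-Smirnov test compares a single numeric feature's live values against its training distribution. The data here is synthetic and the alert threshold is illustrative; a production setup would run a check like this per feature, on a schedule, and feed the result into the retraining trigger.

```python
# Illustrative drift check: compare live feature values against the training distribution.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Return True if the live distribution differs significantly from the reference."""
    statistic, p_value = ks_2samp(reference, live)
    drifted = p_value < p_threshold
    print(f"KS statistic={statistic:.3f}, p-value={p_value:.4f}, drifted={drifted}")
    return drifted

# Example with synthetic data: the live window has a shifted mean, so drift is flagged.
rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training-time feature values
live = rng.normal(loc=0.4, scale=1.0, size=1_000)        # recent production values
if detect_drift(reference, live):
    print("Trigger the retraining pipeline / page the on-call")
```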

### Pattern 3: Real-Time Inference

High-throughput inference system (see the batching sketch after this list):

- Batching and caching strategies
- Load balancing
- Auto-scaling
- Latency optimization
- Cost optimization
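
A simplified, single-process sketch of the batching idea: requests are queued and the model runs on micro-batches, trading a small amount of latency for throughput. `predict_batch` is a stand-in for the real vectorized model call; a production server would add async I/O, timeouts, and back-pressure.

```python
# Illustrative micro-batching: group incoming requests so the model runs on batches.
# All names and limits are placeholders.
import queue
import threading

MAX_BATCH_SIZE = 16
BATCH_TIMEOUT_S = 0.01  # wait at most 10 ms to fill a batch

request_queue = queue.Queue()

def predict_batch(batch_features):
    """Stand-in for a vectorized model call (e.g. a PyTorch forward pass)."""
    return [sum(row) for row in batch_features]

def batching_worker():
    while True:
        batch = [request_queue.get()]          # block until at least one request arrives
        while len(batch) < MAX_BATCH_SIZE:     # then greedily fill the batch
            try:
                batch.append(request_queue.get(timeout=BATCH_TIMEOUT_S))
            except queue.Empty:
                break
        predictions = predict_batch([features for features, _ in batch])
        for (_, reply), prediction in zip(batch, predictions):
            reply.put(prediction)

threading.Thread(target=batching_worker, daemon=True).start()

def predict(features):
    """Client-facing call: enqueue one request and wait for its result."""
    reply = queue.Queue(maxsize=1)
    request_queue.put((features, reply))
    return reply.get()

print(predict([1.0, 2.0, 3.0]))  # -> 6.0
```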

## Best Practices

### Development

- Test-driven development
- Code reviews and pair programming
- Documentation as code
- Version control everything
- Continuous integration

### Production

- Monitor everything critical
- Automate deployments
- Feature flags for releases
- Canary deployments
- Comprehensive logging

### Team Leadership

- Mentor junior engineers
- Drive technical decisions
- Establish coding standards
- Foster learning culture
- Cross-functional collaboration

## Performance Targets

**Latency:**
- P50: < 50 ms
- P95: < 100 ms
- P99: < 200 ms

**Throughput:**
- Requests/second: > 1,000
- Concurrent users: > 10,000

**Availability:**
- Uptime: 99.9%
- Error rate: < 0.1%
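
A quick way to check measured latencies against these targets, shown with synthetic data (the distribution and sample size are placeholders for real measurements):

```python
# Illustrative SLO check against the latency targets above (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
latencies_ms = rng.gamma(shape=2.0, scale=15.0, size=100_000)  # stand-in for real samples

p50, p95, p99 = np.percentile(latencies_ms, [50, 95, 99])
targets = {"P50": (p50, 50), "P95": (p95, 100), "P99": (p99, 200)}

for name, (observed, budget_ms) in targets.items():
    status = "OK" if observed < budget_ms else "BREACH"
    print(f"{name}: {observed:.1f} ms (target < {budget_ms} ms) -> {status}")
```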

## Security & Compliance

- Authentication & authorization
- Data encryption (at rest & in transit)
- PII handling and anonymization (sketched below)
- GDPR/CCPA compliance
- Regular security audits
- Vulnerability management
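
A small, hedged sketch of the PII-handling bullet: mask email addresses in free text and pseudonymize user identifiers before data leaves the service boundary. The regex and salt are deliberately simple placeholders; real deployments should use a vetted anonymization library and a managed secret.

```python
# Illustrative PII handling: mask emails in free text and pseudonymize user IDs.
# Simplified regex and placeholder salt; not a vetted anonymizer.
import hashlib
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
SALT = "replace-with-a-managed-secret"

def mask_emails(text: str) -> str:
    return EMAIL_RE.sub("[EMAIL REDACTED]", text)

def pseudonymize(user_id: str) -> str:
    return hashlib.sha256((SALT + user_id).encode("utf-8")).hexdigest()[:16]

print(mask_emails("Contact jane.doe@example.com for access"))
print(pseudonymize("user-12345"))
```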

## Common Commands

```bash
# Development
python -m pytest tests/ -v --cov
python -m black src/
python -m pylint src/

# Training
python scripts/train.py --config prod.yaml
python scripts/evaluate.py --model best.pth

# Deployment
docker build -t service:v1 .
kubectl apply -f k8s/
helm upgrade service ./charts/

# Monitoring
kubectl logs -f deployment/service
python scripts/health_check.py
```

## Resources

- Advanced Patterns: `references/mlops_production_patterns.md`
- Implementation Guide: `references/llm_integration_guide.md`
- Technical Reference: `references/rag_system_architecture.md`
- Automation Scripts: `scripts/` directory

## Senior-Level Responsibilities

As a world-class senior professional:

1. **Technical Leadership**
   - Drive architectural decisions
   - Mentor team members
   - Establish best practices
   - Ensure code quality

2. **Strategic Thinking**
   - Align with business goals
   - Evaluate trade-offs
   - Plan for scale
   - Manage technical debt

3. **Collaboration**
   - Work across teams
   - Communicate effectively
   - Build consensus
   - Share knowledge

4. **Innovation**
   - Stay current with research
   - Experiment with new approaches
   - Contribute to community
   - Drive continuous improvement

5. **Production Excellence**
   - Ensure high availability
   - Monitor proactively
   - Optimize performance
   - Respond to incidents

# LLM Integration Guide

## Overview

World-class LLM integration guidance for the senior ML/AI engineer.

## Core Principles

### Production-First Design

Always design with production in mind:
- Scalability: Handle 10x current load
- Reliability: 99.9% uptime target
- Maintainability: Clear, documented code
- Observability: Monitor everything

### Performance by Design

Optimize from the start:
- Efficient algorithms
- Resource awareness
- Strategic caching (sketched below)
- Batch processing
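
A minimal sketch of strategic caching for LLM calls, where `call_llm` is a placeholder for whatever provider or local model is in use; identical `(model, prompt)` pairs are served from an in-process LRU cache instead of re-hitting the backend.

```python
# Illustrative caching for LLM calls. `call_llm` is a placeholder, not a real provider API.
from functools import lru_cache

def call_llm(model: str, prompt: str) -> str:
    """Stand-in for a real provider or local-model call."""
    return f"[{model}] response to: {prompt}"

@lru_cache(maxsize=4096)
def cached_completion(model: str, prompt: str) -> str:
    # Only safe for deterministic settings (e.g. temperature=0); sampled outputs
    # should generally not be cached this way.
    return call_llm(model, prompt)

print(cached_completion("small-model", "Summarize the release notes"))
print(cached_completion("small-model", "Summarize the release notes"))  # served from cache
print(cached_completion.cache_info())
```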

### Security & Privacy

Build security in:
- Input validation
- Data encryption
- Access control
- Audit logging

## Advanced Patterns

### Pattern 1: Distributed Processing

Enterprise-scale data processing with fault tolerance.

### Pattern 2: Real-Time Systems

Low-latency, high-throughput systems.

### Pattern 3: ML at Scale

Production ML with monitoring and automation.

## Best Practices

### Code Quality
- Comprehensive testing
- Clear documentation
- Code reviews
- Type hints

### Performance
- Profile before optimizing
- Monitor continuously
- Cache strategically
- Batch operations

### Reliability
- Design for failure
- Implement retries (see the sketch below)
- Use circuit breakers
- Monitor health
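
A minimal sketch of retries with exponential backoff and jitter around a flaky dependency (an LLM endpoint, a feature store, a vector database). The exception type, attempt limits, and the simulated failure are placeholders.

```python
# Illustrative retry with exponential backoff and jitter around a transient failure.
import random
import time
from functools import wraps

def retry(max_attempts: int = 5, base_delay: float = 0.5, max_delay: float = 8.0):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except ConnectionError as exc:
                    if attempt == max_attempts:
                        raise                      # out of attempts: propagate the error
                    delay = min(max_delay, base_delay * 2 ** (attempt - 1))
                    delay *= random.uniform(0.5, 1.5)  # jitter to avoid thundering herds
                    print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.2f}s")
                    time.sleep(delay)
        return wrapper
    return decorator

@retry(max_attempts=4)
def call_upstream() -> str:
    if random.random() < 0.5:          # simulate a transient failure
        raise ConnectionError("upstream timed out")
    return "ok"

print(call_upstream())
```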

## Tools & Technologies

Essential tools for this domain:
- Development frameworks
- Testing libraries
- Deployment platforms
- Monitoring solutions

## Further Reading

- Research papers
- Industry blogs
- Conference talks
- Open source projects

# MLOps Production Patterns

## Overview

World-class MLOps production patterns for the senior ML/AI engineer.

## Core Principles

### Production-First Design

Always design with production in mind:
- Scalability: Handle 10x current load
- Reliability: 99.9% uptime target
- Maintainability: Clear, documented code
- Observability: Monitor everything

### Performance by Design

Optimize from the start:
- Efficient algorithms
- Resource awareness
- Strategic caching
- Batch processing

### Security & Privacy

Build security in:
- Input validation
- Data encryption
- Access control
- Audit logging

## Advanced Patterns

### Pattern 1: Distributed Processing

Enterprise-scale data processing with fault tolerance.

### Pattern 2: Real-Time Systems

Low-latency, high-throughput systems.

### Pattern 3: ML at Scale

Production ML with monitoring and automation.

## Best Practices

### Code Quality
- Comprehensive testing
- Clear documentation
- Code reviews
- Type hints

### Performance
- Profile before optimizing (see the sketch below)
- Monitor continuously
- Cache strategically
- Batch operations
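
"Profile before optimizing" in practice, as a minimal standard-library sketch; `candidate_hot_path` is a stand-in for the code under suspicion.

```python
# Minimal profiling sketch: measure where time actually goes before optimizing anything.
import cProfile
import pstats

def candidate_hot_path():
    """Stand-in for the code you suspect is slow."""
    return sum(i * i for i in range(500_000))

profiler = cProfile.Profile()
profiler.enable()
candidate_hot_path()
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)  # top 5 call sites
```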

### Reliability
- Design for failure
- Implement retries
- Use circuit breakers (sketched below)
- Monitor health
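
A minimal circuit-breaker sketch: after repeated failures, calls are short-circuited for a cool-down period instead of hammering a struggling dependency. The thresholds and the simulated failing service are placeholders.

```python
# Illustrative circuit breaker around an unreliable dependency.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3, reset_timeout: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (calls allowed)

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: skipping call")
            self.opened_at = None        # half-open: allow a trial call
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

breaker = CircuitBreaker(failure_threshold=2, reset_timeout=5.0)

def flaky_scoring_service():
    raise ConnectionError("model server unavailable")

for _ in range(4):
    try:
        breaker.call(flaky_scoring_service)
    except Exception as exc:
        print(type(exc).__name__, exc)  # two ConnectionErrors, then the circuit opens
```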

## Tools & Technologies

Essential tools for this domain:
- Development frameworks
- Testing libraries
- Deployment platforms
- Monitoring solutions

## Further Reading

- Research papers
- Industry blogs
- Conference talks
- Open source projects

# RAG System Architecture

## Overview

World-class RAG system architecture for the senior ML/AI engineer.
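
A minimal sketch of the retrieval core to anchor what follows: embed a query, rank document chunks by cosine similarity, and pass the top-k chunks to the generator as context. The `embed` function here is a hash-seeded toy whose rankings are essentially arbitrary; it only demonstrates the mechanics, and a real system would use an embedding model and a vector database.

```python
# Minimal RAG retrieval core. `embed` is a placeholder for a real embedding model;
# the corpus is purely illustrative.
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding seeded from a hash of the text; replace with a real model."""
    seed = int.from_bytes(hashlib.md5(text.encode("utf-8")).digest()[:4], "little")
    rng = np.random.default_rng(seed)
    vector = rng.normal(size=dim)
    return vector / np.linalg.norm(vector)

corpus = [
    "Deployment runbook for the fraud model",
    "Feature store schema and backfill procedure",
    "Incident postmortem: vector index out of memory",
]
doc_matrix = np.stack([embed(doc) for doc in corpus])  # precomputed at index time

def retrieve(query: str, k: int = 2) -> list:
    scores = doc_matrix @ embed(query)                 # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [(corpus[i], float(scores[i])) for i in top]

context = retrieve("How do I deploy the fraud model?")
prompt = "Answer using this context:\n" + "\n".join(doc for doc, _ in context)
print(prompt)
```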

## Core Principles

### Production-First Design

Always design with production in mind:
- Scalability: Handle 10x current load
- Reliability: 99.9% uptime target
- Maintainability: Clear, documented code
- Observability: Monitor everything

### Performance by Design

Optimize from the start:
- Efficient algorithms
- Resource awareness
- Strategic caching
- Batch processing

### Security & Privacy

Build security in:
- Input validation
- Data encryption
- Access control
- Audit logging

## Advanced Patterns

### Pattern 1: Distributed Processing

Enterprise-scale data processing with fault tolerance.

### Pattern 2: Real-Time Systems

Low-latency, high-throughput systems.

### Pattern 3: ML at Scale

Production ML with monitoring and automation.

## Best Practices

### Code Quality
- Comprehensive testing
- Clear documentation
- Code reviews
- Type hints

### Performance
- Profile before optimizing
- Monitor continuously
- Cache strategically
- Batch operations

### Reliability
- Design for failure
- Implement retries
- Use circuit breakers
- Monitor health

## Tools & Technologies

Essential tools for this domain:
- Development frameworks
- Testing libraries
- Deployment platforms
- Monitoring solutions

## Further Reading

- Research papers
- Industry blogs
- Conference talks
- Open source projects

#!/usr/bin/env python3
"""
ML Monitoring Suite
Production-grade tool for the senior ML/AI engineer
"""

import sys
import json
import logging
import argparse
from typing import Dict
from datetime import datetime

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)


class MlMonitoringSuite:
    """Production-grade ML monitoring suite"""

    def __init__(self, config: Dict):
        self.config = config
        self.results = {
            'status': 'initialized',
            'start_time': datetime.now().isoformat(),
            'processed_items': 0
        }
        logger.info(f"Initialized {self.__class__.__name__}")

    def validate_config(self) -> bool:
        """Validate configuration"""
        logger.info("Validating configuration...")
        # Add validation logic
        logger.info("Configuration validated")
        return True

    def process(self) -> Dict:
        """Main processing logic"""
        logger.info("Starting processing...")

        try:
            self.validate_config()

            # Main processing
            result = self._execute()
            self.results['result'] = result  # record the execution output with the run metadata

            self.results['status'] = 'completed'
            self.results['end_time'] = datetime.now().isoformat()

            logger.info("Processing completed successfully")
            return self.results

        except Exception as e:
            self.results['status'] = 'failed'
            self.results['error'] = str(e)
            logger.error(f"Processing failed: {e}")
            raise

    def _execute(self) -> Dict:
        """Execute main logic"""
        # Implementation here
        return {'success': True}


def main():
    """Main entry point"""
    parser = argparse.ArgumentParser(
        description="ML Monitoring Suite"
    )
    parser.add_argument('--input', '-i', required=True, help='Input path')
    parser.add_argument('--output', '-o', required=True, help='Output path')
    parser.add_argument('--config', '-c', help='Configuration file')
    parser.add_argument('--verbose', '-v', action='store_true', help='Verbose output')

    args = parser.parse_args()

    if args.verbose:
        logging.getLogger().setLevel(logging.DEBUG)

    try:
        config = {
            'input': args.input,
            'output': args.output,
            'config': args.config  # optional configuration file path
        }

        processor = MlMonitoringSuite(config)
        results = processor.process()

        print(json.dumps(results, indent=2))
        sys.exit(0)

    except Exception as e:
        logger.error(f"Fatal error: {e}")
        sys.exit(1)


if __name__ == '__main__':
    main()

#!/usr/bin/env python3
"""
Model Deployment Pipeline
Production-grade tool for the senior ML/AI engineer
"""

import sys
import json
import logging
import argparse
from typing import Dict
from datetime import datetime

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)


class ModelDeploymentPipeline:
    """Production-grade model deployment pipeline"""

    def __init__(self, config: Dict):
        self.config = config
        self.results = {
            'status': 'initialized',
            'start_time': datetime.now().isoformat(),
            'processed_items': 0
        }
        logger.info(f"Initialized {self.__class__.__name__}")

    def validate_config(self) -> bool:
        """Validate configuration"""
        logger.info("Validating configuration...")
        # Add validation logic
        logger.info("Configuration validated")
        return True

    def process(self) -> Dict:
        """Main processing logic"""
        logger.info("Starting processing...")

        try:
            self.validate_config()

            # Main processing
            result = self._execute()
            self.results['result'] = result  # record the execution output with the run metadata

            self.results['status'] = 'completed'
            self.results['end_time'] = datetime.now().isoformat()

            logger.info("Processing completed successfully")
            return self.results

        except Exception as e:
            self.results['status'] = 'failed'
            self.results['error'] = str(e)
            logger.error(f"Processing failed: {e}")
            raise

    def _execute(self) -> Dict:
        """Execute main logic"""
        # Implementation here
        return {'success': True}


def main():
    """Main entry point"""
    parser = argparse.ArgumentParser(
        description="Model Deployment Pipeline"
    )
    parser.add_argument('--input', '-i', required=True, help='Input path')
    parser.add_argument('--output', '-o', required=True, help='Output path')
    parser.add_argument('--config', '-c', help='Configuration file')
    parser.add_argument('--verbose', '-v', action='store_true', help='Verbose output')

    args = parser.parse_args()

    if args.verbose:
        logging.getLogger().setLevel(logging.DEBUG)

    try:
        config = {
            'input': args.input,
            'output': args.output,
            'config': args.config  # optional configuration file path
        }

        processor = ModelDeploymentPipeline(config)
        results = processor.process()

        print(json.dumps(results, indent=2))
        sys.exit(0)

    except Exception as e:
        logger.error(f"Fatal error: {e}")
        sys.exit(1)


if __name__ == '__main__':
    main()

#!/usr/bin/env python3
"""
RAG System Builder
Production-grade tool for the senior ML/AI engineer
"""

import sys
import json
import logging
import argparse
from typing import Dict
from datetime import datetime

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)


class RagSystemBuilder:
    """Production-grade RAG system builder"""

    def __init__(self, config: Dict):
        self.config = config
        self.results = {
            'status': 'initialized',
            'start_time': datetime.now().isoformat(),
            'processed_items': 0
        }
        logger.info(f"Initialized {self.__class__.__name__}")

    def validate_config(self) -> bool:
        """Validate configuration"""
        logger.info("Validating configuration...")
        # Add validation logic
        logger.info("Configuration validated")
        return True

    def process(self) -> Dict:
        """Main processing logic"""
        logger.info("Starting processing...")

        try:
            self.validate_config()

            # Main processing
            result = self._execute()
            self.results['result'] = result  # record the execution output with the run metadata

            self.results['status'] = 'completed'
            self.results['end_time'] = datetime.now().isoformat()

            logger.info("Processing completed successfully")
            return self.results

        except Exception as e:
            self.results['status'] = 'failed'
            self.results['error'] = str(e)
            logger.error(f"Processing failed: {e}")
            raise

    def _execute(self) -> Dict:
        """Execute main logic"""
        # Implementation here
        return {'success': True}


def main():
    """Main entry point"""
    parser = argparse.ArgumentParser(
        description="RAG System Builder"
    )
    parser.add_argument('--input', '-i', required=True, help='Input path')
    parser.add_argument('--output', '-o', required=True, help='Output path')
    parser.add_argument('--config', '-c', help='Configuration file')
    parser.add_argument('--verbose', '-v', action='store_true', help='Verbose output')

    args = parser.parse_args()

    if args.verbose:
        logging.getLogger().setLevel(logging.DEBUG)

    try:
        config = {
            'input': args.input,
            'output': args.output,
            'config': args.config  # optional configuration file path
        }

        processor = RagSystemBuilder(config)
        results = processor.process()

        print(json.dumps(results, indent=2))
        sys.exit(0)

    except Exception as e:
        logger.error(f"Fatal error: {e}")
        sys.exit(1)


if __name__ == '__main__':
    main()