Initial Commit
docs/changes/vehicles-dropdown-v1/README.md (new file, 125 lines)
@@ -0,0 +1,125 @@
# MVP Platform Vehicles Service Implementation - Executive Summary

## Project Overview

**UPDATED ARCHITECTURE DECISION**: This implementation creates the MVP Platform Vehicles Service as part of MotoVaultPro's distributed microservices architecture. The service provides hierarchical vehicle API endpoints and VIN decoding capabilities, replacing external NHTSA vPIC API calls with a local, high-performance 3-container microservice.

**STATUS**: Implementation complete - all six phases finished; ready for testing and validation

**IMPORTANT**: The `vehicle-etl/` directory is temporary and will be removed when integration is complete. All functionality is being integrated directly into the main MotoVaultPro application as the MVP Platform Vehicles Service.
## Architecture Goals

1. **Microservices Architecture**: Create 3-container MVP Platform Vehicles Service (DB + ETL + FastAPI)
2. **Hierarchical Vehicle API**: Implement year-based filtering with hierarchical parameters
3. **PostgreSQL VIN Decoding**: Create `vpic.f_decode_vin()` function with MSSQL parity
4. **Service Independence**: Platform service completely independent with own database
5. **Performance**: Sub-100ms hierarchical endpoint response times with year-based caching
## Context7 Verified Technology Stack

- **Docker Compose**: Latest version with health checks and dependency management ✅
- **PostgreSQL 15**: Stable, production-ready with excellent Docker support ✅
- **Python 3.11**: Current stable version for FastAPI ETL processing ✅
- **Node.js 20**: LTS version for TypeScript backend integration ✅
- **FastAPI**: Modern async framework, perfect for ETL API endpoints ✅
## Implementation Strategy - Distributed Microservices

The implementation creates a complete 3-container platform service in 6 phases:

### **Phase 1: Infrastructure Setup** ✅ COMPLETED

- ✅ Added mvp-platform-vehicles-db container (PostgreSQL with vpic schema)
- ✅ Added mvp-platform-vehicles-etl container (Python ETL processor)
- ✅ Added mvp-platform-vehicles-api container (FastAPI service)
- ✅ Updated docker-compose.yml with health checks and dependencies

### **Phase 2: FastAPI Hierarchical Endpoints** ✅ COMPLETED

- ✅ Implemented year-based hierarchical filtering endpoints (makes, models, trims, engines, transmissions)
- ✅ Added Query parameter validation with FastAPI
- ✅ Created hierarchical caching strategy with Redis
- ✅ Built complete FastAPI application structure with proper dependencies and middleware
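To make the hierarchical narrowing concrete, here is a minimal stdlib-only sketch of the filtering these endpoints perform (the FastAPI routing, Query validation, and real vpic data are omitted; the sample rows are invented for illustration):

```python
# Sketch of year-based hierarchical filtering. Sample rows are invented;
# in the real service they would come from the vpic PostgreSQL schema.
SAMPLE_ROWS = [
    {"year": 2023, "make": "Honda", "model": "Civic", "trim": "EX"},
    {"year": 2023, "make": "Honda", "model": "Accord", "trim": "Sport"},
    {"year": 2023, "make": "Toyota", "model": "Camry", "trim": "LE"},
    {"year": 2022, "make": "Honda", "model": "Civic", "trim": "LX"},
]

def list_makes(year: int) -> list[str]:
    """Distinct makes for a model year (the first dropdown after year)."""
    return sorted({r["make"] for r in SAMPLE_ROWS if r["year"] == year})

def list_models(year: int, make: str) -> list[str]:
    """Distinct models once year and make are chosen."""
    return sorted({r["model"] for r in SAMPLE_ROWS
                   if r["year"] == year and r["make"] == make})

def list_trims(year: int, make: str, model: str) -> list[str]:
    """Distinct trims for a full year/make/model selection."""
    return sorted({r["trim"] for r in SAMPLE_ROWS
                   if r["year"] == year and r["make"] == make
                   and r["model"] == model})
```

Each dropdown level reuses the previous selections as filters, which is what makes year-based caching effective: the cache key is just the parameter tuple.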
### **Phase 3: PostgreSQL VIN Decoding Function** ✅ COMPLETED

- ✅ Implemented `vpic.f_decode_vin()` with MSSQL stored procedure parity
- ✅ Added WMI resolution, year calculation, and confidence scoring
- ✅ Created VIN decode caching tables with automatic cache population
- ✅ Built complete year calculation logic with 30-year cycle handling
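The 30-year cycle mentioned above can be illustrated in a few lines. This is a simplified sketch of the logic, not the actual `vpic.f_decode_vin()` implementation; real decoding has per-vehicle-type exceptions not handled here:

```python
# Simplified illustration of VIN model-year decoding with the 30-year cycle.
# VIN position 10 (index 9) carries a year code; the same code repeats every
# 30 years, so a second signal is needed to pick the cycle.
YEAR_CODES = "ABCDEFGHJKLMNPRSTVWXY123456789"  # 30 codes; I/O/Q/U/Z/0 excluded

def model_year(vin: str) -> int:
    """Decode the model year from VIN position 10.

    Heuristic for choosing the cycle: for passenger vehicles, a letter in
    position 7 indicates the 2010-2039 cycle, a digit the 1980-2009 cycle.
    """
    offset = YEAR_CODES.index(vin[9].upper())
    base = 2010 if vin[6].isalpha() else 1980
    return base + offset
```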
### **Phase 4: ETL Container Implementation** ✅ COMPLETED

- ✅ Set up scheduled weekly ETL processing with cron-based scheduler
- ✅ Configured MSSQL source connection with pyodbc and proper ODBC drivers
- ✅ Implemented data transformation and loading pipeline with connection testing
- ✅ Added ETL health checks and error handling with comprehensive logging
### **Phase 5: Application Integration** ✅ COMPLETED

- ✅ Created platform vehicles client with comprehensive circuit breaker pattern
- ✅ Built platform integration service with automatic fallback to external vPIC
- ✅ Updated vehicles feature to consume hierarchical platform service API
- ✅ Implemented feature flag system for gradual platform service migration
- ✅ Updated all vehicle dropdown endpoints to use hierarchical parameters (year → make → model → trims/engines/transmissions)
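The circuit breaker and fallback behavior can be sketched as follows. This is an illustration only, not the actual client code; the threshold and the fallback callable are assumptions, and production breakers also add a half-open recovery state omitted here:

```python
# Minimal circuit-breaker sketch: after `threshold` consecutive failures the
# breaker opens and calls go straight to the fallback (e.g. external vPIC).
from typing import Any, Callable

class CircuitBreaker:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.threshold

    def call(self, primary: Callable[[], Any],
             fallback: Callable[[], Any]) -> Any:
        if self.open:
            return fallback()      # short-circuit: skip the failing service
        try:
            result = primary()
            self.failures = 0      # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            return fallback()
```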
### **Phase 6: Testing & Validation** ⚡ READY FOR TESTING

- ⚡ **Ready**: Hierarchical API performance testing (<100ms target)
- ⚡ **Ready**: VIN decoding accuracy parity testing with PostgreSQL function
- ⚡ **Ready**: ETL processing validation with scheduled weekly pipeline
- ⚡ **Ready**: Circuit breaker pattern testing with graceful fallbacks
- ⚡ **Ready**: End-to-end platform service integration testing

## **🎯 IMPLEMENTATION COMPLETE**

All implementation phases of the MVP Platform Vehicles Service are complete. The service is now ready for Phase 6 testing and validation.
## Success Criteria - IMPLEMENTATION STATUS

- ✅ **Zero Breaking Changes**: Hierarchical API maintains backward compatibility with circuit breakers
- ✅ **Performance**: Platform service designed for <100ms with year-based caching
- ✅ **Accuracy**: PostgreSQL `vpic.f_decode_vin()` function implements MSSQL stored procedure parity
- ✅ **Reliability**: Weekly ETL scheduler with comprehensive error handling and health checks
- ✅ **Scalability**: Complete 3-container microservice architecture ready for production
## Next Steps

1. **Start Services**: Run `make dev` to start the full microservices environment
2. **Test Platform API**: Access http://localhost:8000/docs for the FastAPI documentation
3. **Test Application**: Verify hierarchical dropdowns in the frontend at https://motovaultpro.com
4. **Monitor ETL**: Check ETL logs with `make logs-platform-vehicles`
5. **Validate Performance**: Test <100ms response times with real vehicle data
## MVP Platform Foundation Benefits

This implementation establishes the **foundational pattern for MVP Platform shared services**:

- **Standardized Naming**: `mvp-platform-*` services and databases
- **Service Isolation**: Separate databases for different domains
- **Scheduled Processing**: Automated data pipeline management
- **API Integration**: Seamless integration through existing feature capsules
- **Monitoring Ready**: Health checks and observability from day one
## Future Platform Services

Once established, this pattern enables rapid deployment of additional platform services:

- `mvp-platform-analytics` (user behavior tracking)
- `mvp-platform-notifications` (email/SMS service)
- `mvp-platform-payments` (payment processing)
- `mvp-platform-documents` (file storage service)
## Getting Started

1. Review [Architecture Decisions](./architecture-decisions.md) for technical context
2. Follow the [Implementation Checklist](./implementation-checklist.md) for step-by-step execution
3. Execute phases sequentially, starting with [Phase 1: Infrastructure](./phase-01-infrastructure.md)
4. Validate each phase using the provided test procedures
## AI Assistant Guidance

This documentation is optimized for efficient AI assistant execution:

- Each phase contains explicit, actionable instructions
- All file paths and code changes are precisely specified
- Validation steps are included for each major change
- Error handling and rollback procedures are documented
- Dependencies and prerequisites are clearly stated

For any clarification on implementation details, refer to the specific phase documentation or the comprehensive [Implementation Checklist](./implementation-checklist.md).
docs/changes/vehicles-dropdown-v1/architecture-decisions.md (new file, 465 lines)
@@ -0,0 +1,465 @@
# Architecture Decisions - Vehicle ETL Integration

## Overview

This document captures all architectural decisions made during the Vehicle ETL integration project. Each decision includes the context, options considered, decision made, and rationale. This serves as a reference for future AI assistants and development teams.

## Context7 Technology Validation

All technology choices were verified through Context7 for current best practices, compatibility, and production readiness:

- ✅ **Docker Compose**: Latest version with health checks and dependency management
- ✅ **PostgreSQL 15**: Stable, production-ready with excellent Docker support
- ✅ **Python 3.11**: Current stable version for FastAPI ETL processing
- ✅ **Node.js 20**: LTS version for TypeScript backend integration
- ✅ **FastAPI**: Modern async framework, perfect for ETL API endpoints

---
## Decision 1: MVP Platform Naming Convention

### Context

Need to establish a consistent naming pattern for shared services that will be used across multiple features and future platform services.

### Options Considered

1. **Generic naming**: `shared-database`, `common-db`
2. **Service-specific naming**: `vehicle-database`, `vpic-database`
3. **Platform-prefixed naming**: `mvp-platform-database`, `mvp-platform-*`

### Decision Made

**Chosen**: Platform-prefixed naming with pattern `mvp-platform-*`

### Rationale

- Establishes clear ownership and purpose
- Scales to multiple platform services
- Avoids naming conflicts with feature-specific resources
- Creates recognizable pattern for future services
- Aligns with microservices architecture principles

### Implementation

- Database service: `mvp-platform-database`
- Database name: `mvp-platform-vehicles`
- User: `mvp_platform_user`
- Cache keys: `mvp-platform:*`

---
## Decision 2: Database Separation Strategy

### Context

Need to determine how to integrate the MVP Platform database with the existing MotoVaultPro database architecture.

### Options Considered

1. **Single Database**: Add ETL tables to existing MotoVaultPro database
2. **Schema Separation**: Use separate schemas within existing database
3. **Complete Database Separation**: Separate PostgreSQL instance for platform services

### Decision Made

**Chosen**: Complete Database Separation

### Rationale

- **Service Isolation**: Platform services can be independently managed
- **Scalability**: Each service can have different performance requirements
- **Security**: Separate access controls and permissions
- **Maintenance**: Independent backup and recovery procedures
- **Future-Proofing**: Ready for microservices deployment on Kubernetes

### Implementation

- Main app database: `motovaultpro` on port 5432
- Platform database: `mvp-platform-vehicles` on port 5433
- Separate connection pools in backend service
- Independent health checks and monitoring

---
## Decision 3: ETL Processing Architecture

### Context

Need to replace external NHTSA vPIC API calls with local data while maintaining data freshness.

### Options Considered

1. **Real-time Proxy**: Cache API responses indefinitely
2. **Daily Sync**: Update local database daily
3. **Weekly Batch ETL**: Full database refresh weekly
4. **Hybrid Approach**: Local cache with periodic full refresh

### Decision Made

**Chosen**: Weekly Batch ETL with local database

### Rationale

- **Data Freshness**: Vehicle specifications change infrequently
- **Performance**: Sub-100ms response times achievable with local queries
- **Reliability**: No dependency on external API availability
- **Cost**: Reduces external API calls and rate limiting concerns
- **Control**: Complete control over data quality and availability

### Implementation

- Weekly Sunday 2 AM ETL execution
- Complete database rebuild each cycle
- Comprehensive error handling and retry logic
- Health monitoring and alerting
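For reference, the "next Sunday 2 AM" window the cron entry encodes can be computed like this (a stdlib sketch for illustration; the actual container relies on cron, not this code):

```python
# Compute the next Sunday 02:00 run time for the weekly ETL window.
from datetime import datetime, timedelta

def next_etl_run(now: datetime) -> datetime:
    target = now.replace(hour=2, minute=0, second=0, microsecond=0)
    days_ahead = (6 - now.weekday()) % 7   # Monday=0 ... Sunday=6
    target += timedelta(days=days_ahead)
    if target <= now:                      # already past this Sunday 02:00
        target += timedelta(days=7)
    return target
```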
---
## Decision 4: Scheduled Processing Implementation

### Context

Need to implement automated ETL processing with proper scheduling, monitoring, and error handling.

### Options Considered

1. **External Cron**: Use host system cron to trigger Docker exec
2. **Container Cron**: Install cron daemon within ETL container
3. **Kubernetes CronJob**: Use K8s native job scheduling
4. **Third-party Scheduler**: Use external scheduling service

### Decision Made

**Chosen**: Container Cron with Docker Compose

### Rationale

- **Simplicity**: Maintains single Docker Compose deployment
- **Self-Contained**: No external dependencies for development
- **Kubernetes Ready**: Can be migrated to K8s CronJob later
- **Monitoring**: Container-based health checks and logging
- **Development**: Easy local testing and debugging

### Implementation

- Python 3.11 container with cron daemon
- Configurable schedule via environment variables
- Health checks and status monitoring
- Comprehensive logging and error reporting

---
## Decision 5: API Integration Pattern

### Context

Need to integrate MVP Platform database access while maintaining exact API compatibility.

### Options Considered

1. **API Gateway**: Proxy requests to separate ETL API service
2. **Direct Integration**: Query MVP Platform database directly from vehicles feature
3. **Service Layer**: Create intermediate service layer
4. **Hybrid**: Mix of direct queries and service calls

### Decision Made

**Chosen**: Direct Integration within Vehicles Feature

### Rationale

- **Performance**: Direct database queries eliminate HTTP overhead
- **Simplicity**: Reduces complexity and potential failure points
- **Maintainability**: All vehicle-related code in single feature capsule
- **Zero Breaking Changes**: Exact same API interface preserved
- **Feature Capsule Pattern**: Maintains self-contained feature architecture

### Implementation

- MVP Platform repository within vehicles feature
- Direct PostgreSQL queries using existing connection pool pattern
- Same caching strategy with Redis
- Preserve exact response formats
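Although the backend repository is TypeScript, the parameterized-query pattern behind "direct PostgreSQL queries" can be shown with a language-agnostic Python sketch (the table and column names here are assumptions for illustration, not the real schema):

```python
# Sketch of building a parameterized hierarchical dropdown query.
# Placeholders keep user input out of the SQL text (SQL injection prevention);
# the driver substitutes the params tuple safely at execution time.
def build_models_query(year: int, make: str) -> tuple[str, tuple]:
    sql = (
        "SELECT DISTINCT model FROM vpic.vehicle_models "
        "WHERE year = %s AND make = %s ORDER BY model"
    )
    return sql, (year, make)
```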
---
## Decision 6: VIN Decoding Algorithm Migration

### Context

Need to port complex VIN decoding logic from Python ETL to TypeScript backend.

### Options Considered

1. **Full Port**: Rewrite all VIN decoding logic in TypeScript
2. **Database Functions**: Implement logic as PostgreSQL functions
3. **API Calls**: Call Python ETL API for VIN decoding
4. **Simplified Logic**: Implement basic VIN decoding only

### Decision Made

**Chosen**: Full Port to TypeScript with Database Assist

### Rationale

- **Performance**: Avoids HTTP calls for every VIN decode
- **Consistency**: All business logic in same language/runtime
- **Maintainability**: Single codebase for vehicle logic
- **Flexibility**: Can enhance VIN logic without ETL changes
- **Testing**: Easier to test within existing test framework

### Implementation

- TypeScript VIN validation and year extraction
- Database queries for pattern matching and confidence scoring
- Comprehensive error handling and fallback logic
- Maintain exact same accuracy as original Python implementation
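As one concrete piece of that logic, VIN validation conventionally starts with the ISO 3779 check digit in position 9. A sketch of that check (shown in Python for brevity, though the port targets TypeScript):

```python
# ISO 3779 VIN check-digit validation (position 9, index 8). Letters are
# transliterated to digits, multiplied by position weights, summed, and
# reduced mod 11; a remainder of 10 is written as 'X'.
TRANSLIT = {c: v for c, v in zip("ABCDEFGH", range(1, 9))}
TRANSLIT.update({c: v for c, v in zip("JKLMN", range(1, 6))})
TRANSLIT.update({"P": 7, "R": 9})
TRANSLIT.update({c: v for c, v in zip("STUVWXYZ", range(2, 10))})
TRANSLIT.update({str(d): d for d in range(10)})
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def valid_check_digit(vin: str) -> bool:
    vin = vin.upper()
    # Reject wrong lengths and forbidden characters (I, O, Q are not VIN chars)
    if len(vin) != 17 or any(c not in TRANSLIT for c in vin):
        return False
    total = sum(TRANSLIT[c] * w for c, w in zip(vin, WEIGHTS))
    expected = "X" if total % 11 == 10 else str(total % 11)
    return vin[8] == expected
```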
---
## Decision 7: Caching Strategy

### Context

Need to maintain high performance while transitioning from external API to database queries.

### Options Considered

1. **No Caching**: Direct database queries only
2. **Database-Level Caching**: PostgreSQL query caching
3. **Application Caching**: Redis with existing patterns
4. **Multi-Level Caching**: Both database and Redis caching

### Decision Made

**Chosen**: Application Caching with Updated Key Patterns

### Rationale

- **Existing Infrastructure**: Leverage existing Redis instance
- **Performance Requirements**: Meet sub-100ms response time goals
- **Cache Hit Rates**: Maintain high cache efficiency
- **TTL Strategy**: Different TTLs for different data types
- **Invalidation**: Clear invalidation strategy for data updates

### Implementation

- VIN decoding: 30-day TTL (specifications don't change)
- Dropdown data: 7-day TTL (infrequent updates)
- Cache key pattern: `mvp-platform:*` for new services
- Existing Redis instance with updated key patterns
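A sketch of the key pattern and TTLs described above (production uses the existing Redis instance; here a dict with an injectable clock stands in so the expiry behavior is easy to follow, and the key segments are illustrative assumptions):

```python
# Per-data-type TTLs from the caching decision, in seconds.
import time

TTLS = {"vin": 30 * 86400, "dropdown": 7 * 86400}

def cache_key(kind: str, *parts: str) -> str:
    """Build an mvp-platform:* key, e.g. mvp-platform:vehicles:vin:<VIN>."""
    return "mvp-platform:vehicles:" + kind + ":" + ":".join(parts)

class TtlCache:
    """In-memory stand-in for Redis SETEX/GET with expiry."""
    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._store = {}  # key -> (expires_at, value)

    def set(self, key, value, ttl):
        self._store[key] = (self._clock() + ttl, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or entry[0] <= self._clock():
            return None   # missing or expired
        return entry[1]
```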
---
## Decision 8: Error Handling and Fallback Strategy

### Context

Need to ensure system reliability when MVP Platform database is unavailable.

### Options Considered

1. **Fail Fast**: Return errors immediately when database unavailable
2. **External API Fallback**: Fall back to original NHTSA API
3. **Cached Responses**: Return stale cached data
4. **Graceful Degradation**: Provide limited functionality

### Decision Made

**Chosen**: Graceful Degradation with Cached Responses

### Rationale

- **User Experience**: Avoid complete service failure
- **Data Availability**: Cached data still valuable when fresh data unavailable
- **System Reliability**: Partial functionality better than complete failure
- **Performance**: Cached responses still meet performance requirements
- **Recovery**: System automatically recovers when database available

### Implementation

- Return cached data when database unavailable
- Appropriate HTTP status codes (503 Service Unavailable)
- Health check endpoints for monitoring
- Automatic retry logic with exponential backoff
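The retry logic can be sketched as follows (an illustration only; the attempt count and base delay are assumptions, and `sleep` is injectable purely so the sketch is testable without waiting):

```python
# Retry with exponential backoff for transient database outages.
# Delays double on each failed attempt: 0.5s, 1s, 2s, ...
import time
from typing import Any, Callable

def retry_with_backoff(fn: Callable[[], Any], attempts: int = 4,
                       base_delay: float = 0.5,
                       sleep: Callable[[float], None] = time.sleep) -> Any:
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                           # out of retries: surface error
            sleep(base_delay * (2 ** attempt))  # exponential backoff
```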
---
## Decision 9: Authentication and Security Model

### Context

Need to maintain existing security model while adding new platform services.

### Options Considered

1. **Authenticate All**: Require authentication for all new endpoints
2. **Mixed Authentication**: Some endpoints public, some authenticated
3. **Maintain Current**: Keep dropdown endpoints unauthenticated
4. **Enhanced Security**: Add additional security layers

### Decision Made

**Chosen**: Maintain Current Security Model

### Rationale

- **Zero Breaking Changes**: Frontend requires no modifications
- **Security Analysis**: Dropdown data is public NHTSA information
- **Performance**: No authentication overhead for public data
- **Documentation**: Aligned with security.md requirements
- **Future Flexibility**: Can add authentication layers later if needed

### Implementation

- Dropdown endpoints remain unauthenticated
- CRUD endpoints still require JWT authentication
- Platform services follow same security patterns
- Comprehensive input validation and SQL injection prevention

---
## Decision 10: Testing and Validation Strategy

### Context

Need comprehensive testing to ensure zero breaking changes and meet performance requirements.

### Options Considered

1. **Unit Tests Only**: Focus on code-level testing
2. **Integration Tests**: Test API endpoints and database integration
3. **Performance Tests**: Focus on response time requirements
4. **Comprehensive Testing**: All test types with automation

### Decision Made

**Chosen**: Comprehensive Testing with Automation

### Rationale

- **Quality Assurance**: Meet all success criteria requirements
- **Risk Mitigation**: Identify issues before production deployment
- **Performance Validation**: Ensure sub-100ms response times
- **Regression Prevention**: Automated tests catch future issues
- **Documentation**: Tests serve as behavior documentation

### Implementation

- API functionality tests for response format validation
- Authentication tests for security model compliance
- Performance tests for response time requirements
- Data accuracy tests for VIN decoding validation
- ETL process tests for scheduled job functionality
- Load tests for concurrent request handling
- Error handling tests for failure scenarios

---
## Decision 11: Deployment and Infrastructure Strategy

### Context

Need to determine deployment approach that supports both development and production.

### Options Considered

1. **Docker Compose Only**: Single deployment method
2. **Kubernetes Only**: Production-focused deployment
3. **Hybrid Approach**: Docker Compose for dev, Kubernetes for prod
4. **Multiple Options**: Support multiple deployment methods

### Decision Made

**Chosen**: Hybrid Approach (Docker Compose → Kubernetes)

### Rationale

- **Development Efficiency**: Docker Compose simpler for local development
- **Production Scalability**: Kubernetes required for production scaling
- **Migration Path**: Clear path from development to production
- **Team Skills**: Matches team capabilities and tooling
- **Cost Efficiency**: Docker Compose sufficient for development/staging

### Implementation

- Current implementation: Docker Compose with production-ready containers
- Future migration: Kubernetes manifests for production deployment
- Container images designed for both environments
- Environment variable configuration for deployment flexibility

---
## Decision 12: Data Migration and Backwards Compatibility

### Context

Need to handle transition from external API to local database without service disruption.

### Options Considered

1. **Big Bang Migration**: Switch all at once
2. **Gradual Migration**: Migrate endpoints one by one
3. **Blue-Green Deployment**: Parallel systems with traffic switch
4. **Feature Flags**: Toggle between old and new systems

### Decision Made

**Chosen**: Big Bang Migration with Comprehensive Testing

### Rationale

- **Simplicity**: Single transition point reduces complexity
- **Testing**: Comprehensive test suite validates entire system
- **Rollback**: Clear rollback path if issues discovered
- **MVP Scope**: Limited scope makes big bang migration feasible
- **Zero Downtime**: Migration can be done without service interruption

### Implementation

- Complete testing in development environment
- Staging deployment for validation
- Production deployment during low-traffic window
- Immediate rollback capability if issues detected
- Monitoring and alerting for post-deployment validation

---
## MVP Platform Architecture Principles

Based on these decisions, the following principles guide MVP Platform development:

### 1. Service Isolation

- Each platform service has its own database
- Independent deployment and scaling
- Clear service boundaries and responsibilities

### 2. Standardized Naming

- All platform services use `mvp-platform-*` prefix
- Consistent naming across databases, containers, and cache keys
- Predictable patterns for future services

### 3. Performance First

- Sub-100ms response times for all public endpoints
- Aggressive caching with appropriate TTLs
- Database optimization and connection pooling

### 4. Zero Breaking Changes

- Existing API contracts never change
- Frontend requires no modifications
- Backward compatibility maintained across all changes

### 5. Comprehensive Testing

- Automated test suites for all changes
- Performance validation requirements
- Error handling and edge case coverage

### 6. Graceful Degradation

- Systems continue operating with reduced functionality
- Appropriate error responses and status codes
- Automatic recovery when services restore

### 7. Observability Ready

- Health check endpoints for all services
- Comprehensive logging and monitoring
- Alerting for critical failures

### 8. Future-Proof Architecture

- Designed for Kubernetes migration
- Microservices-ready patterns
- Extensible for additional platform services

---
## Future Architecture Evolution

### Next Platform Services

Following this pattern, future platform services will include:

1. **mvp-platform-analytics**: User behavior tracking and analysis
2. **mvp-platform-notifications**: Email, SMS, and push notifications
3. **mvp-platform-payments**: Payment processing and billing
4. **mvp-platform-documents**: File storage and document management
5. **mvp-platform-search**: Full-text search and indexing

### Kubernetes Migration Plan

When ready for production scaling:

1. **Container Compatibility**: All containers designed for Kubernetes
2. **Configuration Management**: Environment-based configuration
3. **Service Discovery**: Native Kubernetes service discovery
4. **Persistent Storage**: Kubernetes persistent volumes
5. **Auto-scaling**: Horizontal pod autoscaling
6. **Ingress**: Kubernetes ingress controllers
7. **Monitoring**: Prometheus and Grafana integration

### Microservices Evolution

Path to full microservices architecture:

1. **Service Extraction**: Extract platform services to independent deployments
2. **API Gateway**: Implement centralized API gateway
3. **Service Mesh**: Add service mesh for advanced networking
4. **Event-Driven**: Implement event-driven communication patterns
5. **CQRS**: Command Query Responsibility Segregation for complex domains

---
## Decision Review and Updates

This document should be reviewed and updated:

- **Before adding new platform services**: Ensure consistency with established patterns
- **During performance issues**: Review caching and database decisions
- **When scaling requirements change**: Evaluate deployment and infrastructure choices
- **After major technology updates**: Reassess technology choices with current best practices

All architectural decisions should be validated against:

- Performance requirements and SLAs
- Security and compliance requirements
- Team capabilities and maintenance burden
- Cost and resource constraints
- Future scalability and extensibility needs

**Document Last Updated**: [Current Date]
**Next Review Date**: [3 months from last update]
docs/changes/vehicles-dropdown-v1/implementation-checklist.md (new file, 634 lines)
@@ -0,0 +1,634 @@
# Vehicle ETL Integration - Implementation Checklist

## Overview

This checklist provides step-by-step execution guidance for implementing the Vehicle ETL integration. Each item includes verification steps and dependencies to ensure successful completion.

## Pre-Implementation Requirements

- [ ] **Docker Environment Ready**: Docker and Docker Compose installed and functional
- [ ] **Main Application Running**: MotoVaultPro backend and frontend operational
- [ ] **NHTSA Database Backup**: VPICList backup file available in `vehicle-etl/volumes/mssql/backups/`
- [ ] **Network Ports Available**: Ports 5433 (MVP Platform DB) and 1433 (MSSQL) available
- [ ] **Git Branch Created**: Feature branch created for implementation
- [ ] **Backup Taken**: Complete backup of current working state

---
## Phase 1: Infrastructure Setup

### ✅ Task 1.1: Add MVP Platform Database Service

**Files**: `docker-compose.yml`

- [ ] Add `mvp-platform-database` service definition
- [ ] Configure PostgreSQL 15-alpine image
- [ ] Set database name to `mvp-platform-vehicles`
- [ ] Configure user `mvp_platform_user`
- [ ] Set port mapping to `5433:5432`
- [ ] Add health check configuration
- [ ] Add volume `mvp_platform_data`

**Verification**:

```bash
docker-compose config | grep -A 20 "mvp-platform-database"
```
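A minimal sketch of what the Task 1.1 service definition might look like in `docker-compose.yml` (the names and ports follow the checklist; the healthcheck command and timings are assumptions, not the actual file):

```yaml
  mvp-platform-database:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: mvp-platform-vehicles
      POSTGRES_USER: mvp_platform_user
      POSTGRES_PASSWORD: ${MVP_PLATFORM_DB_PASSWORD}
    ports:
      - "5433:5432"           # host 5433 -> container 5432, per the checklist
    volumes:
      - mvp_platform_data:/var/lib/postgresql/data
    healthcheck:              # illustrative; actual check may differ
      test: ["CMD-SHELL", "pg_isready -U mvp_platform_user -d mvp-platform-vehicles"]
      interval: 10s
      timeout: 5s
      retries: 5
```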
### ✅ Task 1.2: Add MSSQL Source Database Service

**Files**: `docker-compose.yml`

- [ ] Add `mssql-source` service definition
- [ ] Configure MSSQL Server 2019 image
- [ ] Set SA password from environment variable
- [ ] Configure backup volume mount
- [ ] Add health check with 60s start period
- [ ] Add volume `mssql_source_data`

**Verification**:

```bash
docker-compose config | grep -A 15 "mssql-source"
```
### ✅ Task 1.3: Add ETL Scheduler Service

**Files**: `docker-compose.yml`

- [ ] Add `etl-scheduler` service definition
- [ ] Configure build context to `./vehicle-etl`
- [ ] Set all required environment variables
- [ ] Add dependency on both databases with health checks
- [ ] Configure logs volume mount
- [ ] Add volume `etl_scheduler_data`

**Verification**:

```bash
docker-compose config | grep -A 25 "etl-scheduler"
```
### ✅ Task 1.4: Update Backend Environment Variables

**Files**: `docker-compose.yml`

- [ ] Add `MVP_PLATFORM_DB_HOST` environment variable to backend
- [ ] Add `MVP_PLATFORM_DB_PORT` environment variable
- [ ] Add `MVP_PLATFORM_DB_NAME` environment variable
- [ ] Add `MVP_PLATFORM_DB_USER` environment variable
- [ ] Add `MVP_PLATFORM_DB_PASSWORD` environment variable
- [ ] Add dependency on `mvp-platform-database`

**Verification**:

```bash
docker-compose config | grep -A 10 "MVP_PLATFORM_DB"
```
### ✅ Task 1.5: Update Environment Files

**Files**: `.env.example`, `.env`

- [ ] Add `MVP_PLATFORM_DB_PASSWORD` to `.env.example`
- [ ] Add `MSSQL_SOURCE_PASSWORD` to `.env.example`
- [ ] Add ETL configuration variables
- [ ] Update local `.env` file if it exists

**Verification**:

```bash
grep "MVP_PLATFORM_DB_PASSWORD" .env.example
```
|
||||
|
||||
### ✅ Phase 1 Validation
|
||||
|
||||
- [ ] **Docker Compose Valid**: `docker-compose config` succeeds
|
||||
- [ ] **Services Start**: `docker-compose up mvp-platform-database mssql-source -d` succeeds
|
||||
- [ ] **Health Checks Pass**: Both databases show healthy status
|
||||
- [ ] **Database Connections**: Can connect to both databases
|
||||
- [ ] **Logs Directory Created**: `./vehicle-etl/logs/` exists
|
||||
|
||||
**Critical Check**:
|
||||
```bash
|
||||
docker-compose ps | grep -E "(mvp-platform-database|mssql-source)" | grep "healthy"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Phase 2: Backend Migration

### ✅ Task 2.1: Remove External vPIC Dependencies

**Files**: `backend/src/features/vehicles/external/` (directory)

- [ ] Delete entire `external/vpic/` directory
- [ ] Remove `VPIC_API_URL` from `environment.ts`
- [ ] Add MVP Platform DB configuration to `environment.ts`

**Verification**:

```bash
ls backend/src/features/vehicles/external/ 2>/dev/null || echo "Directory removed ✅"
grep "VPIC_API_URL" backend/src/core/config/environment.ts || echo "VPIC_API_URL removed ✅"
```

### ✅ Task 2.2: Create MVP Platform Database Connection

**Files**: `backend/src/core/config/database.ts`

- [ ] Add `mvpPlatformPool` export
- [ ] Configure connection with MVP Platform DB parameters
- [ ] Set appropriate pool size (10 connections)
- [ ] Configure idle timeout

**Verification**:

```bash
grep "mvpPlatformPool" backend/src/core/config/database.ts
```

### ✅ Task 2.3: Create MVP Platform Repository

**Files**: `backend/src/features/vehicles/data/mvp-platform.repository.ts`

- [ ] Create `MvpPlatformRepository` class
- [ ] Implement `decodeVIN()` method
- [ ] Implement `getMakes()` method
- [ ] Implement `getModelsForMake()` method
- [ ] Implement `getTransmissions()` method
- [ ] Implement `getEngines()` method
- [ ] Implement `getTrims()` method
- [ ] Export singleton instance

**Verification**:

```bash
grep "export class MvpPlatformRepository" backend/src/features/vehicles/data/mvp-platform.repository.ts
```

### ✅ Task 2.4: Create VIN Decoder Service

**Files**: `backend/src/features/vehicles/domain/vin-decoder.service.ts`

- [ ] Create `VinDecoderService` class
- [ ] Implement VIN validation logic
- [ ] Implement cache-first decoding
- [ ] Implement model year extraction from VIN
- [ ] Add comprehensive error handling
- [ ] Export singleton instance

**Verification**:

```bash
grep "export class VinDecoderService" backend/src/features/vehicles/domain/vin-decoder.service.ts
```

### ✅ Task 2.5: Update Vehicles Service

**Files**: `backend/src/features/vehicles/domain/vehicles.service.ts`

- [ ] Remove imports for `vpicClient`
- [ ] Add imports for `vinDecoderService` and `mvpPlatformRepository`
- [ ] Replace `vpicClient.decodeVIN()` with `vinDecoderService.decodeVIN()`
- [ ] Add `getDropdownMakes()` method
- [ ] Add `getDropdownModels()` method
- [ ] Add `getDropdownTransmissions()` method
- [ ] Add `getDropdownEngines()` method
- [ ] Add `getDropdownTrims()` method
- [ ] Update cache prefix to `mvp-platform:vehicles`

**Verification**:

```bash
grep "vpicClient" backend/src/features/vehicles/domain/vehicles.service.ts || echo "vpicClient removed ✅"
grep "mvp-platform:vehicles" backend/src/features/vehicles/domain/vehicles.service.ts
```
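
The cache-first behavior behind the `mvp-platform:vehicles` prefix can be sketched as follows (a plain dict stands in for Redis; the class and parameter names are illustrative, not the actual service API):

```python
import json

CACHE_PREFIX = "mvp-platform:vehicles"

class CacheFirstLookup:
    """Return a cached value when present; otherwise run the loader
    (e.g. a repository query) and cache its result for next time."""

    def __init__(self, loader, cache=None):
        self.loader = loader
        self.cache = cache if cache is not None else {}  # stand-in for Redis

    def get(self, key: str):
        cache_key = f"{CACHE_PREFIX}:{key}"
        hit = self.cache.get(cache_key)
        if hit is not None:
            return json.loads(hit)   # cache hit: no database round trip
        value = self.loader(key)     # cache miss: query the repository
        self.cache[cache_key] = json.dumps(value)
        return value

calls = []
lookup = CacheFirstLookup(loader=lambda k: calls.append(k) or [{"Make_ID": 1}])
lookup.get("makes")  # miss: loader runs
lookup.get("makes")  # hit: served from cache
print(len(calls))    # → 1
```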

### ✅ Phase 2 Validation

- [ ] **TypeScript Compiles**: `npm run build` succeeds in backend directory
- [ ] **No vPIC References**: `grep -r "vpic" backend/src/features/vehicles/` returns no results
- [ ] **Database Connection Test**: MVP Platform database accessible from backend
- [ ] **VIN Decoder Test**: VIN decoding service functional

**Critical Check**:

```bash
cd backend && npm run build && echo "Backend compilation successful ✅"
```

---

## Phase 3: API Migration

### ✅ Task 3.1: Update Vehicles Controller

**Files**: `backend/src/features/vehicles/api/vehicles.controller.ts`

- [ ] Remove imports for `vpicClient`
- [ ] Add import for updated `VehiclesService`
- [ ] Update `getDropdownMakes()` method to use MVP Platform
- [ ] Update `getDropdownModels()` method
- [ ] Update `getDropdownTransmissions()` method
- [ ] Update `getDropdownEngines()` method
- [ ] Update `getDropdownTrims()` method
- [ ] Maintain exact response format compatibility
- [ ] Add performance monitoring
- [ ] Add database error handling

**Verification**:

```bash
grep "vehiclesService.getDropdownMakes" backend/src/features/vehicles/api/vehicles.controller.ts
```

### ✅ Task 3.2: Verify Routes Configuration

**Files**: `backend/src/features/vehicles/api/vehicles.routes.ts`

- [ ] Confirm dropdown routes remain unauthenticated
- [ ] Verify no `preHandler: fastify.authenticate` on dropdown routes
- [ ] Ensure CRUD routes still require authentication

**Verification**:

```bash
grep -A 3 "dropdown/makes" backend/src/features/vehicles/api/vehicles.routes.ts | grep "preHandler" || echo "No auth on dropdown routes ✅"
```

### ✅ Task 3.3: Add Health Check Endpoint

**Files**: `vehicles.controller.ts`, `vehicles.routes.ts`

- [ ] Add `healthCheck()` method to controller
- [ ] Add `testMvpPlatformConnection()` method to service
- [ ] Add `/vehicles/health` route (unauthenticated)
- [ ] Test MVP Platform database connectivity

**Verification**:

```bash
grep "healthCheck" backend/src/features/vehicles/api/vehicles.controller.ts
```

### ✅ Phase 3 Validation

- [ ] **API Format Tests**: All dropdown endpoints return correct format
- [ ] **Authentication Tests**: Dropdown endpoints unauthenticated, CRUD authenticated
- [ ] **Performance Tests**: All endpoints respond < 100ms
- [ ] **Health Check**: `/api/vehicles/health` returns healthy status

**Critical Check**:

```bash
curl -s http://localhost:3001/api/vehicles/dropdown/makes | jq '.[0]' | grep "Make_ID"
```

---

## Phase 4: Scheduled ETL Implementation

### ✅ Task 4.1: Create ETL Dockerfile

**Files**: `vehicle-etl/docker/Dockerfile.etl`

- [ ] Base on Python 3.11-slim
- [ ] Install cron and system dependencies
- [ ] Install Python requirements
- [ ] Copy ETL source code
- [ ] Set up cron configuration
- [ ] Add health check
- [ ] Configure entrypoint

**Verification**:

```bash
ls vehicle-etl/docker/Dockerfile.etl
```

### ✅ Task 4.2: Create Cron Setup Script

**Files**: `vehicle-etl/docker/setup-cron.sh`

- [ ] Create script with execute permissions
- [ ] Configure cron job from environment variable
- [ ] Set proper file permissions
- [ ] Apply cron job to system

**Verification**:

```bash
ls -la vehicle-etl/docker/setup-cron.sh | grep "x"
```

### ✅ Task 4.3: Create Container Entrypoint

**Files**: `vehicle-etl/docker/entrypoint.sh`

- [ ] Start cron daemon in background
- [ ] Handle shutdown signals properly
- [ ] Support initial ETL run option
- [ ] Keep container running

**Verification**:

```bash
grep "cron -f" vehicle-etl/docker/entrypoint.sh
```

### ✅ Task 4.4: Update ETL Main Module

**Files**: `vehicle-etl/etl/main.py`

- [ ] Support `build-catalog` command
- [ ] Test all connections before ETL
- [ ] Implement complete ETL pipeline
- [ ] Add comprehensive error handling
- [ ] Write completion markers

**Verification**:

```bash
grep "build-catalog" vehicle-etl/etl/main.py
```
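
The `ETL_BATCH_SIZE` and `ETL_MAX_RETRIES` settings from the compose file suggest how the pipeline moves rows; a minimal sketch under those assumptions (function names are illustrative, not the actual `etl/main.py` API):

```python
import time

ETL_BATCH_SIZE = 10_000
ETL_MAX_RETRIES = 3

def batches(rows, size=ETL_BATCH_SIZE):
    """Yield rows in fixed-size chunks so each insert stays bounded in memory."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

def run_with_retries(step, max_retries=ETL_MAX_RETRIES, delay=0.0):
    """Run one pipeline step, retrying transient failures up to max_retries."""
    for attempt in range(1, max_retries + 1):
        try:
            return step()
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(delay)

print([len(b) for b in batches(list(range(25_000)))])  # → [10000, 10000, 5000]
```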

### ✅ Task 4.5: Create Connection Testing Module

**Files**: `vehicle-etl/etl/connections.py`

- [ ] Implement `test_mssql_connection()`
- [ ] Implement `test_postgres_connection()`
- [ ] Implement `test_redis_connection()`
- [ ] Implement `test_connections()` wrapper
- [ ] Add proper error logging

**Verification**:

```bash
grep "def test_connections" vehicle-etl/etl/connections.py
```

### ✅ Task 4.6: Create ETL Monitoring Script

**Files**: `vehicle-etl/scripts/check-etl-status.sh`

- [ ] Check last run status file
- [ ] Report success/failure status
- [ ] Show recent log entries
- [ ] Return appropriate exit codes

**Verification**:

```bash
ls -la vehicle-etl/scripts/check-etl-status.sh | grep "x"
```

### ✅ Task 4.7: Create Requirements File

**Files**: `vehicle-etl/requirements-etl.txt`

- [ ] Add database connectivity packages
- [ ] Add data processing packages
- [ ] Add logging and monitoring packages
- [ ] Add testing packages

**Verification**:

```bash
grep "pyodbc" vehicle-etl/requirements-etl.txt
```

### ✅ Phase 4 Validation

- [ ] **ETL Container Builds**: `docker-compose build etl-scheduler` succeeds
- [ ] **Connection Tests**: ETL can connect to all databases
- [ ] **Manual ETL Run**: ETL completes successfully
- [ ] **Cron Configuration**: Cron job properly configured
- [ ] **Health Checks**: ETL health monitoring functional

**Critical Check**:

```bash
docker-compose exec etl-scheduler python -m etl.main test-connections
```

---

## Phase 5: Testing & Validation

### ✅ Task 5.1: Run API Functionality Tests

**Script**: `test-api-formats.sh`

- [ ] Test dropdown API response formats
- [ ] Validate data counts and structure
- [ ] Verify error handling
- [ ] Check all endpoint availability

**Verification**: All API format tests pass

### ✅ Task 5.2: Run Authentication Tests

**Script**: `test-authentication.sh`

- [ ] Test dropdown endpoints are unauthenticated
- [ ] Test CRUD endpoints require authentication
- [ ] Verify security model unchanged

**Verification**: All authentication tests pass

### ✅ Task 5.3: Run Performance Tests

**Script**: `test-performance.sh`, `test-cache-performance.sh`

- [ ] Measure response times for all endpoints
- [ ] Verify < 100ms requirement met
- [ ] Test cache performance improvement
- [ ] Validate under load

**Verification**: All performance tests pass
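
A single fast response does not prove the < 100ms requirement; measuring percentiles over repeated calls is more honest. A sketch of the measurement (in the real suite the callable would be an HTTP request to a dropdown endpoint):

```python
import statistics
import time

def measure_ms(endpoint_call, samples=50):
    """Return (p50, p95) latency in milliseconds for a zero-argument callable."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        endpoint_call()
        timings.append((time.perf_counter() - start) * 1000.0)
    timings.sort()
    p50 = statistics.median(timings)
    p95 = timings[int(0.95 * (len(timings) - 1))]
    return p50, p95

p50, p95 = measure_ms(lambda: sum(range(1000)))
assert p95 < 100, f"p95 {p95:.1f}ms exceeds the 100ms budget"
```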

### ✅ Task 5.4: Run Data Accuracy Tests

**Script**: `test-vin-accuracy.sh`, `test-data-completeness.sh`

- [ ] Test VIN decoding accuracy
- [ ] Verify data completeness
- [ ] Check data quality metrics
- [ ] Validate against known test cases

**Verification**: All accuracy tests pass

### ✅ Task 5.5: Run ETL Process Tests

**Script**: `test-etl-execution.sh`, `test-etl-scheduling.sh`

- [ ] Test ETL execution
- [ ] Verify scheduling configuration
- [ ] Check error handling
- [ ] Validate monitoring

**Verification**: All ETL tests pass

### ✅ Task 5.6: Run Error Handling Tests

**Script**: `test-error-handling.sh`

- [ ] Test database unavailability scenarios
- [ ] Verify graceful degradation
- [ ] Test recovery mechanisms
- [ ] Check error responses

**Verification**: All error handling tests pass

### ✅ Task 5.7: Run Load Tests

**Script**: `test-load.sh`

- [ ] Test concurrent request handling
- [ ] Measure performance under load
- [ ] Verify system stability
- [ ] Check resource usage

**Verification**: All load tests pass

### ✅ Task 5.8: Run Security Tests

**Script**: `test-security.sh`

- [ ] Test SQL injection prevention
- [ ] Verify input validation
- [ ] Check authentication bypasses
- [ ] Test parameter tampering

**Verification**: All security tests pass
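
The SQL injection checks ultimately verify that every repository query is parameterized; the difference is easy to demonstrate (`sqlite3` stands in here for the pg driver's `$1` placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE makes (name TEXT)")
conn.executemany("INSERT INTO makes VALUES (?)", [("HONDA",), ("FORD",)])

malicious = "x' OR '1'='1"

# Parameterized query: the payload is treated as a literal string value.
safe = conn.execute(
    "SELECT name FROM makes WHERE name = ?", (malicious,)
).fetchall()
print(safe)  # → [] (no row matches the literal payload)

# String concatenation: the payload rewrites the WHERE clause.
unsafe = conn.execute(
    f"SELECT name FROM makes WHERE name = '{malicious}'"
).fetchall()
print(len(unsafe))  # → 2 (every row leaks)
```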

### ✅ Phase 5 Validation

- [ ] **Master Test Script**: `test-all.sh` passes completely
- [ ] **Zero Breaking Changes**: All existing functionality preserved
- [ ] **Performance Requirements**: < 100ms response times achieved
- [ ] **Data Accuracy**: 99.9%+ VIN decoding accuracy maintained
- [ ] **ETL Reliability**: Weekly ETL process functional

**Critical Check**:

```bash
./test-all.sh && echo "ALL TESTS PASSED ✅"
```

---

## Final Implementation Checklist

### ✅ Pre-Production Validation

- [ ] **All Phases Complete**: Phases 1-5 successfully implemented
- [ ] **All Tests Pass**: Master test script shows 100% pass rate
- [ ] **Documentation Updated**: All documentation reflects current state
- [ ] **Environment Variables**: All required environment variables configured
- [ ] **Backup Validated**: Can restore to pre-implementation state if needed

### ✅ Production Readiness

- [ ] **Monitoring Configured**: ETL success/failure alerting set up
- [ ] **Log Rotation**: Log file rotation configured for ETL processes
- [ ] **Database Maintenance**: MVP Platform database backup scheduled
- [ ] **Performance Baseline**: Response time baselines established
- [ ] **Error Alerting**: API error rate monitoring configured

### ✅ Deployment

- [ ] **Staging Deployment**: Changes deployed and tested in staging
- [ ] **Production Deployment**: Changes deployed to production
- [ ] **Post-Deployment Tests**: All tests pass in production
- [ ] **Performance Monitoring**: Response times within acceptable range
- [ ] **ETL Schedule Active**: First scheduled ETL run successful

### ✅ Post-Deployment

- [ ] **Documentation Complete**: All documentation updated and accurate
- [ ] **Team Handover**: Development team trained on new architecture
- [ ] **Monitoring Active**: All monitoring and alerting operational
- [ ] **Support Runbook**: Troubleshooting procedures documented
- [ ] **MVP Platform Foundation**: Architecture pattern ready for next services

---

## Success Criteria Validation

### ✅ **Zero Breaking Changes**

- [ ] All existing vehicle endpoints work identically
- [ ] Frontend requires no changes
- [ ] User experience unchanged
- [ ] API response formats preserved exactly

### ✅ **Performance Requirements**

- [ ] Dropdown APIs consistently < 100ms
- [ ] VIN decoding < 200ms
- [ ] Cache hit rates > 90%
- [ ] No performance degradation under load

### ✅ **Data Accuracy**

- [ ] VIN decoding accuracy ≥ 99.9%
- [ ] All makes/models/trims available
- [ ] Data completeness maintained
- [ ] No data quality regressions

### ✅ **Reliability Requirements**

- [ ] Weekly ETL completes successfully
- [ ] Error handling and recovery functional
- [ ] Health checks operational
- [ ] Monitoring and alerting active

### ✅ **MVP Platform Foundation**

- [ ] Standardized naming conventions established
- [ ] Service isolation pattern implemented
- [ ] Scheduled processing framework operational
- [ ] Ready for additional platform services

---

## Emergency Rollback Plan

If critical issues arise during implementation:

### ✅ Immediate Rollback Steps

1. **Stop New Services**:

   ```bash
   docker-compose stop mvp-platform-database mssql-source etl-scheduler
   ```

2. **Restore Backend Code**:

   ```bash
   git checkout HEAD~1 -- backend/src/features/vehicles/
   git checkout HEAD~1 -- backend/src/core/config/
   ```

3. **Restore Docker Configuration**:

   ```bash
   git checkout HEAD~1 -- docker-compose.yml
   git checkout HEAD~1 -- .env.example
   ```

4. **Restart Application**:

   ```bash
   docker-compose restart backend
   ```

5. **Validate Rollback**:

   ```bash
   curl -s http://localhost:3001/api/vehicles/dropdown/makes | jq '. | length'
   ```

### ✅ Rollback Validation

- [ ] **External API Working**: vPIC API endpoints functional
- [ ] **All Tests Pass**: Original functionality restored
- [ ] **No Data Loss**: No existing data affected
- [ ] **Performance Restored**: Response times back to baseline

---

## Implementation Notes

### Dependencies Between Phases

- **Phase 2** requires **Phase 1** infrastructure
- **Phase 3** requires **Phase 2** backend changes
- **Phase 4** requires **Phase 1** infrastructure
- **Phase 5** requires **Phases 1-4** complete

### Critical Success Factors

1. **Database Connectivity**: All database connections must be stable
2. **Data Population**: MVP Platform database must have comprehensive data
3. **Performance Optimization**: Database queries must be optimized for speed
4. **Error Handling**: Graceful degradation when services unavailable
5. **Cache Strategy**: Proper caching for performance requirements

### AI Assistant Guidance

This checklist is designed for efficient execution by AI assistants:

- Each task has clear file locations and verification steps
- Dependencies are explicitly stated
- Validation commands provided for each step
- Rollback procedures documented for safety
- Critical checks identified for each phase

**For any implementation questions, refer to the detailed phase documentation in the same directory.**

290
docs/changes/vehicles-dropdown-v1/phase-01-infrastructure.md
Normal file

# Phase 1: Infrastructure Setup

## Overview

This phase establishes the foundational infrastructure for the MVP Platform by adding three new Docker services to the main `docker-compose.yml`. This creates the shared services architecture pattern that future platform services will follow.

## Prerequisites

- Docker and Docker Compose installed
- Main MotoVaultPro application running successfully
- Access to NHTSA vPIC database backup file (VPICList_lite_2025_07.bak)
- Understanding of existing docker-compose.yml structure

## Tasks

### Task 1.1: Add MVP Platform Database Service

**Location**: `docker-compose.yml`

**Action**: Add the following service definition to the services section:

```yaml
mvp-platform-database:
  image: postgres:15-alpine
  container_name: mvp-platform-db
  environment:
    POSTGRES_DB: mvp-platform-vehicles
    POSTGRES_USER: mvp_platform_user
    POSTGRES_PASSWORD: ${MVP_PLATFORM_DB_PASSWORD:-platform_dev_password}
    POSTGRES_INITDB_ARGS: "--encoding=UTF8"
  volumes:
    - mvp_platform_data:/var/lib/postgresql/data
    - ./vehicle-etl/sql/schema:/docker-entrypoint-initdb.d
  ports:
    - "5433:5432"
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U mvp_platform_user -p 5432"]
    interval: 10s
    timeout: 5s
    retries: 5
  networks:
    - default
```

**Action**: Add the volume definition to the volumes section:

```yaml
volumes:
  postgres_data:
  redis_data:
  minio_data:
  mvp_platform_data: # Add this line
```

### Task 1.2: Add MSSQL Source Database Service

**Location**: `docker-compose.yml`

**Action**: Add the following service definition:

```yaml
mssql-source:
  image: mcr.microsoft.com/mssql/server:2019-latest
  container_name: mvp-mssql-source
  user: root
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=${MSSQL_SOURCE_PASSWORD:-Source123!}
    - MSSQL_PID=Developer
  ports:
    - "1433:1433"
  volumes:
    - mssql_source_data:/var/opt/mssql/data
    - ./vehicle-etl/volumes/mssql/backups:/backups
  healthcheck:
    test: ["CMD-SHELL", "/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P ${MSSQL_SOURCE_PASSWORD:-Source123!} -Q 'SELECT 1'"]
    interval: 30s
    timeout: 10s
    retries: 5
    start_period: 60s
  networks:
    - default
```

**Action**: Add volume to volumes section:

```yaml
volumes:
  postgres_data:
  redis_data:
  minio_data:
  mvp_platform_data:
  mssql_source_data: # Add this line
```

### Task 1.3: Add Scheduled ETL Service

**Location**: `docker-compose.yml`

**Action**: Add the following service definition:

```yaml
etl-scheduler:
  build:
    context: ./vehicle-etl
    dockerfile: docker/Dockerfile.etl
  container_name: mvp-etl-scheduler
  environment:
    # Database connections
    - MSSQL_HOST=mssql-source
    - MSSQL_PORT=1433
    - MSSQL_DATABASE=VPICList
    - MSSQL_USERNAME=sa
    - MSSQL_PASSWORD=${MSSQL_SOURCE_PASSWORD:-Source123!}
    - POSTGRES_HOST=mvp-platform-database
    - POSTGRES_PORT=5432
    - POSTGRES_DATABASE=mvp-platform-vehicles
    - POSTGRES_USERNAME=mvp_platform_user
    - POSTGRES_PASSWORD=${MVP_PLATFORM_DB_PASSWORD:-platform_dev_password}
    - REDIS_HOST=redis
    - REDIS_PORT=6379
    # ETL configuration
    - ETL_SCHEDULE=0 2 * * 0 # Weekly on Sunday at 2 AM
    - ETL_LOG_LEVEL=INFO
    - ETL_BATCH_SIZE=10000
    - ETL_MAX_RETRIES=3
  volumes:
    - ./vehicle-etl/logs:/app/logs
    - etl_scheduler_data:/app/data
  depends_on:
    mssql-source:
      condition: service_healthy
    mvp-platform-database:
      condition: service_healthy
    redis:
      condition: service_healthy
  restart: unless-stopped
  networks:
    - default
```

**Action**: Add volume to volumes section:

```yaml
volumes:
  postgres_data:
  redis_data:
  minio_data:
  mvp_platform_data:
  mssql_source_data:
  etl_scheduler_data: # Add this line
```
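
`ETL_SCHEDULE=0 2 * * 0` is a standard five-field cron expression: minute, hour, day-of-month, month, day-of-week (0 = Sunday). A small sketch that makes the field order explicit (the `describe_cron` helper is illustrative, not part of the ETL code):

```python
FIELDS = ("minute", "hour", "day_of_month", "month", "day_of_week")

def describe_cron(expr: str) -> dict:
    """Split a five-field cron expression into named fields."""
    parts = expr.split()
    if len(parts) != 5:
        raise ValueError(f"expected 5 fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

# The schedule used above: minute 0, hour 2, any day and month, Sunday.
print(describe_cron("0 2 * * 0")["hour"])  # → 2
```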

### Task 1.4: Update Backend Service Environment Variables

**Location**: `docker-compose.yml`

**Action**: Add MVP Platform database environment variables to the backend service:

```yaml
backend:
  # ... existing configuration ...
  environment:
    # ... existing environment variables ...
    # MVP Platform Database
    MVP_PLATFORM_DB_HOST: mvp-platform-database
    MVP_PLATFORM_DB_PORT: 5432
    MVP_PLATFORM_DB_NAME: mvp-platform-vehicles
    MVP_PLATFORM_DB_USER: mvp_platform_user
    MVP_PLATFORM_DB_PASSWORD: ${MVP_PLATFORM_DB_PASSWORD:-platform_dev_password}
  depends_on:
    - postgres
    - redis
    - minio
    - mvp-platform-database # Add this dependency
```

### Task 1.5: Create Environment File Template

**Location**: `.env.example`

**Action**: Add the following environment variables:

```env
# MVP Platform Database
MVP_PLATFORM_DB_PASSWORD=platform_secure_password

# ETL Source Database
MSSQL_SOURCE_PASSWORD=Source123!

# ETL Configuration
ETL_SCHEDULE=0 2 * * 0
ETL_LOG_LEVEL=INFO
ETL_BATCH_SIZE=10000
ETL_MAX_RETRIES=3
```

### Task 1.6: Update .env File (if exists)

**Location**: `.env`

**Action**: If `.env` exists, add the above environment variables with appropriate values for your environment.

## Validation Steps

### Step 1: Verify Docker Compose Configuration

```bash
# Test docker-compose configuration
docker-compose config

# Should output valid YAML without errors
```

### Step 2: Build and Start New Services

```bash
# Build the ETL scheduler container
docker-compose build etl-scheduler

# Start only the new services for testing
docker-compose up mvp-platform-database mssql-source -d

# Check service health
docker-compose ps
```

### Step 3: Test Database Connections

```bash
# Test MVP Platform database connection
docker-compose exec mvp-platform-database psql -U mvp_platform_user -d mvp-platform-vehicles -c "SELECT version();"

# Test MSSQL source database connection
docker-compose exec mssql-source /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "Source123!" -Q "SELECT @@VERSION"
```

### Step 4: Verify Logs Directory Creation

```bash
# Check that ETL logs directory is created
ls -la ./vehicle-etl/logs/

# Should exist and be writable
```

## Error Handling

### Common Issues and Solutions

**Issue**: PostgreSQL container fails to start
**Solution**: Check port 5433 is not in use; verify password complexity requirements

**Issue**: MSSQL container fails health check
**Solution**: Increase `start_period`, verify the password meets MSSQL requirements, check available memory

**Issue**: ETL scheduler cannot connect to databases
**Solution**: Verify network connectivity, check environment variable values, ensure databases are healthy

### Rollback Procedure

1. Stop the new services:

   ```bash
   docker-compose stop mvp-platform-database mssql-source etl-scheduler
   ```

2. Remove the new containers:

   ```bash
   docker-compose rm mvp-platform-database mssql-source etl-scheduler
   ```

3. Remove the volume definitions from docker-compose.yml

4. Remove the service definitions from docker-compose.yml

5. Remove environment variables from backend service

## Next Steps

After successful completion of Phase 1:

1. Proceed to [Phase 2: Backend Migration](./phase-02-backend-migration.md)
2. Ensure all services are running and healthy before starting backend changes
3. Take note of any performance impacts on the existing application

## Dependencies for Next Phase

- MVP Platform database must be accessible and initialized
- Backend service must be able to connect to MVP Platform database
- Existing Redis service must be available for new caching patterns

601
docs/changes/vehicles-dropdown-v1/phase-02-backend-migration.md
Normal file

# Phase 2: Backend Migration

## Overview

This phase removes external NHTSA vPIC API dependencies from the vehicles feature and integrates direct access to the MVP Platform database. All VIN decoding logic will be ported from Python to TypeScript while maintaining exact API compatibility.

## Prerequisites

- Phase 1 infrastructure completed successfully
- MVP Platform database running and accessible
- Existing Redis service available
- Backend service can connect to MVP Platform database
- Understanding of existing vehicles feature structure

## Current Architecture Analysis

**Files to Modify/Remove**:

- `backend/src/features/vehicles/external/vpic/` (entire directory - DELETE)
- `backend/src/features/vehicles/domain/vehicles.service.ts` (UPDATE)
- `backend/src/features/vehicles/api/vehicles.controller.ts` (UPDATE)
- `backend/src/core/config/environment.ts` (UPDATE)

**New Files to Create**:

- `backend/src/features/vehicles/data/mvp-platform.repository.ts`
- `backend/src/features/vehicles/domain/vin-decoder.service.ts`
- `backend/src/features/vehicles/data/vehicle-catalog.repository.ts`

## Tasks

### Task 2.1: Remove External vPIC API Dependencies

**Action**: Delete external API directory

```bash
rm -rf backend/src/features/vehicles/external/
```

**Location**: `backend/src/core/config/environment.ts`

**Action**: Remove VPIC_API_URL environment variable:

```typescript
// REMOVE this line:
// VPIC_API_URL: process.env.VPIC_API_URL || 'https://vpic.nhtsa.dot.gov/api/vehicles',

// ADD MVP Platform database configuration:
MVP_PLATFORM_DB_HOST: process.env.MVP_PLATFORM_DB_HOST || 'mvp-platform-database',
MVP_PLATFORM_DB_PORT: parseInt(process.env.MVP_PLATFORM_DB_PORT || '5432'),
MVP_PLATFORM_DB_NAME: process.env.MVP_PLATFORM_DB_NAME || 'mvp-platform-vehicles',
MVP_PLATFORM_DB_USER: process.env.MVP_PLATFORM_DB_USER || 'mvp_platform_user',
MVP_PLATFORM_DB_PASSWORD: process.env.MVP_PLATFORM_DB_PASSWORD || 'platform_dev_password',
```

### Task 2.2: Create MVP Platform Database Connection

**Location**: `backend/src/core/config/database.ts`

**Action**: Add MVP Platform database pool configuration:

```typescript
import { Pool } from 'pg';
import { env } from './environment';

// Existing main database pool
export const dbPool = new Pool({
  host: env.DB_HOST,
  port: env.DB_PORT,
  database: env.DB_NAME,
  user: env.DB_USER,
  password: env.DB_PASSWORD,
  max: 20,
  idleTimeoutMillis: 30000,
});

// NEW: MVP Platform database pool
export const mvpPlatformPool = new Pool({
  host: env.MVP_PLATFORM_DB_HOST,
  port: env.MVP_PLATFORM_DB_PORT,
  database: env.MVP_PLATFORM_DB_NAME,
  user: env.MVP_PLATFORM_DB_USER,
  password: env.MVP_PLATFORM_DB_PASSWORD,
  max: 10,
  idleTimeoutMillis: 30000,
});
```

### Task 2.3: Create MVP Platform Repository

**Location**: `backend/src/features/vehicles/data/mvp-platform.repository.ts`

**Action**: Create a new file with the following content:

```typescript
import { mvpPlatformPool } from '../../../core/config/database';
import { logger } from '../../../core/logging/logger';

export interface VehicleDecodeResult {
  make?: string;
  model?: string;
  year?: number;
  engineType?: string;
  bodyType?: string;
  trim?: string;
  transmission?: string;
}

export interface DropdownItem {
  id: number;
  name: string;
}

export class MvpPlatformRepository {

  async decodeVIN(vin: string): Promise<VehicleDecodeResult | null> {
    try {
      const query = `
        SELECT
          make_name as make,
          model_name as model,
          model_year as year,
          engine_type,
          body_type,
          trim_name as trim,
          transmission_type as transmission
        FROM vehicle_catalog
        WHERE vin_pattern_matches($1)
        ORDER BY confidence_score DESC
        LIMIT 1
      `;

      const result = await mvpPlatformPool.query(query, [vin]);

      if (result.rows.length === 0) {
        logger.warn('VIN decode returned no results', { vin });
        return null;
      }

      const row = result.rows[0];
      return {
        make: row.make,
        model: row.model,
        year: row.year,
        engineType: row.engine_type,
        bodyType: row.body_type,
        trim: row.trim,
        transmission: row.transmission
      };

    } catch (error) {
      logger.error('VIN decode failed', { vin, error });
      return null;
    }
  }

  async getMakes(): Promise<DropdownItem[]> {
    try {
      const query = `
        SELECT DISTINCT
          make_id as id,
          make_name as name
        FROM vehicle_catalog
        WHERE make_name IS NOT NULL
        ORDER BY make_name
      `;

      const result = await mvpPlatformPool.query(query);
      return result.rows;

    } catch (error) {
      logger.error('Get makes failed', { error });
      return [];
    }
  }

  async getModelsForMake(make: string): Promise<DropdownItem[]> {
    try {
      const query = `
        SELECT DISTINCT
          model_id as id,
          model_name as name
        FROM vehicle_catalog
        WHERE LOWER(make_name) = LOWER($1)
          AND model_name IS NOT NULL
        ORDER BY model_name
      `;

      const result = await mvpPlatformPool.query(query, [make]);
      return result.rows;

    } catch (error) {
      logger.error('Get models failed', { make, error });
      return [];
    }
  }

  async getTransmissions(): Promise<DropdownItem[]> {
    try {
      // Number the deduplicated values. DISTINCT combined with ROW_NUMBER()
      // in one SELECT would not deduplicate, because every numbered row is
      // unique; dedupe in a subquery first. The ::int cast keeps pg from
      // returning the bigint row number as a string.
      const query = `
        SELECT
          ROW_NUMBER() OVER (ORDER BY name)::int as id,
          name
        FROM (
          SELECT DISTINCT transmission_type as name
          FROM vehicle_catalog
          WHERE transmission_type IS NOT NULL
        ) AS t
        ORDER BY name
      `;

      const result = await mvpPlatformPool.query(query);
      return result.rows;

    } catch (error) {
      logger.error('Get transmissions failed', { error });
      return [];
    }
  }

  async getEngines(): Promise<DropdownItem[]> {
    try {
      const query = `
        SELECT
          ROW_NUMBER() OVER (ORDER BY name)::int as id,
          name
        FROM (
          SELECT DISTINCT engine_type as name
          FROM vehicle_catalog
          WHERE engine_type IS NOT NULL
        ) AS t
        ORDER BY name
      `;

      const result = await mvpPlatformPool.query(query);
      return result.rows;

    } catch (error) {
      logger.error('Get engines failed', { error });
      return [];
    }
  }

  async getTrims(): Promise<DropdownItem[]> {
    try {
      const query = `
        SELECT
          ROW_NUMBER() OVER (ORDER BY name)::int as id,
          name
        FROM (
          SELECT DISTINCT trim_name as name
          FROM vehicle_catalog
          WHERE trim_name IS NOT NULL
        ) AS t
        ORDER BY name
      `;

      const result = await mvpPlatformPool.query(query);
      return result.rows;

    } catch (error) {
      logger.error('Get trims failed', { error });
      return [];
    }
  }
}

export const mvpPlatformRepository = new MvpPlatformRepository();
```

### Task 2.4: Create VIN Decoder Service

**Location**: `backend/src/features/vehicles/domain/vin-decoder.service.ts`

**Action**: Create a new file with a TypeScript port of the VIN decoding logic:

```typescript
import { logger } from '../../../core/logging/logger';
import { cacheService } from '../../../core/config/redis';
import { mvpPlatformRepository, VehicleDecodeResult } from '../data/mvp-platform.repository';

export class VinDecoderService {
  private readonly cachePrefix = 'mvp-platform';
  private readonly vinCacheTTL = 30 * 24 * 60 * 60; // 30 days

  async decodeVIN(vin: string): Promise<VehicleDecodeResult | null> {
    // Validate VIN format
    if (!this.isValidVIN(vin)) {
      logger.warn('Invalid VIN format', { vin });
      return null;
    }

    // Check cache first
    const cacheKey = `${this.cachePrefix}:vin:${vin}`;
    const cached = await cacheService.get<VehicleDecodeResult>(cacheKey);
    if (cached) {
      logger.debug('VIN decode cache hit', { vin });
      return cached;
    }

    // Decode VIN using the MVP Platform database
    logger.info('Decoding VIN via MVP Platform database', { vin });
    const result = await mvpPlatformRepository.decodeVIN(vin);

    // Cache successful results
    if (result) {
      await cacheService.set(cacheKey, result, this.vinCacheTTL);
    }

    return result;
  }

  private isValidVIN(vin: string): boolean {
    // Basic VIN validation: exactly 17 characters
    if (!vin || vin.length !== 17) {
      return false;
    }

    // Check for invalid characters (I, O, Q are not allowed in VINs)
    const invalidChars = /[IOQ]/i;
    if (invalidChars.test(vin)) {
      return false;
    }

    return true;
  }

  // Extract the model year from the VIN (position 10, disambiguated by position 7)
  extractModelYear(vin: string, currentYear: number = new Date().getFullYear()): number[] {
    if (!this.isValidVIN(vin)) {
      return [];
    }

    const yearChar = vin.charAt(9); // Position 10 (0-indexed)
    const seventhChar = vin.charAt(6); // Position 7 (0-indexed)

    // Year code mapping: each code maps to a newer and an older 30-year cycle
    const yearCodes: { [key: string]: number[] } = {
      'A': [2010, 1980], 'B': [2011, 1981], 'C': [2012, 1982], 'D': [2013, 1983],
      'E': [2014, 1984], 'F': [2015, 1985], 'G': [2016, 1986], 'H': [2017, 1987],
      'J': [2018, 1988], 'K': [2019, 1989], 'L': [2020, 1990], 'M': [2021, 1991],
      'N': [2022, 1992], 'P': [2023, 1993], 'R': [2024, 1994], 'S': [2025, 1995],
      'T': [2026, 1996], 'V': [2027, 1997], 'W': [2028, 1998], 'X': [2029, 1999],
      'Y': [2030, 2000], '1': [2031, 2001], '2': [2032, 2002], '3': [2033, 2003],
      '4': [2034, 2004], '5': [2035, 2005], '6': [2036, 2006], '7': [2037, 2007],
      '8': [2038, 2008], '9': [2039, 2009]
    };

    const possibleYears = yearCodes[yearChar.toUpperCase()];
    if (!possibleYears) {
      return [];
    }

    // Use the 7th character for disambiguation: numeric means the older cycle
    if (/\d/.test(seventhChar)) {
      return [possibleYears[1]]; // Older year
    } else {
      return [possibleYears[0]]; // Newer year
    }
  }
}

export const vinDecoderService = new VinDecoderService();
```
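
The `isValidVIN` check above only verifies length and the I/O/Q exclusion. North American VINs also carry a weighted check digit in position 9, and validating it catches most single-character typos. A minimal sketch of that check (the function name is a suggestion, not part of the plan above; the check digit is only mandatory for North American VINs, so treat a failure as a warning rather than a hard rejection for other markets):

```typescript
// Letter transliteration from ISO 3779 / 49 CFR 565 (I, O, Q are excluded).
const VIN_VALUES: Record<string, number> = {
  A: 1, B: 2, C: 3, D: 4, E: 5, F: 6, G: 7, H: 8,
  J: 1, K: 2, L: 3, M: 4, N: 5, P: 7, R: 9,
  S: 2, T: 3, U: 4, V: 5, W: 6, X: 7, Y: 8, Z: 9,
};

// Per-position weights; position 9 (the check digit itself) has weight 0.
const VIN_WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2];

// True when position 9 equals the weighted sum mod 11 (10 is written as 'X').
function hasValidCheckDigit(vin: string): boolean {
  if (!/^[A-HJ-NPR-Z0-9]{17}$/i.test(vin)) {
    return false;
  }
  const chars = vin.toUpperCase();
  let sum = 0;
  for (let i = 0; i < 17; i++) {
    const c = chars[i];
    const value = /\d/.test(c) ? Number(c) : VIN_VALUES[c];
    sum += value * VIN_WEIGHTS[i];
  }
  const remainder = sum % 11;
  const expected = remainder === 10 ? 'X' : String(remainder);
  return chars[8] === expected;
}
```

The sample VIN used in the validation steps below, `1HGBH41JXMN109186`, passes this check (its position 9 is `X`).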

### Task 2.5: Update Vehicles Service

**Location**: `backend/src/features/vehicles/domain/vehicles.service.ts`

**Action**: Replace external API calls with MVP Platform database calls:

```typescript
// REMOVE this import:
// import { vpicClient } from '../external/vpic/vpic.client';

// ADD these imports:
import { vinDecoderService } from './vin-decoder.service';
import { mvpPlatformRepository } from '../data/mvp-platform.repository';

// In the createVehicle method, REPLACE:
// const vinData = await vpicClient.decodeVIN(data.vin);

// WITH:
const vinData = await vinDecoderService.decodeVIN(data.vin);

// Add new dropdown methods to the VehiclesService class:
async getDropdownMakes(): Promise<any[]> {
  const cacheKey = `${this.cachePrefix}:dropdown:makes`;

  try {
    const cached = await cacheService.get<any[]>(cacheKey);
    if (cached) {
      logger.debug('Makes dropdown cache hit');
      return cached;
    }

    logger.info('Fetching makes from MVP Platform database');
    const makes = await mvpPlatformRepository.getMakes();

    // Cache for 7 days
    await cacheService.set(cacheKey, makes, 7 * 24 * 60 * 60);
    return makes;

  } catch (error) {
    logger.error('Get dropdown makes failed', { error });
    return [];
  }
}

async getDropdownModels(make: string): Promise<any[]> {
  const cacheKey = `${this.cachePrefix}:dropdown:models:${make}`;

  try {
    const cached = await cacheService.get<any[]>(cacheKey);
    if (cached) {
      logger.debug('Models dropdown cache hit', { make });
      return cached;
    }

    logger.info('Fetching models from MVP Platform database', { make });
    const models = await mvpPlatformRepository.getModelsForMake(make);

    // Cache for 7 days
    await cacheService.set(cacheKey, models, 7 * 24 * 60 * 60);
    return models;

  } catch (error) {
    logger.error('Get dropdown models failed', { make, error });
    return [];
  }
}

async getDropdownTransmissions(): Promise<any[]> {
  const cacheKey = `${this.cachePrefix}:dropdown:transmissions`;

  try {
    const cached = await cacheService.get<any[]>(cacheKey);
    if (cached) {
      logger.debug('Transmissions dropdown cache hit');
      return cached;
    }

    logger.info('Fetching transmissions from MVP Platform database');
    const transmissions = await mvpPlatformRepository.getTransmissions();

    // Cache for 7 days
    await cacheService.set(cacheKey, transmissions, 7 * 24 * 60 * 60);
    return transmissions;

  } catch (error) {
    logger.error('Get dropdown transmissions failed', { error });
    return [];
  }
}

async getDropdownEngines(): Promise<any[]> {
  const cacheKey = `${this.cachePrefix}:dropdown:engines`;

  try {
    const cached = await cacheService.get<any[]>(cacheKey);
    if (cached) {
      logger.debug('Engines dropdown cache hit');
      return cached;
    }

    logger.info('Fetching engines from MVP Platform database');
    const engines = await mvpPlatformRepository.getEngines();

    // Cache for 7 days
    await cacheService.set(cacheKey, engines, 7 * 24 * 60 * 60);
    return engines;

  } catch (error) {
    logger.error('Get dropdown engines failed', { error });
    return [];
  }
}

async getDropdownTrims(): Promise<any[]> {
  const cacheKey = `${this.cachePrefix}:dropdown:trims`;

  try {
    const cached = await cacheService.get<any[]>(cacheKey);
    if (cached) {
      logger.debug('Trims dropdown cache hit');
      return cached;
    }

    logger.info('Fetching trims from MVP Platform database');
    const trims = await mvpPlatformRepository.getTrims();

    // Cache for 7 days
    await cacheService.set(cacheKey, trims, 7 * 24 * 60 * 60);
    return trims;

  } catch (error) {
    logger.error('Get dropdown trims failed', { error });
    return [];
  }
}
```
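
The five dropdown methods above repeat one cache-then-fetch shape. If the duplication becomes a maintenance burden, the pattern can be collapsed into one generic helper; a sketch, where the helper name and the minimal cache interface are hypothetical (the real `cacheService` API may differ):

```typescript
// Hypothetical minimal cache interface, shaped like the cacheService calls above.
interface DropdownCache {
  get<T>(key: string): Promise<T | null>;
  set<T>(key: string, value: T, ttlSeconds: number): Promise<void>;
}

const SEVEN_DAYS = 7 * 24 * 60 * 60;

// Cache-then-fetch: return cached rows when present; otherwise fetch,
// cache for seven days, and degrade to an empty list on failure.
async function getCachedDropdown<T>(
  cache: DropdownCache,
  key: string,
  fetchRows: () => Promise<T[]>
): Promise<T[]> {
  const cached = await cache.get<T[]>(key);
  if (cached) {
    return cached;
  }
  try {
    const rows = await fetchRows();
    await cache.set(key, rows, SEVEN_DAYS);
    return rows;
  } catch {
    return [];
  }
}

// In-memory stand-in used only to demonstrate the helper.
function memoryCache(): DropdownCache {
  const store = new Map<string, unknown>();
  return {
    async get<T>(key: string) { return (store.get(key) as T | undefined) ?? null; },
    async set<T>(key: string, value: T) { store.set(key, value); }
  };
}
```

Each dropdown method then reduces to a single call, for example `getCachedDropdown(cacheService, cacheKey, () => mvpPlatformRepository.getMakes())`.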

### Task 2.6: Update Cache Key Patterns

**Action**: Update all existing cache keys to use the MVP Platform prefix.

In vehicles.service.ts, UPDATE:
```typescript
// CHANGE:
private readonly cachePrefix = 'vehicles';

// TO:
private readonly cachePrefix = 'mvp-platform:vehicles';
```
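
With one prefix shared across the service, a small key builder keeps the format consistent and makes namespace-wide invalidation straightforward; a sketch (the helper is a suggestion, not existing code):

```typescript
// Shared namespace for every vehicles cache key.
const CACHE_PREFIX = 'mvp-platform:vehicles';

// Builds keys like 'mvp-platform:vehicles:dropdown:models:Honda'.
function cacheKey(...parts: Array<string | number>): string {
  return [CACHE_PREFIX, ...parts].join(':');
}
```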

## Validation Steps

### Step 1: Compile TypeScript

```bash
# From the backend directory
cd backend
npm run build

# Should compile without errors
```

### Step 2: Test Database Connections

```bash
# Test the MVP Platform database connection
docker-compose exec backend node -e "
const { mvpPlatformPool } = require('./dist/core/config/database');
mvpPlatformPool.query('SELECT 1 as test')
  .then(r => console.log('MVP Platform DB:', r.rows[0]))
  .catch(e => console.error('Error:', e));
"
```

### Step 3: Test VIN Decoder Service

```bash
# Test VIN decoding functionality
docker-compose exec backend node -e "
const { vinDecoderService } = require('./dist/features/vehicles/domain/vin-decoder.service');
vinDecoderService.decodeVIN('1HGBH41JXMN109186')
  .then(r => console.log('VIN decode result:', r))
  .catch(e => console.error('Error:', e));
"
```

### Step 4: Verify Import Statements

Check that all imports resolve correctly:

```bash
# Check for any remaining vpic imports
grep -r "vpic" backend/src/features/vehicles/ || echo "No vpic references found"

# Check for MVP Platform imports
grep -r "mvp-platform" backend/src/features/vehicles/ | head -5
```

## Error Handling

### Common Issues and Solutions

**Issue**: TypeScript compilation errors
**Solution**: Check import paths and verify that all referenced modules exist

**Issue**: Database connection failures
**Solution**: Verify the MVP Platform database is running and check the connection parameters

**Issue**: Missing external directory references
**Solution**: Update any remaining imports from the deleted external/vpic directory

### Rollback Procedure

1. Restore the external/vpic directory from git:
```bash
git checkout HEAD -- backend/src/features/vehicles/external/
```

2. Revert vehicles.service.ts changes:
```bash
git checkout HEAD -- backend/src/features/vehicles/domain/vehicles.service.ts
```

3. Remove the new files:
```bash
rm backend/src/features/vehicles/data/mvp-platform.repository.ts
rm backend/src/features/vehicles/domain/vin-decoder.service.ts
```

4. Revert environment.ts changes:
```bash
git checkout HEAD -- backend/src/core/config/environment.ts
```

## Next Steps

After successful completion of Phase 2:

1. Proceed to [Phase 3: API Migration](./phase-03-api-migration.md)
2. Test VIN decoding functionality thoroughly
3. Monitor the performance of the new database queries

## Dependencies for Next Phase

- All backend changes compiled successfully
- MVP Platform database queries working correctly
- VIN decoder service functional
- Cache keys updated to new pattern

426
docs/changes/vehicles-dropdown-v1/phase-03-api-migration.md
Normal file
@@ -0,0 +1,426 @@

# Phase 3: API Migration

## Overview

This phase updates the vehicles API controller to use the new MVP Platform database for all dropdown endpoints while maintaining exact API compatibility. All existing response formats and authentication patterns are preserved.

## Prerequisites

- Phase 2 backend migration completed successfully
- VIN decoder service functional
- MVP Platform repository working correctly
- Backend service can query the MVP Platform database
- All TypeScript compilation successful

## Current API Endpoints to Update

**Existing endpoints that will be updated**:
- `GET /api/vehicles/dropdown/makes` (unauthenticated)
- `GET /api/vehicles/dropdown/models/:make` (unauthenticated)
- `GET /api/vehicles/dropdown/transmissions` (unauthenticated)
- `GET /api/vehicles/dropdown/engines` (unauthenticated)
- `GET /api/vehicles/dropdown/trims` (unauthenticated)

**Existing endpoints that remain unchanged**:
- `POST /api/vehicles` (authenticated - uses VIN decoder)
- `GET /api/vehicles` (authenticated)
- `GET /api/vehicles/:id` (authenticated)
- `PUT /api/vehicles/:id` (authenticated)
- `DELETE /api/vehicles/:id` (authenticated)

## Tasks

### Task 3.1: Update Vehicles Controller

**Location**: `backend/src/features/vehicles/api/vehicles.controller.ts`

**Action**: Replace the external API dropdown methods with MVP Platform database calls:

```typescript
// UPDATE imports - REMOVE:
// import { vpicClient } from '../external/vpic/vpic.client';

// ADD new imports:
import { VehiclesService } from '../domain/vehicles.service';

export class VehiclesController {
  private vehiclesService: VehiclesService;

  constructor() {
    this.vehiclesService = new VehiclesService();
  }

  // UPDATE existing dropdown methods:

  async getDropdownMakes(request: FastifyRequest, reply: FastifyReply) {
    try {
      logger.info('Getting dropdown makes from MVP Platform');
      const makes = await this.vehiclesService.getDropdownMakes();

      // Maintain the exact same response format
      const response = makes.map(make => ({
        Make_ID: make.id,
        Make_Name: make.name
      }));

      reply.status(200).send(response);
    } catch (error) {
      logger.error('Get dropdown makes failed', { error });
      reply.status(500).send({ error: 'Failed to retrieve makes' });
    }
  }

  async getDropdownModels(request: FastifyRequest<{ Params: { make: string } }>, reply: FastifyReply) {
    try {
      const { make } = request.params;
      logger.info('Getting dropdown models from MVP Platform', { make });

      const models = await this.vehiclesService.getDropdownModels(make);

      // Maintain the exact same response format
      const response = models.map(model => ({
        Model_ID: model.id,
        Model_Name: model.name
      }));

      reply.status(200).send(response);
    } catch (error) {
      logger.error('Get dropdown models failed', { error });
      reply.status(500).send({ error: 'Failed to retrieve models' });
    }
  }

  async getDropdownTransmissions(request: FastifyRequest, reply: FastifyReply) {
    try {
      logger.info('Getting dropdown transmissions from MVP Platform');
      const transmissions = await this.vehiclesService.getDropdownTransmissions();

      // Maintain the exact same response format
      const response = transmissions.map(transmission => ({
        Name: transmission.name
      }));

      reply.status(200).send(response);
    } catch (error) {
      logger.error('Get dropdown transmissions failed', { error });
      reply.status(500).send({ error: 'Failed to retrieve transmissions' });
    }
  }

  async getDropdownEngines(request: FastifyRequest, reply: FastifyReply) {
    try {
      logger.info('Getting dropdown engines from MVP Platform');
      const engines = await this.vehiclesService.getDropdownEngines();

      // Maintain the exact same response format
      const response = engines.map(engine => ({
        Name: engine.name
      }));

      reply.status(200).send(response);
    } catch (error) {
      logger.error('Get dropdown engines failed', { error });
      reply.status(500).send({ error: 'Failed to retrieve engines' });
    }
  }

  async getDropdownTrims(request: FastifyRequest, reply: FastifyReply) {
    try {
      logger.info('Getting dropdown trims from MVP Platform');
      const trims = await this.vehiclesService.getDropdownTrims();

      // Maintain the exact same response format
      const response = trims.map(trim => ({
        Name: trim.name
      }));

      reply.status(200).send(response);
    } catch (error) {
      logger.error('Get dropdown trims failed', { error });
      reply.status(500).send({ error: 'Failed to retrieve trims' });
    }
  }

  // All other methods remain unchanged (createVehicle, getUserVehicles, etc.)
}
```

### Task 3.2: Verify Routes Configuration

**Location**: `backend/src/features/vehicles/api/vehicles.routes.ts`

**Action**: Ensure the dropdown routes remain unauthenticated (no changes needed, just verification):

```typescript
// VERIFY these routes remain unauthenticated:
fastify.get('/vehicles/dropdown/makes', {
  handler: vehiclesController.getDropdownMakes.bind(vehiclesController)
});

fastify.get<{ Params: { make: string } }>('/vehicles/dropdown/models/:make', {
  handler: vehiclesController.getDropdownModels.bind(vehiclesController)
});

fastify.get('/vehicles/dropdown/transmissions', {
  handler: vehiclesController.getDropdownTransmissions.bind(vehiclesController)
});

fastify.get('/vehicles/dropdown/engines', {
  handler: vehiclesController.getDropdownEngines.bind(vehiclesController)
});

fastify.get('/vehicles/dropdown/trims', {
  handler: vehiclesController.getDropdownTrims.bind(vehiclesController)
});
```

**Note**: These routes should NOT have `preHandler: fastify.authenticate`, to maintain unauthenticated access as required by security.md.

### Task 3.3: Update Response Error Handling

**Action**: Add specific error handling for database connectivity issues:

```typescript
// Add to the VehiclesController class:

private handleDatabaseError(error: any, operation: string, reply: FastifyReply) {
  logger.error(`${operation} database error`, { error });

  // Check for connection-level errors surfaced by the driver
  if (error.code === 'ECONNREFUSED' || error.code === 'ENOTFOUND') {
    reply.status(503).send({
      error: 'Service temporarily unavailable',
      message: 'Database connection issue'
    });
    return;
  }

  // PostgreSQL query errors carry a five-character SQLSTATE in error.code
  if (error.code && /^[0-9A-Z]{5}$/.test(error.code)) {
    reply.status(500).send({
      error: 'Database query failed',
      message: 'Please try again later'
    });
    return;
  }

  // Generic error
  reply.status(500).send({
    error: `Failed to ${operation}`,
    message: 'Internal server error'
  });
}

// Update all dropdown methods to use this error handler by
// replacing each catch block with:
} catch (error) {
  this.handleDatabaseError(error, 'retrieve makes', reply);
}
```

### Task 3.4: Add Performance Monitoring

**Action**: Add response time logging for performance monitoring:

```typescript
// Add to the VehiclesController class:

private async measurePerformance<T>(
  operation: string,
  fn: () => Promise<T>
): Promise<T> {
  const startTime = Date.now();
  try {
    const result = await fn();
    const duration = Date.now() - startTime;
    logger.info(`MVP Platform ${operation} completed`, { duration });
    return result;
  } catch (error) {
    const duration = Date.now() - startTime;
    logger.error(`MVP Platform ${operation} failed`, { duration, error });
    throw error;
  }
}

// Update the dropdown methods to use performance monitoring:
async getDropdownMakes(request: FastifyRequest, reply: FastifyReply) {
  try {
    logger.info('Getting dropdown makes from MVP Platform');
    const makes = await this.measurePerformance('makes query', () =>
      this.vehiclesService.getDropdownMakes()
    );

    // ... rest of the method unchanged
  } catch (error) {
    this.handleDatabaseError(error, 'retrieve makes', reply);
  }
}
```

### Task 3.5: Update Health Check

**Location**: `backend/src/features/vehicles/api/vehicles.controller.ts`

**Action**: Add an MVP Platform database health check method:

```typescript
// Add a new health check method:
async healthCheck(request: FastifyRequest, reply: FastifyReply) {
  try {
    // Test the MVP Platform database connection
    await this.measurePerformance('health check', async () => {
      const testResult = await this.vehiclesService.testMvpPlatformConnection();
      if (!testResult) {
        throw new Error('MVP Platform database connection failed');
      }
    });

    reply.status(200).send({
      status: 'healthy',
      mvpPlatform: 'connected',
      timestamp: new Date().toISOString()
    });
  } catch (error) {
    logger.error('Health check failed', { error });
    reply.status(503).send({
      status: 'unhealthy',
      error: error.message,
      timestamp: new Date().toISOString()
    });
  }
}
```

**Location**: `backend/src/features/vehicles/domain/vehicles.service.ts`

**Action**: Add a health check method to the service:

```typescript
// Add to the VehiclesService class:
async testMvpPlatformConnection(): Promise<boolean> {
  try {
    await mvpPlatformRepository.getMakes();
    return true;
  } catch (error) {
    logger.error('MVP Platform connection test failed', { error });
    return false;
  }
}
```
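
Fetching the full makes list is heavier than a liveness probe needs; once the caching from Phase 2 is in place it may also mask a dead connection. A lighter alternative is a constant-cost `SELECT 1` straight at the pool; a sketch, where the structural `Queryable` type stands in for the pg `Pool` so the snippet is self-contained:

```typescript
// Structural stand-in for the pg Pool used in core/config/database.
type Queryable = { query(sql: string): Promise<unknown> };

// Liveness probe: issue SELECT 1 instead of scanning vehicle_catalog.
async function pingDatabase(pool: Queryable): Promise<boolean> {
  try {
    await pool.query('SELECT 1');
    return true;
  } catch {
    return false;
  }
}
```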

### Task 3.6: Update Route Registration for Health Check

**Location**: `backend/src/features/vehicles/api/vehicles.routes.ts`

**Action**: Add the health check route:

```typescript
// Add the health check route (unauthenticated, for monitoring):
fastify.get('/vehicles/health', {
  handler: vehiclesController.healthCheck.bind(vehiclesController)
});
```

## Validation Steps

### Step 1: Test API Response Formats

```bash
# Test the makes endpoint
curl -s http://localhost:3001/api/vehicles/dropdown/makes | jq '.[0]'
# Should return: {"Make_ID": number, "Make_Name": "string"}

# Test the models endpoint
curl -s "http://localhost:3001/api/vehicles/dropdown/models/Honda" | jq '.[0]'
# Should return: {"Model_ID": number, "Model_Name": "string"}

# Test the transmissions endpoint
curl -s http://localhost:3001/api/vehicles/dropdown/transmissions | jq '.[0]'
# Should return: {"Name": "string"}
```

### Step 2: Test Performance

```bash
# Test response times (should be < 100ms)
time curl -s http://localhost:3001/api/vehicles/dropdown/makes > /dev/null

# Load test with multiple concurrent requests
for i in {1..10}; do
  curl -s http://localhost:3001/api/vehicles/dropdown/makes > /dev/null &
done
wait
```

### Step 3: Test Error Handling

```bash
# Test with an invalid make name
curl -s "http://localhost:3001/api/vehicles/dropdown/models/InvalidMake" | jq '.'
# Should return an empty array or an appropriate error

# Test the health check
curl -s http://localhost:3001/api/vehicles/health | jq '.'
# Should return: {"status": "healthy", "mvpPlatform": "connected", "timestamp": "..."}
```

### Step 4: Verify Authentication Patterns

```bash
# Dropdown endpoints are unauthenticated (should work without a token)
curl -s http://localhost:3001/api/vehicles/dropdown/makes | jq '. | length'
# Should return a number > 0

# Vehicle CRUD endpoints still require authentication
curl -s http://localhost:3001/api/vehicles
# Should return 401 Unauthorized
```

## Error Handling

### Common Issues and Solutions

**Issue**: Empty response arrays
**Solution**: Check that the MVP Platform database has data, verify the SQL queries, check table names

**Issue**: Slow response times (> 100ms)
**Solution**: Add database indexes, optimize queries, check connection pool settings

**Issue**: Authentication errors on dropdown endpoints
**Solution**: Verify the routes don't have authentication middleware, check security.md compliance

**Issue**: Wrong response format
**Solution**: Compare with the original vPIC API responses and adjust the mapping in the controller

### Rollback Procedure

1. Revert vehicles.controller.ts:
```bash
git checkout HEAD -- backend/src/features/vehicles/api/vehicles.controller.ts
```

2. Revert vehicles.routes.ts if modified:
```bash
git checkout HEAD -- backend/src/features/vehicles/api/vehicles.routes.ts
```

3. Restart the backend service:
```bash
docker-compose restart backend
```

## Next Steps

After successful completion of Phase 3:

1. Proceed to [Phase 4: Scheduled ETL](./phase-04-scheduled-etl.md)
2. Monitor API response times in production
3. Set up alerts for health check failures

## Dependencies for Next Phase

- All dropdown APIs returning correct data
- Response times consistently under 100ms
- Health check endpoint functional
- No authentication issues with dropdown endpoints
- Error handling working properly

596
docs/changes/vehicles-dropdown-v1/phase-04-scheduled-etl.md
Normal file
@@ -0,0 +1,596 @@

# Phase 4: Scheduled ETL Implementation

## Overview

This phase implements automated weekly ETL processing using a cron-based scheduler within the existing ETL container. The ETL process extracts data from the MSSQL source database, transforms it for optimal query performance, and loads it into the MVP Platform database.

## Prerequisites

- Phase 3 API migration completed successfully
- ETL scheduler container built and functional
- MSSQL source database with NHTSA data restored
- MVP Platform database accessible
- ETL Python code functional in the vehicle-etl directory

## Scheduled ETL Architecture

**Container**: `etl-scheduler` (already defined in Phase 1)
**Schedule**: Weekly on Sunday at 2 AM (configurable)
**Runtime**: Python 3.11 with cron daemon
**Dependencies**: Both the MSSQL and MVP Platform databases must be healthy
|
||||
|
||||
## Tasks
|
||||
|
||||
### Task 4.1: Create ETL Scheduler Dockerfile
|
||||
|
||||
**Location**: `vehicle-etl/docker/Dockerfile.etl`
|
||||
|
||||
**Action**: Create Dockerfile with cron daemon and ETL dependencies:
|
||||
|
||||
```dockerfile
|
||||
FROM python:3.11-slim
|
||||
|
||||
# Install system dependencies including cron
|
||||
RUN apt-get update && apt-get install -y \
|
||||
cron \
|
||||
procps \
|
||||
curl \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Create app directory
|
||||
WORKDIR /app
|
||||
|
||||
# Copy requirements and install Python dependencies
|
||||
COPY requirements-etl.txt .
|
||||
RUN pip install --no-cache-dir -r requirements-etl.txt
|
||||
|
||||
# Copy ETL source code
|
||||
COPY etl/ ./etl/
|
||||
COPY sql/ ./sql/
|
||||
COPY scripts/ ./scripts/
|
||||
|
||||
# Create logs directory
|
||||
RUN mkdir -p /app/logs
|
||||
|
||||
# Copy cron configuration script
|
||||
COPY docker/setup-cron.sh /setup-cron.sh
|
||||
RUN chmod +x /setup-cron.sh
|
||||
|
||||
# Copy entrypoint script
|
||||
COPY docker/entrypoint.sh /entrypoint.sh
|
||||
RUN chmod +x /entrypoint.sh
|
||||
|
||||
# Set up cron job
|
||||
RUN /setup-cron.sh
|
||||
|
||||
# Health check
|
||||
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
|
||||
CMD python -c "import sys; from etl.connections import test_connections; sys.exit(0 if test_connections() else 1)"
|
||||
|
||||
ENTRYPOINT ["/entrypoint.sh"]
|
||||
```
|
||||
|
||||
### Task 4.2: Create Cron Setup Script

**Location**: `vehicle-etl/docker/setup-cron.sh`

**Action**: Create a script to configure the cron job:

```bash
#!/bin/bash

# Use the schedule from the environment, defaulting to Sunday 2 AM
ETL_SCHEDULE=${ETL_SCHEDULE:-"0 2 * * 0"}

# Create a cron job that runs the ETL process
echo "$ETL_SCHEDULE cd /app && python -m etl.main build-catalog >> /app/logs/etl-cron.log 2>&1" > /etc/cron.d/etl-job

# Set permissions
chmod 0644 /etc/cron.d/etl-job

# Apply cron job
crontab /etc/cron.d/etl-job

echo "ETL cron job configured with schedule: $ETL_SCHEDULE"
```
### Task 4.3: Create Container Entrypoint

**Location**: `vehicle-etl/docker/entrypoint.sh`

**Action**: Create an entrypoint script that starts the cron daemon:

```bash
#!/bin/bash
set -e

# Start cron daemon in the background
cron -f &
CRON_PID=$!

# Function to handle shutdown
shutdown() {
    echo "Shutting down ETL scheduler..."
    kill $CRON_PID
    exit 0
}

# Trap SIGTERM and SIGINT
trap shutdown SIGTERM SIGINT

# Run initial ETL if requested
if [ "$RUN_INITIAL_ETL" = "true" ]; then
    echo "Running initial ETL process..."
    cd /app && python -m etl.main build-catalog
fi

# Log startup
echo "ETL scheduler started with schedule: ${ETL_SCHEDULE:-0 2 * * 0}"
echo "Cron daemon PID: $CRON_PID"

# Keep container running
wait $CRON_PID
```
### Task 4.4: Update ETL Main Module

**Location**: `vehicle-etl/etl/main.py`

**Action**: Ensure the ETL main module supports the build-catalog command:

```python
#!/usr/bin/env python3
"""
ETL Main Module - Vehicle Catalog Builder
"""

import sys
import argparse
import logging
import traceback
from datetime import datetime

from etl.utils.logging import setup_logging
from etl.builders.vehicle_catalog_builder import VehicleCatalogBuilder
from etl.connections import test_connections

# Module-level logger, so it is available in the except block even if
# setup inside the try block fails
logger = logging.getLogger(__name__)


def build_catalog():
    """Run the complete ETL pipeline to build the vehicle catalog"""
    try:
        setup_logging()

        start_time = datetime.now()
        logger.info(f"Starting ETL pipeline at {start_time}")

        # Test all connections first
        if not test_connections():
            logger.error("Connection tests failed - aborting ETL")
            return False

        # Initialize catalog builder
        builder = VehicleCatalogBuilder()

        # Run ETL pipeline steps
        logger.info("Step 1: Extracting data from MSSQL source...")
        if not builder.extract_source_data():
            logger.error("Data extraction failed")
            return False

        logger.info("Step 2: Transforming data for catalog...")
        if not builder.transform_catalog_data():
            logger.error("Data transformation failed")
            return False

        logger.info("Step 3: Loading data to MVP Platform database...")
        if not builder.load_catalog_data():
            logger.error("Data loading failed")
            return False

        # Generate completion report
        end_time = datetime.now()
        duration = end_time - start_time
        logger.info(f"ETL pipeline completed successfully in {duration}")

        # Write completion marker
        with open('/app/logs/etl-last-run.txt', 'w') as f:
            f.write(f"{end_time.isoformat()}\n")
            f.write(f"Duration: {duration}\n")
            f.write("Status: SUCCESS\n")

        return True

    except Exception as e:
        logger.error(f"ETL pipeline failed: {e}")
        logger.error(traceback.format_exc())

        # Write error marker
        with open('/app/logs/etl-last-run.txt', 'w') as f:
            f.write(f"{datetime.now().isoformat()}\n")
            f.write("Status: FAILED\n")
            f.write(f"Error: {e}\n")

        return False


def main():
    """Main entry point"""
    parser = argparse.ArgumentParser(description='Vehicle ETL Pipeline')
    parser.add_argument('command', choices=['build-catalog', 'test-connections', 'validate'],
                        help='Command to execute')
    parser.add_argument('--log-level', default='INFO',
                        choices=['DEBUG', 'INFO', 'WARNING', 'ERROR'],
                        help='Logging level')

    args = parser.parse_args()

    # Setup logging
    logging.basicConfig(
        level=getattr(logging, args.log_level),
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )

    if args.command == 'build-catalog':
        success = build_catalog()
        sys.exit(0 if success else 1)

    elif args.command == 'test-connections':
        success = test_connections()
        print("All connections successful" if success else "Connection tests failed")
        sys.exit(0 if success else 1)

    elif args.command == 'validate':
        # Add validation logic here
        print("Validation not yet implemented")
        sys.exit(1)


if __name__ == '__main__':
    main()
```
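The `VehicleCatalogBuilder` imported above lives elsewhere in the ETL package; its internals are not shown here. As a rough sketch of the contract the pipeline relies on — three boolean-returning steps that share state on the instance — with hypothetical internals standing in for the real MSSQL/PostgreSQL work:

```python
import logging

logger = logging.getLogger(__name__)


class VehicleCatalogBuilder:
    """Minimal stand-in showing the contract main.py assumes:
    extract -> transform -> load, each returning True on success."""

    def __init__(self):
        self.raw_rows = None
        self.catalog_rows = None

    def extract_source_data(self) -> bool:
        # The real builder would query the MSSQL vPIC tables here;
        # a stub row keeps this sketch runnable
        self.raw_rows = [{"make": "HONDA", "model": "Civic", "year": 2021}]
        return self.raw_rows is not None

    def transform_catalog_data(self) -> bool:
        if self.raw_rows is None:
            return False
        # Normalize casing for dropdown display
        self.catalog_rows = [
            {**row, "make": row["make"].title()} for row in self.raw_rows
        ]
        return True

    def load_catalog_data(self) -> bool:
        # The real builder would bulk-insert into the MVP Platform database
        return self.catalog_rows is not None
```

Keeping each step a boolean lets `build_catalog()` short-circuit and write a FAILED marker without exception plumbing between stages.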
### Task 4.5: Create Connection Testing Module

**Location**: `vehicle-etl/etl/connections.py`

**Action**: Create connection testing utilities:

```python
"""
Database connection testing utilities
"""

import os
import logging

import pyodbc
import psycopg2
import redis

logger = logging.getLogger(__name__)


def test_mssql_connection():
    """Test MSSQL source database connection"""
    try:
        connection_string = (
            f"DRIVER={{ODBC Driver 17 for SQL Server}};"
            f"SERVER={os.getenv('MSSQL_HOST', 'localhost')};"
            f"DATABASE={os.getenv('MSSQL_DATABASE', 'VPICList')};"
            f"UID={os.getenv('MSSQL_USERNAME', 'sa')};"
            f"PWD={os.getenv('MSSQL_PASSWORD')};"
            f"TrustServerCertificate=yes;"
        )

        conn = pyodbc.connect(connection_string)
        cursor = conn.cursor()
        cursor.execute("SELECT @@VERSION")
        version = cursor.fetchone()
        logger.info(f"MSSQL connection successful: {version[0][:50]}...")
        cursor.close()
        conn.close()
        return True

    except Exception as e:
        logger.error(f"MSSQL connection failed: {e}")
        return False


def test_postgres_connection():
    """Test PostgreSQL MVP Platform database connection"""
    try:
        conn = psycopg2.connect(
            host=os.getenv('POSTGRES_HOST', 'localhost'),
            port=int(os.getenv('POSTGRES_PORT', '5432')),
            database=os.getenv('POSTGRES_DATABASE', 'mvp-platform-vehicles'),
            user=os.getenv('POSTGRES_USERNAME', 'mvp_platform_user'),
            password=os.getenv('POSTGRES_PASSWORD')
        )

        cursor = conn.cursor()
        cursor.execute("SELECT version()")
        version = cursor.fetchone()
        logger.info(f"PostgreSQL connection successful: {version[0][:50]}...")
        cursor.close()
        conn.close()
        return True

    except Exception as e:
        logger.error(f"PostgreSQL connection failed: {e}")
        return False


def test_redis_connection():
    """Test Redis cache connection"""
    try:
        r = redis.Redis(
            host=os.getenv('REDIS_HOST', 'localhost'),
            port=int(os.getenv('REDIS_PORT', '6379')),
            decode_responses=True
        )

        r.ping()
        logger.info("Redis connection successful")
        return True

    except Exception as e:
        logger.error(f"Redis connection failed: {e}")
        return False


def test_connections():
    """Test all database connections"""
    logger.info("Testing all database connections...")

    mssql_ok = test_mssql_connection()
    postgres_ok = test_postgres_connection()
    redis_ok = test_redis_connection()

    all_ok = mssql_ok and postgres_ok and redis_ok

    if all_ok:
        logger.info("All database connections successful")
    else:
        logger.error("One or more database connections failed")

    return all_ok
```
### Task 4.6: Create ETL Monitoring Script

**Location**: `vehicle-etl/scripts/check-etl-status.sh`

**Action**: Create a monitoring script for ETL health:

```bash
#!/bin/bash

# ETL Status Monitoring Script

LOG_FILE="/app/logs/etl-last-run.txt"
CRON_LOG="/app/logs/etl-cron.log"

echo "=== ETL Status Check ==="
echo "Timestamp: $(date)"
echo

# Check if last run file exists
if [ ! -f "$LOG_FILE" ]; then
    echo "❌ No ETL run detected yet"
    exit 1
fi

# Read last run information
echo "📄 Last ETL Run Information:"
cat "$LOG_FILE"
echo

# Check if last run was successful
if grep -q "Status: SUCCESS" "$LOG_FILE"; then
    echo "✅ Last ETL run was successful"
    EXIT_CODE=0
else
    echo "❌ Last ETL run failed"
    EXIT_CODE=1
fi

# Show last few lines of cron log
echo
echo "📋 Recent ETL Log (last 10 lines):"
if [ -f "$CRON_LOG" ]; then
    tail -10 "$CRON_LOG"
else
    echo "No cron log found"
fi

echo
echo "=== End Status Check ==="

exit $EXIT_CODE
```
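One gap in the status script above: it never checks how long ago the last run happened, so a weekly job that silently stops firing would keep reporting its old SUCCESS marker. A freshness check along these lines could be added (hypothetical, not part of the current script; GNU `date -d` is assumed, as available in the python:3.11-slim base image):

```shell
# Fail if the ISO timestamp on the first line of the marker file is older
# than 8 days (one weekly cycle plus a day of slack).
check_etl_freshness() {
    local log_file="$1"
    local max_age_seconds=$((8 * 24 * 3600))
    local last_run last_epoch now_epoch

    last_run=$(head -1 "$log_file")             # ISO timestamp written by etl.main
    last_epoch=$(date -d "$last_run" +%s) || return 1
    now_epoch=$(date +%s)

    if [ $((now_epoch - last_epoch)) -gt "$max_age_seconds" ]; then
        echo "❌ Last ETL run is older than 8 days"
        return 1
    fi
    echo "✅ Last ETL run is recent"
}
```

Wired into `check-etl-status.sh`, this would also make the container health check (Task 4.7) fail on a stalled schedule, not just on a failed run.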
### Task 4.7: Update Docker Compose Health Checks

**Location**: `docker-compose.yml` (update the existing etl-scheduler service)

**Action**: Update the ETL scheduler service definition with proper health checks:

```yaml
etl-scheduler:
  build:
    context: ./vehicle-etl
    dockerfile: docker/Dockerfile.etl
  container_name: mvp-etl-scheduler
  environment:
    # ... existing environment variables ...
    # Health check configuration
    - HEALTH_CHECK_ENABLED=true
  volumes:
    - ./vehicle-etl/logs:/app/logs
    - etl_scheduler_data:/app/data
  depends_on:
    mssql-source:
      condition: service_healthy
    mvp-platform-database:
      condition: service_healthy
    redis:
      condition: service_healthy
  restart: unless-stopped
  healthcheck:
    test: ["CMD", "/app/scripts/check-etl-status.sh"]
    interval: 60s
    timeout: 30s
    retries: 3
    start_period: 120s
```
### Task 4.8: Create ETL Requirements File

**Location**: `vehicle-etl/requirements-etl.txt`

**Action**: Ensure all required Python packages are listed:

```txt
# Database connectivity
pyodbc>=4.0.35
psycopg2-binary>=2.9.5
redis>=4.5.1

# Data processing
pandas>=1.5.3
numpy>=1.24.2

# Utilities
python-dateutil>=2.8.2
tqdm>=4.64.1

# Logging and monitoring
structlog>=22.3.0

# Configuration
python-decouple>=3.6

# Testing (for validation)
pytest>=7.2.1
pytest-asyncio>=0.20.3
```
## Validation Steps

### Step 1: Build and Test ETL Container

```bash
# Build the ETL scheduler container
docker-compose build etl-scheduler

# Test container startup
docker-compose up etl-scheduler -d

# Check container logs
docker-compose logs etl-scheduler
```

### Step 2: Test ETL Connections

```bash
# Test database connections
docker-compose exec etl-scheduler python -m etl.main test-connections

# Should output: "All connections successful"
```

### Step 3: Test Manual ETL Execution

```bash
# Run the ETL manually to test functionality
docker-compose exec etl-scheduler python -m etl.main build-catalog

# Check for success in the logs
docker-compose exec etl-scheduler cat /app/logs/etl-last-run.txt
```

### Step 4: Verify Cron Configuration

```bash
# Check the cron job is configured
docker-compose exec etl-scheduler crontab -l

# Should show: "0 2 * * 0 cd /app && python -m etl.main build-catalog >> /app/logs/etl-cron.log 2>&1"
```

### Step 5: Test ETL Status Monitoring

```bash
# Test the status check script
docker-compose exec etl-scheduler /app/scripts/check-etl-status.sh

# Check the health check endpoint
curl -f http://localhost:8080/health || echo "Health check failed"
```
## Error Handling

### Common Issues and Solutions

**Issue**: Cron daemon not starting
**Solution**: Check entrypoint.sh permissions, verify the cron package installation

**Issue**: Database connection failures
**Solution**: Verify network connectivity, check environment variables, ensure databases are healthy

**Issue**: ETL process hanging
**Solution**: Add timeout mechanisms, check for deadlocks, increase memory limits

**Issue**: Log files not being written
**Solution**: Check volume mounts, verify directory permissions

### ETL Failure Recovery

**Automatic Recovery**:
- Container restart policy: `unless-stopped`
- Retry logic in ETL scripts (max 3 retries)
- Health check will restart the container if the ETL consistently fails

**Manual Recovery**:
```bash
# Check ETL status
docker-compose exec etl-scheduler /app/scripts/check-etl-status.sh

# Restart ETL container
docker-compose restart etl-scheduler

# Run ETL manually if needed
docker-compose exec etl-scheduler python -m etl.main build-catalog
```
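The "max 3 retries" logic mentioned under Automatic Recovery lives inside the ETL scripts themselves. A minimal sketch of such a retry wrapper (names and delays hypothetical, not the actual implementation) around one boolean-returning pipeline step:

```python
import time
import logging

logger = logging.getLogger(__name__)


def with_retries(step, max_retries=3, delay_seconds=5):
    """Run a boolean-returning ETL step, retrying on failure or exception.

    Returns True as soon as the step succeeds, False once max_retries
    attempts have all failed."""
    for attempt in range(1, max_retries + 1):
        try:
            if step():
                return True
            logger.warning(f"Attempt {attempt}/{max_retries} returned failure")
        except Exception as e:
            logger.warning(f"Attempt {attempt}/{max_retries} raised: {e}")
        if attempt < max_retries:
            time.sleep(delay_seconds)  # back off before the next attempt
    return False
```

In `build_catalog()` this could wrap each stage, e.g. `with_retries(builder.extract_source_data)`, so a transient MSSQL hiccup does not fail the whole weekly run.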
### Rollback Procedure

1. Stop the ETL scheduler:
```bash
docker-compose stop etl-scheduler
```

2. Remove ETL-related files if needed:
```bash
rm -rf vehicle-etl/docker/
```

3. Remove the ETL scheduler service from docker-compose.yml

4. Restart the remaining services:
```bash
docker-compose up -d
```
## Next Steps

After successful completion of Phase 4:

1. Proceed to [Phase 5: Testing & Validation](./phase-05-testing.md)
2. Monitor ETL execution for the first few runs
3. Set up alerting for ETL failures
4. Document ETL maintenance procedures

## Dependencies for Next Phase

- ETL scheduler running successfully
- Cron job configured and functional
- First ETL run completed successfully
- MVP Platform database populated with vehicle data
- ETL monitoring and health checks working
727
docs/changes/vehicles-dropdown-v1/phase-05-testing.md
Normal file
@@ -0,0 +1,727 @@
# Phase 5: Testing & Validation

## Overview

This phase provides comprehensive testing procedures to validate that the Vehicle ETL integration meets all performance, accuracy, and reliability requirements. Testing covers API functionality, performance benchmarks, data accuracy, and system reliability.

## Prerequisites

- All previous phases (1-4) completed successfully
- MVP Platform database populated with vehicle data
- All API endpoints functional
- ETL scheduler running and operational
- Backend service connected to the MVP Platform database

## Success Criteria Review

Before starting tests, review the success criteria:

- ✅ **Zero Breaking Changes**: All existing vehicle functionality unchanged
- ✅ **Performance**: Dropdown APIs maintain < 100ms response times
- ✅ **Accuracy**: VIN decoding matches current NHTSA accuracy (99.9%+)
- ✅ **Reliability**: Weekly ETL completes successfully with error handling
- ✅ **Scalability**: Clean two-database architecture ready for additional platform services

## Testing Categories

### Category 1: API Functionality Testing
### Category 2: Performance Testing
### Category 3: Data Accuracy Validation
### Category 4: ETL Process Testing
### Category 5: Error Handling & Recovery
### Category 6: Load Testing
### Category 7: Security Validation

---
## Category 1: API Functionality Testing

### Test 1.1: Dropdown API Response Formats

**Purpose**: Verify all dropdown endpoints return data in the exact same format as before

**Test Script**: `test-api-formats.sh`

```bash
#!/bin/bash

echo "=== API Format Validation Tests ==="

# Test makes endpoint
echo "Testing /api/vehicles/dropdown/makes..."
MAKES_RESPONSE=$(curl -s http://localhost:3001/api/vehicles/dropdown/makes)
MAKES_COUNT=$(echo "$MAKES_RESPONSE" | jq '. | length')

if [ "$MAKES_COUNT" -gt 0 ]; then
    # Check the first item has the correct format
    FIRST_MAKE=$(echo "$MAKES_RESPONSE" | jq '.[0]')
    if echo "$FIRST_MAKE" | jq -e '.Make_ID and .Make_Name' > /dev/null; then
        echo "✅ Makes format correct"
    else
        echo "❌ Makes format incorrect: $FIRST_MAKE"
        exit 1
    fi
else
    echo "❌ No makes returned"
    exit 1
fi

# Test models endpoint
echo "Testing /api/vehicles/dropdown/models/:make..."
FIRST_MAKE_NAME=$(echo "$MAKES_RESPONSE" | jq -r '.[0].Make_Name')
MODELS_RESPONSE=$(curl -s "http://localhost:3001/api/vehicles/dropdown/models/$FIRST_MAKE_NAME")
MODELS_COUNT=$(echo "$MODELS_RESPONSE" | jq '. | length')

if [ "$MODELS_COUNT" -gt 0 ]; then
    FIRST_MODEL=$(echo "$MODELS_RESPONSE" | jq '.[0]')
    if echo "$FIRST_MODEL" | jq -e '.Model_ID and .Model_Name' > /dev/null; then
        echo "✅ Models format correct"
    else
        echo "❌ Models format incorrect: $FIRST_MODEL"
        exit 1
    fi
else
    echo "⚠️ No models for $FIRST_MAKE_NAME (may be expected)"
fi

# The remaining three endpoints share the same {"Name": ...} item format,
# so one loop covers them all
for ENDPOINT_NAME in transmissions engines trims; do
    echo "Testing /api/vehicles/dropdown/$ENDPOINT_NAME..."
    RESPONSE=$(curl -s "http://localhost:3001/api/vehicles/dropdown/$ENDPOINT_NAME")
    COUNT=$(echo "$RESPONSE" | jq '. | length')

    if [ "$COUNT" -gt 0 ]; then
        FIRST_ITEM=$(echo "$RESPONSE" | jq '.[0]')
        if echo "$FIRST_ITEM" | jq -e '.Name' > /dev/null; then
            echo "✅ $ENDPOINT_NAME format correct"
        else
            echo "❌ $ENDPOINT_NAME format incorrect: $FIRST_ITEM"
            exit 1
        fi
    else
        echo "❌ No $ENDPOINT_NAME returned"
        exit 1
    fi
done

echo "✅ All API format tests passed"
```
### Test 1.2: Authentication Validation

**Purpose**: Ensure dropdown endpoints remain unauthenticated while CRUD endpoints require authentication

**Test Script**: `test-authentication.sh`

```bash
#!/bin/bash

echo "=== Authentication Validation Tests ==="

# Test dropdown endpoints are unauthenticated
echo "Testing dropdown endpoints without authentication..."

ENDPOINTS=(
    "/api/vehicles/dropdown/makes"
    "/api/vehicles/dropdown/transmissions"
    "/api/vehicles/dropdown/engines"
    "/api/vehicles/dropdown/trims"
)

for endpoint in "${ENDPOINTS[@]}"; do
    RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:3001$endpoint")
    if [ "$RESPONSE" = "200" ]; then
        echo "✅ $endpoint accessible without auth"
    else
        echo "❌ $endpoint returned $RESPONSE (should be 200)"
        exit 1
    fi
done

# Test CRUD endpoints require authentication
echo "Testing CRUD endpoints require authentication..."

CRUD_ENDPOINTS=(
    "/api/vehicles"
    "/api/vehicles/123"
)

for endpoint in "${CRUD_ENDPOINTS[@]}"; do
    RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:3001$endpoint")
    if [ "$RESPONSE" = "401" ]; then
        echo "✅ $endpoint properly requires auth"
    else
        echo "❌ $endpoint returned $RESPONSE (should be 401)"
        exit 1
    fi
done

echo "✅ All authentication tests passed"
```

---
## Category 2: Performance Testing

### Test 2.1: Response Time Measurement

**Purpose**: Verify all dropdown APIs respond in < 100ms

**Test Script**: `test-performance.sh`

```bash
#!/bin/bash

echo "=== Performance Tests ==="

ENDPOINTS=(
    "/api/vehicles/dropdown/makes"
    "/api/vehicles/dropdown/models/Honda"
    "/api/vehicles/dropdown/transmissions"
    "/api/vehicles/dropdown/engines"
    "/api/vehicles/dropdown/trims"
)

MAX_RESPONSE_TIME=100 # milliseconds

for endpoint in "${ENDPOINTS[@]}"; do
    echo "Testing $endpoint performance..."

    # Run 5 tests and average them
    TOTAL_TIME=0
    for i in {1..5}; do
        START_TIME=$(date +%s%3N)
        curl -s "http://localhost:3001$endpoint" > /dev/null
        END_TIME=$(date +%s%3N)
        RESPONSE_TIME=$((END_TIME - START_TIME))
        TOTAL_TIME=$((TOTAL_TIME + RESPONSE_TIME))
    done

    AVG_TIME=$((TOTAL_TIME / 5))

    if [ "$AVG_TIME" -lt "$MAX_RESPONSE_TIME" ]; then
        echo "✅ $endpoint: ${AVG_TIME}ms (under ${MAX_RESPONSE_TIME}ms)"
    else
        echo "❌ $endpoint: ${AVG_TIME}ms (exceeds ${MAX_RESPONSE_TIME}ms)"
        exit 1
    fi
done

echo "✅ All performance tests passed"
```
### Test 2.2: Cache Performance Testing

**Purpose**: Verify caching improves performance on subsequent requests

**Test Script**: `test-cache-performance.sh`

```bash
#!/bin/bash

echo "=== Cache Performance Tests ==="

ENDPOINT="/api/vehicles/dropdown/makes"

# Clear cache (requires Redis access)
docker-compose exec redis redis-cli FLUSHDB

echo "Testing first request (cache miss)..."
START_TIME=$(date +%s%3N)
curl -s "http://localhost:3001$ENDPOINT" > /dev/null
END_TIME=$(date +%s%3N)
FIRST_REQUEST_TIME=$((END_TIME - START_TIME))

echo "Testing second request (cache hit)..."
START_TIME=$(date +%s%3N)
curl -s "http://localhost:3001$ENDPOINT" > /dev/null
END_TIME=$(date +%s%3N)
SECOND_REQUEST_TIME=$((END_TIME - START_TIME))

echo "First request: ${FIRST_REQUEST_TIME}ms"
echo "Second request: ${SECOND_REQUEST_TIME}ms"

# A cache hit should be significantly faster
if [ "$SECOND_REQUEST_TIME" -lt "$FIRST_REQUEST_TIME" ]; then
    IMPROVEMENT=$((((FIRST_REQUEST_TIME - SECOND_REQUEST_TIME) * 100) / FIRST_REQUEST_TIME))
    echo "✅ Cache improved performance by ${IMPROVEMENT}%"
else
    echo "❌ Cache did not improve performance"
    exit 1
fi

echo "✅ Cache performance test passed"
```
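The improvement this test measures comes from a cache-aside pattern in the backend: the first request populates Redis, and later requests are served from it until the entry expires. A minimal illustration of that pattern (an in-memory dict stands in for Redis here, and the function and key names are hypothetical, not the backend's actual code):

```python
import time

cache = {}  # stands in for Redis in this sketch
CACHE_TTL_SECONDS = 3600


def fetch_makes_from_db():
    # Stand-in for the real PostgreSQL dropdown query
    return [{"Make_ID": 474, "Make_Name": "Honda"}]


def get_makes():
    """Cache-aside: serve a fresh cached value, otherwise query and cache."""
    entry = cache.get("dropdown:makes")
    if entry and time.time() - entry["at"] < CACHE_TTL_SECONDS:
        return entry["value"]           # cache hit: no database round-trip
    value = fetch_makes_from_db()       # cache miss: hit the database
    cache["dropdown:makes"] = {"value": value, "at": time.time()}
    return value
```

Flushing Redis before the first request, as the script does, guarantees the first timing is a genuine miss rather than a leftover hit from a previous run.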
---

## Category 3: Data Accuracy Validation

### Test 3.1: VIN Decoding Accuracy

**Purpose**: Verify VIN decoding produces accurate results

**Test Script**: `test-vin-accuracy.sh`

```bash
#!/bin/bash

echo "=== VIN Decoding Accuracy Tests ==="

# Test VINs with known results
declare -A TEST_VINS=(
    ["1HGBH41JXMN109186"]="Honda,Civic,2021"
    ["3GTUUFEL6PG140748"]="GMC,Sierra,2023"
    ["1G1YU3D64H5602799"]="Chevrolet,Corvette,2017"
)

for vin in "${!TEST_VINS[@]}"; do
    echo "Testing VIN: $vin"

    # Create a test vehicle to trigger VIN decoding
    RESPONSE=$(curl -s -X POST "http://localhost:3001/api/vehicles" \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer test-token" \
        -d "{\"vin\":\"$vin\",\"nickname\":\"Test\"}" \
        2>/dev/null || echo "AUTH_ERROR")

    if [ "$RESPONSE" = "AUTH_ERROR" ]; then
        echo "⚠️ Skipping VIN test due to authentication (expected in testing)"
        continue
    fi

    # Parse expected results
    IFS=',' read -r EXPECTED_MAKE EXPECTED_MODEL EXPECTED_YEAR <<< "${TEST_VINS[$vin]}"

    # Extract actual results
    ACTUAL_MAKE=$(echo "$RESPONSE" | jq -r '.make // empty')
    ACTUAL_MODEL=$(echo "$RESPONSE" | jq -r '.model // empty')
    ACTUAL_YEAR=$(echo "$RESPONSE" | jq -r '.year // empty')

    # Validate results
    if [ "$ACTUAL_MAKE" = "$EXPECTED_MAKE" ] && \
       [ "$ACTUAL_MODEL" = "$EXPECTED_MODEL" ] && \
       [ "$ACTUAL_YEAR" = "$EXPECTED_YEAR" ]; then
        echo "✅ VIN $vin decoded correctly"
    else
        echo "❌ VIN $vin decoded incorrectly:"
        echo "   Expected: $EXPECTED_MAKE $EXPECTED_MODEL $EXPECTED_YEAR"
        echo "   Actual:   $ACTUAL_MAKE $ACTUAL_MODEL $ACTUAL_YEAR"
        exit 1
    fi
done

echo "✅ VIN accuracy tests passed"
```
### Test 3.2: Data Completeness Check

**Purpose**: Verify the MVP Platform database has comprehensive data

**Test Script**: `test-data-completeness.sh`

```bash
#!/bin/bash

echo "=== Data Completeness Tests ==="

# Test makes count
MAKES_COUNT=$(curl -s http://localhost:3001/api/vehicles/dropdown/makes | jq '. | length')
echo "Makes available: $MAKES_COUNT"

if [ "$MAKES_COUNT" -lt 50 ]; then
    echo "❌ Too few makes ($MAKES_COUNT < 50)"
    exit 1
fi

# Test transmissions count
TRANS_COUNT=$(curl -s http://localhost:3001/api/vehicles/dropdown/transmissions | jq '. | length')
echo "Transmissions available: $TRANS_COUNT"

if [ "$TRANS_COUNT" -lt 10 ]; then
    echo "❌ Too few transmissions ($TRANS_COUNT < 10)"
    exit 1
fi

# Test engines count
ENGINES_COUNT=$(curl -s http://localhost:3001/api/vehicles/dropdown/engines | jq '. | length')
echo "Engines available: $ENGINES_COUNT"

if [ "$ENGINES_COUNT" -lt 20 ]; then
    echo "❌ Too few engines ($ENGINES_COUNT < 20)"
    exit 1
fi

echo "✅ Data completeness tests passed"
```

---
## Category 4: ETL Process Testing

### Test 4.1: ETL Execution Test

**Purpose**: Verify the ETL process runs successfully

**Test Script**: `test-etl-execution.sh`

```bash
#!/bin/bash

echo "=== ETL Execution Tests ==="

# Check the ETL container is running
if ! docker-compose ps etl-scheduler | grep -q "Up"; then
    echo "❌ ETL scheduler container is not running"
    exit 1
fi

# Test manual ETL execution
echo "Running manual ETL test..."
docker-compose exec etl-scheduler python -m etl.main test-connections

if [ $? -eq 0 ]; then
    echo "✅ ETL connections successful"
else
    echo "❌ ETL connections failed"
    exit 1
fi

# Check ETL status
echo "Checking ETL status..."
docker-compose exec etl-scheduler /app/scripts/check-etl-status.sh

if [ $? -eq 0 ]; then
    echo "✅ ETL status check passed"
else
    echo "⚠️ ETL status check returned warnings (may be expected)"
fi

echo "✅ ETL execution tests completed"
```
### Test 4.2: ETL Scheduling Test
|
||||
|
||||
**Purpose**: Verify ETL is properly scheduled
|
||||
|
||||
**Test Script**: `test-etl-scheduling.sh`
|
||||
|
||||
```bash
#!/bin/bash

echo "=== ETL Scheduling Tests ==="

# Check that the cron job is configured
CRON_OUTPUT=$(docker-compose exec etl-scheduler crontab -l)

if echo "$CRON_OUTPUT" | grep -q "etl.main build-catalog"; then
  echo "✅ ETL cron job is configured"
else
  echo "❌ ETL cron job not found"
  exit 1
fi

# Check that the cron daemon is running
if docker-compose exec etl-scheduler pgrep cron > /dev/null; then
  echo "✅ Cron daemon is running"
else
  echo "❌ Cron daemon is not running"
  exit 1
fi

echo "✅ ETL scheduling tests passed"
```

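
For reference, the crontab entry that the grep above looks for might resemble the following; the schedule (weekly, Sunday 02:00), working directory, and log path are assumptions, not values confirmed by this document:

```
# Hypothetical crontab line matching the "etl.main build-catalog" check.
0 2 * * 0 cd /app && python -m etl.main build-catalog >> /var/log/etl-build.log 2>&1
```
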
---

## Category 5: Error Handling & Recovery

### Test 5.1: Database Connection Error Handling

**Purpose**: Verify graceful handling when MVP Platform database is unavailable

**Test Script**: `test-error-handling.sh`

```bash
#!/bin/bash

echo "=== Error Handling Tests ==="

# Stop the MVP Platform database temporarily
echo "Stopping MVP Platform database..."
docker-compose stop mvp-platform-database

sleep 5

# Test API responses while the database is down
RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:3001/api/vehicles/dropdown/makes")

if [ "$RESPONSE" = "503" ] || [ "$RESPONSE" = "500" ]; then
  echo "✅ API properly handles database unavailability (returned $RESPONSE)"
else
  echo "❌ API returned unexpected status: $RESPONSE"
fi

# Restart the database
echo "Restarting MVP Platform database..."
docker-compose start mvp-platform-database

# Wait for the database to be ready
sleep 15

# Test API recovery
RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:3001/api/vehicles/dropdown/makes")

if [ "$RESPONSE" = "200" ]; then
  echo "✅ API recovered after database restart"
else
  echo "❌ API did not recover (returned $RESPONSE)"
  exit 1
fi

echo "✅ Error handling tests passed"
```

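
The fixed `sleep 15` in the recovery step can flake on slow hosts. One hedged alternative is a generic polling helper; the function name and retry defaults below are assumptions:

```shell
#!/bin/bash
# Hypothetical retry helper: polls a command until it succeeds instead of
# sleeping a fixed time. The name wait_for and the 1-second interval are
# assumptions, not part of the original script.
wait_for() {
  local retries="$1"; shift
  local i
  for i in $(seq 1 "$retries"); do
    if "$@"; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# In the recovery step this could replace the fixed sleep, e.g.:
#   wait_for 30 curl -sf http://localhost:3001/api/vehicles/dropdown/makes
wait_for 3 true && echo "ready"
```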
---

## Category 6: Load Testing

### Test 6.1: Concurrent Request Testing

**Purpose**: Verify system handles multiple concurrent requests

**Test Script**: `test-load.sh`

```bash
#!/bin/bash

echo "=== Load Testing ==="

ENDPOINT="http://localhost:3001/api/vehicles/dropdown/makes"
CONCURRENT_REQUESTS=50
MAX_RESPONSE_TIME=500  # milliseconds

echo "Running $CONCURRENT_REQUESTS concurrent requests..."

# Create a temporary file for results
RESULTS_FILE=$(mktemp)

# Run concurrent requests
for i in $(seq 1 $CONCURRENT_REQUESTS); do
  {
    START_TIME=$(date +%s%3N)
    HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "$ENDPOINT")
    END_TIME=$(date +%s%3N)
    RESPONSE_TIME=$((END_TIME - START_TIME))
    echo "$HTTP_CODE,$RESPONSE_TIME" >> "$RESULTS_FILE"
  } &
done

# Wait for all requests to complete
wait

# Analyze results
SUCCESS_COUNT=$(grep -c "^200," "$RESULTS_FILE")
TOTAL_COUNT=$(wc -l < "$RESULTS_FILE")
AVG_TIME=$(awk -F',' '{sum+=$2} END {print sum/NR}' "$RESULTS_FILE")
MAX_TIME=$(awk -F',' '{max=($2>max?$2:max)} END {print max}' "$RESULTS_FILE")

echo "Results:"
echo "  Successful requests: $SUCCESS_COUNT/$TOTAL_COUNT"
echo "  Average response time: ${AVG_TIME}ms"
echo "  Maximum response time: ${MAX_TIME}ms"

# Cleanup
rm "$RESULTS_FILE"

# Validate results
if [ "$SUCCESS_COUNT" -eq "$TOTAL_COUNT" ] && [ "$MAX_TIME" -lt "$MAX_RESPONSE_TIME" ]; then
  echo "✅ Load test passed"
else
  echo "❌ Load test failed"
  exit 1
fi
```

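
Average and maximum can hide tail latency. A hedged sketch of a p95 computation over the same `code,milliseconds` CSV format the script writes (the sample values below are illustrative, not real measurements):

```shell
#!/bin/bash
# Hypothetical extension to the load test: p95 latency over the same
# "code,milliseconds" CSV lines. The sample data is illustrative only.
RESULTS_FILE=$(mktemp)
printf '200,%s\n' 10 20 30 40 50 60 70 80 90 100 > "$RESULTS_FILE"

# Sort the response times and take the value at the 95th-percentile index.
P95=$(cut -d',' -f2 "$RESULTS_FILE" | sort -n |
  awk '{a[NR]=$1} END {idx=int(0.95*NR); if (idx < 1) idx=1; print a[idx]}')

echo "p95 response time: ${P95}ms"
rm "$RESULTS_FILE"
```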
---

## Category 7: Security Validation

### Test 7.1: SQL Injection Prevention

**Purpose**: Verify protection against SQL injection attacks

**Test Script**: `test-security.sh`

```bash
#!/bin/bash

echo "=== Security Tests ==="

# Test SQL injection attempts in the make parameter
INJECTION_ATTEMPTS=(
  "'; DROP TABLE vehicles; --"
  "' OR '1'='1"
  "'; SELECT * FROM users; --"
  "../../../etc/passwd"
)

for injection in "${INJECTION_ATTEMPTS[@]}"; do
  echo "Testing injection attempt: $injection"

  # URL-encode the payload via stdin; interpolating it into the python -c
  # string would break on payloads containing single quotes
  ENCODED=$(printf '%s' "$injection" | python3 -c "import sys, urllib.parse; print(urllib.parse.quote(sys.stdin.read()))")

  RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" \
    "http://localhost:3001/api/vehicles/dropdown/models/$ENCODED")

  # Should return 400, 404, or 500 (not 200 with injected content)
  if [ "$RESPONSE" != "200" ]; then
    echo "✅ Injection attempt blocked (returned $RESPONSE)"
  else
    echo "⚠️ Injection attempt returned 200 (needs manual review)"
    # Additional validation of the response body would be needed here
  fi
done

echo "✅ Security tests completed"
```

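
Quoting is the subtle part of the URL-encoding step above: a payload that contains single quotes cannot safely be interpolated into a `python3 -c '...'` string literal, while piping it through stdin sidesteps the problem entirely. A minimal demonstration with one of the payloads from the list:

```shell
#!/bin/bash
# Demonstration of stdin-based URL encoding: the payload's embedded single
# quotes never touch the python source string, so nothing can break or inject.
payload="' OR '1'='1"
encoded=$(printf '%s' "$payload" |
  python3 -c "import sys, urllib.parse; print(urllib.parse.quote(sys.stdin.read()))")
echo "$encoded"
# → %27%20OR%20%271%27%3D%271
```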
---

## Comprehensive Test Execution

### Master Test Script

**Location**: `test-all.sh`

```bash
#!/bin/bash

echo "========================================="
echo "MotoVaultPro Vehicle ETL Integration Tests"
echo "========================================="

# Set up
chmod +x test-*.sh

# Track test results
PASSED=0
FAILED=0

run_test() {
  echo
  echo "Running $1..."
  if "./$1"; then
    echo "✅ $1 PASSED"
    ((PASSED++))
  else
    echo "❌ $1 FAILED"
    ((FAILED++))
  fi
}

# Execute all test categories
run_test "test-api-formats.sh"
run_test "test-authentication.sh"
run_test "test-performance.sh"
run_test "test-cache-performance.sh"
run_test "test-data-completeness.sh"
run_test "test-etl-execution.sh"
run_test "test-etl-scheduling.sh"
run_test "test-error-handling.sh"
run_test "test-load.sh"
run_test "test-security.sh"

# Final results
echo
echo "========================================="
echo "TEST SUMMARY"
echo "========================================="
echo "Passed: $PASSED"
echo "Failed: $FAILED"
echo "Total:  $((PASSED + FAILED))"

if [ $FAILED -eq 0 ]; then
  echo "✅ ALL TESTS PASSED"
  echo "Vehicle ETL integration is ready for production!"
  exit 0
else
  echo "❌ SOME TESTS FAILED"
  echo "Please review failed tests before proceeding."
  exit 1
fi
```

## Post-Testing Actions

### Success Actions

If all tests pass:

1. **Document Test Results**: Save test output and timestamps
2. **Update Monitoring**: Configure alerts for ETL failures
3. **Schedule Production Deployment**: Plan rollout timing
4. **Update Documentation**: Mark implementation as complete

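
A hedged sketch of step 1, keeping a timestamped log of each suite run; the `test-results/` directory name is an assumption, and the stub command stands in for `./test-all.sh` so the snippet is self-contained:

```shell
#!/bin/bash
# Hypothetical sketch for "Document Test Results": archive a timestamped log.
# test-results/ is an assumed directory name; TEST_CMD defaults to a stub that
# stands in for ./test-all.sh here.
TS=$(date +%Y%m%d-%H%M%S)
mkdir -p test-results
${TEST_CMD:-echo suite-output-placeholder} 2>&1 | tee "test-results/run-${TS}.log"
echo "Saved test-results/run-${TS}.log"
```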
### Failure Actions

If tests fail:

1. **Identify Root Cause**: Review failed test details
2. **Fix Issues**: Address specific failures
3. **Re-run Tests**: Validate fixes work
4. **Update Documentation**: Document any issues found

## Ongoing Monitoring

After successful testing, implement ongoing monitoring:

1. **API Performance Monitoring**: Track response times daily
2. **ETL Success Monitoring**: Weekly ETL completion alerts
3. **Data Quality Checks**: Monthly data completeness validation
4. **Error Rate Monitoring**: Track and alert on API error rates

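
A hedged sketch of item 1, a latency probe that a cron job or alerting wrapper could call; the 500 ms threshold and the wrapper itself are assumptions:

```shell
#!/bin/bash
# Hypothetical daily latency probe. The 500 ms default threshold and the idea
# of a cron/alerting wrapper around it are assumptions. Exits non-zero when
# the measured latency is at or above the threshold.
check_latency() {
  local url="$1" threshold_ms="${2:-500}"
  local ms
  ms=$(curl -s -o /dev/null -w '%{time_total}' "$url" | awk '{print int($1 * 1000)}')
  echo "latency: ${ms}ms (threshold: ${threshold_ms}ms)"
  [ "$ms" -lt "$threshold_ms" ]
}

# Example against the dropdown endpoint used throughout this document:
# check_latency "http://localhost:3001/api/vehicles/dropdown/makes" 500
```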
## Rollback Plan

If critical issues are discovered during testing:

1. **Immediate Rollback**: Revert to external vPIC API
2. **Data Preservation**: Ensure no data loss occurs
3. **Service Continuity**: Maintain all existing functionality
4. **Issue Analysis**: Investigate and document problems
5. **Improved Re-implementation**: Address issues before retry

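
One way step 1 could be wired is a backend environment flag; the `VEHICLES_API_MODE` name and its values are assumptions, not part of the existing configuration:

```shell
#!/bin/bash
# Hypothetical rollback switch. The VEHICLES_API_MODE flag name and its values
# are assumptions; defaulting to "external" routes lookups back to NHTSA vPIC.
MODE="${VEHICLES_API_MODE:-external}"

case "$MODE" in
  external) echo "Vehicle lookups: external NHTSA vPIC API" ;;
  local)    echo "Vehicle lookups: MVP Platform Vehicles Service" ;;
  *)        echo "Unknown VEHICLES_API_MODE: $MODE" >&2; exit 1 ;;
esac
```

Keeping the toggle a single variable keeps the rollback a config change rather than a redeploy, which supports the "Service Continuity" goal above.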