yet another processing refactor :3 Mostly works
This commit is contained in:
parent
ab4db1b8bd
commit
81d8404576
@ -1,154 +1,72 @@
# Processing Pipeline Refactoring Plan

**Overall Goal:** To simplify the processing pipeline by refactoring the map merging process, consolidating map transformations (Gloss-to-Rough, Normal Green Invert), and creating a unified, configurable image saving utility. This plan aims to improve clarity, significantly reduce I/O by favoring in-memory operations, and make Power-of-Two (POT) scaling an optional, integrated step.

## 1. Problem Summary

**I. Map Merging Stage (`processing/pipeline/stages/map_merging.py`)**

The current processing pipeline, particularly the `IndividualMapProcessingStage`, exhibits maintainability challenges:

* **Objective:** Transform this stage from performing merges itself to generating tasks for merged images.

* **Changes to `MapMergingStage.execute()`:**

  1. Iterate through `context.config_obj.map_merge_rules`.
  2. Identify the required input map types and find their corresponding source file paths (original paths, or outputs of prior essential stages if any).
  3. Create "merged image tasks" and add them to `context.merged_image_tasks`.
  4. Each task entry will contain:
     * `output_map_type`: Target map type (e.g., "MAP_NRMRGH").
     * `input_map_sources`: Details of the source map types and file paths.
     * `merge_rule_config`: The complete merge rule configuration (including fallback values).
     * `source_dimensions`: Dimensions for the high-resolution merged map basis.
     * `source_bit_depths`: Bit depths of the original source maps (needed for the "respect_inputs" rule in the save utility).
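To make the task shape concrete, a hypothetical entry using the field names listed above might look like the following (the value shapes and file paths are illustrative assumptions, not the actual implementation):

```python
# Hypothetical sketch of one entry appended to context.merged_image_tasks.
# Field names follow the plan; exact value shapes and paths are assumptions.
merged_task = {
    "output_map_type": "MAP_NRMRGH",
    "input_map_sources": {
        "MAP_NRM": "textures/asset_nrm.png",     # illustrative path
        "MAP_ROUGH": "textures/asset_rough.png", # illustrative path
    },
    "merge_rule_config": {
        "inputs": {"R": "MAP_NRM", "G": "MAP_NRM", "B": "MAP_NRM", "A": "MAP_ROUGH"},
        "fallback_values": {"A": 128},  # used when an input file is missing
    },
    "source_dimensions": (4096, 4096),   # basis for the high-res merged map
    "source_bit_depths": [8, 16],        # drives the "respect_inputs" rule later
}
```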

* **High Complexity:** The stage handles too many responsibilities (loading, merging, transformations, scaling, saving).

* **Duplicated Logic:** Image transformations (Gloss-to-Rough, Normal Green Invert) are duplicated within the stage instead of relying solely on dedicated stages or being handled consistently.

* **Tight Coupling:** Heavy reliance on the large, mutable `AssetProcessingContext` object creates implicit dependencies and makes isolated testing difficult.

**II. Individual Map Processing Stage (`processing/pipeline/stages/individual_map_processing.py`)**

## 2. Refactoring Goals

* **Objective:** Adapt this stage to handle both individual raw maps and `merged_image_tasks`. It performs the necessary in-memory transformations (Gloss-to-Rough, Normal Green Invert) and prepares a single high-resolution source image (in memory) to be passed to the `UnifiedSaveUtility`.

* **Changes to `IndividualMapProcessingStage.execute()`:**

  1. **Input Handling Loop:** Iterate through `context.files_to_process` (regular maps) and `context.merged_image_tasks`.
  2. **Image Data Preparation:**
     * **For regular maps:** Load the source image file into memory (`current_image_data`). Determine `base_map_type` from the `FileRule` and determine the source bit depth.
     * **For `merged_image_tasks`:**
       * Attempt to load the input map files specified in `input_map_sources`. If a file is missing, log a warning and generate placeholder data using the fallback values from `merge_rule_config`. Handle other load errors.
       * Check the dimensions of the loaded/fallback data. Apply `MERGE_DIMENSION_MISMATCH_STRATEGY` (e.g., resize and log a warning) or handle the "ERROR_SKIP" strategy (log an error, mark the task failed, continue).
       * Perform the merge operation in memory according to `merge_rule_config`. The result is `current_image_data`; `base_map_type` is the task's `output_map_type`.
  3. **In-Memory Transformations:**
     * **Gloss-to-Rough Conversion:** If `base_map_type` starts with "MAP_GLOSS":
       * Perform the inversion on `current_image_data` (in memory).
       * Update `base_map_type` to "MAP_ROUGH".
       * Log the conversion.
     * **Normal Map Green Channel Inversion:** If `base_map_type` is "NORMAL" *and* `context.config_obj.general_settings.invert_normal_map_green_channel_globally` is true:
       * Perform the green channel inversion on `current_image_data` (in memory).
       * Log the inversion.
  4. **Optional Initial Scaling (POT or other):**
     * Check `INITIAL_SCALING_MODE` from config.
     * If `"POT_DOWNSCALE"`: perform POT downscaling on `current_image_data` (in memory) -> `image_to_save`.
     * If `"NONE"`: `image_to_save = current_image_data`.
     * *(Note: `image_to_save` now reflects any prior transformations.)*
  5. **Color Management:** Apply any necessary color management to `image_to_save`.
  6. **Pass to Save Utility:** Pass `image_to_save`, the (potentially updated) `base_map_type`, the original source bit-depth info (for the "respect_inputs" rule), and other necessary details (such as specific config values) to the `UnifiedSaveUtility`.
  7. **Remove Old Logic:** Remove the old save logic and the separate Gloss/Normal stage calls.
  8. **Context Update:** Update `context.processed_maps_details` with results from the `UnifiedSaveUtility`, including notes about any conversions/inversions performed or merge task failures.
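The in-memory transformations of step 3 and the optional POT downscale of step 4 could be sketched with NumPy along these lines (helper names are illustrative assumptions, not the actual implementation; the real downscale would also resample pixels, not just pick target dimensions):

```python
import numpy as np

def gloss_to_rough(img: np.ndarray) -> np.ndarray:
    """Invert a gloss map in memory (roughness = max - gloss)."""
    max_val = 255 if img.dtype == np.uint8 else 65535
    return (max_val - img).astype(img.dtype)

def invert_green_channel(img: np.ndarray) -> np.ndarray:
    """Return a copy of an HxWxC normal map with the green channel flipped."""
    out = img.copy()
    max_val = 255 if img.dtype == np.uint8 else 65535
    out[..., 1] = max_val - out[..., 1]
    return out

def pot_downscale_dims(w: int, h: int) -> tuple:
    """Largest power-of-two dimensions not exceeding (w, h)."""
    pot = lambda n: 1 << (n.bit_length() - 1)
    return pot(w), pot(h)
```

With these helpers, a gloss map is inverted and relabelled `MAP_ROUGH`, the normal map's green channel is flipped only when the global config flag is set, and `pot_downscale_dims(1000, 512)` yields `(512, 512)`.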

* Improve code readability and understanding.

* Enhance maintainability by localizing changes and removing duplication.

* Increase testability through smaller, focused components with clear interfaces.

* Clarify data dependencies between pipeline stages.

* Adhere more closely to the Single Responsibility Principle (SRP).

**III. Unified Image Save Utility (new file: `processing/utils/image_saving_utils.py`)**

## 3. Proposed New Pipeline Stages

* **Objective:** Centralize all image saving logic (resolution variants, format, bit depth, compression).

* **Interface (e.g., a `save_image_variants` function):**

  * **Inputs:**
    * `source_image_data (np.ndarray)`: High-resolution image data (in memory, potentially transformed).
    * `base_map_type (str)`: Final map type (e.g., "COL", "ROUGH", "NORMAL", "MAP_NRMRGH").
    * `source_bit_depth_info (list)`: List of the original source bit depth(s).
    * Specific config values (e.g., `image_resolutions: dict`, `file_type_defs: dict`, `output_format_8bit: str`, etc.).
    * `output_filename_pattern_tokens (dict)`.
    * `output_base_directory (Path)`.

  * **Core Functionality:**
    1. Use the provided configuration inputs.
    2. Determine the target bit depth:
       * Use the `bit_depth_rule` for `base_map_type` from `file_type_defs`.
       * If "force_8bit": target 8-bit.
       * If "respect_inputs": if `any(depth > 8 for depth in source_bit_depth_info)`, target 16-bit; otherwise 8-bit.
    3. Determine the output file format(s) (based on target bit depth and config).
    4. Generate and save resolution variants:
       * Iterate through `image_resolutions`.
       * Resize `source_image_data` (in memory) for each variant (no upscaling).
       * Construct the filename and path.
       * Prepare save parameters.
       * Convert the variant data to the target bit depth/color space just before saving.
       * Save the variant using `cv2.imwrite` or similar.
       * Discard the in-memory variant after saving.
    5. Return a list of saved-file details: `{'path': str, 'resolution_key': str, 'format': str, 'bit_depth': int, 'dimensions': (w, h)}`.

  * **Memory Management:** Holds `source_image_data` plus one variant in memory at a time.
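The bit-depth decision and the no-upscaling variant loop can be sketched as follows (function names are placeholders; the crude stride-based decimation stands in for a proper resampling filter, and the real utility would write each variant out with `cv2.imwrite` before discarding it):

```python
import numpy as np

def target_bit_depth(rule: str, source_bit_depths: list) -> int:
    """Resolve a bit_depth_rule from file_type_defs against the source depths."""
    if rule == "force_8bit":
        return 8
    if rule == "respect_inputs":
        # Any >8-bit source promotes the output to 16-bit.
        return 16 if any(d > 8 for d in source_bit_depths) else 8
    raise ValueError(f"unknown bit_depth_rule: {rule}")

def resolution_variants(src: np.ndarray, resolutions: dict):
    """Yield (key, image) pairs for each configured resolution, never upscaling."""
    h, w = src.shape[:2]
    for key, target in resolutions.items():
        if target >= max(w, h):
            yield key, src  # no upscaling: pass the source through unchanged
        else:
            # Crude nearest-neighbour decimation, purely for the sketch.
            step_y = max(1, h // target)
            step_x = max(1, w // target)
            yield key, src[::step_y, ::step_x]
```

Because only one variant exists per loop iteration (and is discarded after saving), peak memory stays at the source image plus a single variant, as the plan requires.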

Replace the existing `IndividualMapProcessingStage` with the following sequence of smaller, focused stages, executed by the `PipelineOrchestrator` for each processing item:

**IV. Configuration Changes (`config/app_settings.json`)**

1. **`PrepareProcessingItemsStage`:**
   * **Responsibility:** Identifies and lists all items (`FileRule`, `MergeTaskDefinition`) to be processed from the main context.
   * **Output:** Updates `context.processing_items`.

1. **Add/Confirm Settings:**
   * `"INITIAL_SCALING_MODE": "POT_DOWNSCALE"` (options: "POT_DOWNSCALE", "NONE").
   * `"MERGE_DIMENSION_MISMATCH_STRATEGY": "USE_LARGEST"` (options: "USE_LARGEST", "USE_FIRST", "ERROR_SKIP").
   * Ensure `general_settings.invert_normal_map_green_channel_globally` exists (boolean).
2. **Review/Confirm Existing Settings:**
   * Ensure `IMAGE_RESOLUTIONS`, `FILE_TYPE_DEFINITIONS` (`bit_depth_rule`), `MAP_MERGE_RULES` (`output_bit_depth`, fallback values), format settings, and quality settings are comprehensive.
3. **Remove Obsolete Setting:**
   * `RESPECT_VARIANT_MAP_TYPES`.
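A small load-time check for these settings could look like this (purely illustrative; the real `Configuration` class may validate differently, and `validate_settings` is a hypothetical helper):

```python
# Hypothetical validation of the new/updated settings at config load time.
VALID_SCALING_MODES = {"POT_DOWNSCALE", "NONE"}
VALID_MISMATCH_STRATEGIES = {"USE_LARGEST", "USE_FIRST", "ERROR_SKIP"}

def validate_settings(settings: dict) -> list:
    """Return a list of problems; an empty list means the settings look sane."""
    problems = []
    if settings.get("INITIAL_SCALING_MODE") not in VALID_SCALING_MODES:
        problems.append("INITIAL_SCALING_MODE must be POT_DOWNSCALE or NONE")
    if settings.get("MERGE_DIMENSION_MISMATCH_STRATEGY") not in VALID_MISMATCH_STRATEGIES:
        problems.append("MERGE_DIMENSION_MISMATCH_STRATEGY is invalid")
    if "RESPECT_VARIANT_MAP_TYPES" in settings:
        problems.append("RESPECT_VARIANT_MAP_TYPES is obsolete and should be removed")
    return problems
```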

2. **`RegularMapProcessorStage`:** (handles `FileRule` items)
   * **Responsibility:** Loads the source image, determines the internal map type (with suffix), applies relevant transformations (Gloss-to-Rough, Normal Green Invert), and determines original metadata.
   * **Output:** A `ProcessedRegularMapData` object containing the transformed image data and metadata.

**V. Data Flow Diagram (Mermaid)**

3. **`MergedTaskProcessorStage`:** (handles `MergeTaskDefinition` items)
   * **Responsibility:** Loads input images, applies transformations to the inputs, handles fallbacks/resizing, and performs the merge operation.
   * **Output:** A `ProcessedMergedMapData` object containing the merged image data and metadata.

4. **`InitialScalingStage`:** (optional)
   * **Responsibility:** Applies the configured scaling (e.g., POT downscale) to the processed image data received from the previous stage.
   * **Output:** Scaled image data.

5. **`SaveVariantsStage`:**
   * **Responsibility:** Takes the final processed (and potentially scaled) image data and orchestrates saving variants using the `save_image_variants` utility.
   * **Output:** A list of saved-file details (`saved_files_details`).

## 4. Proposed Data Flow

* **Input/Output Objects:** Key stages (`RegularMapProcessor`, `MergedTaskProcessor`, `InitialScaling`, `SaveVariants`) will use specific input and output dataclasses for clearer interfaces.

* **Orchestrator Role:** The `PipelineOrchestrator` manages the overall flow. It calls stages, passes the necessary data (extracting image-data references and metadata from previous stage outputs to create inputs for the next), receives output objects, and integrates final results (such as saved-file details) back into the main `AssetProcessingContext`.

* **Image Data Handling:** Large image arrays (`np.ndarray`) are passed primarily via stage return values (output objects) and used as inputs to subsequent stages, managed by the orchestrator. They are not stored long-term in the main `AssetProcessingContext`.

* **Main Context:** The `AssetProcessingContext` remains for overall state (rules, paths, configuration access, final status tracking) and potentially for simpler stages with minimal side effects.

## 5. Visualization (Conceptual)
```mermaid
graph TD
    A[Start Asset Processing] --> B[File Rules Filter];
    B --> STAGE_INDIVIDUAL_MAP_PROCESSING[Individual Map Processing Stage];

    subgraph STAGE_INDIVIDUAL_MAP_PROCESSING [Individual Map Processing Stage]
        direction LR
        C1{Is it a regular map or a merged task?}
        C1 -- Regular Map --> C2["Load Source Image File into Memory (current_image_data)"];
        C1 -- Merged Task (from Map Merging Stage) --> C3["Load Inputs (Handle Missing w/ Fallbacks) & Merge in Memory (Handle Dim Mismatch) (current_image_data)"];

        C2 --> C4[current_image_data];
        C3 --> C4;

        C4 --> C4_TRANSFORM{Transformations?};
        C4_TRANSFORM -- Gloss Map? --> C4a["Invert Data (in memory), Update base_map_type to ROUGH"];
        C4_TRANSFORM -- Normal Map & Invert Config? --> C4b["Invert Green Channel (in memory)"];
        C4_TRANSFORM -- No Transformation Needed --> C4_POST_TRANSFORM;
        C4a --> C4_POST_TRANSFORM;
        C4b --> C4_POST_TRANSFORM;

        C4_POST_TRANSFORM["current_image_data (potentially transformed)"] --> C5{INITIAL_SCALING_MODE};
        C5 -- "POT_DOWNSCALE" --> C6["Perform POT Scale (in memory) -> image_to_save"];
        C5 -- "NONE" --> C7[image_to_save = current_image_data];

        C6 --> C8["Apply Color Management to image_to_save (in memory)"];
        C7 --> C8;

        C8 --> UNIFIED_SAVE_UTILITY["Call Unified Save Utility with image_to_save, final base_map_type, source bit depth info, config"];
    end

    subgraph Proposed Pipeline Stages
        Start --> Prep[PrepareProcessingItemsStage]
        Prep --> ItemLoop{Loop per Item}
        ItemLoop -- FileRule --> RegProc[RegularMapProcessorStage]
        ItemLoop -- MergeTask --> MergeProc[MergedTaskProcessorStage]
        RegProc --> Scale(InitialScalingStage)
        MergeProc --> Scale
        Scale --> Save[SaveVariantsStage]
        Save --> UpdateContext[Update Main Context w/ Results]
        UpdateContext --> ItemLoop
    end

    UNIFIED_SAVE_UTILITY --> H[Update context.processed_maps_details with list of saved files & notes];
    H --> STAGE_METADATA_SAVE[Metadata Finalization & Save Stage];

    STAGE_MAP_MERGING[Map Merging Stage] --> N{Identify Merge Rules};
    N --> O["Create Merged Image Tasks (incl. inputs, config, source bit depths)"];
    O --> STAGE_INDIVIDUAL_MAP_PROCESSING; %% Feed tasks

    A --> STAGE_OTHER_INITIAL[Other Initial Stages]
    STAGE_OTHER_INITIAL --> STAGE_MAP_MERGING;

    STAGE_METADATA_SAVE --> Z[End Asset Processing];

    subgraph UNIFIED_SAVE_UTILITY_DETAILS ["Unified Save Utility (processing.utils.image_saving_utils)"]
        direction TB
        INPUTS[Input: in-memory image_to_save, final base_map_type, source_bit_depth_info, config_params, tokens, out_base_dir]
        INPUTS --> CONFIG_LOAD[1. Use Provided Config Params]
        CONFIG_LOAD --> DETERMINE_BIT_DEPTH["2. Determine Target Bit Depth (using rule & source_bit_depth_info)"]
        DETERMINE_BIT_DEPTH --> DETERMINE_FORMAT[3. Determine Output Format]
        DETERMINE_FORMAT --> LOOP_VARIANTS[4. For each Resolution:]
        LOOP_VARIANTS --> RESIZE_VARIANT["4a. Resize image_to_save to Variant (in memory)"]
        RESIZE_VARIANT --> PREPARE_SAVE[4b. Prepare Filename & Save Params]
        PREPARE_SAVE --> SAVE_IMAGE[4c. Convert & Save Variant to Disk]
        SAVE_IMAGE --> LOOP_VARIANTS;
        LOOP_VARIANTS --> OUTPUT_LIST[5. Return List of Saved File Details]
    end

    style STAGE_INDIVIDUAL_MAP_PROCESSING fill:#f9f,stroke:#333,stroke-width:2px;
    style STAGE_MAP_MERGING fill:#f9f,stroke:#333,stroke-width:2px;
    style UNIFIED_SAVE_UTILITY fill:#ccf,stroke:#333,stroke-width:2px;
    style UNIFIED_SAVE_UTILITY_DETAILS fill:#ccf,stroke:#333,stroke-width:1px,stroke-dasharray:5 5;
    style O fill:lightgrey,stroke:#333,stroke-width:2px;
    style C4_POST_TRANSFORM fill:#e6ffe6,stroke:#333,stroke-width:1px;
```

## 6. Benefits

* Improved Readability & Understanding.

* Enhanced Maintainability & Reduced Risk.

* Better Testability.

* Clearer Dependencies.

@ -268,7 +268,7 @@
    "OUTPUT_FORMAT_8BIT": "png",
    "MAP_MERGE_RULES": [
        {
            "output_map_type": "NRMRGH",
            "output_map_type": "MAP_NRMRGH",
            "inputs": {
                "R": "MAP_NRM",
                "G": "MAP_NRM",

@ -5,6 +5,82 @@ from typing import Dict, List, Optional
from rule_structure import AssetRule, FileRule, SourceRule
from configuration import Configuration

# Imports needed for new dataclasses
import numpy as np
from typing import Any, Tuple, Union

# --- Stage Input/Output Dataclasses ---

# Item types for PrepareProcessingItemsStage output
@dataclass
class MergeTaskDefinition:
    """Represents a merge task identified by PrepareProcessingItemsStage."""
    task_data: Dict  # The original task data from context.merged_image_tasks
    task_key: str  # e.g., "merged_task_0"

# Output for RegularMapProcessorStage
@dataclass
class ProcessedRegularMapData:
    processed_image_data: np.ndarray
    final_internal_map_type: str
    source_file_path: Path
    original_bit_depth: Optional[int]
    original_dimensions: Optional[Tuple[int, int]]  # (width, height)
    transformations_applied: List[str]
    status: str = "Processed"
    error_message: Optional[str] = None

# Output for MergedTaskProcessorStage
@dataclass
class ProcessedMergedMapData:
    merged_image_data: np.ndarray
    output_map_type: str  # Internal type
    source_bit_depths: List[int]
    final_dimensions: Optional[Tuple[int, int]]  # (width, height)
    transformations_applied_to_inputs: Dict[str, List[str]]  # Map type -> list of transforms
    status: str = "Processed"
    error_message: Optional[str] = None

# Input for InitialScalingStage
@dataclass
class InitialScalingInput:
    image_data: np.ndarray
    original_dimensions: Optional[Tuple[int, int]]  # (width, height)
    # Configuration needed
    initial_scaling_mode: str

# Output for InitialScalingStage
@dataclass
class InitialScalingOutput:
    scaled_image_data: np.ndarray
    scaling_applied: bool
    final_dimensions: Tuple[int, int]  # (width, height)

# Input for SaveVariantsStage
@dataclass
class SaveVariantsInput:
    image_data: np.ndarray  # Final data (potentially scaled)
    internal_map_type: str  # Final internal type (e.g., MAP_ROUGH, MAP_COL-1)
    source_bit_depth_info: List[int]
    # Configuration needed
    output_filename_pattern_tokens: Dict[str, Any]
    image_resolutions: List[int]
    file_type_defs: Dict[str, Dict]
    output_format_8bit: str
    output_format_16bit_primary: str
    output_format_16bit_fallback: str
    png_compression_level: int
    jpg_quality: int
    output_filename_pattern: str

# Output for SaveVariantsStage
@dataclass
class SaveVariantsOutput:
    saved_files_details: List[Dict]
    status: str = "Processed"
    error_message: Optional[str] = None

# Add a field to AssetProcessingContext for the prepared items
@dataclass
class AssetProcessingContext:
    source_rule: SourceRule

@ -14,11 +90,16 @@ class AssetProcessingContext:
    output_base_path: Path
    effective_supplier: Optional[str]
    asset_metadata: Dict
    processed_maps_details: Dict[str, Dict[str, Dict]]
    merged_maps_details: Dict[str, Dict[str, Dict]]
    processed_maps_details: Dict[str, Dict]  # Will store final results per item_key
    merged_maps_details: Dict[str, Dict]  # This might become redundant? Keep for now.
    files_to_process: List[FileRule]
    loaded_data_cache: Dict
    config_obj: Configuration
    status_flags: Dict
    incrementing_value: Optional[str]
    sha5_value: Optional[str]
    sha5_value: Optional[str]  # Keep existing fields
    # New field for prepared items
    processing_items: Optional[List[Union[FileRule, MergeTaskDefinition]]] = None
    # Temporary storage during pipeline execution (managed by orchestrator)
    # Keys could be FileRule object hash/id or MergeTaskDefinition task_key
    intermediate_results: Optional[Dict[Any, Union[ProcessedRegularMapData, ProcessedMergedMapData, InitialScalingOutput]]] = None
@ -1,126 +1,405 @@
from typing import List, Dict, Optional
from pathlib import Path
# --- Imports ---
import logging
import shutil
import tempfile
from pathlib import Path
from typing import List, Dict, Optional, Any, Union  # Added Any, Union

import numpy as np  # Added numpy

from configuration import Configuration
from rule_structure import SourceRule, AssetRule
from .asset_context import AssetProcessingContext
from rule_structure import SourceRule, AssetRule, FileRule  # Added FileRule

# Import new context classes and stages
from .asset_context import (
    AssetProcessingContext,
    MergeTaskDefinition,
    ProcessedRegularMapData,
    ProcessedMergedMapData,
    InitialScalingInput,
    InitialScalingOutput,
    SaveVariantsInput,
    SaveVariantsOutput,
)
from .stages.base_stage import ProcessingStage
# Import the new stages we created
from .stages.prepare_processing_items import PrepareProcessingItemsStage
from .stages.regular_map_processor import RegularMapProcessorStage
from .stages.merged_task_processor import MergedTaskProcessorStage
from .stages.initial_scaling import InitialScalingStage
from .stages.save_variants import SaveVariantsStage

log = logging.getLogger(__name__)
# --- PipelineOrchestrator Class ---

class PipelineOrchestrator:
    """
    Orchestrates the processing of assets based on source rules and a series of processing stages.
    Manages the overall flow, including the core item processing sequence.
    """

    def __init__(self, config_obj: Configuration, stages: List[ProcessingStage]):
    def __init__(self, config_obj: Configuration,
                 pre_item_stages: List[ProcessingStage],
                 post_item_stages: List[ProcessingStage]):
        """
        Initializes the PipelineOrchestrator.

        Args:
            config_obj: The main configuration object.
            stages: A list of processing stages to be executed in order.
            pre_item_stages: Stages to run before the core item processing loop.
            post_item_stages: Stages to run after the core item processing loop.
        """
        self.config_obj: Configuration = config_obj
        self.stages: List[ProcessingStage] = stages
        self.pre_item_stages: List[ProcessingStage] = pre_item_stages
        self.post_item_stages: List[ProcessingStage] = post_item_stages
        # Instantiate the core item processing stages internally
        self._prepare_stage = PrepareProcessingItemsStage()
        self._regular_processor_stage = RegularMapProcessorStage()
        self._merged_processor_stage = MergedTaskProcessorStage()
        self._scaling_stage = InitialScalingStage()
        self._save_stage = SaveVariantsStage()

    def _execute_specific_stages(
        self, context: AssetProcessingContext,
        stages_to_run: List[ProcessingStage],
        stage_group_name: str,
        stop_on_skip: bool = True
    ) -> AssetProcessingContext:
        """Executes a specific list of stages."""
        asset_name = context.asset_rule.asset_name if context.asset_rule else "Unknown"
        log.debug(f"Asset '{asset_name}': Executing {stage_group_name} stages...")
        for stage in stages_to_run:
            stage_name = stage.__class__.__name__
            log.debug(f"Asset '{asset_name}': Executing {stage_group_name} stage: {stage_name}")
            try:
                # Check if stage expects context directly or specific input
                # For now, assume outer stages take context directly
                # This might need refinement if outer stages also adopt the Input/Output pattern
                context = stage.execute(context)
            except Exception as e:
                log.error(f"Asset '{asset_name}': Error during outer stage '{stage_name}': {e}", exc_info=True)
                context.status_flags["asset_failed"] = True
                context.status_flags["asset_failed_stage"] = stage_name
                context.status_flags["asset_failed_reason"] = str(e)
                # Update overall metadata immediately on outer stage failure
                context.asset_metadata["status"] = f"Failed: Error in stage {stage_name}"
                context.asset_metadata["error_message"] = str(e)
                break  # Stop processing outer stages for this asset on error

            if stop_on_skip and context.status_flags.get("skip_asset"):
                log.info(f"Asset '{asset_name}': Skipped by outer stage '{stage_name}'. Reason: {context.status_flags.get('skip_reason', 'N/A')}")
                break  # Skip remaining outer stages for this asset
        return context

    def process_source_rule(
        self,
        source_rule: SourceRule,
        workspace_path: Path,
        output_base_path: Path,
        overwrite: bool,  # Not used in this initial implementation, but part of the signature
        overwrite: bool,
        incrementing_value: Optional[str],
        sha5_value: Optional[str]  # Corrected from sha5_value to sha256_value as per typical usage, assuming typo
        sha5_value: Optional[str]  # Keep param name consistent for now
    ) -> Dict[str, List[str]]:
        """
        Processes a single source rule, iterating through its asset rules and applying all stages.

        Args:
            source_rule: The source rule to process.
            workspace_path: The base path of the workspace.
            output_base_path: The base path for output files.
            overwrite: Whether to overwrite existing files (not fully implemented yet).
            incrementing_value: An optional incrementing value for versioning or naming.
            sha5_value: An optional SHA5 hash value for the asset (assuming typo, likely sha256).

        Returns:
            A dictionary summarizing the processing status of assets.
        Processes a single source rule, applying pre-processing stages,
        the core item processing loop (Prepare, Process, Scale, Save),
        and post-processing stages.
        """
        overall_status: Dict[str, List[str]] = {
            "processed": [],
            "skipped": [],
            "failed": [],
        }
        engine_temp_dir_path: Optional[Path] = None  # Initialize to None
        engine_temp_dir_path: Optional[Path] = None

        try:
            # Create a temporary directory for this processing run if needed by any stage
            # This temp dir is for the entire source_rule processing, not per asset.
            # Individual stages might create their own sub-temp dirs if necessary.
            # --- Setup Temporary Directory ---
            temp_dir_path_str = tempfile.mkdtemp(prefix=self.config_obj.temp_dir_prefix)
            engine_temp_dir_path = Path(temp_dir_path_str)
            log.debug(f"PipelineOrchestrator created temporary directory: {engine_temp_dir_path} using prefix '{self.config_obj.temp_dir_prefix}'")

            log.debug(f"PipelineOrchestrator created temporary directory: {engine_temp_dir_path}")

            # --- Process Each Asset Rule ---
            for asset_rule in source_rule.assets:
                log.debug(f"Orchestrator: Processing asset '{asset_rule.asset_name}'")
                asset_name = asset_rule.asset_name
                log.info(f"Orchestrator: Processing asset '{asset_name}'")

                # --- Initialize Asset Context ---
                context = AssetProcessingContext(
                    source_rule=source_rule,
                    asset_rule=asset_rule,
                    workspace_path=workspace_path,  # This is the path to the source files (e.g. extracted archive)
                    engine_temp_dir=engine_temp_dir_path,  # Pass the orchestrator's temp dir
                    workspace_path=workspace_path,
                    engine_temp_dir=engine_temp_dir_path,
                    output_base_path=output_base_path,
                    effective_supplier=None,  # Will be set by SupplierDeterminationStage
                    asset_metadata={},  # Will be populated by stages
                    processed_maps_details={},  # Will be populated by stages
                    merged_maps_details={},  # Will be populated by stages
                    files_to_process=[],  # Will be populated by FileRuleFilterStage
                    loaded_data_cache={},  # For image loading cache within this asset's processing
                    effective_supplier=None,
                    asset_metadata={},
                    processed_maps_details={},  # Final results per item
                    merged_maps_details={},  # Keep for potential backward compat or other uses?
                    files_to_process=[],  # Populated by FileRuleFilterStage (assumed in outer_stages)
                    loaded_data_cache={},
                    config_obj=self.config_obj,
                    status_flags={"skip_asset": False, "asset_failed": False},  # Initialize common flags
                    status_flags={"skip_asset": False, "asset_failed": False},
                    incrementing_value=incrementing_value,
                    sha5_value=sha5_value
                    sha5_value=sha5_value,
                    processing_items=[],  # Initialize new fields
                    intermediate_results={}
                )
|
||||
|
||||
for stage_idx, stage in enumerate(self.stages):
|
||||
log.debug(f"Asset '{asset_rule.asset_name}': Executing stage {stage_idx + 1}/{len(self.stages)}: {stage.__class__.__name__}")
|
||||
try:
|
||||
context = stage.execute(context)
|
||||
except Exception as e:
|
||||
log.error(f"Asset '{asset_rule.asset_name}': Error during stage '{stage.__class__.__name__}': {e}", exc_info=True)
|
||||
context.status_flags["asset_failed"] = True
|
||||
context.asset_metadata["status"] = f"Failed: Error in stage {stage.__class__.__name__}"
|
||||
context.asset_metadata["error_message"] = str(e)
|
||||
break # Stop processing stages for this asset on error
|
||||
# --- Execute Pre-Item-Processing Outer Stages ---
|
||||
# (e.g., MetadataInit, SupplierDet, FileRuleFilter, GlossToRough, NormalInvert)
|
||||
# Identify which outer stages run before the item loop
|
||||
# This requires knowing the intended order. Assume all run before for now.
|
||||
context = self._execute_specific_stages(context, self.pre_item_stages, "pre-item", stop_on_skip=True)
|
||||
|
||||
# Check if asset should be skipped or failed after pre-processing
|
||||
if context.status_flags.get("asset_failed"):
|
||||
log.error(f"Asset '{asset_name}': Failed during pre-processing stage '{context.status_flags.get('asset_failed_stage', 'Unknown')}'. Skipping item processing.")
|
||||
overall_status["failed"].append(f"{asset_name} (Failed in {context.status_flags.get('asset_failed_stage', 'Pre-Processing')})")
|
||||
continue # Move to the next asset rule
|
||||
|
||||
if context.status_flags.get("skip_asset"):
|
||||
log.info(f"Asset '{asset_rule.asset_name}': Skipped by stage '{stage.__class__.__name__}'. Reason: {context.status_flags.get('skip_reason', 'N/A')}")
|
||||
break # Skip remaining stages for this asset
|
||||
log.info(f"Asset '{asset_name}': Skipped during pre-processing. Skipping item processing.")
|
||||
overall_status["skipped"].append(asset_name)
|
||||
continue # Move to the next asset rule
|
||||
|
||||
# Refined status collection
|
||||
if context.status_flags.get('skip_asset'):
|
||||
overall_status["skipped"].append(asset_rule.asset_name)
|
||||
elif context.status_flags.get('asset_failed') or str(context.asset_metadata.get('status', '')).startswith("Failed"):
|
||||
overall_status["failed"].append(asset_rule.asset_name)
|
||||
elif context.asset_metadata.get('status') == "Processed":
|
||||
overall_status["processed"].append(asset_rule.asset_name)
|
||||
else: # Default or unknown state
|
||||
log.warning(f"Asset '{asset_rule.asset_name}': Unknown status after pipeline execution. Metadata status: '{context.asset_metadata.get('status')}'. Marking as failed.")
|
||||
overall_status["failed"].append(f"{asset_rule.asset_name} (Unknown Status: {context.asset_metadata.get('status')})")
|
||||
log.debug(f"Asset '{asset_rule.asset_name}' final status: {context.asset_metadata.get('status', 'N/A')}, Flags: {context.status_flags}")
|
||||
# --- Prepare Processing Items ---
log.debug(f"Asset '{asset_name}': Preparing processing items...")
try:
# Prepare stage modifies context directly
context = self._prepare_stage.execute(context)
except Exception as e:
log.error(f"Asset '{asset_name}': Error during PrepareProcessingItemsStage: {e}", exc_info=True)
context.status_flags["asset_failed"] = True
context.status_flags["asset_failed_stage"] = "PrepareProcessingItemsStage"
context.status_flags["asset_failed_reason"] = str(e)
overall_status["failed"].append(f"{asset_name} (Failed in Prepare Items)")
continue # Move to next asset
if context.status_flags.get('prepare_items_failed'):
log.error(f"Asset '{asset_name}': Failed during item preparation. Reason: {context.status_flags.get('prepare_items_failed_reason', 'Unknown')}. Skipping item processing loop.")
overall_status["failed"].append(f"{asset_name} (Failed Prepare Items: {context.status_flags.get('prepare_items_failed_reason', 'Unknown')})")
continue # Move to next asset

if not context.processing_items:
log.info(f"Asset '{asset_name}': No items to process after preparation stage.")
# Status will be determined at the end
# --- Core Item Processing Loop ---
log.info(f"Asset '{asset_name}': Starting core item processing loop for {len(context.processing_items)} items...")
asset_had_item_errors = False
for item_index, item in enumerate(context.processing_items):
item_key: Any = None # Key for storing results (FileRule object or task_key string)
item_log_prefix = f"Asset '{asset_name}', Item {item_index + 1}/{len(context.processing_items)}"
processed_data: Optional[Union[ProcessedRegularMapData, ProcessedMergedMapData]] = None
scaled_data_output: Optional[InitialScalingOutput] = None # Store output object
saved_data: Optional[SaveVariantsOutput] = None
item_status = "Failed" # Default item status
current_image_data: Optional[np.ndarray] = None # Track current image data ref
try:
# 1. Process (Load/Merge + Transform)
if isinstance(item, FileRule):
item_key = item.file_path # Use file_path string as key
log.debug(f"{item_log_prefix}: Processing FileRule '{item.file_path}'...")
processed_data = self._regular_processor_stage.execute(context, item)
elif isinstance(item, MergeTaskDefinition):
item_key = item.task_key # Use task_key string as key
log.debug(f"{item_log_prefix}: Processing MergeTask '{item_key}'...")
processed_data = self._merged_processor_stage.execute(context, item)
else:
log.warning(f"{item_log_prefix}: Unknown item type '{type(item)}'. Skipping.")
item_key = f"unknown_item_{item_index}"
context.processed_maps_details[item_key] = {"status": "Skipped", "notes": f"Unknown item type {type(item)}"}
asset_had_item_errors = True
continue # Next item
# Check for processing failure
if not processed_data or processed_data.status != "Processed":
error_msg = processed_data.error_message if processed_data else "Processor returned None"
log.error(f"{item_log_prefix}: Failed during processing stage. Error: {error_msg}")
context.processed_maps_details[item_key] = {"status": "Failed", "notes": f"Processing Error: {error_msg}", "stage": processed_data.__class__.__name__ if processed_data else "UnknownProcessor"}
asset_had_item_errors = True
continue # Next item

# Store intermediate result & get current image data
context.intermediate_results[item_key] = processed_data
current_image_data = processed_data.processed_image_data if isinstance(processed_data, ProcessedRegularMapData) else processed_data.merged_image_data
current_dimensions = processed_data.original_dimensions if isinstance(processed_data, ProcessedRegularMapData) else processed_data.final_dimensions
# 2. Scale (Optional)
scaling_mode = getattr(context.config_obj, "INITIAL_SCALING_MODE", "NONE")
if scaling_mode != "NONE" and current_image_data is not None and current_image_data.size > 0:
log.debug(f"{item_log_prefix}: Applying initial scaling (Mode: {scaling_mode})...")
scale_input = InitialScalingInput(
image_data=current_image_data,
original_dimensions=current_dimensions, # Pass original/merged dims
initial_scaling_mode=scaling_mode
)
scaled_data_output = self._scaling_stage.execute(scale_input)
# Update intermediate result and current image data reference
context.intermediate_results[item_key] = scaled_data_output # Overwrite previous intermediate
current_image_data = scaled_data_output.scaled_image_data # Use scaled data for saving
log.debug(f"{item_log_prefix}: Scaling applied: {scaled_data_output.scaling_applied}. New Dims: {scaled_data_output.final_dimensions}")
else:
log.debug(f"{item_log_prefix}: Initial scaling skipped (Mode: NONE or empty image).")
# Create dummy output if scaling skipped, using current dims
final_dims = current_dimensions if current_dimensions else (current_image_data.shape[1], current_image_data.shape[0]) if current_image_data is not None else (0,0)
scaled_data_output = InitialScalingOutput(scaled_image_data=current_image_data, scaling_applied=False, final_dimensions=final_dims)
# 3. Save Variants
if current_image_data is None or current_image_data.size == 0:
log.warning(f"{item_log_prefix}: Skipping save stage because image data is empty.")
context.processed_maps_details[item_key] = {"status": "Skipped", "notes": "No image data to save", "stage": "SaveVariantsStage"}
# Don't mark as asset error, just skip this item's saving
continue # Next item

log.debug(f"{item_log_prefix}: Saving variants...")
# Prepare input for save stage
internal_map_type = processed_data.final_internal_map_type if isinstance(processed_data, ProcessedRegularMapData) else processed_data.output_map_type
# Determine source bit depth info for the save utility
if isinstance(processed_data, ProcessedRegularMapData) and processed_data.original_bit_depth is not None:
source_bit_depth = [processed_data.original_bit_depth]
elif isinstance(processed_data, ProcessedMergedMapData):
source_bit_depth = processed_data.source_bit_depths
else:
source_bit_depth = [8] # Default bit depth if unknown

# Construct filename tokens (ensure temp dir is used)
output_filename_tokens = {
'asset_name': asset_name,
'output_base_directory': context.engine_temp_dir, # Save variants to temp dir
# Add other tokens from context/config as needed by the pattern
'supplier': context.effective_supplier or 'UnknownSupplier',
}
save_input = SaveVariantsInput(
image_data=current_image_data, # Use potentially scaled data
internal_map_type=internal_map_type,
source_bit_depth_info=source_bit_depth,
output_filename_pattern_tokens=output_filename_tokens,
# Pass config values needed by save stage
image_resolutions=context.config_obj.image_resolutions,
file_type_defs=getattr(context.config_obj, "FILE_TYPE_DEFINITIONS", {}),
output_format_8bit=context.config_obj.get_8bit_output_format(),
output_format_16bit_primary=context.config_obj.get_16bit_output_formats()[0],
output_format_16bit_fallback=context.config_obj.get_16bit_output_formats()[1],
png_compression_level=context.config_obj.png_compression_level,
jpg_quality=context.config_obj.jpg_quality,
output_filename_pattern=context.config_obj.output_filename_pattern,
)
saved_data = self._save_stage.execute(save_input)
# Check save status and finalize item result
if saved_data and saved_data.status.startswith("Processed"):
item_status = saved_data.status # e.g., "Processed" or "Processed (No Output)"
log.info(f"{item_log_prefix}: Item successfully processed and saved. Status: {item_status}")
# Populate final details for this item
final_details = {
"status": item_status,
"saved_files_info": saved_data.saved_files_details, # List of dicts from save util
"internal_map_type": internal_map_type,
"original_dimensions": processed_data.original_dimensions if isinstance(processed_data, ProcessedRegularMapData) else None,
"final_dimensions": scaled_data_output.final_dimensions if scaled_data_output else current_dimensions,
"transformations": processed_data.transformations_applied if isinstance(processed_data, ProcessedRegularMapData) else processed_data.transformations_applied_to_inputs,
# Add source file if regular map
"source_file": str(processed_data.source_file_path) if isinstance(processed_data, ProcessedRegularMapData) else None,
}
context.processed_maps_details[item_key] = final_details
else:
error_msg = saved_data.error_message if saved_data else "Save stage returned None"
log.error(f"{item_log_prefix}: Failed during save stage. Error: {error_msg}")
context.processed_maps_details[item_key] = {"status": "Failed", "notes": f"Save Error: {error_msg}", "stage": "SaveVariantsStage"}
asset_had_item_errors = True
item_status = "Failed" # Ensure item status reflects failure
except Exception as e:
log.exception(f"{item_log_prefix}: Unhandled exception during item processing loop: {e}")
# Ensure details are recorded even on unhandled exception
if item_key is not None:
context.processed_maps_details[item_key] = {"status": "Failed", "notes": f"Unhandled Loop Error: {e}", "stage": "OrchestratorLoop"}
else:
log.error(f"Asset '{asset_name}': Unhandled exception in item loop before item key was set.")
asset_had_item_errors = True
item_status = "Failed"
# Continue the loop so the remaining items still get processed.
log.info(f"Asset '{asset_name}': Finished core item processing loop.")

# --- Execute Post-Item-Processing Outer Stages ---
# (e.g., OutputOrganization, MetadataFinalizationSave)
# Identify which outer stages run after the item loop
# This needs better handling based on stage purpose. Assume none run after for now.
if not context.status_flags.get("asset_failed"):
log.info("ORCHESTRATOR: Executing post-item-processing outer stages for asset '%s'", asset_name)
context = self._execute_specific_stages(context, self.post_item_stages, "post-item", stop_on_skip=False)
# --- Final Asset Status Determination ---
final_asset_status = "Unknown"
fail_reason = ""
if context.status_flags.get("asset_failed"):
final_asset_status = "Failed"
fail_reason = f"(Failed in {context.status_flags.get('asset_failed_stage', 'Unknown Stage')}: {context.status_flags.get('asset_failed_reason', 'Unknown Reason')})"
elif context.status_flags.get("skip_asset"):
final_asset_status = "Skipped"
fail_reason = f"(Skipped: {context.status_flags.get('skip_reason', 'Unknown Reason')})"
elif asset_had_item_errors:
final_asset_status = "Failed"
fail_reason = "(One or more items failed)"
elif not context.processing_items:
# No items prepared, no errors -> treated as skipped
final_asset_status = "Skipped" # Or "Processed (No Items)"
fail_reason = "(No items to process)"
elif not context.processed_maps_details and context.processing_items:
# Items were prepared, but none resulted in a processed_maps_details entry
final_asset_status = "Skipped"
fail_reason = "(All processing items skipped or failed internally)"
elif context.processed_maps_details:
# Check if all items in processed_maps_details are actually processed successfully
all_processed_ok = all(
str(details.get("status", "")).startswith("Processed")
for details in context.processed_maps_details.values()
)
some_processed_ok = any(
str(details.get("status", "")).startswith("Processed")
for details in context.processed_maps_details.values()
)
if all_processed_ok:
final_asset_status = "Processed"
elif some_processed_ok:
# Partial success is treated as Failed for the overall status
final_asset_status = "Failed"
fail_reason = "(Some items failed)"
else: # No items processed successfully
final_asset_status = "Failed"
fail_reason = "(All items failed)"
else:
# Should not happen if processing_items existed
final_asset_status = "Failed"
fail_reason = "(Unknown state after item processing)"
# Update overall status list
if final_asset_status == "Processed":
overall_status["processed"].append(asset_name)
elif final_asset_status == "Skipped":
overall_status["skipped"].append(f"{asset_name} {fail_reason}")
else: # Failed or Unknown
overall_status["failed"].append(f"{asset_name} {fail_reason}")

log.info(f"Asset '{asset_name}' final status: {final_asset_status} {fail_reason}")
# Clean up intermediate results for the asset to save memory
context.intermediate_results = {}
except Exception as e:
log.error(f"PipelineOrchestrator.process_source_rule failed critically: {e}", exc_info=True)
# Mark all assets from this source rule that weren't finished as failed
processed_or_skipped_or_failed = set(overall_status["processed"]) | \
set(name.split(" ")[0] for name in overall_status["skipped"]) | \
set(name.split(" ")[0] for name in overall_status["failed"])
for asset_rule in source_rule.assets:
if asset_rule.asset_name not in processed_or_skipped_or_failed:
overall_status["failed"].append(f"{asset_rule.asset_name} (Orchestrator Error: {e})")
finally:
# --- Cleanup Temporary Directory ---
if engine_temp_dir_path and engine_temp_dir_path.exists():
try:
log.debug(f"PipelineOrchestrator cleaning up temporary directory: {engine_temp_dir_path}")
@ -1,695 +0,0 @@
import uuid
import dataclasses
import re
import os
import logging
from pathlib import Path
from typing import Optional, Tuple, Dict, List, Any, Union

import cv2
import numpy as np

from .base_stage import ProcessingStage
from ..asset_context import AssetProcessingContext
from rule_structure import FileRule
from utils.path_utils import sanitize_filename
from ...utils import image_processing_utils as ipu # Includes get_image_bit_depth implicitly now
from ...utils.image_saving_utils import save_image_variants # Added import

logger = logging.getLogger(__name__)
# Helper function to get filename-friendly map type (adapted from old logic)
def get_filename_friendly_map_type(internal_map_type: str, file_type_definitions: Optional[Dict[str, Dict]]) -> str:
"""Derives a filename-friendly map type from the internal map type."""
filename_friendly_map_type = internal_map_type # Fallback
if not file_type_definitions or not isinstance(file_type_definitions, dict):
logger.warning(f"Filename-friendly lookup: FILE_TYPE_DEFINITIONS not available or invalid. Falling back to internal type: {internal_map_type}")
return filename_friendly_map_type
base_map_key_val = None
suffix_part = ""
sorted_known_base_keys = sorted(list(file_type_definitions.keys()), key=len, reverse=True)

for known_key in sorted_known_base_keys:
if internal_map_type.startswith(known_key):
base_map_key_val = known_key
suffix_part = internal_map_type[len(known_key):]
break
if base_map_key_val:
definition = file_type_definitions.get(base_map_key_val)
if definition and isinstance(definition, dict):
standard_type_alias = definition.get("standard_type")
if standard_type_alias and isinstance(standard_type_alias, str) and standard_type_alias.strip():
filename_friendly_map_type = standard_type_alias.strip() + suffix_part
logger.debug(f"Filename-friendly lookup: Transformed '{internal_map_type}' -> '{filename_friendly_map_type}'")
else:
logger.warning(f"Filename-friendly lookup: Standard type alias for '{base_map_key_val}' is missing or invalid. Falling back.")
else:
logger.warning(f"Filename-friendly lookup: No valid definition for '{base_map_key_val}'. Falling back.")
else:
logger.warning(f"Filename-friendly lookup: Could not parse base key from '{internal_map_type}'. Falling back.")

return filename_friendly_map_type
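The helper above does a longest-prefix match against the configured base keys so that, for example, a longer key wins over a shorter key it happens to start with. A minimal standalone sketch of that lookup; the `FILE_TYPE_DEFINITIONS` entries here are hypothetical, not the real config:

```python
# Hypothetical FILE_TYPE_DEFINITIONS entries; real keys come from the config.
file_type_definitions = {
    "MAP_NRM": {"standard_type": "Normal"},
    "MAP_NRMRGH": {"standard_type": "NormalRoughness"},
}

def lookup(internal_map_type: str) -> str:
    # Longest keys first so "MAP_NRMRGH" wins over its prefix "MAP_NRM".
    for key in sorted(file_type_definitions, key=len, reverse=True):
        if internal_map_type.startswith(key):
            alias = file_type_definitions[key].get("standard_type", key)
            # Keep any suffix (e.g. a variant number) after the matched base key.
            return alias + internal_map_type[len(key):]
    return internal_map_type  # Fallback: return the internal type unchanged

print(lookup("MAP_NRMRGH_2"))  # NormalRoughness_2
print(lookup("MAP_UNKNOWN"))   # MAP_UNKNOWN
```

Sorting keys by descending length is what makes the match unambiguous when one base key is a prefix of another.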
class IndividualMapProcessingStage(ProcessingStage):
"""
Processes individual texture maps and merged map tasks.
This stage loads source images (or merges inputs for tasks), performs
in-memory transformations (Gloss-to-Rough, Normal Green Invert, optional scaling),
and passes the result to the UnifiedSaveUtility for final output generation.
It updates the AssetProcessingContext with detailed results.
"""
def _apply_in_memory_transformations(
self,
image_data: np.ndarray,
processing_map_type: str,
invert_normal_green: bool,
file_type_definitions: Dict[str, Dict],
log_prefix: str # e.g., "Asset 'X', Key Y, Proc. Tag Z"
) -> Tuple[np.ndarray, str, List[str]]:
"""
Applies in-memory transformations (Gloss-to-Rough, Normal Green Invert).

Returns:
Tuple containing:
- Potentially transformed image data.
- Potentially updated processing_map_type (e.g., MAP_GLOSS -> MAP_ROUGH).
- List of strings describing applied transformations.
"""
transformation_notes = []
current_image_data = image_data # Start with original data
updated_processing_map_type = processing_map_type # Start with original type
# Gloss-to-Rough
if processing_map_type.startswith("MAP_GLOSS"):
logger.info(f"{log_prefix}: Applying Gloss-to-Rough conversion.")
inversion_succeeded = False
# Replicate inversion logic from GlossToRoughConversionStage
if np.issubdtype(current_image_data.dtype, np.floating):
current_image_data = 1.0 - current_image_data
current_image_data = np.clip(current_image_data, 0.0, 1.0)
logger.debug(f"{log_prefix}: Inverted float image data for Gloss->Rough.")
inversion_succeeded = True
elif np.issubdtype(current_image_data.dtype, np.integer):
max_val = np.iinfo(current_image_data.dtype).max
current_image_data = max_val - current_image_data
logger.debug(f"{log_prefix}: Inverted integer image data (max_val: {max_val}) for Gloss->Rough.")
inversion_succeeded = True
else:
logger.error(f"{log_prefix}: Unsupported image data type {current_image_data.dtype} for GLOSS map. Cannot invert.")
transformation_notes.append("Gloss-to-Rough FAILED (unsupported dtype)")

# Update type and notes based on success flag
if inversion_succeeded:
updated_processing_map_type = processing_map_type.replace("GLOSS", "ROUGH")
logger.info(f"{log_prefix}: Map type updated: '{processing_map_type}' -> '{updated_processing_map_type}'")
transformation_notes.append("Gloss-to-Rough applied")

# Normal Green Invert
# Use internal 'MAP_NRM' type for check
if processing_map_type == "MAP_NRM" and invert_normal_green:
logger.info(f"{log_prefix}: Applying Normal Map Green Channel Inversion (Global Setting).")
current_image_data = ipu.invert_normal_map_green_channel(current_image_data)
transformation_notes.append("Normal Green Inverted (Global)")

return current_image_data, updated_processing_map_type, transformation_notes
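As a sanity check, the Gloss-to-Rough inversion in the method above amounts to `1.0 - v` for float data (clipped to [0, 1]) and `max_val - v` for integer data. A minimal standalone sketch of just that inversion:

```python
import numpy as np

def gloss_to_rough(img: np.ndarray) -> np.ndarray:
    # Float images are assumed to be in [0, 1]; invert and clip.
    if np.issubdtype(img.dtype, np.floating):
        return np.clip(1.0 - img, 0.0, 1.0)
    # Integer images invert against the dtype's maximum
    # (255 for uint8, 65535 for uint16), preserving the dtype.
    if np.issubdtype(img.dtype, np.integer):
        return np.iinfo(img.dtype).max - img
    raise TypeError(f"Unsupported dtype for gloss inversion: {img.dtype}")

print(gloss_to_rough(np.array([0, 128, 255], dtype=np.uint8)))  # [255 127   0]
print(gloss_to_rough(np.array([0.0, 0.25, 1.0], dtype=np.float32)))
```

Inverting against `np.iinfo(dtype).max` rather than a hard-coded 255 is what keeps the conversion correct for 16-bit sources.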
def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
"""
Executes the individual map and merged task processing logic.
"""
asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
if context.status_flags.get('skip_asset', False):
logger.info(f"Asset '{asset_name_for_log}': Skipping individual map processing due to skip_asset flag.")
return context

if not hasattr(context, 'processed_maps_details') or context.processed_maps_details is None:
context.processed_maps_details = {}
logger.debug(f"Asset '{asset_name_for_log}': Initialized processed_maps_details.")
# --- Configuration Fetching ---
config = context.config_obj
file_type_definitions = getattr(config, "FILE_TYPE_DEFINITIONS", {})
respect_variant_map_types = getattr(config, "respect_variant_map_types", []) # Needed for suffixing logic
initial_scaling_mode = getattr(config, "INITIAL_SCALING_MODE", "NONE")
merge_dimension_mismatch_strategy = getattr(config, "MERGE_DIMENSION_MISMATCH_STRATEGY", "USE_LARGEST")
invert_normal_green = config.invert_normal_green_globally # Use the new property
output_base_dir = context.output_base_path # This is the FINAL base path
asset_name = context.asset_rule.asset_name if context.asset_rule else "UnknownAsset"
# For save_image_variants, the 'output_base_directory' should be the engine_temp_dir,
# as these are intermediate variant files before final organization.
temp_output_base_dir_for_variants = context.engine_temp_dir
output_filename_pattern_tokens = {'asset_name': asset_name, 'output_base_directory': temp_output_base_dir_for_variants}
# --- Prepare Items to Process ---
items_to_process: List[Union[Tuple[int, FileRule], Tuple[str, Dict]]] = []

# Add regular files
if context.files_to_process:
# Validate source path early for regular files
if not context.source_rule or not context.source_rule.input_path:
logger.error(f"Asset '{asset_name_for_log}': SourceRule or SourceRule.input_path is not set. Cannot process regular files.")
context.status_flags['individual_map_processing_failed'] = True
# Mark all file_rules as failed if source path is missing
for fr_idx, file_rule_to_fail in enumerate(context.files_to_process):
map_type_for_fail = file_rule_to_fail.item_type_override or file_rule_to_fail.item_type or "UnknownMapType"
ff_map_type = get_filename_friendly_map_type(map_type_for_fail, file_type_definitions)
context.processed_maps_details[fr_idx] = {
'status': 'Failed',
'map_type': ff_map_type,
'processing_map_type': map_type_for_fail,
'notes': "SourceRule.input_path missing",
'saved_files_info': []
}
# Don't add regular files if source path is bad
elif not context.workspace_path or not context.workspace_path.is_dir():
logger.error(f"Asset '{asset_name_for_log}': Workspace path '{context.workspace_path}' is not a valid directory. Cannot process regular files.")
context.status_flags['individual_map_processing_failed'] = True
for fr_idx, file_rule_to_fail in enumerate(context.files_to_process):
map_type_for_fail = file_rule_to_fail.item_type_override or file_rule_to_fail.item_type or "UnknownMapType"
ff_map_type = get_filename_friendly_map_type(map_type_for_fail, file_type_definitions)
context.processed_maps_details[fr_idx] = {
'status': 'Failed',
'map_type': ff_map_type,
'processing_map_type': map_type_for_fail,
'notes': "Workspace path invalid",
'saved_files_info': []
}
# Don't add regular files if workspace path is bad
else:
for idx, file_rule in enumerate(context.files_to_process):
items_to_process.append((idx, file_rule))
# Add merged tasks
if hasattr(context, 'merged_image_tasks') and context.merged_image_tasks:
for task_idx, task_data in enumerate(context.merged_image_tasks):
task_key = f"merged_task_{task_idx}"
items_to_process.append((task_key, task_data))

if not items_to_process:
logger.info(f"Asset '{asset_name_for_log}': No regular files or merged tasks to process in this stage.")
return context
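The unified loop below dispatches on the item's type: regular maps are keyed by their integer index, while merge tasks get a synthetic `merged_task_N` string key so the two result namespaces never collide. A minimal sketch of that pattern, with a stand-in `FileRule` class rather than the real `rule_structure.FileRule`:

```python
from typing import Dict, List, Tuple

class FileRule:  # Stand-in for the real rule_structure.FileRule
    def __init__(self, file_path: str):
        self.file_path = file_path

def build_items(files: List[FileRule], merge_tasks: List[Dict]) -> List[Tuple[object, object]]:
    items: List[Tuple[object, object]] = []
    # Regular maps are keyed by their index in files_to_process.
    items.extend((idx, fr) for idx, fr in enumerate(files))
    # Merge tasks get a synthetic string key, so an int key always means a FileRule.
    items.extend((f"merged_task_{i}", task) for i, task in enumerate(merge_tasks))
    return items

items = build_items([FileRule("albedo.png")], [{"output_map_type": "MAP_NRMRGH"}])
kinds = ["regular" if isinstance(data, FileRule) else "merged" for _, data in items]
print(kinds)  # ['regular', 'merged']
```

Dispatching on `isinstance(item_data, FileRule)` versus `isinstance(item_data, dict)` is then enough to route each entry to the right branch of the loop.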
# --- Unified Processing Loop ---
for item_key, item_data in items_to_process:
current_image_data: Optional[np.ndarray] = None
base_map_type: str = "Unknown" # Filename-friendly
processing_map_type: str = "Unknown" # Internal MAP_XXX type
source_bit_depth_info_for_save_util: List[int] = []
is_merged_task: bool = False
status_notes: List[str] = []
processing_status: str = "Started"
saved_files_details_list: List[Dict] = []
original_dimensions: Optional[Tuple[int, int]] = None
source_file_path_regular: Optional[Path] = None # For regular maps
merge_task_config_output_type: Optional[str] = None # For merged tasks
inputs_used_for_merge: Optional[Dict[str, str]] = None # For merged tasks
processing_instance_tag = f"item_{item_key}_{uuid.uuid4().hex[:8]}" # Unique tag for logging this item
try:
# --- A. Regular Map Processing ---
if isinstance(item_data, FileRule):
file_rule: FileRule = item_data
file_rule_idx: int = item_key # Key is the index for regular maps
is_merged_task = False
logger.info(f"Asset '{asset_name_for_log}', Key {file_rule_idx}, Proc. Tag {processing_instance_tag}: Processing Regular Map from FileRule: {file_rule.file_path}")

if not file_rule.file_path:
logger.error(f"Asset '{asset_name_for_log}', Key {file_rule_idx}, Proc. Tag {processing_instance_tag}: FileRule has an empty or None file_path. Skipping.")
processing_status = "Failed"
status_notes.append("FileRule has no file_path")
continue # To finally block
# Determine internal map type (MAP_XXX) with suffixing
initial_internal_map_type = file_rule.item_type_override or file_rule.item_type or "UnknownMapType"
processing_map_type = self._get_suffixed_internal_map_type(context, file_rule, initial_internal_map_type, respect_variant_map_types)
base_map_type = get_filename_friendly_map_type(processing_map_type, file_type_definitions) # Get filename friendly version

# Skip types not meant for individual processing (e.g., composites handled elsewhere)
if not processing_map_type or not processing_map_type.startswith("MAP_") or processing_map_type == "MAP_GEN_COMPOSITE":
logger.debug(f"Asset '{asset_name_for_log}', Key {file_rule_idx}, Proc. Tag {processing_instance_tag}: Skipping, type '{processing_map_type}' (Filename: '{base_map_type}') not targeted for individual processing.")
processing_status = "Skipped"
status_notes.append(f"Type '{processing_map_type}' not processed individually.")
continue # To finally block
# Find source file (relative to workspace_path)
source_base_path = context.workspace_path
# Use the file_rule.file_path directly as it should be relative now
potential_source_path = source_base_path / file_rule.file_path
if potential_source_path.is_file():
source_file_path_regular = potential_source_path
logger.info(f"Asset '{asset_name_for_log}', Key {file_rule_idx}, Proc. Tag {processing_instance_tag}: Found source file: {source_file_path_regular}")
else:
# Attempt globbing as a fallback if direct path fails (optional, based on previous logic)
found_files = list(source_base_path.glob(file_rule.file_path))
if len(found_files) == 1:
source_file_path_regular = found_files[0]
logger.info(f"Asset '{asset_name_for_log}', Key {file_rule_idx}, Proc. Tag {processing_instance_tag}: Found source file via glob: {source_file_path_regular}")
elif len(found_files) > 1:
logger.warning(f"Asset '{asset_name_for_log}', Key {file_rule_idx}, Proc. Tag {processing_instance_tag}: Multiple files found for pattern '{file_rule.file_path}' in '{source_base_path}'. Using first: {found_files[0]}")
source_file_path_regular = found_files[0]
else:
logger.error(f"Asset '{asset_name_for_log}', Key {file_rule_idx}, Proc. Tag {processing_instance_tag}: Source file not found using path/pattern '{file_rule.file_path}' in '{source_base_path}'.")
processing_status = "Failed"
status_notes.append("Source file not found")
continue # To finally block
# Load image
source_image_data = ipu.load_image(str(source_file_path_regular))
if source_image_data is None:
logger.error(f"Asset '{asset_name_for_log}', Key {file_rule_idx}, Proc. Tag {processing_instance_tag}: Failed to load image from '{source_file_path_regular}'.")
processing_status = "Failed"
status_notes.append("Image load failed")
continue # To finally block

original_height, original_width = source_image_data.shape[:2]
original_dimensions = (original_width, original_height)
logger.debug(f"Asset '{asset_name_for_log}', Key {file_rule_idx}, Proc. Tag {processing_instance_tag}: Loaded image with dimensions {original_width}x{original_height}.")
# Get original bit depth
try:
original_source_bit_depth = ipu.get_image_bit_depth(str(source_file_path_regular))
source_bit_depth_info_for_save_util = [original_source_bit_depth]
logger.info(f"Asset '{asset_name_for_log}', Key {file_rule_idx}, Proc. Tag {processing_instance_tag}: Determined source bit depth: {original_source_bit_depth}")
except Exception as e:
logger.warning(f"Asset '{asset_name_for_log}', Key {file_rule_idx}, Proc. Tag {processing_instance_tag}: Could not determine source bit depth for {source_file_path_regular}: {e}. Using default [8].")
source_bit_depth_info_for_save_util = [8] # Default fallback
status_notes.append("Could not determine source bit depth, defaulted to 8.")
current_image_data = source_image_data.copy()
# Apply transformations for regular maps AFTER loading
log_prefix_regular = f"Asset '{asset_name_for_log}', Key {file_rule_idx}, Proc. Tag {processing_instance_tag}"
current_image_data, processing_map_type, transform_notes = self._apply_in_memory_transformations(
current_image_data, processing_map_type, invert_normal_green, file_type_definitions, log_prefix_regular
)
status_notes.extend(transform_notes)
# Update base_map_type AFTER potential transformation
base_map_type = get_filename_friendly_map_type(processing_map_type, file_type_definitions)
# --- B. Merged Image Task Processing ---
elif isinstance(item_data, dict):
task: Dict = item_data
task_key: str = item_key # Key is the generated string for merged tasks
is_merged_task = True
merge_task_config_output_type = task.get('output_map_type', 'UnknownMergeOutput')
logger.info(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Processing Merged Task for output type: {merge_task_config_output_type}")

processing_map_type = merge_task_config_output_type # Internal type is the output type from config
base_map_type = get_filename_friendly_map_type(processing_map_type, file_type_definitions) # Get filename friendly version
source_bit_depth_info_for_save_util = task.get('source_bit_depths', [])
merge_rule_config = task.get('merge_rule_config', {})
input_map_sources = task.get('input_map_sources', {})
target_dimensions = task.get('source_dimensions') # Expected dimensions (h, w)
if not merge_rule_config or not input_map_sources or not target_dimensions:
|
||||
logger.error(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Merge task data is incomplete (missing config, sources, or dimensions). Skipping.")
|
||||
processing_status = "Failed"
|
||||
status_notes.append("Incomplete merge task data")
|
||||
continue # To finally block
|
||||
|
||||
loaded_inputs_for_merge: Dict[str, np.ndarray] = {}
|
||||
actual_input_dimensions: List[Tuple[int, int]] = [] # List of (h, w)
|
||||
inputs_used_for_merge = {} # Track actual files/fallbacks used
|
||||
|
||||
# Load/Prepare Inputs for Merge
|
||||
merge_inputs_config = merge_rule_config.get('inputs', {})
|
||||
merge_defaults = merge_rule_config.get('defaults', {})
|
||||
|
||||
for channel_char, required_map_type_from_rule in merge_inputs_config.items():
|
||||
input_info = input_map_sources.get(required_map_type_from_rule)
|
||||
input_image_data = None
|
||||
input_source_desc = f"Fallback for {required_map_type_from_rule}"
|
||||
|
||||
if input_info and input_info.get('file_path'):
|
||||
# Paths in merged tasks should ideally be absolute or relative to a known base (e.g., workspace)
|
||||
# Assuming they are resolvable as is for now.
|
||||
input_file_path = Path(input_info['file_path'])
|
||||
if input_file_path.is_file():
|
||||
try:
|
||||
input_image_data = ipu.load_image(str(input_file_path))
|
||||
if input_image_data is not None:
|
||||
logger.info(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Loaded input '{required_map_type_from_rule}' for channel '{channel_char}' from: {input_file_path}")
|
||||
actual_input_dimensions.append(input_image_data.shape[:2]) # (h, w)
|
||||
input_source_desc = str(input_file_path)
|
||||
else:
|
||||
logger.warning(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Failed to load input '{required_map_type_from_rule}' from {input_file_path}. Attempting fallback.")
|
||||
except Exception as e:
|
||||
logger.warning(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Error loading input '{required_map_type_from_rule}' from {input_file_path}: {e}. Attempting fallback.")
|
||||
else:
|
||||
logger.warning(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Input file path for '{required_map_type_from_rule}' not found: {input_file_path}. Attempting fallback.")
|
||||
else:
|
||||
logger.warning(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: No file path provided for required input '{required_map_type_from_rule}'. Attempting fallback.")
|
||||
|
||||
# Fallback if load failed or no path
|
||||
if input_image_data is None:
|
||||
fallback_value = merge_defaults.get(channel_char)
|
||||
if fallback_value is not None:
|
||||
try:
|
||||
# Determine shape and dtype for fallback
|
||||
h, w = target_dimensions
|
||||
# Infer channels needed based on typical usage or config (e.g., RGB default, single channel for masks)
|
||||
# This might need refinement based on how defaults are structured. Assuming uint8 for now.
|
||||
# If fallback_value is a single number, assume grayscale, else assume color based on length?
|
||||
num_channels = 1 if isinstance(fallback_value, (int, float)) else len(fallback_value) if isinstance(fallback_value, (list, tuple)) else 3 # Default to 3? Risky.
|
||||
dtype = np.uint8 # Default dtype, might need adjustment based on context
|
||||
shape = (h, w) if num_channels == 1 else (h, w, num_channels)
|
||||
|
||||
input_image_data = np.full(shape, fallback_value, dtype=dtype)
|
||||
logger.warning(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Using fallback value {fallback_value} for channel '{channel_char}' (Target Dims: {target_dimensions}).")
|
||||
# Fallback uses target dimensions, don't add to actual_input_dimensions for mismatch check unless required
|
||||
# actual_input_dimensions.append(target_dimensions) # Optional: Treat fallback as having target dims
|
||||
status_notes.append(f"Used fallback for {required_map_type_from_rule}")
|
||||
except Exception as e:
|
||||
logger.error(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Error creating fallback for channel '{channel_char}': {e}. Cannot proceed with merge.")
|
||||
processing_status = "Failed"
|
||||
status_notes.append(f"Fallback creation failed for {required_map_type_from_rule}")
|
||||
break # Break inner loop
|
||||
else:
|
||||
logger.error(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Missing input '{required_map_type_from_rule}' and no fallback default provided for channel '{channel_char}'. Cannot proceed.")
|
||||
processing_status = "Failed"
|
||||
status_notes.append(f"Missing input {required_map_type_from_rule} and no fallback")
|
||||
break # Break inner loop
|
||||
|
||||
if processing_status == "Failed": break # Exit outer loop if inner loop failed
|
||||
|
||||
# --- Apply Pre-Merge Transformations using Helper ---
|
||||
if input_image_data is not None: # Only transform if we have data
|
||||
log_prefix_merge_input = f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}, Input {required_map_type_from_rule}"
|
||||
input_image_data, _, transform_notes = self._apply_in_memory_transformations(
|
||||
input_image_data, required_map_type_from_rule, invert_normal_green, file_type_definitions, log_prefix_merge_input
|
||||
)
|
||||
# We don't need the updated map type for the input key, just the transformed data
|
||||
status_notes.extend(transform_notes) # Add notes to the main task's notes
|
||||
|
||||
# --- End Pre-Merge Transformations ---
|
||||
|
||||
loaded_inputs_for_merge[channel_char] = input_image_data
|
||||
inputs_used_for_merge[required_map_type_from_rule] = input_source_desc
|
||||
|
||||
if processing_status == "Failed": continue # To finally block
|
||||
|
||||
# Dimension Mismatch Handling
|
||||
unique_dimensions = set(actual_input_dimensions)
|
||||
target_merge_dims = target_dimensions # Default
|
||||
if len(unique_dimensions) > 1:
|
||||
logger.warning(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Mismatched dimensions found among loaded inputs: {unique_dimensions}. Applying strategy: {merge_dimension_mismatch_strategy}")
|
||||
status_notes.append(f"Mismatched input dimensions ({unique_dimensions}), applied {merge_dimension_mismatch_strategy}")
|
||||
|
||||
if merge_dimension_mismatch_strategy == "ERROR_SKIP":
|
||||
logger.error(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Dimension mismatch strategy is ERROR_SKIP. Failing task.")
|
||||
processing_status = "Failed"
|
||||
status_notes.append("Dimension mismatch (ERROR_SKIP)")
|
||||
continue # To finally block
|
||||
elif merge_dimension_mismatch_strategy == "USE_LARGEST":
|
||||
max_h = max(h for h, w in unique_dimensions)
|
||||
max_w = max(w for h, w in unique_dimensions)
|
||||
target_merge_dims = (max_h, max_w)
|
||||
elif merge_dimension_mismatch_strategy == "USE_FIRST":
|
||||
target_merge_dims = actual_input_dimensions[0] if actual_input_dimensions else target_dimensions
|
||||
else: # Default or unknown: Use largest
|
||||
logger.warning(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Unknown dimension mismatch strategy '{merge_dimension_mismatch_strategy}'. Defaulting to USE_LARGEST.")
|
||||
max_h = max(h for h, w in unique_dimensions)
|
||||
max_w = max(w for h, w in unique_dimensions)
|
||||
target_merge_dims = (max_h, max_w)
|
||||
|
||||
logger.info(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Resizing inputs to target merge dimensions: {target_merge_dims}")
|
||||
# Resize loaded inputs (not fallbacks unless they were added to actual_input_dimensions)
|
||||
for channel_char, img_data in loaded_inputs_for_merge.items():
|
||||
# Only resize if it was a loaded input that contributed to the mismatch check
|
||||
if img_data.shape[:2] in unique_dimensions and img_data.shape[:2] != target_merge_dims:
|
||||
resized_img = ipu.resize_image(img_data, target_merge_dims[1], target_merge_dims[0]) # w, h
|
||||
if resized_img is None:
|
||||
logger.error(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Failed to resize input for channel '{channel_char}' to {target_merge_dims}. Failing task.")
|
||||
processing_status = "Failed"
|
||||
status_notes.append(f"Input resize failed for {channel_char}")
|
||||
break
|
||||
loaded_inputs_for_merge[channel_char] = resized_img
|
||||
if processing_status == "Failed": continue # To finally block
|
||||
|
||||
# Perform Merge (Example: Simple Channel Packing - Adapt as needed)
|
||||
# This needs to be robust based on merge_rule_config structure
|
||||
try:
|
||||
merge_channels_order = merge_rule_config.get('channel_order', 'RGB') # e.g., 'RGB', 'BGR', 'R', 'RGBA' etc.
|
||||
output_channels = len(merge_channels_order)
|
||||
h, w = target_merge_dims # Use the potentially adjusted dimensions
|
||||
|
||||
if output_channels == 1:
|
||||
# Assume the first channel in order is the one to use
|
||||
channel_char_to_use = merge_channels_order[0]
|
||||
source_img = loaded_inputs_for_merge[channel_char_to_use]
|
||||
# Ensure it's grayscale (take first channel if it's multi-channel)
|
||||
if len(source_img.shape) == 3:
|
||||
current_image_data = source_img[:, :, 0].copy()
|
||||
else:
|
||||
current_image_data = source_img.copy()
|
||||
elif output_channels > 1:
|
||||
# Assume uint8 dtype for merged output unless specified otherwise
|
||||
merged_image = np.zeros((h, w, output_channels), dtype=np.uint8)
|
||||
for i, channel_char in enumerate(merge_channels_order):
|
||||
source_img = loaded_inputs_for_merge.get(channel_char)
|
||||
if source_img is not None:
|
||||
# Extract the correct channel (e.g., R from RGB, or use grayscale directly)
|
||||
if len(source_img.shape) == 3:
|
||||
# Assuming standard RGB/BGR order in source based on channel_char? Needs clear definition.
|
||||
# Example: If source is RGB and channel_char is 'R', take channel 0.
|
||||
# This mapping needs to be defined in merge_rule_config or conventions.
|
||||
# Simple approach: take the first channel if source is color.
|
||||
merged_image[:, :, i] = source_img[:, :, 0]
|
||||
else: # Grayscale source
|
||||
merged_image[:, :, i] = source_img
|
||||
else:
|
||||
# This case should have been caught by fallback logic earlier
|
||||
logger.error(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Missing prepared input for channel '{channel_char}' during final merge assembly. This shouldn't happen.")
|
||||
processing_status = "Failed"
|
||||
status_notes.append(f"Internal error: Missing input '{channel_char}' at merge assembly")
|
||||
break
|
||||
if processing_status != "Failed":
|
||||
current_image_data = merged_image
|
||||
else:
|
||||
logger.error(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Invalid channel_order '{merge_channels_order}' in merge config.")
|
||||
processing_status = "Failed"
|
||||
status_notes.append("Invalid merge channel_order")
|
||||
|
||||
if processing_status != "Failed":
|
||||
logger.info(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Successfully merged inputs into image with shape {current_image_data.shape}")
|
||||
original_dimensions = (current_image_data.shape[1], current_image_data.shape[0]) # Set original dims after merge
|
||||
|
||||
except Exception as e:
|
||||
logger.exception(f"Asset '{asset_name_for_log}', Key {task_key}, Proc. Tag {processing_instance_tag}: Error during merge operation: {e}")
|
||||
processing_status = "Failed"
|
||||
status_notes.append(f"Merge operation failed: {e}")
|
||||
continue # To finally block
|
||||
|
||||
else:
|
||||
logger.error(f"Asset '{asset_name_for_log}', Key {item_key}: Unknown item type in processing loop: {type(item_data)}. Skipping.")
|
||||
processing_status = "Failed"
|
||||
status_notes.append("Unknown item type in loop")
|
||||
continue # To finally block
|
||||
|
||||
                # --- C. Common Processing Path ---
                if current_image_data is None:
                    logger.error(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: current_image_data is None before common processing. Status: {processing_status}. Skipping common path.")
                    # Status should already be Failed or Skipped from A or B
                    if processing_status not in ["Failed", "Skipped"]:
                        processing_status = "Failed"
                        status_notes.append("Internal error: Image data missing before common processing")
                    continue  # To finally block

                logger.info(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: Entering common processing path for '{base_map_type}' (Internal: '{processing_map_type}')")

                # Optional initial scaling (in memory). Transformations are handled
                # earlier by the helper function.
                image_to_save = None
                scaling_applied = False
                h_pre_scale, w_pre_scale = current_image_data.shape[:2]

                if initial_scaling_mode == "POT_DOWNSCALE":
                    pot_w = ipu.get_nearest_power_of_two_downscale(w_pre_scale)
                    pot_h = ipu.get_nearest_power_of_two_downscale(h_pre_scale)
                    if (pot_w, pot_h) != (w_pre_scale, h_pre_scale):
                        logger.info(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: Applying Initial Scaling: POT Downscale from ({w_pre_scale},{h_pre_scale}) to ({pot_w},{pot_h}).")
                        # Each dimension is snapped to a power of two independently; an
                        # aspect-preserving POT helper from ipu could replace this later.
                        resized_img = ipu.resize_image(current_image_data, pot_w, pot_h, interpolation=cv2.INTER_AREA)
                        if resized_img is not None:
                            image_to_save = resized_img
                            scaling_applied = True
                            status_notes.append(f"Initial POT Downscale applied ({pot_w}x{pot_h})")
                        else:
                            logger.warning(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: POT Downscale resize failed. Using original data for saving.")
                            image_to_save = current_image_data.copy()
                            status_notes.append("Initial POT Downscale failed, used original")
                    else:
                        logger.info(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: Initial Scaling: POT Downscale - Image already POT or smaller. No scaling needed.")
                        image_to_save = current_image_data.copy()
                elif initial_scaling_mode == "NONE":
                    logger.info(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: Initial Scaling: Mode is NONE.")
                    image_to_save = current_image_data.copy()
                else:
                    logger.warning(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: Unknown INITIAL_SCALING_MODE '{initial_scaling_mode}'. Defaulting to NONE.")
                    image_to_save = current_image_data.copy()
                    status_notes.append(f"Unknown initial scale mode '{initial_scaling_mode}', used original")

                if image_to_save is None:  # Should not happen if the logic above is correct
                    logger.error(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: image_to_save is None after scaling block. This indicates an error. Failing.")
                    processing_status = "Failed"
                    status_notes.append("Internal error: image_to_save is None post-scaling")
                    continue  # To finally block

                # Color management: save_image_variants is assumed to handle color
                # internally based on format/config. If an explicit BGR->RGB conversion
                # were needed per map type before saving, it would look like:
                # if base_map_type in ["COL", "DIFF", "ALB"] and len(image_to_save.shape) == 3 and image_to_save.shape[2] == 3:
                #     image_to_save = ipu.convert_bgr_to_rgb(image_to_save)
                #     status_notes.append("BGR->RGB applied")

                # Call the unified save utility
                logger.info(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: Calling Unified Save Utility for map type '{base_map_type}' (Internal: '{processing_map_type}')")

                try:
                    # Pass only the arguments save_image_variants expects, extracting
                    # the required values from config individually.
                    save_args = {
                        "source_image_data": image_to_save,
                        "base_map_type": base_map_type,  # Filename-friendly
                        "source_bit_depth_info": source_bit_depth_info_for_save_util,
                        "image_resolutions": config.image_resolutions,
                        "file_type_defs": config.FILE_TYPE_DEFINITIONS,
                        "output_format_8bit": config.get_8bit_output_format(),
                        "output_format_16bit_primary": config.get_16bit_output_formats()[0],
                        "output_format_16bit_fallback": config.get_16bit_output_formats()[1],
                        "png_compression_level": config.png_compression_level,
                        "jpg_quality": config.jpg_quality,
                        "output_filename_pattern_tokens": output_filename_pattern_tokens,
                        "output_filename_pattern": config.output_filename_pattern,
                    }

                    saved_files_details_list = save_image_variants(**save_args)

                    if saved_files_details_list:
                        logger.info(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: Unified Save Utility completed successfully. Saved {len(saved_files_details_list)} variants.")
                        processing_status = "Processed_Via_Save_Utility"
                    else:
                        logger.warning(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: Unified Save Utility returned no saved file details. Check utility logs.")
                        processing_status = "Processed_Save_Utility_No_Output"  # Or "Failed", depending on severity
                        status_notes.append("Save utility reported no files saved")

                except Exception as e:
                    logger.exception(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: Error calling or executing save_image_variants: {e}")
                    processing_status = "Failed"
                    status_notes.append(f"Save utility call failed: {e}")
                    # saved_files_details_list remains empty

            except Exception as e:
                logger.exception(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: Unhandled exception during processing loop for item: {e}")
                processing_status = "Failed"
                status_notes.append(f"Unhandled exception: {e}")

            finally:
                # --- Update Context ---
                logger.debug(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: Updating context. Status: {processing_status}, Notes: {status_notes}")
                details_entry = {
                    'status': processing_status,
                    'map_type': base_map_type,  # Final filename-friendly type
                    'processing_map_type': processing_map_type,  # Final internal type
                    'notes': " | ".join(status_notes),
                    'saved_files_info': saved_files_details_list,
                    'original_dimensions': original_dimensions,  # (w, h)
                }
                if is_merged_task:
                    details_entry['merge_task_config_output_type'] = merge_task_config_output_type
                    details_entry['inputs_used_for_merge'] = inputs_used_for_merge
                    details_entry['source_bit_depths'] = source_bit_depth_info_for_save_util  # Store the list used
                else:
                    # Regular-map specific details
                    details_entry['source_file'] = str(source_file_path_regular) if source_file_path_regular else "N/A"
                    details_entry['original_bit_depth'] = source_bit_depth_info_for_save_util[0] if source_bit_depth_info_for_save_util else None
                    details_entry['source_file_rule_index'] = item_key  # Store original index

                context.processed_maps_details[item_key] = details_entry
                logger.info(f"Asset '{asset_name_for_log}', Key {item_key}, Proc. Tag {processing_instance_tag}: Context updated for this item.")

        logger.info(f"Asset '{asset_name_for_log}': Finished individual map processing stage.")
        return context
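The merge branch in section B packs one source map per character of `channel_order` into the output image. A minimal sketch of that channel-packing idea in isolation (function and variable names here are illustrative, not the pipeline's API):

```python
import numpy as np

def pack_channels(sources: dict, order: str = "RGB") -> np.ndarray:
    """Pack one source image per character in `order` into a multi-channel uint8 image."""
    h, w = next(iter(sources.values())).shape[:2]
    packed = np.zeros((h, w, len(order)), dtype=np.uint8)
    for i, ch in enumerate(order):
        src = sources[ch]
        # Color sources contribute their first channel, matching the stage's behavior.
        packed[:, :, i] = src[:, :, 0] if src.ndim == 3 else src
    return packed

# Stand-in inputs for something like a NRM+ROUGH pack:
normal = np.full((4, 4), 128, dtype=np.uint8)
rough = np.full((4, 4), 200, dtype=np.uint8)
merged = pack_channels({"R": normal, "G": normal, "B": rough})
```

As in the stage, inputs are assumed pre-resized to a common resolution before packing.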
    def _get_suffixed_internal_map_type(self, context: AssetProcessingContext, current_file_rule: FileRule, initial_internal_map_type: str, respect_variant_map_types: List[str]) -> str:
        """
        Determines the potentially suffixed internal map type (e.g., MAP_COL-1)
        based on occurrences within the asset rule's file list.
        """
        final_internal_map_type = initial_internal_map_type  # Default
        asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"

        base_map_type_match = re.match(r"(MAP_[A-Z]{3})", initial_internal_map_type)
        if not base_map_type_match or not context.asset_rule or not context.asset_rule.files:
            return final_internal_map_type  # Cannot determine a suffix without a base type or asset rule files

        true_base_map_type = base_map_type_match.group(1)  # This is "MAP_XXX"

        peers_of_same_base_type = []
        for fr_asset in context.asset_rule.files:
            fr_asset_item_type = fr_asset.item_type_override or fr_asset.item_type or "UnknownMapType"
            fr_asset_base_match = re.match(r"(MAP_[A-Z]{3})", fr_asset_item_type)
            if fr_asset_base_match and fr_asset_base_match.group(1) == true_base_map_type:
                peers_of_same_base_type.append(fr_asset)

        num_occurrences = len(peers_of_same_base_type)
        current_instance_index = 0  # 1-based once found; 0 means "not found"

        try:
            # Find the index based on the FileRule object itself
            current_instance_index = peers_of_same_base_type.index(current_file_rule) + 1
        except ValueError:
            # Fallback: try matching by file_path if object identity fails (less reliable)
            try:
                current_instance_index = [fr.file_path for fr in peers_of_same_base_type].index(current_file_rule.file_path) + 1
                logger.warning(f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}': Found peer index using file_path fallback.")
            except (ValueError, AttributeError):  # AttributeError covers file_path being None
                logger.warning(
                    f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}' (Initial Type: '{initial_internal_map_type}', Base: '{true_base_map_type}'): "
                    f"Could not find its own instance in the list of {num_occurrences} peers from asset_rule.files using object identity or path. Suffixing may be incorrect."
                )
                # Keep index 0; the suffix logic below handles it

        # Determine the suffix
        map_type_for_respect_check = true_base_map_type.replace("MAP_", "")  # e.g., "COL"
        is_in_respect_list = map_type_for_respect_check in respect_variant_map_types

        suffix_to_append = ""
        if num_occurrences > 1:
            if current_instance_index > 0:
                suffix_to_append = f"-{current_instance_index}"
            else:
                # If the index is still 0 (not found), omit the suffix to avoid ambiguity
                logger.warning(f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}': Index for multi-occurrence map type '{true_base_map_type}' (count: {num_occurrences}) not determined. Omitting numeric suffix.")
        elif num_occurrences == 1 and is_in_respect_list:
            suffix_to_append = "-1"  # Suffix even a single instance when in the respect list

        if suffix_to_append:
            final_internal_map_type = true_base_map_type + suffix_to_append
        # Otherwise final_internal_map_type remains initial_internal_map_type

        if final_internal_map_type != initial_internal_map_type:
            logger.debug(f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}': Suffixed internal map type determined: '{initial_internal_map_type}' -> '{final_internal_map_type}'")

        return final_internal_map_type
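`_get_suffixed_internal_map_type` numbers repeated base map types with a 1-based suffix. A reduced sketch of that counting rule, operating on plain type strings instead of FileRule objects (an assumption made to keep the example self-contained):

```python
import re

def suffix_map_type(all_types: list, idx: int, respect_list=()) -> str:
    """Return the suffixed form of all_types[idx], e.g. 'MAP_COL' -> 'MAP_COL-2'."""
    base = re.match(r"(MAP_[A-Z]{3})", all_types[idx]).group(1)
    # Peers are entries sharing the same MAP_XXX base type.
    peers = [i for i, t in enumerate(all_types) if t.startswith(base)]
    instance = peers.index(idx) + 1  # 1-based position among same-base peers
    if len(peers) > 1:
        return f"{base}-{instance}"
    if base.replace("MAP_", "") in respect_list:
        return f"{base}-1"  # a single instance is still suffixed when in the respect list
    return base
```

This mirrors the method's three outcomes: numeric suffix for multi-occurrence types, forced `-1` for respected single-occurrence types, and the bare base type otherwise.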
83
processing/pipeline/stages/initial_scaling.py
Normal file
@ -0,0 +1,83 @@
import logging
from typing import Tuple

import cv2  # Needed for interpolation flags
import numpy as np

from .base_stage import ProcessingStage
from ..asset_context import InitialScalingInput, InitialScalingOutput
from ...utils import image_processing_utils as ipu

log = logging.getLogger(__name__)


class InitialScalingStage(ProcessingStage):
    """
    Applies initial scaling (e.g., Power-of-Two downscaling) to image data
    if configured via the InitialScalingInput.
    """

    def execute(self, input_data: InitialScalingInput) -> InitialScalingOutput:
        """
        Applies scaling based on input_data.initial_scaling_mode.
        """
        log.debug(f"Initial Scaling Stage: Mode '{input_data.initial_scaling_mode}'.")

        image_to_scale = input_data.image_data
        original_dims_wh = input_data.original_dimensions
        scaling_mode = input_data.initial_scaling_mode
        scaling_applied = False
        final_image_data = image_to_scale  # Default to the original if no scaling happens

        if image_to_scale is None or image_to_scale.size == 0:
            log.warning("Initial Scaling Stage: Input image data is None or empty. Skipping.")
            # Return empty data and indicate that no scaling took place
            return InitialScalingOutput(
                scaled_image_data=np.array([]),
                scaling_applied=False,
                final_dimensions=(0, 0)
            )

        if original_dims_wh is None:
            log.warning("Initial Scaling Stage: Original dimensions not provided. Using current image shape.")
            h_pre_scale, w_pre_scale = image_to_scale.shape[:2]
            original_dims_wh = (w_pre_scale, h_pre_scale)
        else:
            w_pre_scale, h_pre_scale = original_dims_wh

        if scaling_mode == "POT_DOWNSCALE":
            pot_w = ipu.get_nearest_power_of_two_downscale(w_pre_scale)
            pot_h = ipu.get_nearest_power_of_two_downscale(h_pre_scale)

            if (pot_w, pot_h) != (w_pre_scale, h_pre_scale):
                log.info(f"Initial Scaling: Applying POT Downscale from ({w_pre_scale},{h_pre_scale}) to ({pot_w},{pot_h}).")
                # INTER_AREA is generally the best choice for downscaling
                resized_img = ipu.resize_image(image_to_scale, pot_w, pot_h, interpolation=cv2.INTER_AREA)
                if resized_img is not None:
                    final_image_data = resized_img
                    scaling_applied = True
                    log.debug("Initial Scaling: POT Downscale applied successfully.")
                else:
                    log.warning("Initial Scaling: POT Downscale resize failed. Using original data.")
            else:
                log.info("Initial Scaling: POT Downscale - Image already POT or smaller. No scaling needed.")
        elif scaling_mode == "NONE":
            log.info("Initial Scaling: Mode is NONE. No scaling applied.")
        else:
            log.warning(f"Initial Scaling: Unknown INITIAL_SCALING_MODE '{scaling_mode}'. Defaulting to NONE.")

        # Determine final dimensions from the data actually being returned
        final_h, final_w = final_image_data.shape[:2]
        final_dims_wh = (final_w, final_h)

        return InitialScalingOutput(
            scaled_image_data=final_image_data,
            scaling_applied=scaling_applied,
            final_dimensions=final_dims_wh
        )
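The stage above relies on `ipu.get_nearest_power_of_two_downscale` to snap each dimension down. That helper's implementation is not shown in this diff; one plausible definition, assuming "largest power of two not exceeding the input" semantics:

```python
def nearest_pot_downscale(n: int) -> int:
    """Largest power of two <= n (returns n itself if it is already a power of two)."""
    if n < 1:
        return 0  # degenerate dimension; caller should treat this as invalid
    # bit_length() - 1 is the exponent of the highest set bit.
    return 1 << (n.bit_length() - 1)
```

Under this definition a 1500x3000 texture would snap to 1024x2048, which is why the two dimensions are treated independently rather than via a shared aspect-preserving scale.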
@ -1,153 +0,0 @@
import logging
from pathlib import Path
from typing import Dict, Optional, List, Tuple

from .base_stage import ProcessingStage
from ..asset_context import AssetProcessingContext
from rule_structure import FileRule
from utils.path_utils import sanitize_filename

logger = logging.getLogger(__name__)


class MapMergingStage(ProcessingStage):
    """
    Merges individually processed maps based on MAP_MERGE rules.
    This stage performs operations like channel packing.
    """

    def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
        """
        Executes the map merging logic.

        Args:
            context: The asset processing context.

        Returns:
            The updated asset processing context.
        """
        asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
        if context.status_flags.get('skip_asset'):
            logger.info(f"Skipping map merging for asset {asset_name_for_log} as skip_asset flag is set.")
            return context
        if not hasattr(context, 'merged_maps_details'):
            context.merged_maps_details = {}

        if not hasattr(context, 'merged_image_tasks'):
            context.merged_image_tasks = []

        if not hasattr(context, 'processed_maps_details'):
            logger.warning(f"Asset {asset_name_for_log}: 'processed_maps_details' not found in context. Cannot generate merge tasks.")
            return context

        logger.info(f"Starting MapMergingStage for asset: {asset_name_for_log}")

        # The core merge rules live in context.config_obj.map_merge_rules; each
        # rule defines an output_map_type and its inputs. The original code
        # iterated context.files_to_process looking for item_type "MAP_MERGE",
        # which implied FileRule objects defined merge operations; that is no
        # longer the case since 'merge_settings' and 'id' were removed from FileRule.
        num_merge_rules_attempted = 0

        config_merge_rules = context.config_obj.map_merge_rules
        if not config_merge_rules:
            logger.info(f"Asset {asset_name_for_log}: No map_merge_rules found in configuration. Skipping map merging.")
            return context

        for rule_idx, configured_merge_rule in enumerate(config_merge_rules):
            output_map_type = configured_merge_rule.get('output_map_type')
            inputs_map_type_to_channel = configured_merge_rule.get('inputs')  # e.g. {"R": "NRM", "G": "NRM", "B": "ROUGH"}
            default_values = configured_merge_rule.get('defaults', {})  # e.g. {"R": 0.5, "G": 0.5, "B": 0.5}
            # output_bit_depth_rule = configured_merge_rule.get('output_bit_depth', 'respect_inputs')  # Not used yet

            if not output_map_type or not inputs_map_type_to_channel:
                logger.warning(f"Asset {asset_name_for_log}: Invalid configured_merge_rule at index {rule_idx}. Missing 'output_map_type' or 'inputs'. Rule: {configured_merge_rule}")
                continue

            num_merge_rules_attempted += 1
            merge_op_id = f"merge_{sanitize_filename(output_map_type)}_{rule_idx}"
            logger.info(f"Asset {asset_name_for_log}: Processing configured merge rule for '{output_map_type}' (Op ID: {merge_op_id})")

            input_map_sources_list = []
            source_bit_depths_list = []
            primary_source_dimensions = None

            # Find the required input maps among processed_maps_details
            required_input_map_types = set(inputs_map_type_to_channel.values())

            for required_map_type in required_input_map_types:
                found_processed_map_details = None
                # Iterate through processed_maps_details to find the required map type
                for p_key_idx, p_details in context.processed_maps_details.items():
                    processed_map_identifier = p_details.get('processing_map_type', p_details.get('map_type'))

                    # Check for a match, considering both "MAP_TYPE" and "TYPE" formats
                    is_match = False
                    if processed_map_identifier == required_map_type:
|
||||
is_match = True
|
||||
elif required_map_type.startswith("MAP_") and processed_map_identifier == required_map_type.split("MAP_")[-1]:
|
||||
is_match = True
|
||||
elif not required_map_type.startswith("MAP_") and processed_map_identifier == f"MAP_{required_map_type}":
|
||||
is_match = True
|
||||
|
||||
# Check if the found map is in a usable status and has a temporary file
|
||||
valid_input_statuses = ['BasePOTSaved', 'Processed_With_Variants', 'Processed_No_Variants', 'Converted_To_Rough'] # Add other relevant statuses if needed
|
||||
if is_match and p_details.get('status') in valid_input_statuses and p_details.get('temp_processed_file'):
|
||||
# Also check if the temp file actually exists on disk
|
||||
if Path(p_details.get('temp_processed_file')).exists():
|
||||
found_processed_map_details = p_details
|
||||
break # Found a suitable input, move to the next required map type
|
||||
|
||||
if found_processed_map_details:
|
||||
file_path = found_processed_map_details.get('temp_processed_file')
|
||||
dimensions = found_processed_map_details.get('base_pot_dimensions')
|
||||
|
||||
# Attempt to get original_bit_depth, log warning if not found
|
||||
original_bit_depth = found_processed_map_details.get('original_bit_depth')
|
||||
if original_bit_depth is None:
|
||||
logger.warning(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: 'original_bit_depth' not found in processed_maps_details for map type '{required_map_type}'. This value is pending IndividualMapProcessingStage refactoring and will be None or a default for now.")
|
||||
|
||||
input_map_sources_list.append({
|
||||
'map_type': required_map_type,
|
||||
'file_path': file_path,
|
||||
'dimensions': dimensions,
|
||||
'original_bit_depth': original_bit_depth
|
||||
})
|
||||
source_bit_depths_list.append(original_bit_depth)
|
||||
|
||||
# Set primary_source_dimensions from the first valid input found
|
||||
if primary_source_dimensions is None and dimensions:
|
||||
primary_source_dimensions = dimensions
|
||||
else:
|
||||
# If a required map is not found, log a warning but don't fail the task generation.
|
||||
# The consuming stage will handle missing inputs and fallbacks.
|
||||
logger.warning(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Required input map type '{required_map_type}' not found or not in a usable state in context.processed_maps_details. This input will be skipped for task generation.")
|
||||
|
||||
|
||||
# Create the merged image task dictionary
|
||||
merged_task = {
|
||||
'output_map_type': output_map_type,
|
||||
'input_map_sources': input_map_sources_list,
|
||||
'merge_rule_config': configured_merge_rule,
|
||||
'source_dimensions': primary_source_dimensions, # Can be None if no inputs were found
|
||||
'source_bit_depths': source_bit_depths_list
|
||||
}
|
||||
|
||||
# Append the task to the context
|
||||
context.merged_image_tasks.append(merged_task)
|
||||
logger.info(f"Asset {asset_name_for_log}: Generated merge task for '{output_map_type}' (Op ID: {merge_op_id}). Task details: {merged_task}")
|
||||
|
||||
# Note: We no longer populate context.merged_maps_details with 'Processed' status here,
|
||||
# as this stage only generates tasks, it doesn't perform the merge or save files.
|
||||
# The merged_maps_details will be populated by the stage that consumes these tasks.
|
||||
|
||||
logger.info(f"Finished MapMergingStage for asset: {asset_name_for_log}. Merge tasks generated: {len(context.merged_image_tasks)}")
|
||||
return context
|
||||
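The tolerant map-type comparison inside the matching loop above (accepting both `"MAP_ROUGH"` and `"ROUGH"` spellings on either side) can be sketched as a standalone helper; the function name is illustrative and not part of the codebase:

```python
def map_types_match(processed: str, required: str) -> bool:
    """Mirror of MapMergingStage's tolerant comparison: either side
    may carry or omit the 'MAP_' prefix."""
    if processed == required:
        return True
    # Rule asks for "MAP_XXX" but the processed entry stored plain "XXX"
    if required.startswith("MAP_") and processed == required.split("MAP_")[-1]:
        return True
    # Rule asks for plain "XXX" but the processed entry stored "MAP_XXX"
    if not required.startswith("MAP_") and processed == f"MAP_{required}":
        return True
    return False
```

Extracting this into a shared utility would let both the matching loop here and any downstream consumers agree on one normalization rule.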
304  processing/pipeline/stages/merged_task_processor.py  Normal file
@@ -0,0 +1,304 @@
import logging
import re
from pathlib import Path
from typing import List, Optional, Tuple, Dict, Any

import cv2
import numpy as np

from .base_stage import ProcessingStage
# Import necessary context classes and utils
from ..asset_context import AssetProcessingContext, MergeTaskDefinition, ProcessedMergedMapData
from ...utils import image_processing_utils as ipu

log = logging.getLogger(__name__)


# Helper function (duplicated from RegularMapProcessorStage -- consider moving to utils)
def _apply_in_memory_transformations(
    image_data: np.ndarray,
    processing_map_type: str,  # The internal type of the *input* map
    invert_normal_green: bool,
    file_type_definitions: Dict[str, Dict],
    log_prefix: str
) -> Tuple[np.ndarray, str, List[str]]:
    """
    Applies in-memory transformations (Gloss-to-Rough, Normal Green Invert).
    Returns the (potentially transformed) image data, the original input map
    type (it still identifies the input source; see the return statement), and notes.
    NOTE: This is applied to individual inputs *before* merging.
    """
    transformation_notes = []
    current_image_data = image_data  # Start with original data
    updated_processing_map_type = processing_map_type  # Start with original type

    # Gloss-to-Rough
    base_map_type_match = re.match(r"(MAP_GLOSS)", processing_map_type)
    if base_map_type_match:
        log.info(f"{log_prefix}: Applying Gloss-to-Rough conversion to input.")
        inversion_succeeded = False
        if np.issubdtype(current_image_data.dtype, np.floating):
            current_image_data = 1.0 - current_image_data
            current_image_data = np.clip(current_image_data, 0.0, 1.0)
            log.debug(f"{log_prefix}: Inverted float input data for Gloss->Rough.")
            inversion_succeeded = True
        elif np.issubdtype(current_image_data.dtype, np.integer):
            max_val = np.iinfo(current_image_data.dtype).max
            current_image_data = max_val - current_image_data
            log.debug(f"{log_prefix}: Inverted integer input data (max_val: {max_val}) for Gloss->Rough.")
            inversion_succeeded = True
        else:
            log.error(f"{log_prefix}: Unsupported image data type {current_image_data.dtype} for GLOSS input map. Cannot invert.")
            transformation_notes.append("Gloss-to-Rough FAILED (unsupported dtype)")

        if inversion_succeeded:
            updated_processing_map_type = processing_map_type.replace("GLOSS", "ROUGH")
            log.info(f"{log_prefix}: Input map type conceptually updated: '{processing_map_type}' -> '{updated_processing_map_type}'")
            transformation_notes.append("Gloss-to-Rough applied to input")

    # Normal Green Invert
    base_map_type_match_nrm = re.match(r"(MAP_NRM)", processing_map_type)
    if base_map_type_match_nrm and invert_normal_green:
        log.info(f"{log_prefix}: Applying Normal Map Green Channel Inversion (Global Setting) to input.")
        current_image_data = ipu.invert_normal_map_green_channel(current_image_data)
        transformation_notes.append("Normal Green Inverted (Global) applied to input")

    # Return the transformed data, the *original* map type (as it identifies the input source), and notes
    return current_image_data, processing_map_type, transformation_notes


class MergedTaskProcessorStage(ProcessingStage):
    """
    Processes a single merge task defined in the configuration.
    Loads inputs, applies transformations to inputs, handles fallbacks/resizing,
    performs the merge, and returns the merged data.
    """

    def execute(
        self,
        context: AssetProcessingContext,
        merge_task: MergeTaskDefinition  # Specific item passed by orchestrator
    ) -> ProcessedMergedMapData:
        """
        Processes the given MergeTaskDefinition item.
        """
        asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
        task_key = merge_task.task_key
        task_data = merge_task.task_data
        log_prefix = f"Asset '{asset_name_for_log}', Task '{task_key}'"
        log.info(f"{log_prefix}: Processing Merge Task.")

        # Initialize output object with default failure state
        result = ProcessedMergedMapData(
            merged_image_data=np.array([]),  # Placeholder
            output_map_type=task_data.get('output_map_type', 'UnknownMergeOutput'),
            source_bit_depths=[],
            final_dimensions=None,
            transformations_applied_to_inputs={},
            status="Failed",
            error_message="Initialization error"
        )

        try:
            # --- Configuration & Task Data ---
            config = context.config_obj
            file_type_definitions = getattr(config, "FILE_TYPE_DEFINITIONS", {})
            invert_normal_green = config.invert_normal_green_globally
            merge_dimension_mismatch_strategy = getattr(config, "MERGE_DIMENSION_MISMATCH_STRATEGY", "USE_LARGEST")
            workspace_path = context.workspace_path  # Base for resolving relative input paths

            merge_rule_config = task_data.get('merge_rule_config', {})
            input_map_sources_from_task = task_data.get('input_map_sources', {})  # Info about where inputs come from
            target_dimensions_hw = task_data.get('source_dimensions')  # Expected dimensions (h, w) from previous stage
            merge_inputs_config = merge_rule_config.get('inputs', {})  # e.g., {'R': 'MAP_AO', 'G': 'MAP_ROUGH', ...}
            merge_defaults = merge_rule_config.get('defaults', {})  # e.g., {'R': 255, 'G': 255, ...}
            merge_channels_order = merge_rule_config.get('channel_order', 'RGB')  # e.g., 'RGB', 'RGBA'

            if not merge_rule_config or not input_map_sources_from_task or not target_dimensions_hw or not merge_inputs_config:
                result.error_message = "Merge task data is incomplete (missing config, sources, dimensions, or input mapping)."
                log.error(f"{log_prefix}: {result.error_message}")
                return result

            loaded_inputs_for_merge: Dict[str, np.ndarray] = {}  # Channel char -> image data
            actual_input_dimensions: List[Tuple[int, int]] = []  # List of (h, w) for loaded files
            input_source_bit_depths: Dict[str, int] = {}  # Channel char -> bit depth
            all_transform_notes: Dict[str, List[str]] = {}  # Channel char -> list of transform notes

            # --- Load, Transform, and Prepare Inputs ---
            log.debug(f"{log_prefix}: Loading and preparing inputs...")
            for channel_char, required_map_type_from_rule in merge_inputs_config.items():
                input_info = input_map_sources_from_task.get(required_map_type_from_rule)
                input_image_data: Optional[np.ndarray] = None
                input_source_desc = f"Fallback for {required_map_type_from_rule}"
                input_log_prefix = f"{log_prefix}, Input '{required_map_type_from_rule}' (Channel '{channel_char}')"
                channel_transform_notes: List[str] = []

                # 1. Attempt to load from file path
                if input_info and input_info.get('file_path'):
                    # Paths in merged tasks should be relative to workspace_path
                    input_file_path_str = input_info['file_path']
                    input_file_path = workspace_path / input_file_path_str
                    if input_file_path.is_file():
                        try:
                            input_image_data = ipu.load_image(str(input_file_path))
                            if input_image_data is not None:
                                log.info(f"{input_log_prefix}: Loaded from: {input_file_path}")
                                actual_input_dimensions.append(input_image_data.shape[:2])  # (h, w)
                                input_source_desc = str(input_file_path)
                                try:
                                    input_source_bit_depths[channel_char] = ipu.get_image_bit_depth(str(input_file_path))
                                except Exception:
                                    log.warning(f"{input_log_prefix}: Could not get bit depth for {input_file_path}. Defaulting to 8.")
                                    input_source_bit_depths[channel_char] = 8
                            else:
                                log.warning(f"{input_log_prefix}: Failed to load image from {input_file_path}. Attempting fallback.")
                        except Exception as e:
                            log.warning(f"{input_log_prefix}: Error loading image from {input_file_path}: {e}. Attempting fallback.")
                    else:
                        log.warning(f"{input_log_prefix}: Input file path not found: {input_file_path}. Attempting fallback.")
                else:
                    log.warning(f"{input_log_prefix}: No file path provided. Attempting fallback.")

                # 2. Apply fallback if needed
                if input_image_data is None:
                    fallback_value = merge_defaults.get(channel_char)
                    if fallback_value is not None:
                        try:
                            h, w = target_dimensions_hw
                            # Infer shape/dtype for the fallback (simplified; needs refinement)
                            if isinstance(fallback_value, (list, tuple)):
                                num_channels = len(fallback_value)
                            else:
                                num_channels = 1  # Scalar fallback -> single channel
                            dtype = np.uint8  # Default dtype
                            shape = (h, w) if num_channels == 1 else (h, w, num_channels)

                            input_image_data = np.full(shape, fallback_value, dtype=dtype)
                            log.warning(f"{input_log_prefix}: Using fallback value {fallback_value} (Target Dims: {target_dimensions_hw}).")
                            input_source_desc = f"Fallback value {fallback_value}"
                            input_source_bit_depths[channel_char] = 8  # Assume 8-bit for fallbacks
                            channel_transform_notes.append(f"Used fallback value {fallback_value}")
                        except Exception as e:
                            result.error_message = f"Error creating fallback for channel '{channel_char}': {e}"
                            log.error(f"{log_prefix}: {result.error_message}")
                            return result  # Critical failure
                    else:
                        result.error_message = f"Missing input '{required_map_type_from_rule}' and no fallback default provided for channel '{channel_char}'."
                        log.error(f"{log_prefix}: {result.error_message}")
                        return result  # Critical failure

                # 3. Apply transformations to the loaded/fallback input
                if input_image_data is not None:
                    input_image_data, _, transform_notes = _apply_in_memory_transformations(
                        input_image_data.copy(),  # Transform a copy
                        required_map_type_from_rule,  # Use the type required by the rule
                        invert_normal_green,
                        file_type_definitions,
                        input_log_prefix
                    )
                    channel_transform_notes.extend(transform_notes)
                else:
                    # This case should be prevented by the fallback logic, but as a safeguard:
                    result.error_message = f"Input data for channel '{channel_char}' is None after load/fallback attempt."
                    log.error(f"{log_prefix}: {result.error_message} This indicates an internal logic error.")
                    return result

                loaded_inputs_for_merge[channel_char] = input_image_data
                all_transform_notes[channel_char] = channel_transform_notes

            result.transformations_applied_to_inputs = all_transform_notes  # Store notes

            # --- Handle Dimension Mismatches (using transformed inputs) ---
            log.debug(f"{log_prefix}: Handling dimension mismatches...")
            unique_dimensions = set(actual_input_dimensions)
            target_merge_dims_hw = target_dimensions_hw  # Default

            if len(unique_dimensions) > 1:
                log.warning(f"{log_prefix}: Mismatched dimensions found among loaded inputs: {unique_dimensions}. Applying strategy: {merge_dimension_mismatch_strategy}")
                mismatch_note = f"Mismatched input dimensions ({unique_dimensions}), applied {merge_dimension_mismatch_strategy}"
                # TODO: record mismatch_note once ProcessedMergedMapData has a place for general notes

                if merge_dimension_mismatch_strategy == "ERROR_SKIP":
                    result.error_message = "Dimension mismatch and strategy is ERROR_SKIP."
                    log.error(f"{log_prefix}: {result.error_message}")
                    return result
                elif merge_dimension_mismatch_strategy == "USE_LARGEST":
                    max_h = max(h for h, w in unique_dimensions)
                    max_w = max(w for h, w in unique_dimensions)
                    target_merge_dims_hw = (max_h, max_w)
                elif merge_dimension_mismatch_strategy == "USE_FIRST":
                    target_merge_dims_hw = actual_input_dimensions[0] if actual_input_dimensions else target_dimensions_hw
                # Add other strategies or default to USE_LARGEST

            log.info(f"{log_prefix}: Resizing inputs to target merge dimensions: {target_merge_dims_hw}")
            # Resize loaded inputs (not fallbacks, which were created at target dimensions)
            for channel_char, img_data in loaded_inputs_for_merge.items():
                # Only resize if it was a loaded input that contributed to the mismatch check
                if img_data.shape[:2] in unique_dimensions and img_data.shape[:2] != target_merge_dims_hw:
                    resized_img = ipu.resize_image(img_data, target_merge_dims_hw[1], target_merge_dims_hw[0])  # w, h
                    if resized_img is None:
                        result.error_message = f"Failed to resize input for channel '{channel_char}' to {target_merge_dims_hw}."
                        log.error(f"{log_prefix}: {result.error_message}")
                        return result
                    loaded_inputs_for_merge[channel_char] = resized_img
                    log.debug(f"{log_prefix}: Resized input for channel '{channel_char}'.")

            # --- Perform Merge ---
            log.debug(f"{log_prefix}: Performing merge operation for channels '{merge_channels_order}'.")
            try:
                output_channels = len(merge_channels_order)
                h, w = target_merge_dims_hw  # Use the potentially adjusted dimensions

                # Determine output dtype (e.g., based on inputs or config) -- assume uint8 for now
                output_dtype = np.uint8

                if output_channels == 1:
                    # Assume the first channel in the order is the one to use
                    channel_char_to_use = merge_channels_order[0]
                    source_img = loaded_inputs_for_merge[channel_char_to_use]
                    # Ensure it's grayscale (take the first channel if it's multi-channel)
                    if len(source_img.shape) == 3:
                        merged_image = source_img[:, :, 0].copy().astype(output_dtype)
                    else:
                        merged_image = source_img.copy().astype(output_dtype)
                elif output_channels > 1:
                    merged_image = np.zeros((h, w, output_channels), dtype=output_dtype)
                    for i, channel_char in enumerate(merge_channels_order):
                        source_img = loaded_inputs_for_merge.get(channel_char)
                        if source_img is not None:
                            # Extract the correct channel (e.g., R from RGB, or use grayscale directly)
                            if len(source_img.shape) == 3:
                                # Simple approach: take the first channel if the source is color.
                                # Needs refinement if specific channel mapping (R->R, G->G, etc.) is required.
                                merged_image[:, :, i] = source_img[:, :, 0]
                            else:  # Grayscale source
                                merged_image[:, :, i] = source_img
                        else:
                            # This case should have been caught by the fallback logic earlier
                            result.error_message = f"Internal error: Missing prepared input for channel '{channel_char}' during final merge assembly."
                            log.error(f"{log_prefix}: {result.error_message}")
                            return result
                else:
                    result.error_message = f"Invalid channel_order '{merge_channels_order}' in merge config."
                    log.error(f"{log_prefix}: {result.error_message}")
                    return result

                result.merged_image_data = merged_image
                result.final_dimensions = (merged_image.shape[1], merged_image.shape[0])  # (w, h)
                result.source_bit_depths = list(input_source_bit_depths.values())  # Collect bit depths used
                log.info(f"{log_prefix}: Successfully merged inputs into image with shape {result.merged_image_data.shape}")

            except Exception as e:
                log.exception(f"{log_prefix}: Error during merge operation: {e}")
                result.error_message = f"Merge operation failed: {e}"
                return result

            # --- Success ---
            result.status = "Processed"
            result.error_message = None
            log.info(f"{log_prefix}: Successfully processed merge task.")

        except Exception as e:
            log.exception(f"{log_prefix}: Unhandled exception during processing: {e}")
            result.status = "Failed"
            result.error_message = f"Unhandled exception: {e}"
            # Ensure image data is empty on failure
            if result.merged_image_data is None or result.merged_image_data.size == 0:
                result.merged_image_data = np.array([])

        return result
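The dtype-aware inversion at the heart of the Gloss-to-Rough branch in `_apply_in_memory_transformations` reduces to the following standalone sketch (the function name is illustrative, not part of the codebase):

```python
import numpy as np

def invert_gloss_to_rough(img: np.ndarray) -> np.ndarray:
    """Dtype-aware inversion: float images are flipped within [0, 1],
    integer images are flipped against their dtype's max value."""
    if np.issubdtype(img.dtype, np.floating):
        return np.clip(1.0 - img, 0.0, 1.0)
    if np.issubdtype(img.dtype, np.integer):
        return np.iinfo(img.dtype).max - img
    raise TypeError(f"Unsupported dtype for gloss inversion: {img.dtype}")
```

Flipping against `np.iinfo(...).max` rather than a hard-coded 255 is what lets the same code handle 8-bit and 16-bit sources.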
@@ -5,10 +5,10 @@ from typing import List, Dict, Optional

from .base_stage import ProcessingStage
from ..asset_context import AssetProcessingContext
from utils.path_utils import generate_path_from_pattern, sanitize_filename
from utils.path_utils import generate_path_from_pattern, sanitize_filename, get_filename_friendly_map_type  # Absolute import
from rule_structure import FileRule  # Assuming these are needed for type hints if not directly in context


log = logging.getLogger(__name__)
logger = logging.getLogger(__name__)

class OutputOrganizationStage(ProcessingStage):
@@ -17,6 +17,8 @@ class OutputOrganizationStage(ProcessingStage):
    """

    def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
        log.info("OUTPUT_ORG: Stage execution started for asset '%s'", context.asset_rule.asset_name)
        log.info(f"OUTPUT_ORG: context.processed_maps_details at start: {context.processed_maps_details}")
        """
        Copies temporary processed and merged files to their final output locations
        based on path patterns and updates AssetProcessingContext.
@@ -45,17 +47,19 @@ class OutputOrganizationStage(ProcessingStage):
        logger.debug(f"Asset '{asset_name_for_log}': Organizing {len(context.processed_maps_details)} processed individual map entries.")
        for processed_map_key, details in context.processed_maps_details.items():
            map_status = details.get('status')
            base_map_type = details.get('map_type', 'unknown_map_type')  # Final filename-friendly type
            # Retrieve the internal map type first
            internal_map_type = details.get('internal_map_type', 'unknown_map_type')
            # Convert internal type to filename-friendly type using the helper
            file_type_definitions = getattr(context.config_obj, "FILE_TYPE_DEFINITIONS", {})
            base_map_type = get_filename_friendly_map_type(internal_map_type, file_type_definitions)  # Final filename-friendly type

            # --- Handle maps processed by the Unified Save Utility ---
            if map_status == 'Processed_Via_Save_Utility':
                saved_files_info = details.get('saved_files_info')
                if not saved_files_info or not isinstance(saved_files_info, list):
                    logger.warning(f"Asset '{asset_name_for_log}': Map key '{processed_map_key}' (status '{map_status}') has missing or invalid 'saved_files_info'. Skipping organization.")
                    details['status'] = 'Organization Failed (Missing saved_files_info)'
                    continue
            # --- Handle maps processed by the SaveVariantsStage (identified by having saved_files_info) ---
            saved_files_info = details.get('saved_files_info')  # This is a list of dicts from SaveVariantsOutput

                logger.debug(f"Asset '{asset_name_for_log}': Organizing {len(saved_files_info)} variants for map key '{processed_map_key}' (map type: {base_map_type}) from Save Utility.")
            # Check if 'saved_files_info' exists and is a non-empty list.
            # This indicates the item was processed by SaveVariantsStage.
            if saved_files_info and isinstance(saved_files_info, list) and len(saved_files_info) > 0:
                logger.debug(f"Asset '{asset_name_for_log}': Organizing {len(saved_files_info)} variants for map key '{processed_map_key}' (map type: {base_map_type}) from SaveVariantsStage.")

                map_metadata_entry = context.asset_metadata.setdefault('maps', {}).setdefault(processed_map_key, {})
                map_metadata_entry['map_type'] = base_map_type
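The nested `setdefault` idiom used above to guarantee the per-map metadata slot exists before writing into it works as follows (the helper name is illustrative):

```python
def record_map_metadata(asset_metadata: dict, map_key: str, map_type: str) -> dict:
    """Ensure asset_metadata['maps'][map_key] exists, then record the map type.
    setdefault returns the existing dict if present, so repeated calls
    update the same entry instead of replacing it."""
    entry = asset_metadata.setdefault('maps', {}).setdefault(map_key, {})
    entry['map_type'] = map_type
    return entry
```

This keeps the organization loop idempotent: re-running it for the same map key mutates one shared entry rather than clobbering metadata written by earlier passes.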
92  processing/pipeline/stages/prepare_processing_items.py  Normal file
@@ -0,0 +1,92 @@
import logging
from typing import List, Union, Optional

from .base_stage import ProcessingStage
from ..asset_context import AssetProcessingContext, MergeTaskDefinition
from rule_structure import FileRule  # Assuming FileRule is imported correctly

log = logging.getLogger(__name__)


class PrepareProcessingItemsStage(ProcessingStage):
    """
    Identifies and prepares a unified list of items (FileRule, MergeTaskDefinition)
    to be processed in subsequent stages. Performs initial validation.
    """

    def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
        """
        Populates context.processing_items with FileRule and MergeTaskDefinition objects.
        """
        asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
        log.info(f"Asset '{asset_name_for_log}': Preparing processing items...")

        if context.status_flags.get('skip_asset', False):
            log.info(f"Asset '{asset_name_for_log}': Skipping item preparation due to skip_asset flag.")
            context.processing_items = []
            return context

        items_to_process: List[Union[FileRule, MergeTaskDefinition]] = []
        preparation_failed = False

        # --- Add regular files ---
        if context.files_to_process:
            # Validate source path early for regular files
            source_path_valid = True
            if not context.source_rule or not context.source_rule.input_path:
                log.error(f"Asset '{asset_name_for_log}': SourceRule or SourceRule.input_path is not set. Cannot process regular files.")
                source_path_valid = False
                preparation_failed = True  # Mark as failed if source path is missing
                context.status_flags['prepare_items_failed_reason'] = "SourceRule.input_path missing"
            elif not context.workspace_path or not context.workspace_path.is_dir():
                log.error(f"Asset '{asset_name_for_log}': Workspace path '{context.workspace_path}' is not a valid directory. Cannot process regular files.")
                source_path_valid = False
                preparation_failed = True  # Mark as failed if workspace path is bad
                context.status_flags['prepare_items_failed_reason'] = "Workspace path invalid"

            if source_path_valid:
                for file_rule in context.files_to_process:
                    # Basic validation for the FileRule itself
                    if not file_rule.file_path:
                        log.warning(f"Asset '{asset_name_for_log}': Skipping FileRule with empty file_path.")
                        continue  # Skip this specific rule, but don't fail the whole stage
                    items_to_process.append(file_rule)
                log.debug(f"Asset '{asset_name_for_log}': Added {len(context.files_to_process)} potential FileRule items.")
            else:
                log.warning(f"Asset '{asset_name_for_log}': Skipping addition of all FileRule items due to invalid source/workspace path.")

        # --- Add merged tasks ---
        merged_tasks_attr_name = 'merged_image_tasks'  # Check attribute name if different
        if hasattr(context, merged_tasks_attr_name) and getattr(context, merged_tasks_attr_name):
            merged_tasks_list = getattr(context, merged_tasks_attr_name)
            if isinstance(merged_tasks_list, list):
                for task_idx, task_data in enumerate(merged_tasks_list):
                    if isinstance(task_data, dict):
                        task_key = f"merged_task_{task_idx}"
                        # Basic validation for merge task data (can be expanded)
                        if not task_data.get('output_map_type') or not task_data.get('merge_rule_config'):
                            log.warning(f"Asset '{asset_name_for_log}', Task Index {task_idx}: Skipping merge task due to missing 'output_map_type' or 'merge_rule_config'.")
                            continue  # Skip this specific task
                        items_to_process.append(MergeTaskDefinition(task_data=task_data, task_key=task_key))
                    else:
                        log.warning(f"Asset '{asset_name_for_log}': Item at index {task_idx} in '{merged_tasks_attr_name}' is not a dictionary. Skipping.")
                log.debug(f"Asset '{asset_name_for_log}': Added {len(merged_tasks_list)} potential MergeTaskDefinition items.")
            else:
                log.warning(f"Asset '{asset_name_for_log}': Attribute '{merged_tasks_attr_name}' is not a list. Skipping merge tasks.")

        if not items_to_process:
            log.info(f"Asset '{asset_name_for_log}': No valid items found to process after preparation.")

        context.processing_items = items_to_process
        context.intermediate_results = {}  # Initialize intermediate results storage

        if preparation_failed:
            # Set a flag indicating failure during preparation, even if some items were added before the failure
            context.status_flags['prepare_items_failed'] = True
            log.error(f"Asset '{asset_name_for_log}': Item preparation failed. Reason: {context.status_flags.get('prepare_items_failed_reason', 'Unknown')}")
            # Optionally, clear items if failure means nothing should proceed
            # context.processing_items = []

        log.info(f"Asset '{asset_name_for_log}': Finished preparing items. Found {len(context.processing_items)} valid items.")
        return context
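Downstream of this stage, an orchestrator can route the unified `processing_items` list by item type. A minimal sketch using a simplified stand-in for `MergeTaskDefinition` (the real class lives in `asset_context`; this `dispatch` helper is hypothetical):

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Tuple

@dataclass
class MergeTaskDefinition:  # simplified stand-in for the real context class
    task_data: Dict[str, Any]
    task_key: str

def dispatch(items: List[Any]) -> List[Tuple[str, Any]]:
    """Route each prepared item to the processor that handles it:
    MergeTaskDefinition -> merged-task processor, everything else
    (FileRule) -> regular-map processor."""
    routed = []
    for item in items:
        if isinstance(item, MergeTaskDefinition):
            routed.append(("merge", item.task_key))
        else:
            routed.append(("regular", getattr(item, "file_path", None)))
    return routed
```

Keeping the dispatch decision in one place is what lets `RegularMapProcessorStage` and `MergedTaskProcessorStage` each stay single-purpose.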
257  processing/pipeline/stages/regular_map_processor.py  Normal file
@@ -0,0 +1,257 @@
|
||||
import logging
|
||||
import re
|
||||
from pathlib import Path
|
||||
from typing import List, Optional, Tuple, Dict
|
||||
|
||||
import cv2
|
||||
import numpy as np
|
||||
|
||||
from .base_stage import ProcessingStage # Assuming base_stage is in the same directory
|
||||
from ..asset_context import AssetProcessingContext, ProcessedRegularMapData
|
||||
from rule_structure import FileRule, AssetRule
|
||||
from processing.utils import image_processing_utils as ipu # Absolute import
|
||||
from utils.path_utils import get_filename_friendly_map_type # Absolute import
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class RegularMapProcessorStage(ProcessingStage):
    """
    Processes a single regular texture map defined by a FileRule.
    Loads the image, determines the map type, applies transformations,
    and returns the processed data.
    """

    # --- Helper Methods (Adapted from IndividualMapProcessingStage) ---

    def _get_suffixed_internal_map_type(
        self,
        asset_rule: Optional[AssetRule],
        current_file_rule: FileRule,
        initial_internal_map_type: str,
        respect_variant_map_types: List[str],
        asset_name_for_log: str
    ) -> str:
        """
        Determines the potentially suffixed internal map type (e.g., MAP_COL-1).
        """
        final_internal_map_type = initial_internal_map_type  # Default

        base_map_type_match = re.match(r"(MAP_[A-Z]{3})", initial_internal_map_type)
        if not base_map_type_match or not asset_rule or not asset_rule.files:
            return final_internal_map_type  # Cannot determine suffix without a base type or asset rule files

        true_base_map_type = base_map_type_match.group(1)  # This is "MAP_XXX"

        # Find all FileRules in the asset with the same base map type
        peers_of_same_base_type = []
        for fr_asset in asset_rule.files:
            fr_asset_item_type = fr_asset.item_type_override or fr_asset.item_type or "UnknownMapType"
            fr_asset_base_match = re.match(r"(MAP_[A-Z]{3})", fr_asset_item_type)
            if fr_asset_base_match and fr_asset_base_match.group(1) == true_base_map_type:
                peers_of_same_base_type.append(fr_asset)

        num_occurrences = len(peers_of_same_base_type)
        current_instance_index = 0  # 1-based index; 0 means "not found"

        try:
            # Find the index based on the FileRule object itself (requires object identity)
            current_instance_index = peers_of_same_base_type.index(current_file_rule) + 1
        except ValueError:
            # Fallback: try matching by file_path if object identity fails (less reliable)
            try:
                current_instance_index = [fr.file_path for fr in peers_of_same_base_type].index(current_file_rule.file_path) + 1
                log.warning(f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}': Found peer index using file_path fallback for suffixing.")
            except (ValueError, AttributeError):  # Catch AttributeError if file_path is None
                log.warning(
                    f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}' (Initial Type: '{initial_internal_map_type}', Base: '{true_base_map_type}'): "
                    f"Could not find its own instance in the list of {num_occurrences} peers from asset_rule.files using object identity or path. Suffixing may be incorrect."
                )
                # Keep index 0; the suffix logic below handles it

        # Determine the suffix
        map_type_for_respect_check = true_base_map_type.replace("MAP_", "")  # e.g., "COL"
        is_in_respect_list = map_type_for_respect_check in respect_variant_map_types

        suffix_to_append = ""
        if num_occurrences > 1:
            if current_instance_index > 0:
                suffix_to_append = f"-{current_instance_index}"
            else:
                # If the index is still 0 (not found), omit the suffix to avoid ambiguity
                log.warning(f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}': Index for multi-occurrence map type '{true_base_map_type}' (count: {num_occurrences}) not determined. Omitting numeric suffix.")
        elif num_occurrences == 1 and is_in_respect_list:
            suffix_to_append = "-1"  # Add a suffix even for a single instance if it is in the respect list

        if suffix_to_append:
            final_internal_map_type = true_base_map_type + suffix_to_append

        if final_internal_map_type != initial_internal_map_type:
            log.debug(f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}': Suffixed internal map type determined: '{initial_internal_map_type}' -> '{final_internal_map_type}'")

        return final_internal_map_type

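The suffixing rule above can be sketched in isolation. This is a hypothetical standalone helper (the real method operates on FileRule objects and asset rules, not plain strings): peers sharing a base type get `-N` suffixes by position, and a lone instance is suffixed `-1` only when its base type is in the respect list.

```python
import re
from typing import List

def suffixed_map_type(peer_types: List[str], index: int, respect_list: List[str]) -> str:
    """Sketch of the variant-suffixing rule: peer_types is the asset's map types in order."""
    base_match = re.match(r"(MAP_[A-Z]{3})", peer_types[index])
    if not base_match:
        return peer_types[index]
    base = base_match.group(1)
    # Peers are all entries sharing the same base type; position is this entry's 1-based rank
    peer_indices = [i for i, t in enumerate(peer_types) if t.startswith(base)]
    position = peer_indices.index(index) + 1
    if len(peer_indices) > 1:
        return f"{base}-{position}"
    if base.replace("MAP_", "") in respect_list:
        return f"{base}-1"  # single instance, but the respect list forces a suffix
    return base
```

For example, the second of two MAP_COL entries becomes `MAP_COL-2`, while a single MAP_NRM stays unsuffixed unless "NRM" is in the respect list.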
    def _apply_in_memory_transformations(
        self,
        image_data: np.ndarray,
        processing_map_type: str,  # The potentially suffixed internal type
        invert_normal_green: bool,
        file_type_definitions: Dict[str, Dict],
        log_prefix: str
    ) -> Tuple[np.ndarray, str, List[str]]:
        """
        Applies in-memory transformations (Gloss-to-Rough, Normal Green Invert).
        Returns the potentially transformed image data, the potentially updated map type, and notes.
        """
        transformation_notes = []
        current_image_data = image_data  # Start with the original data
        updated_processing_map_type = processing_map_type  # Start with the original type

        # Gloss-to-Rough: check whether the base type is Gloss (before any suffix)
        if re.match(r"(MAP_GLOSS)", processing_map_type):
            log.info(f"{log_prefix}: Applying Gloss-to-Rough conversion.")
            inversion_succeeded = False
            if np.issubdtype(current_image_data.dtype, np.floating):
                current_image_data = np.clip(1.0 - current_image_data, 0.0, 1.0)
                log.debug(f"{log_prefix}: Inverted float image data for Gloss->Rough.")
                inversion_succeeded = True
            elif np.issubdtype(current_image_data.dtype, np.integer):
                max_val = np.iinfo(current_image_data.dtype).max
                current_image_data = max_val - current_image_data
                log.debug(f"{log_prefix}: Inverted integer image data (max_val: {max_val}) for Gloss->Rough.")
                inversion_succeeded = True
            else:
                log.error(f"{log_prefix}: Unsupported image data type {current_image_data.dtype} for GLOSS map. Cannot invert.")
                transformation_notes.append("Gloss-to-Rough FAILED (unsupported dtype)")

            if inversion_succeeded:
                # Update the type string itself (e.g., MAP_GLOSS-1 -> MAP_ROUGH-1)
                updated_processing_map_type = processing_map_type.replace("GLOSS", "ROUGH")
                log.info(f"{log_prefix}: Map type updated: '{processing_map_type}' -> '{updated_processing_map_type}'")
                transformation_notes.append("Gloss-to-Rough applied")

        # Normal Green Invert: check whether the base type is Normal (before any suffix)
        if re.match(r"(MAP_NRM)", processing_map_type) and invert_normal_green:
            log.info(f"{log_prefix}: Applying Normal Map Green Channel Inversion (Global Setting).")
            current_image_data = ipu.invert_normal_map_green_channel(current_image_data)
            transformation_notes.append("Normal Green Inverted (Global)")

        return current_image_data, updated_processing_map_type, transformation_notes

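The dtype-aware Gloss-to-Rough inversion above can be exercised standalone. A minimal sketch: floats invert around 1.0 (then clamp), integers invert around the dtype's maximum value, and anything else is rejected.

```python
import numpy as np

def invert_gloss(img: np.ndarray) -> np.ndarray:
    """Invert a gloss map into a roughness map, respecting the array dtype."""
    if np.issubdtype(img.dtype, np.floating):
        # Float images are assumed normalized to [0, 1]
        return np.clip(1.0 - img, 0.0, 1.0)
    if np.issubdtype(img.dtype, np.integer):
        # Integer images invert around the dtype's full range (255 for uint8, 65535 for uint16)
        return np.iinfo(img.dtype).max - img
    raise TypeError(f"Unsupported dtype for gloss inversion: {img.dtype}")
```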
    # --- Execute Method ---

    def execute(
        self,
        context: AssetProcessingContext,
        file_rule: FileRule  # Specific item passed by the orchestrator
    ) -> ProcessedRegularMapData:
        """
        Processes the given FileRule item.
        """
        asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
        log_prefix = f"Asset '{asset_name_for_log}', File '{file_rule.file_path}'"
        log.info(f"{log_prefix}: Processing regular map.")

        # Initialize the output object with a default failure state
        result = ProcessedRegularMapData(
            processed_image_data=np.array([]),  # Placeholder
            final_internal_map_type="Unknown",
            source_file_path=Path(file_rule.file_path or "InvalidPath"),
            original_bit_depth=None,
            original_dimensions=None,
            transformations_applied=[],
            status="Failed",
            error_message="Initialization error"
        )

        try:
            # --- Configuration ---
            config = context.config_obj
            file_type_definitions = getattr(config, "FILE_TYPE_DEFINITIONS", {})
            respect_variant_map_types = getattr(config, "respect_variant_map_types", [])
            invert_normal_green = config.invert_normal_green_globally

            # --- Determine the map type (with suffix) ---
            initial_internal_map_type = file_rule.item_type_override or file_rule.item_type or "UnknownMapType"
            if not initial_internal_map_type or initial_internal_map_type == "UnknownMapType":
                result.error_message = "Map type (item_type) not defined in FileRule."
                log.error(f"{log_prefix}: {result.error_message}")
                return result  # Early exit

            processing_map_type = self._get_suffixed_internal_map_type(
                context.asset_rule, file_rule, initial_internal_map_type, respect_variant_map_types, asset_name_for_log
            )
            result.final_internal_map_type = processing_map_type  # Store the initial suffixed type

            # --- Find and load the source file ---
            if not file_rule.file_path:  # Should have been caught by the Prepare stage, but double-check
                result.error_message = "FileRule has empty file_path."
                log.error(f"{log_prefix}: {result.error_message}")
                return result

            source_base_path = context.workspace_path
            potential_source_path = source_base_path / file_rule.file_path
            source_file_path_found: Optional[Path] = None

            if potential_source_path.is_file():
                source_file_path_found = potential_source_path
                log.info(f"{log_prefix}: Found source file: {source_file_path_found}")
            else:
                # Optional: add a globbing fallback if needed, similar to the original stage
                log.warning(f"{log_prefix}: Source file not found directly at '{potential_source_path}'. Add globbing if necessary.")
                result.error_message = f"Source file not found at '{potential_source_path}'"
                log.error(f"{log_prefix}: {result.error_message}")
                return result

            result.source_file_path = source_file_path_found  # Update the result with the found path

            # Load the image
            source_image_data = ipu.load_image(str(source_file_path_found))
            if source_image_data is None:
                result.error_message = f"Failed to load image from '{source_file_path_found}'."
                log.error(f"{log_prefix}: {result.error_message}")
                return result

            original_height, original_width = source_image_data.shape[:2]
            result.original_dimensions = (original_width, original_height)
            log.debug(f"{log_prefix}: Loaded image {original_width}x{original_height}.")

            # Get the original bit depth
            try:
                result.original_bit_depth = ipu.get_image_bit_depth(str(source_file_path_found))
                log.info(f"{log_prefix}: Determined source bit depth: {result.original_bit_depth}")
            except Exception as e:
                log.warning(f"{log_prefix}: Could not determine source bit depth for {source_file_path_found}: {e}. Setting to None.")
                result.original_bit_depth = None  # Indicate failure to determine

            # --- Apply transformations ---
            transformed_image_data, final_map_type, transform_notes = self._apply_in_memory_transformations(
                source_image_data.copy(),  # Pass a copy to avoid modifying the loaded original
                processing_map_type,
                invert_normal_green,
                file_type_definitions,
                log_prefix
            )
            result.processed_image_data = transformed_image_data
            result.final_internal_map_type = final_map_type  # Updated if Gloss->Rough changed it
            result.transformations_applied = transform_notes

            # --- Success ---
            result.status = "Processed"
            result.error_message = None
            log.info(f"{log_prefix}: Successfully processed regular map. Final type: '{result.final_internal_map_type}'.")

        except Exception as e:
            log.exception(f"{log_prefix}: Unhandled exception during processing: {e}")
            result.status = "Failed"
            result.error_message = f"Unhandled exception: {e}"
            # Ensure image data is empty on failure if it wasn't set
            if result.processed_image_data is None or result.processed_image_data.size == 0:
                result.processed_image_data = np.array([])

        return result
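The execute() method above follows a "fail-first" result-object pattern: the output is constructed in a failed state and only flipped to success after every step completes, so any early return or exception leaves a coherent failure record. A minimal sketch of the pattern (the field names here are illustrative, not the real ProcessedRegularMapData schema):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MapResult:
    # Defaults encode the failure state; success must be set explicitly
    status: str = "Failed"
    error_message: Optional[str] = "Initialization error"
    transformations_applied: List[str] = field(default_factory=list)

def process(source_found: bool) -> MapResult:
    result = MapResult()
    try:
        if not source_found:
            result.error_message = "Source file not found."
            return result  # early exit keeps the failure state intact
        result.transformations_applied.append("Gloss-to-Rough applied")
        result.status = "Processed"
        result.error_message = None
    except Exception as e:
        result.error_message = f"Unhandled exception: {e}"
    return result
```

The design choice is that callers never see a half-initialized success: any path that does not reach the final assignments reports "Failed" with a message.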
88	processing/pipeline/stages/save_variants.py	Normal file
@@ -0,0 +1,88 @@
import logging
from typing import List, Dict, Optional  # Added Optional

import numpy as np

from .base_stage import ProcessingStage
# Import the necessary context classes and utilities
from ..asset_context import SaveVariantsInput, SaveVariantsOutput
from processing.utils import image_saving_utils as isu  # Absolute import
from utils.path_utils import get_filename_friendly_map_type  # Absolute import

log = logging.getLogger(__name__)


class SaveVariantsStage(ProcessingStage):
    """
    Takes final processed image data and configuration, calls the
    save_image_variants utility, and returns the results.
    """

    def execute(self, input_data: SaveVariantsInput) -> SaveVariantsOutput:
        """
        Calls isu.save_image_variants with data from input_data.
        """
        internal_map_type = input_data.internal_map_type
        log_prefix = f"Save Variants Stage (Type: {internal_map_type})"
        log.info(f"{log_prefix}: Starting.")

        # Initialize the output object with a default failure state
        result = SaveVariantsOutput(
            saved_files_details=[],
            status="Failed",
            error_message="Initialization error"
        )

        if input_data.image_data is None or input_data.image_data.size == 0:
            result.error_message = "Input image data is None or empty."
            log.error(f"{log_prefix}: {result.error_message}")
            return result

        try:
            # --- Prepare arguments for save_image_variants ---

            # Get the filename-friendly base map type using the helper.
            # This assumes the save utility expects the friendly type; adjust if needed.
            base_map_type_friendly = get_filename_friendly_map_type(
                internal_map_type, input_data.file_type_defs
            )
            log.debug(f"{log_prefix}: Using filename-friendly base type '{base_map_type_friendly}' for saving.")

            save_args = {
                "source_image_data": input_data.image_data,
                "base_map_type": base_map_type_friendly,  # Use the friendly type
                "source_bit_depth_info": input_data.source_bit_depth_info,
                "image_resolutions": input_data.image_resolutions,
                "file_type_defs": input_data.file_type_defs,
                "output_format_8bit": input_data.output_format_8bit,
                "output_format_16bit_primary": input_data.output_format_16bit_primary,
                "output_format_16bit_fallback": input_data.output_format_16bit_fallback,
                "png_compression_level": input_data.png_compression_level,
                "jpg_quality": input_data.jpg_quality,
                "output_filename_pattern_tokens": input_data.output_filename_pattern_tokens,
                "output_filename_pattern": input_data.output_filename_pattern,
            }

            log.debug(f"{log_prefix}: Calling save_image_variants utility.")
            saved_files_details: List[Dict] = isu.save_image_variants(**save_args)

            if saved_files_details:
                log.info(f"{log_prefix}: Save utility completed successfully. Saved {len(saved_files_details)} variants.")
                result.saved_files_details = saved_files_details
                result.status = "Processed"
                result.error_message = None
            else:
                # Not necessarily an error; perhaps no variants were configured
                log.warning(f"{log_prefix}: Save utility returned no saved file details. This may be expected if no resolutions/formats matched.")
                result.saved_files_details = []
                result.status = "Processed (No Output)"  # Processing happened but nothing was saved
                result.error_message = "Save utility reported no files saved (check configuration/resolutions)."

        except Exception as e:
            log.exception(f"{log_prefix}: Error calling or executing save_image_variants: {e}")
            result.status = "Failed"
            result.error_message = f"Save utility call failed: {e}"
            result.saved_files_details = []  # Ensure an empty list on error

        return result
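The stage above distinguishes three outcomes: a hard failure (exception), a normal save, and an empty-but-valid result ("Processed (No Output)") when the utility returns no files. A hedged sketch of that classification, with a stand-in `save_fn` in place of the real `isu.save_image_variants` (names here are illustrative):

```python
from typing import Callable, Dict, List

def run_save(save_fn: Callable[..., List[Dict]], **save_args) -> Dict:
    """Call a save utility with keyword args and classify the outcome."""
    try:
        saved = save_fn(**save_args)
    except Exception as e:
        return {"status": "Failed", "saved": [], "error": f"Save utility call failed: {e}"}
    if saved:
        return {"status": "Processed", "saved": saved, "error": None}
    # Empty result is not treated as failure: no resolutions/formats may have matched
    return {"status": "Processed (No Output)", "saved": [],
            "error": "Save utility reported no files saved."}
```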
@@ -7,7 +7,7 @@ import tempfile
import logging
from pathlib import Path
from typing import List, Dict, Tuple, Optional, Set

log = logging.getLogger(__name__)
# Attempt to import image processing libraries
try:
    import cv2
@@ -21,7 +21,6 @@ except ImportError as e:
    np = None


try:
    from configuration import Configuration, ConfigurationError
    from rule_structure import SourceRule, AssetRule, FileRule
@@ -50,6 +49,7 @@ if not log.hasHandlers():

from processing.pipeline.orchestrator import PipelineOrchestrator
# from processing.pipeline.asset_context import AssetProcessingContext  # AssetProcessingContext is used by the orchestrator
# Import stages that will be passed to the orchestrator (outer stages)
from processing.pipeline.stages.supplier_determination import SupplierDeterminationStage
from processing.pipeline.stages.asset_skip_logic import AssetSkipLogicStage
from processing.pipeline.stages.metadata_initialization import MetadataInitializationStage
@@ -57,8 +57,8 @@ from processing.pipeline.stages.file_rule_filter import FileRuleFilterStage
 from processing.pipeline.stages.gloss_to_rough_conversion import GlossToRoughConversionStage
 from processing.pipeline.stages.alpha_extraction_to_mask import AlphaExtractionToMaskStage
 from processing.pipeline.stages.normal_map_green_channel import NormalMapGreenChannelStage
-from processing.pipeline.stages.individual_map_processing import IndividualMapProcessingStage
-from processing.pipeline.stages.map_merging import MapMergingStage
+# Removed: from processing.pipeline.stages.individual_map_processing import IndividualMapProcessingStage
+# Removed: from processing.pipeline.stages.map_merging import MapMergingStage
 from processing.pipeline.stages.metadata_finalization_save import MetadataFinalizationAndSaveStage
 from processing.pipeline.stages.output_organization import OutputOrganizationStage
@@ -94,22 +94,33 @@ class ProcessingEngine:
         self.loaded_data_cache: dict = {}  # Cache for loaded/resized data within a single process call

         # --- Pipeline Orchestrator Setup ---
-        self.stages = [
+        # Define pre-item and post-item processing stages
+        pre_item_stages = [
             SupplierDeterminationStage(),
             AssetSkipLogicStage(),
             MetadataInitializationStage(),
             FileRuleFilterStage(),
-            GlossToRoughConversionStage(),
-            AlphaExtractionToMaskStage(),
-            NormalMapGreenChannelStage(),
-            IndividualMapProcessingStage(),
-            MapMergingStage(),
-            MetadataFinalizationAndSaveStage(),
-            OutputOrganizationStage(),
+            GlossToRoughConversionStage(),  # Assumed to run on context.files_to_process if needed by old logic
+            AlphaExtractionToMaskStage(),   # Same assumption as above
+            NormalMapGreenChannelStage(),   # Same assumption as above
+            # Note: the new RegularMapProcessorStage and MergedTaskProcessorStage handle their own transformations
+            # on the specific items they process. These global transformation stages may need review
+            # if they were intended to operate on a broader scope, or if their logic is now fully
+            # encapsulated in the new item-specific processor stages. For now, they are kept as pre-stages.
         ]
+
+        post_item_stages = [
+            OutputOrganizationStage(),           # Must run after all items are saved to temp
+            MetadataFinalizationAndSaveStage(),  # Must run after output organization to have final paths
+        ]

         try:
-            self.pipeline_orchestrator = PipelineOrchestrator(config_obj=self.config_obj, stages=self.stages)
-            log.info("PipelineOrchestrator initialized successfully in ProcessingEngine.")
+            self.pipeline_orchestrator = PipelineOrchestrator(
+                config_obj=self.config_obj,
+                pre_item_stages=pre_item_stages,
+                post_item_stages=post_item_stages
+            )
+            log.info("PipelineOrchestrator initialized successfully in ProcessingEngine with pre and post stages.")
         except Exception as e:
             log.error(f"Failed to initialize PipelineOrchestrator in ProcessingEngine: {e}", exc_info=True)
             self.pipeline_orchestrator = None  # Ensure it's None if init fails
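The pre/post split in the hunk above can be sketched as a minimal orchestrator loop: pre-item stages transform the shared context once, each item is then run through its own processor, and post-item stages run after all items are handled. This is a hypothetical simplification — the real PipelineOrchestrator takes stage objects and an AssetProcessingContext, not callables and a dict.

```python
from typing import Any, Callable, Dict, List

Stage = Callable[[Dict[str, Any]], Dict[str, Any]]

def run_pipeline(context: Dict[str, Any],
                 pre_item_stages: List[Stage],
                 process_item: Callable[[Any], Any],
                 post_item_stages: List[Stage]) -> Dict[str, Any]:
    # Pre-item stages run once on the shared context
    for stage in pre_item_stages:
        context = stage(context)
    # Each prepared item goes through its dedicated processor
    context["results"] = [process_item(item) for item in context.get("items", [])]
    # Post-item stages (e.g., output organization, metadata finalization) run last
    for stage in post_item_stages:
        context = stage(context)
    return context
```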
@@ -163,6 +163,39 @@ def sanitize_filename(name: str) -> str:
     if not name: name = "invalid_name"
     return name

+def get_filename_friendly_map_type(internal_map_type: str, file_type_definitions: Optional[Dict[str, Dict]]) -> str:
+    """Derives a filename-friendly map type from the internal map type."""
+    filename_friendly_map_type = internal_map_type  # Fallback
+    if not file_type_definitions or not isinstance(file_type_definitions, dict):
+        logger.warning(f"Filename-friendly lookup: FILE_TYPE_DEFINITIONS not available or invalid. Falling back to internal type: {internal_map_type}")
+        return filename_friendly_map_type
+
+    base_map_key_val = None
+    suffix_part = ""
+    # Sort keys by length descending to match the longest prefix first (e.g., MAP_ROUGHNESS before MAP_ROUGH)
+    sorted_known_base_keys = sorted(file_type_definitions.keys(), key=len, reverse=True)
+
+    for known_key in sorted_known_base_keys:
+        if internal_map_type.startswith(known_key):
+            base_map_key_val = known_key
+            suffix_part = internal_map_type[len(known_key):]
+            break
+
+    if base_map_key_val:
+        definition = file_type_definitions.get(base_map_key_val)
+        if definition and isinstance(definition, dict):
+            standard_type_alias = definition.get("standard_type")
+            if standard_type_alias and isinstance(standard_type_alias, str) and standard_type_alias.strip():
+                filename_friendly_map_type = standard_type_alias.strip() + suffix_part
+                logger.debug(f"Filename-friendly lookup: Transformed '{internal_map_type}' -> '{filename_friendly_map_type}'")
+            else:
+                logger.warning(f"Filename-friendly lookup: Standard type alias for '{base_map_key_val}' is missing or invalid. Falling back.")
+        else:
+            logger.warning(f"Filename-friendly lookup: No valid definition for '{base_map_key_val}'. Falling back.")
+    else:
+        logger.warning(f"Filename-friendly lookup: Could not parse base key from '{internal_map_type}'. Falling back.")
+
+    return filename_friendly_map_type
+
 # --- Basic Unit Tests ---
 if __name__ == "__main__":
     print("Running basic tests for path_utils.generate_path_from_pattern...")
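The longest-prefix lookup added in the hunk above can be condensed into a standalone sketch: keys are sorted by length descending so that "MAP_ROUGHNESS" wins over "MAP_ROUGH", and any trailing suffix (e.g., "-1") is carried over onto the friendly alias. Every failure path falls back to the internal type unchanged.

```python
from typing import Dict, Optional

def friendly_map_type(internal: str, defs: Optional[Dict[str, Dict]]) -> str:
    """Map an internal type like 'MAP_COL-1' to a friendly alias like 'Color-1'."""
    if not isinstance(defs, dict) or not defs:
        return internal  # definitions unavailable: fall back to the internal type
    # Longest keys first, so more specific base types shadow their prefixes
    for key in sorted(defs, key=len, reverse=True):
        if internal.startswith(key):
            alias = (defs[key] or {}).get("standard_type", "")
            if isinstance(alias, str) and alias.strip():
                return alias.strip() + internal[len(key):]  # keep the suffix
            break  # matched a key but its alias is invalid: fall back
    return internal
```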