# Developer Guide: Processing Pipeline
This document details the step-by-step technical process executed by the asset processing pipeline, which is initiated by the [`ProcessingEngine`](processing_engine.py:73) class (`processing_engine.py`) and orchestrated by the [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36) (`processing/pipeline/orchestrator.py`).
The [`ProcessingEngine.process()`](processing_engine.py:131) method serves as the main entry point. It initializes a [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36) instance, providing it with the application's [`Configuration`](configuration.py:68) object and predefined lists of pre-item and post-item processing stages. The [`PipelineOrchestrator.process_source_rule()`](processing/pipeline/orchestrator.py:95) method then manages the execution of these stages for each asset defined in the input [`SourceRule`](rule_structure.py:40).
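
The handoff can be pictured with a minimal sketch. Class and method names follow this guide, but the constructor signatures, stage lists, and return value are illustrative stubs, not the real implementation:

```python
# Sketch of the entry-point wiring; constructor signatures and the
# stage lists are illustrative stubs.

class PipelineOrchestrator:
    def __init__(self, config, pre_item_stages, post_item_stages):
        self.config = config
        self.pre_item_stages = pre_item_stages
        self.post_item_stages = post_item_stages

    def process_source_rule(self, source_rule):
        # The real method runs every stage for each asset in the SourceRule.
        return f"processed {len(source_rule['assets'])} asset(s)"

class ProcessingEngine:
    def __init__(self, config):
        self.config = config

    def process(self, source_rule):
        # The engine constructs the orchestrator with the configuration
        # and the predefined pre-item / post-item stage lists.
        orchestrator = PipelineOrchestrator(self.config, [], [])
        return orchestrator.process_source_rule(source_rule)

print(ProcessingEngine(config={}).process({"assets": [{"name": "rock_01"}]}))
# prints: processed 1 asset(s)
```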
A crucial component in this architecture is the [`AssetProcessingContext`](processing/pipeline/asset_context.py:86) (`processing/pipeline/asset_context.py`). An instance of this dataclass is created for each [`AssetRule`](rule_structure.py:22) being processed. It acts as a stateful container, carrying all relevant data (source files, rules, configuration, intermediate results, metadata) and is passed sequentially through each stage. Each stage can read from and write to the context, allowing data to flow and be modified throughout the pipeline.
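
A trimmed-down sketch of the context may help. The field names below are taken from this guide; the real dataclass in `processing/pipeline/asset_context.py` carries additional state:

```python
from dataclasses import dataclass, field
from typing import Any, Optional

# Trimmed-down sketch of AssetProcessingContext. Field names are taken
# from this guide; the real dataclass carries additional state.
@dataclass
class AssetProcessingContext:
    asset_rule: Any = None
    effective_supplier: Optional[str] = None
    files_to_process: list = field(default_factory=list)
    processing_items: list = field(default_factory=list)
    asset_metadata: dict = field(default_factory=dict)
    processed_maps_details: dict = field(default_factory=dict)
    merged_maps_details: dict = field(default_factory=dict)
    intermediate_results: dict = field(default_factory=dict)
    status_flags: dict = field(default_factory=dict)

# Each stage mutates the context it receives; later stages read the results.
ctx = AssetProcessingContext()
ctx.effective_supplier = "some_supplier"
ctx.status_flags["skip_asset"] = False
ctx.asset_metadata["asset_name"] = "rock_01"
```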
The pipeline execution for each asset follows this general flow:
1. **Pre-Item Stages:** A sequence of stages executed once per asset before the core item processing loop. These stages typically perform initial setup, filtering, and asset-level transformations.
2. **Core Item Processing Loop:** The [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36) iterates through a list of "processing items" (individual files or merge tasks) prepared by a dedicated stage. For each item, a sequence of core processing stages is executed.
3. **Post-Item Stages:** A sequence of stages executed once per asset after the core item processing loop is complete. These stages handle final tasks like organizing output files and saving metadata.
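
The three phases can be sketched as follows. Here the stages are plain callables and the context a plain dict, whereas the real orchestrator uses stage classes and adds error handling:

```python
# Sketch of the three-phase flow inside process_source_rule(), per asset.

def process_asset(context, pre_item_stages, core_stages, post_item_stages):
    for stage in pre_item_stages:              # phase 1: once per asset
        stage(context)
        if context["status_flags"].get("skip_asset"):
            return context                     # orchestrator halts this asset
    for item in context["processing_items"]:   # phase 2: once per item
        for stage in core_stages:
            stage(context, item)
    for stage in post_item_stages:             # phase 3: once per asset
        stage(context)
    return context

# Toy run: one "stage" per phase, recording execution order.
order = []
ctx = {"status_flags": {}, "processing_items": ["col_map", "nrm_map"]}
process_asset(
    ctx,
    pre_item_stages=[lambda c: order.append("pre")],
    core_stages=[lambda c, i: order.append(f"core:{i}")],
    post_item_stages=[lambda c: order.append("post")],
)
print(order)  # ['pre', 'core:col_map', 'core:nrm_map', 'post']
```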
## Pipeline Stages

Within each phase, the stages are executed in the order listed below for each asset.

### Pre-Item Stages
These stages are executed sequentially once for each asset before the core item processing loop begins.
1. **[`SupplierDeterminationStage`](processing/pipeline/stages/supplier_determination.py:6)** (`processing/pipeline/stages/supplier_determination.py`):
* **Responsibility**: Determines the effective supplier for the asset based on the [`SourceRule`](rule_structure.py:40)'s `supplier_override`, `supplier_identifier`, and validation against configured suppliers.
* **Context Interaction**: Sets `context.effective_supplier` and may set a `supplier_error` flag in `context.status_flags`.
2. **[`AssetSkipLogicStage`](processing/pipeline/stages/asset_skip_logic.py:5)** (`processing/pipeline/stages/asset_skip_logic.py`):
* **Responsibility**: Checks if the entire asset should be skipped based on conditions like a missing/invalid supplier, a "SKIP" status in asset metadata, or if the asset is already processed and overwrite is disabled.
* **Context Interaction**: Sets the `skip_asset` flag and `skip_reason` in `context.status_flags` if the asset should be skipped.
3. **[`MetadataInitializationStage`](processing/pipeline/stages/metadata_initialization.py:81)** (`processing/pipeline/stages/metadata_initialization.py`):
* **Responsibility**: Initializes the `context.asset_metadata` dictionary with base information derived from the [`AssetRule`](rule_structure.py:22), [`SourceRule`](rule_structure.py:40), and [`Configuration`](configuration.py:68). This includes asset name, IDs, source/output paths, timestamps, and initial status.
* **Context Interaction**: Populates `context.asset_metadata`. Initializes `context.processed_maps_details` and `context.merged_maps_details` as empty dictionaries (these are used internally by subsequent stages but are not directly part of the final `metadata.json` in their original form).
4. **[`FileRuleFilterStage`](processing/pipeline/stages/file_rule_filter.py:10)** (`processing/pipeline/stages/file_rule_filter.py`):
* **Responsibility**: Filters the [`FileRule`](rule_structure.py:5) objects associated with the asset to determine which individual files should be considered for processing. It identifies and excludes files matching "FILE_IGNORE" rules based on their `item_type`.
* **Context Interaction**: Populates `context.files_to_process` with the list of [`FileRule`](rule_structure.py:5) objects that are not ignored.
5. **[`GlossToRoughConversionStage`](processing/pipeline/stages/gloss_to_rough_conversion.py:15)** (`processing/pipeline/stages/gloss_to_rough_conversion.py`):
* **Responsibility**: Identifies processed maps in `context.processed_maps_details` whose `internal_map_type` starts with "MAP_GLOSS". If found, it loads the temporary image data, inverts it using the shared utility function [`apply_common_map_transformations`](processing/utils/image_processing_utils.py), saves a new temporary roughness map ("MAP_ROUGH"), and updates the corresponding details in `context.processed_maps_details` (setting `internal_map_type` to "MAP_ROUGH") and the relevant [`FileRule`](rule_structure.py:5) in `context.files_to_process` (setting `item_type` to "MAP_ROUGH").
* **Context Interaction**: Reads from and updates `context.processed_maps_details` (specifically `internal_map_type` and `temp_processed_file`) and `context.files_to_process` (specifically `item_type`).
6. **[`AlphaExtractionToMaskStage`](processing/pipeline/stages/alpha_extraction_to_mask.py:16)** (`processing/pipeline/stages/alpha_extraction_to_mask.py`):
* **Responsibility**: If no mask map is explicitly defined for the asset (as a [`FileRule`](rule_structure.py:5) with `item_type="MAP_MASK"`), this stage searches `context.processed_maps_details` for a suitable source map (e.g., a "MAP_COL" with an alpha channel, based on its `internal_map_type`). If found, it extracts the alpha channel, saves it as a new temporary mask map, and adds a new [`FileRule`](rule_structure.py:5) (with `item_type="MAP_MASK"`) and corresponding details (with `internal_map_type="MAP_MASK"`) to the context.
* **Context Interaction**: Reads from `context.processed_maps_details`, adds a new [`FileRule`](rule_structure.py:5) to `context.files_to_process`, and adds a new entry to `context.processed_maps_details` (setting `internal_map_type`).
7. **[`NormalMapGreenChannelStage`](processing/pipeline/stages/normal_map_green_channel.py:14)** (`processing/pipeline/stages/normal_map_green_channel.py`):
* **Responsibility**: Identifies processed normal maps in `context.processed_maps_details` (those with an `internal_map_type` starting with "MAP_NRM"). If the global `invert_normal_map_green_channel_globally` configuration is true, it loads the temporary image data, inverts the green channel using the shared utility function [`apply_common_map_transformations`](processing/utils/image_processing_utils.py), saves a new temporary modified normal map, and updates the `temp_processed_file` path in `context.processed_maps_details`.
* **Context Interaction**: Reads from and updates `context.processed_maps_details` (specifically `temp_processed_file` and `notes`).
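
Several of the image tweaks above reduce to simple per-pixel arithmetic. The sketch below shows the core of gloss inversion and the green-channel flip on 8-bit values; the real work happens on image arrays inside `apply_common_map_transformations`:

```python
# Per-pixel core of two pre-item transformations, on 8-bit values.
# The pipeline applies these to whole image arrays, not Python lists.

def invert_gloss_to_rough(pixels):
    """roughness = 255 - gloss for a single-channel 8-bit map."""
    return [255 - v for v in pixels]

def flip_green_channel(rgb_pixels):
    """Invert only G; commonly converts between DirectX- and OpenGL-style normals."""
    return [(r, 255 - g, b) for (r, g, b) in rgb_pixels]

print(invert_gloss_to_rough([0, 128, 255]))   # [255, 127, 0]
print(flip_green_channel([(128, 200, 255)]))  # [(128, 55, 255)]
```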
### Core Item Processing Loop
The [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36) iterates through the `context.processing_items` list (populated by the [`PrepareProcessingItemsStage`](processing/pipeline/stages/prepare_processing_items.py:10)). For each item (either a [`FileRule`](rule_structure.py:5) for a regular map or a [`MergeTaskDefinition`](processing/pipeline/asset_context.py:16) for a merged map), the following stages are executed sequentially:
1. **[`PrepareProcessingItemsStage`](processing/pipeline/stages/prepare_processing_items.py:10)** (`processing/pipeline/stages/prepare_processing_items.py`):
* **Responsibility**: (Executed once before the loop) Creates the `context.processing_items` list by combining [`FileRule`](rule_structure.py:5)s from `context.files_to_process` and [`MergeTaskDefinition`](processing/pipeline/asset_context.py:16)s derived from the global `map_merge_rules` configuration. It correctly accesses `map_merge_rules` from `context.config_obj` and validates each merge rule for the presence of `output_map_type` and a dictionary for `inputs`. Initializes `context.intermediate_results`.
* **Context Interaction**: Reads from `context.files_to_process` and `context.config_obj` (accessing `map_merge_rules`). Populates `context.processing_items` and initializes `context.intermediate_results`.
2. **[`RegularMapProcessorStage`](processing/pipeline/stages/regular_map_processor.py:18)** (`processing/pipeline/stages/regular_map_processor.py`):
* **Responsibility**: (Executed per [`FileRule`](rule_structure.py:5) item) Checks if the `FileRule.item_type` starts with "MAP_". If not, the item is skipped. Otherwise, it loads the image data for the file, determines its potentially suffixed internal map type (e.g., "MAP_COL-1"), applies in-memory transformations (Gloss-to-Rough, Normal Green Invert) using the shared utility function [`apply_common_map_transformations`](processing/utils/image_processing_utils.py), and returns the processed image data and details in a [`ProcessedRegularMapData`](processing/pipeline/asset_context.py:23) object. The `internal_map_type` in the output reflects any transformations (e.g., "MAP_GLOSS" becomes "MAP_ROUGH").
* **Context Interaction**: Reads from the input [`FileRule`](rule_structure.py:5) (checking `item_type`) and [`Configuration`](configuration.py:68). Returns a [`ProcessedRegularMapData`](processing/pipeline/asset_context.py:23) object which is stored in `context.intermediate_results`.
3. **[`MergedTaskProcessorStage`](processing/pipeline/stages/merged_task_processor.py:68)** (`processing/pipeline/stages/merged_task_processor.py`):
* **Responsibility**: (Executed per [`MergeTaskDefinition`](processing/pipeline/asset_context.py:16) item) Validates that all input map types specified in the merge rule start with "MAP_". If not, the task is failed. It dynamically loads input images by looking up the required input map types (e.g., "MAP_NRM") in `context.processed_maps_details` and using the temporary file paths from their `saved_files_info`. It applies in-memory transformations to inputs using [`apply_common_map_transformations`](processing/utils/image_processing_utils.py), handles dimension mismatches (with fallback creation if configured and `source_dimensions` are available), performs the channel merging operation, and returns the merged image data and details in a [`ProcessedMergedMapData`](processing/pipeline/asset_context.py:35) object. The `output_map_type` of the merged map must also be "MAP_" prefixed in the configuration.
* **Context Interaction**: Reads from the input [`MergeTaskDefinition`](processing/pipeline/asset_context.py:16) (checking input map types), `context.workspace_path`, `context.processed_maps_details` (for input image data), and [`Configuration`](configuration.py:68). Returns a [`ProcessedMergedMapData`](processing/pipeline/asset_context.py:35) object which is stored in `context.intermediate_results`.
4. **[`InitialScalingStage`](processing/pipeline/stages/initial_scaling.py:14)** (`processing/pipeline/stages/initial_scaling.py`):
* **Responsibility**: (Executed per item) Applies initial scaling (e.g., Power-of-Two downscaling) to the image data from the previous processing stage based on the `initial_scaling_mode` configuration.

* **Context Interaction**: Takes an [`InitialScalingInput`](processing/pipeline/asset_context.py:46) (containing image data and config) and returns an [`InitialScalingOutput`](processing/pipeline/asset_context.py:54) object, which updates the item's entry in `context.intermediate_results`.

5. **[`SaveVariantsStage`](processing/pipeline/stages/save_variants.py:15)** (`processing/pipeline/stages/save_variants.py`):
* **Responsibility**: (Executed per item) Takes the final processed image data (potentially scaled) and configuration, and calls a utility to save the image to temporary files in various resolutions and formats as defined by the configuration.
* **Context Interaction**: Takes a [`SaveVariantsInput`](processing/pipeline/asset_context.py:61) object (which includes the "MAP_" prefixed `internal_map_type`). It uses the `get_filename_friendly_map_type` utility to convert this to a "standard type" (e.g., "COL") for output naming. Returns a [`SaveVariantsOutput`](processing/pipeline/asset_context.py:79) object containing details about the saved temporary files. The orchestrator stores these details, including the original "MAP_" prefixed `internal_map_type`, in `context.processed_maps_details` for the item.
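
Stripped of the surrounding machinery, the channel packing performed for a merge task can be sketched like this (toy 2D lists stand in for image arrays; the rule shape mirrors the `map_merge_rules` configuration described here):

```python
# Channel-packing sketch: build one RGB output whose channels come from
# different single-channel input maps, as a rule like
# {"output_map_type": "MAP_NRMRGH", "inputs": {"R": "MAP_NRM", ...}} implies.

def merge_channels(inputs, rule_inputs):
    """rule_inputs maps channel name -> MAP_ type; inputs maps MAP_ type -> 2D map."""
    for map_type in rule_inputs.values():
        if not map_type.startswith("MAP_"):
            raise ValueError(f"merge input {map_type!r} lacks the MAP_ prefix")
    r, g, b = (inputs[rule_inputs[c]] for c in ("R", "G", "B"))
    height, width = len(r), len(r[0])
    return [[(r[y][x], g[y][x], b[y][x]) for x in range(width)] for y in range(height)]

maps = {"MAP_AO": [[10]], "MAP_ROUGH": [[20]], "MAP_METAL": [[30]]}
rule = {"R": "MAP_AO", "G": "MAP_ROUGH", "B": "MAP_METAL"}
print(merge_channels(maps, rule))  # [[(10, 20, 30)]]
```

The real stage additionally resolves each input's temporary file from `context.processed_maps_details` and reconciles mismatched dimensions before packing.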
### Post-Item Stages
These stages are executed sequentially once for each asset after the core item processing loop has finished for all items.
1. **[`OutputOrganizationStage`](processing/pipeline/stages/output_organization.py:14)** (`processing/pipeline/stages/output_organization.py`):
* **Responsibility**: Determines the final output paths for all processed maps (including variants) and extra files based on configured patterns. It copies the temporary files generated by the core stages to these final destinations, creating directories as needed and respecting overwrite settings.
* **Context Interaction**: Reads from `context.processed_maps_details`, `context.files_to_process` (for 'EXTRA' files), `context.output_base_path`, and [`Configuration`](configuration.py:68). Updates entries in `context.processed_maps_details` with organization status. Populates `context.asset_metadata['maps']` with the final map structure:
* The `maps` object is a dictionary where keys are standard map types (e.g., "COL", "REFL").
* Each entry contains a `variant_paths` dictionary, where keys are resolution strings (e.g., "8K", "4K") and values are the filenames of the map variants (relative to the asset's output directory).
It also populates `context.asset_metadata['final_output_files']` with a list of absolute paths to all generated files (this list itself is not saved in the final `metadata.json`).
2. **[`MetadataFinalizationAndSaveStage`](processing/pipeline/stages/metadata_finalization_save.py:14)** (`processing/pipeline/stages/metadata_finalization_save.py`):
* **Responsibility**: Finalizes the `context.asset_metadata` (setting final status based on flags). It determines the save path for the metadata file based on configuration and patterns, serializes the `context.asset_metadata` (which now contains the structured `maps` data from `OutputOrganizationStage`) to JSON, and saves the `metadata.json` file.
* **Context Interaction**: Reads from `context.asset_metadata` (including the `maps` structure), `context.output_base_path`, and [`Configuration`](configuration.py:68). Before saving, it removes the `final_output_files` key from `context.asset_metadata`, and no `processing_end_time` field is added. The `metadata.json` file is then written, and `context.asset_metadata` is updated with its final path and status. The context's `processed_maps_details` and `merged_maps_details` are not directly included in the JSON.
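
The handoff between these two post-item stages can be illustrated with a small sketch. All values and filenames below are hypothetical; the exact `metadata.json` schema is defined by the stages themselves:

```python
import json

# Illustrative handoff: OutputOrganizationStage builds the 'maps'
# structure, MetadataFinalizationAndSaveStage strips internal-only keys
# and serializes the result. Values are hypothetical.
asset_metadata = {
    "asset_name": "rock_01",
    "maps": {
        "COL": {"variant_paths": {"4K": "rock_01_COL_4K.png", "1K": "rock_01_COL_1K.png"}},
        "NRM": {"variant_paths": {"4K": "rock_01_NRM_4K.png"}},
    },
    # Internal bookkeeping, removed before the JSON is written.
    "final_output_files": ["/out/rock_01/rock_01_COL_4K.png"],
}

def finalize(metadata):
    metadata = dict(metadata)
    metadata.pop("final_output_files", None)  # not part of metadata.json
    return json.dumps(metadata, indent=2)

print("final_output_files" in finalize(asset_metadata))  # False
```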
## External Steps
Certain steps are integral to the overall asset processing workflow but are handled outside the [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36)'s direct execution loop:
* **Workspace Preparation and Cleanup**: Handled by the code that invokes [`ProcessingEngine.process()`](processing_engine.py:131) (e.g., `main.ProcessingTask`, `monitor._process_archive_task`), typically involving extracting archives and setting up temporary directories. The engine itself manages a sub-temporary directory (`engine_temp_dir`) for intermediate processing files.
* **Prediction and Rule Generation**: Performed before the [`ProcessingEngine`](processing_engine.py:73) is called. This involves analyzing source files and generating the [`SourceRule`](rule_structure.py:40) object with its nested [`AssetRule`](rule_structure.py:22)s and [`FileRule`](rule_structure.py:5)s, often involving prediction logic (potentially using LLMs).
* **Optional Blender Script Execution**: Can be triggered externally after successful processing to perform tasks like material setup in Blender using the generated output files and metadata.
This staged pipeline provides a modular and extensible architecture for asset processing, with clear separation of concerns for each step. The [`AssetProcessingContext`](processing/pipeline/asset_context.py:86) ensures that data flows consistently between these stages.

---

*The following plan is kept in `ProjectNotes/MAP_Prefix_Enforcement_Plan.md`.*

# Plan: Enforcing "MAP_" Prefix for Internal Processing and Standard Type for Output Naming
**Date:** 2025-05-13
**I. Goal:**
The primary goal is to ensure that for all internal processing, the system *exclusively* uses `FileRule.item_type` values that start with the "MAP_" prefix (e.g., "MAP_COL", "MAP_NRM"). The "standard type" (e.g., "COL", "NRM") associated with these "MAP_" types (as defined in `config/app_settings.json`) should *only* be used during the file saving stages for output naming. Any `FileRule` whose `item_type` does not start with "MAP_" (and isn't a special type like "EXTRA" or "MODEL") should be skipped by the relevant map processing stages.
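
The split can be reduced to two tiny helpers. The real mapping lives in `config/app_settings.json` and the `get_filename_friendly_map_type` utility; the suffix handling below (e.g. `"MAP_COL-1"`) is an assumption for illustration:

```python
# Sketch of the internal-vs-output naming split: "MAP_"-prefixed types
# everywhere internally, standard types only when composing filenames.
# Name-variant suffix handling ("-1") is an assumption.

def is_processable(item_type):
    return item_type.startswith("MAP_")

def get_filename_friendly_map_type(internal_map_type):
    base, sep, suffix = internal_map_type.partition("-")
    standard = base[len("MAP_"):] if base.startswith("MAP_") else base
    return standard + sep + suffix

print(is_processable("MAP_COL"), is_processable("EXTRA"))  # True False
print(get_filename_friendly_map_type("MAP_COL-1"))         # COL-1
```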
**II. Current State Analysis Summary:**
* **Output Naming:** The use of "standard type" for output filenames via the `get_filename_friendly_map_type` utility in `SaveVariantsStage` and `OutputOrganizationStage` is **correct** and already meets the requirement.
* **Internal "MAP_" Prefix Usage:**
* Some stages like `GlossToRoughConversionStage` correctly check for "MAP_" prefixes (e.g., `processing_map_type.startswith("MAP_GLOSS")`).
* Other stages like `RegularMapProcessorStage` and `MergedTaskProcessorStage` (and its helpers) implicitly expect "MAP_" prefixed types for their internal regex-based logic but lack explicit checks to skip items if the prefix is missing.
* Stages like `AlphaExtractionToMaskStage` and `NormalMapGreenChannelStage` currently use non-"MAP_" prefixed "standard types" (e.g., "NORMAL", "ALBEDO") when reading from `context.processed_maps_details` for their decision-making logic.
* The `PrepareProcessingItemsStage` adds `FileRule`s to the processing queue without filtering based on the "MAP_" prefix in `item_type`.
* **Data Consistency in `AssetProcessingContext`:**
* `FileRule.item_type` is the field that should hold the "MAP_" prefixed type from the initial rule generation.
* `context.processed_maps_details` entries can contain various map type representations:
* `map_type`: Often stores the "standard type" (e.g., "Roughness", "MASK", "NORMAL").
* `processing_map_type` / `internal_map_type`: Generally seem to store the "MAP_" prefixed type. This needs to be consistent.
* **Configuration (`config/app_settings.json`):**
* `FILE_TYPE_DEFINITIONS` correctly use "MAP_" prefixed keys.
* `MAP_MERGE_RULES` need to be reviewed to ensure their `output_map_type` and input map types are "MAP_" prefixed.
**III. Proposed Changes (Code Identification & Recommendations):**
**A. Enforce "MAP_" Prefix for Processing Items (Skipping Logic):**
The core requirement is that processing stages should skip `FileRule` items if their `item_type` doesn't start with "MAP_".
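
A sketch of the recommended guard (the function shape and the `"Skipped (Invalid Type)"` status string are hypothetical):

```python
import logging

# Sketch of the recommended guard at the top of a map-processing stage:
# items whose type lacks the "MAP_" prefix are skipped, not processed.
def execute_stage(file_rule_item_type, logger=logging.getLogger(__name__)):
    initial_internal_map_type = file_rule_item_type
    if not initial_internal_map_type.startswith("MAP_"):
        logger.warning("Skipping item with non-MAP_ type: %s", initial_internal_map_type)
        return {"status": "Skipped (Invalid Type)"}
    return {"status": "Processed", "internal_map_type": initial_internal_map_type}

print(execute_stage("EXTRA")["status"])    # Skipped (Invalid Type)
print(execute_stage("MAP_COL")["status"])  # Processed
```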
1. **`RegularMapProcessorStage` (`processing/pipeline/stages/regular_map_processor.py`):**
* **Identify:** In the `execute` method, `initial_internal_map_type` is derived from `file_rule.item_type_override` or `file_rule.item_type`.
* **Recommend:** Add an explicit check after determining `initial_internal_map_type`. If `initial_internal_map_type` does not start with `"MAP_"`, the stage should log a warning, set the `result.status` to "Skipped (Invalid Type)" or similar, and return `result` early, effectively skipping processing for this item.
2. **`MergedTaskProcessorStage` (`processing/pipeline/stages/merged_task_processor.py`):**
* **Identify:** This stage processes `MergeTaskDefinition`s. The definitions for these tasks (input types, output type) come from `MAP_MERGE_RULES` in `config/app_settings.json`. The stage uses `required_map_type_from_rule` for its inputs.
* **Recommend:**
* **Configuration First:** Review all entries in `MAP_MERGE_RULES` in `config/app_settings.json`.
* Ensure the `output_map_type` for each rule (e.g., "MAP_NRMRGH") starts with "MAP_".
* Ensure all map type values within the `inputs` dictionary (e.g., `"R": "MAP_NRM"`) start with "MAP_".
* **Stage Logic:** In the `execute` method, when iterating through `merge_inputs_config.items()`, check if `required_map_type_from_rule` starts with `"MAP_"`. If not, log a warning and either:
* Skip loading/processing this specific input channel (potentially using its fallback if the overall merge can still proceed).
* Or, if a non-"MAP_" input is critical, fail the entire merge task for this asset.
* The helper `_apply_in_memory_transformations` already uses regex expecting "MAP_" prefixes; this will naturally fail or misbehave if inputs are not "MAP_" prefixed, reinforcing the need for the check above.
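
The configuration review recommended above could be automated with a small validator; the rule shape (an `output_map_type` plus an `inputs` dictionary) is taken from this plan:

```python
# Validate MAP_MERGE_RULES entries as recommended: output_map_type and
# every value in "inputs" must carry the "MAP_" prefix.

def validate_merge_rules(merge_rules):
    problems = []
    for i, rule in enumerate(merge_rules):
        out = rule.get("output_map_type", "")
        if not out.startswith("MAP_"):
            problems.append(f"rule {i}: output_map_type {out!r} lacks MAP_ prefix")
        inputs = rule.get("inputs")
        if not isinstance(inputs, dict):
            problems.append(f"rule {i}: 'inputs' must be a dict")
            continue
        for channel, map_type in inputs.items():
            if not map_type.startswith("MAP_"):
                problems.append(f"rule {i}: input {channel}={map_type!r} lacks MAP_ prefix")
    return problems

rules = [{"output_map_type": "MAP_NRMRGH", "inputs": {"R": "MAP_NRM", "G": "ROUGH"}}]
print(validate_merge_rules(rules))  # ["rule 0: input G='ROUGH' lacks MAP_ prefix"]
```

Such a check could run once at configuration load time, so bad rules fail fast instead of surfacing mid-merge.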
**B. Standardize Map Type Fields and Usage in `context.processed_maps_details`:**
Ensure consistency in how "MAP_" prefixed types are stored and accessed within `context.processed_maps_details` for internal logic (not naming).
1. **Recommendation:** Establish a single, consistent field name within `context.processed_maps_details` to store the definitive "MAP_" prefixed internal map type (e.g., `internal_map_type` or `processing_map_type`). All stages that perform logic based on the specific *kind* of map (e.g., transformations, source selection) should read from this standardized field. The `map_type` field can continue to store the "standard type" (e.g., "Roughness") primarily for informational/metadata purposes if needed, but not for core processing logic.
|
||||||
|
|
||||||
|
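As a concrete illustration of the proposed split, one entry might carry both fields, with only the standardized one driving logic. This is a sketch assuming the `internal_map_type` field name suggested above; the helper is hypothetical.

```python
# Hypothetical entry in context.processed_maps_details for one item.
detail = {
    "internal_map_type": "MAP_ROUGH",  # standardized "MAP_" prefixed type; drives processing
    "map_type": "Roughness",           # standard/display type; informational only
}

def needs_gloss_conversion(details: dict) -> bool:
    # Core logic reads only the standardized field, never the display field.
    return details.get("internal_map_type", "").startswith("MAP_GLOSS")
```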
2. **`AlphaExtractionToMaskStage` (`processing/pipeline/stages/alpha_extraction_to_mask.py`):**
    * **Identify:**
        * Checks for an existing MASK map using `file_rule.map_type == "MASK"`. (Discrepancy: `FileRule` uses `item_type`.)
        * Searches for suitable source maps using `details.get('map_type') in self.SUITABLE_SOURCE_MAP_TYPES`, where `SUITABLE_SOURCE_MAP_TYPES` are standard types like "ALBEDO".
        * When adding new details, it sets `map_type: "MASK"`, and the new `FileRule` gets `item_type="MAP_MASK"`.
    * **Recommend:**
        * Change the check for an existing MASK map to `file_rule.item_type == "MAP_MASK"`.
        * Modify the source map search to use the standardized "MAP_" prefixed field from `details` (e.g., `details.get('internal_map_type')`) and update `SUITABLE_SOURCE_MAP_TYPES` to be "MAP_" prefixed (e.g., "MAP_COL", "MAP_ALBEDO").
        * When adding new details for the created MASK map to `context.processed_maps_details`, ensure the standardized "MAP_" prefixed field is set to "MAP_MASK" and `map_type` (if kept) is "MASK".
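The two recommended checks can be sketched as small helpers. These are illustrative only; the helper names are hypothetical, and the real stage works on `FileRule` objects and `context.processed_maps_details` rather than plain data.

```python
SUITABLE_SOURCE_MAP_TYPES = {"MAP_COL", "MAP_ALBEDO"}  # "MAP_"-prefixed, per the recommendation

def mask_already_present(file_rules) -> bool:
    """True if any FileRule-like object already declares a mask via its item_type."""
    return any(getattr(fr, "item_type", None) == "MAP_MASK" for fr in file_rules)

def suitable_alpha_sources(processed_maps_details: dict) -> list:
    """Keys of entries whose standardized internal type can donate an alpha channel."""
    return [
        key
        for key, details in processed_maps_details.items()
        if details.get("internal_map_type") in SUITABLE_SOURCE_MAP_TYPES
    ]
```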
3. **`NormalMapGreenChannelStage` (`processing/pipeline/stages/normal_map_green_channel.py`):**
    * **Identify:** Checks `map_details.get('map_type') == "NORMAL"`.
    * **Recommend:** Change this check to use the standardized "MAP_" prefixed field from `map_details` (e.g., `map_details.get('internal_map_type')`) and verify that it `startswith("MAP_NRM")`.
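For reference, the green-channel inversion this stage performs amounts to flipping the G channel of the normal map. A minimal sketch, assuming 8-bit RGB data in a NumPy array (the real stage's I/O and channel order may differ, e.g. BGR under OpenCV):

```python
import numpy as np

def invert_green_channel(image: np.ndarray) -> np.ndarray:
    """Invert the green channel of an 8-bit H x W x C image, leaving other channels untouched."""
    if image.ndim != 3 or image.shape[2] < 3:
        raise ValueError("Expected an H x W x C image with at least 3 channels")
    out = image.copy()
    out[:, :, 1] = 255 - out[:, :, 1]
    return out
```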
4. **`GlossToRoughConversionStage` (`processing/pipeline/stages/gloss_to_rough_conversion.py`):**
    * **Identify:** This stage already uses `processing_map_type.startswith("MAP_GLOSS")` and updates `processing_map_type` to "MAP_ROUGH" in `map_details`. It also updates the `FileRule.item_type` correctly.
    * **Recommend:** This stage is largely consistent. Ensure the field it reads/writes (`processing_map_type`) aligns with the standardized "MAP_" prefixed field chosen for `processed_maps_details`.
**C. Review Orchestration Logic (Conceptual):**

* When the orchestrator populates `context.processed_maps_details` after stages like `SaveVariantsStage`, ensure it stores the "MAP_" prefixed `internal_map_type` (from `SaveVariantsInput`) in the chosen standardized field of `processed_maps_details`.
**IV. Testing Recommendations:**

* Create test cases with `AssetRule`s containing `FileRule`s where `item_type` is intentionally set to a non-"MAP_" prefixed value (e.g., "COLOR_MAP", "TEXTURE_ROUGH"). Verify that `RegularMapProcessorStage` skips these.
* Modify `MAP_MERGE_RULES` in a test configuration:
    * Set an `output_map_type` to a non-"MAP_" value.
    * Set an input map type (e.g., for channel "R") to a non-"MAP_" value.
    * Verify that `MergedTaskProcessorStage` correctly handles these (e.g., fails the task, skips the input, logs warnings).
* Test `AlphaExtractionToMaskStage`:
    * With an existing `FileRule` having `item_type="MAP_MASK"`, to ensure extraction is skipped.
    * With source maps having "MAP_COL" (with alpha) as their `internal_map_type` in `processed_maps_details`, to ensure they are correctly identified as sources.
* Test `NormalMapGreenChannelStage` with a normal map having "MAP_NRM" as its `internal_map_type` in `processed_maps_details` to ensure it is processed.
* Verify that output filenames continue to use the "standard type" (e.g., "COL", "ROUGH", "NRM") correctly.
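The "modify `MAP_MERGE_RULES` in a test configuration" step can be sketched as a fixture helper that deliberately strips the prefix. `make_bad_merge_rules` is a hypothetical helper for illustration; the rule shape follows `config/app_settings.json` as shown in this changeset.

```python
import copy

def make_bad_merge_rules(base_rules: list) -> list:
    """Produce a test copy of MAP_MERGE_RULES with deliberately invalid map types,
    without mutating the original (hence deepcopy)."""
    rules = copy.deepcopy(base_rules)
    rules[0]["output_map_type"] = "NRMRGH"  # drop the "MAP_" prefix on the output
    rules[0]["inputs"]["R"] = "NRM"         # drop the "MAP_" prefix on one input
    return rules

base = [{"output_map_type": "MAP_NRMRGH", "inputs": {"R": "MAP_NRM", "G": "MAP_NRM"}}]
bad = make_bad_merge_rules(base)
```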
**V. Mermaid Diagram (Illustrative Flow for `FileRule` Processing):**

```mermaid
graph TD
    A[AssetRule with FileRules] --> B{FileRuleFilterStage};
    B -- files_to_process --> C{PrepareProcessingItemsStage};
    C -- processing_items (FileRule) --> D{PipelineOrchestrator};
    D -- FileRule --> E(RegularMapProcessorStage);
    E --> F{Check FileRule.item_type};
    F -- Starts with "MAP_"? --> G[Process Map];
    F -- No --> H[Skip Map / Log Warning];
    G --> I[...subsequent stages...];
    H --> I;
```
72
ProjectNotes/PipelineRefactoringPlan.md
Normal file
@@ -0,0 +1,72 @@
# Processing Pipeline Refactoring Plan

## 1. Problem Summary

The current processing pipeline, particularly the `IndividualMapProcessingStage`, exhibits maintainability challenges:

* **High Complexity:** The stage handles too many responsibilities (loading, merging, transformations, scaling, saving).
* **Duplicated Logic:** Image transformations (Gloss-to-Rough, Normal Green Invert) are duplicated within the stage instead of relying solely on dedicated stages or being handled consistently.
* **Tight Coupling:** Heavy reliance on the large, mutable `AssetProcessingContext` object creates implicit dependencies and makes isolated testing difficult.

## 2. Refactoring Goals

* Improve code readability and understanding.
* Enhance maintainability by localizing changes and removing duplication.
* Increase testability through smaller, focused components with clear interfaces.
* Clarify data dependencies between pipeline stages.
* Adhere more closely to the Single Responsibility Principle (SRP).
## 3. Proposed New Pipeline Stages

Replace the existing `IndividualMapProcessingStage` with the following sequence of smaller, focused stages, executed by the `PipelineOrchestrator` for each processing item:

1. **`PrepareProcessingItemsStage`:**
    * **Responsibility:** Identifies and lists all items (`FileRule`, `MergeTaskDefinition`) to be processed from the main context.
    * **Output:** Updates `context.processing_items`.
2. **`RegularMapProcessorStage`:** (Handles `FileRule` items)
    * **Responsibility:** Loads the source image, determines the internal map type (with suffix), applies relevant transformations (Gloss-to-Rough, Normal Green Invert), and determines original metadata.
    * **Output:** `ProcessedRegularMapData` object containing transformed image data and metadata.
3. **`MergedTaskProcessorStage`:** (Handles `MergeTaskDefinition` items)
    * **Responsibility:** Loads input images, applies transformations to inputs, handles fallbacks/resizing, and performs the merge operation.
    * **Output:** `ProcessedMergedMapData` object containing merged image data and metadata.
4. **`InitialScalingStage`:** (Optional)
    * **Responsibility:** Applies configured scaling (e.g., POT downscale) to the processed image data received from the previous stage.
    * **Output:** Scaled image data.
5. **`SaveVariantsStage`:**
    * **Responsibility:** Takes the final processed (and potentially scaled) image data and orchestrates saving variants using the `save_image_variants` utility.
    * **Output:** List of saved file details (`saved_files_details`).
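The per-item dispatch between stages 2 and 3 can be sketched as follows. The placeholder dataclasses stand in for the real `rule_structure.FileRule` and the `MergeTaskDefinition` proposed in this plan; the processors are plain callables here to keep the sketch self-contained.

```python
from dataclasses import dataclass

# Placeholder item types standing in for the real FileRule / MergeTaskDefinition.
@dataclass
class FileRule:
    item_type: str

@dataclass
class MergeTaskDefinition:
    task_key: str

def dispatch_item(item, regular_processor, merged_processor):
    """Route a prepared processing item to the handler for its type."""
    if isinstance(item, FileRule):
        return regular_processor(item)
    if isinstance(item, MergeTaskDefinition):
        return merged_processor(item)
    raise TypeError(f"Unsupported processing item: {type(item).__name__}")
```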
## 4. Proposed Data Flow

* **Input/Output Objects:** Key stages (`RegularMapProcessor`, `MergedTaskProcessor`, `InitialScaling`, `SaveVariants`) will use specific Input and Output dataclasses for clearer interfaces.
* **Orchestrator Role:** The `PipelineOrchestrator` manages the overall flow. It calls stages, passes necessary data (extracting image data references and metadata from previous stage outputs to create inputs for the next), receives output objects, and integrates final results (such as saved file details) back into the main `AssetProcessingContext`.
* **Image Data Handling:** Large image arrays (`np.ndarray`) are passed primarily via stage return values (Output objects) and used as inputs to subsequent stages, managed by the Orchestrator. They are not stored long-term in the main `AssetProcessingContext`.
* **Main Context:** The `AssetProcessingContext` remains for overall state (rules, paths, configuration access, final status tracking) and potentially for simpler stages with minimal side effects.
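The Output-to-Input handoff performed by the orchestrator can be sketched with minimal stand-ins for the proposed dataclasses (field names follow the plan; `build_scaling_input` is a hypothetical orchestrator helper):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ProcessedRegularMapData:
    processed_image_data: object  # np.ndarray in the real pipeline
    original_dimensions: Optional[Tuple[int, int]]

@dataclass
class InitialScalingInput:
    image_data: object
    original_dimensions: Optional[Tuple[int, int]]
    initial_scaling_mode: str

def build_scaling_input(result: ProcessedRegularMapData, mode: str) -> InitialScalingInput:
    """Orchestrator-side handoff: the previous stage's Output feeds the next stage's Input."""
    return InitialScalingInput(
        image_data=result.processed_image_data,
        original_dimensions=result.original_dimensions,
        initial_scaling_mode=mode,
    )
```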
## 5. Visualization (Conceptual)

```mermaid
graph TD
    subgraph Proposed Pipeline Stages
        Start --> Prep[PrepareProcessingItemsStage]
        Prep --> ItemLoop{Loop per Item}
        ItemLoop -- FileRule --> RegProc[RegularMapProcessorStage]
        ItemLoop -- MergeTask --> MergeProc[MergedTaskProcessorStage]
        RegProc --> Scale(InitialScalingStage)
        MergeProc --> Scale
        Scale --> Save[SaveVariantsStage]
        Save --> UpdateContext[Update Main Context w/ Results]
        UpdateContext --> ItemLoop
    end
```
## 6. Benefits

* Improved readability and understanding.
* Enhanced maintainability and reduced risk.
* Better testability.
* Clearer dependencies.
@@ -1,181 +0,0 @@
# Project Plan: Modularizing the Asset Processing Engine

**Last Updated:** May 9, 2025

**1. Project Vision & Goals**

* **Vision:** Transform the asset processing pipeline into a highly modular, extensible, and testable system.
* **Primary Goals:**
    1. Decouple processing steps into independent, reusable stages.
    2. Simplify the addition of new processing capabilities (e.g., GLOSS > ROUGH conversion, Alpha to MASK, Normal Map Green Channel inversion).
    3. Improve code maintainability and readability.
    4. Enhance unit and integration testing capabilities for each processing component.
    5. Centralize common utility functions (image manipulation, path generation).

**2. Proposed Architecture Overview**

* **Core Concept:** A `PipelineOrchestrator` will manage a sequence of `ProcessingStage`s. Each stage will operate on an `AssetProcessingContext` object, which carries all necessary data and state for a single asset through the pipeline.
* **Key Components:**
    * `AssetProcessingContext`: Data class holding asset-specific data, configuration, temporary paths, and status.
    * `PipelineOrchestrator`: Class to manage the overall processing flow for a `SourceRule`, iterating through assets and executing the pipeline of stages for each.
    * `ProcessingStage` (Base Class/Interface): Defines the contract for all individual processing stages (e.g., an `execute(context)` method).
    * Specific Stage Classes: e.g., `SupplierDeterminationStage`, `IndividualMapProcessingStage`, etc.
    * Utility Modules: `image_processing_utils.py`, enhancements to `utils/path_utils.py`.
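The core concept — stages sharing one contract, driven by an orchestrator loop — can be sketched as follows. This is a toy illustration: the real context is a dataclass rather than a dict, and the stage body here is invented.

```python
from abc import ABC, abstractmethod

class ProcessingStage(ABC):
    """Contract implemented by every pipeline stage."""

    @abstractmethod
    def execute(self, context):
        """Transform and return the asset processing context."""

class SupplierDeterminationStage(ProcessingStage):
    """Toy implementation; the real stage derives the supplier from rules/config."""

    def execute(self, context):
        context.setdefault("effective_supplier", "DefaultSupplier")
        return context

def run_pipeline(context, stages):
    """Minimal orchestrator loop: each stage receives the previous stage's context."""
    for stage in stages:
        context = stage.execute(context)
    return context
```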
**3. Proposed File Structure**

* `processing/`
    * `pipeline/`
        * `__init__.py`
        * `asset_context.py` (Defines `AssetProcessingContext`)
        * `orchestrator.py` (Defines `PipelineOrchestrator`)
        * `stages/`
            * `__init__.py`
            * `base_stage.py` (Defines `ProcessingStage` interface)
            * `supplier_determination.py`
            * `asset_skip_logic.py`
            * `metadata_initialization.py`
            * `file_rule_filter.py`
            * `gloss_to_rough_conversion.py`
            * `alpha_extraction_to_mask.py`
            * `normal_map_green_channel.py`
            * `individual_map_processing.py`
            * `map_merging.py`
            * `metadata_finalization.py`
            * `output_organization.py`
    * `utils/`
        * `__init__.py`
        * `image_processing_utils.py` (New module for image functions)
* `utils/` (Top-level existing directory)
    * `path_utils.py` (To be enhanced with `sanitize_filename` from `processing_engine.py`)
**4. Detailed Phases and Tasks**

**Phase 0: Setup & Core Structures Definition**

*Goal: Establish the foundational classes for the new pipeline.*

* **Task 0.1: Define `AssetProcessingContext`**
    * Create `processing/pipeline/asset_context.py`.
    * Define the `AssetProcessingContext` data class with fields: `source_rule: SourceRule`, `asset_rule: AssetRule`, `workspace_path: Path`, `engine_temp_dir: Path`, `output_base_path: Path`, `effective_supplier: Optional[str]`, `asset_metadata: Dict`, `processed_maps_details: Dict[str, Dict[str, Dict]]`, `merged_maps_details: Dict[str, Dict[str, Dict]]`, `files_to_process: List[FileRule]`, `loaded_data_cache: Dict`, `config_obj: Configuration`, `status_flags: Dict`, `incrementing_value: Optional[str]`, `sha5_value: Optional[str]`.
    * Ensure proper type hinting.
* **Task 0.2: Define `ProcessingStage` Base Class/Interface**
    * Create `processing/pipeline/stages/base_stage.py`.
    * Define an abstract base class `ProcessingStage` with an abstract method `execute(self, context: AssetProcessingContext) -> AssetProcessingContext`.
* **Task 0.3: Implement Initial `PipelineOrchestrator`**
    * Create `processing/pipeline/orchestrator.py`.
    * Define the `PipelineOrchestrator` class.
    * Implement `__init__(self, config_obj: Configuration, stages: List[ProcessingStage])`.
    * Implement `process_source_rule(self, source_rule: SourceRule, workspace_path: Path, output_base_path: Path, overwrite: bool, incrementing_value: Optional[str], sha5_value: Optional[str]) -> Dict[str, List[str]]`, which:
        * Handles creation/cleanup of the main engine temporary directory.
        * Loops through `source_rule.assets`, initializing an `AssetProcessingContext` for each.
        * Iterates `self.stages`, calling `stage.execute(context)`.
        * Collects overall status.
**Phase 1: Utility Module Refactoring**

*Goal: Consolidate and centralize common utility functions.*

* **Task 1.1: Refactor Path Utilities**
    * Move `_sanitize_filename` from `processing_engine.py` to `utils/path_utils.py`.
    * Update call sites to use the new utility function.
* **Task 1.2: Create `image_processing_utils.py`**
    * Create `processing/utils/image_processing_utils.py`.
    * Move general-purpose image functions from `processing_engine.py`:
        * `is_power_of_two`
        * `get_nearest_pot`
        * `calculate_target_dimensions`
        * `calculate_image_stats`
        * `normalize_aspect_ratio_change`
        * Core image loading, BGR<>RGB conversion, generic resizing (from `_load_and_transform_source`).
        * Core data type conversion for saving, color conversion for saving, the `cv2.imwrite` call (from `_save_image`).
    * Ensure functions are pure and testable.
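The first two utilities are small enough to sketch. These are plausible implementations consistent with their names, not the actual project code; in particular, the tie-breaking and rounding behaviour of `get_nearest_pot` is an assumption.

```python
def is_power_of_two(n: int) -> bool:
    """True for 1, 2, 4, 8, ... (standard bit trick: a POT has a single set bit)."""
    return n > 0 and (n & (n - 1)) == 0

def get_nearest_pot(n: int) -> int:
    """Power of two nearest to n; ties round up (assumed behaviour)."""
    if n < 1:
        return 1
    lower = 1 << (n.bit_length() - 1)  # largest POT <= n
    upper = lower * 2
    return lower if (n - lower) < (upper - n) else upper
```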
**Phase 2: Implementing Core Processing Stages (Migrating Existing Logic)**

*Goal: Migrate existing functionality from `processing_engine.py` into the new stage-based architecture.*

(For each task: create the stage file, implement the class, move the logic, and adapt it to `AssetProcessingContext`.)

* **Task 2.1: Implement `SupplierDeterminationStage`**
* **Task 2.2: Implement `AssetSkipLogicStage`**
* **Task 2.3: Implement `MetadataInitializationStage`**
* **Task 2.4: Implement `FileRuleFilterStage`** (New logic for `item_type == "FILE_IGNORE"`)
* **Task 2.5: Implement `IndividualMapProcessingStage`** (Adapts `_process_individual_maps`, uses `image_processing_utils.py`)
* **Task 2.6: Implement `MapMergingStage`** (Adapts `_merge_maps`, uses `image_processing_utils.py`)
* **Task 2.7: Implement `MetadataFinalizationAndSaveStage`** (Adapts `_generate_metadata_file`, uses `utils.path_utils.generate_path_from_pattern`)
* **Task 2.8: Implement `OutputOrganizationStage`** (Adapts `_organize_output_files`)
**Phase 3: Implementing New Feature Stages**

*Goal: Add the new desired processing capabilities as distinct stages.*

* **Task 3.1: Implement `GlossToRoughConversionStage`** (Identify gloss, convert, invert, save temp, update `FileRule`)
* **Task 3.2: Implement `AlphaExtractionToMaskStage`** (Check existing mask, find MAP_COL with alpha, extract, save temp, add new `FileRule`)
* **Task 3.3: Implement `NormalMapGreenChannelStage`** (Identify normal maps, invert green based on config, save temp, update `FileRule`)
**Phase 4: Integration, Testing & Finalization**

*Goal: Assemble the pipeline, test thoroughly, and deprecate old code.*

* **Task 4.1: Configure `PipelineOrchestrator`**
    * Instantiate `PipelineOrchestrator` in the main application logic with the ordered list of stage instances.
* **Task 4.2: Unit Testing**
    * Unit tests for each `ProcessingStage` (mocking `AssetProcessingContext`).
    * Unit tests for `image_processing_utils.py` and `utils/path_utils.py` functions.
* **Task 4.3: Integration Testing**
    * Test `PipelineOrchestrator` end-to-end with sample data.
    * Compare outputs with the existing engine for consistency.
* **Task 4.4: Documentation Update**
    * Update developer documentation (e.g., `Documentation/02_Developer_Guide/05_Processing_Pipeline.md`).
    * Document `AssetProcessingContext` and stage responsibilities.
* **Task 4.5: Deprecate/Remove Old `ProcessingEngine` Code**
    * Gradually remove refactored logic from `processing_engine.py`.
**5. Workflow Diagram**

```mermaid
graph TD
    AA[Load SourceRule & Config] --> BA(PipelineOrchestrator: process_source_rule);
    BA --> CA{For Each Asset in SourceRule};
    CA -- Yes --> DA(Orchestrator: Create AssetProcessingContext);
    DA --> EA(SupplierDeterminationStage);
    EA -- context --> FA(AssetSkipLogicStage);
    FA -- context --> GA{context.skip_asset?};
    GA -- Yes --> HA(Orchestrator: Record Skipped);
    HA --> CA;
    GA -- No --> IA(MetadataInitializationStage);
    IA -- context --> JA(FileRuleFilterStage);
    JA -- context --> KA(GlossToRoughConversionStage);
    KA -- context --> LA(AlphaExtractionToMaskStage);
    LA -- context --> MA(NormalMapGreenChannelStage);
    MA -- context --> NA(IndividualMapProcessingStage);
    NA -- context --> OA(MapMergingStage);
    OA -- context --> PA(MetadataFinalizationAndSaveStage);
    PA -- context --> QA(OutputOrganizationStage);
    QA -- context --> RA(Orchestrator: Record Processed/Failed);
    RA --> CA;
    CA -- No --> SA(Orchestrator: Cleanup Engine Temp Dir);
    SA --> TA[Processing Complete];

    subgraph Stages
        direction LR
        EA
        FA
        IA
        JA
        KA
        LA
        MA
        NA
        OA
        PA
        QA
    end

    subgraph Utils
        direction LR
        U1[image_processing_utils.py]
        U2[utils/path_utils.py]
    end

    NA -.-> U1;
    OA -.-> U1;
    KA -.-> U1;
    LA -.-> U1;
    MA -.-> U1;

    PA -.-> U2;
    QA -.-> U2;

    classDef context fill:#f9f,stroke:#333,stroke-width:2px;
    class DA,EA,FA,IA,JA,KA,LA,MA,NA,OA,PA,QA context;
```
@@ -268,7 +268,7 @@
   "OUTPUT_FORMAT_8BIT": "png",
   "MAP_MERGE_RULES": [
     {
-      "output_map_type": "NRMRGH",
+      "output_map_type": "MAP_NRMRGH",
       "inputs": {
         "R": "MAP_NRM",
         "G": "MAP_NRM",
@@ -284,5 +284,10 @@
   ],
   "CALCULATE_STATS_RESOLUTION": "1K",
   "DEFAULT_ASSET_CATEGORY": "Surface",
-  "TEMP_DIR_PREFIX": "_PROCESS_ASSET_"
+  "TEMP_DIR_PREFIX": "_PROCESS_ASSET_",
+  "INITIAL_SCALING_MODE": "POT_DOWNSCALE",
+  "MERGE_DIMENSION_MISMATCH_STRATEGY": "USE_LARGEST",
+  "general_settings": {
+    "invert_normal_map_green_channel_globally": false
+  }
 }
@@ -379,10 +379,33 @@ class Configuration:
         """Gets the configured JPG quality level."""
         return self._core_settings.get('JPG_QUALITY', 95)
 
+    @property
+    def invert_normal_green_globally(self) -> bool:
+        """Gets the global setting for inverting the green channel of normal maps."""
+        # Default to False if the setting is missing in the core config
+        return self._core_settings.get('invert_normal_map_green_channel_globally', False)
+
+    @property
+    def overwrite_existing(self) -> bool:
+        """Gets the setting for overwriting existing files from core settings."""
+        return self._core_settings.get('overwrite_existing', False)
+
+    @property
+    def png_compression_level(self) -> int:
+        """Gets the PNG compression level from core settings."""
+        return self._core_settings.get('PNG_COMPRESSION', 6)  # Default to 6 if not found
+
     @property
     def resolution_threshold_for_jpg(self) -> int:
         """Gets the pixel dimension threshold for using JPG for 8-bit images."""
-        return self._core_settings.get('RESOLUTION_THRESHOLD_FOR_JPG', 4096)
+        value = self._core_settings.get('RESOLUTION_THRESHOLD_FOR_JPG', 4096)
+        log.info(f"CONFIGURATION_DEBUG: resolution_threshold_for_jpg property returning: {value} (type: {type(value)})")
+        # Ensure it's an int, as downstream might expect it.
+        # The .get() default is an int, but if the JSON had null or a string, it might be different.
+        if not isinstance(value, int):
+            log.warning(f"CONFIGURATION_DEBUG: RESOLUTION_THRESHOLD_FOR_JPG was not an int, got {type(value)}. Defaulting to 4096.")
+            return 4096
+        return value
 
     @property
     def respect_variant_map_types(self) -> list:
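The defensive default-on-miss pattern used by these properties can be demonstrated with a minimal stand-in (a sketch mirroring the property bodies in this changeset, not the real `Configuration` class, which has more state):

```python
# Minimal stand-in showing how the new properties fall back to defaults when
# a key is absent from the core settings.
class Configuration:
    def __init__(self, core_settings: dict):
        self._core_settings = core_settings

    @property
    def invert_normal_green_globally(self) -> bool:
        return self._core_settings.get('invert_normal_map_green_channel_globally', False)

    @property
    def png_compression_level(self) -> int:
        return self._core_settings.get('PNG_COMPRESSION', 6)
```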
@@ -5,6 +5,83 @@ from typing import Dict, List, Optional
 from rule_structure import AssetRule, FileRule, SourceRule
 from configuration import Configuration
+
+# Imports needed for new dataclasses
+import numpy as np
+from typing import Any, Tuple, Union
+
+# --- Stage Input/Output Dataclasses ---
+
+# Item types for PrepareProcessingItemsStage output
+@dataclass
+class MergeTaskDefinition:
+    """Represents a merge task identified by PrepareProcessingItemsStage."""
+    task_data: Dict  # The original task data from context.merged_image_tasks
+    task_key: str  # e.g., "merged_task_0"
+
+# Output for RegularMapProcessorStage
+@dataclass
+class ProcessedRegularMapData:
+    processed_image_data: np.ndarray
+    final_internal_map_type: str
+    source_file_path: Path
+    original_bit_depth: Optional[int]
+    original_dimensions: Optional[Tuple[int, int]]  # (width, height)
+    transformations_applied: List[str]
+    status: str = "Processed"
+    error_message: Optional[str] = None
+
+# Output for MergedTaskProcessorStage
+@dataclass
+class ProcessedMergedMapData:
+    merged_image_data: np.ndarray
+    output_map_type: str  # Internal type
+    source_bit_depths: List[int]
+    final_dimensions: Optional[Tuple[int, int]]  # (width, height)
+    transformations_applied_to_inputs: Dict[str, List[str]]  # Map type -> list of transforms
+    status: str = "Processed"
+    error_message: Optional[str] = None
+
+# Input for InitialScalingStage
+@dataclass
+class InitialScalingInput:
+    image_data: np.ndarray
+    original_dimensions: Optional[Tuple[int, int]]  # (width, height)
+    # Configuration needed
+    initial_scaling_mode: str
+
+# Output for InitialScalingStage
+@dataclass
+class InitialScalingOutput:
+    scaled_image_data: np.ndarray
+    scaling_applied: bool
+    final_dimensions: Tuple[int, int]  # (width, height)
+
+# Input for SaveVariantsStage
+@dataclass
+class SaveVariantsInput:
+    image_data: np.ndarray  # Final data (potentially scaled)
+    internal_map_type: str  # Final internal type (e.g., MAP_ROUGH, MAP_COL-1)
+    source_bit_depth_info: List[int]
+    # Configuration needed
+    output_filename_pattern_tokens: Dict[str, Any]
+    image_resolutions: List[int]
+    file_type_defs: Dict[str, Dict]
+    output_format_8bit: str
+    output_format_16bit_primary: str
+    output_format_16bit_fallback: str
+    png_compression_level: int
+    jpg_quality: int
+    output_filename_pattern: str
+    resolution_threshold_for_jpg: Optional[int]  # Added for JPG conversion
+
+# Output for SaveVariantsStage
+@dataclass
+class SaveVariantsOutput:
+    saved_files_details: List[Dict]
+    status: str = "Processed"
+    error_message: Optional[str] = None
+
+# Add a field to AssetProcessingContext for the prepared items
 @dataclass
 class AssetProcessingContext:
     source_rule: SourceRule
@@ -14,11 +91,16 @@ class AssetProcessingContext:
     output_base_path: Path
     effective_supplier: Optional[str]
     asset_metadata: Dict
-    processed_maps_details: Dict[str, Dict[str, Dict]]
-    merged_maps_details: Dict[str, Dict[str, Dict]]
+    processed_maps_details: Dict[str, Dict]  # Will store final results per item_key
+    merged_maps_details: Dict[str, Dict]  # This might become redundant? Keep for now.
     files_to_process: List[FileRule]
     loaded_data_cache: Dict
     config_obj: Configuration
     status_flags: Dict
     incrementing_value: Optional[str]
-    sha5_value: Optional[str]
+    sha5_value: Optional[str]  # Keep existing fields
+    # New field for prepared items
+    processing_items: Optional[List[Union[FileRule, MergeTaskDefinition]]] = None
+    # Temporary storage during pipeline execution (managed by orchestrator)
+    # Keys could be FileRule object hash/id or MergeTaskDefinition task_key
+    intermediate_results: Optional[Dict[Any, Union[ProcessedRegularMapData, ProcessedMergedMapData, InitialScalingOutput]]] = None
@@ -1,126 +1,434 @@
-from typing import List, Dict, Optional
-from pathlib import Path
+# --- Imports ---
+import logging
 import shutil
 import tempfile
-import logging
+from pathlib import Path
+from typing import List, Dict, Optional, Any, Union  # Added Any, Union
+
+import numpy as np  # Added numpy
+
 from configuration import Configuration
-from rule_structure import SourceRule, AssetRule
-from .asset_context import AssetProcessingContext
+from rule_structure import SourceRule, AssetRule, FileRule  # Added FileRule
+
+# Import new context classes and stages
+from .asset_context import (
+    AssetProcessingContext,
+    MergeTaskDefinition,
+    ProcessedRegularMapData,
+    ProcessedMergedMapData,
+    InitialScalingInput,
+    InitialScalingOutput,
+    SaveVariantsInput,
+    SaveVariantsOutput,
+)
 from .stages.base_stage import ProcessingStage
+# Import the new stages we created
+from .stages.prepare_processing_items import PrepareProcessingItemsStage
+from .stages.regular_map_processor import RegularMapProcessorStage
+from .stages.merged_task_processor import MergedTaskProcessorStage
+from .stages.initial_scaling import InitialScalingStage
+from .stages.save_variants import SaveVariantsStage
+
 log = logging.getLogger(__name__)
+
+# --- PipelineOrchestrator Class ---
+
 class PipelineOrchestrator:
     """
     Orchestrates the processing of assets based on source rules and a series of processing stages.
+    Manages the overall flow, including the core item processing sequence.
     """
 
-    def __init__(self, config_obj: Configuration, stages: List[ProcessingStage]):
+    def __init__(self, config_obj: Configuration,
+                 pre_item_stages: List[ProcessingStage],
+                 post_item_stages: List[ProcessingStage]):
         """
         Initializes the PipelineOrchestrator.
 
         Args:
             config_obj: The main configuration object.
-            stages: A list of processing stages to be executed in order.
+            pre_item_stages: Stages to run before the core item processing loop.
+            post_item_stages: Stages to run after the core item processing loop.
         """
         self.config_obj: Configuration = config_obj
-        self.stages: List[ProcessingStage] = stages
+        self.pre_item_stages: List[ProcessingStage] = pre_item_stages
+        self.post_item_stages: List[ProcessingStage] = post_item_stages
+        # Instantiate the core item processing stages internally
+        self._prepare_stage = PrepareProcessingItemsStage()
+        self._regular_processor_stage = RegularMapProcessorStage()
+        self._merged_processor_stage = MergedTaskProcessorStage()
+        self._scaling_stage = InitialScalingStage()
+        self._save_stage = SaveVariantsStage()
+
+    def _execute_specific_stages(
+        self, context: AssetProcessingContext,
+        stages_to_run: List[ProcessingStage],
+        stage_group_name: str,
+        stop_on_skip: bool = True
+    ) -> AssetProcessingContext:
+        """Executes a specific list of stages."""
+        asset_name = context.asset_rule.asset_name if context.asset_rule else "Unknown"
+        log.debug(f"Asset '{asset_name}': Executing {stage_group_name} stages...")
+        for stage in stages_to_run:
+            stage_name = stage.__class__.__name__
+            log.debug(f"Asset '{asset_name}': Executing {stage_group_name} stage: {stage_name}")
+            try:
+                # Check if stage expects context directly or specific input
+                # For now, assume outer stages take context directly
+                # This might need refinement if outer stages also adopt Input/Output pattern
+                context = stage.execute(context)
+            except Exception as e:
+                log.error(f"Asset '{asset_name}': Error during outer stage '{stage_name}': {e}", exc_info=True)
+                context.status_flags["asset_failed"] = True
+                context.status_flags["asset_failed_stage"] = stage_name
+                context.status_flags["asset_failed_reason"] = str(e)
+                # Update overall metadata immediately on outer stage failure
+                context.asset_metadata["status"] = f"Failed: Error in stage {stage_name}"
+                context.asset_metadata["error_message"] = str(e)
+                break  # Stop processing outer stages for this asset on error
+
+            if stop_on_skip and context.status_flags.get("skip_asset"):
+                log.info(f"Asset '{asset_name}': Skipped by outer stage '{stage_name}'. Reason: {context.status_flags.get('skip_reason', 'N/A')}")
+                break  # Skip remaining outer stages for this asset
+        return context
+
     def process_source_rule(
         self,
         source_rule: SourceRule,
         workspace_path: Path,
         output_base_path: Path,
|
||||||
overwrite: bool, # Not used in this initial implementation, but part of the signature
|
overwrite: bool,
|
||||||
incrementing_value: Optional[str],
|
incrementing_value: Optional[str],
|
||||||
sha5_value: Optional[str] # Corrected from sha5_value to sha256_value as per typical usage, assuming typo
|
sha5_value: Optional[str] # Keep param name consistent for now
|
||||||
) -> Dict[str, List[str]]:
|
) -> Dict[str, List[str]]:
|
||||||
"""
|
"""
|
||||||
Processes a single source rule, iterating through its asset rules and applying all stages.
|
Processes a single source rule, applying pre-processing stages,
|
||||||
|
the core item processing loop (Prepare, Process, Scale, Save),
|
||||||
Args:
|
and post-processing stages.
|
||||||
source_rule: The source rule to process.
|
|
||||||
workspace_path: The base path of the workspace.
|
|
||||||
output_base_path: The base path for output files.
|
|
||||||
overwrite: Whether to overwrite existing files (not fully implemented yet).
|
|
||||||
incrementing_value: An optional incrementing value for versioning or naming.
|
|
||||||
sha5_value: An optional SHA5 hash value for the asset (assuming typo, likely sha256).
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
A dictionary summarizing the processing status of assets.
|
|
||||||
"""
|
"""
|
||||||
overall_status: Dict[str, List[str]] = {
|
overall_status: Dict[str, List[str]] = {
|
||||||
"processed": [],
|
"processed": [],
|
||||||
"skipped": [],
|
"skipped": [],
|
||||||
"failed": [],
|
"failed": [],
|
||||||
}
|
}
|
||||||
engine_temp_dir_path: Optional[Path] = None # Initialize to None
|
engine_temp_dir_path: Optional[Path] = None
|
||||||
|
|
||||||
try:
|
try:
|
||||||
# Create a temporary directory for this processing run if needed by any stage
|
# --- Setup Temporary Directory ---
|
||||||
# This temp dir is for the entire source_rule processing, not per asset.
|
|
||||||
# Individual stages might create their own sub-temp dirs if necessary.
|
|
||||||
temp_dir_path_str = tempfile.mkdtemp(prefix=self.config_obj.temp_dir_prefix)
|
temp_dir_path_str = tempfile.mkdtemp(prefix=self.config_obj.temp_dir_prefix)
|
||||||
engine_temp_dir_path = Path(temp_dir_path_str)
|
engine_temp_dir_path = Path(temp_dir_path_str)
|
||||||
log.debug(f"PipelineOrchestrator created temporary directory: {engine_temp_dir_path} using prefix '{self.config_obj.temp_dir_prefix}'")
|
log.debug(f"PipelineOrchestrator created temporary directory: {engine_temp_dir_path}")
|
||||||
|
|
||||||
|
|
||||||
|
# --- Process Each Asset Rule ---
|
||||||
for asset_rule in source_rule.assets:
|
for asset_rule in source_rule.assets:
|
||||||
log.debug(f"Orchestrator: Processing asset '{asset_rule.asset_name}'")
|
asset_name = asset_rule.asset_name
|
||||||
|
log.info(f"Orchestrator: Processing asset '{asset_name}'")
|
||||||
|
|
||||||
|
# --- Initialize Asset Context ---
|
||||||
context = AssetProcessingContext(
|
context = AssetProcessingContext(
|
||||||
source_rule=source_rule,
|
source_rule=source_rule,
|
||||||
asset_rule=asset_rule,
|
asset_rule=asset_rule,
|
||||||
workspace_path=workspace_path, # This is the path to the source files (e.g. extracted archive)
|
workspace_path=workspace_path,
|
||||||
engine_temp_dir=engine_temp_dir_path, # Pass the orchestrator's temp dir
|
engine_temp_dir=engine_temp_dir_path,
|
||||||
output_base_path=output_base_path,
|
output_base_path=output_base_path,
|
||||||
effective_supplier=None, # Will be set by SupplierDeterminationStage
|
effective_supplier=None,
|
||||||
asset_metadata={}, # Will be populated by stages
|
asset_metadata={},
|
||||||
processed_maps_details={}, # Will be populated by stages
|
processed_maps_details={}, # Final results per item
|
||||||
merged_maps_details={}, # Will be populated by stages
|
merged_maps_details={}, # Keep for potential backward compat or other uses?
|
||||||
files_to_process=[], # Will be populated by FileRuleFilterStage
|
files_to_process=[], # Populated by FileRuleFilterStage (assumed in outer_stages)
|
||||||
loaded_data_cache={}, # For image loading cache within this asset's processing
|
loaded_data_cache={},
|
||||||
config_obj=self.config_obj,
|
config_obj=self.config_obj,
|
||||||
status_flags={"skip_asset": False, "asset_failed": False}, # Initialize common flags
|
status_flags={"skip_asset": False, "asset_failed": False},
|
||||||
incrementing_value=incrementing_value,
|
incrementing_value=incrementing_value,
|
||||||
sha5_value=sha5_value
|
sha5_value=sha5_value,
|
||||||
|
processing_items=[], # Initialize new fields
|
||||||
|
intermediate_results={}
|
||||||
)
|
)
|
||||||
|
|
||||||
for stage_idx, stage in enumerate(self.stages):
|
# --- Execute Pre-Item-Processing Outer Stages ---
|
||||||
log.debug(f"Asset '{asset_rule.asset_name}': Executing stage {stage_idx + 1}/{len(self.stages)}: {stage.__class__.__name__}")
|
# (e.g., MetadataInit, SupplierDet, FileRuleFilter, GlossToRough, NormalInvert)
|
||||||
try:
|
# Identify which outer stages run before the item loop
|
||||||
context = stage.execute(context)
|
# This requires knowing the intended order. Assume all run before for now.
|
||||||
except Exception as e:
|
context = self._execute_specific_stages(context, self.pre_item_stages, "pre-item", stop_on_skip=True)
|
||||||
log.error(f"Asset '{asset_rule.asset_name}': Error during stage '{stage.__class__.__name__}': {e}", exc_info=True)
|
|
||||||
context.status_flags["asset_failed"] = True
|
# Check if asset should be skipped or failed after pre-processing
|
||||||
context.asset_metadata["status"] = f"Failed: Error in stage {stage.__class__.__name__}"
|
if context.status_flags.get("asset_failed"):
|
||||||
context.asset_metadata["error_message"] = str(e)
|
log.error(f"Asset '{asset_name}': Failed during pre-processing stage '{context.status_flags.get('asset_failed_stage', 'Unknown')}'. Skipping item processing.")
|
||||||
break # Stop processing stages for this asset on error
|
overall_status["failed"].append(f"{asset_name} (Failed in {context.status_flags.get('asset_failed_stage', 'Pre-Processing')})")
|
||||||
|
continue # Move to the next asset rule
|
||||||
|
|
||||||
if context.status_flags.get("skip_asset"):
|
if context.status_flags.get("skip_asset"):
|
||||||
log.info(f"Asset '{asset_rule.asset_name}': Skipped by stage '{stage.__class__.__name__}'. Reason: {context.status_flags.get('skip_reason', 'N/A')}")
|
log.info(f"Asset '{asset_name}': Skipped during pre-processing. Skipping item processing.")
|
||||||
break # Skip remaining stages for this asset
|
overall_status["skipped"].append(asset_name)
|
||||||
|
continue # Move to the next asset rule
|
||||||
|
|
||||||
# Refined status collection
|
# --- Prepare Processing Items ---
|
||||||
if context.status_flags.get('skip_asset'):
|
log.debug(f"Asset '{asset_name}': Preparing processing items...")
|
||||||
overall_status["skipped"].append(asset_rule.asset_name)
|
try:
|
||||||
elif context.status_flags.get('asset_failed') or str(context.asset_metadata.get('status', '')).startswith("Failed"):
|
log.info(f"ORCHESTRATOR_TRACE: Asset '{asset_name}': Attempting to call _prepare_stage.execute(). Current context.status_flags: {context.status_flags}")
|
||||||
overall_status["failed"].append(asset_rule.asset_name)
|
# Prepare stage modifies context directly
|
||||||
elif context.asset_metadata.get('status') == "Processed":
|
context = self._prepare_stage.execute(context)
|
||||||
overall_status["processed"].append(asset_rule.asset_name)
|
log.info(f"ORCHESTRATOR_TRACE: Asset '{asset_name}': Successfully RETURNED from _prepare_stage.execute(). context.processing_items count: {len(context.processing_items) if context.processing_items is not None else 'None'}. context.status_flags: {context.status_flags}")
|
||||||
else: # Default or unknown state
|
except Exception as e:
|
||||||
log.warning(f"Asset '{asset_rule.asset_name}': Unknown status after pipeline execution. Metadata status: '{context.asset_metadata.get('status')}'. Marking as failed.")
|
log.error(f"ORCHESTRATOR_TRACE: Asset '{asset_name}': EXCEPTION during _prepare_stage.execute(): {e}", exc_info=True)
|
||||||
overall_status["failed"].append(f"{asset_rule.asset_name} (Unknown Status: {context.asset_metadata.get('status')})")
|
context.status_flags["asset_failed"] = True
|
||||||
log.debug(f"Asset '{asset_rule.asset_name}' final status: {context.asset_metadata.get('status', 'N/A')}, Flags: {context.status_flags}")
|
context.status_flags["asset_failed_stage"] = "PrepareProcessingItemsStage"
|
||||||
|
context.status_flags["asset_failed_reason"] = str(e)
|
||||||
|
overall_status["failed"].append(f"{asset_name} (Failed in Prepare Items)")
|
||||||
|
continue # Move to next asset
|
||||||
|
|
||||||
|
if context.status_flags.get('prepare_items_failed'):
|
||||||
|
log.error(f"Asset '{asset_name}': Failed during item preparation. Reason: {context.status_flags.get('prepare_items_failed_reason', 'Unknown')}. Skipping item processing loop.")
|
||||||
|
overall_status["failed"].append(f"{asset_name} (Failed Prepare Items: {context.status_flags.get('prepare_items_failed_reason', 'Unknown')})")
|
||||||
|
continue # Move to next asset
|
||||||
|
|
||||||
|
if not context.processing_items:
|
||||||
|
log.info(f"Asset '{asset_name}': No items to process after preparation stage.")
|
||||||
|
# Status will be determined at the end
|
||||||
|
|
||||||
|
# --- Core Item Processing Loop ---
|
||||||
|
log.info("ORCHESTRATOR: Starting processing items loop for asset '%s'", asset_name) # Corrected indentation and message
|
||||||
|
log.info(f"Asset '{asset_name}': Starting core item processing loop for {len(context.processing_items)} items...")
|
||||||
|
asset_had_item_errors = False
|
||||||
|
for item_index, item in enumerate(context.processing_items):
|
||||||
|
item_key: Any = None # Key for storing results (FileRule object or task_key string)
|
||||||
|
item_log_prefix = f"Asset '{asset_name}', Item {item_index + 1}/{len(context.processing_items)}"
|
||||||
|
processed_data: Optional[Union[ProcessedRegularMapData, ProcessedMergedMapData]] = None
|
||||||
|
scaled_data_output: Optional[InitialScalingOutput] = None # Store output object
|
||||||
|
saved_data: Optional[SaveVariantsOutput] = None
|
||||||
|
item_status = "Failed" # Default item status
|
||||||
|
current_image_data: Optional[np.ndarray] = None # Track current image data ref
|
||||||
|
|
||||||
|
try:
|
||||||
|
# 1. Process (Load/Merge + Transform)
|
||||||
|
if isinstance(item, FileRule):
|
||||||
|
if item.item_type == 'EXTRA':
|
||||||
|
log.debug(f"{item_log_prefix}: Skipping image processing for EXTRA FileRule '{item.file_path}'.")
|
||||||
|
# Add a basic entry to processed_maps_details to acknowledge it was seen
|
||||||
|
context.processed_maps_details[item.file_path] = {
|
||||||
|
"status": "Skipped (EXTRA file)",
|
||||||
|
"internal_map_type": "EXTRA",
|
||||||
|
"source_file": str(item.file_path)
|
||||||
|
}
|
||||||
|
continue # Skip to the next item
|
||||||
|
item_key = item.file_path # Use file_path string as key
|
||||||
|
log.debug(f"{item_log_prefix}: Processing FileRule '{item.file_path}'...")
|
||||||
|
processed_data = self._regular_processor_stage.execute(context, item)
|
||||||
|
elif isinstance(item, MergeTaskDefinition):
|
||||||
|
item_key = item.task_key # Use task_key string as key
|
||||||
|
log.info(f"{item_log_prefix}: Executing MergedTaskProcessorStage for MergeTask '{item_key}'...") # Log call
|
||||||
|
processed_data = self._merged_processor_stage.execute(context, item)
|
||||||
|
# Log status/error from merge processor
|
||||||
|
if processed_data:
|
||||||
|
log.info(f"{item_log_prefix}: MergedTaskProcessorStage result - Status: {processed_data.status}, Error: {processed_data.error_message}")
|
||||||
|
else:
|
||||||
|
log.warning(f"{item_log_prefix}: MergedTaskProcessorStage returned None for MergeTask '{item_key}'.")
|
||||||
|
else:
|
||||||
|
log.warning(f"{item_log_prefix}: Unknown item type '{type(item)}'. Skipping.")
|
||||||
|
item_key = f"unknown_item_{item_index}"
|
||||||
|
context.processed_maps_details[item_key] = {"status": "Skipped", "notes": f"Unknown item type {type(item)}"}
|
||||||
|
asset_had_item_errors = True
|
||||||
|
continue # Next item
|
||||||
|
|
||||||
|
# Check for processing failure
|
||||||
|
if not processed_data or processed_data.status != "Processed":
|
||||||
|
error_msg = processed_data.error_message if processed_data else "Processor returned None"
|
||||||
|
log.error(f"{item_log_prefix}: Failed during processing stage. Error: {error_msg}")
|
||||||
|
context.processed_maps_details[item_key] = {"status": "Failed", "notes": f"Processing Error: {error_msg}", "stage": processed_data.__class__.__name__ if processed_data else "UnknownProcessor"}
|
||||||
|
asset_had_item_errors = True
|
||||||
|
continue # Next item
|
||||||
|
|
||||||
|
# Store intermediate result & get current image data
|
||||||
|
context.intermediate_results[item_key] = processed_data
|
||||||
|
current_image_data = processed_data.processed_image_data if isinstance(processed_data, ProcessedRegularMapData) else processed_data.merged_image_data
|
||||||
|
current_dimensions = processed_data.original_dimensions if isinstance(processed_data, ProcessedRegularMapData) else processed_data.final_dimensions
|
||||||
|
|
||||||
|
# 2. Scale (Optional)
|
||||||
|
scaling_mode = getattr(context.config_obj, "INITIAL_SCALING_MODE", "NONE")
|
||||||
|
if scaling_mode != "NONE" and current_image_data is not None and current_image_data.size > 0:
|
||||||
|
if isinstance(item, MergeTaskDefinition): # Log scaling call for merge tasks
|
||||||
|
log.info(f"{item_log_prefix}: Calling InitialScalingStage for MergeTask '{item_key}' (Mode: {scaling_mode})...")
|
||||||
|
log.debug(f"{item_log_prefix}: Applying initial scaling (Mode: {scaling_mode})...")
|
||||||
|
scale_input = InitialScalingInput(
|
||||||
|
image_data=current_image_data,
|
||||||
|
original_dimensions=current_dimensions, # Pass original/merged dims
|
||||||
|
initial_scaling_mode=scaling_mode
|
||||||
|
)
|
||||||
|
scaled_data_output = self._scaling_stage.execute(scale_input)
|
||||||
|
# Update intermediate result and current image data reference
|
||||||
|
context.intermediate_results[item_key] = scaled_data_output # Overwrite previous intermediate
|
||||||
|
current_image_data = scaled_data_output.scaled_image_data # Use scaled data for saving
|
||||||
|
log.debug(f"{item_log_prefix}: Scaling applied: {scaled_data_output.scaling_applied}. New Dims: {scaled_data_output.final_dimensions}")
|
||||||
|
else:
|
||||||
|
log.debug(f"{item_log_prefix}: Initial scaling skipped (Mode: NONE or empty image).")
|
||||||
|
# Create dummy output if scaling skipped, using current dims
|
||||||
|
final_dims = current_dimensions if current_dimensions else (current_image_data.shape[1], current_image_data.shape[0]) if current_image_data is not None else (0,0)
|
||||||
|
scaled_data_output = InitialScalingOutput(scaled_image_data=current_image_data, scaling_applied=False, final_dimensions=final_dims)
|
||||||
|
|
||||||
|
|
||||||
|
# 3. Save Variants
|
||||||
|
if current_image_data is None or current_image_data.size == 0:
|
||||||
|
log.warning(f"{item_log_prefix}: Skipping save stage because image data is empty.")
|
||||||
|
context.processed_maps_details[item_key] = {"status": "Skipped", "notes": "No image data to save", "stage": "SaveVariantsStage"}
|
||||||
|
# Don't mark as asset error, just skip this item's saving
|
||||||
|
continue # Next item
|
||||||
|
|
||||||
|
if isinstance(item, MergeTaskDefinition): # Log save call for merge tasks
|
||||||
|
log.info(f"{item_log_prefix}: Calling SaveVariantsStage for MergeTask '{item_key}'...")
|
||||||
|
log.debug(f"{item_log_prefix}: Saving variants...")
|
||||||
|
# Prepare input for save stage
|
||||||
|
internal_map_type = processed_data.final_internal_map_type if isinstance(processed_data, ProcessedRegularMapData) else processed_data.output_map_type
|
||||||
|
source_bit_depth = [processed_data.original_bit_depth] if isinstance(processed_data, ProcessedRegularMapData) and processed_data.original_bit_depth is not None else processed_data.source_bit_depths if isinstance(processed_data, ProcessedMergedMapData) else [8] # Default bit depth if unknown
|
||||||
|
|
||||||
|
# Construct filename tokens (ensure temp dir is used)
|
||||||
|
output_filename_tokens = {
|
||||||
|
'asset_name': asset_name,
|
||||||
|
'output_base_directory': context.engine_temp_dir, # Save variants to temp dir
|
||||||
|
# Add other tokens from context/config as needed by the pattern
|
||||||
|
'supplier': context.effective_supplier or 'UnknownSupplier',
|
||||||
|
}
|
||||||
|
|
||||||
|
# Log the value being read for the threshold before creating the input object
|
||||||
|
log.info(f"ORCHESTRATOR_DEBUG: Reading RESOLUTION_THRESHOLD_FOR_JPG from config for SaveVariantsInput: {getattr(context.config_obj, 'RESOLUTION_THRESHOLD_FOR_JPG', None)}")
|
||||||
|
save_input = SaveVariantsInput(
|
||||||
|
image_data=current_image_data, # Use potentially scaled data
|
||||||
|
internal_map_type=internal_map_type,
|
||||||
|
source_bit_depth_info=source_bit_depth,
|
||||||
|
output_filename_pattern_tokens=output_filename_tokens,
|
||||||
|
# Pass config values needed by save stage
|
||||||
|
image_resolutions=context.config_obj.image_resolutions,
|
||||||
|
file_type_defs=getattr(context.config_obj, "FILE_TYPE_DEFINITIONS", {}),
|
||||||
|
output_format_8bit=context.config_obj.get_8bit_output_format(),
|
||||||
|
output_format_16bit_primary=context.config_obj.get_16bit_output_formats()[0],
|
||||||
|
output_format_16bit_fallback=context.config_obj.get_16bit_output_formats()[1],
|
||||||
|
png_compression_level=context.config_obj.png_compression_level,
|
||||||
|
jpg_quality=context.config_obj.jpg_quality,
|
||||||
|
output_filename_pattern=context.config_obj.output_filename_pattern,
|
||||||
|
resolution_threshold_for_jpg=getattr(context.config_obj, "resolution_threshold_for_jpg", None) # Corrected case
|
||||||
|
)
|
||||||
|
saved_data = self._save_stage.execute(save_input)
|
||||||
|
# Log saved_data for merge tasks
|
||||||
|
if isinstance(item, MergeTaskDefinition):
|
||||||
|
log.info(f"{item_log_prefix}: SaveVariantsStage result for MergeTask '{item_key}' - Status: {saved_data.status if saved_data else 'N/A'}, Saved Files: {len(saved_data.saved_files_details) if saved_data else 0}")
|
||||||
|
|
||||||
|
# Check save status and finalize item result
|
||||||
|
if saved_data and saved_data.status.startswith("Processed"):
|
||||||
|
item_status = saved_data.status # e.g., "Processed" or "Processed (No Output)"
|
||||||
|
log.info(f"{item_log_prefix}: Item successfully processed and saved. Status: {item_status}")
|
||||||
|
# Populate final details for this item
|
||||||
|
final_details = {
|
||||||
|
"status": item_status,
|
||||||
|
"saved_files_info": saved_data.saved_files_details, # List of dicts from save util
|
||||||
|
"internal_map_type": internal_map_type,
|
||||||
|
"original_dimensions": processed_data.original_dimensions if isinstance(processed_data, ProcessedRegularMapData) else None,
|
||||||
|
"final_dimensions": scaled_data_output.final_dimensions if scaled_data_output else current_dimensions,
|
||||||
|
"transformations": processed_data.transformations_applied if isinstance(processed_data, ProcessedRegularMapData) else processed_data.transformations_applied_to_inputs,
|
||||||
|
# Add source file if regular map
|
||||||
|
"source_file": str(processed_data.source_file_path) if isinstance(processed_data, ProcessedRegularMapData) else None,
|
||||||
|
}
|
||||||
|
# Log final details addition for merge tasks
|
||||||
|
if isinstance(item, MergeTaskDefinition):
|
||||||
|
log.info(f"{item_log_prefix}: Adding final details to context.processed_maps_details for MergeTask '{item_key}'. Details: {final_details}")
|
||||||
|
context.processed_maps_details[item_key] = final_details
|
||||||
|
else:
|
||||||
|
error_msg = saved_data.error_message if saved_data else "Save stage returned None"
|
||||||
|
log.error(f"{item_log_prefix}: Failed during save stage. Error: {error_msg}")
|
||||||
|
context.processed_maps_details[item_key] = {"status": "Failed", "notes": f"Save Error: {error_msg}", "stage": "SaveVariantsStage"}
|
||||||
|
asset_had_item_errors = True
|
||||||
|
item_status = "Failed" # Ensure item status reflects failure
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
log.error(f"PipelineOrchestrator.process_source_rule failed: {e}", exc_info=True)
|
log.exception(f"{item_log_prefix}: Unhandled exception during item processing loop: {e}")
|
||||||
# Mark all remaining assets as failed if a top-level error occurs
|
# Ensure details are recorded even on unhandled exception
|
||||||
processed_or_skipped_or_failed = set(overall_status["processed"] + overall_status["skipped"] + overall_status["failed"])
|
if item_key is not None:
|
||||||
|
context.processed_maps_details[item_key] = {"status": "Failed", "notes": f"Unhandled Loop Error: {e}", "stage": "OrchestratorLoop"}
|
||||||
|
else:
|
||||||
|
log.error(f"Asset '{asset_name}': Unhandled exception in item loop before item key was set.")
|
||||||
|
asset_had_item_errors = True
|
||||||
|
item_status = "Failed"
|
||||||
|
# Optionally break loop or continue? Continue for now to process other items.
|
||||||
|
|
||||||
|
log.info("ORCHESTRATOR: Finished processing items loop for asset '%s'", asset_name)
|
||||||
|
log.info(f"Asset '{asset_name}': Finished core item processing loop.")
|
||||||
|
|
||||||
|
# --- Execute Post-Item-Processing Outer Stages ---
|
||||||
|
# (e.g., OutputOrganization, MetadataFinalizationSave)
|
||||||
|
# Identify which outer stages run after the item loop
|
||||||
|
# This needs better handling based on stage purpose. Assume none run after for now.
|
||||||
|
if not context.status_flags.get("asset_failed"):
|
||||||
|
log.info("ORCHESTRATOR: Executing post-item-processing outer stages for asset '%s'", asset_name)
|
||||||
|
context = self._execute_specific_stages(context, self.post_item_stages, "post-item", stop_on_skip=False)
|
||||||
|
|
||||||
|
# --- Final Asset Status Determination ---
|
||||||
|
final_asset_status = "Unknown"
|
||||||
|
fail_reason = ""
|
||||||
|
if context.status_flags.get("asset_failed"):
|
||||||
|
final_asset_status = "Failed"
|
||||||
|
fail_reason = f"(Failed in {context.status_flags.get('asset_failed_stage', 'Unknown Stage')}: {context.status_flags.get('asset_failed_reason', 'Unknown Reason')})"
|
||||||
|
elif context.status_flags.get("skip_asset"):
|
||||||
|
final_asset_status = "Skipped"
|
||||||
|
fail_reason = f"(Skipped: {context.status_flags.get('skip_reason', 'Unknown Reason')})"
|
||||||
|
elif asset_had_item_errors:
|
||||||
|
final_asset_status = "Failed"
|
||||||
|
fail_reason = "(One or more items failed)"
|
||||||
|
elif not context.processing_items:
|
||||||
|
# No items prepared, no errors -> consider skipped or processed based on definition?
|
||||||
|
final_asset_status = "Skipped" # Or "Processed (No Items)"
|
||||||
|
fail_reason = "(No items to process)"
|
||||||
|
elif not context.processed_maps_details and context.processing_items:
|
||||||
|
# Items were prepared, but none resulted in processed_maps_details entry
|
||||||
|
final_asset_status = "Skipped" # Or Failed?
|
||||||
|
fail_reason = "(All processing items skipped or failed internally)"
|
||||||
|
elif context.processed_maps_details:
|
||||||
|
# Check if all items in processed_maps_details are actually processed successfully
|
||||||
|
all_processed_ok = all(
|
||||||
|
str(details.get("status", "")).startswith("Processed")
|
||||||
|
for details in context.processed_maps_details.values()
|
||||||
|
)
|
||||||
|
some_processed_ok = any(
|
||||||
|
str(details.get("status", "")).startswith("Processed")
|
||||||
|
for details in context.processed_maps_details.values()
|
||||||
|
)
|
||||||
|
|
||||||
|
if all_processed_ok:
|
||||||
|
final_asset_status = "Processed"
|
||||||
|
elif some_processed_ok:
|
||||||
|
final_asset_status = "Partial" # Introduce a partial status? Or just Failed?
|
||||||
|
fail_reason = "(Some items failed)"
|
||||||
|
final_asset_status = "Failed" # Treat partial as Failed for overall status
|
||||||
|
else: # No items processed successfully
|
||||||
|
final_asset_status = "Failed"
|
||||||
|
fail_reason = "(All items failed)"
|
||||||
|
else:
|
||||||
|
# Should not happen if processing_items existed
|
||||||
|
final_asset_status = "Failed"
|
||||||
|
fail_reason = "(Unknown state after item processing)"
|
||||||
|
|
||||||
|
|
||||||
|
# Update overall status list
|
||||||
|
if final_asset_status == "Processed":
|
||||||
|
overall_status["processed"].append(asset_name)
|
||||||
|
elif final_asset_status == "Skipped":
|
||||||
|
overall_status["skipped"].append(f"{asset_name} {fail_reason}")
|
||||||
|
else: # Failed or Unknown
|
||||||
|
overall_status["failed"].append(f"{asset_name} {fail_reason}")
|
||||||
|
|
||||||
|
log.info(f"Asset '{asset_name}' final status: {final_asset_status} {fail_reason}")
|
||||||
|
# Clean up intermediate results for the asset to save memory
|
||||||
|
context.intermediate_results = {}
|
||||||
|
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
log.error(f"PipelineOrchestrator.process_source_rule failed critically: {e}", exc_info=True)
|
||||||
|
# Mark all assets from this source rule that weren't finished as failed
|
||||||
|
processed_or_skipped_or_failed = set(overall_status["processed"]) | \
|
||||||
|
set(name.split(" ")[0] for name in overall_status["skipped"]) | \
|
||||||
|
set(name.split(" ")[0] for name in overall_status["failed"])
|
||||||
for asset_rule in source_rule.assets:
|
for asset_rule in source_rule.assets:
|
||||||
if asset_rule.asset_name not in processed_or_skipped_or_failed:
|
if asset_rule.asset_name not in processed_or_skipped_or_failed:
|
||||||
overall_status["failed"].append(f"{asset_rule.asset_name} (Orchestrator Error)")
|
overall_status["failed"].append(f"{asset_rule.asset_name} (Orchestrator Error: {e})")
|
||||||
finally:
|
finally:
|
||||||
|
# --- Cleanup Temporary Directory ---
|
||||||
if engine_temp_dir_path and engine_temp_dir_path.exists():
|
if engine_temp_dir_path and engine_temp_dir_path.exists():
|
||||||
try:
|
try:
|
||||||
log.debug(f"PipelineOrchestrator cleaning up temporary directory: {engine_temp_dir_path}")
|
log.debug(f"PipelineOrchestrator cleaning up temporary directory: {engine_temp_dir_path}")
|
||||||
|
|||||||
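The control flow this commit introduces — pre-item stages that can skip or fail the asset, a per-item Process/Scale/Save loop that records errors without aborting, then post-item stages — can be sketched in miniature. A minimal sketch with plain callables and a dict context; the helper `process_asset` and its flag names mirror the diff (`asset_failed`, `skip_asset`, `processing_items`) but are hypothetical, not the project's API:

```python
from typing import Any, Callable, Dict, List

Context = Dict[str, Any]

def process_asset(context: Context,
                  pre_item_stages: List[Callable[[Context], Context]],
                  process_item: Callable[[Context, Any], bool],
                  post_item_stages: List[Callable[[Context], Context]]) -> str:
    """Run pre-item stages, then the per-item loop, then post-item stages."""
    # Pre-item stages may flag the asset as failed or skipped; stop early if so.
    for stage in pre_item_stages:
        context = stage(context)
        if context.get("asset_failed"):
            return "Failed"
        if context.get("skip_asset"):
            return "Skipped"

    # Core item loop: a failing item marks the asset, but remaining items still run.
    had_item_errors = False
    for item in context.get("processing_items", []):
        if not process_item(context, item):
            had_item_errors = True

    # Post-item stages (e.g. output organization) run only on the success path here.
    if not had_item_errors:
        for stage in post_item_stages:
            context = stage(context)

    return "Failed" if had_item_errors else "Processed"
```

In the real orchestrator the gate for post-item stages is the `asset_failed` flag rather than item errors; the sketch simplifies that to keep the three phases visible.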
@@ -18,7 +18,8 @@ class AlphaExtractionToMaskStage(ProcessingStage):
     Extracts an alpha channel from a suitable source map (e.g., Albedo, Diffuse)
     to generate a MASK map if one is not explicitly defined.
     """
-    SUITABLE_SOURCE_MAP_TYPES = ["ALBEDO", "DIFFUSE", "BASE_COLOR"]  # Map types likely to have alpha
+    # Use MAP_ prefixed types for internal logic checks
+    SUITABLE_SOURCE_MAP_TYPES = ["MAP_COL", "MAP_ALBEDO", "MAP_BASECOLOR"]  # Map types likely to have alpha
 
     def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
         asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
@@ -38,7 +39,8 @@ class AlphaExtractionToMaskStage(ProcessingStage):
         # A. Check for Existing MASK Map
         for file_rule in context.files_to_process:
             # Assuming file_rule has 'map_type' and 'file_path' (instead of filename_pattern)
-            if hasattr(file_rule, 'map_type') and file_rule.map_type == "MASK":
+            # Check for existing MASK map using the correct item_type field and MAP_ prefix
+            if file_rule.item_type == "MAP_MASK":
                 file_path_for_log = file_rule.file_path if hasattr(file_rule, 'file_path') else "Unknown file path"
                 logger.info(
                     f"Asset '{asset_name_for_log}': MASK map already defined by FileRule "
@@ -51,8 +53,10 @@ class AlphaExtractionToMaskStage(ProcessingStage):
         source_file_rule_id_for_alpha: Optional[str] = None  # This ID comes from processed_maps_details keys
 
         for file_rule_id, details in context.processed_maps_details.items():
+            # Check for suitable source map using the standardized internal_map_type field
+            internal_map_type = details.get('internal_map_type')  # Use the standardized field
             if details.get('status') == 'Processed' and \
-                    details.get('map_type') in self.SUITABLE_SOURCE_MAP_TYPES:
+                    internal_map_type in self.SUITABLE_SOURCE_MAP_TYPES:
                 try:
                     temp_path = Path(details['temp_processed_file'])
                     if not temp_path.exists():
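The scan in the hunk above reduces to a first-match lookup over `processed_maps_details`: take the first entry that was processed successfully and whose `internal_map_type` is a suitable alpha source. A minimal, self-contained sketch — `find_alpha_source` is a hypothetical helper, though the list literal matches the new `SUITABLE_SOURCE_MAP_TYPES`:

```python
from typing import Dict, Optional

SUITABLE_SOURCE_MAP_TYPES = ["MAP_COL", "MAP_ALBEDO", "MAP_BASECOLOR"]

def find_alpha_source(processed_maps_details: Dict[str, dict]) -> Optional[str]:
    """Return the first detail key whose map was processed and is a suitable alpha source."""
    for detail_id, details in processed_maps_details.items():
        internal_map_type = details.get('internal_map_type')
        if details.get('status') == 'Processed' and internal_map_type in SUITABLE_SOURCE_MAP_TYPES:
            return detail_id
    return None
```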
@@ -153,15 +157,16 @@ class AlphaExtractionToMaskStage(ProcessingStage):
 
 
         context.processed_maps_details[new_mask_processed_map_key] = {
-            'map_type': "MASK",
+            'internal_map_type': "MAP_MASK",  # Use the standardized MAP_ prefixed field
+            'map_type': "MASK",  # Keep standard type for metadata/naming consistency if needed
             'source_file': str(source_image_path),
             'temp_processed_file': str(mask_temp_path),
             'original_dimensions': original_dims,
             'processed_dimensions': (alpha_channel.shape[1], alpha_channel.shape[0]),
             'status': 'Processed',
             'notes': (
-                f"Generated from alpha of {source_map_details_for_alpha['map_type']} "
-                f"(Source Detail ID: {source_file_rule_id_for_alpha})"  # Changed from Source Rule ID
+                f"Generated from alpha of {source_map_details_for_alpha.get('internal_map_type', 'unknown type')} "  # Use internal_map_type for notes
+                f"(Source Detail ID: {source_file_rule_id_for_alpha})"
             ),
             # 'file_rule_id': new_mask_file_rule_id_str  # FileRule doesn't have an ID to link here directly
         }
@@ -51,7 +51,8 @@ class GlossToRoughConversionStage(ProcessingStage):

         # Iterate using the index (map_key_index) as the key, which is now standard.
         for map_key_index, map_details in context.processed_maps_details.items():
-            processing_map_type = map_details.get('processing_map_type', '')
+            # Use the standardized internal_map_type field
+            internal_map_type = map_details.get('internal_map_type', '')
             map_status = map_details.get('status')
             original_temp_path_str = map_details.get('temp_processed_file')
             # source_file_rule_idx from details should align with map_key_index.
@@ -70,11 +71,12 @@ class GlossToRoughConversionStage(ProcessingStage):
                 processing_tag = f"mki_{map_key_index}_fallback_tag"


-            if not processing_map_type.startswith("MAP_GLOSS"):
-                # logger.debug(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index}: Type '{processing_map_type}' is not GLOSS. Skipping.")
+            # Check if the map is a GLOSS map using the standardized internal_map_type
+            if not internal_map_type.startswith("MAP_GLOSS"):
+                # logger.debug(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index}: Type '{internal_map_type}' is not GLOSS. Skipping.")
                 continue

-            logger.info(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index} (Tag: {processing_tag}): Identified potential GLOSS map (Type: {processing_map_type}).")
+            logger.info(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index} (Tag: {processing_tag}): Identified potential GLOSS map (Type: {internal_map_type}).")

             if map_status not in successful_conversion_statuses:
                 logger.warning(
@@ -163,9 +165,9 @@ class GlossToRoughConversionStage(ProcessingStage):

             # Update context.processed_maps_details for this map_key_index
             map_details['temp_processed_file'] = str(new_temp_path)
-            map_details['original_map_type_before_conversion'] = processing_map_type
-            map_details['processing_map_type'] = "MAP_ROUGH"
-            map_details['map_type'] = "Roughness"
+            map_details['original_map_type_before_conversion'] = internal_map_type  # Store the original internal type
+            map_details['internal_map_type'] = "MAP_ROUGH"  # Use the standardized MAP_ prefixed field
+            map_details['map_type'] = "Roughness"  # Keep standard type for metadata/naming consistency if needed
             map_details['status'] = "Converted_To_Rough"
             map_details['notes'] = map_details.get('notes', '') + "; Converted from GLOSS by GlossToRoughConversionStage"
             if 'base_pot_resolution_name' in map_details:
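The conversion itself happens earlier in the stage and is not part of this diff. As a hedged illustration only, a typical gloss-to-roughness conversion simply inverts the map per bit depth; the project's actual `ipu` helper may differ:

```python
import numpy as np

def gloss_to_rough(gloss: np.ndarray) -> np.ndarray:
    """Invert a gloss map into a roughness map (rough = max_value - gloss).
    Illustrative sketch; not the pipeline's actual conversion routine."""
    if gloss.dtype == np.uint8:
        return 255 - gloss       # 8-bit maps
    if gloss.dtype == np.uint16:
        return 65535 - gloss     # 16-bit maps
    return 1.0 - gloss           # float maps assumed normalized to [0, 1]

gloss = np.array([[0, 128, 255]], dtype=np.uint8)
print(gloss_to_rough(gloss))  # [[255 127   0]]
```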
@@ -1,700 +0,0 @@
-import uuid
-import dataclasses
-import re
-import os
-import logging
-from pathlib import Path
-from typing import Optional, Tuple, Dict
-
-import cv2
-import numpy as np
-
-from .base_stage import ProcessingStage
-from ..asset_context import AssetProcessingContext
-from rule_structure import FileRule
-from utils.path_utils import sanitize_filename
-from ...utils import image_processing_utils as ipu
-
-logger = logging.getLogger(__name__)
-
-class IndividualMapProcessingStage(ProcessingStage):
-    """
-    Processes individual texture map files based on FileRules.
-    This stage finds the source file, loads it, applies transformations
-    (resize, color space), saves a temporary processed version, and updates
-    the AssetProcessingContext with details.
-    """
-
-    def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
-        """
-        Executes the individual map processing logic.
-        """
-        asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
-        if context.status_flags.get('skip_asset', False):
-            logger.info(f"Asset '{asset_name_for_log}': Skipping individual map processing due to skip_asset flag.")
-            return context
-
-        if not hasattr(context, 'processed_maps_details') or context.processed_maps_details is None:
-            context.processed_maps_details = {}
-            logger.debug(f"Asset '{asset_name_for_log}': Initialized processed_maps_details.")
-
-        if not context.files_to_process:
-            logger.info(f"Asset '{asset_name_for_log}': No files to process in this stage.")
-            return context
-
-        # Source path for the asset group comes from SourceRule
-        if not context.source_rule or not context.source_rule.input_path:
-            logger.error(f"Asset '{asset_name_for_log}': SourceRule or SourceRule.input_path is not set. Cannot determine source base path.")
-            context.status_flags['individual_map_processing_failed'] = True
-            # Mark all file_rules as failed
-            for fr_idx, file_rule_to_fail in enumerate(context.files_to_process):
-                # Use fr_idx as the key for status update for these early failures
-                map_type_for_fail = file_rule_to_fail.item_type_override or file_rule_to_fail.item_type or "UnknownMapType"
-                self._update_file_rule_status(context, fr_idx, 'Failed', map_type=map_type_for_fail, details="SourceRule.input_path missing")
-            return context
-
-        # The workspace_path in the context should be the directory where files are extracted/available.
-        source_base_path = context.workspace_path
-        if not source_base_path.is_dir():
-            logger.error(f"Asset '{asset_name_for_log}': Workspace path '{source_base_path}' is not a valid directory.")
-            context.status_flags['individual_map_processing_failed'] = True
-            for fr_idx, file_rule_to_fail in enumerate(context.files_to_process):
-                # Use fr_idx as the key for status update
-                map_type_for_fail = file_rule_to_fail.item_type_override or file_rule_to_fail.item_type or "UnknownMapType"
-                self._update_file_rule_status(context, fr_idx, 'Failed', map_type=map_type_for_fail, details="Workspace path invalid")
-            return context
-
-        # Fetch config settings once before the loop
-        respect_variant_map_types = getattr(context.config_obj, "respect_variant_map_types", [])
-        image_resolutions = getattr(context.config_obj, "image_resolutions", {})
-        output_filename_pattern = getattr(context.config_obj, "output_filename_pattern", "[assetname]_[maptype]_[resolution].[ext]")
-
-        for file_rule_idx, file_rule in enumerate(context.files_to_process):
-            # file_rule_idx will be the key for processed_maps_details.
-            # processing_instance_tag is for unique temp files and detailed logging for this specific run.
-            processing_instance_tag = f"map_{file_rule_idx}_{uuid.uuid4().hex[:8]}"
-            current_map_key = file_rule_idx  # Key for processed_maps_details
-
-            if not file_rule.file_path:  # Ensure file_path exists, critical for later stages if they rely on it from FileRule
-                logger.error(f"Asset '{asset_name_for_log}', FileRule at index {file_rule_idx} has an empty or None file_path. Skipping this rule.")
-                self._update_file_rule_status(context, current_map_key, 'Failed',
-                                              processing_tag=processing_instance_tag,
-                                              details="FileRule has no file_path")
-                continue
-
-            initial_current_map_type = file_rule.item_type_override or file_rule.item_type or "UnknownMapType"
-
-            # --- START NEW SUFFIXING LOGIC ---
-            final_current_map_type = initial_current_map_type  # Default to initial
-
-            # 1. Determine Base Map Type from initial_current_map_type
-            base_map_type_match = re.match(r"(MAP_[A-Z]{3})", initial_current_map_type)
-
-            if base_map_type_match and context.asset_rule:
-                true_base_map_type = base_map_type_match.group(1)  # This is "MAP_XXX"
-
-                # 2. Count Occurrences and Find Index of current_file_rule in context.asset_rule.files
-                peers_of_same_base_type_in_asset_rule = []
-                for fr_asset in context.asset_rule.files:
-                    fr_asset_item_type = fr_asset.item_type_override or fr_asset.item_type or "UnknownMapType"
-                    fr_asset_base_map_type_match = re.match(r"(MAP_[A-Z]{3})", fr_asset_item_type)
-
-                    if fr_asset_base_map_type_match:
-                        fr_asset_base_map_type = fr_asset_base_map_type_match.group(1)
-                        if fr_asset_base_map_type == true_base_map_type:
-                            peers_of_same_base_type_in_asset_rule.append(fr_asset)
-
-                num_occurrences_of_base_type = len(peers_of_same_base_type_in_asset_rule)
-                current_instance_index = 0  # 1-based
-
-                try:
-                    current_instance_index = peers_of_same_base_type_in_asset_rule.index(file_rule) + 1
-                except ValueError:
-                    logger.warning(
-                        f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Initial Type: '{initial_current_map_type}', Base: '{true_base_map_type}'): "
-                        f"Could not find its own instance in the list of peers from asset_rule.files. "
-                        f"Number of peers found: {num_occurrences_of_base_type}. Suffixing may be affected."
-                    )
-
-                # 3. Determine Suffix
-                map_type_for_respect_check = true_base_map_type.replace("MAP_", "")  # e.g., "COL"
-                is_in_respect_list = map_type_for_respect_check in respect_variant_map_types
-
-                suffix_to_append = ""
-                if num_occurrences_of_base_type > 1:
-                    if current_instance_index > 0:
-                        suffix_to_append = f"-{current_instance_index}"
-                    else:
-                        logger.warning(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}': Index for multi-occurrence map type '{true_base_map_type}' (count: {num_occurrences_of_base_type}) not determined. Omitting numeric suffix.")
-                elif num_occurrences_of_base_type == 1 and is_in_respect_list:
-                    suffix_to_append = "-1"
-
-                # 4. Form the final_current_map_type
-                if suffix_to_append:
-                    final_current_map_type = true_base_map_type + suffix_to_append
-                else:
-                    final_current_map_type = initial_current_map_type
-
-            current_map_type = final_current_map_type
-            # --- END NEW SUFFIXING LOGIC ---
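The suffixing logic deleted above boils down to: extract the `MAP_XXX` base, count same-base peers among the asset's files, and append a 1-based `-N` suffix when there is more than one peer (or exactly one whose base is in the respect list). A condensed sketch over plain strings instead of FileRule objects:

```python
import re

def base_of(item_type: str):
    """Extract the MAP_XXX base from a map type string, or None."""
    m = re.match(r"(MAP_[A-Z]{3})", item_type)
    return m.group(1) if m else None

def suffixed_map_type(item_types, idx, respect_list=()):
    """Suffix the idx-th map type with its 1-based instance number among
    same-base peers, mirroring the deleted stage's numbering rule."""
    base = base_of(item_types[idx])
    if base is None:
        return item_types[idx]
    peers = [i for i, t in enumerate(item_types) if base_of(t) == base]
    instance = peers.index(idx) + 1  # 1-based position among peers
    if len(peers) > 1:
        return f"{base}-{instance}"
    if base.replace("MAP_", "") in respect_list:
        return f"{base}-1"
    return item_types[idx]

types = ["MAP_COL", "MAP_COL", "MAP_NRM"]
print([suffixed_map_type(types, i) for i in range(3)])
# ['MAP_COL-1', 'MAP_COL-2', 'MAP_NRM']
```

With `respect_list=("NRM",)` the lone `MAP_NRM` would become `MAP_NRM-1`, which is the `respect_variant_map_types` behavior.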
-
-            # --- START: Filename-friendly map type derivation ---
-            logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: --- Starting Filename-Friendly Map Type Logic for: {current_map_type} ---")
-            filename_friendly_map_type = current_map_type  # Fallback
-
-            # 1. Access FILE_TYPE_DEFINITIONS
-            file_type_definitions = None
-            logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Attempting to access context.config_obj.FILE_TYPE_DEFINITIONS.")
-            try:
-                file_type_definitions = context.config_obj.FILE_TYPE_DEFINITIONS
-                if not file_type_definitions:  # Check if it's None or empty
-                    logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: FILE_TYPE_DEFINITIONS is present but empty or None.")
-                else:
-                    sample_defs_log = {k: file_type_definitions[k] for k in list(file_type_definitions.keys())[:2]}  # Log first 2 for brevity
-                    logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Accessed FILE_TYPE_DEFINITIONS. Sample: {sample_defs_log}, Total keys: {len(file_type_definitions)}.")
-            except AttributeError:
-                logger.error(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Could not access context.config_obj.FILE_TYPE_DEFINITIONS via direct attribute.")
-
-            base_map_key_val = None  # Renamed from base_map_key to avoid conflict with current_map_key
-            suffix_part = ""
-
-            if file_type_definitions and isinstance(file_type_definitions, dict) and len(file_type_definitions) > 0:
-                base_map_key_val = None
-                suffix_part = ""
-
-                sorted_known_base_keys = sorted(list(file_type_definitions.keys()), key=len, reverse=True)
-                logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Sorted known base keys for parsing: {sorted_known_base_keys}")
-
-                for known_key in sorted_known_base_keys:
-                    logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Checking if '{current_map_type}' starts with '{known_key}'")
-                    if current_map_type.startswith(known_key):
-                        base_map_key_val = known_key
-                        suffix_part = current_map_type[len(known_key):]
-                        logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Match found! current_map_type: '{current_map_type}', base_map_key_val: '{base_map_key_val}', suffix_part: '{suffix_part}'")
-                        break
-
-                if base_map_key_val is None:
-                    logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Could not parse base_map_key_val from '{current_map_type}' using known keys. Fallback: filename_friendly_map_type = '{filename_friendly_map_type}'.")
-                else:
-                    definition = file_type_definitions.get(base_map_key_val)
-                    logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Definition for '{base_map_key_val}': {definition}")
-                    if definition and isinstance(definition, dict):
-                        standard_type_alias = definition.get("standard_type")
-                        logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Standard type alias for '{base_map_key_val}': '{standard_type_alias}'")
-                        if standard_type_alias and isinstance(standard_type_alias, str) and standard_type_alias.strip():
-                            filename_friendly_map_type = standard_type_alias.strip() + suffix_part
-                            logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Successfully transformed map type: '{current_map_type}' -> '{filename_friendly_map_type}' (standard_type_alias: '{standard_type_alias}', suffix_part: '{suffix_part}').")
-                        else:
-                            logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Standard type alias for '{base_map_key_val}' is missing, empty, or not a string (value: '{standard_type_alias}'). Using fallback. filename_friendly_map_type = '{filename_friendly_map_type}'.")
-                    else:
-                        logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: No definition or invalid definition for '{base_map_key_val}' (value: {definition}). Using fallback. filename_friendly_map_type = '{filename_friendly_map_type}'.")
-            elif file_type_definitions is None:
-                logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: FILE_TYPE_DEFINITIONS not available for lookup (was None). Using fallback. filename_friendly_map_type = '{filename_friendly_map_type}'.")
-            elif not isinstance(file_type_definitions, dict):
-                logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: FILE_TYPE_DEFINITIONS is not a dictionary (type: {type(file_type_definitions)}). Using fallback. filename_friendly_map_type = '{filename_friendly_map_type}'.")
-            else:
-                logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: FILE_TYPE_DEFINITIONS is an empty dictionary. Using fallback. filename_friendly_map_type = '{filename_friendly_map_type}'.")
-
-            logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Final filename_friendly_map_type: '{filename_friendly_map_type}'")
-            # --- END: Filename-friendly map type derivation ---
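The derivation above is a longest-prefix match against `FILE_TYPE_DEFINITIONS` keys followed by an alias swap that preserves any variant suffix. A condensed sketch; the `MAP_COL_VAR` key and both aliases are made-up examples, only the `standard_type` field name comes from the deleted code:

```python
def friendly_map_type(map_type: str, definitions: dict) -> str:
    """Longest-prefix match of map_type against definition keys, then swap
    the matched base for its standard_type alias, keeping the suffix."""
    for key in sorted(definitions, key=len, reverse=True):  # longest keys first
        if map_type.startswith(key):
            alias = (definitions[key] or {}).get("standard_type", "")
            if isinstance(alias, str) and alias.strip():
                return alias.strip() + map_type[len(key):]
            break  # matched a key but no usable alias -> fall back
    return map_type  # fallback: unchanged

defs = {
    "MAP_COL": {"standard_type": "Color"},
    "MAP_COL_VAR": {"standard_type": "ColorVariant"},  # hypothetical longer key
}
print(friendly_map_type("MAP_COL-2", defs))    # Color-2
print(friendly_map_type("MAP_COL_VAR", defs))  # ColorVariant
```

Sorting keys by length descending is what keeps `MAP_COL_VAR` from being swallowed by the shorter `MAP_COL` prefix, which is why the deleted code built `sorted_known_base_keys` the same way.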
-
-            if not current_map_type or not current_map_type.startswith("MAP_") or current_map_type == "MAP_GEN_COMPOSITE":
-                logger.debug(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}': Skipping, item_type '{current_map_type}' (initial: '{initial_current_map_type}') not targeted for individual processing.")
-                continue
-
-            logger.info(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Type: {current_map_type}, Initial Type: {initial_current_map_type}, Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Starting individual processing.")
-
-            # A. Find Source File (using file_rule.file_path as the pattern relative to source_base_path)
-            source_file_path = self._find_source_file(source_base_path, file_rule.file_path, asset_name_for_log, processing_instance_tag)
-            if not source_file_path:
-                logger.error(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Source file not found with path/pattern '{file_rule.file_path}' in '{source_base_path}'.")
-                self._update_file_rule_status(context, current_map_key, 'Failed',
-                                              map_type=filename_friendly_map_type,
-                                              processing_map_type=current_map_type,
-                                              source_file_rule_index=file_rule_idx,
-                                              processing_tag=processing_instance_tag,
-                                              details="Source file not found")
-                continue
-
-            # B. Load and Transform Image
-            image_data: Optional[np.ndarray] = ipu.load_image(str(source_file_path))
-            if image_data is None:
-                logger.error(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Failed to load image from '{source_file_path}'.")
-                self._update_file_rule_status(context, current_map_key, 'Failed',
-                                              map_type=filename_friendly_map_type,
-                                              processing_map_type=current_map_type,
-                                              source_file_rule_index=file_rule_idx,
-                                              processing_tag=processing_instance_tag,
-                                              source_file=str(source_file_path),
-                                              details="Image load failed")
-                continue
-
-            original_height, original_width = image_data.shape[:2]
-            logger.debug(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Loaded image '{source_file_path}' with dimensions {original_width}x{original_height}.")
-
-            # 1. Initial Power-of-Two (POT) Downscaling
-            pot_width = ipu.get_nearest_power_of_two_downscale(original_width)
-            pot_height = ipu.get_nearest_power_of_two_downscale(original_height)
-
-            # Maintain aspect ratio for initial POT scaling, using the smaller of the scaled dimensions
-            # This ensures we only downscale.
-            if original_width > 0 and original_height > 0:  # Avoid division by zero
-                aspect_ratio = original_width / original_height
-
-                # Calculate new dimensions based on POT width, then POT height, and pick the one that results in downscale or same size
-                pot_h_from_w = int(pot_width / aspect_ratio)
-                pot_w_from_h = int(pot_height * aspect_ratio)
-
-                # Option 1: Scale by width, adjust height
-                candidate1_w, candidate1_h = pot_width, ipu.get_nearest_power_of_two_downscale(pot_h_from_w)
-                # Option 2: Scale by height, adjust width
-                candidate2_w, candidate2_h = ipu.get_nearest_power_of_two_downscale(pot_w_from_h), pot_height
-
-                # Ensure candidates are not upscaling
-                if candidate1_w > original_width or candidate1_h > original_height:
-                    candidate1_w, candidate1_h = original_width, original_height  # Fallback to original if upscaling
-                if candidate2_w > original_width or candidate2_h > original_height:
-                    candidate2_w, candidate2_h = original_width, original_height  # Fallback to original if upscaling
-
-                # Choose the candidate that results in a larger area (preferring less downscaling if multiple POT options)
-                # but still respects the POT downscale logic for each dimension individually.
-                # The actual POT dimensions are already calculated by get_nearest_power_of_two_downscale.
-                # We need to decide if we base the aspect ratio calc on pot_width or pot_height.
-                # The goal is to make one dimension POT and the other POT while maintaining aspect as much as possible, only downscaling.
-
-                final_pot_width = ipu.get_nearest_power_of_two_downscale(original_width)
-                final_pot_height = ipu.get_nearest_power_of_two_downscale(original_height)
-
-                # If original aspect is not 1:1, one of the POT dimensions might need further adjustment to maintain aspect
-                # after the other dimension is set to its POT.
-                # We prioritize fitting within the *downscaled* POT dimensions.
-
-                # Scale to fit within final_pot_width, adjust height, then make height POT (downscale)
-                scaled_h_for_pot_w = max(1, round(final_pot_width / aspect_ratio))
-                h1 = ipu.get_nearest_power_of_two_downscale(scaled_h_for_pot_w)
-                w1 = final_pot_width
-                if h1 > final_pot_height:  # If this adjustment made height too big, re-evaluate
-                    h1 = final_pot_height
-                    w1 = ipu.get_nearest_power_of_two_downscale(max(1, round(h1 * aspect_ratio)))
-
-
-                # Scale to fit within final_pot_height, adjust width, then make width POT (downscale)
-                scaled_w_for_pot_h = max(1, round(final_pot_height * aspect_ratio))
-                w2 = ipu.get_nearest_power_of_two_downscale(scaled_w_for_pot_h)
-                h2 = final_pot_height
-                if w2 > final_pot_width:  # If this adjustment made width too big, re-evaluate
-                    w2 = final_pot_width
-                    h2 = ipu.get_nearest_power_of_two_downscale(max(1, round(w2 / aspect_ratio)))
-
-                # Choose the option that results in larger area (less aggressive downscaling)
-                # while ensuring both dimensions are POT and not upscaled from original.
-                if w1 * h1 >= w2 * h2:
-                    base_pot_width, base_pot_height = w1, h1
-                else:
-                    base_pot_width, base_pot_height = w2, h2
-
-                # Final check to ensure no upscaling from original dimensions
-                base_pot_width = min(base_pot_width, original_width)
-                base_pot_height = min(base_pot_height, original_height)
-                # And ensure they are POT
-                base_pot_width = ipu.get_nearest_power_of_two_downscale(base_pot_width)
-                base_pot_height = ipu.get_nearest_power_of_two_downscale(base_pot_height)
-
-            else:  # Handle cases like 0-dim images, though load_image should prevent this
-                base_pot_width, base_pot_height = 1, 1
-
-
-            logger.info(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Original dims: ({original_width},{original_height}), Initial POT Scaled Dims: ({base_pot_width},{base_pot_height}).")
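The POT selection above leans entirely on `ipu.get_nearest_power_of_two_downscale`, whose implementation is not shown in this diff. The behavior the calling code assumes — the largest power of two that does not exceed the input, never upscaling — can be sketched as:

```python
def nearest_power_of_two_downscale(n: int) -> int:
    """Largest power of two <= n, clamped to at least 1.
    Assumed behavior of ipu.get_nearest_power_of_two_downscale; the real
    helper may differ in edge-case handling."""
    if n < 1:
        return 1
    return 1 << (n.bit_length() - 1)  # drop all bits below the highest set bit

print(nearest_power_of_two_downscale(4096))  # 4096 (already POT, unchanged)
print(nearest_power_of_two_downscale(3000))  # 2048
print(nearest_power_of_two_downscale(1))     # 1
```

With this behavior, a 3000x2000 source would first collapse to 2048x1024 or 2048x2048 candidates, which is why the surrounding logic re-checks aspect ratio and picks the larger-area option.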
-
-            # Calculate and store aspect ratio change string
-            if original_width > 0 and original_height > 0 and base_pot_width > 0 and base_pot_height > 0:
-                aspect_change_str = ipu.normalize_aspect_ratio_change(
-                    original_width, original_height,
-                    base_pot_width, base_pot_height
-                )
-                if aspect_change_str:
-                    # This will overwrite if multiple maps are processed; specified by requirements.
-                    context.asset_metadata['aspect_ratio_change_string'] = aspect_change_str
-                    logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Map Type {current_map_type}: Calculated aspect ratio change string: '{aspect_change_str}' (Original: {original_width}x{original_height}, Base POT: {base_pot_width}x{base_pot_height}). Stored in asset_metadata.")
-                else:
-                    logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Map Type {current_map_type}: Failed to calculate aspect ratio change string.")
-            else:
-                logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Map Type {current_map_type}: Skipping aspect ratio change string calculation due to invalid dimensions (Original: {original_width}x{original_height}, Base POT: {base_pot_width}x{base_pot_height}).")
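The string format returned by `ipu.normalize_aspect_ratio_change` is likewise not visible in this diff. One plausible reading, purely illustrative, reduces both ratios to lowest terms and joins them:

```python
from math import gcd

def ratio_str(w: int, h: int) -> str:
    """Render w:h in lowest terms, e.g. 3000x2000 -> '3:2'."""
    g = gcd(w, h)
    return f"{w // g}:{h // g}"

def aspect_ratio_change(ow: int, oh: int, nw: int, nh: int) -> str:
    """Describe an aspect-ratio change as 'old -> new'.
    Hypothetical format; the real ipu helper may render it differently."""
    return f"{ratio_str(ow, oh)} -> {ratio_str(nw, nh)}"

print(aspect_ratio_change(3000, 2000, 2048, 2048))  # 3:2 -> 1:1
```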
-
-            base_pot_image_data = image_data.copy()
-            if (base_pot_width, base_pot_height) != (original_width, original_height):
-                interpolation = cv2.INTER_AREA  # Good for downscaling
-                base_pot_image_data = ipu.resize_image(base_pot_image_data, base_pot_width, base_pot_height, interpolation=interpolation)
-                if base_pot_image_data is None:
-                    logger.error(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Failed to resize image to base POT dimensions.")
-                    self._update_file_rule_status(context, current_map_key, 'Failed',
-                                                  map_type=filename_friendly_map_type,
-                                                  processing_map_type=current_map_type,
-                                                  source_file_rule_index=file_rule_idx,
-                                                  processing_tag=processing_instance_tag,
-                                                  source_file=str(source_file_path),
-                                                  original_dimensions=(original_width, original_height),
-                                                  details="Base POT resize failed")
-                    continue
-
-            # Color Profile Management (after initial POT resize, before multi-res saving)
-            # Initialize transform settings with defaults for color management
-            transform_settings = {
-                "color_profile_management": False,  # Default, can be overridden by FileRule
-                "target_color_profile": "sRGB",  # Default
-                "output_format_settings": None  # For JPG quality, PNG compression
-            }
-            if file_rule.channel_merge_instructions and 'transform' in file_rule.channel_merge_instructions:
-                custom_transform_settings = file_rule.channel_merge_instructions['transform']
-                if isinstance(custom_transform_settings, dict):
-                    transform_settings.update(custom_transform_settings)
-                    logger.info(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Loaded transform settings for color/output from file_rule.")
-
-            if transform_settings['color_profile_management'] and transform_settings['target_color_profile'] == "RGB":
-                if len(base_pot_image_data.shape) == 3 and base_pot_image_data.shape[2] == 3:  # BGR to RGB
-                    logger.info(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Converting BGR to RGB for base POT image.")
-                    base_pot_image_data = ipu.convert_bgr_to_rgb(base_pot_image_data)
-                elif len(base_pot_image_data.shape) == 3 and base_pot_image_data.shape[2] == 4:  # BGRA to RGBA
-                    logger.info(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Converting BGRA to RGBA for base POT image.")
-                    base_pot_image_data = ipu.convert_bgra_to_rgba(base_pot_image_data)
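`ipu.convert_bgr_to_rgb` and `ipu.convert_bgra_to_rgba` presumably wrap a channel-order swap, since OpenCV loads images as BGR/BGRA. A hedged NumPy-only sketch of that swap (the real helpers may use `cv2.cvtColor` instead):

```python
import numpy as np

def bgr_to_rgb(img: np.ndarray) -> np.ndarray:
    """Reverse the first three channels; handles BGR->RGB and BGRA->RGBA
    alike, because any alpha channel stays in place at index 3."""
    if img.ndim == 3 and img.shape[2] >= 3:
        out = img.copy()
        out[..., :3] = img[..., 2::-1]  # channels 2,1,0 -> R,G,B first
        return out
    return img  # single-channel image: nothing to swap

px = np.array([[[255, 0, 0]]], dtype=np.uint8)  # one pure-blue BGR pixel
print(bgr_to_rgb(px))  # [[[  0   0 255]]]
```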
# Ensure engine_temp_dir exists before saving base POT
|
|
||||||
if not context.engine_temp_dir.exists():
|
|
||||||
try:
|
|
||||||
context.engine_temp_dir.mkdir(parents=True, exist_ok=True)
|
|
||||||
logger.info(f"Asset '{asset_name_for_log}': Created engine_temp_dir at '{context.engine_temp_dir}'")
|
|
||||||
except OSError as e:
|
|
||||||
logger.error(f"Asset '{asset_name_for_log}': Failed to create engine_temp_dir '{context.engine_temp_dir}': {e}")
|
|
||||||
self._update_file_rule_status(context, current_map_key, 'Failed',
|
|
||||||
map_type=filename_friendly_map_type,
|
|
||||||
processing_map_type=current_map_type,
|
|
||||||
source_file_rule_index=file_rule_idx,
|
|
||||||
processing_tag=processing_instance_tag,
|
|
||||||
source_file=str(source_file_path),
|
|
||||||
details="Failed to create temp directory for base POT")
|
|
||||||
continue
|
|
||||||
|
|
||||||
        temp_filename_suffix = Path(source_file_path).suffix
        base_pot_temp_filename = f"{processing_instance_tag}_basePOT{temp_filename_suffix}"  # Use processing_instance_tag
        base_pot_temp_path = context.engine_temp_dir / base_pot_temp_filename

        # Determine save parameters for base POT image (can be different from variants if needed)
        base_save_params = []
        base_output_ext = temp_filename_suffix.lstrip('.')  # Default to original, can be overridden by format rules
        # TODO: Add logic here to determine base_output_ext and base_save_params based on bit depth and config, similar to variants.
        # For now, using simple save.

        if not ipu.save_image(str(base_pot_temp_path), base_pot_image_data, params=base_save_params):
            logger.error(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Failed to save base POT image to '{base_pot_temp_path}'.")
            self._update_file_rule_status(context, current_map_key, 'Failed',
                                          map_type=filename_friendly_map_type,
                                          processing_map_type=current_map_type,
                                          source_file_rule_index=file_rule_idx,
                                          processing_tag=processing_instance_tag,
                                          source_file=str(source_file_path),
                                          original_dimensions=(original_width, original_height),
                                          base_pot_dimensions=(base_pot_width, base_pot_height),
                                          details="Base POT image save failed")
            continue

        logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Successfully saved base POT image to '{base_pot_temp_path}' with dims ({base_pot_width}x{base_pot_height}).")

        # Initialize/update the status for this map in processed_maps_details
        self._update_file_rule_status(
            context,
            current_map_key,  # Use file_rule_idx as key
            'BasePOTSaved',  # Intermediate status, will be updated after variant check
            map_type=filename_friendly_map_type,
            processing_map_type=current_map_type,
            source_file_rule_index=file_rule_idx,
            processing_tag=processing_instance_tag,  # Store the tag
            source_file=str(source_file_path),
            original_dimensions=(original_width, original_height),
            base_pot_dimensions=(base_pot_width, base_pot_height),
            temp_processed_file=str(base_pot_temp_path)  # Store path to the saved base POT
        )

        # 2. Multiple Resolution Output (Variants)
        processed_at_least_one_resolution_variant = False
        # Resolution variants are attempted for all map types individually processed.
        # The filter at the beginning of the loop ensures only relevant maps reach this stage.
        generate_variants_for_this_map_type = True

        if generate_variants_for_this_map_type:  # This will now always be true if code execution reaches here
            logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Map type '{current_map_type}' is eligible for individual processing. Attempting to generate resolution variants.")
            # Sort resolutions from largest to smallest
            sorted_resolutions = sorted(image_resolutions.items(), key=lambda item: item[1], reverse=True)
            logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Sorted resolutions for variant processing: {sorted_resolutions}")

            for res_key, res_max_dim in sorted_resolutions:
                current_w, current_h = base_pot_image_data.shape[1], base_pot_image_data.shape[0]

                if current_w <= 0 or current_h <= 0:
                    logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Res {res_key}: Base POT image has zero dimension ({current_w}x{current_h}). Skipping this resolution variant.")
                    continue

                if max(current_w, current_h) >= res_max_dim:
                    target_w_res, target_h_res = current_w, current_h
                    if max(current_w, current_h) > res_max_dim:
                        if current_w >= current_h:
                            target_w_res = res_max_dim
                            target_h_res = max(1, round(target_w_res / (current_w / current_h)))
                        else:
                            target_h_res = res_max_dim
                            target_w_res = max(1, round(target_h_res * (current_w / current_h)))
                else:
                    logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Res {res_key}: Base POT image ({current_w}x{current_h}) is smaller than target max dim {res_max_dim}. Skipping this resolution variant.")
                    continue

                target_w_res = min(target_w_res, current_w)
                target_h_res = min(target_h_res, current_h)

                if target_w_res <= 0 or target_h_res <= 0:
                    logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Res {res_key}: Calculated target variant dims are zero or negative ({target_w_res}x{target_h_res}). Skipping.")
                    continue

                logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Res {res_key}: Processing variant for {res_max_dim}. Base POT Dims: ({current_w}x{current_h}), Target Dims for {res_key}: ({target_w_res}x{target_h_res}).")

                output_image_data_for_res = base_pot_image_data
                if (target_w_res, target_h_res) != (current_w, current_h):
                    interpolation_res = cv2.INTER_AREA
                    output_image_data_for_res = ipu.resize_image(base_pot_image_data, target_w_res, target_h_res, interpolation=interpolation_res)
                    if output_image_data_for_res is None:
                        logger.error(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Res {res_key}: Failed to resize image for resolution variant {res_key}.")
                        continue

                assetname_placeholder = context.asset_rule.asset_name if context.asset_rule else "UnknownAsset"
                resolution_placeholder = res_key

                # TODO: Implement proper output format/extension determination for variants
                output_ext_variant = temp_filename_suffix.lstrip('.')

                temp_output_filename_variant = output_filename_pattern.replace("[assetname]", sanitize_filename(assetname_placeholder)) \
                    .replace("[maptype]", sanitize_filename(filename_friendly_map_type)) \
                    .replace("[resolution]", sanitize_filename(resolution_placeholder)) \
                    .replace("[ext]", output_ext_variant)
                temp_output_filename_variant = f"{processing_instance_tag}_variant_{temp_output_filename_variant}"  # Use processing_instance_tag
                temp_output_path_variant = context.engine_temp_dir / temp_output_filename_variant

                save_params_variant = []
                if transform_settings.get('output_format_settings'):
                    if output_ext_variant.lower() in ['jpg', 'jpeg']:
                        quality = transform_settings['output_format_settings'].get('quality', context.config_obj.get("JPG_QUALITY", 95))
                        save_params_variant = [cv2.IMWRITE_JPEG_QUALITY, quality]
                    elif output_ext_variant.lower() == 'png':
                        compression = transform_settings['output_format_settings'].get('compression_level', context.config_obj.get("PNG_COMPRESSION_LEVEL", 6))
                        save_params_variant = [cv2.IMWRITE_PNG_COMPRESSION, compression]

                save_success_variant = ipu.save_image(str(temp_output_path_variant), output_image_data_for_res, params=save_params_variant)

                if not save_success_variant:
                    logger.error(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Res {res_key}: Failed to save temporary variant image to '{temp_output_path_variant}'.")
                    continue

                logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Res {res_key}: Successfully saved temporary variant map to '{temp_output_path_variant}' with dims ({target_w_res}x{target_h_res}).")
                processed_at_least_one_resolution_variant = True

                if 'variants' not in context.processed_maps_details[current_map_key]:  # Use current_map_key (file_rule_idx)
                    context.processed_maps_details[current_map_key]['variants'] = []

                context.processed_maps_details[current_map_key]['variants'].append({  # Use current_map_key (file_rule_idx)
                    'resolution_key': res_key,
                    'temp_path': str(temp_output_path_variant),
                    'dimensions': (target_w_res, target_h_res),
                    'resolution_name': f"{target_w_res}x{target_h_res}"
                })

                if 'processed_files' not in context.asset_metadata:
                    context.asset_metadata['processed_files'] = []
                context.asset_metadata['processed_files'].append({
                    'processed_map_key': current_map_key,  # Use current_map_key (file_rule_idx)
                    'resolution_key': res_key,
                    'path': str(temp_output_path_variant),
                    'type': 'temporary_map_variant',
                    'map_type': current_map_type,
                    'dimensions_w': target_w_res,
                    'dimensions_h': target_h_res
                })
        # Calculate and store image statistics for the lowest resolution output
        lowest_res_image_data_for_stats = None
        image_to_stat_path_for_log = "N/A"
        source_of_stats_image = "unknown"

        if processed_at_least_one_resolution_variant and \
           current_map_key in context.processed_maps_details and \
           'variants' in context.processed_maps_details[current_map_key] and \
           context.processed_maps_details[current_map_key]['variants']:

            variants_list = context.processed_maps_details[current_map_key]['variants']
            valid_variants_for_stats = [
                v for v in variants_list
                if isinstance(v.get('dimensions'), tuple) and len(v['dimensions']) == 2 and v['dimensions'][0] > 0 and v['dimensions'][1] > 0
            ]

            if valid_variants_for_stats:
                smallest_variant = min(valid_variants_for_stats, key=lambda v: v['dimensions'][0] * v['dimensions'][1])

                if smallest_variant and 'temp_path' in smallest_variant and smallest_variant.get('dimensions'):
                    smallest_res_w, smallest_res_h = smallest_variant['dimensions']
                    logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Identified smallest variant for stats: {smallest_variant.get('resolution_key', 'N/A')} ({smallest_res_w}x{smallest_res_h}) at {smallest_variant['temp_path']}")
                    lowest_res_image_data_for_stats = ipu.load_image(smallest_variant['temp_path'])
                    image_to_stat_path_for_log = smallest_variant['temp_path']
                    source_of_stats_image = f"variant {smallest_variant.get('resolution_key', 'N/A')}"
                    if lowest_res_image_data_for_stats is None:
                        logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Failed to load smallest variant image '{smallest_variant['temp_path']}' for stats.")
                else:
                    logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Could not determine smallest variant for stats from valid variants list (details missing).")
            else:
                logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: No valid variants found to determine the smallest one for stats.")

        if lowest_res_image_data_for_stats is None:
            if base_pot_image_data is not None:
                logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Using base POT image for stats (dimensions: {base_pot_width}x{base_pot_height}). Smallest variant not available/loaded or no variants generated.")
                lowest_res_image_data_for_stats = base_pot_image_data
                image_to_stat_path_for_log = f"In-memory base POT image (dims: {base_pot_width}x{base_pot_height})"
                source_of_stats_image = "base POT"
            else:
                logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Base POT image data is also None. Cannot calculate stats.")

        if lowest_res_image_data_for_stats is not None:
            stats_dict = ipu.calculate_image_stats(lowest_res_image_data_for_stats)
            if stats_dict and "error" not in stats_dict:
                if 'image_stats_lowest_res' not in context.asset_metadata:
                    context.asset_metadata['image_stats_lowest_res'] = {}

                context.asset_metadata['image_stats_lowest_res'][current_map_type] = stats_dict  # Keyed by map_type
                logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Map Type '{current_map_type}': Calculated and stored image stats from '{source_of_stats_image}' (source ref: '{image_to_stat_path_for_log}').")
            elif stats_dict and "error" in stats_dict:
                logger.error(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Map Type '{current_map_type}': Error calculating image stats from '{source_of_stats_image}': {stats_dict['error']}.")
            else:
                logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Map Type '{current_map_type}': Failed to calculate image stats from '{source_of_stats_image}' (result was None or empty).")
        else:
            logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Map Type '{current_map_type}': No image data available (from variant or base POT) to calculate stats.")

        # Final status update based on whether variants were generated (and expected)
        if generate_variants_for_this_map_type:
            if processed_at_least_one_resolution_variant:
                self._update_file_rule_status(context, current_map_key, 'Processed_With_Variants',
                                              map_type=filename_friendly_map_type,
                                              processing_map_type=current_map_type,
                                              source_file_rule_index=file_rule_idx,
                                              processing_tag=processing_instance_tag,
                                              details="Successfully processed with multiple resolution variants.")
            else:
                logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Variants were expected for map type '{current_map_type}', but none were generated (e.g., base POT too small for any variant tier).")
                self._update_file_rule_status(context, current_map_key, 'Processed_No_Variants',
                                              map_type=filename_friendly_map_type,
                                              processing_map_type=current_map_type,
                                              source_file_rule_index=file_rule_idx,
                                              processing_tag=processing_instance_tag,
                                              details="Variants expected but none generated (e.g., base POT too small).")
        else:  # No variants were expected for this map type
            self._update_file_rule_status(context, current_map_key, 'Processed_No_Variants',
                                          map_type=filename_friendly_map_type,
                                          processing_map_type=current_map_type,
                                          source_file_rule_index=file_rule_idx,
                                          processing_tag=processing_instance_tag,
                                          details="Processed to base POT; variants not applicable for this map type.")

    logger.info(f"Asset '{asset_name_for_log}': Finished individual map processing stage.")

    return context

    def _find_source_file(self, base_path: Path, pattern: str, asset_name_for_log: str, processing_instance_tag: str) -> Optional[Path]:
        """
        Finds a single source file matching the pattern within the base_path.
        Logs use processing_instance_tag for specific run tracing.
        """
        if not pattern:
            logger.warning(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: Empty file_path provided in FileRule.")
            return None

        # If pattern is an absolute path, use it directly
        potential_abs_path = Path(pattern)
        if potential_abs_path.is_absolute() and potential_abs_path.exists():
            logger.debug(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: file_path '{pattern}' is absolute and exists. Using it directly.")
            return potential_abs_path
        elif potential_abs_path.is_absolute():
            logger.warning(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: file_path '{pattern}' is absolute but does not exist.")
            # Fall through to try resolving against base_path if it's just a name/relative pattern

        # Treat pattern as relative to base_path.
        # This could be an exact name or a glob pattern.
        try:
            # First, check if pattern is an exact relative path
            exact_match_path = base_path / pattern
            if exact_match_path.exists() and exact_match_path.is_file():
                logger.debug(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: Found exact match for '{pattern}' at '{exact_match_path}'.")
                return exact_match_path

            # If not an exact match, try as a glob pattern (recursive)
            matched_files_rglob = list(base_path.rglob(pattern))
            if matched_files_rglob:
                if len(matched_files_rglob) > 1:
                    logger.warning(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: Multiple files ({len(matched_files_rglob)}) found for pattern '{pattern}' in '{base_path}' (recursive). Using first: {matched_files_rglob[0]}. Files: {matched_files_rglob}")
                return matched_files_rglob[0]

            # Fall back to a non-recursive glob if the recursive search found nothing
            matched_files_glob = list(base_path.glob(pattern))
            if matched_files_glob:
                if len(matched_files_glob) > 1:
                    logger.warning(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: Multiple files ({len(matched_files_glob)}) found for pattern '{pattern}' in '{base_path}' (non-recursive). Using first: {matched_files_glob[0]}. Files: {matched_files_glob}")
                return matched_files_glob[0]

            logger.debug(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: No files found matching pattern '{pattern}' in '{base_path}' (exact, recursive, or non-recursive).")
            return None
        except Exception as e:
            logger.error(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: Error searching for file with pattern '{pattern}' in '{base_path}': {e}")
            return None

    def _update_file_rule_status(self, context: AssetProcessingContext, map_key_index: int, status: str, **kwargs):  # Renamed map_id_hex to map_key_index
        """Helper to update processed_maps_details for a map, keyed by file_rule_idx."""
        asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
        if map_key_index not in context.processed_maps_details:
            context.processed_maps_details[map_key_index] = {}

        context.processed_maps_details[map_key_index]['status'] = status
        for key, value in kwargs.items():
            # Ensure source_file_rule_id_hex is not added if it was somehow passed (it shouldn't be)
            if key == 'source_file_rule_id_hex':
                continue
            context.processed_maps_details[map_key_index][key] = value

        if 'map_type' not in context.processed_maps_details[map_key_index] and 'map_type' in kwargs:
            context.processed_maps_details[map_key_index]['map_type'] = kwargs['map_type']

        # Add formatted resolution names
        if 'original_dimensions' in kwargs and isinstance(kwargs['original_dimensions'], tuple) and len(kwargs['original_dimensions']) == 2:
            orig_w, orig_h = kwargs['original_dimensions']
            context.processed_maps_details[map_key_index]['original_resolution_name'] = f"{orig_w}x{orig_h}"

        # Determine the correct dimensions to use for 'processed_resolution_name'.
        # This name refers to the base POT scaled image dimensions before variant generation.
        dims_to_log_as_base_processed = None
        if 'base_pot_dimensions' in kwargs and isinstance(kwargs['base_pot_dimensions'], tuple) and len(kwargs['base_pot_dimensions']) == 2:
            # This key is used when status is 'Processed_With_Variants'
            dims_to_log_as_base_processed = kwargs['base_pot_dimensions']
        elif 'processed_dimensions' in kwargs and isinstance(kwargs['processed_dimensions'], tuple) and len(kwargs['processed_dimensions']) == 2:
            # This key is used when status is 'Processed_No_Variants' (and potentially others)
            dims_to_log_as_base_processed = kwargs['processed_dimensions']

        if dims_to_log_as_base_processed:
            proc_w, proc_h = dims_to_log_as_base_processed
            resolution_name_str = f"{proc_w}x{proc_h}"
            context.processed_maps_details[map_key_index]['base_pot_resolution_name'] = resolution_name_str
            # Ensure 'processed_resolution_name' is also set for OutputOrganizationStage compatibility
            context.processed_maps_details[map_key_index]['processed_resolution_name'] = resolution_name_str
        elif 'processed_dimensions' in kwargs or 'base_pot_dimensions' in kwargs:
            details_for_warning = kwargs.get('processed_dimensions', kwargs.get('base_pot_dimensions'))
            logger.warning(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index}: 'processed_dimensions' or 'base_pot_dimensions' key present but its value is not a valid 2-element tuple: {details_for_warning}")

        # If temp_processed_file was passed, ensure it's in the details
        if 'temp_processed_file' in kwargs:
            context.processed_maps_details[map_key_index]['temp_processed_file'] = kwargs['temp_processed_file']

        # Log all details being stored for clarity, including the newly added resolution names
        log_details = context.processed_maps_details[map_key_index].copy()
        # Avoid logging full image data if it accidentally gets into kwargs
        if 'image_data' in log_details:
            del log_details['image_data']
        if 'base_pot_image_data' in log_details:
            del log_details['base_pot_image_data']
        logger.debug(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index}: Status updated to '{status}'. Details: {log_details}")
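The resolution-variant loop above computes an aspect-preserving target size per tier and clamps it to the base POT dimensions, skipping tiers larger than the image. A self-contained sketch of that sizing math (function and variable names here are demo-only, not project API):

```python
def variant_dims(w: int, h: int, res_max_dim: int):
    """Aspect-preserving target size for one variant tier, clamped to the base size.

    Returns None when the base image is smaller than the tier (the tier is skipped).
    """
    if max(w, h) < res_max_dim:
        return None
    target_w, target_h = w, h
    if max(w, h) > res_max_dim:
        if w >= h:
            # Landscape (or square): fix the width, derive the height.
            target_w = res_max_dim
            target_h = max(1, round(target_w / (w / h)))
        else:
            # Portrait: fix the height, derive the width.
            target_h = res_max_dim
            target_w = max(1, round(target_h * (w / h)))
    # Never upscale beyond the base image.
    return (min(target_w, w), min(target_h, h))

# A 2048x1024 base mapped onto a 1024 tier halves both dimensions.
print(variant_dims(2048, 1024, 1024))  # (1024, 512)
```

Sorting tiers largest-to-smallest, as the loop does, means each skipped tier logs once and smaller tiers still get a correctly clamped size.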
83
processing/pipeline/stages/initial_scaling.py
Normal file
@@ -0,0 +1,83 @@
import logging
from typing import Tuple

import cv2  # Assuming cv2 is available for interpolation flags
import numpy as np

from .base_stage import ProcessingStage
# Import necessary context classes and utils
from ..asset_context import InitialScalingInput, InitialScalingOutput
from ...utils import image_processing_utils as ipu

log = logging.getLogger(__name__)

class InitialScalingStage(ProcessingStage):
    """
    Applies initial scaling (e.g., Power-of-Two downscaling) to image data
    if configured via the InitialScalingInput.
    """

    def execute(self, input_data: InitialScalingInput) -> InitialScalingOutput:
        """
        Applies scaling based on input_data.initial_scaling_mode.
        """
        log.debug(f"Initial Scaling Stage: Mode '{input_data.initial_scaling_mode}'.")

        image_to_scale = input_data.image_data
        original_dims_wh = input_data.original_dimensions
        scaling_mode = input_data.initial_scaling_mode
        scaling_applied = False
        final_image_data = image_to_scale  # Default to original if no scaling happens

        if image_to_scale is None or image_to_scale.size == 0:
            log.warning("Initial Scaling Stage: Input image data is None or empty. Skipping.")
            # Return original (empty) data and indicate no scaling
            return InitialScalingOutput(
                scaled_image_data=np.array([]),
                scaling_applied=False,
                final_dimensions=(0, 0)
            )

        if original_dims_wh is None:
            log.warning("Initial Scaling Stage: Original dimensions not provided. Using current image shape.")
            h_pre_scale, w_pre_scale = image_to_scale.shape[:2]
            original_dims_wh = (w_pre_scale, h_pre_scale)
        else:
            w_pre_scale, h_pre_scale = original_dims_wh

        if scaling_mode == "POT_DOWNSCALE":
            pot_w = ipu.get_nearest_power_of_two_downscale(w_pre_scale)
            pot_h = ipu.get_nearest_power_of_two_downscale(h_pre_scale)

            if (pot_w, pot_h) != (w_pre_scale, h_pre_scale):
                log.info(f"Initial Scaling: Applying POT Downscale from ({w_pre_scale},{h_pre_scale}) to ({pot_w},{pot_h}).")
                # Use INTER_AREA for downscaling generally
                resized_img = ipu.resize_image(image_to_scale, pot_w, pot_h, interpolation=cv2.INTER_AREA)
                if resized_img is not None:
                    final_image_data = resized_img
                    scaling_applied = True
                    log.debug("Initial Scaling: POT Downscale applied successfully.")
                else:
                    log.warning("Initial Scaling: POT Downscale resize failed. Using original data.")
                    # final_image_data remains image_to_scale
            else:
                log.info("Initial Scaling: POT Downscale - Image already POT or smaller. No scaling needed.")
                # final_image_data remains image_to_scale

        elif scaling_mode == "NONE":
            log.info("Initial Scaling: Mode is NONE. No scaling applied.")
            # final_image_data remains image_to_scale
        else:
            log.warning(f"Initial Scaling: Unknown INITIAL_SCALING_MODE '{scaling_mode}'. Defaulting to NONE.")
            # final_image_data remains image_to_scale

        # Determine final dimensions
        final_h, final_w = final_image_data.shape[:2]
        final_dims_wh = (final_w, final_h)

        return InitialScalingOutput(
            scaled_image_data=final_image_data,
            scaling_applied=scaling_applied,
            final_dimensions=final_dims_wh
        )
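`InitialScalingStage` above delegates the power-of-two computation to `ipu.get_nearest_power_of_two_downscale`, whose implementation is not shown in this diff. A minimal sketch of one plausible behaviour (the largest power of two not exceeding the dimension) — this rounding rule is an assumption, and the real utility may differ:

```python
def pot_downscale(dim: int) -> int:
    """Largest power of two <= dim; dims below 1 clamp to 1 (assumed behaviour)."""
    if dim < 1:
        return 1
    # bit_length() - 1 is the exponent of the highest set bit.
    return 1 << (dim.bit_length() - 1)

print(pot_downscale(1000))  # 512
```

Applied per axis, as the stage does, a 1000x600 image would downscale to 512x512 only if both axes change; dimensions that are already powers of two are returned unchanged, which is what triggers the "already POT or smaller" branch.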
@@ -1,347 +0,0 @@
import logging
from pathlib import Path
from typing import Dict, Optional, List, Tuple

import numpy as np
import cv2  # For potential direct cv2 operations if ipu doesn't cover all merge needs

from .base_stage import ProcessingStage
from ..asset_context import AssetProcessingContext
from rule_structure import FileRule
from utils.path_utils import sanitize_filename
from ...utils import image_processing_utils as ipu

logger = logging.getLogger(__name__)

class MapMergingStage(ProcessingStage):
    """
    Merges individually processed maps based on MAP_MERGE rules.
    This stage performs operations like channel packing.
    """

    def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
        """
        Executes the map merging logic.

        Args:
            context: The asset processing context.

        Returns:
            The updated asset processing context.
        """
        asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
        if context.status_flags.get('skip_asset'):
            logger.info(f"Skipping map merging for asset {asset_name_for_log} as skip_asset flag is set.")
            return context

        if not hasattr(context, 'merged_maps_details'):
            context.merged_maps_details = {}

        if not hasattr(context, 'processed_maps_details'):
            logger.warning(f"Asset {asset_name_for_log}: 'processed_maps_details' not found in context. Cannot perform map merging.")
            return context

        if not context.files_to_process:  # This list might not be relevant if merge rules are defined elsewhere or implicitly
            logger.info(f"Asset {asset_name_for_log}: No files_to_process defined. This stage might rely on config or processed_maps_details directly for merge rules.")
            # Depending on design, this might not be an error, so we don't return yet.

        logger.info(f"Starting MapMergingStage for asset: {asset_name_for_log}")

# TODO: The logic for identifying merge rules and their inputs needs significant rework
|
|
||||||
# as FileRule no longer has 'id' or 'merge_settings' directly in the way this stage expects.
|
|
||||||
# Merge rules are likely defined in the main configuration (context.config_obj.map_merge_rules)
|
|
||||||
# and need to be matched against available maps in context.processed_maps_details.
|
|
||||||
|
|
||||||
# Placeholder for the loop that would iterate over context.config_obj.map_merge_rules
|
|
||||||
# For now, this stage will effectively do nothing until that logic is implemented.
|
|
||||||
|
|
||||||
# Example of how one might start to adapt:
|
|
||||||
# for configured_merge_rule in context.config_obj.map_merge_rules:
|
|
||||||
# output_map_type = configured_merge_rule.get('output_map_type')
|
|
||||||
# inputs_config = configured_merge_rule.get('inputs') # e.g. {"R": "NORMAL", "G": "ROUGHNESS"}
|
|
||||||
# # ... then find these input map_types in context.processed_maps_details ...
|
|
||||||
# # ... and perform the merge ...
|
|
||||||
# # This is a complex change beyond simple attribute renaming.
|
|
||||||
|
|
||||||
# The following is the original loop structure, which will likely fail due to missing attributes on FileRule.
|
|
||||||
# Keeping it commented out to show what was there.
|
|
||||||
"""
|
|
||||||
for merge_rule in context.files_to_process: # This iteration logic is likely incorrect for merge rules
|
|
||||||
if not isinstance(merge_rule, FileRule) or merge_rule.item_type != "MAP_MERGE":
|
|
||||||
continue
|
|
||||||
|
|
||||||
# FileRule does not have merge_settings or id.hex
|
|
||||||
# This entire block needs to be re-thought based on where merge rules are defined.
|
|
||||||
# Assuming merge_rule_id_hex would be a generated UUID for this operation.
|
|
||||||
merge_rule_id_hex = f"merge_op_{uuid.uuid4().hex[:8]}"
|
|
||||||
current_map_type = merge_rule.item_type_override or merge_rule.item_type
|
|
||||||
|
|
||||||
logger.error(f"Asset {asset_name_for_log}, Potential Merge for {current_map_type}: Merge rule processing needs rework. FileRule lacks 'merge_settings' and 'id'. Skipping this rule.")
|
|
||||||
context.merged_maps_details[merge_rule_id_hex] = {
|
|
||||||
'map_type': current_map_type,
|
|
||||||
'status': 'Failed',
|
|
||||||
'reason': 'Merge rule processing logic in MapMergingStage needs refactor due to FileRule changes.'
|
|
||||||
}
|
|
||||||
continue
|
|
||||||
"""
|
|
||||||
|
|
||||||
        # For now, let's assume no merge rules are processed until the logic is fixed.
        num_merge_rules_attempted = 0
        # If context.config_obj.map_merge_rules exists, iterate it here.
        # The original code iterated context.files_to_process looking for item_type "MAP_MERGE".
        # This implies FileRule objects were being used to define merge operations, which is no longer the case
        # if 'merge_settings' and 'id' were removed from FileRule.

        # The core merge rules are in context.config_obj.map_merge_rules.
        # Each rule in there defines an output_map_type and its inputs.

        config_merge_rules = context.config_obj.map_merge_rules
        if not config_merge_rules:
            logger.info(f"Asset {asset_name_for_log}: No map_merge_rules found in configuration. Skipping map merging.")
            return context

        for rule_idx, configured_merge_rule in enumerate(config_merge_rules):
            output_map_type = configured_merge_rule.get('output_map_type')
            inputs_map_type_to_channel = configured_merge_rule.get('inputs')  # e.g. {"R": "NRM", "G": "NRM", "B": "ROUGH"}
            default_values = configured_merge_rule.get('defaults', {})  # e.g. {"R": 0.5, "G": 0.5, "B": 0.5}
            # output_bit_depth_rule = configured_merge_rule.get('output_bit_depth', 'respect_inputs')  # Not used yet

            if not output_map_type or not inputs_map_type_to_channel:
                logger.warning(f"Asset {asset_name_for_log}: Invalid configured_merge_rule at index {rule_idx}. Missing 'output_map_type' or 'inputs'. Rule: {configured_merge_rule}")
                continue

            num_merge_rules_attempted += 1
            merge_op_id = f"merge_{sanitize_filename(output_map_type)}_{rule_idx}"
            logger.info(f"Asset {asset_name_for_log}: Processing configured merge rule for '{output_map_type}' (Op ID: {merge_op_id})")

            loaded_input_maps: Dict[str, np.ndarray] = {}  # Key: input_map_type (e.g. "NRM"), Value: image_data
            input_map_paths: Dict[str, str] = {}  # Key: input_map_type, Value: path_str
            target_dims: Optional[Tuple[int, int]] = None
            all_inputs_valid = True

            # Find and load input maps from processed_maps_details.
            # This assumes one processed map per map_type. If multiple variants exist, this needs refinement.
            required_input_map_types = set(inputs_map_type_to_channel.values())

            for required_map_type in required_input_map_types:
                found_processed_map_details = None
                # The key `p_key_idx` is the file_rule_idx from the IndividualMapProcessingStage.
                for p_key_idx, p_details in context.processed_maps_details.items():  # p_key_idx is an int
                    processed_map_identifier = p_details.get('processing_map_type', p_details.get('map_type'))

                    # Comprehensive list of valid statuses for an input map to be used in merging
                    valid_input_statuses = ['BasePOTSaved', 'Processed_With_Variants', 'Processed_No_Variants', 'Converted_To_Rough']

                    is_match = False
                    if processed_map_identifier == required_map_type:
                        is_match = True
                    elif required_map_type.startswith("MAP_") and processed_map_identifier == required_map_type.split("MAP_")[-1]:
                        is_match = True
                    elif not required_map_type.startswith("MAP_") and processed_map_identifier == f"MAP_{required_map_type}":
                        is_match = True

                    if is_match and p_details.get('status') in valid_input_statuses:
                        found_processed_map_details = p_details
                        # The key `p_key_idx` (which is the FileRule index) is implicitly associated with these details.
                        break

                if not found_processed_map_details:
                    can_be_fully_defaulted = True
                    channels_requiring_this_map = [
                        ch_key for ch_key, map_type_val in inputs_map_type_to_channel.items()
                        if map_type_val == required_map_type
                    ]

                    if not channels_requiring_this_map:
                        logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Internal logic error. Required map_type '{required_map_type}' is not actually used by any output channel. Configuration: {inputs_map_type_to_channel}")
                        all_inputs_valid = False
                        context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f"Internal error: required map_type '{required_map_type}' not in use."}
                        break

                    for channel_char_needing_default in channels_requiring_this_map:
                        if default_values.get(channel_char_needing_default) is None:
                            can_be_fully_defaulted = False
                            break

                    if can_be_fully_defaulted:
                        logger.info(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Required input map_type '{required_map_type}' for output '{output_map_type}' not found or not in usable state. Will attempt to use default values for its channels: {channels_requiring_this_map}.")
                    else:
                        logger.warning(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Required input map_type '{required_map_type}' for output '{output_map_type}' not found/unusable, AND not all its required channels ({channels_requiring_this_map}) have defaults. Failing merge op.")
                        all_inputs_valid = False
                        context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f"Input '{required_map_type}' missing and defaults incomplete."}
                        break

                if found_processed_map_details:
                    temp_file_path_str = found_processed_map_details.get('temp_processed_file')
                    if not temp_file_path_str:
                        # Log with p_key_idx if available, or just the map type if not (though it should be set whenever found_processed_map_details is).
                        log_key_info = f"(Associated Key Index: {p_key_idx})" if 'p_key_idx' in locals() and found_processed_map_details else ""  # locals() check: p_key_idx may not be defined in this scope
                        logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: 'temp_processed_file' missing in details for found map_type '{required_map_type}' {log_key_info}.")
                        all_inputs_valid = False
                        context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f"Temp file path missing for input '{required_map_type}'."}
                        break

                    temp_file_path = Path(temp_file_path_str)
                    if not temp_file_path.exists():
                        logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Temp file {temp_file_path} for input map_type '{required_map_type}' does not exist.")
                        all_inputs_valid = False
                        context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f"Temp file for input '{required_map_type}' missing."}
                        break

                    try:
                        image_data = ipu.load_image(str(temp_file_path))
                        if image_data is None:
                            raise ValueError("Loaded image is None")
                    except Exception as e:
                        logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Error loading image {temp_file_path} for input map_type '{required_map_type}': {e}")
                        all_inputs_valid = False
                        context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f"Error loading input '{required_map_type}'."}
                        break

                    loaded_input_maps[required_map_type] = image_data
                    input_map_paths[required_map_type] = str(temp_file_path)

                    current_dims = (image_data.shape[1], image_data.shape[0])
                    if target_dims is None:
                        target_dims = current_dims
                    elif current_dims != target_dims:
                        logger.warning(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Input map '{required_map_type}' dims {current_dims} differ from target {target_dims}. Resizing.")
                        try:
                            image_data_resized = ipu.resize_image(image_data, target_dims[0], target_dims[1])
                            if image_data_resized is None:
                                raise ValueError("Resize returned None")
                            loaded_input_maps[required_map_type] = image_data_resized
                        except Exception as e:
                            logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Failed to resize '{required_map_type}': {e}")
                            all_inputs_valid = False
                            context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f"Failed to resize input '{required_map_type}'."}
                            break

            if not all_inputs_valid:
                logger.warning(f"Asset {asset_name_for_log}: Skipping merge for Op ID {merge_op_id} ('{output_map_type}') due to invalid inputs.")
                continue

            if not loaded_input_maps and not any(default_values.get(ch) is not None for ch in inputs_map_type_to_channel.keys()):
                logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: No input maps loaded and no defaults available for any channel for '{output_map_type}'. Cannot proceed.")
                context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': 'No input maps loaded and no defaults available.'}
                continue

            if target_dims is None:
                default_res_key = context.config_obj.get("default_output_resolution_key_for_merge", "1K")
                image_resolutions_cfg = getattr(context.config_obj, "image_resolutions", {})
                default_max_dim = image_resolutions_cfg.get(default_res_key)

                if default_max_dim:
                    target_dims = (default_max_dim, default_max_dim)
                    logger.info(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Target dimensions not set by inputs (all defaulted). Using configured default resolution '{default_res_key}': {target_dims}.")
                else:
                    logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Target dimensions could not be determined for '{output_map_type}' (all inputs defaulted and no default output resolution configured).")
                    context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': 'Target dimensions undetermined for fully defaulted merge.'}
                    continue

            output_channel_keys = sorted(list(inputs_map_type_to_channel.keys()))
            num_output_channels = len(output_channel_keys)

            if num_output_channels == 0:
                logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: No output channels defined in 'inputs' for '{output_map_type}'.")
                context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': 'No output channels defined.'}
                continue

            try:
                output_dtype = np.uint8

                if num_output_channels == 1:
                    merged_image = np.zeros((target_dims[1], target_dims[0]), dtype=output_dtype)
                else:
                    merged_image = np.zeros((target_dims[1], target_dims[0], num_output_channels), dtype=output_dtype)
            except Exception as e:
                logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Error creating empty merged image for '{output_map_type}': {e}")
                context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f'Error creating output canvas: {e}'}
                continue

            merge_op_failed_detail = False
            for i, out_channel_char in enumerate(output_channel_keys):
                input_map_type_for_this_channel = inputs_map_type_to_channel[out_channel_char]
                source_image = loaded_input_maps.get(input_map_type_for_this_channel)

                source_data_this_channel = None
                if source_image is not None:
                    if source_image.dtype != np.uint8:
                        logger.warning(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Input map '{input_map_type_for_this_channel}' has dtype {source_image.dtype}, expected uint8. Attempting conversion.")
                        source_image = ipu.convert_to_uint8(source_image)
                        if source_image is None:
                            logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Failed to convert input '{input_map_type_for_this_channel}' to uint8.")
                            merge_op_failed_detail = True
                            break

                    if source_image.ndim == 2:
                        source_data_this_channel = source_image
                    elif source_image.ndim == 3:
                        semantic_to_bgr_idx = {'R': 2, 'G': 1, 'B': 0, 'A': 3}

                        idx_to_extract = semantic_to_bgr_idx.get(out_channel_char.upper())

                        if idx_to_extract is not None and idx_to_extract < source_image.shape[2]:
                            source_data_this_channel = source_image[:, :, idx_to_extract]
                            logger.debug(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: For output '{out_channel_char}', using source '{input_map_type_for_this_channel}' semantic '{out_channel_char}' (BGR(A) index {idx_to_extract}).")
                        else:
                            logger.warning(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Could not map output '{out_channel_char}' to a specific BGR(A) channel of '{input_map_type_for_this_channel}' (shape {source_image.shape}). Defaulting to its channel 0 (Blue).")
                            source_data_this_channel = source_image[:, :, 0]
                    else:
                        logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Source image '{input_map_type_for_this_channel}' has unexpected dimensions: {source_image.ndim} (shape {source_image.shape}).")
                        merge_op_failed_detail = True
                        break

                else:
                    default_val_for_channel = default_values.get(out_channel_char)
                    if default_val_for_channel is not None:
                        try:
                            scaled_default_val = int(float(default_val_for_channel) * 255)
                            source_data_this_channel = np.full((target_dims[1], target_dims[0]), scaled_default_val, dtype=np.uint8)
                            logger.info(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Using default value {default_val_for_channel} (scaled to {scaled_default_val}) for output channel '{out_channel_char}' as input map '{input_map_type_for_this_channel}' was missing.")
                        except ValueError:
                            logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Default value '{default_val_for_channel}' for channel '{out_channel_char}' is not a valid float. Cannot scale.")
                            merge_op_failed_detail = True
                            break
                    else:
                        logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Input map '{input_map_type_for_this_channel}' for output channel '{out_channel_char}' is missing and no default value provided.")
                        merge_op_failed_detail = True
                        break

                if source_data_this_channel is None:
                    logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Failed to get source data for output channel '{out_channel_char}'.")
                    merge_op_failed_detail = True
                    break

                try:
                    if merged_image.ndim == 2:
                        merged_image = source_data_this_channel
                    else:
                        merged_image[:, :, i] = source_data_this_channel
                except Exception as e:
                    logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Error assigning data to output channel '{out_channel_char}' (index {i}): {e}. Merged shape: {merged_image.shape}, Source data shape: {source_data_this_channel.shape}")
                    merge_op_failed_detail = True
                    break

            if merge_op_failed_detail:
                context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': 'Error during channel assignment.'}
                continue

            output_format = 'png'
            temp_merged_filename = f"merged_{sanitize_filename(output_map_type)}_{merge_op_id}.{output_format}"
            temp_merged_path = context.engine_temp_dir / temp_merged_filename

            try:
                save_success = ipu.save_image(str(temp_merged_path), merged_image)
                if not save_success:
                    raise ValueError("Save image returned false")
            except Exception as e:
                logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Error saving merged image {temp_merged_path}: {e}")
                context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f'Failed to save merged image: {e}'}
                continue

            logger.info(f"Asset {asset_name_for_log}: Successfully merged and saved '{output_map_type}' (Op ID: {merge_op_id}) to {temp_merged_path}")
            context.merged_maps_details[merge_op_id] = {
                'map_type': output_map_type,
                'temp_merged_file': str(temp_merged_path),
                'input_map_types_used': list(inputs_map_type_to_channel.values()),
                'input_map_files_used': input_map_paths,
                'merged_dimensions': target_dims,
                'status': 'Processed'
            }

        logger.info(f"Finished MapMergingStage for asset: {asset_name_for_log}. Merge operations attempted: {num_merge_rules_attempted}, Succeeded: {len([d for d in context.merged_maps_details.values() if d.get('status') == 'Processed'])}")
        return context
processing/pipeline/stages/merged_task_processor.py (new file, 329 lines)
@@ -0,0 +1,329 @@
import logging
import re
from pathlib import Path
from typing import List, Optional, Tuple, Dict, Any

import cv2
import numpy as np

from .base_stage import ProcessingStage
# Import necessary context classes and utils
from ..asset_context import AssetProcessingContext, MergeTaskDefinition, ProcessedMergedMapData
from ...utils import image_processing_utils as ipu

log = logging.getLogger(__name__)

class MergedTaskProcessorStage(ProcessingStage):
    """
    Processes a single merge task defined in the configuration.
    Loads inputs, applies transformations to inputs, handles fallbacks/resizing,
    performs the merge, and returns the merged data.
    """

    def _find_input_map_details_in_context(
        self,
        required_map_type: str,
        processed_map_details_context: Dict[str, Dict[str, Any]],
        log_prefix_for_find: str
    ) -> Optional[Dict[str, Any]]:
        """
        Finds the details of a required input map from the context's processed_maps_details.
        Prefers an exact match for full types (e.g. MAP_TYPE-1); for base types (e.g. MAP_TYPE),
        falls back to the primary suffixed variant (MAP_TYPE-1).
        Returns the details dictionary for the found map if it has saved_files_info.
        """
        # Try exact match first (e.g., rule asks for "MAP_NRM-1", or "MAP_NRM" if that's how it was processed)
        for item_key, details in processed_map_details_context.items():
            if details.get('internal_map_type') == required_map_type:
                if details.get('saved_files_info') and isinstance(details['saved_files_info'], list) and len(details['saved_files_info']) > 0:
                    log.debug(f"{log_prefix_for_find}: Found exact match for '{required_map_type}' with key '{item_key}'.")
                    return details
                log.warning(f"{log_prefix_for_find}: Found exact match for '{required_map_type}' (key '{item_key}') but no saved_files_info.")
                return None  # Found type but no usable files

        # If an exact match is not found and required_map_type is a base type (e.g. "MAP_NRM"),
        # try to find the primary suffixed version "MAP_NRM-1" or the base type itself if it was processed without a suffix.
        if not re.search(r'-\d+$', required_map_type):  # if it's a base type like MAP_XXX
            # Prefer "MAP_XXX-1" as the primary variant if suffixed types exist
            primary_suffixed_type = f"{required_map_type}-1"
            for item_key, details in processed_map_details_context.items():
                if details.get('internal_map_type') == primary_suffixed_type:
                    if details.get('saved_files_info') and isinstance(details['saved_files_info'], list) and len(details['saved_files_info']) > 0:
                        log.debug(f"{log_prefix_for_find}: Found primary suffixed match '{primary_suffixed_type}' for base '{required_map_type}' with key '{item_key}'.")
                        return details
                    log.warning(f"{log_prefix_for_find}: Found primary suffixed match '{primary_suffixed_type}' (key '{item_key}') but no saved_files_info.")
                    return None  # Found type but no usable files

        log.debug(f"{log_prefix_for_find}: No suitable match found for '{required_map_type}' via exact or primary suffixed type search.")
        return None

    def execute(
        self,
        context: AssetProcessingContext,
        merge_task: MergeTaskDefinition  # Specific item passed by orchestrator
    ) -> ProcessedMergedMapData:
        """
        Processes the given MergeTaskDefinition item.
        """
        asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
        task_key = merge_task.task_key
        task_data = merge_task.task_data
        log_prefix = f"Asset '{asset_name_for_log}', Task '{task_key}'"
        log.info(f"{log_prefix}: Processing Merge Task.")

        # Initialize output object with default failure state
        result = ProcessedMergedMapData(
            merged_image_data=np.array([]),  # Placeholder
            output_map_type=task_data.get('output_map_type', 'UnknownMergeOutput'),
            source_bit_depths=[],
            final_dimensions=None,
            transformations_applied_to_inputs={},
            status="Failed",
            error_message="Initialization error"
        )

        try:
            # --- Configuration & Task Data ---
            config = context.config_obj
            file_type_definitions = getattr(config, "FILE_TYPE_DEFINITIONS", {})
            invert_normal_green = config.invert_normal_green_globally
            merge_dimension_mismatch_strategy = getattr(config, "MERGE_DIMENSION_MISMATCH_STRATEGY", "USE_LARGEST")
            workspace_path = context.workspace_path  # Base for resolving relative input paths

            # input_map_sources_from_task is no longer used for paths. Paths are sourced from context.processed_maps_details.
            target_dimensions_hw = task_data.get('source_dimensions')  # Expected dimensions (h, w) for fallback creation; must be in config.
            merge_inputs_config = task_data.get('inputs', {})  # e.g., {'R': 'MAP_AO', 'G': 'MAP_ROUGH', ...}
            merge_defaults = task_data.get('defaults', {})  # e.g., {'R': 255, 'G': 255, ...}
            merge_channels_order = task_data.get('channel_order', 'RGB')  # e.g., 'RGB', 'RGBA'

            # Target dimensions are crucial if fallbacks are needed.
            # Merge inputs config is essential. Check directly in task_data.
            inputs_from_task_data = task_data.get('inputs')
            if not isinstance(inputs_from_task_data, dict) or not inputs_from_task_data:
                result.error_message = "Merge task data is incomplete (missing or invalid 'inputs' dictionary in task_data)."
                log.error(f"{log_prefix}: {result.error_message}")
                return result
            if not target_dimensions_hw and any(merge_defaults.get(ch) is not None for ch in merge_inputs_config.keys()):
                log.warning(f"{log_prefix}: Merge task has defaults defined, but 'source_dimensions' (target_dimensions_hw) is missing in task_data. Fallback image creation might fail if needed.")
                # Not returning an error yet, as fallbacks might not be triggered.

            loaded_inputs_for_merge: Dict[str, np.ndarray] = {}  # Channel char -> image data
            actual_input_dimensions: List[Tuple[int, int]] = []  # List of (h, w) for loaded files
            input_source_bit_depths: Dict[str, int] = {}  # Channel char -> bit depth
            all_transform_notes: Dict[str, List[str]] = {}  # Channel char -> list of transform notes

            # --- Load, Transform, and Prepare Inputs ---
            log.debug(f"{log_prefix}: Loading and preparing inputs...")
            for channel_char, required_map_type_from_rule in merge_inputs_config.items():
                # Validate that the required input map type starts with "MAP_"
                if not required_map_type_from_rule.startswith("MAP_"):
                    result.error_message = (
                        f"Invalid input map type '{required_map_type_from_rule}' for channel '{channel_char}'. "
                        f"Input map types for merging must start with 'MAP_'."
                    )
                    log.error(f"{log_prefix}: {result.error_message}")
                    return result  # Fail the task if an input type is invalid

                input_image_data: Optional[np.ndarray] = None
                input_source_desc = f"Fallback for {required_map_type_from_rule}"
                input_log_prefix = f"{log_prefix}, Input '{required_map_type_from_rule}' (Channel '{channel_char}')"
                channel_transform_notes: List[str] = []

                # 1. Attempt to load from context.processed_maps_details
                found_input_map_details = self._find_input_map_details_in_context(
                    required_map_type_from_rule, context.processed_maps_details, input_log_prefix
                )

                if found_input_map_details:
                    # Assuming the first saved file is the primary one for merging.
                    # This might need refinement if specific variants (resolutions/formats) are required.
                    primary_saved_file_info = found_input_map_details['saved_files_info'][0]
                    input_file_path_str = primary_saved_file_info.get('path')

                    if input_file_path_str:
                        input_file_path = Path(input_file_path_str)  # Path is absolute from SaveVariantsStage
                        if input_file_path.is_file():
                            try:
                                input_image_data = ipu.load_image(str(input_file_path))
                                if input_image_data is not None:
                                    log.info(f"{input_log_prefix}: Loaded from context: {input_file_path}")
                                    actual_input_dimensions.append(input_image_data.shape[:2])  # (h, w)
                                    input_source_desc = str(input_file_path)
                                    # Bit depth from the saved variant info
                                    input_source_bit_depths[channel_char] = primary_saved_file_info.get('bit_depth', 8)
                                else:
                                    log.warning(f"{input_log_prefix}: Failed to load image from {input_file_path} (found in context). Attempting fallback.")
                                    input_image_data = None  # Ensure fallback is triggered
                            except Exception as e:
                                log.warning(f"{input_log_prefix}: Error loading image from {input_file_path} (found in context): {e}. Attempting fallback.")
                                input_image_data = None  # Ensure fallback is triggered
                        else:
                            log.warning(f"{input_log_prefix}: Input file path '{input_file_path}' (from context) not found. Attempting fallback.")
                            input_image_data = None  # Ensure fallback is triggered
                    else:
                        log.warning(f"{input_log_prefix}: Found map type '{required_map_type_from_rule}' in context, but 'path' is missing in saved_files_info. Attempting fallback.")
                        input_image_data = None  # Ensure fallback is triggered
                else:
                    log.info(f"{input_log_prefix}: Input map type '{required_map_type_from_rule}' not found in context.processed_maps_details. Attempting fallback.")
                    input_image_data = None  # Ensure fallback is triggered

                # 2. Apply Fallback if needed
                if input_image_data is None:
                    fallback_value = merge_defaults.get(channel_char)
                    if fallback_value is not None:
                        try:
                            if not target_dimensions_hw:
                                result.error_message = f"Cannot create fallback for channel '{channel_char}': 'source_dimensions' (target_dimensions_hw) not defined in task_data."
                                log.error(f"{log_prefix}: {result.error_message}")
                                return result  # Critical failure if dimensions for fallback are missing
                            h, w = target_dimensions_hw
                            # Infer shape/dtype for fallback (simplified)
                            num_channels = 1 if isinstance(fallback_value, (int, float)) else len(fallback_value) if isinstance(fallback_value, (list, tuple)) else 1
                            dtype = np.uint8  # Default dtype
                            shape = (h, w) if num_channels == 1 else (h, w, num_channels)

                            input_image_data = np.full(shape, fallback_value, dtype=dtype)
                            log.warning(f"{input_log_prefix}: Using fallback value {fallback_value} (Target Dims: {target_dimensions_hw}).")
                            input_source_desc = f"Fallback value {fallback_value}"
                            input_source_bit_depths[channel_char] = 8  # Assume 8-bit for fallbacks
                            channel_transform_notes.append(f"Used fallback value {fallback_value}")
                        except Exception as e:
                            result.error_message = f"Error creating fallback for channel '{channel_char}': {e}"
                            log.error(f"{log_prefix}: {result.error_message}")
                            return result  # Critical failure
                    else:
                        result.error_message = f"Missing input '{required_map_type_from_rule}' and no fallback default provided for channel '{channel_char}'."
                        log.error(f"{log_prefix}: {result.error_message}")
                        return result  # Critical failure

                # 3. Apply Transformations to the loaded/fallback input
                if input_image_data is not None:
                    input_image_data, _, transform_notes = ipu.apply_common_map_transformations(
                        input_image_data.copy(),  # Transform a copy
                        required_map_type_from_rule,  # Use the type required by the rule
                        invert_normal_green,
                        file_type_definitions,
                        input_log_prefix
                    )
                    channel_transform_notes.extend(transform_notes)
                else:
                    # This case should be prevented by fallback logic, but as a safeguard:
                    result.error_message = f"Input data for channel '{channel_char}' is None after load/fallback attempt."
                    log.error(f"{log_prefix}: {result.error_message} This indicates an internal logic error.")
                    return result

                loaded_inputs_for_merge[channel_char] = input_image_data
                all_transform_notes[channel_char] = channel_transform_notes

            result.transformations_applied_to_inputs = all_transform_notes  # Store notes

            # --- Handle Dimension Mismatches (using transformed inputs) ---
            log.debug(f"{log_prefix}: Handling dimension mismatches...")
            unique_dimensions = set(actual_input_dimensions)
            target_merge_dims_hw = target_dimensions_hw  # Default

            if len(unique_dimensions) > 1:
                log.warning(f"{log_prefix}: Mismatched dimensions found among loaded inputs: {unique_dimensions}. Applying strategy: {merge_dimension_mismatch_strategy}")
                mismatch_note = f"Mismatched input dimensions ({unique_dimensions}), applied {merge_dimension_mismatch_strategy}"
                # Add note to all relevant inputs? Or just a general note? Add general for now.
                # result.status_notes.append(mismatch_note)  # Need a place for general notes

                if merge_dimension_mismatch_strategy == "ERROR_SKIP":
                    result.error_message = "Dimension mismatch and strategy is ERROR_SKIP."
                    log.error(f"{log_prefix}: {result.error_message}")
                    return result
                elif merge_dimension_mismatch_strategy == "USE_LARGEST":
                    max_h = max(h for h, w in unique_dimensions)
                    max_w = max(w for h, w in unique_dimensions)
                    target_merge_dims_hw = (max_h, max_w)
                elif merge_dimension_mismatch_strategy == "USE_FIRST":
                    target_merge_dims_hw = actual_input_dimensions[0] if actual_input_dimensions else target_dimensions_hw
                # Add other strategies or default to USE_LARGEST

                log.info(f"{log_prefix}: Resizing inputs to target merge dimensions: {target_merge_dims_hw}")
                # Resize loaded inputs (not fallbacks unless they were treated as having target dims)
                for channel_char, img_data in loaded_inputs_for_merge.items():
                    # Only resize if it was a loaded input that contributed to the mismatch check
                    if img_data.shape[:2] in unique_dimensions and img_data.shape[:2] != target_merge_dims_hw:
                        resized_img = ipu.resize_image(img_data, target_merge_dims_hw[1], target_merge_dims_hw[0])  # w, h
                        if resized_img is None:
                            result.error_message = f"Failed to resize input for channel '{channel_char}' to {target_merge_dims_hw}."
                            log.error(f"{log_prefix}: {result.error_message}")
                            return result
                        loaded_inputs_for_merge[channel_char] = resized_img
                        log.debug(f"{log_prefix}: Resized input for channel '{channel_char}'.")

# If target_merge_dims_hw is still None (no source_dimensions and no mismatch), use first loaded input's dimensions
|
||||||
|
if target_merge_dims_hw is None and actual_input_dimensions:
|
||||||
|
target_merge_dims_hw = actual_input_dimensions[0]
|
||||||
|
log.info(f"{log_prefix}: Using dimensions from first loaded input: {target_merge_dims_hw}")
|
||||||
|
|
||||||
|
# --- Perform Merge ---
|
||||||
|
log.debug(f"{log_prefix}: Performing merge operation for channels '{merge_channels_order}'.")
|
||||||
|
try:
|
||||||
|
# Final check for valid dimensions before unpacking
|
||||||
|
if not isinstance(target_merge_dims_hw, tuple) or len(target_merge_dims_hw) != 2:
|
||||||
|
result.error_message = "Could not determine valid target dimensions for merge operation."
|
||||||
|
log.error(f"{log_prefix}: {result.error_message} (target_merge_dims_hw: {target_merge_dims_hw})")
|
||||||
|
return result
|
||||||
|
|
||||||
|
output_channels = len(merge_channels_order)
|
||||||
|
h, w = target_merge_dims_hw # Use the potentially adjusted dimensions
|
||||||
|
|
||||||
|
# Determine output dtype (e.g., based on inputs or config) - Assume uint8 for now
|
||||||
|
output_dtype = np.uint8
|
||||||
|
|
||||||
|
if output_channels == 1:
|
||||||
|
# Assume the first channel in order is the one to use
|
||||||
|
channel_char_to_use = merge_channels_order[0]
|
||||||
|
source_img = loaded_inputs_for_merge[channel_char_to_use]
|
||||||
|
# Ensure it's grayscale (take first channel if it's multi-channel)
|
||||||
|
if len(source_img.shape) == 3:
|
||||||
|
merged_image = source_img[:, :, 0].copy().astype(output_dtype)
|
||||||
|
else:
|
||||||
|
merged_image = source_img.copy().astype(output_dtype)
|
||||||
|
elif output_channels > 1:
|
||||||
|
merged_image = np.zeros((h, w, output_channels), dtype=output_dtype)
|
||||||
|
for i, channel_char in enumerate(merge_channels_order):
|
||||||
|
source_img = loaded_inputs_for_merge.get(channel_char)
|
||||||
|
if source_img is not None:
|
||||||
|
# Extract the correct channel (e.g., R from RGB, or use grayscale directly)
|
||||||
|
if len(source_img.shape) == 3:
|
||||||
|
# Simple approach: take the first channel if source is color. Needs refinement if specific channel mapping (R->R, G->G etc.) is needed.
|
||||||
|
merged_image[:, :, i] = source_img[:, :, 0]
|
||||||
|
else: # Grayscale source
|
||||||
|
merged_image[:, :, i] = source_img
|
||||||
|
else:
|
||||||
|
# This case should have been caught by fallback logic earlier
|
||||||
|
result.error_message = f"Internal error: Missing prepared input for channel '{channel_char}' during final merge assembly."
|
||||||
|
log.error(f"{log_prefix}: {result.error_message}")
|
||||||
|
return result
|
||||||
|
else:
|
||||||
|
result.error_message = f"Invalid channel_order '{merge_channels_order}' in merge config."
|
||||||
|
log.error(f"{log_prefix}: {result.error_message}")
|
||||||
|
return result
|
||||||
|
|
||||||
|
result.merged_image_data = merged_image
|
||||||
|
result.final_dimensions = (merged_image.shape[1], merged_image.shape[0]) # w, h
|
||||||
|
result.source_bit_depths = list(input_source_bit_depths.values()) # Collect bit depths used
|
||||||
|
log.info(f"{log_prefix}: Successfully merged inputs into image with shape {result.merged_image_data.shape}")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
log.exception(f"{log_prefix}: Error during merge operation: {e}")
|
||||||
|
result.error_message = f"Merge operation failed: {e}"
|
||||||
|
return result
|
||||||
|
|
||||||
|
# --- Success ---
|
||||||
|
result.status = "Processed"
|
||||||
|
result.error_message = None
|
||||||
|
log.info(f"{log_prefix}: Successfully processed merge task.")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
log.exception(f"{log_prefix}: Unhandled exception during processing: {e}")
|
||||||
|
result.status = "Failed"
|
||||||
|
result.error_message = f"Unhandled exception: {e}"
|
||||||
|
# Ensure image data is empty on failure
|
||||||
|
if result.merged_image_data is None or result.merged_image_data.size == 0:
|
||||||
|
result.merged_image_data = np.array([])
|
||||||
|
|
||||||
|
return result
|
||||||
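The multi-channel branch of the merge above can be sketched in isolation: each per-channel input is written into one slice of a pre-allocated output array, taking the first plane when a source happens to be multi-channel. A minimal sketch using plain NumPy (the pipeline's `ipu` helpers and `result` object are omitted; names here are illustrative):

```python
import numpy as np

def pack_channels(inputs: dict, order: str, h: int, w: int) -> np.ndarray:
    """Pack per-channel images into one multi-channel uint8 image.

    inputs: maps a channel letter (e.g. 'R') to a 2D or 3D uint8 array.
    order:  the channel order string, e.g. "RGB".
    """
    merged = np.zeros((h, w, len(order)), dtype=np.uint8)
    for i, ch in enumerate(order):
        src = inputs[ch]
        # Take the first plane if the source is multi-channel, as the stage does.
        merged[:, :, i] = src[:, :, 0] if src.ndim == 3 else src
    return merged

# Example: pack two 2x2 grayscale maps into a two-channel image.
r = np.full((2, 2), 10, dtype=np.uint8)
g = np.full((2, 2), 20, dtype=np.uint8)
packed = pack_channels({"R": r, "G": g}, "RG", 2, 2)
```

Note the stated limitation carries over: a color source always contributes its first plane, so an explicit R→R, G→G mapping would need the refinement the inline comment mentions.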
```diff
@@ -41,7 +41,7 @@ class MetadataFinalizationAndSaveStage(ProcessingStage):
         # Check Skip Flag
         if context.status_flags.get('skip_asset'):
             context.asset_metadata['status'] = "Skipped"
-            context.asset_metadata['processing_end_time'] = datetime.datetime.now().isoformat()
+            # context.asset_metadata['processing_end_time'] = datetime.datetime.now().isoformat()
             context.asset_metadata['notes'] = context.status_flags.get('skip_reason', 'Skipped early in pipeline')
             logger.info(
                 f"Asset '{asset_name_for_log}': Marked as skipped. Reason: {context.asset_metadata['notes']}"
```
```diff
@@ -51,7 +51,7 @@ class MetadataFinalizationAndSaveStage(ProcessingStage):
         # However, if we are here, asset_metadata IS initialized.

         # A. Finalize Metadata
-        context.asset_metadata['processing_end_time'] = datetime.datetime.now().isoformat()
+        # context.asset_metadata['processing_end_time'] = datetime.datetime.now().isoformat()

         # Determine final status (if not already set to Skipped)
         if context.asset_metadata.get('status') != "Skipped":
```
```diff
@@ -115,8 +115,8 @@ class MetadataFinalizationAndSaveStage(ProcessingStage):
             restructured_processed_maps[map_key] = new_map_entry

         # Assign the restructured details. Note: 'processed_map_details' (singular 'map') is the key in asset_metadata.
-        context.asset_metadata['processed_map_details'] = restructured_processed_maps
-        context.asset_metadata['merged_map_details'] = getattr(context, 'merged_maps_details', {})
+        # context.asset_metadata['processed_map_details'] = restructured_processed_maps
+        # context.asset_metadata['merged_map_details'] = getattr(context, 'merged_maps_details', {})

         # (Optional) Add a list of all temporary files
         # context.asset_metadata['temporary_files'] = getattr(context, 'temporary_files', [])  # Assuming this is populated elsewhere
```
```diff
@@ -203,6 +203,8 @@ class MetadataFinalizationAndSaveStage(ProcessingStage):
                 return [make_serializable(i) for i in data]
             return data

+        # final_output_files is populated by OutputOrganizationStage. Explicitly remove it as per user request.
+        context.asset_metadata.pop('final_output_files', None)
         serializable_metadata = make_serializable(context.asset_metadata)

         with open(metadata_save_path, 'w') as f:
```
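The `make_serializable` helper visible in this hunk recurses over the metadata before the JSON write. Its full body is outside this diff; a minimal sketch of such a helper, assuming it stringifies `Path` and `datetime` values (the real one may handle more types):

```python
import datetime
import json
from pathlib import Path

def make_serializable(data):
    """Recursively convert metadata into JSON-safe values (a sketch;
    the stage's actual helper may cover additional types)."""
    if isinstance(data, dict):
        return {k: make_serializable(v) for k, v in data.items()}
    if isinstance(data, (list, tuple)):
        return [make_serializable(i) for i in data]
    if isinstance(data, (Path, datetime.datetime)):
        return str(data)
    return data

meta = {"maps": {"COL": {"path": Path("out/col.png")}},
        "end": datetime.datetime(2024, 1, 1)}
print(json.dumps(make_serializable(meta)))
```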
```diff
@@ -38,7 +38,9 @@ class NormalMapGreenChannelStage(ProcessingStage):

         # Iterate through processed maps, as FileRule objects don't have IDs directly
         for map_id_hex, map_details in context.processed_maps_details.items():
-            if map_details.get('map_type') == "NORMAL" and map_details.get('status') == 'Processed':
+            # Check if the map is a processed normal map using the standardized internal_map_type
+            internal_map_type = map_details.get('internal_map_type')
+            if internal_map_type and internal_map_type.startswith("MAP_NRM") and map_details.get('status') == 'Processed':

                 # Check configuration for inversion
                 # Assuming general_settings is an attribute of config_obj and might be a dict or an object
```
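The change above swaps an exact `map_type == "NORMAL"` comparison for a prefix test on the standardized `internal_map_type`, so any normal-map variant sharing the `MAP_NRM` prefix qualifies. A tiny predicate showing the effect (the `MAP_NRM_GL` / `MAP_COL` values below are hypothetical examples, not confirmed type names from the codebase):

```python
def is_processed_normal_map(details: dict) -> bool:
    """Prefix test on the standardized internal map type (sketch)."""
    t = details.get('internal_map_type')
    return bool(t) and t.startswith("MAP_NRM") and details.get('status') == 'Processed'

print(is_processed_normal_map({'internal_map_type': 'MAP_NRM_GL', 'status': 'Processed'}))  # True
print(is_processed_normal_map({'internal_map_type': 'MAP_COL', 'status': 'Processed'}))     # False
```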
```diff
@@ -5,10 +5,10 @@ from typing import List, Dict, Optional

 from .base_stage import ProcessingStage
 from ..asset_context import AssetProcessingContext
-from utils.path_utils import generate_path_from_pattern, sanitize_filename
+from utils.path_utils import generate_path_from_pattern, sanitize_filename, get_filename_friendly_map_type  # Absolute import
 from rule_structure import FileRule  # Assuming these are needed for type hints if not directly in context

+log = logging.getLogger(__name__)
 logger = logging.getLogger(__name__)

 class OutputOrganizationStage(ProcessingStage):
```
```diff
@@ -17,6 +17,8 @@ class OutputOrganizationStage(ProcessingStage):
     """

     def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
+        log.info("OUTPUT_ORG: Stage execution started for asset '%s'", context.asset_rule.asset_name)
+        log.info(f"OUTPUT_ORG: context.processed_maps_details at start: {context.processed_maps_details}")
         """
         Copies temporary processed and merged files to their final output locations
         based on path patterns and updates AssetProcessingContext.
```
```diff
@@ -34,15 +36,7 @@ class OutputOrganizationStage(ProcessingStage):
             return context

         final_output_files: List[str] = []
-        overwrite_existing = False
-        # Correctly access general_settings and overwrite_existing from config_obj
-        if hasattr(context.config_obj, 'general_settings'):
-            if isinstance(context.config_obj.general_settings, dict):
-                overwrite_existing = context.config_obj.general_settings.get('overwrite_existing', False)
-            elif hasattr(context.config_obj.general_settings, 'overwrite_existing'):  # If general_settings is an object
-                overwrite_existing = getattr(context.config_obj.general_settings, 'overwrite_existing', False)
-            else:
-                logger.warning(f"Asset '{asset_name_for_log}': config_obj.general_settings not found, defaulting overwrite_existing to False.")
+        overwrite_existing = context.config_obj.overwrite_existing

         output_dir_pattern = getattr(context.config_obj, 'output_directory_pattern', "[supplier]/[assetname]")
         output_filename_pattern_config = getattr(context.config_obj, 'output_filename_pattern', "[assetname]_[maptype]_[resolution].[ext]")
```
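The defensive `general_settings` lookup is replaced by a direct attribute read, which assumes `Configuration` now exposes `overwrite_existing` at the top level. A minimal sketch of that contract (the field layout is an assumption for illustration, not the real `Configuration` class):

```python
from dataclasses import dataclass, field

@dataclass
class Configuration:
    """Sketch: settings the stage reads are promoted to top-level attributes."""
    general_settings: dict = field(default_factory=dict)
    overwrite_existing: bool = False  # read directly as config_obj.overwrite_existing

config_obj = Configuration(general_settings={"overwrite_existing": True},
                           overwrite_existing=True)
print(config_obj.overwrite_existing)  # True
```

The simplification trades robustness for clarity: if the attribute is missing, the stage now raises `AttributeError` instead of silently defaulting to `False`.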
```diff
@@ -53,15 +47,108 @@ class OutputOrganizationStage(ProcessingStage):
         logger.debug(f"Asset '{asset_name_for_log}': Organizing {len(context.processed_maps_details)} processed individual map entries.")
         for processed_map_key, details in context.processed_maps_details.items():
             map_status = details.get('status')
-            base_map_type = details.get('map_type', 'unknown_map_type')  # Original map type
+            # Retrieve the internal map type first
+            internal_map_type = details.get('internal_map_type', 'unknown_map_type')
+            # Convert internal type to filename-friendly type using the helper
+            file_type_definitions = getattr(context.config_obj, "FILE_TYPE_DEFINITIONS", {})
+            base_map_type = get_filename_friendly_map_type(internal_map_type, file_type_definitions)  # Final filename-friendly type

-            if map_status in ['Processed', 'Processed_No_Variants']:
-                if not details.get('temp_processed_file'):
-                    logger.debug(f"Asset '{asset_name_for_log}': Skipping map key '{processed_map_key}' (status '{map_status}') due to missing 'temp_processed_file'.")
+            # --- Handle maps processed by the SaveVariantsStage (identified by having saved_files_info) ---
+            saved_files_info = details.get('saved_files_info')  # This is a list of dicts from SaveVariantsOutput
+
+            # Check if 'saved_files_info' exists and is a non-empty list.
+            # This indicates the item was processed by SaveVariantsStage.
+            if saved_files_info and isinstance(saved_files_info, list) and len(saved_files_info) > 0:
+                logger.debug(f"Asset '{asset_name_for_log}': Organizing {len(saved_files_info)} variants for map key '{processed_map_key}' (map type: {base_map_type}) from SaveVariantsStage.")
+
+                # Use base_map_type (e.g., "COL") as the key for the map entry
+                map_metadata_entry = context.asset_metadata.setdefault('maps', {}).setdefault(base_map_type, {})
+                # map_type is now the key, so no need to store it inside the entry
+                # map_metadata_entry['map_type'] = base_map_type
+                map_metadata_entry.setdefault('variant_paths', {})  # Initialize if not present
+
+                processed_any_variant_successfully = False
+                failed_any_variant = False
+
+                for variant_index, variant_detail in enumerate(saved_files_info):
+                    # Extract info from the save utility's output structure
+                    temp_variant_path_str = variant_detail.get('path')  # Key is 'path'
+                    if not temp_variant_path_str:
+                        logger.warning(f"Asset '{asset_name_for_log}': Variant {variant_index} for map '{processed_map_key}' is missing 'path' in saved_files_info. Skipping.")
+                        # Optionally update variant_detail status if it's mutable and tracked, otherwise just skip
+                        continue
+
+                    temp_variant_path = Path(temp_variant_path_str)
+                    if not temp_variant_path.is_file():
+                        logger.warning(f"Asset '{asset_name_for_log}': Temporary variant file '{temp_variant_path}' for map '{processed_map_key}' not found. Skipping.")
+                        continue
+
+                    variant_resolution_key = variant_detail.get('resolution_key', f"varRes{variant_index}")
+                    variant_ext = variant_detail.get('format', temp_variant_path.suffix.lstrip('.'))  # Use 'format' key
+
+                    token_data_variant = {
+                        "assetname": asset_name_for_log,
+                        "supplier": context.effective_supplier or "DefaultSupplier",
+                        "maptype": base_map_type,
+                        "resolution": variant_resolution_key,
+                        "ext": variant_ext,
+                        "incrementingvalue": getattr(context, 'incrementing_value', None),
+                        "sha5": getattr(context, 'sha5_value', None)
+                    }
+                    token_data_variant_cleaned = {k: v for k, v in token_data_variant.items() if v is not None}
+                    output_filename_variant = generate_path_from_pattern(output_filename_pattern_config, token_data_variant_cleaned)
+
+                    try:
+                        relative_dir_path_str_variant = generate_path_from_pattern(
+                            pattern_string=output_dir_pattern,
+                            token_data=token_data_variant_cleaned
+                        )
+                        final_variant_path = Path(context.output_base_path) / Path(relative_dir_path_str_variant) / Path(output_filename_variant)
+                        final_variant_path.parent.mkdir(parents=True, exist_ok=True)
+
+                        if final_variant_path.exists() and not overwrite_existing:
+                            logger.info(f"Asset '{asset_name_for_log}': Output variant file {final_variant_path} for map '{processed_map_key}' (res: {variant_resolution_key}) exists and overwrite is disabled. Skipping copy.")
+                            # Optionally update variant_detail status if needed
+                        else:
+                            shutil.copy2(temp_variant_path, final_variant_path)
+                            logger.info(f"Asset '{asset_name_for_log}': Copied variant {temp_variant_path} to {final_variant_path} for map '{processed_map_key}'.")
+                            final_output_files.append(str(final_variant_path))
+                            # Optionally update variant_detail status if needed
+
+                        # Store relative path in metadata
+                        # Store only the filename, as it's relative to the metadata.json location
+                        map_metadata_entry['variant_paths'][variant_resolution_key] = output_filename_variant
+                        processed_any_variant_successfully = True
+
+                    except Exception as e:
+                        logger.error(f"Asset '{asset_name_for_log}': Failed to copy variant {temp_variant_path} for map key '{processed_map_key}' (res: {variant_resolution_key}). Error: {e}", exc_info=True)
+                        context.status_flags['output_organization_error'] = True
+                        context.asset_metadata['status'] = "Failed (Output Organization Error - Variant)"
+                        # Optionally update variant_detail status if needed
+                        failed_any_variant = True
+
+                # Update parent map detail status based on variant outcomes
+                if failed_any_variant:
+                    details['status'] = 'Organization Failed (Save Utility Variants)'
+                elif processed_any_variant_successfully:
+                    details['status'] = 'Organized (Save Utility Variants)'
+                else:  # No variants were successfully copied (e.g., all skipped due to existing file or missing temp file)
+                    details['status'] = 'Organization Skipped (No Save Utility Variants Copied/Needed)'
+
+            # --- Handle older/other processing statuses (like single file processing) ---
+            elif map_status in ['Processed', 'Processed_No_Variants', 'Converted_To_Rough']:  # Add other single-file statuses if needed
+                temp_file_path_str = details.get('temp_processed_file')
+                if not temp_file_path_str:
+                    logger.warning(f"Asset '{asset_name_for_log}': Skipping map key '{processed_map_key}' (status '{map_status}') due to missing 'temp_processed_file'.")
                     details['status'] = 'Organization Skipped (Missing Temp File)'
                     continue

-                temp_file_path = Path(details['temp_processed_file'])
+                temp_file_path = Path(temp_file_path_str)
+                if not temp_file_path.is_file():
+                    logger.warning(f"Asset '{asset_name_for_log}': Temporary file '{temp_file_path}' for map '{processed_map_key}' not found. Skipping.")
+                    details['status'] = 'Organization Skipped (Temp File Not Found)'
+                    continue
+
                 resolution_str = details.get('processed_resolution_name', details.get('original_resolution_name', 'resX'))

                 token_data = {
```
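The token dictionaries built above feed `generate_path_from_pattern` from `utils/path_utils.py`. Its real implementation is not shown in this diff; as a rough sketch of the idea, assuming a simple `[token]` substitution (the substitution rules here are an assumption, not the actual helper):

```python
import re

def generate_path_from_pattern(pattern_string: str, token_data: dict) -> str:
    """Replace [token] placeholders with values; unknown tokens become empty (sketch)."""
    def sub(match: re.Match) -> str:
        return str(token_data.get(match.group(1), ""))
    return re.sub(r"\[([a-z]+)\]", sub, pattern_string)

tokens = {"assetname": "Brick01", "maptype": "COL", "resolution": "2k", "ext": "png"}
print(generate_path_from_pattern("[assetname]_[maptype]_[resolution].[ext]", tokens))
# Brick01_COL_2k.png
```

This also shows why the stage strips `None` values into `token_data_*_cleaned` first: a missing token should not leak the string `None` into a filename.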
```diff
@@ -87,18 +174,26 @@ class OutputOrganizationStage(ProcessingStage):

                 if final_path.exists() and not overwrite_existing:
                     logger.info(f"Asset '{asset_name_for_log}': Output file {final_path} for map '{processed_map_key}' exists and overwrite is disabled. Skipping copy.")
+                    details['status'] = 'Organized (Exists, Skipped Copy)'
                 else:
                     shutil.copy2(temp_file_path, final_path)
                     logger.info(f"Asset '{asset_name_for_log}': Copied {temp_file_path} to {final_path} for map '{processed_map_key}'.")
                     final_output_files.append(str(final_path))
-
-                details['final_output_path'] = str(final_path)
                     details['status'] = 'Organized'
+
+                details['final_output_path'] = str(final_path)

                 # Update asset_metadata for metadata.json
-                map_metadata_entry = context.asset_metadata.setdefault('maps', {}).setdefault(processed_map_key, {})
-                map_metadata_entry['map_type'] = base_map_type
-                map_metadata_entry['path'] = str(Path(relative_dir_path_str) / Path(output_filename))  # Store relative path
+                # Use base_map_type (e.g., "COL") as the key for the map entry
+                map_metadata_entry = context.asset_metadata.setdefault('maps', {}).setdefault(base_map_type, {})
+                # map_type is now the key, so no need to store it inside the entry
+                # map_metadata_entry['map_type'] = base_map_type
+                # Store single path in variant_paths, keyed by its resolution string
+                # Store only the filename, as it's relative to the metadata.json location
+                map_metadata_entry.setdefault('variant_paths', {})[resolution_str] = output_filename
+                # Remove old cleanup logic, as variant_paths is now the standard
+                # if 'variant_paths' in map_metadata_entry:
+                #     del map_metadata_entry['variant_paths']

             except Exception as e:
                 logger.error(f"Asset '{asset_name_for_log}': Failed to copy {temp_file_path} for map key '{processed_map_key}'. Error: {e}", exc_info=True)
```
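After this change, `asset_metadata['maps']` is keyed by the filename-friendly map type, and every output file (single file or resolution variant) is recorded as a filename under `variant_paths`, relative to `metadata.json`. An illustrative shape (the asset and file names are made up for the example):

```python
asset_metadata = {
    "maps": {
        "COL": {
            "variant_paths": {
                "2k": "Brick01_COL_2k.png",  # filenames, relative to metadata.json
                "1k": "Brick01_COL_1k.png",
            }
        },
        "NRM": {
            "variant_paths": {"2k": "Brick01_NRM_2k.png"},
        },
    }
}
```

Unifying single files and variants under one `variant_paths` key is what lets the metadata stage drop the old per-entry `map_type` and `path` fields.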
```diff
@@ -106,204 +201,17 @@ class OutputOrganizationStage(ProcessingStage):
                 context.asset_metadata['status'] = "Failed (Output Organization Error)"
                 details['status'] = 'Organization Failed'

-            elif map_status == 'Processed_With_Variants':
-                variants = details.get('variants')
-                if not variants:  # No variants list, or it's empty
-                    logger.warning(f"Asset '{asset_name_for_log}': Map key '{processed_map_key}' (status '{map_status}') has no 'variants' list or it is empty. Attempting fallback to base file.")
-                    if not details.get('temp_processed_file'):
-                        logger.error(f"Asset '{asset_name_for_log}': Skipping map key '{processed_map_key}' (fallback) as 'temp_processed_file' is also missing.")
-                        details['status'] = 'Organization Failed (No Variants, No Temp File)'
-                        continue  # Skip to next map key
-
-                    # Fallback: Process the base temp_processed_file
-                    temp_file_path = Path(details['temp_processed_file'])
-                    resolution_str = details.get('processed_resolution_name', details.get('original_resolution_name', 'baseRes'))
-
-                    token_data = {
-                        "assetname": asset_name_for_log,
-                        "supplier": context.effective_supplier or "DefaultSupplier",
-                        "maptype": base_map_type,
-                        "resolution": resolution_str,
-                        "ext": temp_file_path.suffix.lstrip('.'),
-                        "incrementingvalue": getattr(context, 'incrementing_value', None),
-                        "sha5": getattr(context, 'sha5_value', None)
-                    }
-                    token_data_cleaned = {k: v for k, v in token_data.items() if v is not None}
-                    output_filename = generate_path_from_pattern(output_filename_pattern_config, token_data_cleaned)
-
-                    try:
-                        relative_dir_path_str = generate_path_from_pattern(
-                            pattern_string=output_dir_pattern,
-                            token_data=token_data_cleaned
-                        )
-                        final_path = Path(context.output_base_path) / Path(relative_dir_path_str) / Path(output_filename)
-                        final_path.parent.mkdir(parents=True, exist_ok=True)
-
-                        if final_path.exists() and not overwrite_existing:
-                            logger.info(f"Asset '{asset_name_for_log}': Output file {final_path} for map '{processed_map_key}' (fallback) exists and overwrite is disabled. Skipping copy.")
-                        else:
-                            shutil.copy2(temp_file_path, final_path)
-                            logger.info(f"Asset '{asset_name_for_log}': Copied {temp_file_path} to {final_path} for map '{processed_map_key}' (fallback).")
-                            final_output_files.append(str(final_path))
-
-                        details['final_output_path'] = str(final_path)
-                        details['status'] = 'Organized (Base File Fallback)'
-
-                        map_metadata_entry = context.asset_metadata.setdefault('maps', {}).setdefault(processed_map_key, {})
-                        map_metadata_entry['map_type'] = base_map_type
-                        map_metadata_entry['path'] = str(Path(relative_dir_path_str) / Path(output_filename))
-                        if 'variant_paths' in map_metadata_entry:  # Clean up if it was somehow set
-                            del map_metadata_entry['variant_paths']
-                    except Exception as e:
-                        logger.error(f"Asset '{asset_name_for_log}': Failed to copy {temp_file_path} (fallback) for map key '{processed_map_key}'. Error: {e}", exc_info=True)
-                        context.status_flags['output_organization_error'] = True
-                        context.asset_metadata['status'] = "Failed (Output Organization Error - Fallback)"
-                        details['status'] = 'Organization Failed (Fallback)'
-                    continue  # Finished with this map key due to fallback
-
-                # If we are here, 'variants' list exists and is not empty. Proceed with variant processing.
-                logger.debug(f"Asset '{asset_name_for_log}': Organizing {len(variants)} variants for map key '{processed_map_key}' (map type: {base_map_type}).")
-
-                map_metadata_entry = context.asset_metadata.setdefault('maps', {}).setdefault(processed_map_key, {})
-                map_metadata_entry['map_type'] = base_map_type
-                map_metadata_entry.setdefault('variant_paths', {})  # Initialize if not present
-
-                processed_any_variant_successfully = False
-                failed_any_variant = False
-
-                for variant_index, variant_detail in enumerate(variants):
-                    temp_variant_path_str = variant_detail.get('temp_path')
-                    if not temp_variant_path_str:
-                        logger.warning(f"Asset '{asset_name_for_log}': Variant {variant_index} for map '{processed_map_key}' is missing 'temp_path'. Skipping.")
-                        variant_detail['status'] = 'Organization Skipped (Missing Temp Path)'
-                        continue
-
-                    temp_variant_path = Path(temp_variant_path_str)
-                    variant_resolution_key = variant_detail.get('resolution_key', f"varRes{variant_index}")
-                    variant_ext = temp_variant_path.suffix.lstrip('.')
-
-                    token_data_variant = {
-                        "assetname": asset_name_for_log,
-                        "supplier": context.effective_supplier or "DefaultSupplier",
-                        "maptype": base_map_type,
-                        "resolution": variant_resolution_key,
-                        "ext": variant_ext,
-                        "incrementingvalue": getattr(context, 'incrementing_value', None),
-                        "sha5": getattr(context, 'sha5_value', None)
-                    }
-                    token_data_variant_cleaned = {k: v for k, v in token_data_variant.items() if v is not None}
-                    output_filename_variant = generate_path_from_pattern(output_filename_pattern_config, token_data_variant_cleaned)
-
-                    try:
-                        relative_dir_path_str_variant = generate_path_from_pattern(
-                            pattern_string=output_dir_pattern,
-                            token_data=token_data_variant_cleaned
-                        )
-                        final_variant_path = Path(context.output_base_path) / Path(relative_dir_path_str_variant) / Path(output_filename_variant)
-                        final_variant_path.parent.mkdir(parents=True, exist_ok=True)
-
-                        if final_variant_path.exists() and not overwrite_existing:
-                            logger.info(f"Asset '{asset_name_for_log}': Output variant file {final_variant_path} for map '{processed_map_key}' (res: {variant_resolution_key}) exists and overwrite is disabled. Skipping copy.")
-                            variant_detail['status'] = 'Organized (Exists, Skipped Copy)'
-                        else:
-                            shutil.copy2(temp_variant_path, final_variant_path)
-                            logger.info(f"Asset '{asset_name_for_log}': Copied variant {temp_variant_path} to {final_variant_path} for map '{processed_map_key}'.")
-                            final_output_files.append(str(final_variant_path))
-                            variant_detail['status'] = 'Organized'
-
-                        variant_detail['final_output_path'] = str(final_variant_path)
-                        # Store the Path object for metadata stage to make it relative later
-                        variant_detail['final_output_path_for_metadata'] = final_variant_path
-                        relative_final_variant_path_str = str(Path(relative_dir_path_str_variant) / Path(output_filename_variant))
-                        map_metadata_entry['variant_paths'][variant_resolution_key] = relative_final_variant_path_str
-                        processed_any_variant_successfully = True
-
-                    except Exception as e:
-                        logger.error(f"Asset '{asset_name_for_log}': Failed to copy variant {temp_variant_path} for map key '{processed_map_key}' (res: {variant_resolution_key}). Error: {e}", exc_info=True)
-                        context.status_flags['output_organization_error'] = True
-                        context.asset_metadata['status'] = "Failed (Output Organization Error - Variant)"
-                        variant_detail['status'] = 'Organization Failed'
-                        failed_any_variant = True
-
-                # Update parent map detail status based on variant outcomes
-                if failed_any_variant:
-                    details['status'] = 'Organization Failed (Variants)'
-                elif processed_any_variant_successfully:
-                    # Check if all processable variants were organized
-                    all_attempted_organized = True
-                    for v_detail in variants:
-                        if v_detail.get('temp_path') and not v_detail.get('status', '').startswith('Organized'):
-                            all_attempted_organized = False
-                            break
-                    if all_attempted_organized:
-                        details['status'] = 'Organized (All Attempted Variants)'
-                    else:
-                        details['status'] = 'Partially Organized (Variants)'
-                elif not any(v.get('temp_path') for v in variants):  # No variants had temp_paths to begin with
-                    details['status'] = 'Processed_With_Variants (No Valid Variants to Organize)'
-                else:  # Variants list existed, items had temp_paths, but none were successfully organized (e.g., all skipped due to existing file and no overwrite)
-                    details['status'] = 'Organization Skipped (No Variants Copied/Needed)'
-
-
-            else:  # Other statuses like 'Skipped', 'Failed', 'Organization Failed' etc.
-                logger.debug(f"Asset '{asset_name_for_log}': Skipping map key '{processed_map_key}' (status: '{map_status}') for organization as it's not 'Processed', 'Processed_No_Variants', or 'Processed_With_Variants'.")
+            # --- Handle other statuses (Skipped, Failed, etc.) ---
+            else:  # Catches statuses not explicitly handled above
+                logger.debug(f"Asset '{asset_name_for_log}': Skipping map key '{processed_map_key}' (status: '{map_status}') for organization as it's not a recognized final processed state or variant state.")
```
|
|
||||||
continue
|
continue
|
||||||
else:
|
else:
|
||||||
logger.debug(f"Asset '{asset_name_for_log}': No processed individual maps to organize.")
|
logger.debug(f"Asset '{asset_name_for_log}': No processed individual maps to organize.")
|
||||||
|
|
||||||
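Both the variant and merged-map branches build their output names through `generate_path_from_pattern(...)`, whose implementation is outside this diff. A minimal sketch of the token substitution it performs — the square-bracket `[token]` placeholder syntax and the sample values here are assumptions, not taken from the repository:

```python
def expand_pattern(pattern_string: str, token_data: dict) -> str:
    """Hypothetical token expander: replaces [token] placeholders with values,
    leaving unknown tokens untouched (the real syntax/signature may differ)."""
    result = pattern_string
    for key, value in token_data.items():
        result = result.replace(f"[{key}]", str(value))
    return result

token_data = {
    "assetname": "OakBark01",   # illustrative values, not from the diff
    "maptype": "COL",
    "resolution": "4K",
    "ext": "png",
}
print(expand_pattern("[assetname]_[maptype]_[resolution].[ext]", token_data))
# OakBark01_COL_4K.png
```

This also shows why the token dicts are cleaned of `None` values before the call: an absent token would otherwise be stringified into the filename.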
-    # B. Organize Merged Maps
-    if context.merged_maps_details:
-        logger.debug(f"Asset '{asset_name_for_log}': Organizing {len(context.merged_maps_details)} merged map(s).")
-        for merge_op_id, details in context.merged_maps_details.items():  # Use merge_op_id
-            if details.get('status') != 'Processed' or not details.get('temp_merged_file'):
-                logger.debug(f"Asset '{asset_name_for_log}': Skipping merge op id '{merge_op_id}' due to status '{details.get('status')}' or missing temp file.")
-                continue
-
-            temp_file_path = Path(details['temp_merged_file'])
-            map_type = details.get('map_type', 'unknown_merged_map')  # This is the output_map_type of the merge rule
-            # Merged maps might not have a simple 'resolution' token like individual maps.
-            # We'll use a placeholder or derive one if possible.
-            resolution_str = details.get('merged_resolution_name', 'mergedRes')
-
-            token_data_merged = {
-                "assetname": asset_name_for_log,
-                "supplier": context.effective_supplier or "DefaultSupplier",
-                "maptype": map_type,
-                "resolution": resolution_str,
-                "ext": temp_file_path.suffix.lstrip('.'),
-                "incrementingvalue": getattr(context, 'incrementing_value', None),
-                "sha5": getattr(context, 'sha5_value', None)
-            }
-            token_data_merged_cleaned = {k: v for k, v in token_data_merged.items() if v is not None}
-
-            output_filename_merged = generate_path_from_pattern(output_filename_pattern_config, token_data_merged_cleaned)
-
-            try:
-                relative_dir_path_str_merged = generate_path_from_pattern(
-                    pattern_string=output_dir_pattern,
-                    token_data=token_data_merged_cleaned
-                )
-                final_path_merged = Path(context.output_base_path) / Path(relative_dir_path_str_merged) / Path(output_filename_merged)
-                final_path_merged.parent.mkdir(parents=True, exist_ok=True)
-
-                if final_path_merged.exists() and not overwrite_existing:
-                    logger.info(f"Asset '{asset_name_for_log}': Output file {final_path_merged} exists and overwrite is disabled. Skipping copy for merged map.")
-                else:
-                    shutil.copy2(temp_file_path, final_path_merged)
-                    logger.info(f"Asset '{asset_name_for_log}': Copied merged map {temp_file_path} to {final_path_merged}")
-                    final_output_files.append(str(final_path_merged))
-
-                context.merged_maps_details[merge_op_id]['final_output_path'] = str(final_path_merged)
-                context.merged_maps_details[merge_op_id]['status'] = 'Organized'
-
-            except Exception as e:
-                logger.error(f"Asset '{asset_name_for_log}': Failed to copy merged map {temp_file_path} to destination for merge op id '{merge_op_id}'. Error: {e}", exc_info=True)
-                context.status_flags['output_organization_error'] = True
-                context.asset_metadata['status'] = "Failed (Output Organization Error)"
-                context.merged_maps_details[merge_op_id]['status'] = 'Organization Failed'
-    else:
-        logger.debug(f"Asset '{asset_name_for_log}': No merged maps to organize.")
+    # B. Organize Merged Maps (OBSOLETE BLOCK - Merged maps are handled by the main loop processing context.processed_maps_details)
+    # The log "No merged maps to organize" will no longer appear from here.
+    # If merged maps are not appearing, the issue is likely that they are not being added
+    # to context.processed_maps_details with 'saved_files_info' by the orchestrator/SaveVariantsStage.

     # C. Organize Extra Files (e.g., previews, text files)
     logger.debug(f"Asset '{asset_name_for_log}': Checking for EXTRA files to organize.")
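The copy step repeated in both branches above (create parent directories, honor the overwrite flag, `shutil.copy2`) is self-contained enough to demonstrate in isolation; the status strings are the ones used in the diff:

```python
import shutil
import tempfile
from pathlib import Path

def organize_file(src: Path, dest: Path, overwrite_existing: bool) -> str:
    """Copy src into place, honoring the overwrite flag; returns a status string."""
    dest.parent.mkdir(parents=True, exist_ok=True)
    if dest.exists() and not overwrite_existing:
        return "Organized (Exists, Skipped Copy)"
    shutil.copy2(src, dest)  # copy2 preserves metadata (timestamps) along with contents
    return "Organized"

# Usage with a throwaway directory:
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "map.png"
    src.write_bytes(b"fake-pixels")
    dest = Path(tmp) / "out" / "4K" / "map.png"
    print(organize_file(src, dest, overwrite_existing=False))  # Organized
    print(organize_file(src, dest, overwrite_existing=False))  # Organized (Exists, Skipped Copy)
```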
processing/pipeline/stages/prepare_processing_items.py (new file, 105 lines)
@@ -0,0 +1,105 @@
import logging
from typing import List, Union, Optional

from .base_stage import ProcessingStage
from ..asset_context import AssetProcessingContext, MergeTaskDefinition
from rule_structure import FileRule  # Assuming FileRule is imported correctly

log = logging.getLogger(__name__)


class PrepareProcessingItemsStage(ProcessingStage):
    """
    Identifies and prepares a unified list of items (FileRule, MergeTaskDefinition)
    to be processed in subsequent stages. Performs initial validation.
    """

    def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
        """
        Populates context.processing_items with FileRule and MergeTaskDefinition objects.
        """
        asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
        log.info(f"Asset '{asset_name_for_log}': Preparing processing items...")

        if context.status_flags.get('skip_asset', False):
            log.info(f"Asset '{asset_name_for_log}': Skipping item preparation due to skip_asset flag.")
            context.processing_items = []
            return context

        items_to_process: List[Union[FileRule, MergeTaskDefinition]] = []
        preparation_failed = False

        # --- Add regular files ---
        if context.files_to_process:
            # Validate source path early for regular files
            source_path_valid = True
            if not context.source_rule or not context.source_rule.input_path:
                log.error(f"Asset '{asset_name_for_log}': SourceRule or SourceRule.input_path is not set. Cannot process regular files.")
                source_path_valid = False
                preparation_failed = True  # Mark as failed if the source path is missing
                context.status_flags['prepare_items_failed_reason'] = "SourceRule.input_path missing"
            elif not context.workspace_path or not context.workspace_path.is_dir():
                log.error(f"Asset '{asset_name_for_log}': Workspace path '{context.workspace_path}' is not a valid directory. Cannot process regular files.")
                source_path_valid = False
                preparation_failed = True  # Mark as failed if the workspace path is bad
                context.status_flags['prepare_items_failed_reason'] = "Workspace path invalid"

            if source_path_valid:
                for file_rule in context.files_to_process:
                    # Basic validation for the FileRule itself
                    if not file_rule.file_path:
                        log.warning(f"Asset '{asset_name_for_log}': Skipping FileRule with empty file_path.")
                        continue  # Skip this specific rule, but don't fail the whole stage
                    items_to_process.append(file_rule)
                log.debug(f"Asset '{asset_name_for_log}': Added {len(context.files_to_process)} potential FileRule items.")
            else:
                log.warning(f"Asset '{asset_name_for_log}': Skipping addition of all FileRule items due to invalid source/workspace path.")

        # --- Add merge tasks from global configuration ---
        # merged_image_tasks are expected to be loaded into context.config_obj
        # by the Configuration class from app_settings.json.
        merged_tasks_list = getattr(context.config_obj, 'map_merge_rules', None)

        if merged_tasks_list and isinstance(merged_tasks_list, list):
            log.debug(f"Asset '{asset_name_for_log}': Found {len(merged_tasks_list)} merge tasks in global config.")
            for task_idx, task_data in enumerate(merged_tasks_list):
                if isinstance(task_data, dict):
                    task_key = f"merged_task_{task_idx}"
                    # Basic validation for merge task data: requires output_map_type and an inputs dictionary
                    if not task_data.get('output_map_type') or not isinstance(task_data.get('inputs'), dict):
                        log.warning(f"Asset '{asset_name_for_log}', Task Index {task_idx}: Skipping merge task due to missing 'output_map_type' or valid 'inputs' dictionary. Task data: {task_data}")
                        continue  # Skip this specific task
                    log.debug(f"Asset '{asset_name_for_log}', Preparing Merge Task Index {task_idx}: Raw task_data: {task_data}")
                    merge_def = MergeTaskDefinition(task_data=task_data, task_key=task_key)
                    log.debug(f"Asset '{asset_name_for_log}': Created MergeTaskDefinition object: {merge_def}")
                    log.info(f"Asset '{asset_name_for_log}': Successfully CREATED MergeTaskDefinition: Key='{merge_def.task_key}', OutputType='{merge_def.task_data.get('output_map_type', 'N/A')}'")
                    items_to_process.append(merge_def)
                else:
                    log.warning(f"Asset '{asset_name_for_log}': Item at index {task_idx} in config_obj.merged_image_tasks is not a dictionary. Skipping. Item: {task_data}")
            # The log for "Added X potential MergeTaskDefinition items" is covered by the final log.
        elif merged_tasks_list is None:
            log.debug(f"Asset '{asset_name_for_log}': 'merged_image_tasks' not found in config_obj. No global merge tasks to add.")
        elif not isinstance(merged_tasks_list, list):
            log.warning(f"Asset '{asset_name_for_log}': 'merged_image_tasks' in config_obj is not a list. Skipping global merge tasks. Type: {type(merged_tasks_list)}")
        else:  # Empty list
            log.debug(f"Asset '{asset_name_for_log}': 'merged_image_tasks' in config_obj is empty. No global merge tasks to add.")

        if not items_to_process:
            log.info(f"Asset '{asset_name_for_log}': No valid items found to process after preparation.")

        log.debug(f"Asset '{asset_name_for_log}': Final items_to_process before assigning to context: {items_to_process}")
        context.processing_items = items_to_process
        context.intermediate_results = {}  # Initialize intermediate results storage

        if preparation_failed:
            # Set a flag indicating failure during preparation, even if some items were added before the failure
            context.status_flags['prepare_items_failed'] = True
            log.error(f"Asset '{asset_name_for_log}': Item preparation failed. Reason: {context.status_flags.get('prepare_items_failed_reason', 'Unknown')}")
            # Optionally, clear items if failure means nothing should proceed
            # context.processing_items = []

        log.info(f"Asset '{asset_name_for_log}': Finished preparing items. Found {len(context.processing_items)} valid items.")
        return context
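The stage's merge-task validation can be exercised on its own; the task shapes below are illustrative (only `output_map_type` and `inputs` are keys named in the diff):

```python
def filter_valid_merge_tasks(merged_tasks_list):
    """Keep only dict tasks with an output_map_type and an 'inputs' dict,
    mirroring the checks in PrepareProcessingItemsStage."""
    valid = []
    for task_idx, task_data in enumerate(merged_tasks_list or []):
        if not isinstance(task_data, dict):
            continue  # non-dict entries are skipped (the stage logs a warning)
        if not task_data.get('output_map_type') or not isinstance(task_data.get('inputs'), dict):
            continue  # missing required keys
        valid.append((f"merged_task_{task_idx}", task_data))
    return valid

tasks = [
    {"output_map_type": "MAP_ORM", "inputs": {"R": "MAP_AO", "G": "MAP_ROUGH", "B": "MAP_METAL"}},
    {"output_map_type": "MAP_BAD"},  # no 'inputs' dict -> rejected
    "not-a-dict",                    # wrong type -> rejected
]
print([key for key, _ in filter_valid_merge_tasks(tasks)])  # ['merged_task_0']
```

Note the task key keeps the original list index (`merged_task_0`, `merged_task_1`, ...), so rejected entries leave gaps rather than renumbering later tasks.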
processing/pipeline/stages/regular_map_processor.py (new file, 213 lines)
@@ -0,0 +1,213 @@
import logging
import re
from pathlib import Path
from typing import List, Optional, Tuple, Dict

import cv2
import numpy as np

from .base_stage import ProcessingStage  # Assuming base_stage is in the same directory
from ..asset_context import AssetProcessingContext, ProcessedRegularMapData
from rule_structure import FileRule, AssetRule
from processing.utils import image_processing_utils as ipu  # Absolute import
from utils.path_utils import get_filename_friendly_map_type  # Absolute import

log = logging.getLogger(__name__)


class RegularMapProcessorStage(ProcessingStage):
    """
    Processes a single regular texture map defined by a FileRule.
    Loads the image, determines the map type, applies transformations,
    and returns the processed data.
    """

    # --- Helper Methods (Adapted from IndividualMapProcessingStage) ---

    def _get_suffixed_internal_map_type(
        self,
        asset_rule: Optional[AssetRule],
        current_file_rule: FileRule,
        initial_internal_map_type: str,
        respect_variant_map_types: List[str],
        asset_name_for_log: str
    ) -> str:
        """
        Determines the potentially suffixed internal map type (e.g., MAP_COL-1).
        """
        final_internal_map_type = initial_internal_map_type  # Default

        base_map_type_match = re.match(r"(MAP_[A-Z]{3})", initial_internal_map_type)
        if not base_map_type_match or not asset_rule or not asset_rule.files:
            return final_internal_map_type  # Cannot determine a suffix without a base type or asset rule files

        true_base_map_type = base_map_type_match.group(1)  # This is "MAP_XXX"

        # Find all FileRules in the asset with the same base map type
        peers_of_same_base_type = []
        for fr_asset in asset_rule.files:
            fr_asset_item_type = fr_asset.item_type_override or fr_asset.item_type or "UnknownMapType"
            fr_asset_base_match = re.match(r"(MAP_[A-Z]{3})", fr_asset_item_type)
            if fr_asset_base_match and fr_asset_base_match.group(1) == true_base_map_type:
                peers_of_same_base_type.append(fr_asset)

        num_occurrences = len(peers_of_same_base_type)
        current_instance_index = 0  # 1-based index

        try:
            # Find the index based on the FileRule object itself (requires object identity)
            current_instance_index = peers_of_same_base_type.index(current_file_rule) + 1
        except ValueError:
            # Fallback: try matching by file_path if object identity fails (less reliable)
            try:
                current_instance_index = [fr.file_path for fr in peers_of_same_base_type].index(current_file_rule.file_path) + 1
                log.warning(f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}': Found peer index using file_path fallback for suffixing.")
            except (ValueError, AttributeError):  # Catch AttributeError if file_path is None
                log.warning(
                    f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}' (Initial Type: '{initial_internal_map_type}', Base: '{true_base_map_type}'): "
                    f"Could not find its own instance in the list of {num_occurrences} peers from asset_rule.files using object identity or path. Suffixing may be incorrect."
                )
                # Keep index 0; the suffix logic below will handle it

        # Determine the suffix
        map_type_for_respect_check = true_base_map_type.replace("MAP_", "")  # e.g., "COL"
        is_in_respect_list = map_type_for_respect_check in respect_variant_map_types

        suffix_to_append = ""
        if num_occurrences > 1:
            if current_instance_index > 0:
                suffix_to_append = f"-{current_instance_index}"
            else:
                # If the index is still 0 (not found), don't add a suffix, to avoid ambiguity
                log.warning(f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}': Index for multi-occurrence map type '{true_base_map_type}' (count: {num_occurrences}) not determined. Omitting numeric suffix.")
        elif num_occurrences == 1 and is_in_respect_list:
            suffix_to_append = "-1"  # Add a suffix even for a single instance if it is in the respect list

        if suffix_to_append:
            final_internal_map_type = true_base_map_type + suffix_to_append

        if final_internal_map_type != initial_internal_map_type:
            log.debug(f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}': Suffixed internal map type determined: '{initial_internal_map_type}' -> '{final_internal_map_type}'")

        return final_internal_map_type

    # --- Execute Method ---

    def execute(
        self,
        context: AssetProcessingContext,
        file_rule: FileRule  # Specific item passed by the orchestrator
    ) -> ProcessedRegularMapData:
        """
        Processes the given FileRule item.
        """
        asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
        log_prefix = f"Asset '{asset_name_for_log}', File '{file_rule.file_path}'"
        log.info(f"{log_prefix}: Processing Regular Map.")

        # Initialize the output object with a default failure state
        result = ProcessedRegularMapData(
            processed_image_data=np.array([]),  # Placeholder
            final_internal_map_type="Unknown",
            source_file_path=Path(file_rule.file_path or "InvalidPath"),
            original_bit_depth=None,
            original_dimensions=None,
            transformations_applied=[],
            status="Failed",
            error_message="Initialization error"
        )

        try:
            # --- Configuration ---
            config = context.config_obj
            file_type_definitions = getattr(config, "FILE_TYPE_DEFINITIONS", {})
            respect_variant_map_types = getattr(config, "respect_variant_map_types", [])
            invert_normal_green = config.invert_normal_green_globally

            # --- Determine Map Type (with suffix) ---
            initial_internal_map_type = file_rule.item_type_override or file_rule.item_type or "UnknownMapType"
            if not initial_internal_map_type or initial_internal_map_type == "UnknownMapType":
                result.error_message = "Map type (item_type) not defined in FileRule."
                log.error(f"{log_prefix}: {result.error_message}")
                return result  # Early exit

            # Explicitly skip if the determined type doesn't start with "MAP_"
            if not initial_internal_map_type.startswith("MAP_"):
                result.status = "Skipped (Invalid Type)"
                result.error_message = f"FileRule item_type '{initial_internal_map_type}' does not start with 'MAP_'. Skipping processing."
                log.warning(f"{log_prefix}: {result.error_message}")
                return result  # Early exit

            processing_map_type = self._get_suffixed_internal_map_type(
                context.asset_rule, file_rule, initial_internal_map_type, respect_variant_map_types, asset_name_for_log
            )
            result.final_internal_map_type = processing_map_type  # Store the initial suffixed type

            # --- Find and Load Source File ---
            if not file_rule.file_path:  # Should have been caught by the Prepare stage, but double-check
                result.error_message = "FileRule has empty file_path."
                log.error(f"{log_prefix}: {result.error_message}")
                return result

            source_base_path = context.workspace_path
            potential_source_path = source_base_path / file_rule.file_path
            source_file_path_found: Optional[Path] = None

            if potential_source_path.is_file():
                source_file_path_found = potential_source_path
                log.info(f"{log_prefix}: Found source file: {source_file_path_found}")
            else:
                # Optional: add a globbing fallback if needed, similar to the original stage
                log.warning(f"{log_prefix}: Source file not found directly at '{potential_source_path}'. Add globbing if necessary.")
                result.error_message = f"Source file not found at '{potential_source_path}'"
                log.error(f"{log_prefix}: {result.error_message}")
                return result

            result.source_file_path = source_file_path_found  # Update the result with the found path

            # Load the image
            source_image_data = ipu.load_image(str(source_file_path_found))
            if source_image_data is None:
                result.error_message = f"Failed to load image from '{source_file_path_found}'."
                log.error(f"{log_prefix}: {result.error_message}")
                return result

            original_height, original_width = source_image_data.shape[:2]
            result.original_dimensions = (original_width, original_height)
            log.debug(f"{log_prefix}: Loaded image {result.original_dimensions[0]}x{result.original_dimensions[1]}.")

            # Get the original bit depth
            try:
                result.original_bit_depth = ipu.get_image_bit_depth(str(source_file_path_found))
                log.info(f"{log_prefix}: Determined source bit depth: {result.original_bit_depth}")
            except Exception as e:
                log.warning(f"{log_prefix}: Could not determine source bit depth for {source_file_path_found}: {e}. Setting to None.")
                result.original_bit_depth = None  # Indicate failure to determine

            # --- Apply Transformations ---
            transformed_image_data, final_map_type, transform_notes = ipu.apply_common_map_transformations(
                source_image_data.copy(),  # Pass a copy to avoid modifying the original load
                processing_map_type,
                invert_normal_green,
                file_type_definitions,
                log_prefix
            )
            result.processed_image_data = transformed_image_data
            result.final_internal_map_type = final_map_type  # Update if Gloss->Rough changed it
            result.transformations_applied = transform_notes

            # --- Success ---
            result.status = "Processed"
            result.error_message = None
            log.info(f"{log_prefix}: Successfully processed regular map. Final type: '{result.final_internal_map_type}'.")

        except Exception as e:
            log.exception(f"{log_prefix}: Unhandled exception during processing: {e}")
            result.status = "Failed"
            result.error_message = f"Unhandled exception: {e}"
            # Ensure image data is empty on failure if it wasn't set
            if result.processed_image_data is None or result.processed_image_data.size == 0:
                result.processed_image_data = np.array([])

        return result
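Stripped of `FileRule` objects and the object-identity fallback, the suffixing rule in `_get_suffixed_internal_map_type` reduces to a small function over a list of type strings — a sketch of the rule, not the committed helper:

```python
import re

BASE_RE = re.compile(r"(MAP_[A-Z]{3})")

def suffixed_map_type(all_types, this_index, respect_list):
    """Sketch of the suffixing rule: '-N' for duplicates of a base MAP_XXX type,
    and '-1' even for a singleton whose short type is in the respect list."""
    initial = all_types[this_index]
    m = BASE_RE.match(initial)
    if not m:
        return initial
    base = m.group(1)
    # Peers share the same MAP_XXX base, regardless of any existing suffix
    peer_indices = [i for i, t in enumerate(all_types)
                    if (pm := BASE_RE.match(t)) and pm.group(1) == base]
    occurrences = len(peer_indices)
    instance = peer_indices.index(this_index) + 1  # 1-based position among peers
    suffix = ""
    if occurrences > 1:
        suffix = f"-{instance}"
    elif occurrences == 1 and base.replace("MAP_", "") in respect_list:
        suffix = "-1"
    return base + suffix if suffix else initial

types = ["MAP_COL", "MAP_NRM", "MAP_COL"]
print(suffixed_map_type(types, 0, []))       # MAP_COL-1
print(suffixed_map_type(types, 2, []))       # MAP_COL-2
print(suffixed_map_type(types, 1, ["NRM"]))  # MAP_NRM-1
print(suffixed_map_type(types, 1, []))       # MAP_NRM
```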
processing/pipeline/stages/save_variants.py (new file, 89 lines)
@@ -0,0 +1,89 @@
import logging
from typing import List, Dict, Optional  # Added Optional

import numpy as np

from .base_stage import ProcessingStage
# Import the necessary context classes and utils
from ..asset_context import SaveVariantsInput, SaveVariantsOutput
from processing.utils import image_saving_utils as isu  # Absolute import
from utils.path_utils import get_filename_friendly_map_type  # Absolute import

log = logging.getLogger(__name__)


class SaveVariantsStage(ProcessingStage):
    """
    Takes final processed image data and configuration, calls the
    save_image_variants utility, and returns the results.
    """

    def execute(self, input_data: SaveVariantsInput) -> SaveVariantsOutput:
        """
        Calls isu.save_image_variants with data from input_data.
        """
        internal_map_type = input_data.internal_map_type
        log_prefix = f"Save Variants Stage (Type: {internal_map_type})"
        log.info(f"{log_prefix}: Starting.")

        # Initialize the output object with a default failure state
        result = SaveVariantsOutput(
            saved_files_details=[],
            status="Failed",
            error_message="Initialization error"
        )

        if input_data.image_data is None or input_data.image_data.size == 0:
            result.error_message = "Input image data is None or empty."
            log.error(f"{log_prefix}: {result.error_message}")
            return result

        try:
            # --- Prepare arguments for save_image_variants ---

            # Get the filename-friendly base map type using the helper.
            # This assumes the save utility expects the friendly type; adjust if needed.
            base_map_type_friendly = get_filename_friendly_map_type(
                internal_map_type, input_data.file_type_defs
            )
            log.debug(f"{log_prefix}: Using filename-friendly base type '{base_map_type_friendly}' for saving.")

            save_args = {
                "source_image_data": input_data.image_data,
                "base_map_type": base_map_type_friendly,  # Use the friendly type
                "source_bit_depth_info": input_data.source_bit_depth_info,
                "image_resolutions": input_data.image_resolutions,
                "file_type_defs": input_data.file_type_defs,
                "output_format_8bit": input_data.output_format_8bit,
                "output_format_16bit_primary": input_data.output_format_16bit_primary,
                "output_format_16bit_fallback": input_data.output_format_16bit_fallback,
                "png_compression_level": input_data.png_compression_level,
                "jpg_quality": input_data.jpg_quality,
                "output_filename_pattern_tokens": input_data.output_filename_pattern_tokens,
                "output_filename_pattern": input_data.output_filename_pattern,
                "resolution_threshold_for_jpg": input_data.resolution_threshold_for_jpg,  # Added
            }

            log.debug(f"{log_prefix}: Calling save_image_variants utility.")
            saved_files_details: List[Dict] = isu.save_image_variants(**save_args)

            if saved_files_details:
                log.info(f"{log_prefix}: Save utility completed successfully. Saved {len(saved_files_details)} variants.")
                result.saved_files_details = saved_files_details
                result.status = "Processed"
                result.error_message = None
            else:
                # This might not be an error; perhaps no variants were configured?
                log.warning(f"{log_prefix}: Save utility returned no saved file details. This might be expected if no resolutions/formats matched.")
                result.saved_files_details = []
                result.status = "Processed (No Output)"  # Indicate processing happened but nothing was saved
                result.error_message = "Save utility reported no files saved (check configuration/resolutions)."

        except Exception as e:
            log.exception(f"{log_prefix}: Error calling or executing save_image_variants: {e}")
            result.status = "Failed"
            result.error_message = f"Save utility call failed: {e}"
            result.saved_files_details = []  # Ensure an empty list on error

        return result
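`SaveVariantsStage` forwards `output_format_8bit`, `output_format_16bit_primary`, and `output_format_16bit_fallback` without interpreting them. A hypothetical dispatcher showing how such a triple is typically consumed — this is an assumption about `save_image_variants`, whose body is not in this diff:

```python
def pick_output_format(source_bit_depth, fmt_8bit, fmt_16bit_primary, fmt_16bit_fallback,
                       fallback_needed=False):
    """Hypothetical format choice: 8-bit sources get the 8-bit format; deeper sources
    get the primary 16-bit format unless a fallback is required (e.g., the primary
    format cannot encode 16-bit data)."""
    if source_bit_depth is None or source_bit_depth <= 8:
        return fmt_8bit
    return fmt_16bit_fallback if fallback_needed else fmt_16bit_primary

print(pick_output_format(8, "jpg", "png", "tif"))                         # jpg
print(pick_output_format(16, "jpg", "png", "tif"))                        # png
print(pick_output_format(16, "jpg", "png", "tif", fallback_needed=True))  # tif
```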
@@ -56,5 +56,12 @@ class SupplierDeterminationStage(ProcessingStage):
        if 'supplier_error' in context.status_flags:
            del context.status_flags['supplier_error']

        # merged_image_tasks are loaded from app_settings.json into the Configuration object,
        # not from supplier-specific presets.
        # Ensure the attribute exists on context for PrepareProcessingItemsStage,
        # which will get it from context.config_obj.
        if not hasattr(context, 'merged_image_tasks'):
            context.merged_image_tasks = []

        return context
@@ -163,6 +163,37 @@ def calculate_target_dimensions(
|
|||||||
|
|
||||||
# --- Image Statistics ---
|
# --- Image Statistics ---
|
||||||
|
|
||||||
|
def get_image_bit_depth(image_path_str: str) -> Optional[int]:
|
||||||
|
"""
|
||||||
|
Determines the bit depth of an image file.
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
# Use IMREAD_UNCHANGED to preserve original bit depth
|
||||||
|
img = cv2.imread(image_path_str, cv2.IMREAD_UNCHANGED)
|
||||||
|
if img is None:
|
||||||
|
# logger.error(f"Failed to read image for bit depth: {image_path_str}") # Use print for utils
|
||||||
|
print(f"Warning: Failed to read image for bit depth: {image_path_str}")
|
||||||
|
return None
|
||||||
|
|
||||||
|
dtype_to_bit_depth = {
|
||||||
|
np.dtype('uint8'): 8,
|
||||||
|
np.dtype('uint16'): 16,
|
||||||
|
np.dtype('float32'): 32, # Typically for EXR etc.
|
||||||
|
np.dtype('int8'): 8, # Unlikely for images but good to have
|
||||||
|
np.dtype('int16'): 16, # Unlikely
|
||||||
|
# Add other dtypes if necessary
|
||||||
|
}
|
||||||
|
bit_depth = dtype_to_bit_depth.get(img.dtype)
|
||||||
|
if bit_depth is None:
|
||||||
|
# logger.warning(f"Unknown dtype {img.dtype} for image {image_path_str}, cannot determine bit depth.") # Use print for utils
|
||||||
|
print(f"Warning: Unknown dtype {img.dtype} for image {image_path_str}, cannot determine bit depth.")
|
||||||
|
pass # Return None
|
||||||
|
return bit_depth
|
||||||
|
except Exception as e:
|
||||||
|
# logger.error(f"Error getting bit depth for {image_path_str}: {e}") # Use print for utils
|
||||||
|
print(f"Error getting bit depth for {image_path_str}: {e}")
|
||||||
|
return None
|
||||||
|
|
||||||
def calculate_image_stats(image_data: np.ndarray) -> Optional[Dict]:
    """
    Calculates min, max, mean for a given numpy image array.
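The min/max/mean computation this docstring describes reduces to NumPy's built-in reductions; a small illustrative run on a hand-made array:

```python
import numpy as np

data = np.array([[0, 128], [255, 64]], dtype=np.uint8)
stats = {
    "min": int(data.min()),
    "max": int(data.max()),
    "mean": float(data.mean()),
}
print(stats)  # {'min': 0, 'max': 255, 'mean': 111.75}
```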
@@ -396,3 +427,89 @@ def save_image(
    except Exception:  # as e:
        # print(f"Error saving image {path_obj}: {e}")  # Optional: for debugging utils
        return False


# --- Common Map Transformations ---

import re
import logging

ipu_log = logging.getLogger(__name__)


def apply_common_map_transformations(
    image_data: np.ndarray,
    processing_map_type: str,  # The potentially suffixed internal type
    invert_normal_green: bool,
    file_type_definitions: Dict[str, Dict],
    log_prefix: str
) -> Tuple[np.ndarray, str, List[str]]:
    """
    Applies common in-memory transformations (Gloss-to-Rough, Normal Green Invert).
    Returns potentially transformed image data, potentially updated map type, and notes.
    """
    transformation_notes = []
    current_image_data = image_data  # Start with the original data
    updated_processing_map_type = processing_map_type  # Start with the original type

    # Gloss-to-Rough: check whether the base type is Gloss (before any suffix)
    if re.match(r"(MAP_GLOSS)", processing_map_type):
        ipu_log.info(f"{log_prefix}: Applying Gloss-to-Rough conversion.")
        inversion_succeeded = False
        if np.issubdtype(current_image_data.dtype, np.floating):
            current_image_data = 1.0 - current_image_data
            current_image_data = np.clip(current_image_data, 0.0, 1.0)
            ipu_log.debug(f"{log_prefix}: Inverted float image data for Gloss->Rough.")
            inversion_succeeded = True
        elif np.issubdtype(current_image_data.dtype, np.integer):
            max_val = np.iinfo(current_image_data.dtype).max
            current_image_data = max_val - current_image_data
            ipu_log.debug(f"{log_prefix}: Inverted integer image data (max_val: {max_val}) for Gloss->Rough.")
            inversion_succeeded = True
        else:
            ipu_log.error(f"{log_prefix}: Unsupported image data type {current_image_data.dtype} for GLOSS map. Cannot invert.")
            transformation_notes.append("Gloss-to-Rough FAILED (unsupported dtype)")

        if inversion_succeeded:
            # Update the type string itself (e.g., MAP_GLOSS-1 -> MAP_ROUGH-1)
            updated_processing_map_type = processing_map_type.replace("GLOSS", "ROUGH")
            ipu_log.info(f"{log_prefix}: Map type updated: '{processing_map_type}' -> '{updated_processing_map_type}'")
            transformation_notes.append("Gloss-to-Rough applied")

    # Normal Green Invert: check whether the base type is Normal (before any suffix)
    if re.match(r"(MAP_NRM)", processing_map_type) and invert_normal_green:
        ipu_log.info(f"{log_prefix}: Applying Normal Map Green Channel Inversion (Global Setting).")
        current_image_data = invert_normal_map_green_channel(current_image_data)
        transformation_notes.append("Normal Green Inverted (Global)")

    return current_image_data, updated_processing_map_type, transformation_notes


# --- Normal Map Utilities ---

def invert_normal_map_green_channel(normal_map: np.ndarray) -> np.ndarray:
    """
    Inverts the green channel of a normal map.
    Assumes the normal map is in RGB or RGBA format (channel order R, G, B, A).
    """
    if normal_map is None or len(normal_map.shape) < 3 or normal_map.shape[2] < 3:
        # Not a valid color image with at least 3 channels
        return normal_map

    # Ensure the data is mutable
    inverted_map = normal_map.copy()

    # Invert the green channel (index 1), handling different data types
    if np.issubdtype(inverted_map.dtype, np.floating):
        inverted_map[:, :, 1] = 1.0 - inverted_map[:, :, 1]
    elif np.issubdtype(inverted_map.dtype, np.integer):
        max_val = np.iinfo(inverted_map.dtype).max
        inverted_map[:, :, 1] = max_val - inverted_map[:, :, 1]
    else:
        # Unsupported dtype, return the original
        print(f"Warning: Unsupported dtype {inverted_map.dtype} for normal map green channel inversion.")
        return normal_map

    return inverted_map
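Both transformations above are value inversions against the dtype's maximum; a minimal sketch of the integer paths on tiny arrays:

```python
import numpy as np

# Gloss-to-Rough on integer data: invert every pixel against the dtype's max value
gloss = np.array([[0, 64], [128, 255]], dtype=np.uint8)
rough = np.iinfo(gloss.dtype).max - gloss
print(rough.tolist())  # [[255, 191], [127, 0]]

# Normal-map green-channel inversion (channel index 1 in RGB order);
# red and blue are left untouched
normal = np.full((1, 1, 3), 100, dtype=np.uint8)
inverted = normal.copy()
inverted[:, :, 1] = np.iinfo(inverted.dtype).max - inverted[:, :, 1]
print(inverted[0, 0].tolist())  # [100, 155, 100]
```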
processing/utils/image_saving_utils.py (new file, 297 lines)
@@ -0,0 +1,297 @@
import logging
import cv2
import numpy as np
from pathlib import Path
from typing import List, Dict, Any, Tuple, Optional

# Assuming image_processing_utils lives in the same utils package
try:
    from . import image_processing_utils as ipu
except ImportError:
    # Fallback for different import structures; adjust based on the actual project layout.
    # For this project structure, the relative import should work.
    logging.warning("Could not import image_processing_utils using relative path. Attempting absolute import.")
    try:
        from processing.utils import image_processing_utils as ipu
    except ImportError:
        logging.error("Could not import image_processing_utils.")
        ipu = None  # Handle the case where ipu is not available

logger = logging.getLogger(__name__)


def save_image_variants(
    source_image_data: np.ndarray,
    base_map_type: str,  # Filename-friendly map type
    source_bit_depth_info: List[Optional[int]],
    image_resolutions: Dict[str, int],
    file_type_defs: Dict[str, Dict[str, Any]],
    output_format_8bit: str,
    output_format_16bit_primary: str,
    output_format_16bit_fallback: str,
    png_compression_level: int,
    jpg_quality: int,
    output_filename_pattern_tokens: Dict[str, Any],  # Must include 'output_base_directory': Path and 'asset_name': str
    output_filename_pattern: str,
    resolution_threshold_for_jpg: Optional[int] = None,
) -> List[Dict[str, Any]]:
    """
    Centralizes image saving logic, generating and saving resolution variants
    according to configuration.

    Args:
        source_image_data (np.ndarray): High-res image data (in memory, potentially transformed).
        base_map_type (str): Final, filename-friendly map type (e.g., "COL", "ROUGH",
            "NORMAL", "MAP_NRMRGH").
        source_bit_depth_info (List[Optional[int]]): Original source bit depth(s)
            (e.g., [8], [16], [8, 16]). May contain None.
        image_resolutions (Dict[str, int]): Maps resolution keys (e.g., "4K") to
            max dimensions (e.g., 4096).
        file_type_defs (Dict[str, Dict[str, Any]]): Properties for map types,
            including 'bit_depth_rule'.
        output_format_8bit (str): File extension for 8-bit output (e.g., "jpg", "png").
        output_format_16bit_primary (str): Primary file extension for 16-bit output (e.g., "png", "tif").
        output_format_16bit_fallback (str): Fallback file extension for 16-bit output.
        png_compression_level (int): Compression level for PNG output (0-9).
        jpg_quality (int): Quality level for JPG output (0-100).
        output_filename_pattern_tokens (Dict[str, Any]): Tokens for filename pattern
            replacement. Must include 'output_base_directory' (Path) and 'asset_name' (str).
        output_filename_pattern (str): Pattern string for output filenames
            (e.g., "[assetname]_[maptype]_[resolution].[ext]").
        resolution_threshold_for_jpg (Optional[int]): If set, 8-bit PNG variants whose
            largest dimension exceeds this value are saved as JPG instead.

    Returns:
        List[Dict[str, Any]]: One dictionary per saved file.
            Example: [{'path': str, 'resolution_key': str, 'format': str,
                       'bit_depth': int, 'dimensions': (w, h)}, ...]
    """
    if ipu is None:
        logger.error("image_processing_utils is not available. Cannot save images.")
        return []

    saved_file_details = []
    source_h, source_w = source_image_data.shape[:2]
    source_max_dim = max(source_h, source_w)

    logger.info(f"SaveImageVariants: Starting for map type: {base_map_type}. Source shape: {source_image_data.shape}, Source bit depths: {source_bit_depth_info}")
    logger.debug(f"SaveImageVariants: Resolutions: {image_resolutions}, File Type Defs: {file_type_defs.keys()}, Output Formats: 8bit={output_format_8bit}, 16bit_pri={output_format_16bit_primary}, 16bit_fall={output_format_16bit_fallback}")
    logger.debug(f"SaveImageVariants: PNG Comp: {png_compression_level}, JPG Qual: {jpg_quality}")
    logger.debug(f"SaveImageVariants: Output Tokens: {output_filename_pattern_tokens}, Output Pattern: {output_filename_pattern}")
    logger.debug(f"SaveImageVariants: Received resolution_threshold_for_jpg: {resolution_threshold_for_jpg}")

    # 1. Determine the target bit depth
    target_bit_depth = 8  # Default
    bit_depth_rule = file_type_defs.get(base_map_type, {}).get('bit_depth_rule', 'force_8bit')
    if bit_depth_rule not in ('force_8bit', 'respect_inputs'):
        logger.warning(f"Unknown bit_depth_rule '{bit_depth_rule}' for map type '{base_map_type}'. Defaulting to 'force_8bit'.")
        bit_depth_rule = 'force_8bit'

    if bit_depth_rule == 'respect_inputs':
        # 16-bit if any source bit depth is > 8, ignoring None
        if any(depth is not None and depth > 8 for depth in source_bit_depth_info):
            target_bit_depth = 16
        else:
            target_bit_depth = 8
        logger.info(f"Bit depth rule 'respect_inputs' applied. Source bit depths: {source_bit_depth_info}. Target bit depth: {target_bit_depth}")
    else:  # force_8bit
        target_bit_depth = 8
        logger.info(f"Bit depth rule 'force_8bit' applied. Target bit depth: {target_bit_depth}")

    # 2. Determine the output file format
    if target_bit_depth == 8:
        output_ext = output_format_8bit.lstrip('.').lower()
    elif target_bit_depth == 16:
        # Prioritize the primary format; more complex logic may be needed later.
        output_ext = output_format_16bit_primary.lstrip('.').lower()
        # Basic fallback logic (can be expanded)
        if output_ext not in ('png', 'tif'):  # Common 16-bit formats
            output_ext = output_format_16bit_fallback.lstrip('.').lower()
            logger.warning(f"Primary 16-bit format '{output_format_16bit_primary}' might not be suitable. Using fallback '{output_format_16bit_fallback}'.")
    else:
        logger.error(f"Unsupported target bit depth: {target_bit_depth}. Defaulting to 8-bit format.")
        output_ext = output_format_8bit.lstrip('.').lower()

    current_output_ext = output_ext  # Extension chosen from bit depth alone

    logger.info(f"SaveImageVariants: Determined target bit depth: {target_bit_depth}, Initial output format: {current_output_ext} for map type {base_map_type}")

    # 3. Generate and save resolution variants, largest first
    sorted_resolutions = sorted(image_resolutions.items(), key=lambda item: item[1], reverse=True)

    for res_key, res_max_dim in sorted_resolutions:
        logger.info(f"SaveImageVariants: Processing variant {res_key} ({res_max_dim}px) for {base_map_type}")

        # Prevent upscaling: skip variants larger than the source's largest dimension
        if res_max_dim > source_max_dim:
            logger.info(f"SaveImageVariants: Skipping variant {res_key} ({res_max_dim}px) for {base_map_type} because target resolution is larger than source ({source_max_dim}px).")
            continue

        # Calculate target dimensions (equal to or smaller than source)
        if source_max_dim == res_max_dim:
            target_w_res, target_h_res = source_w, source_h
            logger.info(f"SaveImageVariants: Using source resolution ({source_w}x{source_h}) for {res_key} variant of {base_map_type} as target matches source.")
        else:  # Downscale, maintaining aspect ratio
            aspect_ratio = source_w / source_h
            if source_w >= source_h:  # >= handles square images correctly
                target_w_res = res_max_dim
                target_h_res = max(1, int(res_max_dim / aspect_ratio))  # Height at least 1
            else:
                target_h_res = res_max_dim
                target_w_res = max(1, int(res_max_dim * aspect_ratio))  # Width at least 1
            logger.info(f"SaveImageVariants: Calculated downscale for {base_map_type} {res_key}: from ({source_w}x{source_h}) to ({target_w_res}x{target_h_res})")

        # Resize source_image_data only if necessary
        if (target_w_res, target_h_res) == (source_w, source_h):
            variant_data = source_image_data.copy()  # Copy to avoid modifying the original
            logger.debug(f"SaveImageVariants: No resize needed for {base_map_type} {res_key}, using copy of source data.")
        else:
            interpolation_method = cv2.INTER_AREA  # Good for downscaling
            try:
                variant_data = ipu.resize_image(source_image_data, target_w_res, target_h_res, interpolation=interpolation_method)
                if variant_data is None:  # Check whether the resize failed
                    raise ValueError("ipu.resize_image returned None")
                logger.debug(f"SaveImageVariants: Resized variant data shape for {base_map_type} {res_key}: {variant_data.shape}")
            except Exception as e:
                logger.error(f"SaveImageVariants: Error resizing image for {base_map_type} {res_key} variant: {e}")
                continue  # Skip this variant if resizing fails

        # Filename construction
        current_tokens = output_filename_pattern_tokens.copy()
        current_tokens['maptype'] = base_map_type
        current_tokens['resolution'] = res_key

        # Determine the final extension for this variant, considering the JPG threshold
        final_variant_ext = current_output_ext

        cond_bit_depth = target_bit_depth == 8
        cond_threshold_not_none = resolution_threshold_for_jpg is not None
        cond_res_exceeded = False
        if cond_threshold_not_none:  # Avoid the comparison if the threshold is None
            cond_res_exceeded = max(target_w_res, target_h_res) > resolution_threshold_for_jpg
        cond_is_png = current_output_ext == 'png'

        logger.debug(f"SaveImageVariants: JPG threshold check for {base_map_type} {res_key}: bit_depth_ok={cond_bit_depth}, threshold_set={cond_threshold_not_none}, res_exceeded={cond_res_exceeded}, is_png={cond_is_png}")

        if cond_bit_depth and cond_threshold_not_none and cond_res_exceeded and cond_is_png:
            final_variant_ext = 'jpg'
            logger.info(f"SaveImageVariants: Overriding 8-bit PNG to JPG for {base_map_type} {res_key} due to resolution {max(target_w_res, target_h_res)}px > threshold {resolution_threshold_for_jpg}px.")

        current_tokens['ext'] = final_variant_ext

        try:
            # Replace placeholders in the pattern
            filename = output_filename_pattern
            for token, value in current_tokens.items():
                # Ensure the value is a string for replacement; Path objects handled below
                filename = filename.replace(f"[{token}]", str(value))

            # Construct the full output path
            output_base_directory = current_tokens.get('output_base_directory')
            if not isinstance(output_base_directory, Path):
                logger.error(f"'output_base_directory' token is missing or not a Path object: {output_base_directory}. Cannot save file.")
                continue  # Skip this variant

            output_path = output_base_directory / filename
            logger.info(f"SaveImageVariants: Constructed output path for {base_map_type} {res_key}: {output_path}")

            # Ensure the parent directory exists
            output_path.parent.mkdir(parents=True, exist_ok=True)
            logger.debug(f"SaveImageVariants: Ensured directory exists for {base_map_type} {res_key}: {output_path.parent}")
        except Exception as e:
            logger.error(f"SaveImageVariants: Error constructing filepath for {base_map_type} {res_key} variant: {e}")
            continue  # Skip this variant if path construction fails

        # Prepare format-specific save parameters
        save_params_cv2 = []
        if final_variant_ext == 'jpg':
            save_params_cv2 += [cv2.IMWRITE_JPEG_QUALITY, jpg_quality]
            logger.debug(f"SaveImageVariants: Using JPG quality: {jpg_quality} for {base_map_type} {res_key}")
        elif final_variant_ext == 'png':
            save_params_cv2 += [cv2.IMWRITE_PNG_COMPRESSION, png_compression_level]
            logger.debug(f"SaveImageVariants: Using PNG compression level: {png_compression_level} for {base_map_type} {res_key}")
        # Add other format-specific parameters if needed (e.g., TIFF compression)

        # Bit depth conversion is handled by ipu.save_image via output_dtype_target
        image_data_for_save = variant_data  # Use the resized variant data directly

        # Determine the target dtype for ipu.save_image
        output_dtype_for_save: Optional[np.dtype] = None
        if target_bit_depth == 8:
            output_dtype_for_save = np.uint8
        elif target_bit_depth == 16:
            output_dtype_for_save = np.uint16
        # Add other target dtypes (e.g., np.float32 for 32-bit EXR) if necessary

        # Save the variant; ipu.save_image is expected to handle the actual cv2.imwrite call
        try:
            logger.debug(f"SaveImageVariants: Attempting to save {base_map_type} {res_key} to {output_path} with params {save_params_cv2}, target_dtype: {output_dtype_for_save}")
            success = ipu.save_image(
                str(output_path),
                image_data_for_save,
                output_dtype_target=output_dtype_for_save,  # Pass the target dtype
                params=save_params_cv2
            )
            if success:
                logger.info(f"SaveImageVariants: Successfully saved {base_map_type} {res_key} variant to {output_path}")
                # Collect details for the returned list
                saved_file_details.append({
                    'path': str(output_path),
                    'resolution_key': res_key,
                    'format': final_variant_ext,  # The actual saved format
                    'bit_depth': target_bit_depth,
                    'dimensions': (target_w_res, target_h_res)
                })
            else:
                logger.error(f"SaveImageVariants: Failed to save {base_map_type} {res_key} variant to {output_path} (ipu.save_image returned False)")
        except Exception as e:
            logger.error(f"SaveImageVariants: Error during ipu.save_image for {base_map_type} {res_key} variant to {output_path}: {e}", exc_info=True)
            # Continue to the next variant even if one fails

        # Release the in-memory variant data
        del variant_data
        del image_data_for_save

    # 4. Return the list of saved file details
    logger.info(f"Finished saving variants for map type: {base_map_type}. Saved {len(saved_file_details)} variants.")
    return saved_file_details


# Optional helper functions (can be added here if needed)
# def _determine_target_bit_depth(...): ...
# def _determine_output_format(...): ...
# def _construct_variant_filepath(...): ...
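Two pieces of `save_image_variants` are self-contained enough to illustrate in isolation: the aspect-ratio-preserving downscale calculation and the `[token]` replacement in the filename pattern. A minimal sketch (the asset name and token values are invented for illustration):

```python
# Downscale dimension calculation, as in the variant loop above
source_w, source_h, res_max_dim = 4096, 2048, 1024
aspect_ratio = source_w / source_h
if source_w >= source_h:
    target_w = res_max_dim
    target_h = max(1, int(res_max_dim / aspect_ratio))
else:
    target_h = res_max_dim
    target_w = max(1, int(res_max_dim * aspect_ratio))
print(target_w, target_h)  # 1024 512

# Token replacement for the output filename pattern
pattern = "[assetname]_[maptype]_[resolution].[ext]"
tokens = {"assetname": "Bricks01", "maptype": "COL", "resolution": "1K", "ext": "jpg"}
filename = pattern
for token, value in tokens.items():
    filename = filename.replace(f"[{token}]", str(value))
print(filename)  # Bricks01_COL_1K.jpg
```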
@@ -7,7 +7,7 @@ import tempfile
 import logging
 from pathlib import Path
 from typing import List, Dict, Tuple, Optional, Set
+log = logging.getLogger(__name__)
 # Attempt to import image processing libraries
 try:
     import cv2
@@ -21,7 +21,6 @@ except ImportError as e:
     np = None
-
 try:
     from configuration import Configuration, ConfigurationError
     from rule_structure import SourceRule, AssetRule, FileRule
@@ -50,6 +49,7 @@ if not log.hasHandlers():
 from processing.pipeline.orchestrator import PipelineOrchestrator
 # from processing.pipeline.asset_context import AssetProcessingContext  # AssetProcessingContext is used by the orchestrator
+# Import stages that will be passed to the orchestrator (outer stages)
 from processing.pipeline.stages.supplier_determination import SupplierDeterminationStage
 from processing.pipeline.stages.asset_skip_logic import AssetSkipLogicStage
 from processing.pipeline.stages.metadata_initialization import MetadataInitializationStage
@@ -57,8 +57,8 @@ from processing.pipeline.stages.file_rule_filter import FileRuleFilterStage
 from processing.pipeline.stages.gloss_to_rough_conversion import GlossToRoughConversionStage
 from processing.pipeline.stages.alpha_extraction_to_mask import AlphaExtractionToMaskStage
 from processing.pipeline.stages.normal_map_green_channel import NormalMapGreenChannelStage
-from processing.pipeline.stages.individual_map_processing import IndividualMapProcessingStage
+# Removed: from processing.pipeline.stages.individual_map_processing import IndividualMapProcessingStage
-from processing.pipeline.stages.map_merging import MapMergingStage
+# Removed: from processing.pipeline.stages.map_merging import MapMergingStage
 from processing.pipeline.stages.metadata_finalization_save import MetadataFinalizationAndSaveStage
 from processing.pipeline.stages.output_organization import OutputOrganizationStage
@@ -94,22 +94,33 @@ class ProcessingEngine:
         self.loaded_data_cache: dict = {}  # Cache for loaded/resized data within a single process call

         # --- Pipeline Orchestrator Setup ---
-        self.stages = [
+        # Define pre-item and post-item processing stages
+        pre_item_stages = [
             SupplierDeterminationStage(),
             AssetSkipLogicStage(),
             MetadataInitializationStage(),
             FileRuleFilterStage(),
-            GlossToRoughConversionStage(),
-            AlphaExtractionToMaskStage(),
-            NormalMapGreenChannelStage(),
-            IndividualMapProcessingStage(),
-            MapMergingStage(),
-            MetadataFinalizationAndSaveStage(),
-            OutputOrganizationStage(),
+            GlossToRoughConversionStage(),  # Assumed to run on context.files_to_process if needed by old logic
+            AlphaExtractionToMaskStage(),   # Same assumption as above
+            NormalMapGreenChannelStage(),   # Same assumption as above
+            # Note: the new RegularMapProcessorStage and MergedTaskProcessorStage handle their own
+            # transformations on the specific items they process. These global transformation stages
+            # might need review if their logic is now fully encapsulated in the new item-specific
+            # processor stages. For now, keeping them as pre-stages.
         ]
+
+        post_item_stages = [
+            OutputOrganizationStage(),           # Must run after all items are saved to temp
+            MetadataFinalizationAndSaveStage(),  # Must run after output organization to have final paths
+        ]

         try:
-            self.pipeline_orchestrator = PipelineOrchestrator(config_obj=self.config_obj, stages=self.stages)
-            log.info("PipelineOrchestrator initialized successfully in ProcessingEngine.")
+            self.pipeline_orchestrator = PipelineOrchestrator(
+                config_obj=self.config_obj,
+                pre_item_stages=pre_item_stages,
+                post_item_stages=post_item_stages
+            )
+            log.info("PipelineOrchestrator initialized successfully in ProcessingEngine with pre and post stages.")
         except Exception as e:
             log.error(f"Failed to initialize PipelineOrchestrator in ProcessingEngine: {e}", exc_info=True)
             self.pipeline_orchestrator = None  # Ensure it's None if init fails
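The split into pre-item and post-item stage lists implies an execution order: pre-item stages once per asset, then per-item processing, then post-item stages. A hypothetical sketch of that flow (stub names and the `execute` signature are invented; this is not the project's actual `PipelineOrchestrator` API):

```python
class StubStage:
    """Minimal stand-in for a pipeline stage that records its execution."""
    def __init__(self, name: str):
        self.name = name

    def execute(self, context: dict) -> dict:
        context.setdefault("trace", []).append(self.name)
        return context


def run_pipeline(context, pre_item_stages, items, process_item, post_item_stages):
    # Pre-item stages run once, before any item is processed
    for stage in pre_item_stages:
        context = stage.execute(context)
    # Each work item is processed individually
    for item in items:
        process_item(context, item)
    # Post-item stages run once, after all items are saved
    for stage in post_item_stages:
        context = stage.execute(context)
    return context


ctx = run_pipeline(
    {},
    [StubStage("filter")],
    ["COL", "NRM"],
    lambda c, i: c.setdefault("trace", []).append(f"item:{i}"),
    [StubStage("organize"), StubStage("finalize")],
)
print(ctx["trace"])  # ['filter', 'item:COL', 'item:NRM', 'organize', 'finalize']
```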
@@ -163,6 +163,39 @@ def sanitize_filename(name: str) -> str:
    if not name: name = "invalid_name"
    return name


def get_filename_friendly_map_type(internal_map_type: str, file_type_definitions: Optional[Dict[str, Dict]]) -> str:
    """Derives a filename-friendly map type from the internal map type."""
    filename_friendly_map_type = internal_map_type  # Fallback
    if not file_type_definitions or not isinstance(file_type_definitions, dict):
        logger.warning(f"Filename-friendly lookup: FILE_TYPE_DEFINITIONS not available or invalid. Falling back to internal type: {internal_map_type}")
        return filename_friendly_map_type

    base_map_key_val = None
    suffix_part = ""
    # Sort keys by length descending to match the longest prefix first (e.g., MAP_ROUGHNESS before MAP_ROUGH)
    sorted_known_base_keys = sorted(file_type_definitions.keys(), key=len, reverse=True)

    for known_key in sorted_known_base_keys:
        if internal_map_type.startswith(known_key):
            base_map_key_val = known_key
            suffix_part = internal_map_type[len(known_key):]
            break

    if base_map_key_val:
        definition = file_type_definitions.get(base_map_key_val)
        if definition and isinstance(definition, dict):
            standard_type_alias = definition.get("standard_type")
            if standard_type_alias and isinstance(standard_type_alias, str) and standard_type_alias.strip():
                filename_friendly_map_type = standard_type_alias.strip() + suffix_part
                logger.debug(f"Filename-friendly lookup: Transformed '{internal_map_type}' -> '{filename_friendly_map_type}'")
            else:
                logger.warning(f"Filename-friendly lookup: Standard type alias for '{base_map_key_val}' is missing or invalid. Falling back.")
        else:
            logger.warning(f"Filename-friendly lookup: No valid definition for '{base_map_key_val}'. Falling back.")
    else:
        logger.warning(f"Filename-friendly lookup: Could not parse base key from '{internal_map_type}'. Falling back.")

    return filename_friendly_map_type
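The longest-prefix-first matching is the subtle part of `get_filename_friendly_map_type`; a minimal sketch with hypothetical FILE_TYPE_DEFINITIONS entries showing why the keys must be sorted by length descending:

```python
# Hypothetical definitions: MAP_ROUGH is a prefix of MAP_ROUGHNESS, so sorting
# by key length descending ensures the longer key wins the match.
defs = {
    "MAP_ROUGH": {"standard_type": "ROUGH"},
    "MAP_ROUGHNESS": {"standard_type": "ROUGHNESS"},
}
internal = "MAP_ROUGHNESS-1"
result = internal  # Fallback to the internal type
for key in sorted(defs, key=len, reverse=True):
    if internal.startswith(key):
        # Keep the suffix (e.g. "-1") and swap in the filename-friendly alias
        result = defs[key]["standard_type"] + internal[len(key):]
        break
print(result)  # ROUGHNESS-1
```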
# --- Basic Unit Tests ---
if __name__ == "__main__":
    print("Running basic tests for path_utils.generate_path_from_pattern...")