# Developer Guide: Processing Pipeline
This document details the step-by-step technical process executed by the asset processing pipeline, which is initiated by the [`ProcessingEngine`](processing_engine.py:73) class (`processing_engine.py`) and orchestrated by the [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36) (`processing/pipeline/orchestrator.py`).
The [`ProcessingEngine.process()`](processing_engine.py:131) method serves as the main entry point. It initializes a [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36) instance, providing it with the application's [`Configuration`](configuration.py:68) object and predefined lists of pre-item and post-item processing stages. The [`PipelineOrchestrator.process_source_rule()`](processing/pipeline/orchestrator.py:95) method then manages the execution of these stages for each asset defined in the input [`SourceRule`](rule_structure.py:40).
A crucial component in this architecture is the [`AssetProcessingContext`](processing/pipeline/asset_context.py:86) (`processing/pipeline/asset_context.py`). An instance of this dataclass is created for each [`AssetRule`](rule_structure.py:22) being processed. It acts as a stateful container, carrying all relevant data (source files, rules, configuration, intermediate results, metadata) and is passed sequentially through each stage. Each stage can read from and write to the context, allowing data to flow and be modified throughout the pipeline.
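As a rough illustration, the context can be pictured as a dataclass of the following shape. This is a sketch only: the field names below are assumptions inferred from this guide, not the actual definition in `processing/pipeline/asset_context.py`.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class AssetProcessingContext:
    """Sketch of the stateful container passed through every stage."""
    asset_rule: Any = None                  # the AssetRule being processed
    config_obj: Any = None                  # the application Configuration
    effective_supplier: Optional[str] = None
    files_to_process: list = field(default_factory=list)       # filtered FileRules
    processing_items: list = field(default_factory=list)       # FileRules + merge tasks
    intermediate_results: dict = field(default_factory=dict)   # per-item stage outputs
    processed_maps_details: dict = field(default_factory=dict)
    merged_maps_details: dict = field(default_factory=dict)
    asset_metadata: dict = field(default_factory=dict)
    status_flags: dict = field(default_factory=dict)           # e.g. skip_asset
```

Because each stage reads from and writes to the same instance, the mutable-default fields use `default_factory` so that every asset gets its own fresh containers.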
The pipeline execution for each asset follows this general flow:
1. **Pre-Item Stages:** A sequence of stages executed once per asset before the core item processing loop. These stages typically perform initial setup, filtering, and asset-level transformations.
2. **Core Item Processing Loop:** The [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36) iterates through a list of "processing items" (individual files or merge tasks) prepared by a dedicated stage. For each item, a sequence of core processing stages is executed.
3. **Post-Item Stages:** A sequence of stages executed once per asset after the core item processing loop is complete. These stages handle final tasks like organizing output files and saving metadata.
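A minimal sketch of this three-phase flow (the function and parameter names here are illustrative, not the actual `PipelineOrchestrator` API):

```python
def run_asset_pipeline(context, pre_item_stages, core_stages, post_item_stages):
    """Illustrative per-asset flow: pre-item stages, then the core item
    loop, then post-item stages. Each stage mutates the shared context."""
    for stage in pre_item_stages:
        stage(context)
        if context.get("status_flags", {}).get("skip_asset"):
            return context  # the skip flag halts further work on this asset
    for item in context.get("processing_items", []):
        for stage in core_stages:
            stage(context, item)
    for stage in post_item_stages:
        stage(context)
    return context
```

The key design point, per the description above, is that stages never call each other directly; all communication happens through the shared context.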
## Pipeline Stages
The stages are executed in the following order for each asset:
### Pre-Item Stages
These stages are executed sequentially once for each asset before the core item processing loop begins.
1. **[`SupplierDeterminationStage`](processing/pipeline/stages/supplier_determination.py:6)** (`processing/pipeline/stages/supplier_determination.py`):
* **Responsibility**: Determines the effective supplier for the asset based on the [`SourceRule`](rule_structure.py:40)'s `supplier_override`, `supplier_identifier`, and validation against configured suppliers.
* **Context Interaction**: Sets `context.effective_supplier` and may set a `supplier_error` flag in `context.status_flags`.
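The selection logic can be sketched like this (an assumption-laden sketch, not the stage's actual implementation: an explicit override wins, otherwise the identifier is used, and either way the result is validated against the configured suppliers):

```python
def determine_effective_supplier(supplier_override, supplier_identifier,
                                 configured_suppliers):
    """Sketch: pick override > identifier, validate against configuration.
    Returns (supplier, error); exactly one of the two is None."""
    candidate = supplier_override or supplier_identifier
    if candidate and candidate in configured_suppliers:
        return candidate, None
    return None, f"supplier_error: unknown supplier {candidate!r}"
```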
2. **[`AssetSkipLogicStage`](processing/pipeline/stages/asset_skip_logic.py:5)** (`processing/pipeline/stages/asset_skip_logic.py`):
* **Responsibility**: Checks if the entire asset should be skipped based on conditions like a missing/invalid supplier, a "SKIP" status in asset metadata, or if the asset is already processed and overwrite is disabled.
* **Context Interaction**: Sets the `skip_asset` flag and `skip_reason` in `context.status_flags` if the asset should be skipped.
3. **[`MetadataInitializationStage`](processing/pipeline/stages/metadata_initialization.py:81)** (`processing/pipeline/stages/metadata_initialization.py`):
* **Responsibility**: Initializes the `context.asset_metadata` dictionary with base information derived from the [`AssetRule`](rule_structure.py:22), [`SourceRule`](rule_structure.py:40), and [`Configuration`](configuration.py:68). This includes asset name, IDs, source/output paths, timestamps, and initial status.
* **Context Interaction**: Populates `context.asset_metadata` and initializes empty dictionaries for `processed_maps_details` and `merged_maps_details`.
4. **[`FileRuleFilterStage`](processing/pipeline/stages/file_rule_filter.py:10)** (`processing/pipeline/stages/file_rule_filter.py`):
* **Responsibility**: Filters the [`FileRule`](rule_structure.py:5) objects associated with the asset to determine which individual files should be considered for processing. It identifies and excludes files matching "FILE_IGNORE" rules based on their `item_type`.
* **Context Interaction**: Populates `context.files_to_process` with the list of [`FileRule`](rule_structure.py:5) objects that are not ignored.
5. **[`GlossToRoughConversionStage`](processing/pipeline/stages/gloss_to_rough_conversion.py:15)** (`processing/pipeline/stages/gloss_to_rough_conversion.py`):
* **Responsibility**: Identifies processed maps in `context.processed_maps_details` whose `internal_map_type` starts with "MAP_GLOSS". If found, it loads the temporary image data, inverts it using the shared utility function [`apply_common_map_transformations`](processing/utils/image_processing_utils.py), and saves a new temporary roughness map ("MAP_ROUGH"). It then updates the corresponding details in `context.processed_maps_details` (setting `internal_map_type` to "MAP_ROUGH") and the relevant [`FileRule`](rule_structure.py:5) in `context.files_to_process` (setting `item_type` to "MAP_ROUGH").
* **Context Interaction**: Reads from and updates `context.processed_maps_details` (specifically `internal_map_type` and `temp_processed_file`) and `context.files_to_process` (specifically `item_type`).
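Conceptually the conversion is a per-pixel inversion. For 8-bit grayscale data it amounts to the following (an illustrative sketch, not the actual `apply_common_map_transformations` implementation):

```python
def gloss_to_rough_8bit(pixels):
    """Invert an 8-bit grayscale gloss map into a roughness map:
    rough = 255 - gloss, applied to every pixel."""
    return [[255 - value for value in row] for row in pixels]
```

Since the operation is its own inverse, applying it twice returns the original gloss values.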
6. **[`AlphaExtractionToMaskStage`](processing/pipeline/stages/alpha_extraction_to_mask.py:16)** (`processing/pipeline/stages/alpha_extraction_to_mask.py`):
* **Responsibility**: If no mask map is explicitly defined for the asset (as a [`FileRule`](rule_structure.py:5) with `item_type="MAP_MASK"`), this stage searches `context.processed_maps_details` for a suitable source map (e.g., a "MAP_COL" with an alpha channel, based on its `internal_map_type`). If found, it extracts the alpha channel, saves it as a new temporary mask map, and adds a new [`FileRule`](rule_structure.py:5) (with `item_type="MAP_MASK"`) and corresponding details (with `internal_map_type="MAP_MASK"`) to the context.
* **Context Interaction**: Reads from `context.processed_maps_details`, adds a new [`FileRule`](rule_structure.py:5) to `context.files_to_process`, and adds a new entry to `context.processed_maps_details` (setting `internal_map_type`).
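The extraction itself is straightforward: pull the fourth channel out of each RGBA pixel to form a grayscale mask (a minimal sketch, assuming pixels are represented as RGBA tuples):

```python
def extract_alpha_as_mask(rgba_rows):
    """Build a grayscale opacity mask from the alpha channel of
    RGBA pixel tuples (illustrative sketch)."""
    return [[pixel[3] for pixel in row] for row in rgba_rows]
```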
7. **[`NormalMapGreenChannelStage`](processing/pipeline/stages/normal_map_green_channel.py:14)** (`processing/pipeline/stages/normal_map_green_channel.py`):
* **Responsibility**: Identifies processed normal maps in `context.processed_maps_details` (those with an `internal_map_type` starting with "MAP_NRM"). If the global `invert_normal_map_green_channel_globally` configuration is true, it loads the temporary image data, inverts the green channel using the shared utility function [`apply_common_map_transformations`](processing/utils/image_processing_utils.py), saves a new temporary modified normal map, and updates the `temp_processed_file` path in `context.processed_maps_details`.
* **Context Interaction**: Reads from and updates `context.processed_maps_details` (specifically `temp_processed_file` and `notes`).
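The green-channel flip (the usual DirectX vs. OpenGL normal-map convention difference) can be sketched for 8-bit RGB data as follows; this is illustrative only, not the stage's actual code:

```python
def invert_green_channel(rgb_rows):
    """Flip the G channel of an 8-bit RGB normal map, leaving R and B
    untouched (illustrative sketch of the green-channel inversion)."""
    return [[(r, 255 - g, b) for (r, g, b) in row] for row in rgb_rows]
```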
### Core Item Processing Loop
The [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36) iterates through the `context.processing_items` list (populated by the [`PrepareProcessingItemsStage`](processing/pipeline/stages/prepare_processing_items.py:10)). For each item (either a [`FileRule`](rule_structure.py:5) for a regular map or a [`MergeTaskDefinition`](processing/pipeline/asset_context.py:16) for a merged map), the following stages are executed sequentially:
1. **[`PrepareProcessingItemsStage`](processing/pipeline/stages/prepare_processing_items.py:10)** (`processing/pipeline/stages/prepare_processing_items.py`):
* **Responsibility**: (Executed once before the loop) Creates the `context.processing_items` list by combining [`FileRule`](rule_structure.py:5)s from `context.files_to_process` and [`MergeTaskDefinition`](processing/pipeline/asset_context.py:16)s derived from the global `map_merge_rules` configuration. It correctly accesses `map_merge_rules` from `context.config_obj` and validates each merge rule for the presence of `output_map_type` and a dictionary for `inputs`. Initializes `context.intermediate_results`.
* **Context Interaction**: Reads from `context.files_to_process` and `context.config_obj` (accessing `map_merge_rules`). Populates `context.processing_items` and initializes `context.intermediate_results`.
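The item-list construction and merge-rule validation described above can be sketched as follows (names and rule shapes are assumptions; merge rules are treated here as plain dicts):

```python
def prepare_processing_items(files_to_process, map_merge_rules):
    """Sketch: keep every FileRule as an item, and append a merge task for
    each merge rule that has an `output_map_type` and a dict of `inputs`.
    Invalid rules are dropped rather than failing the whole asset."""
    items = list(files_to_process)
    for rule in map_merge_rules:
        if "output_map_type" in rule and isinstance(rule.get("inputs"), dict):
            items.append({"merge_task": rule})
    return items
```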
2. **[`RegularMapProcessorStage`](processing/pipeline/stages/regular_map_processor.py:18)** (`processing/pipeline/stages/regular_map_processor.py`):
* **Responsibility**: (Executed per [`FileRule`](rule_structure.py:5) item) Checks if the `FileRule.item_type` starts with "MAP_". If not, the item is skipped. Otherwise, it loads the image data for the file, determines its potentially suffixed internal map type (e.g., "MAP_COL-1"), applies in-memory transformations (Gloss-to-Rough, Normal Green Invert) using the shared utility function [`apply_common_map_transformations`](processing/utils/image_processing_utils.py), and returns the processed image data and details in a [`ProcessedRegularMapData`](processing/pipeline/asset_context.py:23) object. The `internal_map_type` in the output reflects any transformations (e.g., "MAP_GLOSS" becomes "MAP_ROUGH").
* **Context Interaction**: Reads from the input [`FileRule`](rule_structure.py:5) (checking `item_type`) and [`Configuration`](configuration.py:68). Returns a [`ProcessedRegularMapData`](processing/pipeline/asset_context.py:23) object which is stored in `context.intermediate_results`.
3. **[`MergedTaskProcessorStage`](processing/pipeline/stages/merged_task_processor.py:68)** (`processing/pipeline/stages/merged_task_processor.py`):
* **Responsibility**: (Executed per [`MergeTaskDefinition`](processing/pipeline/asset_context.py:16) item) Validates that all input map types specified in the merge rule start with "MAP_"; if any does not, the task fails. It dynamically loads input images by looking up the required input map types (e.g., "MAP_NRM") in `context.processed_maps_details` and using the temporary file paths from their `saved_files_info`. It then applies in-memory transformations to the inputs using [`apply_common_map_transformations`](processing/utils/image_processing_utils.py), handles dimension mismatches (with fallback creation if configured and `source_dimensions` are available), performs the channel merging operation, and returns the merged image data and details in a [`ProcessedMergedMapData`](processing/pipeline/asset_context.py:35) object. The `output_map_type` of the merged map must also be "MAP_" prefixed in the configuration.
* **Context Interaction**: Reads from the input [`MergeTaskDefinition`](processing/pipeline/asset_context.py:16) (checking input map types), `context.workspace_path`, `context.processed_maps_details` (for input image data), and [`Configuration`](configuration.py:68). Returns a [`ProcessedMergedMapData`](processing/pipeline/asset_context.py:35) object which is stored in `context.intermediate_results`.
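The core merge operation is channel packing: interleaving several same-sized grayscale maps into the channels of one output image. A minimal sketch (assuming list-of-rows grayscale inputs; the real stage works on loaded image data):

```python
def pack_channels(red, green, blue):
    """Channel-packing sketch: combine three equally-sized grayscale maps
    into one RGB map, failing loudly on a dimension mismatch."""
    if not (len(red) == len(green) == len(blue)):
        raise ValueError("dimension mismatch between input maps")
    return [list(zip(r_row, g_row, b_row))
            for r_row, g_row, b_row in zip(red, green, blue)]
```

The explicit dimension check mirrors the stage's mismatch handling, which in the real pipeline may fall back to creating a substitute input instead of failing outright.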
4. **[`InitialScalingStage`](processing/pipeline/stages/initial_scaling.py:14)** (`processing/pipeline/stages/initial_scaling.py`):
* **Responsibility**: (Executed per item) Applies initial scaling (e.g., Power-of-Two downscaling) to the image data from the previous processing stage based on the `initial_scaling_mode` configuration.
* **Context Interaction**: Takes a [`InitialScalingInput`](processing/pipeline/asset_context.py:46) (containing image data and config) and returns an [`InitialScalingOutput`](processing/pipeline/asset_context.py:54) object, which updates the item's entry in `context.intermediate_results`.
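A plausible helper for Power-of-Two downscaling picks the largest power of two not exceeding a dimension (a sketch of the idea only; the actual scaling mode logic lives in the stage and its configuration):

```python
def nearest_lower_power_of_two(n: int) -> int:
    """Largest power of two <= n, a natural POT downscaling target
    for one image dimension (illustrative)."""
    power = 1
    while power * 2 <= n:
        power *= 2
    return power
```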
5. **[`SaveVariantsStage`](processing/pipeline/stages/save_variants.py:15)** (`processing/pipeline/stages/save_variants.py`):
* **Responsibility**: (Executed per item) Takes the final processed image data (potentially scaled) and configuration, and calls a utility to save the image to temporary files in various resolutions and formats as defined by the configuration.
* **Context Interaction**: Takes a [`SaveVariantsInput`](processing/pipeline/asset_context.py:61) object (which includes the "MAP_" prefixed `internal_map_type`). It uses the `get_filename_friendly_map_type` utility to convert this to a "standard type" (e.g., "COL") for output naming. Returns a [`SaveVariantsOutput`](processing/pipeline/asset_context.py:79) object containing details about the saved temporary files. The orchestrator stores these details, including the original "MAP_" prefixed `internal_map_type`, in `context.processed_maps_details` for the item.
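The naming conversion can be pictured as stripping the "MAP_" prefix and any "-N" variant suffix. This is a guess at the behaviour described here, not the actual `get_filename_friendly_map_type` implementation:

```python
def get_filename_friendly_map_type(internal_map_type: str) -> str:
    """Guessed sketch: "MAP_COL-1" -> "COL", "MAP_NRM" -> "NRM".
    The real utility in the codebase may differ."""
    base = internal_map_type
    if base.startswith("MAP_"):
        base = base[len("MAP_"):]
    return base.split("-", 1)[0]
```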
### Post-Item Stages
These stages are executed sequentially once for each asset after the core item processing loop has finished for all items.
1. **[`OutputOrganizationStage`](processing/pipeline/stages/output_organization.py:14)** (`processing/pipeline/stages/output_organization.py`):
* **Responsibility**: Determines the final output paths for all processed maps (including variants) and extra files based on configured patterns. It copies the temporary files generated by the core stages to these final destinations, creating directories as needed and respecting overwrite settings.
* **Context Interaction**: Reads from `context.processed_maps_details` (using the "MAP_" prefixed `internal_map_type` to get the "standard type" via `get_filename_friendly_map_type` for output naming), `context.files_to_process` (for 'EXTRA' files), `context.output_base_path`, and [`Configuration`](configuration.py:68). Updates entries in `context.processed_maps_details` with final paths and organization status. Populates `context.asset_metadata['final_output_files']`. (Note: Legacy code for `'Processed_With_Variants'` status has been removed from this stage).
2. **[`MetadataFinalizationAndSaveStage`](processing/pipeline/stages/metadata_finalization_save.py:14)** (`processing/pipeline/stages/metadata_finalization_save.py`):
* **Responsibility**: Finalizes the `context.asset_metadata` (setting end time, final status based on flags). It restructures the processed map details for inclusion, determines the save path for the metadata file based on configuration and patterns, serializes the metadata to JSON, and saves the `metadata.json` file to the final output location.
* **Context Interaction**: Reads from `context.asset_metadata`, `context.processed_maps_details`, `context.merged_maps_details`, `context.output_base_path`, and [`Configuration`](configuration.py:68). Writes the `metadata.json` file and updates `context.asset_metadata` with its final path and status.
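The assembly-and-serialize step can be sketched as follows; the key names (`processed_maps`, `merged_maps`, `end_time`) are assumptions for illustration, not the actual `metadata.json` schema:

```python
import json
import time

def finalize_metadata(asset_metadata, processed_maps_details, merged_maps_details):
    """Sketch: merge the accumulated context sections into one document
    and serialize it for metadata.json. Key names are assumptions."""
    doc = dict(asset_metadata)
    doc["processed_maps"] = processed_maps_details
    doc["merged_maps"] = merged_maps_details
    doc["end_time"] = time.time()
    # default=str keeps non-JSON-native values (paths, timestamps) serializable
    return json.dumps(doc, indent=2, default=str)
```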
## External Steps
Certain steps are integral to the overall asset processing workflow but are handled outside the [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36)'s direct execution loop:
* **Workspace Preparation and Cleanup**: Handled by the code that invokes [`ProcessingEngine.process()`](processing_engine.py:131) (e.g., `main.ProcessingTask`, `monitor._process_archive_task`), typically involving extracting archives and setting up temporary directories. The engine itself manages a sub-temporary directory (`engine_temp_dir`) for intermediate processing files.
* **Prediction and Rule Generation**: Performed before the [`ProcessingEngine`](processing_engine.py:73) is called. This involves analyzing source files and generating the [`SourceRule`](rule_structure.py:40) object with its nested [`AssetRule`](rule_structure.py:22)s and [`FileRule`](rule_structure.py:5)s, often involving prediction logic (potentially using LLMs).
* **Optional Blender Script Execution**: Can be triggered externally after successful processing to perform tasks like material setup in Blender using the generated output files and metadata.
This staged pipeline provides a modular and extensible architecture for asset processing, with clear separation of concerns for each step. The [`AssetProcessingContext`](processing/pipeline/asset_context.py:86) ensures that data flows consistently between these stages.
# Plan: Enforcing "MAP_" Prefix for Internal Processing and Standard Type for Output Naming
**Date:** 2025-05-13
**I. Goal:**
The primary goal is to ensure that for all internal processing, the system *exclusively* uses `FileRule.item_type` values that start with the "MAP_" prefix (e.g., "MAP_COL", "MAP_NRM"). The "standard type" (e.g., "COL", "NRM") associated with these "MAP_" types (as defined in `config/app_settings.json`) should *only* be used during the file saving stages for output naming. Any `FileRule` whose `item_type` does not start with "MAP_" (and isn't a special type like "EXTRA" or "MODEL") should be skipped by the relevant map processing stages.
**II. Current State Analysis Summary:**
* **Output Naming:** The use of "standard type" for output filenames via the `get_filename_friendly_map_type` utility in `SaveVariantsStage` and `OutputOrganizationStage` is **correct** and already meets the requirement.
* **Internal "MAP_" Prefix Usage:**
* Some stages like `GlossToRoughConversionStage` correctly check for "MAP_" prefixes (e.g., `processing_map_type.startswith("MAP_GLOSS")`).
* Other stages like `RegularMapProcessorStage` and `MergedTaskProcessorStage` (and its helpers) implicitly expect "MAP_" prefixed types for their internal regex-based logic but lack explicit checks to skip items if the prefix is missing.
* Stages like `AlphaExtractionToMaskStage` and `NormalMapGreenChannelStage` currently use non-"MAP_" prefixed "standard types" (e.g., "NORMAL", "ALBEDO") when reading from `context.processed_maps_details` for their decision-making logic.
* The `PrepareProcessingItemsStage` adds `FileRule`s to the processing queue without filtering based on the "MAP_" prefix in `item_type`.
* **Data Consistency in `AssetProcessingContext`:**
* `FileRule.item_type` is the field that should hold the "MAP_" prefixed type from the initial rule generation.
* `context.processed_maps_details` entries can contain various map type representations:
* `map_type`: Often stores the "standard type" (e.g., "Roughness", "MASK", "NORMAL").
* `processing_map_type` / `internal_map_type`: Generally seem to store the "MAP_" prefixed type. This needs to be consistent.
* **Configuration (`config/app_settings.json`):**
* `FILE_TYPE_DEFINITIONS` correctly use "MAP_" prefixed keys.
* `MAP_MERGE_RULES` need to be reviewed to ensure their `output_map_type` and input map types are "MAP_" prefixed.
**III. Proposed Changes (Code Identification & Recommendations):**
**A. Enforce "MAP_" Prefix for Processing Items (Skipping Logic):**
The core requirement is that processing stages should skip `FileRule` items if their `item_type` doesn't start with "MAP_".
1. **`RegularMapProcessorStage` (`processing/pipeline/stages/regular_map_processor.py`):**
* **Identify:** In the `execute` method, `initial_internal_map_type` is derived from `file_rule.item_type_override` or `file_rule.item_type`.
* **Recommend:** Add an explicit check after determining `initial_internal_map_type`. If `initial_internal_map_type` does not start with `"MAP_"`, the stage should log a warning, set the `result.status` to "Skipped (Invalid Type)" or similar, and return `result` early, effectively skipping processing for this item.
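The recommended early-return guard could look like the following sketch (the status string matches the wording above; the surrounding `result` handling is omitted):

```python
def check_internal_map_type(initial_internal_map_type: str):
    """Return a skip status for non-"MAP_"-prefixed types, or None when
    processing may proceed (sketch of the guard recommended above)."""
    if not initial_internal_map_type.startswith("MAP_"):
        return "Skipped (Invalid Type)"
    return None
```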
2. **`MergedTaskProcessorStage` (`processing/pipeline/stages/merged_task_processor.py`):**
* **Identify:** This stage processes `MergeTaskDefinition`s. The definitions for these tasks (input types, output type) come from `MAP_MERGE_RULES` in `config/app_settings.json`. The stage uses `required_map_type_from_rule` for its inputs.
* **Recommend:**
* **Configuration First:** Review all entries in `MAP_MERGE_RULES` in `config/app_settings.json`.
* Ensure the `output_map_type` for each rule (e.g., "MAP_NRMRGH") starts with "MAP_".
* Ensure all map type values within the `inputs` dictionary (e.g., `"R": "MAP_NRM"`) start with "MAP_".
* **Stage Logic:** In the `execute` method, when iterating through `merge_inputs_config.items()`, check if `required_map_type_from_rule` starts with `"MAP_"`. If not, log a warning and either:
* Skip loading/processing this specific input channel (potentially using its fallback if the overall merge can still proceed).
* Or, if a non-"MAP_" input is critical, fail the entire merge task for this asset.
* The helper `_apply_in_memory_transformations` already uses regex expecting "MAP_" prefixes; this will naturally fail or misbehave if inputs are not "MAP_" prefixed, reinforcing the need for the check above.
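A conforming `MAP_MERGE_RULES` entry, per the review recommended above, might look like this (illustrative values only, not the project's actual configuration):

```json
{
  "MAP_MERGE_RULES": [
    {
      "output_map_type": "MAP_NRMRGH",
      "inputs": {
        "R": "MAP_NRM",
        "G": "MAP_NRM",
        "B": "MAP_ROUGH"
      }
    }
  ]
}
```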
**B. Standardize Map Type Fields and Usage in `context.processed_maps_details`:**
Ensure consistency in how "MAP_" prefixed types are stored and accessed within `context.processed_maps_details` for internal logic (not naming).
1. **Recommendation:** Establish a single, consistent field name within `context.processed_maps_details` to store the definitive "MAP_" prefixed internal map type (e.g., `internal_map_type` or `processing_map_type`). All stages that perform logic based on the specific *kind* of map (e.g., transformations, source selection) should read from this standardized field. The `map_type` field can continue to store the "standard type" (e.g., "Roughness") primarily for informational/metadata purposes if needed, but not for core processing logic.
2. **`AlphaExtractionToMaskStage` (`processing/pipeline/stages/alpha_extraction_to_mask.py`):**
* **Identify:**
* Checks for existing MASK map using `file_rule.map_type == "MASK"`. (Discrepancy: `FileRule` uses `item_type`).
* Searches for suitable source maps using `details.get('map_type') in self.SUITABLE_SOURCE_MAP_TYPES` where `SUITABLE_SOURCE_MAP_TYPES` are standard types like "ALBEDO".
* When adding new details, it sets `map_type: "MASK"` and the new `FileRule` gets `item_type="MAP_MASK"`.
|
|
||||||
* **Recommend:**
|
|
||||||
* Change the check for an existing MASK map to `file_rule.item_type == "MAP_MASK"`.
|
|
||||||
* Modify the source map search to use the standardized "MAP_" prefixed field from `details` (e.g., `details.get('internal_map_type')`) and update `SUITABLE_SOURCE_MAP_TYPES` to be "MAP_" prefixed (e.g., "MAP_COL", "MAP_ALBEDO").
|
|
||||||
* When adding new details for the created MASK map to `context.processed_maps_details`, ensure the standardized "MAP_" prefixed field is set to "MAP_MASK", and `map_type` (if kept) is "MASK".
|
|
||||||
|
|
||||||
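The two recommended checks can be seen in isolation below. This is a sketch with stand-in types: the `FileRule` class here is a minimal stub carrying only `item_type`, and `has_existing_mask` / `find_alpha_source` are hypothetical helper names for logic that lives inline in the stage.

```python
from typing import Optional

SUITABLE_SOURCE_MAP_TYPES = ["MAP_COL", "MAP_ALBEDO"]  # "MAP_"-prefixed, per the recommendation

class FileRule:
    """Minimal stand-in for the pipeline's FileRule (only the field used here)."""
    def __init__(self, item_type: str):
        self.item_type = item_type

def has_existing_mask(files_to_process) -> bool:
    # Check item_type (the field FileRule actually carries), not map_type.
    return any(getattr(fr, "item_type", None) == "MAP_MASK" for fr in files_to_process)

def find_alpha_source(processed_maps_details: dict) -> Optional[str]:
    # Select alpha sources by the standardized internal_map_type field.
    for file_rule_id, details in processed_maps_details.items():
        if details.get("status") == "Processed" and \
           details.get("internal_map_type") in SUITABLE_SOURCE_MAP_TYPES:
            return file_rule_id
    return None

assert has_existing_mask([FileRule("MAP_MASK")])
assert find_alpha_source({"0": {"status": "Processed", "internal_map_type": "MAP_COL"}}) == "0"
```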
3. **`NormalMapGreenChannelStage` (`processing/pipeline/stages/normal_map_green_channel.py`):**
    * **Identify:** Checks `map_details.get('map_type') == "NORMAL"`.
    * **Recommend:** Change this check to use the standardized "MAP_" prefixed field from `map_details` (e.g., `map_details.get('internal_map_type')`) and verify that it `startswith("MAP_NRM")`.
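A minimal sketch of the recommended predicate (hypothetical `is_normal_map` helper; the real check sits inline in the stage's loop). Using `startswith("MAP_NRM")` also covers suffixed variants such as "MAP_NRM-1":

```python
def is_normal_map(map_details: dict) -> bool:
    """Recommended check: standardized field plus "MAP_NRM" prefix."""
    internal_map_type = map_details.get("internal_map_type")
    return bool(internal_map_type) and internal_map_type.startswith("MAP_NRM")

assert is_normal_map({"internal_map_type": "MAP_NRM-1"})
assert not is_normal_map({"map_type": "NORMAL"})  # the old-style field alone no longer matches
```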
4. **`GlossToRoughConversionStage` (`processing/pipeline/stages/gloss_to_rough_conversion.py`):**
    * **Identify:** This stage already uses `processing_map_type.startswith("MAP_GLOSS")` and updates `processing_map_type` to "MAP_ROUGH" in `map_details`. It also updates the `FileRule.item_type` correctly.
    * **Recommend:** This stage is largely consistent. Ensure the field it reads/writes (`processing_map_type`) aligns with the standardized "MAP_" prefixed field chosen for `processed_maps_details`.
**C. Review Orchestration Logic (Conceptual):**

* When the orchestrator populates `context.processed_maps_details` after stages like `SaveVariantsStage`, ensure it stores the "MAP_" prefixed `internal_map_type` (from `SaveVariantsInput`) in the chosen standardized field of `processed_maps_details`.
**IV. Testing Recommendations:**

* Create test cases with `AssetRule`s containing `FileRule`s where `item_type` is intentionally set to a non-"MAP_" prefixed value (e.g., "COLOR_MAP", "TEXTURE_ROUGH"). Verify that `RegularMapProcessorStage` skips these.
* Modify `MAP_MERGE_RULES` in a test configuration:
    * Set an `output_map_type` to a non-"MAP_" value.
    * Set an input map type (e.g., for channel "R") to a non-"MAP_" value.
    * Verify that `MergedTaskProcessorStage` handles these correctly (e.g., fails the task, skips the input, logs warnings).
* Test `AlphaExtractionToMaskStage`:
    * With an existing `FileRule` having `item_type="MAP_MASK"`, to ensure extraction is skipped.
    * With source maps having "MAP_COL" (with alpha) as their `internal_map_type` in `processed_maps_details`, to ensure they are correctly identified as sources.
* Test `NormalMapGreenChannelStage` with a normal map having "MAP_NRM" as its `internal_map_type` in `processed_maps_details`, to ensure it is processed.
* Verify that output filenames continue to use the "standard type" (e.g., "COL", "ROUGH", "NRM") correctly.
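The first test case above reduces to asserting a prefix gate. A minimal sketch (the `should_process` helper is hypothetical; the real gate is the `item_type` check inside `RegularMapProcessorStage`):

```python
def should_process(item_type) -> bool:
    """Mirror of the RegularMapProcessorStage gate: only "MAP_"-prefixed item types are processed."""
    return isinstance(item_type, str) and item_type.startswith("MAP_")

# Non-"MAP_" values from the recommendations above must be skipped.
for bad in ("COLOR_MAP", "TEXTURE_ROUGH"):
    assert not should_process(bad)
for good in ("MAP_MASK", "MAP_COL", "MAP_NRM-1"):
    assert should_process(good)
```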
**V. Mermaid Diagram (Illustrative Flow for `FileRule` Processing):**

```mermaid
graph TD
    A[AssetRule with FileRules] --> B{FileRuleFilterStage};
    B -- files_to_process --> C{PrepareProcessingItemsStage};
    C -- processing_items (FileRule) --> D{PipelineOrchestrator};
    D -- FileRule --> E(RegularMapProcessorStage);
    E --> F{Check FileRule.item_type};
    F -- Starts with "MAP_"? --> G[Process Map];
    F -- No --> H[Skip Map / Log Warning];
    G --> I[...subsequent stages...];
    H --> I;
```
**`processing/pipeline/orchestrator.py`**

```diff
@@ -165,12 +165,10 @@ class PipelineOrchestrator:
         # --- Prepare Processing Items ---
         log.debug(f"Asset '{asset_name}': Preparing processing items...")
         try:
-            log.info(f"ORCHESTRATOR_TRACE: Asset '{asset_name}': Attempting to call _prepare_stage.execute(). Current context.status_flags: {context.status_flags}")
             # Prepare stage modifies context directly
             context = self._prepare_stage.execute(context)
-            log.info(f"ORCHESTRATOR_TRACE: Asset '{asset_name}': Successfully RETURNED from _prepare_stage.execute(). context.processing_items count: {len(context.processing_items) if context.processing_items is not None else 'None'}. context.status_flags: {context.status_flags}")
         except Exception as e:
-            log.error(f"ORCHESTRATOR_TRACE: Asset '{asset_name}': EXCEPTION during _prepare_stage.execute(): {e}", exc_info=True)
+            log.error(f"Asset '{asset_name}': Error during PrepareProcessingItemsStage: {e}", exc_info=True)
             context.status_flags["asset_failed"] = True
             context.status_flags["asset_failed_stage"] = "PrepareProcessingItemsStage"
             context.status_flags["asset_failed_reason"] = str(e)
```
**`processing/pipeline/stages/alpha_extraction_to_mask.py`**

```diff
@@ -18,8 +18,7 @@ class AlphaExtractionToMaskStage(ProcessingStage):
     Extracts an alpha channel from a suitable source map (e.g., Albedo, Diffuse)
     to generate a MASK map if one is not explicitly defined.
     """
-    # Use MAP_ prefixed types for internal logic checks
-    SUITABLE_SOURCE_MAP_TYPES = ["MAP_COL", "MAP_ALBEDO", "MAP_BASECOLOR"] # Map types likely to have alpha
+    SUITABLE_SOURCE_MAP_TYPES = ["ALBEDO", "DIFFUSE", "BASE_COLOR"] # Map types likely to have alpha

     def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
         asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
@@ -39,8 +38,7 @@ class AlphaExtractionToMaskStage(ProcessingStage):
         # A. Check for Existing MASK Map
         for file_rule in context.files_to_process:
             # Assuming file_rule has 'map_type' and 'file_path' (instead of filename_pattern)
-            # Check for existing MASK map using the correct item_type field and MAP_ prefix
-            if file_rule.item_type == "MAP_MASK":
+            if hasattr(file_rule, 'map_type') and file_rule.map_type == "MASK":
                 file_path_for_log = file_rule.file_path if hasattr(file_rule, 'file_path') else "Unknown file path"
                 logger.info(
                     f"Asset '{asset_name_for_log}': MASK map already defined by FileRule "
@@ -53,10 +51,8 @@ class AlphaExtractionToMaskStage(ProcessingStage):
         source_file_rule_id_for_alpha: Optional[str] = None # This ID comes from processed_maps_details keys

         for file_rule_id, details in context.processed_maps_details.items():
-            # Check for suitable source map using the standardized internal_map_type field
-            internal_map_type = details.get('internal_map_type') # Use the standardized field
             if details.get('status') == 'Processed' and \
-               internal_map_type in self.SUITABLE_SOURCE_MAP_TYPES:
+               details.get('map_type') in self.SUITABLE_SOURCE_MAP_TYPES:
                 try:
                     temp_path = Path(details['temp_processed_file'])
                     if not temp_path.exists():
@@ -157,16 +153,15 @@ class AlphaExtractionToMaskStage(ProcessingStage):


         context.processed_maps_details[new_mask_processed_map_key] = {
-            'internal_map_type': "MAP_MASK", # Use the standardized MAP_ prefixed field
-            'map_type': "MASK", # Keep standard type for metadata/naming consistency if needed
+            'map_type': "MASK",
             'source_file': str(source_image_path),
             'temp_processed_file': str(mask_temp_path),
             'original_dimensions': original_dims,
             'processed_dimensions': (alpha_channel.shape[1], alpha_channel.shape[0]),
             'status': 'Processed',
             'notes': (
-                f"Generated from alpha of {source_map_details_for_alpha.get('internal_map_type', 'unknown type')} " # Use internal_map_type for notes
-                f"(Source Detail ID: {source_file_rule_id_for_alpha})"
+                f"Generated from alpha of {source_map_details_for_alpha['map_type']} "
+                f"(Source Detail ID: {source_file_rule_id_for_alpha})" # Changed from Source Rule ID
             ),
             # 'file_rule_id': new_mask_file_rule_id_str # FileRule doesn't have an ID to link here directly
         }
```
**`processing/pipeline/stages/gloss_to_rough_conversion.py`**

```diff
@@ -51,8 +51,7 @@ class GlossToRoughConversionStage(ProcessingStage):

         # Iterate using the index (map_key_index) as the key, which is now standard.
         for map_key_index, map_details in context.processed_maps_details.items():
-            # Use the standardized internal_map_type field
-            internal_map_type = map_details.get('internal_map_type', '')
+            processing_map_type = map_details.get('processing_map_type', '')
             map_status = map_details.get('status')
             original_temp_path_str = map_details.get('temp_processed_file')
             # source_file_rule_idx from details should align with map_key_index.
@@ -71,12 +70,11 @@ class GlossToRoughConversionStage(ProcessingStage):
             processing_tag = f"mki_{map_key_index}_fallback_tag"


-            # Check if the map is a GLOSS map using the standardized internal_map_type
-            if not internal_map_type.startswith("MAP_GLOSS"):
-                # logger.debug(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index}: Type '{internal_map_type}' is not GLOSS. Skipping.")
+            if not processing_map_type.startswith("MAP_GLOSS"):
+                # logger.debug(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index}: Type '{processing_map_type}' is not GLOSS. Skipping.")
                 continue

-            logger.info(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index} (Tag: {processing_tag}): Identified potential GLOSS map (Type: {internal_map_type}).")
+            logger.info(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index} (Tag: {processing_tag}): Identified potential GLOSS map (Type: {processing_map_type}).")

             if map_status not in successful_conversion_statuses:
                 logger.warning(
@@ -165,9 +163,9 @@ class GlossToRoughConversionStage(ProcessingStage):

             # Update context.processed_maps_details for this map_key_index
             map_details['temp_processed_file'] = str(new_temp_path)
-            map_details['original_map_type_before_conversion'] = internal_map_type # Store the original internal type
-            map_details['internal_map_type'] = "MAP_ROUGH" # Use the standardized MAP_ prefixed field
-            map_details['map_type'] = "Roughness" # Keep standard type for metadata/naming consistency if needed
+            map_details['original_map_type_before_conversion'] = processing_map_type
+            map_details['processing_map_type'] = "MAP_ROUGH"
+            map_details['map_type'] = "Roughness"
             map_details['status'] = "Converted_To_Rough"
             map_details['notes'] = map_details.get('notes', '') + "; Converted from GLOSS by GlossToRoughConversionStage"
             if 'base_pot_resolution_name' in map_details:
```
**`MergedTaskProcessorStage` (merge task processor stage)**

```diff
@@ -13,6 +13,58 @@ from ...utils import image_processing_utils as ipu

 log = logging.getLogger(__name__)

+# Helper function (Duplicated from RegularMapProcessorStage - consider moving to utils)
+def _apply_in_memory_transformations(
+    image_data: np.ndarray,
+    processing_map_type: str, # The internal type of the *input* map
+    invert_normal_green: bool,
+    file_type_definitions: Dict[str, Dict],
+    log_prefix: str
+) -> Tuple[np.ndarray, str, List[str]]:
+    """
+    Applies in-memory transformations (Gloss-to-Rough, Normal Green Invert).
+    Returns potentially transformed image data, potentially updated map type, and notes.
+    NOTE: This is applied to individual inputs *before* merging.
+    """
+    transformation_notes = []
+    current_image_data = image_data # Start with original data
+    updated_processing_map_type = processing_map_type # Start with original type
+
+    # Gloss-to-Rough
+    base_map_type_match = re.match(r"(MAP_GLOSS)", processing_map_type)
+    if base_map_type_match:
+        log.info(f"{log_prefix}: Applying Gloss-to-Rough conversion to input.")
+        inversion_succeeded = False
+        if np.issubdtype(current_image_data.dtype, np.floating):
+            current_image_data = 1.0 - current_image_data
+            current_image_data = np.clip(current_image_data, 0.0, 1.0)
+            log.debug(f"{log_prefix}: Inverted float input data for Gloss->Rough.")
+            inversion_succeeded = True
+        elif np.issubdtype(current_image_data.dtype, np.integer):
+            max_val = np.iinfo(current_image_data.dtype).max
+            current_image_data = max_val - current_image_data
+            log.debug(f"{log_prefix}: Inverted integer input data (max_val: {max_val}) for Gloss->Rough.")
+            inversion_succeeded = True
+        else:
+            log.error(f"{log_prefix}: Unsupported image data type {current_image_data.dtype} for GLOSS input map. Cannot invert.")
+            transformation_notes.append("Gloss-to-Rough FAILED (unsupported dtype)")
+
+        if inversion_succeeded:
+            updated_processing_map_type = processing_map_type.replace("GLOSS", "ROUGH")
+            log.info(f"{log_prefix}: Input map type conceptually updated: '{processing_map_type}' -> '{updated_processing_map_type}'")
+            transformation_notes.append("Gloss-to-Rough applied to input")
+
+    # Normal Green Invert
+    base_map_type_match_nrm = re.match(r"(MAP_NRM)", processing_map_type)
+    if base_map_type_match_nrm and invert_normal_green:
+        log.info(f"{log_prefix}: Applying Normal Map Green Channel Inversion (Global Setting) to input.")
+        current_image_data = ipu.invert_normal_map_green_channel(current_image_data)
+        transformation_notes.append("Normal Green Inverted (Global) applied to input")
+
+    # Return the transformed data, the *original* map type (as it identifies the input source), and notes
+    return current_image_data, processing_map_type, transformation_notes
+
+
 class MergedTaskProcessorStage(ProcessingStage):
     """
     Processes a single merge task defined in the configuration.
```
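The dtype-dependent inversion used by this helper can be exercised in isolation. The sketch below (hypothetical `invert_gloss` name, pure NumPy, no pipeline imports) shows the same math: float data is inverted within [0, 1], integer data against its dtype's maximum value.

```python
import numpy as np

def invert_gloss(data: np.ndarray) -> np.ndarray:
    """Gloss -> Rough: invert float data in [0, 1], integer data against its dtype max."""
    if np.issubdtype(data.dtype, np.floating):
        return np.clip(1.0 - data, 0.0, 1.0)
    if np.issubdtype(data.dtype, np.integer):
        # e.g. uint8: 255 - value, uint16: 65535 - value
        return np.iinfo(data.dtype).max - data
    raise TypeError(f"Unsupported dtype for gloss inversion: {data.dtype}")

assert invert_gloss(np.array([0, 255], dtype=np.uint8)).tolist() == [255, 0]
assert np.allclose(invert_gloss(np.array([0.25], dtype=np.float32)), [0.75])
```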
```diff
@@ -20,42 +72,6 @@ class MergedTaskProcessorStage(ProcessingStage):
     performs the merge, and returns the merged data.
     """

-    def _find_input_map_details_in_context(
-        self,
-        required_map_type: str,
-        processed_map_details_context: Dict[str, Dict[str, Any]],
-        log_prefix_for_find: str
-    ) -> Optional[Dict[str, Any]]:
-        """
-        Finds the details of a required input map from the context's processed_maps_details.
-        Prefers exact match for full types (e.g. MAP_TYPE-1), or base type / base type + "-1" for base types (e.g. MAP_TYPE).
-        Returns the details dictionary for the found map if it has saved_files_info.
-        """
-        # Try exact match first (e.g., rule asks for "MAP_NRM-1" or "MAP_NRM" if that's how it was processed)
-        for item_key, details in processed_map_details_context.items():
-            if details.get('internal_map_type') == required_map_type:
-                if details.get('saved_files_info') and isinstance(details['saved_files_info'], list) and len(details['saved_files_info']) > 0:
-                    log.debug(f"{log_prefix_for_find}: Found exact match for '{required_map_type}' with key '{item_key}'.")
-                    return details
-                log.warning(f"{log_prefix_for_find}: Found exact match for '{required_map_type}' (key '{item_key}') but no saved_files_info.")
-                return None # Found type but no usable files
-
-        # If exact match not found, and required_map_type is a base type (e.g. "MAP_NRM")
-        # try to find the primary suffixed version "MAP_NRM-1" or the base type itself if it was processed without a suffix.
-        if not re.search(r'-\d+$', required_map_type): # if it's a base type like MAP_XXX
-            # Prefer "MAP_XXX-1" as the primary variant if suffixed types exist
-            primary_suffixed_type = f"{required_map_type}-1"
-            for item_key, details in processed_map_details_context.items():
-                if details.get('internal_map_type') == primary_suffixed_type:
-                    if details.get('saved_files_info') and isinstance(details['saved_files_info'], list) and len(details['saved_files_info']) > 0:
-                        log.debug(f"{log_prefix_for_find}: Found primary suffixed match '{primary_suffixed_type}' for base '{required_map_type}' with key '{item_key}'.")
-                        return details
-                    log.warning(f"{log_prefix_for_find}: Found primary suffixed match '{primary_suffixed_type}' (key '{item_key}') but no saved_files_info.")
-                    return None # Found type but no usable files
-
-        log.debug(f"{log_prefix_for_find}: No suitable match found for '{required_map_type}' via exact or primary suffixed type search.")
-        return None
-
     def execute(
         self,
         context: AssetProcessingContext,
```
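The finder's base-type fallback hinges on the `-\d+$` suffix convention: a request for a full type like "MAP_NRM-2" must match exactly, while a base type like "MAP_NRM" may be satisfied by itself or by its primary "-1" variant. A minimal sketch of that candidate-resolution rule (hypothetical `resolve_candidates` helper):

```python
import re

def resolve_candidates(required_map_type: str):
    """Internal_map_type values that satisfy a request, in preference order."""
    if re.search(r"-\d+$", required_map_type):
        return [required_map_type]  # full suffixed type: exact match only
    # base type: the type itself, or the primary "-1" suffixed variant
    return [required_map_type, f"{required_map_type}-1"]

assert resolve_candidates("MAP_NRM-2") == ["MAP_NRM-2"]
assert resolve_candidates("MAP_NRM") == ["MAP_NRM", "MAP_NRM-1"]
```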
```diff
@@ -89,23 +105,17 @@ class MergedTaskProcessorStage(ProcessingStage):
         merge_dimension_mismatch_strategy = getattr(config, "MERGE_DIMENSION_MISMATCH_STRATEGY", "USE_LARGEST")
         workspace_path = context.workspace_path # Base for resolving relative input paths

-        # input_map_sources_from_task is no longer used for paths. Paths are sourced from context.processed_maps_details.
-        target_dimensions_hw = task_data.get('source_dimensions') # Expected dimensions (h, w) for fallback creation, must be in config.
-        merge_inputs_config = task_data.get('inputs', {}) # e.g., {'R': 'MAP_AO', 'G': 'MAP_ROUGH', ...}
-        merge_defaults = task_data.get('defaults', {}) # e.g., {'R': 255, 'G': 255, ...}
-        merge_channels_order = task_data.get('channel_order', 'RGB') # e.g., 'RGB', 'RGBA'
+        merge_rule_config = task_data.get('merge_rule_config', {})
+        input_map_sources_from_task = task_data.get('input_map_sources', {}) # Info about where inputs come from
+        target_dimensions_hw = task_data.get('source_dimensions') # Expected dimensions (h, w) from previous stage
+        merge_inputs_config = merge_rule_config.get('inputs', {}) # e.g., {'R': 'MAP_AO', 'G': 'MAP_ROUGH', ...}
+        merge_defaults = merge_rule_config.get('defaults', {}) # e.g., {'R': 255, 'G': 255, ...}
+        merge_channels_order = merge_rule_config.get('channel_order', 'RGB') # e.g., 'RGB', 'RGBA'

-        # Target dimensions are crucial if fallbacks are needed.
-        # Merge inputs config is essential.
-        # Merge inputs config is essential. Check directly in task_data.
-        inputs_from_task_data = task_data.get('inputs')
-        if not isinstance(inputs_from_task_data, dict) or not inputs_from_task_data:
-            result.error_message = "Merge task data is incomplete (missing or invalid 'inputs' dictionary in task_data)."
+        if not merge_rule_config or not input_map_sources_from_task or not target_dimensions_hw or not merge_inputs_config:
+            result.error_message = "Merge task data is incomplete (missing config, sources, dimensions, or input mapping)."
             log.error(f"{log_prefix}: {result.error_message}")
             return result
-        if not target_dimensions_hw and any(merge_defaults.get(ch) is not None for ch in merge_inputs_config.keys()):
-            log.warning(f"{log_prefix}: Merge task has defaults defined, but 'source_dimensions' (target_dimensions_hw) is missing in task_data. Fallback image creation might fail if needed.")
-            # Not returning error yet, as fallbacks might not be triggered.

         loaded_inputs_for_merge: Dict[str, np.ndarray] = {} # Channel char -> image data
         actual_input_dimensions: List[Tuple[int, int]] = [] # List of (h, w) for loaded files
```
```diff
@@ -115,70 +125,46 @@ class MergedTaskProcessorStage(ProcessingStage):
         # --- Load, Transform, and Prepare Inputs ---
         log.debug(f"{log_prefix}: Loading and preparing inputs...")
         for channel_char, required_map_type_from_rule in merge_inputs_config.items():
-            # Validate that the required input map type starts with "MAP_"
-            if not required_map_type_from_rule.startswith("MAP_"):
-                result.error_message = (
-                    f"Invalid input map type '{required_map_type_from_rule}' for channel '{channel_char}'. "
-                    f"Input map types for merging must start with 'MAP_'."
-                )
-                log.error(f"{log_prefix}: {result.error_message}")
-                return result # Fail the task if an input type is invalid
+            input_info = input_map_sources_from_task.get(required_map_type_from_rule)

             input_image_data: Optional[np.ndarray] = None
             input_source_desc = f"Fallback for {required_map_type_from_rule}"
             input_log_prefix = f"{log_prefix}, Input '{required_map_type_from_rule}' (Channel '{channel_char}')"
             channel_transform_notes: List[str] = []

-            # 1. Attempt to load from context.processed_maps_details
-            found_input_map_details = self._find_input_map_details_in_context(
-                required_map_type_from_rule, context.processed_maps_details, input_log_prefix
-            )
-            if found_input_map_details:
-                # Assuming the first saved file is the primary one for merging.
-                # This might need refinement if specific variants (resolutions/formats) are required.
-                primary_saved_file_info = found_input_map_details['saved_files_info'][0]
-                input_file_path_str = primary_saved_file_info.get('path')
-                if input_file_path_str:
-                    input_file_path = Path(input_file_path_str) # Path is absolute from SaveVariantsStage
-                    if input_file_path.is_file():
-                        try:
-                            input_image_data = ipu.load_image(str(input_file_path))
-                            if input_image_data is not None:
-                                log.info(f"{input_log_prefix}: Loaded from context: {input_file_path}")
-                                actual_input_dimensions.append(input_image_data.shape[:2]) # (h, w)
-                                input_source_desc = str(input_file_path)
-                                # Bit depth from the saved variant info
-                                input_source_bit_depths[channel_char] = primary_saved_file_info.get('bit_depth', 8)
-                            else:
-                                log.warning(f"{input_log_prefix}: Failed to load image from {input_file_path} (found in context). Attempting fallback.")
-                                input_image_data = None # Ensure fallback is triggered
-                        except Exception as e:
-                            log.warning(f"{input_log_prefix}: Error loading image from {input_file_path} (found in context): {e}. Attempting fallback.")
-                            input_image_data = None # Ensure fallback is triggered
-                    else:
-                        log.warning(f"{input_log_prefix}: Input file path '{input_file_path}' (from context) not found. Attempting fallback.")
-                        input_image_data = None # Ensure fallback is triggered
-                else:
-                    log.warning(f"{input_log_prefix}: Found map type '{required_map_type_from_rule}' in context, but 'path' is missing in saved_files_info. Attempting fallback.")
-                    input_image_data = None # Ensure fallback is triggered
-            else:
-                log.info(f"{input_log_prefix}: Input map type '{required_map_type_from_rule}' not found in context.processed_maps_details. Attempting fallback.")
-                input_image_data = None # Ensure fallback is triggered
+            # 1. Attempt to load from file path
+            if input_info and input_info.get('file_path'):
+                # Paths in merged tasks should be relative to workspace_path
+                input_file_path_str = input_info['file_path']
+                input_file_path = workspace_path / input_file_path_str
+                if input_file_path.is_file():
+                    try:
+                        input_image_data = ipu.load_image(str(input_file_path))
+                        if input_image_data is not None:
+                            log.info(f"{input_log_prefix}: Loaded from: {input_file_path}")
+                            actual_input_dimensions.append(input_image_data.shape[:2]) # (h, w)
+                            input_source_desc = str(input_file_path)
+                            try:
+                                input_source_bit_depths[channel_char] = ipu.get_image_bit_depth(str(input_file_path))
+                            except Exception:
+                                log.warning(f"{input_log_prefix}: Could not get bit depth for {input_file_path}. Defaulting to 8.")
+                                input_source_bit_depths[channel_char] = 8
+                        else:
+                            log.warning(f"{input_log_prefix}: Failed to load image from {input_file_path}. Attempting fallback.")
+                    except Exception as e:
+                        log.warning(f"{input_log_prefix}: Error loading image from {input_file_path}: {e}. Attempting fallback.")
+                else:
+                    log.warning(f"{input_log_prefix}: Input file path not found: {input_file_path}. Attempting fallback.")
+            else:
+                log.warning(f"{input_log_prefix}: No file path provided. Attempting fallback.")

             # 2. Apply Fallback if needed
             if input_image_data is None:
                 fallback_value = merge_defaults.get(channel_char)
                 if fallback_value is not None:
                     try:
-                        if not target_dimensions_hw:
-                            result.error_message = f"Cannot create fallback for channel '{channel_char}': 'source_dimensions' (target_dimensions_hw) not defined in task_data."
-                            log.error(f"{log_prefix}: {result.error_message}")
-                            return result # Critical failure if dimensions for fallback are missing
                         h, w = target_dimensions_hw
                         # Infer shape/dtype for fallback (simplified)
-                        num_channels = 1 if isinstance(fallback_value, (int, float)) else len(fallback_value) if isinstance(fallback_value, (list, tuple)) else 1
+                        num_channels = 1 if isinstance(fallback_value, (int, float)) else len(fallback_value) if isinstance(fallback_value, (list, tuple)) else 1 # Default to 1 channel? Needs refinement.
                         dtype = np.uint8 # Default dtype
                         shape = (h, w) if num_channels == 1 else (h, w, num_channels)
```
```diff
@@ -198,7 +184,7 @@ class MergedTaskProcessorStage(ProcessingStage):

             # 3. Apply Transformations to the loaded/fallback input
             if input_image_data is not None:
-                input_image_data, _, transform_notes = ipu.apply_common_map_transformations(
+                input_image_data, _, transform_notes = _apply_in_memory_transformations(
                     input_image_data.copy(), # Transform a copy
                     required_map_type_from_rule, # Use the type required by the rule
                     invert_normal_green,
```
@@ -253,20 +239,9 @@ class MergedTaskProcessorStage(ProcessingStage):
                     loaded_inputs_for_merge[channel_char] = resized_img
                     log.debug(f"{log_prefix}: Resized input for channel '{channel_char}'.")

-        # If target_merge_dims_hw is still None (no source_dimensions and no mismatch), use first loaded input's dimensions
-        if target_merge_dims_hw is None and actual_input_dimensions:
-            target_merge_dims_hw = actual_input_dimensions[0]
-            log.info(f"{log_prefix}: Using dimensions from first loaded input: {target_merge_dims_hw}")
-
         # --- Perform Merge ---
         log.debug(f"{log_prefix}: Performing merge operation for channels '{merge_channels_order}'.")
         try:
-            # Final check for valid dimensions before unpacking
-            if not isinstance(target_merge_dims_hw, tuple) or len(target_merge_dims_hw) != 2:
-                result.error_message = "Could not determine valid target dimensions for merge operation."
-                log.error(f"{log_prefix}: {result.error_message} (target_merge_dims_hw: {target_merge_dims_hw})")
-                return result
-
             output_channels = len(merge_channels_order)
             h, w = target_merge_dims_hw # Use the potentially adjusted dimensions

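The old side of this hunk fell back to the first loaded input's dimensions when no target size had been determined, and then validated the result before unpacking. A minimal standalone sketch of that fallback rule (the function name `resolve_target_dims` is illustrative, not from the repo):

```python
def resolve_target_dims(target_dims, input_dims_list):
    """Return the target (height, width); fall back to the first input's size."""
    if target_dims is None and input_dims_list:
        return input_dims_list[0]
    return target_dims

print(resolve_target_dims(None, [(1024, 1024), (512, 512)]))  # (1024, 1024)
print(resolve_target_dims((256, 256), [(1024, 1024)]))        # (256, 256)
```

Removing this fallback means the merge now relies on `target_merge_dims_hw` having been set earlier, which is why the defensive `isinstance` check was dropped alongside it.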
@@ -38,9 +38,7 @@ class NormalMapGreenChannelStage(ProcessingStage):

         # Iterate through processed maps, as FileRule objects don't have IDs directly
         for map_id_hex, map_details in context.processed_maps_details.items():
-            # Check if the map is a processed normal map using the standardized internal_map_type
-            internal_map_type = map_details.get('internal_map_type')
-            if internal_map_type and internal_map_type.startswith("MAP_NRM") and map_details.get('status') == 'Processed':
+            if map_details.get('map_type') == "NORMAL" and map_details.get('status') == 'Processed':

                 # Check configuration for inversion
                 # Assuming general_settings is an attribute of config_obj and might be a dict or an object
@@ -194,6 +194,83 @@ class OutputOrganizationStage(ProcessingStage):
                     context.asset_metadata['status'] = "Failed (Output Organization Error)"
                     details['status'] = 'Organization Failed'

+            # --- Handle legacy 'Processed_With_Variants' status (if still needed, otherwise remove) ---
+            # This block is kept for potential backward compatibility but might be redundant
+            # if 'Processed_Via_Save_Utility' is the new standard for variants.
+            elif map_status == 'Processed_With_Variants':
+                variants = details.get('variants') # Expects old structure: list of dicts with 'temp_path'
+                if not variants:
+                    logger.warning(f"Asset '{asset_name_for_log}': Map key '{processed_map_key}' (status '{map_status}') has no 'variants' list. Skipping.")
+                    details['status'] = 'Organization Failed (Legacy Variants Missing)'
+                    continue
+
+                logger.debug(f"Asset '{asset_name_for_log}': Organizing {len(variants)} legacy variants for map key '{processed_map_key}' (map type: {base_map_type}).")
+
+                map_metadata_entry = context.asset_metadata.setdefault('maps', {}).setdefault(processed_map_key, {})
+                map_metadata_entry['map_type'] = base_map_type
+                map_metadata_entry.setdefault('variant_paths', {})
+
+                processed_any_variant_successfully = False
+                failed_any_variant = False
+
+                for variant_index, variant_detail in enumerate(variants):
+                    temp_variant_path_str = variant_detail.get('temp_path') # Uses 'temp_path'
+                    if not temp_variant_path_str:
+                        logger.warning(f"Asset '{asset_name_for_log}': Legacy Variant {variant_index} for map '{processed_map_key}' is missing 'temp_path'. Skipping.")
+                        continue
+
+                    temp_variant_path = Path(temp_variant_path_str)
+                    if not temp_variant_path.is_file():
+                        logger.warning(f"Asset '{asset_name_for_log}': Legacy temporary variant file '{temp_variant_path}' for map '{processed_map_key}' not found. Skipping.")
+                        continue
+
+                    variant_resolution_key = variant_detail.get('resolution_key', f"varRes{variant_index}")
+                    variant_ext = temp_variant_path.suffix.lstrip('.')
+
+                    token_data_variant = {
+                        "assetname": asset_name_for_log,
+                        "supplier": context.effective_supplier or "DefaultSupplier",
+                        "maptype": base_map_type,
+                        "resolution": variant_resolution_key,
+                        "ext": variant_ext,
+                        "incrementingvalue": getattr(context, 'incrementing_value', None),
+                        "sha5": getattr(context, 'sha5_value', None)
+                    }
+                    token_data_variant_cleaned = {k: v for k, v in token_data_variant.items() if v is not None}
+                    output_filename_variant = generate_path_from_pattern(output_filename_pattern_config, token_data_variant_cleaned)
+
+                    try:
+                        relative_dir_path_str_variant = generate_path_from_pattern(
+                            pattern_string=output_dir_pattern,
+                            token_data=token_data_variant_cleaned
+                        )
+                        final_variant_path = Path(context.output_base_path) / Path(relative_dir_path_str_variant) / Path(output_filename_variant)
+                        final_variant_path.parent.mkdir(parents=True, exist_ok=True)
+
+                        if final_variant_path.exists() and not overwrite_existing:
+                            logger.info(f"Asset '{asset_name_for_log}': Output legacy variant file {final_variant_path} exists and overwrite is disabled. Skipping copy.")
+                        else:
+                            shutil.copy2(temp_variant_path, final_variant_path)
+                            logger.info(f"Asset '{asset_name_for_log}': Copied legacy variant {temp_variant_path} to {final_variant_path}.")
+                        final_output_files.append(str(final_variant_path))
+
+                        relative_final_variant_path_str = str(Path(relative_dir_path_str_variant) / Path(output_filename_variant))
+                        map_metadata_entry['variant_paths'][variant_resolution_key] = relative_final_variant_path_str
+                        processed_any_variant_successfully = True
+
+                    except Exception as e:
+                        logger.error(f"Asset '{asset_name_for_log}': Failed to copy legacy variant {temp_variant_path}. Error: {e}", exc_info=True)
+                        context.status_flags['output_organization_error'] = True
+                        context.asset_metadata['status'] = "Failed (Output Organization Error - Legacy Variant)"
+                        failed_any_variant = True
+
+                if failed_any_variant:
+                    details['status'] = 'Organization Failed (Legacy Variants)'
+                elif processed_any_variant_successfully:
+                    details['status'] = 'Organized (Legacy Variants)'
+                else:
+                    details['status'] = 'Organization Skipped (No Legacy Variants Copied/Needed)'
+
             # --- Handle other statuses (Skipped, Failed, etc.) ---
             else: # Catches statuses not explicitly handled above
                 logger.debug(f"Asset '{asset_name_for_log}': Skipping map key '{processed_map_key}' (status: '{map_status}') for organization as it's not a recognized final processed state or variant state.")
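The legacy-variant block added in this hunk builds a token dictionary (`assetname`, `maptype`, `resolution`, `ext`, …) and feeds it to `generate_path_from_pattern` for both the directory and the filename. A minimal sketch of how such token substitution could work, assuming a `{token}` placeholder syntax; the repo's actual `generate_path_from_pattern` may use a different syntax, and the token values below are hypothetical:

```python
from pathlib import Path

def generate_path_from_pattern(pattern_string: str, token_data: dict) -> str:
    """Replace each {token} placeholder with its value; other text is kept as-is."""
    result = pattern_string
    for key, value in token_data.items():
        result = result.replace("{" + key + "}", str(value))
    return result

tokens = {
    "assetname": "OakFloor01",   # hypothetical asset name
    "maptype": "MAP_ROUGH",
    "resolution": "varRes0",     # default resolution key used for legacy variants
    "ext": "png",
}
rel_dir = generate_path_from_pattern("{assetname}/{maptype}", tokens)
filename = generate_path_from_pattern("{assetname}_{maptype}_{resolution}.{ext}", tokens)
print((Path("/output") / rel_dir / filename).as_posix())
# /output/OakFloor01/MAP_ROUGH/OakFloor01_MAP_ROUGH_varRes0.png
```

Dropping `None` values before substitution (as the diff's `token_data_variant_cleaned` comprehension does) keeps optional tokens like `incrementingvalue` and `sha5` out of the generated path when they are unset.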
@@ -60,24 +60,23 @@ class PrepareProcessingItemsStage(ProcessingStage):
         # merged_image_tasks are expected to be loaded into context.config_obj
         # by the Configuration class from app_settings.json.

-        merged_tasks_list = getattr(context.config_obj, 'map_merge_rules', None)
+        merged_tasks_list = getattr(context.config_obj, 'merged_image_tasks', None)

         if merged_tasks_list and isinstance(merged_tasks_list, list):
             log.debug(f"Asset '{asset_name_for_log}': Found {len(merged_tasks_list)} merge tasks in global config.")
             for task_idx, task_data in enumerate(merged_tasks_list):
                 if isinstance(task_data, dict):
                     task_key = f"merged_task_{task_idx}"
-                    # Basic validation for merge task data: requires output_map_type and an inputs dictionary
-                    if not task_data.get('output_map_type') or not isinstance(task_data.get('inputs'), dict):
-                        log.warning(f"Asset '{asset_name_for_log}', Task Index {task_idx}: Skipping merge task due to missing 'output_map_type' or valid 'inputs' dictionary. Task data: {task_data}")
+                    # Basic validation for merge task data (can be expanded)
+                    if not task_data.get('output_map_type') or not task_data.get('merge_rule_config'):
+                        log.warning(f"Asset '{asset_name_for_log}', Task Index {task_idx}: Skipping merge task due to missing 'output_map_type' or 'merge_rule_config'. Task data: {task_data}")
                         continue # Skip this specific task
-                    log.debug(f"Asset '{asset_name_for_log}', Preparing Merge Task Index {task_idx}: Raw task_data: {task_data}")
                     merge_def = MergeTaskDefinition(task_data=task_data, task_key=task_key)
-                    log.debug(f"Asset '{asset_name_for_log}': Created MergeTaskDefinition object: {merge_def}")
-                    log.info(f"Asset '{asset_name_for_log}': Successfully CREATED MergeTaskDefinition: Key='{merge_def.task_key}', OutputType='{merge_def.task_data.get('output_map_type', 'N/A')}'")
+                    log.info(f"Asset '{asset_name_for_log}': Identified and adding Merge Task: Key='{merge_def.task_key}', OutputType='{task_data.get('output_map_type', 'N/A')}'")
                     items_to_process.append(merge_def)
                 else:
                     log.warning(f"Asset '{asset_name_for_log}': Item at index {task_idx} in config_obj.merged_image_tasks is not a dictionary. Skipping. Item: {task_data}")
+
             # The log for "Added X potential MergeTaskDefinition items" will be covered by the final log.
         elif merged_tasks_list is None:
             log.debug(f"Asset '{asset_name_for_log}': 'merged_image_tasks' not found in config_obj. No global merge tasks to add.")
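The hunk above tightens how merge-task entries are validated before a `MergeTaskDefinition` is created: each entry must be a dict carrying a non-empty `output_map_type` and `merge_rule_config`. A standalone sketch of that predicate (the helper name and the `MAP_ORM`/`MAP_AO` sample types are illustrative, not from the repo):

```python
def is_valid_merge_task(task_data) -> bool:
    """Accept only dicts with non-empty 'output_map_type' and 'merge_rule_config'."""
    if not isinstance(task_data, dict):
        return False
    return bool(task_data.get('output_map_type')) and bool(task_data.get('merge_rule_config'))

tasks = [
    {"output_map_type": "MAP_ORM", "merge_rule_config": {"R": "MAP_AO"}},  # hypothetical task
    {"output_map_type": "MAP_ORM"},  # missing merge_rule_config -> skipped
    "not-a-dict",                    # wrong type -> skipped
]
accepted = [t for t in tasks if is_valid_merge_task(t)]
print(len(accepted))  # 1
```

Invalid entries are skipped individually with a warning rather than aborting the whole asset, which matches the `continue` in the stage.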
@@ -90,7 +89,6 @@ class PrepareProcessingItemsStage(ProcessingStage):
         if not items_to_process:
             log.info(f"Asset '{asset_name_for_log}': No valid items found to process after preparation.")

-        log.debug(f"Asset '{asset_name_for_log}': Final items_to_process before assigning to context: {items_to_process}")
         context.processing_items = items_to_process
         context.intermediate_results = {} # Initialize intermediate results storage

@@ -91,6 +91,57 @@ class RegularMapProcessorStage(ProcessingStage):

        return final_internal_map_type

+    def _apply_in_memory_transformations(
+        self,
+        image_data: np.ndarray,
+        processing_map_type: str, # The potentially suffixed internal type
+        invert_normal_green: bool,
+        file_type_definitions: Dict[str, Dict],
+        log_prefix: str
+    ) -> Tuple[np.ndarray, str, List[str]]:
+        """
+        Applies in-memory transformations (Gloss-to-Rough, Normal Green Invert).
+        Returns potentially transformed image data, potentially updated map type, and notes.
+        """
+        transformation_notes = []
+        current_image_data = image_data # Start with original data
+        updated_processing_map_type = processing_map_type # Start with original type
+
+        # Gloss-to-Rough
+        # Check if the base type is Gloss (before suffix)
+        base_map_type_match = re.match(r"(MAP_GLOSS)", processing_map_type)
+        if base_map_type_match:
+            log.info(f"{log_prefix}: Applying Gloss-to-Rough conversion.")
+            inversion_succeeded = False
+            if np.issubdtype(current_image_data.dtype, np.floating):
+                current_image_data = 1.0 - current_image_data
+                current_image_data = np.clip(current_image_data, 0.0, 1.0)
+                log.debug(f"{log_prefix}: Inverted float image data for Gloss->Rough.")
+                inversion_succeeded = True
+            elif np.issubdtype(current_image_data.dtype, np.integer):
+                max_val = np.iinfo(current_image_data.dtype).max
+                current_image_data = max_val - current_image_data
+                log.debug(f"{log_prefix}: Inverted integer image data (max_val: {max_val}) for Gloss->Rough.")
+                inversion_succeeded = True
+            else:
+                log.error(f"{log_prefix}: Unsupported image data type {current_image_data.dtype} for GLOSS map. Cannot invert.")
+                transformation_notes.append("Gloss-to-Rough FAILED (unsupported dtype)")
+
+            if inversion_succeeded:
+                # Update the type string itself (e.g., MAP_GLOSS-1 -> MAP_ROUGH-1)
+                updated_processing_map_type = processing_map_type.replace("GLOSS", "ROUGH")
+                log.info(f"{log_prefix}: Map type updated: '{processing_map_type}' -> '{updated_processing_map_type}'")
+                transformation_notes.append("Gloss-to-Rough applied")
+
+        # Normal Green Invert
+        # Check if the base type is Normal (before suffix)
+        base_map_type_match_nrm = re.match(r"(MAP_NRM)", processing_map_type)
+        if base_map_type_match_nrm and invert_normal_green:
+            log.info(f"{log_prefix}: Applying Normal Map Green Channel Inversion (Global Setting).")
+            current_image_data = ipu.invert_normal_map_green_channel(current_image_data)
+            transformation_notes.append("Normal Green Inverted (Global)")
+
+        return current_image_data, updated_processing_map_type, transformation_notes
+
    # --- Execute Method ---

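The `_apply_in_memory_transformations` method added above performs Gloss-to-Rough as a dtype-aware inversion: `1.0 - x` (clipped) for float data, `max - x` for integer data. A standalone sketch of just that inversion rule (the function name `gloss_to_rough` is illustrative, not from the repo):

```python
import numpy as np

def gloss_to_rough(image: np.ndarray) -> np.ndarray:
    """Invert a gloss map into a rough map: 1.0 - x for floats, dtype_max - x for ints."""
    if np.issubdtype(image.dtype, np.floating):
        return np.clip(1.0 - image, 0.0, 1.0)
    if np.issubdtype(image.dtype, np.integer):
        max_val = np.iinfo(image.dtype).max  # e.g. 255 for uint8, 65535 for uint16
        return (max_val - image).astype(image.dtype)
    raise TypeError(f"Unsupported dtype for gloss inversion: {image.dtype}")

print(gloss_to_rough(np.array([[0, 128, 255]], dtype=np.uint8)).tolist())  # [[255, 127, 0]]
```

Handling float and integer dtypes separately matters because 8-bit, 16-bit, and floating-point maps all pass through the same pipeline; a single `1.0 - x` would silently corrupt integer data.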
@@ -132,13 +183,6 @@ class RegularMapProcessorStage(ProcessingStage):
            log.error(f"{log_prefix}: {result.error_message}")
            return result # Early exit

-        # Explicitly skip if the determined type doesn't start with "MAP_"
-        if not initial_internal_map_type.startswith("MAP_"):
-            result.status = "Skipped (Invalid Type)"
-            result.error_message = f"FileRule item_type '{initial_internal_map_type}' does not start with 'MAP_'. Skipping processing."
-            log.warning(f"{log_prefix}: {result.error_message}")
-            return result # Early exit
-
        processing_map_type = self._get_suffixed_internal_map_type(
            context.asset_rule, file_rule, initial_internal_map_type, respect_variant_map_types, asset_name_for_log
        )
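The guard removed in this hunk skipped any file rule whose resolved internal type lacked the `MAP_` prefix before the suffixing step ran. A minimal sketch of that predicate (the helper name and the non-map sample type `MODEL_FBX` are hypothetical):

```python
def should_skip_file_rule(internal_map_type: str) -> bool:
    """True when the resolved internal type lacks the MAP_ prefix."""
    return not internal_map_type.startswith("MAP_")

print(should_skip_file_rule("MAP_NRM"))    # False
print(should_skip_file_rule("MODEL_FBX"))  # True
```

With the guard gone, non-`MAP_` types now flow into `_get_suffixed_internal_map_type` instead of being rejected up front.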
@@ -186,7 +230,7 @@ class RegularMapProcessorStage(ProcessingStage):
            result.original_bit_depth = None # Indicate failure to determine

        # --- Apply Transformations ---
-        transformed_image_data, final_map_type, transform_notes = ipu.apply_common_map_transformations(
+        transformed_image_data, final_map_type, transform_notes = self._apply_in_memory_transformations(
            source_image_data.copy(), # Pass a copy to avoid modifying original load
            processing_map_type,
            invert_normal_green,
@@ -427,89 +427,3 @@ def save_image(
    except Exception: # as e:
        # print(f"Error saving image {path_obj}: {e}") # Optional: for debugging utils
        return False
-
-# --- Common Map Transformations ---
-
-import re
-import logging
-
-ipu_log = logging.getLogger(__name__)
-
-def apply_common_map_transformations(
-    image_data: np.ndarray,
-    processing_map_type: str, # The potentially suffixed internal type
-    invert_normal_green: bool,
-    file_type_definitions: Dict[str, Dict],
-    log_prefix: str
-) -> Tuple[np.ndarray, str, List[str]]:
-    """
-    Applies common in-memory transformations (Gloss-to-Rough, Normal Green Invert).
-    Returns potentially transformed image data, potentially updated map type, and notes.
-    """
-    transformation_notes = []
-    current_image_data = image_data # Start with original data
-    updated_processing_map_type = processing_map_type # Start with original type
-
-    # Gloss-to-Rough
-    # Check if the base type is Gloss (before suffix)
-    base_map_type_match = re.match(r"(MAP_GLOSS)", processing_map_type)
-    if base_map_type_match:
-        ipu_log.info(f"{log_prefix}: Applying Gloss-to-Rough conversion.")
-        inversion_succeeded = False
-        if np.issubdtype(current_image_data.dtype, np.floating):
-            current_image_data = 1.0 - current_image_data
-            current_image_data = np.clip(current_image_data, 0.0, 1.0)
-            ipu_log.debug(f"{log_prefix}: Inverted float image data for Gloss->Rough.")
-            inversion_succeeded = True
-        elif np.issubdtype(current_image_data.dtype, np.integer):
-            max_val = np.iinfo(current_image_data.dtype).max
-            current_image_data = max_val - current_image_data
-            ipu_log.debug(f"{log_prefix}: Inverted integer image data (max_val: {max_val}) for Gloss->Rough.")
-            inversion_succeeded = True
-        else:
-            ipu_log.error(f"{log_prefix}: Unsupported image data type {current_image_data.dtype} for GLOSS map. Cannot invert.")
-            transformation_notes.append("Gloss-to-Rough FAILED (unsupported dtype)")
-
-        if inversion_succeeded:
-            # Update the type string itself (e.g., MAP_GLOSS-1 -> MAP_ROUGH-1)
-            updated_processing_map_type = processing_map_type.replace("GLOSS", "ROUGH")
-            ipu_log.info(f"{log_prefix}: Map type updated: '{processing_map_type}' -> '{updated_processing_map_type}'")
-            transformation_notes.append("Gloss-to-Rough applied")
-
-    # Normal Green Invert
-    # Check if the base type is Normal (before suffix)
-    base_map_type_match_nrm = re.match(r"(MAP_NRM)", processing_map_type)
-    if base_map_type_match_nrm and invert_normal_green:
-        ipu_log.info(f"{log_prefix}: Applying Normal Map Green Channel Inversion (Global Setting).")
-        current_image_data = invert_normal_map_green_channel(current_image_data)
-        transformation_notes.append("Normal Green Inverted (Global)")
-
-    return current_image_data, updated_processing_map_type, transformation_notes
-
-# --- Normal Map Utilities ---
-
-def invert_normal_map_green_channel(normal_map: np.ndarray) -> np.ndarray:
-    """
-    Inverts the green channel of a normal map.
-    Assumes the normal map is in RGB or RGBA format (channel order R, G, B, A).
-    """
-    if normal_map is None or len(normal_map.shape) < 3 or normal_map.shape[2] < 3:
-        # Not a valid color image with at least 3 channels
-        return normal_map
-
-    # Ensure data is mutable
-    inverted_map = normal_map.copy()
-
-    # Invert the green channel (index 1)
-    # Handle different data types
-    if np.issubdtype(inverted_map.dtype, np.floating):
-        inverted_map[:, :, 1] = 1.0 - inverted_map[:, :, 1]
-    elif np.issubdtype(inverted_map.dtype, np.integer):
-        max_val = np.iinfo(inverted_map.dtype).max
-        inverted_map[:, :, 1] = max_val - inverted_map[:, :, 1]
-    else:
-        # Unsupported dtype, return original
-        print(f"Warning: Unsupported dtype {inverted_map.dtype} for normal map green channel inversion.")
-        return normal_map
-
-    return inverted_map
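This hunk removes `invert_normal_map_green_channel` from the image-processing utilities along with `apply_common_map_transformations`. A standalone sketch of the green-channel inversion shown in the removed code, for reference (the function name `invert_green` is illustrative):

```python
import numpy as np

def invert_green(normal_map: np.ndarray) -> np.ndarray:
    """Flip channel index 1 (green) of an RGB/RGBA normal map; other channels untouched."""
    if normal_map is None or normal_map.ndim < 3 or normal_map.shape[2] < 3:
        return normal_map  # not a 3+ channel image; leave untouched
    inverted = normal_map.copy()
    if np.issubdtype(inverted.dtype, np.floating):
        inverted[:, :, 1] = 1.0 - inverted[:, :, 1]
    elif np.issubdtype(inverted.dtype, np.integer):
        inverted[:, :, 1] = np.iinfo(inverted.dtype).max - inverted[:, :, 1]
    else:
        return normal_map  # unsupported dtype: return the original unchanged
    return inverted

nm = np.zeros((1, 1, 3), dtype=np.uint8)
nm[0, 0] = (128, 200, 255)
print(invert_green(nm)[0, 0].tolist())  # [128, 55, 255]
```

This flip converts a normal map between OpenGL-style (Y+) and DirectX-style (Y-) green-channel conventions; only the G channel changes, so red and blue survive unmodified.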