24 Commits

Author SHA1 Message Date
588766ad0a Merge pull request 'Bugfixes from dev' (#68) from Dev into Stable
Reviewed-on: #68
2025-05-15 09:14:42 +02:00
fe844a2714 Merge remote-tracking branch 'origin/GUI-and-Configs' into Dev 2025-05-15 09:13:30 +02:00
3927f8e6c0 Merge pull request 'Prototype > PreAlpha' (#67) from Dev into Stable
Reviewed-on: #67
2025-05-15 09:10:53 +02:00
ca92c72070 CONPORT implementation - Autotest fix 2025-05-14 23:19:35 +02:00
85e94a3d0d Debugsession N2 - New fallback for LOWRES images 2025-05-14 18:07:28 +02:00
ce1d8c770c Debugsession N1 2025-05-14 16:46:09 +02:00
dfe6500141 Merge branch 'Dev' into GUI-and-Configs 2025-05-14 14:56:13 +02:00
58eb10b7dc AutoTest Implementation 2025-05-14 14:55:30 +02:00
87673507d8 New Definitions editor 2025-05-13 13:08:52 +02:00
344ae078a8 UI Updates - Error with Definitions 2025-05-13 11:54:22 +02:00
dec5d7d27f Config Updates - User settings - Saving Methods 2025-05-13 10:32:19 +02:00
383e904e1a Merge pull request 'Processing-Refactor' (#63) from Processing-Refactor into Dev
Reviewed-on: #63
2025-05-13 09:25:06 +02:00
6e7daf260a Metadata reformat done 2025-05-13 09:21:38 +02:00
1cd81cb87a Metadata reformatting 2025-05-13 09:15:43 +02:00
f800bb25a9 channelpacking now works 2025-05-13 04:01:38 +02:00
35a7221f57 Cleanup of inconsistencies 2025-05-13 03:07:00 +02:00
0de4db1826 Fixed inconcistencies - only processes MAP_ files now 2025-05-13 02:52:07 +02:00
b441174076 Processing Documentation Update 2025-05-13 02:28:42 +02:00
c2ad299ce2 Various Attempted fixes 2025-05-12 23:32:35 +02:00
528d9be47f Closer to feature parity - missing merge still 2025-05-12 23:03:26 +02:00
81d8404576 yet another processing refactor :3 Mostly works 2025-05-12 22:46:49 +02:00
ab4db1b8bd BugFixes 2025-05-12 16:49:57 +02:00
06552216d5 Logic Update - Perform MapType transforms before merging 2025-05-12 14:22:01 +02:00
4ffb2ff78c Pipeline simplification - Needs testing! 2025-05-12 13:31:58 +02:00
64 changed files with 7137 additions and 2600 deletions

4
.gitignore vendored
View File

@@ -30,6 +30,6 @@ Thumbs.db
gui/__pycache__
__pycache__
Testfiles
Testfiles/
Testfiles/TestOutputs
Testfiles_

45
.roo/mcp.json Normal file
View File

@@ -0,0 +1,45 @@
{
"mcpServers": {
"conport": {
"command": "C:\\Users\\theis\\context-portal\\.venv\\Scripts\\python.exe",
"args": [
"C:\\Users\\theis\\context-portal\\src\\context_portal_mcp\\main.py",
"--mode",
"stdio",
"--workspace_id",
"${workspaceFolder}"
],
"alwaysAllow": [
"get_product_context",
"update_product_context",
"get_active_context",
"update_active_context",
"log_decision",
"get_decisions",
"search_decisions_fts",
"log_progress",
"get_progress",
"update_progress",
"delete_progress_by_id",
"log_system_pattern",
"get_system_patterns",
"log_custom_data",
"get_custom_data",
"delete_custom_data",
"search_project_glossary_fts",
"export_conport_to_markdown",
"import_markdown_to_conport",
"link_conport_items",
"search_custom_data_value_fts",
"get_linked_items",
"batch_log_items",
"get_item_history",
"delete_decision_by_id",
"delete_system_pattern_by_id",
"get_conport_schema",
"get_recent_activity_summary",
"semantic_search_conport"
]
}
}
}

3
.roomodes Normal file
View File

@@ -0,0 +1,3 @@
{
"customModes": []
}

112
AUTOTEST_GUI_PLAN.md Normal file
View File

@@ -0,0 +1,112 @@
# Plan for Autotest GUI Mode Implementation
**I. Objective:**
Create an `autotest.py` script that can launch the Asset Processor GUI headlessly, load a predefined asset (`.zip`), select a predefined preset, verify the predicted rule structure against an expected JSON, trigger processing to a predefined output directory, check the output, and analyze logs for errors or specific messages. This serves as a sanity check for core GUI-driven workflows.
**II. `TestFiles` Directory:**
A new directory named `TestFiles` will be created in the project root (`c:/Users/Theis/Assetprocessor/Asset-Frameworker/TestFiles/`). This directory will house:
* Sample asset `.zip` files for testing (e.g., `TestFiles/SampleAsset1.zip`).
* Expected rule structure JSON files (e.g., `TestFiles/SampleAsset1_PresetX_expected_rules.json`).
* A subdirectory for test outputs (e.g., `TestFiles/TestOutputs/`).
**III. `autotest.py` Script:**
1. **Location:** `c:/Users/Theis/Assetprocessor/Asset-Frameworker/autotest.py` (or `scripts/autotest.py`).
2. **Command-Line Arguments (with defaults pointing to `TestFiles/`):**
* `--zipfile`: Path to the test asset. Default: `TestFiles/default_test_asset.zip`.
* `--preset`: Name of the preset. Default: `DefaultTestPreset`.
* `--expectedrules`: Path to expected rules JSON. Default: `TestFiles/default_test_asset_rules.json`.
* `--outputdir`: Path for processing output. Default: `TestFiles/TestOutputs/DefaultTestOutput`.
* `--search` (optional): Log search term. Default: `None`.
* `--additional-lines` (optional): Context lines for log search. Default: `0`.
3. **Core Structure:**
* Imports necessary modules from the main application and PySide6.
* Adds project root to `sys.path` for imports.
* `AutoTester` class:
* **`__init__(self, app_instance: App)`:**
* Stores `app_instance` and `main_window`.
* Initializes `QEventLoop`.
* Connects `app_instance.all_tasks_finished` to `self._on_all_tasks_finished`.
* Loads expected rules from the `--expectedrules` file.
* **`run_test(self)`:** Orchestrates the test steps sequentially:
1. Load ZIP (`main_window.add_input_paths()`).
2. Select Preset (`main_window.preset_editor_widget.editor_preset_list.setCurrentItem()`).
3. Await Prediction (using `QTimer` to poll `main_window._pending_predictions`, manage with `QEventLoop`).
4. Retrieve & Compare Rulelist:
* Get actual rules: `main_window.unified_model.get_all_source_rules()`.
* Convert actual rules to comparable dict (`_convert_rules_to_comparable()`).
* Compare with loaded expected rules (`_compare_rules()`). If mismatch, log and fail.
5. Start Processing (emit `main_window.start_backend_processing` with rules and output settings).
6. Await Processing (use `QEventLoop` waiting for `_on_all_tasks_finished`).
7. Check Output Path (verify existence of output dir, list contents, basic sanity checks like non-emptiness or presence of key asset folders).
8. Retrieve & Analyze Logs (`main_window.log_console.log_console_output.toPlainText()`, filter by `--search`, check for tracebacks).
9. Report result and call `cleanup_and_exit()`.
* **`_check_prediction_status(self)`:** Slot for prediction polling timer.
* **`_on_all_tasks_finished(self, processed_count, skipped_count, failed_count)`:** Slot for `App.all_tasks_finished` signal.
* **`_convert_rules_to_comparable(self, source_rules_list: List[SourceRule]) -> dict`:** Converts `SourceRule` objects to the JSON structure defined below.
* **`_compare_rules(self, actual_rules_data: dict, expected_rules_data: dict) -> bool`:** Implements Option 1 comparison logic (sketched below, after this section):
* Errors if an expected field is missing or its value mismatches.
* Logs (but doesn't error on) fields present in actual but not in expected.
* **`_process_and_display_logs(self, logs_text: str)`:** Handles log filtering/display.
* **`cleanup_and_exit(self, success=True)`:** Quits `QCoreApplication` and `sys.exit()`.
* `main()` function:
* Parses CLI arguments.
* Initializes `QApplication`.
* Instantiates `main.App()` (does *not* show the GUI).
* Instantiates `AutoTester(app_instance)`.
* Uses `QTimer.singleShot(0, tester.run_test)` to start the test.
* Runs `q_app.exec()`.
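The Option 1 comparison described above can be sketched as follows. This is a minimal recursive sketch; the real `_compare_rules` operates on the dict produced by `_convert_rules_to_comparable()` and may handle the nested asset/file lists with more care:
```python
import logging

def compare_rules(actual: dict, expected: dict, path: str = "") -> bool:
    """Option 1 logic: every expected field must exist and match; fields
    present only on the actual side are logged but do not fail the test."""
    ok = True
    for key, expected_value in expected.items():
        where = f"{path}.{key}" if path else key
        if key not in actual:
            logging.error("Missing expected field: %s", where)
            ok = False
        elif isinstance(expected_value, dict) and isinstance(actual[key], dict):
            ok = compare_rules(actual[key], expected_value, where) and ok
        elif actual[key] != expected_value:
            # Lists (e.g. "assets", "files") are compared by simple equality
            # in this sketch; a fuller version would recurse element-wise.
            logging.error("Mismatch at %s: %r != %r", where, actual[key], expected_value)
            ok = False
    for key in actual.keys() - expected.keys():
        logging.info("Extra field in actual rules (ignored): %s",
                     f"{path}.{key}" if path else key)
    return ok
```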
**IV. `expected_rules.json` Structure (Revised):**
Located in `TestFiles/`. Example: `TestFiles/SampleAsset1_PresetX_expected_rules.json`.
```json
{
"source_rules": [
{
"input_path": "SampleAsset1.zip",
"supplier_identifier": "ExpectedSupplier",
"preset_name": "PresetX",
"assets": [
{
"asset_name": "AssetNameFromPrediction",
"asset_type": "Prop",
"files": [
{
"file_path": "relative/path/to/file1.png",
"item_type": "MAP_COL",
"target_asset_name_override": null
}
]
}
]
}
]
}
```
**V. Mermaid Diagram of Autotest Flow:**
```mermaid
graph TD
    A["Start autotest.py with CLI Args (defaults to TestFiles/)"] --> B{"Setup Args & Logging"};
    B --> C["Init QApplication & main.App (GUI Headless)"];
    C --> D["Instantiate AutoTester(app_instance)"];
    D --> E["QTimer.singleShot -> AutoTester.run_test()"];
    subgraph run_test["AutoTester.run_test()"]
        E --> F["Load Expected Rules from --expectedrules JSON"];
        F --> G["Load ZIP (--zipfile) via main_window.add_input_paths()"];
        G --> H["Select Preset (--preset) via main_window.preset_editor_widget"];
        H --> I["Await Prediction (Poll main_window._pending_predictions via QTimer & QEventLoop)"];
        I -- Prediction Done --> J["Get Actual Rules from main_window.unified_model"];
        J --> K["Convert Actual Rules to Comparable JSON Structure"];
        K --> L{"Compare Actual vs Expected Rules (Option 1 Logic)"};
        L -- Match --> M["Start Processing (Emit main_window.start_backend_processing with --outputdir)"];
        L -- Mismatch --> ZFAIL["Log Mismatch & Call cleanup_and_exit(False)"];
        M --> N["Await Processing (QEventLoop for App.all_tasks_finished signal)"];
        N -- Processing Done --> O["Check Output Dir (--outputdir): Exists? Not Empty? Key Asset Folders?"];
        O --> P["Retrieve & Analyze Logs (Search, Tracebacks)"];
        P --> Q["Log Test Success & Call cleanup_and_exit(True)"];
    end
    ZFAIL --> ZEND["AutoTester.cleanup_and_exit() -> QCoreApplication.quit() & sys.exit()"];
    Q --> ZEND;
```

View File

@@ -16,6 +16,7 @@ This document outlines the key features of the Asset Processor Tool.
* Saves maps in appropriate formats (JPG, PNG, EXR) based on complex rules involving map type (`FORCE_LOSSLESS_MAP_TYPES`), resolution (`RESOLUTION_THRESHOLD_FOR_JPG`), bit depth, and source format.
* Calculates basic image statistics (Min/Max/Mean) for a reference resolution.
* Calculates and stores the relative aspect ratio change string in metadata (e.g., `EVEN`, `X150`, `Y125`).
* **Low-Resolution Fallback:** If enabled (`ENABLE_LOW_RESOLUTION_FALLBACK`), automatically saves an additional "LOWRES" variant of source images if their largest dimension is below a configurable threshold (`LOW_RESOLUTION_THRESHOLD`). This "LOWRES" variant uses the original image dimensions and is saved in addition to any standard resolution outputs.
* **Channel Merging:** Combines channels from different maps into packed textures (e.g., NRMRGH) based on preset rules (`MAP_MERGE_RULES` in `config.py`).
* **Metadata Generation:** Creates a `metadata.json` file for each asset containing details about maps, category, archetype, aspect ratio change, processing settings, etc.
* **Output Organization:** Creates a clean, structured output directory (`<output_base>/<supplier>/<asset_name>/`).

View File

@@ -13,9 +13,21 @@ The `app_settings.json` file is structured into several key sections, including:
* `ASSET_TYPE_DEFINITIONS`: Defines known asset types (like Surface, Model, Decal) and their properties.
* `MAP_MERGE_RULES`: Defines how multiple input maps can be merged into a single output map (e.g., combining Normal and Roughness into one).
### Low-Resolution Fallback Settings
These settings control the generation of low-resolution "fallback" variants for source images:
* `ENABLE_LOW_RESOLUTION_FALLBACK` (boolean, default: `true`):
* If `true`, the tool will generate an additional "LOWRES" variant for source images whose largest dimension is smaller than the `LOW_RESOLUTION_THRESHOLD`.
* This "LOWRES" variant uses the original dimensions of the source image and is saved in addition to any other standard resolution outputs (e.g., 1K, PREVIEW).
* If `false`, this feature is disabled.
* `LOW_RESOLUTION_THRESHOLD` (integer, default: `512`):
* Defines the pixel dimension (for the largest side of an image) below which the "LOWRES" fallback variant will be generated (if enabled).
* For example, if set to `512`, any source image smaller than 512x512 (e.g., 256x512, 128x128) will have a "LOWRES" variant created.
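For reference, these two keys appear in `app_settings.json` as a fragment like the following (surrounding settings omitted):
```json
{
  "ENABLE_LOW_RESOLUTION_FALLBACK": true,
  "LOW_RESOLUTION_THRESHOLD": 512
}
```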
### LLM Predictor Settings
For users who wish to utilize the experimental LLM Predictor feature, the following settings are available in `config/app_settings.json`:
For users who wish to utilize the experimental LLM Predictor feature, the following settings are available in `config/llm_settings.json`:
* `llm_endpoint_url`: The URL of the LLM API endpoint. For local LLMs like LM Studio or Ollama, this will typically be `http://localhost:<port>/v1`. Consult your LLM server documentation for the exact endpoint.
* `llm_api_key`: The API key required to access the LLM endpoint. Some local LLM servers may not require a key, in which case this can be left empty.
@@ -23,15 +35,39 @@ For users who wish to utilize the experimental LLM Predictor feature, the follow
* `llm_temperature`: Controls the randomness of the LLM's output. Lower values (e.g., 0.1-0.5) make the output more deterministic and focused, while higher values (e.g., 0.6-1.0) make it more creative and varied. For prediction tasks, lower temperatures are generally recommended.
* `llm_request_timeout`: The maximum time (in seconds) to wait for a response from the LLM API. Adjust this based on the performance of your LLM server and the complexity of the requests.
Note that the `llm_predictor_prompt` and `llm_predictor_examples` settings are also present in `app_settings.json`. These define the instructions and examples provided to the LLM for prediction. While they can be viewed here, they are primarily intended for developer reference and tuning the LLM's behavior, and most users will not need to modify them.
Note that the `llm_predictor_prompt` and `llm_predictor_examples` settings are also present in `config/llm_settings.json`. These define the instructions and examples provided to the LLM for prediction. While they can be viewed here, they are primarily intended for developer reference and tuning the LLM's behavior, and most users will not need to modify them directly via the file. These settings are editable via the LLM Editor panel in the main GUI when the LLM interpretation mode is selected.
## GUI Configuration Editor
## Application Preferences (`config/app_settings.json` overrides)
You can modify the `app_settings.json` file using the built-in GUI editor. Access it via the **Edit** -> **Preferences...** menu.
You can modify user-overridable application settings using the built-in GUI editor. These settings are loaded from `config/app_settings.json` and saved as overrides in `config/user_settings.json`. Access it via the **Edit** -> **Preferences...** menu.
This editor provides a tabbed interface (e.g., "General", "Output & Naming") to view and change the core application settings defined in `app_settings.json`. Settings in the editor directly correspond to the structure and values within the JSON file. Note that any changes made through the GUI editor require an application restart to take effect.
This editor provides a tabbed interface to view and change various application behaviors. The tabs include:
* **General:** Basic settings like output base directory and temporary file prefix.
* **Output & Naming:** Settings controlling output directory and filename patterns, and how variants are handled.
* **Image Processing:** Settings related to image resolution definitions, compression levels, and format choices.
* **Map Merging:** Configuration for how multiple input maps are combined into single output maps.
* **Postprocess Scripts:** Paths to default Blender files for post-processing.
*(Ideally, a screenshot of the GUI Configuration Editor would be included here.)*
Note that this editor focuses on user-specific overrides of core application settings. **Asset Type Definitions, File Type Definitions, and Supplier Settings are managed in a separate Definitions Editor.**
Any changes made through the Preferences editor require an application restart to take effect.
*(Ideally, a screenshot of the Application Preferences editor would be included here.)*
## Definitions Editor (`config/asset_type_definitions.json`, `config/file_type_definitions.json`, `config/suppliers.json`)
Core application definitions that are separate from general user preferences are managed in the dedicated Definitions Editor. This includes defining known asset types, file types, and configuring settings specific to different suppliers. Access it via the **Edit** -> **Edit Definitions...** menu.
The editor is organized into three tabs:
* **Asset Type Definitions:** Define the different categories of assets (e.g., Surface, Model, Decal). For each asset type, you can configure its description, a color for UI representation, and example usage strings.
* **File Type Definitions:** Define the specific types of files the tool recognizes (e.g., MAP_COL, MAP_NRM, MODEL). For each file type, you can configure its description, a color, example keywords/patterns, a standard type alias, bit depth handling rules, whether it's grayscale, and an optional keybind for quick assignment in the GUI.
* **Supplier Settings:** Configure settings that are specific to assets originating from different suppliers. Currently, this includes the "Normal Map Type" (OpenGL or DirectX) used for normal maps from that supplier.
Each tab presents a list of the defined items on the left (Asset Types, File Types, or Suppliers). Selecting an item in the list displays its configurable details on the right. Buttons are provided to add new definitions or remove existing ones.
Changes made in the Definitions Editor are saved directly to their respective configuration files (`config/asset_type_definitions.json`, `config/file_type_definitions.json`, and `config/suppliers.json`). Some changes may require an application restart to take full effect in processing logic.
*(Ideally, screenshots of the Definitions Editor tabs would be included here.)*
## Preset Files (`presets/*.json`)

View File

@@ -12,7 +12,10 @@ python -m gui.main_window
## Interface Overview
* **Menu Bar:** The "Edit" menu contains the "Preferences..." option to open the GUI Configuration Editor. The "View" menu allows you to toggle the visibility of the Log Console and the Detailed File Preview.
* **Menu Bar:** The "Edit" menu contains options to configure application settings and definitions:
* **Preferences...:** Opens the Application Preferences editor for user-overridable settings (saved to `config/user_settings.json`).
* **Edit Definitions...:** Opens the Definitions Editor for managing Asset Type Definitions, File Type Definitions, and Supplier Settings (saved to their respective files).
The "View" menu allows you to toggle the visibility of the Log Console and the Detailed File Preview.
* **Preset Editor Panel (Left):**
* **Optional Log Console:** Displays application logs (toggle via View menu).
* **Preset List:** Create, delete, load, edit, and save presets. On startup, the "-- Select a Preset --" item is explicitly selected. You must select a specific preset from this list to load it into the editor below, enable the detailed file preview, and enable the "Start Processing" button.

View File

@@ -2,7 +2,7 @@
This document describes the directory structure and contents of the processed assets generated by the Asset Processor Tool.
Processed assets are saved to a location determined by two global settings defined in `config/app_settings.json`:
Processed assets are saved to a location determined by two global settings, `OUTPUT_DIRECTORY_PATTERN` and `OUTPUT_FILENAME_PATTERN`, defined in `config/app_settings.json`. These settings can be overridden by the user via `config/user_settings.json`.
* `OUTPUT_DIRECTORY_PATTERN`: Defines the directory structure *within* the Base Output Directory.
* `OUTPUT_FILENAME_PATTERN`: Defines the naming convention for individual files *within* the directory created by `OUTPUT_DIRECTORY_PATTERN`.
@@ -23,7 +23,7 @@ The following tokens can be used in both `OUTPUT_DIRECTORY_PATTERN` and `OUTPUT_
* `[Time]`: Current time (`HHMMSS`).
* `[Sha5]`: The first 5 characters of the SHA-256 hash of the original input source file (e.g., the source zip archive).
* `[ApplicationPath]`: Absolute path to the application directory.
* `[maptype]`: The standardized map type identifier (e.g., `COL` for Color/Albedo, `NRM` for Normal, `RGH` for Roughness). This is derived from the `standard_type` defined in the application's `FILE_TYPE_DEFINITIONS` (see `config/app_settings.json`) and may include a variant suffix if applicable. (Primarily for filename pattern)
* `[maptype]`: The standardized map type identifier (e.g., `COL` for Color/Albedo, `NRM` for Normal, `RGH` for Roughness). This is derived from the `standard_type` defined in the application's `FILE_TYPE_DEFINITIONS` (managed in `config/file_type_definitions.json` via the Definitions Editor) and may include a variant suffix if applicable. (Primarily for filename pattern)
* `[dimensions]`: Pixel dimensions (e.g., `2048x2048`).
* `[bitdepth]`: Output bit depth (e.g., `8bit`, `16bit`).
* `[category]`: Asset category determined by preset rules.
@@ -51,13 +51,14 @@ The final output path is constructed by combining the Base Output Directory (set
* `OUTPUT_FILENAME_PATTERN`: `[maptype].[ext]`
* Resulting Path for a Normal map: `Output/Texture/Wood/WoodFloor001/Normal.exr`
The `<output_base_directory>` (the root folder where processing output starts) is configured separately via the GUI (**Edit** -> **Preferences...** -> **Output & Naming** tab -> **Base Output Directory**) or the `--output` CLI argument. The `OUTPUT_DIRECTORY_PATTERN` defines the structure *within* this base directory, and `OUTPUT_FILENAME_PATTERN` defines the filenames within that structure.
The `<output_base_directory>` (the root folder where processing output starts) is configured separately via the GUI (**Edit** -> **Preferences...** -> **General** tab -> **Output Base Directory**) or the `--output` CLI argument. The `OUTPUT_DIRECTORY_PATTERN` defines the structure *within* this base directory, and `OUTPUT_FILENAME_PATTERN` defines the filenames within that structure.
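To illustrate how the base directory and the two patterns compose, here is a simplified token-substitution sketch; the real engine supports many more tokens and edge cases, and the directory pattern shown is an assumed example:
```python
def build_output_path(base_dir: str, dir_pattern: str, file_pattern: str,
                      tokens: dict) -> str:
    """Expand [token] placeholders in both patterns and join the pieces."""
    def expand(pattern: str) -> str:
        for key, value in tokens.items():
            pattern = pattern.replace(f"[{key}]", str(value))
        return pattern
    return "/".join([base_dir, expand(dir_pattern), expand(file_pattern)])

# Reproduces the documented example path:
print(build_output_path(
    "Output",
    "[category]/[archetype]/[assetname]",  # assumed OUTPUT_DIRECTORY_PATTERN
    "[maptype].[ext]",                     # OUTPUT_FILENAME_PATTERN
    {"category": "Texture", "archetype": "Wood",
     "assetname": "WoodFloor001", "maptype": "Normal", "ext": "exr"},
))  # -> Output/Texture/Wood/WoodFloor001/Normal.exr
```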
## Contents of Each Asset Directory
Each asset directory contains the following:
* Processed texture maps (e.g., `WoodFloor_Albedo_4k.png`, `MetalPanel_Normal_2k.exr`). The exact filenames depend on the `OUTPUT_FILENAME_PATTERN`. These are the resized, format-converted, and bit-depth adjusted texture files.
* **LOWRES Variants:** If the "Low-Resolution Fallback" feature is enabled and a source image's dimensions are below the configured threshold, an additional variant with "LOWRES" as its resolution token (e.g., `MyTexture_COL_LOWRES.png`) will be saved. This variant uses the original dimensions of the source image.
* Merged texture maps (e.g., `WoodFloor_Combined_4k.png`). The exact filenames depend on the `OUTPUT_FILENAME_PATTERN`. These are maps created by combining channels from different source maps based on the configured merge rules.
* Model files (if present in the source asset).
* `metadata.json`: A JSON file containing detailed information about the asset and the processing that was performed. This includes details about the maps (resolutions, formats, bit depths, and for roughness maps, a `derived_from_gloss_filename: true` flag if it was inverted from an original gloss map), merged map details, calculated image statistics, aspect ratio change information, asset category and archetype, the source preset used, and a list of ignored source files. This file is intended for use by downstream tools or scripts (like the Blender integration scripts).

View File

@@ -0,0 +1,83 @@
# User Guide: Usage - Automated GUI Testing (`autotest.py`)
This document explains how to use the `autotest.py` script for automated sanity checks of the Asset Processor Tool's GUI-driven workflow.
## Overview
The `autotest.py` script provides a way to run predefined test scenarios headlessly (without displaying the GUI). It simulates the core user actions: loading an asset, selecting a preset, allowing rules to be predicted, processing the asset, and then checks the results against expectations. This is primarily intended as a developer tool for regression testing and ensuring core functionality remains stable.
## Running the Autotest Script
From the project root directory, you can run the script using Python:
```bash
python autotest.py [OPTIONS]
```
### Command-Line Options
The script accepts several command-line arguments to configure the test run. If not provided, they use predefined default values.
* `--zipfile PATH_TO_ZIP`:
* Specifies the path to the input asset `.zip` file to be used for the test.
* Default: `TestFiles/BoucleChunky001.zip`
* `--preset PRESET_NAME`:
* Specifies the name of the preset to be selected and used for rule prediction and processing.
* Default: `Dinesen`
* `--expectedrules PATH_TO_JSON`:
* Specifies the path to a JSON file containing the expected rule structure that should be generated after the preset is applied to the input asset.
* Default: `TestFiles/test-BoucleChunky001.json`
* `--outputdir PATH_TO_DIR`:
* Specifies the directory where the processed assets will be written.
* Default: `TestFiles/TestOutputs/DefaultTestOutput`
* `--search "SEARCH_TERM"` (optional):
* A string to search for within the application logs generated during the test run. If found, matching log lines (with context) will be highlighted.
* Default: None
* `--additional-lines NUM_LINES` (optional):
* When using `--search`, this specifies how many lines of context before and after each matching log line should be displayed.
* Default: `0`
**Example Usage:**
```bash
# Run with default test files and settings
python autotest.py
# Run with specific test files and search for a log message
python autotest.py --zipfile TestFiles/MySpecificAsset.zip --preset MyPreset --expectedrules TestFiles/MySpecificAsset_rules.json --outputdir TestFiles/TestOutputs/MySpecificOutput --search "Processing complete for asset"
```
## `TestFiles` Directory
The autotest script relies on a directory named `TestFiles` located in the project root. This directory should contain:
* **Test Asset `.zip` files:** The actual asset archives used as input for tests (e.g., `default_test_asset.zip`, `MySpecificAsset.zip`).
* **Expected Rules `.json` files:** JSON files defining the expected rule structure for a given asset and preset combination (e.g., `default_test_asset_rules.json`, `MySpecificAsset_rules.json`). The structure of this file is detailed in the main autotest plan (`AUTOTEST_GUI_PLAN.md`).
* **`TestOutputs/` subdirectory:** This is the default parent directory where the autotest script will create specific output folders for each test run (e.g., `TestFiles/TestOutputs/DefaultTestOutput/`).
## Test Workflow
When executed, `autotest.py` performs the following steps:
1. **Initialization:** Parses command-line arguments and initializes the main application components headlessly.
2. **Load Expected Rules:** Loads the `expected_rules.json` file.
3. **Load Asset:** Loads the specified `.zip` file into the application.
4. **Select Preset:** Selects the specified preset. This triggers the internal rule prediction process.
5. **Await Prediction:** Waits for the rule prediction to complete.
6. **Compare Rules:** Retrieves the predicted rules from the application and compares them against the loaded expected rules. If there's a mismatch, the test typically fails at this point.
7. **Start Processing:** If the rules match, it initiates the asset processing pipeline, directing output to the specified output directory.
8. **Await Processing:** Waits for all backend processing tasks to complete.
9. **Check Output:** Verifies the existence of the output directory and lists its contents. Basic checks ensure some output was generated.
10. **Analyze Logs:** Retrieves logs from the application. If a search term was provided, it filters and displays relevant log portions. It also checks for Python tracebacks, which usually indicate a failure.
11. **Report Result:** Prints a summary of the test outcome (success or failure) and exits with an appropriate status code (0 for success, 1 for failure).
## Interpreting Results
* **Console Output:** The script will log its progress and the results of each step to the console.
* **Log Analysis:** Pay attention to the log output, especially if a `--search` term was used or if any tracebacks are reported.
* **Exit Code:**
* `0`: Test completed successfully.
* `1`: Test failed at some point (e.g., rule mismatch, processing error, traceback found).
* **Output Directory:** Inspect the contents of the specified output directory to manually verify the processed assets if needed.
This automated test helps ensure the stability of the core processing logic when driven by GUI-equivalent actions.
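Because the script signals success or failure through its exit code, it slots directly into shell-based automation; a minimal sketch:
```bash
# Fail a CI step when the autotest reports a failure (exit code 1)
python autotest.py --zipfile TestFiles/BoucleChunky001.zip --preset Dinesen
if [ $? -ne 0 ]; then
    echo "Autotest failed; see the log output above."
    exit 1
fi
```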

View File

@@ -2,43 +2,144 @@
This document provides technical details about the configuration system and the structure of preset files for developers working on the Asset Processor Tool.
## Configuration Flow
## Configuration System Overview
The tool utilizes a two-tiered configuration system managed by the `configuration.py` module:
The tool's configuration is managed by the `configuration.py` module and loaded from several JSON files, providing a layered approach for defaults, user overrides, definitions, and source-specific presets.
1. **Application Settings (`config/app_settings.json`):** This JSON file defines the core global default settings, constants, and rules that apply generally across different asset sources (e.g., the global `OUTPUT_DIRECTORY_PATTERN` and `OUTPUT_FILENAME_PATTERN`, standard image resolutions, map merge rules, output format rules, Blender paths, `FILE_TYPE_DEFINITIONS`, `ASSET_TYPE_DEFINITIONS`). See the [User Guide: Output Structure](../01_User_Guide/09_Output_Structure.md#available-tokens) for a list of available tokens for these patterns.
* **`FILE_TYPE_DEFINITIONS` Enhancements:**
* **`keybind` Property:** Each file type object within `FILE_TYPE_DEFINITIONS` can now optionally include a `keybind` property. This property accepts a single character string (e.g., `"C"`, `"R"`) representing the keyboard key. In the GUI, this key (typically combined with `Ctrl`, or standalone like `F2` for asset naming) is used as a shortcut to set or toggle the corresponding file type for selected items in the Preview Table.
*Example:*
```json
"MAP_COL": {
"description": "Color/Albedo Map",
"color": [200, 200, 200],
"examples": ["albedo", "col", "basecolor"],
"standard_type": "COL",
"bit_depth_rule": "respect",
"is_grayscale": false,
"keybind": "C"
},
```
* **New File Type `MAP_GLOSS`:** A new standard file type, `MAP_GLOSS`, has been added. It is typically configured as follows:
*Example:*
```json
"MAP_GLOSS": {
"description": "Glossiness Map",
"color": [180, 180, 220],
"examples": ["gloss", "gls"],
"standard_type": "GLOSS",
"bit_depth_rule": "respect",
"is_grayscale": true,
"keybind": "R"
}
```
Note: The `keybind` "R" for `MAP_GLOSS` is often shared with `MAP_ROUGH` to allow toggling between them.
2. **LLM Settings (`config/llm_settings.json`):** This JSON file contains settings specifically related to the LLM predictor, such as the API endpoint, model name, prompt template, and examples. These settings can be edited through the GUI using the `LLMEditorWidget`.
3. **Preset Files (`Presets/*.json`):** These JSON files define supplier-specific rules and overrides. They contain patterns to interpret filenames, classify map types, handle variants, define naming conventions, and specify other source-specific behaviors.
### Configuration Files
The `configuration.py` module contains the `Configuration` class (for loading/merging settings for processing) and standalone functions like `load_base_config()` (for accessing `app_settings.json` directly) and `save_llm_config()` / `save_base_config()` (for writing settings back to files). Note that the old `config.py` file has been deleted.
The tool's configuration is loaded from several JSON files, providing a layered approach for defaults, user overrides, definitions, and source-specific presets.
1. **Application Settings (`config/app_settings.json`):** This JSON file defines the core global default settings, constants, and rules that apply generally across different asset sources (e.g., the global `OUTPUT_DIRECTORY_PATTERN` and `OUTPUT_FILENAME_PATTERN`, standard image resolutions, map merge rules, output format rules, Blender paths, temporary directory prefix, initial scaling mode, merge dimension mismatch strategy). See the [User Guide: Output Structure](../01_User_Guide/09_Output_Structure.md#available-tokens) for a list of available tokens for these patterns.
* *Note:* `ASSET_TYPE_DEFINITIONS` and `FILE_TYPE_DEFINITIONS` are no longer stored here; they have been moved to dedicated files.
* It also includes settings for new features like the "Low-Resolution Fallback":
* `ENABLE_LOW_RESOLUTION_FALLBACK` (boolean): Enables or disables the generation of "LOWRES" variants for small source images. Defaults to `true`.
* `LOW_RESOLUTION_THRESHOLD` (integer): The pixel dimension threshold (largest side) below which a "LOWRES" variant is created if the feature is enabled. Defaults to `512`.
2. **User Settings (`config/user_settings.json`):** This optional JSON file allows users to override specific settings defined in `config/app_settings.json`. If this file exists, its values for corresponding keys will take precedence over the base application settings. This file is primarily managed through the GUI's Application Preferences Editor.
3. **Asset Type Definitions (`config/asset_type_definitions.json`):** This dedicated JSON file contains the definitions for different asset types (e.g., Surface, Model, Decal), including their descriptions, colors for UI representation, and example usage strings.
4. **File Type Definitions (`config/file_type_definitions.json`):** This dedicated JSON file contains the definitions for different file types (specifically texture maps and models), including descriptions, colors for UI representation, examples of keywords/patterns, a standard alias (`standard_type`), bit depth handling rules (`bit_depth_rule`), a grayscale flag (`is_grayscale`), and an optional GUI keybind (`keybind`).
* **`keybind` Property:** Each file type object within `FILE_TYPE_DEFINITIONS` can optionally include a `keybind` property. This property accepts a single character string (e.g., `"C"`, `"R"`) representing the keyboard key. In the GUI, this key (typically combined with `Ctrl`) is used as a shortcut to set or toggle the corresponding file type for selected items in the Preview Table.
*Example:*
```json
"MAP_COL": {
"description": "Color/Albedo Map",
"color": "#ffaa00",
"examples": ["_col.", "_basecolor.", "albedo", "diffuse"],
"standard_type": "COL",
"bit_depth_rule": "force_8bit",
"is_grayscale": false,
"keybind": "C"
},
```
Note: The `bit_depth_rule` property in `FILE_TYPE_DEFINITIONS` is the primary source for determining bit depth handling for a given map type.
5. **Supplier Settings (`config/suppliers.json`):** This JSON file stores settings specific to different asset suppliers. It is now structured as a dictionary where keys are supplier names and values are objects containing supplier-specific configurations.
* **Structure:**
```json
{
"SupplierName1": {
"setting_key1": "value",
"setting_key2": "value"
},
"SupplierName2": {
"setting_key1": "value"
}
}
```
* **`normal_map_type` Property:** A key setting within each supplier's object is `normal_map_type`, specifying whether normal maps from this supplier use "OpenGL" or "DirectX" conventions.
*Example:*
```json
{
"Poliigon": {
"normal_map_type": "DirectX"
},
"Dimensiva": {
"normal_map_type": "OpenGL"
}
}
```
6. **LLM Settings (`config/llm_settings.json`):** This JSON file contains settings specifically related to the LLM predictor, such as the API endpoint, model name, prompt template, and examples. These settings are managed through the GUI using the `LLMEditorWidget`.
7. **Preset Files (`Presets/*.json`):** These JSON files define source-specific rules and overrides. They contain patterns to interpret filenames, classify map types, handle variants, define naming conventions, and specify other source-specific behaviors. Preset settings override values from `app_settings.json` and `user_settings.json` where applicable.
### Configuration Loading and Access
The `configuration.py` module contains the `Configuration` class and standalone functions for loading and saving settings.
* **`Configuration` Class:** This is the primary class used by the processing engine and other core components. When initialized with a `preset_name`, it loads settings in the following order, with later files overriding earlier ones for shared keys:
1. `config/app_settings.json` (Base Defaults)
2. `config/user_settings.json` (User Overrides - if exists)
3. `config/asset_type_definitions.json` (Asset Type Definitions)
4. `config/file_type_definitions.json` (File Type Definitions)
5. `config/llm_settings.json` (LLM Settings)
6. `Presets/{preset_name}.json` (Preset Overrides)
The loaded settings are merged into internal dictionaries, and most are accessible via instance properties (e.g., `config.output_base_dir`, `config.llm_endpoint_url`, `config.get_asset_type_definitions()`). Regex patterns defined in the merged configuration are pre-compiled for performance.
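Conceptually, this layered load behaves like a sequence of dictionary updates, as in the simplified sketch below. The real `Configuration` class additionally validates settings, pre-compiles regex patterns, and performs deeper merging for nested structures:
```python
import json
from pathlib import Path

def load_layered_config(preset_name: str) -> dict:
    """Merge the configuration layers in override order (later layers win)."""
    layers = [
        Path("config/app_settings.json"),
        Path("config/user_settings.json"),           # optional; may not exist
        Path("config/asset_type_definitions.json"),
        Path("config/file_type_definitions.json"),
        Path("config/llm_settings.json"),
        Path(f"Presets/{preset_name}.json"),
    ]
    merged: dict = {}
    for layer in layers:
        if layer.exists():
            merged.update(json.loads(layer.read_text(encoding="utf-8")))
    return merged
```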
* **`load_base_config()` function:** This standalone function is primarily used by the GUI for initial setup and displaying default/user-overridden settings before a specific preset is selected. It loads and merges the following files:
1. `config/app_settings.json`
2. `config/user_settings.json` (if exists)
3. `config/asset_type_definitions.json`
4. `config/file_type_definitions.json`
It returns a single dictionary containing the combined settings and definitions.
* **Saving Functions:**
* `save_base_config(settings_dict)`: Saves the provided dictionary to `config/app_settings.json`. (Used less frequently now for user-driven saves).
* `save_user_config(settings_dict)`: Saves the provided dictionary to `config/user_settings.json`. Used by `ConfigEditorDialog`.
* `save_llm_config(settings_dict)`: Saves the provided dictionary to `config/llm_settings.json`. Used by `LLMEditorWidget`.
## Supplier Management (`config/suppliers.json`)
A file, `config/suppliers.json`, is used to store a persistent list of known supplier names. This file is a simple JSON array of strings.
* **Purpose:** Provides a list of suggestions for the "Supplier" field in the GUI's Unified View, enabling auto-completion.
* **Management:** The GUI's `SupplierSearchDelegate` is responsible for loading this list on startup, adding new, unique supplier names entered by the user, and saving the updated list back to the file.
## GUI Configuration Editors
The GUI provides dedicated editors for modifying configuration files:
* **`ConfigEditorDialog` (`gui/config_editor_dialog.py`):** Edits user-configurable application settings.
* **`LLMEditorWidget` (`gui/llm_editor_widget.py`):** Edits the LLM-specific settings.
### `ConfigEditorDialog` (`gui/config_editor_dialog.py`)
The GUI includes a dedicated editor for modifying user-configurable settings. This is implemented in `gui/config_editor_dialog.py`.
* **Purpose:** Provides a user-friendly interface for viewing the effective application settings (defaults + user overrides + definitions) and editing the user-specific overrides.
* **Implementation:** The dialog loads the effective settings using `load_base_config()`. It presents relevant settings in a tabbed layout ("General", "Output & Naming", etc.). When saving, it now performs a **granular save**: it loads the current content of `config/user_settings.json`, identifies only the settings that were changed by the user during the current dialog session (by comparing against the initial state), updates only those specific values in the loaded `user_settings.json` content, and saves the modified content back to `config/user_settings.json` using `save_user_config()`. This preserves any other settings in `user_settings.json` that were not touched. The dialog displays definitions from `asset_type_definitions.json` and `file_type_definitions.json` but does not save changes to these files.
* **Limitations:** Currently, editing complex fields like `IMAGE_RESOLUTIONS` or the full details of `MAP_MERGE_RULES` via the UI is not fully supported for saving to `user_settings.json`.
### `LLMEditorWidget` (`gui/llm_editor_widget.py`)
* **Purpose:** Provides a user-friendly interface for viewing and editing the LLM settings defined in `config/llm_settings.json`.
* **Implementation:** Uses tabs for "Prompt Settings" and "API Settings". Allows editing the prompt, managing examples, and configuring API details. When saving, it also performs a **granular save**: it loads the current content of `config/llm_settings.json`, identifies only the settings changed by the user in the current session, updates only those values, and saves the modified content back to `config/llm_settings.json` using `configuration.save_llm_config()`.
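The granular save used by both editors can be sketched as follows. This is illustrative only; the real dialogs track the initial and current state through their widgets:
```python
import json
from pathlib import Path

def granular_save(path: Path, initial: dict, current: dict) -> None:
    """Write back only the keys changed this session, preserving any other
    settings already present in the target file."""
    on_disk = json.loads(path.read_text(encoding="utf-8")) if path.exists() else {}
    for key, value in current.items():
        if initial.get(key) != value:      # changed by the user this session
            on_disk[key] = value
    path.write_text(json.dumps(on_disk, indent=2), encoding="utf-8")
```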
## Preset File Structure (`Presets/*.json`)
Preset files are the primary way to adapt the tool to new asset sources. Developers should use `Presets/_template.json` as a starting point. Key fields include:
* `supplier_name`: The name of the asset source (e.g., `"Poliigon"`). Used for output directory naming.
* `map_type_mapping`: A list of dictionaries, each mapping source filename patterns/keywords to a specific file type. The `target_type` for this mapping **must** be a key from the `FILE_TYPE_DEFINITIONS` now located in `config/file_type_definitions.json`.
* `target_type`: The specific file type key from `FILE_TYPE_DEFINITIONS` (e.g., `"MAP_COL"`, `"MAP_NORM_GL"`, `"MAP_RGH"`). This replaces previous alias-based systems. The common aliases like "COL" or "NRM" are now derived from the `standard_type` property within `FILE_TYPE_DEFINITIONS` but are not used directly for `target_type`.
* `keywords`: A list of filename patterns (regex or fnmatch-style wildcards) used to identify this map type. The order of keywords within this list, and the order of dictionaries in the `map_type_mapping` list, determines the priority for assigning variant suffixes (`-1`, `-2`, etc.) when multiple files match the same `target_type`.
* `bit_depth_variants`: A dictionary mapping standard map types (e.g., `"NRM"`) to a pattern identifying its high bit-depth variant (e.g., `"*_NRM16*.tif"`). Files matching these patterns are prioritized over their standard counterparts.
* `map_bit_depth_rules`: Defines how to handle the bit depth of source maps. Can specify a default behavior (`"respect"` or `"force_8bit"`) and overrides for specific map types.
* `model_patterns`: A list of regex patterns to identify model files (e.g., `".*\\.fbx"`, `".*\\.obj"`).
* `move_to_extra_patterns`: A list of regex patterns for files that should be moved directly to the `Extra/` output subdirectory without further processing.
* `source_naming_convention`: Rules for extracting the base asset name and potentially the archetype from source filenames or directory structures (e.g., using separators and indices).
* `asset_category_rules`: Keywords or patterns used to determine the asset category (e.g., identifying `"Decal"` based on keywords).
* `archetype_rules`: Keywords or patterns used to determine the asset archetype (e.g., identifying `"Wood"` or `"Metal"`).
Careful definition of these patterns and rules, especially the regex in `map_type_mapping`, `bit_depth_variants`, `model_patterns`, and `move_to_extra_patterns`, is essential for correct asset processing.
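A skeletal preset illustrating these fields might look like the following (all values are placeholders, not taken from a shipped preset):
```json
{
  "supplier_name": "ExampleSupplier",
  "map_type_mapping": [
    { "target_type": "MAP_COL", "keywords": ["*_col.*", "*_basecolor.*"] },
    { "target_type": "MAP_NORM_GL", "keywords": ["*_nrm.*", "*_normal.*"] }
  ],
  "bit_depth_variants": { "NRM": "*_NRM16*.tif" },
  "map_bit_depth_rules": { "default": "respect", "COL": "force_8bit" },
  "model_patterns": [".*\\.fbx", ".*\\.obj"],
  "move_to_extra_patterns": [".*_preview\\..*"],
  "source_naming_convention": { "separator": "_", "asset_name_index": 0 },
  "asset_category_rules": { "Decal": ["decal"] },
  "archetype_rules": { "Wood": ["wood", "oak"], "Metal": ["metal", "steel"] }
}
```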
**Note on Data Passing:** As mentioned in the Architecture documentation, major changes to the data passing mechanisms between the GUI, Main (CLI orchestration), and `AssetProcessor` modules are currently being planned. The descriptions of how configuration data is handled and passed within this document reflect the current state and will require review and updates once the plan for these changes is finalized.
## Supplier Management (`config/suppliers.json`)

View File

@@ -1,69 +1,115 @@
# Developer Guide: Processing Pipeline
This document details the step-by-step technical process executed by the asset processing pipeline, which is initiated by the `ProcessingEngine` class (`processing_engine.py`) and orchestrated by the `PipelineOrchestrator` (`processing/pipeline/orchestrator.py`).
This document details the step-by-step technical process executed by the asset processing pipeline, which is initiated by the [`ProcessingEngine`](processing_engine.py:73) class (`processing_engine.py`) and orchestrated by the [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36) (`processing/pipeline/orchestrator.py`).
The `ProcessingEngine.process()` method serves as the main entry point. It initializes a `PipelineOrchestrator` instance, providing it with the application's `Configuration` object and a predefined list of processing stages. The `PipelineOrchestrator.process_source_rule()` method then manages the execution of these stages for each asset defined in the input `SourceRule`.
The [`ProcessingEngine.process()`](processing_engine.py:131) method serves as the main entry point. It initializes a [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36) instance, providing it with the application's [`Configuration`](configuration.py:68) object and predefined lists of pre-item and post-item processing stages. The [`PipelineOrchestrator.process_source_rule()`](processing/pipeline/orchestrator.py:95) method then manages the execution of these stages for each asset defined in the input [`SourceRule`](rule_structure.py:40).
A crucial component in this architecture is the `AssetProcessingContext` (`processing/pipeline/asset_context.py`). An instance of this dataclass is created for each `AssetRule` being processed. It acts as a stateful container, carrying all relevant data (source files, rules, configuration, intermediate results, metadata) and is passed sequentially through each stage. Each stage can read from and write to the context, allowing data to flow and be modified throughout the pipeline.
A crucial component in this architecture is the [`AssetProcessingContext`](processing/pipeline/asset_context.py:86) (`processing/pipeline/asset_context.py`). An instance of this dataclass is created for each [`AssetRule`](rule_structure.py:22) being processed. It acts as a stateful container, carrying all relevant data (source files, rules, configuration, intermediate results, metadata) and is passed sequentially through each stage. Each stage can read from and write to the context, allowing data to flow and be modified throughout the pipeline.
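A rough sketch of that dataclass, with field names taken from the stage descriptions below (the real definition in `processing/pipeline/asset_context.py` carries more fields and type detail):
```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class AssetProcessingContext:
    """Stateful container passed through every pipeline stage (sketch)."""
    asset_rule: Any                      # the AssetRule being processed
    source_rule: Any                     # the parent SourceRule
    config: Any                          # merged Configuration object
    effective_supplier: Optional[str] = None
    files_to_process: list = field(default_factory=list)
    processing_items: list = field(default_factory=list)
    asset_metadata: dict = field(default_factory=dict)
    processed_maps_details: dict = field(default_factory=dict)
    merged_maps_details: dict = field(default_factory=dict)
    status_flags: dict = field(default_factory=dict)  # e.g. {"skip_asset": True}
```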
The pipeline stages are executed in the following order:
The pipeline execution for each asset follows this general flow:
1. **`SupplierDeterminationStage` (`processing/pipeline/stages/supplier_determination.py`)**:
* **Responsibility**: Determines the effective supplier for the asset based on the `SourceRule`'s `supplier_identifier`, `supplier_override`, and supplier definitions in the `Configuration`.
* **Context Interaction**: Updates `AssetProcessingContext.effective_supplier` and potentially `AssetProcessingContext.asset_metadata` with supplier information.
1. **Pre-Item Stages:** A sequence of stages executed once per asset before the core item processing loop. These stages typically perform initial setup, filtering, and asset-level transformations.
2. **Core Item Processing Loop:** The [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36) iterates through a list of "processing items" (individual files or merge tasks) prepared by a dedicated stage. For each item, a sequence of core processing stages is executed.
3. **Post-Item Stages:** A sequence of stages executed once per asset after the core item processing loop is complete. These stages handle final tasks like organizing output files and saving metadata.
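In pseudocode, this per-asset flow looks roughly like the following (a sketch assuming each stage exposes an `execute()` method; the actual orchestrator API may differ):
```python
def process_asset(context, pre_item_stages, core_stages, post_item_stages):
    """Simplified per-asset flow of the PipelineOrchestrator."""
    for stage in pre_item_stages:
        stage.execute(context)
        if context.status_flags.get("skip_asset"):
            return  # e.g. AssetSkipLogicStage decided to skip this asset
    for item in context.processing_items:  # ProcessingItems and merge tasks
        for stage in core_stages:
            stage.execute(context, item)
    for stage in post_item_stages:          # e.g. output organization, metadata save
        stage.execute(context)
```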
2. **`AssetSkipLogicStage` (`processing/pipeline/stages/asset_skip_logic.py`)**:
* **Responsibility**: Checks if the asset should be skipped, typically if the output already exists and overwriting is not forced.
* **Context Interaction**: Sets `AssetProcessingContext.status_flags['skip_asset']` to `True` if the asset should be skipped, halting further processing for this asset by the orchestrator.
## Pipeline Stages
3. **`MetadataInitializationStage` (`processing/pipeline/stages/metadata_initialization.py`)**:
* **Responsibility**: Initializes the `AssetProcessingContext.asset_metadata` dictionary with base information derived from the `AssetRule`, `SourceRule`, and `Configuration`. This includes asset name, type, and any common metadata.
* **Context Interaction**: Populates `AssetProcessingContext.asset_metadata`.
The stages are executed in the following order for each asset:
4. **`FileRuleFilterStage` (`processing/pipeline/stages/file_rule_filter.py`)**:
* **Responsibility**: Filters the `FileRule` objects from the `AssetRule` to determine which files should actually be processed. It respects `FILE_IGNORE` rules.
* **Context Interaction**: Populates `AssetProcessingContext.files_to_process` with the list of `FileRule` objects that passed the filter.
### Pre-Item Stages
5. **`GlossToRoughConversionStage` (`processing/pipeline/stages/gloss_to_rough_conversion.py`)**:
* **Responsibility**: Identifies gloss maps (based on `FileRule` properties and filename conventions) that are intended to be used as roughness maps. If found, it loads the image, inverts its colors, and saves a temporary inverted version.
* **Context Interaction**: Modifies `FileRule` objects in `AssetProcessingContext.files_to_process` (e.g., updates `file_path` to point to the temporary inverted map, sets flags indicating inversion). Updates `AssetProcessingContext.processed_maps_details` with information about the conversion.
These stages are executed sequentially once for each asset before the core item processing loop begins.
6. **`AlphaExtractionToMaskStage` (`processing/pipeline/stages/alpha_extraction_to_mask.py`)**:
* **Responsibility**: If a `FileRule` specifies alpha channel extraction (e.g., from a diffuse map to create an opacity mask), this stage loads the source image, extracts its alpha channel, and saves it as a new temporary grayscale map.
* **Context Interaction**: May add new `FileRule`-like entries or details to `AssetProcessingContext.processed_maps_details` representing the extracted mask.
1. **[`SupplierDeterminationStage`](processing/pipeline/stages/supplier_determination.py:6)** (`processing/pipeline/stages/supplier_determination.py`):
* **Responsibility**: Determines the effective supplier for the asset based on the [`SourceRule`](rule_structure.py:40)'s `supplier_override`, `supplier_identifier`, and validation against configured suppliers.
* **Context Interaction**: Sets `context.effective_supplier` and may set a `supplier_error` flag in `context.status_flags`.
7. **`NormalMapGreenChannelStage` (`processing/pipeline/stages/normal_map_green_channel.py`)**:
* **Responsibility**: Checks `FileRule`s for normal maps and, based on configuration (e.g., `invert_normal_map_green_channel` for a specific supplier), potentially inverts the green channel of the normal map image.
* **Context Interaction**: Modifies the image data for normal maps if inversion is needed, saving a new temporary version. Updates `AssetProcessingContext.processed_maps_details`.
2. **[`AssetSkipLogicStage`](processing/pipeline/stages/asset_skip_logic.py:5)** (`processing/pipeline/stages/asset_skip_logic.py`):
* **Responsibility**: Checks if the entire asset should be skipped based on conditions like a missing/invalid supplier, a "SKIP" status in asset metadata, or if the asset is already processed and overwrite is disabled.
* **Context Interaction**: Sets the `skip_asset` flag and `skip_reason` in `context.status_flags` if the asset should be skipped.
8. **`IndividualMapProcessingStage` (`processing/pipeline/stages/individual_map_processing.py`)**:
* **Responsibility**: Processes individual texture map files. This includes:
* Loading the source image.
* Applying Power-of-Two (POT) scaling.
* Generating multiple resolution variants based on configuration.
* Handling color space conversions (e.g., BGR to RGB).
* Calculating image statistics (min, max, mean, median).
* Determining and storing aspect ratio change information.
* Saving processed temporary map files.
* Applying name variant suffixing and using standard type aliases for filenames.
* **Context Interaction**: Heavily populates `AssetProcessingContext.processed_maps_details` with paths to temporary processed files, dimensions, and other metadata for each map and its variants. Updates `AssetProcessingContext.asset_metadata` with image stats and aspect ratio info.
3. **[`MetadataInitializationStage`](processing/pipeline/stages/metadata_initialization.py:81)** (`processing/pipeline/stages/metadata_initialization.py`):
* **Responsibility**: Initializes the `context.asset_metadata` dictionary with base information derived from the [`AssetRule`](rule_structure.py:22), [`SourceRule`](rule_structure.py:40), and [`Configuration`](configuration.py:68). This includes asset name, IDs, source/output paths, timestamps, and initial status.
* **Context Interaction**: Populates `context.asset_metadata`. Initializes `context.processed_maps_details` and `context.merged_maps_details` as empty dictionaries (these are used internally by subsequent stages but are not directly part of the final `metadata.json` in their original form).
9. **`MapMergingStage` (`processing/pipeline/stages/map_merging.py`)**:
* **Responsibility**: Performs channel packing and other merge operations based on `map_merge_rules` defined in the `Configuration`.
* **Context Interaction**: Reads source map details and temporary file paths from `AssetProcessingContext.processed_maps_details`. Saves new temporary merged maps and records their details in `AssetProcessingContext.merged_maps_details`.
4. **[`FileRuleFilterStage`](processing/pipeline/stages/file_rule_filter.py:10)** (`processing/pipeline/stages/file_rule_filter.py`):
* **Responsibility**: Filters the [`FileRule`](rule_structure.py:5) objects associated with the asset to determine which individual files should be considered for processing. It identifies and excludes files matching "FILE_IGNORE" rules based on their `item_type`.
* **Context Interaction**: Populates `context.files_to_process` with the list of [`FileRule`](rule_structure.py:5) objects that are not ignored.
10. **`MetadataFinalizationAndSaveStage` (`processing/pipeline/stages/metadata_finalization_save.py`)**:
* **Responsibility**: Collects all accumulated metadata from `AssetProcessingContext.asset_metadata`, `AssetProcessingContext.processed_maps_details`, and `AssetProcessingContext.merged_maps_details`. It structures this information and saves it as the `metadata.json` file in a temporary location within the engine's temporary directory.
* **Context Interaction**: Reads from various context fields and writes the `metadata.json` file. Stores the path to this temporary metadata file in the context (e.g., `AssetProcessingContext.asset_metadata['temp_metadata_path']`).
5. **[`GlossToRoughConversionStage`](processing/pipeline/stages/gloss_to_rough_conversion.py:15)** (`processing/pipeline/stages/gloss_to_rough_conversion.py`):
* **Responsibility**: Identifies processed maps in `context.processed_maps_details` whose `internal_map_type` starts with "MAP_GLOSS". If found, it loads the temporary image data, inverts it using the shared utility function [`apply_common_map_transformations`](processing/utils/image_processing_utils.py), saves a new temporary roughness map ("MAP_ROUGH"), and updates the corresponding details in `context.processed_maps_details` (setting `internal_map_type` to "MAP_ROUGH") and the relevant [`FileRule`](rule_structure.py:5) in `context.files_to_process` (setting `item_type` to "MAP_ROUGH").
* **Context Interaction**: Reads from and updates `context.processed_maps_details` (specifically `internal_map_type` and `temp_processed_file`) and `context.files_to_process` (specifically `item_type`).
11. **`OutputOrganizationStage` (`processing/pipeline/stages/output_organization.py`)**:
* **Responsibility**: Determines final output paths for all processed maps, merged maps, the metadata file, and any other asset files (like models). It then copies these files from their temporary locations to the final structured output directory.
* **Context Interaction**: Reads temporary file paths from `AssetProcessingContext.processed_maps_details`, `AssetProcessingContext.merged_maps_details`, and the temporary metadata file path. Uses `Configuration` for output path patterns. Updates `AssetProcessingContext.asset_metadata` with final file paths and status.
6. **[`AlphaExtractionToMaskStage`](processing/pipeline/stages/alpha_extraction_to_mask.py:16)** (`processing/pipeline/stages/alpha_extraction_to_mask.py`):
* **Responsibility**: If no mask map is explicitly defined for the asset (as a [`FileRule`](rule_structure.py:5) with `item_type="MAP_MASK"`), this stage searches `context.processed_maps_details` for a suitable source map (e.g., a "MAP_COL" with an alpha channel, based on its `internal_map_type`). If found, it extracts the alpha channel, saves it as a new temporary mask map, and adds a new [`FileRule`](rule_structure.py:5) (with `item_type="MAP_MASK"`) and corresponding details (with `internal_map_type="MAP_MASK"`) to the context.
* **Context Interaction**: Reads from `context.processed_maps_details`, adds a new [`FileRule`](rule_structure.py:5) to `context.files_to_process`, and adds a new entry to `context.processed_maps_details` (setting `internal_map_type`).
7. **[`NormalMapGreenChannelStage`](processing/pipeline/stages/normal_map_green_channel.py:14)** (`processing/pipeline/stages/normal_map_green_channel.py`):
* **Responsibility**: Identifies processed normal maps in `context.processed_maps_details` (those with an `internal_map_type` starting with "MAP_NRM"). If the global `invert_normal_map_green_channel_globally` configuration is true, it loads the temporary image data, inverts the green channel using the shared utility function [`apply_common_map_transformations`](processing/utils/image_processing_utils.py), saves a new temporary modified normal map, and updates the `temp_processed_file` path in `context.processed_maps_details`.
* **Context Interaction**: Reads from and updates `context.processed_maps_details` (specifically `temp_processed_file` and `notes`).
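All of the stages above implement the `execute(context)` contract defined in `base_stage.py`. A minimal sketch of the pattern follows; the filter body is illustrative, not the verbatim implementation:

```python
from abc import ABC, abstractmethod

class ProcessingStage(ABC):
    """Contract shared by all pipeline stages: receive the context, return it mutated."""

    @abstractmethod
    def execute(self, context: "AssetProcessingContext") -> "AssetProcessingContext":
        ...

class FileRuleFilterStage(ProcessingStage):
    """Illustrative body only: drop FileRules whose item_type marks them as ignored."""

    def execute(self, context: "AssetProcessingContext") -> "AssetProcessingContext":
        context.files_to_process = [
            rule for rule in context.asset_rule.files
            if rule.item_type != "FILE_IGNORE"
        ]
        return context
```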
### Core Item Processing Loop
The [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36) iterates through the `context.processing_items` list (populated by the [`PrepareProcessingItemsStage`](processing/pipeline/stages/prepare_processing_items.py:10)). Each `item` in this list is now either a [`ProcessingItem`](rule_structure.py:0) (representing a specific variant of a source map, e.g., Color at 1K, or Color at LOWRES) or a [`MergeTaskDefinition`](processing/pipeline/asset_context.py:16).
1. **[`PrepareProcessingItemsStage`](processing/pipeline/stages/prepare_processing_items.py:10)** (`processing/pipeline/stages/prepare_processing_items.py`):
* **Responsibility**: (Executed once before the loop) This stage is now responsible for "exploding" each relevant [`FileRule`](rule_structure.py:5) into one or more [`ProcessingItem`](rule_structure.py:0) objects.
* For each [`FileRule`](rule_structure.py:5) that represents an image map:
* It loads the source image data and determines its original dimensions and bit depth.
* It creates standard [`ProcessingItem`](rule_structure.py:0)s for each required output resolution (e.g., "1K", "PREVIEW"), populating them with a copy of the source image data and the respective `resolution_key`.
* If the "Low-Resolution Fallback" feature is enabled (`ENABLE_LOW_RESOLUTION_FALLBACK` in config) and the source image's largest dimension is below `LOW_RESOLUTION_THRESHOLD`, it creates an additional [`ProcessingItem`](rule_structure.py:0) with `resolution_key="LOWRES"`, using the original image data and dimensions.
* It also adds [`MergeTaskDefinition`](processing/pipeline/asset_context.py:16)s derived from global `map_merge_rules`.
* **Context Interaction**: Reads `context.files_to_process` and `context.config_obj`. Populates `context.processing_items` with a list of [`ProcessingItem`](rule_structure.py:0) and [`MergeTaskDefinition`](processing/pipeline/asset_context.py:16) objects. Initializes `context.intermediate_results`.
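A condensed sketch of the "exploding" step described above, including the low-resolution fallback. The `load_image` helper and the `ProcessingItem` constructor arguments are assumptions; only the `ENABLE_LOW_RESOLUTION_FALLBACK` / `LOW_RESOLUTION_THRESHOLD` behaviour comes from the documentation:

```python
def explode_file_rule(file_rule, config):
    """Illustrative: one ProcessingItem per required resolution, plus an optional LOWRES item."""
    image, (width, height) = load_image(file_rule.file_path)  # hypothetical loader
    items = [
        ProcessingItem(file_rule=file_rule, image_data=image.copy(), resolution_key=key)
        for key in config.image_resolutions  # e.g. "1K", "PREVIEW"
    ]
    # Low-resolution fallback: reuse the original data and dimensions under "LOWRES".
    if config.enable_low_resolution_fallback and max(width, height) < config.low_resolution_threshold:
        items.append(ProcessingItem(file_rule=file_rule, image_data=image, resolution_key="LOWRES"))
    return items
```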
For each `item` in `context.processing_items`:
2. **Transformations (Implicit or via a dedicated stage - formerly `RegularMapProcessorStage` logic):**
* **Responsibility**: If the `item` is a [`ProcessingItem`](rule_structure.py:0), its `image_data` (loaded by `PrepareProcessingItemsStage`) may need transformations (Gloss-to-Rough, Normal Green Invert). This logic, previously in `RegularMapProcessorStage`, might be integrated into `PrepareProcessingItemsStage` before `ProcessingItem` creation, or handled by a new dedicated transformation stage that operates on `ProcessingItem.image_data`. The `item.map_type_identifier` would be updated if a transformation like Gloss-to-Rough occurs.
* **Context Interaction**: Modifies `item.image_data` and `item.map_type_identifier` within the [`ProcessingItem`](rule_structure.py:0) object.
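Both transformations are simple per-pixel operations. A NumPy sketch of the idea; the production code routes these through `apply_common_map_transformations`, whose actual signature is not reproduced here:

```python
import numpy as np

def invert_gloss_to_rough(image: np.ndarray) -> np.ndarray:
    """Roughness = 1 - gloss, expressed at the image's native bit depth."""
    max_value = np.iinfo(image.dtype).max if np.issubdtype(image.dtype, np.integer) else 1.0
    return max_value - image

def invert_green_channel(normal_map: np.ndarray) -> np.ndarray:
    """Flip the green (Y) channel, e.g. to switch between DirectX and OpenGL conventions.

    Green is index 1 in both RGB and BGR channel orderings, so this works for either.
    """
    flipped = normal_map.copy()
    max_value = np.iinfo(flipped.dtype).max if np.issubdtype(flipped.dtype, np.integer) else 1.0
    flipped[..., 1] = max_value - flipped[..., 1]
    return flipped
```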
3. **[`MergedTaskProcessorStage`](processing/pipeline/stages/merged_task_processor.py:68)** (`processing/pipeline/stages/merged_task_processor.py`):
* **Responsibility**: (Executed if `item` is a [`MergeTaskDefinition`](processing/pipeline/asset_context.py:16)) Same as before: validates inputs, loads source map data (likely from `ProcessingItem`s in `context.processing_items` or a cache populated from them), applies transformations, merges channels, and returns [`ProcessedMergedMapData`](processing/pipeline/asset_context.py:35).
* **Context Interaction**: Reads [`MergeTaskDefinition`](processing/pipeline/asset_context.py:16), potentially `context.processing_items` (or a cache derived from it) for input image data. Returns [`ProcessedMergedMapData`](processing/pipeline/asset_context.py:35).
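The merge itself reduces to stacking single-channel inputs once they share dimensions; a minimal sketch:

```python
import numpy as np

def merge_channels(red: np.ndarray, green: np.ndarray, blue: np.ndarray) -> np.ndarray:
    """Pack three grayscale maps (e.g. AO / roughness / metalness) into one RGB image."""
    assert red.shape == green.shape == blue.shape, "inputs must share dimensions"
    return np.dstack([red, green, blue])
```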
4. **[`InitialScalingStage`](processing/pipeline/stages/initial_scaling.py:14)** (`processing/pipeline/stages/initial_scaling.py`):
* **Responsibility**: (Executed per item)
* If `item` is a [`ProcessingItem`](rule_structure.py:0): Takes `item.image_data`, `item.current_dimensions`, and `item.resolution_key` as input. If `item.resolution_key` is "LOWRES", POT scaling is skipped. Otherwise, applies POT scaling if configured.
* If `item` is from a `MergeTaskDefinition` (i.e., `processed_data` from `MergedTaskProcessorStage`): Applies POT scaling as before.
* **Context Interaction**: Takes [`InitialScalingInput`](processing/pipeline/asset_context.py:46) (now including `resolution_key`). Returns [`InitialScalingOutput`](processing/pipeline/asset_context.py:54) (also including `resolution_key`), which updates `context.intermediate_results`. The `current_image_data` and `current_dimensions` for saving are taken from this output.
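POT scaling hinges on a nearest-power-of-two helper such as the planned `get_nearest_pot`. A sketch of the arithmetic; whether the real helper rounds to nearest or always up is not specified, so this version rounds to nearest with ties going up:

```python
def get_nearest_pot(value: int) -> int:
    """Return the power of two closest to value (ties round up)."""
    if value < 1:
        return 1
    lower = 1 << (value.bit_length() - 1)  # largest power of two <= value
    upper = lower << 1                     # smallest power of two > value
    return lower if (value - lower) < (upper - value) else upper
```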
5. **[`SaveVariantsStage`](processing/pipeline/stages/save_variants.py:15)** (`processing/pipeline/stages/save_variants.py`):
* **Responsibility**: (Executed per item) Saves the (potentially scaled) `current_image_data`.
* **Context Interaction**:
* Takes [`SaveVariantsInput`](processing/pipeline/asset_context.py:61).
* `internal_map_type` is set from `item.map_type_identifier` (for `ProcessingItem`) or `processed_data.output_map_type` (for merged).
* `output_filename_pattern_tokens['resolution']` is set to the `resolution_key` obtained from `scaled_data_output.resolution_key` (which originates from `item.resolution_key` for `ProcessingItem`s, or is `None` for merged items that get all standard resolutions).
* `image_resolutions` argument for `SaveVariantsInput`:
* If `resolution_key == "LOWRES"`: Set to `{"LOWRES": width_of_lowres_data}`.
* If `resolution_key` is a standard key (e.g., "1K"): Set to `{resolution_key: configured_dimension}`.
* For merged items (where `resolution_key` from scaling is likely `None`): Set to the full `config.image_resolutions` map to generate all applicable standard sizes.
* Returns [`SaveVariantsOutput`](processing/pipeline/asset_context.py:79). Orchestrator stores details in `context.processed_maps_details`.
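The three-way choice of `image_resolutions` above can be condensed into a small dispatch (names are illustrative):

```python
def resolutions_for_item(resolution_key, lowres_width, config):
    """Illustrative dispatch for the image_resolutions argument of SaveVariantsInput."""
    if resolution_key == "LOWRES":
        return {"LOWRES": lowres_width}
    if resolution_key is not None:                 # standard key such as "1K"
        return {resolution_key: config.image_resolutions[resolution_key]}
    return dict(config.image_resolutions)          # merged item: all standard sizes
```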
### Post-Item Stages
These stages are executed sequentially once for each asset after the core item processing loop has finished for all items.
1. **[`OutputOrganizationStage`](processing/pipeline/stages/output_organization.py:14)** (`processing/pipeline/stages/output_organization.py`):
* **Responsibility**: Determines the final output paths for all processed maps (including variants) and extra files based on configured patterns. It copies the temporary files generated by the core stages to these final destinations, creating directories as needed and respecting overwrite settings.
* **Context Interaction**: Reads from `context.processed_maps_details`, `context.files_to_process` (for 'EXTRA' files), `context.output_base_path`, and [`Configuration`](configuration.py:68). Updates entries in `context.processed_maps_details` with organization status. Populates `context.asset_metadata['maps']` with the final map structure:
* The `maps` object is a dictionary where keys are standard map types (e.g., "COL", "REFL").
* Each entry contains a `variant_paths` dictionary, where keys are resolution strings (e.g., "8K", "4K") and values are the filenames of the map variants (relative to the asset's output directory).
It also populates `context.asset_metadata['final_output_files']` with a list of absolute paths to all generated files (this list itself is not saved in the final `metadata.json`).
2. **[`MetadataFinalizationAndSaveStage`](processing/pipeline/stages/metadata_finalization_save.py:14)** (`processing/pipeline/stages/metadata_finalization_save.py`):
* **Responsibility**: Finalizes the `context.asset_metadata` (setting final status based on flags). It determines the save path for the metadata file based on configuration and patterns, serializes the `context.asset_metadata` (which now contains the structured `maps` data from `OutputOrganizationStage`) to JSON, and saves the `metadata.json` file.
* **Context Interaction**: Reads from `context.asset_metadata` (including the `maps` structure), `context.output_base_path`, and [`Configuration`](configuration.py:68). Before saving, it explicitly removes the `final_output_files` key from `context.asset_metadata`. The `processing_end_time` is also no longer added. The `metadata.json` file is written, and `context.asset_metadata` is updated with its final path and status. The older `processed_maps_details` and `merged_maps_details` from the context are not directly included in the JSON.
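A sketch of that save step, mirroring the behaviour described above (path-pattern resolution is elided):

```python
import json
from pathlib import Path

def save_metadata(asset_metadata: dict, metadata_path: Path) -> None:
    """Illustrative: strip transient keys, then serialize metadata.json."""
    payload = dict(asset_metadata)
    payload.pop("final_output_files", None)  # internal bookkeeping, not persisted
    metadata_path.parent.mkdir(parents=True, exist_ok=True)
    metadata_path.write_text(json.dumps(payload, indent=4), encoding="utf-8")
```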
## External Steps
Certain steps are integral to the overall asset processing workflow but are handled outside the [`PipelineOrchestrator`](processing/pipeline/orchestrator.py:36)'s direct execution loop:
* **Workspace Preparation and Cleanup**: Handled by the code that invokes [`ProcessingEngine.process()`](processing_engine.py:131) (e.g., `main.ProcessingTask`, `monitor._process_archive_task`), typically involving extracting archives and setting up temporary directories. The engine itself manages a sub-temporary directory (`engine_temp_dir`) for intermediate processing files.
* **Prediction and Rule Generation**: Performed before the [`ProcessingEngine`](processing_engine.py:73) is called. This involves analyzing source files and generating the [`SourceRule`](rule_structure.py:40) object with its nested [`AssetRule`](rule_structure.py:22)s and [`FileRule`](rule_structure.py:5)s, often involving prediction logic (potentially using LLMs).
* **Optional Blender Script Execution**: Can be triggered externally after successful processing to perform tasks like material setup in Blender using the generated output files and metadata.
This staged pipeline provides a modular and extensible architecture for asset processing, with clear separation of concerns for each step. The [`AssetProcessingContext`](processing/pipeline/asset_context.py:86) ensures that data flows consistently between these stages.

View File

@@ -10,13 +10,13 @@ The GUI is built using `PySide6`, which provides Python bindings for the Qt fram
The `MainWindow` class acts as the central **coordinator** for the GUI application. It is responsible for:
* Setting up the main application window structure and menu bar, including actions to launch configuration and definition editors.
* **Layout:** Arranging the main GUI components using a `QSplitter`.
* **Left Pane:** Contains the preset selection controls (from `PresetEditorWidget`) permanently displayed at the top. Below this, a `QStackedWidget` switches between the preset JSON editor (also from `PresetEditorWidget`) and the `LLMEditorWidget`.
* **Right Pane:** Contains the `MainPanelWidget`.
* Instantiating and managing the major GUI widgets:
* `PresetEditorWidget` (`gui/preset_editor_widget.py`): Provides the preset selector and the JSON editor parts.
* `LLMEditorWidget` (`gui/llm_editor_widget.py`): Provides the editor for LLM settings (from `config/llm_settings.json`).
* `MainPanelWidget` (`gui/main_panel_widget.py`): Contains the rule hierarchy view and processing controls.
* `LogConsoleWidget` (`gui/log_console_widget.py`): Displays application logs.
* Instantiating key models and handlers:
@@ -198,13 +198,24 @@ The `LogConsoleWidget` displays logs captured by a custom `QtLogHandler` from Py
The GUI provides a "Cancel" button. Cancellation logic for the actual processing is now likely handled within the `main.ProcessingTask` or the code that manages it, as the `ProcessingHandler` has been removed. The GUI button would signal this external task manager.
## Application Preferences Editor (`gui/config_editor_dialog.py`)
A dedicated dialog for editing user-overridable application settings. It loads base settings from `config/app_settings.json` and saves user overrides to `config/user_settings.json`.
* **Functionality:** Provides a tabbed interface to edit various application settings, including general paths, output/naming patterns, image processing options (like resolutions and compression), and map merging rules. It no longer includes editors for Asset Type or File Type Definitions.
* **Integration:** Launched by `MainWindow` via the "Edit" -> "Preferences..." menu.
* **Persistence:** Saves changes to `config/user_settings.json`. Changes require an application restart to take effect in processing logic.
The refactored GUI separates concerns into distinct widgets and handlers, coordinated by the `MainWindow`. Background tasks use `QThreadPool` and `QRunnable`. The `UnifiedViewModel` focuses on data presentation and simple edits, delegating complex restructuring to the `AssetRestructureHandler`.
## Definitions Editor (`gui/definitions_editor_dialog.py`)
A new dedicated dialog for managing core application definitions that are separate from general user preferences.
* **Purpose:** Provides a structured UI for editing Asset Type Definitions, File Type Definitions, and Supplier Settings.
* **Structure:** Uses a `QTabWidget` with three tabs:
* **Asset Type Definitions:** Manages definitions from `config/asset_type_definitions.json`. Presents a list of asset types and allows editing their description, color, and examples.
* **File Type Definitions:** Manages definitions from `config/file_type_definitions.json`. Presents a list of file types and allows editing their description, color, examples, standard type, bit depth rule, grayscale status, and keybind.
* **Supplier Settings:** Manages settings from `config/suppliers.json`. Presents a list of suppliers and allows editing supplier-specific settings (e.g., Normal Map Type).
* **Integration:** Launched by `MainWindow` via the "Edit" -> "Edit Definitions..." menu.
* **Persistence:** Saves changes directly to the respective configuration files (`config/asset_type_definitions.json`, `config/file_type_definitions.json`, `config/suppliers.json`). Some changes may require an application restart.

View File

@@ -25,102 +25,10 @@
"*.pdf",
"*.url",
"*.htm*",
"*_Fabric.*"
],
"map_type_mapping": [
{
"target_type": "MAP_COL",
"keywords": [
"COLOR*",
"COL",
"DIFFUSE",
"DIF",
"ALBEDO"
]
},
{
"target_type": "MAP_NRM",
"keywords": [
"NORMAL*",
"NORM*",
"NRM*"
]
},
{
"target_type": "MAP_ROUGH",
"keywords": [
"ROUGHNESS",
"ROUGH"
]
},
{
"target_type": "MAP_GLOSS",
"keywords": [
"GLOSS"
],
"is_gloss_source": true
},
{
"target_type": "MAP_AO",
"keywords": [
"AMBIENTOCCLUSION",
"AO"
]
},
{
"target_type": "MAP_DISP",
"keywords": [
"DISPLACEMENT",
"DISP",
"HEIGHT",
"BUMP"
]
},
{
"target_type": "MAP_REFL",
"keywords": [
"REFLECTION",
"REFL",
"SPECULAR",
"SPEC"
]
},
{
"target_type": "MAP_SSS",
"keywords": [
"SSS",
"SUBSURFACE*"
]
},
{
"target_type": "MAP_FUZZ",
"keywords": [
"FUZZ"
]
},
{
"target_type": "MAP_IDMAP",
"keywords": [
"IDMAP"
]
},
{
"target_type": "MAP_MASK",
"keywords": [
"OPAC*",
"TRANSP*",
"MASK*",
"ALPHA*"
]
},
{
"target_type": "MAP_METAL",
"keywords": [
"METAL*",
"METALLIC"
]
}
"*_Fabric.*",
"*_Albedo*"
],
"map_type_mapping": [],
"asset_category_rules": {
"model_patterns": [
"*.fbx",

View File

@@ -0,0 +1,107 @@
# Configuration System Refactoring Plan
This document outlines the plan for refactoring the configuration system of the Asset Processor Tool.
## Overall Goals
1. **Decouple Definitions:** Separate `ASSET_TYPE_DEFINITIONS` and `FILE_TYPE_DEFINITIONS` from the main `config/app_settings.json` into dedicated files.
2. **Introduce User Overrides:** Allow users to override base settings via a new `config/user_settings.json` file.
3. **Improve GUI Saving:** (Lower Priority) Make GUI configuration saving more targeted to avoid overwriting unrelated settings when saving changes from `ConfigEditorDialog` or `LLMEditorWidget`.
## Proposed Plan Phases
**Phase 1: Decouple Definitions**
1. **Create New Definition Files:**
* Create `config/asset_type_definitions.json`.
* Create `config/file_type_definitions.json`.
2. **Migrate Content:**
* Move `ASSET_TYPE_DEFINITIONS` object from `config/app_settings.json` to `config/asset_type_definitions.json`.
* Move `FILE_TYPE_DEFINITIONS` object from `config/app_settings.json` to `config/file_type_definitions.json`.
3. **Update `configuration.py`:**
* Add constants for new definition file paths.
* Modify `Configuration` class to load these new files.
* Update property methods (e.g., `get_asset_type_definitions`, `get_file_type_definitions_with_examples`) to use data from the new definition dictionaries.
* Adjust validation (`_validate_configs`) as needed.
4. **Update GUI & `load_base_config()`:**
* Modify `load_base_config()` to load and return a combined dictionary including `app_settings.json` and the two new definition files.
* Update GUI components relying on `load_base_config()` to ensure they receive the necessary definition data.
**Phase 2: Implement User Overrides**
1. **Define `user_settings.json`:**
* Establish `config/user_settings.json` for user-specific overrides, mirroring parts of `app_settings.json`.
2. **Update `configuration.py` Loading:**
* In `Configuration.__init__`, load `app_settings.json`, then definition files, then attempt to load and deep merge `user_settings.json` (user settings override base).
* Load presets *after* the base+user merge (presets override combined base+user).
* Modify `load_base_config()` to also load and merge `user_settings.json` after `app_settings.json`.
3. **Update GUI Editors:**
* Modify `ConfigEditorDialog` to load the effective settings (base+user) but save changes *only* to `config/user_settings.json`.
* `LLMEditorWidget` continues targeting `llm_settings.json`.
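The base+user merge in Phase 2 is a recursive dictionary merge; a minimal sketch:

```python
def deep_merge(base: dict, overrides: dict) -> dict:
    """Return a new dict: base with overrides applied recursively (overrides win)."""
    merged = dict(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged
```

With such a helper, the effective settings become roughly `deep_merge(deep_merge(app_settings, user_settings), preset_overrides)`, matching the override order above.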
**Phase 3: Granular GUI Saving (Lower Priority)**
1. **Refactor Saving Logic:**
* In `ConfigEditorDialog` and `LLMEditorWidget`:
* Load the current target file (`user_settings.json` or `llm_settings.json`).
* Identify specific setting(s) changed by the user in the GUI session.
* Update only those specific key(s) in the loaded dictionary.
* Write the entire modified dictionary back to the target file, preserving untouched settings.
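The granular save amounts to a read-modify-write on the target file. A sketch for a flat key; nested keys would reuse the deep-merge helper from Phase 2:

```python
import json
from pathlib import Path

def save_user_setting(settings_path: Path, key: str, value) -> None:
    """Illustrative: update a single key in user_settings.json, preserving the rest."""
    current = json.loads(settings_path.read_text(encoding="utf-8")) if settings_path.exists() else {}
    current[key] = value
    settings_path.write_text(json.dumps(current, indent=4), encoding="utf-8")
```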
## Proposed File Structure & Loading Flow
```mermaid
graph LR
subgraph Config Files
A[config/asset_type_definitions.json]
B[config/file_type_definitions.json]
C["config/app_settings.json (Base Defaults)"]
D["config/user_settings.json (User Overrides)"]
E[config/llm_settings.json]
F[config/suppliers.json]
G[Presets/*.json]
end
subgraph Code
H[configuration.py]
I[GUI]
J[Processing Engine / Pipeline]
K[LLM Handlers]
end
subgraph "Loading Flow (Configuration Class)"
L(Load Asset Types) --> H
M(Load File Types) --> H
N(Load Base Settings) --> P(Merge Base + User)
O(Load User Settings) --> P
P --> R(Merge Preset Overrides)
Q(Load LLM Settings) --> H
R --> T(Final Config Object)
G -- Load Preset --> R
H -- Contains --> T
end
subgraph "Loading Flow (GUI - load_base_config)"
L2(Load Asset Types) --> U(Return Merged Defaults + Defs)
M2(Load File Types) --> U
N2(Load Base Settings) --> V(Merge Base + User)
O2(Load User Settings) --> V
V --> U
I -- Calls --> U
end
T -- Used by --> J
T -- Used by --> K
I -- Edits --> D
I -- Edits --> E
I -- Manages --> F
style A fill:#f9f,stroke:#333,stroke-width:2px
style B fill:#f9f,stroke:#333,stroke-width:2px
style C fill:#ccf,stroke:#333,stroke-width:2px
style D fill:#9cf,stroke:#333,stroke-width:2px
style E fill:#ccf,stroke:#333,stroke-width:2px
style F fill:#9cf,stroke:#333,stroke-width:2px
style G fill:#ffc,stroke:#333,stroke-width:2px
```

View File

@@ -1,181 +0,0 @@
# Project Plan: Modularizing the Asset Processing Engine
**Last Updated:** May 9, 2025
**1. Project Vision & Goals**
* **Vision:** Transform the asset processing pipeline into a highly modular, extensible, and testable system.
* **Primary Goals:**
1. Decouple processing steps into independent, reusable stages.
2. Simplify the addition of new processing capabilities (e.g., GLOSS > ROUGH conversion, Alpha to MASK, Normal Map Green Channel inversion).
3. Improve code maintainability and readability.
4. Enhance unit and integration testing capabilities for each processing component.
5. Centralize common utility functions (image manipulation, path generation).
**2. Proposed Architecture Overview**
* **Core Concept:** A `PipelineOrchestrator` will manage a sequence of `ProcessingStage`s. Each stage will operate on an `AssetProcessingContext` object, which carries all necessary data and state for a single asset through the pipeline.
* **Key Components:**
* `AssetProcessingContext`: Data class holding asset-specific data, configuration, temporary paths, and status.
* `PipelineOrchestrator`: Class to manage the overall processing flow for a `SourceRule`, iterating through assets and executing the pipeline of stages for each.
* `ProcessingStage` (Base Class/Interface): Defines the contract for all individual processing stages (e.g., `execute(context)` method).
* Specific Stage Classes: (e.g., `SupplierDeterminationStage`, `IndividualMapProcessingStage`, etc.)
* Utility Modules: `image_processing_utils.py`, enhancements to `utils/path_utils.py`.
**3. Proposed File Structure**
* `processing/`
* `pipeline/`
* `__init__.py`
* `asset_context.py` (Defines `AssetProcessingContext`)
* `orchestrator.py` (Defines `PipelineOrchestrator`)
* `stages/`
* `__init__.py`
* `base_stage.py` (Defines `ProcessingStage` interface)
* `supplier_determination.py`
* `asset_skip_logic.py`
* `metadata_initialization.py`
* `file_rule_filter.py`
* `gloss_to_rough_conversion.py`
* `alpha_extraction_to_mask.py`
* `normal_map_green_channel.py`
* `individual_map_processing.py`
* `map_merging.py`
* `metadata_finalization.py`
* `output_organization.py`
* `utils/`
* `__init__.py`
* `image_processing_utils.py` (New module for image functions)
* `utils/` (Top-level existing directory)
* `path_utils.py` (To be enhanced with `sanitize_filename` from `processing_engine.py`)
**4. Detailed Phases and Tasks**
**Phase 0: Setup & Core Structures Definition**
*Goal: Establish the foundational classes for the new pipeline.*
* **Task 0.1: Define `AssetProcessingContext`**
* Create `processing/pipeline/asset_context.py`.
* Define the `AssetProcessingContext` data class with fields: `source_rule: SourceRule`, `asset_rule: AssetRule`, `workspace_path: Path`, `engine_temp_dir: Path`, `output_base_path: Path`, `effective_supplier: Optional[str]`, `asset_metadata: Dict`, `processed_maps_details: Dict[str, Dict[str, Dict]]`, `merged_maps_details: Dict[str, Dict[str, Dict]]`, `files_to_process: List[FileRule]`, `loaded_data_cache: Dict`, `config_obj: Configuration`, `status_flags: Dict`, `incrementing_value: Optional[str]`, `sha5_value: Optional[str]`.
* Ensure proper type hinting.
* **Task 0.2: Define `ProcessingStage` Base Class/Interface**
* Create `processing/pipeline/stages/base_stage.py`.
* Define an abstract base class `ProcessingStage` with an abstract method `execute(self, context: AssetProcessingContext) -> AssetProcessingContext`.
* **Task 0.3: Implement Initial `PipelineOrchestrator`**
* Create `processing/pipeline/orchestrator.py`.
* Define the `PipelineOrchestrator` class.
* Implement `__init__(self, config_obj: Configuration, stages: List[ProcessingStage])`.
* Implement `process_source_rule(self, source_rule: SourceRule, workspace_path: Path, output_base_path: Path, overwrite: bool, incrementing_value: Optional[str], sha5_value: Optional[str]) -> Dict[str, List[str]]`.
* Handles creation/cleanup of the main engine temporary directory.
* Loops through `source_rule.assets`, initializes `AssetProcessingContext` for each.
* Iterates `self.stages`, calling `stage.execute(context)`.
* Collects overall status.
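A condensed sketch of the context dataclass from Task 0.1, showing only a subset of the listed fields:

```python
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List, Optional

@dataclass
class AssetProcessingContext:
    """Carries one asset's data and state through every pipeline stage."""
    source_rule: "SourceRule"
    asset_rule: "AssetRule"
    workspace_path: Path
    engine_temp_dir: Path
    output_base_path: Path
    config_obj: "Configuration"
    effective_supplier: Optional[str] = None
    asset_metadata: Dict = field(default_factory=dict)
    processed_maps_details: Dict[str, Dict] = field(default_factory=dict)
    files_to_process: List["FileRule"] = field(default_factory=list)
    status_flags: Dict = field(default_factory=dict)
```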
**Phase 1: Utility Module Refactoring**
*Goal: Consolidate and centralize common utility functions.*
* **Task 1.1: Refactor Path Utilities**
* Move `_sanitize_filename` from `processing_engine.py` to `utils/path_utils.py`.
* Update uses to call the new utility function.
* **Task 1.2: Create `image_processing_utils.py`**
* Create `processing/utils/image_processing_utils.py`.
* Move general-purpose image functions from `processing_engine.py`:
* `is_power_of_two`
* `get_nearest_pot`
* `calculate_target_dimensions`
* `calculate_image_stats`
* `normalize_aspect_ratio_change`
* Core image loading, BGR<>RGB conversion, generic resizing (from `_load_and_transform_source`).
* Core data type conversion for saving, color conversion for saving, `cv2.imwrite` call (from `_save_image`).
* Ensure functions are pure and testable.
**Phase 2: Implementing Core Processing Stages (Migrating Existing Logic)**
*Goal: Migrate existing functionalities from `processing_engine.py` into the new stage-based architecture.*
(For each task: create stage file, implement class, move logic, adapt to `AssetProcessingContext`)
* **Task 2.1: Implement `SupplierDeterminationStage`**
* **Task 2.2: Implement `AssetSkipLogicStage`**
* **Task 2.3: Implement `MetadataInitializationStage`**
* **Task 2.4: Implement `FileRuleFilterStage`** (New logic for `item_type == "FILE_IGNORE"`)
* **Task 2.5: Implement `IndividualMapProcessingStage`** (Adapts `_process_individual_maps`, uses `image_processing_utils.py`)
* **Task 2.6: Implement `MapMergingStage`** (Adapts `_merge_maps`, uses `image_processing_utils.py`)
* **Task 2.7: Implement `MetadataFinalizationAndSaveStage`** (Adapts `_generate_metadata_file`, uses `utils.path_utils.generate_path_from_pattern`)
* **Task 2.8: Implement `OutputOrganizationStage`** (Adapts `_organize_output_files`)
**Phase 3: Implementing New Feature Stages**
*Goal: Add the new desired processing capabilities as distinct stages.*
* **Task 3.1: Implement `GlossToRoughConversionStage`** (Identify gloss, convert, invert, save temp, update `FileRule`)
* **Task 3.2: Implement `AlphaExtractionToMaskStage`** (Check existing mask, find MAP_COL with alpha, extract, save temp, add new `FileRule`)
* **Task 3.3: Implement `NormalMapGreenChannelStage`** (Identify normal maps, invert green based on config, save temp, update `FileRule`)
**Phase 4: Integration, Testing & Finalization**
*Goal: Assemble the pipeline, test thoroughly, and deprecate old code.*
* **Task 4.1: Configure `PipelineOrchestrator`**
* Instantiate `PipelineOrchestrator` in main application logic with the ordered list of stage instances.
* **Task 4.2: Unit Testing**
* Unit tests for each `ProcessingStage` (mocking `AssetProcessingContext`).
* Unit tests for `image_processing_utils.py` and `utils/path_utils.py` functions.
* **Task 4.3: Integration Testing**
* Test `PipelineOrchestrator` end-to-end with sample data.
* Compare outputs with the existing engine for consistency.
* **Task 4.4: Documentation Update**
* Update developer documentation (e.g., `Documentation/02_Developer_Guide/05_Processing_Pipeline.md`).
* Document `AssetProcessingContext` and stage responsibilities.
* **Task 4.5: Deprecate/Remove Old `ProcessingEngine` Code**
* Gradually remove refactored logic from `processing_engine.py`.
**5. Workflow Diagram**
```mermaid
graph TD
AA[Load SourceRule & Config] --> BA(PipelineOrchestrator: process_source_rule);
BA --> CA{For Each Asset in SourceRule};
CA -- Yes --> DA(Orchestrator: Create AssetProcessingContext);
DA --> EA(SupplierDeterminationStage);
EA -- context --> FA(AssetSkipLogicStage);
FA -- context --> GA{context.skip_asset?};
GA -- Yes --> HA(Orchestrator: Record Skipped);
HA --> CA;
GA -- No --> IA(MetadataInitializationStage);
IA -- context --> JA(FileRuleFilterStage);
JA -- context --> KA(GlossToRoughConversionStage);
KA -- context --> LA(AlphaExtractionToMaskStage);
LA -- context --> MA(NormalMapGreenChannelStage);
MA -- context --> NA(IndividualMapProcessingStage);
NA -- context --> OA(MapMergingStage);
OA -- context --> PA(MetadataFinalizationAndSaveStage);
PA -- context --> QA(OutputOrganizationStage);
QA -- context --> RA(Orchestrator: Record Processed/Failed);
RA --> CA;
CA -- No --> SA(Orchestrator: Cleanup Engine Temp Dir);
SA --> TA[Processing Complete];
subgraph Stages
direction LR
EA
FA
IA
JA
KA
LA
MA
NA
OA
PA
QA
end
subgraph Utils
direction LR
U1[image_processing_utils.py]
U2[utils/path_utils.py]
end
NA -.-> U1;
OA -.-> U1;
KA -.-> U1;
LA -.-> U1;
MA -.-> U1;
PA -.-> U2;
QA -.-> U2;
classDef context fill:#f9f,stroke:#333,stroke-width:2px;
class DA,EA,FA,IA,JA,KA,LA,MA,NA,OA,PA,QA context;
```

View File

@@ -0,0 +1,62 @@
# Issue: List item selection not working in Definitions Editor
**Date:** 2025-05-13
**Affected File:** [`gui/definitions_editor_dialog.py`](gui/definitions_editor_dialog.py)
**Problem Description:**
User mouse clicks on items within the `QListWidget` instances (for Asset Types, File Types, and Suppliers) in the Definitions Editor dialog do not trigger item selection or the `currentItemChanged` signal. The first item is selected by default and its details are displayed correctly. Programmatic selection of items (e.g., via a diagnostic button) *does* correctly trigger the `currentItemChanged` signal and updates the UI detail views. The issue is specific to user-initiated mouse clicks for selection after the initial load.
**Debugging Steps Taken & Findings:**
1. **Initial Analysis:**
* Reviewed GUI internals documentation ([`Documentation/02_Developer_Guide/06_GUI_Internals.md`](Documentation/02_Developer_Guide/06_GUI_Internals.md)) and [`gui/definitions_editor_dialog.py`](gui/definitions_editor_dialog.py) source code.
* Confirmed signal connections (`currentItemChanged` to display slots) are made.
2. **Logging in Display Slots (`_display_*_details`):**
* Added logging to display slots. Confirmed they are called for the initial (default) item selection.
* No further calls to these slots occur on user clicks, indicating `currentItemChanged` is not firing.
3. **Color Swatch Palette Role:**
* Investigated and corrected `QPalette.ColorRole` for color swatches (reverted from `Background` to `Window`). This fixed an `AttributeError` but did not resolve the selection issue.
4. **Robust Error Handling in Display Slots:**
* Wrapped display slot logic in `try...finally` blocks with detailed logging. Confirmed slots complete without error for initial selection and signals for detail widgets are reconnected.
5. **Diagnostic Lambda for `currentItemChanged`:**
* Added a lambda logger to `currentItemChanged` alongside the main display slot.
* Confirmed both lambda and display slot fire for initial programmatic selection.
* Neither fires for subsequent user clicks. This proved the `QListWidget` itself was not emitting the signal.
6. **Explicit `setEnabled` and `setSelectionMode` on `QListWidget`:**
* Explicitly set these properties. No change in behavior.
7. **Explicit `setEnabled` and `setFocusPolicy(Qt.ClickFocus)` on `tab_page` (parent of `QListWidget` layout):**
* This change **allowed programmatic selection via a diagnostic button to correctly fire `currentItemChanged` and update the UI**.
* However, user mouse clicks still did not work and did not fire the signal.
8. **Event Filter Investigation:**
* **Filter on `QListWidget`:** Did NOT receive mouse press/release events from user clicks.
* **Filter on `tab_page` (parent of `QListWidget`'s layout):** Did NOT receive mouse press/release events.
* **Filter on `self.tab_widget` (QTabWidget):** DID receive mouse press/release events.
* Modified `self.tab_widget`'s event filter to return `False` for events over the current page, attempting to ensure propagation.
* **Result:** With the modified `tab_widget` filter, an event filter re-added to `asset_type_list_widget` *did* start receiving mouse press/release events. **However, `asset_type_list_widget` still did not emit `currentItemChanged` from these user clicks.**
9. **`DebugListWidget` (Subclassing `QListWidget`):**
* Created `DebugListWidget` overriding `mousePressEvent` with logging.
* Used `DebugListWidget` for `asset_type_list_widget`.
* **Initial user report indicated that `DebugListWidget.mousePressEvent` logs were NOT appearing for user clicks.** This means that even with the `QTabWidget` event filter attempting to propagate events, and the `asset_type_list_widget`'s filter (from step 8) confirming it received them, the `mousePressEvent` of the `QListWidget` itself was not being triggered by those propagated events. This is the current mystery.
**Current Status:**
- Programmatic selection works and fires signals.
- User clicks are received by an event filter on `asset_type_list_widget` (after `QTabWidget` filter modification) but do not result in `mousePressEvent` being called on the `QListWidget` (or `DebugListWidget`) itself, and thus no `currentItemChanged` signal is emitted.
- The issue seems to be a very low-level event processing problem specifically for user mouse clicks within the `QListWidget` instances when they are children of the `QTabWidget` pages, even when events appear to reach the list widget via an event filter.
**Next Steps (When Resuming):**
1. Re-verify the logs from the `DebugListWidget.mousePressEvent` test. If it's truly not being called despite its event filter seeing events, this is extremely unusual.
2. Simplify the `_create_tab_pane` method drastically for one tab:
* Remove the right-hand pane.
* Add the `DebugListWidget` directly to the `tab_page`'s layout without the intermediate `left_pane_layout`.
3. Consider if any styles applied to `QListWidget` or its parents via stylesheets could be interfering with hit testing or event processing (unlikely for this specific symptom, but possible).
4. Explore alternative ways to populate/manage the `QListWidget` or its items if a subtle corruption is occurring.
5. If all else fails, consider replacing the `QListWidget` with a `QListView` and a `QStringListModel` as a more fundamental change to see if the issue is specific to `QListWidget` in this context.
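For reference when resuming, the diagnostic subclass from step 9 amounts to roughly the following (a sketch; the real widget may log more state):

```python
import logging
from PySide6.QtWidgets import QListWidget

logger = logging.getLogger(__name__)

class DebugListWidget(QListWidget):
    """Logs mouse presses so we can tell whether events ever reach the widget."""

    def mousePressEvent(self, event):
        logger.debug(
            "DebugListWidget.mousePressEvent at %s, button=%s",
            event.position().toPoint(), event.button(),
        )
        super().mousePressEvent(event)  # preserve normal selection behaviour
```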

Binary file not shown.

View File

@@ -0,0 +1,57 @@
{
"source_rules": [
{
"input_path": "BoucleChunky001.zip",
"supplier_identifier": "Dinesen",
"preset_name": null,
"assets": [
{
"asset_name": "BoucleChunky001",
"asset_type": "Surface",
"files": [
{
"file_path": "BoucleChunky001_AO_1K_METALNESS.png",
"item_type": "MAP_AO",
"target_asset_name_override": "BoucleChunky001"
},
{
"file_path": "BoucleChunky001_COL_1K_METALNESS.png",
"item_type": "MAP_COL",
"target_asset_name_override": "BoucleChunky001"
},
{
"file_path": "BoucleChunky001_DISP16_1K_METALNESS.png",
"item_type": "MAP_DISP",
"target_asset_name_override": "BoucleChunky001"
},
{
"file_path": "BoucleChunky001_DISP_1K_METALNESS.png",
"item_type": "EXTRA",
"target_asset_name_override": "BoucleChunky001"
},
{
"file_path": "BoucleChunky001_Fabric.png",
"item_type": "EXTRA",
"target_asset_name_override": "BoucleChunky001"
},
{
"file_path": "BoucleChunky001_METALNESS_1K_METALNESS.png",
"item_type": "MAP_METAL",
"target_asset_name_override": "BoucleChunky001"
},
{
"file_path": "BoucleChunky001_NRM_1K_METALNESS.png",
"item_type": "MAP_NRM",
"target_asset_name_override": "BoucleChunky001"
},
{
"file_path": "BoucleChunky001_ROUGHNESS_1K_METALNESS.png",
"item_type": "MAP_ROUGH",
"target_asset_name_override": "BoucleChunky001"
}
]
}
]
}
]
}

855
autotest.py Normal file
View File

@@ -0,0 +1,855 @@
import argparse
import sys
import logging
import logging.handlers
import time
import json
import shutil # Import shutil for directory operations
from pathlib import Path
from typing import List, Dict, Any
from PySide6.QtCore import QCoreApplication, QTimer, Slot, QEventLoop, QObject, Signal, SignalInstance
from PySide6.QtWidgets import QApplication, QListWidgetItem
# Add project root to sys.path
project_root = Path(__file__).resolve().parent
if str(project_root) not in sys.path:
sys.path.insert(0, str(project_root))
try:
from main import App
from gui.main_window import MainWindow
from rule_structure import SourceRule # Assuming SourceRule is in rule_structure.py
except ImportError as e:
print(f"Error importing project modules: {e}")
print(f"Ensure that the script is run from the project root or that the project root is in PYTHONPATH.")
print(f"Current sys.path: {sys.path}")
sys.exit(1)
# Global variable for the memory log handler
autotest_memory_handler = None
# Custom Log Filter for Concise Output
class InfoSummaryFilter(logging.Filter):
# Keywords that identify INFO messages to *allow* for concise output
SUMMARY_KEYWORDS_PRECISE = [
"Test run completed",
"Test succeeded",
"Test failed",
"Rule comparison successful",
"Rule comparison failed",
"ProcessingEngine finished. Summary:",
"Autotest Context:",
"Parsed CLI arguments:",
"Prediction completed successfully.",
"Processing completed.",
"Signal 'all_tasks_finished' received",
"final status:", # To catch "Asset '...' final status:"
"User settings file not found:",
"MainPanelWidget: Default output directory set to:",
# Search related (as per original filter)
"Searching logs for term",
"Search term ",
"Found ",
"No tracebacks found in the logs.",
"--- End Log Analysis ---",
"Log analysis completed.",
]
# Patterns for case-insensitive rejection
REJECT_PATTERNS_LOWER = [
# Original debug prefixes (ensure these are still relevant or merge if needed)
"debug:", "orchestrator_trace:", "configuration_debug:", "app_debug:", "output_org_debug:",
# Iterative / Per-item / Per-file details / Intermediate steps
": item ", # Catches "Asset '...', Item X/Y"
"item successfully processed and saved",
", file '", # Catches "Asset '...', File '...'"
": processing regular map",
": found source file:",
": determined source bit depth:",
"successfully processed regular map",
"successfully created mergetaskdefinition",
": preparing processing items",
": finished preparing items. found",
": starting core item processing loop",
", task '",
": processing merge task",
"loaded from context:",
"using dimensions from first loaded input",
"successfully merged inputs into image",
"successfully processed merge task",
"mergedtaskprocessorstage result",
"calling savevariantsstage",
"savevariantsstage result",
"adding final details to context",
": finished core item processing loop",
": copied variant",
": copied extra file",
": successfully organized",
": output organization complete.",
": metadata saved to",
"worker thread: starting processing for rule:",
"preparing workspace for input:",
"input is a supported archive",
"calling processingengine.process with rule",
"calculated sha5 for",
"calculated next incrementing value for",
"verify: processingengine.process called",
": effective supplier set to",
": metadata initialized.",
"path",
"\\asset_processor",
": file rules queued for processing",
"successfully loaded base application settings",
"successfully loaded and merged asset_type_definitions",
"successfully loaded and merged file_type_definitions",
"starting rule-based prediction for:",
"rule-based prediction finished successfully for",
"finished rule-based prediction run for",
"updating model with rule-based results for source:",
"debug task ",
"worker thread: finished processing for rule:",
"task finished signal received for",
# Autotest step markers (not global summaries)
]
def filter(self, record):
# Allow CRITICAL, ERROR, WARNING unconditionally
if record.levelno >= logging.WARNING:
return True
if record.levelno == logging.INFO:
msg = record.getMessage()
msg_lower = msg.lower() # For case-insensitive pattern rejection
# 1. Explicitly REJECT if message contains verbose patterns (case-insensitive)
for pattern in self.REJECT_PATTERNS_LOWER: # Use the new list
if pattern in msg_lower:
return False # Reject
# 2. Then, if not rejected, ALLOW only if message contains precise summary keywords
for keyword in self.SUMMARY_KEYWORDS_PRECISE: # Use the new list
if keyword in msg: # Original message for case-sensitive summary keywords if needed
return True # Allow
# 3. Reject all other INFO messages that don't match precise summary keywords
return False
# Reject levels below INFO (e.g., DEBUG) by default for this handler
return False
# --- Root Logger Configuration for Concise Console Output ---
def setup_autotest_logging():
"""
Configures the root logger for concise console output for autotest.py.
This ensures that only essential summary information, warnings, and errors
are displayed on the console by default.
"""
root_logger = logging.getLogger()
# 1. Remove all existing handlers from the root logger.
# This prevents interference from other logging configurations.
for handler in root_logger.handlers[:]:
root_logger.removeHandler(handler)
handler.close() # Close handler before removing
# 2. Set the root logger's level to DEBUG to capture everything for the memory handler.
# The console handler will still filter down to INFO/selected.
root_logger.setLevel(logging.DEBUG) # Changed from INFO to DEBUG
# 3. Create a new StreamHandler for sys.stdout (for concise console output).
console_handler = logging.StreamHandler(sys.stdout)
# 4. Set this console handler's level to INFO.
# The filter will then decide which INFO messages to display on console.
console_handler.setLevel(logging.INFO)
# 5. Apply the enhanced InfoSummaryFilter to the console handler.
info_filter = InfoSummaryFilter()
console_handler.addFilter(info_filter)
# 6. Set a concise formatter for the console handler.
formatter = logging.Formatter('[%(levelname)s] %(message)s')
console_handler.setFormatter(formatter)
# 7. Add this newly configured console handler to the root_logger.
root_logger.addHandler(console_handler)
# 8. Setup the MemoryHandler
global autotest_memory_handler # Declare usage of global
autotest_memory_handler = logging.handlers.MemoryHandler(
capacity=20000, # Increased capacity
flushLevel=logging.CRITICAL + 1, # Prevent automatic flushing
target=None # Does not flush to another handler
)
autotest_memory_handler.setLevel(logging.DEBUG) # Capture all logs from DEBUG up
# Not adding a formatter here, will format in _process_and_display_logs
# 9. Add the memory handler to the root logger.
root_logger.addHandler(autotest_memory_handler)
# Call the setup function early in the script's execution.
setup_autotest_logging()
# Logger for autotest.py's own messages.
# Messages from this logger will propagate to the root logger and be filtered
# by the console_handler configured above.
# Setting its level to DEBUG allows autotest.py to generate DEBUG messages,
# which won't appear on the concise console (due to handler's INFO level)
# but can be captured by other handlers (e.g., the GUI's log console).
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG) # Ensure autotest.py can generate DEBUGs for other handlers
# Note: The GUI's log console (e.g., self.main_window.log_console.log_console_output)
# is assumed to capture all logs (including DEBUG) from various modules.
# The _process_and_display_logs function then uses these comprehensive logs for the --search feature.
# This root logger setup primarily makes autotest.py's direct console output concise,
# ensuring that only filtered, high-level information appears on stdout by default.
# --- End of Root Logger Configuration ---
# --- Argument Parsing ---
def parse_arguments():
"""Parses command-line arguments for the autotest script."""
parser = argparse.ArgumentParser(description="Automated test script for Asset Processor GUI.")
parser.add_argument(
"--zipfile",
type=Path,
default=project_root / "TestFiles" / "BoucleChunky001.zip",
help="Path to the test asset ZIP file. Default: TestFiles/BoucleChunky001.zip"
)
parser.add_argument(
"--preset",
type=str,
default="Dinesen", # This should match a preset name in the application
help="Name of the preset to use. Default: Dinesen"
)
parser.add_argument(
"--expectedrules",
type=Path,
default=project_root / "TestFiles" / "Test-BoucleChunky001.json",
help="Path to the JSON file with expected rules. Default: TestFiles/Test-BoucleChunky001.json"
)
parser.add_argument(
"--outputdir",
type=Path,
default=project_root / "TestFiles" / "TestOutputs" / "BoucleChunkyOutput",
help="Path for processing output. Default: TestFiles/TestOutputs/BoucleChunkyOutput"
)
parser.add_argument(
"--search",
type=str,
default=None,
help="Optional log search term. Default: None"
)
parser.add_argument(
"--additional-lines",
type=int,
default=0,
help="Context lines for log search. Default: 0"
)
return parser.parse_args()
class AutoTester(QObject):
"""
Handles the automated testing process for the Asset Processor GUI.
"""
# Define signals if needed, e.g., for specific test events
# test_step_completed = Signal(str)
def __init__(self, app_instance: App, cli_args: argparse.Namespace):
super().__init__()
self.app_instance: App = app_instance
self.main_window: MainWindow = app_instance.main_window
self.cli_args: argparse.Namespace = cli_args
self.event_loop = QEventLoop(self)
self.prediction_poll_timer = QTimer(self)
self.expected_rules_data: Dict[str, Any] = {}
self.test_step: str = "INIT" # Possible values: INIT, LOADING_ZIP, SELECTING_PRESET, AWAITING_PREDICTION, PREDICTION_COMPLETE, COMPARING_RULES, STARTING_PROCESSING, AWAITING_PROCESSING, PROCESSING_COMPLETE, CHECKING_OUTPUT, ANALYZING_LOGS, DONE
if not self.main_window:
logger.error("MainWindow instance not found in App. Cannot proceed.")
self.cleanup_and_exit(success=False)
return
# Connect signals
if hasattr(self.app_instance, 'all_tasks_finished') and isinstance(self.app_instance.all_tasks_finished, SignalInstance): # bound signals are SignalInstance, not Signal
self.app_instance.all_tasks_finished.connect(self._on_all_tasks_finished)
else:
logger.warning("App instance does not have 'all_tasks_finished' signal or it's not a Signal. Processing completion might not be detected.")
self._load_expected_rules()
def _load_expected_rules(self) -> None:
"""Loads the expected rules from the JSON file specified by cli_args."""
self.test_step = "LOADING_EXPECTED_RULES"
logger.debug(f"Loading expected rules from: {self.cli_args.expectedrules}")
try:
with open(self.cli_args.expectedrules, 'r') as f:
self.expected_rules_data = json.load(f)
logger.debug("Expected rules loaded successfully.")
except FileNotFoundError:
logger.error(f"Expected rules file not found: {self.cli_args.expectedrules}")
self.cleanup_and_exit(success=False)
except json.JSONDecodeError as e:
logger.error(f"Error decoding expected rules JSON: {e}")
self.cleanup_and_exit(success=False)
except Exception as e:
logger.error(f"An unexpected error occurred while loading expected rules: {e}")
self.cleanup_and_exit(success=False)
def run_test(self) -> None:
"""Orchestrates the test steps."""
logger.info("Starting test run...")
if not self.expected_rules_data: # Ensure rules were loaded
logger.error("Expected rules not loaded. Aborting test.")
self.cleanup_and_exit(success=False)
return
# Add a specific summary log for essential context
logger.info(f"Autotest Context: Input='{self.cli_args.zipfile.name}', Preset='{self.cli_args.preset}', Output='{self.cli_args.outputdir}'")
# Step 1: Load ZIP
self.test_step = "LOADING_ZIP"
logger.info(f"Step 1: Loading ZIP file: {self.cli_args.zipfile}") # KEEP INFO - Passes filter
if not self.cli_args.zipfile.exists():
logger.error(f"ZIP file not found: {self.cli_args.zipfile}")
self.cleanup_and_exit(success=False)
return
try:
# Assuming add_input_paths can take a list of strings or Path objects
self.main_window.add_input_paths([str(self.cli_args.zipfile)])
logger.debug("ZIP file loading initiated.")
except Exception as e:
logger.error(f"Error during ZIP file loading: {e}")
self.cleanup_and_exit(success=False)
return
# Step 2: Select Preset
self.test_step = "SELECTING_PRESET"
logger.info(f"Step 2: Selecting preset: {self.cli_args.preset}") # KEEP INFO - Passes filter
preset_found = False
preset_list_widget = self.main_window.preset_editor_widget.editor_preset_list
for i in range(preset_list_widget.count()):
item = preset_list_widget.item(i)
if item and item.text() == self.cli_args.preset:
preset_list_widget.setCurrentItem(item)
logger.debug(f"Preset '{self.cli_args.preset}' selected.")
preset_found = True
break
if not preset_found:
logger.error(f"Preset '{self.cli_args.preset}' not found in the list.")
available_presets = [preset_list_widget.item(i).text() for i in range(preset_list_widget.count())]
logger.debug(f"Available presets: {available_presets}")
self.cleanup_and_exit(success=False)
return
# Step 3: Await Prediction Completion
self.test_step = "AWAITING_PREDICTION"
logger.debug("Step 3: Awaiting prediction completion...")
self.prediction_poll_timer.timeout.connect(self._check_prediction_status)
self.prediction_poll_timer.start(500) # Poll every 500ms
# Use a QTimer to allow event loop to process while waiting for this step
# This ensures that the _check_prediction_status can be called.
# We will exit this event_loop from _check_prediction_status when prediction is done.
logger.debug("Starting event loop for prediction...")
self.event_loop.exec() # This loop is quit by _check_prediction_status
self.prediction_poll_timer.stop()
logger.debug("Event loop for prediction finished.")
if self.test_step != "PREDICTION_COMPLETE":
logger.error(f"Prediction did not complete as expected. Current step: {self.test_step}")
# Check if there were any pending predictions that never cleared
if hasattr(self.main_window, '_pending_predictions'):
logger.error(f"Pending predictions at timeout: {self.main_window._pending_predictions}")
self.cleanup_and_exit(success=False)
return
logger.info("Prediction completed successfully.") # KEEP INFO - Passes filter
# Step 4: Retrieve & Compare Rulelist
self.test_step = "COMPARING_RULES"
logger.info("Step 4: Retrieving and Comparing Rules...") # KEEP INFO - Passes filter
actual_source_rules_list: List[SourceRule] = self.main_window.unified_model.get_all_source_rules()
actual_rules_obj = actual_source_rules_list # Keep the SourceRule list for processing
comparable_actual_rules = self._convert_rules_to_comparable(actual_source_rules_list)
if not self._compare_rules(comparable_actual_rules, self.expected_rules_data):
logger.error("Rule comparison failed. See logs for details.")
self.cleanup_and_exit(success=False)
return
logger.info("Rule comparison successful.") # KEEP INFO - Passes filter
# Step 5: Start Processing
self.test_step = "START_PROCESSING"
logger.info("Step 5: Starting Processing...") # KEEP INFO - Passes filter
processing_settings = {
"output_dir": str(self.cli_args.outputdir), # Ensure it's a string for JSON/config
"overwrite": True,
"workers": 1,
"blender_enabled": False # Basic test, no Blender
}
try:
Path(self.cli_args.outputdir).mkdir(parents=True, exist_ok=True)
logger.debug(f"Ensured output directory exists: {self.cli_args.outputdir}")
except Exception as e:
logger.error(f"Could not create output directory {self.cli_args.outputdir}: {e}")
self.cleanup_and_exit(success=False)
return
if hasattr(self.main_window, 'start_backend_processing') and isinstance(self.main_window.start_backend_processing, SignalInstance):
logger.debug(f"Emitting start_backend_processing with rules count: {len(actual_rules_obj)} and settings: {processing_settings}")
self.main_window.start_backend_processing.emit(actual_rules_obj, processing_settings)
else:
logger.error("'start_backend_processing' signal not found on MainWindow. Cannot start processing.")
self.cleanup_and_exit(success=False)
return
# Step 6: Await Processing Completion
self.test_step = "AWAIT_PROCESSING"
logger.debug("Step 6: Awaiting processing completion...")
self.event_loop.exec() # This loop is quit by _on_all_tasks_finished
if self.test_step != "PROCESSING_COMPLETE":
logger.error(f"Processing did not complete as expected. Current step: {self.test_step}")
self.cleanup_and_exit(success=False)
return
logger.info("Processing completed.") # KEEP INFO - Passes filter
# Step 7: Check Output Path
self.test_step = "CHECK_OUTPUT"
logger.info(f"Step 7: Checking output path: {self.cli_args.outputdir}") # KEEP INFO - Passes filter
output_path = Path(self.cli_args.outputdir)
if not output_path.exists() or not output_path.is_dir():
logger.error(f"Output directory {output_path} does not exist or is not a directory.")
self.cleanup_and_exit(success=False)
return
output_items = list(output_path.iterdir())
if not output_items:
logger.warning(f"Output directory {output_path} is empty. This might be a test failure depending on the case.")
# For a more specific check, one might iterate through actual_rules_obj
# and verify if subdirectories matching asset_name exist.
# e.g. for asset_rule in source_rule.assets:
# expected_asset_dir = output_path / asset_rule.asset_name
# if not expected_asset_dir.is_dir(): logger.error(...)
else:
logger.debug(f"Found {len(output_items)} item(s) in output directory:")
for item in output_items:
logger.debug(f" - {item.name} ({'dir' if item.is_dir() else 'file'})")
logger.info("Output path check completed.") # KEEP INFO - Passes filter
# Step 8: Retrieve & Analyze Logs
self.test_step = "CHECK_LOGS"
logger.debug("Step 8: Retrieving and Analyzing Logs...")
all_logs_text = ""
if self.main_window.log_console and self.main_window.log_console.log_console_output:
all_logs_text = self.main_window.log_console.log_console_output.toPlainText()
else:
logger.warning("Log console or output widget not found. Cannot retrieve logs.")
self._process_and_display_logs(all_logs_text)
logger.info("Log analysis completed.")
# Final Step
logger.info("Test run completed successfully.") # KEEP INFO - Passes filter
self.cleanup_and_exit(success=True)
@Slot()
def _check_prediction_status(self) -> None:
"""Polls the main window for pending predictions."""
# logger.debug(f"Checking prediction status. Pending: {self.main_window._pending_predictions if hasattr(self.main_window, '_pending_predictions') else 'N/A'}")
if hasattr(self.main_window, '_pending_predictions'):
if not self.main_window._pending_predictions: # Assuming _pending_predictions is a list/dict that's empty when done
logger.debug("No pending predictions. Prediction assumed complete.")
self.test_step = "PREDICTION_COMPLETE"
if self.event_loop.isRunning():
self.event_loop.quit()
# else:
# logger.debug(f"Still awaiting predictions: {len(self.main_window._pending_predictions)} remaining.")
else:
logger.warning("'_pending_predictions' attribute not found on MainWindow. Cannot check prediction status automatically.")
# As a fallback, if the attribute is missing, we might assume prediction is instant or needs manual check.
# For now, let's assume it means it's done if the attribute is missing, but this is risky.
# A better approach would be to have a clear signal from MainWindow when predictions are done.
self.test_step = "PREDICTION_COMPLETE" # Risky assumption
if self.event_loop.isRunning():
self.event_loop.quit()
@Slot(int, int, int)
def _on_all_tasks_finished(self, processed_count: int, skipped_count: int, failed_count: int) -> None:
"""Slot for App.all_tasks_finished signal."""
logger.info(f"Signal 'all_tasks_finished' received: Processed={processed_count}, Skipped={skipped_count}, Failed={failed_count}") # KEEP INFO - Passes filter
if self.test_step == "AWAIT_PROCESSING":
logger.debug("Processing completion signal received.") # Covered by the summary log above
if failed_count > 0:
logger.error(f"Processing finished with {failed_count} failed task(s).")
# Even if tasks failed, the test might pass based on output checks.
# The error is logged for information.
self.test_step = "PROCESSING_COMPLETE"
if self.event_loop.isRunning():
self.event_loop.quit()
else:
logger.warning(f"Signal 'all_tasks_finished' received at an unexpected test step: '{self.test_step}'. Counts: P={processed_count}, S={skipped_count}, F={failed_count}")
def _convert_rules_to_comparable(self, source_rules_list: List[SourceRule]) -> Dict[str, Any]:
"""
Converts a list of SourceRule objects to a dictionary structure
suitable for comparison with the expected_rules.json.
"""
logger.debug(f"Converting {len(source_rules_list)} SourceRule objects to comparable dictionary...")
comparable_sources_list = []
for source_rule_obj in source_rules_list:
comparable_asset_list = []
# source_rule_obj.assets is List[AssetRule]
for asset_rule_obj in source_rule_obj.assets:
comparable_file_list = []
# asset_rule_obj.files is List[FileRule]
for file_rule_obj in asset_rule_obj.files:
comparable_file_list.append({
"file_path": file_rule_obj.file_path,
"item_type": file_rule_obj.item_type,
"target_asset_name_override": file_rule_obj.target_asset_name_override
})
comparable_asset_list.append({
"asset_name": asset_rule_obj.asset_name,
"asset_type": asset_rule_obj.asset_type,
"files": comparable_file_list
})
comparable_sources_list.append({
"input_path": Path(source_rule_obj.input_path).name, # Use only the filename
"supplier_identifier": source_rule_obj.supplier_identifier,
"preset_name": source_rule_obj.preset_name,
"assets": comparable_asset_list
})
logger.debug("Conversion to comparable dictionary finished.")
return {"source_rules": comparable_sources_list}
def _compare_rule_item(self, actual_item: Dict[str, Any], expected_item: Dict[str, Any], item_type_name: str, parent_context: str = "") -> bool:
"""
Recursively compares an individual actual rule item dictionary with an expected rule item dictionary.
Logs differences and returns True if they match, False otherwise.
"""
item_match = True
identifier = ""
if item_type_name == "SourceRule":
identifier = expected_item.get('input_path', f'UnknownSource_at_{parent_context}')
elif item_type_name == "AssetRule":
identifier = expected_item.get('asset_name', f'UnknownAsset_at_{parent_context}')
elif item_type_name == "FileRule":
identifier = expected_item.get('file_path', f'UnknownFile_at_{parent_context}')
current_context = f"{parent_context}/{identifier}" if parent_context else identifier
# Log Extra Fields: Iterate through keys in actual_item.
# If a key is in actual_item but not in expected_item (and is not a list container like "assets" or "files"),
# log this as an informational message.
for key in actual_item.keys():
if key not in expected_item and key not in ["assets", "files"]:
logger.debug(f"Field '{key}' present in actual {item_type_name} ({current_context}) but not specified in expected. Value: '{actual_item[key]}'")
# Check Expected Fields: Iterate through keys in expected_item.
for key, expected_value in expected_item.items():
if key not in actual_item:
logger.error(f"Missing expected field '{key}' in actual {item_type_name} ({current_context}).")
item_match = False
continue # Continue to check other fields in the expected_item
actual_value = actual_item[key]
if key == "assets": # List of AssetRule dictionaries
if not self._compare_list_of_rules(actual_value, expected_value, "AssetRule", current_context, "asset_name"):
item_match = False
elif key == "files": # List of FileRule dictionaries
if not self._compare_list_of_rules(actual_value, expected_value, "FileRule", current_context, "file_path"):
item_match = False
else: # Regular field comparison
if actual_value != expected_value:
# Handle None vs "None" string for preset_name specifically if it's a common issue
if key == "preset_name" and actual_value is None and expected_value == "None":
logger.debug(f"Field '{key}' in {item_type_name} ({current_context}): Actual is None, Expected is string \"None\". Treating as match for now.")
elif key == "target_asset_name_override" and actual_value is not None and expected_value is None:
# If actual has a value (e.g. parent asset name) and expected is null/None,
# this is a mismatch according to strict comparison.
# For a more lenient check, this logic could be adjusted here.
# Current strict comparison will flag this as error, which is what the logs show.
logger.error(f"Value mismatch for field '{key}' in {item_type_name} ({current_context}): Actual='{actual_value}', Expected='{expected_value}'.")
item_match = False
else:
logger.error(f"Value mismatch for field '{key}' in {item_type_name} ({current_context}): Actual='{actual_value}', Expected='{expected_value}'.")
item_match = False
return item_match
def _compare_list_of_rules(self, actual_list: List[Dict[str, Any]], expected_list: List[Dict[str, Any]], item_type_name: str, parent_context: str, item_key_field: str) -> bool:
"""
Compares a list of actual rule items against a list of expected rule items.
Items are matched by a key field (e.g., 'asset_name' or 'file_path').
Order independent for matching, but logs count mismatches.
"""
list_match = True
if not isinstance(actual_list, list) or not isinstance(expected_list, list):
logger.error(f"Type mismatch for list of {item_type_name}s in {parent_context}. Expected lists.")
return False
if len(actual_list) != len(expected_list):
logger.error(f"Mismatch in number of {item_type_name}s for {parent_context}. Actual: {len(actual_list)}, Expected: {len(expected_list)}.")
list_match = False # Count mismatch is an error
# If counts differ, we still try to match what we can to provide more detailed feedback,
# but the overall list_match will remain False.
actual_items_map = {item.get(item_key_field): item for item in actual_list if item.get(item_key_field) is not None}
# Keep track of expected items that found a match to identify missing ones more easily
matched_expected_keys = set()
for expected_item in expected_list:
expected_key_value = expected_item.get(item_key_field)
if expected_key_value is None:
logger.error(f"Expected {item_type_name} in {parent_context} is missing key field '{item_key_field}'. Cannot compare this item: {expected_item}")
list_match = False # This specific expected item cannot be processed
continue
actual_item = actual_items_map.get(expected_key_value)
if actual_item:
matched_expected_keys.add(expected_key_value)
if not self._compare_rule_item(actual_item, expected_item, item_type_name, parent_context):
list_match = False # Individual item comparison failed
else:
logger.error(f"Expected {item_type_name} with {item_key_field} '{expected_key_value}' not found in actual items for {parent_context}.")
list_match = False
# Identify actual items that were not matched by any expected item
# This is useful if len(actual_list) >= len(expected_list) but some actual items are "extra"
for actual_key_value, actual_item_data in actual_items_map.items():
if actual_key_value not in matched_expected_keys:
logger.debug(f"Extra actual {item_type_name} with {item_key_field} '{actual_key_value}' found in {parent_context} (not in expected list or already matched).")
if len(actual_list) != len(expected_list): # If counts already flagged a mismatch, this is just detail
pass
else: # Counts matched, but content didn't align perfectly by key
list_match = False
return list_match
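# Worked example: with item_key_field="asset_name",
#   actual   = [{"asset_name": "Wood01"}, {"asset_name": "Wood02"}]
#   expected = [{"asset_name": "Wood02"}, {"asset_name": "Wood01"}]
# passes despite the different order, because items are paired by key before
# _compare_rule_item runs; an expected "Wood03" with no actual counterpart is
# logged as an error and fails the list comparison.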
def _compare_rules(self, actual_rules_data: Dict[str, Any], expected_rules_data: Dict[str, Any]) -> bool:
"""
Compares the actual rule data (converted from live SourceRule objects)
with the expected rule data (loaded from JSON).
"""
logger.debug("Comparing actual rules with expected rules...")
actual_source_rules = actual_rules_data.get("source_rules", []) if actual_rules_data else []
expected_source_rules = expected_rules_data.get("source_rules", []) if expected_rules_data else []
if not isinstance(actual_source_rules, list):
logger.error(f"Actual 'source_rules' is not a list. Found type: {type(actual_source_rules)}. Comparison aborted.")
return False # Cannot compare if actual data is malformed
if not isinstance(expected_source_rules, list):
logger.error(f"Expected 'source_rules' is not a list. Found type: {type(expected_source_rules)}. Test configuration error. Comparison aborted.")
return False # Test setup error
if not expected_source_rules and not actual_source_rules:
logger.debug("Both expected and actual source rules lists are empty. Considered a match.")
return True
if len(actual_source_rules) != len(expected_source_rules):
logger.error(f"Mismatch in the number of source rules. Actual: {len(actual_source_rules)}, Expected: {len(expected_source_rules)}.")
# Optionally, log more details about which list is longer/shorter or identifiers if available
return False
overall_match_status = True
for i in range(len(expected_source_rules)):
actual_sr = actual_source_rules[i]
expected_sr = expected_source_rules[i]
# For context, use input_path or an index
source_rule_context = expected_sr.get('input_path', f"SourceRule_index_{i}")
if not self._compare_rule_item(actual_sr, expected_sr, "SourceRule", parent_context=source_rule_context):
overall_match_status = False
# Continue checking other source rules to log all discrepancies
if overall_match_status:
logger.debug("All rules match the expected criteria.") # Covered by "Rule comparison successful" summary
else:
logger.warning("One or more rules did not match the expected criteria. See logs above for details.")
return overall_match_status
def _process_and_display_logs(self, logs_text: str) -> None: # logs_text is no longer the primary source for search
"""
Processes and displays logs, potentially filtering them if --search is used.
Also checks for tracebacks.
Sources logs from the in-memory handler for search and detailed analysis.
"""
logger.debug("--- Log Analysis ---")
global autotest_memory_handler # Access the global handler
log_records = []
if autotest_memory_handler and autotest_memory_handler.buffer:
log_records = autotest_memory_handler.buffer
formatted_log_lines = []
# Define a consistent formatter, similar to what might be expected or useful for search
record_formatter = logging.Formatter('%(asctime)s [%(levelname)s] %(name)s: %(message)s')
# Default asctime format includes milliseconds.
for record in log_records:
formatted_log_lines.append(record_formatter.format(record))
lines_for_search_and_traceback = formatted_log_lines
if not lines_for_search_and_traceback:
logger.warning("No log records found in memory handler. No analysis to perform.")
# Still check the console logs_text for tracebacks if it exists, as a fallback
# or if some critical errors didn't make it to the memory handler (unlikely with DEBUG level)
if logs_text:
logger.debug("Checking provided logs_text (from console) for tracebacks as a fallback.")
console_lines = logs_text.splitlines()
traceback_found_console = False
for i, line in enumerate(console_lines):
if line.strip().startswith("Traceback (most recent call last):"):
logger.error(f"!!! TRACEBACK DETECTED in console logs_text around line {i+1} !!!")
traceback_found_console = True
if traceback_found_console:
logger.warning("A traceback was found in the console logs_text.")
else:
logger.info("No tracebacks found in the console logs_text either.")
logger.info("--- End Log Analysis ---")
return
traceback_found = False
if self.cli_args.search:
logger.info(f"Searching {len(lines_for_search_and_traceback)} in-memory log lines for term '{self.cli_args.search}' with {self.cli_args.additional_lines} context lines.")
matched_line_indices = [i for i, line in enumerate(lines_for_search_and_traceback) if self.cli_args.search in line]
if not matched_line_indices:
logger.info(f"Search term '{self.cli_args.search}' not found in in-memory logs.")
else:
logger.info(f"Found {len(matched_line_indices)} match(es) for '{self.cli_args.search}' in in-memory logs:")
collected_lines_to_print = set()
for match_idx in matched_line_indices:
start_idx = max(0, match_idx - self.cli_args.additional_lines)
end_idx = min(len(lines_for_search_and_traceback), match_idx + self.cli_args.additional_lines + 1)
for i in range(start_idx, end_idx):
# Use i directly as index for lines_for_search_and_traceback, line number is for display
collected_lines_to_print.add(f"L{i+1:05d}: {lines_for_search_and_traceback[i]}")
print("--- Filtered Log Output (from Memory Handler) ---")
for line_to_print in sorted(list(collected_lines_to_print)):
print(line_to_print)
print("--- End Filtered Log Output ---")
# Note: there is intentionally no default "last N lines" dump; filtered output is only printed when --search is given.
# Traceback Check (on lines_for_search_and_traceback)
for i, line in enumerate(lines_for_search_and_traceback):
if line.strip().startswith("Traceback (most recent call last):") or "Traceback (most recent call last):" in line : # More robust check
logger.error(f"!!! TRACEBACK DETECTED in in-memory logs around line index {i} !!!")
logger.error(f"Line content: {line}")
traceback_found = True
if traceback_found:
logger.warning("A traceback was found in the in-memory logs. This usually indicates a significant issue.")
else:
logger.info("No tracebacks found in the in-memory logs.") # This refers to the comprehensive memory logs
logger.info("--- End Log Analysis ---")
def cleanup_and_exit(self, success: bool = True) -> None:
"""Cleans up and exits the application."""
global autotest_memory_handler
if autotest_memory_handler:
logger.debug("Clearing memory log handler buffer and removing handler.")
autotest_memory_handler.buffer = [] # Clear buffer
logging.getLogger().removeHandler(autotest_memory_handler) # Remove handler
autotest_memory_handler.close() # close() flushes buffered records to the target if one is set, then detaches the handler
autotest_memory_handler = None
logger.info(f"Test {'succeeded' if success else 'failed'}. Cleaning up and exiting...") # KEEP INFO - Passes filter
q_app = QCoreApplication.instance()
if q_app:
q_app.quit()
sys.exit(0 if success else 1)
# --- Main Execution ---
def main():
"""Main function to run the autotest script."""
cli_args = parse_arguments()
# Logger is configured above, this will now use the new filtered setup
logger.info(f"Parsed CLI arguments: {cli_args}") # KEEP INFO - Passes filter
# Clean and ensure output directory exists
output_dir_path = Path(cli_args.outputdir)
logger.debug(f"Preparing output directory: {output_dir_path}")
try:
if output_dir_path.exists():
logger.debug(f"Output directory {output_dir_path} exists. Cleaning its contents...")
for item in output_dir_path.iterdir():
if item.is_dir():
shutil.rmtree(item)
logger.debug(f"Removed directory: {item}")
else:
item.unlink()
logger.debug(f"Removed file: {item}")
logger.debug(f"Contents of {output_dir_path} cleaned.")
else:
logger.debug(f"Output directory {output_dir_path} does not exist. Creating it.")
output_dir_path.mkdir(parents=True, exist_ok=True) # Ensure it exists after cleaning/if it didn't exist
logger.debug(f"Output directory {output_dir_path} is ready.")
except Exception as e:
logger.error(f"Could not prepare output directory {output_dir_path}: {e}", exc_info=True)
sys.exit(1)
# Initialize QApplication
# Use QCoreApplication if no GUI elements are directly interacted with by the test logic itself,
# but QApplication is needed if MainWindow or its widgets are constructed and used.
# Since MainWindow is instantiated by App, QApplication is appropriate.
q_app = QApplication.instance()
if not q_app:
q_app = QApplication(sys.argv)
if not q_app: # Still no app
logger.error("Failed to initialize QApplication.")
sys.exit(1)
logger.debug("Initializing main.App()...")
try:
# Instantiate main.App() - this should create MainWindow but not show it by default
# if App is designed to not show GUI unless app.main_window.show() is called.
app_instance = App()
except Exception as e:
logger.error(f"Failed to initialize main.App: {e}", exc_info=True)
sys.exit(1)
if not app_instance.main_window:
logger.error("main.App initialized, but main_window is None. Cannot proceed with test.")
sys.exit(1)
logger.debug("Initializing AutoTester...")
try:
tester = AutoTester(app_instance, cli_args)
except Exception as e:
logger.error(f"Failed to initialize AutoTester: {e}", exc_info=True)
sys.exit(1)
# Use QTimer.singleShot to start the test after the Qt event loop has started.
# This ensures that the Qt environment is fully set up.
logger.debug("Scheduling test run...")
QTimer.singleShot(0, tester.run_test)
logger.debug("Starting Qt application event loop...")
exit_code = q_app.exec()
logger.debug(f"Qt application event loop finished with exit code: {exit_code}")
sys.exit(exit_code)
if __name__ == "__main__":
main()
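For reference, `parse_arguments()` is defined earlier in the file and not shown in this hunk. A minimal sketch consistent with the `cli_args` attributes used above (`outputdir`, `search`, `additional_lines`); the flag spellings are inferred and may differ from the real definition:

import argparse

def parse_arguments():
    """Sketch of the CLI surface used by AutoTester (names inferred, may differ)."""
    parser = argparse.ArgumentParser(description="Headless sanity test for the Asset Processor GUI")
    parser.add_argument("--outputdir", required=True,
                        help="Directory that processed assets are written to (cleaned before the run)")
    parser.add_argument("--search", default=None,
                        help="Substring to look for in the captured in-memory logs")
    parser.add_argument("--additional_lines", type=int, default=0,
                        help="Context lines to print around each --search match")
    return parser.parse_args()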

View File

@@ -1,245 +1,4 @@
{
"ASSET_TYPE_DEFINITIONS": {
"Surface": {
"description": "A single Standard PBR material set for a surface.",
"color": "#1f3e5d",
"examples": [
"Set: Wood01_COL + Wood01_NRM + WOOD01_ROUGH",
"Set: Dif_Concrete + Normal_Concrete + Refl_Concrete"
]
},
"Model": {
"description": "A set that contains models, can include PBR textureset",
"color": "#b67300",
"examples": [
"Single = Chair.fbx",
"Set = Plant02.fbx + Plant02_col + Plant02_SSS"
]
},
"Decal": {
"description": "A alphamasked textureset",
"color": "#68ac68",
"examples": [
"Set = DecalGraffiti01_Col + DecalGraffiti01_Alpha",
"Single = DecalLeakStain03"
]
},
"Atlas": {
"description": "A texture, name usually hints that it's an atlas",
"color": "#955b8b",
"examples": [
"Set = FoliageAtlas01_col + FoliageAtlas01_nrm"
]
},
"UtilityMap": {
"description": "A useful image-asset consisting of only a single texture. Therefor each Utilitymap can only contain a single item.",
"color": "#706b87",
"examples": [
"Single = imperfection.png",
"Single = smudges.png",
"Single = scratches.tif"
]
}
},
"FILE_TYPE_DEFINITIONS": {
"MAP_COL": {
"description": "Color/Albedo Map",
"color": "#ffaa00",
"examples": [
"_col.",
"_basecolor.",
"albedo",
"diffuse"
],
"standard_type": "COL",
"bit_depth_rule": "force_8bit",
"is_grayscale": false,
"keybind": "C"
},
"MAP_NRM": {
"description": "Normal Map",
"color": "#cca2f1",
"examples": [
"_nrm.",
"_normal."
],
"standard_type": "NRM",
"bit_depth_rule": "respect",
"is_grayscale": false,
"keybind": "N"
},
"MAP_METAL": {
"description": "Metalness Map",
"color": "#dcf4f2",
"examples": [
"_metal.",
"_met."
],
"standard_type": "METAL",
"bit_depth_rule": "force_8bit",
"is_grayscale": true,
"keybind": "M"
},
"MAP_ROUGH": {
"description": "Roughness Map",
"color": "#bfd6bf",
"examples": [
"_rough.",
"_rgh.",
"_gloss"
],
"standard_type": "ROUGH",
"bit_depth_rule": "force_8bit",
"is_grayscale": true,
"keybind": "R"
},
"MAP_GLOSS": {
"description": "Glossiness Map",
"color": "#d6bfd6",
"examples": [
"_gloss.",
"_gls."
],
"standard_type": "GLOSS",
"bit_depth_rule": "force_8bit",
"is_grayscale": true,
"keybind": "R"
},
"MAP_AO": {
"description": "Ambient Occlusion Map",
"color": "#e3c7c7",
"examples": [
"_ao.",
"_ambientocclusion."
],
"standard_type": "AO",
"bit_depth_rule": "force_8bit",
"is_grayscale": true
},
"MAP_DISP": {
"description": "Displacement/Height Map",
"color": "#c6ddd5",
"examples": [
"_disp.",
"_height."
],
"standard_type": "DISP",
"bit_depth_rule": "respect",
"is_grayscale": true,
"keybind": "D"
},
"MAP_REFL": {
"description": "Reflection/Specular Map",
"color": "#c2c2b9",
"examples": [
"_refl.",
"_specular."
],
"standard_type": "REFL",
"bit_depth_rule": "force_8bit",
"is_grayscale": true,
"keybind": "M"
},
"MAP_SSS": {
"description": "Subsurface Scattering Map",
"color": "#a0d394",
"examples": [
"_sss.",
"_subsurface."
],
"standard_type": "SSS",
"bit_depth_rule": "respect",
"is_grayscale": true
},
"MAP_FUZZ": {
"description": "Fuzz/Sheen Map",
"color": "#a2d1da",
"examples": [
"_fuzz.",
"_sheen."
],
"standard_type": "FUZZ",
"bit_depth_rule": "force_8bit",
"is_grayscale": true
},
"MAP_IDMAP": {
"description": "ID Map (for masking)",
"color": "#ca8fb4",
"examples": [
"_id.",
"_matid."
],
"standard_type": "IDMAP",
"bit_depth_rule": "force_8bit",
"is_grayscale": false
},
"MAP_MASK": {
"description": "Generic Mask Map",
"color": "#c6e2bf",
"examples": [
"_mask."
],
"standard_type": "MASK",
"bit_depth_rule": "force_8bit",
"is_grayscale": true
},
"MAP_IMPERFECTION": {
"description": "Imperfection Map (scratches, dust)",
"color": "#e6d1a6",
"examples": [
"_imp.",
"_imperfection.",
"splatter",
"scratches",
"smudges",
"hairs",
"fingerprints"
],
"standard_type": "IMPERFECTION",
"bit_depth_rule": "force_8bit",
"is_grayscale": true
},
"MODEL": {
"description": "3D Model File",
"color": "#3db2bd",
"examples": [
".fbx",
".obj"
],
"standard_type": "",
"bit_depth_rule": "",
"is_grayscale": false
},
"EXTRA": {
"description": "asset previews or metadata",
"color": "#8c8c8c",
"examples": [
".txt",
".zip",
"preview.",
"_flat.",
"_sphere.",
"_Cube.",
"thumb"
],
"standard_type": "",
"bit_depth_rule": "",
"is_grayscale": false,
"keybind": "E"
},
"FILE_IGNORE": {
"description": "File to be ignored",
"color": "#673d35",
"examples": [
"Thumbs.db",
".DS_Store"
],
"standard_type": "",
"bit_depth_rule": "",
"is_grayscale": false,
"keybind": "X"
}
},
"TARGET_FILENAME_PATTERN": "{base_name}_{map_type}_{resolution}.{ext}",
"RESPECT_VARIANT_MAP_TYPES": [
"COL"
@@ -268,7 +27,7 @@
"OUTPUT_FORMAT_8BIT": "png",
"MAP_MERGE_RULES": [
{
"output_map_type": "NRMRGH",
"output_map_type": "MAP_NRMRGH",
"inputs": {
"R": "MAP_NRM",
"G": "MAP_NRM",
@@ -284,5 +43,13 @@
],
"CALCULATE_STATS_RESOLUTION": "1K",
"DEFAULT_ASSET_CATEGORY": "Surface",
"TEMP_DIR_PREFIX": "_PROCESS_ASSET_"
"TEMP_DIR_PREFIX": "_PROCESS_ASSET_",
"INITIAL_SCALING_MODE": "POT_DOWNSCALE",
"MERGE_DIMENSION_MISMATCH_STRATEGY": "USE_LARGEST",
"ENABLE_LOW_RESOLUTION_FALLBACK": true,
"LOW_RESOLUTION_THRESHOLD": 512,
"general_settings": {
"invert_normal_map_green_channel_globally": false,
"app_version": "Pre-Alpha"
}
}
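The added keys wire up the new low-resolution fallback. A sketch of how a consumer might gate on them, using the `enable_low_resolution_fallback` and `low_resolution_threshold` properties defined in configuration.py below; the decision rule itself is an assumption, not taken from the processing code:

def should_use_lowres_fallback(cfg, width: int, height: int) -> bool:
    # Assumed gating: fall back when the feature is enabled and either image
    # dimension is at or below LOW_RESOLUTION_THRESHOLD (default 512).
    return cfg.enable_low_resolution_fallback and min(width, height) <= cfg.low_resolution_threshold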

View File

@@ -0,0 +1,44 @@
{
"ASSET_TYPE_DEFINITIONS": {
"Surface": {
"color": "#1f3e5d",
"description": "A single Standard PBR material set for a surface.",
"examples": [
"Set: Wood01_COL + Wood01_NRM + WOOD01_ROUGH",
"Set: Dif_Concrete + Normal_Concrete + Refl_Concrete"
]
},
"Model": {
"color": "#b67300",
"description": "A set that contains models, can include PBR textureset",
"examples": [
"Single = Chair.fbx",
"Set = Plant02.fbx + Plant02_col + Plant02_SSS"
]
},
"Decal": {
"color": "#68ac68",
"description": "A alphamasked textureset",
"examples": [
"Set = DecalGraffiti01_Col + DecalGraffiti01_Alpha",
"Single = DecalLeakStain03"
]
},
"Atlas": {
"color": "#955b8b",
"description": "A texture, name usually hints that it's an atlas",
"examples": [
"Set = FoliageAtlas01_col + FoliageAtlas01_nrm"
]
},
"UtilityMap": {
"color": "#706b87",
"description": "A useful image-asset consisting of only a single texture. Therefor each Utilitymap can only contain a single item.",
"examples": [
"Single = imperfection.png",
"Single = smudges.png",
"Single = scratches.tif"
]
}
}
}

View File

@@ -0,0 +1,208 @@
{
"FILE_TYPE_DEFINITIONS": {
"MAP_COL": {
"bit_depth_rule": "force_8bit",
"color": "#ffaa00",
"description": "Color/Albedo Map",
"examples": [
"_col.",
"_basecolor.",
"albedo",
"diffuse"
],
"is_grayscale": false,
"keybind": "C",
"standard_type": "COL"
},
"MAP_NRM": {
"bit_depth_rule": "respect",
"color": "#cca2f1",
"description": "Normal Map",
"examples": [
"_nrm.",
"_normal."
],
"is_grayscale": false,
"keybind": "N",
"standard_type": "NRM"
},
"MAP_METAL": {
"bit_depth_rule": "force_8bit",
"color": "#dcf4f2",
"description": "Metalness Map",
"examples": [
"_metal.",
"_met."
],
"is_grayscale": true,
"keybind": "M",
"standard_type": "METAL"
},
"MAP_ROUGH": {
"bit_depth_rule": "force_8bit",
"color": "#bfd6bf",
"description": "Roughness Map",
"examples": [
"_rough.",
"_rgh.",
"_gloss"
],
"is_grayscale": true,
"keybind": "R",
"standard_type": "ROUGH"
},
"MAP_GLOSS": {
"bit_depth_rule": "force_8bit",
"color": "#d6bfd6",
"description": "Glossiness Map",
"examples": [
"_gloss.",
"_gls."
],
"is_grayscale": true,
"keybind": "R",
"standard_type": "GLOSS"
},
"MAP_AO": {
"bit_depth_rule": "force_8bit",
"color": "#e3c7c7",
"description": "Ambient Occlusion Map",
"examples": [
"_ao.",
"_ambientocclusion."
],
"is_grayscale": true,
"keybind": "",
"standard_type": "AO"
},
"MAP_DISP": {
"bit_depth_rule": "respect",
"color": "#c6ddd5",
"description": "Displacement/Height Map",
"examples": [
"_disp.",
"_height."
],
"is_grayscale": true,
"keybind": "D",
"standard_type": "DISP"
},
"MAP_REFL": {
"bit_depth_rule": "force_8bit",
"color": "#c2c2b9",
"description": "Reflection/Specular Map",
"examples": [
"_refl.",
"_specular."
],
"is_grayscale": true,
"keybind": "M",
"standard_type": "REFL"
},
"MAP_SSS": {
"bit_depth_rule": "respect",
"color": "#a0d394",
"description": "Subsurface Scattering Map",
"examples": [
"_sss.",
"_subsurface."
],
"is_grayscale": true,
"keybind": "",
"standard_type": "SSS"
},
"MAP_FUZZ": {
"bit_depth_rule": "force_8bit",
"color": "#a2d1da",
"description": "Fuzz/Sheen Map",
"examples": [
"_fuzz.",
"_sheen."
],
"is_grayscale": true,
"keybind": "",
"standard_type": "FUZZ"
},
"MAP_IDMAP": {
"bit_depth_rule": "force_8bit",
"color": "#ca8fb4",
"description": "ID Map (for masking)",
"examples": [
"_id.",
"_matid."
],
"is_grayscale": false,
"keybind": "",
"standard_type": "IDMAP"
},
"MAP_MASK": {
"bit_depth_rule": "force_8bit",
"color": "#c6e2bf",
"description": "Generic Mask Map",
"examples": [
"_mask."
],
"is_grayscale": true,
"keybind": "",
"standard_type": "MASK"
},
"MAP_IMPERFECTION": {
"bit_depth_rule": "force_8bit",
"color": "#e6d1a6",
"description": "Imperfection Map (scratches, dust)",
"examples": [
"_imp.",
"_imperfection.",
"splatter",
"scratches",
"smudges",
"hairs",
"fingerprints"
],
"is_grayscale": true,
"keybind": "",
"standard_type": "IMPERFECTION"
},
"MODEL": {
"bit_depth_rule": "",
"color": "#3db2bd",
"description": "3D Model File",
"examples": [
".fbx",
".obj"
],
"is_grayscale": false,
"keybind": "",
"standard_type": ""
},
"EXTRA": {
"bit_depth_rule": "",
"color": "#8c8c8c",
"description": "asset previews or metadata",
"examples": [
".txt",
".zip",
"preview.",
"_flat.",
"_sphere.",
"_Cube.",
"thumb"
],
"is_grayscale": false,
"keybind": "E",
"standard_type": "EXTRA"
},
"FILE_IGNORE": {
"bit_depth_rule": "",
"color": "#673d35",
"description": "File to be ignored",
"examples": [
"Thumbs.db",
".DS_Store"
],
"is_grayscale": false,
"keybind": "X",
"standard_type": ""
}
}
}

View File

@@ -3,256 +3,256 @@
{
"input": "MessyTextures/Concrete_Damage_Set/concrete_col.png\nMessyTextures/Concrete_Damage_Set/concrete_N.png\nMessyTextures/Concrete_Damage_Set/concrete_rough.jpg\nMessyTextures/Concrete_Damage_Set/height_map_concrete.tif\nMessyTextures/Concrete_Damage_Set/Thumbs.db\nMessyTextures/Fabric_Pattern/pattern_01_diffuse.tga\nMessyTextures/Fabric_Pattern/pattern_01_ao.png\nMessyTextures/Fabric_Pattern/pattern_01_normal.png\nMessyTextures/Fabric_Pattern/notes.txt\nMessyTextures/Fabric_Pattern/variant_blue_diffuse.tga\nMessyTextures/Fabric_Pattern/fabric_flat.jpg",
"output": {
"individual_file_analysis": [
{
"relative_file_path": "MessyTextures/Concrete_Damage_Set/concrete_col.png",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Concrete_Damage_Set"
},
{
"relative_file_path": "MessyTextures/Concrete_Damage_Set/concrete_N.png",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Concrete_Damage_Set"
},
{
"relative_file_path": "MessyTextures/Concrete_Damage_Set/concrete_rough.jpg",
"classified_file_type": "MAP_ROUGH",
"proposed_asset_group_name": "Concrete_Damage_Set"
},
{
"relative_file_path": "MessyTextures/Concrete_Damage_Set/height_map_concrete.tif",
"classified_file_type": "MAP_DISP",
"proposed_asset_group_name": "Concrete_Damage_Set"
},
{
"relative_file_path": "MessyTextures/Concrete_Damage_Set/Thumbs.db",
"classified_file_type": "FILE_IGNORE",
"proposed_asset_group_name": null
},
{
"relative_file_path": "MessyTextures/Fabric_Pattern/pattern_01_diffuse.tga",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Fabric_Pattern_01"
},
{
"relative_file_path": "MessyTextures/Fabric_Pattern/pattern_01_ao.png",
"classified_file_type": "MAP_AO",
"proposed_asset_group_name": "Fabric_Pattern_01"
},
{
"relative_file_path": "MessyTextures/Fabric_Pattern/pattern_01_normal.png",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Fabric_Pattern_01"
},
{
"relative_file_path": "MessyTextures/Fabric_Pattern/notes.txt",
"classified_file_type": "EXTRA",
"proposed_asset_group_name": "Fabric_Pattern_01"
},
{
"relative_file_path": "MessyTextures/Fabric_Pattern/variant_blue_diffuse.tga",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Fabric_Pattern_01"
},
{
"relative_file_path": "MessyTextures/Fabric_Pattern/fabric_flat.jpg",
"classified_file_type": "EXTRA",
"proposed_asset_group_name": "Fabric_Pattern_01"
"individual_file_analysis": [
{
"relative_file_path": "MessyTextures/Concrete_Damage_Set/concrete_col.png",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Concrete_Damage_Set"
},
{
"relative_file_path": "MessyTextures/Concrete_Damage_Set/concrete_N.png",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Concrete_Damage_Set"
},
{
"relative_file_path": "MessyTextures/Concrete_Damage_Set/concrete_rough.jpg",
"classified_file_type": "MAP_ROUGH",
"proposed_asset_group_name": "Concrete_Damage_Set"
},
{
"relative_file_path": "MessyTextures/Concrete_Damage_Set/height_map_concrete.tif",
"classified_file_type": "MAP_DISP",
"proposed_asset_group_name": "Concrete_Damage_Set"
},
{
"relative_file_path": "MessyTextures/Concrete_Damage_Set/Thumbs.db",
"classified_file_type": "FILE_IGNORE",
"proposed_asset_group_name": null
},
{
"relative_file_path": "MessyTextures/Fabric_Pattern/pattern_01_diffuse.tga",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Fabric_Pattern_01"
},
{
"relative_file_path": "MessyTextures/Fabric_Pattern/pattern_01_ao.png",
"classified_file_type": "MAP_AO",
"proposed_asset_group_name": "Fabric_Pattern_01"
},
{
"relative_file_path": "MessyTextures/Fabric_Pattern/pattern_01_normal.png",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Fabric_Pattern_01"
},
{
"relative_file_path": "MessyTextures/Fabric_Pattern/notes.txt",
"classified_file_type": "EXTRA",
"proposed_asset_group_name": "Fabric_Pattern_01"
},
{
"relative_file_path": "MessyTextures/Fabric_Pattern/variant_blue_diffuse.tga",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Fabric_Pattern_01"
},
{
"relative_file_path": "MessyTextures/Fabric_Pattern/fabric_flat.jpg",
"classified_file_type": "EXTRA",
"proposed_asset_group_name": "Fabric_Pattern_01"
}
],
"asset_group_classifications": {
"Concrete_Damage_Set": "Surface",
"Fabric_Pattern_01": "Surface"
}
}
},
{
"input": "SciFi_Drone/Drone_Model.fbx\nSciFi_Drone/Textures/Drone_BaseColor.png\nSciFi_Drone/Textures/Drone_Metallic.png\nSciFi_Drone/Textures/Drone_Roughness.png\nSciFi_Drone/Textures/Drone_Normal.png\nSciFi_Drone/Textures/Drone_Emissive.jpg\nSciFi_Drone/ReferenceImages/concept.jpg",
"output": {
"individual_file_analysis": [
{
"relative_file_path": "SciFi_Drone/Drone_Model.fbx",
"classified_file_type": "MODEL",
"proposed_asset_group_name": "SciFi_Drone"
},
{
"relative_file_path": "SciFi_Drone/Textures/Drone_BaseColor.png",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "SciFi_Drone"
},
{
"relative_file_path": "SciFi_Drone/Textures/Drone_Metallic.png",
"classified_file_type": "MAP_METAL",
"proposed_asset_group_name": "SciFi_Drone"
},
{
"relative_file_path": "SciFi_Drone/Textures/Drone_Roughness.png",
"classified_file_type": "MAP_ROUGH",
"proposed_asset_group_name": "SciFi_Drone"
},
{
"relative_file_path": "SciFi_Drone/Textures/Drone_Normal.png",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "SciFi_Drone"
},
{
"relative_file_path": "SciFi_Drone/Textures/Drone_Emissive.jpg",
"classified_file_type": "EXTRA",
"proposed_asset_group_name": "SciFi_Drone"
},
{
"relative_file_path": "SciFi_Drone/ReferenceImages/concept.jpg",
"classified_file_type": "EXTRA",
"proposed_asset_group_name": "SciFi_Drone"
"individual_file_analysis": [
{
"relative_file_path": "SciFi_Drone/Drone_Model.fbx",
"classified_file_type": "MODEL",
"proposed_asset_group_name": "SciFi_Drone"
},
{
"relative_file_path": "SciFi_Drone/Textures/Drone_BaseColor.png",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "SciFi_Drone"
},
{
"relative_file_path": "SciFi_Drone/Textures/Drone_Metallic.png",
"classified_file_type": "MAP_METAL",
"proposed_asset_group_name": "SciFi_Drone"
},
{
"relative_file_path": "SciFi_Drone/Textures/Drone_Roughness.png",
"classified_file_type": "MAP_ROUGH",
"proposed_asset_group_name": "SciFi_Drone"
},
{
"relative_file_path": "SciFi_Drone/Textures/Drone_Normal.png",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "SciFi_Drone"
},
{
"relative_file_path": "SciFi_Drone/Textures/Drone_Emissive.jpg",
"classified_file_type": "EXTRA",
"proposed_asset_group_name": "SciFi_Drone"
},
{
"relative_file_path": "SciFi_Drone/ReferenceImages/concept.jpg",
"classified_file_type": "EXTRA",
"proposed_asset_group_name": "SciFi_Drone"
}
],
"asset_group_classifications": {
"SciFi_Drone": "Model"
}
}
},
{
"input": "21_hairs_deposits.tif\n22_hairs_fabric.tif\n23_hairs_fibres.tif\n24_hairs_fibres.tif\n25_bonus_isolatedFingerprints.tif\n26_bonus_isolatedPalmprint.tif\n27_metal_aluminum.tif\n28_metal_castIron.tif\n29_scratcehes_deposits_shapes.tif\n30_scratches_deposits.tif",
"output": {
"individual_file_analysis": [
{
"relative_file_path": "21_hairs_deposits.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Hairs_Deposits_21"
},
{
"relative_file_path": "22_hairs_fabric.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Hairs_Fabric_22"
},
{
"relative_file_path": "23_hairs_fibres.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Hairs_Fibres_23"
},
{
"relative_file_path": "24_hairs_fibres.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Hairs_Fibres_24"
},
{
"relative_file_path": "25_bonus_isolatedFingerprints.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Bonus_IsolatedFingerprints_25"
},
{
"relative_file_path": "26_bonus_isolatedPalmprint.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Bonus_IsolatedPalmprint_26"
},
{
"relative_file_path": "27_metal_aluminum.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Metal_Aluminum_27"
},
{
"relative_file_path": "28_metal_castIron.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Metal_CastIron_28"
},
{
"relative_file_path": "29_scratcehes_deposits_shapes.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Scratches_Deposits_Shapes_29"
},
{
"relative_file_path": "30_scratches_deposits.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Scratches_Deposits_30"
"individual_file_analysis": [
{
"relative_file_path": "21_hairs_deposits.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Hairs_Deposits_21"
},
{
"relative_file_path": "22_hairs_fabric.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Hairs_Fabric_22"
},
{
"relative_file_path": "23_hairs_fibres.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Hairs_Fibres_23"
},
{
"relative_file_path": "24_hairs_fibres.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Hairs_Fibres_24"
},
{
"relative_file_path": "25_bonus_isolatedFingerprints.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Bonus_IsolatedFingerprints_25"
},
{
"relative_file_path": "26_bonus_isolatedPalmprint.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Bonus_IsolatedPalmprint_26"
},
{
"relative_file_path": "27_metal_aluminum.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Metal_Aluminum_27"
},
{
"relative_file_path": "28_metal_castIron.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Metal_CastIron_28"
},
{
"relative_file_path": "29_scratcehes_deposits_shapes.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Scratches_Deposits_Shapes_29"
},
{
"relative_file_path": "30_scratches_deposits.tif",
"classified_file_type": "MAP_IMPERFECTION",
"proposed_asset_group_name": "Scratches_Deposits_30"
}
],
"asset_group_classifications": {
"Hairs_Deposits_21": "UtilityMap",
"Hairs_Fabric_22": "UtilityMap",
"Hairs_Fibres_23": "UtilityMap",
"Hairs_Fibres_24": "UtilityMap",
"Bonus_IsolatedFingerprints_25": "UtilityMap",
"Bonus_IsolatedPalmprint_26": "UtilityMap",
"Metal_Aluminum_27": "UtilityMap",
"Metal_CastIron_28": "UtilityMap",
"Scratches_Deposits_Shapes_29": "UtilityMap",
"Scratches_Deposits_30": "UtilityMap"
}
}
},
{
"input": "Part1/TextureSupply_Boards001_A_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_A_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_B_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_B_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_C_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_C_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_D_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_D_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_E_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_E_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_F_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_F_28x300cm-Normal.jpg",
"output": {
"individual_file_analysis": [
{
"relative_file_path": "Part1/TextureSupply_Boards001_A_28x300cm-Albedo.jpg",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Boards001_A"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_A_28x300cm-Normal.jpg",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Boards001_A"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_B_28x300cm-Albedo.jpg",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Boards001_B"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_B_28x300cm-Normal.jpg",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Boards001_B"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_C_28x300cm-Albedo.jpg",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Boards001_C"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_C_28x300cm-Normal.jpg",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Boards001_C"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_D_28x300cm-Albedo.jpg",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Boards001_D"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_D_28x300cm-Normal.jpg",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Boards001_D"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_E_28x300cm-Albedo.jpg",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Boards001_E"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_E_28x300cm-Normal.jpg",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Boards001_E"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_F_28x300cm-Albedo.jpg",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Boards001_F"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_F_28x300cm-Normal.jpg",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Boards001_F"
"individual_file_analysis": [
{
"relative_file_path": "Part1/TextureSupply_Boards001_A_28x300cm-Albedo.jpg",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Boards001_A"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_A_28x300cm-Normal.jpg",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Boards001_A"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_B_28x300cm-Albedo.jpg",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Boards001_B"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_B_28x300cm-Normal.jpg",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Boards001_B"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_C_28x300cm-Albedo.jpg",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Boards001_C"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_C_28x300cm-Normal.jpg",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Boards001_C"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_D_28x300cm-Albedo.jpg",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Boards001_D"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_D_28x300cm-Normal.jpg",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Boards001_D"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_E_28x300cm-Albedo.jpg",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Boards001_E"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_E_28x300cm-Normal.jpg",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Boards001_E"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_F_28x300cm-Albedo.jpg",
"classified_file_type": "MAP_COL",
"proposed_asset_group_name": "Boards001_F"
},
{
"relative_file_path": "Part1/TextureSupply_Boards001_F_28x300cm-Normal.jpg",
"classified_file_type": "MAP_NRM",
"proposed_asset_group_name": "Boards001_F"
}
],
"asset_group_classifications": {
"Boards001_A": "Surface",
"Boards001_B": "Surface",
"Boards001_C": "Surface",
"Boards001_D": "Surface",
"Boards001_E": "Surface",
"Boards001_F": "Surface"
}
}
}
],

View File

@@ -1,5 +1,11 @@
[
"Dimensiva",
"Dinesen",
"Poliigon"
]
{
"Dimensiva": {
"normal_map_type": "OpenGL"
},
"Dinesen": {
"normal_map_type": "OpenGL"
},
"Poliigon": {
"normal_map_type": "OpenGL"
}
}
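suppliers.json is now a mapping rather than a flat list, so per-supplier metadata such as `normal_map_type` can be attached. A minimal consumer sketch (the path and key names are taken from the files above; the helper itself is illustrative, not part of the codebase):

import json
from pathlib import Path

def get_supplier_normal_map_type(name: str, path: Path = Path("config/suppliers.json")) -> str:
    # Returns the supplier's normal map convention, defaulting to OpenGL when unknown.
    with open(path, "r", encoding="utf-8") as f:
        suppliers = json.load(f)
    return suppliers.get(name, {}).get("normal_map_type", "OpenGL")

# e.g. get_supplier_normal_map_type("Poliigon") -> "OpenGL"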

View File

@@ -3,12 +3,18 @@ import os
from pathlib import Path
import logging
import re
import collections.abc
from typing import Optional
log = logging.getLogger(__name__)
BASE_DIR = Path(__file__).parent
APP_SETTINGS_PATH = BASE_DIR / "config" / "app_settings.json"
LLM_SETTINGS_PATH = BASE_DIR / "config" / "llm_settings.json"
ASSET_TYPE_DEFINITIONS_PATH = BASE_DIR / "config" / "asset_type_definitions.json"
FILE_TYPE_DEFINITIONS_PATH = BASE_DIR / "config" / "file_type_definitions.json"
USER_SETTINGS_PATH = BASE_DIR / "config" / "user_settings.json"
SUPPLIERS_CONFIG_PATH = BASE_DIR / "config" / "suppliers.json"
PRESETS_DIR = BASE_DIR / "Presets"
class ConfigurationError(Exception):
@@ -64,6 +70,25 @@ def _fnmatch_to_regex(pattern: str) -> str:
# For filename matching, we usually want to find the pattern, not match the whole string.
return res
def _deep_merge_dicts(base_dict: dict, override_dict: dict) -> dict:
"""
Recursively merges override_dict into base_dict.
If a key exists in both and both values are dicts, it recursively merges them.
Otherwise, the value from override_dict takes precedence.
Modifies base_dict in place and returns it.
"""
for key, value in override_dict.items():
if isinstance(value, collections.abc.Mapping):
node = base_dict.get(key) # Use .get() to avoid creating empty dicts if not needed for override
if isinstance(node, collections.abc.Mapping):
_deep_merge_dicts(node, value) # node is base_dict[key], modified in place
else:
# If base_dict[key] is not a dict or doesn't exist, override it
base_dict[key] = value
else:
base_dict[key] = value
return base_dict
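# Worked example of the override semantics:
#   base     = {"JPG_QUALITY": 95, "general_settings": {"app_version": "Pre-Alpha"}}
#   override = {"general_settings": {"invert_normal_map_green_channel_globally": True}}
#   _deep_merge_dicts(base, override)
#   -> {"JPG_QUALITY": 95,
#       "general_settings": {"app_version": "Pre-Alpha",
#                            "invert_normal_map_green_channel_globally": True}}
# Nested mappings merge key-by-key; scalars and lists are replaced wholesale.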
class Configuration:
"""
@@ -71,7 +96,7 @@ class Configuration:
"""
def __init__(self, preset_name: str):
"""
Loads core config and the specified preset file.
Loads core config, user overrides, and the specified preset file.
Args:
preset_name: The name of the preset (without .json extension).
@@ -81,9 +106,32 @@ class Configuration:
"""
log.debug(f"Initializing Configuration with preset: '{preset_name}'")
self.preset_name = preset_name
# 1. Load core settings
self._core_settings: dict = self._load_core_config()
# 2. Load asset type definitions
self._asset_type_definitions: dict = self._load_asset_type_definitions()
# 3. Load file type definitions
self._file_type_definitions: dict = self._load_file_type_definitions()
# 4. Load user settings
user_settings_overrides: dict = self._load_user_settings()
# 5. Deep merge user settings onto core settings
if user_settings_overrides:
log.info("Applying user setting overrides to core settings.")
# _deep_merge_dicts modifies self._core_settings in place
_deep_merge_dicts(self._core_settings, user_settings_overrides)
# 6. Load LLM settings
self._llm_settings: dict = self._load_llm_config()
# 7. Load preset settings (conceptually overrides combined base + user for shared keys)
self._preset_settings: dict = self._load_preset(preset_name)
# 8. Validate and compile (after all base/user/preset settings are established)
self._validate_configs()
self._compile_regex_patterns()
log.info(f"Configuration loaded successfully using preset: '{self.preset_name}'")
@@ -215,9 +263,79 @@ class Configuration:
except Exception as e:
raise ConfigurationError(f"Failed to read preset file {preset_file}: {e}")
def _load_asset_type_definitions(self) -> dict:
"""Loads asset type definitions from the asset_type_definitions.json file."""
log.debug(f"Loading asset type definitions from: {ASSET_TYPE_DEFINITIONS_PATH}")
if not ASSET_TYPE_DEFINITIONS_PATH.is_file():
raise ConfigurationError(f"Asset type definitions file not found: {ASSET_TYPE_DEFINITIONS_PATH}")
try:
with open(ASSET_TYPE_DEFINITIONS_PATH, 'r', encoding='utf-8') as f:
data = json.load(f)
if "ASSET_TYPE_DEFINITIONS" not in data:
raise ConfigurationError(f"Key 'ASSET_TYPE_DEFINITIONS' not found in {ASSET_TYPE_DEFINITIONS_PATH}")
settings = data["ASSET_TYPE_DEFINITIONS"]
if not isinstance(settings, dict):
raise ConfigurationError(f"'ASSET_TYPE_DEFINITIONS' in {ASSET_TYPE_DEFINITIONS_PATH} must be a dictionary.")
log.debug(f"Asset type definitions loaded successfully.")
return settings
except json.JSONDecodeError as e:
raise ConfigurationError(f"Failed to parse asset type definitions file {ASSET_TYPE_DEFINITIONS_PATH}: Invalid JSON - {e}")
except Exception as e:
raise ConfigurationError(f"Failed to read asset type definitions file {ASSET_TYPE_DEFINITIONS_PATH}: {e}")
def _load_file_type_definitions(self) -> dict:
"""Loads file type definitions from the file_type_definitions.json file."""
log.debug(f"Loading file type definitions from: {FILE_TYPE_DEFINITIONS_PATH}")
if not FILE_TYPE_DEFINITIONS_PATH.is_file():
raise ConfigurationError(f"File type definitions file not found: {FILE_TYPE_DEFINITIONS_PATH}")
try:
with open(FILE_TYPE_DEFINITIONS_PATH, 'r', encoding='utf-8') as f:
data = json.load(f)
if "FILE_TYPE_DEFINITIONS" not in data:
raise ConfigurationError(f"Key 'FILE_TYPE_DEFINITIONS' not found in {FILE_TYPE_DEFINITIONS_PATH}")
settings = data["FILE_TYPE_DEFINITIONS"]
if not isinstance(settings, dict):
raise ConfigurationError(f"'FILE_TYPE_DEFINITIONS' in {FILE_TYPE_DEFINITIONS_PATH} must be a dictionary.")
log.debug(f"File type definitions loaded successfully.")
return settings
except json.JSONDecodeError as e:
raise ConfigurationError(f"Failed to parse file type definitions file {FILE_TYPE_DEFINITIONS_PATH}: Invalid JSON - {e}")
except Exception as e:
raise ConfigurationError(f"Failed to read file type definitions file {FILE_TYPE_DEFINITIONS_PATH}: {e}")
def _load_user_settings(self) -> dict:
"""Loads user override settings from config/user_settings.json."""
log.debug(f"Attempting to load user settings from: {USER_SETTINGS_PATH}")
if not USER_SETTINGS_PATH.is_file():
log.info(f"User settings file not found: {USER_SETTINGS_PATH}. Proceeding without user overrides.")
return {}
try:
with open(USER_SETTINGS_PATH, 'r', encoding='utf-8') as f:
settings = json.load(f)
log.info(f"User settings loaded successfully from {USER_SETTINGS_PATH}.")
return settings
except json.JSONDecodeError as e:
log.warning(f"Failed to parse user settings file {USER_SETTINGS_PATH}: Invalid JSON - {e}. Using empty user settings.")
return {}
except Exception as e:
log.warning(f"Failed to read user settings file {USER_SETTINGS_PATH}: {e}. Using empty user settings.")
return {}
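# Example of a (hypothetical) config/user_settings.json accepted by this loader;
# any subset of app_settings.json keys may appear and is deep-merged over them:
#
#     {
#       "JPG_QUALITY": 90,
#       "general_settings": { "invert_normal_map_green_channel_globally": true }
#     }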
def _validate_configs(self):
"""Performs basic validation checks on loaded settings."""
log.debug("Validating loaded configurations...")
# Validate new definition files first
if not isinstance(self._asset_type_definitions, dict):
raise ConfigurationError("Asset type definitions were not loaded correctly or are not a dictionary.")
if not self._asset_type_definitions: # Check if empty
raise ConfigurationError("Asset type definitions are empty.")
if not isinstance(self._file_type_definitions, dict):
raise ConfigurationError("File type definitions were not loaded correctly or are not a dictionary.")
if not self._file_type_definitions: # Check if empty
raise ConfigurationError("File type definitions are empty.")
# Preset validation
required_preset_keys = [
"preset_name", "supplier_name", "source_naming", "map_type_mapping",
@@ -236,7 +354,7 @@ class Configuration:
if 'target_type' not in rule or not isinstance(rule['target_type'], str):
raise ConfigurationError(f"Preset '{self.preset_name}': Rule at index {index} in 'map_type_mapping' is missing 'target_type' string.")
valid_file_type_keys = self._core_settings.get('FILE_TYPE_DEFINITIONS', {}).keys()
valid_file_type_keys = self._file_type_definitions.keys()
if rule['target_type'] not in valid_file_type_keys:
raise ConfigurationError(
f"Preset '{self.preset_name}': Rule at index {index} in 'map_type_mapping' "
@@ -261,7 +379,7 @@ class Configuration:
raise ConfigurationError("Core config 'IMAGE_RESOLUTIONS' must be a dictionary.")
# Validate DEFAULT_ASSET_CATEGORY
valid_asset_type_keys = self._core_settings.get('ASSET_TYPE_DEFINITIONS', {}).keys()
valid_asset_type_keys = self._asset_type_definitions.keys()
default_asset_category_value = self._core_settings.get('DEFAULT_ASSET_CATEGORY')
if not default_asset_category_value:
raise ConfigurationError("Core config 'DEFAULT_ASSET_CATEGORY' is missing.")
@@ -379,10 +497,33 @@ class Configuration:
"""Gets the configured JPG quality level."""
return self._core_settings.get('JPG_QUALITY', 95)
@property
def invert_normal_green_globally(self) -> bool:
"""Gets the global setting for inverting the green channel of normal maps."""
# Default to False if the setting is missing in the core config
return self._core_settings.get('invert_normal_map_green_channel_globally', False)
@property
def overwrite_existing(self) -> bool:
"""Gets the setting for overwriting existing files from core settings."""
return self._core_settings.get('overwrite_existing', False)
@property
def png_compression_level(self) -> int:
"""Gets the PNG compression level from core settings."""
return self._core_settings.get('PNG_COMPRESSION', 6) # Default to 6 if not found
@property
def resolution_threshold_for_jpg(self) -> int:
"""Gets the pixel dimension threshold for using JPG for 8-bit images."""
return self._core_settings.get('RESOLUTION_THRESHOLD_FOR_JPG', 4096)
value = self._core_settings.get('RESOLUTION_THRESHOLD_FOR_JPG', 4096)
log.info(f"CONFIGURATION_DEBUG: resolution_threshold_for_jpg property returning: {value} (type: {type(value)})")
# Ensure it's an int, as downstream might expect it.
# The .get() default is an int, but if the JSON had null or a string, it might be different.
if not isinstance(value, int):
log.warning(f"CONFIGURATION_DEBUG: RESOLUTION_THRESHOLD_FOR_JPG was not an int, got {type(value)}. Defaulting to 4096.")
return 4096
return value
@property
def respect_variant_map_types(self) -> list:
@@ -400,11 +541,11 @@ class Configuration:
Gets the bit depth rule ('respect', 'force_8bit', 'force_16bit') for a given map type identifier.
The map_type_input can be an FTD key (e.g., "MAP_COL") or a suffixed FTD key (e.g., "MAP_COL-1").
"""
if not self._core_settings or 'FILE_TYPE_DEFINITIONS' not in self._core_settings:
log.warning("FILE_TYPE_DEFINITIONS not found in core settings. Cannot determine bit depth rule.")
if not self._file_type_definitions: # Check if the attribute exists and is not empty
log.warning("File type definitions not loaded. Cannot determine bit depth rule.")
return "respect"
file_type_definitions = self._core_settings['FILE_TYPE_DEFINITIONS']
file_type_definitions = self._file_type_definitions
# 1. Try direct match with map_type_input as FTD key
definition = file_type_definitions.get(map_type_input)
@@ -450,8 +591,8 @@ class Configuration:
from FILE_TYPE_DEFINITIONS.
"""
aliases = set()
file_type_definitions = self._core_settings.get('FILE_TYPE_DEFINITIONS', {})
for _key, definition in file_type_definitions.items():
# _file_type_definitions is guaranteed to be a dict by the loader
for _key, definition in self._file_type_definitions.items():
if isinstance(definition, dict):
standard_type = definition.get('standard_type')
if standard_type and isinstance(standard_type, str) and standard_type.strip():
@@ -459,16 +600,16 @@ class Configuration:
return sorted(list(aliases))
def get_asset_type_definitions(self) -> dict:
"""Returns the ASSET_TYPE_DEFINITIONS dictionary from core settings."""
return self._core_settings.get('ASSET_TYPE_DEFINITIONS', {})
"""Returns the _asset_type_definitions dictionary."""
return self._asset_type_definitions
def get_asset_type_keys(self) -> list:
"""Returns a list of valid asset type keys from core settings."""
return list(self.get_asset_type_definitions().keys())
def get_file_type_definitions_with_examples(self) -> dict:
"""Returns the FILE_TYPE_DEFINITIONS dictionary (including descriptions and examples) from core settings."""
return self._core_settings.get('FILE_TYPE_DEFINITIONS', {})
"""Returns the _file_type_definitions dictionary (including descriptions and examples)."""
return self._file_type_definitions
def get_file_type_keys(self) -> list:
"""Returns a list of valid file type keys from core settings."""
@@ -509,9 +650,27 @@ class Configuration:
"""Returns the LLM request timeout in seconds from LLM settings."""
return self._llm_settings.get('llm_request_timeout', 120)
@property
def app_version(self) -> Optional[str]:
"""Returns the application version from general_settings."""
gs = self._core_settings.get('general_settings')
if isinstance(gs, dict):
return gs.get('app_version')
return None
@property
def enable_low_resolution_fallback(self) -> bool:
"""Gets the setting for enabling low-resolution fallback."""
return self._core_settings.get('ENABLE_LOW_RESOLUTION_FALLBACK', True)
@property
def low_resolution_threshold(self) -> int:
"""Gets the pixel dimension threshold for low-resolution fallback."""
return self._core_settings.get('LOW_RESOLUTION_THRESHOLD', 512)
@property
def FILE_TYPE_DEFINITIONS(self) -> dict:
return self._core_settings.get('FILE_TYPE_DEFINITIONS', {})
return self._file_type_definitions
@property
def keybind_config(self) -> dict[str, list[str]]:
@@ -521,8 +680,8 @@ class Configuration:
Example: {'C': ['MAP_COL'], 'R': ['MAP_ROUGH', 'MAP_GLOSS']}
"""
keybinds = {}
file_type_defs = self._core_settings.get('FILE_TYPE_DEFINITIONS', {})
for ftd_key, ftd_value in file_type_defs.items():
# _file_type_definitions is guaranteed to be a dict by the loader
for ftd_key, ftd_value in self._file_type_definitions.items():
if isinstance(ftd_value, dict) and 'keybind' in ftd_value:
key = ftd_value['keybind']
if key not in keybinds:
@@ -538,25 +697,92 @@ class Configuration:
def load_base_config() -> dict:
"""
Loads only the base configuration from app_settings.json.
Does not load presets or perform merging/validation.
Loads base configuration by merging app_settings.json, user_settings.json (if exists),
asset_type_definitions.json, and file_type_definitions.json.
Does not load presets or perform full validation beyond basic file loading.
Returns a dictionary containing the merged settings. If app_settings.json
fails to load, an empty dictionary is returned. If other files
fail, errors are logged, and the function proceeds with what has been loaded.
"""
base_settings = {}
# 1. Load app_settings.json (critical)
if not APP_SETTINGS_PATH.is_file():
log.error(f"Base configuration file not found: {APP_SETTINGS_PATH}")
# Return empty dict or raise a specific error if preferred
# For now, return empty dict to allow GUI to potentially start with defaults
log.error(f"Critical: Base application settings file not found: {APP_SETTINGS_PATH}. Returning empty configuration.")
return {}
try:
with open(APP_SETTINGS_PATH, 'r', encoding='utf-8') as f:
settings = json.load(f)
return settings
base_settings = json.load(f)
log.info(f"Successfully loaded base application settings from: {APP_SETTINGS_PATH}")
except json.JSONDecodeError as e:
log.error(f"Failed to parse base configuration file {APP_SETTINGS_PATH}: Invalid JSON - {e}")
log.error(f"Critical: Failed to parse base application settings file {APP_SETTINGS_PATH}: Invalid JSON - {e}. Returning empty configuration.")
return {}
except Exception as e:
log.error(f"Failed to read base configuration file {APP_SETTINGS_PATH}: {e}")
log.error(f"Critical: Failed to read base application settings file {APP_SETTINGS_PATH}: {e}. Returning empty configuration.")
return {}
# 2. Attempt to load user_settings.json
user_settings_overrides = {}
if USER_SETTINGS_PATH.is_file():
try:
with open(USER_SETTINGS_PATH, 'r', encoding='utf-8') as f:
user_settings_overrides = json.load(f)
log.info(f"User settings loaded successfully for base_config from {USER_SETTINGS_PATH}.")
except json.JSONDecodeError as e:
log.warning(f"Failed to parse user settings file {USER_SETTINGS_PATH} for base_config: Invalid JSON - {e}. Proceeding without these user overrides.")
except Exception as e:
log.warning(f"Failed to read user settings file {USER_SETTINGS_PATH} for base_config: {e}. Proceeding without these user overrides.")
# 3. Deep merge user settings onto base_settings
if user_settings_overrides:
log.info("Applying user setting overrides to base_settings in load_base_config.")
# _deep_merge_dicts modifies base_settings in place
_deep_merge_dicts(base_settings, user_settings_overrides)
# 4. Load asset_type_definitions.json (non-critical, merge if successful)
if not ASSET_TYPE_DEFINITIONS_PATH.is_file():
log.error(f"Asset type definitions file not found: {ASSET_TYPE_DEFINITIONS_PATH}. Proceeding without it.")
else:
try:
with open(ASSET_TYPE_DEFINITIONS_PATH, 'r', encoding='utf-8') as f:
asset_defs_data = json.load(f)
if "ASSET_TYPE_DEFINITIONS" in asset_defs_data:
if isinstance(asset_defs_data["ASSET_TYPE_DEFINITIONS"], dict):
# Merge into base_settings, which might already contain user overrides
base_settings['ASSET_TYPE_DEFINITIONS'] = asset_defs_data["ASSET_TYPE_DEFINITIONS"]
log.info(f"Successfully loaded and merged ASSET_TYPE_DEFINITIONS from: {ASSET_TYPE_DEFINITIONS_PATH}")
else:
log.error(f"Value under 'ASSET_TYPE_DEFINITIONS' in {ASSET_TYPE_DEFINITIONS_PATH} is not a dictionary. Skipping merge.")
else:
log.error(f"Key 'ASSET_TYPE_DEFINITIONS' not found in {ASSET_TYPE_DEFINITIONS_PATH}. Skipping merge.")
except json.JSONDecodeError as e:
log.error(f"Failed to parse asset type definitions file {ASSET_TYPE_DEFINITIONS_PATH}: Invalid JSON - {e}. Skipping merge.")
except Exception as e:
log.error(f"Failed to read asset type definitions file {ASSET_TYPE_DEFINITIONS_PATH}: {e}. Skipping merge.")
# 5. Load file_type_definitions.json (non-critical, merge if successful)
if not FILE_TYPE_DEFINITIONS_PATH.is_file():
log.error(f"File type definitions file not found: {FILE_TYPE_DEFINITIONS_PATH}. Proceeding without it.")
else:
try:
with open(FILE_TYPE_DEFINITIONS_PATH, 'r', encoding='utf-8') as f:
file_defs_data = json.load(f)
if "FILE_TYPE_DEFINITIONS" in file_defs_data:
if isinstance(file_defs_data["FILE_TYPE_DEFINITIONS"], dict):
# Merge into base_settings
base_settings['FILE_TYPE_DEFINITIONS'] = file_defs_data["FILE_TYPE_DEFINITIONS"]
log.info(f"Successfully loaded and merged FILE_TYPE_DEFINITIONS from: {FILE_TYPE_DEFINITIONS_PATH}")
else:
log.error(f"Value under 'FILE_TYPE_DEFINITIONS' in {FILE_TYPE_DEFINITIONS_PATH} is not a dictionary. Skipping merge.")
else:
log.error(f"Key 'FILE_TYPE_DEFINITIONS' not found in {FILE_TYPE_DEFINITIONS_PATH}. Skipping merge.")
except json.JSONDecodeError as e:
log.error(f"Failed to parse file type definitions file {FILE_TYPE_DEFINITIONS_PATH}: Invalid JSON - {e}. Skipping merge.")
except Exception as e:
log.error(f"Failed to read file type definitions file {FILE_TYPE_DEFINITIONS_PATH}: {e}. Skipping merge.")
return base_settings
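
`load_base_config` above leans on `_deep_merge_dicts` to fold user overrides into the base settings in place, but the helper itself does not appear in this diff. Below is a minimal sketch of the semantics implied by the call site (recursive merge of nested dicts, override wins elsewhere); it is an assumption, not the project's actual implementation.

```python
# Minimal sketch, assuming the in-place semantics implied above:
# nested dicts merge key by key; any other value is overwritten.
def _deep_merge_dicts(base: dict, overrides: dict) -> dict:
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            _deep_merge_dicts(base[key], value)  # recurse into nested dicts
        else:
            base[key] = value  # overrides scalars, lists, and new keys
    return base
```
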
def save_llm_config(settings_dict: dict):
"""
Saves the provided LLM settings dictionary to llm_settings.json.
@@ -571,6 +797,18 @@ def save_llm_config(settings_dict: dict):
log.error(f"Failed to save LLM configuration file {LLM_SETTINGS_PATH}: {e}")
# Re-raise as ConfigurationError to signal failure upstream
raise ConfigurationError(f"Failed to save LLM configuration: {e}")
def save_user_config(settings_dict: dict):
"""Saves the provided settings dictionary to user_settings.json."""
log.debug(f"Saving user config to: {USER_SETTINGS_PATH}")
try:
# Ensure parent directory exists (though 'config/' should always exist)
USER_SETTINGS_PATH.parent.mkdir(parents=True, exist_ok=True)
with open(USER_SETTINGS_PATH, 'w', encoding='utf-8') as f:
json.dump(settings_dict, f, indent=4)
log.info(f"User config saved successfully to {USER_SETTINGS_PATH}")
except Exception as e:
log.error(f"Failed to save user configuration file {USER_SETTINGS_PATH}: {e}")
raise ConfigurationError(f"Failed to save user configuration: {e}")
def save_base_config(settings_dict: dict):
"""
Saves the provided settings dictionary to app_settings.json.
@@ -583,3 +821,149 @@ def save_base_config(settings_dict: dict):
except Exception as e:
log.error(f"Failed to save base configuration file {APP_SETTINGS_PATH}: {e}")
raise ConfigurationError(f"Failed to save configuration: {e}")
def load_asset_definitions() -> dict:
"""
Reads config/asset_type_definitions.json.
Returns the dictionary under the "ASSET_TYPE_DEFINITIONS" key.
Handles file not found or JSON errors gracefully (e.g., return empty dict, log error).
"""
log.debug(f"Loading asset type definitions from: {ASSET_TYPE_DEFINITIONS_PATH}")
if not ASSET_TYPE_DEFINITIONS_PATH.is_file():
log.error(f"Asset type definitions file not found: {ASSET_TYPE_DEFINITIONS_PATH}")
return {}
try:
with open(ASSET_TYPE_DEFINITIONS_PATH, 'r', encoding='utf-8') as f:
data = json.load(f)
if "ASSET_TYPE_DEFINITIONS" not in data:
log.error(f"Key 'ASSET_TYPE_DEFINITIONS' not found in {ASSET_TYPE_DEFINITIONS_PATH}")
return {}
settings = data["ASSET_TYPE_DEFINITIONS"]
if not isinstance(settings, dict):
log.error(f"'ASSET_TYPE_DEFINITIONS' in {ASSET_TYPE_DEFINITIONS_PATH} must be a dictionary.")
return {}
log.debug(f"Asset type definitions loaded successfully.")
return settings
except json.JSONDecodeError as e:
log.error(f"Failed to parse asset type definitions file {ASSET_TYPE_DEFINITIONS_PATH}: Invalid JSON - {e}")
return {}
except Exception as e:
log.error(f"Failed to read asset type definitions file {ASSET_TYPE_DEFINITIONS_PATH}: {e}")
return {}
def save_asset_definitions(data: dict):
"""
Takes a dictionary (representing the content for the "ASSET_TYPE_DEFINITIONS" key).
Writes it to config/asset_type_definitions.json under the root key "ASSET_TYPE_DEFINITIONS".
Handles potential I/O errors.
"""
log.debug(f"Saving asset type definitions to: {ASSET_TYPE_DEFINITIONS_PATH}")
try:
with open(ASSET_TYPE_DEFINITIONS_PATH, 'w', encoding='utf-8') as f:
json.dump({"ASSET_TYPE_DEFINITIONS": data}, f, indent=4)
log.info(f"Asset type definitions saved successfully to {ASSET_TYPE_DEFINITIONS_PATH}")
except Exception as e:
log.error(f"Failed to save asset type definitions file {ASSET_TYPE_DEFINITIONS_PATH}: {e}")
raise ConfigurationError(f"Failed to save asset type definitions: {e}")
def load_file_type_definitions() -> dict:
"""
Reads config/file_type_definitions.json.
Returns the dictionary under the "FILE_TYPE_DEFINITIONS" key.
Handles errors gracefully.
"""
log.debug(f"Loading file type definitions from: {FILE_TYPE_DEFINITIONS_PATH}")
if not FILE_TYPE_DEFINITIONS_PATH.is_file():
log.error(f"File type definitions file not found: {FILE_TYPE_DEFINITIONS_PATH}")
return {}
try:
with open(FILE_TYPE_DEFINITIONS_PATH, 'r', encoding='utf-8') as f:
data = json.load(f)
if "FILE_TYPE_DEFINITIONS" not in data:
log.error(f"Key 'FILE_TYPE_DEFINITIONS' not found in {FILE_TYPE_DEFINITIONS_PATH}")
return {}
settings = data["FILE_TYPE_DEFINITIONS"]
if not isinstance(settings, dict):
log.error(f"'FILE_TYPE_DEFINITIONS' in {FILE_TYPE_DEFINITIONS_PATH} must be a dictionary.")
return {}
log.debug(f"File type definitions loaded successfully.")
return settings
except json.JSONDecodeError as e:
log.error(f"Failed to parse file type definitions file {FILE_TYPE_DEFINITIONS_PATH}: Invalid JSON - {e}")
return {}
except Exception as e:
log.error(f"Failed to read file type definitions file {FILE_TYPE_DEFINITIONS_PATH}: {e}")
return {}
def save_file_type_definitions(data: dict):
"""
Takes a dictionary (representing content for "FILE_TYPE_DEFINITIONS" key).
Writes it to config/file_type_definitions.json under the root key "FILE_TYPE_DEFINITIONS".
Handles errors.
"""
log.debug(f"Saving file type definitions to: {FILE_TYPE_DEFINITIONS_PATH}")
try:
with open(FILE_TYPE_DEFINITIONS_PATH, 'w', encoding='utf-8') as f:
json.dump({"FILE_TYPE_DEFINITIONS": data}, f, indent=4)
log.info(f"File type definitions saved successfully to {FILE_TYPE_DEFINITIONS_PATH}")
except Exception as e:
log.error(f"Failed to save file type definitions file {FILE_TYPE_DEFINITIONS_PATH}: {e}")
raise ConfigurationError(f"Failed to save file type definitions: {e}")
def load_supplier_settings() -> dict:
"""
Reads config/suppliers.json.
Returns the entire dictionary.
Handles file not found (return empty dict) or JSON errors.
If the loaded data is a list (old format), convert it in memory to the new
dictionary format, defaulting normal_map_type to "OpenGL" for each supplier.
"""
log.debug(f"Loading supplier settings from: {SUPPLIERS_CONFIG_PATH}")
if not SUPPLIERS_CONFIG_PATH.is_file():
log.warning(f"Supplier settings file not found: {SUPPLIERS_CONFIG_PATH}. Returning empty dict.")
return {}
try:
with open(SUPPLIERS_CONFIG_PATH, 'r', encoding='utf-8') as f:
data = json.load(f)
if isinstance(data, list):
log.warning(f"Supplier settings in {SUPPLIERS_CONFIG_PATH} is in the old list format. Converting to new dictionary format.")
new_data = {}
for supplier_name in data:
if isinstance(supplier_name, str):
new_data[supplier_name] = {"normal_map_type": "OpenGL"}
else:
log.warning(f"Skipping non-string item '{supplier_name}' during old format conversion of supplier settings.")
log.info(f"Supplier settings converted to new format: {new_data}")
return new_data
if not isinstance(data, dict):
log.error(f"Supplier settings in {SUPPLIERS_CONFIG_PATH} must be a dictionary. Found {type(data)}. Returning empty dict.")
return {}
log.debug(f"Supplier settings loaded successfully.")
return data
except json.JSONDecodeError as e:
log.error(f"Failed to parse supplier settings file {SUPPLIERS_CONFIG_PATH}: Invalid JSON - {e}. Returning empty dict.")
return {}
except Exception as e:
log.error(f"Failed to read supplier settings file {SUPPLIERS_CONFIG_PATH}: {e}. Returning empty dict.")
return {}
def save_supplier_settings(data: dict):
"""
Takes a dictionary (in the new format).
Writes it directly to config/suppliers.json.
Handles errors.
"""
log.debug(f"Saving supplier settings to: {SUPPLIERS_CONFIG_PATH}")
if not isinstance(data, dict):
log.error(f"Data for save_supplier_settings must be a dictionary. Got {type(data)}.")
raise ConfigurationError(f"Invalid data type for saving supplier settings: {type(data)}")
try:
with open(SUPPLIERS_CONFIG_PATH, 'w', encoding='utf-8') as f:
json.dump(data, f, indent=2) # Using indent=2 as per the example for suppliers.json
log.info(f"Supplier settings saved successfully to {SUPPLIERS_CONFIG_PATH}")
except Exception as e:
log.error(f"Failed to save supplier settings file {SUPPLIERS_CONFIG_PATH}: {e}")
raise ConfigurationError(f"Failed to save supplier settings: {e}")

BIN context_portal/context.db (new binary file; diff not shown)

@@ -0,0 +1,137 @@
# Plan for New Definitions Editor UI
## 1. Overview
This document outlines the plan to create a new, dedicated UI for managing "Asset Type Definitions", "File Type Definitions", and "Supplier Settings". This editor will provide a more structured and user-friendly way to manage these core application configurations, which are currently stored in separate JSON files.
## 2. General Design Principles
* **Dedicated Dialog:** The editor will be a new `QDialog` (e.g., `DefinitionsEditorDialog`).
* **Access Point:** Launched from the `MainWindow` menu bar (e.g., under a "Definitions" menu or "Edit" -> "Edit Definitions...").
* **Tabbed Interface:** The dialog will use a `QTabWidget` to separate the management of different definition types.
* **List/Details View:** Each tab will generally follow a two-pane layout:
* **Left Pane:** A `QListWidget` displaying the primary keys or names of the definitions (e.g., asset type names, file type IDs, supplier names). Includes "Add" and "Remove" buttons for managing these primary entries.
* **Right Pane:** A details area (e.g., `QGroupBox` with a `QFormLayout`) that shows the specific settings for the item selected in the left-pane list.
* **Data Persistence:** The dialog will load from and save to the respective JSON configuration files:
* Asset Types: `config/asset_type_definitions.json`
* File Types: `config/file_type_definitions.json`
* Supplier Settings: `config/suppliers.json` (This file will be refactored from a simple list to a dictionary of supplier objects).
* **User Experience:** Standard "Save" and "Cancel" buttons, with a check for unsaved changes.
## 3. Tab-Specific Plans
### 3.1. Asset Type Definitions Tab
* **Manages:** `config/asset_type_definitions.json`
* **UI Sketch:**
```mermaid
graph LR
subgraph AssetTypeTab [Asset Type Definitions Tab]
direction LR
AssetList["QListWidget (Asset Type Keys, e.g. 'Surface')"]
end
subgraph AssetDetailsGroup [Details for Selected Asset Type]
direction TB
Desc["Description: QTextEdit"]
Color["Color: QPushButton ('Choose Color...') + Color Swatch Display"]
Examples["Examples: QListWidget + Add/Remove Example Buttons"]
end
AssetList --> AssetDetailsGroup
AssetActions["Add Asset Type (Prompt for Name)<br/>Remove Selected Asset Type"] --> AssetList
```
* **Details:**
* **Left Pane:** `QListWidget` for asset type names. "Add Asset Type" (prompts for new key) and "Remove Selected Asset Type" buttons.
* **Right Pane (Details):**
* `description`: `QTextEdit`.
* `color`: `QPushButton` opening `QColorDialog`, with an adjacent `QLabel` to display the color swatch.
* `examples`: `QListWidget` with "Add Example" (`QInputDialog.getText`) and "Remove Selected Example" buttons.
### 3.2. File Type Definitions Tab
* **Manages:** `config/file_type_definitions.json`
* **UI Sketch:**
```mermaid
graph LR
subgraph FileTypeTab [File Type Definitions Tab]
direction LR
FileList["QListWidget (File Type Keys, e.g. 'MAP_COL')"]
end
subgraph FileDetailsGroup [Details for Selected File Type]
direction TB
DescF["Description: QTextEdit"]
ColorF["Color: QPushButton ('Choose Color...') + Color Swatch Display"]
ExamplesF["Examples: QListWidget + Add/Remove Example Buttons"]
StdType["Standard Type: QLineEdit"]
BitDepth["Bit Depth Rule: QComboBox ('respect', 'force_8bit', 'force_16bit')"]
IsGrayscale["Is Grayscale: QCheckBox"]
Keybind["Keybind: QLineEdit (1 char)"]
end
FileList --> FileDetailsGroup
FileActions["Add File Type (Prompt for ID)<br/>Remove Selected File Type"] --> FileList
```
* **Details:**
* **Left Pane:** `QListWidget` for file type IDs. "Add File Type" (prompts for new key) and "Remove Selected File Type" buttons.
* **Right Pane (Details):**
* `description`: `QTextEdit`.
* `color`: `QPushButton` opening `QColorDialog`, with an adjacent `QLabel` for color swatch.
* `examples`: `QListWidget` with "Add Example" and "Remove Selected Example" buttons.
* `standard_type`: `QLineEdit`.
* `bit_depth_rule`: `QComboBox` (options: "respect", "force_8bit", "force_16bit").
* `is_grayscale`: `QCheckBox`.
* `keybind`: `QLineEdit` (validation for single character recommended).
### 3.3. Supplier Settings Tab
* **Manages:** `config/suppliers.json` (This file will be refactored to a dictionary structure, e.g., `{"SupplierName": {"normal_map_type": "OpenGL", ...}}`).
* **UI Sketch:**
```mermaid
graph LR
subgraph SupplierTab [Supplier Settings Tab]
direction LR
SupplierList["QListWidget (Supplier Names)"]
end
subgraph SupplierDetailsGroup [Details for Selected Supplier]
direction TB
NormalMapType["Normal Map Type: QComboBox ('OpenGL', 'DirectX')"]
%% Future supplier-specific settings can be added here
end
SupplierList --> SupplierDetailsGroup
SupplierActions["Add Supplier (Prompt for Name)<br/>Remove Selected Supplier"] --> SupplierList
```
* **Details:**
* **Left Pane:** `QListWidget` for supplier names. "Add Supplier" (prompts for new name) and "Remove Selected Supplier" buttons.
* **Right Pane (Details):**
* `normal_map_type`: `QComboBox` (options: "OpenGL", "DirectX"). Default for new suppliers: "OpenGL".
* *(Space for future supplier-specific settings).*
* **Data Handling Note for `config/suppliers.json`:**
* The editor will load from and save to `config/suppliers.json` using the new dictionary format (supplier name as key, object of settings as value).
* Initial implementation might require `config/suppliers.json` to be manually updated to this new format if it currently exists as a simple list. Alternatively, the editor could attempt an automatic conversion on first load if the old list format is detected, or prompt the user. For the first pass, assuming the editor works with the new format is simpler.
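
The conversion described in this note mirrors what `load_supplier_settings()` in `configuration.py` now does in memory. A minimal sketch of the same migration, assuming the old and new formats shown above:

```python
# Old format: ["SupplierA", "SupplierB"]
# New format: {"SupplierA": {"normal_map_type": "OpenGL"}, ...}
def migrate_suppliers(old: list) -> dict:
    return {
        name: {"normal_map_type": "OpenGL"}  # default per the plan above
        for name in old
        if isinstance(name, str)             # skip malformed entries
    }
```
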
## 4. Implementation Steps (High-Level)
1. **(Potentially Manual First Step) Refactor `config/suppliers.json`:** If `config/suppliers.json` exists as a list, manually convert it to the new dictionary structure (e.g., `{"SupplierName": {"normal_map_type": "OpenGL"}}`) before starting UI development for this tab, or plan for the editor to handle this conversion.
2. **Create `DefinitionsEditorDialog` Class:** Inherit from `QDialog`.
3. **Implement UI Structure:** Main `QTabWidget`, and for each tab, the two-pane layout with `QListWidget`, `QGroupBox` for details, and relevant input widgets (`QLineEdit`, `QTextEdit`, `QComboBox`, `QCheckBox`, `QPushButton`).
4. **Implement Loading Logic:**
* For each tab, read data from its corresponding JSON file.
* Populate the left-pane `QListWidget` with the primary keys/names.
* Store the full data structure internally (e.g., in dictionaries within the dialog instance).
5. **Implement Display Logic:**
* When an item is selected in a `QListWidget`, populate the right-pane detail fields with the data for that item.
6. **Implement Editing Logic:**
* Ensure that changes made in the detail fields (text edits, combobox selections, checkbox states, color choices, list example modifications) update the corresponding internal data structure for the currently selected item.
7. **Implement Add/Remove Functionality:**
* For each definition type (Asset Type, File Type, Supplier), implement the "Add" and "Remove" buttons.
* "Add": Prompt for a unique key/name, create a new default entry in the internal data, and add it to the `QListWidget`.
* "Remove": Remove the selected item from the `QListWidget` and the internal data.
* For "examples" lists within Asset and File types, implement their "Add Example" and "Remove Selected Example" buttons.
8. **Implement Saving Logic:**
* When the main "Save" button is clicked:
* Write the (potentially modified) Asset Type definitions data structure to `config/asset_type_definitions.json`.
* Write File Type definitions to `config/file_type_definitions.json`.
* Write Supplier settings (in the new dictionary format) to `config/suppliers.json`.
* Consider creating new dedicated save functions in `configuration.py` for each of these files if they don't already exist or if existing ones are not suitable.
9. **Implement Unsaved Changes Check & Cancel Logic.**
10. **Integrate Dialog Launch:** Add a menu action in `MainWindow.py` to open the `DefinitionsEditorDialog`.
This plan provides a comprehensive approach to creating a dedicated editor for these crucial application definitions.
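
As a structural illustration of steps 2 and 3, a minimal PySide6 skeleton of the dialog is sketched below. The dialog class, tab names, and two-pane layout follow this plan; the layout details and signal wiring are assumptions, not the final implementation.

```python
# Minimal structural sketch of the dialog described above (PySide6).
from PySide6.QtWidgets import (
    QDialog, QTabWidget, QVBoxLayout, QHBoxLayout, QListWidget,
    QGroupBox, QFormLayout, QPushButton, QDialogButtonBox, QWidget
)

class DefinitionsEditorDialog(QDialog):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.setWindowTitle("Edit Definitions")
        tabs = QTabWidget(self)
        for title in ("Asset Types", "File Types", "Suppliers"):
            tabs.addTab(self._make_list_details_tab(title), title)
        buttons = QDialogButtonBox(
            QDialogButtonBox.Save | QDialogButtonBox.Cancel, parent=self
        )
        buttons.accepted.connect(self.accept)   # real code would save first
        buttons.rejected.connect(self.reject)   # and check for unsaved changes
        layout = QVBoxLayout(self)
        layout.addWidget(tabs)
        layout.addWidget(buttons)

    def _make_list_details_tab(self, title: str) -> QWidget:
        """Two-pane layout: key list plus add/remove on the left, details on the right."""
        page = QWidget()
        row = QHBoxLayout(page)
        left = QVBoxLayout()
        left.addWidget(QListWidget())
        left.addWidget(QPushButton(f"Add {title[:-1]}"))
        left.addWidget(QPushButton("Remove Selected"))
        row.addLayout(left)
        details = QGroupBox(f"Details for Selected {title[:-1]}")
        QFormLayout(details)  # populated per tab in the real editor
        row.addWidget(details)
        return page
```
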


@@ -0,0 +1,113 @@
# Refactoring Plan for Preferences Window (ConfigEditorDialog)
## 1. Overview
This document outlines the plan to refactor the preferences window (`gui/config_editor_dialog.py`). The primary goal is to address issues related to misaligned scope, poor user experience for certain data types, and incomplete interactivity. The refactoring will focus on making the `ConfigEditorDialog` a robust editor for settings in `config/app_settings.json` that are intended to be overridden by the user via `config/user_settings.json`.
## 2. Assessment Summary
* **Misaligned Scope:** The dialog currently includes UI for "Asset Type Definitions" and "File Type Definitions". However, these are managed in separate dedicated JSON files ([`config/asset_type_definitions.json`](config/asset_type_definitions.json) and [`config/file_type_definitions.json`](config/file_type_definitions.json)) and are not saved by this dialog (which targets `config/user_settings.json`).
* **Poor UX for Data Types:**
* Lists (e.g., `RESPECT_VARIANT_MAP_TYPES`) are edited as comma-separated strings.
* Dictionary-like structures (e.g., `IMAGE_RESOLUTIONS`) are handled inconsistently (JSON defines as dict, UI attempts list-of-pairs).
* Editing complex list-of-objects (e.g., `MAP_MERGE_RULES`) is functionally incomplete.
* **Incomplete Interactivity:** Many table-based editors lack "Add/Remove Row" functionality and proper cell delegates for intuitive editing.
* **LLM Settings:** Confirmed to be correctly managed by the separate `LLMEditorWidget` and `config/llm_settings.json`, so they are out of scope for this specific dialog refactor.
## 3. Refactoring Phases and Plan Details
```mermaid
graph TD
A["Start: Current State"] --> B{"Phase 1: Correct Scope & Critical UX/Data Fixes"};
B --> C{"Phase 2: Enhance MAP_MERGE_RULES Editor"};
C --> D{"Phase 3: General UX & Table Interactivity"};
D --> E["End: Refactored Preferences Window"];
subgraph "Phase 1: Correct Scope & Critical UX/Data Fixes"
B1[Remove Definitions Editing from ConfigEditorDialog]
B2[Improve List Editing for RESPECT_VARIANT_MAP_TYPES]
B3["Fix IMAGE_RESOLUTIONS Handling (Dictionary)"]
B4["Handle Simple Nested Settings (e.g., general_settings)"]
end
subgraph "Phase 2: Enhance MAP_MERGE_RULES Editor"
C1[Implement Add/Remove for Merge Rules]
C2[Improve Rule Detail Editing (ComboBoxes, SpinBoxes)]
end
subgraph "Phase 3: General UX & Table Interactivity"
D1[Implement IMAGE_RESOLUTIONS Table Add/Remove Buttons]
D2["Implement Necessary Table Cell Delegates (e.g., for IMAGE_RESOLUTIONS values)"]
D3[Review/Refine Tab Layout & Widget Grouping]
end
B --> B1; B --> B2; B --> B3; B --> B4;
C --> C1; C --> C2;
D --> D1; D --> D2; D --> D3;
```
### Phase 1: Correct Scope & Critical UX/Data Fixes (in `gui/config_editor_dialog.py`)
1. **Remove Definitions Editing:**
* **Action:** In `populate_definitions_tab`, remove the inner `QTabWidget` and the code that creates/populates the "Asset Types" and "File Types" tables.
* The `DEFAULT_ASSET_CATEGORY` `QComboBox` (for the setting from `app_settings.json`) should remain. Its items should be populated using keys obtained from the `Configuration` class (which loads the actual `ASSET_TYPE_DEFINITIONS` from its dedicated file).
* **Rationale:** Simplifies the dialog to settings managed via `user_settings.json`. Editing of the full definition files requires dedicated UI (see Future Enhancements note).
2. **Improve `RESPECT_VARIANT_MAP_TYPES` Editing:**
* **Action:** In `populate_output_naming_tab`, replace the `QLineEdit` for `RESPECT_VARIANT_MAP_TYPES` with a `QListWidget` and "Add"/"Remove" buttons.
* "Add" button: Use `QInputDialog.getItem` with items populated from `Configuration.get_file_type_keys()` (or similar method accessing loaded `FILE_TYPE_DEFINITIONS`) to allow users to select a valid file type key.
* "Remove" button: Remove the selected item from the `QListWidget`.
* Update `save_settings` to read the list of strings from this `QListWidget`.
* Update `populate_widgets_from_settings` to populate this `QListWidget`.
3. **Fix `IMAGE_RESOLUTIONS` Handling:**
* **Action:** In `populate_image_processing_tab`:
* The `QTableWidget` for `IMAGE_RESOLUTIONS` should have two columns: "Name" (string, for the dictionary key) and "Resolution (px)" (integer, for the dictionary value).
* In `populate_image_resolutions_table`, ensure it correctly populates from the dictionary structure in `self.settings['IMAGE_RESOLUTIONS']` (from `app_settings.json`).
* In `save_settings`, ensure it correctly reads data from the table and reconstructs the `IMAGE_RESOLUTIONS` dictionary (e.g., `{"4K": 4096, "2K": 2048}`) when saving to `user_settings.json`. A sketch of this round-trip appears after this list.
* ComboBoxes `CALCULATE_STATS_RESOLUTION` and `RESOLUTION_THRESHOLD_FOR_JPG` should be populated with the *keys* (names like "4K", "2K") from the `IMAGE_RESOLUTIONS` dictionary. `RESOLUTION_THRESHOLD_FOR_JPG` should also include "Never" and "Always" options. The `save_settings` method needs to correctly map these special ComboBox values back to appropriate storable values if necessary (e.g., sentinel numbers or specific strings if the backend configuration expects them for "Never"/"Always").
4. **Handle Simple Nested Settings (e.g., `general_settings`):**
* **Action:** For `general_settings.invert_normal_map_green_channel_globally` (from `config/app_settings.json`):
* Add a `QCheckBox` labeled "Invert Normal Map Green Channel Globally" to an appropriate tab (e.g., "Image Processing" or a "General" tab after layout review).
* Update `populate_widgets_from_settings` to read `self.settings.get('general_settings', {}).get('invert_normal_map_green_channel_globally', False)`.
* Update `save_settings` to write this value back to `target_file_content.setdefault('general_settings', {})['invert_normal_map_green_channel_globally'] = widget.isChecked()`.
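
For item 3 above, the table-to-dictionary round-trip is the error-prone part. A hedged sketch of the two directions follows; the function and widget names are illustrative, and error handling is omitted.

```python
# Illustrative only: populate the two-column table from the dict, and
# rebuild the dict from the table on save.
from PySide6.QtWidgets import QTableWidget, QTableWidgetItem

def populate_image_resolutions_table(table: QTableWidget, resolutions: dict):
    table.setRowCount(0)
    for row, (name, pixels) in enumerate(resolutions.items()):   # e.g. {"4K": 4096}
        table.insertRow(row)
        table.setItem(row, 0, QTableWidgetItem(name))            # "Name" column
        table.setItem(row, 1, QTableWidgetItem(str(pixels)))     # "Resolution (px)" column

def read_image_resolutions_table(table: QTableWidget) -> dict:
    return {
        table.item(row, 0).text(): int(table.item(row, 1).text())
        for row in range(table.rowCount())
        if table.item(row, 0) and table.item(row, 1)
    }
```
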
### Phase 2: Enhance `MAP_MERGE_RULES` Editor (in `gui/config_editor_dialog.py`)
1. **Rule Management:**
* **Action:** In `populate_map_merging_tab`:
* Connect the "Add Rule" button:
* Create a default new rule dictionary (e.g., `{"output_map_type": "NEW_RULE", "inputs": {}, "defaults": {}, "output_bit_depth": "respect_inputs"}`).
* Add it to the internal list of rules that will be saved (e.g., a copy of `self.settings['MAP_MERGE_RULES']` that gets modified).
* Add a new `QListWidgetItem` for it and select it to display its details.
* Connect the "Remove Rule" button:
* Remove the selected rule from the internal list and the `QListWidget`.
* Clear the details panel.
2. **Rule Details Panel Improvements (`display_merge_rule_details`):**
* **`output_map_type`:** Change the `QLineEdit` to a `QComboBox`. Populate its items from `Configuration.get_file_type_keys()`.
* **`inputs` Table:** The "Input Map Type" column cells should use a `QComboBox` delegate, populated with `Configuration.get_file_type_keys()` plus an empty/None option.
* **`defaults` Table:** The "Default Value" column cells should use a `QDoubleSpinBox` delegate (e.g., range 0.0 to 1.0, or 0-255 if appropriate for specific channel types).
* Ensure changes in these detail editors update the underlying rule data associated with the selected `QListWidgetItem` and the internal list of rules.
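
The ComboBox delegate mentioned for the "Input Map Type" column (and, analogously, the spinbox delegates in Phase 3) follows the standard `QStyledItemDelegate` pattern. A minimal sketch, assuming the key list comes from `Configuration.get_file_type_keys()` as described above:

```python
# Minimal QStyledItemDelegate that edits a cell with a QComboBox.
# The key list is passed in; an empty first entry stands for "no input".
from PySide6.QtWidgets import QStyledItemDelegate, QComboBox
from PySide6.QtCore import Qt

class FileTypeKeyDelegate(QStyledItemDelegate):
    def __init__(self, file_type_keys: list[str], parent=None):
        super().__init__(parent)
        self._keys = [""] + list(file_type_keys)

    def createEditor(self, parent, option, index):
        combo = QComboBox(parent)
        combo.addItems(self._keys)
        return combo

    def setEditorData(self, editor, index):
        editor.setCurrentText(index.data(Qt.EditRole) or "")

    def setModelData(self, editor, model, index):
        model.setData(index, editor.currentText(), Qt.EditRole)
```

The delegate would then be attached to the inputs table with something like `table.setItemDelegateForColumn(0, FileTypeKeyDelegate(keys, table))`.
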
### Phase 3: General UX & Table Interactivity (in `gui/config_editor_dialog.py`)
1. **Implement `IMAGE_RESOLUTIONS` Table Add/Remove Buttons:**
* **Action:** In `populate_image_processing_tab`, connect the "Add Row" and "Remove Row" buttons for the `IMAGE_RESOLUTIONS` table.
* "Add Row": Prompt for "Name" (string) and "Resolution (px)" (integer).
* "Remove Row": Remove the selected row from the table and the underlying data.
2. **Implement Necessary Table Cell Delegates:**
* **Action:** For the `IMAGE_RESOLUTIONS` table, the "Resolution (px)" column should use a `QSpinBox` delegate or a `QLineEdit` with integer validation to ensure correct data input.
3. **Review/Refine Tab Layout & Widget Grouping:**
* **Action:** After the functional changes, review the overall layout of tabs and the grouping of settings within `gui/config_editor_dialog.py`.
* Ensure settings from `config/app_settings.json` are logically placed and clearly labeled.
* Verify widget labels are descriptive and tooltips are helpful where needed.
* Confirm correct mapping between UI widgets and the keys in `app_settings.json` (e.g., `OUTPUT_FILENAME_PATTERN` vs. `TARGET_FILENAME_PATTERN`).
## 4. Future Enhancements (Out of Scope for this Refactor)
* **Dedicated Editors for Definitions:** As per user feedback, if `ASSET_TYPE_DEFINITIONS` and `FILE_TYPE_DEFINITIONS` require UI-based editing, dedicated dialogs/widgets should be created. These would read from and save to their respective files ([`config/asset_type_definitions.json`](config/asset_type_definitions.json) and [`config/file_type_definitions.json`](config/file_type_definitions.json)) and could adopt a list/details UI similar to the `MAP_MERGE_RULES` editor.
* **Live Updates:** Consider mechanisms for applying some settings without requiring an application restart, if feasible for specific settings.
This plan aims to create a more focused, usable, and correct preferences window.

File diff suppressed because it is too large.

File diff suppressed because it is too large.

@@ -126,12 +126,15 @@ class SupplierSearchDelegate(QStyledItemDelegate):
"""Loads the list of known suppliers from the JSON config file."""
try:
with open(SUPPLIERS_CONFIG_PATH, 'r') as f:
suppliers = json.load(f)
if isinstance(suppliers, list):
suppliers_data = json.load(f) # Renamed variable for clarity
if isinstance(suppliers_data, list):
# Ensure all items are strings
return sorted([str(s) for s in suppliers if isinstance(s, str)])
else:
log.warning(f"'{SUPPLIERS_CONFIG_PATH}' does not contain a valid list. Starting fresh.")
return sorted([str(s) for s in suppliers_data if isinstance(s, str)])
elif isinstance(suppliers_data, dict): # ADDED: Handle dictionary case
# If it's a dictionary, extract keys as supplier names
return sorted([str(key) for key in suppliers_data.keys() if isinstance(key, str)])
else: # MODIFIED: Updated warning message
log.warning(f"'{SUPPLIERS_CONFIG_PATH}' does not contain a valid list or dictionary of suppliers. Starting fresh.")
return []
except FileNotFoundError:
log.info(f"'{SUPPLIERS_CONFIG_PATH}' not found. Starting with an empty supplier list.")


@@ -1,6 +1,7 @@
# gui/llm_editor_widget.py
import json
import logging
import copy # Added for deepcopy
from PySide6.QtWidgets import (
QWidget, QVBoxLayout, QTabWidget, QPlainTextEdit, QGroupBox,
QHBoxLayout, QPushButton, QFormLayout, QLineEdit, QDoubleSpinBox,
@@ -24,6 +25,7 @@ class LLMEditorWidget(QWidget):
def __init__(self, parent=None):
super().__init__(parent)
self._unsaved_changes = False
self.original_llm_settings = {} # Initialize original_llm_settings
self._init_ui()
self._connect_signals()
self.save_button.setEnabled(False) # Initially disabled
@@ -131,6 +133,7 @@ class LLMEditorWidget(QWidget):
try:
with open(LLM_CONFIG_PATH, 'r', encoding='utf-8') as f:
settings = json.load(f)
self.original_llm_settings = copy.deepcopy(settings) # Store a deep copy
# Populate Prompt Settings
self.prompt_editor.setPlainText(settings.get("llm_predictor_prompt", ""))
@@ -159,9 +162,9 @@ class LLMEditorWidget(QWidget):
logger.info("LLM settings loaded successfully.")
except FileNotFoundError:
logger.warning(f"LLM settings file not found: {LLM_CONFIG_PATH}. Using defaults and disabling editor.")
logger.warning(f"LLM settings file not found: {LLM_CONFIG_PATH}. Using defaults.")
QMessageBox.warning(self, "Load Error",
f"LLM settings file not found:\n{LLM_CONFIG_PATH}\n\nPlease ensure the file exists. Using default values.")
f"LLM settings file not found:\n{LLM_CONFIG_PATH}\n\nNew settings will be created if you save.")
# Reset to defaults (optional, or leave fields empty)
self.prompt_editor.clear()
self.endpoint_url_edit.clear()
@@ -169,19 +172,21 @@ class LLMEditorWidget(QWidget):
self.model_name_edit.clear()
self.temperature_spinbox.setValue(0.7)
self.timeout_spinbox.setValue(120)
# self.setEnabled(False) # Disabling might be too harsh if user wants to create settings
self.original_llm_settings = {} # Start with empty original settings if file not found
except json.JSONDecodeError as e:
logger.error(f"Error decoding JSON from {LLM_CONFIG_PATH}: {e}")
QMessageBox.critical(self, "Load Error",
f"Failed to parse LLM settings file:\n{LLM_CONFIG_PATH}\n\nError: {e}\n\nPlease check the file for syntax errors. Editor will be disabled.")
self.setEnabled(False) # Disable editor on critical load error
self.original_llm_settings = {} # Reset original settings on JSON error
except Exception as e: # Catch other potential errors during loading/populating
logger.error(f"An unexpected error occurred loading LLM settings: {e}", exc_info=True)
QMessageBox.critical(self, "Load Error",
f"An unexpected error occurred while loading settings:\n{e}\n\nEditor will be disabled.")
self.setEnabled(False)
self.original_llm_settings = {} # Reset original settings on other errors
# Reset unsaved changes flag and disable save button after loading
@@ -201,26 +206,38 @@ class LLMEditorWidget(QWidget):
"""Gather data from UI, save to JSON file, and handle errors."""
logger.info("Attempting to save LLM settings...")
settings_dict = {}
# 1.a. Load Current Target File
target_file_content = {}
try:
with open(LLM_CONFIG_PATH, 'r', encoding='utf-8') as f:
target_file_content = json.load(f)
except FileNotFoundError:
logger.info(f"{LLM_CONFIG_PATH} not found. Will create a new one.")
target_file_content = {} # Start with an empty dict if file doesn't exist
except json.JSONDecodeError as e:
logger.error(f"Error decoding existing {LLM_CONFIG_PATH}: {e}. Starting with an empty config for save.")
QMessageBox.warning(self, "Warning",
f"Could not parse existing LLM settings file ({LLM_CONFIG_PATH}).\n"
f"Any pre-existing settings in that file might be overwritten if you save now.\nError: {e}")
target_file_content = {} # Start fresh if current file is corrupt
# 1.b. Gather current UI settings into current_llm_settings
current_llm_settings = {}
parsed_examples = []
has_errors = False
has_errors = False # For example parsing
# Gather API Settings
settings_dict["llm_endpoint_url"] = self.endpoint_url_edit.text().strip()
settings_dict["llm_api_key"] = self.api_key_edit.text() # Keep as is, don't strip
settings_dict["llm_model_name"] = self.model_name_edit.text().strip()
settings_dict["llm_temperature"] = self.temperature_spinbox.value()
settings_dict["llm_request_timeout"] = self.timeout_spinbox.value()
current_llm_settings["llm_endpoint_url"] = self.endpoint_url_edit.text().strip()
current_llm_settings["llm_api_key"] = self.api_key_edit.text() # Keep as is
current_llm_settings["llm_model_name"] = self.model_name_edit.text().strip()
current_llm_settings["llm_temperature"] = self.temperature_spinbox.value()
current_llm_settings["llm_request_timeout"] = self.timeout_spinbox.value()
current_llm_settings["llm_predictor_prompt"] = self.prompt_editor.toPlainText().strip()
# Gather Prompt Settings
settings_dict["llm_predictor_prompt"] = self.prompt_editor.toPlainText().strip()
# Gather and Parse Examples
for i in range(self.examples_tab_widget.count()):
example_editor = self.examples_tab_widget.widget(i)
if isinstance(example_editor, QTextEdit):
example_text = example_editor.toPlainText().strip()
if not example_text: # Skip empty examples silently
if not example_text:
continue
try:
parsed_example = json.loads(example_text)
@@ -231,40 +248,58 @@ class LLMEditorWidget(QWidget):
logger.warning(f"Invalid JSON in '{tab_name}': {e}. Skipping example.")
QMessageBox.warning(self, "Invalid Example",
f"The content in '{tab_name}' is not valid JSON and will not be saved.\n\nError: {e}\n\nPlease correct it or remove the tab.")
# Optionally switch to the tab with the error:
# self.examples_tab_widget.setCurrentIndex(i)
else:
logger.warning(f"Widget at index {i} in examples tab is not a QTextEdit. Skipping.")
logger.warning(f"Widget at index {i} in examples tab is not a QTextEdit. Skipping.")
if has_errors:
logger.warning("LLM settings not saved due to invalid JSON in examples.")
# Keep save button enabled if there were errors, allowing user to fix and retry
# self.save_button.setEnabled(True)
# self._unsaved_changes = True
return # Stop saving process
return
settings_dict["llm_predictor_examples"] = parsed_examples
current_llm_settings["llm_predictor_examples"] = parsed_examples
# Save the dictionary to file
# 1.c. Identify Changes and Update Target File Content
changed_settings_count = 0
for key, current_value in current_llm_settings.items():
original_value = self.original_llm_settings.get(key)
# Direct comparison works for all value types here, including lists (e.g., examples).
# The check also treats keys missing from original_llm_settings as changed.
if key not in self.original_llm_settings or current_value != original_value:
target_file_content[key] = current_value
logger.debug(f"Setting '{key}' changed or added. Old: '{original_value}', New: '{current_value}'")
changed_settings_count +=1
if changed_settings_count == 0 and self._unsaved_changes:
logger.info("Save called, but no actual changes detected compared to original loaded settings.")
# If _unsaved_changes was true, it means UI interaction happened,
# but values might have been reverted to original.
# We still proceed to save target_file_content as it might contain
# values from a file that was modified externally since last load.
# Or, if the file didn't exist, it will now be created with current UI values.
# 1.d. Save Updated Content
try:
save_llm_config(settings_dict)
save_llm_config(target_file_content) # Save the potentially modified target_file_content
QMessageBox.information(self, "Save Successful", f"LLM settings saved to:\n{LLM_CONFIG_PATH}")
# Update original_llm_settings to reflect the newly saved state
self.original_llm_settings = copy.deepcopy(target_file_content)
self.save_button.setEnabled(False)
self._unsaved_changes = False
self.settings_saved.emit() # Notify MainWindow or others
self.settings_saved.emit()
logger.info("LLM settings saved successfully.")
except ConfigurationError as e:
logger.error(f"Failed to save LLM settings: {e}")
QMessageBox.critical(self, "Save Error", f"Could not save LLM settings.\n\nError: {e}")
# Keep save button enabled as save failed
self.save_button.setEnabled(True)
self.save_button.setEnabled(True) # Keep save enabled
self._unsaved_changes = True
except Exception as e: # Catch unexpected errors during save
except Exception as e:
logger.error(f"An unexpected error occurred during LLM settings save: {e}", exc_info=True)
QMessageBox.critical(self, "Save Error", f"An unexpected error occurred while saving settings:\n{e}")
self.save_button.setEnabled(True)
self.save_button.setEnabled(True) # Keep save enabled
self._unsaved_changes = True
# --- Example Management Slots ---


@@ -27,6 +27,7 @@ from .llm_editor_widget import LLMEditorWidget
from .log_console_widget import LogConsoleWidget
from .main_panel_widget import MainPanelWidget
from .definitions_editor_dialog import DefinitionsEditorDialog
# --- Backend Imports for Data Structures ---
from rule_structure import SourceRule, AssetRule, FileRule
@@ -861,6 +862,11 @@ class MainWindow(QMainWindow):
self.preferences_action = QAction("&Preferences...", self)
self.preferences_action.triggered.connect(self._open_config_editor)
edit_menu.addAction(self.preferences_action)
edit_menu.addSeparator()
self.definitions_editor_action = QAction("Edit Definitions...", self)
self.definitions_editor_action.triggered.connect(self._open_definitions_editor)
edit_menu.addAction(self.definitions_editor_action)
view_menu = self.menu_bar.addMenu("&View")
@@ -904,6 +910,17 @@ class MainWindow(QMainWindow):
log.exception(f"Error opening configuration editor dialog: {e}")
QMessageBox.critical(self, "Error", f"An error occurred while opening the configuration editor:\n{e}")
@Slot() # PySide6.QtCore.Slot
def _open_definitions_editor(self):
log.debug("Opening Definitions Editor dialog.")
try:
# DefinitionsEditorDialog is imported at the top of the file
dialog = DefinitionsEditorDialog(self)
dialog.exec() # modal; exec_() is a deprecated alias in PySide6
log.debug("Definitions Editor dialog closed.")
except Exception as e:
log.exception(f"Error opening Definitions Editor dialog: {e}")
QMessageBox.critical(self, "Error", f"An error occurred while opening the Definitions Editor:\n{e}")
@Slot(bool)
def _toggle_log_console_visibility(self, checked):


@@ -20,7 +20,8 @@ script_dir = Path(__file__).parent
project_root = script_dir.parent
PRESETS_DIR = project_root / "Presets"
TEMPLATE_PATH = PRESETS_DIR / "_template.json"
APP_SETTINGS_PATH_LOCAL = project_root / "config" / "app_settings.json"
APP_SETTINGS_PATH_LOCAL = project_root / "config" / "app_settings.json" # Retain for other settings if used elsewhere
FILE_TYPE_DEFINITIONS_PATH = project_root / "config" / "file_type_definitions.json"
log = logging.getLogger(__name__)
@@ -63,18 +64,19 @@ class PresetEditorWidget(QWidget):
"""Loads FILE_TYPE_DEFINITIONS keys from app_settings.json."""
keys = []
try:
if APP_SETTINGS_PATH_LOCAL.is_file():
with open(APP_SETTINGS_PATH_LOCAL, 'r', encoding='utf-8') as f:
if FILE_TYPE_DEFINITIONS_PATH.is_file():
with open(FILE_TYPE_DEFINITIONS_PATH, 'r', encoding='utf-8') as f:
settings = json.load(f)
# The FILE_TYPE_DEFINITIONS key is at the root of file_type_definitions.json
ftd = settings.get("FILE_TYPE_DEFINITIONS", {})
keys = list(ftd.keys())
log.debug(f"Successfully loaded {len(keys)} FILE_TYPE_DEFINITIONS keys.")
log.debug(f"Successfully loaded {len(keys)} FILE_TYPE_DEFINITIONS keys from {FILE_TYPE_DEFINITIONS_PATH}.")
else:
log.error(f"app_settings.json not found at {APP_SETTINGS_PATH_LOCAL} for PresetEditorWidget.")
log.error(f"file_type_definitions.json not found at {FILE_TYPE_DEFINITIONS_PATH} for PresetEditorWidget.")
except json.JSONDecodeError as e:
log.error(f"Failed to parse app_settings.json in PresetEditorWidget: {e}")
log.error(f"Failed to parse file_type_definitions.json in PresetEditorWidget: {e}")
except Exception as e:
log.error(f"Error loading FILE_TYPE_DEFINITIONS keys in PresetEditorWidget: {e}")
log.error(f"Error loading FILE_TYPE_DEFINITIONS keys from {FILE_TYPE_DEFINITIONS_PATH} in PresetEditorWidget: {e}")
return keys
def _init_ui(self):


@@ -552,6 +552,13 @@ class UnifiedViewModel(QAbstractItemModel):
supplier_col_index = self.createIndex(existing_source_row, self.COL_SUPPLIER, existing_source_rule)
self.dataChanged.emit(supplier_col_index, supplier_col_index, [Qt.DisplayRole, Qt.EditRole])
# Always update the preset_name from the new_source_rule, as this reflects the latest prediction context
if existing_source_rule.preset_name != new_source_rule.preset_name:
log.debug(f" Updating preset_name for SourceRule '{source_path}' from '{existing_source_rule.preset_name}' to '{new_source_rule.preset_name}'")
existing_source_rule.preset_name = new_source_rule.preset_name
# Note: preset_name is not directly displayed in the view, so no dataChanged needed for a specific column,
# but if it influenced other display elements, dataChanged would be emitted for those.
# --- Merge AssetRules ---
existing_assets_dict = {asset.asset_name: asset for asset in existing_source_rule.assets}

main.py

@@ -4,6 +4,7 @@ import time
import os
import logging
from pathlib import Path
import re # Added for checking incrementing token
from concurrent.futures import ProcessPoolExecutor, as_completed
import subprocess
import shutil
@@ -238,9 +239,14 @@ class ProcessingTask(QRunnable):
# output_dir should already be a Path object
pattern = getattr(config, 'output_directory_pattern', None)
if pattern:
log.debug(f"Calculating next incrementing value for dir: {output_dir} using pattern: {pattern}")
next_increment_str = get_next_incrementing_value(output_dir, pattern)
log.info(f"Calculated next incrementing value for {output_dir}: {next_increment_str}")
# Only call get_next_incrementing_value if the pattern contains an incrementing token
if re.search(r"\[IncrementingValue\]|#+", pattern):
log.debug(f"Incrementing token found in pattern '{pattern}'. Calculating next value for dir: {output_dir}")
next_increment_str = get_next_incrementing_value(output_dir, pattern)
log.info(f"Calculated next incrementing value for {output_dir}: {next_increment_str}")
else:
log.debug(f"No incrementing token found in pattern '{pattern}'. Skipping increment calculation.")
next_increment_str = None # Or a default like "00" if downstream expects a string, but None is cleaner if handled.
else:
log.warning(f"Cannot calculate incrementing value: 'output_directory_pattern' not found in configuration for preset {config.preset_name}")
except Exception as e:
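
The guard added here only calls `get_next_incrementing_value` when the pattern actually contains an incrementing token. A quick illustration of the same check in isolation; the pattern strings are examples, not taken from a real preset:

```python
import re

# Token check as used above: either the literal [IncrementingValue]
# token or a run of '#' placeholders triggers increment calculation.
_INCREMENT_TOKEN = re.compile(r"\[IncrementingValue\]|#+")

for pattern in ("[AssetName]_[IncrementingValue]", "[AssetName]_###", "[AssetName]"):
    print(pattern, "->", bool(_INCREMENT_TOKEN.search(pattern)))
# [AssetName]_[IncrementingValue] -> True
# [AssetName]_### -> True
# [AssetName] -> False
```
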
@@ -401,11 +407,13 @@ class App(QObject):
# --- Get paths needed for ProcessingTask ---
try:
# Access output path via MainPanelWidget
output_base_path_str = self.main_window.main_panel_widget.output_path_edit.text().strip()
# Get output_dir from processing_settings passed from autotest.py
output_base_path_str = processing_settings.get("output_dir")
log.info(f"APP_DEBUG: Received output_dir in processing_settings: {output_base_path_str}")
if not output_base_path_str:
log.error("Cannot queue tasks: Output directory path is empty in the GUI.")
self.main_window.statusBar().showMessage("Error: Output directory cannot be empty.", 5000)
log.error("Cannot queue tasks: Output directory path is empty in processing_settings.")
# self.main_window.statusBar().showMessage("Error: Output directory cannot be empty.", 5000) # GUI specific
return
output_base_path = Path(output_base_path_str)
# Basic validation - check if it's likely a valid path structure (doesn't guarantee existence/writability here)
@@ -477,8 +485,9 @@ class App(QObject):
engine=task_engine,
rule=rule,
workspace_path=workspace_path,
output_base_path=output_base_path
output_base_path=output_base_path # This is Path(output_base_path_str)
)
log.info(f"APP_DEBUG: Passing to ProcessingTask: output_base_path = {output_base_path}")
task.signals.finished.connect(self._on_task_finished)
log.debug(f"DEBUG: Calling thread_pool.start() for task {i+1}")
self.thread_pool.start(task)


@@ -195,17 +195,25 @@ def _process_archive_task(archive_path: Path, output_dir: Path, processed_dir: P
# Assuming config object has 'output_directory_pattern' attribute/key
pattern = getattr(config, 'output_directory_pattern', None) # Use getattr for safety
if pattern:
log.debug(f"[Task:{archive_path.name}] Calculating next incrementing value for dir: {output_dir} using pattern: {pattern}")
next_increment_str = get_next_incrementing_value(output_dir, pattern)
log.info(f"[Task:{archive_path.name}] Calculated next incrementing value: {next_increment_str}")
if re.search(r"\[IncrementingValue\]|#+", pattern):
log.debug(f"[Task:{archive_path.name}] Incrementing token found in pattern '{pattern}'. Calculating next value for dir: {output_dir}")
next_increment_str = get_next_incrementing_value(output_dir, pattern)
log.info(f"[Task:{archive_path.name}] Calculated next incrementing value: {next_increment_str}")
else:
log.debug(f"[Task:{archive_path.name}] No incrementing token found in pattern '{pattern}'. Skipping increment calculation.")
next_increment_str = None
else:
# Check if config is a dict as fallback (depends on load_config implementation)
if isinstance(config, dict):
pattern = config.get('output_directory_pattern')
if pattern:
log.debug(f"[Task:{archive_path.name}] Calculating next incrementing value for dir: {output_dir} using pattern (from dict): {pattern}")
next_increment_str = get_next_incrementing_value(output_dir, pattern)
log.info(f"[Task:{archive_path.name}] Calculated next incrementing value (from dict): {next_increment_str}")
if re.search(r"\[IncrementingValue\]|#+", pattern):
log.debug(f"[Task:{archive_path.name}] Incrementing token found in pattern '{pattern}' (from dict). Calculating next value for dir: {output_dir}")
next_increment_str = get_next_incrementing_value(output_dir, pattern)
log.info(f"[Task:{archive_path.name}] Calculated next incrementing value (from dict): {next_increment_str}")
else:
log.debug(f"[Task:{archive_path.name}] No incrementing token found in pattern '{pattern}' (from dict). Skipping increment calculation.")
next_increment_str = None
else:
log.warning(f"[Task:{archive_path.name}] Cannot calculate incrementing value: 'output_directory_pattern' not found in configuration dictionary.")
else:


@@ -1,3 +1,4 @@
import dataclasses # Added import
from dataclasses import dataclass
from pathlib import Path
from typing import Dict, List, Optional
@@ -5,6 +6,86 @@ from typing import Dict, List, Optional
from rule_structure import AssetRule, FileRule, SourceRule
from configuration import Configuration
# Imports needed for new dataclasses
import numpy as np
from typing import Any, Tuple, Union
# --- Stage Input/Output Dataclasses ---
# Item types for PrepareProcessingItemsStage output
@dataclass
class MergeTaskDefinition:
"""Represents a merge task identified by PrepareProcessingItemsStage."""
task_data: Dict # The original task data from context.merged_image_tasks
task_key: str # e.g., "merged_task_0"
# Output for RegularMapProcessorStage
@dataclass
class ProcessedRegularMapData:
processed_image_data: np.ndarray
final_internal_map_type: str
source_file_path: Path
original_bit_depth: Optional[int]
original_dimensions: Optional[Tuple[int, int]] # (width, height)
transformations_applied: List[str]
resolution_key: Optional[str] = None # Added field
status: str = "Processed"
error_message: Optional[str] = None
# Output for MergedTaskProcessorStage
@dataclass
class ProcessedMergedMapData:
merged_image_data: np.ndarray
output_map_type: str # Internal type
source_bit_depths: List[int]
final_dimensions: Optional[Tuple[int, int]] # (width, height)
transformations_applied_to_inputs: Dict[str, List[str]] # Map type -> list of transforms
status: str = "Processed"
error_message: Optional[str] = None
# Input for InitialScalingStage
@dataclass
class InitialScalingInput:
image_data: np.ndarray
initial_scaling_mode: str # Moved before fields with defaults
original_dimensions: Optional[Tuple[int, int]] # (width, height)
resolution_key: Optional[str] = None # Added field
# Configuration needed
# Output for InitialScalingStage
@dataclass
class InitialScalingOutput:
scaled_image_data: np.ndarray
scaling_applied: bool
final_dimensions: Tuple[int, int] # (width, height)
resolution_key: Optional[str] = None # Added field
# Input for SaveVariantsStage
@dataclass
class SaveVariantsInput:
image_data: np.ndarray # Final data (potentially scaled)
internal_map_type: str # Final internal type (e.g., MAP_ROUGH, MAP_COL-1)
source_bit_depth_info: List[int]
# Configuration needed
output_filename_pattern_tokens: Dict[str, Any]
image_resolutions: List[int]
file_type_defs: Dict[str, Dict]
output_format_8bit: str
output_format_16bit_primary: str
output_format_16bit_fallback: str
png_compression_level: int
jpg_quality: int
output_filename_pattern: str
resolution_threshold_for_jpg: Optional[int] # Added for JPG conversion
# Output for SaveVariantsStage
@dataclass
class SaveVariantsOutput:
saved_files_details: List[Dict]
status: str = "Processed"
error_message: Optional[str] = None
# Add a field to AssetProcessingContext for the prepared items
@dataclass
class AssetProcessingContext:
source_rule: SourceRule
@@ -14,11 +95,16 @@ class AssetProcessingContext:
output_base_path: Path
effective_supplier: Optional[str]
asset_metadata: Dict
processed_maps_details: Dict[str, Dict[str, Dict]]
merged_maps_details: Dict[str, Dict[str, Dict]]
processed_maps_details: Dict[str, Dict] # Will store final results per item_key
merged_maps_details: Dict[str, Dict] # This might become redundant? Keep for now.
files_to_process: List[FileRule]
loaded_data_cache: Dict
config_obj: Configuration
status_flags: Dict
incrementing_value: Optional[str]
sha5_value: Optional[str]
sha5_value: Optional[str] # Keep existing fields
# New field for prepared items
processing_items: Optional[List[Union[FileRule, MergeTaskDefinition]]] = None
# Temporary storage during pipeline execution (managed by orchestrator)
# Keys could be FileRule object hash/id or MergeTaskDefinition task_key
intermediate_results: Optional[Dict[Any, Union[ProcessedRegularMapData, ProcessedMergedMapData, InitialScalingOutput]]] = None
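
A short sketch of how the orchestrator is expected to key `intermediate_results`, based on the comments above (FileRule hash/id for regular maps, `task_key` for merge tasks). Here `context` is assumed to be an already-initialized `AssetProcessingContext`; the map type names and transform list are hypothetical.

```python
# Illustrative wiring only; field names mirror the dataclasses above, while
# the concrete values ("MAP_MERGED", "invert_green") are hypothetical.
import numpy as np

merge_def = MergeTaskDefinition(
    task_data={"output_map_type": "MAP_MERGED"},  # original entry from context.merged_image_tasks
    task_key="merged_task_0",
)
result = ProcessedMergedMapData(
    merged_image_data=np.zeros((4, 4, 3), dtype=np.float32),
    output_map_type="MAP_MERGED",
    source_bit_depths=[8, 8, 8],
    final_dimensions=(4, 4),
    transformations_applied_to_inputs={"MAP_ROUGH": ["invert_green"]},
)
# Keyed by task_key for merge tasks (a FileRule hash/id would be used for regular maps).
context.intermediate_results = context.intermediate_results or {}
context.intermediate_results[merge_def.task_key] = result
```
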


@@ -1,126 +1,513 @@
from typing import List, Dict, Optional
from pathlib import Path
# --- Imports ---
import logging
import shutil
import tempfile
import logging
from pathlib import Path
from typing import List, Dict, Optional, Any, Union # Added Any, Union
import numpy as np # Added numpy
from configuration import Configuration
from rule_structure import SourceRule, AssetRule
from .asset_context import AssetProcessingContext
from rule_structure import SourceRule, AssetRule, FileRule, ProcessingItem # Added ProcessingItem
# Import new context classes and stages
from .asset_context import (
AssetProcessingContext,
MergeTaskDefinition,
ProcessedRegularMapData,
ProcessedMergedMapData,
InitialScalingInput,
InitialScalingOutput,
SaveVariantsInput,
SaveVariantsOutput,
)
from .stages.base_stage import ProcessingStage
# Import the new stages we created
from .stages.prepare_processing_items import PrepareProcessingItemsStage
from .stages.regular_map_processor import RegularMapProcessorStage
from .stages.merged_task_processor import MergedTaskProcessorStage
from .stages.initial_scaling import InitialScalingStage
from .stages.save_variants import SaveVariantsStage
log = logging.getLogger(__name__)
# --- PipelineOrchestrator Class ---
class PipelineOrchestrator:
"""
Orchestrates the processing of assets based on source rules and a series of processing stages.
Manages the overall flow, including the core item processing sequence.
"""
def __init__(self, config_obj: Configuration, stages: List[ProcessingStage]):
def __init__(self, config_obj: Configuration,
pre_item_stages: List[ProcessingStage],
post_item_stages: List[ProcessingStage]):
"""
Initializes the PipelineOrchestrator.
Args:
config_obj: The main configuration object.
stages: A list of processing stages to be executed in order.
pre_item_stages: Stages to run before the core item processing loop.
post_item_stages: Stages to run after the core item processing loop.
"""
self.config_obj: Configuration = config_obj
self.stages: List[ProcessingStage] = stages
self.pre_item_stages: List[ProcessingStage] = pre_item_stages
self.post_item_stages: List[ProcessingStage] = post_item_stages
# Instantiate the core item processing stages internally
self._prepare_stage = PrepareProcessingItemsStage()
self._regular_processor_stage = RegularMapProcessorStage()
self._merged_processor_stage = MergedTaskProcessorStage()
self._scaling_stage = InitialScalingStage()
self._save_stage = SaveVariantsStage()
def _execute_specific_stages(
self, context: AssetProcessingContext,
stages_to_run: List[ProcessingStage],
stage_group_name: str,
stop_on_skip: bool = True
) -> AssetProcessingContext:
"""Executes a specific list of stages."""
asset_name = context.asset_rule.asset_name if context.asset_rule else "Unknown"
log.debug(f"Asset '{asset_name}': Executing {stage_group_name} stages...")
for stage in stages_to_run:
stage_name = stage.__class__.__name__
log.debug(f"Asset '{asset_name}': Executing {stage_group_name} stage: {stage_name}")
try:
# Check if the stage expects the context directly or a specific input.
# For now, assume pre/post-item stages take the context directly; this
# may need refinement if they also adopt the Input/Output pattern.
context = stage.execute(context)
except Exception as e:
log.error(f"Asset '{asset_name}': Error during {stage_group_name} stage '{stage_name}': {e}", exc_info=True)
context.status_flags["asset_failed"] = True
context.status_flags["asset_failed_stage"] = stage_name
context.status_flags["asset_failed_reason"] = str(e)
# Update overall metadata immediately on stage failure
context.asset_metadata["status"] = f"Failed: Error in stage {stage_name}"
context.asset_metadata["error_message"] = str(e)
break # Stop processing the remaining stages in this group for this asset
if stop_on_skip and context.status_flags.get("skip_asset"):
log.info(f"Asset '{asset_name}': Skipped by outer stage '{stage_name}'. Reason: {context.status_flags.get('skip_reason', 'N/A')}")
break # Skip remaining outer stages for this asset
return context
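# Note: a failure in any stage marks the asset failed and stops the remaining
# stages in that group; a skip_asset flag stops the group only when
# stop_on_skip is True (used for the pre-item group).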
def process_source_rule(
self,
source_rule: SourceRule,
workspace_path: Path,
output_base_path: Path,
overwrite: bool,
incrementing_value: Optional[str],
sha5_value: Optional[str] # Keep param name consistent for now
) -> Dict[str, List[str]]:
"""
Processes a single source rule, iterating through its asset rules and applying all stages.
Args:
source_rule: The source rule to process.
workspace_path: The base path of the workspace.
output_base_path: The base path for output files.
overwrite: Whether to overwrite existing files (not fully implemented yet).
incrementing_value: An optional incrementing value for versioning or naming.
sha5_value: An optional SHA5 hash value for the asset (assuming typo, likely sha256).
Returns:
A dictionary summarizing the processing status of assets.
Processes a single source rule, applying pre-processing stages,
the core item processing loop (Prepare, Process, Scale, Save),
and post-processing stages.
"""
overall_status: Dict[str, List[str]] = {
"processed": [],
"skipped": [],
"failed": [],
}
engine_temp_dir_path: Optional[Path] = None
try:
# --- Setup Temporary Directory ---
# This temp dir covers the entire source_rule run, not a single asset;
# individual stages may create their own sub-temp dirs if necessary.
temp_dir_path_str = tempfile.mkdtemp(prefix=self.config_obj.temp_dir_prefix)
engine_temp_dir_path = Path(temp_dir_path_str)
log.debug(f"PipelineOrchestrator created temporary directory: {engine_temp_dir_path} using prefix '{self.config_obj.temp_dir_prefix}'")
log.debug(f"PipelineOrchestrator created temporary directory: {engine_temp_dir_path}")
# --- Process Each Asset Rule ---
for asset_rule in source_rule.assets:
log.debug(f"Orchestrator: Processing asset '{asset_rule.asset_name}'")
asset_name = asset_rule.asset_name
log.info(f"Orchestrator: Processing asset '{asset_name}'")
# --- Initialize Asset Context ---
context = AssetProcessingContext(
source_rule=source_rule,
asset_rule=asset_rule,
workspace_path=workspace_path,
engine_temp_dir=engine_temp_dir_path,
output_base_path=output_base_path,
effective_supplier=None,
asset_metadata={},
processed_maps_details={}, # Final results per item
merged_maps_details={}, # Keep for potential backward compat or other uses?
files_to_process=[], # Populated by FileRuleFilterStage (assumed in outer_stages)
loaded_data_cache={},
config_obj=self.config_obj,
status_flags={"skip_asset": False, "asset_failed": False}, # Initialize common flags
status_flags={"skip_asset": False, "asset_failed": False},
incrementing_value=incrementing_value,
sha5_value=sha5_value,
processing_items=[], # Initialize new fields
intermediate_results={}
)
# --- Execute Pre-Item-Processing Outer Stages ---
# (e.g., MetadataInit, SupplierDet, FileRuleFilter, GlossToRough, NormalInvert)
# All pre_item_stages run here, before the core item loop.
context = self._execute_specific_stages(context, self.pre_item_stages, "pre-item", stop_on_skip=True)
# Check if asset should be skipped or failed after pre-processing
if context.status_flags.get("asset_failed"):
log.error(f"Asset '{asset_name}': Failed during pre-processing stage '{context.status_flags.get('asset_failed_stage', 'Unknown')}'. Skipping item processing.")
overall_status["failed"].append(f"{asset_name} (Failed in {context.status_flags.get('asset_failed_stage', 'Pre-Processing')})")
continue # Move to the next asset rule
if context.status_flags.get("skip_asset"):
log.info(f"Asset '{asset_name}': Skipped during pre-processing. Skipping item processing.")
overall_status["skipped"].append(asset_name)
continue # Move to the next asset rule
# --- Prepare Processing Items ---
log.debug(f"Asset '{asset_name}': Preparing processing items...")
try:
log.info(f"ORCHESTRATOR_TRACE: Asset '{asset_name}': Attempting to call _prepare_stage.execute(). Current context.status_flags: {context.status_flags}")
# Prepare stage modifies context directly
context = self._prepare_stage.execute(context)
log.info(f"ORCHESTRATOR_TRACE: Asset '{asset_name}': Successfully RETURNED from _prepare_stage.execute(). context.processing_items count: {len(context.processing_items) if context.processing_items is not None else 'None'}. context.status_flags: {context.status_flags}")
except Exception as e:
log.error(f"ORCHESTRATOR_TRACE: Asset '{asset_name}': EXCEPTION during _prepare_stage.execute(): {e}", exc_info=True)
context.status_flags["asset_failed"] = True
context.status_flags["asset_failed_stage"] = "PrepareProcessingItemsStage"
context.status_flags["asset_failed_reason"] = str(e)
overall_status["failed"].append(f"{asset_name} (Failed in Prepare Items)")
continue # Move to next asset
if context.status_flags.get('prepare_items_failed'):
log.error(f"Asset '{asset_name}': Failed during item preparation. Reason: {context.status_flags.get('prepare_items_failed_reason', 'Unknown')}. Skipping item processing loop.")
overall_status["failed"].append(f"{asset_name} (Failed Prepare Items: {context.status_flags.get('prepare_items_failed_reason', 'Unknown')})")
continue # Move to next asset
if not context.processing_items:
log.info(f"Asset '{asset_name}': No items to process after preparation stage.")
# Status will be determined at the end
# --- Core Item Processing Loop ---
log.info("ORCHESTRATOR: Starting processing items loop for asset '%s'", asset_name) # Corrected indentation and message
log.info(f"Asset '{asset_name}': Starting core item processing loop for {len(context.processing_items)} items...")
asset_had_item_errors = False
for item_index, item in enumerate(context.processing_items):
item_key: Any = None # Key for storing results (FileRule object or task_key string)
item_log_prefix = f"Asset '{asset_name}', Item {item_index + 1}/{len(context.processing_items)}"
processed_data: Optional[Union[ProcessedRegularMapData, ProcessedMergedMapData]] = None
scaled_data_output: Optional[InitialScalingOutput] = None # Store output object
saved_data: Optional[SaveVariantsOutput] = None
item_status = "Failed" # Default item status
current_image_data: Optional[np.ndarray] = None # Track current image data ref
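# Per-item flow: resolve the item type, optionally scale, then save variants.
# ProcessingItems arrive with image data already loaded; MergeTaskDefinitions
# are rendered by the merged-task processor first.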
try:
# The 'item' is now expected to be a ProcessingItem or MergeTaskDefinition
if isinstance(item, ProcessingItem):
item_key = f"{item.source_file_info_ref}_{item.map_type_identifier}_{item.resolution_key}"
item_log_prefix = f"Asset '{asset_name}', ProcItem '{item_key}'"
log.info(f"{item_log_prefix}: Starting processing.")
# Data for ProcessingItem is already loaded by PrepareProcessingItemsStage
current_image_data = item.image_data
current_dimensions = item.current_dimensions
item_resolution_key = item.resolution_key
# Transformations (e.g. gloss-to-rough, normal invert) are assumed to have been
# applied by RegularMapProcessorStage if still used, or in/before
# PrepareProcessingItemsStage, so item.image_data is ready for scaling/saving.
# 'processed_data' is only a placeholder for what would feed into scaling;
# a dedicated transformation stage would be cleaner. For now the item's
# fields are used directly.
# 2. Scale (Optional)
scaling_mode = getattr(context.config_obj, "INITIAL_SCALING_MODE", "NONE")
# Pass the item's resolution_key to InitialScalingInput
scale_input = InitialScalingInput(
image_data=current_image_data,
original_dimensions=current_dimensions,
initial_scaling_mode=scaling_mode,
resolution_key=item_resolution_key # Pass the key
)
# Add _source_file_path for logging within InitialScalingStage if available
setattr(scale_input, '_source_file_path', item.source_file_info_ref)
log.debug(f"{item_log_prefix}: Calling InitialScalingStage. Input res_key: {scale_input.resolution_key}")
scaled_data_output = self._scaling_stage.execute(scale_input)
current_image_data = scaled_data_output.scaled_image_data
current_dimensions = scaled_data_output.final_dimensions # Dimensions after scaling
# The resolution_key from item is passed through by InitialScalingOutput
output_resolution_key = scaled_data_output.resolution_key
log.debug(f"{item_log_prefix}: InitialScalingStage output. Scaled: {scaled_data_output.scaling_applied}, New Dims: {current_dimensions}, Output ResKey: {output_resolution_key}")
context.intermediate_results[item_key] = scaled_data_output
# 3. Save Variants
if current_image_data is None or current_image_data.size == 0:
log.warning(f"{item_log_prefix}: Skipping save stage because image data is empty.")
context.processed_maps_details[item_key] = {"status": "Skipped", "notes": "No image data to save", "stage": "SaveVariantsStage"}
continue
log.debug(f"{item_log_prefix}: Preparing to save variant with resolution key '{output_resolution_key}'...")
output_filename_tokens = {
'asset_name': asset_name,
'output_base_directory': context.engine_temp_dir,
'supplier': context.effective_supplier or 'UnknownSupplier',
'resolution': output_resolution_key # Use the key from the item/scaling stage
}
# Determine image_resolutions argument for save_image_variants
save_specific_resolutions = {}
if output_resolution_key == "LOWRES":
# For LOWRES, the "resolution value" is its actual dimension.
# image_saving_utils needs a dict like {"LOWRES": 64} if current_dim is 64x64
# Assuming current_dimensions[0] is width.
save_specific_resolutions = {"LOWRES": current_dimensions[0] if current_dimensions else 0}
log.debug(f"{item_log_prefix}: Preparing to save LOWRES variant. Dimensions: {current_dimensions}. Save resolutions arg: {save_specific_resolutions}")
elif output_resolution_key in context.config_obj.image_resolutions:
save_specific_resolutions = {output_resolution_key: context.config_obj.image_resolutions[output_resolution_key]}
else:
log.warning(f"{item_log_prefix}: Resolution key '{output_resolution_key}' not found in config.image_resolutions and not LOWRES. Saving might fail or use full res.")
# Fallback: pass all configured resolutions and let image_saving_utils match by size.
# A stricter alternative would be to fail here on an unknown, non-LOWRES key.
save_specific_resolutions = context.config_obj.image_resolutions
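# For illustration (actual keys/values come from the user config), the dict
# passed on might look like {"2K": 2048} for a known key, {"LOWRES": 64} for
# a 64px LOWRES item, or the full config mapping in the fallback case.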
save_input = SaveVariantsInput(
image_data=current_image_data,
internal_map_type=item.map_type_identifier,
source_bit_depth_info=[item.bit_depth] if item.bit_depth is not None else [8], # Default to 8 if not set
output_filename_pattern_tokens=output_filename_tokens,
image_resolutions=save_specific_resolutions, # Pass the specific resolution(s)
file_type_defs=getattr(context.config_obj, "FILE_TYPE_DEFINITIONS", {}),
output_format_8bit=context.config_obj.get_8bit_output_format(),
output_format_16bit_primary=context.config_obj.get_16bit_output_formats()[0],
output_format_16bit_fallback=context.config_obj.get_16bit_output_formats()[1],
png_compression_level=context.config_obj.png_compression_level,
jpg_quality=context.config_obj.jpg_quality,
output_filename_pattern=context.config_obj.output_filename_pattern,
resolution_threshold_for_jpg=getattr(context.config_obj, "resolution_threshold_for_jpg", None)
)
saved_data = self._save_stage.execute(save_input)
if saved_data and saved_data.status.startswith("Processed"):
item_status = saved_data.status
log.info(f"{item_log_prefix}: Item successfully processed and saved. Status: {item_status}")
context.processed_maps_details[item_key] = {
"status": item_status,
"saved_files_info": saved_data.saved_files_details,
"internal_map_type": item.map_type_identifier,
"resolution_key": output_resolution_key,
"original_dimensions": item.original_dimensions,
"final_dimensions": current_dimensions, # Dimensions after scaling
"source_file": item.source_file_info_ref,
}
else:
error_msg = saved_data.error_message if saved_data else "Save stage returned None"
log.error(f"{item_log_prefix}: Failed during save stage. Error: {error_msg}")
context.processed_maps_details[item_key] = {"status": "Failed", "notes": f"Save Error: {error_msg}", "stage": "SaveVariantsStage"}
asset_had_item_errors = True
item_status = "Failed"
elif isinstance(item, MergeTaskDefinition):
# --- This part needs similar refactoring for resolution_key if merged outputs can be LOWRES ---
# --- For now, assume merged tasks always produce standard resolutions ---
item_key = item.task_key
item_log_prefix = f"Asset '{asset_name}', MergeTask '{item_key}'"
log.info(f"{item_log_prefix}: Processing MergeTask.")
# 1. Process Merge Task
processed_data = self._merged_processor_stage.execute(context, item)
if not processed_data or processed_data.status != "Processed":
error_msg = processed_data.error_message if processed_data else "Merge processor returned None"
log.error(f"{item_log_prefix}: Failed during merge processing. Error: {error_msg}")
context.processed_maps_details[item_key] = {"status": "Failed", "notes": f"Merge Error: {error_msg}", "stage": "MergedTaskProcessorStage"}
asset_had_item_errors = True
continue
context.intermediate_results[item_key] = processed_data
current_image_data = processed_data.merged_image_data
current_dimensions = processed_data.final_dimensions
# 2. Scale Merged Output (Optional)
# Merged tasks typically don't have a single "resolution_key" like LOWRES from source.
# They produce an image that then gets downscaled to 1K, PREVIEW etc.
# So, resolution_key for InitialScalingInput here would be None or a default.
scaling_mode = getattr(context.config_obj, "INITIAL_SCALING_MODE", "NONE")
scale_input = InitialScalingInput(
image_data=current_image_data,
original_dimensions=current_dimensions,
initial_scaling_mode=scaling_mode,
resolution_key=None # Merged outputs are not "LOWRES" themselves before this scaling
)
setattr(scale_input, '_source_file_path', f"MergeTask_{item_key}") # For logging
log.debug(f"{item_log_prefix}: Calling InitialScalingStage for merged data.")
scaled_data_output = self._scaling_stage.execute(scale_input)
current_image_data = scaled_data_output.scaled_image_data
current_dimensions = scaled_data_output.final_dimensions
# Merged items don't have a specific output_resolution_key from source,
# they will be saved to all applicable resolutions from config.
# So scaled_data_output.resolution_key will be None here.
context.intermediate_results[item_key] = scaled_data_output
# 3. Save Merged Variants
if current_image_data is None or current_image_data.size == 0:
log.warning(f"{item_log_prefix}: Skipping save for merged task, image data is empty.")
context.processed_maps_details[item_key] = {"status": "Skipped", "notes": "No merged image data to save", "stage": "SaveVariantsStage"}
continue
output_filename_tokens = {
'asset_name': asset_name,
'output_base_directory': context.engine_temp_dir,
'supplier': context.effective_supplier or 'UnknownSupplier',
# 'resolution' token will be filled by image_saving_utils for each variant
}
# For merged tasks, we usually want to generate all standard resolutions.
# The `resolution_key` from the item itself is not applicable here for the `resolution` token.
# The `image_saving_utils.save_image_variants` will iterate through `context.config_obj.image_resolutions`.
save_input = SaveVariantsInput(
image_data=current_image_data,
internal_map_type=processed_data.output_map_type,
source_bit_depth_info=processed_data.source_bit_depths,
output_filename_pattern_tokens=output_filename_tokens,
image_resolutions=context.config_obj.image_resolutions, # Pass all configured resolutions
file_type_defs=getattr(context.config_obj, "FILE_TYPE_DEFINITIONS", {}),
output_format_8bit=context.config_obj.get_8bit_output_format(),
output_format_16bit_primary=context.config_obj.get_16bit_output_formats()[0],
output_format_16bit_fallback=context.config_obj.get_16bit_output_formats()[1],
png_compression_level=context.config_obj.png_compression_level,
jpg_quality=context.config_obj.jpg_quality,
output_filename_pattern=context.config_obj.output_filename_pattern,
resolution_threshold_for_jpg=getattr(context.config_obj, "resolution_threshold_for_jpg", None)
)
saved_data = self._save_stage.execute(save_input)
if saved_data and saved_data.status.startswith("Processed"):
item_status = saved_data.status
log.info(f"{item_log_prefix}: Merged task successfully processed and saved. Status: {item_status}")
context.processed_maps_details[item_key] = {
"status": item_status,
"saved_files_info": saved_data.saved_files_details,
"internal_map_type": processed_data.output_map_type,
"final_dimensions": current_dimensions,
}
else:
error_msg = saved_data.error_message if saved_data else "Save stage for merged task returned None"
log.error(f"{item_log_prefix}: Failed during save stage for merged task. Error: {error_msg}")
context.processed_maps_details[item_key] = {"status": "Failed", "notes": f"Save Error (Merged): {error_msg}", "stage": "SaveVariantsStage"}
asset_had_item_errors = True
item_status = "Failed"
else:
log.warning(f"{item_log_prefix}: Unknown item type in loop: {type(item)}. Skipping.")
# Ensure some key exists to prevent KeyError if item_key was not set
unknown_item_key = f"unknown_item_at_index_{item_index}"
context.processed_maps_details[unknown_item_key] = {"status": "Skipped", "notes": f"Unknown item type {type(item)}"}
asset_had_item_errors = True
continue
except Exception as e:
log.error(f"Asset '{asset_rule.asset_name}': Error during stage '{stage.__class__.__name__}': {e}", exc_info=True)
context.status_flags["asset_failed"] = True
context.asset_metadata["status"] = f"Failed: Error in stage {stage.__class__.__name__}"
context.asset_metadata["error_message"] = str(e)
break # Stop processing stages for this asset on error
log.exception(f"Asset '{asset_name}', Item Loop Index {item_index}: Unhandled exception: {e}")
# Ensure details are recorded even on unhandled exception
if item_key is not None:
context.processed_maps_details[item_key] = {"status": "Failed", "notes": f"Unhandled Loop Error: {e}", "stage": "OrchestratorLoop"}
else:
log.error(f"Asset '{asset_name}': Unhandled exception in item loop before item key was set.")
asset_had_item_errors = True
item_status = "Failed"
# Optionally break loop or continue? Continue for now to process other items.
if context.status_flags.get("skip_asset"):
log.info(f"Asset '{asset_rule.asset_name}': Skipped by stage '{stage.__class__.__name__}'. Reason: {context.status_flags.get('skip_reason', 'N/A')}")
break # Skip remaining stages for this asset
log.info("ORCHESTRATOR: Finished processing items loop for asset '%s'", asset_name)
log.info(f"Asset '{asset_name}': Finished core item processing loop.")
# --- Execute Post-Item-Processing Outer Stages ---
# (e.g., OutputOrganization, MetadataFinalizationSave)
# Post-item stages only run if the asset has not already failed.
if not context.status_flags.get("asset_failed"):
log.info("ORCHESTRATOR: Executing post-item-processing outer stages for asset '%s'", asset_name)
context = self._execute_specific_stages(context, self.post_item_stages, "post-item", stop_on_skip=False)
# --- Final Asset Status Determination ---
final_asset_status = "Unknown"
fail_reason = ""
if context.status_flags.get("asset_failed"):
final_asset_status = "Failed"
fail_reason = f"(Failed in {context.status_flags.get('asset_failed_stage', 'Unknown Stage')}: {context.status_flags.get('asset_failed_reason', 'Unknown Reason')})"
elif context.status_flags.get("skip_asset"):
final_asset_status = "Skipped"
fail_reason = f"(Skipped: {context.status_flags.get('skip_reason', 'Unknown Reason')})"
elif asset_had_item_errors:
final_asset_status = "Failed"
fail_reason = "(One or more items failed)"
elif not context.processing_items:
# No items prepared, no errors -> consider skipped or processed based on definition?
final_asset_status = "Skipped" # Or "Processed (No Items)"
fail_reason = "(No items to process)"
elif not context.processed_maps_details and context.processing_items:
# Items were prepared, but none resulted in processed_maps_details entry
final_asset_status = "Skipped" # Or Failed?
fail_reason = "(All processing items skipped or failed internally)"
elif context.processed_maps_details:
# Check if all items in processed_maps_details are actually processed successfully
all_processed_ok = all(
str(details.get("status", "")).startswith("Processed")
for details in context.processed_maps_details.values()
)
some_processed_ok = any(
str(details.get("status", "")).startswith("Processed")
for details in context.processed_maps_details.values()
)
if all_processed_ok:
final_asset_status = "Processed"
elif some_processed_ok:
# Partial success; treated as Failed for the overall status.
fail_reason = "(Some items failed)"
final_asset_status = "Failed"
else: # No items processed successfully
final_asset_status = "Failed"
fail_reason = "(All items failed)"
else:
# Should not happen if processing_items existed
final_asset_status = "Failed"
fail_reason = "(Unknown state after item processing)"
# Update overall status list
if final_asset_status == "Processed":
overall_status["processed"].append(asset_name)
elif final_asset_status == "Skipped":
overall_status["skipped"].append(f"{asset_name} {fail_reason}")
else: # Failed or Unknown
overall_status["failed"].append(f"{asset_name} {fail_reason}")
log.info(f"Asset '{asset_name}' final status: {final_asset_status} {fail_reason}")
# Clean up intermediate results for the asset to save memory
context.intermediate_results = {}
except Exception as e:
log.error(f"PipelineOrchestrator.process_source_rule failed: {e}", exc_info=True)
# Mark all remaining assets as failed if a top-level error occurs
processed_or_skipped_or_failed = set(overall_status["processed"] + overall_status["skipped"] + overall_status["failed"])
log.error(f"PipelineOrchestrator.process_source_rule failed critically: {e}", exc_info=True)
# Mark all assets from this source rule that weren't finished as failed
processed_or_skipped_or_failed = set(overall_status["processed"]) | \
set(name.split(" ")[0] for name in overall_status["skipped"]) | \
set(name.split(" ")[0] for name in overall_status["failed"])
for asset_rule in source_rule.assets:
if asset_rule.asset_name not in processed_or_skipped_or_failed:
overall_status["failed"].append(f"{asset_rule.asset_name} (Orchestrator Error)")
overall_status["failed"].append(f"{asset_rule.asset_name} (Orchestrator Error: {e})")
finally:
# --- Cleanup Temporary Directory ---
if engine_temp_dir_path and engine_temp_dir_path.exists():
try:
log.debug(f"PipelineOrchestrator cleaning up temporary directory: {engine_temp_dir_path}")

View File

@@ -18,7 +18,8 @@ class AlphaExtractionToMaskStage(ProcessingStage):
Extracts an alpha channel from a suitable source map (e.g., Albedo, Diffuse)
to generate a MASK map if one is not explicitly defined.
"""
SUITABLE_SOURCE_MAP_TYPES = ["ALBEDO", "DIFFUSE", "BASE_COLOR"] # Map types likely to have alpha
# Use MAP_ prefixed types for internal logic checks
SUITABLE_SOURCE_MAP_TYPES = ["MAP_COL", "MAP_ALBEDO", "MAP_BASECOLOR"] # Map types likely to have alpha
def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
@@ -38,7 +39,8 @@ class AlphaExtractionToMaskStage(ProcessingStage):
# A. Check for Existing MASK Map
for file_rule in context.files_to_process:
# Check for existing MASK map using the correct item_type field and MAP_ prefix
if file_rule.item_type == "MAP_MASK":
file_path_for_log = file_rule.file_path if hasattr(file_rule, 'file_path') else "Unknown file path"
logger.info(
f"Asset '{asset_name_for_log}': MASK map already defined by FileRule "
@@ -51,8 +53,10 @@ class AlphaExtractionToMaskStage(ProcessingStage):
source_file_rule_id_for_alpha: Optional[str] = None # This ID comes from processed_maps_details keys
for file_rule_id, details in context.processed_maps_details.items():
# Check for suitable source map using the standardized internal_map_type field
internal_map_type = details.get('internal_map_type') # Use the standardized field
if details.get('status') == 'Processed' and \
internal_map_type in self.SUITABLE_SOURCE_MAP_TYPES:
try:
temp_path = Path(details['temp_processed_file'])
if not temp_path.exists():
@@ -153,15 +157,16 @@ class AlphaExtractionToMaskStage(ProcessingStage):
context.processed_maps_details[new_mask_processed_map_key] = {
'map_type': "MASK",
'internal_map_type': "MAP_MASK", # Use the standardized MAP_ prefixed field
'map_type': "MASK", # Keep standard type for metadata/naming consistency if needed
'source_file': str(source_image_path),
'temp_processed_file': str(mask_temp_path),
'original_dimensions': original_dims,
'processed_dimensions': (alpha_channel.shape[1], alpha_channel.shape[0]),
'status': 'Processed',
'notes': (
f"Generated from alpha of {source_map_details_for_alpha['map_type']} "
f"(Source Detail ID: {source_file_rule_id_for_alpha})" # Changed from Source Rule ID
f"Generated from alpha of {source_map_details_for_alpha.get('internal_map_type', 'unknown type')} " # Use internal_map_type for notes
f"(Source Detail ID: {source_file_rule_id_for_alpha})"
),
# 'file_rule_id': new_mask_file_rule_id_str # FileRule doesn't have an ID to link here directly
}

View File

@@ -51,7 +51,8 @@ class GlossToRoughConversionStage(ProcessingStage):
# Iterate using the index (map_key_index) as the key, which is now standard.
for map_key_index, map_details in context.processed_maps_details.items():
# Use the standardized internal_map_type field
internal_map_type = map_details.get('internal_map_type', '')
map_status = map_details.get('status')
original_temp_path_str = map_details.get('temp_processed_file')
# source_file_rule_idx from details should align with map_key_index.
@@ -70,11 +71,12 @@ class GlossToRoughConversionStage(ProcessingStage):
processing_tag = f"mki_{map_key_index}_fallback_tag"
if not processing_map_type.startswith("MAP_GLOSS"):
# logger.debug(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index}: Type '{processing_map_type}' is not GLOSS. Skipping.")
# Check if the map is a GLOSS map using the standardized internal_map_type
if not internal_map_type.startswith("MAP_GLOSS"):
# logger.debug(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index}: Type '{internal_map_type}' is not GLOSS. Skipping.")
continue
logger.info(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index} (Tag: {processing_tag}): Identified potential GLOSS map (Type: {processing_map_type}).")
logger.info(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index} (Tag: {processing_tag}): Identified potential GLOSS map (Type: {internal_map_type}).")
if map_status not in successful_conversion_statuses:
logger.warning(
@@ -163,9 +165,9 @@ class GlossToRoughConversionStage(ProcessingStage):
# Update context.processed_maps_details for this map_key_index
map_details['temp_processed_file'] = str(new_temp_path)
map_details['original_map_type_before_conversion'] = internal_map_type # Store the original internal type
map_details['internal_map_type'] = "MAP_ROUGH" # Use the standardized MAP_ prefixed field
map_details['map_type'] = "Roughness" # Keep standard type for metadata/naming consistency if needed
map_details['status'] = "Converted_To_Rough"
map_details['notes'] = map_details.get('notes', '') + "; Converted from GLOSS by GlossToRoughConversionStage"
if 'base_pot_resolution_name' in map_details:

View File

@@ -1,700 +0,0 @@
import uuid
import dataclasses
import re
import os
import logging
from pathlib import Path
from typing import Optional, Tuple, Dict
import cv2
import numpy as np
from .base_stage import ProcessingStage
from ..asset_context import AssetProcessingContext
from rule_structure import FileRule
from utils.path_utils import sanitize_filename
from ...utils import image_processing_utils as ipu
logger = logging.getLogger(__name__)
class IndividualMapProcessingStage(ProcessingStage):
"""
Processes individual texture map files based on FileRules.
This stage finds the source file, loads it, applies transformations
(resize, color space), saves a temporary processed version, and updates
the AssetProcessingContext with details.
"""
def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
"""
Executes the individual map processing logic.
"""
asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
if context.status_flags.get('skip_asset', False):
logger.info(f"Asset '{asset_name_for_log}': Skipping individual map processing due to skip_asset flag.")
return context
if not hasattr(context, 'processed_maps_details') or context.processed_maps_details is None:
context.processed_maps_details = {}
logger.debug(f"Asset '{asset_name_for_log}': Initialized processed_maps_details.")
if not context.files_to_process:
logger.info(f"Asset '{asset_name_for_log}': No files to process in this stage.")
return context
# Source path for the asset group comes from SourceRule
if not context.source_rule or not context.source_rule.input_path:
logger.error(f"Asset '{asset_name_for_log}': SourceRule or SourceRule.input_path is not set. Cannot determine source base path.")
context.status_flags['individual_map_processing_failed'] = True
# Mark all file_rules as failed
for fr_idx, file_rule_to_fail in enumerate(context.files_to_process):
# Use fr_idx as the key for status update for these early failures
map_type_for_fail = file_rule_to_fail.item_type_override or file_rule_to_fail.item_type or "UnknownMapType"
self._update_file_rule_status(context, fr_idx, 'Failed', map_type=map_type_for_fail, details="SourceRule.input_path missing")
return context
# The workspace_path in the context should be the directory where files are extracted/available.
source_base_path = context.workspace_path
if not source_base_path.is_dir():
logger.error(f"Asset '{asset_name_for_log}': Workspace path '{source_base_path}' is not a valid directory.")
context.status_flags['individual_map_processing_failed'] = True
for fr_idx, file_rule_to_fail in enumerate(context.files_to_process):
# Use fr_idx as the key for status update
map_type_for_fail = file_rule_to_fail.item_type_override or file_rule_to_fail.item_type or "UnknownMapType"
self._update_file_rule_status(context, fr_idx, 'Failed', map_type=map_type_for_fail, details="Workspace path invalid")
return context
# Fetch config settings once before the loop
respect_variant_map_types = getattr(context.config_obj, "respect_variant_map_types", [])
image_resolutions = getattr(context.config_obj, "image_resolutions", {})
output_filename_pattern = getattr(context.config_obj, "output_filename_pattern", "[assetname]_[maptype]_[resolution].[ext]")
for file_rule_idx, file_rule in enumerate(context.files_to_process):
# file_rule_idx will be the key for processed_maps_details.
# processing_instance_tag is for unique temp files and detailed logging for this specific run.
processing_instance_tag = f"map_{file_rule_idx}_{uuid.uuid4().hex[:8]}"
current_map_key = file_rule_idx # Key for processed_maps_details
if not file_rule.file_path: # Ensure file_path exists, critical for later stages if they rely on it from FileRule
logger.error(f"Asset '{asset_name_for_log}', FileRule at index {file_rule_idx} has an empty or None file_path. Skipping this rule.")
self._update_file_rule_status(context, current_map_key, 'Failed',
processing_tag=processing_instance_tag,
details="FileRule has no file_path")
continue
initial_current_map_type = file_rule.item_type_override or file_rule.item_type or "UnknownMapType"
# --- START NEW SUFFIXING LOGIC ---
final_current_map_type = initial_current_map_type # Default to initial
# 1. Determine Base Map Type from initial_current_map_type
base_map_type_match = re.match(r"(MAP_[A-Z]{3})", initial_current_map_type)
if base_map_type_match and context.asset_rule:
true_base_map_type = base_map_type_match.group(1) # This is "MAP_XXX"
# 2. Count Occurrences and Find Index of current_file_rule in context.asset_rule.files
peers_of_same_base_type_in_asset_rule = []
for fr_asset in context.asset_rule.files:
fr_asset_item_type = fr_asset.item_type_override or fr_asset.item_type or "UnknownMapType"
fr_asset_base_map_type_match = re.match(r"(MAP_[A-Z]{3})", fr_asset_item_type)
if fr_asset_base_map_type_match:
fr_asset_base_map_type = fr_asset_base_map_type_match.group(1)
if fr_asset_base_map_type == true_base_map_type:
peers_of_same_base_type_in_asset_rule.append(fr_asset)
num_occurrences_of_base_type = len(peers_of_same_base_type_in_asset_rule)
current_instance_index = 0 # 1-based once found; 0 means "not found"
try:
current_instance_index = peers_of_same_base_type_in_asset_rule.index(file_rule) + 1
except ValueError:
logger.warning(
f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Initial Type: '{initial_current_map_type}', Base: '{true_base_map_type}'): "
f"Could not find its own instance in the list of peers from asset_rule.files. "
f"Number of peers found: {num_occurrences_of_base_type}. Suffixing may be affected."
)
# 3. Determine Suffix
map_type_for_respect_check = true_base_map_type.replace("MAP_", "") # e.g., "COL"
is_in_respect_list = map_type_for_respect_check in respect_variant_map_types
suffix_to_append = ""
if num_occurrences_of_base_type > 1:
if current_instance_index > 0:
suffix_to_append = f"-{current_instance_index}"
else:
logger.warning(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}': Index for multi-occurrence map type '{true_base_map_type}' (count: {num_occurrences_of_base_type}) not determined. Omitting numeric suffix.")
elif num_occurrences_of_base_type == 1 and is_in_respect_list:
suffix_to_append = "-1"
# 4. Form the final_current_map_type
if suffix_to_append:
final_current_map_type = true_base_map_type + suffix_to_append
else:
final_current_map_type = initial_current_map_type
current_map_type = final_current_map_type
# --- END NEW SUFFIXING LOGIC ---
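# Illustrative example (hypothetical rules): an AssetRule with two MAP_COL
# FileRules and one MAP_NRM yields types MAP_COL-1, MAP_COL-2 and MAP_NRM;
# with "COL" in respect_variant_map_types, a single MAP_COL would become MAP_COL-1.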
# --- START: Filename-friendly map type derivation ---
logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: --- Starting Filename-Friendly Map Type Logic for: {current_map_type} ---")
filename_friendly_map_type = current_map_type # Fallback
# 1. Access FILE_TYPE_DEFINITIONS
file_type_definitions = None
logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Attempting to access context.config_obj.FILE_TYPE_DEFINITIONS.")
try:
file_type_definitions = context.config_obj.FILE_TYPE_DEFINITIONS
if not file_type_definitions: # Check if it's None or empty
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: FILE_TYPE_DEFINITIONS is present but empty or None.")
else:
sample_defs_log = {k: file_type_definitions[k] for k in list(file_type_definitions.keys())[:2]} # Log first 2 for brevity
logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Accessed FILE_TYPE_DEFINITIONS. Sample: {sample_defs_log}, Total keys: {len(file_type_definitions)}.")
except AttributeError:
logger.error(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Could not access context.config_obj.FILE_TYPE_DEFINITIONS via direct attribute.")
base_map_key_val = None # Renamed from base_map_key to avoid conflict with current_map_key
suffix_part = ""
if file_type_definitions and isinstance(file_type_definitions, dict) and len(file_type_definitions) > 0:
sorted_known_base_keys = sorted(list(file_type_definitions.keys()), key=len, reverse=True)
logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Sorted known base keys for parsing: {sorted_known_base_keys}")
for known_key in sorted_known_base_keys:
logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Checking if '{current_map_type}' starts with '{known_key}'")
if current_map_type.startswith(known_key):
base_map_key_val = known_key
suffix_part = current_map_type[len(known_key):]
logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Match found! current_map_type: '{current_map_type}', base_map_key_val: '{base_map_key_val}', suffix_part: '{suffix_part}'")
break
if base_map_key_val is None:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Could not parse base_map_key_val from '{current_map_type}' using known keys. Fallback: filename_friendly_map_type = '{filename_friendly_map_type}'.")
else:
definition = file_type_definitions.get(base_map_key_val)
logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Definition for '{base_map_key_val}': {definition}")
if definition and isinstance(definition, dict):
standard_type_alias = definition.get("standard_type")
logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Standard type alias for '{base_map_key_val}': '{standard_type_alias}'")
if standard_type_alias and isinstance(standard_type_alias, str) and standard_type_alias.strip():
filename_friendly_map_type = standard_type_alias.strip() + suffix_part
logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Successfully transformed map type: '{current_map_type}' -> '{filename_friendly_map_type}' (standard_type_alias: '{standard_type_alias}', suffix_part: '{suffix_part}').")
else:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Standard type alias for '{base_map_key_val}' is missing, empty, or not a string (value: '{standard_type_alias}'). Using fallback. filename_friendly_map_type = '{filename_friendly_map_type}'.")
else:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: No definition or invalid definition for '{base_map_key_val}' (value: {definition}). Using fallback. filename_friendly_map_type = '{filename_friendly_map_type}'.")
elif file_type_definitions is None:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: FILE_TYPE_DEFINITIONS not available for lookup (was None). Using fallback. filename_friendly_map_type = '{filename_friendly_map_type}'.")
elif not isinstance(file_type_definitions, dict):
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: FILE_TYPE_DEFINITIONS is not a dictionary (type: {type(file_type_definitions)}). Using fallback. filename_friendly_map_type = '{filename_friendly_map_type}'.")
else:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: FILE_TYPE_DEFINITIONS is an empty dictionary. Using fallback. filename_friendly_map_type = '{filename_friendly_map_type}'.")
logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Final filename_friendly_map_type: '{filename_friendly_map_type}'")
# --- END: Filename-friendly map type derivation ---
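# Illustrative example (hypothetical definitions): with current_map_type
# "MAP_COL-2" and FILE_TYPE_DEFINITIONS["MAP_COL"]["standard_type"] == "Color",
# the result is filename_friendly_map_type "Color-2".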
if not current_map_type or not current_map_type.startswith("MAP_") or current_map_type == "MAP_GEN_COMPOSITE":
logger.debug(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}': Skipping, item_type '{current_map_type}' (initial: '{initial_current_map_type}') not targeted for individual processing.")
continue
logger.info(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Type: {current_map_type}, Initial Type: {initial_current_map_type}, Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Starting individual processing.")
# A. Find Source File (using file_rule.file_path as the pattern relative to source_base_path)
source_file_path = self._find_source_file(source_base_path, file_rule.file_path, asset_name_for_log, processing_instance_tag)
if not source_file_path:
logger.error(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Source file not found with path/pattern '{file_rule.file_path}' in '{source_base_path}'.")
self._update_file_rule_status(context, current_map_key, 'Failed',
map_type=filename_friendly_map_type,
processing_map_type=current_map_type,
source_file_rule_index=file_rule_idx,
processing_tag=processing_instance_tag,
details="Source file not found")
continue
# B. Load and Transform Image
image_data: Optional[np.ndarray] = ipu.load_image(str(source_file_path))
if image_data is None:
logger.error(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Failed to load image from '{source_file_path}'.")
self._update_file_rule_status(context, current_map_key, 'Failed',
map_type=filename_friendly_map_type,
processing_map_type=current_map_type,
source_file_rule_index=file_rule_idx,
processing_tag=processing_instance_tag,
source_file=str(source_file_path),
details="Image load failed")
continue
original_height, original_width = image_data.shape[:2]
logger.debug(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Loaded image '{source_file_path}' with dimensions {original_width}x{original_height}.")
# 1. Initial Power-of-Two (POT) Downscaling: make each dimension a power of two
# while preserving aspect ratio as closely as possible, only ever downscaling.
if original_width > 0 and original_height > 0:  # Avoid division by zero
aspect_ratio = original_width / original_height
# Two candidates are computed below (width-led and height-led); the one
# with the larger area (i.e. the least aggressive downscale) wins.
final_pot_width = ipu.get_nearest_power_of_two_downscale(original_width)
final_pot_height = ipu.get_nearest_power_of_two_downscale(original_height)
# If original aspect is not 1:1, one of the POT dimensions might need further adjustment to maintain aspect
# after the other dimension is set to its POT.
# We prioritize fitting within the *downscaled* POT dimensions.
# Scale to fit within final_pot_width, adjust height, then make height POT (downscale)
scaled_h_for_pot_w = max(1, round(final_pot_width / aspect_ratio))
h1 = ipu.get_nearest_power_of_two_downscale(scaled_h_for_pot_w)
w1 = final_pot_width
if h1 > final_pot_height: # If this adjustment made height too big, re-evaluate
h1 = final_pot_height
w1 = ipu.get_nearest_power_of_two_downscale(max(1, round(h1 * aspect_ratio)))
# Scale to fit within final_pot_height, adjust width, then make width POT (downscale)
scaled_w_for_pot_h = max(1, round(final_pot_height * aspect_ratio))
w2 = ipu.get_nearest_power_of_two_downscale(scaled_w_for_pot_h)
h2 = final_pot_height
if w2 > final_pot_width: # If this adjustment made width too big, re-evaluate
w2 = final_pot_width
h2 = ipu.get_nearest_power_of_two_downscale(max(1, round(w2 / aspect_ratio)))
# Choose the option that results in larger area (less aggressive downscaling)
# while ensuring both dimensions are POT and not upscaled from original.
if w1 * h1 >= w2 * h2:
base_pot_width, base_pot_height = w1, h1
else:
base_pot_width, base_pot_height = w2, h2
# Final check to ensure no upscaling from original dimensions
base_pot_width = min(base_pot_width, original_width)
base_pot_height = min(base_pot_height, original_height)
# And ensure they are POT
base_pot_width = ipu.get_nearest_power_of_two_downscale(base_pot_width)
base_pot_height = ipu.get_nearest_power_of_two_downscale(base_pot_height)
else: # Handle cases like 0-dim images, though load_image should prevent this
base_pot_width, base_pot_height = 1, 1
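# Worked example, assuming get_nearest_power_of_two_downscale returns the
# largest power of two <= its input: a 1500x1000 source gives final POT dims
# 1024x512; the width-led option yields w1,h1 = 1024x512 and the height-led
# option w2,h2 = 512x512, so the larger area wins and base POT is 1024x512.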
logger.info(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Original dims: ({original_width},{original_height}), Initial POT Scaled Dims: ({base_pot_width},{base_pot_height}).")
# Calculate and store aspect ratio change string
if original_width > 0 and original_height > 0 and base_pot_width > 0 and base_pot_height > 0:
aspect_change_str = ipu.normalize_aspect_ratio_change(
original_width, original_height,
base_pot_width, base_pot_height
)
if aspect_change_str:
# This will be overwritten if multiple maps are processed; this is per requirements.
context.asset_metadata['aspect_ratio_change_string'] = aspect_change_str
logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Map Type {current_map_type}: Calculated aspect ratio change string: '{aspect_change_str}' (Original: {original_width}x{original_height}, Base POT: {base_pot_width}x{base_pot_height}). Stored in asset_metadata.")
else:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Map Type {current_map_type}: Failed to calculate aspect ratio change string.")
else:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Map Type {current_map_type}: Skipping aspect ratio change string calculation due to invalid dimensions (Original: {original_width}x{original_height}, Base POT: {base_pot_width}x{base_pot_height}).")
base_pot_image_data = image_data.copy()
if (base_pot_width, base_pot_height) != (original_width, original_height):
interpolation = cv2.INTER_AREA # Good for downscaling
base_pot_image_data = ipu.resize_image(base_pot_image_data, base_pot_width, base_pot_height, interpolation=interpolation)
if base_pot_image_data is None:
logger.error(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Failed to resize image to base POT dimensions.")
self._update_file_rule_status(context, current_map_key, 'Failed',
map_type=filename_friendly_map_type,
processing_map_type=current_map_type,
source_file_rule_index=file_rule_idx,
processing_tag=processing_instance_tag,
source_file=str(source_file_path),
original_dimensions=(original_width, original_height),
details="Base POT resize failed")
continue
# Color Profile Management (after initial POT resize, before multi-res saving)
# Initialize transform settings with defaults for color management
transform_settings = {
"color_profile_management": False, # Default, can be overridden by FileRule
"target_color_profile": "sRGB", # Default
"output_format_settings": None # For JPG quality, PNG compression
}
if file_rule.channel_merge_instructions and 'transform' in file_rule.channel_merge_instructions:
custom_transform_settings = file_rule.channel_merge_instructions['transform']
if isinstance(custom_transform_settings, dict):
transform_settings.update(custom_transform_settings)
logger.info(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Loaded transform settings for color/output from file_rule.")
if transform_settings['color_profile_management'] and transform_settings['target_color_profile'] == "RGB":
if len(base_pot_image_data.shape) == 3 and base_pot_image_data.shape[2] == 3: # BGR to RGB
logger.info(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Converting BGR to RGB for base POT image.")
base_pot_image_data = ipu.convert_bgr_to_rgb(base_pot_image_data)
elif len(base_pot_image_data.shape) == 3 and base_pot_image_data.shape[2] == 4: # BGRA to RGBA
logger.info(f"Asset '{asset_name_for_log}', FileRule path '{file_rule.file_path}' (Key: {current_map_key}, Proc. Tag: {processing_instance_tag}): Converting BGRA to RGBA for base POT image.")
base_pot_image_data = ipu.convert_bgra_to_rgba(base_pot_image_data)
# Ensure engine_temp_dir exists before saving base POT
if not context.engine_temp_dir.exists():
try:
context.engine_temp_dir.mkdir(parents=True, exist_ok=True)
logger.info(f"Asset '{asset_name_for_log}': Created engine_temp_dir at '{context.engine_temp_dir}'")
except OSError as e:
logger.error(f"Asset '{asset_name_for_log}': Failed to create engine_temp_dir '{context.engine_temp_dir}': {e}")
self._update_file_rule_status(context, current_map_key, 'Failed',
map_type=filename_friendly_map_type,
processing_map_type=current_map_type,
source_file_rule_index=file_rule_idx,
processing_tag=processing_instance_tag,
source_file=str(source_file_path),
details="Failed to create temp directory for base POT")
continue
temp_filename_suffix = Path(source_file_path).suffix
base_pot_temp_filename = f"{processing_instance_tag}_basePOT{temp_filename_suffix}" # Use processing_instance_tag
base_pot_temp_path = context.engine_temp_dir / base_pot_temp_filename
# Determine save parameters for base POT image (can be different from variants if needed)
base_save_params = []
base_output_ext = temp_filename_suffix.lstrip('.') # Default to original, can be overridden by format rules
# TODO: Add logic here to determine base_output_ext and base_save_params based on bit depth and config, similar to variants.
# For now, using simple save.
if not ipu.save_image(str(base_pot_temp_path), base_pot_image_data, params=base_save_params):
logger.error(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Failed to save base POT image to '{base_pot_temp_path}'.")
self._update_file_rule_status(context, current_map_key, 'Failed',
map_type=filename_friendly_map_type,
processing_map_type=current_map_type,
source_file_rule_index=file_rule_idx,
processing_tag=processing_instance_tag,
source_file=str(source_file_path),
original_dimensions=(original_width, original_height),
base_pot_dimensions=(base_pot_width, base_pot_height),
details="Base POT image save failed")
continue
logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Successfully saved base POT image to '{base_pot_temp_path}' with dims ({base_pot_width}x{base_pot_height}).")
# Initialize/update the status for this map in processed_maps_details
self._update_file_rule_status(
context,
current_map_key, # Use file_rule_idx as key
'BasePOTSaved', # Intermediate status, will be updated after variant check
map_type=filename_friendly_map_type,
processing_map_type=current_map_type,
source_file_rule_index=file_rule_idx,
processing_tag=processing_instance_tag, # Store the tag
source_file=str(source_file_path),
original_dimensions=(original_width, original_height),
base_pot_dimensions=(base_pot_width, base_pot_height),
temp_processed_file=str(base_pot_temp_path) # Store path to the saved base POT
)
# 2. Multiple Resolution Output (Variants)
processed_at_least_one_resolution_variant = False
# Resolution variants are attempted for all map types individually processed.
# The filter at the beginning of the loop ensures only relevant maps reach this stage.
generate_variants_for_this_map_type = True
if generate_variants_for_this_map_type: # This will now always be true if code execution reaches here
logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Map type '{current_map_type}' is eligible for individual processing. Attempting to generate resolution variants.")
# Sort resolutions from largest to smallest
sorted_resolutions = sorted(image_resolutions.items(), key=lambda item: item[1], reverse=True)
logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Sorted resolutions for variant processing: {sorted_resolutions}")
for res_key, res_max_dim in sorted_resolutions:
current_w, current_h = base_pot_image_data.shape[1], base_pot_image_data.shape[0]
if current_w <= 0 or current_h <= 0:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Res {res_key}: Base POT image has zero dimension ({current_w}x{current_h}). Skipping this resolution variant.")
continue
if max(current_w, current_h) >= res_max_dim:
target_w_res, target_h_res = current_w, current_h
if max(current_w, current_h) > res_max_dim:
if current_w >= current_h:
target_w_res = res_max_dim
target_h_res = max(1, round(target_w_res / (current_w / current_h)))
else:
target_h_res = res_max_dim
target_w_res = max(1, round(target_h_res * (current_w / current_h)))
else:
logger.debug(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Res {res_key}: Base POT image ({current_w}x{current_h}) is smaller than target max dim {res_max_dim}. Skipping this resolution variant.")
continue
target_w_res = min(target_w_res, current_w)
target_h_res = min(target_h_res, current_h)
if target_w_res <= 0 or target_h_res <= 0:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Res {res_key}: Calculated target variant dims are zero or negative ({target_w_res}x{target_h_res}). Skipping.")
continue
logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Res {res_key}: Processing variant for {res_max_dim}. Base POT Dims: ({current_w}x{current_h}), Target Dims for {res_key}: ({target_w_res}x{target_h_res}).")
output_image_data_for_res = base_pot_image_data
if (target_w_res, target_h_res) != (current_w, current_h):
interpolation_res = cv2.INTER_AREA
output_image_data_for_res = ipu.resize_image(base_pot_image_data, target_w_res, target_h_res, interpolation=interpolation_res)
if output_image_data_for_res is None:
logger.error(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Res {res_key}: Failed to resize image for resolution variant {res_key}.")
continue
assetname_placeholder = context.asset_rule.asset_name if context.asset_rule else "UnknownAsset"
resolution_placeholder = res_key
# TODO: Implement proper output format/extension determination for variants
output_ext_variant = temp_filename_suffix.lstrip('.')
temp_output_filename_variant = output_filename_pattern.replace("[assetname]", sanitize_filename(assetname_placeholder)) \
.replace("[maptype]", sanitize_filename(filename_friendly_map_type)) \
.replace("[resolution]", sanitize_filename(resolution_placeholder)) \
.replace("[ext]", output_ext_variant)
temp_output_filename_variant = f"{processing_instance_tag}_variant_{temp_output_filename_variant}" # Use processing_instance_tag
temp_output_path_variant = context.engine_temp_dir / temp_output_filename_variant
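# Illustrative result, assuming a pattern like '[assetname]_[maptype]_[resolution].[ext]':
#   'tag0_variant_Wood_NRM_1K.png' written into the engine temp dir.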
save_params_variant = []
if transform_settings.get('output_format_settings'):
if output_ext_variant.lower() in ['jpg', 'jpeg']:
quality = transform_settings['output_format_settings'].get('quality', context.config_obj.get("JPG_QUALITY", 95))
save_params_variant = [cv2.IMWRITE_JPEG_QUALITY, quality]
elif output_ext_variant.lower() == 'png':
compression = transform_settings['output_format_settings'].get('compression_level', context.config_obj.get("PNG_COMPRESSION_LEVEL", 6))
save_params_variant = [cv2.IMWRITE_PNG_COMPRESSION, compression]
save_success_variant = ipu.save_image(str(temp_output_path_variant), output_image_data_for_res, params=save_params_variant)
if not save_success_variant:
logger.error(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Res {res_key}: Failed to save temporary variant image to '{temp_output_path_variant}'.")
continue
logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Res {res_key}: Successfully saved temporary variant map to '{temp_output_path_variant}' with dims ({target_w_res}x{target_h_res}).")
processed_at_least_one_resolution_variant = True
if 'variants' not in context.processed_maps_details[current_map_key]: # Use current_map_key (file_rule_idx)
context.processed_maps_details[current_map_key]['variants'] = []
context.processed_maps_details[current_map_key]['variants'].append({ # Use current_map_key (file_rule_idx)
'resolution_key': res_key,
'temp_path': str(temp_output_path_variant),
'dimensions': (target_w_res, target_h_res),
'resolution_name': f"{target_w_res}x{target_h_res}"
})
if 'processed_files' not in context.asset_metadata:
context.asset_metadata['processed_files'] = []
context.asset_metadata['processed_files'].append({
'processed_map_key': current_map_key, # Use current_map_key (file_rule_idx)
'resolution_key': res_key,
'path': str(temp_output_path_variant),
'type': 'temporary_map_variant',
'map_type': current_map_type,
'dimensions_w': target_w_res,
'dimensions_h': target_h_res
})
# Calculate and store image statistics for the lowest resolution output
lowest_res_image_data_for_stats = None
image_to_stat_path_for_log = "N/A"
source_of_stats_image = "unknown"
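# Stats source preference: the smallest generated variant (fewest pixels),
# falling back to the in-memory base POT image when no variant is usable.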
if processed_at_least_one_resolution_variant and \
current_map_key in context.processed_maps_details and \
'variants' in context.processed_maps_details[current_map_key] and \
context.processed_maps_details[current_map_key]['variants']:
variants_list = context.processed_maps_details[current_map_key]['variants']
valid_variants_for_stats = [
v for v in variants_list
if isinstance(v.get('dimensions'), tuple) and len(v['dimensions']) == 2 and v['dimensions'][0] > 0 and v['dimensions'][1] > 0
]
if valid_variants_for_stats:
smallest_variant = min(valid_variants_for_stats, key=lambda v: v['dimensions'][0] * v['dimensions'][1])
if smallest_variant and 'temp_path' in smallest_variant and smallest_variant.get('dimensions'):
smallest_res_w, smallest_res_h = smallest_variant['dimensions']
logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Identified smallest variant for stats: {smallest_variant.get('resolution_key', 'N/A')} ({smallest_res_w}x{smallest_res_h}) at {smallest_variant['temp_path']}")
lowest_res_image_data_for_stats = ipu.load_image(smallest_variant['temp_path'])
image_to_stat_path_for_log = smallest_variant['temp_path']
source_of_stats_image = f"variant {smallest_variant.get('resolution_key', 'N/A')}"
if lowest_res_image_data_for_stats is None:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Failed to load smallest variant image '{smallest_variant['temp_path']}' for stats.")
else:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Could not determine smallest variant for stats from valid variants list (details missing).")
else:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: No valid variants found to determine the smallest one for stats.")
if lowest_res_image_data_for_stats is None:
if base_pot_image_data is not None:
logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Using base POT image for stats (dimensions: {base_pot_width}x{base_pot_height}). Smallest variant not available/loaded or no variants generated.")
lowest_res_image_data_for_stats = base_pot_image_data
image_to_stat_path_for_log = f"In-memory base POT image (dims: {base_pot_width}x{base_pot_height})"
source_of_stats_image = "base POT"
else:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Base POT image data is also None. Cannot calculate stats.")
if lowest_res_image_data_for_stats is not None:
stats_dict = ipu.calculate_image_stats(lowest_res_image_data_for_stats)
if stats_dict and "error" not in stats_dict:
if 'image_stats_lowest_res' not in context.asset_metadata:
context.asset_metadata['image_stats_lowest_res'] = {}
context.asset_metadata['image_stats_lowest_res'][current_map_type] = stats_dict # Keyed by map_type
logger.info(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Map Type '{current_map_type}': Calculated and stored image stats from '{source_of_stats_image}' (source ref: '{image_to_stat_path_for_log}').")
elif stats_dict and "error" in stats_dict:
logger.error(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Map Type '{current_map_type}': Error calculating image stats from '{source_of_stats_image}': {stats_dict['error']}.")
else:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Map Type '{current_map_type}': Failed to calculate image stats from '{source_of_stats_image}' (result was None or empty).")
else:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}, Map Type '{current_map_type}': No image data available (from variant or base POT) to calculate stats.")
# Final status update based on whether variants were generated (and expected)
if generate_variants_for_this_map_type:
if processed_at_least_one_resolution_variant:
self._update_file_rule_status(context, current_map_key, 'Processed_With_Variants',
map_type=filename_friendly_map_type,
processing_map_type=current_map_type,
source_file_rule_index=file_rule_idx,
processing_tag=processing_instance_tag,
details="Successfully processed with multiple resolution variants.")
else:
logger.warning(f"Asset '{asset_name_for_log}', Map Key {current_map_key}, Proc. Tag {processing_instance_tag}: Variants were expected for map type '{current_map_type}', but none were generated (e.g., base POT too small for any variant tier).")
self._update_file_rule_status(context, current_map_key, 'Processed_No_Variants',
map_type=filename_friendly_map_type,
processing_map_type=current_map_type,
source_file_rule_index=file_rule_idx,
processing_tag=processing_instance_tag,
details="Variants expected but none generated (e.g., base POT too small).")
else: # No variants were expected for this map type
self._update_file_rule_status(context, current_map_key, 'Processed_No_Variants',
map_type=filename_friendly_map_type,
processing_map_type=current_map_type,
source_file_rule_index=file_rule_idx,
processing_tag=processing_instance_tag,
details="Processed to base POT; variants not applicable for this map type.")
logger.info(f"Asset '{asset_name_for_log}': Finished individual map processing stage.")
return context
def _find_source_file(self, base_path: Path, pattern: str, asset_name_for_log: str, processing_instance_tag: str) -> Optional[Path]:
"""
Finds a single source file matching the pattern within the base_path.
Logs use processing_instance_tag for specific run tracing.
"""
if not pattern:
logger.warning(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: Empty file_path provided in FileRule.")
return None
# If pattern is an absolute path, use it directly
potential_abs_path = Path(pattern)
if potential_abs_path.is_absolute() and potential_abs_path.exists():
logger.debug(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: file_path '{pattern}' is absolute and exists. Using it directly.")
return potential_abs_path
elif potential_abs_path.is_absolute():
logger.warning(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: file_path '{pattern}' is absolute but does not exist.")
# Fall through to try resolving against base_path if it's just a name/relative pattern
# Treat pattern as relative to base_path
# This could be an exact name or a glob pattern
try:
# First, check if pattern is an exact relative path
exact_match_path = base_path / pattern
if exact_match_path.exists() and exact_match_path.is_file():
logger.debug(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: Found exact match for '{pattern}' at '{exact_match_path}'.")
return exact_match_path
# If not an exact match, try as a glob pattern (recursive)
matched_files_rglob = list(base_path.rglob(pattern))
if matched_files_rglob:
if len(matched_files_rglob) > 1:
logger.warning(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: Multiple files ({len(matched_files_rglob)}) found for pattern '{pattern}' in '{base_path}' (recursive). Using first: {matched_files_rglob[0]}. Files: {matched_files_rglob}")
return matched_files_rglob[0]
# Fall back to a non-recursive glob (largely a safety net; rglob matches are normally a superset of non-recursive glob matches)
matched_files_glob = list(base_path.glob(pattern))
if matched_files_glob:
if len(matched_files_glob) > 1:
logger.warning(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: Multiple files ({len(matched_files_glob)}) found for pattern '{pattern}' in '{base_path}' (non-recursive). Using first: {matched_files_glob[0]}. Files: {matched_files_glob}")
return matched_files_glob[0]
logger.debug(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: No files found matching pattern '{pattern}' in '{base_path}' (exact, recursive, or non-recursive).")
return None
except Exception as e:
logger.error(f"Asset '{asset_name_for_log}', Proc. Tag {processing_instance_tag}: Error searching for file with pattern '{pattern}' in '{base_path}': {e}")
return None
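# Illustrative resolution order (hypothetical files): an absolute, existing
# file_path wins outright; otherwise an exact relative match under base_path;
# otherwise the first recursive rglob match (e.g. 'textures/wood_NRM.png');
# otherwise the first non-recursive glob match in base_path itself.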
def _update_file_rule_status(self, context: AssetProcessingContext, map_key_index: int, status: str, **kwargs): # Renamed map_id_hex to map_key_index
"""Helper to update processed_maps_details for a map, keyed by file_rule_idx."""
asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
if map_key_index not in context.processed_maps_details:
context.processed_maps_details[map_key_index] = {}
context.processed_maps_details[map_key_index]['status'] = status
for key, value in kwargs.items():
# Ensure source_file_rule_id_hex is not added if it was somehow passed (it shouldn't be)
if key == 'source_file_rule_id_hex':
continue
context.processed_maps_details[map_key_index][key] = value
if 'map_type' not in context.processed_maps_details[map_key_index] and 'map_type' in kwargs:
context.processed_maps_details[map_key_index]['map_type'] = kwargs['map_type']
# Add formatted resolution names
if 'original_dimensions' in kwargs and isinstance(kwargs['original_dimensions'], tuple) and len(kwargs['original_dimensions']) == 2:
orig_w, orig_h = kwargs['original_dimensions']
context.processed_maps_details[map_key_index]['original_resolution_name'] = f"{orig_w}x{orig_h}"
# Determine the correct dimensions to use for 'processed_resolution_name'
# This name refers to the base POT-scaled image dimensions before variant generation.
dims_to_log_as_base_processed = None
if 'base_pot_dimensions' in kwargs and isinstance(kwargs['base_pot_dimensions'], tuple) and len(kwargs['base_pot_dimensions']) == 2:
# This key is used when status is 'Processed_With_Variants'
dims_to_log_as_base_processed = kwargs['base_pot_dimensions']
elif 'processed_dimensions' in kwargs and isinstance(kwargs['processed_dimensions'], tuple) and len(kwargs['processed_dimensions']) == 2:
# This key is used when status is 'Processed_No_Variants' (and potentially others)
dims_to_log_as_base_processed = kwargs['processed_dimensions']
if dims_to_log_as_base_processed:
proc_w, proc_h = dims_to_log_as_base_processed
resolution_name_str = f"{proc_w}x{proc_h}"
context.processed_maps_details[map_key_index]['base_pot_resolution_name'] = resolution_name_str
# Ensure 'processed_resolution_name' is also set for OutputOrganizationStage compatibility
context.processed_maps_details[map_key_index]['processed_resolution_name'] = resolution_name_str
elif 'processed_dimensions' in kwargs or 'base_pot_dimensions' in kwargs:
details_for_warning = kwargs.get('processed_dimensions', kwargs.get('base_pot_dimensions'))
logger.warning(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index}: 'processed_dimensions' or 'base_pot_dimensions' key present but its value is not a valid 2-element tuple: {details_for_warning}")
# If temp_processed_file was passed, ensure it's in the details
if 'temp_processed_file' in kwargs:
context.processed_maps_details[map_key_index]['temp_processed_file'] = kwargs['temp_processed_file']
# Log all details being stored for clarity, including the newly added resolution names
log_details = context.processed_maps_details[map_key_index].copy()
# Avoid logging full image data if it accidentally gets into kwargs
if 'image_data' in log_details: del log_details['image_data']
if 'base_pot_image_data' in log_details: del log_details['base_pot_image_data']
logger.debug(f"Asset '{asset_name_for_log}', Map Key Index {map_key_index}: Status updated to '{status}'. Details: {log_details}")

View File

@@ -0,0 +1,99 @@
import logging
from typing import Tuple, Optional # Added Optional
import cv2 # Assuming cv2 is available for interpolation flags
import numpy as np
from .base_stage import ProcessingStage
# Import necessary context classes and utils
from ..asset_context import InitialScalingInput, InitialScalingOutput
# ProcessingItem is no longer created here, so its import can be removed if not used otherwise.
# For now, keep rule_structure import if other elements from it might be needed,
# but ProcessingItem itself is not directly instantiated by this stage anymore.
# from rule_structure import ProcessingItem
from ...utils import image_processing_utils as ipu
log = logging.getLogger(__name__)
class InitialScalingStage(ProcessingStage):
"""
Applies initial Power-of-Two (POT) downscaling to image data if configured
and if the item is not already a 'LOWRES' variant.
"""
def execute(self, input_data: InitialScalingInput) -> InitialScalingOutput:
"""
Applies POT scaling based on input_data.initial_scaling_mode,
unless input_data.resolution_key is 'LOWRES'.
Passes through the resolution_key.
"""
# Safely access source_file_path for logging, if provided by orchestrator via underscore attribute
source_file_path = getattr(input_data, '_source_file_path', "UnknownSourcePath")
log_prefix = f"InitialScalingStage (Source: {source_file_path}, ResKey: {input_data.resolution_key})"
log.debug(f"{log_prefix}: Mode '{input_data.initial_scaling_mode}'. Received resolution_key: '{input_data.resolution_key}'")
image_to_scale = input_data.image_data
current_dimensions_wh = input_data.original_dimensions # Dimensions of the image_to_scale
scaling_mode = input_data.initial_scaling_mode
output_resolution_key = input_data.resolution_key # Pass through the resolution key
if image_to_scale is None or image_to_scale.size == 0:
log.warning(f"{log_prefix}: Input image data is None or empty. Skipping POT scaling.")
return InitialScalingOutput(
scaled_image_data=np.array([]),
scaling_applied=False,
final_dimensions=(0, 0),
resolution_key=output_resolution_key
)
if not current_dimensions_wh:
log.warning(f"{log_prefix}: Original dimensions not provided for POT scaling. Using current image shape.")
h_pre_pot_scale, w_pre_pot_scale = image_to_scale.shape[:2]
else:
w_pre_pot_scale, h_pre_pot_scale = current_dimensions_wh
final_image_data = image_to_scale # Default to original if no scaling happens
scaling_applied = False
# Skip POT scaling if the item is already a LOWRES variant or scaling mode is NONE
if output_resolution_key == "LOWRES":
log.info(f"{log_prefix}: Item is a 'LOWRES' variant. Skipping POT downscaling.")
elif scaling_mode == "NONE":
log.info(f"{log_prefix}: Mode is NONE. No POT scaling applied.")
elif scaling_mode == "POT_DOWNSCALE":
pot_w = ipu.get_nearest_power_of_two_downscale(w_pre_pot_scale)
pot_h = ipu.get_nearest_power_of_two_downscale(h_pre_pot_scale)
if (pot_w, pot_h) != (w_pre_pot_scale, h_pre_pot_scale):
log.info(f"{log_prefix}: Applying POT Downscale from ({w_pre_pot_scale},{h_pre_pot_scale}) to ({pot_w},{pot_h}).")
resized_img = ipu.resize_image(image_to_scale, pot_w, pot_h, interpolation=cv2.INTER_AREA)
if resized_img is not None:
final_image_data = resized_img
scaling_applied = True
log.debug(f"{log_prefix}: POT Downscale applied successfully.")
else:
log.warning(f"{log_prefix}: POT Downscale resize failed. Using pre-POT-scaled data.")
else:
log.info(f"{log_prefix}: Image already POT or smaller. No POT scaling needed.")
else:
log.warning(f"{log_prefix}: Unknown INITIAL_SCALING_MODE '{scaling_mode}'. Defaulting to NONE (no scaling).")
# Determine final dimensions
if final_image_data is not None and final_image_data.size > 0:
final_h, final_w = final_image_data.shape[:2]
final_dims_wh = (final_w, final_h)
else:
final_dims_wh = (0, 0)
if final_image_data is None: # Ensure it's an empty array for consistency if None
final_image_data = np.array([])
return InitialScalingOutput(
scaled_image_data=final_image_data,
scaling_applied=scaling_applied,
final_dimensions=final_dims_wh,
resolution_key=output_resolution_key # Pass through the resolution key
)
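For reference, a minimal sketch of what ipu.get_nearest_power_of_two_downscale is assumed to compute here: the largest power of two not exceeding the given dimension (the real helper may differ in edge-case handling):

import math

def get_nearest_power_of_two_downscale(dim: int) -> int:
    # Largest power of two <= dim; degenerate inputs clamp to 1.
    if dim <= 1:
        return 1
    return 2 ** int(math.log2(dim))

assert get_nearest_power_of_two_downscale(1000) == 512
assert get_nearest_power_of_two_downscale(1024) == 1024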

View File

@@ -1,347 +0,0 @@
import logging
from pathlib import Path
from typing import Dict, Optional, List, Tuple
import numpy as np
import cv2 # For potential direct cv2 operations if ipu doesn't cover all merge needs
from .base_stage import ProcessingStage
from ..asset_context import AssetProcessingContext
from rule_structure import FileRule
from utils.path_utils import sanitize_filename
from ...utils import image_processing_utils as ipu
logger = logging.getLogger(__name__)
class MapMergingStage(ProcessingStage):
"""
Merges individually processed maps based on MAP_MERGE rules.
This stage performs operations like channel packing.
"""
def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
"""
Executes the map merging logic.
Args:
context: The asset processing context.
Returns:
The updated asset processing context.
"""
asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
if context.status_flags.get('skip_asset'):
logger.info(f"Skipping map merging for asset {asset_name_for_log} as skip_asset flag is set.")
return context
if not hasattr(context, 'merged_maps_details'):
context.merged_maps_details = {}
if not hasattr(context, 'processed_maps_details'):
logger.warning(f"Asset {asset_name_for_log}: 'processed_maps_details' not found in context. Cannot perform map merging.")
return context
if not context.files_to_process: # This list might not be relevant if merge rules are defined elsewhere or implicitly
logger.info(f"Asset {asset_name_for_log}: No files_to_process defined. This stage might rely on config or processed_maps_details directly for merge rules.")
# Depending on design, this might not be an error, so we don't return yet.
logger.info(f"Starting MapMergingStage for asset: {asset_name_for_log}")
# TODO: The logic for identifying merge rules and their inputs needs significant rework
# as FileRule no longer has 'id' or 'merge_settings' directly in the way this stage expects.
# Merge rules are likely defined in the main configuration (context.config_obj.map_merge_rules)
# and need to be matched against available maps in context.processed_maps_details.
# Placeholder for the loop that would iterate over context.config_obj.map_merge_rules
# For now, this stage will effectively do nothing until that logic is implemented.
# Example of how one might start to adapt:
# for configured_merge_rule in context.config_obj.map_merge_rules:
# output_map_type = configured_merge_rule.get('output_map_type')
# inputs_config = configured_merge_rule.get('inputs') # e.g. {"R": "NORMAL", "G": "ROUGHNESS"}
# # ... then find these input map_types in context.processed_maps_details ...
# # ... and perform the merge ...
# # This is a complex change beyond simple attribute renaming.
# The following is the original loop structure, which will likely fail due to missing attributes on FileRule.
# Keeping it commented out to show what was there.
"""
for merge_rule in context.files_to_process: # This iteration logic is likely incorrect for merge rules
if not isinstance(merge_rule, FileRule) or merge_rule.item_type != "MAP_MERGE":
continue
# FileRule does not have merge_settings or id.hex
# This entire block needs to be re-thought based on where merge rules are defined.
# Assuming merge_rule_id_hex would be a generated UUID for this operation.
merge_rule_id_hex = f"merge_op_{uuid.uuid4().hex[:8]}"
current_map_type = merge_rule.item_type_override or merge_rule.item_type
logger.error(f"Asset {asset_name_for_log}, Potential Merge for {current_map_type}: Merge rule processing needs rework. FileRule lacks 'merge_settings' and 'id'. Skipping this rule.")
context.merged_maps_details[merge_rule_id_hex] = {
'map_type': current_map_type,
'status': 'Failed',
'reason': 'Merge rule processing logic in MapMergingStage needs refactor due to FileRule changes.'
}
continue
"""
# For now, let's assume no merge rules are processed until the logic is fixed.
num_merge_rules_attempted = 0
# If context.config_obj.map_merge_rules exists, iterate it here.
# The original code iterated context.files_to_process looking for item_type "MAP_MERGE".
# This implies FileRule objects were being used to define merge operations, which is no longer the case
# if 'merge_settings' and 'id' were removed from FileRule.
# The core merge rules are in context.config_obj.map_merge_rules
# Each rule in there defines an output_map_type and its inputs.
config_merge_rules = context.config_obj.map_merge_rules
if not config_merge_rules:
logger.info(f"Asset {asset_name_for_log}: No map_merge_rules found in configuration. Skipping map merging.")
return context
for rule_idx, configured_merge_rule in enumerate(config_merge_rules):
output_map_type = configured_merge_rule.get('output_map_type')
inputs_map_type_to_channel = configured_merge_rule.get('inputs') # e.g. {"R": "NRM", "G": "NRM", "B": "ROUGH"}
default_values = configured_merge_rule.get('defaults', {}) # e.g. {"R": 0.5, "G": 0.5, "B": 0.5}
# output_bit_depth_rule = configured_merge_rule.get('output_bit_depth', 'respect_inputs') # Not used yet
if not output_map_type or not inputs_map_type_to_channel:
logger.warning(f"Asset {asset_name_for_log}: Invalid configured_merge_rule at index {rule_idx}. Missing 'output_map_type' or 'inputs'. Rule: {configured_merge_rule}")
continue
num_merge_rules_attempted += 1
merge_op_id = f"merge_{sanitize_filename(output_map_type)}_{rule_idx}"
logger.info(f"Asset {asset_name_for_log}: Processing configured merge rule for '{output_map_type}' (Op ID: {merge_op_id})")
loaded_input_maps: Dict[str, np.ndarray] = {} # Key: input_map_type (e.g. "NRM"), Value: image_data
input_map_paths: Dict[str, str] = {} # Key: input_map_type, Value: path_str
target_dims: Optional[Tuple[int, int]] = None
all_inputs_valid = True
# Find and load input maps from processed_maps_details
# This assumes one processed map per map_type. If multiple variants exist, this needs refinement.
required_input_map_types = set(inputs_map_type_to_channel.values())
for required_map_type in required_input_map_types:
found_processed_map_details = None
# The key `p_key_idx` is the file_rule_idx from the IndividualMapProcessingStage
for p_key_idx, p_details in context.processed_maps_details.items(): # p_key_idx is an int
processed_map_identifier = p_details.get('processing_map_type', p_details.get('map_type'))
# Comprehensive list of valid statuses for an input map to be used in merging
valid_input_statuses = ['BasePOTSaved', 'Processed_With_Variants', 'Processed_No_Variants', 'Converted_To_Rough']
is_match = False
if processed_map_identifier == required_map_type:
is_match = True
elif required_map_type.startswith("MAP_") and processed_map_identifier == required_map_type.split("MAP_")[-1]:
is_match = True
elif not required_map_type.startswith("MAP_") and processed_map_identifier == f"MAP_{required_map_type}":
is_match = True
if is_match and p_details.get('status') in valid_input_statuses:
found_processed_map_details = p_details
# The key `p_key_idx` (which is the FileRule index) is implicitly associated with these details.
break
if not found_processed_map_details:
can_be_fully_defaulted = True
channels_requiring_this_map = [
ch_key for ch_key, map_type_val in inputs_map_type_to_channel.items()
if map_type_val == required_map_type
]
if not channels_requiring_this_map:
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Internal logic error. Required map_type '{required_map_type}' is not actually used by any output channel. Configuration: {inputs_map_type_to_channel}")
all_inputs_valid = False
context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f"Internal error: required map_type '{required_map_type}' not in use."}
break
for channel_char_needing_default in channels_requiring_this_map:
if default_values.get(channel_char_needing_default) is None:
can_be_fully_defaulted = False
break
if can_be_fully_defaulted:
logger.info(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Required input map_type '{required_map_type}' for output '{output_map_type}' not found or not in usable state. Will attempt to use default values for its channels: {channels_requiring_this_map}.")
else:
logger.warning(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Required input map_type '{required_map_type}' for output '{output_map_type}' not found/unusable, AND not all its required channels ({channels_requiring_this_map}) have defaults. Failing merge op.")
all_inputs_valid = False
context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f"Input '{required_map_type}' missing and defaults incomplete."}
break
if found_processed_map_details:
temp_file_path_str = found_processed_map_details.get('temp_processed_file')
if not temp_file_path_str:
# Log with p_key_idx if available, or just the map type if not (though it should be if found_processed_map_details is set)
log_key_info = f"(Associated Key Index: {p_key_idx})" if 'p_key_idx' in locals() and found_processed_map_details else "" # Use locals() to check if p_key_idx is defined in this scope
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: 'temp_processed_file' missing in details for found map_type '{required_map_type}' {log_key_info}.")
all_inputs_valid = False
context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f"Temp file path missing for input '{required_map_type}'."}
break
temp_file_path = Path(temp_file_path_str)
if not temp_file_path.exists():
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Temp file {temp_file_path} for input map_type '{required_map_type}' does not exist.")
all_inputs_valid = False
context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f"Temp file for input '{required_map_type}' missing."}
break
try:
image_data = ipu.load_image(str(temp_file_path))
if image_data is None: raise ValueError("Loaded image is None")
except Exception as e:
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Error loading image {temp_file_path} for input map_type '{required_map_type}': {e}")
all_inputs_valid = False
context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f"Error loading input '{required_map_type}'."}
break
loaded_input_maps[required_map_type] = image_data
input_map_paths[required_map_type] = str(temp_file_path)
current_dims = (image_data.shape[1], image_data.shape[0])
if target_dims is None:
target_dims = current_dims
elif current_dims != target_dims:
logger.warning(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Input map '{required_map_type}' dims {current_dims} differ from target {target_dims}. Resizing.")
try:
image_data_resized = ipu.resize_image(image_data, target_dims[0], target_dims[1])
if image_data_resized is None: raise ValueError("Resize returned None")
loaded_input_maps[required_map_type] = image_data_resized
except Exception as e:
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Failed to resize '{required_map_type}': {e}")
all_inputs_valid = False
context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f"Failed to resize input '{required_map_type}'."}
break
if not all_inputs_valid:
logger.warning(f"Asset {asset_name_for_log}: Skipping merge for Op ID {merge_op_id} ('{output_map_type}') due to invalid inputs.")
continue
if not loaded_input_maps and not any(default_values.get(ch) is not None for ch in inputs_map_type_to_channel.keys()):
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: No input maps loaded and no defaults available for any channel for '{output_map_type}'. Cannot proceed.")
context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': 'No input maps loaded and no defaults available.'}
continue
if target_dims is None:
default_res_key = context.config_obj.get("default_output_resolution_key_for_merge", "1K")
image_resolutions_cfg = getattr(context.config_obj, "image_resolutions", {})
default_max_dim = image_resolutions_cfg.get(default_res_key)
if default_max_dim:
target_dims = (default_max_dim, default_max_dim)
logger.info(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Target dimensions not set by inputs (all defaulted). Using configured default resolution '{default_res_key}': {target_dims}.")
else:
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Target dimensions could not be determined for '{output_map_type}' (all inputs defaulted and no default output resolution configured).")
context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': 'Target dimensions undetermined for fully defaulted merge.'}
continue
output_channel_keys = sorted(list(inputs_map_type_to_channel.keys()))
num_output_channels = len(output_channel_keys)
if num_output_channels == 0:
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: No output channels defined in 'inputs' for '{output_map_type}'.")
context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': 'No output channels defined.'}
continue
try:
output_dtype = np.uint8
if num_output_channels == 1:
merged_image = np.zeros((target_dims[1], target_dims[0]), dtype=output_dtype)
else:
merged_image = np.zeros((target_dims[1], target_dims[0], num_output_channels), dtype=output_dtype)
except Exception as e:
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Error creating empty merged image for '{output_map_type}': {e}")
context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f'Error creating output canvas: {e}'}
continue
merge_op_failed_detail = False
for i, out_channel_char in enumerate(output_channel_keys):
input_map_type_for_this_channel = inputs_map_type_to_channel[out_channel_char]
source_image = loaded_input_maps.get(input_map_type_for_this_channel)
source_data_this_channel = None
if source_image is not None:
if source_image.dtype != np.uint8:
logger.warning(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Input map '{input_map_type_for_this_channel}' has dtype {source_image.dtype}, expected uint8. Attempting conversion.")
source_image = ipu.convert_to_uint8(source_image)
if source_image is None:
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Failed to convert input '{input_map_type_for_this_channel}' to uint8.")
merge_op_failed_detail = True; break
if source_image.ndim == 2:
source_data_this_channel = source_image
elif source_image.ndim == 3:
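# OpenCV stores color images in BGR(A) channel order, so semantic 'R' lives at
# index 2, 'G' at 1, 'B' at 0, and 'A' (when present) at 3.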
semantic_to_bgr_idx = {'R': 2, 'G': 1, 'B': 0, 'A': 3}
idx_to_extract = semantic_to_bgr_idx.get(out_channel_char.upper())
if idx_to_extract is not None and idx_to_extract < source_image.shape[2]:
source_data_this_channel = source_image[:, :, idx_to_extract]
logger.debug(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: For output '{out_channel_char}', using source '{input_map_type_for_this_channel}' semantic '{out_channel_char}' (BGR(A) index {idx_to_extract}).")
else:
logger.warning(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Could not map output '{out_channel_char}' to a specific BGR(A) channel of '{input_map_type_for_this_channel}' (shape {source_image.shape}). Defaulting to its channel 0 (Blue).")
source_data_this_channel = source_image[:, :, 0]
else:
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Source image '{input_map_type_for_this_channel}' has unexpected dimensions: {source_image.ndim} (shape {source_image.shape}).")
merge_op_failed_detail = True; break
else:
default_val_for_channel = default_values.get(out_channel_char)
if default_val_for_channel is not None:
try:
scaled_default_val = int(float(default_val_for_channel) * 255)
source_data_this_channel = np.full((target_dims[1], target_dims[0]), scaled_default_val, dtype=np.uint8)
logger.info(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Using default value {default_val_for_channel} (scaled to {scaled_default_val}) for output channel '{out_channel_char}' as input map '{input_map_type_for_this_channel}' was missing.")
except ValueError:
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Default value '{default_val_for_channel}' for channel '{out_channel_char}' is not a valid float. Cannot scale.")
merge_op_failed_detail = True; break
else:
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Input map '{input_map_type_for_this_channel}' for output channel '{out_channel_char}' is missing and no default value provided.")
merge_op_failed_detail = True; break
if source_data_this_channel is None:
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Failed to get source data for output channel '{out_channel_char}'.")
merge_op_failed_detail = True; break
try:
if merged_image.ndim == 2:
merged_image = source_data_this_channel
else:
merged_image[:, :, i] = source_data_this_channel
except Exception as e:
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Error assigning data to output channel '{out_channel_char}' (index {i}): {e}. Merged shape: {merged_image.shape}, Source data shape: {source_data_this_channel.shape}")
merge_op_failed_detail = True; break
if merge_op_failed_detail:
context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': 'Error during channel assignment.'}
continue
output_format = 'png'
temp_merged_filename = f"merged_{sanitize_filename(output_map_type)}_{merge_op_id}.{output_format}"
temp_merged_path = context.engine_temp_dir / temp_merged_filename
try:
save_success = ipu.save_image(str(temp_merged_path), merged_image)
if not save_success: raise ValueError("Save image returned false")
except Exception as e:
logger.error(f"Asset {asset_name_for_log}, Merge Op ID {merge_op_id}: Error saving merged image {temp_merged_path}: {e}")
context.merged_maps_details[merge_op_id] = {'map_type': output_map_type, 'status': 'Failed', 'reason': f'Failed to save merged image: {e}'}
continue
logger.info(f"Asset {asset_name_for_log}: Successfully merged and saved '{output_map_type}' (Op ID: {merge_op_id}) to {temp_merged_path}")
context.merged_maps_details[merge_op_id] = {
'map_type': output_map_type,
'temp_merged_file': str(temp_merged_path),
'input_map_types_used': list(inputs_map_type_to_channel.values()),
'input_map_files_used': input_map_paths,
'merged_dimensions': target_dims,
'status': 'Processed'
}
logger.info(f"Finished MapMergingStage for asset: {asset_name_for_log}. Merged maps operations attempted: {num_merge_rules_attempted}, Succeeded: {len([d for d in context.merged_maps_details.values() if d.get('status') == 'Processed'])}")
return context
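For context, a hypothetical map_merge_rules entry of the shape this stage consumes (key names taken from the lookups above; concrete map types and values are illustrative):

example_merge_rule = {
    "output_map_type": "ORM",
    # Output channel -> input map type whose data fills that channel.
    "inputs": {"R": "AO", "G": "ROUGH", "B": "METAL"},
    # Normalized defaults (0.0-1.0), scaled to 0-255 when an input is missing.
    "defaults": {"R": 1.0, "G": 0.5, "B": 0.0},
}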

View File

@@ -0,0 +1,329 @@
import logging
import re
from pathlib import Path
from typing import List, Optional, Tuple, Dict, Any
import cv2
import numpy as np
from .base_stage import ProcessingStage
# Import necessary context classes and utils
from ..asset_context import AssetProcessingContext, MergeTaskDefinition, ProcessedMergedMapData
from ...utils import image_processing_utils as ipu
log = logging.getLogger(__name__)
class MergedTaskProcessorStage(ProcessingStage):
"""
Processes a single merge task defined in the configuration.
Loads inputs, applies transformations to inputs, handles fallbacks/resizing,
performs the merge, and returns the merged data.
"""
def _find_input_map_details_in_context(
self,
required_map_type: str,
processed_map_details_context: Dict[str, Dict[str, Any]],
log_prefix_for_find: str
) -> Optional[Dict[str, Any]]:
"""
Finds the details of a required input map in the context's processed_maps_details.
Prefers an exact match on the requested type (e.g. "MAP_NRM-1"); for a base type
(e.g. "MAP_NRM") it falls back to the primary suffixed variant ("MAP_NRM-1").
Returns the details dictionary only if it carries non-empty saved_files_info.
"""
# Try exact match first (e.g., rule asks for "MAP_NRM-1" or "MAP_NRM" if that's how it was processed)
for item_key, details in processed_map_details_context.items():
if details.get('internal_map_type') == required_map_type:
if details.get('saved_files_info') and isinstance(details['saved_files_info'], list) and len(details['saved_files_info']) > 0:
log.debug(f"{log_prefix_for_find}: Found exact match for '{required_map_type}' with key '{item_key}'.")
return details
log.warning(f"{log_prefix_for_find}: Found exact match for '{required_map_type}' (key '{item_key}') but no saved_files_info.")
return None # Found type but no usable files
# If exact match not found, and required_map_type is a base type (e.g. "MAP_NRM")
# try to find the primary suffixed version "MAP_NRM-1" or the base type itself if it was processed without a suffix.
if not re.search(r'-\d+$', required_map_type): # if it's a base type like MAP_XXX
# Prefer "MAP_XXX-1" as the primary variant if suffixed types exist
primary_suffixed_type = f"{required_map_type}-1"
for item_key, details in processed_map_details_context.items():
if details.get('internal_map_type') == primary_suffixed_type:
if details.get('saved_files_info') and isinstance(details['saved_files_info'], list) and len(details['saved_files_info']) > 0:
log.debug(f"{log_prefix_for_find}: Found primary suffixed match '{primary_suffixed_type}' for base '{required_map_type}' with key '{item_key}'.")
return details
log.warning(f"{log_prefix_for_find}: Found primary suffixed match '{primary_suffixed_type}' (key '{item_key}') but no saved_files_info.")
return None # Found type but no usable files
log.debug(f"{log_prefix_for_find}: No suitable match found for '{required_map_type}' via exact or primary suffixed type search.")
return None
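# Illustrative lookups (hypothetical data): a rule requesting "MAP_NRM-2" must
# match internal_map_type "MAP_NRM-2" exactly; a rule requesting the base type
# "MAP_NRM" matches "MAP_NRM" exactly or falls back to "MAP_NRM-1".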
def execute(
self,
context: AssetProcessingContext,
merge_task: MergeTaskDefinition # Specific item passed by orchestrator
) -> ProcessedMergedMapData:
"""
Processes the given MergeTaskDefinition item.
"""
asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
task_key = merge_task.task_key
task_data = merge_task.task_data
log_prefix = f"Asset '{asset_name_for_log}', Task '{task_key}'"
log.info(f"{log_prefix}: Processing Merge Task.")
# Initialize output object with default failure state
result = ProcessedMergedMapData(
merged_image_data=np.array([]), # Placeholder
output_map_type=task_data.get('output_map_type', 'UnknownMergeOutput'),
source_bit_depths=[],
final_dimensions=None,
transformations_applied_to_inputs={},
status="Failed",
error_message="Initialization error"
)
try:
# --- Configuration & Task Data ---
config = context.config_obj
file_type_definitions = getattr(config, "FILE_TYPE_DEFINITIONS", {})
invert_normal_green = config.invert_normal_green_globally
merge_dimension_mismatch_strategy = getattr(config, "MERGE_DIMENSION_MISMATCH_STRATEGY", "USE_LARGEST")
workspace_path = context.workspace_path # Base for resolving relative input paths
# input_map_sources_from_task is no longer used for paths. Paths are sourced from context.processed_maps_details.
target_dimensions_hw = task_data.get('source_dimensions') # Expected dimensions (h, w) for fallback creation, must be in config.
merge_inputs_config = task_data.get('inputs', {}) # e.g., {'R': 'MAP_AO', 'G': 'MAP_ROUGH', ...}
merge_defaults = task_data.get('defaults', {}) # e.g., {'R': 255, 'G': 255, ...}
merge_channels_order = task_data.get('channel_order', 'RGB') # e.g., 'RGB', 'RGBA'
# Target dimensions are crucial if fallbacks are needed.
# Merge inputs config is essential; check it directly in task_data.
inputs_from_task_data = task_data.get('inputs')
if not isinstance(inputs_from_task_data, dict) or not inputs_from_task_data:
result.error_message = "Merge task data is incomplete (missing or invalid 'inputs' dictionary in task_data)."
log.error(f"{log_prefix}: {result.error_message}")
return result
if not target_dimensions_hw and any(merge_defaults.get(ch) is not None for ch in merge_inputs_config.keys()):
log.warning(f"{log_prefix}: Merge task has defaults defined, but 'source_dimensions' (target_dimensions_hw) is missing in task_data. Fallback image creation might fail if needed.")
# Not returning error yet, as fallbacks might not be triggered.
loaded_inputs_for_merge: Dict[str, np.ndarray] = {} # Channel char -> image data
actual_input_dimensions: List[Tuple[int, int]] = [] # List of (h, w) for loaded files
input_source_bit_depths: Dict[str, int] = {} # Channel char -> bit depth
all_transform_notes: Dict[str, List[str]] = {} # Channel char -> list of transform notes
# --- Load, Transform, and Prepare Inputs ---
log.debug(f"{log_prefix}: Loading and preparing inputs...")
for channel_char, required_map_type_from_rule in merge_inputs_config.items():
# Validate that the required input map type starts with "MAP_"
if not required_map_type_from_rule.startswith("MAP_"):
result.error_message = (
f"Invalid input map type '{required_map_type_from_rule}' for channel '{channel_char}'. "
f"Input map types for merging must start with 'MAP_'."
)
log.error(f"{log_prefix}: {result.error_message}")
return result # Fail the task if an input type is invalid
input_image_data: Optional[np.ndarray] = None
input_source_desc = f"Fallback for {required_map_type_from_rule}"
input_log_prefix = f"{log_prefix}, Input '{required_map_type_from_rule}' (Channel '{channel_char}')"
channel_transform_notes: List[str] = []
# 1. Attempt to load from context.processed_maps_details
found_input_map_details = self._find_input_map_details_in_context(
required_map_type_from_rule, context.processed_maps_details, input_log_prefix
)
if found_input_map_details:
# Assuming the first saved file is the primary one for merging.
# This might need refinement if specific variants (resolutions/formats) are required.
primary_saved_file_info = found_input_map_details['saved_files_info'][0]
input_file_path_str = primary_saved_file_info.get('path')
if input_file_path_str:
input_file_path = Path(input_file_path_str) # Path is absolute from SaveVariantsStage
if input_file_path.is_file():
try:
input_image_data = ipu.load_image(str(input_file_path))
if input_image_data is not None:
log.info(f"{input_log_prefix}: Loaded from context: {input_file_path}")
actual_input_dimensions.append(input_image_data.shape[:2]) # (h, w)
input_source_desc = str(input_file_path)
# Bit depth from the saved variant info
input_source_bit_depths[channel_char] = primary_saved_file_info.get('bit_depth', 8)
else:
log.warning(f"{input_log_prefix}: Failed to load image from {input_file_path} (found in context). Attempting fallback.")
input_image_data = None # Ensure fallback is triggered
except Exception as e:
log.warning(f"{input_log_prefix}: Error loading image from {input_file_path} (found in context): {e}. Attempting fallback.")
input_image_data = None # Ensure fallback is triggered
else:
log.warning(f"{input_log_prefix}: Input file path '{input_file_path}' (from context) not found. Attempting fallback.")
input_image_data = None # Ensure fallback is triggered
else:
log.warning(f"{input_log_prefix}: Found map type '{required_map_type_from_rule}' in context, but 'path' is missing in saved_files_info. Attempting fallback.")
input_image_data = None # Ensure fallback is triggered
else:
log.info(f"{input_log_prefix}: Input map type '{required_map_type_from_rule}' not found in context.processed_maps_details. Attempting fallback.")
input_image_data = None # Ensure fallback is triggered
# 2. Apply Fallback if needed
if input_image_data is None:
fallback_value = merge_defaults.get(channel_char)
if fallback_value is not None:
try:
if not target_dimensions_hw:
result.error_message = f"Cannot create fallback for channel '{channel_char}': 'source_dimensions' (target_dimensions_hw) not defined in task_data."
log.error(f"{log_prefix}: {result.error_message}")
return result # Critical failure if dimensions for fallback are missing
h, w = target_dimensions_hw
# Infer shape/dtype for fallback (simplified)
num_channels = len(fallback_value) if isinstance(fallback_value, (list, tuple)) else 1
dtype = np.uint8 # Default dtype
shape = (h, w) if num_channels == 1 else (h, w, num_channels)
input_image_data = np.full(shape, fallback_value, dtype=dtype)
log.warning(f"{input_log_prefix}: Using fallback value {fallback_value} (Target Dims: {target_dimensions_hw}).")
input_source_desc = f"Fallback value {fallback_value}"
input_source_bit_depths[channel_char] = 8 # Assume 8-bit for fallbacks
channel_transform_notes.append(f"Used fallback value {fallback_value}")
except Exception as e:
result.error_message = f"Error creating fallback for channel '{channel_char}': {e}"
log.error(f"{log_prefix}: {result.error_message}")
return result # Critical failure
else:
result.error_message = f"Missing input '{required_map_type_from_rule}' and no fallback default provided for channel '{channel_char}'."
log.error(f"{log_prefix}: {result.error_message}")
return result # Critical failure
# 3. Apply Transformations to the loaded/fallback input
if input_image_data is not None:
input_image_data, _, transform_notes = ipu.apply_common_map_transformations(
input_image_data.copy(), # Transform a copy
required_map_type_from_rule, # Use the type required by the rule
invert_normal_green,
file_type_definitions,
input_log_prefix
)
channel_transform_notes.extend(transform_notes)
else:
# This case should be prevented by fallback logic, but as a safeguard:
result.error_message = f"Input data for channel '{channel_char}' is None after load/fallback attempt."
log.error(f"{log_prefix}: {result.error_message} This indicates an internal logic error.")
return result
loaded_inputs_for_merge[channel_char] = input_image_data
all_transform_notes[channel_char] = channel_transform_notes
result.transformations_applied_to_inputs = all_transform_notes # Store notes
# --- Handle Dimension Mismatches (using transformed inputs) ---
log.debug(f"{log_prefix}: Handling dimension mismatches...")
unique_dimensions = set(actual_input_dimensions)
target_merge_dims_hw = target_dimensions_hw # Default
if len(unique_dimensions) > 1:
log.warning(f"{log_prefix}: Mismatched dimensions found among loaded inputs: {unique_dimensions}. Applying strategy: {merge_dimension_mismatch_strategy}")
mismatch_note = f"Mismatched input dimensions ({unique_dimensions}), applied {merge_dimension_mismatch_strategy}"
# Add note to all relevant inputs? Or just a general note? Add general for now.
# result.status_notes.append(mismatch_note) # Need a place for general notes
if merge_dimension_mismatch_strategy == "ERROR_SKIP":
result.error_message = "Dimension mismatch and strategy is ERROR_SKIP."
log.error(f"{log_prefix}: {result.error_message}")
return result
elif merge_dimension_mismatch_strategy == "USE_LARGEST":
max_h = max(h for h, w in unique_dimensions)
max_w = max(w for h, w in unique_dimensions)
target_merge_dims_hw = (max_h, max_w)
elif merge_dimension_mismatch_strategy == "USE_FIRST":
target_merge_dims_hw = actual_input_dimensions[0] if actual_input_dimensions else target_dimensions_hw
# Add other strategies or default to USE_LARGEST
log.info(f"{log_prefix}: Resizing inputs to target merge dimensions: {target_merge_dims_hw}")
# Resize loaded inputs (not fallbacks unless they were treated as having target dims)
for channel_char, img_data in loaded_inputs_for_merge.items():
# Only resize if it was a loaded input that contributed to the mismatch check
if img_data.shape[:2] in unique_dimensions and img_data.shape[:2] != target_merge_dims_hw:
resized_img = ipu.resize_image(img_data, target_merge_dims_hw[1], target_merge_dims_hw[0]) # w, h
if resized_img is None:
result.error_message = f"Failed to resize input for channel '{channel_char}' to {target_merge_dims_hw}."
log.error(f"{log_prefix}: {result.error_message}")
return result
loaded_inputs_for_merge[channel_char] = resized_img
log.debug(f"{log_prefix}: Resized input for channel '{channel_char}'.")
# If target_merge_dims_hw is still None (no source_dimensions and no mismatch), use first loaded input's dimensions
if target_merge_dims_hw is None and actual_input_dimensions:
target_merge_dims_hw = actual_input_dimensions[0]
log.info(f"{log_prefix}: Using dimensions from first loaded input: {target_merge_dims_hw}")
# --- Perform Merge ---
log.debug(f"{log_prefix}: Performing merge operation for channels '{merge_channels_order}'.")
try:
# Final check for valid dimensions before unpacking
if not isinstance(target_merge_dims_hw, tuple) or len(target_merge_dims_hw) != 2:
result.error_message = "Could not determine valid target dimensions for merge operation."
log.error(f"{log_prefix}: {result.error_message} (target_merge_dims_hw: {target_merge_dims_hw})")
return result
output_channels = len(merge_channels_order)
h, w = target_merge_dims_hw # Use the potentially adjusted dimensions
# Determine output dtype (e.g., based on inputs or config) - Assume uint8 for now
output_dtype = np.uint8
if output_channels == 1:
# Assume the first channel in order is the one to use
channel_char_to_use = merge_channels_order[0]
source_img = loaded_inputs_for_merge[channel_char_to_use]
# Ensure it's grayscale (take first channel if it's multi-channel)
if len(source_img.shape) == 3:
merged_image = source_img[:, :, 0].copy().astype(output_dtype)
else:
merged_image = source_img.copy().astype(output_dtype)
elif output_channels > 1:
merged_image = np.zeros((h, w, output_channels), dtype=output_dtype)
for i, channel_char in enumerate(merge_channels_order):
source_img = loaded_inputs_for_merge.get(channel_char)
if source_img is not None:
# Extract the correct channel (e.g., R from RGB, or use grayscale directly)
if len(source_img.shape) == 3:
# Simple approach: take the first channel if source is color. Needs refinement if specific channel mapping (R->R, G->G etc.) is needed.
merged_image[:, :, i] = source_img[:, :, 0]
else: # Grayscale source
merged_image[:, :, i] = source_img
else:
# This case should have been caught by fallback logic earlier
result.error_message = f"Internal error: Missing prepared input for channel '{channel_char}' during final merge assembly."
log.error(f"{log_prefix}: {result.error_message}")
return result
else:
result.error_message = f"Invalid channel_order '{merge_channels_order}' in merge config."
log.error(f"{log_prefix}: {result.error_message}")
return result
result.merged_image_data = merged_image
result.final_dimensions = (merged_image.shape[1], merged_image.shape[0]) # w, h
result.source_bit_depths = list(input_source_bit_depths.values()) # Collect bit depths used
log.info(f"{log_prefix}: Successfully merged inputs into image with shape {result.merged_image_data.shape}")
except Exception as e:
log.exception(f"{log_prefix}: Error during merge operation: {e}")
result.error_message = f"Merge operation failed: {e}"
return result
# --- Success ---
result.status = "Processed"
result.error_message = None
log.info(f"{log_prefix}: Successfully processed merge task.")
except Exception as e:
log.exception(f"{log_prefix}: Unhandled exception during processing: {e}")
result.status = "Failed"
result.error_message = f"Unhandled exception: {e}"
# Ensure image data is empty on failure
if result.merged_image_data is None or result.merged_image_data.size == 0:
result.merged_image_data = np.array([])
return result
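For orientation, a hypothetical task_data payload of the shape read above (keys mirror the .get() calls; concrete values are illustrative):

example_task_data = {
    "output_map_type": "MAP_ORM",
    "inputs": {"R": "MAP_AO", "G": "MAP_ROUGH", "B": "MAP_METAL"},
    "defaults": {"R": 255, "G": 255, "B": 0},
    "channel_order": "RGB",
    "source_dimensions": (1024, 1024),  # (h, w), used when fallbacks are created
}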

View File

@@ -41,7 +41,7 @@ class MetadataFinalizationAndSaveStage(ProcessingStage):
# Check Skip Flag
if context.status_flags.get('skip_asset'):
context.asset_metadata['status'] = "Skipped"
context.asset_metadata['processing_end_time'] = datetime.datetime.now().isoformat()
# context.asset_metadata['processing_end_time'] = datetime.datetime.now().isoformat()
context.asset_metadata['notes'] = context.status_flags.get('skip_reason', 'Skipped early in pipeline')
logger.info(
f"Asset '{asset_name_for_log}': Marked as skipped. Reason: {context.asset_metadata['notes']}"
@@ -51,7 +51,7 @@ class MetadataFinalizationAndSaveStage(ProcessingStage):
# However, if we are here, asset_metadata IS initialized.
# A. Finalize Metadata
context.asset_metadata['processing_end_time'] = datetime.datetime.now().isoformat()
# context.asset_metadata['processing_end_time'] = datetime.datetime.now().isoformat()
# Determine final status (if not already set to Skipped)
if context.asset_metadata.get('status') != "Skipped":
@@ -115,8 +115,8 @@ class MetadataFinalizationAndSaveStage(ProcessingStage):
restructured_processed_maps[map_key] = new_map_entry
# Assign the restructured details. Note: 'processed_map_details' (singular 'map') is the key in asset_metadata.
context.asset_metadata['processed_map_details'] = restructured_processed_maps
context.asset_metadata['merged_map_details'] = getattr(context, 'merged_maps_details', {})
# context.asset_metadata['processed_map_details'] = restructured_processed_maps
# context.asset_metadata['merged_map_details'] = getattr(context, 'merged_maps_details', {})
# (Optional) Add a list of all temporary files
# context.asset_metadata['temporary_files'] = getattr(context, 'temporary_files', []) # Assuming this is populated elsewhere
@@ -203,6 +203,8 @@ class MetadataFinalizationAndSaveStage(ProcessingStage):
return [make_serializable(i) for i in data]
return data
# final_output_files is populated by OutputOrganizationStage. Explicitly remove it as per user request.
context.asset_metadata.pop('final_output_files', None)
serializable_metadata = make_serializable(context.asset_metadata)
with open(metadata_save_path, 'w') as f:
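
For context, a hedged sketch of what a make_serializable helper like the one excerpted above typically does before json.dump: convert Paths and datetimes to strings and recurse into containers. The stage's actual conversion rules may differ.

import datetime
import json
from pathlib import Path

def make_serializable(data):
    # Strings for Paths and datetimes; recurse into dicts/lists/tuples.
    if isinstance(data, Path):
        return str(data)
    if isinstance(data, (datetime.datetime, datetime.date)):
        return data.isoformat()
    if isinstance(data, dict):
        return {k: make_serializable(v) for k, v in data.items()}
    if isinstance(data, (list, tuple)):
        return [make_serializable(i) for i in data]
    return data

meta = {"status": "Processed", "output": Path("out") / "metadata.json",
        "processing_end_time": datetime.datetime.now()}
print(json.dumps(make_serializable(meta), indent=2))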


@@ -85,6 +85,7 @@ class MetadataInitializationStage(ProcessingStage):
merged_maps_details.
"""
def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
logger.debug(f"METADATA_INIT_DEBUG: Entry - context.output_base_path = {context.output_base_path}") # Added
"""
Executes the metadata initialization logic.
@@ -147,12 +148,15 @@ class MetadataInitializationStage(ProcessingStage):
context.asset_metadata['processing_start_time'] = datetime.datetime.now().isoformat()
context.asset_metadata['status'] = "Pending"
if context.config_obj and hasattr(context.config_obj, 'general_settings') and \
hasattr(context.config_obj.general_settings, 'app_version'):
context.asset_metadata['version'] = context.config_obj.general_settings.app_version
app_version_value = None
if context.config_obj and hasattr(context.config_obj, 'app_version'):
app_version_value = context.config_obj.app_version
if app_version_value:
context.asset_metadata['version'] = app_version_value
else:
logger.warning("App version not found in config_obj.general_settings. Setting version to 'N/A'.")
context.asset_metadata['version'] = "N/A" # Default or placeholder
logger.warning("App version not found using config_obj.app_version. Setting version to 'N/A'.")
context.asset_metadata['version'] = "N/A"
if context.incrementing_value is not None:
context.asset_metadata['incrementing_value'] = context.incrementing_value
@@ -170,4 +174,5 @@ class MetadataInitializationStage(ProcessingStage):
# Example of how you might log the full metadata for debugging:
# logger.debug(f"Initialized metadata: {context.asset_metadata}")
logger.debug(f"METADATA_INIT_DEBUG: Exit - context.output_base_path = {context.output_base_path}") # Added
return context
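
The change above flattens the version lookup from config_obj.general_settings.app_version to config_obj.app_version. A small illustration of the resulting pattern, with a stub standing in for the real Configuration class:

class _StubConfig:
    app_version = "1.2.0"

config_obj = _StubConfig()
app_version_value = getattr(config_obj, "app_version", None)
print(app_version_value or "N/A")  # -> 1.2.0; "N/A" when the attribute is absent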


@@ -38,7 +38,9 @@ class NormalMapGreenChannelStage(ProcessingStage):
# Iterate through processed maps, as FileRule objects don't have IDs directly
for map_id_hex, map_details in context.processed_maps_details.items():
if map_details.get('map_type') == "NORMAL" and map_details.get('status') == 'Processed':
# Check if the map is a processed normal map using the standardized internal_map_type
internal_map_type = map_details.get('internal_map_type')
if internal_map_type and internal_map_type.startswith("MAP_NRM") and map_details.get('status') == 'Processed':
# Check configuration for inversion
# Assuming general_settings is an attribute of config_obj and might be a dict or an object
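
Illustration of the new detection rule above: select entries whose internal_map_type starts with "MAP_NRM" and whose status is 'Processed'. The dictionary contents are invented for the example.

processed_maps_details = {
    "0xa1": {"internal_map_type": "MAP_NRM-1", "status": "Processed"},
    "0xb2": {"internal_map_type": "MAP_COL", "status": "Processed"},
    "0xc3": {"internal_map_type": "MAP_NRM", "status": "Failed"},
}
normal_maps = {
    key: d for key, d in processed_maps_details.items()
    if (d.get("internal_map_type") or "").startswith("MAP_NRM")
    and d.get("status") == "Processed"
}
print(list(normal_maps))  # ['0xa1']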


@@ -5,10 +5,10 @@ from typing import List, Dict, Optional
from .base_stage import ProcessingStage
from ..asset_context import AssetProcessingContext
from utils.path_utils import generate_path_from_pattern, sanitize_filename
from utils.path_utils import generate_path_from_pattern, sanitize_filename, get_filename_friendly_map_type # Absolute import
from rule_structure import FileRule # Assuming these are needed for type hints if not directly in context
log = logging.getLogger(__name__)
logger = logging.getLogger(__name__)
class OutputOrganizationStage(ProcessingStage):
@@ -17,6 +17,16 @@ class OutputOrganizationStage(ProcessingStage):
"""
def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
asset_name_for_log_early = context.asset_rule.asset_name if hasattr(context, 'asset_rule') and context.asset_rule else "Unknown Asset (early)"
log.info(f"OUTPUT_ORG_DEBUG: Stage execution started for asset '{asset_name_for_log_early}'.")
logger.debug(f"OUTPUT_ORG_DEBUG: Entry - context.output_base_path = {context.output_base_path}") # Modified
log.info(f"OUTPUT_ORG_DEBUG: Received context.config_obj.output_directory_base (raw from config) = {getattr(context.config_obj, 'output_directory_base', 'N/A')}")
# resolved_base = "N/A"
# if hasattr(context.config_obj, '_settings') and context.config_obj._settings.get('OUTPUT_BASE_DIR'):
# base_dir_from_settings = context.config_obj._settings.get('OUTPUT_BASE_DIR')
# Path resolution logic might be complex
# log.info(f"OUTPUT_ORG_DEBUG: Received context.config_obj._settings.OUTPUT_BASE_DIR (resolved guess) = {resolved_base}")
log.info(f"OUTPUT_ORG_DEBUG: context.processed_maps_details at start: {context.processed_maps_details}")
"""
Copies temporary processed and merged files to their final output locations
based on path patterns and updates AssetProcessingContext.
@@ -34,15 +44,7 @@ class OutputOrganizationStage(ProcessingStage):
return context
final_output_files: List[str] = []
overwrite_existing = False
# Correctly access general_settings and overwrite_existing from config_obj
if hasattr(context.config_obj, 'general_settings'):
if isinstance(context.config_obj.general_settings, dict):
overwrite_existing = context.config_obj.general_settings.get('overwrite_existing', False)
elif hasattr(context.config_obj.general_settings, 'overwrite_existing'): # If general_settings is an object
overwrite_existing = getattr(context.config_obj.general_settings, 'overwrite_existing', False)
else:
logger.warning(f"Asset '{asset_name_for_log}': config_obj.general_settings not found, defaulting overwrite_existing to False.")
overwrite_existing = context.config_obj.overwrite_existing
output_dir_pattern = getattr(context.config_obj, 'output_directory_pattern', "[supplier]/[assetname]")
output_filename_pattern_config = getattr(context.config_obj, 'output_filename_pattern', "[assetname]_[maptype]_[resolution].[ext]")
@@ -53,15 +55,110 @@ class OutputOrganizationStage(ProcessingStage):
logger.debug(f"Asset '{asset_name_for_log}': Organizing {len(context.processed_maps_details)} processed individual map entries.")
for processed_map_key, details in context.processed_maps_details.items():
map_status = details.get('status')
base_map_type = details.get('map_type', 'unknown_map_type') # Original map type
# Retrieve the internal map type first
internal_map_type = details.get('internal_map_type', 'unknown_map_type')
# Convert internal type to filename-friendly type using the helper
file_type_definitions = getattr(context.config_obj, "FILE_TYPE_DEFINITIONS", {})
base_map_type = get_filename_friendly_map_type(internal_map_type, file_type_definitions) # Final filename-friendly type
if map_status in ['Processed', 'Processed_No_Variants']:
if not details.get('temp_processed_file'):
logger.debug(f"Asset '{asset_name_for_log}': Skipping map key '{processed_map_key}' (status '{map_status}') due to missing 'temp_processed_file'.")
# --- Handle maps processed by the SaveVariantsStage (identified by having saved_files_info) ---
saved_files_info = details.get('saved_files_info') # This is a list of dicts from SaveVariantsOutput
# Check if 'saved_files_info' exists and is a non-empty list.
# This indicates the item was processed by SaveVariantsStage.
if saved_files_info and isinstance(saved_files_info, list) and len(saved_files_info) > 0:
logger.debug(f"Asset '{asset_name_for_log}': Organizing {len(saved_files_info)} variants for map key '{processed_map_key}' (map type: {base_map_type}) from SaveVariantsStage.")
# Use base_map_type (e.g., "COL") as the key for the map entry
map_metadata_entry = context.asset_metadata.setdefault('maps', {}).setdefault(base_map_type, {})
# map_type is now the key, so no need to store it inside the entry
# map_metadata_entry['map_type'] = base_map_type
map_metadata_entry.setdefault('variant_paths', {}) # Initialize if not present
processed_any_variant_successfully = False
failed_any_variant = False
for variant_index, variant_detail in enumerate(saved_files_info):
# Extract info from the save utility's output structure
temp_variant_path_str = variant_detail.get('path') # Key is 'path'
if not temp_variant_path_str:
logger.warning(f"Asset '{asset_name_for_log}': Variant {variant_index} for map '{processed_map_key}' is missing 'path' in saved_files_info. Skipping.")
# Optionally update variant_detail status if it's mutable and tracked, otherwise just skip
continue
temp_variant_path = Path(temp_variant_path_str)
if not temp_variant_path.is_file():
logger.warning(f"Asset '{asset_name_for_log}': Temporary variant file '{temp_variant_path}' for map '{processed_map_key}' not found. Skipping.")
continue
variant_resolution_key = variant_detail.get('resolution_key', f"varRes{variant_index}")
variant_ext = variant_detail.get('format', temp_variant_path.suffix.lstrip('.')) # Use 'format' key
token_data_variant = {
"assetname": asset_name_for_log,
"supplier": context.effective_supplier or "DefaultSupplier",
"maptype": base_map_type,
"resolution": variant_resolution_key,
"ext": variant_ext,
"incrementingvalue": getattr(context, 'incrementing_value', None),
"sha5": getattr(context, 'sha5_value', None)
}
token_data_variant_cleaned = {k: v for k, v in token_data_variant.items() if v is not None}
output_filename_variant = generate_path_from_pattern(output_filename_pattern_config, token_data_variant_cleaned)
try:
relative_dir_path_str_variant = generate_path_from_pattern(
pattern_string=output_dir_pattern,
token_data=token_data_variant_cleaned
)
logger.debug(f"OUTPUT_ORG_DEBUG: Variants - Using context.output_base_path = {context.output_base_path} for final_variant_path construction.") # Added
final_variant_path = Path(context.output_base_path) / Path(relative_dir_path_str_variant) / Path(output_filename_variant)
logger.debug(f"OUTPUT_ORG_DEBUG: Variants - Constructed final_variant_path = {final_variant_path}") # Added
final_variant_path.parent.mkdir(parents=True, exist_ok=True)
if final_variant_path.exists() and not overwrite_existing:
logger.info(f"Asset '{asset_name_for_log}': Output variant file {final_variant_path} for map '{processed_map_key}' (res: {variant_resolution_key}) exists and overwrite is disabled. Skipping copy.")
# Optionally update variant_detail status if needed
else:
shutil.copy2(temp_variant_path, final_variant_path)
logger.info(f"Asset '{asset_name_for_log}': Copied variant {temp_variant_path} to {final_variant_path} for map '{processed_map_key}'.")
final_output_files.append(str(final_variant_path))
# Optionally update variant_detail status if needed
# Store relative path in metadata
# Store only the filename, as it's relative to the metadata.json location
map_metadata_entry['variant_paths'][variant_resolution_key] = output_filename_variant
processed_any_variant_successfully = True
except Exception as e:
logger.error(f"Asset '{asset_name_for_log}': Failed to copy variant {temp_variant_path} for map key '{processed_map_key}' (res: {variant_resolution_key}). Error: {e}", exc_info=True)
context.status_flags['output_organization_error'] = True
context.asset_metadata['status'] = "Failed (Output Organization Error - Variant)"
# Optionally update variant_detail status if needed
failed_any_variant = True
# Update parent map detail status based on variant outcomes
if failed_any_variant:
details['status'] = 'Organization Failed (Save Utility Variants)'
elif processed_any_variant_successfully:
details['status'] = 'Organized (Save Utility Variants)'
else: # No variants were successfully copied (e.g., all skipped due to existing file or missing temp file)
details['status'] = 'Organization Skipped (No Save Utility Variants Copied/Needed)'
# --- Handle older/other processing statuses (like single file processing) ---
elif map_status in ['Processed', 'Processed_No_Variants', 'Converted_To_Rough']: # Add other single-file statuses if needed
temp_file_path_str = details.get('temp_processed_file')
if not temp_file_path_str:
logger.warning(f"Asset '{asset_name_for_log}': Skipping map key '{processed_map_key}' (status '{map_status}') due to missing 'temp_processed_file'.")
details['status'] = 'Organization Skipped (Missing Temp File)'
continue
temp_file_path = Path(details['temp_processed_file'])
temp_file_path = Path(temp_file_path_str)
if not temp_file_path.is_file():
logger.warning(f"Asset '{asset_name_for_log}': Temporary file '{temp_file_path}' for map '{processed_map_key}' not found. Skipping.")
details['status'] = 'Organization Skipped (Temp File Not Found)'
continue
resolution_str = details.get('processed_resolution_name', details.get('original_resolution_name', 'resX'))
token_data = {
@@ -82,23 +179,33 @@ class OutputOrganizationStage(ProcessingStage):
pattern_string=output_dir_pattern,
token_data=token_data_cleaned
)
logger.debug(f"OUTPUT_ORG_DEBUG: SingleFile - Using context.output_base_path = {context.output_base_path} for final_path construction.") # Added
final_path = Path(context.output_base_path) / Path(relative_dir_path_str) / Path(output_filename)
logger.debug(f"OUTPUT_ORG_DEBUG: SingleFile - Constructed final_path = {final_path}") # Added
final_path.parent.mkdir(parents=True, exist_ok=True)
if final_path.exists() and not overwrite_existing:
logger.info(f"Asset '{asset_name_for_log}': Output file {final_path} for map '{processed_map_key}' exists and overwrite is disabled. Skipping copy.")
details['status'] = 'Organized (Exists, Skipped Copy)'
else:
shutil.copy2(temp_file_path, final_path)
logger.info(f"Asset '{asset_name_for_log}': Copied {temp_file_path} to {final_path} for map '{processed_map_key}'.")
final_output_files.append(str(final_path))
details['status'] = 'Organized'
details['final_output_path'] = str(final_path)
details['status'] = 'Organized'
# Update asset_metadata for metadata.json
map_metadata_entry = context.asset_metadata.setdefault('maps', {}).setdefault(processed_map_key, {})
map_metadata_entry['map_type'] = base_map_type
map_metadata_entry['path'] = str(Path(relative_dir_path_str) / Path(output_filename)) # Store relative path
# Use base_map_type (e.g., "COL") as the key for the map entry
map_metadata_entry = context.asset_metadata.setdefault('maps', {}).setdefault(base_map_type, {})
# map_type is now the key, so no need to store it inside the entry
# map_metadata_entry['map_type'] = base_map_type
# Store single path in variant_paths, keyed by its resolution string
# Store only the filename, as it's relative to the metadata.json location
map_metadata_entry.setdefault('variant_paths', {})[resolution_str] = output_filename
# Remove old cleanup logic, as variant_paths is now the standard
# if 'variant_paths' in map_metadata_entry:
# del map_metadata_entry['variant_paths']
except Exception as e:
logger.error(f"Asset '{asset_name_for_log}': Failed to copy {temp_file_path} for map key '{processed_map_key}'. Error: {e}", exc_info=True)
@@ -106,204 +213,17 @@ class OutputOrganizationStage(ProcessingStage):
context.asset_metadata['status'] = "Failed (Output Organization Error)"
details['status'] = 'Organization Failed'
elif map_status == 'Processed_With_Variants':
variants = details.get('variants')
if not variants: # No variants list, or it's empty
logger.warning(f"Asset '{asset_name_for_log}': Map key '{processed_map_key}' (status '{map_status}') has no 'variants' list or it is empty. Attempting fallback to base file.")
if not details.get('temp_processed_file'):
logger.error(f"Asset '{asset_name_for_log}': Skipping map key '{processed_map_key}' (fallback) as 'temp_processed_file' is also missing.")
details['status'] = 'Organization Failed (No Variants, No Temp File)'
continue # Skip to next map key
# Fallback: Process the base temp_processed_file
temp_file_path = Path(details['temp_processed_file'])
resolution_str = details.get('processed_resolution_name', details.get('original_resolution_name', 'baseRes'))
token_data = {
"assetname": asset_name_for_log,
"supplier": context.effective_supplier or "DefaultSupplier",
"maptype": base_map_type,
"resolution": resolution_str,
"ext": temp_file_path.suffix.lstrip('.'),
"incrementingvalue": getattr(context, 'incrementing_value', None),
"sha5": getattr(context, 'sha5_value', None)
}
token_data_cleaned = {k: v for k, v in token_data.items() if v is not None}
output_filename = generate_path_from_pattern(output_filename_pattern_config, token_data_cleaned)
try:
relative_dir_path_str = generate_path_from_pattern(
pattern_string=output_dir_pattern,
token_data=token_data_cleaned
)
final_path = Path(context.output_base_path) / Path(relative_dir_path_str) / Path(output_filename)
final_path.parent.mkdir(parents=True, exist_ok=True)
if final_path.exists() and not overwrite_existing:
logger.info(f"Asset '{asset_name_for_log}': Output file {final_path} for map '{processed_map_key}' (fallback) exists and overwrite is disabled. Skipping copy.")
else:
shutil.copy2(temp_file_path, final_path)
logger.info(f"Asset '{asset_name_for_log}': Copied {temp_file_path} to {final_path} for map '{processed_map_key}' (fallback).")
final_output_files.append(str(final_path))
details['final_output_path'] = str(final_path)
details['status'] = 'Organized (Base File Fallback)'
map_metadata_entry = context.asset_metadata.setdefault('maps', {}).setdefault(processed_map_key, {})
map_metadata_entry['map_type'] = base_map_type
map_metadata_entry['path'] = str(Path(relative_dir_path_str) / Path(output_filename))
if 'variant_paths' in map_metadata_entry: # Clean up if it was somehow set
del map_metadata_entry['variant_paths']
except Exception as e:
logger.error(f"Asset '{asset_name_for_log}': Failed to copy {temp_file_path} (fallback) for map key '{processed_map_key}'. Error: {e}", exc_info=True)
context.status_flags['output_organization_error'] = True
context.asset_metadata['status'] = "Failed (Output Organization Error - Fallback)"
details['status'] = 'Organization Failed (Fallback)'
continue # Finished with this map key due to fallback
# If we are here, 'variants' list exists and is not empty. Proceed with variant processing.
logger.debug(f"Asset '{asset_name_for_log}': Organizing {len(variants)} variants for map key '{processed_map_key}' (map type: {base_map_type}).")
map_metadata_entry = context.asset_metadata.setdefault('maps', {}).setdefault(processed_map_key, {})
map_metadata_entry['map_type'] = base_map_type
map_metadata_entry.setdefault('variant_paths', {}) # Initialize if not present
processed_any_variant_successfully = False
failed_any_variant = False
for variant_index, variant_detail in enumerate(variants):
temp_variant_path_str = variant_detail.get('temp_path')
if not temp_variant_path_str:
logger.warning(f"Asset '{asset_name_for_log}': Variant {variant_index} for map '{processed_map_key}' is missing 'temp_path'. Skipping.")
variant_detail['status'] = 'Organization Skipped (Missing Temp Path)'
continue
temp_variant_path = Path(temp_variant_path_str)
variant_resolution_key = variant_detail.get('resolution_key', f"varRes{variant_index}")
variant_ext = temp_variant_path.suffix.lstrip('.')
token_data_variant = {
"assetname": asset_name_for_log,
"supplier": context.effective_supplier or "DefaultSupplier",
"maptype": base_map_type,
"resolution": variant_resolution_key,
"ext": variant_ext,
"incrementingvalue": getattr(context, 'incrementing_value', None),
"sha5": getattr(context, 'sha5_value', None)
}
token_data_variant_cleaned = {k: v for k, v in token_data_variant.items() if v is not None}
output_filename_variant = generate_path_from_pattern(output_filename_pattern_config, token_data_variant_cleaned)
try:
relative_dir_path_str_variant = generate_path_from_pattern(
pattern_string=output_dir_pattern,
token_data=token_data_variant_cleaned
)
final_variant_path = Path(context.output_base_path) / Path(relative_dir_path_str_variant) / Path(output_filename_variant)
final_variant_path.parent.mkdir(parents=True, exist_ok=True)
if final_variant_path.exists() and not overwrite_existing:
logger.info(f"Asset '{asset_name_for_log}': Output variant file {final_variant_path} for map '{processed_map_key}' (res: {variant_resolution_key}) exists and overwrite is disabled. Skipping copy.")
variant_detail['status'] = 'Organized (Exists, Skipped Copy)'
else:
shutil.copy2(temp_variant_path, final_variant_path)
logger.info(f"Asset '{asset_name_for_log}': Copied variant {temp_variant_path} to {final_variant_path} for map '{processed_map_key}'.")
final_output_files.append(str(final_variant_path))
variant_detail['status'] = 'Organized'
variant_detail['final_output_path'] = str(final_variant_path)
# Store the Path object for metadata stage to make it relative later
variant_detail['final_output_path_for_metadata'] = final_variant_path
relative_final_variant_path_str = str(Path(relative_dir_path_str_variant) / Path(output_filename_variant))
map_metadata_entry['variant_paths'][variant_resolution_key] = relative_final_variant_path_str
processed_any_variant_successfully = True
except Exception as e:
logger.error(f"Asset '{asset_name_for_log}': Failed to copy variant {temp_variant_path} for map key '{processed_map_key}' (res: {variant_resolution_key}). Error: {e}", exc_info=True)
context.status_flags['output_organization_error'] = True
context.asset_metadata['status'] = "Failed (Output Organization Error - Variant)"
variant_detail['status'] = 'Organization Failed'
failed_any_variant = True
# Update parent map detail status based on variant outcomes
if failed_any_variant:
details['status'] = 'Organization Failed (Variants)'
elif processed_any_variant_successfully:
# Check if all processable variants were organized
all_attempted_organized = True
for v_detail in variants:
if v_detail.get('temp_path') and not v_detail.get('status', '').startswith('Organized'):
all_attempted_organized = False
break
if all_attempted_organized:
details['status'] = 'Organized (All Attempted Variants)'
else:
details['status'] = 'Partially Organized (Variants)'
elif not any(v.get('temp_path') for v in variants): # No variants had temp_paths to begin with
details['status'] = 'Processed_With_Variants (No Valid Variants to Organize)'
else: # Variants list existed, items had temp_paths, but none were successfully organized (e.g., all skipped due to existing file and no overwrite)
details['status'] = 'Organization Skipped (No Variants Copied/Needed)'
else: # Other statuses like 'Skipped', 'Failed', 'Organization Failed' etc.
logger.debug(f"Asset '{asset_name_for_log}': Skipping map key '{processed_map_key}' (status: '{map_status}') for organization as it's not 'Processed', 'Processed_No_Variants', or 'Processed_With_Variants'.")
# --- Handle other statuses (Skipped, Failed, etc.) ---
else: # Catches statuses not explicitly handled above
logger.debug(f"Asset '{asset_name_for_log}': Skipping map key '{processed_map_key}' (status: '{map_status}') for organization as it's not a recognized final processed state or variant state.")
continue
else:
logger.debug(f"Asset '{asset_name_for_log}': No processed individual maps to organize.")
# B. Organize Merged Maps
if context.merged_maps_details:
logger.debug(f"Asset '{asset_name_for_log}': Organizing {len(context.merged_maps_details)} merged map(s).")
for merge_op_id, details in context.merged_maps_details.items(): # Use merge_op_id
if details.get('status') != 'Processed' or not details.get('temp_merged_file'):
logger.debug(f"Asset '{asset_name_for_log}': Skipping merge op id '{merge_op_id}' due to status '{details.get('status')}' or missing temp file.")
continue
temp_file_path = Path(details['temp_merged_file'])
map_type = details.get('map_type', 'unknown_merged_map') # This is the output_map_type of the merge rule
# Merged maps might not have a simple 'resolution' token like individual maps.
# We'll use a placeholder or derive if possible.
resolution_str = details.get('merged_resolution_name', 'mergedRes')
token_data_merged = {
"assetname": asset_name_for_log,
"supplier": context.effective_supplier or "DefaultSupplier",
"maptype": map_type,
"resolution": resolution_str,
"ext": temp_file_path.suffix.lstrip('.'),
"incrementingvalue": getattr(context, 'incrementing_value', None),
"sha5": getattr(context, 'sha5_value', None)
}
token_data_merged_cleaned = {k: v for k, v in token_data_merged.items() if v is not None}
output_filename_merged = generate_path_from_pattern(output_filename_pattern_config, token_data_merged_cleaned)
try:
relative_dir_path_str_merged = generate_path_from_pattern(
pattern_string=output_dir_pattern,
token_data=token_data_merged_cleaned
)
final_path_merged = Path(context.output_base_path) / Path(relative_dir_path_str_merged) / Path(output_filename_merged)
final_path_merged.parent.mkdir(parents=True, exist_ok=True)
if final_path_merged.exists() and not overwrite_existing:
logger.info(f"Asset '{asset_name_for_log}': Output file {final_path_merged} exists and overwrite is disabled. Skipping copy for merged map.")
else:
shutil.copy2(temp_file_path, final_path_merged)
logger.info(f"Asset '{asset_name_for_log}': Copied merged map {temp_file_path} to {final_path_merged}")
final_output_files.append(str(final_path_merged))
context.merged_maps_details[merge_op_id]['final_output_path'] = str(final_path_merged)
context.merged_maps_details[merge_op_id]['status'] = 'Organized'
except Exception as e:
logger.error(f"Asset '{asset_name_for_log}': Failed to copy merged map {temp_file_path} to destination for merge op id '{merge_op_id}'. Error: {e}", exc_info=True)
context.status_flags['output_organization_error'] = True
context.asset_metadata['status'] = "Failed (Output Organization Error)"
context.merged_maps_details[merge_op_id]['status'] = 'Organization Failed'
else:
logger.debug(f"Asset '{asset_name_for_log}': No merged maps to organize.")
# B. Organize Merged Maps (OBSOLETE BLOCK - Merged maps are handled by the main loop processing context.processed_maps_details)
# The log "No merged maps to organize" will no longer appear from here.
# If merged maps are not appearing, the issue is likely that they are not being added
# to context.processed_maps_details with 'saved_files_info' by the orchestrator/SaveVariantsStage.
# C. Organize Extra Files (e.g., previews, text files)
logger.debug(f"Asset '{asset_name_for_log}': Checking for EXTRA files to organize.")
@@ -337,10 +257,12 @@ class OutputOrganizationStage(ProcessingStage):
token_data=base_token_data_cleaned
)
# Destination: <output_base_path>/<asset_base_output_dir_str>/<extra_subdir_name>/<original_filename>
logger.debug(f"OUTPUT_ORG_DEBUG: ExtraFiles - Using context.output_base_path = {context.output_base_path} for final_dest_path construction.") # Added
final_dest_path = (Path(context.output_base_path) /
Path(asset_base_output_dir_str) /
Path(extra_subdir_name) /
source_file_path.name) # Use original filename
logger.debug(f"OUTPUT_ORG_DEBUG: ExtraFiles - Constructed final_dest_path = {final_dest_path}") # Added
final_dest_path.parent.mkdir(parents=True, exist_ok=True)
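
This stage builds every destination path from token dictionaries and the two patterns ([supplier]/[assetname] and [assetname]_[maptype]_[resolution].[ext]). A minimal sketch of such token substitution, assuming [token] placeholders; the real generate_path_from_pattern in utils.path_utils may also sanitize and validate:

import re

def expand_pattern(pattern: str, tokens: dict) -> str:
    # Replace each [token]; unknown tokens are left as-is.
    return re.sub(r"\[([a-z]+)\]",
                  lambda m: str(tokens.get(m.group(1), m.group(0))), pattern)

tokens = {"supplier": "AcmeScans", "assetname": "OakBark01",
          "maptype": "COL", "resolution": "4K", "ext": "png"}
print(expand_pattern("[supplier]/[assetname]", tokens))                    # AcmeScans/OakBark01
print(expand_pattern("[assetname]_[maptype]_[resolution].[ext]", tokens))  # OakBark01_COL_4K.png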


@@ -0,0 +1,216 @@
import logging
from typing import List, Union, Optional, Tuple, Dict # Added Dict
from pathlib import Path # Added Path
from .base_stage import ProcessingStage
from ..asset_context import AssetProcessingContext, MergeTaskDefinition
from rule_structure import FileRule, ProcessingItem # Added ProcessingItem
from processing.utils import image_processing_utils as ipu # Added ipu
log = logging.getLogger(__name__)
class PrepareProcessingItemsStage(ProcessingStage):
"""
Identifies and prepares a unified list of ProcessingItem and MergeTaskDefinition objects
to be processed in subsequent stages. Performs initial validation and explodes
FileRules into specific ProcessingItems for each required output variant.
"""
def _get_target_resolutions(self, source_w: int, source_h: int, config_resolutions: dict, file_rule: FileRule) -> Dict[str, int]:
"""
Determines the target output resolutions for a given source image.
Placeholder logic: Uses all config resolutions smaller than or equal to source, plus PREVIEW if smaller.
Needs to be refined to consider FileRule.resolution_override and actual project requirements.
"""
# For now, very basic logic:
# If FileRule has a resolution_override (e.g., (1024,1024)), that might be the *only* target.
# This needs to be clarified. Assuming override means *only* that size.
if file_rule.resolution_override and isinstance(file_rule.resolution_override, tuple) and len(file_rule.resolution_override) == 2:
# How to get a "key" for an arbitrary override? For now, skip if overridden.
# This part of the design (how overrides interact with standard resolutions) is unclear.
# Let's assume for now that if resolution_override is set, we don't generate standard named resolutions.
# This is likely incorrect for a full implementation.
log.warning(f"FileRule '{file_rule.file_path}' has resolution_override. Standard resolution key generation skipped (needs design refinement).")
return {}
target_res = {}
max_source_dim = max(source_w, source_h)
for key, res_val in config_resolutions.items():
if key == "PREVIEW": # Always consider PREVIEW if its value is smaller
if res_val < max_source_dim: # Or just always include PREVIEW? For now, if smaller.
target_res[key] = res_val
elif res_val <= max_source_dim:
target_res[key] = res_val
# Ensure PREVIEW is included if it's defined and smaller than the smallest other target, or if there are no other targets.
# This logic is still a bit naive.
if "PREVIEW" in config_resolutions and config_resolutions["PREVIEW"] < max_source_dim:
# The default= guards against an empty generator when target_res holds only PREVIEW.
if not target_res or config_resolutions["PREVIEW"] < min((v for k, v in target_res.items() if k != "PREVIEW" and isinstance(v, int)), default=float("inf")):
target_res["PREVIEW"] = config_resolutions["PREVIEW"]
elif "PREVIEW" in config_resolutions and not target_res : # if only preview is applicable
if config_resolutions["PREVIEW"] <= max_source_dim:
target_res["PREVIEW"] = config_resolutions["PREVIEW"]
if not target_res and max_source_dim > 0 : # If no standard res is smaller, but image exists
log.debug(f"No standard resolutions from config are <= source dimension {max_source_dim}. Only LOWRES (if applicable) or PREVIEW (if smaller) might be generated.")
log.debug(f"Determined target resolutions for source {source_w}x{source_h}: {target_res}")
return target_res
def execute(self, context: AssetProcessingContext) -> AssetProcessingContext:
"""
Populates context.processing_items with ProcessingItem and MergeTaskDefinition objects.
"""
asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
log.info(f"Asset '{asset_name_for_log}': Preparing processing items...")
if context.status_flags.get('skip_asset', False):
log.info(f"Asset '{asset_name_for_log}': Skipping item preparation due to skip_asset flag.")
context.processing_items = []
return context
# Output list will now be List[Union[ProcessingItem, MergeTaskDefinition]]
items_to_process: List[Union[ProcessingItem, MergeTaskDefinition]] = []
preparation_failed = False
config = context.config_obj
# --- Process FileRules into ProcessingItems ---
if context.files_to_process:
source_path_valid = True
if not context.source_rule or not context.source_rule.input_path:
log.error(f"Asset '{asset_name_for_log}': SourceRule or SourceRule.input_path is not set.")
source_path_valid = False
preparation_failed = True
context.status_flags['prepare_items_failed_reason'] = "SourceRule.input_path missing"
elif not context.workspace_path or not context.workspace_path.is_dir():
log.error(f"Asset '{asset_name_for_log}': Workspace path '{context.workspace_path}' is invalid.")
source_path_valid = False
preparation_failed = True
context.status_flags['prepare_items_failed_reason'] = "Workspace path invalid"
if source_path_valid:
for file_rule in context.files_to_process:
log_prefix_fr = f"Asset '{asset_name_for_log}', FileRule '{file_rule.file_path}'"
if not file_rule.file_path:
log.warning(f"{log_prefix_fr}: Skipping FileRule with empty file_path.")
continue
item_type = file_rule.item_type_override or file_rule.item_type
if not item_type or item_type == "EXTRA" or not item_type.startswith("MAP_"):
log.debug(f"{log_prefix_fr}: Item type is '{item_type}'. Not creating map ProcessingItems.")
# Optionally, create a different kind of ProcessingItem for EXTRAs if they need pipeline processing
continue
source_image_path = context.workspace_path / file_rule.file_path
if not source_image_path.is_file():
log.error(f"{log_prefix_fr}: Source image file not found at '{source_image_path}'. Skipping this FileRule.")
preparation_failed = True # Individual file error can contribute to overall stage failure
context.status_flags.setdefault('prepare_items_file_errors', []).append(str(source_image_path))
continue
# Load image data to get dimensions and for LOWRES variant
# This data will be passed to subsequent stages via ProcessingItem.
# Consider caching this load if RegularMapProcessorStage also loads.
# For now, load here as dimensions are needed for LOWRES decision.
log.debug(f"{log_prefix_fr}: Loading image from '{source_image_path}' to determine dimensions and prepare items.")
source_image_data = ipu.load_image(str(source_image_path))
if source_image_data is None:
log.error(f"{log_prefix_fr}: Failed to load image from '{source_image_path}'. Skipping this FileRule.")
preparation_failed = True
context.status_flags.setdefault('prepare_items_file_errors', []).append(f"Failed to load {source_image_path}")
continue
orig_h, orig_w = source_image_data.shape[:2]
original_dimensions_wh = (orig_w, orig_h)
source_bit_depth = ipu.get_image_bit_depth(str(source_image_path)) # Get bit depth from file
source_channels = ipu.get_image_channels(source_image_data)
# Determine standard resolutions to generate
# This logic needs to be robust and consider file_rule.resolution_override, etc.
# Using a placeholder _get_target_resolutions for now.
target_resolutions = self._get_target_resolutions(orig_w, orig_h, config.image_resolutions, file_rule)
for res_key, _res_val in target_resolutions.items():
pi = ProcessingItem(
source_file_info_ref=str(source_image_path), # Using full path as ref
map_type_identifier=item_type,
resolution_key=res_key,
image_data=source_image_data.copy(), # Give each PI its own copy
original_dimensions=original_dimensions_wh,
current_dimensions=original_dimensions_wh,
bit_depth=source_bit_depth,
channels=source_channels,
status="Pending"
)
items_to_process.append(pi)
log.debug(f"{log_prefix_fr}: Created standard ProcessingItem: {pi.map_type_identifier}_{pi.resolution_key}")
# Create LOWRES variant if applicable
if config.enable_low_resolution_fallback and max(orig_w, orig_h) < config.low_resolution_threshold:
# Check if a LOWRES item for this source_file_info_ref already exists (e.g. if target_resolutions was empty)
# This check is important if _get_target_resolutions might return empty for small images.
# A more robust way is to ensure LOWRES is distinct from standard resolutions.
# Avoid duplicate LOWRES if _get_target_resolutions somehow already made one (unlikely with current placeholder)
is_lowres_already_added = any(p.resolution_key == "LOWRES" and p.source_file_info_ref == str(source_image_path) for p in items_to_process if isinstance(p, ProcessingItem))
if not is_lowres_already_added:
pi_lowres = ProcessingItem(
source_file_info_ref=str(source_image_path),
map_type_identifier=item_type,
resolution_key="LOWRES",
image_data=source_image_data.copy(), # Fresh copy for LOWRES
original_dimensions=original_dimensions_wh,
current_dimensions=original_dimensions_wh,
bit_depth=source_bit_depth,
channels=source_channels,
status="Pending"
)
items_to_process.append(pi_lowres)
log.info(f"{log_prefix_fr}: Created LOWRES ProcessingItem because {orig_w}x{orig_h} < {config.low_resolution_threshold}px threshold.")
else:
log.debug(f"{log_prefix_fr}: LOWRES item for this source already added by target resolution logic. Skipping duplicate LOWRES creation.")
elif config.enable_low_resolution_fallback:
log.debug(f"{log_prefix_fr}: Image {orig_w}x{orig_h} not below LOWRES threshold {config.low_resolution_threshold}px.")
else: # Source path not valid
log.warning(f"Asset '{asset_name_for_log}': Skipping creation of ProcessingItems from FileRules due to invalid source/workspace path.")
# --- Add MergeTaskDefinitions --- (This part remains largely the same)
merged_tasks_list = getattr(config, 'map_merge_rules', None)
if merged_tasks_list and isinstance(merged_tasks_list, list):
log.debug(f"Asset '{asset_name_for_log}': Found {len(merged_tasks_list)} merge tasks in global config.")
for task_idx, task_data in enumerate(merged_tasks_list):
if isinstance(task_data, dict):
task_key = f"merged_task_{task_idx}"
if not task_data.get('output_map_type') or not isinstance(task_data.get('inputs'), dict):
log.warning(f"Asset '{asset_name_for_log}', Task Index {task_idx}: Skipping merge task due to missing 'output_map_type' or valid 'inputs'. Task data: {task_data}")
continue
merge_def = MergeTaskDefinition(task_data=task_data, task_key=task_key)
items_to_process.append(merge_def)
log.info(f"Asset '{asset_name_for_log}': Added MergeTaskDefinition: Key='{merge_def.task_key}', OutputType='{merge_def.task_data.get('output_map_type', 'N/A')}'")
else:
log.warning(f"Asset '{asset_name_for_log}': Item at index {task_idx} in config.map_merge_rules is not a dict. Skipping. Item: {task_data}")
# ... (rest of merge task handling) ...
if not items_to_process and not preparation_failed: # Check preparation_failed too
log.info(f"Asset '{asset_name_for_log}': No valid items (ProcessingItem or MergeTaskDefinition) found to process.")
context.processing_items = items_to_process
context.intermediate_results = {} # Initialize intermediate results storage
if preparation_failed:
# Set a flag indicating failure during preparation, even if some items might have been added before failure
context.status_flags['prepare_items_failed'] = True
log.error(f"Asset '{asset_name_for_log}': Item preparation failed. Reason: {context.status_flags.get('prepare_items_failed_reason', 'Unknown')}")
# Optionally, clear items if failure means nothing should proceed
# context.processing_items = []
log.info(f"Asset '{asset_name_for_log}': Finished preparing items. Found {len(context.processing_items)} valid items.")
return context
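
A worked example of the resolution-selection idea in _get_target_resolutions: keep every configured resolution whose max dimension does not exceed the source's longest edge. This mirrors only the placeholder logic; override and PREVIEW edge cases are omitted.

config_resolutions = {"8K": 8192, "4K": 4096, "2K": 2048, "1K": 1024, "PREVIEW": 512}
source_w, source_h = 4096, 2048
max_source_dim = max(source_w, source_h)
targets = {k: v for k, v in config_resolutions.items() if v <= max_source_dim}
print(targets)  # {'4K': 4096, '2K': 2048, '1K': 1024, 'PREVIEW': 512}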


@@ -0,0 +1,220 @@
import logging
import re
from pathlib import Path
from typing import List, Optional, Tuple, Dict
import cv2
import numpy as np
from .base_stage import ProcessingStage # Assuming base_stage is in the same directory
from ..asset_context import AssetProcessingContext, ProcessedRegularMapData
from rule_structure import FileRule, AssetRule
from processing.utils import image_processing_utils as ipu # Absolute import
from utils.path_utils import get_filename_friendly_map_type # Absolute import
log = logging.getLogger(__name__)
class RegularMapProcessorStage(ProcessingStage):
"""
Processes a single regular texture map defined by a FileRule.
Loads the image, determines map type, applies transformations,
and returns the processed data.
"""
# --- Helper Methods (Adapted from IndividualMapProcessingStage) ---
def _get_suffixed_internal_map_type(
self,
asset_rule: Optional[AssetRule],
current_file_rule: FileRule,
initial_internal_map_type: str,
respect_variant_map_types: List[str],
asset_name_for_log: str
) -> str:
"""
Determines the potentially suffixed internal map type (e.g., MAP_COL-1).
"""
final_internal_map_type = initial_internal_map_type # Default
base_map_type_match = re.match(r"(MAP_[A-Z]+)", initial_internal_map_type)
if not base_map_type_match or not asset_rule or not asset_rule.files:
return final_internal_map_type # Cannot determine suffix without base type or asset rule files
true_base_map_type = base_map_type_match.group(1) # This is "MAP_XXX"
# Find all FileRules in the asset with the same base map type
peers_of_same_base_type = []
for fr_asset in asset_rule.files:
fr_asset_item_type = fr_asset.item_type_override or fr_asset.item_type or "UnknownMapType"
fr_asset_base_match = re.match(r"(MAP_[A-Z]+)", fr_asset_item_type)
if fr_asset_base_match and fr_asset_base_match.group(1) == true_base_map_type:
peers_of_same_base_type.append(fr_asset)
num_occurrences = len(peers_of_same_base_type)
current_instance_index = 0 # 1-based index
try:
# Find the index based on the FileRule object itself (requires object identity)
current_instance_index = peers_of_same_base_type.index(current_file_rule) + 1
except ValueError:
# Fallback: try matching by file_path if object identity fails (less reliable)
try:
current_instance_index = [fr.file_path for fr in peers_of_same_base_type].index(current_file_rule.file_path) + 1
log.warning(f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}': Found peer index using file_path fallback for suffixing.")
except (ValueError, AttributeError): # Catch AttributeError if file_path is None
log.warning(
f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}' (Initial Type: '{initial_internal_map_type}', Base: '{true_base_map_type}'): "
f"Could not find its own instance in the list of {num_occurrences} peers from asset_rule.files using object identity or path. Suffixing may be incorrect."
)
# Keep index 0, suffix logic below will handle it
# Determine Suffix
map_type_for_respect_check = true_base_map_type.replace("MAP_", "") # e.g., "COL"
is_in_respect_list = map_type_for_respect_check in respect_variant_map_types
suffix_to_append = ""
if num_occurrences > 1:
if current_instance_index > 0:
suffix_to_append = f"-{current_instance_index}"
else:
# If index is still 0 (not found), don't add suffix to avoid ambiguity
log.warning(f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}': Index for multi-occurrence map type '{true_base_map_type}' (count: {num_occurrences}) not determined. Omitting numeric suffix.")
elif num_occurrences == 1 and is_in_respect_list:
suffix_to_append = "-1" # Add suffix even for single instance if in respect list
if suffix_to_append:
final_internal_map_type = true_base_map_type + suffix_to_append
if final_internal_map_type != initial_internal_map_type:
log.debug(f"Asset '{asset_name_for_log}', FileRule path '{current_file_rule.file_path}': Suffixed internal map type determined: '{initial_internal_map_type}' -> '{final_internal_map_type}'")
return final_internal_map_type
# --- Execute Method ---
def execute(
self,
context: AssetProcessingContext,
file_rule: FileRule # Specific item passed by orchestrator
) -> ProcessedRegularMapData:
"""
Processes the given FileRule item.
"""
asset_name_for_log = context.asset_rule.asset_name if context.asset_rule else "Unknown Asset"
log_prefix = f"Asset '{asset_name_for_log}', File '{file_rule.file_path}'"
log.info(f"{log_prefix}: Processing Regular Map.")
# Initialize output object with default failure state
result = ProcessedRegularMapData(
processed_image_data=np.array([]), # Placeholder
final_internal_map_type="Unknown",
source_file_path=Path(file_rule.file_path or "InvalidPath"),
original_bit_depth=None,
original_dimensions=None,
transformations_applied=[],
status="Failed",
error_message="Initialization error"
)
try:
# --- Configuration ---
config = context.config_obj
file_type_definitions = getattr(config, "FILE_TYPE_DEFINITIONS", {})
respect_variant_map_types = getattr(config, "respect_variant_map_types", [])
invert_normal_green = config.invert_normal_green_globally
# --- Determine Map Type (with suffix) ---
initial_internal_map_type = file_rule.item_type_override or file_rule.item_type or "UnknownMapType"
if not initial_internal_map_type or initial_internal_map_type == "UnknownMapType":
result.error_message = "Map type (item_type) not defined in FileRule."
log.error(f"{log_prefix}: {result.error_message}")
return result # Early exit
# Explicitly skip if the determined type doesn't start with "MAP_"
if not initial_internal_map_type.startswith("MAP_"):
result.status = "Skipped (Invalid Type)"
result.error_message = f"FileRule item_type '{initial_internal_map_type}' does not start with 'MAP_'. Skipping processing."
log.warning(f"{log_prefix}: {result.error_message}")
return result # Early exit
processing_map_type = self._get_suffixed_internal_map_type(
context.asset_rule, file_rule, initial_internal_map_type, respect_variant_map_types, asset_name_for_log
)
result.final_internal_map_type = processing_map_type # Store initial suffixed type
# --- Find and Load Source File ---
if not file_rule.file_path: # Should have been caught by Prepare stage, but double-check
result.error_message = "FileRule has empty file_path."
log.error(f"{log_prefix}: {result.error_message}")
return result
source_base_path = context.workspace_path
potential_source_path = source_base_path / file_rule.file_path
source_file_path_found: Optional[Path] = None
if potential_source_path.is_file():
source_file_path_found = potential_source_path
log.info(f"{log_prefix}: Found source file: {source_file_path_found}")
else:
# Optional: Add globbing fallback if needed, similar to original stage
log.warning(f"{log_prefix}: Source file not found directly at '{potential_source_path}'. Add globbing if necessary.")
result.error_message = f"Source file not found at '{potential_source_path}'"
log.error(f"{log_prefix}: {result.error_message}")
return result
result.source_file_path = source_file_path_found # Update result with found path
# Load image
source_image_data = ipu.load_image(str(source_file_path_found))
if source_image_data is None:
result.error_message = f"Failed to load image from '{source_file_path_found}'."
log.error(f"{log_prefix}: {result.error_message}")
return result
original_height, original_width = source_image_data.shape[:2]
result.original_dimensions = (original_width, original_height)
log.debug(f"{log_prefix}: Loaded image {result.original_dimensions[0]}x{result.original_dimensions[1]}.")
# Get original bit depth
try:
result.original_bit_depth = ipu.get_image_bit_depth(str(source_file_path_found))
log.info(f"{log_prefix}: Determined source bit depth: {result.original_bit_depth}")
except Exception as e:
log.warning(f"{log_prefix}: Could not determine source bit depth for {source_file_path_found}: {e}. Setting to None.")
result.original_bit_depth = None # Indicate failure to determine
# --- Apply Transformations ---
transformed_image_data, final_map_type, transform_notes = ipu.apply_common_map_transformations(
source_image_data.copy(), # Pass a copy to avoid modifying original load
processing_map_type,
invert_normal_green,
file_type_definitions,
log_prefix
)
result.processed_image_data = transformed_image_data
result.final_internal_map_type = final_map_type # Update if Gloss->Rough changed it
result.transformations_applied = transform_notes
# --- Determine Resolution Key for LOWRES ---
if config.enable_low_resolution_fallback and result.original_dimensions:
w, h = result.original_dimensions
if max(w, h) < config.low_resolution_threshold:
result.resolution_key = "LOWRES"
log.info(f"{log_prefix}: Image dimensions ({w}x{h}) are below threshold ({config.low_resolution_threshold}px). Flagging as LOWRES.")
# --- Success ---
result.status = "Processed"
result.error_message = None
log.info(f"{log_prefix}: Successfully processed regular map. Final type: '{result.final_internal_map_type}', ResolutionKey: {result.resolution_key}.")
except Exception as e:
log.exception(f"{log_prefix}: Unhandled exception during processing: {e}")
result.status = "Failed"
result.error_message = f"Unhandled exception: {e}"
# Ensure image data is empty on failure if it wasn't set
if result.processed_image_data is None or result.processed_image_data.size == 0:
result.processed_image_data = np.array([])
return result
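
Distilled illustration of the suffixing rule in _get_suffixed_internal_map_type: with several files of the same base type, each instance gets a 1-based numeric suffix; a lone instance is suffixed only when its base type is in the respect list. suffix_type and the sample values are invented for the example.

def suffix_type(base: str, index: int, count: int, respect: list) -> str:
    if count > 1 and index > 0:
        return f"{base}-{index}"
    if count == 1 and base.replace("MAP_", "") in respect:
        return f"{base}-1"
    return base

print(suffix_type("MAP_COL", 1, 2, []))         # MAP_COL-1
print(suffix_type("MAP_COL", 2, 2, []))         # MAP_COL-2
print(suffix_type("MAP_MASK", 1, 1, ["MASK"]))  # MAP_MASK-1
print(suffix_type("MAP_NRM", 1, 1, []))         # MAP_NRM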


@@ -0,0 +1,98 @@
import logging
from typing import List, Dict, Optional # Added Optional
import numpy as np
from .base_stage import ProcessingStage
# Import necessary context classes and utils
from ..asset_context import SaveVariantsInput, SaveVariantsOutput
from processing.utils import image_saving_utils as isu # Absolute import
from utils.path_utils import get_filename_friendly_map_type # Absolute import
log = logging.getLogger(__name__)
class SaveVariantsStage(ProcessingStage):
"""
Takes final processed image data and configuration, calls the
save_image_variants utility, and returns the results.
"""
def execute(self, input_data: SaveVariantsInput) -> SaveVariantsOutput:
"""
Calls isu.save_image_variants with data from input_data.
"""
internal_map_type = input_data.internal_map_type
# The input_data for SaveVariantsStage doesn't directly contain the ProcessingItem.
# It receives data *derived* from a ProcessingItem by previous stages.
# For debugging, we'd need to pass more context or rely on what's in output_filename_pattern_tokens.
resolution_key_from_tokens = input_data.output_filename_pattern_tokens.get('resolution', 'UnknownResKey')
log_prefix = f"Save Variants Stage (Type: {internal_map_type}, ResKey: {resolution_key_from_tokens})"
log.info(f"{log_prefix}: Starting.")
log.debug(f"{log_prefix}: Input image_data shape: {input_data.image_data.shape if input_data.image_data is not None else 'None'}")
log.debug(f"{log_prefix}: Input source_bit_depth_info: {input_data.source_bit_depth_info}")
log.debug(f"{log_prefix}: Configured image_resolutions for saving: {input_data.image_resolutions}")
log.debug(f"{log_prefix}: Output filename pattern tokens: {input_data.output_filename_pattern_tokens}")
# Initialize output object with default failure state
result = SaveVariantsOutput(
saved_files_details=[],
status="Failed",
error_message="Initialization error"
)
if input_data.image_data is None or input_data.image_data.size == 0:
result.error_message = "Input image data is None or empty."
log.error(f"{log_prefix}: {result.error_message}")
return result
try:
# --- Prepare arguments for save_image_variants ---
# Get the filename-friendly base map type using the helper
# This assumes the save utility expects the friendly type. Adjust if needed.
base_map_type_friendly = get_filename_friendly_map_type(
internal_map_type, input_data.file_type_defs
)
log.debug(f"{log_prefix}: Using filename-friendly base type '{base_map_type_friendly}' for saving.")
save_args = {
"source_image_data": input_data.image_data,
"base_map_type": base_map_type_friendly, # Use the friendly type
"source_bit_depth_info": input_data.source_bit_depth_info,
"image_resolutions": input_data.image_resolutions,
"file_type_defs": input_data.file_type_defs,
"output_format_8bit": input_data.output_format_8bit,
"output_format_16bit_primary": input_data.output_format_16bit_primary,
"output_format_16bit_fallback": input_data.output_format_16bit_fallback,
"png_compression_level": input_data.png_compression_level,
"jpg_quality": input_data.jpg_quality,
"output_filename_pattern_tokens": input_data.output_filename_pattern_tokens,
"output_filename_pattern": input_data.output_filename_pattern,
"resolution_threshold_for_jpg": input_data.resolution_threshold_for_jpg, # Added
}
log.debug(f"{log_prefix}: Calling save_image_variants utility with args: {save_args}")
saved_files_details: List[Dict] = isu.save_image_variants(**save_args)
if saved_files_details:
log.info(f"{log_prefix}: Save utility completed successfully. Saved {len(saved_files_details)} variants: {[details.get('filepath') for details in saved_files_details]}")
result.saved_files_details = saved_files_details
result.status = "Processed"
result.error_message = None
else:
# This might not be an error; perhaps no variants were configured.
log.warning(f"{log_prefix}: Save utility returned no saved file details. This might be expected if no resolutions/formats matched.")
result.saved_files_details = []
result.status = "Processed (No Output)" # Indicate processing happened but nothing saved
result.error_message = "Save utility reported no files saved (check configuration/resolutions)."
except Exception as e:
log.exception(f"{log_prefix}: Error calling or executing save_image_variants: {e}")
result.status = "Failed"
result.error_message = f"Save utility call failed: {e}"
result.saved_files_details = [] # Ensure empty list on error
return result
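
The save utility receives separate 8-bit and 16-bit output formats. A hedged sketch of the kind of format choice this implies; the real save_image_variants also consults file_type_defs and its bit-depth rules, which are not shown here:

def pick_output_format(bit_depth, fmt_8bit="jpg",
                       fmt_16bit_primary="png", fmt_16bit_fallback="tif"):
    # Unknown or 8-bit sources use the 8-bit format; deeper sources prefer
    # the primary 16-bit format, falling back when none is configured.
    if bit_depth is None or bit_depth <= 8:
        return fmt_8bit
    return fmt_16bit_primary or fmt_16bit_fallback

print(pick_output_format(8), pick_output_format(16))  # jpg png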


@@ -56,5 +56,12 @@ class SupplierDeterminationStage(ProcessingStage):
if 'supplier_error' in context.status_flags:
del context.status_flags['supplier_error']
# merged_image_tasks are loaded from app_settings.json into Configuration object,
# not from supplier-specific presets.
# Ensure the attribute exists on context for PrepareProcessingItemsStage,
# which will get it from context.config_obj.
if not hasattr(context, 'merged_image_tasks'):
context.merged_image_tasks = []
return context
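
The guard above is a defensive-default pattern: guarantee the attribute exists on the context before later stages read it. In isolation:

class _Ctx:
    pass

context = _Ctx()
if not hasattr(context, "merged_image_tasks"):
    context.merged_image_tasks = []
print(context.merged_image_tasks)  # []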


@@ -163,6 +163,47 @@ def calculate_target_dimensions(
# --- Image Statistics ---
def get_image_bit_depth(image_path_str: str) -> Optional[int]:
"""
Determines the bit depth of an image file.
"""
try:
# Use IMREAD_UNCHANGED to preserve original bit depth
img = cv2.imread(image_path_str, cv2.IMREAD_UNCHANGED)
if img is None:
# logger.error(f"Failed to read image for bit depth: {image_path_str}") # Use print for utils
print(f"Warning: Failed to read image for bit depth: {image_path_str}")
return None
dtype_to_bit_depth = {
np.dtype('uint8'): 8,
np.dtype('uint16'): 16,
np.dtype('float32'): 32, # Typically for EXR etc.
np.dtype('int8'): 8, # Unlikely for images but good to have
np.dtype('int16'): 16, # Unlikely
# Add other dtypes if necessary
}
bit_depth = dtype_to_bit_depth.get(img.dtype)
if bit_depth is None:
# logger.warning(f"Unknown dtype {img.dtype} for image {image_path_str}, cannot determine bit depth.") # Use print for utils
print(f"Warning: Unknown dtype {img.dtype} for image {image_path_str}, cannot determine bit depth.")
pass # Return None
return bit_depth
except Exception as e:
# logger.error(f"Error getting bit depth for {image_path_str}: {e}") # Use print for utils
print(f"Error getting bit depth for {image_path_str}: {e}")
return None
def get_image_channels(image_data: np.ndarray) -> Optional[int]:
"""Determines the number of channels in an image."""
if image_data is None:
return None
if len(image_data.shape) == 2: # Grayscale
return 1
elif len(image_data.shape) == 3: # Color
return image_data.shape[2]
return None # Unknown shape
def calculate_image_stats(image_data: np.ndarray) -> Optional[Dict]:
"""
Calculates min, max, mean for a given numpy image array.
@@ -396,3 +437,89 @@ def save_image(
except Exception: # as e:
# print(f"Error saving image {path_obj}: {e}") # Optional: for debugging utils
return False
# --- Common Map Transformations ---
import re
import logging
ipu_log = logging.getLogger(__name__)
def apply_common_map_transformations(
image_data: np.ndarray,
processing_map_type: str, # The potentially suffixed internal type
invert_normal_green: bool,
file_type_definitions: Dict[str, Dict],
log_prefix: str
) -> Tuple[np.ndarray, str, List[str]]:
"""
Applies common in-memory transformations (Gloss-to-Rough, Normal Green Invert).
Returns potentially transformed image data, potentially updated map type, and notes.
"""
transformation_notes = []
current_image_data = image_data # Start with original data
updated_processing_map_type = processing_map_type # Start with original type
# Gloss-to-Rough
# Check if the base type is Gloss (before suffix)
base_map_type_match = re.match(r"(MAP_GLOSS)", processing_map_type)
if base_map_type_match:
ipu_log.info(f"{log_prefix}: Applying Gloss-to-Rough conversion.")
inversion_succeeded = False
if np.issubdtype(current_image_data.dtype, np.floating):
current_image_data = 1.0 - current_image_data
current_image_data = np.clip(current_image_data, 0.0, 1.0)
ipu_log.debug(f"{log_prefix}: Inverted float image data for Gloss->Rough.")
inversion_succeeded = True
elif np.issubdtype(current_image_data.dtype, np.integer):
max_val = np.iinfo(current_image_data.dtype).max
current_image_data = max_val - current_image_data
ipu_log.debug(f"{log_prefix}: Inverted integer image data (max_val: {max_val}) for Gloss->Rough.")
inversion_succeeded = True
else:
ipu_log.error(f"{log_prefix}: Unsupported image data type {current_image_data.dtype} for GLOSS map. Cannot invert.")
transformation_notes.append("Gloss-to-Rough FAILED (unsupported dtype)")
if inversion_succeeded:
# Update the type string itself (e.g., MAP_GLOSS-1 -> MAP_ROUGH-1)
updated_processing_map_type = processing_map_type.replace("GLOSS", "ROUGH")
ipu_log.info(f"{log_prefix}: Map type updated: '{processing_map_type}' -> '{updated_processing_map_type}'")
transformation_notes.append("Gloss-to-Rough applied")
# Normal Green Invert
# Check if the base type is Normal (before suffix)
base_map_type_match_nrm = re.match(r"(MAP_NRM)", processing_map_type)
if base_map_type_match_nrm and invert_normal_green:
ipu_log.info(f"{log_prefix}: Applying Normal Map Green Channel Inversion (Global Setting).")
current_image_data = invert_normal_map_green_channel(current_image_data)
transformation_notes.append("Normal Green Inverted (Global)")
return current_image_data, updated_processing_map_type, transformation_notes
# --- Normal Map Utilities ---
def invert_normal_map_green_channel(normal_map: np.ndarray) -> np.ndarray:
"""
Inverts the green channel of a normal map.
    Works for both RGB(A) and BGR(A) layouts, since the green channel sits at index 1 in either order.
"""
if normal_map is None or len(normal_map.shape) < 3 or normal_map.shape[2] < 3:
# Not a valid color image with at least 3 channels
return normal_map
# Ensure data is mutable
inverted_map = normal_map.copy()
# Invert the green channel (index 1)
# Handle different data types
if np.issubdtype(inverted_map.dtype, np.floating):
inverted_map[:, :, 1] = 1.0 - inverted_map[:, :, 1]
elif np.issubdtype(inverted_map.dtype, np.integer):
max_val = np.iinfo(inverted_map.dtype).max
inverted_map[:, :, 1] = max_val - inverted_map[:, :, 1]
else:
# Unsupported dtype, return original
print(f"Warning: Unsupported dtype {inverted_map.dtype} for normal map green channel inversion.")
return normal_map
return inverted_map
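# --- Usage sketch (illustrative only) ---
# A minimal example of the transformations above, assuming an 8-bit gloss map;
# real presets supply the map type and definitions. Note that
# file_type_definitions is accepted but unused by the gloss branch shown here.
#
# gloss = np.full((4, 4), 200, dtype=np.uint8)
# data, new_type, notes = apply_common_map_transformations(
#     image_data=gloss,
#     processing_map_type="MAP_GLOSS",
#     invert_normal_green=False,
#     file_type_definitions={},
#     log_prefix="demo",
# )
# # new_type == "MAP_ROUGH"; data is 255 - 200 == 55 everywhere;
# # notes == ["Gloss-to-Rough applied"]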

View File

@@ -0,0 +1,297 @@
import logging
import cv2
import numpy as np
from pathlib import Path
from typing import List, Dict, Any, Tuple, Optional
# Potentially import ipu from ...utils import image_processing_utils as ipu
# Assuming ipu is available in the same utils directory or parent
try:
from . import image_processing_utils as ipu
except ImportError:
# Fallback for different import structures if needed, adjust based on actual project structure
# For this project structure, the relative import should work.
logging.warning("Could not import image_processing_utils using relative path. Attempting absolute import.")
try:
from processing.utils import image_processing_utils as ipu
except ImportError:
logging.error("Could not import image_processing_utils.")
ipu = None # Handle case where ipu is not available
logger = logging.getLogger(__name__)
def save_image_variants(
source_image_data: np.ndarray,
base_map_type: str, # Filename-friendly map type
source_bit_depth_info: List[Optional[int]],
image_resolutions: Dict[str, int],
file_type_defs: Dict[str, Dict[str, Any]],
output_format_8bit: str,
output_format_16bit_primary: str,
output_format_16bit_fallback: str,
png_compression_level: int,
jpg_quality: int,
output_filename_pattern_tokens: Dict[str, Any], # Must include 'output_base_directory': Path and 'asset_name': str
output_filename_pattern: str,
resolution_threshold_for_jpg: Optional[int] = None, # Added
# Consider adding ipu or relevant parts of it if not importing globally
) -> List[Dict[str, Any]]:
"""
Centralizes image saving logic, generating and saving various resolution variants
according to configuration.
Args:
source_image_data (np.ndarray): High-res image data (in memory, potentially transformed).
base_map_type (str): Final map type (e.g., "COL", "ROUGH", "NORMAL", "MAP_NRMRGH").
This is the filename-friendly map type.
source_bit_depth_info (List[Optional[int]]): List of original source bit depth(s)
(e.g., [8], [16], [8, 16]). Can contain None.
image_resolutions (Dict[str, int]): Dictionary mapping resolution keys (e.g., "4K")
to max dimensions (e.g., 4096).
file_type_defs (Dict[str, Dict[str, Any]]): Dictionary defining properties for map types,
including 'bit_depth_rule'.
output_format_8bit (str): File extension for 8-bit output (e.g., "jpg", "png").
output_format_16bit_primary (str): Primary file extension for 16-bit output (e.g., "png", "tif").
output_format_16bit_fallback (str): Fallback file extension for 16-bit output.
png_compression_level (int): Compression level for PNG output (0-9).
jpg_quality (int): Quality level for JPG output (0-100).
output_filename_pattern_tokens (Dict[str, Any]): Dictionary of tokens for filename
pattern replacement. Must include
'output_base_directory' (Path) and
'asset_name' (str).
output_filename_pattern (str): Pattern string for generating output filenames
(e.g., "[assetname]_[maptype]_[resolution].[ext]").
Returns:
List[Dict[str, Any]]: A list of dictionaries, each containing details about a saved file.
Example: [{'path': str, 'resolution_key': str, 'format': str,
'bit_depth': int, 'dimensions': (w,h)}, ...]
"""
if ipu is None:
logger.error("image_processing_utils is not available. Cannot save images.")
return []
saved_file_details = []
source_h, source_w = source_image_data.shape[:2]
source_max_dim = max(source_h, source_w)
# 1. Use provided configuration inputs (already available as function arguments)
logger.info(f"SaveImageVariants: Starting for map type: {base_map_type}. Source shape: {source_image_data.shape}, Source bit depths: {source_bit_depth_info}")
logger.debug(f"SaveImageVariants: Resolutions: {image_resolutions}, File Type Defs: {file_type_defs.keys()}, Output Formats: 8bit={output_format_8bit}, 16bit_pri={output_format_16bit_primary}, 16bit_fall={output_format_16bit_fallback}")
logger.debug(f"SaveImageVariants: PNG Comp: {png_compression_level}, JPG Qual: {jpg_quality}")
logger.debug(f"SaveImageVariants: Output Tokens: {output_filename_pattern_tokens}, Output Pattern: {output_filename_pattern}")
logger.debug(f"SaveImageVariants: Received resolution_threshold_for_jpg: {resolution_threshold_for_jpg}") # Log received threshold
# 2. Determine Target Bit Depth
target_bit_depth = 8 # Default
bit_depth_rule = file_type_defs.get(base_map_type, {}).get('bit_depth_rule', 'force_8bit')
if bit_depth_rule not in ['force_8bit', 'respect_inputs']:
logger.warning(f"Unknown bit_depth_rule '{bit_depth_rule}' for map type '{base_map_type}'. Defaulting to 'force_8bit'.")
bit_depth_rule = 'force_8bit'
if bit_depth_rule == 'respect_inputs':
# Check if any source bit depth is > 8, ignoring None
if any(depth is not None and depth > 8 for depth in source_bit_depth_info):
target_bit_depth = 16
else:
target_bit_depth = 8
logger.info(f"Bit depth rule 'respect_inputs' applied. Source bit depths: {source_bit_depth_info}. Target bit depth: {target_bit_depth}")
else: # force_8bit
target_bit_depth = 8
logger.info(f"Bit depth rule 'force_8bit' applied. Target bit depth: {target_bit_depth}")
# 3. Determine Output File Format(s)
if target_bit_depth == 8:
output_ext = output_format_8bit.lstrip('.').lower()
elif target_bit_depth == 16:
# Prioritize primary, fallback to fallback if primary is not supported/desired
# For now, just use primary. More complex logic might be needed later.
output_ext = output_format_16bit_primary.lstrip('.').lower()
# Basic fallback logic example (can be expanded)
if output_ext not in ['png', 'tif']: # Assuming common 16-bit formats
output_ext = output_format_16bit_fallback.lstrip('.').lower()
logger.warning(f"Primary 16-bit format '{output_format_16bit_primary}' might not be suitable. Using fallback '{output_format_16bit_fallback}'.")
else:
logger.error(f"Unsupported target bit depth: {target_bit_depth}. Defaulting to 8-bit format.")
output_ext = output_format_8bit.lstrip('.').lower()
current_output_ext = output_ext # Store the initial extension based on bit depth
logger.info(f"SaveImageVariants: Determined target bit depth: {target_bit_depth}, Initial output format: {current_output_ext} for map type {base_map_type}")
# 4. Generate and Save Resolution Variants
# Sort resolutions by max dimension descending
sorted_resolutions = sorted(image_resolutions.items(), key=lambda item: item[1], reverse=True)
for res_key, res_max_dim in sorted_resolutions:
logger.info(f"SaveImageVariants: Processing variant {res_key} ({res_max_dim}px) for {base_map_type}")
# --- Prevent Upscaling ---
# Skip this resolution variant if its target dimension is larger than the source image's largest dimension.
if res_max_dim > source_max_dim:
logger.info(f"SaveImageVariants: Skipping variant {res_key} ({res_max_dim}px) for {base_map_type} because target resolution is larger than source ({source_max_dim}px).")
continue # Skip to the next resolution
# Calculate target dimensions for valid variants (equal or smaller than source)
if source_max_dim == res_max_dim:
# Use source dimensions if target is equal
target_w_res, target_h_res = source_w, source_h
logger.info(f"SaveImageVariants: Using source resolution ({source_w}x{source_h}) for {res_key} variant of {base_map_type} as target matches source.")
else: # Downscale (source_max_dim > res_max_dim)
# Downscale, maintaining aspect ratio
aspect_ratio = source_w / source_h
if source_w >= source_h: # Use >= to handle square images correctly
target_w_res = res_max_dim
target_h_res = max(1, int(res_max_dim / aspect_ratio)) # Ensure height is at least 1
else:
target_h_res = res_max_dim
target_w_res = max(1, int(res_max_dim * aspect_ratio)) # Ensure width is at least 1
logger.info(f"SaveImageVariants: Calculated downscale for {base_map_type} {res_key}: from ({source_w}x{source_h}) to ({target_w_res}x{target_h_res})")
# Resize source_image_data (only if necessary)
if (target_w_res, target_h_res) == (source_w, source_h):
# No resize needed if dimensions match
variant_data = source_image_data.copy() # Copy to avoid modifying original if needed later
logger.debug(f"SaveImageVariants: No resize needed for {base_map_type} {res_key}, using copy of source data.")
else:
# Perform resize only if dimensions differ (i.e., downscaling)
interpolation_method = cv2.INTER_AREA # Good for downscaling
try:
variant_data = ipu.resize_image(source_image_data, target_w_res, target_h_res, interpolation=interpolation_method)
if variant_data is None: # Check if resize failed
raise ValueError("ipu.resize_image returned None")
logger.debug(f"SaveImageVariants: Resized variant data shape for {base_map_type} {res_key}: {variant_data.shape}")
except Exception as e:
logger.error(f"SaveImageVariants: Error resizing image for {base_map_type} {res_key} variant: {e}")
continue # Skip this variant if resizing fails
# Filename Construction
current_tokens = output_filename_pattern_tokens.copy()
current_tokens['maptype'] = base_map_type
current_tokens['resolution'] = res_key
# Determine final extension for this variant, considering JPG threshold
final_variant_ext = current_output_ext
# --- Start JPG Threshold Logging ---
logger.debug(f"SaveImageVariants: JPG Threshold Check for {base_map_type} {res_key}:")
logger.debug(f" - target_bit_depth: {target_bit_depth}")
logger.debug(f" - resolution_threshold_for_jpg: {resolution_threshold_for_jpg}")
logger.debug(f" - target_w_res: {target_w_res}, target_h_res: {target_h_res}")
logger.debug(f" - max(target_w_res, target_h_res): {max(target_w_res, target_h_res)}")
logger.debug(f" - current_output_ext: {current_output_ext}")
cond_bit_depth = target_bit_depth == 8
cond_threshold_not_none = resolution_threshold_for_jpg is not None
cond_res_exceeded = False
if cond_threshold_not_none: # Avoid comparison if threshold is None
cond_res_exceeded = max(target_w_res, target_h_res) > resolution_threshold_for_jpg
cond_is_png = current_output_ext == 'png'
logger.debug(f" - Condition (target_bit_depth == 8): {cond_bit_depth}")
logger.debug(f" - Condition (resolution_threshold_for_jpg is not None): {cond_threshold_not_none}")
logger.debug(f" - Condition (max(res) > threshold): {cond_res_exceeded}")
logger.debug(f" - Condition (current_output_ext == 'png'): {cond_is_png}")
# --- End JPG Threshold Logging ---
if cond_bit_depth and cond_threshold_not_none and cond_res_exceeded and cond_is_png:
final_variant_ext = 'jpg'
logger.info(f"SaveImageVariants: Overriding 8-bit PNG to JPG for {base_map_type} {res_key} due to resolution {max(target_w_res, target_h_res)}px > threshold {resolution_threshold_for_jpg}px.")
current_tokens['ext'] = final_variant_ext
try:
# Replace placeholders in the pattern
filename = output_filename_pattern
for token, value in current_tokens.items():
# Ensure value is string for replacement, handle Path objects later
filename = filename.replace(f"[{token}]", str(value))
# Construct full output path
output_base_directory = current_tokens.get('output_base_directory')
if not isinstance(output_base_directory, Path):
logger.error(f"'output_base_directory' token is missing or not a Path object: {output_base_directory}. Cannot save file.")
continue # Skip this variant
output_path = output_base_directory / filename
logger.info(f"SaveImageVariants: Constructed output path for {base_map_type} {res_key}: {output_path}")
# Ensure parent directory exists
output_path.parent.mkdir(parents=True, exist_ok=True)
logger.debug(f"SaveImageVariants: Ensured directory exists for {base_map_type} {res_key}: {output_path.parent}")
except Exception as e:
logger.error(f"SaveImageVariants: Error constructing filepath for {base_map_type} {res_key} variant: {e}")
continue # Skip this variant if path construction fails
# Prepare Save Parameters
save_params_cv2 = []
if final_variant_ext == 'jpg': # Check against final_variant_ext
save_params_cv2.append(cv2.IMWRITE_JPEG_QUALITY)
save_params_cv2.append(jpg_quality)
logger.debug(f"SaveImageVariants: Using JPG quality: {jpg_quality} for {base_map_type} {res_key}")
elif final_variant_ext == 'png': # Check against final_variant_ext
save_params_cv2.append(cv2.IMWRITE_PNG_COMPRESSION)
save_params_cv2.append(png_compression_level)
logger.debug(f"SaveImageVariants: Using PNG compression level: {png_compression_level} for {base_map_type} {res_key}")
# Add other format specific parameters if needed (e.g., TIFF compression)
# Bit Depth Conversion is handled by ipu.save_image via output_dtype_target
image_data_for_save = variant_data # Use the resized variant data directly
# Determine the target dtype for ipu.save_image
output_dtype_for_save: Optional[np.dtype] = None
if target_bit_depth == 8:
output_dtype_for_save = np.uint8
elif target_bit_depth == 16:
output_dtype_for_save = np.uint16
# Add other target bit depths like float16/float32 if necessary
# elif target_bit_depth == 32: # Assuming float32 for EXR etc.
# output_dtype_for_save = np.float32
# Saving
try:
# ipu.save_image is expected to handle the actual cv2.imwrite call
logger.debug(f"SaveImageVariants: Attempting to save {base_map_type} {res_key} to {output_path} with params {save_params_cv2}, target_dtype: {output_dtype_for_save}")
success = ipu.save_image(
str(output_path),
image_data_for_save,
output_dtype_target=output_dtype_for_save, # Pass the target dtype
params=save_params_cv2
)
if success:
logger.info(f"SaveImageVariants: Successfully saved {base_map_type} {res_key} variant to {output_path}")
# Collect details for the returned list
saved_file_details.append({
'path': str(output_path),
'resolution_key': res_key,
'format': final_variant_ext, # Log the actual saved format
'bit_depth': target_bit_depth,
'dimensions': (target_w_res, target_h_res)
})
else:
logger.error(f"SaveImageVariants: Failed to save {base_map_type} {res_key} variant to {output_path} (ipu.save_image returned False)")
except Exception as e:
logger.error(f"SaveImageVariants: Error during ipu.save_image for {base_map_type} {res_key} variant to {output_path}: {e}", exc_info=True)
# Continue to next variant even if one fails
        # Drop references to the in-memory variant promptly so it can be garbage-collected
del variant_data
del image_data_for_save
# 5. Return List of Saved File Details
logger.info(f"Finished saving variants for map type: {base_map_type}. Saved {len(saved_file_details)} variants.")
return saved_file_details
# Optional Helper Functions (can be added here if needed)
# def _determine_target_bit_depth(...): ...
# def _determine_output_format(...): ...
# def _construct_variant_filepath(...): ...
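# --- Usage sketch (illustrative only; all values below are hypothetical) ---
# A minimal call, assuming a 4K 8-bit COL source and two target resolutions.
# The real pipeline supplies these arguments from Configuration and presets.
#
# details = save_image_variants(
#     source_image_data=np.zeros((4096, 4096, 3), dtype=np.uint8),
#     base_map_type="COL",
#     source_bit_depth_info=[8],
#     image_resolutions={"4K": 4096, "2K": 2048},
#     file_type_defs={"COL": {"bit_depth_rule": "force_8bit"}},
#     output_format_8bit="png",
#     output_format_16bit_primary="png",
#     output_format_16bit_fallback="tif",
#     png_compression_level=3,
#     jpg_quality=90,
#     output_filename_pattern_tokens={
#         "output_base_directory": Path("out"),
#         "asset_name": "DemoAsset",
#         "assetname": "DemoAsset",
#     },
#     output_filename_pattern="[assetname]_[maptype]_[resolution].[ext]",
#     resolution_threshold_for_jpg=2048,
# )
# # Expect two entries: a 4K JPG (threshold exceeded overrides PNG) and a 2K PNG.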

View File

@@ -7,7 +7,7 @@ import tempfile
import logging
from pathlib import Path
from typing import List, Dict, Tuple, Optional, Set
log = logging.getLogger(__name__)
# Attempt to import image processing libraries
try:
import cv2
@@ -21,7 +21,6 @@ except ImportError as e:
np = None
try:
from configuration import Configuration, ConfigurationError
from rule_structure import SourceRule, AssetRule, FileRule
@@ -50,6 +49,7 @@ if not log.hasHandlers():
from processing.pipeline.orchestrator import PipelineOrchestrator
# from processing.pipeline.asset_context import AssetProcessingContext # AssetProcessingContext is used by the orchestrator
# Import stages that will be passed to the orchestrator (outer stages)
from processing.pipeline.stages.supplier_determination import SupplierDeterminationStage
from processing.pipeline.stages.asset_skip_logic import AssetSkipLogicStage
from processing.pipeline.stages.metadata_initialization import MetadataInitializationStage
@@ -57,8 +57,8 @@ from processing.pipeline.stages.file_rule_filter import FileRuleFilterStage
from processing.pipeline.stages.gloss_to_rough_conversion import GlossToRoughConversionStage
from processing.pipeline.stages.alpha_extraction_to_mask import AlphaExtractionToMaskStage
from processing.pipeline.stages.normal_map_green_channel import NormalMapGreenChannelStage
from processing.pipeline.stages.individual_map_processing import IndividualMapProcessingStage
from processing.pipeline.stages.map_merging import MapMergingStage
# Removed: from processing.pipeline.stages.individual_map_processing import IndividualMapProcessingStage
# Removed: from processing.pipeline.stages.map_merging import MapMergingStage
from processing.pipeline.stages.metadata_finalization_save import MetadataFinalizationAndSaveStage
from processing.pipeline.stages.output_organization import OutputOrganizationStage
@@ -94,22 +94,33 @@ class ProcessingEngine:
self.loaded_data_cache: dict = {} # Cache for loaded/resized data within a single process call
# --- Pipeline Orchestrator Setup ---
self.stages = [
# Define pre-item and post-item processing stages
pre_item_stages = [
SupplierDeterminationStage(),
AssetSkipLogicStage(),
MetadataInitializationStage(),
FileRuleFilterStage(),
GlossToRoughConversionStage(),
AlphaExtractionToMaskStage(),
NormalMapGreenChannelStage(),
IndividualMapProcessingStage(),
MapMergingStage(),
MetadataFinalizationAndSaveStage(),
OutputOrganizationStage(),
GlossToRoughConversionStage(), # Assumed to run on context.files_to_process if needed by old logic
AlphaExtractionToMaskStage(), # Same assumption as above
NormalMapGreenChannelStage(), # Same assumption as above
# Note: The new RegularMapProcessorStage and MergedTaskProcessorStage handle their own transformations
# on the specific items they process. These global transformation stages might need review
# if they were intended to operate on a broader scope or if their logic is now fully
# encapsulated in the new item-specific processor stages. For now, keeping them as pre-stages.
]
post_item_stages = [
OutputOrganizationStage(), # Must run after all items are saved to temp
MetadataFinalizationAndSaveStage(),# Must run after output organization to have final paths
]
try:
self.pipeline_orchestrator = PipelineOrchestrator(config_obj=self.config_obj, stages=self.stages)
log.info("PipelineOrchestrator initialized successfully in ProcessingEngine.")
self.pipeline_orchestrator = PipelineOrchestrator(
config_obj=self.config_obj,
pre_item_stages=pre_item_stages,
post_item_stages=post_item_stages
)
log.info("PipelineOrchestrator initialized successfully in ProcessingEngine with pre and post stages.")
except Exception as e:
log.error(f"Failed to initialize PipelineOrchestrator in ProcessingEngine: {e}", exc_info=True)
self.pipeline_orchestrator = None # Ensure it's None if init fails
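# --- Sketch of the assumed orchestrator contract (illustrative only) ---
# The orchestrator code is not shown here; based on the stage names above, it
# presumably runs the pre-item stages once per asset, then the per-item
# processor stages (RegularMapProcessorStage / MergedTaskProcessorStage), then
# the post-item stages. A minimal sketch of that loop, with hypothetical
# attribute and method names:
#
# class PipelineOrchestrator:
#     def __init__(self, config_obj, pre_item_stages, post_item_stages):
#         self.pre_item_stages = pre_item_stages
#         self.post_item_stages = post_item_stages
#
#     def run(self, context):
#         for stage in self.pre_item_stages:
#             stage.execute(context)              # asset-level setup/filtering
#         for item in context.processing_items:   # hypothetical attribute
#             self._process_item(item, context)   # per-item transform + save
#         for stage in self.post_item_stages:
#             stage.execute(context)              # organize output, finalize metadata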

44
projectBrief.md Normal file
View File

@@ -0,0 +1,44 @@
# Project Brief: Asset Processor Tool
## 1. Main Goal & Purpose
The primary goal of the Asset Processor Tool is to provide **CG artists and 3D content teams with a friendly, fast, and flexible interface to process and organize 3D asset source files into a standardized library format.** It automates repetitive and complex tasks involved in preparing assets from various suppliers for use in production pipelines.
## 2. Key Features & Components
* **Automated Asset Processing:** Ingests 3D asset source files (texture sets, models, etc.) from `.zip`, `.rar`, `.7z` archives, or folders.
* **Preset-Driven Workflow:** Utilizes configurable JSON presets to interpret different asset sources (e.g., from various online vendors or internal standards), defining rules for file classification and processing.
* **Comprehensive File Operations:**
* **Classification:** Automatically identifies map types (Color, Normal, Roughness, etc.), models, and other file categories based on preset rules.
* **Image Processing:** Performs tasks like image resizing (to standard resolutions like 1K, 2K, 4K, avoiding upscaling), glossiness-to-roughness conversion, normal map green channel inversion (OpenGL/DirectX handling), alpha channel extraction, bit-depth adjustments, and low-resolution fallback generation for small source images.
* **Channel Merging:** Combines channels from different source maps into packed textures (e.g., Normal + Roughness + Metallic into a single NRMRGH map); see the sketch after this feature list.
* **Metadata Generation:** Creates a detailed `metadata.json` file for each processed asset, containing information about maps, categories, processing settings, and more, for downstream tool integration.
* **Flexible Output Organization:** Generates a clean, structured output directory based on user-configurable naming patterns and tokens.
* **Multiple User Interfaces:**
* **Graphical User Interface (GUI):** The primary interface, designed to be user-friendly, offering drag-and-drop functionality, an integrated preset editor, a live preview table for rule validation and overrides, and clear processing controls.
* **Directory Monitor:** An automated script that watches a specified folder for new asset archives and processes them based on preset names embedded in the archive filename.
* **Command-Line Interface (CLI):** Intended for batch processing and scripting (currently with limited core functionality).
* **Optional Blender Integration:** Can automatically run Blender scripts post-processing to create PBR node groups and materials in specified `.blend` files, linking to the newly processed textures.
* **Hierarchical Rule System:** Allows for dynamic, granular overrides of preset configurations at the source, asset, or individual file level via the GUI.
* **Experimental LLM Prediction:** Includes an option to use a Large Language Model for file interpretation and rule prediction.
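
As a rough illustration of the channel-merging idea (a minimal numpy sketch; the actual channel layout, map names, and the `NRMRGH` packing shown here are preset-defined, not fixed by the tool):

```python
import numpy as np

# Hypothetical inputs: an RGB normal map plus grayscale roughness and metallic maps.
normal = np.zeros((1024, 1024, 3), dtype=np.uint8)
rough = np.zeros((1024, 1024), dtype=np.uint8)
metal = np.zeros((1024, 1024), dtype=np.uint8)

# One possible packing: normal X/Y into R/G, roughness into B, metallic into A.
nrmrgh = np.dstack([normal[:, :, 0], normal[:, :, 1], rough, metal])
assert nrmrgh.shape == (1024, 1024, 4)
```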
## 3. Target Audience
* **CG Artists:** Individual artists looking for an efficient way to manage and prepare their personal or downloaded asset libraries.
* **3D Content Creation Teams:** Studios or groups needing a standardized pipeline for processing and organizing assets from multiple sources.
* **Technical Artists/Pipeline Developers:** Who may extend or integrate the tool into broader production workflows.
## 4. Overall Architectural Style & Key Technologies
* **Core Language:** Python
* **GUI Framework:** PySide6
* **Configuration:** Primarily JSON-based (application settings, user overrides, type definitions, supplier settings, presets, LLM settings).
* **Processing Architecture:** A modular, staged processing pipeline orchestrated by a central engine. Each stage performs a discrete task on an `AssetProcessingContext` object.
* **Key Libraries:** OpenCV (image processing), NumPy (numerical operations), py7zr/rarfile (archive handling), watchdog (directory monitoring).
* **Design Principles:** Modularity, configurability, and user-friendliness (especially for the GUI).
## 5. Foundational Information
* The tool aims to significantly reduce manual effort and ensure consistency in asset preparation.
* It is designed to be adaptable to various asset sources and pipeline requirements through its extensive configuration options and preset system.
* The output `metadata.json` is key for enabling further automation and integration with other tools or digital content creation (DCC) applications.

View File

@@ -1,6 +1,7 @@
import dataclasses
import json
from typing import List, Dict, Any, Tuple, Optional
import numpy as np # Added for ProcessingItem
@dataclasses.dataclass
class FileRule:
file_path: str = None
@@ -10,8 +11,12 @@ class FileRule:
resolution_override: Tuple[int, int] = None
channel_merge_instructions: Dict[str, Any] = dataclasses.field(default_factory=dict)
output_format_override: str = None
processing_items: List['ProcessingItem'] = dataclasses.field(default_factory=list) # Added field
def to_json(self) -> str:
# Need to handle ProcessingItem serialization if it contains non-serializable types like np.ndarray
# For now, assume asdict handles it or it's handled before calling to_json for persistence.
# A custom asdict_factory might be needed for robust serialization.
return json.dumps(dataclasses.asdict(self), indent=4)
@classmethod
@@ -54,4 +59,43 @@ class SourceRule:
data = json.loads(json_string)
# Manually deserialize nested AssetRule objects
data['assets'] = [AssetRule.from_json(json.dumps(asset_data)) for asset_data in data.get('assets', [])]
# Need to handle ProcessingItem deserialization if it was serialized
# For now, from_json for FileRule doesn't explicitly handle processing_items from JSON.
return cls(**data)
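# A possible dict_factory for the "robust serialization" noted above (a sketch,
# not wired in anywhere): replaces numpy arrays so that dataclasses.asdict()
# output stays JSON-serializable.
#
# def _json_safe_factory(pairs):
#     return {k: ("<ndarray omitted>" if isinstance(v, np.ndarray) else v)
#             for k, v in pairs}
#
# # Usage: json.dumps(dataclasses.asdict(rule, dict_factory=_json_safe_factory))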
@dataclasses.dataclass
class ProcessingItem:
"""
Represents a specific version of an image map to be processed and saved.
This could be a standard resolution (1K, 2K), a preview, or a special
variant like 'LOWRES'.
"""
source_file_info_ref: str # Reference to the original SourceFileInfo or unique ID of the source image
map_type_identifier: str # The internal map type (e.g., "MAP_COL", "MAP_ROUGH")
resolution_key: str # The resolution identifier (e.g., "1K", "PREVIEW", "LOWRES")
image_data: np.ndarray # The actual image data for this item
original_dimensions: Tuple[int, int] # (width, height) of the source image for this item
current_dimensions: Tuple[int, int] # (width, height) of the image_data in this item
target_filename: str = "" # Will be populated by SaveVariantsStage
is_extra: bool = False # If this item should be treated as an 'extra' file
bit_depth: Optional[int] = None
channels: Optional[int] = None
file_extension: Optional[str] = None # Determined during saving based on format
processing_applied_log: List[str] = dataclasses.field(default_factory=list)
status: str = "Pending" # e.g., Pending, Processed, Failed
error_message: Optional[str] = None
# __getstate__ and __setstate__ might be needed if we pickle these objects
# and np.ndarray causes issues. For JSON, image_data would typically not be serialized.
def __getstate__(self):
state = self.__dict__.copy()
# Don't pickle image_data if it's large or not needed for state
if 'image_data' in state: # Or a more sophisticated check
del state['image_data'] # Example: remove it
return state
def __setstate__(self, state):
self.__dict__.update(state)
# Potentially re-initialize or handle missing 'image_data'
if 'image_data' not in self.__dict__:
self.image_data = None # Or load it if a path was stored instead
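# Quick illustration of the pickling behavior (hypothetical values): the
# image_data payload is dropped on pickle and restored as None on unpickle.
#
# import pickle
# item = ProcessingItem(
#     source_file_info_ref="src-001", map_type_identifier="MAP_COL",
#     resolution_key="1K", image_data=np.zeros((8, 8, 3), dtype=np.uint8),
#     original_dimensions=(4096, 4096), current_dimensions=(1024, 1024),
# )
# restored = pickle.loads(pickle.dumps(item))
# assert restored.image_data is None
# assert restored.resolution_key == "1K"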

View File

@@ -163,6 +163,39 @@ def sanitize_filename(name: str) -> str:
if not name: name = "invalid_name"
return name
def get_filename_friendly_map_type(internal_map_type: str, file_type_definitions: Optional[Dict[str, Dict]]) -> str:
"""Derives a filename-friendly map type from the internal map type."""
filename_friendly_map_type = internal_map_type # Fallback
    if not file_type_definitions or not isinstance(file_type_definitions, dict):
logger.warning(f"Filename-friendly lookup: FILE_TYPE_DEFINITIONS not available or invalid. Falling back to internal type: {internal_map_type}")
return filename_friendly_map_type
base_map_key_val = None
suffix_part = ""
# Sort keys by length descending to match longest prefix first (e.g., MAP_ROUGHNESS before MAP_ROUGH)
sorted_known_base_keys = sorted(list(file_type_definitions.keys()), key=len, reverse=True)
for known_key in sorted_known_base_keys:
if internal_map_type.startswith(known_key):
base_map_key_val = known_key
suffix_part = internal_map_type[len(known_key):]
break
if base_map_key_val:
definition = file_type_definitions.get(base_map_key_val)
if definition and isinstance(definition, dict):
standard_type_alias = definition.get("standard_type")
if standard_type_alias and isinstance(standard_type_alias, str) and standard_type_alias.strip():
filename_friendly_map_type = standard_type_alias.strip() + suffix_part
logger.debug(f"Filename-friendly lookup: Transformed '{internal_map_type}' -> '{filename_friendly_map_type}'")
else:
logger.warning(f"Filename-friendly lookup: Standard type alias for '{base_map_key_val}' is missing or invalid. Falling back.")
else:
logger.warning(f"Filename-friendly lookup: No valid definition for '{base_map_key_val}'. Falling back.")
else:
logger.warning(f"Filename-friendly lookup: Could not parse base key from '{internal_map_type}'. Falling back.")
return filename_friendly_map_type
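# Example (hypothetical definitions):
#   defs = {"MAP_NRM": {"standard_type": "NRM"}, "MAP_ROUGH": {"standard_type": "ROUGH"}}
#   get_filename_friendly_map_type("MAP_NRM-2", defs)  ->  "NRM-2"
# The longest matching base key supplies the alias; any suffix ("-2") is preserved.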
# --- Basic Unit Tests ---
if __name__ == "__main__":
print("Running basic tests for path_utils.generate_path_from_pattern...")