Major Comment and codebase cleanup

This commit is contained in:
2025-05-06 22:47:26 +02:00
parent ddb5a43a21
commit 932b39fd01
109 changed files with 622 additions and 10137 deletions


@@ -1,106 +0,0 @@
# DRAFT README Enhancements - Architecture Section & Refinements
**(Note: This is a draft. Integrate the "Architecture" section and the refinements into the main `readme.md` file.)**
---
## Refinements to Existing Sections
**(Suggest adding these points or similar wording to the relevant existing sections)**
* **In Features:**
* Add: **Responsive GUI:** Utilizes background threads for processing and file preview generation, ensuring the user interface remains responsive.
* Add: **Optimized Classification:** Pre-compiles regular expressions from presets for faster file identification during classification.
* **In Directory Structure:**
* Update Core Logic bullet: `* **Core Logic:** main.py, monitor.py, asset_processor.py, configuration.py, config.py` (explicitly add `configuration.py`).
---
## Architecture
**(Suggest adding this new section, perhaps after "Features" or "Directory Structure")**
This section provides a higher-level overview of the tool's internal structure and design, intended for developers or users interested in the technical implementation.
### Core Components
The tool is primarily built around several key Python modules:
* **`config.py`**: Defines core, global settings (output paths, resolutions, default behaviors, format rules, etc.) that are generally not supplier-specific.
* **`Presets/*.json`**: Supplier-specific JSON files defining rules for interpreting source assets (filename patterns, map type keywords, model identification, etc.).
* **`configuration.py` (`Configuration` class)**: Responsible for loading the core `config.py` settings and merging them with a selected preset JSON file. Crucially, it also **pre-compiles** regular expression patterns defined in the preset (e.g., for map keywords, extra files, 16-bit variants) upon initialization. This pre-compilation significantly speeds up the file classification process.
* **`asset_processor.py` (`AssetProcessor` class)**: Contains the core logic for processing a *single* asset. It orchestrates the pipeline steps: workspace setup, extraction, file classification, metadata determination, map processing, channel merging, metadata file generation, and output organization.
* **`main.py`**: Serves as the entry point for the Command-Line Interface (CLI). It handles argument parsing, sets up logging, manages the parallel processing pool, and calls `AssetProcessor` for each input asset via a wrapper function.
* **`gui/`**: Contains modules related to the Graphical User Interface (GUI), built using PySide6.
* **`monitor.py`**: Implements the directory monitoring functionality for automated processing.
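As a rough illustration of the pre-compilation idea, the sketch below compiles preset regex patterns once at load time so classification only runs pre-built matchers (the preset keys and attribute names here are invented for the sketch, not the tool's actual schema):

```python
import re

class Configuration:
    """Illustrative sketch: compile preset regex patterns once at load time."""

    def __init__(self, preset: dict):
        # Hypothetical preset keys; the real preset schema may differ.
        self.map_keyword_patterns = {
            map_type: [re.compile(p, re.IGNORECASE) for p in patterns]
            for map_type, patterns in preset.get("map_type_mapping", {}).items()
        }
        self.extra_file_patterns = [
            re.compile(p, re.IGNORECASE) for p in preset.get("extra_files", [])
        ]

    def classify(self, filename: str):
        # Matching against pre-compiled patterns avoids re-parsing each regex per file.
        for map_type, patterns in self.map_keyword_patterns.items():
            if any(p.search(filename) for p in patterns):
                return map_type
        return None
```

Compiling once per preset rather than once per file is what makes the speed-up significant when thousands of files are classified.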
### Parallel Processing (CLI & GUI)
To accelerate the processing of multiple assets, the tool utilizes Python's `concurrent.futures.ProcessPoolExecutor`.
* Both `main.py` (for CLI) and `gui/processing_handler.py` (for GUI background tasks) create a process pool.
* The actual processing for each asset is delegated to the `main.process_single_asset_wrapper` function. This wrapper is executed in a separate worker process within the pool.
* The wrapper function is responsible for instantiating the `Configuration` and `AssetProcessor` classes for the specific asset being processed in that worker. This isolates each asset's processing environment.
* Results (success, skip, failure, error messages) are communicated back from the worker processes to the main coordinating script (either `main.py` or `gui/processing_handler.py`).
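The fan-out described above can be sketched as follows (the wrapper body is a stand-in; the real one lives in `main.py` and builds the per-asset `Configuration` and `AssetProcessor`; the `executor_cls` parameter is only there so the sketch can also run in-thread):

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor, as_completed

def process_single_asset_wrapper(asset_path: str):
    # Stand-in for main.process_single_asset_wrapper: the real function
    # instantiates Configuration and AssetProcessor inside the worker process.
    return ("success", asset_path)

def run_pool(asset_paths, executor_cls=ProcessPoolExecutor, max_workers=4):
    """Fan asset processing out over a pool and collect status tuples."""
    results = []
    with executor_cls(max_workers=max_workers) as pool:
        # Submit one task per asset; each runs in its own worker.
        futures = {pool.submit(process_single_asset_wrapper, p): p for p in asset_paths}
        for future in as_completed(futures):
            results.append(future.result())  # (status, asset_path) tuples
    return results
```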
### Asset Processing Pipeline (`AssetProcessor` class)
The `AssetProcessor` class executes a sequence of steps for each asset:
1. **`_setup_workspace()`**: Creates a temporary directory for processing.
2. **`_extract_input()`**: Extracts the input ZIP archive or copies the input folder contents into the temporary workspace.
3. **`_inventory_and_classify_files()`**: This is a critical step that scans the workspace and classifies each file based on rules defined in the loaded `Configuration` (which includes the preset). It uses the pre-compiled regex patterns for efficiency. Key logic includes:
* Identifying files explicitly marked for the `Extra/` folder.
* Identifying model files.
* Matching potential texture maps against keyword patterns.
* Identifying and prioritizing 16-bit variants (e.g., `_NRM16.tif`) over their 8-bit counterparts based on `source_naming.bit_depth_variants` patterns. Ignored 8-bit files are tracked.
* Handling map variants (e.g., multiple Color maps) by assigning suffixes (`-1`, `-2`) based on the `RESPECT_VARIANT_MAP_TYPES` setting in `config.py` and the order of keywords defined in the preset's `map_type_mapping`.
* Classifying any remaining files as 'Unrecognised' (which are also moved to the `Extra/` folder).
4. **`_determine_base_metadata()`**: Determines the asset's base name, category (Texture, Asset, Decal), and archetype (e.g., Wood, Metal) based on classified files and preset rules (`source_naming`, `asset_category_rules`, `archetype_rules`).
5. **Skip Check**: If `overwrite` is false, checks if the final output directory and metadata file already exist. If so, processing for this asset stops early.
6. **`_process_maps()`**: Iterates through classified texture maps. For each map:
* Loads the image data (handling potential Gloss->Roughness inversion).
* Resizes the map to each target resolution specified in `config.py`, avoiding upscaling.
* Determines the output bit depth based on `MAP_BIT_DEPTH_RULES` (`respect` source or `force_8bit`).
* Determines the output file format (`.jpg`, `.png`, `.exr`) based on a combination of factors:
* The `RESOLUTION_THRESHOLD_FOR_JPG` (forces JPG for 8-bit maps above the threshold).
* The original input file format (e.g., `.jpg` inputs tend to produce `.jpg` outputs if 8-bit and below threshold).
* The target bit depth (16-bit outputs use configured `OUTPUT_FORMAT_16BIT_PRIMARY` or `_FALLBACK`).
* Configured 8-bit format (`OUTPUT_FORMAT_8BIT`).
* Saves the processed map for each resolution, applying appropriate compression/quality settings. Includes fallback logic if saving in the primary format fails (e.g., EXR -> PNG).
* Calculates basic image statistics (Min/Max/Mean) for a reference resolution (`CALCULATE_STATS_RESOLUTION`).
7. **`_merge_maps()`**: Combines channels from different processed maps into new textures (e.g., NRMRGH) based on `MAP_MERGE_RULES` defined in `config.py`. It determines the output format for merged maps similarly to `_process_maps`, considering the formats of the input maps involved.
8. **`_generate_metadata_file()`**: Collects all gathered information (asset name, maps present, resolutions, stats, etc.) and writes it to the `metadata.json` file.
9. **`_organize_output_files()`**: Moves the processed maps, merged maps, models, metadata file, and any 'Extra'/'Unrecognised'/'Ignored' files from the temporary workspace to the final structured output directory (`<output_base>/<supplier>/<asset_name>/`).
10. **`_cleanup_workspace()`**: Removes the temporary workspace directory.
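The ordering of these steps, including the early skip and the guaranteed cleanup, can be summarised in a sketch (method names are taken from the list above; the bodies are stubs that only record call order):

```python
class AssetProcessor:
    """Sketch of the pipeline ordering only; every step body is a stub."""

    def __init__(self, overwrite: bool = False):
        self.overwrite = overwrite
        self.steps_run = []

    def _setup_workspace(self): self.steps_run.append("setup_workspace")
    def _extract_input(self): self.steps_run.append("extract_input")
    def _inventory_and_classify_files(self): self.steps_run.append("classify")
    def _determine_base_metadata(self): self.steps_run.append("base_metadata")
    def _output_already_exists(self): return False  # hypothetical skip check
    def _process_maps(self): self.steps_run.append("process_maps")
    def _merge_maps(self): self.steps_run.append("merge_maps")
    def _generate_metadata_file(self): self.steps_run.append("metadata_file")
    def _organize_output_files(self): self.steps_run.append("organize_output")
    def _cleanup_workspace(self): self.steps_run.append("cleanup")

    def process(self) -> str:
        self._setup_workspace()
        try:
            self._extract_input()
            self._inventory_and_classify_files()
            self._determine_base_metadata()
            if not self.overwrite and self._output_already_exists():
                return "skipped"  # step 5: early exit when output exists
            self._process_maps()
            self._merge_maps()
            self._generate_metadata_file()
            self._organize_output_files()
            return "success"
        finally:
            self._cleanup_workspace()  # step 10 runs even on failure or skip
```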
### GUI Architecture (`gui/`)
The GUI provides an interactive way to use the tool and manage presets.
* **Framework**: Built using `PySide6`, the official Python bindings for the Qt framework.
* **Main Window (`main_window.py`)**: Defines the main application window, which includes:
* An integrated preset editor panel (using `QSplitter`).
* A processing panel with drag-and-drop support, a file preview table, and processing controls.
* **Threading Model**: To prevent the UI from freezing during potentially long operations, background tasks are run in separate `QThread`s:
* **`ProcessingHandler` (`processing_handler.py`)**: Manages the execution of the main processing pipeline (using `ProcessPoolExecutor` and `main.process_single_asset_wrapper`, similar to the CLI) in a background thread.
* **`PredictionHandler` (`prediction_handler.py`)**: Manages the generation of file previews in a background thread. It calls `AssetProcessor.get_detailed_file_predictions()`, which performs the extraction and classification steps without full image processing, making it much faster.
* **Communication**: Qt's **signal and slot mechanism** is used for communication between the background threads (`ProcessingHandler`, `PredictionHandler`) and the main GUI thread (`MainWindow`). For example, signals are emitted to update the progress bar, populate the preview table, and report completion status or errors.
* **Preset Editor**: The editor allows creating, modifying, and saving preset JSON files directly within the GUI. Changes are tracked, and users are prompted to save before closing or loading another preset if changes are pending.
### Monitor Architecture (`monitor.py`)
The `monitor.py` script enables automated processing of assets dropped into a designated input directory.
* **File System Watching**: Uses the `watchdog` library (specifically `PollingObserver` for cross-platform compatibility) to monitor the specified `INPUT_DIR`.
* **Event Handling**: A custom `ZipHandler` detects `on_created` events for `.zip` files.
* **Filename Parsing**: It expects filenames in the format `[preset]_filename.zip` and uses a regular expression (`PRESET_FILENAME_REGEX`) to extract the `preset` name.
* **Preset Validation**: Checks if the extracted preset name corresponds to a valid `.json` file in the `Presets/` directory.
* **Processing Trigger**: If the filename format and preset are valid, it calls the `main.run_processing` function (the same core logic used by the CLI) to process the detected ZIP file using the extracted preset.
* **File Management**: Moves the source ZIP file to either a `PROCESSED_DIR` (on success/skip) or an `ERROR_DIR` (on failure or invalid preset) after the processing attempt.
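A minimal sketch of the filename parsing, assuming the preset name is everything before the first underscore (the actual `PRESET_FILENAME_REGEX` in `monitor.py` may differ):

```python
import re

# Hypothetical stand-in for PRESET_FILENAME_REGEX, e.g.
# "poliigon_OakFloor.zip" -> preset "poliigon".
PRESET_FILENAME_REGEX = re.compile(r"^(?P<preset>[^_]+)_(?P<rest>.+)\.zip$", re.IGNORECASE)

def extract_preset_name(filename: str):
    """Return the preset name, or None if the filename does not match."""
    match = PRESET_FILENAME_REGEX.match(filename)
    return match.group("preset") if match else None
```

A `None` result corresponds to the invalid-filename path, where the ZIP is moved to the error directory.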
### Error Handling
* Custom exception classes (`ConfigurationError`, `AssetProcessingError`) are defined and used to signal specific types of errors during configuration loading or asset processing.
* Standard Python logging is used throughout the application (CLI, GUI, Monitor, Core Logic) to record information, warnings, and errors. Log levels can be configured.
* Worker processes in the processing pool capture exceptions and report them back to the main process for logging and status updates.
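A sketch of how the custom exceptions and the worker-side capture might fit together (the function name and body here are stand-ins for the real processing call):

```python
import logging

logger = logging.getLogger(__name__)

class ConfigurationError(Exception):
    """Raised when loading or merging configuration/preset data fails."""

class AssetProcessingError(Exception):
    """Raised when processing a single asset fails."""

def safe_process_asset(asset_name: str):
    """Hypothetical worker-side guard: convert exceptions into status tuples
    that can be pickled back to the coordinating process."""
    try:
        if not asset_name:
            raise AssetProcessingError("empty asset name")  # stand-in for real work
        return ("success", asset_name, None)
    except (ConfigurationError, AssetProcessingError) as exc:
        logger.error("Processing failed for %r: %s", asset_name, exc)
        return ("failure", asset_name, str(exc))
```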


@@ -1,150 +0,0 @@
Implementation Plan: Path Token Data Generation
This plan outlines the steps required to implement data generation/retrieval for the [IncrementingValue], ####, and [Sha5] path tokens used in OUTPUT_DIRECTORY_PATTERN and OUTPUT_FILENAME_PATTERN.
1. Goal Recap
Enable the use of [IncrementingValue] (or ####), [Time], and [Sha5] tokens within the output path patterns used by processing_engine.py. Implement logic to generate/retrieve data for these tokens and pass it to utils.path_utils.generate_path_from_pattern. Confirm handling of [Date] and [ApplicationPath].
2. Analysis Summary & Existing Token Handling
[Date], [Time], [ApplicationPath]: Handled automatically by utils/path_utils.py. No changes needed.
[IncrementingValue] / ####: Requires data provision based on scanning existing output directories. Implementation detailed below.
[Sha5]: Requires data provision (first 5 chars of SHA-256 hash of original input file). Implementation detailed below.
Path Generation Points: _save_image() and _generate_metadata_file() in processing_engine.py.
3. Implementation Plan per Token
3.1. [IncrementingValue] / #### (Directory Scan Logic)
Scope & Behavior: Determine the next available incrementing number by scanning existing directories in the final output_base_path that match the OUTPUT_DIRECTORY_PATTERN structure. The value represents the next sequence number globally across the pattern structure.
Location: New utility function get_next_incrementing_value in utils/path_utils.py, called from orchestrating code (main.py / monitor.py).
Mechanism:
get_next_incrementing_value(output_base_path: Path, output_directory_pattern: str) -> str:
Parses output_directory_pattern to find the incrementing token (#### or [IncrementingValue]) and determine padding digits.
Constructs a glob pattern based on the pattern structure (e.g., [0-9][0-9]_* for ##_*).
Uses output_base_path.glob() to find matching directories.
Extracts numerical prefixes from matching directory names using regex.
Finds the maximum existing integer value (or -1 if none).
Calculates next_value = max_value + 1.
Formats next_value as a zero-padded string based on the pattern's digits.
Returns the formatted string.
Orchestrator (main.py/monitor.py):
Load Configuration to get OUTPUT_DIRECTORY_PATTERN.
Get output_base_path.
Call next_increment_str = get_next_incrementing_value(output_base_path, config.output_directory_pattern).
Pass next_increment_str to ProcessingEngine.process as incrementing_value.
Integration (processing_engine.py):
Accept incrementing_value: Optional[str] in process signature.
Store on self.current_incrementing_value.
Add to token_data (key: 'incrementingvalue') in _save_image and _generate_metadata_file.
3.2. [Sha5]
Scope & Behavior: Calculate SHA-256 hash of the original input source file, take the first 5 characters.
Location: Orchestrating code (main.py / monitor.py) before ProcessingEngine invocation.
Mechanism: Use new utility function calculate_sha256 in utils/hash_utils.py. Call this in the orchestrator, get the first 5 chars, pass to ProcessingEngine.process.
Integration (processing_engine.py): Accept sha5_value: Optional[str] in process, store on self.current_sha5_value, add to token_data (key: 'sha5') in _save_image and _generate_metadata_file.
4. Proposed Code Changes
4.1. utils/hash_utils.py (New File)
```python
# utils/hash_utils.py
import hashlib
import logging
from pathlib import Path
from typing import Optional

logger = logging.getLogger(__name__)

def calculate_sha256(file_path: Path) -> Optional[str]:
    """Calculates the SHA-256 hash of a file."""
    # Implementation as detailed in the previous plan revision...
    if not isinstance(file_path, Path):
        return None
    if not file_path.is_file():
        return None
    sha256_hash = hashlib.sha256()
    try:
        with open(file_path, "rb") as f:
            for byte_block in iter(lambda: f.read(4096), b""):
                sha256_hash.update(byte_block)
        return sha256_hash.hexdigest()
    except OSError as e:
        logger.error(f"Error reading file {file_path} for SHA-256: {e}", exc_info=True)
        return None
    except Exception as e:
        logger.error(f"Unexpected error calculating SHA-256 for {file_path}: {e}", exc_info=True)
        return None
```
4.2. utils/path_utils.py (Additions/Modifications)
```python
# (In utils/path_utils.py)
import re
import logging
from pathlib import Path
from typing import Optional, Dict

logger = logging.getLogger(__name__)

# ... (existing generate_path_from_pattern function) ...

def get_next_incrementing_value(output_base_path: Path, output_directory_pattern: str) -> str:
    """Determines the next incrementing value based on existing directories."""
    # Implementation as detailed in the previous plan revision...
    logger.debug(f"Calculating next increment value for pattern '{output_directory_pattern}' in '{output_base_path}'")
    # No extra capture group inside the alternation, so groups() unpacks cleanly.
    match = re.match(r"(.*?)(\[IncrementingValue\]|#+)(.*)", output_directory_pattern)
    if not match:
        return "00"  # Default fallback
    prefix_pattern, increment_token, suffix_pattern = match.groups()
    num_digits = len(increment_token) if increment_token.startswith("#") else 2
    # One [0-9] character class per digit, e.g. "[0-9][0-9]" for "##".
    glob_increment_part = "[0-9]" * num_digits
    glob_prefix = re.sub(r'\[[^\]]+\]', '*', prefix_pattern)
    glob_suffix = re.sub(r'\[[^\]]+\]', '*', suffix_pattern)
    glob_pattern = f"{glob_prefix}{glob_increment_part}{glob_suffix}"
    max_value = -1
    try:
        # Escape literal text, but let any remaining [Token] placeholders match
        # arbitrary text in existing directory names.
        extract_prefix_re = re.sub(r"\\\[[^\]]+\\\]", ".*", re.escape(prefix_pattern))
        extract_suffix_re = re.sub(r"\\\[[^\]]+\\\]", ".*", re.escape(suffix_pattern))
        extract_regex = re.compile(rf"^{extract_prefix_re}(\d{{{num_digits}}}){extract_suffix_re}.*")
        for item in output_base_path.glob(glob_pattern):
            if item.is_dir():
                num_match = extract_regex.match(item.name)
                if num_match:
                    try:
                        max_value = max(max_value, int(num_match.group(1)))
                    except (ValueError, IndexError):
                        pass
    except Exception as e:
        logger.error(f"Error searching increment values: {e}", exc_info=True)
    next_value = max_value + 1
    format_string = f"{{:0{num_digits}d}}"
    next_value_str = format_string.format(next_value)
    logger.info(f"Determined next incrementing value: {next_value_str}")
    return next_value_str
```
4.3. main.py / monitor.py (Orchestration - Revised Call)
Imports: Add from utils.hash_utils import calculate_sha256, from utils.path_utils import get_next_incrementing_value.
Before ProcessingEngine.process call:
Get archive_path, output_dir.
Load config = Configuration(...).
full_sha = calculate_sha256(archive_path).
sha5_value = full_sha[:5] if full_sha else None.
next_increment_str = get_next_incrementing_value(output_dir, config.output_directory_pattern).
Modify call: engine.process(..., incrementing_value=next_increment_str, sha5_value=sha5_value).
4.4. processing_engine.py
Imports: Ensure Optional, logging, generate_path_from_pattern are imported.
process Method:
Update signature: def process(..., incrementing_value: Optional[str] = None, sha5_value: Optional[str] = None) -> ...:
Store args: self.current_incrementing_value = incrementing_value, self.current_sha5_value = sha5_value.
_save_image & _generate_metadata_file Methods:
Before calling generate_path_from_pattern, add stored values to token_data:
```python
# Add new token data if available
if hasattr(self, 'current_incrementing_value') and self.current_incrementing_value is not None:
    token_data['incrementingvalue'] = self.current_incrementing_value
if hasattr(self, 'current_sha5_value') and self.current_sha5_value is not None:
    token_data['sha5'] = self.current_sha5_value
log.debug(f"Token data for path generation: {token_data}")
```


@@ -1,124 +0,0 @@
# Blender Integration Plan: Node Groups from Processed Assets
**Objective:** Develop a Python script (`blenderscripts/create_nodegroups.py`) to run manually inside Blender. This script will scan the output directory generated by the Asset Processor Tool, read `metadata.json` files, and create/update corresponding PBR node groups in the active Blender file, leveraging the pre-calculated metadata.
**Key Principles:**
* **Leverage Existing Tool Output:** Rely entirely on the structured output and `metadata.json` from the Asset Processor Tool. Avoid reprocessing or recalculating data already available.
* **Blender Environment:** The script is designed solely for Blender's Python environment (`bpy`).
* **Manual Execution:** Users will manually run this script from Blender's Text Editor.
* **Target Active File:** All operations modify the currently open `.blend` file.
* **Assume Templates:** The script will assume node group templates (`Template_PBRSET`, `Template_PBRTYPE`) exist in the active file. Error handling will be added if they are missing.
* **Focus on Node Groups:** The script's scope is limited to creating and updating the node groups, not materials.
**Detailed Plan:**
1. **Script Setup & Configuration:**
* Create a new Python file named `create_nodegroups.py` (intended location: `blenderscripts/`).
* Import necessary modules (`bpy`, `os`, `json`, `pathlib`).
* Define user-configurable variables at the top:
* `PROCESSED_ASSET_LIBRARY_ROOT`: Path to the root output directory of the Asset Processor Tool.
* `PARENT_TEMPLATE_NAME`: Name of the parent node group template (e.g., `"Template_PBRSET"`).
* `CHILD_TEMPLATE_NAME`: Name of the child node group template (e.g., `"Template_PBRTYPE"`).
* `ASPECT_RATIO_NODE_LABEL`: Label of the Value node in the parent template for aspect ratio correction (e.g., `"AspectRatioCorrection"`).
* `STATS_NODE_PREFIX`: Prefix for Combine XYZ nodes storing stats in the parent template (e.g., `"Histogram-"`).
* `ENABLE_MANIFEST`: Boolean flag to enable/disable the manifest system (default: `True`).
2. **Manifest Handling:**
* **Location:** Use a separate JSON file named `[ActiveBlendFileName]_manifest.json`, located in the same directory as the active `.blend` file.
* **Loading:** Implement `load_manifest(context)` that finds and reads the manifest JSON file. If not found or invalid, return an empty dictionary.
* **Saving:** Implement `save_manifest(context, manifest_data)` that writes the `manifest_data` dictionary to the manifest JSON file.
* **Checking:** Implement helper functions `is_asset_processed(manifest_data, asset_name)` and `is_map_processed(manifest_data, asset_name, map_type, resolution)` to check against the loaded manifest.
* **Updating:** Update the manifest dictionary in memory as assets/maps are processed. Save the manifest once at the end of the script.
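The manifest helpers could look roughly like this (the manifest's internal schema, the `done` and `maps` keys, is an assumption of this sketch; the real structure may differ):

```python
import json
from pathlib import Path

def manifest_path_for(blend_path: Path) -> Path:
    """[ActiveBlendFileName]_manifest.json in the same directory as the .blend file."""
    return blend_path.with_name(f"{blend_path.stem}_manifest.json")

def load_manifest(blend_path: Path) -> dict:
    """Return the manifest contents, or an empty dict if missing/invalid."""
    try:
        return json.loads(manifest_path_for(blend_path).read_text())
    except (OSError, json.JSONDecodeError):
        return {}

def save_manifest(blend_path: Path, manifest_data: dict) -> None:
    manifest_path_for(blend_path).write_text(json.dumps(manifest_data, indent=2))

def is_asset_processed(manifest_data: dict, asset_name: str) -> bool:
    return manifest_data.get(asset_name, {}).get("done", False)

def is_map_processed(manifest_data: dict, asset_name: str, map_type: str, resolution: str) -> bool:
    maps = manifest_data.get(asset_name, {}).get("maps", {})
    return resolution in maps.get(map_type, [])
```

Updates happen on the in-memory dict during the run; `save_manifest` is called once at the end, as described above.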
3. **Core Logic - `process_library()` function:**
* Get Blender context.
* Load manifest data (if enabled).
* Validate that `PROCESSED_ASSET_LIBRARY_ROOT` exists.
* Validate that template node groups exist in `bpy.data.node_groups`. Exit gracefully with an error message if not found.
* Initialize counters (new groups, updated groups, etc.).
* **Scan Directory:** Use `os.walk` or `pathlib.rglob` to find all `metadata.json` files within the `PROCESSED_ASSET_LIBRARY_ROOT`.
* **Iterate Metadata:** For each `metadata.json` found:
* Parse the JSON data. Extract key information: `asset_name`, `supplier_name`, `archetype`, `maps` (dictionary of maps with resolutions, paths, stats), `aspect_ratio_change_string`.
* Check manifest if asset is already processed (if enabled). Skip if true.
* **Parent Group Handling:**
* Determine target parent group name (e.g., `f"PBRSET_{asset_name}"`).
* Find existing group or create a copy from `PARENT_TEMPLATE_NAME`.
* Mark group as asset (`asset_mark()`) if not already.
* **Apply Metadata:**
* Find the aspect ratio node using `ASPECT_RATIO_NODE_LABEL`. Calculate the correction factor based on the `aspect_ratio_change_string` from metadata (using helper function `calculate_factor_from_string`) and set the node's default value.
* For relevant map types (e.g., ROUGH, DISP), find the stats node (`STATS_NODE_PREFIX` + map type). Set the X, Y, Z inputs using the `min`, `max`, `mean` values stored in the map's metadata entry for the reference resolution.
* **Apply Asset Tags:** Use `asset_data.tags.new()` to add the `supplier_name` and `archetype` tags (checking for existence first).
* **Child Group Handling (Iterate through `maps` in metadata):**
* For each `map_type` (e.g., "COL", "NRM") and its data in the metadata:
* Determine target child group name (e.g., `f"PBRTYPE_{asset_name}_{map_type}"`).
* Find existing child group or create a copy from `CHILD_TEMPLATE_NAME`.
* Find the corresponding placeholder node in the *parent* group (by label matching `map_type`). Assign the child node group to this placeholder (`placeholder_node.node_tree = child_group`).
* Link the child group's output to the corresponding parent group's output socket. Ensure the parent output socket type is `NodeSocketColor`.
* **Image Node Handling (Iterate through resolutions for the map type):**
* For each `resolution` (e.g., "4K", "2K") and its `image_path` in the metadata:
* Check manifest if this specific map/resolution is processed (if enabled). Skip if true.
* Find the corresponding Image Texture node in the *child* group (by label matching `resolution`, e.g., "4K").
* Load the image using `bpy.data.images.load(image_path, check_existing=True)`. Handle potential file-not-found errors.
* Assign the loaded image to the `image_node.image`.
* Set the `image_node.image.colorspace_settings.name` based on the `map_type` (using a helper function `get_color_space`).
* Update manifest dictionary for this map/resolution (if enabled).
* Update manifest dictionary for the processed asset (if enabled).
* Save manifest data (if enabled and changes were made).
* Print summary (duration, groups created/updated, etc.).
4. **Helper Functions:**
* `find_nodes_by_label(node_tree, label, node_type)`: Reusable function to find nodes.
* `calculate_factor_from_string(aspect_string)`: Parses the `aspect_ratio_change_string` from metadata and returns the appropriate UV X-scaling factor.
* `get_color_space(map_type)`: Returns the appropriate Blender color space name for a given map type string.
* `add_tag_if_new(asset_data, tag_name)`: Adds a tag if it doesn't exist.
* Manifest loading/saving/checking functions.
5. **Execution Block (`if __name__ == "__main__":`)**
* Add pre-run checks (templates exist, library path valid, blend file saved if manifest enabled).
* Call the main `process_library()` function.
* Include basic timing and print statements for start/end.
**Mermaid Diagram:**
```mermaid
graph TD
A[Start Script in Blender] --> B(Load Config: Lib Path, Template Names, Node Labels);
B --> C{Check Templates Exist};
C -- Templates OK --> D(Load Manifest from adjacent .json file);
C -- Templates Missing --> X(Error & Exit);
D --> E{Scan Processed Library for metadata.json};
E --> F{For each metadata.json};
F --> G{Parse Metadata (Asset Name, Supplier, Archetype, Maps, Aspect Str, Stats)};
G --> H{Is Asset in Manifest?};
H -- Yes --> F;
H -- No --> I{Find/Create Parent Group (PBRSET_)};
I --> J(Mark as Asset & Apply Supplier + Archetype Tags);
J --> K(Find Aspect Node & Set Value from Aspect String);
K --> M{For each Map Type in Metadata};
M --> N(Find Stats Node & Set Values from Stats in Metadata);
N --> O{Find/Create Child Group (PBRTYPE_)};
O --> P(Assign Child to Parent Placeholder);
P --> Q(Link Child Output to Parent Output);
Q --> R{For each Resolution of Map};
R --> S{Is Map/Res in Manifest?};
S -- Yes --> R;
S -- No --> T(Find Image Node in Child);
T --> U(Load Processed Image);
U --> V(Assign Image to Node);
V --> W(Set Image Color Space);
W --> W1(Update Manifest Dict for Map/Res);
W1 --> R;
R -- All Resolutions Done --> M;
M -- All Map Types Done --> X1(Update Manifest Dict for Asset);
X1 --> F;
F -- All metadata.json Processed --> Y(Save Manifest Dict to adjacent .json file);
Y --> Z(Print Summary & Finish);
subgraph "Manifest Operations (External File)"
D; H; S; W1; X1; Y;
end
subgraph "Node/Asset Operations"
I; J; K; N; O; P; Q; T; U; V; W;
end
```


@@ -1,52 +0,0 @@
# Blender Integration Plan v2
## Goal
Add an optional step to `main.py` to run `blenderscripts/create_nodegroups.py` and `blenderscripts/create_materials.py` on specified `.blend` files after asset processing is complete.
## Proposed Plan
1. **Update `config.py`:**
* Add two new optional configuration variables: `DEFAULT_NODEGROUP_BLEND_PATH` and `DEFAULT_MATERIALS_BLEND_PATH`. These will store the default paths to the Blender files.
2. **Update `main.py` Argument Parser:**
* Add two new optional command-line arguments: `--nodegroup-blend` and `--materials-blend`.
* These arguments will accept file paths to the respective `.blend` files.
* If provided, these arguments will override the default paths specified in `config.py`.
3. **Update `blenderscripts/create_nodegroups.py` and `blenderscripts/create_materials.py`:**
* Modify both scripts to accept the processed asset library root path (`PROCESSED_ASSET_LIBRARY_ROOT`) as a command-line argument. This will be passed to the script when executed by Blender using the `--` separator.
* Update the scripts to read this path from `sys.argv` instead of using the hardcoded variable.
4. **Update `main.py` Execution Flow:**
* After the main asset processing loop (`run_processing`) completes and the summary is reported, check if the `--nodegroup-blend` or `--materials-blend` arguments (or their fallbacks from `config.py`) were provided.
* If a path for the nodegroup `.blend` file is available:
* Construct a command to execute Blender in the background (`-b`), load the specified nodegroup `.blend` file, run the `create_nodegroups.py` script using `--python`, and pass the processed asset root directory as an argument after `--`. Saving the `.blend` file must be done by the script itself (e.g., `bpy.ops.wm.save_mainfile()`); Blender's `-S` flag selects a scene rather than saving the file.
* Execute this command from `main.py` (e.g., via `subprocess.run`).
* If a path for the materials `.blend` file is available:
* Construct a similar command to execute Blender in the background, load the specified materials `.blend` file, run the `create_materials.py` script using `--python`, and pass the processed asset root directory after `--`; again, the script is responsible for saving the file.
* Execute this command in the same way.
* Include error handling for the execution of the Blender commands.
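Building the Blender invocation might be sketched as follows (the helper name and argument order are illustrative; `--` separates Blender's own arguments from those the script reads via `sys.argv`):

```python
from pathlib import Path

def build_blender_command(blender_exe: str, blend_file, script_path, asset_root):
    """Hypothetical helper: assemble a background Blender invocation.

    Everything after '--' is ignored by Blender itself and reaches the
    script through sys.argv.
    """
    return [
        blender_exe,
        "-b", str(Path(blend_file)),         # run headless with this .blend file
        "--python", str(Path(script_path)),  # script to execute
        "--", str(Path(asset_root)),         # argument consumed by the script
    ]
```

The resulting list would then be run with something like `subprocess.run(cmd, check=True)`, with saving handled inside the script.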
## Execution Flow Diagram
```mermaid
graph TD
A[Asset Processing Complete] --> B[Report Summary];
B --> C{Nodegroup Blend Path Specified?};
C -- Yes --> D[Get Nodegroup Blend Path (Arg or Config)];
D --> E[Construct Blender Command for Nodegroups];
E --> F[Execute Command: blender -b nodegroup.blend --python create_nodegroups.py -- <asset_root> -S];
F --> G{Command Successful?};
G -- Yes --> H{Materials Blend Path Specified?};
G -- No --> I[Log Nodegroup Error];
I --> H;
H -- Yes --> J[Get Materials Blend Path (Arg or Config)];
J --> K[Construct Blender Command for Materials];
K --> L[Execute Command: blender -b materials.blend --python create_materials.py -- <asset_root> -S];
L --> M{Command Successful?};
M -- Yes --> N[End main.py];
M -- No --> O[Log Materials Error];
O --> N;
H -- No --> N;
C -- No --> H;
```


@@ -1,131 +0,0 @@
# Blender Material Creation Script Plan
This document outlines the plan for creating a new Blender script (`create_materials.py`) in the `blenderscripts/` directory. This script will scan the processed asset library output by the Asset Processor Tool, read the `metadata.json` files, and create or update Blender materials that link to the corresponding PBRSET node groups found in a specified Blender Asset Library. The script will also set the material's viewport properties using pre-calculated statistics from the metadata. The script will skip processing an asset if the corresponding material already exists in the current Blender file.
## 1. Script Location and Naming
* Create a new file: `blenderscripts/create_materials.py`.
## 2. Script Structure
The script will follow a similar structure to `blenderscripts/create_nodegroups.py`, including:
* Import statements (`bpy`, `os`, `json`, `pathlib`, `time`, `base64`).
* A `--- USER CONFIGURATION ---` section at the top.
* Helper functions.
* A main processing function (e.g., `process_library_for_materials`).
* An execution block (`if __name__ == "__main__":`) to run the main function.
## 3. Configuration Variables
The script will include the following configuration variables in the `--- USER CONFIGURATION ---` section:
* `PROCESSED_ASSET_LIBRARY_ROOT`: Path to the root output directory of the Asset Processor Tool (same as in `create_nodegroups.py`). This is used to find the `metadata.json` files and reference images for previews.
* `PBRSET_ASSET_LIBRARY_NAME`: The name of the Blender Asset Library (configured in Blender Preferences) that contains the PBRSET node groups created by `create_nodegroups.py`.
* `TEMPLATE_MATERIAL_NAME`: Name of the required template material in the Blender file (e.g., "Template_PBRMaterial").
* `PLACEHOLDER_NODE_LABEL`: Label of the placeholder Group node within the template material's node tree where the PBRSET node group will be linked (e.g., "PBRSET_PLACEHOLDER").
* `MATERIAL_NAME_PREFIX`: Prefix for the created materials (e.g., "Mat_").
* `PBRSET_GROUP_PREFIX`: Prefix used for the PBRSET node groups created by `create_nodegroups.py` (e.g., "PBRSET_").
* `REFERENCE_MAP_TYPES`: List of map types to look for to find a reference image for the material preview (e.g., `["COL", "COL-1"]`).
* `REFERENCE_RESOLUTION_ORDER`: Preferred resolution order for the reference image (e.g., `["1K", "512", "2K", "4K"]`).
* `IMAGE_FILENAME_PATTERN`: Assumed filename pattern for processed images (same as in `create_nodegroups.py`).
* `FALLBACK_IMAGE_EXTENSIONS`: Fallback extensions for finding image files (same as in `create_nodegroups.py`).
* `VIEWPORT_COLOR_MAP_TYPES`: List of map types to check in metadata's `image_stats_1k` for viewport diffuse color.
* `VIEWPORT_ROUGHNESS_MAP_TYPES`: List of map types to check in metadata's `image_stats_1k` for viewport roughness.
* `VIEWPORT_METALLIC_MAP_TYPES`: List of map types to check in metadata's `image_stats_1k` for viewport metallic.
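Taken together, the configuration section might look like this. Every value below is illustrative, drawn from the examples in the list above; the roughness/metallic map-type lists in particular are assumptions:

```python
# Illustrative defaults for the --- USER CONFIGURATION --- section.
PROCESSED_ASSET_LIBRARY_ROOT = "/path/to/processed/library"  # assumption
PBRSET_ASSET_LIBRARY_NAME = "PBRSET_Library"                 # assumption
TEMPLATE_MATERIAL_NAME = "Template_PBRMaterial"
PLACEHOLDER_NODE_LABEL = "PBRSET_PLACEHOLDER"
MATERIAL_NAME_PREFIX = "Mat_"
PBRSET_GROUP_PREFIX = "PBRSET_"
REFERENCE_MAP_TYPES = ["COL", "COL-1"]
REFERENCE_RESOLUTION_ORDER = ["1K", "512", "2K", "4K"]
FALLBACK_IMAGE_EXTENSIONS = [".png", ".jpg", ".exr"]         # assumption
VIEWPORT_COLOR_MAP_TYPES = ["COL", "COL-1"]
VIEWPORT_ROUGHNESS_MAP_TYPES = ["ROUGH"]                     # assumption
VIEWPORT_METALLIC_MAP_TYPES = ["METAL"]                      # assumption
```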
## 4. Helper Functions
The script will include the following helper functions:
* `find_nodes_by_label(node_tree, label, node_type=None)`: Reusable from `create_nodegroups.py` to find nodes in a node tree.
* `add_tag_if_new(asset_data, tag_name)`: Reusable from `create_nodegroups.py` to add asset tags.
* `reconstruct_image_path_with_fallback(...)`: Reusable from `create_nodegroups.py` to find image paths (needed for setting the custom preview).
* `get_stat_value(stats_dict, map_type_list, stat_key)`: A helper function to safely retrieve a specific statistic from the `image_stats_1k` dictionary.
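The new `get_stat_value` helper could be a small pure function along these lines (the exact shape of `image_stats_1k` shown in the docstring is an assumption):

```python
def get_stat_value(stats_dict, map_type_list, stat_key):
    """Safely retrieve a statistic from an image_stats_1k-style dict.

    Tries each map type in order and returns the first matching value,
    or None if nothing matches. Assumed stats shape, e.g.:
    {"COL": {"mean": [0.4, 0.3, 0.2]}, "ROUGH": {"mean": [0.7]}}
    """
    if not isinstance(stats_dict, dict):
        return None
    for map_type in map_type_list:
        stats = stats_dict.get(map_type)
        if isinstance(stats, dict) and stat_key in stats:
            return stats[stat_key]
    return None
```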
## 5. Main Processing Logic (`process_library_for_materials`)
The main function will perform the following steps:
* **Pre-run Checks:**
* Verify `PROCESSED_ASSET_LIBRARY_ROOT` exists and is a directory.
* Verify the `PBRSET_ASSET_LIBRARY_NAME` exists in Blender's user preferences (`bpy.context.preferences.filepaths.asset_libraries`).
* Verify the `TEMPLATE_MATERIAL_NAME` material exists and uses nodes.
* Verify the `PLACEHOLDER_NODE_LABEL` Group node exists in the template material's node tree.
* **Scan for Metadata:**
* Iterate through supplier directories within `PROCESSED_ASSET_LIBRARY_ROOT`.
* Iterate through asset directories within each supplier directory.
* Identify `metadata.json` files.
* **Process Each Metadata File:**
* Load the `metadata.json` file.
* Extract `asset_name`, `supplier_name`, `archetype`, `processed_map_resolutions`, `merged_map_resolutions`, `map_details`, and `image_stats_1k`.
* Determine the expected PBRSET node group name: `f"{PBRSET_GROUP_PREFIX}{asset_name}"`.
* Determine the target material name: `f"{MATERIAL_NAME_PREFIX}{asset_name}"`.
* **Find or Create Material:**
* Check if a material with the `target_material_name` already exists in `bpy.data.materials`.
* If it exists, log a message indicating the asset is being skipped and move to the next metadata file.
* If it doesn't exist, copy the `TEMPLATE_MATERIAL_NAME` material and rename the copy to `target_material_name` (create mode). Handle potential copy failures.
* **Find Placeholder Node:**
* Find the node with `PLACEHOLDER_NODE_LABEL` in the target material's node tree using `find_nodes_by_label`. Handle cases where the node is not found or is not a Group node.
* **Find and Link PBRSET Node Group from Asset Library:**
* Get the path to the `.blend` file associated with the `PBRSET_ASSET_LIBRARY_NAME` from user preferences.
* Use `bpy.data.libraries.load(filepath, link=True)` to link the node group with `target_pbrset_group_name` from the external `.blend` file into the current file. Handle cases where the library or the node group is not found.
* Once linked, get the reference to the newly linked node group in `bpy.data.node_groups`.
* **Link Linked Node Group to Placeholder:**
* If both the placeholder node and the *newly linked* PBRSET node group are found, assign the linked node group to the `node_tree` property of the placeholder node.
* **Mark Material as Asset:**
* If the material is new or not already marked, call `material.asset_mark()`.
* **Copy Asset Tags:**
* If both the material and the *linked* PBRSET node group have asset data, copy tags (supplier, archetype) from the node group to the material using `add_tag_if_new`.
* **Set Custom Preview:**
* Find a suitable reference image path (e.g., lowest resolution COL map) using `reconstruct_image_path_with_fallback` and the `REFERENCE_MAP_TYPES` and `REFERENCE_RESOLUTION_ORDER` configurations.
* If a reference image path is found, use `bpy.ops.ed.lib_id_load_custom_preview` to set the custom preview for the material. This operation requires overriding the context.
* **Set Viewport Properties (using metadata stats):**
* Check if `image_stats_1k` is present and valid in the metadata.
* **Diffuse Color:** Use the `get_stat_value` helper to get the 'mean' stat for map types in `VIEWPORT_COLOR_MAP_TYPES`. If found and is a valid color list `[R, G, B]`, set `material.diffuse_color` to this value.
* **Roughness:** Use the `get_stat_value` helper to get the 'mean' stat for map types in `VIEWPORT_ROUGHNESS_MAP_TYPES`. If found, get the first value (for grayscale). Check if stats for `VIEWPORT_METALLIC_MAP_TYPES` exist. If metallic stats are *not* found, invert the roughness value (`1.0 - value`) before assigning it to `material.roughness`. Clamp the final value between 0.0 and 1.0.
* **Metallic:** Use the `get_stat_value` helper to get the 'mean' stat for map types in `VIEWPORT_METALLIC_MAP_TYPES`. If found, get the first value (for grayscale) and assign it to `material.metallic`. If metallic stats are *not* found, set `material.metallic` to 0.0.
* **Error Handling and Reporting:**
* Include `try...except` blocks to catch errors during file reading, JSON parsing, Blender operations, linking, etc.
* Print informative messages about progress, creation/update status, and errors.
* **Summary Report:**
* Print a summary of how many metadata files were processed, materials created/updated, node groups linked, errors encountered, and assets skipped.
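The viewport-property rules in the steps above (mean color, roughness inversion when no metallic stats exist, clamping) are easy to get wrong, so here is a pure-Python sketch of that logic. The map-type lists and the stats shape are assumptions; in the real script the results would be assigned to `material.diffuse_color`, `material.roughness`, and `material.metallic`:

```python
def _first_stat(stats, map_types, key="mean"):
    """Return the first matching stat value, or None."""
    for mt in map_types:
        entry = (stats or {}).get(mt)
        if isinstance(entry, dict) and key in entry:
            return entry[key]
    return None

def compute_viewport_props(stats):
    """Derive (diffuse_color, roughness, metallic) from image_stats_1k."""
    color = _first_stat(stats, ["COL", "COL-1"])   # assumed map types
    rough = _first_stat(stats, ["ROUGH"])
    metal = _first_stat(stats, ["METAL"])

    diffuse = None
    if isinstance(color, list) and len(color) >= 3:
        diffuse = (color[0], color[1], color[2], 1.0)

    metallic = float(metal[0]) if metal else 0.0

    roughness = None
    if rough:
        value = float(rough[0])
        if metal is None:
            value = 1.0 - value          # invert when no metallic stats exist
        roughness = min(max(value, 0.0), 1.0)  # clamp to [0, 1]
    return diffuse, roughness, metallic
```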
## 6. Process Flow Diagram (Updated)
```mermaid
graph TD
A[Start Script] --> B{Pre-run Checks Pass?};
B -- No --> C[Abort Script];
B -- Yes --> D[Scan Processed Asset Root];
D --> E{Found metadata.json?};
E -- No --> F["Finish Script (No Assets)"];
E -- Yes --> G[Loop through metadata.json files];
G --> H[Read metadata.json];
H --> I{Metadata Valid?};
I -- No --> J[Log Error, Skip Asset];
I -- Yes --> K[Extract Asset Info & Stats];
K --> L[Find or Create Material];
L --> M{Material Exists?};
M -- Yes --> N[Log Skip, Continue Loop];
M -- No --> O[Find Placeholder Node in Material];
O --> P[Find PBRSET NG in Library & Link];
P --> Q{Linking Successful?};
Q -- No --> R[Log Error, Skip Asset];
Q -- Yes --> S[Link Linked NG to Placeholder];
S --> T[Mark Material as Asset];
T --> U[Copy Asset Tags];
U --> V[Find Reference Image Path for Preview];
V --> W{Reference Image Found?};
W -- Yes --> X[Set Custom Material Preview];
W -- No --> Y["Log Warning (No Preview)"];
X --> Z[Set Viewport Properties from Stats];
Y --> Z;
Z --> AA[Increment Counters];
AA --> G;
G --> AB[Print Summary Report];
AB --> AC[End Script];
J --> G;
N --> G;
R --> G;

```
# Blender Addon Plan: Material Merger
**Version:** 1.1 (Includes Extensibility Consideration)
**1. Goal:**
Create a standalone Blender addon that allows users to select two existing materials (generated by the Asset Processor Tool, or previously merged by this addon) and merge them into a new material. The merge should preserve their individual node structures (including custom tweaks) and combine their final outputs using a dedicated `MaterialMerge` node group.
**2. Core Functionality (Approach 2 - Node Copying):**
* **Trigger:** User selects two materials in Blender and invokes an operator (e.g., via a button in the Shader Editor's UI panel).
* **New Material Creation:** The addon creates a new Blender material, named appropriately (e.g., `MAT_Merged_<NameA>_<NameB>`).
* **Node Copying:**
* For *each* selected source material:
* Iterate through its node tree.
* Copy all nodes *except* the `Material Output` node into the *new* material's node tree, attempting to preserve relative layout and offsetting subsequent copies.
* **Identify Final Outputs:** Determine the node providing the final BSDF shader output and the node providing the final Displacement output *before* the original `Material Output` node.
* In a base material (from Asset Processor), these are expected to be the `PBR_BSDF` node group (BSDF output) and the `PBR_Handler` node group (Displacement output).
* In an already-merged material, these will be the outputs of its top-level `MaterialMerge` node group.
* Store references to these final output nodes and their relevant sockets.
* **MaterialMerge Node:**
* **Link/Append** the `MaterialMerge` node group into the new material's node tree.
* **Assumption:** This node group exists in `blender_files/utility_nodegroups.blend` relative to the addon's location.
* **Assumption:** Socket names are `Shader A`, `Shader B`, `Displacement A`, `Displacement B` (inputs) and `BSDF`, `Displacement` (outputs).
* **Connections:**
* Connect the identified final BSDF output of the *first* source material's copied structure to the `MaterialMerge` node's `Shader A` input.
* Connect the identified final Displacement output of the *first* source material's copied structure to the `MaterialMerge` node's `Displacement A` input.
* Connect the identified final BSDF output of the *second* source material's copied structure to the `MaterialMerge` node's `Shader B` input.
* Connect the identified final Displacement output of the *second* source material's copied structure to the `MaterialMerge` node's `Displacement B` input.
* Connect the `MaterialMerge` node's `BSDF` output to the new material's `Material Output` node's `Surface` input.
* Connect the `MaterialMerge` node's `Displacement` output to the new material's `Material Output` node's `Displacement` input.
* **Layout:** Optionally, attempt a basic auto-layout (`node_tree.nodes.update()`) or arrange the key nodes logically.
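The key decision in the flow above is identifying each source material's final BSDF and Displacement outputs. Under the node-name assumptions listed in section 6, that decision reduces to a small lookup, sketched here over plain node-name lists:

```python
def find_final_output_nodes(node_names):
    """Given the node (group) names present in a source material's tree,
    return (bsdf_source, displacement_source) node names, or (None, None)
    if the material cannot be identified. Names follow section 6."""
    if "MaterialMerge" in node_names:
        # Already-merged material: both outputs come from MaterialMerge.
        return "MaterialMerge", "MaterialMerge"
    if "PBR_BSDF" in node_names and "PBR_Handler" in node_names:
        # Base material from the Asset Processor.
        return "PBR_BSDF", "PBR_Handler"
    return None, None
```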
**3. User Interface (UI):**
* A simple panel in the Blender Shader Editor (Properties region - 'N' panel).
* Two dropdowns or search fields allowing the user to select existing materials from the current `.blend` file.
* A button labeled "Merge Selected Materials".
* Status messages/feedback (e.g., "Merged material created: [Name]", "Error: Could not find required nodes in [Material Name]").
**4. Addon Structure (Python):**
* `__init__.py`: Registers the addon, panel, and operator classes.
* `operator.py`: Contains the `OT_MergeMaterials` operator class implementing the core logic.
* `panel.py`: Contains the `PT_MaterialMergePanel` class defining the UI layout.
* (Optional) `utils.py`: Helper functions for node finding, copying, linking, identifying final outputs, etc.
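A minimal skeleton of the addon structure could look like the following. The class names match the plan, but the `bl_idname`, panel placement, and `bl_info` values are assumptions; `bpy` is guarded so the sketch is readable outside Blender:

```python
try:
    import bpy
except ImportError:
    bpy = None  # allows inspection outside Blender

bl_info = {
    "name": "Material Merger",       # assumption
    "blender": (3, 0, 0),            # assumption
    "category": "Material",
}

def merged_material_name(name_a: str, name_b: str) -> str:
    """Naming rule from section 2."""
    return f"MAT_Merged_{name_a}_{name_b}"

if bpy is not None:
    class OT_MergeMaterials(bpy.types.Operator):
        """Merge two selected materials via the MaterialMerge node group."""
        bl_idname = "material.merge_materials"   # assumption
        bl_label = "Merge Selected Materials"

        def execute(self, context):
            # Core logic (node copying, linking, connecting) goes here.
            return {"FINISHED"}

    class PT_MaterialMergePanel(bpy.types.Panel):
        bl_label = "Material Merger"
        bl_space_type = "NODE_EDITOR"
        bl_region_type = "UI"

        def draw(self, context):
            self.layout.operator(OT_MergeMaterials.bl_idname)

    _classes = (OT_MergeMaterials, PT_MaterialMergePanel)

    def register():
        for cls in _classes:
            bpy.utils.register_class(cls)

    def unregister():
        for cls in reversed(_classes):
            bpy.utils.unregister_class(cls)
```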
**5. Error Handling:**
* Check if two valid materials are selected.
* Verify that the selected materials have node trees.
* Handle cases where the expected final BSDF/Displacement output nodes cannot be reliably identified in one or both source materials.
* Handle potential errors during node copying.
* Handle errors if the `utility_nodegroups.blend` file or the `MaterialMerge` node group within it cannot be found/linked.
**6. Assumptions to Verify (Based on User Feedback):**
* **Node Identification:**
* Base Material Handler: Node named `PBR_Handler`.
* Base Material BSDF: Node named `PBR_BSDF`.
* Merged Material Outputs: The `BSDF` and `Displacement` outputs of the top-level `MaterialMerge` node.
* **`MaterialMerge` Node:**
* Location: `blender_files/utility_nodegroups.blend` (relative path).
* Input Sockets: `Shader A`, `Shader B`, `Displacement A`, `Displacement B`.
* Output Sockets: `BSDF`, `Displacement`.
**7. Future Extensibility - Recursive Merging:**
* The core merging logic (copying nodes, identifying final outputs, connecting to a new `MaterialMerge` node) is designed to inherently support selecting an already-merged material as an input without requiring separate code paths initially. The identification of final BSDF/Displacement outputs needs to correctly handle both base materials and merged materials (checking for `PBR_BSDF`/`PBR_Handler` or the outputs of an existing `MaterialMerge` node).
**8. Mermaid Diagram of Node Flow:**
```mermaid
graph TD
subgraph New Merged Material
subgraph SrcA["Copied from Source A (Mat_A or Merge_A)"]
%% Nodes representing the structure of Source A
Structure_A[...]
Final_BSDF_A[Final BSDF Output A]
Final_Disp_A[Final Displacement Output A]
Structure_A --> Final_BSDF_A
Structure_A --> Final_Disp_A
end
subgraph SrcB["Copied from Source B (Mat_B or Merge_B)"]
%% Nodes representing the structure of Source B
Structure_B[...]
Final_BSDF_B[Final BSDF Output B]
Final_Disp_B[Final Displacement Output B]
Structure_B --> Final_BSDF_B
Structure_B --> Final_Disp_B
end
Merge[MaterialMerge]
Output[Material Output]
    Final_BSDF_A -- "Shader A" --> Merge
    Final_Disp_A -- "Displacement A" --> Merge
    Final_BSDF_B -- "Shader B" --> Merge
    Final_Disp_B -- "Displacement B" --> Merge
    Merge -- "BSDF to Surface" --> Output
    Merge -- "Displacement" --> Output
end

```
# Architectural Plan: Data Flow Refinement (v3)
**Date:** 2025-04-30
**Author:** Roo (Architect Mode)
**Status:** Approved
## 1. Goal
Refine the application's data flow to establish the GUI as the single source of truth for processing rules. This involves moving prediction/preset logic upstream from the backend processor and ensuring the backend receives a *complete* `SourceRule` object for processing, thereby simplifying the processor itself. This version of the plan involves creating a new processing module (`processing_engine.py`) instead of refactoring the existing `asset_processor.py`.
## 2. Proposed Data Flow
The refined data flow centralizes rule generation and modification within the GUI components before passing a complete, explicit rule set to the backend. The `SourceRule` object structure serves as a consistent data contract throughout the pipeline.
```mermaid
sequenceDiagram
participant User
participant GUI_MainWindow as GUI (main_window.py)
participant GUI_Predictor as Predictor (prediction_handler.py)
participant GUI_UnifiedView as Unified View (unified_view_model.py)
participant Main as main.py
participant ProcessingEngine as New Backend (processing_engine.py)
participant Config as config.py
User->>+GUI_MainWindow: Selects Input & Preset
Note over GUI_MainWindow: Scans input, gets file list
GUI_MainWindow->>+GUI_Predictor: Request Prediction(File List, Preset Name, Input ID)
GUI_Predictor->>+Config: Load Preset Rules & Canonical Types
Config-->>-GUI_Predictor: Return Rules & Types
%% Prediction Logic (Internal to Predictor)
Note over GUI_Predictor: Perform file analysis (based on list), apply preset rules, generate COMPLETE SourceRule hierarchy (only overridable fields populated)
GUI_Predictor-->>-GUI_MainWindow: Return List[SourceRule] (Initial Rules)
GUI_MainWindow->>+GUI_UnifiedView: Populate View(List[SourceRule])
GUI_UnifiedView->>+Config: Read Allowed Asset/File Types for Dropdowns
Config-->>-GUI_UnifiedView: Return Allowed Types
Note over GUI_UnifiedView: Display rules, allow user edits
User->>GUI_UnifiedView: Modifies Rules (Overrides)
GUI_UnifiedView-->>GUI_MainWindow: Update SourceRule Objects in Memory
User->>+GUI_MainWindow: Trigger Processing
GUI_MainWindow->>+Main: Send Final List[SourceRule]
Main->>+ProcessingEngine: Queue Task(SourceRule) for each input
Note over ProcessingEngine: Execute processing based *solely* on the provided SourceRule and static config. No internal prediction/fallback.
ProcessingEngine-->>-Main: Processing Result
Main-->>-GUI_MainWindow: Update Status
GUI_MainWindow-->>User: Show Result/Status
```
## 3. Module-Specific Changes
* **`config.py`:**
* **Add Canonical Lists:** Introduce `ALLOWED_ASSET_TYPES` (e.g., `["Surface", "Model", "Decal", "Atlas", "UtilityMap"]`) and `ALLOWED_FILE_TYPES` (e.g., `["MAP_COL", "MAP_NRM", ..., "MODEL", "EXTRA", "FILE_IGNORE"]`).
* **Purpose:** Single source of truth for GUI dropdowns and validation.
* **Existing Config:** Retains static definitions like `IMAGE_RESOLUTIONS`, `MAP_MERGE_RULES`, `JPG_QUALITY`, etc.
* **`rule_structure.py`:**
* **Remove Enums:** Remove `AssetType` and `ItemType` Enums. Update `AssetRule.asset_type`, `FileRule.item_type_override`, etc., to use string types validated against `config.py` lists.
* **Field Retention:** Keep `FileRule.resolution_override` and `FileRule.channel_merge_instructions` fields for structural consistency, but they will not be populated or used for overrides in this flow.
* **`gui/prediction_handler.py` (or equivalent):**
* **Enhance Prediction Logic:** Modify `run_prediction` method.
* **Input:** Accept `input_source_identifier` (string), `file_list` (List[str] of relative paths), and `preset_name` (string) when called from GUI.
* **Load Config:** Read `ALLOWED_ASSET_TYPES`, `ALLOWED_FILE_TYPES`, and preset rules.
* **Relocate Classification:** Integrate classification/naming logic (previously in `asset_processor.py`) to operate on the provided `file_list`.
* **Generate Complete Rules:** Populate `SourceRule`, `AssetRule`, and `FileRule` objects.
* Set initial values only for *overridable* fields (e.g., `asset_type`, `item_type_override`, `target_asset_name_override`, `supplier_identifier`, `output_format_override`) based on preset rules/defaults.
* Explicitly **do not** populate static config fields like `FileRule.resolution_override` or `FileRule.channel_merge_instructions`.
* **Temporary Files (If needed for non-GUI):** May need logic later to handle direct path inputs (CLI/Docker) involving temporary extraction/cleanup, but the primary GUI flow uses the provided list.
* **Output:** Emit `rule_hierarchy_ready` signal with the `List[SourceRule]`.
* **NEW: `processing_engine.py` (New Module):**
* **Purpose:** Contains a new class (e.g., `ProcessingEngine`) for executing the processing pipeline based solely on a complete `SourceRule` and static configuration. Replaces `asset_processor.py` in the main workflow.
* **Initialization (`__init__`):** Takes the static `Configuration` object as input.
* **Core Method (`process`):** Accepts a single, complete `SourceRule` object. Orchestrates processing steps (workspace setup, extraction, map processing, merging, metadata, organization, cleanup).
* **Helper Methods (Refactored Logic):** Implement simplified versions of processing helpers (e.g., `_process_individual_maps`, `_merge_maps_from_source`, `_generate_metadata_file`, `_organize_output_files`, `_load_and_transform_source`, `_save_image`).
* Retrieve *overridable* parameters directly from the input `SourceRule`.
* Retrieve *static configuration* parameters (resolutions, merge rules) **only** from the stored `Configuration` object.
* Contain **no** prediction, classification, or fallback logic.
* **Dependencies:** `rule_structure.py`, `configuration.py`, `config.py`, cv2, numpy, etc.
* **`asset_processor.py` (Old Module):**
* **Status:** Remains in the codebase **unchanged** for reference.
* **Usage:** No longer called by `main.py` or GUI for standard processing.
* **`gui/main_window.py`:**
* **Scan Input:** Perform the initial scan of each input directory or archive to obtain its file list.
* **Initiate Prediction:** Call `PredictionHandler` with the file list, preset, and input identifier.
* **Receive/Pass Rules:** Handle `rule_hierarchy_ready`, pass `SourceRule` list to `UnifiedViewModel`.
* **Send Final Rules:** Send the final `SourceRule` list to `main.py`.
* **`gui/unified_view_model.py` / `gui/delegates.py`:**
* **Load Dropdown Options:** Source dropdowns (`AssetType`, `ItemType`) from `config.py`.
* **Data Handling:** Read/write user modifications to overridable fields in `SourceRule` objects.
* **No UI for Static Config:** Do not provide UI editing for resolution or merge instructions.
* **`main.py`:**
* **Receive Rule List:** Accept `List[SourceRule]` from GUI.
* **Instantiate New Engine:** Import and instantiate the new `ProcessingEngine` from `processing_engine.py`.
* **Queue Tasks:** Iterate `SourceRule` list, queue tasks.
* **Call New Engine:** Pass the individual `SourceRule` object to `ProcessingEngine.process` for each task.
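To make the `rule_structure.py` changes concrete, the string-typed rule hierarchy validated against `config.py` lists might look like this. Field names beyond those mentioned above (e.g. `file_path`) are assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Would live in config.py as the single source of truth.
ALLOWED_ASSET_TYPES = ["Surface", "Model", "Decal", "Atlas", "UtilityMap"]

@dataclass
class FileRule:
    file_path: str                                  # name assumed
    item_type_override: Optional[str] = None
    output_format_override: Optional[str] = None
    # Kept for structural consistency, but not populated by the predictor:
    resolution_override: Optional[str] = None
    channel_merge_instructions: Optional[dict] = None

@dataclass
class AssetRule:
    asset_name: str
    asset_type: str = "Surface"
    files: List[FileRule] = field(default_factory=list)

    def __post_init__(self):
        # Strings replace the old Enums; validate against config.py.
        if self.asset_type not in ALLOWED_ASSET_TYPES:
            raise ValueError(f"Unknown asset type: {self.asset_type}")

@dataclass
class SourceRule:
    input_source_identifier: str
    supplier_identifier: Optional[str] = None
    assets: List[AssetRule] = field(default_factory=list)
```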
## 4. Rationale / Benefits
* **Single Source of Truth:** GUI holds the final `SourceRule` objects.
* **Backend Simplification:** New `processing_engine.py` is focused solely on execution based on explicit rules and static config.
* **Decoupling:** Reduced coupling between GUI/prediction and backend processing.
* **Clarity:** Clearer data flow and component responsibilities.
* **Maintainability:** Easier maintenance and debugging.
* **Centralized Definitions:** `config.py` centralizes allowed types.
* **Preserves Reference:** Keeps `asset_processor.py` available for comparison.
* **Consistent Data Contract:** `SourceRule` structure is consistent from predictor output to engine input, enabling potential GUI bypass.
## 5. Potential Issues / Considerations
* **`PredictionHandler` Complexity:** Will require careful implementation of classification/rule population logic.
* **Performance:** Prediction logic needs to remain performant (threading).
* **Rule Structure Completeness:** Ensure `SourceRule` dataclasses hold all necessary *overridable* fields.
* **Preset Loading:** Robust preset loading/interpretation needed in `PredictionHandler`.
* **Static Config Loading:** Ensure the new `ProcessingEngine` correctly loads and uses the static `Configuration` object.
## 6. Documentation
This document (`ProjectNotes/Data_Flow_Refinement_Plan.md`) serves as the architectural plan. Relevant sections of the Developer Guide will need updating upon implementation.

# Data Interface: GUI Preview Edits to Processing Handler
## 1. Purpose
This document defines the data structures and interface used to pass user edits made in the GUI's file preview table (specifically changes to 'Status' and 'Predicted Asset' output name) to the backend `ProcessingHandler`. It also incorporates a structure for future asset-level properties.
## 2. Data Structures
Two primary data structures are used:
### 2.1. File Data (`file_list`)
* **Type:** `list[dict]`
* **Description:** A flat list where each dictionary represents a single file identified during the prediction phase. This list is modified by the GUI to reflect user edits to file status and output names.
* **Dictionary Keys (per file):**
* `original_path` (`str`): The full path to the source file. (Read-only by GUI)
* `predicted_asset_name` (`str | None`): The name of the asset group the file belongs to, derived from input. (Read-only by GUI)
* `predicted_output_name` (`str | None`): The backend's predicted final output filename. **EDITABLE** by the user in the GUI ('Predicted Asset' column).
* `status` (`str`): The backend's predicted status (e.g., 'Mapped', 'Ignored'). **EDITABLE** by the user in the GUI ('Status' column).
* `details` (`str | None`): Additional information or error messages. (Read-only by GUI, potentially updated by model validation).
* `source_asset` (`str`): An identifier for the source asset group (e.g., input folder/zip name). (Read-only by GUI)
### 2.2. Asset Properties (`asset_properties`)
* **Type:** `dict[str, dict]`
* **Description:** A dictionary mapping the `source_asset` identifier (string key) to a dictionary of asset-level properties determined by the backend prediction/preset. This structure is initially read-only in the GUI but designed for future expansion (e.g., editing asset category).
* **Asset Properties Dictionary Keys (Example):**
* `asset_category` (`str`): The determined category (e.g., 'surface', 'model').
* `asset_tags` (`list[str]`): Any relevant tags associated with the asset.
* *(Other future asset-level properties can be added here)*
## 3. Data Flow & Interface
```mermaid
graph LR
subgraph Backend
A[Prediction Logic] -- Generates --> B(file_list);
A -- Generates --> C(asset_properties);
end
subgraph GUI Components
D[PredictionHandler] -- prediction_results_ready(file_list, asset_properties) --> E(PreviewTableModel);
E -- Stores & Allows Edits --> F(Internal file_list);
E -- Stores --> G(Internal asset_properties);
H[MainWindow] -- Retrieves --> F;
H -- Retrieves --> G;
H -- Passes (file_list, asset_properties) --> I[ProcessingHandler];
end
subgraph Backend
I -- Uses Edited --> F;
I -- Uses Read-Only --> G;
end
style A fill:lightblue,stroke:#333,stroke-width:1px
style B fill:lightgreen,stroke:#333,stroke-width:1px
style C fill:lightyellow,stroke:#333,stroke-width:1px
style D fill:#f9f,stroke:#333,stroke-width:2px
style E fill:#ccf,stroke:#333,stroke-width:2px
style F fill:lightgreen,stroke:#333,stroke-width:1px
style G fill:lightyellow,stroke:#333,stroke-width:1px
style H fill:#f9f,stroke:#333,stroke-width:2px
style I fill:lightblue,stroke:#333,stroke-width:2px
```
1. **Prediction:** The `PredictionHandler` generates both the `file_list` and `asset_properties`.
2. **Signal:** It emits a signal (e.g., `prediction_results_ready`) containing both structures, likely as a tuple `(file_list, asset_properties)`.
3. **Table Model:** The `PreviewTableModel` receives the tuple. It stores `asset_properties` (read-only for now). It stores `file_list` and allows user edits to the `status` and `predicted_output_name` values within this list.
4. **Processing Trigger:** When the user initiates processing, the `MainWindow` retrieves the (potentially modified) `file_list` and the (unmodified) `asset_properties` from the `PreviewTableModel`.
5. **Processing Execution:** The `MainWindow` passes both structures to the `ProcessingHandler`.
6. **Handler Logic:** The `ProcessingHandler` iterates through the `file_list`. For each file, it uses the potentially edited `status` and `predicted_output_name`. If asset-level information is needed, it uses the file's `source_asset` key to look up the data in the `asset_properties` dictionary.
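Before emitting the signal, the `PredictionHandler` could bundle and sanity-check the two structures like this (a sketch; the key set matches section 2.1, while the function name is hypothetical):

```python
REQUIRED_FILE_KEYS = {
    "original_path", "predicted_asset_name", "predicted_output_name",
    "status", "details", "source_asset",
}

def build_prediction_payload(file_list, asset_properties):
    """Validate the file dicts and bundle both structures into the
    tuple payload carried by prediction_results_ready."""
    for entry in file_list:
        missing = REQUIRED_FILE_KEYS - entry.keys()
        if missing:
            raise ValueError(f"file entry missing keys: {sorted(missing)}")
    return (file_list, asset_properties)
```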
## 4. Example Data Passed to `ProcessingHandler`
* **`file_list` (Example):**
```python
[
{
'original_path': 'C:/Path/To/AssetA/AssetA_Diffuse.png',
'predicted_asset_name': 'AssetA',
'predicted_output_name': 'T_AssetA_BC.tga',
'status': 'Ignored', # <-- User Edit
'details': 'User override: Ignored',
'source_asset': 'AssetA'
},
{
'original_path': 'C:/Path/To/AssetA/AssetA_Normal.png',
'predicted_asset_name': 'AssetA',
'predicted_output_name': 'T_AssetA_Normals_DX.tga', # <-- User Edit
'status': 'Mapped',
'details': None,
'source_asset': 'AssetA'
},
# ... other files
]
```
* **`asset_properties` (Example):**
```python
{
'AssetA': {
'asset_category': 'surface',
'asset_tags': ['wood', 'painted']
},
'AssetB': {
'asset_category': 'model',
'asset_tags': ['metal', 'sci-fi']
}
# ... other assets
}
```
## 5. Implications
* **`PredictionHandler`:** Needs modification to generate and emit both `file_list` and `asset_properties`. Signal signature changes.
* **`PreviewTableModel`:** Needs modification to receive, store, and provide both structures. Must implement editing capabilities (`flags`, `setData`) for the relevant columns using the `file_list`. Needs methods like `get_edited_data()` returning both structures.
* **`MainWindow`:** Needs modification to retrieve both structures from the table model and pass them to the `ProcessingHandler`.
* **`ProcessingHandler`:** Needs modification to accept both structures in its processing method signature. Must update logic to use the edited `status` and `predicted_output_name` from `file_list` and look up data in `asset_properties` using `source_asset` when needed.
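The `ProcessingHandler` side of this interface — honoring user edits and looking up asset-level data via `source_asset` — can be sketched as a small generator (function name hypothetical):

```python
def apply_user_edits(file_list, asset_properties):
    """Yield (path, output_name, category) for files the user left active,
    honoring edited status/output names and looking up asset properties."""
    for entry in file_list:
        if entry["status"] == "Ignored":
            continue  # user excluded this file
        props = asset_properties.get(entry["source_asset"], {})
        yield (entry["original_path"],
               entry["predicted_output_name"],
               props.get("asset_category"))
```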

# FEAT-003: Selective Nodegroup Generation and Category Tagging - Implementation Plan
**Objective:** Modify `blenderscripts/create_nodegroups.py` to read the asset category from `metadata.json`, conditionally create nodegroups for "Surface" and "Decal" assets, and add the category as a tag to the Blender asset.
**Plan:**
1. **Modify `blenderscripts/create_nodegroups.py`:**
* Locate the main loop in `blenderscripts/create_nodegroups.py` that iterates through the processed assets.
* Inside this loop, for each asset directory, construct the path to the `metadata.json` file.
* Read the `metadata.json` file using Python's `json` module.
* Extract the `category` value from the parsed JSON data.
* Implement a conditional check: If the extracted `category` is *not* "Surface" and *not* "Decal", skip the existing nodegroup creation logic for this asset and proceed to the tagging step.
* If the `category` *is* "Surface" or "Decal", execute the existing nodegroup creation logic.
* After the conditional nodegroup creation (or skipping), use the Blender Python API (`bpy`) to find the corresponding Blender asset (likely the created node group, if applicable, or potentially the asset representation even if the nodegroup was skipped).
* Add the extracted `category` string as a tag to the found Blender asset.
2. **Testing:**
* Prepare a test set of processed assets that includes examples of "Surface", "Decal", and "Asset" categories, each with a corresponding `metadata.json` file.
* Run the modified `create_nodegroups.py` script within Blender, pointing it to the test asset library root.
* Verify in Blender that:
* Node groups were created only for the "Surface" and "Decal" assets.
* No node groups were created for "Asset" category assets.
* All processed assets (Surface, Decal, and Asset) have a tag corresponding to their category ("Surface", "Decal", or "Asset") in the Blender asset browser.
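The metadata-reading and category-gating portion of step 1 could look like this (a sketch; the tagging itself requires `bpy` and is omitted here):

```python
import json
from pathlib import Path

# Categories that get node groups; "Asset"/others are tag-only.
NODEGROUP_CATEGORIES = {"Surface", "Decal"}

def read_category(asset_dir):
    """Read the `category` value from an asset directory's metadata.json."""
    meta_path = Path(asset_dir) / "metadata.json"
    with open(meta_path, "r", encoding="utf-8") as fh:
        return json.load(fh).get("category")

def should_create_nodegroup(category):
    """Create node groups only for Surface and Decal assets."""
    return category in NODEGROUP_CATEGORIES
```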
```mermaid
graph TD
A[Start Script] --> B{Iterate Assets};
B --> C[Read metadata.json];
C --> D[Get Category];
D --> E{Category in ["Surface", "Decal"]?};
E -- Yes --> F[Create Nodegroup];
E -- No --> G[Skip Nodegroup];
F --> H[Find Blender Asset];
G --> H[Find Blender Asset];
H --> I[Add Category Tag];
I --> B;
B -- No More Assets --> J[End Script];

```
# Plan for Adding .rar and .7z Support
**Goal:** Extend the Asset Processor Tool to accept `.rar` and `.7z` files as input sources, in addition to the currently supported `.zip` files and folders.
**Plan:**
1. **Add Required Libraries:**
* Update the `requirements.txt` file to include `py7zr` and `rarfile` as dependencies. This will ensure these libraries are installed when setting up the project.
2. **Modify Input Extraction Logic:**
* Locate the `_extract_input` method within the `AssetProcessor` class in `asset_processor.py`.
* Modify this method to check the file extension of the input source.
* If the extension is `.zip`, retain the existing extraction logic using Python's built-in `zipfile` module.
* If the extension is `.rar`, implement extraction using the `rarfile` library.
* If the extension is `.7z`, implement extraction using the `py7zr` library.
* Include error handling for corrupted archives, encrypted archives (password support is not implemented at this stage, so these should be skipped or logged as errors), and unsupported compression methods. Log appropriate warnings or errors in such cases.
* If the input is a directory, retain the existing logic to copy its contents to the temporary workspace.
3. **Update CLI and Monitor Input Handling:**
* Review `main.py` (CLI entry point) and `monitor.py` (Directory Monitor).
* Ensure that the argument parsing in `main.py` can accept `.rar` and `.7z` file paths as valid inputs.
* In `monitor.py`, modify the `ZipHandler` (or create a new handler) to watch for `.rar` and `.7z` file creation events in the watched directory, in addition to `.zip` files. The logic for triggering processing via `main.run_processing` should then be extended to handle these new file types.
4. **Update Documentation:**
* Edit `Documentation/00_Overview.md` to explicitly mention `.rar` and `.7z` as supported input formats in the overview section.
* Update `Documentation/01_User_Guide/02_Features.md` to list `.rar` and `.7z` alongside `.zip` and folders in the features list.
* Modify `Documentation/01_User_Guide/03_Installation.md` to include instructions for installing the new `py7zr` and `rarfile` dependencies (likely via `pip install -r requirements.txt`).
* Revise `Documentation/02_Developer_Guide/05_Processing_Pipeline.md` to accurately describe the updated `_extract_input` method, detailing how `.zip`, `.rar`, `.7z`, and directories are handled.
5. **Testing:**
* Prepare sample `.rar` and `.7z` files (including nested directories and various file types) to test the extraction logic thoroughly.
* Test processing of these new archive types via both the CLI and the Directory Monitor.
* Verify that the subsequent processing steps (classification, map processing, metadata generation, etc.) work correctly with files extracted from `.rar` and `.7z` archives.
Here is a simplified flow diagram illustrating the updated input handling:
```mermaid
graph TD
A[Input Source] --> B{Is it a file or directory?};
B -- Directory --> C[Copy Contents to Workspace];
B -- File --> D{What is the file extension?};
D -- .zip --> E[Extract using zipfile];
D -- .rar --> F[Extract using rarfile];
D -- .7z --> G[Extract using py7zr];
E --> H[Temporary Workspace];
F --> H;
G --> H;
C --> H;
H --> I[Processing Pipeline Starts];
```
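The extension dispatch above could be sketched as follows. This is a hedged illustration, not the actual implementation: the real logic belongs in the `_extract_input` method of `asset_processor.py`, and the standalone function name and boolean return convention here are assumptions.

```python
import logging
import shutil
import zipfile
from pathlib import Path

log = logging.getLogger(__name__)

def extract_input(source: Path, workspace: Path) -> bool:
    """Illustrative dispatch for the planned _extract_input update.

    Returns True on success; corrupted, encrypted, or unsupported
    archives are logged and reported as failures.
    """
    workspace.mkdir(parents=True, exist_ok=True)
    try:
        if source.is_dir():
            # Existing behaviour: copy directory contents into the workspace.
            shutil.copytree(source, workspace, dirs_exist_ok=True)
        elif source.suffix.lower() == ".zip":
            # Existing behaviour: built-in zipfile module.
            with zipfile.ZipFile(source) as zf:
                zf.extractall(workspace)
        elif source.suffix.lower() == ".rar":
            import rarfile  # third-party; also needs an unrar backend installed
            with rarfile.RarFile(source) as rf:
                rf.extractall(workspace)
        elif source.suffix.lower() == ".7z":
            import py7zr  # third-party
            with py7zr.SevenZipFile(source, mode="r") as sz:
                sz.extractall(path=workspace)
        else:
            log.error("Unsupported input type: %s", source)
            return False
    except Exception as exc:  # bad/encrypted archive, unsupported method, I/O error
        log.error("Extraction failed for %s: %s", source, exc)
        return False
    return True
```

Deferring the third-party imports keeps `.zip` and directory inputs working even when the optional `rarfile`/`py7zr` dependencies are not installed.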

View File

@@ -1,51 +0,0 @@
# Plan: Implement "Force Lossless" Format for Specific Map Types
**Goal:** Modify the asset processor to ensure specific map types ("NRM", "DISP") are always saved in a lossless format (PNG or EXR based on bit depth), overriding the JPG threshold and input format rules. This rule should apply to both individually processed maps and merged maps.
**Steps:**
1. **Add Configuration Setting (`config.py`):**
* Introduce a new list named `FORCE_LOSSLESS_MAP_TYPES` in `config.py`.
* Populate this list: `FORCE_LOSSLESS_MAP_TYPES = ["NRM", "DISP"]`.
2. **Expose Setting in `Configuration` Class (`configuration.py`):**
* Add a default value for `FORCE_LOSSLESS_MAP_TYPES` in the `_load_core_config` method's `default_core_settings` dictionary.
* Add a new property to the `Configuration` class to access this list:
```python
@property
def force_lossless_map_types(self) -> list:
"""Gets the list of map types that must always be saved losslessly."""
return self._core_settings.get('FORCE_LOSSLESS_MAP_TYPES', [])
```
3. **Modify `_process_maps` Method (`asset_processor.py`):**
* Locate the section determining the output format (around line 805).
* **Before** the `if output_bit_depth == 8 and target_dim >= threshold:` check (line 811), insert the new logic:
* Check if the current `map_type` is in `self.config.force_lossless_map_types`.
* If yes, determine the appropriate lossless format (`png` or configured 16-bit format like `exr`) based on `output_bit_depth`, set `output_format`, `output_ext`, `save_params`, and `needs_float16` accordingly, and skip the subsequent `elif` / `else` blocks for format determination.
* Use an `elif` for the existing JPG threshold check and the final `else` for the rule-based logic, ensuring they only run if `force_lossless` is false.
4. **Modify `_merge_maps` Method (`asset_processor.py`):**
* Locate the section determining the output format for the merged map (around line 1151).
* **Before** the `if output_bit_depth == 8 and target_dim >= threshold:` check (line 1158), insert similar logic as in step 3:
* Check if the `output_map_type` (the type of the *merged* map) is in `self.config.force_lossless_map_types`.
* If yes, determine the appropriate lossless format based on the merged map's `output_bit_depth`, set `output_format`, `output_ext`, `save_params`, and `needs_float16`, and skip the subsequent `elif` / `else` blocks.
* Use `elif` and `else` for the existing threshold and hierarchy logic.
**Process Flow Diagram:**
```mermaid
graph TD
subgraph "Format Determination (per resolution, _process_maps & _merge_maps)"
A[Start] --> B(Get map_type / output_map_type);
B --> C{Determine output_bit_depth};
C --> D{Is map_type in FORCE_LOSSLESS_MAP_TYPES?};
D -- Yes --> E["Set format = Lossless (PNG/EXR based on bit_depth)"];
D -- No --> F{Is output_bit_depth == 8 AND target_dim >= threshold?};
F -- Yes --> G[Set format = JPG];
F -- No --> H[Set format based on input/hierarchy/rules];
G --> I[Set Save Params];
H --> I;
E --> I;
I --> J[Save Image];
end
```
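Expressed as a plain function, the decision order from steps 3–4 might look like this. A hedged sketch: the function name, parameters, and rule-based fallback are illustrative, whereas the real code sets `output_format`, `output_ext`, `save_params`, and `needs_float16` inline.

```python
FORCE_LOSSLESS_MAP_TYPES = ["NRM", "DISP"]  # from config.py per step 1

def choose_output_format(map_type: str, output_bit_depth: int,
                         target_dim: int, jpg_threshold: int,
                         sixteen_bit_format: str = "exr",
                         rule_based_format: str = "png") -> str:
    """Illustrative format decision for _process_maps / _merge_maps."""
    if map_type in FORCE_LOSSLESS_MAP_TYPES:
        # Forced lossless: PNG for 8-bit, configured 16-bit format otherwise.
        return "png" if output_bit_depth == 8 else sixteen_bit_format
    elif output_bit_depth == 8 and target_dim >= jpg_threshold:
        # Existing JPG threshold rule, now only reached when not forced lossless.
        return "jpg"
    else:
        # Fall through to the existing input/hierarchy/rule-based logic.
        return rule_based_format
```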

View File

@@ -1,17 +0,0 @@
Here is a summary of our session focused on adding OpenEXR support:
Goal:
Enable the Asset Processor Tool to reliably write .exr files for 16-bit maps, compatible with Windows executables and Docker.
Approach:
We decided to use the dedicated openexr Python library for writing EXR files, rather than compiling a custom OpenCV build. The plan involved modifying asset_processor.py to use this library, updating the Dockerfile with necessary system libraries, and outlining steps for PyInstaller on Windows.
Issues & Fixes:
Initial Changes: Modified asset_processor.py and Dockerfile.
NameError: name 'log' is not defined: Fixed by moving the logger initialization earlier in asset_processor.py.
AttributeError: module 'Imath' has no attribute 'Header' & NameError: name 'fallback_fmt' is not defined: Attempted fixes by changing Imath.Header to OpenEXR.Header and correcting one instance of fallback_fmt.
TypeError: __init__(): incompatible constructor arguments... Invoked with: HALF & NameError: name 'fallback_fmt' is not defined (again): Corrected the OpenEXR.Channel constructor call and fixed the remaining fallback_fmt typo.
concurrent.futures.process.BrokenProcessPool: The script crashed when attempting the EXR save, likely due to an internal OpenEXR library error or a Windows dependency issue.
Next Steps (when resuming):
Investigate the BrokenProcessPool error, possibly by testing EXR saving with a simple array or verifying Windows dependencies for the OpenEXR library.

View File

@@ -1,52 +0,0 @@
# GUI Blender Integration Plan
## Goal
Add a checkbox and input fields to the GUI (`gui/main_window.py`) to enable/disable Blender script execution and specify the `.blend` file paths, defaulting to `config.py` values. Integrate this control into the processing logic (`gui/processing_handler.py`).
## Proposed Plan
1. **Modify `gui/main_window.py`:**
* Add a `QCheckBox` (e.g., `self.blender_integration_checkbox`) to the processing panel layout.
* Add two pairs of `QLineEdit` widgets and `QPushButton` browse buttons for the nodegroup `.blend` path (`self.nodegroup_blend_path_input`, `self.browse_nodegroup_blend_button`) and the materials `.blend` path (`self.materials_blend_path_input`, `self.browse_materials_blend_button`).
* Initialize the text of the `QLineEdit` widgets by reading the `DEFAULT_NODEGROUP_BLEND_PATH` and `DEFAULT_MATERIALS_BLEND_PATH` values from `config.py` when the GUI starts.
* Connect signals from the browse buttons to new methods that open a `QFileDialog` to select `.blend` files and update the corresponding input fields.
* Modify the slot connected to the "Start Processing" button to:
* Read the checked state of `self.blender_integration_checkbox`.
* Read the text from `self.nodegroup_blend_path_input` and `self.materials_blend_path_input`.
* Pass these three pieces of information (checkbox state, nodegroup path, materials path) to the `ProcessingHandler` when initiating the processing task.
2. **Modify `gui/processing_handler.py`:**
* Add parameters to the method that starts the processing (likely `start_processing`) to accept the Blender integration flag (boolean), the nodegroup `.blend` path (string), and the materials `.blend` path (string).
* Implement the logic for finding the Blender executable (reading `BLENDER_EXECUTABLE_PATH` from `config.py` or checking PATH) within `processing_handler.py`.
* Implement the logic for executing the Blender scripts using `subprocess.run` within `processing_handler.py`. This logic should be similar to the `run_blender_script` function added to `main.py` in the previous step.
* Ensure this Blender script execution logic is conditional based on the received integration flag and runs *after* the main asset processing (handled by the worker pool) is complete.
## Execution Flow Diagram (GUI)
```mermaid
graph TD
A[GUI: User Clicks Start Processing] --> B{Blender Integration Checkbox Checked?};
B -- Yes --> C[Get Blend File Paths from Input Fields];
C --> D[Pass Paths and Flag to ProcessingHandler];
D --> E[ProcessingHandler: Start Asset Processing (Worker Pool)];
E --> F[ProcessingHandler: Asset Processing Complete];
F --> G{Blender Integration Flag True?};
G -- Yes --> H[ProcessingHandler: Find Blender Executable];
H --> I{Blender Executable Found?};
I -- Yes --> J{Nodegroup Blend Path Valid?};
J -- Yes --> K[ProcessingHandler: Run Nodegroup Script in Blender];
K --> L{Script Successful?};
L -- Yes --> M{Materials Blend Path Valid?};
L -- No --> N[ProcessingHandler: Report Nodegroup Error];
N --> M;
M -- Yes --> O[ProcessingHandler: Run Materials Script in Blender];
O --> P{Script Successful?};
P -- Yes --> Q[ProcessingHandler: Report Completion];
P -- No --> R[ProcessingHandler: Report Materials Error];
R --> Q;
M -- No --> Q;
J -- "No (skip nodegroup script)" --> M;
I -- "No (skip Blender scripts)" --> Q;
G -- No --> Q;
B -- No --> E;
```
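The Blender invocation inside `processing_handler.py` would presumably mirror the existing `run_blender_script` in `main.py`. A hedged sketch using Blender's standard headless flags (`--background`, `--python`); the helper names are illustrative:

```python
import subprocess

def build_blender_command(blender_exe: str, blend_path: str,
                          script_path: str) -> list:
    """Command line for running a script against a .blend file headlessly."""
    return [blender_exe, "--background", blend_path, "--python", script_path]

def run_blender_script(blender_exe: str, blend_path: str,
                       script_path: str) -> bool:
    """Run the script and report success via the exit code."""
    result = subprocess.run(
        build_blender_command(blender_exe, blend_path, script_path),
        capture_output=True,
        text=True,
    )
    return result.returncode == 0
```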

View File

@@ -1,43 +0,0 @@
# GUI Enhancement Plan
## Objective
Implement two new features in the Graphical User Interface (GUI) of the Asset Processor Tool:
1. Automatically switch the preview to "simple view" when more than 10 input files (ZIPs or folders) are added to the queue.
2. Remove the specific visual area labeled "Drag and drop folders here" while keeping the drag-and-drop functionality active for the main processing panel.
## Implementation Plan
The changes will be made in the `gui/main_window.py` file.
1. **Implement automatic preview switch:**
* Locate the `add_input_paths` method.
* After adding the `newly_added_paths` to `self.current_asset_paths`, check the total number of items in `self.current_asset_paths`.
* If the count is greater than 10, programmatically set the state of the "Disable Detailed Preview" menu action (`self.toggle_preview_action`) to `checked=True`. This will automatically trigger the `update_preview` method, which will then render the simple list view.
2. **Remove the "Drag and drop folders here" visual area:**
* Locate the `setup_main_panel_ui` method.
* Find the creation of the `self.drag_drop_area` QFrame and its associated QLabel (`drag_drop_label`).
* Add a line after the creation of `self.drag_drop_area` to hide this widget (`self.drag_drop_area.setVisible(False)`). This will remove the visual box and label while keeping the drag-and-drop functionality enabled for the main window.
## Workflow Diagram
```mermaid
graph TD
A[User drops files/folders] --> B{Call add_input_paths}
B --> C[Add paths to self.current_asset_paths]
C --> D{Count > 10?}
D -->|Yes| E["Set toggle_preview_action.setChecked(True)"]
D -->|No| F[Keep current preview state]
E --> G[update_preview triggered]
F --> G
G --> H{"toggle_preview_action checked (simple view)?"}
H -->|Yes| I[Display simple list in preview_table]
H -->|No| J[Run PredictionHandler for detailed preview]
J --> K[Display detailed results in preview_table]
L[GUI Initialization] --> M[Call setup_main_panel_ui]
M --> N[Create drag_drop_area QFrame]
N --> O[Hide drag_drop_area QFrame]
O --> P[Main window accepts drops]
P --> B
```
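The threshold check from step 1 reduces to a one-line predicate (the constant name is illustrative):

```python
SIMPLE_VIEW_THRESHOLD = 10

def should_force_simple_view(current_asset_paths) -> bool:
    """True once enough inputs are queued that the detailed preview
    should be switched off automatically (step 1 of the plan)."""
    return len(current_asset_paths) > SIMPLE_VIEW_THRESHOLD
```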

View File

@@ -1,103 +0,0 @@
# GUI Feature Enhancement Plan
**Overall Goal:** Modify the GUI (`gui/main_window.py`, `gui/prediction_handler.py`) to make the output path configurable, improve UI responsiveness during preview generation, and add a toggle to switch between detailed file preview and a simple input path list.
**Detailed Plan:**
1. **Feature: Configurable Output Path**
* **File:** `gui/main_window.py`
* **Changes:**
* **UI Addition:**
* Below the `preset_combo` layout, add a new `QHBoxLayout`.
* Inside this layout, add:
* A `QLabel` with text "Output Directory:".
* A `QLineEdit` (e.g., `self.output_path_edit`) to display/edit the path. Make it read-only initially if preferred, or editable.
* A `QPushButton` (e.g., `self.browse_output_button`) with text "Browse...".
* **Initialization (`__init__` or `setup_main_panel_ui`):**
* Read the default `OUTPUT_BASE_DIR` from `core_config`.
* Resolve this path relative to the project root (`project_root / output_base_dir_config`).
* Set the initial text of `self.output_path_edit` to this resolved default path.
* **Browse Button Logic:**
* Connect the `clicked` signal of `self.browse_output_button` to a new method (e.g., `_browse_for_output_directory`).
* Implement `_browse_for_output_directory`:
* Use `QFileDialog.getExistingDirectory` to let the user select a folder.
* If a directory is selected, update the text of `self.output_path_edit`.
* **Processing Logic (`start_processing`):**
* Instead of reading/resolving the path from `core_config`, get the path string directly from `self.output_path_edit.text()`.
* Convert this string to a `Path` object.
* **Add Validation:** Before passing the path to the handler, check if the directory exists. If not, attempt to create it using `output_dir.mkdir(parents=True, exist_ok=True)`. Handle potential `OSError` exceptions during creation and show an error message if it fails. Also, consider adding a basic writability check if possible.
* Pass the validated `output_dir_str` to `self.processing_handler.run_processing`.
2. **Feature: Responsive UI (Address Prediction Bottleneck)**
* **File:** `gui/prediction_handler.py`
* **Changes:**
* **Import:** Add `from concurrent.futures import ThreadPoolExecutor, as_completed`.
* **Modify `run_prediction`:**
* Inside the `try` block (after loading `config`), create a `ThreadPoolExecutor` (e.g., `with ThreadPoolExecutor(max_workers=...) as executor:`). Determine a reasonable `max_workers` count (e.g., `os.cpu_count() // 2` or a fixed number like 4 or 8).
* Instead of iterating through `input_paths` sequentially, submit a task to the executor for each `input_path_str`.
* The task submitted should be a helper method (e.g., `_predict_single_asset`) that takes `input_path_str` and the loaded `config` object as arguments.
* `_predict_single_asset` will contain the logic currently inside the loop: instantiate `AssetProcessor`, call `get_detailed_file_predictions`, handle exceptions, and return the list of prediction dictionaries for that *single* asset (or an error dictionary).
* Store the `Future` objects returned by `executor.submit`.
* Use `as_completed(futures)` to process results as they become available.
* Append the results from each completed future to the `all_file_results` list.
* Emit `prediction_results_ready` once at the very end with the complete `all_file_results` list.
* **File:** `gui/main_window.py`
* **Changes:**
* No changes needed in the `on_prediction_results_ready` slot itself, as the handler will still emit the full list at the end.
3. **Feature: Preview Toggle**
* **File:** `gui/main_window.py`
* **Changes:**
* **UI Addition:**
* Add a `QCheckBox` (e.g., `self.disable_preview_checkbox`) with text "Disable Detailed Preview". Place it logically, perhaps near the `overwrite_checkbox` or above the `preview_table`. Set its default state to unchecked.
* **Modify `update_preview`:**
* At the beginning of the method, check `self.disable_preview_checkbox.isChecked()`.
* **If Checked (Simple View):**
* Clear the `preview_table`.
* Set simplified table headers (e.g., `self.preview_table.setColumnCount(1); self.preview_table.setHorizontalHeaderLabels(["Input Path"])`). Adjust column resize modes.
* Iterate through `self.current_asset_paths`. For each path, add a row to the table containing just the path string.
* Set status bar message (e.g., "Preview disabled. Showing input list.").
* **Crucially:** `return` from the method here to prevent the `PredictionHandler` from being started.
* **If Unchecked (Detailed View):**
* Ensure the table headers and column count are set back to the detailed view configuration (Status, Original Path, Predicted Name, Details).
* Continue with the rest of the existing `update_preview` logic to start the `PredictionHandler`.
* **Connect Signal:** In `__init__` or `setup_main_panel_ui`, connect the `toggled` signal of `self.disable_preview_checkbox` to the `self.update_preview` slot.
* **Initial State:** Ensure the first call to `update_preview` (if any) respects the initial unchecked state of the checkbox.
**Mermaid Diagram:**
```mermaid
graph TD
subgraph MainWindow
A[User Action: Add Asset / Change Preset / Toggle Preview] --> B{Update Preview Triggered};
B --> C{Is 'Disable Preview' Checked?};
C -- Yes --> D[Show Simple List View in Table];
C -- No --> E[Set Detailed Table Headers];
E --> F[Start PredictionHandler Thread];
F --> G[PredictionHandler Runs];
G --> H[Slot: Populate Table with Detailed Results];
I[User Clicks Start Processing] --> J{Get Output Path from UI LineEdit};
J --> K[Validate/Create Output Path];
K -- Path OK --> L[Start ProcessingHandler Thread];
K -- Path Error --> M[Show Error Message];
L --> N[ProcessingHandler Runs];
N --> O["Update UI (Progress, Status)"];
P[User Clicks Browse...] --> Q[Show QFileDialog];
Q --> R[Update Output Path LineEdit];
end
subgraph PredictionHandler [Background Thread]
style PredictionHandler fill:#f9f,stroke:#333,stroke-width:2px
F --> S{Use ThreadPoolExecutor};
S --> T[Run _predict_single_asset Concurrently];
T --> U[Collect Results];
U --> V["Emit prediction_results_ready (Full List)"];
V --> H;
end
subgraph ProcessingHandler [Background Thread]
style ProcessingHandler fill:#ccf,stroke:#333,stroke-width:2px
L --> N;
end
```
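The concurrent prediction pattern from feature 2, stripped of the Qt signal plumbing, might look like this. A hedged sketch: the `predict_one` callable stands in for the planned `_predict_single_asset` helper, and the error-dictionary shape is an assumption.

```python
import os
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_prediction(input_paths, config, predict_one):
    """Predict all assets concurrently, collecting results as they finish."""
    all_file_results = []
    max_workers = max(1, (os.cpu_count() or 4) // 2)
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # One task per input path; remember which path each future belongs to.
        futures = {executor.submit(predict_one, p, config): p for p in input_paths}
        for future in as_completed(futures):
            try:
                # Each task returns a list of prediction dicts for one asset.
                all_file_results.extend(future.result())
            except Exception as exc:
                all_file_results.append(
                    {"input_path": futures[future], "error": str(exc)}
                )
    # In the GUI this is where prediction_results_ready would be emitted once.
    return all_file_results
```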

View File

@@ -1,78 +0,0 @@
# GUI Log Console Feature Plan
**Overall Goal:** Add a log console panel to the GUI's editor panel, controlled by a "View" menu action. Move the "Disable Detailed Preview" control to the same "View" menu.
**Detailed Plan:**
1. **Create Custom Log Handler:**
* **New File/Location:** Potentially add this to a new `gui/log_handler.py` or keep it within `gui/main_window.py` if simple enough.
* **Implementation:**
* Define a class `QtLogHandler(logging.Handler, QObject)` that inherits from both `logging.Handler` and `QObject` (for signals).
* Add a Qt signal, e.g., `log_record_received = Signal(str)`.
* Override the `emit(self, record)` method:
* Format the log record using `self.format(record)`.
* Emit the `log_record_received` signal with the formatted string.
2. **Modify `gui/main_window.py`:**
* **Imports:** Add `QMenuBar`, `QMenu`, `QAction` from `PySide6.QtWidgets`. Import the new `QtLogHandler`.
* **UI Elements (`__init__` / `setup_editor_panel_ui`):**
* **Menu Bar:**
* Create `self.menu_bar = self.menuBar()`.
* Create `view_menu = self.menu_bar.addMenu("&View")`.
* **Log Console:**
* Create `self.log_console_output = QTextEdit()`. Set it to read-only (`self.log_console_output.setReadOnly(True)`).
* Create a container widget, e.g., `self.log_console_widget = QWidget()`. Create a layout for it (e.g., `QVBoxLayout`) and add `self.log_console_output` to this layout.
* In `setup_editor_panel_ui`, insert `self.log_console_widget` into the `editor_layout` *before* adding the `list_layout` (the preset list).
* Initially hide the console: `self.log_console_widget.setVisible(False)`.
* **Menu Actions:**
* Create `self.toggle_log_action = QAction("Show Log Console", self, checkable=True)`. Connect `self.toggle_log_action.toggled.connect(self._toggle_log_console_visibility)`. Add it to `view_menu`.
* Create `self.toggle_preview_action = QAction("Disable Detailed Preview", self, checkable=True)`. Connect `self.toggle_preview_action.toggled.connect(self.update_preview)`. Add it to `view_menu`.
* **Remove Old Checkbox:** Delete the lines creating and adding `self.disable_preview_checkbox`.
* **Logging Setup (`__init__`):**
* Instantiate the custom handler: `self.log_handler = QtLogHandler()`.
* Connect its signal: `self.log_handler.log_record_received.connect(self._append_log_message)`.
* Add the handler to the logger: `log.addHandler(self.log_handler)`. Set an appropriate level if needed (e.g., `self.log_handler.setLevel(logging.INFO)`).
* **New Slots:**
* Implement `_toggle_log_console_visibility(self, checked)`: This slot will simply call `self.log_console_widget.setVisible(checked)`.
* Implement `_append_log_message(self, message)`:
* Append the `message` string to `self.log_console_output`.
* Optional: Add logic to limit the number of lines in the text edit to prevent performance issues.
* Optional: Add basic HTML formatting for colors based on log level.
* **Modify `update_preview`:**
* Replace the check for `self.disable_preview_checkbox.isChecked()` with `self.toggle_preview_action.isChecked()`.
* Update the log messages within this method to reflect checking the action state.
**Mermaid Diagram:**
```mermaid
graph TD
subgraph MainWindow
A[Initialization] --> B(Create Menu Bar);
B --> C(Add View Menu);
C --> D(Add 'Show Log Console' Action);
C --> E(Add 'Disable Detailed Preview' Action);
A --> F(Create Log Console QTextEdit);
F --> G("Place Log Console Widget in Layout [Hidden]");
A --> H(Create & Add QtLogHandler);
H --> I(Connect Log Handler Signal to _append_log_message);
D -- Toggled --> J[_toggle_log_console_visibility];
J --> K(Show/Hide Log Console Widget);
E -- Toggled --> L[update_preview];
M[update_preview] --> N{Is 'Disable Preview' Action Checked?};
N -- Yes --> O[Show Simple List View];
N -- No --> P[Start PredictionHandler];
Q[Any Log Message] -- Emitted by Logger --> H;
I --> R[_append_log_message];
R --> S(Append Message to Log Console QTextEdit);
end
subgraph QtLogHandler
style QtLogHandler fill:lightgreen,stroke:#333,stroke-width:2px
T1["emit(record)"] --> T2(Format Record);
T2 --> T3(Emit log_record_received Signal);
T3 --> I;
end
```
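Stripped of the Qt pieces, the handler from step 1 is an ordinary `logging.Handler` that forwards formatted records. A signal-free stand-in (class name illustrative) shows the shape and can be exercised without PySide6; the real `QtLogHandler` would additionally inherit `QObject` and emit `log_record_received` instead of calling a plain callable:

```python
import logging

class CallbackLogHandler(logging.Handler):
    """Simplified stand-in for the planned QtLogHandler: where the real
    class would emit a Qt signal, this forwards to any callable."""

    def __init__(self, callback):
        super().__init__()
        self.callback = callback

    def emit(self, record: logging.LogRecord) -> None:
        # Format the record, then hand the string to the GUI side
        # (the real handler would emit log_record_received here).
        self.callback(self.format(record))
```

Dual inheritance from `logging.Handler` and `QObject` is needed in the real class only because Qt signals must live on a `QObject`; the logging side is unchanged.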

View File

@@ -1,65 +0,0 @@
# GUI Overhaul Plan: Unified Hierarchical View
**Task:** Implement a UI overhaul for the Asset Processor Tool GUI to address usability issues and streamline the workflow for viewing and editing processing rules.
**Context:**
* A hierarchical rule system (`SourceRule`, `AssetRule`, `FileRule` in `rule_structure.py`) is used by the core engine (`asset_processor.py`).
* The current GUI (`gui/main_window.py`, `gui/rule_hierarchy_model.py`, `gui/rule_editor_widget.py`) uses a `QTreeView` for hierarchy, a separate `RuleEditorWidget` for editing selected items, and a `QTableView` (`PreviewTableModel`) for previewing file classifications.
* Relevant files analyzed: `gui/main_window.py`, `gui/rule_editor_widget.py`, `gui/rule_hierarchy_model.py`.
**Identified Issues with Current UI:**
1. **Window Resizing:** Selecting Source/Asset items causes window expansion because `RuleEditorWidget` displays large child lists (`assets`, `files`) as simple labels.
2. **GUI Not Updating on Add:** Potential regression where adding new inputs doesn't reliably update the preview/hierarchy.
3. **Incorrect Source Display:** Tree view shows "Source: None" instead of the input path (likely `SourceRule.input_path` is None when model receives it).
4. **Preview Table Stale:** Changes made in `RuleEditorWidget` (e.g., overrides) are not reflected in the `PreviewTableModel` because the `_on_rule_updated` slot in `main_window.py` doesn't trigger a refresh.
**Agreed-Upon Overhaul Plan:**
The goal is to create a more unified and streamlined experience by merging the hierarchy, editing overrides, and preview aspects into a single view, reducing redundancy.
1. **UI Structure Redesign:**
* **Left Panel:** Retain the existing Preset Editor panel (`main_window.py`'s `editor_panel`) for managing preset files (`.json`) and their complex rules (naming patterns, map type mappings, archetype rules, etc.).
* **Right Panel:** Replace the current three-part splitter (Hierarchy Tree, Rule Editor, Preview Table) with a **single Unified Hierarchical View**.
* Implementation: Use a `QTreeView` with a custom `QAbstractItemModel` and custom `QStyledItemDelegate`s for inline editing.
* Hierarchy Display: Show Input Source(s) -> Assets -> Files.
* Visual Cues: Use distinct background colors for rows representing Inputs, Assets, and Files.
2. **Unified View Columns & Functionality:**
* **Column 1: Name/Hierarchy:** Displays input path, asset name, or file name with indentation.
* **Column 2+: Editable Attributes (Context-Dependent):** Implement inline editors using delegates:
* **Input Row:** Optional editable field for `Supplier` override.
* **Asset Row:** `QComboBox` delegate for `Asset-Type` override (e.g., `GENERIC`, `DECAL`, `MODEL`).
* **File Row:**
* `QLineEdit` delegate for `Target Asset Name` override.
* `QComboBox` delegate for `Item-Type` override (e.g., `MAP-COL`, `MAP-NRM`, `EXTRA`, `MODEL_FILE`).
* **Column X: Status (Optional, Post-Processing):** Non-editable column showing processing status icon/text (Pending, Success, Warning, Error).
* **Column Y: Output Path (Optional, Post-Processing):** Non-editable column showing the final output path after successful processing.
3. **Data Flow and Initialization:**
* When inputs are added and a preset selected, `PredictionHandler` runs.
* `PredictionHandler` generates the `SourceRule` hierarchy *and* predicts initial `Asset-Type`, `Item-Type`, and `Target Asset Name`.
* The Unified View's model is populated with this `SourceRule`.
* *Initial values* in inline editors are set based on these *predicted* values.
* User edits in the Unified View directly modify attributes on the `SourceRule`, `AssetRule`, or `FileRule` objects held by the model.
4. **Dropdown Options Source:**
* Available options in dropdowns (`Asset-Type`, `Item-Type`) should be sourced from globally defined lists or Enums (e.g., in `rule_structure.py` or `config.py`).
5. **Addressing Original Issues (How the Plan Fixes Them):**
* **Window Resizing:** Resolved by removing `RuleEditorWidget`.
* **GUI Not Updating on Add:** Fix requires ensuring `add_input_paths` triggers `PredictionHandler` and updates the new Unified View model correctly.
* **Incorrect Source Display:** Fix requires ensuring `PredictionHandler` correctly populates `SourceRule.input_path`.
* **Preview Table Stale:** Resolved by merging preview/editing; edits are live in the main view.
**Implementation Tasks:**
* Modify `gui/main_window.py`: Remove the right-side splitter, `RuleEditorWidget`, `PreviewTableModel`/`View`. Instantiate the new Unified View. Adapt `add_input_paths`, `start_processing`, `_on_rule_hierarchy_ready`, etc., to interact with the new view/model.
* Create/Modify Model (`gui/rule_hierarchy_model.py` or new file): Implement a `QAbstractItemModel` supporting multiple columns, hierarchical data, and providing data/flags for inline editing.
* Create Delegates (`gui/delegates.py`?): Implement `QStyledItemDelegate` subclasses for `QComboBox` and `QLineEdit` editors in the tree view.
* Modify `gui/prediction_handler.py`: Ensure it predicts initial override values (`Asset-Type`, `Item-Type`, `Target Asset Name`) and includes them in the data passed back to the main window (likely within the `SourceRule` structure or alongside it). Ensure `SourceRule.input_path` is correctly set.
* Modify `gui/processing_handler.py`: Update it to potentially signal back status/output path updates that can be reflected in the new Unified View model's optional columns.
* Define Dropdown Sources: Add necessary Enums or lists to `rule_structure.py` or `config.py`.
This plan provides a clear path forward for implementing the UI overhaul.
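For orientation, the rule hierarchy the plan keeps referring to might be modeled roughly as below. This is a hedged sketch: the actual field names and enum values in `rule_structure.py` may differ; only `SourceRule.input_path`, the `assets`/`files` child lists, and the override concepts are taken from the plan itself.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class AssetType(Enum):
    GENERIC = "GENERIC"
    DECAL = "DECAL"
    MODEL = "MODEL"

@dataclass
class FileRule:
    file_name: str
    item_type_override: Optional[str] = None        # e.g. "MAP-COL", "EXTRA"
    target_asset_name_override: Optional[str] = None

@dataclass
class AssetRule:
    asset_name: str
    asset_type_override: Optional[AssetType] = None
    files: List[FileRule] = field(default_factory=list)

@dataclass
class SourceRule:
    input_path: str                                  # must be set by PredictionHandler
    supplier_override: Optional[str] = None
    assets: List[AssetRule] = field(default_factory=list)
```

User edits in the Unified View would mutate these objects in place, which is what makes the merged preview/editor stay live without a separate refresh step.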

View File

@@ -1,119 +0,0 @@
# Asset Processor GUI Development Plan
This document outlines the plan for developing a Graphical User Interface (GUI) for the Asset Processor Tool.
**I. Foundation & Framework Choice**
1. **Choose GUI Framework:** PySide6 (LGPL license, powerful features).
2. **Project Structure:** Create `gui/` directory. Core logic remains separate.
3. **Dependencies:** Add `PySide6` to `requirements.txt`.
**II. Core GUI Layout & Basic Functionality**
1. **Main Window:** `gui/main_window.py`.
2. **Layout Elements:**
* Drag & Drop Area (`QWidget` subclass).
* Preset Dropdown (`QComboBox`).
* Preview Area (`QListWidget`).
* Progress Bar (`QProgressBar`).
* Control Buttons (`QPushButton`: Start, Cancel, Manage Presets).
* Status Bar (`QStatusBar`).
**III. Input Handling & Predictive Preview**
1. **Connect Drag & Drop:** Validate drops, add valid paths to Preview List.
2. **Refactor for Prediction:** Create/refactor methods in `AssetProcessor`/`Configuration` to predict output names without full processing.
3. **Implement Preview Update:** On preset change or file add, load config, call prediction logic, update Preview List items (e.g., "Input: ... -> Output: ...").
4. **Responsive Preview:** Utilize PySide6 list widget efficiency (consider model/view for very large lists).
**IV. Processing Integration & Progress Reporting**
1. **Adapt Processing Logic:** Refactor `main.py`'s `ProcessPoolExecutor` loop into a callable function/class (`gui/processing_handler.py`).
2. **Background Execution:** Run processing logic in a `QThread` to keep GUI responsive.
3. **Progress Updates (Signals & Slots):**
* Background thread emits signals: `progress_update(current, total)`, `file_status_update(path, status, msg)`, `processing_finished(stats)`.
* Main window connects slots to these signals to update UI elements (`QProgressBar`, `QListWidget` items, status bar).
4. **Completion/Error Handling:** Re-enable controls, display summary stats, report errors on `processing_finished`.
**V. Preset Management Interface (Sub-Task)**
1. **New Window/Dialog:** `gui/preset_editor.py`.
2. **Functionality:** List, load, edit (tree view/form), create, save/save as presets (`.json` in `presets/`). Basic validation.
**VI. Refinements & Additional Features (Ideas)**
1. **Cancel Button:** Implement cancellation signal to background thread/workers.
2. **Log Viewer:** Add `QTextEdit` to display `logging` output.
3. **Output Directory Selection:** Add browse button/field.
4. **Configuration Options:** Expose key `config.py` settings.
5. **Clearer Error Display:** Tooltips, status bar updates, or error panel.
**VII. Packaging (Deployment)**
1. **Tooling:** PyInstaller or cx_Freeze.
2. **Configuration:** Build scripts (`.spec`) to bundle code, dependencies, assets, and `presets/`.
**High-Level Mermaid Diagram:**
```mermaid
graph TD
subgraph "GUI Application (PySide6)"
A[Main Window] --> B(Drag & Drop Area);
A --> C(Preset Dropdown);
A --> D(Preview List Widget);
A --> E(Progress Bar);
A --> F(Start Button);
A -- Contains --> SB(Status Bar);
A --> CB(Cancel Button);
A --> MB(Manage Presets Button);
B -- fileDroppedSignal --> X(Handle Input Files);
C -- currentIndexChangedSignal --> Y(Update Preview);
X -- Adds paths --> D;
X -- Triggers --> Y;
Y -- Reads --> C;
Y -- Uses --> J(Prediction Logic);
Y -- Updates --> D;
F -- clickedSignal --> Z(Start Processing);
CB -- clickedSignal --> AA(Cancel Processing);
MB -- clickedSignal --> L(Preset Editor Dialog);
Z -- Starts --> BB(Processing Thread: QThread);
AA -- Signals --> BB;
BB -- "progressSignal(curr, total)" --> E;
BB -- "fileStatusSignal(path, status, msg)" --> D;
BB -- "finishedSignal(stats)" --> AB(Handle Processing Finished);
AB -- Updates --> SB;
AB -- Enables/Disables --> F;
AB -- Enables/Disables --> CB;
AB -- Enables/Disables --> C;
AB -- Enables/Disables --> B;
L -- Modifies --> H(Presets Dir: presets/*.json);
end
subgraph "Backend Logic (Existing + Refactored)"
H -- Loaded by --> C;
H -- Loaded/Saved by --> L;
J -- Reads --> M(configuration.py / asset_processor.py);
BB -- Runs --> K(Adapted main.py Logic);
K -- Uses --> N(ProcessPoolExecutor);
N -- Runs --> O(process_single_asset_wrapper);
O -- Uses --> M;
O -- Reports Status --> K;
K -- Reports Progress --> BB;
end
classDef gui fill:#f9f,stroke:#333,stroke-width:2px;
classDef backend fill:#ccf,stroke:#333,stroke-width:2px;
classDef thread fill:#ffc,stroke:#333,stroke-width:1px;
class A,B,C,D,E,F,CB,MB,L,SB,X,Y,Z,AA,AB gui;
class H,J,K,M,N,O backend;
class BB thread;
```
This plan provides a roadmap for the GUI development.

View File

@@ -1,42 +0,0 @@
# GUI Preset Selection Plan
## Objective
Modify the GUI so that no preset is selected by default, and the preview table is only populated after the user explicitly selects a preset. This aims to prevent accidental processing with an unintended preset and clearly indicate to the user that a preset selection is required for preview.
## Plan
1. **Modify `gui/main_window.py`:**
* Remove the logic that selects a default preset during initialization.
* Initialize the preview table to display the text "please select a preset".
* Disable the mechanism that triggers the `PredictionHandler` for preview generation until a preset is selected.
* Update the slot connected to the preset selection signal:
* When a preset is selected, clear the placeholder text and enable the preview generation mechanism.
* Pass the selected preset configuration to the `PredictionHandler`.
* Trigger the `PredictionHandler` to generate and display the preview.
* (Optional but recommended) Add logic to handle the deselection of a preset, which should clear the preview table and display the "please select a preset" text again, and disable preview generation.
2. **Review `gui/prediction_handler.py`:**
* Verify that the `PredictionHandler`'s methods that generate predictions (`get_detailed_file_predictions`) correctly handle being called only when a valid preset is provided. No major changes are expected here, but it's good practice to confirm.
3. **Update Preview Table Handling (`gui/preview_table_model.py` and `gui/main_window.py`):**
* Ensure the `PreviewTableModel` can gracefully handle having no data when no preset is selected.
* In `gui/main_window.py`, configure the `QTableView` or its parent widget to display the placeholder text "please select a preset" when the model is empty or no preset is active.
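The gating described in the plan can be reduced to a small state machine, independent of Qt. The class and method names below are hypothetical illustrations, not from the codebase; in the real GUI the placeholder would be shown by the `QTableView` and the prediction call would go through `PredictionHandler`:

```python
class PreviewGate:
    """Minimal sketch of the proposed behaviour: no preview until a preset
    is explicitly selected. All names here are hypothetical."""

    PLACEHOLDER = "please select a preset"

    def __init__(self):
        self.preset = None  # no default preset on initialization

    def on_preset_selected(self, preset):
        # Slot for the preset selection signal; None models deselection.
        self.preset = preset

    def preview(self, run_prediction):
        # run_prediction stands in for the PredictionHandler call.
        if self.preset is None:
            return self.PLACEHOLDER
        return run_prediction(self.preset)
```

Selecting a preset switches the output from the placeholder to the prediction result; deselecting restores the placeholder, matching the optional step 1 behaviour.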
## Data Flow Change
The current data flow for preview generation is roughly:
Initialization -> Default Preset Loaded -> Trigger PredictionHandler -> Update Preview Table
The proposed data flow would be:
Initialization -> No Preset Selected -> Preview Table Empty/Placeholder -> User Selects Preset -> Trigger PredictionHandler with Selected Preset -> Update Preview Table
```mermaid
graph TD
A[GUI Initialization] --> B{Is a Preset Selected?};
B -- Yes (Current) --> C[Load Default Preset];
B -- No (Proposed) --> D[Preview Table Empty/Placeholder];
C --> E[Trigger PredictionHandler];
D --> F[User Selects Preset];
F --> E;
E --> G[Update Preview Table];
```
View File
@@ -1,59 +0,0 @@
# Plan: Implement Alternating Row Colors Per Asset Group in GUI Preview Table
## Objective
Modify the GUI preview table to display alternating background colors for rows based on the asset group they belong to, rather than alternating colors for each individual row. The visual appearance should be similar to the default alternating row colors (dark greys, no border, no rounded corners).
## Current State
The preview table in the GUI uses a `QTableView` with `setAlternatingRowColors(True)` enabled, which applies alternating background colors based on the row index. The `PreviewTableModel` groups file prediction data by `source_asset` in its internal `_table_rows` structure and provides data to the view.
## Proposed Plan
To achieve alternating colors per asset group, we will implement custom coloring logic within the `PreviewTableModel`.
1. **Disable Default Alternating Colors:**
* In `gui/main_window.py`, locate the initialization of the `preview_table_view` (a `QTableView`).
* Change `self.preview_table_view.setAlternatingRowColors(True)` to `self.preview_table_view.setAlternatingRowColors(False)`.
2. **Modify `PreviewTableModel.data()`:**
* Open `gui/preview_table_model.py`.
* In the `data()` method, add a case to handle the `Qt.ItemDataRole.BackgroundRole`.
* Inside this case, retrieve the `source_asset` for the current row from the `self._table_rows` structure.
* Maintain a sorted list of unique `source_asset` values. This can be done when the data is set in `set_data()`.
* Find the index of the current row's `source_asset` within this sorted list.
* Based on whether the index is even or odd, return a `QColor` object representing one of the two desired grey colors.
* Ensure that the `Qt.ItemDataRole.BackgroundRole` is handled correctly for all columns in the row.
3. **Define Colors:**
* Define two `QColor` objects within the `PreviewTableModel` class to represent the two grey colors for alternating groups. These should be chosen to be visually similar to the default alternating row colors.
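The parity logic from steps 2 and 3 can be sketched in pure Python. In the real model the return values would be `QColor` objects inside `PreviewTableModel.data()`; the two hex greys below are placeholders, not values from the plan:

```python
GROUP_COLOR_EVEN = "#2d2d2d"   # hypothetical dark grey
GROUP_COLOR_ODD = "#353535"    # hypothetical slightly lighter grey

def group_background(table_rows, row, asset_order):
    """Return the background colour for `row` based on the parity of its
    asset group's position in `asset_order` (the sorted list of unique
    source_asset values, built once in set_data() rather than per cell)."""
    asset = table_rows[row]["source_asset"]
    group_index = asset_order.index(asset)
    return GROUP_COLOR_EVEN if group_index % 2 == 0 else GROUP_COLOR_ODD
```

All rows of one asset share a colour, and consecutive asset groups alternate, regardless of how many rows each group spans.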
## Visual Representation of Data Flow with Custom Coloring
```mermaid
graph TD
A[QTableView] --> B{Requests Data for Row/Column};
B --> C[PreviewSortFilterProxyModel];
C --> D[PreviewTableModel];
D -- data(index, role) --> E{Check Role};
E -- Qt.ItemDataRole.BackgroundRole --> F{Get source_asset for row};
F --> G{Determine Asset Group Index};
G --> H{Assign Color based on Index Parity};
H --> I[Return QColor];
E -- Other Roles --> J[Return Display/Tooltip/Foreground Data];
I --> C;
J --> C;
C --> A{Displays Data with Custom Background Color};
style D fill:#f9f,stroke:#333,stroke-width:2px
style H fill:#ccf,stroke:#333,stroke-width:1px
style I fill:#ccf,stroke:#333,stroke-width:1px
```
## Implementation Steps (for Code Mode)
1. Modify `gui/main_window.py` to disable default alternating row colors.
2. Modify `gui/preview_table_model.py` to:
* Define the two grey `QColor` objects.
* Update `set_data()` to create and store a sorted list of unique asset groups.
* Implement the `Qt.ItemDataRole.BackgroundRole` logic in the `data()` method to return alternating colors based on the asset group index.
View File
@@ -1,49 +0,0 @@
# Plan: Enhance GUI Preview Table Coloring
## Objective
Modify the GUI preview table to apply status-based text coloring to all relevant cells in a row, providing a more consistent visual indication of a file's status.
## Current State
The `PreviewTableModel` in `gui/preview_table_model.py` currently applies status-based text colors only to the "Status" column (based on the main file's status) and the "Additional Files" column (based on the additional file's status). Other cells in the row do not have status-based coloring.
## Proposed Change
Extend the status-based text coloring logic in the `PreviewTableModel`'s `data()` method to apply the relevant status color to any cell that corresponds to either the main file or an additional file in that row.
## Plan
1. **Modify the `data()` method in `gui/preview_table_model.py`:**
* Locate the section handling the `Qt.ItemDataRole.ForegroundRole`.
* Currently, this section checks the column index (`col`) to decide which file's status to use for coloring (main file for `COL_STATUS`, additional file for `COL_ADDITIONAL_FILES`).
* We will change this logic to determine which file (main or additional) the *current row and column* corresponds to, and then use that file's status to look up the color.
* For columns related to the main file (`COL_STATUS`, `COL_PREDICTED_ASSET`, `COL_ORIGINAL_PATH`, `COL_PREDICTED_OUTPUT`, `COL_DETAILS`), if the row contains a `main_file`, use the `main_file`'s status for coloring.
* For the `COL_ADDITIONAL_FILES` column, if the row contains `additional_file_details`, use the `additional_file_details`' status for coloring.
* If a cell does not correspond to a file (e.g., a main file column in a row that only has an additional file), return `None` for the `ForegroundRole` to use the default text color.
## Detailed Steps
1. Open `gui/preview_table_model.py`.
2. Navigate to the `data()` method.
3. Find the `if role == Qt.ItemDataRole.ForegroundRole:` block.
4. Inside this block, modify the logic to determine the `status` variable based on the current `col` and the presence of `main_file` or `additional_file_details` in `row_data`.
5. Use the determined `status` to look up the color in `self.STATUS_COLORS`.
6. Return the color if found, otherwise return `None`.
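The cell-to-file resolution described above can be sketched without Qt. The column constants mirror the model's layout; the `STATUS_COLORS` entries below are placeholder values, not the model's actual palette:

```python
(COL_STATUS, COL_PREDICTED_ASSET, COL_ORIGINAL_PATH,
 COL_PREDICTED_OUTPUT, COL_DETAILS, COL_ADDITIONAL_FILES) = range(6)

MAIN_FILE_COLUMNS = {COL_STATUS, COL_PREDICTED_ASSET, COL_ORIGINAL_PATH,
                     COL_PREDICTED_OUTPUT, COL_DETAILS}
STATUS_COLORS = {"Mapped": "#6a9955", "Error": "#f44747"}  # placeholder palette

def foreground_status(row_data, col):
    """Pick which file's status colours this cell, or None for empty cells."""
    if col in MAIN_FILE_COLUMNS and row_data.get("main_file"):
        return row_data["main_file"].get("status")
    if col == COL_ADDITIONAL_FILES and row_data.get("additional_file_details"):
        return row_data["additional_file_details"].get("status")
    return None  # empty cell -> default text colour

def foreground_color(row_data, col):
    status = foreground_status(row_data, col)
    return STATUS_COLORS.get(status) if status else None
```

In the model, `foreground_color` returning `None` corresponds to returning `None` for the `ForegroundRole`, so Qt falls back to the default text colour.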
## Modified Color Logic Flow
```mermaid
graph TD
A["data(index, role)"] --> B{role == Qt.ItemDataRole.ForegroundRole?};
B -- Yes --> C{Determine relevant file for cell (row, col)};
C -- Cell corresponds to Main File --> D{Get main_file status};
C -- Cell corresponds to Additional File --> E{Get additional_file_details status};
C -- Cell is empty --> F[status = None];
D --> G{Lookup color in STATUS_COLORS};
E --> G;
F --> H[Return None];
G -- Color found --> I[Return Color];
G -- No color found --> H;
B -- No --> J[Handle other roles];
J --> K[Return data based on role];
```
View File
@@ -1,90 +0,0 @@
# GUI Preview Table Restructure Plan
## Objective
Restructure the Graphical User Interface (GUI) preview table to group files by source asset and display "Ignored" and "Extra" files in a new "Additional Files" column, aligned with the mapped files of the same asset.
## Analysis
Based on the review of `gui/prediction_handler.py` and `gui/preview_table_model.py`:
* The `PredictionHandler` provides a flat list of file prediction dictionaries.
* The `PreviewTableModel` currently stores and displays this flat list directly.
* The `PreviewSortFilterProxyModel` sorts this flat list.
* The data transformation to achieve the desired grouped layout must occur within the `PreviewTableModel`.
## Proposed Plan
1. **Modify `gui/preview_table_model.py`:**
* **Add New Column:**
* Define a new constant: `COL_ADDITIONAL_FILES = 5`.
* Add "Additional Files" to the `_headers_detailed` list.
* **Introduce New Internal Data Structure:**
* Create a new internal list, `self._table_rows`, to store dictionaries representing the final rows to be displayed in the table.
* **Update `set_data(self, data: list)`:**
* Process the incoming flat `data` list (received from `PredictionHandler`).
* Group file dictionaries by their `source_asset`.
* Within each asset group, separate files into two lists:
* `main_files`: Files with status "Mapped", "Model", or "Error".
* `additional_files`: Files with status "Ignored", "Extra", "Unrecognised", or "Unmatched Extra".
* Determine the maximum number of rows needed for this asset block: `max(len(main_files), len(additional_files))`.
* Build the row dictionaries for `self._table_rows` for this asset block:
* For `i` from 0 to `max_rows - 1`:
* Get the `i`-th file from `main_files` (or `None` if `i` is out of bounds).
* Get the `i`-th file from `additional_files` (or `None` if `i` is out of bounds).
* Create a row dictionary containing:
* `source_asset`: The asset name.
* `predicted_asset`: From the `main_file` (if exists).
* `details`: From the `main_file` (if exists).
* `original_path`: From the `main_file` (if exists).
* `additional_file_path`: Path from the `additional_file` (if exists).
* `additional_file_details`: The original dictionary of the `additional_file` (if exists, for tooltips).
* `is_main_row`: Boolean flag (True if this row corresponds to a file in `main_files`, False otherwise).
* Append these row dictionaries to `self._table_rows`.
* After processing all assets, call `self.beginResetModel()` and `self.endResetModel()`.
* **Update `rowCount`:** Return `len(self._table_rows)` when in detailed mode.
* **Update `columnCount`:** Return `len(self._headers_detailed)`.
* **Update `data(self, index, role)`:**
* Retrieve the row dictionary: `row_data = self._table_rows[index.row()]`.
* For `Qt.ItemDataRole.DisplayRole`:
* If `index.column()` is `COL_ADDITIONAL_FILES`, return `row_data.get('additional_file_path', '')`.
* For other columns (`COL_STATUS`, `COL_PREDICTED_ASSET`, `COL_ORIGINAL_PATH`, `COL_DETAILS`), return data from the `main_file` part of `row_data` if `row_data['is_main_row']` is True, otherwise return an empty string or appropriate placeholder.
* For `Qt.ItemDataRole.ToolTipRole`:
* If `index.column()` is `COL_ADDITIONAL_FILES` and `row_data.get('additional_file_details')` exists, generate a tooltip using the status and details from `additional_file_details`.
* For other columns, use the existing tooltip logic based on the `main_file` data.
* For `Qt.ItemDataRole.ForegroundRole`:
* Apply existing color-coding based on the status of the `main_file` if `row_data['is_main_row']` is True.
* For the `COL_ADDITIONAL_FILES` cell and for rows where `row_data['is_main_row']` is False, use neutral styling (default text color).
* **Update `headerData`:** Return the correct header for `COL_ADDITIONAL_FILES`.
2. **Modify `gui/preview_table_model.py` (`PreviewSortFilterProxyModel`):**
* **Update `lessThan(self, left, right)`:**
* Retrieve the row dictionaries for `left` and `right` indices from the source model (`model._table_rows[left.row()]`, etc.).
* **Level 1: Source Asset:** Compare `source_asset` from the row dictionaries.
* **Level 2: Row Type:** If assets are the same, compare `is_main_row` (True sorts before False).
* **Level 3 (Main Rows):** If both are main rows (`is_main_row` is True), compare `original_path`.
* **Level 4 (Additional-Only Rows):** If both are additional-only rows (`is_main_row` is False), compare `additional_file_path`.
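The `set_data()` grouping transformation described in step 1 can be sketched as a pure function over the flat prediction list (a sketch of the proposed logic, not the real model code):

```python
MAIN_STATUSES = {"Mapped", "Model", "Error"}

def build_table_rows(flat_predictions):
    """Group flat predictions by source_asset and pair main files with
    additional files row by row, as described for set_data()."""
    groups = {}  # insertion-ordered in Python 3.7+
    for f in flat_predictions:
        bucket = groups.setdefault(f["source_asset"], {"main": [], "extra": []})
        key = "main" if f["status"] in MAIN_STATUSES else "extra"
        bucket[key].append(f)
    rows = []
    for asset, g in groups.items():
        for i in range(max(len(g["main"]), len(g["extra"]))):
            main = g["main"][i] if i < len(g["main"]) else None
            extra = g["extra"][i] if i < len(g["extra"]) else None
            rows.append({
                "source_asset": asset,
                "main_file": main,
                "additional_file_path": extra["original_path"] if extra else None,
                "additional_file_details": extra,
                "is_main_row": main is not None,
            })
    return rows
```

Each asset block yields `max(len(main_files), len(additional_files))` rows, so mapped files and additional files line up side by side.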
## Clarifications & Decisions
* **Error Handling:** "Error" files will remain in the main columns, similar to "Mapped" files, with their current "Error" status.
* **Sorting within Asset:** The proposed sorting logic within an asset block is acceptable (mapped rows by original path, additional-only rows by additional file path).
* **Styling of Additional Column:** Use neutral text and background styling for the "Additional Files" column, relying on tooltips for specific file details.
## Mermaid Diagram (Updated Data Flow)
```mermaid
graph LR
A[PredictionHandler] -- prediction_results_ready(flat_list) --> B(PreviewTableModel);
subgraph PreviewTableModel
C[set_data] -- Processes flat_list --> D{"Internal Grouping & Transformation"};
D -- Creates --> E["_table_rows (Structured List)"];
F["data()"] -- Reads from --> E;
end
B -- Provides data via data() --> G(QTableView via Proxy);
style B fill:#f9f,stroke:#333,stroke-width:2px
style C fill:#ccf,stroke:#333,stroke-width:1px
style D fill:lightgrey,stroke:#333,stroke-width:1px
style E fill:#ccf,stroke:#333,stroke-width:1px
style F fill:#ccf,stroke:#333,stroke-width:1px
```
View File
@@ -1,123 +0,0 @@
# Asset Processor GUI Refactor Plan
This document outlines the plan to refactor the Asset Processor GUI based on user requirements.
## Goals
1. **Improve File Visibility:** Display all files found within an asset in the preview list, including those that don't match the preset, are moved to 'Extra', or have errors, along with their status.
2. **Integrate Preset Editor:** Move the preset editing functionality from the separate dialog into a collapsible panel within the main window.
## Goal 1: Improve File Visibility in Preview List
**Problem:** The current preview (`PredictionHandler` calling `AssetProcessor.predict_output_structure`) only shows files that successfully match a map type rule and get a predicted output name. It doesn't show files that are ignored, moved to 'Extra', or encounter errors during classification.
**Solution:** Leverage the more comprehensive classification logic already present in `AssetProcessor._inventory_and_classify_files` for the GUI preview.
**Plan Steps:**
1. **Modify `asset_processor.py`:**
* Create a new method in `AssetProcessor`, perhaps named `get_detailed_file_predictions()`.
* This new method will perform the core steps of `_setup_workspace()`, `_extract_input()`, and `_inventory_and_classify_files()`.
* It will then iterate through *all* categories in `self.classified_files` ('maps', 'models', 'extra', 'ignored').
* For each file, it will determine a 'status' (e.g., "Mapped", "Model", "Extra", "Ignored", "Error") and attempt to predict the output name (similar to `predict_output_structure` for maps, maybe just the original name for others).
* It will return a more detailed list of dictionaries, each containing: `{'original_path': str, 'predicted_name': str | None, 'status': str, 'details': str | None}`.
* Crucially, this method will *not* perform the actual processing (`_process_maps`, `_merge_maps`, etc.) or file moving, only the classification and prediction. It should also include cleanup (`_cleanup_workspace`).
2. **Modify `gui/prediction_handler.py`:**
* Update `PredictionHandler.run_prediction` to call the new `AssetProcessor.get_detailed_file_predictions()` method instead of `predict_output_structure()`.
* Adapt the code that processes the results to handle the new dictionary format (including the 'status' and 'details' fields).
* Emit this enhanced list via the `prediction_results_ready` signal.
3. **Modify `gui/main_window.py`:**
* In `setup_ui`, add a new column to `self.preview_table` for "Status". Adjust column count and header labels.
* In `on_prediction_results_ready`, populate the new "Status" column using the data received from `PredictionHandler`.
* Consider adding tooltips to the status column to show the 'details' (e.g., the reason for being ignored or moved to extra).
* Optionally, use background colors or icons in the status column for better visual distinction.
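The return format proposed for `get_detailed_file_predictions()` can be sketched as follows. `predict_map_name` is a hypothetical callable standing in for the map-name prediction logic, and the category-to-status mapping is an assumption based on the statuses named above:

```python
def detailed_predictions(classified_files, predict_map_name):
    """Sketch of the proposed per-file result dictionaries.

    classified_files maps category -> list of paths, mirroring
    AssetProcessor.classified_files ('maps', 'models', 'extra', 'ignored').
    """
    status_by_category = {"maps": "Mapped", "models": "Model",
                          "extra": "Extra", "ignored": "Ignored"}
    results = []
    for category, paths in classified_files.items():
        for path in paths:
            status = status_by_category.get(category, "Error")
            # Maps get a predicted output name; others keep their original name.
            predicted = predict_map_name(path) if category == "maps" else path
            results.append({"original_path": path, "predicted_name": predicted,
                            "status": status, "details": None})
    return results
```

Crucially, this only classifies and predicts; the actual processing and file moving stay out of the preview path.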
## Goal 2: Integrate Preset Editor into Main Window
**Problem:** Preset editing requires opening a separate modal dialog, interrupting the main workflow.
**Solution:** Embed the preset editing controls directly into the main window within a collapsible panel.
**Plan Steps:**
1. **Modify `gui/main_window.py` - UI Changes:**
* Remove the "Manage Presets" button (`self.manage_presets_button`).
* Add a collapsible panel (potentially using a `QFrame` with show/hide logic triggered by a button, or a `QDockWidget` if more appropriate) to the left side of the main layout.
* Inside this panel:
* Add the `QListWidget` for displaying presets (`self.preset_list`).
* Add the "New" and "Delete" buttons below the list. (The "Load" button becomes implicit - selecting a preset in the list loads it into the editor).
* Recreate the `QTabWidget` (`self.preset_editor_tabs`) with the "General & Naming" and "Mapping & Rules" tabs.
* Recreate *all* the widgets currently inside the `PresetEditorDialog` tabs (QLineEdit, QTextEdit, QSpinBox, QListWidget+controls, QTableWidget+controls) within the corresponding tabs in the main window's panel. Give them appropriate instance names (e.g., `self.editor_preset_name`, `self.editor_supplier_name`, etc.).
* Add "Save" and "Save As..." buttons within the collapsible panel, likely at the bottom.
2. **Modify `gui/main_window.py` - Logic Integration:**
* Adapt the `populate_presets` method to populate the new `self.preset_list` in the panel.
* Connect `self.preset_list.currentItemChanged` to a new method `load_selected_preset_for_editing`. This method will handle checking for unsaved changes in the editor panel and then load the selected preset's data into the editor widgets (similar to `PresetEditorDialog.load_preset`).
* Implement `save_preset`, `save_preset_as`, `new_preset`, `delete_preset` methods directly within `MainWindow`, adapting the logic from `PresetEditorDialog`. These will interact with the editor widgets in the panel.
* Implement `check_unsaved_changes` logic for the editor panel, prompting the user if they try to load/create/delete a preset or close the application with unsaved edits in the panel.
* Connect the editor widgets' change signals (`textChanged`, `valueChanged`, `itemChanged`, etc.) to a `mark_editor_unsaved` method in `MainWindow`.
* Ensure the main preset selection `QComboBox` (`self.preset_combo`) is repopulated when presets are saved/deleted via the editor panel.
3. **Cleanup:**
* Delete the `gui/preset_editor_dialog.py` file.
* Remove imports and references to `PresetEditorDialog` from `gui/main_window.py`.
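The unsaved-changes guard from step 2 reduces to a small dirty-flag state machine. This is a pure-Python sketch with hypothetical names; in the GUI, `confirm` would be a `QMessageBox` prompt and `mark_editor_unsaved` the slot connected to the editor widgets' change signals:

```python
class EditorDirtyState:
    """Minimal sketch of the editor panel's unsaved-changes handling."""

    def __init__(self):
        self.dirty = False

    def mark_editor_unsaved(self):
        # Connected to textChanged / valueChanged / itemChanged signals.
        self.dirty = True

    def check_unsaved_changes(self, confirm):
        """Return True when it is safe to proceed (load/new/delete/close)."""
        if not self.dirty:
            return True
        if confirm():          # user agrees to discard the edits
            self.dirty = False
            return True
        return False
```

Every load, create, delete, and application-close path calls `check_unsaved_changes` first and aborts when it returns `False`.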
## Visual Plan
**Current Layout:**
```mermaid
graph TD
subgraph "Current Main Window Layout"
A[Preset Combo + Manage Button] --> B("Drag & Drop Area");
B --> C{File Preview Table};
C --> D[Progress Bar];
D --> E[Options + Start/Cancel Buttons];
end
subgraph "Current Preset Editor (Separate Dialog)"
F[Preset List + Load/New/Delete] --> G{Tab Widget};
subgraph "Tab Widget"
G1[General & Naming Tab]
G2[Mapping & Rules Tab]
end
G --> H[Save / Save As / Close Buttons];
end
A -- Manage Button Click --> F;
style F fill:#f9f,stroke:#333,stroke-width:2px
style G fill:#f9f,stroke:#333,stroke-width:2px
style H fill:#f9f,stroke:#333,stroke-width:2px
```
**Proposed Layout:**
```mermaid
graph TD
subgraph "Proposed Main Window Layout"
direction LR
subgraph "Collapsible Preset Editor Panel (Left)"
P_List[Preset List] --> P_Buttons[New / Delete Buttons]
P_Buttons --> P_Tabs{Tab Widget}
subgraph "Editor Tabs"
P_Tab1[General & Naming]
P_Tab2[Mapping & Rules]
end
P_Tabs --> P_Save[Save / Save As Buttons]
end
subgraph "Main Area (Right)"
M_Preset["Preset Combo (for processing)"] --> M_DragDrop("Drag & Drop Area")
M_DragDrop --> M_Preview{"File Preview Table (with Status Column)"}
M_Preview --> M_Progress[Progress Bar]
M_Progress --> M_Controls[Options + Start/Cancel Buttons]
end
P_List -- Selection Loads --> P_Tabs;
P_Save -- Updates --> P_List;
P_List -- Updates --> M_Preset;
style M_Preview fill:#ccf,stroke:#333,stroke-width:2px
style P_List fill:#cfc,stroke:#333,stroke-width:2px
style P_Tabs fill:#cfc,stroke:#333,stroke-width:2px
style P_Save fill:#cfc,stroke:#333,stroke-width:2px
end
```
View File
@@ -1,63 +0,0 @@
# Plan: Update GUI Preview Status
**Objective:** Modify the Asset Processor GUI preview to distinguish between files explicitly marked as "Extra" by preset patterns and those that are simply unclassified.
**Current Statuses:**
* Mapped
* Ignored
* Extra (Includes both explicitly matched and unclassified files)
**Proposed Statuses:**
* Mapped
* Ignored
* Extra (Files explicitly matched by `move_to_extra_patterns` in the preset)
* Unrecognised (Files not matching any map, model, or explicit extra pattern)
**Visual Plan:**
```mermaid
graph TD
A[Start: User Request] --> B{Analyze Request: Split 'Extra' status};
B --> C{Info Gathering};
C --> D[Read gui/prediction_handler.py];
D --> E[Read asset_processor.py];
E --> F[Read Presets/Poliigon.json];
F --> G[Read gui/main_window.py];
G --> H{Identify Code Locations};
H --> I["asset_processor.py: get_detailed_file_predictions()"];
H --> J["gui/main_window.py: on_prediction_results_ready()"];
I --> K{Plan Code Changes};
J --> K;
K --> L[Modify asset_processor.py: Differentiate status based on 'reason'];
K --> M["Modify gui/main_window.py: Add color rule for 'Unrecognised' (#92371f)"];
L --> N{Final Plan};
M --> N;
N --> O[Present Plan to User];
O --> P{User Approval + Color Choice};
P --> Q[Switch to Code Mode for Implementation];
subgraph "Code Modification"
L
M
end
subgraph "Information Gathering"
D
E
F
G
end
```
**Implementation Steps:**
1. **Modify `asset_processor.py` (`get_detailed_file_predictions` method):**
* Locate the loop processing the `self.classified_files["extra"]` list (around line 1448).
* Inside this loop, check the `reason` associated with each file:
* If `reason == 'Unclassified'`, set the output `status` to `"Unrecognised"`.
* Otherwise (if the reason indicates an explicit pattern match), set the output `status` to `"Extra"`.
* Adjust the `details` string provided in the output for clarity (e.g., show pattern match reason for "Extra", maybe just "[Unrecognised]" for the new status).
2. **Modify `gui/main_window.py` (`on_prediction_results_ready` method):**
* Locate the section where text color is applied based on the `status` (around line 673).
* Add a new `elif` condition to handle `status == "Unrecognised"` and assign it the color `QColor("#92371f")`.
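The status split from step 1 can be sketched as a small helper. Only the `#92371f` colour comes from the plan; the detail strings are assumed wording:

```python
UNRECOGNISED_COLOR = "#92371f"  # colour chosen in the plan for 'Unrecognised'

def classify_extra(reason):
    """Split the old 'Extra' status based on the classification reason."""
    if reason == "Unclassified":
        return "Unrecognised", "[Unrecognised]"
    # Explicit move_to_extra_patterns match keeps the 'Extra' status.
    return "Extra", f"Matched extra pattern: {reason}"
```

The GUI side then maps `"Unrecognised"` to `UNRECOGNISED_COLOR` in the existing status-to-colour branch.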
View File
@@ -1,194 +0,0 @@
# Implementation Plan: GUI User-Friendliness Enhancements
This document outlines the plan for implementing three key GUI improvements for the Asset Processor Tool, focusing on user-friendliness and workflow efficiency.
**Target Audience:** Developers implementing these features.
**Status:** Planning Phase
## Feature 1: Editable Asset Name
**Goal:** Allow users to edit the name of an asset directly in the main view, and automatically update the 'Target Asset' field of all associated child files to reflect the new name.
**Affected Components:**
* `gui/unified_view_model.py` (`UnifiedViewModel`)
* `gui/delegates.py` (`LineEditDelegate`)
* `gui/main_window.py` (or view setup location)
* `rule_structure.py` (`AssetRule`, `FileRule`)
* Potentially a new handler or modifications to `gui/asset_restructure_handler.py`
**Implementation Steps:**
1. **Enable Editing in Model (`UnifiedViewModel`):**
* Modify `flags()`: For an index pointing to an `AssetRule`, return `Qt.ItemIsEditable` in addition to default flags when `index.column()` is `COL_NAME`.
* Modify `setData()`:
* Add logic to handle `isinstance(item, AssetRule)` and `column == self.COL_NAME`.
* Get the `new_asset_name` from the `value`.
* **Validation:** Before proceeding, check if an `AssetRule` with `new_asset_name` already exists within the same parent `SourceRule`. If so, log a warning and return `False` to prevent duplicate names.
* Store the `old_asset_name = item.asset_name`.
* If `new_asset_name` is valid and different from `old_asset_name`:
* Update `item.asset_name = new_asset_name`.
* Set `changed = True`.
* **Crucial - Child Update:** Iterate through *all* `SourceRule`s, `AssetRule`s, and `FileRule`s in the model (`self._source_rules`). For each `FileRule` found where `file_rule.target_asset_name_override == old_asset_name`, update `file_rule.target_asset_name_override = new_asset_name`. Emit `dataChanged` for the `COL_TARGET_ASSET` index of each modified `FileRule`. (See Potential Challenges regarding performance).
* Emit `dataChanged` for the edited `AssetRule`'s `COL_NAME` index.
* Return `changed`.
* **(Alternative Signal Approach):** Instead of performing the child update directly in `setData`, emit a new signal like `assetNameChanged = Signal(QModelIndex, str, str)` carrying the `AssetRule` index, old name, and new name. A dedicated handler would connect to this signal to perform the child updates. This improves separation of concerns.
2. **Assign Delegate (`main_window.py` / View Setup):**
* Ensure the `LineEditDelegate` is assigned to the view for the `COL_NAME` using `view.setItemDelegateForColumn(UnifiedViewModel.COL_NAME, line_edit_delegate_instance)`.
3. **Handling Child Updates (if using Signal Approach):**
* Create a new handler class (e.g., `AssetNameChangeHandler`) or add a slot to `AssetRestructureHandler`.
* Connect the `UnifiedViewModel.assetNameChanged` signal to this slot.
* The slot receives the `AssetRule` index, old name, and new name. It iterates through the model's `FileRule`s, updates their `target_asset_name_override` where it matches the old name, and emits `dataChanged` for those files.
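The rename path (validation, rename, child update) can be sketched with pared-down stand-ins for the rule classes. This is a pure-logic sketch; the real `setData()` would additionally emit `dataChanged` for the asset's `COL_NAME` index and for each modified `FileRule`'s `COL_TARGET_ASSET` index:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FileRule:                      # stand-in for rule_structure.FileRule
    target_asset_name_override: Optional[str] = None

@dataclass
class AssetRule:                     # stand-in for rule_structure.AssetRule
    asset_name: str

def rename_asset(sibling_assets: List[AssetRule], asset: AssetRule,
                 new_name: str, all_file_rules: List[FileRule]) -> bool:
    """Core of the proposed setData() rename: duplicate-name validation,
    rename, then the crucial child update of matching overrides."""
    if any(a is not asset and a.asset_name == new_name for a in sibling_assets):
        return False                 # duplicate within the parent SourceRule
    old_name = asset.asset_name
    if not new_name or new_name == old_name:
        return False
    asset.asset_name = new_name
    for fr in all_file_rules:        # would emit dataChanged per modified rule
        if fr.target_asset_name_override == old_name:
            fr.target_asset_name_override = new_name
    return True
```

The linear scan over `all_file_rules` is the performance concern noted below; an override-name-to-rules lookup map would avoid it.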
**Data Model Impact:**
* `AssetRule.asset_name` becomes directly mutable via the GUI.
* The relationship between files and their intended parent asset (represented by `FileRule.target_asset_name_override`) is maintained automatically when the parent asset's name changes.
**Potential Challenges/Considerations:**
* **Performance:** The child update logic requires iterating through potentially all files in the model. For very large datasets, this could be slow. Consider optimizing by maintaining an index/lookup map (`Dict[str, List[FileRule]]`) mapping target asset override names to the list of `FileRule`s using them. This map would need careful updating whenever overrides change or files are moved.
* **Duplicate Asset Names:** The plan includes basic validation in `setData`. Robust handling (e.g., user feedback, preventing the edit) is needed.
* **Undo/Redo:** Reversing an asset name change requires reverting the name *and* reverting all the child `target_asset_name_override` changes, adding complexity.
* **Scope of Child Update:** The current plan updates *any* `FileRule` whose override matches the old name. Confirm if this update should be restricted only to files originally under the renamed asset or within the same `SourceRule`. The current approach seems most logical based on how `target_asset_name_override` works.
```mermaid
sequenceDiagram
participant User
participant View
participant LineEditDelegate
participant UnifiedViewModel
participant AssetNameChangeHandler
User->>View: Edits AssetRule Name in COL_NAME
View->>LineEditDelegate: setModelData(editor, model, index)
LineEditDelegate->>UnifiedViewModel: setData(index, new_name, EditRole)
UnifiedViewModel->>UnifiedViewModel: Validate new_name (no duplicates)
UnifiedViewModel->>UnifiedViewModel: Update AssetRule.asset_name
alt Signal Approach
UnifiedViewModel->>AssetNameChangeHandler: emit assetNameChanged(index, old_name, new_name)
AssetNameChangeHandler->>UnifiedViewModel: Iterate through FileRules
loop For each FileRule where target_override == old_name
AssetNameChangeHandler->>UnifiedViewModel: Update FileRule.target_asset_name_override = new_name
UnifiedViewModel->>View: emit dataChanged(file_rule_target_index)
end
else Direct Approach in setData
UnifiedViewModel->>UnifiedViewModel: Iterate through FileRules
loop For each FileRule where target_override == old_name
UnifiedViewModel->>UnifiedViewModel: Update FileRule.target_asset_name_override = new_name
UnifiedViewModel->>View: emit dataChanged(file_rule_target_index)
end
end
UnifiedViewModel->>View: emit dataChanged(asset_rule_name_index)
```
## Feature 2: Item Type Field Conversion
**Goal:** Replace the `QComboBox` delegate for the "Item Type" column (for `FileRule`s) with a `QLineEdit` that provides auto-suggestions based on defined file types, similar to the existing "Supplier" field.
**Affected Components:**
* `gui/main_window.py` (or view setup location)
* `gui/delegates.py` (Requires a new delegate)
* `gui/unified_view_model.py` (`UnifiedViewModel`)
* `config/app_settings.json` (Source of file type definitions)
**Implementation Steps:**
1. **Create New Delegate (`delegates.py`):**
* Create a new class `ItemTypeSearchDelegate(QStyledItemDelegate)`.
* **`createEditor(self, parent, option, index)`:**
* Create a `QLineEdit` instance.
* Get the list of valid item type keys: `item_keys = index.model()._file_type_keys` (add error handling).
* Create a `QCompleter` using `item_keys` and set it on the `QLineEdit` (configure case sensitivity, filter mode, completion mode as in `SupplierSearchDelegate`).
* Return the editor.
* **`setEditorData(self, editor, index)`:**
* Get the current value using `index.model().data(index, Qt.EditRole)`.
* Set the editor's text (`editor.setText(str(value) if value is not None else "")`).
* **`setModelData(self, editor, model, index)`:**
* Get the `final_text = editor.text().strip()`.
* Determine the `value_to_set = final_text if final_text else None`.
* Call `model.setData(index, value_to_set, Qt.EditRole)`.
* **Important:** Unlike `SupplierSearchDelegate`, do *not* add `final_text` to the list of known types or save anything back to config. Suggestions are strictly based on `config/app_settings.json`.
* **`updateEditorGeometry(self, editor, option, index)`:**
* Standard implementation: `editor.setGeometry(option.rect)`.
2. **Assign Delegate (`main_window.py` / View Setup):**
* Instantiate the new `ItemTypeSearchDelegate`.
* Find where delegates are set for the view.
* Replace the `ComboBoxDelegate` assignment for `UnifiedViewModel.COL_ITEM_TYPE` with the new `ItemTypeSearchDelegate` instance: `view.setItemDelegateForColumn(UnifiedViewModel.COL_ITEM_TYPE, item_type_search_delegate_instance)`.
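The delegate's two behaviours can be approximated without Qt: prefix completion over the known keys (what the `QCompleter` would provide) and the `setModelData()` normalisation. Pure-Python sketch, hypothetical helper names:

```python
def suggest_item_types(file_type_keys, prefix):
    """Approximates case-insensitive prefix completion over the item type
    keys loaded from config/app_settings.json."""
    p = prefix.lower()
    return [k for k in file_type_keys if k.lower().startswith(p)]

def item_type_value_to_set(editor_text):
    """setModelData(): an empty edit clears the override (None), and nothing
    is ever written back to the config, unlike SupplierSearchDelegate."""
    final_text = editor_text.strip()
    return final_text if final_text else None
```

Keeping suggestions read-only is the key design difference from the supplier field: unknown item types can still be typed, but they never pollute the configured list.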
**Data Model Impact:**
* None. The underlying data (`FileRule.item_type_override`) and its handling remain the same. Only the GUI editor changes.
**Potential Challenges/Considerations:**
* None significant. This is a relatively straightforward replacement of one delegate type with another, leveraging existing patterns from `SupplierSearchDelegate` and data loading from `UnifiedViewModel`.
## Feature 3: Drag-and-Drop File Re-parenting
**Goal:** Enable users to drag one or more `FileRule` rows and drop them onto an `AssetRule` row to change the parent asset of the dragged files.
**Affected Components:**
* `gui/main_panel_widget.py` or `gui/main_window.py` (View management)
* `gui/unified_view_model.py` (`UnifiedViewModel`)
**Implementation Steps:**
1. **Enable Drag/Drop in View (`main_panel_widget.py` / `main_window.py`):**
* Get the `QTreeView` instance (`view`).
* `view.setSelectionMode(QAbstractItemView.ExtendedSelection)` (Allow selecting multiple files)
* `view.setDragEnabled(True)`
* `view.setAcceptDrops(True)`
* `view.setDropIndicatorShown(True)`
* `view.setDefaultDropAction(Qt.MoveAction)`
* `view.setDragDropMode(QAbstractItemView.InternalMove)`
2. **Implement Drag/Drop Support in Model (`UnifiedViewModel`):**
* **`flags(self, index)`:**
* Modify to include `Qt.ItemIsDragEnabled` if `index.internalPointer()` is a `FileRule`.
* Modify to include `Qt.ItemIsDropEnabled` if `index.internalPointer()` is an `AssetRule`.
* Return the combined flags.
* **`supportedDropActions(self)`:**
* Return `Qt.MoveAction`.
* **`mimeData(self, indexes)`:**
* Create `QMimeData`.
* Encode information about the dragged rows (which must be `FileRule`s). Store a list of tuples, each containing `(source_parent_row, source_parent_col, source_row)` for each valid `FileRule` index in `indexes`. Use a custom MIME type (e.g., `"application/x-filerule-index-list"`).
* Return the `QMimeData`.
* **`canDropMimeData(self, data, action, row, column, parent)`:**
* Check if `action == Qt.MoveAction`.
* Check if `data.hasFormat("application/x-filerule-index-list")`.
* Check if `parent.isValid()` and `parent.internalPointer()` is an `AssetRule`.
* Return `True` if all conditions met, `False` otherwise.
* **`dropMimeData(self, data, action, row, column, parent)`:**
* Check `action` and MIME type again for safety.
* Get the target `AssetRule` item: `target_asset = parent.internalPointer()`. If not an `AssetRule`, return `False`.
* Decode the `QMimeData` to get the list of source index information.
* Create a list `files_to_move = []` containing the actual `QModelIndex` objects for the source `FileRule`s (reconstruct them using the decoded info and `self.index()`).
* Iterate through `files_to_move`:
* Get the `source_file_index`.
* Get the `file_item = source_file_index.internalPointer()`.
* Get the `old_parent_asset = getattr(file_item, 'parent_asset', None)`.
* If `target_asset != old_parent_asset`:
* Call `self.moveFileRule(source_file_index, parent)`. This handles the actual move within the model structure and emits `beginMoveRows`/`endMoveRows`.
* **After successful move:** Update the file's override: `file_item.target_asset_name_override = target_asset.asset_name`.
* **After successful move:** Retrieve the file's *new* `QModelIndex` at the `COL_TARGET_ASSET` column and emit `self.dataChanged.emit(new_index, new_index, [Qt.DisplayRole, Qt.EditRole])` so the view refreshes the target-asset cell of the now-moved file.
* **Cleanup:** After the loop, identify any original parent `AssetRule`s that became empty as a result of the moves. Call `self.removeAssetRule(empty_asset_rule)` for each.
* Return `True`.
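Stripped of the Qt plumbing, the drop handling above amounts to the following. The `AssetRule`/`FileRule` classes here are minimal illustrative stand-ins, not the real model items, and the `moveFileRule`/`beginMoveRows` signalling is omitted:

```python
class AssetRule:
    def __init__(self, asset_name):
        self.asset_name = asset_name
        self.files = []

class FileRule:
    def __init__(self, name, parent_asset=None):
        self.name = name
        self.parent_asset = parent_asset
        self.target_asset_name_override = None

def drop_file_rules(files_to_move, target_asset, all_assets):
    """Re-parent FileRules onto target_asset, sync overrides, prune empty parents."""
    old_parents = set()
    for file_item in files_to_move:
        old_parent = file_item.parent_asset
        if old_parent is target_asset:
            continue  # already under the target, nothing to do
        if old_parent is not None:
            old_parent.files.remove(file_item)
            old_parents.add(old_parent)
        target_asset.files.append(file_item)
        file_item.parent_asset = target_asset
        # Keep the override consistent with the new visual parent.
        file_item.target_asset_name_override = target_asset.asset_name
    # Cleanup: remove any original parents that became empty.
    for parent in old_parents:
        if not parent.files and parent in all_assets:
            all_assets.remove(parent)
    return True
```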
**Data Model Impact:**
* Changes the parentage of `FileRule` items within the model's internal structure.
* Updates `FileRule.target_asset_name_override` to match the `asset_name` of the new parent `AssetRule`, ensuring consistency between the visual structure and the override field.
**Potential Challenges/Considerations:**
* **MIME Data Encoding/Decoding:** Ensure the index information is reliably encoded and decoded, especially handling potential model changes between drag start and drop. Using persistent IDs instead of row/column numbers might be more robust if available.
* **Cleanup Logic:** Reliably identifying and removing empty parent assets after potentially moving multiple files from different original parents requires careful tracking.
* **Transactionality:** If moving multiple files and one part fails, should the whole operation roll back? The current plan doesn't explicitly handle this; errors are logged, and subsequent steps might proceed.
* **Interaction with `AssetRestructureHandler`:** The plan suggests handling the move and override update directly within `dropMimeData`. This means the existing `AssetRestructureHandler` won't be triggered by the override change *during* the drop. Ensure the cleanup logic (removing empty parents) is correctly handled either in `dropMimeData` or by ensuring `moveFileRule` emits signals that the handler *can* use for cleanup.

---
# Plan to Resolve ISSUE-011: Blender nodegroup script creates empty assets for skipped items
**Issue:** The Blender nodegroup creation script (`blenderscripts/create_nodegroups.py`) creates empty asset entries in the target .blend file for assets belonging to categories that the script is designed to skip, even though it correctly identifies them as skippable.
**Root Cause:** The script creates the parent node group, marks it as an asset, and applies tags *before* checking if the asset category is one that should be skipped for full nodegroup generation.
**Plan:**
1. **Analyze `blenderscripts/create_nodegroups.py` (Completed):** Confirmed that parent group creation and asset marking occur before the asset category skip check.
2. **Modify `blenderscripts/create_nodegroups.py`:**
* Relocate the code block responsible for creating/updating the parent node group, marking it as an asset, and applying tags (currently lines 605-645) to *after* the conditional check `if asset_category not in CATEGORIES_FOR_NODEGROUP_GENERATION:` (line 646).
* This ensures that if an asset's category is in the list of categories to be skipped, the `continue` statement will be hit before any actions are taken to create the parent asset entry in the Blender file.
3. **Testing:**
* Use test assets that represent both categories that *should* and *should not* result in full nodegroup generation based on the `CATEGORIES_FOR_NODEGROUP_GENERATION` list.
* Run the asset processor with these test assets, ensuring the Blender script is executed.
* Inspect the resulting `.blend` file to confirm:
* No `PBRSET_` node groups are created for assets belonging to skipped categories.
* `PBRSET_` node groups are correctly created and populated for assets belonging to categories in `CATEGORIES_FOR_NODEGROUP_GENERATION`.
4. **Update Ticket Status:**
* Once the fix is implemented and verified, update the `Status` field in `Tickets/ISSUE-011-blender-nodegroup-empty-assets.md` to `Resolved`.
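The intended control flow after the fix can be sketched in plain Python. The `bpy` calls are replaced by an illustrative list append, and the category names are invented for the example:

```python
# Fixed ordering: the skip check runs BEFORE any parent-group/asset creation,
# so no empty PBRSET_ entry is ever created for a skipped category.
CATEGORIES_FOR_NODEGROUP_GENERATION = {"Surface", "Decal"}  # example values

def process_assets(assets, created):
    for asset in assets:
        category = asset["category"]
        if category not in CATEGORIES_FOR_NODEGROUP_GENERATION:
            continue  # skipped categories never reach the creation step
        # Stand-in for create/update parent group, mark as asset, apply tags.
        created.append(f"PBRSET_{asset['name']}")
```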
**Logic Flow:**
```mermaid
graph TD
A[create_nodegroups.py] --> B{Load Asset Metadata};
B --> C{Determine Asset Category};
C --> D{Is Category Skipped?};
D -- Yes --> E[Exit Processing for Asset];
D -- No --> F{Create/Update Parent Group};
F --> G{Mark as Asset & Add Tags};
G --> H{Proceed with Child Group Creation etc.};
```

---
# Map Variant Handling Plan (Revised)
**Goal:**
1. Ensure map types listed in a new `RESPECT_VARIANT_MAP_TYPES` config setting (initially just "COL") always receive a numeric suffix (`-1`, `-2`, etc.), based on their order determined by preset keywords and alphabetical sorting within keywords.
2. Ensure all other map types *never* receive a numeric suffix.
3. Correctly prioritize 16-bit map variants (identified by `bit_depth_variants` in presets) over their 8-bit counterparts, ensuring the 8-bit version is ignored/marked as extra and the 16-bit version is correctly classified ("Mapped") in the GUI preview.
**Affected Files:**
* `config.py`: To define the `RESPECT_VARIANT_MAP_TYPES` list.
* `asset_processor.py`: To modify the classification and suffix assignment logic according to the new rule.
* `Presets/Poliigon.json`: To remove the conflicting pattern.
**Plan Details:**
```mermaid
graph TD
A[Start] --> B(Modify config.py);
B --> C(Modify asset_processor.py);
C --> D(Modify Presets/Poliigon.json);
D --> E{Review Revised Plan};
E -- Approve --> F(Optional: Write Plan to MD);
F --> G(Switch to Code Mode);
E -- Request Changes --> B;
G --> H[End Plan];
subgraph Modifications
B[1. Add RESPECT_VARIANT_MAP_TYPES list to config.py]
C["2. Update suffix logic in asset_processor.py (_inventory_and_classify_files)"]
D["3. Remove the *_16BIT* pattern from move_to_extra_patterns in Presets/Poliigon.json"]
end
```
1. **Modify `config.py`:**
* **Action:** Introduce a new configuration list named `RESPECT_VARIANT_MAP_TYPES`.
* **Value:** Initialize it as `RESPECT_VARIANT_MAP_TYPES = ["COL"]`.
* **Location:** Add this near other map-related settings like `STANDARD_MAP_TYPES`.
* **Purpose:** To explicitly define which map types should always respect variant numbering via suffixes.
2. **Modify `asset_processor.py`:**
* **File:** `asset_processor.py`
* **Method:** `_inventory_and_classify_files`
* **Location:** Within Step 5, replacing the suffix assignment logic (currently lines ~470-474).
* **Action:** Implement the new conditional logic for assigning the `final_map_type`.
* **New Logic:** Inside the loop iterating (via `enumerate`, index `i`) through `final_ordered_candidates` for each `base_map_type`:
```python
# Determine final map type based on the new rule
if base_map_type in self.config.respect_variant_map_types: # Check the new config list
# Always assign suffix for types in the list
final_map_type = f"{base_map_type}-{i + 1}"
else:
# Never assign suffix for types NOT in the list
final_map_type = base_map_type
# Assign to the final map list entry
final_map_list.append({
"map_type": final_map_type,
# ... rest of the dictionary assignment ...
})
```
* **Purpose:** To implement the strict rule: only types in `RESPECT_VARIANT_MAP_TYPES` get a suffix; all others do not.
3. **Modify `Presets/Poliigon.json`:**
* **File:** `Presets/Poliigon.json`
* **Location:** Within the `move_to_extra_patterns` list (currently line ~28).
* **Action:** Remove the string `"*_16BIT*"`.
* **Purpose:** To prevent premature classification of 16-bit variants as "Extra", allowing the specific 16-bit prioritization logic to function correctly.
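The net effect of the suffix rule can be illustrated with a small standalone function (names mirror the plan, not the actual `asset_processor.py` code):

```python
# Only map types listed in RESPECT_VARIANT_MAP_TYPES receive numeric
# suffixes, in candidate order; all other types keep their bare name.
RESPECT_VARIANT_MAP_TYPES = ["COL"]

def assign_final_map_types(base_map_type, ordered_candidates):
    final = []
    for i, _candidate in enumerate(ordered_candidates):
        if base_map_type in RESPECT_VARIANT_MAP_TYPES:
            final.append(f"{base_map_type}-{i + 1}")  # always suffixed
        else:
            final.append(base_map_type)               # never suffixed
    return final
```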

---
# Memory Optimization Plan: Strategy 2 - Load Grayscale Directly
This plan outlines the steps to implement memory optimization strategy #2, which involves loading known grayscale map types directly as grayscale images using OpenCV's `IMREAD_GRAYSCALE` flag. This reduces the memory footprint compared to loading them with `IMREAD_UNCHANGED` and then potentially converting later.
## 1. Identify Target Grayscale Map Types
Define a list of map type names (case-insensitive check recommended during implementation) that should always be treated as single-channel grayscale data.
**Initial List:**
```python
GRAYSCALE_MAP_TYPES = ['HEIGHT', 'ROUGH', 'METAL', 'AO', 'OPC', 'MASK']
```
*(Note: This list might need adjustment based on specific preset configurations or workflow requirements.)*
## 2. Modify `_process_maps` Loading Logic
Locate the primary image loading section within the `_process_maps` method in `asset_processor.py` (around line 608).
**Change:** Before calling `cv2.imread`, determine the correct flag based on the `map_type`:
```python
# (Define GRAYSCALE_MAP_TYPES list earlier in the scope or class)
# ... inside the loop ...
full_source_path = self.temp_dir / source_path_rel
# Determine the read flag
read_flag = cv2.IMREAD_GRAYSCALE if map_type.upper() in GRAYSCALE_MAP_TYPES else cv2.IMREAD_UNCHANGED
log.debug(f"Loading source {source_path_rel.name} with flag: {'GRAYSCALE' if read_flag == cv2.IMREAD_GRAYSCALE else 'UNCHANGED'}")
# Load the image using the determined flag
img_loaded = cv2.imread(str(full_source_path), read_flag)
if img_loaded is None:
raise AssetProcessingError(f"Failed to load image file: {full_source_path.name} with flag {read_flag}")
# ... rest of the processing logic ...
```
## 3. Modify `_merge_maps` Loading Logic
Locate the image loading section within the resolution loop in the `_merge_maps` method (around line 881).
**Change:** Apply the same conditional logic to determine the `imread` flag when loading input maps for merging:
```python
# ... inside the loop ...
input_file_path = self.temp_dir / res_details['path']
# Determine the read flag (reuse GRAYSCALE_MAP_TYPES list)
read_flag = cv2.IMREAD_GRAYSCALE if map_type.upper() in GRAYSCALE_MAP_TYPES else cv2.IMREAD_UNCHANGED
log.debug(f"Loading merge input {input_file_path.name} ({map_type}) with flag: {'GRAYSCALE' if read_flag == cv2.IMREAD_GRAYSCALE else 'UNCHANGED'}")
# Load the image using the determined flag
img = cv2.imread(str(input_file_path), read_flag)
if img is None:
raise AssetProcessingError(f"Failed to load merge input {input_file_path.name} with flag {read_flag}")
# ... rest of the merging logic ...
```
## 4. Verification
During implementation in `code` mode:
* Ensure the `GRAYSCALE_MAP_TYPES` list is defined appropriately (e.g., as a class constant or module-level constant).
* Confirm that downstream code (e.g., stats calculation, channel extraction, data type conversions) correctly handles numpy arrays that might be 2D (grayscale) instead of 3D (BGR/BGRA). The existing code appears to handle this, but it's important to verify.
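A minimal sketch of the kind of 2D/3D-safe handling to verify (illustrative helpers, not existing code):

```python
import numpy as np

def channel_count(img: np.ndarray) -> int:
    """Number of channels for 2D (grayscale) or 3D (BGR/BGRA) arrays."""
    return 1 if img.ndim == 2 else img.shape[2]

def as_single_channel(img: np.ndarray) -> np.ndarray:
    """Pass 2D arrays through unchanged; take the first plane of 3D arrays."""
    if img.ndim == 2:
        return img
    return img[:, :, 0]
```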
## Mermaid Diagram of Change
```mermaid
graph TD
subgraph _process_maps
A[Loop through map_info] --> B{Is map_type Grayscale?};
B -- Yes --> C["imread(path, GRAYSCALE)"];
B -- No --> D["imread(path, UNCHANGED)"];
C --> E[Process Image];
D --> E;
end
subgraph _merge_maps
F[Loop through resolutions] --> G[Loop through required_input_types];
G --> H{Is map_type Grayscale?};
H -- Yes --> I["imread(path, GRAYSCALE)"];
H -- No --> J["imread(path, UNCHANGED)"];
I --> K[Use Image in Merge];
J --> K;
end
style B fill:#f9f,stroke:#333,stroke-width:2px
style H fill:#f9f,stroke:#333,stroke-width:2px
```

---
# Plan: Implement Input-Based Output Format Logic
This plan outlines the steps to modify the Asset Processor Tool to determine the output format of texture maps based on the input file format and specific rules.
## Requirements Summary
Based on user clarifications:
1. **JPG Input -> JPG Output:** If the original source map is a JPG, the output for that map (at all processed resolutions) will also be JPG (8-bit).
2. **TIF Input -> PNG/EXR Output:** If the original source map is a TIF, the output will be PNG (if the target bit depth is 8-bit, or if 16-bit PNG is the configured preference) or EXR (if the target bit depth is 16-bit and EXR is the configured preference).
3. **Other Inputs (PNG, etc.) -> Configured Output:** For other input formats, the output will follow the existing logic based on target bit depth (using configured 16-bit or 8-bit formats, typically EXR/PNG).
4. **`force_8bit` Rule:** If a map type has a `force_8bit` rule, it overrides the input format. Even if the input was 16-bit TIF, the output will be 8-bit PNG.
5. **Merged Maps:** The output format is determined by the highest format in the hierarchy (EXR > TIF > PNG > JPG) based on the *original* formats of the input files used in the merge. However, if the highest format is TIF, the actual output will be PNG/EXR based on the target bit depth. The target bit depth itself is determined separately by the merge rule's `output_bit_depth` setting.
6. **JPG Resizing:** Resized JPGs will be saved as JPG.
## Implementation Plan
**Phase 1: Data Gathering Enhancement**
1. **Modify `_inventory_and_classify_files` in `asset_processor.py`:**
* When classifying map files, extract and store the original file extension (e.g., `.jpg`, `.tif`, `.png`) along with the `source_path`, `map_type`, etc., within the `self.classified_files["maps"]` list.
**Phase 2: Implement New Logic for Individual Maps (`_process_maps`)**
1. **Modify `_process_maps` in `asset_processor.py`:**
* Inside the loop processing each `map_info`:
* Retrieve the stored original file extension.
* Determine the target output bit depth (8 or 16) using the existing logic (`config.get_bit_depth_rule`, source data type).
* **Implement New Format Determination:**
* Initialize `output_format` and `output_ext`.
* Check the `force_8bit` rule first: If the rule is `force_8bit`, set `output_format = 'png'` and `output_ext = '.png'`, regardless of input format.
* If not `force_8bit`:
* If `original_extension == '.jpg'` and `target_bit_depth == 8`: Set `output_format = 'jpg'`, `output_ext = '.jpg'`.
* If `original_extension == '.tif'`:
* If `target_bit_depth == 16`: Determine format (EXR/PNG) and extension based on `config.get_16bit_output_formats()`.
* If `target_bit_depth == 8`: Set `output_format = 'png'`, `output_ext = '.png'`.
* If `original_extension` is neither `.jpg` nor `.tif` (e.g., `.png`):
* If `target_bit_depth == 16`: Determine format (EXR/PNG) and extension based on `config.get_16bit_output_formats()`.
* If `target_bit_depth == 8`: Set `output_format = config.get_8bit_output_format()` (likely 'png'), `output_ext = f".{output_format}"`.
* **Remove Old Logic:** Delete the code block that checks `self.config.resolution_threshold_for_jpg`.
* Set `save_params` based on the *newly determined* `output_format` (e.g., `cv2.IMWRITE_JPEG_QUALITY` for JPG, `cv2.IMWRITE_PNG_COMPRESSION` for PNG).
* Proceed with data type conversion (if needed based on target bit depth and format requirements like EXR needing float16) and saving using `cv2.imwrite` with the determined `output_path_temp` (using the new `output_ext`) and `save_params`. Ensure fallback logic (e.g., EXR -> PNG) still functions correctly if needed.
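The decision tree above can be condensed into a standalone sketch. The config accessors are replaced by illustrative defaults; the real code would read them via `config.get_16bit_output_formats()` / `config.get_8bit_output_format()`:

```python
def determine_output_format(original_extension, target_bit_depth,
                            bit_depth_rule=None,
                            fmt_16bit_primary="exr", fmt_8bit="png"):
    """Per-map output format per the rules in Phase 2 (illustrative)."""
    if bit_depth_rule == "force_8bit":
        return "png"                 # force_8bit overrides the input format
    if original_extension == ".jpg" and target_bit_depth == 8:
        return "jpg"                 # JPG input -> JPG output
    if target_bit_depth == 16:
        return fmt_16bit_primary     # TIF and other inputs: configured 16-bit
    if original_extension == ".tif":
        return "png"                 # 8-bit TIF -> PNG
    return fmt_8bit                  # other 8-bit inputs -> configured 8-bit
```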
**Phase 3: Implement New Logic for Merged Maps (`_merge_maps`)**
1. **Modify `_merge_maps` in `asset_processor.py`:**
* Inside the loop for each `current_res_key`:
* When loading input maps (`loaded_inputs`), also retrieve their *original* file extensions. These were captured during classification and are available via `self.classified_files["maps"]`; either look them up by `res_details['path']` or store them alongside the resolution details earlier.
* **Determine Highest Input Format:** Iterate through the original extensions of the loaded inputs for this resolution. Use the hierarchy (EXR > TIF > PNG > JPG) to find the highest format present.
* **Determine Final Output Format:**
* Start with the `highest_input_format`.
* If `highest_input_format == 'tif'`:
* Check the target bit depth determined by the merge rule (`output_bit_depth`).
* If `target_bit_depth == 16`: Set final format based on `config.get_16bit_output_formats()` (EXR/PNG).
* If `target_bit_depth == 8`: Set final format to `png`.
* Otherwise (JPG, PNG, EXR), the final format is the `highest_input_format`.
* Set `output_ext` based on the `final_output_format`.
* Set `save_params` based on the `final_output_format`.
* Proceed with merging channels, converting the merged data to the target bit depth specified by the *merge rule*, and saving using `cv2.imwrite` with the determined `merged_output_path_temp` (using the new `output_ext`) and `save_params`.
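The hierarchy step can be sketched as follows (illustrative, assuming lowercase format names):

```python
# EXR > TIF > PNG > JPG, with the TIF adjustment described above.
FORMAT_RANK = {"exr": 3, "tif": 2, "png": 1, "jpg": 0}

def merged_output_format(input_formats, target_bit_depth,
                         fmt_16bit_primary="exr"):
    highest = max(input_formats, key=lambda f: FORMAT_RANK[f])
    if highest == "tif":
        # TIF is never emitted directly: choose by target bit depth instead.
        return fmt_16bit_primary if target_bit_depth == 16 else "png"
    return highest
```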
**Phase 4: Configuration and Documentation**
1. **Modify `config.py`:**
* Comment out or remove the `RESOLUTION_THRESHOLD_FOR_JPG` variable as it's no longer used. Add a comment explaining why it was removed.
2. **Update `readme.md`:**
* Modify the "Features" section (around line 21) and the "Configuration" section (around lines 36-40, 86) to accurately describe the new output format logic:
* Explain that JPG inputs result in JPG outputs.
* Explain that TIF inputs result in PNG/EXR outputs based on target bit depth and config.
* Explain the merged map format determination based on input hierarchy (with the TIF->PNG/EXR adjustment).
* Mention the removal of the JPG resolution threshold.
## Visual Plan (Mermaid)
```mermaid
graph TD
A[Start] --> B(Phase 1: Enhance Classification);
B --> B1("Store original extension in classified_files['maps']");
B1 --> C(Phase 2: Modify _process_maps);
C --> C1(Get original extension);
C --> C2(Determine target bit depth);
C --> C3(Apply New Format Logic);
C3 -- force_8bit --> C3a[Format=PNG];
C3 -- input=.jpg, 8bit --> C3b[Format=JPG];
C3 -- input=.tif, 16bit --> C3c["Format=EXR/PNG (Config)"];
C3 -- input=.tif, 8bit --> C3d[Format=PNG];
C3 -- input=other, 16bit --> C3c;
C3 -- input=other, 8bit --> C3e["Format=PNG (Config)"];
C3a --> C4(Remove JPG Threshold Check);
C3b --> C4;
C3c --> C4;
C3d --> C4;
C3e --> C4;
C4 --> C5(Set Save Params & Save);
C5 --> D(Phase 3: Modify _merge_maps);
D --> D1(Get original extensions of inputs);
D --> D2(Find highest format via hierarchy);
D2 --> D3(Adjust TIF -> PNG/EXR based on target bit depth);
D3 --> D4(Determine target bit depth from rule);
D4 --> D5(Set Save Params & Save Merged);
D5 --> E(Phase 4: Config & Docs);
E --> E1(Update config.py - Remove threshold);
E --> E2(Update readme.md);
E2 --> F[End];
```

---
# Revised Plan: Implement Input-Based Output Format Logic with JPG Threshold Override
This plan outlines the steps to modify the Asset Processor Tool to determine the output format of texture maps based on the input file format, specific rules, and a JPG resolution threshold override.
## Requirements Summary (Revised)
Based on user clarifications:
1. **JPG Threshold Override:** If the target output bit depth is 8-bit AND the image resolution is greater than or equal to `RESOLUTION_THRESHOLD_FOR_JPG` (defined in `config.py`), the output format **must** be JPG.
2. **Input-Based Logic (if threshold not met):**
* **JPG Input -> JPG Output:** If the original source map is JPG and the target is 8-bit (and below threshold), output JPG.
* **TIF Input -> PNG/EXR Output:** If the original source map is TIF:
* If target is 16-bit, output EXR or PNG based on `OUTPUT_FORMAT_16BIT_PRIMARY` config.
* If target is 8-bit (and below threshold), output PNG.
* **Other Inputs (PNG, etc.) -> Configured Output:** For other input formats (and below threshold if 8-bit):
* If target is 16-bit, output EXR or PNG based on `OUTPUT_FORMAT_16BIT_PRIMARY` config.
* If target is 8-bit, output PNG (or format specified by `OUTPUT_FORMAT_8BIT`).
3. **`force_8bit` Rule:** If a map type has a `force_8bit` rule, the target bit depth is 8-bit. The output format will then be determined by the JPG threshold override or the input-based logic (resulting in JPG or PNG).
4. **Merged Maps:**
* Determine target `output_bit_depth` from the merge rule (`respect_inputs`, `force_8bit`, etc.).
* **Check JPG Threshold Override:** If target `output_bit_depth` is 8-bit AND resolution >= threshold, the final output format is JPG.
* **Else (Hierarchy Logic):** Determine the highest format among original inputs (EXR > TIF > PNG > JPG).
* If highest was TIF, adjust based on target bit depth (16-bit -> EXR/PNG config; 8-bit -> PNG).
* Otherwise, use the highest format found (EXR, PNG, JPG).
* **JPG 8-bit Check:** If the final format is JPG but the target bit depth was 16, force the merged data to 8-bit before saving.
5. **JPG Resizing:** Resized JPGs will be saved as JPG if the logic determines JPG as the output format.
## Implementation Plan (Revised)
**Phase 1: Data Gathering Enhancement** (Already Done & Correct)
* `_inventory_and_classify_files` stores the original file extension in `self.classified_files["maps"]`.
**Phase 2: Modify `_process_maps`**
1. Retrieve the `original_extension` from `map_info`.
2. Determine the target `output_bit_depth` (8 or 16).
3. Get the `threshold = self.config.resolution_threshold_for_jpg`.
4. Get the `target_dim` for the current resolution loop iteration.
5. **New Format Logic (Revised):**
* Initialize `output_format`, `output_ext`, `save_params`, `needs_float16`.
* **Check JPG Threshold Override:**
* If `output_bit_depth == 8` AND `target_dim >= threshold`: Set format to JPG, set JPG params.
* **Else (Apply Input/Rule-Based Logic):**
* If `bit_depth_rule == 'force_8bit'`: Set format to PNG (8-bit), set PNG params.
* Else if `original_extension == '.jpg'` and `output_bit_depth == 8`: Set format to JPG, set JPG params.
* Else if `original_extension == '.tif'`:
* If `output_bit_depth == 16`: Set format to EXR/PNG (16-bit config), set params, set `needs_float16` if EXR.
* If `output_bit_depth == 8`: Set format to PNG, set PNG params.
* Else (other inputs like `.png`):
* If `output_bit_depth == 16`: Set format to EXR/PNG (16-bit config), set params, set `needs_float16` if EXR.
* If `output_bit_depth == 8`: Set format to PNG (8-bit config), set PNG params.
6. Proceed with data type conversion and saving.
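The revised precedence, with the threshold override applied first, can be sketched as a standalone function (names and defaults are illustrative, not the exact config API):

```python
def revised_output_format(original_extension, output_bit_depth, target_dim,
                          threshold, bit_depth_rule=None,
                          fmt_16bit_primary="exr", fmt_8bit="png"):
    """Revised Phase 2 decision: JPG threshold override, then input/rule logic."""
    # 1. JPG threshold override takes precedence for 8-bit targets.
    if output_bit_depth == 8 and target_dim >= threshold:
        return "jpg"
    # 2. Otherwise fall back to the input/rule-based logic.
    if bit_depth_rule == "force_8bit":
        return "png"
    if original_extension == ".jpg" and output_bit_depth == 8:
        return "jpg"
    if output_bit_depth == 16:
        return fmt_16bit_primary
    if original_extension == ".tif":
        return "png"
    return fmt_8bit
```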
**Phase 3: Modify `_merge_maps`**
1. Retrieve original extensions of inputs (`input_original_extensions`).
2. Determine target `output_bit_depth` from the merge rule.
3. Get the `threshold = self.config.resolution_threshold_for_jpg`.
4. Get the `target_dim` for the current resolution loop iteration.
5. **New Format Logic (Revised):**
* Initialize `final_output_format`, `output_ext`, `save_params`, `needs_float16`.
* **Check JPG Threshold Override:**
* If `output_bit_depth == 8` AND `target_dim >= threshold`: Set `final_output_format = 'jpg'`.
* **Else (Apply Hierarchy/Rule-Based Logic):**
* Determine `highest_input_format` (EXR > TIF > PNG > JPG).
* Start with `final_output_format = highest_input_format`.
* If `highest_input_format == 'tif'`: Adjust based on target bit depth (16->EXR/PNG config; 8->PNG).
* Set `output_format = final_output_format`.
* Set `output_ext`, `save_params`, `needs_float16` based on `output_format`.
* **JPG 8-bit Check:** If `output_format == 'jpg'` and `output_bit_depth == 16`, force final merged data to 8-bit before saving and update `output_bit_depth` variable.
6. Proceed with merging, data type conversion, and saving.
**Phase 4: Configuration and Documentation**
1. **Modify `config.py`:** Ensure `RESOLUTION_THRESHOLD_FOR_JPG` is uncommented and set correctly (revert previous change).
2. **Update `readme.md`:** Clarify the precedence: 8-bit maps >= threshold become JPG, otherwise the input-based logic applies.
## Visual Plan (Mermaid - Revised)
```mermaid
graph TD
A[Start] --> B(Phase 1: Enhance Classification - Done);
B --> C(Phase 2: Modify _process_maps);
C --> C1(Get original extension);
C --> C2(Determine target bit depth);
C --> C3(Get target_dim & threshold);
C --> C4{8bit AND >= threshold?};
C4 -- Yes --> C4a[Format=JPG];
C4 -- No --> C5(Apply Input/Rule Logic);
C5 -- force_8bit --> C5a[Format=PNG];
C5 -- input=.jpg, 8bit --> C5b[Format=JPG];
C5 -- input=.tif, 16bit --> C5c["Format=EXR/PNG (Config)"];
C5 -- input=.tif, 8bit --> C5d[Format=PNG];
C5 -- input=other, 16bit --> C5c;
C5 -- input=other, 8bit --> C5e["Format=PNG (Config)"];
C4a --> C6(Set Save Params & Save);
C5a --> C6;
C5b --> C6;
C5c --> C6;
C5d --> C6;
C5e --> C6;
C6 --> D(Phase 3: Modify _merge_maps);
D --> D1(Get original extensions of inputs);
D --> D2(Determine target bit depth from rule);
D --> D3(Get target_dim & threshold);
D --> D4{8bit AND >= threshold?};
D4 -- Yes --> D4a[FinalFormat=JPG];
D4 -- No --> D5(Apply Hierarchy Logic);
D5 --> D5a(Find highest input format);
D5a --> D5b{Highest = TIF?};
D5b -- Yes --> D5c{Target 16bit?};
D5c -- Yes --> D5d["FinalFormat=EXR/PNG (Config)"];
D5c -- No --> D5e[FinalFormat=PNG];
D5b -- No --> D5f[FinalFormat=HighestInput];
D4a --> D6(Set Save Params);
D5d --> D6;
D5e --> D6;
D5f --> D6;
D6 --> D7{Format=JPG AND Target=16bit?};
D7 -- Yes --> D7a(Force data to 8bit);
D7 -- No --> D8(Save Merged);
D7a --> D8;
D8 --> E(Phase 4: Config & Docs);
E --> E1(Uncomment threshold in config.py);
E --> E2(Update readme.md);
E2 --> F[End];
```

---
3. Tab Breakdown and Widget Specifications:
Tab 1: General
OUTPUT_BASE_DIR: QLineEdit + QPushButton (opens QFileDialog.getExistingDirectory). Label: "Output Base Directory".
EXTRA_FILES_SUBDIR: QLineEdit. Label: "Subdirectory for Extra Files".
METADATA_FILENAME: QLineEdit. Label: "Metadata Filename".
Tab 2: Output & Naming
TARGET_FILENAME_PATTERN: QLineEdit. Label: "Output Filename Pattern". (Tooltip explaining placeholders recommended).
STANDARD_MAP_TYPES: QListWidget + "Add"/"Remove" QPushButtons. Label: "Standard Map Types".
RESPECT_VARIANT_MAP_TYPES: QLineEdit. Label: "Map Types Respecting Variants (comma-separated)".
ASPECT_RATIO_DECIMALS: QSpinBox (Min: 0, Max: ~6). Label: "Aspect Ratio Precision (Decimals)".
Tab 3: Image Processing
IMAGE_RESOLUTIONS: QTableWidget (Columns: "Name", "Resolution (px)") + "Add Row"/"Remove Row" QPushButtons. Label: "Defined Image Resolutions".
CALCULATE_STATS_RESOLUTION: QComboBox (populated from IMAGE_RESOLUTIONS keys). Label: "Resolution for Stats Calculation".
PNG_COMPRESSION_LEVEL: QSpinBox (Range: 0-9). Label: "PNG Compression Level".
JPG_QUALITY: QSpinBox (Range: 1-100). Label: "JPG Quality".
RESOLUTION_THRESHOLD_FOR_JPG: QComboBox (populated from IMAGE_RESOLUTIONS keys + "Never"/"Always"). Label: "Use JPG Above Resolution".
OUTPUT_FORMAT_8BIT: QComboBox (Options: "png", "jpg"). Label: "Output Format (8-bit)".
OUTPUT_FORMAT_16BIT_PRIMARY: QComboBox (Options: "png", "exr", "tif"). Label: "Primary Output Format (16-bit+)".
OUTPUT_FORMAT_16BIT_FALLBACK: QComboBox (Options: "png", "exr", "tif"). Label: "Fallback Output Format (16-bit+)".
Tab 4: Definitions (Overall QVBoxLayout)
Top Widget: DEFAULT_ASSET_CATEGORY: QComboBox (populated dynamically from Asset Types table below). Label: "Default Asset Category".
Bottom Widget: Inner QTabWidget:
Inner Tab 1: Asset Types
ASSET_TYPE_DEFINITIONS: QTableWidget (Columns: "Type Name", "Description", "Color", "Examples (comma-sep.)") + "Add Row"/"Remove Row" QPushButtons.
"Color" cell: QPushButton opening QColorDialog, button background shows color. Use QStyledItemDelegate.
"Examples" cell: Editable QLineEdit.
Inner Tab 2: File Types
FILE_TYPE_DEFINITIONS: QTableWidget (Columns: "Type ID", "Description", "Color", "Examples (comma-sep.)", "Standard Type", "Bit Depth Rule") + "Add Row"/"Remove Row" QPushButtons.
"Color" cell: QPushButton opening QColorDialog. Use QStyledItemDelegate.
"Examples" cell: Editable QLineEdit.
"Standard Type" cell: QComboBox (populated from STANDARD_MAP_TYPES + empty option). Use QStyledItemDelegate.
"Bit Depth Rule" cell: QComboBox (Options: "respect", "force_8bit"). Use QStyledItemDelegate.
Tab 5: Map Merging
Layout: QHBoxLayout.
Left Side: QListWidget displaying output_map_type for each rule. "Add Rule"/"Remove Rule" QPushButtons below. Label: "Merge Rules".
Right Side: QStackedWidget or dynamically populated QWidget showing details for the selected rule.
Rule Detail Form:
output_map_type: QLineEdit. Label: "Output Map Type Name".
inputs: QTableWidget (Fixed Rows: R, G, B, A. Columns: "Channel", "Input Map Type"). Label: "Channel Inputs". "Input Map Type" cell: QComboBox (populated from STANDARD_MAP_TYPES).
defaults: QTableWidget (Fixed Rows: R, G, B, A. Columns: "Channel", "Default Value"). Label: "Channel Defaults (if input missing)". "Default Value" cell: QDoubleSpinBox (Range: 0.0 - 1.0).
output_bit_depth: QComboBox (Options: "respect_inputs", "force_8bit", "force_16bit"). Label: "Output Bit Depth".
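A merge rule edited through this form could serialize to something like the following (field names inferred from the labels above; the ARM example values are illustrative):

```json
{
  "output_map_type": "ARM",
  "inputs":   { "R": "AO",  "G": "ROUGH", "B": "METAL", "A": null },
  "defaults": { "R": 1.0,   "G": 0.5,     "B": 0.0,     "A": 1.0 },
  "output_bit_depth": "respect_inputs"
}
```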
Tab 6: Postprocess Scripts
DEFAULT_NODEGROUP_BLEND_PATH: QLineEdit + QPushButton (opens QFileDialog.getOpenFileName, filter: "*.blend"). Label: "Default Node Group Library (.blend)".
DEFAULT_MATERIALS_BLEND_PATH: QLineEdit + QPushButton (opens QFileDialog.getOpenFileName, filter: "*.blend"). Label: "Default Materials Library (.blend)".
BLENDER_EXECUTABLE_PATH: QLineEdit + QPushButton (opens QFileDialog.getOpenFileName). Label: "Blender Executable Path".