Initial commit

Commit 30b5b7ec23 (2025-04-29 18:26:13 +02:00)
2366 changed files with 6634852 additions and 0 deletions
---
ID: FEAT-008
Type: Feature
Status: Backlog
Priority: Medium
Labels: [gui, enhancement]
Created: 2025-04-22
Updated: 2025-04-22
Related: gui/main_window.py, gui/prediction_handler.py
---
# [FEAT-008]: Refine GUI Preview Table Display and Sorting
## Description
Enhance the usability and clarity of the GUI's detailed file preview table. This involves improving the default sorting to group by asset and then status, removing redundant information, simplifying status text, and rearranging columns.
## Current Behavior
* The preview table lists all files from all assets together without clear grouping.
* The table does not sort by status by default, or the sort order isn't optimized for clarity.
* The table includes a "Predicted Output" column which is considered redundant.
* Status text can be verbose (e.g., "unmatched extra", "[Unmatched Extra (Regex match: #####)]", "Ignored (Superseed by 16bit variant for ####)").
* The "Original Path" column is not necessarily the rightmost column.
## Desired Behavior / Goals
* The preview table should group files by the asset they belong to (e.g., based on the input ZIP/folder name or derived asset name).
* Within each asset group, the table should sort rows by 'Status' by default.
* The default secondary sort order for 'Status' should prioritize actionable or problematic statuses, grouping models with their maps: Error > (Mapped & Model) > Ignored > Extra. (Note: Assumes 'Unrecognised' files are displayed as 'Extra').
* Within the 'Mapped & Model' group, files should be sorted alphabetically by original path or filename to keep related items together.
* The "Predicted Output" column should be removed from the table view.
* Status display text should be made more concise:
* "unmatched extra" should be displayed as "Extra".
* "[Unmatched Extra (Regex match: #####)]" should be displayed as "[Extra=#####]".
* "Ignored (Superseed by 16bit variant for ####)" should be displayed as "Superseded by 16bit ####".
* Other statuses ("Mapped", "Model", "Error") should remain clear.
* The "Original Path" column should be positioned as the rightmost column in the table.
## Implementation Notes (Optional)
* Modifications will likely be needed in `gui/main_window.py` (table view setup, column management, data population, sorting proxy model) and `gui/prediction_handler.py` (ensure prediction results include the source asset identifier).
* The table model (`QAbstractTableModel` or similar) needs to store the source asset identifier for each file row.
* Implement a custom multi-level sorting logic using `QSortFilterProxyModel`.
* The primary sort key will be the asset identifier.
* The secondary sort key will be the status, mapping 'Mapped' and 'Model' to the same priority level.
* The tertiary sort key will be the original path/filename.
* Update the column hiding/showing logic to remove the "Predicted Output" column.
* Implement string formatting or replacement logic for the status display text.
* Adjust the column order.
* Consider the performance implications of grouping and multi-level sorting, especially with a large number of assets/files. Add necessary optimizations (e.g., efficient data structures, potentially deferring sorting until needed).
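As a rough sketch, the multi-level order above boils down to a tuple sort key (in the GUI the same comparison would live in `QSortFilterProxyModel.lessThan`); the status names, priorities, and sample rows here are illustrative assumptions:

```python
# Sketch of the proposed multi-level sort as a plain sort key; in the GUI
# this comparison would live in QSortFilterProxyModel.lessThan. Status
# names and priorities are assumptions taken from this ticket.
STATUS_PRIORITY = {"Error": 0, "Mapped": 1, "Model": 1, "Ignored": 2, "Extra": 3}

def sort_key(row):
    """row: (asset_id, status, original_path)"""
    asset_id, status, path = row
    # Level 1: asset, Level 2: status priority (Mapped/Model tie),
    # Level 3: path keeps models next to their maps.
    return (asset_id, STATUS_PRIORITY.get(status, 99), path.lower())

rows = [
    ("brick_wall", "Extra", "readme.txt"),
    ("brick_wall", "Model", "brick_wall.fbx"),
    ("brick_wall", "Mapped", "brick_wall_col.png"),
    ("brick_wall", "Error", "broken.tif"),
]
rows.sort(key=sort_key)
```

Because "Mapped" and "Model" share a priority, the tie falls through to the path comparison, which is exactly what keeps a model adjacent to its maps.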
## Acceptance Criteria (Optional)
* [ ] When assets are added to the queue and detailed preview is active, the table visually groups files by their source asset.
* [ ] Within each asset group, the table automatically sorts by the Status column according to the specified multi-level order.
* [ ] Clicking the Status header cycles through ascending/descending sort based on the custom multi-level order: Asset > Status (Error > (Mapped & Model) > Ignored > Extra) > Filename.
* [ ] The "Predicted Output" column is not present in the detailed preview table.
* [ ] Status text for relevant items displays concisely as: "Extra", "[Extra=#####]", "Superseded by 16bit ####".
* [ ] The "Original Path" column is visually the last column on the right side of the table.
---
## Implementation Plan (Generated 2025-04-22)
This plan outlines the steps to implement the GUI preview refinements described in this ticket.
**Goal:** Enhance the GUI's detailed file preview table for better usability and clarity by implementing grouping, refined sorting, simplified status text, and adjusted column layout.
**Affected Files:**
* `gui/main_window.py`: Handles table view setup, data population trigger, and column management.
* `gui/preview_table_model.py`: Contains the table model (`PreviewTableModel`) and the sorting proxy model (`PreviewSortFilterProxyModel`).
**Plan Steps:**
1. **Correct Data Population in `main_window.py`:**
* **Action:** Modify the `on_prediction_results_ready` slot (around line 893).
* **Change:** Update the code to call `self.preview_model.set_data(results)` instead of populating the old `self.preview_table`.
* **Remove:** Delete code manually setting headers, rows, and items on `self.preview_table`.
* **Keep:** Retain the initial setup of `self.preview_table_view` (lines 400-424).
2. **Implement Status Text Simplification in `preview_table_model.py`:**
* **Action:** Modify the `data()` method within the `PreviewTableModel` class (around line 56).
* **Change:** Inside the `if role == Qt.ItemDataRole.DisplayRole:` block for `COL_STATUS`, add logic to transform the raw status string:
* "Unmatched Extra" -> "Extra"
* "[Unmatched Extra (Regex match: PATTERN)]" -> "[Extra=PATTERN]"
* "Ignored (Superseed by 16bit variant for FILENAME)" -> "Superseded by 16bit FILENAME"
* Otherwise, return original status.
* **Note:** Ensure `ROLE_RAW_STATUS` still returns the original status for sorting.
3. **Refine Sorting Logic in `preview_table_model.py`:**
* **Action:** Modify the `lessThan` method within the `PreviewSortFilterProxyModel` class (around line 166).
* **Change:** Adjust the "Level 2: Sort by Status" logic to use a priority mapping where "Mapped" and "Model" have the same priority index, causing the sort to fall through to Level 3 (Path) for items within this group.
```python
# Example priority mapping: "Mapped" and "Model" share an index, so
# ties fall through to the Level 3 (path) comparison.
STATUS_PRIORITY = {
    "Error": 0,
    "Mapped": 1,
    "Model": 1,
    "Ignored": 2,
    "Extra": 3,
    "Unrecognised": 3,
    "Unmatched Extra": 3,
}
DEFAULT_PRIORITY = 99  # unknown or missing statuses sort last

left_prio = STATUS_PRIORITY.get(left_status, DEFAULT_PRIORITY)
right_prio = STATUS_PRIORITY.get(right_status, DEFAULT_PRIORITY)
if left_prio != right_prio:
    return left_prio < right_prio
# equal priority: continue to Level 3 (original path/filename)
```
* **Remove/Update:** Replace the old `STATUS_ORDER` list with this priority dictionary logic.
4. **Adjust Column Order in `main_window.py`:**
* **Action:** Modify the `setup_main_panel_ui` method (around lines 404-414).
* **Change:** After setting up `self.preview_table_view`, explicitly move the "Original Path" column to the last visual position using `header.moveSection()`.
* **Verify:** Ensure the "Predicted Output" column remains hidden.
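Step 2's text simplification could be sketched as follows; the raw status strings are quoted from this ticket, while the regexes and function name are assumptions (spelling is normalized to "Superseded" in the output):

```python
import re

# Sketch of the DisplayRole status simplification (step 2); the raw
# status strings come from this ticket, the regexes are assumptions.
_EXTRA_RE = re.compile(r"\[Unmatched Extra \(Regex match: (.+)\)\]")
_SUPERSEDED_RE = re.compile(r"Ignored \(Superseed by 16bit variant for (.+)\)")

def display_status(raw: str) -> str:
    m = _EXTRA_RE.fullmatch(raw)
    if m:
        return f"[Extra={m.group(1)}]"
    m = _SUPERSEDED_RE.fullmatch(raw)
    if m:
        return f"Superseded by 16bit {m.group(1)}"  # spelling normalized
    if raw == "Unmatched Extra":
        return "Extra"
    return raw  # "Mapped", "Model", "Error", ... pass through unchanged
```

Keeping this as a pure function makes it easy to call from `data()` for `DisplayRole` while `ROLE_RAW_STATUS` keeps returning the untransformed string for sorting.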
**Visual Plan (Mermaid):**
```mermaid
graph TD
A[Start FEAT-008 Implementation] --> B("Refactor main_window.py::on_prediction_results_ready");
B --> C{"Use self.preview_model.set_data()?"};
C -- Yes --> D("Remove manual QTableWidget population");
C -- No --> E["ERROR: Incorrect data flow"];
D --> F("Modify preview_table_model.py::PreviewTableModel::data()");
F --> G{"Implement Status Text Simplification?"};
G -- Yes --> H(Add formatting logic for DisplayRole);
G -- No --> I[ERROR: Status text not simplified];
H --> J("Modify preview_table_model.py::PreviewSortFilterProxyModel::lessThan()");
J --> K{"Implement Mapped/Model Grouping & Custom Sort Order?"};
K -- Yes --> L(Use priority mapping for status comparison);
K -- No --> M[ERROR: Sorting incorrect];
L --> N("Modify main_window.py::setup_main_panel_ui()");
N --> O{"Move 'Original Path' Column to End?"};
O -- Yes --> P("Use header.moveSection()");
O -- No --> Q[ERROR: Column order incorrect];
P --> R(Verify All Acceptance Criteria);
R --> S[End FEAT-008 Implementation];
subgraph "main_window.py Modifications"
B; D; N; P;
end
subgraph "preview_table_model.py Modifications"
F; H; J; L;
end
```

# FEAT-009: GUI - Unify Preset Editor Selection and Processing Preset
**Status:** Resolved
**Priority:** Medium
**Assigned:** TBD
**Reporter:** Roo (Architect Mode)
**Date:** 2025-04-22
## Description
This ticket proposes a GUI modification to streamline the preset handling workflow by unifying the preset selection mechanism. Currently, the preset selected for editing in the left panel can be different from the preset selected for processing/previewing in the right panel. This change aims to eliminate this duality, making the preset loaded in the editor the single source of truth for both editing and processing actions.
## Current Behavior
1. **Preset Editor Panel (Left):** Users select a preset from the 'Preset List'. This action loads the selected preset's details into the 'Preset Editor Tabs' for viewing or modification (See `readme.md` lines 257-260).
2. **Processing Panel (Right):** A separate 'Preset Selector' dropdown exists. Users select a preset from this dropdown *specifically* for processing the queued assets and generating the file preview in the 'Preview Table' (See `readme.md` lines 262, 266).
3. This allows for a scenario where the preset being edited is different from the preset used for previewing and processing, potentially causing confusion.
## Proposed Behavior
1. **Unified Selection:** Selecting a preset from the 'Preset List' in the left panel will immediately make it the active preset for *both* editing (loading its details into the 'Preset Editor Tabs') *and* processing/previewing.
2. **Dropdown Removal:** The separate 'Preset Selector' dropdown in the right 'Processing Panel' will be removed.
3. **Dynamic Updates:** The 'Preview Table' in the right panel will dynamically update based on the preset selected in the left panel's 'Preset List'.
4. **Processing Logic:** The 'Start Processing' action will use the currently active preset (selected from the left panel's list).
## Rationale
* **Improved User Experience:** Simplifies the UI by removing a redundant control and creates a more intuitive workflow.
* **Reduced Complexity:** Eliminates the need to manage two separate preset states (editing vs. processing).
* **Consistency:** Ensures that the preview and processing actions always reflect the preset currently being viewed/edited.
## Implementation Notes/Tasks
* **UI Modification (`gui/main_window.py`):**
* Remove the `QComboBox` (or similar widget) used for the 'Preset Selector' from the right 'Processing Panel' layout.
* **Signal/Slot Connection (`gui/main_window.py`):**
* Ensure the signal emitted when a preset is selected in the left panel's 'Preset List' (`QListWidget` or similar) is connected to slots responsible for:
* Loading the preset into the editor tabs.
* Updating the application's state to reflect the new active preset for processing.
* Triggering an update of the 'Preview Table'.
* **State Management:**
* Modify how the currently active preset for processing is stored and accessed. It should now directly reference the preset selected in the left panel list.
* **Handler Updates (`gui/prediction_handler.py`, `gui/processing_handler.py`):**
* Ensure these handlers correctly retrieve and use the single, unified active preset state when generating previews or initiating processing.
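The unified state can be sketched without Qt; in the real GUI the preset list's selection signal would drive `set_active_preset`, and every name below is hypothetical:

```python
# Framework-agnostic sketch of the unified preset state (all names are
# hypothetical). In the GUI, the preset list's selection signal would
# call set_active_preset, and editor, preview, and processing all read
# the same single source of truth.
class PresetState:
    def __init__(self):
        self.active_preset = None  # single source of truth
        self._listeners = []       # e.g. editor load, preview update

    def on_change(self, callback):
        self._listeners.append(callback)

    def set_active_preset(self, preset_name):
        self.active_preset = preset_name
        for cb in self._listeners:  # editor and preview stay in sync
            cb(preset_name)

state = PresetState()
events = []
state.on_change(lambda p: events.append(f"editor loads {p}"))
state.on_change(lambda p: events.append(f"preview updates with {p}"))
state.set_active_preset("pbr_metal_rough")
```

The point of the sketch is that removing the second dropdown leaves exactly one writer of `active_preset`, so the preview and processing handlers can never diverge from the editor.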
## Acceptance Criteria
1. The 'Preset Selector' dropdown is no longer visible in the right 'Processing Panel'.
2. Selecting a preset in the 'Preset List' (left panel) successfully loads its details into the 'Preset Editor Tabs'.
3. Selecting a preset in the 'Preset List' (left panel) updates the 'Preview Table' (right panel) to reflect the rules of the newly selected preset.
4. Initiating processing via the 'Start Processing' button uses the preset currently selected in the 'Preset List' (left panel).
5. The application remains stable and performs processing correctly with the unified preset selection.
## Workflow Diagram (Mermaid)
```mermaid
graph TD
subgraph "Current GUI"
A["Preset List (Left Panel)"] -- Selects --> B(Preset Editor Tabs);
C["Preset Selector Dropdown (Right Panel)"] -- Selects --> D(Active Processing Preset);
D -- Affects --> E(Preview Table);
D -- Used by --> F(Processing Logic);
end
subgraph "Proposed GUI"
G["Preset List (Left Panel)"] -- Selects --> H(Preset Editor Tabs);
G -- Also Sets --> I(Active Preset for Editing & Processing);
I -- Affects --> J(Preview Table);
I -- Used by --> K(Processing Logic);
L(Preset Selector Dropdown Removed);
end
A --> G;
B --> H;
E --> J;
F --> K;
```

# FEAT-GUI-NoDefaultPreset: Prevent Default Preset Selection in GUI
## Objective
Modify the Graphical User Interface (GUI) to prevent any preset from being selected by default on application startup. Instead, the GUI should prompt the user to explicitly select a preset from the list before displaying the detailed file preview. This aims to avoid accidental processing with an unintended preset.
## Problem Description
Currently, when the GUI application starts, a preset from the available list is automatically selected. This can lead to user confusion if they are not aware of this behavior and proceed to add assets and process them using a preset they did not intend to use. The preview table also populates automatically based on this default selection, which might not be desired until a conscious preset choice is made.
## Failed Attempts
1. **Attempt 1: Remove Default Selection Logic and Add Placeholder Text:**
* **Approach:** Removed code that explicitly set a default selected item in the preset list during initialization. Added a `QLabel` with placeholder text ("Please select a preset...") to the preview area and attempted to use `setPlaceholderText` on the `QTableView` (this was incorrect as `QTableView` does not have this method). Managed visibility of the placeholder label and table view.
* **Result:** The `setPlaceholderText` call failed with an `AttributeError`. Even after removing the erroneous line and adding a dedicated `QLabel`, a preset was still being selected automatically in the list on startup, and the placeholder was not consistently shown. This suggested that simply populating the `QListWidget` might implicitly trigger a selection.
2. **Attempt 2: Explicitly Clear Selection and Refine Visibility Logic:**
* **Approach:** Added explicit calls (`setCurrentItem(None)`, `clearSelection()`) after populating the preset list to ensure no item was selected. Refined the visibility logic for the placeholder label and table view in `_clear_editor` and `_load_selected_preset_for_editing`. Added logging to track selection and visibility changes.
* **Result:** Despite explicitly clearing the selection, testing indicated that a preset was still being selected on startup, and the placeholder was not consistently displayed. This reinforced the suspicion that the `QListWidget`'s behavior upon population was automatically triggering a selection and the associated signal.
## Proposed Plan: Implement "-- Select a Preset --" Placeholder Item
This approach makes the "no selection" state an explicit, selectable item in the preset list, giving us more direct control over the initial state and subsequent behavior.
1. **Modify `populate_presets` Method:**
* Add a `QListWidgetItem` with the text "-- Select a Preset --" at the very beginning of the list (index 0) after clearing the list but before adding actual preset items.
* Store a special, non-`Path` value (e.g., `None` or a unique string like `"__PLACEHOLDER__"`) in this placeholder item's `UserRole` data to distinguish it from real presets.
* After adding all real preset items, explicitly set the current item to this placeholder item using `self.editor_preset_list.setCurrentRow(0)`.
2. **Modify `_load_selected_preset_for_editing` Method (Slot for `currentItemChanged`):**
* At the beginning of the method, check if the `current_item` is the placeholder item by examining its `UserRole` data.
* If the placeholder item is selected:
* Call `self._clear_editor()` to reset all editor fields.
* Call `self.preview_model.clear_data()` to ensure the preview table model is empty.
* Explicitly set `self.preview_placeholder_label.setVisible(True)` and `self.preview_table_view.setVisible(False)`.
* Return from the method without proceeding to load a preset or call `update_preview`.
* If a real preset item is selected, proceed with the existing logic: get the `Path` from the item's data, call `_load_preset_for_editing(preset_path)`, call `self.update_preview()`, set `self.preview_placeholder_label.setVisible(False)` and `self.preview_table_view.setVisible(True)`.
3. **Modify `start_processing` Method:**
* Before initiating the processing, check if the currently selected item in `editor_preset_list` is the placeholder item.
* If the placeholder item is selected, display a warning message to the user (e.g., "Please select a valid preset before processing.") using the status bar and return from the method.
* If a real preset is selected, proceed with the existing processing logic.
4. **Modify `update_preview` Method:**
* Add a check at the beginning of the method. Get the `current_item` from `editor_preset_list`. If it is the placeholder item, clear the preview model (`self.preview_model.clear_data()`) and return immediately. This prevents the prediction handler from running when no valid preset is selected.
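The guard order in steps 2-4 can be sketched framework-agnostically; the sentinel value matches the one proposed above, and all other names are hypothetical:

```python
# Sketch of the placeholder guard (names hypothetical). The placeholder
# list item stores this sentinel in its UserRole data instead of a Path,
# so every slot can cheaply tell "no real preset selected" apart.
PLACEHOLDER = "__PLACEHOLDER__"

def is_placeholder(item_data):
    return item_data is None or item_data == PLACEHOLDER

def on_selection_changed(item_data, clear_editor, load_preset):
    """Mirrors _load_selected_preset_for_editing: guard first, then load."""
    if is_placeholder(item_data):
        clear_editor()        # reset fields, clear preview model
        return False          # nothing loaded; placeholder label stays visible
    load_preset(item_data)    # real preset: load editor, update preview
    return True
```

`start_processing` and `update_preview` would call the same `is_placeholder` check on the current item's data and bail out early, which is what prevents accidental processing with an unintended preset.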
## Next Steps
Implement the proposed plan by modifying the specified methods in `gui/main_window.py`. Test the GUI on startup and when selecting different items in the preset list to ensure the desired behavior is achieved.

---
ID: ISSUE-004
Type: Issue
Status: Resolved
Priority: High
Labels: [bug, core, image-processing]
Created: 2025-04-22
Updated: 2025-04-22
Related:
---
# [ISSUE-004]: Color channel swapping in image processing (Normal Maps, Stats)
## Description
There appears to be a general issue with how color channels (specifically Red and Blue) are handled in several parts of the image processing pipeline. This has been observed in normal map channel packing and potentially in the calculation of image statistics per channel, where the Red and Blue channels seem to be swapped relative to their intended order.
## Current Behavior
When processing images where individual color channels are accessed or manipulated (e.g., during normal map channel packing or calculating per-channel image statistics), the Red and Blue channels appear to be swapped. For example, in normal map packing, the channel intended for Blue might contain Red data, and vice versa, while Green remains correct. This suggests a consistent R/B channel inversion issue in the underlying image processing logic.
## Desired Behavior / Goals
The tool should correctly handle color channels according to standard RGB ordering in all image processing operations, including channel packing and image statistics calculation. The Red, Green, and Blue channels should consistently correspond to their intended data.
## Implementation Notes (Optional)
(This likely points to an issue in the image loading, channel splitting, merging, or processing functions, possibly related to the library used for image manipulation (e.g., OpenCV). Need to investigate how channels are accessed and ordered in relevant code sections like `_process_maps`, `_merge_maps`, and image statistics calculation.)
## Acceptance Criteria (Optional)
* [ ] Process an asset with a normal map and a map requiring channel packing (e.g., NRMRGH).
* [ ] Verify that the channels in the output normal map and packed map are in the correct R, G, B order.
* [ ] Verify that the calculated image statistics (Min/Max/Mean) for Red and Blue channels accurately reflect the data in those specific channels, not the swapped data.
## Resolution
The root cause was identified as a mismatch between OpenCV's default BGR channel order upon image loading (`cv2.imread`) and subsequent code assuming an RGB channel order, particularly in channel indexing during map merging (`_merge_maps`) and potentially in statistics calculation (`_calculate_image_stats`).
The fix involved the following changes in `asset_processor.py`:
1. **BGR to RGB Conversion:** Immediately after loading a 3-channel image using `cv2.imread` in the `_process_maps` function, the image is converted to RGB color space using `img_processed = cv2.cvtColor(img_loaded, cv2.COLOR_BGR2RGB)`. Grayscale or 4-channel images are handled appropriately without conversion.
2. **Updated Channel Indexing:** The channel indexing logic within the `_merge_maps` function was updated to reflect the RGB order (Red = index 0, Green = index 1, Blue = index 2) when extracting channels from the now-RGB source images for merging.
3. **Statistics Assumption Update:** Comments in `_calculate_image_stats` were updated to reflect that the input data is now expected in RGB order.
This ensures consistent RGB channel ordering throughout the processing pipeline after the initial load.
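The loading-side fix is easy to see with plain NumPy, since the BGR-to-RGB conversion is just a reversal of the channel axis:

```python
import numpy as np

# Minimal sketch: cv2.imread returns BGR; reversing the channel axis is
# equivalent to cv2.cvtColor(img, cv2.COLOR_BGR2RGB) for 3-channel images.
bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 2] = 255          # in BGR layout, index 2 holds the Red data
rgb = bgr[..., ::-1]       # BGR -> RGB

# After conversion, Red really is at index 0, so per-channel statistics
# (the second symptom in this ticket) report the intended channel.
red_mean = float(rgb[..., 0].mean())
blue_mean = float(rgb[..., 2].mean())
```

This is why the pure-red test image shows swapped Min/Max/Mean values before the fix: the code indexed the array in RGB order while the data was still BGR.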

---
ID: ISSUE-010
Type: Issue
Status: Resolved
Priority: High
Labels: [bug, core, image-processing, regression, resolved]
Created: 2025-04-22
Updated: 2025-04-22
Related: #ISSUE-004, asset_processor.py
---
# [ISSUE-010]: Color Channel Swapping Regression for RGB/Merged Maps after ISSUE-004 Fix
## Description
The resolution implemented for `ISSUE-004` successfully corrected the BGR/RGB channel handling for image statistics calculation and potentially normal map packing. However, this fix appears to have introduced a regression where standard RGB color maps and maps generated through merging operations are now being saved with swapped Blue and Red channels (BGR order) instead of the expected RGB order.
## Current Behavior
- Standard RGB texture maps (e.g., Diffuse, Albedo) loaded and processed are saved with BGR channel order.
- Texture maps created by merging channels (e.g., ORM, NRMRGH) are saved with BGR channel order for their color components.
- Image statistics (`metadata.json`) are calculated correctly based on RGB data.
- Grayscale maps are handled correctly.
- RGBA maps are assumed to be handled correctly (needs verification).
## Desired Behavior / Goals
- All processed color texture maps (both original RGB and merged maps) should be saved with the standard RGB channel order.
- Image statistics should continue to be calculated correctly based on RGB data.
- Grayscale and RGBA maps should retain their correct handling.
- The fix should not reintroduce the problems solved by `ISSUE-004`.
## Implementation Notes (Optional)
The issue likely stems from the universal application of `cv2.cvtColor(img_loaded, cv2.COLOR_BGR2RGB)` in `_process_maps` introduced in the `ISSUE-004` fix. While this ensures consistent RGB data for internal operations like statistics and merging (which now expects RGB), the final saving step (`cv2.imwrite`) might implicitly expect BGR data for color images, or the conversion needs to be selectively undone before saving certain map types.
Investigation needed in `asset_processor.py`:
- Review `_process_maps`: Where is the BGR->RGB conversion happening? Is it applied to all 3-channel images?
- Review `_process_maps` saving logic: How are different map types saved? Does `cv2.imwrite` expect RGB or BGR?
- Review `_merge_maps`: How are channels combined? Does the saving logic here also need adjustment?
- Determine if the BGR->RGB conversion should be conditional or if a final RGB->BGR conversion is needed before saving specific map types.
## Acceptance Criteria (Optional)
* [x] Process an asset containing standard RGB maps (e.g., Color/Albedo). Verify the output map has correct RGB channel order.
* [x] Process an asset requiring map merging (e.g., ORM from Roughness, Metallic, AO). Verify the output merged map has correct RGB(A) channel order.
* [x] Verify image statistics in `metadata.json` remain correct (reflecting RGB values).
* [x] Verify grayscale maps are processed and saved correctly.
* [x] Verify normal maps are processed and saved correctly (as per ISSUE-004 fix).
* [ ] (Optional) Verify RGBA maps are processed and saved correctly.
---
## Resolution
The issue was resolved by implementing conditional RGB to BGR conversion immediately before saving images using `cv2.imwrite` within the `_process_maps` and `_merge_maps` methods in `asset_processor.py`.
The logic checks if the image is 3-channel and if the target output format is *not* EXR. If both conditions are true, the image data is converted from the internal RGB representation back to BGR, which is the expected channel order for `cv2.imwrite` when saving formats like PNG, JPG, and TIF.
This approach ensures that color maps are saved with the correct channel order in standard formats while leaving EXR files (which handle RGB correctly) and grayscale/single-channel images unaffected. It also preserves the internal RGB representation used for accurate image statistics calculation and channel merging, thus not reintroducing the issues fixed by `ISSUE-004`.
## Implementation Plan (Generated 2025-04-22)
**Goal:** Correct the BGR/RGB channel order regression for saved color maps (introduced by the `ISSUE-004` fix) while maintaining the correct handling for statistics, grayscale maps, and EXR files.
**Core Idea:** Convert the image data back from RGB to BGR *just before* saving with `cv2.imwrite`, but only for 3-channel images and formats where this conversion is necessary (e.g., PNG, JPG, TIF, but *not* EXR).
**Plan Steps:**
1. **Modify `_process_maps` Saving Logic (`asset_processor.py`):**
* Before the primary `cv2.imwrite` call (around line 1182) and the fallback call (around line 1206), add logic to check if the image is 3-channel and the output format is *not* 'exr'. If true, convert the image from RGB to BGR using `cv2.cvtColor` and add a debug log message.
2. **Modify `_merge_maps` Saving Logic (`asset_processor.py`):**
* Apply the same conditional RGB to BGR conversion logic before the primary `cv2.imwrite` call (around line 1505) and the fallback call (around line 1531), including a debug log message.
3. **Verification Strategy:**
* Test with various 3-channel color maps (Albedo, Emission, etc.) and merged maps (NRMRGH, etc.) saved as PNG/JPG/TIF.
* Verify correct RGB order in outputs.
* Verify grayscale, EXR, and statistics calculation remain correct.
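A minimal sketch of the conditional save-time conversion (the function name is hypothetical; only the channel reversal and the EXR exemption are taken from this plan):

```python
import numpy as np

# Sketch of the conditional save-time conversion (function name is
# hypothetical): 3-channel, non-EXR outputs go back to BGR because
# cv2.imwrite expects BGR for formats like PNG/JPG/TIF; EXR and
# grayscale/single-channel images pass through untouched.
def prepare_for_imwrite(img: np.ndarray, output_format: str) -> np.ndarray:
    if img.ndim == 3 and img.shape[2] == 3 and output_format.lower() != "exr":
        return img[..., ::-1]  # RGB -> BGR
    return img

rgb = np.zeros((1, 1, 3), dtype=np.uint8)
rgb[..., 0] = 255  # pure red in the internal RGB representation
```

Guarding on both channel count and format keeps the internal RGB representation intact for statistics and merging, which is what avoids reintroducing `ISSUE-004`.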
**Mermaid Diagram:**
```mermaid
graph TD
subgraph "_process_maps / _merge_maps Saving Step"
A["Prepare Final Image Data (in RGB)"] --> B{Is it 3-Channel?};
B -- Yes --> C{"Output Format != 'exr'?"};
B -- No --> E[Save Image using cv2.imwrite];
C -- Yes --> D(Convert Image RGB -> BGR);
C -- No --> E;
D --> E;
end
```

---
ID: ISSUE-011
Type: Issue
Status: Backlog
Priority: Medium
Labels: [blender, bug]
Created: 2025-04-22
Updated: 2025-04-22
Related:
---
# [ISSUE-011]: Blender nodegroup script creates empty assets for skipped items
## Description
The Blender nodegroup creation script (`blenderscripts/create_nodegroups.py`) incorrectly generates empty asset entries in the target .blend file for assets that were skipped during the main processing pipeline. This occurs even though the main asset processor correctly identifies and skips these assets based on the overwrite flag.
## Current Behavior
When running the asset processor with the `--overwrite` flag set to false, if an asset's output directory and metadata.json already exist, the main processing pipeline correctly skips processing that asset. However, the subsequent Blender nodegroup creation script still attempts to create a nodegroup for this skipped asset, resulting in an empty or incomplete asset entry in the target .blend file.
## Desired Behavior / Goals
The Blender nodegroup creation script should only attempt to create nodegroups for assets that were *actually processed* by the main asset processor, not those that were skipped. It should check if the asset was processed successfully before attempting nodegroup creation.
## Implementation Notes (Optional)
The `main.py` script, which orchestrates the processing and calls the Blender scripts, needs to pass information about which assets were successfully processed to the Blender nodegroup script. The Blender script should then filter its operations based on this information.
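One way to pass that information along, sketched here with hypothetical names, is a small manifest that `main.py` produces and the Blender script filters on:

```python
import json

# Sketch (all names hypothetical): main.py serializes the set of assets
# it actually processed, and the Blender nodegroup script filters on it
# instead of creating a nodegroup for every output directory it finds.
def make_manifest(processed_assets):
    return json.dumps({"processed": sorted(processed_assets)})

def assets_to_build(manifest_text, all_assets):
    processed = set(json.loads(manifest_text)["processed"])
    return [a for a in all_assets if a in processed]  # skipped assets drop out

# "skipped_asset" was skipped by the overwrite check, so it is not listed.
manifest = make_manifest({"rock_01"})
```

Whether the manifest travels as a temp file, a command-line argument, or stdin to the Blender process is an implementation detail; the key point is that the nodegroup script never infers "processed" from the presence of output directories.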
## Acceptance Criteria (Optional)
* [ ] Running the asset processor with `--overwrite` false on inputs that already have processed outputs should result in the main processing skipping the asset.
* [ ] The Blender nodegroup script, when run after a skipped processing run, should *not* create or update a nodegroup for the skipped asset.
* [ ] Only assets that were fully processed should have corresponding nodegroups created/updated by the Blender script.

---
ID: ISSUE-012
Type: Issue
Status: Backlog
Priority: High
Labels: [core, bug, image-processing]
Created: 2025-04-22
Updated: 2025-04-22
Related:
---
# [ISSUE-012]: MASK map processing fails to extract alpha channel from RGBA images
## Description
When processing texture sets that include MASK maps provided as RGBA images, the asset processor is expected to extract the alpha channel to represent the mask. However, the current implementation appears to be converting the RGBA image to grayscale instead of isolating the alpha channel. This results in incorrect MASK maps in the processed output.
## Current Behavior
The asset processor, when encountering an RGBA image classified as a MASK map, converts the image to grayscale. The alpha channel information is lost or ignored during this process.
## Desired Behavior / Goals
For RGBA images classified as MASK maps, the asset processor should extract the alpha channel and use it as the content for the output MASK map. The resulting MASK map should be a single-channel (grayscale) image representing the transparency or mask information from the original alpha channel.
## Implementation Notes (Optional)
The image processing logic within `asset_processor.py` (likely in the `_process_maps` method or a related helper function) needs to be updated to specifically handle RGBA input images for MASK map types. It should identify the alpha channel and save it as a single-channel output image. Libraries like OpenCV or Pillow should provide functionality for accessing individual channels.
## Acceptance Criteria (Optional)
* [ ] Process an asset with an RGBA image designated as a MASK map according to the preset.
* [ ] Verify that the output MASK map is a single-channel grayscale image.
* [ ] Confirm that the grayscale values in the output MASK map correspond to the alpha values of the original RGBA input image.

Tickets/ISSUE-012-plan.md
# Plan for ISSUE-012: MASK map processing fails to extract alpha channel from RGBA images
## Issue Description
When processing texture sets that include MASK maps provided as RGBA images, the asset processor is expected to extract the alpha channel to represent the mask. However, the current implementation appears to be converting the RGBA image to grayscale instead of isolating the alpha channel, resulting in incorrect MASK maps. This issue affects all RGBA images classified as MASK, including plain 'MASK' and MASK variants (e.g., MASK-1).
## Analysis
Based on the code in `asset_processor.py`, specifically the `_load_and_transform_source` method, the issue likely stems from the order of operations. The current logic converts 4-channel BGRA images to 3-channel RGB *before* checking specifically for MASK types and attempting to extract the alpha channel. This means the alpha channel is lost before the code gets a chance to extract it. The condition `if map_type.upper() == 'MASK':` is too strict and does not cover MASK variants.
## Detailed Plan
1. **Analyze `_load_and_transform_source`:** Re-examine the `_load_and_transform_source` method in `asset_processor.py` to confirm the exact sequence of image loading, color space conversions, and MASK-specific handling. (Completed during initial analysis).
2. **Modify MASK Handling Condition:** Change the current condition `if map_type.upper() == 'MASK':` to use the `_get_base_map_type` helper function, so it correctly identifies all MASK variants (e.g., 'MASK', 'MASK-1', 'MASK-2') and applies the special handling logic to them. The condition will become `if _get_base_map_type(map_type) == 'MASK':`.
3. **Reorder and Refine Logic:**
* The image will still be loaded with `cv2.IMREAD_UNCHANGED` for MASK types to ensure the alpha channel is initially present.
* Immediately after loading, and *before* any general color space conversions (like BGR->RGB), check if the base map type is 'MASK' and if the loaded image is 4-channel (RGBA/BGRA).
* If both conditions are true, extract the alpha channel (`img_loaded[:, :, 3]`) and use this single-channel data for subsequent processing steps (`img_prepared`).
* If the base map type is 'MASK' but the loaded image is 3-channel (RGB/BGR), convert it to grayscale (`cv2.cvtColor(img_loaded, cv2.COLOR_BGR2GRAY)`) and use this for `img_prepared`.
* If the base map type is 'MASK' and the loaded image is already 1-channel (grayscale), keep it as is.
* If the base map type is *not* 'MASK', the existing BGR->RGB conversion logic for 3/4 channel images will be applied as before.
4. **Review Subsequent Steps:** Verify that the rest of the `_load_and_transform_source` method (Gloss->Rough inversion, resizing, dtype conversion) correctly handles the single-channel image data that will now be produced for RGBA MASK inputs.
5. **Testing:** Use the acceptance criteria outlined in `ISSUE-012` to ensure the fix works correctly.
## Proposed Code Logic Flow
```mermaid
graph TD
A["Load Image (IMREAD_UNCHANGED for MASK)"] --> B{Loading Successful?};
B -- No --> C[Handle Load Error];
B -- Yes --> D[Get Original Dtype & Shape];
D --> E{Base Map Type is MASK?};
E -- Yes --> F{Loaded Image is 4-Channel?};
F -- Yes --> G[Extract Alpha Channel];
F -- No --> H{Loaded Image is 3-Channel?};
H -- Yes --> I[Convert BGR/RGB to Grayscale];
H -- No --> J["Keep as is (Assume Grayscale)"];
G --> K[Set img_prepared to Mask Data];
I --> K;
J --> K;
E -- No --> L{Loaded Image is 3 or 4-Channel?};
L -- Yes --> M[Convert BGR/BGRA to RGB];
L -- No --> N[Keep as is];
M --> O[Set img_prepared to RGB Data];
N --> O;
K --> P["Proceed with other transformations (Gloss, Resize, etc.)"];
O --> P;
P --> Q[Cache Result & Return];
C --> Q;
```
## Acceptance Criteria (from ISSUE-012)
* [ ] Process an asset with an RGBA image designated as a MASK map according to the preset.
* [ ] Verify that the output MASK map is a single-channel grayscale image.
* [ ] Confirm that the grayscale values in the output MASK map correspond to the alpha values of the original RGBA input image.

---
ID: ISSUE-013
Type: Issue
Status: Backlog
Priority: Medium
Labels: [core, bug, refactor, image processing]
Created: 2025-04-22
Updated: 2025-04-22
Related: #REFACTOR-001-merge-from-source
---
# [ISSUE-013]: Image statistics calculation missing for roughness maps after merged map refactor
## Description
Following the recent refactor of the merged map processing logic, the calculation of image statistics (Min/Max/Mean) for roughness maps appears to have been omitted or broken. This data is valuable for quality control and metadata, and needs to be reimplemented for roughness maps, particularly when they are part of a merged texture like NRMRGH.
## Current Behavior
Image statistics (Min/Max/Mean) are not being calculated or stored for roughness maps after the merged map refactor.
## Desired Behavior / Goals
Reimplement the image statistics calculation for roughness maps. Ensure that statistics are calculated correctly for standalone roughness maps and for roughness data when it is part of a merged map (e.g., the green channel in an NRMRGH map). The calculated statistics should be included in the `metadata.json` file for the asset.
## Implementation Notes (Optional)
Review the changes made during the merged map refactor (`REFACTOR-001-merge-from-source`). Identify where the image statistics calculation for roughness maps was previously handled and integrate it into the new merged map processing flow. Special consideration may be needed to extract the correct channel (green for NRMRGH) for calculation. The `_process_maps` and `_merge_maps` methods in `asset_processor.py` are likely relevant areas.
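The channel extraction for a merged map can be sketched like this; the helper names are illustrative stand-ins for what `_calculate_image_stats` is assumed to return:

```python
import numpy as np

def calculate_image_stats(data):
    """Min/Max/Mean of a single-channel array (sketch of the assumed
    return shape of the existing _calculate_image_stats helper)."""
    return {"min": float(data.min()),
            "max": float(data.max()),
            "mean": float(data.mean())}

CHANNEL_INDEX = {"R": 0, "G": 1, "B": 2}

def channel_stats(image, channel):
    """Stats for one channel of a merged map, e.g. channel='G' for the
    roughness data inside an NRMRGH map."""
    return calculate_image_stats(image[..., CHANNEL_INDEX[channel]])
```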
## Acceptance Criteria (Optional)
* [ ] Process an asset that includes a roughness map (standalone or as part of a merged map).
* [ ] Verify that Min/Max/Mean statistics for the roughness data are calculated.
* [ ] Confirm that the calculated roughness statistics are present and correct in the output `metadata.json` file.

Tickets/ISSUE-013-plan.md
# Plan to Resolve ISSUE-013: Merged Map Roughness Statistics
**Objective:** Reimplement the calculation and inclusion of image statistics (Min/Max/Mean) for roughness data, covering both standalone roughness maps and roughness data used as a source channel in merged maps (specifically the green channel for NRMRGH), and ensure these statistics are present in the output `metadata.json` file.
**Proposed Changes:**
1. **Modify `_merge_maps_from_source`:**
* Within the loop that processes each resolution for a merge rule, after successfully loading and transforming the source images into `loaded_inputs_data`, iterate through the `inputs_mapping` for the current rule.
* Identify if any of the source map types in the `inputs_mapping` is 'ROUGH'.
* If 'ROUGH' is used as a source, retrieve the corresponding loaded image data from `loaded_inputs_data`.
* Calculate image statistics (Min/Max/Mean) for this ROUGH source image data using the existing `_calculate_image_stats` helper function.
* Store these calculated statistics temporarily, associated with the output merged map type (e.g., 'NRMRGH') and the target channel it's mapped to (e.g., 'G').
2. **Update Asset Metadata Structure:**
* Introduce a new key in the asset's metadata dictionary (e.g., `merged_map_channel_stats`) to store statistics for specific channels within merged maps. This structure will hold the stats per merged map type, per channel, and per resolution (specifically the stats resolution defined in the config).
3. **Modify `_generate_metadata_file`:**
* When generating the final `metadata.json` file, retrieve the accumulated `merged_map_channel_stats` from the asset's metadata dictionary.
* Include these statistics under a dedicated section in the output JSON, alongside the existing `image_stats_1k` for individual maps.
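The proposed `merged_map_channel_stats` entry might take a shape like the following; key names, the sample values, and the "1k" stats resolution are illustrative assumptions, not a finalized schema:

```python
# Illustrative shape only; nothing here is a finalized schema.
merged_map_channel_stats = {
    "NRMRGH": {                # output merged map type
        "G": {                 # target channel the ROUGH source feeds
            "1k": {"min": 0.04, "max": 0.97, "mean": 0.41},
        },
    },
}
```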
**Data Flow for Statistics Calculation:**
```mermaid
graph TD
A[Source Image Files] --> B{_load_and_transform_source};
B --> C["Resized/Transformed Image Data (float32)"];
C --> D{Is this source for a ROUGH map?};
D -- Yes --> E{_calculate_image_stats};
E --> F["Calculated Stats (Min/Max/Mean)"];
F --> G["Store in merged_map_channel_stats (in asset metadata)"];
C --> H{_merge_maps_from_source};
H --> I[Merged Image Data];
I --> J[_save_image];
J --> K[Saved Merged Map File];
G --> L[_generate_metadata_file];
L --> M[metadata.json];
subgraph Individual Map Processing
A --> B;
B --> C;
C --> N{Is this an individual map?};
N -- Yes --> E;
E --> O["Store in image_stats_1k (in asset metadata)"];
C --> P[_save_image];
P --> Q[Saved Individual Map File];
O --> L;
Q --> S[Organize Output Files];
end
subgraph Merged Map Processing
A --> B;
B --> C;
C --> D;
D -- Yes --> E;
E --> G;
C --> H;
H --> I;
I --> J;
J --> K;
K --> S;
end
L --> M;
M --> S;
```
This plan ensures that statistics are calculated for the roughness data at the appropriate stage (after loading and transformation, before merging) and correctly included in the metadata file, addressing the issue described in ISSUE-013.

# Issue: GUI Preview Not Updating on File Drop Without Preset Selection
**ID:** ISSUE-GUI-PreviewNotUpdatingOnDrop
**Date:** 2025-04-28
**Status:** Open
**Priority:** High
**Description:**
The file preview list (`QTableView` in the main panel) does not populate when files or folders are dropped onto the application if a valid preset has not been explicitly selected *after* the application starts and *before* the drop event.
**Root Cause:**
- The `add_input_paths` method in `gui/main_window.py` correctly calls `update_preview` after files are added via drag-and-drop.
- However, the `update_preview` method checks `self.editor_preset_list.currentItem()` to get the selected preset.
- If no preset is selected, or if the placeholder "--- Select a Preset ---" is selected, `update_preview` correctly identifies this and returns *before* starting the `PredictionHandler` thread.
- This prevents the file prediction from running and the preview table from being populated, even if assets have been added.
**Expected Behavior:**
Dropping files should trigger a preview update based on the last valid preset selected by the user, or potentially prompt the user to select a preset if none has ever been selected. The preview should not remain empty simply because the user hasn't re-clicked the preset list immediately before dropping files.
**Proposed Solution:**
1. Modify `MainWindow` to store the last validly selected preset path.
2. Update the `update_preview` method:
- Instead of relying solely on `currentItem()`, use the stored last valid preset path if `currentItem()` is invalid or the placeholder.
- If no valid preset has ever been selected, display a clear message in the status bar or a dialog box prompting the user to select a preset first.
- Ensure the prediction handler is triggered with the correct preset name based on this updated logic.
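The fallback decision can be isolated in a small pure helper; this is a sketch under the assumption that the placeholder text matches the list item exactly, and `resolve_preset` is a hypothetical name:

```python
PLACEHOLDER = "--- Select a Preset ---"

def resolve_preset(current_item_text, last_valid_preset):
    """Return the preset name to use for the preview, preferring the list's
    current selection and falling back to the last valid selection.
    None means the GUI should prompt the user instead of predicting."""
    if current_item_text and current_item_text != PLACEHOLDER:
        return current_item_text
    return last_valid_preset
```

`update_preview` would call this with `currentItem()`'s text (or `None`) and the stored last valid preset, triggering the `PredictionHandler` only on a non-`None` result.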
**Affected Files:**
- `gui/main_window.py`
**Debugging Steps Taken:**
- Confirmed `AssetProcessor.__init__` error was fixed.
- Added logging to `PredictionHandler` and `MainWindow`.
- Logs confirmed `update_preview` is called on drop, but exits early due to no valid preset being selected via `currentItem()`.
## Related Changes (Session 2025-04-28)
During a recent session, work was done to implement a feature allowing GUI editing of the file preview list, specifically the file status and predicted output name.
This involved defining a data interface (`ProjectNotes/Data_Structures/Preview_Edit_Interface.md`) and modifying several GUI and backend components.
Key files modified for this feature include:
- `gui/prediction_handler.py`
- `gui/preview_table_model.py`
- `gui/main_window.py`
- `gui/processing_handler.py`
- Core `asset_processor` logic
The changes involved passing the editable `file_list` and `asset_properties` data structure through the prediction handler, to the preview table model, and finally to the processing handler. The preview table model was made editable to allow user interaction. The backend logic was updated to utilize this editable data.
Debugging steps taken during this implementation included fixing indentation errors and resolving an `AssetProcessor` instantiation issue that occurred during the prediction phase.

---
ID: REFACTOR-001
Type: Refactor
Status: Resolved
Priority: Medium
Labels: [refactor, core, image-processing, quality]
Created: 2025-04-22
Updated: 2025-04-22
Related: #ISSUE-010 #asset_processor.py #config.py
---
# [REFACTOR-001]: Refactor Map Merging to Use Source Files Directly
## Description
Currently, the `_merge_maps` function in `asset_processor.py` loads map data from temporary files that have already been processed by `_process_maps` (resized, format converted, etc.). This intermediate save/load step can introduce quality degradation, especially if the intermediate files are saved using lossy compression (e.g., JPG) or if bit depth conversions occur before merging. This is particularly noticeable in merged maps like NRMRGH where subtle details from the source Normal map might be lost or altered due to recompression.
## Goals
1. **Improve Quality:** Modify the map merging process to load channel data directly from the *selected original source files* (after classification and 16-bit prioritization) instead of intermediate processed files, thus avoiding potential quality loss from recompression.
2. **Maintain Modularity:** Refactor the processing logic to avoid significant code duplication between individual map processing and the merging process.
3. **Preserve Functionality:** Ensure all existing functionality (resizing, gloss inversion, format/bit depth rules, merging logic) is retained and applied correctly in the new structure.
## Proposed Solution: Restructure with Helper Functions
Introduce two helper functions within the `AssetProcessor` class:
1. **`_load_and_transform_source(source_path_rel, map_type, target_resolution_key)`:**
* Responsible for loading the specified source file.
* Performs initial preparation: BGR->RGB conversion, Gloss->Roughness inversion (if applicable), MASK extraction (if applicable).
* Resizes the prepared data to the target resolution.
* Returns the resized NumPy array and original source dtype.
2. **`_save_image(image_data, map_type, resolution_key, asset_base_name, source_info, output_bit_depth_rule, temp_dir)`:**
* Encapsulates all logic for saving an image.
* Determines final output format and bit depth based on rules and source info.
* Performs final data type conversions (e.g., to uint8, uint16, float16).
* Performs final color space conversion (RGB->BGR for non-EXR).
* Constructs the output filename.
* Saves the image using `cv2.imwrite`, including fallback logic (e.g., EXR->PNG).
* Returns details of the saved temporary file.
**Modified Workflow:**
```mermaid
graph TD
A[Input Files] --> B(_inventory_and_classify_files);
B --> C{Selected Source Maps Info};
subgraph Core Processing Logic
C --> PIM(_process_individual_map);
C --> MFS(_merge_maps_from_source);
PIM --> LTS([_load_and_transform_source]);
MFS --> LTS;
end
LTS --> ImgData{Loaded, Prepared, Resized Image Data};
subgraph Saving Logic
PIM --> SI([_save_image]);
MFS --> SI;
%% Pass image data to save helper
ImgData --> SI;
end
SI --> SaveResults{Saved Temp File Details};
%% Collect results from saving
SaveResults --> Results(Processing Results);
Results --> Meta(_generate_metadata_file);
Meta --> Org(_organize_output_files);
Org --> Final[Final Output Structure];
```
**Function Changes:**
* **`_process_maps`** was renamed to `_process_individual_map`, responsible only for maps *not* used in merges. It calls `_load_and_transform_source` and `_save_image`.
* **`_merge_maps`** was replaced by `_merge_maps_from_source`. It identifies required source paths, calls `_load_and_transform_source` for each input at each target resolution, merges the results, determines saving parameters, and calls `_save_image`.
* The main `process` loop coordinates calls to the new functions and handles caching.
## Resolution (2025-04-22)
The refactoring described above was implemented in `asset_processor.py`. The `_merge_maps_from_source` function now utilizes the `_load_and_transform_source` and `_save_image` helpers, loading data directly from the classified source files instead of intermediate processed files. User testing confirmed the changes work correctly and improve merged map quality.
## Acceptance Criteria (Optional)
* [X] Merged maps (e.g., NRMRGH) are generated correctly using data loaded directly from selected source files.
* [X] Visual inspection confirms improved quality/reduced artifacts in merged maps compared to the previous method.
* [X] Individual maps (not part of any merge rule) are still processed and saved correctly.
* [X] All existing configuration options (resolutions, bit depth rules, format rules, gloss inversion) function as expected within the new structure.
* [X] Processing time remains within acceptable limits (potential performance impact of repeated source loading/resizing needs monitoring).
* [X] Code remains modular and maintainable, with minimal duplication of core logic.

# Refactoring Plan for REFACTOR-001: Merge Maps From Source
This plan details the steps to implement the refactoring described in `Tickets/REFACTOR-001-merge-from-source.md`. The goal is to improve merged map quality by loading data directly from original source files, avoiding intermediate compression artifacts.
## Final Plan Summary:
1. **Implement `_load_and_transform_source` Helper:**
* Loads source image, performs initial prep (BGR->RGB, gloss inversion), resizes to target resolution.
* Includes an in-memory cache (passed from `process`) using a `(source_path, resolution_key)` key to store and retrieve resized results, avoiding redundant loading and resizing within a single `process` call.
2. **Implement `_save_image` Helper:**
* Encapsulates all saving logic: determining output format/bit depth based on rules, final data type/color space conversions, filename construction, saving with `cv2.imwrite`, and fallback logic (e.g., EXR->PNG).
3. **Refactor `_process_maps` (Potential Rename):**
* Modify to process only maps *not* used as inputs for any merge rule.
* Calls `_load_and_transform_source` (passing cache) and `_save_image`.
4. **Replace `_merge_maps` with `_merge_maps_from_source`:**
* Iterates through merge rules.
* Calls `_load_and_transform_source` (passing cache) for each required *source* input at target resolutions.
* Merges the resulting channel data.
* Calls `_save_image` to save the final merged map.
5. **Update `process` Method:**
* Initializes an empty cache dictionary (`loaded_data_cache = {}`) at the beginning of the method.
* Passes this cache dictionary to all calls to `_load_and_transform_source` within its scope.
* Coordinates calls to the refactored/new processing and merging functions.
* Ensures results are collected correctly for metadata generation.
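The caching in step 1 can be sketched as below; `loader` stands in for the real load/prep/resize work, and the function name is illustrative:

```python
# Minimal sketch of the (source_path, resolution_key) cache.
def load_and_transform_cached(cache, source_path, resolution_key, loader):
    key = (source_path, resolution_key)
    if key not in cache:
        cache[key] = loader(source_path, resolution_key)  # done at most once
    return cache[key]

calls = []
def fake_loader(path, res):
    calls.append((path, res))
    return f"pixels:{path}@{res}"

cache = {}  # process() initializes one of these per run
load_and_transform_cached(cache, "rough.png", "2k", fake_loader)
load_and_transform_cached(cache, "rough.png", "2k", fake_loader)  # cache hit
```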
## New Workflow Visualization:
```mermaid
graph TD
A2[Input Files] --> B2(_inventory_and_classify_files);
B2 --> C2{Classified Maps Info};
subgraph Processing Logic
C2 --> D2(_process_individual_map);
C2 --> E2(_merge_maps_from_source);
D2 --> F2([_load_and_transform_source w/ Cache]);
E2 --> F2;
end
F2 --> G2{"Loaded/Transformed Data (Cached)"};
subgraph Saving Logic
G2 --> H2([_save_image]);
D2 --> H2;
E2 --> H2;
end
H2 -- Saves Temp Files --> I2{Processed/Merged Map Details};
I2 --> J2(_generate_metadata_file);
J2 --> K2(_organize_output_files);
```

## Current Status (as of 2025-04-22 ~19:45 CET)
* **DONE:** Step 1: Implemented `_load_and_transform_source` helper function (including caching logic) and inserted into `asset_processor.py`.
* **DONE:** Step 2: Implemented `_save_image` helper function and inserted into `asset_processor.py`.
* **DONE:** Step 5 (Partial): Updated `process` method to initialize `loaded_data_cache` and updated calls to use new function names (`_process_individual_maps`, `_merge_maps_from_source`) and pass the cache.
* **DONE:** Renamed function definitions: `_process_maps` -> `_process_individual_maps`, `_merge_maps` -> `_merge_maps_from_source`, and added `loaded_data_cache` parameter.
* **DONE:** Corrected syntax errors introduced during previous `apply_diff` operations (related to docstrings).
## Remaining Steps:
1. **DONE:** Modify `_process_individual_maps` Logic: Updated the internal logic to correctly utilize `_load_and_transform_source` (with cache) and `_save_image` for maps not involved in merging.
2. **DONE:** Modify `_merge_maps_from_source` Logic: Updated the internal logic to correctly utilize `_load_and_transform_source` (with cache) for *source* files, perform the channel merge, and then use `_save_image` for the merged result.
3. **DONE:** Testing: User confirmed the refactored code works correctly.

# BUG: GUI - Persistent Crash When Toggling "Disable Detailed Preview"
**Ticket Type:** Bug
**Priority:** High
**Status:** Resolved
**Description:**
The GUI application crashes with a `Fatal Python error: _PyThreadState_Attach: non-NULL old thread state` when toggling the "Disable Detailed Preview" option in the View menu. This issue persisted despite attempted fixes aimed at resolving potential threading conflicts.
This was a follow-up to a previous ticket regarding the "Disable Detailed Preview" feature regression (refer to ISSUE-GUI-DisableDetailedPreview-Regression.md). While the initial fix addressed the preview display logic, it did not eliminate the crash.
**Symptoms:**
The application terminates unexpectedly with the fatal Python error traceback when the "Disable Detailed Preview" menu item is toggled on or off, particularly after assets have been added to the queue and the detailed preview has been generated or is in the process of being generated.
**Steps to Reproduce:**
1. Launch the GUI (`python -m gui.main_window`).
2. (Optional but recommended for diagnosis) Check the "Verbose Logging (DEBUG)" option in the View menu.
3. Add one or more asset files (ZIPs or folders) to the drag and drop area.
4. Wait for the detailed preview to populate (or start populating).
5. Toggle the "Disable Detailed Preview" option in the View menu. The crash should occur.
6. Toggle the option again if the first toggle didn't cause the crash.
**Attempted Fixes:**
1. Modified `gui/preview_table_model.py` to introduce a `_simple_mode` flag and `set_simple_mode` method to control the data and column presentation for detailed vs. simple views.
2. Modified `gui/main_window.py` (`update_preview` method) to:
* Utilize the `PreviewTableModel.set_simple_mode` method based on the "Disable Detailed Preview" menu action state.
* Configure the `QTableView`'s column visibility and resize modes according to the selected preview mode.
* Request cancellation of the `PredictionHandler` via `prediction_handler.request_cancel()` if it is running when `update_preview` is called. (Note: `request_cancel` did not exist in `PredictionHandler`).
3. Added extensive logging with timestamps and thread IDs to `gui/main_window.py`, `gui/preview_table_model.py`, and `gui/prediction_handler.py` to diagnose threading behavior.
**Diagnosis:**
Analysis of logs revealed that the crash occurred consistently when toggling the preview back ON, specifically during the `endResetModel` call within `PreviewTableModel.set_simple_mode(False)`. The root cause was identified as a state inconsistency in the `QTableView` (or associated models) caused by a redundant call to `PreviewTableModel.set_data` immediately following `PreviewTableModel.set_simple_mode(True)` within the `MainWindow.update_preview` method when switching *to* simple mode. This resulted in two consecutive `beginResetModel`/`endResetModel` calls on the main thread, leaving the model/view in an unstable state that triggered the crash on the subsequent toggle. Additionally, it was found that `PredictionHandler` lacked a `request_cancel` method, although this was not the direct cause of the crash.
**Resolution:**
1. Removed the redundant call to `self.preview_model.set_data(list(self.current_asset_paths))` within the `if simple_mode_enabled:` block in `MainWindow.update_preview`. The `set_simple_mode(True)` call is sufficient to switch the model's internal mode.
2. Added an explicit call to `self.preview_model.set_data(list(self.current_asset_paths))` within the `MainWindow.add_input_paths` method, specifically for the case when the GUI is in simple preview mode. This ensures the simple view is updated correctly when new files are added without relying on the problematic `set_data` call in `update_preview`.
3. Corrected instances of `QThread.currentThreadId()` to `QThread.currentThread()` in logging statements across the relevant files.
4. Added the missing `QThread` import in `gui/prediction_handler.py`.
**Relevant Files/Components:**
* `gui/main_window.py`
* `gui/preview_table_model.py`
* `gui/prediction_handler.py`

---
ID: FEAT-003
Type: Feature
Status: Complete
Priority: Medium
Labels: [feature, blender, metadata]
Created: 2025-04-22
Updated: 2025-04-22
Related:
---
# [FEAT-003]: Selective Nodegroup Generation and Category Tagging in Blender
## Description
Enhance the Blender nodegroup creation script (`blenderscripts/create_nodegroups.py`) to only generate nodegroups for assets classified as "Surface" or "Decal" based on the `category` field in their `metadata.json` file. Additionally, store the asset's category (Surface, Decal, or Asset) as a tag on the generated Blender asset for better organization and filtering within Blender.
## Current Behavior
The current nodegroup generation script processes all assets found in the processed asset library root, regardless of their `category` specified in `metadata.json`. It does not add the asset's category as a tag in Blender.
## Desired Behavior / Goals
1. The script should read the `category` field from the `metadata.json` file for each processed asset.
2. If the `category` is "Surface" or "Decal", the script should proceed with generating the nodegroup.
3. If the `category` is "Asset" (or any other category), the script should skip nodegroup generation for that asset.
4. The script should add the asset's `category` (e.g., "Surface", "Decal", "Asset") as a tag to the corresponding generated Blender asset.
## Implementation Notes (Optional)
(This will require modifying `blenderscripts/create_nodegroups.py` to read the `metadata.json` file, check the `category` field, and use the Blender Python API (`bpy`) to add tags to the created asset.)
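A sketch of the category check, assuming `metadata.json` sits at the asset's root and exposes a top-level `category` field; the Blender tagging call in the trailing comment is indicative only:

```python
import json
from pathlib import Path

NODEGROUP_CATEGORIES = {"Surface", "Decal"}

def read_category(asset_dir):
    """Read the 'category' field from an asset's metadata.json."""
    meta = json.loads((Path(asset_dir) / "metadata.json").read_text(encoding="utf-8"))
    return meta.get("category")

def should_generate_nodegroup(category):
    """Nodegroups are generated only for Surface and Decal assets."""
    return category in NODEGROUP_CATEGORIES

# Inside Blender the tag would then be attached along the lines of:
#   node_group.asset_data.tags.new(category)
# (the exact bpy call depends on how the script marks assets)
```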
## Acceptance Criteria (Optional)
* [x] Run the nodegroup generation script on a processed asset library containing assets of different categories (Surface, Decal, Asset).
* [x] Verify that nodegroups are created only for Surface and Decal assets.
* [x] Verify that assets in the Blender file (both those with generated nodegroups and those skipped) have a tag corresponding to their category from `metadata.json`.

---
ID: FEAT-004
Type: Feature
Status: Complete
Priority: Medium
Labels: [core, gui, cli, feature, enhancement]
Created: 2025-04-22
Updated: 2025-04-22
Related: #ISSUE-001
---
# [FEAT-004]: Handle Multi-Asset Inputs Based on Source Naming Index
## Description
Currently, when an input ZIP or folder contains files from multiple distinct assets (as identified by the `source_naming.part_indices.base_name` rule in the preset), the tool's fallback logic uses `os.path.commonprefix` to determine a single, often incorrect, asset name. This prevents the tool from correctly processing inputs containing multiple assets and leads to incorrect predictions in the GUI.
## Current Behavior
When processing an input containing files from multiple assets (e.g., `3-HeartOak...` and `3-Oak-Classic...` in the same ZIP), the `_determine_base_metadata` method identifies multiple potential base names based on the configured index. It then falls back to calculating the common prefix of all relevant file stems, resulting in a truncated or incorrect asset name (e.g., "3-"). The processing pipeline and GUI prediction then proceed using this incorrect name.
## Desired Behavior / Goals
The tool should accurately detect when a single input (ZIP/folder) contains files belonging to multiple distinct assets, as defined by the `source_naming.part_indices.base_name` rule. For each distinct base name identified, the tool should process the corresponding subset of files as a separate, independent asset. This includes generating a correct output directory structure and a complete `metadata.json` file for each detected asset within the input. The GUI preview should also accurately reflect the presence of multiple assets and their predicted names.
## Implementation Notes (Optional)
* Modify `AssetProcessor._determine_base_metadata` to return a list of distinct base names and a mapping of files to their determined base names.
* Adjust the main processing orchestration (`main.py`, `gui/processing_handler.py`) to iterate over the list of distinct base names returned by `_determine_base_metadata`.
* For each distinct base name, create a new processing context (potentially a new `AssetProcessor` instance or a modified approach) that operates only on the files associated with that specific base name.
* Ensure temporary workspace handling and cleanup correctly manage files for multiple assets from a single input.
* Update `AssetProcessor.get_detailed_file_predictions` to correctly identify and group files by distinct base names for accurate GUI preview display.
* Consider edge cases: files that don't match any determined base name should likely still go to `Extra/`; if the index method yields no names, fall back to the input name as is done currently.
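The grouping step can be sketched as follows; the separator, index value, and example stems are illustrative, with the index taken from the preset's `source_naming` rules:

```python
def extract_base_name(stem, separator, base_name_index):
    """Pull the base-name part out of a file stem, or None if the
    stem doesn't satisfy the index rule (destined for Extra/)."""
    parts = stem.split(separator)
    if 0 <= base_name_index < len(parts):
        return parts[base_name_index]
    return None

def group_by_base_name(stems, separator, base_name_index):
    """Return (distinct_base_names, file_to_base_name_map) as proposed."""
    mapping = {s: extract_base_name(s, separator, base_name_index) for s in stems}
    distinct = sorted({n for n in mapping.values() if n is not None})
    return distinct, mapping

stems = ["3-HeartOak-NRM", "3-HeartOak-RGH", "3-Oak-Classic-NRM"]
distinct, mapping = group_by_base_name(stems, "-", 1)
```

Note how a single naive index already splits the example multi-asset input into two groups, though hyphenated names like `Oak-Classic` show why the real rule needs the preset's separator/index configuration.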
## Acceptance Criteria (Optional)
* [ ] Processing a ZIP file containing files for two distinct assets (e.g., 'AssetA' and 'AssetB') using a preset with `base_name_index` results in two separate output directories (`<output_base>/<supplier>/AssetA/` and `<output_base>/<supplier>/AssetB/`), each containing the correctly processed files and metadata for that asset.
* [ ] The GUI preview accurately lists the files from the multi-asset input and shows the correct predicted asset name for each file based on its determined base name (e.g., files belonging to 'AssetA' show 'AssetA' as the predicted name).
* [ ] The CLI processing of a multi-asset input correctly processes and outputs each asset separately.
* [ ] The tool handles cases where some files in a multi-asset input do not match any determined base name (e.g., they are correctly classified as 'Unrecognised' or 'Extra').
---
## Implementation Plan (Generated by Architect Mode)
**Goal:** Modify the tool to correctly identify and process multiple distinct assets within a single input (ZIP/folder) based on the `source_naming.part_indices.base_name` rule, placing unmatched files into the `Extra/` folder of each processed asset.
**Phase 1: Core Logic Refactoring (`asset_processor.py`)**
1. **Refactor `_determine_base_metadata`:**
* **Input:** Takes the list of all file paths (relative to temp dir) found after extraction.
* **Logic:**
* Iterates through relevant file stems (maps, models).
* Uses the `source_naming_separator` and `source_naming_indices['base_name']` to extract potential base names for each file stem.
* Identifies the set of *distinct* base names found across all files.
* Creates a mapping: `Dict[Path, Optional[str]]` where keys are relative file paths and values are the determined base name string (or `None` if a file doesn't match any base name according to the index rule).
* **Output:** Returns a tuple: `(distinct_base_names: List[str], file_to_base_name_map: Dict[Path, Optional[str]])`.
* **Remove:** Logic setting `self.metadata["asset_name"]`, `asset_category`, and `archetype`.
2. **Create New Method `_determine_single_asset_metadata`:**
* **Input:** Takes a specific `asset_base_name` (string) and the list of `classified_files` *filtered* for that asset.
* **Logic:** Contains the logic previously in `_determine_base_metadata` for determining `asset_category` and `archetype` based *only* on the files associated with the given `asset_base_name`.
* **Output:** Returns a dictionary containing `{"asset_category": str, "archetype": str}` for the specific asset.
3. **Modify `_inventory_and_classify_files`:**
* No major changes needed here initially, as it classifies based on file patterns independent of the final asset name. However, ensure the `classified_files` structure remains suitable for later filtering.
4. **Refactor `AssetProcessor.process` Method:**
* Change the overall flow to handle multiple assets.
* **Steps:**
1. `_setup_workspace()`
2. `_extract_input()`
3. `_inventory_and_classify_files()` -> Get initial `self.classified_files` (all files).
4. Call the *new* `_determine_base_metadata()` using all relevant files -> Get `distinct_base_names` list and `file_to_base_name_map`.
5. Initialize an overall status dictionary (e.g., `{"processed": [], "skipped": [], "failed": []}`).
6. **Loop** through each `current_asset_name` in `distinct_base_names`:
* Log the start of processing for `current_asset_name`.
* **Filter Files:** Create temporary filtered lists of maps, models, etc., from `self.classified_files` based on the `file_to_base_name_map` for the `current_asset_name`.
* **Determine Metadata:** Call `_determine_single_asset_metadata(current_asset_name, filtered_files)` -> Get category/archetype for this asset. Store these along with `current_asset_name` and supplier name in a temporary `current_asset_metadata` dict.
* **Skip Check:** Perform the skip check logic specifically for `current_asset_name` using the `output_base_path`, supplier name, and `current_asset_name`. If skipped, update overall status and `continue` to the next asset name.
* **Process:** Call `_process_maps()`, `_merge_maps()`, passing the *filtered* file lists and potentially the `current_asset_metadata`. These methods need to operate only on the provided subset of files.
* **Generate Metadata:** Call `_generate_metadata_file()`, passing the `current_asset_metadata` and the results from map/merge processing for *this asset*. This method will now write `metadata.json` specific to `current_asset_name`.
* **Organize Output:** Call `_organize_output_files()`, passing the `current_asset_name`. This method needs modification:
* It will move the processed files for the *current asset* to the correct subfolder (`<output_base>/<supplier>/<current_asset_name>/`).
* It will also identify files from the *original* input whose base name was `None` in the `file_to_base_name_map` (the "unmatched" files).
* It will copy these "unmatched" files into the `Extra/` subfolder for the *current asset being processed in this loop iteration*.
* Update overall status based on the success/failure of this asset's processing.
7. `_cleanup_workspace()` (only after processing all assets from the input).
8. **Return:** Return the overall status dictionary summarizing results across all detected assets.
5. **Adapt `_process_maps`, `_merge_maps`, `_generate_metadata_file`, `_organize_output_files`:**
* Ensure these methods accept and use the filtered file lists and the specific `asset_name` for the current iteration.
* `_organize_output_files` needs the logic to handle copying the "unmatched" files into the current asset's `Extra/` folder.
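The per-asset loop in the refactored `process` method can be condensed into a control-flow skeleton. Here `should_skip` and `process_one` stand in for the skip check and the `_process_maps`/`_merge_maps`/`_generate_metadata_file`/`_organize_output_files` sequence — this illustrates the flow, not the real implementation:

```python
from typing import Callable, Dict, List, Optional

def process_multi_asset(
    distinct_base_names: List[str],
    file_to_base_name_map: Dict[str, Optional[str]],
    should_skip: Callable[[str], bool],
    process_one: Callable[[str, List[str]], None],
) -> Dict[str, List[str]]:
    """Loop over detected assets, skipping or processing each independently."""
    overall: Dict[str, List[str]] = {"processed": [], "skipped": [], "failed": []}
    for asset in distinct_base_names:
        # Filter: only files whose detected base name is this asset.
        files = [f for f, base in file_to_base_name_map.items() if base == asset]
        if should_skip(asset):
            overall["skipped"].append(asset)
            continue
        try:
            process_one(asset, files)
            overall["processed"].append(asset)
        except Exception:
            overall["failed"].append(asset)
    return overall
```

A failure in one asset is recorded and the loop continues, so one bad asset does not abort the rest of the input.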
**Phase 2: Update Orchestration (`main.py`, `gui/processing_handler.py`)**
1. **Modify `main.process_single_asset_wrapper`:**
* The call `processor.process()` will now return the overall status dictionary.
* The wrapper needs to interpret this dictionary to return a single representative status ("processed" if any succeeded, "skipped" if all skipped, "failed" if any failed) and potentially a consolidated error message for the main loop/GUI.
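A sketch of how the wrapper could collapse the overall status dictionary into one representative status. The precedence used here — any failure wins, then any success, otherwise skipped — is an assumption consistent with the description above:

```python
def summarize_status(overall: dict) -> str:
    """Collapse the per-asset status dict into one representative status."""
    if overall.get("failed"):
        return "failed"      # any failure marks the whole input as failed
    if overall.get("processed"):
        return "processed"   # at least one asset went through
    return "skipped"         # nothing processed, nothing failed -> all skipped
```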
2. **Modify `gui.processing_handler.ProcessingHandler.run`:**
* No major changes needed here, as it relies on `process_single_asset_wrapper`. The status updates emitted back to the GUI might need slight adjustments if more detailed per-asset status is desired in the future, but for now, the overall status from the wrapper should suffice.
**Phase 3: Update GUI Prediction (`asset_processor.py`, `gui/prediction_handler.py`, `gui/main_window.py`)**
1. **Modify `AssetProcessor.get_detailed_file_predictions`:**
* This method must now perform the multi-asset detection:
* Call the refactored `_determine_base_metadata` to get the `distinct_base_names` and `file_to_base_name_map`.
* Iterate through all classified files (maps, models, extra, ignored).
* For each file, look up its corresponding base name in the `file_to_base_name_map`.
* The returned dictionary for each file should now include:
* `original_path`: str
* `predicted_asset_name`: str | None (The base name determined for this file, or None if unmatched)
* `predicted_output_name`: str | None (The predicted final filename, e.g., `AssetName_Color_4K.png`, or original name for models/extra)
* `status`: str ("Mapped", "Model", "Extra", "Unrecognised", "Ignored", **"Unmatched Extra"** - new status for files with `None` base name).
* `details`: str | None
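A sketch of how each per-file entry could be assembled (field names from the list above; the only new logic is the fall-through to "Unmatched Extra" when no base name was found):

```python
from typing import Dict, Optional

def build_prediction(
    original_path: str,
    base_name: Optional[str],
    predicted_output_name: Optional[str],
    classification: str,
) -> Dict[str, Optional[str]]:
    """Build one preview-table entry; files with no base name become 'Unmatched Extra'."""
    status = classification if base_name is not None else "Unmatched Extra"
    return {
        "original_path": original_path,
        "predicted_asset_name": base_name,
        "predicted_output_name": predicted_output_name,
        "status": status,
        "details": None,
    }
```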
2. **Update `gui.prediction_handler.PredictionHandler`:**
* Ensure it correctly passes the results from `get_detailed_file_predictions` (including the new `predicted_asset_name` and `status` values) back to the main window via signals.
3. **Update `gui.main_window.MainWindow`:**
* Modify the preview table model/delegate to display the `predicted_asset_name`. A new column might be needed.
* Update the logic that colors rows or displays status icons to handle the new "Unmatched Extra" status distinctly from regular "Extra" or "Unrecognised".
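For the default sorting the preview table is meant to provide (group by predicted asset, then by status, then alphabetically), the table model could use a sort key like the one below. The exact priority ordering is an assumption, treating the new "Unmatched Extra" like "Extra":

```python
STATUS_PRIORITY = {
    "Error": 0,
    "Mapped": 1,          # models are grouped with their maps
    "Model": 1,
    "Ignored": 2,
    "Extra": 3,
    "Unrecognised": 3,
    "Unmatched Extra": 3,
}

def preview_sort_key(row: dict):
    """Sort rows by asset, then status priority, then original path."""
    return (
        row.get("predicted_asset_name") or "~",   # unmatched files sort last
        STATUS_PRIORITY.get(row["status"], 99),
        row["original_path"].lower(),
    )
```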
**Visual Plan (`AssetProcessor.process` Sequence)**
```mermaid
sequenceDiagram
participant Client as Orchestrator (main.py / GUI Handler)
participant AP as AssetProcessor
participant Config as Configuration
participant FS as File System
Client->>AP: process(input_path, config, output_base, overwrite)
AP->>AP: _setup_workspace()
AP->>FS: Create temp_dir
AP->>AP: _extract_input()
AP->>FS: Extract/Copy files to temp_dir
AP->>AP: _inventory_and_classify_files()
AP-->>AP: self.classified_files (all files)
AP->>AP: _determine_base_metadata()
AP-->>AP: distinct_base_names, file_to_base_name_map
AP->>AP: Initialize overall_status = {}
loop For each current_asset_name in distinct_base_names
AP->>AP: Log start for current_asset_name
AP->>AP: Filter self.classified_files using file_to_base_name_map
AP-->>AP: filtered_files_for_asset
AP->>AP: _determine_single_asset_metadata(current_asset_name, filtered_files_for_asset)
AP-->>AP: current_asset_metadata (category, archetype)
AP->>AP: Perform Skip Check for current_asset_name
alt Skip Check == True
AP->>AP: Update overall_status (skipped)
AP->>AP: continue loop
end
AP->>AP: _process_maps(filtered_files_for_asset, current_asset_metadata)
AP-->>AP: processed_map_details_asset
AP->>AP: _merge_maps(filtered_files_for_asset, current_asset_metadata)
AP-->>AP: merged_map_details_asset
AP->>AP: _generate_metadata_file(current_asset_metadata, processed_map_details_asset, merged_map_details_asset)
AP->>FS: Write metadata.json for current_asset_name
AP->>AP: _organize_output_files(current_asset_name, file_to_base_name_map)
AP->>FS: Move processed files for current_asset_name
AP->>FS: Copy unmatched files to Extra/ for current_asset_name
AP->>AP: Update overall_status (processed/failed for this asset)
end
AP->>AP: _cleanup_workspace()
AP->>FS: Delete temp_dir
AP-->>Client: Return overall_status dictionary

```
# Ticket: FEAT-011 - Implement Power-of-Two Texture Resizing
**Status:** Open
**Priority:** High
**Assignee:** TBD
**Reporter:** Roo (Architect Mode)
## Description
The current asset processing pipeline resizes textures based on a target maximum dimension (e.g., 4K = 4096px) while maintaining the original aspect ratio. This results in non-power-of-two (NPOT) dimensions for non-square textures, which is suboptimal for rendering performance and compatibility with certain systems.
This feature implements a "Stretch/Squash" approach to ensure all output textures have power-of-two (POT) dimensions for each target resolution key.
## Proposed Solution
1. **Resizing Logic Change:**
* Modify the `calculate_target_dimensions` helper function in `asset_processor.py`.
* **Step 1:** Calculate intermediate dimensions (`scaled_w`, `scaled_h`) by scaling the original image (orig_w, orig_h) to fit within the target resolution key's maximum dimension (e.g., 4096 for "4K") while maintaining the original aspect ratio (using existing logic).
* **Step 2:** Implement a new helper function `get_nearest_pot(value: int) -> int` to find the closest power-of-two value for a given integer.
* **Step 3:** Apply `get_nearest_pot()` to `scaled_w` to get the final target power-of-two width (`pot_w`).
* **Step 4:** Apply `get_nearest_pot()` to `scaled_h` to get the final target power-of-two height (`pot_h`).
* **Step 5:** Return `(pot_w, pot_h)` from `calculate_target_dimensions`. The `_process_maps` function will then use these POT dimensions in `cv2.resize`.
2. **Helper Function `get_nearest_pot`:**
* This function will take an integer `value`.
* It will find the powers of two immediately below (`lower_pot`) and above (`upper_pot`) the value.
* It will return the power of two that is numerically closer to the original `value`. (e.g., `get_nearest_pot(1365)` would return 1024, as `1365 - 1024 = 341` and `2048 - 1365 = 683`).
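A sketch of `get_nearest_pot` matching the example above. How an exact midpoint (e.g. 1536, equidistant from 1024 and 2048) should round is not specified in the ticket, so the tie-goes-down behavior below is an assumption:

```python
def get_nearest_pot(value: int) -> int:
    """Return the power of two closest to value (minimum 1)."""
    if value < 1:
        return 1
    lower = 1
    while lower * 2 <= value:
        lower *= 2           # largest power of two <= value
    upper = lower * 2
    # Midpoint ties go to the lower power of two (assumption).
    return lower if value - lower <= upper - value else upper
```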
3. **Filename Convention:**
* The original resolution tag (e.g., `_4K`, `_2K`) defined in `config.py` will be kept in the output filename, even though the final dimensions are POT. This maintains consistency with the processing target.
4. **Metadata:**
* The existing aspect ratio change metadata calculation (`_normalize_aspect_ratio_change`) will remain unchanged. This metadata can be used downstream to potentially correct the aspect ratio distortion introduced by the stretch/squash resizing.
## Implementation Diagram
```mermaid
graph TD
A["Original Dimensions (W, H)"] --> B{"Target Resolution Key (e.g. '4K')"};
B --> C{"Get Max Dimension (e.g. 4096)"};
A & C --> D["Calculate Scaled Dimensions (scaled_w, scaled_h) - Maintain Aspect Ratio"];
D --> E[scaled_w];
D --> F[scaled_h];
E --> G["get_nearest_pot(scaled_w) -> pot_w"];
F --> H["get_nearest_pot(scaled_h) -> pot_h"];
G & H --> I["Final POT Dimensions (pot_w, pot_h)"];
I --> J["Use (pot_w, pot_h) in cv2.resize"];
```
## Acceptance Criteria
* All textures output by the `_process_maps` function have power-of-two dimensions (width and height are both powers of 2).
* The resizing uses the "Stretch/Squash" method based on the nearest POT value for each dimension calculated *after* initial aspect-preserving scaling.
* The output filename retains the original resolution key (e.g., `_4K`).
* The `get_nearest_pot` helper function correctly identifies the closest power of two.
* The aspect ratio metadata calculation remains unchanged.

---
ID: ISSUE-001
Type: Issue
Status: Backlog
Priority: Medium
Labels: [bug, config, gui]
Created: 2025-04-22
Updated: 2025-04-22
Related:
---
# [ISSUE-001]: Source file naming rules from JSON are not respected
## Description
The tool is not correctly applying the "Source file naming rules" defined in the JSON presets. Specifically, the "base Name index" and "Map type index" values within the `source_naming` section of the preset JSON are not being respected during file processing.
## Current Behavior
When processing assets, the tool (observed in the GUI) does not use the specified "base Name index" and "Map type index" from the active preset's `source_naming` rules to determine the asset's base name and individual map types from the source filenames.
## Desired Behavior / Goals
The tool should accurately read and apply the "base Name index" and "Map type index" values from the selected preset's `source_naming` rules to correctly parse asset base names and map types from source filenames.
## Implementation Notes (Optional)
(Add any thoughts on how this could be implemented, technical challenges, relevant code sections, or ideas for a solution.)
## Acceptance Criteria (Optional)
* [ ] Processing an asset with a preset that uses specific `base_name_index` and `map_type_index` values results in the correct asset name and map types being identified according to those indices.
* [ ] This behavior is consistent in both the GUI and CLI.

---
ID: ISSUE-002
Type: Issue
Status: Mostly Resolved
Priority: High
Labels: [bug, core, file-classification]
Created: 2025-04-22
Updated: 2025-04-22
Related:
---
# [ISSUE-002]: Incorrect COL-# numbering with multiple assets in one directory
## Description
When processing a directory containing multiple distinct asset sets (e.g., `Assetname1` and `Assetname2`), the numbering for map types that require variant suffixes (like "COL") is incorrectly incremented across all assets in the directory rather than being reset for each individual asset.
## Current Behavior
If an input directory contains files for `Assetname1` and `Assetname2`, and both have multiple "COL" maps, the numbering continues sequentially across both sets. For example, `Assetname1` might get `_COL-1`, `_COL-2`, while `Assetname2` incorrectly gets `_COL-3`, `_COL-4` instead of starting its own sequence (`_COL-1`, `_COL-2`). In short, the COL counter accumulates across the whole directory instead of resetting per asset name.
## Desired Behavior / Goals
The tool should correctly identify distinct asset sets within a single input directory and apply variant numbering (like "COL-#") independently for each asset set. The numbering should reset for each new asset encountered in the directory.
## Implementation Notes (Optional)
(This likely requires adjusting the file classification or inventory logic to group files by asset name before applying variant numbering rules.)
## Acceptance Criteria (Optional)
* [ ] Processing a directory containing multiple asset sets with variant map types results in correct, independent numbering for each asset set (e.g., `Assetname1_COL-1`, `Assetname1_COL-2`, `Assetname2_COL-1`, `Assetname2_COL-2`).
* [ ] The numbering is based on the files belonging to a specific asset name, not the overall count of variant maps in the entire input directory.

---
ID: ISSUE-005
Type: Issue
Status: Resolved
Priority: High
Labels: [bug, core, image-processing]
Created: 2025-04-22
Updated: 2025-04-22
Related:
---
# [ISSUE-005]: Alpha Mask channel not processed correctly
## Description
When processing source images that contain an alpha channel intended for use as a MASK map, the tool's output for the MASK map is an RGBA image instead of a grayscale image derived solely from the alpha channel.
## Current Behavior
If a source image (e.g., a PNG or TIF) has an alpha channel and is classified as a MASK map type, the resulting output MASK file retains the RGB channels (potentially with incorrect data or black/white values) in addition to the alpha channel, resulting in an RGBA output image.
## Desired Behavior / Goals
When processing a source image with an alpha channel for a MASK map type, the tool should extract only the alpha channel data and output a single-channel (grayscale) image representing the mask. The RGB channels from the source should be discarded for the MASK output.
## Implementation Notes (Optional)
(This requires modifying the image processing logic for MASK map types to specifically isolate and save only the alpha channel as a grayscale image. Need to check the relevant sections in `asset_processor.py` related to map processing and saving.)
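The fix essentially amounts to slicing out the alpha plane before saving. With the image loaded as an H×W×4 array (e.g. via `cv2.imread(path, cv2.IMREAD_UNCHANGED)`, which yields BGRA), the channel isolation itself is plain NumPy — the helper below is a hypothetical sketch, not the actual code in `asset_processor.py`:

```python
import numpy as np

def extract_alpha_as_mask(image: np.ndarray) -> np.ndarray:
    """Return a single-channel grayscale mask from a 4-channel image.

    Already-grayscale images pass through unchanged; anything else
    without an alpha channel raises, since there is no mask to extract.
    """
    if image.ndim == 2:
        return image                      # already grayscale
    if image.ndim == 3 and image.shape[2] == 4:
        return image[:, :, 3].copy()      # alpha is the 4th channel
    raise ValueError("source image has no alpha channel to use as a MASK")
```

Saving the returned 2-D array then produces a single-channel output file, discarding the RGB data as required.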
## Acceptance Criteria (Optional)
* [ ] Process an asset containing a source image with an alpha channel intended as a MASK map.
* [ ] Verify that the output MASK file is a grayscale image (single channel) and accurately represents the alpha channel data from the source.
* [ ] Verify that the output MASK file does not contain redundant or incorrect RGB channel data.

---
ID: ISSUE-006
Type: Issue
Status: Mostly Resolved
Priority: Medium
Labels: [core, bug, map_processing, multi_asset]
Created: 2025-04-22
Updated: 2025-04-22
Related: #FEAT-004-handle-multi-asset-inputs.md
---
# [ISSUE-006]: COL-# Suffixes Incorrectly Increment Across Multi-Asset Inputs
## Description
When processing an input (ZIP or folder) that contains files for multiple distinct assets, the numeric suffixes applied to map types listed in `RESPECT_VARIANT_MAP_TYPES` (such as "COL") are currently incremented globally across all files in the input, rather than being reset and incremented independently for each detected asset group.
## Current Behavior
If an input contains files for AssetA (e.g., AssetA_COL.png, AssetA_COL_Variant.png) and AssetB (e.g., AssetB_COL.png), the output might incorrectly number them as AssetA_COL-1.png, AssetA_COL-2.png, and AssetB_COL-3.png. The expectation is that numbering should restart for each asset, resulting in AssetA_COL-1.png, AssetA_COL-2.png, and AssetB_COL-1.png.
## Desired Behavior / Goals
The numeric suffix for map types in `RESPECT_VARIANT_MAP_TYPES` should be determined and applied independently for each distinct asset detected within a multi-asset input. The numbering should restart for each asset group, with the first variant receiving the `-1` suffix.
## Implementation Notes (Optional)
- The logic for assigning suffixes is primarily within `AssetProcessor._inventory_and_classify_files`.
- This method currently classifies all files from the input together before determining asset groups.
- The classification logic needs to be adjusted to perform suffix assignment *after* files have been grouped by their determined asset name.
- This might require modifying the output of `_inventory_and_classify_files` or adding a new step after `_determine_base_metadata` to re-process or re-structure the classified files per asset for suffix assignment.
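A sketch of suffix assignment performed after grouping, with the counter keyed by (asset, map type) so it restarts for every asset. `RESPECT_VARIANT_MAP_TYPES` here is an assumed stand-in for the real value in `config.py`:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

RESPECT_VARIANT_MAP_TYPES = {"COL"}  # assumed subset of the real config value

def assign_suffixes(
    files: List[Tuple[str, str, str]],  # (filename, asset_name, map_type)
) -> Dict[str, str]:
    """Map each filename to its suffixed map type, counting per asset."""
    counters: Dict[Tuple[str, str], int] = defaultdict(int)
    result: Dict[str, str] = {}
    for filename, asset, map_type in files:
        if map_type in RESPECT_VARIANT_MAP_TYPES:
            key = (asset, map_type)
            counters[key] += 1            # counter restarts per asset
            result[filename] = f"{map_type}-{counters[key]}"
        else:
            result[filename] = map_type
    return result
```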
## Acceptance Criteria (Optional)
* [ ] Processing a multi-asset input containing multiple "COL" variants for different assets results in correct COL-# suffixes starting from -1 for each asset's output files.
* [ ] The GUI preview accurately reflects the correct COL-# numbering for each file based on its predicted asset name.
* [ ] The CLI processing output confirms the correct numbering in the generated filenames.

---
ID: ISSUE-007
Type: Issue
Status: Resolved
Priority: Medium
Labels: [core, bug, map_processing, suffix, regression]
Created: 2025-04-22
Updated: 2025-04-22
Related: #ISSUE-006-col-increment-multi-asset.md
---
# [ISSUE-007]: Suffix Not Applied to Single Maps in RESPECT_VARIANT_MAP_TYPES
## Description
Map types listed in `config.py`'s `RESPECT_VARIANT_MAP_TYPES` (e.g., "COL") are expected to always receive a numeric suffix (e.g., "-1"), even if only one map of that type exists for a given asset. Following the fix for #ISSUE-006, this behavior is no longer occurring. Single maps of these types are now output without a suffix.
## Current Behavior
An asset containing only one map file designated as "COL" (e.g., `AssetA_COL.png`) results in processed output files named like `AssetA_COL_4K.png`, without the `-1` suffix.
## Desired Behavior / Goals
An asset containing only one map file designated as "COL" (or any other type listed in `RESPECT_VARIANT_MAP_TYPES`) should result in processed output files named like `AssetA_COL-1_4K.png`, correctly applying the numeric suffix even when it's the sole variant.
## Implementation Notes (Optional)
- The per-asset suffix assignment logic added in `AssetProcessor.process` (around line 233 in the previous diff) likely needs adjustment.
- The condition `if respect_variants:` might need to be modified, or the loop/enumeration logic needs to explicitly handle the case where `len(maps_in_group) == 1` for types listed in `RESPECT_VARIANT_MAP_TYPES`.
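The rule the acceptance criteria describe can be isolated into a small, testable predicate. This is a hypothetical helper — the real condition lives inline in `AssetProcessor.process` — and whether non-variant types get suffixes when duplicates exist is an assumption here:

```python
def needs_suffix(
    map_type: str,
    count_in_group: int,
    respect_variant_types: frozenset = frozenset({"COL"}),
) -> bool:
    """Decide whether a map gets a numeric suffix (-1, -2, ...).

    Types in respect_variant_types always get one -- even when the group
    holds a single map, which is exactly the case this regression broke.
    """
    return map_type in respect_variant_types or count_in_group > 1
```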
## Acceptance Criteria (Optional)
* [ ] Processing an asset with a single "COL" map results in output files with the `COL-1` suffix.
* [ ] Processing an asset with multiple "COL" maps still results in correctly incremented suffixes (`COL-1`, `COL-2`, etc.).
* [ ] Map types *not* listed in `RESPECT_VARIANT_MAP_TYPES` continue to receive no suffix if only one exists.

# ISSUE: GUI - "Disable Detailed Preview" feature regression
**Ticket Type:** Issue
**Priority:** Medium
**Description:**
The "Disable Detailed Preview" feature in the GUI is currently not functioning correctly. When attempting to disable the detailed file preview (via the View menu), the GUI does not switch to the simpler asset list view. This regression prevents users from using the less detailed preview mode, which may impact performance or usability, especially when dealing with inputs containing a large number of files.
**Steps to Reproduce:**
1. Launch the GUI (`python -m gui.main_window`).
2. Load an asset (ZIP or folder) into the drag and drop area. Observe the detailed preview table populating.
3. Go to the "View" menu.
4. Select/Deselect the "Detailed File Preview" option.
**Expected Result:**
The preview table should switch between the detailed file list view and the simple asset list view when the menu option is toggled.
**Actual Result:**
The preview table remains in the detailed file list view regardless of the "Detailed File Preview" menu option state.
**Relevant Files/Components:**
* `gui/main_window.py`: Likely contains the logic for the View menu and handling the toggle state.
* `gui/prediction_handler.py`: Manages the background process that generates the detailed preview data. The GUI needs to be able to stop or not request this process when detailed preview is disabled.
* `gui/preview_table_model.py`: Manages the data and display logic for the preview table. It should adapt its display based on whether detailed preview is enabled or disabled.

# Issue and Feature Tracking
This directory is used to track issues and feature ideas for the Asset Processor Tool using Markdown files.
## Structure
All ticket files are stored directly within the `Tickets/` directory.
```mermaid
graph TD
A[Asset_processor_tool] --> B(Tickets);
A --> C(...other files/dirs...);
B --> D(ISSUE-XXX-....md);
B --> E(FEAT-XXX-....md);
B --> F(_template.md);
```
## File Naming Convention
Ticket files should follow the convention: `TYPE-ID-short-description.md`
* `TYPE`: `ISSUE` for bug reports or problems, `FEAT` for new features or enhancements.
* `ID`: A sequential three-digit number (e.g., `001`, `002`).
* `short-description`: A brief, hyphenated summary of the ticket's content.
Examples:
* `ISSUE-001-gui-preview-bug.md`
* `FEAT-002-add-dark-mode.md`
## Ticket Template (`_template.md`)
Use the `_template.md` file as a starting point for creating new tickets. It includes YAML front matter for structured metadata and standard Markdown headings for the ticket content.
```markdown
---
ID: TYPE-XXX # e.g., FEAT-001, ISSUE-002
Type: Issue | Feature # Choose one: Issue or Feature
Status: Backlog | Planned | In Progress | Blocked | Needs Review | Done | Won't Fix # Choose one
Priority: Low | Medium | High # Choose one
Labels: [gui, cli, core, blender, bug, feature, enhancement, docs, config] # Add relevant labels from the list or define new ones
Created: YYYY-MM-DD
Updated: YYYY-MM-DD
Related: # Links to other tickets (e.g., #ISSUE-YYY), relevant files, or external URLs
---
# [TYPE-XXX]: Brief Title of Issue/Feature
## Description
(Provide a detailed explanation of the issue or feature request. What is the problem you are trying to solve, or what is the new functionality you are proposing?)
## Current Behavior
(Describe what happens currently. If reporting a bug, explain the steps to reproduce it. If proposing a feature, describe the current state without the feature.)
## Desired Behavior / Goals
(Describe what *should* happen if the issue is resolved, or what the feature aims to achieve. Be specific about the desired outcome.)
## Implementation Notes (Optional)
(Add any thoughts on how this could be implemented, potential technical challenges, relevant code sections, or ideas for a solution.)
## Acceptance Criteria (Optional)
(Define clear, testable criteria that must be met for the ticket to be considered complete.)
* [ ] Criterion 1: The first condition that must be satisfied.
* [ ] Criterion 2: The second condition that must be satisfied.
* [ ] Add more criteria as needed.
```
## How to Use
1. Create a new Markdown file in the `Tickets/` directory following the naming convention (`TYPE-ID-short-description.md`).
2. Copy the content from `_template.md` into your new file.
3. Fill in the YAML front matter and the Markdown sections with details about the issue or feature.
4. Update the `Status` and `Updated` fields as you work on the ticket.
5. Use the `Related` field to link to other relevant tickets or project files.
---
ID: TYPE-XXX # e.g., FEAT-001, ISSUE-002
Type: Issue | Feature # Choose one: Issue or Feature
Status: Backlog | Planned | In Progress | Blocked | Needs Review | Done | Won't Fix # Choose one
Priority: Low | Medium | High # Choose one
Labels: [gui, cli, core, blender, bug, feature, enhancement, docs, config] # Add relevant labels from the list or define new ones
Created: YYYY-MM-DD
Updated: YYYY-MM-DD
Related: # Links to other tickets (e.g., #ISSUE-YYY), relevant files, or external URLs
---
# [TYPE-XXX]: Brief Title of Issue/Feature
## Description
(Provide a detailed explanation of the issue or feature request. What is the problem you are trying to solve, or what is the new functionality you are proposing?)
## Current Behavior
(Describe what happens currently. If reporting a bug, explain the steps to reproduce it. If proposing a feature, describe the current state without the feature.)
## Desired Behavior / Goals
(Describe what *should* happen if the issue is resolved, or what the feature aims to achieve. Be specific about the desired outcome.)
## Implementation Notes (Optional)
(Add any thoughts on how this could be implemented, potential technical challenges, relevant code sections, or ideas for a solution.)
## Acceptance Criteria (Optional)
(Define clear, testable criteria that must be met for the ticket to be considered complete.)
* [ ] Criterion 1: The first condition that must be satisfied.
* [ ] Criterion 2: The second condition that must be satisfied.
* [ ] Add more criteria as needed.