Initial commit

Tickets/Resolved/BUG-GUI-PreviewToggleCrash.md
# BUG: GUI - Persistent Crash When Toggling "Disable Detailed Preview"

**Ticket Type:** Bug
**Priority:** High
**Status:** Resolved

**Description:**

The GUI application crashes with a `Fatal Python error: _PyThreadState_Attach: non-NULL old thread state` when toggling the "Disable Detailed Preview" option in the View menu. This issue persisted despite attempted fixes aimed at resolving potential threading conflicts.

This was a follow-up to a previous ticket regarding the "Disable Detailed Preview" feature regression (refer to ISSUE-GUI-DisableDetailedPreview-Regression.md). While the initial fix addressed the preview display logic, it did not eliminate the crash.

**Symptoms:**

The application terminates unexpectedly with the fatal Python error traceback when the "Disable Detailed Preview" menu item is toggled on or off, particularly after assets have been added to the queue and the detailed preview has been generated or is in the process of being generated.

**Steps to Reproduce:**

1. Launch the GUI (`python -m gui.main_window`).
2. (Optional but recommended for diagnosis) Check the "Verbose Logging (DEBUG)" option in the View menu.
3. Add one or more asset files (ZIPs or folders) to the drag-and-drop area.
4. Wait for the detailed preview to populate (or start populating).
5. Toggle the "Disable Detailed Preview" option in the View menu. The crash should occur.
6. Toggle the option again if the first toggle did not cause the crash.

**Attempted Fixes:**

1. Modified `gui/preview_table_model.py` to introduce a `_simple_mode` flag and a `set_simple_mode` method to control the data and column presentation for detailed vs. simple views.
2. Modified `gui/main_window.py` (`update_preview` method) to:
   * Utilize the `PreviewTableModel.set_simple_mode` method based on the "Disable Detailed Preview" menu action state.
   * Configure the `QTableView`'s column visibility and resize modes according to the selected preview mode.
   * Request cancellation of the `PredictionHandler` via `prediction_handler.request_cancel()` if it is running when `update_preview` is called. (Note: `request_cancel` did not exist in `PredictionHandler`.)
3. Added extensive logging with timestamps and thread IDs to `gui/main_window.py`, `gui/preview_table_model.py`, and `gui/prediction_handler.py` to diagnose threading behavior.

**Diagnosis:**

Analysis of logs revealed that the crash occurred consistently when toggling the preview back ON, specifically during the `endResetModel` call within `PreviewTableModel.set_simple_mode(False)`. The root cause was identified as a state inconsistency in the `QTableView` (or associated models) caused by a redundant call to `PreviewTableModel.set_data` immediately following `PreviewTableModel.set_simple_mode(True)` within the `MainWindow.update_preview` method when switching *to* simple mode. This resulted in two consecutive `beginResetModel`/`endResetModel` cycles on the main thread, leaving the model/view in an unstable state that triggered the crash on the subsequent toggle. Additionally, it was found that `PredictionHandler` lacked a `request_cancel` method, although this was not the direct cause of the crash.

**Resolution:**

1. Removed the redundant call to `self.preview_model.set_data(list(self.current_asset_paths))` within the `if simple_mode_enabled:` block in `MainWindow.update_preview`. The `set_simple_mode(True)` call is sufficient to switch the model's internal mode.
2. Added an explicit call to `self.preview_model.set_data(list(self.current_asset_paths))` within the `MainWindow.add_input_paths` method, specifically for the case when the GUI is in simple preview mode. This ensures the simple view is updated correctly when new files are added, without relying on the problematic `set_data` call in `update_preview`.
3. Corrected instances of `QThread.currentThreadId()` to `QThread.currentThread()` in logging statements across the relevant files.
4. Added the missing `QThread` import in `gui/prediction_handler.py`.
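The corrected division of responsibilities can be sketched without Qt. The class and method names below mirror the ticket, but the bodies are illustrative stand-ins: `reset_count` simply counts paired `beginResetModel`/`endResetModel` cycles, which the fix reduces to one per toggle.

```python
# Qt-free sketch of the fixed call flow (hypothetical bodies; the real
# classes subclass QAbstractTableModel / QMainWindow).

class PreviewTableModel:
    def __init__(self):
        self._simple_mode = False
        self._rows = []
        self.reset_count = 0  # counts beginResetModel/endResetModel pairs

    def set_simple_mode(self, enabled):
        self.reset_count += 1  # one paired reset switches the presentation
        self._simple_mode = enabled

    def set_data(self, paths):
        self.reset_count += 1  # one paired reset replaces the rows
        self._rows = list(paths)


class MainWindow:
    def __init__(self):
        self.preview_model = PreviewTableModel()
        self.current_asset_paths = []

    def update_preview(self, simple_mode_enabled):
        # Fixed: only switch the mode. The redundant set_data() that used
        # to follow here (issuing a second back-to-back reset) is gone.
        self.preview_model.set_simple_mode(simple_mode_enabled)

    def add_input_paths(self, paths):
        self.current_asset_paths.extend(paths)
        if self.preview_model._simple_mode:
            # Simple mode refreshes its rows here instead (Resolution item 2).
            self.preview_model.set_data(list(self.current_asset_paths))


win = MainWindow()
win.update_preview(simple_mode_enabled=True)  # exactly one model reset
win.add_input_paths(["asset.zip"])            # one more reset, with new rows
```

Toggling the mode now issues exactly one reset cycle, and adding files issues one more, so no two resets ever run back to back from a single user action.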
**Relevant Files/Components:**

* `gui/main_window.py`
* `gui/preview_table_model.py`
* `gui/prediction_handler.py`
---
ID: FEAT-003
Type: Feature
Status: Complete
Priority: Medium
Labels: [feature, blender, metadata]
Created: 2025-04-22
Updated: 2025-04-22
Related:
---

# [FEAT-003]: Selective Nodegroup Generation and Category Tagging in Blender

## Description

Enhance the Blender nodegroup creation script (`blenderscripts/create_nodegroups.py`) to generate nodegroups only for assets classified as "Surface" or "Decal", based on the `category` field in their `metadata.json` file. Additionally, store the asset's category (Surface, Decal, or Asset) as a tag on the generated Blender asset for better organization and filtering within Blender.

## Current Behavior

The current nodegroup generation script processes all assets found in the processed asset library root, regardless of the `category` specified in `metadata.json`. It does not add the asset's category as a tag in Blender.

## Desired Behavior / Goals

1. The script should read the `category` field from the `metadata.json` file for each processed asset.
2. If the `category` is "Surface" or "Decal", the script should proceed with generating the nodegroup.
3. If the `category` is "Asset" (or any other category), the script should skip nodegroup generation for that asset.
4. The script should add the asset's `category` (e.g., "Surface", "Decal", "Asset") as a tag to the corresponding generated Blender asset.

## Implementation Notes (Optional)

(This will require modifying `blenderscripts/create_nodegroups.py` to read the `metadata.json` file, check the `category` field, and use the Blender Python API (`bpy`) to add tags to the created asset.)
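The category gate itself needs no Blender API and can be sketched as plain Python. The top-level `category` key follows this ticket; the function names and the `"Asset"` default are illustrative assumptions, not the script's actual API.

```python
# Hypothetical sketch of the metadata-driven filter for nodegroup generation.
import json
from pathlib import Path

NODEGROUP_CATEGORIES = {"Surface", "Decal"}

def read_category(asset_dir: Path) -> str:
    """Read the 'category' field from an asset's metadata.json."""
    with open(asset_dir / "metadata.json", encoding="utf-8") as fh:
        # Assumed default: anything without a category is treated as "Asset".
        return json.load(fh).get("category", "Asset")

def should_generate_nodegroup(category: str) -> bool:
    # Only Surface and Decal assets get nodegroups; everything else is
    # skipped but still receives its category tag in Blender.
    return category in NODEGROUP_CATEGORIES
```

Inside Blender, the tag itself would then be applied via `bpy` on the marked asset (in recent Blender versions something like `datablock.asset_data.tags.new(category)` after `asset_mark()`; verify against the target Blender release).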
## Acceptance Criteria (Optional)

* [x] Run the nodegroup generation script on a processed asset library containing assets of different categories (Surface, Decal, Asset).
* [x] Verify that nodegroups are created only for Surface and Decal assets.
* [x] Verify that assets in the Blender file (both those with generated nodegroups and those skipped) have a tag corresponding to their category from `metadata.json`.

Tickets/Resolved/FEAT-004-handle-multi-asset-inputs.md
---
ID: FEAT-004
Type: Feature
Status: Complete
Priority: Medium
Labels: [core, gui, cli, feature, enhancement]
Created: 2025-04-22
Updated: 2025-04-22
Related: #ISSUE-001
---

# [FEAT-004]: Handle Multi-Asset Inputs Based on Source Naming Index

## Description

Currently, when an input ZIP or folder contains files from multiple distinct assets (as identified by the `source_naming.part_indices.base_name` rule in the preset), the tool's fallback logic uses `os.path.commonprefix` to determine a single, often incorrect, asset name. This prevents the tool from correctly processing inputs containing multiple assets and leads to incorrect predictions in the GUI.

## Current Behavior

When processing an input containing files from multiple assets (e.g., `3-HeartOak...` and `3-Oak-Classic...` in the same ZIP), the `_determine_base_metadata` method identifies multiple potential base names based on the configured index. It then falls back to calculating the common prefix of all relevant file stems, resulting in a truncated or incorrect asset name (e.g., "3-"). The processing pipeline and GUI prediction then proceed using this incorrect name.

## Desired Behavior / Goals

The tool should accurately detect when a single input (ZIP/folder) contains files belonging to multiple distinct assets, as defined by the `source_naming.part_indices.base_name` rule. For each distinct base name identified, the tool should process the corresponding subset of files as a separate, independent asset. This includes generating a correct output directory structure and a complete `metadata.json` file for each detected asset within the input. The GUI preview should also accurately reflect the presence of multiple assets and their predicted names.

## Implementation Notes (Optional)

* Modify `AssetProcessor._determine_base_metadata` to return a list of distinct base names and a mapping of files to their determined base names.
* Adjust the main processing orchestration (`main.py`, `gui/processing_handler.py`) to iterate over the list of distinct base names returned by `_determine_base_metadata`.
* For each distinct base name, create a new processing context (potentially a new `AssetProcessor` instance or a modified approach) that operates only on the files associated with that specific base name.
* Ensure temporary workspace handling and cleanup correctly manage files for multiple assets from a single input.
* Update `AssetProcessor.get_detailed_file_predictions` to correctly identify and group files by distinct base names for accurate GUI preview display.
* Consider edge cases: what if some files don't match any determined base name? (They should likely still go to `Extra/`.) What if the index method yields no names? (Fall back to the input name, as currently.)

## Acceptance Criteria (Optional)

* [ ] Processing a ZIP file containing files for two distinct assets (e.g., 'AssetA' and 'AssetB') using a preset with `base_name_index` results in two separate output directories (`<output_base>/<supplier>/AssetA/` and `<output_base>/<supplier>/AssetB/`), each containing the correctly processed files and metadata for that asset.
* [ ] The GUI preview accurately lists the files from the multi-asset input and shows the correct predicted asset name for each file based on its determined base name (e.g., files belonging to 'AssetA' show 'AssetA' as the predicted name).
* [ ] The CLI processing of a multi-asset input correctly processes and outputs each asset separately.
* [ ] The tool handles cases where some files in a multi-asset input do not match any determined base name (e.g., they are correctly classified as 'Unrecognised' or 'Extra').

---

## Implementation Plan (Generated by Architect Mode)

**Goal:** Modify the tool to correctly identify and process multiple distinct assets within a single input (ZIP/folder) based on the `source_naming.part_indices.base_name` rule, placing unmatched files into the `Extra/` folder of each processed asset.

**Phase 1: Core Logic Refactoring (`asset_processor.py`)**

1. **Refactor `_determine_base_metadata`:**
   * **Input:** Takes the list of all file paths (relative to the temp dir) found after extraction.
   * **Logic:**
     * Iterates through relevant file stems (maps, models).
     * Uses the `source_naming_separator` and `source_naming_indices['base_name']` to extract potential base names for each file stem.
     * Identifies the set of *distinct* base names found across all files.
     * Creates a mapping: `Dict[Path, Optional[str]]` where keys are relative file paths and values are the determined base name string (or `None` if a file doesn't match any base name according to the index rule).
   * **Output:** Returns a tuple: `(distinct_base_names: List[str], file_to_base_name_map: Dict[Path, Optional[str]])`.
   * **Remove:** Logic setting `self.metadata["asset_name"]`, `asset_category`, and `archetype`.

2. **Create New Method `_determine_single_asset_metadata`:**
   * **Input:** Takes a specific `asset_base_name` (string) and the list of `classified_files` *filtered* for that asset.
   * **Logic:** Contains the logic previously in `_determine_base_metadata` for determining `asset_category` and `archetype` based *only* on the files associated with the given `asset_base_name`.
   * **Output:** Returns a dictionary containing `{"asset_category": str, "archetype": str}` for the specific asset.

3. **Modify `_inventory_and_classify_files`:**
   * No major changes needed here initially, as it classifies based on file patterns independent of the final asset name. However, ensure the `classified_files` structure remains suitable for later filtering.

4. **Refactor `AssetProcessor.process` Method:**
   * Change the overall flow to handle multiple assets.
   * **Steps:**
     1. `_setup_workspace()`
     2. `_extract_input()`
     3. `_inventory_and_classify_files()` -> Get initial `self.classified_files` (all files).
     4. Call the *new* `_determine_base_metadata()` using all relevant files -> Get the `distinct_base_names` list and `file_to_base_name_map`.
     5. Initialize an overall status dictionary (e.g., `{"processed": [], "skipped": [], "failed": []}`).
     6. **Loop** through each `current_asset_name` in `distinct_base_names`:
        * Log the start of processing for `current_asset_name`.
        * **Filter Files:** Create temporary filtered lists of maps, models, etc., from `self.classified_files` based on the `file_to_base_name_map` for the `current_asset_name`.
        * **Determine Metadata:** Call `_determine_single_asset_metadata(current_asset_name, filtered_files)` -> Get the category/archetype for this asset. Store these along with `current_asset_name` and the supplier name in a temporary `current_asset_metadata` dict.
        * **Skip Check:** Perform the skip-check logic specifically for `current_asset_name` using the `output_base_path`, supplier name, and `current_asset_name`. If skipped, update the overall status and `continue` to the next asset name.
        * **Process:** Call `_process_maps()` and `_merge_maps()`, passing the *filtered* file lists and potentially the `current_asset_metadata`. These methods need to operate only on the provided subset of files.
        * **Generate Metadata:** Call `_generate_metadata_file()`, passing the `current_asset_metadata` and the results from map/merge processing for *this asset*. This method will now write a `metadata.json` specific to `current_asset_name`.
        * **Organize Output:** Call `_organize_output_files()`, passing the `current_asset_name`. This method needs modification:
          * It will move the processed files for the *current asset* to the correct subfolder (`<output_base>/<supplier>/<current_asset_name>/`).
          * It will also identify files from the *original* input whose base name was `None` in the `file_to_base_name_map` (the "unmatched" files).
          * It will copy these "unmatched" files into the `Extra/` subfolder for the *current asset being processed in this loop iteration*.
        * Update the overall status based on the success/failure of this asset's processing.
     7. `_cleanup_workspace()` (only after processing all assets from the input).
     8. **Return:** Return the overall status dictionary summarizing results across all detected assets.

5. **Adapt `_process_maps`, `_merge_maps`, `_generate_metadata_file`, `_organize_output_files`:**
   * Ensure these methods accept and use the filtered file lists and the specific `asset_name` for the current iteration.
   * `_organize_output_files` needs the logic to handle copying the "unmatched" files into the current asset's `Extra/` folder.
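The extraction and mapping step from Phase 1, item 1 can be sketched as a standalone function. This is a minimal sketch assuming the separator is a plain string split and `base_name` is a single part index; the real method also has to restrict itself to relevant stems (maps, models) and read the preset's actual `part_indices` structure.

```python
# Hypothetical sketch of the refactored _determine_base_metadata return shape.
from pathlib import Path
from typing import Dict, List, Optional, Tuple

def determine_base_names(
    files: List[Path], separator: str, base_name_index: int
) -> Tuple[List[str], Dict[Path, Optional[str]]]:
    file_map: Dict[Path, Optional[str]] = {}
    for f in files:
        parts = f.stem.split(separator)
        # A file only matches if the configured index exists in its stem;
        # otherwise it is unmatched (None) and later lands in Extra/.
        file_map[f] = parts[base_name_index] if base_name_index < len(parts) else None
    distinct = sorted({name for name in file_map.values() if name is not None})
    return distinct, file_map
```

For example, `determine_base_names([Path("AssetA_COL.png"), Path("AssetB_COL.png")], "_", 0)` yields two distinct base names instead of a single common-prefix guess.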
**Phase 2: Update Orchestration (`main.py`, `gui/processing_handler.py`)**

1. **Modify `main.process_single_asset_wrapper`:**
   * The call `processor.process()` will now return the overall status dictionary.
   * The wrapper needs to interpret this dictionary to return a single representative status ("processed" if any succeeded, "skipped" if all skipped, "failed" if any failed) and potentially a consolidated error message for the main loop/GUI.

2. **Modify `gui.processing_handler.ProcessingHandler.run`:**
   * No major changes needed here, as it relies on `process_single_asset_wrapper`. The status updates emitted back to the GUI might need slight adjustments if more detailed per-asset status is desired in the future, but for now the overall status from the wrapper should suffice.
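The wrapper's consolidation rule can be sketched as follows, assuming the overall status dict shape from Phase 1, step 5. Treating any failure as the most severe outcome is an assumption here; the ticket lists the three cases without pinning down their precedence.

```python
# Hypothetical consolidation of the per-asset status dict into one
# representative status for the main loop/GUI.
def consolidate_status(overall: dict) -> str:
    if overall.get("failed"):
        return "failed"      # any failed asset marks the whole input failed
    if overall.get("processed"):
        return "processed"   # otherwise, any success counts as processed
    return "skipped"         # nothing processed or failed: all skipped
```

A consolidated error message could be built alongside this by joining the entries in `overall["failed"]`.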
**Phase 3: Update GUI Prediction (`asset_processor.py`, `gui/prediction_handler.py`, `gui/main_window.py`)**

1. **Modify `AssetProcessor.get_detailed_file_predictions`:**
   * This method must now perform the multi-asset detection:
     * Call the refactored `_determine_base_metadata` to get the `distinct_base_names` and `file_to_base_name_map`.
     * Iterate through all classified files (maps, models, extra, ignored).
     * For each file, look up its corresponding base name in the `file_to_base_name_map`.
     * The returned dictionary for each file should now include:
       * `original_path`: str
       * `predicted_asset_name`: str | None (the base name determined for this file, or None if unmatched)
       * `predicted_output_name`: str | None (the predicted final filename, e.g., `AssetName_Color_4K.png`, or the original name for models/extra)
       * `status`: str ("Mapped", "Model", "Extra", "Unrecognised", "Ignored", **"Unmatched Extra"** - a new status for files with a `None` base name)
       * `details`: str | None

2. **Update `gui.prediction_handler.PredictionHandler`:**
   * Ensure it correctly passes the results from `get_detailed_file_predictions` (including the new `predicted_asset_name` and `status` values) back to the main window via signals.

3. **Update `gui.main_window.MainWindow`:**
   * Modify the preview table model/delegate to display the `predicted_asset_name`. A new column might be needed.
   * Update the logic that colors rows or displays status icons to handle the new "Unmatched Extra" status distinctly from regular "Extra" or "Unrecognised".

**Visual Plan (`AssetProcessor.process` Sequence)**
```mermaid
sequenceDiagram
    participant Client as Orchestrator (main.py / GUI Handler)
    participant AP as AssetProcessor
    participant Config as Configuration
    participant FS as File System

    Client->>AP: process(input_path, config, output_base, overwrite)
    AP->>AP: _setup_workspace()
    AP->>FS: Create temp_dir
    AP->>AP: _extract_input()
    AP->>FS: Extract/Copy files to temp_dir
    AP->>AP: _inventory_and_classify_files()
    AP-->>AP: self.classified_files (all files)
    AP->>AP: _determine_base_metadata()
    AP-->>AP: distinct_base_names, file_to_base_name_map

    AP->>AP: Initialize overall_status = {}
    loop For each current_asset_name in distinct_base_names
        AP->>AP: Log start for current_asset_name
        AP->>AP: Filter self.classified_files using file_to_base_name_map
        AP-->>AP: filtered_files_for_asset
        AP->>AP: _determine_single_asset_metadata(current_asset_name, filtered_files_for_asset)
        AP-->>AP: current_asset_metadata (category, archetype)
        AP->>AP: Perform Skip Check for current_asset_name
        alt Skip Check == True
            AP->>AP: Update overall_status (skipped)
            AP->>AP: continue loop
        end
        AP->>AP: _process_maps(filtered_files_for_asset, current_asset_metadata)
        AP-->>AP: processed_map_details_asset
        AP->>AP: _merge_maps(filtered_files_for_asset, current_asset_metadata)
        AP-->>AP: merged_map_details_asset
        AP->>AP: _generate_metadata_file(current_asset_metadata, processed_map_details_asset, merged_map_details_asset)
        AP->>FS: Write metadata.json for current_asset_name
        AP->>AP: _organize_output_files(current_asset_name, file_to_base_name_map)
        AP->>FS: Move processed files for current_asset_name
        AP->>FS: Copy unmatched files to Extra/ for current_asset_name
        AP->>AP: Update overall_status (processed/failed for this asset)
    end
    AP->>AP: _cleanup_workspace()
    AP->>FS: Delete temp_dir
    AP-->>Client: Return overall_status dictionary
```

Tickets/Resolved/FEAT-011-pot-resizing.md
# Ticket: FEAT-011 - Implement Power-of-Two Texture Resizing

**Status:** Open
**Priority:** High
**Assignee:** TBD
**Reporter:** Roo (Architect Mode)

## Description

The current asset processing pipeline resizes textures based on a target maximum dimension (e.g., 4K = 4096px) while maintaining the original aspect ratio. This results in non-power-of-two (NPOT) dimensions for non-square textures, which is suboptimal for rendering performance and compatibility with certain systems.

This feature implements a "Stretch/Squash" approach to ensure all output textures have power-of-two (POT) dimensions for each target resolution key.

## Proposed Solution

1. **Resizing Logic Change:**
   * Modify the `calculate_target_dimensions` helper function in `asset_processor.py`.
   * **Step 1:** Calculate intermediate dimensions (`scaled_w`, `scaled_h`) by scaling the original image (`orig_w`, `orig_h`) to fit within the target resolution key's maximum dimension (e.g., 4096 for "4K") while maintaining the original aspect ratio (using existing logic).
   * **Step 2:** Implement a new helper function `get_nearest_pot(value: int) -> int` to find the closest power-of-two value for a given integer.
   * **Step 3:** Apply `get_nearest_pot()` to `scaled_w` to get the final target power-of-two width (`pot_w`).
   * **Step 4:** Apply `get_nearest_pot()` to `scaled_h` to get the final target power-of-two height (`pot_h`).
   * **Step 5:** Return `(pot_w, pot_h)` from `calculate_target_dimensions`. The `_process_maps` function will then use these POT dimensions in `cv2.resize`.

2. **Helper Function `get_nearest_pot`:**
   * This function will take an integer `value`.
   * It will find the powers of two immediately below (`lower_pot`) and above (`upper_pot`) the value.
   * It will return the power of two that is numerically closer to the original `value` (e.g., `get_nearest_pot(1365)` would return 1024, as `1365 - 1024 = 341` and `2048 - 1365 = 683`).

3. **Filename Convention:**
   * The original resolution tag (e.g., `_4K`, `_2K`) defined in `config.py` will be kept in the output filename, even though the final dimensions are POT. This maintains consistency with the processing target.

4. **Metadata:**
   * The existing aspect-ratio-change metadata calculation (`_normalize_aspect_ratio_change`) will remain unchanged. This metadata can be used downstream to potentially correct the aspect ratio distortion introduced by the stretch/squash resizing.
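The two helpers from items 1 and 2 can be sketched as follows. The fit-within-max scaling and the nearest-POT snap follow the steps above; truncating the intermediate scale with `int()` and breaking ties between the two candidate powers downward are assumptions the ticket does not pin down.

```python
# Sketch of the proposed POT resizing helpers.

def get_nearest_pot(value: int) -> int:
    """Return the power of two numerically closest to value (minimum 1)."""
    upper = 1
    while upper < value:          # smallest power of two >= value
        upper *= 2
    lower = max(upper // 2, 1)    # largest power of two <= value
    # Ties go to the lower power (assumption).
    return lower if (value - lower) <= (upper - value) else upper

def calculate_target_dimensions(orig_w: int, orig_h: int, max_dim: int):
    # Step 1: aspect-preserving fit within max_dim (e.g., 4096 for "4K").
    scale = max_dim / max(orig_w, orig_h)
    scaled_w, scaled_h = int(orig_w * scale), int(orig_h * scale)
    # Steps 2-4: snap each axis independently to the nearest power of two.
    return get_nearest_pot(scaled_w), get_nearest_pot(scaled_h)
```

So a 6000x2000 source targeted at "4K" first scales to roughly 4096x1365, then snaps per axis to (4096, 1024), matching the worked example above.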
## Implementation Diagram
```mermaid
graph TD
    A["Original Dimensions (W, H)"] --> B{"Target Resolution Key (e.g., 4K)"};
    B --> C{"Get Max Dimension (e.g., 4096)"};
    A & C --> D["Calculate Scaled Dimensions (scaled_w, scaled_h) - Maintain Aspect Ratio"];
    D --> E[scaled_w];
    D --> F[scaled_h];
    E --> G["Find Nearest POT(scaled_w) -> pot_w"];
    F --> H["Find Nearest POT(scaled_h) -> pot_h"];
    G & H --> I["Final POT Dimensions (pot_w, pot_h)"];
    I --> J["Use (pot_w, pot_h) in cv2.resize"];
```
## Acceptance Criteria

* All textures output by the `_process_maps` function have power-of-two dimensions (width and height are both powers of 2).
* The resizing uses the "Stretch/Squash" method based on the nearest POT value for each dimension, calculated *after* the initial aspect-preserving scaling.
* The output filename retains the original resolution key (e.g., `_4K`).
* The `get_nearest_pot` helper function correctly identifies the closest power of two.
* The aspect ratio metadata calculation remains unchanged.
---
ID: ISSUE-001
Type: Issue
Status: Backlog
Priority: Medium
Labels: [bug, config, gui]
Created: 2025-04-22
Updated: 2025-04-22
Related:
---

# [ISSUE-001]: Source file naming rules from JSON are not respected

## Description

The tool is not correctly applying the "Source file naming rules" defined in the JSON presets. Specifically, the "base Name index" and "Map type index" values within the `source_naming` section of the preset JSON are not being respected during file processing.

## Current Behavior

When processing assets, the tool (observed in the GUI) does not use the specified "base Name index" and "Map type index" from the active preset's `source_naming` rules to determine the asset's base name and individual map types from the source filenames.

## Desired Behavior / Goals

The tool should accurately read and apply the "base Name index" and "Map type index" values from the selected preset's `source_naming` rules to correctly parse asset base names and map types from source filenames.

## Implementation Notes (Optional)

(Add any thoughts on how this could be implemented, technical challenges, relevant code sections, or ideas for a solution.)

## Acceptance Criteria (Optional)

(Define clear, testable criteria that must be met for the ticket to be considered complete.)

* [ ] Processing an asset with a preset that uses specific `base_name_index` and `map_type_index` values results in the correct asset name and map types being identified according to those indices.
* [ ] This behavior is consistent in both the GUI and CLI.
---
ID: ISSUE-002
Type: Issue
Status: Mostly Resolved
Priority: High
Labels: [bug, core, file-classification]
Created: 2025-04-22
Updated: 2025-04-22
Related:
---

# [ISSUE-002]: Incorrect COL-# numbering with multiple assets in one directory

## Description

When processing a directory containing multiple distinct asset sets (e.g., `Assetname1` and `Assetname2`), the numbering for map types that require variant suffixes (like "COL") is incorrectly incremented across all assets in the directory rather than being reset for each individual asset.

## Current Behavior

If an input directory contains files for `Assetname1` and `Assetname2`, and both have multiple "COL" maps, the numbering continues sequentially across both sets. For example, `Assetname1` might get `_COL-1`, `_COL-2`, while `Assetname2` incorrectly gets `_COL-3`, `_COL-4`, instead of starting its own sequence (`_COL-1`, `_COL-2`). As originally reported: the COL counter accumulates across the whole directory instead of being compared against asset names.

## Desired Behavior / Goals

The tool should correctly identify distinct asset sets within a single input directory and apply variant numbering (like "COL-#") independently for each asset set. The numbering should reset for each new asset encountered in the directory.

## Implementation Notes (Optional)

(This likely requires adjusting the file classification or inventory logic to group files by asset name before applying variant numbering rules.)

## Acceptance Criteria (Optional)

* [ ] Processing a directory containing multiple asset sets with variant map types results in correct, independent numbering for each asset set (e.g., `Assetname1_COL-1`, `Assetname1_COL-2`, `Assetname2_COL-1`, `Assetname2_COL-2`).
* [ ] The numbering is based on the files belonging to a specific asset name, not the overall count of variant maps in the entire input directory.
Tickets/Resolved/ISSUE-005-alpha-mask-channel-incorrect.md
---
ID: ISSUE-005
Type: Issue
Status: Resolved
Priority: High
Labels: [bug, core, image-processing]
Created: 2025-04-22
Updated: 2025-04-22
Related:
---

# [ISSUE-005]: Alpha Mask channel not processed correctly

## Description

When processing source images that contain an alpha channel intended for use as a MASK map, the tool's output for the MASK map is an RGBA image instead of a grayscale image derived solely from the alpha channel.

## Current Behavior

If a source image (e.g., a PNG or TIF) has an alpha channel and is classified as a MASK map type, the resulting output MASK file retains the RGB channels (potentially with incorrect data or black/white values) in addition to the alpha channel, resulting in an RGBA output image.

## Desired Behavior / Goals

When processing a source image with an alpha channel for a MASK map type, the tool should extract only the alpha channel data and output a single-channel (grayscale) image representing the mask. The RGB channels from the source should be discarded for the MASK output.

## Implementation Notes (Optional)

(This requires modifying the image processing logic for MASK map types to specifically isolate and save only the alpha channel as a grayscale image. Need to check the relevant sections in `asset_processor.py` related to map processing and saving.)
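The required channel isolation is just "keep the alpha plane, drop RGB". A library-free illustration on rows of RGBA tuples (the function name and input shape are hypothetical; in the actual pipeline this would be array slicing on the decoded image, e.g. taking the fourth channel of an image loaded with `cv2.IMREAD_UNCHANGED` before saving a single-channel file):

```python
# Illustrative sketch: isolate the alpha channel as a grayscale mask.
def extract_alpha_as_mask(rgba_rows):
    """Given rows of (R, G, B, A) pixels, return rows of single-channel
    alpha values; the RGB data is discarded entirely."""
    return [[pixel[3] for pixel in row] for row in rgba_rows]
```

The key point for the fix is that the saved MASK image must contain only this single channel, so downstream tools read it as grayscale rather than RGBA.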
## Acceptance Criteria (Optional)

* [ ] Process an asset containing a source image with an alpha channel intended as a MASK map.
* [ ] Verify that the output MASK file is a grayscale image (single channel) and accurately represents the alpha channel data from the source.
* [ ] Verify that the output MASK file does not contain redundant or incorrect RGB channel data.

Tickets/Resolved/ISSUE-006-col-increment-multi-asset.md
---
|
||||
ID: ISSUE-006 # e.g., FEAT-001, ISSUE-002
|
||||
Type: Issue # Choose one: Issue or Feature
|
||||
Status: Mostly Resolved # Choose one
|
||||
Priority: Medium # Choose one
|
||||
Labels: [core, bug, map_processing, multi_asset] # Add relevant labels from the list or define new ones
|
||||
Created: 2025-04-22
|
||||
Updated: 2025-04-22
|
||||
Related: #FEAT-004-handle-multi-asset-inputs.md # Links to other tickets (e.g., #ISSUE-YYY), relevant files, or external URLs
|
||||
---
|
||||
|
||||
# [ISSUE-006]: COL-# Suffixes Incorrectly Increment Across Multi-Asset Inputs
|
||||
|
||||
## Description
|
||||
When processing an input (ZIP or folder) that contains files for multiple distinct assets, the numeric suffixes applied to map types listed in `RESPECT_VARIANT_MAP_TYPES` (such as "COL") are currently incremented globally across all files in the input, rather than being reset and incremented independently for each detected asset group.
|
||||
|
||||
## Current Behavior
|
||||
If an input contains files for AssetA (e.g., AssetA_COL.png, AssetA_COL_Variant.png) and AssetB (e.g., AssetB_COL.png), the output might incorrectly number them as AssetA_COL-1.png, AssetA_COL-2.png, and AssetB_COL-3.png. The expectation is that numbering should restart for each asset, resulting in AssetA_COL-1.png, AssetA_COL-2.png, and AssetB_COL-1.png.

## Desired Behavior / Goals
The numeric suffix for map types in `RESPECT_VARIANT_MAP_TYPES` should be determined and applied independently for each distinct asset detected within a multi-asset input. The numbering should start from -1 for each asset group.

## Implementation Notes (Optional)
- The logic for assigning suffixes is primarily within `AssetProcessor._inventory_and_classify_files`.
- This method currently classifies all files from the input together before determining asset groups.
- The classification logic needs to be adjusted to perform suffix assignment *after* files have been grouped by their determined asset name.
- This might require modifying the output of `_inventory_and_classify_files` or adding a new step after `_determine_base_metadata` to re-process or re-structure the classified files per asset for suffix assignment.
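The grouping-then-numbering idea from the notes above can be sketched as a standalone function. The tuple shape of the classified-file records and the function name are assumptions; the real structure lives in `AssetProcessor._inventory_and_classify_files`:

```python
from collections import defaultdict

# Assumed subset of the real set defined in config.py.
RESPECT_VARIANT_MAP_TYPES = {"COL"}

def assign_suffixes(classified):
    """Number RESPECT_VARIANT map types independently per asset.

    `classified` is a list of (asset_name, map_type, filename) tuples.
    Returns {filename: map label}, with numbering restarting at -1
    for each (asset, map_type) group.
    """
    groups = defaultdict(list)
    for asset, map_type, filename in classified:
        groups[(asset, map_type)].append(filename)

    labels = {}
    for (asset, map_type), files in groups.items():
        if map_type in RESPECT_VARIANT_MAP_TYPES:
            # Suffix counter is scoped to this asset's group only.
            for i, f in enumerate(files, start=1):
                labels[f] = f"{map_type}-{i}"
        else:
            for f in files:
                labels[f] = map_type
    return labels
```

With the example from "Current Behavior", AssetB's lone COL map now comes out as `COL-1` rather than continuing AssetA's counter.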

## Acceptance Criteria (Optional)
* [ ] Processing a multi-asset input containing multiple "COL" variants for different assets results in correct COL-# suffixes starting from -1 for each asset's output files.
* [ ] The GUI preview accurately reflects the correct COL-# numbering for each file based on its predicted asset name.
* [ ] The CLI processing output confirms the correct numbering in the generated filenames.
30
Tickets/Resolved/ISSUE-007-respect-variant-single-map.md
Normal file
@@ -0,0 +1,30 @@
---
ID: ISSUE-007 # e.g., FEAT-001, ISSUE-002
Type: Issue # Choose one: Issue or Feature
Status: Resolved # Choose one
Priority: Medium # Choose one
Labels: [core, bug, map_processing, suffix, regression] # Add relevant labels from the list or define new ones
Created: 2025-04-22
Updated: 2025-04-22
Related: #ISSUE-006-col-increment-multi-asset.md # Links to other tickets (e.g., #ISSUE-YYY), relevant files, or external URLs
---

# [ISSUE-007]: Suffix Not Applied to Single Maps in RESPECT_VARIANT_MAP_TYPES

## Description
Map types listed in `config.py`'s `RESPECT_VARIANT_MAP_TYPES` (e.g., "COL") are expected to always receive a numeric suffix (e.g., "-1"), even if only one map of that type exists for a given asset. Following the fix for #ISSUE-006, this behavior is no longer occurring. Single maps of these types are now output without a suffix.

## Current Behavior
An asset containing only one map file designated as "COL" (e.g., `AssetA_COL.png`) results in processed output files named like `AssetA_COL_4K.png`, without the `-1` suffix.

## Desired Behavior / Goals
An asset containing only one map file designated as "COL" (or any other type listed in `RESPECT_VARIANT_MAP_TYPES`) should result in processed output files named like `AssetA_COL-1_4K.png`, correctly applying the numeric suffix even when it's the sole variant.

## Implementation Notes (Optional)
- The per-asset suffix assignment logic added in `AssetProcessor.process` (around line 233 in the previous diff) likely needs adjustment.
- The condition `if respect_variants:` might need to be modified, or the loop/enumeration logic needs to explicitly handle the case where `len(maps_in_group) == 1` for types listed in `RESPECT_VARIANT_MAP_TYPES`.
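A minimal illustration of the corrected condition described above. The function name and parameters are hypothetical stand-ins; the real decision is made inside `AssetProcessor.process`:

```python
def choose_suffix(index: int, group_size: int, respect_variants: bool) -> str:
    """Return the numeric suffix for a map within its per-asset group.

    Types in RESPECT_VARIANT_MAP_TYPES always receive a suffix, even
    when the group holds a single map; other types are suffixed only
    when genuine variants exist.
    """
    if respect_variants:
        return f"-{index + 1}"  # suffix applied even for a lone map: COL-1
    return f"-{index + 1}" if group_size > 1 else ""
```

The key change relative to the buggy behavior is that the `respect_variants` branch no longer consults `group_size`, so a single COL map still gets `-1`.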

## Acceptance Criteria (Optional)
* [ ] Processing an asset with a single "COL" map results in output files with the `COL-1` suffix.
* [ ] Processing an asset with multiple "COL" maps still results in correctly incremented suffixes (`COL-1`, `COL-2`, etc.).
* [ ] Map types *not* listed in `RESPECT_VARIANT_MAP_TYPES` continue to receive no suffix if only one exists.
@@ -0,0 +1,24 @@
# ISSUE: GUI - "Disable Detailed Preview" feature regression

**Ticket Type:** Issue
**Priority:** Medium

**Description:**
The "Disable Detailed Preview" feature in the GUI is currently not functioning correctly. When attempting to disable the detailed file preview (via the View menu), the GUI does not switch to the simpler asset list view. This regression prevents users from using the less detailed preview mode, which may impact performance or usability, especially when dealing with inputs containing a large number of files.

**Steps to Reproduce:**
1. Launch the GUI (`python -m gui.main_window`).
2. Load an asset (ZIP or folder) into the drag and drop area. Observe the detailed preview table populating.
3. Go to the "View" menu.
4. Select/Deselect the "Detailed File Preview" option.

**Expected Result:**
The preview table should switch between the detailed file list view and the simple asset list view when the menu option is toggled.

**Actual Result:**
The preview table remains in the detailed file list view regardless of the "Detailed File Preview" menu option state.

**Relevant Files/Components:**
* `gui/main_window.py`: Likely contains the logic for the View menu and handling the toggle state.
* `gui/prediction_handler.py`: Manages the background process that generates the detailed preview data. The GUI needs to be able to stop or not request this process when detailed preview is disabled.
* `gui/preview_table_model.py`: Manages the data and display logic for the preview table. It should adapt its display based on whether detailed preview is enabled or disabled.