Prototype > PreAlpha #67
ProjectNotes/Data_Flow_Refinement_Plan.md
@@ -0,0 +1,124 @@

# Architectural Plan: Data Flow Refinement (v3)

**Date:** 2025-04-30

**Author:** Roo (Architect Mode)

**Status:** Approved

## 1. Goal

Refine the application's data flow to establish the GUI as the single source of truth for processing rules. This involves moving prediction/preset logic upstream from the backend processor and ensuring the backend receives a *complete* `SourceRule` object for processing, thereby simplifying the processor itself. This version of the plan involves creating a new processing module (`processing_engine.py`) instead of refactoring the existing `asset_processor.py`.

## 2. Proposed Data Flow

The refined data flow centralizes rule generation and modification within the GUI components before passing a complete, explicit rule set to the backend. The `SourceRule` object structure serves as a consistent data contract throughout the pipeline.

```mermaid
sequenceDiagram
    participant User
    participant GUI_MainWindow as GUI (main_window.py)
    participant GUI_Predictor as Predictor (prediction_handler.py)
    participant GUI_UnifiedView as Unified View (unified_view_model.py)
    participant Main as main.py
    participant ProcessingEngine as New Backend (processing_engine.py)
    participant Config as config.py

    User->>+GUI_MainWindow: Selects Input & Preset
    Note over GUI_MainWindow: Scans input, gets file list
    GUI_MainWindow->>+GUI_Predictor: Request Prediction(File List, Preset Name, Input ID)
    GUI_Predictor->>+Config: Load Preset Rules & Canonical Types
    Config-->>-GUI_Predictor: Return Rules & Types
    %% Prediction Logic (Internal to Predictor)
    Note over GUI_Predictor: Perform file analysis (based on list), apply preset rules, generate COMPLETE SourceRule hierarchy (only overridable fields populated)
    GUI_Predictor-->>-GUI_MainWindow: Return List[SourceRule] (Initial Rules)
    GUI_MainWindow->>+GUI_UnifiedView: Populate View(List[SourceRule])
    GUI_UnifiedView->>+Config: Read Allowed Asset/File Types for Dropdowns
    Config-->>-GUI_UnifiedView: Return Allowed Types
    Note over GUI_UnifiedView: Display rules, allow user edits
    User->>GUI_UnifiedView: Modifies Rules (Overrides)
    GUI_UnifiedView-->>GUI_MainWindow: Update SourceRule Objects in Memory
    User->>+GUI_MainWindow: Trigger Processing
    GUI_MainWindow->>+Main: Send Final List[SourceRule]
    Main->>+ProcessingEngine: Queue Task(SourceRule) for each input
    Note over ProcessingEngine: Execute processing based *solely* on the provided SourceRule and static config. No internal prediction/fallback.
    ProcessingEngine-->>-Main: Processing Result
    Main-->>-GUI_MainWindow: Update Status
    GUI_MainWindow-->>User: Show Result/Status
```

## 3. Module-Specific Changes

* **`config.py`:**
    * **Add Canonical Lists:** Introduce `ALLOWED_ASSET_TYPES` (e.g., `["Surface", "Model", "Decal", "Atlas", "UtilityMap"]`) and `ALLOWED_FILE_TYPES` (e.g., `["MAP_COL", "MAP_NRM", ..., "MODEL", "EXTRA", "FILE_IGNORE"]`).
    * **Purpose:** Single source of truth for GUI dropdowns and validation.
    * **Existing Config:** Retains static definitions like `IMAGE_RESOLUTIONS`, `MAP_MERGE_RULES`, `JPG_QUALITY`, etc.
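
A minimal sketch of how these canonical lists could back both the dropdowns and validation (list contents are taken from this plan; the `validate_override` helper is illustrative, not existing code):

```python
# Canonical lists as the single source of truth (contents from this plan).
ALLOWED_ASSET_TYPES = ["Surface", "Model", "Decal", "Atlas", "UtilityMap"]
ALLOWED_FILE_TYPES = [
    "MAP_COL", "MAP_NRM", "MAP_METAL", "MAP_ROUGH", "MAP_AO", "MAP_DISP",
    "MAP_REFL", "MAP_SSS", "MAP_FUZZ", "MAP_IDMAP", "MAP_MASK",
    "MAP_IMPERFECTION", "MODEL", "EXTRA", "FILE_IGNORE",
]

def validate_override(value, allowed):
    """Return value if it is an allowed type, or None meaning 'no override'."""
    # validate_override is a hypothetical helper, not part of the codebase.
    if value is None or value in allowed:
        return value
    raise ValueError(f"{value!r} is not one of {allowed}")
```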

* **`rule_structure.py`:**
    * **Remove Enums:** Remove `AssetType` and `ItemType` Enums. Update `AssetRule.asset_type`, `FileRule.item_type_override`, etc., to use string types validated against `config.py` lists.
    * **Field Retention:** Keep `FileRule.resolution_override` and `FileRule.channel_merge_instructions` fields for structural consistency, but they will not be populated or used for overrides in this flow.
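
A hypothetical sketch of the string-typed hierarchy after the enums are removed (field names follow this plan; defaults and exact types are assumptions):

```python
# Sketch only: the real dataclasses may carry more fields and validation.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FileRule:
    file_path: str
    item_type_override: Optional[str] = None   # string, validated against ALLOWED_FILE_TYPES
    target_asset_name_override: Optional[str] = None
    output_format_override: Optional[str] = None
    # Kept for structural consistency; NOT populated in this flow:
    resolution_override: Optional[str] = None
    channel_merge_instructions: Optional[dict] = None

@dataclass
class AssetRule:
    asset_name: str
    asset_type: Optional[str] = None  # string, validated against ALLOWED_ASSET_TYPES
    files: List[FileRule] = field(default_factory=list)

@dataclass
class SourceRule:
    input_path: str
    supplier_identifier: Optional[str] = None
    assets: List[AssetRule] = field(default_factory=list)
```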

* **`gui/prediction_handler.py` (or equivalent):**
    * **Enhance Prediction Logic:** Modify the `run_prediction` method.
    * **Input:** Accept `input_source_identifier` (string), `file_list` (`List[str]` of relative paths), and `preset_name` (string) when called from the GUI.
    * **Load Config:** Read `ALLOWED_ASSET_TYPES`, `ALLOWED_FILE_TYPES`, and preset rules.
    * **Relocate Classification:** Integrate classification/naming logic (previously in `asset_processor.py`) to operate on the provided `file_list`.
    * **Generate Complete Rules:** Populate `SourceRule`, `AssetRule`, and `FileRule` objects.
        * Set initial values only for *overridable* fields (e.g., `asset_type`, `item_type_override`, `target_asset_name_override`, `supplier_identifier`, `output_format_override`) based on preset rules/defaults.
        * Explicitly **do not** populate static config fields like `FileRule.resolution_override` or `FileRule.channel_merge_instructions`.
    * **Temporary Files (If needed for non-GUI):** May need logic later to handle direct path inputs (CLI/Docker) involving temporary extraction/cleanup, but the primary GUI flow uses the provided list.
    * **Output:** Emit the `rule_hierarchy_ready` signal with the `List[SourceRule]`.
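
The rule-population step above can be sketched without Qt (the classified-file shape mirrors the `classify_files` helper later in this diff; the function name, the dict-based rules, and the default asset type are assumptions for illustration):

```python
# Hypothetical sketch: turn classified file groups into a complete rule
# hierarchy with only overridable fields populated.
def build_source_rule(input_id, classified, default_asset_type="Surface"):
    assets = []
    for asset_name, files in classified.items():
        file_rules = [
            {"file_path": f["file_path"],
             "item_type_override": f["item_type"],
             # Static config fields deliberately left unset:
             "resolution_override": None,
             "channel_merge_instructions": None}
            for f in files
        ]
        assets.append({"asset_name": asset_name,
                       "asset_type": default_asset_type,
                       "files": file_rules})
    return {"input_path": input_id, "assets": assets}
```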

* **`processing_engine.py` (New Module):**
    * **Purpose:** Contains a new class (e.g., `ProcessingEngine`) for executing the processing pipeline based solely on a complete `SourceRule` and static configuration. Replaces `asset_processor.py` in the main workflow.
    * **Initialization (`__init__`):** Takes the static `Configuration` object as input.
    * **Core Method (`process`):** Accepts a single, complete `SourceRule` object. Orchestrates the processing steps (workspace setup, extraction, map processing, merging, metadata, organization, cleanup).
    * **Helper Methods (Refactored Logic):** Implement simplified versions of the processing helpers (e.g., `_process_individual_maps`, `_merge_maps_from_source`, `_generate_metadata_file`, `_organize_output_files`, `_load_and_transform_source`, `_save_image`).
        * Retrieve *overridable* parameters directly from the input `SourceRule`.
        * Retrieve *static configuration* parameters (resolutions, merge rules) **only** from the stored `Configuration` object.
        * Contain **no** prediction, classification, or fallback logic.
    * **Dependencies:** `rule_structure.py`, `configuration.py`, `config.py`, `cv2`, `numpy`, etc.
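
The engine's contract can be sketched as follows (a minimal illustration, not the real implementation; the dict-shaped rules and the `default_resolution` key are assumptions):

```python
# Hypothetical skeleton: executes purely from the provided rule plus static
# configuration, with no prediction or fallback logic.
class ProcessingEngine:
    def __init__(self, configuration):
        # Static config (resolutions, merge rules) comes ONLY from here.
        self.config = configuration

    def process(self, source_rule):
        """Run the pipeline for one complete SourceRule; returns a work list."""
        results = []
        for asset in source_rule["assets"]:
            for file_rule in asset["files"]:
                # Overridable parameters come directly from the rule...
                item_type = file_rule["item_type_override"]
                # ...static parameters only from the stored configuration.
                resolution = self.config["default_resolution"]
                results.append((file_rule["file_path"], item_type, resolution))
        return results
```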

* **`asset_processor.py` (Old Module):**
    * **Status:** Remains in the codebase **unchanged** for reference.
    * **Usage:** No longer called by `main.py` or the GUI for standard processing.

* **`gui/main_window.py`:**
    * **Scan Input:** Perform the initial directory/archive scan to get the file list for each input.
    * **Initiate Prediction:** Call `PredictionHandler` with the file list, preset, and input identifier.
    * **Receive/Pass Rules:** Handle `rule_hierarchy_ready`, pass the `SourceRule` list to `UnifiedViewModel`.
    * **Send Final Rules:** Send the final `SourceRule` list to `main.py`.

* **`gui/unified_view_model.py` / `gui/delegates.py`:**
    * **Load Dropdown Options:** Source dropdowns (`AssetType`, `ItemType`) from `config.py`.
    * **Data Handling:** Read/write user modifications to overridable fields in `SourceRule` objects.
    * **No UI for Static Config:** Do not provide UI editing for resolution or merge instructions.

* **`main.py`:**
    * **Receive Rule List:** Accept `List[SourceRule]` from the GUI.
    * **Instantiate New Engine:** Import and instantiate the new `ProcessingEngine` from `processing_engine.py`.
    * **Queue Tasks:** Iterate the `SourceRule` list, queue tasks.
    * **Call New Engine:** Pass the individual `SourceRule` object to `ProcessingEngine.process` for each task.
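
The queue-and-dispatch loop in `main.py` might look roughly like this (a sketch; the helper name and the use of a plain `queue.Queue` are assumptions, not the actual implementation):

```python
# Hypothetical sketch: one queued task per SourceRule, each handed to the
# engine's process callable in turn.
import queue

def queue_and_process(source_rules, engine_process):
    tasks = queue.Queue()
    for rule in source_rules:
        tasks.put(rule)
    results = []
    while not tasks.empty():
        results.append(engine_process(tasks.get()))
    return results
```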

## 4. Rationale / Benefits

* **Single Source of Truth:** The GUI holds the final `SourceRule` objects.
* **Backend Simplification:** The new `processing_engine.py` is focused solely on execution based on explicit rules and static config.
* **Decoupling:** Reduced coupling between GUI/prediction and backend processing.
* **Clarity:** Clearer data flow and component responsibilities.
* **Maintainability:** Easier maintenance and debugging.
* **Centralized Definitions:** `config.py` centralizes allowed types.
* **Preserves Reference:** Keeps `asset_processor.py` available for comparison.
* **Consistent Data Contract:** The `SourceRule` structure is consistent from predictor output to engine input, enabling a potential GUI bypass.

## 5. Potential Issues / Considerations

* **`PredictionHandler` Complexity:** Will require careful implementation of the classification/rule-population logic.
* **Performance:** Prediction logic needs to remain performant (threading).
* **Rule Structure Completeness:** Ensure the `SourceRule` dataclasses hold all necessary *overridable* fields.
* **Preset Loading:** Robust preset loading/interpretation is needed in `PredictionHandler`.
* **Static Config Loading:** Ensure the new `ProcessingEngine` correctly loads and uses the static `Configuration` object.

## 6. Documentation

This document (`ProjectNotes/Data_Flow_Refinement_Plan.md`) serves as the architectural plan. Relevant sections of the Developer Guide will need updating upon implementation.

ProjectNotes/GUI_Overhaul_Plan_Unified_View.md
@@ -0,0 +1,65 @@

# GUI Overhaul Plan: Unified Hierarchical View

**Task:** Implement a UI overhaul for the Asset Processor Tool GUI to address usability issues and streamline the workflow for viewing and editing processing rules.

**Context:**

* A hierarchical rule system (`SourceRule`, `AssetRule`, `FileRule` in `rule_structure.py`) is used by the core engine (`asset_processor.py`).
* The current GUI (`gui/main_window.py`, `gui/rule_hierarchy_model.py`, `gui/rule_editor_widget.py`) uses a `QTreeView` for the hierarchy, a separate `RuleEditorWidget` for editing selected items, and a `QTableView` (`PreviewTableModel`) for previewing file classifications.
* Relevant files analyzed: `gui/main_window.py`, `gui/rule_editor_widget.py`, `gui/rule_hierarchy_model.py`.

**Identified Issues with Current UI:**

1. **Window Resizing:** Selecting Source/Asset items causes window expansion because `RuleEditorWidget` displays large child lists (`assets`, `files`) as simple labels.
2. **GUI Not Updating on Add:** Potential regression where adding new inputs doesn't reliably update the preview/hierarchy.
3. **Incorrect Source Display:** The tree view shows "Source: None" instead of the input path (likely `SourceRule.input_path` is None when the model receives it).
4. **Preview Table Stale:** Changes made in `RuleEditorWidget` (e.g., overrides) are not reflected in the `PreviewTableModel` because the `_on_rule_updated` slot in `main_window.py` doesn't trigger a refresh.

**Agreed-Upon Overhaul Plan:**

The goal is to create a more unified and streamlined experience by merging the hierarchy, override editing, and preview into a single view, reducing redundancy.

1. **UI Structure Redesign:**
    * **Left Panel:** Retain the existing Preset Editor panel (`main_window.py`'s `editor_panel`) for managing preset files (`.json`) and their complex rules (naming patterns, map type mappings, archetype rules, etc.).
    * **Right Panel:** Replace the current three-part splitter (Hierarchy Tree, Rule Editor, Preview Table) with a **single Unified Hierarchical View**.
        * Implementation: Use a `QTreeView` with a custom `QAbstractItemModel` and custom `QStyledItemDelegate`s for inline editing.
        * Hierarchy Display: Show Input Source(s) -> Assets -> Files.
        * Visual Cues: Use distinct background colors for rows representing Inputs, Assets, and Files.

2. **Unified View Columns & Functionality:**
    * **Column 1: Name/Hierarchy:** Displays input path, asset name, or file name with indentation.
    * **Column 2+: Editable Attributes (Context-Dependent):** Implement inline editors using delegates:
        * **Input Row:** Optional editable field for `Supplier` override.
        * **Asset Row:** `QComboBox` delegate for `Asset-Type` override (e.g., `GENERIC`, `DECAL`, `MODEL`).
        * **File Row:**
            * `QLineEdit` delegate for `Target Asset Name` override.
            * `QComboBox` delegate for `Item-Type` override (e.g., `MAP-COL`, `MAP-NRM`, `EXTRA`, `MODEL_FILE`).
    * **Column X: Status (Optional, Post-Processing):** Non-editable column showing a processing status icon/text (Pending, Success, Warning, Error).
    * **Column Y: Output Path (Optional, Post-Processing):** Non-editable column showing the final output path after successful processing.

3. **Data Flow and Initialization:**
    * When inputs are added and a preset selected, `PredictionHandler` runs.
    * `PredictionHandler` generates the `SourceRule` hierarchy *and* predicts initial `Asset-Type`, `Item-Type`, and `Target Asset Name`.
    * The Unified View's model is populated with this `SourceRule`.
    * *Initial values* in inline editors are set based on these *predicted* values.
    * User edits in the Unified View directly modify attributes on the `SourceRule`, `AssetRule`, or `FileRule` objects held by the model.

4. **Dropdown Options Source:**
    * Available options in dropdowns (`Asset-Type`, `Item-Type`) should be sourced from globally defined lists or Enums (e.g., in `rule_structure.py` or `config.py`).

5. **Addressing Original Issues (How the Plan Fixes Them):**
    * **Window Resizing:** Resolved by removing `RuleEditorWidget`.
    * **GUI Not Updating on Add:** The fix requires ensuring `add_input_paths` triggers `PredictionHandler` and updates the new Unified View model correctly.
    * **Incorrect Source Display:** The fix requires ensuring `PredictionHandler` correctly populates `SourceRule.input_path`.
    * **Preview Table Stale:** Resolved by merging preview/editing; edits are live in the main view.
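
The "edits are live" behavior can be sketched without Qt: the delegate's commit writes straight onto the rule object held by the model, so there is no stale copy to refresh (the column-to-field mapping and attribute names below are assumptions for illustration):

```python
# Hypothetical sketch of the model's write-back step.
COLUMN_TO_FIELD = {2: "asset_type", 3: "target_asset_name_override", 4: "item_type_override"}

class RuleRow:
    """Stand-in for an AssetRule/FileRule object held by the model."""
    def __init__(self):
        self.asset_type = None
        self.target_asset_name_override = None
        self.item_type_override = None

def set_data(rule_obj, column, value):
    field = COLUMN_TO_FIELD.get(column)
    if field is None:
        return False
    setattr(rule_obj, field, value)  # the edit lands on the rule object itself
    return True
```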

**Implementation Tasks:**

* Modify `gui/main_window.py`: Remove the right-side splitter, `RuleEditorWidget`, and `PreviewTableModel`/`View`. Instantiate the new Unified View. Adapt `add_input_paths`, `start_processing`, `_on_rule_hierarchy_ready`, etc., to interact with the new view/model.
* Create/Modify Model (`gui/rule_hierarchy_model.py` or a new file): Implement a `QAbstractItemModel` supporting multiple columns and hierarchical data, and providing data/flags for inline editing.
* Create Delegates (`gui/delegates.py`?): Implement `QStyledItemDelegate` subclasses for `QComboBox` and `QLineEdit` editors in the tree view.
* Modify `gui/prediction_handler.py`: Ensure it predicts initial override values (`Asset-Type`, `Item-Type`, `Target Asset Name`) and includes them in the data passed back to the main window (likely within the `SourceRule` structure or alongside it). Ensure `SourceRule.input_path` is correctly set.
* Modify `gui/processing_handler.py`: Update it to potentially signal back status/output path updates that can be reflected in the new Unified View model's optional columns.
* Define Dropdown Sources: Add the necessary Enums or lists to `rule_structure.py` or `config.py`.

This plan provides a clear path forward for implementing the UI overhaul.
Binary file not shown.

config.py
@@ -1,6 +1,14 @@
# config.py
# Core settings defining the pipeline standards and output format.

# --- Core Definitions ---
ALLOWED_ASSET_TYPES = ["Surface", "Model", "Decal", "Atlas", "UtilityMap"]
ALLOWED_FILE_TYPES = [
    "MAP_COL", "MAP_NRM", "MAP_METAL", "MAP_ROUGH", "MAP_AO", "MAP_DISP",
    "MAP_REFL", "MAP_SSS", "MAP_FUZZ", "MAP_IDMAP", "MAP_MASK",
    "MAP_IMPERFECTION",  # Added for imperfection maps
    "MODEL", "EXTRA", "FILE_IGNORE"
]
# --- Target Output Standards ---
TARGET_FILENAME_PATTERN = "{base_name}_{map_type}_{resolution}.{ext}"
STANDARD_MAP_TYPES = [

Binary file not shown.

gui/delegates.py
@@ -0,0 +1,89 @@
# gui/delegates.py
from PySide6.QtWidgets import QStyledItemDelegate, QLineEdit, QComboBox
from PySide6.QtCore import Qt, QModelIndex
from config import ALLOWED_ASSET_TYPES, ALLOWED_FILE_TYPES  # Import config lists


class LineEditDelegate(QStyledItemDelegate):
    """Delegate for editing string values using a QLineEdit."""
    def createEditor(self, parent, option, index):
        # Creates the QLineEdit editor widget used for editing.
        editor = QLineEdit(parent)
        return editor

    def setEditorData(self, editor: QLineEdit, index: QModelIndex):
        # Sets the editor's initial data based on the model's data.
        # Use EditRole to get the raw data suitable for editing.
        value = index.model().data(index, Qt.EditRole)
        editor.setText(str(value) if value is not None else "")

    def setModelData(self, editor: QLineEdit, model, index: QModelIndex):
        # Commits the editor's data back to the model.
        value = editor.text()
        # Pass the potentially modified text back to the model's setData.
        model.setData(index, value, Qt.EditRole)

    def updateEditorGeometry(self, editor, option, index):
        # Ensures the editor widget is placed correctly within the cell.
        editor.setGeometry(option.rect)


class ComboBoxDelegate(QStyledItemDelegate):
    """
    Delegate for editing string values from a predefined list using a QComboBox.
    Determines the list source based on column index.
    """
    def createEditor(self, parent, option, index: QModelIndex):
        # Creates the QComboBox editor widget.
        editor = QComboBox(parent)
        column = index.column()
        model = index.model()  # Get the model instance

        # Add a "clear" option first, associating None with it.
        editor.addItem("---", None)  # UserData = None

        # Populate based on column using lists from config
        items_list = None
        if column == 2:  # Asset-Type Override (AssetRule)
            items_list = ALLOWED_ASSET_TYPES
        elif column == 4:  # Item-Type Override (FileRule)
            items_list = ALLOWED_FILE_TYPES

        if items_list:
            for item_str in items_list:
                # Add item with the string itself as text and UserData
                editor.addItem(item_str, item_str)
        else:
            # If the delegate is incorrectly applied to another column,
            # it will just have the "---" option.
            pass

        return editor

    def setEditorData(self, editor: QComboBox, index: QModelIndex):
        # Sets the combo box's current item based on the model's string data.
        # Get the current string value (or None) from the model via EditRole.
        value = index.model().data(index, Qt.EditRole)  # This should be a string or None

        idx = -1
        if value is not None:
            # Find the index corresponding to the string value.
            idx = editor.findText(value)
        else:
            # If the model value is None, find the "---" item.
            idx = editor.findData(None)  # Find the item with UserData == None

        # Set the current index, defaulting to 0 ("---") if not found.
        editor.setCurrentIndex(idx if idx != -1 else 0)

    def setModelData(self, editor: QComboBox, model, index: QModelIndex):
        # Commits the selected combo box data (string or None) back to the model.
        # Get the UserData associated with the currently selected item.
        # This will be the string value or None (for the "---" option).
        value = editor.currentData()  # This is either the string or None
        # Pass this string value or None back to the model's setData.
        model.setData(index, value, Qt.EditRole)

    def updateEditorGeometry(self, editor, option, index):
        # Ensures the editor widget is placed correctly within the cell.
        editor.setGeometry(option.rect)

File diff suppressed because it is too large

gui/prediction_handler.py
@@ -1,17 +1,19 @@
from rule_structure import SourceRule, AssetRule, FileRule
# gui/prediction_handler.py
import logging
from pathlib import Path
import time  # For potential delays if needed
import os  # For cpu_count
from concurrent.futures import ThreadPoolExecutor, as_completed  # For parallel prediction
import time
import os
import re  # Import regex
import tempfile  # Added for temporary extraction directory
import zipfile  # Added for zip file handling
# import patoolib  # Potential import for rar/7z - Add later if zip works
from collections import defaultdict
from typing import List, Dict, Any  # For type hinting

# --- PySide6 Imports ---
from PySide6.QtCore import QObject, Signal, QThread, Slot  # Import QThread and Slot
from PySide6.QtCore import QObject, Signal, QThread, Slot

# --- Backend Imports ---
# Adjust path to ensure modules can be found relative to this file's location
import sys
script_dir = Path(__file__).parent
project_root = script_dir.parent
@@ -20,15 +22,22 @@ if str(project_root) not in sys.path:

try:
    from configuration import Configuration, ConfigurationError
    from asset_processor import AssetProcessor, AssetProcessingError
    # AssetProcessor might not be needed directly anymore if logic is moved here
    # from asset_processor import AssetProcessor, AssetProcessingError
    from rule_structure import SourceRule, AssetRule, FileRule  # Removed AssetType, ItemType
    import config as app_config  # Import project's config module
    # Import the lists directly for easier access
    from config import ALLOWED_ASSET_TYPES, ALLOWED_FILE_TYPES
    BACKEND_AVAILABLE = True
except ImportError as e:
    print(f"ERROR (PredictionHandler): Failed to import backend modules: {e}")
    print(f"ERROR (PredictionHandler): Failed to import backend/config modules: {e}")
    # Define placeholders if imports fail
    Configuration = None
    AssetProcessor = None
    # AssetProcessor = None
    ConfigurationError = Exception
    AssetProcessingError = Exception
    # AssetProcessingError = Exception
    SourceRule, AssetRule, FileRule = (None,) * 3  # Placeholder for rule structures
    app_config = None  # Placeholder for config
    BACKEND_AVAILABLE = False

log = logging.getLogger(__name__)
@@ -37,17 +46,155 @@ if not log.hasHandlers():
    logging.basicConfig(level=logging.INFO, format='%(levelname)s (PredictHandler): %(message)s')


# Helper function for classification (can be moved outside class if preferred)
def classify_files(file_list: List[str], config: Configuration) -> Dict[str, List[Dict[str, Any]]]:
    """
    Analyzes a list of files based on configuration rules to group them by asset
    and determine initial file properties.

    Args:
        file_list: List of absolute file paths.
        config: The loaded Configuration object containing naming rules.

    Returns:
        A dictionary grouping file information by predicted asset name.
        Example:
        {
            'AssetName1': [
                {'file_path': '/path/to/AssetName1_Color.png', 'item_type': 'Color', 'asset_name': 'AssetName1'},
                {'file_path': '/path/to/AssetName1_Normal.png', 'item_type': 'Normal', 'asset_name': 'AssetName1'}
            ],
            # ... other assets
        }
        Returns an empty dict if classification fails or no files are provided.
    """
    temp_grouped_files = defaultdict(list)
    extra_files_to_associate = []  # Store tuples: (file_path_str, filename)
    primary_asset_names = set()  # Store asset names derived from map files

    # --- Validation ---
    if not file_list or not config:
        log.warning("Classification skipped: Missing file list or config.")
        return {}
    if not hasattr(config, 'compiled_map_keyword_regex') or not config.compiled_map_keyword_regex:
        log.warning("Classification skipped: Missing compiled map keyword regex.")
        # Don't return yet, might still find extras
    if not hasattr(config, 'compiled_extra_regex'):
        log.warning("Configuration object missing 'compiled_extra_regex'. Cannot classify extra files.")
        # Continue, but extras won't be found

    compiled_map_regex = getattr(config, 'compiled_map_keyword_regex', {})
    compiled_extra_regex = getattr(config, 'compiled_extra_regex', [])
    num_map_rules = sum(len(patterns) for patterns in compiled_map_regex.values())
    num_extra_rules = len(compiled_extra_regex)

    log.debug(f"Starting classification for {len(file_list)} files using {num_map_rules} map keyword patterns and {num_extra_rules} extra patterns.")

    # --- Initial Pass: Classify Maps and Identify Extras ---
    for file_path_str in file_list:
        file_path = Path(file_path_str)
        filename = file_path.name
        is_extra = False
        is_map = False

        # 1. Check for Extra Files FIRST
        for extra_pattern in compiled_extra_regex:
            if extra_pattern.search(filename):
                log.debug(f"File '(unknown)' matched EXTRA pattern: {extra_pattern.pattern}")
                extra_files_to_associate.append((file_path_str, filename))
                is_extra = True
                break  # Stop checking extra patterns for this file

        if is_extra:
            continue  # Move to the next file if it's an extra

        # 2. Check for Map Files
        # TODO: Consider rule priority if multiple patterns match the same file
        for target_type, patterns_list in compiled_map_regex.items():
            for compiled_regex, original_keyword, rule_index in patterns_list:
                match = compiled_regex.search(filename)
                if match:
                    matched_item_type = target_type  # The standard type (e.g., MAP_COL)
                    asset_name = None
                    # --- Asset Name Extraction Logic (Simplified Heuristic) ---
                    match_start_index = match.start(1)
                    if match_start_index > 0:
                        potential_name = filename[:match_start_index].rstrip('_- .')
                        asset_name = potential_name if potential_name else file_path.stem
                    else:
                        asset_name = file_path.stem
                    if not asset_name:
                        asset_name = file_path.stem

                    log.debug(f"File '(unknown)' matched keyword '{original_keyword}' (rule {rule_index}) for item_type '{matched_item_type}'. Assigned asset name: '{asset_name}'")
                    temp_grouped_files[asset_name].append({
                        'file_path': file_path_str,
                        'item_type': matched_item_type,
                        'asset_name': asset_name
                    })
                    primary_asset_names.add(asset_name)  # Mark this as a primary asset name
                    is_map = True
                    break  # Stop checking patterns for this file
            if is_map:
                break  # Stop checking target types for this file

        # 3. Handle Unmatched Files (Not Extra, Not Map)
        if not is_extra and not is_map:
            log.debug(f"File '(unknown)' did not match any map/extra pattern. Grouping by stem as FILE_IGNORE.")
            asset_name = file_path.stem
            temp_grouped_files[asset_name].append({
                'file_path': file_path_str,
                'item_type': "FILE_IGNORE",
                'asset_name': asset_name
            })

    # --- Determine Primary Asset Name ---
    # Simple heuristic: if only one name derived from maps, use it. Otherwise, log warning.
    final_primary_asset_name = None
    if len(primary_asset_names) == 1:
        final_primary_asset_name = list(primary_asset_names)[0]
        log.debug(f"Determined single primary asset name: '{final_primary_asset_name}'")
    elif len(primary_asset_names) > 1:
        # TODO: Implement a better heuristic for multiple assets (e.g., longest common prefix)
        final_primary_asset_name = list(primary_asset_names)[0]  # Fallback: use the first one found
        log.warning(f"Multiple potential primary asset names found: {primary_asset_names}. Using '{final_primary_asset_name}' for associating extra files. Consider refining asset name extraction.")
    else:
        # No maps found, but maybe extras exist? Associate with the first asset group found.
        if temp_grouped_files and extra_files_to_associate:
            final_primary_asset_name = list(temp_grouped_files.keys())[0]
            log.warning(f"No map files found to determine primary asset name. Associating extras with first group found: '{final_primary_asset_name}'.")
        else:
            log.debug("No primary asset name determined (no maps found).")

    # --- Associate Extra Files ---
    if final_primary_asset_name and extra_files_to_associate:
        log.debug(f"Associating {len(extra_files_to_associate)} extra file(s) with primary asset '{final_primary_asset_name}'")
        for file_path_str, filename in extra_files_to_associate:
            temp_grouped_files[final_primary_asset_name].append({
                'file_path': file_path_str,
                'item_type': "EXTRA",  # Assign specific type
                'asset_name': final_primary_asset_name  # Associate with primary asset
            })
    elif extra_files_to_associate:
        log.warning(f"Could not determine a primary asset name to associate {len(extra_files_to_associate)} extra file(s) with. They will be ignored.")
        # Optionally, create a separate 'Extras' asset group?
        # for file_path_str, filename in extra_files_to_associate:
        #     temp_grouped_files["_Extras_"].append(...)

    log.debug(f"Classification complete. Found {len(temp_grouped_files)} potential assets.")
    return dict(temp_grouped_files)


class PredictionHandler(QObject):
    """
    Handles running predictions in a separate thread to avoid GUI freezes.
    Generates the initial SourceRule hierarchy based on file lists and presets.
    """
    # --- Signals ---
    # Emits a list of dictionaries, each representing a file row for the table
    # Dict format: {'original_path': str, 'predicted_asset_name': str | None, 'predicted_output_name': str | None, 'status': str, 'details': str | None, 'source_asset': str}
    prediction_results_ready = Signal(list)
    # Emitted when the hierarchical rule structure is ready
    rule_hierarchy_ready = Signal(object)  # Emits a SourceRule object
    # Emitted when all predictions for a batch are done
    # Emitted when the hierarchical rule structure is ready for a single source
    rule_hierarchy_ready = Signal(list)  # Emits a LIST containing ONE SourceRule object
    # Emitted when prediction/hierarchy generation for a source is done
    prediction_finished = Signal()
    # Emitted for status updates
    status_message = Signal(str, int)
@ -55,102 +202,72 @@ class PredictionHandler(QObject):
|
||||
    def __init__(self, parent=None):
        super().__init__(parent)
        self._is_running = False
        # No explicit cancel needed for prediction for now, it should be fast per-item

    @property
    def is_running(self):
        return self._is_running

    def _predict_single_asset(self, input_path_str: str, config: Configuration, rules: SourceRule) -> list[dict] | dict:
    # Removed _predict_single_asset method

    @Slot(str, list, str)  # Explicitly define types for the slot
    def run_prediction(self, input_source_identifier: str, original_input_paths: list[str], preset_name: str):
        """
        Helper method to run detailed file prediction for a single input path.
        Runs within the ThreadPoolExecutor.
        Returns a list of file prediction dictionaries for the input, or a dictionary representing an error.
        """
        input_path = Path(input_path_str)
        source_asset_name = input_path.name  # For reference in error reporting

        try:
            # Create AssetProcessor instance (needs dummy output path for prediction)
            # The detailed prediction method handles its own workspace setup/cleanup
            processor = AssetProcessor(input_path, config, Path("."))  # Dummy output path

            # Get the detailed file predictions
            # This method returns a list of dictionaries
            detailed_predictions = processor.get_detailed_file_predictions(rules)

            if detailed_predictions is None:
                log.error(f"AssetProcessor.get_detailed_file_predictions returned None for {input_path_str}.")
                # Return a list containing a single error entry for consistency
                return [{
                    'original_path': source_asset_name,
                    'predicted_asset_name': None,
                    'predicted_output_name': None,
                    'status': 'Error',
                    'details': 'Prediction returned no results',
                    'source_asset': source_asset_name
                }]

            # Add the source_asset name to each prediction result for grouping later
            for prediction in detailed_predictions:
                prediction['source_asset'] = source_asset_name

            log.debug(f"Generated {len(detailed_predictions)} detailed predictions for {input_path_str}.")
            return detailed_predictions  # Return the list of dictionaries

        except AssetProcessingError as e:
            log.error(f"Asset processing error during prediction for {input_path_str}: {e}")
            # Return a list containing a single error entry for consistency
            return [{
                'original_path': source_asset_name,
                'predicted_asset_name': None,
                'predicted_output_name': None,
                'status': 'Error',
                'details': f'Asset Error: {e}',
                'source_asset': source_asset_name
            }]
        except Exception as e:
            log.exception(f"Unexpected error during prediction for {input_path_str}: {e}")
            # Return a list containing a single error entry for consistency
            return [{
                'original_path': source_asset_name,
                'predicted_asset_name': None,
                'predicted_output_name': None,
                'status': 'Error',
                'details': f'Unexpected Error: {e}',
                'source_asset': source_asset_name
            }]


    @Slot()
    def run_prediction(self, input_paths: list[str], preset_name: str, rules: SourceRule):
        """
        Runs the prediction logic for the given paths and preset using a ThreadPoolExecutor.
        Generates the hierarchical rule structure and detailed file predictions.
        Generates the initial SourceRule hierarchy for a given source identifier
        (which could be a folder or archive path), extracting the actual file list first.
        file list, and preset name. Populates only overridable fields based on
        classification and preset defaults.
        This method is intended to be run in a separate QThread.
        """
        thread_id = QThread.currentThread()
        log.info(f"[{time.time():.4f}][T:{thread_id}] --> Entered PredictionHandler.run_prediction.")
        # Note: file_list argument is renamed to original_input_paths for clarity,
        # but the signal passes the list of source paths, not the content files yet.
        # We use input_source_identifier as the primary path to analyze.
        log.info(f"VERIFY: PredictionHandler received request. Source: '{input_source_identifier}', Original Paths: {original_input_paths}, Preset: '{preset_name}'")  # DEBUG Verify
        log.info(f"Source Identifier: '{input_source_identifier}', Preset: '{preset_name}'")

        if self._is_running:
            log.warning("Prediction is already running.")
            log.warning("Prediction is already running for another source. Aborting this run.")
            # Don't emit finished, let the running one complete.
            return
        if not BACKEND_AVAILABLE:
            log.error("Backend modules not available. Cannot run prediction.")
            log.error("Backend/config modules not available. Cannot run prediction.")
            self.status_message.emit("Error: Backend components missing.", 5000)
            self.prediction_finished.emit()
            # self.prediction_finished.emit()  # Don't emit finished if never started properly
            return
        if not preset_name:
            log.warning("No preset selected for prediction.")
            self.status_message.emit("No preset selected.", 3000)
            self.prediction_finished.emit()
            # self.prediction_finished.emit()
            return
        # Check the identifier path itself
        source_path = Path(input_source_identifier)
        if not source_path.exists():
            log.warning(f"Input source path does not exist: '{input_source_identifier}'. Skipping prediction.")
            self.status_message.emit("Input path not found.", 3000)
            self.rule_hierarchy_ready.emit([])
            self.prediction_finished.emit()
            return


        self._is_running = True
        thread_id = QThread.currentThread()  # Get current thread object
        log.info(f"[{time.time():.4f}][T:{thread_id}] --> Entered PredictionHandler.run_prediction. Starting run for {len(input_paths)} items, Preset='{preset_name}'")
        self.status_message.emit(f"Updating preview for {len(input_paths)} items...", 0)
        self.status_message.emit(f"Analyzing '{source_path.name}'...", 0)

        config: Configuration | None = None
        allowed_asset_types: List[str] = []
        allowed_file_types: List[str] = []  # These are ItemType names

        config = None  # Load config once if possible
        try:
            config = Configuration(preset_name)
            # Load allowed types from the project's config module
            if app_config:
                allowed_asset_types = getattr(app_config, 'ALLOWED_ASSET_TYPES', [])
                allowed_file_types = getattr(app_config, 'ALLOWED_FILE_TYPES', [])
                log.debug(f"Loaded allowed AssetTypes: {allowed_asset_types}")
                log.debug(f"Loaded allowed FileTypes (ItemTypes): {allowed_file_types}")
            else:
                log.warning("Project config module not loaded. Cannot get allowed types.")

        except ConfigurationError as e:
            log.error(f"Failed to load configuration for preset '{preset_name}': {e}")
            self.status_message.emit(f"Error loading preset '{preset_name}': {e}", 5000)
@@ -158,131 +275,142 @@ class PredictionHandler(QObject):
            self._is_running = False
            return
        except Exception as e:
            log.exception(f"Unexpected error loading configuration for preset '{preset_name}': {e}")
            log.exception(f"Unexpected error loading configuration or allowed types for preset '{preset_name}': {e}")
            self.status_message.emit(f"Unexpected error loading preset '{preset_name}'.", 5000)
            self.prediction_finished.emit()
            self._is_running = False
            return

        # Create the root SourceRule object
        # For now, use a generic name. Later, this might be derived from input paths.
        source_rule = SourceRule()
        log.debug(f"Created root SourceRule object.")

        # Collect all detailed file prediction results from completed futures
        all_file_prediction_results = []

        futures = []
        max_workers = min(max(1, (os.cpu_count() or 1) // 2), 8)
        log.info(f"Using ThreadPoolExecutor with max_workers={max_workers} for prediction.")

        log.debug(f"DEBUG: Calling classify_files with file_list: {original_input_paths}")  # DEBUG LOG
        # --- Perform Classification ---
        try:
            with ThreadPoolExecutor(max_workers=max_workers) as executor:
                # Submit tasks for each input path
                for input_path_str in input_paths:
                    # _predict_single_asset now returns a list of file prediction dicts or an error dict list
                    future = executor.submit(self._predict_single_asset, input_path_str, config, rules)
                    futures.append(future)
            classified_assets = classify_files(original_input_paths, config)
        except Exception as e:
            log.exception(f"Error during file classification for source '{input_source_identifier}': {e}")
            self.status_message.emit(f"Error classifying files: {e}", 5000)
            self.prediction_finished.emit()
            self._is_running = False
            return

            # Process results as they complete
            for future in as_completed(futures):
                try:
                    result = future.result()
                    if isinstance(result, list):
                        # Extend the main list with results from this asset
                        all_file_prediction_results.extend(result)
                    elif isinstance(result, dict) and result.get('status') == 'Error':
                        # Handle error dictionaries returned by _predict_single_asset (should be in a list now, but handle single dict for safety)
                        all_file_prediction_results.append(result)
                    else:
                        log.error(f'Prediction task returned unexpected result type: {type(result)}')
                        all_file_prediction_results.append({
                            'original_path': '[Unknown Asset - Unexpected Result]',
                            'predicted_asset_name': None,
                            'predicted_output_name': None,
                            'status': 'Error',
                            'details': f'Unexpected result type: {type(result)}',
                            'source_asset': '[Unknown]'
                        })
        if not classified_assets:
            log.warning(f"Classification yielded no assets for source '{input_source_identifier}'.")
            self.status_message.emit("No assets identified from files.", 3000)
            self.rule_hierarchy_ready.emit([])  # Emit empty list
            self.prediction_finished.emit()
            self._is_running = False
            return

                except Exception as exc:
                    log.error(f'Prediction task generated an exception: {exc}', exc_info=True)
                    all_file_prediction_results.append({
                        'original_path': '[Unknown Asset - Executor Error]',
                        'predicted_asset_name': None,
                        'predicted_output_name': None,
                        'status': 'Error',
                        'details': f'Executor Error: {exc}',
                        'source_asset': '[Unknown]'
                    })
        # --- Build the Hierarchy ---
        source_rules_list = []
        try:
            # Determine SourceRule level overrides/defaults
            # Get supplier name from the config property
            supplier_identifier = config.supplier_name  # Use the property

        except Exception as pool_exc:
            log.exception(f"An error occurred with the prediction ThreadPoolExecutor: {pool_exc}")
            self.status_message.emit(f"Error during prediction setup: {pool_exc}", 5000)
            all_file_prediction_results.append({
                'original_path': '[Prediction Pool Error]',
                'predicted_asset_name': None,
                'predicted_output_name': None,
                'status': 'Error',
                'details': f'Pool Error: {pool_exc}',
                'source_asset': '[System]'
            })
            # Create the single SourceRule for this input source
            source_rule = SourceRule(
                input_path=input_source_identifier,  # Use the identifier provided
                supplier_identifier=supplier_identifier  # Set overridable field
            )
            log.debug(f"Created SourceRule for identifier: {input_source_identifier} with supplier: {supplier_identifier}")

            asset_rules = []
            for asset_name, files_info in classified_assets.items():
                if not files_info: continue  # Skip empty asset groups

                # Determine AssetRule level overrides/defaults
                # TODO: Implement logic to determine asset_type based on file types present?
                # For now, default to MATERIAL if common material maps are present, else GENERIC.
                # This requires checking item_types in files_info.
                item_types_in_asset = {f_info['item_type'] for f_info in files_info}
                predicted_asset_type = "Surface"  # Default to "Surface" string
                # Simple heuristic: if common material types exist, assume Surface
                # Use strings directly from config.py's ALLOWED_FILE_TYPES
                material_indicators = {"MAP_COL", "MAP_NRM", "MAP_ROUGH", "MAP_METAL", "MAP_AO", "MAP_DISP"}
                if any(it in material_indicators for it in item_types_in_asset):
                    predicted_asset_type = "Surface"  # Predict as "Surface" string

                # Ensure the predicted type is allowed, fallback if necessary
                # Now predicted_asset_type is already a string
                if allowed_asset_types and predicted_asset_type not in allowed_asset_types:
                    log.warning(f"Predicted AssetType '{predicted_asset_type}' for asset '{asset_name}' is not in ALLOWED_ASSET_TYPES. Falling back.")
                    # Fallback logic: use the default from config if allowed, else first allowed type
                    default_type = getattr(app_config, 'DEFAULT_ASSET_CATEGORY', 'Surface')
                    if default_type in allowed_asset_types:
                        predicted_asset_type = default_type
                    elif allowed_asset_types:
                        predicted_asset_type = allowed_asset_types[0]
                    else:
                        pass  # Keep the original prediction if allowed list is empty


        # --- Build the hierarchical rule structure (SourceRule -> AssetRule -> FileRule) ---
        # Group file prediction results by predicted_asset_name
        grouped_by_asset = defaultdict(list)
        for file_pred in all_file_prediction_results:
            # Group by predicted_asset_name, handle None or errors
            asset_name = file_pred.get('predicted_asset_name')
            if asset_name is None:
                # Group files without a predicted asset name under a special key or ignore for hierarchy?
                # Let's group them under their source_asset name for now, but mark them clearly.
                asset_name = f"[{file_pred.get('source_asset', 'UnknownSource')}]"  # Use source asset name as a fallback identifier
                log.debug(f"File '{file_pred.get('original_path', 'UnknownPath')}' has no predicted asset name, grouping under '{asset_name}' for hierarchy.")
            grouped_by_asset[asset_name].append(file_pred)
                asset_rule = AssetRule(
                    asset_name=asset_name,  # This is determined by classification
                    asset_type=predicted_asset_type,  # Set overridable field (use the string)
                    # asset_type_override=None  # This is for user edits, leave as None initially
                )
                log.debug(f"Created AssetRule for asset: {asset_name} with type: {predicted_asset_type}")

        # Create AssetRule objects from the grouped results
        asset_rules = []
        for asset_name, file_preds in grouped_by_asset.items():
            # Determine the source_path for the AssetRule (use the source_asset from the first file in the group)
            source_asset_path = file_preds[0].get('source_asset', asset_name)  # Fallback to asset_name if source_asset is missing
            asset_rule = AssetRule(asset_name=asset_name)
                file_rules = []
                for file_info in files_info:
                    # Determine FileRule level overrides/defaults
                    item_type_override = file_info['item_type']  # From classification
                    target_asset_name_override = file_info['asset_name']  # From classification

            # Create FileRule objects from the file prediction dictionaries
            for file_pred in file_preds:
                file_rule = FileRule(
                    file_path=file_pred.get('original_path', 'UnknownPath'),
                    map_type_override=None,  # Assuming these are not predicted here
                    resolution_override=None,  # Assuming these are not predicted here
                    channel_merge_instructions={},  # Assuming these are not predicted here
                    output_format_override=None  # Assuming these are not predicted here
                )
                asset_rule.files.append(file_rule)
                    # Ensure the predicted item type is allowed (check against prefixed version), skipping EXTRA and FILE_IGNORE
                    # Only prefix if it's a map type that doesn't already have the prefix
                    prefixed_item_type = f"MAP_{item_type_override}" if not item_type_override.startswith("MAP_") and item_type_override not in ["FILE_IGNORE", "EXTRA", "MODEL"] else item_type_override
                    # Check if the (potentially prefixed) type is allowed, but only if it's not supposed to be ignored or extra
                    if allowed_file_types and prefixed_item_type not in allowed_file_types and item_type_override not in ["FILE_IGNORE", "EXTRA"]:
                        log.warning(f"Predicted ItemType '{item_type_override}' (checked as '{prefixed_item_type}') for file '{file_info['file_path']}' is not in ALLOWED_FILE_TYPES. Setting to FILE_IGNORE.")
                        item_type_override = "FILE_IGNORE"  # Fallback to FILE_IGNORE string
                    # Output format is determined by the engine, not predicted here. Leave as None.
                    output_format_override = None

            asset_rules.append(asset_rule)
                    file_rule = FileRule(
                        file_path=file_info['file_path'],  # This is static info based on input
                        # --- Populate ONLY Overridable Fields ---
                        item_type_override=item_type_override,
                        target_asset_name_override=target_asset_name_override,
                        output_format_override=output_format_override,
                        # --- Leave Static Fields as Default/None ---
                        resolution_override=None,
                        channel_merge_instructions={},
                        # etc.
                    )
                    file_rules.append(file_rule)

        # Populate the SourceRule with the collected AssetRules
        source_rule.assets = asset_rules
        log.debug(f"Built SourceRule with {len(asset_rules)} AssetRule(s).")
                asset_rule.files = file_rules
                asset_rules.append(asset_rule)

            # Populate the SourceRule with its assets
            source_rule.assets = asset_rules
            log.debug(f"Built SourceRule '{source_rule.input_path}' with {len(asset_rules)} AssetRule(s).")
            source_rules_list.append(source_rule)  # Add the single completed SourceRule

        except Exception as e:
            log.exception(f"Error building rule hierarchy for source '{input_source_identifier}': {e}")
            self.status_message.emit(f"Error building rules: {e}", 5000)
            # Don't emit hierarchy, just finish
            self.prediction_finished.emit()
            self._is_running = False
            # Removed erroneous temp_dir_obj cleanup
            return


        # Emit the hierarchical rule structure
        log.info(f"[{time.time():.4f}][T:{thread_id}] Parallel prediction run finished. Preparing to emit rule hierarchy.")
        self.rule_hierarchy_ready.emit(source_rule)
        # --- Emit Results ---
        # DEBUG Verify: Log the hierarchy being emitted
        log.info(f"VERIFY: Emitting rule_hierarchy_ready with {len(source_rules_list)} SourceRule(s).")
        for i, rule in enumerate(source_rules_list):
            log.debug(f"  VERIFY Rule {i}: Input='{rule.input_path}', Assets={len(rule.assets)}")
        log.info(f"[{time.time():.4f}][T:{thread_id}] Prediction run finished. Emitting hierarchy for '{input_source_identifier}'.")
        self.rule_hierarchy_ready.emit(source_rules_list)  # Emit list containing the one SourceRule
        log.info(f"[{time.time():.4f}][T:{thread_id}] Emitted rule_hierarchy_ready signal.")

        # Emit the combined list of detailed file results for the table view
        log.info(f"[{time.time():.4f}][T:{thread_id}] Preparing to emit {len(all_file_prediction_results)} file results for table view.")
        log.debug(f"[{time.time():.4f}][T:{thread_id}] Type of all_file_prediction_results before emit: {type(all_file_prediction_results)}")
        try:
            log.debug(f"[{time.time():.4f}][T:{thread_id}] Content of all_file_prediction_results (first 5) before emit: {all_file_prediction_results[:5]}")
        except Exception as e:
            log.error(f"[{time.time():.4f}][T:{thread_id}] Error logging all_file_prediction_results content: {e}")
        log.info(f"[{time.time():.4f}][T:{thread_id}] Emitting prediction_results_ready signal...")
        self.prediction_results_ready.emit(all_file_prediction_results)
        log.info(f"[{time.time():.4f}][T:{thread_id}] Emitted prediction_results_ready signal.")
        # Removed prediction_results_ready signal emission

        self.status_message.emit("Preview update complete.", 3000)
        self.status_message.emit(f"Analysis complete for '{input_source_identifier}'.", 3000)
        self.prediction_finished.emit()
        self._is_running = False
        log.info(f"[{time.time():.4f}][T:{thread_id}] <-- Exiting PredictionHandler.run_prediction.")
        # Removed temp_dir_obj cleanup - not relevant here
        log.info(f"[{time.time():.4f}][T:{thread_id}] <-- Exiting PredictionHandler.run_prediction.")

319
gui/unified_view_model.py
Normal file
@@ -0,0 +1,319 @@
# gui/unified_view_model.py
from PySide6.QtCore import QAbstractItemModel, QModelIndex, Qt
from pathlib import Path  # Added for file_name extraction
from rule_structure import SourceRule, AssetRule, FileRule  # Removed AssetType, ItemType import

class UnifiedViewModel(QAbstractItemModel):
    """
    A QAbstractItemModel for displaying and editing the hierarchical structure
    of SourceRule -> AssetRule -> FileRule.
    """
    Columns = [
        "Name", "Supplier Override", "Asset-Type Override",
        "Target Asset Name Override", "Item-Type Override",
        "Status", "Output Path"
    ]

    COL_NAME = 0
    COL_SUPPLIER = 1
    COL_ASSET_TYPE = 2
    COL_TARGET_ASSET = 3
    COL_ITEM_TYPE = 4
    COL_STATUS = 5
    COL_OUTPUT_PATH = 6

    def __init__(self, parent=None):
        super().__init__(parent)
        self._source_rules = []  # Now stores a list of SourceRule objects

    def load_data(self, source_rules_list: list):  # Accepts a list
        """Loads or reloads the model with a list of SourceRule objects."""
        self.beginResetModel()
        self._source_rules = source_rules_list if source_rules_list else []  # Assign the new list
        # Ensure back-references for parent lookup are set on the NEW items
        for source_rule in self._source_rules:
            for asset_rule in source_rule.assets:
                asset_rule.parent_source = source_rule  # Set parent SourceRule
                for file_rule in asset_rule.files:
                    file_rule.parent_asset = asset_rule  # Set parent AssetRule
        self.endResetModel()

    def clear_data(self):
        """Clears the model data."""
        self.beginResetModel()
        self._source_rules = []  # Clear the list
        self.endResetModel()

    def get_all_source_rules(self) -> list:
        """Returns the internal list of SourceRule objects."""
        return self._source_rules

    def rowCount(self, parent: QModelIndex = QModelIndex()) -> int:
        """Returns the number of rows under the given parent."""
        if not parent.isValid():
            # Parent is the invisible root. Children are the SourceRules.
            return len(self._source_rules)

        parent_item = parent.internalPointer()

        if isinstance(parent_item, SourceRule):
            # Parent is a SourceRule. Children are AssetRules.
            return len(parent_item.assets)
        elif isinstance(parent_item, AssetRule):
            # Parent is an AssetRule. Children are FileRules.
            return len(parent_item.files)
        elif isinstance(parent_item, FileRule):
            return 0  # FileRules have no children

        return 0  # Should not happen for valid items


    def columnCount(self, parent: QModelIndex = QModelIndex()) -> int:
        """Returns the number of columns."""
        return len(self.Columns)

    def parent(self, index: QModelIndex) -> QModelIndex:
        """Returns the parent of the model item with the given index."""
        if not index.isValid():
            return QModelIndex()

        child_item = index.internalPointer()
        if child_item is None:
            return QModelIndex()

        # Determine the parent based on the item type
        if isinstance(child_item, SourceRule):
            # Parent is the invisible root
            return QModelIndex()
        elif isinstance(child_item, AssetRule):
            # Parent is a SourceRule. Find its row in the _source_rules list.
            parent_item = getattr(child_item, 'parent_source', None)
            if parent_item and parent_item in self._source_rules:
                try:
                    parent_row = self._source_rules.index(parent_item)
                    return self.createIndex(parent_row, 0, parent_item)
                except ValueError:
                    return QModelIndex()  # Should not happen if parent_source is correct
            else:
                return QModelIndex()  # Parent SourceRule not found or reference missing

        elif isinstance(child_item, FileRule):
            # Parent is an AssetRule. Find its row within its parent SourceRule.
            parent_item = getattr(child_item, 'parent_asset', None)  # Get parent AssetRule
            if parent_item:
                grandparent_item = getattr(parent_item, 'parent_source', None)  # Get the SourceRule
                if grandparent_item:
                    try:
                        parent_row = grandparent_item.assets.index(parent_item)
                        # We need the index of the grandparent (SourceRule) to create the parent index
                        grandparent_row = self._source_rules.index(grandparent_item)
                        return self.createIndex(parent_row, 0, parent_item)  # Create index for the AssetRule parent
                    except ValueError:
                        return QModelIndex()  # Parent AssetRule or Grandparent SourceRule not found in respective lists
                else:
                    return QModelIndex()  # Grandparent (SourceRule) reference missing
            else:
                return QModelIndex()  # Parent AssetRule reference missing

        return QModelIndex()  # Should not be reached


    def index(self, row: int, column: int, parent: QModelIndex = QModelIndex()) -> QModelIndex:
        """Returns the index of the item in the model specified by the given row, column and parent index."""
        if not self.hasIndex(row, column, parent):
            return QModelIndex()

        parent_item = None
        if not parent.isValid():
            # Parent is invisible root. Children are SourceRules.
            if row < len(self._source_rules):
                child_item = self._source_rules[row]
                return self.createIndex(row, column, child_item)
            else:
                return QModelIndex()  # Row out of bounds for top-level items
        else:
            # Parent is a valid index, get its item
            parent_item = parent.internalPointer()

        child_item = None
        if isinstance(parent_item, SourceRule):
            # Parent is SourceRule. Children are AssetRules.
            if row < len(parent_item.assets):
                child_item = parent_item.assets[row]
                # Ensure parent reference is set
                if not hasattr(child_item, 'parent_source'):
                    child_item.parent_source = parent_item
        elif isinstance(parent_item, AssetRule):
            # Parent is AssetRule. Children are FileRules.
            if row < len(parent_item.files):
                child_item = parent_item.files[row]
                # Ensure parent reference is set
                if not hasattr(child_item, 'parent_asset'):
                    child_item.parent_asset = parent_item

        if child_item:
            # Create index for the child item under the parent
            return self.createIndex(row, column, child_item)
        else:
            # Invalid row or parent type has no children (FileRule)
            return QModelIndex()

    def data(self, index: QModelIndex, role: int = Qt.DisplayRole):
        """Returns the data stored under the given role for the item referred to by the index."""
        if not index.isValid():  # Check only index validity, data list might be empty but valid
            return None

        item = index.internalPointer()
        column = index.column()

        # --- Handle different item types ---
        if isinstance(item, SourceRule):  # This might only be relevant if SourceRule is displayed
            if role == Qt.DisplayRole:
                if column == 0: return item.input_path
                # Use supplier_override if set, otherwise empty string
                if column == self.COL_SUPPLIER: return item.supplier_override if item.supplier_override is not None else ""
                # Other columns return None or "" for SourceRule
            elif role == Qt.EditRole:
                # Return supplier_override for editing
                if column == self.COL_SUPPLIER: return item.supplier_override if item.supplier_override is not None else ""
            return None  # Default for SourceRule for other roles/columns

        elif isinstance(item, AssetRule):
            if role == Qt.DisplayRole:
                if column == self.COL_NAME: return item.asset_name
                # Use asset_type_override if set, otherwise fall back to predicted asset_type
                if column == self.COL_ASSET_TYPE:
                    display_value = item.asset_type_override if item.asset_type_override is not None else item.asset_type
                    return display_value if display_value else ""
                # Placeholder columns
                if column == self.COL_STATUS: return ""  # Status (Not handled yet)
                if column == self.COL_OUTPUT_PATH: return ""  # Output Path (Not handled yet)
            elif role == Qt.EditRole:
                # Return asset_type_override for editing (delegate expects string or None)
                if column == self.COL_ASSET_TYPE:
                    return item.asset_type_override  # Return string or None
            return None  # Default for AssetRule

        elif isinstance(item, FileRule):
            if role == Qt.DisplayRole:
                if column == self.COL_NAME: return Path(item.file_path).name  # Display only filename
                # Use target_asset_name_override if set, otherwise empty string
                if column == self.COL_TARGET_ASSET:
                    return item.target_asset_name_override if item.target_asset_name_override is not None else ""
                # Use item_type_override if set, otherwise empty string (assuming predicted isn't stored directly)
                if column == self.COL_ITEM_TYPE:
                    # Assuming item_type_override stores the string name of the ItemType enum
                    return item.item_type_override if item.item_type_override else ""
                if column == self.COL_STATUS: return ""  # Status (Not handled yet)
                if column == self.COL_OUTPUT_PATH: return ""  # Output Path (Not handled yet)
            elif role == Qt.EditRole:
                # Return target_asset_name_override for editing
                if column == self.COL_TARGET_ASSET: return item.target_asset_name_override if item.target_asset_name_override is not None else ""
                # Return item_type_override for editing (delegate expects string or None)
                if column == self.COL_ITEM_TYPE: return item.item_type_override  # Return string or None
            return None  # Default for FileRule

        return None  # Should not be reached if item is one of the known types

    def setData(self, index: QModelIndex, value, role: int = Qt.EditRole) -> bool:
        """Sets the role data for the item at index to value."""
        if not index.isValid() or role != Qt.EditRole:  # Check only index and role
            return False

        item = index.internalPointer()
        if item is None:  # Extra check for safety
            return False
        column = index.column()
        changed = False

        # --- Handle different item types ---
        if isinstance(item, SourceRule):  # If SourceRule is editable
            if column == self.COL_SUPPLIER:
                # Ensure value is string or None
                new_value = str(value).strip() if value is not None else None
                if new_value == "": new_value = None  # Treat empty string as None
                # Update supplier_override
                if item.supplier_override != new_value:
                    item.supplier_override = new_value
                    changed = True

        elif isinstance(item, AssetRule):
            if column == self.COL_ASSET_TYPE:
                # Delegate provides string value (e.g., "Surface", "Model") or None
                new_value = str(value) if value is not None else None
                if new_value == "": new_value = None  # Treat empty string as None
                # Update asset_type_override
                if item.asset_type_override != new_value:
                    item.asset_type_override = new_value
                    changed = True

        elif isinstance(item, FileRule):
            if column == self.COL_TARGET_ASSET:  # Target Asset Name Override
                # Ensure value is string or None
                new_value = str(value).strip() if value is not None else None
                if new_value == "": new_value = None  # Treat empty string as None
                # Update target_asset_name_override
                if item.target_asset_name_override != new_value:
                    item.target_asset_name_override = new_value
                    changed = True
            elif column == self.COL_ITEM_TYPE:  # Item-Type Override
                # Delegate provides string value (e.g., "MAP_COL") or None
                new_value = str(value) if value is not None else None
                if new_value == "": new_value = None  # Treat empty string as None
                # Update item_type_override
                if item.item_type_override != new_value:
                    item.item_type_override = new_value
                    changed = True

        if changed:
            # Emit dataChanged for the specific index and affected roles
            self.dataChanged.emit(index, index, [Qt.DisplayRole, Qt.EditRole])
            return True

        return False

    def flags(self, index: QModelIndex) -> Qt.ItemFlags:
        """Returns the item flags for the given index."""
        if not index.isValid():
            return Qt.NoItemFlags

        # Default flags for a valid item
        default_flags = Qt.ItemIsEnabled | Qt.ItemIsSelectable

        item = index.internalPointer()
        column = index.column()

        # Editability mirrors the columns handled in setData()
        can_edit = False
        if isinstance(item, SourceRule):
            if column == 1:  # supplier override column
                can_edit = True
        elif isinstance(item, AssetRule):
            if column == 2:  # asset type override column
                can_edit = True
        elif isinstance(item, FileRule):
            if column in (3, 4):  # target asset / item type override columns
                can_edit = True

        if can_edit:
            return default_flags | Qt.ItemIsEditable
        return default_flags
    def headerData(self, section: int, orientation: Qt.Orientation, role: int = Qt.DisplayRole):
        """Returns the data for the given role and section in the header."""
        if orientation == Qt.Horizontal and role == Qt.DisplayRole:
            if 0 <= section < len(self.Columns):
                return self.Columns[section]
        # Optionally handle the vertical header (row numbers):
        # if orientation == Qt.Vertical and role == Qt.DisplayRole:
        #     return str(section + 1)
        return None

    # Helper to get an item from an index
    def getItem(self, index: QModelIndex):
        """Safely returns the item associated with the index."""
        if index.isValid():
            item = index.internalPointer()
            if item:  # Ensure the internal pointer is not None
                return item
        return None  # Invalid index or None pointer
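Every editable branch in `setData` above repeats the same normalization: editor input becomes a string override, and a cleared editor (`None` or an empty string) removes the override instead of storing `""`. A minimal standalone sketch of that convention (the helper name is illustrative, not part of the model; the type columns skip `.strip()` because their delegates supply canonical strings):

```python
def normalize_override(value, strip=True):
    """Map editor input to an override value: None or "" means 'no override'."""
    if value is None:
        return None
    text = str(value).strip() if strip else str(value)
    return text if text != "" else None


# Supplier/target-name columns strip whitespace; type columns would pass strip=False.
print(normalize_override("  Poliigon  "))  # "Poliigon"
print(normalize_override(""))              # None
```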
1424
processing_engine.py
Normal file
File diff suppressed because it is too large
@@ -1,11 +1,11 @@
 import dataclasses
 import json
 from typing import List, Dict, Any, Tuple

 @dataclasses.dataclass
 class FileRule:
     file_path: str = None
-    map_type_override: str = None
+    item_type_override: str = None  # Renamed from map_type_override
+    target_asset_name_override: str = None  # Added override field
     resolution_override: Tuple[int, int] = None
     channel_merge_instructions: Dict[str, Any] = dataclasses.field(default_factory=dict)
     output_format_override: str = None  # Potentially others identified during integration
@@ -21,7 +21,8 @@ class FileRule:
 @dataclasses.dataclass
 class AssetRule:
     asset_name: str = None
-    asset_type: str = None
+    asset_type: str = None  # Predicted type
+    asset_type_override: str = None  # Added override field
     common_metadata: Dict[str, Any] = dataclasses.field(default_factory=dict)
     files: List[FileRule] = dataclasses.field(default_factory=list)

@@ -37,7 +38,8 @@ class AssetRule:

 @dataclasses.dataclass
 class SourceRule:
-    supplier_identifier: str = None
+    supplier_identifier: str = None  # Predicted/Original identifier
+    supplier_override: str = None  # Added override field
     high_level_sorting_parameters: Dict[str, Any] = dataclasses.field(default_factory=dict)
     assets: List[AssetRule] = dataclasses.field(default_factory=list)
     input_path: str = None
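The override fields added in this diff form the GUI-to-backend contract: the predictor fills the predicted fields (`supplier_identifier`, `asset_type`), the user edits only the `*_override` fields through the model above, and the backend resolves override-over-predicted. A runnable sketch using trimmed copies of the dataclasses (the field subset and example values are illustrative, not taken from the real pipeline):

```python
import dataclasses
from typing import List

@dataclasses.dataclass
class FileRule:
    file_path: str = None
    item_type_override: str = None
    target_asset_name_override: str = None

@dataclasses.dataclass
class AssetRule:
    asset_name: str = None
    asset_type: str = None            # predicted
    asset_type_override: str = None   # user edit
    files: List[FileRule] = dataclasses.field(default_factory=list)

@dataclasses.dataclass
class SourceRule:
    supplier_identifier: str = None   # predicted
    supplier_override: str = None     # user edit
    assets: List[AssetRule] = dataclasses.field(default_factory=list)
    input_path: str = None

# GUI builds a complete hierarchy; only overrides the user touched are set.
rule = SourceRule(
    supplier_identifier="predicted_supplier",
    input_path="/input/wood_planks",
    assets=[AssetRule(
        asset_name="WoodPlanks01",
        asset_type="Surface",
        files=[FileRule(file_path="wood_planks_col.png",
                        item_type_override="MAP_COL")],
    )],
)

# Backend resolution: override wins when set, predicted value otherwise.
effective_supplier = rule.supplier_override or rule.supplier_identifier
print(effective_supplier)  # "predicted_supplier"
```

Because the model normalizes empty overrides to `None`, the simple `override or predicted` resolution never has to distinguish `""` from "not set".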