Pre-Codebase-review commit :3
Codebase deduplication and cleanup refactor. Documentation updated as well. Preferences updated. Removed test files from repository.
This commit is contained in:
parent 667f119c61
commit ce26d54a5d

.gitignore (vendored): 4 additions
@@ -29,3 +29,7 @@ build/
 Thumbs.db
 gui/__pycache__
 __pycache__
+
+Testfiles
+Testfiles/
+Testfiles_
@@ -12,9 +12,9 @@ This documentation strictly excludes details on environment setup, dependency in
 
 ## Architecture and Codebase Summary
 
-For developers interested in contributing, the tool's architecture is designed around a **Core Processing Engine** (`asset_processor.py`) that handles the pipeline for single assets. This engine is supported by a **Configuration System** (`configuration.py` and `config.py` with `Presets/*.json`) and a new **Hierarchical Rule System** (`rule_structure.py`) that allows dynamic overrides of static configurations at Source, Asset, and File levels. Multiple interfaces are provided: a **Graphical User Interface** (`gui/`), a **Command-Line Interface** (`main.py`), and a **Directory Monitor** (`monitor.py`). Optional **Blender Integration** (`blenderscripts/`) is also included. Key new files supporting the hierarchical rule system include `rule_structure.py`, `gui/rule_hierarchy_model.py`, and `gui/rule_editor_widget.py`.
+For developers interested in contributing, the tool's architecture centers on a **Core Processing Engine** (`processing_engine.py`) executing a pipeline based on a **Hierarchical Rule System** (`rule_structure.py`) and a **Configuration System** (`configuration.py` loading `config/app_settings.json` and `Presets/*.json`). The **Graphical User Interface** (`gui/`) has been significantly refactored: `MainWindow` (`main_window.py`) acts as a coordinator, delegating tasks to specialized widgets (`MainPanelWidget`, `PresetEditorWidget`, `LogConsoleWidget`) and background handlers (`RuleBasedPredictionHandler`, `LLMPredictionHandler`, `LLMInteractionHandler`, `AssetRestructureHandler`). The **Directory Monitor** (`monitor.py`) now processes archives asynchronously using a thread pool and utility functions (`utils/prediction_utils.py`, `utils/workspace_utils.py`). The **Command-Line Interface** entry point (`main.py`) primarily launches the GUI, with core CLI functionality currently non-operational. Optional **Blender Integration** (`blenderscripts/`) remains. A new `utils/` directory houses shared helper functions.
 
-The codebase is organized into key directories and files reflecting these components. The `gui/` directory contains all GUI-related code, `Presets/` holds configuration presets, and `blenderscripts/` contains scripts for Blender interaction. The core logic resides in files like `asset_processor.py`, `configuration.py`, `config.py`, `main.py`, and `monitor.py`. The processing pipeline involves steps such as file classification, map processing, channel merging, and metadata generation.
+The codebase reflects this structure. The `gui/` directory contains the refactored UI components, `utils/` holds shared utilities, `Presets/` contains JSON presets, and `blenderscripts/` holds Blender scripts. Core logic resides in `processing_engine.py`, `configuration.py`, `rule_structure.py`, `monitor.py`, and `main.py`. The processing pipeline, executed by `processing_engine.py`, relies entirely on the input `SourceRule` and static configuration for steps like map processing, channel merging, and metadata generation.
 
 ## Table of Contents
 
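The Source -> Asset -> File hierarchy described above can be pictured as nested dataclasses. A minimal sketch with illustrative field names, which are assumptions; the actual fields of `rule_structure.py` are not shown in this diff:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of the SourceRule/AssetRule/FileRule hierarchy.
# The class names come from the document; the fields are assumptions.
@dataclass
class FileRule:
    file_path: str
    item_type: Optional[str] = None          # e.g. "COL", "NRM", "Model"
    standard_map_type: Optional[str] = None  # normalized map type
    target_asset_name_override: Optional[str] = None

@dataclass
class AssetRule:
    asset_name: str
    asset_type: Optional[str] = None
    file_rules: List[FileRule] = field(default_factory=list)

@dataclass
class SourceRule:
    source_path: str
    supplier: Optional[str] = None
    asset_rules: List[AssetRule] = field(default_factory=list)

# One SourceRule is the data contract handed to the processing engine.
rule = SourceRule(
    source_path="input.zip",
    supplier="Poliigon",
    asset_rules=[AssetRule("Bricks01", "Surface",
                           [FileRule("Bricks01_COL.png", item_type="COL")])],
)
print(rule.asset_rules[0].file_rules[0].item_type)
```

Overrides at the Source, Asset, or File level are then just attribute values on the matching node of this tree.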
@@ -21,16 +21,18 @@ python -m gui.main_window
 * **Preset Selector:** Choose the preset to use for *processing* the current queue. This dropdown now includes a new option: "- LLM Interpretation -". Selecting this option will use the experimental LLM Predictor instead of the traditional rule-based prediction system defined in presets.
 * **Output Directory:** Set the output path (defaults to `config/app_settings.json`, use "Browse...")
 * **Drag and Drop Area:** Add asset `.zip`, `.rar`, `.7z` files, or folders by dragging and dropping them here.
-* **Preview Table:** Shows queued assets in a hierarchical view (Source -> Asset -> File). Initially, this area displays a message prompting you to select a preset. Once a preset is selected from the Preset List, the detailed file preview will load here. The mode of the preview depends on the "View" menu:
+* **Preview Table:** Shows queued assets in a hierarchical view (Source -> Asset -> File). Assets (files, directories, archives) added via drag-and-drop appear immediately in the table.
-    * **Detailed Preview (Default):** Lists all files, predicted status (`Mapped`, `Model`, `Extra`, `Unrecognised`, `Ignored`, `Error`), output name, etc., based on the selected *processing* preset. The columns displayed are: Name, Target Asset, Supplier, Asset Type, Item Type. The "Target Asset" column stretches to fill available space, while others resize to content. The previous "Status" and "Output Path" columns have been removed. Text colors are applied to cells based on the status of the individual file they represent. Rows use alternating background colors per asset group for visual separation.
+    * If no preset is selected ("-- Select a Preset --"), added items (including files within directories/archives) are displayed with empty prediction fields (Target Asset, Asset Type, Item Type), which can be manually edited.
-    * **Simple View (Preview Disabled):** Lists only top-level input asset paths.
+    * If a valid preset or LLM mode is selected, the table populates with prediction results as they become available.
+    * The table always displays the detailed view structure with columns: Name, Target Asset, Supplier, Asset Type, Item Type. The "Target Asset" column stretches to fill available space.
+    * **Coloring:** The *text color* of file items is determined by their Item Type (colors defined in `config/app_settings.json`). The *background color* of file items is a 30% darker shade of their parent asset's background, helping to visually group files within an asset. Asset rows themselves may use alternating background colors based on the application theme.
 * **Progress Bar:** Shows overall processing progress.
 * **Blender Post-Processing:** Checkbox to enable Blender scripts. If enabled, shows fields and browse buttons for target `.blend` files (defaults from `config/app_settings.json`).
 * **Options & Controls (Bottom):**
     * `Overwrite Existing`: Checkbox to force reprocessing.
     * `Workers`: Spinbox for concurrent processes.
     * `Clear Queue`: Button to clear the queue and preview.
-    * `Start Processing`: Button to start processing the queue. This button is disabled until a valid preset is selected from the Preset List.
+    * `Start Processing`: Button to start processing the queue. This button is enabled as long as there are items listed in the Preview Table. When clicked, any items that do not have a value assigned in the "Target Asset" column will be automatically ignored for that processing run.
     * `Cancel`: Button to attempt stopping processing.
 * **Re-interpret Selected with LLM:** This button appears when the "- LLM Interpretation -" preset is selected. It allows you to re-process only the currently selected items in the Preview Table using the LLM, without affecting other items in the queue. This is useful for refining predictions on specific assets.
 * **Status Bar:** Displays current status, errors, and completion messages. During LLM processing, the status bar will show messages indicating the progress of the LLM requests.
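The new `Start Processing` behaviour (enabled whenever the table has items; rows without a Target Asset are skipped) boils down to a simple filter. A hedged sketch using a dict-based row representation, which is an assumption; the real GUI keeps this state in a Qt item model:

```python
# Illustrative sketch only: the row structure and function names below are
# hypothetical, not the application's actual API.
def rows_to_process(rows):
    # Rows without a "Target Asset" value are ignored for the run.
    return [r for r in rows if r.get("target_asset")]

def start_button_enabled(rows):
    # Enabled as long as the preview table lists any items at all.
    return len(rows) > 0

queue = [
    {"name": "Bricks01_COL.png", "target_asset": "Bricks01"},
    {"name": "readme.txt", "target_asset": ""},  # no target: skipped
]
print(start_button_enabled(queue), len(rows_to_process(queue)))
```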
|||||||
@ -6,17 +6,17 @@ This document provides a high-level overview of the Asset Processor Tool's archi
|
|||||||
|
|
||||||
The Asset Processor Tool is designed to process 3D asset source files into a standardized library format. Its high-level architecture consists of:
|
The Asset Processor Tool is designed to process 3D asset source files into a standardized library format. Its high-level architecture consists of:
|
||||||
|
|
||||||
1. **Core Processing Engine (`processing_engine.py`):** The primary component responsible for executing the asset processing pipeline for a single input asset based on a provided `SourceRule` object and static configuration. The older `asset_processor.py` remains in the codebase for reference but is no longer used in the main processing flow.
|
1. **Core Processing Engine (`processing_engine.py`):** The primary component responsible for executing the asset processing pipeline for a single input asset based on a provided `SourceRule` object and static configuration. The previous `asset_processor.py` has been removed.
|
||||||
2. **Prediction System:** Responsible for analyzing input files and generating the initial `SourceRule` hierarchy with predicted values. This system now includes two alternative components:
|
2. **Prediction System:** Responsible for analyzing input files and generating the initial `SourceRule` hierarchy with predicted values. This system utilizes a base handler (`gui/base_prediction_handler.py::BasePredictionHandler`) with specific implementations:
|
||||||
* **Rule-Based Predictor (`prediction_handler.py`):** Uses predefined rules from presets to classify files and determine initial processing parameters.
|
* **Rule-Based Predictor (`gui/prediction_handler.py::RuleBasedPredictionHandler`):** Uses predefined rules from presets to classify files and determine initial processing parameters.
|
||||||
* **LLM Predictor (`gui/llm_prediction_handler.py`):** An experimental alternative that uses a Large Language Model (LLM) to interpret file contents and context to predict processing parameters. Its role is to generate `SourceRule` objects based on LLM output, which are then used by the processing pipeline.
|
* **LLM Predictor (`gui/llm_prediction_handler.py::LLMPredictionHandler`):** An experimental alternative that uses a Large Language Model (LLM) to interpret file contents and context to predict processing parameters.
|
||||||
3. **Configuration System (`Configuration`):** Handles loading core settings (including centralized type definitions and LLM-specific configuration) and merging them with supplier-specific rules defined in JSON presets and the persistent `config/suppliers.json` file.
|
3. **Configuration System (`Configuration`):** Handles loading core settings (including centralized type definitions and LLM-specific configuration) and merging them with supplier-specific rules defined in JSON presets and the persistent `config/suppliers.json` file.
|
||||||
4. **Multiple Interfaces:** Provides different ways to interact with the tool:
|
4. **Multiple Interfaces:** Provides different ways to interact with the tool:
|
||||||
* Graphical User Interface (GUI)
|
* Graphical User Interface (GUI)
|
||||||
* Command-Line Interface (CLI)
|
* Command-Line Interface (CLI) - *Note: The primary CLI execution logic (`run_cli` in `main.py`) is currently non-functional/commented out post-refactoring.*
|
||||||
* Directory Monitor for automated processing.
|
* Directory Monitor for automated processing.
|
||||||
The GUI now acts as the primary source of truth for processing rules, generating and managing the `SourceRule` hierarchy before sending it to the processing engine. It also accumulates prediction results from multiple input sources before updating the view. The CLI and Monitor interfaces can also generate `SourceRule` objects to bypass the GUI for automated workflows.
|
The GUI acts as the primary source of truth for processing rules, coordinating the generation and management of the `SourceRule` hierarchy before sending it to the processing engine. It accumulates prediction results from multiple input sources before updating the view. The Monitor interface can also generate `SourceRule` objects (using `utils/prediction_utils.py`) to bypass the GUI for automated workflows.
|
||||||
5. **Optional Integration:** Includes scripts and logic for integrating with external software, specifically Blender, to automate material and node group creation.
|
5. **Optional Integration:** Includes scripts (`blenderscripts/`) for integrating with Blender. Logic for executing these scripts was intended to be centralized in `utils/blender_utils.py`, but this utility has not yet been implemented.
|
||||||
|
|
||||||
## Hierarchical Rule System
|
## Hierarchical Rule System
|
||||||
|
|
||||||
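The base-handler arrangement in point 2 is the classic abstract-base-class pattern. A stripped-down sketch, assuming a `predict` method name; the real handlers in `gui/` are Qt background workers that emit signals rather than return values:

```python
from abc import ABC, abstractmethod

# Hedged sketch of the prediction-handler hierarchy. Only the class names
# come from the document; the predict() signature is an assumption.
class BasePredictionHandler(ABC):
    @abstractmethod
    def predict(self, source_path: str) -> dict:
        """Return a SourceRule-like structure for the given input."""

class RuleBasedPredictionHandler(BasePredictionHandler):
    def __init__(self, preset_rules):
        self.preset_rules = preset_rules  # compiled from Presets/*.json

    def predict(self, source_path: str) -> dict:
        # Placeholder for preset-driven classification.
        return {"source": source_path, "predictor": "rule-based"}

class LLMPredictionHandler(BasePredictionHandler):
    def predict(self, source_path: str) -> dict:
        # Would delegate to an LLMInteractionHandler in the real code.
        return {"source": source_path, "predictor": "llm"}

handlers = [RuleBasedPredictionHandler({}), LLMPredictionHandler()]
results = [h.predict("asset.zip") for h in handlers]
print([r["predictor"] for r in results])
```

Either implementation yields the same kind of `SourceRule` output, which is what lets the GUI swap predictors via the preset dropdown.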
@@ -35,22 +35,33 @@ This hierarchy allows for fine-grained control over processing parameters. The G
 * `Presets/*.json`: Supplier-specific JSON files defining rules for file interpretation and initial prediction.
 * `configuration.py` (`Configuration` class): Loads `config/app_settings.json` settings and merges them with a selected preset, pre-compiling regex patterns for efficiency. This static configuration is used by the processing engine.
 * `rule_structure.py`: Defines the `SourceRule`, `AssetRule`, and `FileRule` dataclasses used to represent the hierarchical processing rules.
-* `gui/`: Directory containing modules for the Graphical User Interface (GUI), built with PySide6. The GUI is responsible for generating and managing the `SourceRule` hierarchy via the Unified View, accumulating prediction results, and interacting with background handlers (`ProcessingHandler`, `PredictionHandler`).
+* `gui/`: Directory containing modules for the Graphical User Interface (GUI), built with PySide6. The `MainWindow` (`main_window.py`) acts as a coordinator, orchestrating interactions between various components. Key GUI components include:
-    * `unified_view_model.py`: Implements the `QAbstractItemModel` for the Unified Hierarchical View, holding the `SourceRule` data, handling inline editing (including direct model restructuring for `target_asset_name_override`), and managing row coloring based on config definitions.
+    * `main_panel_widget.py::MainPanelWidget`: Contains the primary controls for loading sources, selecting presets, viewing/editing rules, and initiating processing.
-    * `delegates.py`: Contains custom `QStyledItemDelegate` implementations for inline editing in the Unified View, including the new `SupplierSearchDelegate` for supplier name auto-completion and management.
+    * `preset_editor_widget.py::PresetEditorWidget`: Provides the interface for managing presets.
-    * `prediction_handler.py`: Generates the initial `SourceRule` hierarchy with predicted values for a single input source based on its files and the selected preset. It uses the `"standard_type"` from the configuration's `FILE_TYPE_DEFINITIONS` to populate `FileRule.standard_map_type` and implements a two-pass classification logic to handle and prioritize bit-depth variants (e.g., `_DISP16_` vs `_DISP_`).
+    * `log_console_widget.py::LogConsoleWidget`: Displays application logs.
-* `processing_engine.py` (`ProcessingEngine` class): The new core component that executes the processing pipeline for a single `SourceRule` object using the static `Configuration`. A new instance is created per task for state isolation. It contains no internal prediction or fallback logic. Supplier overrides from the GUI are correctly preserved and used by the engine for output path generation and metadata.
+    * `unified_view_model.py::UnifiedViewModel`: Implements the `QAbstractItemModel` for the hierarchical rule view, holding `SourceRule` data and managing display logic (coloring, etc.). Caches configuration data for performance.
-* `asset_processor.py` (`AssetProcessor` class): The older processing engine, kept for reference but not used in the main processing flow.
+    * `rule_hierarchy_model.py::RuleHierarchyModel`: A simpler model used internally by the `UnifiedViewModel` to manage the `SourceRule` data structure.
-* `main.py`: The entry point for the Command-Line Interface (CLI). It handles argument parsing, logging, parallel processing orchestration, and triggering Blender scripts. It now orchestrates processing by passing `SourceRule` objects to the `ProcessingEngine`.
+    * `delegates.py`: Contains custom `QStyledItemDelegate` implementations for inline editing in the rule view.
-* `monitor.py`: Implements the directory monitoring feature using `watchdog`.
+    * `asset_restructure_handler.py::AssetRestructureHandler`: Handles complex model updates when a file's target asset is changed via the GUI, ensuring the `SourceRule` hierarchy is correctly modified.
+    * `base_prediction_handler.py::BasePredictionHandler`: Abstract base class for prediction logic.
+    * `prediction_handler.py::RuleBasedPredictionHandler`: Generates the initial `SourceRule` hierarchy based on presets and file analysis. Inherits from `BasePredictionHandler`.
+    * `llm_prediction_handler.py::LLMPredictionHandler`: Experimental predictor using an LLM. Inherits from `BasePredictionHandler`.
+    * `llm_interaction_handler.py::LLMInteractionHandler`: Manages communication with the LLM service for the LLM predictor.
+* `processing_engine.py` (`ProcessingEngine` class): The core component that executes the processing pipeline for a single `SourceRule` object using the static `Configuration`. A new instance is created per task for state isolation.
+* `main.py`: The main entry point for the application. Primarily launches the GUI. Contains commented-out/non-functional CLI logic (`run_cli`).
+* `monitor.py`: Implements the directory monitoring feature using `watchdog`. It now processes archives asynchronously using a `ThreadPoolExecutor`, leveraging `utils.prediction_utils.py` for rule generation and `utils.workspace_utils.py` for workspace management before invoking the `ProcessingEngine`.
 * `blenderscripts/`: Contains Python scripts designed to be executed *within* Blender for post-processing tasks.
+* `utils/`: Directory containing utility modules:
+    * `workspace_utils.py`: Contains functions like `prepare_processing_workspace` for handling temporary directories and archive extraction.
+    * `prediction_utils.py`: Contains functions like `generate_source_rule_from_archive` used by the monitor for rule-based prediction.
+    * `blender_utils.py`: (Intended location for Blender script execution logic, currently not implemented).
 
 ## Processing Pipeline (Simplified)
 
 The primary processing engine (`processing_engine.py`) executes a series of steps for each asset based on the provided `SourceRule` object and static configuration:
 
-1. Extraction of input to a temporary workspace.
+1. Extraction of input to a temporary workspace (using `utils.workspace_utils.py`).
-2. Classification of files (map, model, extra, ignored, unrecognised) using preset rules.
+2. Classification of files (map, model, extra, ignored, unrecognised) based *only* on the provided `SourceRule` object (classification/prediction happens *before* the engine is called).
 3. Determination of base metadata (asset name, category, archetype).
 4. Skip check if output exists and overwrite is not forced.
 5. Processing of maps (resize, format/bit depth conversion, inversion, stats calculation).
@@ -58,6 +69,6 @@ The primary processing engine (`processing_engine.py`) executes a series of step
 7. Generation of `metadata.json` file.
 8. Organization of processed files into the final output structure.
 9. Cleanup of the temporary workspace.
-10. (Optional) Execution of Blender scripts for post-processing.
+10. (Optional) Execution of Blender scripts (currently triggered directly, intended to use `utils.blender_utils.py`).
 
-This architecture allows for a modular design, separating configuration, rule generation/management (primarily in the GUI), and core processing execution. The `SourceRule` object serves as a clear data contract between the GUI/prediction layer and the processing engine. Parallel processing is utilized for efficiency, and background threads keep the GUI responsive.
+This architecture allows for a modular design, separating configuration, rule generation/management (GUI, Monitor utilities), and core processing execution. The `SourceRule` object serves as a clear data contract between the rule generation layer and the processing engine. Parallel processing (in Monitor) and background threads (in GUI) are utilized for efficiency and responsiveness.
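The numbered pipeline steps can be sketched as a fixed sequence driven only by the `SourceRule` and static configuration. Step and method names below are assumptions for illustration, not the actual `ProcessingEngine` API:

```python
# Illustrative skeleton of the pipeline ordering described in the docs.
# Only the step sequence mirrors the document; everything else is hypothetical.
class ProcessingEngineSketch:
    def __init__(self, source_rule, configuration):
        self.rule = source_rule        # sole source of file classification
        self.config = configuration   # static, preset-merged settings
        self.log = []

    def run(self):
        steps = [
            "extract_workspace",          # 1. extract to temp workspace
            "apply_rule_classification",  # 2. read classes from SourceRule
            "determine_metadata",         # 3. asset name, category, archetype
            "skip_check",                 # 4. skip if output exists
            "process_maps",               # 5. resize, convert, invert, stats
            "merge_channels",             # 6. channel merging
            "write_metadata_json",        # 7. metadata.json
            "organize_output",            # 8. final output structure
            "cleanup_workspace",          # 9. remove temp files
            "run_blender_scripts",        # 10. optional post-processing
        ]
        for step in steps:
            self.log.append(step)  # real engine would dispatch each step
        return self.log

engine = ProcessingEngineSketch(source_rule=object(), configuration={})
print(len(engine.run()))
```

Creating a fresh instance per task, as the document notes, keeps `self.log`-style state isolated between parallel runs.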
@@ -4,69 +4,90 @@ This document outlines the key files and directories within the Asset Processor
 
 ```
 Asset_processor_tool/
-├── asset_processor.py # Older core class, kept for reference (not used in main flow)
+├── configuration.py # Class for loading and accessing configuration (merges app_settings.json and presets)
-├── config.py # Core settings, constants, and definitions for allowed asset/file types
-├── config/ # Directory for configuration files
-│   └── suppliers.json # Persistent list of known supplier names for GUI auto-completion
-├── configuration.py # Class for loading and accessing configuration (merges config.py and presets)
-├── detailed_documentation_plan.md # (Existing file, potentially outdated)
 ├── Dockerfile # Instructions for building the Docker container image
-├── documentation_plan.md # Plan for the new documentation structure (this plan)
+├── main.py # Main application entry point (primarily GUI launcher)
-├── documentation.txt # Original developer documentation (to be migrated)
+├── monitor.py # Directory monitoring script for automated processing (async)
-├── main.py # CLI Entry Point & processing orchestrator (calls processing_engine)
+├── processing_engine.py # Core class handling single asset processing based on SourceRule
-├── monitor.py # Directory monitoring script for automated processing
-├── processing_engine.py # New core class handling single asset processing based on SourceRule
-├── readme.md # Original main documentation file (to be migrated)
-├── readme.md.bak # Backup of readme.md
 ├── requirements-docker.txt # Dependencies specifically for the Docker environment
 ├── requirements.txt # Python package dependencies for standard execution
 ├── rule_structure.py # Dataclasses for hierarchical rules (SourceRule, AssetRule, FileRule)
 ├── blenderscripts/ # Scripts for integration with Blender
 │   ├── create_materials.py # Script to create materials linking to node groups
 │   └── create_nodegroups.py # Script to create node groups from processed assets
-├── Deprecated-POC/ # Directory containing original proof of concept scripts
+├── config/ # Directory for configuration files
-│   ├── Blender-MaterialsFromNodegroups.py
+│   ├── app_settings.json # Core settings, constants, and type definitions
-│   ├── Blender-NodegroupsFromPBRSETS.py
+│   └── suppliers.json # Persistent list of known supplier names for GUI auto-completion
-│   └── Standalonebatcher-Main.py
+├── Deprecated/ # Contains old code, documentation, and POC scripts
-├── Documentation/ # New directory for organized documentation (this structure)
+│   ├── ...
+├── Documentation/ # Directory for organized documentation (this structure)
 │   ├── 00_Overview.md
 │   ├── 01_User_Guide/
 │   └── 02_Developer_Guide/
-├── gui/ # Contains files related to the Graphical User Interface
+├── gui/ # Contains files related to the Graphical User Interface (PySide6)
-│   ├── delegates.py # Custom delegates for inline editing in Unified View
+│   ├── asset_restructure_handler.py # Handles model updates for target asset changes
-│   ├── main_window.py # Main GUI application window and layout
+│   ├── base_prediction_handler.py # Abstract base class for prediction logic
-│   ├── processing_handler.py # Handles background processing logic for the GUI
+│   ├── config_editor_dialog.py # Dialog for editing configuration files
-│   ├── prediction_handler.py # Generates initial SourceRule hierarchy with predictions
+│   ├── delegates.py # Custom delegates for inline editing in rule view
-│   ├── unified_view_model.py # Model for the Unified Hierarchical View
+│   ├── llm_interaction_handler.py # Manages communication with LLM service
-│   └── ... # Other GUI components
+│   ├── llm_prediction_handler.py # LLM-based prediction handler
-├── Presets/ # Preset definition files
+│   ├── log_console_widget.py # Widget for displaying logs
+│   ├── main_panel_widget.py # Main panel containing core GUI controls
+│   ├── main_window.py # Main GUI application window (coordinator)
+│   ├── prediction_handler.py # Rule-based prediction handler
+│   ├── preset_editor_widget.py # Widget for managing presets
+│   ├── preview_table_model.py # Model for the (deprecated?) preview table
+│   ├── rule_editor_widget.py # Widget containing the rule hierarchy view and editor
+│   ├── rule_hierarchy_model.py # Internal model for rule hierarchy data
+│   └── unified_view_model.py # QAbstractItemModel for the rule hierarchy view
+├── llm_prototype/ # Files related to the experimental LLM predictor prototype
+│   ├── ...
+├── Presets/ # Preset definition files (JSON)
 │   ├── _template.json # Template for creating new presets
 │   ├── Poliigon.json # Example preset for Poliigon assets
 │   └── ... # Other presets
-├── Project Notes/ # Directory for issue and feature tracking (Markdown files)
+├── ProjectNotes/ # Directory for developer notes, plans, etc. (Markdown files)
-│   ├── ... # Various planning and note files
+│   ├── ...
-└── Testfiles/ # Directory containing example input assets for testing
+├── PythonCheatsheats/ # Utility Python reference files
-    └── ... # Example asset ZIPs
+│   ├── ...
+├── Testfiles/ # Directory containing example input assets for testing
+│   ├── ...
+├── Tickets/ # Directory for issue and feature tracking (Markdown files)
+│   ├── ...
+└── utils/ # Utility modules shared across the application
+    ├── prediction_utils.py # Utilities for prediction (e.g., used by monitor)
+    └── workspace_utils.py # Utilities for managing processing workspaces
 ```
 
 **Key Files and Directories:**
 
* `asset_processor.py`: Contains the older `AssetProcessor` class. It is kept for reference but is no longer used in the main processing flow orchestrated by `main.py` or the GUI.
* `config/`: Directory containing configuration files.
    * `app_settings.json`: Stores global default settings, constants, core rules, and centralized definitions for allowed asset and file types (`ASSET_TYPE_DEFINITIONS`, `FILE_TYPE_DEFINITIONS`) used for validation, GUI elements, and coloring. Replaces the old `config.py`.
    * `suppliers.json`: A JSON file storing a persistent list of known supplier names, used by the GUI's `SupplierSearchDelegate` for auto-completion.
* `configuration.py`: Defines the `Configuration` class, responsible for loading core settings from `config/app_settings.json` and merging them with a specified preset JSON file (`Presets/*.json`). It pre-compiles regex patterns from presets for efficiency. An instance of this class is passed to the `ProcessingEngine`.
* `rule_structure.py`: Defines the `SourceRule`, `AssetRule`, and `FileRule` dataclasses. These structures represent the hierarchical processing rules and are the primary data contract passed from the rule generation layer (GUI, Monitor) to the processing engine.
* `processing_engine.py`: Defines the `ProcessingEngine` class. This is the core component that executes the processing pipeline for a single asset based *solely* on a provided `SourceRule` object and the static `Configuration`. It contains no internal prediction or fallback logic.
* `main.py`: Main entry point for the application. Primarily responsible for initializing and launching the GUI (`gui.main_window.MainWindow`). Contains non-functional, commented-out CLI logic (`run_cli`).
* `monitor.py`: Implements the automated directory monitoring feature using `watchdog`. It processes detected archives asynchronously using a `ThreadPoolExecutor`, utilizing `utils.prediction_utils.generate_source_rule_from_archive` for rule-based prediction and `utils.workspace_utils.prepare_processing_workspace` for workspace setup before invoking the `ProcessingEngine`.
* `gui/`: Directory containing all code related to the Graphical User Interface (GUI), built with PySide6. The `MainWindow` acts as a coordinator, delegating functionality to specialized widgets and handlers.
    * `main_window.py`: Defines the `MainWindow` class. Acts as the main application window and coordinator, connecting signals and slots between different GUI components.
    * `main_panel_widget.py`: Defines `MainPanelWidget`, containing the primary user controls (source loading, preset selection, rule view/editor integration, processing buttons).
    * `preset_editor_widget.py`: Defines `PresetEditorWidget` for managing presets (loading, saving, editing).
    * `log_console_widget.py`: Defines `LogConsoleWidget` for displaying application logs within the GUI.
    * `rule_editor_widget.py`: Defines `RuleEditorWidget`, which houses the `QTreeView` for displaying the rule hierarchy.
    * `unified_view_model.py`: Defines `UnifiedViewModel` (`QAbstractItemModel`) for the rule hierarchy view. It holds `SourceRule` data, manages display logic (coloring), handles inline editing requests, and caches configuration data for performance.
    * `rule_hierarchy_model.py`: Defines `RuleHierarchyModel`, a simpler internal model used by `UnifiedViewModel` to manage the underlying `SourceRule` data structure.
    * `delegates.py`: Contains custom `QStyledItemDelegate` implementations used by the `UnifiedViewModel` to provide appropriate inline editors (e.g., dropdowns, text boxes) for different rule attributes.
    * `asset_restructure_handler.py`: Defines `AssetRestructureHandler`. Handles the complex logic of modifying the `SourceRule` hierarchy when a user changes a file's target asset via the GUI, ensuring data integrity. Triggered by signals from the model.
    * `base_prediction_handler.py`: Defines the abstract `BasePredictionHandler` class, providing a common interface and threading (`QRunnable`) for prediction tasks.
    * `prediction_handler.py`: Defines `RuleBasedPredictionHandler` (inheriting from `BasePredictionHandler`). Generates the initial `SourceRule` hierarchy with predicted values based on input files and the selected preset rules. Runs in a background thread.
    * `llm_prediction_handler.py`: Defines `LLMPredictionHandler` (inheriting from `BasePredictionHandler`). An experimental handler that uses an LLM for prediction. Runs in a background thread.
    * `llm_interaction_handler.py`: Defines `LLMInteractionHandler`. Manages the communication details (API calls, prompt construction) with the LLM service, used by `LLMPredictionHandler`.
* `utils/`: Directory containing shared utility modules.
    * `workspace_utils.py`: Provides functions for managing processing workspaces, such as creating temporary directories and extracting archives (`prepare_processing_workspace`). Used by `main.py` (`ProcessingTask`) and `monitor.py`.
    * `prediction_utils.py`: Provides utility functions related to prediction, such as generating a `SourceRule` from an archive (`generate_source_rule_from_archive`), used by `monitor.py`.
* `blenderscripts/`: Contains Python scripts (`create_nodegroups.py`, `create_materials.py`) designed to be executed *within* Blender for post-processing.
* `Presets/`: Contains supplier-specific configuration files in JSON format, used by the `RuleBasedPredictionHandler` for initial rule generation.
* `Testfiles/`: Contains example input assets for testing purposes.
* `Tickets/`: Directory for issue and feature tracking using Markdown files.
* `Deprecated/`: Contains older code, documentation, and proof-of-concept scripts that are no longer actively used.
## `ProcessingEngine` (`processing_engine.py`)
The `ProcessingEngine` class is the new core component responsible for executing the asset processing pipeline for a *single* input asset. Unlike the older `AssetProcessor`, this engine operates *solely* based on a complete `SourceRule` object provided to its `process()` method and the static `Configuration` object passed during initialization. It contains no internal prediction, classification, or fallback logic. Its key responsibilities include:

* Setting up and cleaning up a temporary workspace for processing (potentially using `utils.workspace_utils`).
* Extracting or copying input files to the workspace.
* Processing files based on the explicit rules and predicted values contained within the input `SourceRule`.
* Processing texture maps (resizing, format/bit depth conversion, inversion, stats calculation) using parameters from the `SourceRule` or static `Configuration`.
* Generating the `metadata.json` file containing details about the processed asset, incorporating information from the `SourceRule`.
* Organizing the final output files into the structured library directory.
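As an illustration of the metadata step, the engine's `metadata.json` output can be pictured as serializing a summary assembled from the `SourceRule`. The exact schema is not documented here; the fields below are invented for the sketch:

```python
import json
from pathlib import Path

def write_metadata(output_dir: str, asset_name: str, supplier: str,
                   processed_files: list) -> Path:
    """Write an illustrative metadata.json for one processed asset."""
    metadata = {
        "asset_name": asset_name,       # from the AssetRule
        "supplier": supplier,           # from the SourceRule
        "files": processed_files,       # e.g. [{"name": ..., "item_type": ...}]
    }
    path = Path(output_dir) / "metadata.json"
    path.write_text(json.dumps(metadata, indent=2), encoding="utf-8")
    return path
```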
## `Rule Structure` (`rule_structure.py`)

This module defines the data structures used to represent the hierarchical processing rules:

* `SourceRule`: A dataclass representing rules applied at the source level. It contains nested `AssetRule` objects.
* `AssetRule`: A dataclass representing rules applied at the asset level. It contains nested `FileRule` objects.
* `FileRule`: A dataclass representing rules applied at the file level.

These classes hold specific rule parameters (e.g., `supplier_identifier`, `asset_type`, `asset_type_override`, `item_type`, `item_type_override`, `target_asset_name_override`). Attributes like `asset_type` and `item_type_override` use string types, which are validated against centralized lists in `config/app_settings.json`. These structures support serialization (Pickle, JSON) so they can be passed between different parts of the application, including across process boundaries.
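As a rough sketch of this pattern (the field names beyond those listed above are illustrative, not the actual definitions), nested dataclasses serialize cleanly to JSON for crossing process boundaries:

```python
from dataclasses import asdict, dataclass, field
import json

# Hypothetical, simplified versions of the rule dataclasses.
@dataclass
class FileRule:
    file_path: str
    item_type: str = ""
    item_type_override: str = ""
    target_asset_name_override: str = ""

@dataclass
class AssetRule:
    asset_name: str
    asset_type: str = ""
    file_rules: list = field(default_factory=list)

@dataclass
class SourceRule:
    source_path: str
    supplier_identifier: str = ""
    asset_rules: list = field(default_factory=list)

rule = SourceRule(
    source_path="input.zip",
    supplier_identifier="ExampleSupplier",
    asset_rules=[AssetRule("BrickWall", "Surface",
                           file_rules=[FileRule("BrickWall_col.png", "COL")])],
)
payload = json.dumps(asdict(rule))   # serialize the whole hierarchy
restored = json.loads(payload)       # deserialize (as plain dicts)
```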
## `Configuration` (`configuration.py`)

The `Configuration` class manages the tool's settings. It is responsible for:

* Loading the core default settings defined in `config/app_settings.json`.
* Loading the supplier-specific rules from a selected preset JSON file (`Presets/*.json`).
* Merging the core settings and preset rules into a single, unified configuration object.
* Validating the loaded configuration to ensure required settings are present.
An instance of the `Configuration` class is typically created once per application run (or per processing batch) and passed to the `ProcessingEngine`.
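A minimal sketch of the load-and-merge behavior described above (the file names follow the text; the merge strategy shown, with preset keys overriding core defaults, is an assumption):

```python
import json
from pathlib import Path

def load_configuration(preset_name: str,
                       settings_path: str = "config/app_settings.json",
                       presets_dir: str = "Presets") -> dict:
    """Merge core defaults with a supplier preset (preset wins on conflict)."""
    core = json.loads(Path(settings_path).read_text(encoding="utf-8"))
    preset_file = Path(presets_dir) / f"{preset_name}.json"
    preset = json.loads(preset_file.read_text(encoding="utf-8"))
    merged = {**core, **preset}          # shallow merge; preset overrides core
    for key in ("ASSET_TYPE_DEFINITIONS", "FILE_TYPE_DEFINITIONS"):
        if key not in merged:            # validate required settings
            raise ValueError(f"Missing required setting: {key}")
    return merged
```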
## GUI Components (`gui/`)

The GUI has been refactored into several key components:
### `MainWindow` (`gui/main_window.py`)

The `MainWindow` class acts as the main application window and **coordinator** for the GUI. Its primary responsibilities now include:

* Setting up the main window structure and menu bar.
* Instantiating and arranging the major GUI widgets:
    * `MainPanelWidget` (containing core controls and the rule editor)
    * `PresetEditorWidget`
    * `LogConsoleWidget`
* Connecting signals and slots between these widgets, the underlying models (`UnifiedViewModel`), and background handlers (`RuleBasedPredictionHandler`, `LLMPredictionHandler`, `LLMInteractionHandler`).
* Managing the overall application state related to GUI interactions (e.g., enabling and disabling controls).
* Handling top-level actions such as loading sources (drag-and-drop), initiating predictions, and starting the processing task (via `main.ProcessingTask`).
* Managing the `QThreadPool` used for running background prediction tasks.
* Implementing slots such as `_handle_prediction_completion` to update the model/view when prediction results are ready.
### `MainPanelWidget` (`gui/main_panel_widget.py`)

This widget contains the central part of the GUI, including:

* Controls for loading source files and directories.
* The preset selection dropdown.
* Buttons for initiating prediction and processing.
* The `RuleEditorWidget`, which houses the hierarchical rule view.
### `PresetEditorWidget` (`gui/preset_editor_widget.py`)

This widget provides the interface for managing presets:

* Loading, saving, and editing preset files (`Presets/*.json`).
* Displaying preset rules and settings.
### `LogConsoleWidget` (`gui/log_console_widget.py`)

This widget displays application logs within the GUI:

* Provides a text area for log messages.
* Integrates with Python's `logging` system via a custom `QtLogHandler`.
* Can be shown or hidden via the main window's "View" menu.
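The `QtLogHandler` integration can be sketched as a custom `logging.Handler` that forwards formatted records to a display callback. Here a plain function stands in for the Qt text widget; the real handler routes records through a Qt signal for thread safety:

```python
import logging

class QtLogHandler(logging.Handler):
    """Forward log records to a GUI display callback (sketch)."""
    def __init__(self, append_line):
        super().__init__()
        self.append_line = append_line  # e.g. a slot appending to a text widget

    def emit(self, record):
        self.append_line(self.format(record))

lines = []
handler = QtLogHandler(lines.append)
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("processing started")
```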
### `UnifiedViewModel` (`gui/unified_view_model.py`)

The `UnifiedViewModel` implements a `QAbstractItemModel` for use with Qt's model-view architecture. It is specifically designed to:

* Wrap a list of `SourceRule` objects and expose their hierarchical structure (Source -> Asset -> File) to a `QTreeView` (the Unified Hierarchical View).
* Provide the methods (`data`, `index`, `parent`, `rowCount`, `columnCount`, `flags`, `setData`) required by `QAbstractItemModel` so the `QTreeView` can display the rule hierarchy and support inline editing of specific attributes (e.g., `supplier_override`, `asset_type_override`, `item_type_override`, `target_asset_name_override`).
* Handle data-editing requests (`setData`) by validating input and updating the underlying `RuleHierarchyModel`. **Note:** Complex restructuring logic (e.g., moving files between assets when `target_asset_name_override` changes) is delegated to the `AssetRestructureHandler`.
* Determine row background colors based on the `asset_type` and `item_type`/`item_type_override`, using color metadata from the `Configuration`.
* Hold the `SourceRule` data (via `RuleHierarchyModel`) that is the single source of truth for the GUI's processing rules.
* Cache configuration data (`ASSET_TYPE_DEFINITIONS`, `FILE_TYPE_DEFINITIONS`, color maps) during initialization for improved performance in the `data()` method.
* Provide the `update_rules_for_sources` method, which intelligently merges new prediction results into the existing model data, preserving user overrides where possible.
### `RuleHierarchyModel` (`gui/rule_hierarchy_model.py`)

A simpler, non-Qt model used internally by `UnifiedViewModel` to manage the list of `SourceRule` objects and provide methods for accessing and modifying the hierarchy.
### `AssetRestructureHandler` (`gui/asset_restructure_handler.py`)

This handler contains the complex logic required to modify the `SourceRule` hierarchy when a file's target asset is changed via the GUI's `UnifiedViewModel`. It:

* Is triggered by a signal (`targetAssetOverrideChanged`) from the `UnifiedViewModel`.
* Uses dedicated methods on the `RuleHierarchyModel` (`moveFileRule`, `createAssetRule`, `removeAssetRule`) to safely move `FileRule` objects between `AssetRule`s, creating or removing `AssetRule`s as needed.
* Ensures data consistency during these potentially complex restructuring operations.
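The move/create/remove sequence can be illustrated with a plain-Python sketch; the function and field names here are invented for illustration, with dicts standing in for the dataclasses:

```python
def move_file_to_asset(source, file_rule, target_asset_name):
    """Move a file rule to the named asset, creating or pruning assets."""
    # Remove the file from its current asset.
    old_asset = next(a for a in source["assets"] if file_rule in a["files"])
    old_asset["files"].remove(file_rule)
    # Find or create the target asset.
    target = next((a for a in source["assets"]
                   if a["name"] == target_asset_name), None)
    if target is None:
        target = {"name": target_asset_name, "files": []}
        source["assets"].append(target)
    target["files"].append(file_rule)
    # Prune assets left empty by the move.
    source["assets"] = [a for a in source["assets"] if a["files"]]
    return source
```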
### `Delegates` (`gui/delegates.py`)

This module contains custom `QStyledItemDelegate` implementations used by the Unified Hierarchical View (`QTreeView`) to provide inline editors for specific data types or rule attributes. Examples include:

* `ComboBoxDelegate`: For selecting from predefined lists of allowed asset and file types, sourced from the `Configuration` (originally from `config/app_settings.json`).
* `LineEditDelegate`: For free-form text editing, such as the `target_asset_name_override`.
* `SupplierSearchDelegate`: For the "Supplier" column. Provides a `QLineEdit` with auto-completion suggestions loaded from `config/suppliers.json` and handles adding and saving new suppliers.

These delegates handle the presentation and editing of data within the tree view cells, interacting with the `UnifiedViewModel` to get and set data.
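The supplier persistence behind `SupplierSearchDelegate` can be sketched without Qt. The JSON layout shown here, a plain list of names, is an assumption, as are the helper names:

```python
import json
from pathlib import Path

def load_suppliers(path: str = "config/suppliers.json") -> list:
    """Return the persisted supplier names, or an empty list."""
    p = Path(path)
    return json.loads(p.read_text(encoding="utf-8")) if p.exists() else []

def add_supplier(name: str, path: str = "config/suppliers.json") -> list:
    """Append a new supplier (deduplicated, sorted) and persist it."""
    suppliers = load_suppliers(path)
    if name not in suppliers:
        suppliers = sorted(suppliers + [name])
        Path(path).write_text(json.dumps(suppliers, indent=2), encoding="utf-8")
    return suppliers
```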
## Prediction Handlers (`gui/`)

Prediction logic is handled by classes inheriting from a common base class, running in background threads.

### `BasePredictionHandler` (`gui/base_prediction_handler.py`)

An abstract base class (`QRunnable`) for prediction handlers. It defines the common structure and signals (`prediction_signal`) used by the specific predictor implementations and is designed to be run in a `QThreadPool`.
### `RuleBasedPredictionHandler` (`gui/prediction_handler.py`)

This class (inheriting from `BasePredictionHandler`) is responsible for generating the initial `SourceRule` hierarchy using predefined rules from presets. It:

* Takes an input source identifier, a file list, and a `Configuration` object.
* Analyzes files based on regex patterns and rules defined in the loaded preset.
* Constructs a `SourceRule` hierarchy with predicted values.
* Emits the `prediction_signal` with the generated `SourceRule` object.
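A toy illustration of the rule-based analysis step. The regex patterns and type labels below are invented examples, not the shipped preset rules:

```python
import re

# Example preset rules: map a file-type label to a filename regex.
PRESET_PATTERNS = {
    "COL": re.compile(r"_(col|albedo|diffuse)\.", re.IGNORECASE),
    "NRM": re.compile(r"_(nrm|normal)\.", re.IGNORECASE),
    "ROUGH": re.compile(r"_(rough|roughness)\.", re.IGNORECASE),
}

def predict_item_type(filename: str) -> str:
    """Return the first matching file-type label, or 'UNKNOWN'."""
    for item_type, pattern in PRESET_PATTERNS.items():
        if pattern.search(filename):
            return item_type
    return "UNKNOWN"
```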
### `LLMPredictionHandler` (`gui/llm_prediction_handler.py`)

An experimental predictor (inheriting from `BasePredictionHandler`) that uses a Large Language Model (LLM). It:

* Takes an input source identifier, a file list, and a `Configuration` object.
* Interacts with the `LLMInteractionHandler` to send data to the LLM and receive predictions.
* Parses the LLM response to construct a `SourceRule` hierarchy.
* Emits the `prediction_signal` with the generated `SourceRule` object.
### `LLMInteractionHandler` (`gui/llm_interaction_handler.py`)

This class manages the specifics of communicating with the configured LLM API:

* Constructs prompts based on templates and input data.
* Sends requests to the LLM endpoint.
* Receives and, where necessary, pre-processes the LLM's response before returning it to the `LLMPredictionHandler`.
## Utility Modules (`utils/`)

Common utility functions have been extracted into separate modules:
### `workspace_utils.py`

Contains functions related to managing the processing workspace:

* `prepare_processing_workspace`: Creates temporary directories, extracts archive files (ZIP, RAR, 7z), and returns the path to the prepared workspace. Used by `main.ProcessingTask` and `monitor.py`.
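A simplified, ZIP-only sketch of what such a helper can look like (the real function also handles RAR and 7z, which require third-party libraries):

```python
import shutil
import tempfile
import zipfile
from pathlib import Path

def prepare_processing_workspace(source: str) -> Path:
    """Create a temp directory and extract/copy the source into it (sketch)."""
    workspace = Path(tempfile.mkdtemp(prefix="asset_ws_"))
    src = Path(source)
    if src.suffix.lower() == ".zip":
        with zipfile.ZipFile(src) as archive:
            archive.extractall(workspace)
    else:  # plain directory: copy its contents instead
        shutil.copytree(src, workspace / src.name)
    return workspace
```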
### `prediction_utils.py`

Contains utility functions supporting prediction tasks:

* `generate_source_rule_from_archive`: A helper used by `monitor.py` to perform rule-based prediction directly on an archive file without needing the full GUI setup. It extracts files temporarily, runs prediction logic similar to the `RuleBasedPredictionHandler`'s, and returns a `SourceRule`.
## Monitor (`monitor.py`)

The `monitor.py` script implements the directory monitoring feature. It has been refactored to:

* Use `watchdog` to detect new archive files in the input directory.
* Use a `ThreadPoolExecutor` to process detected archives asynchronously in a `_process_archive_task` function.
* Within each task:
    * Load the necessary `Configuration`.
    * Call `utils.prediction_utils.generate_source_rule_from_archive` to obtain the `SourceRule`.
    * Call `utils.workspace_utils.prepare_processing_workspace` to set up the workspace.
    * Instantiate and run the `ProcessingEngine`.
    * Move the source archive to the 'processed' or 'error' directory.
    * Clean up the workspace.
## Summary

These key components, along with the refactored GUI structure and new utility modules, work together to provide the tool's functionality. The architecture emphasizes separation of concerns (configuration, rule generation, processing, UI), uses background processing for responsiveness (GUI prediction, Monitor tasks), and relies on the `SourceRule` object as the central data structure passed between stages of the workflow.
## Processing Pipeline

The `ProcessingEngine.process()` method orchestrates the following pipeline based on the `SourceRule` it is given.

The pipeline steps are:
1. **Workspace Preparation (External)**:
    * Before the `ProcessingEngine` is invoked, the calling code (e.g., `main.ProcessingTask`, `monitor._process_archive_task`) is responsible for setting up a temporary workspace.
    * This typically involves `utils.workspace_utils.prepare_processing_workspace`, which creates a temporary directory and extracts the input source (archive or folder) into it.
    * The path to this prepared workspace is passed to the `ProcessingEngine` during initialization.
2. **Prediction and Rule Generation (External)**:
    * Also handled before the `ProcessingEngine` is invoked.
    * Either the `RuleBasedPredictionHandler` or `LLMPredictionHandler` (triggered by the GUI), or `utils.prediction_utils.generate_source_rule_from_archive` (used by the Monitor), analyzes the input files and generates a `SourceRule` object.
    * This `SourceRule` contains the predicted classifications and any initial overrides.
    * If using the GUI, the user can review and modify these rules before processing begins.
    * The final `SourceRule` object is the primary input to the `ProcessingEngine.process()` method.
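The Source -> Asset -> File hierarchy carried by a `SourceRule` can be pictured as nested dataclasses; the field names below are illustrative, while the actual definitions live in `rule_structure.py`:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FileRule:
    # Hypothetical fields sketching the per-file rule level.
    file_path: str
    item_type: Optional[str] = None            # e.g. "COL", "NRM"
    item_type_override: Optional[str] = None
    target_asset_name_override: Optional[str] = None

@dataclass
class AssetRule:
    asset_name: str
    asset_type: Optional[str] = None
    files: List[FileRule] = field(default_factory=list)

@dataclass
class SourceRule:
    source_path: str
    supplier_identifier: Optional[str] = None
    assets: List[AssetRule] = field(default_factory=list)
```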
3. **File Inventory (`_inventory_and_classify_files`)**:
    * Scans the contents of the *already prepared* temporary workspace.
    * This step primarily inventories the files present; the *classification* (determining `item_type`, etc.) is taken directly from the input `SourceRule`.
    * Stores the file paths and their associated rules from the `SourceRule` in `self.classified_files`.
4. **Base Metadata Determination (`_determine_base_metadata`, `_determine_single_asset_metadata`)**:
    * Determines the base asset name, category, and archetype using the explicit values provided in the input `SourceRule` and the static `Configuration`. Overrides (such as `supplier_identifier`, `asset_type`, and `asset_name_override`) are taken directly from the `SourceRule`.
5. **Skip Check**:
    * If the `overwrite` flag (passed during initialization) is `False`, checks whether the final output directory for the determined asset name already exists and contains a `metadata.json` file.
    * If so, processing for this asset is skipped and the pipeline moves on to the next asset (if any).
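The skip check reduces to a small predicate, roughly:

```python
from pathlib import Path

def should_skip(output_dir: Path, overwrite: bool) -> bool:
    """Skip an asset when overwrite is off and output already has metadata."""
    return not overwrite and (output_dir / "metadata.json").is_file()
```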
6. **Map Processing (`_process_maps`)**:
    * Iterates through the files classified as texture maps in the `SourceRule`.
    * Loads each image with `cv2.imread`.
    * Handles Glossiness-to-Roughness inversion where required (roughness = 1.0 - normalized glossiness).
    * Resizes images to the target resolutions defined in the `Configuration` (Lanczos interpolation for downscaling; upscaling is avoided).
    * Determines the output bit depth and format (`.jpg`, `.png`, `.exr`) from `Configuration` rules and `SourceRule` overrides.
    * Converts data types as needed and saves the results with `cv2.imwrite`.
    * Calculates image statistics (min/max/mean).
    * Stores details about each processed map for the metadata step.
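The glossiness-to-roughness conversion is a plain inversion on normalized pixel data. A NumPy sketch (the engine performs this on images loaded via `cv2`; the helper name is illustrative):

```python
import numpy as np

def gloss_to_roughness(gloss: np.ndarray) -> np.ndarray:
    """Invert a glossiness map into roughness: rough = 1 - gloss.

    Works for 8-bit (0-255) or 16-bit (0-65535) integer maps by
    normalizing to float first, then scaling back to the input range.
    """
    info = np.iinfo(gloss.dtype)
    normalized = gloss.astype(np.float64) / info.max
    roughness = 1.0 - normalized
    return np.round(roughness * info.max).astype(gloss.dtype)
```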
7. **Map Merging (`_merge_maps_from_source`)**:
    * Iterates through the `MAP_MERGE_RULES` defined in the `Configuration`.
    * Identifies the required source maps based on the classified files in the `SourceRule`.
    * Loads the source channels, substituting defaults from the `Configuration` or `SourceRule` for any missing inputs.
    * Merges the channels with `cv2.merge`.
    * Determines the output format and bit depth, then saves the merged map.
    * Stores details about each merged map for the metadata step.
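A merge rule packs single-channel maps into one multi-channel image, substituting a constant default for any missing input. A NumPy sketch using `np.dstack` as a stand-in for `cv2.merge` (the example rule layout is illustrative):

```python
import numpy as np

def merge_channels(channels, defaults, shape):
    """Stack per-channel maps (or constant defaults) into one image.

    channels: list of 2-D float arrays in 0-1, or None if missing.
    defaults: per-channel fill values used when an input is missing.
    """
    planes = []
    for chan, default in zip(channels, defaults):
        if chan is None:
            planes.append(np.full(shape, default, dtype=np.float32))
        else:
            planes.append(chan.astype(np.float32))
    return np.dstack(planes)  # same layout cv2.merge would produce

# e.g. an "ORM"-style rule: occlusion, roughness, metallic
# merged = merge_channels([ao, rough, None], defaults=[1.0, 0.5, 0.0], shape=ao.shape)
```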
8. **Metadata File Generation (`_generate_metadata_file`)**:
    * Collects all determined information for the current asset (base metadata, processed and merged map details, the list of ignored files, the source preset used, etc.), derived from the `SourceRule` and the internal processing results.
    * Writes the collected data to `metadata.json` in the temporary workspace.
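Writing the file is ordinary `json` serialization; the keys shown here are illustrative, not the actual `metadata.json` schema:

```python
import json
from pathlib import Path

def write_metadata(workspace: Path, asset_name: str, supplier: str,
                   processed_maps: list, ignored_files: list) -> Path:
    """Write collected asset data to metadata.json (key names are illustrative)."""
    payload = {
        "asset_name": asset_name,
        "supplier": supplier,
        "maps": processed_maps,
        "ignored_files": ignored_files,
    }
    out = workspace / "metadata.json"
    out.write_text(json.dumps(payload, indent=2))
    return out
```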
9. **Output Organization (`_organize_output_files`)**:
    * Creates the final structured output directory (`<output_base_dir>/<supplier_name>/<asset_name>/`). The supplier name is taken from the `SourceRule`, so supplier overrides made in the GUI are respected in the output path.
    * Creates `Extra/`, `Unrecognised/`, and `Ignored/` subdirectories within the asset directory.
    * Moves the processed maps, merged maps, model files, `metadata.json`, and other classified files from the temporary workspace into their respective locations in the final output structure.
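The directory-creation portion of this step can be sketched as plain path joins (function name is illustrative):

```python
from pathlib import Path

def build_output_layout(output_base: Path, supplier: str, asset: str) -> dict:
    """Create the final asset directory plus its standard subdirectories."""
    asset_dir = output_base / supplier / asset
    layout = {"root": asset_dir}
    for sub in ("Extra", "Unrecognised", "Ignored"):
        layout[sub] = asset_dir / sub
    for path in layout.values():
        path.mkdir(parents=True, exist_ok=True)
    return layout
```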
10. **Workspace Cleanup (External)**:
    * After the `ProcessingEngine.process()` method completes (successfully or with errors), the *calling code* is responsible for cleaning up the temporary workspace created in Step 1. This is typically done in a `finally` block in the code that called `utils.workspace_utils.prepare_processing_workspace`.
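Because cleanup is the caller's responsibility, the caller-side lifecycle follows a `try`/`finally` pattern, roughly as below (`engine_factory` and `prepare_workspace` stand in for `ProcessingEngine` and `utils.workspace_utils.prepare_processing_workspace`):

```python
import shutil

def run_one_source(source_path, source_rule, engine_factory, prepare_workspace):
    """Caller-side lifecycle: prepare, process, always clean up."""
    workspace = prepare_workspace(source_path)
    try:
        engine = engine_factory(source_rule, workspace)
        return engine.process()
    finally:
        # The engine never deletes its own workspace; the caller does.
        shutil.rmtree(workspace, ignore_errors=True)
```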
11. **(Optional) Blender Script Execution (External)**:
    * If triggered (e.g., via CLI arguments or GUI controls), the orchestrating code (e.g., `main.ProcessingTask`) executes the corresponding Blender scripts (`blenderscripts/*.py`) using `subprocess.run` *after* the `ProcessingEngine.process()` call completes successfully.
    * *Note: Centralized logic for this was intended for `utils/blender_utils.py`, but that utility has not yet been implemented.* See `Developer Guide: Blender Integration Internals` for more details.
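Headless Blender invocations use Blender's standard `--background`/`--python` flags, with script arguments passed after `--`. A sketch that builds (but does not run) such a command; the script path used in the comment is illustrative:

```python
from pathlib import Path

def build_blender_command(blender_exe: str, script: Path, *script_args: str) -> list:
    """Build a headless Blender invocation; args after '--' reach the script."""
    cmd = [blender_exe, "--background", "--python", str(script)]
    if script_args:
        cmd += ["--", *script_args]
    return cmd

# The orchestrator would then call subprocess.run(cmd, check=True),
# e.g. with script=Path("blenderscripts/some_script.py").
```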
This pipeline, executed by the `ProcessingEngine`, provides a clear and explicit processing flow based on the complete rule set provided by the GUI or other interfaces.
## GUI Internals

The GUI is built using `PySide6`, which provides Python bindings for the Qt framework.
## Main Window (`gui/main_window.py`)

The `MainWindow` class acts as the central **coordinator** for the GUI application. It is responsible for:
* Setting up the main application window structure and menu bar.
* Instantiating and arranging the major GUI widgets:
    * `MainPanelWidget` (`gui/main_panel_widget.py`): Contains the core controls, preset selection, and the rule editor.
    * `PresetEditorWidget` (`gui/preset_editor_widget.py`): Handles preset loading, saving, and editing.
    * `LogConsoleWidget` (`gui/log_console_widget.py`): Displays application logs.
* Instantiating key models and handlers:
    * `UnifiedViewModel` (`gui/unified_view_model.py`): The model for the rule hierarchy view.
    * `LLMInteractionHandler` (`gui/llm_interaction_handler.py`): Manages communication with the LLM service.
* Connecting signals and slots between these components to orchestrate the application flow.
* Handling top-level user interactions such as drag-and-drop for loading sources (`add_input_paths`). This method also handles the "placeholder" state (no preset selected) by scanning directories or inspecting archives (ZIP) and creating placeholder `SourceRule`/`AssetRule`/`FileRule` objects to immediately populate the `UnifiedViewModel` with the file structure.
* Initiating predictions based on the selected preset mode (rule-based or LLM) when presets change or sources are added.
* Starting the processing task (`_on_process_requested`): this slot filters the `SourceRule` list obtained from the `UnifiedViewModel`, excluding sources where no asset has a `Target Asset` name assigned, before emitting the `start_backend_processing` signal. It also manages enabling and disabling controls.
* Managing the `QThreadPool` used to run background prediction tasks (`RuleBasedPredictionHandler`, `LLMPredictionHandler`).
* Implementing slots that handle results from background tasks:
    * `_handle_prediction_completion(source_id, source_rule_list)`: Receives results from either prediction handler via the `prediction_signal`. It calls `self.unified_view_model.update_rules_for_sources()` to update the view model, preserving user overrides where possible. For LLM predictions, it also triggers processing of the next item in the queue.
    * Slots that handle status updates from the LLM handler.
## Threading and Background Tasks

To keep the UI responsive, prediction tasks run in background threads managed by a `QThreadPool`.

* **`BasePredictionHandler` (`gui/base_prediction_handler.py`):** An abstract `QRunnable` base class defining the common interface and signals (`prediction_signal`, `status_signal`) for prediction tasks.
* **`RuleBasedPredictionHandler` (`gui/prediction_handler.py`):** Inherits from `BasePredictionHandler`. Runs as a `QRunnable` in the thread pool when a rule-based preset is selected. Generates the `SourceRule` hierarchy based on preset rules and emits `prediction_signal`.
* **`LLMPredictionHandler` (`gui/llm_prediction_handler.py`):** Inherits from `BasePredictionHandler`. Runs as a `QRunnable` in the thread pool when "- LLM Interpretation -" is selected. Interacts with the `LLMInteractionHandler`, parses the response, generates the `SourceRule` hierarchy for a *single* input item, and emits `prediction_signal` and `status_signal`.
* **`LLMInteractionHandler` (`gui/llm_interaction_handler.py`):** Manages the communication with the LLM service. It may perform network operations but typically runs synchronously within the `LLMPredictionHandler`'s thread.

*(Note: The actual processing via `ProcessingEngine` is now handled by `main.ProcessingTask`, which runs in a separate process managed outside the GUI's direct threading model, though the GUI initiates it.)*
## Communication (Signals and Slots)

Communication between the `MainWindow` (main UI thread) and the background prediction tasks relies on Qt's signals and slots mechanism, a thread-safe way for objects in different threads to communicate.

* Prediction handlers (`RuleBasedPredictionHandler`, `LLMPredictionHandler`) emit the signals defined on `BasePredictionHandler`:
    * `prediction_signal(source_id, source_rule_list)`: Indicates that prediction for a source is complete.
    * `status_signal(message)`: Provides status updates (primarily from the LLM handler).
* The `MainWindow` connects slots to these signals:
    * `prediction_signal` -> `MainWindow._handle_prediction_completion(source_id, source_rule_list)`
    * `status_signal` -> `MainWindow._on_status_update(message)` (updates the status bar)
* Signals from the `UnifiedViewModel` (`dataChanged`, `layoutChanged`) trigger updates in the `QTreeView`.
* The `UnifiedViewModel`'s `targetAssetOverrideChanged` signal triggers the `AssetRestructureHandler`.
## Preset Editor (`gui/preset_editor_widget.py`)

The `PresetEditorWidget` provides a dedicated interface for managing presets. It handles loading, displaying, editing, and saving preset `.json` files, and communicates with the `MainWindow` (e.g., via signals) when a preset is loaded or saved.
## Unified Hierarchical View (`gui/unified_view_model.py`, `gui/delegates.py`)

The core rule editing interface is built around a `QTreeView` managed within the `MainPanelWidget`, using a custom model and delegates.

* **`UnifiedViewModel` (`gui/unified_view_model.py`):** Implements `QAbstractItemModel`.
    * Wraps the `RuleHierarchyModel` to expose the `SourceRule` list (Source -> Asset -> File) to the `QTreeView`.
    * Provides data for display and flags for editing.
    * **Handles `setData` requests:** Validates input and updates the underlying `RuleHierarchyModel`. Crucially, it **delegates** complex restructuring (when `target_asset_name_override` changes) to the `AssetRestructureHandler` by emitting the `targetAssetOverrideChanged` signal.
    * **Row coloring:** Provides data for `Qt.ForegroundRole` (text color) based on the `item_type` and the colors defined in `config/app_settings.json`, and for `Qt.BackgroundRole` by calculating a 30% darker shade of the parent asset's background color.
    * **Caching:** Caches configuration data (`ASSET_TYPE_DEFINITIONS`, `FILE_TYPE_DEFINITIONS`, color maps) in `__init__` for performance.
    * **`update_rules_for_sources` method:** Intelligently merges new prediction results or placeholder rules into the existing model data, preserving user overrides where applicable.
    * *(Note: The previous concept of switching between "simple" and "detailed" display modes has been removed; the model always represents the full detailed structure.)*
* **`RuleHierarchyModel` (`gui/rule_hierarchy_model.py`):** A non-Qt model holding the actual list of `SourceRule` objects. Provides methods for accessing and modifying the hierarchy (used by the `UnifiedViewModel` and `AssetRestructureHandler`).
* **`AssetRestructureHandler` (`gui/asset_restructure_handler.py`):** Contains the logic to modify the `RuleHierarchyModel` when a file's target asset is changed. It listens for the `targetAssetOverrideChanged` signal from the `UnifiedViewModel` and uses methods on the `RuleHierarchyModel` (`moveFileRule`, `createAssetRule`, `removeAssetRule`) to perform the restructuring safely.
* **Delegates (`gui/delegates.py`):** Custom `QStyledItemDelegate` implementations provide inline editors:
    * **`ComboBoxDelegate`:** For selecting predefined types (from the `Configuration`).
    * **`LineEditDelegate`:** For free-form text editing.
    * **`SupplierSearchDelegate`:** For supplier names, with auto-completion backed by `config/suppliers.json`; new, unique supplier names entered by the user are added to the list and saved back to the JSON file.
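The 30% darker background shade is a straightforward per-channel scale toward black; a plain-Python sketch (the model itself works with `QColor`, whose `darker()` method provides the equivalent):

```python
def darken_rgb(rgb, factor=0.3):
    """Return rgb scaled toward black by `factor` (0.3 -> 30% darker)."""
    return tuple(round(c * (1.0 - factor)) for c in rgb)

# With Qt, QColor(r, g, b).darker(143) is roughly equivalent, since
# QColor.darker(f) divides each component's brightness by f/100.
```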
**Data Flow Diagram (GUI Rule Management - Refactored):**
```mermaid
graph TD
    subgraph MainWindow [MainWindow Coordinator]
        direction LR
        MW_Input["User Input (Drag/Drop, Preset Select)"] --> MW(MainWindow);
        MW -- Initiates --> PredPool{QThreadPool};
        MW -- Connects Signals --> VM(UnifiedViewModel);
        MW -- Connects Signals --> ARH(AssetRestructureHandler);
        MW -- Owns/Manages --> MPW(MainPanelWidget);
        MW -- Owns/Manages --> PEW(PresetEditorWidget);
        MW -- Owns/Manages --> LCW(LogConsoleWidget);
        MW -- Owns/Manages --> LLMIH(LLMInteractionHandler);
    end

    subgraph MainPanel [MainPanelWidget]
        direction TB
        MPW_UI["UI Controls (Load, Predict, Process Btns)"];
        MPW_UI --> MPW;
        MPW -- Contains --> REW(RuleEditorWidget);
    end

    subgraph RuleEditor [RuleEditorWidget]
        direction TB
        REW -- Contains --> TV(QTreeView - Rule View);
    end

    subgraph Prediction [Background Prediction]
        direction TB
        PredPool -- Runs --> RBP(RuleBasedPredictionHandler);
        PredPool -- Runs --> LLMP(LLMPredictionHandler);
        LLMP -- Uses --> LLMIH;
        RBP -- prediction_signal --> MW;
        LLMP -- prediction_signal --> MW;
        LLMP -- status_signal --> MW;
    end

    subgraph ModelView [Model/View Components]
        direction TB
        TV -- Sets Model --> VM;
        TV -- Displays Data From --> VM;
        TV -- Uses Delegates --> Del(Delegates);
        UserEdit[User Edits Rules] --> TV;
        TV -- setData --> VM;
        VM -- Wraps --> RHM(RuleHierarchyModel);
        VM -- dataChanged/layoutChanged --> TV;
        VM -- targetAssetOverrideChanged --> ARH;
        ARH -- Modifies --> RHM;
        Del -- Get/Set Data --> VM;
    end

    MW -- _handle_prediction_completion --> VM;
    MW -- Triggers Processing --> ProcTask(main.ProcessingTask);

    %% Connections between subgraphs
    MPW --> MW;
    PEW --> MW;
    LCW --> MW;
    VM --> MW;
    ARH --> MW;
    LLMIH --> MW;
    REW --> MPW;
```

## Application Styling

The application style is explicitly set to 'Fusion' in `gui/main_window.py` for a consistent look and feel across operating systems. A custom `QPalette` adjusts the default colors within the 'Fusion' style.

## Logging (`gui/log_console_widget.py`)

The `LogConsoleWidget` displays log messages captured by a custom `QtLogHandler`, which redirects output from Python's standard `logging` module into the GUI so users can see detailed application output and errors.

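
The pattern behind such a handler is a `logging.Handler` subclass whose `emit` forwards formatted records to the GUI. Below is a sketch with a plain callable standing in for the Qt signal the real `QtLogHandler` would use; the class and logger names are illustrative.

```python
import logging

class GuiLogHandler(logging.Handler):
    """Forward formatted log records to a GUI sink (here: any callable)."""

    def __init__(self, emit_line):
        super().__init__()
        self._emit_line = emit_line

    def emit(self, record):
        # Format and hand off; never let a logging error crash the GUI.
        try:
            self._emit_line(self.format(record))
        except Exception:
            self.handleError(record)

logger = logging.getLogger("asset_tool_demo")
logger.setLevel(logging.INFO)
lines = []                              # stands in for the log console widget
handler = GuiLogHandler(lines.append)
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
logger.addHandler(handler)
logger.info("processing started")       # appears in `lines`
```
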
## Cancellation

The GUI provides a "Cancel" button to stop ongoing processing. With the `ProcessingHandler` removed, cancellation logic is now likely handled within `main.ProcessingTask` or the code that manages it; the GUI button would signal this external task manager. Note that such cancellation is typically cooperative: it prevents further work from starting rather than instantly terminating steps already executing.

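
Since the section above is necessarily vague about where cancellation now lives, the following only sketches the usual cooperative pattern such a task manager could use: a shared `threading.Event` set by the Cancel button and checked between pipeline steps. Step names are illustrative.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def run_steps(steps, cancel_event):
    """Run steps in order, skipping the rest once cancellation is requested."""
    done = []
    for step in steps:
        if cancel_event.is_set():    # checked between steps, not mid-step
            break
        done.append(step())
    return done

cancel = threading.Event()
# The second step requests cancellation mid-pipeline (set() returns None,
# so the `or` yields the step's label).
steps = [lambda: "extract", lambda: cancel.set() or "predict", lambda: "process"]
with ThreadPoolExecutor(max_workers=1) as pool:
    result = pool.submit(run_steps, steps, cancel).result()
# "process" never runs because cancellation was requested before it started
```
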
## GUI Configuration Editor (`gui/config_editor_dialog.py`)

A dedicated dialog for editing the core application settings stored in `config/app_settings.json`.

* **Functionality:** Loads `config/app_settings.json` and presents it in a tabbed layout (e.g., "General", "Output & Naming"), allowing editing of basic fields, the definitions tables (`FILE_TYPE_DEFINITIONS`, `ASSET_TYPE_DEFINITIONS`) with dynamic color editing, and a list/detail view for merge rules (`MAP_MERGE_RULES`).
* **Limitations:** Editing complex fields like `IMAGE_RESOLUTIONS` or the full details of `MAP_MERGE_RULES` via the UI may still be limited.
* **Integration:** Launched by `MainWindow` via "Edit" -> "Preferences...".
* **Persistence:** Changes are written directly to `config/app_settings.json` and persist across sessions. Because the `Configuration` class loads settings at startup, an application restart is required before edits affect the processing logic.

The refactored GUI separates concerns into distinct widgets and handlers, coordinated by the `MainWindow`. Background tasks use `QThreadPool` and `QRunnable`. The `UnifiedViewModel` focuses on data presentation and simple edits, delegating complex restructuring to the `AssetRestructureHandler`.

@@ -4,30 +4,41 @@ This document provides technical details about the implementation of the Directo

## Overview

The `monitor.py` script provides an automated way to process assets by monitoring a specified input directory for new archive files. It has been refactored to use a `ThreadPoolExecutor` for asynchronous processing.

## Key Components

* **`watchdog` Library:** Used for monitoring file system events (specifically file creation) in the `INPUT_DIR`. A `PollingObserver` is typically used.
* **`concurrent.futures.ThreadPoolExecutor`:** Manages a pool of worker threads to process detected archives concurrently. The number of workers can often be configured (e.g., via a `NUM_WORKERS` environment variable).
* **`_process_archive_task` Function:** The core function executed by the thread pool for each detected archive. It encapsulates the entire processing workflow for a single archive.
* **`utils.prediction_utils.generate_source_rule_from_archive`:** A utility function called by `_process_archive_task` to perform rule-based prediction directly on the archive file and generate the necessary `SourceRule` object.
* **`utils.workspace_utils.prepare_processing_workspace`:** A utility function called by `_process_archive_task` to create a temporary workspace and extract the archive contents into it.
* **`ProcessingEngine` (`processing_engine.py`):** The core engine instantiated and run within `_process_archive_task` to perform the actual asset processing based on the generated `SourceRule`.
* **`Configuration` (`configuration.py`):** Loaded within `_process_archive_task` based on the preset derived from the archive filename.

## Functionality Details (Asynchronous Workflow)

1. **Watching:** A `watchdog` observer monitors the `INPUT_DIR` for file creation events.
2. **Event Handling:** When a file is created, an event handler (e.g., an `on_created` method) is triggered.
3. **Archive Detection:** The handler checks whether the new file is a supported archive type (e.g., `.zip`, `.rar`, `.7z`).
4. **Filename Parsing:** If it is a supported archive, the script attempts to parse the filename to extract the intended preset name (e.g., using a regex matching `[preset]_filename.ext`).
5. **Preset Validation:** The extracted preset name is validated against existing preset files (`Presets/*.json`).
6. **Task Submission:** If the preset is valid, the archive path and validated preset name are submitted as a task to the `ThreadPoolExecutor`, which eventually runs `_process_archive_task` with these arguments in a worker thread.
7. **`_process_archive_task` Execution (Worker Thread):**
    * **Load Configuration:** Loads the `Configuration` object using the provided preset name.
    * **Generate SourceRule:** Calls `utils.prediction_utils.generate_source_rule_from_archive` with the archive path and `Configuration`. This utility handles temporary extraction (if needed internally) and rule-based prediction, returning the `SourceRule`.
    * **Prepare Workspace:** Calls `utils.workspace_utils.prepare_processing_workspace` with the archive path. This creates a unique temporary directory, extracts the archive contents, and returns the workspace path. This step should be wrapped in a `try...finally` block to guarantee cleanup.
    * **Instantiate Engine:** Creates a `ProcessingEngine` instance with the loaded `Configuration` and the prepared workspace path.
    * **Run Processing:** Calls the `ProcessingEngine.process()` method with the generated `SourceRule`.
    * **Handle Results:** Based on the success or failure of processing, moves the original source archive from `INPUT_DIR` to either `PROCESSED_DIR` or `ERROR_DIR`.
    * **Cleanup Workspace:** Ensures the temporary workspace directory is removed (e.g., in the `finally` block).

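
The worker-thread steps above can be sketched end to end. This is a hypothetical outline, not the project's actual code: the real `Configuration`, prediction, workspace, and engine calls are injected here as callables so only the control flow (predict, prepare, process, route, clean up) is shown.

```python
import shutil
from pathlib import Path

def process_archive_task(archive_path, preset_name, processed_dir, error_dir,
                         load_config, generate_source_rule,
                         prepare_workspace, run_engine):
    """Process one archive, then move it to processed_dir or error_dir."""
    workspace = None
    try:
        config = load_config(preset_name)
        source_rule = generate_source_rule(archive_path, config)
        workspace = prepare_workspace(archive_path)   # temp dir + extraction
        run_engine(config, workspace, source_rule)    # ProcessingEngine.process()
        dest = Path(processed_dir)
    except Exception:
        dest = Path(error_dir)                        # failures route to ERROR_DIR
    finally:
        if workspace is not None:
            shutil.rmtree(workspace, ignore_errors=True)  # always clean up
    dest.mkdir(parents=True, exist_ok=True)
    return shutil.move(str(archive_path), str(dest / Path(archive_path).name))
```

The `try...finally` is the important part: the workspace is removed whether processing succeeds or raises, while the source archive is routed based on the outcome.
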
## Configuration

The monitor's behavior is controlled by environment variables or configuration settings, likely including `INPUT_DIR`, `OUTPUT_DIR`, `PROCESSED_DIR`, `ERROR_DIR`, `LOG_LEVEL`, `POLL_INTERVAL`, and potentially `NUM_WORKERS` to control the size of the `ThreadPoolExecutor`.

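
Reading these settings typically looks like the sketch below; the variable names follow the list above, but the defaults are illustrative assumptions, not the project's actual values.

```python
import os

def monitor_settings(env=os.environ):
    """Collect monitor settings from environment variables with fallbacks."""
    return {
        "input_dir": env.get("INPUT_DIR", "./input"),
        "processed_dir": env.get("PROCESSED_DIR", "./processed"),
        "error_dir": env.get("ERROR_DIR", "./error"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "poll_interval": float(env.get("POLL_INTERVAL", "5")),
        "num_workers": int(env.get("NUM_WORKERS", "4")),  # ThreadPoolExecutor size
    }

settings = monitor_settings({"INPUT_DIR": "/srv/in", "NUM_WORKERS": "8"})
```
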
## Limitations

* The monitor likely still does *not* support triggering the optional Blender script execution as a post-processing step; this integration point was complex and may have been removed or not yet reimplemented in the refactored workflow.

Understanding the asynchronous design (the role of the `ThreadPoolExecutor`, the `_process_archive_task` function, and the reliance on the utility modules `prediction_utils` and `workspace_utils`) is key to debugging or modifying the directory monitoring functionality.

Binary files not shown.
asset_processor.py (2763): file diff suppressed because it is too large.

@@ -1,7 +1,7 @@
{
    "ASSET_TYPE_DEFINITIONS": {
        "Surface": {
            "description": "A single Standard PBR material set for a surface.",
            "color": "#1f3e5d",
            "examples": [
                "WoodFloor01",
@@ -10,7 +10,7 @@
        },
        "Model": {
            "description": "A set that contains models, can include a PBR texture set",
            "color": "#b67300",
            "examples": [
                "Chair.fbx",
                "Character.obj"
@@ -18,7 +18,7 @@
        },
        "Decal": {
            "description": "An alpha-masked texture set",
            "color": "#68ac68",
            "examples": [
                "Graffiti01",
                "LeakStain03",
@@ -27,7 +27,7 @@
        },
        "Atlas": {
            "description": "A texture sheet containing multiple smaller textures.",
            "color": "#955b8b",
            "examples": [
                "FoliageAtlas",
                "UITextureSheet"
@@ -35,7 +35,7 @@
        },
        "UtilityMap": {
            "description": "A useful image asset consisting of only a single texture. Therefore each UtilityMap can only contain a single item.",
            "color": "#706b87",
            "examples": [
                "FlowMap",
                "CurvatureMap",
@@ -48,7 +48,7 @@
    "FILE_TYPE_DEFINITIONS": {
        "MAP_COL": {
            "description": "Color/Albedo Map",
            "color": "#ffaa00",
            "examples": [
                "_col.",
                "_basecolor.",
@@ -60,7 +60,7 @@
        },
        "MAP_NRM": {
            "description": "Normal Map",
            "color": "#cca2f1",
            "examples": [
                "_nrm.",
                "_normal."
@@ -70,7 +70,7 @@
        },
        "MAP_METAL": {
            "description": "Metalness Map",
            "color": "#dcf4f2",
            "examples": [
                "_metal.",
                "_met."
@@ -80,7 +80,7 @@
        },
        "MAP_ROUGH": {
            "description": "Roughness Map",
            "color": "#bfd6bf",
            "examples": [
                "_rough.",
                "_rgh.",
@@ -91,7 +91,7 @@
        },
        "MAP_AO": {
            "description": "Ambient Occlusion Map",
            "color": "#e3c7c7",
            "examples": [
                "_ao.",
                "_ambientocclusion."
@@ -101,7 +101,7 @@
        },
        "MAP_DISP": {
            "description": "Displacement/Height Map",
            "color": "#c6ddd5",
            "examples": [
                "_disp.",
                "_height."
@@ -111,7 +111,7 @@
        },
        "MAP_REFL": {
            "description": "Reflection/Specular Map",
            "color": "#c2c2b9",
            "examples": [
                "_refl.",
                "_specular."
@@ -121,7 +121,7 @@
        },
        "MAP_SSS": {
            "description": "Subsurface Scattering Map",
            "color": "#a0d394",
            "examples": [
                "_sss.",
                "_subsurface."
@@ -131,7 +131,7 @@
        },
        "MAP_FUZZ": {
            "description": "Fuzz/Sheen Map",
            "color": "#a2d1da",
            "examples": [
                "_fuzz.",
                "_sheen."
@@ -141,7 +141,7 @@
        },
        "MAP_IDMAP": {
            "description": "ID Map (for masking)",
            "color": "#ca8fb4",
            "examples": [
                "_id.",
                "_matid."
@@ -151,7 +151,7 @@
        },
        "MAP_MASK": {
            "description": "Generic Mask Map",
            "color": "#c6e2bf",
            "examples": [
                "_mask."
            ],
@@ -160,7 +160,7 @@
        },
        "MAP_IMPERFECTION": {
            "description": "Imperfection Map (scratches, dust)",
            "color": "#e6d1a6",
            "examples": [
                "_imp.",
                "_imperfection.",
@@ -175,7 +175,7 @@
        },
        "MODEL": {
            "description": "3D Model File",
            "color": "#3db2bd",
            "examples": [
                ".fbx",
                ".obj"
@@ -185,7 +185,7 @@
        },
        "EXTRA": {
            "description": "asset previews or metadata",
            "color": "#8c8c8c",
            "examples": [
                ".txt",
                ".zip",
@@ -200,7 +200,7 @@
        },
        "FILE_IGNORE": {
            "description": "File to be ignored",
            "color": "#673d35",
            "examples": [
                "Thumbs.db",
                ".DS_Store"
@@ -234,7 +234,7 @@
    "BLENDER_EXECUTABLE_PATH": "C:/Program Files/Blender Foundation/Blender 4.4/blender.exe",
    "PNG_COMPRESSION_LEVEL": 6,
    "JPG_QUALITY": 98,
    "RESOLUTION_THRESHOLD_FOR_JPG": 999999,
    "IMAGE_RESOLUTIONS": {
        "8K": 8192,
        "4K": 4096,
@@ -263,34 +263,66 @@
    ],
    "CALCULATE_STATS_RESOLUTION": "1K",
    "DEFAULT_ASSET_CATEGORY": "Surface",
    "TEMP_DIR_PREFIX": "_PROCESS_ASSET_",
    "llm_predictor_examples": [
        {
            "input": "MessyTextures/Concrete_Damage_Set/concrete_col.png\nMessyTextures/Concrete_Damage_Set/concrete_N.png\nMessyTextures/Concrete_Damage_Set/concrete_rough.jpg\nMessyTextures/Concrete_Damage_Set/height_map_concrete.tif\nMessyTextures/Concrete_Damage_Set/Thumbs.db\nMessyTextures/Fabric_Pattern/pattern_01_diffuse.tga\nMessyTextures/Fabric_Pattern/pattern_01_ao.png\nMessyTextures/Fabric_Pattern/pattern_01_normal.png\nMessyTextures/Fabric_Pattern/notes.txt\nMessyTextures/Fabric_Pattern/variant_blue_diffuse.tga\nMessyTextures/Fabric_Pattern/fabric_flat.jpg",
            "output": {
                "predicted_assets": [
                    {
                        "suggested_asset_name": "Concrete_Damage_01",
                        "predicted_asset_type": "Surface",
                        "files": [
                            {"file_path": "MessyTextures/Concrete_Damage_Set/concrete_col.png", "predicted_file_type": "MAP_COL"},
                            {"file_path": "MessyTextures/Concrete_Damage_Set/concrete_N.png", "predicted_file_type": "MAP_NRM"},
                            {"file_path": "MessyTextures/Concrete_Damage_Set/concrete_rough.jpg", "predicted_file_type": "MAP_ROUGH"},
                            {"file_path": "MessyTextures/Concrete_Damage_Set/height_map_concrete.tif", "predicted_file_type": "MAP_DISP"},
                            {"file_path": "MessyTextures/Concrete_Damage_Set/Thumbs.db", "predicted_file_type": "FILE_IGNORE"}
                        ]
                    },
                    {
                        "suggested_asset_name": "Fabric_Pattern_01",
                        "predicted_asset_type": "Surface",
                        "files": [
                            {"file_path": "MessyTextures/Fabric_Pattern/pattern_01_diffuse.tga", "predicted_file_type": "MAP_COL"},
                            {"file_path": "MessyTextures/Fabric_Pattern/pattern_01_ao.png", "predicted_file_type": "MAP_AO"},
                            {"file_path": "MessyTextures/Fabric_Pattern/pattern_01_normal.png", "predicted_file_type": "MAP_NRM"},
                            {"file_path": "MessyTextures/Fabric_Pattern/variant_blue_diffuse.tga", "predicted_file_type": "MAP_COL"},
                            {"file_path": "MessyTextures/Fabric_Pattern/fabric_flat.jpg", "predicted_file_type": "EXTRA"},
                            {"file_path": "MessyTextures/Fabric_Pattern/notes.txt", "predicted_file_type": "EXTRA"}
                        ]
                    }
                ]
@ -304,13 +336,232 @@
|
|||||||
"suggested_asset_name": "SciFi_Drone",
|
"suggested_asset_name": "SciFi_Drone",
|
||||||
"predicted_asset_type": "Model",
|
"predicted_asset_type": "Model",
|
||||||
"files": [
|
"files": [
|
||||||
{"file_path": "SciFi_Drone/Drone_Model.fbx", "predicted_file_type": "MODEL"},
|
{
|
||||||
{"file_path": "SciFi_Drone/Textures/Drone_BaseColor.png", "predicted_file_type": "MAP_COL"},
|
"file_path": "SciFi_Drone/Drone_Model.fbx",
|
||||||
{"file_path": "SciFi_Drone/Textures/Drone_Metallic.png", "predicted_file_type": "MAP_METAL"},
|
"predicted_file_type": "MODEL"
|
||||||
{"file_path": "SciFi_Drone/Textures/Drone_Roughness.png", "predicted_file_type": "MAP_ROUGH"},
|
},
|
||||||
{"file_path": "SciFi_Drone/Textures/Drone_Normal.png", "predicted_file_type": "MAP_NRM"},
|
{
|
||||||
{"file_path": "SciFi_Drone/Textures/Drone_Emissive.jpg", "predicted_file_type": "EXTRA"},
|
"file_path": "SciFi_Drone/Textures/Drone_BaseColor.png",
|
||||||
{"file_path": "SciFi_Drone/ReferenceImages/concept.jpg", "predicted_file_type": "EXTRA"}
|
"predicted_file_type": "MAP_COL"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"file_path": "SciFi_Drone/Textures/Drone_Metallic.png",
|
||||||
|
"predicted_file_type": "MAP_METAL"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"file_path": "SciFi_Drone/Textures/Drone_Roughness.png",
|
||||||
|
"predicted_file_type": "MAP_ROUGH"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"file_path": "SciFi_Drone/Textures/Drone_Normal.png",
|
||||||
|
"predicted_file_type": "MAP_NRM"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"file_path": "SciFi_Drone/Textures/Drone_Emissive.jpg",
|
||||||
|
"predicted_file_type": "EXTRA"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"file_path": "SciFi_Drone/ReferenceImages/concept.jpg",
|
||||||
|
"predicted_file_type": "EXTRA"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"input": "21_hairs_deposits.tif\n22_hairs_fabric.tif\n23_hairs_fibres.tif\n24_hairs_fibres.tif\n25_bonus_isolatedFingerprints.tif\n26_bonus_isolatedPalmprint.tif\n27_metal_aluminum.tif\n28_metal_castIron.tif\n29_scratcehes_deposits_shapes.tif\n30_scratches_deposits.tif",
|
||||||
|
"output": {
|
||||||
|
"predicted_assets": [
|
||||||
|
{
|
||||||
|
"suggested_asset_name": "21-Hairs-Deposits",
|
||||||
|
"predicted_asset_type": "UtilityMap",
|
||||||
|
"files": [
|
||||||
|
{
|
||||||
|
"file_path": "21_hairs_deposits.tif",
|
||||||
|
"predicted_file_type": "MAP_IMPERFECTION"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"suggested_asset_name": "22-Hairs-Fabric",
|
||||||
|
"predicted_asset_type": "UtilityMap",
|
||||||
|
"files": [
|
||||||
|
{
|
||||||
|
"file_path": "22_hairs_fabric.tif",
|
||||||
|
"predicted_file_type": "MAP_IMPERFECTION"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"suggested_asset_name": "23-Hairs-Deposits",
|
||||||
|
"predicted_asset_type": "UtilityMap",
|
||||||
|
"files": [
|
||||||
|
{
|
||||||
|
"file_path": "23_hairs_fibres.tif",
|
||||||
|
"predicted_file_type": "MAP_IMPERFECTION"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"suggested_asset_name": "24-Hairs-Fibres",
|
||||||
|
"predicted_asset_type": "UtilityMap",
|
||||||
|
"files": [
|
||||||
|
{
|
||||||
|
"file_path": "24_hairs_fibres.tif",
|
||||||
|
"predicted_file_type": "MAP_IMPERFECTION"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"suggested_asset_name": "27-MetalAluminium",
|
||||||
|
"predicted_asset_type": "UtilityMap",
|
||||||
|
"files": [
|
||||||
|
{
|
||||||
|
"file_path": "27_metal_aluminum.tif",
|
||||||
|
"predicted_file_type": "MAP_IMPERFECTION"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"suggested_asset_name": "28-MetalCastiron",
|
||||||
|
"predicted_asset_type": "UtilityMap",
|
||||||
|
"files": [
|
||||||
|
{
|
||||||
|
"file_path": "28_metal_castIron.tif",
|
||||||
|
"predicted_file_type": "MAP_IMPERFECTION"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"suggested_asset_name": "29-Scratches-Deposits-Shapes",
|
||||||
|
"predicted_asset_type": "UtilityMap",
|
||||||
|
"files": [
|
||||||
|
{
|
||||||
|
"file_path": "29_scratcehes_deposits_shapes.tif",
|
||||||
|
"predicted_file_type": "MAP_IMPERFECTION"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"suggested_asset_name": "30-Scrathes-Deposits",
|
||||||
|
"predicted_asset_type": "UtilityMap",
|
||||||
|
"files": [
|
||||||
|
{
|
||||||
|
"file_path": "30_scratches_deposits.tif",
|
||||||
|
"predicted_file_type": "MAP_IMPERFECTION"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"suggested_asset_name": "Bonus-IsolatedFingerprints",
|
||||||
|
"predicted_asset_type": "UtilityMap",
|
||||||
|
"files": [
|
||||||
|
{
|
||||||
|
"file_path": "25_bonus_isolatedFingerprints.tif",
|
||||||
|
"predicted_file_type": "MAP_IMPERFECTION"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"suggested_asset_name": "Bonus-IsolatedPalmprint",
|
||||||
|
"predicted_asset_type": "UtilityMap",
|
||||||
|
"files": [
|
||||||
|
{
|
||||||
|
"file_path": "26_bonus_isolatedPalmprint.tif",
|
||||||
|
"predicted_file_type": "MAP_IMPERFECTION"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
    {
      "input": "Part1/TextureSupply_Boards001_A_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_A_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_B_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_B_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_C_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_C_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_D_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_D_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_E_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_E_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_F_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_F_28x300cm-Normal.jpg",
      "output": {
        "predicted_assets": [
          {
            "suggested_asset_name": "Boards001_A",
            "predicted_asset_type": "Surface",
            "files": [
              { "file_path": "Part1/TextureSupply_Boards001_A_28x300cm-Albedo.jpg", "predicted_file_type": "MAP_COL" },
              { "file_path": "Part1/TextureSupply_Boards001_A_28x300cm-Normal.jpg", "predicted_file_type": "MAP_NRM" }
            ]
          },
          {
            "suggested_asset_name": "Boards001_B",
            "predicted_asset_type": "Surface",
            "files": [
              { "file_path": "Part1/TextureSupply_Boards001_B_28x300cm-Albedo.jpg", "predicted_file_type": "MAP_COL" },
              { "file_path": "Part1/TextureSupply_Boards001_B_28x300cm-Normal.jpg", "predicted_file_type": "MAP_NRM" }
            ]
          },
          {
            "suggested_asset_name": "Boards001_C",
            "predicted_asset_type": "Surface",
            "files": [
              { "file_path": "Part1/TextureSupply_Boards001_C_28x300cm-Albedo.jpg", "predicted_file_type": "MAP_COL" },
              { "file_path": "Part1/TextureSupply_Boards001_C_28x300cm-Normal.jpg", "predicted_file_type": "MAP_NRM" }
            ]
          },
          {
            "suggested_asset_name": "Boards001_D",
            "predicted_asset_type": "Surface",
            "files": [
              { "file_path": "Part1/TextureSupply_Boards001_D_28x300cm-Albedo.jpg", "predicted_file_type": "MAP_COL" },
              { "file_path": "Part1/TextureSupply_Boards001_D_28x300cm-Normal.jpg", "predicted_file_type": "MAP_NRM" }
            ]
          },
          {
            "suggested_asset_name": "Boards001_E",
            "predicted_asset_type": "Surface",
            "files": [
              { "file_path": "Part1/TextureSupply_Boards001_E_28x300cm-Albedo.jpg", "predicted_file_type": "MAP_COL" },
              { "file_path": "Part1/TextureSupply_Boards001_E_28x300cm-Normal.jpg", "predicted_file_type": "MAP_NRM" }
            ]
          },
          {
            "suggested_asset_name": "Boards001_F",
            "predicted_asset_type": "Surface",
            "files": [
              { "file_path": "Part1/TextureSupply_Boards001_F_28x300cm-Albedo.jpg", "predicted_file_type": "MAP_COL" },
              { "file_path": "Part1/TextureSupply_Boards001_F_28x300cm-Normal.jpg", "predicted_file_type": "MAP_NRM" }
            ]
          }
        ]
      }
    }
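The second example above shows the intended grouping: strip the supplier prefix and the dimension token, cluster files by the remaining base name, and map each filename suffix to a file type. A rough rule-based approximation of that behaviour (illustrative only; `SUFFIX_TO_TYPE` and the prefix pattern are assumptions for this sketch, not code from the repository):

```python
import re
from collections import defaultdict

# Map-name suffixes and the file types they imply (illustrative subset).
SUFFIX_TO_TYPE = {"albedo": "MAP_COL", "normal": "MAP_NRM", "roughness": "MAP_ROUGH"}

def group_by_base_name(paths):
    """Cluster texture paths into {asset_name: [(path, file_type), ...]}."""
    assets = defaultdict(list)
    for path in paths:
        stem = path.rsplit("/", 1)[-1].rsplit(".", 1)[0]
        base, _, suffix = stem.rpartition("-")
        file_type = SUFFIX_TO_TYPE.get(suffix.lower(), "EXTRA")
        # Drop supplier prefixes and trailing dimension tokens like "_28x300cm".
        base = re.sub(r"^TextureSupply_", "", base)
        base = re.sub(r"_\d+x\d+cm$", "", base)
        assets[base or stem].append((path, file_type))
    return dict(assets)
```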
@ -319,8 +570,8 @@
     ],
     "llm_endpoint_url": "http://100.65.14.122:1234/v1/chat/completions",
     "llm_api_key": "",
-    "llm_model_name": "local-model",
+    "llm_model_name": "",
     "llm_temperature": 0.5,
     "llm_request_timeout": 120,
-    "llm_predictor_prompt": "You are an expert asset classification system. Your task is to analyze a list of file paths from a directory, identify a pattern and then group them into logical assets, assigning an asset type and file type to each file.\\n\\n**Definitions:**\\n\\n* **Asset Types:** These define the overall category of an asset. Use one of the following keys for `predicted_asset_type`:\\n ```json\\n {ASSET_TYPE_DEFINITIONS}\\n ```\\n\\n* **File Types:** These define the specific purpose of each file. Use one of the following keys for `predicted_file_type`:\\n ```json\\n {FILE_TYPE_DEFINITIONS}\\n ```\\n\\n**Task:**\\n\\nGiven the following file list:\\n\\n```text\\n{FILE_LIST}\\n```\\n\\nAnalyze the file paths and names. Group the files into logical assets. For each asset, determine the most appropriate `predicted_asset_type` from the definitions above. For each file within an asset, determine the most appropriate `predicted_file_type` from the definitions above. Files that should be ignored (like Thumbs.db) should use the `FILE_IGNORE` type. Files that don't fit a standard map type but belong to the asset should use `EXTRA`.\\n\\n**Output Format:**\\n\\nYour response MUST be ONLY a single, perfectly valid JSON object adhering strictly to the structure below. Do NOT include any text before or after the JSON object. Ensure all strings are correctly quoted and escaped, and there are no trailing commas.\\n\\nCRITICAL: Ensure the output is strictly valid JSON parsable by standard libraries. This means NO comments (like // or /* */), NO trailing commas, and correct quoting/escaping of all strings.\\n\\n```json\\n{\\n \"predicted_assets\": [\\n {\\n \"suggested_asset_name\": \"string\", // Your best guess for a concise asset name based on file paths/names\\n \"predicted_asset_type\": \"string\", // Key from Asset Types definitions\\n \"files\": [\\n {\\n \"file_path\": \"string\", // Exact relative path from the input list\\n \"predicted_file_type\": \"string\" // Key from File Types definitions\\n },\\n // ... more files\\n ]\\n },\\n // ... more assets\\n ]\\n}\\n```\\n\\n**Examples:**\\n\\nHere are examples of input file lists and the desired JSON output:\\n\\n```json\\n[\\n {EXAMPLE_INPUT_OUTPUT_PAIRS}\\n]\\n```\\n\\nNow, process the provided file list and generate the JSON output."
+    "llm_predictor_prompt": "You are an expert asset classification system. Your task is to analyze a list of file paths from a directory, identify patterns based on directory structure and filenames, and then group related files into logical assets. For each grouped asset, you must suggest a concise asset name, determine the overall asset type, and for each file within that asset, assign its specific file type.\n\nDefinitions:\n\nAsset Types: These define the overall category of an asset. Use one of the following keys for predicted_asset_type:\njson\n{ASSET_TYPE_DEFINITIONS}\n\n\nFile Types: These define the specific purpose of each file. Use one of the following keys for predicted_file_type:\njson\n{FILE_TYPE_DEFINITIONS}\n\n\nCore Task & Grouping Logic:\n\n1. Analyze Input: Examine the provided FILE_LIST. Pay close attention to directory paths and filenames (including prefixes, suffixes, separators like underscores or hyphens, and file extensions).\n2. Identify Potential Assets: Look for patterns that indicate files belong together:\n - Common Base Name: Files sharing a significant common prefix before map-type identifiers (e.g., Concrete_Damage_Set/concrete_ followed by col.png, N.png, rough.jpg).\n - Directory Grouping: Files located within the same immediate directory are often related, especially if their names follow a pattern (e.g., all files directly under SciFi_Drone/Textures/).\n - Model Association: If a MODEL file type (like .fbx, .obj) is present, group it with texture files that share its base name or are located in a plausible associated directory (like Textures/).\n - Single-File Assets (Utility Maps): Files whose names strongly suggest a UtilityMap type (e.g., scratches.tif, FlowMap.png, 21_hairs_deposits.tif) should typically form their own asset, unless they clearly belong to a larger PBR set based on naming conventions. Remember UtilityMap assets usually contain only one file as per their definition.\n - Variations: Files indicating variations (e.g., _A, _B or _variant_blue) should be grouped logically.\n - If variations represent complete, distinct sets (like Boards001_A and Boards001_B in the examples), create separate assets for each variation.\n - If variations seem like alternative maps or supplementary files for a single core asset (like pattern_01_diffuse.tga and variant_blue_diffuse.tga in the examples), group them under one asset. Use the base name (e.g., Fabric_Pattern_01) for the asset.\n3. Group Files: Based on the identified patterns, group the file paths into logical predicted_assets.\n4. Determine Asset Type: For each asset group, determine the most appropriate predicted_asset_type by considering the types of files it contains (e.g., presence of a .fbx suggests Model; multiple PBR maps like MAP_COL, MAP_NRM, MAP_ROUGH suggest Surface; a single imperfection map suggests UtilityMap). Refer to the ASSET_TYPE_DEFINITIONS.\n5. Suggest Asset Name: For each asset, generate a suggested_asset_name. This should be concise and derived from the common base filename or the immediate parent directory name. Clean up the name (e.g., use CamelCase or underscores consistently, remove redundant info like dimensions if not essential).\n6. Assign File Types: For each file_path within an asset, determine the most appropriate predicted_file_type based on its name, extension, and context within the asset. Use the keys from FILE_TYPE_DEFINITIONS.\n - Use FILE_IGNORE for files that should be ignored (e.g., Thumbs.db, .DS_Store).\n - Use EXTRA for files that belong to the asset but don't fit a standard map type (e.g., previews, text files, non-standard maps like Emissive unless you add a specific type for it).\n\nInput File List:\n\ntext\n{FILE_LIST}\n\n\nOutput Format:\n\nYour response MUST be ONLY a single, perfectly valid JSON object adhering strictly to the structure below. Do NOT include any text, explanations, or introductory phrases before or after the JSON object. Ensure all strings are correctly quoted and escaped, and there are NO trailing commas or comments (//, /* */).\n\nCRITICAL: The output must be strictly valid JSON parsable by standard libraries.\n\njson\n{\n \"predicted_assets\": [\n {\n \"suggested_asset_name\": \"string\", // Concise asset name derived from common file parts or directory\n \"predicted_asset_type\": \"string\", // Key from Asset Types definitions\n \"files\": [\n {\n \"file_path\": \"string\", // Exact relative path from the input list\n \"predicted_file_type\": \"string\" // Key from File Types definitions\n },\n // ... more files\n ]\n },\n // ... more assets\n ]\n}\n\n\nExamples:\n\nHere are examples of input file lists and the desired JSON output, illustrating the grouping logic:\n\njson\n[\n {EXAMPLE_INPUT_OUTPUT_PAIRS}\n]\n\n\nNow, process the provided FILE_LIST and generate ONLY the JSON output according to these instructions."
 }
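The prompt above is a template with literal `{FILE_LIST}`, `{ASSET_TYPE_DEFINITIONS}`, `{FILE_TYPE_DEFINITIONS}`, and `{EXAMPLE_INPUT_OUTPUT_PAIRS}` placeholders, and it demands a bare JSON object in return. A minimal sketch of how such a template might be filled and the reply validated (`build_prompt` and `parse_llm_reply` are hypothetical names, not functions from the codebase):

```python
import json

def build_prompt(template, file_list, asset_types, file_types, examples):
    # Plain string replacement; the template uses literal {PLACEHOLDER} tokens,
    # so str.format() would choke on the JSON braces in the prompt body.
    return (template
            .replace("{FILE_LIST}", "\n".join(file_list))
            .replace("{ASSET_TYPE_DEFINITIONS}", json.dumps(asset_types, indent=2))
            .replace("{FILE_TYPE_DEFINITIONS}", json.dumps(file_types, indent=2))
            .replace("{EXAMPLE_INPUT_OUTPUT_PAIRS}", json.dumps(examples, indent=2)))

def parse_llm_reply(raw):
    # The prompt forbids any text around the JSON object, but local models often
    # wrap replies in markdown fences anyway; strip them defensively before parsing.
    cleaned = raw.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.split("\n", 1)[1].rsplit("```", 1)[0]
    data = json.loads(cleaned)  # raises ValueError on comments or trailing commas
    if "predicted_assets" not in data:
        raise ValueError("reply is missing 'predicted_assets'")
    return data
```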
138  gui/asset_restructure_handler.py  Normal file
@ -0,0 +1,138 @@
# gui/asset_restructure_handler.py
import logging
from PySide6.QtCore import QObject, Slot, QModelIndex
from PySide6.QtGui import QColor  # Might be needed if copying logic directly, though unlikely now
from pathlib import Path
from .unified_view_model import UnifiedViewModel  # Use relative import
from rule_structure import SourceRule, AssetRule, FileRule

log = logging.getLogger(__name__)


class AssetRestructureHandler(QObject):
    """
    Handles the model restructuring logic triggered by changes
    to FileRule target asset overrides in the UnifiedViewModel.
    """
    def __init__(self, model: UnifiedViewModel, parent=None):
        super().__init__(parent)
        if not isinstance(model, UnifiedViewModel):
            raise TypeError("AssetRestructureHandler requires a UnifiedViewModel instance.")
        self.model = model
        log.debug("AssetRestructureHandler initialized.")

    @Slot(QModelIndex, object)
    def handle_target_asset_override(self, index: QModelIndex, new_target_path: object):
        """
        Slot connected to UnifiedViewModel.targetAssetOverrideChanged.
        Orchestrates model changes based on the new target asset path.

        Args:
            index: The QModelIndex of the FileRule whose override changed.
            new_target_path: The new target asset path (string or None).
        """
        log.debug(f"Handler received targetAssetOverrideChanged: Index=({index.row()},{index.column()}), New Path='{new_target_path}'")

        if not index.isValid():
            log.warning("Handler received invalid index. Aborting.")
            return

        file_item = self.model.getItem(index)
        if not isinstance(file_item, FileRule):
            log.warning(f"Handler received index for non-FileRule item: {type(file_item)}. Aborting.")
            return

        # Ensure new_target_path is a string or None
        new_target_name = str(new_target_path).strip() if new_target_path is not None else None
        if new_target_name == "":
            new_target_name = None  # Treat empty string as None

        # --- Get necessary context ---
        old_parent_asset = getattr(file_item, 'parent_asset', None)
        if not old_parent_asset:
            log.error(f"Handler: File item '{Path(file_item.file_path).name}' has no parent asset. Cannot restructure.")
            # Note: Data change already happened in setData, cannot easily revert here.
            return

        source_rule = getattr(old_parent_asset, 'parent_source', None)
        if not source_rule:
            log.error(f"Handler: Could not find SourceRule for parent asset '{old_parent_asset.asset_name}'. Cannot restructure.")
            return

        # --- Logic based on the new target name ---
        target_parent_asset = None
        target_parent_index = QModelIndex()
        move_occurred = False

        # 1. Find existing target parent AssetRule within the same SourceRule
        if new_target_name:
            for i, asset in enumerate(source_rule.assets):
                if asset.asset_name == new_target_name:
                    target_parent_asset = asset
                    # Get index for the target parent
                    try:
                        source_rule_row = self.model._source_rules.index(source_rule)
                        source_rule_index = self.model.createIndex(source_rule_row, 0, source_rule)
                        target_parent_index = self.model.index(i, 0, source_rule_index)
                        if not target_parent_index.isValid():
                            log.error(f"Handler: Failed to create valid index for existing target parent '{new_target_name}'.")
                            target_parent_asset = None  # Reset if index is invalid
                    except ValueError:
                        log.error(f"Handler: Could not find SourceRule index while looking for target parent '{new_target_name}'.")
                        target_parent_asset = None  # Reset if index is invalid
                    break  # Found the asset

        # 2. Handle Move or Creation
        if target_parent_asset:
            # --- Move to Existing Parent ---
            if target_parent_asset != old_parent_asset:
                log.info(f"Handler: Moving file '{Path(file_item.file_path).name}' to existing asset '{target_parent_asset.asset_name}'.")
                if self.model.moveFileRule(index, target_parent_index):
                    move_occurred = True
                else:
                    log.error(f"Handler: Model failed to move file rule to existing asset '{target_parent_asset.asset_name}'.")
                    # Consider how to handle failure - maybe log and continue to cleanup?
            else:
                # Target is the same as the old parent. No move needed.
                log.debug(f"Handler: Target asset '{new_target_name}' is the same as the current parent. No move required.")

        elif new_target_name:  # Only create if a *new* specific target name was given
            # --- Create New Parent AssetRule and Move ---
            log.info(f"Handler: Creating new asset '{new_target_name}' and moving file '{Path(file_item.file_path).name}'.")
            # Create the new asset rule using the model's method
            new_asset_index = self.model.createAssetRule(source_rule, new_target_name, copy_from_asset=old_parent_asset)

            if new_asset_index.isValid():
                # Now move the file to the newly created asset
                if self.model.moveFileRule(index, new_asset_index):
                    move_occurred = True
                    target_parent_asset = new_asset_index.internalPointer()  # Update for cleanup check
                else:
                    log.error(f"Handler: Model failed to move file rule to newly created asset '{new_target_name}'.")
                    # If the move fails after creation, should we remove the created asset? Maybe.
                    # For now, just log the error.
            else:
                log.error(f"Handler: Model failed to create new asset rule '{new_target_name}'. Cannot move file.")

        else:  # new_target_name is None or empty
            # --- Clearing the override ---
            # Ideally, clearing an override would move the file back to its *original* parent,
            # but the current structure does not store the pre-override parent.
            # For now, assume the file stays in its *current* parent; cleanup below only runs
            # if that parent later becomes empty. A more robust solution might find the asset
            # matching the file's *directory* name.
            log.debug(f"Handler: Target asset override cleared for '{Path(file_item.file_path).name}'. File remains in parent '{old_parent_asset.asset_name}'.")
            # No move occurs in this simplified interpretation.

        # 3. Cleanup Empty Old Parent (only if a move occurred)
        # Check the old_parent_asset *after* the potential move
        if move_occurred and old_parent_asset and not old_parent_asset.files:
            log.info(f"Handler: Attempting to remove empty old parent asset '{old_parent_asset.asset_name}'.")
            if not self.model.removeAssetRule(old_parent_asset):
                log.warning(f"Handler: Model failed to remove empty old parent asset '{old_parent_asset.asset_name}'.")
        elif move_occurred:
            log.debug(f"Handler: Old parent asset '{old_parent_asset.asset_name}' still contains files. No removal needed.")

        log.debug(f"Handler finished processing targetAssetOverrideChanged for '{Path(file_item.file_path).name}'.")
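Stripped of the Qt model/index plumbing, the find-or-create-then-cleanup flow in `handle_target_asset_override` reduces to a few list operations. A plain-Python sketch of that core logic (the `Asset`/`Source` dataclasses here are stand-ins for the real `AssetRule`/`SourceRule`, not code from the repository):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Asset:
    name: str
    files: List[str] = field(default_factory=list)

@dataclass
class Source:
    assets: List[Asset] = field(default_factory=list)

def retarget_file(source: Source, old_asset: Asset, file_path: str, new_target: Optional[str]) -> None:
    """Move file_path from old_asset to the asset named new_target, creating it if needed."""
    if not new_target:
        return  # Clearing the override leaves the file where it is.
    # 1. Find an existing asset with the target name, else create one.
    target = next((a for a in source.assets if a.name == new_target), None)
    if target is None:
        target = Asset(name=new_target)
        source.assets.append(target)
    # 2. Move the file, unless it is already in the target asset.
    if target is not old_asset:
        old_asset.files.remove(file_path)
        target.files.append(file_path)
    # 3. Remove the old asset if the move emptied it.
    if not old_asset.files:
        source.assets.remove(old_asset)
```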
133  gui/base_prediction_handler.py  Normal file
@ -0,0 +1,133 @@
# gui/base_prediction_handler.py
import logging
import time
from abc import ABC, ABCMeta, abstractmethod
from pathlib import Path
from typing import List, Any

from PySide6.QtCore import QObject, Signal, Slot, QThread

# Assuming rule_structure defines SourceRule
try:
    from rule_structure import SourceRule
except ImportError:
    print("ERROR (BasePredictionHandler): Failed to import SourceRule. Predictions might fail.")
    # Define a placeholder if the import fails to allow type hinting
    class SourceRule: pass


# Combine metaclasses to avoid conflict between QObject and ABC
class QtABCMeta(type(QObject), ABCMeta):
    pass


log = logging.getLogger(__name__)


class BasePredictionHandler(QObject, ABC, metaclass=QtABCMeta):
    """
    Abstract base class for prediction handlers that generate SourceRule hierarchies.
    Designed to be run in a separate QThread.
    """
    # --- Standardized Signals ---
    # Emitted when prediction is successfully completed.
    # Args: input_source_identifier (str), results (List[SourceRule])
    prediction_ready = Signal(str, list)

    # Emitted when an error occurs during prediction.
    # Args: input_source_identifier (str), error_message (str)
    prediction_error = Signal(str, str)

    # Emitted for status updates during the prediction process.
    # Args: status_message (str)
    status_update = Signal(str)

    def __init__(self, input_source_identifier: str, parent: QObject = None):
        """
        Initializes the base handler.

        Args:
            input_source_identifier: The unique identifier for the input source (e.g., file path).
            parent: The parent QObject.
        """
        super().__init__(parent)
        self.input_source_identifier = input_source_identifier
        self._is_running = False
        self._is_cancelled = False  # Cancellation flag, checked by _perform_prediction

    @property
    def is_running(self) -> bool:
        """Returns True if the handler is currently processing."""
        return self._is_running

    @Slot()
    def run(self):
        """
        Main execution slot intended to be connected to QThread.started.
        Handles the overall process: setup, execution, error handling, signaling.
        """
        if self._is_running:
            log.warning(f"Handler for '{self.input_source_identifier}' is already running. Aborting.")
            return
        if self._is_cancelled:
            log.info(f"Handler for '{self.input_source_identifier}' was cancelled before starting.")
            # Optionally emit an error or a dedicated signal for pre-start cancellation
            return

        self._is_running = True
        self._is_cancelled = False  # Ensure the cancel flag is reset at start
        thread_id = QThread.currentThread()  # Use currentThread() for PySide6
        log.info(f"[{time.time():.4f}][T:{thread_id}] Starting prediction run for: {self.input_source_identifier}")
        self.status_update.emit(f"Starting analysis for '{Path(self.input_source_identifier).name}'...")

        try:
            # --- Execute Core Logic ---
            results = self._perform_prediction()

            if self._is_cancelled:
                log.info(f"Prediction cancelled during execution for: {self.input_source_identifier}")
                self.prediction_error.emit(self.input_source_identifier, "Prediction cancelled by user.")
            else:
                # --- Emit Success Signal ---
                log.info(f"[{time.time():.4f}][T:{thread_id}] Prediction successful for '{self.input_source_identifier}'. Emitting results.")
                self.prediction_ready.emit(self.input_source_identifier, results)
                self.status_update.emit(f"Analysis complete for '{Path(self.input_source_identifier).name}'.")

        except Exception as e:
            # --- Emit Error Signal ---
            log.exception(f"[{time.time():.4f}][T:{thread_id}] Error during prediction for '{self.input_source_identifier}': {e}")
            error_msg = f"Error analyzing '{Path(self.input_source_identifier).name}': {e}"
            self.prediction_error.emit(self.input_source_identifier, error_msg)
            # A status update here may be redundant if the error is surfaced elsewhere.

        finally:
            # --- Cleanup ---
            self._is_running = False
            log.info(f"[{time.time():.4f}][T:{thread_id}] Finished prediction run for: {self.input_source_identifier}")
            # Note: The thread itself should be managed (quit/deleteLater) by the caller
            # based on the signals emitted (prediction_ready, prediction_error).

    @Slot()
    def cancel(self):
        """
        Sets the cancellation flag. The running process should check this flag periodically.
        """
        log.info(f"Cancellation requested for handler: {self.input_source_identifier}")
        self._is_cancelled = True
        self.status_update.emit(f"Cancellation requested for '{Path(self.input_source_identifier).name}'...")

    @abstractmethod
    def _perform_prediction(self) -> List[SourceRule]:
        """
        Abstract method to be implemented by concrete subclasses.
        This method contains the specific logic for generating the SourceRule list.
        It should periodically check `self._is_cancelled`.

        Returns:
            A list of SourceRule objects representing the prediction results.

        Raises:
            Exception: If any critical error occurs during the prediction process.
        """
        pass
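The `QtABCMeta` shim above exists because `QObject` and `ABC` each carry their own metaclass, and Python rejects a class whose bases have unrelated metaclasses. The conflict and its fix can be reproduced without Qt (`FrameworkMeta`/`FrameworkObject` are stand-ins for Qt's metaclass and `QObject`, not real Qt classes):

```python
from abc import ABC, ABCMeta, abstractmethod

class FrameworkMeta(type):
    """Stand-in for Qt's QObject metaclass."""

class FrameworkObject(metaclass=FrameworkMeta):
    """Stand-in for QObject."""

# class Broken(FrameworkObject, ABC): ...
# -> TypeError: metaclass conflict: the metaclass of a derived class must be
#    a (non-strict) subclass of the metaclasses of all its bases

class CombinedMeta(FrameworkMeta, ABCMeta):
    """One metaclass deriving from both resolves the conflict."""

class BaseHandler(FrameworkObject, ABC, metaclass=CombinedMeta):
    @abstractmethod
    def _perform_prediction(self):
        ...

class ConcreteHandler(BaseHandler):
    def _perform_prediction(self):
        return ["rule"]
```

Abstract-method enforcement still works through `CombinedMeta`, so instantiating `BaseHandler` directly raises `TypeError` while `ConcreteHandler` is usable.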
@ -1,8 +1,10 @@
-from pathlib import Path
 # gui/delegates.py
 from PySide6.QtWidgets import QStyledItemDelegate, QLineEdit, QComboBox
 from PySide6.QtCore import Qt, QModelIndex
-# Import the new config dictionaries
-from configuration import load_base_config # Import load_base_config
+# Import Configuration and ConfigurationError
+from configuration import Configuration, ConfigurationError, load_base_config # Keep load_base_config for SupplierSearchDelegate
+from PySide6.QtWidgets import QListWidgetItem # Import QListWidgetItem
+
 import json
 import logging
@ -40,29 +42,49 @@ class LineEditDelegate(QStyledItemDelegate):
 class ComboBoxDelegate(QStyledItemDelegate):
     """
     Delegate for editing string values from a predefined list using a QComboBox.
-    Determines the list source based on column index.
+    Determines the list source based on column index by accessing the
+    UnifiedViewModel directly.
     """
+    # REMOVED main_window parameter
+    def __init__(self, parent=None):
+        super().__init__(parent)
+        # REMOVED self.main_window store
+
     def createEditor(self, parent, option, index: QModelIndex):
         # Creates the QComboBox editor widget.
         editor = QComboBox(parent)
         column = index.column()
-        model = index.model() # Get the model instance
+        model = index.model() # GET model from index
+
         # Add a "clear" option first, associating None with it.
         editor.addItem("---", None) # UserData = None

-        # Populate based on column using keys from config dictionaries
-        items_keys = None
-        try:
-            base_config = load_base_config() # Load base config
-            if column == 2: # Asset-Type Override (AssetRule)
-                items_keys = list(base_config.get('ASSET_TYPE_DEFINITIONS', {}).keys()) # Access from base_config
-            elif column == 4: # Item-Type Override (FileRule)
-                items_keys = list(base_config.get('FILE_TYPE_DEFINITIONS', {}).keys()) # Access from base_config
-        except Exception as e:
-            log.error(f"Error loading base config for ComboBoxDelegate: {e}")
-            items_keys = [] # Fallback to empty list on error
+        # Populate based on column by accessing the model's cached keys
+        items_keys = [] # Default to empty list
+
+        # --- Get keys directly from the UnifiedViewModel ---
+        # Check if the model is the correct type and has the attributes
+        if hasattr(model, '_asset_type_keys') and hasattr(model, '_file_type_keys'):
+            try:
+                # Use column constants from the model if available
+                COL_ASSET_TYPE = getattr(model, 'COL_ASSET_TYPE', 3) # Default fallback
+                COL_ITEM_TYPE = getattr(model, 'COL_ITEM_TYPE', 4) # Default fallback
+
+                if column == COL_ASSET_TYPE:
+                    items_keys = model._asset_type_keys # Use cached keys
+                elif column == COL_ITEM_TYPE:
+                    items_keys = model._file_type_keys # Use cached keys
+                # else: # Handle other columns if necessary (optional)
+                #     log.debug(f"ComboBoxDelegate applied to unexpected column: {column}")
+            except Exception as e:
+                log.error(f"Error getting keys from UnifiedViewModel in ComboBoxDelegate: {e}")
+                items_keys = [] # Fallback on error
+        else:
+            log.warning("ComboBoxDelegate: Model is not a UnifiedViewModel or is missing key attributes (_asset_type_keys, _file_type_keys). Dropdown may be empty.")
+        # --- End key retrieval from model ---
+
+        # REMOVED the entire block that loaded Configuration based on main_window preset
+
         if items_keys:
             for item_key in sorted(items_keys): # Sort keys alphabetically for consistency
340  gui/llm_interaction_handler.py  Normal file
@ -0,0 +1,340 @@
import os
import logging
from pathlib import Path

from PySide6.QtCore import QObject, Signal, QThread, Slot, QTimer

# --- Backend Imports ---
# Assuming these might be needed based on MainWindow's usage
try:
    from configuration import Configuration, ConfigurationError, load_base_config
    from .llm_prediction_handler import LLMPredictionHandler # Backend handler
    from rule_structure import SourceRule # For signal emission type hint
except ImportError as e:
    logging.getLogger(__name__).critical(f"Failed to import backend modules for LLMInteractionHandler: {e}")
    LLMPredictionHandler = None
    load_base_config = None
    ConfigurationError = Exception
    SourceRule = None # Define as None if import fails

log = logging.getLogger(__name__)


class LLMInteractionHandler(QObject):
    """
    Handles the logic for interacting with the LLM prediction service,
    including managing the queue, thread, and communication.
    """
    # Signals to communicate results/status back to MainWindow or other components
    llm_prediction_ready = Signal(str, list) # input_path, List[SourceRule]
    llm_prediction_error = Signal(str, str) # input_path, error_message
    llm_status_update = Signal(str) # status_message
    llm_processing_state_changed = Signal(bool) # is_processing (True when busy, False when idle)

    def __init__(self, main_window_ref, parent=None):
        """
        Initializes the handler.

        Args:
            main_window_ref: A reference to the MainWindow instance for accessing
                shared components like the status bar or models if needed.
            parent: The parent QObject.
        """
        super().__init__(parent)
        self.main_window = main_window_ref # Store reference if needed for status updates etc.
        self.llm_processing_queue = [] # Unified queue for initial adds and re-interpretations
        self.llm_prediction_thread = None
        self.llm_prediction_handler = None
        self._is_processing = False # Internal flag to track processing state

    def _set_processing_state(self, processing: bool):
        """Updates the internal processing state and emits a signal."""
        if self._is_processing != processing:
            self._is_processing = processing
            log.debug(f"LLM Handler processing state changed to: {processing}")
            self.llm_processing_state_changed.emit(processing)

    @Slot(str, list)
    def queue_llm_request(self, input_path: str, file_list: list | None):
        """Adds a request to the LLM processing queue."""
        log.debug(f"Queueing LLM request for '{input_path}'. Current queue size: {len(self.llm_processing_queue)}")
        # Avoid duplicates: check whether this path is already queued
        is_in_queue = any(item[0] == input_path for item in self.llm_processing_queue)
        if not is_in_queue:
            self.llm_processing_queue.append((input_path, file_list))
            log.info(f"Added '{input_path}' to LLM queue. New size: {len(self.llm_processing_queue)}")
            # If not currently processing, start the queue
            if not self._is_processing:
                # Use QTimer.singleShot to avoid immediate processing if called rapidly
                QTimer.singleShot(0, self._process_next_llm_item)
        else:
            log.debug(f"Skipping duplicate add to LLM queue for: {input_path}")

    @Slot(list)
    def queue_llm_requests_batch(self, requests: list[tuple[str, list | None]]):
        """Adds multiple requests to the LLM processing queue."""
        added_count = 0
        for input_path, file_list in requests:
            is_in_queue = any(item[0] == input_path for item in self.llm_processing_queue)
            if not is_in_queue:
                self.llm_processing_queue.append((input_path, file_list))
                added_count += 1
            else:
                log.debug(f"Skipping duplicate add to LLM queue for: {input_path}")

        if added_count > 0:
            log.info(f"Added {added_count} requests to LLM queue. New size: {len(self.llm_processing_queue)}")
            # If not currently processing, start the queue
            if not self._is_processing:
                QTimer.singleShot(0, self._process_next_llm_item)

    # --- Methods moved from MainWindow ---

    @Slot()
    def _reset_llm_thread_references(self):
        """Resets LLM thread and handler references after the thread finishes."""
        log.debug("--> Entered LLMInteractionHandler._reset_llm_thread_references")
        log.debug("Resetting LLM prediction thread and handler references.")
        self.llm_prediction_thread = None
        self.llm_prediction_handler = None
        # --- Process next item now that the previous thread is fully finished ---
        log.debug("Previous LLM thread finished. Triggering processing of the next item via _process_next_llm_item...")
        self._set_processing_state(False) # Mark processing as finished *before* trying the next item
        # Use QTimer.singleShot to yield control briefly before starting the next item
        QTimer.singleShot(0, self._process_next_llm_item)
        log.debug("<-- Exiting LLMInteractionHandler._reset_llm_thread_references")

    def _start_llm_prediction(self, input_path_str: str, file_list: list = None):
        """
        Sets up and starts the LLMPredictionHandler in a separate thread.
        Emits signals for results, errors, or status updates.
        If file_list is not provided, it will be extracted.
        """
        log.debug(f"Attempting to start LLM prediction for: {input_path_str}")
        # Extract the file list if not provided (needed for re-interpretation calls).
        if file_list is None:
            log.debug(f"File list not provided for {input_path_str}, extracting...")
            # Extraction is complex and currently lives in MainWindow._extract_file_list,
            # so the caller should ideally extract and pass the list. Until
            # queue_llm_request requires a non-None list, fall back to calling
            # the MainWindow method via the stored reference.
            if hasattr(self.main_window, '_extract_file_list'):
                file_list = self.main_window._extract_file_list(input_path_str)
                if file_list is None:
                    error_msg = f"Failed to extract file list for {input_path_str} in _start_llm_prediction."
                    log.error(error_msg)
                    self.llm_status_update.emit(f"Error extracting files for {os.path.basename(input_path_str)}")
                    self.llm_prediction_error.emit(input_path_str, error_msg) # Signal error
                    # When called from the queue, the next item is normally triggered by
                    # _reset_llm_thread_references via the thread's finished signal. Since
                    # the thread never starts here, the queue logic is expected to catch
                    # failed extraction before calling this method.
                    return # Stop if extraction failed
            else:
                error_msg = "MainWindow reference does not have an _extract_file_list method."
                log.error(error_msg)
                self.llm_status_update.emit(f"Internal Error: Cannot extract files for {os.path.basename(input_path_str)}")
                self.llm_prediction_error.emit(input_path_str, error_msg)
                return # Stop

        input_path_obj = Path(input_path_str) # Still needed for basename

        if not file_list:
            error_msg = f"LLM Error: No files found/extracted for {input_path_str}"
            log.error(error_msg)
            self.llm_status_update.emit(f"LLM Error: No files found for {input_path_obj.name}")
            self.llm_prediction_error.emit(input_path_str, error_msg)
            return

        # --- Load Base Config for LLM Settings ---
        if load_base_config is None:
            log.critical("LLM Error: load_base_config function not available.")
            self.llm_status_update.emit("LLM Error: Cannot load base configuration.")
            self.llm_prediction_error.emit(input_path_str, "load_base_config function not available.")
            return
        try:
            base_config = load_base_config()
            if not base_config:
                raise ConfigurationError("Failed to load base configuration (app_settings.json).")

            llm_settings = {
                "llm_endpoint_url": base_config.get('llm_endpoint_url'),
                "api_key": base_config.get('llm_api_key'),
                "model_name": base_config.get('llm_model_name', 'gemini-pro'),
                "prompt_template_content": base_config.get('llm_predictor_prompt'),
                "asset_types": base_config.get('ASSET_TYPE_DEFINITIONS', {}),
                "file_types": base_config.get('FILE_TYPE_DEFINITIONS', {}),
                "examples": base_config.get('llm_predictor_examples', [])
            }
        except ConfigurationError as e:
            log.error(f"LLM Configuration Error: {e}")
            self.llm_status_update.emit(f"LLM Config Error: {e}")
            self.llm_prediction_error.emit(input_path_str, f"LLM Configuration Error: {e}")
            # Optionally show a QMessageBox via the main_window ref if critical:
            # self.main_window.show_critical_error("LLM Config Error", str(e))
            return
        except Exception as e:
            log.exception(f"Unexpected error loading LLM configuration: {e}")
            self.llm_status_update.emit(f"LLM Config Error: {e}")
            self.llm_prediction_error.emit(input_path_str, f"Unexpected error loading LLM config: {e}")
            return
        # --- End Config Loading ---

        if LLMPredictionHandler is None:
            log.critical("LLMPredictionHandler class not available.")
            self.llm_status_update.emit("LLM Error: Prediction handler component missing.")
            self.llm_prediction_error.emit(input_path_str, "LLMPredictionHandler class not available.")
            return

        # Clean up previous thread/handler if any exist (should not happen if queue logic is correct)
        if self.llm_prediction_thread and self.llm_prediction_thread.isRunning():
            log.warning("Previous LLM prediction thread still running when starting a new one. This indicates a potential logic error.")
            # Attempt graceful shutdown (might need more robust handling)
            if self.llm_prediction_handler:
                # Assuming LLMPredictionHandler has a cancel method or similar
                if hasattr(self.llm_prediction_handler, 'cancel'):
                    self.llm_prediction_handler.cancel()
            self.llm_prediction_thread.quit()
            if not self.llm_prediction_thread.wait(1000): # Wait 1 sec
                log.warning("LLM thread did not quit gracefully. Forcing termination.")
                self.llm_prediction_thread.terminate()
                self.llm_prediction_thread.wait() # Wait after terminate
            # Reset references after ensuring termination
            self.llm_prediction_thread = None
            self.llm_prediction_handler = None

        log.info(f"Starting LLM prediction thread for source: {input_path_str} with {len(file_list)} files.")
        self.llm_status_update.emit(f"Starting LLM interpretation for {input_path_obj.name}...")

        self.llm_prediction_thread = QThread(self.main_window) # Parented to the main window for now (self was considered as an alternative)
        self.llm_prediction_handler = LLMPredictionHandler(input_path_str, file_list, llm_settings)
        self.llm_prediction_handler.moveToThread(self.llm_prediction_thread)

        # Connect signals from the handler to internal slots, or pass them through directly
        self.llm_prediction_handler.prediction_ready.connect(self._handle_llm_result)
        self.llm_prediction_handler.prediction_error.connect(self._handle_llm_error)
        self.llm_prediction_handler.status_update.connect(self.llm_status_update) # Pass status through

        # Connect thread signals
        self.llm_prediction_thread.started.connect(self.llm_prediction_handler.run)
        # Clean up thread and handler when finished
        self.llm_prediction_thread.finished.connect(self._reset_llm_thread_references)
        self.llm_prediction_thread.finished.connect(self.llm_prediction_handler.deleteLater)
        self.llm_prediction_thread.finished.connect(self.llm_prediction_thread.deleteLater)
        # Also ensure the thread quits when the handler signals completion/error
        self.llm_prediction_handler.prediction_ready.connect(self.llm_prediction_thread.quit)
        self.llm_prediction_handler.prediction_error.connect(self.llm_prediction_thread.quit)

        self.llm_prediction_thread.start()
        log.debug(f"LLM prediction thread started for {input_path_str}.")

    def is_processing(self) -> bool:
        """Safely checks if the LLM prediction thread is currently running."""
        # Use the internal flag, which is more reliable than checking the thread
        # directly due to potential race conditions during cleanup.
        # The thread check serves as a fallback.
        is_running_flag = self._is_processing
        # Also check the thread as a safeguard, though the flag is primary
        try:
            is_thread_alive = self.llm_prediction_thread is not None and self.llm_prediction_thread.isRunning()
            if is_running_flag != is_thread_alive:
                # This might indicate the flag wasn't updated correctly; log it.
                log.warning(f"LLM Handler processing flag ({is_running_flag}) mismatch with thread state ({is_thread_alive}). Flag is primary.")
            return is_running_flag
        except RuntimeError:
            log.debug("is_processing: Caught RuntimeError checking isRunning (thread likely deleted).")
            # If the thread died unexpectedly, the flag might be stale. Reset it.
            if self._is_processing:
                self._set_processing_state(False)
            return False

    def _process_next_llm_item(self):
        """Processes the next directory in the unified LLM processing queue."""
        log.debug(f"--> Entered _process_next_llm_item. Queue size: {len(self.llm_processing_queue)}")

        if self.is_processing():
            log.info("LLM processing already running. Waiting for the current item to finish.")
            # Do not pop from the queue while running; _reset_llm_thread_references will call this again
            return

        if not self.llm_processing_queue:
            log.info("LLM processing queue is empty. Finishing.")
            self.llm_status_update.emit("LLM processing complete.")
            self._set_processing_state(False) # Ensure state is set to idle
            log.debug("<-- Exiting _process_next_llm_item (queue empty)")
            return

        # Set state to busy *before* starting
        self._set_processing_state(True)

        # Peek at the next item *without* removing it yet
        next_item = self.llm_processing_queue[0]
        next_dir, file_list = next_item # Unpack the tuple

        # --- Update Status/Progress ---
        total_in_queue_now = len(self.llm_processing_queue)
        status_msg = f"LLM Processing {os.path.basename(next_dir)} ({total_in_queue_now} remaining)..."
        self.llm_status_update.emit(status_msg)
        log.info(status_msg)

        # --- Start Prediction (which might fail) ---
        try:
            # Pass the potentially None file_list; _start_llm_prediction handles extraction if needed.
            self._start_llm_prediction(next_dir, file_list=file_list)
            # --- Pop the item *after* successfully starting prediction ---
            self.llm_processing_queue.pop(0)
            log.debug(f"Successfully started LLM prediction for {next_dir} and removed it from the queue.")
        except Exception as e:
            log.exception(f"Error occurred *during* _start_llm_prediction call for {next_dir}: {e}")
            error_msg = f"Error starting LLM for {os.path.basename(next_dir)}: {e}"
            self.llm_status_update.emit(error_msg)
            self.llm_prediction_error.emit(next_dir, error_msg) # Signal the error
            # --- Remove the failed item from the queue ---
            try:
                failed_item = self.llm_processing_queue.pop(0)
                log.warning(f"Removed failed item {failed_item} from LLM queue due to start error.")
            except IndexError:
                log.error("Attempted to pop failed item from an already empty LLM queue after a start error.")
            # --- Attempt to process the *next* item ---
            # Reset the processing state, since this item failed before the thread's finished signal could fire
            self._set_processing_state(False)
            # Use QTimer.singleShot to avoid deep recursion
            QTimer.singleShot(100, self._process_next_llm_item) # Try the next item after a short delay

    # --- Internal Slots to Handle Results/Errors from LLMPredictionHandler ---
    @Slot(str, list)
    def _handle_llm_result(self, input_path: str, source_rules: list):
        """Internal slot to receive results and emit the public signal."""
        log.debug(f"LLM Handler received result for {input_path}. Emitting llm_prediction_ready.")
        self.llm_prediction_ready.emit(input_path, source_rules)
        # Note: the thread's finished signal calls _reset_llm_thread_references,
        # which then calls _process_next_llm_item.

    @Slot(str, str)
    def _handle_llm_error(self, input_path: str, error_message: str):
        """Internal slot to receive errors and emit the public signal."""
        log.debug(f"LLM Handler received error for {input_path}: {error_message}. Emitting llm_prediction_error.")
        self.llm_prediction_error.emit(input_path, error_message)
        # Note: the thread's finished signal calls _reset_llm_thread_references,
        # which then calls _process_next_llm_item.

    def clear_queue(self):
        """Clears the LLM processing queue."""
        log.info(f"Clearing LLM processing queue ({len(self.llm_processing_queue)} items).")
        self.llm_processing_queue.clear()
        # TODO: Should we also attempt to cancel any *currently* running LLM task?
        # That might be complex; for now this only clears pending items.
        if self.is_processing():
            log.warning("LLM queue cleared, but a task is currently running. It will complete.")
        else:
            self.llm_status_update.emit("LLM queue cleared.")
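Stripped of Qt, the queue discipline above (deduplicate on enqueue, peek at the head before starting, pop only after the prediction actually started) can be sketched in plain Python; the paths below are made-up examples:

```python
class PredictionQueue:
    """Minimal sketch of the handler's FIFO: no duplicate paths, and an item
    is only removed once its prediction has started cleanly."""

    def __init__(self):
        self._queue = []  # list of (input_path, file_list) tuples

    def add(self, input_path, file_list=None):
        """Enqueue unless the same path is already pending."""
        if any(item[0] == input_path for item in self._queue):
            return False  # duplicate, skipped
        self._queue.append((input_path, file_list))
        return True

    def start_next(self, start_fn):
        """Peek at the head, try to start it, then pop. start_fn may raise,
        in which case the item stays queued (the real handler pops it and
        signals an error instead)."""
        if not self._queue:
            return None
        input_path, file_list = self._queue[0]
        start_fn(input_path, file_list)  # may raise
        return self._queue.pop(0)        # removed only after a clean start

q = PredictionQueue()
q.add("/assets/a.zip"); q.add("/assets/a.zip"); q.add("/assets/b.zip")
started = []
q.start_next(lambda p, fl: started.append(p))
print(started)  # ['/assets/a.zip']
```

Popping after the start succeeds (rather than before) is what lets `_process_next_llm_item` report the true "remaining" count and recover when starting a prediction throws.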
@ -1,7 +1,11 @@
 import os
 import json
 import requests
-from PySide6.QtCore import QObject, Signal, Slot, QThread
+import re # Added import for regex
+import logging # Add logging
+from pathlib import Path # Add Path for basename
+from PySide6.QtCore import QObject, Slot # Keep QObject for parent type hint, Slot for cancel if kept separate
+# Removed Signal, QThread as they are handled by BasePredictionHandler or caller
 from typing import List, Dict, Any

 # Assuming rule_structure defines SourceRule, AssetRule, FileRule etc.
@ -12,92 +16,115 @@ from rule_structure import SourceRule, AssetRule, FileRule # Ensure AssetRule an
 # Adjust the import path if necessary
 # Removed Configuration import, will use load_base_config if needed or passed settings
 # from configuration import Configuration
-from configuration import load_base_config # Keep this for now if needed elsewhere, or remove if settings are always passed
+# from configuration import load_base_config # No longer needed here
+from .base_prediction_handler import BasePredictionHandler # Import base class
+
+log = logging.getLogger(__name__) # Setup logger
+

-class LLMPredictionHandler(QObject):
+class LLMPredictionHandler(BasePredictionHandler):
     """
     Handles the interaction with an LLM for predicting asset structures
-    based on a directory's file list. Designed to run in a QThread.
+    based on a directory's file list. Inherits from BasePredictionHandler.
     """
-    # Signal emitted when prediction for a directory is complete
-    # Arguments: directory_path (str), results (List[SourceRule])
-    prediction_ready = Signal(str, list)
-    # Signal emitted on error
-    # Arguments: directory_path (str), error_message (str)
-    prediction_error = Signal(str, str)
-    # Signal to update status message in the GUI
-    status_update = Signal(str)
+    # Signals (prediction_ready, prediction_error, status_update) are inherited

-    def __init__(self, input_path_str: str, file_list: list, llm_settings: dict, parent: QObject = None): # Accept input_path_str and file_list
+    def __init__(self, input_source_identifier: str, file_list: list, llm_settings: dict, parent: QObject = None):
         """
-        Initializes the handler.
+        Initializes the LLM handler.

         Args:
-            input_path_str: The absolute path to the original input source (directory or archive).
-            file_list: A list of relative file paths extracted from the input source.
-            llm_settings: A dictionary containing necessary LLM configuration.
+            input_source_identifier: The unique identifier for the input source (e.g., file path).
+            file_list: A list of *relative* file paths extracted from the input source
+                (the LLM expects relative paths based on the prompt template).
+            llm_settings: A dictionary containing necessary LLM configuration
+                (endpoint_url, api_key, prompt_template_content, etc.).
             parent: The parent QObject.
         """
-        super().__init__(parent)
-        self.input_path_str = input_path_str # Store original input path
-        self.file_list = file_list # Store the provided file list
+        super().__init__(input_source_identifier, parent)
+        # input_source_identifier is stored by the base class as self.input_source_identifier
+        self.file_list = file_list # Store the provided relative file list
         self.llm_settings = llm_settings # Store the settings dictionary
         self.endpoint_url = self.llm_settings.get('llm_endpoint_url')
         self.api_key = self.llm_settings.get('llm_api_key')
-        self._is_cancelled = False
+        # _is_running and _is_cancelled are handled by the base class

-    @Slot()
-    def run(self):
-        """
-        The main execution method to be called when the thread starts.
-        Orchestrates the prediction process for the given directory.
-        """
-        # Directory check is no longer needed here, input path is just for context
-        # File list is provided via __init__
-
-        try:
-            self.status_update.emit(f"Preparing LLM input for {os.path.basename(self.input_path_str)}...")
-            if self._is_cancelled: return
-
-            # Use the file list passed during initialization
-            if not self.file_list:
-                self.prediction_ready.emit(self.input_path_str, []) # Emit empty list if no files
-                return
-            if self._is_cancelled: return
-
-            prompt = self._prepare_prompt(self.file_list) # Use self.file_list
-            if self._is_cancelled: return
-
-            self.status_update.emit(f"Calling LLM for {os.path.basename(self.input_path_str)}...")
-            llm_response_json_str = self._call_llm(prompt)
-            if self._is_cancelled: return
-
-            self.status_update.emit(f"Parsing LLM response for {os.path.basename(self.input_path_str)}...")
-            predicted_rules = self._parse_llm_response(llm_response_json_str)
-            if self._is_cancelled: return
-
-            self.prediction_ready.emit(self.input_path_str, predicted_rules) # Use input_path_str
-            self.status_update.emit(f"LLM interpretation complete for {os.path.basename(self.input_path_str)}.")
-
-        except Exception as e:
-            error_msg = f"Error during LLM prediction for {self.input_path_str}: {e}"
-            print(error_msg) # Log the full error
-            self.prediction_error.emit(self.input_path_str, f"An error occurred: {e}") # Use input_path_str
-        finally:
-            # Ensure thread cleanup or final signals if needed
-            pass
-
-    @Slot()
-    def cancel(self):
-        """
-        Sets the cancellation flag.
-        """
-        self._is_cancelled = True
-        self.status_update.emit(f"Cancellation requested for {os.path.basename(self.input_path_str)}...") # Use input_path_str
+    # The run() and cancel() slots are provided by the base class.
+    # We only need to implement the core logic in _perform_prediction.
+
+    def _perform_prediction(self) -> List[SourceRule]:
+        """
+        Performs the LLM prediction by preparing the prompt, calling the LLM,
+        and parsing the response. Implements the abstract method from BasePredictionHandler.
+
+        Returns:
+            A list containing a single SourceRule object based on the LLM response,
+            or an empty list if prediction fails or yields no results.
+
+        Raises:
+            ValueError: If required settings (like endpoint URL or prompt template) are missing.
+            ConnectionError: If the LLM API call fails due to network issues or timeouts.
+            Exception: For other errors during prompt preparation, API call, or parsing.
+        """
+        log.info(f"Performing LLM prediction for: {self.input_source_identifier}")
+        base_name = Path(self.input_source_identifier).name
+
+        # Use the file list passed during initialization
+        if not self.file_list:
+            log.warning(f"No files provided for LLM prediction for {self.input_source_identifier}. Returning empty list.")
+            self.status_update.emit(f"No files found for {base_name}.") # Use base signal
+            return [] # Return empty list, not an error
+
+        # Check for cancellation before preparing the prompt
+        if self._is_cancelled:
+            log.info("LLM prediction cancelled before preparing prompt.")
+            return []
+
+        # --- Prepare Prompt ---
+        self.status_update.emit(f"Preparing LLM input for {base_name}...")
+        try:
+            # Pass relative file list
+            prompt = self._prepare_prompt(self.file_list)
+        except Exception as e:
+            log.exception("Error preparing LLM prompt.")
+            raise ValueError(f"Error preparing LLM prompt: {e}") from e # Re-raise for base handler
+
+        if self._is_cancelled:
+            log.info("LLM prediction cancelled after preparing prompt.")
+            return []
+
+        # --- Call LLM ---
+        self.status_update.emit(f"Calling LLM for {base_name}...")
+        try:
+            llm_response_json_str = self._call_llm(prompt)
+        except Exception as e:
+            log.exception("Error calling LLM API.")
+            # Re-raise potentially specific errors (ConnectionError, ValueError) or a generic one
+            raise RuntimeError(f"Error calling LLM: {e}") from e
+
+        if self._is_cancelled:
+            log.info("LLM prediction cancelled after calling LLM.")
+            return []
+
+        # --- Parse Response ---
+        self.status_update.emit(f"Parsing LLM response for {base_name}...")
+        try:
+            predicted_rules = self._parse_llm_response(llm_response_json_str)
+        except Exception as e:
+            log.exception("Error parsing LLM response.")
+            raise ValueError(f"Error parsing LLM response: {e}") from e # Re-raise for base handler
+
+        if self._is_cancelled:
+            log.info("LLM prediction cancelled after parsing response.")
+            return []
+
+        log.info(f"LLM prediction finished successfully for '{self.input_source_identifier}'.")
+        # The base class run() method will emit prediction_ready with these results
+        return predicted_rules

-    # Removed _get_file_list method as file list is now passed in __init__
+    # --- Helper Methods (Keep these internal to this class) ---

-    def _prepare_prompt(self, file_list: List[str]) -> str:
+    def _prepare_prompt(self, relative_file_list: List[str]) -> str:
         """
         Prepares the full prompt string to send to the LLM using stored settings.
         """
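`BasePredictionHandler` itself is not shown in this commit. Assuming a conventional template-method design, where `run()` wraps the subclass hook with result/error reporting, it might look roughly like the following; the class, callback names, and example handler are all conjectural stand-ins (the real class presumably uses Qt signals rather than callbacks):

```python
from abc import ABC, abstractmethod

class BasePredictionHandler(ABC):
    """Hypothetical sketch: run() wraps the subclass hook with ready/error
    reporting so subclasses only implement _perform_prediction()."""

    def __init__(self, input_source_identifier, on_ready=None, on_error=None):
        self.input_source_identifier = input_source_identifier
        self._is_cancelled = False
        self._on_ready = on_ready or (lambda path, rules: None)
        self._on_error = on_error or (lambda path, msg: None)

    def cancel(self):
        """Cooperative cancellation: the hook checks the flag between steps."""
        self._is_cancelled = True

    @abstractmethod
    def _perform_prediction(self):
        """Subclass hook: return a list of rules, or raise on failure."""

    def run(self):
        try:
            rules = self._perform_prediction()
            self._on_ready(self.input_source_identifier, rules)
        except Exception as e:
            self._on_error(self.input_source_identifier, str(e))

class EchoHandler(BasePredictionHandler):
    def _perform_prediction(self):
        return [] if self._is_cancelled else ["rule"]

results = []
h = EchoHandler("/assets/a.zip", on_ready=lambda p, r: results.append((p, r)))
h.run()
print(results)  # [('/assets/a.zip', ['rule'])]
```

This is why `_perform_prediction` above re-raises `ValueError`/`RuntimeError` instead of emitting signals itself: the base `run()` is the single place that turns exceptions into `prediction_error`.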
@ -124,8 +151,8 @ class LLMPredictionHandler(QObject):
         file_defs = json.dumps(self.llm_settings.get('file_types', {}), indent=4)
         examples = json.dumps(self.llm_settings.get('examples', []), indent=2)

-        # Format file list as a single string with newlines
-        file_list_str = "\n".join(file_list)
+        # Format *relative* file list as a single string with newlines
+        file_list_str = "\n".join(relative_file_list)

         # Replace placeholders
         prompt = prompt_template.replace('{ASSET_TYPE_DEFINITIONS}', asset_defs)
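The placeholder substitution in `_prepare_prompt` amounts to plain string replacement over the template: JSON-dump the type definitions, newline-join the relative file list, and splice both in. A minimal sketch; the template text and the `{FILE_LIST}` placeholder name here are illustrative (only `{ASSET_TYPE_DEFINITIONS}` is confirmed by the diff):

```python
import json

def build_prompt(template, asset_types, file_types, relative_files):
    """Fill the template's placeholders with JSON-dumped definitions and a
    newline-joined relative file list, mirroring _prepare_prompt."""
    prompt = template.replace("{ASSET_TYPE_DEFINITIONS}", json.dumps(asset_types, indent=4))
    prompt = prompt.replace("{FILE_TYPE_DEFINITIONS}", json.dumps(file_types, indent=4))
    prompt = prompt.replace("{FILE_LIST}", "\n".join(relative_files))
    return prompt

template = "Types:\n{ASSET_TYPE_DEFINITIONS}\nFiles:\n{FILE_LIST}"
out = build_prompt(template, {"Decal": "flat overlay"}, {}, ["tex/a_col.png", "tex/a_nrm.png"])
print("tex/a_col.png\ntex/a_nrm.png" in out)  # True
```

Passing relative rather than absolute paths keeps the prompt compact and independent of where the archive was extracted.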
@@ -173,40 +200,24 @@ class LLMPredictionHandler(QObject):
 # "response_format": { "type": "json_object" } # If supported by endpoint
 }
 
-self.status_update.emit(f"Sending request to LLM at {self.endpoint_url}...")
+# Status update emitted by _perform_prediction before calling this
+# self.status_update.emit(f"Sending request to LLM at {self.endpoint_url}...")
 print(f"--- Calling LLM API: {self.endpoint_url} ---")
 # print(f"--- Payload Preview ---\n{json.dumps(payload, indent=2)[:500]}...\n--- END Payload Preview ---")
 
-try:
-# Make the POST request with a timeout (e.g., 120 seconds for potentially long LLM responses)
+# Note: Exceptions raised here (Timeout, RequestException, ValueError)
+# will be caught by the _perform_prediction method's handler.
 
+# Make the POST request with a timeout
 response = requests.post(
 self.endpoint_url,
 headers=headers,
 json=payload,
-# Make the POST request with configured timeout, default to 120
 timeout=self.llm_settings.get("llm_request_timeout", 120)
 )
 response.raise_for_status() # Raise HTTPError for bad responses (4xx or 5xx)
 
-except requests.exceptions.Timeout:
-error_msg = f"LLM request timed out after {self.llm_settings.get('llm_request_timeout', 120)} seconds."
-print(error_msg)
-raise ConnectionError(error_msg)
-except requests.exceptions.RequestException as e:
-error_msg = f"LLM request failed: {e}"
-print(error_msg)
-# Attempt to get more detail from response if available
-try:
-if e.response is not None:
-print(f"LLM Response Status Code: {e.response.status_code}")
-print(f"LLM Response Text: {e.response.text[:500]}...") # Log partial response text
-error_msg += f" (Status: {e.response.status_code})"
-except Exception:
-pass # Ignore errors during error reporting enhancement
-raise ConnectionError(error_msg) # Raise a more generic error for the GUI
 
 # Parse the JSON response
-try:
 response_data = response.json()
 # print(f"--- LLM Raw Response ---\n{json.dumps(response_data, indent=2)}\n--- END Raw Response ---") # Debugging
 
@@ -216,32 +227,20 @@ class LLMPredictionHandler(QObject):
 content = message.get("content")
 if content:
 # The content itself should be the JSON string we asked for
-print("--- LLM Response Content Extracted Successfully ---")
+log.debug("--- LLM Response Content Extracted Successfully ---")
 return content.strip()
 else:
 raise ValueError("LLM response missing 'content' in choices[0].message.")
 else:
 raise ValueError("LLM response missing 'choices' array or it's empty.")
 
-except json.JSONDecodeError:
-error_msg = f"Failed to decode LLM JSON response. Response text: {response.text[:500]}..."
-print(error_msg)
-raise ValueError(error_msg)
-except Exception as e:
-# Capture the potentially problematic response_data in the error message
-response_data_str = "Not available"
-try:
-response_data_str = json.dumps(response_data) if 'response_data' in locals() else response.text[:500] + "..."
-except Exception:
-pass # Avoid errors during error reporting
-error_msg = f"Error parsing LLM response structure: {e}. Response data: {response_data_str}"
-print(error_msg)
-raise ValueError(error_msg)
 
 def _parse_llm_response(self, llm_response_json_str: str) -> List[SourceRule]:
 """
 Parses the LLM's JSON response string into a list of SourceRule objects.
 """
+# Note: Exceptions (JSONDecodeError, ValueError) raised here
+# will be caught by the _perform_prediction method's handler.
 
 # Strip potential markdown code fences before parsing
 clean_json_str = llm_response_json_str.strip()
 if clean_json_str.startswith("```json"):
@@ -250,102 +249,112 @@ class LLMPredictionHandler(QObject):
 clean_json_str = clean_json_str[:-3] # Remove ```
 clean_json_str = clean_json_str.strip() # Remove any extra whitespace
 
+# --- ADDED: Remove <think> tags ---
+clean_json_str = re.sub(r'<think>.*?</think>', '', clean_json_str, flags=re.DOTALL | re.IGNORECASE)
+clean_json_str = clean_json_str.strip() # Strip again after potential removal
+# ---------------------------------
 
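The added sanitation strips markdown code fences and then `<think>…</think>` reasoning blocks (emitted by some local models) before JSON parsing. A self-contained sketch of that cleanup step, mirroring the regex in the diff:

```python
import json
import re

def clean_llm_json(raw: str) -> str:
    """Strip markdown code fences and <think> blocks from an LLM reply
    so the remainder can be parsed as JSON."""
    s = raw.strip()
    if s.startswith("```json"):
        s = s[len("```json"):]
    if s.endswith("```"):
        s = s[:-3]
    # Remove reasoning traces some local models emit before the JSON payload.
    s = re.sub(r"<think>.*?</think>", "", s, flags=re.DOTALL | re.IGNORECASE)
    return s.strip()

raw = "```json\n{\"predicted_assets\": []}\n<think>done</think>\n```"
print(json.loads(clean_llm_json(raw)))  # {'predicted_assets': []}
```

Note that fences are removed before `<think>` blocks, so a reasoning trace placed *outside* the fence would leave the fence prefix intact; the order matches the diff.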
 try:
 response_data = json.loads(clean_json_str)
 except json.JSONDecodeError as e:
 # Log the full cleaned string that caused the error for better debugging
 error_detail = f"Failed to decode LLM JSON response: {e}\nFull Cleaned Response:\n{clean_json_str}"
-print(f"ERROR: {error_detail}") # Print full error detail to console
+log.error(f"ERROR: {error_detail}") # Log full error detail to console
 raise ValueError(error_detail) # Raise the error with full detail
 
 if "predicted_assets" not in response_data or not isinstance(response_data["predicted_assets"], list):
 raise ValueError("Invalid LLM response format: 'predicted_assets' key missing or not a list.")
 
 source_rules = []
 # We assume one SourceRule per input source processed by this handler instance
-source_rule = SourceRule(input_path=self.input_path_str) # Use input_path_str
+# Use self.input_source_identifier from the base class
+source_rule = SourceRule(input_path=self.input_source_identifier)
 
 # Access valid types from the settings dictionary
 valid_asset_types = list(self.llm_settings.get('asset_types', {}).keys())
 valid_file_types = list(self.llm_settings.get('file_types', {}).keys())
 
 for asset_data in response_data["predicted_assets"]:
+# Check for cancellation within the loop
+if self._is_cancelled:
+log.info("LLM prediction cancelled during response parsing (assets).")
+return []
 
 if not isinstance(asset_data, dict):
-print(f"Warning: Skipping invalid asset data (not a dict): {asset_data}")
+log.warning(f"Skipping invalid asset data (not a dict): {asset_data}")
 continue
 
 asset_name = asset_data.get("suggested_asset_name", "Unnamed_Asset")
 asset_type = asset_data.get("predicted_asset_type")
 
 if asset_type not in valid_asset_types:
-print(f"Warning: Invalid predicted_asset_type '{asset_type}' for asset '{asset_name}'. Defaulting or skipping.")
-# Decide handling: default to a generic type or skip? For now, skip.
-continue # Or assign a default like 'Unknown' if defined
+log.warning(f"Invalid predicted_asset_type '{asset_type}' for asset '{asset_name}'. Skipping asset.")
+continue # Skip this asset
 
-# --- MODIFIED LINES for AssetRule ---
-# Create the AssetRule instance first
 asset_rule = AssetRule(asset_name=asset_name, asset_type=asset_type)
-source_rule.assets.append(asset_rule) # Append to the list
+source_rule.assets.append(asset_rule)
 
 if "files" not in asset_data or not isinstance(asset_data["files"], list):
-print(f"Warning: 'files' key missing or not a list in asset '{asset_name}'. Skipping files for this asset.")
+log.warning(f"'files' key missing or not a list in asset '{asset_name}'. Skipping files for this asset.")
 continue
 
 for file_data in asset_data["files"]:
+# Check for cancellation within the inner loop
+if self._is_cancelled:
+log.info("LLM prediction cancelled during response parsing (files).")
+return []
 
 if not isinstance(file_data, dict):
-print(f"Warning: Skipping invalid file data (not a dict) in asset '{asset_name}': {file_data}")
+log.warning(f"Skipping invalid file data (not a dict) in asset '{asset_name}': {file_data}")
 continue
 
-file_path_rel = file_data.get("file_path")
+file_path_rel = file_data.get("file_path") # LLM provides relative path
 file_type = file_data.get("predicted_file_type")
 
 if not file_path_rel:
-print(f"Warning: Missing 'file_path' in file data for asset '{asset_name}'. Skipping file.")
+log.warning(f"Missing 'file_path' in file data for asset '{asset_name}'. Skipping file.")
 continue
 
 # Convert relative path from LLM (using '/') back to absolute OS-specific path
-# Note: LLM gets relative paths, so we join with the handler's base input path
-file_path_abs = os.path.join(self.input_path_str, file_path_rel.replace('/', os.sep)) # Use input_path_str
+# We need the original input path (directory or archive) to make it absolute
+# Use self.input_source_identifier which holds the original path
+# IMPORTANT: Ensure the LLM is actually providing paths relative to the *root* of the input source.
+try:
+# Use Pathlib for safer joining, assuming input_source_identifier is the parent dir/archive path
+# If input_source_identifier is an archive file, this logic might need adjustment
+# depending on where files were extracted. For now, assume it's the base path.
+base_path = Path(self.input_source_identifier)
+# If the input was a file (like a zip), use its parent directory as the base for joining relative paths
+if base_path.is_file():
+base_path = base_path.parent
+# Clean the relative path potentially coming from LLM
+clean_rel_path = Path(file_path_rel.strip().replace('\\', '/'))
+file_path_abs = str(base_path / clean_rel_path)
+except Exception as path_e:
+log.warning(f"Error constructing absolute path for '{file_path_rel}' relative to '{self.input_source_identifier}': {path_e}. Skipping file.")
+continue
 
 if file_type not in valid_file_types:
-print(f"Warning: Invalid predicted_file_type '{file_type}' for file '{file_path_rel}'. Defaulting to EXTRA.")
+log.warning(f"Invalid predicted_file_type '{file_type}' for file '{file_path_rel}'. Defaulting to EXTRA.")
 file_type = "EXTRA" # Default to EXTRA if invalid type from LLM
 
-# --- MODIFIED LINES for FileRule ---
-# Create the FileRule instance first
-file_rule = FileRule(file_path=file_path_abs, item_type=file_type) # Use correct field names
-asset_rule.files.append(file_rule) # Append to the list
+# Create the FileRule instance
+# Add default values for fields not provided by LLM
+file_rule = FileRule(
+file_path=file_path_abs,
+item_type=file_type,
+item_type_override=file_type, # Initial override
+target_asset_name_override=asset_name, # Default to asset name
+output_format_override=None,
+is_gloss_source=False, # LLM doesn't predict this
+standard_map_type=None, # LLM doesn't predict this directly
+resolution_override=None,
+channel_merge_instructions={}
+)
+asset_rule.files.append(file_rule)
 
 source_rules.append(source_rule)
 return source_rules
 
-# Example of how this might be used in MainWindow (conceptual)
-# class MainWindow(QMainWindow):
-# # ... other methods ...
-# def _start_llm_prediction(self, directory_path):
-# self.llm_thread = QThread()
-# self.llm_handler = LLMPredictionHandler(directory_path, self.config_manager)
-# self.llm_handler.moveToThread(self.llm_thread)
-#
-# # Connect signals
-# self.llm_handler.prediction_ready.connect(self._on_llm_prediction_ready)
-# self.llm_handler.prediction_error.connect(self._on_llm_prediction_error)
-# self.llm_handler.status_update.connect(self.statusBar().showMessage)
-# self.llm_thread.started.connect(self.llm_handler.run)
-# self.llm_thread.finished.connect(self.llm_thread.deleteLater)
-# self.llm_handler.prediction_ready.connect(self.llm_thread.quit) # Quit thread on success
-# self.llm_handler.prediction_error.connect(self.llm_thread.quit) # Quit thread on error
-#
-# self.llm_thread.start()
-#
-# @Slot(str, list)
-# def _on_llm_prediction_ready(self, directory_path, results):
-# print(f"LLM Prediction ready for {directory_path}: {len(results)} source rules found.")
-# # Process results, update model, etc.
-# # Make sure to clean up thread/handler references if needed
-# self.llm_handler.deleteLater() # Schedule handler for deletion
-#
-# @Slot(str, str)
-# def _on_llm_prediction_error(self, directory_path, error_message):
-# print(f"LLM Prediction error for {directory_path}: {error_message}")
-# # Show error to user, clean up thread/handler
-# self.llm_handler.deleteLater()
+# Removed conceptual example usage comments
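The new Pathlib-based joining resolves the LLM's relative paths against the input source, falling back to the parent directory when the source itself is a file such as a downloaded archive. A small sketch of that rule — the helper name and the empty stand-in archive are illustrative, not part of this commit:

```python
import os
import tempfile
from pathlib import Path

def resolve_llm_path(input_source: str, rel_path: str) -> str:
    """Join an LLM-supplied relative path ('/' separators, possibly '\\')
    onto the input source; when the source is a file (e.g. a .zip),
    fall back to its parent directory, mirroring the diff's logic."""
    base = Path(input_source)
    if base.is_file():
        base = base.parent
    clean_rel = Path(rel_path.strip().replace("\\", "/"))
    return str(base / clean_rel)

# Simulate an archive input with an empty stand-in file.
workdir = tempfile.mkdtemp()
archive = Path(workdir) / "surface_pack.zip"
archive.write_bytes(b"")
print(resolve_llm_path(str(archive), "textures\\albedo.png"))
```

As the diff's own comment warns, this assumes the LLM reports paths relative to the root of the input source; extracted-archive layouts may need further adjustment.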
43 gui/log_console_widget.py Normal file
@@ -0,0 +1,43 @@
+# gui/log_console_widget.py
+import logging
+from PySide6.QtWidgets import (
+QWidget, QVBoxLayout, QTextEdit, QLabel, QSizePolicy
+)
+from PySide6.QtCore import Slot
+
+log = logging.getLogger(__name__)
+
+class LogConsoleWidget(QWidget):
+"""
+A dedicated widget to display log messages.
+"""
+def __init__(self, parent=None):
+super().__init__(parent)
+self._init_ui()
+
+def _init_ui(self):
+"""Initializes the UI elements for the log console."""
+layout = QVBoxLayout(self)
+layout.setContentsMargins(0, 5, 0, 0) # Add some top margin
+
+log_console_label = QLabel("Log Console:")
+self.log_console_output = QTextEdit()
+self.log_console_output.setReadOnly(True)
+# self.log_console_output.setMaximumHeight(150) # Let the parent layout control height
+self.log_console_output.setSizePolicy(QSizePolicy.Policy.Expanding, QSizePolicy.Policy.Expanding) # Allow vertical expansion
+
+layout.addWidget(log_console_label)
+layout.addWidget(self.log_console_output)
+
+# Initially hidden, visibility controlled by MainWindow
+self.setVisible(False)
+
+@Slot(str)
+def _append_log_message(self, message):
+"""Appends a log message to the QTextEdit console."""
+self.log_console_output.append(message)
+# Auto-scroll to the bottom
+self.log_console_output.verticalScrollBar().setValue(self.log_console_output.verticalScrollBar().maximum())
+
+# Note: Visibility is controlled externally via setVisible(),
+# so the _toggle_log_console_visibility slot is not needed here.
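`LogConsoleWidget._append_log_message` is a slot that receives already-formatted strings; a typical way to feed it is a small `logging.Handler` that forwards formatted records (in the real GUI this would go through a Qt signal for thread safety). A Qt-free sketch with the widget replaced by a plain callback — the handler class is an assumption, not part of this commit:

```python
import logging

class CallbackLogHandler(logging.Handler):
    """Forward formatted log records to a callable -- in the GUI this would
    be a Qt signal connected to LogConsoleWidget._append_log_message."""
    def __init__(self, callback):
        super().__init__()
        self.callback = callback

    def emit(self, record):
        self.callback(self.format(record))

lines = []
handler = CallbackLogHandler(lines.append)
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
logger = logging.getLogger("demo_console")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("processing started")
print(lines)  # ['INFO: processing started']
```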
633 gui/main_panel_widget.py Normal file
@@ -0,0 +1,633 @@
|
|||||||
|
import sys
|
||||||
|
import os
|
||||||
|
import json
|
||||||
|
import logging
|
||||||
|
import time
|
||||||
|
from pathlib import Path
|
||||||
|
from functools import partial
|
||||||
|
|
||||||
|
from PySide6.QtWidgets import QApplication # Added for processEvents
|
||||||
|
from PySide6.QtWidgets import (
|
||||||
|
QWidget, QVBoxLayout, QHBoxLayout, QSplitter, QTableView,
|
||||||
|
QPushButton, QComboBox, QTableWidget, QTableWidgetItem, QHeaderView,
|
||||||
|
QProgressBar, QLabel, QFrame, QCheckBox, QSpinBox, QListWidget, QTextEdit,
|
||||||
|
QLineEdit, QMessageBox, QFileDialog, QInputDialog, QListWidgetItem, QTabWidget,
|
||||||
|
QFormLayout, QGroupBox, QAbstractItemView, QSizePolicy, QTreeView, QMenu
|
||||||
|
)
|
||||||
|
from PySide6.QtCore import Qt, Signal, Slot, QPoint, QModelIndex, QTimer
|
||||||
|
from PySide6.QtGui import QColor, QAction, QPalette, QClipboard, QGuiApplication # Added QGuiApplication for clipboard
|
||||||
|
|
||||||
|
# --- Local GUI Imports ---
|
||||||
|
# Import delegates and models needed by the panel
|
||||||
|
from .delegates import LineEditDelegate, ComboBoxDelegate, SupplierSearchDelegate
|
||||||
|
from .unified_view_model import UnifiedViewModel # Assuming UnifiedViewModel is passed in
|
||||||
|
|
||||||
|
# --- Backend Imports ---
|
||||||
|
# Import Rule Structures if needed for context menus etc.
|
||||||
|
from rule_structure import SourceRule, AssetRule, FileRule
|
||||||
|
# Import config loading if defaults are needed directly here (though better passed from MainWindow)
|
||||||
|
try:
|
||||||
|
from configuration import ConfigurationError, load_base_config
|
||||||
|
except ImportError:
|
||||||
|
ConfigurationError = Exception
|
||||||
|
load_base_config = None
|
||||||
|
|
||||||
|
log = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
class MainPanelWidget(QWidget):
|
||||||
|
"""
|
||||||
|
Widget handling the main interaction panel:
|
||||||
|
- Output directory selection
|
||||||
|
- Asset preview/editing view (Unified View)
|
||||||
|
- Blender post-processing options
|
||||||
|
- Processing controls (Start, Cancel, Clear, LLM Re-interpret)
|
||||||
|
"""
|
||||||
|
# --- Signals Emitted by the Panel ---
|
||||||
|
# Request to add new input paths (e.g., from drag/drop handled by MainWindow)
|
||||||
|
# add_paths_requested = Signal(list) # Maybe not needed if MainWindow handles drop directly
|
||||||
|
|
||||||
|
# Request to start the main processing job
|
||||||
|
process_requested = Signal(dict) # Emits dict with settings: output_dir, overwrite, workers, blender_enabled, ng_path, mat_path
|
||||||
|
|
||||||
|
# Request to cancel the ongoing processing job
|
||||||
|
cancel_requested = Signal()
|
||||||
|
|
||||||
|
# Request to clear the current queue/view
|
||||||
|
clear_queue_requested = Signal()
|
||||||
|
|
||||||
|
# Request to re-interpret selected items using LLM
|
||||||
|
llm_reinterpret_requested = Signal(list) # Emits list of source paths
|
||||||
|
|
||||||
|
# Notify when the output directory changes
|
||||||
|
output_dir_changed = Signal(str)
|
||||||
|
|
||||||
|
# Notify when Blender settings change
|
||||||
|
blender_settings_changed = Signal(bool, str, str) # enabled, ng_path, mat_path
|
||||||
|
|
||||||
|
def __init__(self, unified_model: UnifiedViewModel, parent=None):
|
||||||
|
"""
|
||||||
|
Initializes the MainPanelWidget.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
unified_model: The shared UnifiedViewModel instance.
|
||||||
|
parent: The parent widget.
|
||||||
|
"""
|
||||||
|
super().__init__(parent)
|
||||||
|
self.unified_model = unified_model
|
||||||
|
self.llm_processing_active = False # Track if LLM is running (set by MainWindow)
|
||||||
|
|
||||||
|
# Get project root for resolving default paths if needed here
|
||||||
|
script_dir = Path(__file__).parent
|
||||||
|
self.project_root = script_dir.parent
|
||||||
|
|
||||||
|
self._setup_ui()
|
||||||
|
self._connect_signals()
|
||||||
|
|
||||||
|
def _setup_ui(self):
|
||||||
|
"""Sets up the UI elements for the panel."""
|
||||||
|
main_layout = QVBoxLayout(self)
|
||||||
|
main_layout.setContentsMargins(5, 5, 5, 5) # Reduce margins
|
||||||
|
|
||||||
|
# --- Output Directory Selection ---
|
||||||
|
output_layout = QHBoxLayout()
|
||||||
|
self.output_dir_label = QLabel("Output Directory:")
|
||||||
|
self.output_path_edit = QLineEdit()
|
||||||
|
self.browse_output_button = QPushButton("Browse...")
|
||||||
|
output_layout.addWidget(self.output_dir_label)
|
||||||
|
output_layout.addWidget(self.output_path_edit, 1)
|
||||||
|
output_layout.addWidget(self.browse_output_button)
|
||||||
|
main_layout.addLayout(output_layout)
|
||||||
|
|
||||||
|
# --- Set Initial Output Path (Copied from MainWindow) ---
|
||||||
|
# Consider passing this default path from MainWindow instead of reloading config here
|
||||||
|
if load_base_config:
|
||||||
|
try:
|
||||||
|
base_config = load_base_config()
|
||||||
|
output_base_dir_config = base_config.get('OUTPUT_BASE_DIR', '../Asset_Processor_Output')
|
||||||
|
default_output_dir = (self.project_root / output_base_dir_config).resolve()
|
||||||
|
self.output_path_edit.setText(str(default_output_dir))
|
||||||
|
log.info(f"MainPanelWidget: Default output directory set to: {default_output_dir}")
|
||||||
|
except ConfigurationError as e:
|
||||||
|
log.error(f"MainPanelWidget: Error reading base configuration for default output directory: {e}")
|
||||||
|
self.output_path_edit.setText("")
|
||||||
|
except Exception as e:
|
||||||
|
log.exception(f"MainPanelWidget: Error setting default output directory: {e}")
|
||||||
|
self.output_path_edit.setText("")
|
||||||
|
else:
|
||||||
|
log.warning("MainPanelWidget: load_base_config not available to set default output path.")
|
||||||
|
self.output_path_edit.setText("")
|
||||||
|
|
||||||
|
|
||||||
|
# --- Unified View Setup ---
|
||||||
|
self.unified_view = QTreeView()
|
||||||
|
self.unified_view.setModel(self.unified_model) # Set the passed-in model
|
||||||
|
|
||||||
|
# Instantiate Delegates
|
||||||
|
lineEditDelegate = LineEditDelegate(self.unified_view)
|
||||||
|
# ComboBoxDelegate needs access to MainWindow's get_llm_source_preset_name,
|
||||||
|
# which might require passing MainWindow or a callback here.
|
||||||
|
# For now, let's assume it can work without it or we adapt it later.
|
||||||
|
# TODO: Revisit ComboBoxDelegate dependency
|
||||||
|
comboBoxDelegate = ComboBoxDelegate(self) # Pass only parent (self)
|
||||||
|
supplierSearchDelegate = SupplierSearchDelegate(self) # Pass parent
|
||||||
|
|
||||||
|
# Set Delegates for Columns
|
||||||
|
self.unified_view.setItemDelegateForColumn(UnifiedViewModel.COL_SUPPLIER, supplierSearchDelegate)
|
||||||
|
self.unified_view.setItemDelegateForColumn(UnifiedViewModel.COL_ASSET_TYPE, comboBoxDelegate)
|
||||||
|
self.unified_view.setItemDelegateForColumn(UnifiedViewModel.COL_TARGET_ASSET, lineEditDelegate)
|
||||||
|
self.unified_view.setItemDelegateForColumn(UnifiedViewModel.COL_ITEM_TYPE, comboBoxDelegate)
|
||||||
|
|
||||||
|
# Configure View Appearance
|
||||||
|
self.unified_view.setSizePolicy(QSizePolicy.Policy.Expanding, QSizePolicy.Policy.Expanding)
|
||||||
|
self.unified_view.setAlternatingRowColors(True)
|
||||||
|
self.unified_view.setSelectionBehavior(QAbstractItemView.SelectionBehavior.SelectRows)
|
||||||
|
self.unified_view.setEditTriggers(QAbstractItemView.EditTrigger.DoubleClicked | QAbstractItemView.EditTrigger.SelectedClicked | QAbstractItemView.EditTrigger.EditKeyPressed)
|
||||||
|
self.unified_view.setSelectionMode(QAbstractItemView.SelectionMode.ExtendedSelection) # Allow multi-select for re-interpret
|
||||||
|
|
||||||
|
# Configure Header Resize Modes
|
||||||
|
header = self.unified_view.header()
|
||||||
|
header.setStretchLastSection(False)
|
||||||
|
header.setSectionResizeMode(UnifiedViewModel.COL_NAME, QHeaderView.ResizeMode.ResizeToContents)
|
||||||
|
header.setSectionResizeMode(UnifiedViewModel.COL_TARGET_ASSET, QHeaderView.ResizeMode.Stretch)
|
||||||
|
header.setSectionResizeMode(UnifiedViewModel.COL_SUPPLIER, QHeaderView.ResizeMode.ResizeToContents)
|
||||||
|
header.setSectionResizeMode(UnifiedViewModel.COL_ASSET_TYPE, QHeaderView.ResizeMode.ResizeToContents)
|
||||||
|
header.setSectionResizeMode(UnifiedViewModel.COL_ITEM_TYPE, QHeaderView.ResizeMode.ResizeToContents)
|
||||||
|
|
||||||
|
# Enable custom context menu
|
||||||
|
self.unified_view.setContextMenuPolicy(Qt.ContextMenuPolicy.CustomContextMenu)
|
||||||
|
|
||||||
|
# Add the Unified View to the main layout
|
||||||
|
main_layout.addWidget(self.unified_view, 1) # Give it stretch factor 1
|
||||||
|
|
||||||
|
# --- Progress Bar ---
|
||||||
|
self.progress_bar = QProgressBar()
|
||||||
|
self.progress_bar.setValue(0)
|
||||||
|
self.progress_bar.setTextVisible(True)
|
||||||
|
self.progress_bar.setFormat("Idle") # Initial format
|
||||||
|
main_layout.addWidget(self.progress_bar)
|
||||||
|
|
||||||
|
# --- Blender Integration Controls ---
|
||||||
|
blender_group = QGroupBox("Blender Post-Processing")
|
||||||
|
blender_layout = QVBoxLayout(blender_group)
|
||||||
|
|
||||||
|
self.blender_integration_checkbox = QCheckBox("Run Blender Scripts After Processing")
|
||||||
|
self.blender_integration_checkbox.setToolTip("If checked, attempts to run create_nodegroups.py and create_materials.py in Blender.")
|
||||||
|
blender_layout.addWidget(self.blender_integration_checkbox)
|
||||||
|
|
||||||
|
# Nodegroup Blend Path
|
||||||
|
nodegroup_layout = QHBoxLayout()
|
||||||
|
nodegroup_layout.addWidget(QLabel("Nodegroup .blend:"))
|
||||||
|
self.nodegroup_blend_path_input = QLineEdit()
|
||||||
|
self.browse_nodegroup_blend_button = QPushButton("...")
|
||||||
|
self.browse_nodegroup_blend_button.setFixedWidth(30)
|
||||||
|
nodegroup_layout.addWidget(self.nodegroup_blend_path_input)
|
||||||
|
nodegroup_layout.addWidget(self.browse_nodegroup_blend_button)
|
||||||
|
blender_layout.addLayout(nodegroup_layout)
|
||||||
|
|
||||||
|
# Materials Blend Path
|
||||||
|
materials_layout = QHBoxLayout()
|
||||||
|
materials_layout.addWidget(QLabel("Materials .blend:"))
|
||||||
|
self.materials_blend_path_input = QLineEdit()
|
||||||
|
self.browse_materials_blend_button = QPushButton("...")
|
||||||
|
self.browse_materials_blend_button.setFixedWidth(30)
|
||||||
|
materials_layout.addWidget(self.materials_blend_path_input)
|
||||||
|
materials_layout.addWidget(self.browse_materials_blend_button)
|
||||||
|
blender_layout.addLayout(materials_layout)
|
||||||
|
|
||||||
|
# Initialize paths from config (Copied from MainWindow)
|
||||||
|
# Consider passing these defaults from MainWindow
|
||||||
|
if load_base_config:
|
||||||
|
try:
|
||||||
|
base_config = load_base_config()
|
||||||
|
default_ng_path = base_config.get('DEFAULT_NODEGROUP_BLEND_PATH', '')
|
||||||
|
default_mat_path = base_config.get('DEFAULT_MATERIALS_BLEND_PATH', '')
|
||||||
|
self.nodegroup_blend_path_input.setText(default_ng_path if default_ng_path else "")
|
||||||
|
self.materials_blend_path_input.setText(default_mat_path if default_mat_path else "")
|
||||||
|
except ConfigurationError as e:
|
||||||
|
log.error(f"MainPanelWidget: Error reading base configuration for default Blender paths: {e}")
|
||||||
|
except Exception as e:
|
||||||
|
log.error(f"MainPanelWidget: Error reading default Blender paths from config: {e}")
|
||||||
|
else:
|
||||||
|
log.warning("MainPanelWidget: load_base_config not available to set default Blender paths.")
|
||||||
|
|
||||||
|
|
||||||
|
# Disable Blender controls initially if checkbox is unchecked
|
||||||
|
self.nodegroup_blend_path_input.setEnabled(False)
|
||||||
|
self.browse_nodegroup_blend_button.setEnabled(False)
|
||||||
|
self.materials_blend_path_input.setEnabled(False)
|
||||||
|
self.browse_materials_blend_button.setEnabled(False)
|
||||||
|
|
||||||
|
main_layout.addWidget(blender_group) # Add the group box to the main layout
|
||||||
|
|
||||||
|
# --- Bottom Controls ---
|
||||||
|
bottom_controls_layout = QHBoxLayout()
|
||||||
|
self.overwrite_checkbox = QCheckBox("Overwrite Existing")
|
||||||
|
self.overwrite_checkbox.setToolTip("If checked, existing output folders for processed assets will be deleted and replaced.")
|
||||||
|
bottom_controls_layout.addWidget(self.overwrite_checkbox)
|
||||||
|
|
||||||
|
self.workers_label = QLabel("Workers:")
|
||||||
|
self.workers_spinbox = QSpinBox()
|
||||||
|
default_workers = 1
|
||||||
|
try:
|
||||||
|
cores = os.cpu_count()
|
||||||
|
if cores: default_workers = max(1, cores // 2)
|
||||||
|
except NotImplementedError: pass
|
||||||
|
self.workers_spinbox.setMinimum(1)
|
||||||
|
self.workers_spinbox.setMaximum(os.cpu_count() or 32)
|
||||||
|
self.workers_spinbox.setValue(default_workers)
|
||||||
|
self.workers_spinbox.setToolTip("Number of assets to process concurrently.")
|
||||||
|
bottom_controls_layout.addWidget(self.workers_label)
|
||||||
|
bottom_controls_layout.addWidget(self.workers_spinbox)
|
||||||
|
bottom_controls_layout.addStretch(1)
|
||||||
|
|
||||||
|
# --- LLM Re-interpret Button ---
|
||||||
|
self.llm_reinterpret_button = QPushButton("Re-interpret Selected with LLM")
|
||||||
|
self.llm_reinterpret_button.setToolTip("Re-run LLM interpretation on the selected source items.")
|
||||||
|
self.llm_reinterpret_button.setEnabled(False) # Initially disabled
|
||||||
|
bottom_controls_layout.addWidget(self.llm_reinterpret_button)
|
||||||
|
|
||||||
|
self.clear_queue_button = QPushButton("Clear Queue")
|
||||||
|
self.start_button = QPushButton("Start Processing")
|
||||||
|
self.cancel_button = QPushButton("Cancel")
|
||||||
|
self.cancel_button.setEnabled(False)
|
||||||
|
|
||||||
|
bottom_controls_layout.addWidget(self.clear_queue_button)
|
||||||
|
bottom_controls_layout.addWidget(self.start_button)
|
||||||
|
bottom_controls_layout.addWidget(self.cancel_button)
|
||||||
|
main_layout.addLayout(bottom_controls_layout)
    def _connect_signals(self):
        """Connect internal UI signals to slots or emit panel signals."""
        # Output Directory
        self.browse_output_button.clicked.connect(self._browse_for_output_directory)
        self.output_path_edit.editingFinished.connect(self._on_output_path_changed)  # Emit signal when user finishes editing

        # Unified View
        self.unified_view.selectionModel().selectionChanged.connect(self._update_llm_reinterpret_button_state)
        self.unified_view.customContextMenuRequested.connect(self._show_unified_view_context_menu)

        # Blender Controls
        self.blender_integration_checkbox.toggled.connect(self._toggle_blender_controls)
        self.browse_nodegroup_blend_button.clicked.connect(self._browse_for_nodegroup_blend)
        self.browse_materials_blend_button.clicked.connect(self._browse_for_materials_blend)
        # Emit signal when paths change
        self.nodegroup_blend_path_input.editingFinished.connect(self._emit_blender_settings_changed)
        self.materials_blend_path_input.editingFinished.connect(self._emit_blender_settings_changed)
        self.blender_integration_checkbox.toggled.connect(self._emit_blender_settings_changed)

        # Bottom Buttons
        self.clear_queue_button.clicked.connect(self.clear_queue_requested)  # Emit signal directly
        self.start_button.clicked.connect(self._on_start_processing_clicked)  # Use slot to gather data
        self.cancel_button.clicked.connect(self.cancel_requested)  # Emit signal directly
        self.llm_reinterpret_button.clicked.connect(self._on_llm_reinterpret_clicked)  # Use slot to gather data

    # --- Slots for Internal UI Logic ---

    @Slot()
    def _browse_for_output_directory(self):
        """Opens a dialog to select the output directory."""
        current_path = self.output_path_edit.text()
        if not current_path or not Path(current_path).is_dir():
            current_path = str(self.project_root)  # Use project root as fallback

        directory = QFileDialog.getExistingDirectory(
            self,
            "Select Output Directory",
            current_path,
            QFileDialog.Option.ShowDirsOnly | QFileDialog.Option.DontResolveSymlinks
        )
        if directory:
            self.output_path_edit.setText(directory)
            self._on_output_path_changed()  # Explicitly call the change handler

    @Slot()
    def _on_output_path_changed(self):
        """Emits the output_dir_changed signal."""
        self.output_dir_changed.emit(self.output_path_edit.text())

    @Slot(bool)
    def _toggle_blender_controls(self, checked):
        """Enable/disable Blender path inputs based on the checkbox state."""
        self.nodegroup_blend_path_input.setEnabled(checked)
        self.browse_nodegroup_blend_button.setEnabled(checked)
        self.materials_blend_path_input.setEnabled(checked)
        self.browse_materials_blend_button.setEnabled(checked)
        # No need to emit here, the checkbox toggle signal is connected separately

    def _browse_for_blend_file(self, line_edit_widget: QLineEdit):
        """Opens a dialog to select a .blend file and updates the line edit."""
        current_path = line_edit_widget.text()
        start_dir = str(Path(current_path).parent) if current_path and Path(current_path).exists() else str(self.project_root)

        file_path, _ = QFileDialog.getOpenFileName(
            self,
            "Select Blender File",
            start_dir,
            "Blender Files (*.blend);;All Files (*)"
        )
        if file_path:
            line_edit_widget.setText(file_path)
            line_edit_widget.editingFinished.emit()  # Trigger editingFinished to emit change signal

    @Slot()
    def _browse_for_nodegroup_blend(self):
        self._browse_for_blend_file(self.nodegroup_blend_path_input)

    @Slot()
    def _browse_for_materials_blend(self):
        self._browse_for_blend_file(self.materials_blend_path_input)

    @Slot()
    def _emit_blender_settings_changed(self):
        """Gathers current Blender settings and emits the blender_settings_changed signal."""
        enabled = self.blender_integration_checkbox.isChecked()
        ng_path = self.nodegroup_blend_path_input.text()
        mat_path = self.materials_blend_path_input.text()
        self.blender_settings_changed.emit(enabled, ng_path, mat_path)

    @Slot()
    def _on_start_processing_clicked(self):
        """Gathers settings and emits the process_requested signal."""
        output_dir = self.output_path_edit.text().strip()
        if not output_dir:
            QMessageBox.warning(self, "Missing Output Directory", "Please select an output directory.")
            return

        # Basic validation (MainWindow should do more thorough validation)
        try:
            Path(output_dir).mkdir(parents=True, exist_ok=True)
        except Exception as e:
            QMessageBox.warning(self, "Invalid Output Directory", f"Cannot use output directory:\n{output_dir}\n\nError: {e}")
            return

        settings = {
            "output_dir": output_dir,
            "overwrite": self.overwrite_checkbox.isChecked(),
            "workers": self.workers_spinbox.value(),
            "blender_enabled": self.blender_integration_checkbox.isChecked(),
            "nodegroup_blend_path": self.nodegroup_blend_path_input.text(),
            "materials_blend_path": self.materials_blend_path_input.text()
        }
        self.process_requested.emit(settings)

    @Slot()
    def _update_llm_reinterpret_button_state(self):
        """Enables/disables the LLM re-interpret button based on selection and LLM status."""
        selection_model = self.unified_view.selectionModel()
        has_selection = selection_model is not None and selection_model.hasSelection()
        # Enable only if there's a selection AND LLM is not currently active
        self.llm_reinterpret_button.setEnabled(has_selection and not self.llm_processing_active)

    @Slot()
    def _on_llm_reinterpret_clicked(self):
        """Gathers selected source paths and emits the llm_reinterpret_requested signal."""
        selected_indexes = self.unified_view.selectionModel().selectedIndexes()
        if not selected_indexes:
            return

        if self.llm_processing_active:
            QMessageBox.warning(self, "Busy", "LLM processing is already in progress. Please wait.")
            return

        unique_source_dirs = set()
        processed_source_paths = set()  # Track processed source paths to avoid duplicates
        for index in selected_indexes:
            if not index.isValid():
                continue
            item_node = index.internalPointer()
            if not item_node:
                continue

            # Traverse up to find the SourceRule node (simplified traversal)
            source_node = None
            current_node = item_node
            while current_node is not None:
                if isinstance(current_node, SourceRule):
                    source_node = current_node
                    break
                # Simplified parent traversal - adjust if model structure is different
                parent_attr = getattr(current_node, 'parent', None)  # Check for generic 'parent'
                if callable(parent_attr):  # Parent is a method (like in QStandardItemModel)
                    current_node = parent_attr()
                elif parent_attr:  # Parent is an attribute
                    current_node = parent_attr
                else:  # Try specific parent attributes if the generic one fails
                    parent_source = getattr(current_node, 'parent_source', None)
                    if parent_source:
                        current_node = parent_source
                    else:
                        parent_asset = getattr(current_node, 'parent_asset', None)
                        if parent_asset:
                            current_node = parent_asset
                        else:  # Reached top or unexpected node type
                            current_node = None

            if source_node and hasattr(source_node, 'input_path') and source_node.input_path:
                source_path_str = source_node.input_path
                if source_path_str in processed_source_paths:
                    continue
                source_path_obj = Path(source_path_str)
                if source_path_obj.is_dir() or (source_path_obj.is_file() and source_path_obj.suffix.lower() == '.zip'):
                    unique_source_dirs.add(source_path_str)
                    processed_source_paths.add(source_path_str)
                else:
                    log.warning(f"Skipping non-directory/zip source for re-interpretation: {source_path_str}")
            # else:  # Reduce log noise
            #     log.warning(f"Could not determine valid SourceRule or input_path for selected index: {index.row()},{index.column()} (Item type: {type(item_node).__name__})")

        if not unique_source_dirs:
            # self.statusBar().showMessage("No valid source directories found for selected items.", 5000)  # Status bar is in MainWindow
            log.warning("No valid source directories found for selected items to re-interpret.")
            return

        self.llm_reinterpret_requested.emit(list(unique_source_dirs))

    @Slot(QPoint)
    def _show_unified_view_context_menu(self, point: QPoint):
        """Shows the context menu for the unified view."""
        index = self.unified_view.indexAt(point)
        if not index.isValid():
            return

        item_node = index.internalPointer()
        is_source_item = isinstance(item_node, SourceRule)

        menu = QMenu(self)

        if is_source_item:
            copy_llm_example_action = QAction("Copy LLM Example to Clipboard", self)
            copy_llm_example_action.setToolTip("Copies a JSON structure representing the input files and predicted output, suitable for LLM examples.")
            copy_llm_example_action.triggered.connect(lambda: self._copy_llm_example_to_clipboard(index))
            menu.addAction(copy_llm_example_action)
            menu.addSeparator()

        # Add other actions...

        if not menu.isEmpty():
            menu.exec(self.unified_view.viewport().mapToGlobal(point))

    @Slot(QModelIndex)
    def _copy_llm_example_to_clipboard(self, index: QModelIndex):
        """Copies a JSON structure for the selected source item to the clipboard."""
        if not index.isValid():
            return
        item_node = index.internalPointer()
        if not isinstance(item_node, SourceRule):
            return

        source_rule: SourceRule = item_node
        log.info(f"Attempting to generate LLM example JSON for source: {source_rule.input_path}")

        all_file_paths = []
        predicted_assets_data = []

        for asset_rule in source_rule.assets:
            asset_files_data = []
            for file_rule in asset_rule.files:
                if file_rule.file_path:
                    all_file_paths.append(file_rule.file_path)
                    asset_files_data.append({
                        "file_path": file_rule.file_path,
                        "predicted_file_type": file_rule.item_type or "UNKNOWN"
                    })
            asset_files_data.sort(key=lambda x: x['file_path'])
            predicted_assets_data.append({
                "suggested_asset_name": asset_rule.asset_name or "UnnamedAsset",
                "predicted_asset_type": asset_rule.asset_type or "UNKNOWN",
                "files": asset_files_data
            })

        predicted_assets_data.sort(key=lambda x: x['suggested_asset_name'])
        all_file_paths.sort()

        if not all_file_paths:
            log.warning(f"No file paths found for source: {source_rule.input_path}. Cannot generate example.")
            # Cannot show status bar message here
            return

        llm_example = {
            "input": "\n".join(all_file_paths),
            "output": {"predicted_assets": predicted_assets_data}
        }

        try:
            json_string = json.dumps(llm_example, indent=2)
            clipboard = QGuiApplication.clipboard()  # Use QGuiApplication
            if clipboard:
                clipboard.setText(json_string)
                log.info(f"Copied LLM example JSON to clipboard for source: {source_rule.input_path}")
                # Cannot show status bar message here
            else:
                log.error("Failed to get system clipboard.")
        except Exception as e:
            log.exception(f"Error copying LLM example JSON to clipboard: {e}")

    # --- Public Slots for MainWindow to Call ---

    @Slot(int, int)
    def update_progress_bar(self, current_count, total_count):
        """Updates the progress bar display."""
        if total_count > 0:
            percentage = int((current_count / total_count) * 100)
            log.debug(f"Updating progress bar: current={current_count}, total={total_count}, calculated_percentage={percentage}")  # DEBUG LOG
            self.progress_bar.setValue(percentage)
            self.progress_bar.setFormat(f"%p% ({current_count}/{total_count})")
            QApplication.processEvents()  # Force GUI update
        else:
            self.progress_bar.setValue(0)
            self.progress_bar.setFormat("0/0")

    @Slot(str)
    def set_progress_bar_text(self, text: str):
        """Sets the text format of the progress bar."""
        self.progress_bar.setFormat(text)
        # Reset value if setting text like "Idle" or "Waiting..."
        if "%" not in text:
            self.progress_bar.setValue(0)

    @Slot(bool)
    def set_controls_enabled(self, enabled: bool):
        """Enables or disables controls within the panel."""
        # Enable/disable most controls based on the 'enabled' flag
        self.output_path_edit.setEnabled(enabled)
        self.browse_output_button.setEnabled(enabled)
        self.unified_view.setEnabled(enabled)
        self.overwrite_checkbox.setEnabled(enabled)
        self.workers_spinbox.setEnabled(enabled)
        self.clear_queue_button.setEnabled(enabled)
        self.blender_integration_checkbox.setEnabled(enabled)

        # Start button is enabled only if controls are generally enabled AND preset mode is active (handled by MainWindow)
        # Cancel button is enabled only when processing is active (handled by MainWindow)
        # LLM button state depends on selection and LLM status (handled by _update_llm_reinterpret_button_state)

        # Blender path inputs depend on both 'enabled' and the checkbox state
        blender_paths_enabled = enabled and self.blender_integration_checkbox.isChecked()
        self.nodegroup_blend_path_input.setEnabled(blender_paths_enabled)
        self.browse_nodegroup_blend_button.setEnabled(blender_paths_enabled)
        self.materials_blend_path_input.setEnabled(blender_paths_enabled)
        self.browse_materials_blend_button.setEnabled(blender_paths_enabled)

        # Update LLM button state explicitly when controls are enabled/disabled
        if enabled:
            self._update_llm_reinterpret_button_state()
        else:
            self.llm_reinterpret_button.setEnabled(False)

    @Slot(bool)
    def set_start_button_enabled(self, enabled: bool):
        """Sets the enabled state of the Start Processing button."""
        self.start_button.setEnabled(enabled)

    @Slot(str)
    def set_start_button_text(self, text: str):
        """Sets the text of the Start Processing button."""
        self.start_button.setText(text)

    @Slot(bool)
    def set_cancel_button_enabled(self, enabled: bool):
        """Sets the enabled state of the Cancel button."""
        self.cancel_button.setEnabled(enabled)

    @Slot(bool)
    def set_llm_processing_status(self, active: bool):
        """Informs the panel whether LLM processing is active."""
        self.llm_processing_active = active
        self._update_llm_reinterpret_button_state()  # Update button state based on new status

    # TODO: Add method to get current output path if needed by MainWindow before processing
    def get_output_directory(self) -> str:
        return self.output_path_edit.text().strip()

    # TODO: Add method to get current Blender settings if needed by MainWindow before processing
    def get_blender_settings(self) -> dict:
        return {
            "enabled": self.blender_integration_checkbox.isChecked(),
            "nodegroup_blend_path": self.nodegroup_blend_path_input.text(),
            "materials_blend_path": self.materials_blend_path_input.text()
        }

    # TODO: Add method to get current worker count if needed by MainWindow before processing
    def get_worker_count(self) -> int:
        return self.workers_spinbox.value()

    # TODO: Add method to get current overwrite setting if needed by MainWindow before processing
    def get_overwrite_setting(self) -> bool:
        return self.overwrite_checkbox.isChecked()

    # --- Delegate Dependency ---
    # This method might be needed by ComboBoxDelegate if it relies on MainWindow's logic
    def get_llm_source_preset_name(self) -> str | None:
        """
        Placeholder for providing context to delegates.

        Ideally, the required info (like the last preset name) should be passed
        from MainWindow when the delegate needs it, or the delegate's dependency
        should be refactored.
        """
        log.warning("MainPanelWidget.get_llm_source_preset_name called - needs proper implementation or refactoring.")
        # This needs to get the info from MainWindow, perhaps via a signal/slot or passed reference.
        # Returning None for now.
        return None
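For reference, `_copy_llm_example_to_clipboard` above produces a JSON document whose `input` is a newline-joined, sorted file list and whose `output.predicted_assets` mirrors the AssetRule/FileRule hierarchy. A minimal sketch of that shape, using hypothetical sample data (the asset name `brick` and the type strings `SURFACE`, `COL`, `NRM` are illustrative, not taken from the codebase):

```python
import json

# Hypothetical sample data mirroring the FileRule/AssetRule fields used above
files = ["brick/brick_col.png", "brick/brick_nrm.png"]
llm_example = {
    "input": "\n".join(sorted(files)),
    "output": {
        "predicted_assets": [
            {
                "suggested_asset_name": "brick",
                "predicted_asset_type": "SURFACE",
                "files": [
                    {"file_path": "brick/brick_col.png", "predicted_file_type": "COL"},
                    {"file_path": "brick/brick_nrm.png", "predicted_file_type": "NRM"},
                ],
            }
        ]
    },
}
json_string = json.dumps(llm_example, indent=2)  # What lands on the clipboard
```

Sorting both the file list and the asset list, as the real method does, keeps the copied example deterministic across runs.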
gui/main_window.py (2388 lines changed; file diff suppressed because it is too large)
@ -1,4 +1,4 @@
|
|||||||
# gui/prediction_handler.py
|
# gui/rule_based_prediction_handler.py
|
||||||
import logging
|
import logging
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
import time
|
import time
|
||||||
@ -11,7 +11,8 @@ from collections import defaultdict, Counter # Added Counter
|
|||||||
from typing import List, Dict, Any # For type hinting
|
from typing import List, Dict, Any # For type hinting
|
||||||
|
|
||||||
# --- PySide6 Imports ---
|
# --- PySide6 Imports ---
|
||||||
from PySide6.QtCore import QObject, Signal, QThread, Slot
|
from PySide6.QtCore import QObject, Slot # Keep QObject for parent type hint, Slot for classify_files if kept as method
|
||||||
|
# Removed Signal, QThread as they are handled by BasePredictionHandler or caller
|
||||||
|
|
||||||
# --- Backend Imports ---
|
# --- Backend Imports ---
|
||||||
import sys
|
import sys
|
||||||
@ -21,16 +22,13 @@ if str(project_root) not in sys.path:
|
|||||||
sys.path.insert(0, str(project_root))
|
sys.path.insert(0, str(project_root))
|
||||||
|
|
||||||
try:
|
try:
|
||||||
from configuration import Configuration, ConfigurationError, load_base_config # Import Configuration, ConfigurationError, and load_base_config
|
from configuration import Configuration, ConfigurationError # load_base_config might not be needed here
|
||||||
# AssetProcessor might not be needed directly anymore if logic is moved here
|
from rule_structure import SourceRule, AssetRule, FileRule
|
||||||
# from asset_processor import AssetProcessor, AssetProcessingError
|
from .base_prediction_handler import BasePredictionHandler # Import the base class
|
||||||
from rule_structure import SourceRule, AssetRule, FileRule # Removed AssetType, ItemType
|
|
||||||
# Removed: import config as app_config # Import project's config module
|
|
||||||
# Removed: Import the new dictionaries directly for easier access
|
|
||||||
# Removed: from config import ASSET_TYPE_DEFINITIONS, FILE_TYPE_DEFINITIONS
|
|
||||||
BACKEND_AVAILABLE = True
|
BACKEND_AVAILABLE = True
|
||||||
except ImportError as e:
|
except ImportError as e:
|
||||||
print(f"ERROR (PredictionHandler): Failed to import backend/config modules: {e}")
|
# Update error message source
|
||||||
|
print(f"ERROR (RuleBasedPredictionHandler): Failed to import backend/config/base modules: {e}")
|
||||||
# Define placeholders if imports fail
|
# Define placeholders if imports fail
|
||||||
Configuration = None
|
Configuration = None
|
||||||
load_base_config = None # Placeholder
|
load_base_config = None # Placeholder
|
||||||
@ -44,7 +42,7 @@ except ImportError as e:
|
|||||||
log = logging.getLogger(__name__)
|
log = logging.getLogger(__name__)
|
||||||
# Basic config if logger hasn't been set up elsewhere
|
# Basic config if logger hasn't been set up elsewhere
|
||||||
if not log.hasHandlers():
|
if not log.hasHandlers():
|
||||||
logging.basicConfig(level=logging.INFO, format='%(levelname)s (PredictHandler): %(message)s')
|
logging.basicConfig(level=logging.INFO, format='%(levelname)s (RuleBasedPredictHandler): %(message)s')
|
||||||
|
|
||||||
|
|
||||||
# Helper function for classification (can be moved outside class if preferred)
|
# Helper function for classification (can be moved outside class if preferred)
|
||||||
@ -303,254 +301,191 @@ def classify_files(file_list: List[str], config: Configuration) -> Dict[str, Lis
|
|||||||
return dict(temp_grouped_files)
|
return dict(temp_grouped_files)
|
||||||
|
|
||||||
|
|
||||||
class PredictionHandler(QObject):
|
class RuleBasedPredictionHandler(BasePredictionHandler):
|
||||||
"""
|
"""
|
||||||
Handles running predictions in a separate thread to avoid GUI freezes.
|
Handles running rule-based predictions in a separate thread using presets.
|
||||||
Generates the initial SourceRule hierarchy based on file lists and presets.
|
Generates the initial SourceRule hierarchy based on file lists and presets.
|
||||||
|
Inherits from BasePredictionHandler for common threading and signaling.
|
||||||
"""
|
"""
|
||||||
# --- Signals ---
|
|
||||||
# Emitted when the hierarchical rule structure is ready for a single source
|
|
||||||
rule_hierarchy_ready = Signal(list) # Emits a LIST containing ONE SourceRule object
|
|
||||||
# Emitted when prediction/hierarchy generation for a source is done (emits the input_source_identifier)
|
|
||||||
prediction_finished = Signal(str)
|
|
||||||
# Emitted for status updates
|
|
||||||
status_message = Signal(str, int)
|
|
||||||
|
|
||||||
def __init__(self, parent=None):
|
def __init__(self, input_source_identifier: str, original_input_paths: list[str], preset_name: str, parent: QObject = None):
|
||||||
super().__init__(parent)
|
"""
|
||||||
self._is_running = False
|
Initializes the rule-based handler.
|
||||||
|
|
||||||
@property
|
Args:
|
||||||
def is_running(self):
|
input_source_identifier: The unique identifier for the input source (e.g., file path).
|
||||||
return self._is_running
|
original_input_paths: List of absolute file paths extracted from the source.
|
||||||
|
preset_name: The name of the preset configuration to use.
|
||||||
|
parent: The parent QObject.
|
||||||
|
"""
|
||||||
|
super().__init__(input_source_identifier, parent)
|
||||||
|
self.original_input_paths = original_input_paths
|
||||||
|
self.preset_name = preset_name
|
||||||
|
# _is_running is handled by the base class
|
||||||
|
# Keep track of the current request being processed by this persistent handler
|
||||||
|
self._current_input_path = None
|
||||||
|
self._current_file_list = None
|
||||||
|
self._current_preset_name = None
|
||||||
|
|
||||||
# Removed _predict_single_asset method
|
# Re-introduce run_prediction as the main slot to receive requests
|
||||||
|
@Slot(str, list, str)
|
||||||
@Slot(str, list, str) # Explicitly define types for the slot
|
|
||||||
def run_prediction(self, input_source_identifier: str, original_input_paths: list[str], preset_name: str):
|
def run_prediction(self, input_source_identifier: str, original_input_paths: list[str], preset_name: str):
|
||||||
"""
|
"""
|
||||||
Generates the initial SourceRule hierarchy for a given source identifier
|
Generates the initial SourceRule hierarchy for a given source identifier,
|
||||||
(which could be a folder or archive path), extracting the actual file list first.
|
|
||||||
file list, and preset name. Populates only overridable fields based on
|
file list, and preset name. Populates only overridable fields based on
|
||||||
classification and preset defaults.
|
classification and preset defaults.
|
||||||
This method is intended to be run in a separate QThread.
|
This method is intended to be run in the handler's QThread.
|
||||||
|
Uses the base class signals for reporting results/errors.
|
||||||
"""
|
"""
|
||||||
thread_id = QThread.currentThread()
|
# Check if already running a prediction for a *different* source
|
||||||
log.info(f"[{time.time():.4f}][T:{thread_id}] --> Entered PredictionHandler.run_prediction.")
|
# Allow re-triggering for the *same* source if needed (e.g., preset changed)
|
||||||
# Note: file_list argument is renamed to original_input_paths for clarity,
|
if self._is_running and self._current_input_path != input_source_identifier:
|
||||||
# but the signal passes the list of source paths, not the content files yet.
|
log.warning(f"RuleBasedPredictionHandler is busy with '{self._current_input_path}'. Ignoring request for '{input_source_identifier}'.")
|
||||||
# We use input_source_identifier as the primary path to analyze.
|
# Optionally emit an error signal specific to this condition
|
||||||
log.info(f"VERIFY: PredictionHandler received request. Source: '{input_source_identifier}', Original Paths: {original_input_paths}, Preset: '{preset_name}'") # DEBUG Verify
|
# self.prediction_error.emit(input_source_identifier, "Handler busy with another prediction.")
|
||||||
log.info(f"Source Identifier: '{input_source_identifier}', Preset: '{preset_name}'")
|
return
|
||||||
|
|
||||||
if self._is_running:
|
self._is_running = True
|
||||||
log.warning("Prediction is already running for another source. Aborting this run.")
|
self._is_cancelled = False # Reset cancellation flag for new request
|
||||||
# Don't emit finished, let the running one complete.
|
self._current_input_path = input_source_identifier
|
||||||
return
|
self._current_file_list = original_input_paths
|
||||||
|
self._current_preset_name = preset_name
|
||||||
|
|
||||||
|
log.info(f"Starting rule-based prediction for: {input_source_identifier} using preset: {preset_name}")
|
||||||
|
self.status_update.emit(f"Starting analysis for '{Path(input_source_identifier).name}'...") # Use base signal
|
||||||
|
|
||||||
|
source_rules_list = []
|
||||||
|
try:
|
||||||
if not BACKEND_AVAILABLE:
|
if not BACKEND_AVAILABLE:
|
||||||
log.error("Backend/config modules not available. Cannot run prediction.")
|
raise RuntimeError("Backend/config modules not available. Cannot run prediction.")
|
||||||
self.status_message.emit("Error: Backend components missing.", 5000)
|
|
||||||
# self.prediction_finished.emit() # Don't emit finished if never started properly
|
|
||||||
return
|
|
||||||
if not preset_name:
|
if not preset_name:
|
||||||
log.warning("No preset selected for prediction.")
|
log.warning("No preset selected for prediction.")
|
||||||
self.status_message.emit("No preset selected.", 3000)
|
self.status_update.emit("No preset selected.")
|
||||||
# self.prediction_finished.emit()
|
# Emit empty list for non-critical issues, signal completion
|
||||||
|
self.prediction_ready.emit(input_source_identifier, [])
|
||||||
|
self._is_running = False # Mark as finished
|
||||||
return
|
return
|
||||||
# Check the identifier path itself
|
|
||||||
source_path = Path(input_source_identifier)
|
source_path = Path(input_source_identifier)
|
||||||
if not source_path.exists():
|
if not source_path.exists():
|
||||||
log.warning(f"Input source path does not exist: '{input_source_identifier}'. Skipping prediction.")
|
log.warning(f"Input source path does not exist: '{input_source_identifier}'. Skipping prediction.")
|
||||||
self.status_message.emit("Input path not found.", 3000)
|
raise FileNotFoundError(f"Input source path not found: {input_source_identifier}")
|
||||||
self.rule_hierarchy_ready.emit([])
|
|
||||||
self.prediction_finished.emit(input_source_identifier)
|
|
||||||
return
|
|
||||||
|
|
||||||
|
# --- Load Configuration ---
|
||||||
self._is_running = True
|
|
||||||
self.status_message.emit(f"Analyzing '{source_path.name}'...", 0)
|
|
||||||
|
|
||||||
config: Configuration | None = None
|
|
||||||
# Removed: asset_type_definitions: Dict[str, Dict] = {}
|
|
||||||
# Removed: file_type_definitions: Dict[str, Dict] = {} # These are ItemType names
|
|
||||||
|
|
||||||
try:
|
|
||||||
config = Configuration(preset_name)
|
config = Configuration(preset_name)
|
||||||
# Removed: Load allowed types from the project's config module (now dictionaries)
|
log.info(f"Successfully loaded configuration for preset '{preset_name}'.")
|
||||||
# Removed: if app_config:
|
|
||||||
# Removed: asset_type_definitions = getattr(app_config, 'ASSET_TYPE_DEFINITIONS', {})
|
|
||||||
-            # Removed: file_type_definitions = getattr(app_config, 'FILE_TYPE_DEFINITIONS', {})
-            # Removed: log.debug(f"Loaded AssetType Definitions: {list(asset_type_definitions.keys())}")
-            # Removed: log.debug(f"Loaded FileType Definitions (ItemTypes): {list(file_type_definitions.keys())}")
-            # Removed: else:
-            # Removed: log.warning("Project config module not loaded. Cannot get type definitions.")

-            except ConfigurationError as e:
-                log.error(f"Failed to load configuration for preset '{preset_name}': {e}")
-                self.status_message.emit(f"Error loading preset '{preset_name}': {e}", 5000)
-                self.prediction_finished.emit(input_source_identifier)
-                self._is_running = False
-                return
-            except Exception as e:
-                log.exception(f"Unexpected error loading configuration or allowed types for preset '{preset_name}': {e}")
-                self.status_message.emit(f"Unexpected error loading preset '{preset_name}'.", 5000)
-                self.prediction_finished.emit(input_source_identifier)
-                self._is_running = False
-                return
+            if self._is_cancelled: raise RuntimeError("Prediction cancelled before classification.")

-            log.debug(f"DEBUG: Calling classify_files with file_list: {original_input_paths}") # DEBUG LOG
             # --- Perform Classification ---
+            self.status_update.emit(f"Classifying files for '{source_path.name}'...")
             try:
                 classified_assets = classify_files(original_input_paths, config)
             except Exception as e:
                 log.exception(f"Error during file classification for source '{input_source_identifier}': {e}")
-                self.status_message.emit(f"Error classifying files: {e}", 5000)
-                self.prediction_finished.emit(input_source_identifier)
-                self._is_running = False
-                return
+                raise RuntimeError(f"Error classifying files: {e}") from e
+            if self._is_cancelled: raise RuntimeError("Prediction cancelled after classification.")

             if not classified_assets:
                 log.warning(f"Classification yielded no assets for source '{input_source_identifier}'.")
-                self.status_message.emit("No assets identified from files.", 3000)
-                self.rule_hierarchy_ready.emit([]) # Emit empty list
-                self.prediction_finished.emit(input_source_identifier)
-                self._is_running = False
+                self.status_update.emit("No assets identified from files.")
+                # Emit empty list, signal completion
+                self.prediction_ready.emit(input_source_identifier, [])
+                self._is_running = False # Mark as finished
                 return
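For readers following the hunk above: the hierarchy-building loop that follows consumes `classified_assets` as a mapping of asset name to a list of per-file dicts (`item_type`, `file_path`, `asset_name`, optionally `is_gloss_source`). The exact return schema of `classify_files` is not shown in this diff, so the snippet below is a hypothetical stand-in that only illustrates the shape the loop assumes:

```python
# Hypothetical sketch: the real classify_files output may carry more keys.
def summarize_classified(classified_assets: dict) -> dict:
    """Count non-ignored files per asset, mirroring the handler's iteration."""
    summary = {}
    for asset_name, files_info in classified_assets.items():
        if not files_info:  # the handler skips empty asset groups
            continue
        kept = [f for f in files_info if f['item_type'] != 'FILE_IGNORE']
        summary[asset_name] = len(kept)
    return summary

example = {
    "BrickWall01": [
        {"file_path": "BrickWall01_COL.png", "item_type": "COL", "asset_name": "BrickWall01"},
        {"file_path": "BrickWall01_NRM.png", "item_type": "NRM", "asset_name": "BrickWall01"},
        {"file_path": "readme.txt", "item_type": "FILE_IGNORE", "asset_name": "BrickWall01"},
    ],
    "Empty": [],
}
print(summarize_classified(example))  # → {'BrickWall01': 2}
```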
             # --- Build the Hierarchy ---
-            source_rules_list = []
+            self.status_update.emit(f"Building rule hierarchy for '{source_path.name}'...")
             try:
-                # Determine SourceRule level overrides/defaults
-                # Get supplier name from the config property
-                supplier_identifier = config.supplier_name # Use the property
+                # (Hierarchy building logic remains the same as before)
+                supplier_identifier = config.supplier_name

-                # Create the single SourceRule for this input source
                 source_rule = SourceRule(
-                    input_path=input_source_identifier, # Use the identifier provided
-                    supplier_identifier=supplier_identifier, # Set overridable field
-                    preset_name=preset_name # Pass the selected preset name
+                    input_path=input_source_identifier,
+                    supplier_identifier=supplier_identifier,
+                    preset_name=preset_name
                 )
-                log.debug(f"Created SourceRule for identifier: {input_source_identifier} with supplier: {supplier_identifier}")

                 asset_rules = []
-                # Get allowed asset types from config's internal core settings
                 asset_type_definitions = config._core_settings.get('ASSET_TYPE_DEFINITIONS', {})
-                log.debug(f"Loaded AssetType Definitions from config: {list(asset_type_definitions.keys())}")
+                file_type_definitions = config._core_settings.get('FILE_TYPE_DEFINITIONS', {})

                 for asset_name, files_info in classified_assets.items():
-                    if not files_info: continue # Skip empty asset groups
+                    if self._is_cancelled: raise RuntimeError("Prediction cancelled during hierarchy building (assets).")
+                    if not files_info: continue

-                    # Determine AssetRule level overrides/defaults
                     item_types_in_asset = {f_info['item_type'] for f_info in files_info}
-                    predicted_asset_type = "Surface" # Default to "Surface" string
-                    material_indicators = {"MAP_COL", "MAP_NRM", "MAP_ROUGH", "MAP_METAL", "MAP_AO", "MAP_DISP", "COL", "NRM", "ROUGH", "METAL", "AO", "DISP"} # Added base types too
-                    if any(it in material_indicators for it in item_types_in_asset if it not in ["EXTRA", "FILE_IGNORE"]): # Exclude non-maps
-                        predicted_asset_type = "Surface" # Predict as "Surface" string
+                    predicted_asset_type = "Surface"
+                    material_indicators = {"MAP_COL", "MAP_NRM", "MAP_ROUGH", "MAP_METAL", "MAP_AO", "MAP_DISP", "COL", "NRM", "ROUGH", "METAL", "AO", "DISP"}
+                    if any(it in material_indicators for it in item_types_in_asset if it not in ["EXTRA", "FILE_IGNORE"]):
+                        predicted_asset_type = "Surface"

-                    # Ensure the predicted type is allowed, fallback if necessary
                     if asset_type_definitions and predicted_asset_type not in asset_type_definitions:
-                        log.warning(f"Predicted AssetType '{predicted_asset_type}' for asset '{asset_name}' is not in ASSET_TYPE_DEFINITIONS from config. Falling back.")
+                        log.warning(f"Predicted AssetType '{predicted_asset_type}' for asset '{asset_name}' is not in ASSET_TYPE_DEFINITIONS. Falling back.")
                         default_type = config.default_asset_category
-                        if default_type in asset_type_definitions:
-                            predicted_asset_type = default_type
-                        elif asset_type_definitions:
-                            predicted_asset_type = list(asset_type_definitions.keys())[0]
-                        else:
-                            pass # Keep the original prediction if definitions are empty
+                        if default_type in asset_type_definitions: predicted_asset_type = default_type
+                        elif asset_type_definitions: predicted_asset_type = list(asset_type_definitions.keys())[0]

-                    asset_rule = AssetRule(
-                        asset_name=asset_name,
-                        asset_type=predicted_asset_type,
-                    )
-                    log.debug(f"Created AssetRule for asset: {asset_name} with type: {predicted_asset_type}")
+                    asset_rule = AssetRule(asset_name=asset_name, asset_type=predicted_asset_type)
                     file_rules = []
-                    file_type_definitions = config._core_settings.get('FILE_TYPE_DEFINITIONS', {})
-                    log.debug(f"Loaded FileType Definitions (ItemTypes) from config: {list(file_type_definitions.keys())}")

                     for file_info in files_info:
+                        if self._is_cancelled: raise RuntimeError("Prediction cancelled during hierarchy building (files).")

                         base_item_type = file_info['item_type']
                         target_asset_name_override = file_info['asset_name']

-                        # Determine the final item_type string (prefix maps, check if allowed)
                         final_item_type = base_item_type
                         if not base_item_type.startswith("MAP_") and base_item_type not in ["FILE_IGNORE", "EXTRA", "MODEL"]:
                             final_item_type = f"MAP_{base_item_type}"

-                        # Check if the final type is allowed
                         if file_type_definitions and final_item_type not in file_type_definitions and base_item_type not in ["FILE_IGNORE", "EXTRA"]:
                             log.warning(f"Predicted ItemType '{base_item_type}' (checked as '{final_item_type}') for file '{file_info['file_path']}' is not in FILE_TYPE_DEFINITIONS. Setting to FILE_IGNORE.")
                             final_item_type = "FILE_IGNORE"

-                        # Retrieve the standard_type
                         standard_map_type = None
                         file_type_details = file_type_definitions.get(final_item_type)
-                        if file_type_details:
-                            standard_map_type = file_type_details.get('standard_type')
-                            log.debug(f" Found standard_type '{standard_map_type}' for final_item_type '{final_item_type}'")
+                        if file_type_details: standard_map_type = file_type_details.get('standard_type')
                         else:
                             file_type_details_alias = file_type_definitions.get(base_item_type)
-                            if file_type_details_alias:
-                                standard_map_type = file_type_details_alias.get('standard_type')
-                                log.debug(f" Found standard_type '{standard_map_type}' via alias lookup for base_item_type '{base_item_type}'")
-                            elif base_item_type in file_type_definitions:
-                                standard_map_type = base_item_type
-                                log.debug(f" Using base_item_type '{base_item_type}' itself as standard_map_type.")
-                            else:
-                                log.debug(f" Could not determine standard_map_type for base '{base_item_type}' / final '{final_item_type}'. Setting to None.")
+                            if file_type_details_alias: standard_map_type = file_type_details_alias.get('standard_type')
+                            elif base_item_type in file_type_definitions: standard_map_type = base_item_type

-                        output_format_override = None
-                        item_type_override = None

-                        log.debug(f" Creating FileRule for: {file_info['file_path']}")
-                        log.debug(f" Base Item Type (from classification): {base_item_type}")
-                        log.debug(f" Final Item Type (for model): {final_item_type}")
-                        log.debug(f" Target Asset Name Override: {target_asset_name_override}")
-                        log.debug(f" Determined Standard Map Type: {standard_map_type}")
-                        is_gloss_source_value = file_info.get('is_gloss_source', 'MISSING')
-                        log.debug(f" Value for 'is_gloss_source' from file_info: {is_gloss_source_value}")
+                        is_gloss_source_value = file_info.get('is_gloss_source', False)

                         file_rule = FileRule(
                             file_path=file_info['file_path'],
                             item_type=final_item_type,
                             item_type_override=final_item_type,
                             target_asset_name_override=target_asset_name_override,
-                            output_format_override=output_format_override,
+                            output_format_override=None,
                             is_gloss_source=is_gloss_source_value if isinstance(is_gloss_source_value, bool) else False,
                             standard_map_type=standard_map_type,
                             resolution_override=None,
                             channel_merge_instructions={},
                         )
                         file_rules.append(file_rule)

                     asset_rule.files = file_rules
                     asset_rules.append(asset_rule)

                 source_rule.assets = asset_rules
-                log.debug(f"Built SourceRule '{source_rule.input_path}' with {len(asset_rules)} AssetRule(s).")
                 source_rules_list.append(source_rule)

             except Exception as e:
                 log.exception(f"Error building rule hierarchy for source '{input_source_identifier}': {e}")
-                self.status_message.emit(f"Error building rules: {e}", 5000)
-                self.prediction_finished.emit(input_source_identifier)
+                raise RuntimeError(f"Error building rule hierarchy: {e}") from e

+            # --- Emit Success Signal ---
+            log.info(f"Rule-based prediction finished successfully for '{input_source_identifier}'.")
+            self.prediction_ready.emit(input_source_identifier, source_rules_list) # Use base signal

+        except Exception as e:
+            # --- Emit Error Signal ---
+            log.exception(f"Error during rule-based prediction for '{input_source_identifier}': {e}")
+            error_msg = f"Error analyzing '{Path(input_source_identifier).name}': {e}"
+            self.prediction_error.emit(input_source_identifier, error_msg) # Use base signal

+        finally:
+            # --- Cleanup ---
             self._is_running = False
-            return
-
-            # --- Emit Results ---
-            log.info(f"VERIFY: Emitting rule_hierarchy_ready with {len(source_rules_list)} SourceRule(s).")
-            for i, rule in enumerate(source_rules_list):
-                log.debug(f" VERIFY Rule {i}: Input='{rule.input_path}', Assets={len(rule.assets)}")
-            log.info(f"[{time.time():.4f}][T:{thread_id}] Prediction run finished. Emitting hierarchy for '{input_source_identifier}'.")
-            self.rule_hierarchy_ready.emit(source_rules_list)
-            log.info(f"[{time.time():.4f}][T:{thread_id}] Emitted rule_hierarchy_ready signal.")
-
-            self.status_message.emit(f"Analysis complete for '{input_source_identifier}'.", 3000)
-            self.prediction_finished.emit(input_source_identifier)
-            self._is_running = False
-            log.info(f"[{time.time():.4f}][T:{thread_id}] <-- Exiting PredictionHandler.run_prediction.")
+            self._current_input_path = None # Clear current task info
+            self._current_file_list = None
+            self._current_preset_name = None
+            log.info(f"Finished rule-based prediction run for: {input_source_identifier}")
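The refactored handler above emits a list of `SourceRule` objects via `prediction_ready`. A minimal, hypothetical sketch of the three-level Source → Asset → File shape it builds (the real classes live in `rule_structure.py` and carry more fields, such as the override attributes visible in the diff):

```python
# Simplified stand-ins for the rule_structure.py classes; illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FileRule:
    file_path: str
    item_type: str
    standard_map_type: Optional[str] = None
    is_gloss_source: bool = False

@dataclass
class AssetRule:
    asset_name: str
    asset_type: str
    files: List[FileRule] = field(default_factory=list)

@dataclass
class SourceRule:
    input_path: str
    preset_name: str
    assets: List[AssetRule] = field(default_factory=list)

# One SourceRule per input archive, one AssetRule per classified asset,
# one FileRule per file inside that asset.
rule = SourceRule(input_path="input.zip", preset_name="DefaultPreset")
asset = AssetRule(asset_name="BrickWall01", asset_type="Surface")
asset.files.append(FileRule(file_path="BrickWall01_COL.png", item_type="MAP_COL"))
rule.assets.append(asset)
print(len(rule.assets), len(rule.assets[0].files))  # → 1 1
```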
717 gui/preset_editor_widget.py Normal file
@@ -0,0 +1,717 @@
import sys
import os
import json
import logging
from pathlib import Path
from functools import partial

from PySide6.QtWidgets import (
    QWidget, QVBoxLayout, QHBoxLayout, QListWidget, QPushButton, QLabel, QTabWidget,
    QLineEdit, QTextEdit, QSpinBox, QTableWidget, QGroupBox, QFormLayout,
    QHeaderView, QAbstractItemView, QListWidgetItem, QTableWidgetItem, QMessageBox,
    QFileDialog, QInputDialog, QSizePolicy
)
from PySide6.QtCore import Qt, Signal, QObject, Slot
from PySide6.QtGui import QAction # Keep QAction if needed for context menus within editor later

# --- Constants ---
# Assuming project root is parent of the directory containing this file
script_dir = Path(__file__).parent
project_root = script_dir.parent
PRESETS_DIR = project_root / "Presets" # Corrected path
TEMPLATE_PATH = PRESETS_DIR / "_template.json"

log = logging.getLogger(__name__)

# --- Preset Editor Widget ---

class PresetEditorWidget(QWidget):
    """
    Widget dedicated to managing and editing presets.
    Contains the preset list, editor tabs, and save/load functionality.
    """
    # Signal emitted when presets list changes (saved, deleted, new)
    presets_changed_signal = Signal()
    # Signal emitted when the selected preset (or LLM/Placeholder) changes
    # Emits: mode ("preset", "llm", "placeholder"), preset_name (str or None)
    preset_selection_changed_signal = Signal(str, str)

    def __init__(self, parent=None):
        super().__init__(parent)

        # --- Internal State ---
        self._last_valid_preset_name = None # Store the name of the last valid preset loaded
        self.current_editing_preset_path = None
        self.editor_unsaved_changes = False
        self._is_loading_editor = False # Flag to prevent signals during load

        # --- UI Setup ---
        self._init_ui()

        # --- Initial State ---
        self._clear_editor() # Clear/disable editor fields initially
        self._set_editor_enabled(False) # Disable editor initially
        self.populate_presets() # Populate preset list

        # --- Connect Editor Signals ---
        self._connect_editor_change_signals()
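`__init__` above flips `_is_loading_editor` while fields are being populated so that programmatic changes do not mark the editor dirty. A Qt-free sketch of that guard pattern (class and method names here are illustrative, not the widget's actual API):

```python
# Illustrative "loading guard": change notifications fired while a load is in
# progress are ignored, so only genuine user edits set the dirty flag.
class DirtyTracker:
    def __init__(self):
        self._is_loading = False
        self.unsaved_changes = False

    def mark_unsaved(self):
        if self._is_loading:          # suppress programmatic changes
            return
        self.unsaved_changes = True

    def load(self, apply_fields):
        self._is_loading = True
        try:
            apply_fields(self)        # field setters fire mark_unsaved
        finally:
            self._is_loading = False  # re-enable tracking even on error

tracker = DirtyTracker()
tracker.load(lambda t: t.mark_unsaved())   # simulated signal during load
print(tracker.unsaved_changes)  # → False
tracker.mark_unsaved()                     # a real user edit afterwards
print(tracker.unsaved_changes)  # → True
```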
    def _init_ui(self):
        """Initializes the UI elements for the preset editor."""
        editor_layout = QVBoxLayout(self)
        editor_layout.setContentsMargins(5, 5, 5, 5) # Reduce margins

        # Preset List and Controls
        list_layout = QVBoxLayout()
        list_layout.addWidget(QLabel("Presets:"))
        self.editor_preset_list = QListWidget()
        self.editor_preset_list.currentItemChanged.connect(self._load_selected_preset_for_editing)
        list_layout.addWidget(self.editor_preset_list)

        list_button_layout = QHBoxLayout()
        self.editor_new_button = QPushButton("New")
        self.editor_delete_button = QPushButton("Delete")
        self.editor_new_button.clicked.connect(self._new_preset)
        self.editor_delete_button.clicked.connect(self._delete_selected_preset)
        list_button_layout.addWidget(self.editor_new_button)
        list_button_layout.addWidget(self.editor_delete_button)
        list_layout.addLayout(list_button_layout)
        editor_layout.addLayout(list_layout, 1) # Allow list to stretch

        # Editor Tabs
        self.editor_tab_widget = QTabWidget()
        self.editor_tab_general_naming = QWidget()
        self.editor_tab_mapping_rules = QWidget()
        self.editor_tab_widget.addTab(self.editor_tab_general_naming, "General & Naming")
        self.editor_tab_widget.addTab(self.editor_tab_mapping_rules, "Mapping & Rules")
        self._create_editor_general_tab()
        self._create_editor_mapping_tab()
        editor_layout.addWidget(self.editor_tab_widget, 3) # Allow tabs to stretch more

        # Save Buttons
        save_button_layout = QHBoxLayout()
        self.editor_save_button = QPushButton("Save")
        self.editor_save_as_button = QPushButton("Save As...")
        self.editor_save_button.setEnabled(False) # Disabled initially
        self.editor_save_button.clicked.connect(self._save_current_preset)
        self.editor_save_as_button.clicked.connect(self._save_preset_as)
        save_button_layout.addStretch()
        save_button_layout.addWidget(self.editor_save_button)
        save_button_layout.addWidget(self.editor_save_as_button)
        editor_layout.addLayout(save_button_layout)

    def _create_editor_general_tab(self):
        """Creates the widgets and layout for the 'General & Naming' editor tab."""
        layout = QVBoxLayout(self.editor_tab_general_naming)
        form_layout = QFormLayout()
        form_layout.setFieldGrowthPolicy(QFormLayout.FieldGrowthPolicy.ExpandingFieldsGrow)

        # Basic Info
        self.editor_preset_name = QLineEdit()
        self.editor_supplier_name = QLineEdit()
        self.editor_notes = QTextEdit()
        self.editor_notes.setAcceptRichText(False)
        self.editor_notes.setFixedHeight(60)
        form_layout.addRow("Preset Name:", self.editor_preset_name)
        form_layout.addRow("Supplier Name:", self.editor_supplier_name)
        form_layout.addRow("Notes:", self.editor_notes)
        layout.addLayout(form_layout)

        # Source Naming Group
        naming_group = QGroupBox("Source File Naming Rules")
        naming_layout_outer = QVBoxLayout(naming_group)
        naming_layout_form = QFormLayout()
        self.editor_separator = QLineEdit()
        self.editor_separator.setMaxLength(1)
        self.editor_spin_base_name_idx = QSpinBox()
        self.editor_spin_base_name_idx.setMinimum(-1)
        self.editor_spin_map_type_idx = QSpinBox()
        self.editor_spin_map_type_idx.setMinimum(-1)
        naming_layout_form.addRow("Separator:", self.editor_separator)
        naming_layout_form.addRow("Base Name Index:", self.editor_spin_base_name_idx)
        naming_layout_form.addRow("Map Type Index:", self.editor_spin_map_type_idx)
        naming_layout_outer.addLayout(naming_layout_form)
        # Gloss Keywords List
        self._setup_list_widget_with_controls(naming_layout_outer, "Glossiness Keywords", "editor_list_gloss_keywords")
        # Bit Depth Variants Table
        self._setup_table_widget_with_controls(naming_layout_outer, "16-bit Variant Patterns", "editor_table_bit_depth_variants", ["Map Type", "Pattern"])
        self.editor_table_bit_depth_variants.horizontalHeader().setSectionResizeMode(0, QHeaderView.ResizeMode.ResizeToContents)
        self.editor_table_bit_depth_variants.horizontalHeader().setSectionResizeMode(1, QHeaderView.ResizeMode.Stretch)
        layout.addWidget(naming_group)

        # Extra Files Group
        self._setup_list_widget_with_controls(layout, "Move to 'Extra' Folder Patterns", "editor_list_extra_patterns")

        layout.addStretch(1)

    def _create_editor_mapping_tab(self):
        """Creates the widgets and layout for the 'Mapping & Rules' editor tab."""
        layout = QVBoxLayout(self.editor_tab_mapping_rules)

        # Map Type Mapping Group
        self._setup_table_widget_with_controls(layout, "Map Type Mapping (Standard Type <- Input Keywords)", "editor_table_map_type_mapping", ["Standard Type", "Input Keywords (comma-sep)"])
        self.editor_table_map_type_mapping.horizontalHeader().setSectionResizeMode(0, QHeaderView.ResizeMode.ResizeToContents)
        self.editor_table_map_type_mapping.horizontalHeader().setSectionResizeMode(1, QHeaderView.ResizeMode.Stretch)

        # Category Rules Group
        category_group = QGroupBox("Asset Category Rules")
        category_layout = QVBoxLayout(category_group)
        self._setup_list_widget_with_controls(category_layout, "Model File Patterns", "editor_list_model_patterns")
        self._setup_list_widget_with_controls(category_layout, "Decal Keywords", "editor_list_decal_keywords")
        layout.addWidget(category_group)

        # Archetype Rules Group
        self._setup_table_widget_with_controls(layout, "Archetype Rules", "editor_table_archetype_rules", ["Archetype Name", "Match Any (comma-sep)", "Match All (comma-sep)"])
        self.editor_table_archetype_rules.horizontalHeader().setSectionResizeMode(0, QHeaderView.ResizeMode.ResizeToContents)
        self.editor_table_archetype_rules.horizontalHeader().setSectionResizeMode(1, QHeaderView.ResizeMode.Stretch)
        self.editor_table_archetype_rules.horizontalHeader().setSectionResizeMode(2, QHeaderView.ResizeMode.Stretch)

        layout.addStretch(1)

    # --- Helper Functions for UI Setup (Moved into class) ---
    def _setup_list_widget_with_controls(self, parent_layout, label_text, attribute_name):
        """Adds a QListWidget with Add/Remove buttons to a layout."""
        list_widget = QListWidget()
        list_widget.setAlternatingRowColors(True)
        list_widget.setEditTriggers(QAbstractItemView.EditTrigger.DoubleClicked | QAbstractItemView.EditTrigger.SelectedClicked | QAbstractItemView.EditTrigger.EditKeyPressed)
        setattr(self, attribute_name, list_widget) # Store list widget on the instance

        add_button = QPushButton("+")
        remove_button = QPushButton("-")
        add_button.setFixedWidth(30)
        remove_button.setFixedWidth(30)

        button_layout = QVBoxLayout()
        button_layout.addWidget(add_button)
        button_layout.addWidget(remove_button)
        button_layout.addStretch()

        list_layout = QHBoxLayout()
        list_layout.addWidget(list_widget)
        list_layout.addLayout(button_layout)

        group_box = QGroupBox(label_text)
        group_box_layout = QVBoxLayout(group_box)
        group_box_layout.addLayout(list_layout)

        parent_layout.addWidget(group_box)

        # Connections
        add_button.clicked.connect(partial(self._editor_add_list_item, list_widget))
        remove_button.clicked.connect(partial(self._editor_remove_list_item, list_widget))
        list_widget.itemChanged.connect(self._mark_editor_unsaved) # Mark unsaved on item edit

    def _setup_table_widget_with_controls(self, parent_layout, label_text, attribute_name, columns):
        """Adds a QTableWidget with Add/Remove buttons to a layout."""
        table_widget = QTableWidget()
        table_widget.setColumnCount(len(columns))
        table_widget.setHorizontalHeaderLabels(columns)
        table_widget.setAlternatingRowColors(True)
        setattr(self, attribute_name, table_widget) # Store table widget

        add_button = QPushButton("+ Row")
        remove_button = QPushButton("- Row")

        button_layout = QHBoxLayout()
        button_layout.addStretch()
        button_layout.addWidget(add_button)
        button_layout.addWidget(remove_button)

        group_box = QGroupBox(label_text)
        group_box_layout = QVBoxLayout(group_box)
        group_box_layout.addWidget(table_widget)
        group_box_layout.addLayout(button_layout)

        parent_layout.addWidget(group_box)

        # Connections
        add_button.clicked.connect(partial(self._editor_add_table_row, table_widget))
        remove_button.clicked.connect(partial(self._editor_remove_table_row, table_widget))
        table_widget.itemChanged.connect(self._mark_editor_unsaved) # Mark unsaved on item edit
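Both `_setup_*_with_controls` helpers above bind their buttons through `functools.partial`, so one handler method serves every list and table. Stripped of Qt, the binding works like this:

```python
# functools.partial pre-binds the target container, letting one shared
# handler serve many "buttons" (here plain callables instead of Qt signals).
from functools import partial

def add_item(target_list, text):
    target_list.append(text)

gloss_keywords, extra_patterns = [], []

# Each callback carries its own target, like each "+" button in the editor.
on_add_gloss = partial(add_item, gloss_keywords)
on_add_extra = partial(add_item, extra_patterns)

on_add_gloss("gloss")
on_add_extra("*.txt")
print(gloss_keywords, extra_patterns)  # → ['gloss'] ['*.txt']
```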
    # --- Preset Population and Handling ---
    def populate_presets(self):
        """Scans presets dir and populates the editor list."""
        log.debug("Populating preset list in PresetEditorWidget...")
        current_list_item = self.editor_preset_list.currentItem()
        current_list_selection_text = current_list_item.text() if current_list_item else None

        self.editor_preset_list.clear()
        log.debug("Preset list cleared.")

        # Add the "Select a Preset" placeholder item
        placeholder_item = QListWidgetItem("--- Select a Preset ---")
        placeholder_item.setFlags(placeholder_item.flags() & ~Qt.ItemFlag.ItemIsSelectable & ~Qt.ItemFlag.ItemIsEditable)
        placeholder_item.setData(Qt.ItemDataRole.UserRole, "__PLACEHOLDER__")
        self.editor_preset_list.addItem(placeholder_item)
        log.debug("Added '--- Select a Preset ---' placeholder item.")

        # Add LLM Option
        llm_item = QListWidgetItem("- LLM Interpretation -")
        llm_item.setData(Qt.ItemDataRole.UserRole, "__LLM__") # Special identifier
        self.editor_preset_list.addItem(llm_item)
        log.debug("Added '- LLM Interpretation -' item.")

        if not PRESETS_DIR.is_dir():
            msg = f"Error: Presets directory not found at {PRESETS_DIR}"
            log.error(msg)
            # Consider emitting a status signal to MainWindow?
            return

        presets = sorted([f for f in PRESETS_DIR.glob("*.json") if f.is_file() and not f.name.startswith('_')])

        if not presets:
            msg = "Warning: No presets found in presets directory."
            log.warning(msg)
        else:
            for preset_path in presets:
                item = QListWidgetItem(preset_path.stem)
                item.setData(Qt.ItemDataRole.UserRole, preset_path) # Store full path
                self.editor_preset_list.addItem(item)
            log.info(f"Loaded {len(presets)} presets into editor list.")

        # Select the "Select a Preset" item by default
        log.debug("Preset list populated. Selecting '--- Select a Preset ---' item.")
        self.editor_preset_list.setCurrentItem(placeholder_item) # Select the placeholder item
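`populate_presets` discovers presets with a `glob` that skips underscore-prefixed files such as `_template.json`. The same filter, demonstrated against a throwaway directory:

```python
# Reproduces the populate_presets discovery filter with pathlib + tempfile.
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    presets_dir = Path(tmp)
    for name in ("Quixel.json", "_template.json", "Poliigon.json", "notes.txt"):
        (presets_dir / name).write_text("{}")

    # Only *.json files whose names do not start with "_" count as presets.
    presets = sorted(
        f for f in presets_dir.glob("*.json")
        if f.is_file() and not f.name.startswith("_")
    )
    names = [p.stem for p in presets]
    print(names)  # → ['Poliigon', 'Quixel']
```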
    # --- Preset Editor Methods ---

    def _editor_add_list_item(self, list_widget: QListWidget):
        """Adds an editable item to the specified list widget in the editor."""
        text, ok = QInputDialog.getText(self, f"Add Item", "Enter value:")
        if ok and text:
            item = QListWidgetItem(text)
            list_widget.addItem(item)
            self._mark_editor_unsaved()

    def _editor_remove_list_item(self, list_widget: QListWidget):
        """Removes the selected item from the specified list widget in the editor."""
        selected_items = list_widget.selectedItems()
        if not selected_items: return
        for item in selected_items: list_widget.takeItem(list_widget.row(item))
        self._mark_editor_unsaved()

    def _editor_add_table_row(self, table_widget: QTableWidget):
        """Adds an empty row to the specified table widget in the editor."""
        row_count = table_widget.rowCount()
        table_widget.insertRow(row_count)
        for col in range(table_widget.columnCount()): table_widget.setItem(row_count, col, QTableWidgetItem(""))
        self._mark_editor_unsaved()

    def _editor_remove_table_row(self, table_widget: QTableWidget):
        """Removes the selected row(s) from the specified table widget in the editor."""
        selected_rows = sorted(list(set(index.row() for index in table_widget.selectedIndexes())), reverse=True)
        if not selected_rows:
            if table_widget.rowCount() > 0: selected_rows = [table_widget.rowCount() - 1]
            else: return
        for row in selected_rows: table_widget.removeRow(row)
        self._mark_editor_unsaved()

    def _mark_editor_unsaved(self):
        """Marks changes in the editor panel as unsaved."""
        if self._is_loading_editor: return
        self.editor_unsaved_changes = True
        self.editor_save_button.setEnabled(True)
        # Update window title (handled by MainWindow) - maybe emit signal?
        # preset_name = Path(self.current_editing_preset_path).name if self.current_editing_preset_path else 'New Preset'
        # self.window().setWindowTitle(f"Asset Processor Tool - {preset_name}*") # Access parent window

    def _connect_editor_change_signals(self):
        """Connect signals from all editor widgets to mark_editor_unsaved."""
        self.editor_preset_name.textChanged.connect(self._mark_editor_unsaved)
        self.editor_supplier_name.textChanged.connect(self._mark_editor_unsaved)
        self.editor_notes.textChanged.connect(self._mark_editor_unsaved)
        self.editor_separator.textChanged.connect(self._mark_editor_unsaved)
        self.editor_spin_base_name_idx.valueChanged.connect(self._mark_editor_unsaved)
        self.editor_spin_map_type_idx.valueChanged.connect(self._mark_editor_unsaved)
        # List/Table widgets are connected via helper functions

    def check_unsaved_changes(self) -> bool:
        """
        Checks for unsaved changes in the editor and prompts the user.
        Returns True if the calling action should be cancelled.
        (Called by MainWindow's closeEvent or before loading a new preset).
        """
        if not self.editor_unsaved_changes: return False # No unsaved changes, proceed
        reply = QMessageBox.question(self, "Unsaved Preset Changes", # Use self as parent
                                     "You have unsaved changes in the preset editor. Discard them?",
                                     QMessageBox.StandardButton.Save | QMessageBox.StandardButton.Discard | QMessageBox.StandardButton.Cancel,
                                     QMessageBox.StandardButton.Cancel)
        if reply == QMessageBox.StandardButton.Save:
            save_successful = self._save_current_preset()
            return not save_successful # Return True (cancel) if save fails
        elif reply == QMessageBox.StandardButton.Discard:
            return False # Discarded, proceed
        else: # Cancelled
            return True # Cancel the original action

    def _set_editor_enabled(self, enabled: bool):
        """Enables or disables all editor widgets."""
        self.editor_tab_widget.setEnabled(enabled)
        self.editor_save_button.setEnabled(enabled and self.editor_unsaved_changes)
        self.editor_save_as_button.setEnabled(enabled) # Save As is always possible if editor is enabled

    def _clear_editor(self):
        """Clears the editor fields and resets state."""
        self._is_loading_editor = True
        try:
            self.editor_preset_name.clear()
            self.editor_supplier_name.clear()
            self.editor_notes.clear()
            self.editor_separator.clear()
            self.editor_spin_base_name_idx.setValue(0)
            self.editor_spin_map_type_idx.setValue(1)
            self.editor_list_gloss_keywords.clear()
            self.editor_table_bit_depth_variants.setRowCount(0)
            self.editor_list_extra_patterns.clear()
            self.editor_table_map_type_mapping.setRowCount(0)
            self.editor_list_model_patterns.clear()
            self.editor_list_decal_keywords.clear()
            self.editor_table_archetype_rules.setRowCount(0)
            self.current_editing_preset_path = None
            self.editor_unsaved_changes = False
            self.editor_save_button.setEnabled(False)
            # self.window().setWindowTitle("Asset Processor Tool") # Reset window title (handled by MainWindow)
            self._set_editor_enabled(False)
        finally:
            self._is_loading_editor = False

    def _populate_editor_from_data(self, preset_data: dict):
        """Helper method to populate editor UI widgets from a preset data dictionary."""
        self._is_loading_editor = True
        try:
            self.editor_preset_name.setText(preset_data.get("preset_data", "")) if False else self.editor_preset_name.setText(preset_data.get("preset_name", ""))
|
||||||
|
self.editor_supplier_name.setText(preset_data.get("supplier_name", ""))
|
||||||
|
self.editor_notes.setText(preset_data.get("notes", ""))
|
||||||
|
naming_data = preset_data.get("source_naming", {})
|
||||||
|
self.editor_separator.setText(naming_data.get("separator", "_"))
|
||||||
|
indices = naming_data.get("part_indices", {})
|
||||||
|
self.editor_spin_base_name_idx.setValue(indices.get("base_name", 0))
|
||||||
|
self.editor_spin_map_type_idx.setValue(indices.get("map_type", 1))
|
||||||
|
self.editor_list_gloss_keywords.clear()
|
||||||
|
self.editor_list_gloss_keywords.addItems(naming_data.get("glossiness_keywords", []))
|
||||||
|
self.editor_table_bit_depth_variants.setRowCount(0)
|
||||||
|
bit_depth_vars = naming_data.get("bit_depth_variants", {})
|
||||||
|
for i, (map_type, pattern) in enumerate(bit_depth_vars.items()):
|
||||||
|
self.editor_table_bit_depth_variants.insertRow(i)
|
||||||
|
self.editor_table_bit_depth_variants.setItem(i, 0, QTableWidgetItem(map_type))
|
||||||
|
self.editor_table_bit_depth_variants.setItem(i, 1, QTableWidgetItem(pattern))
|
||||||
|
self.editor_list_extra_patterns.clear()
|
||||||
|
self.editor_list_extra_patterns.addItems(preset_data.get("move_to_extra_patterns", []))
|
||||||
|
self.editor_table_map_type_mapping.setRowCount(0)
|
||||||
|
map_mappings = preset_data.get("map_type_mapping", [])
|
||||||
|
for i, mapping_dict in enumerate(map_mappings):
|
||||||
|
if isinstance(mapping_dict, dict) and "target_type" in mapping_dict and "keywords" in mapping_dict:
|
||||||
|
std_type = mapping_dict["target_type"]
|
||||||
|
keywords = mapping_dict["keywords"]
|
||||||
|
self.editor_table_map_type_mapping.insertRow(i)
|
||||||
|
self.editor_table_map_type_mapping.setItem(i, 0, QTableWidgetItem(std_type))
|
||||||
|
keywords_str = [str(k) for k in keywords if isinstance(k, str)]
|
||||||
|
self.editor_table_map_type_mapping.setItem(i, 1, QTableWidgetItem(", ".join(keywords_str)))
|
||||||
|
else:
|
||||||
|
log.warning(f"Skipping invalid map_type_mapping item during editor population: {mapping_dict}")
|
||||||
|
category_rules = preset_data.get("asset_category_rules", {})
|
||||||
|
self.editor_list_model_patterns.clear()
|
||||||
|
self.editor_list_model_patterns.addItems(category_rules.get("model_patterns", []))
|
||||||
|
self.editor_list_decal_keywords.clear()
|
||||||
|
self.editor_list_decal_keywords.addItems(category_rules.get("decal_keywords", []))
|
||||||
|
# Archetype rules population (assuming table exists)
|
||||||
|
self.editor_table_archetype_rules.setRowCount(0)
|
||||||
|
arch_rules_data = preset_data.get("archetype_rules", [])
|
||||||
|
for i, rule_entry in enumerate(arch_rules_data):
|
||||||
|
# Handle both list and dict format for backward compatibility? Assuming list for now.
|
||||||
|
if isinstance(rule_entry, (list, tuple)) and len(rule_entry) == 2:
|
||||||
|
name, conditions = rule_entry
|
||||||
|
if isinstance(conditions, dict):
|
||||||
|
match_any = conditions.get("match_any", [])
|
||||||
|
match_all = conditions.get("match_all", [])
|
||||||
|
self.editor_table_archetype_rules.insertRow(i)
|
||||||
|
self.editor_table_archetype_rules.setItem(i, 0, QTableWidgetItem(str(name)))
|
||||||
|
self.editor_table_archetype_rules.setItem(i, 1, QTableWidgetItem(", ".join(map(str, match_any))))
|
||||||
|
self.editor_table_archetype_rules.setItem(i, 2, QTableWidgetItem(", ".join(map(str, match_all))))
|
||||||
|
else:
|
||||||
|
log.warning(f"Skipping invalid archetype rule condition format: {conditions}")
|
||||||
|
else:
|
||||||
|
log.warning(f"Skipping invalid archetype rule format: {rule_entry}")
|
||||||
|
|
||||||
|
finally:
|
||||||
|
self._is_loading_editor = False
|
||||||
|
|
||||||
|
def _load_preset_for_editing(self, file_path: Path):
|
||||||
|
"""Loads the content of the selected preset file into the editor widgets."""
|
||||||
|
if not file_path or not file_path.is_file():
|
||||||
|
self._clear_editor()
|
||||||
|
return
|
||||||
|
log.info(f"Loading preset into editor: {file_path.name}")
|
||||||
|
try:
|
||||||
|
with open(file_path, 'r', encoding='utf-8') as f: preset_data = json.load(f)
|
||||||
|
self._populate_editor_from_data(preset_data)
|
||||||
|
self._set_editor_enabled(True)
|
||||||
|
self.current_editing_preset_path = file_path
|
||||||
|
self.editor_unsaved_changes = False
|
||||||
|
self.editor_save_button.setEnabled(False)
|
||||||
|
# self.window().setWindowTitle(f"Asset Processor Tool - {file_path.name}") # Handled by MainWindow
|
||||||
|
log.info(f"Preset '{file_path.name}' loaded into editor.")
|
||||||
|
except json.JSONDecodeError as json_err:
|
||||||
|
log.error(f"Invalid JSON in {file_path.name}: {json_err}")
|
||||||
|
QMessageBox.warning(self, "Load Error", f"Failed to load preset '{file_path.name}'.\nInvalid JSON structure:\n{json_err}")
|
||||||
|
self._clear_editor()
|
||||||
|
except Exception as e:
|
||||||
|
log.exception(f"Error loading preset file {file_path}: {e}")
|
||||||
|
QMessageBox.critical(self, "Error", f"Could not load preset file:\n{file_path}\n\nError: {e}")
|
||||||
|
self._clear_editor()
|
||||||
|
|
||||||
|
@Slot(QListWidgetItem, QListWidgetItem)
|
||||||
|
def _load_selected_preset_for_editing(self, current_item: QListWidgetItem, previous_item: QListWidgetItem):
|
||||||
|
"""Loads the preset currently selected in the editor list and emits selection change signal."""
|
||||||
|
log.debug(f"PresetEditor: currentItemChanged signal triggered. current: {current_item.text() if current_item else 'None'}")
|
||||||
|
|
||||||
|
mode = "placeholder"
|
||||||
|
preset_name = None
|
||||||
|
|
||||||
|
# Check for unsaved changes before proceeding
|
||||||
|
if self.check_unsaved_changes():
|
||||||
|
# If user cancels, revert selection
|
||||||
|
if previous_item:
|
||||||
|
log.debug("Unsaved changes check cancelled. Reverting selection.")
|
||||||
|
self.editor_preset_list.blockSignals(True)
|
||||||
|
self.editor_preset_list.setCurrentItem(previous_item)
|
||||||
|
self.editor_preset_list.blockSignals(False)
|
||||||
|
return # Stop processing
|
||||||
|
|
||||||
|
# Determine mode and preset name based on selection
|
||||||
|
if current_item:
|
||||||
|
item_data = current_item.data(Qt.ItemDataRole.UserRole)
|
||||||
|
if item_data == "__PLACEHOLDER__":
|
||||||
|
log.debug("Placeholder item selected.")
|
||||||
|
self._clear_editor()
|
||||||
|
self._set_editor_enabled(False)
|
||||||
|
mode = "placeholder"
|
||||||
|
self._last_valid_preset_name = None # Clear last valid name
|
||||||
|
elif item_data == "__LLM__":
|
||||||
|
log.debug("LLM Interpretation item selected.")
|
||||||
|
self._clear_editor()
|
||||||
|
self._set_editor_enabled(False)
|
||||||
|
mode = "llm"
|
||||||
|
# Keep _last_valid_preset_name as it was
|
||||||
|
elif isinstance(item_data, Path):
|
||||||
|
log.debug(f"Loading preset for editing: {current_item.text()}")
|
||||||
|
preset_path = item_data
|
||||||
|
self._load_preset_for_editing(preset_path)
|
||||||
|
self._last_valid_preset_name = preset_path.stem # Store the name
|
||||||
|
mode = "preset"
|
||||||
|
preset_name = self._last_valid_preset_name
|
||||||
|
else:
|
||||||
|
log.error(f"Invalid data type for preset path: {type(item_data)}. Clearing editor.")
|
||||||
|
self._clear_editor()
|
||||||
|
self._set_editor_enabled(False)
|
||||||
|
mode = "placeholder" # Treat as placeholder on error
|
||||||
|
self._last_valid_preset_name = None
|
||||||
|
else:
|
||||||
|
log.debug("No preset selected. Clearing editor.")
|
||||||
|
self._clear_editor()
|
||||||
|
self._set_editor_enabled(False)
|
||||||
|
mode = "placeholder"
|
||||||
|
self._last_valid_preset_name = None
|
||||||
|
|
||||||
|
# Emit the signal regardless of what was selected
|
||||||
|
log.debug(f"Emitting preset_selection_changed_signal: mode='{mode}', preset_name='{preset_name}'")
|
||||||
|
self.preset_selection_changed_signal.emit(mode, preset_name)
|
||||||
|
|
||||||
|
def _gather_editor_data(self) -> dict:
|
||||||
|
"""Gathers data from all editor UI widgets and returns a dictionary."""
|
||||||
|
preset_data = {}
|
||||||
|
preset_data["preset_name"] = self.editor_preset_name.text().strip()
|
||||||
|
preset_data["supplier_name"] = self.editor_supplier_name.text().strip()
|
||||||
|
preset_data["notes"] = self.editor_notes.toPlainText().strip()
|
||||||
|
naming_data = {}
|
||||||
|
naming_data["separator"] = self.editor_separator.text()
|
||||||
|
naming_data["part_indices"] = { "base_name": self.editor_spin_base_name_idx.value(), "map_type": self.editor_spin_map_type_idx.value() }
|
||||||
|
naming_data["glossiness_keywords"] = [self.editor_list_gloss_keywords.item(i).text() for i in range(self.editor_list_gloss_keywords.count())]
|
||||||
|
naming_data["bit_depth_variants"] = {self.editor_table_bit_depth_variants.item(r, 0).text(): self.editor_table_bit_depth_variants.item(r, 1).text()
|
||||||
|
for r in range(self.editor_table_bit_depth_variants.rowCount()) if self.editor_table_bit_depth_variants.item(r, 0) and self.editor_table_bit_depth_variants.item(r, 1)}
|
||||||
|
preset_data["source_naming"] = naming_data
|
||||||
|
preset_data["move_to_extra_patterns"] = [self.editor_list_extra_patterns.item(i).text() for i in range(self.editor_list_extra_patterns.count())]
|
||||||
|
map_mappings = []
|
||||||
|
for r in range(self.editor_table_map_type_mapping.rowCount()):
|
||||||
|
type_item = self.editor_table_map_type_mapping.item(r, 0)
|
||||||
|
keywords_item = self.editor_table_map_type_mapping.item(r, 1)
|
||||||
|
if type_item and type_item.text() and keywords_item and keywords_item.text():
|
||||||
|
target_type = type_item.text().strip()
|
||||||
|
keywords = [k.strip() for k in keywords_item.text().split(',') if k.strip()]
|
||||||
|
if target_type and keywords:
|
||||||
|
map_mappings.append({"target_type": target_type, "keywords": keywords})
|
||||||
|
else: log.warning(f"Skipping row {r} in map type mapping table due to empty target type or keywords.")
|
||||||
|
else: log.warning(f"Skipping row {r} in map type mapping table due to missing items.")
|
||||||
|
preset_data["map_type_mapping"] = map_mappings
|
||||||
|
category_rules = {}
|
||||||
|
category_rules["model_patterns"] = [self.editor_list_model_patterns.item(i).text() for i in range(self.editor_list_model_patterns.count())]
|
||||||
|
category_rules["decal_keywords"] = [self.editor_list_decal_keywords.item(i).text() for i in range(self.editor_list_decal_keywords.count())]
|
||||||
|
preset_data["asset_category_rules"] = category_rules
|
||||||
|
arch_rules = []
|
||||||
|
for r in range(self.editor_table_archetype_rules.rowCount()):
|
||||||
|
name_item = self.editor_table_archetype_rules.item(r, 0)
|
||||||
|
any_item = self.editor_table_archetype_rules.item(r, 1)
|
||||||
|
all_item = self.editor_table_archetype_rules.item(r, 2)
|
||||||
|
if name_item and name_item.text() and any_item and all_item: # Check name has text
|
||||||
|
match_any = [k.strip() for k in any_item.text().split(',') if k.strip()]
|
||||||
|
match_all = [k.strip() for k in all_item.text().split(',') if k.strip()]
|
||||||
|
# Only add if name is present and at least one condition list is non-empty? Or allow empty conditions?
|
||||||
|
# Let's allow empty conditions for now.
|
||||||
|
arch_rules.append([name_item.text().strip(), {"match_any": match_any, "match_all": match_all}])
|
||||||
|
else:
|
||||||
|
log.warning(f"Skipping row {r} in archetype rules table due to missing items or empty name.")
|
||||||
|
preset_data["archetype_rules"] = arch_rules
|
||||||
|
return preset_data
|
||||||
|
|
||||||
|
def _save_current_preset(self) -> bool:
|
||||||
|
"""Saves the current editor content to the currently loaded file path."""
|
||||||
|
if not self.current_editing_preset_path: return self._save_preset_as()
|
||||||
|
log.info(f"Saving preset: {self.current_editing_preset_path.name}")
|
||||||
|
try:
|
||||||
|
preset_data = self._gather_editor_data()
|
||||||
|
if not preset_data.get("preset_name"): QMessageBox.warning(self, "Save Error", "Preset Name cannot be empty."); return False
|
||||||
|
if not preset_data.get("supplier_name"): QMessageBox.warning(self, "Save Error", "Supplier Name cannot be empty."); return False
|
||||||
|
content_to_save = json.dumps(preset_data, indent=4, ensure_ascii=False)
|
||||||
|
with open(self.current_editing_preset_path, 'w', encoding='utf-8') as f: f.write(content_to_save)
|
||||||
|
self.editor_unsaved_changes = False
|
||||||
|
self.editor_save_button.setEnabled(False)
|
||||||
|
# self.window().setWindowTitle(f"Asset Processor Tool - {self.current_editing_preset_path.name}") # Handled by MainWindow
|
||||||
|
self.presets_changed_signal.emit() # Signal that presets changed
|
||||||
|
log.info("Preset saved successfully.")
|
||||||
|
# Refresh list within the editor
|
||||||
|
self.populate_presets()
|
||||||
|
# Reselect the saved item
|
||||||
|
items = self.editor_preset_list.findItems(self.current_editing_preset_path.stem, Qt.MatchFlag.MatchExactly)
|
||||||
|
if items: self.editor_preset_list.setCurrentItem(items[0])
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
log.exception(f"Error saving preset file {self.current_editing_preset_path}: {e}")
|
||||||
|
QMessageBox.critical(self, "Save Error", f"Could not save preset file:\n{self.current_editing_preset_path}\n\nError: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
def _save_preset_as(self) -> bool:
|
||||||
|
"""Saves the current editor content to a new file chosen by the user."""
|
||||||
|
log.debug("Save As action triggered.")
|
||||||
|
try:
|
||||||
|
preset_data = self._gather_editor_data()
|
||||||
|
new_preset_name = preset_data.get("preset_name")
|
||||||
|
if not new_preset_name: QMessageBox.warning(self, "Save As Error", "Preset Name cannot be empty."); return False
|
||||||
|
if not preset_data.get("supplier_name"): QMessageBox.warning(self, "Save As Error", "Supplier Name cannot be empty."); return False
|
||||||
|
content_to_save = json.dumps(preset_data, indent=4, ensure_ascii=False)
|
||||||
|
suggested_name = f"{new_preset_name}.json"
|
||||||
|
default_path = PRESETS_DIR / suggested_name
|
||||||
|
file_path_str, _ = QFileDialog.getSaveFileName(self, "Save Preset As", str(default_path), "JSON Files (*.json);;All Files (*)")
|
||||||
|
if not file_path_str: log.debug("Save As cancelled by user."); return False
|
||||||
|
save_path = Path(file_path_str)
|
||||||
|
if save_path.suffix.lower() != ".json": save_path = save_path.with_suffix(".json")
|
||||||
|
if save_path.exists() and save_path != self.current_editing_preset_path:
|
||||||
|
reply = QMessageBox.warning(self, "Confirm Overwrite", f"Preset '{save_path.name}' already exists. Overwrite?", QMessageBox.StandardButton.Yes | QMessageBox.StandardButton.No, QMessageBox.StandardButton.No)
|
||||||
|
if reply == QMessageBox.StandardButton.No: log.debug("Save As overwrite cancelled."); return False
|
||||||
|
log.info(f"Saving preset as: {save_path.name}")
|
||||||
|
with open(save_path, 'w', encoding='utf-8') as f: f.write(content_to_save)
|
||||||
|
self.current_editing_preset_path = save_path # Update current path
|
||||||
|
self.editor_unsaved_changes = False
|
||||||
|
self.editor_save_button.setEnabled(False)
|
||||||
|
# self.window().setWindowTitle(f"Asset Processor Tool - {save_path.name}") # Handled by MainWindow
|
||||||
|
self.presets_changed_signal.emit() # Signal change
|
||||||
|
log.info("Preset saved successfully (Save As).")
|
||||||
|
# Refresh list and select the new item
|
||||||
|
self.populate_presets()
|
||||||
|
items = self.editor_preset_list.findItems(save_path.stem, Qt.MatchFlag.MatchExactly)
|
||||||
|
if items: self.editor_preset_list.setCurrentItem(items[0])
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
log.exception(f"Error saving preset file (Save As): {e}")
|
||||||
|
QMessageBox.critical(self, "Save Error", f"Could not save preset file.\n\nError: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
def _new_preset(self):
|
||||||
|
"""Clears the editor and loads data from _template.json."""
|
||||||
|
log.debug("New Preset action triggered.")
|
||||||
|
if self.check_unsaved_changes(): return # Check unsaved changes first
|
||||||
|
self._clear_editor()
|
||||||
|
if TEMPLATE_PATH.is_file():
|
||||||
|
log.info("Loading new preset from _template.json")
|
||||||
|
try:
|
||||||
|
with open(TEMPLATE_PATH, 'r', encoding='utf-8') as f: template_data = json.load(f)
|
||||||
|
self._populate_editor_from_data(template_data)
|
||||||
|
# Override specific fields for a new preset
|
||||||
|
self.editor_preset_name.setText("NewPreset")
|
||||||
|
# self.window().setWindowTitle("Asset Processor Tool - New Preset*") # Handled by MainWindow
|
||||||
|
except Exception as e:
|
||||||
|
log.exception(f"Error loading template preset file {TEMPLATE_PATH}: {e}")
|
||||||
|
QMessageBox.critical(self, "Error", f"Could not load template preset file:\n{TEMPLATE_PATH}\n\nError: {e}")
|
||||||
|
self._clear_editor()
|
||||||
|
# self.window().setWindowTitle("Asset Processor Tool - New Preset*") # Handled by MainWindow
|
||||||
|
self.editor_supplier_name.setText("MySupplier") # Set a default supplier name
|
||||||
|
else:
|
||||||
|
log.warning("Presets/_template.json not found. Creating empty preset.")
|
||||||
|
# self.window().setWindowTitle("Asset Processor Tool - New Preset*") # Handled by MainWindow
|
||||||
|
self.editor_preset_name.setText("NewPreset")
|
||||||
|
self.editor_supplier_name.setText("MySupplier") # Set a default supplier name
|
||||||
|
self._set_editor_enabled(True)
|
||||||
|
self.editor_unsaved_changes = True
|
||||||
|
self.editor_save_button.setEnabled(True)
|
||||||
|
# Select the placeholder item to avoid auto-loading the "NewPreset"
|
||||||
|
placeholder_item = self.editor_preset_list.findItems("--- Select a Preset ---", Qt.MatchFlag.MatchExactly)
|
||||||
|
if placeholder_item:
|
||||||
|
self.editor_preset_list.setCurrentItem(placeholder_item[0])
|
||||||
|
# Emit selection change for the new state (effectively placeholder)
|
||||||
|
self.preset_selection_changed_signal.emit("placeholder", None)
|
||||||
|
|
||||||
|
|
||||||
|
def _delete_selected_preset(self):
|
||||||
|
"""Deletes the currently selected preset file from the editor list after confirmation."""
|
||||||
|
current_item = self.editor_preset_list.currentItem()
|
||||||
|
if not current_item: QMessageBox.information(self, "Delete Preset", "Please select a preset from the list to delete."); return
|
||||||
|
|
||||||
|
item_data = current_item.data(Qt.ItemDataRole.UserRole)
|
||||||
|
# Ensure it's a real preset path before attempting delete
|
||||||
|
if not isinstance(item_data, Path):
|
||||||
|
QMessageBox.information(self, "Delete Preset", "Cannot delete placeholder or LLM option.")
|
||||||
|
return
|
||||||
|
|
||||||
|
preset_path = item_data
|
||||||
|
preset_name = preset_path.stem
|
||||||
|
reply = QMessageBox.warning(self, "Confirm Delete", f"Are you sure you want to permanently delete the preset '{preset_name}'?", QMessageBox.StandardButton.Yes | QMessageBox.StandardButton.No, QMessageBox.StandardButton.No)
|
||||||
|
if reply == QMessageBox.StandardButton.Yes:
|
||||||
|
log.info(f"Deleting preset: {preset_path.name}")
|
||||||
|
try:
|
||||||
|
preset_path.unlink()
|
||||||
|
log.info("Preset deleted successfully.")
|
||||||
|
if self.current_editing_preset_path == preset_path: self._clear_editor()
|
||||||
|
self.presets_changed_signal.emit() # Signal change
|
||||||
|
# Refresh list
|
||||||
|
self.populate_presets()
|
||||||
|
except Exception as e:
|
||||||
|
log.exception(f"Error deleting preset file {preset_path}: {e}")
|
||||||
|
QMessageBox.critical(self, "Delete Error", f"Could not delete preset file:\n{preset_path}\n\nError: {e}")
|
||||||
|
|
||||||
|
# --- Public Access Methods for MainWindow ---
|
||||||
|
|
||||||
|
def get_selected_preset_mode(self) -> tuple[str, str | None]:
|
||||||
|
"""
|
||||||
|
Returns the current selection mode and preset name (if applicable).
|
||||||
|
Returns: tuple(mode_string, preset_name_string_or_None)
|
||||||
|
mode_string can be "preset", "llm", "placeholder"
|
||||||
|
"""
|
||||||
|
current_item = self.editor_preset_list.currentItem()
|
||||||
|
if current_item:
|
||||||
|
item_data = current_item.data(Qt.ItemDataRole.UserRole)
|
||||||
|
if item_data == "__PLACEHOLDER__":
|
||||||
|
return "placeholder", None
|
||||||
|
elif item_data == "__LLM__":
|
||||||
|
return "llm", None
|
||||||
|
elif isinstance(item_data, Path):
|
||||||
|
return "preset", item_data.stem
|
||||||
|
return "placeholder", None # Default or if no item selected
|
||||||
|
|
||||||
|
def get_last_valid_preset_name(self) -> str | None:
|
||||||
|
"""
|
||||||
|
Returns the name (stem) of the last valid preset that was loaded.
|
||||||
|
Used by delegates to populate dropdowns based on the original context.
|
||||||
|
"""
|
||||||
|
return self._last_valid_preset_name
|
||||||
|
|
||||||
|
# --- Slots for MainWindow Interaction ---
|
||||||
@ -1,372 +0,0 @@
|
|||||||
# gui/processing_handler.py
|
|
||||||
import logging
|
|
||||||
from pathlib import Path
|
|
||||||
from concurrent.futures import ProcessPoolExecutor, as_completed
|
|
||||||
import time # For potential delays if needed
|
|
||||||
|
|
||||||
import subprocess # <<< ADDED IMPORT
|
|
||||||
import shutil # <<< ADDED IMPORT
|
|
||||||
from typing import Optional # <<< ADDED IMPORT
|
|
||||||
from rule_structure import SourceRule # Import SourceRule
|
|
||||||
|
|
||||||
# --- PySide6 Imports ---
|
|
||||||
# Inherit from QObject to support signals/slots for thread communication
|
|
||||||
from PySide6.QtCore import QObject, Signal
|
|
||||||
|
|
||||||
# --- Backend Imports ---
|
|
||||||
# Need to import the worker function and potentially config/processor if needed directly
|
|
||||||
# Adjust path to ensure modules can be found relative to this file's location
|
|
||||||
import sys
|
|
||||||
script_dir = Path(__file__).parent
|
|
||||||
project_root = script_dir.parent
|
|
||||||
if str(project_root) not in sys.path:
|
|
||||||
sys.path.insert(0, str(project_root))
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Import the worker function from main.py
|
|
||||||
from main import process_single_asset_wrapper
|
|
||||||
# Import exceptions if needed for type hinting or specific handling
|
|
||||||
from configuration import ConfigurationError, load_base_config # Import ConfigurationError and load_base_config
|
|
||||||
from asset_processor import AssetProcessingError
|
|
||||||
# Removed: import config as core_config # <<< ADDED IMPORT
|
|
||||||
BACKEND_AVAILABLE = True
|
|
||||||
except ImportError as e:
|
|
||||||
print(f"ERROR (ProcessingHandler): Failed to import backend modules/worker: {e}")
|
|
||||||
# Define placeholders if imports fail, so the GUI doesn't crash immediately
|
|
||||||
process_single_asset_wrapper = None
|
|
||||||
ConfigurationError = Exception
|
|
||||||
load_base_config = None # Placeholder
|
|
||||||
AssetProcessingError = Exception
|
|
||||||
BACKEND_AVAILABLE = False
|
|
||||||
|
|
||||||
log = logging.getLogger(__name__)
|
|
||||||
# Basic config if logger hasn't been set up elsewhere
|
|
||||||
if not log.hasHandlers():
|
|
||||||
logging.basicConfig(level=logging.INFO, format='%(levelname)s (Handler): %(message)s')
|
|
||||||
|
|
||||||
|
|
||||||
class ProcessingHandler(QObject):
|
|
||||||
"""
|
|
||||||
Handles the execution of the asset processing pipeline in a way that
|
|
||||||
can be run in a separate thread and communicate progress via signals.
|
|
||||||
"""
|
|
||||||
# --- Signals ---
|
|
||||||
# Emitted for overall progress bar update
|
|
||||||
progress_updated = Signal(int, int) # current_count, total_count
|
|
||||||
# Emitted for updating status of individual files in the list
|
|
||||||
file_status_updated = Signal(str, str, str) # input_path_str, status ("processing", "processed", "skipped", "failed"), message
|
|
||||||
# Emitted when the entire batch processing is finished
|
|
||||||
processing_finished = Signal(int, int, int) # processed_count, skipped_count, failed_count
|
|
||||||
# Emitted for general status messages to the status bar
|
|
||||||
status_message = Signal(str, int) # message, timeout_ms
|
|
||||||
|
|
||||||
def __init__(self, parent=None):
|
|
||||||
super().__init__(parent)
|
|
||||||
self._executor = None
|
|
||||||
self._futures = {} # Store future->input_path mapping
|
|
||||||
self._is_running = False
|
|
||||||
self._cancel_requested = False
|
|
||||||
|
|
||||||
@property
|
|
||||||
def is_running(self):
|
|
||||||
return self._is_running
|
|
||||||
|
|
||||||
# Removed _predict_single_asset method
|
|
||||||
|
|
||||||
@Slot(str, list, str, str, bool, int,
|
|
||||||
bool, str, str, bool, SourceRule) # Explicitly define types for the slot
|
|
||||||
def run_processing(self, input_source_identifier: str, original_input_paths: list[str], preset_name: str, output_dir_str: str, overwrite: bool, num_workers: int,
|
|
||||||
run_blender: bool, nodegroup_blend_path: str, materials_blend_path: str, verbose: bool, rules: SourceRule): # <<< ADDED verbose PARAM
|
|
||||||
"""
|
|
||||||
Starts the asset processing task and optionally runs Blender scripts afterwards.
|
|
||||||
This method should be called when the handler is moved to a separate thread.
|
|
||||||
"""
|
|
||||||
if self._is_running:
|
|
||||||
log.warning("Processing is already running.")
|
|
||||||
self.status_message.emit("Processing already in progress.", 3000)
|
|
||||||
return
|
|
||||||
|
|
||||||
if not BACKEND_AVAILABLE or not process_single_asset_wrapper:
|
|
||||||
log.error("Backend modules or worker function not available. Cannot start processing.")
|
|
||||||
self.status_message.emit("Error: Backend components missing. Cannot process.", 5000)
|
|
||||||
self.processing_finished.emit(0, 0, len(original_input_paths)) # Emit finished with all failed
|
|
||||||
return
|
|
||||||
|
|
||||||
self._is_running = True
|
|
||||||
self._cancel_requested = False
|
|
||||||
self._futures = {} # Reset futures
|
|
||||||
total_files = len(original_input_paths) # Use original_input_paths for total count
|
|
||||||
processed_count = 0
|
|
||||||
skipped_count = 0
|
|
||||||
failed_count = 0
|
|
||||||
completed_count = 0
|
|
||||||
|
|
||||||
log.info(f"Starting processing run: {total_files} assets, Preset='{preset_name}', Workers={num_workers}, Overwrite={overwrite}")
|
|
||||||
self.status_message.emit(f"Starting processing for {total_files} items...", 0) # Persistent message
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Use 'with' statement for ProcessPoolExecutor for cleanup
|
|
||||||
with ProcessPoolExecutor(max_workers=num_workers) as executor:
|
|
||||||
self._executor = executor # Store for potential cancellation
|
|
||||||
|
|
||||||
# Submit tasks
|
|
||||||
for input_path in original_input_paths: # Iterate through the list of input paths
|
|
||||||
if self._cancel_requested: break # Check before submitting more
|
|
||||||
log.debug(f"Submitting task for: {input_path}")
|
|
||||||
# Pass the single SourceRule object to the worker
|
|
||||||
# --- DEBUG LOG: Inspect FileRule overrides before sending to worker ---
|
|
||||||
log.debug(f"ProcessingHandler: Inspecting rules for input '{input_path}' before submitting to worker:")
|
|
||||||
if rules: # Check if rules object exists
|
|
||||||
for asset_rule in rules.assets:
|
|
||||||
log.debug(f" Asset: {asset_rule.asset_name}")
|
|
||||||
for file_rule in asset_rule.files:
|
|
||||||
log.debug(f" File: {Path(file_rule.file_path).name}, ItemType: {file_rule.item_type}, Override: {file_rule.item_type_override}, StandardMap: {getattr(file_rule, 'standard_map_type', 'N/A')}")
|
|
||||||
else:
|
|
||||||
log.debug(" Rules object is None.")
|
|
||||||
# --- END DEBUG LOG ---
|
|
||||||
                future = executor.submit(process_single_asset_wrapper, input_path, preset_name, output_dir_str, overwrite, verbose=verbose, rules=rules)  # Pass verbose flag from GUI and rules
                self._futures[future] = input_path  # Map future back to input path
                # Optionally emit "processing" status here
                self.file_status_updated.emit(input_path, "processing", "")

            if self._cancel_requested:
                log.info("Processing cancelled during task submission.")
                # Count remaining unsubmitted tasks as failed/cancelled
                failed_count = total_files - len(self._futures)

            # Process completed futures
            for future in as_completed(self._futures):
                completed_count += 1
                input_path = self._futures[future]  # Get original path
                asset_name = Path(input_path).name
                status = "failed"  # Default status
                error_message = "Unknown error"

                if self._cancel_requested:
                    # If cancelled after submission, count the task as failed.
                    # Don't call future.result() here; it might raise CancelledError.
                    status = "failed"
                    error_message = "Cancelled"
                    failed_count += 1
                else:
                    try:
                        # Get result tuple: (input_path_str, status_string, error_message_or_None)
                        result_tuple = future.result()
                        _, status, error_message = result_tuple
                        error_message = error_message or ""  # Ensure it's a string

                        # Increment counters based on status
                        if status == "processed":
                            processed_count += 1
                        elif status == "skipped":
                            skipped_count += 1
                        elif status == "failed":
                            failed_count += 1
                        else:
                            log.warning(f"Unknown status '{status}' received for {asset_name}. Counting as failed.")
                            failed_count += 1
                            error_message = f"Unknown status: {status}"

                    except Exception as e:
                        # Catch errors if the future itself fails (e.g., the worker process crashed hard)
                        log.exception(f"Critical worker failure for {asset_name}: {e}")
                        failed_count += 1  # Count crashes as failures
                        status = "failed"
                        error_message = f"Worker process crashed: {e}"

                # Emit progress signals
                self.progress_updated.emit(completed_count, total_files)
                self.file_status_updated.emit(input_path, status, error_message)

                # Check for cancellation again after processing each result
                if self._cancel_requested:
                    log.info("Cancellation detected after processing a result.")
                    # Count remaining unprocessed futures as failed/cancelled
                    remaining_futures = total_files - completed_count
                    failed_count += remaining_futures
                    break  # Exit the as_completed loop

        except Exception as pool_exc:
            log.exception(f"An error occurred with the process pool: {pool_exc}")
            self.status_message.emit(f"Error during processing: {pool_exc}", 5000)
            # Mark all remaining as failed
            failed_count = total_files - processed_count - skipped_count

        finally:
            # --- Blender Script Execution (Optional) ---
            if run_blender and not self._cancel_requested:
                log.info("Asset processing complete. Checking for Blender script execution.")
                self.status_message.emit("Asset processing complete. Starting Blender scripts...", 0)
                blender_exe = self._find_blender_executable()
                if blender_exe:
                    script_dir = Path(__file__).parent.parent / "blenderscripts"  # Go up one level from gui/
                    nodegroup_script_path = script_dir / "create_nodegroups.py"
                    materials_script_path = script_dir / "create_materials.py"
                    asset_output_root = output_dir_str  # Use the same output dir

                    # Run Nodegroup Script
                    if nodegroup_blend_path and Path(nodegroup_blend_path).is_file():
                        if nodegroup_script_path.is_file():
                            log.info("-" * 20 + " Running Nodegroup Script " + "-" * 20)
                            self.status_message.emit(f"Running Blender nodegroup script on {Path(nodegroup_blend_path).name}...", 0)
                            success_ng = self._run_blender_script_subprocess(
                                blender_exe_path=blender_exe,
                                blend_file_path=nodegroup_blend_path,
                                python_script_path=str(nodegroup_script_path),
                                asset_root_dir=asset_output_root
                            )
                            if not success_ng:
                                log.error("Blender nodegroup script execution failed.")
                                self.status_message.emit("Blender nodegroup script failed.", 5000)
                            else:
                                log.info("Blender nodegroup script finished successfully.")
                                self.status_message.emit("Blender nodegroup script finished.", 3000)
                        else:
                            log.error(f"Nodegroup script not found: {nodegroup_script_path}")
                            self.status_message.emit("Error: Nodegroup script not found.", 5000)
                    elif run_blender and nodegroup_blend_path:  # Log if a path was provided but invalid
                        log.warning(f"Nodegroup blend path provided but invalid: {nodegroup_blend_path}")
                        self.status_message.emit("Warning: Invalid Nodegroup .blend path.", 5000)

                    # Run Materials Script (only if the nodegroup script was attempted or not needed)
                    if materials_blend_path and Path(materials_blend_path).is_file():
                        if materials_script_path.is_file():
                            log.info("-" * 20 + " Running Materials Script " + "-" * 20)
                            self.status_message.emit(f"Running Blender materials script on {Path(materials_blend_path).name}...", 0)
                            # Pass the nodegroup blend path as the second argument to the script
                            success_mat = self._run_blender_script_subprocess(
                                blender_exe_path=blender_exe,
                                blend_file_path=materials_blend_path,
                                python_script_path=str(materials_script_path),
                                asset_root_dir=asset_output_root,
                                nodegroup_blend_file_path_arg=nodegroup_blend_path  # Pass the nodegroup path
                            )
                            if not success_mat:
                                log.error("Blender material script execution failed.")
                                self.status_message.emit("Blender material script failed.", 5000)
                            else:
                                log.info("Blender material script finished successfully.")
                                self.status_message.emit("Blender material script finished.", 3000)
                        else:
                            log.error(f"Material script not found: {materials_script_path}")
                            self.status_message.emit("Error: Material script not found.", 5000)
                    elif run_blender and materials_blend_path:  # Log if a path was provided but invalid
                        log.warning(f"Materials blend path provided but invalid: {materials_blend_path}")
                        self.status_message.emit("Warning: Invalid Materials .blend path.", 5000)

                else:
                    log.warning("Blender executable not found. Skipping Blender script execution.")
                    self.status_message.emit("Warning: Blender executable not found. Skipping scripts.", 5000)
            elif self._cancel_requested:
                log.info("Processing was cancelled. Skipping Blender script execution.")
            # --- End Blender Script Execution ---

            final_message = f"Finished. Processed: {processed_count}, Skipped: {skipped_count}, Failed: {failed_count}"
            log.info(final_message)
            self.status_message.emit(final_message, 5000)  # Show final summary
            self.processing_finished.emit(processed_count, skipped_count, failed_count)
            self._is_running = False
            self._executor = None
            self._futures = {}  # Clear futures

    def request_cancel(self):
        """Requests cancellation of the ongoing processing task."""
        if not self._is_running:
            log.warning("Cancel requested but no processing is running.")
            return

        if self._cancel_requested:
            log.warning("Cancellation already requested.")
            return

        log.info("Cancellation requested.")
        self.status_message.emit("Cancellation requested...", 3000)
        self._cancel_requested = True

        # Attempt to shut down the executor - this might cancel pending tasks
        # but won't forcefully stop running ones. `cancel_futures=True` is Python 3.9+.
        if self._executor:
            log.debug("Requesting executor shutdown...")
            # For Python 3.9+: self._executor.shutdown(wait=False, cancel_futures=True)
            # For older Python:
            self._executor.shutdown(wait=False)
            # Manually try cancelling futures that haven't started
            for future in self._futures:
                if not future.running() and not future.done():
                    future.cancel()
            log.debug("Executor shutdown requested.")

        # Note: True cancellation of running ProcessPoolExecutor tasks is complex.
        # This implementation primarily prevents processing further results and
        # attempts to cancel pending/unstarted tasks.

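The fallback above targets pre-3.9 interpreters. On Python 3.9+, `Executor.shutdown(wait=False, cancel_futures=True)` collapses the manual cancel loop into a single call; queued-but-unstarted futures move to the cancelled state, while tasks already running still finish. A standalone sketch of that behavior (using a `ThreadPoolExecutor` for brevity; the counts are timing-dependent):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# One worker, five queued tasks: at most one can be running when we shut down.
executor = ThreadPoolExecutor(max_workers=1)
futures = [executor.submit(time.sleep, 0.2) for _ in range(5)]

# Non-blocking shutdown; pending (unstarted) futures are cancelled in one call.
executor.shutdown(wait=False, cancel_futures=True)

cancelled = [f for f in futures if f.cancelled()]
print(f"cancelled {len(cancelled)} of {len(futures)} queued tasks")
```

Cancelled futures count as done, so a result-collection loop over `as_completed` still terminates; calling `.result()` on them raises `CancelledError`, which is why the handler above avoids it once cancellation is requested.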
    def _find_blender_executable(self) -> Optional[str]:
        """Finds the Blender executable path from config or the system PATH."""
        try:
            # Use load_base_config to get the Blender executable path
            if load_base_config:
                base_config = load_base_config()
                blender_exe_config = base_config.get('BLENDER_EXECUTABLE_PATH', None)
            else:
                blender_exe_config = None
                log.warning("load_base_config not available. Cannot read BLENDER_EXECUTABLE_PATH from config.")

            if blender_exe_config:
                p = Path(blender_exe_config)
                if p.is_file():
                    log.info(f"Using Blender executable from config: {p}")
                    return str(p.resolve())
                else:
                    log.warning(f"Blender path in config not found: '{blender_exe_config}'. Trying PATH.")
            else:
                log.info("BLENDER_EXECUTABLE_PATH not set in config. Trying PATH.")

            blender_exe = shutil.which("blender")
            if blender_exe:
                log.info(f"Found Blender executable in PATH: {blender_exe}")
                return blender_exe
            else:
                log.warning("Could not find 'blender' in system PATH.")
                return None
        except ConfigurationError as e:
            log.error(f"Error reading base configuration for Blender executable path: {e}")
            return None
        except Exception as e:
            log.error(f"Error checking Blender executable path: {e}")
            return None

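The lookup order used by `_find_blender_executable` (an explicit configured path first, then a PATH search) can be isolated into a few lines. The key name `BLENDER_EXECUTABLE_PATH` comes from the code above; the `find_executable` helper itself is a hypothetical sketch:

```python
import shutil
from pathlib import Path

def find_executable(config, name="blender"):
    """Prefer a configured absolute path; fall back to searching PATH."""
    configured = config.get("BLENDER_EXECUTABLE_PATH")
    if configured and Path(configured).is_file():
        return str(Path(configured).resolve())
    # shutil.which returns None when the name is not on PATH
    return shutil.which(name)

print(find_executable({}, name="definitely-not-a-real-binary"))  # → None
```

An invalid configured path falls through to the PATH search rather than failing, matching the "Trying PATH." log messages above.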
    def _run_blender_script_subprocess(self, blender_exe_path: str, blend_file_path: str, python_script_path: str, asset_root_dir: str, nodegroup_blend_file_path_arg: Optional[str] = None) -> bool:
        """Internal helper to run a single Blender script via subprocess."""
        command_base = [
            blender_exe_path,
            "--factory-startup",
            "-b",
            blend_file_path,
            "--log", "*",  # Enable Blender's internal logging
            "--python", python_script_path,
            "--",
            asset_root_dir,
        ]
        # Add the nodegroup blend file path if provided (for the create_materials script)
        if nodegroup_blend_file_path_arg:
            command = command_base + [nodegroup_blend_file_path_arg]
        else:
            command = command_base
        log.debug(f"Executing Blender command: {' '.join(map(str, command))}")  # Ensure all parts are strings for join
        try:
            # Ensure all parts of the command are strings for subprocess
            str_command = [str(part) for part in command]
            result = subprocess.run(str_command, capture_output=True, text=True, check=False, encoding='utf-8')
            log.info(f"Blender script '{Path(python_script_path).name}' finished with exit code: {result.returncode}")
            if result.stdout:
                log.debug(f"Blender stdout:\n{result.stdout.strip()}")
            if result.stderr:
                if result.returncode != 0:
                    log.error(f"Blender stderr:\n{result.stderr.strip()}")
                else:
                    log.warning(f"Blender stderr (RC=0):\n{result.stderr.strip()}")
            return result.returncode == 0
        except FileNotFoundError:
            log.error(f"Blender executable not found at: {blender_exe_path}")
            return False
        except Exception as e:
            log.exception(f"Error running Blender script '{Path(python_script_path).name}': {e}")
            return False

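For reference, the `--` separator in the command built above follows Blender's CLI convention: everything after `--` is ignored by Blender itself and left for the `--python` script to read out of `sys.argv`. A minimal, Blender-free sketch of the receiving side (the argv contents are illustrative):

```python
def script_args(argv):
    """Return the arguments that follow the '--' separator, or [] if absent."""
    try:
        return argv[argv.index("--") + 1:]
    except ValueError:  # no '--' in argv
        return []

# Mirrors the command shape above: asset root, then the optional nodegroups file.
argv = ["blender", "--factory-startup", "-b", "scene.blend",
        "--python", "create_materials.py", "--",
        "/out/assets", "/libs/nodegroups.blend"]
print(script_args(argv))  # → ['/out/assets', '/libs/nodegroups.blend']
```

This is why `create_materials.py` can take the nodegroup blend path as a plain trailing argument: it simply lands after `--` and never collides with Blender's own flags.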
@@ -1,7 +1,7 @@
 # gui/unified_view_model.py
 import logging # Added for debugging
 log = logging.getLogger(__name__) # Added for debugging
-from PySide6.QtCore import QAbstractItemModel, QModelIndex, Qt, Signal # Added Signal
+from PySide6.QtCore import QAbstractItemModel, QModelIndex, Qt, Signal, Slot # Added Signal and Slot
 from PySide6.QtGui import QColor # Added for background role
 from pathlib import Path # Added for file_name extraction
 from rule_structure import SourceRule, AssetRule, FileRule # Removed AssetType, ItemType import
@@ -18,6 +18,10 @@ class UnifiedViewModel(QAbstractItemModel):
     A QAbstractItemModel for displaying and editing the hierarchical structure
     of SourceRule -> AssetRule -> FileRule.
     """
+    # Signal emitted when a FileRule's target asset override changes.
+    # Carries the index of the FileRule and the new target asset path (or None).
+    targetAssetOverrideChanged = Signal(QModelIndex, object)
+
     Columns = [
         "Name", "Target Asset", "Supplier",
         "Asset Type", "Item Type"
@@ -34,9 +38,52 @@ class UnifiedViewModel(QAbstractItemModel):
     def __init__(self, parent=None):
         super().__init__(parent)
         self._source_rules = [] # Now stores a list of SourceRule objects
+        # self._display_mode removed
+        self._asset_type_colors = {}
+        self._file_type_colors = {}
+        self._asset_type_keys = [] # Store asset type keys
+        self._file_type_keys = [] # Store file type keys
+        self._load_definitions() # Load colors and keys
+
+    def _load_definitions(self):
+        """Loads configuration and caches colors and type keys."""
+        try:
+            base_config = load_base_config()
+            asset_type_defs = base_config.get('ASSET_TYPE_DEFINITIONS', {})
+            file_type_defs = base_config.get('FILE_TYPE_DEFINITIONS', {})
+
+            # Cache Asset Type Definitions (Keys and Colors)
+            self._asset_type_keys = sorted(list(asset_type_defs.keys()))
+            for type_name, type_info in asset_type_defs.items():
+                hex_color = type_info.get("color")
+                if hex_color:
+                    try:
+                        self._asset_type_colors[type_name] = QColor(hex_color)
+                    except ValueError:
+                        log.warning(f"Invalid hex color '{hex_color}' for asset type '{type_name}' in config.")
+
+            # Cache File Type Definitions (Keys and Colors)
+            self._file_type_keys = sorted(list(file_type_defs.keys()))
+            for type_name, type_info in file_type_defs.items():
+                hex_color = type_info.get("color")
+                if hex_color:
+                    try:
+                        self._file_type_colors[type_name] = QColor(hex_color)
+                    except ValueError:
+                        log.warning(f"Invalid hex color '{hex_color}' for file type '{type_name}' in config.")
+
+        except Exception as e:
+            log.exception(f"Error loading or caching colors from configuration: {e}")
+            # Ensure caches/lists are empty if loading fails
+            self._asset_type_colors = {}
+            self._file_type_colors = {}
+            self._asset_type_keys = []
+            self._file_type_keys = []
+
     def load_data(self, source_rules_list: list): # Accepts a list
         """Loads or reloads the model with a list of SourceRule objects."""
+        # Consider if the color cache needs refreshing if config can change dynamically
+        # self._load_and_cache_colors() # Uncomment if config can change and needs refresh
         self.beginResetModel()
         self._source_rules = source_rules_list if source_rules_list else [] # Assign the new list
         # Ensure back-references for parent lookup are set on the NEW items
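The hunk above trades per-cell `load_base_config()` calls for a one-time cache built in `_load_definitions`, so `data()` becomes a dictionary lookup. A Qt-free sketch of the same build-once pattern (the dict shape mirrors `ASSET_TYPE_DEFINITIONS`; `build_color_cache` and the sample type names are hypothetical):

```python
def build_color_cache(type_defs):
    """Map type name -> parsed (r, g, b) tuple, skipping entries with invalid hex."""
    cache = {}
    for name, info in type_defs.items():
        hex_color = info.get("color")
        if hex_color and len(hex_color) == 7 and hex_color.startswith("#"):
            try:
                # Parse "#RRGGBB" into three integer channels
                cache[name] = tuple(int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
            except ValueError:
                pass  # invalid hex digits: leave uncached, caller falls back to a default
    return cache

defs = {"Decal": {"color": "#FF8800"}, "Surface": {"color": "nope"}, "Model": {}}
print(build_color_cache(defs))  # → {'Decal': (255, 136, 0)}
```

Malformed or missing colors are simply absent from the cache, which is why the model's lookups use `.get(...)` and treat `None` as "use the default role value".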
@@ -56,26 +103,26 @@ class UnifiedViewModel(QAbstractItemModel):
     def get_all_source_rules(self) -> list:
         """Returns the internal list of SourceRule objects."""
         return self._source_rules
 
+    # set_display_mode removed
+
     def rowCount(self, parent: QModelIndex = QModelIndex()) -> int:
         """Returns the number of rows under the given parent."""
         if not parent.isValid():
             # Parent is the invisible root. Children are the SourceRules.
             return len(self._source_rules)
 
+        # Always use detailed logic
         parent_item = parent.internalPointer()
 
         if isinstance(parent_item, SourceRule):
-            # Parent is a SourceRule. Children are AssetRules.
             return len(parent_item.assets)
         elif isinstance(parent_item, AssetRule):
-            # Parent is an AssetRule. Children are FileRules.
             return len(parent_item.files)
         elif isinstance(parent_item, FileRule):
             return 0 # FileRules have no children
 
         return 0 # Should not happen for valid items
 
     def columnCount(self, parent: QModelIndex = QModelIndex()) -> int:
         """Returns the number of columns."""
         return len(self.Columns)
@@ -143,27 +190,22 @@ class UnifiedViewModel(QAbstractItemModel):
         # Parent is a valid index, get its item
         parent_item = parent.internalPointer()
 
+        # Always use detailed logic
         child_item = None
         if isinstance(parent_item, SourceRule):
-            # Parent is SourceRule. Children are AssetRules.
             if row < len(parent_item.assets):
                 child_item = parent_item.assets[row]
-                # Ensure parent reference is set
                 if not hasattr(child_item, 'parent_source'):
                     child_item.parent_source = parent_item
         elif isinstance(parent_item, AssetRule):
-            # Parent is AssetRule. Children are FileRules.
             if row < len(parent_item.files):
                 child_item = parent_item.files[row]
-                # Ensure parent reference is set
                 if not hasattr(child_item, 'parent_asset'):
                     child_item.parent_asset = parent_item
 
         if child_item:
-            # Create index for the child item under the parent
             return self.createIndex(row, column, child_item)
         else:
-            # Invalid row or parent type has no children (FileRule)
             return QModelIndex()
 
     def data(self, index: QModelIndex, role: int = Qt.DisplayRole):
@@ -183,107 +225,79 @@ class UnifiedViewModel(QAbstractItemModel):
                 # Determine effective asset type
                 asset_type = item.asset_type_override if item.asset_type_override else item.asset_type
                 if asset_type:
-                    try:
-                        base_config = load_base_config() # Load base config
-                        asset_type_definitions = base_config.get('ASSET_TYPE_DEFINITIONS', {}) # Get definitions
-                        type_info = asset_type_definitions.get(asset_type)
-                        if type_info:
-                            hex_color = type_info.get("color")
-                            if hex_color:
-                                try:
-                                    return QColor(hex_color)
-                                except ValueError:
-                                    # Optional: Add logging for invalid hex color
-                                    # print(f"Warning: Invalid hex color '{hex_color}' for asset type '{asset_type}' in config.")
-                                    return None # Fallback for invalid hex
-                            else:
-                                # Optional: Add logging for missing color key
-                                # print(f"Warning: No color defined for asset type '{asset_type}' in config.")
-                                return None # Fallback if color key missing
-                        else:
-                            # Optional: Add logging for missing asset type definition
-                            # print(f"Warning: Asset type '{asset_type}' not found in ASSET_TYPE_DEFINITIONS.")
-                            return None # Fallback if type not in config
-                    except Exception: # Catch errors during config loading
-                        return None # Fallback on error
+                    # Use cached color
+                    return self._asset_type_colors.get(asset_type) # Returns None if not found
                 else:
                     return None # Fallback if no asset_type determined
             elif isinstance(item, FileRule):
-                # Determine effective item type: Prioritize override, then use base type
+                # --- New Logic: Darkened Parent Background ---
+                parent_asset = getattr(item, 'parent_asset', None)
+                if parent_asset:
+                    parent_asset_type = parent_asset.asset_type_override if parent_asset.asset_type_override else parent_asset.asset_type
+                    parent_bg_color = self._asset_type_colors.get(parent_asset_type) if parent_asset_type else None
+
+                    if parent_bg_color:
+                        # Darken the parent color by ~30% (factor 130)
+                        return parent_bg_color.darker(130)
+                    else:
+                        # Parent has no specific color, use default background
+                        return None
+                else:
+                    # Should not happen if structure is correct, but fallback to default
+                    return None
+                # --- End New Logic ---
+            else: # Other item types or if item is None
+                return None
+        # --- Handle Foreground Role (Text Color) ---
+        elif role == Qt.ForegroundRole:
+            if isinstance(item, FileRule):
+                # Determine effective item type
                 effective_item_type = item.item_type_override if item.item_type_override is not None else item.item_type
                 if effective_item_type:
-                    try:
-                        base_config = load_base_config() # Load base config
-                        file_type_definitions = base_config.get('FILE_TYPE_DEFINITIONS', {}) # Get definitions
-                        type_info = file_type_definitions.get(effective_item_type)
-                        if type_info:
-                            hex_color = type_info.get("color")
-                            if hex_color:
-                                try:
-                                    return QColor(hex_color)
-                                except ValueError:
-                                    # Optional: Add logging for invalid hex color
-                                    # print(f"Warning: Invalid hex color '{hex_color}' for file type '{item_type}' in config.")
-                                    return None # Fallback for invalid hex
-                            else:
-                                # Optional: Add logging for missing color key
-                                # print(f"Warning: No color defined for file type '{item_type}' in config.")
-                                return None # Fallback if color key missing
-                        else:
-                            # File types often don't have specific colors, so no warning needed unless debugging
-                            return None # Fallback if type not in config
-                    except Exception: # Catch errors during config loading
-                        return None # Fallback on error
-                else:
-                    return None # Fallback if no item_type determined
-            else: # Other item types or if item is None
-                return None
+                    # Use cached color for text
+                    return self._file_type_colors.get(effective_item_type) # Returns None if not found
+            # For SourceRule and AssetRule, return None to use default text color (usually contrasts well)
+            return None
 
         # --- Handle other roles (Display, Edit, etc.) ---
         if isinstance(item, SourceRule):
-            if role == Qt.DisplayRole or role == Qt.EditRole: # Combine Display and Edit logic
+            if role == Qt.DisplayRole or role == Qt.EditRole:
                 if column == self.COL_NAME:
+                    # Always display name
                     return Path(item.input_path).name
-                elif column == self.COL_SUPPLIER:
-                    # Return override if set, otherwise the original identifier, else empty string
+                elif column == self.COL_SUPPLIER: # Always handle supplier
                    display_value = item.supplier_override if item.supplier_override is not None else item.supplier_identifier
                     return display_value if display_value is not None else ""
-            # Other columns return None or "" for SourceRule in Display/Edit roles
-            return None # Default for SourceRule for other roles/columns
+            return None # Other columns/roles are blank for SourceRule
 
+        # --- Logic for AssetRule and FileRule (previously detailed mode only) ---
         elif isinstance(item, AssetRule):
             if role == Qt.DisplayRole:
                 if column == self.COL_NAME: return item.asset_name
                 elif column == self.COL_ASSET_TYPE:
                     display_value = item.asset_type_override if item.asset_type_override is not None else item.asset_type
                     return display_value if display_value else ""
-                # Removed Status and Output Path columns
             elif role == Qt.EditRole:
                 if column == self.COL_ASSET_TYPE:
                     return item.asset_type_override
-            return None # Default for AssetRule
+            return None
 
         elif isinstance(item, FileRule):
             if role == Qt.DisplayRole:
-                if column == self.COL_NAME: return Path(item.file_path).name # Display only filename
+                if column == self.COL_NAME: return Path(item.file_path).name
                 elif column == self.COL_TARGET_ASSET:
                     return item.target_asset_name_override if item.target_asset_name_override is not None else ""
                 elif column == self.COL_ITEM_TYPE:
-                    # Reverted Logic: Display override if set, otherwise base type. Shows prefixed keys.
                     override = item.item_type_override
                     initial_type = item.item_type
-                    if override is not None:
-                        return override
-                    else:
-                        return initial_type if initial_type else ""
-                # Removed Status and Output Path columns
+                    if override is not None: return override
+                    else: return initial_type if initial_type else ""
             elif role == Qt.EditRole:
-                if column == self.COL_TARGET_ASSET: return item.target_asset_name_override if item.target_asset_name_override is not None else "" # Return string or ""
-                elif column == self.COL_ITEM_TYPE: return item.item_type_override # Return string or None
-            return None # Default for FileRule
+                if column == self.COL_TARGET_ASSET: return item.target_asset_name_override if item.target_asset_name_override is not None else ""
+                elif column == self.COL_ITEM_TYPE: return item.item_type_override
+            return None
 
-        return None # Default return if role/item combination not handled
+        return None
 
     def setData(self, index: QModelIndex, value, role: int = Qt.EditRole) -> bool:
         """Sets the role data for the item at index to value."""
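`QColor.darker(130)` in the hunk above returns a color whose HSV value component is divided by 1.3 (factor/100), so file rows read as a dimmed shade of their parent asset's background. A rough RGB-only approximation for intuition (not Qt's exact HSV math; `darker` here is a hypothetical stand-in):

```python
def darker(rgb, factor=130):
    """Approximate Qt's darker(): scale each channel by 100/factor."""
    return tuple(int(c * 100 / factor) for c in rgb)

print(darker((200, 100, 50)))  # → (153, 76, 38)
```

Factors above 100 darken and factors below 100 lighten, which is why a single factor argument covers both directions in Qt's API.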
@@ -335,119 +349,8 @@ class UnifiedViewModel(QAbstractItemModel):
                     old_value = item.target_asset_name_override # Store old value for potential revert/comparison
                     item.target_asset_name_override = new_value
                     changed = True
-                    # --- Start: New Direct Model Restructuring Logic ---
-                    old_parent_asset = getattr(item, 'parent_asset', None)
-                    if old_parent_asset: # Ensure we have the old parent
-                        source_rule = getattr(old_parent_asset, 'parent_source', None)
-                        if source_rule: # Ensure we have the grandparent
-                            new_target_name = new_value # Can be None or a string
-
-                            # Get old parent index and source row
-                            try:
-                                grandparent_row = self._source_rules.index(source_rule)
-                                old_parent_row = source_rule.assets.index(old_parent_asset)
-                                source_row = old_parent_asset.files.index(item)
-                                old_parent_index = self.createIndex(old_parent_row, 0, old_parent_asset)
-                                grandparent_index = self.createIndex(grandparent_row, 0, source_rule) # Needed for insert/remove parent
-                            except ValueError:
-                                print("Error: Could not find item, parent, or grandparent in model structure during setData.")
-                                item.target_asset_name_override = old_value # Revert data change
-                                return False # Indicate failure
-
-                            target_parent_asset = None
-                            target_parent_index = QModelIndex()
-                            target_parent_row = -1 # Row within source_rule.assets
-                            target_row = -1 # Row within target_parent_asset.files
-                            move_occurred = False # Flag to track if a move happened
-
-                            # 1. Find existing target parent
-                            if new_target_name: # Only search if a specific target is given
-                                for i, asset in enumerate(source_rule.assets):
-                                    if asset.asset_name == new_target_name:
-                                        target_parent_asset = asset
-                                        target_parent_row = i
-                                        target_parent_index = self.createIndex(target_parent_row, 0, target_parent_asset)
-                                        break
-
-                            # 2. Handle Move/Creation
-                            if target_parent_asset:
-                                # --- Move to Existing Parent ---
-                                if target_parent_asset != old_parent_asset: # Don't move if target is the same as old parent
-                                    target_row = len(target_parent_asset.files) # Append to the end
-                                    # print(f"DEBUG: Moving {Path(item.file_path).name} from {old_parent_asset.asset_name} ({source_row}) to {target_parent_asset.asset_name} ({target_row})")
-                                    self.beginMoveRows(old_parent_index, source_row, source_row, target_parent_index, target_row)
-                                    # Restructure internal data
-                                    old_parent_asset.files.pop(source_row)
-                                    target_parent_asset.files.append(item)
-                                    item.parent_asset = target_parent_asset # Update parent reference
-                                    self.endMoveRows()
-                                    move_occurred = True
-                                else:
-                                    # Target is the same as the old parent. No move needed.
-                                    pass
-
-                            elif new_target_name: # Only create if a *new* specific target name was given
-                                # --- Create New Parent and Move ---
-                                # print(f"DEBUG: Creating new parent '{new_target_name}' and moving {Path(item.file_path).name}")
-                                # Create new AssetRule
-                                new_asset_rule = AssetRule(asset_name=new_target_name)
-                                new_asset_rule.asset_type = old_parent_asset.asset_type # Copy type from old parent
-                                new_asset_rule.asset_type_override = old_parent_asset.asset_type_override # Copy override too
-                                new_asset_rule.parent_source = source_rule # Set parent reference
-
-                                # Determine insertion row for the new parent (e.g., append)
-                                new_parent_row = len(source_rule.assets)
-                                # print(f"DEBUG: Inserting new parent at row {new_parent_row} under {Path(source_rule.input_path).name}")
-
-                                # Emit signals for inserting the new parent row
-                                self.beginInsertRows(grandparent_index, new_parent_row, new_parent_row)
-                                source_rule.assets.insert(new_parent_row, new_asset_rule) # Insert into data structure
-                                self.endInsertRows()
-
-                                # Get index for the newly inserted parent
-                                target_parent_index = self.createIndex(new_parent_row, 0, new_asset_rule)
-                                target_row = 0 # Insert file at the beginning of the new parent (for signal)
-
-                                # Emit signals for moving the file row
-                                # print(f"DEBUG: Moving {Path(item.file_path).name} from {old_parent_asset.asset_name} ({source_row}) to new {new_asset_rule.asset_name} ({target_row})
+                    # Emit signal that the override changed, let handler deal with restructuring
+                    self.targetAssetOverrideChanged.emit(index, new_value)
|
|
||||||
self.beginMoveRows(old_parent_index, source_row, source_row, target_parent_index, target_row)
|
|
||||||
# Restructure internal data
|
|
||||||
old_parent_asset.files.pop(source_row)
|
|
||||||
new_asset_rule.files.append(item) # Append is fine, target_row=0 was for signal
|
|
||||||
item.parent_asset = new_asset_rule # Update parent reference
|
|
||||||
self.endMoveRows()
|
|
||||||
move_occurred = True
|
|
||||||
|
|
||||||
# Update target_parent_asset for potential cleanup check later
|
|
||||||
target_parent_asset = new_asset_rule
|
|
||||||
|
|
||||||
else: # new_target_name is None or empty
|
|
||||||
# No move happens when the override is simply cleared.
|
|
||||||
pass
|
|
||||||
|
|
||||||
# 3. Cleanup Empty Old Parent (only if a move occurred and old parent is empty)
|
|
||||||
if move_occurred and not old_parent_asset.files:
|
|
||||||
# print(f"DEBUG: Removing empty old parent {old_parent_asset.asset_name}")
|
|
||||||
try:
|
|
||||||
# Find the row of the old parent again, as it might have shifted
|
|
||||||
old_parent_row_for_removal = source_rule.assets.index(old_parent_asset)
|
|
||||||
# print(f"DEBUG: Removing parent at row {old_parent_row_for_removal} under {Path(source_rule.input_path).name}")
|
|
||||||
self.beginRemoveRows(grandparent_index, old_parent_row_for_removal, old_parent_row_for_removal)
|
|
||||||
source_rule.assets.pop(old_parent_row_for_removal)
|
|
||||||
self.endRemoveRows()
|
|
||||||
except ValueError:
|
|
||||||
print(f"Error: Could not find old parent '{old_parent_asset.asset_name}' for removal.")
|
|
||||||
# Log error, but continue
|
|
||||||
else:
|
|
||||||
print("Error: Could not find grandparent SourceRule during setData restructuring.")
|
|
||||||
item.target_asset_name_override = old_value # Revert
|
|
||||||
return False
|
|
||||||
else:
|
|
||||||
print("Error: Could not find parent AssetRule during setData restructuring.")
|
|
||||||
item.target_asset_name_override = old_value # Revert
|
|
||||||
return False
|
|
||||||
# --- End: New Direct Model Restructuring Logic ---
|
|
||||||
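Stripped of the Qt signal bookkeeping (`beginMoveRows`/`endMoveRows`, `beginInsertRows`, `beginRemoveRows`), the restructuring above reduces to plain list surgery: find or create the target parent, move the file, fix its parent reference, and drop the old parent once empty. A minimal sketch with hypothetical stand-in node classes (not the real `AssetRule`/`FileRule`):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical minimal stand-ins for the model's asset/file nodes.
@dataclass
class FileNode:
    name: str
    parent: Optional["AssetNode"] = None

@dataclass
class AssetNode:
    name: str
    files: List[FileNode] = field(default_factory=list)

def move_file(assets: List[AssetNode], file: FileNode, target_name: Optional[str]) -> bool:
    """Move `file` to the asset named `target_name`, creating the asset if it
    does not exist and removing the old asset once it becomes empty.
    Returns True if a move actually happened."""
    old = file.parent
    if old is None or not target_name or old.name == target_name:
        return False  # cleared override or same parent: nothing to do
    target = next((a for a in assets if a.name == target_name), None)
    if target is None:
        target = AssetNode(target_name)  # create-and-move branch
        assets.append(target)
    old.files.remove(file)
    target.files.append(file)
    file.parent = target  # update parent reference
    if not old.files:     # cleanup empty old parent
        assets.remove(old)
    return True
```

In the real model each of these list mutations must be wrapped in the matching `begin…Rows`/`end…Rows` pair so attached views stay consistent.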
elif column == self.COL_ITEM_TYPE:  # Item-Type Override
    # Delegate provides string value (e.g., "MAP_COL") or None
    new_value = str(value) if value is not None else None
@@ -515,15 +418,15 @@ class UnifiedViewModel(QAbstractItemModel):
         item = index.internalPointer()
         column = index.column()

+        # Always use detailed mode editability logic
         can_edit = False
-        # Determine editability based on item type and column
-        if isinstance(item, SourceRule):  # If SourceRule is displayed/editable
-            if column == self.COL_SUPPLIER: can_edit = True  # Supplier is editable
+        if isinstance(item, SourceRule):
+            if column == self.COL_SUPPLIER: can_edit = True
         elif isinstance(item, AssetRule):
-            if column == self.COL_ASSET_TYPE: can_edit = True  # Asset Type is editable
+            if column == self.COL_ASSET_TYPE: can_edit = True
         elif isinstance(item, FileRule):
-            if column == self.COL_TARGET_ASSET: can_edit = True  # Target Asset is editable
-            if column == self.COL_ITEM_TYPE: can_edit = True  # Item Type is editable
+            if column == self.COL_TARGET_ASSET: can_edit = True
+            if column == self.COL_ITEM_TYPE: can_edit = True

         if can_edit:
             return default_flags | Qt.ItemIsEditable
@@ -548,98 +451,316 @@ class UnifiedViewModel(QAbstractItemModel):
         if item:  # Ensure internal pointer is not None
             return item
         return None  # Return None for invalid index or None pointer

-    # --- Method to update model based on LLM predictions ---
-    def update_rules_for_sources(self, source_rules: List[SourceRule]):
+    # --- Method to update model based on prediction results, preserving overrides ---
+    def update_rules_for_sources(self, new_source_rules: List[SourceRule]):
         """
-        Updates the model's internal data based on a list of SourceRule objects,
-        typically containing predictions for one or more source directories.
+        Updates the model's internal data based on a list of new SourceRule objects
+        (typically from prediction results), merging them with existing data while
+        preserving user overrides.

         Args:
-            source_rules: A list of SourceRule objects containing the new structure.
+            new_source_rules: A list of SourceRule objects containing the new structure.
         """
-        if not source_rules:
-            print("UnifiedViewModel: update_rules_for_sources called with empty list.")
+        if not new_source_rules:
+            log.warning("UnifiedViewModel: update_rules_for_sources called with empty list.")
             return

-        # --- Important: Model Change Signaling ---
-        # Using Option 2 (per-source update) as it's generally more efficient.
-        print(f"UnifiedViewModel: Updating rules for {len(source_rules)} source(s).")
-
-        # --- Node Class Placeholders ---
-        # Ensure these match your actual node implementation if different.
-        # These might be imported from another module or defined within this model.
-        # Example: from .your_node_module import SourceNode, AssetNode, FileNode
-        # For now, we assume they are available in the scope.
-
-        for rule in source_rules:
-            source_path = rule.input_path  # Use input_path as per SourceRule definition
-            # --- Find the corresponding SourceRule in the model's internal list ---
-            # This replaces the placeholder _find_source_node_by_path logic
-            # We need the *object* and its *index* in self._source_rules
-            source_rule_obj = None
-            source_rule_row = -1
-            for i, existing_rule in enumerate(self._source_rules):
-                if existing_rule.input_path == source_path:
-                    source_rule_obj = existing_rule
-                    source_rule_row = i
-                    break
+        log.info(f"UnifiedViewModel: Updating rules for {len(new_source_rules)} source(s).")
+
+        for new_source_rule in new_source_rules:
+            source_path = new_source_rule.input_path
+            existing_source_rule = None
+            existing_source_row = -1
+
+            # 1. Find existing SourceRule in the model
+            for i, rule in enumerate(self._source_rules):
+                if rule.input_path == source_path:
+                    existing_source_rule = rule
+                    existing_source_row = i
+                    break

-            if source_rule_obj is None:
-                # --- ADD NEW RULE LOGIC ---
-                log.debug(f"No existing rule found for '{source_path}'. Adding new rule to model.")
-                # Ensure parent references are set within the new rule
-                for asset_rule in rule.assets:
-                    asset_rule.parent_source = rule  # Set parent to the rule being added
+            if existing_source_rule is None:
+                # 2. Add New SourceRule if not found
+                log.debug(f"Adding new SourceRule for '{source_path}'")
+                # Ensure parent references are set within the new rule hierarchy
+                for asset_rule in new_source_rule.assets:
+                    asset_rule.parent_source = new_source_rule
                     for file_rule in asset_rule.files:
                         file_rule.parent_asset = asset_rule

                 # Add to model's internal list and emit signal
-                current_row_count = len(self._source_rules)
-                self.beginInsertRows(QModelIndex(), current_row_count, current_row_count)
-                self._source_rules.append(rule)  # Append the new rule
+                insert_row = len(self._source_rules)
+                self.beginInsertRows(QModelIndex(), insert_row, insert_row)
+                self._source_rules.append(new_source_rule)
                 self.endInsertRows()
-                continue  # Skip the rest of the loop for this rule as it's newly added
-                # --- END ADD NEW RULE LOGIC ---
+                continue  # Process next new_source_rule

-            # Get the QModelIndex corresponding to the source_rule_obj
-            # This index represents the parent for layout changes.
-            source_index = self.createIndex(source_rule_row, 0, source_rule_obj)
-
-            if not source_index.isValid():
-                print(f"Warning: Could not create valid QModelIndex for SourceRule: {source_path}. Skipping update.")
+            # 3. Merge Existing SourceRule
+            log.debug(f"Merging SourceRule for '{source_path}'")
+            existing_source_index = self.createIndex(existing_source_row, 0, existing_source_rule)
+            if not existing_source_index.isValid():
+                log.error(f"Could not create valid index for existing SourceRule: {source_path}. Skipping.")
                 continue

-            # --- Signal layout change for the specific source node ---
-            # We are changing the children (AssetRules) of this SourceRule.
-            # Emit with parent index list and orientation.
-            self.layoutAboutToBeChanged.emit()  # Emit without arguments
-
-            # --- Clear existing children (AssetRules) ---
-            # Directly modify the assets list of the found SourceRule object
-            source_rule_obj.assets.clear()  # Clear the list in place
-
-            # --- Rebuild children based on the new rule ---
-            for asset_rule in rule.assets:
-                # Add the new AssetRule object directly
-                source_rule_obj.assets.append(asset_rule)
-                # Set the parent reference on the new asset rule
-                asset_rule.parent_source = source_rule_obj
-
-                # Set parent references for the FileRules within the new AssetRule
-                for file_rule in asset_rule.files:
-                    file_rule.parent_asset = asset_rule
-
-            # --- Signal layout change completion ---
-            self.layoutChanged.emit()  # Emit without arguments
-            print(f"UnifiedViewModel: Updated children for SourceRule: {source_path}")
+            # Update non-override SourceRule fields (e.g., supplier identifier if needed)
+            if existing_source_rule.supplier_identifier != new_source_rule.supplier_identifier:
+                # Only update if override is not set, or if you want prediction to always update base identifier
+                if existing_source_rule.supplier_override is None:
+                    existing_source_rule.supplier_identifier = new_source_rule.supplier_identifier
+                    # Emit dataChanged for the supplier column if it's displayed/editable at source level
+                    supplier_col_index = self.createIndex(existing_source_row, self.COL_SUPPLIER, existing_source_rule)
+                    self.dataChanged.emit(supplier_col_index, supplier_col_index, [Qt.DisplayRole, Qt.EditRole])
+
+            # --- Merge AssetRules ---
+            existing_assets_dict = {asset.asset_name: asset for asset in existing_source_rule.assets}
+            new_assets_dict = {asset.asset_name: asset for asset in new_source_rule.assets}
+            processed_asset_names = set()
+
+            # Iterate through new assets to update existing or add new ones
+            for asset_name, new_asset in new_assets_dict.items():
+                processed_asset_names.add(asset_name)
+                existing_asset = existing_assets_dict.get(asset_name)
+
+                if existing_asset:
+                    # --- Update Existing AssetRule ---
+                    log.debug(f"  Merging AssetRule: {asset_name}")
+                    existing_asset_row = existing_source_rule.assets.index(existing_asset)
+                    existing_asset_index = self.createIndex(existing_asset_row, 0, existing_asset)
+
+                    # Update non-override fields (e.g., asset_type)
+                    if existing_asset.asset_type != new_asset.asset_type and existing_asset.asset_type_override is None:
+                        existing_asset.asset_type = new_asset.asset_type
+                        asset_type_col_index = self.createIndex(existing_asset_row, self.COL_ASSET_TYPE, existing_asset)
+                        self.dataChanged.emit(asset_type_col_index, asset_type_col_index, [Qt.DisplayRole, Qt.EditRole, Qt.BackgroundRole])  # Include BackgroundRole for color
+
+                    # --- Merge FileRules within the AssetRule ---
+                    self._merge_file_rules(existing_asset, new_asset, existing_asset_index)
+                else:
+                    # --- Add New AssetRule ---
+                    log.debug(f"  Adding new AssetRule: {asset_name}")
+                    new_asset.parent_source = existing_source_rule  # Set parent
+                    # Ensure file parents are set
+                    for file_rule in new_asset.files:
+                        file_rule.parent_asset = new_asset
+
+                    insert_row = len(existing_source_rule.assets)
+                    self.beginInsertRows(existing_source_index, insert_row, insert_row)
+                    existing_source_rule.assets.append(new_asset)
+                    self.endInsertRows()
+
+            # --- Remove Old AssetRules ---
+            # Find assets in existing but not in new, and remove them in reverse order
+            assets_to_remove = []
+            for i, existing_asset in reversed(list(enumerate(existing_source_rule.assets))):
+                if existing_asset.asset_name not in processed_asset_names:
+                    assets_to_remove.append((i, existing_asset.asset_name))  # Store index and name
+
+            for row_index, asset_name_to_remove in assets_to_remove:
+                log.debug(f"  Removing old AssetRule: {asset_name_to_remove}")
+                self.beginRemoveRows(existing_source_index, row_index, row_index)
+                existing_source_rule.assets.pop(row_index)
+                self.endRemoveRows()
+
+    def _merge_file_rules(self, existing_asset: AssetRule, new_asset: AssetRule, parent_asset_index: QModelIndex):
+        """Helper method to merge FileRules for a given AssetRule."""
+        existing_files_dict = {file.file_path: file for file in existing_asset.files}
+        new_files_dict = {file.file_path: file for file in new_asset.files}
+        processed_file_paths = set()
+
+        # Iterate through new files to update existing or add new ones
+        for file_path, new_file in new_files_dict.items():
+            processed_file_paths.add(file_path)
+            existing_file = existing_files_dict.get(file_path)
+
+            if existing_file:
+                # --- Update Existing FileRule ---
+                log.debug(f"    Merging FileRule: {Path(file_path).name}")
+                existing_file_row = existing_asset.files.index(existing_file)
+                existing_file_index = self.createIndex(existing_file_row, 0, existing_file)  # Index relative to parent_asset_index
+
+                # Update non-override fields (item_type, standard_map_type)
+                changed_roles = []
+                if existing_file.item_type != new_file.item_type and existing_file.item_type_override is None:
+                    existing_file.item_type = new_file.item_type
+                    changed_roles.extend([Qt.DisplayRole, Qt.EditRole, Qt.BackgroundRole])  # Include BackgroundRole for color
+
+                # Update standard_map_type (assuming it's derived/set during prediction)
+                # Check if standard_map_type exists on both objects before comparing
+                new_standard_type = getattr(new_file, 'standard_map_type', None)
+                old_standard_type = getattr(existing_file, 'standard_map_type', None)
+                if old_standard_type != new_standard_type:
+                    # Update only if item_type_override is not set, as override dictates standard type
+                    if existing_file.item_type_override is None:
+                        existing_file.standard_map_type = new_standard_type
+                        # standard_map_type might not directly affect display, but item_type change covers it
+                        if Qt.DisplayRole not in changed_roles:  # Avoid duplicates
+                            changed_roles.extend([Qt.DisplayRole, Qt.EditRole])
+
+                # Emit dataChanged only if something actually changed
+                if changed_roles:
+                    # Emit for all relevant columns potentially affected by type changes
+                    for col in [self.COL_ITEM_TYPE]:  # Add other cols if needed
+                        col_index = self.createIndex(existing_file_row, col, existing_file)
+                        self.dataChanged.emit(col_index, col_index, changed_roles)
+            else:
+                # --- Add New FileRule ---
+                log.debug(f"    Adding new FileRule: {Path(file_path).name}")
+                new_file.parent_asset = existing_asset  # Set parent
+                insert_row = len(existing_asset.files)
+                self.beginInsertRows(parent_asset_index, insert_row, insert_row)
+                existing_asset.files.append(new_file)
+                self.endInsertRows()
+
+        # --- Remove Old FileRules ---
+        files_to_remove = []
+        for i, existing_file in reversed(list(enumerate(existing_asset.files))):
+            if existing_file.file_path not in processed_file_paths:
+                files_to_remove.append((i, Path(existing_file.file_path).name))
+
+        for row_index, file_name_to_remove in files_to_remove:
+            log.debug(f"    Removing old FileRule: {file_name_to_remove}")
+            self.beginRemoveRows(parent_asset_index, row_index, row_index)
+            existing_asset.files.pop(row_index)
+            self.endRemoveRows()
+
+    # --- Dedicated Model Restructuring Methods ---
+
+    def moveFileRule(self, source_file_index: QModelIndex, target_parent_asset_index: QModelIndex):
+        """Moves a FileRule (source_file_index) to a different AssetRule parent (target_parent_asset_index)."""
+        if not source_file_index.isValid() or not target_parent_asset_index.isValid():
+            log.error("moveFileRule: Invalid source or target index provided.")
+            return False
+
+        file_item = source_file_index.internalPointer()
+        target_parent_asset = target_parent_asset_index.internalPointer()
+
+        if not isinstance(file_item, FileRule) or not isinstance(target_parent_asset, AssetRule):
+            log.error("moveFileRule: Invalid item types for source or target.")
+            return False
+
+        old_parent_asset = getattr(file_item, 'parent_asset', None)
+        if not old_parent_asset:
+            log.error(f"moveFileRule: Source file '{Path(file_item.file_path).name}' has no parent asset.")
+            return False
+
+        if old_parent_asset == target_parent_asset:
+            log.debug("moveFileRule: Source and target parent are the same. No move needed.")
+            return True  # Technically successful, no change needed
+
+        # Get old parent index
+        source_rule = getattr(old_parent_asset, 'parent_source', None)
+        if not source_rule:
+            log.error(f"moveFileRule: Could not find SourceRule parent for old asset '{old_parent_asset.asset_name}'.")
+            return False
+
+        try:
+            old_parent_row = source_rule.assets.index(old_parent_asset)
+            old_parent_index = self.createIndex(old_parent_row, 0, old_parent_asset)
+            source_row = old_parent_asset.files.index(file_item)
+        except ValueError:
+            log.error("moveFileRule: Could not find old parent or source file within their respective lists.")
+            return False
+
+        target_row = len(target_parent_asset.files)  # Append to the end of the target
+
+        log.debug(f"Moving file '{Path(file_item.file_path).name}' from '{old_parent_asset.asset_name}' (row {source_row}) to '{target_parent_asset.asset_name}' (row {target_row})")
+        self.beginMoveRows(old_parent_index, source_row, source_row, target_parent_asset_index, target_row)
+        # Restructure internal data
+        old_parent_asset.files.pop(source_row)
+        target_parent_asset.files.append(file_item)
+        file_item.parent_asset = target_parent_asset  # Update parent reference
+        self.endMoveRows()
+        return True
+
+    def createAssetRule(self, source_rule: SourceRule, new_asset_name: str, copy_from_asset: AssetRule = None) -> QModelIndex:
+        """Creates a new AssetRule under the given SourceRule and returns its index."""
+        if not isinstance(source_rule, SourceRule) or not new_asset_name:
+            log.error("createAssetRule: Invalid SourceRule or empty asset name provided.")
+            return QModelIndex()
+
+        # Check if asset already exists under this source
+        for asset in source_rule.assets:
+            if asset.asset_name == new_asset_name:
+                log.warning(f"createAssetRule: Asset '{new_asset_name}' already exists under '{Path(source_rule.input_path).name}'.")
+                # Return existing index? Or fail? Let's return existing for now.
+                try:
+                    existing_row = source_rule.assets.index(asset)
+                    return self.createIndex(existing_row, 0, asset)
+                except ValueError:
+                    log.error("createAssetRule: Found existing asset but failed to get its index.")
+                    return QModelIndex()  # Should not happen
+
+        log.debug(f"Creating new AssetRule '{new_asset_name}' under '{Path(source_rule.input_path).name}'")
+        new_asset_rule = AssetRule(asset_name=new_asset_name)
+        new_asset_rule.parent_source = source_rule  # Set parent reference
+
+        # Optionally copy type info from another asset
+        if isinstance(copy_from_asset, AssetRule):
+            new_asset_rule.asset_type = copy_from_asset.asset_type
+            new_asset_rule.asset_type_override = copy_from_asset.asset_type_override
+
+        # Find parent SourceRule index
+        try:
+            grandparent_row = self._source_rules.index(source_rule)
+            grandparent_index = self.createIndex(grandparent_row, 0, source_rule)
+        except ValueError:
+            log.error(f"createAssetRule: Could not find SourceRule '{Path(source_rule.input_path).name}' in the model's root list.")
+            return QModelIndex()
+
+        # Determine insertion row for the new parent (e.g., append)
+        new_parent_row = len(source_rule.assets)
+
+        # Emit signals for inserting the new parent row
+        self.beginInsertRows(grandparent_index, new_parent_row, new_parent_row)
+        source_rule.assets.insert(new_parent_row, new_asset_rule)  # Insert into data structure
+        self.endInsertRows()
+
+        # Return index for the newly created asset
+        return self.createIndex(new_parent_row, 0, new_asset_rule)
+
+    def removeAssetRule(self, asset_rule_to_remove: AssetRule):
+        """Removes an AssetRule if it's empty."""
+        if not isinstance(asset_rule_to_remove, AssetRule):
+            log.error("removeAssetRule: Invalid AssetRule provided.")
+            return False
+
+        if asset_rule_to_remove.files:
+            log.warning(f"removeAssetRule: Asset '{asset_rule_to_remove.asset_name}' is not empty. Removal aborted.")
+            return False  # Do not remove non-empty assets automatically
+
+        source_rule = getattr(asset_rule_to_remove, 'parent_source', None)
+        if not source_rule:
+            log.error(f"removeAssetRule: Could not find parent SourceRule for asset '{asset_rule_to_remove.asset_name}'.")
+            return False
+
+        # Find parent SourceRule index and the row of the asset to remove
+        try:
+            grandparent_row = self._source_rules.index(source_rule)
+            grandparent_index = self.createIndex(grandparent_row, 0, source_rule)
+            asset_row_for_removal = source_rule.assets.index(asset_rule_to_remove)
+        except ValueError:
+            log.error("removeAssetRule: Could not find parent SourceRule or the AssetRule within its parent's list.")
+            return False
+
+        log.debug(f"Removing empty AssetRule '{asset_rule_to_remove.asset_name}' at row {asset_row_for_removal} under '{Path(source_rule.input_path).name}'")
+        self.beginRemoveRows(grandparent_index, asset_row_for_removal, asset_row_for_removal)
+        source_rule.assets.pop(asset_row_for_removal)
+        self.endRemoveRows()
+        return True
+
+    def get_asset_type_keys(self) -> List[str]:
+        """Returns the cached list of asset type keys."""
+        return self._asset_type_keys
+
+    def get_file_type_keys(self) -> List[str]:
+        """Returns the cached list of file type keys."""
+        return self._file_type_keys

     # --- Placeholder for node finding method (Original Request - Replaced by direct list search above) ---
     # Kept for reference, but the logic above directly searches self._source_rules
-    # def _find_source_node_by_path(self, path: str) -> 'SourceRule | None':
-    #     """Placeholder: Finds a top-level SourceRule by its input_path."""
-    #     # This assumes the model uses separate node objects, which it doesn't.
-    #     # The current implementation uses the Rule objects directly.
-    #     for i, rule in enumerate(self._source_rules):
-    #         if rule.input_path == path:
-    #             return rule  # Return the SourceRule object itself
-    #     return None
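The merge policy introduced above — predictions update base fields only where no user override exists, newly predicted entries are inserted, and entries absent from the prediction are removed — can be sketched on a simplified flat structure. The `Asset` dataclass here is a hypothetical stand-in for `AssetRule`, without the Qt model signaling:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Hypothetical flat stand-in for AssetRule, to show the merge policy only.
@dataclass
class Asset:
    name: str
    asset_type: str = ""
    asset_type_override: Optional[str] = None  # set by the user, wins over predictions

def merge_assets(existing: List[Asset], predicted: List[Asset]) -> List[Asset]:
    """Merge a fresh prediction into the current list, keyed by asset name:
    keep existing objects (so user overrides survive), update their predicted
    fields only where no override is set, add new entries, drop stale ones."""
    by_name: Dict[str, Asset] = {a.name: a for a in existing}
    merged: List[Asset] = []
    for new in predicted:
        old = by_name.get(new.name)
        if old is None:
            merged.append(new)  # newly predicted asset
        else:
            if old.asset_type_override is None:  # override beats prediction
                old.asset_type = new.asset_type
            merged.append(old)  # same object, identity preserved
    return merged  # anything not in `predicted` is dropped
```

The key property is that the *existing* objects are kept and mutated in place, which is what lets Qt indexes and user overrides survive a re-prediction.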
@@ -4,6 +4,7 @@ import os
 import json
 import requests
 import sys
+import re  # Add re import

 # Add the prototype directory to the Python path to import config_llm
 sys.path.append(os.path.dirname(__file__))
@@ -121,9 +122,9 @@ def call_llm_api(prompt, config):

 def extract_json_from_response(response_data):
     """
-    Extracts the JSON list part from the LLM's response content by finding
-    the first '[' and last ']' and parsing the content between them.
-    Handles responses that might include a thinking block or other text before/after the JSON.
+    Extracts the main JSON object or list from the LLM's response content.
+    It handles markdown fences, reasoning tags (e.g., <think>), and aims
+    to find the first complete JSON structure ({...} or [...]).
     """
     print("Extracting JSON from LLM response...")
@@ -133,44 +134,94 @@ def extract_json_from_response(response_data):
     message = response_data['choices'][0].get('message', {})
     assistant_message_content = message.get('content', '')

-    # Strip markdown code fences if present
-    if assistant_message_content.strip().startswith("```json"):
-        assistant_message_content = assistant_message_content.strip()[len("```json"):].strip()
-    if assistant_message_content.strip().endswith("```"):
-        assistant_message_content = assistant_message_content.strip()[:-len("```")].strip()
-
-    print("\n--- Processed Assistant Message Content (after stripping fences) ---")
-    print(assistant_message_content)
-    print("-------------------------------------------------------------------\n")
-
     if not assistant_message_content:
-        print("Error: LLM response content is empty or unexpected format.")
-        print(f"Full response: {response_data}")
-        # Attempt to return empty list for validation to catch this
-        return []
+        print("Warning: LLM response content is empty or not found in expected structure.")
+        print(f"Full response data: {response_data}")
+        return []  # Return empty list if no content

-    # Find the index of the first '[' and the last ']'
-    first_bracket_index = assistant_message_content.find('[')
-    last_bracket_index = assistant_message_content.rfind(']')
-
-    if first_bracket_index == -1 or last_bracket_index == -1 or last_bracket_index < first_bracket_index:
-        print("Error: Could not find a valid JSON list structure (matching '[' and ']') in the LLM response content.")
-        print(f"Response content snippet: {assistant_message_content[:500]}...")  # Print snippet
-        # Attempt to return empty list for validation to catch this
-        return []
-
-    # Extract the potential JSON string between the first '[' and last ']'
-    json_string = assistant_message_content[first_bracket_index : last_bracket_index + 1]
-
-    # Attempt to parse the extracted string as JSON
+    content = assistant_message_content.strip()
+
+    # 1. Strip markdown code fences (```json ... ``` or ``` ... ```)
+    content = re.sub(r'^```(?:json)?\s*', '', content, flags=re.IGNORECASE)
+    content = re.sub(r'\s*```$', '', content)
+    content = content.strip()
+
+    print("\n--- Content After Stripping Fences ---")
+    print(content)
+    print("---------------------------------------\n")
+
+    # 2. Remove reasoning tags like <think>...</think> (non-greedy)
+    # Consider making tag removal more general if other tags appear
+    print("\n--- Content BEFORE Removing <think> Tags ---")
+    print(repr(content))  # Using repr() to see hidden characters like newlines
+    content = re.sub(r'<think>.*?</think>', '', content, flags=re.DOTALL | re.IGNORECASE)
+    print("\n--- Content AFTER Removing <think> Tags ---")
+    print(repr(content))  # Using repr() to see hidden characters like newlines
+    content = content.strip()
+
+    print("\n--- Final Content Before JSON Parsing Attempt ---")
+    print(content)
+    print("-------------------------------------------------\n")
+
+    if not content:
+        print("Error: LLM response content is empty after stripping fences and tags.")
+        return []  # Return empty list if nothing remains
+
+    # 3. Find the first opening bracket or brace indicating start of JSON
+    first_bracket_index = content.find('[')
+    first_brace_index = content.find('{')
+
+    start_index = -1
+    if first_bracket_index != -1 and first_brace_index != -1:
+        start_index = min(first_bracket_index, first_brace_index)
+    elif first_bracket_index != -1:
+        start_index = first_bracket_index
+    elif first_brace_index != -1:
+        start_index = first_brace_index
+
+    if start_index == -1:
+        print("Error: Could not find starting '[' or '{' in the processed content.")
+        print(f"Processed content snippet: {content[:500]}...")
+        return []  # Return empty list if no JSON start found
+
+    # 4. Attempt to find the corresponding closing bracket/brace and parse
+    # This uses json.JSONDecoder.raw_decode to find the first valid JSON object/array.
+    potential_json_str = content[start_index:]
+    json_decoder = json.JSONDecoder()
     try:
||||||
parsed_json = json.loads(json_string)
|
# Use decode with raw_decode to find the first valid JSON object/array
|
||||||
|
# and its end position in the string.
|
||||||
|
parsed_json, end_pos = json_decoder.raw_decode(potential_json_str)
|
||||||
|
print(f"Successfully parsed JSON ending at index {start_index + end_pos}.")
|
||||||
|
# Optional: Log the extracted part: print(f"Extracted JSON string: {potential_json_str[:end_pos]}")
|
||||||
return parsed_json
|
return parsed_json
|
||||||
except json.JSONDecodeError as e:
|
except json.JSONDecodeError as e:
|
||||||
print(f"Error: Could not decode extracted JSON from LLM response: {e}")
|
# This error means no valid JSON object was found starting at start_index
|
||||||
print(f"Attempted to parse (snippet): {json_string[:500]}...") # Print snippet
|
print(f"Error: Could not decode JSON starting from index {start_index}: {e}")
|
||||||
# Attempt to return empty list for validation to catch this
|
print(f"Content snippet starting at index {start_index}: {potential_json_str[:500]}...")
|
||||||
return []
|
|
||||||
|
# Fallback: Try the original naive approach (first '['/'{' to last ']'/'}')
|
||||||
|
# This might capture the JSON if it's the last element, even with preceding noise
|
||||||
|
# that confused raw_decode.
|
||||||
|
print("Attempting fallback: finding last ']' or '}'...")
|
||||||
|
last_bracket_index = content.rfind(']')
|
||||||
|
last_brace_index = content.rfind('}')
|
||||||
|
end_index = max(last_bracket_index, last_brace_index)
|
||||||
|
|
||||||
|
if end_index > start_index:
|
||||||
|
fallback_json_string = content[start_index : end_index + 1]
|
||||||
|
print(f"Fallback attempting to parse: {fallback_json_string[:500]}...")
|
||||||
|
try:
|
||||||
|
parsed_json = json.loads(fallback_json_string)
|
||||||
|
print("Successfully parsed JSON using fallback method.")
|
||||||
|
return parsed_json
|
||||||
|
except json.JSONDecodeError as fallback_e:
|
||||||
|
print(f"Fallback JSON parsing also failed: {fallback_e}")
|
||||||
|
return [] # Return empty list if both methods fail
|
||||||
|
else:
|
||||||
|
print("Fallback failed: Could not find suitable closing bracket/brace.")
|
||||||
|
return [] # Return empty list if fallback indices are invalid
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
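The cleaned-up parsing logic above can be condensed into a standalone helper. This is a sketch, not the project's own function: the name `extract_first_json` and the bare `[]` failure value are assumptions, but the fence-stripping, `<think>`-removal, and `raw_decode` steps mirror the approach described.

```python
import json
import re

def extract_first_json(text: str):
    """Extract the first JSON array/object embedded in noisy LLM output.

    Sketch of the approach above: strip markdown fences and <think> tags,
    locate the first '[' or '{', then let raw_decode determine where the
    JSON value ends, ignoring any trailing chatter.
    """
    content = text.strip()
    content = re.sub(r'^```(?:json)?\s*', '', content, flags=re.IGNORECASE)
    content = re.sub(r'\s*```$', '', content)
    content = re.sub(r'<think>.*?</think>', '', content, flags=re.DOTALL | re.IGNORECASE)
    content = content.strip()

    starts = [i for i in (content.find('['), content.find('{')) if i != -1]
    if not starts:
        return []  # no JSON start found
    start = min(starts)
    try:
        value, _end = json.JSONDecoder().raw_decode(content[start:])
        return value
    except json.JSONDecodeError:
        return []  # no valid JSON value at the detected start
```

Because `raw_decode` stops at the end of the first complete value, trailing commentary after the JSON does not break parsing.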
418 main.py
@@ -26,6 +26,7 @@ try:
    from processing_engine import ProcessingEngine # <<< ADDED NEW ENGINE IMPORT
    from rule_structure import SourceRule # Import SourceRule for type hinting
    from gui.main_window import MainWindow # Import MainWindow
    from utils.workspace_utils import prepare_processing_workspace # <<< ADDED UTILITY IMPORT
except ImportError as e:
    # Provide a more helpful error message if imports fail
    script_dir = Path(__file__).parent.resolve()
@@ -172,33 +173,14 @@ class ProcessingTask(QRunnable):
        log.debug(f"DEBUG: Rule passed to ProcessingTask.run: {self.rule}") # DEBUG LOG
        status = "failed" # Default status
        result_or_error = None
        prepared_workspace_path = None # Initialize path for prepared content outside try

        try:
            # --- 1. Prepare Input Workspace using Utility Function ---
            # The utility function creates the temp dir, prepares it, and returns its path.
            # It raises exceptions on failure (FileNotFoundError, ValueError, zipfile.BadZipFile, OSError).
            prepared_workspace_path = prepare_processing_workspace(self.rule.input_path)
            log.info(f"Workspace prepared successfully at: {prepared_workspace_path}")

            # --- DEBUG: List files in prepared workspace ---
            try:
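The task above now delegates workspace preparation to `prepare_processing_workspace` from `utils/workspace_utils.py`, whose body is not shown in this diff. The following is a hypothetical sketch of what such a helper plausibly does, raising the same exception types the caller documents; the project's actual implementation may differ.

```python
import shutil
import tempfile
import zipfile
from pathlib import Path

def prepare_workspace(input_path: str) -> Path:
    """Illustrative sketch (not the project's code): copy a directory or
    extract a .zip into a fresh temp dir and return its path.

    Raises FileNotFoundError, ValueError, zipfile.BadZipFile, or OSError,
    matching the failure modes the task code expects.
    """
    src = Path(input_path)
    if not src.exists():
        raise FileNotFoundError(f"Original input path does not exist: {src}")
    workspace = Path(tempfile.mkdtemp(prefix="asset_proc_"))
    try:
        if src.is_dir():
            shutil.copytree(src, workspace, dirs_exist_ok=True)
        elif src.is_file() and src.suffix.lower() == '.zip':
            with zipfile.ZipFile(src, 'r') as zf:
                zf.extractall(workspace)
        else:
            raise ValueError(f"Unsupported input type: {src}. Must be a directory or .zip file.")
    except Exception:
        shutil.rmtree(workspace, ignore_errors=True)  # don't leak temp dirs on failure
        raise
    return workspace
```

Centralizing this logic lets both `main.py` and `monitor.py` share one prepare/raise contract instead of duplicating the copy/extract branching inline.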
@@ -241,12 +223,13 @@ class ProcessingTask(QRunnable):
            log.error(f"Worker Thread: Error emitting finished signal for {self.rule.input_path}: {sig_err}")

        # --- 3. Cleanup Workspace ---
        # Use the path returned by the utility function for cleanup
        if prepared_workspace_path and prepared_workspace_path.exists():
            try:
                log.info(f"Cleaning up temporary workspace: {prepared_workspace_path}")
                shutil.rmtree(prepared_workspace_path) # Use the Path object
            except OSError as cleanup_error:
                log.error(f"Worker Thread: Failed to cleanup temporary workspace {prepared_workspace_path}: {cleanup_error}")


# --- CLI Worker Function (COMMENTED OUT - Replaced by GUI Flow) ---
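A design note on the cleanup step: since the workspace path now only outlives the `try` block for the sake of `rmtree`, a `tempfile.TemporaryDirectory` context manager could own the lifetime instead. This is an alternative sketch, not the project's code; the task keeps explicit cleanup so rmtree failures can be logged individually.

```python
import tempfile
from pathlib import Path

def process_in_workspace(process):
    """Run `process` against a temp workspace that is deleted automatically
    when the block exits, even on exceptions (hypothetical helper)."""
    with tempfile.TemporaryDirectory(prefix="asset_proc_") as tmp:
        return process(Path(tmp))
```

The trade-off is less control over error reporting: the context manager swallows nothing, but cleanup errors surface as exceptions rather than log lines.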
@@ -335,7 +318,8 @@ class App(QObject):
        if self.processing_engine:
            self.main_window = MainWindow() # MainWindow now part of the App
            # Connect the signal from the MainWindow (which is triggered by the panel) to the App's slot
            connection_success = self.main_window.start_backend_processing.connect(self.on_processing_requested, Qt.ConnectionType.QueuedConnection)
            log.info(f"DEBUG: Connection result for processing_requested (Queued): {connection_success}") # <-- Modified LOG
            if not connection_success:
                log.error("*********************************************************")
@@ -348,8 +332,8 @@ class App(QObject):
            log.error("Fatal: Cannot initialize MainWindow without ProcessingEngine.")
            sys.exit(1)

    @Slot(list, dict) # Slot to receive List[SourceRule] and processing_settings dict
    def on_processing_requested(self, source_rules: list, processing_settings: dict):
        # log.info("*********************************************************") # REMOVED
        log.debug("DEBUG: App.on_processing_requested slot entered.") # DEBUG Verify Entry (Keep this one)
        # log.info("*********************************************************") # REMOVED
@@ -375,14 +359,15 @@ class App(QObject):
        self._task_results = {"processed": 0, "skipped": 0, "failed": 0}
        log.debug(f"Initialized active task count to: {self._active_tasks_count}")

        # Update GUI progress bar/status via MainPanelWidget
        self.main_window.main_panel_widget.progress_bar.setMaximum(len(source_rules))
        self.main_window.main_panel_widget.progress_bar.setValue(0)
        self.main_window.main_panel_widget.progress_bar.setFormat(f"0/{len(source_rules)} tasks")

        # --- Get paths needed for ProcessingTask ---
        try:
            # Access output path via MainPanelWidget
            output_base_path_str = self.main_window.main_panel_widget.output_path_edit.text().strip()
            if not output_base_path_str:
                log.error("Cannot queue tasks: Output directory path is empty in the GUI.")
                self.main_window.statusBar().showMessage("Error: Output directory cannot be empty.", 5000)
@@ -406,6 +391,11 @@ class App(QObject):
        # --- End Get paths ---

        # Set max threads based on GUI setting
        worker_count = processing_settings.get('workers', 1)
        self.thread_pool.setMaxThreadCount(worker_count)
        log.info(f"Set thread pool max workers to: {worker_count}")

        # Queue tasks in the thread pool
        log.debug("DEBUG: Entering task queuing loop.") # <-- Keep this log
        for i, rule in enumerate(source_rules): # Added enumerate for index logging
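The hunk above sizes the Qt thread pool from the GUI's `workers` setting. The same bounded-parallelism pattern can be sketched with the standard library instead of Qt; function names here are illustrative, not from the codebase.

```python
from concurrent.futures import ThreadPoolExecutor

def _safe(fn, rule):
    """Run one task, converting any exception into a failure flag."""
    try:
        fn(rule)
        return True
    except Exception:
        return False

def run_tasks(rules, process_one, workers: int = 1):
    """Process each rule with at most `workers` concurrent threads,
    mirroring thread_pool.setMaxThreadCount(worker_count) in the Qt code."""
    results = {"processed": 0, "failed": 0}
    with ThreadPoolExecutor(max_workers=max(1, workers)) as pool:
        for ok in pool.map(lambda r: _safe(process_one, r), rules):
            results["processed" if ok else "failed"] += 1
    return results
```

Clamping to at least one worker avoids a `ValueError` if the GUI ever hands over `0`; the Qt path would likewise want a sanity check on the user-supplied count.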
@@ -484,10 +474,10 @@ class App(QObject):
        else: # Count all other statuses (failed_preparation, failed_processing) as failed
            self._task_results["failed"] += 1

        # Update progress bar via MainPanelWidget
        total_tasks = self.main_window.main_panel_widget.progress_bar.maximum()
        completed_tasks = total_tasks - self._active_tasks_count
        self.main_window.main_panel_widget.update_progress_bar(completed_tasks, total_tasks) # Use MainPanelWidget's method

        # Update status for the specific file in the GUI (if needed)
        # self.main_window.update_file_status(rule_input_path, status, str(result_or_error) if result_or_error else "")
@@ -513,182 +503,182 @@ class App(QObject):


# --- Main CLI Execution Function (Adapted from old main()) ---
# def run_cli(args): # Accept parsed args
#     """Uses parsed arguments, sets up logging, runs processing, and reports summary for CLI mode."""
#     # parser = setup_arg_parser() # No longer needed
#     # args = parser.parse_args() # Args are passed in
#
#     # --- Validate required CLI arguments ---
#     if not args.input_paths:
#         log.error("CLI Error: Input path(s) are required for CLI mode.")
#         sys.exit(1)
#     if not args.preset:
#         log.error("CLI Error: Preset (-p/--preset) is required for CLI mode.")
#         sys.exit(1)
#     # --- End Validation ---
#
#     # Logging setup is already done outside this function in the __main__ block
#
#     start_time = time.time()
#     log.info("Asset Processor Script Started (CLI Mode)")
#
#     # --- Validate Input Paths ---
#     valid_inputs = []
#     for p_str in args.input_paths:
#         p = Path(p_str)
#         if p.exists():
#             suffix = p.suffix.lower()
#             # TODO: Add support for other archive types if needed (.rar, .7z)
#             if p.is_dir() or (p.is_file() and suffix == '.zip'):
#                 valid_inputs.append(p_str) # Store the original string path
#             else:
#                 log.warning(f"Input is not a directory or a supported archive type (.zip), skipping: {p_str}")
#         else:
#             log.warning(f"Input path not found, skipping: {p_str}")
#
#     if not valid_inputs:
#         log.error("No valid input paths found. Exiting.")
#         sys.exit(1) # Exit with error code
#
#     # --- Determine Output Directory ---
#     output_dir_str = args.output_dir # Get value from args (might be None)
#     if not output_dir_str:
#         log.debug("Output directory not specified via -o, reading default from app_settings.json via load_base_config().")
#         try:
#             base_config = load_base_config()
#             output_dir_str = base_config.get('OUTPUT_BASE_DIR')
#             if not output_dir_str:
#                 log.error("Output directory not specified with -o and 'OUTPUT_BASE_DIR' not found or empty in app_settings.json. Exiting.")
#                 sys.exit(1)
#             log.info(f"Using default output directory from app_settings.json: {output_dir_str}")
#         except ConfigurationError as e:
#             log.error(f"Error reading base configuration for OUTPUT_BASE_DIR: {e}")
#             sys.exit(1)
#         except Exception as e:
#             log.exception(f"Unexpected error reading base configuration for OUTPUT_BASE_DIR: {e}")
#             sys.exit(1)
#
#     # --- Resolve Output Path ---
#     output_path_obj = Path(output_dir_str).resolve() # Resolve to absolute path
#
#     # --- Validate and Setup Output Directory ---
#     try:
#         log.info(f"Ensuring output directory exists: {output_path_obj}")
#         output_path_obj.mkdir(parents=True, exist_ok=True)
#         output_dir_for_processor = str(output_path_obj)
#     except Exception as e:
#         log.error(f"Cannot create or access output directory '{output_path_obj}': {e}", exc_info=True)
#         sys.exit(1)
#
#     # --- Load Configuration ---
#     try:
#         config = Configuration(args.preset) # Pass preset name from args
#         log.info(f"Configuration loaded for preset: {args.preset}")
#     except ConfigurationError as e:
#         log.error(f"Error loading configuration for preset '{args.preset}': {e}")
#         sys.exit(1)
#     except Exception as e:
#         log.exception(f"Unexpected error loading configuration: {e}")
#         sys.exit(1)
#
#     # --- Initialize Processing Engine ---
#     try:
#         engine = ProcessingEngine(config)
#         log.info("ProcessingEngine initialized for CLI mode.")
#     except Exception as e:
#         log.exception(f"Fatal: Failed to initialize ProcessingEngine: {e}")
#         sys.exit(1)
#
#     # --- Execute Processing (Simplified Sequential for now) ---
#     # TODO: Re-implement parallel processing using concurrent.futures if needed.
#     # TODO: CLI mode needs a way to generate SourceRule objects.
#     # For now, we'll pass a simplified structure or assume engine handles it.
#     # This part likely needs significant adaptation based on ProcessingEngine.process requirements.
#     log.warning("CLI processing currently uses simplified sequential execution.")
#     log.warning("SourceRule generation for CLI mode is basic and may need refinement.")
#
#     processed_count = 0
#     skipped_count = 0 # Placeholder
#     failed_count = 0
#     results_list = [] # Placeholder
#
#     for input_path_str in valid_inputs:
#         log.info(f"--- Processing Input: {Path(input_path_str).name} ---")
#         try:
#             # --- Basic SourceRule Creation (Needs Review/Adaptation) ---
#             # This is a placeholder. The engine likely needs more detailed file info.
#             # We might need to extract file list here like the GUI does.
#             input_path_obj = Path(input_path_str)
#             # Example: Create a rule assuming the input is a single asset
#             # This won't handle multi-asset archives correctly without more logic.
#             asset_name = input_path_obj.stem # Basic assumption
#             # File list extraction would be needed here for proper FileRule creation
#             # file_list = _extract_file_list(input_path_str) # Need to define/import this helper
#             # file_rules = [FileRule(file_path=f) for f in file_list] if file_list else []
#             # asset_rule = AssetRule(asset_name=asset_name, files=file_rules)
#             # rule = SourceRule(input_path=input_path_str, assets=[asset_rule], supplier_identifier=config.settings.get('supplier_identifier')) # Access from config object
#             # --- End Placeholder ---
#
#             # --- TEMPORARY: Call engine process with just config and path ---
#             # This assumes engine.process can handle this or needs adaptation.
#             # If engine.process strictly requires a SourceRule, this will fail.
#             # result = engine.process(config=config, input_path=input_path_obj, overwrite=args.overwrite)
#             # --- END TEMPORARY ---
#
#             # --- Attempt with Placeholder SourceRule (More likely signature) ---
#             # This still requires file list extraction and rule creation logic
#             log.error("CLI Processing Logic Incomplete: SourceRule creation and engine call need implementation.")
#             # Example (requires file list extraction and rule building):
#             # rule = build_basic_source_rule(input_path_str, config) # Hypothetical function
#             # if rule:
#             #     engine.process(rule) # Assuming process takes one rule
#             #     processed_count += 1 # Basic success tracking
#             # else:
#             #     log.warning(f"Could not create basic rule for {input_path_str}, skipping.")
#             #     failed_count += 1
#             # --- End Placeholder ---
#             raise NotImplementedError("CLI processing logic for SourceRule creation and engine call is not fully implemented.")
#
#         except NotImplementedError as e:
#             log.error(f"Stopping CLI run due to incomplete implementation: {e}")
#             failed_count += 1
#             break # Stop processing further items
#         except Exception as e:
#             log.exception(f"Error processing input '{Path(input_path_str).name}': {e}")
#             failed_count += 1
#             results_list.append((input_path_str, "failed", str(e))) # Placeholder result
#
#     # --- Report Summary ---
#     duration = time.time() - start_time
#     log.info("=" * 40)
#     log.info("CLI Processing Summary")
#     log.info(f"  Duration: {duration:.2f} seconds")
#     log.info(f"  Inputs Attempted: {len(valid_inputs)}")
#     log.info(f"  Successfully Processed: {processed_count}")
#     log.info(f"  Skipped: {skipped_count}")
#     log.info(f"  Failed: {failed_count}")
#
#     exit_code = 0
#     if failed_count > 0:
#         log.warning("Failures occurred.")
#         # Log specific errors if results_list was populated
#         for input_path, status, err_msg in results_list:
#             if status == "failed":
#                 log.warning(f"  - {Path(input_path).name}: {err_msg}")
#         exit_code = 1 # Exit with error code if failures occurred
#
#     # --- Blender Script Execution (Optional - Copied from old main()) ---
#     # This section might need review based on current config/engine
#     run_blender = False # Placeholder, add logic if needed
#     if run_blender:
#         # ... (Blender execution logic from old main() would go here) ...
#         log.warning("Blender script execution from CLI not yet re-implemented.")
#         pass
#
#     # --- Final Exit ---
#     log.info("Asset Processor Script Finished (CLI Mode).")
#     sys.exit(exit_code)


if __name__ == "__main__":
|
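The per-input `try/except` and summary reporting above reduce to a small aggregate-and-exit-code pattern. A minimal runnable sketch of that pattern, where `process_one` is a hypothetical stand-in for the real SourceRule creation and engine call:

```python
import logging
import time
from pathlib import Path

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("cli_sketch")

def process_one(input_path_str: str) -> None:
    # Hypothetical stand-in for the real SourceRule creation + engine call.
    if input_path_str.endswith(".bad"):
        raise ValueError("unsupported input")

def run_cli(valid_inputs) -> int:
    """Process each input, count outcomes, and derive the process exit code."""
    start_time = time.time()
    processed_count = failed_count = 0
    results_list = []
    for input_path_str in valid_inputs:
        try:
            process_one(input_path_str)
            processed_count += 1
        except Exception as e:
            log.error(f"Error processing input '{Path(input_path_str).name}': {e}")
            failed_count += 1
            results_list.append((input_path_str, "failed", str(e)))
    duration = time.time() - start_time
    log.info(f"Processed: {processed_count}, Failed: {failed_count} "
             f"in {duration:.2f} seconds")
    return 1 if failed_count > 0 else 0

exit_code = run_cli(["[Wood]_a.zip", "broken.bad", "[Metal]_c.zip"])
```

Returning a non-zero exit code when any input fails matches the `sys.exit(exit_code)` convention used at the end of the CLI path.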
240  monitor.py

@@ -6,18 +6,33 @@ import time
import logging
import re
import shutil
import tempfile  # For potential temporary workspace if needed directly
from pathlib import Path
from typing import Optional  # Needed for Optional[Path] in _process_archive_task
from concurrent.futures import ThreadPoolExecutor
from watchdog.observers.polling import PollingObserver as Observer  # Use polling for better compatibility
from watchdog.events import FileSystemEventHandler, FileCreatedEvent

# --- Import from local modules ---
from configuration import load_config, ConfigurationError  # Assuming load_config is here
from processing_engine import ProcessingEngine, ProcessingError  # Assuming ProcessingError exists
from rule_structure import SourceRule  # Assuming SourceRule is here

# Assuming workspace utils exist - adjust path if necessary
try:
    from utils.workspace_utils import prepare_processing_workspace, WorkspaceError
except ImportError:
    log = logging.getLogger(__name__)  # Need the logger early for this message
    log.warning("Could not import workspace_utils. Workspace preparation/cleanup might fail.")

    # Define dummy functions/exceptions if the import fails to avoid NameErrors
    # later, but log prominently.
    def prepare_processing_workspace(archive_path: Path) -> Path:
        log.error("prepare_processing_workspace is not available!")
        # Create a dummy temp dir to allow code flow, but it won't be the real one
        return Path(tempfile.mkdtemp(prefix="dummy_workspace_"))

    class WorkspaceError(Exception):
        pass

from utils.prediction_utils import generate_source_rule_from_archive, PredictionError

# --- Configuration ---
# Read from environment variables with defaults
@@ -33,13 +48,14 @@ DEFAULT_WORKERS = max(1, os.cpu_count() // 2 if os.cpu_count() else 1)
NUM_WORKERS = int(os.environ.get('NUM_WORKERS', str(DEFAULT_WORKERS)))

# --- Logging Setup ---
# Configure logging (ensure the logger is available before potential import errors)
log_level = getattr(logging, LOG_LEVEL_STR, logging.INFO)
log_format = '%(asctime)s [%(levelname)-8s] %(name)s: %(message)s'
date_format = '%Y-%m-%d %H:%M:%S'
logging.basicConfig(level=log_level, format=log_format, datefmt=date_format, handlers=[logging.StreamHandler(sys.stdout)])
log = logging.getLogger("monitor")  # Define the logger after basicConfig

# Log configuration values after the logger is set up
log.info(f"Logging level set to: {logging.getLevelName(log_level)}")
log.info(f"Monitoring Input Directory: {INPUT_DIR}")
log.info(f"Output Directory: {OUTPUT_DIR}")

@@ -51,18 +67,8 @@ log.info(f"Max Workers: {NUM_WORKERS}")

# --- Constants ---
SUPPORTED_SUFFIXES = ['.zip', '.rar', '.7z']

# --- Watchdog Event Handler ---
class ZipHandler(FileSystemEventHandler):

@@ -77,10 +83,13 @@ class ZipHandler(FileSystemEventHandler):
        self.output_dir.mkdir(parents=True, exist_ok=True)
        self.processed_dir.mkdir(parents=True, exist_ok=True)
        self.error_dir.mkdir(parents=True, exist_ok=True)
        # Initialize the ThreadPoolExecutor
        self.executor = ThreadPoolExecutor(max_workers=NUM_WORKERS)
        log.info(f"Handler initialized, target directories ensured. ThreadPoolExecutor started with {NUM_WORKERS} workers.")

    def on_created(self, event: FileCreatedEvent):
        """Called when a file or directory is created. Submits a task to the executor."""
        if event.is_directory:
            return

@@ -88,87 +97,39 @@ class ZipHandler(FileSystemEventHandler):
        log.debug(f"File creation event detected: {src_path}")

        # Check whether the file has a supported archive extension
        if src_path.suffix.lower() not in SUPPORTED_SUFFIXES:
            log.debug(f"Ignoring file with unsupported extension: {src_path.name}")
            return

        log.info(f"Detected new archive: {src_path.name}. Waiting {PROCESS_DELAY}s before queueing...")
        time.sleep(PROCESS_DELAY)  # Wait for the file write to complete

        # Re-check that the file still exists (it might have been temporary or moved quickly)
        if not src_path.exists():
            log.warning(f"File disappeared after delay: {src_path.name}")
            return

        log.info(f"Queueing processing task for: {src_path.name}")
        # Submit the processing task to the thread pool,
        # passing the necessary context (target directories).
        self.executor.submit(
            _process_archive_task,
            archive_path=src_path,
            output_dir=self.output_dir,
            processed_dir=self.processed_dir,
            error_dir=self.error_dir
        )

    def shutdown(self):
        """Shuts down the thread pool executor."""
        log.info("Shutting down thread pool executor...")
        self.executor.shutdown(wait=True)
        log.info("Executor shut down.")

    # move_file remains largely the same, but is now static so it can be
    # called from _process_archive_task outside the instance context.
    @staticmethod
    def move_file(src: Path, dest_dir: Path, reason: str):
        """Safely moves a file, handling potential name collisions."""
        if not src.exists():
            log.warning(f"Source file {src} does not exist, cannot move for reason: {reason}.")

@@ -190,6 +151,95 @@ class ZipHandler(FileSystemEventHandler):
            log.exception(f"Failed to move file {src.name} to {dest_dir}: {e}")


# --- Processing Task Function ---
def _process_archive_task(archive_path: Path, output_dir: Path, processed_dir: Path, error_dir: Path):
    """Task executed by the ThreadPoolExecutor to process a single archive file."""
    log.info(f"[Task:{archive_path.name}] Starting processing.")
    temp_workspace_path: Optional[Path] = None
    config = None
    source_rule = None
    move_reason = "unknown_error"  # Default reason if we exit early

    try:
        # --- a. Load Configuration ---
        log.debug(f"[Task:{archive_path.name}] Loading configuration...")
        # Assuming load_config() loads the main app config (e.g., app_settings.json)
        # and potentially merges preset defaults or paths. Adjust if needed.
        config = load_config()  # Might need a path argument depending on implementation
        if not config:
            raise ConfigurationError("Failed to load application configuration.")
        log.debug(f"[Task:{archive_path.name}] Configuration loaded.")

        # --- b. Generate Prediction (SourceRule) ---
        log.debug(f"[Task:{archive_path.name}] Generating source rule prediction...")
        # This function now handles preset extraction and validation internally
        source_rule = generate_source_rule_from_archive(archive_path, config)
        log.info(f"[Task:{archive_path.name}] SourceRule generated successfully.")

        # --- c. Prepare Workspace ---
        log.debug(f"[Task:{archive_path.name}] Preparing processing workspace...")
        # This utility handles extraction and returns the temp dir path
        temp_workspace_path = prepare_processing_workspace(archive_path)
        log.info(f"[Task:{archive_path.name}] Workspace prepared at: {temp_workspace_path}")

        # --- d. Run Processing Engine ---
        log.debug(f"[Task:{archive_path.name}] Initializing Processing Engine...")
        # Pass the necessary parts of the config to the engine
        engine = ProcessingEngine(config=config, output_base_dir=output_dir)
        log.info(f"[Task:{archive_path.name}] Running Processing Engine...")
        # The engine uses the source_rule to guide processing of the workspace files
        engine.run(workspace_path=temp_workspace_path, source_rule=source_rule)
        log.info(f"[Task:{archive_path.name}] Processing Engine finished successfully.")
        move_reason = "processed"  # Set success reason

        # --- e. Handle Results (implicit success if no exception) ---
        # If engine.run completes without an exception, assume success for now.
        # More granular results could be returned by engine.run if needed.
        # Moving the archive is handled in the finally block based on move_reason.

        # --- f. Blender Integration (Placeholder) ---
        # TODO: Add a call to utils.blender_utils.run_blender_script if needed later
        # if config.get('blender', {}).get('run_script_after_processing'):
        #     log.info(f"[Task:{archive_path.name}] Running Blender script (placeholder)...")
        #     # blender_utils.run_blender_script(output_dir / source_rule.name, config)

    except FileNotFoundError as e:
        log.error(f"[Task:{archive_path.name}] Prerequisite file not found: {e}")
        move_reason = "file_not_found"
    except (ConfigurationError, PredictionError, WorkspaceError, ProcessingError) as e:
        log.error(f"[Task:{archive_path.name}] Processing failed: {e}", exc_info=True)
        move_reason = type(e).__name__.lower()  # e.g., "predictionerror"
    except Exception as e:
        log.exception(f"[Task:{archive_path.name}] An unexpected error occurred during processing: {e}")
        move_reason = "unexpected_exception"

    finally:
        # --- Move the Original Archive ---
        log.debug(f"[Task:{archive_path.name}] Moving original archive based on outcome: {move_reason}")
        dest_dir = processed_dir if move_reason == "processed" else error_dir
        try:
            # Use the static method from the handler class
            ZipHandler.move_file(archive_path, dest_dir, move_reason)
        except Exception as move_err:
            log.exception(f"[Task:{archive_path.name}] CRITICAL: Failed to move archive file {archive_path} after processing: {move_err}")

        # --- g. Clean Up the Workspace ---
        if temp_workspace_path and temp_workspace_path.exists():
            log.debug(f"[Task:{archive_path.name}] Cleaning up workspace: {temp_workspace_path}")
            try:
                shutil.rmtree(temp_workspace_path)
                log.info(f"[Task:{archive_path.name}] Workspace cleaned up successfully.")
            except OSError as e:
                log.error(f"[Task:{archive_path.name}] Error removing temporary workspace {temp_workspace_path}: {e}", exc_info=True)
        elif temp_workspace_path:
            log.warning(f"[Task:{archive_path.name}] Temporary workspace path recorded but not found for cleanup: {temp_workspace_path}")

        log.info(f"[Task:{archive_path.name}] Processing task finished with status: {move_reason}")


# --- Main Monitor Loop ---
if __name__ == "__main__":
    # Ensure the input directory exists

@@ -211,11 +261,13 @@ if __name__ == "__main__":
            # Keep the main thread alive; the observer runs in a background thread
            time.sleep(1)
    except KeyboardInterrupt:
        log.info("Keyboard interrupt received, stopping monitor and executor...")
        observer.stop()
        event_handler.shutdown()  # Gracefully shut down the executor
    except Exception as e:
        log.exception(f"An unexpected error occurred in the main loop: {e}")
        observer.stop()
        event_handler.shutdown()  # Ensure shutdown on other exceptions too

    observer.join()
    log.info("Monitor stopped.")
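The `on_created` → `executor.submit` → `shutdown(wait=True)` flow above can be exercised without watchdog or the real engine. A minimal sketch where `_handle` is a hypothetical stand-in for `_process_archive_task`:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

results = []

def _handle(archive_path: Path, error_dir: Path):
    # Stand-in for _process_archive_task: record where the archive would move.
    move_reason = "processed" if archive_path.suffix == ".zip" else "unsupported"
    results.append((archive_path.name, move_reason))

executor = ThreadPoolExecutor(max_workers=2)
for name in ["[Wood]_planks.zip", "[Metal]_sheet.zip"]:
    # Mirrors ZipHandler.on_created: one task per detected archive,
    # with the target directories passed as context.
    executor.submit(_handle, archive_path=Path(name), error_dir=Path("error"))

# Mirrors ZipHandler.shutdown(): block until queued tasks finish.
executor.shutdown(wait=True)
```

`shutdown(wait=True)` is what makes the `KeyboardInterrupt` path safe: any archives already queued are fully processed and moved before the monitor exits.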
processing_engine.py

@@ -430,12 +430,6 @@ class ProcessingEngine:
        self._cleanup_workspace()

    def _cleanup_workspace(self):
        """Removes the temporary workspace directory if it exists."""
        if self.temp_dir and self.temp_dir.exists():
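With the `_setup_workspace` stub removed, the engine's per-run workspace lifecycle is just "create in `process`, remove in `_cleanup_workspace`". A self-contained sketch of that pattern (class and method names here only mirror the engine; they are not the real implementation):

```python
import shutil
import tempfile
from pathlib import Path

class WorkspaceLifecycle:
    """Minimal sketch: a temp workspace created per run inside process()
    and removed by _cleanup_workspace(), mirroring ProcessingEngine."""

    def __init__(self):
        self.temp_dir = None

    def process(self) -> bool:
        # The workspace is created per run rather than in a setup helper,
        # so each invocation gets a fresh, isolated directory.
        self.temp_dir = Path(tempfile.mkdtemp(prefix="engine_ws_"))
        try:
            scratch = self.temp_dir / "scratch.txt"
            scratch.write_text("intermediate data")
            return scratch.exists()
        finally:
            self._cleanup_workspace()

    def _cleanup_workspace(self):
        """Removes the temporary workspace directory if it exists."""
        if self.temp_dir and self.temp_dir.exists():
            shutil.rmtree(self.temp_dir)

engine = WorkspaceLifecycle()
ok = engine.process()
gone = not engine.temp_dir.exists()
```

Putting the cleanup in `finally` guarantees the workspace is removed even when processing raises, which is the same invariant the monitor's `_process_archive_task` enforces.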
197  utils/prediction_utils.py  (new file)

@@ -0,0 +1,197 @@
# utils/prediction_utils.py

import logging
import re
from pathlib import Path
from typing import Optional, Dict, Any

# Assuming these imports based on project structure and task description
from rule_structure import SourceRule, RuleSet, MapRule, AssetRule
from configuration import load_preset  # Assuming preset loading is handled here or similar
# If RuleBasedPredictionHandler exists and is the intended mechanism:
# from gui.rule_based_prediction_handler import RuleBasedPredictionHandler
# Or, if we need to replicate its core logic:
# from utils.structure_analyzer import analyze_archive_structure  # Hypothetical utility, not yet present

log = logging.getLogger(__name__)

# Regex to extract the preset name (same convention as monitor.py).
# Matches "[PresetName]_anything.zip/rar/7z"
PRESET_FILENAME_REGEX = re.compile(r"^\[?([a-zA-Z0-9_-]+)\]?_.*\.(zip|rar|7z)$", re.IGNORECASE)


class PredictionError(Exception):
    """Custom exception for prediction failures."""
    pass


def generate_source_rule_from_archive(archive_path: Path, config: Dict[str, Any]) -> SourceRule:
    """
    Generates a SourceRule hierarchy based on rules defined in a preset,
    determined by the archive filename.

    Args:
        archive_path: Path to the input archive file.
        config: The loaded application configuration dictionary, expected
            to contain preset information or a way to load it.

    Returns:
        The generated SourceRule hierarchy.

    Raises:
        PredictionError: If the preset cannot be determined or loaded, or
            if rule generation fails.
        FileNotFoundError: If the archive_path does not exist.
    """
    if not archive_path.is_file():
        raise FileNotFoundError(f"Archive file not found: {archive_path}")

    log.debug(f"Generating SourceRule for archive: {archive_path.name}")

    # --- 1. Extract Preset Name ---
    match = PRESET_FILENAME_REGEX.match(archive_path.name)
    if not match:
        raise PredictionError(f"Filename '{archive_path.name}' does not match expected format '[preset]_filename.ext'. Cannot determine preset.")

    preset_name = match.group(1)
    log.info(f"Extracted preset name: '{preset_name}' from {archive_path.name}")

    # --- 2. Load Preset Rules ---
    # Option A: Presets are pre-loaded in config (e.g., under a 'presets' key)
    # preset_rules_dict = config.get('presets', {}).get(preset_name)
    # Option B: Load the preset dynamically using a utility
    try:
        # Assuming load_preset takes the name and maybe the base config/path;
        # adjust based on the actual signature of load_preset.
        preset_config = load_preset(preset_name)  # This might need a config path or dict
        if not preset_config:
            raise PredictionError(f"Preset '{preset_name}' configuration is empty or invalid.")
        # Assuming the preset config directly contains the RuleSet structure,
        # or needs parsing into a RuleSet. This part is highly dependent on how
        # presets are stored and loaded; for now, assume preset_config['rules']
        # IS the RuleSet dictionary.
        if not isinstance(preset_config.get('rules'), dict):  # Basic validation
            raise PredictionError(f"Preset '{preset_name}' does not contain a valid 'rules' dictionary.")
        rule_set_dict = preset_config['rules']
        # Deserialize this dict into a RuleSet object.
        # Assuming RuleSet has a class method or similar for this.
        rule_set = RuleSet.from_dict(rule_set_dict)  # Placeholder for actual deserialization

    except FileNotFoundError:
        raise PredictionError(f"Preset file for '{preset_name}' not found.")
    except Exception as e:
        log.exception(f"Failed to load or parse preset '{preset_name}': {e}")
        raise PredictionError(f"Failed to load or parse preset '{preset_name}': {e}")

    if not rule_set:
        raise PredictionError(f"Failed to obtain RuleSet for preset '{preset_name}'.")

    log.debug(f"Successfully loaded RuleSet for preset: {preset_name}")

    # --- 3. Generate SourceRule (Simplified Rule-Based Approach) ---
    # This simulates what a RuleBasedPredictionHandler might do, but without
    # needing the actual extracted files for *this* step. The rules themselves
    # define the expected structure. The ProcessingEngine will later apply this
    # rule to the actual extracted files.

    # Create the root SourceRule based on the archive name and the loaded RuleSet.
    # The actual structure (AssetRules, MapRules) comes directly from the RuleSet.
    # We might need to adapt the archive name slightly (e.g., remove the preset
    # prefix) for the root node name, depending on the desired output structure.
    root_name = archive_path.stem  # Or further processing if needed
    source_rule = SourceRule(name=root_name, rule_set=rule_set)

    # Potentially add logic here if basic archive structure analysis *is* needed
    # for rule generation (e.g., using utils.structure_analyzer if it exists):
    # analyze_archive_structure(archive_path, source_rule)  # Example

    log.info(f"Generated initial SourceRule for '{archive_path.name}' based on preset '{preset_name}'.")

    # --- 4. Return SourceRule ---
    # No temporary workspace is needed or created in this function based on the
    # current plan, so no cleanup is required here.
    return source_rule


# Example Usage (Conceptual - requires actual config/presets)
if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    log.info("Testing prediction_utils...")

    # Create dummy files/config for testing
    dummy_archive = Path("./[TestPreset]_MyAsset.zip")
    dummy_archive.touch()

    # Need a dummy preset file `Presets/TestPreset.json`
    preset_dir = Path(__file__).parent.parent / "Presets"
    preset_dir.mkdir(exist_ok=True)
    dummy_preset_path = preset_dir / "TestPreset.json"
    dummy_preset_content = """
    {
        "name": "TestPreset",
        "description": "A dummy preset for testing",
        "rules": {
            "map_rules": [
                {"pattern": ".*albedo.*", "map_type": "Albedo", "color_space": "sRGB"},
                {"pattern": ".*normal.*", "map_type": "Normal", "color_space": "Non-Color"}
            ],
            "asset_rules": [
                {"pattern": ".*", "material_name": "{asset_name}"}
            ]
        },
        "settings": {}
    }
    """
    # Need a RuleSet.from_dict implementation for this to work
    # try:
    #     with open(dummy_preset_path, 'w') as f:
    #         f.write(dummy_preset_content)
    #     log.info(f"Created dummy preset: {dummy_preset_path}")

    #     # Dummy config - structure depends on the actual implementation
    #     dummy_config = {
    #         'paths': {'presets': str(preset_dir)},
    #         # 'presets': {'TestPreset': json.loads(dummy_preset_content)}  # Alt if pre-loaded
    #     }

    #     # Mock load_preset if it's complex
    #     original_load_preset = load_preset
    #     def mock_load_preset(name):
    #         if name == "TestPreset":
    #             import json
    #             return json.loads(dummy_preset_content)
    #         else:
    #             raise FileNotFoundError
    #     load_preset = mock_load_preset  # Monkey patch

    #     # Mock RuleSet.from_dict
    #     original_from_dict = RuleSet.from_dict
    #     def mock_from_dict(data):
    #         # Basic mock - replace with actual logic
    #         mock_rule_set = RuleSet()
    #         mock_rule_set.map_rules = [MapRule(**mr) for mr in data.get('map_rules', [])]
    #         mock_rule_set.asset_rules = [AssetRule(**ar) for ar in data.get('asset_rules', [])]
    #         return mock_rule_set
    #     RuleSet.from_dict = mock_from_dict  # Monkey patch

    #     generated_rule = generate_source_rule_from_archive(dummy_archive, dummy_config)
    #     log.info(f"Successfully generated SourceRule: {generated_rule.name}")
    #     log.info(f"  RuleSet Map Rules: {len(generated_rule.rule_set.map_rules)}")
    #     log.info(f"  RuleSet Asset Rules: {len(generated_rule.rule_set.asset_rules)}")
    #     # Add more detailed checks if needed
    # except (PredictionError, FileNotFoundError) as e:
    #     log.error(f"Test failed: {e}")
    # except Exception as e:
    #     log.exception("Unexpected error during test")
    # finally:
    #     # Clean up dummy files
    #     if dummy_archive.exists():
    #         dummy_archive.unlink()
    #     if dummy_preset_path.exists():
    #         dummy_preset_path.unlink()
    #     # Restore mocked functions
    #     load_preset = original_load_preset
    #     RuleSet.from_dict = original_from_dict
    #     log.info("Test cleanup complete.")

    log.warning("Note: Main execution block is commented out as it requires specific implementations of load_preset and RuleSet.from_dict.")
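The filename-to-preset mapping above hinges entirely on `PRESET_FILENAME_REGEX`, which can be checked in isolation. A runnable sketch using the same pattern (the `extract_preset_name` helper is illustrative, not part of the module):

```python
import re

# Same pattern as PRESET_FILENAME_REGEX in utils/prediction_utils.py:
# matches "[PresetName]_anything.zip/rar/7z", brackets optional.
PRESET_FILENAME_REGEX = re.compile(r"^\[?([a-zA-Z0-9_-]+)\]?_.*\.(zip|rar|7z)$", re.IGNORECASE)

def extract_preset_name(filename: str):
    """Return the preset name embedded in an archive filename, or None."""
    match = PRESET_FILENAME_REGEX.match(filename)
    return match.group(1) if match else None

bracketed = extract_preset_name("[TestPreset]_MyAsset.zip")  # bracketed form
bare = extract_preset_name("Wood_Planks.7z")                 # bare prefix form
rejected = extract_preset_name("no_preset.txt")              # unsupported suffix
```

Note that for the bare form the character class `[a-zA-Z0-9_-]+` backtracks until the literal `_` can match, so `Wood_Planks.7z` yields `Wood`, not `Wood_Planks`.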
87
utils/workspace_utils.py
Normal file
87
utils/workspace_utils.py
Normal file
@ -0,0 +1,87 @@
# utils/workspace_utils.py

import tempfile
import shutil
import zipfile
import logging
from pathlib import Path
from typing import Union

# Get a logger for this module
log = logging.getLogger(__name__)

# Define supported archive extensions (add more as needed, e.g., '.rar', '.7z')
# Requires additional libraries like patoolib for non-zip formats.
SUPPORTED_ARCHIVES = {'.zip'}

def prepare_processing_workspace(input_path_str: Union[str, Path]) -> Path:
    """
    Prepares a temporary workspace for processing an asset source.

    Handles copying directory contents or extracting supported archives
    into a unique temporary directory.

    Args:
        input_path_str: The path (as a string or Path object) to the input
            directory or archive file.

    Returns:
        The Path object representing the created temporary workspace directory.
        The caller is responsible for cleaning up this directory.

    Raises:
        FileNotFoundError: If the input_path does not exist.
        ValueError: If the input_path is not a directory or a supported archive type.
        zipfile.BadZipFile: If a zip file is corrupted.
        OSError: If there are issues creating the temp directory or copying files.
    """
    input_path = Path(input_path_str)
    log.info(f"Preparing workspace for input: {input_path}")

    if not input_path.exists():
        raise FileNotFoundError(f"Input path does not exist: {input_path}")

    # Create a secure temporary directory
    try:
        temp_workspace_dir = tempfile.mkdtemp(prefix="asset_proc_")
        prepared_workspace_path = Path(temp_workspace_dir)
        log.info(f"Created temporary workspace: {prepared_workspace_path}")
    except OSError as e:
        log.error(f"Failed to create temporary directory: {e}")
        raise  # Re-raise the exception

    try:
        # Check if input is a directory or a supported archive file
        if input_path.is_dir():
            log.info(f"Input is a directory, copying contents to workspace: {input_path}")
            # Copy directory contents into the temp workspace
            shutil.copytree(input_path, prepared_workspace_path, dirs_exist_ok=True)
        elif input_path.is_file() and input_path.suffix.lower() in SUPPORTED_ARCHIVES:
            log.info(f"Input is a supported archive ({input_path.suffix}), extracting to workspace: {input_path}")
            if input_path.suffix.lower() == '.zip':
                with zipfile.ZipFile(input_path, 'r') as zip_ref:
                    zip_ref.extractall(prepared_workspace_path)
            # Add elif blocks here for other archive types (e.g., using patoolib)
            # elif input_path.suffix.lower() in ['.rar', '.7z']:
            #     import patoolib
            #     patoolib.extract_archive(str(input_path), outdir=str(prepared_workspace_path))
            else:
                # This case should ideally not be reached if SUPPORTED_ARCHIVES is correct
                raise ValueError(f"Archive type {input_path.suffix} marked as supported but no extraction logic defined.")
        else:
            # Handle unsupported input types
            raise ValueError(f"Unsupported input type: {input_path}. Must be a directory or a supported archive ({', '.join(SUPPORTED_ARCHIVES)}).")

        log.debug(f"Workspace preparation successful for: {input_path}")
        return prepared_workspace_path

    except (FileNotFoundError, ValueError, zipfile.BadZipFile, OSError, ImportError) as e:
        # Clean up the created temp directory if preparation fails mid-way
        log.error(f"Error during workspace preparation for {input_path}: {e}. Cleaning up workspace.")
        if prepared_workspace_path.exists():
            try:
                shutil.rmtree(prepared_workspace_path)
                log.info(f"Cleaned up failed workspace: {prepared_workspace_path}")
            except OSError as cleanup_error:
                log.error(f"Failed to cleanup workspace {prepared_workspace_path} after error: {cleanup_error}")
        raise  # Re-raise the original exception
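For the archive path, the helper boils down to `tempfile.mkdtemp` plus `zipfile.extractall`. A self-contained sketch of that same flow (the file names and contents below are hypothetical throwaway data, not part of the tool) shows why the docstring insists the caller owns cleanup:

```python
import shutil
import tempfile
import zipfile
from pathlib import Path

# Build a tiny zip archive to stand in for an asset source.
src_dir = Path(tempfile.mkdtemp(prefix="demo_src_"))
(src_dir / "albedo.png").write_bytes(b"fake image data")
archive = src_dir / "asset.zip"
with zipfile.ZipFile(archive, "w") as zf:
    zf.write(src_dir / "albedo.png", arcname="albedo.png")

# Mirror prepare_processing_workspace(): unique temp dir, then extract into it.
workspace = Path(tempfile.mkdtemp(prefix="asset_proc_"))
with zipfile.ZipFile(archive, "r") as zip_ref:
    zip_ref.extractall(workspace)

extracted = sorted(p.name for p in workspace.iterdir())
print(extracted)  # ['albedo.png']

# Nothing deletes the workspace automatically; the caller must clean up.
shutil.rmtree(workspace)
shutil.rmtree(src_dir)
```

Using `mkdtemp` rather than `TemporaryDirectory` is deliberate here: the workspace must outlive the preparing function so downstream processing (and eventually `monitor.py`'s thread pool) can read from it before deleting it.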