Compare commits

3 Commits: 74b3d008ea ... 6b704c561a

| SHA1 | Author | Date |
|---|---|---|
| 6b704c561a | | |
| 336d698f9b | | |
| 01c8f68ea0 | | |
@@ -44,16 +44,19 @@ The GUI has been refactored into several key components:

The `MainWindow` class acts as the main application window and **coordinator** for the GUI. Its primary responsibilities now include:

* Setting up the main window structure (using a `QSplitter`) and menu bar.
* Instantiating and arranging the major GUI widgets:
    * `PresetEditorWidget` (providing selector and JSON editor parts)
    * `LLMEditorWidget` (for LLM settings)
    * `MainPanelWidget` (containing the rule view and processing controls)
    * `LogConsoleWidget`
* **Layout Management:** Placing the preset selector statically and using a `QStackedWidget` to switch between the `PresetEditorWidget`'s JSON editor and the `LLMEditorWidget`.
* **Editor Switching:** Handling the `preset_selection_changed_signal` from `PresetEditorWidget` to switch the stacked editor view (`_on_preset_selection_changed` slot).
* Connecting signals and slots between widgets, models (`UnifiedViewModel`), and handlers (`LLMInteractionHandler`, `AssetRestructureHandler`).
* Managing the overall application state related to GUI interactions (e.g., enabling/disabling controls).
* Handling top-level actions like loading sources (drag-and-drop), initiating predictions (`update_preview`), and starting the processing task (`_on_process_requested`).
* Managing background prediction threads (Rule-Based via `QThread`, LLM via `LLMInteractionHandler`).
* Implementing slots (`_on_rule_hierarchy_ready`, `_on_llm_prediction_ready_from_handler`, `_on_prediction_error`, `_handle_prediction_completion`) to update the model/view when prediction results/errors arrive.

### `MainPanelWidget` (`gui/main_panel_widget.py`)

@@ -69,7 +72,10 @@ This widget contains the central part of the GUI, including:

This widget provides the interface for managing presets:

* Loading, saving, and editing preset files (`Presets/*.json`).
* Displaying preset rules and settings in a tabbed JSON editor.
* Providing the preset selection list (`QListWidget`) including the "LLM Interpretation" option.
* **Refactored:** Exposes its selector (`selector_container`) and JSON editor (`json_editor_container`) as separate widgets for use by `MainWindow`.
* Emits `preset_selection_changed_signal` when the selection changes.

### `LogConsoleWidget` (`gui/log_console_widget.py`)

@@ -79,6 +85,15 @@ This widget displays application logs within the GUI:
* Integrates with Python's `logging` system via a custom `QtLogHandler`.
* Can be shown/hidden via the main window's "View" menu.
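
The `QtLogHandler` bridge can be illustrated with a plain-`logging` sketch. This is not the project's actual implementation: the real handler emits a Qt signal into the console widget, whereas here a plain callback stands in for that signal, and the class name and format string are illustrative.

```python
import logging

class ConsoleRelayHandler(logging.Handler):
    """QtLogHandler-style bridge (sketch): format each record and
    forward it to a callback; the real widget emits a Qt signal here."""

    def __init__(self, emit_callback):
        super().__init__()
        self.emit_callback = emit_callback
        self.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))

    def emit(self, record):
        try:
            # In the real handler this would be a Signal.emit() so the
            # GUI thread appends the line to the console widget safely.
            self.emit_callback(self.format(record))
        except Exception:
            self.handleError(record)

# Usage: collect formatted lines instead of appending to a text widget.
lines = []
logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addHandler(ConsoleRelayHandler(lines.append))
logger.info("prediction finished")
```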

### `LLMEditorWidget` (`gui/llm_editor_widget.py`)

A new widget dedicated to editing LLM settings:

* Provides a tabbed interface ("Prompt Settings", "API Settings") to edit `config/llm_settings.json`.
* Allows editing the main prompt, managing examples (add/delete/edit JSON), and configuring API details (URL, key, model, temperature, timeout).
* Loads settings via `load_settings()` and saves them using `_save_settings()` (which calls `configuration.save_llm_config()`).
* Placed within `MainWindow`'s `QStackedWidget`.

### `UnifiedViewModel` (`gui/unified_view_model.py`)

The `UnifiedViewModel` implements a `QAbstractItemModel` for use with Qt's model-view architecture. It is specifically designed to:
@@ -136,16 +151,19 @@ An experimental predictor (inheriting from `BasePredictionHandler`) that uses a

* Takes an input source identifier, file list, and `Configuration` object.
* Interacts with the `LLMInteractionHandler` to send data to the LLM and receive predictions.
* **Parses the LLM's JSON response**: It expects a specific two-part JSON structure (see `12_LLM_Predictor_Integration.md`). It first sanitizes the response (removing comments/markdown) and then parses the JSON.
* **Constructs `SourceRule`**: It groups files based on the `proposed_asset_group_name` from the JSON, assigns the final `asset_type` using the `asset_group_classifications` map, and builds the complete `SourceRule` hierarchy.
* Emits the `prediction_signal` with the generated `SourceRule` object or `error_signal` on failure.
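
The sanitize-then-group flow can be sketched as follows. The keys `proposed_asset_group_name` and `asset_group_classifications` come from this document; the `file_assignments` / `file` key names and the overall response shape are assumptions made for illustration only.

```python
import json

def sanitize_llm_response(raw: str) -> str:
    """Drop markdown fence lines and // comment lines so the remainder
    parses as plain JSON (sketch of the sanitization step)."""
    kept = []
    for line in raw.strip().splitlines():
        stripped = line.strip()
        if stripped.startswith("//") or stripped.startswith("`" * 3):
            continue
        kept.append(line)
    return "\n".join(kept)

def build_groups(response: dict) -> dict:
    """Group files by proposed_asset_group_name and attach the final
    asset_type from the asset_group_classifications map."""
    classifications = response.get("asset_group_classifications", {})
    groups = {}
    for entry in response.get("file_assignments", []):
        name = entry["proposed_asset_group_name"]
        group = groups.setdefault(
            name, {"asset_type": classifications.get(name), "files": []}
        )
        group["files"].append(entry["file"])
    return groups

# A hypothetical fenced, commented response as an LLM might return it.
fence = "`" * 3  # avoids writing a literal markdown fence in this snippet
raw = (
    fence + "json\n// grouping proposal\n"
    '{"file_assignments": ['
    '{"file": "wood_col.png", "proposed_asset_group_name": "Wood"},'
    '{"file": "wood_nrm.png", "proposed_asset_group_name": "Wood"}],'
    '"asset_group_classifications": {"Wood": "Surface"}}\n' + fence
)
groups = build_groups(json.loads(sanitize_llm_response(raw)))
```

From `groups`, the handler would then build the `SourceRule` hierarchy, one asset per group.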

### `LLMInteractionHandler` (`gui/llm_interaction_handler.py`)

This class now acts as the central manager for LLM prediction tasks:

* **Manages the LLM prediction queue** and processes items sequentially.
* **Loads LLM configuration** directly from `config/llm_settings.json` and `config/app_settings.json`.
* **Instantiates and manages** the `LLMPredictionHandler` and its `QThread`.
* **Handles LLM task state** (running/idle) and signals changes to the GUI.
* Receives results/errors from `LLMPredictionHandler` and **emits signals** (`llm_prediction_ready`, `llm_prediction_error`, `llm_status_update`, `llm_processing_state_changed`) to `MainWindow`.
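
The sequential-queue behaviour can be sketched without Qt. The class below is illustrative, not the handler's actual implementation; a plain callback stands in for the `llm_processing_state_changed` signal.

```python
from collections import deque

class PredictionQueue:
    """Sketch of the sequential queue: one LLM task runs at a time,
    and the next queued item starts only after a result or error."""

    def __init__(self, on_state_changed):
        self._queue = deque()
        self._busy = False
        self.current = None
        self.on_state_changed = on_state_changed  # stand-in for a Qt signal

    def enqueue(self, source_id):
        self._queue.append(source_id)
        self._start_next()

    def _start_next(self):
        if self._busy or not self._queue:
            return
        self._busy = True
        self.current = self._queue.popleft()
        self.on_state_changed(True)  # processing started

    def task_finished(self):
        """Called when a result or error arrives for the current item."""
        self._busy = False
        self.on_state_changed(False)
        self._start_next()

states = []
q = PredictionQueue(states.append)
q.enqueue("src_a")
q.enqueue("src_b")   # queued; src_a is still running
q.task_finished()    # src_a done -> src_b starts automatically
```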
## Utility Modules (`utils/`)

@@ -6,10 +6,11 @@ This document provides technical details about the configuration system and the

The tool utilizes a three-tiered configuration system managed by the `configuration.py` module:

1. **Application Settings (`config/app_settings.json`):** This JSON file defines the core global default settings, constants, and rules that apply generally across different asset sources (e.g., default output paths, standard image resolutions, map merge rules, output format rules, Blender paths, `FILE_TYPE_DEFINITIONS`, `ASSET_TYPE_DEFINITIONS`).
2. **LLM Settings (`config/llm_settings.json`):** This JSON file contains settings specifically related to the LLM predictor, such as the API endpoint, model name, prompt template, and examples. These settings can be edited through the GUI using the `LLMEditorWidget`.
3. **Preset Files (`Presets/*.json`):** These JSON files define supplier-specific rules and overrides. They contain patterns to interpret filenames, classify map types, handle variants, define naming conventions, and specify other source-specific behaviors.

The `configuration.py` module contains the `Configuration` class (for loading/merging settings for processing) and standalone functions like `load_base_config()` (for accessing `app_settings.json` directly) and `save_llm_config()` / `save_base_config()` (for writing settings back to files). Note that the old `config.py` file has been deleted.

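
The override order described above can be sketched as a shallow merge; the keys shown are hypothetical and stand in for real settings.

```python
def merge_configs(base: dict, preset: dict) -> dict:
    """Sketch of the documented merge order: preset values override
    base app settings; keys only present in the base are kept."""
    merged = dict(base)
    merged.update(preset)
    return merged

# Hypothetical keys, for illustration only.
base = {"output_path": "Output", "map_type_mapping": ["col", "nrm"]}
preset = {"map_type_mapping": ["albedo", "normal"]}
merged = merge_configs(base, preset)
```

A real implementation may merge nested sections more carefully; this shows only the top-level precedence.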
## Supplier Management (`config/suppliers.json`)

@@ -24,23 +25,37 @@ The `Configuration` class is central to the new configuration system. It is resp

* **Initialization:** An instance is created with a specific `preset_name`.
* **Loading:**
    * It first loads the base application settings from `config/app_settings.json`.
    * It then loads the LLM-specific settings from `config/llm_settings.json`.
    * Finally, it loads the specified preset JSON file from the `Presets/` directory.
* **Merging & Access:** The base settings from `app_settings.json` are merged with the preset rules. LLM settings are stored separately. All settings are accessible via instance properties (e.g., `config.target_filename_pattern`, `config.llm_endpoint_url`). Preset values generally override the base settings where applicable.
* **Validation (`_validate_configs`):** Performs basic structural validation on the loaded settings (base, LLM, and preset), checking for the presence of required keys and basic data types. Logs warnings for missing optional LLM keys.
* **Regex Compilation (`_compile_regex_patterns`):** Compiles regex patterns defined in the merged configuration (from base settings and the preset) for performance. Compiled regex objects are stored as instance attributes (e.g., `self.compiled_map_keyword_regex`).
* **LLM Settings Access:** The `Configuration` class provides direct property access (e.g., `config.llm_endpoint_url`, `config.llm_api_key`, `config.llm_model_name`, `config.llm_temperature`, `config.llm_request_timeout`, `config.llm_predictor_prompt`, `config.get_llm_examples()`) to allow components like the `LLMPredictionHandler` to easily access the necessary LLM configuration values loaded from `config/llm_settings.json`.
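
The wildcard-to-regex compilation step can be sketched using the stdlib `fnmatch.translate` as a stand-in for the internal `_fnmatch_to_regex` helper (whose exact behaviour is not shown in this document).

```python
import fnmatch
import re

def compile_patterns(patterns):
    """Sketch of _compile_regex_patterns: translate basic wildcards
    (*, ?) to regex and compile case-insensitively so file
    classification can reuse the compiled objects for fast matching."""
    return [re.compile(fnmatch.translate(p), re.IGNORECASE) for p in patterns]

compiled = compile_patterns(["*_col.*", "*_nrm.*"])
hit = bool(compiled[0].match("Wood_COL.png"))  # case-insensitive match
```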
An instance of `Configuration` is created within each worker process (`main.process_single_asset_wrapper`) to ensure that each concurrently processed asset uses the correct, isolated configuration based on the specified preset and the base application settings. The `LLMInteractionHandler` loads LLM settings directly using helper functions or file access, not the `Configuration` class.

## GUI Configuration Editors

The GUI provides dedicated editors for modifying configuration files:

* **`ConfigEditorDialog` (`gui/config_editor_dialog.py`):** Edits the core `config/app_settings.json`.
* **`LLMEditorWidget` (`gui/llm_editor_widget.py`):** Edits the LLM-specific `config/llm_settings.json`.

### `ConfigEditorDialog` (`gui/config_editor_dialog.py`)

The GUI includes a dedicated editor for modifying the `config/app_settings.json` file. This is implemented in `gui/config_editor_dialog.py`.

* **Purpose:** Provides a user-friendly interface for viewing and editing the core application settings defined in `app_settings.json`.
* **Implementation:** The dialog loads the JSON content of `app_settings.json`, presents it in a tabbed layout ("General", "Output & Naming", etc.) using standard GUI widgets mapped to the JSON structure, and saves the changes back to the file. It supports editing basic fields, tables for definitions (`FILE_TYPE_DEFINITIONS`, `ASSET_TYPE_DEFINITIONS`), and a list/detail view for merge rules (`MAP_MERGE_RULES`). The definitions tables include dynamic color editing features.
* **Limitations:** Currently, editing complex fields like `IMAGE_RESOLUTIONS` or the full details of `MAP_MERGE_RULES` via the UI is not fully supported.
* **Note:** Changes made through the `ConfigEditorDialog` are written directly to `config/app_settings.json` (using `save_base_config`) but require an application restart to be loaded and applied by the `Configuration` class during processing.

### `LLMEditorWidget` (`gui/llm_editor_widget.py`)

* **Purpose:** Provides a user-friendly interface for viewing and editing the LLM settings defined in `config/llm_settings.json`.
* **Implementation:** Uses tabs for "Prompt Settings" and "API Settings". Allows editing the prompt, managing examples, and configuring API details.
* **Persistence:** Saves changes directly to `config/llm_settings.json` using the `configuration.save_llm_config()` function. Changes are loaded by the `LLMInteractionHandler` the next time an LLM task is initiated.
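
The internals of `save_llm_config()` are not shown in this document; one reasonable sketch writes atomically, so an interrupted save cannot leave a truncated `llm_settings.json` behind. The function name matches the text, but the atomic-replace strategy is an assumption.

```python
import json
import os
import tempfile

def save_llm_config(settings: dict, path: str) -> None:
    """save_llm_config-style writer (sketch): dump to a temp file in
    the same directory, then atomically replace the target file."""
    directory = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    with os.fdopen(fd, "w", encoding="utf-8") as handle:
        json.dump(settings, handle, indent=2)
    # os.replace is atomic on POSIX and Windows for same-volume paths.
    os.replace(tmp_path, path)

# Usage sketch: write settings into a temporary directory.
target = os.path.join(tempfile.mkdtemp(), "llm_settings.json")
save_llm_config({"llm_model_name": "demo-model"}, target)
```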

## Preset File Structure (`Presets/*.json`)

@@ -11,21 +11,34 @@ The GUI is built using `PySide6`, which provides Python bindings for the Qt fram

The `MainWindow` class acts as the central **coordinator** for the GUI application. It is responsible for:

* Setting up the main application window structure and menu bar.
* **Layout:** Arranging the main GUI components using a `QSplitter`.
    * **Left Pane:** Contains the preset selection controls (from `PresetEditorWidget`) permanently displayed at the top. Below this, a `QStackedWidget` switches between the preset JSON editor (also from `PresetEditorWidget`) and the `LLMEditorWidget`.
    * **Right Pane:** Contains the `MainPanelWidget`.
* Instantiating and managing the major GUI widgets:
    * `PresetEditorWidget` (`gui/preset_editor_widget.py`): Provides the preset selector and the JSON editor parts.
    * `LLMEditorWidget` (`gui/llm_editor_widget.py`): Provides the editor for LLM settings.
    * `MainPanelWidget` (`gui/main_panel_widget.py`): Contains the rule hierarchy view and processing controls.
    * `LogConsoleWidget` (`gui/log_console_widget.py`): Displays application logs.
* Instantiating key models and handlers:
    * `UnifiedViewModel` (`gui/unified_view_model.py`): The model for the rule hierarchy view.
    * `LLMInteractionHandler` (`gui/llm_interaction_handler.py`): Manages communication with the LLM service.
    * `AssetRestructureHandler` (`gui/asset_restructure_handler.py`): Handles rule restructuring.
* Connecting signals and slots between these components to orchestrate the application flow.
* **Editor Switching:** Handling the `preset_selection_changed_signal` from `PresetEditorWidget` in its `_on_preset_selection_changed` slot. This slot:
    * Switches the `QStackedWidget` (`editor_stack`) to display either the `PresetEditorWidget`'s JSON editor or the `LLMEditorWidget` based on the selected mode ("preset", "llm", "placeholder").
    * Calls `llm_editor_widget.load_settings()` when switching to LLM mode.
    * Updates the window title.
    * Triggers `update_preview()`.
* Handling top-level user interactions like drag-and-drop for loading sources (`add_input_paths`). This method now handles the "placeholder" state (no preset selected) by scanning directories or inspecting archives (ZIP) and creating placeholder `SourceRule`/`AssetRule`/`FileRule` objects to immediately populate the `UnifiedViewModel` with the file structure.
* Initiating predictions based on the selected preset mode (Rule-Based or LLM) when presets change or sources are added (`update_preview`).
* Starting the processing task (`_on_process_requested`): This slot now filters the `SourceRule` list obtained from the `UnifiedViewModel`, excluding sources where no asset has a `Target Asset` name assigned, before emitting the `start_backend_processing` signal. It also manages enabling/disabling controls.
* Managing the background prediction threads (`RuleBasedPredictionHandler` via `QThread`, `LLMPredictionHandler` via `LLMInteractionHandler`).
* Implementing slots to handle results from background tasks:
    * `_on_rule_hierarchy_ready`: Handles results from `RuleBasedPredictionHandler`.
    * `_on_llm_prediction_ready_from_handler`: Handles results from `LLMInteractionHandler`.
    * `_on_prediction_error`: Handles errors from both prediction paths.
    * `_handle_prediction_completion`: Centralized logic to track completion and update UI state after each prediction result or error.
* Slots to handle status and state changes from `LLMInteractionHandler`.
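
The `_on_process_requested` filtering described above can be sketched with simplified stand-ins for the rule classes; the real `SourceRule`/`AssetRule` carry more fields, so these dataclasses are illustrative only.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AssetRule:
    target_asset: Optional[str] = None  # the "Target Asset" column value

@dataclass
class SourceRule:
    source_id: str
    assets: List[AssetRule] = field(default_factory=list)

def filter_processable(rules: List[SourceRule]) -> List[SourceRule]:
    """Sketch of the _on_process_requested filter: keep only sources
    where at least one asset has a Target Asset name assigned."""
    return [r for r in rules if any(a.target_asset for a in r.assets)]

rules = [
    SourceRule("a.zip", [AssetRule("WoodPlanks")]),
    SourceRule("b.zip", [AssetRule(None)]),  # no name -> excluded
]
processable = filter_processable(rules)
```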

## Threading and Background Tasks

@@ -53,7 +66,26 @@ Communication between the `MainWindow` (main UI thread) and the background predi

## Preset Editor (`gui/preset_editor_widget.py`)

The `PresetEditorWidget` provides a dedicated interface for managing presets. It handles loading, displaying, editing, and saving preset `.json` files.

* **Refactoring:** This widget has been refactored to expose its main components:
    * `selector_container`: A `QWidget` containing the preset list (`QListWidget`) and New/Delete buttons. Used statically by `MainWindow`.
    * `json_editor_container`: A `QWidget` containing the tabbed editor (`QTabWidget`) for preset JSON details and the Save/Save As buttons. Placed in `MainWindow`'s `QStackedWidget`.
* **Functionality:** Still manages the logic for populating the preset list, loading/saving presets, handling unsaved changes, and providing the editor UI for preset details.
* **Communication:** Emits `preset_selection_changed_signal(mode, preset_name)` when the user selects a preset, the LLM option, or the placeholder. This signal is crucial for `MainWindow` to switch the editor stack and trigger preview updates.
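
The `preset_selection_changed_signal(mode, preset_name)` contract can be illustrated with a pure-Python stand-in for a Qt signal. The `Signal` class here is only a teaching device, and the preset name used below is hypothetical.

```python
class Signal:
    """Pure-Python stand-in for a Qt Signal: slots connect, emit calls
    each connected slot with the emitted arguments."""

    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)

# MainWindow would connect its _on_preset_selection_changed slot here.
received = []
preset_selection_changed_signal = Signal()
preset_selection_changed_signal.connect(
    lambda mode, name: received.append((mode, name))
)
preset_selection_changed_signal.emit("llm", None)            # LLM option chosen
preset_selection_changed_signal.emit("preset", "Poliigon")   # hypothetical preset
```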

## LLM Settings Editor (`gui/llm_editor_widget.py`)

This new widget provides a dedicated interface for editing LLM-specific settings stored in `config/llm_settings.json`.

* **Purpose:** Allows users to configure the LLM predictor's behavior without directly editing the JSON file.
* **Structure:** Uses a `QTabWidget` with two tabs:
    * **"Prompt Settings":** Contains a `QPlainTextEdit` for the main prompt and a nested `QTabWidget` for managing examples (add/delete/edit JSON in `QTextEdit` widgets).
    * **"API Settings":** Contains fields (`QLineEdit`, `QDoubleSpinBox`, `QSpinBox`) for endpoint URL, API key, model name, temperature, and timeout.
* **Functionality:**
    * `load_settings()`: Reads `config/llm_settings.json` and populates the UI fields. Handles file not found or JSON errors. Called by `MainWindow` when switching to LLM mode.
    * `_save_settings()`: Gathers data from the UI, validates example JSON, constructs the settings dictionary, and calls `configuration.save_llm_config()` to write back to the file. Emits `settings_saved` signal on success.
    * Manages unsaved changes state and enables/disables the "Save LLM Settings" button accordingly.

## Unified Hierarchical View

@@ -80,36 +112,44 @@ The core rule editing interface is built around a `QTreeView` managed within the

graph TD
    subgraph MainWindow [MainWindow Coordinator]
        direction LR
        MW_Input["User Input (Drag/Drop)"] --> MW(MainWindow);
        MW -- Owns/Manages --> Splitter(QSplitter);
        MW -- Owns/Manages --> LLMIH(LLMInteractionHandler);
        MW -- Owns/Manages --> ARH(AssetRestructureHandler);
        MW -- Owns/Manages --> VM(UnifiedViewModel);
        MW -- Owns/Manages --> LCW(LogConsoleWidget);
        MW -- Initiates --> PredPool{Prediction Threads};
        MW -- Connects Signals --> VM;
        MW -- Connects Signals --> ARH;
        MW -- Connects Signals --> LLMIH;
        MW -- Connects Signals --> PEW(PresetEditorWidget);
        MW -- Connects Signals --> LLMEDW(LLMEditorWidget);
    end

    subgraph LeftPane [Left Pane Widgets]
        direction TB
        Splitter -- Adds Widget --> LPW(Left Pane Container);
        LPW -- Contains --> PEW_Sel(PresetEditorWidget - Selector);
        LPW -- Contains --> Stack(QStackedWidget);
        Stack -- Contains --> PEW_Edit(PresetEditorWidget - JSON Editor);
        Stack -- Contains --> LLMEDW;
    end

    subgraph RightPane [Right Pane Widgets]
        direction TB
        Splitter -- Adds Widget --> MPW(MainPanelWidget);
        MPW -- Contains --> TV(QTreeView - Rule View);
        MPW_UI["UI Controls (Process Btn, etc)"];
        MPW_UI --> MPW;
    end

    subgraph Prediction [Background Prediction]
        direction TB
        PredPool -- Runs --> RBP(RuleBasedPredictionHandler);
        PredPool -- Runs --> LLMP(LLMPredictionHandler);
        LLMIH -- Manages/Starts --> LLMP;
        RBP -- prediction_ready/error/status --> MW;
        LLMIH -- llm_prediction_ready/error/status --> MW;
    end

    subgraph ModelView [Model/View Components]
@@ -126,17 +166,24 @@
    Del -- Get/Set Data --> VM;
    end

    %% MainWindow Interactions
    MW_Input -- Triggers --> MW;
    PEW -- preset_selection_changed_signal --> MW;
    LLMEDW -- settings_saved --> MW;
    MPW -- process_requested/etc --> MW;
    MW -- _on_preset_selection_changed --> Stack;
    MW -- _on_preset_selection_changed --> LLMEDW;
    MW -- _handle_prediction_completion --> VM;
    MW -- Triggers Processing --> ProcTask(main.ProcessingTask);

    %% Connections between subgraphs
    PEW --> LPW; %% PresetEditorWidget parts are in Left Pane
    LLMEDW --> Stack; %% LLMEditorWidget is in Stack
    MPW --> Splitter; %% MainPanelWidget is in Right Pane
    VM --> MW;
    ARH --> MW;
    LLMIH --> MW;
    LCW --> MW;
```
|
|
||||||
## Application Styling

@@ -6,7 +6,7 @@ The LLM Predictor feature provides an alternative method for classifying asset t

## Configuration

The LLM Predictor is configured via settings in the dedicated `config/llm_settings.json` file. These settings control the behavior of the LLM interaction:

- `llm_predictor_prompt`: The template for the prompt sent to the LLM. This prompt should guide the LLM to classify the asset based on its name and potentially other context. It can include placeholders that will be replaced with actual data during processing.
- `llm_endpoint_url`: The URL of the LLM API endpoint.
@@ -16,48 +16,108 @@ The LLM Predictor is configured via new settings in the `config/app_settings.jso

- `llm_request_timeout`: The maximum time (in seconds) to wait for a response from the LLM API.
- `llm_predictor_examples`: A list of example input/output pairs to include in the prompt for few-shot learning, helping the LLM understand the desired output format and classification logic.

**Editing:** These settings can be edited directly through the GUI using the **`LLMEditorWidget`** (`gui/llm_editor_widget.py`), which provides a user-friendly interface for modifying the prompt, examples, and API parameters. Changes are saved back to `config/llm_settings.json` via the `configuration.save_llm_config()` function.

**Loading:** The `LLMInteractionHandler` now loads these settings directly from `config/llm_settings.json` and relevant parts of `config/app_settings.json` when it needs to start an `LLMPredictionHandler` task. It no longer relies on the main `Configuration` class for LLM-specific settings. The prompt structure remains crucial for effective classification. Placeholders within the prompt template (e.g., `{FILE_LIST}`) are dynamically replaced with relevant data before the request is sent.
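The settings loading and placeholder substitution described above can be sketched in a few lines. This is a minimal illustration, not the actual implementation: `load_llm_settings` and `build_prompt` are hypothetical helper names; only the `{FILE_LIST}` placeholder and the `config/llm_settings.json` location come from this document.

```python
import json

def load_llm_settings(path):
    """Read the LLM settings JSON file into a plain dict."""
    with open(path, "r", encoding="utf-8") as fh:
        return json.load(fh)

def build_prompt(template, file_list):
    """Substitute the {FILE_LIST} placeholder into the prompt template.

    str.replace is used instead of str.format because the template itself
    contains literal JSON braces that format() would misinterpret.
    """
    return template.replace("{FILE_LIST}", "\n".join(file_list))

prompt = build_prompt(
    'Classify these files:\n{FILE_LIST}\nReturn ONLY JSON like {"a": 1}.',
    ["Textures/Wood_Floor_01_BaseColor.png", "Textures/Wood_Floor_01_Roughness.png"],
)
```

Using `str.replace` for substitution sidesteps the brace-escaping problem that a `str.format`-based template would have with the JSON-heavy prompt text.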

## Expected LLM Output Format (Refactored)

The LLM is now expected to return a JSON object containing two distinct parts. This structure helps the LLM maintain context across multiple files belonging to the same conceptual asset and allows for a more robust grouping mechanism.

**Rationale:** The previous implicit format made it difficult for the LLM to consistently group related files (e.g., different texture maps for the same material) under a single asset, especially in complex archives. The new two-part structure explicitly separates file-level analysis from asset-level classification, improving accuracy and consistency.

**Structure:**

```json
{
  "individual_file_analysis": [
    {
      "relative_file_path": "Textures/Wood_Floor_01/Wood_Floor_01_BaseColor.png",
      "classified_file_type": "BaseColor",
      "proposed_asset_group_name": "Wood_Floor_01"
    },
    {
      "relative_file_path": "Textures/Wood_Floor_01/Wood_Floor_01_Roughness.png",
      "classified_file_type": "Roughness",
      "proposed_asset_group_name": "Wood_Floor_01"
    },
    {
      "relative_file_path": "Textures/Metal_Plate_03/Metal_Plate_03_Metallic.jpg",
      "classified_file_type": "Metallic",
      "proposed_asset_group_name": "Metal_Plate_03"
    }
  ],
  "asset_group_classifications": {
    "Wood_Floor_01": "PBR Material",
    "Metal_Plate_03": "PBR Material"
  }
}
```

- **`individual_file_analysis`**: A list where each object represents a single file within the source.
  - `relative_file_path`: The path of the file relative to the source root.
  - `classified_file_type`: The LLM's prediction for the *type* of this specific file (e.g., "BaseColor", "Normal", "Model"). This corresponds to the `item_type` in the `FileRule`.
  - `proposed_asset_group_name`: A name suggested by the LLM to group this file with others belonging to the same conceptual asset. This is used internally by the parser.
- **`asset_group_classifications`**: A dictionary mapping the `proposed_asset_group_name` values from the list above to a final `asset_type` (e.g., "PBR Material", "HDR Environment").
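The grouping mechanism these two parts enable can be sketched as follows. The real parser builds `SourceRule`/`FileRule` objects; this illustration uses plain dicts, and `group_predictions` is a hypothetical name:

```python
from collections import defaultdict

def group_predictions(response):
    """Group file-level predictions into assets using the two-part format.

    Returns {group_name: {"asset_type": ..., "files": [(path, file_type), ...]}}.
    Entries with a null group name (e.g. FILE_IGNORE files) are skipped.
    """
    groups = defaultdict(lambda: {"asset_type": None, "files": []})
    classifications = response.get("asset_group_classifications", {})
    for entry in response.get("individual_file_analysis", []):
        name = entry.get("proposed_asset_group_name")
        if name is None:  # ignored files carry no asset group
            continue
        groups[name]["asset_type"] = classifications.get(name)
        groups[name]["files"].append(
            (entry["relative_file_path"], entry["classified_file_type"])
        )
    return dict(groups)

assets = group_predictions({
    "individual_file_analysis": [
        {"relative_file_path": "T/Wood_BaseColor.png",
         "classified_file_type": "BaseColor",
         "proposed_asset_group_name": "Wood_Floor_01"},
        {"relative_file_path": "T/Wood_Roughness.png",
         "classified_file_type": "Roughness",
         "proposed_asset_group_name": "Wood_Floor_01"},
    ],
    "asset_group_classifications": {"Wood_Floor_01": "PBR Material"},
})
```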

## `LLMInteractionHandler` (Refactored)

The `gui/llm_interaction_handler.py` module contains the `LLMInteractionHandler` class, which now acts as the central manager for LLM prediction tasks.

Key Responsibilities & Methods:

- **Queue Management:** Maintains a queue (`llm_processing_queue`) of pending prediction requests (input path, file list). Handles adding single (`queue_llm_request`) or batch (`queue_llm_requests_batch`) requests.
- **State Management:** Tracks whether an LLM task is currently running (`_is_processing`) and emits `llm_processing_state_changed(bool)` to update the GUI (e.g., disable preset editor). Includes `force_reset_state()` for recovery.
- **Task Orchestration:** Processes the queue sequentially (`_process_next_llm_item`). For each item:
  * Loads required settings directly from `config/llm_settings.json` and `config/app_settings.json`.
  * Instantiates an `LLMPredictionHandler` in a new `QThread`.
  * Passes the loaded settings dictionary to the `LLMPredictionHandler`.
  * Connects signals from the handler (`prediction_ready`, `prediction_error`, `status_update`) to internal slots (`_handle_llm_result`, `_handle_llm_error`) or directly re-emits them (`llm_status_update`).
  * Starts the thread.
- **Result/Error Handling:** Internal slots (`_handle_llm_result`, `_handle_llm_error`) receive results/errors from the `LLMPredictionHandler`, remove the completed/failed item from the queue, emit the corresponding public signal (`llm_prediction_ready`, `llm_prediction_error`), and trigger processing of the next queue item.
- **Communication:** Emits signals to `MainWindow`:
  * `llm_prediction_ready(input_path, source_rule_list)`
  * `llm_prediction_error(input_path, error_message)`
  * `llm_status_update(status_message)`
  * `llm_processing_state_changed(is_processing)`
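The sequential queue discipline described above can be sketched without Qt. `SequentialQueue` is an illustrative stand-in: the real handler runs each task in a `QThread` and advances on the `prediction_ready`/`prediction_error` signals rather than synchronously.

```python
from collections import deque

class SequentialQueue:
    """Qt-free stand-in for LLMInteractionHandler's queue discipline:
    one task runs at a time; finishing a task starts the next one."""

    def __init__(self, run_task):
        self._run_task = run_task  # stand-in for starting a handler thread
        self._queue = deque()
        self._is_processing = False
        self.results = []

    def queue_request(self, input_path, file_list):
        self._queue.append((input_path, file_list))
        if not self._is_processing:
            self._process_next_item()

    def _process_next_item(self):
        if not self._queue:
            self._is_processing = False
            return
        self._is_processing = True
        input_path, file_list = self._queue.popleft()
        # In the real handler the result arrives asynchronously via a
        # prediction_ready signal; here the call is synchronous.
        self.results.append((input_path, self._run_task(input_path, file_list)))
        self._process_next_item()

q = SequentialQueue(run_task=lambda path, files: len(files))
q.queue_request("archive_a.zip", ["a.png", "b.png"])
q.queue_request("archive_b.zip", ["c.png"])
```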

## `LLMPredictionHandler` (Refactored)

The `gui/llm_prediction_handler.py` module contains the `LLMPredictionHandler` class (inheriting from `BasePredictionHandler`), which performs the actual LLM prediction for a *single* input source. It runs in a background thread managed by the `LLMInteractionHandler`.

Key Responsibilities & Methods:

- **Initialization**: Takes the source identifier, file list, and a **`settings` dictionary** (passed from `LLMInteractionHandler`) containing all necessary configuration (LLM endpoint, prompt, examples, API details, type definitions, etc.).
- **`_perform_prediction()`**: Implements the core prediction logic:
  * **Prompt Preparation (`_prepare_prompt`)**: Uses the passed `settings` dictionary to access the prompt template, type definitions, and examples to build the final prompt string.
  * **API Call (`_call_llm`)**: Uses the passed `settings` dictionary to get the endpoint URL, API key, model name, temperature, and timeout to make the API request.
  * **Parsing (`_parse_llm_response`)**: Parses the LLM's JSON response (using type definitions from the `settings` dictionary for validation) and constructs the `SourceRule` hierarchy based on the two-part format (`individual_file_analysis`, `asset_group_classifications`). Includes sanitization logic for comments and markdown fences.
- **Signals (Inherited):** Emits `prediction_ready(input_path, source_rule_list)` or `prediction_error(input_path, error_message)` upon completion or failure, which are connected to the `LLMInteractionHandler`. Also emits `status_update(message)`.
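The sanitization step can be sketched as follows. This is an illustration under assumptions: `sanitize_llm_json` is a hypothetical name, the comment stripping shown is naive (line-start `//` only), and the real `_parse_llm_response` additionally validates the two-part structure.

```python
import json
import re

def sanitize_llm_json(raw):
    """Strip common LLM response noise before json.loads():
    surrounding markdown code fences and // line comments."""
    text = raw.strip()
    # Drop ```json ... ``` fences if the whole response is fenced.
    fence = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if fence:
        text = fence.group(1)
    # Remove // comments (naive: assumes none appear inside string values).
    text = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)
    return json.loads(text)

data = sanitize_llm_json(
    '```json\n{\n// model chatter\n'
    '"asset_group_classifications": {"Wood_Floor_01": "PBR Material"}\n}\n```'
)
```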
## GUI Integration

- The LLM predictor mode is selected via the preset dropdown in `PresetEditorWidget`.
- Selecting "LLM Interpretation" triggers `MainWindow._on_preset_selection_changed`, which switches the editor view to the `LLMEditorWidget` and calls `update_preview`.
- `MainWindow.update_preview` (or `add_input_paths`) delegates the LLM prediction request(s) to the `LLMInteractionHandler`'s queue.
- `LLMInteractionHandler` manages the background tasks and signals results/errors/status back to `MainWindow`.
- `MainWindow` slots (`_on_llm_prediction_ready_from_handler`, `_on_prediction_error`, `show_status_message`, `_on_llm_processing_state_changed`) handle these signals to update the `UnifiedViewModel` and the UI state (status bar, progress, button enablement).
- The `LLMEditorWidget` allows users to modify settings, saving them via `configuration.save_llm_config()`. `MainWindow` listens for the `settings_saved` signal to provide user feedback.

## Model Integration (Refactored)

The `gui/unified_view_model.py` module's `update_rules_for_sources` method still incorporates the results.

- When the `prediction_signal` is received from `LLMPredictionHandler`, the accompanying `SourceRule` object (which has already been constructed based on the new two-part JSON parsing logic) is passed to `update_rules_for_sources`.
- This method then merges the new `SourceRule` hierarchy into the existing model data, preserving user overrides where applicable. The internal structure of the received `SourceRule` now directly reflects the groupings and classifications determined by the LLM and the new parser.
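The "preserve user overrides" merge policy can be illustrated with flat dicts. This is only a sketch of the policy: `merge_rules` and its arguments are hypothetical, and the real method walks a `SourceRule`/`FileRule` hierarchy rather than flat dicts.

```python
def merge_rules(existing, predicted, overridden_keys):
    """Take freshly predicted values, but keep any field the user
    has explicitly overridden in the existing model data."""
    merged = dict(predicted)
    for key in overridden_keys:
        if key in existing:
            merged[key] = existing[key]
    return merged

merged = merge_rules(
    existing={"asset_type": "Surface", "asset_name": "OldName"},
    predicted={"asset_type": "PBR Material", "asset_name": "Wood_Floor_01"},
    overridden_keys={"asset_name"},  # user renamed the asset by hand
)
```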

## Error Handling (Updated)

Error handling is distributed:

- **Configuration Loading:** `LLMInteractionHandler` handles errors loading `llm_settings.json` or `app_settings.json` before starting a task.
- **LLM API Errors:** Handled within `LLMPredictionHandler._call_llm` (e.g., `requests.exceptions.RequestException`, `HTTPError`) and propagated via the `prediction_error` signal.
- **Sanitization/Parsing Errors:** `LLMPredictionHandler._parse_llm_response` catches errors during comment/markdown removal and `json.loads()`.
- **Structure/Validation Errors:** `LLMPredictionHandler._parse_llm_response` includes explicit checks for the required two-part JSON structure and data consistency.
- **Task Management Errors:** `LLMInteractionHandler` handles errors during thread setup/start.

All errors ultimately result in the `llm_prediction_error` signal being emitted by `LLMInteractionHandler`, allowing `MainWindow` to inform the user via the status bar and handle the completion state.
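The funneling of every failure mode into a single error path can be sketched like this; the function and its callbacks are illustrative stand-ins for the signal-based flow (the `on_error` callback plays the role of `llm_prediction_error`).

```python
import json

def predict_with_error_funnel(call_llm, parse_response, on_error):
    """Whatever layer fails (API call, parsing, structure validation),
    exactly one error callback fires and None is returned."""
    try:
        raw = call_llm()
    except Exception as exc:
        on_error(f"LLM API error: {exc}")
        return None
    try:
        data = parse_response(raw)
    except ValueError as exc:  # json.JSONDecodeError subclasses ValueError
        on_error(f"LLM response parsing error: {exc}")
        return None
    if "individual_file_analysis" not in data or "asset_group_classifications" not in data:
        on_error("LLM response missing required two-part structure")
        return None
    return data

errors = []
result = predict_with_error_funnel(
    call_llm=lambda: '{"individual_file_analysis": []}',  # structurally incomplete
    parse_response=json.loads,
    on_error=errors.append,
)
```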
config/app_settings.json
@@ -263,315 +263,5 @@
|
|||||||
],
|
],
|
||||||
"CALCULATE_STATS_RESOLUTION": "1K",
|
"CALCULATE_STATS_RESOLUTION": "1K",
|
||||||
"DEFAULT_ASSET_CATEGORY": "Surface",
|
"DEFAULT_ASSET_CATEGORY": "Surface",
|
||||||
"TEMP_DIR_PREFIX": "_PROCESS_ASSET_",
|
"TEMP_DIR_PREFIX": "_PROCESS_ASSET_"
|
||||||
"llm_predictor_examples": [
|
|
||||||
{
|
|
||||||
"input": "MessyTextures/Concrete_Damage_Set/concrete_col.png\nMessyTextures/Concrete_Damage_Set/concrete_N.png\nMessyTextures/Concrete_Damage_Set/concrete_rough.jpg\nMessyTextures/Concrete_Damage_Set/height_map_concrete.tif\nMessyTextures/Concrete_Damage_Set/Thumbs.db\nMessyTextures/Fabric_Pattern/pattern_01_diffuse.tga\nMessyTextures/Fabric_Pattern/pattern_01_ao.png\nMessyTextures/Fabric_Pattern/pattern_01_normal.png\nMessyTextures/Fabric_Pattern/notes.txt\nMessyTextures/Fabric_Pattern/variant_blue_diffuse.tga\nMessyTextures/Fabric_Pattern/fabric_flat.jpg",
|
|
||||||
"output": {
|
|
||||||
"predicted_assets": [
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "Concrete_Damage_01",
|
|
||||||
"predicted_asset_type": "Surface",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "MessyTextures/Concrete_Damage_Set/concrete_col.png",
|
|
||||||
"predicted_file_type": "MAP_COL"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "MessyTextures/Concrete_Damage_Set/concrete_N.png",
|
|
||||||
"predicted_file_type": "MAP_NRM"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "MessyTextures/Concrete_Damage_Set/concrete_rough.jpg",
|
|
||||||
"predicted_file_type": "MAP_ROUGH"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "MessyTextures/Concrete_Damage_Set/height_map_concrete.tif",
|
|
||||||
"predicted_file_type": "MAP_DISP"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "MessyTextures/Concrete_Damage_Set/Thumbs.db",
|
|
||||||
"predicted_file_type": "FILE_IGNORE"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "Fabric_Pattern_01",
|
|
||||||
"predicted_asset_type": "Surface",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "MessyTextures/Fabric_Pattern/pattern_01_diffuse.tga",
|
|
||||||
"predicted_file_type": "MAP_COL"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "MessyTextures/Fabric_Pattern/pattern_01_ao.png",
|
|
||||||
"predicted_file_type": "MAP_AO"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "MessyTextures/Fabric_Pattern/pattern_01_normal.png",
|
|
||||||
"predicted_file_type": "MAP_NRM"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "MessyTextures/Fabric_Pattern/variant_blue_diffuse.tga",
|
|
||||||
"predicted_file_type": "MAP_COL"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "MessyTextures/Fabric_Pattern/fabric_flat.jpg",
|
|
||||||
"predicted_file_type": "EXTRA"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "MessyTextures/Fabric_Pattern/notes.txt",
|
|
||||||
"predicted_file_type": "EXTRA"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"input": "SciFi_Drone/Drone_Model.fbx\nSciFi_Drone/Textures/Drone_BaseColor.png\nSciFi_Drone/Textures/Drone_Metallic.png\nSciFi_Drone/Textures/Drone_Roughness.png\nSciFi_Drone/Textures/Drone_Normal.png\nSciFi_Drone/Textures/Drone_Emissive.jpg\nSciFi_Drone/ReferenceImages/concept.jpg",
|
|
||||||
"output": {
|
|
||||||
"predicted_assets": [
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "SciFi_Drone",
|
|
||||||
"predicted_asset_type": "Model",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "SciFi_Drone/Drone_Model.fbx",
|
|
||||||
"predicted_file_type": "MODEL"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "SciFi_Drone/Textures/Drone_BaseColor.png",
|
|
||||||
"predicted_file_type": "MAP_COL"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "SciFi_Drone/Textures/Drone_Metallic.png",
|
|
||||||
"predicted_file_type": "MAP_METAL"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "SciFi_Drone/Textures/Drone_Roughness.png",
|
|
||||||
"predicted_file_type": "MAP_ROUGH"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "SciFi_Drone/Textures/Drone_Normal.png",
|
|
||||||
"predicted_file_type": "MAP_NRM"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "SciFi_Drone/Textures/Drone_Emissive.jpg",
|
|
||||||
"predicted_file_type": "EXTRA"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "SciFi_Drone/ReferenceImages/concept.jpg",
|
|
||||||
"predicted_file_type": "EXTRA"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"input": "21_hairs_deposits.tif\n22_hairs_fabric.tif\n23_hairs_fibres.tif\n24_hairs_fibres.tif\n25_bonus_isolatedFingerprints.tif\n26_bonus_isolatedPalmprint.tif\n27_metal_aluminum.tif\n28_metal_castIron.tif\n29_scratcehes_deposits_shapes.tif\n30_scratches_deposits.tif",
|
|
||||||
"output": {
|
|
||||||
"predicted_assets": [
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "21-Hairs-Deposits",
|
|
||||||
"predicted_asset_type": "UtilityMap",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "21_hairs_deposits.tif",
|
|
||||||
"predicted_file_type": "MAP_IMPERFECTION"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "22-Hairs-Fabric",
|
|
||||||
"predicted_asset_type": "UtilityMap",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "22_hairs_fabric.tif",
|
|
||||||
"predicted_file_type": "MAP_IMPERFECTION"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "23-Hairs-Deposits",
|
|
||||||
"predicted_asset_type": "UtilityMap",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "23_hairs_fibres.tif",
|
|
||||||
"predicted_file_type": "MAP_IMPERFECTION"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "24-Hairs-Fibres",
|
|
||||||
"predicted_asset_type": "UtilityMap",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "24_hairs_fibres.tif",
|
|
||||||
"predicted_file_type": "MAP_IMPERFECTION"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "27-MetalAluminium",
|
|
||||||
"predicted_asset_type": "UtilityMap",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "27_metal_aluminum.tif",
|
|
||||||
"predicted_file_type": "MAP_IMPERFECTION"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "28-MetalCastiron",
|
|
||||||
"predicted_asset_type": "UtilityMap",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "28_metal_castIron.tif",
|
|
||||||
"predicted_file_type": "MAP_IMPERFECTION"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "29-Scratches-Deposits-Shapes",
|
|
||||||
"predicted_asset_type": "UtilityMap",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "29_scratcehes_deposits_shapes.tif",
|
|
||||||
"predicted_file_type": "MAP_IMPERFECTION"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "30-Scrathes-Deposits",
|
|
||||||
"predicted_asset_type": "UtilityMap",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "30_scratches_deposits.tif",
|
|
||||||
"predicted_file_type": "MAP_IMPERFECTION"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "Bonus-IsolatedFingerprints",
|
|
||||||
"predicted_asset_type": "UtilityMap",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "25_bonus_isolatedFingerprints.tif",
|
|
||||||
"predicted_file_type": "MAP_IMPERFECTION"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "Bonus-IsolatedPalmprint",
|
|
||||||
"predicted_asset_type": "UtilityMap",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "26_bonus_isolatedPalmprint.tif",
|
|
||||||
"predicted_file_type": "MAP_IMPERFECTION"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"input": "Part1/TextureSupply_Boards001_A_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_A_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_B_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_B_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_C_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_C_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_D_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_D_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_E_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_E_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_F_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_F_28x300cm-Normal.jpg",
|
|
||||||
"output": {
|
|
||||||
"predicted_assets": [
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "Boards001_A",
|
|
||||||
"predicted_asset_type": "Surface",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "Part1/TextureSupply_Boards001_A_28x300cm-Albedo.jpg",
|
|
||||||
"predicted_file_type": "MAP_COL"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "Part1/TextureSupply_Boards001_A_28x300cm-Normal.jpg",
|
|
||||||
"predicted_file_type": "MAP_NRM"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "Boards001_B",
|
|
||||||
"predicted_asset_type": "Surface",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "Part1/TextureSupply_Boards001_B_28x300cm-Albedo.jpg",
|
|
||||||
"predicted_file_type": "MAP_COL"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "Part1/TextureSupply_Boards001_B_28x300cm-Normal.jpg",
|
|
||||||
"predicted_file_type": "MAP_NRM"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "Boards001_C",
|
|
||||||
"predicted_asset_type": "Surface",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "Part1/TextureSupply_Boards001_C_28x300cm-Albedo.jpg",
|
|
||||||
"predicted_file_type": "MAP_COL"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "Part1/TextureSupply_Boards001_C_28x300cm-Normal.jpg",
|
|
||||||
"predicted_file_type": "MAP_NRM"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "Boards001_D",
|
|
||||||
"predicted_asset_type": "Surface",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "Part1/TextureSupply_Boards001_D_28x300cm-Albedo.jpg",
|
|
||||||
"predicted_file_type": "MAP_COL"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "Part1/TextureSupply_Boards001_D_28x300cm-Normal.jpg",
|
|
||||||
"predicted_file_type": "MAP_NRM"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "Boards001_E",
|
|
||||||
"predicted_asset_type": "Surface",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "Part1/TextureSupply_Boards001_E_28x300cm-Albedo.jpg",
|
|
||||||
"predicted_file_type": "MAP_COL"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "Part1/TextureSupply_Boards001_E_28x300cm-Normal.jpg",
|
|
||||||
"predicted_file_type": "MAP_NRM"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"suggested_asset_name": "Boards001_F",
|
|
||||||
"predicted_asset_type": "Surface",
|
|
||||||
"files": [
|
|
||||||
{
|
|
||||||
"file_path": "Part1/TextureSupply_Boards001_F_28x300cm-Albedo.jpg",
|
|
||||||
"predicted_file_type": "MAP_COL"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"file_path": "Part1/TextureSupply_Boards001_F_28x300cm-Normal.jpg",
|
|
||||||
"predicted_file_type": "MAP_NRM"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"llm_endpoint_url": "http://100.65.14.122:1234/v1/chat/completions",
|
|
||||||
"llm_api_key": "",
|
|
||||||
"llm_model_name": "",
|
|
||||||
"llm_temperature": 0.5,
|
|
||||||
"llm_request_timeout": 120,
|
|
||||||
"llm_predictor_prompt": "You are an expert asset classification system. Your task is to analyze a list of file paths from a directory, identify patterns based on directory structure and filenames, and then group related files into logical assets. For each grouped asset, you must suggest a concise asset name, determine the overall asset type, and for each file within that asset, assign its specific file type.\n\nDefinitions:\n\nAsset Types: These define the overall category of an asset. Use one of the following keys for predicted_asset_type:\njson\n{ASSET_TYPE_DEFINITIONS}\n\n\nFile Types: These define the specific purpose of each file. Use one of the following keys for predicted_file_type:\njson\n{FILE_TYPE_DEFINITIONS}\n\n\nCore Task & Grouping Logic:\n\n1. Analyze Input: Examine the provided FILE_LIST. Pay close attention to directory paths and filenames (including prefixes, suffixes, separators like underscores or hyphens, and file extensions).\n2. Identify Potential Assets: Look for patterns that indicate files belong together:\n - Common Base Name: Files sharing a significant common prefix before map-type identifiers (e.g., Concrete_Damage_Set/concrete_ followed by col.png, N.png, rough.jpg).\n - Directory Grouping: Files located within the same immediate directory are often related, especially if their names follow a pattern (e.g., all files directly under SciFi_Drone/Textures/).\n - Model Association: If a MODEL file type (like .fbx, .obj) is present, group it with texture files that share its base name or are located in a plausible associated directory (like Textures/).\n - Single-File Assets (Utility Maps): Files whose names strongly suggest a UtilityMap type (e.g., scratches.tif, FlowMap.png, 21_hairs_deposits.tif) should typically form their own asset, unless they clearly belong to a larger PBR set based on naming conventions. 
Remember UtilityMap assets usually contain only one file as per their definition.\n - Variations: Files indicating variations (e.g., _A, _B or _variant_blue) should be grouped logically.\n - If variations represent complete, distinct sets (like Boards001_A and Boards001_B in the examples), create separate assets for each variation.\n - If variations seem like alternative maps or supplementary files for a single core asset (like pattern_01_diffuse.tga and variant_blue_diffuse.tga in the examples), group them under one asset. Use the base name (e.g., Fabric_Pattern_01) for the asset.\n3. Group Files: Based on the identified patterns, group the file paths into logical predicted_assets.\n4. Determine Asset Type: For each asset group, determine the most appropriate predicted_asset_type by considering the types of files it contains (e.g., presence of a .fbx suggests Model; multiple PBR maps like MAP_COL, MAP_NRM, MAP_ROUGH suggest Surface; a single imperfection map suggests UtilityMap). Refer to the ASSET_TYPE_DEFINITIONS.\n5. Suggest Asset Name: For each asset, generate a suggested_asset_name. This should be concise and derived from the common base filename or the immediate parent directory name. Clean up the name (e.g., use CamelCase or underscores consistently, remove redundant info like dimensions if not essential).\n6. Assign File Types: For each file_path within an asset, determine the most appropriate predicted_file_type based on its name, extension, and context within the asset. Use the keys from FILE_TYPE_DEFINITIONS.\n - Use FILE_IGNORE for files that should be ignored (e.g., Thumbs.db, .DS_Store).\n - Use EXTRA for files that belong to the asset but don't fit a standard map type (e.g., previews, text files, non-standard maps like Emissive unless you add a specific type for it).\n\nInput File List:\n\ntext\n{FILE_LIST}\n\n\nOutput Format:\n\nYour response MUST be ONLY a single, perfectly valid JSON object adhering strictly to the structure below. 
Do NOT include any text, explanations, or introductory phrases before or after the JSON object. Ensure all strings are correctly quoted and escaped, and there are NO trailing commas or comments (//, /* */).\n\nCRITICAL: The output must be strictly valid JSON parsable by standard libraries.\n\njson\n{\n \"predicted_assets\": [\n {\n \"suggested_asset_name\": \"string\", // Concise asset name derived from common file parts or directory\n \"predicted_asset_type\": \"string\", // Key from Asset Types definitions\n \"files\": [\n {\n \"file_path\": \"string\", // Exact relative path from the input list\n \"predicted_file_type\": \"string\" // Key from File Types definitions\n },\n // ... more files\n ]\n },\n // ... more assets\n ]\n}\n\n\nExamples:\n\nHere are examples of input file lists and the desired JSON output, illustrating the grouping logic:\n\njson\n[\n {EXAMPLE_INPUT_OUTPUT_PAIRS}\n]\n\n\nNow, process the provided FILE_LIST and generate ONLY the JSON output according to these instructions."
}
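The prompt above demands a strictly valid JSON object with no comments or trailing commas. A minimal consumer-side sketch of how such a response might be validated before use (the function name and the schema checks are illustrative, not part of the codebase shown in this diff):

```python
import json

def parse_prediction(raw: str) -> list[dict]:
    """Strictly parse the predicted_assets payload described by the prompt."""
    # json.loads rejects comments and trailing commas, matching the
    # "strictly valid JSON" requirement stated in the prompt above.
    data = json.loads(raw)
    assets = data["predicted_assets"]
    for asset in assets:
        # Every asset must carry a name, a type, and a file list.
        assert {"suggested_asset_name", "predicted_asset_type", "files"} <= asset.keys()
        for f in asset["files"]:
            assert {"file_path", "predicted_file_type"} <= f.keys()
    return assets
```

A malformed response (stray prose, comments, trailing commas) fails fast in `json.loads` rather than producing a half-parsed asset list.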
265 config/llm_settings.json Normal file
@@ -0,0 +1,265 @@
{
    "llm_predictor_examples": [
        {
            "input": "MessyTextures/Concrete_Damage_Set/concrete_col.png\nMessyTextures/Concrete_Damage_Set/concrete_N.png\nMessyTextures/Concrete_Damage_Set/concrete_rough.jpg\nMessyTextures/Concrete_Damage_Set/height_map_concrete.tif\nMessyTextures/Concrete_Damage_Set/Thumbs.db\nMessyTextures/Fabric_Pattern/pattern_01_diffuse.tga\nMessyTextures/Fabric_Pattern/pattern_01_ao.png\nMessyTextures/Fabric_Pattern/pattern_01_normal.png\nMessyTextures/Fabric_Pattern/notes.txt\nMessyTextures/Fabric_Pattern/variant_blue_diffuse.tga\nMessyTextures/Fabric_Pattern/fabric_flat.jpg",
            "output": {
                "individual_file_analysis": [
                    {
                        "relative_file_path": "MessyTextures/Concrete_Damage_Set/concrete_col.png",
                        "classified_file_type": "MAP_COL",
                        "proposed_asset_group_name": "Concrete_Damage_Set"
                    },
                    {
                        "relative_file_path": "MessyTextures/Concrete_Damage_Set/concrete_N.png",
                        "classified_file_type": "MAP_NRM",
                        "proposed_asset_group_name": "Concrete_Damage_Set"
                    },
                    {
                        "relative_file_path": "MessyTextures/Concrete_Damage_Set/concrete_rough.jpg",
                        "classified_file_type": "MAP_ROUGH",
                        "proposed_asset_group_name": "Concrete_Damage_Set"
                    },
                    {
                        "relative_file_path": "MessyTextures/Concrete_Damage_Set/height_map_concrete.tif",
                        "classified_file_type": "MAP_DISP",
                        "proposed_asset_group_name": "Concrete_Damage_Set"
                    },
                    {
                        "relative_file_path": "MessyTextures/Concrete_Damage_Set/Thumbs.db",
                        "classified_file_type": "FILE_IGNORE",
                        "proposed_asset_group_name": null
                    },
                    {
                        "relative_file_path": "MessyTextures/Fabric_Pattern/pattern_01_diffuse.tga",
                        "classified_file_type": "MAP_COL",
                        "proposed_asset_group_name": "Fabric_Pattern_01"
                    },
                    {
                        "relative_file_path": "MessyTextures/Fabric_Pattern/pattern_01_ao.png",
                        "classified_file_type": "MAP_AO",
                        "proposed_asset_group_name": "Fabric_Pattern_01"
                    },
                    {
                        "relative_file_path": "MessyTextures/Fabric_Pattern/pattern_01_normal.png",
                        "classified_file_type": "MAP_NRM",
                        "proposed_asset_group_name": "Fabric_Pattern_01"
                    },
                    {
                        "relative_file_path": "MessyTextures/Fabric_Pattern/notes.txt",
                        "classified_file_type": "EXTRA",
                        "proposed_asset_group_name": "Fabric_Pattern_01"
                    },
                    {
                        "relative_file_path": "MessyTextures/Fabric_Pattern/variant_blue_diffuse.tga",
                        "classified_file_type": "MAP_COL",
                        "proposed_asset_group_name": "Fabric_Pattern_01"
                    },
                    {
                        "relative_file_path": "MessyTextures/Fabric_Pattern/fabric_flat.jpg",
                        "classified_file_type": "EXTRA",
                        "proposed_asset_group_name": "Fabric_Pattern_01"
                    }
                ],
                "asset_group_classifications": {
                    "Concrete_Damage_Set": "Surface",
                    "Fabric_Pattern_01": "Surface"
                }
            }
        },
        {
            "input": "SciFi_Drone/Drone_Model.fbx\nSciFi_Drone/Textures/Drone_BaseColor.png\nSciFi_Drone/Textures/Drone_Metallic.png\nSciFi_Drone/Textures/Drone_Roughness.png\nSciFi_Drone/Textures/Drone_Normal.png\nSciFi_Drone/Textures/Drone_Emissive.jpg\nSciFi_Drone/ReferenceImages/concept.jpg",
            "output": {
                "individual_file_analysis": [
                    {
                        "relative_file_path": "SciFi_Drone/Drone_Model.fbx",
                        "classified_file_type": "MODEL",
                        "proposed_asset_group_name": "SciFi_Drone"
                    },
                    {
                        "relative_file_path": "SciFi_Drone/Textures/Drone_BaseColor.png",
                        "classified_file_type": "MAP_COL",
                        "proposed_asset_group_name": "SciFi_Drone"
                    },
                    {
                        "relative_file_path": "SciFi_Drone/Textures/Drone_Metallic.png",
                        "classified_file_type": "MAP_METAL",
                        "proposed_asset_group_name": "SciFi_Drone"
                    },
                    {
                        "relative_file_path": "SciFi_Drone/Textures/Drone_Roughness.png",
                        "classified_file_type": "MAP_ROUGH",
                        "proposed_asset_group_name": "SciFi_Drone"
                    },
                    {
                        "relative_file_path": "SciFi_Drone/Textures/Drone_Normal.png",
                        "classified_file_type": "MAP_NRM",
                        "proposed_asset_group_name": "SciFi_Drone"
                    },
                    {
                        "relative_file_path": "SciFi_Drone/Textures/Drone_Emissive.jpg",
                        "classified_file_type": "EXTRA",
                        "proposed_asset_group_name": "SciFi_Drone"
                    },
                    {
                        "relative_file_path": "SciFi_Drone/ReferenceImages/concept.jpg",
                        "classified_file_type": "EXTRA",
                        "proposed_asset_group_name": "SciFi_Drone"
                    }
                ],
                "asset_group_classifications": {
                    "SciFi_Drone": "Model"
                }
            }
        },
        {
            "input": "21_hairs_deposits.tif\n22_hairs_fabric.tif\n23_hairs_fibres.tif\n24_hairs_fibres.tif\n25_bonus_isolatedFingerprints.tif\n26_bonus_isolatedPalmprint.tif\n27_metal_aluminum.tif\n28_metal_castIron.tif\n29_scratcehes_deposits_shapes.tif\n30_scratches_deposits.tif",
            "output": {
                "individual_file_analysis": [
                    {
                        "relative_file_path": "21_hairs_deposits.tif",
                        "classified_file_type": "MAP_IMPERFECTION",
                        "proposed_asset_group_name": "Hairs_Deposits_21"
                    },
                    {
                        "relative_file_path": "22_hairs_fabric.tif",
                        "classified_file_type": "MAP_IMPERFECTION",
                        "proposed_asset_group_name": "Hairs_Fabric_22"
                    },
                    {
                        "relative_file_path": "23_hairs_fibres.tif",
                        "classified_file_type": "MAP_IMPERFECTION",
                        "proposed_asset_group_name": "Hairs_Fibres_23"
                    },
                    {
                        "relative_file_path": "24_hairs_fibres.tif",
                        "classified_file_type": "MAP_IMPERFECTION",
                        "proposed_asset_group_name": "Hairs_Fibres_24"
                    },
                    {
                        "relative_file_path": "25_bonus_isolatedFingerprints.tif",
                        "classified_file_type": "MAP_IMPERFECTION",
                        "proposed_asset_group_name": "Bonus_IsolatedFingerprints_25"
                    },
                    {
                        "relative_file_path": "26_bonus_isolatedPalmprint.tif",
                        "classified_file_type": "MAP_IMPERFECTION",
                        "proposed_asset_group_name": "Bonus_IsolatedPalmprint_26"
                    },
                    {
                        "relative_file_path": "27_metal_aluminum.tif",
                        "classified_file_type": "MAP_IMPERFECTION",
                        "proposed_asset_group_name": "Metal_Aluminum_27"
                    },
                    {
                        "relative_file_path": "28_metal_castIron.tif",
                        "classified_file_type": "MAP_IMPERFECTION",
                        "proposed_asset_group_name": "Metal_CastIron_28"
                    },
                    {
                        "relative_file_path": "29_scratcehes_deposits_shapes.tif",
                        "classified_file_type": "MAP_IMPERFECTION",
                        "proposed_asset_group_name": "Scratches_Deposits_Shapes_29"
                    },
                    {
                        "relative_file_path": "30_scratches_deposits.tif",
                        "classified_file_type": "MAP_IMPERFECTION",
                        "proposed_asset_group_name": "Scratches_Deposits_30"
                    }
                ],
                "asset_group_classifications": {
                    "Hairs_Deposits_21": "UtilityMap",
                    "Hairs_Fabric_22": "UtilityMap",
                    "Hairs_Fibres_23": "UtilityMap",
                    "Hairs_Fibres_24": "UtilityMap",
                    "Bonus_IsolatedFingerprints_25": "UtilityMap",
                    "Bonus_IsolatedPalmprint_26": "UtilityMap",
                    "Metal_Aluminum_27": "UtilityMap",
                    "Metal_CastIron_28": "UtilityMap",
                    "Scratches_Deposits_Shapes_29": "UtilityMap",
                    "Scratches_Deposits_30": "UtilityMap"
                }
            }
        },
        {
            "input": "Part1/TextureSupply_Boards001_A_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_A_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_B_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_B_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_C_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_C_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_D_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_D_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_E_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_E_28x300cm-Normal.jpg\nPart1/TextureSupply_Boards001_F_28x300cm-Albedo.jpg\nPart1/TextureSupply_Boards001_F_28x300cm-Normal.jpg",
            "output": {
                "individual_file_analysis": [
                    {
                        "relative_file_path": "Part1/TextureSupply_Boards001_A_28x300cm-Albedo.jpg",
                        "classified_file_type": "MAP_COL",
                        "proposed_asset_group_name": "Boards001_A"
                    },
                    {
                        "relative_file_path": "Part1/TextureSupply_Boards001_A_28x300cm-Normal.jpg",
                        "classified_file_type": "MAP_NRM",
                        "proposed_asset_group_name": "Boards001_A"
                    },
                    {
                        "relative_file_path": "Part1/TextureSupply_Boards001_B_28x300cm-Albedo.jpg",
                        "classified_file_type": "MAP_COL",
                        "proposed_asset_group_name": "Boards001_B"
                    },
                    {
                        "relative_file_path": "Part1/TextureSupply_Boards001_B_28x300cm-Normal.jpg",
                        "classified_file_type": "MAP_NRM",
                        "proposed_asset_group_name": "Boards001_B"
                    },
                    {
                        "relative_file_path": "Part1/TextureSupply_Boards001_C_28x300cm-Albedo.jpg",
                        "classified_file_type": "MAP_COL",
                        "proposed_asset_group_name": "Boards001_C"
                    },
                    {
                        "relative_file_path": "Part1/TextureSupply_Boards001_C_28x300cm-Normal.jpg",
                        "classified_file_type": "MAP_NRM",
                        "proposed_asset_group_name": "Boards001_C"
                    },
                    {
                        "relative_file_path": "Part1/TextureSupply_Boards001_D_28x300cm-Albedo.jpg",
                        "classified_file_type": "MAP_COL",
                        "proposed_asset_group_name": "Boards001_D"
                    },
                    {
                        "relative_file_path": "Part1/TextureSupply_Boards001_D_28x300cm-Normal.jpg",
                        "classified_file_type": "MAP_NRM",
                        "proposed_asset_group_name": "Boards001_D"
                    },
                    {
                        "relative_file_path": "Part1/TextureSupply_Boards001_E_28x300cm-Albedo.jpg",
                        "classified_file_type": "MAP_COL",
                        "proposed_asset_group_name": "Boards001_E"
                    },
                    {
                        "relative_file_path": "Part1/TextureSupply_Boards001_E_28x300cm-Normal.jpg",
                        "classified_file_type": "MAP_NRM",
                        "proposed_asset_group_name": "Boards001_E"
                    },
                    {
                        "relative_file_path": "Part1/TextureSupply_Boards001_F_28x300cm-Albedo.jpg",
                        "classified_file_type": "MAP_COL",
                        "proposed_asset_group_name": "Boards001_F"
                    },
                    {
                        "relative_file_path": "Part1/TextureSupply_Boards001_F_28x300cm-Normal.jpg",
                        "classified_file_type": "MAP_NRM",
                        "proposed_asset_group_name": "Boards001_F"
                    }
                ],
                "asset_group_classifications": {
                    "Boards001_A": "Surface",
                    "Boards001_B": "Surface",
                    "Boards001_C": "Surface",
                    "Boards001_D": "Surface",
                    "Boards001_E": "Surface",
                    "Boards001_F": "Surface"
                }
            }
        }
    ],
    "llm_endpoint_url": "https://api.llm.gestaltservers.com/v1/chat/completions",
    "llm_api_key": "",
    "llm_model_name": "qwen2.5-coder:3b",
    "llm_temperature": 0.5,
    "llm_request_timeout": 120,
    "llm_predictor_prompt": "You are an expert asset classification system. Your task is to analyze a list of file paths, understand their relationships based on naming and directory structure, and output a structured JSON object that classifies each file individually and then classifies the logical asset groups they belong to.\n\nDefinitions:\n\nAsset Types: These define the overall category of a logical asset group. Use one of the following keys when classifying asset groups:\njson\n{ASSET_TYPE_DEFINITIONS}\n\n\nFile Types: These define the specific purpose of each individual file. Use one of the following keys when classifying individual files:\njson\n{FILE_TYPE_DEFINITIONS}\n\n\nCore Task & Logic:\n\n1. **Individual File Analysis:**\n * Examine each `relative_file_path` in the input `FILE_LIST`.\n * For EACH file, determine its most likely `classified_file_type` using the `FILE_TYPE_DEFINITIONS`. Pay attention to filename suffixes, keywords, and extensions. Use `FILE_IGNORE` for files like `Thumbs.db` or `.DS_Store`. Use `EXTRA` for previews, metadata, or unidentifiable maps.\n * For EACH file, propose a logical `proposed_asset_group_name` (string). This name should represent the asset the file likely belongs to, based on common base names (e.g., `WoodFloor01` from `WoodFloor01_col.png`, `WoodFloor01_nrm.png`) or directory structure (e.g., `SciFi_Drone` for files within that folder).\n * Files that seem to be standalone utility maps (like `scratches.png`, `FlowMap.tif`) should get a unique group name derived from their filename (e.g., `Scratches`, `FlowMap`).\n * If a file doesn't seem to belong to any logical group (e.g., a stray readme file in the root), you can propose `null` or a generic name like `Miscellaneous`.\n * Be consistent with the proposed names for files belonging to the same logical asset.\n * Populate the `individual_file_analysis` array with one object for *every* file in the input list, containing `relative_file_path`, `classified_file_type`, and `proposed_asset_group_name`.\n\n2. **Asset Group Classification:**\n * Collect all unique, non-null `proposed_asset_group_name` values generated in the previous step.\n * For EACH unique group name, determine the overall `asset_type` (using `ASSET_TYPE_DEFINITIONS`) based on the types of files assigned to that group name in the `individual_file_analysis`.\n * Example: If files proposed as `AssetGroup1` include `MAP_COL`, `MAP_NRM`, `MAP_ROUGH`, classify `AssetGroup1` as `Surface`.\n * Example: If files proposed as `AssetGroup2` include `MODEL` and texture maps, classify `AssetGroup2` as `Model`.\n * Example: If `AssetGroup3` only has one file classified as `MAP_IMPERFECTION`, classify `AssetGroup3` as `UtilityMap`.\n * Populate the `asset_group_classifications` dictionary, mapping each unique `proposed_asset_group_name` to its determined `asset_type`.\n\nInput File List:\n\ntext\n{FILE_LIST}\n\n\nOutput Format:\n\nYour response MUST be ONLY a single JSON object. You MAY include comments (using // or /* */) within the JSON structure for clarification if needed, but the core structure must be valid JSON. Do NOT include any text, explanations, or introductory phrases before or after the JSON object itself. Ensure all strings are correctly quoted and escaped.\n\nCRITICAL: The output JSON structure must strictly adhere to the following format:\n\n```json\n{\n \"individual_file_analysis\": [\n {\n // Optional comment about this file\n \"relative_file_path\": \"string\", // Exact relative path from the input list\n \"classified_file_type\": \"string\", // Key from FILE_TYPE_DEFINITIONS\n \"proposed_asset_group_name\": \"string_or_null\" // Your suggested group name for this file\n }\n // ... one object for EVERY file in the input list\n ],\n \"asset_group_classifications\": {\n // Dictionary mapping unique proposed group names to asset types\n \"ProposedGroupName1\": \"string\", // Key: proposed_asset_group_name, Value: Key from ASSET_TYPE_DEFINITIONS\n \"ProposedGroupName2\": \"string\"\n // ... one entry for each unique, non-null proposed_asset_group_name\n }\n}\n```\n\nExamples:\n\nHere are examples of input file lists and the desired JSON output, illustrating the two-part structure:\n\njson\n[\n {EXAMPLE_INPUT_OUTPUT_PAIRS}\n]\n\n\nNow, process the provided FILE_LIST and generate ONLY the JSON output according to these instructions. Remember to include an entry in `individual_file_analysis` for every single input file path."
}
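The settings file above separates the prompt template from the data substituted into it at request time: `{ASSET_TYPE_DEFINITIONS}`, `{FILE_TYPE_DEFINITIONS}`, `{EXAMPLE_INPUT_OUTPUT_PAIRS}`, and `{FILE_LIST}`. The substitution code is not part of this diff; a plausible sketch using plain `str.replace` (rather than `str.format`, which would trip over the literal braces in the template's embedded JSON skeleton):

```python
import json

def build_prompt(template: str, file_list: list[str],
                 asset_defs: dict, file_defs: dict, examples: list) -> str:
    """Fill the placeholder tokens used by llm_predictor_prompt (hypothetical helper)."""
    return (template
            .replace("{ASSET_TYPE_DEFINITIONS}", json.dumps(asset_defs, indent=2))
            .replace("{FILE_TYPE_DEFINITIONS}", json.dumps(file_defs, indent=2))
            .replace("{EXAMPLE_INPUT_OUTPUT_PAIRS}", json.dumps(examples, indent=2))
            .replace("{FILE_LIST}", "\n".join(file_list)))

prompt = build_prompt(
    "Types:\n{ASSET_TYPE_DEFINITIONS}\nFiles:\n{FILE_LIST}",
    ["a/col.png", "a/nrm.png"],
    {"Surface": "tileable texture set"},   # illustrative definition text
    {"MAP_COL": "base color"},
    [],
)
```

Sequential `replace` calls are safe here because the token names are unique and never appear in the substituted data.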
@@ -13,7 +13,8 @@ log = logging.getLogger(__name__)  # Use logger defined in main.py
 # Assumes config/ and presets/ are relative to this file's location
 BASE_DIR = Path(__file__).parent
 APP_SETTINGS_PATH = BASE_DIR / "config" / "app_settings.json"
-PRESETS_DIR = BASE_DIR / "presets"
+LLM_SETTINGS_PATH = BASE_DIR / "config" / "llm_settings.json"  # Added LLM settings path
+PRESETS_DIR = BASE_DIR / "Presets"
 
 # --- Custom Exception ---
 class ConfigurationError(Exception):
@@ -89,6 +90,7 @@ class Configuration:
         log.debug(f"Initializing Configuration with preset: '{preset_name}'")
         self.preset_name = preset_name
         self._core_settings: dict = self._load_core_config()
+        self._llm_settings: dict = self._load_llm_config()  # Load LLM settings
         self._preset_settings: dict = self._load_preset(preset_name)
         self._validate_configs()
         self._compile_regex_patterns()  # Compile regex after validation
@@ -209,6 +211,26 @@ class Configuration:
         except Exception as e:
             raise ConfigurationError(f"Failed to read core configuration file {APP_SETTINGS_PATH}: {e}")
 
+    def _load_llm_config(self) -> dict:
+        """Loads settings from the llm_settings.json file."""
+        log.debug(f"Loading LLM config from: {LLM_SETTINGS_PATH}")
+        if not LLM_SETTINGS_PATH.is_file():
+            # Log a warning but don't raise an error, allow fallback if possible
+            log.warning(f"LLM configuration file not found: {LLM_SETTINGS_PATH}. LLM features might be disabled or use defaults.")
+            return {}  # Return empty dict if file not found
+        try:
+            with open(LLM_SETTINGS_PATH, 'r', encoding='utf-8') as f:
+                settings = json.load(f)
+            log.debug("LLM config loaded successfully.")
+            return settings
+        except json.JSONDecodeError as e:
+            log.error(f"Failed to parse LLM configuration file {LLM_SETTINGS_PATH}: Invalid JSON - {e}")
+            return {}  # Return empty dict on parse error
+        except Exception as e:
+            log.error(f"Failed to read LLM configuration file {LLM_SETTINGS_PATH}: {e}")
+            return {}  # Return empty dict on other read errors
+
+
     def _load_preset(self, preset_name: str) -> dict:
         """Loads the specified preset JSON file."""
         log.debug(f"Loading preset: '{preset_name}' from {PRESETS_DIR}")
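`_load_llm_config` degrades gracefully: a missing, unreadable, or malformed `llm_settings.json` yields an empty dict instead of an exception, so the rest of the application keeps working with defaults. The same pattern in a standalone, testable form (the helper name is illustrative, not from the codebase):

```python
import json
import logging
from pathlib import Path

log = logging.getLogger(__name__)

def load_json_or_empty(path: Path) -> dict:
    """Return parsed JSON, or {} for a missing/unreadable/invalid file."""
    if not path.is_file():
        log.warning("Config file not found: %s", path)
        return {}
    try:
        return json.loads(path.read_text(encoding="utf-8"))
    except (json.JSONDecodeError, OSError) as e:
        log.error("Could not load %s: %s", path, e)
        return {}
```

Returning `{}` rather than raising pushes the failure decision to the accessors, which each supply their own default.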
@@ -263,8 +285,22 @@ class Configuration:
             raise ConfigurationError("Core config 'IMAGE_RESOLUTIONS' must be a dictionary.")
         if not isinstance(self._core_settings.get('STANDARD_MAP_TYPES'), list):
             raise ConfigurationError("Core config 'STANDARD_MAP_TYPES' must be a list.")
-        # Add more checks as necessary
-        log.debug("Configuration validation passed.")
+        # LLM settings validation (check if keys exist if the file was loaded)
+        if self._llm_settings:  # Only validate if LLM settings were loaded
+            required_llm_keys = [
+                "llm_predictor_examples", "llm_endpoint_url", "llm_api_key",
+                "llm_model_name", "llm_temperature", "llm_request_timeout",
+                "llm_predictor_prompt"
+            ]
+            for key in required_llm_keys:
+                if key not in self._llm_settings:
+                    # Log a warning instead of raising an error to allow partial functionality
+                    log.warning(f"LLM config is missing recommended key: '{key}'. LLM features might not work correctly.")
+
+        # Add more checks as necessary
+        log.debug("Configuration validation passed.")
 
 
 # --- Accessor Methods/Properties ---
@@ -409,13 +445,40 @@ class Configuration:
         return list(self.get_file_type_definitions_with_examples().keys())
 
     def get_llm_examples(self) -> list:
-        """Returns the list of LLM input/output examples from core settings."""
-        return self._core_settings.get('llm_predictor_examples', [])
+        """Returns the list of LLM input/output examples from LLM settings."""
+        # Use empty list as fallback if LLM settings file is missing/invalid
+        return self._llm_settings.get('llm_predictor_examples', [])
+
+    @property
+    def llm_predictor_prompt(self) -> str:
+        """Returns the LLM predictor prompt string from LLM settings."""
+        return self._llm_settings.get('llm_predictor_prompt', '')  # Fallback to empty string
+
+    @property
+    def llm_endpoint_url(self) -> str:
+        """Returns the LLM endpoint URL from LLM settings."""
+        return self._llm_settings.get('llm_endpoint_url', '')
+
+    @property
+    def llm_api_key(self) -> str:
+        """Returns the LLM API key from LLM settings."""
+        return self._llm_settings.get('llm_api_key', '')
+
+    @property
+    def llm_model_name(self) -> str:
+        """Returns the LLM model name from LLM settings."""
+        return self._llm_settings.get('llm_model_name', '')
+
+    @property
+    def llm_temperature(self) -> float:
+        """Returns the LLM temperature from LLM settings."""
+        return self._llm_settings.get('llm_temperature', 0.5)  # Default temperature
+
+    @property
+    def llm_request_timeout(self) -> int:
+        """Returns the LLM request timeout in seconds from LLM settings."""
+        return self._llm_settings.get('llm_request_timeout', 120)  # Default timeout
+
-    def get_setting(self, key: str, default: any = None) -> any:
-        """Gets a specific setting by key from the core settings."""
-        # Note: This accesses _core_settings directly, not combined/preset settings.
-        return self._core_settings.get(key, default)
 
 # --- Standalone Base Config Functions ---
 
 def load_base_config() -> dict:
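Each accessor above falls back to a hard-coded default via `dict.get`, which is what makes the empty dict returned by `_load_llm_config` safe. The behaviour can be seen in isolation with the defaults gathered in one place (a hypothetical sketch, not the class in this diff):

```python
class LLMSettingsView:
    """Hypothetical read-only view mirroring the per-accessor fallbacks above."""
    _DEFAULTS = {
        "llm_temperature": 0.5,      # same default as the llm_temperature property
        "llm_request_timeout": 120,  # same default as the llm_request_timeout property
        "llm_model_name": "",
    }

    def __init__(self, raw=None):
        self._raw = raw or {}  # empty dict when the settings file was unreadable

    def get(self, key):
        return self._raw.get(key, self._DEFAULTS.get(key))
```

Centralising defaults like this is one alternative to repeating them in every property; the diff instead keeps each default next to its accessor.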
@@ -441,6 +504,19 @@ def load_base_config() -> dict:
         log.error(f"Failed to read base configuration file {APP_SETTINGS_PATH}: {e}")
         return {}  # Return empty dict on error
 
+def save_llm_config(settings_dict: dict):
+    """
+    Saves the provided LLM settings dictionary to llm_settings.json.
+    """
+    log.debug(f"Saving LLM config to: {LLM_SETTINGS_PATH}")
+    try:
+        with open(LLM_SETTINGS_PATH, 'w', encoding='utf-8') as f:
+            json.dump(settings_dict, f, indent=4)
+        log.info(f"LLM config saved successfully to {LLM_SETTINGS_PATH}")  # Use info level for successful save
+    except Exception as e:
+        log.error(f"Failed to save LLM configuration file {LLM_SETTINGS_PATH}: {e}")
+        # Re-raise as ConfigurationError to signal failure upstream
+        raise ConfigurationError(f"Failed to save LLM configuration: {e}")
+
 def save_base_config(settings_dict: dict):
     """
     Saves the provided settings dictionary to app_settings.json.
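`save_llm_config` and `_load_llm_config` together form a simple round trip: `json.dump(..., indent=4)` on save, `json.load` on load. A self-contained check of that round trip against a throwaway file (the path here is temporary, not the real config location):

```python
import json
import tempfile
from pathlib import Path

def save_settings(path: Path, settings: dict) -> None:
    # Same write pattern as save_llm_config: UTF-8, pretty-printed JSON.
    with open(path, "w", encoding="utf-8") as f:
        json.dump(settings, f, indent=4)

with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "llm_settings.json"
    save_settings(p, {"llm_model_name": "qwen2.5-coder:3b", "llm_temperature": 0.5})
    loaded = json.loads(p.read_text(encoding="utf-8"))
    assert loaded["llm_temperature"] == 0.5
```

Note the write is not atomic: a crash mid-write can leave a truncated file, which the loader's `JSONDecodeError` branch then turns into an empty settings dict rather than a crash.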
@@ -65,6 +65,7 @@ class BasePredictionHandler(QObject, ABC, metaclass=QtABCMeta):
         Main execution slot intended to be connected to QThread.started.
         Handles the overall process: setup, execution, error handling, signaling.
         """
+        log.debug(f"--> Entered BasePredictionHandler.run() for {self.input_source_identifier}")  # ADDED DEBUG LOG
         if self._is_running:
             log.warning(f"Handler for '{self.input_source_identifier}' is already running. Aborting.")
             return
318 gui/llm_editor_widget.py Normal file
@@ -0,0 +1,318 @@
# gui/llm_editor_widget.py
import json
import logging
from PySide6.QtWidgets import (
    QWidget, QVBoxLayout, QTabWidget, QPlainTextEdit, QGroupBox,
    QHBoxLayout, QPushButton, QFormLayout, QLineEdit, QDoubleSpinBox,
    QSpinBox, QMessageBox, QTextEdit
)
from PySide6.QtCore import Slot as pyqtSlot, Signal as pyqtSignal  # Use PySide6 equivalents

# Assuming configuration module exists and has relevant functions later
from configuration import save_llm_config, ConfigurationError  # Import necessary items
# For now, define path directly for initial structure
LLM_CONFIG_PATH = "config/llm_settings.json"

logger = logging.getLogger(__name__)


class LLMEditorWidget(QWidget):
    """
    Widget for editing LLM settings stored in config/llm_settings.json.
    """
    settings_saved = pyqtSignal()  # Signal emitted when settings are successfully saved

    def __init__(self, parent=None):
        super().__init__(parent)
        self._unsaved_changes = False
        self._init_ui()
        self._connect_signals()
        self.save_button.setEnabled(False)  # Initially disabled

    def _init_ui(self):
        """Initialize the user interface components."""
        main_layout = QVBoxLayout(self)

        # --- Main Tab Widget ---
        self.tab_widget = QTabWidget()
        main_layout.addWidget(self.tab_widget)

        # --- Tab 1: Prompt Settings ---
        self.tab_prompt = QWidget()
        prompt_layout = QVBoxLayout(self.tab_prompt)
        self.tab_widget.addTab(self.tab_prompt, "Prompt Settings")

        self.prompt_editor = QPlainTextEdit()
        self.prompt_editor.setPlaceholderText("Enter the main LLM predictor prompt here...")
        prompt_layout.addWidget(self.prompt_editor)

        # Examples GroupBox
        examples_groupbox = QGroupBox("Examples")
        examples_layout = QVBoxLayout(examples_groupbox)
        prompt_layout.addWidget(examples_groupbox)

        self.examples_tab_widget = QTabWidget()
        self.examples_tab_widget.setTabsClosable(True)
        examples_layout.addWidget(self.examples_tab_widget)

        example_button_layout = QHBoxLayout()
        examples_layout.addLayout(example_button_layout)

        self.add_example_button = QPushButton("Add Example")
        example_button_layout.addWidget(self.add_example_button)

        self.delete_example_button = QPushButton("Delete Current Example")
        example_button_layout.addWidget(self.delete_example_button)
        example_button_layout.addStretch()

        # --- Tab 2: API Settings ---
        self.tab_api = QWidget()
        api_layout = QFormLayout(self.tab_api)
        self.tab_widget.addTab(self.tab_api, "API Settings")

        self.endpoint_url_edit = QLineEdit()
        api_layout.addRow("Endpoint URL:", self.endpoint_url_edit)

        self.api_key_edit = QLineEdit()
        self.api_key_edit.setEchoMode(QLineEdit.Password)
        api_layout.addRow("API Key:", self.api_key_edit)

        self.model_name_edit = QLineEdit()
        api_layout.addRow("Model Name:", self.model_name_edit)

        self.temperature_spinbox = QDoubleSpinBox()
        self.temperature_spinbox.setRange(0.0, 2.0)
        self.temperature_spinbox.setSingleStep(0.1)
        self.temperature_spinbox.setDecimals(2)
        api_layout.addRow("Temperature:", self.temperature_spinbox)

        self.timeout_spinbox = QSpinBox()
        self.timeout_spinbox.setRange(1, 600)
        self.timeout_spinbox.setSuffix(" s")
        api_layout.addRow("Request Timeout:", self.timeout_spinbox)

        # --- Save Button ---
        save_button_layout = QHBoxLayout()
        main_layout.addLayout(save_button_layout)
        save_button_layout.addStretch()
        self.save_button = QPushButton("Save LLM Settings")
        save_button_layout.addWidget(self.save_button)

        self.setLayout(main_layout)

    def _connect_signals(self):
        """Connect signals to slots."""
        # Save button
        self.save_button.clicked.connect(self._save_settings)

        # Fields triggering unsaved changes
        self.prompt_editor.textChanged.connect(self._mark_unsaved)
        self.endpoint_url_edit.textChanged.connect(self._mark_unsaved)
        self.api_key_edit.textChanged.connect(self._mark_unsaved)
        self.model_name_edit.textChanged.connect(self._mark_unsaved)
        self.temperature_spinbox.valueChanged.connect(self._mark_unsaved)
        self.timeout_spinbox.valueChanged.connect(self._mark_unsaved)

        # Example management buttons and tab close signal
        self.add_example_button.clicked.connect(self._add_example_tab)
        self.delete_example_button.clicked.connect(self._delete_current_example_tab)
        self.examples_tab_widget.tabCloseRequested.connect(self._remove_example_tab)

        # Note: Connecting textChanged for example editors needs to happen
        # when the tabs/editors are created (in load_settings and _add_example_tab)

    @pyqtSlot()
    def load_settings(self):
        """Load settings from the JSON file and populate the UI."""
        logger.info(f"Attempting to load LLM settings from {LLM_CONFIG_PATH}")
        self.setEnabled(True)  # Enable widget before trying to load

        # Clear previous examples
        while self.examples_tab_widget.count() > 0:
            self.examples_tab_widget.removeTab(0)

        try:
            with open(LLM_CONFIG_PATH, 'r', encoding='utf-8') as f:
                settings = json.load(f)

            # Populate Prompt Settings
            self.prompt_editor.setPlainText(settings.get("llm_predictor_prompt", ""))

            # Populate Examples
            examples = settings.get("llm_predictor_examples", [])
            for i, example in enumerate(examples):
                try:
                    example_text = json.dumps(example, indent=4)
                    example_editor = QTextEdit()
                    example_editor.setPlainText(example_text)
                    example_editor.textChanged.connect(self._mark_unsaved)  # Connect here
                    self.examples_tab_widget.addTab(example_editor, f"Example {i+1}")
                except TypeError as e:
                    logger.error(f"Error formatting example {i+1}: {e}. Skipping.")
                    QMessageBox.warning(self, "Load Error", f"Could not format example {i+1}. It might be invalid.\nError: {e}")

            # Populate API Settings
            self.endpoint_url_edit.setText(settings.get("llm_endpoint_url", ""))
            self.api_key_edit.setText(settings.get("llm_api_key", ""))  # Consider security implications
            self.model_name_edit.setText(settings.get("llm_model_name", ""))
            self.temperature_spinbox.setValue(settings.get("llm_temperature", 0.7))
            self.timeout_spinbox.setValue(settings.get("llm_request_timeout", 120))
||||||
|
logger.info("LLM settings loaded successfully.")
|
||||||
|
|
||||||
|
except FileNotFoundError:
|
||||||
|
logger.warning(f"LLM settings file not found: {LLM_CONFIG_PATH}. Using defaults and disabling editor.")
|
||||||
|
QMessageBox.warning(self, "Load Error",
|
||||||
|
f"LLM settings file not found:\n{LLM_CONFIG_PATH}\n\nPlease ensure the file exists. Using default values.")
|
||||||
|
# Reset to defaults (optional, or leave fields empty)
|
||||||
|
self.prompt_editor.clear()
|
||||||
|
self.endpoint_url_edit.clear()
|
||||||
|
self.api_key_edit.clear()
|
||||||
|
self.model_name_edit.clear()
|
||||||
|
self.temperature_spinbox.setValue(0.7)
|
||||||
|
self.timeout_spinbox.setValue(120)
|
||||||
|
# self.setEnabled(False) # Disabling might be too harsh if user wants to create settings
|
||||||
|
|
||||||
|
except json.JSONDecodeError as e:
|
||||||
|
logger.error(f"Error decoding JSON from {LLM_CONFIG_PATH}: {e}")
|
||||||
|
QMessageBox.critical(self, "Load Error",
|
||||||
|
f"Failed to parse LLM settings file:\n{LLM_CONFIG_PATH}\n\nError: {e}\n\nPlease check the file for syntax errors. Editor will be disabled.")
|
||||||
|
self.setEnabled(False) # Disable editor on critical load error
|
||||||
|
|
||||||
|
except Exception as e: # Catch other potential errors during loading/populating
|
||||||
|
logger.error(f"An unexpected error occurred loading LLM settings: {e}", exc_info=True)
|
||||||
|
QMessageBox.critical(self, "Load Error",
|
||||||
|
f"An unexpected error occurred while loading settings:\n{e}\n\nEditor will be disabled.")
|
||||||
|
self.setEnabled(False)
|
||||||
|
|
||||||
|
|
||||||
|
# Reset unsaved changes flag and disable save button after loading
|
||||||
|
self.save_button.setEnabled(False)
|
||||||
|
self._unsaved_changes = False
|
||||||
|
|
||||||
|
@pyqtSlot()
|
||||||
|
def _mark_unsaved(self):
|
||||||
|
"""Mark settings as having unsaved changes and enable the save button."""
|
||||||
|
if not self._unsaved_changes:
|
||||||
|
self._unsaved_changes = True
|
||||||
|
self.save_button.setEnabled(True)
|
||||||
|
logger.debug("Unsaved changes marked.")
|
||||||
|
|
||||||
|
@pyqtSlot()
|
||||||
|
def _save_settings(self):
|
||||||
|
"""Gather data from UI, save to JSON file, and handle errors."""
|
||||||
|
logger.info("Attempting to save LLM settings...")
|
||||||
|
|
||||||
|
settings_dict = {}
|
||||||
|
parsed_examples = []
|
||||||
|
has_errors = False
|
||||||
|
|
||||||
|
# Gather API Settings
|
||||||
|
settings_dict["llm_endpoint_url"] = self.endpoint_url_edit.text().strip()
|
||||||
|
settings_dict["llm_api_key"] = self.api_key_edit.text() # Keep as is, don't strip
|
||||||
|
settings_dict["llm_model_name"] = self.model_name_edit.text().strip()
|
||||||
|
settings_dict["llm_temperature"] = self.temperature_spinbox.value()
|
||||||
|
settings_dict["llm_request_timeout"] = self.timeout_spinbox.value()
|
||||||
|
|
||||||
|
# Gather Prompt Settings
|
||||||
|
settings_dict["llm_predictor_prompt"] = self.prompt_editor.toPlainText().strip()
|
||||||
|
|
||||||
|
# Gather and Parse Examples
|
||||||
|
for i in range(self.examples_tab_widget.count()):
|
||||||
|
example_editor = self.examples_tab_widget.widget(i)
|
||||||
|
if isinstance(example_editor, QTextEdit):
|
||||||
|
example_text = example_editor.toPlainText().strip()
|
||||||
|
if not example_text: # Skip empty examples silently
|
||||||
|
continue
|
||||||
|
try:
|
||||||
|
parsed_example = json.loads(example_text)
|
||||||
|
parsed_examples.append(parsed_example)
|
||||||
|
except json.JSONDecodeError as e:
|
||||||
|
has_errors = True
|
||||||
|
tab_name = self.examples_tab_widget.tabText(i)
|
||||||
|
logger.warning(f"Invalid JSON in '{tab_name}': {e}. Skipping example.")
|
||||||
|
QMessageBox.warning(self, "Invalid Example",
|
||||||
|
f"The content in '{tab_name}' is not valid JSON and will not be saved.\n\nError: {e}\n\nPlease correct it or remove the tab.")
|
||||||
|
# Optionally switch to the tab with the error:
|
||||||
|
# self.examples_tab_widget.setCurrentIndex(i)
|
||||||
|
else:
|
||||||
|
logger.warning(f"Widget at index {i} in examples tab is not a QTextEdit. Skipping.")
|
||||||
|
|
||||||
|
|
||||||
|
if has_errors:
|
||||||
|
logger.warning("LLM settings not saved due to invalid JSON in examples.")
|
||||||
|
# Keep save button enabled if there were errors, allowing user to fix and retry
|
||||||
|
# self.save_button.setEnabled(True)
|
||||||
|
# self._unsaved_changes = True
|
||||||
|
return # Stop saving process
|
||||||
|
|
||||||
|
settings_dict["llm_predictor_examples"] = parsed_examples
|
||||||
|
|
||||||
|
# Save the dictionary to file
|
||||||
|
try:
|
||||||
|
save_llm_config(settings_dict)
|
||||||
|
QMessageBox.information(self, "Save Successful", f"LLM settings saved to:\n{LLM_CONFIG_PATH}")
|
||||||
|
self.save_button.setEnabled(False)
|
||||||
|
self._unsaved_changes = False
|
||||||
|
self.settings_saved.emit() # Notify MainWindow or others
|
||||||
|
logger.info("LLM settings saved successfully.")
|
||||||
|
|
||||||
|
except ConfigurationError as e:
|
||||||
|
logger.error(f"Failed to save LLM settings: {e}")
|
||||||
|
QMessageBox.critical(self, "Save Error", f"Could not save LLM settings.\n\nError: {e}")
|
||||||
|
# Keep save button enabled as save failed
|
||||||
|
self.save_button.setEnabled(True)
|
||||||
|
self._unsaved_changes = True
|
||||||
|
except Exception as e: # Catch unexpected errors during save
|
||||||
|
logger.error(f"An unexpected error occurred during LLM settings save: {e}", exc_info=True)
|
||||||
|
QMessageBox.critical(self, "Save Error", f"An unexpected error occurred while saving settings:\n{e}")
|
||||||
|
self.save_button.setEnabled(True)
|
||||||
|
self._unsaved_changes = True
|
||||||
|
|
||||||
|
# --- Example Management Slots ---
|
||||||
|
@pyqtSlot()
|
||||||
|
def _add_example_tab(self):
|
||||||
|
"""Add a new, empty tab for an LLM example."""
|
||||||
|
logger.debug("Adding new example tab.")
|
||||||
|
new_example_editor = QTextEdit()
|
||||||
|
new_example_editor.setPlaceholderText("Enter example JSON here...")
|
||||||
|
new_example_editor.textChanged.connect(self._mark_unsaved) # Connect signal
|
||||||
|
|
||||||
|
# Determine the next example number
|
||||||
|
next_example_num = self.examples_tab_widget.count() + 1
|
||||||
|
index = self.examples_tab_widget.addTab(new_example_editor, f"Example {next_example_num}")
|
||||||
|
self.examples_tab_widget.setCurrentIndex(index) # Focus the new tab
|
||||||
|
new_example_editor.setFocus() # Focus the editor within the tab
|
||||||
|
|
||||||
|
self._mark_unsaved() # Mark changes since we added a tab
|
||||||
|
|
||||||
|
@pyqtSlot()
|
||||||
|
def _delete_current_example_tab(self):
|
||||||
|
"""Delete the currently selected example tab."""
|
||||||
|
current_index = self.examples_tab_widget.currentIndex()
|
||||||
|
if current_index != -1: # Check if a tab is selected
|
||||||
|
logger.debug(f"Deleting current example tab at index {current_index}.")
|
||||||
|
self._remove_example_tab(current_index) # Reuse the remove logic
|
||||||
|
else:
|
||||||
|
logger.debug("Delete current example tab called, but no tab is selected.")
|
||||||
|
|
||||||
|
@pyqtSlot(int)
|
||||||
|
def _remove_example_tab(self, index):
|
||||||
|
"""Remove the example tab at the given index."""
|
||||||
|
if 0 <= index < self.examples_tab_widget.count():
|
||||||
|
widget_to_remove = self.examples_tab_widget.widget(index)
|
||||||
|
self.examples_tab_widget.removeTab(index)
|
||||||
|
if widget_to_remove:
|
||||||
|
# Disconnect signals if necessary, though Python's GC should handle it
|
||||||
|
# widget_to_remove.textChanged.disconnect(self._mark_unsaved) # Optional cleanup
|
||||||
|
widget_to_remove.deleteLater() # Ensure proper cleanup of the widget
|
||||||
|
logger.debug(f"Removed example tab at index {index}.")
|
||||||
|
|
||||||
|
# Renumber subsequent tabs
|
||||||
|
for i in range(index, self.examples_tab_widget.count()):
|
||||||
|
self.examples_tab_widget.setTabText(i, f"Example {i+1}")
|
||||||
|
|
||||||
|
self._mark_unsaved() # Mark changes since we removed a tab
|
||||||
|
else:
|
||||||
|
logger.warning(f"Attempted to remove example tab at invalid index {index}.")
|
||||||
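The per-tab validation in `_save_settings` (skip empty tabs silently, collect parse failures instead of aborting on the first one) can be exercised without Qt. A minimal sketch — `validate_examples` is a hypothetical helper, not part of the widget:

```python
import json

def validate_examples(texts):
    """Parse example strings the way _save_settings does: empty entries
    are skipped silently; invalid JSON is recorded, not raised."""
    parsed, errors = [], []
    for i, text in enumerate(texts):
        text = text.strip()
        if not text:
            continue  # empty tabs are skipped silently
        try:
            parsed.append(json.loads(text))
        except json.JSONDecodeError as e:
            # record the tab label and message, mirroring the warning dialog
            errors.append((f"Example {i+1}", str(e)))
    return parsed, errors

parsed, errors = validate_examples(['{"a": 1}', "", "not json"])
# parsed keeps the one valid example; errors names the offending tab
```

Collecting errors rather than raising lets the widget warn about every bad tab and still refuse to save, instead of stopping at the first failure.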
@@ -1,4 +1,5 @@
 import os
+import json # Added for direct config loading
 import logging
 from pathlib import Path
@@ -7,17 +8,24 @@ from PySide6.QtCore import QObject, Signal, QThread, Slot, QTimer
 # --- Backend Imports ---
 # Assuming these might be needed based on MainWindow's usage
 try:
-    from configuration import Configuration, ConfigurationError, load_base_config
+    # Removed load_base_config import
+    # Removed Configuration import as we load manually now
+    from configuration import ConfigurationError  # Keep error class
     from .llm_prediction_handler import LLMPredictionHandler  # Backend handler
     from rule_structure import SourceRule  # For signal emission type hint
 except ImportError as e:
     logging.getLogger(__name__).critical(f"Failed to import backend modules for LLMInteractionHandler: {e}")
     LLMPredictionHandler = None
-    load_base_config = None
+    # load_base_config = None  # Removed
     ConfigurationError = Exception
     SourceRule = None  # Define as None if import fails
+    # Configuration = None  # Removed

 log = logging.getLogger(__name__)

+# Define config file paths relative to this handler's location
+CONFIG_DIR = Path(__file__).parent.parent / "config"
+APP_SETTINGS_PATH = CONFIG_DIR / "app_settings.json"
+LLM_SETTINGS_PATH = CONFIG_DIR / "llm_settings.json"
+
 class LLMInteractionHandler(QObject):
     """
@@ -53,6 +61,22 @@ class LLMInteractionHandler(QObject):
         log.debug(f"LLM Handler processing state changed to: {processing}")
         self.llm_processing_state_changed.emit(processing)

+    def force_reset_state(self):
+        """Forces the processing state to False. Use with caution."""
+        log.warning("Forcing LLMInteractionHandler state reset.")
+        if self.llm_prediction_thread and self.llm_prediction_thread.isRunning():
+            log.warning("Force reset called while thread is running. Attempting to stop thread.")
+            # Attempt graceful shutdown first
+            self.llm_prediction_thread.quit()
+            if not self.llm_prediction_thread.wait(500):  # Wait 0.5 sec
+                log.warning("LLM thread did not quit gracefully after force reset. Terminating.")
+                self.llm_prediction_thread.terminate()
+                self.llm_prediction_thread.wait()  # Wait after terminate
+        self.llm_prediction_thread = None
+        self.llm_prediction_handler = None
+        self._set_processing_state(False)
+        # Do NOT clear the queue here, let the user decide via Clear Queue button
+
     @Slot(str, list)
     def queue_llm_request(self, input_path: str, file_list: list | None):
         """Adds a request to the LLM processing queue."""
@@ -73,6 +97,7 @@ class LLMInteractionHandler(QObject):
     def queue_llm_requests_batch(self, requests: list[tuple[str, list | None]]):
         """Adds multiple requests to the LLM processing queue."""
         added_count = 0
+        log.debug(f"Queueing batch. Current queue content: {self.llm_processing_queue}")  # ADDED DEBUG LOG
         for input_path, file_list in requests:
             is_in_queue = any(item[0] == input_path for item in self.llm_processing_queue)
             if not is_in_queue:
@@ -97,10 +122,10 @@ class LLMInteractionHandler(QObject):
         self.llm_prediction_thread = None
         self.llm_prediction_handler = None
         # --- Process next item now that the previous thread is fully finished ---
-        log.debug("Previous LLM thread finished. Triggering processing for next item by calling _process_next_llm_item...")
-        self._set_processing_state(False)  # Mark processing as finished *before* trying next item
-        # Use QTimer.singleShot to yield control briefly before starting next item
-        QTimer.singleShot(0, self._process_next_llm_item)
+        log.debug("Previous LLM thread finished. Setting processing state to False.")
+        self._set_processing_state(False)  # Mark processing as finished
+        # The next item will be processed when _handle_llm_result or _handle_llm_error
+        # calls _process_next_llm_item after popping the completed item.
         log.debug("<-- Exiting LLMInteractionHandler._reset_llm_thread_references")
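The hunk above switches the queue discipline from pop-on-start to pop-on-completion: an item now stays at the head of the queue until its result or error arrives. A plain-Python model of that discipline — `PredictionQueue` is illustrative only, not the handler's actual API:

```python
class PredictionQueue:
    """Illustrative pop-on-completion queue: the head item remains queued
    while it is being processed, so an incomplete run leaves it in place."""
    def __init__(self):
        self.items = []
        self.processing = False

    def enqueue(self, path):
        if all(p != path for p in self.items):  # de-duplicate, as queue_llm_request does
            self.items.append(path)

    def start_next(self):
        if self.processing or not self.items:
            return None
        self.processing = True
        return self.items[0]  # peek the head; do NOT pop yet

    def on_finished(self, path):
        # pop only when the result/error callback fires for the head item
        if self.items and self.items[0] == path:
            self.items.pop(0)
        self.processing = False

q = PredictionQueue()
q.enqueue("/assets/a")
q.enqueue("/assets/a")  # duplicate, ignored
q.enqueue("/assets/b")
current = q.start_next()  # head item, still in the queue while running
q.on_finished(current)    # popped on completion; the next item can start
```

Keeping the item queued while its thread runs means a setup failure cannot silently drop a request: the error path decides whether to pop and advance.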
@@ -114,16 +139,6 @@ class LLMInteractionHandler(QObject):
         # Extract file list if not provided (needed for re-interpretation calls)
         if file_list is None:
             log.debug(f"File list not provided for {input_path_str}, extracting...")
-            # Need access to MainWindow's _extract_file_list or reimplement
-            # For now, assume MainWindow provides it or pass it during queueing
-            # Let's assume file_list is always provided correctly for now.
-            # If extraction fails before queueing, it won't reach here.
-            # If extraction needs to happen here, MainWindow ref is needed.
-            # Re-evaluating: MainWindow._extract_file_list is complex.
-            # It's better if the caller (MainWindow) extracts and passes the list.
-            # We'll modify queue_llm_request to require a non-None list eventually,
-            # or pass the main_window ref to call its extraction method.
-            # Let's pass main_window ref for now.
             if hasattr(self.main_window, '_extract_file_list'):
                 file_list = self.main_window._extract_file_list(input_path_str)
                 if file_list is None:
@@ -131,11 +146,6 @@ class LLMInteractionHandler(QObject):
                     log.error(error_msg)
                     self.llm_status_update.emit(f"Error extracting files for {os.path.basename(input_path_str)}")
                     self.llm_prediction_error.emit(input_path_str, error_msg)  # Signal error
-                    # If called as part of a queue, we need to ensure the next item is processed.
-                    # _reset_llm_thread_references handles this via the finished signal,
-                    # but if the thread never starts, we need to trigger manually.
-                    # This case should ideally be caught before calling _start_llm_prediction.
-                    # We'll assume the queue logic handles failed extraction before calling this.
                     return  # Stop if extraction failed
             else:
                 error_msg = f"MainWindow reference does not have _extract_file_list method."
@@ -153,88 +163,143 @@ class LLMInteractionHandler(QObject):
             self.llm_prediction_error.emit(input_path_str, error_msg)
             return

-        # --- Load Base Config for LLM Settings ---
-        if load_base_config is None:
-            log.critical("LLM Error: load_base_config function not available.")
-            self.llm_status_update.emit("LLM Error: Cannot load base configuration.")
-            self.llm_prediction_error.emit(input_path_str, "load_base_config function not available.")
-            return
+        # --- Load Required Settings Directly ---
+        llm_settings = {}
         try:
-            base_config = load_base_config()
-            if not base_config:
-                raise ConfigurationError("Failed to load base configuration (app_settings.json).")
-
-            llm_settings = {
-                "llm_endpoint_url": base_config.get('llm_endpoint_url'),
-                "api_key": base_config.get('llm_api_key'),
-                "model_name": base_config.get('llm_model_name', 'gemini-pro'),
-                "prompt_template_content": base_config.get('llm_predictor_prompt'),
-                "asset_types": base_config.get('ASSET_TYPE_DEFINITIONS', {}),
-                "file_types": base_config.get('FILE_TYPE_DEFINITIONS', {}),
-                "examples": base_config.get('llm_predictor_examples', [])
-            }
-        except ConfigurationError as e:
-            log.error(f"LLM Configuration Error: {e}")
-            self.llm_status_update.emit(f"LLM Config Error: {e}")
-            self.llm_prediction_error.emit(input_path_str, f"LLM Configuration Error: {e}")
-            # Optionally show a QMessageBox via main_window ref if critical
-            # self.main_window.show_critical_error("LLM Config Error", str(e))
+            log.debug(f"Loading LLM settings from: {LLM_SETTINGS_PATH}")
+            with open(LLM_SETTINGS_PATH, 'r') as f:
+                llm_data = json.load(f)
+            # Extract required fields with defaults
+            llm_settings['endpoint_url'] = llm_data.get('llm_endpoint_url')
+            llm_settings['api_key'] = llm_data.get('llm_api_key')  # Can be None
+            llm_settings['model_name'] = llm_data.get('llm_model_name', 'local-model')
+            llm_settings['temperature'] = llm_data.get('llm_temperature', 0.5)
+            llm_settings['request_timeout'] = llm_data.get('llm_request_timeout', 120)
+            llm_settings['predictor_prompt'] = llm_data.get('llm_predictor_prompt', '')
+            llm_settings['examples'] = llm_data.get('llm_examples', [])
+
+            log.debug(f"Loading App settings from: {APP_SETTINGS_PATH}")
+            with open(APP_SETTINGS_PATH, 'r') as f:
+                app_data = json.load(f)
+            # Extract required fields
+            llm_settings['asset_type_definitions'] = app_data.get('ASSET_TYPE_DEFINITIONS', {})
+            llm_settings['file_type_definitions'] = app_data.get('FILE_TYPE_DEFINITIONS', {})
+
+            # Validate essential settings
+            if not llm_settings['endpoint_url']:
+                raise ValueError("LLM endpoint URL is missing in llm_settings.json")
+            if not llm_settings['predictor_prompt']:
+                raise ValueError("LLM predictor prompt is missing in llm_settings.json")
+
+            log.debug("LLM and App settings loaded successfully for LLMInteractionHandler.")
+
+        except FileNotFoundError as e:
+            error_msg = f"LLM Error: Configuration file not found: {e.filename}"
+            log.critical(error_msg)
+            self.llm_status_update.emit("LLM Error: Cannot load configuration file.")
+            self.llm_prediction_error.emit(input_path_str, error_msg)
             return
+        except json.JSONDecodeError as e:
+            error_msg = f"LLM Error: Failed to parse configuration file: {e}"
+            log.critical(error_msg)
+            self.llm_status_update.emit("LLM Error: Cannot parse configuration file.")
+            self.llm_prediction_error.emit(input_path_str, error_msg)
+            return
+        except ValueError as e:  # Catch validation errors
+            error_msg = f"LLM Error: Invalid configuration - {e}"
+            log.critical(error_msg)
+            self.llm_status_update.emit("LLM Error: Invalid configuration.")
+            self.llm_prediction_error.emit(input_path_str, error_msg)
+            return
+        except Exception as e:  # Catch other potential errors
+            error_msg = f"LLM Error: Unexpected error loading configuration: {e}"
+            log.critical(error_msg, exc_info=True)
+            self.llm_status_update.emit("LLM Error: Cannot load application configuration.")
+            self.llm_prediction_error.emit(input_path_str, error_msg)
+            return
+
+        # --- Wrap thread/handler setup and start in try...except ---
+        try:
+            # --- Check if Handler Class is Available ---
+            if LLMPredictionHandler is None:
+                # Raise ValueError to be caught below
+                raise ValueError("LLMPredictionHandler class not available.")
+
+            # --- Clean up previous thread/handler if necessary ---
+            # (Keep this cleanup logic as it handles potential stale threads)
+            if self.llm_prediction_thread and self.llm_prediction_thread.isRunning():
+                log.warning("Warning: Previous LLM prediction thread still running when trying to start new one. Attempting cleanup.")
+                if self.llm_prediction_handler:
+                    if hasattr(self.llm_prediction_handler, 'cancel'):
+                        self.llm_prediction_handler.cancel()
+                self.llm_prediction_thread.quit()
+                if not self.llm_prediction_thread.wait(1000):  # Wait 1 sec
+                    log.warning("LLM thread did not quit gracefully. Forcing termination.")
+                    self.llm_prediction_thread.terminate()
+                    self.llm_prediction_thread.wait()  # Wait after terminate
+                self.llm_prediction_thread = None
+                self.llm_prediction_handler = None
+
+            log.info(f"Starting LLM prediction thread for source: {input_path_str} with {len(file_list)} files.")
+            self.llm_status_update.emit(f"Starting LLM interpretation for {input_path_obj.name}...")
+
+            # --- Create Thread and Handler ---
+            self.llm_prediction_thread = QThread(self)  # Parent thread to self
+            # Pass the loaded settings dictionary
+            self.llm_prediction_handler = LLMPredictionHandler(input_path_str, file_list, llm_settings)
+            self.llm_prediction_handler.moveToThread(self.llm_prediction_thread)
+
+            # Connect signals from handler to *internal* slots or directly emit signals
+            self.llm_prediction_handler.prediction_ready.connect(self._handle_llm_result)
+            self.llm_prediction_handler.prediction_error.connect(self._handle_llm_error)
+            self.llm_prediction_handler.status_update.connect(self.llm_status_update)  # Pass status through
+
+            # Connect thread signals
+            self.llm_prediction_thread.started.connect(self.llm_prediction_handler.run)
+            # Clean up thread and handler when finished
+            self.llm_prediction_thread.finished.connect(self._reset_llm_thread_references)
+            self.llm_prediction_thread.finished.connect(self.llm_prediction_handler.deleteLater)
+            self.llm_prediction_thread.finished.connect(self.llm_prediction_thread.deleteLater)
+            # Also ensure thread quits when handler signals completion/error
+            self.llm_prediction_handler.prediction_ready.connect(self.llm_prediction_thread.quit)
+            self.llm_prediction_handler.prediction_error.connect(self.llm_prediction_thread.quit)
+
+            # TODO: Add a logging.debug statement at the very beginning of LLMPredictionHandler.run()
+            # to confirm if the method is being reached. Example:
+            # log.debug(f"--> Entered LLMPredictionHandler.run() for {self.input_path}")
+
+            self.llm_prediction_thread.start()
+            log.debug(f"LLM prediction thread start() called for {input_path_str}. Is running: {self.llm_prediction_thread.isRunning()}")  # ADDED DEBUG LOG
+            # Log success *after* start() is called successfully
+            log.debug(f"Successfully initiated LLM prediction thread for {input_path_str}.")  # MOVED/REWORDED LOG

         except Exception as e:
-            log.exception(f"Unexpected error loading LLM configuration: {e}")
-            self.llm_status_update.emit(f"LLM Config Error: {e}")
-            self.llm_prediction_error.emit(input_path_str, f"Unexpected error loading LLM config: {e}")
-            return
-        # --- End Config Loading ---
-
-        if LLMPredictionHandler is None:
-            log.critical("LLMPredictionHandler class not available.")
-            self.llm_status_update.emit("LLM Error: Prediction handler component missing.")
-            self.llm_prediction_error.emit(input_path_str, "LLMPredictionHandler class not available.")
-            return
-
-        # Clean up previous thread/handler if any exist (should not happen if queue logic is correct)
-        if self.llm_prediction_thread and self.llm_prediction_thread.isRunning():
-            log.warning("Warning: Previous LLM prediction thread still running when trying to start new one. This indicates a potential logic error.")
-            # Attempt graceful shutdown (might need more robust handling)
-            if self.llm_prediction_handler:
-                # Assuming LLMPredictionHandler has a cancel method or similar
-                if hasattr(self.llm_prediction_handler, 'cancel'):
-                    self.llm_prediction_handler.cancel()
-            self.llm_prediction_thread.quit()
-            if not self.llm_prediction_thread.wait(1000):  # Wait 1 sec
-                log.warning("LLM thread did not quit gracefully. Forcing termination.")
-                self.llm_prediction_thread.terminate()
-                self.llm_prediction_thread.wait()  # Wait after terminate
-            # Reset references after ensuring termination
-            self.llm_prediction_thread = None
-            self.llm_prediction_handler = None
-
-        log.info(f"Starting LLM prediction thread for source: {input_path_str} with {len(file_list)} files.")
-        self.llm_status_update.emit(f"Starting LLM interpretation for {input_path_obj.name}...")
-
-        self.llm_prediction_thread = QThread(self.main_window)  # Parent thread to main window's thread? Or self? Let's try self.
-        self.llm_prediction_handler = LLMPredictionHandler(input_path_str, file_list, llm_settings)
-        self.llm_prediction_handler.moveToThread(self.llm_prediction_thread)
-
-        # Connect signals from handler to *internal* slots or directly emit signals
-        self.llm_prediction_handler.prediction_ready.connect(self._handle_llm_result)
-        self.llm_prediction_handler.prediction_error.connect(self._handle_llm_error)
-        self.llm_prediction_handler.status_update.connect(self.llm_status_update)  # Pass status through
-
-        # Connect thread signals
-        self.llm_prediction_thread.started.connect(self.llm_prediction_handler.run)
-        # Clean up thread and handler when finished
-        self.llm_prediction_thread.finished.connect(self._reset_llm_thread_references)
-        self.llm_prediction_thread.finished.connect(self.llm_prediction_handler.deleteLater)
-        self.llm_prediction_thread.finished.connect(self.llm_prediction_thread.deleteLater)
-        # Also ensure thread quits when handler signals completion/error
-        self.llm_prediction_handler.prediction_ready.connect(self.llm_prediction_thread.quit)
-        self.llm_prediction_handler.prediction_error.connect(self.llm_prediction_thread.quit)
-
-        self.llm_prediction_thread.start()
-        log.debug(f"LLM prediction thread started for {input_path_str}.")
+            # --- Handle errors during setup/start ---
+            log.exception(f"Critical error during LLM thread setup/start for {input_path_str}: {e}")
+            error_msg = f"Error initializing LLM task for {input_path_obj.name}: {e}"
+            self.llm_status_update.emit(error_msg)
+            self.llm_prediction_error.emit(input_path_str, error_msg)  # Signal the error
+
+            # --- Crucially, reset processing state if setup fails ---
+            log.warning("Resetting processing state due to thread setup/start error.")
+            self._set_processing_state(False)
+
+            # Clean up potentially partially created objects
+            if self.llm_prediction_handler:
+                self.llm_prediction_handler.deleteLater()
+                self.llm_prediction_handler = None
+            if self.llm_prediction_thread:
+                if self.llm_prediction_thread.isRunning():
+                    self.llm_prediction_thread.quit()
+                    self.llm_prediction_thread.wait(500)
+                    self.llm_prediction_thread.terminate()  # Force if needed
+                    self.llm_prediction_thread.wait()
+                self.llm_prediction_thread.deleteLater()
+                self.llm_prediction_thread = None
+
+            # Do NOT automatically try the next item here, as the error might be persistent.
+            # Let the error signal handle popping the item and trying the next one.
+            # The error signal (_handle_llm_error) will pop the item and call _process_next_llm_item.

     def is_processing(self) -> bool:
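The two-file settings load in the hunk above (per-field extraction with defaults, then validation of the essentials) reduces to a small pure function that can be tested without touching the filesystem. File names and key names follow the hunk; `merge_settings` itself is a hypothetical helper, not code from the handler:

```python
import json

def merge_settings(llm_json: str, app_json: str) -> dict:
    """Merge llm_settings.json and app_settings.json content the way
    _start_llm_prediction does, raising ValueError on missing essentials."""
    llm_data = json.loads(llm_json)
    app_data = json.loads(app_json)
    settings = {
        'endpoint_url': llm_data.get('llm_endpoint_url'),
        'api_key': llm_data.get('llm_api_key'),           # may legitimately be None
        'model_name': llm_data.get('llm_model_name', 'local-model'),
        'temperature': llm_data.get('llm_temperature', 0.5),
        'request_timeout': llm_data.get('llm_request_timeout', 120),
        'predictor_prompt': llm_data.get('llm_predictor_prompt', ''),
        'examples': llm_data.get('llm_examples', []),
        'asset_type_definitions': app_data.get('ASSET_TYPE_DEFINITIONS', {}),
        'file_type_definitions': app_data.get('FILE_TYPE_DEFINITIONS', {}),
    }
    # Fail fast on the two settings the predictor cannot run without
    if not settings['endpoint_url']:
        raise ValueError("LLM endpoint URL is missing in llm_settings.json")
    if not settings['predictor_prompt']:
        raise ValueError("LLM predictor prompt is missing in llm_settings.json")
    return settings

s = merge_settings(
    '{"llm_endpoint_url": "http://localhost:1234/v1", "llm_predictor_prompt": "Classify."}',
    '{}',
)
```

Validating after merging means a missing key surfaces as a single `ValueError` with a specific message, which the handler can route through `llm_prediction_error` instead of failing later inside the worker thread.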
@@ -291,10 +356,11 @@ class LLMInteractionHandler(QObject):
         try:
             # Pass the potentially None file_list. _start_llm_prediction handles extraction if needed.
             self._start_llm_prediction(next_dir, file_list=file_list)
-            # --- Pop item *after* successfully starting prediction ---
-            self.llm_processing_queue.pop(0)
-            log.debug(f"Successfully started LLM prediction for {next_dir} and removed from queue.")
+            # --- DO NOT pop item here. Item is popped in _handle_llm_result or _handle_llm_error ---
+            # Log message moved into the try block of _start_llm_prediction
+            # log.debug(f"Successfully started LLM prediction thread for {next_dir}. Item remains in queue until finished.")
         except Exception as e:
+            # This block now catches errors from _start_llm_prediction itself
             log.exception(f"Error occurred *during* _start_llm_prediction call for {next_dir}: {e}")
             error_msg = f"Error starting LLM for {os.path.basename(next_dir)}: {e}"
             self.llm_status_update.emit(error_msg)
```diff
@@ -314,19 +380,37 @@ class LLMInteractionHandler(QObject):
     # --- Internal Slots to Handle Results/Errors from LLMPredictionHandler ---
     @Slot(str, list)
     def _handle_llm_result(self, input_path: str, source_rules: list):
-        """Internal slot to receive results and emit the public signal."""
-        log.debug(f"LLM Handler received result for {input_path}. Emitting llm_prediction_ready.")
+        """Internal slot to receive results, pop item, and emit the public signal."""
+        log.debug(f"LLM Handler received result for {input_path}. Removing from queue and emitting llm_prediction_ready.")
+        # Remove the completed item from the queue
+        try:
+            # Find and remove the item by input_path
+            self.llm_processing_queue = [item for item in self.llm_processing_queue if item[0] != input_path]
+            log.debug(f"Removed '{input_path}' from LLM queue after successful prediction. New size: {len(self.llm_processing_queue)}")
+        except Exception as e:
+            log.error(f"Error removing '{input_path}' from LLM queue after success: {e}")
+
         self.llm_prediction_ready.emit(input_path, source_rules)
-        # Note: The thread's finished signal calls _reset_llm_thread_references,
-        # which then calls _process_next_llm_item.
+        # Process the next item in the queue
+        QTimer.singleShot(0, self._process_next_llm_item)

     @Slot(str, str)
     def _handle_llm_error(self, input_path: str, error_message: str):
-        """Internal slot to receive errors and emit the public signal."""
-        log.debug(f"LLM Handler received error for {input_path}: {error_message}. Emitting llm_prediction_error.")
+        """Internal slot to receive errors, pop item, and emit the public signal."""
+        log.debug(f"LLM Handler received error for {input_path}: {error_message}. Removing from queue and emitting llm_prediction_error.")
+        # Remove the failed item from the queue
+        try:
+            # Find and remove the item by input_path
+            self.llm_processing_queue = [item for item in self.llm_processing_queue if item[0] != input_path]
+            log.debug(f"Removed '{input_path}' from LLM queue after error. New size: {len(self.llm_processing_queue)}")
+        except Exception as e:
+            log.error(f"Error removing '{input_path}' from LLM queue after error: {e}")
+
         self.llm_prediction_error.emit(input_path, error_message)
-        # Note: The thread's finished signal calls _reset_llm_thread_references,
-        # which then calls _process_next_llm_item.
+        # Process the next item in the queue
+        QTimer.singleShot(0, self._process_next_llm_item)

     def clear_queue(self):
         """Clears the LLM processing queue."""
```
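Both result and error slots above share one discipline: a queued item is removed by its identifier rather than by position once its outcome arrives, and the next item is scheduled on the following event-loop turn via `QTimer.singleShot(0, ...)`. The removal step can be sketched Qt-free (the helper name is illustrative, not from the repository):

```python
def pop_by_id(queue, input_path):
    """Remove every queued (input_path, file_list) item matching input_path,
    mirroring the list-comprehension filter used in the handler slots."""
    return [item for item in queue if item[0] != input_path]

queue = [("/in/a.zip", None), ("/in/b.zip", ["x.png"])]
queue = pop_by_id(queue, "/in/a.zip")
assert queue == [("/in/b.zip", ["x.png"])]
```

Filtering by identifier keeps the handler correct even if the finished item is no longer at the head of the queue, and deferring the next dispatch with a zero-delay timer avoids re-entering the queue logic from inside a signal delivery.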
```diff
@@ -14,7 +14,7 @@ from rule_structure import SourceRule, AssetRule, FileRule # Ensure AssetRule an

 # Assuming configuration loads app_settings.json
 # Adjust the import path if necessary
-# Removed Configuration import, will use load_base_config if needed or passed settings
+# Removed Configuration import
 # from configuration import Configuration
 # from configuration import load_base_config # No longer needed here
 from .base_prediction_handler import BasePredictionHandler # Import base class
@@ -28,7 +28,8 @@ class LLMPredictionHandler(BasePredictionHandler):
     """
     # Signals (prediction_ready, prediction_error, status_update) are inherited

-    def __init__(self, input_source_identifier: str, file_list: list, llm_settings: dict, parent: QObject = None):
+    # Changed 'config: Configuration' to 'settings: dict'
+    def __init__(self, input_source_identifier: str, file_list: list, settings: dict, parent: QObject = None):
         """
         Initializes the LLM handler.

@@ -36,16 +37,14 @@ class LLMPredictionHandler(BasePredictionHandler):
             input_source_identifier: The unique identifier for the input source (e.g., file path).
             file_list: A list of *relative* file paths extracted from the input source.
                        (LLM expects relative paths based on the prompt template).
-            llm_settings: A dictionary containing necessary LLM configuration
-                          (endpoint_url, api_key, prompt_template_content, etc.).
+            settings: A dictionary containing required LLM and App settings.
             parent: The parent QObject.
         """
         super().__init__(input_source_identifier, parent)
         # input_source_identifier is stored by the base class as self.input_source_identifier
         self.file_list = file_list # Store the provided relative file list
-        self.llm_settings = llm_settings # Store the settings dictionary
-        self.endpoint_url = self.llm_settings.get('llm_endpoint_url')
-        self.api_key = self.llm_settings.get('llm_api_key')
+        self.settings = settings # Store the settings dictionary
+        # Access LLM settings via self.settings['key']
         # _is_running and _is_cancelled are handled by the base class

     # The run() and cancel() slots are provided by the base class.
@@ -65,6 +64,7 @@ class LLMPredictionHandler(BasePredictionHandler):
             ConnectionError: If the LLM API call fails due to network issues or timeouts.
             Exception: For other errors during prompt preparation, API call, or parsing.
         """
+        log.debug(f"--> Entered LLMPredictionHandler._perform_prediction() for {self.input_source_identifier}")
         log.info(f"Performing LLM prediction for: {self.input_source_identifier}")
         base_name = Path(self.input_source_identifier).name

```
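The constructor change above swaps a typed `Configuration` object for a plain settings dict: required values fail fast with a clear error, optional values fall back to defaults via `dict.get()`. A minimal sketch of that access pattern, with key names mirroring the diff and a hypothetical helper name:

```python
def read_llm_settings(settings: dict) -> dict:
    """Fail fast on required keys; default the optional ones."""
    if not settings.get('predictor_prompt'):
        raise ValueError("LLM predictor prompt template content is empty or missing in settings.")
    return {
        "prompt": settings['predictor_prompt'],
        "model": settings.get('model_name', 'local-model'),
        "temperature": settings.get('temperature', 0.5),
        "timeout": settings.get('request_timeout', 120),
    }

cfg = read_llm_settings({"predictor_prompt": "Classify: {file_list}"})
assert cfg["model"] == "local-model" and cfg["timeout"] == 120
```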
```diff
@@ -128,28 +128,25 @@ class LLMPredictionHandler(BasePredictionHandler):
         """
         Prepares the full prompt string to send to the LLM using stored settings.
         """
-        # Access settings from the stored dictionary
-        prompt_template = self.llm_settings.get('prompt_template_content')
+        # Access settings via the settings dictionary
+        prompt_template = self.settings.get('predictor_prompt')
         if not prompt_template:
-            # Attempt to fall back to reading the default file path if content is missing
-            default_template_path = 'llm_prototype/prompt_template.txt'
-            print(f"Warning: 'prompt_template_content' missing in llm_settings. Falling back to reading default file: {default_template_path}")
-            try:
-                with open(default_template_path, 'r', encoding='utf-8') as f:
-                    prompt_template = f.read()
-            except FileNotFoundError:
-                raise ValueError(f"LLM predictor prompt template content missing in settings and default file not found at: {default_template_path}")
-            except Exception as e:
-                raise ValueError(f"Error reading default LLM prompt template file {default_template_path}: {e}")
-
-        if not prompt_template: # Final check after potential fallback
-            raise ValueError("LLM predictor prompt template content is empty or could not be loaded.")
+            raise ValueError("LLM predictor prompt template content is empty or missing in settings.")

-        # Access definitions and examples from the settings dictionary
-        asset_defs = json.dumps(self.llm_settings.get('asset_types', {}), indent=4)
-        file_defs = json.dumps(self.llm_settings.get('file_types', {}), indent=4)
-        examples = json.dumps(self.llm_settings.get('examples', []), indent=2)
+        # Access definitions and examples directly from the settings dictionary
+        asset_defs = json.dumps(self.settings.get('asset_type_definitions', {}), indent=4)
+        # Combine file type defs and examples (assuming structure from Configuration class)
+        file_type_defs_combined = {}
+        file_type_defs = self.settings.get('file_type_definitions', {})
+        for key, definition in file_type_defs.items():
+            # Add examples if they exist within the definition structure
+            file_type_defs_combined[key] = {
+                "description": definition.get("description", ""),
+                "examples": definition.get("examples", [])
+            }
+        file_defs = json.dumps(file_type_defs_combined, indent=4)
+        examples = json.dumps(self.settings.get('examples', []), indent=2)

         # Format *relative* file list as a single string with newlines
         file_list_str = "\n".join(relative_file_list)
```
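The definition-combining loop in the new `_prepare_prompt` can be exercised standalone. This sketch assumes the same `{description, examples}` shape the diff reads; the function name is illustrative:

```python
import json

def combine_file_type_defs(file_type_defs: dict) -> str:
    """Merge each definition's description and examples into one JSON blob
    for interpolation into the prompt template."""
    combined = {
        key: {
            "description": definition.get("description", ""),
            "examples": definition.get("examples", []),
        }
        for key, definition in file_type_defs.items()
    }
    return json.dumps(combined, indent=4)

defs = {
    "COL": {"description": "Base color map", "examples": ["wood_col.png"]},
    "NRM": {"description": "Normal map"},  # no examples key -> defaults to []
}
combined = json.loads(combine_file_type_defs(defs))
assert combined["NRM"]["examples"] == []
```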
```diff
@@ -177,32 +174,34 @@ class LLMPredictionHandler(BasePredictionHandler):
             ValueError: If the endpoint URL is not configured or the response is invalid.
             requests.exceptions.RequestException: For other request-related errors.
         """
-        if not self.endpoint_url:
+        endpoint_url = self.settings.get('endpoint_url') # Get from settings dict
+        if not endpoint_url:
             raise ValueError("LLM endpoint URL is not configured in settings.")

         headers = {
             "Content-Type": "application/json",
         }
-        if self.api_key:
-            headers["Authorization"] = f"Bearer {self.api_key}"
+        api_key = self.settings.get('api_key') # Get from settings dict
+        if api_key:
+            headers["Authorization"] = f"Bearer {api_key}"

         # Construct payload based on OpenAI Chat Completions format
         payload = {
-            # Use configured model name, default to 'local-model'
-            "model": self.llm_settings.get("llm_model_name", "local-model"),
+            # Use configured model name from settings dict
+            "model": self.settings.get('model_name', 'local-model'),
             "messages": [{"role": "user", "content": prompt}],
-            # Use configured temperature, default to 0.5
-            "temperature": self.llm_settings.get("llm_temperature", 0.5),
+            # Use configured temperature from settings dict
+            "temperature": self.settings.get('temperature', 0.5),
             # Add max_tokens if needed/configurable:
-            # "max_tokens": self.llm_settings.get("llm_max_tokens", 1024),
+            # "max_tokens": self.settings.get('max_tokens'), # Example if added to settings
             # Ensure the LLM is instructed to return JSON in the prompt itself
             # Some models/endpoints support a specific json mode:
             # "response_format": { "type": "json_object" } # If supported by endpoint
         }

         # Status update emitted by _perform_prediction before calling this
-        # self.status_update.emit(f"Sending request to LLM at {self.endpoint_url}...")
-        print(f"--- Calling LLM API: {self.endpoint_url} ---")
+        # self.status_update.emit(f"Sending request to LLM at {endpoint_url}...")
+        print(f"--- Calling LLM API: {endpoint_url} ---")
         # print(f"--- Payload Preview ---\n{json.dumps(payload, indent=2)[:500]}...\n--- END Payload Preview ---")

         # Note: Exceptions raised here (Timeout, RequestException, ValueError)
```
```diff
@@ -210,10 +209,10 @@ class LLMPredictionHandler(BasePredictionHandler):

         # Make the POST request with a timeout
         response = requests.post(
-            self.endpoint_url,
+            endpoint_url,
             headers=headers,
             json=payload,
-            timeout=self.llm_settings.get("llm_request_timeout", 120)
+            timeout=self.settings.get('request_timeout', 120) # Use settings dict (with default)
         )
         response.raise_for_status() # Raise HTTPError for bad responses (4xx or 5xx)

```
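The request assembly above follows the OpenAI Chat Completions shape, with the bearer header added only when an API key is present. A self-contained sketch of the headers-and-payload step, without the network call (the helper name is hypothetical):

```python
def build_request(settings: dict, prompt: str):
    """Headers and payload for a Chat Completions-style endpoint."""
    headers = {"Content-Type": "application/json"}
    api_key = settings.get('api_key')
    if api_key:
        # Only authenticated endpoints need the bearer token
        headers["Authorization"] = f"Bearer {api_key}"
    payload = {
        "model": settings.get('model_name', 'local-model'),
        "messages": [{"role": "user", "content": prompt}],
        "temperature": settings.get('temperature', 0.5),
    }
    return headers, payload

headers, payload = build_request({"api_key": "sk-test"}, "Classify these files")
assert headers["Authorization"] == "Bearer sk-test"
assert payload["model"] == "local-model"
```

Keeping the assembly separate from the `requests.post(...)` call makes the timeout and error handling the only untestable part of the API path.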
```diff
@@ -236,124 +235,192 @@ class LLMPredictionHandler(BasePredictionHandler):

     def _parse_llm_response(self, llm_response_json_str: str) -> List[SourceRule]:
         """
-        Parses the LLM's JSON response string into a list of SourceRule objects.
+        Parses the LLM's JSON response string (new two-part format) into a
+        list containing a single SourceRule object.
+        Includes sanitization for comments and markdown fences.
         """
         # Note: Exceptions (JSONDecodeError, ValueError) raised here
         # will be caught by the _perform_prediction method's handler.

-        # Strip potential markdown code fences before parsing
+        # --- Sanitize Input String ---
         clean_json_str = llm_response_json_str.strip()

+        # 1. Remove multi-line /* */ comments
+        clean_json_str = re.sub(r'/\*.*?\*/', '', clean_json_str, flags=re.DOTALL)
+
+        # 2. Remove single-line // comments (handle potential URLs carefully)
+        # Only remove // if it's likely a comment (e.g., whitespace before it,
+        # or at the start of a line after stripping leading whitespace).
+        lines = clean_json_str.splitlines()
+        cleaned_lines = []
+        for line in lines:
+            stripped_line = line.strip()
+            # Find the first // that isn't preceded by a : (to avoid breaking URLs like http://)
+            comment_index = -1
+            search_start = 0
+            while True:
+                idx = stripped_line.find('//', search_start)
+                if idx == -1:
+                    break # No more // found
+                if idx == 0 or stripped_line[idx-1] != ':':
+                    # Found a potential comment marker
+                    # Check if it's inside quotes
+                    in_quotes = False
+                    quote_char = ''
+                    for i in range(idx):
+                        char = stripped_line[i]
+                        if char in ('"', "'") and (i == 0 or stripped_line[i-1] != '\\'): # Handle escaped quotes
+                            if not in_quotes:
+                                in_quotes = True
+                                quote_char = char
+                            elif char == quote_char:
+                                in_quotes = False
+                                quote_char = ''
+                    if not in_quotes:
+                        comment_index = idx
+                        break # Found valid comment marker
+                    else:
+                        # // is inside quotes, continue searching after it
+                        search_start = idx + 2
+                else:
+                    # Found ://, likely a URL, continue searching after it
+                    search_start = idx + 2
+
+            if comment_index != -1:
+                # Find the original position in the non-stripped line
+                original_comment_start = line.find(stripped_line[comment_index:])
+                cleaned_lines.append(line[:original_comment_start].rstrip())
+            else:
+                cleaned_lines.append(line)
+        clean_json_str = "\n".join(cleaned_lines)

+        # 3. Remove markdown code fences
+        clean_json_str = clean_json_str.strip()
         if clean_json_str.startswith("```json"):
             clean_json_str = clean_json_str[7:] # Remove ```json\n
         if clean_json_str.endswith("```"):
             clean_json_str = clean_json_str[:-3] # Remove ```
         clean_json_str = clean_json_str.strip() # Remove any extra whitespace

-        # --- ADDED: Remove <think> tags ---
+        # 4. Remove <think> tags (just in case)
         clean_json_str = re.sub(r'<think>.*?</think>', '', clean_json_str, flags=re.DOTALL | re.IGNORECASE)
-        clean_json_str = clean_json_str.strip() # Strip again after potential removal
-        # ---------------------------------
+        clean_json_str = clean_json_str.strip()
```
```diff
+        # --- Parse Sanitized JSON ---
         try:
             response_data = json.loads(clean_json_str)
         except json.JSONDecodeError as e:
-            # Log the full cleaned string that caused the error for better debugging
-            error_detail = f"Failed to decode LLM JSON response: {e}\nFull Cleaned Response:\n{clean_json_str}"
-            log.error(f"ERROR: {error_detail}") # Log full error detail to console
-            raise ValueError(error_detail) # Raise the error with full detail
+            error_detail = f"Failed to decode LLM JSON response after sanitization: {e}\nSanitized Response Attempted:\n{clean_json_str}"
+            log.error(f"ERROR: {error_detail}")
+            raise ValueError(error_detail)

-        if "predicted_assets" not in response_data or not isinstance(response_data["predicted_assets"], list):
-            raise ValueError("Invalid LLM response format: 'predicted_assets' key missing or not a list.")
-
-        source_rules = []
-        # We assume one SourceRule per input source processed by this handler instance
-        # Use self.input_source_identifier from the base class
+        # --- Validate Top-Level Structure ---
+        if not isinstance(response_data, dict):
+            raise ValueError("Invalid LLM response: Root element is not a JSON object.")
+
+        if "individual_file_analysis" not in response_data or not isinstance(response_data["individual_file_analysis"], list):
+            raise ValueError("Invalid LLM response format: 'individual_file_analysis' key missing or not a list.")
+
+        if "asset_group_classifications" not in response_data or not isinstance(response_data["asset_group_classifications"], dict):
+            raise ValueError("Invalid LLM response format: 'asset_group_classifications' key missing or not a dictionary.")
+
+        # --- Prepare for Rule Creation ---
         source_rule = SourceRule(input_path=self.input_source_identifier)
-
-        # Access valid types from the settings dictionary
-        valid_asset_types = list(self.llm_settings.get('asset_types', {}).keys())
-        valid_file_types = list(self.llm_settings.get('file_types', {}).keys())
-
-        for asset_data in response_data["predicted_assets"]:
+        # Get valid types directly from the settings dictionary
+        valid_asset_types = list(self.settings.get('asset_type_definitions', {}).keys())
+        valid_file_types = list(self.settings.get('file_type_definitions', {}).keys())
+        asset_rules_map: Dict[str, AssetRule] = {} # Maps group_name to AssetRule
+
+        # --- Process Individual Files and Build Rules ---
+        for file_data in response_data["individual_file_analysis"]:
             # Check for cancellation within the loop
             if self._is_cancelled:
-                log.info("LLM prediction cancelled during response parsing (assets).")
+                log.info("LLM prediction cancelled during response parsing (files).")
                 return []

-            if not isinstance(asset_data, dict):
-                log.warning(f"Skipping invalid asset data (not a dict): {asset_data}")
+            if not isinstance(file_data, dict):
+                log.warning(f"Skipping invalid file data entry (not a dict): {file_data}")
                 continue

-            asset_name = asset_data.get("suggested_asset_name", "Unnamed_Asset")
-            asset_type = asset_data.get("predicted_asset_type")
+            file_path_rel = file_data.get("relative_file_path")
+            file_type = file_data.get("classified_file_type")
+            group_name = file_data.get("proposed_asset_group_name") # Can be string or null
+
+            # --- Validate File Data ---
+            if not file_path_rel or not isinstance(file_path_rel, str):
+                log.warning(f"Missing or invalid 'relative_file_path' in file data: {file_data}. Skipping file.")
+                continue
+
+            if not file_type or not isinstance(file_type, str):
+                log.warning(f"Missing or invalid 'classified_file_type' for file '{file_path_rel}'. Skipping file.")
+                continue
+
+            # Handle FILE_IGNORE explicitly
+            if file_type == "FILE_IGNORE":
+                log.debug(f"Ignoring file as per LLM prediction: {file_path_rel}")
+                continue # Skip creating a rule for this file
+
+            # Validate file_type against definitions
+            if file_type not in valid_file_types:
+                log.warning(f"Invalid predicted_file_type '{file_type}' for file '{file_path_rel}'. Defaulting to EXTRA.")
+                file_type = "EXTRA"
+
+            # --- Handle Grouping and Asset Type ---
+            if not group_name or not isinstance(group_name, str):
+                log.warning(f"File '{file_path_rel}' has missing, null, or invalid 'proposed_asset_group_name' ({group_name}). Cannot assign to an asset. Skipping file.")
+                continue # Skip files that cannot be grouped
+
+            asset_type = response_data["asset_group_classifications"].get(group_name)
+
+            if not asset_type:
+                log.warning(f"No classification found in 'asset_group_classifications' for group '{group_name}' (proposed for file '{file_path_rel}'). Skipping file.")
+                continue # Skip files belonging to unclassified groups

             if asset_type not in valid_asset_types:
-                log.warning(f"Invalid predicted_asset_type '{asset_type}' for asset '{asset_name}'. Skipping asset.")
-                continue # Skip this asset
+                log.warning(f"Invalid asset_type '{asset_type}' found in 'asset_group_classifications' for group '{group_name}'. Skipping file '{file_path_rel}'.")
+                continue # Skip files belonging to groups with invalid types

-            asset_rule = AssetRule(asset_name=asset_name, asset_type=asset_type)
-            source_rule.assets.append(asset_rule)
-
-            if "files" not in asset_data or not isinstance(asset_data["files"], list):
-                log.warning(f"'files' key missing or not a list in asset '{asset_name}'. Skipping files for this asset.")
+            # --- Construct Absolute Path ---
+            try:
+                base_path = Path(self.input_source_identifier)
+                if base_path.is_file():
+                    base_path = base_path.parent
+                clean_rel_path = Path(file_path_rel.strip().replace('\\', '/'))
+                file_path_abs = str(base_path / clean_rel_path)
+            except Exception as path_e:
+                log.warning(f"Error constructing absolute path for '{file_path_rel}' relative to '{self.input_source_identifier}': {path_e}. Skipping file.")
                 continue

-            for file_data in asset_data["files"]:
-                # Check for cancellation within the inner loop
-                if self._is_cancelled:
-                    log.info("LLM prediction cancelled during response parsing (files).")
-                    return []
-
-                if not isinstance(file_data, dict):
-                    log.warning(f"Skipping invalid file data (not a dict) in asset '{asset_name}': {file_data}")
-                    continue
-
-                file_path_rel = file_data.get("file_path") # LLM provides relative path
-                file_type = file_data.get("predicted_file_type")
-
-                if not file_path_rel:
-                    log.warning(f"Missing 'file_path' in file data for asset '{asset_name}'. Skipping file.")
-                    continue
-
-                # Convert relative path from LLM (using '/') back to absolute OS-specific path
-                # We need the original input path (directory or archive) to make it absolute
-                # Use self.input_source_identifier which holds the original path
-                # IMPORTANT: Ensure the LLM is actually providing paths relative to the *root* of the input source.
-                try:
-                    # Use Pathlib for safer joining, assuming input_source_identifier is the parent dir/archive path
-                    # If input_source_identifier is an archive file, this logic might need adjustment
-                    # depending on where files were extracted. For now, assume it's the base path.
-                    base_path = Path(self.input_source_identifier)
-                    # If the input was a file (like a zip), use its parent directory as the base for joining relative paths
-                    if base_path.is_file():
-                        base_path = base_path.parent
-                    # Clean the relative path potentially coming from LLM
-                    clean_rel_path = Path(file_path_rel.strip().replace('\\', '/'))
-                    file_path_abs = str(base_path / clean_rel_path)
-                except Exception as path_e:
-                    log.warning(f"Error constructing absolute path for '{file_path_rel}' relative to '{self.input_source_identifier}': {path_e}. Skipping file.")
-                    continue
-
-                if file_type not in valid_file_types:
-                    log.warning(f"Invalid predicted_file_type '{file_type}' for file '{file_path_rel}'. Defaulting to EXTRA.")
-                    file_type = "EXTRA" # Default to EXTRA if invalid type from LLM
-
-                # Create the FileRule instance
-                # Add default values for fields not provided by LLM
-                file_rule = FileRule(
-                    file_path=file_path_abs,
-                    item_type=file_type,
-                    item_type_override=file_type, # Initial override
-                    target_asset_name_override=asset_name, # Default to asset name
-                    output_format_override=None,
-                    is_gloss_source=False, # LLM doesn't predict this
-                    resolution_override=None,
-                    channel_merge_instructions={}
-                )
-                asset_rule.files.append(file_rule)
-
-        source_rules.append(source_rule)
-        return source_rules
+            # --- Get or Create Asset Rule ---
+            asset_rule = asset_rules_map.get(group_name)
+            if not asset_rule:
+                # Create new AssetRule if this is the first file for this group
+                log.debug(f"Creating new AssetRule for group '{group_name}' with type '{asset_type}'.")
+                asset_rule = AssetRule(asset_name=group_name, asset_type=asset_type)
+                source_rule.assets.append(asset_rule)
+                asset_rules_map[group_name] = asset_rule
+            # else: use existing asset_rule
+
+            # --- Create and Add File Rule ---
+            file_rule = FileRule(
+                file_path=file_path_abs,
+                item_type=file_type,
+                item_type_override=file_type, # Initial override based on LLM
+                target_asset_name_override=group_name, # Use the group name
+                output_format_override=None,
+                is_gloss_source=False,
+                resolution_override=None,
+                channel_merge_instructions={}
+            )
+            asset_rule.files.append(file_rule)
+            log.debug(f"Added file '{file_path_rel}' (type: {file_type}) to asset '{group_name}'.")
+
+        # Log if no assets were created
+        if not source_rule.assets:
+            log.warning(f"LLM prediction for '{self.input_source_identifier}' resulted in zero valid assets after parsing.")
+
+        return [source_rule] # Return list containing the single SourceRule

 # Removed conceptual example usage comments
```
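The sanitization steps the new `_parse_llm_response` performs before `json.loads` can be reproduced in a compact standalone form. This sketch covers block comments, markdown fences, and `<think>` blocks; the quote/URL-aware `//` stripping from the full diff is deliberately omitted, and the function name is illustrative:

```python
import json
import re

def sanitize_llm_json(raw: str) -> str:
    """Simplified version of the response sanitization: strip /* */ comments,
    markdown code fences, and <think> blocks before JSON parsing."""
    s = raw.strip()
    s = re.sub(r'/\*.*?\*/', '', s, flags=re.DOTALL)   # 1. block comments
    if s.startswith("```json"):
        s = s[7:]                                       # 3. opening fence
    if s.endswith("```"):
        s = s[:-3]                                      # 3. closing fence
    s = re.sub(r'<think>.*?</think>', '', s.strip(),
               flags=re.DOTALL | re.IGNORECASE)         # 4. <think> tags
    return s.strip()

raw = '```json\n{"assets": [1, 2] /* model commentary */}\n```'
assert json.loads(sanitize_llm_json(raw)) == {"assets": [1, 2]}
```

Sanitizing defensively like this trades strictness for robustness: local models frequently wrap their JSON in fences or annotate it, and rejecting such output outright would fail otherwise-usable predictions.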
```diff
@@ -11,7 +11,7 @@ log.info(f"sys.path: {sys.path}")

 from PySide6.QtWidgets import (
     QApplication, QMainWindow, QWidget, QVBoxLayout, QHBoxLayout, QSplitter, QTableView, # Added QSplitter, QTableView
-    QPushButton, QComboBox, QTableWidget, QTableWidgetItem, QHeaderView,
+    QPushButton, QComboBox, QTableWidget, QTableWidgetItem, QHeaderView, QStackedWidget, # Added QStackedWidget
     QProgressBar, QLabel, QFrame, QCheckBox, QSpinBox, QListWidget, QTextEdit, # Added QListWidget, QTextEdit
     QLineEdit, QMessageBox, QFileDialog, QInputDialog, QListWidgetItem, QTabWidget, # Added more widgets
     QFormLayout, QGroupBox, QAbstractItemView, QSizePolicy, # Added more layout/widget items
@@ -21,9 +21,10 @@ from PySide6.QtCore import Qt, QThread, Slot, Signal, QObject, QModelIndex, QIte
 from PySide6.QtGui import QColor, QAction, QPalette, QClipboard # Add QColor import, QAction, QPalette, QClipboard

 # --- Local GUI Imports ---
-from .preset_editor_widget import PresetEditorWidget # Import the new widget
-from .log_console_widget import LogConsoleWidget # Import the log console widget
-from .main_panel_widget import MainPanelWidget # Import the new main panel widget
+from .preset_editor_widget import PresetEditorWidget
+from .llm_editor_widget import LLMEditorWidget # Import the new LLM editor
+from .log_console_widget import LogConsoleWidget
+from .main_panel_widget import MainPanelWidget

 # --- Backend Imports for Data Structures ---
 from rule_structure import SourceRule, AssetRule, FileRule # Import Rule Structures
```
@ -158,13 +159,30 @@ class MainWindow(QMainWindow):
|
|||||||
self.restructure_handler = AssetRestructureHandler(self.unified_model, self) # Instantiate the restructure handler
|
self.restructure_handler = AssetRestructureHandler(self.unified_model, self) # Instantiate the restructure handler
|
||||||
|
|
||||||
# --- Create Panels ---
|
# --- Create Panels ---
|
||||||
self.preset_editor_widget = PresetEditorWidget() # Instantiate the preset editor
|
self.preset_editor_widget = PresetEditorWidget()
|
||||||
|
self.llm_editor_widget = LLMEditorWidget() # Instantiate the LLM editor
|
||||||
# Instantiate MainPanelWidget, passing the model and self (MainWindow) for context
|
# Instantiate MainPanelWidget, passing the model and self (MainWindow) for context
|
||||||
self.main_panel_widget = MainPanelWidget(self.unified_model, self)
|
self.main_panel_widget = MainPanelWidget(self.unified_model, self)
|
||||||
self.log_console = LogConsoleWidget(self) # Instantiate the log console
|
self.log_console = LogConsoleWidget(self)
|
||||||
|
|
||||||
self.splitter.addWidget(self.preset_editor_widget) # Add the preset editor
|
# --- Create Left Pane with Static Selector and Stacked Editor ---
|
||||||
self.splitter.addWidget(self.main_panel_widget) # Add the new main panel widget
|
self.left_pane_widget = QWidget()
|
||||||
|
left_pane_layout = QVBoxLayout(self.left_pane_widget)
|
||||||
|
left_pane_layout.setContentsMargins(0, 0, 0, 0)
|
||||||
|
left_pane_layout.setSpacing(0) # No space between selector and stack
|
||||||
|
|
||||||
|
# Add the selector part from PresetEditorWidget
|
||||||
|
left_pane_layout.addWidget(self.preset_editor_widget.selector_container)
|
||||||
|
|
||||||
|
# Create the stacked widget for swappable editors
|
||||||
|
self.editor_stack = QStackedWidget()
|
||||||
|
self.editor_stack.addWidget(self.preset_editor_widget.json_editor_container) # Page 0: Preset JSON Editor
|
||||||
|
self.editor_stack.addWidget(self.llm_editor_widget) # Page 1: LLM Editor
|
||||||
|
left_pane_layout.addWidget(self.editor_stack)
|
||||||
|
|
||||||
|
# Add the new left pane and the main panel to the splitter
|
||||||
|
self.splitter.addWidget(self.left_pane_widget)
|
||||||
|
self.splitter.addWidget(self.main_panel_widget)
|
||||||
|
|
||||||
# --- Setup UI Elements ---
|
# --- Setup UI Elements ---
|
||||||
# Main panel UI is handled internally by MainPanelWidget
|
# Main panel UI is handled internally by MainPanelWidget
|
||||||
@ -198,6 +216,8 @@ class MainWindow(QMainWindow):
|
|||||||
|
|
||||||
# --- Connect Model Signals ---
|
# --- Connect Model Signals ---
|
||||||
self.unified_model.targetAssetOverrideChanged.connect(self.restructure_handler.handle_target_asset_override)
|
self.unified_model.targetAssetOverrideChanged.connect(self.restructure_handler.handle_target_asset_override)
|
||||||
|
# --- Connect LLM Editor Signals ---
|
||||||
|
self.llm_editor_widget.settings_saved.connect(self._on_llm_settings_saved) # Connect save signal
|
||||||
|
|
||||||
# --- Adjust Splitter ---
|
# --- Adjust Splitter ---
|
||||||
self.splitter.setSizes([400, 800]) # Initial size ratio
|
self.splitter.setSizes([400, 800]) # Initial size ratio
|
||||||
@ -633,8 +653,8 @@ class MainWindow(QMainWindow):
|
|||||||
|
|
||||||
# Check if rule-based prediction is already running (optional, handler might manage internally)
|
# Check if rule-based prediction is already running (optional, handler might manage internally)
|
||||||
# Note: QueuedConnection on the signal helps, but check anyway for immediate feedback/logging
|
# Note: QueuedConnection on the signal helps, but check anyway for immediate feedback/logging
|
||||||
# TODO: Add is_running() method to RuleBasedPredictionHandler if needed for this check
|
# TODO: Add is_running() method to RuleBasedPredictionHandler if needed for this check - NOTE: is_running is a property now
|
||||||
if self.prediction_handler and hasattr(self.prediction_handler, 'is_running') and self.prediction_handler.is_running():
|
if self.prediction_handler and hasattr(self.prediction_handler, 'is_running') and self.prediction_handler.is_running: # Removed ()
|
||||||
log.warning("Rule-based prediction is already running. Queuing re-interpretation request.")
|
log.warning("Rule-based prediction is already running. Queuing re-interpretation request.")
|
||||||
# Proceed, relying on QueuedConnection
|
# Proceed, relying on QueuedConnection
|
||||||
|
|
||||||
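The `# Removed ()` change above matters: once `is_running` becomes a `@property`, the old call style `handler.is_running()` evaluates the property to a `bool` and then tries to call that bool, raising `TypeError`. A minimal stand-in (the class body here is hypothetical; only the name and the property-vs-method distinction come from the diff) shows the failure mode:

```python
class RuleBasedPredictionHandler:
    """Minimal stand-in illustrating why the () had to go once
    is_running became a property. Body is illustrative only."""
    def __init__(self):
        self._running = True

    @property
    def is_running(self) -> bool:
        # Property access: handler.is_running (no parentheses)
        return self._running

handler = RuleBasedPredictionHandler()

assert handler.is_running is True    # new style: plain attribute access
try:
    handler.is_running()             # old style: True() is not callable
except TypeError as e:
    print(f"stale call style fails: {e}")
```

Note that `hasattr(self.prediction_handler, 'is_running')` passes either way, so the guard alone would not have caught the stale parentheses.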
@@ -1180,9 +1200,34 @@ class MainWindow(QMainWindow):
     # --- Slot for Preset Editor Selection Changes ---
     @Slot(str, str)
     def _on_preset_selection_changed(self, mode: str, preset_name: str | None):
-        """Handles changes in the preset editor selection (preset, LLM, placeholder)."""
+        """
+        Handles changes in the preset editor selection (preset, LLM, placeholder).
+        Switches between PresetEditorWidget and LLMEditorWidget.
+        """
         log.info(f"Preset selection changed: mode='{mode}', preset_name='{preset_name}'")

+        # --- Editor Stack Switching ---
+        if mode == "llm":
+            log.debug("Switching editor stack to LLM Editor Widget.")
+            # Force reset the LLM handler state in case it got stuck
+            if hasattr(self, 'llm_interaction_handler'):
+                self.llm_interaction_handler.force_reset_state()
+            self.editor_stack.setCurrentWidget(self.llm_editor_widget)
+            # Load settings *after* switching the stack
+            try:
+                self.llm_editor_widget.load_settings()
+            except Exception as e:
+                log.exception(f"Error loading LLM settings in _on_preset_selection_changed: {e}")
+                QMessageBox.critical(self, "LLM Settings Error", f"Failed to load LLM settings:\n{e}")
+        elif mode == "preset":
+            log.debug("Switching editor stack to Preset JSON Editor Widget.")
+            self.editor_stack.setCurrentWidget(self.preset_editor_widget.json_editor_container)
+        else: # "placeholder"
+            log.debug("Switching editor stack to Preset JSON Editor Widget (placeholder selected).")
+            self.editor_stack.setCurrentWidget(self.preset_editor_widget.json_editor_container)
+            # The PresetEditorWidget's internal logic handles disabling/clearing the editor fields.
+        # --- End Editor Stack Switching ---
+
         # Update window title based on selection
         if mode == "preset" and preset_name:
             # Check for unsaved changes *within the editor widget*
@@ -1212,6 +1257,17 @@ class MainWindow(QMainWindow):
         # update_preview will now respect the mode set above
         self.update_preview()

+    @Slot()
+    def _on_llm_settings_saved(self):
+        """Slot called when LLM settings are saved successfully."""
+        log.info("LLM settings saved signal received by MainWindow.")
+        self.statusBar().showMessage("LLM settings saved successfully.", 3000)
+        # Optionally, trigger a reload of configuration if needed elsewhere,
+        # or update the LLMInteractionHandler if it caches settings.
+        # For now, just show a status message.
+        # If the LLM handler uses the config directly, no action needed here.
+        # If it caches, we might need: self.llm_interaction_handler.reload_settings()
+
     # --- Slot for LLM Processing State Changes from Handler ---
     @Slot(bool)
     def _on_llm_processing_state_changed(self, is_processing: bool):
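The if/elif chain added in `_on_preset_selection_changed` is, at its core, a mapping from the selection mode string to a `QStackedWidget` page, with `"placeholder"` falling through to the JSON editor. Stripped of the Qt machinery, that dispatch can be sketched framework-free (the page names here are illustrative labels, not the real widget attributes):

```python
# Framework-free sketch of the editor-stack dispatch; "pages" stand in
# for the QStackedWidget children and these names are illustrative.
PRESET_PAGE = "preset_json_editor"
LLM_PAGE = "llm_editor"

def page_for_mode(mode: str) -> str:
    # "llm" gets the LLM editor; "preset" and the "placeholder"
    # fallback both show the JSON editor, as in the slot above.
    return LLM_PAGE if mode == "llm" else PRESET_PAGE

print(page_for_mode("llm"))          # llm_editor
print(page_for_mode("preset"))       # preset_json_editor
print(page_for_mode("placeholder"))  # preset_json_editor
```

Keeping the switching logic this small in the coordinator, while the LLM-specific work (handler reset, `load_settings()`) stays guarded behind the `"llm"` branch, is what lets the placeholder case reuse the preset path unchanged.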
@@ -58,15 +58,19 @@ class PresetEditorWidget(QWidget):

     def _init_ui(self):
         """Initializes the UI elements for the preset editor."""
-        editor_layout = QVBoxLayout(self)
-        editor_layout.setContentsMargins(5, 5, 5, 5) # Reduce margins
+        main_layout = QVBoxLayout(self)
+        main_layout.setContentsMargins(0, 0, 0, 0) # Let containers manage margins
+        main_layout.setSpacing(0) # No space between selector and editor containers

         # Preset List and Controls
-        list_layout = QVBoxLayout()
-        list_layout.addWidget(QLabel("Presets:"))
+        self.selector_container = QWidget()
+        selector_layout = QVBoxLayout(self.selector_container)
+        selector_layout.setContentsMargins(5, 5, 5, 5) # Margins for selector area
+
+        selector_layout.addWidget(QLabel("Presets:"))
         self.editor_preset_list = QListWidget()
         self.editor_preset_list.currentItemChanged.connect(self._load_selected_preset_for_editing)
-        list_layout.addWidget(self.editor_preset_list)
+        selector_layout.addWidget(self.editor_preset_list) # Corrected: Add to selector_layout

         list_button_layout = QHBoxLayout()
         self.editor_new_button = QPushButton("New")
@@ -75,10 +79,14 @@ class PresetEditorWidget(QWidget):
         self.editor_delete_button.clicked.connect(self._delete_selected_preset)
         list_button_layout.addWidget(self.editor_new_button)
         list_button_layout.addWidget(self.editor_delete_button)
-        list_layout.addLayout(list_button_layout)
-        editor_layout.addLayout(list_layout, 1) # Allow list to stretch
+        selector_layout.addLayout(list_button_layout)
+        main_layout.addWidget(self.selector_container) # Add selector container to main layout

         # Editor Tabs
+        self.json_editor_container = QWidget()
+        editor_layout = QVBoxLayout(self.json_editor_container)
+        editor_layout.setContentsMargins(5, 0, 5, 5) # Margins for editor area (no top margin)
+
         self.editor_tab_widget = QTabWidget()
         self.editor_tab_general_naming = QWidget()
         self.editor_tab_mapping_rules = QWidget()
@@ -86,7 +94,7 @@ class PresetEditorWidget(QWidget):
         self.editor_tab_widget.addTab(self.editor_tab_mapping_rules, "Mapping & Rules")
         self._create_editor_general_tab()
         self._create_editor_mapping_tab()
-        editor_layout.addWidget(self.editor_tab_widget, 3) # Allow tabs to stretch more
+        editor_layout.addWidget(self.editor_tab_widget, 1) # Allow tabs to stretch

         # Save Buttons
         save_button_layout = QHBoxLayout()
@@ -100,6 +108,8 @@ class PresetEditorWidget(QWidget):
         save_button_layout.addWidget(self.editor_save_as_button)
         editor_layout.addLayout(save_button_layout)

+        main_layout.addWidget(self.json_editor_container) # Add editor container to main layout
+
     def _create_editor_general_tab(self):
         """Creates the widgets and layout for the 'General & Naming' editor tab."""
         layout = QVBoxLayout(self.editor_tab_general_naming)
@@ -347,9 +357,10 @@ class PresetEditorWidget(QWidget):

     def _set_editor_enabled(self, enabled: bool):
         """Enables or disables all editor widgets."""
-        self.editor_tab_widget.setEnabled(enabled)
+        # Target the container holding the tabs and save buttons
+        self.json_editor_container.setEnabled(enabled)
+        # Save button state still depends on unsaved changes, but only if container is enabled
         self.editor_save_button.setEnabled(enabled and self.editor_unsaved_changes)
-        self.editor_save_as_button.setEnabled(enabled) # Save As is always possible if editor is enabled

     def _clear_editor(self):
         """Clears the editor fields and resets state."""
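The reworked `_set_editor_enabled` now disables the whole `json_editor_container` in one call, so the explicit Save As line could be dropped, while Save stays additionally gated on unsaved changes. The enable predicates can be checked in isolation (function and names here are an illustrative sketch, not the widget code):

```python
from __future__ import annotations

def editor_enable_states(enabled: bool, unsaved_changes: bool) -> tuple[bool, bool]:
    """Sketch of the predicates in _set_editor_enabled.
    Returns (container_enabled, save_enabled); names are illustrative."""
    container_enabled = enabled                      # whole editor pane
    save_enabled = enabled and unsaved_changes       # Save needs dirty state too
    return container_enabled, save_enabled

# Save stays off until there are unsaved changes, even with the
# container itself enabled; disabling the container disables both.
print(editor_enable_states(True, False))   # (True, False)
print(editor_enable_states(True, True))    # (True, True)
print(editor_enable_states(False, True))   # (False, False)
```

Because Qt propagates a parent's disabled state to its children, disabling the container already covers Save As; only Save needs its own finer-grained condition.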
@@ -4,3 +4,4 @@ openexr
 PySide6
 py7zr
 rarfile
+requests