LLM GUI updates and tests
This commit is contained in:
parent
336d698f9b
commit
6b704c561a
@@ -44,16 +44,19 @@ The GUI has been refactored into several key components:

The `MainWindow` class acts as the main application window and **coordinator** for the GUI. Its primary responsibilities now include:

* Setting up the main window structure (using a `QSplitter`) and menu bar.
* Instantiating and arranging the major GUI widgets:
    * `PresetEditorWidget` (providing selector and JSON editor parts)
    * `LLMEditorWidget` (for LLM settings)
    * `MainPanelWidget` (containing the rule view and processing controls)
    * `LogConsoleWidget`
* **Layout Management:** Placing the preset selector statically and using a `QStackedWidget` to switch between the `PresetEditorWidget`'s JSON editor and the `LLMEditorWidget`.
* **Editor Switching:** Handling the `preset_selection_changed_signal` from `PresetEditorWidget` to switch the stacked editor view (`_on_preset_selection_changed` slot).
* Connecting signals and slots between widgets, models (`UnifiedViewModel`), and handlers (`LLMInteractionHandler`, `AssetRestructureHandler`).
* Managing the overall application state related to GUI interactions (e.g., enabling/disabling controls).
* Handling top-level actions like loading sources (drag-and-drop), initiating predictions (`update_preview`), and starting the processing task (`_on_process_requested`).
* Managing background prediction threads (Rule-Based via `QThread`, LLM via `LLMInteractionHandler`).
* Implementing slots (`_on_rule_hierarchy_ready`, `_on_llm_prediction_ready_from_handler`, `_on_prediction_error`, `_handle_prediction_completion`) to update the model/view when prediction results/errors arrive.
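The completion-tracking side of `_handle_prediction_completion` can be sketched without any Qt plumbing. The class below is an illustrative assumption; only the slot name comes from the documentation, and the real slot also updates the `UnifiedViewModel`:

```python
class PredictionTracker:
    """Sketch of the completion bookkeeping behind _handle_prediction_completion.

    Tracks which sources still await a prediction result or error, and fires a
    callback (standing in for re-enabling GUI controls) once all are done.
    """

    def __init__(self, on_all_done):
        self._pending = set()          # source ids still awaiting result/error
        self._on_all_done = on_all_done

    def start(self, source_ids):
        self._pending.update(source_ids)

    def _handle_prediction_completion(self, source_id):
        # Called for both successful results and errors.
        self._pending.discard(source_id)
        if not self._pending:
            self._on_all_done()        # e.g. re-enable the Process button


done = []
tracker = PredictionTracker(on_all_done=lambda: done.append(True))
tracker.start(["src_a", "src_b"])
tracker._handle_prediction_completion("src_a")   # one source still pending
tracker._handle_prediction_completion("src_b")   # all done, callback fires
```

Counting errors the same as results keeps the UI from hanging when one source fails, which matches the "after each prediction result or error" wording above.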
### `MainPanelWidget` (`gui/main_panel_widget.py`)
@@ -69,7 +72,10 @@ This widget contains the central part of the GUI, including:

This widget provides the interface for managing presets:

* Loading, saving, and editing preset files (`Presets/*.json`).
* Displaying preset rules and settings in a tabbed JSON editor.
* Providing the preset selection list (`QListWidget`) including the "LLM Interpretation" option.
* **Refactored:** Exposes its selector (`selector_container`) and JSON editor (`json_editor_container`) as separate widgets for use by `MainWindow`.
* Emits `preset_selection_changed_signal` when the selection changes.
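The `(mode, preset_name)` payload of `preset_selection_changed_signal` can be derived from the list selection with a small helper. The mode strings `"preset"`, `"llm"`, and `"placeholder"` appear elsewhere in this document; the helper itself, and the example preset name, are illustrative assumptions (the real widget works with `QListWidgetItem` objects, not strings):

```python
LLM_ITEM_TEXT = "LLM Interpretation"   # the special list entry described above

def selection_to_mode(selected_text):
    """Map the preset-list selection to the (mode, preset_name) signal payload."""
    if selected_text is None:
        return ("placeholder", None)   # nothing selected yet
    if selected_text == LLM_ITEM_TEXT:
        return ("llm", None)           # MainWindow switches to the LLMEditorWidget
    return ("preset", selected_text)   # a concrete Presets/<name>.json entry

print(selection_to_mode("ExamplePreset"))        # -> ('preset', 'ExamplePreset')
print(selection_to_mode("LLM Interpretation"))   # -> ('llm', None)
print(selection_to_mode(None))                   # -> ('placeholder', None)
```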
### `LogConsoleWidget` (`gui/log_console_widget.py`)

@@ -79,6 +85,15 @@ This widget displays application logs within the GUI:

* Integrates with Python's `logging` system via a custom `QtLogHandler`.
* Can be shown/hidden via the main window's "View" menu.
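A `QtLogHandler` of this kind is typically a thin `logging.Handler` subclass that forwards formatted records to the widget. In the sketch below the Qt signal emission is stood in for by a plain callback, so everything except the class name is an assumption:

```python
import logging

class QtLogHandler(logging.Handler):
    """Forwards log records to a GUI sink (sketch: callback instead of a Qt signal)."""

    def __init__(self, emit_to_gui):
        super().__init__()
        self._emit_to_gui = emit_to_gui
        self.setFormatter(logging.Formatter("%(levelname)s - %(message)s"))

    def emit(self, record):
        # In the real widget this emits a Qt signal so the text append
        # happens on the GUI thread; here we just invoke the callback.
        self._emit_to_gui(self.format(record))


lines = []
logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addHandler(QtLogHandler(lines.append))
logger.info("preset loaded")
print(lines)   # -> ['INFO - preset loaded']
```

Routing the append through a signal (rather than calling the widget directly) is what makes the handler safe to use from the background prediction threads.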
### `LLMEditorWidget` (`gui/llm_editor_widget.py`)

A new widget dedicated to editing LLM settings:

* Provides a tabbed interface ("Prompt Settings", "API Settings") to edit `config/llm_settings.json`.
* Allows editing the main prompt, managing examples (add/delete/edit JSON), and configuring API details (URL, key, model, temperature, timeout).
* Loads settings via `load_settings()` and saves them using `_save_settings()` (which calls `configuration.save_llm_config()`).
* Placed within `MainWindow`'s `QStackedWidget`.
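The load/save round trip through `config/llm_settings.json` can be sketched with plain `json`. The key names below are plausible guesses based on the setting names listed in this document, not the actual file schema:

```python
import json
import os
import tempfile

# Assumed defaults; the real schema of llm_settings.json is not shown here.
DEFAULTS = {
    "llm_endpoint_url": "",
    "llm_model_name": "",
    "llm_temperature": 0.2,
    "llm_request_timeout": 60,
    "llm_predictor_prompt": "",
    "llm_predictor_examples": [],
}

def load_llm_settings(path):
    """Read the settings file, falling back to defaults on missing/invalid JSON."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        data = {}
    return {**DEFAULTS, **data}

def save_llm_config(settings, path):
    """Write the settings dict back (the real function lives in configuration.py)."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(settings, f, indent=2)


path = os.path.join(tempfile.mkdtemp(), "llm_settings.json")
settings = load_llm_settings(path)           # missing file -> defaults
settings["llm_model_name"] = "example-model"
save_llm_config(settings, path)
print(load_llm_settings(path)["llm_model_name"])   # -> example-model
```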
### `UnifiedViewModel` (`gui/unified_view_model.py`)

The `UnifiedViewModel` implements a `QAbstractItemModel` for use with Qt's model-view architecture. It is specifically designed to:
@@ -142,11 +157,13 @@ An experimental predictor (inheriting from `BasePredictionHandler`) that uses a

### `LLMInteractionHandler` (`gui/llm_interaction_handler.py`)

This class now acts as the central manager for LLM prediction tasks:

* **Manages the LLM prediction queue** and processes items sequentially.
* **Loads LLM configuration** directly from `config/llm_settings.json` and `config/app_settings.json`.
* **Instantiates and manages** the `LLMPredictionHandler` and its `QThread`.
* **Handles LLM task state** (running/idle) and signals changes to the GUI.
* Receives results/errors from `LLMPredictionHandler` and **emits signals** (`llm_prediction_ready`, `llm_prediction_error`, `llm_status_update`, `llm_processing_state_changed`) to `MainWindow`.
## Utility Modules (`utils/`)
@@ -6,11 +6,11 @@ This document provides technical details about the configuration system and the

The tool utilizes a two-tiered configuration system managed by the `configuration.py` module:

1. **Application Settings (`config/app_settings.json`):** This JSON file defines the core global default settings, constants, and rules that apply generally across different asset sources (e.g., default output paths, standard image resolutions, map merge rules, output format rules, Blender paths, `FILE_TYPE_DEFINITIONS`, `ASSET_TYPE_DEFINITIONS`).
2. **LLM Settings (`config/llm_settings.json`):** This JSON file contains settings specifically related to the LLM predictor, such as the API endpoint, model name, prompt template, and examples. These settings can be edited through the GUI using the `LLMEditorWidget`.
3. **Preset Files (`Presets/*.json`):** These JSON files define supplier-specific rules and overrides. They contain patterns to interpret filenames, classify map types, handle variants, define naming conventions, and specify other source-specific behaviors.

The `configuration.py` module contains the `Configuration` class (for loading/merging settings for processing) and standalone functions like `load_base_config()` (for accessing `app_settings.json` directly) and `save_llm_config()` / `save_base_config()` (for writing settings back to files). Note that the old `config.py` file has been deleted.
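The base-settings-plus-preset merge can be illustrated with a small recursive dict overlay. The exact merge semantics of `Configuration` are not documented here, so treat this as an assumed sketch ("preset values generally override core settings"):

```python
def merge_config(base, preset):
    """Recursively overlay preset values onto the base app settings.

    Assumption: nested dicts are merged key-by-key; any other preset value
    simply replaces the base value.
    """
    merged = dict(base)
    for key, value in preset.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged


# Hypothetical keys, for illustration only.
base = {"OUTPUT_PATH": "out", "NAMING": {"separator": "_", "case": "lower"}}
preset = {"NAMING": {"separator": "-"}}          # supplier-specific override
merged = merge_config(base, preset)
print(merged)   # preset separator wins, base 'case' survives
```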
## Supplier Management (`config/suppliers.json`)
@@ -33,16 +33,29 @@ The `Configuration` class is central to the new configuration system. It is resp

* **Regex Compilation (`_compile_regex_patterns`):** Compiles regex patterns defined in the merged configuration (from base settings and the preset) for performance. Compiled regex objects are stored as instance attributes (e.g., `self.compiled_map_keyword_regex`).
* **LLM Settings Access:** The `Configuration` class provides direct property access (e.g., `config.llm_endpoint_url`, `config.llm_api_key`, `config.llm_model_name`, `config.llm_temperature`, `config.llm_request_timeout`, `config.llm_predictor_prompt`, `config.get_llm_examples()`) to allow components like the `LLMPredictionHandler` to easily access the necessary LLM configuration values loaded from `config/llm_settings.json`.

An instance of `Configuration` is created within each worker process (`main.process_single_asset_wrapper`) to ensure that each concurrently processed asset uses the correct, isolated configuration based on the specified preset and the base application settings. The `LLMInteractionHandler` loads LLM settings directly using helper functions or file access, not the `Configuration` class.
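Pre-compiling the configured patterns once per `Configuration` instance looks roughly like this. Apart from the method name and the `compiled_map_keyword_regex` attribute named above, the settings key (`MAP_KEYWORDS`) and the pattern-building strategy are assumptions:

```python
import re

class Configuration:
    """Minimal sketch of the regex pre-compilation step only."""

    def __init__(self, merged_settings):
        self._settings = merged_settings
        self._compile_regex_patterns()

    def _compile_regex_patterns(self):
        # Compile once so per-file matching during processing stays cheap.
        keywords = self._settings.get("MAP_KEYWORDS", [])
        self.compiled_map_keyword_regex = re.compile(
            "|".join(re.escape(k) for k in keywords), re.IGNORECASE
        )


cfg = Configuration({"MAP_KEYWORDS": ["albedo", "normal", "roughness"]})
match = cfg.compiled_map_keyword_regex.search("Wood_Albedo_4k.png")
print(match.group(0))   # -> Albedo
```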
## GUI Configuration Editors

The GUI provides dedicated editors for modifying configuration files:

* **`ConfigEditorDialog` (`gui/config_editor_dialog.py`):** Edits the core `config/app_settings.json`.
* **`LLMEditorWidget` (`gui/llm_editor_widget.py`):** Edits the LLM-specific `config/llm_settings.json`.

### `ConfigEditorDialog` (`gui/config_editor_dialog.py`)

The GUI includes a dedicated editor for modifying the `config/app_settings.json` file. This is implemented in `gui/config_editor_dialog.py`.

* **Purpose:** Provides a user-friendly interface for viewing and editing the core application settings defined in `app_settings.json`.
* **Implementation:** The dialog loads the JSON content of `app_settings.json`, presents it in a tabbed layout ("General", "Output & Naming", etc.) using standard GUI widgets mapped to the JSON structure, and saves the changes back to the file. It supports editing basic fields, tables for definitions (`FILE_TYPE_DEFINITIONS`, `ASSET_TYPE_DEFINITIONS`), and a list/detail view for merge rules (`MAP_MERGE_RULES`). The definitions tables include dynamic color editing features.
* **Limitations:** Currently, editing complex fields like `IMAGE_RESOLUTIONS` or the full details of `MAP_MERGE_RULES` via the UI is not fully supported.
* **Note:** Changes made through the `ConfigEditorDialog` are written directly to `config/app_settings.json` (using `save_base_config`) but require an application restart to be loaded and applied by the `Configuration` class during processing.

### `LLMEditorWidget` (`gui/llm_editor_widget.py`)

* **Purpose:** Provides a user-friendly interface for viewing and editing the LLM settings defined in `config/llm_settings.json`.
* **Implementation:** Uses tabs for "Prompt Settings" and "API Settings". Allows editing the prompt, managing examples, and configuring API details.
* **Persistence:** Saves changes directly to `config/llm_settings.json` using the `configuration.save_llm_config()` function. Changes are loaded by the `LLMInteractionHandler` the next time an LLM task is initiated.
## Preset File Structure (`Presets/*.json`)
@@ -11,21 +11,34 @@ The GUI is built using `PySide6`, which provides Python bindings for the Qt fram

The `MainWindow` class acts as the central **coordinator** for the GUI application. It is responsible for:

* Setting up the main application window structure and menu bar.
* **Layout:** Arranging the main GUI components using a `QSplitter`.
    * **Left Pane:** Contains the preset selection controls (from `PresetEditorWidget`) permanently displayed at the top. Below this, a `QStackedWidget` switches between the preset JSON editor (also from `PresetEditorWidget`) and the `LLMEditorWidget`.
    * **Right Pane:** Contains the `MainPanelWidget`.
* Instantiating and managing the major GUI widgets:
    * `PresetEditorWidget` (`gui/preset_editor_widget.py`): Provides the preset selector and the JSON editor parts.
    * `LLMEditorWidget` (`gui/llm_editor_widget.py`): Provides the editor for LLM settings.
    * `MainPanelWidget` (`gui/main_panel_widget.py`): Contains the rule hierarchy view and processing controls.
    * `LogConsoleWidget` (`gui/log_console_widget.py`): Displays application logs.
* Instantiating key models and handlers:
    * `UnifiedViewModel` (`gui/unified_view_model.py`): The model for the rule hierarchy view.
    * `LLMInteractionHandler` (`gui/llm_interaction_handler.py`): Manages communication with the LLM service.
    * `AssetRestructureHandler` (`gui/asset_restructure_handler.py`): Handles rule restructuring.
* Connecting signals and slots between these components to orchestrate the application flow.
* **Editor Switching:** Handling the `preset_selection_changed_signal` from `PresetEditorWidget` in its `_on_preset_selection_changed` slot. This slot:
    * Switches the `QStackedWidget` (`editor_stack`) to display either the `PresetEditorWidget`'s JSON editor or the `LLMEditorWidget` based on the selected mode ("preset", "llm", "placeholder").
    * Calls `llm_editor_widget.load_settings()` when switching to LLM mode.
    * Updates the window title.
    * Triggers `update_preview()`.
* Handling top-level user interactions like drag-and-drop for loading sources (`add_input_paths`). This method now handles the "placeholder" state (no preset selected) by scanning directories or inspecting archives (ZIP) and creating placeholder `SourceRule`/`AssetRule`/`FileRule` objects to immediately populate the `UnifiedViewModel` with the file structure.
* Initiating predictions based on the selected preset mode (Rule-Based or LLM) when presets change or sources are added (`update_preview`).
* Starting the processing task (`_on_process_requested`): This slot now filters the `SourceRule` list obtained from the `UnifiedViewModel`, excluding sources where no asset has a `Target Asset` name assigned, before emitting the `start_backend_processing` signal. It also manages enabling/disabling controls.
* Managing the background prediction threads (`RuleBasedPredictionHandler` via `QThread`, `LLMPredictionHandler` via `LLMInteractionHandler`).
* Implementing slots to handle results from background tasks:
    * `_on_rule_hierarchy_ready`: Handles results from `RuleBasedPredictionHandler`.
    * `_on_llm_prediction_ready_from_handler`: Handles results from `LLMInteractionHandler`.
    * `_on_prediction_error`: Handles errors from both prediction paths.
    * `_handle_prediction_completion`: Centralized logic to track completion and update UI state after each prediction result or error.
* Slots to handle status and state changes from `LLMInteractionHandler`.
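The filtering step in `_on_process_requested` can be sketched with plain data classes. The `SourceRule`/`AssetRule` names and the `Target Asset` field come from the description above; the attribute names themselves are assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AssetRule:
    target_asset: Optional[str] = None   # the 'Target Asset' name column

@dataclass
class SourceRule:
    source_id: str
    assets: List[AssetRule] = field(default_factory=list)

def filter_processable(source_rules):
    """Drop sources where no asset has a Target Asset name assigned."""
    return [s for s in source_rules if any(a.target_asset for a in s.assets)]


rules = [
    SourceRule("src_a", [AssetRule("WoodFloor01")]),
    SourceRule("src_b", [AssetRule(None)]),   # nothing assigned -> excluded
]
ready = filter_processable(rules)
print([s.source_id for s in ready])   # -> ['src_a']
```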
## Threading and Background Tasks
@@ -53,7 +66,26 @@ Communication between the `MainWindow` (main UI thread) and the background predi

## Preset Editor (`gui/preset_editor_widget.py`)

The `PresetEditorWidget` provides a dedicated interface for managing presets. It handles loading, displaying, editing, and saving preset `.json` files.

* **Refactoring:** This widget has been refactored to expose its main components:
    * `selector_container`: A `QWidget` containing the preset list (`QListWidget`) and New/Delete buttons. Used statically by `MainWindow`.
    * `json_editor_container`: A `QWidget` containing the tabbed editor (`QTabWidget`) for preset JSON details and the Save/Save As buttons. Placed in `MainWindow`'s `QStackedWidget`.
* **Functionality:** Still manages the logic for populating the preset list, loading/saving presets, handling unsaved changes, and providing the editor UI for preset details.
* **Communication:** Emits `preset_selection_changed_signal(mode, preset_name)` when the user selects a preset, the LLM option, or the placeholder. This signal is crucial for `MainWindow` to switch the editor stack and trigger preview updates.

## LLM Settings Editor (`gui/llm_editor_widget.py`)

This new widget provides a dedicated interface for editing LLM-specific settings stored in `config/llm_settings.json`.

* **Purpose:** Allows users to configure the LLM predictor's behavior without directly editing the JSON file.
* **Structure:** Uses a `QTabWidget` with two tabs:
    * **"Prompt Settings":** Contains a `QPlainTextEdit` for the main prompt and a nested `QTabWidget` for managing examples (add/delete/edit JSON in `QTextEdit` widgets).
    * **"API Settings":** Contains fields (`QLineEdit`, `QDoubleSpinBox`, `QSpinBox`) for endpoint URL, API key, model name, temperature, and timeout.
* **Functionality:**
    * `load_settings()`: Reads `config/llm_settings.json` and populates the UI fields. Handles file not found or JSON errors. Called by `MainWindow` when switching to LLM mode.
    * `_save_settings()`: Gathers data from the UI, validates example JSON, constructs the settings dictionary, and calls `configuration.save_llm_config()` to write back to the file. Emits `settings_saved` signal on success.
    * Manages unsaved changes state and enables/disables the "Save LLM Settings" button accordingly.
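The per-example JSON validation performed by `_save_settings()` before writing can be sketched as follows; the error-reporting shape is an assumption:

```python
import json

def validate_examples(example_texts):
    """Parse each example editor's text; return (parsed, errors).

    Sketch of the validation described above: invalid JSON blocks the save
    and is reported per example instead of corrupting llm_settings.json.
    """
    parsed, errors = [], []
    for index, text in enumerate(example_texts):
        try:
            parsed.append(json.loads(text))
        except json.JSONDecodeError as exc:
            errors.append((index, str(exc)))
    return parsed, errors


ok, errs = validate_examples(['{"input": "wood_albedo.png"}', "{not json"])
print(len(ok), len(errs))   # -> 1 1
```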
## Unified Hierarchical View
@@ -80,36 +112,44 @@ The core rule editing interface is built around a `QTreeView` managed within the

graph TD
    subgraph MainWindow [MainWindow Coordinator]
        direction LR
        MW_Input["User Input (Drag/Drop)"] --> MW(MainWindow);
        MW -- Owns/Manages --> Splitter(QSplitter);
        MW -- Owns/Manages --> LLMIH(LLMInteractionHandler);
        MW -- Owns/Manages --> ARH(AssetRestructureHandler);
        MW -- Owns/Manages --> VM(UnifiedViewModel);
        MW -- Owns/Manages --> LCW(LogConsoleWidget);
        MW -- Initiates --> PredPool{Prediction Threads};
        MW -- Connects Signals --> VM;
        MW -- Connects Signals --> ARH;
        MW -- Connects Signals --> LLMIH;
        MW -- Connects Signals --> PEW(PresetEditorWidget);
        MW -- Connects Signals --> LLMEDW(LLMEditorWidget);
    end

    subgraph LeftPane [Left Pane Widgets]
        direction TB
        Splitter -- Adds Widget --> LPW(Left Pane Container);
        LPW -- Contains --> PEW_Sel("PresetEditorWidget - Selector");
        LPW -- Contains --> Stack(QStackedWidget);
        Stack -- Contains --> PEW_Edit("PresetEditorWidget - JSON Editor");
        Stack -- Contains --> LLMEDW;
    end

    subgraph RightPane [Right Pane Widgets]
        direction TB
        Splitter -- Adds Widget --> MPW(MainPanelWidget);
        MPW -- Contains --> TV("QTreeView - Rule View");
        MPW_UI["UI Controls (Process Btn, etc)"];
        MPW_UI --> MPW;
    end

    subgraph Prediction [Background Prediction]
        direction TB
        PredPool -- Runs --> RBP(RuleBasedPredictionHandler);
        PredPool -- Runs --> LLMP(LLMPredictionHandler);
        LLMIH -- Manages/Starts --> LLMP;
        RBP -- prediction_ready/error/status --> MW;
        LLMIH -- llm_prediction_ready/error/status --> MW;
    end

    subgraph ModelView [Model/View Components]

@@ -126,17 +166,24 @@ graph TD

        Del -- Get/Set Data --> VM;
    end

    %% MainWindow Interactions
    MW_Input -- Triggers --> MW;
    PEW -- preset_selection_changed_signal --> MW;
    LLMEDW -- settings_saved --> MW;
    MPW -- process_requested/etc --> MW;
    MW -- _on_preset_selection_changed --> Stack;
    MW -- _on_preset_selection_changed --> LLMEDW;
    MW -- _handle_prediction_completion --> VM;
    MW -- Triggers Processing --> ProcTask(main.ProcessingTask);

    %% Connections between subgraphs
    %% PresetEditorWidget parts are in the Left Pane
    PEW --> LPW;
    %% LLMEditorWidget is in the Stack
    LLMEDW --> Stack;
    %% MainPanelWidget is in the Right Pane
    MPW --> Splitter;
    VM --> MW;
    ARH --> MW;
    LLMIH --> MW;
    LCW --> MW;
```

## Application Styling
@@ -16,7 +16,9 @@ The LLM Predictor is configured via settings in the dedicated `config/llm_settin

- `llm_request_timeout`: The maximum time (in seconds) to wait for a response from the LLM API.
- `llm_predictor_examples`: A list of example input/output pairs to include in the prompt for few-shot learning, helping the LLM understand the desired output format and classification logic.

**Editing:** These settings can be edited directly through the GUI using the **`LLMEditorWidget`** (`gui/llm_editor_widget.py`), which provides a user-friendly interface for modifying the prompt, examples, and API parameters. Changes are saved back to `config/llm_settings.json` via the `configuration.save_llm_config()` function.

**Loading:** The `LLMInteractionHandler` now loads these settings directly from `config/llm_settings.json` and relevant parts of `config/app_settings.json` when it needs to start an `LLMPredictionHandler` task. It no longer relies on the main `Configuration` class for LLM-specific settings. The prompt structure remains crucial for effective classification. Placeholders within the prompt template (e.g., `{FILE_LIST}`) are dynamically replaced with relevant data before the request is sent.
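Placeholder substitution can be as simple as a string-replacement pass. `{FILE_LIST}` is the only placeholder named in this document; the template text and filenames below are illustrative assumptions:

```python
def build_prompt(template, file_list):
    """Fill the {FILE_LIST} placeholder with one filename per line."""
    return template.replace("{FILE_LIST}", "\n".join(file_list))


template = "Classify the following files:\n{FILE_LIST}\nReturn JSON only."
prompt = build_prompt(template, ["wood_albedo.png", "wood_normal.png"])
print(prompt)
```

Using `str.replace` rather than `str.format` avoids problems when the prompt itself contains literal braces, e.g. JSON snippets in the few-shot examples.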
## Expected LLM Output Format (Refactored)
@@ -58,41 +60,48 @@ The LLM is now expected to return a JSON object containing two distinct parts.

- `proposed_asset_group_name`: A name suggested by the LLM to group this file with others belonging to the same conceptual asset. This is used internally by the parser.
- **`asset_group_classifications`**: A dictionary mapping the `proposed_asset_group_name` values from the list above to a final `asset_type` (e.g., "PBR Material", "HDR Environment").
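To make the two-part contract concrete, here is a hypothetical parsed response and the lookup step the parser performs. The file names, file types, and group names below are invented for illustration; only the two top-level keys come from the documented format.

```python
# Hypothetical parsed LLM response illustrating the two-part format.
response = {
    "individual_file_analysis": [
        {"file_path": "wood/wood_albedo.png",
         "classified_file_type": "Albedo Map",
         "proposed_asset_group_name": "wood"},
        {"file_path": "wood/wood_normal.png",
         "classified_file_type": "Normal Map",
         "proposed_asset_group_name": "wood"},
    ],
    "asset_group_classifications": {"wood": "PBR Material"},
}

# The parser groups files by proposed_asset_group_name and resolves each
# group's final asset_type via asset_group_classifications.
groups = {}
for entry in response["individual_file_analysis"]:
    group = entry["proposed_asset_group_name"]
    asset_type = response["asset_group_classifications"][group]
    groups.setdefault((group, asset_type), []).append(entry["classified_file_type"])

print(groups)  # {('wood', 'PBR Material'): ['Albedo Map', 'Normal Map']}
```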
## `LLMInteractionHandler` (Refactored)

The `gui/llm_interaction_handler.py` module contains the `LLMInteractionHandler` class, which now acts as the central manager for LLM prediction tasks.

Key Responsibilities & Methods:
- **Queue Management:** Maintains a queue (`llm_processing_queue`) of pending prediction requests (input path, file list). Handles adding single (`queue_llm_request`) or batch (`queue_llm_requests_batch`) requests.
- **State Management:** Tracks whether an LLM task is currently running (`_is_processing`) and emits `llm_processing_state_changed(bool)` to update the GUI (e.g., disable preset editor). Includes `force_reset_state()` for recovery.
- **Task Orchestration:** Processes the queue sequentially (`_process_next_llm_item`). For each item:
  * Loads required settings directly from `config/llm_settings.json` and `config/app_settings.json`.
  * Instantiates an `LLMPredictionHandler` in a new `QThread`.
  * Passes the loaded settings dictionary to the `LLMPredictionHandler`.
  * Connects signals from the handler (`prediction_ready`, `prediction_error`, `status_update`) to internal slots (`_handle_llm_result`, `_handle_llm_error`) or directly re-emits them (`llm_status_update`).
  * Starts the thread.
- **Result/Error Handling:** Internal slots (`_handle_llm_result`, `_handle_llm_error`) receive results/errors from the `LLMPredictionHandler`, remove the completed/failed item from the queue, emit the corresponding public signal (`llm_prediction_ready`, `llm_prediction_error`), and trigger processing of the next queue item.
- **Communication:** Emits signals to `MainWindow`:
  * `llm_prediction_ready(input_path, source_rule_list)`
  * `llm_prediction_error(input_path, error_message)`
  * `llm_status_update(status_message)`
  * `llm_processing_state_changed(is_processing)`
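The queue discipline described above can be sketched in plain Python. This is a simplified, thread-free stand-in for the Qt implementation: the attribute and method names mirror the handler's, but the bodies are illustrative (a synchronous `worker` callable replaces the real `LLMPredictionHandler` running in a `QThread`).

```python
class SequentialQueueSketch:
    """Illustrative stand-in for LLMInteractionHandler's queue discipline."""

    def __init__(self, worker):
        self.llm_processing_queue = []   # pending (input_path, file_list) items
        self._is_processing = False
        self._worker = worker            # callable standing in for LLMPredictionHandler
        self.results = {}

    def queue_llm_request(self, input_path, file_list):
        # Deduplicate on input_path, as the real handler does.
        if not any(item[0] == input_path for item in self.llm_processing_queue):
            self.llm_processing_queue.append((input_path, file_list))
        self._process_next_llm_item()

    def _process_next_llm_item(self):
        # Strictly sequential: never start a new item while one is running.
        if self._is_processing or not self.llm_processing_queue:
            return
        self._is_processing = True
        input_path, file_list = self.llm_processing_queue[0]
        try:
            self._handle_llm_result(input_path, self._worker(input_path, file_list))
        except Exception as e:
            self._handle_llm_error(input_path, str(e))

    def _handle_llm_result(self, input_path, result):
        self.results[input_path] = result
        self.llm_processing_queue.pop(0)   # remove the completed item...
        self._is_processing = False
        self._process_next_llm_item()      # ...then trigger the next one

    def _handle_llm_error(self, input_path, message):
        self.results[input_path] = ("error", message)
        self.llm_processing_queue.pop(0)
        self._is_processing = False
        self._process_next_llm_item()


q = SequentialQueueSketch(worker=lambda path, files: f"rules for {path}")
q.queue_llm_request("/assets/a", ["a.png"])
q.queue_llm_request("/assets/b", None)
print(q.results)  # {'/assets/a': 'rules for /assets/a', '/assets/b': 'rules for /assets/b'}
```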
## `LLMPredictionHandler` (Refactored)

The `gui/llm_prediction_handler.py` module contains the `LLMPredictionHandler` class (inheriting from `BasePredictionHandler`), which performs the actual LLM prediction for a *single* input source. It runs in a background thread managed by the `LLMInteractionHandler`.

Key Responsibilities & Methods:
- **Initialization**: Takes the source identifier, file list, and a **`settings` dictionary** (passed from `LLMInteractionHandler`) containing all necessary configuration (LLM endpoint, prompt, examples, API details, type definitions, etc.).
- **`_perform_prediction()`**: Implements the core prediction logic:
  * **Prompt Preparation (`_prepare_prompt`)**: Uses the passed `settings` dictionary to access the prompt template, type definitions, and examples to build the final prompt string.
  * **API Call (`_call_llm`)**: Uses the passed `settings` dictionary to get the endpoint URL, API key, model name, temperature, and timeout to make the API request.
  * **Parsing (`_parse_llm_response`)**: Parses the LLM's JSON response (using type definitions from the `settings` dictionary for validation) and constructs the `SourceRule` hierarchy based on the two-part format (`individual_file_analysis`, `asset_group_classifications`). Includes sanitization logic for comments and markdown fences.
- **Signals (Inherited):** Emits `prediction_ready(input_path, source_rule_list)` or `prediction_error(input_path, error_message)` upon completion or failure, which are connected to the `LLMInteractionHandler`. Also emits `status_update(message)`.
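The sanitization step mentioned above can be illustrated as follows. This is a deliberately simplified sketch: the real `_parse_llm_response` additionally validates types against the settings dictionary and builds the rule hierarchy, and a naive regex like this would also strip `//` sequences inside string values (e.g., URLs).

```python
import json
import re

def sanitize_and_parse(response_text: str) -> dict:
    """Strip markdown fences and comments that LLMs often add, then parse JSON."""
    text = response_text.strip()
    # Remove markdown code fences such as ```json ... ```
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", text)
    # Remove /* ... */ block comments
    text = re.sub(r"/\*.*?\*/", "", text, flags=re.DOTALL)
    # Remove // line comments (naive: would also hit '//' inside strings)
    text = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)
    return json.loads(text)

raw = ("```json\n{\n    // classification result\n"
       "    \"individual_file_analysis\": [],\n"
       "    \"asset_group_classifications\": {}\n}\n```")
parsed = sanitize_and_parse(raw)
print(sorted(parsed))  # ['asset_group_classifications', 'individual_file_analysis']
```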
## GUI Integration
- The LLM predictor mode is selected via the preset dropdown in `PresetEditorWidget`.
- Selecting "LLM Interpretation" triggers `MainWindow._on_preset_selection_changed`, which switches the editor view to the `LLMEditorWidget` and calls `update_preview`.
- `MainWindow.update_preview` (or `add_input_paths`) delegates the LLM prediction request(s) to the `LLMInteractionHandler`'s queue.
- `LLMInteractionHandler` manages the background tasks and signals results/errors/status back to `MainWindow`.
- `MainWindow` slots (`_on_llm_prediction_ready_from_handler`, `_on_prediction_error`, `show_status_message`, `_on_llm_processing_state_changed`) handle these signals to update the `UnifiedViewModel` and the UI state (status bar, progress, button enablement).
- The `LLMEditorWidget` allows users to modify settings, saving them via `configuration.save_llm_config()`. `MainWindow` listens for the `settings_saved` signal to provide user feedback.
## Model Integration (Refactored)

@@ -103,11 +112,12 @@ The `gui/unified_view_model.py` module's `update_rules_for_sources` method still
## Error Handling (Updated)

Error handling is distributed:

- **Configuration Loading:** `LLMInteractionHandler` handles errors loading `llm_settings.json` or `app_settings.json` before starting a task.
- **LLM API Errors:** Handled within `LLMPredictionHandler._call_llm` (e.g., `requests.exceptions.RequestException`, `HTTPError`) and propagated via the `prediction_error` signal.
- **Sanitization/Parsing Errors:** `LLMPredictionHandler._parse_llm_response` catches errors during comment/markdown removal and `json.loads()`.
- **Structure/Validation Errors:** `LLMPredictionHandler._parse_llm_response` includes explicit checks for the required two-part JSON structure and data consistency.
- **Task Management Errors:** `LLMInteractionHandler` handles errors during thread setup/start.

All errors ultimately result in the `llm_prediction_error` signal being emitted by `LLMInteractionHandler`, allowing `MainWindow` to inform the user via the status bar and handle the completion state.
@@ -504,6 +504,19 @@ def load_base_config() -> dict:
        log.error(f"Failed to read base configuration file {APP_SETTINGS_PATH}: {e}")
        return {}  # Return empty dict on error


def save_llm_config(settings_dict: dict):
    """
    Saves the provided LLM settings dictionary to llm_settings.json.
    """
    log.debug(f"Saving LLM config to: {LLM_SETTINGS_PATH}")
    try:
        with open(LLM_SETTINGS_PATH, 'w', encoding='utf-8') as f:
            json.dump(settings_dict, f, indent=4)
        log.info(f"LLM config saved successfully to {LLM_SETTINGS_PATH}")  # Use info level for successful save
    except Exception as e:
        log.error(f"Failed to save LLM configuration file {LLM_SETTINGS_PATH}: {e}")
        # Re-raise as ConfigurationError to signal failure upstream
        raise ConfigurationError(f"Failed to save LLM configuration: {e}")


def save_base_config(settings_dict: dict):
    """
    Saves the provided settings dictionary to app_settings.json.
    """
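A minimal sketch of the round trip that `save_llm_config` and the editor's `load_settings` perform, using a temporary path rather than the real `LLM_SETTINGS_PATH` (the endpoint value is illustrative):

```python
import json
import tempfile
from pathlib import Path

settings = {
    "llm_endpoint_url": "http://localhost:8080/v1/chat/completions",  # illustrative value
    "llm_temperature": 0.7,
    "llm_predictor_examples": [],
}

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "llm_settings.json"
    # Core of save_llm_config: dump the dict with indent=4
    with open(path, "w", encoding="utf-8") as f:
        json.dump(settings, f, indent=4)
    # Core of load_settings: read it back
    with open(path, "r", encoding="utf-8") as f:
        reloaded = json.load(f)

print(reloaded == settings)  # True
```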
@@ -65,6 +65,7 @@ class BasePredictionHandler(QObject, ABC, metaclass=QtABCMeta):
        Main execution slot intended to be connected to QThread.started.
        Handles the overall process: setup, execution, error handling, signaling.
        """
        log.debug(f"--> Entered BasePredictionHandler.run() for {self.input_source_identifier}")  # ADDED DEBUG LOG
        if self._is_running:
            log.warning(f"Handler for '{self.input_source_identifier}' is already running. Aborting.")
            return
gui/llm_editor_widget.py (new file, 318 lines)
@@ -0,0 +1,318 @@
# gui/llm_editor_widget.py
import json
import logging
from PySide6.QtWidgets import (
    QWidget, QVBoxLayout, QTabWidget, QPlainTextEdit, QGroupBox,
    QHBoxLayout, QPushButton, QFormLayout, QLineEdit, QDoubleSpinBox,
    QSpinBox, QMessageBox, QTextEdit
)
from PySide6.QtCore import Slot as pyqtSlot, Signal as pyqtSignal  # Use PySide6 equivalents

# Assuming configuration module exists and has relevant functions later
from configuration import save_llm_config, ConfigurationError  # Import necessary items
# For now, define path directly for initial structure
LLM_CONFIG_PATH = "config/llm_settings.json"

logger = logging.getLogger(__name__)


class LLMEditorWidget(QWidget):
    """
    Widget for editing LLM settings stored in config/llm_settings.json.
    """
    settings_saved = pyqtSignal()  # Signal emitted when settings are successfully saved

    def __init__(self, parent=None):
        super().__init__(parent)
        self._unsaved_changes = False
        self._init_ui()
        self._connect_signals()
        self.save_button.setEnabled(False)  # Initially disabled

    def _init_ui(self):
        """Initialize the user interface components."""
        main_layout = QVBoxLayout(self)

        # --- Main Tab Widget ---
        self.tab_widget = QTabWidget()
        main_layout.addWidget(self.tab_widget)

        # --- Tab 1: Prompt Settings ---
        self.tab_prompt = QWidget()
        prompt_layout = QVBoxLayout(self.tab_prompt)
        self.tab_widget.addTab(self.tab_prompt, "Prompt Settings")

        self.prompt_editor = QPlainTextEdit()
        self.prompt_editor.setPlaceholderText("Enter the main LLM predictor prompt here...")
        prompt_layout.addWidget(self.prompt_editor)

        # Examples GroupBox
        examples_groupbox = QGroupBox("Examples")
        examples_layout = QVBoxLayout(examples_groupbox)
        prompt_layout.addWidget(examples_groupbox)

        self.examples_tab_widget = QTabWidget()
        self.examples_tab_widget.setTabsClosable(True)
        examples_layout.addWidget(self.examples_tab_widget)

        example_button_layout = QHBoxLayout()
        examples_layout.addLayout(example_button_layout)

        self.add_example_button = QPushButton("Add Example")
        example_button_layout.addWidget(self.add_example_button)

        self.delete_example_button = QPushButton("Delete Current Example")
        example_button_layout.addWidget(self.delete_example_button)
        example_button_layout.addStretch()

        # --- Tab 2: API Settings ---
        self.tab_api = QWidget()
        api_layout = QFormLayout(self.tab_api)
        self.tab_widget.addTab(self.tab_api, "API Settings")

        self.endpoint_url_edit = QLineEdit()
        api_layout.addRow("Endpoint URL:", self.endpoint_url_edit)

        self.api_key_edit = QLineEdit()
        self.api_key_edit.setEchoMode(QLineEdit.Password)
        api_layout.addRow("API Key:", self.api_key_edit)

        self.model_name_edit = QLineEdit()
        api_layout.addRow("Model Name:", self.model_name_edit)

        self.temperature_spinbox = QDoubleSpinBox()
        self.temperature_spinbox.setRange(0.0, 2.0)
        self.temperature_spinbox.setSingleStep(0.1)
        self.temperature_spinbox.setDecimals(2)
        api_layout.addRow("Temperature:", self.temperature_spinbox)

        self.timeout_spinbox = QSpinBox()
        self.timeout_spinbox.setRange(1, 600)
        self.timeout_spinbox.setSuffix(" s")
        api_layout.addRow("Request Timeout:", self.timeout_spinbox)

        # --- Save Button ---
        save_button_layout = QHBoxLayout()
        main_layout.addLayout(save_button_layout)
        save_button_layout.addStretch()
        self.save_button = QPushButton("Save LLM Settings")
        save_button_layout.addWidget(self.save_button)

        self.setLayout(main_layout)

    def _connect_signals(self):
        """Connect signals to slots."""
        # Save button
        self.save_button.clicked.connect(self._save_settings)

        # Fields triggering unsaved changes
        self.prompt_editor.textChanged.connect(self._mark_unsaved)
        self.endpoint_url_edit.textChanged.connect(self._mark_unsaved)
        self.api_key_edit.textChanged.connect(self._mark_unsaved)
        self.model_name_edit.textChanged.connect(self._mark_unsaved)
        self.temperature_spinbox.valueChanged.connect(self._mark_unsaved)
        self.timeout_spinbox.valueChanged.connect(self._mark_unsaved)

        # Example management buttons and tab close signal
        self.add_example_button.clicked.connect(self._add_example_tab)
        self.delete_example_button.clicked.connect(self._delete_current_example_tab)
        self.examples_tab_widget.tabCloseRequested.connect(self._remove_example_tab)

        # Note: Connecting textChanged for example editors needs to happen
        # when the tabs/editors are created (in load_settings and _add_example_tab)

    @pyqtSlot()
    def load_settings(self):
        """Load settings from the JSON file and populate the UI."""
        logger.info(f"Attempting to load LLM settings from {LLM_CONFIG_PATH}")
        self.setEnabled(True)  # Enable widget before trying to load

        # Clear previous examples
        while self.examples_tab_widget.count() > 0:
            self.examples_tab_widget.removeTab(0)

        try:
            with open(LLM_CONFIG_PATH, 'r', encoding='utf-8') as f:
                settings = json.load(f)

            # Populate Prompt Settings
            self.prompt_editor.setPlainText(settings.get("llm_predictor_prompt", ""))

            # Populate Examples
            examples = settings.get("llm_predictor_examples", [])
            for i, example in enumerate(examples):
                try:
                    example_text = json.dumps(example, indent=4)
                    example_editor = QTextEdit()
                    example_editor.setPlainText(example_text)
                    example_editor.textChanged.connect(self._mark_unsaved)  # Connect here
                    self.examples_tab_widget.addTab(example_editor, f"Example {i+1}")
                except TypeError as e:
                    logger.error(f"Error formatting example {i+1}: {e}. Skipping.")
                    QMessageBox.warning(self, "Load Error", f"Could not format example {i+1}. It might be invalid.\nError: {e}")

            # Populate API Settings
            self.endpoint_url_edit.setText(settings.get("llm_endpoint_url", ""))
            self.api_key_edit.setText(settings.get("llm_api_key", ""))  # Consider security implications
            self.model_name_edit.setText(settings.get("llm_model_name", ""))
            self.temperature_spinbox.setValue(settings.get("llm_temperature", 0.7))
            self.timeout_spinbox.setValue(settings.get("llm_request_timeout", 120))

            logger.info("LLM settings loaded successfully.")

        except FileNotFoundError:
            logger.warning(f"LLM settings file not found: {LLM_CONFIG_PATH}. Using defaults and disabling editor.")
            QMessageBox.warning(self, "Load Error",
                                f"LLM settings file not found:\n{LLM_CONFIG_PATH}\n\nPlease ensure the file exists. Using default values.")
            # Reset to defaults (optional, or leave fields empty)
            self.prompt_editor.clear()
            self.endpoint_url_edit.clear()
            self.api_key_edit.clear()
            self.model_name_edit.clear()
            self.temperature_spinbox.setValue(0.7)
            self.timeout_spinbox.setValue(120)
            # self.setEnabled(False)  # Disabling might be too harsh if user wants to create settings

        except json.JSONDecodeError as e:
            logger.error(f"Error decoding JSON from {LLM_CONFIG_PATH}: {e}")
            QMessageBox.critical(self, "Load Error",
                                 f"Failed to parse LLM settings file:\n{LLM_CONFIG_PATH}\n\nError: {e}\n\nPlease check the file for syntax errors. Editor will be disabled.")
            self.setEnabled(False)  # Disable editor on critical load error

        except Exception as e:  # Catch other potential errors during loading/populating
            logger.error(f"An unexpected error occurred loading LLM settings: {e}", exc_info=True)
            QMessageBox.critical(self, "Load Error",
                                 f"An unexpected error occurred while loading settings:\n{e}\n\nEditor will be disabled.")
            self.setEnabled(False)

        # Reset unsaved changes flag and disable save button after loading
        self.save_button.setEnabled(False)
        self._unsaved_changes = False

    @pyqtSlot()
    def _mark_unsaved(self):
        """Mark settings as having unsaved changes and enable the save button."""
        if not self._unsaved_changes:
            self._unsaved_changes = True
            self.save_button.setEnabled(True)
            logger.debug("Unsaved changes marked.")

    @pyqtSlot()
    def _save_settings(self):
        """Gather data from UI, save to JSON file, and handle errors."""
        logger.info("Attempting to save LLM settings...")

        settings_dict = {}
        parsed_examples = []
        has_errors = False

        # Gather API Settings
        settings_dict["llm_endpoint_url"] = self.endpoint_url_edit.text().strip()
        settings_dict["llm_api_key"] = self.api_key_edit.text()  # Keep as is, don't strip
        settings_dict["llm_model_name"] = self.model_name_edit.text().strip()
        settings_dict["llm_temperature"] = self.temperature_spinbox.value()
        settings_dict["llm_request_timeout"] = self.timeout_spinbox.value()

        # Gather Prompt Settings
        settings_dict["llm_predictor_prompt"] = self.prompt_editor.toPlainText().strip()

        # Gather and Parse Examples
        for i in range(self.examples_tab_widget.count()):
            example_editor = self.examples_tab_widget.widget(i)
            if isinstance(example_editor, QTextEdit):
                example_text = example_editor.toPlainText().strip()
                if not example_text:  # Skip empty examples silently
                    continue
                try:
                    parsed_example = json.loads(example_text)
                    parsed_examples.append(parsed_example)
                except json.JSONDecodeError as e:
                    has_errors = True
                    tab_name = self.examples_tab_widget.tabText(i)
                    logger.warning(f"Invalid JSON in '{tab_name}': {e}. Skipping example.")
                    QMessageBox.warning(self, "Invalid Example",
                                        f"The content in '{tab_name}' is not valid JSON and will not be saved.\n\nError: {e}\n\nPlease correct it or remove the tab.")
                    # Optionally switch to the tab with the error:
                    # self.examples_tab_widget.setCurrentIndex(i)
            else:
                logger.warning(f"Widget at index {i} in examples tab is not a QTextEdit. Skipping.")

        if has_errors:
            logger.warning("LLM settings not saved due to invalid JSON in examples.")
            # Keep save button enabled if there were errors, allowing user to fix and retry
            # self.save_button.setEnabled(True)
            # self._unsaved_changes = True
            return  # Stop saving process

        settings_dict["llm_predictor_examples"] = parsed_examples

        # Save the dictionary to file
        try:
            save_llm_config(settings_dict)
            QMessageBox.information(self, "Save Successful", f"LLM settings saved to:\n{LLM_CONFIG_PATH}")
            self.save_button.setEnabled(False)
            self._unsaved_changes = False
            self.settings_saved.emit()  # Notify MainWindow or others
            logger.info("LLM settings saved successfully.")

        except ConfigurationError as e:
            logger.error(f"Failed to save LLM settings: {e}")
            QMessageBox.critical(self, "Save Error", f"Could not save LLM settings.\n\nError: {e}")
            # Keep save button enabled as save failed
            self.save_button.setEnabled(True)
            self._unsaved_changes = True
        except Exception as e:  # Catch unexpected errors during save
            logger.error(f"An unexpected error occurred during LLM settings save: {e}", exc_info=True)
            QMessageBox.critical(self, "Save Error", f"An unexpected error occurred while saving settings:\n{e}")
            self.save_button.setEnabled(True)
            self._unsaved_changes = True

    # --- Example Management Slots ---
    @pyqtSlot()
    def _add_example_tab(self):
        """Add a new, empty tab for an LLM example."""
        logger.debug("Adding new example tab.")
        new_example_editor = QTextEdit()
        new_example_editor.setPlaceholderText("Enter example JSON here...")
        new_example_editor.textChanged.connect(self._mark_unsaved)  # Connect signal

        # Determine the next example number
        next_example_num = self.examples_tab_widget.count() + 1
        index = self.examples_tab_widget.addTab(new_example_editor, f"Example {next_example_num}")
        self.examples_tab_widget.setCurrentIndex(index)  # Focus the new tab
        new_example_editor.setFocus()  # Focus the editor within the tab

        self._mark_unsaved()  # Mark changes since we added a tab

    @pyqtSlot()
    def _delete_current_example_tab(self):
        """Delete the currently selected example tab."""
        current_index = self.examples_tab_widget.currentIndex()
        if current_index != -1:  # Check if a tab is selected
            logger.debug(f"Deleting current example tab at index {current_index}.")
            self._remove_example_tab(current_index)  # Reuse the remove logic
        else:
            logger.debug("Delete current example tab called, but no tab is selected.")

    @pyqtSlot(int)
    def _remove_example_tab(self, index):
        """Remove the example tab at the given index."""
        if 0 <= index < self.examples_tab_widget.count():
            widget_to_remove = self.examples_tab_widget.widget(index)
            self.examples_tab_widget.removeTab(index)
            if widget_to_remove:
                # Disconnect signals if necessary, though Python's GC should handle it
                # widget_to_remove.textChanged.disconnect(self._mark_unsaved)  # Optional cleanup
                widget_to_remove.deleteLater()  # Ensure proper cleanup of the widget
            logger.debug(f"Removed example tab at index {index}.")

            # Renumber subsequent tabs
            for i in range(index, self.examples_tab_widget.count()):
                self.examples_tab_widget.setTabText(i, f"Example {i+1}")

            self._mark_unsaved()  # Mark changes since we removed a tab
        else:
            logger.warning(f"Attempted to remove example tab at invalid index {index}.")
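The save-path example handling in `_save_settings` above (parse each example tab, skip empties, reject the save on invalid JSON) reduces to a pure function. The sketch below is illustrative and not part of the widget; `validate_examples` is a hypothetical name.

```python
import json

def validate_examples(example_texts):
    """Return (parsed_examples, errors), mirroring _save_settings' example handling."""
    parsed, errors = [], []
    for i, text in enumerate(example_texts):
        text = text.strip()
        if not text:          # empty tabs are skipped silently
            continue
        try:
            parsed.append(json.loads(text))
        except json.JSONDecodeError as e:
            # Collect the tab name and error; the widget shows these in a QMessageBox
            errors.append((f"Example {i+1}", str(e)))
    return parsed, errors

parsed, errors = validate_examples(['{"input": ["a.png"]}', "", "not json"])
print(len(parsed), len(errors))  # 1 1
```

In the widget, any entry in `errors` aborts the save entirely, so a half-valid settings file is never written.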
@ -1,4 +1,5 @@
|
|||||||
import os
|
import os
|
||||||
|
import json # Added for direct config loading
|
||||||
import logging
|
import logging
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
|
|
||||||
@ -8,18 +9,23 @@ from PySide6.QtCore import QObject, Signal, QThread, Slot, QTimer
|
|||||||
# Assuming these might be needed based on MainWindow's usage
|
# Assuming these might be needed based on MainWindow's usage
|
||||||
try:
|
try:
|
||||||
# Removed load_base_config import
|
# Removed load_base_config import
|
||||||
from configuration import Configuration, ConfigurationError
|
# Removed Configuration import as we load manually now
|
||||||
|
from configuration import ConfigurationError # Keep error class
|
||||||
from .llm_prediction_handler import LLMPredictionHandler # Backend handler
|
from .llm_prediction_handler import LLMPredictionHandler # Backend handler
|
||||||
from rule_structure import SourceRule # For signal emission type hint
|
from rule_structure import SourceRule # For signal emission type hint
|
||||||
except ImportError as e:
|
except ImportError as e:
|
||||||
logging.getLogger(__name__).critical(f"Failed to import backend modules for LLMInteractionHandler: {e}")
|
logging.getLogger(__name__).critical(f"Failed to import backend modules for LLMInteractionHandler: {e}")
|
||||||
LLMPredictionHandler = None
|
LLMPredictionHandler = None
|
||||||
load_base_config = None
|
# load_base_config = None # Removed
|
||||||
ConfigurationError = Exception
|
ConfigurationError = Exception
|
||||||
SourceRule = None # Define as None if import fails
|
SourceRule = None # Define as None if import fails
|
||||||
Configuration = None # Define as None if import fails
|
# Configuration = None # Removed
|
||||||
|
|
||||||
log = logging.getLogger(__name__)
|
log = logging.getLogger(__name__)
|
||||||
|
# Define config file paths relative to this handler's location
|
||||||
|
CONFIG_DIR = Path(__file__).parent.parent / "config"
|
||||||
|
APP_SETTINGS_PATH = CONFIG_DIR / "app_settings.json"
|
||||||
|
LLM_SETTINGS_PATH = CONFIG_DIR / "llm_settings.json"
|
||||||
|
|
||||||
class LLMInteractionHandler(QObject):
|
class LLMInteractionHandler(QObject):
|
||||||
"""
|
"""
|
||||||
@ -55,6 +61,22 @@ class LLMInteractionHandler(QObject):
         log.debug(f"LLM Handler processing state changed to: {processing}")
         self.llm_processing_state_changed.emit(processing)

+    def force_reset_state(self):
+        """Forces the processing state to False. Use with caution."""
+        log.warning("Forcing LLMInteractionHandler state reset.")
+        if self.llm_prediction_thread and self.llm_prediction_thread.isRunning():
+            log.warning("Force reset called while thread is running. Attempting to stop thread.")
+            # Attempt graceful shutdown first
+            self.llm_prediction_thread.quit()
+            if not self.llm_prediction_thread.wait(500): # Wait 0.5 sec
+                log.warning("LLM thread did not quit gracefully after force reset. Terminating.")
+                self.llm_prediction_thread.terminate()
+                self.llm_prediction_thread.wait() # Wait after terminate
+        self.llm_prediction_thread = None
+        self.llm_prediction_handler = None
+        self._set_processing_state(False)
+        # Do NOT clear the queue here, let the user decide via Clear Queue button
+
     @Slot(str, list)
     def queue_llm_request(self, input_path: str, file_list: list | None):
         """Adds a request to the LLM processing queue."""
@ -75,6 +97,7 @@ class LLMInteractionHandler(QObject):
     def queue_llm_requests_batch(self, requests: list[tuple[str, list | None]]):
         """Adds multiple requests to the LLM processing queue."""
         added_count = 0
+        log.debug(f"Queueing batch. Current queue content: {self.llm_processing_queue}") # ADDED DEBUG LOG
         for input_path, file_list in requests:
             is_in_queue = any(item[0] == input_path for item in self.llm_processing_queue)
             if not is_in_queue:
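The de-duplication check kept as context in this hunk (`any(item[0] == input_path ...)`) can be exercised outside Qt. A minimal sketch, using a plain list in place of the handler's `llm_processing_queue` and a hypothetical `queue_batch` helper; items are `(input_path, file_list)` tuples keyed on `input_path`, as in the diff:

```python
# Stand-in for LLMInteractionHandler.queue_llm_requests_batch: append only
# requests whose input_path is not already queued, and count the additions.
def queue_batch(queue, requests):
    added_count = 0
    for input_path, file_list in requests:
        # Same membership test as the handler: compare on the path element only
        is_in_queue = any(item[0] == input_path for item in queue)
        if not is_in_queue:
            queue.append((input_path, file_list))
            added_count += 1
    return added_count

queue = [("/assets/a.zip", ["a.png"])]
added = queue_batch(queue, [("/assets/a.zip", None), ("/assets/b.zip", ["b.png"])])
# only b.zip is new; a.zip is skipped as a duplicate
```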
@ -99,10 +122,10 @@ class LLMInteractionHandler(QObject):
         self.llm_prediction_thread = None
         self.llm_prediction_handler = None
         # --- Process next item now that the previous thread is fully finished ---
-        log.debug("Previous LLM thread finished. Triggering processing for next item by calling _process_next_llm_item...")
-        self._set_processing_state(False) # Mark processing as finished *before* trying next item
-        # Use QTimer.singleShot to yield control briefly before starting next item
-        QTimer.singleShot(0, self._process_next_llm_item)
+        log.debug("Previous LLM thread finished. Setting processing state to False.")
+        self._set_processing_state(False) # Mark processing as finished
+        # The next item will be processed when _handle_llm_result or _handle_llm_error
+        # calls _process_next_llm_item after popping the completed item.
         log.debug("<-- Exiting LLMInteractionHandler._reset_llm_thread_references")

@ -140,64 +163,143 @@ class LLMInteractionHandler(QObject):
             self.llm_prediction_error.emit(input_path_str, error_msg)
             return

-        # --- Get Configuration Object ---
-        if not hasattr(self.main_window, 'config') or not isinstance(self.main_window.config, Configuration):
-            error_msg = "LLM Error: Main window does not have a valid Configuration object."
-            log.critical(error_msg)
-            self.llm_status_update.emit("LLM Error: Cannot access application configuration.")
-            self.llm_prediction_error.emit(input_path_str, error_msg)
-            return
+        # --- Load Required Settings Directly ---
+        llm_settings = {}
+        try:
+            log.debug(f"Loading LLM settings from: {LLM_SETTINGS_PATH}")
+            with open(LLM_SETTINGS_PATH, 'r') as f:
+                llm_data = json.load(f)
+            # Extract required fields with defaults
+            llm_settings['endpoint_url'] = llm_data.get('llm_endpoint_url')
+            llm_settings['api_key'] = llm_data.get('llm_api_key') # Can be None
+            llm_settings['model_name'] = llm_data.get('llm_model_name', 'local-model')
+            llm_settings['temperature'] = llm_data.get('llm_temperature', 0.5)
+            llm_settings['request_timeout'] = llm_data.get('llm_request_timeout', 120)
+            llm_settings['predictor_prompt'] = llm_data.get('llm_predictor_prompt', '')
+            llm_settings['examples'] = llm_data.get('llm_examples', [])
+
+            log.debug(f"Loading App settings from: {APP_SETTINGS_PATH}")
+            with open(APP_SETTINGS_PATH, 'r') as f:
+                app_data = json.load(f)
+            # Extract required fields
+            llm_settings['asset_type_definitions'] = app_data.get('ASSET_TYPE_DEFINITIONS', {})
+            llm_settings['file_type_definitions'] = app_data.get('FILE_TYPE_DEFINITIONS', {})
+
+            # Validate essential settings
+            if not llm_settings['endpoint_url']:
+                raise ValueError("LLM endpoint URL is missing in llm_settings.json")
+            if not llm_settings['predictor_prompt']:
+                raise ValueError("LLM predictor prompt is missing in llm_settings.json")
+
+            log.debug("LLM and App settings loaded successfully for LLMInteractionHandler.")
+
+        except FileNotFoundError as e:
+            error_msg = f"LLM Error: Configuration file not found: {e.filename}"
+            log.critical(error_msg)
+            self.llm_status_update.emit("LLM Error: Cannot load configuration file.")
+            self.llm_prediction_error.emit(input_path_str, error_msg)
+            return
+        except json.JSONDecodeError as e:
+            error_msg = f"LLM Error: Failed to parse configuration file: {e}"
+            log.critical(error_msg)
+            self.llm_status_update.emit("LLM Error: Cannot parse configuration file.")
+            self.llm_prediction_error.emit(input_path_str, error_msg)
+            return
+        except ValueError as e: # Catch validation errors
+            error_msg = f"LLM Error: Invalid configuration - {e}"
+            log.critical(error_msg)
+            self.llm_status_update.emit("LLM Error: Invalid configuration.")
+            self.llm_prediction_error.emit(input_path_str, error_msg)
+            return
+        except Exception as e: # Catch other potential errors
+            error_msg = f"LLM Error: Unexpected error loading configuration: {e}"
+            log.critical(error_msg, exc_info=True)
+            self.llm_status_update.emit("LLM Error: Cannot load application configuration.")
+            self.llm_prediction_error.emit(input_path_str, error_msg)
+            return

-        config = self.main_window.config # Get the config object
-
-        # --- Check if Handler Class is Available ---
-        if LLMPredictionHandler is None:
-            log.critical("LLMPredictionHandler class not available.")
-            self.llm_status_update.emit("LLM Error: Prediction handler component missing.")
-            self.llm_prediction_error.emit(input_path_str, "LLMPredictionHandler class not available.")
-            return
-
-        # --- Clean up previous thread/handler if necessary ---
-        if self.llm_prediction_thread and self.llm_prediction_thread.isRunning():
-            log.warning("Warning: Previous LLM prediction thread still running when trying to start new one. Attempting cleanup.")
-            if self.llm_prediction_handler:
-                if hasattr(self.llm_prediction_handler, 'cancel'):
-                    self.llm_prediction_handler.cancel()
-            self.llm_prediction_thread.quit()
-            if not self.llm_prediction_thread.wait(1000): # Wait 1 sec
-                log.warning("LLM thread did not quit gracefully. Forcing termination.")
-                self.llm_prediction_thread.terminate()
-                self.llm_prediction_thread.wait() # Wait after terminate
-            self.llm_prediction_thread = None
-            self.llm_prediction_handler = None
-
-        log.info(f"Starting LLM prediction thread for source: {input_path_str} with {len(file_list)} files.")
-        self.llm_status_update.emit(f"Starting LLM interpretation for {input_path_obj.name}...")
-
-        # --- Create Thread and Handler ---
-        self.llm_prediction_thread = QThread(self) # Parent thread to self
-        # Pass the Configuration object directly
-        self.llm_prediction_handler = LLMPredictionHandler(input_path_str, file_list, config)
-        self.llm_prediction_handler.moveToThread(self.llm_prediction_thread)
-
-        # Connect signals from handler to *internal* slots or directly emit signals
-        self.llm_prediction_handler.prediction_ready.connect(self._handle_llm_result)
-        self.llm_prediction_handler.prediction_error.connect(self._handle_llm_error)
-        self.llm_prediction_handler.status_update.connect(self.llm_status_update) # Pass status through
-
-        # Connect thread signals
-        self.llm_prediction_thread.started.connect(self.llm_prediction_handler.run)
-        # Clean up thread and handler when finished
-        self.llm_prediction_thread.finished.connect(self._reset_llm_thread_references)
-        self.llm_prediction_thread.finished.connect(self.llm_prediction_handler.deleteLater)
-        self.llm_prediction_thread.finished.connect(self.llm_prediction_thread.deleteLater)
-        # Also ensure thread quits when handler signals completion/error
-        self.llm_prediction_handler.prediction_ready.connect(self.llm_prediction_thread.quit)
-        self.llm_prediction_handler.prediction_error.connect(self.llm_prediction_thread.quit)
-
-        self.llm_prediction_thread.start()
-        log.debug(f"LLM prediction thread started for {input_path_str}.")
+        # --- Wrap thread/handler setup and start in try...except ---
+        try:
+            # --- Check if Handler Class is Available ---
+            if LLMPredictionHandler is None:
+                # Raise ValueError to be caught below
+                raise ValueError("LLMPredictionHandler class not available.")
+
+            # --- Clean up previous thread/handler if necessary ---
+            # (Keep this cleanup logic as it handles potential stale threads)
+            if self.llm_prediction_thread and self.llm_prediction_thread.isRunning():
+                log.warning("Warning: Previous LLM prediction thread still running when trying to start new one. Attempting cleanup.")
+                if self.llm_prediction_handler:
+                    if hasattr(self.llm_prediction_handler, 'cancel'):
+                        self.llm_prediction_handler.cancel()
+                self.llm_prediction_thread.quit()
+                if not self.llm_prediction_thread.wait(1000): # Wait 1 sec
+                    log.warning("LLM thread did not quit gracefully. Forcing termination.")
+                    self.llm_prediction_thread.terminate()
+                    self.llm_prediction_thread.wait() # Wait after terminate
+                self.llm_prediction_thread = None
+                self.llm_prediction_handler = None
+
+            log.info(f"Starting LLM prediction thread for source: {input_path_str} with {len(file_list)} files.")
+            self.llm_status_update.emit(f"Starting LLM interpretation for {input_path_obj.name}...")
+
+            # --- Create Thread and Handler ---
+            self.llm_prediction_thread = QThread(self) # Parent thread to self
+            # Pass the loaded settings dictionary
+            self.llm_prediction_handler = LLMPredictionHandler(input_path_str, file_list, llm_settings)
+            self.llm_prediction_handler.moveToThread(self.llm_prediction_thread)
+
+            # Connect signals from handler to *internal* slots or directly emit signals
+            self.llm_prediction_handler.prediction_ready.connect(self._handle_llm_result)
+            self.llm_prediction_handler.prediction_error.connect(self._handle_llm_error)
+            self.llm_prediction_handler.status_update.connect(self.llm_status_update) # Pass status through
+
+            # Connect thread signals
+            self.llm_prediction_thread.started.connect(self.llm_prediction_handler.run)
+            # Clean up thread and handler when finished
+            self.llm_prediction_thread.finished.connect(self._reset_llm_thread_references)
+            self.llm_prediction_thread.finished.connect(self.llm_prediction_handler.deleteLater)
+            self.llm_prediction_thread.finished.connect(self.llm_prediction_thread.deleteLater)
+            # Also ensure thread quits when handler signals completion/error
+            self.llm_prediction_handler.prediction_ready.connect(self.llm_prediction_thread.quit)
+            self.llm_prediction_handler.prediction_error.connect(self.llm_prediction_thread.quit)
+
+            # TODO: Add a logging.debug statement at the very beginning of LLMPredictionHandler.run()
+            # to confirm if the method is being reached. Example:
+            # log.debug(f"--> Entered LLMPredictionHandler.run() for {self.input_path}")
+
+            self.llm_prediction_thread.start()
+            log.debug(f"LLM prediction thread start() called for {input_path_str}. Is running: {self.llm_prediction_thread.isRunning()}") # ADDED DEBUG LOG
+            # Log success *after* start() is called successfully
+            log.debug(f"Successfully initiated LLM prediction thread for {input_path_str}.") # MOVED/REWORDED LOG
+
+        except Exception as e:
+            # --- Handle errors during setup/start ---
+            log.exception(f"Critical error during LLM thread setup/start for {input_path_str}: {e}")
+            error_msg = f"Error initializing LLM task for {input_path_obj.name}: {e}"
+            self.llm_status_update.emit(error_msg)
+            self.llm_prediction_error.emit(input_path_str, error_msg) # Signal the error
+
+            # --- Crucially, reset processing state if setup fails ---
+            log.warning("Resetting processing state due to thread setup/start error.")
+            self._set_processing_state(False)
+
+            # Clean up potentially partially created objects
+            if self.llm_prediction_handler:
+                self.llm_prediction_handler.deleteLater()
+                self.llm_prediction_handler = None
+            if self.llm_prediction_thread:
+                if self.llm_prediction_thread.isRunning():
+                    self.llm_prediction_thread.quit()
+                    self.llm_prediction_thread.wait(500)
+                    self.llm_prediction_thread.terminate() # Force if needed
+                    self.llm_prediction_thread.wait()
+                self.llm_prediction_thread.deleteLater()
+                self.llm_prediction_thread = None
+
+            # Do NOT automatically try the next item here, as the error might be persistent.
+            # Let the error signal handle popping the item and trying the next one.
+            # The error signal (_handle_llm_error) will pop the item and call _process_next_llm_item.

     def is_processing(self) -> bool:
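The settings-loading part of this hunk can be sketched standalone. A minimal version under stated assumptions (the key names match the diff; `load_llm_settings` is a hypothetical helper, and the file contents below are illustrative, written to a temp directory rather than the real `config/` paths):

```python
import json
import tempfile
from pathlib import Path

# Sketch of the loading/validation done in _start_llm_prediction: read
# llm_settings.json and app_settings.json, apply defaults, fail fast on
# the two settings the handler cannot run without.
def load_llm_settings(llm_path: Path, app_path: Path) -> dict:
    settings = {}
    llm_data = json.loads(llm_path.read_text())
    settings['endpoint_url'] = llm_data.get('llm_endpoint_url')
    settings['api_key'] = llm_data.get('llm_api_key')  # may be None
    settings['model_name'] = llm_data.get('llm_model_name', 'local-model')
    settings['temperature'] = llm_data.get('llm_temperature', 0.5)
    settings['request_timeout'] = llm_data.get('llm_request_timeout', 120)
    settings['predictor_prompt'] = llm_data.get('llm_predictor_prompt', '')
    settings['examples'] = llm_data.get('llm_examples', [])

    app_data = json.loads(app_path.read_text())
    settings['asset_type_definitions'] = app_data.get('ASSET_TYPE_DEFINITIONS', {})
    settings['file_type_definitions'] = app_data.get('FILE_TYPE_DEFINITIONS', {})

    if not settings['endpoint_url']:
        raise ValueError("LLM endpoint URL is missing in llm_settings.json")
    if not settings['predictor_prompt']:
        raise ValueError("LLM predictor prompt is missing in llm_settings.json")
    return settings

tmp = Path(tempfile.mkdtemp())
(tmp / "llm_settings.json").write_text(json.dumps(
    {"llm_endpoint_url": "http://localhost:1234/v1/chat/completions",
     "llm_predictor_prompt": "Classify these files:"}))
(tmp / "app_settings.json").write_text(json.dumps(
    {"ASSET_TYPE_DEFINITIONS": {"texture_set": {}}}))
settings = load_llm_settings(tmp / "llm_settings.json", tmp / "app_settings.json")
```

Missing optional keys fall back to the same defaults the handler uses (`temperature` 0.5, `model_name` "local-model", `request_timeout` 120), while a missing endpoint or prompt raises the same `ValueError` the hunk's validation block raises.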
@ -254,10 +356,11 @@ class LLMInteractionHandler(QObject):
         try:
             # Pass the potentially None file_list. _start_llm_prediction handles extraction if needed.
             self._start_llm_prediction(next_dir, file_list=file_list)
-            # --- Pop item *after* successfully starting prediction ---
-            self.llm_processing_queue.pop(0)
-            log.debug(f"Successfully started LLM prediction for {next_dir} and removed from queue.")
+            # --- DO NOT pop item here. Item is popped in _handle_llm_result or _handle_llm_error ---
+            # Log message moved into the try block of _start_llm_prediction
+            # log.debug(f"Successfully started LLM prediction thread for {next_dir}. Item remains in queue until finished.")
         except Exception as e:
+            # This block now catches errors from _start_llm_prediction itself
             log.exception(f"Error occurred *during* _start_llm_prediction call for {next_dir}: {e}")
             error_msg = f"Error starting LLM for {os.path.basename(next_dir)}: {e}"
             self.llm_status_update.emit(error_msg)
@ -277,19 +380,37 @@ class LLMInteractionHandler(QObject):
     # --- Internal Slots to Handle Results/Errors from LLMPredictionHandler ---
     @Slot(str, list)
     def _handle_llm_result(self, input_path: str, source_rules: list):
-        """Internal slot to receive results and emit the public signal."""
-        log.debug(f"LLM Handler received result for {input_path}. Emitting llm_prediction_ready.")
+        """Internal slot to receive results, pop item, and emit the public signal."""
+        log.debug(f"LLM Handler received result for {input_path}. Removing from queue and emitting llm_prediction_ready.")
+        # Remove the completed item from the queue
+        try:
+            # Find and remove the item by input_path
+            self.llm_processing_queue = [item for item in self.llm_processing_queue if item[0] != input_path]
+            log.debug(f"Removed '{input_path}' from LLM queue after successful prediction. New size: {len(self.llm_processing_queue)}")
+        except Exception as e:
+            log.error(f"Error removing '{input_path}' from LLM queue after success: {e}")
+
         self.llm_prediction_ready.emit(input_path, source_rules)
-        # Note: The thread's finished signal calls _reset_llm_thread_references,
-        # which then calls _process_next_llm_item.
+        # Process the next item in the queue
+        QTimer.singleShot(0, self._process_next_llm_item)

     @Slot(str, str)
     def _handle_llm_error(self, input_path: str, error_message: str):
-        """Internal slot to receive errors and emit the public signal."""
-        log.debug(f"LLM Handler received error for {input_path}: {error_message}. Emitting llm_prediction_error.")
+        """Internal slot to receive errors, pop item, and emit the public signal."""
+        log.debug(f"LLM Handler received error for {input_path}: {error_message}. Removing from queue and emitting llm_prediction_error.")
+        # Remove the failed item from the queue
+        try:
+            # Find and remove the item by input_path
+            self.llm_processing_queue = [item for item in self.llm_processing_queue if item[0] != input_path]
+            log.debug(f"Removed '{input_path}' from LLM queue after error. New size: {len(self.llm_processing_queue)}")
+        except Exception as e:
+            log.error(f"Error removing '{input_path}' from LLM queue after error: {e}")
+
         self.llm_prediction_error.emit(input_path, error_message)
-        # Note: The thread's finished signal calls _reset_llm_thread_references,
-        # which then calls _process_next_llm_item.
+        # Process the next item in the queue
+        QTimer.singleShot(0, self._process_next_llm_item)

     def clear_queue(self):
         """Clears the LLM processing queue."""
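The queue-removal step both slots now share rebuilds the list without the finished item, keyed on `input_path`. A small sketch with a hypothetical `pop_finished` helper; one useful property of this comprehension-based removal is that it is idempotent, so a stray duplicate signal cannot raise:

```python
# Stand-in for the removal in _handle_llm_result / _handle_llm_error:
# keep every queued (input_path, file_list) tuple whose path differs.
def pop_finished(queue, input_path):
    return [item for item in queue if item[0] != input_path]

queue = [("/a", None), ("/b", ["x.png"]), ("/a", None)]
queue = pop_finished(queue, "/a")  # removes every entry for /a
```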
@ -14,8 +14,8 @@ from rule_structure import SourceRule, AssetRule, FileRule # Ensure AssetRule an

 # Assuming configuration loads app_settings.json
 # Adjust the import path if necessary
-# Import Configuration class
-from configuration import Configuration
+# Removed Configuration import
+# from configuration import Configuration
 # from configuration import load_base_config # No longer needed here
 from .base_prediction_handler import BasePredictionHandler # Import base class

@ -28,7 +28,8 @@ class LLMPredictionHandler(BasePredictionHandler):
     """
     # Signals (prediction_ready, prediction_error, status_update) are inherited

-    def __init__(self, input_source_identifier: str, file_list: list, config: Configuration, parent: QObject = None):
+    # Changed 'config: Configuration' to 'settings: dict'
+    def __init__(self, input_source_identifier: str, file_list: list, settings: dict, parent: QObject = None):
         """
         Initializes the LLM handler.

@ -36,15 +37,14 @@ class LLMPredictionHandler(BasePredictionHandler):
             input_source_identifier: The unique identifier for the input source (e.g., file path).
             file_list: A list of *relative* file paths extracted from the input source.
                        (LLM expects relative paths based on the prompt template).
-            config: The loaded Configuration object containing all settings.
+            settings: A dictionary containing required LLM and App settings.
             parent: The parent QObject.
         """
         super().__init__(input_source_identifier, parent)
         # input_source_identifier is stored by the base class as self.input_source_identifier
         self.file_list = file_list # Store the provided relative file list
-        self.config = config # Store the Configuration object
-        # Access LLM settings via self.config properties when needed
-        # e.g., self.config.llm_endpoint_url, self.config.llm_api_key
+        self.settings = settings # Store the settings dictionary
+        # Access LLM settings via self.settings['key']
         # _is_running and _is_cancelled are handled by the base class

     # The run() and cancel() slots are provided by the base class.
@ -64,6 +64,7 @@ class LLMPredictionHandler(BasePredictionHandler):
             ConnectionError: If the LLM API call fails due to network issues or timeouts.
             Exception: For other errors during prompt preparation, API call, or parsing.
         """
+        log.debug(f"--> Entered LLMPredictionHandler._perform_prediction() for {self.input_source_identifier}")
         log.info(f"Performing LLM prediction for: {self.input_source_identifier}")
         base_name = Path(self.input_source_identifier).name

@ -127,17 +128,25 @@ class LLMPredictionHandler(BasePredictionHandler):
         """
         Prepares the full prompt string to send to the LLM using stored settings.
         """
-        # Access settings via the Configuration object
-        prompt_template = self.config.llm_predictor_prompt
+        # Access settings via the settings dictionary
+        prompt_template = self.settings.get('predictor_prompt')
         if not prompt_template:
-            # Config object should handle defaults or raise error during init if critical prompt is missing
-            raise ValueError("LLM predictor prompt template content is empty or could not be loaded from configuration.")
+            raise ValueError("LLM predictor prompt template content is empty or missing in settings.")

-        # Access definitions and examples via Configuration object methods/properties
-        asset_defs = json.dumps(self.config.get_asset_type_definitions(), indent=4)
-        file_defs = json.dumps(self.config.get_file_type_definitions_with_examples(), indent=4)
-        examples = json.dumps(self.config.get_llm_examples(), indent=2)
+        # Access definitions and examples directly from the settings dictionary
+        asset_defs = json.dumps(self.settings.get('asset_type_definitions', {}), indent=4)
+        # Combine file type defs and examples (assuming structure from Configuration class)
+        file_type_defs_combined = {}
+        file_type_defs = self.settings.get('file_type_definitions', {})
+        for key, definition in file_type_defs.items():
+            # Add examples if they exist within the definition structure
+            file_type_defs_combined[key] = {
+                "description": definition.get("description", ""),
+                "examples": definition.get("examples", [])
+            }
+        file_defs = json.dumps(file_type_defs_combined, indent=4)
+        examples = json.dumps(self.settings.get('examples', []), indent=2)

         # Format *relative* file list as a single string with newlines
         file_list_str = "\n".join(relative_file_list)
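The file-type merge added in this hunk reduces each definition to a `{description, examples}` pair before serializing it into the prompt. A standalone sketch (the input data is illustrative, and `combine_file_type_defs` is a hypothetical helper mirroring the loop above):

```python
import json

# Stand-in for the prompt-preparation loop: keep only the description and
# examples of each file-type definition, dropping any other keys.
def combine_file_type_defs(file_type_defs: dict) -> str:
    combined = {}
    for key, definition in file_type_defs.items():
        combined[key] = {
            "description": definition.get("description", ""),
            "examples": definition.get("examples", []),
        }
    return json.dumps(combined, indent=4)

file_defs = combine_file_type_defs({
    "albedo": {"description": "Base color map",
               "examples": ["_albedo", "_basecolor"],
               "extra": 1},  # stripped from the prompt payload
    "normal": {}  # missing fields fall back to empty defaults
})
```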
@ -165,26 +174,26 @@ class LLMPredictionHandler(BasePredictionHandler):
             ValueError: If the endpoint URL is not configured or the response is invalid.
             requests.exceptions.RequestException: For other request-related errors.
         """
-        endpoint_url = self.config.llm_endpoint_url # Get from config
+        endpoint_url = self.settings.get('endpoint_url') # Get from settings dict
         if not endpoint_url:
             raise ValueError("LLM endpoint URL is not configured in settings.")

         headers = {
             "Content-Type": "application/json",
         }
-        api_key = self.config.llm_api_key # Get from config
+        api_key = self.settings.get('api_key') # Get from settings dict
         if api_key:
             headers["Authorization"] = f"Bearer {api_key}"

         # Construct payload based on OpenAI Chat Completions format
         payload = {
-            # Use configured model name, default to 'local-model'
-            "model": self.config.llm_model_name or "local-model", # Use config property, fallback
+            # Use configured model name from settings dict
+            "model": self.settings.get('model_name', 'local-model'),
             "messages": [{"role": "user", "content": prompt}],
-            # Use configured temperature, default to 0.5
-            "temperature": self.config.llm_temperature, # Use config property (has default)
+            # Use configured temperature from settings dict
+            "temperature": self.settings.get('temperature', 0.5),
             # Add max_tokens if needed/configurable:
-            # "max_tokens": self.config.llm_max_tokens, # Example if added to config
+            # "max_tokens": self.settings.get('max_tokens'), # Example if added to settings
             # Ensure the LLM is instructed to return JSON in the prompt itself
             # Some models/endpoints support a specific json mode:
             # "response_format": { "type": "json_object" } # If supported by endpoint
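The header and payload construction in this hunk can be sketched without any network code. A minimal stand-in (`build_request` is a hypothetical helper; it mirrors the settings-dict lookups and defaults shown above):

```python
# Build the OpenAI-style Chat Completions request from the settings dict:
# bearer auth only when an API key is present, with the same fallbacks
# ('local-model', temperature 0.5) as the diff.
def build_request(settings: dict, prompt: str):
    headers = {"Content-Type": "application/json"}
    api_key = settings.get('api_key')
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    payload = {
        "model": settings.get('model_name', 'local-model'),
        "messages": [{"role": "user", "content": prompt}],
        "temperature": settings.get('temperature', 0.5),
    }
    return headers, payload

headers, payload = build_request({"api_key": "sk-test", "temperature": 0.1},
                                 "Classify these files")
```

This shape is what local OpenAI-compatible endpoints (LM Studio, llama.cpp server, etc.) expect, which is presumably why the handler defaults the model name rather than requiring one.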
@ -203,7 +212,7 @@ class LLMPredictionHandler(BasePredictionHandler):
             endpoint_url,
             headers=headers,
             json=payload,
-            timeout=self.config.llm_request_timeout # Use config property (has default)
+            timeout=self.settings.get('request_timeout', 120) # Use settings dict (with default)
         )
         response.raise_for_status() # Raise HTTPError for bad responses (4xx or 5xx)

@ -318,8 +327,9 @@ class LLMPredictionHandler(BasePredictionHandler):

         # --- Prepare for Rule Creation ---
         source_rule = SourceRule(input_path=self.input_source_identifier)
-        valid_asset_types = self.config.get_asset_type_keys() # Use config method
-        valid_file_types = self.config.get_file_type_keys() # Use config method
+        # Get valid types directly from the settings dictionary
+        valid_asset_types = list(self.settings.get('asset_type_definitions', {}).keys())
+        valid_file_types = list(self.settings.get('file_type_definitions', {}).keys())
         asset_rules_map: Dict[str, AssetRule] = {} # Maps group_name to AssetRule

         # --- Process Individual Files and Build Rules ---
@ -11,7 +11,7 @@ log.info(f"sys.path: {sys.path}")

 from PySide6.QtWidgets import (
     QApplication, QMainWindow, QWidget, QVBoxLayout, QHBoxLayout, QSplitter, QTableView, # Added QSplitter, QTableView
-    QPushButton, QComboBox, QTableWidget, QTableWidgetItem, QHeaderView,
+    QPushButton, QComboBox, QTableWidget, QTableWidgetItem, QHeaderView, QStackedWidget, # Added QStackedWidget
     QProgressBar, QLabel, QFrame, QCheckBox, QSpinBox, QListWidget, QTextEdit, # Added QListWidget, QTextEdit
     QLineEdit, QMessageBox, QFileDialog, QInputDialog, QListWidgetItem, QTabWidget, # Added more widgets
     QFormLayout, QGroupBox, QAbstractItemView, QSizePolicy, # Added more layout/widget items
@ -21,9 +21,10 @@ from PySide6.QtCore import Qt, QThread, Slot, Signal, QObject, QModelIndex, QIte
 from PySide6.QtGui import QColor, QAction, QPalette, QClipboard # Add QColor import, QAction, QPalette, QClipboard

 # --- Local GUI Imports ---
-from .preset_editor_widget import PresetEditorWidget # Import the new widget
-from .log_console_widget import LogConsoleWidget # Import the log console widget
-from .main_panel_widget import MainPanelWidget # Import the new main panel widget
+from .preset_editor_widget import PresetEditorWidget
+from .llm_editor_widget import LLMEditorWidget # Import the new LLM editor
+from .log_console_widget import LogConsoleWidget
+from .main_panel_widget import MainPanelWidget

 # --- Backend Imports for Data Structures ---
 from rule_structure import SourceRule, AssetRule, FileRule # Import Rule Structures
@@ -158,13 +159,30 @@ class MainWindow(QMainWindow):
         self.restructure_handler = AssetRestructureHandler(self.unified_model, self) # Instantiate the restructure handler

         # --- Create Panels ---
-        self.preset_editor_widget = PresetEditorWidget() # Instantiate the preset editor
+        self.preset_editor_widget = PresetEditorWidget()
+        self.llm_editor_widget = LLMEditorWidget() # Instantiate the LLM editor
         # Instantiate MainPanelWidget, passing the model and self (MainWindow) for context
         self.main_panel_widget = MainPanelWidget(self.unified_model, self)
-        self.log_console = LogConsoleWidget(self) # Instantiate the log console
+        self.log_console = LogConsoleWidget(self)

-        self.splitter.addWidget(self.preset_editor_widget) # Add the preset editor
-        self.splitter.addWidget(self.main_panel_widget) # Add the new main panel widget
+        # --- Create Left Pane with Static Selector and Stacked Editor ---
+        self.left_pane_widget = QWidget()
+        left_pane_layout = QVBoxLayout(self.left_pane_widget)
+        left_pane_layout.setContentsMargins(0, 0, 0, 0)
+        left_pane_layout.setSpacing(0) # No space between selector and stack
+
+        # Add the selector part from PresetEditorWidget
+        left_pane_layout.addWidget(self.preset_editor_widget.selector_container)
+
+        # Create the stacked widget for swappable editors
+        self.editor_stack = QStackedWidget()
+        self.editor_stack.addWidget(self.preset_editor_widget.json_editor_container) # Page 0: Preset JSON Editor
+        self.editor_stack.addWidget(self.llm_editor_widget) # Page 1: LLM Editor
+        left_pane_layout.addWidget(self.editor_stack)
+
+        # Add the new left pane and the main panel to the splitter
+        self.splitter.addWidget(self.left_pane_widget)
+        self.splitter.addWidget(self.main_panel_widget)

         # --- Setup UI Elements ---
         # Main panel UI is handled internally by MainPanelWidget
@@ -198,6 +216,8 @@ class MainWindow(QMainWindow):

         # --- Connect Model Signals ---
         self.unified_model.targetAssetOverrideChanged.connect(self.restructure_handler.handle_target_asset_override)
+        # --- Connect LLM Editor Signals ---
+        self.llm_editor_widget.settings_saved.connect(self._on_llm_settings_saved) # Connect save signal

         # --- Adjust Splitter ---
         self.splitter.setSizes([400, 800]) # Initial size ratio
@@ -633,8 +653,8 @@ class MainWindow(QMainWindow):

         # Check if rule-based prediction is already running (optional, handler might manage internally)
         # Note: QueuedConnection on the signal helps, but check anyway for immediate feedback/logging
-        # TODO: Add is_running() method to RuleBasedPredictionHandler if needed for this check
-        if self.prediction_handler and hasattr(self.prediction_handler, 'is_running') and self.prediction_handler.is_running():
+        # NOTE: is_running is now a property on RuleBasedPredictionHandler, not a method
+        if self.prediction_handler and hasattr(self.prediction_handler, 'is_running') and self.prediction_handler.is_running:
             log.warning("Rule-based prediction is already running. Queuing re-interpretation request.")
             # Proceed, relying on QueuedConnection

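For context on the hunk above: `is_running` was converted from a method to a property, so the old call syntax would now raise. A minimal framework-free sketch (this handler class is a hypothetical stand-in, not the real `RuleBasedPredictionHandler`) of why the parentheses had to go:

```python
class RuleBasedPredictionHandler:
    """Hypothetical stand-in for the real handler."""
    def __init__(self):
        self._thread_active = False

    @property
    def is_running(self) -> bool:
        # Exposed as a property: read like an attribute, no call parentheses.
        return self._thread_active

handler = RuleBasedPredictionHandler()
handler._thread_active = True

# Property access works without parentheses:
assert handler.is_running is True

# Keeping the old call syntax now fails: the property already returned
# a plain bool, and a bool is not callable.
try:
    handler.is_running()
except TypeError as e:
    print(f"stale call syntax: {e}")
```

Note that `hasattr(handler, 'is_running')` stays true across the method-to-property change, which is why the guard in the diff above still works unchanged.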
@@ -1180,9 +1200,34 @@ class MainWindow(QMainWindow):
     # --- Slot for Preset Editor Selection Changes ---
     @Slot(str, str)
     def _on_preset_selection_changed(self, mode: str, preset_name: str | None):
-        """Handles changes in the preset editor selection (preset, LLM, placeholder)."""
+        """
+        Handles changes in the preset editor selection (preset, LLM, placeholder).
+        Switches between PresetEditorWidget and LLMEditorWidget.
+        """
         log.info(f"Preset selection changed: mode='{mode}', preset_name='{preset_name}'")

+        # --- Editor Stack Switching ---
+        if mode == "llm":
+            log.debug("Switching editor stack to LLM Editor Widget.")
+            # Force reset the LLM handler state in case it got stuck
+            if hasattr(self, 'llm_interaction_handler'):
+                self.llm_interaction_handler.force_reset_state()
+            self.editor_stack.setCurrentWidget(self.llm_editor_widget)
+            # Load settings *after* switching the stack
+            try:
+                self.llm_editor_widget.load_settings()
+            except Exception as e:
+                log.exception(f"Error loading LLM settings in _on_preset_selection_changed: {e}")
+                QMessageBox.critical(self, "LLM Settings Error", f"Failed to load LLM settings:\n{e}")
+        elif mode == "preset":
+            log.debug("Switching editor stack to Preset JSON Editor Widget.")
+            self.editor_stack.setCurrentWidget(self.preset_editor_widget.json_editor_container)
+        else:  # "placeholder"
+            log.debug("Switching editor stack to Preset JSON Editor Widget (placeholder selected).")
+            self.editor_stack.setCurrentWidget(self.preset_editor_widget.json_editor_container)
+            # The PresetEditorWidget's internal logic handles disabling/clearing the editor fields.
+        # --- End Editor Stack Switching ---

         # Update window title based on selection
         if mode == "preset" and preset_name:
             # Check for unsaved changes *within the editor widget*
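The branching in the slot above reduces to a small mode-to-page mapping; a framework-free sketch of that decision (the page names are illustrative labels, not the real widget objects):

```python
# Stand-ins for the two pages of the QStackedWidget (illustrative only).
LLM_EDITOR_PAGE = "llm_editor_widget"
JSON_EDITOR_PAGE = "json_editor_container"

def pick_editor_page(mode: str) -> str:
    """Mirrors the slot's branching: only 'llm' shows the LLM editor;
    'preset' and the 'placeholder' fallback both show the JSON editor."""
    if mode == "llm":
        return LLM_EDITOR_PAGE
    # "preset" and "placeholder" share the same page; the preset widget
    # itself disables/clears its fields in the placeholder case.
    return JSON_EDITOR_PAGE

assert pick_editor_page("llm") == LLM_EDITOR_PAGE
assert pick_editor_page("preset") == JSON_EDITOR_PAGE
assert pick_editor_page("placeholder") == JSON_EDITOR_PAGE
```

Keeping "preset" and "placeholder" on the same stacked page means the stack only ever has two states to manage, which is what lets the error handling concentrate on the LLM branch.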
@@ -1212,6 +1257,17 @@ class MainWindow(QMainWindow):
         # update_preview will now respect the mode set above
         self.update_preview()

+    @Slot()
+    def _on_llm_settings_saved(self):
+        """Slot called when LLM settings are saved successfully."""
+        log.info("LLM settings saved signal received by MainWindow.")
+        self.statusBar().showMessage("LLM settings saved successfully.", 3000)
+        # Optionally, trigger a reload of configuration if needed elsewhere,
+        # or update the LLMInteractionHandler if it caches settings.
+        # If the LLM handler reads the config directly, no action is needed here;
+        # if it caches, we might need: self.llm_interaction_handler.reload_settings()
+
     # --- Slot for LLM Processing State Changes from Handler ---
     @Slot(bool)
     def _on_llm_processing_state_changed(self, is_processing: bool):
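The `settings_saved.connect(self._on_llm_settings_saved)` wiring added earlier follows the standard signal/slot contract; a toy pure-Python sketch of that contract (this is an illustrative stand-in, not Qt's actual signal machinery):

```python
class MiniSignal:
    """Toy stand-in for a Qt signal: just enough to show the
    connect/emit contract used by settings_saved."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        # Each connected slot is invoked with the emitted arguments.
        for slot in self._slots:
            slot(*args)

messages = []

def on_llm_settings_saved():
    # Analogous to MainWindow._on_llm_settings_saved showing a status message.
    messages.append("LLM settings saved successfully.")

settings_saved = MiniSignal()
settings_saved.connect(on_llm_settings_saved)
settings_saved.emit()

assert messages == ["LLM settings saved successfully."]
```

The real Qt version adds thread-aware delivery (e.g. the QueuedConnection mentioned in the hunks above), but the connect/emit shape is the same.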
@@ -58,15 +58,19 @@ class PresetEditorWidget(QWidget):

     def _init_ui(self):
         """Initializes the UI elements for the preset editor."""
-        editor_layout = QVBoxLayout(self)
-        editor_layout.setContentsMargins(5, 5, 5, 5) # Reduce margins
+        main_layout = QVBoxLayout(self)
+        main_layout.setContentsMargins(0, 0, 0, 0) # Let containers manage margins
+        main_layout.setSpacing(0) # No space between selector and editor containers

         # Preset List and Controls
-        list_layout = QVBoxLayout()
-        list_layout.addWidget(QLabel("Presets:"))
+        self.selector_container = QWidget()
+        selector_layout = QVBoxLayout(self.selector_container)
+        selector_layout.setContentsMargins(5, 5, 5, 5) # Margins for selector area
+
+        selector_layout.addWidget(QLabel("Presets:"))
         self.editor_preset_list = QListWidget()
         self.editor_preset_list.currentItemChanged.connect(self._load_selected_preset_for_editing)
-        list_layout.addWidget(self.editor_preset_list)
+        selector_layout.addWidget(self.editor_preset_list)

         list_button_layout = QHBoxLayout()
         self.editor_new_button = QPushButton("New")
@@ -75,10 +79,14 @@ class PresetEditorWidget(QWidget):
         self.editor_delete_button.clicked.connect(self._delete_selected_preset)
         list_button_layout.addWidget(self.editor_new_button)
         list_button_layout.addWidget(self.editor_delete_button)
-        list_layout.addLayout(list_button_layout)
-        editor_layout.addLayout(list_layout, 1) # Allow list to stretch
+        selector_layout.addLayout(list_button_layout)
+        main_layout.addWidget(self.selector_container) # Add selector container to main layout

         # Editor Tabs
+        self.json_editor_container = QWidget()
+        editor_layout = QVBoxLayout(self.json_editor_container)
+        editor_layout.setContentsMargins(5, 0, 5, 5) # Margins for editor area (no top margin)
+
         self.editor_tab_widget = QTabWidget()
         self.editor_tab_general_naming = QWidget()
         self.editor_tab_mapping_rules = QWidget()
@@ -86,7 +94,7 @@ class PresetEditorWidget(QWidget):
         self.editor_tab_widget.addTab(self.editor_tab_mapping_rules, "Mapping & Rules")
         self._create_editor_general_tab()
         self._create_editor_mapping_tab()
-        editor_layout.addWidget(self.editor_tab_widget, 3) # Allow tabs to stretch more
+        editor_layout.addWidget(self.editor_tab_widget, 1) # Allow tabs to stretch

         # Save Buttons
         save_button_layout = QHBoxLayout()
@@ -100,6 +108,8 @@ class PresetEditorWidget(QWidget):
         save_button_layout.addWidget(self.editor_save_as_button)
         editor_layout.addLayout(save_button_layout)

+        main_layout.addWidget(self.json_editor_container) # Add editor container to main layout
+
     def _create_editor_general_tab(self):
         """Creates the widgets and layout for the 'General & Naming' editor tab."""
         layout = QVBoxLayout(self.editor_tab_general_naming)
@@ -347,9 +357,10 @@ class PresetEditorWidget(QWidget):

     def _set_editor_enabled(self, enabled: bool):
         """Enables or disables all editor widgets."""
-        self.editor_tab_widget.setEnabled(enabled)
+        # Target the container holding the tabs and save buttons
+        self.json_editor_container.setEnabled(enabled)
+        # Save button state still depends on unsaved changes, but only if container is enabled
         self.editor_save_button.setEnabled(enabled and self.editor_unsaved_changes)
-        self.editor_save_as_button.setEnabled(enabled) # Save As is always possible if editor is enabled

     def _clear_editor(self):
         """Clears the editor fields and resets state."""
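The `_set_editor_enabled` change above swaps per-widget toggling for one container-level toggle (in Qt, disabling a parent implicitly disables its children), with the Save button additionally gated on dirty state. A small framework-free sketch of that enable logic (function and key names are illustrative):

```python
def editor_button_states(enabled: bool, unsaved_changes: bool) -> dict:
    """Sketch of the enable logic in _set_editor_enabled: the container
    toggle covers everything, but Save is further gated on unsaved changes."""
    return {
        "json_editor_container": enabled,
        # Save only makes sense when the editor is usable AND dirty.
        "save_button": enabled and unsaved_changes,
    }

assert editor_button_states(True, True) == {"json_editor_container": True, "save_button": True}
assert editor_button_states(True, False)["save_button"] is False
assert editor_button_states(False, True)["save_button"] is False
```

Because a disabled container wins over any child's own enabled flag, the Save button can never appear clickable while the editor as a whole is disabled.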