LLM GUI updates and tests
@@ -44,16 +44,19 @@ The GUI has been refactored into several key components:
 The `MainWindow` class acts as the main application window and **coordinator** for the GUI. Its primary responsibilities now include:

-* Setting up the main window structure and menu bar.
+* Setting up the main window structure (using a `QSplitter`) and menu bar.
 * Instantiating and arranging the major GUI widgets:
-  * `MainPanelWidget` (containing core controls and the rule editor)
-  * `PresetEditorWidget`
+  * `PresetEditorWidget` (providing selector and JSON editor parts)
+  * `LLMEditorWidget` (for LLM settings)
+  * `MainPanelWidget` (containing the rule view and processing controls)
   * `LogConsoleWidget`
-* Connecting signals and slots between these widgets, the underlying models (`UnifiedViewModel`), and background handlers (`RuleBasedPredictionHandler`, `LLMPredictionHandler`, `LLMInteractionHandler`).
+* **Layout Management:** Placing the preset selector statically and using a `QStackedWidget` to switch between the `PresetEditorWidget`'s JSON editor and the `LLMEditorWidget`.
+* **Editor Switching:** Handling the `preset_selection_changed_signal` from `PresetEditorWidget` to switch the stacked editor view (`_on_preset_selection_changed` slot).
+* Connecting signals and slots between widgets, models (`UnifiedViewModel`), and handlers (`LLMInteractionHandler`, `AssetRestructureHandler`).
 * Managing the overall application state related to GUI interactions (e.g., enabling/disabling controls).
-* Handling top-level actions like loading sources (drag-and-drop), initiating predictions, and starting the processing task (via `main.ProcessingTask`).
-* Managing the `QThreadPool` for running background tasks (prediction).
-* Implementing slots like `_handle_prediction_completion` to update the model/view when prediction results are ready.
+* Handling top-level actions like loading sources (drag-and-drop), initiating predictions (`update_preview`), and starting the processing task (`_on_process_requested`).
+* Managing background prediction threads (Rule-Based via `QThread`, LLM via `LLMInteractionHandler`).
+* Implementing slots (`_on_rule_hierarchy_ready`, `_on_llm_prediction_ready_from_handler`, `_on_prediction_error`, `_handle_prediction_completion`) to update the model/view when prediction results/errors arrive.

 ### `MainPanelWidget` (`gui/main_panel_widget.py`)

@@ -69,7 +72,10 @@ This widget contains the central part of the GUI, including:
 This widget provides the interface for managing presets:

 * Loading, saving, and editing preset files (`Presets/*.json`).
-* Displaying preset rules and settings.
+* Displaying preset rules and settings in a tabbed JSON editor.
 * Providing the preset selection list (`QListWidget`), including the "LLM Interpretation" option.
+* **Refactored:** Exposes its selector (`selector_container`) and JSON editor (`json_editor_container`) as separate widgets for use by `MainWindow`.
+* Emits `preset_selection_changed_signal` when the selection changes.

 ### `LogConsoleWidget` (`gui/log_console_widget.py`)

@@ -79,6 +85,15 @@ This widget displays application logs within the GUI:
 * Integrates with Python's `logging` system via a custom `QtLogHandler`.
 * Can be shown/hidden via the main window's "View" menu.

+### `LLMEditorWidget` (`gui/llm_editor_widget.py`)
+
+A new widget dedicated to editing LLM settings:
+
+* Provides a tabbed interface ("Prompt Settings", "API Settings") to edit `config/llm_settings.json`.
+* Allows editing the main prompt, managing examples (add/delete/edit JSON), and configuring API details (URL, key, model, temperature, timeout).
+* Loads settings via `load_settings()` and saves them using `_save_settings()` (which calls `configuration.save_llm_config()`).
+* Placed within `MainWindow`'s `QStackedWidget`.
+
 ### `UnifiedViewModel` (`gui/unified_view_model.py`)

 The `UnifiedViewModel` implements a `QAbstractItemModel` for use with Qt's model-view architecture. It is specifically designed to:

@@ -142,11 +157,13 @@ An experimental predictor (inheriting from `BasePredictionHandler`) that uses a
 ### `LLMInteractionHandler` (`gui/llm_interaction_handler.py`)

-This class manages the specifics of communicating with the configured LLM API:
+This class now acts as the central manager for LLM prediction tasks:

-* Handles constructing prompts based on templates and input data.
-* Sends requests to the LLM endpoint.
-* Receives and potentially pre-processes the LLM's response before returning it to the `LLMPredictionHandler`.
+* **Manages the LLM prediction queue** and processes items sequentially.
+* **Loads LLM configuration** directly from `config/llm_settings.json` and `config/app_settings.json`.
+* **Instantiates and manages** the `LLMPredictionHandler` and its `QThread`.
+* **Handles LLM task state** (running/idle) and signals changes to the GUI.
+* Receives results/errors from `LLMPredictionHandler` and **emits signals** (`llm_prediction_ready`, `llm_prediction_error`, `llm_status_update`, `llm_processing_state_changed`) to `MainWindow`.
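The sequential queue-and-state behavior described above can be sketched without Qt. The names (`queue_llm_request`, `_process_next_llm_item`, `_is_processing`) follow the document; the synchronous callback standing in for `LLMPredictionHandler` and its thread is an illustrative assumption, not the actual implementation.

```python
from collections import deque

class LLMQueueSketch:
    """Minimal, Qt-free sketch of the LLMInteractionHandler queue logic."""

    def __init__(self, run_prediction):
        self._queue = deque()          # pending (input_path, file_list) items
        self._is_processing = False    # mirrors the handler's running/idle state
        self._run_prediction = run_prediction  # stand-in for LLMPredictionHandler
        self.results = []

    def queue_llm_request(self, input_path, file_list):
        """Add a single request and try to start processing."""
        self._queue.append((input_path, file_list))
        self._process_next_llm_item()

    def _process_next_llm_item(self):
        # Items are processed strictly one at a time, as the doc describes.
        if self._is_processing or not self._queue:
            return
        self._is_processing = True
        input_path, file_list = self._queue.popleft()
        try:
            result = self._run_prediction(input_path, file_list)
            self.results.append((input_path, result))
        finally:
            self._is_processing = False
            self._process_next_llm_item()  # continue with the next queued item
```

In the real handler the "run" step happens in a `QThread` and completion arrives via signals; here it is collapsed to a direct call to keep the sequencing visible.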
 ## Utility Modules (`utils/`)

@@ -6,11 +6,11 @@ This document provides technical details about the configuration system and the
 The tool utilizes a multi-tiered configuration system managed by the `configuration.py` module:

-1. **Application Settings (`config/app_settings.json`):** This JSON file defines the core global default settings, constants, and rules that apply generally across different asset sources (e.g., default output paths, standard image resolutions, map merge rules, output format rules, Blender paths, `FILE_TYPE_DEFINITIONS`, `ASSET_TYPE_DEFINITIONS`). **LLM-specific settings are now located in `config/llm_settings.json`.**
-2. **LLM Settings (`config/llm_settings.json`):** This JSON file contains settings specifically related to the LLM predictor, such as the API endpoint, model name, prompt template, and examples.
+1. **Application Settings (`config/app_settings.json`):** This JSON file defines the core global default settings, constants, and rules that apply generally across different asset sources (e.g., default output paths, standard image resolutions, map merge rules, output format rules, Blender paths, `FILE_TYPE_DEFINITIONS`, `ASSET_TYPE_DEFINITIONS`).
+2. **LLM Settings (`config/llm_settings.json`):** This JSON file contains settings specifically related to the LLM predictor, such as the API endpoint, model name, prompt template, and examples. These settings can be edited through the GUI using the `LLMEditorWidget`.
 3. **Preset Files (`Presets/*.json`):** These JSON files define supplier-specific rules and overrides. They contain patterns to interpret filenames, classify map types, handle variants, define naming conventions, and specify other source-specific behaviors.

-The `configuration.py` module is responsible for loading the base settings from `config/app_settings.json`, the LLM settings from `config/llm_settings.json`, merging the base settings with the rules from the selected preset file, and providing access to all settings via the `Configuration` class. The `load_base_config()` function is still available for accessing only the `app_settings.json` content directly (e.g., for the GUI editor). Preset values generally override core settings where applicable. Note that the old `config.py` file has been deleted.
+The `configuration.py` module contains the `Configuration` class (for loading/merging settings for processing) and standalone functions like `load_base_config()` (for accessing `app_settings.json` directly) and `save_llm_config()` / `save_base_config()` (for writing settings back to files). Note that the old `config.py` file has been deleted.
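The load-then-merge flow described above can be sketched in plain Python. The override rule ("preset values generally override core settings") is from the document; the shallow-merge helper and function names below are illustrative assumptions, as the real `Configuration` class may merge nested structures differently.

```python
import json
from pathlib import Path

def load_json(path: Path) -> dict:
    """Load one JSON settings file, returning {} if it does not exist."""
    try:
        return json.loads(path.read_text(encoding="utf-8"))
    except FileNotFoundError:
        return {}

def merge_settings(base: dict, preset: dict) -> dict:
    """Shallow merge: preset values override base settings where applicable.

    The original inputs are left untouched so each worker process can build
    its own isolated, merged view of the configuration.
    """
    merged = dict(base)
    merged.update(preset)
    return merged
```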
 ## Supplier Management (`config/suppliers.json`)

@@ -33,16 +33,29 @@ The `Configuration` class is central to the new configuration system. It is resp
 * **Regex Compilation (`_compile_regex_patterns`):** Compiles regex patterns defined in the merged configuration (from base settings and the preset) for performance. Compiled regex objects are stored as instance attributes (e.g., `self.compiled_map_keyword_regex`).
-* **LLM Settings Access:** The `Configuration` class provides direct property access (e.g., `config.llm_endpoint_url`, `config.llm_api_key`, `config.llm_model_name`, `config.llm_temperature`, `config.llm_request_timeout`, `config.llm_predictor_prompt`, `config.get_llm_examples()`) to allow components like the `LLMPredictionHandler` to easily access the necessary LLM configuration values loaded from `config/llm_settings.json`.

-An instance of `Configuration` is created within each worker process (`main.process_single_asset_wrapper`) to ensure that each concurrently processed asset uses the correct, isolated configuration based on the specified preset and the base application settings.
+An instance of `Configuration` is created within each worker process (`main.process_single_asset_wrapper`) to ensure that each concurrently processed asset uses the correct, isolated configuration based on the specified preset and the base application settings. The `LLMInteractionHandler` loads LLM settings directly, using helper functions or file access rather than the `Configuration` class.

-## GUI Configuration Editor (`gui/config_editor_dialog.py`)
+## GUI Configuration Editors
+
+The GUI provides dedicated editors for modifying configuration files:
+
+* **`ConfigEditorDialog` (`gui/config_editor_dialog.py`):** Edits the core `config/app_settings.json`.
+* **`LLMEditorWidget` (`gui/llm_editor_widget.py`):** Edits the LLM-specific `config/llm_settings.json`.
+
+### `ConfigEditorDialog` (`gui/config_editor_dialog.py`)

-The GUI includes a dedicated editor for modifying the `config/app_settings.json` file. This is implemented in `gui/config_editor_dialog.py`.

 * **Purpose:** Provides a user-friendly interface for viewing and editing the core application settings defined in `app_settings.json`.
 * **Implementation:** The dialog loads the JSON content of `app_settings.json`, presents it in a tabbed layout ("General", "Output & Naming", etc.) using standard GUI widgets mapped to the JSON structure, and saves the changes back to the file. It supports editing basic fields, tables for definitions (`FILE_TYPE_DEFINITIONS`, `ASSET_TYPE_DEFINITIONS`), and a list/detail view for merge rules (`MAP_MERGE_RULES`). The definitions tables include dynamic color editing features.
 * **Limitations:** Currently, editing complex fields like `IMAGE_RESOLUTIONS` or the full details of `MAP_MERGE_RULES` via the UI is not fully supported.
-* **Note:** Changes made through the GUI editor are written directly to `config/app_settings.json` but require an application restart to be loaded and applied by the `Configuration` class.
+* **Note:** Changes made through the `ConfigEditorDialog` are written directly to `config/app_settings.json` (using `save_base_config`) but require an application restart to be loaded and applied by the `Configuration` class during processing.
+
+### `LLMEditorWidget` (`gui/llm_editor_widget.py`)
+
+* **Purpose:** Provides a user-friendly interface for viewing and editing the LLM settings defined in `config/llm_settings.json`.
+* **Implementation:** Uses tabs for "Prompt Settings" and "API Settings". Allows editing the prompt, managing examples, and configuring API details.
+* **Persistence:** Saves changes directly to `config/llm_settings.json` using the `configuration.save_llm_config()` function. Changes are loaded by the `LLMInteractionHandler` the next time an LLM task is initiated.
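Because both editors write their JSON files back to disk, a save that fails midway must not corrupt the existing settings. One way to sketch a `save_llm_config()`-style writer (the atomic-replace approach and the function name `save_llm_config_sketch` are assumptions, not the project's actual implementation):

```python
import json
import os
import tempfile
from pathlib import Path

def save_llm_config_sketch(settings: dict, path: Path) -> None:
    """Write settings back to disk atomically.

    The JSON is first written to a temporary file in the same directory,
    then moved into place, so a failed save cannot truncate the existing
    config/llm_settings.json.
    """
    path.parent.mkdir(parents=True, exist_ok=True)
    fd, tmp_name = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as handle:
            json.dump(settings, handle, indent=2)
        os.replace(tmp_name, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp_name)  # clean up the partial temp file
        raise
```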
 ## Preset File Structure (`Presets/*.json`)

@@ -11,21 +11,34 @@ The GUI is built using `PySide6`, which provides Python bindings for the Qt fram
 The `MainWindow` class acts as the central **coordinator** for the GUI application. It is responsible for:

 * Setting up the main application window structure and menu bar.
-* Instantiating and arranging the major GUI widgets:
-  * `MainPanelWidget` (`gui/main_panel_widget.py`): Contains the core controls, preset selection, and the rule editor.
-  * `PresetEditorWidget` (`gui/preset_editor_widget.py`): Handles preset loading, saving, and editing.
+* **Layout:** Arranging the main GUI components using a `QSplitter`.
+  * **Left Pane:** Contains the preset selection controls (from `PresetEditorWidget`) permanently displayed at the top. Below this, a `QStackedWidget` switches between the preset JSON editor (also from `PresetEditorWidget`) and the `LLMEditorWidget`.
+  * **Right Pane:** Contains the `MainPanelWidget`.
+* Instantiating and managing the major GUI widgets:
+  * `PresetEditorWidget` (`gui/preset_editor_widget.py`): Provides the preset selector and the JSON editor parts.
+  * `LLMEditorWidget` (`gui/llm_editor_widget.py`): Provides the editor for LLM settings.
+  * `MainPanelWidget` (`gui/main_panel_widget.py`): Contains the rule hierarchy view and processing controls.
+  * `LogConsoleWidget` (`gui/log_console_widget.py`): Displays application logs.
+* Instantiating key models and handlers:
+  * `UnifiedViewModel` (`gui/unified_view_model.py`): The model for the rule hierarchy view.
+  * `LLMInteractionHandler` (`gui/llm_interaction_handler.py`): Manages communication with the LLM service.
+  * `AssetRestructureHandler` (`gui/asset_restructure_handler.py`): Handles rule restructuring.
 * Connecting signals and slots between these components to orchestrate the application flow.
+* **Editor Switching:** Handling the `preset_selection_changed_signal` from `PresetEditorWidget` in its `_on_preset_selection_changed` slot. This slot:
+  * Switches the `QStackedWidget` (`editor_stack`) to display either the `PresetEditorWidget`'s JSON editor or the `LLMEditorWidget` based on the selected mode ("preset", "llm", "placeholder").
+  * Calls `llm_editor_widget.load_settings()` when switching to LLM mode.
+  * Updates the window title.
+  * Triggers `update_preview()`.
 * Handling top-level user interactions like drag-and-drop for loading sources (`add_input_paths`). This method now handles the "placeholder" state (no preset selected) by scanning directories or inspecting archives (ZIP) and creating placeholder `SourceRule`/`AssetRule`/`FileRule` objects to immediately populate the `UnifiedViewModel` with the file structure.
-* Initiating predictions based on the selected preset mode (Rule-Based or LLM) when presets change or sources are added.
+* Initiating predictions based on the selected preset mode (Rule-Based or LLM) when presets change or sources are added (`update_preview`).
 * Starting the processing task (`_on_process_requested`): This slot now filters the `SourceRule` list obtained from the `UnifiedViewModel`, excluding sources where no asset has a `Target Asset` name assigned, before emitting the `start_backend_processing` signal. It also manages enabling/disabling controls.
-* Managing the `QThreadPool` for running background prediction tasks (`RuleBasedPredictionHandler`, `LLMPredictionHandler`).
+* Managing the background prediction threads (`RuleBasedPredictionHandler` via `QThread`, `LLMPredictionHandler` via `LLMInteractionHandler`).
 * Implementing slots to handle results from background tasks:
-  * `_handle_prediction_completion(source_id, source_rule_list)`: Receives results from either prediction handler via the `prediction_signal`. It calls `self.unified_view_model.update_rules_for_sources()` to update the view model, preserving user overrides where possible. For LLM predictions, it also triggers processing the next item in the queue.
-  * Slots to handle status updates from the LLM handler.
+  * `_on_rule_hierarchy_ready`: Handles results from `RuleBasedPredictionHandler`.
+  * `_on_llm_prediction_ready_from_handler`: Handles results from `LLMInteractionHandler`.
+  * `_on_prediction_error`: Handles errors from both prediction paths.
+  * `_handle_prediction_completion`: Centralized logic to track completion and update UI state after each prediction result or error.
+  * Slots to handle status and state changes from `LLMInteractionHandler`.
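The mode dispatch inside `_on_preset_selection_changed` can be sketched without Qt as a pure decision function. The three mode strings come from the document; the page names returned here, and the assumption that "placeholder" shows the JSON editor page, are illustrative only.

```python
def select_editor_page(mode: str) -> tuple:
    """Qt-free sketch of the _on_preset_selection_changed dispatch.

    Returns (page_to_show, reload_llm_settings). The page names stand in
    for the real QStackedWidget indices, which are not specified here.
    """
    if mode == "llm":
        # Switching to LLM mode also reloads settings via load_settings().
        return ("llm_editor", True)
    if mode in ("preset", "placeholder"):
        # Assumed: both show the preset JSON editor page; "placeholder"
        # simply means no preset is selected yet.
        return ("json_editor", False)
    raise ValueError(f"unknown selection mode: {mode!r}")
```

In the real slot, the returned decision would translate into `editor_stack.setCurrentWidget(...)`, a window-title update, and an `update_preview()` call.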
 ## Threading and Background Tasks

@@ -53,7 +66,26 @@ Communication between the `MainWindow` (main UI thread) and the background predi
 ## Preset Editor (`gui/preset_editor_widget.py`)

-The `PresetEditorWidget` provides a dedicated interface for managing presets. It handles loading, displaying, editing, and saving preset `.json` files. It communicates with the `MainWindow` (e.g., via signals) when a preset is loaded or saved.
+The `PresetEditorWidget` provides a dedicated interface for managing presets. It handles loading, displaying, editing, and saving preset `.json` files.
+
+* **Refactoring:** This widget has been refactored to expose its main components:
+  * `selector_container`: A `QWidget` containing the preset list (`QListWidget`) and New/Delete buttons. Used statically by `MainWindow`.
+  * `json_editor_container`: A `QWidget` containing the tabbed editor (`QTabWidget`) for preset JSON details and the Save/Save As buttons. Placed in `MainWindow`'s `QStackedWidget`.
+* **Functionality:** Still manages the logic for populating the preset list, loading/saving presets, handling unsaved changes, and providing the editor UI for preset details.
+* **Communication:** Emits `preset_selection_changed_signal(mode, preset_name)` when the user selects a preset, the LLM option, or the placeholder. This signal is crucial for `MainWindow` to switch the editor stack and trigger preview updates.
+
+## LLM Settings Editor (`gui/llm_editor_widget.py`)
+
+This new widget provides a dedicated interface for editing LLM-specific settings stored in `config/llm_settings.json`.
+
+* **Purpose:** Allows users to configure the LLM predictor's behavior without directly editing the JSON file.
+* **Structure:** Uses a `QTabWidget` with two tabs:
+  * **"Prompt Settings":** Contains a `QPlainTextEdit` for the main prompt and a nested `QTabWidget` for managing examples (add/delete/edit JSON in `QTextEdit` widgets).
+  * **"API Settings":** Contains fields (`QLineEdit`, `QDoubleSpinBox`, `QSpinBox`) for endpoint URL, API key, model name, temperature, and timeout.
+* **Functionality:**
+  * `load_settings()`: Reads `config/llm_settings.json` and populates the UI fields. Handles file-not-found and JSON errors. Called by `MainWindow` when switching to LLM mode.
+  * `_save_settings()`: Gathers data from the UI, validates example JSON, constructs the settings dictionary, and calls `configuration.save_llm_config()` to write back to the file. Emits the `settings_saved` signal on success.
+  * Manages the unsaved-changes state and enables/disables the "Save LLM Settings" button accordingly.
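The fields edited by these two widgets map onto `config/llm_settings.json`. A hypothetical example of the file's shape, using only key names mentioned in these docs (all values are illustrative, and the real example format may differ):

```json
{
  "llm_endpoint_url": "http://localhost:1234/v1/chat/completions",
  "llm_api_key": "",
  "llm_model_name": "local-model",
  "llm_temperature": 0.2,
  "llm_request_timeout": 120,
  "llm_predictor_prompt": "Classify the following files...\n{FILE_LIST}",
  "llm_predictor_examples": [
    {"input": ["wood_albedo.png", "wood_normal.png"], "output": {"...": "..."}}
  ]
}
```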
 ## Unified Hierarchical View

@@ -80,36 +112,44 @@ The core rule editing interface is built around a `QTreeView` managed within the
 graph TD
     subgraph MainWindow [MainWindow Coordinator]
         direction LR
-        MW_Input[User Input (Drag/Drop, Preset Select)] --> MW(MainWindow);
-        MW -- Initiates --> PredPool{QThreadPool};
-        MW -- Connects Signals --> VM(UnifiedViewModel);
-        MW -- Connects Signals --> ARH(AssetRestructureHandler);
-        MW -- Owns/Manages --> MPW(MainPanelWidget);
-        MW -- Owns/Manages --> PEW(PresetEditorWidget);
-        MW -- Owns/Manages --> LCW(LogConsoleWidget);
+        MW_Input[User Input (Drag/Drop)] --> MW(MainWindow);
+        MW -- Owns/Manages --> Splitter(QSplitter);
+        MW -- Owns/Manages --> LLMIH(LLMInteractionHandler);
+        MW -- Owns/Manages --> ARH(AssetRestructureHandler);
+        MW -- Owns/Manages --> VM(UnifiedViewModel);
+        MW -- Owns/Manages --> LCW(LogConsoleWidget);
+        MW -- Initiates --> PredPool{Prediction Threads};
+        MW -- Connects Signals --> VM;
+        MW -- Connects Signals --> ARH;
+        MW -- Connects Signals --> LLMIH;
+        MW -- Connects Signals --> PEW(PresetEditorWidget);
+        MW -- Connects Signals --> LLMEDW(LLMEditorWidget);
     end

-    subgraph MainPanel [MainPanelWidget]
+    subgraph LeftPane [Left Pane Widgets]
         direction TB
-        MPW_UI[UI Controls (Load, Predict, Process Btns)];
+        Splitter -- Adds Widget --> LPW(Left Pane Container);
+        LPW -- Contains --> PEW_Sel(PresetEditorWidget - Selector);
+        LPW -- Contains --> Stack(QStackedWidget);
+        Stack -- Contains --> PEW_Edit(PresetEditorWidget - JSON Editor);
+        Stack -- Contains --> LLMEDW;
+    end
+
+    subgraph RightPane [Right Pane Widgets]
+        direction TB
+        Splitter -- Adds Widget --> MPW(MainPanelWidget);
-        MPW -- Contains --> TV(QTreeView - Rule View);
+        MPW_UI[UI Controls (Process Btn, etc)];
         MPW_UI --> MPW;
+        MPW -- Contains --> REW(RuleEditorWidget);
+    end
+
+    subgraph RuleEditor [RuleEditorWidget]
+        direction TB
+        REW -- Contains --> TV(QTreeView - Rule View);
     end

     subgraph Prediction [Background Prediction]
         direction TB
         PredPool -- Runs --> RBP(RuleBasedPredictionHandler);
         PredPool -- Runs --> LLMP(LLMPredictionHandler);
-        LLMP -- Uses --> LLMIH;
-        RBP -- prediction_signal --> MW;
-        LLMP -- prediction_signal --> MW;
-        LLMP -- status_signal --> MW;
+        LLMIH -- Manages/Starts --> LLMP;
+        RBP -- prediction_ready/error/status --> MW;
+        LLMIH -- llm_prediction_ready/error/status --> MW;
     end

     subgraph ModelView [Model/View Components]
@@ -126,17 +166,24 @@ graph TD
     Del -- Get/Set Data --> VM;
     end

+    %% MainWindow Interactions
+    MW_Input -- Triggers --> MW;
+    PEW -- preset_selection_changed_signal --> MW;
+    LLMEDW -- settings_saved --> MW;
+    MPW -- process_requested/etc --> MW;
+    MW -- _on_preset_selection_changed --> Stack;
+    MW -- _on_preset_selection_changed --> LLMEDW;
+    MW -- _handle_prediction_completion --> VM;
+    MW -- Triggers Processing --> ProcTask(main.ProcessingTask);
+
     %% Connections between subgraphs
-    MPW --> MW;
-    PEW --> MW;
-    LCW --> MW;
+    PEW --> LPW; %% PresetEditorWidget parts are in Left Pane
+    LLMEDW --> Stack; %% LLMEditorWidget is in Stack
+    MPW --> Splitter; %% MainPanelWidget is in Right Pane
     VM --> MW;
     ARH --> MW;
+    LLMIH --> MW;
     REW --> MPW;
     LCW --> MW;
 ```

 ## Application Styling

@@ -16,7 +16,9 @@ The LLM Predictor is configured via settings in the dedicated `config/llm_settin
 - `llm_request_timeout`: The maximum time (in seconds) to wait for a response from the LLM API.
 - `llm_predictor_examples`: A list of example input/output pairs to include in the prompt for few-shot learning, helping the LLM understand the desired output format and classification logic.

-These settings are loaded by the `Configuration` class (from `configuration.py`) along with the core `app_settings.json` and the selected preset. The prompt structure is crucial for effective classification. It should clearly instruct the LLM on the task and the expected output format. Placeholders within the prompt template (e.g., `{FILE_LIST}`) are dynamically replaced with relevant data before the request is sent.
+**Editing:** These settings can be edited directly through the GUI using the **`LLMEditorWidget`** (`gui/llm_editor_widget.py`), which provides a user-friendly interface for modifying the prompt, examples, and API parameters. Changes are saved back to `config/llm_settings.json` via the `configuration.save_llm_config()` function.
+
+**Loading:** The `LLMInteractionHandler` now loads these settings directly from `config/llm_settings.json` and relevant parts of `config/app_settings.json` when it needs to start an `LLMPredictionHandler` task. It no longer relies on the main `Configuration` class for LLM-specific settings. The prompt structure remains crucial for effective classification. Placeholders within the prompt template (e.g., `{FILE_LIST}`) are dynamically replaced with relevant data before the request is sent.
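The placeholder substitution mentioned above can be sketched in a few lines. Only the `{FILE_LIST}` placeholder name comes from the document; the substitution mechanism shown here is an illustrative assumption.

```python
def build_prompt(template: str, file_list: list) -> str:
    """Fill the {FILE_LIST} placeholder in the prompt template.

    str.format() is deliberately avoided: a prompt template containing
    literal JSON examples is full of braces that format() would choke on,
    so only the known placeholder is replaced.
    """
    return template.replace("{FILE_LIST}", "\n".join(file_list))
```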
 ## Expected LLM Output Format (Refactored)

@@ -58,41 +60,48 @@ The LLM is now expected to return a JSON object containing two distinct parts. T
 - `proposed_asset_group_name`: A name suggested by the LLM to group this file with others belonging to the same conceptual asset. This is used internally by the parser.
 - **`asset_group_classifications`**: A dictionary mapping the `proposed_asset_group_name` values from the list above to a final `asset_type` (e.g., "PBR Material", "HDR Environment").

-## `LLMPredictionHandler` (Refactored Parsing)
+## `LLMInteractionHandler` (Refactored)

-The `gui/llm_prediction_handler.py` module contains the `LLMPredictionHandler` class (inheriting from `BasePredictionHandler`), which orchestrates the LLM prediction process. It runs in a background thread managed by the `MainWindow`'s `QThreadPool`.
+The `gui/llm_interaction_handler.py` module contains the `LLMInteractionHandler` class, which now acts as the central manager for LLM prediction tasks.

 Key Responsibilities & Methods:

-- **Initialization**: Takes the source identifier, file list, and the main `Configuration` object (which has loaded settings from `app_settings.json`, `llm_settings.json`, and the active preset).
-- **`run()`**: The main method executed by the thread pool. It prepares the prompt, calls the LLM, parses the response, and emits the result or error.
-- **Prompt Preparation (`_prepare_prompt`)**: Uses the `Configuration` object (`self.config`) to access the `llm_predictor_prompt`, `asset_type_definitions`, `file_type_definitions`, and `llm_examples` to build the final prompt string.
-- **API Call (`_call_llm`)**: Uses the `Configuration` object (`self.config`) to get the `llm_endpoint_url`, `llm_api_key`, `llm_model_name`, `llm_temperature`, and `llm_request_timeout` to make the API request.
-- **Parsing (`_parse_llm_response`)**: Parses the LLM's JSON response (using `self.config` again to get valid asset/file types for validation) and constructs the `SourceRule` hierarchy.
-- **`_parse_llm_response(response_text)`**: This method contains the **new parsing logic**:
-  1. **Sanitization**: Removes common non-JSON elements like comments (`//`, `/* */`) and markdown code fences (```json ... ```) from the raw `response_text` to increase the likelihood of successful JSON parsing.
-  2. **JSON Parsing**: Parses the sanitized string into a Python dictionary.
-  3. **Structure Validation**: Checks if the parsed dictionary contains the required top-level keys: `individual_file_analysis` (list) and `asset_group_classifications` (dict).
-  4. **Grouping**: Iterates through the `individual_file_analysis` list. For each file:
-     * Retrieves the `proposed_asset_group_name`.
-     * Uses the `asset_group_classifications` dictionary to find the corresponding final `asset_type` for that group.
-     * Creates or updates an `AssetRule` for the group name, assigning the determined `asset_type`.
-     * Creates a `FileRule` for the specific file, assigning its `classified_file_type` as the `item_type`.
-  5. **Hierarchy Construction**: Organizes the created `AssetRule` and `FileRule` objects into a single `SourceRule` object representing the entire source.
-  6. **Validation**: Ensures all files from the input list were accounted for in the LLM response.
+- **Queue Management:** Maintains a queue (`llm_processing_queue`) of pending prediction requests (input path, file list). Handles adding single (`queue_llm_request`) or batch (`queue_llm_requests_batch`) requests.
+- **State Management:** Tracks whether an LLM task is currently running (`_is_processing`) and emits `llm_processing_state_changed(bool)` to update the GUI (e.g., disable the preset editor). Includes `force_reset_state()` for recovery.
+- **Task Orchestration:** Processes the queue sequentially (`_process_next_llm_item`). For each item:
+  * Loads required settings directly from `config/llm_settings.json` and `config/app_settings.json`.
+  * Instantiates an `LLMPredictionHandler` in a new `QThread`.
+  * Passes the loaded settings dictionary to the `LLMPredictionHandler`.
+  * Connects signals from the handler (`prediction_ready`, `prediction_error`, `status_update`) to internal slots (`_handle_llm_result`, `_handle_llm_error`) or directly re-emits them (`llm_status_update`).
+  * Starts the thread.
+- **Result/Error Handling:** Internal slots (`_handle_llm_result`, `_handle_llm_error`) receive results/errors from the `LLMPredictionHandler`, remove the completed/failed item from the queue, emit the corresponding public signal (`llm_prediction_ready`, `llm_prediction_error`), and trigger processing of the next queue item.
+- **Communication:** Emits signals to `MainWindow`:
+  * `llm_prediction_ready(input_path, source_rule_list)`
+  * `llm_prediction_error(input_path, error_message)`
+  * `llm_status_update(status_message)`
+  * `llm_processing_state_changed(is_processing)`

-Signals:
+## `LLMPredictionHandler` (Refactored)

-- `prediction_signal(source_id, source_rule)`: Emitted when a prediction is successfully parsed and the `SourceRule` hierarchy is constructed. The `source_rule` argument contains the complete hierarchy.
-- `error_signal(source_id, error_message)`: Emitted if an error occurs during any stage (API call, sanitization, parsing, validation).
+The `gui/llm_prediction_handler.py` module contains the `LLMPredictionHandler` class (inheriting from `BasePredictionHandler`), which performs the actual LLM prediction for a *single* input source. It runs in a background thread managed by the `LLMInteractionHandler`.
+
+Key Responsibilities & Methods:
+
+- **Initialization**: Takes the source identifier, file list, and a **`settings` dictionary** (passed from `LLMInteractionHandler`) containing all necessary configuration (LLM endpoint, prompt, examples, API details, type definitions, etc.).
+- **`_perform_prediction()`**: Implements the core prediction logic:
+  * **Prompt Preparation (`_prepare_prompt`)**: Uses the passed `settings` dictionary to access the prompt template, type definitions, and examples to build the final prompt string.
+  * **API Call (`_call_llm`)**: Uses the passed `settings` dictionary to get the endpoint URL, API key, model name, temperature, and timeout to make the API request.
+  * **Parsing (`_parse_llm_response`)**: Parses the LLM's JSON response (using type definitions from the `settings` dictionary for validation) and constructs the `SourceRule` hierarchy based on the two-part format (`individual_file_analysis`, `asset_group_classifications`). Includes sanitization logic for comments and markdown fences.
+- **Signals (Inherited):** Emits `prediction_ready(input_path, source_rule_list)` or `prediction_error(input_path, error_message)` upon completion or failure, which are connected to the `LLMInteractionHandler`. Also emits `status_update(message)`.

 ## GUI Integration

 Integration remains largely the same at the `MainWindow` level:

-- The LLM predictor is selected via the preset dropdown or triggered explicitly.
-- `MainWindow` manages the `QThreadPool` and starts the `LLMPredictionHandler` task.
-- Slots in `MainWindow` connect to the `prediction_signal` and `error_signal` of the handler.
+- The LLM predictor mode is selected via the preset dropdown in `PresetEditorWidget`.
+- Selecting "LLM Interpretation" triggers `MainWindow._on_preset_selection_changed`, which switches the editor view to the `LLMEditorWidget` and calls `update_preview`.
+- `MainWindow.update_preview` (or `add_input_paths`) delegates the LLM prediction request(s) to the `LLMInteractionHandler`'s queue.
+- `LLMInteractionHandler` manages the background tasks and signals results/errors/status back to `MainWindow`.
+- `MainWindow` slots (`_on_llm_prediction_ready_from_handler`, `_on_prediction_error`, `show_status_message`, `_on_llm_processing_state_changed`) handle these signals to update the `UnifiedViewModel` and the UI state (status bar, progress, button enablement).
+- The `LLMEditorWidget` allows users to modify settings, saving them via `configuration.save_llm_config()`. `MainWindow` listens for the `settings_saved` signal to provide user feedback.
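The sanitize-then-validate flow described for `_parse_llm_response` can be sketched in plain Python. The two required top-level keys and `proposed_asset_group_name` come from these docs; the per-file `filename` key, the exact regexes, and the flat output shape are assumptions, since the real method builds `SourceRule`/`AssetRule`/`FileRule` objects instead.

```python
import json
import re

def sanitize_llm_response(text: str) -> str:
    """Strip markdown code fences and //, /* */ comments before json.loads().
    A sketch of the sanitization step; the real patterns may differ."""
    text = re.sub(r"^\s*```(?:json)?\s*|\s*```\s*$", "", text.strip())
    text = re.sub(r"/\*.*?\*/", "", text, flags=re.DOTALL)
    text = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)
    return text

def parse_two_part_response(text: str) -> dict:
    """Validate the two-part structure and resolve each file's group
    name to its final asset_type, grouping files per asset."""
    data = json.loads(sanitize_llm_response(text))
    files = data.get("individual_file_analysis")
    groups = data.get("asset_group_classifications")
    if not isinstance(files, list) or not isinstance(groups, dict):
        raise ValueError("response missing required two-part structure")
    resolved = {}
    for entry in files:
        group = entry["proposed_asset_group_name"]
        if group not in groups:
            raise ValueError(f"group {group!r} has no classification")
        resolved.setdefault(group, {"asset_type": groups[group], "files": []})
        resolved[group]["files"].append(entry["filename"])  # key name assumed
    return resolved
```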
 ## Model Integration (Refactored)

@@ -103,11 +112,12 @@ The gui/unified_view_model.py module's update_rules_for_sources method still
 ## Error Handling (Updated)

-Error handling now covers additional scenarios:
+Error handling is distributed:

-- **LLM API Errors:** Handled by `LLMInteractionHandler` and propagated via the `error_signal`.
-- **Sanitization/Parsing Errors:** The `_parse_llm_response` method catches errors during comment/markdown removal and `json.loads()`.
-- **Structure Errors:** Explicit checks for the presence and types of `individual_file_analysis` and `asset_group_classifications` keys in the parsed JSON.
-- **Data Consistency Errors:** Validation errors if the parsed data doesn't align (e.g., a `proposed_asset_group_name` missing from `asset_group_classifications`, or files missing from the analysis).
+- **Configuration Loading:** `LLMInteractionHandler` handles errors loading `llm_settings.json` or `app_settings.json` before starting a task.
+- **LLM API Errors:** Handled within `LLMPredictionHandler._call_llm` (e.g., `requests.exceptions.RequestException`, `HTTPError`) and propagated via the `prediction_error` signal.
+- **Sanitization/Parsing Errors:** `LLMPredictionHandler._parse_llm_response` catches errors during comment/markdown removal and `json.loads()`.
+- **Structure/Validation Errors:** `LLMPredictionHandler._parse_llm_response` includes explicit checks for the required two-part JSON structure and data consistency.
+- **Task Management Errors:** `LLMInteractionHandler` handles errors during thread setup/start.

-These errors trigger the `error_signal`, allowing `MainWindow` to inform the user.
+All errors ultimately result in the `llm_prediction_error` signal being emitted by `LLMInteractionHandler`, allowing `MainWindow` to inform the user via the status bar and handle the completion state.