Asset-Frameworker/ProjectNotes/ARCHITECTURE_README_DRAFT.md

DRAFT README Enhancements - Architecture Section & Refinements

(Note: This is a draft. Integrate the "Architecture" section and the refinements below into the main README.md file.)


Refinements to Existing Sections

(Suggest adding these points or similar wording to the relevant existing sections)

  • In Features:
    • Add: Responsive GUI: Utilizes background threads for processing and file preview generation, ensuring the user interface remains responsive.
    • Add: Optimized Classification: Pre-compiles regular expressions from presets for faster file identification during classification.
  • In Directory Structure:
    • Update Core Logic bullet: * **Core Logic:** main.py, monitor.py, asset_processor.py, configuration.py, config.py (explicitly add configuration.py).

Architecture

(Suggest adding this new section, perhaps after "Features" or "Directory Structure")

This section provides a higher-level overview of the tool's internal structure and design, intended for developers or users interested in the technical implementation.

Core Components

The tool is primarily built around several key Python modules:

  • config.py: Defines the core global settings (output paths, resolutions, default behaviors, format rules, etc.) that are generally not supplier-specific.
  • Presets/*.json: Supplier-specific JSON files defining rules for interpreting source assets (filename patterns, map type keywords, model identification, etc.).
  • configuration.py (Configuration class): Responsible for loading the core config.py settings and merging them with a selected preset JSON file. Crucially, it also pre-compiles regular expression patterns defined in the preset (e.g., for map keywords, extra files, 16-bit variants) upon initialization. This pre-compilation significantly speeds up the file classification process.
  • asset_processor.py (AssetProcessor class): Contains the core logic for processing a single asset. It orchestrates the pipeline steps: workspace setup, extraction, file classification, metadata determination, map processing, channel merging, metadata file generation, and output organization.
  • main.py: Serves as the entry point for the Command-Line Interface (CLI). It handles argument parsing, sets up logging, manages the parallel processing pool, and calls AssetProcessor for each input asset via a wrapper function.
  • gui/: Contains modules related to the Graphical User Interface (GUI), built using PySide6.
  • monitor.py: Implements the directory monitoring functionality for automated processing.

Parallel Processing (CLI & GUI)

To accelerate the processing of multiple assets, the tool utilizes Python's concurrent.futures.ProcessPoolExecutor.

  • Both main.py (for CLI) and gui/processing_handler.py (for GUI background tasks) create a process pool.
  • The actual processing for each asset is delegated to the main.process_single_asset_wrapper function. This wrapper is executed in a separate worker process within the pool.
  • The wrapper function is responsible for instantiating the Configuration and AssetProcessor classes for the specific asset being processed in that worker. This isolates each asset's processing environment.
  • Results (success, skip, failure, error messages) are communicated back from the worker processes to the main coordinating script (either main.py or gui/processing_handler.py).
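The fan-out pattern described above can be sketched like this. The wrapper body and the (path, status) result shape are illustrative stand-ins; the real wrapper is main.process_single_asset_wrapper and builds Configuration and AssetProcessor inside the worker.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def process_single_asset_wrapper(asset_path: str, preset_name: str) -> tuple[str, str]:
    """Runs in a worker process. In the real tool this instantiates
    Configuration and AssetProcessor for this one asset, isolating
    each asset's processing environment."""
    try:
        # ... extract, classify, process maps, write metadata ...
        return (asset_path, "success")
    except Exception as exc:  # report failures back instead of killing the worker
        return (asset_path, f"error: {exc}")

def run_all(assets: list[str], preset: str, workers: int = 2) -> dict[str, str]:
    """Coordinating side: submit one task per asset, collect statuses."""
    results: dict[str, str] = {}
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(process_single_asset_wrapper, a, preset): a for a in assets}
        for fut in as_completed(futures):
            path, status = fut.result()
            results[path] = status
    return results

if __name__ == "__main__":
    print(run_all(["a.zip", "b.zip"], "example_preset"))
```

Because each worker builds its own Configuration, no mutable state is shared between processes; only the small result tuples cross the process boundary.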

Asset Processing Pipeline (AssetProcessor class)

The AssetProcessor class executes a sequence of steps for each asset:

  1. _setup_workspace(): Creates a temporary directory for processing.
  2. _extract_input(): Extracts the input ZIP archive or copies the input folder contents into the temporary workspace.
  3. _inventory_and_classify_files(): The critical classification step: it scans the workspace and classifies each file against the rules in the loaded Configuration (core settings plus preset), using the pre-compiled regex patterns for speed. Key logic includes:
    • Identifying files explicitly marked for the Extra/ folder.
    • Identifying model files.
    • Matching potential texture maps against keyword patterns.
    • Identifying and prioritizing 16-bit variants (e.g., _NRM16.tif) over their 8-bit counterparts based on source_naming.bit_depth_variants patterns. Ignored 8-bit files are tracked.
    • Handling map variants (e.g., multiple Color maps) by assigning suffixes (-1, -2) based on the RESPECT_VARIANT_MAP_TYPES setting in config.py and the order of keywords defined in the preset's map_type_mapping.
    • Classifying any remaining files as 'Unrecognised' (which are also moved to the Extra/ folder).
  4. _determine_base_metadata(): Determines the asset's base name, category (Texture, Asset, Decal), and archetype (e.g., Wood, Metal) based on classified files and preset rules (source_naming, asset_category_rules, archetype_rules).
  5. Skip Check: If overwrite is false, checks if the final output directory and metadata file already exist. If so, processing for this asset stops early.
  6. _process_maps(): Iterates through classified texture maps. For each map:
    • Loads the image data (handling potential Gloss->Roughness inversion).
    • Resizes the map to each target resolution specified in config.py, avoiding upscaling.
    • Determines the output bit depth based on MAP_BIT_DEPTH_RULES (respect source or force_8bit).
    • Determines the output file format (.jpg, .png, .exr) based on a combination of factors:
      • The RESOLUTION_THRESHOLD_FOR_JPG (forces JPG for 8-bit maps above the threshold).
      • The original input file format (e.g., .jpg inputs tend to produce .jpg outputs if 8-bit and below threshold).
      • The target bit depth (16-bit outputs use configured OUTPUT_FORMAT_16BIT_PRIMARY or _FALLBACK).
      • Configured 8-bit format (OUTPUT_FORMAT_8BIT).
    • Saves the processed map for each resolution, applying appropriate compression/quality settings. Includes fallback logic if saving in the primary format fails (e.g., EXR -> PNG).
    • Calculates basic image statistics (Min/Max/Mean) for a reference resolution (CALCULATE_STATS_RESOLUTION).
  7. _merge_maps(): Combines channels from different processed maps into new textures (e.g., NRMRGH) based on MAP_MERGE_RULES defined in config.py. It determines the output format for merged maps similarly to _process_maps, considering the formats of the input maps involved.
  8. _generate_metadata_file(): Collects all gathered information (asset name, maps present, resolutions, stats, etc.) and writes it to the metadata.json file.
  9. _organize_output_files(): Moves the processed maps, merged maps, models, metadata file, and any 'Extra'/'Unrecognised'/'Ignored' files from the temporary workspace to the final structured output directory (<output_base>/<supplier>/<asset_name>/).
  10. _cleanup_workspace(): Removes the temporary workspace directory.
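The format-selection rules from step 6 can be condensed into one decision function. The precedence order shown (16-bit check first, then threshold, then input format) is an assumption inferred from the description, and the constant values are placeholders; the constant names mirror config.py.

```python
# Illustrative stand-ins for the config.py constants referenced above.
RESOLUTION_THRESHOLD_FOR_JPG = 4096   # assumed value
OUTPUT_FORMAT_8BIT = ".png"           # assumed value
OUTPUT_FORMAT_16BIT_PRIMARY = ".exr"  # assumed value

def choose_output_format(bit_depth: int, resolution: int, input_ext: str) -> str:
    """Pick a per-map output extension following the rules described for
    _process_maps. Precedence here is an educated guess."""
    if bit_depth == 16:
        # Saving may still fall back to OUTPUT_FORMAT_16BIT_FALLBACK on failure.
        return OUTPUT_FORMAT_16BIT_PRIMARY
    if resolution > RESOLUTION_THRESHOLD_FOR_JPG:
        return ".jpg"                 # large 8-bit maps are forced to JPG
    if input_ext.lower() == ".jpg":
        return ".jpg"                 # JPG in tends to mean JPG out (8-bit, under threshold)
    return OUTPUT_FORMAT_8BIT

print(choose_output_format(16, 8192, ".png"))  # .exr
```

Keeping the decision in a pure function like this makes the interplay of threshold, source format, and bit depth easy to unit-test.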

GUI Architecture (gui/)

The GUI provides an interactive way to use the tool and manage presets.

  • Framework: Built using PySide6, the official Python bindings for the Qt framework.
  • Main Window (main_window.py): Defines the main application window, which includes:
    • An integrated preset editor panel (using QSplitter).
    • A processing panel with drag-and-drop support, a file preview table, and processing controls.
  • Threading Model: To prevent the UI from freezing during potentially long operations, background tasks are run in separate QThreads:
    • ProcessingHandler (processing_handler.py): Manages the execution of the main processing pipeline (using ProcessPoolExecutor and main.process_single_asset_wrapper, similar to the CLI) in a background thread.
    • PredictionHandler (prediction_handler.py): Manages the generation of file previews in a background thread. It calls AssetProcessor.get_detailed_file_predictions(), which performs the extraction and classification steps without full image processing, making it much faster.
  • Communication: Qt's signal and slot mechanism is used for communication between the background threads (ProcessingHandler, PredictionHandler) and the main GUI thread (MainWindow). For example, signals are emitted to update the progress bar, populate the preview table, and report completion status or errors.
  • Preset Editor: The editor allows creating, modifying, and saving preset JSON files directly within the GUI. Changes are tracked, and users are prompted to save before closing or loading another preset if changes are pending.
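The worker-thread pattern used by PredictionHandler is framework-agnostic; the stdlib sketch below mirrors it with a thread and a queue standing in for QThread and Qt signals. The event names and payloads are invented for illustration; the actual GUI code uses PySide6 signal/slot connections rather than a queue.

```python
import queue
import threading

def prediction_worker(asset: str, events: queue.Queue) -> None:
    """Background task: classify files without full image processing,
    then 'emit' results back to the UI thread via the queue."""
    predictions = [f"{asset}: Color -> 8-bit", f"{asset}: Normal -> 16-bit"]  # placeholder work
    events.put(("preview_ready", predictions))
    events.put(("finished", asset))

def run_prediction(asset: str) -> list[tuple[str, object]]:
    """Coordinating side: start the worker and drain its events."""
    events: queue.Queue = queue.Queue()
    worker = threading.Thread(target=prediction_worker, args=(asset, events))
    worker.start()
    worker.join()  # the real GUI never blocks; it reacts to emitted signals instead
    out = []
    while not events.empty():
        out.append(events.get())
    return out

for kind, payload in run_prediction("oak_bark.zip"):
    print(kind, payload)
```

In Qt terms, each events.put corresponds to emitting a signal that the MainWindow slot handles on the GUI thread, which is what keeps the interface responsive.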

Monitor Architecture (monitor.py)

The monitor.py script enables automated processing of assets dropped into a designated input directory.

  • File System Watching: Uses the watchdog library (specifically PollingObserver for cross-platform compatibility) to monitor the specified INPUT_DIR.
  • Event Handling: A custom ZipHandler detects on_created events for .zip files.
  • Filename Parsing: It expects filenames in the format [preset]_filename.zip and uses a regular expression (PRESET_FILENAME_REGEX) to extract the preset name.
  • Preset Validation: Checks if the extracted preset name corresponds to a valid .json file in the Presets/ directory.
  • Processing Trigger: If the filename format and preset are valid, it calls the main.run_processing function (the same core logic used by the CLI) to process the detected ZIP file using the extracted preset.
  • File Management: Moves the source ZIP file to either a PROCESSED_DIR (on success/skip) or an ERROR_DIR (on failure or invalid preset) after the processing attempt.

Error Handling

  • Custom exception classes (ConfigurationError, AssetProcessingError) are defined and used to signal specific types of errors during configuration loading or asset processing.
  • Standard Python logging is used throughout the application (CLI, GUI, Monitor, Core Logic) to record information, warnings, and errors. Log levels can be configured.
  • Worker processes in the processing pool capture exceptions and report them back to the main process for logging and status updates.
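The exception-to-status pattern described above might look like this. The class names ConfigurationError and AssetProcessingError come from the source; the guard function, its result shape, and the ".zip" check are illustrative assumptions.

```python
import logging

class ConfigurationError(Exception):
    """Raised when loading config.py settings or a preset fails."""

class AssetProcessingError(Exception):
    """Raised when a pipeline step fails for a specific asset."""

logger = logging.getLogger("asset_tool")

def safe_process(asset: str) -> tuple[str, str]:
    """Worker-side guard: convert exceptions into a status result the
    coordinating process can log, instead of letting the worker die."""
    try:
        if not asset.endswith(".zip"):           # stand-in for real pipeline checks
            raise AssetProcessingError(f"unsupported input: {asset}")
        return (asset, "success")
    except AssetProcessingError as exc:
        logger.error("processing failed: %s", exc)
        return (asset, f"failure: {exc}")

print(safe_process("thing.zip"))  # ('thing.zip', 'success')
print(safe_process("thing.rar"))  # ('thing.rar', 'failure: unsupported input: thing.rar')
```

Distinct exception types let the callers distinguish "fix your preset" errors from "this one asset is broken" errors when deciding how to report them.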