Release Notes

0.2.0 [2026-03-02]

Enhancements:

  • Added opt-in upload control via meta.upload in model config: models must declare meta: {upload: "<path>"} to be uploaded to external storage, replacing the previous automatic upload of all external materialization models.
  • Extended upload support to any materialization type (including Python table models) that produces an output file and declares meta.upload.
  • Added pandas>=2,<3 dependency.
  • Added automated release notes CI jobs via claude-code integration.
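As a sketch, the opt-in declaration for a Python table model might look like the following (the model name, output path, and source names are illustrative, not taken from the release):

```python
# models/events_summary.py -- illustrative dbt Python model opting into upload
def model(dbt, session):
    # meta.upload declares which output file should be pushed to external
    # storage; models without meta.upload are not uploaded.
    dbt.config(
        materialized="table",
        meta={"upload": "events_summary.parquet"},
    )
    # Return the input relation as usual (source/table names are made up).
    return dbt.source("raw", "events")
```

SQL models opt in the same way through a `meta={'upload': '...'}` entry in their `{{ config(...) }}` block.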

Fixes:

  • Fixed SQL file metadata update in run.py to use sql_file.name instead of the full relative path when calling update_file_metadata.

Maintenance:

  • Replaced _find_external_model_outputs, _find_schema_json_outputs, and _parse_external_models_from_manifest with _load_manifest and _find_uploadable_outputs to support the meta.upload mechanism.
  • Removed fallback recursive parquet file scan when no manifest entries are found.
  • Added unit tests for _load_manifest, _find_uploadable_outputs, and _upload_external_model_outputs.
  • Added scripts/test_fw_dataset_loading.py integration script for validating fw-dataset and data-connect compatibility.
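A rough sketch of what manifest-driven discovery could look like, based on the behaviors the unit tests cover (string paths only, node-level meta fallback, missing files and non-model nodes skipped); the function bodies are guesses, not the actual implementation:

```python
import json
from pathlib import Path


def load_manifest(target_dir: Path) -> dict:
    """Read dbt's manifest.json from the target directory."""
    return json.loads((target_dir / "manifest.json").read_text())


def find_uploadable_outputs(manifest: dict, target_dir: Path) -> dict[str, Path]:
    """Map node id -> output file for model nodes that declare meta.upload."""
    outputs: dict[str, Path] = {}
    for node_id, node in manifest.get("nodes", {}).items():
        if node.get("resource_type") != "model":
            continue  # skip tests, seeds, and other non-model nodes
        # Prefer config-level meta, fall back to node-level meta.
        meta = node.get("config", {}).get("meta") or node.get("meta") or {}
        upload = meta.get("upload")
        if not isinstance(upload, str):
            continue  # ignore missing or non-string upload values
        path = target_dir / upload
        if path.is_file():  # skip declared-but-missing files
            outputs[node_id] = path
    return outputs
```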

Documentation:

  • Updated README.md and docs/getting_started_with_dbt.md to reflect the meta.upload opt-in upload model, including new examples for SQL and Python models and a revised materialization support table.
  • Removed FAQ.md and its references from README.md.

Breaking Changes:

  • Models that previously relied on materialized='external' being uploaded automatically must now declare meta: {upload: "<path>"} in their config to be uploaded; models without meta.upload are silently skipped.
  • The fallback glob search for *.parquet files under target/ has been removed.
  • The hardcoded target/schemas/*.schema.json upload has been removed; Python models must declare meta.upload to be uploaded.

0.1.1 [2026-01-12]

Fixes:

  • Fixed output file type for .sql files in the compiled directory to be set as "source code" so they open in the native editor instead of as generic files

Maintenance:

  • Fixed markdown line length violations in CONTRIBUTING.md and docs/getting_started_with_dbt.md to comply with markdownlint's 88-character limit

Documentation:

  • Updated README.md to recommend reaching out to Flywheel support for large datasets so that appropriate resources can be allocated
  • Enhanced docs/getting_started_with_dbt.md with comprehensive dbt sources documentation, including benefits of using {{ source() }} function over direct read_parquet() calls
  • Improved source data requirements section with recommended vs alternative approaches and common mistakes to avoid
  • Added {name} placeholder documentation for DRY source configuration in sources.yml
  • Updated all example SQL queries to use {{ source() }} function instead of direct read_parquet() calls
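The same guidance carries over to Python models, where dbt.source(...) plays the role of {{ source() }}; a minimal illustration with hypothetical source and table names:

```python
# Illustrative only: resolve inputs through dbt's source abstraction rather
# than a hard-coded read_parquet('path/to/file.parquet') call, so dbt can
# track lineage and validate the source definition.
def model(dbt, session):
    return dbt.source("flywheel", "subjects")
```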

0.1.0 [2025-12-01]

Enhancements:

  • Initial implementation of dbt Runner gear for executing dbt projects on Flywheel datasets
  • Added support for dbt-duckdb adapter to process parquet files locally
  • Implemented external storage integration using fw-storage and fw-client libraries for downloading source datasets and uploading transformed results
  • Added comprehensive validation for dbt project structure, configuration parameters, and storage access
  • Implemented automatic extraction and validation of dbt project zip files
  • Added support for saving dbt artifacts (manifest.json, run_results.json, sources.json, compiled/) as gear outputs
  • Implemented structured logging with numbered execution phases for easy progress tracking
  • Added automatic creation of target subdirectories before dbt runs to prevent "directory does not exist" errors when models specify nested output locations
  • Implemented subdirectory structure preservation when uploading model outputs to external storage, allowing organized output hierarchy
  • Created comprehensive README documentation with usage examples, workflow diagrams, and use cases
  • Added detailed "Assumptions and Limitations" documentation to README and getting started guide covering model output requirements, supported features, execution model, and known limitations
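The last two items amount to creating each output's parent directory before dbt runs and keeping the file's relative path when building the storage key. A sketch with hypothetical helper names (not the gear's actual functions):

```python
from pathlib import Path


def prepare_target_subdirs(target_dir: Path, relative_outputs: list[str]) -> None:
    """Create each declared output's parent directory so dbt can write
    nested files without a "directory does not exist" error."""
    for rel in relative_outputs:
        (target_dir / rel).parent.mkdir(parents=True, exist_ok=True)


def upload_key(target_dir: Path, output_file: Path, output_prefix: str) -> str:
    """Preserve the file's subdirectory structure under the storage prefix."""
    rel = output_file.relative_to(target_dir).as_posix()
    return f"{output_prefix}/{rel}"
```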

Fixes:

  • Fixed storage upload API to use correct set() method instead of non-existent put() method for fw-storage compatibility with Google Cloud Storage and other backends
  • Fixed test mock configuration to properly handle nested GearContext attribute access for API key retrieval

Maintenance:

  • Set up project structure based on Flywheel gear template
  • Configured manifest.json with required inputs (dbt_project_zip) and config parameters (storage_label, source_prefix, output_prefix)
  • Updated Python requirement to >=3.12
  • Added dependencies: dbt-core>=1.9.0, dbt-duckdb>=1.9.0, fw-storage>=2.0.0, fw-client>=2.3.1
  • Added glibc-locale-posix to Dockerfile for VS Code Server compatibility
  • Simplified GearContext import in run.py by removing unnecessary aliasing from GearToolkitContext

Documentation:

  • Created comprehensive README with usage examples, workflow diagrams, and use cases
  • Added getting started guide with step-by-step instructions for creating, testing, and deploying dbt projects
  • Added CONTRIBUTING.md with dependency management, linting, and testing instructions
  • Added detailed assumptions and limitations documentation covering model output requirements, supported features, execution model, and known limitations