Release Notes
0.2.0 [2026-03-02]
Enhancements:
- Added opt-in upload control via `meta.upload` in model config: each model can declare `meta: {upload: "<path>"}` (a path relative to the target directory) to opt into upload to external storage, replacing the previous automatic upload of all `external` materialization models and giving dbt project authors full control over what gets uploaded.
- Extended upload support to any materialization type (`external`, `table`, and Python models) that produces an output file and declares `meta.upload`.
- Added `pandas>=2,<3` dependency.
- Added automated release notes CI jobs via `claude-code` integration.
Fixes:
- Fixed SQL file metadata update in `run.py` to use `sql_file.name` instead of the full relative path when calling `update_file_metadata`.
Maintenance:
- Replaced `_find_external_model_outputs`, `_find_schema_json_outputs`, and `_parse_external_models_from_manifest` with `_load_manifest` and a single manifest-driven `_find_uploadable_outputs` to support the `meta.upload` mechanism.
- Removed the fallback glob search for `*.parquet` files under `target/` when no manifest entries are found.
- Removed the hardcoded `target/schemas/*.schema.json` upload; Python models must now declare `meta.upload` to be uploaded.
- Added unit tests for `_load_manifest`, `_find_uploadable_outputs`, and `_upload_external_model_outputs`, covering string paths, non-string values, missing files, node-level meta, and non-model nodes.
- Added `scripts/test_fw_dataset_loading.py` integration script for validating `fw-dataset` and `data-connect` compatibility.
Documentation:
- Updated `README.md` and `docs/getting_started_with_dbt.md` to reflect the `meta.upload` upload model, including new examples for SQL and Python models and a revised materialization support table.
- Removed `FAQ.md` and its references from `README.md`.
Breaking Changes:
- Models that previously relied on `materialized='external'` being automatically uploaded must now declare `meta: {upload: "<path>"}` in their config; models without `meta.upload` are silently skipped.
- The fallback glob search for `*.parquet` files under `target/` has been removed.
- The hardcoded `target/schemas/*.schema.json` upload has been removed.
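The manifest-driven upload discovery described in this release can be sketched in Python. This is a simplified illustration, not the gear's actual code: the function name and the exact manifest field locations are assumptions (dbt manifests may carry `meta` on the node and/or under its `config`).

```python
import json
from pathlib import Path

def find_uploadable_outputs(target_dir: Path) -> list[tuple[str, Path]]:
    """Return (relative upload path, local file) pairs for models that
    declare a string meta.upload and whose output file actually exists."""
    manifest = json.loads((target_dir / "manifest.json").read_text())
    outputs = []
    for node in manifest.get("nodes", {}).values():
        if node.get("resource_type") != "model":
            continue  # non-model nodes (tests, seeds, ...) are ignored
        # meta may appear at the node level or under config, depending on dbt version
        meta = node.get("meta") or node.get("config", {}).get("meta") or {}
        upload = meta.get("upload")
        if not isinstance(upload, str):
            continue  # models without a string meta.upload are silently skipped
        local = target_dir / upload
        if local.is_file():
            outputs.append((upload, local))
    return outputs
```

The filtering mirrors the documented behavior: non-model nodes, non-string `meta.upload` values, and missing output files are all skipped rather than raising errors.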
0.1.1 [2026-01-12]
Fixes:
- Fixed output file type for `.sql` files in the compiled directory to be set as "source code" so they open in the native editor instead of as generic files
Maintenance:
- Fixed markdown line length violations in `CONTRIBUTING.md` and `docs/getting_started_with_dbt.md` to comply with the `markdownlint` 88-character limit
Documentation:
- Updated `README.md` to recommend reaching out to Flywheel support for large datasets so that appropriate resources can be allocated
- Enhanced `docs/getting_started_with_dbt.md` with comprehensive dbt sources documentation, including the benefits of using the `{{ source() }}` function over direct `read_parquet()` calls
- Improved the source data requirements section with recommended vs. alternative approaches and common mistakes to avoid
- Added `{name}` placeholder documentation for DRY source configuration in `sources.yml`
- Updated all example SQL queries to use the `{{ source() }}` function instead of direct `read_parquet()` calls
0.1.0 [2025-12-01]
Enhancements:
- Initial implementation of dbt Runner gear for executing dbt projects on Flywheel datasets
- Added support for the `dbt-duckdb` adapter to process parquet files locally
- Implemented external storage integration using the `fw-storage` and `fw-client` libraries for downloading source datasets and uploading transformed results
- Added comprehensive validation for dbt project structure, configuration parameters, and storage access
- Implemented automatic extraction and validation of dbt project zip files
- Added support for saving dbt artifacts (`manifest.json`, `run_results.json`, `sources.json`, `compiled/`) as gear outputs
- Implemented structured logging with numbered execution phases for easy progress tracking
- Added automatic creation of target subdirectories before dbt runs to prevent "directory does not exist" errors when models specify nested output locations
- Implemented subdirectory structure preservation when uploading model outputs to external storage, allowing organized output hierarchy
- Created comprehensive README documentation with usage examples, workflow diagrams, and use cases
- Added detailed "Assumptions and Limitations" documentation to README and getting started guide covering model output requirements, supported features, execution model, and known limitations
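The two directory-handling enhancements above (pre-creating nested target subdirectories and preserving subdirectory structure on upload) can be sketched as follows. The function name and remote-key scheme are hypothetical, shown only to illustrate the idea:

```python
from pathlib import Path

def prepare_and_map_outputs(
    target_dir: Path, rel_paths: list[str], output_prefix: str
) -> dict[str, Path]:
    """Illustrative sketch: pre-create nested target subdirectories and map
    each local output file to a remote key that preserves its hierarchy."""
    mapping = {}
    for rel in rel_paths:
        local = target_dir / rel
        # Create parent directories up front so models writing to nested
        # output locations do not fail with "directory does not exist".
        local.parent.mkdir(parents=True, exist_ok=True)
        # Keep the subdirectory structure in the remote key so the external
        # storage layout mirrors the local target layout.
        mapping[f"{output_prefix}/{rel}"] = local
    return mapping
```

Using `mkdir(parents=True, exist_ok=True)` makes the preparation idempotent, so re-running the gear against an existing target directory is safe.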
Fixes:
- Fixed storage upload API to use the correct `set()` method instead of the non-existent `put()` method for `fw-storage` compatibility with Google Cloud Storage and other backends
- Fixed test mock configuration to properly handle nested `GearContext` attribute access for API key retrieval
Maintenance:
- Set up project structure based on Flywheel gear template
- Configured `manifest.json` with required inputs (`dbt_project_zip`) and config parameters (`storage_label`, `source_prefix`, `output_prefix`)
- Updated Python requirement to `>=3.12`
- Added dependencies: `dbt-core>=1.9.0`, `dbt-duckdb>=1.9.0`, `fw-storage>=2.0.0`, `fw-client>=2.3.1`
- Added `glibc-locale-posix` to the Dockerfile for VS Code Server compatibility
- Simplified the `GearContext` import in `run.py` by removing unnecessary aliasing from `GearToolkitContext`
Documentation:
- Added getting started guide with step-by-step instructions for creating, testing, and deploying dbt projects
- Added `CONTRIBUTING.md` with dependency management, linting, and testing instructions