
Understanding the Flywheel CLI

This guide explains what the Flywheel CLI is, how it works, and when to use it. Whether you're new to Flywheel or deciding between the CLI and web interface, this guide will help you understand the CLI's role in your workflow.

What you'll learn:

  • How the CLI works and communicates with Flywheel
  • When to use the CLI vs. the web UI
  • How to choose the right data import approach
  • CLI architecture and components

What is the Flywheel CLI?

The Flywheel Command-Line Interface (CLI) is a standalone program that runs on your computer and communicates with your Flywheel site through an API (Application Programming Interface). Think of it as a direct connection between your computer's command line and your Flywheel data.

How It Works

Your Computer                    Network                  Flywheel Site
┌─────────────────┐                                      ┌──────────────────┐
│                 │                                      │                  │
│  CLI Program    │  ──[API Key + Commands]──────>       │  Flywheel API    │
│  (fw command)   │                                      │                  │
│                 │  <──[Data + Responses]──────         │  Your Data       │
└─────────────────┘                                      └──────────────────┘

Key Components:

  1. CLI Program (fw): The executable file you download and run on your computer
  2. API Key: Your authentication credential that identifies you to Flywheel
  3. Flywheel API: The server-side interface that receives commands and returns data
  4. Commands: Actions you tell the CLI to perform (ingest, download, sync, etc.)

Authentication Flow

When you run fw login <api_key>, the CLI:

  1. Validates your API key with the Flywheel server
  2. Stores the key locally on your computer (in ~/.config/flywheel/user.json)
  3. Includes this key with every subsequent command for authentication
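For example, a first-time login might look like this (the key shown is a placeholder; API keys typically take the form your-site.flywheel.io:key):

# Authenticate once; the key is stored locally for later commands
fw login my-site.flywheel.io:ExampleKey123

# If your CLI version supports it, fw status confirms who is logged in
fw status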

Security Note: Your API key provides full access to your Flywheel account. Keep it secret and never share it.

CLI vs. Web UI: When to Use Each

Both the CLI and web interface access the same Flywheel data, but each excels at different tasks.

Use the CLI When You Need To

Import Large Datasets

  • Uploading hundreds or thousands of files
  • Importing entire study datasets (DICOM, BIDS)
  • Preserving complex folder structures
  • Automating uploads with scripts

Example: Importing 500 subjects with 2 sessions each (1,000+ individual uploads) is impractical through the web UI but straightforward with fw ingest dicom.
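A sketch of that import (the source path, group, and project label are placeholders):

# One command scans, organizes, and uploads the entire study
fw ingest dicom /data/study_scans psychology Study1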

Download Bulk Data

  • Downloading entire projects or multiple sessions
  • Syncing data to external storage (S3, Google Cloud)
  • Preserving the Flywheel folder structure locally
  • Performing differential updates (downloading only changed files)

Example: Using fw sync to mirror a project to your HPC cluster for analysis.
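For instance, mirroring a project into shared cluster storage might look like this (the fw:// project path and destination are placeholders; check fw sync --help for the exact path syntax on your version):

# First run transfers everything; later runs transfer only changed files
fw sync --jobs 8 fw://psychology/Study1 /hpc/shared/Study1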

Automate Workflows

  • Scheduled or repeated operations
  • Integration with other scripts or pipelines
  • Batch processing across multiple projects
  • Part of continuous integration workflows

Example: A nightly script that uploads new imaging data from your scanner workstation.
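A minimal sketch of such a script, assuming new data lands in /scanner/outbox and an API key was already stored with fw login:

#!/bin/bash
# nightly_upload.sh - e.g., scheduled via cron: 0 2 * * * /opt/scripts/nightly_upload.sh
# --detect-duplicates skips files that a previous night already uploaded
fw ingest dicom --detect-duplicates /scanner/outbox psychology Study1 \
  >> /var/log/nightly_upload.log 2>&1
# Note: ingest normally shows a confirmation prompt; check
# fw ingest dicom --help for an option to skip it in unattended runs.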

Work in Non-Interactive Environments

  • SSH sessions to remote servers
  • Docker containers
  • HPC job scripts
  • Environments without a graphical interface

Use the Web UI When You Need To

Browse and Explore Data

  • Viewing metadata and file details
  • Searching across projects
  • Exploring data hierarchy visually
  • Understanding data organization

Example: Looking through sessions to find specific acquisition types or checking metadata quality.

Manage Projects and Permissions

  • Creating and configuring projects
  • Managing user roles and permissions
  • Setting up gear rules
  • Configuring de-identification profiles

Example: Adding collaborators to a project and setting their permission levels.

Run Analysis Gears

  • Selecting and running analysis gears
  • Monitoring gear execution progress
  • Reviewing gear outputs and logs
  • Comparing results across sessions

Example: Running fMRIPrep on specific sessions and reviewing quality control outputs.

View and Annotate Data

  • Viewing DICOM images
  • Reviewing BIDS validation results
  • Adding notes and tags to sessions
  • Viewing analysis dashboards

Example: Reviewing T1 image quality and tagging problematic scans for exclusion.

Handle One-Off Tasks

  • Uploading a single file
  • Downloading one session
  • Quick permission checks
  • Occasional data exports

Example: Downloading analysis results from a single gear run.

Combined Workflow Example

A typical research workflow often uses both:

  1. Setup (Web UI): Create project, configure permissions, set up gear rules, create de-ID profile
  2. Data Import (CLI): Use fw ingest dicom --de-identify to upload study data
  3. Quality Check (Web UI): Review uploaded data, check metadata, verify sessions imported correctly
  4. Analysis (Web UI): Run gears on imported data, monitor processing
  5. Export (CLI): Use fw export bids to download processed results for statistical analysis
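The two CLI steps from that workflow, with placeholder paths and labels (argument order follows the command patterns shown later in this guide):

# Step 2: de-identified import of the raw study data
fw ingest dicom --de-identify /data/study_scans psychology Study1

# Step 5: export processed results in BIDS layout for local analysis
fw export bids Study1 ./Study1-bids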

Choosing Your Data Import Approach

The CLI provides multiple import commands. Here's how to choose the right one.

Decision Flow

What type of data are you importing?

├─ DICOM files from medical imaging
│  └─> Use: fw ingest dicom
│     ├─ Pros: Automatically organizes by DICOM metadata
│     ├─ Pros: Built-in de-identification support
│     └─ Best for: Raw scanner data, clinical imaging

├─ BIDS-formatted neuroimaging dataset
│  └─> Use: fw ingest bids
│     ├─ Pros: Preserves BIDS structure and validation
│     ├─ Pros: Maintains sidecar JSON metadata
│     └─ Best for: Pre-organized BIDS datasets, sharing with collaborators

├─ Structured folders (Flywheel export format)
│  └─> Use: fw ingest folder
│     ├─ Pros: Fast import of Flywheel-formatted data
│     ├─ Pros: Preserves existing metadata
│     └─ Best for: Moving data between Flywheel sites

└─ Custom folder structure (non-DICOM, non-BIDS)
   └─> Use: fw ingest template
      ├─ Pros: Complete control over folder-to-hierarchy mapping
      ├─ Pros: Flexible metadata extraction
      └─ Best for: Non-standard data layouts, custom imaging formats

Import Strategy Considerations

For PHI/Sensitive Data:

  • Configure de-identification profiles BEFORE importing
  • Test de-ID rules with fw deid test on sample data
  • Use --de-identify flag during import to apply rules
  • This removes PHI before data leaves your network
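A sketch of that sequence (the profile name and paths are placeholders; check fw deid test --help for its exact arguments):

# Dry-run the de-identification rules against sample files first
fw deid test /data/sample_scans

# Then import with the rules applied before data leaves your network
fw ingest dicom --de-identify --deid-profile remove-phi /data/study_scans psychology Study1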

For Large Datasets (>100 GB):

  • Use --jobs flag to increase concurrent uploads
  • Consider importing in batches (by subject, session, or date range)
  • Monitor network bandwidth and adjust concurrency
  • Use --detect-duplicates to prevent re-uploading existing data
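For example, a batched, duplicate-safe import might look like this (paths and labels are placeholders):

# Import one subject directory at a time with higher concurrency,
# skipping anything already on the server
for subj in /data/study_scans/sub-*; do
  fw ingest dicom --jobs 8 --detect-duplicates "$subj" psychology Study1
done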

For Automated Processing:

  • Set up gear rules BEFORE importing data
  • Rules trigger automatically as data arrives
  • Test rules on a small dataset first
  • This enables immediate processing without manual intervention

For Multi-Site Studies:

  • Use consistent subject/session labeling across sites
  • Consider using --subject and --session flags to override metadata
  • Create templates for non-DICOM data to ensure consistent structure
  • Document your import process for reproducibility
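One way to enforce consistent labeling at import time (the labels here are illustrative):

# Override whatever the local metadata says so every site uses the same scheme
fw ingest dicom --subject site01-sub-001 --session baseline /data/site01/sub001 psychology MultiSiteStudy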

CLI Architecture

Understanding the CLI's internal structure helps troubleshoot issues and optimize usage.

Command Structure

All CLI commands follow this pattern:

fw <command> <subcommand> [options] <arguments>

Examples:

fw ingest dicom [options] <source> <group> <project>
fw export bids [options] <project> <destination>
fw sync [options] <source> <destination>

Data Flow During Import

1. Scan Phase
   └─ CLI reads local files
   └─ Extracts metadata (DICOM tags, BIDS JSON, etc.)
   └─ Builds hierarchy map (subjects → sessions → acquisitions)

2. Validation Phase
   └─ Checks for duplicate data
   └─ Validates required metadata exists
   └─ Applies de-identification rules (if enabled)
   └─ Shows preview and prompts for confirmation

3. Upload Phase
   └─ Compresses files (if needed)
   └─ Uploads files in parallel (controlled by --jobs)
   └─ Creates containers (project/subject/session/acquisition)
   └─ Attaches metadata to containers
   └─ Writes audit logs (if enabled)

4. Post-Processing (Flywheel Server)
   └─ Extracts DICOM metadata
   └─ Generates thumbnails
   └─ Triggers gear rules (if configured)
   └─ Runs classification

Configuration Files

The CLI supports configuration files to simplify complex commands:

Location: ~/.config/flywheel/config.yaml

Purpose: Store frequently used options (API key, timezone, de-ID profiles, etc.)

Benefit: Run fw ingest dicom /data/scans psychology Study1 instead of fw ingest dicom --de-identify --deid-profile remove-phi --timezone America/New_York --jobs 8 /data/scans psychology Study1
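A sketch of what such a file might contain, assuming option names mirror the command-line flags (verify the exact key names against your CLI version's documentation):

# ~/.config/flywheel/config.yaml (illustrative key names)
de-identify: true
deid-profile: remove-phi
timezone: America/New_York
jobs: 8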

Learn more about configuration files

Logging and Debugging

The CLI provides several levels of logging:

  • Default: Shows progress and important messages
  • --verbose: Shows detailed operation information
  • --debug: Shows API calls and internal operations
  • --quiet: Suppresses all output except errors

Log Locations:

  • Linux/macOS: ~/.cache/flywheel/log/cli.log
  • Windows: %LOCALAPPDATA%\flywheel\log\cli.log
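For example, when diagnosing a failed import:

# Re-run the failing command with full API-level detail
fw ingest dicom --debug /data/study_scans psychology Study1

# Then inspect the persistent log (Linux/macOS path)
tail -n 100 ~/.cache/flywheel/log/cli.log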

Troubleshooting guide | Finding logs

Performance Optimization

Upload Speed

Factors affecting upload speed:

  1. Network bandwidth: Your internet upload speed is usually the bottleneck
  2. File size: Larger files are more efficient (less overhead per file)
  3. Concurrency: More parallel uploads (--jobs flag) can maximize bandwidth
  4. Compression: Compression reduces upload size but adds CPU overhead

Optimization tips:

# Increase concurrent uploads (default: 4)
fw ingest dicom --jobs 8 /data/scans psychology Study1

# Monitor bandwidth usage and adjust
# Too high: Network congestion, slower overall
# Too low: Unused bandwidth capacity

Download Speed

For fw download:

  • Downloads files individually (best for small numbers of files)
  • Use --zip flag for multiple small files

For fw sync:

  • Optimized for large-scale downloads
  • Only downloads changed files on subsequent runs
  • Supports parallel downloads with --jobs flag

For fw export bids:

  • Specifically optimized for BIDS structure
  • Converts metadata to BIDS sidecar JSON during export
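Illustrative commands for the three approaches (paths and labels are placeholders; check each command's --help for exact argument forms):

# A few files: download one session as a single zip archive
fw download --zip "psychology/Study1/sub-001/ses-01"

# Large or repeated pulls: transfer only what changed, in parallel
fw sync --jobs 8 fw://psychology/Study1 /data/local/Study1

# BIDS consumers: export with sidecar JSON generated on the way out
fw export bids Study1 ./Study1-bids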

Next Steps

Now that you understand how the CLI works:

  • Start Using the CLI
  • Learn Specific Commands
  • Dive Deeper