# Configuration & Data Format

## Solver Configuration
FLEX-Operation and FLEX-Community solve LP models using Pyomo. The solver is configured via environment variables:
| Variable | Default | Description |
|---|---|---|
| `FLEX_OPERATION_SOLVER` | … | Solver name. Any Pyomo-supported solver works (e.g. `gurobi`, `highs`). |
| `FLEX_OPERATION_SOLVER_INTERFACE` | (empty) | Comma-separated … |
Example:

```bash
export FLEX_OPERATION_SOLVER=highs
export FLEX_OPERATION_SOLVER_INTERFACE=shell
python -m projects.test_operation.main
```
Gurobi is recommended and requires a license (free for academics). HiGHS is a high-quality open-source alternative.
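As a rough sketch of how these environment variables could be consumed, the helper below parses them into a solver name and an interface list. The function name, the `gurobi` fallback, and the comma-splitting are illustrative assumptions, not the actual FLEX code:

```python
import os

def read_solver_config(env=None):
    """Parse the solver environment variables (illustrative sketch only).

    NOTE: the fallback name "gurobi" and the comma-splitting of the
    interface variable are assumptions, not the real FLEX implementation.
    """
    if env is None:
        env = os.environ
    name = env.get("FLEX_OPERATION_SOLVER", "gurobi")
    raw = env.get("FLEX_OPERATION_SOLVER_INTERFACE", "")
    interface = [part.strip() for part in raw.split(",") if part.strip()]
    return name, interface

# Mirrors the shell example above:
name, interface = read_solver_config(
    {"FLEX_OPERATION_SOLVER": "highs", "FLEX_OPERATION_SOLVER_INTERFACE": "shell"}
)
```

The parsed name would then be handed to Pyomo's `SolverFactory` to build the solver instance.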
## Other Environment Variables
| Variable | Default | Description |
|---|---|---|
| … | … | Format for hourly result files: … |
| … | … | Set to … |
## Input Data Format

### Supported file formats
Input tables are read from the project's `input/` folder. The following formats are supported, in order of lookup priority:

- `.csv`
- `.xlsx`
- `.parquet.gzip`
- `.parquet`
File names must exactly match the table names defined in `src/utils/tables.py` (e.g. `OperationScenario.xlsx`, `OperationScenario_Component_Battery.xlsx`).
### Table name registry
All input and output table names are defined as enums in `src/utils/tables.py`:

- `InputTables` — 17 Behavior tables, 17 Operation tables, 6 Community tables
- `OutputTables` — 2 Behavior, 6 Operation, 2 Community
### Time series conventions

- All hourly time series have exactly 8,760 rows (one non-leap year).
- Behavior person-level profiles have 52,560 rows (10-minute resolution: 144 slots/day × 365 days).
- The year is assumed to start on a Tuesday (matching 2019).
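The row counts and the start-of-year convention follow directly from calendar arithmetic, which a few lines of standard-library Python can confirm (no FLEX code involved):

```python
import datetime

# Row counts implied by the conventions above:
HOURS_PER_YEAR = 365 * 24        # hourly rows per non-leap year -> 8,760
SLOTS_PER_YEAR = 365 * 24 * 6    # 10-minute rows (144 slots/day) -> 52,560

# 1 January 2019 falls on a Tuesday (weekday() == 1, with Monday == 0).
START_WEEKDAY = datetime.date(2019, 1, 1).weekday()
```

These constants are useful as sanity checks when preparing your own input time series.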
## Output Data Format

### Per-scenario files
Hourly results are written as one file per scenario:
```
OperationResult_RefHour_S1.parquet.gzip
OperationResult_RefHour_S2.parquet.gzip
OperationResult_OptHour_S1.parquet.gzip
...
```
### Aggregate files

Monthly and annual results are written per-scenario first (`_S{id}` suffix), then automatically merged into a single file after all scenarios complete:
```
OperationResult_RefYear.csv    ← merged from _S1, _S2, ...
OperationResult_OptYear.csv
OperationResult_RefMonth.csv   ← (if monthly output is enabled)
```
The merge deduplicates by `ID_Scenario` (keeping the latest) and removes the per-scenario files.
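The keep-the-latest deduplication can be sketched in a few lines. This is an illustrative stand-in (the real code merges the per-scenario result files on disk); the function name and dict-based approach are assumptions:

```python
# Sketch of merge-and-deduplicate: later rows win, keyed on ID_Scenario.
def merge_by_scenario(rows):
    """rows: result dicts read from per-scenario files, oldest first."""
    merged = {}
    for row in rows:
        merged[row["ID_Scenario"]] = row  # a later row overwrites an earlier one
    return list(merged.values())

rows = [
    {"ID_Scenario": 1, "cost": 100.0},
    {"ID_Scenario": 2, "cost": 90.0},
    {"ID_Scenario": 1, "cost": 95.0},  # re-run of scenario 1: kept as latest
]
```

Here `merge_by_scenario(rows)` keeps the re-run value (95.0) for scenario 1 and the single entry for scenario 2.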
## Parallel Execution

For large scenario sweeps, use `run_operation_model_parallel()`:
```python
from src.models.operation.main import run_operation_model_parallel
from src.utils.config import Config

config = Config("my_project", "/path/to/project")
run_operation_model_parallel(
    config=config,
    task_num=4,  # number of parallel workers
    save_hour=True,
    run_ref=True,
    run_opt=True,
)
```
This splits scenarios across workers using joblib, writes results to temporary `task_{id}/` subdirectories, then merges everything back into the main `output/` folder.
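The scenario-splitting step can be illustrated with a simple round-robin partition. Note the round-robin scheme is an assumption for illustration; the actual split strategy lives in the FLEX source and is dispatched via joblib:

```python
# Split scenario IDs across task_num workers (round-robin, illustrative only).
def split_scenarios(scenario_ids, task_num):
    # Worker i takes every task_num-th scenario starting at offset i.
    return [scenario_ids[i::task_num] for i in range(task_num)]

# 10 scenarios over 4 workers: each ID lands in exactly one chunk.
chunks = split_scenarios(list(range(1, 11)), task_num=4)
```

Each worker then runs its chunk independently and writes into its own `task_{id}/` subdirectory, so no two workers contend for the same output file.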
## Creating Your Own Project
1. Create a project directory with `input/` and `output/` subfolders:

   ```
   my_project/
   ├── input/
   │   ├── OperationScenario.xlsx
   │   ├── OperationScenario_Component_Building.xlsx
   │   ├── ...
   │   └── OperationScenario_RegionWeather.xlsx
   └── output/
   ```
2. Populate input tables. Use `projects/test_operation/input/` as a template — column names and formats must match exactly.

3. Write a runner script:

   ```python
   from src.models.operation.main import run_operation_model
   from src.utils.config import Config
   from src.utils.db import prepare_project_run

   config = Config("my_project", "/path/to/my_project")
   prepare_project_run(config)
   run_operation_model(config, save_hour=True)
   ```
For FLEX-Community, you need to export Operation results into Community input format. See `projects/test_community/main.py` for the helper functions `copy_operation_tables()` and `copy_household_ref_hour()` that automate this.