:py:mod:`mlair.workflows`
=========================

.. py:module:: mlair.workflows


Submodules
----------
.. toctree::
   :titlesonly:
   :maxdepth: 1

   abstract_workflow/index.rst
   custom_workflows/index.rst
   default_workflow/index.rst


Package Contents
----------------

Classes
~~~~~~~

.. autoapisummary::

   mlair.workflows.Workflow
   mlair.workflows.DefaultWorkflow
   mlair.workflows.DefaultWorkflowHPC


.. py:class:: Workflow(name=None, log_level_stream=None)

   Abstract workflow class to handle a sequence of stages (run modules).

   An inheriting class has to initialise this parent class first and can afterwards add an arbitrary number of stages using the add method. Stages are executed in the order in which they were added. To run the workflow, a single call of the run method is then sufficient.

   Note that inter-stage dependencies must be handled by the user: this class manages only the execution, not the dependencies, and the workflow will most likely fail if they are not met.

   .. py:method:: add(self, stage, **kwargs)

      Add a new stage with optional kwargs.

   .. py:method:: run(self)

      Run the workflow embedded in a run environment and according to the stages' ordering.
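The add/run pattern described above can be sketched as follows. This is a minimal, self-contained illustration of the pattern only, not mlair's actual implementation; the callable stages are stand-ins for mlair's run modules.

```python
# Minimal sketch of the Workflow pattern: stages are collected via add()
# and executed in insertion order by run(). Illustration only.

class Workflow:
    def __init__(self, name=None):
        self.name = name or self.__class__.__name__
        self._registry = []  # (stage, kwargs) pairs in insertion order

    def add(self, stage, **kwargs):
        """Register a stage (any callable here) with optional kwargs."""
        self._registry.append((stage, kwargs))

    def run(self):
        """Execute all stages in the order they were added."""
        return [stage(**kwargs) for stage, kwargs in self._registry]


wf = Workflow(name="demo")
wf.add(lambda **kw: "ExperimentSetup")
wf.add(lambda **kw: "PreProcessing")
print(wf.run())  # stages run in insertion order
```

Because `run()` simply walks the registry, any dependency between stages (for example, PreProcessing needing the output of ExperimentSetup) has to be satisfied by the user's ordering of `add()` calls, exactly as the class docstring warns.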
.. py:class:: DefaultWorkflow(stations=None, train_model=None, create_new_model=None, window_history_size=None, experiment_date='testrun', variables=None, statistics_per_var=None, start=None, end=None, target_var=None, target_dim=None, window_lead_time=None, dimensions=None, interpolation_method=None, time_dim=None, limit_nan_fill=None, train_start=None, train_end=None, val_start=None, val_end=None, test_start=None, test_end=None, use_all_stations_on_all_data_sets=None, fraction_of_train=None, experiment_path=None, plot_path=None, forecast_path=None, bootstrap_path=None, overwrite_local_data=None, sampling=None, permute_data_on_training=None, extreme_values=None, extremes_on_right_tail_only=None, transformation=None, train_min_length=None, val_min_length=None, test_min_length=None, plot_list=None, model=None, batch_size=None, epochs=None, data_handler=None, log_level_stream=None, **kwargs)

   Bases: :py:obj:`mlair.workflows.abstract_workflow.Workflow`

   A default workflow executing ExperimentSetup, PreProcessing, ModelSetup, Training and PostProcessing in exactly this order.

   .. py:method:: _setup(self, **kwargs)

      Set up the default workflow.
.. py:class:: DefaultWorkflowHPC(stations=None, train_model=None, create_new_model=None, window_history_size=None, experiment_date='testrun', variables=None, statistics_per_var=None, start=None, end=None, target_var=None, target_dim=None, window_lead_time=None, dimensions=None, interpolation_method=None, time_dim=None, limit_nan_fill=None, train_start=None, train_end=None, val_start=None, val_end=None, test_start=None, test_end=None, use_all_stations_on_all_data_sets=None, fraction_of_train=None, experiment_path=None, plot_path=None, forecast_path=None, bootstrap_path=None, overwrite_local_data=None, sampling=None, permute_data_on_training=None, extreme_values=None, extremes_on_right_tail_only=None, transformation=None, train_min_length=None, val_min_length=None, test_min_length=None, plot_list=None, model=None, batch_size=None, epochs=None, data_handler=None, log_level_stream=None, **kwargs)

   Bases: :py:obj:`DefaultWorkflow`

   A default workflow for Jülich HPC systems executing ExperimentSetup, PreProcessing, PartitionCheck, ModelSetup, Training and PostProcessing in exactly this order.

   .. py:method:: _setup(self, **kwargs)

      Set up the default workflow.
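The two default workflows differ only in their stage list: the HPC variant inserts a PartitionCheck between PreProcessing and ModelSetup. The following self-contained sketch shows how such pre-registered stage orderings can be expressed by overriding ``_setup``; the ``Workflow`` base and the string stand-ins for the stages are illustrative only, not mlair code.

```python
# Sketch of default workflows that pre-register their stages in _setup().
# Stage names mirror the documented orderings; the classes are stand-ins.

class Workflow:
    def __init__(self):
        self._registry = []

    def add(self, stage, **kwargs):
        self._registry.append((stage, kwargs))

    def run(self):
        return [stage(**kwargs) for stage, kwargs in self._registry]


class DefaultWorkflow(Workflow):
    STAGES = ("ExperimentSetup", "PreProcessing", "ModelSetup",
              "Training", "PostProcessing")

    def __init__(self, **kwargs):
        super().__init__()
        self._setup(**kwargs)

    def _setup(self, **kwargs):
        # register the documented stages in their fixed order
        for name in self.STAGES:
            self.add(lambda name=name: name)


class DefaultWorkflowHPC(DefaultWorkflow):
    # on HPC systems, PartitionCheck runs right after PreProcessing
    STAGES = ("ExperimentSetup", "PreProcessing", "PartitionCheck",
              "ModelSetup", "Training", "PostProcessing")


print(DefaultWorkflowHPC().run())
# ['ExperimentSetup', 'PreProcessing', 'PartitionCheck', 'ModelSetup', 'Training', 'PostProcessing']
```

Because ``_setup`` fixes the ordering at construction time, a user only constructs the workflow with the desired keyword arguments and calls ``run()``; no manual ``add()`` calls are needed for the default stage sequence.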