:py:mod:`mlair.workflows.default_workflow`
==========================================

.. py:module:: mlair.workflows.default_workflow

.. autoapi-nested-parse::

   Default workflow.


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   mlair.workflows.default_workflow.DefaultWorkflow
   mlair.workflows.default_workflow.DefaultWorkflowHPC


Attributes
~~~~~~~~~~

.. autoapisummary::

   mlair.workflows.default_workflow.__author__
   mlair.workflows.default_workflow.__date__


.. py:data:: __author__
   :annotation: = Lukas Leufen


.. py:data:: __date__
   :annotation: = 2020-06-26


.. py:class:: DefaultWorkflow(stations=None, train_model=None, create_new_model=None, window_history_size=None, experiment_date='testrun', variables=None, statistics_per_var=None, start=None, end=None, target_var=None, target_dim=None, window_lead_time=None, dimensions=None, interpolation_method=None, time_dim=None, limit_nan_fill=None, train_start=None, train_end=None, val_start=None, val_end=None, test_start=None, test_end=None, use_all_stations_on_all_data_sets=None, fraction_of_train=None, experiment_path=None, plot_path=None, forecast_path=None, bootstrap_path=None, overwrite_local_data=None, sampling=None, permute_data_on_training=None, extreme_values=None, extremes_on_right_tail_only=None, transformation=None, train_min_length=None, val_min_length=None, test_min_length=None, plot_list=None, model=None, batch_size=None, epochs=None, data_handler=None, log_level_stream=None, **kwargs)

   Bases: :py:obj:`mlair.workflows.abstract_workflow.Workflow`

   A default workflow executing ExperimentSetup, PreProcessing, ModelSetup, Training and PostProcessing in exactly that order.

   .. py:method:: _setup(self, **kwargs)

      Set up the default workflow.


.. py:class:: DefaultWorkflowHPC(stations=None, train_model=None, create_new_model=None, window_history_size=None, experiment_date='testrun', variables=None, statistics_per_var=None, start=None, end=None, target_var=None, target_dim=None, window_lead_time=None, dimensions=None, interpolation_method=None, time_dim=None, limit_nan_fill=None, train_start=None, train_end=None, val_start=None, val_end=None, test_start=None, test_end=None, use_all_stations_on_all_data_sets=None, fraction_of_train=None, experiment_path=None, plot_path=None, forecast_path=None, bootstrap_path=None, overwrite_local_data=None, sampling=None, permute_data_on_training=None, extreme_values=None, extremes_on_right_tail_only=None, transformation=None, train_min_length=None, val_min_length=None, test_min_length=None, plot_list=None, model=None, batch_size=None, epochs=None, data_handler=None, log_level_stream=None, **kwargs)

   Bases: :py:obj:`DefaultWorkflow`

   A default workflow for Jülich HPC systems executing ExperimentSetup, PreProcessing, PartitionCheck, ModelSetup, Training and PostProcessing in exactly that order.

   .. py:method:: _setup(self, **kwargs)

      Set up the default workflow.
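
A minimal usage sketch is given below. It assumes that :py:obj:`DefaultWorkflow` can be constructed with only the keyword overrides from the signature above and that the ``run()`` method inherited from :py:obj:`mlair.workflows.abstract_workflow.Workflow` executes the registered stages; the station id and epoch count are hypothetical example values.

.. code-block:: python

   from mlair.workflows.default_workflow import DefaultWorkflow

   # Configure the default workflow; arguments left as None keep their defaults.
   # "DEBW107" and epochs=1 are placeholder example values, not recommendations.
   workflow = DefaultWorkflow(
       stations=["DEBW107"],
       epochs=1,
   )

   # Run all stages set up by _setup(): ExperimentSetup, PreProcessing,
   # ModelSetup, Training and PostProcessing, in that order.
   workflow.run()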