Configuration Classes
Dataset
TrajectoriesDatasetConfig
- class prescyent.dataset.config.TrajectoriesDatasetConfig(*, name: str | None = None, seed: int = None, batch_size: int = 128, num_workers: int = 1, persistent_workers: bool = True, pin_memory: bool = True, save_samples_on_disk: bool = True, learning_type: LearningTypes = LearningTypes.SEQ2SEQ, frequency: int, history_size: int, future_size: int, in_features: Features | None, out_features: Features | None, in_points: List[int] | None, out_points: List[int] | None, context_keys: List[str] = [], convert_trajectories_beforehand: bool = True, loop_over_traj: bool = False, reverse_pair_ratio: float = 0, **extra_data: Any)
Bases:
BaseConfig
Pydantic Basemodel for TrajectoriesDatasets configuration
- batch_size: int
Size of the batch of all dataloaders
Required: False
Default: 128
- context_keys: List[str]
List of the keys of the tensors we’ll pass as context to the predictor. Must be a subset of the existing context keys in the Dataset’s Trajectories
Required: False
Default: []
- convert_trajectories_beforehand: bool
If in_features and out_features allow it, convert the trajectories as a preprocessing step instead of in the dataloaders
Required: False
Default: True
- frequency: int
The frequency in Hz of the dataset. If it differs from the original data, we’ll use linear upsampling or downsampling of the data
Required: True
- future_size: int
Number of timesteps predicted as output
Required: True
- history_size: int
Number of timesteps as input
Required: True
- in_features: Features | None
List of features used as input; if None, use the default from the dataset
Required: True
- in_points: List[int] | None
Ids of the points used as input.
Required: True
- learning_type: LearningTypes
Method used to generate TrajectoryDataSamples
Required: False
Default: sequence_2_sequence
- loop_over_traj: bool
Make the trajectory loop over itself when generating training pairs
Required: False
Default: False
- name: str | None
Name of your dataset. WARNING: if you override the default value, AutoDataset won’t be able to load your dataset
Required: False
Default: None
- num_workers: int
See https://pytorch.org/docs/stable/data.html#single-and-multi-process-data-loading
Required: False
Default: 1
- out_features: Features | None
List of features used as output; if None, use the default from the dataset
Required: True
- out_points: List[int] | None
Ids of the points used as output.
Required: True
- persistent_workers: bool
See https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader
Required: False
Default: True
- pin_memory: bool
See https://pytorch.org/docs/stable/data.html#memory-pinning
Required: False
Default: True
- reverse_pair_ratio: float
Do data augmentation by reversing some trajectories’ sequences, with the given ratio (between 0 and 1) as the chance of occurring
Required: False
Default: 0
- save_samples_on_disk: bool
If True, we’ll use a temporary hdf5 file to store the (x, y) pairs, saving computation time during training at the cost of some init time and temporary disk space
Required: False
Default: True
- seed: int
A seed for all random operations in the dataset class
Required: False
Default: None
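Since this is a Pydantic model, the required fields above are validated at construction time, so a missing or mistyped field raises a ValidationError immediately. A minimal sketch of building the base config by hand (assuming prescyent is installed; field values are illustrative, not recommended settings — in practice you will usually instantiate a dataset-specific subclass instead):

```python
# Sketch: construct a base TrajectoriesDatasetConfig directly.
# Values below are illustrative examples only.
from prescyent.dataset.config import TrajectoriesDatasetConfig

config = TrajectoriesDatasetConfig(
    frequency=10,       # resample the data to 10 Hz
    history_size=10,    # 1 s of input at 10 Hz
    future_size=10,     # 1 s of predicted output at 10 Hz
    in_features=None,   # None -> use the dataset's default features
    out_features=None,
    in_points=None,     # None -> use the dataset's default point ids
    out_points=None,
)
# Pydantic serialization, e.g. to persist the config to disk:
print(config.model_dump())  # use .dict() instead on Pydantic v1
```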
H36MDatasetConfig
- class prescyent.dataset.datasets.human36m.config.H36MDatasetConfig(*, name: str | None = None, seed: int = None, batch_size: int = 128, num_workers: int = 1, persistent_workers: bool = True, pin_memory: bool = True, save_samples_on_disk: bool = True, learning_type: LearningTypes = LearningTypes.SEQ2SEQ, frequency: int = 25, history_size: int = 25, future_size: int = 25, in_features: Features = [{'distance_unit': 'm', 'feature_class': 'prescyent.dataset.features.feature.coordinate.CoordinateXYZ', 'ids': [0, 1, 2], 'name': 'Coordinate_0'}, {'distance_unit': 'rad', 'feature_class': 'prescyent.dataset.features.feature.rotation.RotationRotMat', 'ids': [3, 4, 5, 6, 7, 8, 9, 10, 11], 'name': 'Rotation_1'}], out_features: Features = [{'distance_unit': 'm', 'feature_class': 'prescyent.dataset.features.feature.coordinate.CoordinateXYZ', 'ids': [0, 1, 2], 'name': 'Coordinate_0'}, {'distance_unit': 'rad', 'feature_class': 'prescyent.dataset.features.feature.rotation.RotationRotMat', 'ids': [3, 4, 5, 6, 7, 8, 9, 10, 11], 'name': 'Rotation_1'}], in_points: List[int] = [2, 3, 4, 5, 7, 8, 9, 10, 12, 13, 14, 15, 17, 18, 19, 21, 22, 25, 26, 27, 29, 30], out_points: List[int] = [2, 3, 4, 5, 7, 8, 9, 10, 12, 13, 14, 15, 17, 18, 19, 21, 22, 25, 26, 27, 29, 30], context_keys: List[str] = [], convert_trajectories_beforehand: bool = True, loop_over_traj: bool = False, reverse_pair_ratio: float = 0, hdf5_path: str, actions: List[str] = ['directions', 'discussion', 'eating', 'greeting', 'phoning', 'posing', 'purchases', 'sitting', 'sittingdown', 'smoking', 'takingphoto', 'waiting', 'walking', 'walkingdog', 'walkingtogether'], subjects_train: List[str] = ['S1', 'S6', 'S7', 'S8', 'S9'], subjects_test: List[str] = ['S5'], subjects_val: List[str] = ['S11'], **extra_data: Any)
Bases:
TrajectoriesDatasetConfig
Pydantic Basemodel for Dataset configuration
- actions: List[str]
List of the H36M Actions to consider
Required: False
Default: [‘directions’, ‘discussion’, ‘eating’, ‘greeting’, ‘phoning’, ‘posing’, ‘purchases’, ‘sitting’, ‘sittingdown’, ‘smoking’, ‘takingphoto’, ‘waiting’, ‘walking’, ‘walkingdog’, ‘walkingtogether’]
- batch_size: int
Size of the batch of all dataloaders
Required: False
Default: 128
- context_keys: List[str]
List of the keys of the tensors we’ll pass as context to the predictor. Must be a subset of the existing context keys in the Dataset’s Trajectories
Required: False
Default: []
- convert_trajectories_beforehand: bool
If in_features and out_features allow it, convert the trajectories as a preprocessing step instead of in the dataloaders
Required: False
Default: True
- frequency: int
The frequency in Hz of the dataset. If it differs from the original data, we’ll use linear upsampling or downsampling of the data. Default is downsampling from 50 Hz to 25 Hz
Required: False
Default: 25
- future_size: int
Number of predicted timesteps; defaults to 1 s at 25 Hz
Required: False
Default: 25
- hdf5_path: str
Path to the hdf5 data file
Required: True
- history_size: int
Number of timesteps as input; defaults to 1 s at 25 Hz
Required: False
Default: 25
- in_features: Features
List of features used as input; if None, use the default from the dataset
Required: False
Default: [{‘name’: ‘Coordinate_0’, ‘ids’: [0, 1, 2], ‘distance_unit’: ‘m’, ‘feature_class’: ‘prescyent.dataset.features.feature.coordinate.CoordinateXYZ’}, {‘name’: ‘Rotation_1’, ‘ids’: [3, 4, 5, 6, 7, 8, 9, 10, 11], ‘distance_unit’: ‘rad’, ‘feature_class’: ‘prescyent.dataset.features.feature.rotation.RotationRotMat’}]
- in_points: List[int]
Ids of the points used as input.
Required: False
Default: [2, 3, 4, 5, 7, 8, 9, 10, 12, 13, 14, 15, 17, 18, 19, 21, 22, 25, 26, 27, 29, 30]
- learning_type: LearningTypes
Method used to generate TrajectoryDataSamples
Required: False
Default: sequence_2_sequence
- loop_over_traj: bool
Make the trajectory loop over itself when generating training pairs
Required: False
Default: False
- name: str | None
Name of your dataset. WARNING: if you override the default value, AutoDataset won’t be able to load your dataset
Required: False
Default: None
- num_workers: int
See https://pytorch.org/docs/stable/data.html#single-and-multi-process-data-loading
Required: False
Default: 1
- out_features: Features
List of features used as output; if None, use the default from the dataset
Required: False
Default: [{‘name’: ‘Coordinate_0’, ‘ids’: [0, 1, 2], ‘distance_unit’: ‘m’, ‘feature_class’: ‘prescyent.dataset.features.feature.coordinate.CoordinateXYZ’}, {‘name’: ‘Rotation_1’, ‘ids’: [3, 4, 5, 6, 7, 8, 9, 10, 11], ‘distance_unit’: ‘rad’, ‘feature_class’: ‘prescyent.dataset.features.feature.rotation.RotationRotMat’}]
- out_points: List[int]
Ids of the points used as output.
Required: False
Default: [2, 3, 4, 5, 7, 8, 9, 10, 12, 13, 14, 15, 17, 18, 19, 21, 22, 25, 26, 27, 29, 30]
- persistent_workers: bool
See https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader
Required: False
Default: True
- pin_memory: bool
See https://pytorch.org/docs/stable/data.html#memory-pinning
Required: False
Default: True
- reverse_pair_ratio: float
Do data augmentation by reversing some trajectories’ sequences, with the given ratio (between 0 and 1) as the chance of occurring
Required: False
Default: 0
- save_samples_on_disk: bool
If True, we’ll use a temporary hdf5 file to store the (x, y) pairs, saving computation time during training at the cost of some init time and temporary disk space
Required: False
Default: True
- seed: int
A seed for all random operations in the dataset class
Required: False
Default: None
- subjects_test: List[str]
Subjects whose trajectories are placed in Trajectories.test
Required: False
Default: [‘S5’]
- subjects_train: List[str]
Subjects whose trajectories are placed in Trajectories.train
Required: False
Default: [‘S1’, ‘S6’, ‘S7’, ‘S8’, ‘S9’]
- subjects_val: List[str]
Subjects whose trajectories are placed in Trajectories.val
Required: False
Default: [‘S11’]
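Only hdf5_path is required; every other field overrides a default. A sketch (assuming prescyent is installed; the path and the action subset below are illustrative):

```python
# Sketch: configure H36M on a subset of actions at the default 25 Hz.
from prescyent.dataset.datasets.human36m.config import H36MDatasetConfig

config = H36MDatasetConfig(
    hdf5_path="data/h36m.hdf5",     # illustrative path to your hdf5 data file
    actions=["walking", "eating"],  # restrict to 2 of the 15 actions
    history_size=50,                # 2 s of input at 25 Hz
    future_size=25,                 # keep the default 1 s of output
)
```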
AndyDatasetConfig
- class prescyent.dataset.datasets.andydataset.config.AndyDatasetConfig(*, name: str | None = None, seed: int = None, batch_size: int = 128, num_workers: int = 1, persistent_workers: bool = True, pin_memory: bool = True, save_samples_on_disk: bool = True, learning_type: LearningTypes = LearningTypes.SEQ2SEQ, frequency: int = 10, history_size: int = 10, future_size: int = 10, in_features: Features = [{'distance_unit': 'm', 'feature_class': 'prescyent.dataset.features.feature.coordinate.CoordinateXYZ', 'ids': [0, 1, 2], 'name': 'Coordinate_0'}, {'distance_unit': 'rad', 'feature_class': 'prescyent.dataset.features.feature.rotation.RotationQuat', 'ids': [3, 4, 5, 6], 'name': 'Rotation_1'}], out_features: Features = [{'distance_unit': 'm', 'feature_class': 'prescyent.dataset.features.feature.coordinate.CoordinateXYZ', 'ids': [0, 1, 2], 'name': 'Coordinate_0'}, {'distance_unit': 'rad', 'feature_class': 'prescyent.dataset.features.feature.rotation.RotationQuat', 'ids': [3, 4, 5, 6], 'name': 'Rotation_1'}], in_points: List[int] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], out_points: List[int] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], context_keys: List[Literal['velocity', 'acceleration', 'angularVelocity', 'angularAcceleration', 'sensorFreeAcceleration', 'sensorMagneticField', 'sensorOrientation', 'jointAngle', 'jointAngleXZY', 'centerOfMass']] = [], convert_trajectories_beforehand: bool = True, loop_over_traj: bool = False, reverse_pair_ratio: float = 0, hdf5_path: str, shuffle_data_files: bool = True, participants: List[str] = [], make_joints_position_relative_to: int | None = None, ratio_train: float = 0.8, ratio_test: float = 0.15, ratio_val: float = 0.05, **extra_data: Any)
Bases:
TrajectoriesDatasetConfig
Pydantic Basemodel for AndyDataset configuration
- batch_size: int
Size of the batch of all dataloaders
Required: False
Default: 128
- context_keys: List[Literal['velocity', 'acceleration', 'angularVelocity', 'angularAcceleration', 'sensorFreeAcceleration', 'sensorMagneticField', 'sensorOrientation', 'jointAngle', 'jointAngleXZY', 'centerOfMass']]
List of the keys of the tensors we’ll pass as context to the predictor. Must be a subset of the existing context keys in the Dataset’s Trajectories
Required: False
Default: []
- convert_trajectories_beforehand: bool
If in_features and out_features allow it, convert the trajectories as a preprocessing step instead of in the dataloaders
Required: False
Default: True
- frequency: int
The frequency in Hz of the dataset. If it differs from the original data, we’ll use linear upsampling or downsampling of the data. Default is downsampling from 240 Hz to 10 Hz
Required: False
Default: 10
- future_size: int
Number of predicted timesteps; defaults to 1 s at 10 Hz
Required: False
Default: 10
- hdf5_path: str
Path to the hdf5 data file
Required: True
- history_size: int
Number of timesteps as input; defaults to 1 s at 10 Hz
Required: False
Default: 10
- in_features: Features
List of features used as input; if None, use the default from the dataset
Required: False
Default: [{‘name’: ‘Coordinate_0’, ‘ids’: [0, 1, 2], ‘distance_unit’: ‘m’, ‘feature_class’: ‘prescyent.dataset.features.feature.coordinate.CoordinateXYZ’}, {‘name’: ‘Rotation_1’, ‘ids’: [3, 4, 5, 6], ‘distance_unit’: ‘rad’, ‘feature_class’: ‘prescyent.dataset.features.feature.rotation.RotationQuat’}]
- in_points: List[int]
Ids of the points used as input.
Required: False
Default: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]
- learning_type: LearningTypes
Method used to generate TrajectoryDataSamples
Required: False
Default: sequence_2_sequence
- loop_over_traj: bool
Make the trajectory loop over itself when generating training pairs
Required: False
Default: False
- make_joints_position_relative_to: int | None
If None, positions are relative to the world; otherwise, relative to the joint with the given id
Required: False
Default: None
- name: str | None
Name of your dataset. WARNING: if you override the default value, AutoDataset won’t be able to load your dataset
Required: False
Default: None
- num_workers: int
See https://pytorch.org/docs/stable/data.html#single-and-multi-process-data-loading
Required: False
Default: 1
- out_features: Features
List of features used as output; if None, use the default from the dataset
Required: False
Default: [{‘name’: ‘Coordinate_0’, ‘ids’: [0, 1, 2], ‘distance_unit’: ‘m’, ‘feature_class’: ‘prescyent.dataset.features.feature.coordinate.CoordinateXYZ’}, {‘name’: ‘Rotation_1’, ‘ids’: [3, 4, 5, 6], ‘distance_unit’: ‘rad’, ‘feature_class’: ‘prescyent.dataset.features.feature.rotation.RotationQuat’}]
- out_points: List[int]
Ids of the points used as output.
Required: False
Default: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]
- participants: List[str]
List of the participants to consider
Required: False
Default: []
- persistent_workers: bool
See https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader
Required: False
Default: True
- pin_memory: bool
See https://pytorch.org/docs/stable/data.html#memory-pinning
Required: False
Default: True
- ratio_test: float
ratio of trajectories placed in Trajectories.test
Required: False
Default: 0.15
- ratio_train: float
ratio of trajectories placed in Trajectories.train
Required: False
Default: 0.8
- ratio_val: float
ratio of trajectories placed in Trajectories.val
Required: False
Default: 0.05
- reverse_pair_ratio: float
Do data augmentation by reversing some trajectories’ sequences, with the given ratio (between 0 and 1) as the chance of occurring
Required: False
Default: 0
- save_samples_on_disk: bool
If True, we’ll use a temporary hdf5 file to store the (x, y) pairs, saving computation time during training at the cost of some init time and temporary disk space
Required: False
Default: True
- seed: int
A seed for all random operations in the dataset class
Required: False
Default: None
- shuffle_data_files: bool
If True the list of files is shuffled
Required: False
Default: True
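Unlike the base class, context_keys here is constrained to the Literal values listed in the signature, so an unknown key fails validation. A sketch (assuming prescyent is installed; the path below is illustrative):

```python
# Sketch: configure AndyDataset with one context tensor and
# joint positions expressed relative to joint 0 instead of the world.
from prescyent.dataset.datasets.andydataset.config import AndyDatasetConfig

config = AndyDatasetConfig(
    hdf5_path="data/andydataset.hdf5",   # illustrative path
    context_keys=["centerOfMass"],       # must be one of the Literal values above
    make_joints_position_relative_to=0,  # None would keep world-frame positions
    seed=42,                             # fix the random file shuffle
)
```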
TeleopIcubDatasetConfig
- class prescyent.dataset.datasets.teleop_icub.config.TeleopIcubDatasetConfig(*, name: str | None = None, seed: int = None, batch_size: int = 128, num_workers: int = 1, persistent_workers: bool = True, pin_memory: bool = True, save_samples_on_disk: bool = True, learning_type: LearningTypes = LearningTypes.SEQ2SEQ, frequency: int = 10, history_size: int = 10, future_size: int = 10, in_features: Features = [{'distance_unit': 'm', 'feature_class': 'prescyent.dataset.features.feature.coordinate.CoordinateXYZ', 'ids': [0, 1, 2], 'name': 'Coordinate_0'}], out_features: Features = [{'distance_unit': 'm', 'feature_class': 'prescyent.dataset.features.feature.coordinate.CoordinateXYZ', 'ids': [0, 1, 2], 'name': 'Coordinate_0'}], in_points: List[int] = [0, 1, 2], out_points: List[int] = [0, 1, 2], context_keys: List[Literal['center_of_mass', 'icub_dof']] = [], convert_trajectories_beforehand: bool = True, loop_over_traj: bool = False, reverse_pair_ratio: float = 0, hdf5_path: str, subsets: List[str] | None = None, shuffle_data_files: bool = True, ratio_train: float = 0.7, ratio_test: float = 0.2, ratio_val: float = 0.1, **extra_data: Any)
Bases:
TrajectoriesDatasetConfig
Pydantic Basemodel for TeleopIcubDataset configuration
- batch_size: int
Size of the batch of all dataloaders
Required: False
Default: 128
- context_keys: List[Literal['center_of_mass', 'icub_dof']]
List of the keys of the tensors we’ll pass as context to the predictor. Must be a subset of the existing context keys in the Dataset’s Trajectories
Required: False
Default: []
- convert_trajectories_beforehand: bool
If in_features and out_features allow it, convert the trajectories as a preprocessing step instead of in the dataloaders
Required: False
Default: True
- frequency: int
The frequency in Hz of the dataset. If it differs from the original data, we’ll use linear upsampling or downsampling of the data. Default is downsampling from 100 Hz to 10 Hz
Required: False
Default: 10
- future_size: int
Number of predicted timesteps; defaults to 1 s at 10 Hz
Required: False
Default: 10
- hdf5_path: str
Path to the hdf5 data file
Required: True
- history_size: int
Number of timesteps as input; defaults to 1 s at 10 Hz
Required: False
Default: 10
- in_features: Features
List of features used as input; if None, use the default from the dataset
Required: False
Default: [{‘name’: ‘Coordinate_0’, ‘ids’: [0, 1, 2], ‘distance_unit’: ‘m’, ‘feature_class’: ‘prescyent.dataset.features.feature.coordinate.CoordinateXYZ’}]
- in_points: List[int]
Ids of the points used as input.
Required: False
Default: [0, 1, 2]
- learning_type: LearningTypes
Method used to generate TrajectoryDataSamples
Required: False
Default: sequence_2_sequence
- loop_over_traj: bool
Make the trajectory loop over itself when generating training pairs
Required: False
Default: False
- name: str | None
Name of your dataset. WARNING: if you override the default value, AutoDataset won’t be able to load your dataset
Required: False
Default: None
- num_workers: int
See https://pytorch.org/docs/stable/data.html#single-and-multi-process-data-loading
Required: False
Default: 1
- out_features: Features
List of features used as output; if None, use the default from the dataset
Required: False
Default: [{‘name’: ‘Coordinate_0’, ‘ids’: [0, 1, 2], ‘distance_unit’: ‘m’, ‘feature_class’: ‘prescyent.dataset.features.feature.coordinate.CoordinateXYZ’}]
- out_points: List[int]
Ids of the points used as output.
Required: False
Default: [0, 1, 2]
- persistent_workers: bool
See https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader
Required: False
Default: True
- pin_memory: bool
See https://pytorch.org/docs/stable/data.html#memory-pinning
Required: False
Default: True
- ratio_test: float
ratio of trajectories placed in Trajectories.test
Required: False
Default: 0.2
- ratio_train: float
ratio of trajectories placed in Trajectories.train
Required: False
Default: 0.7
- ratio_val: float
ratio of trajectories placed in Trajectories.val
Required: False
Default: 0.1
- reverse_pair_ratio: float
Do data augmentation by reversing some trajectories’ sequences, with the given ratio (between 0 and 1) as the chance of occurring
Required: False
Default: 0
- save_samples_on_disk: bool
If True we’ll use a tmp hdf5 file to store the x, y pairs and win some time at computation during training in the detriment of some init time and temporary disk space
Required: False
Default: True
- seed: int
A seed for all random operations in the dataset class
Required: False
Default: None
- shuffle_data_files: bool
If True the list of files is shuffled
Required: False
Default: True
- subsets: List[str] | None
Patterns used to find the list of files using an rglob method
Required: False
Default: None
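The train/test/val split here is ratio-based rather than subject-based. A sketch (assuming prescyent is installed; the path below is illustrative):

```python
# Sketch: configure TeleopIcub with an explicit split and one context key.
from prescyent.dataset.datasets.teleop_icub.config import TeleopIcubDatasetConfig

config = TeleopIcubDatasetConfig(
    hdf5_path="data/teleop_icub.hdf5",  # illustrative path
    context_keys=["center_of_mass"],    # one of the two allowed Literal keys
    ratio_train=0.8,                    # override the 0.7/0.2/0.1 defaults
    ratio_test=0.1,
    ratio_val=0.1,
)
```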
SCCDatasetConfig
- class prescyent.dataset.datasets.synthetic_circle_clusters.config.SCCDatasetConfig(*, name: str | None = None, seed: int = None, batch_size: int = 128, num_workers: int = 1, persistent_workers: bool = True, pin_memory: bool = True, save_samples_on_disk: bool = True, learning_type: LearningTypes = LearningTypes.SEQ2SEQ, frequency: int = 10, history_size: int = 10, future_size: int = 10, in_features: Features = [{'distance_unit': 'm', 'feature_class': 'prescyent.dataset.features.feature.coordinate.CoordinateXY', 'ids': [0, 1], 'name': 'Coordinate_0'}], out_features: Features = [{'distance_unit': 'm', 'feature_class': 'prescyent.dataset.features.feature.coordinate.CoordinateXY', 'ids': [0, 1], 'name': 'Coordinate_0'}], in_points: List[int] = [0], out_points: List[int] = [0], context_keys: List[str] = [], convert_trajectories_beforehand: bool = True, loop_over_traj: bool = False, reverse_pair_ratio: float = 0, ratio_train: float = 0.7, ratio_test: float = 0.15, ratio_val: float = 0.15, num_trajs: List[int] = [25, 25], starting_xs: List[float] = [0, 4], starting_ys: List[float] = [0, 0], radius: List[float] = [1, 1], radius_eps: float = 0.01, perturbation_range: float = 0.1, num_perturbation_points: int = 10, num_points: int = 100, **extra_data: Any)
Bases:
TrajectoriesDatasetConfig
Pydantic Basemodel for SCCDataset configuration
- batch_size: int
Size of the batch of all dataloaders
Required: False
Default: 128
- context_keys: List[str]
List of the keys of the tensors we’ll pass as context to the predictor. Must be a subset of the existing context keys in the Dataset’s Trajectories
Required: False
Default: []
- convert_trajectories_beforehand: bool
If in_features and out_features allow it, convert the trajectories as a preprocessing step instead of in the dataloaders
Required: False
Default: True
- frequency: int
The frequency in Hz of the dataset. If it differs from the original data, we’ll use linear upsampling or downsampling of the data
Required: False
Default: 10
- future_size: int
Number of timesteps predicted as output
Required: False
Default: 10
- history_size: int
Number of timesteps as input
Required: False
Default: 10
- in_features: Features
List of features used as input; if None, use the default from the dataset
Required: False
Default: [{‘name’: ‘Coordinate_0’, ‘ids’: [0, 1], ‘distance_unit’: ‘m’, ‘feature_class’: ‘prescyent.dataset.features.feature.coordinate.CoordinateXY’}]
- in_points: List[int]
Ids of the points used as input.
Required: False
Default: [0]
- learning_type: LearningTypes
Method used to generate TrajectoryDataSamples
Required: False
Default: sequence_2_sequence
- loop_over_traj: bool
Make the trajectory loop over itself when generating training pairs
Required: False
Default: False
- name: str | None
Name of your dataset. WARNING: if you override the default value, AutoDataset won’t be able to load your dataset
Required: False
Default: None
- num_perturbation_points: int
number of perturbation points
Required: False
Default: 10
- num_points: int
number of points in the final shape after smoothing
Required: False
Default: 100
- num_trajs: List[int]
Number of trajectories generated per cluster
Required: False
Default: [25, 25]
- num_workers: int
See https://pytorch.org/docs/stable/data.html#single-and-multi-process-data-loading
Required: False
Default: 1
- out_features: Features
List of features used as output; if None, use the default from the dataset
Required: False
Default: [{‘name’: ‘Coordinate_0’, ‘ids’: [0, 1], ‘distance_unit’: ‘m’, ‘feature_class’: ‘prescyent.dataset.features.feature.coordinate.CoordinateXY’}]
- out_points: List[int]
Ids of the points used as output.
Required: False
Default: [0]
- persistent_workers: bool
See https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader
Required: False
Default: True
- perturbation_range: float
Magnitude of the perturbation applied to the shape’s main points
Required: False
Default: 0.1
- pin_memory: bool
See https://pytorch.org/docs/stable/data.html#memory-pinning
Required: False
Default: True
- radius: List[float]
radius for each cluster
Required: False
Default: [1, 1]
- radius_eps: float
variation for radius
Required: False
Default: 0.01
- ratio_test: float
ratio of trajectories placed in Trajectories.test
Required: False
Default: 0.15
- ratio_train: float
ratio of trajectories placed in Trajectories.train
Required: False
Default: 0.7
- ratio_val: float
ratio of trajectories placed in Trajectories.val
Required: False
Default: 0.15
- reverse_pair_ratio: float
Do data augmentation by reversing some trajectories’ sequences, with the given ratio (between 0 and 1) as the chance of occurring
Required: False
Default: 0
- save_samples_on_disk: bool
If True, we’ll use a temporary hdf5 file to store the (x, y) pairs, saving computation time during training at the cost of some init time and temporary disk space
Required: False
Default: True
- seed: int
A seed for all random operations in the dataset class
Required: False
Default: None
- starting_xs: List[float]
x coordinate for each cluster
Required: False
Default: [0, 4]
- starting_ys: List[float]
y coordinate for each cluster
Required: False
Default: [0, 0]
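Because the trajectories are synthetic, no data file is needed; the per-cluster lists (num_trajs, starting_xs, starting_ys, radius) appear to be parallel, with one entry per cluster, as their defaults suggest. A sketch for three clusters (assuming prescyent is installed; values illustrative):

```python
# Sketch: generate three circle clusters instead of the default two.
from prescyent.dataset.datasets.synthetic_circle_clusters.config import SCCDatasetConfig

config = SCCDatasetConfig(
    num_trajs=[50, 50, 50],  # 50 trajectories per cluster
    starting_xs=[0, 4, 8],   # one x offset per cluster
    starting_ys=[0, 0, 2],   # one y offset per cluster
    radius=[1, 1, 2],        # one radius per cluster
    seed=42,                 # reproducible generation
)
```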
SSTDatasetConfig
- class prescyent.dataset.datasets.synthetic_simple_trajs.config.SSTDatasetConfig(*, name: str | None = None, seed: int = None, batch_size: int = 128, num_workers: int = 1, persistent_workers: bool = True, pin_memory: bool = True, save_samples_on_disk: bool = True, learning_type: LearningTypes = LearningTypes.SEQ2SEQ, frequency: int = 50, history_size: int = 50, future_size: int = 50, in_features: Features = [{'distance_unit': 'm', 'feature_class': 'prescyent.dataset.features.feature.coordinate.CoordinateXYZ', 'ids': [0, 1, 2], 'name': 'Coordinate_0'}, {'distance_unit': 'rad', 'feature_class': 'prescyent.dataset.features.feature.rotation.RotationEuler', 'ids': [3, 4, 5], 'name': 'Rotation_1'}], out_features: Features = [{'distance_unit': 'm', 'feature_class': 'prescyent.dataset.features.feature.coordinate.CoordinateXYZ', 'ids': [0, 1, 2], 'name': 'Coordinate_0'}, {'distance_unit': 'rad', 'feature_class': 'prescyent.dataset.features.feature.rotation.RotationEuler', 'ids': [3, 4, 5], 'name': 'Rotation_1'}], in_points: List[int] = [0], out_points: List[int] = [0], context_keys: List[str] = [], convert_trajectories_beforehand: bool = True, loop_over_traj: bool = False, reverse_pair_ratio: float = 0, num_traj: int = 1000, ratio_train: float = 0.8, ratio_test: float = 0.1, ratio_val: float = 0.1, min_x: float = 1.0, max_x: float = 2.0, min_y: float = -1.0, max_y: float = 1.0, min_z: float = -1.0, max_z: float = 1.0, starting_pose: List[float] = [0, 0, 0, 0, 0, 0], dt: float = 0.02, gain_lin: float = 1.0, gain_ang: float = 1.0, clamp_lin: float = 0.2, clamp_ang: float = 0.5, **extra_data: Any)
Bases:
TrajectoriesDatasetConfig
Pydantic Basemodel for SSTDataset configuration
- batch_size: int
Size of the batch of all dataloaders
Required: False
Default: 128
- clamp_ang: float
max value for the angular speed
Required: False
Default: 0.5
- clamp_lin: float
max value for the linear speed
Required: False
Default: 0.2
- context_keys: List[str]
List of the keys of the tensors we’ll pass as context to the predictor. Must be a subset of the existing context keys in the Dataset’s Trajectories
Required: False
Default: []
- convert_trajectories_beforehand: bool
If in_features and out_features allow it, convert the trajectories as a preprocessing step instead of in the dataloaders
Required: False
Default: True
- dt: float
Timestep of the controller in seconds (the inverse of its frequency)
Required: False
Default: 0.02
- frequency: int
The frequency in Hz of the dataset. If it differs from the original data, we’ll use linear upsampling or downsampling of the data
Required: False
Default: 50
- future_size: int
Number of timesteps predicted as output
Required: False
Default: 50
- gain_ang: float
angular gain for the “controller”
Required: False
Default: 1.0
- gain_lin: float
linear gain for the “controller”
Required: False
Default: 1.0
- history_size: int
Number of timesteps as input
Required: False
Default: 50
- in_features: Features
List of features used as input; if None, use the default from the dataset
Required: False
Default: [{‘name’: ‘Coordinate_0’, ‘ids’: [0, 1, 2], ‘distance_unit’: ‘m’, ‘feature_class’: ‘prescyent.dataset.features.feature.coordinate.CoordinateXYZ’}, {‘name’: ‘Rotation_1’, ‘ids’: [3, 4, 5], ‘distance_unit’: ‘rad’, ‘feature_class’: ‘prescyent.dataset.features.feature.rotation.RotationEuler’}]
- in_points: List[int]
Ids of the points used as input.
Required: False
Default: [0]
- learning_type: LearningTypes
Method used to generate TrajectoryDataSamples
Required: False
Default: sequence_2_sequence
- loop_over_traj: bool
Make the trajectory loop over itself when generating training pairs
Required: False
Default: False
- max_x: float
max value for x
Required: False
Default: 2.0
- max_y: float
max value for y
Required: False
Default: 1.0
- max_z: float
max value for z
Required: False
Default: 1.0
- min_x: float
min value for x
Required: False
Default: 1.0
- min_y: float
min value for y
Required: False
Default: -1.0
- min_z: float
min value for z
Required: False
Default: -1.0
- name: str | None
Name of your dataset. WARNING: if you override the default value, AutoDataset won’t be able to load your dataset
Required: False
Default: None
- num_traj: int
Number of trajectories to generate
Required: False
Default: 1000
- num_workers: int
See https://pytorch.org/docs/stable/data.html#single-and-multi-process-data-loading
Required: False
Default: 1
- out_features: Features
List of features used as output; if None, use the default from the dataset
Required: False
Default: [{‘name’: ‘Coordinate_0’, ‘ids’: [0, 1, 2], ‘distance_unit’: ‘m’, ‘feature_class’: ‘prescyent.dataset.features.feature.coordinate.CoordinateXYZ’}, {‘name’: ‘Rotation_1’, ‘ids’: [3, 4, 5], ‘distance_unit’: ‘rad’, ‘feature_class’: ‘prescyent.dataset.features.feature.rotation.RotationEuler’}]
- out_points: List[int]
Ids of the points used as output.
Required: False
Default: [0]
- persistent_workers: bool
See https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader
Required: False
Default: True
- pin_memory: bool
See https://pytorch.org/docs/stable/data.html#memory-pinning
Required: False
Default: True
- ratio_test: float
ratio of trajectories placed in Trajectories.test
Required: False
Default: 0.1
- ratio_train: float
ratio of trajectories placed in Trajectories.train
Required: False
Default: 0.8
- ratio_val: float
ratio of trajectories placed in Trajectories.val
Required: False
Default: 0.1
- reverse_pair_ratio: float
Do data augmentation by reversing some trajectories’ sequences, with the given ratio (between 0 and 1) as the chance of occurring
Required: False
Default: 0
- save_samples_on_disk: bool
If True, we’ll use a temporary hdf5 file to store the (x, y) pairs, saving computation time during training at the cost of some init time and temporary disk space
Required: False
Default: True
- seed: int
A seed for all random operations in the dataset class
Required: False
Default: None
- starting_pose: List[float]
position used as the starting point of all generated trajectories, the features are [CoordinateXYZ(range(3)), RotationEuler(range(3, 6))]
Required: False
Default: [0, 0, 0, 0, 0, 0]
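Reading the fields together, the min/max bounds seem to define the region where goals are sampled, while the gain and clamp parameters shape the "controller" that drives each trajectory from starting_pose. A sketch (assuming prescyent is installed; values illustrative):

```python
# Sketch: generate simple 6-DoF trajectories with a tighter speed cap.
from prescyent.dataset.datasets.synthetic_simple_trajs.config import SSTDatasetConfig

config = SSTDatasetConfig(
    num_traj=500,            # generate 500 trajectories
    min_x=1.0, max_x=2.0,    # keep the default x bounds, shown explicitly
    clamp_lin=0.1,           # cap linear speed below the 0.2 default
    clamp_ang=0.5,           # keep the default angular speed cap
    seed=42,                 # reproducible generation
)
```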
Predictor
PredictorConfig
- class prescyent.predictor.config.PredictorConfig(*, name: str | None = None, version: int | None = None, save_path: str = 'data/models', dataset_config: TrajectoriesDatasetConfig, scaler_config: ScalerConfig | None = None)
Bases:
BaseConfig
Pydantic Basemodel for predictor configuration. It includes the dataset_config and the scaler_config!
- dataset_config: TrajectoriesDatasetConfig
The TrajectoriesDatasetConfig used to understand the dataset and its tensors
Required: True
- name: str | None
The name of the predictor. If None, defaults to the class value. WARNING: if you override the default value, AutoPredictor won’t be able to load your Predictor
Required: False
Default: None
- save_path: str
Directory where the model will log and save
Required: False
Default: data/models
- scaler_config: ScalerConfig | None
The ScalerConfig used to instantiate the scaler of this predictor. If None, we’ll not use a scaler ahead of the predictor
Required: False
Default: None
- version: int | None
A version number for this instance of the predictor. If None, we’ll use the TensorBoardLogger logic to acquire a version number from the log path
Required: False
Default: None
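A minimal construction sketch, assuming the class paths documented above; all values are illustrative, and the Optional required fields are set to None to fall back on the dataset’s defaults:

```python
from prescyent.dataset.config import TrajectoriesDatasetConfig
from prescyent.predictor.config import PredictorConfig

# Illustrative values only.
dataset_config = TrajectoriesDatasetConfig(
    frequency=10,
    history_size=10,
    future_size=5,
    in_features=None,   # None -> use the dataset's default features
    out_features=None,
    in_points=None,     # None -> use the dataset's default points
    out_points=None,
)
predictor_config = PredictorConfig(
    dataset_config=dataset_config,
    save_path="data/models",
)
```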
PrompConfig
- class prescyent.predictor.promp.config.PrompConfig(*, name: str | None = None, version: int | None = None, save_path: str = 'data/models', dataset_config: TrajectoriesDatasetConfig, scaler_config: ScalerConfig | None = None, num_bf: float = 20, ridge_factor: float = 1e-10)
Bases:
PredictorConfig
Pydantic Basemodel for PrompPredictor configuration
- dataset_config: TrajectoriesDatasetConfig
The TrajectoriesDatasetConfig used to understand the dataset and its tensors
Required: True
- name: str | None
The name of the predictor. If None, defaults to the class value. WARNING: if you override the default value, AutoPredictor won’t be able to load your Predictor
Required: False
Default: None
- num_bf: float
Number of basis functions; lower values yield smoother trajectories
Required: False
Default: 20
- ridge_factor: float
Regularization parameter of the ridge regression
Required: False
Default: 1e-10
- save_path: str
Directory where the model will log and save
Required: False
Default: data/models
- scaler_config: ScalerConfig | None
The ScalerConfig used to instantiate this predictor’s scaler. If None, we’ll not use a scaler ahead of the predictor
Required: False
Default: None
- version: int | None
A version number for this instance of the predictor. If None, we’ll use the TensorBoardLogger logic to acquire a version number from the log path
Required: False
Default: None
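To illustrate ridge_factor, here is a minimal 1-D ridge regression (a sketch of the regularization idea only, not the ProMP implementation): the closed-form weight is shrunk toward 0 as the factor grows, and the tiny default 1e-10 mostly just stabilizes the inversion:

```python
def ridge_weight_1d(xs, ys, ridge_factor):
    """Closed-form ridge solution w = (sum(x^2) + lambda)^-1 * sum(x*y)
    for a single weight; `ridge_factor` plays the role of lambda."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return sxy / (sxx + ridge_factor)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]    # y = 2x exactly
w_tiny = ridge_weight_1d(xs, ys, 1e-10)      # ~2.0: negligible shrinkage
w_big = ridge_weight_1d(xs, ys, 14.0)        # 1.0: strong shrinkage toward 0
```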
ModuleConfig
- class prescyent.predictor.lightning.configs.module_config.ModuleConfig(*, name: str | None = None, version: int | None = None, save_path: str = 'data/models', dataset_config: TrajectoriesDatasetConfig, scaler_config: ScalerConfig | None = None, loss_fn: LossFunctions | None = None, do_lipschitz_continuation: bool = False, dropout_value: float | None = None, deriv_on_last_frame: bool | None = False, deriv_output: bool | None = False)
Bases:
PredictorConfig
Pydantic Basemodel for Torch Module configuration
- dataset_config: TrajectoriesDatasetConfig
The TrajectoriesDatasetConfig used to understand the dataset and its tensors
Required: True
- deriv_on_last_frame: bool | None
If True, we’ll make the whole input fed to the model relative to its last frame; the model’s output is also made relative to this frame
Required: False
Default: False
- deriv_output: bool | None
If True, the model’s output is relative to the last frame of the input
Required: False
Default: False
- do_lipschitz_continuation: bool
If True, we’ll apply Spectral Normalization to every layer of the model
Required: False
Default: False
- dropout_value: float | None
Value for the torch Dropout layer applied as one of the first steps of the torch module’s forward method. Defaults to None, which results in no Dropout layer. See https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html
Required: False
Default: None
- loss_fn: LossFunctions | None
Define what loss function will be used to train your model
Required: False
Default: None
- name: str | None
The name of the predictor. If None, defaults to the class value. WARNING: if you override the default value, AutoPredictor won’t be able to load your Predictor
Required: False
Default: None
- save_path: str
Directory where the model will log and save
Required: False
Default: data/models
- scaler_config: ScalerConfig | None
The ScalerConfig used to instantiate this predictor’s scaler. If None, we’ll not use a scaler ahead of the predictor
Required: False
Default: None
- version: int | None
A version number for this instance of the predictor. If None, we’ll use the TensorBoardLogger logic to acquire a version number from the log path
Required: False
Default: None
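The effect of deriv_on_last_frame can be sketched in pure Python on a 1-D sequence (helper names are ours, not prescyent’s): the input is expressed relative to its last frame, and the relative model output is shifted back to absolute values:

```python
def make_relative_to_last(seq):
    """Express every value relative to the sequence's last frame."""
    last = seq[-1]
    return [v - last for v in seq], last

def restore_absolute(rel_seq, last):
    """Undo make_relative_to_last on a (relative) model output."""
    return [v + last for v in rel_seq]

history = [1.0, 2.0, 4.0]
rel_history, last = make_relative_to_last(history)    # [-3.0, -2.0, 0.0]
# A model output expressed relative to `last` is shifted back:
rel_output = [1.0, 2.5]
absolute_output = restore_absolute(rel_output, last)  # [5.0, 6.5]
```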
Seq2SeqConfig
- class prescyent.predictor.lightning.models.sequence.seq2seq.config.Seq2SeqConfig(*, name: str | None = None, version: int | None = None, save_path: str = 'data/models', dataset_config: TrajectoriesDatasetConfig, scaler_config: ScalerConfig | None = None, loss_fn: LossFunctions | None = None, do_lipschitz_continuation: bool = False, dropout_value: float | None = None, deriv_on_last_frame: bool | None = False, deriv_output: bool | None = False, hidden_size: int = 128, num_layers: Annotated[int, Gt(gt=0)] = 2)
Bases:
ModuleConfig
Pydantic Basemodel for Seq2Seq Module configuration
- dataset_config: TrajectoriesDatasetConfig
The TrajectoriesDatasetConfig used to understand the dataset and its tensors
Required: True
- deriv_on_last_frame: bool | None
If True, we’ll make the whole input fed to the model relative to its last frame; the model’s output is also made relative to this frame
Required: False
Default: False
- deriv_output: bool | None
If True, the model’s output is relative to the last frame of the input
Required: False
Default: False
- do_lipschitz_continuation: bool
If True, we’ll apply Spectral Normalization to every layer of the model
Required: False
Default: False
- dropout_value: float | None
Value for the torch Dropout layer applied as one of the first steps of the torch module’s forward method. Defaults to None, which results in no Dropout layer. See https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html
Required: False
Default: None
- hidden_size: int
Hidden size of the GRU layers
Required: False
Default: 128
- loss_fn: LossFunctions | None
Define what loss function will be used to train your model
Required: False
Default: None
- name: str | None
The name of the predictor. If None, defaults to the class value. WARNING: if you override the default value, AutoPredictor won’t be able to load your Predictor
Required: False
Default: None
- num_layers: int
Number of GRU layers
Required: False
Default: 2
- save_path: str
Directory where the model will log and save
Required: False
Default: data/models
- scaler_config: ScalerConfig | None
The ScalerConfig used to instantiate this predictor’s scaler. If None, we’ll not use a scaler ahead of the predictor
Required: False
Default: None
- version: int | None
A version number for this instance of the predictor. If None, we’ll use the TensorBoardLogger logic to acquire a version number from the log path
Required: False
Default: None
SiMLPeConfig
- class prescyent.predictor.lightning.models.sequence.simlpe.config.SiMLPeConfig(*, name: str | None = None, version: int | None = None, save_path: str = 'data/models', dataset_config: TrajectoriesDatasetConfig, scaler_config: ScalerConfig | None = None, loss_fn: LossFunctions | None = None, do_lipschitz_continuation: bool = False, dropout_value: float | None = None, deriv_on_last_frame: bool | None = False, deriv_output: bool | None = False, num_layers: Annotated[int, Gt(gt=0)] = 48, dct: bool = True, spatial_fc_only: bool = False, temporal_fc_in: bool = False, temporal_fc_out: bool = False, mpl_blocks_norm: Literal[TrajectoryDimensions.BATCH, TrajectoryDimensions.ALL, TrajectoryDimensions.SPATIAL, TrajectoryDimensions.TEMPORAL] | None = TrajectoryDimensions.SPATIAL)
Bases:
ModuleConfig
Pydantic Basemodel for SiMLPe Module configuration
- dataset_config: TrajectoriesDatasetConfig
The TrajectoriesDatasetConfig used to understand the dataset and its tensor
Required: True
- dct: bool
If True, we apply a Discrete Cosine Transform over the input and an inverse Discrete Cosine Transform over the output
Required: False
Default: True
- deriv_on_last_frame: bool | None
If True, we’ll make the whole input fed to the model relative to its last frame; the model’s output is also made relative to this frame
Required: False
Default: False
- deriv_output: bool | None
If True, the model’s output is relative to the last frame of the input
Required: False
Default: False
- do_lipschitz_continuation: bool
If True, we’ll apply Spectral Normalization to every layer of the model
Required: False
Default: False
- dropout_value: float | None
Value for the torch Dropout layer applied as one of the first steps of the torch module’s forward method. Defaults to None, which results in no Dropout layer. See https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html
Required: False
Default: None
- loss_fn: LossFunctions | None
Define what loss function will be used to train your model
Required: False
Default: None
- mpl_blocks_norm: Literal[TrajectoryDimensions.BATCH, TrajectoryDimensions.ALL, TrajectoryDimensions.SPATIAL, TrajectoryDimensions.TEMPORAL] | None
Normalization used in each MLPBlock
Required: False
Default: [2, 3]
- name: str | None
The name of the predictor. If None, defaults to the class value. WARNING: if you override the default value, AutoPredictor won’t be able to load your Predictor
Required: False
Default: None
- num_layers: int
Number of MLPBlocks
Required: False
Default: 48
- save_path: str
Directory where the model will log and save
Required: False
Default: data/models
- scaler_config: ScalerConfig | None
The ScalerConfig used to instantiate this predictor’s scaler. If None, we’ll not use a scaler ahead of the predictor
Required: False
Default: None
- spatial_fc_only: bool
If True, the MLPBlocks take the spatial features as inputs, else the temporal features
Required: False
Default: False
- temporal_fc_in: bool
If True, the first FC layer takes the temporal features as inputs, else the spatial features
Required: False
Default: False
- temporal_fc_out: bool
If True, the last FC layer takes the temporal features as inputs, else the spatial features
Required: False
Default: False
- version: int | None
A version number for this instance of the predictor. If None, we’ll use the TensorBoardLogger logic to acquire a version number from the log path
Required: False
Default: None
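To illustrate the dct flag, here is a minimal orthonormal DCT-II and its inverse (a pure-Python sketch, not SiMLPe’s implementation): the input is encoded into frequency space, the model operates there, and the inverse transform recovers a temporal signal. The pair below round-trips exactly up to float precision:

```python
import math

def dct(x):
    """Orthonormal DCT-II of a 1-D signal."""
    n = len(x)
    out = []
    for k in range(n):
        a = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(a * sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                           for i in range(n)))
    return out

def idct(x):
    """Inverse of the orthonormal DCT-II above (DCT-III with the same scaling)."""
    n = len(x)
    return [sum((math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n))
                * x[k] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for k in range(n))
            for i in range(n)]

signal = [1.0, 2.0, 3.0, 2.0]
reconstructed = idct(dct(signal))   # matches `signal` up to float precision
```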
MlpConfig
- class prescyent.predictor.lightning.models.sequence.mlp.config.MlpConfig(*, name: str | None = None, version: int | None = None, save_path: str = 'data/models', dataset_config: TrajectoriesDatasetConfig, scaler_config: ScalerConfig | None = None, loss_fn: LossFunctions | None = None, do_lipschitz_continuation: bool = False, dropout_value: float | None = None, deriv_on_last_frame: bool | None = False, deriv_output: bool | None = False, hidden_size: int = 64, num_layers: Annotated[int, Gt(gt=0)] = 2, activation: ActivationFunctions | None = ActivationFunctions.RELU, context_size: int | None = None)
Bases:
ModuleConfig
Pydantic Basemodel for MLP Module configuration
- activation: ActivationFunctions | None
Activation function used between layers
Required: False
Default: relu
- context_size: int | None
Number of features of the context tensors used as inputs. See dataset.context_size_sum
Required: False
Default: None
- dataset_config: TrajectoriesDatasetConfig
The TrajectoriesDatasetConfig used to understand the dataset and its tensors
Required: True
- deriv_on_last_frame: bool | None
If True, we’ll make the whole input fed to the model relative to its last frame; the model’s output is also made relative to this frame
Required: False
Default: False
- deriv_output: bool | None
If True, the model’s output is relative to the last frame of the input
Required: False
Default: False
- do_lipschitz_continuation: bool
If True, we’ll apply Spectral Normalization to every layer of the model
Required: False
Default: False
- dropout_value: float | None
Value for the torch Dropout layer applied as one of the first steps of the torch module’s forward method. Defaults to None, which results in no Dropout layer. See https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html
Required: False
Default: None
- hidden_size: int
Size of the hidden FC Layers in the MLP
Required: False
Default: 64
- loss_fn: LossFunctions | None
Define what loss function will be used to train your model
Required: False
Default: None
- name: str | None
The name of the predictor. If None, defaults to the class value. WARNING: if you override the default value, AutoPredictor won’t be able to load your Predictor
Required: False
Default: None
- num_layers: int
Number of FC layers in the MLP
Required: False
Default: 2
- save_path: str
Directory where the model will log and save
Required: False
Default: data/models
- scaler_config: ScalerConfig | None
The ScalerConfig used to instantiate this predictor’s scaler. If None, we’ll not use a scaler ahead of the predictor
Required: False
Default: None
- version: int | None
A version number for this instance of the predictor. If None, we’ll use the TensorBoardLogger logic to acquire a version number from the log path
Required: False
Default: None
SARLSTMConfig
- class prescyent.predictor.lightning.models.autoreg.sarlstm.config.SARLSTMConfig(*, name: str | None = None, version: int | None = None, save_path: str = 'data/models', dataset_config: TrajectoriesDatasetConfig, scaler_config: ScalerConfig | None = None, loss_fn: LossFunctions | None = None, do_lipschitz_continuation: bool = False, dropout_value: float | None = None, deriv_on_last_frame: bool | None = False, deriv_output: bool | None = False, hidden_size: int = 128, num_layers: int = 2)
Bases:
ModuleConfig
Pydantic Basemodel for SARLSTM configuration
- dataset_config: TrajectoriesDatasetConfig
The TrajectoriesDatasetConfig used to understand the dataset and its tensors
Required: True
- deriv_on_last_frame: bool | None
If True, we’ll make the whole input fed to the model relative to its last frame; the model’s output is also made relative to this frame
Required: False
Default: False
- deriv_output: bool | None
If True, the model’s output is relative to the last frame of the input
Required: False
Default: False
- do_lipschitz_continuation: bool
If True, we’ll apply Spectral Normalization to every layer of the model
Required: False
Default: False
- dropout_value: float | None
Value for the torch Dropout layer applied as one of the first steps of the torch module’s forward method. Defaults to None, which results in no Dropout layer. See https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html
Required: False
Default: None
- hidden_size: int
Hidden size of the LSTMCells
Required: False
Default: 128
- loss_fn: LossFunctions | None
Define what loss function will be used to train your model
Required: False
Default: None
- name: str | None
The name of the predictor. If None, defaults to the class value. WARNING: if you override the default value, AutoPredictor won’t be able to load your Predictor
Required: False
Default: None
- num_layers: int
Number of LSTMCells
Required: False
Default: 2
- save_path: str
Directory where the model will log and save
Required: False
Default: data/models
- scaler_config: ScalerConfig | None
The ScalerConfig used to instantiate this predictor’s scaler. If None, we’ll not use a scaler ahead of the predictor
Required: False
Default: None
- version: int | None
A version number for this instance of the predictor. If None, we’ll use the TensorBoardLogger logic to acquire a version number from the log path
Required: False
Default: None
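SARLSTM predicts autoregressively: each predicted frame is appended to the input before predicting the next one. A pure-Python sketch of that loop, with a toy stand-in for the recurrent model step (names and the toy model are ours):

```python
def autoregress(history, step, future_size):
    """Generate `future_size` frames, feeding each prediction back as input.

    `step` stands in for one pass of the recurrent model: it maps the
    sequence seen so far to the next frame.
    """
    seq = list(history)
    predictions = []
    for _ in range(future_size):
        nxt = step(seq)
        predictions.append(nxt)
        seq.append(nxt)          # the prediction becomes part of the input
    return predictions

# Toy "model": continue the sequence with its last increment.
step = lambda seq: seq[-1] + (seq[-1] - seq[-2])
print(autoregress([0.0, 1.0], step, 3))   # [2.0, 3.0, 4.0]
```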
Training
TrainingConfig
- class prescyent.predictor.lightning.configs.training_config.TrainingConfig(*, lr: float = 0.001, weight_decay: float = 0.01, use_scheduler: bool = False, max_lr: float = 0.01, max_epochs: int = 100, max_steps: int = -1, accelerator: str = 'auto', devices: str | int | List[int] = 'auto', accumulate_grad_batches: int = 1, log_every_n_steps: int = 1, gradient_clip_val: float | None = None, gradient_clip_algorithm: str | None = None, early_stopping_value: str = 'Val/loss', early_stopping_patience: int | None = None, early_stopping_mode: str = 'min', use_deterministic_algorithms: bool = True, use_auto_lr: bool = False, seed: int | None = None, used_profiler: Profilers | None = None)
Bases:
BaseConfig
Pydantic Basemodel for Pytorch Lightning Training configuration
- accelerator: str
See https://lightning.ai/docs/pytorch/stable/common/trainer.html#accelerator
Required: False
Default: auto
- accumulate_grad_batches: int
See https://lightning.ai/docs/pytorch/stable/common/trainer.html#accumulate-grad-batches
Required: False
Default: 1
- devices: str | int | List[int]
See https://lightning.ai/docs/pytorch/stable/common/trainer.html#devices
Required: False
Default: auto
- early_stopping_mode: str
The method used to compare the monitored value. See https://lightning.ai/docs/pytorch/stable/common/trainer.html#early-stopping-mode
Required: False
Default: min
- early_stopping_patience: int | None
The number of epochs without improvement before stopping the training. See https://lightning.ai/docs/pytorch/stable/common/trainer.html#early-stopping-patience
Required: False
Default: None
- early_stopping_value: str
The value to be monitored for early stopping. See https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.callbacks.EarlyStopping.html#lightning.pytorch.callbacks.EarlyStopping
Required: False
Default: Val/loss
- gradient_clip_algorithm: str | None
See https://lightning.ai/docs/pytorch/stable/common/trainer.html#init
Required: False
Default: None
- gradient_clip_val: float | None
See https://lightning.ai/docs/pytorch/stable/common/trainer.html#gradient-clip-val
Required: False
Default: None
- log_every_n_steps: int
See https://lightning.ai/docs/pytorch/stable/common/trainer.html#log-every-n-steps
Required: False
Default: 1
- lr: float
The learning rate used by the Optimizer during training
Required: False
Default: 0.001
- max_epochs: int
See https://lightning.ai/docs/pytorch/stable/common/trainer.html#max-epochs
Required: False
Default: 100
- max_lr: float
Serves as the upper limit for the learning rate if the scheduler is used
Required: False
Default: 0.01
- max_steps: int
See https://lightning.ai/docs/pytorch/stable/common/trainer.html#max-steps
Required: False
Default: -1
- seed: int | None
Seed used during training and any predictor random operation
Required: False
Default: None
- use_auto_lr: bool
If True, lr will be determined by Tuner.lr_find(). See https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.tuner.tuning.Tuner.html#lightning.pytorch.tuner.tuning.Tuner.lr_find
Required: False
Default: False
- use_deterministic_algorithms: bool
Sets torch.use_deterministic_algorithms and Trainer’s deterministic flag to True
Required: False
Default: True
- use_scheduler: bool
If True, we’ll use a OneCycleLR scheduler for the learning rate. See https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html
Required: False
Default: False
- used_profiler: Profilers | None
Profiler to use during training. See https://lightning.ai/docs/pytorch/stable/tuning/profiler_basic.html
Required: False
Default: None
- weight_decay: float
The weight decay used by the Optimizer during training
Required: False
Default: 0.01
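The early_stopping_* fields follow the usual patience logic. The sketch below is our own illustration of that logic, not Lightning’s EarlyStopping callback; it shows when a run would stop for mode='min':

```python
def stop_epoch(monitored, patience, mode="min"):
    """Return the epoch at which training would stop, or None.

    Stop after `patience` consecutive epochs without improvement of the
    monitored value (e.g. "Val/loss" with mode="min").
    """
    better = (lambda a, b: a < b) if mode == "min" else (lambda a, b: a > b)
    best, wait = monitored[0], 0
    for epoch, value in enumerate(monitored[1:], start=1):
        if better(value, best):
            best, wait = value, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# Val/loss plateaus after epoch 2; with patience=2 training stops at epoch 4.
print(stop_epoch([1.0, 0.8, 0.7, 0.7, 0.75], patience=2))   # 4
```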
Scaler
ScalerConfig
- class prescyent.scaler.config.ScalerConfig(*, scaler: Scalers | None = Scalers.STANDARDIZATION, scaling_axis: Literal[TrajectoryDimensions.FEATURE, TrajectoryDimensions.POINT, TrajectoryDimensions.SPATIAL, TrajectoryDimensions.TEMPORAL, None] = TrajectoryDimensions.SPATIAL, do_feature_wise_scaling: bool = False, scale_rotations: bool = False)
Bases:
BaseConfig
Pydantic Basemodel for Scaling configuration
- do_feature_wise_scaling: bool
If True, we’ll train a scaler for each feature, else we scale over all features with one scaler. Defaults to False
Required: False
Default: False
- scale_rotations: bool
If False and do_feature_wise_scaling is True, rotations will not be scaled. Defaults to False
Required: False
Default: False
- scaler: Scalers | None
Scaling method to use. If None, we will not use scaling
Required: False
Default: standardization
- scaling_axis: Literal[TrajectoryDimensions.FEATURE, TrajectoryDimensions.POINT, TrajectoryDimensions.SPATIAL, TrajectoryDimensions.TEMPORAL, None]
Dimensions on which the scaling will be applied. If None, we will not use scaling
Required: False
Default: [2, 3]
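With the default Scalers.STANDARDIZATION, values are shifted and scaled to zero mean and unit variance over the configured dimensions. A minimal 1-D sketch of the idea (our own, not prescyent’s scaler, which works on tensors over the scaling_axis dimensions):

```python
import math

def fit_standardize(values):
    """Fit a standardization scaler: returns (mean, std)."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return mean, std

def standardize(values, mean, std):
    """Apply the fitted scaler: zero mean, unit variance output."""
    return [(v - mean) / std for v in values]

data = [2.0, 4.0, 6.0]
mean, std = fit_standardize(data)       # mean = 4.0, std ~ 1.633
scaled = standardize(data, mean, std)   # zero mean, unit variance
```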