FunctionalChaosAnalysis

class persalys.FunctionalChaosAnalysis(*args)

Create a Functional chaos analysis.

Parameters
name : str

Name

designOfExperiment : DesignOfExperiment

Design of experiments

Examples

>>> import openturns as ot
>>> import persalys

Create the model:

>>> from math import pi
>>> ot.RandomGenerator.SetSeed(0)
>>> xi1 = persalys.Input('xi1', ot.Uniform(-pi, pi))
>>> xi2 = persalys.Input('xi2', ot.Uniform(-pi, pi))
>>> xi3 = persalys.Input('xi3', ot.Uniform(-pi, pi))
>>> y0 = persalys.Output('y0')
>>> myPhysicalModel = persalys.SymbolicPhysicalModel('myPhysicalModel', [xi1, xi2, xi3], [y0], ['sin(xi1) + 7. * (sin(xi2)) ^ 2 + 0.1 * xi3^4 * sin(xi1)'])

Create the design of experiments:

>>> aDesign = persalys.FixedDesignOfExperiment('aDesign', myPhysicalModel)
>>> inputSample = ot.LHSExperiment(myPhysicalModel.getDistribution(), 250).generate()
>>> aDesign.setOriginalInputSample(inputSample)
>>> aDesign.run()

Create the Functional Chaos Analysis:

>>> chaos = persalys.FunctionalChaosAnalysis('chaos', aDesign)
>>> chaos.setChaosDegree(6)
>>> chaos.setSparseChaos(False)
>>> chaos.setLeaveOneOutValidation(False)
>>> chaos.run()

Get the result:

>>> chaosResult = chaos.getResult()
>>> sobolResult = chaosResult.getSobolResult()

Methods

analyticalValidation(self)

Whether an analytical validation is requested.

getChaosDegree(self)

Chaos degree accessor.

getClassName(self)

Accessor to the object’s name.

getDesignOfExperiment(self)

Design of experiments accessor.

getDistribution(self)

Input distribution accessor.

getEffectiveInputSample(self)

Effective input sample accessor.

getEffectiveOutputSample(self)

Effective output sample accessor.

getErrorMessage(self)

Error message accessor.

getId(self)

Accessor to the object’s id.

getInterestVariables(self)

Get the variables to analyse.

getKFoldValidationNumberOfFolds(self)

Number of folds accessor.

getKFoldValidationSeed(self)

Seed accessor.

getName(self)

Accessor to the object’s name.

getPythonScript(self)

Python script accessor.

getResult(self)

Results accessor.

getShadowedId(self)

Accessor to the object’s shadowed id.

getSparseChaos(self)

Whether it is sparse.

getTestSampleValidationPercentageOfPoints(self)

Percentage of points accessor.

getTestSampleValidationSeed(self)

Seed accessor.

getVisibility(self)

Accessor to the object’s visibility state.

getWarningMessage(self)

Warning message accessor.

hasName(self)

Test if the object is named.

hasValidResult(self)

Whether the analysis has been run.

hasVisibleName(self)

Test if the object has a distinguishable name.

isReliabilityAnalysis(self)

Whether the analysis involves reliability.

isRunning(self)

Whether the analysis is running.

kFoldValidation(self)

Whether a k-Fold cross-validation is requested.

leaveOneOutValidation(self)

Whether a validation by leave-one-out is requested.

run(self)

Launch the analysis.

setAnalyticalValidation(self, validation)

Whether an analytical validation is requested.

setChaosDegree(self, degree)

Chaos degree accessor.

setInterestVariables(self, variablesNames)

Set the variables to analyse.

setKFoldValidation(self, validation)

Whether a k-Fold cross-validation is requested.

setKFoldValidationNumberOfFolds(self, nbFolds)

Number of folds accessor.

setKFoldValidationSeed(self, seed)

Seed accessor.

setLeaveOneOutValidation(self, validation)

Whether a validation by leave-one-out is requested.

setName(self, name)

Accessor to the object’s name.

setShadowedId(self, id)

Accessor to the object’s shadowed id.

setSparseChaos(self, sparse)

Whether it is sparse.

setTestSampleValidation(self, validation)

Whether a validation with a test sample is requested.

setTestSampleValidationPercentageOfPoints(…)

Percentage of points accessor.

setTestSampleValidationSeed(self, seed)

Seed accessor.

setVisibility(self, visible)

Accessor to the object’s visibility state.

testSampleValidation(self)

Whether a validation with a test sample is requested.

canBeLaunched

getElapsedTime

getParentObserver

__init__(self, *args)

Initialize self. See help(type(self)) for accurate signature.

analyticalValidation(self)

Whether an analytical validation is requested.

Returns
validation : bool

Whether an analytical validation is requested. The analytical validation is an approximation of the leave-one-out result.

getChaosDegree(self)

Chaos degree accessor.

Returns
degree : int

Chaos degree. The default is 2.
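The degree drives the size of the chaos basis: a full (non-sparse) polynomial basis of total degree p in d input dimensions has C(d + p, p) terms. A quick plain-Python illustration (not persalys code):

```python
from math import comb

def full_basis_size(d, p):
    """Number of terms in a full polynomial chaos basis of total
    degree p in d input dimensions: C(d + p, p)."""
    return comb(d + p, p)

# With the 3 inputs of the example above:
print(full_basis_size(3, 2))  # 10 terms at the default degree 2
print(full_basis_size(3, 6))  # 84 terms at degree 6, as in the example
```

This is why high degrees in moderate dimension call for a large design of experiments, or for the sparse option.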

getClassName(self)

Accessor to the object’s name.

Returns
class_name : str

The object class name (object.__class__.__name__).

getDesignOfExperiment(self)

Design of experiments accessor.

Returns
model : DesignOfExperiment

Design of experiments.

getDistribution(self)

Input distribution accessor.

Returns
distribution : openturns.ComposedDistribution

The distribution defined in the probabilistic model or a distribution composed of Uniform laws if there is no stochastic input variable.

getEffectiveInputSample(self)

Effective input sample accessor.

Returns
sample : openturns.Sample

Sample of all the input variables if all of them are deterministic. Otherwise, sample of the stochastic input variables.

getEffectiveOutputSample(self)

Effective output sample accessor.

Returns
sample : openturns.Sample

Sample of the interest output variables.

getErrorMessage(self)

Error message accessor.

Returns
message : str

Error message if the analysis failed.

getId(self)

Accessor to the object’s id.

Returns
id : int

Internal unique identifier.

getInterestVariables(self)

Get the variables to analyse.

Returns
variablesNames : sequence of str

Names of the variables to analyse.

getKFoldValidationNumberOfFolds(self)

Number of folds accessor.

Returns
folds : int

Number of folds. By default it is 3.

getKFoldValidationSeed(self)

Seed accessor.

Returns
seed : int

Seed value for k-Fold cross-validation.

getName(self)

Accessor to the object’s name.

Returns
name : str

The name of the object.

getPythonScript(self)

Python script accessor.

Returns
script : str

Python script to replay the analysis.

getResult(self)

Results accessor.

Returns
result : FunctionalChaosAnalysisResult

Results.

getShadowedId(self)

Accessor to the object’s shadowed id.

Returns
id : int

Internal unique identifier.

getSparseChaos(self)

Whether it is sparse.

Returns
isSparse : bool

Whether it is sparse. By default, the chaos is not sparse.

getTestSampleValidationPercentageOfPoints(self)

Percentage of points accessor.

Returns
percentage : int

Percentage of points used to validate the metamodel. By default it is 20%.

getTestSampleValidationSeed(self)

Seed accessor.

Returns
seed : int

Seed value for the validation with a test sample.

getVisibility(self)

Accessor to the object’s visibility state.

Returns
visible : bool

Visibility flag.

getWarningMessage(self)

Warning message accessor.

Returns
message : str

Warning message which can appear during the analysis computation.

hasName(self)

Test if the object is named.

Returns
hasName : bool

True if the name is not empty.

hasValidResult(self)

Whether the analysis has been run.

Returns
hasValidResult : bool

Whether the analysis has already been run.

hasVisibleName(self)

Test if the object has a distinguishable name.

Returns
hasVisibleName : bool

True if the name is not empty and not the default one.

isReliabilityAnalysis(self)

Whether the analysis involves reliability.

Returns
isReliabilityAnalysis : bool

Whether the analysis involves a reliability analysis.

isRunning(self)

Whether the analysis is running.

Returns
isRunning : bool

Whether the analysis is running.

kFoldValidation(self)

Whether a k-Fold cross-validation is requested.

Returns
validation : bool

Whether a k-Fold cross-validation is requested.

leaveOneOutValidation(self)

Whether a validation by leave-one-out is requested.

Returns
validation : bool

Whether a validation by leave-one-out is requested.
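As a sketch of the leave-one-out principle, the plain-Python example below substitutes a deliberately trivial "metamodel" (the mean of the remaining points) for the chaos expansion; only the hold-one-out loop is the point here, not the predictor:

```python
def loo_errors(values):
    """Leave-one-out sketch: for each point, 'refit' a trivial metamodel
    (here simply the mean of the remaining points) and record the
    prediction error at the held-out point."""
    n = len(values)
    total = sum(values)
    errors = []
    for v in values:
        mean_without_v = (total - v) / (n - 1)  # refit without point v
        errors.append(v - mean_without_v)       # error at the held-out point
    return errors

errs = loo_errors([1.0, 2.0, 3.0, 4.0])
print([round(e, 2) for e in errs])  # [-2.0, -0.67, 0.67, 2.0]
```

The analytical validation (see setAnalyticalValidation) avoids this n-fold refitting cost by approximating the same quantity.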

run(self)

Launch the analysis.

setAnalyticalValidation(self, validation)

Whether an analytical validation is requested.

Parameters
validation : bool

Whether an analytical validation is requested. The analytical validation is an approximation of the leave-one-out result.

setChaosDegree(self, degree)

Chaos degree accessor.

Parameters
degree : int

Chaos degree.

setInterestVariables(self, variablesNames)

Set the variables to analyse.

Parameters
variablesNames : sequence of str

Names of the variables to analyse.

setKFoldValidation(self, validation)

Whether a k-Fold cross-validation is requested.

Parameters
validation : bool

Whether a k-Fold cross-validation is requested.

setKFoldValidationNumberOfFolds(self, nbFolds)

Number of folds accessor.

Parameters
nbFolds : int

Number of folds. By default it is 3.

setKFoldValidationSeed(self, seed)

Seed accessor.

Parameters
seed : int

Seed value for k-Fold cross-validation.
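To show what the fold number and seed control, here is a minimal plain-Python sketch of seeded k-fold index splitting (illustrative only, not the persalys implementation):

```python
import random

def k_fold_indices(n, n_folds=3, seed=0):
    """Shuffle the indices 0..n-1 with a fixed seed and deal them into
    n_folds roughly equal folds (illustrative sketch only)."""
    rng = random.Random(seed)          # the seed makes the folds reproducible
    indices = list(range(n))
    rng.shuffle(indices)
    return [indices[i::n_folds] for i in range(n_folds)]

# 250 points, the default 3 folds:
folds = k_fold_indices(250, n_folds=3, seed=1)
print([len(fold) for fold in folds])  # [84, 83, 83]
# Every point belongs to exactly one fold:
assert sorted(i for fold in folds for i in fold) == list(range(250))
```

Each fold is held out in turn while a metamodel is fitted on the others, so every point is used for validation exactly once.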

setLeaveOneOutValidation(self, validation)

Whether a validation by leave-one-out is requested.

Parameters
validation : bool

Whether a validation by leave-one-out is requested.

setName(self, name)

Accessor to the object’s name.

Parameters
name : str

The name of the object.

setShadowedId(self, id)

Accessor to the object’s shadowed id.

Parameters
id : int

Internal unique identifier.

setSparseChaos(self, sparse)

Whether it is sparse.

Parameters
sparse : bool

Whether it is sparse.

setTestSampleValidation(self, validation)

Whether a validation with a test sample is requested.

Parameters
validation : bool

Whether a validation with a test sample is requested. The data sample is divided into two sub-samples: a training sample (by default 80% of the points) and a test sample (by default 20% of the points). A new metamodel is built with the training sample and validated with the test sample. The points are picked at random in the data sample (by default the seed is 1).
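The split described above can be sketched in plain Python (an illustration of the principle only; the helper name is hypothetical, not a persalys function):

```python
import random

def split_train_test(sample, percentage=20, seed=1):
    """Randomly set aside `percentage` % of the points as a test sample
    and keep the rest for training (illustrative sketch only)."""
    rng = random.Random(seed)                 # default seed 1, as in the doc
    indices = list(range(len(sample)))
    rng.shuffle(indices)
    n_test = len(sample) * percentage // 100  # default 20% for the test set
    test_idx, train_idx = indices[:n_test], indices[n_test:]
    return [sample[i] for i in train_idx], [sample[i] for i in test_idx]

data = list(range(250))
train, test = split_train_test(data, percentage=20, seed=1)
print(len(train), len(test))  # 200 50
```

The metamodel is then fitted on the 80% training part and its predictions are compared against the 20% held-out part.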

setTestSampleValidationPercentageOfPoints(self, percentage)

Percentage of points accessor.

Parameters
percentage : int

Percentage of points used to validate the metamodel. By default it is 20%.

setTestSampleValidationSeed(self, seed)

Seed accessor.

Parameters
seed : int

Seed value for the validation with a test sample.

setVisibility(self, visible)

Accessor to the object’s visibility state.

Parameters
visible : bool

Visibility flag.

testSampleValidation(self)

Whether a validation with a test sample is requested.

Returns
validation : bool

Whether a validation with a test sample is requested. The data sample is divided into two sub-samples: a training sample (by default 80% of the points) and a test sample (by default 20% of the points). A new metamodel is built with the training sample and validated with the test sample. The points are picked at random in the data sample (by default the seed is 1).