FunctionalChaosAnalysis

class persalys.FunctionalChaosAnalysis(*args)

Create a Functional chaos analysis.

See Functional chaos

Parameters
name : str

Name

designOfExperiment : DesignOfExperiment

Design of experiments

Examples

>>> import openturns as ot
>>> import persalys

Create the model:

>>> from math import pi
>>> ot.RandomGenerator.SetSeed(0)
>>> xi1 = persalys.Input('xi1', ot.Uniform(-pi, pi))
>>> xi2 = persalys.Input('xi2', ot.Uniform(-pi, pi))
>>> xi3 = persalys.Input('xi3', ot.Uniform(-pi, pi))
>>> y0 = persalys.Output('y0')
>>> myPhysicalModel = persalys.SymbolicPhysicalModel('myPhysicalModel', [xi1, xi2, xi3], [y0], ['sin(xi1) + 7. * (sin(xi2)) ^ 2 + 0.1 * xi3^4 * sin(xi1)'])

Create the design of experiments:

>>> aDesign = persalys.FixedDesignOfExperiment('aDesign', myPhysicalModel)
>>> inputSample = ot.LHSExperiment(myPhysicalModel.getDistribution(), 250).generate()
>>> aDesign.setOriginalInputSample(inputSample)
>>> aDesign.run()

Create the Functional Chaos Analysis:

>>> chaos = persalys.FunctionalChaosAnalysis('chaos', aDesign)
>>> chaos.setChaosDegree(6)
>>> chaos.setSparseChaos(False)
>>> chaos.setLeaveOneOutValidation(False)
>>> chaos.run()

Get the result:

>>> chaosResult = chaos.getResult()
>>> sobolResult = chaosResult.getSobolResult()

Methods

analyticalValidation()

Whether an analytical validation is requested.

getChaosDegree()

Chaos degree accessor.

getClassName()

Accessor to the object's name.

getDesignOfExperiment()

Design of experiments accessor.

getDistribution()

Input distribution accessor.

getEffectiveInputSample()

Effective input sample accessor.

getEffectiveOutputSample()

Effective output sample accessor.

getErrorMessage()

Error message accessor.

getInterestVariables()

Get the variables to analyse.

getKFoldValidationNumberOfFolds()

Number of folds accessor.

getKFoldValidationSeed()

Seed accessor.

getName()

Accessor to the object's name.

getPythonScript()

Python script accessor.

getResult()

Results accessor.

getSparseChaos()

Whether it is sparse.

getTestSampleValidationPercentageOfPoints()

Percentage of points accessor.

getTestSampleValidationSeed()

Seed accessor.

getWarningMessage()

Warning message accessor.

hasName()

Test if the object is named.

hasValidResult()

Whether the analysis has been run.

isReliabilityAnalysis()

Whether the analysis involves reliability.

isRunning()

Whether the analysis is running.

kFoldValidation()

Whether a k-Fold cross-validation is requested.

leaveOneOutValidation()

Whether a validation by leave-one-out is requested.

run()

Launch the analysis.

setAnalyticalValidation(validation)

Whether an analytical validation is requested.

setChaosDegree(degree)

Chaos degree accessor.

setInterestVariables(variablesNames)

Set the variables to analyse.

setKFoldValidation(validation)

Whether a k-Fold cross-validation is requested.

setKFoldValidationNumberOfFolds(nbFolds)

Number of folds accessor.

setKFoldValidationSeed(seed)

Seed accessor.

setLeaveOneOutValidation(validation)

Whether a validation by leave-one-out is requested.

setName(name)

Accessor to the object's name.

setSparseChaos(sparse)

Whether it is sparse.

setTestSampleValidation(validation)

Whether a validation with a test sample is requested.

setTestSampleValidationPercentageOfPoints(...)

Percentage of points accessor.

setTestSampleValidationSeed(seed)

Seed accessor.

testSampleValidation()

Whether a validation with a test sample is requested.

canBeLaunched

getElapsedTime

getParentObserver

__init__(*args)
analyticalValidation()

Whether an analytical validation is requested.

Returns
validation : bool

Whether an analytical validation is requested. The analytical validation corresponds to an approximation of the leave-one-out result.

getChaosDegree()

Chaos degree accessor.

Returns
degree : int

Chaos degree. By default it is 2.
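
The degree drives the size of the polynomial basis: for n stochastic inputs and total degree p, a full (non-sparse) chaos expansion has C(n + p, p) coefficients to estimate. A stdlib sketch of this count (an illustrative helper, not a persalys API), applied to the 3-input example above:

```python
from math import comb

def chaos_basis_size(n_inputs, degree):
    """Number of terms in a full total-degree polynomial chaos basis."""
    return comb(n_inputs + degree, degree)

# 3 inputs at the degree 6 used in the example above:
print(chaos_basis_size(3, 6))  # 84 coefficients, to be estimated from the 250-point design
```

This is why high degrees usually call for sparse chaos or larger designs of experiments.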

getClassName()

Accessor to the object’s name.

Returns
class_name : str

The object class name (object.__class__.__name__).

getDesignOfExperiment()

Design of experiments accessor.

Returns
model : DesignOfExperiment

Design of experiments

getDistribution()

Input distribution accessor.

Returns
distribution : openturns.JointDistribution

The distribution defined in the probabilistic model or a distribution composed of Uniform laws if there is no stochastic input variable.

getEffectiveInputSample()

Effective input sample accessor.

Returns
sample : openturns.Sample

Sample of all the input variables if all of them are deterministic. Otherwise, sample of the stochastic input variables.

getEffectiveOutputSample()

Effective output sample accessor.

Returns
sample : openturns.Sample

Sample of the interest output variables.

getErrorMessage()

Error message accessor.

Returns
message : str

Error message if the analysis failed

getInterestVariables()

Get the variables to analyse.

Returns
variablesNames : sequence of str

Names of the variables to analyse

getKFoldValidationNumberOfFolds()

Number of folds accessor.

Returns
folds : int

Number of folds. By default it is 3.
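
For illustration only, a k-fold partition with the default 3 folds can be sketched in plain Python; the shuffling scheme below is an assumption for the sketch, not persalys' actual algorithm:

```python
import random

def kfold_indices(n_points, n_folds=3, seed=0):
    """Sketch of a k-fold partition: shuffle the indices, then deal them into folds."""
    rng = random.Random(seed)
    indices = list(range(n_points))
    rng.shuffle(indices)
    return [indices[i::n_folds] for i in range(n_folds)]

folds = kfold_indices(250, n_folds=3)
print([len(f) for f in folds])  # fold sizes for a 250-point design: [84, 83, 83]
```

Each fold is held out in turn while a metamodel is fitted on the remaining folds, so every point is used for validation exactly once.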

getKFoldValidationSeed()

Seed accessor.

Returns
seed : int

Seed value for k-Fold cross-validation

getName()

Accessor to the object’s name.

Returns
name : str

The name of the object.

getPythonScript()

Python script accessor.

Returns
script : str

Python script to replay the analysis

getResult()

Results accessor.

Returns
result : FunctionalChaosAnalysisResult

Results

getSparseChaos()

Whether it is sparse.

Returns
isSparse : bool

Whether it is sparse. By default, the chaos is not sparse

getTestSampleValidationPercentageOfPoints()

Percentage of points accessor.

Returns
percentage : int

Percentage of points used to validate the metamodel. By default it is 20%.

getTestSampleValidationSeed()

Seed accessor.

Returns
seed : int

Seed value for the validation with a test sample

getWarningMessage()

Warning message accessor.

Returns
message : str

Warning message which may appear during the analysis computation

hasName()

Test if the object is named.

Returns
hasName : bool

True if the name is not empty.

hasValidResult()

Whether the analysis has been run.

Returns
hasValidResult : bool

Whether the analysis has already been run

isReliabilityAnalysis()

Whether the analysis involves reliability.

Returns
isReliabilityAnalysis : bool

Whether the analysis involves a reliability analysis

isRunning()

Whether the analysis is running.

Returns
isRunning : bool

Whether the analysis is running

kFoldValidation()

Whether a k-Fold cross-validation is requested.

Returns
validation : bool

Whether a k-Fold cross-validation is requested

leaveOneOutValidation()

Whether a validation by leave-one-out is requested.

Returns
validation : bool

Whether a validation by leave-one-out is requested

run()

Launch the analysis.

setAnalyticalValidation(validation)

Whether an analytical validation is requested.

Parameters
validation : bool

Whether an analytical validation is requested. The analytical validation corresponds to an approximation of the leave-one-out result.
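
For a least-squares fit, leave-one-out residuals can be obtained without any refitting through the hat-matrix identity e_i / (1 - h_ii), which is presumably what makes this validation "analytical". A minimal numpy sketch of that identity on a generic linear regression (an illustration of the idea, not the persalys internals):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))                 # regression matrix (e.g. chaos basis evaluations)
y = X @ rng.normal(size=4) + 0.1 * rng.normal(size=50)

# Ordinary least-squares fit and hat matrix H = X (X^T X)^{-1} X^T
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
H = X @ np.linalg.solve(X.T @ X, X.T)
residuals = y - X @ beta

# Leave-one-out residuals from a single fit: e_i / (1 - h_ii)
loo_analytical = residuals / (1.0 - np.diag(H))

# Brute-force check on the first point: refit without it
beta_wo_0, *_ = np.linalg.lstsq(X[1:], y[1:], rcond=None)
print(np.isclose(loo_analytical[0], y[0] - X[0] @ beta_wo_0))  # True
```

The identity is exact for ordinary least squares, which is why the analytical route is much cheaper than rerunning the fit once per point.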

setChaosDegree(degree)

Chaos degree accessor.

Parameters
degree : int

Chaos degree

setInterestVariables(variablesNames)

Set the variables to analyse.

Parameters
variablesNames : sequence of str

Names of the variables to analyse

setKFoldValidation(validation)

Whether a k-Fold cross-validation is requested.

Parameters
validation : bool

Whether a k-Fold cross-validation is requested

setKFoldValidationNumberOfFolds(nbFolds)

Number of folds accessor.

Parameters
nbFolds : int

Number of folds. By default it is 3.

setKFoldValidationSeed(seed)

Seed accessor.

Parameters
seed : int

Seed value for k-Fold cross-validation

setLeaveOneOutValidation(validation)

Whether a validation by leave-one-out is requested.

Parameters
validation : bool

Whether a validation by leave-one-out is requested

setName(name)

Accessor to the object’s name.

Parameters
name : str

The name of the object.

setSparseChaos(sparse)

Whether it is sparse.

Parameters
sparse : bool

Whether it is sparse

setTestSampleValidation(validation)

Whether a validation with a test sample is requested.

Parameters
validation : bool

Whether a validation with a test sample is requested. The data sample is divided into two sub-samples: a training sample (by default 80% of the sample points) and a test sample (by default 20% of the sample points). A new metamodel is built with the training sample and validated with the test sample. The points are picked at random from the data sample (by default the seed is 1).
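
The split described above can be sketched in plain Python; the exact sampling scheme inside persalys may differ, so treat this only as an illustration of the 80/20 default:

```python
import random

def train_test_split(n_points, test_percentage=20, seed=1):
    """Sketch of the test-sample split: randomly reserve a percentage of points."""
    rng = random.Random(seed)
    indices = list(range(n_points))
    rng.shuffle(indices)
    n_test = n_points * test_percentage // 100
    return indices[n_test:], indices[:n_test]  # training indices, test indices

train, test = train_test_split(250)
print(len(train), len(test))  # 200 50 for the 250-point design of the example
```

Because the metamodel never sees the test points during fitting, the validation error on them is an honest estimate of its predictive quality.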

setTestSampleValidationPercentageOfPoints(percentage)

Percentage of points accessor.

Parameters
percentage : int

Percentage of points used to validate the metamodel. By default it is 20%.

setTestSampleValidationSeed(seed)

Seed accessor.

Parameters
seed : int

Seed value for the validation with a test sample

testSampleValidation()

Whether a validation with a test sample is requested.

Returns
validation : bool

Whether a validation with a test sample is requested. The data sample is divided into two sub-samples: a training sample (by default 80% of the sample points) and a test sample (by default 20% of the sample points). A new metamodel is built with the training sample and validated with the test sample. The points are picked at random from the data sample (by default the seed is 1).