FunctionBlackScholes
Library "FunctionBlackScholes"
Some methods for the Black-Scholes options model, demonstrating several approaches to the valuation of a European call.
// reference:
// people.math.sc.edu
asset_path(s0, mu, sigma, t1, n) Simulates the behavior of an asset price over time.
Parameters:
s0 : float, asset price at time 0.
mu : float, growth rate.
sigma : float, volatility.
t1 : float, time to expiry date.
n : int, time steps to expiry date.
Returns: asset price values at each equally spaced time step (0 -> t1)
binomial(s0, e, r, sigma, t1, m) Uses the binomial method for a European call.
Parameters:
s0 : float, asset price at time 0.
e : float, exercise price.
r : float, interest rate.
sigma : float, volatility.
t1 : float, time to expiry date.
m : int, time steps to expiry date.
Returns: option value at time 0.
bsf(s0, t0, e, r, sigma, t1) Evaluates the Black-Scholes formula for a European call.
Parameters:
s0 : float, asset price at time 0.
t0 : float, time at which the price is known.
e : float, exercise price.
r : float, interest rate.
sigma : float, volatility.
t1 : float, time to expiry date.
Returns: option value at time 0.
forward(e, r, sigma, t1, nx, nt, smax) Forward difference method to value a European call option.
Parameters:
e : float, exercise price.
r : float, interest rate.
sigma : float, volatility.
t1 : float, time to expiry date.
nx : int, number of space steps in interval (0, L).
nt : int, number of time steps.
smax : float, maximum value of S to consider.
Returns: option values for the European call, float array of size ((nx-1) * (nt+1)).
mc(s0, e, r, sigma, t1, m) Uses Monte Carlo valuation on a European call.
Parameters:
s0 : float, asset price at time 0.
e : float, exercise price.
r : float, interest rate.
sigma : float, volatility.
t1 : float, time to expiry date.
m : int, time steps to expiry date.
Returns: confidence interval for the estimated range of valuation.
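A minimal usage sketch follows. The import path and publisher name are placeholders, and the strike, rate, volatility, and expiry values are purely illustrative:
```
//@version=5
indicator("Black-Scholes example")
// hypothetical import path; substitute the actual publisher name and version
import author/FunctionBlackScholes/1 as bs

// value a European call on the current price: strike 100, 2% rate, 20% vol, 1 year to expiry
callValue = bs.bsf(close, 0.0, 100.0, 0.02, 0.2, 1.0)
plot(callValue)
```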
FunctionMinkowskiDistance
Library "FunctionMinkowskiDistance"
Method for Minkowski Distance.
The Minkowski distance or Minkowski metric is a metric in a normed vector space
which can be considered as a generalization of both the Euclidean distance and
the Manhattan distance.
It is named after the German mathematician Hermann Minkowski.
reference: en.wikipedia.org
double(point_ax, point_ay, point_bx, point_by, p_value) Minkowski Distance for single points.
Parameters:
point_ax : float, x value of point a.
point_ay : float, y value of point a.
point_bx : float, x value of point b.
point_by : float, y value of point b.
p_value : float, p value, default=1.0 (1: Manhattan, 2: Euclidean); does not support Chebyshev.
Returns: float
ndim(point_x, point_y, p_value) Minkowski Distance for N dimensions.
Parameters:
point_x : float array, point x dimension attributes.
point_y : float array, point y dimension attributes.
p_value : float, p value, default=1.0 (1: Manhattan, 2: Euclidean); does not support Chebyshev.
Returns: float
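For illustration, a small sketch (the import path is a placeholder) comparing the single-point and N-dimensional forms on the same pair of points:
```
//@version=5
indicator("Minkowski example")
// hypothetical import path; substitute the actual publisher name and version
import author/FunctionMinkowskiDistance/1 as mink

// Manhattan distance (p=1) between (0,0) and (3,4) = 7
d_manhattan = mink.double(0.0, 0.0, 3.0, 4.0, 1.0)
// Euclidean distance (p=2) between the same points, via the N-dimensional form = 5
d_euclidean = mink.ndim(array.from(0.0, 0.0), array.from(3.0, 4.0), 2.0)
plot(d_manhattan)
plot(d_euclidean)
```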
regress
Library "regress"
Produces the slope (beta), y-intercept (alpha) and coefficient of determination for a linear regression.
regress(x, y, len) regress: computes alpha, beta, and r^2 for a linear regression of y on x
Parameters:
x : the explaining (independent) variable
y : the dependent variable
len : use the most recent "len" values of x and y
Returns: alpha is the y-intercept, beta is the slope, and r2 is the coefficient of determination
Note: the chart does not show anything; use the return values to compute model values in your own application, if you wish.
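A usage sketch, assuming the function returns the tuple [alpha, beta, r2] (the import path is a placeholder):
```
//@version=5
indicator("regress example", overlay=true)
// hypothetical import path; substitute the actual publisher name and version
import author/regress/1 as reg

// regress close (dependent) on bar_index (independent) over the last 50 bars
[alpha, beta, r2] = reg.regress(bar_index, close, 50)
plot(alpha + beta * bar_index, "fitted value")
```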
FunctionNNLayer
Library "FunctionNNLayer"
Generalized Neural Network Layer method.
function(inputs, weights, n_nodes, activation_function, bias, alpha, scale) Generalized Layer.
Parameters:
inputs : float array, input values.
weights : float array, weight values.
n_nodes : int, number of nodes in layer.
activation_function : string, default='sigmoid', name of the activation function used.
bias : float, default=1.0, bias to pass into activation function.
alpha : float, default=na, if required to pass into activation function.
scale : float, default=na, if required to pass into activation function.
Returns: float
FunctionNNPerceptron
Library "FunctionNNPerceptron"
Perceptron Function for Neural networks.
function(inputs, weights, bias, activation_function, alpha, scale) generalized perceptron node for Neural Networks.
Parameters:
inputs : float array, the inputs of the perceptron.
weights : float array, the weights for inputs.
bias : float, default=1.0, the default bias of the perceptron.
activation_function : string, default='sigmoid', activation function applied to the output.
alpha : float, default=na, if required for activation.
scale : float, default=na, if required for activation.
Returns: float
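A usage sketch of a single perceptron node. The import path is a placeholder, and the inputs and weights are arbitrary illustrative values:
```
//@version=5
indicator("Perceptron example")
// hypothetical import path; substitute the actual publisher name and version
import author/FunctionNNPerceptron/1 as nn

inputs  = array.from(close, close[1], close[2])
weights = array.from(0.5, 0.3, 0.2)
// single node, default bias of 1.0, sigmoid activation
out = nn.function(inputs, weights, 1.0, 'sigmoid')
plot(out)
```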
MLActivationFunctions
Library "MLActivationFunctions"
Activation functions for Neural networks.
binary_step(value) Basic threshold output classifier to activate/deactivate neuron.
Parameters:
value : float, value to process.
Returns: float
linear(value) Input is the same as output.
Parameters:
value : float, value to process.
Returns: float
sigmoid(value) Sigmoid or logistic function.
Parameters:
value : float, value to process.
Returns: float
sigmoid_derivative(value) Derivative of sigmoid function.
Parameters:
value : float, value to process.
Returns: float
tanh(value) Hyperbolic tangent function.
Parameters:
value : float, value to process.
Returns: float
tanh_derivative(value) Hyperbolic tangent function derivative.
Parameters:
value : float, value to process.
Returns: float
relu(value) Rectified linear unit (RELU) function.
Parameters:
value : float, value to process.
Returns: float
relu_derivative(value) RELU function derivative.
Parameters:
value : float, value to process.
Returns: float
leaky_relu(value) Leaky RELU function.
Parameters:
value : float, value to process.
Returns: float
leaky_relu_derivative(value) Leaky RELU function derivative.
Parameters:
value : float, value to process.
Returns: float
relu6(value) RELU-6 function.
Parameters:
value : float, value to process.
Returns: float
softmax(value) Softmax function.
Parameters:
value : float array, values to process.
Returns: float
softplus(value) Softplus function.
Parameters:
value : float, value to process.
Returns: float
softsign(value) Softsign function.
Parameters:
value : float, value to process.
Returns: float
elu(value, alpha) Exponential Linear Unit (ELU) function.
Parameters:
value : float, value to process.
alpha : float, default=1.0, predefined constant, controls the value to which an ELU saturates for negative net inputs.
Returns: float
selu(value, alpha, scale) Scaled Exponential Linear Unit (SELU) function.
Parameters:
value : float, value to process.
alpha : float, default=1.67326324, predefined constant, controls the value to which a SELU saturates for negative net inputs.
scale : float, default=1.05070098, predefined constant.
Returns: float
exponential(value) Pointer to math.exp() function.
Parameters:
value : float, value to process.
Returns: float
function(name, value, alpha, scale) Activation function.
Parameters:
name : string, name of activation function.
value : float, value to process.
alpha : float, default=na, if required.
scale : float, default=na, if required.
Returns: float
derivative(name, value, alpha, scale) Derivative Activation function.
Parameters:
name : string, name of activation function.
value : float, value to process.
alpha : float, default=na, if required.
scale : float, default=na, if required.
Returns: float
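A small sketch showing both the direct calls and the name-dispatch wrappers (the import path is a placeholder, and the input normalization is only one way to keep values in a sensible range):
```
//@version=5
indicator("Activation example")
// hypothetical import path; substitute the actual publisher name and version
import author/MLActivationFunctions/1 as act

// normalize the input so the activation operates around zero
x = (close - ta.sma(close, 20)) / ta.stdev(close, 20)
plot(act.sigmoid(x))
plot(act.function('relu', x))       // same call family via the name-dispatch wrapper
plot(act.derivative('sigmoid', x))  // gradient of the sigmoid at x
```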
MLLossFunctions
Library "MLLossFunctions"
Methods for Loss functions.
mse(expects, predicts) Mean Squared Error (MSE) " MSE = 1/N * sum ((y - y')^2) ".
Parameters:
expects : float array, expected values.
predicts : float array, prediction values.
Returns: float
binary_cross_entropy(expects, predicts) Binary Cross-Entropy Loss (log).
Parameters:
expects : float array, expected values.
predicts : float array, prediction values.
Returns: float
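A usage sketch with small hard-coded arrays (the import path is a placeholder):
```
//@version=5
indicator("Loss example")
// hypothetical import path; substitute the actual publisher name and version
import author/MLLossFunctions/1 as loss

expects  = array.from(1.0, 0.0, 1.0, 1.0)
predicts = array.from(0.9, 0.1, 0.8, 0.4)
plot(loss.mse(expects, predicts))
plot(loss.binary_cross_entropy(expects, predicts))
```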
Divergence
Library "Divergence"
Calculates a divergence between 2 series
bullish(_src, _low, depth) Calculates bullish divergence
Parameters:
_src : Main series
_low : Comparison series (`low` is used if no argument is supplied)
depth : Fractal Depth (`2` is used if no argument is supplied)
Returns: 2 boolean values for regular and hidden divergence
bearish(_src, _high, depth) Calculates bearish divergence
Parameters:
_src : Main series
_high : Comparison series (`high` is used if no argument is supplied)
depth : Fractal Depth (`2` is used if no argument is supplied)
Returns: 2 boolean values for regular and hidden divergence
I created this library to plug and play divergences in any code.
You can create a divergence indicator from any series you like.
Fractals are used to pinpoint the edge of the series. The higher the depth, the slower the divergence updates get.
My Plain Stochastic Divergence uses the same calculation. Watch it in action.
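For example, a sketch of plugging an RSI into the library (the import path is a placeholder; the tuple order follows the descriptions above):
```
//@version=5
indicator("Divergence example", overlay=true)
// hypothetical import path; substitute the actual publisher name and version
import author/Divergence/1 as dvg

osc = ta.rsi(close, 14)
[bullRegular, bullHidden] = dvg.bullish(osc, low, 2)
[bearRegular, bearHidden] = dvg.bearish(osc, high, 2)
plotshape(bullRegular, style=shape.triangleup, location=location.belowbar, color=color.green)
plotshape(bearRegular, style=shape.triangledown, location=location.abovebar, color=color.red)
```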
FunctionPeakDetection
Library "FunctionPeakDetection"
Method used for peak detection, similar to MATLAB peakdet method
function(sample_x, sample_y, delta) Method for detecting peaks.
Parameters:
sample_x : float array, sample with indices.
sample_y : float array, sample with data.
delta : float, positive threshold value for detecting a peak.
Returns: tuple with found max/min peak indices.
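A usage sketch, assuming the tuple unpacks as [max indices, min indices] and using a placeholder import path; the rolling 100-bar window and the one-ATR threshold are illustrative choices:
```
//@version=5
indicator("Peak detection example")
// hypothetical import path; substitute the actual publisher name and version
import author/FunctionPeakDetection/1 as pk

// keep a rolling window of the last 100 closes with their bar indices
var xs = array.new_float()
var ys = array.new_float()
array.push(xs, bar_index)
array.push(ys, close)
if array.size(ys) > 100
    array.shift(xs)
    array.shift(ys)
// detect swings larger than one ATR
[maxIdx, minIdx] = pk.function(xs, ys, ta.atr(14))
plot(array.size(maxIdx), "peaks found")
```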
DailyDeviation
Library "DailyDeviation"
Helps in determining the relative deviation from the open of the day compared to the high or low values.
hlcDeltaArrays(daysPrior, maxDeviation, spec, res) Returns a set of arrays representing the daily deviation of price for a given number of days.
Parameters:
daysPrior : Number of days back to get the close from.
maxDeviation : Maximum deviation before a value is considered an outlier. A value of 0 will not filter results.
spec : session.regular (default), session.extended or other time spec.
res : The resolution (default = '1440').
Returns: The arrays OH, OL, and OC, where OH = Open vs High, OL = Open vs Low, and OC = Open vs Close
fromOpen(daysPrior, maxDeviation, comparison, spec, res) Returns a value representing the deviation from the open (to the high or low) of the current day, given a number of days to measure from.
Parameters:
daysPrior : Number of days back to get the close from.
maxDeviation : Maximum deviation before a value is considered an outlier. A value of 0 will not filter results.
comparison : The value used in comparison to the current open for the day.
spec : session.regular (default), session.extended or other time spec.
res : The resolution (default = '1440').
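A usage sketch (the import path is a placeholder; the 50-day lookback and 2-sigma outlier filter are illustrative):
```
//@version=5
indicator("DailyDeviation example")
// hypothetical import path; substitute the actual publisher name and version
import author/DailyDeviation/1 as dd

// deviation of today's high from today's open, measured over the prior 50 days,
// filtering samples beyond 2 standard deviations
devHigh = dd.fromOpen(50, 2.0, high)
plot(devHigh)
```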
Volatility
Library "Volatility"
Functions for determining if volatility (true range) is within or exceeds normal.
The "True Range" (ta.tr) is used for measuring volatility.
Values are normalized by the volume adjusted weighted moving average (VAWMA) to be more like percent moves than price.
current(len) Returns the current price adjusted volatility ratio.
Parameters:
len : Number of bars to get a volume adjusted weighted average price.
normal(len, maxDeviation, level, gapDays, spec, res) Returns the normal upper range of volatility. Compensates for overnight gaps within a regular session.
Parameters:
len : Number of bars to measure volatility.
maxDeviation : The limit of volatility before considered an outlier.
level : The amount of standard deviation after cleaning outliers to be considered within normal.
gapDays : The number of days in the past to measure overnight gap volatility.
spec : session.regular (default), session.extended or other time spec.
res : The resolution (default = '1440').
isNormal(len, maxDeviation, level, gapDays, spec, res) Returns true if the volatility (true range) is within normal levels. Compensates for overnight gaps within a regular session.
Parameters:
len : Number of bars to measure volatility.
maxDeviation : The limit of volatility before considered an outlier.
level : The amount of standard deviation after cleaning outliers to be considered within normal.
gapDays : The number of days in the past to measure overnight gap volatility.
spec : session.regular (default), session.extended or other time spec.
res : The resolution (default = '1440').
severity(len, maxDeviation, level, gapDays, spec, res) Returns ratio of the current value to the normal value. Compensates for overnight gaps within a regular session.
Parameters:
len : Number of bars to measure volatility.
maxDeviation : The limit of volatility before considered an outlier.
level : The amount of standard deviation after cleaning outliers to be considered within normal.
gapDays : The number of days in the past to measure overnight gap volatility.
spec : session.regular (default), session.extended or other time spec.
res : The resolution (default = '1440').
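A usage sketch (the import path is a placeholder, and the length, deviation, level, and gap-day values are illustrative):
```
//@version=5
indicator("Volatility example", overlay=true)
// hypothetical import path; substitute the actual publisher name and version
import author/Volatility/1 as vty

// highlight bars whose true range falls outside the normal level
ok = vty.isNormal(20, 3.0, 2.0, 30)
bgcolor(ok ? na : color.new(color.red, 80))
plot(vty.severity(20, 3.0, 2.0, 30), "severity")
```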
DataCleaner
Library "DataCleaner"
Functions for acquiring outlier levels and acquiring a cleaned version of a series.
outlierLevel(src, len, level) Gets the (standard deviation) outlier level for a given series.
Parameters:
src : The series to average and add a multiple of the standard deviation to.
len : The number of bars to measure.
level : The positive or negative multiple of the standard deviation to apply to the average. A positive number will be the upper boundary and a negative number will be the lower boundary.
Returns: The average of the series plus the multiple of the standard deviation.
cleanUsing(src, result, len, maxDeviation) Returns an array representing the result series with outliers (as identified from the source series) removed.
Parameters:
src : The source series to read from.
result : The result series.
len : The maximum size of the resultant array.
maxDeviation : The positive or negative multiple of the standard deviation to apply to the average. A positive number will be the upper boundary and a negative number will be the lower boundary.
Returns: An array containing the cleaned series.
clean(src, len, maxDeviation) Returns an array representing the source series with outliers removed.
Parameters:
src : The source series to read from.
len : The maximum size of the resultant array.
maxDeviation : The positive or negative multiple of the standard deviation to apply to the average. A positive number will be the upper boundary and a negative number will be the lower boundary.
Returns: An array containing the cleaned series.
outlierLevelAdjusted(src, level, len, maxDeviation) Gets the (standard deviation) outlier level for a given series after a single pass of removing any outliers.
Parameters:
src : The series to average and add a multiple of the standard deviation to.
level : The positive or negative multiple of the standard deviation to apply to the average. A positive number will be the upper boundary and a negative number will be the lower boundary.
len : The number of bars to measure.
maxDeviation : The optional standard deviation level to use when cleaning the series. The default is the value of the provided level.
Returns: The average of the series plus the multiple of the standard deviation.
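A usage sketch on the true range series (the import path is a placeholder; the 100-bar window and 2-sigma level are illustrative):
```
//@version=5
indicator("DataCleaner example")
// hypothetical import path; substitute the actual publisher name and version
import author/DataCleaner/1 as dc

// upper outlier boundary: mean of the last 100 true ranges plus 2 standard deviations
upper = dc.outlierLevel(ta.tr, 100, 2.0)
// the same boundary after one pass of outlier removal
adjusted = dc.outlierLevelAdjusted(ta.tr, 2.0, 100, 2.0)
plot(upper)
plot(adjusted)
```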
bench
Library "bench"
A simple benchmark library to analyse script performance and bottlenecks.
Very useful if you are developing an overly complex application in Pine Script, or trying to optimise a library / function / algorithm...
Supports artificial looping benchmarks (of fast functions)
Supports integrated linear benchmarks (of expensive scripts)
One important thing to note is that the Pine Script compiler will completely ignore any calculations that do not eventually produce chart output. Therefore, if you are performing an artificial benchmark you will need to use the bench.reference(value) function to ensure the calculations are executed.
Please check the examples towards the bottom of the script.
Quick Reference
(Be warned this uses non-standard space characters to get the line indentation to work in the description!)
```
// Looping benchmark style
benchmark = bench.new(samples = 500, loops = 5000)
data = array.new_int()
if bench.start(benchmark)
    while bench.loop(benchmark)
        array.unshift(data, timenow)
    bench.mark(benchmark)
    while bench.loop(benchmark)
        array.unshift(data, timenow)
    bench.mark(benchmark)
    while bench.loop(benchmark)
        array.unshift(data, timenow)
    bench.stop(benchmark)
bench.reference(array.get(data, 0))
bench.report(benchmark, '1x array.unshift()')

// Linear benchmark style
benchmark = bench.new()
data = array.new_int()
bench.start(benchmark)
for i = 0 to 1000
    array.unshift(data, timenow)
bench.mark(benchmark)
for i = 0 to 1000
    array.unshift(data, timenow)
bench.stop(benchmark)
bench.reference(array.get(data, 0))
bench.report(benchmark, '1000x array.unshift()')
```
Detailed Interface
new(samples, loops) Initialises a new benchmark array
Parameters:
samples : int, the number of bars in which to collect samples
loops : int, the number of loops to execute within each sample
Returns: int , the benchmark array
active(benchmark) Determines if the benchmark's state is active
Parameters:
benchmark : int , the benchmark array
Returns: bool, true only if the state is active
start(benchmark) Start recording a benchmark from this point
Parameters:
benchmark : int , the benchmark array
Returns: bool, true only if the benchmark is unfinished
loop(benchmark) Returns true until the call count exceeds the loops value passed to bench.new()
Parameters:
benchmark : int , the benchmark array
Returns: bool, true while looping
reference(number, string) Add a compiler reference to the chart so the calculations don't get optimised away
Parameters:
number : float, a numeric value to reference
string : string, a string value to reference
mark(benchmark, number, string) Marks the end of one recorded interval and the start of the next
Parameters:
benchmark : int , the benchmark array
number : float, a numeric value to reference
string : string, a string value to reference
stop(benchmark, number, string) Stop the benchmark, ending the final interval
Parameters:
benchmark : int , the benchmark array
number : float, a numeric value to reference
string : string, a string value to reference
report(benchmark, title, text_size, position) Prints the benchmark results to the screen
Parameters:
benchmark : int , the benchmark array
title : string, add a custom title to the report
text_size : string, the text size of the log console (global size vars)
position : string, the position of the log console (global position vars)
unittest_bench(case) Bench module unit tests, for inclusion in a parent script's test suite. Usage: bench.unittest_bench(__ASSERTS)
Parameters:
case : string , the current test case and array of previous unit tests (__ASSERTS)
unittest(verbose) Run the bench module unit tests as a standalone. Usage: bench.unittest()
Parameters:
verbose : bool, optionally disable the full report to only display failures
HurstExponent
Library "HurstExponent"
Library to calculate the Hurst exponent, refactored from Hurst Exponent - Detrended Fluctuation Analysis
demean(src) Calculates the series with its mean subtracted.
Parameters:
src : The series used to calculate the difference from the mean (e.g. log returns).
Returns: The series with its mean subtracted
cumsum(src, length) Calculates a cumulated sum from the series.
Parameters:
src : The series used to calculate the cumulative sum (e.g. demeaned log returns).
length : The length used to calculate the cumulative sum (e.g. 100).
Returns: The cumulative sum of the series as an array
aproximateLogScale(scale, length) Calculates an approximated log scale. Used to save on sample size
Parameters:
scale : The scale to approximate.
length : The length used to approximate the expected scale.
Returns: The approximated log scale of the value
rootMeanSum(cumulativeSum, barId, numberOfSegments) Calculates a linear trend to determine the error between the linear trend and the cumulative sum
Parameters:
cumulativeSum : The cumulative sum array to regress.
barId : The barId for the slice
numberOfSegments : The total number of segments used for the regression calculation
Returns: The error between linear trend and cumulative sum
averageRootMeanSum(cumulativeSum, barId, length) Calculates the Root Mean Sum measured for each block (e.g. the approximated log scale)
Parameters:
cumulativeSum : The cumulative sum array to regress and determine the average of.
barId : The barId for the slice
length : The length used for finding the average
Returns: The average root mean sum error of the cumulativeSum
criticalValues(length) Calculates the critical values of the Hurst exponent for a given length
Parameters:
length : The length used for finding the average
Returns: The critical value, upper critical value and lower critical value for the Hurst exponent
slope(cumulativeSum, length) Calculates the Hurst exponent slope measured from the root mean sum, scaled to a log-log plot using linear regression
Parameters:
cumulativeSum : The cumulative sum array to regress and determine the average of.
length : The length used for the Hurst exponent sample size
Returns: The slope of the Hurst exponent
smooth(src, length) Smooths input using advanced linear regression
Parameters:
src : The series to smooth (e.g. hurst exponent slope)
length : The length used to smooth
Returns: The src smoothed according to the given length
exponent(src, hurstLength) Wrapper function to calculate the Hurst exponent slope
Parameters:
src : The series used for returns calculation (e.g. close)
hurstLength : The length used to calculate the Hurst exponent (should be greater than 50)
Returns: The src smoothed according to the given length
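A usage sketch (the import path is a placeholder; the 100-bar sample length is illustrative):
```
//@version=5
indicator("HurstExponent example")
// hypothetical import path; substitute the actual publisher name and version
import author/HurstExponent/1 as he

// Hurst exponent slope on closing prices over a 100-bar sample
h = he.exponent(close, 100)
plot(h)
hline(0.5)  // values above 0.5 suggest trending behaviour, below 0.5 mean reversion
```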
Moments
Library "Moments"
Based on Moments (Mean, Variance, Skewness, Kurtosis). Rewritten for Pine Script v5.
logReturns(src) Calculates log returns of a series (e.g. log percentage change)
Parameters:
src : Source to use for the returns calculation (e.g. close).
Returns: Log percentage returns of a series
mean(src, length) Calculates the mean of a series using ta.sma
Parameters:
src : Source to use for the mean calculation (e.g. close).
length : Length to use for the mean calculation (e.g. 14).
Returns: The sma of the source over the length provided.
variance(src, length) Calculates the variance of a series
Parameters:
src : Source to use for the variance calculation (e.g. close).
length : Length to use for the variance calculation (e.g. 14).
Returns: The variance of the source over the length provided.
standardDeviation(src, length) Calculates the standard deviation of a series
Parameters:
src : Source to use for the standard deviation calculation (e.g. close).
length : Length to use for the standard deviation calculation (e.g. 14).
Returns: The standard deviation of the source over the length provided.
skewness(src, length) Calculates the skewness of a series
Parameters:
src : Source to use for the skewness calculation (e.g. close).
length : Length to use for the skewness calculation (e.g. 14).
Returns: The skewness of the source over the length provided.
kurtosis(src, length) Calculates the kurtosis of a series
Parameters:
src : Source to use for the kurtosis calculation (e.g. close).
length : Length to use for the kurtosis calculation (e.g. 14).
Returns: The kurtosis of the source over the length provided.
skewnessStandardError(sampleSize) Estimates the standard error of skewness based on sample size
Parameters:
sampleSize : The number of samples used for calculating standard error.
Returns: The standard error estimate for skewness based on the sample size provided.
kurtosisStandardError(sampleSize) Estimates the standard error of kurtosis based on sample size
Parameters:
sampleSize : The number of samples used for calculating standard error.
Returns: The standard error estimate for kurtosis based on the sample size provided.
skewnessCriticalValue(sampleSize) Estimates the critical value of skewness based on sample size
Parameters:
sampleSize : The number of samples used for calculating critical value.
Returns: The critical value estimate for skewness based on the sample size provided.
kurtosisCriticalValue(sampleSize) Estimates the critical value of kurtosis based on sample size
Parameters:
sampleSize : The number of samples used for calculating critical value.
Returns: The critical value estimate for kurtosis based on the sample size provided.
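A usage sketch comparing sample skewness against its critical value band (the import path is a placeholder; the 30-bar length is illustrative):
```
//@version=5
indicator("Moments example")
// hypothetical import path; substitute the actual publisher name and version
import author/Moments/1 as mom

r = mom.logReturns(close)
plot(mom.skewness(r, 30))
// compare against the critical value band to judge whether the sample skew is significant
plot(mom.skewnessCriticalValue(30))
plot(-mom.skewnessCriticalValue(30))
```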
pNRTR
Library "pNRTR"
Provides functions for calculating Nick Rypock Trailing Reverse (NRTR) trend values with higher precision offsets for both low and high points, rather than the standard single offset.
pnrtr(low_offset, high_offset, value) Calculates the NRTR trend value.
Parameters:
low_offset : float, default=0.2, offset used for nrtr low_point calculations.
high_offset : float, default=0.2, offset used for nrtr high_point calculations.
value : float, default=close, variable used for nrtr point calculations.
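A usage sketch, assuming the function returns the trailing trend value described above (the import path is a placeholder; the offsets are illustrative):
```
//@version=5
indicator("pNRTR example", overlay=true)
// hypothetical import path; substitute the actual publisher name and version
import author/pNRTR/1 as nr

// tighter trailing offset below price than above it
trail = nr.pnrtr(0.15, 0.25, close)
plot(trail)
```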