math_utils
Library "math_utils"
Collection of math functions that are not part of the standard math library
num_of_non_decimal_digits(number) num_of_non_decimal_digits - The number of digits to the left of the decimal point
Parameters:
number : - The floating point number
Returns: number of non-decimal digits
num_of_decimal_digits(number) num_of_decimal_digits - The number of digits to the right of the decimal point
Parameters:
number : - The floating point number
Returns: number of decimal digits
floor(number, precision) floor - floor with precision to the given most significant decimal point
Parameters:
number : - The floating point number
precision : - The number of decimal places, a.k.a. the most significant decimal digit - e.g. precision 2 will produce a 0.01 minimum change
Returns: number floored to the decimal digits defined by the precision
ceil(number, precision) ceil - ceil with precision to the given most significant decimal point
Parameters:
number : - The floating point number
precision : - The number of decimal places, a.k.a. the most significant decimal digit - e.g. precision 2 will produce a 0.01 minimum change
Returns: number ceiled to the decimal digits defined by the precision
clamp(number, lower, higher, precision) clamp - clamp with precision to the given most significant decimal point
Parameters:
number : - The floating point number
lower : - The lowest number limit to return
higher : - The highest number limit to return
precision : - The number of decimal places, a.k.a. the most significant decimal digit - e.g. precision 2 will produce a 0.01 minimum change
Returns: number clamped to the decimal digits defined by the precision
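A minimal Pine Script™ sketch of how precision-based rounding like this can be implemented (the `floorPrec`, `ceilPrec` and `clampPrec` names are illustrative, not the library's exports):
//@version=5
indicator("precision rounding sketch")
// Scale by 10^precision, apply math.floor()/math.ceil(), then scale back.
floorPrec(float number, int precision) =>
    float factor = math.pow(10, precision)
    math.floor(number * factor) / factor
ceilPrec(float number, int precision) =>
    float factor = math.pow(10, precision)
    math.ceil(number * factor) / factor
// Clamp `number` into [lower, higher], then round down to the precision.
clampPrec(float number, float lower, float higher, int precision) =>
    floorPrec(math.max(lower, math.min(higher, number)), precision)
plot(clampPrec(close, low, high, 2))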
MATH
ConditionalAverages
█ OVERVIEW
This library is a Pine Script™ programmer’s tool containing functions that average values selectively.
█ CONCEPTS
Averaging can be useful to smooth out unstable readings in the data set, provide a benchmark to see the underlying trend of the data, or to provide a general expectancy of values in establishing a central tendency. Conventional averaging techniques tend to apply indiscriminately to all values in a fixed window, but it can sometimes be useful to average values only when a specific condition is met. As conditional averaging works on specific elements of a dataset, it can help us derive more context-specific conclusions. This library offers a collection of averaging methods that not only accomplish these tasks, but also exploit the efficiencies of the Pine Script™ runtime by foregoing unnecessary and resource-intensive for loops.
█ NOTES
To Loop or Not to Loop
Though for and while loops are essential programming tools, they are often unnecessary in Pine Script™. This is because the Pine Script™ runtime already runs your scripts in a loop where it executes your code on each bar of the dataset. Pine Script™ programmers who understand how their code executes on charts can use this to their advantage by designing loop-less code that will run orders of magnitude faster than functionally identical code using loops. Most of this library's functions illustrate how you can achieve loop-less code to process past values. See the User Manual page on loops for more information. If you are looking for ways to measure execution time for your scripts, have a look at our LibraryStopwatch library.
Our `avgForTimeWhen()` and `totalForTimeWhen()` are exceptions in the library, as they use a while structure. Only a few iterations of the loop are executed on each bar, however, as its only job is to remove the few elements in the array that are outside the moving window defined by a time boundary.
Cumulating and Summing Conditionally
The ta.cum() or math.sum() built-in functions can be used with ternaries that select only certain values. In our `avgWhen(src, cond)` function, for example, we use this technique to cumulate only the occurrences of `src` when `cond` is true:
float cumTotal = ta.cum(cond ? src : 0)
We then use:
float cumCount = ta.cum(cond ? 1 : 0)
to calculate the number of occurrences where `cond` is true, which corresponds to the quantity of values cumulated in `cumTotal`.
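Putting the two together, a minimal sketch of the technique (illustrative; the published library may differ in details):
//@version=5
indicator("avgWhen sketch")
// Cumulative average of `src`, counting only the bars where `cond` is true.
avgWhen(float src, bool cond) =>
    float cumTotal = ta.cum(cond ? src : 0)
    float cumCount = ta.cum(cond ? 1 : 0)
    cumTotal / cumCount
// Example: average close of up bars since the beginning of the chart's history.
plot(avgWhen(close, close > open))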
Building Custom Series With Arrays
The advent of arrays in Pine has enabled us to build custom data series. Many of this library's functions use arrays for this purpose, saving newer values that come in when a condition is met and discarding older ones, thus implementing a queue. A sketch of the idea follows.
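Here is a minimal sketch of such a queue, assuming we keep the last 20 values of `close` that occur when a condition is true (illustrative only):
//@version=5
indicator("array queue sketch")
int n = 20
var float[] values = array.new_float()
// When the condition is met, push the new value and drop the oldest one
// once the queue exceeds its maximum size.
if close > open
    array.push(values, close)
    if array.size(values) > n
        array.shift(values)
plot(array.size(values) > 0 ? array.avg(values) : na)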
`avgForTimeWhen()` and `totalForTimeWhen()`
These two functions warrant a few explanations. They operate on a number of values included in a moving window defined by a timeframe expressed in milliseconds. We use a 1D timeframe in our example code. The number of bars included in the moving window is unknown to the programmer, who only specifies the period of time defining the moving window. You can thus use `avgForTimeWhen()` to calculate a rolling moving average for the last 24 hours, for example, that will work whether the chart is using a 1min or 1H timeframe. A 24-hour moving window will typically contain many more values on a 1min chart than on a 1H chart, but their calculated averages will be very close.
Problems will arise on non-24x7 markets when large time gaps occur between chart bars, as will be the case across holidays or between trading sessions. For example, if you were using a 24H window and there were a two-day gap between two bars, then no chart bars would fit in the moving window after the gap. The `minBars` parameter mitigates this by guaranteeing that a minimum number of bars is always included in the calculation, even if including those bars requires reaching outside the prescribed timeframe. We use a minimum value of 10 bars in the example code.
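A hedged sketch of that pruning logic, using parallel arrays of values and bar times (not the library's exact code):
//@version=5
indicator("time window sketch")
int MS_IN_DAY = 24 * 60 * 60 * 1000
int minBars   = 10
var float[] values = array.new_float()
var int[]   times  = array.new_int()
// Queue every bar's close along with its time.
array.push(values, close)
array.push(times,  time)
// Remove elements older than the moving window, but always keep at least `minBars` of them.
while array.size(times) > minBars and time - array.get(times, 0) > MS_IN_DAY
    array.shift(values)
    array.shift(times)
plot(array.avg(values))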
Using var in Constant Declarations
In the past, we used var when initializing so-called constants in our scripts, which, as per the Style Guide's recommendations, we identify using UPPER_SNAKE_CASE. It turns out that var variables incur slightly higher maintenance overhead in the Pine Script™ runtime when compared to variables initialized on each bar. We thus no longer use var to declare our "int/float/bool" constants, but still use it when initialization on each bar would require too much time, such as when initializing a string or with a heavy function call.
Look first. Then leap.
█ FUNCTIONS
avgWhen(src, cond)
Gathers values of the source when a condition is true and averages them over the total number of occurrences of the condition.
Parameters:
src : (series int/float) The source of the values to be averaged.
cond : (series bool) The condition determining when a value will be included in the set of values to be averaged.
Returns: (float) A cumulative average of values when a condition is met.
avgWhenLast(src, cond, cnt)
Gathers values of the source when a condition is true and averages them over a defined number of occurrences of the condition.
Parameters:
src : (series int/float) The source of the values to be averaged.
cond : (series bool) The condition determining when a value will be included in the set of values to be averaged.
cnt : (simple int) The quantity of last occurrences of the condition for which to average values.
Returns: (float) The average of `src` for the last `cnt` occurrences where `cond` is true.
avgWhenInLast(src, cond, cnt)
Gathers values of the source when a condition is true and averages them over the total number of occurrences during a defined number of bars back.
Parameters:
src : (series int/float) The source of the values to be averaged.
cond : (series bool) The condition determining when a value will be included in the set of values to be averaged.
cnt : (simple int) The quantity of bars back to evaluate.
Returns: (float) The average of `src` in last `cnt` bars, but only when `cond` is true.
avgSince(src, cond)
Averages values of the source since a condition was true.
Parameters:
src : (series int/float) The source of the values to be averaged.
cond : (series bool) The condition determining when the average is reset.
Returns: (float) The average of `src` since `cond` was true.
avgForTimeWhen(src, ms, cond, minBars)
Averages values of `src` when `cond` is true, over a moving window of length `ms` milliseconds.
Parameters:
src : (series int/float) The source of the values to be averaged.
ms : (simple int) The time duration in milliseconds defining the size of the moving window.
cond : (series bool) The condition determining which values are included. Optional.
minBars : (simple int) The minimum number of values to keep in the moving window. Optional.
Returns: (float) The average of `src` when `cond` is true in the moving window.
totalForTimeWhen(src, ms, cond, minBars)
Sums values of `src` when `cond` is true, over a moving window of length `ms` milliseconds.
Parameters:
src : (series int/float) The source of the values to be summed.
ms : (simple int) The time duration in milliseconds defining the size of the moving window.
cond : (series bool) The condition determining which values are included. Optional.
minBars : (simple int) The minimum number of values to keep in the moving window. Optional.
Returns: (float) The sum of `src` when `cond` is true in the moving window.
MovingAveragesLibrary
Library "MovingAveragesLibrary"
This is a library allowing one to select between many different Moving Average formulas to smooth out any float variable.
You can use this library to apply a Moving Average function to any series of data as long as your source is a float.
The default application would be for applying Moving Averages onto your chart. However, the scope of this library is beyond that. Any indicator or strategy you are building can benefit from this library.
You can apply different types of smoothing and moving average functions to your indicators, momentum oscillators, average true range calculations, support and resistance zones, envelope bands, channels, and anything you can think of to attempt to smooth out noise while finding a delicate balance against lag.
If you are developing an indicator, you can use the 'ave_func' function to allow your users to select any Moving Average for any function or variable by creating a string input with the following structure:
var_name = input.string(<defval>, <title>, <options>)
Where the types of Moving Average you would like to offer are listed in the options.
Example:
i_ma_type = input.string(title = "Moving Average Type", defval = "Hull Moving Average", options = )
Where, after 'options =', you would add the strings I have included for your convenience at the top of the Pine Script.
Then for the output you desire, simply call 'ave_func' like so:
ma = ave_func(source, length, i_ma_type)
Now the plotted Moving Average will be the same as what you or your users select from the Input.
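A hedged sketch of the whole pattern; the import path and the option strings below are placeholders and must match the ones defined at the top of the library's script:
//@version=5
indicator("MA selector sketch", overlay = true)
import username/MovingAveragesLibrary/1 as malib   // hypothetical import path
i_ma_type = input.string(title = "Moving Average Type", defval = "Hull Moving Average",
     options = ["Simple Moving Average", "Exponential Moving Average", "Hull Moving Average"])
i_length  = input.int(title = "Length", defval = 20)
// Pass the user's selection straight into 'ave_func()'.
ma = malib.ave_func(close, i_length, i_ma_type)
plot(ma)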
ema(src, len) Exponential Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: Float value.
sma(src, len) Simple Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: Float value.
rma(src, len) Relative Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: Float value.
wma(src, len) Weighted Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: Float value.
dv2(len) Donchian V2 function.
Parameters:
len : Lookback length to use.
Returns: (open + close) / 2 for the selected length.
ModFilt(src, len) Modular Filter smoothing function.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: Float value.
EDSMA(src, len) Ehlers Dynamic Smoothed Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: EDSMA smoothing.
dema(x, t) Double Exponential Moving Average.
Parameters:
x : Series to use ('close' is used if no argument is supplied).
t : Lookback length to use.
Returns: DEMA smoothing.
tema(src, len) Triple Exponential Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: TEMA smoothing.
smma(x, t) Smoothed Moving Average.
Parameters:
x : Series to use ('close' is used if no argument is supplied).
t : Lookback length to use.
Returns: SMMA smoothing.
vwma(x, t) Volume Weighted Moving Average.
Parameters:
x : Series to use ('close' is used if no argument is supplied).
t : Lookback length to use.
Returns: VWMA smoothing.
hullma(x, t) Hull Moving Average.
Parameters:
x : Series to use ('close' is used if no argument is supplied).
t : Lookback length to use.
Returns: Hull smoothing.
covwma(x, t) Coefficient of Variation Weighted Moving Average.
Parameters:
x : Series to use ('close' is used if no argument is supplied).
t : Lookback length to use.
Returns: COVWMA smoothing.
frama(x, t) Fractal Reactive Moving Average.
Parameters:
x : Series to use ('close' is used if no argument is supplied).
t : Lookback length to use.
Returns: FRAMA smoothing.
kama(x, t) Kaufman's Adaptive Moving Average.
Parameters:
x : Series to use ('close' is used if no argument is supplied).
t : Lookback length to use.
Returns: KAMA smoothing.
donchian(len) Donchian Calculation.
Parameters:
len : Lookback length to use.
Returns: Average of the highest price and the lowest price for the specified look-back period.
tma(src, len) Triangular Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: TMA smoothing.
VAMA(src, len) Volatility Adjusted Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: VAMA smoothing.
Jurik(src, len) Jurik Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: JMA smoothing.
MCG(src, len) McGinley smoothing.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: McGinley smoothing.
zlema(series, length) Zero Lag Exponential Moving Average.
Parameters:
series : Series to use ('close' is used if no argument is supplied).
length : Lookback length to use.
Returns: ZLEMA smoothing.
xema(src, len) Optimized Exponential Moving Average.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
len : Lookback length to use.
Returns: XEMA smoothing.
EhlersSuperSmoother(src, lower) Ehlers Super Smoother.
Parameters:
src : Series to use ('close' is used if no argument is supplied).
lower : Smoothing value to use.
Returns: Ehlers Super smoothing.
EhlersEmaSmoother(sig, smoothK, smoothP) Ehlers EMA Smoother.
Parameters:
sig : Series to use ('close' is used if no argument is supplied).
smoothK : Lookback length to use.
smoothP : Smoothing value to use.
Returns: Ehlers EMA smoothing.
ave_func(in_src, in_len, in_type) Returns the source after running it through a Moving Average function.
Parameters:
in_src : Series to use ('close' is used if no argument is supplied).
in_len : Lookback period to be used for the Moving Average function.
in_type : Type of Moving Average function to use. Requires a string input to select the option from; the string MUST match the type-casing in the function below.
Returns: The source as a float after running it through the Moving Average function.
lib_Militzer
Library "lib_Militzer"
// This is a collection of functions either found on the internet, or made by me.
// This is only public so my other scripts that reference this can also be public.
// But if you find anything useful here, be my guest.
print()
strToInt()
timeframeToMinutes()
MathProbabilityDistribution
Library "MathProbabilityDistribution"
Probability Distribution Functions.
name(idx) Indexed names helper function.
Parameters:
idx : int, position in the range (0, 6).
Returns: string, distribution name.
usage:
.name(1)
Notes:
(0) => 'StdNormal'
(1) => 'Normal'
(2) => 'Skew Normal'
(3) => 'Student T'
(4) => 'Skew Student T'
(5) => 'GED'
(6) => 'Skew GED'
zscore(position, mean, deviation) Z-score helper function for x calculation.
Parameters:
position : float, position.
mean : float, mean.
deviation : float, standard deviation.
Returns: float, z-score.
usage:
.zscore(1.5, 2.0, 1.0)
std_normal(position) Standard Normal Distribution.
Parameters:
position : float, position.
Returns: float, probability density.
usage:
.std_normal(0.6)
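For reference, the standard normal density is exp(-x^2 / 2) / sqrt(2 * pi); a minimal sketch (not the library's code):
//@version=5
indicator("std normal sketch")
// Probability density of the standard normal distribution at `position`.
stdNormalPdf(float position) =>
    math.exp(-0.5 * position * position) / math.sqrt(2.0 * math.pi)
plot(stdNormalPdf(0.6))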
normal(position, mean, scale) Normal Distribution.
Parameters:
position : float, position in the distribution.
mean : float, mean of the distribution, default=0.0 for standard distribution.
scale : float, scale of the distribution, default=1.0 for standard distribution.
Returns: float, probability density.
usage:
.normal(0.6)
skew_normal(position, skew, mean, scale) Skew Normal Distribution.
Parameters:
position : float, position in the distribution.
skew : float, skewness of the distribution.
mean : float, mean of the distribution, default=0.0 for standard distribution.
scale : float, scale of the distribution, default=1.0 for standard distribution.
Returns: float, probability density.
usage:
.skew_normal(0.8, -2.0)
ged(position, shape, mean, scale) Generalized Error Distribution.
Parameters:
position : float, position.
shape : float, shape.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
Returns: float, probability.
usage:
.ged(0.8, -2.0)
skew_ged(position, shape, skew, mean, scale) Skew Generalized Error Distribution.
Parameters:
position : float, position.
shape : float, shape.
skew : float, skew.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
Returns: float, probability.
usage:
.skew_ged(0.8, 2.0, 1.0)
student_t(position, shape, mean, scale) Student-T Distribution.
Parameters:
position : float, position.
shape : float, shape.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
Returns: float, probability.
usage:
.student_t(0.8, 2.0, 1.0)
skew_student_t(position, shape, skew, mean, scale) Skew Student-T Distribution.
Parameters:
position : float, position.
shape : float, shape.
skew : float, skew.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
Returns: float, probability.
usage:
.skew_student_t(0.8, 2.0, 1.0)
select(distribution, position, mean, scale, shape, skew, log) Conditional Distribution.
Parameters:
distribution : string, distribution name.
position : float, position.
mean : float, mean, default=0.0 for standard distribution.
scale : float, scale, default=1.0 for standard distribution.
shape : float, shape.
skew : float, skew.
log : bool, if true apply log() to the result.
Returns: float, probability.
usage:
.select('StdNormal', <position>, log=true)
historicalrange
Library "historicalrange"
Library provides a method to calculate the historical percentile range of a series.
hpercentrank(source) calculates historical percentrank of the source
Parameters:
source : Source for which historical percentrank needs to be calculated. Source should range between 0-100. If using a source which can go beyond 0-100, use a short-term percentrank to baseline it.
Returns: pArray - percentrank array which contains how many instances of source occurred at different levels.
upperPercentile - percentile based on higher value
lowerPercentile - percentile based on lower value
median - median value of the source
max - max value of the source
distancefromath(source) returns stats on historical distance from ath in terms of percentage
Parameters:
source : for which stats are calculated
Returns: percentile and related historical stats regarding distance from ath
distancefromma(maType, length, source) returns stats on historical distance from moving average in terms of percentage
Parameters:
maType : Moving Average Type : Can be sma, ema, hma, rma, wma, vwma, swma, highlow, linreg, median
length : Moving Average Length
source : for which stats are calculated
Returns: percentile and related historical stats regarding distance from the moving average
bpercentb(source, maType, length, multiplier, sticky) returns percentrank and stats on historical bpercentb levels
Parameters:
source : Moving Average Source
maType : Moving Average Type : Can be sma, ema, hma, rma, wma, vwma, swma, highlow, linreg, median
length : Moving Average Length
multiplier : Standard Deviation multiplier
sticky : - sticky boundaries which will only change when value is outside boundary.
Returns: percentile and related historical stats regarding Bollinger Percent B
kpercentk(source, maType, length, multiplier, useTrueRange, sticky) returns percentrank and stats on historical kpercentk levels
Parameters:
source : Moving Average Source
maType : Moving Average Type : Can be sma, ema, hma, rma, wma, vwma, swma, highlow, linreg, median
length : Moving Average Length
multiplier : Standard Deviation multiplier
useTrueRange : - if set to false, uses high-low.
sticky : - sticky boundaries which will only change when value is outside boundary.
Returns: percentile and related historical stats regarding Keltener Percent K
dpercentd(useAlternateSource, alternateSource, length, sticky) returns percentrank and stats on historical dpercentd levels
Parameters:
useAlternateSource : - Custom source is used only if useAlternateSource is set to true
alternateSource : - Custom source
length : - donchian channel length
sticky : - sticky boundaries which will only change when value is outside boundary.
Returns: percentile and related historical stats regarding Donchian Percent D
oscillator(type, length, shortLength, longLength, source, highSource, lowSource, method, highlowLength, sticky) oscillator - returns Choice of oscillator with custom overbought/oversold range
Parameters:
type : - oscillator type. Valid values : cci, cmo, cog, mfi, roc, rsi, stoch, tsi, wpr
length : - Oscillator length - not used for TSI
shortLength : - shortLength only used for TSI
longLength : - longLength only used for TSI
source : - custom source if required
highSource : - custom high source for stochastic oscillator
lowSource : - custom low source for stochastic oscillator
method : - Valid values for method are : sma, ema, hma, rma, wma, vwma, swma, highlow, linreg, median
highlowLength : - length on which highlow of the oscillator is calculated
sticky : - overbought, oversold levels won't change unless crossed
Returns: percentile and related historical stats regarding oscillator
ArrayOperations
Library "ArrayOperations"
Array element wise basic operations.
add(sample_a, sample_b) Adds sample_b to sample_a and returns a new array.
Parameters:
sample_a : values to be added to.
sample_b : values to add.
Returns: array with added results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
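A minimal sketch of how such an element-wise operation can be implemented, assuming for simplicity that both arrays hold the same number of elements (how the library handles a shorter sample_b is not shown here):
//@version=5
indicator("element-wise add sketch")
// Element-wise sum of two arrays of equal size.
addArrays(float[] sample_a, float[] sample_b) =>
    float[] result = array.new_float()
    for i = 0 to array.size(sample_a) - 1
        array.push(result, array.get(sample_a, i) + array.get(sample_b, i))
    result
a = array.from(1.0, 2.0, 3.0)
b = array.from(10.0, 20.0, 30.0)
plot(array.sum(addArrays(a, b)))   // 66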
subtract(sample_a, sample_b) Subtracts sample_b from sample_a and returns a new array.
sample_a : values to be subtracted from.
sample_b : values to subtract.
Returns: array with subtracted results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
multiply(sample_a, sample_b) multiply sample_a by sample_b and returns a new array.
sample_a : values to multiply.
sample_b : values to multiply with.
Returns: array with multiplied results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
divide(sample_a, sample_b) Divide sample_a by sample_b and returns a new array.
sample_a : values to divide.
sample_b : values to divide with.
Returns: array with divided results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
power(sample_a, sample_b) power sample_a by sample_b and returns a new array.
sample_a : values to power.
sample_b : values to power with.
Returns: float array with power results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
remainder(sample_a, sample_b) Remainder sample_a by sample_b and returns a new array.
sample_a : values to remainder.
sample_b : values to remainder with.
Returns: array with remainder results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
equal(sample_a, sample_b) Check element wise sample_a equals sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
not_equal(sample_a, sample_b) Check element wise sample_a not equals sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
over_or_equal(sample_a, sample_b) Check element wise sample_a over or equals sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
under_or_equal(sample_a, sample_b) Check element wise sample_a under or equals sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
over(sample_a, sample_b) Check element wise sample_a over sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
under(sample_a, sample_b) Check element wise sample_a under sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
and_(sample_a, sample_b) Check element wise sample_a and sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
or_(sample_a, sample_b) Check element wise sample_a or sample_b and returns a new array.
sample_a : values to check.
sample_b : values to check.
Returns: int array with results.
- sample_a provides type format for output.
- arrays do not need to be symmetric.
- sample_a must have same or more elements than sample_b
all(sample) Check element wise if all numeric samples are true (!= 0).
sample : values to check.
Returns: int.
any(sample) Check element wise if any numeric samples are true (!= 0).
sample : values to check.
Returns: int.
FunctionWeekofmonth
Library "FunctionWeekofmonth"
Week of Month function.
weekofmonth(utime) Week of month for provided unix time.
Parameters:
utime : int, unix timestamp.
Returns: int
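A hedged sketch of one way to derive the week of the month from a unix timestamp, assuming weeks are counted in 7-day blocks from the 1st of the month (the library may use a different convention, such as calendar weeks):
//@version=5
indicator("week of month sketch")
weekOfMonth(int utime) =>
    // Day of the month (1-31) mapped into 7-day blocks: 1-7 -> 1, 8-14 -> 2, ...
    int dom = dayofmonth(utime)
    math.ceil(dom / 7.0)
plot(weekOfMonth(time))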
WIPNNetwork
Library "WIPNNetwork"
This is a work in progress (WIP) and prone to errors, so use at your own risk...
Let me know if you find any issues.
Method for a generalized Neural Network.
network(x) Generalized Neural Network Method.
Parameters:
x : TODO: add parameter x description here
Returns: TODO: add what function returns
FunctionBlackScholes
Library "FunctionBlackScholes"
Some methods for the Black Scholes Options Model, which demonstrates several approaches to the valuation of a European call.
// reference:
// people.math.sc.edu
// people.math.sc.edu
asset_path(s0, mu, sigma, t1, n) Simulates the behavior of an asset price over time.
Parameters:
s0 : float, asset price at time 0.
mu : float, growth rate.
sigma : float, volatility.
t1 : float, time to expiry date.
n : int, time steps to expiry date.
Returns: asset price values at each equally timed step (0 -> t1)
binomial(s0, e, r, sigma, t1, m) Uses the binomial method for a European call.
Parameters:
s0 : float, asset price at time 0.
e : float, exercise price.
r : float, interest rate.
sigma : float, volatility.
t1 : float, time to expiry date.
m : int, time steps to expiry date.
Returns: option value at time 0.
bsf(s0, t0, e, r, sigma, t1) Evaluates the Black-Scholes formula for a European call.
Parameters:
s0 : float, asset price at time 0.
t0 : float, time at which the price is known.
e : float, exercise price.
r : float, interest rate.
sigma : float, volatility.
t1 : float, time to expiry date.
Returns: option value at time 0.
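For reference, a hedged sketch of the closed-form valuation that a function like bsf() evaluates, using a polynomial approximation of the standard normal CDF (Abramowitz & Stegun); this is an illustration, not the library's code:
//@version=5
indicator("Black-Scholes sketch")
// Standard normal CDF via a polynomial approximation.
normCdf(float x) =>
    float ax   = math.abs(x)
    float k    = 1.0 / (1.0 + 0.2316419 * ax)
    float pdf  = math.exp(-0.5 * ax * ax) / math.sqrt(2.0 * math.pi)
    float poly = k * (0.319381530 + k * (-0.356563782 + k * (1.781477937 + k * (-1.821255978 + k * 1.330274429))))
    float cdf  = 1.0 - pdf * poly
    x >= 0 ? cdf : 1.0 - cdf
// European call value per the Black-Scholes formula.
bsCall(float s0, float t0, float e, float r, float sigma, float t1) =>
    float tau = t1 - t0
    float d1  = (math.log(s0 / e) + (r + 0.5 * sigma * sigma) * tau) / (sigma * math.sqrt(tau))
    float d2  = d1 - sigma * math.sqrt(tau)
    s0 * normCdf(d1) - e * math.exp(-r * tau) * normCdf(d2)
plot(bsCall(100.0, 0.0, 100.0, 0.05, 0.2, 1.0))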
forward(e, r, sigma, t1, nx, nt, smax) Forward difference method to value a European call option.
Parameters:
e : float, exercise price.
r : float, interest rate.
sigma : float, volatility.
t1 : float, time to expiry date.
nx : int, number of space steps in interval (0, L).
nt : int, number of time steps.
smax : float, maximum value of S to consider.
Returns: option values for the european call, float array of size ((nx-1) * (nt+1)).
mc(s0, e, r, sigma, t1, m) Uses Monte Carlo valuation on a European call.
Parameters:
s0 : float, asset price at time 0.
e : float, exercise price.
r : float, interest rate.
sigma : float, volatility.
t1 : float, time to expiry date.
m : int, time steps to expiry date.
Returns: confidence interval for the estimated range of valuation.
FunctionMinkowskiDistance
Library "FunctionMinkowskiDistance"
Method for Minkowski Distance,
The Minkowski distance or Minkowski metric is a metric in a normed vector space
which can be considered as a generalization of both the Euclidean distance and
the Manhattan distance.
It is named after the German mathematician Hermann Minkowski.
reference: en.wikipedia.org
double(point_ax, point_ay, point_bx, point_by, p_value) Minkowski Distance for single points.
Parameters:
point_ax : float, x value of point a.
point_ay : float, y value of point a.
point_bx : float, x value of point b.
point_by : float, y value of point b.
p_value : float, p value, default=1.0 (1: Manhattan, 2: Euclidean); does not support Chebyshev.
Returns: float
ndim(point_x, point_y, p_value) Minkowski Distance for N dimensions.
Parameters:
point_x : float array, point x dimension attributes.
point_y : float array, point y dimension attributes.
p_value : float, p value, default=1.0 (1: Manhattan, 2: Euclidean); does not support Chebyshev.
Returns: float
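For reference, the Minkowski distance is D(a, b) = (sum of |a_i - b_i|^p)^(1/p); a minimal sketch for two 2-D points (illustrative, not the library's code):
//@version=5
indicator("Minkowski distance sketch")
minkowski(float ax, float ay, float bx, float by, float p) =>
    math.pow(math.pow(math.abs(ax - bx), p) + math.pow(math.abs(ay - by), p), 1.0 / p)
// p = 1 gives the Manhattan distance, p = 2 the Euclidean distance.
plot(minkowski(0.0, 0.0, 3.0, 4.0, 2.0))   // 5.0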
TimeLockedMA
Library "TimeLockedMA"
Library & function(s) which generate a moving average that stays locked to the user's desired time preference.
TODO - Add functionality for more moving average types, e.g. smoothed, weighted, etc.
Example:
time_locked_ma(close, length=1, timeframe='days', type='ema')
Will generate a 1 day exponential moving average that will stay consistent across all chart intervals.
Error Handling
On small time frames with large moving averages (IE: 1min chart with a 50 week moving average), you'll get a study error that says "(function "sma") references too many candles in history".
To fix this, make sure you have timeframe="" in your indicator() header. Next, in the indicator settings, increase the timeframe from "Chart" to a higher interval until the error goes away.
By default, it's set to "Chart". Bringing the interval up to 1hr will usually solve the issue.
Furthermore, adding timeframe_gaps=false to your indicator() header will give you an approximation of real-time values.
Misc Info
For time_locked_ma(), setting type='na' will return the relative length value that adjusts dynamically to the user's chart time interval.
This is good for plugging into other functions where a lookback or length is required. (IE: Bollinger Bands)
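A hedged sketch of the underlying idea: convert the desired period into an equivalent number of chart bars, then feed that relative length to a standard moving average (assumes an intraday, minutes-based chart on a 24x7 market; the library handles more cases):
//@version=5
indicator("time-locked MA sketch", overlay = true)
// Chart interval expressed in minutes.
int chartMinutes = math.round(timeframe.in_seconds(timeframe.period) / 60.0)
// Number of chart bars that roughly cover one calendar day on a 24x7 market.
int len = math.max(1, math.round(24.0 * 60.0 / chartMinutes))
// A "1 day" SMA that keeps the same effective length on any intraday chart interval.
plot(ta.sma(close, len))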
time_locked_ma(source, length, timeframe, type) Creates a moving average that is locked to a desired timeframe
Parameters:
source : float, Moving average source
length : int, Moving average length
timeframe : string, Desired timeframe. Use: "minutes", "hours", "days", "weeks", "months", "chart"
type : string, string Moving average type. Use "SMA" (default) or "EMA". Value of "NA" will return relative lookback length.
Returns: moving average that is locked to desired timeframe.
timeframe_convert(t, a, b) Converts timeframe to desired timeframe. From a --> b
Parameters:
t : int, Time interval
a : string, Time period
b : string, Time period to convert to
Returns: Converted timeframe value
chart_time(timeframe_period, timeframe_multiplier) Parses timeframe.period and returns the chart interval and period
Parameters:
timeframe_period : string, timeframe.period
timeframe_multiplier : int, timeframe.multiplier
Enjoy :)
FunctionGenerateRandomPointsInShape
Library "FunctionGenerateRandomPointsInShape"
Generate random vector points in geometric shape (parallelogram, triangle)
random_parallelogram(vector_a, vector_b) Generate random vector point in a parallelogram shape.
Parameters:
vector_a : float array, vector of (x, y) shape.
vector_b : float array, vector of (x, y) shape.
Returns: float array, vector of (x, y) shape.
random_triangle(vector_a, vector_b) Generate random vector point in a triangle shape.
Parameters:
vector_a : float array, vector of (x, y) shape.
vector_b : float array, vector of (x, y) shape.
Returns: float array, vector of (x, y) shape.
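The usual construction behind such functions: draw u and v uniformly from [0, 1), take u*a + v*b for the parallelogram, and fold the point back (u -> 1-u, v -> 1-v) when u + v > 1 for the triangle. A minimal sketch (not the library's code):
//@version=5
indicator("random point in triangle sketch")
// Random point inside the triangle spanned by vectors a and b (both from the origin).
randomTriangle(float ax, float ay, float bx, float by) =>
    float u = math.random()
    float v = math.random()
    if u + v > 1.0
        u := 1.0 - u
        v := 1.0 - v
    [u * ax + v * bx, u * ay + v * by]
[px, py] = randomTriangle(1.0, 0.0, 0.0, 1.0)
plot(px)
plot(py)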
Punchline_Lib
Library "Punchline_Lib"
roundSmart(float) Truncates decimal places of a float value based on the number of digits before the decimal point
Parameters:
float : _value any number
Returns: float
tostring_smart(float) converts a float to a string, intelligently cutting off decimal points
Parameters:
float : _value any number
Returns: string
FunctionNNLayer
Library "FunctionNNLayer"
Generalized Neural Network Layer method.
function(inputs, weights, n_nodes, activation_function, bias, alpha, scale) Generalized Layer.
Parameters:
inputs : float array, input values.
weights : float array, weight values.
n_nodes : int, number of nodes in layer.
activation_function : string, default='sigmoid', name of the activation function used.
bias : float, default=1.0, bias to pass into activation function.
alpha : float, default=na, if required to pass into activation function.
scale : float, default=na, if required to pass into activation function.
Returns: float
FunctionNNPerceptron
Library "FunctionNNPerceptron"
Perceptron Function for Neural networks.
function(inputs, weights, bias, activation_function, alpha, scale) generalized perceptron node for Neural Networks.
Parameters:
inputs : float array, the inputs of the perceptron.
weights : float array, the weights for inputs.
bias : float, default=1.0, the default bias of the perceptron.
activation_function : string, default='sigmoid', activation function applied to the output.
alpha : float, default=na, if required for activation.
scale : float, default=na, if required for activation.
@outputs float
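A perceptron node of this kind is essentially a weighted sum of the inputs plus a bias, passed through an activation function. A minimal sketch with a sigmoid activation (illustrative, not the library's code):
//@version=5
indicator("perceptron sketch")
sigmoid(float x) =>
    1.0 / (1.0 + math.exp(-x))
// Weighted sum of inputs plus bias, squashed by the sigmoid.
perceptron(float[] inputs, float[] weights, float bias) =>
    float acc = bias
    for i = 0 to array.size(inputs) - 1
        acc += array.get(inputs, i) * array.get(weights, i)
    sigmoid(acc)
plot(perceptron(array.from(0.5, 0.2), array.from(0.4, -0.7), 1.0))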
Interpolation
Library "Interpolation"
Functions for interpolating values. Can be useful in signal processing or applied as a sigmoid function.
linear(k, delta, offset, unbound) Returns the linear adjusted value.
Parameters:
k : A number (float) from 0 to 1 representing where on the line the value is.
delta : The amount the value should change as k reaches 1.
offset : The start value.
unbound : When true, k values less than 0 or greater than 1 are still calculated. When false (default), k values less than 0 will return the offset value and values greater than 1 will return (offset + delta).
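A minimal sketch of the linear case (illustrative): the result is offset + k * delta, with k clamped to [0, 1] unless unbound is true:
//@version=5
indicator("linear interpolation sketch")
linear(float k, float delta, float offset, bool unbound = false) =>
    float kk = unbound ? k : math.max(0.0, math.min(1.0, k))
    offset + kk * delta
// Halfway between 10 and 10 + 4 = 14.
plot(linear(0.5, 4.0, 10.0))   // 12.0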
quadIn(k, delta, offset, unbound) Returns the quadratic (easing-in) adjusted value.
Parameters:
k : A number (float) from 0 to 1 representing where on the curve the value is.
delta : The amount the value should change as k reaches 1.
offset : The start value.
unbound : When true, k values less than 0 or greater than 1 are still calculated. When false (default), k values less than 0 will return the offset value and values greater than 1 will return (offset + delta).
quadOut(k, delta, offset, unbound) Returns the quadratic (easing-out) adjusted value.
Parameters:
k : A number (float) from 0 to 1 representing where on the curve the value is.
delta : The amount the value should change as k reaches 1.
offset : The start value.
unbound : When true, k values less than 0 or greater than 1 are still calculated. When false (default), k values less than 0 will return the offset value and values greater than 1 will return (offset + delta).
quadInOut(k, delta, offset, unbound) Returns the quadratic (easing-in-out) adjusted value.
Parameters:
k : A number (float) from 0 to 1 representing where on the curve the value is.
delta : The amount the value should change as k reaches 1.
offset : The start value.
unbound : When true, k values less than 0 or greater than 1 are still calculated. When false (default), k values less than 0 will return the offset value and values greater than 1 will return (offset + delta).
cubicIn(k, delta, offset, unbound) Returns the cubic (easing-in) adjusted value.
Parameters:
k : A number (float) from 0 to 1 representing where on the curve the value is.
delta : The amount the value should change as k reaches 1.
offset : The start value.
unbound : When true, k values less than 0 or greater than 1 are still calculated. When false (default), k values less than 0 will return the offset value and values greater than 1 will return (offset + delta).
cubicOut(k, delta, offset, unbound) Returns the cubic (easing-out) adjusted value.
Parameters:
k : A number (float) from 0 to 1 representing where on the curve the value is.
delta : The amount the value should change as k reaches 1.
offset : The start value.
unbound : When true, k values less than 0 or greater than 1 are still calculated. When false (default), k values less than 0 will return the offset value and values greater than 1 will return (offset + delta).
cubicInOut(k, delta, offset, unbound) Returns the cubic (easing-in-out) adjusted value.
Parameters:
k : A number (float) from 0 to 1 representing where on the curve the value is.
delta : The amount the value should change as k reaches 1.
offset : The start value.
unbound : When true, k values less than 0 or greater than 1 are still calculated. When false (default), k values less than 0 will return the offset value and values greater than 1 will return (offset + delta).
expoIn(k, delta, offset, unbound) Returns the exponential (easing-in) adjusted value.
Parameters:
k : A number (float) from 0 to 1 representing where on the curve the value is.
delta : The amount the value should change as k reaches 1.
offset : The start value.
unbound : When true, k values less than 0 or greater than 1 are still calculated. When false (default), k values less than 0 will return the offset value and values greater than 1 will return (offset + delta).
expoOut(k, delta, offset, unbound) Returns the exponential (easing-out) adjusted value.
Parameters:
k : A number (float) from 0 to 1 representing where on the curve the value is.
delta : The amount the value should change as k reaches 1.
offset : The start value.
unbound : When true, k values less than 0 or greater than 1 are still calculated. When false (default), k values less than 0 will return the offset value and values greater than 1 will return (offset + delta).
expoInOut(k, delta, offset, unbound) Returns the exponential (easing-in-out) adjusted value.
Parameters:
k : A number (float) from 0 to 1 representing where on the curve the value is.
delta : The amount the value should change as k reaches 1.
offset : The start value.
unbound : When true, k values less than 0 or greater than 1 are still calculated. When false (default), k values less than 0 will return the offset value and values greater than 1 will return (offset + delta).
using(fn, k, delta, offset, unbound) Returns the adjusted value by function name.
Parameters:
fn : The name of the function. Allowed values: linear, quadIn, quadOut, quadInOut, cubicIn, cubicOut, cubicInOut, expoIn, expoOut, expoInOut.
k : A number (float) from 0 to 1 representing where on the curve the value is.
delta : The amount the value should change as k reaches 1.
offset : The start value.
unbound : When true, k values less than 0 or greater than 1 are still calculated. When false (default), k values less than 0 will return the offset value and values greater than 1 will return (offset + delta).
CRC.lib Log Functions
Library "CRCLog"
default_params() Returns default high/low intercept/slope parameter values for Bitcoin that can be adjusted and used to calculate new Regression Log lines
log_regression() Returns a set of (fib) spaced lines representing log regression (default values attempt a fit to INDEX:BTCUSD from genesis to 2021)
CRC.lib Bars - Bar Functions
Library "CRCBars"
min_max(open, open) Get bar min (low) and max (high) price points
Parameters:
open : Open price data
open : Close price data
Returns:
is_bullish_bearish(open, open) Get bar bullish/bearish boolean signals
Parameters:
open : Open price data
open : Close price data
Returns:
sizes(open, open, open, open) Get bar sizes based on open/high/low/close data
Parameters:
open : Open price data
open : High price data
open : Low price data
open : Close price data
Returns:
options_expiration_and_strike_price_calculator
Library "options_expiration_and_strike_price_calculator"
TODO: add library description here
fun()
This is a library to help calculate an option's strike price and expiration that you can add to a script. I use it mainly for symbol calculation to place orders to buy options on TD Ameritrade, so it is set up to order options on TD Ameritrade using a JSON order placer and webhook. It fills in the "symbol" area of the JSON; I suggest manually adding the year. Below is an example of an order to buy 10 call options using JSON through the TD Ameritrade API:
"complexOrderStrategyType": "NONE",
"orderType": "LIMIT",
"session": "NORMAL",
"price": "6.45",
"duration": "DAY",
"orderStrategyType": "SINGLE",
"orderLegCollection":
}
GenericTA
Library "GenericTA"
What is it?
The real generic library, meaning it covers most built-in indicators / functions but with more parameters, so the user doesn't have to write a few more lines to achieve something simple and repetitive.
Development process?
Will tidy it up and set it up at a later stage.
You're welcome to inbox me to improve the library.
If you are looking for something similar, that's good news, because I am making it.
FFTLibrary
Library "FFTLibrary" contains a function for performing Fast Fourier Transform (FFT) along with a few helper functions. In general, FFT is defined for complex inputs and outputs. The real and imaginary parts of formally complex data are treated as separate arrays (denoted as x and y). For real-valued data, the array of imaginary parts should be filled with zeros.
FFT function
fft(x, y, dir) : Computes the one-dimensional discrete Fourier transform using an in-place complex-to-complex FFT algorithm . Note: The transform also produces a mirror copy of the frequency components, which correspond to the signal's negative frequencies.
Parameters:
x : float array, real part of the data, array size must be a power of 2
y : float array, imaginary part of the data, array size must be the same as x ; for real-valued input, y must be an array of zeros
dir : string, options = ["forward", "inverse"], defines the direction of the transform: "forward" (time-to-frequency) or "inverse" (frequency-to-time)
Returns: x, y : tuple (float array, float array), real and imaginary parts of the transformed data (original x and y are changed on output)
Helper functions
fftPower(x, y) : Helper function that computes the power of each frequency component (in other words, Fourier amplitudes squared).
Parameters:
x : float array, real part of the Fourier amplitudes
y : float array, imaginary part of the Fourier amplitudes
Returns: power : float array of the same length as x and y , Fourier amplitudes squared
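The power of each frequency component is simply x^2 + y^2; a minimal sketch (not the library's code):
//@version=5
indicator("fft power sketch")
fftPower(float[] x, float[] y) =>
    float[] power = array.new_float()
    for i = 0 to array.size(x) - 1
        array.push(power, math.pow(array.get(x, i), 2) + math.pow(array.get(y, i), 2))
    power
plot(array.sum(fftPower(array.from(1.0, 0.0), array.from(0.0, 2.0))))   // 1 + 4 = 5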
fftFreq(N) : Helper function that returns the FFT sample frequencies defined in cycles per timeframe unit. For example, if the timeframe is 5m, the frequencies are in cycles/(5 minutes).
Parameters:
N : int, window length (number of points in the transformed dataset)
Returns: freq : float array of N, contains the sample frequencies (with zero at the start).
FunctionArrayReduce
Library "FunctionArrayReduce"
A limited method to reduce an array using a mathematical formula.
float_(formula, samples, arguments, initial_value) Method to reduce an array using a mathematical formula.
Parameters:
formula : string, the mathematical formula, accepts some default name codes (index, output, previous, current, integer index of arguments array).
samples : float array, the data to process.
arguments : float array, the extra arguments of the formula.
initial_value : float, default=0, an initial base value for the process.
Returns: float.
Notes:
** if initial value is specified as "na" it will default to the first element of samples.