Support & Resistance AI (K means/median) [ThinkLogicAI]
█ OVERVIEW
K-means is a clustering algorithm commonly used in machine learning to group data points into distinct clusters based on their similarities. While K-means is not typically used directly for identifying support and resistance levels in financial markets, it can serve as a tool in a broader analysis approach.
Support and resistance levels are price levels in financial markets where the price tends to react or reverse. Support is a level where the price tends to stop falling and might start to rise, while resistance is a level where the price tends to stop rising and might start to fall. Traders and analysts often look for these levels as they can provide insights into potential price movements and trading opportunities.
█ BACKGROUND
The K-means algorithm has been around since the late 1950s, making it more than six decades old. The algorithm was introduced by Stuart Lloyd in his 1957 research paper "Least squares quantization in PCM" for telecommunications applications. However, it wasn't widely known or recognized until James MacQueen's 1967 paper "Some Methods for Classification and Analysis of Multivariate Observations," where he formalized the algorithm and referred to it as the "K-means" clustering method.
So, while K-means has been around for a considerable amount of time, it continues to be a widely used and influential algorithm in the fields of machine learning, data analysis, and pattern recognition due to its simplicity and effectiveness in clustering tasks.
█ COMPARE AND CONTRAST SUPPORT AND RESISTANCE METHODS
1) K-means Approach:
Cluster Formation: After applying the K-means algorithm to historical price change data and visualizing the resulting clusters, traders can identify distinct regions on the price chart where clusters are formed. Each cluster represents a group of similar price change patterns.
Cluster Analysis: Analyze the clusters to identify areas where clusters tend to form. These areas might correspond to regions of price behavior that repeat over time and could be indicative of support and resistance levels.
Potential Support and Resistance Levels: Based on the identified areas of cluster formation, traders can consider these regions as potential support and resistance levels. A cluster forming at a specific price level could suggest that this level has been historically significant, causing similar price behavior in the past.
Cluster Standard Deviation: In addition to looking at the means (centroids) of the clusters, traders can also calculate the standard deviation of price changes within each cluster. Standard deviation is a measure of the dispersion or volatility of data points around the mean. A higher standard deviation indicates greater price volatility within a cluster.
Low Standard Deviation: If a cluster has a low standard deviation, it suggests that prices within that cluster are relatively stable and less likely to exhibit sudden and large price movements. Traders might consider placing tighter stop-loss orders for trades within these clusters.
High Standard Deviation: Conversely, if a cluster has a high standard deviation, it indicates greater price volatility within that cluster. Traders might opt for wider stop-loss orders to allow for potential price fluctuations without getting stopped out prematurely.
Cluster Density: Each data point is assigned to a cluster, so a denser cluster acts more like gravity: the more historical data points a cluster holds, the more significant the level and the stronger the reaction to expect when price returns to it.
2) Traditional Approach:
Trendlines: Draw trendlines connecting significant highs or lows on a price chart to identify potential support and resistance levels.
Chart Patterns: Identify chart patterns like double tops, double bottoms, head and shoulders, and triangles that often indicate potential reversal points.
Moving Averages: Use moving averages to identify levels where the price might find support or resistance based on the average price over a specific period.
Psychological Levels: Identify round numbers or levels that traders often pay attention to, which can act as support and resistance.
Previous Highs and Lows: Identify significant previous price highs and lows that might act as support or resistance.
The key difference lies in the approach and the foundation of these methods. Traditional methods are based on well-established principles of technical analysis and market psychology, while the K-means approach involves clustering price behavior without necessarily incorporating market sentiment or specific price patterns.
It's important to note that while the K-means approach might provide an interesting way to analyze price data, it should be used cautiously and in conjunction with other traditional methods. Financial markets are influenced by a wide range of factors beyond just price behavior, and the effectiveness of any method for identifying support and resistance levels should be thoroughly tested and validated.
█ K MEANS ALGORITHM
The K-means algorithm is as follows (a code sketch follows the list):
1. Initialize the cluster centers.
2. Assign each data point to the cluster whose center is nearest (minimum distance).
3. Recalculate each cluster center as the mean (or median) of its members.
4. Repeat steps 2 and 3 until the cluster centers stop moving.
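As a rough illustration, here is a minimal one-dimensional k-means sketch in Pine Script v5 that clusters recent closing prices into horizontal levels. Every name, default value, and the even-spread initialization is illustrative rather than taken from the published indicator; swapping the mean for the median of each cluster's members yields the k-medians variant discussed later.
```pinescript
//@version=5
indicator("K-Means Levels Sketch", overlay = true, max_bars_back = 500)

int k        = input.int(3, "Number of clusters", minval = 2, maxval = 5)
int lookback = input.int(252, "Bars back to train on", maxval = 500)

if barstate.islast
    // training window of closing prices
    data = array.new_float()
    for i = 0 to lookback - 1
        array.push(data, close[i])

    // step 1: spread the initial centers evenly between the window's min and max
    float lo = array.min(data)
    float hi = array.max(data)
    centers = array.new_float()
    for c = 0 to k - 1
        array.push(centers, lo + (hi - lo) * c / (k - 1))

    // steps 2-3, repeated: assign each point to its nearest center, then
    // recenter each cluster on the mean of its members
    for iter = 1 to 50
        sums   = array.new_float(k, 0.0)
        counts = array.new_int(k, 0)
        for i = 0 to array.size(data) - 1
            float p    = array.get(data, i)
            int best   = 0
            float dMin = math.abs(p - array.get(centers, 0))
            for c = 1 to k - 1
                float d = math.abs(p - array.get(centers, c))
                if d < dMin
                    dMin := d
                    best := c
            array.set(sums, best, array.get(sums, best) + p)
            array.set(counts, best, array.get(counts, best) + 1)
        for c = 0 to k - 1
            if array.get(counts, c) > 0
                array.set(centers, c, array.get(sums, c) / array.get(counts, c))

    // draw each converged center as a horizontal support/resistance level
    for c = 0 to k - 1
        line.new(bar_index - lookback, array.get(centers, c), bar_index, array.get(centers, c), color = color.orange, width = 2)
```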
█ LIMITATIONS OF K MEANS
There are 3 main limitations of this algorithm:
Sensitive to Initializations: K-means is sensitive to the initial placement of centroids. Different initializations can lead to different cluster assignments and final results.
Assumption of Equal Sizes and Variances: K-means assumes that clusters have roughly equal sizes and spherical shapes. This may not hold true for all types of data. It can struggle with identifying clusters with uneven densities, sizes, or shapes.
Impact of Outliers: K-means is sensitive to outliers, as a single outlier can significantly affect the position of cluster centroids. Outliers can lead to the creation of spurious clusters or distortion of the true cluster structure.
█ LIMITATIONS IN APPLICATION OF K MEANS IN TRADING
Trading data often exhibits characteristics that can pose challenges when applying indicators and analysis techniques. Here's how the limitations of outliers, varying scales, and unequal variance can impact the use of indicators in trading:
Outliers are data points that significantly deviate from the rest of the dataset. In trading, outliers can represent extreme price movements caused by rare events, news, or market anomalies. Outliers can have a significant impact on trading indicators and analyses:
Indicator Distortion: Outliers can skew the calculations of indicators, leading to misleading signals. For instance, a single extreme price spike could cause indicators like moving averages or RSI (Relative Strength Index) to give false signals.
Risk Management: Outliers can lead to overly aggressive trading decisions if not properly accounted for. Ignoring outliers might result in unexpected losses or missed opportunities to adjust trading strategies.
Different Scales: Trading data often includes multiple indicators with varying units and scales. For example, prices are typically in dollars, volume in units traded, and oscillators have their own scale. Mixing indicators with different scales can complicate analysis:
Normalization: Indicators on different scales need to be normalized or standardized to ensure they contribute equally to the analysis. Failure to do so can lead to one indicator dominating the analysis due to its larger magnitude.
Comparability: Without normalization, it's challenging to directly compare the significance of indicators. Some indicators might have a larger numerical range and could overshadow others.
Unequal Variance: Unequal variance in trading data refers to the fact that some indicators might exhibit higher volatility than others. This can impact the interpretation of signals and the performance of trading strategies:
Volatility Adjustment: When combining indicators with varying volatility, it's essential to adjust for their relative volatilities. Failure to do so might lead to overemphasizing or underestimating the importance of certain indicators in the trading strategy.
Risk Assessment: Unequal variance can impact risk assessment. Indicators with higher volatility might lead to riskier trading decisions if not properly taken into account.
█ APPLICATION OF THIS INDICATOR
This indicator can be used in 2 ways:
1) Make a directional trade:
If a trader thinks price will go higher or lower while price is within a cluster zone, the trader can take a position and place a stop at the 1 SD band around the cluster. As one can see below, the trader can go long at the green arrow and place a stop at the one-standard-deviation mark for that cluster below it, at the red arrow. From this we can calculate a risk-to-reward ratio.
Calculating risk to reward: targeting a risk-to-reward ratio of 2:1 is clearly achievable here, since the distance to the next resistance area above (the orange cluster) exceeds twice the distance to the stop. A numeric sketch follows.
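As a made-up numeric illustration of that calculation (the prices are hypothetical, not read from the chart above):
```pinescript
//@version=5
indicator("Risk-Reward Sketch")

// hypothetical long setup: entry at a cluster center, stop at the 1 SD band below it
float entry  = 100.0
float stop   = 98.0   // cluster center minus one standard deviation
float target = 104.0  // next cluster (resistance) overhead
float rr = (target - entry) / (entry - stop)  // = 2.0, i.e. a 2:1 reward-to-risk
plot(rr, "R:R")
```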
2) Take a Reversal Trade:
We can use cluster centers (support and resistance levels) to go in the opposite direction that price is currently moving in hopes of price forming a pivot and reversing off this level.
Similar to the directional trade, we can use the standard deviation of the cluster to place a stop just in case we are wrong.
In this example below we can see that shorting on the red arrow and placing a stop at the one standard deviation above this cluster would give us a profitable trade with minimal risk.
The cluster density table in the upper right tells the trader how dense each cluster is. Higher-density clusters give a higher likelihood of a pivot forming at these levels, with price being rejected and switching direction in a larger move.
█ FEATURES & SETTINGS
General Settings:
Number of clusters: The user can select from three to five clusters. A good rule of thumb is that if you are trading intraday, less is more (think 3 rather than 5). For daily charts, 4 to 5 clusters work well.
Cluster Method: To work around the outlier sensitivity of k-means clustering, a median option was added, giving the user the choice of either k-means or k-medians clustering. K-means is the preferred method if the user thinks there are no large outliers; if large outliers appear to be present (or are assumed to be), k-medians is preferred.
Bars back to train on: The number of bars to include in the clustering. This number matters: the user should include bars that are recent, but not so far back that they fall outside the range where price can realistically travel. For example, the S&P 500 has spent the last two years in a range, so 505 daily bars in this setting would be more relevant than looking back five years, because price would have to move a long way to reach those older levels.
Show SD Bands: Select this to show the 1 standard deviation bands around the support and resistance level or unselect this to just show the support and resistance level by itself.
Features:
Besides the support and resistance levels and standard deviation bands, this indicator provides a table in the upper right-hand corner showing the density of each cluster (support and resistance level), color-coded to the cluster line on the chart. Higher-density clusters mean price has visited that level more often than lower-density clusters, which could mean a higher likelihood of a reversal when price reaches these areas.
█ WORKS CITED
Victor Sim, "Using K-means Clustering to Create Support and Resistance", 2020, towardsdatascience.com
Chris Piech, "K means", stanford.edu
█ ACKNOWLEDGMENTS
@jdehorty - Thanks for the publish template. It made organizing my thoughts and work a lot easier.
Channel Based Zigzag [HeWhoMustNotBeNamed]
🎲 Concept
A regular zigzag is built from price and a number of offset bars. In this experiment, we instead build the zigzag from different bands such as the Bollinger Band, Keltner Channel, and Donchian Channel. The process is simple:
🎯 Derive bands based on input parameters
🎯 The high of a bar is considered a pivot high only if the high price is at or above the upper band (see the sketch after this list).
🎯 Similarly, the low of a bar is considered a pivot low only if the low price is at or below the lower band.
🎯 Adding the pivot high/low follows same logic as that of regular zigzag where pivot high is always followed by pivot low and vice versa.
🎯 If the new pivot added is of same direction as that of last pivot, then both pivots are compared with each other and only the extreme one is kept. (Highest in case of pivot high and lowest in case of pivot low)
🎯 If a bar has both pivot high and pivot low - pivot with same direction as previous pivot is added to the list first before adding the pivot with opposite direction.
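A minimal Pine v5 sketch of the band-qualification rule above, using Bollinger Bands only; the pivot alternation and same-direction replacement logic from the list is omitted for brevity:
```pinescript
//@version=5
indicator("Band-Qualified Pivot Sketch", overlay = true)

// Bollinger Bands as the example channel; the indicator also supports
// Keltner and Donchian channels
[basis, upper, lower] = ta.bb(close, 20, 2.0)

// a bar can qualify as a pivot high only when its high tags the upper band,
// and as a pivot low only when its low tags the lower band
bool pivotHighCandidate = high >= upper
bool pivotLowCandidate  = low <= lower

plot(basis, "Basis", color.gray)
plotshape(pivotHighCandidate, style = shape.triangledown, location = location.abovebar, color = color.red)
plotshape(pivotLowCandidate,  style = shape.triangleup,   location = location.belowbar, color = color.green)
```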
🎲 Use Cases
Can be used for pattern recognition algorithms instead of standard zigzag. This will help derive patterns which are relative to bands and channels.
Example: John Bollinger explains how to manually scan for double tops using Bollinger Bands in this video: www.youtube.com. This modified zigzag base can be used to achieve the same by algorithmic means.
🎲 Settings
Few simple configurations which will let you select the band properties. Notice that there is no zigzag length here. All the calculations depend on the bands.
With bands display, indicator looks something like this
Note that pivots do not always represent highest/lowest prices. They represent highest/lowest price relative to bands.
As mentioned many times, the application of zigzag is not buying at a lower price and selling at a higher price. It is mainly used for pattern recognition, either manually or via algorithms. Let's build new harmonic patterns, chart patterns, and trend lines using the new zigzag!
cbnd
Library "cbnd"
Description:
Standalone Cumulative Bivariate Normal Distribution (CBND) functions that do not require any external libraries.
This includes 3 different CBND calculations: Drezner (1978), Drezner and Wesolowsky (1990), and Genz (2004).
Comments:
The standardized cumulative normal distribution function returns the probability that one random
variable is less than a and that a second random variable is less than b when the correlation
between the two variables is p. Since no closed-form solution exists for the bivariate cumulative
normal distribution, we present three approximations. The first one is the well-known
Drezner (1978) algorithm. The second one is the more efficient Drezner and Wesolowsky (1990)
algorithm. The third is the Genz (2004) algorithm, which is the most accurate one and therefore
our recommended algorithm. West (2005b) and Agca and Chance (2003) discuss the speed and
accuracy of bivariate normal distribution approximations for use in option pricing in
more detail.
Reference:
The Complete Guide to Option Pricing Formulas, 2nd ed. (Espen Gaarder Haug)
CBND1(A, b, rho)
Returns the Cumulative Bivariate Normal Distribution (CBND) using Drezner 1978 Algorithm
Parameters:
A : float,
b : float,
rho : float,
Returns: float.
CBND2(A, b, rho)
Returns the Cumulative Bivariate Normal Distribution (CBND) using Drezner and Wesolowsky (1990) function
Parameters:
A : float,
b : float,
rho : float,
Returns: float.
CBND3(x, y, rho)
Returns the Cumulative Bivariate Normal Distribution (CBND) using Genz (2004) algorithm (this is the preferred method)
Parameters:
x : float,
y : float,
rho : float,
Returns: float.
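A hypothetical usage sketch; the import path `author/cbnd/1` is a placeholder to be replaced with the library's actual publisher and version:
```pinescript
//@version=5
indicator("cbnd usage sketch")
import author/cbnd/1 as cbnd  // placeholder path, not the real import

// probability that X < 0.5 and Y < -0.25 for standard normals with
// correlation 0.3, using the recommended Genz (2004) approximation
float p = cbnd.CBND3(0.5, -0.25, 0.3)
plot(p, "CBND")
```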
STD-Stepped Fast Cosine Transform Moving Average [Loxx]
STD-Stepped Fast Cosine Transform Moving Average is an experimental moving average that uses the Fast Cosine Transform to calculate a moving average. This indicator has standard deviation stepping in order to smooth the trend by weeding out low volatility movements.
What is the Discrete Cosine Transform?
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in signal processing and data compression. It is used in most digital media, including digital images (such as JPEG and HEIF, where small high-frequency components can be discarded), digital video (such as MPEG and H.26x), digital audio (such as Dolby Digital, MP3 and AAC), digital television (such as SDTV, HDTV and VOD), digital radio (such as AAC+ and DAB+), and speech coding (such as AAC-LD, Siren and Opus). DCTs are also important to numerous other applications in science and engineering, such as digital signal processing, telecommunication devices, reducing network bandwidth usage, and spectral methods for the numerical solution of partial differential equations.
The use of cosine rather than sine functions is critical for compression, since it turns out (as described below) that fewer cosine functions are needed to approximate a typical signal, whereas for differential equations the cosines express a particular choice of boundary conditions. In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. The DCTs are generally related to Fourier Series coefficients of a periodically and symmetrically extended sequence whereas DFTs are related to Fourier Series coefficients of only periodically extended sequences. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), whereas in some variants the input and/or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common.
The most common variant of discrete cosine transform is the type-II DCT, which is often called simply "the DCT". This was the original DCT as first proposed by Ahmed. Its inverse, the type-III DCT, is correspondingly often called simply "the inverse DCT" or "the IDCT". Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data. Multidimensional DCTs (MD DCTs) are developed to extend the concept of DCT to MD signals. There are several algorithms to compute MD DCT. A variety of fast algorithms have been developed to reduce the computational complexity of implementing DCT. One of these is the integer DCT (IntDCT), an integer approximation of the standard DCT, used in several ISO/IEC and ITU-T international standards.
Notable settings
windowper = period for calculation, restricted to powers of 2: "16", "32", "64", "128", "256", "512", "1024", "2048". The reason for this is that the FFT is an algorithm that computes the DFT (Discrete Fourier Transform) in a fast way, generally in O(N·log2(N)) instead of O(N^2). To achieve this, the input matrix has to be a power of 2, although many FFT algorithms can handle any input size since the matrix can be zero-padded. For our purposes here, we stick to powers of 2 to keep this fast and neat. Read more about this here: Cooley–Tukey FFT algorithm
smthper = smoothing count. This smoothing happens after the first regular FCT pass and zeros out frequencies above the smoothing count in the previously calculated values. The lower this number, the smoother the output; it works opposite to other smoothing periods.
Included
Alerts
Signals
Loxx's Expanded Source Types
Additional reading
A Fast Computational Algorithm for the Discrete Cosine Transform by Chen et al.
Practical Fast 1-D DCT Algorithms With 11 Multiplications by Loeffler et al.
Cooley–Tukey FFT algorithm
Weighted Burg AR Spectral Estimate Extrapolation of Price [Loxx]
Weighted Burg AR Spectral Estimate Extrapolation of Price is an indicator that uses an autoregressive spectral estimation method called the Weighted Burg Algorithm. This method is commonly used in speech modeling and speech prediction engines. This method also includes the Levinson–Durbin algorithm, which was already discussed in the following indicator:
Levinson-Durbin Autocorrelation Extrapolation of Price
What is Levinson recursion or Levinson–Durbin recursion?
In many applications, the duration of an uninterrupted measurement of a time series is limited. However, it is often possible to obtain several separate segments of data. The estimation of an autoregressive model from this type of data is discussed. A straightforward approach is to take the average of models estimated from each segment separately. In this way, the variance of the estimated parameters is reduced. However, averaging does not reduce the bias in the estimate. With the Burg algorithm for segments, both the variance and the bias in the estimated parameters are reduced by fitting a single model to all segments simultaneously. As a result, the model estimated with the Burg algorithm for segments is more accurate than models obtained with averaging. The new weighted Burg algorithm for segments allows combining segments of different amplitudes.
The Burg algorithm estimates the AR parameters by determining reflection coefficients that minimize the sum of forward and backward residuals. The extension of the algorithm to segments is that the reflection coefficients are estimated by minimizing the sum of forward and backward residuals of all segments taken together. This means a single model is fitted to all segments at once. This concept is also used for prediction error methods in system identification, where the input to the system is known, as in ARX modeling.
Data inputs
Source Settings: -Loxx's Expanded Source Types. You typically use "open" since open has already closed on the current active bar
LastBar - bar where to start the prediction
PastBars - how many bars back to model
LPOrder - order of linear prediction model; 0 to 1
FutBars - how many bars you want to forward predict
BurgWin - weighing function index, rectangular, hamming, or parabolic
Things to know
Normally, a simple moving average is calculated on source data. I've expanded this to 38 different averaging methods using Loxx's Moving Averages.
This indicator repaints
Included
Bar color muting
Further reading
Performance of the weighted burg methods of ar spectral estimation for pitch-synchronous analysis of voiced speech
The Burg algorithm for segments
Techniques for the Enhancement of Linear Predictive Speech Coding in Adverse Conditions
Related Indicators
Synthetic Price Action Generator
NOTICE:
First thing you need to know, it "DOES NOT" reflect the price of the ticker you will load it on. THIS IS NOT AN INDICATOR FOR TRADING! It's a developer tool solely generating random values that look exactly like the fractals we observe every single day. This script's generated candles are as fake as the never ending garbage news cycles we are often force fed and expected to believe by using carefully scripted narratives peddled as hypnotic truth to psychologically and emotionally influence you to the point of control by coercion and subjugation. I wanted to make the script's synthetic nature very clear using that analogy, it's dynamically artificial. Do not accidentally become disillusioned by this scripts values, make trading decisions from it, and lastly don't become victim to predatory media magic ministry parrots with pretty, handsome smiles, compelling you to board their ferris wheel of fear. Now, on to the good stuff...
BACKSTORY:
Occasionally I find myself in situations where I have to build analyzers in Pine to actually build novel quantitative analytic indicators and tools worthy of future use. These analyzers certainly don't exist on this platform, but usually are required to engineer and tweak algorithms of the highest quality with the finest computational caliber. I have numerous other synthesizers to publish besides this one.
For many reasons, I needed a synthetic environment to utilize the analyzers I built in Pine, to even pursue building some exotic indicators and algorithms. Pine doesn't allow sourcing of tuples. Not to mention, I required numerous Pine advancements to make long held dreams into tangible realities. Many Pine upgrades have arrived and MANY, MANY more are in need of implementation for all. Now that I have this, intending to use it in the future often when in need, you can now use it too. I do anticipate some skilled Pine poets will employ this intended handy utility to design and/or improved indicators for trading.
ORIGIN:
This was inspired by the brilliance from the world renowned ALGOmist John F. Ehlers, but it's taken on a completely alien form from its original DNA. Browsing on the internet for something else, I came across an article with a small code snippet, and I remembered an old wish of mine. I have long known that by flipping back and forth on specific tickers and timeframes in my Watchlist is not the most efficient way to evaluate indicators in multiple theatres of price action. I realized, I always wanted to possess and use this sort of tool, so... I put it into Pine form, but now have decided to inject it with Pine Script steroids. The outcome is highly mutable candle formations in a reusable mutagenic package, observable above and masquerading as genuine looking price candles.
OVERVIEW:
I guess you could call it a price action synthesizer, but I entitled it "Synthetic Price Action Generator" for those who may be searching for such a thing. You may find this more useful on the All or 5Y charts initially to witness indication from beginning (barstate.isfirst === barindex==0) to end (last_bar_index), but you may also use keyboard shortcuts + + to view the earliest plottable bars on any timeframe. I often use that keyboard shortcut to qualify an indicator through the entirety of it's runtime.
A lot can go wrong unexpectedly with indicator initialization, and you will never know it if you don't inspect it. Many recursively endowed Infinite Impulse Response (IIR) Filters can initialize with unintended results that minutely ring in slightly erroneous fashion for the entire runtime, beginning to end, causing deviations from "what should of been..." values with false signals. Looking closely at spg(), you will recognize that 3 EMAs are employed to manage and maintain randomness of CLOSE, HIGH, and LOW. In fact, any indicator's barindex==0 initialization can be inspected with the keyboard shortcuts above. If you see anything obviously strange in an authors indicator, please contact the developer if possible and respectfully notify them.
PURPOSE:
The primary intended application of this script is to offer developers from advanced to even novice skill levels assistance with building next generation indicators. Mostly, its purpose is for testing and troubleshooting indicators AND evaluating how they perform in a "manageable" randomized environment. Sometimes indicators flake out on rare but problematic price fluctuations, and this may help you with finding your issues/errata sooner rather than later. While the candles upon initial loading look pristine, by tweaking it to the minval/maxval parameter limits OR beyond with a few code modifications, you can generate unusual volatility, for instance... huge wicks. The minval= and maxval= limits are by default set to a comfort zone of operation. Massive wicks or candle bodies will undoubtedly affect your indication and often render them useless on tickers that exhibit that behavior, like WGMCF intraday currently.
Copy/paste boundaries are provided for relevant insertion into another script. Paste placement should happen at the very top of a script. Note that by overwriting the close, open, high, etc... values, your compiler will give you generous warnings of "variable shadowing" in abundance, but this is an expected part of applying it to your novel script, no worries. plotcandle() can be copied over too and enabled/disabled in Settings->Style. Always remember to fully remove this scripts' code and those assignments properly before actual trading use of your script occurs, AND specifically when publishing. The entirety of this provided code should never, never exist in a published indicator.
OTHER INTENTIONS:
Even though these are 100% synthetic generated price points, you will notice ALL of the fractal pseudo-patterns that commonly exist in the markets, are naturally occurring with this generator too. You can also swiftly immerse yourself in pattern recognition exercises with increased efficiency in real time by clicking any SPAG Setting in focus and then using the up/down arrow keys. I hope I explained potential uses adequately...
On a personal note, the existence of fractal symmetry often makes me wonder, do we truly live in a totality chaotic universe or is it ordered mathematically for some outcomes to a certain extent. I think both. My observations, it's a pre-deterministic reality completely influenced by infinitesimal amounts of sentient free will with unimaginable existing and emerging quantities. Some how an unknown mysterious mechanism governing the totality of universal physics and mathematics counts this 100.0% flawlessly and perpetually. Anyways, you can't change the past that long existed before your birth or even yesterday, but you can choose to dream, create, and forge the future into your desires and hopes. As always, shite always happens when your not looking for it. What you choose to do after stepping in it unintentionally... is totally up to you. :) Maybe this tool and tips provided will aid you in not stepping in an algo cachucha up to your ankles somehow.
SCRIPTING LESSONS PORTRAYED IN THIS SCRIPT:
Pine etiquette and code cleanliness
Overwrite capabilities of built-in Pine variables for testing indicators
Various techniques to organize Settings panel while providing ease of adjustment utility
Use of tooltip= to provide users adequate valuable information. Most people want to trade with indicators, not blindly make adjustments to them without any knowledge of their intended operation/effects
When available time provides itself, I will consider your inquiries, thoughts, and concepts presented below in the comments section, should you have any questions or comments regarding this indicator. When my indicators achieve more prevalent use by TV members, I may implement more ideas when they present themselves as worthy additions. Have a profitable future everyone!
LinearRegressionLibrary
Library "LinearRegressionLibrary" contains functions for fitting a regression line to the time series by means of different models, as well as functions for estimating the accuracy of the fit.
Linear regression algorithms:
RepeatedMedian(y, n, lastBar) applies repeated median regression (robust linear regression algorithm) to the input time series within the selected interval.
Parameters:
y :: float series, source time series (e.g. close)
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
Output:
mSlope :: float, slope of the regression line
mInter :: float, intercept of the regression line
TheilSen(y, n, lastBar) applies the Theil-Sen estimator (robust linear regression algorithm) to the input time series within the selected interval.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
Output:
tsSlope :: float, slope of the regression line
tsInter :: float, intercept of the regression line
OrdinaryLeastSquares(y, n, lastBar) applies the ordinary least squares regression (non-robust) to the input time series within the selected interval.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
Output:
olsSlope :: float, slope of the regression line
olsInter :: float, intercept of the regression line
Model performance metrics:
metricRMSE(y, n, lastBar, slope, intercept) returns the Root-Mean-Square Error (RMSE) of the regression. The better the model, the lower the RMSE.
Parameters:
y :: float series, source time series (e.g. close)
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
slope :: float, slope of the evaluated linear regression line
intercept :: float, intercept of the evaluated linear regression line
Output:
rmse :: float, RMSE value
metricMAE(y, n, lastBar, slope, intercept) returns the Mean Absolute Error (MAE) of the regression. MAE is similar to RMSE but is less sensitive to outliers. The better the model, the lower the MAE.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
slope :: float, slope of the evaluated linear regression line
intercept :: float, intercept of the evaluated linear regression line
Output:
mae :: float, MAE value
metricR2(y, n, lastBar, slope, intercept) returns the coefficient of determination (R squared) of the regression. The better the linear regression fits the data (compared to the sample mean), the closer the value of the R squared is to 1.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
slope :: float, slope of the evaluated linear regression line
intercept :: float, intercept of the evaluated linear regression line
Output:
Rsq :: float, R-squared score
Usage example:
//@version=5
indicator('ExampleLinReg', overlay=true)
// import the library
import tbiktag/LinearRegressionLibrary/1 as linreg
// define the studied interval: last 100 bars
int Npoints = 100
int lastBar = bar_index
int firstBar = bar_index - Npoints
// apply repeated median regression to the closing price time series within the specified interval
[slope, intercept] = linreg.RepeatedMedian(close, Npoints, lastBar)
// calculate the root-mean-square error of the obtained linear fit
rmse = linreg.metricRMSE(close, Npoints, lastBar, slope, intercept)
// plot the line and print the RMSE value
float y1 = intercept
float y2 = intercept + slope * (Npoints - 1)
if barstate.islast
    line.new(firstBar,y1, lastBar,y2)
    label.new(lastBar,y2,text='RMSE = '+str.format("{0,number,#.#}", rmse))
Cycles Strategy
This back-testable strategy is a modified version of the Stochastic strategy. It is meant to accompany the modified Stochastic indicator: "Cycles".
Modifications to the Stochastic strategy include;
1. Programmable settings for the Stochastic Periods (%K, %D and Smooth %K).
2. Programmable settings for the MACD Periods (Fast, Slow, Smoothing)
3. Programmable thresholds for %K, to qualify a potential entry strategy.
4. Programmable thresholds for %D, to qualify a potential exit strategy.
5. Buttons to choose which components to use in the trading algorithm.
6. Choose the month and year to back test.
The trading algorithm:
1. Entry: %K exceeds the upper/lower threshold and then hooks down/up in the direction of the Moving Average (MA). This is the minimum entry qualification (see the sketch after this list).
2. Exit: %D exceeds the lower/upper threshold while angled in the direction of the trade.
3. Additional entry filters include the direction of MACD, Signal, and %D, plus the "cliff": a long entry requires a higher high, and a short entry a lower low.
4. Strategy can only go "Long" or "Short" depending on the selected setting.
5. By matching the settings in the "Cycles" indicator, you can (almost) see what the strategy is doing.
6. Be sure to select the "Recalculate" buttons, to recalculate on every new Tick, for best results.
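A minimal Pine v5 sketch of the %K hook from step 1; the periods and thresholds shown are illustrative stand-ins for the strategy's inputs, and the MA/MACD direction filters are omitted:
```pinescript
//@version=5
indicator("Stochastic Hook Sketch")

k = ta.sma(ta.stoch(close, high, low, 14), 3)  // Smooth %K
d = ta.sma(k, 3)                               // %D
float upper = 80.0
float lower = 20.0

// hook down after exceeding the upper threshold, hook up after the lower one
bool hookDown = k[1] >= upper and k < k[1]
bool hookUp   = k[1] <= lower and k > k[1]

plot(k, "%K", color.blue)
plot(d, "%D", color.orange)
plotshape(hookDown, style = shape.triangledown, location = location.top, color = color.red)
plotshape(hookUp,   style = shape.triangleup,   location = location.bottom, color = color.green)
```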
Please click the Like button and leave a comment if you appreciate this script. Improvements will be implemented as time goes on.
I am not a licensed trade advisor. This strategy is for entertainment only. Use at your own risk!
Advanced Multi-Level S/R Zones
Advanced Multi-Level S/R Zones: The Comprehensive Guide
1. Introduction: The Evolution of Support & Resistance:
Support and Resistance (S/R) is the backbone of technical analysis. However, traditional methods of drawing these levels are often plagued by subjectivity. Two traders looking at the same chart will often draw two different lines. Furthermore, standard indicators often treat every price point equally, ignoring the critical context of Volume and Time.
The Advanced Multi-Level S/R Zones script represents a paradigm shift. It moves away from subjective line drawing and toward Quantitative Zoning. By utilizing statistical measures of variability (Standard Deviation, MAD, IQR) combined with Volume-Weighting and Time-Decay algorithms, this tool identifies where price is mathematically most likely to react. It treats S/R not as thin lines, but as dynamic zones of probability.
2. Core Logic and Mathematical Foundation:
To understand how to use this tool optimally, one must understand the "engine" under the hood. The script operates on four distinct pillars of logic:
A. Session-Based Data Collection:
The script does not look at every single tick. Instead, it aggregates data into "Sessions" (daily bars by default logic). It extracts the High, Low, and Total Volume for every session within the user-defined lookback period. This filters out intraday noise and focuses on the macro structure of the market.
B. Adaptive Statistical Variability:
Most Bollinger Band-style indicators use Standard Deviation (StdDev) to measure width. However, StdDev is heavily influenced by outliers (extreme wicks). This script offers a sophisticated Adaptive Method:
Skewness Detection: The script calculates the skewness of the price distribution.
Adaptive Selection: If the data is highly skewed (lots of outliers, typical in Crypto), it switches to MAD (Median Absolute Deviation), which is robust and ignores outliers. If the data is moderately skewed, it uses IQR (Interquartile Range). If the data is normal (Gaussian), it uses StdDev.
Benefit: This ensures the zone widths are accurate regardless of whether you are trading a stable Forex pair or a volatile Altcoin.
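A minimal sketch of that adaptive selection in Pine v5, assuming a moment-based skewness estimate and illustrative cut-offs (the script's actual thresholds are not documented in this description):
```pinescript
//@version=5
indicator("Adaptive Width Sketch")

// pick a variability measure based on how skewed the sample is
f_adaptiveWidth(array<float> arr) =>
    float mu = array.avg(arr)
    float sd = array.stdev(arr)
    int   n  = array.size(arr)
    // sample skewness: third central moment over sd^3
    float m3 = 0.0
    for i = 0 to n - 1
        m3 += math.pow(array.get(arr, i) - mu, 3)
    float skew = sd > 0 ? (m3 / n) / math.pow(sd, 3) : 0.0
    float w = sd  // roughly Gaussian: standard deviation
    if math.abs(skew) > 1.0
        // heavily skewed: median absolute deviation, robust to outliers
        float med = array.median(arr)
        devs = array.new_float()
        for i = 0 to n - 1
            array.push(devs, math.abs(array.get(arr, i) - med))
        w := array.median(devs)
    else if math.abs(skew) > 0.5
        // moderately skewed: interquartile range
        w := array.percentile_linear_interpolation(arr, 75) - array.percentile_linear_interpolation(arr, 25)
    w

// example: adaptive width of the last 50 session highs
var highs = array.new_float()
array.push(highs, high)
if array.size(highs) > 50
    array.shift(highs)
plot(f_adaptiveWidth(highs), "Adaptive width")
```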
C. The Weighting Engine (Volume + Time)
Not all price history is equal. This script assigns a "Weight Score" to every session based on two factors:
Volume Weighting: Sessions with massive volume (institutional activity) are given higher importance. A high formed on low volume is less significant than a high formed on peak volume.
Time Decay: Recent price action is more relevant than price action from 50 bars ago. The script applies a decay factor (default 0.85). This means a session from yesterday has 100% impact, while a session from 10 days ago has significantly less influence on the zone calculation.
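A minimal sketch of how the combined weight could look, under two assumptions not spelled out in the description: volume weight is the session's volume relative to its average, and age is counted in sessions:
```pinescript
//@version=5
indicator("Session Weight Sketch")

// weight of one session: volume importance times time decay
f_sessionWeight(float vol, float avgVol, int age, float decay) =>
    float volW  = avgVol > 0 ? vol / avgVol : 1.0  // high-volume sessions count more
    float timeW = math.pow(decay, age)             // e.g. 0.85^age: yesterday keeps full impact
    volW * timeW

// example: weight of a session 5 bars old with the default 0.85 decay
plot(f_sessionWeight(volume, ta.sma(volume, 20), 5, 0.85), "Weight")
```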
D. Clustering Algorithm
Once the data is weighted, the script runs a clustering algorithm. It looks for price levels where multiple session Highs (for Resistance) or Lows (for Support) congregate.
It requires a minimum number of points to form a zone (User Input: minPoints).
It merges nearby levels based on the Cluster Separation Factor.
This results in "Primary," "Secondary," and "Tertiary" zones based on the strength and quantity of data points in that cluster.
3. Detailed Features and Inputs Breakdown:
Group 1: Main Settings
Lookback Sessions (Default: 10): Defines how far back the script looks for pivots. A higher number (e.g., 50) creates long-term structural zones. A lower number (e.g., 5) creates short-term scalping zones.
Variability Method (Adaptive): As described above, leave this on "Adaptive" for the best results across different assets.
Zone Width Multiplier (Default: 0.75): Controls the vertical thickness of the zones. Increase this to 1.0 or 1.5 for highly volatile assets to ensure you catch the wicks.
Minimum Points per Zone: The strictness filter. If set to 3, a price level must be hit 3 times within the lookback to generate a zone. Higher numbers = fewer, but stronger zones.
Group 2: Weighting
Volume-Weighted Zones: Crucial for identifying "Smart Money" levels. Keep this TRUE.
Time Decay: Ensures the zones update dynamically. If price moves away from a level for a long time, the zone will fade in significance.
ATR-Normalized Zone Width: This is a dynamic volatility filter. If TRUE, the zone width expands and contracts based on the Average True Range. This is vital for maintaining accuracy during market breakouts or crashes.
Group 3: Zone Strength & Scoring
The script calculates a "Score" (0-100%) for every zone based on:
- Point Count: More hits = higher score.
- Touches: How many times price wicked into the zone recently.
- Intact Status: Has the zone been broken?
- Weight: Volume/Time weight of the constituent points.
- Track Zone Touches: Looks back n bars to see how often price respected this level.
- Touch Threshold: The sensitivity for counting a "touch."
Group 4: Visuals & Display
Extend Bars: How far to the right the boxes are drawn.
Show Labels: Displays the Score, Tier (Primary/Secondary), and Status (Retesting).
Detect Pivot Zones (Overlap): This is a killer feature. It detects where a Support Zone overlaps with a Resistance Zone.
Significance: These are "Flip Zones" (Old Resistance becomes New Support). They are colored differently (Orange by default) and represent high-probability entry areas.
Group 5: Signals & Alerts
Entry Signals: Plots Buy/Sell labels when price rejects a zone.
Detect Break & Retest: specifically looks for the "Break -> Pullback -> Bounce" pattern, labeled as "RETEST BUY/SELL".
Proximity Alert: Triggers when price gets within x% of a zone.
4. Understanding the Visuals (Interpreting the Chart)
When you load the script, you will see several visual elements. Here is how to read them:
The Boxes (Zones)
Red Shades: Resistance Zones.
Dark Red (Solid Border): Primary Resistance. The strongest wall.
Lighter Red (Dashed Border): Secondary/Tertiary. Weaker, but still relevant.
Green Shades: Support Zones.
Dark Green (Solid Border): Primary Support. The strongest floor.
Orange Boxes: Pivot Zones. These are areas where price has historically reacted as both support and resistance. These are the "Line in the Sand" for trend direction.
The Labels & Emojis
The script assigns emojis to zone strength:
🔥 (Fire): Score > 80%. A massive level. Expect a strong reaction.
⭐ (Star): Score > 60%. A solid structural level.
✓ (Check): Score > 40%. A standard level.
"⟳ RETESTING": Appears when a zone was broken, and price is currently pulling back to test it from the other side.
The Dashboard (Top Right)
A statistics table provides a "Head-Up Display" for the asset:
High/Low σ (Sigma): The variability of the highs and lows. If High σ is much larger than Low σ, it implies the tops are erratic (wicks) while bottoms are clean (flat).
Method: Shows which statistical method the Adaptive engine selected (e.g., "MAD (auto)").
ATR: Current volatility value used for normalization.
5. Strategies for Optimum Output
To get the most out of this script, you should not just blindly follow the lines. Use these specific strategies:
Strategy A: The "Zone Fade" (Range Trading)
This works best in sideways markets.
Identify a Primary Support (Green) and Primary Resistance (Red).
Wait for price to enter the zone.
Look for the "SUPPORT BOUNCE" or "RESISTANCE REJECTION" signal label.
Entry: Enter against the zone (Buy at support, Sell at resistance).
Stop Loss: Place just outside the zone width. Because the zones are calculated using volatility stats, a break of the zone usually means the trade is invalid.
Strategy B: The "Pivot Flip" (Trend Following)
This is the highest probability setup in trending markets.
Look for an Orange Pivot Zone.
Wait for price to break through a Resistance Zone cleanly.
Wait for the price to return to that zone (which may now turn Orange or act as Support).
Look for the "RETEST BUY" label.
Logic: Old resistance becoming new support is a classic sign of trend continuation. The script automates the detection of this exact geometric phenomenon.
Strategy C: The Volatility Squeeze
Look at the Dashboard. Compare High σ and Low σ.
If the values are dropping rapidly or becoming very small, the zones will contract (become narrow).
Narrow zones indicate a "Squeeze" or compression in price.
Prepare for a violent breakout. Do not fade (trade against) narrow zones; look to trade the breakout.
6. Optimization & Customization Guide
Different markets require different settings. Here is how to tune the script:
For Crypto & Volatile Stocks (Tesla, Nvidia)
Method: Set to Adaptive (Mandatory, as these assets have "Fat Tails").
Multiplier: Increase to 1.0 - 1.25. Crypto wicks are deep; you need wider zones to avoid getting stopped out prematurely.
Lookback: 20-30 sessions. Crypto has a long memory; short lookbacks generate too much noise.
For Forex (EURUSD, GBPJPY)
Method: You can force StdDev or IQR. Forex is more mean-reverting and Gaussian.
Multiplier: Decrease to 0.5 - 0.75. Forex levels are often very precise to the pip.
Volume Weighting: You may turn this OFF for Forex if your broker's volume data is unreliable (since Forex has no centralized volume), though tick volume often works fine.
For Scalping (1m - 15m Timeframes)
Lookback: Decrease to 5-10. You only care about the immediate session history.
Decay Factor: Decrease to 0.5. You want the script to forget about yesterday's price action very quickly.
Touch Lookback: Decrease to 20 bars.
For Swing Trading (4H - Daily Timeframes)
Lookback: Increase to 50.
Decay Factor: Increase to 0.95. Structural levels from weeks ago are still highly relevant.
Min Points: Increase to 3 or 4. Only show levels that have been tested multiple times.
7. Advantages Over Standard Tools:
| Feature | Standard S/R Indicator | Advanced Multi-Level S/R |
| --- | --- | --- |
| Calculation | Uses simple Pivots or Fractals | Uses Statistical Distributions (MAD/IQR) |
| Zone Width | Arbitrary or Fixed | Adaptive based on Volatility & ATR |
| Context | Ignores Volume | Volume Weighted (Smart Money tracking) |
| Time Relevance | Old levels = New levels | Time Decay (Recency bias applied) |
| Overlaps | Usually ignores overlaps | Detects Pivot Zones (Res/Sup Flip) |
| Scoring | None | 0-100% Strength Score per zone |
8. Conclusion:
The Advanced Multi-Level S/R Zones script is not just a drawing tool; it is a statistical analysis engine. By accounting for the skewness of data, the volume behind the moves, and the decay of time, it provides a strictly objective roadmap of the market structure.
For the optimum output, combine the Pivot Zone identification with the Retest Signals. This aligns you with the underlying flow of order blocks and prevents trading against the statistical probabilities of the market.
Wavelet Candle Constructor (Inc. Morlet) 2
Here is the detailed description of the **Wavelet Candle** construction principles based on the code provided.
This indicator is not a simple smoothing mechanism (like a Moving Average). It utilizes the **Discrete Wavelet Transform (DWT)**, specifically the Stationary variant (SWT / à Trous Algorithm), to separate "noise" (high frequencies) from the "trend" (low frequencies).
Here is how it works step-by-step:
### 1. The Wavelet Kernel (Coefficients)
The heart of the algorithm lies in the coefficients (the `h` array in the `get_coeffs` function). Each wavelet type represents a different set of mathematical weights that define how price data is analyzed:
* **Haar:** The simplest wavelet. It acts like a simple average of neighboring candles. It reacts quickly but produces a "boxy" or "jagged" output.
* **Daubechies 4:** An asymmetric wavelet. It is better at detecting sudden trend changes and the fractal structure of the market, though it introduces a slight phase shift.
* **Symlet / Coiflet:** More symmetric than Daubechies. They attempt to minimize lag (phase shift) while maintaining smoothness.
* **Morlet (Gaussian):** Implemented in this code as a Gaussian approximation (bell curve). It provides the smoothest, most "organic" effect, ideal for filtering noise without jagged edges.
### 2. The Convolution Engine
Instead of a simple average, the code performs a mathematical operation called **convolution**:
For every candle on the chart, the algorithm takes past prices, multiplies them by the Wavelet Kernel weights, and sums them up. This acts as a **digital low-pass filter**—it allows the main price movements to pass through while cutting out the noise.
###3. The "à Trous" Algorithm (Stationary Wavelet Transform)This is the key difference between this indicator and standard data compression.
In a classic wavelet transform, every second data point is usually discarded (downsampling). Here, the **Stationary** approach is used:
* **Level 1:** Convolution every **1** candle.
* **Level 2:** Convolution every **2** candles (skipping one in between).
* **Level 3:** Convolution every **4** candles.
* **Level 4:** Convolution every **8** candles.
Because of this, **we do not lose time resolution**. The Wavelet Candle is drawn exactly where the original candle is, but it represents the trend structure from a broader perspective. The higher the `Decomposition Level`, the deeper the denoising (looking at a wider context).
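A sketch of a single smoothing level under these conventions; the function name, the Haar stand-in kernel, and the assumption that the kernel weights sum to 1 are all illustrative:
```pinescript
//@version=5
indicator("A Trous Level Sketch", overlay = true, max_bars_back = 500)

// one stationary-wavelet ("a trous") pass: convolve the series with kernel h,
// sampling past bars at a dilation of 2^(level - 1), so nothing is downsampled
// and time resolution is preserved
f_swtLevel(float src, array<float> h, int level) =>
    int dilation = int(math.pow(2, level - 1))
    float acc = 0.0
    for i = 0 to array.size(h) - 1
        acc += array.get(h, i) * src[i * dilation]
    acc

// Haar-like averaging kernel as a stand-in for the real coefficient sets
var haar = array.from(0.5, 0.5)
plot(f_swtLevel(close, haar, 3), "Level-3 smooth", color.blue)
```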
### 4. Independent OHLC Processing
The algorithm processes each component of the candle separately:
1. Filters the **Open** series.
2. Filters the **High** series.
3. Filters the **Low** series.
4. Filters the **Close** series.
This results in four smoothed curves: `w_open`, `w_high`, `w_low`, `w_close`.
### 5. Geometric Reconstruction (Logic Repair)
Since each price series is filtered independently, the mathematics can sometimes lead to physically impossible situations (e.g., the smoothed `Low` being higher than the smoothed `High`).
The code includes a repair section:
```pinescript
real_high = math.max(w_high, w_low)
real_high := math.max(real_high, math.max(w_open, w_close))
// Same logic for Low (math.min)
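// expanded from the comment above: a sketch of the symmetric repair for the Low
real_low = math.min(w_low, w_high)
real_low := math.min(real_low, math.min(w_open, w_close))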
```
This guarantees that the final Wavelet Candle always has a valid construction: wicks encapsulate the body, and the `High` is strictly the highest point.
---
### Summary of Application
This construction makes the Wavelet Candle an **excellent trend-following tool**.
* If the candle is **green**, it means that after filtering the noise (according to the selected wavelet), the market energy is bullish.
* If it is **red**, the energy is bearish.
* The wicks show volatility that exists within the bounds of the selected decomposition level.
Here is a descriptive comparison of **Wavelet Candles** against other popular chart types: a narrative explanation focusing on the differences in mechanics, interpretation philosophy, and the specific pros and cons of each approach.
---
### 1. Wavelet Candles vs. Standard (Japanese) Candles
This is a clash between "the raw truth" and "mathematical interpretation." Standard Japanese candles display raw market data—exactly what happened on the exchange. Wavelet Candles are a synthetic image created by a signal processor.
**Differences and Philosophy:**
A standard candle is full of emotion and noise. Every single price tick impacts its shape. The Wavelet Candle treats this noise as interference that must be removed to reveal the true energy of the trend. Wavelets decompose the price, reject high frequencies (noise), and reconstruct the candle using only low frequencies (the trend).
* **Wavelet Advantages:** The main advantage is clarity. Where a standard chart shows a series of confusing candles (e.g., a long green one, followed by a short red one, then a doji), the Wavelet Candle often draws a smooth, uniform wave in a single color. This makes it psychologically easier to hold a position and ignore temporary pullbacks.
* **Wavelet Disadvantages:** The biggest drawback is the loss of price precision. The Open, Close, High, and Low values on a Wavelet candle are calculated, not real. You **cannot** place Stop Loss orders or enter trades based on these levels, as the actual market price might be in a completely different place than the smoothed candle suggests. They also introduce lag, which depends on the chosen wavelet—whereas a standard candle reacts instantly.
### 2. Wavelet Candles vs. Heikin Ashi
These are close cousins, but they share very different "DNA." Both methods aim to smooth the trend, but they achieve it differently.
**Differences and Philosophy:**
Heikin Ashi (HA) is based on a simple recursive arithmetic average. The current HA candle depends on the previous one, making it react linearly.
The Wavelet Candle uses **convolution**. This means the shape of the current candle depends on a "window" (group) of past candles multiplied by weights (Gaussian curve, Daubechies, etc.). This results in a more "organic" and elastic reaction.
* **Wavelet Advantages:** Wavelets are highly customizable. With Heikin Ashi, you are stuck with one algorithm. With Wavelet Candles, you can change the kernel to "Haar" for a fast (boxy) reaction or "Morlet" for an ultra-smooth, wave-like effect. Wavelets handle the separation of market cycles better than simple HA averaging, which can generate many false color flips during consolidation.
* **Wavelet Disadvantages:** They are computationally much more complex and harder to understand intuitively ("Why is this candle red if the price is going up?"). In strong, vertical breakouts (pumps), Heikin Ashi often "chases" the price faster, whereas deep wavelet decomposition (High Level) may show more inertia and change color more slowly.
### 3. Wavelet Candles vs. Renko
This compares two different dimensions: Time vs. Price.
**Differences and Philosophy:**
Renko completely ignores time. A new brick is formed only when the price moves by a specific amount. If the market stands still for 5 hours, nothing happens on a Renko chart.
The Wavelet Candle is **time-synchronous**. If the market stands still for 5 hours, the Wavelet algorithm will draw a series of flat, small candles (the "wavelet decays").
* **Wavelet Advantages:** They preserve the context of time, which is crucial for traders who consider trading sessions (London/New York) or macroeconomic data releases. On a wavelet chart, you can see when volatility drops (candles become small), whereas Renko hides periods of stagnation, which can be misleading for options traders or intraday strategies.
* **Wavelet Disadvantages:** In sideways trends (chop), Wavelet Candles—despite the smoothing—will still draw a "snake" that flips colors (unless you set a very high decomposition level). Renko can remain perfectly clean and static during the same period, not drawing any new bricks, which for many traders is the ultimate filter against overtrading in a flat market.
### Summary
**Wavelet Candles** are a tool for the analyst who wants to visualize the **structure of the wave and market cycle**, accepting some lag in exchange for noise reduction, but without giving up the time axis (like in Renko) or relying on simple averaging (like in Heikin Ashi). It serves best as a "roadmap" for the trend rather than a "sniper scope" for precise entries.
Kalman Filter [DCAUT]
█ Kalman Filter
📊 ORIGINALITY & INNOVATION
The Kalman Filter represents an important adaptation of aerospace signal processing technology to financial market analysis. Originally developed by Rudolf E. Kalman in 1960 for navigation and guidance systems, this implementation brings the algorithm's noise reduction capabilities to price trend analysis.
This implementation addresses a common challenge in technical analysis: the trade-off between smoothness and responsiveness. Traditional moving averages must choose between being smooth (with increased lag) or responsive (with increased noise). The Kalman Filter improves upon this limitation through its recursive estimation approach, which continuously balances historical trend information with current price data based on configurable noise parameters.
The key advancement lies in the algorithm's adaptive weighting mechanism. Rather than applying fixed weights to historical data like conventional moving averages, the Kalman Filter dynamically adjusts its trust between the predicted trend and observed prices. This allows it to provide smoother signals during stable periods while maintaining responsiveness during genuine trend changes, helping to reduce whipsaws in ranging markets while not missing significant price movements.
📐 MATHEMATICAL FOUNDATION
The Kalman Filter operates through a two-phase recursive process:
Prediction Phase:
The algorithm first predicts the next state based on the previous estimate:
State Prediction: Estimates the next value based on current trend
Error Covariance Prediction: Calculates uncertainty in the prediction
Update Phase:
Then updates the prediction based on new price observations:
Kalman Gain Calculation: Determines the weight given to new measurements
State Update: Combines prediction with observation based on calculated gain
Error Covariance Update: Adjusts uncertainty estimate for next iteration
Core Parameters:
Process Noise (Q): Represents uncertainty in the trend model itself. Higher values indicate the trend can change more rapidly, making the filter more responsive to price changes.
Measurement Noise (R): Represents uncertainty in price observations. Higher values indicate less trust in individual price points, resulting in smoother output.
Kalman Gain Formula:
The Kalman Gain determines how much weight to give new observations versus predictions:
K = P(k|k-1) / (P(k|k-1) + R)
Where:
K is the Kalman Gain (0 to 1)
P(k|k-1) is the predicted error covariance
R is the measurement noise parameter
When K approaches 1, the filter trusts new measurements more (responsive).
When K approaches 0, the filter trusts its prediction more (smooth).
This dynamic adjustment mechanism allows the filter to adapt to changing market conditions automatically, providing an advantage over fixed-weight moving averages.
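For readers who prefer code, a minimal Pine Script sketch of this one-dimensional recursion follows (variable names and defaults are illustrative; the published indicator's internals may differ):
//@version=5
indicator("Kalman Filter (sketch)", overlay=true)
src = input.source(ohlc4, "Source")
Q = input.float(0.05, "Process Noise (Q)", step=0.01)
R = input.float(1.0, "Measurement Noise (R)", step=0.1)
var float est = na   // state estimate
var float P = 1.0    // error covariance
if na(est)
    est := src       // seed with the first observation
else
    P_pred = P + Q                 // prediction phase: uncertainty grows by Q
    K = P_pred / (P_pred + R)      // Kalman gain: weight given to the new observation
    est := est + K * (src - est)   // update phase: blend prediction with the new price
    P := (1 - K) * P_pred          // shrink uncertainty after the update
plot(est, "Kalman", color = est > est[1] ? color.green : est < est[1] ? color.red : color.gray)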
📊 COMPREHENSIVE SIGNAL ANALYSIS
Visual Trend Indication:
The Kalman Filter line provides color-coded trend information:
Green Line: Indicates the filter value is rising, suggesting upward price momentum
Red Line: Indicates the filter value is falling, suggesting downward price momentum
Gray Line: Indicates sideways movement with no clear directional bias
Crossover Signals:
Price-filter crossovers generate trading signals:
Golden Cross: Price crosses above the Kalman Filter line, suggesting potential bullish momentum development and a possibly favorable environment for long positions; the filter will naturally turn green as it adapts to price moving higher.
Death Cross: Price crosses below the Kalman Filter line, suggesting potential bearish momentum development and possible grounds for position reduction or shorts; the filter will naturally turn red as it adapts to price moving lower.
Trend Confirmation:
The filter serves as a dynamic trend baseline:
Price Consistently Above Filter: Confirms established uptrend
Price Consistently Below Filter: Confirms established downtrend
Frequent Crossovers: Suggests ranging or choppy market conditions
Signal Reliability Factors:
Signal quality varies based on market conditions:
Higher reliability in trending markets with sustained directional moves
Lower reliability in choppy, range-bound conditions with frequent reversals
Parameter adjustment can help adapt to different market volatility levels
🎯 STRATEGIC APPLICATIONS
Trend Following Strategy:
Use the Kalman Filter as a dynamic trend baseline:
Enter long positions when price crosses above the filter
Enter short positions when price crosses below the filter
Exit when price crosses back through the filter in the opposite direction
Monitor filter slope (color) for trend strength confirmation
Dynamic Support/Resistance:
The filter can act as a moving support or resistance level:
In uptrends: Filter often provides dynamic support for pullbacks
In downtrends: Filter often provides dynamic resistance for bounces
Price rejections from the filter can offer entry opportunities in trend direction
Filter breaches may signal potential trend reversals
Multi-Timeframe Analysis:
Combine Kalman Filters across different timeframes:
Higher timeframe filter identifies primary trend direction
Lower timeframe filter provides precise entry and exit timing
Trade only in direction of higher timeframe trend for better probability
Use lower timeframe crossovers for position entry/exit within major trend
Volatility-Adjusted Configuration:
Adapt parameters to match market conditions:
Low Volatility Markets (Forex majors, stable stocks): Use lower process noise for stability, use lower measurement noise for sensitivity
Medium Volatility Markets (Most equities): Process noise default (0.05) provides balanced performance, measurement noise default (1.0) for general-purpose filtering
High Volatility Markets (Cryptocurrencies, volatile stocks): Use higher process noise for responsiveness, use higher measurement noise for noise reduction
Risk Management Integration:
Use filter as a trailing stop-loss level in trending markets
Tighten stops when price moves significantly away from filter (overextension)
Wider stops in early trend formation when filter is just establishing direction
Consider position sizing based on distance between price and filter
📋 DETAILED PARAMETER CONFIGURATION
Source Selection:
Determines which price data feeds the algorithm:
OHLC4 (default): Uses average of open, high, low, close for balanced representation
Close: Focuses purely on closing prices for end-of-period analysis
HL2: Uses midpoint of high and low for range-based analysis
HLC3: Typical price, gives more weight to closing price
HLCC4: Weighted close price, emphasizes closing values
Process Noise (Q) - Adaptation Speed Control:
This parameter controls how quickly the filter adapts to changes:
Technical Meaning:
Represents uncertainty in the underlying trend model
Higher values allow the estimated trend to change more rapidly
Lower values assume the trend is more stable and slow-changing
Practical Impact:
Lower Values: Produces very smooth output with minimal noise, slower to respond to genuine trend changes, best for long-term trend identification, reduces false signals in choppy markets
Medium Values: Balanced responsiveness and smoothness, suitable for swing trading applications, default (0.05) works well for most markets
Higher Values: More responsive to price changes, may produce more false signals in ranging markets, better for short-term trading and day trading, captures trend changes earlier, adjust freely based on market characteristics
Measurement Noise (R) - Smoothing Control:
This parameter controls how much the filter trusts individual price observations:
Technical Meaning:
Represents uncertainty in price measurements
Higher values indicate less trust in individual price points
Lower values make each price observation more influential
Practical Impact:
Lower Values: More reactive to each price change, less smoothing with more noise in output, may produce choppy signals
Medium Values: Balanced smoothing and responsiveness, default (1.0) provides general-purpose filtering
Higher Values: Heavy smoothing for very noisy markets, reduces whipsaws significantly but increases lag in trend change detection, best for cryptocurrency and highly volatile assets, can use larger values for extreme smoothing
Parameter Interaction:
The ratio between Process Noise and Measurement Noise determines overall behavior:
High Q / Low R: Very responsive, minimal smoothing
Low Q / High R: Very smooth, maximum smoothing (and the most lag)
Balanced Q and R: Middle ground for most applications
Optimization Guidelines:
Start with default values (Q=0.05, R=1.0)
If too many false signals: Increase R or decrease Q
If missing trend changes: Decrease R or increase Q
Test across different market conditions before live use
Consider different settings for different timeframes
📈 PERFORMANCE ANALYSIS & COMPETITIVE ADVANTAGES
Comparison with Traditional Moving Averages:
Versus Simple Moving Average (SMA):
The Kalman Filter typically responds faster to genuine trend changes
Produces smoother output than SMA of comparable length
Better noise reduction in ranging markets
More configurable for different market conditions
Versus Exponential Moving Average (EMA):
Similar responsiveness but with better noise filtering
Less prone to whipsaws in choppy conditions
More adaptable through dual parameter control (Q and R)
Can be tuned to match or exceed EMA responsiveness while maintaining smoothness
Versus Hull Moving Average (HMA):
Different noise reduction approach (recursive estimation vs. weighted calculation)
Kalman Filter offers more intuitive parameter adjustment
Both reduce lag effectively, but through different mechanisms
Kalman Filter may handle sudden volatility changes more gracefully
Response Characteristics:
Lag Time: Moderate and configurable through parameter adjustment
Noise Reduction: Good to excellent, particularly in volatile conditions
Trend Detection: Effective across multiple timeframes
False Signal Rate: Typically lower than simple moving averages in ranging markets
Computational Efficiency: Efficient recursive calculation suitable for real-time use
Optimal Use Cases:
Markets with mixed trending and ranging periods
Assets with moderate to high volatility requiring noise filtering
Multi-timeframe analysis requiring consistent methodology
Systematic trading strategies needing reliable trend identification
Situations requiring balance between responsiveness and smoothness
Known Limitations:
Parameters require adjustment for different market volatility levels
May still produce false signals during extreme choppy conditions
No single parameter set works optimally for all market conditions
Requires complementary indicators for comprehensive analysis
Historical performance characteristics may not persist in changing market conditions
USAGE NOTES
This indicator is designed for technical analysis and educational purposes. The Kalman Filter's effectiveness varies with market conditions, tending to perform better in markets with clear trending phases interrupted by consolidation. Like all technical indicators, it has limitations and should not be used as the sole basis for trading decisions, but rather as part of a comprehensive trading approach.
Algorithm performance varies with market conditions, and past characteristics do not guarantee future results. Always test thoroughly with different parameter settings across various market conditions before using in live trading. No technical indicator can predict future price movements with certainty, and all trading involves risk of loss.
ICT SIlver Bullet Trading Windows UK times🎯 Purpose of the Indicator
It’s designed to highlight key ICT “macro” and “micro” windows of opportunity, i.e., time ranges where liquidity grabs and algorithmic setups are most likely to occur. The ICT Silver Bullet concept is built on the idea that institutions execute in recurring intraday windows, and these often produce high-probability setups.
🕰️ Windows
London Macro Window
10:00 – 11:00 UK time
This aligns with a major liquidity window after the London equities open settles and London + EU traders reposition.
You’re looking for setups like liquidity sweeps, MSS (market structure shift), and FVG entries here.
New York Macro Window
15:00 – 16:00 UK time (10:00 – 11:00 NY time)
This is right after the NY equities open, a key ICT window for volatility and liquidity grabs.
Power Hour
Usually 20:00 – 21:00 UK time (3pm–4pm NY time), the last trading hour of NY equities.
ICT often refers to this as another manipulation window where setups can form before the daily close.
🔍 What the Indicator Does
Draws session boxes or shading: so you can visually see the London/NY/Power Hour windows directly on your chart.
Macro vs. Micro time frames:
Macro windows → The ones you set (London & NY) are the major daily algo execution windows.
Micro windows → Within those boxes, ICT expects smaller intraday setups (like a Silver Bullet entry from a sweep + FVG).
Guides your trade selection: it tells you when not to hunt trades everywhere, but instead to wait for price action confirmation inside those boxes.
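A minimal Pine sketch of this kind of window shading, assuming the UK times listed above (session strings, colors, and transparency are illustrative):
//@version=5
indicator("Silver Bullet Windows (sketch)", overlay=true)
// Sessions expressed in UK time via the Europe/London timezone
inLondon = not na(time(timeframe.period, "1000-1100", "Europe/London"))   // London macro
inNewYork = not na(time(timeframe.period, "1500-1600", "Europe/London"))  // New York macro
inPowerHr = not na(time(timeframe.period, "2000-2100", "Europe/London"))  // Power Hour
bgcolor(inLondon ? color.new(color.blue, 85) : na)
bgcolor(inNewYork ? color.new(color.orange, 85) : na)
bgcolor(inPowerHr ? color.new(color.purple, 85) : na)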
🧩 How This Fits ICT Silver Bullet Trading
The ICT Silver Bullet strategy says:
Wait for one of the macro windows (London or NY).
Look for liquidity sweep → market structure shift → FVG.
Enter with defined risk inside that hour.
This indicator essentially does step 1 for you: it makes those high-probability windows visually obvious, so you don’t waste time trading random hours where algos aren’t active.
Auto-Fit Growth Trendline# **Theoretical Algorithmic Principles of the Auto-Fit Growth Trendline (AFGT)**
## **🎯 What Does This Algorithm Do?**
The Auto-Fit Growth Trendline is an advanced technical analysis system that **automates the identification of long-term growth trends** and **projects future price levels** based on historical cyclical patterns.
### **Primary Functionality:**
- **Automatically detects** the most significant lows in regular periods (monthly, quarterly, semi-annually, annually)
- **Constructs a dynamic trendline** that connects these historical lows
- **Projects the trend into the future** with high mathematical precision
- **Generates Fibonacci bands** that act as dynamic support and resistance levels
- **Automatically adapts** to different timeframes and market conditions
### **Strategic Purpose:**
The algorithm is designed to identify **fundamental value zones** where price has historically found support, enabling traders to:
- Identify optimal entry points for long positions
- Establish realistic price targets based on mathematical projections
- Recognize dynamic support and resistance levels
- Anticipate long-term price movements
---
## **🧮 Core Mathematical Foundations**
### **Adaptive Temporal Segmentation Theory**
The algorithm is based on **dynamic temporal partition theory**, where time is divided into mathematically coherent uniform intervals. It uses modular transformations to create bijective mappings between continuous timestamps and discrete periods, ensuring each temporal point belongs uniquely to a specific period.
**What does this achieve?** It allows the algorithm to automatically identify natural market cycles (annual, quarterly, etc.) without manual intervention, adapting to the inherent periodicity of each asset.
The temporal mapping function implements a **discrete affine transformation** that normalizes different frequencies (monthly, quarterly, semi-annual, annual) to a space of unique identifiers, enabling consistent cross-temporal comparative analysis.
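In Pine terms, such a mapping can be sketched as assigning every bar a discrete period identifier; the month-based bucketing below is an assumption for illustration only:
//@version=5
indicator("Period mapping (sketch)", overlay=true)
periodMonths = input.int(3, "Period length in months")  // 1=monthly, 3=quarterly, 12=annual
monthsSinceEpoch = year * 12 + (month - 1)              // discrete affine transform of calendar time
periodId = math.floor(monthsSinceEpoch / periodMonths)  // unique id per period
plot(periodId, "Period ID")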
---
## **📊 Local Extrema Detection Theory**
### **Multi-Point Retrospective Validation Principle**
Local minima detection is founded on **relative extrema theory with sliding window**. Instead of using a simple minimum finder, it implements a cross-validation system that examines the persistence of the extremum across multiple historical periods.
**What problem does this solve?** It eliminates false minima caused by temporal volatility, identifying only those points that represent true historical support levels with statistical significance.
This approach is based on the **statistical confirmation principle**, where a minimum is only considered valid if it maintains its extremum condition during a defined observation period, significantly reducing false positives caused by transitory volatility.
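A common Pine idiom for this persistence requirement is a pivot low that must hold for a fixed number of bars before it is accepted (a rough sketch, not necessarily the script's exact logic):
//@version=5
indicator("Validated lows (sketch)", overlay=true)
confirmBars = input.int(10, "Confirmation bars")
pLow = ta.pivotlow(low, confirmBars, confirmBars)  // stays na until the low has held for confirmBars bars
if not na(pLow)
    label.new(bar_index - confirmBars, pLow, "validated low", style=label.style_label_up)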
---
## **🔬 Robust Interpolation Theory with Outlier Control**
### **Contextual Adaptive Interpolation Model**
The mathematical core uses **piecewise linear interpolation with adaptive outlier correction**. The key innovation lies in implementing a **contextual anomaly detector** that identifies not only absolute extreme values, but relative deviations to the local context.
**Why is this important?** Financial markets contain extreme events (crashes, bubbles) that can distort projections. This system identifies and appropriately weights them without completely eliminating them, preserving directional information while attenuating distortions.
### **Implicit Bayesian Smoothing Algorithm**
When an outlier is detected (deviation >300% of local average), the system applies a **simplified Kalman filter** that combines the current observation with a local trend estimation, using a weight factor that preserves directional information while attenuating extreme fluctuations.
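A sketch of that dampening step, assuming SMA/EMA stand-ins for the local average and trend estimate and an illustrative weight factor:
//@version=5
indicator("Outlier dampening (sketch)", overlay=true)
localAvg = ta.sma(close, 20)                              // assumed local context window
trendEst = ta.ema(close, 50)                              // assumed local trend estimate
isOutlier = math.abs(close - localAvg) > 3.0 * localAvg   // the ">300% of local average" rule
w = isOutlier ? 0.3 : 1.0                                 // assumed weight: attenuate, don't discard
blended = w * close + (1.0 - w) * trendEst                // preserves direction, damps magnitude
plot(blended, "Dampened price")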
---
## **📈 Stabilized Extrapolation Theory**
### **Exponential Growth Model with Dampening**
Extrapolation is based on a **modified exponential growth model with progressive dampening**. It uses multiple historical points to calculate local growth ratios, implements statistical filtering to eliminate outliers, and applies a dampening factor that increases with extrapolation distance.
**What advantage does this offer?** Long-term projections in finance tend to be exponentially unrealistic. This system maintains short-to-medium term accuracy while converging toward realistic long-term projections, avoiding the typical "exponential explosions" of other methods.
### **Asymptotic Convergence Principle**
For long-term projections, the algorithm implements **controlled asymptotic convergence**, where growth ratios gradually converge toward pre-established limits, avoiding unrealistic exponential projections while preserving short-to-medium term accuracy.
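The dampening idea fits in a few lines: the growth ratio is pulled toward 1.0 (flat) as the projection distance grows. The ratio and dampening schedule below are illustrative, not the script's actual values:
//@version=5
indicator("Dampened extrapolation (sketch)", overlay=true)
ratio = input.float(1.04, "Per-period growth ratio")  // e.g. measured between anchor lows
steps = input.int(10, "Periods projected ahead")
damp = 1.0 / (1.0 + 0.05 * steps)                     // assumed dampening schedule
dampedRatio = 1.0 + (ratio - 1.0) * damp              // converges to 1.0 as steps grows
projection = close * math.pow(dampedRatio, steps)
plot(projection, "Projected level")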
---
## **🌟 Dynamic Fibonacci Projection Theory**
### **Continuous Proportional Scaling Model**
Fibonacci bands are constructed through **uniform proportional scaling** of the base curve, where each level represents a linear transformation of the main curve by a constant factor derived from the Fibonacci sequence.
**What is its practical utility?** It provides dynamic resistance and support levels that move with the trend, offering price targets and profit-taking points that automatically adapt to market evolution.
### **Topological Preservation Principle**
The system maintains the **topological properties** of the base curve in all Fibonacci projections, ensuring that spatial and temporal relationships are consistently preserved across all resistance/support levels.
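Because each band is a constant proportional scaling of the base curve, the construction reduces to a handful of multiplications; here is a sketch with an EMA standing in for the fitted growth curve:
//@version=5
indicator("Fibonacci bands (sketch)", overlay=true)
base = ta.ema(close, 200)          // stand-in for the fitted trendline
plot(base, "Base")
plot(base * 0.618, "Fib 0.618")
plot(base * 1.618, "Fib 1.618")
plot(base * 2.618, "Fib 2.618")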
---
## **⚡ Adaptive Computational Optimization**
### **Multi-Scale Resolution Theory**
It implements **automatic multi-resolution analysis** where data granularity is dynamically adjusted according to the analysis timeframe. It uses the **adaptive Nyquist principle** to optimize the signal-to-noise ratio according to the temporal observation scale.
**Why is this necessary?** Different timeframes require different levels of detail. A 1-minute chart needs more granularity than a monthly one. This system automatically optimizes resolution for each case.
### **Adaptive Density Algorithm**
Calculation point density is optimized through **adaptive sampling theory**, where calculation frequency is adjusted according to local trend curvature and analysis timeframe, balancing visual precision with computational efficiency.
---
## **🛡️ Robustness and Fault Tolerance**
### **Graceful Degradation Theory**
The system implements **multi-level graceful degradation**, where under error conditions or insufficient data, the algorithm progressively falls back to simpler but reliable methods, maintaining basic functionality under any condition.
**What does this guarantee?** That the indicator functions consistently even with incomplete data, new symbols with limited history, or extreme market conditions.
### **State Consistency Principle**
It uses **mathematical invariants** to guarantee that the algorithm's internal state remains consistent between executions, implementing consistency checks that validate data structure integrity in each iteration.
---
## **🔍 Key Theoretical Innovations**
### **A. Contextual vs. Absolute Outlier Detection**
It revolutionizes traditional outlier detection by considering not only the absolute magnitude of deviations, but their relative significance within the local context of the time series.
**Practical impact:** It distinguishes between legitimate market movements and technical anomalies, preserving important events like breakouts while filtering noise.
### **B. Extrapolation with Weighted Historical Memory**
It implements a memory system that weights different historical periods according to their relevance for current prediction, creating projections more adaptable to market regime changes.
**Competitive advantage:** It automatically adapts to fundamental changes in asset dynamics without requiring manual recalibration.
### **C. Automatic Multi-Timeframe Adaptation**
It develops an automatic temporal resolution selection system that optimizes signal extraction according to the intrinsic characteristics of the analysis timeframe.
**Result:** A single indicator that functions optimally from 1-minute to monthly charts without manual adjustments.
### **D. Intelligent Asymptotic Convergence**
It introduces the concept of controlled asymptotic convergence in financial extrapolations, where long-term projections converge toward realistic limits based on historical fundamentals.
**Added value:** Mathematically sound long-term projections that avoid the unrealistic extremes typical of other extrapolation methods.
---
## **📊 Complexity and Scalability Theory**
### **Optimized Linear Complexity Model**
The algorithm maintains **linear computational complexity** O(n) in the number of historical data points, guaranteeing scalability for extensive time series analysis without performance degradation.
### **Temporal Locality Principle**
It implements **temporal locality**, where the most expensive operations are concentrated in the most relevant temporal regions (recent periods and near projections), optimizing computational resource usage.
---
## **🎯 Convergence and Stability**
### **Probabilistic Convergence Theory**
The system guarantees **probabilistic convergence** toward the real underlying trend, where projection accuracy increases with the amount of available historical data, following **law of large numbers** principles.
**Practical implication:** The more history an asset has, the more accurate the algorithm's projections will be.
### **Guaranteed Numerical Stability**
It implements **intrinsic numerical stability** through the use of robust floating-point arithmetic and validations that prevent overflow, underflow, and numerical error propagation.
**Result:** Reliable operation even with extreme-priced assets (from satoshis to thousand-dollar stocks).
---
## **💼 Comprehensive Practical Application**
**The algorithm functions as a "financial GPS"** that:
1. **Identifies where we've been** (significant historical lows)
2. **Determines where we are** (current position relative to the trend)
3. **Projects where we're going** (future trend with specific price levels)
4. **Provides alternative routes** (Fibonacci bands as alternative targets)
This theoretical framework represents an innovative synthesis of time series analysis, approximation theory, and computational optimization, specifically designed for long-term financial trend analysis with robust and mathematically grounded projections.
RSI-Adaptive T3 [ChartPrime]The RSI-Adaptive T3 is a precision trend-following tool built around the legendary T3 smoothing algorithm developed by Tim Tillson, designed to enhance responsiveness while reducing lag compared to traditional moving averages. The current implementation takes it a step further by dynamically adapting the smoothing length based on real-time RSI conditions — allowing the T3 to “breathe” with market volatility. This dynamic length makes the curve faster in trending moves and smoother during consolidations.
To help traders visualize volatility and directional momentum, adaptive volatility bands are plotted around the T3 line, with visual crossover markers and a dynamic info panel on the chart. It’s ideal for identifying trend shifts, spotting momentum surges, and adapting strategy execution to the pace of the market.
HOW IT WORKS
At its core, this indicator fuses two ideas:
The T3 Moving Average — a 6-stage recursively smoothed exponential average created by Tim Tillson, designed to reduce lag without sacrificing smoothness. It uses a volume factor to control curvature.
A Dynamic Length Engine — powered by the RSI. When RSI is low (market oversold), the T3 becomes shorter and more reactive. When RSI is high (overbought), the T3 becomes longer and smoother. This creates a feedback loop between price momentum and trend sensitivity.
// Step 1: Adaptive length via RSI
rsi = ta.rsi(src, rsiLen)
rsi_scale = 1 - rsi / 100
len = math.round(minLen + (maxLen - minLen) * rsi_scale)
pine_ema(src, length) =>
    alpha = 2.0 / (length + 1)  // float literal: integer division would truncate to 0
    sum = 0.0
    sum := na(sum[1]) ? src : alpha * src + (1 - alpha) * nz(sum[1])
    sum
// Step 2: T3 with adaptive length
e1 = pine_ema(src, len)
e2 = pine_ema(e1, len)
e3 = pine_ema(e2, len)
e4 = pine_ema(e3, len)
e5 = pine_ema(e4, len)
e6 = pine_ema(e5, len)
v = input.float(0.7, "Volume Factor")  // Tillson volume factor; assumed default, declared once in the full script
c1 = -v * v * v
c2 = 3 * v * v + 3 * v * v * v
c3 = -6 * v * v - 3 * v - 3 * v * v * v
c4 = 1 + 3 * v + v * v * v + 3 * v * v
t3 = c1 * e6 + c2 * e5 + c3 * e4 + c4 * e3
The result: an evolving trend line that adapts to market tempo in real-time.
KEY FEATURES
⯁ RSI-Based Adaptive Smoothing
The length of the T3 calculation dynamically adjusts between a Min Length and Max Length, based on the current RSI.
When RSI is low → the T3 shortens, tracking reversals faster.
When RSI is high → the T3 stretches, filtering out noise during euphoria phases.
Displayed length is shown in a floating table, colored on a gradient between min/max values.
⯁ T3 Calculation (Tim Tillson Method)
The script uses a 6-stage EMA cascade with a customizable Volume Factor (v), as designed by Tillson (1998).
Formula:
T3 = c1 * e6 + c2 * e5 + c3 * e4 + c4 * e3
This technique gives smoother yet faster curves than EMAs or DEMA/Triple EMA.
⯁ Visual Trend Direction & Transitions
The T3 line changes color dynamically:
Color Up (default: blue) → bullish curvature
Color Down (default: orange) → bearish curvature
Plot fill between T3 and delayed T3 creates a gradient ribbon to show momentum expansion/contraction.
Directional shift markers (“🞛”) are plotted when T3 crosses its own delayed value — helping traders spot trend flips or pullback entries.
⯁ Adaptive Volatility Bands
Optional upper/lower bands are plotted around the T3 line using a user-defined volatility window (default: 100).
Bands widen when volatility rises, and contract during compression — similar to Bollinger logic but centered on the adaptive T3.
Shaded band zones help frame breakout setups or mean-reversion zones.
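The exact band formula is not quoted above, so treat the following continuation of the earlier snippet as an assumption: a standard-deviation envelope centered on the adaptive T3.
// Assumed construction; continues the Step 1/Step 2 snippet above (src and t3 defined there)
volWindow = input.int(100, "Volatility Window")
bandWidth = ta.stdev(src - t3, volWindow)  // dispersion of price around the adaptive T3
upperBand = t3 + bandWidth
lowerBand = t3 - bandWidth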
⯁ Dynamic Info Table
A live stats panel shows:
Current adaptive length
Maximum smoothing (▲ MaxLen)
Minimum smoothing (▼ MinLen)
All values update in real time and are color-coded to match trend direction.
HOW TO USE
Use T3 crossovers to detect trend transitions, especially during periods of volatility compression.
Watch for volatility contraction in the bands — breakouts from narrow band periods often precede trend bursts.
The adaptive smoothing length can also be used to assess current market tempo — tighter = faster; wider = slower.
CONCLUSION
RSI-Adaptive T3 modernizes one of the most elegant smoothing algorithms in technical analysis with intelligent RSI responsiveness and built-in volatility bands. It gives traders a cleaner read on trend health, directional shifts, and expansion dynamics — all in a visually efficient package. Perfect for scalpers, swing traders, and algorithmic modelers alike, it delivers advanced logic in a plug-and-play format.
[blackcat] L2 Multi-Level Price Condition TrackerOVERVIEW
The L2 Multi-Level Price Condition Tracker represents an innovative approach to analyzing financial markets by simultaneously monitoring multiple price levels, thus providing traders with a holistic view of market dynamics. By combining dynamic calculations based on moving averages and price deviations, this tool aims to deliver precise and actionable insights into potential entry and exit points. It leverages sophisticated statistical measures to identify key thresholds that signify shifts in market sentiment, thereby aiding traders in making well-informed decisions. 🎯
Key benefits encompass:
• Comprehensive calculation of midpoints and average prices indicating short-term trend directions.
• Interactive visualization elements enhancing interpretability effortlessly.
• Real-time generation of buy/sell signals driven by precise condition evaluations.
TECHNICAL ANALYSIS COMPONENTS
📉 Midpoint Calculations:
Computes central reference points derived from high-low ranges establishing baseline supports/resistances.
Utilizes Simple Moving Averages (SMAs) along with standardized deviation formulas smoothing out volatility while preserving long-term trends accurately.
Facilitates identification of directional biases reflecting underlying market forces dynamically.
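A minimal sketch of these baseline calculations, assuming a single lookback input (the published script's formulas may differ):
//@version=5
indicator("Midpoint baseline (sketch)", overlay=true)
len = input.int(20, "Lookback Period")
mid = (ta.highest(high, len) + ta.lowest(low, len)) / 2  // high-low midpoint
avgPrice = ta.sma(close, len)                            // short-term average price
dev = ta.stdev(close, len)                               // dispersion around the mean
plot(mid, "Midpoint", color=color.aqua)
plot(avgPrice, "Average", color=color.orange)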
🕵️♂️ Advanced Price Level Detection:
Derives upper/lower bounds adjusting sensitivities adaptively responding to changing conditions flexibly.
Employs proprietary logic distinguishing between bullish/bearish sentiments promptly signaling transitions effectively.
Ensures consistent adherence to predefined statistical protocols maintaining accuracy robustly.
🎥 Dynamic Signal Generation:
Detects crossovers indicating dominance shifts between buyers/sellers promptly triggering timely alerts.
Integrates conditional logic reinforcing signal validity minimizing erroneous activations systematically.
Supports adaptive thresholds tuning sensitivities based on evolving market conditions flexibly accommodating varying scenarios.
INDICATOR FUNCTIONALITY
🔢 Core Algorithms:
Utilizes moving averages alongside standardized deviation formulas generating precise net volume measurements.
Implements Arithmetic Mean Line Algorithm (AMLA) smoothing techniques improving interpretability.
Ensures consistent alignment with established statistical principles preserving fidelity.
🖱️ User Interface Elements:
Dedicated plots displaying real-time midpoint markers facilitating swift decision-making.
Context-sensitive color coding distinguishing positive/negative deviations intuitively highlighting key activations clearly.
Background shading emphasizing proximity to crucial threshold activations enhancing visibility focusing attention on vital signals promptly.
STRATEGY IMPLEMENTATION
✅ Entry Conditions:
Confirm bullish/bearish setups validated through multiple confirmatory signals assessing concurrent market sentiment factors.
Validate entry decisions considering alignment between calculated midpoints and broader trend directions ensuring coherence.
Monitor cumulative breaches signifying potential trend reversals executing partial/total closes contingent upon predetermined loss limits preserving capital efficiently.
🚫 Exit Mechanisms:
Trigger exits upon hitting predefined thresholds derived from historical analyses promptly executing closures.
Execute partial/total closes contingent upon cumulative loss limits preserving capital efficiently managing exposures prudently.
Conduct periodic reviews gauging strategy effectiveness rigorously identifying areas needing refinement implementing corrective actions iteratively enhancing performance metrics steadily.
PARAMETER CONFIGURATIONS
🎯 Optimization Guidelines:
Lookback Period: Governs responsiveness versus stability balancing sensitivity/stability governing moving averages aligning with preferred granularity.
Price Source: Dictates primary data series driving volume calculations selecting relevant inputs accurately tailoring strategies accordingly.
💬 Customization Recommendations:
Commence with baseline defaults; iteratively refine parameters, isolating individual impacts and evaluating adjustments independently before combining modifications, to minimize disruptions.
Prioritize minimizing erroneous trigger occurrences first to optimize signal fidelity, sustaining balanced risk-reward profiles irrespective of chosen settings and upholding a disciplined, capital-preserving approach.
ADVANCED RISK MANAGEMENT
🛡️ Proactive Risk Mitigation Techniques:
Enforce strict compliance with pre-defined maximum leverage constraints adhering strictly to guidelines managing exposures prudently.
Mandatorily apply trailing stop-loss orders conforming to script outputs enforcing discipline rigorously preventing adverse consequences.
Allocate positions proportionately relative to available capital reserves conducting periodic reviews gauging effectiveness continuously identifying improvement opportunities steadily.
⚠️ Potential Pitfalls & Solutions:
Address frequent violations arising during heightened volatility phases necessitating manual interventions judiciously preparing contingency plans proactively mitigating risks effectively.
Manage false alerts warranting immediate attention avoiding adverse consequences systematically implementing corrective actions reliably.
Prepare proactive responses amid adverse movements ensuring seamless functionality amidst fluctuating conditions fortifying resilience against anomalies robustly.
PERFORMANCE MONITORING METRICS
🔍 Evaluation Criteria:
Assess win percentages consistently across diverse trading instruments to gauge reliability, measure profitability efficiency accurately, and evaluate downside risks comprehensively, uncovering systematic biases that could skew outcomes.
Calculate average profit ratios per successful execution, benchmark actual versus expected performance, and document results meticulously, tracking progress dynamically and addressing identified shortcomings proactively to foster continuous improvement.
📈 Historical Data Analysis Tools:
Maintain detailed logs capturing every triggered event, recording realized profits/losses and comparing them against simulated projections to identify discrepancies warranting investigation; implement iterative refinements steadily to enhance performance metrics progressively.
Identify recurrent systematic errors demanding corrective action, implementing iterative refinements that address identified shortcomings proactively and steadily improve robustness.
PROBLEM SOLVING ADVICE
🔧 Frequent Encountered Challenges:
Unpredictable behavior emerging in thinly traded markets calls for filtration: exclude low-liquidity assets prone to erratic movements to preserve signal integrity.
Latency during abrupt price fluctuations can cause missed opportunities: introduce buffer intervals around major news/event impacts to mitigate distortions, and verify reliable connections so data flows remain uninterrupted and interpretations stay accurate.
💡 Effective Resolution Pathways:
Limit ongoing optimization attempts to prevent model degradation, recalibrating parameters periodically and adapting strategies flexibly so performance remains optimal amid varying conditions.
Verify reliable connections to guarantee uninterrupted data flows and accurate interpretations, systematically addressing identified shortcomings to foster continuous advancement.
THANKS
Heartfelt acknowledgment extends to all developers contributing invaluable insights regarding multi-level price condition-based trading methodologies! ✨
[blackcat] L2 Z-Score of PriceOVERVIEW
The L2 Z-Score of Price indicator offers traders an insightful perspective into how current prices diverge from their historical norms through advanced statistical measures. By leveraging Z-scores, it provides a robust framework for identifying potential reversals in financial markets. The Z-score quantifies the number of standard deviations that a data point lies away from the mean, thus serving as a critical metric for recognizing overbought or oversold conditions. 🎯
Key benefits encompass:
• Precise calculation of Z-scores reflecting true price deviations.
• Interactive plotting features enhancing visual clarity.
• Real-time generation of buy/sell signals based on crossover events.
STATISTICAL ANALYSIS COMPONENTS
📉 Mean Calculation:
Utilizes Simple Moving Averages (SMAs) to establish baseline price references.
Provides smooth representations filtering short-term noise preserving long-term trends.
Fundamental for deriving subsequent deviation metrics accurately.
📈 Standard Deviation Measurement:
Quantifies dispersion around established means revealing underlying variability.
Crucial for assessing potential volatility levels dynamically adapting strategies accordingly.
Facilitates precise Z-score derivations ensuring statistical rigor.
🕵️♂️ Z-SCORE DETECTION:
Measures standardized distances indicating relative positions within distributions.
Helps pinpoint extreme conditions signaling impending reversals proactively.
Enables early identification of trend exhaustion phases prompting timely actions.
INDICATOR FUNCTIONALITY
🔢 Core Algorithms:
Integrates SMAs along with standardized deviation formulas generating precise Z-scores.
Employs Arithmetic Mean Line Algorithm (AMLA) smoothing techniques improving interpretability.
Ensures consistent adherence to predefined statistical protocols maintaining accuracy.
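A minimal sketch of the underlying calculation (input names are illustrative):
//@version=5
indicator("Z-Score of Price (sketch)")
length = input.int(20, "Length")
src = input.source(close, "Price Source")
mean = ta.sma(src, length)
sd = ta.stdev(src, length)
z = sd != 0 ? (src - mean) / sd : 0.0   // standard deviations from the mean
plot(z, "Z-Score", color = z >= 0 ? color.green : color.red)
hline(2.0)
hline(-2.0)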
🖱️ User Interface Elements:
Dedicated plots displaying real-time Z-score markers facilitating swift decision-making.
Context-sensitive color coding distinguishing positive/negative deviations intuitively.
Background shading highlighting proximity to key threshold activations enhancing visibility.
STRATEGY IMPLEMENTATION
✅ Entry Conditions:
Confirm bullish/bearish setups validated through multiple confirmatory signals.
Validate entry decisions considering concurrent market sentiment factors.
Assess alignment between Z-score readings and broader trend directions ensuring coherence.
🚫 Exit Mechanisms:
Trigger exits upon hitting predetermined thresholds derived from historical analyses.
Monitor continuous breaches signifying potential trend reversals promptly executing closures.
Execute partial/total closes contingent upon cumulative loss limits preserving capital efficiently.
PARAMETER CONFIGURATIONS
🎯 Optimization Guidelines:
Length: Governs responsiveness versus smoothing trade-offs balancing sensitivity/stability.
Price Source: Dictates primary data series driving Z-score computations selecting relevant inputs accurately.
💬 Customization Recommendations:
Commence with baseline defaults; iteratively refine parameters isolating individual impacts.
Evaluate adjustments independently prior to combined modifications minimizing disruptions.
Prioritize minimizing erroneous trigger occurrences first optimizing signal fidelity.
Sustain balanced risk-reward profiles irrespective of chosen settings upholding disciplined approaches.
ADVANCED RISK MANAGEMENT
🛡️ Proactive Risk Mitigation Techniques:
Enforce strict compliance with pre-defined maximum leverage constraints adhering strictly to guidelines.
Mandatorily apply trailing stop-loss orders conforming to script outputs reinforcing discipline.
Allocate positions proportionately relative to available capital reserves managing exposures prudently.
Conduct periodic reviews gauging strategy effectiveness rigorously identifying areas needing refinement.
⚠️ Potential Pitfalls & Solutions:
Address frequent violations arising during heightened volatility phases necessitating manual interventions judiciously.
Manage false alerts warranting immediate attention avoiding adverse consequences systematically.
Prepare contingency plans mitigating margin call possibilities preparing proactive responses effectively.
Continuously assess automated system reliability amidst fluctuating conditions ensuring seamless functionality.
PERFORMANCE AUDITS & REFINEMENTS
🔍 Critical Evaluation Metrics:
Assess win percentages consistently across diverse trading instruments gauging reliability.
Calculate average profit ratios per successful execution measuring profitability efficiency accurately.
Measure peak drawdown durations alongside associated magnitudes evaluating downside risks comprehensively.
Analyze signal generation frequencies revealing hidden patterns potentially skewing outcomes uncovering systematic biases.
📈 Historical Data Analysis Tools:
Maintain comprehensive records capturing every triggered event meticulously documenting results.
Compare realized profits/losses against backtested simulations benchmarking actual vs expected performances accurately.
Identify recurrent systematic errors demanding corrective actions implementing iterative refinements steadily.
Document evolving performance metrics tracking progress dynamically addressing identified shortcomings proactively.
PROBLEM SOLVING ADVICE
🔧 Frequent Encountered Challenges:
Unpredictable behaviors emerging within thinly traded markets requiring filtration processes.
Latency issues manifesting during abrupt price fluctuations causing missed opportunities.
Overfitted models yielding suboptimal results post-extensive tuning demanding recalibrations.
Inaccuracies stemming from incomplete/inaccurate data feeds necessitating verification procedures.
💡 Effective Resolution Pathways:
Exclude low-liquidity assets prone to erratic movements enhancing signal integrity.
Introduce buffer intervals safeguarding major news/event impacts mitigating distortions effectively.
Limit ongoing optimization attempts preventing model degradation maintaining optimal performance levels consistently.
Verify reliable connections ensuring uninterrupted data flows guaranteeing accurate interpretations reliably.
USER ENGAGEMENT SEGMENT
🤝 Community Contributions Welcome
Highly encourage active participation sharing experiences & recommendations!
[blackcat] L3 Smart Money FlowCOMPREHENSIVE ANALYSIS OF THE L3 SMART MONEY FLOW INDICATOR
🌐 OVERVIEW:
The L3 Smart Money Flow indicator represents a sophisticated multi-dimensional analytics tool combining traditional momentum measurements with advanced institutional investor tracking capabilities. It's particularly effective at identifying large-scale capital movement dynamics that often precede significant price shifts.
Core Objectives:
• Detect subtle but meaningful price action anomalies indicating major player involvement
• Provide clear entry/exit markers based on multiple validated criteria
• Offer risk-managed positioning strategies suitable for various account sizes
• Maintain operational efficiency even during high volatility regimes
THEORETICAL BACKDROP AND METHODOLOGY
🎓 Conceptual Foundation Principles:
Utilizes Time-Varying Moving Averages (TVMA) responding adaptively to changing market states
Implements Extended Smoothing Algorithm (XSA) providing enhanced filtration characteristics
Employs asymmetric weight distribution favoring recent price observations over historical ones
→ Analyzes price-weighted closing prices incorporating volume influence indirectly
← Applies Arnaud Legoux Moving Average (ALMA) filters generating institution-specific trends
⟸ Combines multiple temporal perspectives producing robust directional assessments
✓ Calculates normalized momentum ratios comparing current state against extended range extremes
✗ Filters out insignificant fluctuations via double-stage verification process
⤾ Generates actionable alerts upon exceeding predefined significance boundaries
CONFIGURABLE PARAMETERS IN DEPTH
⚙️ Input Customization Options Detailed Explanation:
Temporal Resolution Control:
→ TVMA Length Setting:
Minimum value constraint ensuring mathematical validity
Higher numbers increase smoothing effect reducing reaction velocity
Lower intervals enhance responsiveness potentially increasing noise exposure
Validation Threshold Definition:
↓ Bull-Bear Boundary Level:
Establishes fundamental acceptance/rejection zones
Typically set near extreme values reflecting rare occurrence probability
Can be adjusted per instrument liquidity profiles if necessary
ADVANCED ALGORITHMIC PROCEDURES BREAKDOWN
💻 Internal Operation Architecture:
Base Calculations Infrastructure:
☑ Raw Data Preparation and Normalization
☐ High/Low/Closing Aggregation Processes
☒ Range Estimation Algorithms
Intermediate Transform Engine:
📈 Momentum Ratio Computation Workflow
↔ First Pass XSA Application Details
➖ Second Stage Refinement Mechanics
Final Output Synthesis Framework:
➢ Composite Reading Compilation Logic
➣ Validation Status Determination Process
➤ Alert Trigger Decision Making Structure
INTERACTIVE VISUAL INTERFACE COMPONENTS
🎨 User Experience Interface Elements:
🔵 Plotting Series Hierarchy:
→ Primary FundFlow Signal: White trace marking core oscillator progression
↑ Secondary Confirmation Overlay: Orange/Yellow highlighting validation status
🟥 Risk/Reward Boundaries: Aqua line delineating strategic areas requiring attention
🏷️ Interactive Marker System:
✔ "BUY": Green upward-pointing labels denoting confirmed long entries
❌ "SELL": Red downward-facing badges signaling short setups
PRACTICAL APPLICATION STRATEGY GUIDE
📋 Operational Deployment Instructions:
Strategic Planning Initiatives:
• Define precise profit targets considering realistic reward/risk scenarios
→ Set maximum acceptable loss thresholds protecting available resources adequately
↓ Develop contingency plans addressing unexpected adverse developments promptly
Live Trading Engagement Protocols:
→ Maintaining vigilant monitoring of label placement activities continuously
↓ Tracking order fill success rates across implemented grids regularly
↑ Evaluating system effectiveness compared with alternative methodologies periodically
Performance Optimization Techniques:
✔ Implement incremental improvements iteratively throughout lifecycle
❌ Eliminate ineffective component variations systematically
⟹ Ensure proportional growth capability matching user needs appropriately
EFFICIENCY ENHANCEMENT APPROACHES
🚀 Ongoing Development Strategy:
Resource Management Focus Areas:
→ Minimizing redundant computation cycles through intelligent caching mechanisms
↓ Leveraging parallel processing capabilities where feasible efficiently
↑ Optimizing storage access patterns improving response times substantially
Scalability Consideration Factors:
✔ Adapting to varying account sizes/market capitalizations seamlessly
❌ Preventing bottlenecks limiting concurrent operation capacity
⟹ Ensuring balanced growth capability matching evolving requirements accurately
Maintenance Routine Establishment:
✓ Regular codebase updates incorporation keeping functionality current
↓ Periodic performance audits conducting verifying continued effectiveness
↑ Documentation refinement updating explaining any material modifications made
SYSTEMATIC RISK CONTROL MECHANISMS
🛡️ Comprehensive Protection Systems:
Position Sizing Governance:
∅ Never exceed predetermined exposure limitations strictly observed
± Scale entries proportionally according to available resources carefully
× Include slippage allowances within planning stages realistically
Emergency Response Procedures:
↩ Well-defined exit strategies including trailing stops activation logic
🌀 Contingency plan formulation covering worst-case scenario contingencies
⇄ Recovery procedure documentation outlining restoration steps methodically
EXODUS EXODUS by (DAFE) Trading Systems
EXODUS is a sophisticated trading algorithm built by Dskyz (DAFE) Trading Systems for competition purposes, designed to identify high-probability trades with robust risk management. this strategy leverages a multi-signal voting system, combining three core components—SPR, VWMO, and VEI—alongside ADX, choppiness filters, and ATR-based volatility gates to ensure trades are taken only in favorable market conditions. the algo uses a take-profit to stop-loss ratio, dynamic position sizing, and a strict voting mechanism requiring all signals to align before entering a trade.
EXODUS was not overfitted for any specific symbol. instead, it uses a generic tuned setting, making it versatile across various markets. while it can trade futures, it’s not currently set up for it but has the potential to do more with further development. visuals are intentionally minimal due to its competition focus, prioritizing performance over aesthetics. a more visually stunning version may be released in the future with enhanced graphics.
The Unique Core Components Developed for EXODUS
SPR (Session Price Recalibration)
SPR measures momentum during regular trading hours (RTH, 0930-1600, America/New_York) to catch session-specific trends.
spr_lookback = input.int(15, "SPR Lookback")
this sets how many bars back SPR looks to calculate momentum (default 15 bars). it compares the current session’s price-volume score to the score 15 bars ago to gauge momentum strength.
how it works: a longer lookback smooths out the signal, focusing on bigger trends. a shorter one makes SPR more sensitive to recent moves.
how to adjust: on a 1-hour chart, 15 bars is 15 hours (about 2 trading days). if you’re on a shorter timeframe like 5 minutes, 15 bars is just 75 minutes, so you might want to increase it to 50 or 100 to capture more meaningful trends. if you’re trading a choppy stock, a shorter lookback (like 5) can help catch quick moves, but it might give more false signals.
spr_threshold = input.float (0.7, "SPR Threshold")
this is the cutoff for SPR to vote for a trade (default 0.7). if SPR’s normalized value is above 0.7, it votes for a long; below -0.7, it votes for a short.
how it works: SPR normalizes its momentum score by ATR, so this threshold ensures only strong moves count. a higher threshold means fewer trades but higher conviction.
how to adjust: if you’re getting too few trades, lower it to 0.5 to let more signals through. if you’re seeing too many false entries, raise it to 1.0 for stricter filtering. test on your chart to find a balance.
spr_atr_length = input.int(21, "SPR ATR Length")
this sets the ATR period (default 21 bars) used to normalize SPR’s momentum score. ATR measures volatility, so this makes SPR’s signal relative to market conditions.
how it works: a longer ATR period (like 21) smooths out volatility, making SPR less jumpy. a shorter one makes it more reactive.
how to adjust: if you’re trading a volatile stock like TSLA, a longer period (30 or 50) can help avoid noise. for a calmer stock, try 10 to make SPR more responsive. match this to your timeframe—shorter timeframes might need a shorter ATR.
rth_session = input.session("0930-1600","SPR: RTH Sess.")
rth_timezone = "America/New_York"
this defines the session SPR uses (0930-1600, New York time). SPR only calculates momentum during these hours to focus on RTH activity.
how it works: it ignores pre-market or after-hours noise, ensuring SPR captures the main market action.
how to adjust: if you trade a different session (like London hours, 0300-1200 EST), change the session to match. you can also adjust the timezone if you’re in a different region, like "Europe/London". just make sure your chart’s timezone aligns with this setting.
VWMO (Volume-Weighted Momentum Oscillator)
VWMO measures momentum weighted by volume to spot sustained, high-conviction moves.
vwmo_momlen = input.int(21, "VWMO Momentum Length")
this sets how many bars back VWMO looks to calculate price momentum (default 21 bars). it takes the price change (close minus close 21 bars ago).
how it works: a longer period captures bigger trends, while a shorter one reacts to recent swings.
how to adjust: on a daily chart, 21 bars is about a month—good for trend trading. on a 5-minute chart, it’s just 105 minutes, so you might bump it to 50 or 100 for more meaningful moves. if you want faster signals, drop it to 10, but expect more noise.
vwmo_volback = input.int(30, "VWMO Volume Lookback")
this sets the period for calculating average volume (default 30 bars). VWMO weights momentum by volume divided by this average.
how it works: it compares current volume to the average to see if a move has strong participation. a longer lookback smooths the average, while a shorter one makes it more sensitive.
how to adjust: for stocks with spiky volume (like NVDA on earnings), a longer lookback (50 or 100) avoids overreacting to one-off spikes. for steady volume stocks, try 20. match this to your timeframe—shorter timeframes might need a shorter lookback.
vwmo_smooth = input.int(9, "VWMO Smoothing")
this sets the SMA period to smooth VWMO’s raw momentum (default 9 bars).
how it works: smoothing reduces noise in the signal, making VWMO more reliable for voting. a longer smoothing period cuts more noise but adds lag.
how to adjust: if VWMO is too jumpy (lots of false votes), increase to 15. if it’s too slow and missing trades, drop to 5. test on your chart to see what keeps the signal clean but responsive.
vwmo_threshold = input.float(10, "VWMO Threshold")
this is the cutoff for VWMO to vote for a trade (default 10). above 10, it votes for a long; below -10, a short.
how it works: it ensures only strong momentum signals count. a higher threshold means fewer but stronger trades.
how to adjust: if you want more trades, lower it to 5. if you’re getting too many weak signals, raise it to 15. this depends on your market—volatile stocks might need a higher threshold to filter noise.
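putting the quoted inputs together, the oscillator plausibly assembles like this (a sketch based on the description above, not necessarily the strategy's exact code):
// Sketch only; continues the quoted inputs (vwmo_momlen, vwmo_volback, vwmo_smooth)
vwmo_mom = close - close[vwmo_momlen]               // raw price momentum
vwmo_wt = volume / ta.sma(volume, vwmo_volback)     // participation vs. average volume
vwmo = ta.sma(vwmo_mom * vwmo_wt, vwmo_smooth)      // smoothed, volume-weighted momentum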
VEI (Velocity Efficiency Index)
VEI measures market efficiency and velocity to filter out choppy moves and focus on strong trends.
vei_eflen = input.int(14, "VEI Efficiency Smoothing")
this sets the EMA period for smoothing VEI’s efficiency calc (bar range / volume, default 14 bars).
how it works: efficiency is how much price moves per unit of volume. smoothing it with an EMA reduces noise, focusing on consistent efficiency. a longer period smooths more but adds lag.
how to adjust: for choppy markets, increase to 20 to filter out noise. for faster markets, drop to 10 for quicker signals. this should match your timeframe—shorter timeframes might need a shorter period.
vei_momlen = input.int(8, "VEI Momentum Length")
this sets how many bars back VEI looks to calculate momentum in efficiency (default 8 bars).
how it works: it measures the change in smoothed efficiency over 8 bars, then adjusts for inertia (volume-to-range). a longer period captures bigger shifts, while a shorter one reacts faster.
how to adjust: if VEI is missing quick reversals, drop to 5. if it’s too noisy, raise to 12. test on your chart to see what catches the right moves without too many false signals.
vei_threshold = input.float(4.5, "VEI Threshold")
this is the cutoff for VEI to vote for a trade (default 4.5). above 4.5, it votes for a long; below -4.5, a short.
how it works: it ensures only strong, efficient moves count. a higher threshold means fewer trades but higher quality.
how to adjust: if you’re not getting enough trades, lower to 3. if you’re seeing too many false entries, raise to 6. this depends on your market—fast stocks like NQ1 might need a lower threshold.
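the all-signals-must-align rule then reduces to a simple conjunction. a sketch using the names above, where the computed spr, vwmo, and vei series are assumed to be the normalized values the thresholds apply to:
// spr, vwmo, vei: the normalized signal series; thresholds are the quoted inputs
long_vote = spr > spr_threshold and vwmo > vwmo_threshold and vei > vei_threshold
short_vote = spr < -spr_threshold and vwmo < -vwmo_threshold and vei < -vei_threshold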
Features
Multi-Signal Voting: requires all three signals (SPR, VWMO, VEI) to align for a trade, ensuring high-probability setups.
Risk Management: uses ATR-based stops (2.1x) and take-profits (4.1x), with dynamic position sizing based on a risk percentage (default 0.4%).
Market Filters: ADX (default 27) ensures trending conditions, choppiness index (default 54.5) avoids sideways markets, and ATR expansion (default 1.12) confirms volatility.
Dashboard: provides real-time stats like SPR, VWMO, VEI values, net P/L, win rate, and streak, with a clean, functional design.
Visuals
EXODUS prioritizes performance over visuals, as it was built for competition purposes. entry/exit signals are marked with simple labels and shapes, and a basic heatmap highlights market regimes. a more visually stunning update may be released later, with enhanced graphics and overlays.
Usage
EXODUS is designed for stocks and ETFs but can be adapted for futures with adjustments. it performs best in trending markets with sufficient volatility, as confirmed by its generic tuning across symbols like TSLA, AMD, NVDA, and NQ1. adjust inputs like SPR threshold, VWMO smoothing, or VEI momentum length to suit specific assets or timeframes.
Settings I used (again, these are generic settings; each security needs to be fine-tuned):
SPR LB = 19 SPR TH = 0.5 SPR ATR L= 21 SPR RTH Sess: 9:30 – 16:00
VWMO L = 21 VWMO LB = 18 VWMO S = 6 VWMO T = 8
VEI ES = 14 VEI ML = 21 VEI T = 4
R % = 0.4
ATR L = 21 ATR M (S) =1.1 TP Multi = 2.1 ATR min mult = 0.8 ATR Expansion = 1.02
ADX L = 21 Min ADX = 25
Choppiness Index = 14 Chop. Max T = 55.5
Backtesting: TSLA
Frame: Jan 02, 2018, 08:00 — May 01, 2025, 09:00
Slippage: 3
Commission: 0.01
Disclaimer
this strategy is for educational purposes. past performance is not indicative of future results. trading involves significant risk, and you should only trade with capital you can afford to lose. always backtest and validate any strategy before using it in live markets.
(This publishing will most likely be taken down do to some miscellaneous rule about properly displaying charting symbols, or whatever. Once I've identified what part of the publishing they want to pick on, I'll adjust and repost.)
About the Author
Dskyz (DAFE) Trading Systems is dedicated to building high-performance trading algorithms. EXODUS is a product of rigorous research and development, aimed at delivering consistent, and data-driven trading solutions.
Use it with discipline. Use it with clarity. Trade smarter.
**I will continue to release incredible strategies and indicators until I turn this into a brand or until someone offers me a contract.
2025 Created by Dskyz, powered by DAFE Trading Systems. Trade smart, trade bold.
Finite Difference - Backward (mcbw_)
In calculus there exists a 'derivative', which simply measures the difference between two points on a curve. For well-behaved mathematical functions there are infinitely many points, and so there exists a derivative at every point. Where a curve has infinitely many points it is called 'continuous'. Continuous curves are very nice to deal with since each point on them exists almost exactly where its neighbors are. However, if the curve does not have infinitely many points but instead a finite number of points, it is called 'discrete' instead of continuous. Taking the derivative of discrete curves is much trickier business since there are none of the mathematical conveniences that a continuous curve offers. In the real world everything we measure is a discrete curve, including Price (since we measure it a finite number of times, aka each candlestick)!
The branch of Discrete Mathematics has found an approach to measure the derivative along a discrete curve; that approach is aptly called "Finite Difference". To get a more accurate approximation of a discrete derivative, the finite difference approach uses weighted combinations of neighboring points. The most common type of finite difference is a 'central' difference, which uses a combination of points before and after the point of interest to approximate the discrete derivative. This is great for historical analysis but is not of much use for trading algorithms, since it technically means using future prices to calculate the derivative of the current point. Instead we can use a less common variant called a 'Backwards Difference' that only uses a combination of points before the current one to approximate the current derivative.
In this script you can choose the "Order" of your derivative and the "Accuracy" of its approximation. This script is for educational purposes, for folks building trading algorithms. Many trading algorithms have an element of seeing how much Price has changed from the previous candle to the current candle. That approach is the lowest-accuracy derivative possible; using backwards finite differences, made available for the first time on TradingView (!!), algorithms that use derivatives can now have higher orders of accuracy!
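To make this concrete, here is a small Pine Script sketch of backward differences at two levels of accuracy. The coefficients come from standard finite-difference tables (with bar spacing h = 1); the specific combinations plotted below are illustrative choices, not this script's exact code:

```
//@version=5
indicator("Backward Difference (sketch)")
// 1st derivative, 1st-order accuracy: the familiar "change since last candle"
d1_a1 = close - close[1]
// 1st derivative, 2nd-order accuracy: f' ≈ (3f[0] - 4f[1] + f[2]) / 2
d1_a2 = (3 * close - 4 * close[1] + close[2]) / 2
// 2nd derivative, 1st-order accuracy: f'' ≈ f[0] - 2f[1] + f[2]
d2_a1 = close - 2 * close[1] + close[2]
plot(d1_a1, "f' (1st-order)", color.gray)
plot(d1_a2, "f' (2nd-order)", color.teal)
```

Notice how the higher-accuracy version weights the current point more heavily and corrects with older points, which reduces the lag-induced error of the naive one-candle difference.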
Happy Trading/Developing!
Accurate Bollinger Bands mcbw_ [True Volatility Distribution]
The Bollinger Bands have become a very important technical tool for discretionary and algorithmic traders alike over the last decades. They were designed to give traders an edge in the markets by assigning probabilistic values to different levels of volatility. However, some of the assumptions that go into the calculations make the bands unusable for traders who want a correct understanding of the volatility they are meant to measure. Let's go through what the Bollinger Bands are said to show, how their calculations work, the problems in those calculations, and how the indicator I am presenting today fixes them.
--> If you just want to know how the settings work then skip straight to the end or click on the little (i) symbol next to the values in the indicator settings window when it's on your chart <--
--------------------------- What Are Bollinger Bands ---------------------------
The Bollinger Bands were formed in the 1980's, a time when many retail traders interacted with their symbols via physically printed charts and personal computer memory was measured in Kb (about a factor of 1 million smaller than today). Bollinger Bands are designed to help a trader or algorithm see the likelihood of price expanding outside of its typical range: the further the lines are from the current price, the less often they should get hit. With a hands-on understanding, many strategies use these levels for designated breakout trades or to assist in defining price ranges.
--------------------------- How Bollinger Bands Work ---------------------------
The calculations that go into Bollinger Bands are rather simple. There is a moving average that centers the indicator, and an equidistant top band and bottom band are drawn at a fixed width away. The moving average is just a typical moving average (or common variant) that tracks the price action, while the distance to the top and bottom bands is a direct function of recent price volatility. The way that distance is calculated is inspired by formulas from statistics. The standard deviation is taken from the candles that go into the moving average and is then multiplied by a user-defined value to set the bands' position; I will call this value 'the multiple'. When discussing Bollinger Bands, the trading community at large normally treats 'the multiple' as a multiplier of the standard deviation as it applies to a normal distribution (Gaussian probability). On a normal distribution, the number of standard deviations away you are (which traders use directly as 'the multiple') directly corresponds to how likely/unlikely something is to happen:
1 standard deviation equals 68.3%, meaning that the price should stay inside 1 standard deviation 68.3% of the time and be outside of it 31.7% of the time;
2 standard deviations equals 95.5%, meaning that the price should stay inside 2 standard deviations 95.5% of the time and be outside of them 4.5% of the time;
3 standard deviations equals 99.7%, meaning that the price should stay inside 3 standard deviations 99.7% of the time and be outside of them 0.3% of the time.
Therefore when traders set 'the multiple' to 2, they interpret this as meaning that price will not reach those levels 95.5% of the time.
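In code, the classic construction is only a few lines. Here is a minimal Pine Script sketch of the standard bands described above (the textbook formula, not this indicator's replacement for it):

```
//@version=5
indicator("Classic Bollinger Bands (sketch)", overlay=true)
len  = input.int(20, "Length")
mult = input.float(2.0, "The multiple")
// center line: a simple moving average of the same candles
basis = ta.sma(close, len)
// band distance: standard deviation of those candles times 'the multiple'
dev = mult * ta.stdev(close, len)
plot(basis + dev, "Upper", color.teal)
plot(basis - dev, "Lower", color.red)
plot(basis, "Basis")
```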
---------------- The Problem With The Math of Bollinger Bands ----------------
In and of themselves the Bollinger Bands are a great tool, but they have become loaded with an incorrect sense of statistical meaning, when they should really just be taken at face value without any further interpretation or implication.
In order to explain this it is going to get a bit technical so I will give a little math background and try to simplify things. First let's review some statistics topics (distributions, percentiles, standard deviations) and then with that understanding explore the incorrect logic of how Bollinger Bands have been interpreted/employed.
---------------- Quick Stats Review ----------------
.
(If you are comfortable with statistics feel free to skip ahead to the next section)
.
-------- I: Probability distributions --------
When you have a lot of data it is helpful to see how many times different results appear in your dataset. To visualize this people use "histograms", which show how many times each element appears in the dataset by stacking each of the same elements on top of each other to form a graph. You may be familiar with the bell curve (also called the "normal distribution", the name we will use from here on). The normal distribution histogram looks like a big hump around zero that drops off very quickly the further you get from it. This shape (the bell curve) is very nice because it has a lot of nifty mathematical properties and seems to show up in nature all the time. Since it pops up in so many places, society has developed many shortcuts related to it that speed up all kinds of calculations, including the shortcut that 1 standard deviation = 68.3%, 2 standard deviations = 95.5%, and 3 standard deviations = 99.7% (these only apply to the normal distribution). Despite how handy the normal distribution is, how many shortcuts we have for it, and how much it shows up in the natural world, there is nothing that forces your specific dataset to look like it. In fact, your data can actually have any possible shape. As we will explore later, economic and financial datasets *rarely* follow the normal distribution.
-------- II: Percentiles --------
After you have made the histogram of your dataset you have built the "probability distribution" of your own dataset, specific to all the data you have collected. There is a whole complicated framework for how to accurately calculate percentiles, but we will dramatically simplify it for our use. The 'percentile' in our case is just how far through the sorted dataset a value sits, counting away from the "middle" of the data set (normally just 0). Let's say I took the difference of the daily close of a symbol for the last two weeks; green candles would be positive and red would be negative. In this example my dataset of day-by-day closing price differences is:
week 1:
week 2:
sorting all of these values into a single dataset I have:
I can separate the positive and negative returns and explore their distributions separately:
negative return distribution =
positive return distribution =
Taking the 25th percentile of these would just be taking the value that is 25% of the way through these sorted returns. Likewise, the 100th percentile would just be the value at the very end of those:
negative return distribution (50%) = -5
positive return distribution (50%) = +4
negative return distribution (100%) = -10
positive return distribution (100%) = +20
Or, instead of separating the positive and negative returns, we can look at all of the differences in the daily close as pure price movement without accounting for direction. In this case we would pool all of the data together by ignoring the negative signs of the negative returns:
combined return distribution =
In this case the 50th and 100th percentiles of the combined return distribution would be:
combined return distribution (50%) = 4
combined return distribution (100%) = 10
Sometimes taking the positive and negative distributions separately is better than pooling them into a combined distribution for some purposes. Other times the combined distribution is better.
Most financial data has very different distributions for negative returns and positive returns. This is encapsulated in sayings like "Price takes the stairs up and the elevator down".
-------- III: Standard Deviation --------
The formula for the standard deviation (referred to here by its shorthand 'STDEV') can be intimidating, but going through each of its elements will illuminate what it does. The formula for STDEV is equal to:
square root ( sum( (x - mean)^2 ) / N )
Going back to the dataset that you might have, the variables in the formula above are:
'mean' is the average of your entire dataset
'x' is just representative of a single point in your dataset (one point at a time)
'N' is the total number of things in your dataset.
Going back to the STDEV formula above we can see how each part of it works, starting with the '(x - mean)' part. This takes every single point of the dataset and measures how far away it is from the mean of the entire dataset. Taking this value to the power of two, '(x - mean) ^ 2', means that points that are very far away from the dataset mean get 'penalized' disproportionately, while points that are very close to the dataset mean are not impacted as much. In practice, if your dataset had a bunch of values that were in a wide range but always stayed in that range, this value ('(x - mean) ^ 2') would end up being small. On the other hand, if your dataset was full of the exact same number but had a couple of outliers very far away, this value would be much larger, since the squaring in '(x - mean) ^ 2' makes them grow massive. Including the sum part, 'sum( (x - mean)^2 )', just adds up all of the squared distances from the dataset mean. Then this is divided by the number of values in the dataset ('N'), and the square root of that value is taken.
There is nothing inherently special or definitive about the STDEV formula, it is just a tool with extremely widespread use and adoption. As we saw here, all the STDEV formula is really doing is measuring the intensity of the outliers.
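To see that the formula really is just "measure each point's distance from the mean, square it, average, and take the root", here is a hand-rolled Pine Script version of it (equivalent to the built-in ta.stdev with its default biased/population estimator):

```
//@version=5
indicator("Manual STDEV (sketch)")
len = input.int(20, "Length")
mean = ta.sma(close, len)
// sum of squared distances from the mean
sumSq = 0.0
for i = 0 to len - 1
    sumSq += math.pow(close[i] - mean, 2)
// square root( sum( (x - mean)^2 ) / N )
stdev = math.sqrt(sumSq / len)
plot(stdev, "STDEV")
```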
--------------------------- Flaws of Bollinger Bands ---------------------------
The largest problem with Bollinger Bands is the assumption that price has a normal distribution. This assumption is massively incorrect for many reasons, which I will try to encapsulate into two points:
Price returns do not follow a normal distribution; every single symbol on every single timeframe has its own unique distribution that is specific to only itself. Therefore all the tools, shortcuts, and ideas that we use for normal distributions do not apply to price returns, and since they do not apply they should not be used. A more general approach is needed that allows each specific symbol on every specific timeframe to be treated uniquely.
The distributions of price returns on the positive and negative side are almost never the same. A more general approach is needed that allows positive and negative returns to be calculated separately.
In addition to the issues with the normal distribution assumption, the standard deviation formula (as shown above in the quick stats review) is essentially just a tame measurement of outliers (a more aggressive form of outlier measurement might take the differences to the power of 3 rather than 2). This is a bit of a philosophical question: does the measurement of outlier intensity as defined by the STDEV formula really measure what we want to know as traders when we're experiencing volatility? Or would adjustments to that formula better reflect what we *experience* as volatility when we are actively trading? This is an open-ended question that I will leave here, but I wanted to pose it because it is a key part of how Bollinger Bands work that we all take as a given.
Circling back to the normal distribution assumption: the standard deviation formula used in the calculation of the bands only encompasses the deviation of the candles that go into the moving average and has no knowledge of the historical price action. Therefore the level of the bands may not really reflect how the price action behaves over a longer period of time.
------------ Delivering Factually Accurate Data That Traders Need------------
In light of the problems identified above, this indicator fixes all of these issues and delivers statistically correct information that discretionary and algorithmic traders can use, with truly accurate probabilities. It takes the price action of the last 2,000 candles and builds a huge dataset of distributions from which you can directly select your percentiles. It also allows you to have the positive and negative distributions calculated separately, or, if you would like, you can pool all of them together in a combined distribution. In addition, there is a wide selection of moving averages directly available in the indicator to choose from.
Hedge funds, quant shops, algo prop firms, and advanced mechanical groups all employ true return distributions in their work. Now you have access to the same type of data with this indicator, which does all the lifting for you.
------------------------------ Indicator Settings ------------------------------
.
---- Moving average ----
Select the type of moving average you would like and its length
---- Bands ----
The percentiles that you enter here will be pulled directly from the return distribution of the last 2,000 candles. With typical Bollinger Bands, traders would select 2 standard deviations and incorrectly think that the levels they highlight are the 95.5% levels. Now, if you want the true 95.5% level, you can just enter 95.5 into the percentile value here. Each of the three available bands takes the true percentile you enter here.
---- Separate Positive & Negative Distributions----
If this box is checked, the positive and negative distributions are treated independently, completely separate from each other. You will see that the width of the top and bottom bands will be different for each of the percentiles you enter.
If this box is unchecked then all the negative and positive distributions are pooled together. You will notice that the width of the top and bottom bands will be the exact same.
---- Distribution Size ----
This is the number of candles that the price return is calculated over. EG: to collect the price return over the last 33 candles, the difference of price from now to 33 candles ago is calculated for the last 2,000 candles, building a return distribution of 2,000 points of price differences over 33 candles.
NEGATIVE NUMBERS(<0) == exact number of candles to include;
EG: setting this value to -20 will always collect volatility distributions of 20 candles
POSITIVE NUMBERS(>0) == number of candles to include as a multiple of the Moving Average Length value set above;
EG: if the Moving Average Length value is set to 22, setting this value to 2 will use the last 22*2 = 44 candles for the collection of volatility distributions
MORE candles being included will generally make the bands WIDER and their size will change SLOWER over time.
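As an illustration of the overall mechanics (not the published source), a stripped-down Pine Script version of the combined-distribution case might look like this: collect absolute returns over the chosen distance into a rolling 2,000-point dataset, then read the band width straight off the requested percentile:

```
//@version=5
indicator("True-percentile bands (sketch)", overlay=true)
maLen   = input.int(20, "Moving Average Length")
pct     = input.float(95.5, "Percentile")
distRaw = input.int(-20, "Distribution Size")
// negative input = exact candle count; positive = multiple of the MA length
dist = distRaw < 0 ? -distRaw : distRaw * maLen
lookback = 2000

// combined distribution: pool absolute returns over `dist` candles
var rets = array.new_float()
ret = math.abs(close - close[dist])
if not na(ret)
    array.push(rets, ret)
    if array.size(rets) > lookback
        array.shift(rets)

basis = ta.sma(close, maLen)
band = array.size(rets) > 0 ? array.percentile_linear_interpolation(rets, pct) : na
plot(basis + band, "Upper", color.teal)
plot(basis - band, "Lower", color.red)
plot(basis, "Basis")
```

The separate-distribution mode would simply keep two arrays, one of positive returns and one of negative returns, and pull each band's percentile from its own array.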
I wish you focus, dedication, and earnest success on your journey.
Happy trading :)
AiTrend Pattern Matrix for kNN Forecasting (AiBitcoinTrend)
The AiTrend Pattern Matrix for kNN Forecasting (AiBitcoinTrend) is a cutting-edge indicator that combines advanced mathematical modeling, AI-driven analytics, and segment-based pattern recognition to forecast price movements with precision. This tool is designed to provide traders with deep insights into market dynamics by leveraging multivariate pattern detection and sophisticated predictive algorithms.
👽 Core Features
Segment-Based Pattern Recognition
At its heart, the indicator divides price data into discrete segments, capturing key elements like candle bodies, high-low ranges, and wicks. These segments are normalized using ATR-based volatility adjustments to ensure robustness across varying market conditions.
AI-Powered k-Nearest Neighbors (kNN) Prediction
The predictive engine uses the kNN algorithm to identify the closest historical patterns in a multivariate dictionary. By calculating the distance between current and historical segments, the algorithm determines the most likely outcomes, weighting predictions based on either proximity (distance) or averages.
Dynamic Dictionary of Historical Patterns
The indicator maintains a rolling dictionary of historical patterns, storing multivariate data for:
Candle body ranges, High-low ranges, Wick highs and lows.
This dynamic approach ensures the model adapts continuously to evolving market conditions.
Volatility-Normalized Forecasting
Using ATR bands, the indicator normalizes patterns, reducing noise and enhancing the reliability of predictions in high-volatility environments.
AI-Driven Trend Detection
The indicator not only predicts price levels but also identifies market regimes by comparing current conditions to historically significant highs, lows, and midpoints. This allows for clear visualizations of trend shifts and momentum changes.
👽 Deep Dive into the Core Mathematics
👾 Segment-Based Multivariate Pattern Analysis
The indicator analyzes price data by dividing each bar into distinct segments, isolating key components such as:
Body Ranges: Differences between the open and close prices.
High-Low Ranges: Capturing the full volatility of a bar.
Wick Extremes: Quantifying deviations beyond the body, both above and below.
Each segment contributes uniquely to the predictive model, ensuring a rich, multidimensional understanding of price action. These segments are stored in a rolling dictionary of patterns, enabling the indicator to reference historical behavior dynamically.
👾 Volatility Normalization Using ATR
To ensure robustness across varying market conditions, the indicator normalizes patterns using Average True Range (ATR). This process scales each component to account for the prevailing market volatility, allowing the algorithm to compare patterns on a level playing field regardless of differing price scales or fluctuations.
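A minimal sketch of what ATR normalization of one bar's segments could look like in Pine Script (the indicator's exact normalization is not published, so treat these formulas as assumptions):

```
//@version=5
indicator("ATR-normalized segments (sketch)")
atr = ta.atr(14)
bodyN   = (close - open) / atr                   // body range, in ATR units
rangeN  = (high - low) / atr                     // high-low range, in ATR units
wickHiN = (high - math.max(open, close)) / atr   // upper wick, in ATR units
wickLoN = (math.min(open, close) - low) / atr    // lower wick, in ATR units
plot(bodyN, "Normalized body")
```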
👾 k-Nearest Neighbors (kNN) Algorithm
The AI core employs the kNN algorithm, a machine-learning technique that evaluates the similarity between the current pattern and a library of historical patterns.
Euclidean Distance Calculation:
The indicator computes the multivariate distance across four distinct dimensions: body range, high-low range, wick low, and wick high. This ensures a comprehensive and precise comparison between patterns.
Weighting Schemes: The contribution of each pattern to the forecast is either weighted by its proximity (distance) or averaged, based on user settings.
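For illustration, the four-dimensional Euclidean distance and an inverse-distance weight could be written like this in Pine Script (a sketch of the stated approach with made-up values, not the script's actual code):

```
//@version=5
indicator("kNN distance (sketch)")
// Euclidean distance across body, high-low range, wick low, wick high
dist(b1, r1, wl1, wh1, b2, r2, wl2, wh2) =>
    db = b1 - b2
    dr = r1 - r2
    dl = wl1 - wl2
    dh = wh1 - wh2
    math.sqrt(db * db + dr * dr + dl * dl + dh * dh)

// inverse-distance weighting: closer neighbors count for more
weight(d) => 1.0 / (d + 1e-10)

// toy usage with made-up normalized segment values
d = dist(1.0, 2.0, 0.1, 0.2, 0.9, 2.1, 0.15, 0.2)
plot(d, "distance")
```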
👾 Prediction Horizon and Refinement
The indicator forecasts future price movements (Y_hat) by predicting logarithmic changes in the price and projecting them forward using exponential scaling. This forecast is smoothed using a user-defined EMA filter to reduce noise and enhance actionable clarity.
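The projection step described above amounts to a few lines. Here is a hedged sketch where predLog stands in for the neighbor-weighted predicted log change (the placeholder below just recycles recent log returns for demonstration):

```
//@version=5
indicator("Y_hat projection (sketch)", overlay=true)
// stand-in for the kNN-weighted predicted log change
predLog = ta.ema(math.log(close / close[1]), 8)
// exponential scaling projects the log change back into price space
yHatRaw = close * math.exp(predLog)
// user-defined EMA filter smooths the forecast line
yHat = ta.ema(yHatRaw, 5)
plot(yHat, "Y_hat_line")
```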
👽 AI-Driven Pattern Recognition
Dynamic Dictionary of Patterns: The indicator maintains a rolling dictionary of N multivariate patterns, continuously updated to reflect the latest market data. This ensures it adapts seamlessly to changing market conditions.
Nearest Neighbor Matching: At each bar, the algorithm identifies the most similar historical pattern. The prediction is based on the aggregated outcomes of the closest neighbors, providing confidence levels and directional bias.
Multivariate Synthesis: By combining multiple dimensions of price action into a unified prediction, the indicator achieves a level of depth and accuracy unattainable by single-variable models.
Visual Outputs
Forecast Line (Y_hat_line):
A smoothed projection of the expected price trend, based on the weighted contribution of similar historical patterns.
Trend Regime Bands:
Dynamic high, low, and midlines highlight the current market regime, providing actionable insights into momentum and range.
Historical Pattern Matching:
The nearest historical pattern is displayed, allowing traders to visualize similarities between current and past market behavior.
👽 Applications
Trend Identification:
Detect and follow emerging trends early using dynamic trend regime analysis.
Reversal Signals:
Anticipate market reversals with high-confidence predictions based on historically similar scenarios.
Range and Momentum Trading:
Leverage multivariate analysis to understand price ranges and momentum, making it suitable for both breakout and mean-reversion strategies.
Disclaimer: This information is for entertainment purposes only and does not constitute financial advice. Please consult with a qualified financial advisor before making any investment decisions.
Bayesian Price Projection Model [Pinescriptlabs]
📊 Dynamic Price Projection Algorithm 📈
This algorithm combines **statistical calculations**, **technical analysis**, and **Bayesian theory** to forecast a future price while providing **uncertainty ranges** that represent upper and lower bounds. The calculations are designed to adjust projections by considering market **trends**, **volatility**, and the historical probabilities of reaching new highs or lows.
Here’s how it works:
🚀 Future Price Projection
A dynamic calculation estimates the future price based on three key elements:
1. **Trend**: Defines whether the market is predisposed to move up or down.
2. **Volatility**: Quantifies the magnitude of the expected change based on historical fluctuations.
3. **Time Factor**: Uses the logarithm of the projected period (`proyeccion_dias`) to adjust how time impacts the estimate.
🧠 **Bayesian Probabilistic Adjustment**
- Conditional probabilities are calculated using **Bayes' formula**:
P(A|B) = [ P(B|A) × P(A) ] / P(B)
This models future events using conditional information:
- **Probability of reaching a new all-time high** if the price is trending upward.
- **Probability of reaching a new all-time low** if the price is trending downward.
- These probabilities refine the future price estimate by considering:
- **Higher volatility** increases the likelihood of hitting extreme levels (highs/lows).
- **Market trends** influence the expected price movement direction.
🌟 **Volatility Calculation**
- Volatility is measured using the **ATR (Average True Range)** indicator with a 14-period window. This reflects the average amplitude of price fluctuations.
- To express volatility as a percentage, the ATR is normalized by dividing it by the closing price and multiplying it by 200.
- Volatility is then categorized into descriptive levels (e.g., **Very Low**, **Low**, **Moderate**, etc.) for better interpretation.
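As a sketch, the normalization described above is a one-liner in Pine Script; the descriptive cutoffs below are invented for illustration, since the script's exact category thresholds are not given:

```
//@version=5
indicator("ATR volatility % (sketch)")
// ATR normalized by price, scaled by 200 as described above
volPct = ta.atr(14) / close * 200
// hypothetical category cutoffs, for illustration only
level = volPct < 1 ? "Very Low" : volPct < 2 ? "Low" : volPct < 4 ? "Moderate" : "High"
plot(volPct, "Volatility %")
```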
---
🎯 **Deviation Limits (Upper and Lower)**
- The upper and lower limits form a **projected range** around the estimated future price, providing a framework for uncertainty.
- These limits are calculated by adjusting the ATR using:
- A user-defined **multiplier** (`factor_desviacion`).
- **Bayesian probabilities** calculated earlier.
- The **square root of the projected period** (`proyeccion_dias`), incorporating the principle that uncertainty grows over time.
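Putting the three adjustment factors together, the band math might look like the following sketch (probUp is a stand-in for the Bayesian probability computed earlier; the default values are assumptions):

```
//@version=5
indicator("Projection limits (sketch)", overlay=true)
proyeccion_dias   = input.int(30, "proyeccion_dias")
factor_desviacion = input.float(2.0, "factor_desviacion")
atr = ta.atr(14)
// stand-in for the conditional probability from the Bayes step
probUp = 0.55
// uncertainty grows with the square root of the projected period
dev = atr * factor_desviacion * probUp * math.sqrt(proyeccion_dias)
plot(close + dev, "Upper limit", color.teal)
plot(close - dev, "Lower limit", color.red)
```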
🔍 **Interpreting the Model**
This can be seen as a **dynamic probabilistic model** that:
- Combines **technical analysis** (trends and ATR).
- Refines probabilities using **Bayesian theory**.
- Provides a **visual projection range** to help you understand potential future price movements and associated uncertainties.
⚡ Whether you're analyzing **volatile markets** or confirming **bullish/bearish scenarios**, this tool equips you with a robust, data-driven approach! 🚀
Market Trades Pinescriptlabs
This algorithm is designed to emulate the true order book of exchanges by showing the quantity of transactions of an asset in real-time, while identifying patterns of high activity and volatility in the market through the analysis of volume and price movements. 📈 Below, I explain how to understand and use the information provided by the chart, along with the trades table:
Identification of High Activity Zones 🚀
The algorithm calculates the average volume and the rate of price change to detect areas with spikes in activity. This is visualized on the chart with labels "Volatility Spike Buy" and "Volatility Spike Sell":
Volatility Spike Buy: Indicates an unusual increase in volatility in the buying market, suggesting a potential surge in buying interest. 🟢
Volatility Spike Sell: Signals an increase in volatility in the selling market, which may indicate selling pressure or a sudden massive sell-off. 🔴
Market Trades Table 📋
The table provides a detailed view of the latest trades:
Price: Displays the price at which each trade was executed. 💵
Quantity (Traded): Indicates the amount of the asset traded. 💰
Type of Trade (Buy/Sell): Differentiates between buy (Buy) and sell (Sell) operations based on volume and strength. 🔄
Date and Time: Refers to the start of the calculated trading candle. ⏰
Recency: Identifies the most recent trade to facilitate tracking of current activity. 🔍
Analysis of Trade Imbalance ⚖️
The imbalance between buys and sells is calculated based on the volume of both. This indicator helps to understand whether the market has a tendency toward buying or selling, showing if there is greater strength on one side of the market.
A positive imbalance suggests more buying pressure. 📊
A negative imbalance indicates greater selling pressure. 📉
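A hedged sketch of how such an imbalance could be computed from standard chart data (the published script's exact buy/sell classification is not shown, so the candle-direction split below is an assumption):

```
//@version=5
indicator("Trade imbalance (sketch)")
// assumed classification: volume on green candles counts as buying,
// volume on red candles as selling
buyVol  = close >= open ? volume : 0.0
sellVol = close <  open ? volume : 0.0
// rolling imbalance over the last 20 bars
imb = math.sum(buyVol, 20) - math.sum(sellVol, 20)
plot(imb, "Imbalance", imb >= 0 ? color.green : color.red)
```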
Volume Presentation
Visualizes the volume of buying and selling in the market, allowing the identification of buying or selling strength through the size of the volume candle. 🔍