Rolling Correlation vs Another Symbol (SPY Default)
This indicator visualizes the rolling correlation between the current chart symbol and another selected asset, helping traders understand how closely the two move together over time.
It calculates the Pearson correlation coefficient over a user-defined period (default 22 bars) and plots it as a color-coded line:
• Green line → positive correlation (move in the same direction)
• Red line → negative correlation (move in opposite directions)
• A gray dashed line marks the zero level (no correlation).
The background highlights periods of strong relationship:
• Light green when correlation > +0.7 (strong positive)
• Light red when correlation < –0.7 (strong negative)
Use this tool to quickly spot diversification opportunities, confirm hedges, or understand how assets interact during different market regimes.
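For reference, the core behavior described above fits in a few lines of Pine. A minimal sketch, assuming the stated defaults (the published indicator adds more styling):

```pine
//@version=5
indicator("Rolling Correlation sketch")
sym = input.symbol("SPY", "Comparison symbol")
len = input.int(22, "Correlation length", minval=2)
other = request.security(sym, timeframe.period, close)
corr  = ta.correlation(close, other, len)  // rolling Pearson correlation
plot(corr, "Correlation", color = corr >= 0 ? color.green : color.red)
hline(0, "Zero", color=color.gray, linestyle=hline.style_dashed)
bgcolor(corr > 0.7 ? color.new(color.green, 85) : corr < -0.7 ? color.new(color.red, 85) : na)
```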
Statistics
Standardization (Z-score)
Standardization, often referred to as Z-score normalization, is a data preprocessing technique that rescales data to have a mean of 0 and a standard deviation of 1. The resulting values, known as Z-scores, indicate how many standard deviations an individual data point is from the mean of the dataset (or a rolling sample of it).
This indicator calculates and plots the Z-score for a given input series over a specified lookback period. It is a fundamental tool for statistical analysis, outlier detection, and preparing data for certain machine learning algorithms.
## Core Concepts
* **Standardization:** The process of transforming data to fit a standard normal distribution (or more generally, to have a mean of 0 and standard deviation of 1).
* **Z-score (Standard Score):** A dimensionless quantity that represents the number of standard deviations by which a data point deviates from the mean of its sample.
The formula for a Z-score is:
`Z = (x - μ) / σ`
Where:
* `x` is the individual data point (e.g., current value of the source series).
* `μ` (mu) is the mean of the sample (calculated over the lookback period).
* `σ` (sigma) is the standard deviation of the sample (calculated over the lookback period).
* **Mean (μ):** The average value of the data points in the sample.
* **Standard Deviation (σ):** A measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean, while a high standard deviation indicates that the values are spread out over a wider range.
## Common Settings and Parameters
| Parameter | Type | Default | Function | When to Adjust |
| :-------------- | :----------- | :------ | :------------------------------------------------------------------------------------------------------ | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Source | series float | close | The input data series (e.g., price, volume, indicator values). | Choose the series you want to standardize. |
| Lookback Period | int | 20 | The number of bars (sample size) used for calculating the mean (μ) and standard deviation (σ). Min 2. | A larger period provides more stable estimates of μ and σ but will be less responsive to recent changes. A shorter period is more reactive. `minval` is 2 because `ta.stdev` requires it. |
**Pro Tip:** Z-scores are excellent for identifying anomalies or extreme values. For instance, applying Standardization to trading volume can help quickly spot days with unusually high or low activity relative to the recent norm (e.g., Z-score > 2 or < -2).
## Calculation and Mathematical Foundation
The Z-score is calculated for each bar as follows, using a rolling window defined by the `Lookback Period`:
1. **Calculate Mean (μ):** The simple moving average (`ta.sma`) of the `Source` data over the specified `Lookback Period` is calculated. This serves as the sample mean `μ`.
`μ = ta.sma(Source, Lookback Period)`
2. **Calculate Standard Deviation (σ):** The standard deviation (`ta.stdev`) of the `Source` data over the same `Lookback Period` is calculated. This serves as the sample standard deviation `σ`.
`σ = ta.stdev(Source, Lookback Period)`
3. **Calculate Z-score:**
* If `σ > 0`: The Z-score is calculated using the formula:
`Z = (Current Source Value - μ) / σ`
* If `σ = 0`: This implies all values in the lookback window are identical (and equal to the mean). In this case, the Z-score is defined as 0, as the current source value is also equal to the mean.
* If `σ` is `na` (e.g., insufficient data in the lookback period), the Z-score is `na`.
> 🔍 **Technical Note:**
> * The `Lookback Period` must be at least 2 for `ta.stdev` to compute a valid standard deviation.
> * The Z-score calculation uses the sample mean and sample standard deviation from the rolling lookback window.
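Taken together, the steps above reduce to a few lines of Pine. A minimal sketch of the calculation (the published indicator also plots the ±1/±2/±3 level lines and adds styling):

```pine
//@version=5
indicator("Z-Score sketch")
src = input.source(close, "Source")
len = input.int(20, "Lookback Period", minval=2)
mu    = ta.sma(src, len)    // sample mean over the rolling window
sigma = ta.stdev(src, len)  // sample standard deviation
// σ > 0: standard formula; σ = 0: defined as 0; σ = na: propagate na
z = sigma > 0 ? (src - mu) / sigma : sigma == 0 ? 0.0 : float(na)
plot(z, "Z-score")
hline(0)
hline(2, color=color.red)
hline(-2, color=color.red)
```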
## Interpreting the Z-score
* **Magnitude and Sign:**
* A Z-score of **0** means the data point is identical to the sample mean.
* A **positive Z-score** indicates the data point is above the sample mean. For example, Z = 1 means the point is 1 standard deviation above the mean.
* A **negative Z-score** indicates the data point is below the sample mean. For example, Z = -1 means the point is 1 standard deviation below the mean.
* **Typical Range:** For data that is approximately normally distributed (bell-shaped curve):
* About 68% of Z-scores fall between -1 and +1.
* About 95% of Z-scores fall between -2 and +2.
* About 99.7% of Z-scores fall between -3 and +3.
* **Outlier Detection:** Z-scores significantly outside the -2 to +2 range, and especially outside -3 to +3, are often considered outliers or extreme values relative to the recent historical data in the lookback window.
* **Volatility Indication:** When applied to price, large absolute Z-scores can indicate moments of high volatility or significant deviation from the recent price trend.
The indicator plots horizontal lines at ±1, ±2, and ±3 standard deviations to help visualize these common thresholds.
## Common Applications
1. **Outlier Detection:** Identifying data points that are unusual or extreme compared to the rest of the sample. This is a primary use in financial markets for spotting abnormal price moves, volume spikes, etc.
2. **Comparative Analysis:** Allows for comparison of scores from different distributions that might have different means and standard deviations. For example, comparing the Z-score of returns for two different assets.
3. **Feature Scaling in Machine Learning:** Standardizing features to have a mean of 0 and standard deviation of 1 is a common preprocessing step for many machine learning algorithms (e.g., SVMs, logistic regression, neural networks) to improve performance and convergence.
4. **Creating Normalized Oscillators:** The Z-score itself can be used as a bounded (though not strictly between -1 and +1) oscillator, indicating how far the current price has deviated from its moving average in terms of standard deviations.
5. **Statistical Process Control:** Used in quality control charts to monitor if a process is within expected statistical limits.
## Limitations and Considerations
* **Assumption of Normality for Probabilistic Interpretation:** While Z-scores can always be calculated, the probabilistic interpretations (e.g., "68% of data within ±1σ") strictly apply to normally distributed data. Financial data is often not perfectly normal (e.g., it can have fat tails).
* **Sensitivity of Mean and Standard Deviation to Outliers:** The sample mean (μ) and standard deviation (σ) used in the Z-score calculation can themselves be influenced by extreme outliers within the lookback period. This can sometimes mask or exaggerate the Z-score of other points.
* **Choice of Lookback Period:** The Z-score is highly dependent on the `Lookback Period`. A short period makes it very sensitive to recent fluctuations, while a long period makes it smoother and less responsive. The appropriate period depends on the analytical goal.
* **Stationarity:** For time series data, Z-scores are calculated based on a rolling window. This implicitly assumes some level of local stationarity (i.e., the mean and standard deviation are relatively stable within the window).
Multi-Session Viewer and Analyzer
A fully customizable multi-session viewer that takes session analysis to the next level, letting you tailor each session to your liking. It includes a feature that highlights specific periods of time on the chart and a Time Range Marker.
It helps you analyze the instrument that you trade and pinpoint which times are more volatile than others. It also helps you choose the best time to trade your instrument and align your life schedule with the market.
NZDUSD Example:
- 3 major sessions displayed.
- Although this is NZDUSD, Sydney is not the best time to trade this pair. Volatility picks up at Tokyo open.
- I have time to trade in the evening from 18:00 to 22:00 PST. I live in a different time zone, whereas the market is based on EST. How does the pair behave during the time I am available to trade, based on my time zone? The Time Range Marker feature allows you to see this clearly on the chart (black lines).
- I have some time in the morning to trade during the New York session, but there is no way I am waking up at 05:00 PST. 06:30 PST seems doable. The blue highlighted area is a good time to trade during the New York session based on what Bob said. It seems like this aligns with when I am available and able to trade. Volatility is also at its peak.
- I am also available to trade between London close and Tokyo open on some days of the week, but... based on what I see, the green highlighted area clearly shows that I probably don't want to waste my time trading this pair from London close until Tokyo open. I will use this time for something else rather than be stuck in a range.
SPX Bull Market, Bear Market and Corrections Since 1929
This script visually shows, with labels, all the bull and bear markets since 1929, along with intermediary corrections.
Bear Market = Price drop of >=20% (based on closing price, not intraday low)
Corrections = Price drop of >=10% and <20% (based on closing price, not intraday low; the intraday price may drop beyond 20% but close down less than 20%)
The script doesn't update as we move forward; I need to update it manually during every correction/bull/bear phase.
It is a good visual for studying past bull and bear markets to gain some key insights!
LAST UPA FOR DA DAY
Well, been fing around most of the day now. TBH this is showing results. Much respect to all along the journey. Mess with the settings and make the colors natural for you.
Basic DCA Strategy by Wongsakon Khaisaeng
The Core Principle and Philosophy Behind the Basic DCA Strategy
1. Introduction
The Basic DCA Strategy (Dollar-Cost Averaging) represents one of the most fundamental and enduring investment methodologies in the realm of systematic accumulation. The philosophy underpinning DCA is rooted not in speculation or prediction, but in disciplined participation. It assumes that the consistent act of investing a fixed amount of capital over time—regardless of short-term price volatility—can yield superior long-term outcomes through the natural smoothing effect of cost averaging.
This strategy, expressed through the Pine Script code above, formalizes the DCA concept into a fully systematic trading framework, enabling quantitative backtesting and objective evaluation of long-term accumulation efficiency.
2. Mechanism of Operation
At its technical core, the strategy executes a fixed-value buy order at every predefined interval within a specific accumulation period.
Each DCA event invests a constant “Investment Amount (USD)” irrespective of price fluctuations. When prices decline, this constant investment buys a larger quantity of the asset; when prices rise, it purchases fewer units. Over time, this behavior lowers the average cost basis of the accumulated position, effectively neutralizing short-term timing risks.
Mathematically, this is represented as:
Units Purchased = Investment Amount / Closing Price
Cost Basis = Total Invested USD / Total Units Acquired
Portfolio Value = Total Units Acquired × Current Price
The algorithm tracks cumulative investment, acquired units, and commissions dynamically, continuously recalculating key portfolio metrics such as total profit/loss (PnL), CAGR (Compound Annual Growth Rate), and maximum drawdown (peak-to-trough equity decline).
Furthermore, the script juxtaposes DCA results with a Buy & Hold benchmark, where the entire initial capital is invested at once. This comparison highlights the behavioral resilience and volatility resistance of the DCA method relative to market-timing strategies.
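As a rough illustration of the purchase loop, here is a minimal Pine sketch; the weekly interval and the input name are illustrative stand-ins for the script's actual settings:

```pine
//@version=5
strategy("Basic DCA sketch", overlay=true, pyramiding=1000, initial_capital=100000)
amountUSD = input.float(100.0, "Investment Amount (USD)", minval=1)
// One fixed-value buy per predefined interval (a new week, in this sketch):
if timeframe.change("W")
    strategy.entry("DCA", strategy.long, qty = amountUSD / close)  // Units = Amount / Price
```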
3. The Essence of DCA Philosophy
At its philosophical core, DCA is not a trading system, but a behavioral framework for rational capital deployment under uncertainty. It embodies the principle that time in the market often outweighs timing the market.
The DCA approach rejects the illusion of precision forecasting and embraces probabilistic humility—the recognition that even the most skilled investors cannot consistently predict short-term market fluctuations. Instead, it focuses on controlling what is controllable: the frequency, consistency, and size of investment actions.
This mindset reflects a broader principle of risk dispersion through temporal diversification. Rather than concentrating entry risk into a single price point (as in lump-sum investing), DCA spreads exposure across multiple time intervals, thereby converting volatility into opportunity.
In essence, volatility—often perceived as risk—is reframed as a mechanism for mean reversion advantage. The strategy thrives precisely because markets oscillate; each fluctuation provides a chance to accumulate at varied price levels, improving the weighted-average entry over time.
4. Long-Term Rationality Over Short-Term Emotion
DCA’s endurance stems from its ability to neutralize emotional biases inherent in human decision-making. Investors tend to overreact to market euphoria or panic—buying high out of greed and selling low out of fear. By automating purchases through predefined intervals, the DCA model enforces mechanical discipline, detaching decision-making from sentiment.
This transforms investing from an emotional endeavor into a systematic, algorithmic routine governed by rules rather than reactions. In doing so, DCA serves not only as a financial model but also as a psychological safeguard—aligning investor behavior with long-term compounding logic rather than short-term speculation.
5. Comparative Insight: DCA vs. Buy & Hold
While both DCA and Buy & Hold share a long-term investment horizon, they diverge in their treatment of entry timing. The Buy & Hold model assumes full deployment of capital at the beginning, maximizing exposure to growth but also to volatility. Conversely, DCA smooths the entry curve, trading off short-term returns for long-term stability and improved average entry price.
In environments characterized by volatility and cyclical corrections, DCA tends to outperform in terms of risk-adjusted returns, lower drawdowns, and improved investor adherence—since it reduces the psychological pain of entering at local peaks.
6. Conclusion
The Basic DCA Strategy exemplifies the synthesis of mathematical rigor and behavioral discipline. Its algorithmic construction in Pine Script transforms a classical investment philosophy into a quantifiable, testable, and transparent framework.
By automating fixed-amount purchases across time, the system operationalizes the central axiom of DCA: consistency over conviction. It is not concerned with predicting future prices but with ensuring persistent participation—trusting that the market’s upward bias and the power of compounding will reward patience more than precision.
Ultimately, DCA embodies the timeless principle that successful investing is less about forecasting markets, and more about designing behavior that can endure them.
Forex Dynamic Lot Size Calculator
A dynamic lot size calculator for Forex. Works on USD-base and USD-quote pairs. Provides real-time data based on stop-loss location, letting you know in real time how many lots you need to purchase to match your risk %.
Number of Lots is calculated based on total risk. Total risk is calculated based on Stop-Loss + Commission + Spread Fees + Slippage measured in pips. Also includes data such as break-even pips, net take profit, margin required, buying power used, and a few others. All are real-time and anchored to the current price.
The intention behind this indicator is to help with risk management. You know exactly how many lots you need to buy this very moment to keep your total risk at, let's say, $250, which includes commission fees, spread fees, and slippage.
To put it simply, if I was to enter the trade right now and willing to risk exactly $250, how many lots will I need to get right this second?
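As a back-of-envelope sketch of that math (all names and defaults here are illustrative, not the indicator's actual inputs):

```pine
//@version=5
indicator("Lot Size sketch", overlay=true)
riskUSD      = input.float(250.0, "Risk per trade (USD)")
slPips       = input.float(20.0,  "Stop-loss distance (pips)")
spreadPips   = input.float(1.0,   "Average spread (pips)")
slippagePips = input.float(0.5,   "Slippage (pips)")
commission   = input.float(7.0,   "Round-turn commission per lot (USD)")
pipValue     = input.float(10.0,  "Pip value per standard lot (USD)")  // ~$10 on USD-quote pairs
costPerLot = (slPips + spreadPips + slippagePips) * pipValue + commission
lots = riskUSD / costPerLot  // lots that keep total risk at the chosen dollar amount
if barstate.islast
    label.new(bar_index, high, "Lots: " + str.tostring(lots, "#.##"))
```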
---
- To use adjust Account Settings along with other variables.
- Stop Loss Mode can be Manual or Dynamic. If you select Dynamic, you will have to adjust the Stop Loss Level so the reference line is visible on the screen. It is at 1.1 by default. Just enter the current price and the line will appear; adjust it by dragging it to where you want your stop loss to be.
- Take Profit Mode can also be Manual or Dynamic. I just keep my TP at Manual and use Quick Access to set Quick RR levels.
- Adjust Spreads and Slippage to your liking. I tried to have TV calculate the current spread, but it seems it doesn't have access to real-time data for me like MT5 does, so I just use an average instead. Both are optional, depending on your broker and the type of account you use.
- Pip Value for the current pair, Return on Margin, and the Break-even line can be turned on and off based on your needs. I just take the break-even value in pips from the panel and use it as a reference for where I need to relocate my stop loss to break even (commission + spreads + slippage).
- The panel is fully customizable to your liking. Important fields are highlighted along with reference lines.
💰 Position Size Table Compact
Quickly see how many shares you can buy for preset investment amounts at the current price. This compact, customizable table is perfect for traders who want to calculate position sizes instantly without manual math.
Features
- Pre-set investment amounts: $500, $1000, $2000, $3000, $5000, $10000
- Per-row toggle: Show or hide specific investment amounts
- Live updates: Table recalculates as the stock price changes
- Customizable colors: Background, header, text, and border
- Master toggle: Hide or show the entire table on demand
Use it to
- Quickly calculate position sizes for multiple investment levels
- Plan trades efficiently and reduce manual calculation errors
- Keep your chart clean with a compact, flexible table
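For readers curious how such a table is built, here is a minimal sketch of the idea (amounts and styling simplified; not the published code):

```pine
//@version=5
indicator("Position Size Table sketch", overlay=true)
var amounts = array.from(500.0, 1000.0, 2000.0, 3000.0, 5000.0, 10000.0)
var table t = table.new(position.bottom_right, 2, 7, border_width=1)
if barstate.islast
    table.cell(t, 0, 0, "Amount", bgcolor=color.new(color.blue, 70))
    table.cell(t, 1, 0, "Shares", bgcolor=color.new(color.blue, 70))
    for i = 0 to array.size(amounts) - 1
        amt = array.get(amounts, i)
        table.cell(t, 0, i + 1, "$" + str.tostring(amt, "#"))
        table.cell(t, 1, i + 1, str.tostring(math.floor(amt / close)))  // whole shares
```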
Signature [Pro+]
Signature - Release
Indicator Table Features:
- Customizable indicator title display
- Real-time clock with timezone support
- 12-hour or 24-hour time format options
- Toggle AM/PM display for 12-hour format
- Individual text size control for clock
- Ticker symbol display
- Timeframe display
- Flexible table positioning (9 positions available)
- Customizable text colors, background colors, and border colors
- Font family selection (Default or Monospace)
- Individual control to enable/disable each element
- Text alignment control (Left, Center, Right)
Watermark Table Features:
- Customizable header text
- Customizable subtitle text
- Current date display with multiple format options
- Date month format: Full, Abbreviated, or Number
- Date year format: Full or Abbreviated
- Date separator options: dot, slash, dash, or space
- Flexible table positioning (9 positions available)
- Customizable text colors, background colors, and border colors
- Font family selection (Default or Monospace)
- Individual control to enable/disable each element
- Text alignment control (Left, Center, Right)
General Features:
- Two independent tables that can be positioned anywhere
- All styling consistent across both tables
- Minimal and clean design
- No performance impact on chart analysis
- Text alignment options for both indicator and watermark tables
Psychological Levels + Retest
The script detects key round-number psychological levels such as 00, 50, and 100 zones based on the pair’s pip structure.
It then monitors price behavior around these zones using customizable buffers to highlight reaction areas.
When price breaks above a major or minor psychological level while trading above the 200 EMA, the indicator tracks for a retest confirmation. Once the required number of touches occurs within the defined buffer, the indicator marks the retest area and can trigger alerts for trade opportunities.
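A minimal sketch of the level grid plus the 200 EMA filter; the buffer and touch-count logic of the full indicator is omitted, and the pip math assumes a 5-digit FX feed:

```pine
//@version=5
indicator("Psych Levels sketch", overlay=true)
pip    = syminfo.mintick * 10            // assumes a 5-digit FX feed
step   = 50 * pip                        // minor 50-pip grid (majors at 00/100)
lvl    = math.round(close / step) * step // nearest round-number level
ema200 = ta.ema(close, 200)
brokeUp = close > lvl and close[1] <= lvl and close > ema200
plot(ema200, "200 EMA")
plotshape(brokeUp, style=shape.triangleup, location=location.belowbar, color=color.green)
```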
Auto Option Premium Viewer - csgnanam
### Explanation of the "Auto Option Premium Viewer" Indicator
This Pine Script indicator, **"Auto Option Premium Viewer — Full Auto Symbols (NSE format, improved detection),"** is designed to automatically fetch and display the At-The-Money (ATM) Call and Put option premiums for major NSE indices (NIFTY, BANKNIFTY, MIDCPNIFTY, FINNIFTY) in real-time.
The primary goal is to provide a single, clean chart overlay showing the total premium (CE + PE) for the options closest to the current spot price, without requiring the user to manually enter strike prices or steps.
#### 1. Automatic Index Detection (`AUTO` Functionality)
* **Smart Underlying Detection:** The script attempts to automatically detect the index you are currently viewing (`activeUnderlying`). For example, if your chart is set to `BANKNIFTY`, the indicator automatically focuses on Bank Nifty options.
* **Spot Ticker Mapping (The Fix):** To accurately find the spot price, the script uses a helper function (`getSpotTicker`) to map the common index name (like `FINNIFTY`) to the specific underlying ticker required by TradingView (like `CNXFINANCE` or `NIFTY_MID_SELECT`). This ensures accurate price referencing for ATM calculation across all indices.
#### 2. Fully Automated Strike & Step Sizing
* **No Base Strike Inputs:** The script dynamically calculates the At-The-Money (ATM) strike price based on the live spot price of the underlying index.
* **Fixed Strike Steps:** The strike increment (`current_step`) is hardcoded based on market conventions:
* **100:** NIFTY, BANKNIFTY, FINNIFTY
* **50:** MIDCPNIFTY
* **Dynamic ATM Calculation:** The live spot price is rounded to the nearest valid strike based on the correct step size. This automatically determines the central strike (B), along with the adjacent strikes (A and C) to ensure the fetched data is always relevant.
#### 3. Data Fetching and Display
* **Symbol Construction:** The `buildSymbol` function creates the exact NSE option symbol string (e.g., `NSE:NIFTY251028C26000`) required by the `request.security` function.
* **Option Price Request:** The script uses `request.security` to fetch the closing price (`close`) of the Call and Put options for the three relevant strikes (A, B, C) on a fixed **5-minute** timeframe (`dataTF`).
* **Plots:** The indicator displays three lines on the chart's lower panel:
1. **ATM CE Premium:** The price of the Call option closest to ATM.
2. **ATM PE Premium:** The price of the Put option closest to ATM.
3. **ATM Total Premium:** The sum of CE + PE, often used as a proxy for the minimum expected range or implied volatility.
This automatic setup makes the indicator extremely efficient for quick analysis without needing to manually adjust any numerical settings.
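As a hedged illustration of the ATM rounding and symbol construction described above (the helper name mirrors this description; the script's exact signatures may differ):

```pine
//@version=5
indicator("ATM symbol sketch", overlay=true)
buildSymbol(string expiry, int strike, bool isCall) =>
    "NSE:NIFTY" + expiry + (isCall ? "C" : "P") + str.tostring(strike)
step = 100                                         // NIFTY strike-step convention above
spot = request.security("NSE:NIFTY1!", "5", close) // front-month futures as spot reference
atm  = math.round(spot / step) * step              // round to the nearest valid strike
// e.g., buildSymbol("251028", 24200, true) -> "NSE:NIFTY251028C24200"
if barstate.islast
    label.new(bar_index, high, buildSymbol("251028", atm, true))
```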
Risk Leverage Tool – Calculate Position Size and Required Leverage
This script automatically calculates the optimal position size and the leverage needed based on the amount of capital you are willing to risk on a trade. It is designed for traders who want precise control over their risk management.
The script determines the distance between the entry and stop-loss price, calculates the maximum position size that fits within the defined risk, and derives the notional value of the trade. Based on the available margin, it then calculates the required leverage. It also displays the percentage of margin at risk if the stop-loss is hit.
All results are displayed in a table in the top-right corner of the chart. Additionally, a label appears at the entry price level showing the same data.
To use the tool, simply input your planned entry price, stop-loss price, the maximum risk amount in dollars, and the available margin in the settings menu. The script will update all values automatically in real time.
This tool works with any market where capital risk is expressed in absolute terms (such as USD), including futures, CFDs, and leveraged spot positions. For inverse contracts or percentage-based stops, manual adjustment is required.
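A minimal sketch of the underlying math; the input names and defaults are illustrative:

```pine
//@version=5
indicator("Risk Leverage sketch", overlay=true)
entry  = input.float(100.0,  "Entry price")
stop   = input.float(95.0,   "Stop-loss price")
risk   = input.float(250.0,  "Max risk (USD)")
margin = input.float(1000.0, "Available margin (USD)")
qty      = risk / math.abs(entry - stop)  // largest size that keeps the loss at the stop <= risk
notional = qty * entry                    // trade value
leverage = notional / margin              // required leverage
riskPct  = risk / margin * 100            // % of margin lost if the stop is hit
if barstate.islast
    label.new(bar_index, entry, "Qty: " + str.tostring(qty, "#.##") + "\nLeverage: " + str.tostring(leverage, "#.#") + "x\nMargin at risk: " + str.tostring(riskPct, "#.#") + "%")
```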
Adaptive Trend Selector
The Adaptive Trend Selector is a comprehensive trend-following tool designed to automatically identify the optimal moving average crossover strategy. It features adjustable parameters and an integrated backtester that delivers institutional-grade insights into the recommended strategy. The model continuously adapts to new data in real time by evaluating multiple moving average combinations, determining the best performing lengths, and presenting the backtest results in a clear, color-coded table that benchmarks performance against the buy-and-hold strategy.
At its core, the model systematically backtests a wide range of moving average combinations to identify the configuration that maximizes the selected optimization metric. Users can choose to optimize for absolute returns or risk-adjusted returns using the Sharpe, Sortino, or Calmar ratios. Alternatively, users can enable manual optimization to test custom fast and slow moving average lengths and view the corresponding backtest results. The label displays the Compounded Annual Growth Rate (CAGR) of the strategy, with the buy-and-hold CAGR in parentheses for comparison. The table presents the backtest results based on the fast and slow lengths displayed at the top:
Sharpe = CAGR per unit of standard deviation.
Sortino = CAGR per unit of downside deviation.
Calmar = CAGR relative to maximum drawdown.
Max DD = Largest peak-to-trough decline in value.
Beta (β) = Return sensitivity relative to buy-and-hold.
Alpha (α) = Excess annualized risk-adjusted returns.
Win Rate = Ratio of profitable trades to total trades.
Profit Factor = Total gross profit per unit of losses.
Expectancy = Average expected return per trade.
Trades/Year = Average number of trades per year.
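For context, here is a sketch of one candidate crossover the optimizer evaluates; the 20/50 EMA pair is purely illustrative:

```pine
//@version=5
strategy("MA Crossover sketch", overlay=true)
fastMA = ta.ema(close, 20)
slowMA = ta.ema(close, 50)
if ta.crossover(fastMA, slowMA)
    strategy.entry("Long", strategy.long)
if ta.crossunder(fastMA, slowMA)
    strategy.close("Long")
// CAGR over the tested window: (end equity / start equity)^(365 / days) - 1
```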
This indicator is designed with flexibility in mind, enabling users to specify the start date of the backtesting period and the preferred moving average strategy. Supported strategies include the Exponential Moving Average (EMA), Simple Moving Average (SMA), Wilder’s Moving Average (RMA), Weighted Moving Average (WMA), and Volume-Weighted Moving Average (VWMA). To minimize overfitting, users can define constraints such as a minimum and maximum number of trades per year, as well as an optional optimization margin that prioritizes longer, more robust combinations by requiring shorter-length strategies to exceed this threshold. The table follows an intuitive color logic that enables quick performance comparison against buy-and-hold (B&H):
Sharpe = Green indicates better than B&H, while red indicates worse.
Sortino = Green indicates better than B&H, while red indicates worse.
Calmar = Green indicates better than B&H, while red indicates worse.
Max DD = Green indicates better than B&H, while red indicates worse.
Beta (β) = Green indicates better than B&H, while red indicates worse.
Alpha (α) = Green indicates above 0%, while red indicates below 0%.
Win Rate = Green indicates above 50%, while red indicates below 50%.
Profit Factor = Green indicates above 2, while red indicates below 1.
Expectancy = Green indicates above 0%, while red indicates below 0%.
In summary, the Adaptive Trend Selector is a powerful tool designed to help investors make data-driven decisions when selecting moving average crossover strategies. By optimizing for risk-adjusted returns, investors can confidently identify the best lengths using institutional-grade metrics. While results are based on the selected historical period, users should be mindful of potential overfitting, as past results may not persist under future market conditions. Since the model recalibrates to incorporate new data, the recommended lengths may evolve over time.
Market Screener - Narwing
This is a 20-cryptocurrency market screener. Its goal is to provide a broad view of the state of cryptocurrencies using 4 key components:
1. ROC
2. Sharpe Ratio
3. Sortino Ratio
4. Omega Ratio
All these metrics are calculated twice, with two different lengths: 7 days and 30 days.
This allows for broad market screening instead of focusing on one particular asset
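As an illustration, one screener cell might be computed like this; the symbol and the annualization factor are assumptions, not the script's internals:

```pine
//@version=5
indicator("Screener metric sketch")
dailyClose = request.security("BINANCE:BTCUSDT", "D", close)  // illustrative symbol
ret  = dailyClose / dailyClose[1] - 1
roc7 = (dailyClose / dailyClose[7] - 1) * 100                 // 7-day rate of change, %
sharpe7 = ta.sma(ret, 7) / ta.stdev(ret, 7) * math.sqrt(365)  // rolling 7-day Sharpe, annualized
plot(roc7, "ROC 7d")
plot(sharpe7, "Sharpe 7d")
```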
This tool is meant for research purposes only, never invest money you can't afford to lose
Mercury Retrograde — Daily boxes & bottom % (stable v6)
Verification of Mercury retrograde anomalies. Subject: Nikkei 225 price movements over the past five years. Compares the closing price at the start of each Mercury retrograde period with the closing price at its end, color-coded by the percentage increase or decrease.
The Wick Report [Pro]
Overview
The Wick Report visualizes how current wick development compares to long-term statistical behavior across multiple higher-timeframe candles.
It references embedded datasets to show where wick formation is historically common, rare, or unusually small for a given session or timeframe.
This provides a data-driven context for directional bias and wick-based targeting — without implying any form of prediction.
Candles that form little or no wick are statistically uncommon. The Wick Report highlights these conditions and displays their percentile rank, exceedance probability, and a derived “score” that reflects how far current wick behavior deviates from typical norms.
Key Features
• Multi-Timeframe Analysis – View wick statistics from 4H, 6H, 12H, Daily, or Weekly candles projected onto any chart.
• Wick Probabilities – Quantifies the historical likelihood of a wick extending beyond its current size.
• Percentile Mapping – Shows where each wick sits within its long-term distribution (e.g., P25 = smaller than 75 % of prior wicks).
• Score System – Automatically combines percentile and probability into a single normalized “target score” for simplified interpretation.
• Wick Modes – Choose how wick data is displayed to suit your analysis style:
– Auto — Detects candle direction automatically and draws the statistically relevant wick (upper or lower).
– Bullish Only — Displays only lower wicks from bullish candles.
– Bearish Only — Displays only upper wicks from bearish candles.
– Both — Draws both upper and lower wick zones simultaneously for full candle symmetry.
• Adaptive Visualization – Color-coded zones and dynamic labels update as higher-timeframe candles evolve.
• Threshold Filters – Optional probability or score filters to hide low-significance wicks.
About the Score
The score balances two opposing factors:
• High probability of a wick extending further, and
• Low percentile ranking (a smaller-than-normal wick).
A strong combination of both produces a higher score, highlighting candles where wick development is statistically most imbalanced.
The scale is purely comparative — derived from historical distributions, not forward prediction.
Target Score Rankings
Outstanding (70+) – Extremely rare, high-confidence zones — typically at very low percentiles with strong exceedance probability.
Excellent (60–70) – High-confidence targets with clear statistical edge.
Good (50–60) – Solid probability zones, reliable reference levels.
Above Average (40–50) – Decent opportunities within normal ranges.
Average (30–40) – Neutral zones; use additional confirmation.
Below Average (20–30) – Low-confidence references.
Poor (< 20) – High percentiles with low probability; statistically common and uninformative.
Methodology & Use
The Wick Report uses historical wick distributions to classify how current wick sizes compare to typical behavior for the same timeframe and session hour.
When a candle forms a small or missing wick, the tool reports how often that condition historically remained unchanged through the rest of the candle’s interval.
This helps identify when wick development is statistically under- or over-extended.
The data is intended for contextual reference only — for example, combining a high-score, low-percentile wick on a higher timeframe with lower-timeframe structure may provide useful directional confluence.
It does not generate trade signals or predict future movement.
Proprietary Framework
The Wick Report uses embedded statistical datasets built from more than a decade of historical market behavior.
Each timeframe references pre-processed wick-size and exceedance distributions to display where the current wick sits within its long-term statistical range.
All computational methods and dataset structures remain proprietary.
Lump Sum Favorability (SPX & NDX)
This indicator provides a visual dashboard to gauge the statistical favorability of deploying a "Lump Sum" investment into the SPX (S&P 500) or NDX (Nasdaq 100).
The primary goal is not to time the exact market bottom, but to identify zones of significant pessimism or euphoria. Historically, periods of indiscriminate selling have represented high-probability entry points for long-term investors.
The dashboard consists of two parts:
1. The Favorability Gauge: A 12-segment gauge that moves from Red (Unfavorable) to Teal (Favorable).
2. The Summary Text: An optional text box (enabled in settings) that provides a plain-English summary of the current market breadth.
---
The Method: Market Breadth
This indicator is not based on the price of the index itself. Price-based indicators (like an RSI on the SPX) can be misleading. In a market-cap-weighted index, a few mega-cap stocks can hold the index price up while the vast majority of "average" stocks are already in a deep bear market.
This tool uses Market Breadth to measure the true, underlying health and participation of the entire market.
How It Works
1. Data Source: The indicator pulls the daily percentage of companies within the selected index (SPX or NDX) that are trading above their 200-day moving average. (Data tickers: S5TH for SPX, NDTH for NDX).
2. Smoothing: This raw data is volatile. To filter out daily noise and confirm a persistent trend, the indicator calculates a 5-day Simple Moving Average (SMA) of this percentage. This is the value used by the indicator.
3. Interpretation:
High Value (>= 50%): More than half of the stocks are above their long-term average. This signifies the market is "Overheated" or in a risk-on phase. The favorability for a new lump sum investment is considered Low.
Low Value (< 50%): Less than half of the stocks are above their long-term average. This signifies "Oversold" conditions or capitulation. These moments historically offer the best favorability for starting a new long-term investment.
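A minimal sketch of this breadth calculation; the INDEX: prefix is how these tickers are commonly addressed on TradingView, so verify availability on your plan:

```pine
//@version=5
indicator("Breadth sketch")
sym = input.string("S5TH", "Breadth ticker", options=["S5TH", "NDTH"])  // SPX / NDX
pct = request.security("INDEX:" + sym, "D", close)  // % of members above their 200d MA
smoothed = ta.sma(pct, 5)                           // the 5-day SMA used by the gauge
plot(smoothed, "Smoothed breadth")
hline(50, "Threshold")
```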
---
How to Use the Indicator
1. The Favorability Gauge
The gauge is designed to be intuitive: Red means "Stop/Caution," and Teal means "Go/Opportunity."
Note: The gauge's logic is inverted from the data value to achieve this simplicity.
Red Zone (Left): UNFAVORABLE
This corresponds to a high percentage of stocks being above their 200d MA (>= 50%). The market is considered Overheated, and the favorability for a new lump sum investment is low.
Teal Zone (Right): FAVORABLE
This corresponds to a low percentage of stocks being above their 200d MA (< 50%). The market is considered Oversold, and the favorability for a new lump sum investment is high.
2. The Summary Text
When "Show Summary Text" is enabled in the settings, a box will appear at the top-center of your chart. This box provides a clear, data-driven summary, such as:
"Currently, only 22% of S&P 500 companies are above their 200-day MA. Market is Oversold."
The color of this text will automatically change to match the market state (Red for Overheated, Teal for Oversold), providing instant confirmation of the gauge's reading.
---
Settings
Market: Choose the index to analyze: SPX (S&P 500) or NDX (Nasdaq 100).
Gauge Position: Select where the gauge dashboard should appear on your chart (default is Bottom Right).
Show Summary Text: Toggle the descriptive text box on or off (default is On).
---
This indicator is a statistical and historical guide, not a financial advice or timing signal. It is designed to measure favorability based on past market behavior, not to provide certainty.
Extreme oversold conditions can persist, and markets can always go lower. This tool should be used as one component of a broader investment and risk-management framework. Past performance is not a guarantee of future results.
GARCH Range Predictor
This was inspired by deltatrendtrading's video on GARCH models to predict daily trading ranges and identify favorable trading conditions. Based on advanced volatility forecasting techniques, it predicts whether a trading day's true range will exceed a threshold, helping traders decide when to trade or skip a session.
Key Features
GARCH(1,1) Volatility Modeling: Uses log-transformed true ranges with exponential moving average centering
Forward-Looking Predictions: Makes predictions at session start before the day unfolds
Dynamic or Static Thresholds: Choose between fixed dollar thresholds or adaptive 20-day averages
Accuracy Tracking: Monitors prediction accuracy with overall and recent (20-day) hit rates
Visual Session Boxes: Colors trading sessions green (trade) or red (skip) based on predictions
Real-Time Statistics: Displays current predictions, thresholds, and performance metrics
How It Works
Data Transformation: Log-transforms daily true ranges and centers them using an EMA
Variance Modeling: Updates GARCH variance using: σ²ₜ = ω + α(residual²) + β(σ²ₜ₋₁)
Prediction Generation: Back-transforms log predictions to dollar values
Signal Generation: Compares predictions to threshold to generate trade/skip signals
Performance Tracking: Validates predictions against actual outcomes
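A minimal sketch of that pipeline on daily data; the ω, α, β defaults are illustrative and need tuning per instrument:

```pine
//@version=5
indicator("GARCH(1,1) range sketch")
omega  = input.float(0.001, "omega", step=0.0001)
alpha  = input.float(0.10,  "alpha")
beta   = input.float(0.85,  "beta")
emaLen = input.int(20, "EMA period")
logTR  = math.log(math.max(ta.tr(true), syminfo.mintick))  // log true range, guarded against log(0)
center = ta.ema(logTR, emaLen)                             // EMA centering
resid  = logTR - center
warmup = math.pow(ta.stdev(logTR, emaLen), 2)              // seed variance during warm-up
var float sigma2 = na
sigma2 := bar_index <= emaLen ? warmup : omega + alpha * math.pow(resid[1], 2) + beta * nz(sigma2[1], warmup)
forecast = math.exp(center + math.sqrt(sigma2))            // back-transform to price units
plot(forecast, "Predicted true range")
```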
Parameters
GARCH Parameters (ω, α, β): Control volatility persistence and mean reversion
EMA Period: Smoothing period for log range centering
Threshold Settings: Static dollar amount or dynamic multiplier of recent averages
Session Time: Define regular trading hours for analysis
Best Use Cases
Breakout and momentum strategies that perform better on high-range days
Risk management by avoiding low-volatility sessions
Futures day trading (optimized for MNQ/NQ detection)
Any strategy where daily range impacts profitability
Important Notes
Requires 5+ sessions for initialization and warm-up
Accuracy depends heavily on proper parameter tuning for your specific instrument
Default parameters may need adjustment for different markets
Monitor the hit rate to validate effectiveness on your timeframe
RBLR - GSK Vizag AP India
This indicator identifies the Opening Range High (ORH) and Low (ORL) based on the first 15 minutes of the Indian equity market session (9:15 AM to 9:30 AM IST). It draws horizontal lines extending these levels until market close (3:30 PM IST) and generates visual signals for price breakouts above ORH or below ORL, as well as reversals back into the range.
Key features:
- **Range Calculation**: Captures the high and low during the opening period using real-time bar data.
- **Line Extension**: Lines are dynamically extended bar-by-bar within the session for clear visualization.
- **Signals**:
- Green triangle up: Crossover above ORH (potential bullish breakout).
- Red triangle down: Crossunder below ORL (potential bearish breakout).
- Yellow labels: Reversals from breakout levels back into the range.
- **Labels**: "RAM BAAN" marks the ORH (inspired by a precise arrow from the Ramayana), and "LAKSHMAN REKHA" marks the ORL (inspired by a protective boundary line from the same epic).
- **Customization**: Toggle signals on/off and select line styles (Dotted, Dashed, Solid, or Smoothed, with transparency for Smoothed).
The state-tracking logic prevents redundant signals by monitoring if price remains outside the range after a breakout. This helps users observe range-bound behavior or directional moves without built-in alerts. This indicator is particularly useful for day trading on longer intraday timeframes (e.g., 15-minute charts) to identify session-wide trends and avoid noise in shorter frames. For best results, apply on intraday timeframes on NSE/BSE symbols. Note that lines and labels are limited to the script's max counts to avoid performance issues on long histories.
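A minimal sketch of the opening-range capture; the breakout labels, reversal tracking, and styling of the full indicator are omitted:

```pine
//@version=5
indicator("ORH/ORL sketch", overlay=true)
inOR = not na(time(timeframe.period, "0915-0930", "Asia/Kolkata"))
var float orh = na
var float orl = na
if timeframe.change("D")  // reset at each new session
    orh := na
    orl := na
if inOR                   // accumulate the 9:15–9:30 high/low
    orh := na(orh) ? high : math.max(orh, high)
    orl := na(orl) ? low  : math.min(orl, low)
plot(orh, "ORH", color=color.green, style=plot.style_linebr)
plot(orl, "ORL", color=color.red,   style=plot.style_linebr)
plotshape(ta.crossover(close, orh),  style=shape.triangleup,   location=location.belowbar, color=color.green)
plotshape(ta.crossunder(close, orl), style=shape.triangledown, location=location.abovebar, color=color.red)
```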
**Disclaimer**: This indicator is for educational and informational purposes only and does not constitute financial, investment, or trading advice. Trading in financial markets involves significant risk of loss and is not suitable for all investors. Past performance is not indicative of future results. Users should conduct their own research, consider their financial situation, and consult with qualified professionals before making any investment decisions. The author and TradingView assume no liability for any losses incurred from its use.
Liquidity Stress Index (SOFR - IORB)
How to use:
> +10 bps — TIGHT
−5 to +10 bps — NEUTRAL
< −5 bps — LOOSE
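A minimal sketch of the spread, assuming the FRED:SOFR and FRED:IORB feeds are available on your plan:

```pine
//@version=5
indicator("SOFR - IORB sketch")
sofr = request.security("FRED:SOFR", "D", close)
iorb = request.security("FRED:IORB", "D", close)
spreadBps = (sofr - iorb) * 100  // percentage points -> basis points
plot(spreadBps, "SOFR - IORB (bps)")
hline(10, "Tight")
hline(-5, "Loose")
```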
PG ATM Strike Line with Call & Put Premiums (Simplified)
This Pine Script for TradingView displays the At-The-Money (ATM) strike price, futures price, call/put premiums (time value), and two ratios—Premium Ratio (PR) and Volume Ratio (VR)—for a user-selected underlying asset (e.g., NIFTY, BANKNIFTY, or stocks). It helps traders gauge near-term market direction using options data.
How the Script Works
Inputs:
Expiry: Select year (e.g., '25), month (01–12), day (01–31) for option expiry (e.g., '251028').
Timeframe: Choose data timeframe (e.g., Daily, 15-min).
Symbol: Auto-detects chart symbol or select from Indian indices/stocks.
Strike: Auto-ATM (based on futures) or manual strike input.
Interval: Auto (e.g., 100 for NIFTY) or custom strike interval.
Colors: Customizable for ATM line, labels (Futures Price, CPR, PPR, VR, PR).
Calculations:
Futures Price (FP): Fetches the front-month futures price (e.g., NSE:NIFTY1!).
ATM Strike: Rounds futures price to nearest strike interval.
Option Data: Retrieves Last Traded Price (LTP) and volume for ATM call/put options (e.g., NSE:NIFTY251028C24200).
Call Premium (CPR): Call LTP minus intrinsic value (max(0, FP - Strike)).
Put Premium (PPR): Put LTP minus intrinsic value (max(0, Strike - FP)).
Premium Ratio (PR): PPR / CPR.
Volume Ratio (VR): Put Volume / Call Volume.
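To make the premium math concrete, here is a self-contained sketch that plugs in the NIFTY example values from the end of this description in place of live request.security data:

```pine
//@version=5
indicator("ATM premium math sketch")
fp     = 24237.50  // futures price
strike = 24200.0   // ATM strike
ceLTP  = 157.75    // illustrative call LTP
peLTP  = 180.50    // illustrative put LTP
cpr = ceLTP - math.max(0.0, fp - strike)  // 157.75 - 37.50 = 120.25 (call time value)
ppr = peLTP - math.max(0.0, strike - fp)  // 180.50 - 0 = 180.50 (put time value)
pr  = ppr / cpr                           // ~1.50 -> bearish lean
plot(pr, "PR")
```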
Visuals:
Draws the ATM strike line on the chart.
Displays labels: FP (futures price), CPR (call premium), PPR (put premium), VR, PR.
VR/PR labels: Red (≥ 1.25, bearish), Green (≤ 0.75, bullish), Gray (0.75–1.25, neutral).
Updates on last confirmed bar to avoid redraws.
Using PR and VR for Market Direction
Premium Ratio (PR):
PR ≥ 1.25 (Red): High put premiums suggest bearish sentiment (expect price drop).
PR ≤ 0.75 (Green): High call premiums suggest bullish sentiment (expect price rise).
0.75 < PR < 1.25 (Gray): Neutral, no clear direction.
Use: High PR favors bearish trades (e.g., buy puts); low PR favors bullish trades (e.g., buy calls).
Volume Ratio (VR):
VR ≥ 1.25 (Red): High put volume indicates bearish activity.
VR ≤ 0.75 (Green): High call volume indicates bullish activity.
0.75 < VR < 1.25 (Gray): Neutral trading activity.
Use: High VR suggests bearish moves; low VR suggests bullish moves.
Combined Signals:
High PR & VR: Strong bearish signal; consider put buying or call selling.
Low PR & VR: Strong bullish signal; consider call buying or put selling.
Mixed/Neutral: Use price action or support/resistance for confirmation.
Tips:
Combine with technical analysis (e.g., trends, levels).
Match timeframe to trading horizon (e.g., 15-min for intraday).
Monitor FP for context; check volatility or news for accuracy.
Example
NIFTY: FP = 24,237.50, ATM = 24,200, CPR = 120.25, PPR = 180.50, PR = 1.50 (Red), VR = 1.30 (Red).
Insight: High PR/VR suggests bearish bias; consider bearish trades if price nears resistance.
Action: Buy puts or exit longs, confirm with price action.
Conclusion: This script provides a concise tool for options traders, showing ATM strike, premiums, and PR/VR ratios. High PR/VR (≥ 1.25) signals bearish sentiment, low PR/VR (≤ 0.75) signals bullish sentiment, and neutral (0.75–1.25) suggests indecision. Combine with technical analysis for robust trading decisions in the Indian options market.
Digital Credit: Yields, Spreads & Regime
TN Preferreds is a yield-centric dashboard for bitcoin-backed preferreds that overlays effective yields. It builds credit/benchmark spread series, a simple regime model (Risk-On / Cautious / Risk-Off), and a compact table that surfaces price, yield, target, upside and diagnostics—so you can quickly judge relative value and risk conditions.
What it does:
Plots effective yields for STRF/STRC/STRK/STRD (+ CNLTN toggle).
Pulls IG (FRED:BAMLC0A0CMEY), HY (FRED:BAMLH0A0HYM2EY) and US10Y as references.
Computes Credit Spreads vs US10Y and Benchmark Spreads (F−IG, C−IG, K−IG−1%, D−HY) with EMAs/SMA for context.
STRC monthly rate input: set 12 monthly percentages; the current month auto-applies to compute the dividend.
Targets & upside: yield-parity targets for each series + % move to target
Leader logic: picks the series with the strongest SMA-based spread improvement and estimates a leader target price.
Risk regime: EMA-based deltas across spreads define Risk-On / Cautious / Risk-Off; optional background + last-bar label.
Table view (bottom-right): price, eff. yield, target, upside, CS, BS, BS-EMA, BS-Diff, leader stats, regime deltas.
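As an illustration, one spread series could be assembled like this; the US10Y ticker choice is an assumption, and the dashboard's internals may differ:

```pine
//@version=5
indicator("Credit spread sketch", format=format.percent)
ig   = request.security("FRED:BAMLC0A0CMEY", "D", close)  // IG effective yield
us10 = request.security("TVC:US10Y", "D", close)          // 10Y benchmark
creditSpread = ig - us10
plot(creditSpread, "IG - US10Y")
plot(ta.ema(creditSpread, 21), "EMA", color=color.orange)
```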
Notes:
• Designed for overlay on any chart (format = percent, right scale). Works best with a yield-based basis like US10Y.
• FRED series must be available on your TradingView plan/region.
• Educational tool, not investment advice. Always validate assumptions (dividends, conversion terms, required spreads).
LogNormal
Library "LogNormal"
A collection of functions used to model skewed distributions as log-normal.
Prices are commonly modeled using log-normal distributions (e.g., Black-Scholes) because they exhibit multiplicative changes with long tails: skewed exponential growth and high variance. This approach is particularly useful for understanding price behavior and estimating risk, assuming continuously compounding returns are normally distributed.
Because log space analysis is not as direct as using math.log(price) , this library extends the Error Functions library to make working with log-normally distributed data as simple as possible.
- - -
QUICK START
Import library into your project
Initialize model with a mean and standard deviation
Pass model params between methods to compute various properties
var LogNorm model = LN.init(arr.avg(), arr.stdev()) // Assumes the library is imported as LN
var mode = model.mode()
Outputs from the model can be adjusted to better fit the data.
var Quantile data = arr.quantiles()
var more_accurate_mode = mode.fit(model, data) // Fits value from model to data
Inputs to the model can also be adjusted to better fit the data.
datum = 123.45
model_equivalent_datum = datum.fit(data, model) // Fits value from data to the model
area_from_zero_to_datum = model.cdf(model_equivalent_datum)
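A hedged end-to-end sketch built only from the calls documented below; the import path "YourUser/LogNormal/1" is a placeholder, not the library's real publishing path:

```pine
//@version=5
indicator("LogNormal usage sketch")
import YourUser/LogNormal/1 as LN  // placeholder path
var arr = array.new<float>()
if barstate.isconfirmed
    array.push(arr, close)
    if array.size(arr) > 252       // keep a rolling one-year sample
        array.shift(arr)
model = LN.init(arr.avg(), arr.stdev())
var95 = LN.value_at_risk(model, 0.95, true)  // loss threshold at 95% confidence
plot(var95, "VaR 95%")
```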
- - -
TYPES
There are two requisite UDTs: LogNorm and Quantile . They are used to pass parameters between functions and are set automatically (see Type Management ).
LogNorm
Object for log space parameters and linear space quantiles .
Fields:
mu (float) : Log space mu ( µ ).
sigma (float) : Log space sigma ( σ ).
variance (float) : Log space variance ( σ² ).
quantiles (Quantile) : Linear space quantiles.
Quantile
Object for linear quantiles, most similar to a seven-number summary .
Fields:
Q0 (float) : Smallest Value
LW (float) : Lower Whisker Endpoint
LC (float) : Lower Whisker Crosshatch
Q1 (float) : First Quartile
Q2 (float) : Second Quartile
Q3 (float) : Third Quartile
UC (float) : Upper Whisker Crosshatch
UW (float) : Upper Whisker Endpoint
Q4 (float) : Largest Value
IQR (float) : Interquartile Range
MH (float) : Midhinge
TM (float) : Trimean
MR (float) : Mid-Range
- - -
TYPE MANAGEMENT
These functions reliably initialize and update the UDTs. Because parameterization is interdependent, avoid setting the LogNorm and Quantile fields directly .
init(mean, stdev, variance)
Initializes a LogNorm object.
Parameters:
mean (float) : Linearly measured mean.
stdev (float) : Linearly measured standard deviation.
variance (float) : Linearly measured variance.
Returns: LogNorm Object
set(ln, mean, stdev, variance)
Transforms linear measurements into log space parameters for a LogNorm object.
Parameters:
ln (LogNorm) : Object containing log space parameters.
mean (float) : Linearly measured mean.
stdev (float) : Linearly measured standard deviation.
variance (float) : Linearly measured variance.
Returns: LogNorm Object
quantiles(arr)
Gets empirical quantiles from an array of floats.
Parameters:
arr (array) : Float array object.
Returns: Quantile Object
- - -
DESCRIPTIVE STATISTICS
Using only the initialized LogNorm parameters, these functions compute a model's central tendency and standardized moments.
mean(ln)
Computes the linear mean from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
median(ln)
Computes the linear median from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
mode(ln)
Computes the linear mode from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
variance(ln)
Computes the linear variance from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
skewness(ln)
Computes the linear skewness from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
kurtosis(ln, excess)
Computes the linear kurtosis from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
excess (bool) : Excess Kurtosis (true) or regular Kurtosis (false).
Returns: Between 0 and ∞
hyper_skewness(ln)
Computes the linear hyper skewness from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
hyper_kurtosis(ln, excess)
Computes the linear hyper kurtosis from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
excess (bool) : Excess Hyper Kurtosis (true) or regular Hyper Kurtosis (false).
Returns: Between 0 and ∞
- - -
DISTRIBUTION FUNCTIONS
These wrap Gaussian functions to make working with model space more direct. Because they are contained within a log-normal library, they describe estimations relative to a log-normal curve, even though they fundamentally measure a Gaussian curve.
pdf(ln, x, empirical_quantiles)
A Probability Density Function estimates the probability density . For clarity, density is not a probability .
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate for which a density will be estimated.
empirical_quantiles (Quantile) : Quantiles as observed in the data (optional).
Returns: Between 0 and ∞
cdf(ln, x, precise)
A Cumulative Distribution Function estimates the area under a Log-Normal curve between Zero and a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ccdf(ln, x, precise)
A Complementary Cumulative Distribution Function estimates the area under a Log-Normal curve between a linear X coordinate and Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
cdfinv(ln, a, precise)
An Inverse Cumulative Distribution Function reverses the Log-Normal cdf() by estimating the linear X coordinate from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
ccdfinv(ln, a, precise)
An Inverse Complementary Cumulative Distribution Function reverses the Log-Normal ccdf() by estimating the linear X coordinate from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
cdfab(ln, x1, x2, precise)
A Cumulative Distribution Function from A to B estimates the area under a Log-Normal curve between two linear X coordinates (A and B).
Parameters:
ln (LogNorm) : Object of log space parameters.
x1 (float) : First linear X coordinate .
x2 (float) : Second linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ott(ln, x, precise)
A One-Tailed Test transforms a linear X coordinate into an absolute Z Score before estimating the area under a Log-Normal curve between Z and Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 0.5
ttt(ln, x, precise)
A Two-Tailed Test transforms a linear X coordinate into symmetrical ± Z Scores before estimating the area under a Log-Normal curve from Zero to -Z, and +Z to Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ottinv(ln, a, precise)
An Inverse One-Tailed Test reverses the Log-Normal ott() by estimating a linear X coordinate for the right tail from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Half a normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
tttinv(ln, a, precise)
An Inverse Two-Tailed Test reverses the Log-Normal ttt() by estimating two linear X coordinates from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Linear space tuple
- - -
UNCERTAINTY
Model-based measures of uncertainty, information, and risk.
sterr(sample_size, fisher_info)
The standard error of a sample statistic.
Parameters:
sample_size (float) : Number of observations.
fisher_info (float) : Fisher information.
Returns: Between 0 and ∞
surprisal(p, base)
Quantifies the information content of a single event.
Parameters:
p (float) : Probability of the event .
base (float) : Logarithmic base (optional).
Returns: Between 0 and ∞
entropy(ln, base)
Computes the differential entropy (average surprisal).
Parameters:
ln (LogNorm) : Object of log space parameters.
base (float) : Logarithmic base (optional).
Returns: Between 0 and ∞
perplexity(ln, base)
Computes the average number of distinguishable outcomes from the entropy.
Parameters:
ln (LogNorm)
base (float) : Logarithmic base used for Entropy (optional).
Returns: Between 0 and ∞
value_at_risk(ln, p, precise)
Estimates a risk threshold under normal market conditions for a given confidence level.
Parameters:
ln (LogNorm) : Object of log space parameters.
p (float) : Probability threshold, aka. the confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
value_at_risk_inv(ln, value_at_risk, precise)
Reverses the value_at_risk() by estimating the confidence level from the risk threshold.
Parameters:
ln (LogNorm) : Object of log space parameters.
value_at_risk (float) : Value at Risk.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
conditional_value_at_risk(ln, p, precise)
Estimates the average loss beyond a confidence level, aka. expected shortfall.
Parameters:
ln (LogNorm) : Object of log space parameters.
p (float) : Probability threshold, aka. the confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
conditional_value_at_risk_inv(ln, conditional_value_at_risk, precise)
Reverses the conditional_value_at_risk() by estimating the confidence level of an average loss.
Parameters:
ln (LogNorm) : Object of log space parameters.
conditional_value_at_risk (float) : Conditional Value at Risk.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
partial_expectation(ln, x, precise)
Estimates the partial expectation of a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and µ
partial_expectation_inv(ln, partial_expectation, precise)
Reverses the partial_expectation() by estimating a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
partial_expectation (float) : Partial Expectation .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
conditional_expectation(ln, x, precise)
Estimates the conditional expectation of a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between X and ∞
conditional_expectation_inv(ln, conditional_expectation, precise)
Reverses the conditional_expectation by estimating a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
conditional_expectation (float) : Conditional Expectation .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
fisher(ln, log)
Computes the Fisher Information Matrix for the distribution, not a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
log (bool) : Sets if the matrix should be in log (true) or linear (false) space.
Returns: FIM for the distribution
fisher(ln, x, log)
Computes the Fisher Information Matrix for a linear X coordinate, not the distribution itself.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
log (bool) : Sets if the matrix should be in log (true) or linear (false) space.
Returns: FIM for the linear X coordinate
confidence_interval(ln, x, sample_size, confidence, precise)
Estimates a confidence interval for a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
sample_size (float) : Number of observations.
confidence (float) : Confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: CI for the linear X coordinate
- - -
CURVE FITTING
An overloaded function that helps transform values between spaces. The primary function uses quantiles, and the overloads wrap the primary function to make working with LogNorm more direct.
fit(x, a, b)
Transforms X coordinate between spaces A and B.
Parameters:
x (float) : Linear X coordinate from space A .
a (LogNorm | Quantile | array) : LogNorm, Quantile, or float array.
b (LogNorm | Quantile | array) : LogNorm, Quantile, or float array.
Returns: Adjusted X coordinate
- - -
EXPORTED HELPERS
Small utilities to simplify extensibility.
z_score(ln, x)
Converts a linear X coordinate into a Z Score.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
Returns: Between -∞ and +∞
x_coord(ln, z)
Converts a Z Score into a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
z (float) : Standard normal Z Score.
Returns: Between 0 and ∞
iget(arr, index)
Gets an interpolated value of a pseudo -element (fictional element between real array elements). Useful for quantile mapping.
Parameters:
arr (array) : Float array object.
index (float) : Index of the pseudo element.
Returns: Interpolated value of the arrays pseudo element.