RBLR - GSK Vizag AP India
This indicator identifies the Opening Range High (ORH) and Low (ORL) based on the first 15 minutes of the Indian equity market session (9:15 AM to 9:30 AM IST). It draws horizontal lines extending these levels until market close (3:30 PM IST) and generates visual signals for price breakouts above ORH or below ORL, as well as reversals back into the range.
Key features:
- **Range Calculation**: Captures the high and low during the opening period using real-time bar data.
- **Line Extension**: Lines are dynamically extended bar-by-bar within the session for clear visualization.
- **Signals**:
- Green triangle up: Crossover above ORH (potential bullish breakout).
- Red triangle down: Crossunder below ORL (potential bearish breakout).
- Yellow labels: Reversals from breakout levels back into the range.
- **Labels**: "RAM BAAN" marks the ORH (inspired by a precise arrow from the Ramayana), and "LAKSHMAN REKHA" marks the ORL (inspired by a protective boundary line from the same epic).
- **Customization**: Toggle signals on/off and select line styles (Dotted, Dashed, Solid, or Smoothed, with transparency for Smoothed).
The state-tracking logic prevents redundant signals by monitoring if price remains outside the range after a breakout. This helps users observe range-bound behavior or directional moves without built-in alerts. This indicator is particularly useful for day trading on longer intraday timeframes (e.g., 15-minute charts) to identify session-wide trends and avoid noise in shorter frames. For best results, apply on intraday timeframes on NSE/BSE symbols. Note that lines and labels are limited to the script's max counts to avoid performance issues on long histories.
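For readers who want to experiment with the same idea, a minimal sketch of the opening-range capture described above is shown below. It is not the published code; the session string, timezone, and plot styling are assumptions.
//@version=5
indicator("Opening Range sketch", overlay=true)
// Hypothetical re-creation of the opening-range logic described above, not the author's script.
inOpenRange = not na(time(timeframe.period, "0915-0930", "Asia/Kolkata")) // assumed session window
var float orh = na
var float orl = na
if inOpenRange and not inOpenRange[1] // first bar of the window: reset the range
    orh := high
    orl := low
else if inOpenRange // extend the range while the window is open
    orh := math.max(orh, high)
    orl := math.min(orl, low)
plot(orh, "ORH", color.new(color.green, 0), 1, plot.style_linebr)
plot(orl, "ORL", color.new(color.red, 0), 1, plot.style_linebr)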
**Disclaimer**: This indicator is for educational and informational purposes only and does not constitute financial, investment, or trading advice. Trading in financial markets involves significant risk of loss and is not suitable for all investors. Past performance is not indicative of future results. Users should conduct their own research, consider their financial situation, and consult with qualified professionals before making any investment decisions. The author and TradingView assume no liability for any losses incurred from its use.
Liquidity Stress Index (SOFR - IORB)
How to use:
> +10 bps — TIGHT
−5 to +10 bps — NEUTRAL
< −5 bps — LOOSE
PG ATM Strike Line with Call & Put Premiums
Pine Script: ATM Strike Line with Call & Put Premiums (Simplified)
This Pine Script for TradingView displays the At-The-Money (ATM) strike price, futures price, call/put premiums (time value), and two ratios—Premium Ratio (PR) and Volume Ratio (VR)—for a user-selected underlying asset (e.g., NIFTY, BANKNIFTY, or stocks). It helps traders gauge near-term market direction using options data.
How the Script Works
Inputs:
Expiry: Select year (e.g., '25), month (01–12), day (01–31) for option expiry (e.g., '251028').
Timeframe: Choose data timeframe (e.g., Daily, 15-min).
Symbol: Auto-detects chart symbol or select from Indian indices/stocks.
Strike: Auto-ATM (based on futures) or manual strike input.
Interval: Auto (e.g., 100 for NIFTY) or custom strike interval.
Colors: Customizable for ATM line, labels (Futures Price, CPR, PPR, VR, PR).
Calculations (a code sketch of these formulas follows the list):
Futures Price (FP): Fetches front-month futures price (e.g., NSE:NIFTY1!).
ATM Strike: Rounds futures price to nearest strike interval.
Option Data: Retrieves Last Traded Price (LTP) and volume for ATM call/put options (e.g., NSE:NIFTY251028C24200).
Call Premium (CPR): Call LTP minus intrinsic value (max(0, FP - Strike)).
Put Premium (PPR): Put LTP minus intrinsic value (max(0, Strike - FP)).
Premium Ratio (PR): PPR / CPR.
Volume Ratio (VR): Put Volume / Call Volume.
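A compact, hypothetical sketch of the premium math above. The futures and call symbols are the examples cited in this description; the put symbol and the 100-point strike rounding are assumptions, and the Volume Ratio follows the same pattern using volume instead of close.
//@version=5
indicator("ATM premium sketch")
fp      = request.security("NSE:NIFTY1!", timeframe.period, close)            // futures price
strike  = math.round(fp / 100) * 100                                          // round to the strike interval
callLtp = request.security("NSE:NIFTY251028C24200", timeframe.period, close)  // ATM call LTP
putLtp  = request.security("NSE:NIFTY251028P24200", timeframe.period, close)  // ATM put LTP (assumed symbol)
cpr = callLtp - math.max(0.0, fp - strike)   // Call Premium = LTP minus intrinsic value
ppr = putLtp  - math.max(0.0, strike - fp)   // Put Premium = LTP minus intrinsic value
pr  = cpr != 0 ? ppr / cpr : na              // Premium Ratio
plot(pr, "PR")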
Visuals:
Draws ATM strike line on chart.
Displays labels: FP (futures price), CPR (call premium), PPR (put premium), VR, PR.
VR/PR labels: Red (≥ 1.25, bearish), Green (≤ 0.75, bullish), Gray (0.75–1.25, neutral).
Updates on last confirmed bar to avoid redraws.
Using PR and VR for Market Direction
Premium Ratio (PR):
PR ≥ 1.25 (Red): High put premiums suggest bearish sentiment (expect price drop).
PR ≤ 0.75 (Green): High call premiums suggest bullish sentiment (expect price rise).
0.75 < PR < 1.25 (Gray): Neutral, no clear direction.
Use: High PR favors bearish trades (e.g., buy puts); low PR favors bullish trades (e.g., buy calls).
Volume Ratio (VR):
VR ≥ 1.25 (Red): High put volume indicates bearish activity.
VR ≤ 0.75 (Green): High call volume indicates bullish activity.
0.75 < VR < 1.25 (Gray): Neutral trading activity.
Use: High VR suggests bearish moves; low VR suggests bullish moves.
Combined Signals:
High PR & VR: Strong bearish signal; consider put buying or call selling.
Low PR & VR: Strong bullish signal; consider call buying or put selling.
Mixed/Neutral: Use price action or support/resistance for confirmation.
Tips:
Combine with technical analysis (e.g., trends, levels).
Match timeframe to trading horizon (e.g., 15-min for intraday).
Monitor FP for context; check volatility or news for accuracy.
Example
NIFTY: FP = 24,237.50, ATM = 24,200, CPR = 120.25, PPR = 180.50, PR = 1.50 (Red), VR = 1.30 (Red).
Insight: High PR/VR suggests bearish bias; consider bearish trades if price nears resistance.
Action: Buy puts or exit longs, confirm with price action.
Conclusion: This script provides a concise tool for options traders, showing ATM strike, premiums, and PR/VR ratios. High PR/VR (≥ 1.25) signals bearish sentiment, low PR/VR (≤ 0.75) signals bullish sentiment, and neutral (0.75–1.25) suggests indecision. Combine with technical analysis for robust trading decisions in the Indian options market.
LogNormal
Library "LogNormal"
A collection of functions used to model skewed distributions as log-normal.
Prices are commonly modeled using log-normal distributions (e.g., Black-Scholes) because they exhibit multiplicative changes with long tails: skewed exponential growth and high variance. This approach is particularly useful for understanding price behavior and estimating risk, assuming continuously compounded returns are normally distributed.
Because log space analysis is not as direct as using math.log(price) , this library extends the Error Functions library to make working with log-normally distributed data as simple as possible.
- - -
QUICK START
Import library into your project
Initialize model with a mean and standard deviation
Pass model params between methods to compute various properties
var LogNorm model = LN.init(arr.avg(), arr.stdev()) // Assumes the library is imported as LN
var mode = model.mode()
Outputs from the model can be adjusted to better fit the data.
var Quantile data = arr.quantiles()
var more_accurate_mode = mode.fit(model, data) // Fits value from model to data
Inputs to the model can also be adjusted to better fit the data.
datum = 123.45
model_equivalent_datum = datum.fit(data, model) // Fits value from data to the model
area_from_zero_to_datum = model.cdf(model_equivalent_datum)
- - -
TYPES
There are two requisite UDTs: LogNorm and Quantile . They are used to pass parameters between functions and are set automatically (see Type Management ).
LogNorm
Object for log space parameters and linear space quantiles .
Fields:
mu (float) : Log space mu ( µ ).
sigma (float) : Log space sigma ( σ ).
variance (float) : Log space variance ( σ² ).
quantiles (Quantile) : Linear space quantiles.
Quantile
Object for linear quantiles, most similar to a seven-number summary .
Fields:
Q0 (float) : Smallest Value
LW (float) : Lower Whisker Endpoint
LC (float) : Lower Whisker Crosshatch
Q1 (float) : First Quartile
Q2 (float) : Second Quartile
Q3 (float) : Third Quartile
UC (float) : Upper Whisker Crosshatch
UW (float) : Upper Whisker Endpoint
Q4 (float) : Largest Value
IQR (float) : Interquartile Range
MH (float) : Midhinge
TM (float) : Trimean
MR (float) : Mid-Range
- - -
TYPE MANAGEMENT
These functions reliably initialize and update the UDTs. Because parameterization is interdependent, avoid setting the LogNorm and Quantile fields directly .
init(mean, stdev, variance)
Initializes a LogNorm object.
Parameters:
mean (float) : Linearly measured mean.
stdev (float) : Linearly measured standard deviation.
variance (float) : Linearly measured variance.
Returns: LogNorm Object
set(ln, mean, stdev, variance)
Transforms linear measurements into log space parameters for a LogNorm object.
Parameters:
ln (LogNorm) : Object containing log space parameters.
mean (float) : Linearly measured mean.
stdev (float) : Linearly measured standard deviation.
variance (float) : Linearly measured variance.
Returns: LogNorm Object
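For reference, the standard moment-matching relations that map a linear mean and variance to log space are:
σ² = ln(1 + variance / mean²)
µ = ln(mean) − σ² / 2
The library's set() presumably performs an equivalent conversion, though its exact implementation is not documented here.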
quantiles(arr)
Gets empirical quantiles from an array of floats.
Parameters:
arr (array) : Float array object.
Returns: Quantile Object
- - -
DESCRIPTIVE STATISTICS
Using only the initialized LogNorm parameters, these functions compute a model's central tendency and standardized moments.
mean(ln)
Computes the linear mean from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
median(ln)
Computes the linear median from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
mode(ln)
Computes the linear mode from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
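For orientation, the textbook closed forms for a log-normal distribution's central tendencies (which these three functions should reproduce) are:
mean = exp(µ + σ²/2)
median = exp(µ)
mode = exp(µ − σ²)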
variance(ln)
Computes the linear variance from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
skewness(ln)
Computes the linear skewness from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
kurtosis(ln, excess)
Computes the linear kurtosis from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
excess (bool) : Excess Kurtosis (true) or regular Kurtosis (false).
Returns: Between 0 and ∞
hyper_skewness(ln)
Computes the linear hyper skewness from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
hyper_kurtosis(ln, excess)
Computes the linear hyper kurtosis from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
excess (bool) : Excess Hyper Kurtosis (true) or regular Hyper Kurtosis (false).
Returns: Between 0 and ∞
- - -
DISTRIBUTION FUNCTIONS
These wrap Gaussian functions to make working with model space more direct. Because they are contained within a log-normal library, they describe estimations relative to a log-normal curve, even though they fundamentally measure a Gaussian curve.
pdf(ln, x, empirical_quantiles)
A Probability Density Function estimates the probability density . For clarity, density is not a probability .
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate for which a density will be estimated.
empirical_quantiles (Quantile) : Quantiles as observed in the data (optional).
Returns: Between 0 and ∞
cdf(ln, x, precise)
A Cumulative Distribution Function estimates the area under a Log-Normal curve between Zero and a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ccdf(ln, x, precise)
A Complementary Cumulative Distribution Function estimates the area under a Log-Normal curve between a linear X coordinate and Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
cdfinv(ln, a, precise)
An Inverse Cumulative Distribution Function reverses the Log-Normal cdf() by estimating the linear X coordinate from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
ccdfinv(ln, a, precise)
An Inverse Complementary Cumulative Distribution Function reverses the Log-Normal ccdf() by estimating the linear X coordinate from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
cdfab(ln, x1, x2, precise)
A Cumulative Distribution Function from A to B estimates the area under a Log-Normal curve between two linear X coordinates (A and B).
Parameters:
ln (LogNorm) : Object of log space parameters.
x1 (float) : First linear X coordinate .
x2 (float) : Second linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ott(ln, x, precise)
A One-Tailed Test transforms a linear X coordinate into an absolute Z Score before estimating the area under a Log-Normal curve between Z and Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 0.5
ttt(ln, x, precise)
A Two-Tailed Test transforms a linear X coordinate into symmetrical ± Z Scores before estimating the area under a Log-Normal curve from Zero to -Z, and +Z to Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ottinv(ln, a, precise)
An Inverse One-Tailed Test reverses the Log-Normal ott() by estimating a linear X coordinate for the right tail from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Half a normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
tttinv(ln, a, precise)
An Inverse Two-Tailed Test reverses the Log-Normal ttt() by estimating two linear X coordinates from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Linear space tuple :
- - -
UNCERTAINTY
Model-based measures of uncertainty, information, and risk.
sterr(sample_size, fisher_info)
The standard error of a sample statistic.
Parameters:
sample_size (float) : Number of observations.
fisher_info (float) : Fisher information.
Returns: Between 0 and ∞
surprisal(p, base)
Quantifies the information content of a single event.
Parameters:
p (float) : Probability of the event .
base (float) : Logarithmic base (optional).
Returns: Between 0 and ∞
entropy(ln, base)
Computes the differential entropy (average surprisal).
Parameters:
ln (LogNorm) : Object of log space parameters.
base (float) : Logarithmic base (optional).
Returns: Between 0 and ∞
perplexity(ln, base)
Computes the average number of distinguishable outcomes from the entropy.
Parameters:
ln (LogNorm)
base (float) : Logarithmic base used for Entropy (optional).
Returns: Between 0 and ∞
value_at_risk(ln, p, precise)
Estimates a risk threshold under normal market conditions for a given confidence level.
Parameters:
ln (LogNorm) : Object of log space parameters.
p (float) : Probability threshold, aka. the confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
value_at_risk_inv(ln, value_at_risk, precise)
Reverses the value_at_risk() by estimating the confidence level from the risk threshold.
Parameters:
ln (LogNorm) : Object of log space parameters.
value_at_risk (float) : Value at Risk.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
conditional_value_at_risk(ln, p, precise)
Estimates the average loss beyond a confidence level, aka. expected shortfall.
Parameters:
ln (LogNorm) : Object of log space parameters.
p (float) : Probability threshold, aka. the confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
conditional_value_at_risk_inv(ln, conditional_value_at_risk, precise)
Reverses the conditional_value_at_risk() by estimating the confidence level of an average loss.
Parameters:
ln (LogNorm) : Object of log space parameters.
conditional_value_at_risk (float) : Conditional Value at Risk.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
partial_expectation(ln, x, precise)
Estimates the partial expectation of a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and µ
partial_expectation_inv(ln, partial_expectation, precise)
Reverses the partial_expectation() by estimating a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
partial_expectation (float) : Partial Expectation .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
conditional_expectation(ln, x, precise)
Estimates the conditional expectation of a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between X and ∞
conditional_expectation_inv(ln, conditional_expectation, precise)
Reverses the conditional_expectation by estimating a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
conditional_expectation (float) : Conditional Expectation .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
fisher(ln, log)
Computes the Fisher Information Matrix for the distribution, not a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
log (bool) : Sets if the matrix should be in log (true) or linear (false) space.
Returns: FIM for the distribution
fisher(ln, x, log)
Computes the Fisher Information Matrix for a linear X coordinate, not the distribution itself.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
log (bool) : Sets if the matrix should be in log (true) or linear (false) space.
Returns: FIM for the linear X coordinate
confidence_interval(ln, x, sample_size, confidence, precise)
Estimates a confidence interval for a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
sample_size (float) : Number of observations.
confidence (float) : Confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: CI for the linear X coordinate
- - -
CURVE FITTING
An overloaded function that helps transform values between spaces. The primary function uses quantiles, and the overloads wrap the primary function to make working with LogNorm more direct.
fit(x, a, b)
Transforms X coordinate between spaces A and B.
Parameters:
x (float) : Linear X coordinate from space A .
a (LogNorm | Quantile | array) : LogNorm, Quantile, or float array.
b (LogNorm | Quantile | array) : LogNorm, Quantile, or float array.
Returns: Adjusted X coordinate
- - -
EXPORTED HELPERS
Small utilities to simplify extensibility.
z_score(ln, x)
Converts a linear X coordinate into a Z Score.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
Returns: Between -∞ and +∞
x_coord(ln, z)
Converts a Z Score into a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
z (float) : Standard normal Z Score.
Returns: Between 0 and ∞
iget(arr, index)
Gets an interpolated value of a pseudo-element (fictional element between real array elements). Useful for quantile mapping.
Parameters:
arr (array) : Float array object.
index (float) : Index of the pseudo element.
Returns: Interpolated value of the array's pseudo-element.
Kalman VWAP Filter [BackQuant]
Kalman VWAP Filter
A precision-engineered price estimator that fuses Kalman filtering with the Volume-Weighted Average Price (VWAP) to create a smooth, adaptive representation of fair value. This hybrid model intelligently balances responsiveness and stability, tracking trend shifts with minimal noise while maintaining a statistically grounded link to volume distribution.
If you would like to see my original Kalman Filter, please find it here:
Concept overview
The Kalman VWAP Filter is built on two core ideas from quantitative finance and control theory:
Kalman filtering — a recursive Bayesian estimator used to infer the true underlying state of a noisy system (in this case, fair price).
VWAP anchoring — a dynamic reference that weights price by traded volume, representing where the majority of transactions have occurred.
By merging these concepts, the filter produces a line that behaves like a "smart moving average": smooth when noise is high, fast when markets trend, and self-adjusting based on both market structure and user-defined noise parameters.
How it works
Measurement blend : Combines the chosen Price Source (e.g., close or hlc3) with either a Session VWAP or a Rolling VWAP baseline. The VWAP Weight input controls how much the filter trusts traded volume versus price movement.
Kalman recursion : Each bar updates an internal "state estimate" using the Kalman gain, which determines how much to trust new observations vs. the prior state.
Noise parameters :
Process Noise controls agility — higher values make the filter more responsive but also more volatile.
Measurement Noise controls smoothness — higher values make it steadier but slower to adapt.
Filter order (N) : Defines how many parallel state estimates are used. Larger orders yield smoother output by layering multiple one-dimensional Kalman passes.
Final output : A refined price trajectory that captures VWAP-adjusted fair value while dynamically adjusting to real-time volatility and order flow.
Why this matters
Most smoothing techniques (EMA, SMA, Hull) trade off lag for smoothness. Kalman filtering, however, adaptively rebalances that tradeoff each bar using probabilistic weighting, allowing it to follow market state changes more efficiently. Anchoring it to VWAP integrates microstructure context — capturing where liquidity truly lies rather than only where price moves.
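To make the recursion above concrete, here is a minimal one-dimensional sketch blending price with the session VWAP. It is not BackQuant's code; the noise defaults, the VWAP weight, and the use of the built-in ta.vwap as the anchor are assumptions.
//@version=5
indicator("Kalman VWAP sketch", overlay=true)
q = input.float(0.05, "Process noise")
r = input.float(3.0,  "Measurement noise")
w = input.float(0.3,  "VWAP weight")
z = (1.0 - w) * close + w * ta.vwap      // blended observation (price vs. session VWAP)
var float est = na
var float p   = 1.0
est := na(est) ? z : est
k = (p + q) / (p + q + r)                // Kalman gain
est := est + k * (z - est)               // state update toward the observation
p := (1.0 - k) * (p + q)                 // error covariance update
plot(est, "Kalman VWAP", color.new(color.orange, 0), 2)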
Use cases
Trend tracking : Color-coded candle painting highlights shifts in slope direction, revealing early trend transitions.
Fair value mapping : The line represents a continuously updated equilibrium price between raw price action and VWAP flow.
Adaptive moving average replacement : Outperforms static MAs in variable volatility regimes by self-adjusting smoothness.
Execution & reversion logic : When price diverges from the Kalman VWAP, it may indicate short-term imbalance or overextension relative to volume-adjusted fair value.
Cross-signal framework : Use with standard VWAP or other filters to identify convergence or divergence between liquidity-weighted and state-estimated prices.
Parameter guidance
Process Noise : 0.01–0.05 for swing traders, 0.1–0.2 for intraday scalping.
Measurement Noise : 2–5 for normal use, 8+ for very smooth tracking.
VWAP Weight : 0.2–0.4 balances both price and VWAP influence; 1.0 locks output directly to VWAP dynamics.
Filter Order (N) : 3–5 for reactive short-term filters; 8–10 for smoother institutional-style baselines.
Interpretation
When price > Kalman VWAP and slope is positive → bullish pressure; buyers dominate above fair value.
When price < Kalman VWAP and slope is negative → bearish pressure; sellers dominate below fair value.
Convergence of price and Kalman VWAP often signals equilibrium; strong divergence suggests imbalance.
Crosses between Kalman VWAP and the base VWAP can hint at shifts in short-term vs. long-term liquidity control.
Summary
The Kalman VWAP Filter blends statistical estimation with market microstructure awareness, offering a refined alternative to static smoothing indicators. It adapts in real time to volatility and order flow, helping traders visualize balance, transition, and momentum through a lens of probabilistic fair value rather than simple price averaging.
Trading Toolkit - Comprehensive Analysis
Trading Toolkit – Comprehensive Analysis
A unified trading analysis toolkit with four sections:
📊 Company Info
Fundamentals, market cap, sector, and earnings countdown.
📅 Performance
Date‑range analysis with key metrics.
🎯 Market Sentiment
CNN‑style Fear & Greed Index (7 components) + 150‑SMA positioning.
🛡️ Risk Levels
ATR/MAD‑based stop‑loss and take‑profit calculations.
Key Features
CNN‑style Fear & Greed approximation using:
Momentum: S&P 500 vs 125‑DMA
Price Strength: NYSE 52‑week highs vs lows
Market Breadth: McClellan Volume Summation (Up/Down volume)
Put/Call Ratio: 5‑day average (inverted)
Volatility: VIX vs 50‑DMA (inverted)
Safe‑Haven Demand: 20‑day SPY–IEF return spread
Junk‑Bond Demand: HY vs IG credit spread (inverted)
Normalization: z‑score → percentile (0–100) with ±3 clipping.
CNN‑aligned thresholds:
Extreme Fear: 0–24 | Fear: 25–44 | Neutral: 45–54 | Greed: 55–74 | Extreme Greed: 75+.
Risk tools: ATR & MAD volatility measures with configurable multipliers.
Flexible layout: vertical or side‑by‑side columns.
Data Sources
S&P 500: CBOE:SPX or AMEX:SPY
NYSE: INDEX:HIGN, INDEX:LOWN, USI:UVOL, USI:DVOL
Options: USI:PCC (Total PCR), fallback INDEX:CPCS (Equity PCR)
Volatility: CBOE:VIX
Treasuries: NASDAQ:IEF
Credit Spreads: FRED:BAMLH0A0HYM2, FRED:BAMLC0A0CM
Risk Management
ATR risk bands: 🟢 ≤3%, 🟡 3–6%, ⚪ 6–10%, 🟠 10–15%, 🔴 >15%
MAD‑based stop‑loss and take‑profit calculations.
Author: Daniel Dahan
(AI Generated, Merged & enhanced version with CNN‑style Fear & Greed)
Trading Lot & Margin Calculator
# 💹 Trading Lot & Margin Calculator - Professional Risk Management Tool
## 🎯 Overview
A comprehensive, all-in-one calculator dashboard that helps traders determine optimal position sizes, calculate margin requirements, and manage risk effectively across multiple asset classes. This indicator displays directly on your chart as a customizable table, providing real-time calculations based on current market prices.
## ✨ Key Features
### 📊 Three Powerful Calculation Modes:
**1. Calculate Lot Size (Risk-Based Position Sizing)**
- Input your risk percentage and stop loss in pips
- Automatically calculates the optimal lot size for your risk tolerance
- Respects margin limitations (configurable margin % cap)
- Ensures positions don't exceed minimum lot size (0.01)
- Perfect for risk management and proper position sizing
**2. Calculate Margin Cost**
- Input desired lot size
- See exactly how much margin is required
- Shows percentage of deposit used
- Displays free margin remaining
- Warns when insufficient funds
**3. Margin to Lots**
- Specify a fixed margin amount you want to use
- Calculator shows how many lots/contracts you can buy
- Ideal for traders with fixed margin budgets
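As a rough illustration of the risk-based mode above, the sketch below shows the core arithmetic for a standard forex lot. It assumes the quote currency equals the account currency and a 100,000-unit contract; real broker specifications, currency conversion, and the margin-cap logic are omitted.
//@version=5
indicator("Lot size sketch", overlay=true)
deposit  = input.float(10000.0, "Account deposit")
leverage = input.float(100.0,   "Leverage")
riskPct  = input.float(1.0,     "Risk %")
slPips   = input.float(20.0,    "Stop loss (pips)")
contract = 100000.0                          // standard forex lot (assumption)
pipSize  = syminfo.mintick * 10              // rough pip size for most FX pairs (assumption)
pipValuePerLot = contract * pipSize          // value of one pip for 1.0 lot, in quote currency
lots   = deposit * riskPct / 100 / (slPips * pipValuePerLot)
margin = lots * contract * close / leverage
if barstate.islast
    label.new(bar_index, high, "Lots: " + str.tostring(lots, "#.##") + "\nMargin: " + str.tostring(margin, "#.##"))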
## 🤖 Auto-Detection of Instruments
The calculator **automatically detects** what you're trading and adjusts calculations accordingly:
### ✅ Fully Supported:
- **💱 Forex Pairs** - All majors, minors, exotics (EURUSD, GBPJPY, etc.)
- Standard lot: 100,000 units
- JPY pairs: 0.01 pip size, others: 0.0001
- **🛢️ Commodities** - Gold, Silver, Oil
- XAUUSD (Gold): 100 oz per lot
- XAGUSD (Silver): 5,000 oz per lot
- Oil (WTI/Brent): 1,000 barrels per lot
- **📈 Indices** - US500, NAS100, US30, DAX, etc.
- Correct contract sizes per point
- **📊 Stocks** - All individual stocks
- 1 lot = 1 share
- Direct share calculations
### ⚠️ Known Limitation:
- **₿ Crypto calculations may not work properly** on all crypto pairs. Use manual contract size if needed.
## 📋 Dashboard Information Displayed:
- 🎯 Optimal/Requested Lot Size
- 💰 Margin Required
- 📊 Margin % of Deposit
- 💵 Free Margin Remaining
- 💎 Position Value
- 📈 Pip/Point Value
- ⚠️ Safety Warnings (insufficient funds, high risk, etc.)
- 🔍 Detected Instrument Type
- 📦 Contract Size
## ⚙️ Customizable Settings:
**Account Settings:**
- Account Deposit
- Leverage (1:1 to 1:1000)
- Max Margin % of Deposit (default 5% for safety)
**Risk Management:**
- Risk Percentage (for lot size calculation)
- Stop Loss in Pips
- Lot Amount (for margin cost calculation)
- Margin to Use (for margin-to-lots calculation)
**Display Options:**
- Show/Hide Dashboard
- Position: Top/Middle/Bottom, Left/Right
- Auto-detect instrument ON/OFF
- Manual contract size override
## 🎨 Professional Design
- Clean, modern table interface
- Color-coded warnings (red = danger, yellow = caution, green = safe)
- Large, readable text
- Minimal screen space usage
- Non-intrusive overlay
## 💡 Use Cases:
1. **Day Traders** - Quick position sizing based on account risk
2. **Swing Traders** - Calculate optimal positions for longer-term setups
3. **Risk Managers** - Ensure positions stay within margin limits
4. **Beginners** - Learn proper position sizing and risk management
5. **Multi-Asset Traders** - Seamlessly switch between forex, commodities, indices, and stocks
## ⚠️ Important Notes:
- ✅ Works on all timeframes
- ✅ Updates in real-time with price changes
- ✅ Minimum lot size enforced (0.01)
- ✅ Margin calculations use current chart price
- ⚠️ **Crypto calculations may be inaccurate** - verify with your broker
- 📌 Always verify calculations with your broker's specifications
- 📌 Contract sizes may vary by broker
## 🚀 How to Use:
1. Add indicator to any chart
2. Click settings ⚙️ icon
3. Enter your account details (deposit, leverage)
4. Choose calculation mode
5. Input your parameters
6. View optimal lot size and margin requirements on dashboard
## 📈 Perfect For:
- Forex traders managing multiple currency pairs
- Commodity traders (Gold, Silver, Oil)
- Index traders (S&P 500, NASDAQ, etc.)
- Stock traders
- Anyone who wants professional risk management
## 🛡️ Risk Management Features:
- Configurable margin % cap prevents over-leveraging
- Risk-based position sizing protects your account
- Warnings for high risk, insufficient funds, margin limitations
- Prevents positions below minimum lot size
---
**Trade smarter, not harder. Calculate before you trade!** 📊💪
---
## Version Notes:
- Pine Script v6
- Overlay mode for chart display
- No external dependencies
- Lightweight and fast
**Disclaimer:** This calculator is for educational and informational purposes only. Always verify calculations with your broker and trade at your own risk. Past performance does not guarantee future results.
---
Broad Market for Crypto + index
# Broad Market Indicator for Crypto
## Overview
The Broad Market Indicator for Crypto helps traders assess the strength and divergence of individual cryptocurrency assets relative to the overall market. By comparing price deviations across multiple assets, this indicator reveals whether a specific coin is moving in sync with or diverging from the broader crypto market trend.
## How It Works
This indicator calculates percentage deviations from simple moving averages (SMA) for both individual assets and an equal-weighted market index. The core methodology:
1. **Deviation Calculation**: For each asset, the indicator measures how far the current price has moved from its SMA over a specified lookback period (default: 24 hours). The deviation is expressed as a percentage: `(Current Price - SMA) / SMA × 100`
2. **Market Index Construction**: An equal-weighted index is built from selected cryptocurrencies (up to 15 assets). The default composition includes major crypto assets: BTC, ETH, BNB, SOL, XRP, ADA, AVAX, LINK, DOGE, and TRX.
3. **Comparative Analysis**: The indicator displays both the current instrument's deviation and the market index deviation on the same panel, making it easy to spot relative strength or weakness.
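A condensed sketch of the methodology above (two assets instead of fifteen, using symbols from the default list further below; not the published source):

```
//@version=5
indicator("Broad market deviation sketch")
len = input.int(24, "Lookback Period")
devPct(px, ma) => (px - ma) / ma * 100
[btcC, btcM] = request.security("BINANCE:BTCUSDT", timeframe.period, [close, ta.sma(close, len)])
[ethC, ethM] = request.security("BINANCE:ETHUSDT", timeframe.period, [close, ta.sma(close, len)])
idx = (devPct(btcC, btcM) + devPct(ethC, ethM)) / 2.0   // equal-weighted index deviation
cur = devPct(close, ta.sma(close, len))                 // current chart symbol deviation
plot(cur, "Current instrument", color.new(color.blue, 0))
plot(idx, "Market index", color.new(color.purple, 0))
hline(0)
```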
## Key Features
- **Customizable Asset Selection**: Choose up to 15 different cryptocurrencies to include in your market index
- **Flexible Configuration**: Toggle individual assets on/off for display and index calculation
- **Current Instrument Tracking**: Automatically plots the deviation of whatever chart you're viewing
- **Visual Clarity**: Color-coded lines for easy differentiation between assets, with the market index shown as a filled area
- **Adjustable Lookback Period**: Modify the SMA period to match your trading timeframe
## How to Use
### Identifying Market Divergences
- When the current instrument deviates significantly above the index, it shows relative strength
- When it deviates below, it indicates relative weakness
- Assets clustering around zero suggest neutral market conditions
### Trend Confirmation
- If both the index and your asset are rising together (positive deviation), it confirms a broad market uptrend
- Divergence between asset and index can signal unique fundamental factors or early trend changes
### Entry/Exit Signals
- Extreme deviations from the index may indicate overbought/oversold conditions relative to the market
- Convergence back toward the index line can signal mean reversion opportunities
## Settings
- **Lookback Period**: Adjust the SMA calculation period (default: 24 hours)
- **Asset Configuration**: Select which cryptocurrencies to monitor and include in the index
- **Display Options**: Show/hide individual assets, current instrument, and market index
- **Color Customization**: Personalize colors for better visual analysis
## Best Practices
- Use on higher timeframes (4H, Daily) for more reliable signals
- Combine with volume analysis for confirmation
- Consider fundamental news when assets show extreme divergence
- Adjust the asset basket to match your trading focus (DeFi, L1s, memecoins, etc.)
## Technical Notes
- The indicator uses `request.security()` to fetch data from multiple symbols
- Deviations are calculated independently for each asset
- The zero line represents perfect alignment with the moving average
- Index calculation automatically adjusts based on active assets
## Default Assets
1. BTC (Bitcoin) - BINANCE:BTCUSDT
2. ETH (Ethereum) - BINANCE:ETHUSDT
3. BNB (Binance Coin) - BINANCE:BNBUSDT
4. SOL (Solana) - BINANCE:SOLUSDT
5. XRP (Ripple) - BINANCE:XRPUSDT
6. ADA (Cardano) - BINANCE:ADAUSDT
7. AVAX (Avalanche) - BINANCE:AVAXUSDT
8. LINK (Chainlink) - BINANCE:LINKUSDT
9. DOGE (Dogecoin) - BINANCE:DOGEUSDT
10. TRX (Tron) - BINANCE:TRXUSDT
Additional slots (11-15) are available for custom asset selection.
---
This indicator is particularly useful for cryptocurrency traders seeking to understand market breadth and identify opportunities where specific assets are diverging from overall market sentiment.
天干地支标注(当前视窗范围 + 居中标签)
🇨🇳 Chinese Description
Heavenly Stems & Earthly Branches Marker (Auto-Matching Timeframe)
This indicator automatically calculates the corresponding Heavenly Stems and Earthly Branches (Ganzhi) based on the chart's timeframe (year, month, day, hour, or minute) and displays them above each candlestick.
• Auto-detects the chart timeframe (year/month/day/hour/minute)
• Only draws bars within the current visible window, keeping performance high with no lag
• Customizable display interval of every N bars (default: every bar)
• Supports centered rectangular labels (label.style_label_center) for clear readability
• No need to distinguish dark/light themes; automatically compatible with all chart styles
It can serve as a reference tool combining financial time series with the traditional Chinese calendar (Ganzhi timekeeping), with supporting value in time-cycle research, feng shui, fortune-cycle studies, and Gann-style time analysis.
⸻
🇬🇧 English Description (for international visibility)
Heavenly Stems & Earthly Branches Marker (Auto-Adaptive Version)
This indicator automatically calculates and displays the corresponding Chinese Heavenly Stems and Earthly Branches (Ganzhi) for each candlestick, based on the chart’s timeframe (Year, Month, Day, Hour, or Minute).
• Auto-detects chart timeframe
• Draws only within the current visible window (optimized performance)
• Adjustable display interval (e.g., show every N bars)
• Uses centered label style for clarity
• Compatible with both dark and light themes
Useful for combining Chinese calendar cycles with financial time analysis, time-cycle studies, or Gann-style timing models.
ICOptimizer
Library "ICOptimizer"
Library for IC-based parameter optimization
findOptimalParam(testParams, icValues, currentParam, smoothing)
Find optimal parameter from array of IC values
Parameters:
testParams (array) : Array of parameter values being tested
icValues (array) : Array of IC values for each parameter (same size as testParams)
currentParam (float) : Current parameter value (for smoothing)
smoothing (simple float) : Smoothing factor (0-1, e.g., 0.2 means 20% new, 80% old)
Returns: New parameter value, its IC, and array index
adaptiveParamWithStarvation(opt, testParams, icValues, smoothing, starvationThreshold, starvationJumpSize)
Adaptive parameter selection with starvation handling
Parameters:
opt (ICOptimizer) : ICOptimizer object
testParams (array) : Array of parameter values
icValues (array) : Array of IC values for each parameter
smoothing (simple float) : Normal smoothing factor
starvationThreshold (simple int) : Number of updates before triggering starvation mode
starvationJumpSize (simple float) : Jump size when in starvation (as fraction of range)
Returns: Updated parameter and IC
detectAndAdjustDomination(longCount, shortCount, currentLongLevel, currentShortLevel, dominationRatio, jumpSize, minLevel, maxLevel)
Detect signal imbalance and adjust parameters
Parameters:
longCount (int) : Number of long signals in period
shortCount (int) : Number of short signals in period
currentLongLevel (float) : Current long threshold
currentShortLevel (float) : Current short threshold
dominationRatio (simple int) : Ratio threshold (e.g., 4 = 4:1 imbalance)
jumpSize (simple float) : Size of adjustment
minLevel (simple float) : Minimum allowed level
maxLevel (simple float) : Maximum allowed level
Returns:
calcIC(signals, returns, lookback)
Parameters:
signals (float)
returns (float)
lookback (simple int)
classifyIC(currentIC, icWindow, goodPercentile, badPercentile)
Parameters:
currentIC (float)
icWindow (simple int)
goodPercentile (simple int)
badPercentile (simple int)
evaluateSignal(signal, forwardReturn)
Parameters:
signal (float)
forwardReturn (float)
updateOptimizerState(opt, signal, forwardReturn, currentIC, metaICPeriod)
Parameters:
opt (ICOptimizer)
signal (float)
forwardReturn (float)
currentIC (float)
metaICPeriod (simple int)
calcSuccessRate(successful, total)
Parameters:
successful (int)
total (int)
createICStatsTable(opt, paramName, normalSuccess, normalTotal)
Parameters:
opt (ICOptimizer)
paramName (string)
normalSuccess (int)
normalTotal (int)
initOptimizer(initialParam)
Parameters:
initialParam (float)
ICOptimizer
Fields:
currentParam (series float)
currentIC (series float)
metaIC (series float)
totalSignals (series int)
successfulSignals (series int)
goodICSignals (series int)
goodICSuccess (series int)
nonBadICSignals (series int)
nonBadICSuccess (series int)
goodICThreshold (series float)
badICThreshold (series float)
updateCounter (series int)
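A hypothetical sketch of the information-coefficient idea that calcIC() presumably wraps, shown with the built-in rolling correlation; the signal definition, forward-return horizon, and lookback are illustrative assumptions, not values documented by the library.
//@version=5
indicator("IC sketch")
signal = ta.rsi(close, 14) - 50                 // any directional signal
fwdRet = (close - close[5]) / close[5]          // 5-bar forward return, known 5 bars later
ic = ta.correlation(signal[5], fwdRet, 100)     // correlate the lagged signal with the return it preceded
plot(ic, "IC")
hline(0)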
IC optimiser lib
Library "IC optimiser lib"
Library for IC-based parameter optimization
findOptimalParam(testParams, icValues, currentParam, smoothing)
Find optimal parameter from array of IC values
Parameters:
testParams (array) : Array of parameter values being tested
icValues (array) : Array of IC values for each parameter (same size as testParams)
currentParam (float) : Current parameter value (for smoothing)
smoothing (simple float) : Smoothing factor (0-1, e.g., 0.2 means 20% new, 80% old)
Returns: New parameter value, its IC, and array index
adaptiveParamWithStarvation(opt, testParams, icValues, smoothing, starvationThreshold, starvationJumpSize)
Adaptive parameter selection with starvation handling
Parameters:
opt (ICOptimizer) : ICOptimizer object
testParams (array) : Array of parameter values
icValues (array) : Array of IC values for each parameter
smoothing (simple float) : Normal smoothing factor
starvationThreshold (simple int) : Number of updates before triggering starvation mode
starvationJumpSize (simple float) : Jump size when in starvation (as fraction of range)
Returns: Updated parameter and IC
detectAndAdjustDomination(longCount, shortCount, currentLongLevel, currentShortLevel, dominationRatio, jumpSize, minLevel, maxLevel)
Detect signal imbalance and adjust parameters
Parameters:
longCount (int) : Number of long signals in period
shortCount (int) : Number of short signals in period
currentLongLevel (float) : Current long threshold
currentShortLevel (float) : Current short threshold
dominationRatio (simple int) : Ratio threshold (e.g., 4 = 4:1 imbalance)
jumpSize (simple float) : Size of adjustment
minLevel (simple float) : Minimum allowed level
maxLevel (simple float) : Maximum allowed level
Returns:
calcIC(signals, returns, lookback)
Parameters:
signals (float)
returns (float)
lookback (simple int)
classifyIC(currentIC, icWindow, goodPercentile, badPercentile)
Parameters:
currentIC (float)
icWindow (simple int)
goodPercentile (simple int)
badPercentile (simple int)
evaluateSignal(signal, forwardReturn)
Parameters:
signal (float)
forwardReturn (float)
updateOptimizerState(opt, signal, forwardReturn, currentIC, metaICPeriod)
Parameters:
opt (ICOptimizer)
signal (float)
forwardReturn (float)
currentIC (float)
metaICPeriod (simple int)
calcSuccessRate(successful, total)
Parameters:
successful (int)
total (int)
createICStatsTable(opt, paramName, normalSuccess, normalTotal)
Parameters:
opt (ICOptimizer)
paramName (string)
normalSuccess (int)
normalTotal (int)
initOptimizer(initialParam)
Parameters:
initialParam (float)
ICOptimizer
Fields:
currentParam (series float)
currentIC (series float)
metaIC (series float)
totalSignals (series int)
successfulSignals (series int)
goodICSignals (series int)
goodICSuccess (series int)
nonBadICSignals (series int)
nonBadICSuccess (series int)
goodICThreshold (series float)
badICThreshold (series float)
updateCounter (series int)
ATR %
ATR % Oscillator
A simple and effective Average True Range (ATR) indicator displayed as a percentage of the current price in a separate panel.
FEATURES:
• ATR displayed as percentage of current price for easy cross-asset comparison
• EMA smoothing line using the same period as ATR
• Configurable ATR period (default: 20)
• Clean visualization with zero reference line
HOW IT WORKS:
The indicator calculates ATR and converts it to a percentage: (ATR / Close) × 100
This normalization allows you to:
- Compare volatility across different instruments regardless of price
- Identify high and low volatility periods
- Use the EMA line to spot volatility trends
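A minimal sketch of the calculation described above (the separate-timeframe option mentioned under Parameters below is omitted):
//@version=5
indicator("ATR % sketch")
len = input.int(20, "ATR Period")
atrPct = ta.atr(len) / close * 100        // ATR as a percentage of the current price
plot(atrPct, "ATR %", color.new(color.aqua, 0))
plot(ta.ema(atrPct, len), "EMA", color.new(color.orange, 0))
hline(0)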
PARAMETERS:
ATR Period - The lookback period for ATR calculation (default: 20)
Timeframe - Choose any timeframe for ATR calculation independently from the chart timeframe (default: chart timeframe)
Kelly Wave Position Matrix 20251024 V1 ZENYOUNG
A simple table designed for use when opening a position. It applies the Kelly formula to calculate a more systematic position size based on win rate and risk–reward ratio. At the same time, it displays 1.65× ATR stop-loss levels for both long and short positions as a reference for comparison with existing stop-loss placements.
Additionally, the table back-calculates the corresponding position size based on a 2% total capital loss limit, using the actual loss ratio. It also shows the current wave trend status as a pre-filtering condition.
Overall, this table integrates the core elements of trading — trend (wave confirmation), win rate, risk–reward ratio, and position sizing — making it an effective checklist before entering a trade. Its purpose is to help achieve a probabilistic edge and ensure positive expected value in trading decisions.
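For reference, the table's sizing logic presumably builds on the textbook Kelly fraction and a fixed 2% loss cap; a sketch with assumed names is shown below (the published table may scale or filter these numbers differently).
//@version=5
indicator("Kelly sizing sketch")
// f* = W - (1 - W) / R, where W = win rate and R = reward-to-risk ratio
kellyFraction(w, r) =>
    r > 0 ? w - (1.0 - w) / r : na
// Position value implied by a 2% total-capital loss cap and the trade's actual loss percentage
positionFromLossCap(capital, lossPct) =>
    lossPct > 0 ? 0.02 * capital / lossPct : na
// Example: 45% win rate, 2:1 reward-to-risk -> f* = 0.175 (17.5% of capital at full Kelly)
plot(kellyFraction(0.45, 2.0), "Kelly fraction")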
Statistical Price Deviation Index (MAD/VWMA)
SPDI is a statistical oscillator designed to detect potential price reversal zones by measuring how far price deviates from its typical behavior within a defined rolling window.
Instead of using momentum or moving averages like traditional indicators, SPDI applies robust statistics - a rolling median and Mean Absolute Deviation (MAD) - to calculate a normalized measure of price displacement. This normalization keeps the output bounded (from −1 to +1 by default), producing a stable and consistent oscillator that adapts to changing volatility conditions.
The second line in SPDI uses a Volume-Weighted Moving Average (VWMA) instead of a simple price median. This creates a complementary oscillator showing statistically weighted deviations based on traded volume. When both oscillators align in their extremes, strong confluence reversal signals are generated.
How It Works
For each bar, SPDI calculates the median price of the last N bars (default 100).
It then measures how far the current bar’s midpoint deviates from that rolling median.
The Mean Absolute Deviation (MAD) of those distances defines a “normal” range of fluctuation.
The deviation is normalized and compressed via a tanh mapping, keeping the oscillator in fixed boundaries (−1 to +1).
The same logic is applied to the VWMA line to gauge volume-weighted deviations.
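A simplified sketch of the Price MAD line following the steps above (a single rolling median per bar, manual tanh; the VWMA line and any RMA smoothing follow the same pattern). The exact MAD construction and parameter values are assumptions.
//@version=5
indicator("SPDI sketch")
len = input.int(100, "Lookback length")
mid = (high + low) / 2.0
med = ta.median(mid, len)                       // rolling median of the bar midpoints
mad = 0.0
for i = 0 to len - 1
    mad += math.abs(mid[i] - med) / len         // mean absolute deviation around the current median
dev = mad > 0 ? (mid - med) / mad : 0.0
tanh(x) =>                                      // Pine has no built-in tanh
    e = math.exp(2.0 * x)
    (e - 1.0) / (e + 1.0)
plot(tanh(dev), "Price MAD", color.new(color.blue, 0))
hline(0)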
How to Use
The blue line (Price MAD) represents pure price deviation.
The green line (VWMA Disp) shows the volume-weighted deviation.
Overbought (red) zones indicate statistically extreme upward deviation -> potential short-term overextension.
Oversold (green) zones indicate statistically extreme downward deviation -> potential rebound area.
Confluence signals (both lines hitting the same extreme) often mark strong reversal points.
Settings Tips
Lookback length controls how much historical data defines “normal” behavior. Larger = smoother, smaller = more sensitive.
Smoothing (RMA length) can reduce noise without changing the overall statistical logic.
Output scale can be set to either −1..+1 or 0..100, depending on your visual preference.
Alerts and color fills are fully customizable in the Style tab.
Summary:
SPDI transforms raw price and volume data into a statistically bounded deviation index. When both Price MAD and VWMA Disp reach joint extremes, it highlights probable market turning points - offering traders a clean, data-driven way to spot potential reversals ahead of time.
WorldCup Dashboard + Institutional Sessions
© 2025 NewMeta™ — Educational use only.
# Full, Premium Description
## WorldCup Dashboard + Institutional Sessions
**A trade-ready, intraday framework that combines market structure, real flow, and institutional timing.**
This toolkit fuses **Institutional Sessions** with a **price–volume decision engine** so you can see *who is active*, *where value sits*, and *whether the drive is real*. You get: **CVD/Delta**, volume-weighted **Momentum**, **Aggression** spikes, **FVG (MTF)** with nearest side, **Daily Volume Profile (VAH/POC/VAL)**, **ATR regime**, a **24h position gauge**, classic **candle patterns**, IBH/IBL + **first-hour “true close”** lines, and a **10-vote confluence scoreboard**—all in one view.
---
## What’s inside (and how to trade it)
### 🌍 Institutional Sessions (Sydney • Tokyo • London • New York)
* Session boxes + a highlighted **first hour**.
* Plots the **true close** (first-hour close) as a running line with a label.
**Use:** Many desks anchor risk to this print. Above = bullish bias; below = bearish. **IBH/IBL** breaks during London/NY carry the most signal.
### 📊 CVD / Delta (Flow)
* Net buyer vs seller pressure with smooth trend state.
**Use:** **Rising CVD + acceptance above mid/POC** confirms continuation. Bearish price + rising CVD = caution (possible absorption).
### ⚡ Volume-Weighted Momentum
* Momentum adjusted by participation quality (volume).
**Use:** Momentum>MA and >0 → trend drive is “real”; <0 and falling → distribution risk.
### 🔥 Aggression Detector
* ROC × normalized volume × wick factor to flag **forceful** candles.
**Use:** On spikes, avoid fading blindly—wait for pullbacks into **aligned FVG** or for aggression to cool.
### 🟦🟪 Fair Value Gaps (with MTF)
* Detects up to 3 recent FVGs and marks the **nearest** side to price.
**Use:** Trend pullbacks into **bullish FVG** for longs; bounces into **bearish FVG** for shorts. Optional threshold to filter weak gaps.
### 🧭 24h Gauge (positioning)
* Shows current price across the 24h low⇢high with a mid reference.
**Use:** Above mid and pushing upper third = momentum continuation setups; below mid = sell the rips bias.
### 🧱 Daily Volume Profile (manual per day)
* **VAH / POC / VAL** derived from discretized rows.
**Use:** **POC below** supports longs; **POC above** caps rallies. Fade VAH/VAL in ranges; treat them as break/hold levels in trends.
### 📈 ATR Regime
* **ATR vs ATR-avg** with direction and regime flag (**HIGH / NORMAL / LOW**).
**Use:** HIGH ⇒ give trades room & favor trend following. LOW ⇒ fade edges, scale targets.
### 🕯️ Candle Patterns (contextual, not standalone)
* Engulfings, Morning/Evening Star, 3 Soldiers/Crows, Harami, Hammer/Shooting Star, Double Top/Bottom.
**Use:** Only with session + flow + momentum alignment.
### 🤝 Price–Volume Classification
* Labels each bar as **continuation**, **exhaustion**, **distribution**, or **healthy pullback**.
**Use:** Align continuation reads with trend; treat “Price↑ + Vol↓” as a caution flag.
### 🧪 Confluence Scoreboard & B/S Meter
* Ten elements vote: 🔵 bull, ⚪ neutral, 🟣 bear.
**Use:** Execution filter—take setups when the board’s skew matches your trade direction.
---
## Playbooks (actionable)
**Trend Pullback (Long)**
1. London/NY active, Momentum↑, CVD↑, price above 24h mid & POC.
2. Pullback into **nearest bullish FVG**.
3. Invalidate under FVG low or **true-close** line.
4. Targets: IBH → VAH → 24h high.
**Range Fade (Short)**
1. Asia/quiet regime, **Price↑ + Vol↓** into **VAH**, ATR low.
2. Nearest FVG bearish or scoreboard skew bearish.
3. Invalidate above VAH/IBH.
4. Targets: POC → VAL.
**News/Impulse**
Aggression spike? Don’t chase. Let it pull back into the aligned FVG; require CVD/Momentum agreement before entry.
---
## Alerts (included)
* **Bull/Bear Confluence ≥ 7/10**
* **Intraday Target Achieved** / **Daily Target Achieved**
* **Session True-Close Retests** (Sydney/Tokyo/London/NY)
*(Keep alerts “Once per bar” unless you specifically want intrabar triggers.)*
---
## Setup Tips
* **UTC**: Choose the reference that matches how you track sessions (default UTC+2).
* **Volume threshold**: 2.0× is a strong baseline; raise for noisy alts, lower for majors.
* **CVD smoothing**: 14–24 for scalps; 24–34 for slower markets.
* **ATR lengths**: Keep defaults unless your asset has a persistent regime shift.
---
## Why this framework?
Because **timing (sessions)**, **truth (flow)**, and **location (value/FVG)** together beat any single signal. You get *who is trading*, *how strong the push is*, and *where risk lives*—on one screen—so execution is faster and cleaner.
---
**Disclaimer**: Educational use only. Not financial advice. Markets are risky—backtest and size responsibly.
IIR One-Pole Price Filter [BackQuant]
IIR One-Pole Price Filter
A lightweight, mathematically grounded smoothing filter derived from signal processing theory, designed to denoise price data while maintaining minimal lag. It provides a refined alternative to the classic Exponential Moving Average (EMA) by directly controlling the filter’s responsiveness through three interchangeable alpha modes: EMA-Length , Half-Life , and Cutoff-Period .
Concept overview
An IIR (Infinite Impulse Response) filter is a type of recursive filter that blends current and past input values to produce a smooth, continuous output. The "one-pole" version is its simplest form, consisting of a single recursive feedback loop that exponentially decays older price information. This makes it both memory-efficient and responsive , ideal for traders seeking a precise balance between noise reduction and reaction speed.
Unlike standard moving averages, the IIR filter can be tuned in physically meaningful terms (such as half-life or cutoff frequency) rather than just arbitrary periods. This allows the trader to think about responsiveness in the same way an engineer or physicist would interpret signal smoothing.
Why use it
Filters out market noise without introducing heavy lag like higher-order smoothers.
Adapts to various trading speeds and time horizons by changing how alpha (responsiveness) is parameterized.
Provides consistent and mathematically interpretable control of smoothing, suitable for both discretionary and algorithmic systems.
Can serve as the core component in adaptive strategies, volatility normalization, or trend extraction pipelines.
Alpha Modes Explained
EMA-Length : Classic exponential decay with alpha = 2 / (L + 1). Equivalent to a standard EMA but exposed directly for fine control.
Half-Life : Defines the number of bars it takes for the influence of a price input to decay by half. More intuitive for time-domain analysis.
Cutoff-Period : Inspired by analog filter theory, defines the cutoff frequency (in bars) beyond which price oscillations are heavily attenuated. Lower periods = faster response.
Formula in plain terms
Each bar updates as:
yₜ = yₜ₋₁ + alpha × (priceₜ − yₜ₋₁)
Where alpha is the smoothing coefficient derived from your chosen mode.
Smaller alpha → smoother but slower response.
Larger alpha → faster but noisier response.
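A compact sketch of the recursion with the three alpha modes; the half-life and cutoff conversions shown are the standard textbook forms and may differ in detail from the published script.
//@version=5
indicator("One-pole IIR sketch", overlay=true)
mode = input.string("EMA-Length", "Alpha mode", options=["EMA-Length", "Half-Life", "Cutoff-Period"])
len  = input.float(20.0, "Length / half-life / cutoff period", minval=1.0)
// EMA-Length: 2/(L+1); Half-Life: 1 - 0.5^(1/HL); Cutoff-Period: 1 - exp(-2π/P)
alpha = mode == "EMA-Length" ? 2.0 / (len + 1.0) : mode == "Half-Life" ? 1.0 - math.pow(0.5, 1.0 / len) : 1.0 - math.exp(-2.0 * math.pi / len)
var float y = na
y := na(y) ? close : y + alpha * (close - y)   // yₜ = yₜ₋₁ + alpha × (priceₜ − yₜ₋₁)
plot(y, "IIR filter", color.new(color.teal, 0), 2)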
Practical application
Trend detection : When the filter line rises, momentum is positive; when it falls, momentum is negative.
Signal timing : Use the crossover of the filter vs its previous value (or price) as an entry/exit condition.
Noise suppression : Apply on volatile assets or lower timeframes to remove flicker from raw price data.
Foundation for advanced filters : The one-pole IIR serves as a building block for multi-pole cascades, adaptive smoothers, and spectral filters.
Customization options
Alpha Scale : Multiplies the final alpha to fine-tune aggressiveness without changing the mode’s core math.
Color Painting : Candles can be painted green/red by trend direction for visual clarity.
Line Width & Transparency : Adjust the visual intensity to integrate cleanly with your charting style.
Interpretation tips
A smooth yet reactive line implies optimal tuning — minimal delay with reduced false flips.
A sluggish line suggests alpha is too small (increase responsiveness).
A noisy, twitchy line means alpha is too large (increase smoothing).
Half-life tuning often feels more natural for aligning filter speed with price cycles or bar duration.
Summary
The IIR One-Pole Price Filter is a signal smoother that merges simplicity with mathematical rigor. Whether you’re filtering for entry signals, generating trend overlays, or constructing larger multi-stage systems, this filter delivers stability, clarity, and precision control over noise versus lag, an essential tool for any quantitative or systematic trading approach.
Liquidity Stress Index SOFR - IORB
Liquidity Stress Index (SOFR - IORB)
This indicator tracks the spread between the Secured Overnight Financing Rate (SOFR) and the Interest on Reserve Balances (IORB) set by the Federal Reserve.
A persistently positive spread may indicate funding stress or liquidity shortages in the repo market, as it suggests overnight lending rates exceed the risk-free rate banks earn at the Fed.
Useful for monitoring monetary policy transmission or market/liquidity stress.
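A minimal sketch of how such a spread could be charted; the FRED symbols and daily timeframe are assumptions about the data feed, not details taken from the script.
//@version=5
indicator("SOFR - IORB spread sketch")
sofr = request.security("FRED:SOFR", "D", close)   // Secured Overnight Financing Rate, percent
iorb = request.security("FRED:IORB", "D", close)   // Interest on Reserve Balances, percent
spreadBps = (sofr - iorb) * 100                    // percent difference expressed in basis points
plot(spreadBps, "SOFR − IORB (bps)", color.new(color.red, 0))
hline(10, "Tight threshold")
hline(-5, "Loose threshold")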
Advanced HMM - 3 States Complete
Hidden Markov Model
A consistent challenge for quantitative traders is the frequent behaviour modification of financial markets, often abruptly, due to changing periods of government policy, regulatory environment and other macroeconomic effects. Such periods are known as market regimes. Detecting such changes is a common, albeit difficult, process undertaken by quantitative market participants.
These various regimes lead to adjustments of asset returns via shifts in their means, variances, autocorrelation and covariances. This impacts the effectiveness of time series methods that rely on stationarity. In particular it can lead to dynamically-varying correlation, excess kurtosis ("fat tails"), heteroskedasticity (volatility clustering) and skewed returns.
There is a clear need to effectively detect these regimes. This aids optimal deployment of quantitative trading strategies and tuning the parameters within them. The modeling task then becomes an attempt to identify when a new regime has occurred, adjusting strategy deployment, risk management and position sizing criteria accordingly.
A principal method for carrying out regime detection is to use a statistical time series technique known as a Hidden Markov Model. These models are well-suited to the task since they involve inference on "hidden" generative processes via "noisy" indirect observations correlated to these processes. In this instance the hidden, or latent, process is the underlying regime state, while the asset returns are the indirect noisy observations that are influenced by these states.
MAIN FEATURES OF THE INDICATOR
The "Advanced HMM - 3 States Complete" indicator is an advanced technical analysis tool that uses Hidden Markov Model (HMM) to identify three main market regimes: BULL, BEAR, and SIDEWAYS.
🎯 KEY FEATURES:
1. HMM-based Trend Detection
3 market states: Bull (0), Bear (1), Sideways (2)
Dynamic probabilities: Calculates probability for each state based on price data
Transition matrix: Models state transitions between regimes
2. Analytical Features
Price volatility: Log returns and standard deviation
Momentum: Rate of Change (ROC)
Volume: Volume ratio vs moving average
Data normalization: Standardizes features to common scale
3. Visual Trading Signals
📍 BUY Signals:
- Green upward triangle below bars
- "LONG" label in green
📍 SELL Signals:
- Red downward triangle above bars
- "SHORT" label in red
📍 EXIT Signals:
- Orange X marks when transitioning to sideways
4. Information Display
Probability table (top-right): Shows percentage for each state
State label: Current regime with probability percentages
Chart background color: Reflects dominant market state
5. Automated Alerts
Alerts when new Bull/Bear market detected
Alerts when market transitions to sideways
Configurable TradingView notifications
6. Customizable Parameters
length: 100 // Lookback period
smoothing_period: 20 // Probability smoothing
volatility_threshold: 0.5 // Volatility threshold
💡 PRACTICAL APPLICATIONS:
Identify primary trends with quantified probabilities
Entry/exit signals based on state transitions
Risk management during sideways markets
Trend confirmation when combined with other indicators
This indicator is particularly useful for market regime analysis and identifying trend transition points using advanced statistical probability methods.
🔧 TECHNICAL IMPLEMENTATION:
Composite observation: Weighted combination of returns (40%), momentum (30%), and volatility (30%)
Gaussian emission probabilities: Different distributions for each state
Manual HMM updates: Avoids matrix computation limitations in Pine Script
Real-time smoothing: EMA applied to state probabilities
The indicator provides institutional-grade regime detection in a visually intuitive package suitable for both discretionary and systematic traders.
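To make the manual forward update concrete, here is a compact, self-contained sketch of that style of computation. The 40/30/30 observation weights follow the description above, but the emission means, transition matrix, momentum length and na-guards are assumptions added for illustration, not the published script's values.
//@version=5
indicator("HMM Forward-Update Sketch (illustrative)")
len        = input.int(100, "Lookback period")
smooth_len = input.int(20, "Probability smoothing")
// Composite observation: weighted mix of return, momentum and volatility (nz guards added for early bars)
ret = math.log(close / close[1])
mom = ta.roc(close, 10) / 100
vol = ta.stdev(ret, len)
obs = nz(0.4 * ret + 0.3 * mom + 0.3 * vol)
// Gaussian emission density
gauss(x, mu, sd) =>
    math.exp(-math.pow(x - mu, 2) / (2.0 * sd * sd)) / (sd * math.sqrt(2.0 * math.pi))
// Assumed emission means: bull above zero, bear below zero, sideways near zero (scaled by current volatility)
sd = math.max(nz(vol), 1e-6)
like_bull = gauss(obs, sd, sd)
like_bear = gauss(obs, -sd, sd)
like_side = gauss(obs, 0.0, sd)
// One manual forward step through a sticky (assumed) transition matrix, then normalisation
var float p_bull = 1.0 / 3
var float p_bear = 1.0 / 3
var float p_side = 1.0 / 3
stay = 0.90
move = (1.0 - stay) / 2.0
prior_bull = stay * p_bull + move * p_bear + move * p_side
prior_bear = move * p_bull + stay * p_bear + move * p_side
prior_side = move * p_bull + move * p_bear + stay * p_side
u_bull = prior_bull * like_bull
u_bear = prior_bear * like_bear
u_side = prior_side * like_side
norm = u_bull + u_bear + u_side
p_bull := u_bull / norm
p_bear := u_bear / norm
p_side := u_side / norm
// Smoothed state probabilities, as in the description
plot(ta.ema(p_bull * 100, smooth_len), "P(Bull) %", color = color.green)
plot(ta.ema(p_bear * 100, smooth_len), "P(Bear) %", color = color.red)
plot(ta.ema(p_side * 100, smooth_len), "P(Sideways) %", color = color.orange)
On each bar the sketch forms the composite observation, scores it under one Gaussian per state, mixes the previous probabilities through the transition prior, and renormalises, which is the "manual HMM update" pattern that avoids matrix computations in Pine Script.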
ATR Gauge - Audiophile StyleThe ATR Gauge - Audiophile Style indicator is a custom visualization tool. It's designed to give you a quick, retro-inspired snapshot of market volatility using the Average True Range (ATR) metric. Think of it as a dashboard widget styled like the VU meters on old-school audiophile equipment (e.g., vintage stereo amps from brands like McIntosh or Marantz)—simple, elegant, and functional. It sits in one of the corners of your chart and helps you gauge how "hot" or "cool" the current price action is compared to recent levels.
Why This Gauge?: Standard ATR plots as a line on your chart, but this turns it into a visual "meter" focused on the last 24 hours. It's like a speedometer for volatility—quick to read at a glance. Useful for day traders, scalpers, or anyone monitoring intraday risk without cluttering the main chart.
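As a rough idea of the arithmetic behind such a meter, the sketch below normalises the current ATR against its own range over roughly the last 24 hours and plots it on a 0-100 scale. It is illustrative only; the audiophile-style gauge drawing is not reproduced, and the parameter names are assumptions.
//@version=5
indicator("ATR 24h Gauge Sketch (illustrative)")
atr_len = input.int(14, "ATR length", minval=1)
// Number of bars covering roughly the last 24 hours on the current timeframe
lookback = math.max(1, math.round(86400.0 / timeframe.in_seconds()))
a  = ta.atr(atr_len)
lo = ta.lowest(a, lookback)
hi = ta.highest(a, lookback)
// 0 = calmest ATR of the day, 100 = hottest, 50 when there is no range yet
needle = hi == lo ? 50.0 : (a - lo) / (hi - lo) * 100
plot(needle, "ATR gauge (0-100)", color = color.orange, linewidth = 2)
hline(80, "Hot")
hline(20, "Cool")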
Match on Selectable Percentage Change + RangeIndicator Overview:
Match on Selectable Percentage Change + Range is a powerful analytical tool designed for traders and analysts who want to identify historical price bars that match a specific percentage variation, and then evaluate how price evolved in the following days. It combines precision filtering with visual tabular feedback, making it ideal for pattern recognition, backtesting, and scenario analysis.
What It Does
This indicator scans historical bars to find instances where the percentage change between two consecutive closes matches a user-defined target (± a customizable tolerance). Once matches are found, it displays:
The date of each match (most recent first)
The actual variation searched
The percentage change after 2, 10, 20, and 30 bars
The min-max range (in %) over those same periods
All results are shown in a dynamic table directly on the chart.
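A stripped-down sketch of the matching logic is shown below: it flags bars whose close-to-close change falls within the tolerance band around the target, and reads one forward outcome once enough bars have elapsed. It is illustrative only; the dynamic table and the full 2/10/20/30-bar statistics of the published script are not reproduced.
//@version=5
indicator("Percentage-Change Match Sketch (illustrative)")
target = input.float(2.5, "Which variation do you want to analyze? (%)")
tol    = input.float(0.5, "% deviation from the variation to be considered (%)", minval = 0.0)
// Close-to-close percentage change and the tolerance test
chg      = (close - close[1]) / close[1] * 100
is_match = math.abs(chg - target) <= tol
// A forward outcome can only be read after the bars have elapsed,
// e.g. the 2-bar result of a match that fired 2 bars ago
fwd2 = is_match[2] ? (close - close[2]) / close[2] * 100 : na
plot(chg, "Close-to-close change (%)", color = color.gray)
plot(fwd2, "2-bar outcome of a match (%)", color = color.blue, style = plot.style_circles, linewidth = 3)
bgcolor(is_match ? color.new(color.teal, 80) : na)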
Inputs & Controls
Which variation do you want to analyze? (%): sets the target percentage change to look for (e.g. 2.5%).
% deviation from the variation to be considered (%): defines the tolerance range around the target (e.g. ±0.5%).
Bars to analyze (max 9999): sets how many past bars to scan.
Show match table: toggles the entire table on or off.
Show percentage variations (2d, 10d, 20d, 30d): shows or hides post-match percentage changes.
Show min-max ranges (2d, 10d, 20d, 30d): shows or hides post-match high/low ranges.
Table Structure
Each row in the table represents a historical match. Columns include:
Date: When the match occurred
Variation in: The actual % change that triggered the match
2d / 10d / 20d / 30d: % change after those days
Min-Max 2d / 10d / 20d / 30d: Range of price movement after those days
Color coding helps quickly identify bullish (green) vs bearish (red) outcomes.
Use Cases
Backtesting: See how similar past moves evolved over time
Scenario modeling: Estimate potential outcomes after a known variation
Pattern recognition: Spot recurring setups or volatility clusters
Risk analysis: Understand post-variation drawdowns and upside potential
Tips for Use
Use tighter deviation (e.g. 0.3%) for precision, or wider (e.g. 1%) for broader pattern capture.
Combine with other indicators to validate setups (e.g. volume, RSI, trend filters).
Toggle off variation or range columns to focus only on the metrics you need.
Gold–Bitcoin Correlation (Offset Model) by KManus88This indicator analyzes the correlation between Gold (XAU/USD) and Bitcoin (BTC/USD) using a time-offset model adjustable by the user.
The goal is to detect cyclical leads or lags between both assets, highlighting how capital flows into Gold may precede or follow movements in the crypto market.
Key Features:
Dynamic correlation calculation between Gold and Bitcoin.
Adjustable offset in days (default: 107) to fine-tune the temporal shift.
Automatic labels and on-chart visualization.
Compatible with multiple timeframes and logarithmic scales.
Interpretation:
Positive correlation suggests synchronized trends between both assets.
Negative correlation signals divergence or rotation of liquidity.
The time-offset parameter helps estimate when a shift in Gold could later reflect in Bitcoin.
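A bare-bones sketch of an offset correlation of this kind is shown below. The gold and Bitcoin symbols are assumptions (swap in your preferred feeds), and the offset is expressed in bars of the active timeframe rather than calendar days.
//@version=5
indicator("Gold-Bitcoin Offset Correlation Sketch (illustrative)")
offset_bars = input.int(107, "Offset (bars)", minval = 0)
corr_len    = input.int(90, "Correlation window", minval = 2)
// Symbol choices are assumptions, not the published script's data sources
gold = request.security("OANDA:XAUUSD", timeframe.period, close)
btc  = request.security("BITSTAMP:BTCUSD", timeframe.period, close)
// Correlate current Bitcoin with gold shifted `offset_bars` into the past
corr = ta.correlation(gold[offset_bars], btc, corr_len)
plot(corr, "Lagged correlation", color = corr >= 0 ? color.teal : color.maroon)
hline(0)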
Recommended use:
For macro-financial and global liquidity cycle analysis.
As a complementary tool in cross-asset momentum strategies.
© 2025 – Developed by KManus88 | Inspired by monetary correlation studies and global liquidity cycles.
This script is for educational purposes only and does not constitute financial advice.
Smooth Theil-SenI wanted to build a Theil-Sen estimator that could run on more than one bar and produce smoother output than the standard implementation. Theil-Sen regression is a non-parametric method that calculates the median slope between all pairs of points in your dataset, which makes it extremely robust to outliers. The problem is that median operations produce discrete jumps, especially when you're working with limited sample sizes. Every time the median shifts from one value to another, you get a step change in your regression line, which creates visual choppiness that can be distracting even though the underlying calculations are sound.
The solution I ended up going with was convolving a Gaussian kernel around the center of the sorted lists to get a more continuous median estimate. Instead of just picking the middle value or averaging the two middle values when you have an even sample size, the Gaussian kernel weights the values near the center more heavily and smoothly tapers off as you move away from the median position. This creates a weighted average that behaves like a median in terms of robustness but produces much smoother transitions as new data points arrive and the sorted list shifts.
There are variance tradeoffs with this approach since you're no longer using the pure median, but they're minimal in practice. The kernel weighting stays concentrated enough around the center that you retain most of the outlier resistance that makes Theil-Sen useful in the first place. What you gain is a regression line that updates smoothly instead of jumping discretely, which makes it easier to spot genuine trend changes versus just the statistical noise of median recalculation. The smoothness is particularly noticeable when you're running the estimator over longer lookback periods where the sorted list is large enough that small kernel adjustments have less impact on the overall center of mass.
The Gaussian kernel itself is a bell curve centered on the median position, with a standard deviation you can tune to control how much smoothing you want. Tighter kernels stay closer to the pure median behavior and give you more discrete steps. Wider kernels spread the weighting further from the center and produce smoother output at the cost of slightly reduced outlier resistance. The default settings strike a balance that keeps the estimator robust while removing most of the visual jitter.
Running Theil-Sen on multiple bars means calculating slopes between all pairs of points across your lookback window, sorting those slopes, and then applying the Gaussian kernel to find the weighted center of that sorted distribution. This is computationally more expensive than simple moving averages or even standard linear regression, but Pine Script handles it well enough for reasonable lookback lengths. The benefit is that you get a trend estimate that doesn't get thrown off by individual spikes or anomalies in your price data, which is valuable when working with noisy instruments or during volatile periods where traditional regression lines can swing wildly.
The implementation maintains sorted arrays for both the slope calculations and the final kernel weighting, which keeps everything organized and makes the Gaussian convolution straightforward. The kernel weights are precalculated based on the distance from the center position, then applied as multipliers to the sorted slope values before summing to get the final smoothed median slope. That slope gets combined with an intercept calculation to produce the regression line values you see plotted on the chart.
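As a rough sketch of that pipeline (pairwise slopes, sort, Gaussian weighting around the median rank, then an intercept), the following illustrative code follows the same structure; the lookback, kernel width and intercept choice are assumptions, not the published defaults.
//@version=5
indicator("Gaussian-Kernel Theil-Sen Sketch (illustrative)", overlay=true)
length = input.int(30, "Lookback", minval = 3)
sigma  = input.float(2.0, "Kernel width (rank units)", minval = 0.1)
// Pairwise slopes across the window (close[0] is the current bar)
slopes = array.new<float>()
for i = 0 to length - 2
    for j = i + 1 to length - 1
        array.push(slopes, (close[i] - close[j]) / (j - i))
array.sort(slopes)
// Gaussian kernel centred on the median rank instead of picking the single middle value
n      = array.size(slopes)
center = (n - 1) / 2.0
float wsum = 0.0
float vsum = 0.0
for k = 0 to n - 1
    w = math.exp(-math.pow(k - center, 2) / (2.0 * sigma * sigma))
    wsum += w
    vsum += w * array.get(slopes, k)
slope = vsum / wsum
// Intercept from the median residual, evaluated at the current bar (x = 0)
resid = array.new<float>()
for i = 0 to length - 1
    array.push(resid, close[i] + slope * i)
plot(array.median(resid), "Smooth Theil-Sen", color = color.purple, linewidth = 2)
Widening the kernel sigma spreads the weighting away from the median rank and smooths the slope estimate further, at the cost of slightly less outlier resistance, which mirrors the tradeoff described above.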
What this really demonstrates is that you can take classical statistical methods like Theil-Sen and adapt them with signal processing techniques like kernel convolution to get behavior that's more suited to real-time visualization. The pure mathematical definition of a median is discrete by nature, but financial charts benefit from smooth, continuous lines that make it easier to track changes over time. By introducing the Gaussian kernel weighting, you preserve the core robustness of the median-based approach while gaining the visual smoothness of methods that use weighted averages. Whether that smoothness is worth the minor variance tradeoff depends on your use case, but for most charting applications, the improved readability makes it a good compromise.
Mean Reversion Oscillator [Alpha Extract]An advanced composite oscillator system specifically designed to identify extreme market conditions and high-probability mean reversion opportunities, combining five proven oscillators into a single, powerful analytical framework.
By integrating multiple momentum and volume-based indicators with sophisticated extreme level detection, this oscillator provides precise entry signals for contrarian trading strategies while filtering out false reversals through momentum confirmation.
🔶 Multi-Oscillator Composite Framework
Utilizes a comprehensive approach that combines Bollinger %B, RSI, Stochastic, Money Flow Index, and Williams %R into a unified composite score. This multi-dimensional analysis ensures robust signal generation by capturing different aspects of market extremes and momentum shifts.
// Weighted composite (equal weights)
normalized_bb = bb_percent
normalized_rsi = rsi
normalized_stoch = stoch_d_val
normalized_mfi = mfi
normalized_williams = williams_r
composite_raw = (normalized_bb + normalized_rsi + normalized_stoch + normalized_mfi + normalized_williams) / 5
composite = ta.sma(composite_raw, composite_smooth)
🔶 Advanced Extreme Level Detection
Features a sophisticated dual-threshold system that distinguishes between moderate and extreme market conditions. This hierarchical approach allows traders to identify varying degrees of mean reversion potential, from moderate oversold/overbought conditions to extreme levels that demand immediate attention.
🔶 Momentum Confirmation System
Incorporates a specialized momentum histogram that confirms mean reversion signals by analyzing the rate of change in the composite oscillator. This prevents premature entries during strong trending conditions while highlighting genuine reversal opportunities.
// Oscillator momentum (rate of change)
osc_momentum = ta.mom(composite, 5)
histogram = osc_momentum
// Momentum confirmation
momentum_bullish = histogram > histogram[1]  // momentum rising versus the previous bar
momentum_bearish = histogram < histogram[1]  // momentum falling versus the previous bar
// Confirmed signals
confirmed_bullish = bullish_entry and momentum_bullish
confirmed_bearish = bearish_entry and momentum_bearish
🔶 Dynamic Visual Intelligence
The oscillator line adapts its color intensity based on proximity to extreme levels, providing instant visual feedback about market conditions. Background shading creates clear zones that highlight when markets enter moderate or extreme territories.
🔶 Intelligent Signal Generation
Generates precise entry signals only when the composite oscillator crosses extreme thresholds with momentum confirmation. This dual-confirmation approach significantly reduces false signals while maintaining sensitivity to genuine mean reversion opportunities.
How It Works
🔶 Composite Score Calculation
The indicator simultaneously tracks five different oscillators, each normalized to a 0-100 scale, then combines them into a smoothed composite score. This approach eliminates the noise inherent in single-oscillator analysis while capturing the consensus view of multiple momentum indicators.
// Mean reversion entry signals
bullish_entry = ta.crossover(composite, 100 - extreme_level) and composite[1] < (100 - extreme_level)
bearish_entry = ta.crossunder(composite, extreme_level) and composite[1] > extreme_level
// Bollinger %B calculation
bb_basis = ta.sma(src, bb_length)
bb_dev = bb_mult * ta.stdev(src, bb_length)
bb_upper = bb_basis + bb_dev
bb_lower = bb_basis - bb_dev
bb_percent = (src - bb_lower) / (bb_upper - bb_lower) * 100
🔶 Extreme Zone Identification
The system automatically identifies when markets reach statistically significant extreme levels, both moderate (65/35) and extreme (80/20). These zones represent areas where mean reversion has the highest probability of success based on historical market behavior.
🔶 Momentum Histogram Analysis
A specialized momentum histogram tracks the velocity of oscillator changes, helping traders distinguish between healthy corrections and potential trend reversals. The histogram's color-coded display makes momentum shifts immediately apparent.
🔶 Divergence Detection Framework
Built-in divergence analysis identifies situations where price and oscillator movements diverge, often signaling impending reversals. Diamond-shaped markers highlight these critical divergence patterns for enhanced pattern recognition.
🔶 Real-Time Information Dashboard
An integrated information table provides instant access to current oscillator readings, market status, and individual component values. This dashboard eliminates the need to manually check multiple indicators while trading.
🔶 Individual Component Display
Optional display of individual oscillator components allows traders to understand which specific indicators are driving the composite signal. This transparency enables more informed decision-making and deeper market analysis.
🔶 Adaptive Background Coloring
Intelligent background shading automatically adjusts based on market conditions, creating visual zones that correspond to different levels of mean reversion potential. The subtle color gradations make pattern recognition effortless.
🔶 Comprehensive Alert System
Multi-tier alert system covers confirmed entry signals, divergence patterns, and extreme level breaches. Each alert type provides specific context about the detected condition, enabling traders to respond appropriately to different signal strengths.
🔶 Customizable Threshold Management
Fully adjustable extreme and moderate levels allow traders to fine-tune the indicator's sensitivity to match different market volatilities and trading timeframes. This flexibility ensures optimal performance across various market conditions.
🔶 Why Choose AE - Mean Reversion Oscillator?
This indicator provides the most comprehensive approach to mean reversion trading by combining multiple proven oscillators with advanced confirmation mechanisms. By offering clear visual hierarchies for different extreme levels and requiring momentum confirmation for signals, it empowers traders to identify high-probability contrarian opportunities while avoiding false reversals. The sophisticated composite methodology ensures that signals are both statistically significant and practically actionable, making it an essential tool for traders focused on mean reversion strategies across all market conditions.