Weighted KDE Mode 🙏🏻
The 'ultimate' typical value estimator, at the highest computational cost (time complexity O(n^2)). I am not afraid to say: this is the last-resort BFG9000 you can 'ever' get to make dem market demons kneel before y'all
Quickguide
pls read it, you won’t find it anywhere else in open access
When to use:
If current market activity is so crazy || things on your charts are really so bad (contaminated data && (data has very heavy tails || very pronounced peak)), the only option left is to use the peak (mode) of a Kernel Density Estimate, instead of the median, not even mentioning the mean. So when WMA won't help, when WPNR won't help, you need this thing.
Setting it up:
Interval: choose what u need, you can use usual moving windows, but I also added yearly and session anchors like in the old VWAP (always prefer 24h instead of Session if your plan allows). Other options like a cumulative window are also there.
Parameters: this script ain't no joke, it needs time to make calculations, so I added a setting to calculate only for the last N bars (when "starting at bar N" is set to 0). If it's not zero, it acts as a starting point after which the calculations happen (useful for backtesting). Other parameters: keep em as they are, keep the student5 kernel, turn off the appropriate weights if u apply it to something other than chart data, e.g. on other studies etc.
But instead of listening to me just experiment with parameters and see what they change, would take 5 mins max
Been always saying that VWAP is ish, not time-aware etc, volume info is incorporated in a lil bit wrong way… So I decided not just to fix VWAP (you can do it yourself in 5 mins), but instead to drop there the Ultimate xD typical value estimator that is ever possible to do. Time aware, volume / inferred volume aware, resistant to all kinds of BS. This is your shieldwall.
How it works:
You can easily do a weighted kernel density estimation, in our case including temporal and intensity information while accumulating densities. Here are some details worth mentioning about the thing:
Kernels are raw (not unit variance), that’s easier to work with later.
h constants for each kernel were calculated accordingly (for the raw kernels above) with the Python mpmath module at high decimal precision.
In bandwidth calculation instead of using empirical standard deviation as a scaler, I use... ta.range(src, len) / math.sqrt(12)
...that takes the data range and converts it to a standard deviation, assuming the data is uniformly distributed. That's exactly what we need: a scaler that is coherent with the KDE and has nothing to do with stdevs, just like the kernels themselves (except the gaussian one, which we don't even need to use). More importantly, if u take multiple windows and see over time which distro they approach in the long term, it would be the uniform one (not the normal one as many think). Sometimes windows are multimodal, sometimes Laplace-like etc, so in general, all together, they are uniform-ish.
The one and only kernel you really need is Student t with v = 5, for the use case I highlighted in the first part of the post for TV users. It's as far as u can get before ish becomes crazy like undefined variance etc. It has the highest kurtosis (= 9) of all the kernels included here, perfect for the real use case I mentioned. Otherwise, you don't even need KDE 4 real, but still I included other sensible kernels for comparison or in case I am trippin there.
Btw, don't believe in all that hype about the Epanechnikov kernel, which in essence is made from a beta distribution with alpha = beta = 2; idk why folk call it by that weird name, it's the beta2 kernel. Yes, on paper it really minimises AMISE (that's how I calculated the h constants for all dem kernels in the script), but for really crazy data (the proper use case for us), it don't come even 'close' compared with the student5 kernel. Not much else to add.
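To make the mechanics concrete, here is a minimal Python sketch of the weighted-KDE-mode idea described above: each sample carries a combined time/volume weight, the bandwidth scale is range / sqrt(12), and the mode is the grid point with the highest accumulated density. The Student-t(5) kernel, the rule-of-thumb bandwidth exponent, the h constant, and the grid size are illustrative assumptions here, not the exact values baked into the script.

```python
import numpy as np
from scipy.stats import t as student_t

def weighted_kde_mode(prices, weights, nu=5, h_const=1.0, grid_pts=400):
    """Peak (mode) of a weighted kernel density estimate.

    prices  : 1-D array of source values (e.g. closes in the window)
    weights : 1-D array of non-negative weights (time * volume, etc.)
    nu      : degrees of freedom of the Student-t kernel
    h_const : kernel-specific bandwidth constant (assumed value here)
    """
    prices = np.asarray(prices, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                   # normalize weights

    # Range-based scale: range / sqrt(12) is the stdev of a uniform distribution
    scale = (prices.max() - prices.min()) / np.sqrt(12.0)
    h = h_const * scale * len(prices) ** (-1.0 / 5.0)   # rule-of-thumb bandwidth

    grid = np.linspace(prices.min(), prices.max(), grid_pts)
    dens = np.zeros_like(grid)
    for x, w in zip(prices, weights):                   # accumulate weighted kernel densities
        dens += w * student_t.pdf((grid - x) / h, df=nu) / h
    return grid[np.argmax(dens)]                        # the weighted mode estimate

# toy usage: recent prices weighted by volume and recency
prices  = np.array([101.0, 100.5, 102.2, 150.0, 101.1, 100.9])   # one wild outlier
weights = np.array([1, 2, 3, 1, 5, 6], dtype=float)              # later bars weigh more
print(weighted_kde_mode(prices, weights))
```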
Shout out to @RicardoSantos for inspiration, I saw your KDE script a long time ago brotha, finna got my hands on it.
∞
Statistics
Linear Moments
█ OVERVIEW
The Linear Moments indicator, also known as L-moments, is a statistical tool used to estimate the properties of a probability distribution. It is an alternative to conventional moments and is more robust to outliers and extreme values.
█ CONCEPTS
█ Four moments of a distribution
We have mentioned the concept of the Moments of a distribution in one of our previous posts. The method of Linear Moments allows us to calculate more robust measures that describe the shape features of a distribution and are analogous to those of conventional moments. L-moments therefore provide estimates of the location, scale, skewness, and kurtosis of a probability distribution.
The first L-moment, λ₁, is equivalent to the sample mean and represents the location of the distribution. The second L-moment, λ₂, is a measure of the dispersion of the distribution, similar to the sample standard deviation. The third and fourth L-moments, λ₃ and λ₄, respectively, are the measures of skewness and kurtosis of the distribution. Higher order L-moments can also be calculated to provide more detailed information about the shape of the distribution.
One advantage of using L-moments over conventional moments is that they are less affected by outliers and extreme values. This is because L-moments are based on order statistics, which are more resistant to the influence of outliers. By contrast, conventional moments are based on the deviations of each data point from the sample mean, and outliers can have a disproportionate effect on these deviations, leading to skewed or biased estimates of the distribution parameters.
█ Order Statistics
L-moments are statistical measures that are based on linear combinations of order statistics, which are the sorted values in a dataset. This approach makes L-moments more resistant to the influence of outliers and extreme values. However, the computation of L-moments requires sorting the order statistics, which can lead to a higher computational complexity.
To address this issue, we have implemented an Online Sorting Algorithm that efficiently obtains the sorted dataset of order statistics, reducing the time complexity of the indicator. The Online Sorting Algorithm is an efficient method for sorting large datasets that can be updated incrementally, making it well-suited for use in trading applications where data is often streamed in real-time. By using this algorithm to compute L-moments, we can obtain robust estimates of distribution parameters while minimizing the computational resources required.
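As a concrete illustration of how sample L-moments come out of order statistics, here is a small Python sketch using the standard probability-weighted-moment formulas; it sorts the data naively rather than with the online sorting algorithm the indicator uses, so treat it as a reference calculation only.

```python
import numpy as np

def sample_l_moments(x):
    """First four sample L-moments (lambda_1..lambda_4) via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))   # order statistics (ascending)
    n = len(x)
    i = np.arange(1, n + 1)                   # ranks 1..n

    # Unbiased probability-weighted moments b0..b3
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n

    l1 = b0                                   # location (equals the sample mean)
    l2 = 2 * b1 - b0                          # L-scale (dispersion)
    l3 = 6 * b2 - 6 * b1 + b0                 # related to skewness (tau3 = l3 / l2)
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0     # related to kurtosis (tau4 = l4 / l2)
    return l1, l2, l3, l4

# toy usage on simulated log returns
returns = np.random.default_rng(0).normal(0, 0.01, 250)
print(sample_l_moments(returns))
```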
█ Bias and efficiency of an estimator
One of the key advantages of L-moments over conventional moments is that they approach their asymptotic normal distribution more closely than conventional moments do. This means that as the sample size increases, the L-moments provide more accurate estimates of the distribution parameters.
Asymptotic normality is a statistical property that describes the behavior of an estimator as the sample size increases. As the sample size gets larger, the distribution of the estimator approaches a normal distribution, which is a bell-shaped curve. The mean and variance of the estimator are also related to the true mean and variance of the population, and these relationships become more accurate as the sample size increases.
The concept of asymptotic normality is important because it allows us to make inferences about the population based on the properties of the sample. If an estimator is asymptotically normal, we can use the properties of the normal distribution to calculate the probability of observing a particular value of the estimator, given the sample size and other relevant parameters.
In the case of L-moments, the fact that they approach their asymptotic normal distribution more closely than conventional moments means that they provide more accurate estimates of the distribution parameters as the sample size increases. This is especially useful in situations where the sample size is small, such as when working with financial data. By using L-moments to estimate the properties of a distribution, traders can make more informed decisions about their investments and manage their risk more effectively.
Below we can see the empirical distributions of the Variance and L-scale estimators. We ran 10,000 simulations with a sample size of 100. Here we can clearly see how the L-moment estimator approaches the normal distribution more closely and how such an estimator can be more representative of the underlying population.
█ WAYS TO USE THIS INDICATOR
The Linear Moments indicator can be used to estimate the L-moments of a dataset and provide insights into the underlying probability distribution. By analyzing the L-moments, traders can make inferences about the shape of the distribution, such as whether it is symmetric or skewed, and the degree of its spread and peakedness. This information can be useful in predicting future market movements and developing trading strategies.
One can also compare the L-moments of the dataset at hand with the L-moments of certain commonly used probability distributions. Finance is especially known for the use of certain fat tailed distributions such as Laplace or Student-t. We have built in the theoretical values of L-kurtosis for certain common distributions. In this way a person can compare our observed L-kurtosis with the one of the selected theoretical distribution.
█ FEATURES
Source Settings
Source - Select the source you wish the indicator to calculate on
Source Selection - Select whether you wish to calculate on the source value or its log return
Moments Settings
Moments Selection - Select the L-moment you wish to be displayed
Lookback - Determine the sample size you wish the L-moments to be calculated with
Theoretical Distribution - This setting is only for investigating the kurtosis of our dataset. One can compare our observed kurtosis with the kurtosis of a selected theoretical distribution.
Historical Volatility Estimators
Historical volatility is a statistical measure of the dispersion of returns for a given security or market index over a given period. This indicator provides different historical volatility model estimators with percentile gradient coloring and a volatility stats panel.
█ OVERVIEW
There are multiple ways to estimate historical volatility other than the traditional close-to-close estimator. This indicator provides range-based volatility estimators that take the high, low, and open into account, as well as estimators that use other statistical measures instead of standard deviation. The gradient coloring and stats panel provide an overview of how high or low the current volatility is compared to its historical values.
█ CONCEPTS
We have mentioned the concepts of historical volatility in our previous indicators, Historical Volatility, Historical Volatility Rank, and Historical Volatility Percentile. You can check the definitions in those scripts. The basic calculation is just the sample standard deviation of log returns scaled by the square root of time. The main focus of this script is the difference between volatility models.
Close-to-Close HV Estimator: Close-to-Close is the traditional historical volatility calculation. It uses the sample standard deviation. Note: the TradingView built-in historical volatility value is a bit off because it uses the population standard deviation instead of the sample standard deviation. N – 1 should be used here to get rid of the sampling bias.
Pros:
• Close-to-Close HV estimators are the most commonly used estimators in finance. The calculation is straightforward and easy to understand. When people reference historical volatility, most of the time they are talking about the close to close estimator.
Cons:
• The Close-to-Close estimator only calculates volatility based on the closing price. It does not take intraday volatility (the high and low) into account. It also does not take into account the jump when the open and the previous close are not the same.
• Close-to-Close weights past volatility equally during the lookback period, while there are other ways to weight the historical data.
• Close-to-Close is calculated based on the standard deviation, so it is vulnerable to returns that are not normally distributed and have fat tails. Mean and median absolute deviation make the historical volatility more stable with extreme values.
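For reference, here is a minimal Python sketch of the baseline close-to-close estimator (sample standard deviation of log returns, annualized with the square root of time); the 252-day year and the lookback handling are assumptions to adjust per asset.

```python
import numpy as np

def close_to_close_hv(closes, lookback=30, trading_days=252):
    """Annualized close-to-close historical volatility (sample stdev, ddof=1)."""
    closes = np.asarray(closes, dtype=float)
    log_ret = np.diff(np.log(closes))[-lookback:]   # last `lookback` log returns
    return np.std(log_ret, ddof=1) * np.sqrt(trading_days)
```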
Parkinson HV Estimator:
• Parkinson was one of the first to come up with improvements to the historical volatility calculation.
• Parkinson suggested that using the High and Low of each bar can represent volatility better, as it takes intraday volatility into account. So Parkinson HV is also known as Parkinson High-Low HV.
• It is about 5.2 times more efficient than the Close-to-Close estimator. But it does not take jumps and drift into account. Therefore, it underestimates volatility.
Note: By dividing the Parkinson volatility by the Close-to-Close volatility you can get a result similar to the Variance Ratio Test. It is called the Parkinson number and can be used to test whether the market follows a random walk. (It is mentioned in Nassim Taleb's Dynamic Hedging book, but it seems like he made a mistake and wrote the ratio the wrong way around.)
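A rough Python sketch of the Parkinson estimator as it is usually written (squared log high-low ranges scaled by 1/(4·ln 2)), annualized the same way as above; the exact windowing in the script may differ.

```python
import numpy as np

def parkinson_hv(highs, lows, trading_days=252):
    """Annualized Parkinson high-low volatility estimator."""
    hl = np.log(np.asarray(highs, dtype=float) / np.asarray(lows, dtype=float))
    var = np.mean(hl ** 2) / (4.0 * np.log(2.0))   # per-bar variance estimate
    return np.sqrt(var * trading_days)
```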
Garman-Klass Estimator:
• Garman-Klass expanded on Parkinson's estimator. Instead of using only the high and low like Parkinson, Garman-Klass's method uses the open, close, high, and low to build a minimum-variance estimator.
• The estimator is about 7.4 times more efficient than the traditional estimator. But like Parkinson HV, it ignores jumps and drift. Therefore, it underestimates volatility.
Rogers-Satchell Estimator:
• Rogers and Satchell found some drawbacks in Garman-Klass's estimator. Garman-Klass assumes price follows Brownian motion with zero drift.
• The Rogers Satchell Estimator calculates based on open, close, high, and low. And it can also handle drift in the financial series.
• Rogers-Satchell HV is more efficient than Garman-Klass HV when there’s drift in the data. However, it is a little bit less efficient when drift is zero. The estimator doesn’t handle jumps, therefore it still underestimates volatility.
Garman-Klass Yang-Zhang extension:
• Yang Zhang expanded Garman Klass HV so that it can handle jumps. However, unlike the Rogers-Satchell estimator, this estimator cannot handle drift. It is about 8 times more efficient than the traditional estimator.
• The Garman-Klass Yang-Zhang extension HV has the same value as Garman-Klass when there’s no gap in the data such as in cryptocurrencies.
Yang-Zhang Estimator:
• The Yang-Zhang estimator combines the Garman-Klass and Rogers-Satchell estimators, so it is based on the open, close, high, and low and can also handle non-zero drift. It also expands the calculation so that the estimator can handle overnight jumps in the data.
• This estimator is the most powerful estimator among the range-based estimators. It has the minimum variance error among them, and it is 14 times more efficient than the close-to-close estimator. When the overnight and daily volatility are correlated, it might underestimate volatility a little.
• 1.34 is the optimal value for alpha according to their paper. The alpha constant in the calculation can be adjusted in the settings. Note: There are already some volatility estimators coded on TradingView. Some of them are right, some of them are wrong. But for Yang Zhang Estimator I have not seen a correct version on TV.
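For readers who want to sanity-check a Yang-Zhang implementation, here is one common formulation sketched in Python: the overnight variance plus k times the open-to-close variance plus (1 - k) times the Rogers-Satchell variance, with k derived from alpha = 1.34. This is a sketch of the published formula under those assumptions, not a copy of this script's code.

```python
import numpy as np

def yang_zhang_hv(opens, highs, lows, closes, alpha=1.34, trading_days=252):
    """Annualized Yang-Zhang volatility (one common formulation)."""
    o, h, l, c = (np.asarray(a, dtype=float) for a in (opens, highs, lows, closes))
    n = len(c) - 1                                  # number of return observations

    log_overnight = np.log(o[1:] / c[:-1])          # close-to-open (overnight) returns
    log_open_close = np.log(c[1:] / o[1:])          # open-to-close returns

    var_overnight = np.var(log_overnight, ddof=1)
    var_open_close = np.var(log_open_close, ddof=1)

    # Rogers-Satchell variance (drift-independent), averaged per bar
    rs = (np.log(h[1:] / c[1:]) * np.log(h[1:] / o[1:])
          + np.log(l[1:] / c[1:]) * np.log(l[1:] / o[1:]))
    var_rs = np.mean(rs)

    k = (alpha - 1.0) / (alpha + (n + 1.0) / (n - 1.0))
    var_yz = var_overnight + k * var_open_close + (1.0 - k) * var_rs
    return np.sqrt(var_yz * trading_days)
```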
EWMA Estimator:
• EWMA stands for Exponentially Weighted Moving Average. The Close-to-Close and all other estimators here are all equally weighted.
• EWMA weighs recent volatility more and older volatility less. The benefit of this is that volatility is usually autocorrelated. The autocorrelation has close to exponential decay, as you can see using an Autocorrelation Function indicator on absolute or squared returns. The autocorrelation causes volatility clustering, which makes recent volatility more informative. Therefore, exponentially weighted volatility suits this property of volatility well.
• RiskMetrics uses 0.94 for lambda, which corresponds to roughly a 30-period lookback. In this indicator lambda is coded to adjust with the lookback. It's also easy for EWMA to forecast one period of volatility ahead (a small sketch of the recursion follows this list).
• However, EWMA volatility is not often used because there are better options to weight volatility such as ARCH and GARCH.
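Here is a minimal sketch of the EWMA variance recursion (RiskMetrics-style), assuming a fixed lambda of 0.94; in the indicator, lambda is tied to the lookback, so the constant here is illustrative.

```python
import numpy as np

def ewma_hv(closes, lam=0.94, trading_days=252):
    """Annualized EWMA volatility: var_t = lam * var_{t-1} + (1 - lam) * r_t^2."""
    r = np.diff(np.log(np.asarray(closes, dtype=float)))
    var = r[0] ** 2                        # seed with the first squared return
    for ret in r[1:]:
        var = lam * var + (1.0 - lam) * ret ** 2
    return np.sqrt(var * trading_days)
```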
Adjusted Mean Absolute Deviation Estimator:
• This estimator does not use standard deviation to calculate volatility. It uses the distance of the log return from its moving average as volatility.
• It's a simple way to calculate volatility and it's effective. The difference is that this estimator does not square the log returns to get the volatility. The paper suggests this estimator has more predictive power.
• The mean absolute deviation here is adjusted to get rid of the bias. It scales the value so that it can be comparable to the other historical volatility estimators.
• In Nassim Taleb’s paper, he mentions people sometimes confuse MAD with standard deviation for volatility measurements. And he suggests people use mean absolute deviation instead of standard deviation when we talk about volatility.
Adjusted Median Absolute Deviation Estimator:
• This is another estimator that does not use standard deviation to measure volatility.
• Using the median gives a more robust estimator when there are extreme values in the returns. It works better in fat-tailed distribution.
• The median absolute deviation is adjusted by maximum likelihood estimation so that its value is scaled to be comparable to other volatility estimators.
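A hedged sketch of how such adjustments are commonly done: the mean absolute deviation is scaled by sqrt(pi/2) and the median absolute deviation by 1.4826, so that both are consistent with the standard deviation under normality. Whether the script uses exactly these constants is an assumption.

```python
import numpy as np

def adjusted_mad_hv(closes, trading_days=252):
    """Mean-absolute-deviation volatility, scaled to be comparable to stdev."""
    r = np.diff(np.log(np.asarray(closes, dtype=float)))
    mad = np.mean(np.abs(r - r.mean()))
    return mad * np.sqrt(np.pi / 2.0) * np.sqrt(trading_days)

def adjusted_median_ad_hv(closes, trading_days=252):
    """Median-absolute-deviation volatility, scaled for normal consistency."""
    r = np.diff(np.log(np.asarray(closes, dtype=float)))
    med_ad = np.median(np.abs(r - np.median(r)))
    return 1.4826 * med_ad * np.sqrt(trading_days)
```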
█ FEATURES
• You can select the volatility estimator models in the Volatility Model input
• Historical Volatility is annualized. You can type in the numbers of trading days in a year in the Annual input based on the asset you are trading.
• Alpha is used to adjust the Yang Zhang volatility estimator value.
• Percentile Length is used to adjust the percentile coloring lookback.
• The gradient coloring will be based on the percentile value (0- 100). The higher the percentile value, the warmer the color will be, which indicates high volatility. The lower the percentile value, the colder the color will be, which indicates low volatility.
• When percentile coloring is off, it won’t show the gradient color.
• You can also use invert color to make high volatility a cold color and low volatility a warm color. Volatility has some mean-reversion properties. Therefore, when volatility is very low and the color is close to aqua, you would expect it to expand soon. When volatility is very high and close to red, you would expect it to contract and cool down.
• When the background signal is on, it gives a signal when HVP is very low, warning that there might be a volatility expansion soon.
• You can choose the plot style, such as lines, columns, or areas, in the plot style input.
• When the show information panel is on, a small panel will display on the right.
• The information panel displays the historical volatility model name, the 50th percentile of HV, and the HV percentile. The 50th percentile of HV is also the median of HV. You can compare this value with the current HV value to see how much it is above or below, so you can get an idea of how high or low HV is. The HV Percentile value runs from 0 to 100. It tells us the percentage of periods over the entire lookback during which historical volatility traded below the current level. The higher the HVP, the higher HV is compared to its historical data. The gradient color is also based on this value.
█ HOW TO USE
If you haven't used the HVP indicator, we suggest you use the HVP indicator first. This indicator is more like historical volatility with HVP coloring. It displays HVP values in the color and panel, but it's not range-bound like the HVP and it displays HV values. The user can get a quick understanding of how high or low the current volatility is compared to its historical values based on the gradient color. They can also time the market better based on volatility mean reversion. High volatility means volatility contracts soon (the move is about to end, the market will cool down); low volatility means a volatility expansion soon (the market is about to move).
█ FINAL THOUGHTS
HV vs ATR: The volatility estimator concepts above are a display of the history of historical volatility estimation research in quantitative finance. It's a timeline of range-based estimators, from Parkinson volatility to Yang-Zhang volatility. We hope these descriptions make more people aware that even though ATR is the most popular volatility indicator in technical analysis, it's not the best estimator. Almost no one in quant finance uses ATR to measure volatility (otherwise these papers would be about how to improve ATR measurements instead of HV). As you can see, there are much more advanced volatility estimators that also take the open, close, high, and low into account. HV values are based on log returns with some calculation adjustments. HV can also be scaled in terms of price, just like ATR. And for profit-taking ranges, ATR is not based on probabilities; historical volatility can be used in a probability distribution function to calculate the probability of ranges, as in the Expected Move indicator.
Other Estimators: There are also other, more advanced historical volatility estimators. There are high-frequency sampled HV estimators that use intraday data to calculate volatility. We will publish a high-frequency volatility estimator in the future. There are also ARCH and GARCH models that take volatility clustering into account. GARCH models require maximum likelihood estimation, which needs a solver to find the best weights for each component. This is currently not possible on TV due to the large computational power requirements. The other indicators that claim to be GARCH are all wrong.
Visible Range
Overview
This is a precision tool designed for quantitative traders and engineers who need exact control over their chart's visual scope. Unlike standard time calculations that fail in markets with trading breaks (like A-Shares, Futures, or Stocks), this indicator uses a loop-back mechanism to count the actual number of visible bars, ensuring your indicators (e.g., MA60, MA200) have sufficient sample data.
Why use this? If you use multi-timeframe layouts (e.g., Daily/Hourly/15s), it is critical to know exactly how much data is visible.
The Problem: In markets like the Chinese A-Share market (T+1, 4-hour trading day), calculating Time Range / Timeframe results in massive errors because it includes closed market hours (lunch breaks, nights, weekends).
The Solution: This script iterates through the visible range to count the true bar_index, providing 100% accurate data density metrics.
Key Features
True Bar Counting: Uses a for loop to count actual candles, ignoring market breaks. Perfect for non-24/7 markets.
Integer Precision: Displays time ranges (Days, Hours, Mins, Secs) in clean integers. No messy decimals.
Compact UI: Displays information in a single line (e.g., View: 30 Days (120 Bars)), defaulting to the Top Right corner to save screen space.
Fully Customizable: Adjustable position, text size, and colors to fit any dark/light theme.
Performance Optimized: Includes max_bars_back limits to prevent browser lag on deep history lookups.
Settings
Position: Default Top Right (can be moved to any corner).
Max Bar Count: Default 5000 (Safety limit for loop calculation).
Expected Move Bands
Expected move is the amount that an asset is predicted to increase or decrease from its current price, based on the current levels of volatility.
In this model, we assume asset price follows a log-normal distribution and the log return follows a normal distribution.
Note: Normal distribution is just an assumption, it's not the real distribution of return
Settings:
"Estimation Period Selection" is for selecting the period we want to construct the prediction interval.
For "Current Bar", the interval is calculated based on the data at the previous bar's close. Therefore, changes in the current price will have little effect on the range. What current bar means is that the estimated range is for when this bar closes. E.g., if the timeframe is 4 hours and 1 hour has passed, the interval is for the time this bar has left, in this case, 3 hours.
For "Future Bars", the interval is calculated based on the current close. Therefore the range will be very much affected by the change in the current price. If the current price moves up, the range will also move up, vice versa. Future Bars is estimating the range for the period at least one bar ahead.
There are also other source selections based on high low.
The time setting is used when "Future Bars" is chosen for the period. The value in time means how many bars ahead of the current bar the range is estimated for. When time = 1, the interval is constructed for 1 bar ahead. E.g., if the timeframe is 4 hours, then it's estimating the next 4-hour range no matter how much time has passed in the current bar.
Note: It's probably better to use "probability cone" for visual presentation when time > 1
Volatility Models:
Sample SD: the traditional sample standard deviation, most commonly used; uses (n-1) to adjust the bias.
Parkinson: uses the High/Low to estimate volatility; assumes continuous trading with no gaps and zero mean (no drift); about 5 times more efficient than Close-to-Close.
Garman Klass: uses OHLC; assumes zero drift and no jumps; about 7 times more efficient.
Yang-Zhang Garman-Klass Extension: adds a jump calculation to Garman Klass; has the same value as Garman Klass on markets with no gaps; about 8 times more efficient.
Rogers-Satchell: uses OHLC; assumes a non-zero mean, so it handles drift but does not handle jumps; about 8 times more efficient.
EWMA: Exponentially Weighted Moving Average volatility; weights recent volatility more, making it more reactive and better at taking into account volatility autocorrelation and clustering.
Yang-Zhang: uses OHLC; combines Rogers-Satchell and Garman-Klass; handles both drift and jumps; about 14 times more efficient; alpha is the constant that weights the Rogers-Satchell volatility to minimize variance.
Median absolute deviation: a more direct way of measuring volatility that does not use standard deviation; the MAD used here is adjusted to be an unbiased estimator.
Volatility Period is the sample size for the variance estimation. A longer period makes the estimated range more stable and less reactive to recent prices, and the distribution is more significant with a larger sample size. A short period makes the range more responsive to recent prices, which might be better for high-volatility clusters.
Standard deviations:
Standard Deviation One shows the estimated range where the closing price will be about 68% of the time.
Standard Deviation Two shows the estimated range where the closing price will be about 95% of the time.
Standard Deviation Three shows the estimated range where the closing price will be about 99.7% of the time.
Note: All these probabilities are based on the normal distribution assumption for returns. It's the estimated probability, not the actual probability.
Manually Entered Standard Deviation shows the range of any entered standard deviation. The probability of that range will be presented on the panel.
People usually assume the mean of returns to be zero. To be more accurate, we can account for the drift in price by calculating the geometric mean of returns. Drift matters in the long run, so short lookback periods are not recommended. Assuming a zero mean is recommended when time is not greater than 1.
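As a rough illustration of the band construction under the log-normal assumption (volatility scaled by the square root of time, optional geometric-mean drift, no autocorrelation adjustment), a Python sketch might look like this; the script's exact drift and scaling choices may differ.

```python
import numpy as np

def expected_move_bands(close, sigma_per_bar, z=1.0, t=1, drift_per_bar=0.0):
    """Upper/lower expected-move band t bars ahead under a log-normal price model.

    sigma_per_bar : per-bar volatility of log returns
    z             : number of standard deviations (1 ~ 68%, 2 ~ 95% under normality)
    drift_per_bar : per-bar geometric mean of log returns (0 assumes no drift)
    """
    mu = drift_per_bar * t
    width = z * sigma_per_bar * np.sqrt(t)
    upper = close * np.exp(mu + width)
    lower = close * np.exp(mu - width)
    return lower, upper

# toy usage: 1-bar-ahead 1-SD band with 1% per-bar volatility
print(expected_move_bands(100.0, 0.01, z=1, t=1))
```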
When we are estimating the future range for time > 1, we typically assume constant volatility and that the returns are independent and identically distributed. We scale the volatility in terms of time to get the future range. However, when there's autocorrelation in returns (when returns are not independent), this assumption fails to take the effect into account. Volatility scaled with autocorrelation is required when returns are not iid. We use an AR(1) model on the first-order autocorrelation to adjust for the effect. Returns typically don't have significant autocorrelation, so an adjustment for autocorrelation is not usually needed. A long length is recommended in the autocorrelation calculation.
Note: The significance of autocorrelation can be checked on an ACF indicator.
ACF
The multi-timeframe option enables people to use a higher-period expected move on a lower timeframe. People should only use a timeframe higher than the current timeframe for the input. An error warning will appear when the input TF is lower. The input format is multiplier * time unit. E.g.: 1D
Unit: M for months, W for Weeks, D for Days, integers with no unit for minutes (E.g. 240 = 240 minutes). S for Seconds.
The smoothing option uses a filter to smooth out the range. The filter used here is John Ehlers' supersmoother. It's an advanced smoothing technique that gets rid of aliasing noise. Its effect is similar to a simple moving average with half the lookback length, but smoother and with less lag.
Note: after smoothing, the range no longer represents the probability.
Panel positions can be adjusted in the settings.
X position adjusts the horizontal position of the panel. Higher X moves panel to the right and lower X moves panel to the left.
Y position adjusts the vertical position of the panel. Higher Y moves panel up and lower Y moves panel down.
Step line display changes the style of the bands from line to step line. Step line is recommended because it gets rid of the directional bias of slope of expected move when displaying the bands.
Warnings:
People should not blindly trust the probability. They should be aware of the risk involved in using the normal distribution assumption. Real returns have skewness and high kurtosis. While the skewness is not very significant, the high kurtosis should be noted. Real returns have much fatter tails than the normal distribution, which also makes the peak higher. This property makes the tail ranges (such as moves beyond 2 SD) highly underestimate the actual moves, and the body ranges (such as 1 SD) slightly overestimate them. People shouldn't trust ranges of more than 2 SD; they should beware of extreme events in the tails.
Different volatility models provide different properties. If people are interested in the accuracy and fit of the expected move, they can try the Expected Move Occurrence indicator. (The result also demonstrates the previous point about the drawback of using the normal distribution assumption.)
Expected move Occurrence Test
The prediction interval is only for the closing price, not the wicks. It only estimates the probability of the price closing within this range, not staying inside it intrabar. E.g., if the 1 SD range is 100 - 200, the price can go to 80 or 230 intrabar, but if the bar closes within 100 - 200 in the end, it's still considered a one-standard-deviation (68%) move.
Hurst Exponent - Detrended Fluctuation Analysis
In stochastic processes, chaos theory and time series analysis, detrended fluctuation analysis (DFA) is a method for determining the statistical self-affinity of a signal. It is useful for analyzing time series that appear to be long-memory processes and noise.
█ OVERVIEW
We have introduced the concept of the Hurst Exponent in our previous open indicator, Hurst Exponent (Simple). It is an indicator that measures the market state from autocorrelation. Here, however, we apply a more advanced and accurate way to calculate the Hurst Exponent rather than a simple approximation. Therefore, we recommend using this version of the Hurst Exponent over our previous publication going forward. The method we use here is called detrended fluctuation analysis. (For folks who are not interested in the math behind the calculation, feel free to skip to the "features" and "how to use" sections. However, it is recommended that you read it all to gain a better understanding of the mathematical reasoning.)
█ Detrended Fluctuation Analysis
Detrended Fluctuation Analysis was first introduced by Peng, C.K. (Original Paper) in order to measure the long-range power-law correlations in DNA sequences. DFA measures the scaling behavior of the second-moment fluctuations; the scaling exponent is a generalization of the Hurst exponent.
The traditional way of measuring the Hurst exponent is the rescaled range method. However, DFA provides the following benefits over the traditional rescaled range (RS) method:
• Can be applied to non-stationary time series. While asset returns are generally stationary, DFA can measure Hurst more accurately in the instances where they are non-stationary.
• According to the asymptotic distributions of DFA and RS, the latter usually overestimates the Hurst exponent (even after the Anis-Lloyd correction), resulting in the expected value of the RS Hurst being close to 0.54 instead of the 0.5 it should be. Therefore, it's harder to determine the autocorrelation based on the expected value. With the DFA method, the expected value of the Hurst Exponent (HE) is significantly closer to 0.5, making that threshold much more useful.
• Lastly, DFA requires a lower sample size relative to the RS method. While the RS method generally requires thousands of observations to reduce the variance of HE, DFA only needs a sample size greater than a hundred to accomplish the same.
█ Calculation
DFA is a modified root-mean-squares (RMS) analysis of a random walk. In short, DFA computes the RMS error of linear fits over progressively larger bins (non-overlapped “boxes” of similar size) of an integrated time series.
Our signal time series is the log returns. First we subtract the mean from the log returns to calculate the demeaned returns. Then, we calculate the cumulative sum of the demeaned returns, so the cumulative sum is mean-centered and we can apply the DFA method to it. The subtraction of the mean eliminates the "global trend" of the signal. The advantage of applying the scaling analysis to the signal profile instead of the signal itself is that the original signal is allowed to be non-stationary when needed. (For example, this process converts an i.i.d. white noise process into a random walk.)
We slice the cumulative sum into windows of equal size and run a linear regression on each window to measure the linear trend. After each linear regression, we detrend the series by subtracting the regression line from the cumulative sum in each window. The fluctuation is the difference between the cumulative sum and the regression.
We use different window sizes on the same cumulative sum series. The window sizes are log-spaced, e.g. powers of 2: 2, 4, 8, 16, ... This is where the scale-free measurement comes in: how we measure the fractal nature and self-similarity of the time series, as well as how well the smaller scales represent the larger scale.
As the window size decreases, we use more regression lines to measure the trend. Therefore, the fit of the regression should be better, with smaller fluctuations. It allows one to zoom into the "picture" to see the details. The linear regressions are like rulers: if you use more rulers to measure the smaller-scale details, you will get a more precise measurement.
The exponent we are measuring determines the relationship between the window size and the fit of the regression (the rate of change). The more complex the time series, the more the fit depends on decreasing window sizes (using more linear regression lines to measure); the less complex, or the more trending, the time series, the less it depends on them. The fit is calculated as the average of the root mean square errors (RMS) of the regression from each window.
The root mean square error is calculated as the square root of the mean of the squared differences between the cumulative sum and the regression. The following chart displays the average RMS for different window sizes. As the chart shows, values for smaller window sizes show more detail due to the higher complexity of the measurements.
The last step is to measure the exponent. In order to measure the power-law exponent, we measure the slope on the log-log plot: the x axis is the log of the window size, the y axis is the log of the average RMS. We run a linear regression through the plotted points, and the slope of that regression is the exponent. It's easy to see the relationship between RMS and window size on the chart. A larger RMS means a worse fit of the regression. We know the RMS will increase (the fit will worsen) as we increase the window size (use fewer regressions to measure), so we focus on how fast the RMS increases as the window size increases.
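To tie the steps together, here is a compact Python sketch of the DFA procedure described above: build the mean-centered profile, detrend it linearly inside windows of several log-spaced sizes, take the RMS fluctuation per size, and read the exponent off the slope of the log-log fit. The window sizes and first-order detrending are standard choices and may not match the script's scale settings exactly.

```python
import numpy as np

def dfa_hurst(returns, scales=(4, 8, 16, 32, 64)):
    """First-order DFA estimate of the Hurst exponent from a return series."""
    r = np.asarray(returns, dtype=float)
    profile = np.cumsum(r - r.mean())          # integrated, mean-centered signal

    log_scale, log_fluct = [], []
    for s in scales:
        n_windows = len(profile) // s
        if n_windows < 2:
            continue
        rms = []
        for w in range(n_windows):
            seg = profile[w * s:(w + 1) * s]
            x = np.arange(s)
            coeffs = np.polyfit(x, seg, 1)      # linear trend inside the window
            detrended = seg - np.polyval(coeffs, x)
            rms.append(np.sqrt(np.mean(detrended ** 2)))
        log_scale.append(np.log(s))
        log_fluct.append(np.log(np.mean(rms)))

    # slope of the log-log regression is the scaling (Hurst) exponent
    slope, _ = np.polyfit(log_scale, log_fluct, 1)
    return slope

# toy usage: a pure random walk's returns should give a value near 0.5
rng = np.random.default_rng(1)
print(dfa_hurst(rng.normal(0, 1, 1024)))
```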
If the slope is < 0.5, it means the rate of increase in RMS is small as the window size increases. Therefore, the fit is much better when it's measured by a large number of linear regression lines, so the series is more complex (mean reversion, negative autocorrelation).
If the slope is > 0.5, it means the rate of increase in RMS is larger as the window size increases. Therefore, even when the window size is large, the larger trend can be measured well by a small number of regression lines, so the series has a trend with positive autocorrelation.
If the slope = 0.5, It means the series follows a random walk.
█ FEATURES
• Sample Size is the lookback period for calculation. Even though DFA requires a lower sample size than RS, a sample size larger > 50 is recommended for accurate measurement.
• When a larger sample size is used (for example = 1000 lookback length), the loading speed may be slower due to a longer calculation. Date Range is used to limit numbers of historical calculation bars. When loading speed is too slow, change the data range "all" into numbers of weeks/days/hours to reduce loading time. (Credit to allanster)
• “show filter” option applies a smoothing moving average to smooth the exponent.
• Log scale is my workaround for dynamic log-space scaling. Traditionally, the smallest log spacing for bars is powers of 2. An accurate regression requires at least 10 points, resulting in a minimum lookback of 1024. I made some changes to round the fractional log space into integer bars, which allows the log spacing to be less than 2.
• For a more accurate calculation, a larger "Base Scale" and "Max Scale" should be selected. However, when the sample size is small, a larger value can cause issues. Therefore, a general rule to follow is: a larger "Base Scale" and "Max Scale" should be selected for a larger sample size. It is recommended to try a larger scale if increasing the value doesn't cause issues.
The following chart shows the change in value using various scales. As shown, sometimes increasing the value makes the value itself messy and overshoot.
When using the lowest scale (4,2), the value seems stable. When we increase the scale to (8,2), the value is still alright. However, when we increase it to (8,4), it begins to look messy. And when we increase it to (16,4), it starts overshooting. Therefore, (8,2) seems to be optimal for our use.
█ How to Use
Similar to Hurst Exponent (Simple), 0.5 is the level for determining long-term memory.
• Under the efficient market hypothesis, the market follows a random walk and the Hurst exponent should be 0.5. When the Hurst Exponent is significantly different from 0.5, the market is inefficient.
• When Hurst Exponent is > 0.5. Positive Autocorrelation. Market is Trending. Positive returns tend to be followed by positive returns and vice versa.
• When Hurst Exponent is < 0.5. Negative Autocorrelation. Market is Mean reverting. Positive returns tend to be followed by negative returns and vice versa.
However, we can't really tell if the Hurst exponent value is generated by random chance by only looking at the 0.5 level. Even if we measure a pure random walk, the Hurst Exponent will never be exactly 0.5, it will be close like 0.506 but not equal to 0.5. That's why we need a level to tell us if Hurst Exponent is significant.
So we also computed the 95% confidence interval according to Monte Carlo simulation. The confidence level adjusts itself by sample size. When Hurst Exponent is above the top or below the bottom confidence level, the value of Hurst exponent has statistical significance. The efficient market hypothesis is rejected and market has significant inefficiency.
The state of the market is painted in different colors, as the following chart shows. The user can also tell the state from the table displayed on the right.
An important point is that the Hurst value only represents the market state according to past measurements. It only tells you the market state now and in the past. If the Hurst Exponent on a sample size of 100 shows a significant trend, it means that according to the past 100 bars the market has been trending significantly. It doesn't mean the market will continue to trend; it's not forecasting the market state in the future.
However, there is also another way to use it. The market is not always random and it is not always inefficient; the state switches around from time to time. But there's one pattern: when the market stays inefficient for too long, the market participants see this and will try to take advantage of it. Therefore, the inefficiency will be traded away. That's why the Hurst exponent won't stay in significant trend or mean reversion for too long. When it's significant, the market participants see that as well and the market adjusts itself back to normal.
The Hurst Exponent can be used as a mean-reverting oscillator itself. In a liquid market, the value tends to return back inside the confidence interval after significant moves (in smaller markets, it could stay inefficient for a long time). So when the Hurst Exponent shows significant values, the market has just entered a significant trend or mean-reversion state. However, when it stays outside of the confidence interval for too long, it would suggest the market might be closer to the end of the trend or mean reversion instead.
A larger sample size makes the Hurst Exponent statistics more reliable. Therefore, if the user wants to know whether long-term memory exists in general on the selected ticker, they can use a large sample size and maximize the log scale. E.g.: 1024 sample size, scale (16,4).
The following chart is Bitcoin on the daily timeframe with a 1024 lookback. It suggests that the Bitcoin market tends to have long-term memory in general. It generally has a significant trend and is more inefficient in its early stage.
VB Finviz-style MTF Screener
📊 VB Multi-Timeframe Stock Screener (Daily + 4H + 1H)
A structured, high-signal stock screener that blends Daily fundamentals, 4H trend confirmation, and 1H entry timing to surface strong trading opportunities with institutional discipline.
🟦 1. Daily Screener — Core Stock Selection
All fundamental and structural filters run strictly on Daily data for maximum stability and signal quality.
Daily filters include:
📈 Average Volume & Relative Volume
💲 Minimum Price Threshold
📊 Beta vs SPY
🏢 Market Cap (Billions)
🔥 ATR Liquidity Filter
🧱 Float Requirements
📘 Price Above Daily SMA50
🚀 Minimum Gap-Up Condition
This layer acts like a Finviz-style engine, identifying stocks worth trading before momentum or timing is considered.
🟩 2. 4H Trend Confirmation — Momentum Check
Once a stock passes the Daily screen, the 4-hour timeframe validates trend strength:
🔼 Price above 4H MA
📈 MA pointing upward
This removes structurally good stocks that are not in a healthy trend.
🟧 3. 1H Entry Alignment — Timing Layer
The Hourly timeframe refines near-term timing:
🔼 Price above 1H MA
📉 Short-term upward movement detected
This step ensures the stock isn’t just good on paper—it’s moving now.
🧪 MTF Debug Table (Your Transparency Engine)
A live diagnostic table shows:
All Daily values
All 4H checks
All 1H checks
Exact PASS/FAIL per condition
Perfect for tuning thresholds or understanding why a ticker qualifies or fails.
🎯 Who This Screener Is For
Swing traders
Momentum/trend traders
Systematic and rules-based traders
Traders who want clean, multi-timeframe alignment
By combining Daily fundamentals, 4H trend structure, and 1H momentum, this screener filters the market down to the stocks that are strong, aligned, and ready.
Omega Ratio
The Omega Ratio is a risk-return performance measure of an investment asset, portfolio, or strategy. It is defined as the probability-weighted ratio of gains versus losses for some threshold return target. The ratio is an alternative to the widely used Sharpe ratio and is based on information the Sharpe ratio discards.
█ OVERVIEW
As we have mentioned many times, stock market returns are usually not normally distributed. Therefore the models that assume a normal distribution of returns may provide us with misleading information. The Omega Ratio improves upon the common normality assumption among other risk-return ratios by taking into account the distribution as a whole.
█ CONCEPTS
Two distributions with the same mean and variance would, according to the most commonly used Sharpe Ratio, suggest that the underlying assets offer the same risk-return trade-off. But as we have mentioned in our Moments indicator, variance and standard deviation are not a sufficient measure of risk in the stock market, since other shape features of a distribution, like skewness and excess kurtosis, come into play. The Omega Ratio tackles this problem by employing all four Moments of the distribution and therefore taking into account the differences in the shape features of the distributions. Another important feature of the Omega Ratio is that it does not require any estimation but is rather calculated directly from the observed data. This gives it an advantage over standard statistical estimators that require estimation of parameters and therefore introduce sampling uncertainty into the calculations.
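For orientation, the Omega ratio at a threshold (the MAR) is commonly computed as the ratio of the average gain above the threshold to the average loss below it, and the Omega Curve is that ratio swept over increasing MAR levels. The following Python sketch illustrates this calculation; the MAR grid is an illustrative assumption.

```python
import numpy as np

def omega_ratio(returns, mar=0.0):
    """Probability-weighted gains over losses relative to a minimum acceptable return."""
    r = np.asarray(returns, dtype=float)
    excess = r - mar
    gains = excess[excess > 0].sum()
    losses = -excess[excess < 0].sum()
    return np.inf if losses == 0 else gains / losses

def omega_curve(returns, mars):
    """Omega evaluated at increasing MAR levels (the 'Omega Curve')."""
    return [(m, omega_ratio(returns, m)) for m in mars]

# toy usage: simulated daily returns, MAR levels from -1% to +1%
rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.01, 252)
for m, om in omega_curve(rets, np.linspace(-0.01, 0.01, 5)):
    print(f"MAR={m:+.4f}  Omega={om:.3f}")
```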
█ WAYS TO USE THIS INDICATOR
Omega calculates a probability-adjusted ratio of gains to losses, relative to the Minimum Acceptable Return (MAR). This means that at a given MAR, using the simple rule of preferring more to less, an asset with a higher value of Omega is preferable to one with a lower value. The indicator displays the values of Omega at increasing levels of MAR, creating the so-called Omega Curve. Knowing this, one can compare Omega Curves of different assets and decide which is preferable given the MAR of your strategy. The indicator plots two Omega Curves: one for the on-chart symbol and another for an off-chart symbol that you can use for comparison.
When comparing curves of different assets, make sure their trading days are the same in order to ensure the same period for the Omega calculations. Value interpretation: Omega < 1 indicates that the risk outweighs the reward and therefore there are more excess negative returns than positive. Omega > 1 indicates that the reward outweighs the risk and that there are more excess positive returns than negative. Omega = 1 indicates that the minimum acceptable return equals the mean return of the asset, and that the probability of gain is equal to the probability of loss.
█ FEATURES
• "Low-Risk security" lets you select the security that you want to use as a benchmark for Omega calculations.
• "Omega Period" is the size of the sample that is used for the calculations.
• “Omega Period” is the size of the sample that is used for the calculations.
• “Increments” is the number of Minimal Acceptable Return levels the calculation is carried out on.
• “Other Symbol” lets you select the source of the second curve.
• “Color Settings” you can set the color for each curve.
Fast Autocorrelation Estimator
█ Overview:
The Fast ACF and PACF Estimation indicator efficiently calculates the autocorrelation function (ACF) and partial autocorrelation function (PACF) using an online implementation. It helps traders identify patterns and relationships in financial time series data, enabling them to optimize their trading strategies and make better-informed decisions in the markets.
█ Concepts:
Autocorrelation, also known as serial correlation, is the correlation of a signal with a delayed copy of itself as a function of delay.
This indicator displays autocorrelation based on lag number. The autocorrelation is not displayed over time on the x-axis; it's based on the lag number, which ranges from 1 to 30. The calculations can be done with "Log Returns", "Absolute Log Returns" or "Original Source" (the price of the asset displayed on the chart).
When calculating autocorrelation, the resulting value will range from +1 to -1, in line with the traditional correlation statistic. An autocorrelation of +1 represents a perfect correlation (an increase seen in one time series leads to a proportionate increase in the other time series). An autocorrelation of -1, on the other hand, represents a perfect inverse correlation (an increase seen in one time series results in a proportionate decrease in the other time series). Lag number indicates which historical data point is autocorrelated. For example, if lag 3 shows significant autocorrelation, it means current data is influenced by the data three bars ago.
The Fast Online Estimation of ACF and PACF Indicator is a powerful tool for analyzing the linear relationship between a time series and its lagged values in TradingView. The indicator implements an online estimation of the Autocorrelation Function (ACF) and the Partial Autocorrelation Function (PACF) up to 30 lags, providing a real-time assessment of the underlying dependencies in your time series data. The Autocorrelation Function (ACF) measures the linear relationship between a time series and its lagged values, capturing both direct and indirect dependencies. The Partial Autocorrelation Function (PACF) isolates the direct dependency between the time series and a specific lag while removing the effect of any indirect dependencies.
This distinction is crucial in understanding the underlying relationships in time series data and making more informed decisions based on those relationships. For example, let's consider a time series with three variables: A, B, and C. Suppose that A has a direct relationship with B, B has a direct relationship with C, but A and C do not have a direct relationship. The ACF between A and C will capture the indirect relationship between them through B, while the PACF will show no significant relationship between A and C, as it accounts for the indirect dependency through B. This means that when the ACF is significant at lag 5, the detected dependency could be caused by an observation that came in between, and the PACF accounts for that. This indicator leverages the Fast Moments algorithm to efficiently calculate autocorrelations, making it ideal for analyzing large datasets or real-time data streams. By using the Fast Moments algorithm, the indicator can quickly update ACF and PACF values as new data points arrive, reducing the computational load and ensuring timely analysis. The PACF is derived from the ACF using the Durbin-Levinson algorithm, which helps in isolating the direct dependency between a time series and its lagged values, excluding the influence of other intermediate lags.
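To illustrate the two functions and the Durbin-Levinson step, here is a short Python sketch that computes sample ACF values and derives the PACF from them with the textbook recursion; it mirrors the standard batch formulas rather than the script's online Fast Moments implementation.

```python
import numpy as np

def acf(x, max_lag=30):
    """Sample autocorrelation function for lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.sum(x ** 2)
    return np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, max_lag + 1)])

def pacf_from_acf(rho):
    """Partial autocorrelations via the Durbin-Levinson recursion."""
    max_lag = len(rho)
    phi = np.zeros((max_lag + 1, max_lag + 1))
    pacf = np.zeros(max_lag)
    phi[1, 1] = pacf[0] = rho[0]
    for k in range(2, max_lag + 1):
        num = rho[k - 1] - np.sum(phi[k - 1, 1:k] * rho[k - 2::-1])
        den = 1.0 - np.sum(phi[k - 1, 1:k] * rho[:k - 1])
        phi[k, k] = num / den
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, k - 1:0:-1]
        pacf[k - 1] = phi[k, k]
    return pacf

# toy usage with a 95% confidence band of +/- 1.96 / sqrt(n)
rng = np.random.default_rng(0)
log_ret = rng.normal(0, 0.01, 500)
rho = acf(log_ret, max_lag=10)
print(rho, pacf_from_acf(rho), 1.96 / np.sqrt(len(log_ret)))
```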
█ How to Use the Indicator:
Interpreting autocorrelation values can provide valuable insights into the market behavior and potential trading strategies.
When applying autocorrelation to log returns, if a specific lag shows a high positive autocorrelation, it suggests that the time series tends to move in the same direction over that lag period. In this case, a trader might consider using a momentum-based strategy to capitalize on the continuation of the current trend. On the other hand, if a specific lag shows a high negative autocorrelation, it indicates that the time series tends to reverse its direction over that lag period. In this situation, a trader might consider using a mean-reversion strategy to take advantage of the expected reversal in the market.
ACF of log returns:
Absolute returns are often used as a measure of volatility. There is usually significant positive autocorrelation in absolute returns. We will often see an exponential decay of autocorrelation in volatility. This means that current volatility is dependent on historical volatility and the effect slowly dies off as the lag increases. This effect shows the property of "volatility clustering", which means large changes tend to be followed by large changes, of either sign, and small changes tend to be followed by small changes.
ACF of absolute log returns:
Autocorrelation in price is always significantly positive and has an exponential decay. This predictably positive and relatively large value makes the autocorrelation of price (not returns) generally less useful.
ACF of price:
█ Significance:
The significance of a correlation metric tells us whether we should pay attention to it. In this script, we use 95% confidence interval bands that adjust to the size of the sample. If the observed correlation at a specific lag falls within the confidence interval, we consider it not significant and the data to be random or IID (independent and identically distributed). This means that we can't confidently say that the correlation reflects a real relationship, rather than just random chance. However, if the correlation is outside of the confidence interval, we can state with 95% confidence that there is an association between the lagged values. In other words, the correlation is likely to reflect a meaningful relationship between the variables, rather than a coincidence. A significant difference in either ACF or PACF can provide insights into the underlying structure of the time series data and suggest potential strategies for traders. By understanding these complex patterns, traders can better tailor their strategies to capitalize on the observed dependencies in the data, which can lead to improved decision-making in the financial markets.
Significant ACF but not significant PACF: This might indicate the presence of a moving average (MA) component in the time series. A moving average component is a pattern where the current value of the time series is influenced by a weighted average of past values. In this case, the ACF would show significant correlations over several lags, while the PACF would show significance only at the first few lags and then quickly decay.
Significant PACF but not significant ACF: This might indicate the presence of an autoregressive (AR) component in the time series. An autoregressive component is a pattern where the current value of the time series is influenced by a linear combination of past values at specific lags.
Often we find both significant ACF and PACF, in that scenario simply and AR or MA model might not be sufficient and a more complex model such as ARMA or ARIMA can be used.
█ Features:
Source selection: User can choose either 'Log Returns', 'Absolute Returns' or 'Original Source' for the input data.
Autocorrelation Selection: User can choose either 'ACF' or 'PACF' for the plot selection.
Plot Selection: User can choose either 'Autocorrelogram' or 'Historical Autocorrelation' for plotting the historical autocorrelation at a specified lag.
Max Lag: User can select the maximum number of lags to plot.
Precision: User can set the number of decimal points to display in the plot.
FRAN CRASH PLAY RULES
Purpose
It creates a fixed information panel in the top right corner of your chart that shows the "FRAN CRASH PLAY RULES" - a checklist of criteria for identifying potential crash play setups.
Key Features
Display Panel:
Shows 5 trading rules as bullet points
Permanently visible in the top right corner
Stays fixed while you scroll or zoom the chart
Current Rules Displayed:
DYNAMIC 3 TO 5 LEG RUN
NEAR VERTICAL ACCELERATION
FINAL BAR OF THE RUN UP MUST BE THE BIGGEST
3 FINGER SPREAD / DUAL SPACE
ATLEAST 2 OF 5 CRITERIA NEEDS TO HIT
Customization Options:
Editable Text - Change any of the 5 rules through the settings
Text Color - Adjust the color of the text
Text Size - Choose from tiny, small, normal, large, or huge
Background Color - Customize the panel background and transparency
Frame Color - Change the border color
Show/Hide Frame - Toggle the border on or off
Use Case
This indicator serves as a constant visual reminder of your trading strategy criteria, helping you stay disciplined and only take trades that meet your specific crash play requirements. It's essentially a "cheat sheet" that lives on your chart so you don't have to memorize or look elsewhere for your trading rules.
SYMBOL NOTES - UNCORRELATED TRADING GROUPS
Write symbol-specific notes that only appear on that chart. Organized into 6 uncorrelated groups for safe multi-pair trading.
📝 SYMBOL NOTES - UNCORRELATED TRADING GROUPS
This indicator solves two problems every serious trader faces:
1. Keeping Track of Your Analysis
Write notes for each trading pair and they'll only appear when you view that specific chart. No more forgetting your key levels, trade ideas, or analysis!
2. Avoiding Correlated Risk
The symbols are organized into 6 groups where ALL pairs within each group are completely UNCORRELATED. Trade any combination from the same group without worrying about double exposure.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎯 THE PROBLEM THIS SOLVES
Have you ever:
- Opened XAUUSD and EURUSD at the same time, then Fed news hit and BOTH positions went against you?
- Traded GBPUSD and GBPJPY together, then BOE announcement stopped out both trades?
- Forgotten what levels you were watching on a pair?
This indicator helps you avoid these costly mistakes!
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📁 THE 6 UNCORRELATED GROUPS
Each group contains pairs that share NO common currency:
```
GRUP 1: XAUUSD • EURGBP • NZDJPY • AUDCHF • NATGAS
GRUP 2: EURUSD • GBPJPY • AUDNZD • CADCHF
GRUP 3: GBPUSD • EURJPY • AUDCAD • NZDCHF
GRUP 4: USDJPY • EURCHF • GBPAUD • NZDCAD
GRUP 5: USDCAD • EURAUD • GBPCHF
GRUP 6: NAS100 • DAX40 • UK100 • JPN225
```
**Example - GRUP 1:**
- XAUUSD → Uses USD + Gold
- EURGBP → Uses EUR + GBP
- NZDJPY → Uses NZD + JPY
- AUDCHF → Uses AUD + CHF
- NATGAS → Commodity (independent)
= 7 different currencies, ZERO overlap!
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**✅ HOW TO USE**
1. Add indicator to any chart
2. Open Settings (gear icon ⚙️)
3. Find your symbol's group and input field
4. Write your note (support levels, trade ideas, etc.)
5. Switch charts - your note appears only on that symbol!
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚙️ SETTINGS
- Note Position: Choose where the note box appears (6 positions)
- Text Size: Tiny, Small, Normal, or Large
- Show Group Name: Display which correlation group the symbol belongs to
- Show Symbol Name: Display current symbol
- Colors: Customize background, text, group label, and border colors
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
💡 TRADING STRATEGY TIPS
Safe Multi-Pair Trading:
1. Pick ONE group for the day
2. Look for setups on ANY symbol in that group
3. Open positions freely - they won't correlate!
4. Even if major news hits, only ONE position is affected
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔧 COMPATIBLE WITH
- All major forex brokers
- Prop firms (FTMO, Alpha Capital, etc.)
- Works on any timeframe
- Futures symbols supported (MGC, M6E, etc.)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
ES-VIX Expected Daily Move
This indicator calculates the expected daily price movement for ES futures based on current volatility levels as measured by the VIX (CBOE Volatility Index).
Formula:
Expected Daily Move = (ES Price × VIX Price) / √252 / 100
The calculation converts the annualized VIX volatility into an expected daily move by dividing by the square root of 252 (the approximate number of trading days per year).
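As a rough illustration, a minimal Pine v5 sketch of this conversion might look like the following; the ticker strings CME_MINI:ES1! and CBOE:VIX are assumptions, so substitute the feeds you actually chart.
```
//@version=5
indicator("ES-VIX expected daily move sketch", overlay=false)

// hypothetical ticker strings - substitute the feeds you actually use
esPrice  = request.security("CME_MINI:ES1!", "D", close)
vixLevel = request.security("CBOE:VIX",      "D", close)

// annualized volatility (VIX, in %) scaled to a one-day expected move
expectedMovePts = esPrice * vixLevel / math.sqrt(252) / 100
expectedMovePct = vixLevel / math.sqrt(252)

plot(expectedMovePts, "Expected daily move (pts)", style=plot.style_histogram, color=color.blue)
plot(expectedMovePct, "Expected daily move (%)", color=color.orange, display=display.data_window)
```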
Features:
Real-time calculation using current ES futures price and VIX level
Histogram visualization in a separate pane for easy trend analysis
Information table displaying:
Current ES futures price
Current VIX level
Expected daily move in points
Expected daily move as a percentage
Fibonacci Projection with Volume & Delta Profile (Zeiierman)
█ Overview
Fibonacci Projection with Volume & Delta Profile (Zeiierman) blends classic Fibonacci swing analysis with modern volume-flow reading to create a unified, projection-based market framework. The indicator automatically detects the latest swing high and swing low, builds a complete Fibonacci structure, and then projects future extension targets with clear visual pathways.
What makes this tool unique is the integration of two volume-based systems directly into the Fibonacci structure. A Fib-aligned Volume Profile shows how bullish and bearish volume accumulated inside the swing range, while a separate Delta Profile reveals the imbalance of buy–sell pressure inside each Fibonacci interval. Together, these elements transform the standard Fibonacci tool into a multi-dimensional structural and volume-flow map.
█ How It Works
The indicator first detects the most recent swing high and swing low using the Period setting. That swing defines the Fibonacci range, from which the script draws retracement levels (0.236–0.786) and builds a forward projection path using the chosen Projection Level and a 1.272 extension.
Along this path, it draws projection lines, target boxes, and percentage labels that show how far each projected leg extends relative to the previous one.
Inside the same swing range, the script builds a Fib-based Volume Profile by splitting price into rows and assigning each bar’s volume as bullish (close > open) or bearish (close ≤ open). On top of that, it calculates a Volume Delta Profile between each pair of fib levels, showing whether buyers or sellers dominated that band and how strong that imbalance was.
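A heavily simplified sketch of that bucketing step, assuming a fixed number of rows and the close > open / close ≤ open classification described above (the real profile accumulates these per-row volumes across the whole swing), could look like this:
```
//@version=5
indicator("Swing volume profile sketch", overlay=false)

period = input.int(50, "Swing period")
rows   = input.int(10, "Profile rows")

swingHigh = ta.highest(high, period)
swingLow  = ta.lowest(low, period)
rowHeight = (swingHigh - swingLow) / rows

// classify each bar's volume as bullish or bearish, as described above
bullVol = close > open  ? volume : 0.0
bearVol = close <= open ? volume : 0.0

// index of the price row (0 = bottom of the swing range) this bar's close falls into
rowIdx = rowHeight > 0 ? math.min(rows - 1, math.floor((close - swingLow) / rowHeight)) : 0

plot(bullVol, "Bull volume", style=plot.style_columns, color=color.green)
plot(bearVol, "Bear volume", style=plot.style_columns, color=color.red)
plot(rowIdx,  "Row index",   color=color.gray, display=display.data_window)
```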
█ How to Use
This tool helps traders quickly understand market structure and where the price may be heading next. The projection engine shows the most likely future targets, highlights strong or weak legs in the move, and updates automatically whenever a new swing forms. This ensures you always see the most relevant and up-to-date projection path.
The Fib Volume Profile shows where volume supported the move and where it did not. Thick bullish buckets reveal zones where buyers stepped in aggressively, often becoming retestable support. Thick bearish buckets highlight zones of resistance or rejection, particularly useful if projected levels align with prior liquidity.
The Delta Profile adds a second dimension to volume reading by showing where buy–sell pressure was truly imbalanced. A projected Fibonacci target that aligns with a strong bullish delta, for example, may suggest continuation. A projection into a band dominated by bearish delta may warn of reversal or hesitation.
█ Settings
Period – bars used to determine swing high/low
Projection Level – chosen Fib ratio for projection path
-----------------
Disclaimer
The content provided in my scripts, indicators, ideas, algorithms, and systems is for educational and informational purposes only. It does not constitute financial advice, investment recommendations, or a solicitation to buy or sell any financial instruments. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
SCOTTGO - Day Trade Stock Quote V4
This Pine Script indicator, titled "SCOTTGO - Day Trade Stock Quote V4," is a comprehensive, customizable dashboard designed for active traders. It acts as a single, centralized reference point, displaying essential financial and technical data directly on your chart in a compact table overlay.
📊 Key Information Provided
The indicator is split into sections, aggregating various critical data points to provide a holistic picture of the stock's current state and momentum:
1. Ownership & Short Flow
This section provides fundamental context and short-interest data:
Market Cap, Shares Float, and Shares Outstanding: Key figures on the company's size and publicly tradable shares.
Short Volume %: Indicates the percentage of trading activity driven by short sellers.
Daily Change %: Shows the day's price movement relative to the previous close.
2. Price & Volatility
This tracks historical and immediate price levels:
Previous Close, Day High/Low: Key daily reference prices.
52-Week High/Low: Important long-term boundaries.
Earnings Date: A crucial fundamental date (currently displayed as a placeholder).
3. Momentum & Volume
These metrics are essential for understanding intraday buying and selling pressure:
Volume & Average Volume: The current trade volume compared to its historical average.
Relative Volume (RVOL): Measures how much volume is currently trading compared to the average rate for that time period (shown for both Daily and 5-Minute rates).
Volume Buzz (%): A percentage showing how far current volume runs above or below its average (a simple calculation sketch follows this list).
ADR % & ATR %: Measures of volatility.
RSI, U/D Ratio, and P/E Ratio: Momentum and valuation indicators.
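As referenced above, one simple way to compute RVOL and a volume "buzz" percentage is sketched below; the actual script reportedly compares against the average rate for the same time period, which this minimal version does not do.
```
//@version=5
indicator("RVOL / Volume Buzz sketch", overlay=false)

avgLen = input.int(20, "Average volume length")

avgVol = ta.sma(volume, avgLen)
rvol   = avgVol > 0 ? volume / avgVol : 0.0   // 1.0 = running at the average pace
buzz   = (rvol - 1) * 100                     // % above (+) or below (-) the average

plot(rvol, "RVOL", color=color.blue)
plot(buzz, "Volume Buzz %", color=color.orange, display=display.data_window)
hline(1.0, "Average volume")
```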
4. Context
This provides background information on the security:
Includes the Symbol, Exchange, Industry, and Sector (note: some fields use placeholder data as this information is not always available via Pine Script).
⚙️ Customization
The dashboard is highly customizable via the indicator settings:
You can control the visibility of every single metric using the Section toggles.
You can change the position (Top Left, Top Right, etc.), size, and colors of the entire table.
In summary, this script is a powerful tool for day traders who need to monitor a large number of fundamental, technical, and volatility metrics simultaneously without cluttering the main chart area.
NYSE CME Market Session Clock
This indicator is only useful on short-term timeframes, since the countdown to the session open and close updates only when a new candle appears.
Bitcoin vs. S&P 500 Performance Comparison
**Full Description:**
**Overview**
This indicator provides an intuitive visual comparison of Bitcoin's performance versus the S&P 500 by shading the chart background based on relative strength over a rolling lookback period.
**How It Works**
- Calculates percentage returns for both Bitcoin and the S&P 500 (ES1! futures) over a specified lookback period (default: 75 bars)
- Compares the returns and shades the background accordingly:
- **Green/Teal Background**: Bitcoin is outperforming the S&P 500
- **Red/Maroon Background**: S&P 500 is outperforming Bitcoin
- Displays a real-time performance difference label showing the exact percentage spread
**Key Features**
✓ Rolling performance comparison using customizable lookback period (default 75 bars)
✓ Clean visual representation with adjustable transparency
✓ Works on any timeframe (optimized for daily charts)
✓ Real-time performance differential display
✓ Uses ES1! (E-mini S&P 500 continuous futures) for accurate comparison
✓ Fine-tuning adjustment factor for precise calibration
**Use Cases**
- Identify market regimes where Bitcoin outperforms or underperforms traditional equities
- Spot trend changes in relative performance
- Assess risk-on vs risk-off periods
- Compare Bitcoin's momentum against broader market conditions
- Time entries/exits based on relative strength shifts
**Settings**
- **S&P 500 Symbol**: Default ES1! (can be changed to SPX or other indices)
- **Lookback Period**: Number of bars for performance calculation (default: 75)
- **Adjustment Factor**: Fine-tune calibration to match specific data feeds
- **Transparency Controls**: Customize background shading intensity
- **Show/Hide Label**: Toggle performance difference display
**Best Practices**
- Use on daily timeframe for swing trading and position analysis
- Combine with other momentum indicators for confirmation
- Watch for color transitions as potential regime change signals
- Consider using multiple timeframes for comprehensive analysis
**Technical Details**
The indicator calculates rolling percentage returns using the formula: ((Current Price / Price N bars ago) - 1) × 100, where N is the lookback period, then compares Bitcoin's return to the S&P 500's return over the same period. The background color dynamically updates based on which asset is showing stronger performance.
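A minimal Pine v5 sketch of that calculation, assuming the chart symbol is Bitcoin and ES1! as the comparison feed, might look like this; the published script's adjustment factor and label logic are not reproduced here.
```
//@version=5
indicator("BTC vs S&P rolling performance sketch", overlay=true)

lookback = input.int(75, "Lookback (bars)")
spSymbol = input.symbol("CME_MINI:ES1!", "S&P 500 symbol")

spClose = request.security(spSymbol, timeframe.period, close)

// rolling percentage return: ((current price / price N bars ago) - 1) * 100
btcRet = (close   / close[lookback]   - 1) * 100
spRet  = (spClose / spClose[lookback] - 1) * 100

// shade the background by whichever asset is outperforming over the window
btcLeads = btcRet > spRet
bgcolor(btcLeads ? color.new(color.teal, 85) : color.new(color.maroon, 85))
plot(btcRet - spRet, "Performance spread (%)", display=display.data_window)
```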
SIDD Table Volume multiframe (Modified)
🚀 SIDD Volume Table – The Most Powerful Multi-Timeframe Volume Dashboard
Designed by Siddhartha Mukherjee (SIDD)
Free for the community.
Get an unfair edge with the cleanest, fastest, and most accurate multi-timeframe volume analyzer available on TradingView. This tool reveals where buyers and sellers are truly active across multiple timeframes—helping you confirm trends, avoid traps, and enter with confidence.
🔥 Why Traders Love This Indicator
✅ 1. Multi-Timeframe Volume Domination
Instantly view Buy% / Sell% / Total Volume for:
1m • 5m • 15m • 1H • 4H • 1D • 1W
Choose any combination you want!
✅ 2. Advanced Buy/Sell Volume Logic
Not simple volume…
This tool breaks it into:
Buy Volume% (green dominance)
Sell Volume% (red dominance)
Using candle structure (H-L-C), giving far more accurate pressure detection.
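The exact H-L-C logic is not published here, but a common close-location style split along those lines looks like this minimal sketch:
```
//@version=5
indicator("Buy/Sell volume split sketch", overlay=false)

// split each bar's volume by where the close sits inside the high-low range
rng     = high - low
buyVol  = rng > 0 ? volume * (close - low) / rng : volume * 0.5
sellVol = volume - buyVol

buyPct  = volume > 0 ? 100 * buyVol / volume : 0.0
sellPct = 100 - buyPct

plot(buyPct,  "Buy %",  color=color.green)
plot(sellPct, "Sell %", color=color.red)
hline(50)
```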
✅ 3. Realtime Candle Countdown
Never guess when a candle will close again.
Get:
Seconds (1m)
MM:SS (5m/15m/1H)
DD:HH:MM:SS (4H, 1D, 1W)
Perfect for scalpers, swing traders, and index traders.
✅ 4. Beautiful & Customizable Dashboard
Choose position anywhere on screen
Auto size or choose Tiny → Huge
Color-coded Bias (Green Buyers, Red Sellers)
Clean layout built for modern charts
Your chart stays clean while your data stays powerful.
💡 What This Helps You Identify
Where buyers are gaining strength
Where sellers are dominating
Multi-timeframe alignment (the key to big moves)
Real reversal pressure
Volume divergence across timeframes
Trend confirmation before breakouts
Perfect for:
NIFTY / BANKNIFTY / Stocks / Crypto / FX / Commodities
🧠 Who Should Use This?
Intraday traders
Swing traders
Options traders
Futures traders
Crypto scalpers
Professional volume analysts
If volume matters to you → this indicator becomes a must-have.
🛠 Built with Precision
Non-repainting
Multi-TF aligned
Fast + lightweight arrays
Uses BTC/ETH feed to stabilize ticks
Zero chart clutter
❤️ Free for Everyone
This tool is released 100% free to help the community trade with clarity and confidence.
Leave a like ⭐, comment 💬, or follow if you want more such institutional-grade tools.
⚠️ Disclaimer
This is for educational/analytical use only.
Not financial advice. Trade at your own risk.
Price Channel Scalping
My X account: @CTF_bule_lotus
1. Core Logic (Price Channel Breakout)
The strategy relies on a single, simple indicator: the highest high of the past 20 bars.
When the current price breaks above this 20-period high, a stop entry is used to initiate a long position.
This design avoids prediction.
The model waits for the market to demonstrate momentum before participating.
2. Trade Direction (Long Only)
The strategy exclusively trades long positions and does not take shorts.
This choice is based on:
ETH’s historically upward-biased structure
Avoiding noise from two-sided signals during high-volatility periods
Keeping the direction consistent, which is beneficial for scalping-style systems
3. Risk Management (Fixed TP / SL)
Immediately after entry, the strategy sets two fixed exit conditions:
Take Profit: +10 price units
Stop Loss: –10 price units
Both are automatically converted using the market’s minimum tick size to ensure cross-instrument applicability.
This fixed TP/SL structure is typical in scalping systems:
small wins, fast exits, controlled losses, high turnover.
4. Transaction Costs
A 0.03% fee is applied to every trade throughout the entire backtest.
This fee level reflects the cost structure of major centralized exchanges, making results closer to real-world conditions.
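Putting sections 1 to 4 together, a minimal Pine v5 strategy sketch of this logic (not the author's published code) might look like the following:
```
//@version=5
strategy("Price channel breakout sketch", overlay=true,
     commission_type=strategy.commission.percent, commission_value=0.03)

len     = input.int(20,     "Channel length")
tpUnits = input.float(10.0, "Take profit (price units)")
slUnits = input.float(10.0, "Stop loss (price units)")

// highest high of the previous 20 bars (excluding the live bar)
upper = ta.highest(high, len)[1]
plot(upper, "Channel high", color=color.orange)

// stop entry: go long only once price trades through the channel high
if strategy.position_size == 0
    strategy.entry("Long", strategy.long, stop=upper)

// fixed exits, converted from price units to ticks via the minimum tick size
strategy.exit("Exit", "Long", profit=tpUnits / syminfo.mintick, loss=slUnits / syminfo.mintick)
```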
5. Data & Time Range (2016–2025 Full Sample)
The backtest uses ETH’s complete historical dataset from 2016 to 2025.
No subjective filtering is applied—large moves, flash crashes, and black-swan events are all included.
The strategy does not rely on heavy parameter tuning, reducing the risk of overfitting.
6. Backtest Results (Including Fees)
Under this fixed, rule-based structure, the cumulative return is:
1,202,002.77% (2016 → 2025)
Even after including transaction fees, performance is driven by:
High trade frequency and small profit targets
Strict loss containment
Capturing momentum during breakout regimes
7. Transparency & Reproducibility
I will publish the full Pine Script implementation, including:
Entry logic
Exit logic
Fee configuration
All parameters
Backtesting framework
Transparency and reproducibility remain the core principles of this research.
DarkPool's Dashboard v2
DarkPool's Dashboard v2 is a comprehensive "Heads-Up Display" (HUD) designed to aggregate critical market data into a single, customizable table overlaid on the price chart. Its primary goal is to declutter the trading workspace by removing the need for multiple separate indicator panes (like RSI, MACD, and Volume below the chart).
The core of the system is a composite Momentum Score, which calculates a value between -100 and +100 based on a weighted average of RSI, MACD, Stochastic, and Rate of Change (ROC). This score drives the main "Signal" output (e.g., STRONG BUY, HOLD, SELL). Additionally, the dashboard integrates a suite of volume analysis tools—including VWAP, OBV, and Volume Delta—alongside volatility and trend filters to provide a complete market health check at a glance.
Key Features
Composite Momentum Score: A unified metric combining four oscillators to gauge the true strength of the move.
Volume Intelligence: Monitors Relative Volume (RVOL), On-Balance Volume (OBV), Volume Delta, and VWAP status.
Trend & Filter Engine: Visualizes trend direction using EMAs and filters signals based on Volatility (ATR) and Trend Strength (EMA Separation).
Dynamic UI: A fully scalable and customizable table that can be positioned anywhere on the screen, with options to toggle specific data rows on or off.
Alert System: Integrated alerts for Volume Spikes, Divergences, and VWAP crossovers.
How to Use
1. Reading the Main Signal
The top rows of the dashboard provide the immediate trade bias:
Signal: Displays text such as "STRONG BUY," "BUY," "HOLD," "SELL," or "STRONG SELL."
Momentum Score: A numeric value next to the signal (one possible scoring scheme is sketched after this list).
> 50: Strong Bullish Momentum.
20 to 50: Moderate Bullish Momentum.
-20 to 20: Neutral / Hold (Chop).
<-20: Bearish Momentum.
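One possible scoring scheme along these lines is sketched below; the weights, oscillator lengths and the SELL / STRONG SELL split are assumptions, since only the signal bands above are documented.
```
//@version=5
indicator("Composite momentum score sketch", overlay=false)

// hypothetical weights and scalings - the dashboard's actual formula is not published here
rsiScore   = (ta.rsi(close, 14) - 50) * 2                          // -100..+100
stochScore = (ta.stoch(close, high, low, 14) - 50) * 2             // -100..+100
rocScore   = math.max(-100, math.min(100, ta.roc(close, 10) * 10)) // clipped ROC
[macdLine, signalLine, histLine] = ta.macd(close, 12, 26, 9)
macdScore  = macdLine > signalLine ? 50 : -50

score = 0.3 * rsiScore + 0.2 * macdScore + 0.3 * stochScore + 0.2 * rocScore

// signal bands from the description: >50, 20..50, -20..20, <-20 (SELL split is assumed)
signalTxt = score > 50 ? "STRONG BUY" : score > 20 ? "BUY" : score > -20 ? "HOLD" : score > -50 ? "SELL" : "STRONG SELL"

plot(score, "Momentum score", color=score > 0 ? color.green : color.red)
hline(20)
hline(-20)

var tbl = table.new(position.top_right, 1, 1)
if barstate.islast
    table.cell(tbl, 0, 0, signalTxt, bgcolor=color.new(color.gray, 70), text_color=color.white)
```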
2. Volume Analysis
Volume Bar: Visualizes the current volume relative to the Moving Average.
Spike: If the bar turns Orange/Yellow, a Volume Spike (default 2x average) has occurred.
VWAP: Indicates if the price is trading "Above" or "Below" the Volume Weighted Average Price.
Money Flow (MFI): Checks for institutional buying/selling pressure. "OB" means Overbought, "OS" means Oversold.
3. Trend & Volatility
Trend: Shows "UP" or "DOWN" based on Fast/Slow EMA crossovers.
Volatility: Measures the daily range. "HIGH" volatility suggests expansion, while "LOW" suggests compression (potential breakout pending).
4. Filtering Bad Signals
The dashboard includes an "ATR Filter" and "Trend Confirmation" logic.
If the market is moving sideways (low ATR), the dashboard may default to "HOLD" or "NEUTRAL" even if oscillators are crossing, preventing false entries during consolidation.
Configuration Settings
Dashboard Settings
Table Position/Width/Scale: Adjust the size and location of the table to fit your screen resolution (e.g., increase scale for 4K monitors).
Colors/Transparency: Customize the background and text colors to match your chart theme.
Indicator Settings
Oscillators: Adjust lengths for RSI, MACD, and Stochastic to tune sensitivity.
Volume: Enable or disable specific volume metrics like OBV or Delta.
Display Options: You can toggle specific rows off (e.g., turn off "ADX" or "SMA" if you do not use them) to compact the table.
Filter Settings
Enable ATR Filter: Toggles volatility filtering.
Trend Confirmation Bars: How many bars the trend must persist before the dashboard flips its bias (helps avoid fake-outs).
Disclaimer This indicator is provided for educational and informational purposes only. It does not constitute financial advice, investment recommendations, or a guarantee of future results. Trading cryptocurrencies and financial markets involves a high level of risk. Always perform your own due diligence before making any trading decisions.
ATR Risk Manager v5.2 [Auto-Extrapolate]
If you ever had problems knowing how many contracts to use on a particular timeframe to keep your risk within acceptable levels, then this indicator should help. You just define your accepted risk based on ATR and a percentage of your drawdown, and the indicator tells you how many contracts to use. If the risk is too high, it will also tell you not to trade. This is only for futures: NQ, MNQ, ES, MES, GC, MGC, CL, MCL, MYM, and M2K.
Omega Correlation [OmegaTools]
Omega Correlation (Ω CRR) is a cross-asset analytics tool designed to quantify both the strength of the relationship between two instruments and the tendency of one to move ahead of the other. It is intended for traders who work with indices, futures, FX, commodities, equities and ETFs, and who require something more robust than a simple linear correlation line.
The indicator operates in two distinct modes, selected via the “Show” parameter: Correlation and Anticipation. In Correlation mode, the script focuses on how tightly the current chart and the chosen second asset move together. In Anticipation mode, it shifts to a lead–lag perspective and estimates whether the second asset tends to behave as a leader or a follower relative to the symbol on the chart.
In both modes, the core inputs are the chart symbol and a user-selected second symbol. Internally, both assets are transformed into normalized log-returns: the script computes logarithmic returns, removes short-term mean and scales by realized volatility, then clips extreme values. This normalisation allows the tool to compare behaviour across assets with different price levels and volatility profiles.
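A minimal sketch of that normalisation step, with assumed window lengths and clip level, could look like this:
```
//@version=5
indicator("Normalized log-return sketch", overlay=false)

meanLen = input.int(10,  "Short-term mean length")
volLen  = input.int(50,  "Realized volatility length")
clipAt  = input.float(3, "Clip level (in vol units)")

logRet = math.log(close / close[1])

// remove the short-term mean, scale by realized volatility, clip extreme values
demeaned   = logRet - ta.sma(logRet, meanLen)
realizedSd = ta.stdev(logRet, volLen)
normRet    = realizedSd > 0 ? demeaned / realizedSd : 0.0
clipped    = math.max(-clipAt, math.min(clipAt, normRet))

plot(clipped, "Normalized return", color=color.blue)
hline(0)
```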
In Correlation mode, the indicator computes a composite correlation score that typically ranges between –1 and +1. Values near +1 indicate strong and persistent positive co-movement, values near zero indicate an unstable or weak link, and values near –1 indicate a stable anti-correlation regime. The composite score is constructed from three components.
The first component is a normalized return co-movement measure. After transforming both instruments into normalized returns, the script evaluates how similar those returns are bar by bar. When the two assets consistently deliver returns of similar sign and magnitude, this component is high and positive. When they frequently diverge or move in opposite directions, it becomes negative. This captures short-term co-movement in a volatility-adjusted way.
The second component focuses on high–low swing alignment. Rather than looking only at closes, it examines the direction of changes in highs and lows for each bar. If both instruments are printing higher highs and higher lows together, or lower highs and lower lows together, the swing structure is considered aligned. Persistent alignment contributes positively to the correlation score, while repeated mismatches between the swing directions reduce it. This helps differentiate between superficial price noise and structural similarity in trend behaviour.
The third component is a classical Pearson correlation on closing prices, computed over a longer lookback. This serves as a stabilising backbone that summarises general co-movement over a broader window. By combining normalized return co-movement, swing alignment and standard price correlation with calibrated weights, the Correlation mode provides a richer view than a single linear measure, capturing both short-term dynamic interaction and longer-term structural linkage.
In Anticipation mode, Omega Correlation estimates whether the second asset tends to lead or lag the current chart. The output is again a continuous score, typically within the –1 to +1 range. Positive values suggest that the second asset is acting more as a leader, with its past moves bearing informative value for subsequent moves of the chart symbol. Negative values indicate that the second asset behaves more like a laggard or follower. Values near zero suggest that no stable lead–lag structure can be identified.
The anticipation score is built from four elements inspired by quantitative lead–lag and price discovery analysis. The first element is a residual lead correlation, conceptually similar to Granger-style logic. The script first measures how much of the chart symbol’s normalized returns can be explained by its own lagged values. It then removes that component and studies the correlation between the residuals and lagged returns of the second asset. If the second asset’s past returns consistently explain what the chart symbol does beyond its own autoregressive behaviour, this residual correlation becomes significantly positive.
The second element is an asymmetric lead–lag structure measure. It compares the strength of relationships in both directions across multiple lags: the correlation of the current symbol with lagged versions of the second asset (candidate leader) versus the correlation of lagged values of the current symbol with the present values of the second asset. If the forward direction (second asset leading the first) is systematically stronger than the backward direction, the structure is skewed toward genuine leadership of the second asset.
The third element is a relative price discovery score, constructed by building a dynamic hedge ratio between the two prices and defining a spread. The indicator looks at how changes in each asset contribute to correcting deviations in this spread over time. When the chart symbol tends to do most of the adjustment while the second asset remains relatively stable, it suggests that the second asset is taking a greater role in determining the equilibrium price and the chart symbol is adjusting to it. The difference in adjustment intensity between the two instruments is summarised into a single score.
The fourth element is a breakout follow-through causality component. The script scans for breakout events on the second asset, where its price breaks out of a recent high or low range while the chart symbol has not yet done so. It then evaluates whether the chart symbol subsequently confirms the breakout direction, remains neutral, or moves against it. Events where the second asset breaks and the first asset later follows in the same direction add positive contribution, while failed or contrarian follow-through reduce this component. The contribution is also lightly modulated by the strength of the breakout, via the underlying normalized return.
The four elements of the Anticipation mode are combined into a single leading correlation score, providing a compact and interpretable measure of whether the second asset currently behaves as an effective early signal for the symbol you trade.
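To make the lead–lag idea concrete, here is a heavily reduced sketch of the asymmetric direction comparison only (the residual, price-discovery and breakout components are omitted), with an assumed second symbol and lag:
```
//@version=5
indicator("Lead-lag asymmetry sketch", overlay=false)

sym     = input.symbol("CME_MINI:ES1!", "Second asset (candidate leader)")
corrLen = input.int(100, "Correlation window")
lagBars = input.int(3,   "Lag (bars)")

retA   = math.log(close / close[1])                   // chart symbol
closeB = request.security(sym, timeframe.period, close)
retB   = math.log(closeB / closeB[1])                 // second asset

// forward direction: second asset's past vs chart symbol's present
leadBA = ta.correlation(retA, retB[lagBars], corrLen)
// backward direction: chart symbol's past vs second asset's present
leadAB = ta.correlation(retB, retA[lagBars], corrLen)

// positive when the second asset looks more like a leader than a follower
asym = leadBA - leadAB
plot(asym, "Lead-lag asymmetry", color=asym > 0 ? color.green : color.red)
hline(0)
```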
To aid interpretation, Omega Correlation builds dynamic bands around the active series (correlation or anticipation). It estimates a long-term central tendency and a typical deviation around it, plotting upper and lower bands that highlight unusually high or low values relative to recent history. These bands can be used to distinguish routine fluctuations from genuinely extreme regimes.
The script also computes percentile-based levels for the correlation series and uses them to track two special price levels on the main chart: lost correlation levels and gained correlation levels. When the correlation drops below an upper percentile threshold, the current price is stored as a lost correlation level and plotted as a horizontal line. When the correlation rises above a lower percentile threshold, the current price is stored as a gained correlation level. These levels mark zones where a historically strong relationship between the two markets broke down or re-emerged, and can be used to frame divergence, convergence and spread opportunities.
An information panel summarises, in real time, whether the second asset is behaving more as a leading, lagging or independent instrument according to the anticipation score, and suggests whether the current environment is more conducive to de-alignment, re-alignment or classic spread behaviour based on the correlation regime. This makes the tool directly interpretable even for users who are not familiar with all the underlying statistical details.
Typical applications for Omega Correlation include intermarket analysis (for example, index vs index, commodity vs related equity sector, FX vs bonds), dynamic hedge sizing, regime detection for algorithmic strategies, and the identification of lead–lag structures where a macro driver or benchmark can be monitored as an early signal for the instrument actually traded. The indicator can be applied across intraday and higher timeframes, with the understanding that the strength and nature of relationships will differ across horizons.
Omega Correlation is designed as an advanced analytical framework, not as a standalone trading system. Correlation and lead–lag relationships are statistical in nature and can change abruptly, especially around macro events, regime shifts or liquidity shocks. A positive anticipation reading does not guarantee that the second asset will always move first, and a high correlation regime can break without warning. All outputs of this tool should be combined with independent analysis, sound risk management and, when appropriate, backtesting or forward testing on the user’s specific instruments and timeframes.
The intention behind Omega Correlation is to bring techniques inspired by quantitative research, such as normalized return analysis, residual correlation, asymmetric lead–lag structure, price discovery logic and breakout event studies, into an accessible TradingView indicator. It is intended for traders who want a structured, professional way to understand how markets interact and to incorporate that information into their discretionary or systematic decision-making processes.
Roshan Dash Ultimate Trading Dashboard
Shows the key simple moving averages (10, 20, 50, 200) on the daily timeframe and above, and the EMAs (10, 20, 50, 200) plus VWAP on lower timeframes. Displays key information such as market cap, sector, LoD %, ATR, ATR %, and the distance of the ATR from the 50 SMA, all of which helps determine whether or not to take a trade.