Volume Profile with a few polylines
The base of "Volume Profile with a few polylines" is another script of mine, Volume Profile (Maps).
The structure of maps is used to gather the data; the drawing, however, is done with polylines.
This enables coders to draw an entire volume profile with just a few polylines while covering a broader range.
The benefit is that more "lines" can be drawn than with line.new() / box.new() alone.
🔶 CONCEPTS
🔹 Polylines
polyline.new creates a new polyline instance and displays it on the chart, sequentially connecting all of the points in the `points` array with line segments.
The segments in the drawing can be straight or curved depending on the `curved` parameter.
In this script, points are connected starting from the bottom. The created line moves up until it reaches a price level where a volume value needs to be displayed,
at which point the line extends left to the corresponding volume value, returns at the same price level to its initial x-coordinate,
after which the line continues to rise until all values are displayed.
A polyline can contain a maximum of 10,000 points (10K).
Since the line has to go back and forth, each price/volume level takes 3 points.
If 20K bars each traded at a different price, we would need 60K points, which is just 6 polylines. A maximum of 100 polylines can be displayed.
The 3 highest volume values are displayed with line.new(), each with their own colour.
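To illustrate the point pattern, here is a minimal sketch with placeholder price levels and widths (hypothetical values, not the published script's code):
//@version=5
indicator("Polyline zigzag sketch", overlay = true)
if barstate.islast
    array<chart.point> pts = array.new<chart.point>()
    for lvl = 0 to 9                                   // ten hypothetical price levels
        float price = close * (0.99 + lvl * 0.002)     // placeholder level
        int   width = 5 + lvl % 4 * 3                  // placeholder volume width
        array.push(pts, chart.point.from_index(bar_index, price))          // at the base x
        array.push(pts, chart.point.from_index(bar_index - width, price))  // left to the volume value
        array.push(pts, chart.point.from_index(bar_index, price))          // back to the base x
    polyline.new(pts, line_color = color.teal)         // one drawing holds all 30 points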
🔹 Maps
A map object is a collection that consists of key-value pairs.
Each key is unique and can only appear once. When adding a new value with a key that the map already contains, that value replaces the old value associated with the key.
You can, however, update the value of a particular key, for example by adding volume (value) at the same price (key); that technique is used in this script.
Volume is added to the map, associated with a particular price (default close; can be set to high, low, open, ...).
When the map already contains the same price (key), the value (volume) is added to the existing volume at the associated price.
A map can contain a maximum of 50K values, which is more than enough to hold 20K bars (Basic plan 5K - Premium plan 20K), so the whole history can be put into a map.
🔹 Rounding function
This publication contains 2 rounding functions, which can be used to widen the Volume Profile.
Round
• "Round" set at zero -> nothing changes to the source number
• "Round" set below zero -> x digit(s) after the decimal point, starting from the right side, and rounded.
• "Round" set above zero -> x digit(s) before the decimal point, starting from the right side, and rounded.
Example: 123456.789
• 0 -> 123456.789
• 1 -> 123456.79
• 2 -> 123456.8
• 3 -> 123457
• -1 -> 123460
• -2 -> 123500
Step
Another option is custom steps.
After setting "Round" to "Step", choose the desired steps in price,
Examples
• 2 -> 1234.00, 1236.00, 1238.00, 1240.00
• 5 -> 1230.00, 1235.00, 1240.00, 1245.00
• 100 -> 1200.00, 1300.00, 1400.00, 1500.00
• 0.05 -> 1234.00, 1234.05, 1234.10, 1234.15
•••
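A sketch of both behaviours (the helper names, and the fixed three-decimal source precision used to match the example above, are assumptions, not the published code):
// "Round": r > 0 rounds away r digits after the decimal point (from the right),
// r < 0 rounds |r| digits before the decimal point, r = 0 leaves the source unchanged
roundSource(float src, int r, int srcDecimals) =>
    int   d = r > 0 ? srcDecimals - r : r < 0 ? r : srcDecimals
    float f = math.pow(10, d)
    math.round(src * f) / f
// "Step": snap the source to a custom price step
stepRound(float src, float step) =>
    math.round(src / step) * step
For example, roundSource(123456.789, -2, 3) returns 123500, and stepRound(1234.04, 0.05) returns 1234.05.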
🔶 FEATURES
🔹 Volume * currency
Take BTCUSD as an example: relative to USD, 10 volume at a price of 100 is very different from 10 volume at a price of 30,000 (1K vs. 300K).
If you want volume to be expressed in USD, enable Volume * currency. Volume will then be multiplied by the price:
• 10 volume, 1 BTC = 100 -> 1000
• 10 volume, 1 BTC = 30K -> 300K
Polylines have the attributes `curved` and `closed`.
When "curved" is enabled the drawing will connect all points from the `points` array using curved line segments.
When "closed" is enabled the drawing will also connect the first point to the last point from the `points` array, resulting in a closed polyline.
Both are disabled by default but can be enabled:
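For example (a sketch; pts is an array of chart points as in the earlier snippet):
polyline.new(pts, curved = true)   // curved segments between points
polyline.new(pts, closed = true)   // last point connected back to the first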
🔶 DETAILS
🔹 Put
When the map doesn't contain a price yet, it is added using map.put(id, key, value).
In our code:
map.put(originalMap, price, volume)
or
originalMap.put(price, volume)
A key (price) is now associated with a value (volume) -> key : value
Since all keys are unique, we don't have to know a key's position to extract its value; we just need the key itself -> map.get(id, key)
We use map.get() when a certain key already exists in the map and we want to add volume to its existing value.
if originalMap.contains(price)
    originalMap.put(price, originalMap.get(price) + volume)
-> At the last bar, all prices (source) are now associated with volume.
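A compact, self-contained sketch of this accumulation (variable names are hypothetical):
//@version=5
indicator("Map accumulation sketch")
var map<float, float> volumeAtPrice = map.new<float, float>()
float price = close  // the chosen source, possibly rounded first
if volumeAtPrice.contains(price)
    volumeAtPrice.put(price, volumeAtPrice.get(price) + volume)
else
    volumeAtPrice.put(price, volume)
plot(volumeAtPrice.size())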
🔶 SETTINGS
Source: set the source of choice; default close, can be set to high, low, open, ...
Volume * currency: enable to multiply volume by price (see Features)
Amount of bars: set the number of bars you want to include in the Volume Profile
🔹 Round -> 'Round/Step'
Round -> see Concepts
Step -> see Concepts
🔹 Display Volume Profile
Offset: shifts the Volume Profile (max. 500 bars to the right of the last bar, see Features)
Max width Volume Profile: the largest volume will be x bars wide; the rest is displayed as a ratio against the largest volume (see Features)
Colours
Curved: make lines curved
Closed: connect last with first point
🔶 LIMITATIONS
• Lines won't go further back than the first bar (coded).
• The Volume Profile can be placed a maximum of 500 bars to the right of the last price.
"curve"に関するスクリプトを検索
Asset Rotation System [InvestorUnknown]
Overview
This system creates a comprehensive trend "matrix" by analyzing the performance of six assets against both the US Dollar and each other. The objective is to identify and hold the asset that is currently outperforming all others, thereby focusing on maintaining an investment in the most "optimal" asset at any given time.
- - - Key Features - - -
1. Trend Classification:
The system evaluates the trend for each of the six assets, both individually against USD and in pairs (assetX/assetY), to determine which asset is currently outperforming others.
Utilizes five distinct trend indicators: RSI (50 crossover), CCI, SuperTrend, DMI, and Parabolic SAR (a minimal sketch of the RSI vote appears after this feature list).
Users can customize the trend analysis by selecting all indicators or choosing a single one via the "Trend Classification Method" input setting.
2. Backtesting:
Calculates an equity curve for each asset and for the system itself, which assumes holding only the asset deemed optimal at any time.
Customizable start date for backtesting; by default, it begins either 5000 bars ago (the maximum in TradingView) or at the inception of the youngest asset included, whichever is shorter. If the youngest asset's history exceeds 5000 bars, the system uses 5000 bars to prevent errors.
The equity curve is dynamically colored based on the asset held at each point, with this coloring also reflected on the chart via barcolor().
Performance metrics like returns, standard deviation of returns, Sharpe, Sortino, and Omega ratios, along with maximum drawdown, are computed for each asset and the system's equity curve.
3. Alerts:
Supports alerts for when a new, confirmed optimal asset is identified. However, due to TradingView limitations, the specific asset cannot be included in the alert message.
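For illustration, a minimal sketch of one of the five trend votes (the RSI 50-crossover) applied to an asset pair; the tickers and names are placeholders, not the script's defaults:
//@version=5
indicator("Pair trend vote sketch")
float a = request.security("COINBASE:BTCUSD", timeframe.period, close)
float b = request.security("COINBASE:ETHUSD", timeframe.period, close)
float ratio = a / b                            // assetX priced in assetY
int   vote  = ta.rsi(ratio, 14) > 50 ? 1 : -1  // +1: asset A outperforming, -1: underperforming
plot(vote)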
- - - Usage - - -
1. Select Assets/Tickers:
Choose which assets or tickers you want to include in the rotation system. Ensure that all selected tickers are denominated in USD to maintain consistency in analysis.
2. Configure Trend Classification:
Decide on the trend classification method from the available options (RSI, CCI, SuperTrend, DMI, Parabolic SAR, or All) and adjust the settings to your preferences. This customization allows you to tailor the system to different market conditions or your specific trading strategy.
3. Utilize Backtesting for Calibration:
Use the backtesting results, including equity curves and performance metrics, to fine-tune your chosen trend indicators.
Be cautious not to overemphasize performance maximization, as this can lead to overfitting. The goal is to achieve a robust system that performs well across various market conditions, rather than just optimizing for past data.
- - - Parameters - - -
Tickers:
Asset 1: Select the symbol for the first asset.
Asset 2: Select the symbol for the second asset.
Asset 3: Select the symbol for the third asset.
Asset 4: Select the symbol for the fourth asset.
Asset 5: Select the symbol for the fifth asset.
Asset 6: Select the symbol for the sixth asset.
General Settings:
Trend Classification Method: Choose from RSI, CCI, SuperTrend, DMI, PSAR, or "All" to determine how trends are analyzed.
Use Custom Starting Date for Backtest: Toggle to use a custom date for beginning the backtest.
Custom Starting Date: Set the custom start date for backtesting.
Plot Perf. Metrics Table: Option to display performance metrics in a table on the chart.
RSI (Relative Strength Index):
RSI Source: Choose the price data source for RSI calculation.
RSI Length: Set the period for the RSI calculation.
CCI (Commodity Channel Index):
CCI Source: Select the price data source for CCI calculation.
CCI Length: Determine the period for the CCI.
SuperTrend:
SuperTrend Factor: Adjust the sensitivity of the SuperTrend indicator.
SuperTrend Length: Set the period for the SuperTrend calculation.
DMI (Directional Movement Index):
DMI Length: Define the period for DMI calculations.
Parabolic SAR:
PSAR Start: Initial acceleration factor for the Parabolic SAR.
PSAR Increment: Increment value for the acceleration factor.
PSAR Max Value: Maximum value the acceleration factor can reach.
Notes/Recommendations:
While this system is operational, it's important to recognize that it relies on "basic" indicators, which may not be ideal for generating trading signals on their own. I strongly suggest that users delve into the code to grasp the underlying logic of the system. Consider customizing it by integrating more sophisticated and higher-quality trend-following indicators to enhance its performance and reliability.
Disclaimer:
This system's backtest results are historical and do not predict future performance. Use for educational purposes only; not investment advice.
PLR-Z For Loop
🧠 Overview
PLR-Z For Loop is a trend-following indicator built on the Power Law Residual Z-score model of Bitcoin price behavior. By measuring how far price deviates from a long-term power law regression and applying a custom scoring loop, this tool identifies consistent directional pressure in market structure. Designed for BTC, this indicator helps traders align with macro trends.
🧩 Key Features
Power Law Residual Model: Tracks deviations of BTC price from its long-term logarithmic growth curve.
Z-Score Normalization: Applies long-horizon statistical normalization (400/1460 bars) to smooth residual deviations into a usable trend signal.
Loop-Based Trend Filter: Iteratively scores how often the current Z-score exceeds prior values, emphasizing trend persistence over volatility.
Optional Smoothing: Toggleable exponential smoothing helps filter noise in choppier market conditions.
Directional Regime Coloring: Aqua (bullish) and Red (bearish) visuals reinforce trend alignment across plots and candles.
🔍 How It Works
Power Law Curve: Price is compared against a logarithmic regression model fitted to historical BTC price evolution (starting July 2010), defining structural support, resistance, and centerline levels.
Residual Z-Score: The residual is calculated as the log-difference between price and the power law center.
This residual is then normalized using a rolling mean (400 days) and standard deviation (1460 days) to create a long-term Z-score.
Loop Scoring Logic:
A loop compares the current Z-score to a configurable number of past bars.
Each higher comparison adds +1, and each lower one subtracts -1.
The result is a trend persistence score (z_loop) that grows with consistent directional momentum (see the sketch after this section).
Smoothing Option: A user-defined EMA smooths the score, if enabled, to reduce short-term signal noise.
Signal Logic:
Long signal when trend score exceeds long_threshold.
Short signal when score drops below short_threshold.
Directional State (CD): Internally manages the current market regime (1 = long, -1 = short), controlling all visual output.
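A minimal sketch of the loop scoring described above (function and parameter names are assumptions, not the published code):
loopScore(float z, int lookback) =>
    int score = 0
    for i = 1 to lookback
        score += z > z[i] ? 1 : -1  // +1 per past bar the current Z-score exceeds, else -1
    score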
🔁 Use Cases & Applications
Macro Trend Alignment: Ideal for traders and analysts tracking Bitcoin’s structural momentum over long timeframes.
Trend Persistence Filter: Helps confirm whether the current move is part of a sustained trend or short-lived volatility.
Best Suited for BTC: Built specifically on the BNC BLX price history and Bitcoin’s power law behavior. Not designed for use with other assets.
✅ Conclusion
PLR-Z For Loop reframes Bitcoin’s long-term power law model into a trend-following tool by scoring the persistence of deviations above or below fair value. It shifts the focus from valuation-based mean reversion to directional momentum, making it a valuable signal for traders seeking high-conviction participation in BTC’s broader market cycles.
⚠️ Disclaimer
The content provided by this indicator is for educational and informational purposes only. Nothing herein constitutes financial or investment advice. Trading and investing involve risk, including the potential loss of capital. Always backtest and apply risk management suited to your strategy.
Hurst Exponent Oscillator [PhenLabs]
📊 Hurst Exponent Oscillator
Version: PineScript™ v5
📌 Description
The Hurst Exponent Oscillator (HEO) by PhenLabs is a powerful tool developed for traders who want to distinguish between trending, mean-reverting, and random market behaviors with clarity and precision. By estimating the Hurst Exponent—a statistical measure of long-term memory in financial time series—this indicator helps users make sense of underlying market dynamics that are often not visible through traditional moving averages or oscillators.
Traders can quickly know if the market is likely to continue its current direction (trending), revert to the mean, or behave randomly, allowing for more strategic timing of entries and exits. With customizable smoothing and clear visual cues, the HEO enhances decision-making in a wide range of trading environments.
🚀 Points of Innovation
Integrates advanced Hurst Exponent calculation via Rescaled Range (R/S) analysis, providing unique market character insights.
Offers real-time visual cues for trending, mean-reverting, or random price action zones.
User-controllable EMA smoothing reduces noise for clearer interpretation.
Dynamic coloring and fill for immediate visual categorization of market regime.
Configurable visual thresholds for critical Hurst levels (e.g., 0.4, 0.5, 0.6).
Fully customizable appearance settings to fit different charting preferences.
🔧 Core Components
Log Returns Calculation: Computes log returns of the selected price source to feed into the Hurst calculation, ensuring robust and scale-independent analysis.
Rescaled Range (R/S) Analysis: Assesses the dispersion and cumulative deviation over a rolling window, forming the core statistical basis for the Hurst exponent estimate.
Smoothing Engine: Applies Exponential Moving Average (EMA) smoothing to the raw Hurst value for enhanced clarity.
Dynamic Rolling Windows: Utilizes arrays to maintain efficient, real-time calculations over user-defined lengths.
Adaptive Color Logic: Assigns different highlight and fill colors based on the current Hurst value zone.
🔥 Key Features
Visually differentiates between trending, mean-reverting, and random market modes.
User-adjustable lookback and smoothing periods for tailored sensitivity.
Distinct fill and line styles for each regime to avoid ambiguity.
On-chart reference lines for strong trending and mean-reverting thresholds.
Works with any price series (close, open, HL2, etc.) for versatile application.
🎨 Visualization
Hurst Exponent Curve: Primary plotted line (smoothed if EMA is used) reflects the ongoing estimate of the Hurst exponent.
Colored Zone Filling: The area between the Hurst line and the 0.5 reference line is filled, with color and opacity dynamically indicating the current market regime.
Reference Lines: Dash/dot lines mark standard Hurst thresholds (0.4, 0.5, 0.6) to contextualize the current regime.
All visual elements can be customized for thickness, color intensity, and opacity for user preference.
📖 Usage Guidelines
Data Settings
Hurst Calculation Length
Default: 100
Range: 10-300
Description: Number of bars used in Hurst calculation; higher values mean longer-term analysis, lower values for quicker reaction.
Data Source
Default: close
Description: Select which data series to analyze (e.g., Close, Open, HL2).
Smoothing Length (EMA)
Default: 5
Range: 1-50
Description: Length for smoothing the Hurst value; higher settings yield smoother but less responsive results.
Style Settings
Trending Color (Hurst > 0.5)
Default: Blue tone
Description: Color used when trending regime is detected.
Mean-Reverting Color (Hurst < 0.5)
Default: Orange tone
Description: Color used when mean-reverting regime is detected.
Neutral/Random Color
Default: Soft blue
Description: Color when market behavior is indeterminate or shifting.
Fill Opacity
Default: 70-80
Range: 0-100
Description: Transparency of area fills—higher opacity for stronger visual effect.
Line Width
Default: 2
Range: 1-5
Description: Thickness of the main indicator curve.
✅ Best Use Cases
Identifying if a market is regime-shifting from trending to mean-reverting (or vice versa).
Filtering signals in automated or systematic trading strategies.
Spotting periods of randomness where trading signals should be deprioritized.
Enhancing mean-reversion or trend-following models with regime-awareness.
⚠️ Limitations
Not predictive: Reflects current and recent market state, not future direction.
Sensitive to input parameters—overfitting may occur if settings are changed too frequently.
Smoothing can introduce lag in regime recognition.
May not work optimally in markets with structural breaks or extreme volatility.
💡 What Makes This Unique
Employs advanced statistical market analysis (Hurst exponent) rarely found in standard toolkits.
Offers immediate regime visualization through smart dynamic coloring and zone fills.
🔬 How It Works
Rolling Log Return Calculation:
Each new price creates a log return, forming the basis for robust, non-linear analysis. This ensures all price differences are treated proportionally.
Rescaled Range Analysis:
A rolling window maintains cumulative deviations and computes the statistical “range” (max-min of deviations). This is compared against the standard deviation to estimate “memory”.
Exponent Calculation & Smoothing:
The raw Hurst value is translated from the log of the rescaled range ratio, and then optionally smoothed via EMA to dampen noise and false signals.
Regime Detection Logic:
The smoothed value is checked against 0.5. Values above = trending; below = mean-reverting; near 0.5 = random. These control plot/fill color and zone display.
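A simplified, single-window sketch of that R/S estimate (the published script's exact windowing and normalization differ, per the 400/1460-bar settings above):
hurstRS(float src, int len) =>
    var array<float> rets = array.new<float>()
    float r = math.log(src / src[1])  // log return
    if not na(r)
        rets.push(r)
        if rets.size() > len
            rets.shift()
    float h = na
    if rets.size() == len
        float m   = rets.avg()
        float sd  = rets.stdev()
        float cum = 0.0
        float hi  = 0.0
        float lo  = 0.0
        for v in rets
            cum += v - m                 // cumulative deviation from the mean
            hi  := math.max(hi, cum)
            lo  := math.min(lo, cum)
        h := math.log((hi - lo) / sd) / math.log(len)  // H ≈ log(R/S) / log(n)
    h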
💡 Note:
Use longer calculation lengths for major market character study, and shorter ones for tactical, short-term adaptation. Smoothing balances noise vs. lag—find a best fit for your trading style. Always combine regime awareness with broader technical/fundamental context for best results.
Smooth First Derivative Indicator
Introducing the Smooth First Derivative indicator. For each time step, the script numerically differentiates the price data using prior datapoints from the look-back window. The resulting time derivative (the rate of price change over time) is presented as a centered oscillator.
A first derivative is a versatile tool used in functional data analysis. When applied to price data, it can be applied to analyze momentum, confirm trend direction, and identify pivot points.
Model Description:
The model assumes that, within the look-back window, price data can be well approximated by a smooth differentiable function. The first derivative can then be computed numerically using a noise-robust one-sided differentiator. The current version of the script employs smooth differentiators developed by P. Holoborodko (www.holoborodko.com). Note that the Indicator should not be confused with Constance Brown's Derivative Oscillator.
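To illustrate the idea (this is Holoborodko's N=5 centered smooth differentiator applied with a two-bar lag, since real-time data offers no future points; the script's actual one-sided coefficients are documented on holoborodko.com):
//@version=5
indicator("Smooth derivative sketch")
// (2*(f(+1) - f(-1)) + (f(+2) - f(-2))) / 8h, shifted two bars back, h = 1 bar
float deriv = (2.0 * (close[1] - close[3]) + (close - close[4])) / 8.0
plot(deriv, color = color.orange)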
Input parameter:
The Bandwidth parameter sets the number of points in the moving look-back window and thus determines the smoothness of the first derivative curve. Note that a smoother Indicator shows a greater lag.
Interpretation:
When using this Indicator, one should recall that the first derivative can simply be interpreted as the slope of the curve:
- The maximum (minimum) in the Indicator corresponds to the point at which the market experiences the maximum upward (downward) slope, i.e., the inflection point. The steeper the slope, the greater the Indicator value.
- The positive-to-negative zero-crossing in the Indicator suggests that the market has formed a local maximum (potential start of a downtrend or a period of consolidation). Likewise, a zero-crossing from negative to positive is a potential bullish signal.
Auto Fractal [theUltimator5]
This indicator is what I call the Auto Fractal. It is a unique algorithm that looks back in time, finds the segment on the chart that most closely matches recent price action, then projects the price forwards. It effectively finds chart patterns and shows you what the price did the last time the same or a similar pattern was observed.
Creating an algorithm that matches abstract curves to other abstract curves and provides a confidence score was the fundamental problem that needed to be solved to build this indicator, and it matches curves with surprising accuracy.
The most effective method to "curve match" that I found is the Pearson Coefficient, set by a segment length and a lookback period. After the highest coefficient curve is located, the curve then gets scaled and offset to match the current price.
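A sketch of such a segment score, computing the Pearson coefficient by hand (ta.* built-ins cannot be called repeatedly inside a loop, so the sums are explicit; names are assumptions):
pearson(int len, int shift) =>
    float sx  = 0.0
    float sy  = 0.0
    float sxy = 0.0
    float sxx = 0.0
    float syy = 0.0
    for i = 0 to len - 1
        float x = close[i]          // recent segment
        float y = close[i + shift]  // historical candidate segment
        sx  += x
        sy  += y
        sxy += x * y
        sxx += x * x
        syy += y * y
    float n = len
    (n * sxy - sx * sy) / math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))
Scanning shift across the lookback period and keeping the highest score yields the best-matching segment.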
The past segment is drawn over the current price (orange line), giving a visualization of the two curves and how closely they match each other. The indicator then projects the price forwards in time based on the price action of the chart from the historical segment (dashed fuchsia line).
A bounding box also gets drawn around the historical segment to give you a clear visual of where the price is getting pulled from for proper analysis and ease of use.
The Pearson Coefficient % is shown in a table in the top right-hand corner of the chart and can be toggled off if desired. The values range from -100% (perfectly inverse correlation) to +100% (perfectly correlated) with 0 meaning no correlation whatsoever. The closer to +100% the value is, the better the segment match.
As with most/all of my indicators, user interface and simplicity was at the top of my priority list. I designed this to be easily readable and intuitive to both novice and veteran traders, without cluttering the chart.
Note:
This indicator is extremely heavy in terms of memory usage due to nested for loops, and takes several seconds to initially load the chart overlay. If the lookback period is increased too high (>600) then the indicator may time out and fail to load anything. If nothing loads on the chart, try reducing the lookback length and wait up to 10 seconds for lines to appear.
Functionally Weighted Moving Average
OVERVIEW
An anchor-able moving average that weights historical prices with mathematical curves (shaping functions) such as Smoothstep, Ease In / Out, or even a Cubic Bézier. This level of configurability lends itself to more versatile price modeling than conventional moving averages.
SESSION ANCHORS
Aside from VWAP, conventional moving averages do not allow you to use the first bar of each session as an anchor. This can make averages less useful near the open when price differs sufficiently from yesterday's close. For example, in this screenshot the EMA (blue) lags behind the sessionally anchored FWMA (yellow) at the open, making it slower to indicate a pivot higher.
An incrementing length is what makes a moving average anchor-able. VWAP is designed to do this, indefinitely growing until a new anchor resets the average (which is why it doesn't have a length parameter). But conventional MA's are designed to have a set length (they do not increment). Combining these features, the FWMA treats the length like a maximum rather than a set length, incrementing up to it from the anchor (when enabled).
Quick aside: If you code and want to anchor a conventional MA, the length() function in my UtilityLibrary will help you do this.
Incrementing an average's length introduces near-anchor volatility. For this reason, the FWMA also includes an option to saturate the anchor with the source, making values near the anchor more resistant to change. The following screenshot illustrates how saturation affects the average near the anchor when disabled (aqua) and enabled (fuchsia).
AVERAGING MATH
While there's nothing special about the math, it's worth documenting exactly how the average is affected by the anchor.
Average = Dot Product / Sum of Weights
Dot Product
This is the sum of element-wise multiplication between the Price and Weight arrays.
Dot Product = Price1 × Weight1 + Price2 × Weight2 + Price3 × Weight3 ...
When the Price and Weight arrays are equally sized (aka. the length is no longer incrementing from the anchor), there's a 1-1 mapping between Price and Weight indices. Anchoring, however, purges historical data from the Price array, making it temporarily smaller. When this happens, a dot product is synthesized by linearly interpolating for proportional indices (rather than a 1-1 mapping) to maintain the intended shape of weights.
Synthetic Dot Product = FirstPrice × FirstWeight + ... MidPrice × MidWeight ... + LastPrice × LastWeight
Sum of Weights
Exactly what it sounds like, the sum of weights used by the dot product operation. The sum of used weights may be less than the sum of all weights when the dot product is synthesized.
Sum of Weights = Weight1 + Weight2 + Weight3 ...
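A minimal sketch of this average using one of the listed kernels (Smoothstep); anchoring and the synthetic dot product are omitted:
smoothstep(float t) =>
    t * t * (3.0 - 2.0 * t)
fwma(float src, int len) =>
    float dot   = 0.0
    float sumW  = 0.0
    float denom = math.max(len - 1, 1)
    for i = 0 to len - 1
        float t = (len - 1 - i) / denom  // 0 at the oldest bar, 1 at the newest
        float w = smoothstep(t)
        dot  += src[i] * w
        sumW += w
    dot / sumW  // Average = Dot Product / Sum of Weights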
CALCULATING WEIGHTS
Shaping functions are mathematical curves used for interpolation. They are what give the Functionally Weighted Moving Average its name, and define how each historical price in the look back period is weighted.
The included shaping functions are:
Linear (conventional WMA)
Smoothstep (S curve)
Ease In Out (adjustable S curve)
Ease In (first half of Ease In Out)
Ease Out (second half of Ease In Out)
Ease Out In (eases out and then back in)
Cubic Bézier (aka. any curve you want)
In the following screenshot, the only difference between the three FWMA's is the shaping function (Ease In, Ease In Out, and Ease Out) illustrating how different curves can influence the responsiveness of an average.
And here is the same example, but with anchor saturation disabled .
ADJUSTING WEIGHTS
Each function outputs a range of values between 0 and 1. While you can't expand or shrink the range, you can nudge it higher or lower using the Scalar. For example, setting the scalar to -0.2 remaps the range to [-0.2, 0.8], and +0.2 remaps it to [0.2, 1.2]. The following screenshot illustrates how -0.2 (lightest blue) and +0.2 (darkest blue) affect the average.
Easing functions can be further adjusted with the Degree (how much the shaping function curves). There's an interactive example of this here, and the following illustrates how degrees 0, 1, and 20 (dark orange, orange, and light orange) affect the average.
This level of configurability completely changes how a moving average models price for a given length, making the FWMA extremely versatile.
INPUTS
You can configure:
Length (how many historical bars to average)
Source (the bar value to average)
Offset (horizontal offset of the plot)
Weight (the shaping function)
Scalar (how much to adjust each weight)
Degree (how much to ease in / out)
Bézier Points (controls shape of Bézier)
Divisor & Anchor parameters
Style of the plot
BUT ... WHY?
We use moving averages to anticipate trend initialization, continuation, and termination. For a given look back period (length) we want the average to represent the data as accurately and smoothly as possible. The better it does this, the better it is at modeling price.
In this screenshot, both the FWMA (yellow) and EMA (blue) have a length of 9. They are both smooth, but one of them more accurately models price.
You wouldn't necessarily want to trade with these FWMA parameters, but knowing it does a better job of modeling price allows you to confidently expand the model to larger timeframes for bigger moves. Here, both the FWMA (yellow) and EMA (blue) have a length of 195 (aka. 50% of NYSE market hours).
INSPIRATION
I predominantly trade ETF derivatives and hold the position that markets are chaotic, not random. The salient difference is that randomness is entirely unpredictable, while chaotic systems can be modeled. The kind of analysis I value requires a very good pricing model.
The term "model" sounds more intimidating than it is. Math terms do that sometimes. It's just a mathematical estimation. That's it. For example, a regression is an "average regressing" model (aka. mean reversion), and LOWESS (Locally Weighted Scatterplot Smoothing) is a statistically rigorous local regression.
LOWESS is excellent for modeling data. However, it's not practical for trading. It's computationally expensive and uses data to the right of the point it's averaging, which is impossible in real time (everything to the right is in the future). But many techniques used within LOWESS are still valuable.
My goal was to create an efficient real time emulation of LOWESS. Specifically I wanted something that was weighted non-linearly, was efficient, left-side only, and data faithful. Incorporate trading paradigms (like anchoring) and you get a Functionally Weighted Moving Average.
The formulas for determining the weights in LOWESS are typically chosen just because they seem to work well. Meaning ... they can be anything, and there's no justification other than "looks about right". So having a variety of functions (aka. kernels) for the FWMA, and being able to slide the weight range higher or lower, allows you to also make it "look about right".
William Cleveland, prominent figure in statistics known for his contributions to LOWESS, preferred using a tri-cube weighting function. Using Weight = Ease Out In with the Degrees = 3 is comparable to this. Enjoy!
Distribution Histogram [SS]
This is the frequency histogram indicator. It does just that: it creates a frequency histogram distribution based on your desired lookback period. It then uses Pine's new Polyline function to plot a normal curve of the expected results for a normal distribution. This allows you to see quite a few things:
🎯 Firstly, it allows you to see where the accumulation rests in terms of a bell curve. The histogram represents a bell curve, and you can visually observe what the curve would look like.
🎯 Secondly, it will assess the normal distribution and the degree of skewness based on the curve itself. The indicator imports the SPTS statistics library to assess the distribution using Kurtosis and Skewness. However, it also adds functionality in this regard by making a qualitative assessment of the data. For example, if there are heavy left tails or heavier right tails present in the histogram, the indicator will alert you that a heavier left or right tail has been observed.
🎯 Thirdly, it provides you with the kurtosis and skewness of the dataset.
🎯 Fourthly, it provides the mean, median, and mode of the dataset, as well as the maximum and minimum values within the dataset.
🎯 Lastly, it provides you with the ability to toggle on tips/explanations of the curve itself. Simply toggle on "Show Distribution Explanation" in the settings menu:
How is the indicator helpful for trading?
If you are a mean reversion trader, this helps you identify the areas and price ranges of high and low accumulation. It also allows you to ascertain the probability by looking at the standard deviation of the bell curve. Remember, the majority of values should fall between -1 and 1 standard deviation of the mean (68%).
If it is revealed that the distribution has a heavier right or left tail, you will know that the stock is more likely to experience sudden drops and shifts in the curve in one direction or the other. Heavier left tails will tend to shift to the values on the far left, and vice versa for right tails.
Customization
You can turn off and on the following:
👉 The normal curve,
👉 The standard deviation levels, and
👉 The distribution explanations and tips.
Conclusion: And that is the indicator! Hope you enjoy it!
Orion Algo Strategy v2.0
Hi everyone.
I decided to make the latest Orion Algo open to people. I don't have enough time to work on it lately, so I figured it would be best that everyone can have it to work on it. I took out some stuff from the original but it should give an idea on how things work. I made two strategies with this so far so you can use that to come up with your own. I recommend the DCA strategy because it gives you the most bang for Orion Algo's buck. It's pretty good at finding long entries.
Overall I hope you guys like this one. Also, Banano is the best crypto currency :)
-INFO-
Orion Algo is a trading algorithm designed to help traders find the highs and lows of the market before, during, and after they happen. We wanted to give an indicator to people that was simple to use. In fact we created the algorithm in such a way that it currently only needs a single input from the user. Since no indicator can predict the market perfectly, Orion should be used as just another tool (although quite a sharp one) for you to trade with. Fundamental knowledge of price action and TA should be used with Orion Algo.
Being an oscillator, Orion currently has a bias towards market volatility . So you will want to be trading markets over 30% volatility . We have plans to develop future versions that take this into account and adjust automatically for dead conditions. Also, while there are some similarities across all oscillators, what sets ours apart is the prediction curve. The prediction curve looks at the current signal values and gives it a relative score to approximate tops and bottoms 1-2 bars ahead of the signal curve. We also designed a velocity curve that attempts to predict the signal curve 2+ bars ahead. You can find the relative change in velocity in the Info panel. The bottom momentum wave is based on the signal curve and helps find overall market direction of higher time-frames while in a lower one.
Settings and How to Use them:
User Agreement – Orion Algo is a tool for you to use while trading. We aren’t responsible for losses OR the gains you make with it. By clicking the checkbox on the left you are agreeing to the terms.
Super Smooth – Smooths the main signal line based on the value inside the box. Lower values shift the pivot points to the left but also make things more noisy. Higher values move things to the right making it lag a bit more while creating a smoother signal. 8 is a good value to start with.
Theme – Changes the color scheme of Orion.
Dashboard – Turns on a dashboard with useful stats, such as Delta v, Volatility , Rsi , etc. Changing the value box will move the dashboard left and right.
Prediction – A secondary prediction model that attempts to predict a reversal before it happens (0-2bars). This can be noisy some times so make your best judgement. Curve will toggle a curve view of the prediction. Pivots will toggle bull/bear dots.
∆v – Delta v (change in velocity). This shows momentum of the signal. Crossing 0 signals a reversal. If you see the delta v changing direction, it may signify a reversal in the several bars depending on the overall momentum of the market.
Momentum Wave – Uses the signal as a macro trend indicator. Changes in direction of the wave can signify macro changes in the market. Average will toggle an averaging algorithm of the momentum waves and makes it easy to understand.
-STRATEGIES-
Simple - Just buy and sell on the dots
DCA - Uses the settings in the script for entries. If a buy dot appears then it will buy, if the price goes below the percentage it will wait for another dot before entering. This drastically improves DCA potential.
Support and Resistance Levels
Detecting Support and Resistance Levels
Description:
Support & resistance levels are essential for every trader to define the decision points of the markets. If you are long and the market falls below the previous support level, you have most probably got the wrong position and had better exit.
This script uses the first and second derivative of a curve to find the turning points and extremes of the price curve.
The first derivative of a curve is nothing other than its momentum: it defines the slope of the curve. If the slope of a curve is zero, you have found a local extreme; the curve will change from rising to falling or the other way round.
The second derivative, or the momentum of momentum, shows you the turning points of the first derivative. This is important, as at this point the original curve switches from acceleration to braking.
Using the logic laid out above, the support & resistance indicator shows the turning points of the market in a timely manner. Depending on the level of market smoothing, it will show long-term or short-term turning points.
This script first calculates the first and second derivative of the smoothed market, and in a second step runs the turning point detection.
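A minimal sketch of that two-step logic (the smoothing length is an assumption):
//@version=5
indicator("Turning point sketch", overlay = true)
float smoothed = ta.ema(close, 20)        // smoothed market
float d1 = ta.change(smoothed)            // first derivative: slope
float d2 = ta.change(d1)                  // second derivative: momentum of momentum
bool localTop    = d1[1] > 0 and d1 <= 0  // slope crosses zero from above
bool localBottom = d1[1] < 0 and d1 >= 0  // slope crosses zero from below
bool braking     = d2[1] > 0 and d2 <= 0  // curve switches from acceleration to braking
plotshape(localTop, style = shape.triangledown, location = location.abovebar, color = color.red)
plotshape(localBottom, style = shape.triangleup, location = location.belowbar, color = color.green)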
Style tags: Trend Following, Trend Analysis
Asset class: Equities, Futures, ETFs, Currencies and Commodities
Dataset: FX Minutes/Hours/Days
ErrorFunctionsLibrary "ErrorFunctions"
A collection of functions used to approximate the area beneath a Gaussian curve.
Because an ERF (Error Function) is an integral, there is no closed-form solution to calculating the area beneath the curve. Meaning all ERFs are approximations; precisely wrong, but mostly accurate. How close you need to get to the actual area depends entirely on your use case, with more precision being less efficient.
The internal precision of floats in Pine Script is 1e-16 (16 decimals, aka. double precision). This library adapts well known algorithms designed to efficiently reach double precision. Single precision alternates are also included. All of them were made free to use, modify, and distribute by their original authors.
HASTINGS
Adaptation of a single precision ERF by Cecil Hastings Jr, published through Princeton University in 1955. It was later documented by Abramowitz and Stegun as equation 7.1.26 in their 1972 Handbook of Mathematical Functions. Fast, efficient, and ideal when precision beyond a few decimals is unnecessary.
GILES
Adaptation of a single precision Inverse ERF by Michael Giles, published through the University of Oxford in 2012. It reverses the ERF, estimating an X coordinate from an area. It too is fast, efficient, and ideal when precision beyond a few decimals is unnecessary.
LIBC
Adaptation of the double precision ERF & ERFC in the standard C library (aka. libc). It is also the same ERF & ERFC that SciPy uses. While not quite as efficient as the Hastings approximation, it's still very fast and fully maximizes Pines precision.
BOOST
Adaptation of the double precision Inverse ERF & Inverse ERFC in the Boost Math C++ library. SciPy uses these as well. These reverse the ERF & ERFC, estimating an X coordinate from an area. It too isn't quite as efficient as the Giles approximation, but still fast and fully maximizes Pines precision.
While these algorithms are not exported directly, they are available through their exported counterparts.
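For example, importing and calling the exported functions might look like this (the publisher path and version are placeholders, not the library's actual import path):
//@version=5
indicator("ERF demo")
import PublisherName/ErrorFunctions/1 as ef
float z = 1.96
plot(ef.cdf(z, true), title = "CDF")        // ≈ 0.975 at z = 1.96
plot(ef.ttt(z, true), title = "Two-tailed") // ≈ 0.05 at z = 1.96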
- - -
ERROR FUNCTIONS
erf(x, precise)
An Error Function estimates the theoretical error of a measurement.
Parameters:
x (float) : (float) Upper limit of the integration.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between -1 and 1.
erfc(x, precise)
A Complementary Error Function estimates the difference between a theoretical error and infinity.
Parameters:
x (float) : (float) Lower limit of the integration.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 2.
erfinv(x, precise)
An Inverse Error Function reverses the erf() by estimating the original measurement from the theoretical error.
Parameters:
x (float) : (float) Theoretical error.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and ± infinity.
erfcinv(x, precise)
An Inverse Complementary Error Function reverses the erfc() by estimating the original measurement from the difference between the theoretical error and infinity.
Parameters:
x (float) : (float) Difference between the theoretical error and infinity.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and ± infinity.
- - -
DISTRIBUTION FUNCTIONS
pdf(x, m, s)
A Probability Density Function estimates the probability density. For clarity, density is not a probability.
Parameters:
x (float) : (float) X coordinate for which a density will be estimated.
m (float) : (float) Mean
s (float) : (float) Sigma
Returns: (float) Between 0 and ∞.
cdf(z, precise)
A Cumulative Distribution Function estimates the area under a Gaussian curve between negative infinity and the Z Score.
Parameters:
z (float) : (float) Z Score.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 1.
cdfinv(a, precise)
An Inverse Cumulative Distribution Function reverses the cdf() by estimating the Z Score from an area.
Parameters:
a (float) : (float) Area between 0 and 1.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between -∞ and +∞
cdfab(z1, z2, precise)
A Cumulative Distribution Function from A to B estimates the area under a Gaussian curve between two Z Scores (A and B).
Parameters:
z1 (float) : (float) First Z Score.
z2 (float) : (float) Second Z Score.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 1.
ttt(z, precise)
A Two-Tailed Test estimates the area under a Gaussian curve between symmetrical ± Z scores and ± infinity.
Parameters:
z (float) : (float) One of the symmetrical Z Scores.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 1.
tttinv(a, precise)
An Inverse Two-Tailed Test reverses the ttt() by estimating the absolute Z Score from an area.
Parameters:
a (float) : (float) Area between 0 and 1.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and ∞.
ott(z, precise)
A One-Tailed Test estimates the area under a Gaussian curve between an absolute Z Score and infinity.
Parameters:
z (float) : (float) Z Score.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 1.
ottinv(a, precise)
An Inverse One-Tailed Test Reverses the ott() by estimating the Z Score from a an area.
Parameters:
a (float) : (float) Area between 0 and 1.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and ∞.
Trailing Management (Zeiierman)
█ Overview
The Trailing Management (Zeiierman) indicator is designed for traders who seek an automated and dynamic approach to managing trailing stops. It helps traders make systematic decisions regarding when to enter and exit trades based on the calculated risk-reward ratio. By providing a clear visual representation of trailing stop levels and risk-reward metrics, the indicator is an essential tool for both novice and experienced traders aiming to enhance their trading discipline.
The Trailing Management (Zeiierman) indicator integrates a Break-Even Curve feature to enhance its utility in trailing stop management and risk-reward optimization. The Break-Even Curve illuminates the precise point at which a trade neither gains nor loses value, offering clarity on the risk-reward landscape. Furthermore, this precise point is calculated based on the required win rate and the risk/reward ratio. This calculation aids traders in understanding the type of strategy they need to employ at any given time to be profitable. In other words, traders can, at any given point, assess the kind of strategy they need to utilize to make money, depending on the price's position within the risk/reward box.
█ How It Works
The indicator operates by computing the highest high and the lowest low over a user-defined period and then applying this information to determine optimal trailing stop levels for both long and short positions.
Directional Bias:
It establishes the direction of the market trend by comparing the index of the highest high and the lowest low within the lookback period.
Bullish
Bearish
Trailing Stop Adjustment:
The trailing stops are adjusted using one of three methods: an automatic calculation based on the median of recent peak differences, pivot points, or a fixed percentage defined by the user.
The Break-Even Curve:
The Break-Even Curve, along with the risk/reward ratio, is determined through the trailing method. This approach utilizes the current closing price as a hypothetical entry point for trades. All calculations, including those for the curve, are based on this current closing price, ensuring real-time accuracy and relevance. As market conditions fluctuate, the curve dynamically adjusts, offering traders a visual benchmark that signifies the break-even point. This real-time adjustment provides traders with an invaluable tool, allowing them to visually track how shifts in the market could impact the point at which their trades neither gain nor lose value.
Example:
Let's say the price is at the midpoint of the risk/reward box; this means that the risk/reward ratio should be 1:1, and the minimum win rate is 50% to break even.
In this example, we can see that the price is near the stop-loss level. If you are about to take a trade in this area and would respect your stop, you only need to have a minimum win rate of 11% to earn money, given the risk/reward ratio, assuming that you hold the trade to the target.
In other words, traders can, at any given point, assess the kind of strategy they need to employ to make money based on the price's position within the risk/reward box.
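The relationship behind these numbers is the standard break-even formula; as a one-line sketch:
breakEvenWinRate(float risk, float reward) =>
    risk / (risk + reward)  // risk 1, reward 1 -> 50%; risk 1, reward 8 -> ≈ 11%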
█ How to Use
Market Bias:
When using the Auto Bias feature, the indicator calculates the underlying market bias and displays it as either bullish or bearish. This helps traders align their trades with the underlying market trend.
Risk Management:
By observing the plotted trailing stops and the risk-reward ratios, traders can make strategic decisions to enter or exit positions, effectively managing the risk.
Strategy selection:
The Break-Even Curve is a powerful tool for managing risk, allowing traders to visualize the relationship between their trailing stops and the market's price movements. By understanding where the break-even point lies, traders can adjust their strategies to either lock in profits or cut losses.
Based on the plotted risk/reward box and the location of the price within this box, traders can easily see the win rate required by their strategy to make money in the long run, given the risk/reward ratio.
Consider this example: The market is bullish, as indicated by the bias, and the indicator suggests looking into long trades. The price is near the top of the risk/reward box, which means entering the market right now carries a huge risk, and the potential reward is very low. To take this trade, traders must have a strategy with a win rate of at least 90%.
█ Settings
Trailing Method:
Auto: The indicator calculates the trailing stop dynamically based on market conditions.
Pivot: The trailing stop is adjusted to the highest high (long positions) or lowest low (short positions) identified within a specified lookback period. This method uses the pivotal points of the market to set the trailing stop.
Percentage: The trailing stop is set at a fixed percentage away from the peak high or low.
Trailing Size (prd):
This setting defines the lookback period for the highest high and lowest low, which affects the sensitivity of the trailing stop to price movements.
Percentage Step (perc):
If the 'Percentage' method is selected, this setting determines the fixed percentage for the trailing stop distance.
Set Bias (bias):
Allows users to set a market bias which can be Bullish, Bearish, or Auto, affecting how the trailing stop is adjusted in relation to the market trend.
-----------------
Disclaimer
The information contained in my Scripts/Indicators/Ideas/Algos/Systems does not constitute financial advice or a solicitation to buy or sell any securities of any type. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
My Scripts/Indicators/Ideas/Algos/Systems are only for educational purposes!
[blackcat] L2 Ehlers Truncated BP Filter
Level: 2
Background
John F. Ehlers introduced the Truncated BandPass (BP) Filter in July 2020.
Function
In his article “Truncated Indicators” (July 2020), Dr. Ehlers introduces a method that can be used to modify some indicators, improving how accurately they are able to track and respond to price action. By limiting the data range, that is, truncating the data, indicators may be able to better handle extreme price events. A reasonable goal, especially during times of high volatility. John Ehlers shows how to improve a bandpass filter's ability to reflect price by limiting the data range. Filtering out temporary spikes and price extremes should positively affect indicator stability. Enter a new indicator: the Truncated BandPass (BP) filter.
Cumulative indicators, such as the EMA or MACD, are affected not only by previous candles, but by a theoretically infinite history of candles. Although this effect is often assumed to be negligible, John Ehlers demonstrates in his article that it is not so. Or at least not for a narrow-band bandpass filter.
Bandpass filters are normally used for detecting cycles in price curves. But they do not work well with steep edges in the price curve. Sudden price jumps cause a narrow-band filter to “ring like a bell” and generate artificial cycles that can cause false triggers. As a solution, Ehlers proposes to truncate the candle history of the filter. Limiting the history to 10 bars effectively dampened the filter output and produced a better representation of the cycles in the price curve. For limiting the history of a cumulative indicator, John Ehlers proposes “Truncated Indicators,” John Ehlers takes us aside to look at the impact of sharp price movements on two fundamentally different types of filters: finite impulse response, and infinite impulse response filters. Given recent market conditions, this is a very well timed subject.
As demonstrated in this script, Ehlers suggests "truncation" as an adjustment to the way the trader calculates filters. He explains why truncation is not appropriate for finite impulse response filters, but why it can be beneficial to infinite impulse response filters. He then shows how to apply truncation to infinite impulse response filters, using his bandpass filter as an example.
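A sketch of the truncation idea using Ehlers' classic bandpass recursion (coefficients follow his standard formulation; the published script's variable names may differ):
// re-run the IIR recursion from a zero state over only the last `trunc` bars,
// instead of carrying an infinite candle history
truncatedBP(float src, int period, float bandwidth, int trunc) =>
    float l1 = math.cos(2.0 * math.pi / period)
    float g1 = math.cos(2.0 * math.pi * bandwidth / period)
    float s1 = 1.0 / g1 - math.sqrt(1.0 / (g1 * g1) - 1.0)
    array<float> bpt = array.new<float>(trunc + 3, 0.0)
    for j = 0 to trunc
        int i = trunc - j  // oldest offset first, stepping toward the current bar
        float a = 0.5 * (1.0 - s1) * (src[i] - src[i + 2])
        float b = l1 * (1.0 + s1) * array.get(bpt, i + 1)
        float c = s1 * array.get(bpt, i + 2)
        array.set(bpt, i, a + b - c)
    array.get(bpt, 0)
A 10-bar truncation, as mentioned above, would be truncatedBP(close, 20, 0.1, 10) (the period and bandwidth here are placeholders).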
Key Signal
BPT --> Truncated BandPass (BP) Filter fast line
Trigger --> Truncated BandPass (BP) Filter slow line
Pros and Cons
100% John F. Ehlers definition translation, even variable names are the same. This help readers who would like to use pine to read his book.
Remarks
The 98th script for Blackcat1402 John F. Ehlers Week publication.
Readme
In real life, I am a prolific inventor. I have successfully applied for more than 60 international and regional patents in the past 12 years. But in the past two years or so, I have tried to transfer my creativity to the development of trading strategies. Tradingview is the ideal platform for me. I am selecting and contributing some of the hundreds of scripts to publish in Tradingview community. Welcome everyone to interact with me to discuss these interesting pine scripts.
The scripts posted are categorized into 5 levels according to my efforts or manhours put into these works.
Level 1 : interesting script snippets or distinctive improvement from classic indicators or strategy. Level 1 scripts can usually appear in more complex indicators as a function module or element.
Level 2 : composite indicator/strategy. By selecting or combining several independent or dependent functions or sub indicators in proper way, the composite script exhibits a resonance phenomenon which can filter out noise or fake trading signal to enhance trading confidence level.
Level 3 : comprehensive indicator/strategy. They are simple trading systems based on my strategies. They are commonly containing several or all of entry signal, close signal, stop loss, take profit, re-entry, risk management, and position sizing techniques. Even some interesting fundamental and mass psychological aspects are incorporated.
Level 4 : script snippets or functions that do not disclose source code. Interesting element that can reveal market laws and work as raw material for indicators and strategies. If you find Level 1~2 scripts are helpful, Level 4 is a private version that took me far more efforts to develop.
Level 5 : indicator/strategy that do not disclose source code. private version of Level 3 script with my accumulated script processing skills or a large number of custom functions. I had a private function library built in past two years. Level 5 scripts use many of them to achieve private trading strategy.
Dynamic Equity Allocation Model"Cash is Trash"? Not Always. Here's Why Science Beats Guesswork.
Every retail trader knows the frustration: you draw support and resistance lines, you spot patterns, you follow market gurus on social media—and still, when the next bear market hits, your portfolio bleeds red. Meanwhile, institutional investors seem to navigate market turbulence with ease, preserving capital when markets crash and participating when they rally. What's their secret?
The answer isn't insider information or access to exotic derivatives. It's systematic, scientifically validated decision-making. While most retail traders rely on subjective chart analysis and emotional reactions, professional portfolio managers use quantitative models that remove emotion from the equation and process multiple streams of market information simultaneously.
This document presents exactly such a system—not a proprietary black box available only to hedge funds, but a fully transparent, academically grounded framework that any serious investor can understand and apply. The Dynamic Equity Allocation Model (DEAM) synthesizes decades of financial research from Nobel laureates and leading academics into a practical tool for tactical asset allocation.
Stop drawing colorful lines on your chart and start thinking like a quant. This isn't about predicting where the market goes next week—it's about systematically adjusting your risk exposure based on what the data actually tells you. When valuations scream danger, when volatility spikes, when credit markets freeze, when multiple warning signals align—that's when cash isn't trash. That's when cash saves your portfolio.
The irony of "cash is trash" rhetoric is that it ignores timing. Yes, being 100% cash for decades would be disastrous. But being 100% equities through every crisis is equally foolish. The sophisticated approach is dynamic: aggressive when conditions favor risk-taking, defensive when they don't. This model shows you how to make that decision systematically, not emotionally.
Whether you're managing your own retirement portfolio or seeking to understand how institutional allocation strategies work, this comprehensive analysis provides the theoretical foundation, mathematical implementation, and practical guidance to elevate your investment approach from amateur to professional.
The choice is yours: keep hoping your chart patterns work out, or start using the same quantitative methods that professionals rely on. The tools are here. The research is cited. The methodology is explained. All you need to do is read, understand, and apply.
The Dynamic Equity Allocation Model (DEAM) is a quantitative framework for systematic allocation between equities and cash, grounded in modern portfolio theory and empirical market research. The model integrates five scientifically validated dimensions of market analysis—market regime, risk metrics, valuation, sentiment, and macroeconomic conditions—to generate dynamic allocation recommendations ranging from 0% to 100% equity exposure. This work documents the theoretical foundations, mathematical implementation, and practical application of this multi-factor approach.
1. Introduction and Theoretical Background
1.1 The Limitations of Static Portfolio Allocation
Traditional portfolio theory, as formulated by Markowitz (1952) in his seminal work "Portfolio Selection," assumes an optimal static allocation where investors distribute their wealth across asset classes according to their risk aversion. This approach rests on the assumption that returns and risks remain constant over time. However, empirical research demonstrates that this assumption does not hold in reality. Fama and French (1989) showed that expected returns vary over time and correlate with macroeconomic variables such as the spread between long-term and short-term interest rates. Campbell and Shiller (1988) demonstrated that the price-earnings ratio possesses predictive power for future stock returns, providing a foundation for dynamic allocation strategies.
The academic literature on tactical asset allocation has evolved considerably over recent decades. Ilmanen (2011) argues in "Expected Returns" that investors can improve their risk-adjusted returns by considering valuation levels, business cycles, and market sentiment. The Dynamic Equity Allocation Model presented here builds on this research tradition and operationalizes these insights into a practically applicable allocation framework.
1.2 Multi-Factor Approaches in Asset Allocation
Modern financial research has shown that different factors capture distinct aspects of market dynamics and together provide a more robust picture of market conditions than individual indicators. Ross (1976) developed the Arbitrage Pricing Theory, a model that employs multiple factors to explain security returns. Following this multi-factor philosophy, DEAM integrates five complementary analytical dimensions, each tapping different information sources and collectively enabling comprehensive market understanding.
2. Data Foundation and Data Quality
2.1 Data Sources Used
The model draws its data exclusively from publicly available market data via the TradingView platform. This transparency and accessibility is a significant advantage over proprietary models that rely on non-public data. The data foundation encompasses several categories of market information, each capturing specific aspects of market dynamics.
First, price data for the S&P 500 Index is obtained through the SPDR S&P 500 ETF (ticker: SPY). A highly liquid ETF is used instead of the index itself for practical reasons: ETF data is available in real time and reflects actual tradability. In addition to closing prices, high, low, and volume data are captured, which are required for calculating advanced volatility measures.
Fundamental corporate metrics are retrieved via TradingView's Financial Data API. These include earnings per share, price-to-earnings ratio, return on equity, debt-to-equity ratio, dividend yield, and share buyback yield. Cochrane (2011) emphasizes in "Presidential Address: Discount Rates" the central importance of valuation metrics for forecasting future returns, making these fundamental data a cornerstone of the model.
Volatility indicators are represented by the CBOE Volatility Index (VIX) and related metrics. The VIX, often referred to as the market's "fear gauge," measures the implied volatility of S&P 500 index options and serves as a proxy for market participants' risk perception. Whaley (2000) describes in "The Investor Fear Gauge" the construction and interpretation of the VIX and its use as a sentiment indicator.
Macroeconomic data includes yield curve information through US Treasury bonds of various maturities and credit risk premiums through the spread between high-yield bonds and risk-free government bonds. These variables capture the macroeconomic conditions and financing conditions relevant for equity valuation. Estrella and Hardouvelis (1991) showed that the shape of the yield curve has predictive power for future economic activity, justifying the inclusion of these data.
2.2 Handling Missing Data
A practical problem when working with financial data is dealing with missing or unavailable values. The model implements a fallback system where a plausible historical average value is stored for each fundamental metric. When current data is unavailable for a specific point in time, this fallback value is used. This approach ensures that the model remains functional even during temporary data outages and avoids systematic biases from missing data. The use of average values as fallback is conservative, as it generates neither overly optimistic nor pessimistic signals.
3. Component 1: Market Regime Detection
3.1 The Concept of Market Regimes
The idea that financial markets exist in different "regimes" or states that differ in their statistical properties has a long tradition in financial science. Hamilton (1989) developed regime-switching models that allow distinguishing between different market states with different return and volatility characteristics. The practical application of this theory consists of identifying the current market state and adjusting portfolio allocation accordingly.
DEAM classifies market regimes using a scoring system that considers three main dimensions: trend strength, volatility level, and drawdown depth. This multidimensional view is more robust than focusing on individual indicators, as it captures various facets of market dynamics. Classification occurs into six distinct regimes: Strong Bull, Bull Market, Neutral, Correction, Bear Market, and Crisis.
3.2 Trend Analysis Through Moving Averages
Moving averages are among the oldest and most widely used technical indicators and have also received attention in academic literature. Brock, Lakonishok, and LeBaron (1992) examined in "Simple Technical Trading Rules and the Stochastic Properties of Stock Returns" the profitability of trading rules based on moving averages and found evidence for their predictive power, although later studies questioned the robustness of these results when considering transaction costs.
The model calculates three moving averages with different time windows: a 20-day average (approximately one trading month), a 50-day average (approximately one quarter), and a 200-day average (approximately one trading year). The relationship of the current price to these averages and the relationship of the averages to each other provide information about trend strength and direction. When the price trades above all three averages and the short-term average is above the long-term, this indicates an established uptrend. The model assigns points based on these constellations, with longer-term trends weighted more heavily as they are considered more persistent.
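For illustration, a minimal Pine Script sketch of this scoring idea; the point values and names here are assumptions, not the script's actual weights:

//@version=5
indicator("Trend score sketch")
ma20  = ta.sma(close, 20)   // ~ one trading month
ma50  = ta.sma(close, 50)   // ~ one quarter
ma200 = ta.sma(close, 200)  // ~ one trading year
trendScore = 0.0            // illustrative point values, not the model's actual weights
trendScore += close > ma20  ? 20 : 0   // short-term trend
trendScore += close > ma50  ? 30 : 0   // medium-term trend
trendScore += close > ma200 ? 35 : 0   // long-term trend, weighted most heavily
trendScore += ma50 > ma200  ? 15 : 0   // alignment of the averages themselves
plot(trendScore, "Trend score (0-100)")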
3.3 Volatility Regimes
Volatility, understood as the standard deviation of returns, is a central concept of financial theory and serves as the primary risk measure. However, research has shown that volatility is not constant but changes over time and occurs in clusters—a phenomenon first documented by Mandelbrot (1963) and later formalized through ARCH and GARCH models (Engle, 1982; Bollerslev, 1986).
DEAM calculates volatility not only through the classic method of return standard deviation but also uses more advanced estimators such as the Parkinson estimator and the Garman-Klass estimator. These methods utilize intraday information (high and low prices) and are more efficient than simple close-to-close volatility estimators. The Parkinson estimator (Parkinson, 1980) uses the range between high and low of a trading day and is based on the recognition that this information reveals more about true volatility than just the closing price difference. The Garman-Klass estimator (Garman and Klass, 1980) extends this approach by additionally considering opening and closing prices.
The calculated volatility is annualized by multiplying it by the square root of 252 (the average number of trading days per year), enabling standardized comparability. The model compares current volatility with the VIX, the implied volatility from option prices. A low VIX (below 15) signals market comfort and increases the regime score, while a high VIX (above 35) indicates market stress and reduces the score. This interpretation follows the empirical observation that elevated volatility is typically associated with falling markets (Schwert, 1989).
3.4 Drawdown Analysis
A drawdown refers to the percentage decline from the highest point (peak) to the lowest point (trough) during a specific period. This metric is psychologically significant for investors as it represents the maximum loss experienced. Young (1991) introduced the Calmar Ratio, which relates return to maximum drawdown, underscoring the practical relevance of this metric.
The model calculates current drawdown as the percentage distance from the highest price of the last 252 trading days (one year). A drawdown below 3% is considered negligible and maximally increases the regime score. As drawdown increases, the score decreases progressively, with drawdowns above 20% classified as severe and indicating a crisis or bear market regime. These thresholds are empirically motivated by historical market cycles, in which corrections typically encompassed 5-10% drawdowns, bear markets 20-30%, and crises over 30%.
3.5 Regime Classification
Final regime classification occurs through aggregation of scores from trend (40% weight), volatility (30%), and drawdown (30%). The higher weighting of trend reflects the empirical observation that trend-following strategies have historically delivered robust results (Moskowitz, Ooi, and Pedersen, 2012). A total score above 80 signals a strong bull market with established uptrend, low volatility, and minimal losses. At a score below 10, a crisis situation exists requiring defensive positioning. The six regime categories enable a differentiated allocation strategy that not only distinguishes binarily between bullish and bearish but allows gradual gradations.
4. Component 2: Risk-Based Allocation
4.1 Volatility Targeting as Risk Management Approach
The concept of volatility targeting is based on the idea that investors should maximize not returns but risk-adjusted returns. Sharpe (1966, 1994) defined with the Sharpe Ratio the fundamental concept of return per unit of risk, measured as volatility. Volatility targeting goes a step further and adjusts portfolio allocation to achieve constant target volatility. This means that in times of low market volatility, equity allocation is increased, and in times of high volatility, it is reduced.
Moreira and Muir (2017) showed in "Volatility-Managed Portfolios" that strategies that adjust their exposure based on volatility forecasts achieve higher Sharpe Ratios than passive buy-and-hold strategies. DEAM implements this principle by defining a target portfolio volatility (default 12% annualized) and adjusting equity allocation to achieve it. The mathematical foundation is simple: if market volatility is 20% and target volatility is 12%, equity allocation should be 60% (12/20 = 0.6), with the remaining 40% held in cash with zero volatility.
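A minimal sketch of this proportional rule, assuming a 20-day realized-volatility estimate (window length and names are illustrative):

//@version=5
indicator("Volatility targeting sketch")
targetVol = input.float(12.0, "Target volatility %")
ret       = math.log(close / close[1])                 // log return
realized  = ta.stdev(ret, 20) * math.sqrt(252) * 100   // annualized, in %
weight    = realized > 0 ? math.min(1.0, targetVol / realized) : 1.0   // e.g. 12/20 = 0.6
plot(weight * 100, "Equity allocation %")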
4.2 Market Volatility Calculation
Estimating current market volatility is central to the risk-based allocation approach. The model uses several volatility estimators in parallel and selects the higher value between traditional close-to-close volatility and the Parkinson estimator. This conservative choice ensures the model does not underestimate true volatility, which could lead to excessive risk exposure.
Traditional volatility calculation uses logarithmic returns, as these have a mathematically convenient property: they are additive across periods. The logarithmic return is calculated as ln(P_t / P_{t-1}), where P_t is the price at time t. The standard deviation of these returns over a rolling 20-trading-day window is then multiplied by √252 to obtain annualized volatility. This annualization is based on the assumption of independently and identically distributed returns, which is an idealization but widely accepted in practice.
The Parkinson estimator uses additional information from the trading range (High minus Low) of each day. The formula is: σ_P = (1/√(4ln2)) × √(1/n × Σln²(H_i/L_i)) × √252, where H_i and L_i are high and low prices. Under ideal conditions, this estimator is approximately five times more efficient than the close-to-close estimator (Parkinson, 1980), as it uses more information per observation.
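A direct Pine Script transcription of the formula above; the 20-bar window is an assumption for illustration:

//@version=5
indicator("Parkinson volatility sketch")
n = 20
sumSq = 0.0
for i = 0 to n - 1
    sumSq += math.pow(math.log(high[i] / low[i]), 2)   // ln²(H/L) per bar
parkinson = math.sqrt(sumSq / (4 * math.log(2) * n)) * math.sqrt(252)
plot(parkinson * 100, "Parkinson volatility % (annualized)")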
4.3 Drawdown-Based Position Size Adjustment
In addition to volatility targeting, the model implements drawdown-based risk control. The logic is that deep market declines often signal further losses and therefore justify exposure reduction. This behavior corresponds with the concept of path-dependent risk tolerance: investors who have already suffered losses are typically less willing to take additional risk (Kahneman and Tversky, 1979).
The model defines a maximum portfolio drawdown as a target parameter (default 15%). Since portfolio volatility and portfolio drawdown are proportional to equity allocation (assuming cash has neither volatility nor drawdown), allocation-based control is possible. For example, if the market exhibits a 25% drawdown and target portfolio drawdown is 15%, equity allocation should be at most 60% (15/25).
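A minimal sketch of this proportional cap, reusing the one-year rolling peak from section 3.4 (names are illustrative):

//@version=5
indicator("Drawdown cap sketch")
maxDD    = input.float(15.0, "Target max portfolio drawdown %")
peak     = ta.highest(close, 252)
marketDD = (peak - close) / peak * 100                           // current market drawdown %
cap      = marketDD > 0 ? math.min(1.0, maxDD / marketDD) : 1.0  // e.g. 15/25 = 0.6
plot(cap * 100, "Max equity allocation %")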
4.4 Dynamic Risk Adjustment
An advanced feature of DEAM is dynamic adjustment of risk-based allocation through a feedback mechanism. The model continuously estimates what actual portfolio volatility and portfolio drawdown would result at the current allocation. If risk utilization (ratio of actual to target risk) exceeds 1.0, allocation is reduced by an adjustment factor that grows exponentially with overutilization. This implements a form of dynamic feedback that avoids overexposure.
Mathematically, a risk adjustment factor r_adjust is calculated: if risk utilization u > 1, then r_adjust = exp(-0.5 × (u - 1)). This exponential function ensures that moderate overutilization is gently corrected, while strong overutilization triggers drastic reductions. The factor 0.5 in the exponent was empirically calibrated to achieve a balanced ratio between sensitivity and stability.
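A minimal sketch of this damping rule (function and variable names are assumptions):

//@version=5
indicator("Risk feedback sketch")
f_riskAdjust(float utilization) =>
    utilization > 1.0 ? math.exp(-0.5 * (utilization - 1.0)) : 1.0
// Example: 25% over target shrinks allocation to ~88% of its previous size.
plot(f_riskAdjust(1.25), "Adjustment factor")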
5. Component 3: Valuation Analysis
5.1 Theoretical Foundations of Fundamental Valuation
DEAM's valuation component is based on the fundamental premise that the intrinsic value of a security is determined by its future cash flows and that deviations between market price and intrinsic value are eventually corrected. Graham and Dodd (1934) established in "Security Analysis" the basic principles of fundamental analysis that remain relevant today. Translated into modern portfolio context, this means that markets with high valuation metrics (high price-earnings ratios) should have lower expected returns than cheaply valued markets.
Campbell and Shiller (1988) developed the Cyclically Adjusted P/E Ratio (CAPE), which smooths earnings over a full business cycle. Their empirical analysis showed that this ratio has significant predictive power for 10-year returns. Asness, Moskowitz, and Pedersen (2013) demonstrated in "Value and Momentum Everywhere" that value effects exist not only in individual stocks but also in asset classes and markets.
5.2 Equity Risk Premium as Central Valuation Metric
The Equity Risk Premium (ERP) is defined as the expected excess return of stocks over risk-free government bonds. It is the theoretical heart of valuation analysis, as it represents the compensation investors demand for bearing equity risk. Damodaran (2012) discusses in "Equity Risk Premiums: Determinants, Estimation and Implications" various methods for ERP estimation.
DEAM calculates ERP not through a single method but combines four complementary approaches with different weights. This multi-method strategy increases estimation robustness and avoids dependence on single, potentially erroneous inputs.
The first method (35% weight) uses earnings yield, calculated as 1/P/E or directly from operating earnings data, and subtracts the 10-year Treasury yield. This method follows Fed Model logic (Yardeni, 2003), although this model has theoretical weaknesses as it does not consistently treat inflation (Asness, 2003).
The second method (30% weight) extends earnings yield by share buyback yield. Share buybacks are a form of capital return to shareholders and increase value per share. Boudoukh et al. (2007) showed in "The Total Shareholder Yield" that the sum of dividend yield and buyback yield is a better predictor of future returns than dividend yield alone.
The third method (20% weight) implements the Gordon Growth Model (Gordon, 1962), which models stock value as the sum of discounted future dividends. Under the assumption of constant growth g: Expected Return = Dividend Yield + g. The model estimates sustainable growth as g = ROE × (1 - Payout Ratio), where ROE is return on equity and the payout ratio is the ratio of dividends to earnings. This formula follows from equity theory: retained earnings are reinvested at the ROE and generate additional earnings growth.
The fourth method (15% weight) combines total shareholder yield (Dividend + Buybacks) with implied growth derived from revenue growth. This method considers that companies with strong revenue growth should generate higher future earnings, even if current valuations do not yet fully reflect this.
The final ERP is the weighted average of these four methods. A high ERP (above 4%) signals attractive valuations and increases the valuation score to 95 out of 100 possible points. A negative ERP, where stocks have lower expected returns than bonds, results in a minimal score of 10.
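The blend reduces to a weighted sum; in this sketch the inputs are placeholders for the fundamental data the model retrieves, and subtracting the 10-year yield in every method is this sketch's reading of the text:

//@version=5
indicator("ERP blend sketch")
earningsYield = input.float(5.0, "Earnings yield %")     // placeholder inputs
buybackYield  = input.float(2.0, "Buyback yield %")
divYield      = input.float(1.5, "Dividend yield %")
growth        = input.float(4.0, "Sustainable growth %") // ROE * (1 - payout ratio)
tenYear       = input.float(4.0, "10Y Treasury yield %")
m1 = earningsYield - tenYear                        // 35%: earnings yield vs. bonds
m2 = earningsYield + buybackYield - tenYear         // 30%: plus buyback yield
m3 = divYield + growth - tenYear                    // 20%: Gordon growth model
m4 = divYield + buybackYield + growth - tenYear     // 15%: total yield plus growth
erp = 0.35 * m1 + 0.30 * m2 + 0.20 * m3 + 0.15 * m4
plot(erp, "Blended equity risk premium %")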
5.3 Quality Adjustments to Valuation
Valuation metrics alone can be misleading if not interpreted in the context of company quality. A company with a low P/E may be cheap or fundamentally problematic. The model therefore implements quality adjustments based on growth, profitability, and capital structure.
Revenue growth above 10% annually adds 10 points to the valuation score, moderate growth above 5% adds 5 points. This adjustment reflects that growth has independent value (Modigliani and Miller, 1961, extended by later growth theory). Net margin above 15% signals pricing power and operational efficiency and increases the score by 5 points, while low margins below 8% indicate competitive pressure and subtract 5 points.
Return on equity (ROE) above 20% characterizes outstanding capital efficiency and increases the score by 5 points. Piotroski (2000) showed in "Value Investing: The Use of Historical Financial Statement Information" that fundamental quality signals such as high ROE can improve the performance of value strategies.
Capital structure is evaluated through the debt-to-equity ratio. A conservative ratio below 1.0 multiplies the valuation score by 1.2, while high leverage above 2.0 applies a multiplier of 0.8. This adjustment reflects that high debt constrains financial flexibility and can become problematic in crisis times (Korteweg, 2010).
6. Component 4: Sentiment Analysis
6.1 The Role of Sentiment in Financial Markets
Investor sentiment, defined as the collective psychological attitude of market participants, influences asset prices independently of fundamental data. Baker and Wurgler (2006, 2007) developed a sentiment index and showed that periods of high sentiment are followed by overvaluations that later correct. This insight justifies integrating a sentiment component into allocation decisions.
Sentiment is difficult to measure directly but can be proxied through market indicators. The VIX is the most widely used sentiment indicator, as it aggregates implied volatility from option prices. High VIX values reflect elevated uncertainty and risk aversion, while low values signal market comfort. Whaley (2009) refers to the VIX as the "Investor Fear Gauge" and documents its role as a contrarian indicator: extremely high values typically occur at market bottoms, while low values occur at tops.
6.2 VIX-Based Sentiment Assessment
DEAM uses statistical normalization of the VIX by calculating the Z-score: z = (VIX_current - VIX_average) / VIX_standard_deviation. The Z-score indicates how many standard deviations the current VIX is from the historical average. This approach is more robust than absolute thresholds, as it adapts to the average volatility level, which can vary over longer periods.
A Z-score below -1.5 (VIX is 1.5 standard deviations below average) signals exceptionally low risk perception and adds 40 points to the sentiment score. This may seem counterintuitive—shouldn't low fear be bullish? However, the logic follows the contrarian principle: when no one is afraid, everyone is already invested, and there is limited further upside potential (Zweig, 1973). Conversely, a Z-score above 1.5 (extreme fear) subtracts 40 points, reflecting market panic but simultaneously suggesting potential buying opportunities.
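A sketch of the normalization; the 252-bar lookback is an assumption, while the point values follow the text:

//@version=5
indicator("VIX z-score sketch")
vix = request.security("CBOE:VIX", timeframe.period, close)
z   = (vix - ta.sma(vix, 252)) / ta.stdev(vix, 252)   // lookback is an assumption
points = z < -1.5 ? 40 : z > 1.5 ? -40 : 0            // contrarian scoring at the extremes
plot(z, "VIX z-score")
plot(points, "Sentiment points")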
6.3 VIX Term Structure as Sentiment Signal
The VIX term structure provides additional sentiment information. Normally, the VIX trades in contango, meaning longer-term VIX futures have higher prices than short-term. This reflects that short-term volatility is currently known, while long-term volatility is more uncertain and carries a risk premium. The model compares the VIX with VIX9D (9-day volatility) and identifies backwardation (VIX > 1.05 × VIX9D) and steep backwardation (VIX > 1.15 × VIX9D).
Backwardation occurs when short-term implied volatility is higher than longer-term, which typically happens during market stress. Investors anticipate immediate turbulence but expect calming. Psychologically, this reflects acute fear. The model subtracts 15 points for backwardation and 30 for steep backwardation, as these constellations signal elevated risk. Simon and Wiggins (2001) analyzed the VIX futures curve and showed that backwardation is associated with market declines.
6.4 Safe-Haven Flows
During crisis times, investors flee from risky assets into safe havens: gold, US dollar, and Japanese yen. This "flight to quality" is a sentiment signal. The model calculates the performance of these assets relative to stocks over the last 20 trading days. When gold or the dollar strongly rise while stocks fall, this indicates elevated risk aversion.
The safe-haven component is calculated as the difference between safe-haven performance and stock performance. Positive values (safe havens outperform) subtract up to 20 points from the sentiment score, negative values (stocks outperform) add up to 10 points. The asymmetric treatment (larger deduction for risk-off than bonus for risk-on) reflects that risk-off movements are typically sharper and more informative than risk-on phases.
Baur and Lucey (2010) examined safe-haven properties of gold and showed that gold indeed exhibits negative correlation with stocks during extreme market movements, confirming its role as crisis protection.
7. Component 5: Macroeconomic Analysis
7.1 The Yield Curve as Economic Indicator
The yield curve, represented as yields of government bonds of various maturities, contains aggregated expectations about future interest rates, inflation, and economic growth. The slope of the yield curve has remarkable predictive power for recessions. Estrella and Mishkin (1998) showed that an inverted yield curve (short-term rates higher than long-term) predicts recessions with high reliability. This is because inverted curves reflect restrictive monetary policy: the central bank raises short-term rates to combat inflation, dampening economic activity.
DEAM calculates two spread measures: the 2-year-minus-10-year spread and the 3-month-minus-10-year spread. A steep, positive curve (spreads above 1.5% and 2% respectively) signals healthy growth expectations and generates the maximum yield curve score of 40 points. A flat curve (spreads near zero) reduces the score to 20 points. An inverted curve (negative spreads) is particularly alarming and results in only 10 points.
The choice of two different spreads increases analysis robustness. The 2-10 spread is most established in academic literature, while the 3M-10Y spread is often considered more sensitive, as the 3-month rate directly reflects current monetary policy (Ang, Piazzesi, and Wei, 2006).
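A sketch of the two-spread scoring; the yield tickers are assumptions and may differ by data feed:

//@version=5
indicator("Yield curve score sketch")
y3m  = request.security("TVC:US03MY", timeframe.period, close)  // tickers are assumptions
y2y  = request.security("TVC:US02Y", timeframe.period, close)
y10y = request.security("TVC:US10Y", timeframe.period, close)
spread2s10s = y10y - y2y
spread3m10y = y10y - y3m
// Steep curve -> 40 points, inverted -> 10, otherwise flat -> 20.
score = spread2s10s > 1.5 and spread3m10y > 2.0 ? 40 : spread2s10s < 0 or spread3m10y < 0 ? 10 : 20
plot(score, "Yield curve score")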
7.2 Credit Conditions and Spreads
Credit spreads—the yield difference between risky corporate bonds and safe government bonds—reflect risk perception in the credit market. Gilchrist and Zakrajšek (2012) constructed an "Excess Bond Premium" that measures the component of credit spreads not explained by fundamentals and showed this is a predictor of future economic activity and stock returns.
The model approximates credit spread by comparing the yield of high-yield bond ETFs (HYG) with investment-grade bond ETFs (LQD). A narrow spread below 200 basis points signals healthy credit conditions and risk appetite, contributing 30 points to the macro score. Very wide spreads above 1000 basis points (as during the 2008 financial crisis) signal credit crunch and generate zero points.
Additionally, the model evaluates whether "flight to quality" is occurring, identified through strong performance of Treasury bonds (TLT) with simultaneous weakness in high-yield bonds. This constellation indicates elevated risk aversion and reduces the credit conditions score.
7.3 Financial Stability at Corporate Level
While the yield curve and credit spreads reflect macroeconomic conditions, financial stability evaluates the health of companies themselves. The model uses the aggregated debt-to-equity ratio and return on equity of the S&P 500 as proxies for corporate health.
A low leverage level below 0.5 combined with high ROE above 15% signals robust corporate balance sheets and generates 20 points. This combination is particularly valuable as it represents both defensive strength (low debt means crisis resistance) and offensive strength (high ROE means earnings power). High leverage above 1.5 generates only 5 points, as it implies vulnerability to interest rate increases and recessions.
Korteweg (2010) showed in "The Net Benefits to Leverage" that optimal debt maximizes firm value, but excessive debt increases distress costs. At the aggregated market level, high debt indicates fragilities that can become problematic during stress phases.
8. Component 6: Crisis Detection
8.1 The Need for Systematic Crisis Detection
Financial crises are rare but extremely impactful events that suspend normal statistical relationships. During normal market volatility, diversified portfolios and traditional risk management approaches function, but during systemic crises, seemingly independent assets suddenly correlate strongly, and losses exceed historical expectations (Longin and Solnik, 2001). This justifies a separate crisis detection mechanism that operates independently of regular allocation components.
Reinhart and Rogoff (2009) documented in "This Time Is Different: Eight Centuries of Financial Folly" recurring patterns in financial crises: extreme volatility, massive drawdowns, credit market dysfunction, and asset price collapse. DEAM operationalizes these patterns into quantifiable crisis indicators.
8.2 Multi-Signal Crisis Identification
The model uses a counter-based approach where various stress signals are identified and aggregated. This methodology is more robust than relying on a single indicator, as true crises typically occur simultaneously across multiple dimensions. A single signal may be a false alarm, but the simultaneous presence of multiple signals increases confidence.
The first indicator is a VIX above the crisis threshold (default 40), adding one point. A VIX above 60 (as in 2008 and March 2020) adds two additional points, as such extreme values are historically very rare. This tiered approach captures the intensity of volatility.
The second indicator is market drawdown. A drawdown above 15% adds one point, as corrections of this magnitude can be potential harbingers of larger crises. A drawdown above 25% adds another point, as historical bear markets typically encompass 25-40% drawdowns.
The third indicator is credit market spreads above 500 basis points, adding one point. Such wide spreads occur only during significant credit market disruptions, as in 2008 during the Lehman crisis.
The fourth indicator identifies simultaneous losses in stocks and bonds. Normally, Treasury bonds act as a hedge against equity risk (negative correlation), but when both fall simultaneously, this indicates systemic liquidity problems or inflation/stagflation fears. The model checks whether both SPY and TLT have fallen more than 10% and 5% respectively over 5 trading days, adding two points.
The fifth indicator is a volume spike combined with negative returns. Extreme trading volumes (above twice the 20-day average) with falling prices signal panic selling. This adds one point.
A crisis situation is diagnosed when at least 3 indicators trigger, a severe crisis at 5 or more indicators. These thresholds were calibrated through historical backtesting to identify true crises (2008, 2020) without generating excessive false alarms.
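A partial sketch of the counter logic; only the volatility and drawdown families are shown, and names are assumptions:

//@version=5
indicator("Crisis counter sketch")
vix      = request.security("CBOE:VIX", timeframe.period, close)
peak     = ta.highest(close, 252)
drawdown = (peak - close) / peak * 100
signals = 0
signals += vix > 40 ? 1 : 0        // crisis-level volatility
signals += vix > 60 ? 2 : 0        // extreme volatility (2008, March 2020)
signals += drawdown > 15 ? 1 : 0   // significant drawdown
signals += drawdown > 25 ? 1 : 0   // bear-market drawdown
// Credit-spread, stock/bond co-decline, and volume-spike checks omitted here.
isCrisis = signals >= 3
isSevere = signals >= 5
bgcolor(isSevere ? color.new(color.red, 70) : isCrisis ? color.new(color.orange, 80) : na)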
8.3 Crisis-Based Allocation Override
When a crisis is detected, the system overrides the normal allocation recommendation and caps equity allocation at a maximum of 25%. In a severe crisis, the cap is lowered to 10%. This drastic defensive posture follows the empirical observation that crises typically require time to develop and that early reduction can avoid substantial losses (Faber, 2007).
This override logic implements a "safety first" principle: in situations of existential danger to the portfolio, capital preservation becomes the top priority. Roy (1952) formalized this approach in "Safety First and the Holding of Assets," arguing that investors should primarily minimize ruin probability.
9. Integration and Final Allocation Calculation
9.1 Component Weighting
The final allocation recommendation emerges through weighted aggregation of the five components. The standard weighting is: Market Regime 35%, Risk Management 25%, Valuation 20%, Sentiment 15%, Macro 5%. These weights reflect both theoretical considerations and empirical backtesting results.
The highest weighting of market regime is based on evidence that trend-following and momentum strategies have delivered robust results across various asset classes and time periods (Moskowitz, Ooi, and Pedersen, 2012). Current market momentum is highly informative for the near future, although it provides no information about long-term expectations.
The substantial weighting of risk management (25%) follows from the central importance of risk control. Wealth preservation is the foundation of long-term wealth creation, and systematic risk management is demonstrably value-creating (Moreira and Muir, 2017).
The valuation component receives 20% weight, based on the long-term mean reversion of valuation metrics. While valuation has limited short-term predictive power (bull and bear markets can begin at any valuation), the long-term relationship between valuation and returns is robustly documented (Campbell and Shiller, 1988).
Sentiment (15%) and Macro (5%) receive lower weights, as these factors are subtler and harder to measure. Sentiment is valuable as a contrarian indicator at extremes but less informative in normal ranges. Macro variables such as the yield curve have strong predictive power for recessions, but the transmission from recessions to stock market performance is complex and temporally variable.
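Aggregation itself is a weighted sum; a sketch with placeholder component scores, with the 3-period smoothing from section 9.3 applied at the end:

//@version=5
indicator("Component aggregation sketch")
regimeScore    = input.float(70.0, "Regime score")     // placeholder 0-100 scores
riskScore      = input.float(55.0, "Risk score")
valuationScore = input.float(60.0, "Valuation score")
sentimentScore = input.float(50.0, "Sentiment score")
macroScore     = input.float(65.0, "Macro score")
raw = 0.35 * regimeScore + 0.25 * riskScore + 0.20 * valuationScore + 0.15 * sentimentScore + 0.05 * macroScore
plot(ta.sma(raw, 3), "Smoothed equity allocation %")   // 3-period smoothing, see 9.3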
9.2 Model Type Adjustments
DEAM allows users to choose between four model types: Conservative, Balanced, Aggressive, and Adaptive. This choice modifies the final allocation through additive adjustments.
Conservative mode subtracts 10 percentage points from allocation, resulting in consistently more cautious positioning. This is suitable for risk-averse investors or those with limited investment horizons. Aggressive mode adds 10 percentage points, suitable for risk-tolerant investors with long horizons.
Adaptive mode implements procyclical adjustment based on short-term momentum: if the market has risen more than 5% in the last 20 days, 5 percentage points are added; if it has declined more than 5%, 5 points are subtracted. This logic follows the observation that short-term momentum persists (Jegadeesh and Titman, 1993), but the moderate size of adjustment avoids excessive timing bets.
Balanced mode makes no adjustment and uses raw model output. This neutral setting is suitable for investors who wish to trust model recommendations unchanged.
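A compact sketch of the four mode adjustments (input and variable names are assumptions):

//@version=5
indicator("Model type adjustment sketch")
mode  = input.string("Balanced", "Model type", options = ["Conservative", "Balanced", "Aggressive", "Adaptive"])
mom20 = (close / close[20] - 1) * 100   // 20-day momentum in %
adj = switch mode
    "Conservative" => -10
    "Aggressive"   => 10
    "Adaptive"     => mom20 > 5 ? 5 : mom20 < -5 ? -5 : 0
    => 0
plot(adj, "Allocation adjustment (percentage points)")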
9.3 Smoothing and Stability
The allocation resulting from aggregation undergoes final smoothing through a simple moving average over 3 periods. This smoothing is crucial for model practicality, as it reduces frequent trading and thus transaction costs. Without smoothing, the model could fluctuate between adjacent allocations with every small input change.
The choice of 3 periods as smoothing window is a compromise between responsiveness and stability. Longer smoothing would excessively delay signals and impede response to true regime changes. Shorter or no smoothing would allow too much noise. Empirical tests showed that 3-period smoothing offers an optimal ratio between these goals.
10. Visualization and Interpretation
10.1 Main Output: Equity Allocation
DEAM's primary output is a time series from 0 to 100 representing the recommended percentage allocation to equities. This representation is intuitive: 100% means full investment in stocks (specifically: an S&P 500 ETF), 0% means complete cash position, and intermediate values correspond to mixed portfolios. A value of 60% means, for example: invest 60% of wealth in SPY, hold 40% in money market instruments or cash.
The time series is color-coded to enable quick visual interpretation. Green shades represent high allocations (above 80%, bullish), red shades low allocations (below 20%, bearish), and neutral colors middle allocations. The chart background is dynamically colored based on the signal, enhancing readability in different market phases.
10.2 Dashboard Metrics
A tabular dashboard presents key metrics compactly. This includes current allocation, cash allocation (complement), an aggregated signal (BULLISH/NEUTRAL/BEARISH), current market regime, VIX level, market drawdown, and crisis status.
Additionally, fundamental metrics are displayed: P/E Ratio, Equity Risk Premium, Return on Equity, Debt-to-Equity Ratio, and Total Shareholder Yield. This transparency allows users to understand model decisions and form their own assessments.
Component scores (Regime, Risk, Valuation, Sentiment, Macro) are also displayed, each normalized on a 0-100 scale. This shows which factors primarily drive the current recommendation. If, for example, the Risk score is very low (20) while other scores are moderate (50-60), this indicates that risk management considerations are pulling allocation down.
10.3 Component Breakdown (Optional)
Advanced users can display individual components as separate lines in the chart. This enables analysis of component dynamics: do all components move synchronously, or are there divergences? Divergences can be particularly informative. If, for example, the market regime is bullish (high score) but the valuation component is very negative, this signals an overbought market not fundamentally supported—a classic "bubble warning."
This feature is disabled by default to keep the chart clean but can be activated for deeper analysis.
10.4 Confidence Bands
The model optionally displays uncertainty bands around the main allocation line. These are calculated as ±1 standard deviation of allocation over a rolling 20-period window. Wide bands indicate high volatility of model recommendations, suggesting uncertain market conditions. Narrow bands indicate stable recommendations.
This visualization implements a concept of epistemic uncertainty—uncertainty about the model estimate itself, not just market volatility. In phases where various indicators send conflicting signals, the allocation recommendation becomes more volatile, manifesting in wider bands. Users can understand this as a warning to act more cautiously or consult alternative information sources.
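A sketch of the band construction around a stand-in allocation series (the sine wave is purely illustrative):

//@version=5
indicator("Confidence band sketch")
alloc = 50 + 25 * math.sin(bar_index / 20.0)   // stand-in allocation series
dev   = ta.stdev(alloc, 20)                    // rolling 20-period standard deviation
plot(alloc, "Allocation %")
plot(alloc + dev, "Upper band")
plot(alloc - dev, "Lower band")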
11. Alert System
11.1 Allocation Alerts
DEAM implements an alert system that notifies users of significant events. Allocation alerts trigger when smoothed allocation crosses certain thresholds. An alert is generated when allocation reaches 80% (from below), signaling strong bullish conditions. Another alert triggers when allocation falls to 20%, indicating defensive positioning.
These thresholds are not arbitrary but correspond with boundaries between model regimes. An allocation of 80% roughly corresponds to a clear bull market regime, while 20% corresponds to a bear market regime. Alerts at these points are therefore informative about fundamental regime shifts.
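A sketch of the threshold alerts, again around a stand-in allocation series; titles and messages are illustrative:

//@version=5
indicator("Allocation alert sketch")
alloc = 50 + 40 * math.sin(bar_index / 30.0)   // stand-in for the smoothed allocation
plot(alloc, "Allocation %")
alertcondition(ta.crossover(alloc, 80), "Bullish allocation", "Smoothed allocation crossed above 80%")
alertcondition(ta.crossunder(alloc, 20), "Defensive allocation", "Smoothed allocation fell below 20%")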
11.2 Crisis Alerts
Separate alerts trigger upon detection of crisis and severe crisis. These alerts have highest priority as they signal large risks. A crisis alert should prompt investors to review their portfolio and potentially take defensive measures beyond the automatic model recommendation (e.g., hedging through put options, rebalancing to more defensive sectors).
11.3 Regime Change Alerts
An alert triggers upon change of market regime (e.g., from Neutral to Correction, or from Bull Market to Strong Bull). Regime changes are highly informative events that typically entail substantial allocation changes. These alerts enable investors to proactively respond to changes in market dynamics.
11.4 Risk Breach Alerts
A specialized alert triggers when actual portfolio risk utilization exceeds target parameters by 20%. This is a warning signal that the risk management system is reaching its limits, possibly because market volatility is rising faster than allocation can be reduced. In such situations, investors should consider manual interventions.
12. Practical Application and Limitations
12.1 Portfolio Implementation
DEAM generates a recommendation for allocation between equities (S&P 500) and cash. Implementation by an investor can take various forms. The most direct method is using an S&P 500 ETF (e.g., SPY, VOO) for equity allocation and a money market fund or savings account for cash allocation.
A rebalancing strategy is required to synchronize actual allocation with model recommendation. Two approaches are possible: (1) rule-based rebalancing at every 10% deviation between actual and target, or (2) time-based monthly rebalancing. Both have trade-offs between responsiveness and transaction costs. Empirical evidence (Jaconetti, Kinniry, and Zilbering, 2010) suggests rebalancing frequency has moderate impact on performance, and investors should optimize based on their transaction costs.
12.2 Adaptation to Individual Preferences
The model offers numerous adjustment parameters. Component weights can be modified if investors place more or less belief in certain factors. A fundamentally-oriented investor might increase valuation weight, while a technical trader might increase regime weight.
Risk target parameters (target volatility, max drawdown) should be adapted to individual risk tolerance. Younger investors with long investment horizons can choose higher target volatility (15-18%), while retirees may prefer lower volatility (8-10%). This adjustment systematically shifts average equity allocation.
Crisis thresholds can be adjusted based on preference for sensitivity versus specificity of crisis detection. Lower thresholds (e.g., VIX > 35 instead of 40) increase sensitivity (more crises are detected) but reduce specificity (more false alarms). Higher thresholds have the reverse effect.
12.3 Limitations and Disclaimers
DEAM is based on historical relationships between indicators and market performance. There is no guarantee these relationships will persist in the future. Structural changes in markets (e.g., through regulation, technology, or central bank policy) can break established patterns. This is the fundamental problem of induction in financial science (Taleb, 2007).
The model is optimized for US equities (S&P 500). Application to other markets (international stocks, bonds, commodities) would require recalibration. The indicators and thresholds are specific to the statistical properties of the US equity market.
The model cannot eliminate losses. Even with perfect crisis prediction, an investor following the model would lose money in bear markets—just less than a buy-and-hold investor. The goal is risk-adjusted performance improvement, not risk elimination.
Transaction costs are not modeled. In practice, spreads, commissions, and taxes reduce net returns. Frequent trading can cause substantial costs. Model smoothing helps minimize this, but users should consider their specific cost situation.
The model reacts to information; it does not anticipate it. During sudden shocks (e.g., 9/11, COVID-19 lockdowns), the model can only react after price movements, not before. This limitation is inherent to all reactive systems.
12.4 Relationship to Other Strategies
DEAM is a tactical asset allocation approach and should be viewed as a complement, not replacement, for strategic asset allocation. Brinson, Hood, and Beebower (1986) showed in their influential study "Determinants of Portfolio Performance" that strategic asset allocation (long-term policy allocation) explains the majority of portfolio performance, but this leaves room for tactical adjustments based on market timing.
The model can be combined with value and momentum strategies at the individual stock level. While DEAM controls overall market exposure, within-equity decisions can be optimized through stock-picking models. This separation between strategic (market exposure) and tactical (stock selection) levels follows classical portfolio theory.
The model does not replace diversification across asset classes. A complete portfolio should also include bonds, international stocks, real estate, and alternative investments. DEAM addresses only the US equity allocation decision within a broader portfolio.
13. Scientific Foundation and Evaluation
13.1 Theoretical Consistency
DEAM's components are based on established financial theory and empirical evidence. The market regime component follows from regime-switching models (Hamilton, 1989) and trend-following literature. The risk management component implements volatility targeting (Moreira and Muir, 2017) and modern portfolio theory (Markowitz, 1952). The valuation component is based on discounted cash flow theory and empirical value research (Campbell and Shiller, 1988; Fama and French, 1992). The sentiment component integrates behavioral finance (Baker and Wurgler, 2006). The macro component uses established business cycle indicators (Estrella and Mishkin, 1998).
This theoretical grounding distinguishes DEAM from purely data-mining-based approaches that identify patterns without causal theory. Theory-guided models have greater probability of functioning out-of-sample, as they are based on fundamental mechanisms, not random correlations (Lo and MacKinlay, 1990).
13.2 Empirical Validation
While this document does not present detailed backtest analysis, it should be noted that rigorous validation of a tactical asset allocation model should include several elements:
In-sample testing establishes whether the model functions at all in the data on which it was calibrated. Out-of-sample testing is crucial: the model should be tested in time periods not used for development. Walk-forward analysis, where the model is successively trained on rolling windows and tested in the next window, approximates real implementation.
Performance metrics should be risk-adjusted. Pure return consideration is misleading, as higher returns often only compensate for higher risk. Sharpe Ratio, Sortino Ratio, Calmar Ratio, and Maximum Drawdown are relevant metrics. Comparison with benchmarks (Buy-and-Hold S&P 500, 60/40 Stock/Bond portfolio) contextualizes performance.
Robustness checks test sensitivity to parameter variation. If the model only functions at specific parameter settings, this indicates overfitting. Robust models show consistent performance over a range of plausible parameters.
13.3 Comparison with Existing Literature
DEAM fits into the broader literature on tactical asset allocation. Faber (2007) presented a simple momentum-based timing system that goes long when the market is above its 10-month average, otherwise cash. This simple system avoided large drawdowns in bear markets. DEAM can be understood as a sophistication of this approach that integrates multiple information sources.
Ilmanen (2011) discusses various timing factors in "Expected Returns" and argues for multi-factor approaches. DEAM operationalizes this philosophy. Asness, Moskowitz, and Pedersen (2013) showed that value and momentum effects work across asset classes, justifying cross-asset application of regime and valuation signals.
Ang (2014) emphasizes in "Asset Management: A Systematic Approach to Factor Investing" the importance of systematic, rule-based approaches over discretionary decisions. DEAM is fully systematic and eliminates emotional biases that plague individual investors (overconfidence, hindsight bias, loss aversion).
References
Ang, A. (2014) *Asset Management: A Systematic Approach to Factor Investing*. Oxford: Oxford University Press.
Ang, A., Piazzesi, M. and Wei, M. (2006) 'What does the yield curve tell us about GDP growth?', *Journal of Econometrics*, 131(1-2), pp. 359-403.
Asness, C.S. (2003) 'Fight the Fed Model', *The Journal of Portfolio Management*, 30(1), pp. 11-24.
Asness, C.S., Moskowitz, T.J. and Pedersen, L.H. (2013) 'Value and Momentum Everywhere', *The Journal of Finance*, 68(3), pp. 929-985.
Baker, M. and Wurgler, J. (2006) 'Investor Sentiment and the Cross-Section of Stock Returns', *The Journal of Finance*, 61(4), pp. 1645-1680.
Baker, M. and Wurgler, J. (2007) 'Investor Sentiment in the Stock Market', *Journal of Economic Perspectives*, 21(2), pp. 129-152.
Baur, D.G. and Lucey, B.M. (2010) 'Is Gold a Hedge or a Safe Haven? An Analysis of Stocks, Bonds and Gold', *Financial Review*, 45(2), pp. 217-229.
Bollerslev, T. (1986) 'Generalized Autoregressive Conditional Heteroskedasticity', *Journal of Econometrics*, 31(3), pp. 307-327.
Boudoukh, J., Michaely, R., Richardson, M. and Roberts, M.R. (2007) 'On the Importance of Measuring Payout Yield: Implications for Empirical Asset Pricing', *The Journal of Finance*, 62(2), pp. 877-915.
Brinson, G.P., Hood, L.R. and Beebower, G.L. (1986) 'Determinants of Portfolio Performance', *Financial Analysts Journal*, 42(4), pp. 39-44.
Brock, W., Lakonishok, J. and LeBaron, B. (1992) 'Simple Technical Trading Rules and the Stochastic Properties of Stock Returns', *The Journal of Finance*, 47(5), pp. 1731-1764.
Campbell, J.Y. and Shiller, R.J. (1988) 'The Dividend-Price Ratio and Expectations of Future Dividends and Discount Factors', *Review of Financial Studies*, 1(3), pp. 195-228.
Cochrane, J.H. (2011) 'Presidential Address: Discount Rates', *The Journal of Finance*, 66(4), pp. 1047-1108.
Damodaran, A. (2012) *Equity Risk Premiums: Determinants, Estimation and Implications*. Working Paper, Stern School of Business.
Engle, R.F. (1982) 'Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation', *Econometrica*, 50(4), pp. 987-1007.
Estrella, A. and Hardouvelis, G.A. (1991) 'The Term Structure as a Predictor of Real Economic Activity', *The Journal of Finance*, 46(2), pp. 555-576.
Estrella, A. and Mishkin, F.S. (1998) 'Predicting U.S. Recessions: Financial Variables as Leading Indicators', *Review of Economics and Statistics*, 80(1), pp. 45-61.
Faber, M.T. (2007) 'A Quantitative Approach to Tactical Asset Allocation', *The Journal of Wealth Management*, 9(4), pp. 69-79.
Fama, E.F. and French, K.R. (1989) 'Business Conditions and Expected Returns on Stocks and Bonds', *Journal of Financial Economics*, 25(1), pp. 23-49.
Fama, E.F. and French, K.R. (1992) 'The Cross-Section of Expected Stock Returns', *The Journal of Finance*, 47(2), pp. 427-465.
Garman, M.B. and Klass, M.J. (1980) 'On the Estimation of Security Price Volatilities from Historical Data', *Journal of Business*, 53(1), pp. 67-78.
Gilchrist, S. and Zakrajšek, E. (2012) 'Credit Spreads and Business Cycle Fluctuations', *American Economic Review*, 102(4), pp. 1692-1720.
Gordon, M.J. (1962) *The Investment, Financing, and Valuation of the Corporation*. Homewood: Irwin.
Graham, B. and Dodd, D.L. (1934) *Security Analysis*. New York: McGraw-Hill.
Hamilton, J.D. (1989) 'A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle', *Econometrica*, 57(2), pp. 357-384.
Ilmanen, A. (2011) *Expected Returns: An Investor's Guide to Harvesting Market Rewards*. Chichester: Wiley.
Jaconetti, C.M., Kinniry, F.M. and Zilbering, Y. (2010) 'Best Practices for Portfolio Rebalancing', *Vanguard Research Paper*.
Jegadeesh, N. and Titman, S. (1993) 'Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency', *The Journal of Finance*, 48(1), pp. 65-91.
Kahneman, D. and Tversky, A. (1979) 'Prospect Theory: An Analysis of Decision under Risk', *Econometrica*, 47(2), pp. 263-292.
Korteweg, A. (2010) 'The Net Benefits to Leverage', *The Journal of Finance*, 65(6), pp. 2137-2170.
Lo, A.W. and MacKinlay, A.C. (1990) 'Data-Snooping Biases in Tests of Financial Asset Pricing Models', *Review of Financial Studies*, 3(3), pp. 431-467.
Longin, F. and Solnik, B. (2001) 'Extreme Correlation of International Equity Markets', *The Journal of Finance*, 56(2), pp. 649-676.
Mandelbrot, B. (1963) 'The Variation of Certain Speculative Prices', *The Journal of Business*, 36(4), pp. 394-419.
Markowitz, H. (1952) 'Portfolio Selection', *The Journal of Finance*, 7(1), pp. 77-91.
Modigliani, F. and Miller, M.H. (1961) 'Dividend Policy, Growth, and the Valuation of Shares', *The Journal of Business*, 34(4), pp. 411-433.
Moreira, A. and Muir, T. (2017) 'Volatility-Managed Portfolios', *The Journal of Finance*, 72(4), pp. 1611-1644.
Moskowitz, T.J., Ooi, Y.H. and Pedersen, L.H. (2012) 'Time Series Momentum', *Journal of Financial Economics*, 104(2), pp. 228-250.
Parkinson, M. (1980) 'The Extreme Value Method for Estimating the Variance of the Rate of Return', *Journal of Business*, 53(1), pp. 61-65.
Piotroski, J.D. (2000) 'Value Investing: The Use of Historical Financial Statement Information to Separate Winners from Losers', *Journal of Accounting Research*, 38, pp. 1-41.
Reinhart, C.M. and Rogoff, K.S. (2009) *This Time Is Different: Eight Centuries of Financial Folly*. Princeton: Princeton University Press.
Ross, S.A. (1976) 'The Arbitrage Theory of Capital Asset Pricing', *Journal of Economic Theory*, 13(3), pp. 341-360.
Roy, A.D. (1952) 'Safety First and the Holding of Assets', *Econometrica*, 20(3), pp. 431-449.
Schwert, G.W. (1989) 'Why Does Stock Market Volatility Change Over Time?', *The Journal of Finance*, 44(5), pp. 1115-1153.
Sharpe, W.F. (1966) 'Mutual Fund Performance', *The Journal of Business*, 39(1), pp. 119-138.
Sharpe, W.F. (1994) 'The Sharpe Ratio', *The Journal of Portfolio Management*, 21(1), pp. 49-58.
Simon, D.P. and Wiggins, R.A. (2001) 'S&P Futures Returns and Contrary Sentiment Indicators', *Journal of Futures Markets*, 21(5), pp. 447-462.
Taleb, N.N. (2007) *The Black Swan: The Impact of the Highly Improbable*. New York: Random House.
Whaley, R.E. (2000) 'The Investor Fear Gauge', *The Journal of Portfolio Management*, 26(3), pp. 12-17.
Whaley, R.E. (2009) 'Understanding the VIX', *The Journal of Portfolio Management*, 35(3), pp. 98-105.
Yardeni, E. (2003) 'Stock Valuation Models', *Topical Study*, 51, Yardeni Research.
Young, T.W. (1991) 'Calmar Ratio: A Smoother Tool', *Futures*, October issue.
Zweig, M.E. (1973) 'An Investor Expectations Stock Price Predictive Model Using Closed-End Fund Premiums', *The Journal of Finance*, 28(1), pp. 67-78.
LogNormal
Library "LogNormal"
A collection of functions used to model skewed distributions as log-normal.
Prices are commonly modeled using log-normal distributions (e.g., Black-Scholes) because they exhibit multiplicative changes with long tails: skewed exponential growth and high variance. This approach is particularly useful for understanding price behavior and estimating risk, under the assumption that continuously compounded returns are normally distributed.
Because log space analysis is not as direct as using math.log(price), this library extends the Error Functions library to make working with log-normally distributed data as simple as possible.
- - -
QUICK START
Import library into your project
Initialize model with a mean and standard deviation
Pass model params between methods to compute various properties
var LogNorm model = LN.init(arr.avg(), arr.stdev()) // Assumes the library is imported as LN
var mode = model.mode()
Outputs from the model can be adjusted to better fit the data.
var Quantile data = arr.quantiles()
var more_accurate_mode = mode.fit(model, data) // Fits value from model to data
Inputs to the model can also be adjusted to better fit the data.
datum = 123.45
model_equivalent_datum = datum.fit(data, model) // Fits value from data to the model
area_from_zero_to_datum = model.cdf(model_equivalent_datum)
- - -
TYPES
There are two requisite UDTs: LogNorm and Quantile. They are used to pass parameters between functions and are set automatically (see Type Management).
LogNorm
Object for log space parameters and linear space quantiles.
Fields:
mu (float) : Log space mu (µ).
sigma (float) : Log space sigma (σ).
variance (float) : Log space variance (σ²).
quantiles (Quantile) : Linear space quantiles.
Quantile
Object for linear quantiles, most similar to a seven-number summary.
Fields:
Q0 (float) : Smallest Value
LW (float) : Lower Whisker Endpoint
LC (float) : Lower Whisker Crosshatch
Q1 (float) : First Quartile
Q2 (float) : Second Quartile
Q3 (float) : Third Quartile
UC (float) : Upper Whisker Crosshatch
UW (float) : Upper Whisker Endpoint
Q4 (float) : Largest Value
IQR (float) : Interquartile Range
MH (float) : Midhinge
TM (float) : Trimean
MR (float) : Mid-Range
- - -
TYPE MANAGEMENT
These functions reliably initialize and update the UDTs. Because parameterization is interdependent, avoid setting the LogNorm and Quantile fields directly.
init(mean, stdev, variance)
Initializes a LogNorm object.
Parameters:
mean (float) : Linearly measured mean.
stdev (float) : Linearly measured standard deviation.
variance (float) : Linearly measured variance.
Returns: LogNorm Object
set(ln, mean, stdev, variance)
Transforms linear measurements into log space parameters for a LogNorm object.
Parameters:
ln (LogNorm) : Object containing log space parameters.
mean (float) : Linearly measured mean.
stdev (float) : Linearly measured standard deviation.
variance (float) : Linearly measured variance.
Returns: LogNorm Object
quantiles(arr)
Gets empirical quantiles from an array of floats.
Parameters:
arr (array) : Float array object.
Returns: Quantile Object
- - -
DESCRIPTIVE STATISTICS
Using only the initialized LogNorm parameters, these functions compute a model's central tendency and standardized moments.
mean(ln)
Computes the linear mean from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
median(ln)
Computes the linear median from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
mode(ln)
Computes the linear mode from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
variance(ln)
Computes the linear variance from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
skewness(ln)
Computes the linear skewness from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
kurtosis(ln, excess)
Computes the linear kurtosis from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
excess (bool) : Excess Kurtosis (true) or regular Kurtosis (false).
Returns: Between 0 and ∞
hyper_skewness(ln)
Computes the linear hyper skewness from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
hyper_kurtosis(ln, excess)
Computes the linear hyper kurtosis from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
excess (bool) : Excess Hyper Kurtosis (true) or regular Hyper Kurtosis (false).
Returns: Between 0 and ∞
- - -
DISTRIBUTION FUNCTIONS
These wrap Gaussian functions to make working with model space more direct. Because they are contained within a log-normal library, they describe estimations relative to a log-normal curve, even though they fundamentally measure a Gaussian curve.
pdf(ln, x, empirical_quantiles)
A Probability Density Function estimates the probability density. For clarity, density is not a probability.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate for which a density will be estimated.
empirical_quantiles (Quantile) : Quantiles as observed in the data (optional).
Returns: Between 0 and ∞
cdf(ln, x, precise)
A Cumulative Distribution Function estimates the area under a Log-Normal curve between Zero and a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ccdf(ln, x, precise)
A Complementary Cumulative Distribution Function estimates the area under a Log-Normal curve between a linear X coordinate and Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
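For reference, the standard log-normal density and distribution functions (Φ is the standard normal CDF), consistent with the "wraps Gaussian functions" description above:

f(x) = \frac{1}{x \sigma \sqrt{2\pi}} \exp\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right), \quad F(x) = \Phi\left(\frac{\ln x - \mu}{\sigma}\right), \quad \bar{F}(x) = 1 - F(x)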
cdfinv(ln, a, precise)
An Inverse Cumulative Distribution Function reverses the Log-Normal cdf() by estimating the linear X coordinate from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
ccdfinv(ln, a, precise)
An Inverse Complementary Cumulative Distribution Function reverses the Log-Normal ccdf() by estimating the linear X coordinate from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
cdfab(ln, x1, x2, precise)
A Cumulative Distribution Function from A to B estimates the area under a Log-Normal curve between two linear X coordinates (A and B).
Parameters:
ln (LogNorm) : Object of log space parameters.
x1 (float) : First linear X coordinate.
x2 (float) : Second linear X coordinate.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ott(ln, x, precise)
A One-Tailed Test transforms a linear X coordinate into an absolute Z Score before estimating the area under a Log-Normal curve between Z and Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 0.5
ttt(ln, x, precise)
A Two-Tailed Test transforms a linear X coordinate into symmetrical ± Z Scores before estimating the area under a Log-Normal curve from Zero to -Z, and +Z to Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ottinv(ln, a, precise)
An Inverse One-Tailed Test reverses the Log-Normal ott() by estimating a linear X coordinate for the right tail from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Half a normalized area.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
tttinv(ln, a, precise)
An Inverse Two-Tailed Test reverses the Log-Normal ttt() by estimating two linear X coordinates from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area.
precise (bool) : Double precision (true) or single precision (false).
Returns: Linear space tuple of the two X coordinates (lower and upper)
- - -
UNCERTAINTY
Model-based measures of uncertainty, information, and risk.
sterr(sample_size, fisher_info)
The standard error of a sample statistic.
Parameters:
sample_size (float) : Number of observations.
fisher_info (float) : Fisher information.
Returns: Between 0 and ∞
surprisal(p, base)
Quantifies the information content of a single event.
Parameters:
p (float) : Probability of the event.
base (float) : Logarithmic base (optional).
Returns: Between 0 and ∞
entropy(ln, base)
Computes the differential entropy (average surprisal).
Parameters:
ln (LogNorm) : Object of log space parameters.
base (float) : Logarithmic base (optional).
Returns: Between 0 and ∞
perplexity(ln, base)
Computes the average number of distinguishable outcomes from the entropy.
Parameters:
ln (LogNorm) : Object of log space parameters.
base (float) : Logarithmic base used for Entropy (optional).
Returns: Between 0 and ∞
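The formulas are not listed in the documentation; the standard definitions these names point to are (with sample size n, Fisher information I, and logarithmic base b, using the log-normal's differential entropy):

\mathrm{SE} = \frac{1}{\sqrt{n I}}, \quad I(p) = -\log_b p, \quad h = \frac{\mu + \frac{1}{2}\ln(2\pi e \sigma^2)}{\ln b}, \quad \mathrm{PP} = b^{h}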
value_at_risk(ln, p, precise)
Estimates a risk threshold under normal market conditions for a given confidence level.
Parameters:
ln (LogNorm) : Object of log space parameters.
p (float) : Probability threshold, a.k.a. the confidence level.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
value_at_risk_inv(ln, value_at_risk, precise)
Reverses the value_at_risk() by estimating the confidence level from the risk threshold.
Parameters:
ln (LogNorm) : Object of log space parameters.
value_at_risk (float) : Value at Risk.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
conditional_value_at_risk(ln, p, precise)
Estimates the average loss beyond a confidence level, a.k.a. expected shortfall.
Parameters:
ln (LogNorm) : Object of log space parameters.
p (float) : Probability threshold, a.k.a. the confidence level.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
conditional_value_at_risk_inv(ln, conditional_value_at_risk, precise)
Reverses the conditional_value_at_risk() by estimating the confidence level of an average loss.
Parameters:
ln (LogNorm) : Object of log space parameters.
conditional_value_at_risk (float) : Conditional Value at Risk.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
partial_expectation(ln, x, precise)
Estimates the partial expectation of a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and µ
partial_expectation_inv(ln, partial_expectation, precise)
Reverses the partial_expectation() by estimating a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
partial_expectation (float) : Partial Expectation.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
conditional_expectation(ln, x, precise)
Estimates the conditional expectation of a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between X and ∞
conditional_expectation_inv(ln, conditional_expectation, precise)
Reverses the conditional_expectation by estimating a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
conditional_expectation (float) : Conditional Expectation.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
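Both quantities have closed forms for a log-normal, which these functions presumably evaluate: the partial expectation g(k) truncates the mean from below at k, and the conditional expectation renormalizes it by the tail probability:

g(k) = e^{\mu + \sigma^2/2}\, \Phi\left(\frac{\mu + \sigma^2 - \ln k}{\sigma}\right), \quad E[X \mid X > k] = \frac{g(k)}{1 - F(k)}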
fisher(ln, log)
Computes the Fisher Information Matrix for the distribution, not a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
log (bool) : Sets if the matrix should be in log (true) or linear (false) space.
Returns: FIM for the distribution
fisher(ln, x, log)
Computes the Fisher Information Matrix for a linear X coordinate, not the distribution itself.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
log (bool) : Sets if the matrix should be in log (true) or linear (false) space.
Returns: FIM for the linear X coordinate
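For a log-normal parameterized by (µ, σ), the Fisher Information Matrix in log space is the familiar Gaussian one, which is presumably what the distribution overload returns when log is true:

\mathcal{I}(\mu, \sigma) = \begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 2/\sigma^2 \end{pmatrix}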
confidence_interval(ln, x, sample_size, confidence, precise)
Estimates a confidence interval for a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
sample_size (float) : Number of observations.
confidence (float) : Confidence level.
precise (bool) : Double precision (true) or single precision (false).
Returns: CI for the linear X coordinate
- - -
CURVE FITTING
An overloaded function that helps transform values between spaces. The primary function uses quantiles, and the overloads wrap the primary function to make working with LogNorm more direct.
fit(x, a, b)
Transforms an X coordinate between spaces A and B.
Parameters:
x (float) : Linear X coordinate from space A.
a (LogNorm | Quantile | array) : LogNorm, Quantile, or float array.
b (LogNorm | Quantile | array) : LogNorm, Quantile, or float array.
Returns: Adjusted X coordinate
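For the LogNorm overloads, matching the z-score between the two spaces gives the transform below. This is an assumption about the implementation, but it composes naturally from the exported helpers z_score() and x_coord() described in the next section:

x_B = \exp\left(\mu_B + \sigma_B \cdot \frac{\ln x_A - \mu_A}{\sigma_A}\right)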
- - -
EXPORTED HELPERS
Small utilities to simplify extensibility.
z_score(ln, x)
Converts a linear X coordinate into a Z Score.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
Returns: Between -∞ and +∞
x_coord(ln, z)
Converts a Z Score into a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
z (float) : Standard normal Z Score.
Returns: Between 0 and ∞
iget(arr, index)
Gets an interpolated value of a pseudo-element (a fictional element between real array elements). Useful for quantile mapping.
Parameters:
arr (array) : Float array object.
index (float) : Index of the pseudo-element.
Returns: Interpolated value of the array's pseudo-element.
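A minimal sketch of the interpolation iget() describes, assuming simple linear blending between the two neighboring real elements (edge handling is an assumption):

//@version=5
indicator("iget Sketch")
// Interpolated access at a fractional index, e.g. index 1.5 blends elements 1 and 2.
igetSketch(arr, index) =>
    lo   = int(math.floor(index))
    hi   = int(math.ceil(index))
    frac = index - lo
    array.get(arr, lo) * (1 - frac) + array.get(arr, hi) * frac
a = array.from(10.0, 20.0, 30.0)
plot(igetSketch(a, 1.5), "Expected 25.0")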
Polynomial Regression Heatmap
Polynomial Regression Heatmap – Advanced Trend & Volatility Visualizer
Overview
The Polynomial Regression Heatmap is a sophisticated trading tool designed for traders who require a clear and precise understanding of market trends and volatility. By applying a second-degree polynomial regression to price data, the indicator generates a smooth trend curve, augmented with adaptive volatility bands and a dynamic heatmap. This framework allows users to instantly recognize trend direction, potential reversals, and areas of market strength or weakness, translating complex price action into a visually intuitive map.
Unlike static trend indicators, the Polynomial Regression Heatmap adapts to changing market conditions. Its visual design—including color-coded candles, regression bands, optional polynomial channels, and breakout markers—ensures that price behavior is easy to interpret. This makes it suitable for scalping, swing trading, and longer-term strategies across multiple asset classes.
How It Works
The core of the indicator relies on fitting a second-degree polynomial to a defined lookback period of price data. This regression curve captures the non-linear nature of market movements, revealing the true trajectory of price beyond the distortions of noise or short-term volatility.
Adaptive upper and lower bands are constructed using ATR-based scaling, surrounding the regression line to reflect periods of high and low volatility. When price moves toward or beyond these bands, it signals areas of potential overextension or support/resistance.
The heatmap colors each candle based on its relative position within the bands. Green shades indicate proximity to the upper band, red shades indicate proximity to the lower band, and neutral tones represent mid-range positioning. This continuous gradient visualization provides immediate feedback on trend strength, market balance, and potential turning points.
Optional polynomial channels can be overlaid around the regression curve. These three-line channels are based on regression residuals and a fixed width multiplier, offering additional reference points for analyzing price deviations, trend continuation, and reversion zones.
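The publication does not include the fitting code. A minimal sketch of a second-degree least-squares fit with ATR bands, solving the normal equations with Pine's matrix type, is shown below; the input names and the ATR length are assumptions:

//@version=5
indicator("Quadratic Regression Sketch", overlay=true)
len  = input.int(50, "Regression Length")
mult = input.float(2.0, "ATR Multiplier")
// Fit y = b0 + b1*t + b2*t^2 over the last n bars by solving (X'X)b = X'y.
fitQuad(src, n) =>
    xtx = matrix.new<float>(3, 3, 0.0)
    xty = matrix.new<float>(3, 1, 0.0)
    for i = 0 to n - 1
        t = float(i)
        y = src[n - 1 - i]                 // oldest bar first
        row = array.from(1.0, t, t * t)
        for r = 0 to 2
            matrix.set(xty, r, 0, matrix.get(xty, r, 0) + array.get(row, r) * y)
            for c = 0 to 2
                matrix.set(xtx, r, c, matrix.get(xtx, r, c) + array.get(row, r) * array.get(row, c))
    beta = matrix.mult(matrix.inv(xtx), xty)
    tNow = float(n - 1)                    // evaluate the curve at the current bar
    matrix.get(beta, 0, 0) + matrix.get(beta, 1, 0) * tNow + matrix.get(beta, 2, 0) * tNow * tNow
curve = bar_index >= len ? fitQuad(close, len) : na
band  = ta.atr(14) * mult
plot(curve, "Regression", color.yellow)
plot(curve + band, "Upper band", color.green)
plot(curve - band, "Lower band", color.red)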
Signals and Breakouts
The Polynomial Regression Heatmap includes statistical pivot-based signals to highlight actionable price movements:
Buy Signals – A triangular marker appears below the candle when a pivot low occurs below the lower regression band.
Sell Signals – A triangular marker appears above the candle when a pivot high occurs above the upper regression band.
These markers identify significant deviations from the regression curve while accounting for volatility, providing high-quality visual cues for potential entry points.
The indicator ensures clarity by spacing markers vertically using ATR-based calculations, preventing overlap during periods of high volatility. Users can rely on these signals in combination with heatmap intensity and regression slope for contextual confirmation.
Interpretation
Trend Analysis :
The slope of the polynomial regression line represents trend direction. A rising curve indicates bullish bias, a falling curve indicates bearish bias, and a flat curve indicates consolidation.
Steeper slopes suggest stronger momentum, while gradual slopes indicate more moderate trend conditions.
Volatility Assessment :
Band width provides an instant visual measure of market volatility. Narrow bands correspond to low volatility and potential consolidation, whereas wide bands indicate higher volatility and significant price swings.
Heatmap Coloring :
Candle colors visually represent price position within the bands. This allows traders to quickly identify zones of bullish or bearish pressure without performing complex calculations.
Channel Analysis (Optional) :
The polynomial channel defines zones for evaluating potential overextensions or retracements. Price interacting with these lines may suggest areas where mean-reversion or trend continuation is likely.
Breakout Signals :
Buy and Sell markers highlight pivot points relative to the regression and volatility bands. These are statistical signals, not arbitrary triggers, and should be interpreted in context with trend slope, band width, and heatmap intensity.
Strategy Integration
The Polynomial Regression Heatmap supports multiple trading approaches:
Trend Following – Enter trades in the direction of the regression slope while using the heatmap for momentum confirmation.
Pullback Entries – Use breakouts or deviations from the regression bands as low-risk entry points during trend continuation.
Mean Reversion – Price reaching outer channel boundaries can indicate potential reversal or retracement opportunities.
Multi-Timeframe Alignment – Overlay on higher and lower timeframes to filter noise and improve entry timing.
Stop-loss levels can be set just beyond the opposing regression band, while take-profit targets can be informed by the distance between the bands or the curvature of the polynomial line.
Advanced Techniques
For traders seeking greater precision:
Combine the Polynomial Regression Heatmap with volume, momentum, or volatility indicators to validate signals.
Observe the width and slope of the regression bands over time to anticipate expanding or contracting volatility.
Track sequences of breakout signals in conjunction with heatmap intensity for systematic trade management.
Adjusting regression length allows customization for different assets or timeframes, balancing responsiveness and smoothing. The combination of polynomial curve, adaptive bands, heatmap, and optional channels provides a comprehensive statistical framework for informed decision-making.
Inputs and Customization
Regression Length – Determines the number of bars used for polynomial fitting. Shorter lengths increase responsiveness; longer lengths improve smoothing.
Show Bands – Toggle visibility of the ATR-based regression bands.
Show Channel – Enable or disable the polynomial channel overlay.
Color Settings – Customize bullish, bearish, neutral, and accent colors for clarity and visual preference.
All other internal parameters are fixed to ensure consistent statistical behavior and minimize potential misconfiguration.
Why Use Polynomial Regression Heatmap
The Polynomial Regression Heatmap transforms complex price action into a clear, actionable visual framework. By combining non-linear trend mapping, adaptive volatility bands, heatmap visualization, and breakout signals, it provides a multi-dimensional perspective that is both quantitative and intuitive.
This indicator allows traders to focus on execution, interpret market structure at a glance, and evaluate trend strength, overextensions, and potential reversals in real time. Its design is compatible with scalping, swing trading, and long-term strategies, providing a robust tool for disciplined, data-driven trading.
Neural Pulse System [Alpha Extract]
Neural Pulse System (NPS)
The Neural Pulse System (NPS) is a custom technical indicator that analyzes price action through a probabilistic lens, offering a dynamic view of bullish and bearish tendencies.
Unlike traditional binary classification models, NPS employs Ordinary Least Squares (OLS) regression with dynamically computed coefficients to produce a smooth probability output ranging from -1 to 1.
Paired with ATR-based bands, this indicator provides an intuitive and volatility-aware approach to trend analysis.
🔶 CALCULATION
The Neural Pulse System utilizes OLS regression to compute probabilities of bullish or bearish price action while incorporating ATR-based bands for volatility context:
Dynamic Coefficients: Coefficients are recalculated in real-time and scaled up to ensure the regression adapts to evolving market conditions.
Ordinary Least Squares (OLS): Uses OLS regression instead of gradient descent for more precise and efficient coefficient estimation.
ATR Bands: Smoothed Average True Range (ATR) bands serve as dynamic boundaries, framing the regression within market volatility.
Probability Output: Instead of a binary result, the output is a continuous probability curve (-1 to 1), helping traders gauge the strength of bullish or bearish momentum.
Formula:
OLS Regression = Line of best fit minimizing squared errors
Probability Signal = Transformed regression output scaled to -1 (bearish) to 1 (bullish)
ATR Bands = Smoothed Average True Range (ATR) to frame price movements within market volatility
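The coefficient computation itself is not published. One way to realize the described behavior, an OLS fit whose deviation is scaled by ATR and squashed into -1..1, is sketched below; the lengths and the tanh squash are assumptions, not the script's actual method:

//@version=5
indicator("Neural Pulse Sketch")
regLen = input.int(50, "Regression Length")
atrLen = input.int(14, "ATR Period")
// Pine has no built-in tanh, so define one for squashing into -1..1.
tanh(x) =>
    e = math.exp(2 * x)
    (e - 1) / (e + 1)
fitted = ta.linreg(close, regLen, 0)        // OLS line of best fit, evaluated at the current bar
raw    = (close - fitted) / ta.atr(atrLen)  // deviation from the fit, in ATR units
prob   = tanh(raw)                          // smooth probability-style output in -1..1
plot(prob, "Probability", prob > 0 ? color.green : color.red)
hline(0)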
🔶 DETAILS
📊 Visual Features:
Probability Curve: Smooth probability signal ranging from -1 (bearish) to 1 (bullish)
ATR Bands: Price action is constrained within volatility bands, preventing extreme deviations
Color-Coded Signals:
Blue to Green: Increasing probability of bullish momentum
Orange to Red: Increasing probability of bearish momentum
Interpretation:
Bullish Bias: Probability output consistently above 0 suggests a bullish trend.
Bearish Bias: Probability output consistently below 0 indicates bearish pressure.
Reversals: Extreme values near -1 or 1, followed by a move toward 0, may signal potential trend reversals.
🔶 EXAMPLES
📌 Trend Identification: Use the probability output to gauge trend direction.
Example: On a 1-hour chart, NPS moves from -0.5 to 0.8 as price breaks resistance, signaling a bullish trend.
📌 Reversal Signals: Watch for probability extremes near -1 or 1 followed by a reversal toward 0.
Example: NPS hits 0.9, price touches the upper ATR band, then both retreat—indicating a potential pullback.
📌 Volatility Context: ATR bands help assess whether price action aligns with typical market conditions.
Example: During low volatility, the probability signal hovers near 0, and ATR bands tighten, suggesting a potential breakout.
🔶 SETTINGS
Customization Options:
ATR Period – Defines lookback length for ATR calculation (shorter = more responsive, longer = smoother).
ATR Multiplier – Adjusts band width for better volatility capture.
Regression Length – Controls how many bars feed into the coefficient calculation (longer = smoother, shorter = more reactive).
Scaling Factor – Adjusts the strength of regression coefficients.
Output Smoothing – Option to apply a moving average for a cleaner probability curve
Dynamic ALMA with signals
Enhanced ALMA with Signals
This TradingView indicator is designed to enhance your trading strategy by utilizing the Arnaud Legoux Moving Average (ALMA), a unique moving average that provides smoother price action while minimizing lag. The script not only plots the ALMA line but also dynamically adjusts its parameters based on market volatility to adapt to different trading conditions. Additionally, it highlights potential bounce points off the line, as well as breakout points, giving traders clear signals for potential support, resistance levels, and breakouts.
Key Features:
Dynamic ALMA Line with Glow Effect:
The core of this indicator is the ALMA line, which is dynamically adjusted to market volatility, providing more accurate signals in varying conditions. The line adapts to both trending and consolidating markets by adjusting its sensitivity in real time. A glow effect is created by plotting the ALMA line multiple times with increasing transparency, making it visually distinct.
Bounce Detection Signals with Volatility Filter:
The script detects and labels potential support and resistance bounces based on the crossover and crossunder of the price with the ALMA line, further filtered by a volatility condition. This helps in filtering out false signals during low-volatility conditions, making the signals more reliable.
Visual Enhancements:
Custom glow effects and labels for bounce detection enhance chart readability and help traders quickly identify key levels.
Inputs:
Base Window Size: Sets the number of bars used in calculating the ALMA, allowing traders to adjust the sensitivity of the moving average. This parameter is dynamically adjusted based on current market volatility.
Offset: Determines the position of the ALMA curve. Higher values move the curve further away from the price. This value remains constant for stability.
Sigma: Controls the smoothness of the ALMA curve; a higher sigma results in a smoother curve. This value also remains constant.
ATR Period and Threshold Multiplier: Used to calculate the Average True Range (ATR) for the volatility filter, which determines whether the market conditions are sufficiently volatile to consider bounce signals.
How It Works:
Dynamic ALMA Calculation:
The script calculates the ALMA (Arnaud Legoux Moving Average) using the ta.alma function, dynamically adjusting the window size based on market volatility measured by the ATR (Average True Range). This ensures that the ALMA line remains responsive in high-volatility environments and smooth in low-volatility conditions.
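The exact adjustment rule is not published, so the volatility scaling below is an assumption; the sketch computes ALMA manually (its weights are a shifted Gaussian) so the window length can vary bar to bar:

//@version=5
indicator("Dynamic ALMA Sketch", overlay=true)
baseLen = input.int(50, "Base Window Size")
atrLen  = input.int(14, "ATR Period")
// Shrink the window when volatility runs above its own average, so the average reacts faster.
volR   = nz(ta.atr(atrLen) / ta.sma(ta.atr(atrLen), atrLen * 2), 1.0)
dynLen = math.max(10, math.round(baseLen / math.max(volR, 0.5)))
almaDyn(src, n, offset, sigma) =>
    m   = offset * (n - 1)
    s   = n / sigma
    num = 0.0
    den = 0.0
    for i = 0 to n - 1
        w = math.exp(-math.pow(i - m, 2) / (2 * s * s))  // Gaussian weight, peaked near recent bars
        num += w * src[n - 1 - i]
        den += w
    num / den
plot(almaDyn(close, dynLen, 0.85, 6.0), "Dynamic ALMA", color.aqua, 2)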
Glow Effect:
To create a glow effect around the ALMA line, the script plots the ALMA multiple times with varying degrees of transparency. This visual enhancement helps the ALMA line stand out on the chart.
Bounce Detection with Volatility Filter:
The script uses two conditions to detect potential bounces:
Support Bounce: Detected when the low of the bar crosses above the ALMA line (ta.crossover(low, alma)) and the close is above the ALMA, while the volatility filter confirms sufficient market activity. This suggests potential support at the ALMA line.
Resistance Bounce: Detected when the high of the bar crosses below the ALMA line (ta.crossunder(high, alma)) and the close is below the ALMA, while the volatility filter confirms sufficient market activity. This indicates potential resistance at the ALMA line.
Labeling Bounce Points:
When a bounce is detected, the script labels it on the chart:
Support Bounces (S): Labeled with a blue "S" below the bar where a support bounce is detected.
Resistance Bounces (R): Labeled with a white "R" above the bar where a resistance bounce is detected.
Usage:
This enhanced indicator helps traders visualize key support and resistance levels more effectively by dynamically adjusting the ALMA moving average to market conditions. By detecting and labeling potential bounce points and filtering these signals based on volatility, traders can better identify entry and exit points in their trading strategy. The dynamic adjustments and visual enhancements make it easier to spot critical levels quickly and adapt to changing market conditions.
Customize the inputs to fit your trading style, and use this enhanced ALMA indicator to gain a more refined understanding of market trends, potential reversals, and breakouts.
SMIIO + Volume
This indicator generates long and short signals.
The operation of the indicator is as follows:
First, the true strength index is calculated from closing prices. We call this the "ergodic" curve.
Then the average of the ergodic (EMA) is calculated to obtain the "signal" curve.
To calculate the "oscillator", the signal is subtracted from the ergodic (oscillator = ergodic - signal).
The last variable used in the calculation is the average volume, calculated with an SMA.
Calculation for long signal;
- If the ergodic curve crosses up through the zero line (ergodic > 0 AND ergodic[1] < 0) and,
- If the current oscillator is greater than the previous oscillator (oscillator > oscillator[1]) and,
- If the current ergodic is greater than the signal (ergodic > signal) and,
- If the current volume is greater than the average volume (volume > averageVolume) and,
- If the current candle closing price is greater than the opening price (close > open)
If all the above conditions are fulfilled, the long entry signal is issued with a "Buy" label.
Calculation for short signal;
- If the ergodic curve crosses down through the zero line (ergodic < 0 AND ergodic[1] > 0) and,
- If the current oscillator is smaller than the previous oscillator (oscillator < oscillator[1]) and,
- If the current ergodic is smaller than the signal (ergodic < signal) and,
- If the current volume is greater than the average volume (volume > averageVolume) and,
- If the current candle closing price is smaller than the opening price (close < open)
If all the above conditions are fulfilled, the short entry signal is issued with a "Sell" label.
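A compact sketch of these rules, using Pine's built-in ta.tsi for the ergodic curve; the original's exact TSI and averaging lengths are not stated, so the lengths below are assumptions:

//@version=5
indicator("SMIIO + Volume Sketch", overlay=true)
ergodic    = ta.tsi(close, 5, 20)   // true strength index on closes
signal     = ta.ema(ergodic, 5)     // EMA of the ergodic curve
oscillator = ergodic - signal
avgVolume  = ta.sma(volume, 20)
longSignal  = ta.crossover(ergodic, 0) and oscillator > oscillator[1] and ergodic > signal and volume > avgVolume and close > open
shortSignal = ta.crossunder(ergodic, 0) and oscillator < oscillator[1] and ergodic < signal and volume > avgVolume and close < open
plotshape(longSignal, title="Buy", style=shape.labelup, location=location.belowbar, color=color.green, text="Buy")
plotshape(shortSignal, title="Sell", style=shape.labeldown, location=location.abovebar, color=color.red, text="Sell")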
Treasury Yields Heatmap [By MUQWISHI]
▋ INTRODUCTION:
The “Treasury Yields Heatmap” generates a dynamic heat map table, showing treasury yield bond values corresponding with dates. In the last column, it presents the status of the yield curve, discerning whether it is in a normal, flat, or inverted configuration, as determined by Pearson's correlation coefficient. This tool is built to offer traders essential insights for effectively tracking bond values and monitoring yield curve status, featuring the flexibility to input a starting period and timeframe and to select from a range of major countries' bond data.
_______________________
▋ OVERVIEW:
______________________
▋ YIELD CURVE:
It is determined through Pearson's correlation coefficient (R) and classified as follows (see the sketch after this list)…
R ≥ 0.7 → Normal
0.7 > R ≥ 0.35 → Slight Normal
0.35 > R > -0.35 → Flat
-0.35 ≥ R > -0.7 → Slight Inverted
-0.7 ≥ R → Inverted
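A minimal sketch of this classification, computing Pearson's R between maturity rank and yield; the sample values are hypothetical and the script's actual data retrieval is not reproduced here:

//@version=5
indicator("Yield Curve Status Sketch")
// Pearson's R between maturity rank (x) and yield (y): upward-sloping curves give R near +1.
pearson(xs, ys) =>
    mx  = array.avg(xs)
    my  = array.avg(ys)
    cov = 0.0
    vx  = 0.0
    vy  = 0.0
    for i = 0 to array.size(xs) - 1
        dx = array.get(xs, i) - mx
        dy = array.get(ys, i) - my
        cov += dx * dy
        vx  += dx * dx
        vy  += dy * dy
    cov / math.sqrt(vx * vy)
// Hypothetical sample: 2Y, 5Y, 10Y, 30Y yields from an inverted curve.
x = array.from(1.0, 2.0, 3.0, 4.0)
y = array.from(4.9, 4.5, 4.4, 4.3)
r = pearson(x, y)
status = r >= 0.7 ? "Normal" : r >= 0.35 ? "Slight Normal" : r > -0.35 ? "Flat" : r > -0.7 ? "Slight Inverted" : "Inverted"
var label tag = label.new(0, 0.0, "")
if barstate.islast
    label.set_xy(tag, bar_index, r)
    label.set_text(tag, status)  // this sample classifies as "Inverted"
plot(r, "Pearson R")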
_______________________
▋ INDICATOR SETTINGS:
#Section One: Table Setting
#Section Two: Technical Setting
(1) Country: Select country’s treasury yields data
(2) Timeframe: Time interval.
(3) Fetch By:
(3A) Date: Retrieve data by beginning of date.
(3B) Period: Retrieve data by specifying the number of time series back.
Enjoy. Please let me know if you have any questions.
Thank you.
Nonlinear Regression, Zero-lag Moving Average [Loxx]
Nonlinear Regression and Zero-lag Moving Average
Technical indicators are widely used in financial markets to analyze price data and make informed trading decisions. This indicator presents an implementation of two popular indicators: Nonlinear Regression and Zero-lag Moving Average (ZLMA). Let's explore the functioning of these indicators and discuss their significance in technical analysis.
Nonlinear Regression
The Nonlinear Regression indicator aims to fit a nonlinear curve to a given set of data points. It calculates the best-fit curve by minimizing the sum of squared errors between the actual data points and the predicted values on the curve. The curve is determined by solving a system of equations derived from the data points.
We define a function "nonLinearRegression" that takes two parameters: "src" (the input data series) and "per" (the period over which the regression is calculated). It calculates the coefficients of the nonlinear curve using the least squares method and returns the predicted value for the current period. The nonlinear regression curve provides insights into the overall trend and potential reversals in the price data.
Zero-lag Moving Average (ZLMA)
Moving averages are widely used to smoothen price data and identify trend directions. However, traditional moving averages introduce a lag due to the inclusion of past data. The Zero-lag Moving Average (ZLMA) overcomes this lag by dynamically adjusting the weights of past values, resulting in a more responsive moving average.
We create a function named "zlma" that calculates the ZLMA. It takes two parameters: "src" (the input data series) and "per" (the period over which the ZLMA is calculated). The ZLMA is computed by first calculating a weighted moving average (LWMA) using a linearly decreasing weight scheme. The LWMA is then used to calculate the ZLMA by applying the same weight scheme again. The ZLMA provides a smoother representation of the price data while reducing lag.
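Following that description literally, a linearly weighted average applied twice, gives the sketch below; Pine's ta.wma uses exactly the linearly decreasing weight scheme mentioned, though whether the original adds further lag correction is not shown:

//@version=5
indicator("ZLMA Sketch", overlay=true)
per = input.int(20, "Period")
zlma(src, n) =>
    lwma = ta.wma(src, n)   // first pass: linearly weighted moving average
    ta.wma(lwma, n)         // second pass: the same weight scheme applied again
plot(zlma(close, per), "ZLMA", color.orange)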
Combining Nonlinear Regression and ZLMA
The ZLMA is applied to the input data series using the function "zlma(src, zlmaper)". The ZLMA values are then passed as input to the "nonLinearRegression" function, along with the specified period for nonlinear regression. The output of the nonlinear regression is stored in the variable "out".
To enhance the visual representation of the indicator, colors are assigned based on the relationship between the nonlinear regression value and a signal value (sig) calculated from the previous period's nonlinear regression value. If the current "out" value is greater than the previous "sig" value, the color is set to green; otherwise, it is set to red.
The indicator also includes optional features such as coloring the bars based on the indicator's values and displaying signals for potential long and short positions. The signals are generated based on the crossover and crossunder of the "out" and "sig" values.
Wrapping Up
This indicator combines two important concepts: Nonlinear Regression and Zero-lag Moving Average indicators, which are valuable tools for technical analysis in financial markets. These indicators help traders identify trends, potential reversals, and generate trading signals. By combining the nonlinear regression curve with the zero-lag moving average, this indicator provides a comprehensive view of the price dynamics. Traders can customize the indicator's settings and use it in conjunction with other analysis techniques to make well-informed trading decisions.
Crude Oil: Backwardation Vs Contango
Crude Oil, CL
Plots Futures Curve: Futures contract prices over the next 3.5 years; to easily visualize Backwardation Vs Contango (carrying charge) markets.
Carrying charge (contract prices increasing into the future) = normal, representing the costs of carrying/storage of a commodity. When this is flipped to Backwardation (as above; contract prices decreasing into the future): it's a bullish sign: Buyers want this commodity, and they want it NOW.
Note: indicator does not map to time axis in the same way as price; it simply plots the progression of contract months out into the future; left to right; so timeframe DOESN'T MATTER for this plot
TO UPDATE (every year or so): in REQUEST CONTRACTS section, delete old contracts (top) and add new ones (bottom). Then in PLOTTING section, Delete old contract labels (bottom); add new contract labels (top); adjust the X in 'bar_index-(X+_historical)' numbers accordingly
This is one of several similar Futures Curve indicators: Meats | Metals | Grains | VIX | Crude Oil
If you want to build from this; to work on other commodities; be aware that Tradingview limits the number of contract calls to 40 (hence the multiple indicators)
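A stripped-down sketch of the mechanism: one request.security() call per contract, connected left to right on the last bar. The contract symbols below are hypothetical and must be swapped for currently listed months:

//@version=5
indicator("Futures Curve Sketch")
daysBack = input.int(0, "Historical days back")
// Hypothetical contract symbols; swap in the currently listed CL months.
m1 = request.security("NYMEX:CLF2025", "D", close[daysBack])
m2 = request.security("NYMEX:CLG2025", "D", close[daysBack])
m3 = request.security("NYMEX:CLH2025", "D", close[daysBack])
plot(m1, "Front month")
var line seg1 = na
var line seg2 = na
if barstate.islast
    line.delete(seg1)
    line.delete(seg2)
    // one x-step per contract month: left to right runs from nearer to farther expiry
    seg1 := line.new(bar_index - 2, m1, bar_index - 1, m2, width=2)
    seg2 := line.new(bar_index - 1, m2, bar_index, m3, width=2)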
Tips:
-Right click and reset chart if you can't see the plot; or if you have trouble with the scaling.
-Right click and add to new scale if you prefer this not to overlay directly on price. Or move to new pane below.
-If this takes too long to load (due to so many security calls); comment out the more distant future half of the contracts; and their respective labels. Or comment out every other contract and every other label if you prefer.
--Added historical input: input days back in time; to see the historical shape of the Futures curve via selecting 'days back' snapshot
updated 20th June 2022
© twingall
Moving Average Delta (Deviation = Absolute/Pips Simple MA)
MAD stands for Moving Average Delta; it calculates the difference between the moving average and the price. The curve shows the difference in pips.
By calculating the delta between these two points we can see small changes in the direction of the moving average which are normally hard to see. You can think of the MAD curve as looking at a simple moving average curve through a microscope. It may help predict a trend change before it happens; the sample chart shows the beginning of a trend change from long to short.
Interpretation:
• If the MAD curve is above 0, the moving average is above the price;
• conversely, if the MAD curve is below 0, the moving average is below the price.
• Before a trend change, the moving average flattens and the MAD curve points toward zero.
• Watching the maximum rise/fall of the difference helps anticipate an upcoming trend change.
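A minimal sketch of the calculation, MA minus price expressed in pips; the pip-size derivation is an assumption for FX symbols, and plain mintick may be more appropriate elsewhere:

//@version=5
indicator("MAD Sketch")
len = input.int(20, "MA Length")
ma  = ta.sma(close, len)
pip = syminfo.mintick * 10            // common FX pip-size assumption
mad = (ma - close) / pip              // > 0: MA above price, < 0: MA below price
plot(mad, "MAD", mad > 0 ? color.red : color.green)
hline(0)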