FCMS - Arrays Exercise - High Low with Arrays - Study
This is just a script to exercise the use of arrays in Pine Script.
I think we could say that every for loop we wrote in Pine Script before arrays is eligible to be rewritten with an array.
Our script will get more efficient and more reliable.
Likewise, every "if" case is eligible to become a function.
I confess I was addicted to using if / else if / else in my code, but I've recently been updating my scripts and they've become more efficient.
I couldn't find an array function that inserts a new value while removing the oldest one, so I'm using this condition to "shift" the first value.
I'll update as soon as I find a better way to do it.
In any case, for this specific goal we already have a built-in function, as I note in the script.
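For reference, the usual rolling-window pattern combines array.push with array.shift. A minimal sketch, assuming a highest-high use case (highest() would be the built-in equivalent):

//@version=4
study("Rolling window sketch")
len = input(20)
var buf = array.new_float(0)
array.push(buf, high)          // append the newest value at the back
if array.size(buf) > len
    array.shift(buf)           // remove the oldest value from the front
plot(array.max(buf))           // highest high over the window; highest(high, len) is the built-in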
Pivot Support / Resistance Panel [JV]
Hello Traders,
First of all, thanks to LonesomeTheBlue for helping me grasp arrays, a wonderful addition to Pine Script.
This indicator uses arrays to find Pivot Points and mark them as Support / Resistance.
It displays an info panel with the latest values.
This code was written using the following standards:
• PineCoders Coding Conventions for Pine: www.pinecoders.com
Configurable options:
Up to 6 Support / Resistance Levels.
Pivot Lookback Period.
Panel Color.
Text Color.
Panel Offset.
Panel Size.
Enjoy!
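By way of illustration only (this is not JV's actual code), a minimal sketch of collecting pivot levels into a capped array might look like this:

//@version=4
study("Pivot S/R array sketch", overlay=true)
prd = input(10, title="Pivot Lookback Period")
maxLevels = input(6, title="Max S/R Levels")
var levels = array.new_float(0)
ph = pivothigh(high, prd, prd)
pl = pivotlow(low, prd, prd)
if not na(ph)
    array.unshift(levels, ph)      // newest resistance level goes to the front
if not na(pl)
    array.unshift(levels, pl)      // newest support level goes to the front
if array.size(levels) > maxLevels
    array.pop(levels)              // discard the oldest level
float lastLevel = na
if array.size(levels) > 0
    lastLevel := array.get(levels, 0)
plot(lastLevel, title="Most recent pivot level", style=plot.style_linebr)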
Array SMA
Calculating an SMA on an Array
In this script I show you how to calculate an SMA on an array.
Several values are plotted just for illustration.
Steps to follow:
- make sure you have an array with values (source array)
- create a blank array (pref. with the same size)
- call the function array_sma
This function fills the empty array with the SMA values of the source array.
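The published array_sma function isn't reproduced here, but a hypothetical sketch of the idea could look like this (the 100-element source buffer is an assumption for illustration):

//@version=4
study("Array SMA sketch")
len = input(9)
// fill _dst with the SMA of _src; the window is clamped at the start of the array
array_sma(_src, _dst, _len) =>
    for i = 0 to array.size(_src) - 1
        first = max(0, i - _len + 1)
        sum = 0.0
        for j = first to i
            sum := sum + array.get(_src, j)
        array.set(_dst, i, sum / (i - first + 1))
    _dst
var src = array.new_float(0)
array.push(src, close)                        // source array of recent closes (never empty)
if array.size(src) > 100
    array.shift(src)
dst = array.new_float(array.size(src), na)    // blank array of the same size
smoothed = array_sma(src, dst, len)
plot(array.get(smoothed, array.size(smoothed) - 1))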
Logistic RSI, STOCH, ROC, AO, ... by DGT
An experimental attempt at applying the Logistic Map Equation to some widely used indicators.
This study presents "Awesome Oscillator (AO)", "Rate of Change (ROC)", "Relative Strength Index (RSI)", and "Stochastic (STOCH)" together with a custom interpretation of the Logistic Map Equation.
Calculations with the Logistic Map Equation make sense when the results are iterated many times through the same equation.
Here is the Logistic Map Equation: X(n+1) = r * X(n) * (1 - X(n))
The value of r is the key to this equation; it dramatically changes the behaviour of the Logistic Map.
The value we assign to r is greater than 0 and less than 1 (0 < r < 1). In this case, performing the iterations with the maximum number of output series allowed by Pine is quite enough for our purpose, and thanks to arrays we can easily store the results for further processing.
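As a rough, hypothetical sketch of the iteration idea (the 0.5 seed and the RSI-normalized r are illustrative assumptions, not the study's exact code):

//@version=4
study("Logistic map sketch")
iters = input(16, title="Iterations")
r = rsi(close, 14) / 100           // assumed: normalize an indicator into 0..1 to feed r
xs = array.new_float(0)
x = 0.5                            // assumed seed value
for i = 1 to iters
    x := r * x * (1 - x)           // X(n+1) = r * X(n) * (1 - X(n))
    array.push(xs, x)              // arrays store every iteration for further processing
plot(array.stdev(xs), title="Stdev of iterations")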
What we have as output:
Each iteration result is then plotted (excluding the first iteration), as circles or a line based on user preference.
Values above and below the zero level (0) are coloured differently to emphasize bull and bear power.
Finally, the standard deviation of the array's elements is plotted as a line. Users may choose to display only this line.
So where do the indicators "Awesome Oscillator (AO)", "Rate of Change (ROC)", "Relative Strength Index (RSI)", and "Stochastic (STOCH)" come in?
Those are the indicators whose values are assigned to r, the key variable of the Logistic Map formula.
Further details regarding the Logistic Map can be found under the description of the "Logistic EMA w/ Signals by DGT" study.
Disclaimer:
Trading success is all about following your trading strategy; indicators should fit within that strategy, not be traded upon by themselves.
The script is for informational and educational purposes only. Use of the script does not constitute professional and/or financial advice. You alone have the sole responsibility of evaluating the script output and risks associated with the use of the script. In exchange for using the script, you agree not to hold dgtrd TradingView user liable for any possible claim for damages arising from any decision you make based on use of the script
Polynomial Regression Bands + Channel [DW]
This is an experimental study designed to calculate polynomial regression for any order polynomial that TV is able to support.
This study aims to educate users on polynomial curve fitting, and the derivation process of Least Squares Moving Averages (LSMAs).
I also designed this study with the intent of showcasing some of the capabilities and potential applications of TV's fantastic new array functions.
Polynomial regression is a form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modeled as a polynomial of nth degree (order).
For clarification, linear regression can also be described as a first order polynomial regression. The process of deriving linear, quadratic, cubic, and higher order polynomial relationships is all the same.
In addition, although deriving a polynomial regression equation results in a nonlinear output, the process of solving for polynomials by least squares is actually a special case of multiple linear regression.
So, just like in multiple linear regression, polynomial regression can be solved in essentially the same way through a system of linear equations.
In this study, you are first given the option to smooth the input data using the 2 pole Super Smoother Filter from John Ehlers.
I chose this specific filter because I find it provides superior smoothing with low lag and fairly clean cutoff. You can, of course, implement your own filter functions to see how they compare if you feel like experimenting.
Filtering noise prior to regression calculation can be useful for providing a more stable estimation since least squares regression can be rather sensitive to noise.
This is especially true on lower sampling lengths and higher degree polynomials since the regression output becomes more "overfit" to the sample data.
Next, data arrays are populated for the x-axis and y-axis values. These are the main datasets utilized in the rest of the calculations.
To keep the calculations more numerically stable for higher periods and orders, the x array is filled with integers 1 through the sampling period rather than using current bar numbers.
This process can be thought of as shifting the origin of the x-axis as new data emerges.
This keeps the axis values significantly lower than the 10k+ bar values, thus maintaining more numerical stability at higher orders and sample lengths.
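A minimal sketch of that shifted-origin population (illustrative only, not the study's exact code):

//@version=4
study("Shifted-origin x-array sketch")
period = input(100)
x_vals = array.new_float(0)
y_vals = array.new_float(0)
for i = 0 to period - 1
    array.push(x_vals, i + 1)                  // x runs 1..period, not absolute bar numbers
    array.push(y_vals, close[period - 1 - i])  // oldest sample pairs with x = 1
plot(array.get(y_vals, array.size(y_vals) - 1), title="Current bar's y")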
The data arrays are then used to create a pseudo 2D matrix of x power sums, and a vector of x power*y sums.
These matrices are a representation of the system of equations that needs to be solved in order to find the regression coefficients.
Below, you'll see some examples of the pattern of equations used to solve for our coefficients represented in augmented matrix form.
For example, the augmented matrix for the system equations required to solve a second order (quadratic) polynomial regression by least squares is formed like this:
(∑x^0 ∑x^1 ∑x^2 | ∑(x^0)y)
(∑x^1 ∑x^2 ∑x^3 | ∑(x^1)y)
(∑x^2 ∑x^3 ∑x^4 | ∑(x^2)y)
The augmented matrix for the third order (cubic) system is formed like this:
(∑x^0 ∑x^1 ∑x^2 ∑x^3 | ∑(x^0)y)
(∑x^1 ∑x^2 ∑x^3 ∑x^4 | ∑(x^1)y)
(∑x^2 ∑x^3 ∑x^4 ∑x^5 | ∑(x^2)y)
(∑x^3 ∑x^4 ∑x^5 ∑x^6 | ∑(x^3)y)
This pattern continues for any n ordered polynomial regression, in which the coefficient matrix is a n + 1 wide square matrix with the last term being ∑x^2n, and the last term of the result vector being ∑(x^n)y.
Thanks to this pattern, it's rather convenient to solve for our regression coefficients of any nth degree polynomial by a number of different methods.
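For illustration, here's a hypothetical sketch of accumulating those power sums in Pine v4, with the square matrix stored row-major in a 1D array (not DW's exact implementation):

//@version=4
study("Power-sum matrix sketch")
// build the (order+1)x(order+1) coefficient matrix A and result vector b from the x and y arrays
f_power_sums(_x, _y, _order) =>
    m = _order + 1
    A = array.new_float(m * m, 0.0)
    b = array.new_float(m, 0.0)
    for k = 0 to array.size(_x) - 1
        xv = array.get(_x, k)
        yv = array.get(_y, k)
        for i = 0 to m - 1
            array.set(b, i, array.get(b, i) + pow(xv, i) * yv)                     // ∑(x^i)y
            for j = 0 to m - 1
                array.set(A, i * m + j, array.get(A, i * m + j) + pow(xv, i + j))  // ∑x^(i+j)
    [A, b]
xs = array.new_float(0)
ys = array.new_float(0)
for k = 0 to 9
    array.push(xs, k + 1.0)
    array.push(ys, close[9 - k])
[A, b] = f_power_sums(xs, ys, 2)   // quadratic example
plot(array.get(A, 0), title="∑x^0 (sample count)")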
In this script, I utilize a process known as LU Decomposition to solve for the regression coefficients.
Lower-upper (LU) Decomposition is a neat form of matrix manipulation that expresses a 2D matrix as the product of lower and upper triangular matrices.
This decomposition method is incredibly handy for solving systems of equations, calculating determinants, and inverting matrices.
For a linear system Ax=b, where A is our coefficient matrix, x is our vector of unknowns, and b is our vector of results, LU Decomposition turns our system into LUx=b.
We can then factor this into two separate matrix equations and solve the system using these two simple steps:
1. Solve Ly=b for y, where y is a new vector of unknowns that satisfies the equation, using forward substitution.
2. Solve Ux=y for x using backward substitution. This gives us the values of our original unknowns - in this case, the coefficients for our regression equation.
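A minimal, hypothetical sketch of those two substitution steps, with the triangular matrices stored row-major in 1D arrays (not the study's exact code):

//@version=4
study("LU substitution sketch")
// step 1: solve L*y = b by forward substitution (L is lower triangular)
f_forward_sub(_L, _b, _n) =>
    y = array.new_float(_n, 0.0)
    for i = 0 to _n - 1
        s = array.get(_b, i)
        if i > 0
            for j = 0 to i - 1
                s := s - array.get(_L, i * _n + j) * array.get(y, j)
        array.set(y, i, s / array.get(_L, i * _n + i))
    y
// step 2: solve U*x = y by backward substitution (U is upper triangular)
f_backward_sub(_U, _y, _n) =>
    x = array.new_float(_n, 0.0)
    for i = _n - 1 to 0
        s = array.get(_y, i)
        if i < _n - 1
            for j = i + 1 to _n - 1
                s := s - array.get(_U, i * _n + j) * array.get(x, j)
        array.set(x, i, s / array.get(_U, i * _n + i))
    x
plot(na)   // placeholder output so the sketch compiles standalone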
After solving for the regression coefficients, the values are then plugged into our regression equation:
Y = a0 + a1*x + a2*x^2 + ... + an*x^n, where a() is the ()th coefficient in ascending order and n is the polynomial degree.
From here, an array of curve values for the period based on the current equation is populated, and standard deviation is added to and subtracted from the equation to calculate the channel high and low levels.
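For illustration, a small sketch of evaluating that equation from a coefficient array (the sample coefficients are arbitrary):

//@version=4
study("Polynomial evaluation sketch")
// evaluate Y = a0 + a1*x + a2*x^2 + ... + an*x^n from a coefficient array
f_poly_eval(_coef, _x) =>
    y = 0.0
    for i = 0 to array.size(_coef) - 1
        y := y + array.get(_coef, i) * pow(_x, i)
    y
coefs = array.new_float(0)
array.push(coefs, 1.0)     // a0 (arbitrary sample values)
array.push(coefs, 0.5)     // a1
array.push(coefs, -0.1)    // a2
plot(f_poly_eval(coefs, 3.0), title="Y at x = 3")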
The calculated curve values can also be shifted to the left or right using the "Regression Offset" input.
Changing the offset parameter will move the curve left for negative values, and right for positive values.
This offset parameter shifts the curve points within our window while using the same equation, allowing you to use offset datapoints on the regression curve to calculate the LSMA and bands.
The curve and channel's appearance is optionally approximated using Pine's v4 line tools to draw segments.
Since there is a limitation on how many lines can be displayed per script, each curve consists of 10 segments with lengths determined by a user defined step size. In total, there are 30 lines displayed at once when active.
By default, the step size is 10, meaning each segment is 10 bars long. This is because the default sampling period is 100, so this step size will show the approximate curve for the entire period.
When adjusting your sampling period, be sure to adjust your step size accordingly when curve drawing is active if you want to see the full approximate curve for the period.
Note that when you have a larger step size, you will see more seemingly "sharp" turning points on the polynomial curve, especially on higher degree polynomials.
The polynomial functions that are calculated are continuous and differentiable across all points. The perceived sharpness is simply due to our limitation on available lines to draw them.
The approximate channel drawings also come equipped with style inputs, so you can control the type, color, and width of the regression, channel high, and channel low curves.
I also included an input to determine if the curves are updated continuously, or only upon the closing of a bar for reduced runtime demands. More about why this is important in the notes below.
For additional reference, I also included the option to display the current regression equation.
This allows you to easily track the polynomial function you're using, and to confirm that the polynomial is properly supported within Pine.
There are some cases that aren't supported properly due to Pine's limitations. More about this in the notes on the bottom.
In addition, I included a line of text beneath the equation to indicate how many bars left or right the calculated curve data is currently shifted.
The display label comes equipped with style editing inputs, so you can control the size, background color, and text color of the equation display.
The Polynomial LSMA, high band, and low band in this script are generated by tracking the current endpoints of the regression, channel high, and channel low curves respectively.
The output of these bands is similar in nature to Bollinger Bands, but with an obviously different derivation process.
By displaying the LSMA and bands in tandem with the polynomial channel, it's easy to visualize how LSMAs are derived, and how the process that goes into them is drastically different from a typical moving average.
The main difference between LSMA and other MAs is that LSMA is showing the value of the regression curve on the current bar, which is the result of a modelled relationship between x and the expected value of y.
With other MA / filter types, they are typically just averaging or frequency filtering the samples. This is an important distinction in interpretation. However, both can be applied similarly when trading.
An important distinction with the LSMA in this script is that since we can model higher degree polynomial relationships, the LSMA here is not limited to only linear as it is in TV's built in LSMA.
Bar colors are also included in this script. The color scheme is based on disparity between source and the LSMA.
This script is a great study for educating yourself on the process that goes into polynomial regression, as well as one of the many processes computers utilize to solve systems of equations.
Also, the Polynomial LSMA and bands are great components to try implementing into your own analysis setup.
I hope you all enjoy it!
--------------------------------------------------------
NOTES:
- Even though the algorithm used in this script can be implemented to find any order polynomial relationship, TV has a limit on the significant figures for its floating point outputs.
This means that as you increase your sampling period and / or polynomial order, some higher order coefficients will be output as 0 due to floating point round-off.
There is currently no viable workaround for this issue since there isn't a way to calculate more significant figures than the limit.
However, in my humble opinion, fitting a polynomial higher than cubic to most time series data is "overkill" due to bias-variance tradeoff.
Although, this tradeoff is also dependent on the sampling period. Keep that in mind. A good rule of thumb is to aim for a nice "middle ground" between bias and variance.
If TV ever chooses to expand its significant figure limits, then it will be possible to accurately calculate even higher order polynomials and periods if you feel the desire to do so.
To test if your polynomial is properly supported within Pine's constraints, check the equation label.
If you see a coefficient value of 0 in front of any of the x values, reduce your period and / or polynomial order.
- Although this algorithm has less computational complexity than most other linear system solving methods, this script itself can still be rather demanding on runtime resources - especially when drawing the curves.
In the event you find your current configuration is throwing back an error saying that the calculation takes too long, there are a few things you can try:
-> Refresh your chart or hide and unhide the indicator.
The runtime environment on TV is very dynamic and the allocation of available memory varies with collective server usage.
By refreshing, you can often get it to process since you're basically just waiting for your allotment to increase. This method works well in a lot of cases.
-> Change the curve update frequency to "Close Only".
If you've tried refreshing multiple times and still have the error, your configuration may simply be too demanding of resources.
v4 drawing objects, most notably lines, can be highly taxing on the servers. That's why Pine has a limit on how many can be displayed in the first place.
By limiting the curve updates to only bar closes, this will significantly reduce the runtime needs of the lines since they will only be calculated once per bar.
Note that doing this will only limit the visual output of the curve segments. It has no impact on regression calculation, equation display, or LSMA and band displays.
-> Uncheck the display boxes for the drawing objects.
If you still have troubles after trying the above options, then simply stop displaying the curve - unless it's important to you.
As I mentioned, v4 drawing objects can be rather resource intensive. So a simple fix that often works when other things fail is to just stop them from being displayed.
-> Reduce sampling period, polynomial order, or curve drawing step size.
If you're having runtime errors and don't want to sacrifice the curve drawings, then you'll need to reduce the calculation complexity.
If you're using a large sampling period, or high order polynomial, the operational complexity becomes significantly higher than lower periods and orders.
When you have larger step sizes, more historical referencing is used for x-axis locations, which does have an impact as well.
By reducing these parameters, the runtime issue will often be solved.
Another important detail to note with this is that you may have configurations that work just fine in real time, but struggle to load properly in replay mode.
This is because the replay framework also requires its own allotment of runtime, so that must be taken into consideration as well.
- Please note that the line and label objects are reprinted as new data emerges. That's simply the nature of drawing objects vs standard plots.
I do not recommend or endorse basing your trading decisions on the drawn curve. That component is merely to serve as a visual reference of the current polynomial relationship.
No repainting occurs with the Polynomial LSMA and bands though. Once the bar is closed, that bar's calculated values are set.
So when using the LSMA and bands for trading purposes, you can rest easy knowing that history won't change on you when you come back to view them.
- For those who intend on utilizing or modifying the functions and calculations in this script for their own scripts, I included debug dialogues in the script for all of the arrays to make the process easier.
To use the debugs, see the "Debugs" section at the bottom. All dialogues are commented out by default.
The debugs are displayed using label objects. By default, I have them all located to the right of current price.
If you wish to display multiple debugs at once, it will be up to you to decide on display locations at your leisure.
When using the debugs, I recommend commenting out the other drawing objects (or even all plots) in the script to prevent runtime issues and overlapping displays.
CC - Array-meta Consolidated Interval Display (ACID)
This script extends my other two Array examples (which I've also provided to you open source):
The Ticker-centric 5m,15m,45m,1h,4h,1d resolution labels using arrays:
And the more Macro VIX,GLD,TLT,QQQ,SPY,IWM 1d resolution labels using arrays:
This script aims to show how to use min/max/avg with Arrays easily. My next example after this will be exploring the usage of variance versus covariance ratios over different periodic interval resolutions. Currently, this is using the following intervals: 5m,15m,45m,1h,4h,1d. It takes these intervals, calculates the values at those resolutions and puts the absolute min and max from the 5 minute to the 1 day resolutions.
It's more of an example of the power that arrays can hold, as all this truly is right now is a min/max bound calculator. The real gem lies in the avg calculators for multiple resolutions tied into a single label with readable data. Check out the code and let me know what you think. If you need more examples, the other two scripts I mentioned before are also open source.
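A minimal sketch of the min/max/avg idea across the six resolutions (illustrative only; RSI14 stands in for the full set of values):

//@version=4
study("ACID-style min/max/avg sketch")
vals = array.new_float(0)
array.push(vals, security(syminfo.tickerid, "5", rsi(close, 14)))
array.push(vals, security(syminfo.tickerid, "15", rsi(close, 14)))
array.push(vals, security(syminfo.tickerid, "45", rsi(close, 14)))
array.push(vals, security(syminfo.tickerid, "60", rsi(close, 14)))
array.push(vals, security(syminfo.tickerid, "240", rsi(close, 14)))
array.push(vals, security(syminfo.tickerid, "D", rsi(close, 14)))
plot(array.min(vals), title="Min across resolutions")
plot(array.max(vals), title="Max across resolutions")
plot(array.avg(vals), title="Avg across resolutions")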
Using this on intervals of less than 1D sometimes times out; the way I wrote it is memory intensive and may not work for non-pro users.
Thanks!
NONE OF THIS IS A FORWARD-LOOKING STATEMENT; THIS IS NOT A PREDICTIVE ANALYSIS TOOL. THIS IS A RESEARCH ATTEMPT AT A NOVEL INDICATOR. I am not responsible for outcomes using it.
Please use and give criticisms freely. I am experimenting with combining resolutions and comparing covariance values at different levels right now, so let me know your thoughts! The last indicator will likely not be open source, but may be depending on how complex I get.
CC - Macro Consolidated Interval Display (MCID)
Ever wish you didn't have to rapidly flip between 6 different tickers to get the full picture?
Yeah, me too. Do you also wish that you kind of understood how the shift / unshift functions work for arrays?
Yeah, I did too. Both of those birds are taken care of with one stone!
The Macro Consolidated Interval Display uses the new Array structure and security to display data for VIX, GLD, TLT, QQQ, SPY and IWM (at a 1D interval) SIMULTANEOUSLY! Regardless of which ticker you're looking at, you can get the full picture of macro futures data without flipping around to get it.
This is my first script trying to use arrays. It basically shows the following at a 1d interval:
ATR14, RSI7, RSI14, SMA50, SMA200 and VWAP for VIX.
ATR14, RSI7, RSI14, SMA50, SMA200 and VWAP for GLD.
ATR14, RSI7, RSI14, SMA50, SMA200 and VWAP for TLT.
ATR14, RSI7, RSI14, SMA50, SMA200 and VWAP for QQQ.
ATR14, RSI7, RSI14, SMA50, SMA200 and VWAP for SPY.
ATR14, RSI7, RSI14, SMA50, SMA200 and VWAP for IWM.
To make it more or less busy, I've allowed you to toggle off any of the levels you wish. I've also chosen to leave this as open source, as it's nothing too experimental, and I hope that it can gain some traction as an Array example that the public can use! If you don't like the different values that are shown, use this source code example as a spring-board to put values that you do care about onto the labels.
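For those curious about the shift / unshift mechanics mentioned above, a minimal standalone sketch (not this script's code):

//@version=4
study("shift / unshift sketch")
var recent = array.new_float(0)
array.unshift(recent, close)   // unshift inserts at index 0, so the front holds the newest value
if array.size(recent) > 6
    array.pop(recent)          // pop removes from the back; array.shift would remove index 0 instead
plot(array.get(recent, 0), title="Newest value")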
If this code has helped you at all please drop me a like or some constructive criticism if you do not think it's worth a like.
Good luck and happy trading friends. This should be compatible with my CID as well:
If this gets traction, I will post something similar for a dynamic combination of tickers and intervals that you can set yourself.
CC - Consolidated Interval Display (CID)
Ever wish you didn't have to rapidly flip between 6 different intervals to get the full picture?
Yeah, me too. Do you also wish that you kind of understood how the shift / unshift functions work for arrays?
Yeah, I did too. Both of those birds are taken care of with one stone!
The Consolidated Interval Display uses the new Array structure and security to display data for 5m, 15m, 45m, 1h, 4h and 1d intervals SIMULTANEOUSLY! Regardless of which interval you're looking at, you can get the full picture of numerical data without flipping around to get it.
This is my first script trying to use arrays. It basically shows the following for the given ticker:
ATR14, RSI7, RSI14, SMA50, SMA200 and VWAP at the 5 minute level.
ATR14, RSI7, RSI14, SMA50, SMA200 and VWAP at the 15 minute level.
ATR14, RSI7, RSI14, SMA50, SMA200 and VWAP at the 45 minute level.
ATR14, RSI7, RSI14, SMA50, SMA200 and VWAP at the 1 hour level.
ATR14, RSI7, RSI14, SMA50, SMA200 and VWAP at the 4 hour level.
ATR14, RSI7, RSI14, SMA50, SMA200 and VWAP at the 1 day level.
To make it more or less busy, I've allowed you to toggle off any of the levels you wish. I've also chosen to leave this as open source, as it's nothing too experimental, and I hope that it can gain some traction as an Array example that the public can use! If you don't like the different values that are shown, use this source code example as a spring-board to put values that you do care about onto the labels.
If this code has helped you at all please drop me a like or some constructive criticism if you do not think it's worth a like.
Good luck and happy trading friends.
If this gets traction, I will post something similar for a combination of SPY, VIX, GOLD, QQQ, IWM and TLT.
Z-Score
The z-score is a way of counting the number of standard deviations between a given data value and the mean of the data set.
Z-score = (x̄ - μ) / (σ / √ n)
x̄ = sample mean (calculated with the array.avg function on an array of close values, where i = 1 to 21)
μ = population mean ( = avg(close, n))
σ = standard deviation of the population ( = stdev(close,n))
n = number of 'close' values (trading-day closes), set by an input
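A minimal sketch of the formula as described (illustrative; the 21-bar window is the default input):

//@version=4
study("Z-Score sketch")
n = input(21)
var a = array.new_float(0)
array.push(a, close)
if array.size(a) > n
    array.shift(a)
xbar = array.avg(a)        // x̄: sample mean from the array
mu = sma(close, n)         // μ: population mean
sigma = stdev(close, n)    // σ: population standard deviation
plot((xbar - mu) / (sigma / sqrt(n)), title="Z-score")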
Note: The previous indicator is part of a larger series of indicators.
Resampling Filter Pack [DW]
This is an experimental study that calculates filter values at user defined sample rates.
This study is aimed to provide users with alternative functions for filtering price at custom sample rates.
First, source data is resampled using the desired rate and cycle offset. The highest possible rate is 1 bar per sample (BPS).
There are three resampling methods to choose from:
-> BPS - Resamples based on the number of bars.
-> Interval - Resamples based on time in multiples of current charting timeframe.
-> PA - Resamples based on changes in price action by a specified size. The PA algorithm in this script is derived from my Range Filter algorithm.
The range for PA method can be sized in points, pips, ticks, % of price, ATR, average change, and absolute quantity.
Then, the data is passed through one of my custom built filter functions designed to calculate filter values upon trigger conditions rather than bars.
In this study, these functions are used to calculate resampled prices based on bar rates, but they can be used and modified for a number of purposes; a rough sketch of the idea follows the list below.
The available conditional sampling filters in this study are:
-> Simple Moving Average (SMA)
-> Exponential Moving Average (EMA)
-> Zero Lag Exponential Moving Average (ZLEMA)
-> Double Exponential Moving Average (DEMA)
-> Rolling Moving Average (RMA)
-> Weighted Moving Average (WMA)
-> Hull Moving Average (HMA)
-> Exponentially Weighted Hull Moving Average (EWHMA)
-> Two Pole Butterworth Low Pass Filter (BLP)
-> Two Pole Gaussian Low Pass Filter (GLP)
-> Super Smoother Filter (SSF)
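As a rough illustration of the conditional-sampling idea (a hypothetical BPS-style EMA, not one of the study's actual functions):

//@version=4
study("Conditional sampling EMA sketch")
rate = input(5, title="Bars per sample")
length = input(20, title="EMA length")
trigger = bar_index % rate == 0            // resample once every N bars (BPS-style)
alpha = 2.0 / (length + 1)
var float ema_val = na
if trigger
    ema_val := na(ema_val) ? close : alpha * close + (1 - alpha) * ema_val
plot(ema_val, title="Resampled EMA")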
Downsampling is a powerful filtering approach that can be applied in numerous ways. However, it does suffer from a trade off, like most studies do.
Reducing the sample rate will completely eliminate certain levels of noise, at the cost of some spectral distortion. The lower your sample rate is, the more distortion you'll see.
With that being said, for analyzing trends, downsampling may prove to be one of your best friends!
Pinescript Bubble Sort using Arrays
The new feature of arrays allows for a multitude of new possibilities within Pine Script. This script implements a bubble sort function with an average-case complexity of O(n^2) and a best case of O(n). This sort does not require large amounts of memory to process and has advantages when sorting small lists of data.
The main advantages: Bubble sort is an in-place sorting algorithm. It does not require extra memory or even stack space like in the case of merge sort or quicksort.
The main disadvantages: In the worst case the time complexity is equal to O(n^2) which is not efficient in comparison to other sorts which can have a time complexity of O(n*logn).
The Pseudocode for a bubble sort is as follows:
begin BubbleSort(list)
   for all elements of list
      if list[i] > list[i+1]
         swap(list[i], list[i+1])
      end if
   end for
   return list
end BubbleSort
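A minimal Pine v4 rendering of that pseudocode (for reference, v4 also ships a built-in array.sort):

//@version=4
study("Bubble sort sketch")
f_bubble_sort(_a) =>
    n = array.size(_a)
    if n > 1
        for i = 0 to n - 2
            for j = 0 to n - 2 - i
                if array.get(_a, j) > array.get(_a, j + 1)
                    tmp = array.get(_a, j)                  // swap adjacent out-of-order elements
                    array.set(_a, j, array.get(_a, j + 1))
                    array.set(_a, j + 1, tmp)
    _a
vals = array.new_float(0)
for k = 0 to 9
    array.push(vals, close[k])   // last 10 closes, unsorted
sorted = f_bubble_sort(vals)
plot(array.get(sorted, 0), title="Smallest of last 10 closes")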
The results of the sort are plotted against the unsorted list and overlaid on the chart.
A big thanks to Alex Grover for the help.
Range Filter [DW]
This is an experimental study designed to filter out minor price action for a clearer view of trends.
Inspired by the QQE's volatility filter, this filter applies the process directly to price rather than to a smoothed RSI.
First, a smooth average price range is calculated for the basis of the filter and multiplied by a specified amount.
Next, the filter is calculated by gating price movements that do not exceed the specified range.
Lastly, the target ranges are plotted to display the prices that will trigger filter movement.
Custom bar colors are included. The color scheme is based on the filtered price trend.
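A rough sketch of that gating logic (illustrative only, not the published implementation):

//@version=4
study("Range filter sketch", overlay=true)
per = input(100, title="Range Period")
mult = input(3.0, title="Range Multiplier")
rng = ema(abs(close - close[1]), per) * mult   // smooth average price range, scaled
var float filt = na
upward = close > filt + rng                    // price escapes the range to the upside
downward = close < filt - rng                  // price escapes the range to the downside
filt := na(filt) ? close : upward ? close - rng : downward ? close + rng : filt
plot(filt, title="Range Filter")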