FunctionNNLayer
Library "FunctionNNLayer"
Generalized Neural Network Layer method.
function(inputs, weights, n_nodes, activation_function, bias, alpha, scale) Generalized Layer.
Parameters:
inputs : float array, input values.
weights : float array, weight values.
n_nodes : int, number of nodes in layer.
activation_function : string, default='sigmoid', name of the activation function used.
bias : float, default=1.0, bias to pass into activation function.
alpha : float, default=na, if required to pass into activation function.
scale : float, default=na, if required to pass into activation function.
Returns: float
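A rough, self-contained sketch of what such a generalized layer does in Pine (each node computes a weighted sum of the inputs plus a bias and passes it through an activation); this is illustrative only, not the library's implementation:
//@version=5
indicator("NN layer sketch")
sigmoid(v) =>
    1.0 / (1.0 + math.exp(-v))
// weights are assumed to be laid out node by node: node0 weights first, then node1, ...
layer(inputs, weights, n_nodes, bias) =>
    outputs = array.new_float(0)
    n_inputs = array.size(inputs)
    for node = 0 to n_nodes - 1
        float s = bias
        for i = 0 to n_inputs - 1
            s += array.get(inputs, i) * array.get(weights, node * n_inputs + i)
        array.push(outputs, sigmoid(s))
    outputs
var inputs  = array.from(0.2, -0.5, 0.8)
var weights = array.from(0.1, 0.4, -0.3, 0.7, -0.2, 0.05)
out = layer(inputs, weights, 2, 1.0)
plot(array.get(out, 0), "node 0 output")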
FunctionNNPerceptron
Library "FunctionNNPerceptron"
Perceptron Function for Neural networks.
function(inputs, weights, bias, activation_function, alpha, scale) generalized perceptron node for Neural Networks.
Parameters:
inputs : float array, the inputs of the perceptron.
weights : float array, the weights for inputs.
bias : float, default=1.0, the default bias of the perceptron.
activation_function : string, default='sigmoid', activation function applied to the output.
alpha : float, default=na, if required for activation.
scale : float, default=na, if required for activation.
Returns: float
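A minimal sketch of a single perceptron node with a name-selected activation (illustrative; only two activation names are handled here, not the library's full set):
//@version=5
indicator("Perceptron node sketch")
activate(name, v) =>
    switch name
        'sigmoid' => 1.0 / (1.0 + math.exp(-v))
        'relu'    => math.max(0.0, v)
        => v    // linear fallback for unknown names
perceptron(inputs, weights, bias, activation_function) =>
    float s = bias
    for i = 0 to array.size(inputs) - 1
        s += array.get(inputs, i) * array.get(weights, i)
    activate(activation_function, s)
node_out = perceptron(array.from(0.5, -0.25, 1.0), array.from(0.8, 0.1, -0.4), 1.0, 'sigmoid')
plot(node_out, "node output")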
MLActivationFunctions
Library "MLActivationFunctions"
Activation functions for Neural networks.
binary_step(value) Basic threshold output classifier to activate/deactivate neuron.
Parameters:
value : float, value to process.
Returns: float
linear(value) Input is the same as output.
Parameters:
value : float, value to process.
Returns: float
sigmoid(value) Sigmoid or logistic function.
Parameters:
value : float, value to process.
Returns: float
sigmoid_derivative(value) Derivative of sigmoid function.
Parameters:
value : float, value to process.
Returns: float
tanh(value) Hyperbolic tangent function.
Parameters:
value : float, value to process.
Returns: float
tanh_derivative(value) Hyperbolic tangent function derivative.
Parameters:
value : float, value to process.
Returns: float
relu(value) Rectified linear unit (RELU) function.
Parameters:
value : float, value to process.
Returns: float
relu_derivative(value) RELU function derivative.
Parameters:
value : float, value to process.
Returns: float
leaky_relu(value) Leaky RELU function.
Parameters:
value : float, value to process.
Returns: float
leaky_relu_derivative(value) Leaky RELU function derivative.
Parameters:
value : float, value to process.
Returns: float
relu6(value) RELU-6 function.
Parameters:
value : float, value to process.
Returns: float
softmax(value) Softmax function.
Parameters:
value : float array, values to process.
Returns: float
softplus(value) Softplus function.
Parameters:
value : float, value to process.
Returns: float
softsign(value) Softsign function.
Parameters:
value : float, value to process.
Returns: float
elu(value, alpha) Exponential Linear Unit (ELU) function.
Parameters:
value : float, value to process.
alpha : float, default=1.0, predefined constant, controls the value to which an ELU saturates for negative net inputs.
Returns: float
selu(value, alpha, scale) Scaled Exponential Linear Unit (SELU) function.
Parameters:
value : float, value to process.
alpha : float, default=1.67326324, predefined constant, controls the value to which an SELU saturates for negative net inputs.
scale : float, default=1.05070098, predefined constant.
Returns: float
exponential(value) Pointer to math.exp() function.
Parameters:
value : float, value to process.
Returns: float
function(name, value, alpha, scale) Activation function.
Parameters:
name : string, name of activation function.
value : float, value to process.
alpha : float, default=na, if required.
scale : float, default=na, if required.
Returns: float
derivative(name, value, alpha, scale) Derivative Activation function.
Parameters:
name : string, name of activation function.
value : float, value to process.
alpha : float, default=na, if required.
scale : float, default=na, if required.
Returns: float
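A compact sketch of a few of these activations and the name-based dispatcher in Pine (the 0.01 leaky ReLU slope is an assumption; the SELU constants are the defaults documented above):
//@version=5
indicator("Activation functions sketch")
sigmoid(v)    => 1.0 / (1.0 + math.exp(-v))
tanh_(v)      => (math.exp(v) - math.exp(-v)) / (math.exp(v) + math.exp(-v))
relu(v)       => math.max(0.0, v)
leaky_relu(v) => v > 0 ? v : 0.01 * v
elu(v, alpha)         => v > 0 ? v : alpha * (math.exp(v) - 1.0)
selu(v, alpha, scale) => scale * (v > 0 ? v : alpha * (math.exp(v) - 1.0))
softsign(v)   => v / (1.0 + math.abs(v))
softplus(v)   => math.log(1.0 + math.exp(v))
// name-based dispatcher, mirroring function(name, value, alpha, scale)
activation(name, v, alpha, scale) =>
    switch name
        'sigmoid'    => sigmoid(v)
        'tanh'       => tanh_(v)
        'relu'       => relu(v)
        'leaky relu' => leaky_relu(v)
        'elu'        => elu(v, na(alpha) ? 1.0 : alpha)
        'selu'       => selu(v, na(alpha) ? 1.67326324 : alpha, na(scale) ? 1.05070098 : scale)
        'softsign'   => softsign(v)
        'softplus'   => softplus(v)
        => v    // 'linear' and unknown names fall through
plot(activation('sigmoid', ta.rsi(close, 14) / 50.0 - 1.0, float(na), float(na)), "activated RSI")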
MLLossFunctions
Library "MLLossFunctions"
Methods for Loss functions.
mse(expects, predicts) Mean Squared Error (MSE): "MSE = 1/N * sum((y - y')^2)".
Parameters:
expects : float array, expected values.
predicts : float array, prediction values.
Returns: float
binary_cross_entropy(expects, predicts) Binary Cross-Entropy Loss (log).
Parameters:
expects : float array, expected values.
predicts : float array, prediction values.
Returns: float
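A short Pine sketch of both losses over a pair of arrays (illustrative; assumes non-empty arrays and, for the cross-entropy, predictions strictly between 0 and 1):
//@version=5
indicator("Loss functions sketch")
mse(expects, predicts) =>
    float s = 0.0
    int n = array.size(expects)
    for i = 0 to n - 1
        float d = array.get(expects, i) - array.get(predicts, i)
        s += d * d
    s / n
binary_cross_entropy(expects, predicts) =>
    float s = 0.0
    int n = array.size(expects)
    for i = 0 to n - 1
        float y = array.get(expects, i)
        float p = array.get(predicts, i)
        s += y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    -s / n
plot(mse(array.from(1.0, 0.0, 1.0), array.from(0.9, 0.2, 0.7)), "MSE")
plot(binary_cross_entropy(array.from(1.0, 0.0, 1.0), array.from(0.9, 0.2, 0.7)), "BCE")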
Financial Astrology Indexes ML Daily Trend
Daily trend indicator based on financial astrology cycles detected with advanced machine learning techniques for some of the most important market indexes: DJI, UK100, SPX, IBC, IXIC, NI225, BANKNIFTY, NIFTY and the GLD fund (not an index) for gold predictions. The daily price trend is forecasted through planetary cycles (angular aspects, speed phases, declination zone); fast cycles are based on the Moon, Mercury, Venus and the Sun, and mid-term cycles are based on Mars, Vesta and Ceres. The combination of all these cycles produces a daily price trend prediction that is encoded into a PineScript array using a binary format, "0 or 1", representing sell and buy signals respectively. The indicator provides signals from 2021-01-01 to 2022-12-31; the purpose of the past months' signals is to support backtesting of the indicator combined with other technical indicator entries like MAs, RSI or Stochastic. For predictions beyond 2022, a re-training phase of the machine learning models will be required.
When the signal moving average rises from 0 toward 1 it indicates increasing buying force; when it falls from 1 toward 0 it indicates increasing selling force; and when it moves sideways around the 0.4-0.6 area it predicts a period of buy/sell equilibrium and trader indecision, which results in price congestion within a narrow range.
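As a rough illustration of this reading (the published indicator takes its 0/1 values from a pre-computed signal array, not from price as done here):
//@version=5
indicator("Daily trend signal smoothing sketch")
// hypothetical 0/1 daily signal, stand-in for the astrology model's output
signal = close > ta.sma(close, 20) ? 1.0 : 0.0
smoothed = ta.sma(signal, 7)
plot(smoothed, "signal MA", color.teal)
hline(0.6)
hline(0.4)
// above 0.6: buy force dominates; below 0.4: sell force dominates;
// between the lines: equilibrium / price congestion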
We have also published the same indicator for our crypto-currencies research portfolio:
DISCLAIMER: This indicator is experimental and does not provide financial or investment advice; its main purpose is to demonstrate the predictive power of financial astrology. Any allocation of funds following the documented machine learning model predictions is a high-risk endeavour, and it is the user's responsibility to practice healthy risk management according to their situation.
Financial Astrology Crypto ML Daily Trend
This daily trend indicator is based on financial astrology cycles detected with advanced machine learning techniques for the crypto-currencies research portfolio: ADA, BAT, BNB, BTC, DASH, EOS, ETC, ETH, LINK, LTC, XLM, XMR, XRP, ZEC and ZRX. The daily price trend is forecasted through planetary cycles (angular aspects, speed, declination); fast cycles are based on the Moon, Mercury, Venus and the Sun, and mid-term cycles are based on Mars, Vesta and Ceres. The combination of all these cycles produces a daily price trend prediction that is encoded into a PineScript array using a binary format, "0 or 1", representing sell and buy signals respectively. The indicator provides signals from 2021-01-01 to 2022-12-31; the purpose of the past months' signals is to support backtesting of the indicator combined with other technical indicator entries like MAs, RSI or Stochastic. For predictions beyond 2022, a re-training phase of the machine learning models will be required.
The resolution of this indicator is 1D. You can tune a parameter that determines how many future bars of the daily trend are plotted, and adjust an hour shift that pulls future signals into the current bar, producing a leading-indicator effect that anticipates trend changes by a few hours. Combined with technical analysis indicators, this daily trend is very powerful: based on backtesting results it can help produce approximately 60% profitable signals. You can look at our open-source GitHub repositories to validate accuracy using the backtesting strategies we have implemented in the Jesse crypto trading framework as a proof of concept of the indicator's predictive potential. Alternatively, we have implemented a PineScript strategy that uses this indicator; just note that we still need to update its signals for the period July 2021 to December 2022. That strategy has accumulated more than 110 likes and many traders have validated the predictive power of financial astrology.
DISCLAIMER: This indicator is experimental and does not provide financial or investment advice; its main purpose is to demonstrate the predictive power of financial astrology. Any allocation of funds following the documented machine learning model predictions is a high-risk endeavour, and it is the user's responsibility to practice healthy risk management according to their situation.
VWMA with kNN Machine Learning: MFI/ADX
This is an experimental strategy that uses a volume-weighted MA (VWMA) crossing together with a machine learning kNN filter that uses ADX and MFI to predict whether the signal is useful. k-nearest neighbours (kNN) is one of the simplest machine learning classification algorithms: it puts input parameters in a multidimensional space, and then, when a new set of parameters is given, it makes a prediction based on a plurality vote of its k nearest neighbours.
Money Flow Index (MFI) is an oscillator similar to RSI, but with volume taken into account. Average Directional Index (ADX) is an indicator of trend strength. By putting them together in a two-dimensional space and checking whether nearby values indicated a strong uptrend or downtrend, we hope to filter out bad signals from the MA crossing strategy.
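A minimal sketch of the kNN filter idea on the MFI/ADX plane (lengths, k and the vote logic are illustrative, not the strategy's exact implementation):
//@version=5
indicator("kNN filter sketch (MFI/ADX)")
k = input.int(5, "k neighbours")
mfi = ta.mfi(hlc3, 14)
[diplus, diminus, adx] = ta.dmi(14, 14)
dir = close > close[1] ? 1 : -1    // realized direction of the current bar
var f1   = array.new_float(0)      // stored MFI values
var f2   = array.new_float(0)      // stored ADX values
var dirs = array.new_int(0)        // what price did after each stored point
if not na(mfi[1]) and not na(adx[1])
    array.push(f1, mfi[1])
    array.push(f2, adx[1])
    array.push(dirs, dir)
// vote of the k nearest stored points to the current (MFI, ADX) pair
vote = 0
n = array.size(f1)
if n > k
    dists = array.new_float(0)
    for i = 0 to n - 1
        array.push(dists, math.sqrt(math.pow(mfi - array.get(f1, i), 2) + math.pow(adx - array.get(f2, i), 2)))
    // naive k-nearest: repeatedly take the smallest remaining distance
    for j = 1 to k
        idx = array.indexof(dists, array.min(dists))
        vote += array.get(dirs, idx)
        array.set(dists, idx, 1e10)
plot(vote, "kNN vote")    // positive = neighbours leaned up, negative = down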
This is an experiment, so any feedback would be appreciated. It was tested on the BTC/USDT pair on the 5-minute timeframe. I am planning to expand this strategy in the future to include more moving averages and filters.
Morun Astro Trend MAs cross Strategy
A strategy that combines the astrology machine-learning cycles indicator signals with technical MA indicators, based on the signals index of the GitHub project github.com
Machine Learning: kNN-based Strategy (update)
kNN-based Strategy (FX and Crypto)
Description:
This update to the popular kNN-based strategy features:
improvements in the business logic,
an adjustable k value for the kNN model,
one more feature (MOM),
a streamlined signal filter and
some other minor fixes.
Now this script works in all timeframes! I intentionally decided to publish this script separately so that users can see the differences.
Machine Learning: LVQ-based Strategy
LVQ-based Strategy (FX and Crypto)
Description:
Learning Vector Quantization (LVQ) can be understood as a special case of an artificial neural network; more precisely, it applies a winner-take-all, learning-based approach. It is a prototype-based supervised classification method that trains its weights through a competitive learning algorithm.
Algorithm:
Initialize weights
Train for 1 to N number of epochs
- Select a training example
- Compute the winning vector
- Update the winning vector
Classify test sample
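A minimal Pine sketch of these steps with two prototypes (one per class) and one online update per bar; the features, learning rate and starting prototypes are illustrative, not the strategy's actual ones:
//@version=5
indicator("LVQ sketch")
lrate = input.float(0.1, "learning rate")
f1 = ta.rsi(close, 14) / 100.0
f2 = ta.stoch(close, high, low, 14) / 100.0
cls = close > close[1] ? 1 : 0              // class realized by the previous bar's features
var p0 = array.from(0.3, 0.3)               // prototype for class 0 (down)
var p1 = array.from(0.7, 0.7)               // prototype for class 1 (up)
dist(p, a, b) => math.sqrt(math.pow(array.get(p, 0) - a, 2) + math.pow(array.get(p, 1) - b, 2))
update(p, a, b, toward) =>
    s = toward ? 1.0 : -1.0                 // pull the winner toward the example or push it away
    array.set(p, 0, array.get(p, 0) + s * lrate * (a - array.get(p, 0)))
    array.set(p, 1, array.get(p, 1) + s * lrate * (b - array.get(p, 1)))
// training example = previous bar's features with the now-known label
if not na(f1[1]) and not na(f2[1])
    if dist(p1, f1[1], f2[1]) < dist(p0, f1[1], f2[1])
        update(p1, f1[1], f2[1], cls == 1)
    else
        update(p0, f1[1], f2[1], cls == 0)
// classify the current bar with the nearest prototype
plot(dist(p1, f1, f2) < dist(p0, f1, f2) ? 1 : 0, "LVQ class")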
The LVQ algorithm offers a framework to test various indicators easily to see if they have any *predictive value*. One can easily add cog, wpr and others.
Note: TradingView's playback feature helps to see this strategy in action. The algo is tested with BTCUSD on the 1-hour timeframe.
Warning: This is a preliminary version! Signals ARE repainting.
***Warning***: Signals LARGELY depend on hyperparams (lrate and epochs).
Style tags: Trend Following, Trend Analysis
Asset class: Equities, Futures, ETFs, Currencies and Commodities
Dataset: FX Minutes/Hours+++/Days
Machine Learning: Logistic Regression
Multi-timeframe strategy based on the Logistic Regression algorithm
Description:
This strategy uses a classic machine learning algorithm that came from statistics - Logistic Regression (LR).
The first and most important thing about logistic regression is that it is not a 'regression' but a 'classification' algorithm. The name itself is somewhat misleading. Regression gives a continuous numeric output, but most of the time we need the output in classes (i.e. categorical, discrete). For example, we want to classify emails into 'spam' or 'not spam', classify treatment into 'success' or 'failure', classify a statement into 'right' or 'wrong', classify election data into 'fraudulent vote' or 'non-fraudulent vote', classify a market move into 'long' or 'short', and so on. These are examples of logistic regression having a binary output (also called dichotomous).
You can also think of logistic regression as a special case of linear regression where the outcome variable is categorical and we use the log of odds as the dependent variable. In simple words, it predicts the probability of occurrence of an event by fitting data to a logit function.
Basically, the theory behind logistic regression is very similar to that of linear regression, where we seek to draw a best-fitting line over data points. But in logistic regression we don't directly fit a straight line to our data as in linear regression; instead, we fit an S-shaped curve, called a sigmoid, to our observations, that best SEPARATES the data points. Technically speaking, the main goal of building the model is to find the parameters (weights) using gradient descent.
In this script the LR algorithm is retrained on each new bar, trying to classify it into one of the two categories. This is done via the logistic_regression function, which updates the weights w in a loop that repeats for the chosen number of iterations. In the end the weights are passed through the sigmoid function, yielding a prediction.
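A hedged sketch of this per-bar retraining loop (features, lookback, learning rate and iteration count are stand-ins, not the script's actual inputs):
//@version=5
indicator("Logistic regression sketch")
lookback = input.int(50, "training lookback")
lrate    = input.float(0.1, "learning rate")
iters    = input.int(50, "iterations")
sigmoid(z) => 1.0 / (1.0 + math.exp(-z))
x1 = ta.rsi(close, 14) / 100.0
x2 = ta.roc(close, 10) / 100.0
float w0 = 0.0
float w1 = 0.0
float w2 = 0.0
if bar_index > lookback
    for iter = 1 to iters
        float g0 = 0.0
        float g1 = 0.0
        float g2 = 0.0
        for i = 1 to lookback
            y   = close[i - 1] > close[i] ? 1.0 : 0.0    // did price rise on the following bar?
            p   = sigmoid(w0 + w1 * x1[i] + w2 * x2[i])
            err = p - y
            g0 += err
            g1 += err * x1[i]
            g2 += err * x2[i]
        w0 -= lrate * g0 / lookback
        w1 -= lrate * g1 / lookback
        w2 -= lrate * g2 / lookback
plot(sigmoid(w0 + w1 * x1 + w2 * x2), "P(next bar up)")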
Mind that some assets require modifying the script's input parameters. For instance, when used with BTCUSD and USDJPY, the 'Normalization Lookback' parameter should be set down to around 4 (in the 2 to 5 range), and optionally the 'Use Price Data for Signal Generation?' parameter should be checked. The defaults were tested with EURUSD.
Note: TradingView's playback feature helps to see this strategy in action.
Warning: Signals ARE repainting.
Style tags: Trend Following, Trend Analysis
Asset class: Equities, Futures, ETFs, Currencies and Commodities
Dataset: FX Minutes/Hours/Days
Machine Learning: Perceptron-based strategy
Perceptron-based strategy
Description:
The Learning Perceptron is the simplest possible artificial neural network (ANN), consisting of just a single neuron and capable of learning a certain class of binary classification problems. The idea behind ANNs is that by selecting good values for the weight parameters (and the bias), the ANN can model the relationships between the inputs and some target.
Generally, ANN neurons receive a number of inputs, weight each of those inputs, sum the weights, and then transform that sum using a special function called an activation function. The output of that activation function is then either used as the prediction (in a single neuron model) or is combined with the outputs of other neurons for further use in more complex models.
The purpose of the activation function is to take the input signal (that's the weighted sum of the inputs and the bias) and turn it into an output signal. Think of this activation function as firing (activating) the neuron when it returns 1, and doing nothing when it returns 0. This sort of computation is accomplished with a step function: f(z) = {1 if z > 0 else 0}. This function transforms any weighted sum of the inputs into a binary output (either 1 or 0). The trick to making this useful is finding (learning) a set of weights that lead to good predictions using this activation function.
Training our perceptron is simply a matter of initializing the weights to zero (or a random value) and then implementing the perceptron learning rule, which just updates the weights based on the error of each observation with the current weights. This has the effect of moving the classifier's decision boundary in the direction that would have helped it classify the last observation correctly. This is achieved via a for loop that iterates over each observation, makes a prediction for it, calculates the error of that prediction, and then updates the weights accordingly. In this way, weights are gradually updated until they converge. Each sweep through the training data is called an epoch.
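A minimal Pine sketch of this training loop (one epoch per bar; the features and learning rate are illustrative, not the script's):
//@version=5
indicator("Perceptron learning rule sketch")
lookback = input.int(60, "training lookback")
lrate    = input.float(0.1, "learning rate")
step(z) => z > 0 ? 1.0 : 0.0
x1 = ta.rsi(close, 14) / 100.0
x2 = (close - ta.sma(close, 20)) / close
float w0 = 0.0
float w1 = 0.0
float w2 = 0.0
if bar_index > lookback
    // one epoch: sweep the lookback window once, nudging the weights by the error
    for i = 1 to lookback
        y    = close[i - 1] > close[i] ? 1.0 : 0.0    // realized next-bar direction
        yhat = step(w0 + w1 * x1[i] + w2 * x2[i])
        err  = y - yhat
        w0 += lrate * err
        w1 += lrate * err * x1[i]
        w2 += lrate * err * x2[i]
plot(step(w0 + w1 * x1 + w2 * x2), "perceptron prediction")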
In this script the perceptron is retrained on each new bar trying to classify this bar by drawing the moving average curve above or below the bar.
This script was tested with BTCUSD, USDJPY, and EURUSD.
Note: TradingView's playback feature helps to see this strategy in action.
Warning: Signals ARE repainting.
Style tags: Trend Following, Trend Analysis
Asset class: Equities, Futures, ETFs, Currencies and Commodities
Dataset: FX Minutes/Hours+/Days
Machine Learning: kNN-based Strategy (mtf)
This is a multi-timeframe version of the kNN-based strategy.
Machine Learning: kNN-based Strategy
kNN-based Strategy (FX and Crypto)
Description:
This strategy uses a classic machine learning algorithm - k Nearest Neighbours (kNN) - to let you find a prediction for the next (tomorrow's, next month's, etc.) market move. Being a supervised, instance-based machine learning algorithm, kNN is one of the simplest learning algorithms.
To predict the next market move, the kNN algorithm uses the historic data, collected in 3 arrays - feature1, feature2 and directions - and finds the k nearest neighbours of the current indicator(s) values. The two-dimensional kNN algorithm simply looks at what happened in the past when the two indicators had a similar level. It then looks at the k nearest neighbours, sees their state and thus classifies the current point.
The kNN algorithm offers a framework to test all kinds of indicators easily to see if they have any *predictive value*. One can easily add cog, wpr and others.
Note: TradingView's playback feature helps to see this strategy in action.
Warning: Signals ARE repainting.
Style tags: Trend Following, Trend Analysis
Asset class: Equities, Futures, ETFs, Currencies and Commodities
Dataset: FX Minutes/Hours+++/Days
Machine Learning / Longs [Experimental]
Hello Traders/Programmers,
For a long time I wondered whether it is possible to make a script in Pine that has its own memory and criteria, so that it would learn and find patterns as images according to the given criteria. Once we got arrays of strings, lines and labels, I tried and made this experimental script. The script works only for long positions.
Now let's look at how it works:
On each candle it creates an image of the last 8 candles. Before the image is created it finds the highest/lowest levels of those 8 candles and builds a string of length 64 (8 * 8). For each square it checks whether it contains a wick, a green or red body, or a green or red body with a wick (a rough Pine sketch of this encoding follows the value list below). See the following picture:
Each square gets the value:
0: nothing in it
1: only wick in it
2: only red body in it
3: only green body in it
4: red body and wick in it
5: green body and wick in it
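A rough Pine sketch of this grid encoding (illustrative only, not the script's exact code):
//@version=5
indicator("Candle image encoding sketch")
// split the high/low range of the last 8 candles into an 8x8 grid and give
// every square one digit, using the 0..5 values listed above
encode() =>
    hi   = ta.highest(high, 8)
    lo   = ta.lowest(low, 8)
    rowH = (hi - lo) / 8.0
    img  = ""
    for c = 0 to 7                              // candle column, oldest first
        idx     = 7 - c                         // bars back
        bTop    = math.max(open[idx], close[idx])
        bBot    = math.min(open[idx], close[idx])
        isGreen = close[idx] > open[idx]
        for r = 0 to 7                          // price row, top first
            rTop   = hi - r * rowH
            rBot   = rTop - rowH
            bodyIn = bTop > rBot and bBot < rTop
            wickIn = (high[idx] > rBot and bTop < rTop) or (low[idx] < rTop and bBot > rBot)
            v = bodyIn and wickIn ? (isGreen ? 5 : 4) : bodyIn ? (isGreen ? 3 : 2) : wickIn ? 1 : 0
            img := img + str.tostring(v)
    img
image = encode()
plot(str.length(image), "encoded length (always 64)")
// on bars where the profit target is later reached, the script stores `image`
// in its memory array and compares new images against the stored ones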
It then checks whether price went up by an amount equal to or greater than the user-defined profit; if yes, it adds the image to the memory/array. I call this part the learning part.
The matching part works as follows: if there is one or more element in the memory, it creates the image for the current 8 candles and checks the memory for similar images. If a stored image has a similarity higher than the user-defined similarity level, it shows the label "Matched" together with the similarity rate and the image from the memory. As soon as it finds one with a similarity rate equal to or greater than the user-defined level, it stops searching further.
An example of a matched image:
and then price increased and you got the profit :)
Options:
Period: if there is a possible profit higher than the user-defined minimum profit within that period, it checks the images from the 2nd to the Xth bar.
Min Profit: you need to set the minimum expected profit accordingly. For example, on a 1m chart don't enter 10% as the minimum profit :)
Similarity Rate: as told above, you can set the minimum similarity rate; a higher similarity rate means better results, but if you set higher rates, the number of matched images will decrease. Set it wisely :)
Max Memory Size: you can set the number of images (those that gave a profit equal to or higher than the one you set) to be kept in memory.
Change Bar Color: optionally it can change the bar color if the current image is found in the memory.
The current version of the script doesn't check whether the price reaches the minimum profit target, so there are no statistics.
This is completely experimental work and I made it for fun. No one and no script can predict the future, and you should not try to predict the future.
P.S. It starts searching on the last bar; it doesn't check historical bars. If you want to see it work on history, check it in replay mode :)
If you get a calculation timeout error, hide and unhide the script. ;)
Enjoy!
ANN BTC MTF Golden Cross Period MACD
Hi, this is the MACD version of the ANN BTC Multi Timeframe script.
The MACD Periods were approximated to the Golden Cross values.
MACD Lengths :
Signal Length = 25
Fast Length = 50
Slow Length = 200
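For reference, a MACD with those lengths can be computed like this (a sketch, not the published script):
//@version=5
indicator("Golden cross period MACD sketch")
// ta.macd(source, fast, slow, signal) returns the MACD line, signal line and histogram
[macdLine, signalLine, hist] = ta.macd(close, 50, 200, 25)
plot(macdLine, "MACD", color.blue)
plot(signalLine, "Signal", color.orange)
plot(hist, "Histogram", style = plot.style_histogram)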
Regards.
NAND Perceptron
Experimental NAND perceptron based upon a Python template that aims to predict NAND gate outputs. A perceptron is one of the foundational building blocks of nearly all advanced neural network layers and models for algo trading and machine learning.
The goal behind this script was threefold:
To prove and demonstrate that an ACTUAL working neural net can be implemented in Pine, even if incomplete.
To pave the way for other traders and coders to iterate on this script and push the boundaries of Tradingview strategies and indicators.
To see if a self-contained neural network component for parameter optimization within Pinescript was hypothetically possible.
NOTE: This is a highly experimental proof of concept - this is NOT a ready-made template to include or integrate into existing strategies and indicators, yet (emphasis YET - neural networks have a lot of potential utility when utilized and implemented properly).
Hardcoded NAND Gate outputs with Bias column (X0):
// NAND Gate + X0 Bias and Y-true
// X0 // X1 // X2 // Y
// 1 // 0 // 0 // 1
// 1 // 0 // 1 // 1
// 1 // 1 // 0 // 1
// 1 // 1 // 1 // 0
Column X0 is bias feature/input
Column X1 and X2 are the NAND Gate
Column Y is the y-true values for the NAND gate
yhat is the prediction at that timestep
F0, F1, F2, F3 are the dot products of the weights (W0, W1, W2) and the input features (X0, X1, X2)
Learning rate and activation function threshold are enabled by default as input parameters
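A minimal sketch of one training step for a single hard-coded NAND row, using the dot product, step activation and perceptron update described above (illustrative; the published script unrolls these steps for all four rows):
//@version=5
indicator("NAND perceptron step sketch")
lrate     = input.float(0.1, "learning rate")
threshold = input.float(0.5, "activation threshold")
// weights persist across bars so they keep learning
var float w0 = 0.0
var float w1 = 0.0
var float w2 = 0.0
// training row: X0 (bias input) = 1, X1 = 1, X2 = 1, y-true = 0
x0 = 1.0
x1 = 1.0
x2 = 1.0
y  = 0.0
f    = w0 * x0 + w1 * x1 + w2 * x2    // dot product of weights and inputs
yhat = f > threshold ? 1.0 : 0.0      // step activation
err  = y - yhat
w0 += lrate * err * x0
w1 += lrate * err * x1
w2 += lrate * err * x2
plot(yhat, "prediction for row (1,1,1)")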
Uncomment sections for more training iterations/epochs:
Loop optimizations would be amazing to have for a selectable length for training iterations/epochs but I'm not sure if it's possible in Pine with how this script is structured.
Error metrics and loss have not been implemented due to difficulty with script length and iterations vs. epochs - I haven't been able to configure the input parameters to successfully predict the right values for all four y-true values of the NAND gate (I've only been able to get 3 out of 4; if you're able to get all four predictions correct, please let me know).
// //---- REFERENCE for final output
// A3 := 1, y0 true
// B3 := 1, y1 true
// C3 := 1, y2 true
// D3 := 0, y3 true
PLEASE READ: Source article/template and main code reference:
towardsdatascience.com
towardsdatascience.com
towardsdatascience.com
CBOE PCR Factor Dependent Variable Odd Generator
This script is my Dependent Variable Odd Generator script with the Put/Call Ratio (PCR) appended, only for CBOE and the instruments connected to it.
For CBOE this script is more accurate and faster than the Dependent Variable Odd Generator, and the stagnant-market odds are better and more realistic.
Do not use it on timeframes of less than 1 day, because the PCR data may repaint.
My advice is to use the 1-week bars to gain insight into your analysis.
This code is open source under the MIT license. If you have any improvements or corrections to suggest, please send me a pull request via the github repository github.com
I hope it will help your work. Best regards!
ANN MACD (BTC)
Logic is correct.
But I prefer to say experimental because the sample set is narrow (about 300 rows).
Let's start:
6 inputs: volume change, Bollinger low band change, Bollinger mid band change, Bollinger up band change, RSI change, MACD histogram change (a Pine sketch of these inputs follows the network specs below).
1 output: future bar change (historical)
Training timeframe : 15 mins (Analysis TF > 4 hours (My opinion))
Learning cycles : 337
Training error: 0.009999
Input columns: 6
Output columns: 1
Excluded columns: 0
Grid
Training example rows: 301
Validating example rows: 0
Querying example rows: 0
Excluded example rows: 0
Duplicated example rows: 0
Network
Input nodes connected: 6
Hidden layer 1 nodes: 8
Hidden layer 2 nodes: 0
Hidden layer 3 nodes: 0
Output nodes: 1
Learning rate : 0.6, Momentum : 0.8
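A sketch of how the six bar-to-bar input changes above might be computed in Pine (indicator lengths are assumptions; the original script's settings may differ):
//@version=5
indicator("ANN input features sketch")
[bbMid, bbUp, bbLow] = ta.bb(close, 20, 2.0)
[macdLine, signalLine, hist] = ta.macd(close, 12, 26, 9)
volChg   = ta.change(volume)
bbLowChg = ta.change(bbLow)
bbMidChg = ta.change(bbMid)
bbUpChg  = ta.change(bbUp)
rsiChg   = ta.change(ta.rsi(close, 14))
histChg  = ta.change(hist)
plot(rsiChg, "RSI change")
// the training target (the next bar's change) is only known in hindsight,
// which is why the network is trained on historical rows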
More info :
EDIT: This code is open source under the MIT License. If you have any improvements or corrections to suggest, please send me a pull request via the github repository github.com
ANN MACD Future Forecast (SPY 1D)
NOTE: Deep learning was conducted on a narrow sample set for testing purposes, so this script is experimental.
This system is based on the following article and is inspired by an external program:
hackernoon.com
None of the artificial neural networks on TradingView actually work, as they are not based on completely correct logic. Unlike the others, in this system:
IMPORTANT NOTE: If the tangent (tanh) activation function is used, the input data must also be tangent values (changes relative to the previous bar's values).
Inputs were prepared according to this judgment.
1. The tangent function used as the activation function is written correctly (the tangent function from the article: ActivationFunctionTanh(v) => (1 - exp(-2 * v)) / (1 + exp(-2 * v))); a Pine sketch follows the list below.
2. Missing bias parts in the formulas were added.
3. The output is taken from the next day (historical), so that the next bar can be predicted, which is the ground truth.
4. The forecast value of the next bar is subtracted from the current bar's change, and the market direction is determined.
5. When the future forecast and the current close are added together, the resulting series is called the seed. The seed carries data from the present, from yesterday, and from the future.
6. This seed is then run through the MACD method. Thus, due to the exponential averages, more importance is given to recent developments, and the acceleration will show us the direction.
However, a short position should be taken on a crossover and a long position on a crossunder, because the predicted values work in reverse. Even though we use the same periods (9, 12, 26), it is much faster!
7. There is no future-looking code that can cause repainting. However, the bar color should be checked after the close.
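A hedged sketch of points 1, 5 and 6 above (the tanh activation, a tangent-valued input and a MACD applied to the seed); the forecast here is a plain placeholder, not the ANN's output:
//@version=5
indicator("Tanh activation and seed MACD sketch")
activationTanh(v) => (1.0 - math.exp(-2.0 * v)) / (1.0 + math.exp(-2.0 * v))
// tangent-valued input: the one-bar change squashed into -1..1
inputTanh = activationTanh(ta.change(close) / close[1])
forecast  = activationTanh(inputTanh)       // stand-in for the network's next-bar forecast
seed      = close + forecast                // seed = current close + future forecast
[macdLine, signalLine, hist] = ta.macd(seed, 12, 26, 9)
plot(macdLine, "seed MACD", color.blue)
plot(signalLine, "seed signal", color.orange)
// as described above, the signals are read in reverse: short on crossover, long on crossunder
bgcolor(ta.crossover(macdLine, signalLine) ? color.new(color.red, 85) : ta.crossunder(macdLine, signalLine) ? color.new(color.green, 85) : na)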
The system is completely correct.
However, a very narrow sample was selected.
100 data points: tangent diffs of volume change, Bollinger Band value changes (up band, mid band, low band), LazyBear's Squeeze Momentum Indicator (SQZMOM_LB) change, and the next bar's (historical) price change were put into the deep learning test.
IMPORTANT NOTE: The larger the sample set and the more effective the dependent variables, the higher the hit rate of the deep learning test!
EDIT: This code is open source under the MIT License. If you have any improvements or corrections to suggest, please send me a pull request via the github repository github.com
Stay tuned. Best regards!
Dependent Variable Odd Generator For Machine Learning Techniques
CAUTION: Not suitable for a strategy; open to development.
If we can separate the stagnant market from the other market regimes, can we be that much more accurate?
This project was written to research that. It is just a tiny part of the beginning.
And this is a very necessary but very small side function within the main function. Let's start:
Hi users, I had this idea in my mind for a long time, but I had a hard time finding the parameters that would define a stagnant market. This idea is my first original command system. Although it is very difficult to make sense of the stagnant market, I think this command system can achieve realistic proportions. With the money flow index, I opened the way to determine the level. On the other hand, the prices were also using a money flow index, which forced me to set the limits between the levels in a logical way. But the good thing is that since the Bollinger bandwidth uses a larger period, we are able to print normal values at extreme buy and sell values.
In terms of price, we can define excessive buying and selling values because the period is smaller. I have repeatedly looked at the limit values that determine the bull and bear regimes and the Bollinger bandwidth (MFI), and I think these are the right ones. Then I included these values in the probability set.
The bull and bear markets do not form the intersection of the set, and because these are connected events, the stagnant market, which is the intersection, is added to the other markets with the same Venn-diagram logic, so that the sum of the probability set equals 1. I hope that we can renew the number generators used in very important machine learning methods such as the Markov process with generators dependent on dependent variables, which bring us closer to reality. This function is open to development and various machine learning ideas can be built from it. Best wishes.
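A toy sketch of the regime-odds idea (thresholds and lengths are illustrative, not the script's actual values):
//@version=5
indicator("Regime odds sketch")
// classify bull / bear / stagnant so that the three odds always sum to 1
mfi = ta.mfi(hlc3, 14)
[bbMid, bbUp, bbLow] = ta.bb(close, 60, 2.0)
bbw = (bbUp - bbLow) / bbMid                    // Bollinger bandwidth on a larger period
stagnant = bbw < 0.05 ? 1.0 : 0.0
bull     = stagnant == 1.0 ? 0.0 : mfi > 50 ? 1.0 : 0.0
bear     = 1.0 - stagnant - bull
plot(bull, "bull odd", color.green)
plot(bear, "bear odd", color.red)
plot(stagnant, "stagnant odd", color.gray)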
This code is open source under the MIT license. If you have any improvements or corrections to suggest, please send me a pull request via the github repository github.com