STD-Adaptive T3 Channel w/ Ehlers Swiss Army Knife Mod. [Loxx]
STD-Adaptive T3 Channel w/ Ehlers Swiss Army Knife Mod. is an adaptive T3 indicator that uses standard deviation adaptivity and the Ehlers Swiss Army Knife indicator to adjust the alpha value of the T3 calculation. This helps identify trends and reduce noise. In addition, I've included a Keltner Channel to show reversal/exhaustion zones.
What is the Swiss Army Knife Indicator?
John Ehlers explains the calculation here: www.mesasoftware.com
What is the T3 moving average?
Better Moving Averages Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis. Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD, Momentum, Relative Strength Index) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as an SMA (simple moving average) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA(n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA.
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e. use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE/2) which runs in between the two, has phase lag of n/4, but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2.
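To make the two benchmarks concrete, here is a minimal Python sketch of ILRS and EPMA, not code from any of the published scripts; the seeding of the ILRS integration constant follows one plausible reading of the description above.

# Sketch of the two benchmark averages: ILRS integrates the regression slope
# of each window, EPMA takes the endpoint of the fitted regression line.
import numpy as np

def window_fit(y):
    """Least-squares line over y; return (slope, value at the window's last bar)."""
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept + slope * (len(y) - 1)

def ilrs_and_epma(prices, n):
    prices = np.asarray(prices, dtype=float)
    ilrs = np.full(len(prices), np.nan)
    epma = np.full(len(prices), np.nan)
    acc = prices[:n].mean()                 # constant of integration: SMA of first n points
    ilrs[n - 1] = acc
    epma[n - 1] = window_fit(prices[:n])[1]
    for i in range(n, len(prices)):
        slope, endpoint = window_fit(prices[i - n + 1 : i + 1])
        acc += slope                        # integrate the linear regression slope
        ilrs[i] = acc
        epma[i] = endpoint
    return ilrs, epma

# On a unit-slope line both averages are smooth, but ILRS lags by roughly n/2
# bars while EPMA sits on the line itself (zero lag).
line = np.arange(60, dtype=float)
ilrs, epma = ilrs_and_epma(line, n=10)
print(line[-1] - ilrs[-1], line[-1] - epma[-1])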
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. Because L is linear, L(time series - L(time series)) = L(time series) - L(L(time series)), so this is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA, popularized by Patrick Mulloy in Technical Analysis of Stocks & Commodities (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to the IE/2 indicator.
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA(3) has lag 1, and EMA(11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA(3) through itself 5 times than if I just take EMA(11) once.
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (e.g. DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when they are run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA(n) = EMA(n) + EMA(time series - EMA(n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA . The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA(n) + EMA(time series - EMA(n))*.7;
This is algebraically the same as:
EMA(n)*1.7 - EMA(EMA(n))*.7;
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized Dema") is:
GD(n,v) = EMA(n)*(1+v) - EMA(EMA(n))*v,
where v ranges between 0 and 1. When v=0, GD is just an EMA, and when v=1, GD is DEMA. In between, GD is a cooler DEMA. By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD(GD(GD(n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
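For readers who want to see the construction end to end, here is a minimal Python sketch of GD and T3 built directly from the formulas above; the function names and the default volume factor of 0.7 are illustrative and not taken from any particular script.

# Minimal sketch of Tillson's GD and T3, following the formulas above.
# The EMA is the standard recursive form with alpha = 2 / (n + 1).

def ema(series, n):
    """Exponential moving average of a list of prices."""
    alpha = 2.0 / (n + 1)
    out = [series[0]]                      # seed with the first value
    for price in series[1:]:
        out.append(alpha * price + (1 - alpha) * out[-1])
    return out

def gd(series, n, v=0.7):
    """Generalized DEMA: GD(n, v) = EMA(n)*(1+v) - EMA(EMA(n))*v."""
    e1 = ema(series, n)
    e2 = ema(e1, n)
    return [(1 + v) * a - v * b for a, b in zip(e1, e2)]

def t3(series, n, v=0.7):
    """T3(n) = GD(GD(GD(n))): three GD passes, i.e. six EMA poles in total."""
    return gd(gd(gd(series, n, v), n, v), n, v)

closes = [100, 101, 103, 102, 104, 107, 106, 108, 110, 109, 111, 114]
print(t3(closes, n=5)[-1])                 # most recent T3 value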
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
T3 Velocity [Loxx]
T3 Velocity is a simple velocity indicator based on the T3 moving average that uses gradient colors to better identify trends.
What is the T3 moving average?
See Tim Tillson's "Better Moving Averages" explanation under the STD-Adaptive T3 Channel entry above.
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
R-squared Adaptive T3 w/ DSL [Loxx]
R-squared Adaptive T3 w/ DSL is the following T3 indicator, but with Discontinued Signal Lines (DSL) added to reduce noise and thereby increase signal accuracy. This adaptation makes the indicator friendlier for lower-timeframe scalping.
What is R-squared Adaptive?
One tool available in forecasting the trendiness of the breakout is the coefficient of determination (R-squared), a statistical measurement.
The R-squared indicates linear strength between the security's price (the Y-axis) and time (the X-axis). The R-squared is the percentage of squared error that the linear regression can eliminate if it were used as the predictor instead of the mean value. If the R-squared were 0.99, then the linear regression would eliminate 99% of the error for prediction versus predicting closing prices using a simple moving average.
R-squared is used here to derive a T3 factor used to modify price before passing price through a six-pole non-linear Kalman filter.
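As a rough illustration of the idea (not the script's exact code), the sketch below computes a rolling R-squared of closing price regressed on bar index and maps it linearly onto a T3 volume factor; the window length and the lo/hi bounds of the mapping are assumptions.

# Hypothetical sketch: rolling R-squared of price vs. time, mapped to a T3 factor.
# The linear mapping r2 -> factor is an assumption for illustration only.
import numpy as np

def rolling_r_squared(closes, window):
    """R-squared of a linear regression of price on bar index over each window."""
    closes = np.asarray(closes, dtype=float)
    x = np.arange(window)
    r2 = np.full(len(closes), np.nan)
    for i in range(window - 1, len(closes)):
        y = closes[i - window + 1 : i + 1]
        r = np.corrcoef(x, y)[0, 1]        # Pearson correlation of price vs. time
        r2[i] = r * r                      # coefficient of determination
    return r2

def adaptive_t3_factor(r2, lo=0.3, hi=0.9):
    """Scale the T3 volume factor: a stronger linear trend gives a larger factor."""
    return lo + (hi - lo) * r2

closes = np.cumsum(np.random.normal(0.2, 1.0, 200)) + 100
r2 = rolling_r_squared(closes, window=32)
print(adaptive_t3_factor(r2)[-1])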
What is the T3 moving average?
See Tim Tillson's "Better Moving Averages" explanation under the STD-Adaptive T3 Channel entry above.
Included:
Bar coloring
Signals
Alerts
EMA and FEMA Signal/DSL smoothing
Loxx's Expanded Source Types
Pips-Stepped, R-squared Adaptive T3 [Loxx]
Pips-Stepped, R-squared Adaptive T3 is a T3 moving average with optional adaptivity, trend following, and pip-stepping. This indicator also uses optional flat coloring to identify chop zones. This indicator is R-squared adaptive. This is also an experimental indicator.
What is the T3 moving average?
See Tim Tillson's "Better Moving Averages" explanation under the STD-Adaptive T3 Channel entry above.
What is R-squared Adaptive?
See the R-squared explanation under the R-squared Adaptive T3 w/ DSL entry above; here too, R-squared is used to derive a T3 factor applied to price before it passes through the six-pole non-linear Kalman filter.
Included:
Bar coloring
Signals
Alerts
Flat coloring
R-squared Adaptive T3 [Loxx]
R-squared Adaptive T3 is an R-squared adaptive version of Tilson's T3 moving average. This adaptivity was originally proposed by mladen on various forex forums. This is considered experimental, but it shows how to apply R-squared adaptive methods to moving averages. In theory, the T3 is a six-pole non-linear Kalman filter.
What is the T3 moving average?
See Tim Tillson's "Better Moving Averages" explanation under the STD-Adaptive T3 Channel entry above.
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
Drawdown Range
Hello death eaters, presenting a unique script which can be used for fundamental analysis or mean reversion based trades.
The process of deriving this table is as follows:
Find out the ATH for the given day
Calculate the drawdown from ATH for the day, and the drawdown percentage
Based on the drawdown percentage, increment the count of the corresponding basket, where the baskets are defined by the input Number of Ranges. For example, if the number of ranges is 5, there will be 5 baskets: the first basket covers drawdown percentages of 0-20%, and each subsequent basket covers the next 20% range.
Repeat the process from the first bar to the last. Once done, the table plots what percentage of days falls into each basket (see the sketch below).
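A minimal Python sketch of the process above, assuming one closing price per day and equal-width baskets; the actual script works on chart bars and a configurable Number of Ranges input.

# Sketch of the drawdown-range table: running ATH, drawdown %, and basket counts.
# Assumes one closing price per day; basket width = 100% / number_of_ranges.
def drawdown_table(closes, number_of_ranges=100):
    counts = [0] * number_of_ranges
    ath = float("-inf")
    width = 100.0 / number_of_ranges
    for close in closes:
        ath = max(ath, close)                      # ATH as of this day
        dd_pct = (ath - close) / ath * 100         # drawdown from ATH in percent
        basket = min(int(dd_pct // width), number_of_ranges - 1)
        counts[basket] += 1
    total = len(closes)
    # Percentage of days spent in each drawdown basket
    return [round(c / total * 100, 2) for c in counts]

closes = [100, 102, 101, 98, 97, 99, 103, 95, 96, 104]
print(drawdown_table(closes, number_of_ranges=5))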
For example, from the below chart of NASDAQ:AAPL
We can deduce the following:
Historically, the stock has traded within a 1% drawdown from ATH for 6.59% of the time. This is the longest the stock has stayed in any specific drawdown range from ATH.
The stock has traded in the 82-83% drawdown range from ATH for 0.17% of the time. This is the least amount of time the stock has stayed in any specific drawdown range from ATH.
At present, the stock is trading 2-3% below ATH, and this has happened for about 2.46% of total days in trade.
The maximum drawdown the stock has suffered is 83%.
Let's take another example, NASDAQ:TSLA.
The stock is trading 21-22% below ATH. But, historically, the drawdown range where the stock has traded most is within 0-1%. Now, if we make this range show 20 divisions instead of 100, it will look something like this:
The table suggests that the stock is trading about 20-25% below ATH - which is right. But the table also suggests that the stock has spent the most days within this drawdown range when we divide it into 20 baskets instead of 100. I would probably wait for price to break out of this range before going long or short. At present, it seems to be in a ranging stage. I might think about selling PUTs or covered CALLs outside this range.
Similarly, if you look at AMEX:SPY, 36% of the time price has stayed within 5% of ATH - which makes a compelling bull case!!
NYSE:BABA is trading 50-55% below ATH - which is the most it has retraced so far. In general, it used to stay within 15-20% of ATH.
Now, a bit of explanation on the input options.
Number of Ranges : Says how many baskets the drawdown map needs to be divided into.
Reference : You can take ATH as reference or choose a time window between which the highest price is to be considered for drawdown. This can be useful for megacaps which have gone beyond the initial phase of uncertainty. There is no point looking at the 80% drawdown AAPL had during the 1990s. It is more appropriate to look at it post 2000s, when it started making higher impact and growth.
Cumulative Percentage : When this is unchecked, the percentage division shows 0-nth percentage instead of percentage ranges. For example, this is how it looks on SPY:
We can see that SPY has remained within 6% from ATH for more than 50% of the time.
Hope this is helpful. Happy trading :)
PS: this can be used in conjunction with Drawdown-Price-vs-Fundamentals to pick value stocks at discounted price while also keeping an eye on range tendencies of it.
Thanks to @mattX5 for the ideas and discussion today :)
{Gunzo} Heiken Ashi Ribbons
Heiken Ashi Ribbons is a trend-following indicator which gives entry and exit points for short-term, medium-term and long-term trading (using Exponential Moving Averages and Heiken Ashi formulas).
OVERVIEW :
The Heiken Ashi Ribbons indicator is composed of 3 moving average ribbons (slow, normal and fast) that are computed using the Heiken Ashi formulas. The 3 ribbons give a clear vision of the current trend as they use moving averages that smooth out the price and filter noise from short term fluctuations. In a simplified way, you can consider each ribbon as a moving average with a larger body size.
If the price is above the slow ribbon, we consider the asset as trending up in the long term (trending down otherwise). If the price is above the fast ribbon, we consider the asset as trending up in the short term (trending down otherwise).
CALCULATION :
First of all, to compute a ribbon for this indicator, we calculate a moving average (EMA by default) for the common sources (OHLC):
EMA(open), EMA(high), EMA(low), EMA(close)
We then apply the Heiken Ashi formulas to the moving averages calculated previously.
HA(open) = ( HA(open) previous + HA(close) previous ) / 2
HA(close) = ( EMA(open) + EMA(high) + EMA(low) + EMA(close) ) / 4
HA(high) = max( EMA(open), EMA(close), EMA(high) )
HA(low) = min( EMA(open), EMA(close), EMA(low) )
The ribbon displayed (by default) on the chart is the area between HA(open) and HA(close).
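Putting the formulas together, a minimal Python sketch of one ribbon might look like the following; the EMA helper and the seeding of the first HA(open) value are assumptions for illustration.

# Sketch of one Heiken Ashi ribbon: EMAs of O/H/L/C, then the HA formulas above.
def ema(series, n):
    alpha = 2.0 / (n + 1)
    out = [series[0]]
    for v in series[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def heiken_ashi_ribbon(opens, highs, lows, closes, n):
    eo, eh, el, ec = (ema(s, n) for s in (opens, highs, lows, closes))
    ha_open, ha_close = [], []
    for i in range(len(closes)):
        hc = (eo[i] + eh[i] + el[i] + ec[i]) / 4
        # First HA open is seeded from the first EMA values (an assumption)
        ho = (eo[i] + ec[i]) / 2 if i == 0 else (ha_open[-1] + ha_close[-1]) / 2
        ha_open.append(ho)
        ha_close.append(hc)
    # The ribbon is the area between ha_open and ha_close on each bar
    return ha_open, ha_close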
SETTINGS :
1st Moving average length : Length of the slow moving average
2nd Moving average length : Length of the normal moving average
3rd Moving average length : Length of the fast moving average
Moving average method : Moving average calculation method (EMA : Exponential Moving Average, SMA : Simple Moving Average, WMA : Weighted Moving Average)
Ribbon type : standard ribbon uses the area between HA (open) and HA (close). Large ribbon uses the area between HA (low) and HA (high)
Display ribbon as candles : change the type of visualization between area and candles
Display short term buy/sell signals : Display short term buy/sell signals (crosses) when the fast moving average and normal moving average are crossing
Display long term buy/sell signals : Display long buy/sell signals (circles) when the fast moving average and slow moving average are crossing
Display ribbon trending up signals : Display ribbon direction change (triangle up) when the trend of the ribbon changes to trending up
Display ribbon trending down signals : Display ribbon direction change (triangle down) when the trend of the ribbon changes to trending down
VISUALIZATIONS :
This indicator has 2 possible visualizations :
Ribbons : the ribbons can be considered as enhanced moving averages for trading purposes. They represent the area between the Heiken Ashi of the moving average of the open and closing price. The color of the moving average line is green when the ribbon is trending up and red when the ribbon is trending down.
Signals : Various signals can be displayed at the bottom of the chart (Buy/Sell signals, Ribbon direction changes signals).
USAGE :
This indicator can be used in many strategies, just like when you are using multiple moving averages. You should test these strategies and use the one that best fits your trading style.
Strategy based on crossovers :
When the fast ribbon crosses above the normal ribbon, it is a short term buy signal (it is recommended to wait for a confirmation)
When the fast ribbon crosses under the normal ribbon, it is a short term sell signal (it is recommended to wait for a confirmation)
When the fast ribbon crosses above the slow ribbon, it is a long term buy signal
When the fast ribbon crosses under the slow ribbon, it is a long term sell signal
Strategy based on price position :
When the prices closes above the ribbon, it is a buy signal (long term if above slow ribbon, short term if above fast ribbon)
When the prices closes below the ribbon, it is a sell signal (long term if below slow ribbon, short term if below fast ribbon)
Strategy based on price bouncing :
When the price decreases and reaches the green long term ribbon, the price candles may not be able to cross the ribbon. If the price increases, we consider that move as a bounce on the ribbon, which is a buy signal
When the price increases and reaches a red long term ribbon, the price candles may not be able to cross the ribbon. If the price decreases, we consider that move as a bounce on the ribbon, which is a sell signal
Strategy based on ribbon direction :
When the direction of the ribbon changes, the trend of the asset is changing which may lead to a crossover to the next candles if the trend is continuing in that direction (it is recommended to validate the entry points with a second indicator as this strategy may have some false signals).
MultiLayer Acceleration/Deceleration Strategy [Skyrexio]
Overview
MultiLayer Acceleration/Deceleration Strategy leverages the combination of the Acceleration/Deceleration Indicator (AC), Williams Alligator, Williams Fractals and an Exponential Moving Average (EMA) to obtain high-probability long setups. Moreover, the strategy uses a multi-trade system, adding funds to the long position if it considers that the current trend has likely become stronger. The Acceleration/Deceleration Indicator is used for creating signals, while the Alligator and Fractals are used in conjunction as an approximation of the short-term trend to filter them. At the same time, the EMA (default period = 100) is used as a high-probability long-term trend filter to open long trades only if it considers the current price action an uptrend. More information in the "Methodology" and "Justification of Methodology" paragraphs. The strategy opens only long trades.
Unique Features
No fixed stop-loss and take profit: Instead of a fixed stop-loss level, the strategy uses a technical condition obtained from Fractals and the Alligator to identify when the current uptrend is likely to be over (more information in the "Methodology" and "Justification of Methodology" paragraphs)
Configurable Trading Periods: Users can tailor the strategy to specific market windows, adapting to different market conditions.
Multilayer trade opening system: the strategy uses only 10% of capital in every trade and opens up to 5 trades at the same time if the script considers the current trend a strong one.
Short- and long-term trend trade filters: the strategy uses the EMA as a high-probability long-term trend filter and the Alligator and Fractal combination as a short-term one.
Methodology
The strategy opens a long trade when the following conditions are met:
1. Price closed above the EMA (by default, period = 100). A crossover is not obligatory.
2. The combination of the Alligator and Williams Fractals shall consider the current trend to be upward (all details in the "Justification of Methodology" paragraph)
3. Acceleration/Deceleration shall create one of two types of long signals (all details in the "Justification of Methodology" paragraph). A buy stop order is placed one tick above the candle's high of the last created long signal.
4. If price reaches the order price, a long position is opened with 10% of capital.
5. If we currently have an open position and price creates and hits the order price of another long signal, another long position is added to the previous one with another 10% of capital. The strategy allows up to 5 long trades to be open simultaneously.
6. If the combination of the Alligator and Williams Fractals considers the current trend to have changed from up to down, all long trades are closed, no matter how many trades have been opened.
The script also has additional visuals. If a second long trade has been opened simultaneously, the Alligator's teeth line is plotted in green. Also, for every trade in a row from 2 to 5, the label "Buy More" is plotted just below the teeth line. With every next simultaneously opened trade, the green color of the space between the teeth and price becomes less transparent.
Strategy settings
In the inputs window the user can set up the strategy settings: EMA Length (by default = 100; the period of the EMA used for the long-term trend filter). The user can choose the optimal parameters during backtesting on a certain price chart.
Justification of Methodology
Let's explore the key concepts of this strategy and understand how they work together. We'll begin with the simplest: the EMA.
The Exponential Moving Average (EMA) is a type of moving average that assigns greater weight to recent price data, making it more responsive to current market changes compared to the Simple Moving Average (SMA). This tool is widely used in technical analysis to identify trends and generate buy or sell signals. The EMA is calculated as follows:
1. Calculate the Smoothing Multiplier:
Multiplier = 2 / (n + 1), where n is the number of periods.
2. EMA Calculation
EMA = (Current Price) × Multiplier + (Previous EMA) × (1 − Multiplier)
In this strategy, the EMA acts as a long-term trend filter. For instance, long trades are considered only when the price closes above the EMA (default: 100-period). This increases the likelihood of entering trades aligned with the prevailing trend.
Next, let's discuss the short-term trend filter, which combines the Williams Alligator and Williams Fractals.
Williams Alligator
Developed by Bill Williams, the Alligator is a technical indicator that identifies trends and potential market reversals. It consists of three smoothed moving averages:
Jaw (Blue Line): The slowest of the three, based on a 13-period smoothed moving average shifted 8 bars ahead.
Teeth (Red Line): The medium-speed line, derived from an 8-period smoothed moving average shifted 5 bars forward.
Lips (Green Line): The fastest line, calculated using a 5-period smoothed moving average shifted 3 bars forward.
When the lines diverge and align in order, the "Alligator" is "awake," signaling a strong trend. When the lines overlap or intertwine, the "Alligator" is "asleep," indicating a range-bound or sideways market. This indicator helps traders determine when to enter or avoid trades.
Fractals, another tool by Bill Williams, help identify potential reversal points on a price chart. A fractal forms over at least five consecutive bars, with the middle bar showing either:
Up Fractal: Occurs when the middle bar has a higher high than the two preceding and two following bars, suggesting a potential downward reversal.
Down Fractal: Happens when the middle bar shows a lower low than the surrounding two bars, hinting at a possible upward reversal.
Traders often use fractals alongside other indicators to confirm trends or reversals, enhancing decision-making accuracy.
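A minimal Python sketch of the basic five-bar fractal test described above (ignoring the "at least five bars" extension for ties) could look like this:

# Sketch of Bill Williams fractal detection over a 5-bar window, per the text above.
def fractals(highs, lows):
    """Return (up_fractal, down_fractal) boolean lists; the middle bar of each
    confirmed 5-bar pattern is flagged (so the last two bars are never flagged)."""
    n = len(highs)
    up = [False] * n
    down = [False] * n
    for i in range(2, n - 2):
        if highs[i] > max(highs[i - 2], highs[i - 1], highs[i + 1], highs[i + 2]):
            up[i] = True                   # local high: potential reversal down
        if lows[i] < min(lows[i - 2], lows[i - 1], lows[i + 1], lows[i + 2]):
            down[i] = True                 # local low: potential reversal up
    return up, down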
How do these tools work together in this strategy? Let’s consider an example of an uptrend.
When the price breaks above an up fractal, it signals a potential bullish trend. This occurs because the up fractal represents a shift in market behavior, where a temporary high was formed due to selling pressure. If the price revisits this level and breaks through, it suggests the market sentiment has turned bullish.
The breakout must occur above the Alligator’s teeth line to confirm the trend. A breakout below the teeth is considered invalid, and the downtrend might still persist. Conversely, in a downtrend, the same logic applies with down fractals.
In this strategy, if the most recent up-fractal breakout occurred above the Alligator's teeth and happened after the last down-fractal breakdown below the teeth, the algorithm identifies an uptrend. Long trades can be opened during this phase if a signal appears. If price breaks a down fractal below the teeth line during an uptrend, the strategy assumes the uptrend has ended and closes all open long trades.
By combining the EMA as a long-term trend filter with the Alligator and fractals as short-term filters, this approach increases the likelihood of opening profitable trades while staying aligned with market dynamics.
Now let's talk about Acceleration/Deceleration signals. The AC indicator is derived from the Awesome Oscillator, so let's first briefly explain what the Awesome Oscillator is and how it is calculated. The Awesome Oscillator (AO), developed by Bill Williams, is a momentum indicator designed to measure market momentum by contrasting recent price movements with a longer-term historical perspective. It helps traders detect potential trend reversals and assess the strength of ongoing trends.
The formula for AO is as follows:
AO = SMA5(Median Price) − SMA34(Median Price)
where:
Median Price = (High + Low) / 2
SMA5 = 5-period Simple Moving Average of the Median Price
SMA34 = 34-period Simple Moving Average of the Median Price
The Acceleration/Deceleration (AC) Indicator, introduced by Bill Williams, measures the rate of change in market momentum. It highlights shifts in the driving force of price movements and helps traders spot early signs of trend changes. The AC Indicator is particularly useful for identifying whether the current momentum is accelerating or decelerating, which can indicate potential reversals or continuations. The AC is calculated from the AO defined above using the following formula:
AC = AO − SMA5(AO), where SMA5(AO) is the 5-period Simple Moving Average of the Awesome Oscillator
When the AC is above the zero line and rising, it suggests accelerating upward momentum.
When the AC is below the zero line and falling, it indicates accelerating downward momentum.
When the AC is below the zero line and rising, it suggests the downtrend momentum is decelerating. When the AC is above the zero line and falling, it suggests the uptrend momentum is decelerating.
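A minimal Pine sketch of these two calculations (illustrative only, not the strategy's code) could be:
//@version=5
indicator("AO/AC sketch")
medianPrice = (high + low) / 2            // same as the built-in hl2
ao = ta.sma(medianPrice, 5) - ta.sma(medianPrice, 34)
ac = ao - ta.sma(ao, 5)
plot(ac, "AC", style=plot.style_histogram, color=ac > ac[1] ? color.green : color.red)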
Now we can explain which AC signal types are used in this strategy. The first type of long signal occurs when the AC value is below the zero line. In this case we need to see three rising histogram bars in a row after a falling one. The second type of signal occurs above the zero line. There we need only two rising AC bars in a row after a falling one to create the signal. The signal bar is the last green bar in this sequence. The strategy places a buy stop order one tick above the high of the price candle that corresponds to the signal bar on the AC indicator.
After that we can have the following scenarios:
Price hits the order on the next candle; in this case the strategy opens a long at this price.
Price doesn't reach the order price and the next candle sets a lower high. If the current AC bar is still rising, the script moves the buy stop order to the high of this new bar plus one tick. This procedure repeats until price finally hits the buy order or the current AC bar starts decreasing. In the latter case the buy order is cancelled and the strategy waits for the next AC signal.
If long trades are initiated, the strategy continues utilizing subsequent signals until the total number of trades reaches a maximum of 5. All open trades are closed when the trend shifts to a downtrend, as determined by the combination of the Alligator and Fractals described earlier.
Why do we use AC signals? Suppose the strategy algorithm already considers a short-term uptrend highly probable based on the Alligator and Fractals combination described above, and the long-term trend is also flagged as bullish by the EMA filter. Rising AC bars after a period of falling AC bars then indicate a high probability that the local pullback has ended, so there is a good chance to open a long trade in the direction of the most likely main uptrend. The required number of rising bars differs depending on whether AC is below or above the zero line. This is needed because when AC is below the zero line the local downtrend is likely to be stronger and requires more rising bars to confirm that it has reversed than when AC is above zero.
Why does the strategy use only 10% per signal? False signals sometimes appear in sideways markets. To avoid risking too much, the script uses only 10% of capital per signal. If the first long trade has been opened and price continues rising while the Alligator and Fractals approximation still indicates an uptrend, the strategy adds another 10% of capital on every subsequent AC signal, as long as the number of active trades does not exceed 5. This capital allocation allows participation in long trades when the current uptrend is likely to be strong, while committing only 10% of capital when a sideways market is more probable.
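In Pine Script terms, this allocation scheme roughly corresponds to a strategy declaration like the one below (a hedged sketch, not the published code):
//@version=5
// up to 5 simultaneous entries, each sized at 10% of equity
strategy("Layered entries sketch", overlay=true, pyramiding=5, default_qty_type=strategy.percent_of_equity, default_qty_value=10)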
Backtest Results
Operating window: The backtest date range is 2023.01.01 - 2024.11.01. It is chosen so that the strategy can close all opened positions.
Commission and Slippage: Includes a standard Binance commission of 0.1% and accounts for possible slippage over 5 ticks.
Initial capital: 10000 USDT
Percent of capital used in every trade: 10%
Maximum Single Position Loss: -5.15%
Maximum Single Profit: +24.57%
Net Profit: +2108.85 USDT (+21.09%)
Total Trades: 111 (36.94% win rate)
Profit Factor: 2.391
Maximum Accumulated Loss: 367.61 USDT (-2.97%)
Average Profit per Trade: 19.00 USDT (+1.78%)
Average Trade Duration: 75 hours
How to Use
Add the script to favorites for easy access.
Apply to the desired timeframe and chart (optimal performance observed on 3h BTC/USDT).
Configure settings using the dropdown choice list in the built-in menu.
Set up alerts to automate strategy positions through web hook with the text: {{strategy.order.alert_message}}
Disclaimer:
Educational and informational tool reflecting Skyrex's commitment to informed trading. Past performance does not guarantee future results. Test strategies in a simulated environment before live implementation.
These results are obtained with realistic parameters representing trading conditions observed at major exchanges such as Binance and with realistic trading portfolio usage parameters.
MultiLayer Awesome Oscillator Saucer Strategy [Skyrexio]
Overview
The MultiLayer Awesome Oscillator Saucer Strategy leverages the combination of the Awesome Oscillator (AO), Williams Alligator, Williams Fractals and Exponential Moving Average (EMA) to obtain high-probability long setups. Moreover, the strategy uses a multi-trade system, adding funds to the long position when it considers the current trend to have likely become stronger. The Awesome Oscillator is used for creating signals, while the Alligator and Fractals are used in conjunction as an approximation of the short-term trend to filter them. At the same time the EMA (default period = 100) is used as a high-probability long-term trend filter, so long trades are opened only when the current price action is considered an uptrend. More information in the "Methodology" and "Justification of Methodology" paragraphs. The strategy opens only long trades.
Unique Features
No fixed stop-loss and take-profit: Instead of a fixed stop-loss level, the strategy uses a technical condition obtained from the Fractals and Alligator to identify when the current uptrend is likely to be over (more information in the "Methodology" and "Justification of Methodology" paragraphs)
Configurable Trading Periods: Users can tailor the strategy to specific market windows, adapting to different market conditions.
Multilayer trade opening system: the strategy uses only 10% of capital in every trade and opens up to 5 trades at the same time if the script considers the current trend a strong one.
Short- and long-term trend filters: the strategy uses the EMA as a high-probability long-term trend filter and the Alligator and Fractals combination as a short-term one.
Methodology
The strategy opens a long trade when the following conditions are met:
1. Price closed above the EMA (by default, period = 100). A crossover is not required.
2. The combination of Alligator and Williams Fractals indicates the current trend is upward (all details in the "Justification of Methodology" paragraph).
3. The Awesome Oscillator creates a "Saucer" long signal (all details in the "Justification of Methodology" paragraph). A buy stop order is placed one tick above the high of the candle corresponding to the last "Saucer" signal.
4. If price reaches the order price, a long position is opened with 10% of capital.
5. If a position is already open and price creates and triggers the order price of another "Saucer" signal, another long position with another 10% of capital is added to the previous one. The strategy allows up to 5 simultaneous long trades.
6. If the combination of Alligator and Williams Fractals indicates that the trend has changed from up to down, all long trades are closed, no matter how many trades have been opened.
The script also has additional visuals. If a second long trade is opened, the Alligator's teeth line is plotted in green. For every trade in the sequence from 2 to 5, a "Buy More" label is also plotted just below the teeth line. With every additional simultaneously opened trade, the green fill between the teeth line and price becomes less transparent.
Strategy settings
In the inputs window the user can set up the strategy settings: EMA Length (default = 100), the period of the EMA used for long-term trend filtering. Users can choose the optimal parameter while backtesting on a particular price chart.
Justification of Methodology
Let's go through all of the concepts used in this strategy to understand how they work together, starting with the easiest one, the EMA. The Exponential Moving Average (EMA) is a type of moving average that gives more weight to recent prices, making it more responsive to current price changes compared to the Simple Moving Average (SMA). It is commonly used in technical analysis to identify trends and generate buy or sell signals. It can be calculated with the following steps:
1. Calculate the Smoothing Multiplier:
Multiplier = 2 / (n + 1), Where n is the number of periods.
2. EMA Calculation
EMA = (Current Price) × Multiplier + (Previous EMA) × (1 − Multiplier)
In this strategy the EMA is used as an initial long-term trend filter. It allows long trades to be opened only if price closes above the EMA (default: 100-period). This increases the probability of taking long trades only in the direction of the trend.
Next is the short-term trend filter, which consists of the Alligator and Fractals. Let's briefly explain what these indicators mean. The Williams Alligator, developed by Bill Williams, is a technical indicator designed to spot trends and potential market reversals. It uses three smoothed moving averages, referred to as the jaw, teeth, and lips:
Jaw (Blue Line): The slowest of the three, based on a 13-period smoothed moving average shifted 8 bars ahead.
Teeth (Red Line): The medium-speed line, derived from an 8-period smoothed moving average shifted 5 bars forward.
Lips (Green Line): The fastest line, calculated using a 5-period smoothed moving average shifted 3 bars forward.
When these lines diverge and are properly aligned, the "alligator" is considered "awake," signaling a strong trend. Conversely, when the lines overlap or intertwine, the "alligator" is "asleep," indicating a range-bound or sideways market. This indicator assists traders in identifying when to act on or avoid trades.
The Williams Fractals, another tool introduced by Bill Williams, are used to pinpoint potential reversal points on a price chart. A fractal forms when there are at least five consecutive bars, with the middle bar displaying the highest high (for an up fractal) or the lowest low (for a down fractal), relative to the two bars on either side.
Key Points:
Up Fractal: Occurs when the middle bar has a higher high than the two preceding and two following bars, suggesting a potential downward reversal.
Down Fractal: Happens when the middle bar shows a lower low than the surrounding two bars, hinting at a possible upward reversal.
Traders often combine fractals with other indicators to confirm trends or reversals, improving the accuracy of trading decisions.
How do we use their combination in this strategy? Let's consider an uptrend example. A breakout above an up fractal can be interpreted as a bullish signal, indicating a high likelihood that an uptrend is beginning. Here's the reasoning: an up fractal represents a potential shift in market behavior. When the fractal forms, it reflects a pullback caused by traders selling, creating a temporary high. However, if the price manages to return to that fractal's high and break through it, it suggests the market has "changed its mind" and a bullish trend is likely emerging.
The moment of the breakout marks the potential transition to an uptrend. It’s crucial to note that this breakout must occur above the Alligator's teeth line. If it happens below, the breakout isn’t valid, and the downtrend may still persist. The same logic applies inversely for down fractals in a downtrend scenario.
So, if the last up-fractal breakout happened above the Alligator's teeth and occurred after the last down-fractal breakdown below the teeth, the algorithm considers the current trend an uptrend. During this uptrend long trades can be opened when a signal is flashed. If during the uptrend price breaks down through a down fractal below the teeth line, the strategy considers the uptrend finished with high probability and closes all current long trades. This combination is used as a short-term trend filter, increasing the probability of opening profitable long trades in addition to the EMA filter described above.
Now let's talk about the Awesome Oscillator's "Saucer" signals. First, let's briefly explain what the Awesome Oscillator is. The Awesome Oscillator (AO), created by Bill Williams, is a momentum-based indicator that evaluates market momentum by comparing recent price activity to a broader historical context. It assists traders in identifying potential trend reversals and gauging trend strength. The formula for AO is:
AO = SMA5(Median Price) − SMA34(Median Price)
where:
Median Price = (High + Low) / 2
SMA5 = 5-period Simple Moving Average of the Median Price
SMA34 = 34-period Simple Moving Average of the Median Price
Now we know what the AO is, but what is the "Saucer" signal? This concept was introduced by Bill Williams; let's briefly explain it and how it is used by this strategy. The signal is a combination of the following AO bars: we need 3 bars in a row, the first higher than the second, and the third also higher than the second. All three bars must be above the AO's zero line. The price bar corresponding to the third "saucer" bar is our signal bar. The strategy places a buy stop order one tick above the high of the price bar that corresponds to the signal bar.
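A hedged Pine sketch of this saucer condition (using the AO formula given above; not the exact published code) could be:
//@version=5
indicator("Saucer sketch")
ao = ta.sma(hl2, 5) - ta.sma(hl2, 34)
saucer = ao > 0 and ao[1] > 0 and ao[2] > 0 and ao[2] > ao[1] and ao > ao[1]
plot(ao, "AO", style=plot.style_histogram)
plotshape(saucer, style=shape.circle, location=location.bottom, color=color.green)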
After that we can have the following scenarios.
Price hits the order on the next candle; in this case the strategy opens a long at this price.
Price doesn't reach the order price and the next candle sets a lower high. If the current AO bar is increasing, the script moves the buy stop order to the high of this new bar plus one tick. This procedure repeats until price finally hits the buy order or the current AO bar starts decreasing. In the latter case the buy order is cancelled and the strategy waits for the next "Saucer" signal.
If long trades have been opened, the strategy keeps using subsequent signals until the number of trades reaches 5. All trades are closed when the trend changes to a downtrend according to the combination of Alligator and Fractals described above.
Why do we use "Saucer" signals? If the AO is above the zero line, there is a high probability that price is in an uptrend, given our two trend filters. When we see decreasing AO bars while the AO is above zero, this can likely be considered a pullback within the uptrend. When the AO stops decreasing and the first increasing bar is printed, there is a high probability that this local pullback is finished, so the strategy opens a long trade in the likely direction of the main trend.
Why does the strategy use only 10% per signal? False signals sometimes appear in sideways markets. To avoid risking too much, the script uses only 10% of capital per signal. If the first long trade has been opened and price continues rising while the Alligator and Fractals approximation still indicates an uptrend, the strategy adds another 10% of capital on every subsequent saucer signal, as long as the number of active trades does not exceed 5. This capital allocation allows participation in long trades when the current uptrend is likely to be strong, while committing only 10% of capital when a sideways market is more probable.
Backtest Results
Operating window: The backtest date range is 2023.01.01 - 2024.11.25. It is chosen so that the strategy can close all opened positions.
Commission and Slippage: Includes a standard Binance commission of 0.1% and accounts for possible slippage over 5 ticks.
Initial capital: 10000 USDT
Percent of capital used in every trade: 10%
Maximum Single Position Loss: -5.10%
Maximum Single Profit: +22.80%
Net Profit: +2838.58 USDT (+28.39%)
Total Trades: 107 (42.99% win rate)
Profit Factor: 3.364
Maximum Accumulated Loss: 373.43 USDT (-2.98%)
Average Profit per Trade: 26.53 USDT (+2.40%)
Average Trade Duration: 78 hours
These results are obtained with realistic parameters representing trading conditions observed at major exchanges such as Binance and with realistic trading portfolio usage parameters.
How to Use
Add the script to favorites for easy access.
Apply to the desired timeframe and chart (optimal performance observed on 3h BTC/USDT).
Configure settings using the dropdown choice list in the built-in menu.
Set up alerts to automate strategy positions through web hook with the text: {{strategy.order.alert_message}}
Disclaimer:
Educational and informational tool reflecting Skyrex's commitment to informed trading. Past performance does not guarantee future results. Test strategies in a simulated environment before live implementation.
Non-Psychological Levels
🟩 Non-Psychological Levels is a structural analysis tool that segments price action into objective ranges, identifying Broken and Unbroken levels without relying on psychological or time-based assumptions. By emphasizing mechanically derived price behavior, it provides traders with a clear framework for analyzing support and resistance in a consistent and unbiased manner across various market conditions.
This indicator introduces a new approach to understanding market structure by focusing on price movement within defined segments, free from behavioral patterns, round numbers, or specific time intervals. While the indicator is time-agnostic in design, it works within the natural time progression of the chart, ensuring that segmentation aligns with the inherent structure of price movement. Broken levels, where price has breached a structural boundary, and Unbroken levels, which remain intact, are visualized with horizontal lines. These structural zones are complemented by dynamically boxed segments that contextualize both historical and ongoing price behavior.
By offering an objective perspective, the Non-Psychological Levels indicator complements psychology-based tools, helping traders explore market dynamics from multiple angles. When structural levels align with psychological zones, they reinforce critical price areas; when they differ, they provide opportunities to analyze price behavior from an alternative lens. This indicator is designed as both an educational framework and a practical tool, encouraging a deeper understanding of structural price behavior in technical analysis.
⭕ THEORY AND CONCEPT ⭕
The Non-Psychological Levels indicator is grounded in the principle of analyzing price behavior without reliance on psychological assumptions or time-based factors. Its primary purpose is to provide a structural framework for identifying support and resistance levels by focusing solely on price movement within mechanically defined segments. By removing external influences such as sentiment, time intervals, or market sessions, the indicator offers an unbiased lens through which traders can observe price dynamics.
Non-psychology, as defined here, refers to an approach that excludes behavioral and emotional patterns—like fear, greed, or herd mentality—from price analysis. Traditional tools often depend on these patterns to identify zones such as pivots or Fibonacci retracements, but these methods can be inconsistent in volatile markets. In contrast, the Non-Psychological Levels indicator focuses entirely on what price is doing, free from assumptions about trader behavior or external time constraints.
The indicator’s time-agnostic and mechanically driven design segments price action into consistent ranges, highlighting "Broken" levels (where price breaches structural boundaries) and "Unbroken" levels (where price holds). These structural zones remain unaffected by subjective or external influences, ensuring clarity and consistency across different markets and timeframes. By doing so, the indicator reveals a pure view of price structure, independent of psychological biases.
Importantly, the Non-Psychological Levels indicator is not intended to replace psychology-based tools but to complement them. When its structural levels align with psychological zones like round numbers or session highs/lows, the significance of these areas is reinforced. Conversely, when the levels differ, the contrast provides traders with alternative insights into market dynamics. This dual perspective—blending mechanical objectivity with behavioral analysis—enhances the depth and flexibility of market evaluation.
The following principles outline the theoretical foundation of the indicator and its unique contribution to structural price analysis:
Time-Agnostic Design : The indicator avoids reliance on time-based factors like daily opens, session intervals, or specific events. Instead, it segments price action using bar indexes, ensuring that structural levels are identified independently of external time variables. While the x-axis of a chart inherently represents time, this indicator abstracts away its influence, allowing traders to focus purely on price movement without the bias of temporal context.
Mechanical and Neutral Framework : Every calculation within the indicator is predetermined by a set of mechanical rules, ensuring no subjective input or interpretation affects the results. This objectivity guarantees that levels are derived solely from observed price behavior, providing a reliable framework that traders can trust to remain consistent across different assets, timeframes, and market conditions.
Broken and Unbroken Levels : Broken levels represent zones where price has breached a structural boundary, while Unbroken levels highlight areas where price has consistently respected its range. This distinction provides a clear and systematic method for identifying key support and resistance levels, offering insights into where future price interactions are most likely to occur.
Neutral Price Behavior : By dividing price action into equal segments, the indicator removes the influence of external factors like trader sentiment or psychological expectations. Each segment independently determines significant levels based purely on price action, enabling a structural view of the market that abstracts away behavioral or emotional biases.
Complement to Psychological Tools : While the indicator itself avoids behavioral assumptions, its levels can align with psychological zones like round numbers, pivots, or Fibonacci levels. When these structural and psychological levels overlap, it reinforces the importance of key areas, while divergences offer opportunities to examine price behavior from a new perspective.
Educational Value : The indicator encourages traders to explore the contrast between structural and psychological analysis. By introducing a framework that isolates price behavior from external influences, it challenges traditional methods of technical analysis, fostering deeper insights into market structure and behavior.
🔍 UNDERSTANDING STRUCTURAL LEVELS 🔍
The Non-Psychological Levels indicator offers a straightforward yet powerful way to understand market structure by segmenting price action into mechanically defined ranges. This segmentation highlights two key elements: "Broken" levels, where price has breached structural boundaries, and "Unbroken" levels, which remain intact and respected by price action. Together, these components create a framework for identifying potential areas of support and resistance.
Broken Levels : These are structural boundaries that price has surpassed, indicating areas where previous support or resistance failed. Broken levels often signal transitions in price behavior, such as shifts in momentum or the start of trending movements. They provide insight into zones where price has already tested and moved beyond.
Unbroken Levels : These levels remain intact within a given price segment, marking areas where price has consistently respected boundaries. Unbroken levels are particularly useful for identifying potential reversal points or zones of continued support or resistance. Their persistence across price action often makes them reliable indicators of market structure.
The visual segmentation of price action into distinct ranges allows traders to observe how price transitions between structural zones. For example:
- Clusters of Unbroken levels near the current price may suggest strong support or resistance, offering areas of interest for reversals or breakouts.
- Gaps between Unbroken levels highlight areas of price inefficiency or low interaction, which may become significant if revisited.
By focusing solely on structural price behavior, the Non-Psychological Levels indicator enables traders to analyze price independently of time or psychological factors. This makes it a valuable tool for understanding price dynamics objectively, whether used on its own or alongside other indicators.
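To illustrate the general idea (and only the idea), a highly simplified Pine sketch is shown below. It assumes a fixed, user-defined segment width in bars and simply records each segment's high and low as candidate levels; the published indicator instead derives its segmentation from the visible range and a Sensitivity input, and classifies levels as Broken or Unbroken based on subsequent price action:
//@version=5
indicator("Segment levels sketch", overlay=true)
segLen = input.int(50, "Bars per segment")   // hypothetical fixed segment width
var float segHigh = na
var float segLow  = na
if bar_index % segLen == 0 and bar_index > 0
    // close the previous segment and draw its high/low as candidate levels
    line.new(bar_index - 1, segHigh, bar_index + segLen, segHigh, color=color.green)
    line.new(bar_index - 1, segLow,  bar_index + segLen, segLow,  color=color.red)
    segHigh := high
    segLow  := low
else
    segHigh := math.max(nz(segHigh, high), high)
    segLow  := math.min(nz(segLow,  low),  low)
// a level would later be re-classified as "Broken" once price moves beyond it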
🛠️ SETTINGS 🛠️
The Non-Psychological Levels indicator offers various customizable settings to help users tailor its visualization to their specific trading style and market conditions. These settings allow adjustments to sensitivity, level projection, and the source of price calculations (e.g., wicks or closing prices). Below, we outline each setting and its impact on the chart, along with examples to illustrate their functionality.
Custom Settings
Sensitivity : This setting adjusts the balance between detailed and broader structural levels by controlling the number of segments. Higher values consolidate segments to highlight major price movements, while lower values produce more segments, revealing finer price levels.
Source : Allows the user to choose between 'Wick' or 'Close' for detecting levels. Selecting 'Wick' emphasizes the absolute highs and lows of price action, while 'Close' focuses on closing prices within each segment.
Level Labels : Configures the visual representation of price levels, allowing users to toggle between price values, symbols (▲ ▼), or disabling labels altogether. This setting ensures clarity in how Broken and Unbroken levels are displayed on the chart.
Unbroken Levels : - - - Users can customize the colors and label styles for Unbroken levels, which highlight areas where price has respected structural boundaries.
Broken Levels : -|- Similar to Unbroken levels, users can specify the visual appearance of Broken levels, including color customization for Broken highs and lows. These settings help distinguish areas where price has breached a structural boundary.
Projection Options : This setting allows users to control how broken and unbroken levels are visually extended on the chart. The Future option projects lines forward to the right of the current price, showing potential future relevance of levels. The All option extends lines both forward and backward, providing a comprehensive view of how levels align with historical and potential future price action. The None option disables projections, keeping the chart focused solely on current segment levels without any extensions.
Segments : Includes options for customizing the segment visualization:
- Live Segment : Toggles the display of a highlighted box representing the current developing segment, helping users focus on ongoing price action.
- Boxes : Allows users to display filled boxes around each segment for additional visual emphasis.
- Segment Colors : Users can define separate colors for support (lower) and resistance (upper) segments, making it easier to interpret directional trends.
- Boundaries : Enables or disables vertical lines to mark segment boundaries, providing a clearer view of structural divisions.
Repaint : This setting allows users to enable or disable triangle labels within the live segment. When enabled, the triangles dynamically update to reflect real-time price behavior during the live bar but will repaint until the bar is fully confirmed. Disabling this option prevents the triangles from appearing during the live bar, reducing potential confusion as they may otherwise flash on and off during price updates. This setting ensures users can choose their preferred visualization while maintaining clarity in real-time analysis.
Color Settings : Offers extensive customization for all visual elements, including Broken and Unbroken levels, segment boundaries, and live segments. These settings ensure the indicator can adapt to individual preferences for chart readability.
🖼️ CHART EXAMPLES 🖼️
The following chart examples illustrate different configurations and features of the Non-Psychological Levels indicator. These examples highlight how the indicator’s settings influence the visualization of structural price behavior, helping traders understand its functionality in various scenarios.
Broken and Unbroken Levels : Orange prices are Broken Highs. Blue prices are Broken Lows. Green and Red are Unbroken.
Boundaries : Enable Boundaries to visualize segments.
High Sensitivity Setting : A high sensitivity setting produces fewer segments and levels, emphasizing broader price ranges and major structural zones. This configuration is better suited for higher timeframes or identifying overarching trends.
Low Sensitivity Setting : A low sensitivity setting results in a greater number of segments and levels, offering a granular view of price structure. This configuration is ideal for analyzing detailed price movements on lower timeframes.
Live Segment with Triangles Enabled : This example shows the live segment box with triangle labels enabled. These triangles update dynamically during the live bar but may repaint until the bar is confirmed, helping traders observe real-time price behavior.
Broken and Unbroken Levels : This example highlights Broken levels (where price has breached structural boundaries and are drawn through subsequent price action) and Unbroken levels (where price has respected structural boundaries). These distinctions visually identify areas of potential support and resistance.
Broken and Unbroken Levels with Projection: All : This example demonstrates the "Project All" feature, where broken and unbroken levels are extended both forward and backward on the chart. This visualization highlights historical and potential future support and resistance zones, helping traders better understand how price interacts with these structural levels over time.
Segment Boxes with Boundaries : Filled boxes around individual segments visually distinguish each price interval, offering clarity in observing structural price transitions.
📊 SUMMARY 📊
The Non-Psychological Levels indicator provides a unique framework for analyzing structural price behavior through the identification of Broken and Unbroken levels. These levels act as a mechanical representation of support and resistance, independent of psychological biases or time-based factors. By focusing purely on price movement within defined segments, the indicator offers a neutral and consistent approach to understanding market dynamics.
This method complements traditional tools by providing an unbiased perspective. When structural levels align with psychological zones—such as round numbers or session-based highs and lows—they reinforce the significance of these areas as key price zones. When they diverge, the indicator introduces an alternative view, prompting further exploration of price behavior. This dual perspective enhances the depth of analysis by combining the mechanical and behavioral aspects of price action.
The Non-Psychological Levels indicator is not designed to generate trading signals or predict future price movements but serves as a visual and educational tool. Its adaptability across all markets and timeframes allows traders to integrate it into their broader strategies. By highlighting structural price dynamics, the indicator offers a fresh perspective on market analysis while remaining compatible with other technical tools.
⚙️ COMPATIBILITY AND LIMITATIONS ⚙️
Asset Compatibility :
The Non-Psychological Levels indicator is compatible with all asset classes, including cryptocurrencies, forex, stocks, and commodities. It can be applied to any chart or timeframe, making it a flexible tool for structural price analysis. Users should adjust the Sensitivity setting to ensure the segmentation aligns with the price behavior of the specific asset being analyzed. For instance, higher sensitivity values are more suitable for assets with large price ranges, while lower values work well for assets with tighter ranges.
Visual Range Dependency :
The indicator is optimized to perform calculations only within the visible range of the chart. This is a significant advantage, as it prevents unnecessary calculations and maintains efficient performance. However, because of this dependency, levels may appear to "recalculate" when the chart is zoomed in or out quickly or shifted abruptly. While this does not affect the integrity of the levels, it may cause a temporary lag as the indicator adjusts to the new visual range.
Persistence of Levels Beyond Visibility :
Even if levels are not visible on the chart due to zoom or scroll settings, they still exist in the background and are recalculated when revisited. This ensures that the structural price analysis remains consistent, regardless of the chart view.
Box Limitations in Pine Script :
The indicator is subject to Pine Script's inherent limitation of 500 boxes. This means that no more than 500 segments or level boxes can be drawn on the chart simultaneously. For most configurations, this limitation is mitigated by focusing on the visual range, but users employing very low sensitivity settings may exceed the limit. In such cases, only the most recent 500 boxes will be displayed, potentially omitting earlier segments.
Lag with Low Sensitivity Settings :
When sensitivity is set to a low value, the indicator creates many more segments, resulting in finer granularity and a higher number of boxes. While this provides detailed structural levels, it may increase the likelihood of exceeding Pine Script’s 500-box limit or cause a temporary lag when rendering a dense set of boxes over a wide visual range. Users should adjust sensitivity to balance detail with performance, especially on assets with high volatility or broad price ranges.
Live Segment Caution :
The live segment box updates in real time to reflect price movements as the segment is still developing. Since the segment high and segment low are not yet finalized, users should interpret this feature as a dynamic visualization of current price behavior rather than a definitive structural analysis. This ensures clarity during ongoing price action while maintaining the integrity of the indicator's framework.
Cross-Market Versatility :
The indicator’s time-agnostic and mechanical design ensures that it functions identically across all markets and timeframes. However, users should consider the unique characteristics of different markets when interpreting the results, as certain assets (e.g., highly volatile cryptocurrencies) may require sensitivity adjustments for optimal segmentation.
Visual Range Dependency: Levels recalculate efficiently within the chart's visible range but may lag temporarily when zooming or scrolling quickly.
These considerations ensure that the Non-Psychological Levels indicator remains robust and versatile while highlighting some inherent limitations of Pine Script and real-time recalculations. Users can mitigate these constraints by carefully adjusting sensitivity and understanding how the visual range dependency affects performance.
⚠️ DISCLAIMER ⚠️
The Non-Psychological Levels indicator is a visual analysis tool and is not designed as a predictive or trading signal indicator. Its primary purpose is to highlight structural price levels, providing an objective framework for understanding support and resistance within mechanically segmented price action.
The indicator operates within the visible range of the chart to ensure efficiency and adaptiveness, but this recalculation should not be interpreted as a forecast of future price behavior. While the structural levels may align with significant price zones in hindsight, they are purely a reflection of observed price dynamics and should not be used as standalone trading signals.
This indicator is intended as an educational and visual aid to complement other analysis methods. Users are encouraged to integrate it into a broader trading strategy and make adjustments to the settings based on their individual needs and market conditions.
🧠 BEYOND THE CODE 🧠
The Non-Psychological Levels indicator, like other xxattaxx indicators , is designed with education and community collaboration in mind. Its open-source nature encourages exploration, experimentation, and the development of new approaches to price analysis. By focusing on structural price behavior rather than psychological or time-based factors, this indicator introduces a fresh perspective for users to study.
Beyond its visual utility, the indicator serves as an educational framework for understanding the concept of non-psychological analysis. It offers traders an opportunity to explore price dynamics in a purely mechanical way, challenging conventional methods and fostering deeper insights into structural behavior. This approach is especially valuable for those interested in exploring new concepts or seeking alternative perspectives on market analysis.
Your comments, suggestions, and discussions are invaluable in shaping the future of this project. We actively encourage your feedback and contributions, which will directly help us refine and improve the Non-Psychological Levels indicator. We look forward to seeing the creative ways in which you use and enhance this tool. MVS
Fractal Trend Detector [Skyrexio]
Introduction
Fractal Trend Detector leverages the combination of Williams Fractals and the Alligator indicator to help traders understand, with high probability, whether the current trend is bullish or bearish. It visualizes a potential uptrend by coloring bars green and a downtrend by coloring them red. The indicator also contains two additional visualizations: the strong uptrend and downtrend shown as green and red zones, and a white line showing the trend invalidation level (more information in the "Methodology and its justification" paragraph)
Features
Optional strong up/downtrend visualization: with a setting the user can show or hide the green and red zones marking strong up- and downtrends.
Optional trend invalidation level visualization: with a setting the user can show or hide the white line that marks the current trend invalidation price.
Alerts: the user can set up alerts to be notified when an uptrend/downtrend has started or when a strong uptrend/downtrend has started.
Methodology and its justification
In this script we apply the concept of trend given by Bill Williams in his book "Trading Chaos". This approach leverages the Alligator and Fractals in conjunction. Let's briefly explain these two components.
The Williams Alligator, created by Bill Williams, is a technical analysis tool used to identify trends and potential market reversals. It consists of three moving averages, called the jaw, teeth, and lips, which represent different time periods:
Jaw (Blue Line): The slowest line, showing a 13-period smoothed moving average shifted 8 bars forward.
Teeth (Red Line): The medium-speed line, an 8-period smoothed moving average shifted 5 bars forward.
Lips (Green Line): The fastest line, a 5-period smoothed moving average shifted 3 bars forward.
When the lines are spread apart and aligned, the "alligator" is "awake," indicating a strong trend. When the lines intertwine, the "alligator" is "sleeping," signaling a non-trending or range-bound market. This indicator helps traders identify when to enter or avoid trades.
Williams Fractals, introduced by Bill Williams, are a technical analysis tool used to identify potential reversal points on a price chart. A fractal is a series of at least five consecutive bars where the middle bar has the highest high (for an up fractal) or the lowest low (for a down fractal), compared to the two bars on either side.
Key Points:
Up fractal: Formed when the middle bar shows a higher high than the two preceding and two following bars, signaling a potential turning point downward.
Down fractal: Formed when the middle bar has a lower low than the two surrounding bars, indicating a potential upward reversal.
Fractals are often used with other indicators to confirm trend direction or reversal, helping traders make more informed trading decisions.
How can we use their combination? Let's explain with an uptrend example. An up-fractal breakout to the upside can be interpreted as a bullish sign: there is a high probability that an uptrend has just started. It can be explained as follows: the up fractal marks a potential change in market behavior. Many traders decided to sell, which created the pullback with the fractal at the top. But if price is able to reach the fractal's top and break it, this is a high-probability sign that the market has "changed its opinion" and a bullish trend has started. The moment of the breakout is the potential change to an uptrend. There is one more important point: this breakout must happen above the Alligator's teeth line. If not, the crossover doesn't count and the downtrend potentially remains. The inverted logic is true for down fractals and downtrends.
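A much-simplified Pine sketch of this trend-state logic (ignoring the forward shift of the Alligator lines; variable names are illustrative and this is not the published code) might look like this:
//@version=5
indicator("Fractal trend sketch", overlay=true)
teeth = ta.rma(hl2, 8)
upFr  = ta.pivothigh(high, 2, 2)   // up-fractal high, confirmed two bars later
dnFr  = ta.pivotlow(low, 2, 2)     // down-fractal low, confirmed two bars later
var float lastUp = na
var float lastDn = na
if not na(upFr)
    lastUp := upFr
if not na(dnFr)
    lastDn := dnFr
var bool uptrend = false
if not na(lastUp) and close > lastUp and close > teeth
    uptrend := true
if not na(lastDn) and close < lastDn and close < teeth
    uptrend := false
barcolor(uptrend ? color.green : color.red)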
According to this methodology we obtain high-probability up- and downtrend changes, but we can extend it further. If the current trend is established by the indicator as an uptrend and the Alligator's lines are ordered so that the lips line is above the teeth line and the teeth line is above the jaw line, the script treats this as a strong uptrend and starts printing the green zone - the zone between the lips and the jaw. It can be used as a high-probability support area of the current bull market. The inverted logic is used for a bearish trend and the red zones: if the lips line is below the teeth line and the teeth line is below the jaw line, the indicator interprets this as a strong downtrend.
The indicator also has a trend invalidation line (the white line). If the current bar is green and the market condition is interpreted by the script as an uptrend, you will see the invalidation line below the current price. This is the price level that must be crossed to flip the uptrend to a downtrend according to the algorithm. This level is recalculated on every candle. The inverted logic is valid for downtrends.
How to use indicator
Apply it to desired chart and time frame. It works on every time frame.
Setup the settings with enabling/disabling visualization of strong up/downtrend zones and trend invalidation line. "Show Strong Bullish/Bearish Trends" and "Show Trend Invalidation Price" checkboxes in the settings. By default they are turned on.
Analyze the price action. The indicator colors a candle green if the current state is more likely an uptrend and red if a downtrend is more probable. The green zone between the two lines shows that the current uptrend is likely to be strong; this zone can be used as high-probability support during the uptrend. The red zone shows a high probability of a strong downtrend and can be used as resistance. The white line shows the level where the uptrend or downtrend would be invalidated according to the indicator's algorithm: if the current bar is green, the invalidation line is below the current price; if red, above it.
Set up alerts if needed. The indicator has four custom alerts: "Uptrend has been started" when the current bar closes green and the previous one was not green, "Downtrend has been started" when the current bar closes red and the previous one was not red, "Uptrend became strong" when the script starts printing the green zone, and "Downtrend became strong" when the script starts printing the red zone.
Disclaimer:
Educational and informational tool reflecting Skyrex commitment to informed trading. Past performance does not guarantee future results. Test indicators before live implementation.
Moments Functions
This script is a TradingView Pine Script (version 5) for calculating and plotting statistical moments of a financial series. Here's a breakdown of what it does:
Script Overview
Purpose:
The script calculates and visualizes moments such as Mean, Variance, Skewness, and Kurtosis of a price series.
It also provides the option to display log returns and various statistical bands.
Inputs:
Moments Selection: Choose from Mean, Variance, Skewness, or Excess Kurtosis.
Source Settings: Define the lookback period and source data (e.g., closing price or log returns).
Plot Settings: Control visibility and styling of plots, bands, and information panels.
Colors Settings: Customize colors for different plot elements.
Functions:
f_va(): Computes sample variance.
f_sd(): Computes sample standard deviation.
f_skew(): Computes sample skewness.
f_kurt(): Computes sample kurtosis.
seskew(): Calculates the standard error of skewness.
sekurt(): Calculates the standard error of kurtosis.
skewcv(): Computes critical values for skewness.
kurtcv(): Computes critical values for kurtosis.
Outputs:
Plots:
Moment values (Mean, Variance, Skewness, Kurtosis).
Log Returns (if selected).
Standard Deviation Bands (if selected).
Critical Values for Skewness and Kurtosis (if selected).
Information Panel: Displays current statistical values and their significance.
Customization:
Users can customize appearance and behavior of the script through various input options, including colors, line thickness, and background settings.
Key Variables and Constants
Constants:
zscoreS and zscoreL: Z-scores for confidence intervals based on sample size.
skewrv and kurtrv: Reference values for skewness and excess kurtosis.
Sample Functions:
f_va() and f_sd(): Custom functions to calculate sample variance and standard deviation.
f_skew() and f_kurt(): Custom functions to calculate skewness and kurtosis.
Critical Values:
Functions skewcv() and kurtcv() calculate critical values used to assess statistical significance of skewness and kurtosis.
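As a rough illustration of the sample-moment helpers listed above, a hedged Pine sketch (using standard bias-adjusted sample estimators; not code taken from the published script) might look like this:
//@version=5
indicator("Sample moments sketch")
len = input.int(20, "Lookback")
src = math.log(close / close[1])   // log returns as one possible source
f_va(_src, _len) =>
    m = ta.sma(_src, _len)
    s = 0.0
    for i = 0 to _len - 1
        s += math.pow(_src[i] - m, 2)
    s / (_len - 1)
f_sd(_src, _len) =>
    math.sqrt(f_va(_src, _len))
f_skew(_src, _len) =>
    m = ta.sma(_src, _len)
    sd = f_sd(_src, _len)
    s = 0.0
    for i = 0 to _len - 1
        s += math.pow((_src[i] - m) / sd, 3)
    s * _len / ((_len - 1) * (_len - 2))
plot(f_skew(src, len), "Sample skewness")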
Plotting
Plot Types:
Mean, variance, skewness, and excess kurtosis are plotted based on user selection.
Log returns are plotted if enabled.
Standard deviation bands and critical values are plotted if enabled.
Labels:
Information panel labels display mean, variance/standard deviation, skewness, and kurtosis values along with their significance.
Example Usage
To use this script:
Add it to a TradingView chart.
Adjust inputs to configure which statistical moments to display, the source data, and the appearance of the plots.
Review the plotted data and labels to analyze the statistical properties of the selected price series.
This script is useful for traders and analysts looking to perform advanced statistical analysis on financial data directly within TradingView.
When comparing two stock prices over a period of time, the statistical moments—mean, variance, skewness, and kurtosis—can provide a deep insight into the behavior of the stock prices and their distributions. Here’s what each moment signifies in this context:
1. Mean
Definition: The mean (or average) is the sum of the stock prices over the period divided by the number of data points. It represents the central value of the price series.
Interpretation: When comparing two stocks, the mean tells you the average price level of each stock over the period. A higher mean indicates that, on average, the stock price is higher compared to another stock with a lower mean.
Comparison Insight: If Stock A has a higher mean price than Stock B, it implies that Stock A's prices are generally higher than those of Stock B over the given period.
2. Variance
Definition: Variance measures the dispersion or spread of the stock prices around the mean. It is the average of the squared differences from the mean.
Interpretation: A higher variance indicates that the stock prices fluctuate more widely from the mean, implying greater volatility. Conversely, a lower variance indicates more stable and predictable prices.
Comparison Insight: Comparing the variances of two stocks helps in assessing which stock has more price volatility. If Stock A has a higher variance than Stock B, it means Stock A's prices are more volatile and less predictable compared to Stock B.
3. Skewness
Definition: Skewness measures the asymmetry of the distribution of stock prices around the mean. It can be positive, negative, or zero:
Positive Skewness: The distribution has a long right tail, with more frequent small returns and fewer large positive returns.
Negative Skewness: The distribution has a long left tail, with more frequent small returns and fewer large negative returns.
Zero Skewness: The distribution is symmetric around the mean.
Interpretation: Skewness tells you about the direction of outliers in the stock price distribution. Positive skewness means a higher probability of large positive returns, while negative skewness means a higher probability of large negative returns.
Comparison Insight: By comparing skewness, you can understand the nature of extreme returns for two stocks. For example, if Stock A has positive skewness and Stock B has negative skewness, Stock A might have more frequent large gains, whereas Stock B might have more frequent large losses.
4. Kurtosis
Definition: Kurtosis measures the "tailedness" of the distribution of stock prices. It indicates how much of the distribution is in the tails versus the center. High kurtosis means more outliers (extreme returns), while low kurtosis means fewer outliers.
Interpretation:
High Kurtosis: Indicates a higher likelihood of extreme price movements (both high and low) compared to a normal distribution.
Low Kurtosis: Indicates that extreme price movements are less common.
Comparison Insight: Comparing kurtosis between two stocks shows which stock has more extreme returns. If Stock A has higher kurtosis than Stock B, it means Stock A has more frequent extreme price changes, suggesting more risk or opportunities for large gains or losses.
Summary
Mean: Compares average price levels.
Variance: Compares price volatility.
Skewness: Compares the asymmetry of price movements.
Kurtosis: Compares the likelihood of extreme price changes.
By analyzing these statistical moments, you can gain a comprehensive view of how the two stocks behave relative to each other, which can inform investment decisions based on risk, return expectations, and the nature of price movements.
Simple Neural Network Transformed RSI [QuantraSystems]
Simple Neural Network Transformed RSI
Introduction
The Simple Neural Network Transformed RSI (ɴɴᴛ ʀsɪ) stands out as a formidable tool for traders who specialize in lower timeframe trading.
It is an innovative enhancement of the traditional RSI readings with simple neural network smoothing techniques.
This unique blend results in fairly accurate signals, tailored for swift market movements. The ɴɴᴛ ʀsɪ is particularly resistant to the usual market noise found in lower timeframes, ensuring a clearer view of short-term trends.
Furthermore, its diverse range of visualization options adds versatility, making it a valuable tool for traders seeking to capitalize on short-duration market dynamics.
Legend
In the Image you can see the BTCUSD 1D Chart with the ɴɴᴛ ʀsɪ in Trend Following Mode to display the current trend. This is visualized with the barcoloring.
Its Overbought and Oversold zones start at 50% and end at 100% of the selected Standard Deviation (default σ = 2), which can indicate extremely rare situations which can lead to either a softening momentum in the trend or even a mean reversion situation.
Here you can also see the original Indicator line and the Heikin Ashi transformed Indicator bars - more on that now.
Notes
Quantra Standard Value Contents:
To draw out all the information from the indicator calculation we have added a Heikin-Ashi (HA) Candle Visualization.
This HA transformation smoothens out the indicator values and gives a more informative look into Momentum and Trend of the Indicator itself.
This allows early entries and exits by observing the HA transformed Indicator values.
To diversify, different visualization options are available, either a classic line, HA transformed or Hybrid, which contains both of the previous.
To make Quantra's Indicators as useful and versatile as possible we have created options to change the barcoloring and thus the derived signal from the indicator based on different modes.
Option to choose different Modes:
Trend Following (Indicator above mid line counts as uptrend, below is downtrend)
Extremities (Everything going beyond the Deviation Bands in a Mean Reversion manner is highlighted)
Candles (Color of HA candles as barcolor)
Reversion (HA ONLY) (Reversion Signals via the triangles if HA candles change state outside of the Deviation Bands)
- Reversion Signals are indicated by the triangles in the Heikin-Ashi or Hybrid visualization when the HA Candles revert from downwards to upwards or the other way around OUTSIDE of the SD Bands.
Depending on the Indicator they signal OB/OS areas and can either work as high probability entries and exits for Mean Reversion trades or indicate Momentum slow downs and potential ranges.
Please use another indicator to confirm this.
Case Study
To effectively utilize the NNT-RSI, traders should know their style and familiarize themselves with the available options.
As stated above, you have multiple modes available that you can combine as you need and see fit.
In the given example mostly only the mode was used in an isolated fashion.
Trend Following:
Purely relied on State Change - Midline crossover
Could be combined with Momentum or Reversion analysis for better entries/exits.
Extremities:
Ideal entry/exit is in the accordingly colored OS/OB Area, the Reversion signaled the latest possible entry/exit.
HA Candles:
Specifically applicable for strong trends. Powerful and fast tool.
Can whip if used as sole condition.
Reversions:
Shows the single entry and exit bars which have a positive expected value outcome.
Can also be used as confirmation or as last signal.
Please note that we always advise to find more confluence by additional indicators.
Traders are encouraged to test and determine the most suitable settings for their specific trading strategies and timeframes.
In the showcased trades the default settings were used.
Methodology
The Simple Neural Network Transformed RSI uses a simple neural network logic to process RSI values, smoothing them for more accurate trend analysis.
This is achieved through a linear combination of RSI values over a specified input length, weighted evenly to produce a neural network output.
// Simple neural network logic (linear combination with weighted aggregation)
var float[] inputs = array.new_float(nnLength, na)
for i = 0 to nnLength - 1
    array.set(inputs, i, rsi1[i])   // the most recent nnLength RSI values
nnOutput = 0.0
for i = 0 to nnLength - 1
    nnOutput := nnOutput + array.get(inputs, i) * (1.0 / nnLength)   // even weights; float literal avoids integer truncation
nnOutput
This output is then compared against a standard or dynamic mean line to generate trend following signals.
Mean = ta.sma(nnOutput, sdLook)
cross = useMean? 50 : Mean
The indicator also incorporates Heikin Ashi candlestick calculations to provide additional insights into market dynamics, such as trend strength and potential reversals.
// Calculate Heikin Ashi representation (the [1] history offsets were lost in the published text and are restored here)
ha = ha(
 na(nnOutput[1]) ? nnOutput : nnOutput[1],
 math.max(nnOutput, nnOutput[1]),
 math.min(nnOutput, nnOutput[1]),
 nnOutput)
Standard deviation bands are used to create dynamic overbought and oversold zones, further enhancing the tool's analytical capabilities.
// Calculate Dynamic OB/OS Zones
stdv_bands(_src, _length, _mult) =>
    float basis = ta.sma(_src, _length)
    float dev   = _mult * ta.stdev(_src, _length)
    [basis + dev, basis - dev]
// The tuple assignments below lost their left-hand sides in the published text; the names are reconstructed placeholders
[innerUp, innerDown] = stdv_bands(nnOutput, sdLook, sdMult / 2)
[outerUp, outerDown] = stdv_bands(nnOutput, sdLook, sdMult)
The standard deviation bands take user-defined parameters, ideally a sigma of between 2 and 3,
to help the indicator detect extremely improbable conditions and derive a contrarian signal from them for the user.
The parameter settings and also the visualizations allow for ample customizations by the trader.
For questions or recommendations, please feel free to seek contact in the comments.
Goertzel Cycle Composite Wave [Loxx]As the financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Cycle Composite Wave indicator, a sophisticated technical analysis indicator that helps identify cyclical patterns in financial data. This powerful tool is capable of detecting cyclical patterns in financial data, helping traders to make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
*** To decrease the load time of this indicator, only XX bars back will render to the chart. You can control this value with the setting "Number of Bars to Render". This doesn't have anything to do with repainting or the indicator being endpointed. ***
█ Brief Overview of the Goertzel Cycle Composite Wave
The Goertzel Cycle Composite Wave is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The Goertzel Cycle Composite Wave is considered a non-repainting and endpointed indicator. This means that once a value has been calculated for a specific bar, that value will not change in subsequent bars, and the indicator is designed to have a clear start and end point. This is an important characteristic for indicators used in technical analysis, as it allows traders to make informed decisions based on historical data without the risk of hindsight bias or future changes in the indicator's values. This means traders can use this indicator for trading purposes.
The repainting version of this indicator with forecasting, cycle selection/elimination options, and data output table can be found here:
Goertzel Browser
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the cycles. The color of the lines indicates whether the wave is increasing or decreasing.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements, the indicator aims to assist traders in making more informed decisions.
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
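For readers who want to see the core recursion in code, here is a minimal single-bin sketch in Pine. It is illustrative only and uses assumed names and a rough detrend; the indicator's actual implementation, described below, is considerably more elaborate.
// Minimal single-bin Goertzel sketch: squared amplitude (power) of one candidate cycle period
goertzelPower(float[] xs, int period) =>
    int   n     = array.size(xs)
    float coeff = 2.0 * math.cos(2.0 * math.pi / period)
    float s1    = 0.0
    float s2    = 0.0
    for i = 0 to n - 1
        float s0 = array.get(xs, i) + coeff * s1 - s2
        s2 := s1
        s1 := s0
    s1 * s1 + s2 * s2 - coeff * s1 * s2
// Example usage: power of a hypothetical 20-bar cycle over the last 100 roughly detrended closes
var float[] win = array.new_float()
array.push(win, close - ta.sma(close, 100))
if array.size(win) > 100
    array.shift(win)
float power20 = array.size(win) == 100 ? goertzelPower(win, 20) : na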
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast: This input defines the window size for the composite wave.
FilterBartels: This boolean input determines whether Bartel's test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in Bartel's test.
BartSmoothPer: This input sets the period for the moving average used in Bartel's test.
BartSigLimit: This input sets the significance limit for Bartel's test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartel's test results.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles.
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining their purpose, methodology, and applications in finance. We will examine how each function contributes to the overall performance and effectiveness of the indicator and how they work together to create a powerful analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
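As a hedged sketch of one common construction consistent with this description (not the indicator's exact source), the low-lag average can be written as a WMA pushed back toward price by its own estimated lag:
// Zero-lag style moving average (sketch only; names are illustrative)
zlma(float src, int len) =>
    float lwma1 = ta.wma(src, len)        // first weighted average
    float lwma2 = ta.wma(lwma1, len)      // weighted average of the average, an estimate of the lag
    lwma1 + (lwma1 - lwma2)               // add the lag estimate back to reduce delay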
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
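A compact sketch of this idea follows; the parameter names and the exact smoothing are assumptions rather than the published source.
// Detrend the log of price with a low-lag average and a linear regression fit (sketch only)
detrendLogZlag(float src, int smoothLen, int regLen) =>
    float logSrc = math.log(src)
    float lwma1  = ta.wma(logSrc, smoothLen)
    float zlag   = lwma1 + (lwma1 - ta.wma(lwma1, smoothLen))   // low-lag smoothing of log price
    zlag - ta.linreg(zlag, regLen, 0)                           // subtract the fitted trend, leaving the cyclical residual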
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. The combination of zero-lag moving averages, Bartels probability, detrending methods, and the Hodrick-Prescott filter provides a comprehensive toolkit for analyzing and interpreting financial data, and their integration into a single indicator creates a powerful and versatile analytical tool that can provide valuable insights into financial markets.
█ In-Depth Analysis of the Goertzel Cycle Composite Wave Code
The Goertzel Cycle Composite Wave code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Cycle Composite Wave function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past sizes (WindowSizePast), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4) with the same length as twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Cycle Composite Wave algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
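Loosely sketched, the choice reads as follows; amp and period stand for the cycle's amplitude and length, and the per-bar normalization by period is an assumption based on the description above.
// Amplitude vs. cycle-strength choice (sketch; names assumed)
float ampValue = squaredAmp ? amp * amp : amp
float strength = UseCycleStrength ? ampValue / period : ampValue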
Phase calculation:
The Goertzel Cycle Composite Wave code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Cycle Composite Wave code calculates the waveform of the significant cycles for the specified time window. The window is defined by the WindowSizePast parameter. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveforms for each cycle. The useAddition flag determines whether the waveforms should be added or subtracted.
Storing waveforms in a matrix:
The calculated waveforms for the cycles are stored in the goeWorkPast matrix. This matrix holds the waveforms for the specified time window. Each row in the matrix represents a time window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Cycle Composite Wave function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Cycle Composite Wave code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Cycle Composite Wave's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for specified time windows based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast:
WindowSizePast is updated to ensure it is at least twice MaxPer (the maximum period).
Initializing matrices and arrays:
The matrix goeWorkPast is initialized to store the Goertzel results for specified time windows. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for waveforms:
The goertzel array is initialized to store the endpoint Goertzel composite wave values.
Calculating composite waveform (goertzel array):
The composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
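A simplified sketch of this summation is shown below; the index handling and the use of bar_index as the time argument are assumptions made for illustration.
// Sum the selected top cycles into the composite wave (sketch only)
float composite = 0.0
for k = StartAtCycle - 1 to StartAtCycle + UseTopCycles - 2
    float period = array.get(cyclebuffer, k)
    float amp    = array.get(amplitudebuffer, k)
    float phase  = array.get(phasebuffer, k)
    float wave   = useCosine ? math.cos(2 * math.pi * bar_index / period + phase) : math.sin(2 * math.pi * bar_index / period + phase)
    composite += (useAddition ? 1 : -1) * amp * wave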
Drawing composite waveform (pvlines):
The composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
To summarize, this indicator generates a composite waveform based on the detected cycles in the financial data. It calculates the composite waveforms and visualizes them on the chart using colored lines.
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
Limited applicability:
The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Cycle Composite Wave indicator is interpreted by analyzing the plotted composite wave line, which represents the combined cyclical component extracted from the price data.
The composite wave line displays a solid line, with green indicating a bullish trend and red indicating a bearish trend.
Interpreting the Goertzel Cycle Composite Wave indicator involves identifying the trend of the composite wave lines and matching them with the corresponding bullish or bearish color.
█ Conclusion
The Goertzel Cycle Composite Wave indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths makes it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Cycle Composite Wave indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Cycle Composite Wave indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the peaks and troughs of a time series are statistically significant. The test is named after its inventor, the geophysicist Julius Bartels, who developed it in the 1930s.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by first splitting the time series into two halves and calculating the range of the peaks and troughs in each half. The test then compares these ranges using a t-test, which measures the significance of the difference between the two ranges.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was first introduced in the 1990s by economists Robert Hodrick and Edward Prescott. It is a simple, two-parameter filter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize: (Sum of Squared Deviations) + λ (Sum of Squared Second Differences)
Where:
1. The first term represents the deviation of the data from the trend.
2. The second term represents the smoothness of the trend.
3. λ is a smoothing parameter that determines the degree of smoothness of the trend.
The smoothing parameter λ is typically set to a value between 100 and 1600, depending on the frequency of the data. Higher values of λ lead to a smoother trend, while lower values lead to a more volatile trend.
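In standard notation, this objective reads:
min over τ:  Σ_{t=1..T} (y_t − τ_t)^2  +  λ · Σ_{t=2..T−1} [ (τ_{t+1} − τ_t) − (τ_t − τ_{t−1}) ]^2
where y_t is the observed series, τ_t is the trend component, and y_t − τ_t is the cyclical component.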
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
Goertzel Browser [Loxx]As the financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Browser indicator, a sophisticated technical analysis indicator that helps identify cyclical patterns in financial data. This powerful tool is capable of detecting cyclical patterns in financial data, helping traders to make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
█ Brief Overview of the Goertzel Browser
The Goertzel Browser is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
3. Project the composite wave into the future, providing a potential roadmap for upcoming price movements.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the past and dotted lines for the future projections. The color of the lines indicates whether the wave is increasing or decreasing.
5. Displaying cycle information: The indicator provides a table that displays detailed information about the detected cycles, including their rank, period, Bartel's test results, amplitude, and phase.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements and their potential future trajectory, the indicator aims to assist traders in making more informed decisions.
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast and WindowSizeFuture: These inputs define the window size for past and future projections of the composite wave.
FilterBartels: This boolean input determines whether Bartel's test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in Bartel's test.
BartSmoothPer: This input sets the period for the moving average used in Bartel's test.
BartSigLimit: This input sets the significance limit for Bartel's test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartel's test results.
UseCycleList: This boolean input determines whether a user-defined list of cycles should be used for constructing the composite wave. If set to false, the top N cycles will be used.
Cycle1, Cycle2, Cycle3, Cycle4, and Cycle5: These inputs define the user-defined list of cycles when 'UseCycleList' is set to true. If using a user-defined list, each of these inputs represents the period of a specific cycle to include in the composite wave.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles.
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining their purpose, methodology, and applications in finance. We will examine how each function contributes to the overall performance and effectiveness of the indicator and how they work together to create a powerful analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
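A rough Python sketch of the general idea follows; the section count, the sine/cosine fitting, and the final ratio are assumptions made for illustration and are not the indicator's exact Bartels formula:

import numpy as np

def bartels_ratio(data, period, n_sections):
    # Fit a sine/cosine pair of the candidate period to each section, then
    # compare the amplitude of the phase-coherent average against the average
    # per-section amplitude. A phase-stable (significant) cycle scores higher.
    t = np.arange(period)
    cos_t = np.cos(2 * np.pi * t / period)
    sin_t = np.sin(2 * np.pi * t / period)
    a_list, b_list, amps = [], [], []
    for k in range(n_sections):
        block = np.asarray(data[k * period:(k + 1) * period], dtype=float)
        if len(block) < period:
            break
        a = 2.0 * np.dot(block, cos_t) / period
        b = 2.0 * np.dot(block, sin_t) / period
        a_list.append(a)
        b_list.append(b)
        amps.append(np.hypot(a, b))
    if not amps:
        return 0.0
    expected = np.mean(amps)                               # average section amplitude
    observed = np.hypot(np.mean(a_list), np.mean(b_list))  # coherent amplitude
    return observed / expected if expected > 0 else 0.0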
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
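Combining the two steps described above, a compact Python sketch (again illustrative, using a simplified LWMA-based zero-lag smoother and an ordinary least-squares line fit) could look like this:

import numpy as np

def detrend_log_zero_lag(prices, length):
    logp = np.log(np.asarray(prices, dtype=float))
    w = np.arange(1, length + 1, dtype=float)
    def lwma(x):
        # Simple linearly weighted moving average used here as the smoother.
        out = np.full(len(x), np.nan)
        for i in range(length - 1, len(x)):
            out[i] = np.dot(x[i - length + 1:i + 1], w) / w.sum()
        return out
    smooth = lwma(lwma(logp))          # zero-lag style double smoothing
    idx = ~np.isnan(smooth)
    t = np.arange(len(smooth))
    slope, intercept = np.polyfit(t[idx], smooth[idx], 1)
    return smooth - (slope * t + intercept)   # detrended, cycle-only series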
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
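For experimentation outside of Pine Script, the standard closed-form solution of the Hodrick-Prescott filter can be written in a few lines of Python; this is a generic sketch of the textbook method, not the indicator's implementation:

import numpy as np

def hp_filter(y, lam=1600.0):
    # Solve trend = (I + lam * D'D)^-1 * y, where D is the second-difference
    # operator; the cyclical component is what remains after removing the trend.
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return trend, y - trend   # (trend component, cyclical component)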
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. The combination of zero-lag moving averages, Bartels probability, detrending methods, and the Hodrick-Prescott filter provides a comprehensive toolkit for analyzing and interpreting financial data, and integrating these advanced functions into a single indicator creates a powerful, versatile analytical tool for studying financial markets.
█ In-Depth Analysis of the Goertzel Browser Code
The Goertzel Browser code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Browser function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past and future window sizes (WindowSizePast, WindowSizeFuture), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, goeWorkFuture, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4) with the same length as twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Browser algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
Phase calculation:
The Goertzel Browser code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
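The textbook single-bin Goertzel recursion, including the amplitude and phase extraction just described, can be sketched in Python as follows; the indicator's Pine implementation adds detrending, work arrays, and other bookkeeping on top of this core idea:

import math

def goertzel_bin(samples, period):
    # One pass over the data yields the spectral content at frequency 1/period.
    n = len(samples)
    w = 2.0 * math.pi / period
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    real = s_prev - s_prev2 * math.cos(w)
    imag = s_prev2 * math.sin(w)
    amplitude = 2.0 * math.hypot(real, imag) / n
    phase = math.atan2(imag, real)   # ratio of imaginary to real parts
    return amplitude, phase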
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Browser code calculates the waveform of the significant cycles for both past and future time windows. The past and future windows are defined by the WindowSizePast and WindowSizeFuture parameters, respectively. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveforms for each cycle. The useAddition flag determines whether the waveforms should be added or subtracted.
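Conceptually, each bar of the reconstructed waveform is a sum of sinusoids built from the stored periods, amplitudes, and phases; a hedged Python sketch of that idea (parameter names are illustrative):

import math

def composite_wave(periods, amplitudes, phases, t, use_cosine=True, use_addition=True):
    # Sum (or subtract) the contribution of every selected cycle at bar offset t.
    total = 0.0
    for p, a, ph in zip(periods, amplitudes, phases):
        angle = 2.0 * math.pi * t / p + ph
        term = a * (math.cos(angle) if use_cosine else math.sin(angle))
        total += term if use_addition else -term
    return total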
Storing waveforms in matrices:
The calculated waveforms for each cycle are stored in two matrices - goeWorkPast and goeWorkFuture. These matrices hold the waveforms for the past and future time windows, respectively. Each row in the matrices represents a time window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Browser function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Browser code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Browser's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for both past and future time windows based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast and WindowSizeFuture:
The WindowSizePast and WindowSizeFuture are updated to ensure they are at least twice the MaxPer (maximum period).
Initializing matrices and arrays:
Two matrices, goeWorkPast and goeWorkFuture, are initialized to store the Goertzel results for past and future time windows. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for past and future waveforms:
Three arrays, epgoertzel, goertzel, and goertzelFuture, are initialized to store the endpoint Goertzel, non-endpoint Goertzel, and future Goertzel projections, respectively.
Calculating composite waveform for past bars (goertzel array):
The past composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
Calculating composite waveform for future bars (goertzelFuture array):
The future composite waveform is calculated in a similar way as the past composite waveform.
Drawing past composite waveform (pvlines):
The past composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
Drawing future composite waveform (fvlines):
The future composite waveform is drawn on the chart using dotted lines. The color of the lines is determined by the direction of the waveform (fuchsia for upward, yellow for downward).
Displaying cycle information in a table (table3):
A table is created to display the cycle information, including the rank, period, Bartel value, amplitude (or cycle strength), and phase of each detected cycle.
Filling the table with cycle information:
The indicator iterates through the detected cycles and retrieves the relevant information (period, amplitude, phase, and Bartel value) from the corresponding arrays. It then fills the table with this information, displaying the values up to six decimal places.
To summarize, this indicator generates a composite waveform based on the detected cycles in the financial data. It calculates the composite waveforms for both past and future time windows and visualizes them on the chart using colored lines. Additionally, it displays detailed cycle information in a table, including the rank, period, Bartel value, amplitude (or cycle strength), and phase of each detected cycle.
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles and potential future impact. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
No guarantee of future performance:
While the script can provide insights into past cycles and potential future trends, it is important to remember that past performance does not guarantee future results. Market conditions can change, and relying solely on the script's predictions without considering other factors may lead to poor trading decisions.
Limited applicability:
The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Browser indicator can be interpreted by analyzing the plotted lines and the table presented alongside them. The indicator plots two lines: past and future composite waves. The past composite wave represents the composite wave of the past price data, and the future composite wave represents the projected composite wave for the next period.
The past composite wave line displays a solid line, with green indicating a bullish trend and red indicating a bearish trend. On the other hand, the future composite wave line is a dotted line with fuchsia indicating a bullish trend and yellow indicating a bearish trend.
The table presented alongside the indicator shows the top cycles with their corresponding rank, period, Bartels, amplitude or cycle strength, and phase. The amplitude is a measure of the strength of the cycle, while the phase is the position of the cycle within the data series.
Interpreting the Goertzel Browser indicator involves identifying the trend of the past and future composite wave lines and matching them with the corresponding bullish or bearish color. Additionally, traders can identify the top cycles with the highest amplitude or cycle strength and utilize them in conjunction with other technical indicators and fundamental analysis for trading decisions.
This indicator is considered a repainting indicator because the value of the indicator is calculated based on the past price data. As new price data becomes available, the indicator's value is recalculated, potentially causing the indicator's past values to change. This can create a false impression of the indicator's performance, as it may appear to have provided a profitable trading signal in the past when, in fact, that signal did not exist at the time.
The Goertzel indicator is also non-endpointed, meaning that it is not calculated up to the current bar or candle. Instead, it uses a fixed amount of historical data to calculate its values, which can make it difficult to use for real-time trading decisions. For example, if the indicator uses 100 bars of historical data to make its calculations, it cannot provide a signal until the current bar has closed and become part of the historical data. This can result in missed trading opportunities or delayed signals.
█ Conclusion
The Goertzel Browser indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths makes it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Browser indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Browser indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the peaks and troughs of a time series are statistically significant. The test is named after its inventor, the geophysicist Julius Bartels, who developed it in the 1930s.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by first splitting the time series into two halves and calculating the range of the peaks and troughs in each half. The test then compares these ranges using a t-test, which measures the significance of the difference between the two ranges.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was first proposed by economists Robert Hodrick and Edward Prescott in a working paper in the early 1980s and formally published in 1997. It is a simple filter with a single smoothing parameter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize: (Sum of Squared Deviations) + λ (Sum of Squared Second Differences)
Where:
The first term represents the deviation of the data from the trend.
The second term represents the smoothness of the trend.
λ is a smoothing parameter that determines the degree of smoothness of the trend.
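Written out explicitly, the trend τ is chosen to minimize sum_t (y_t - τ_t)^2 + λ * sum_t ((τ_{t+1} - τ_t) - (τ_t - τ_{t-1}))^2, where y_t is the observed series and τ_t is the trend component.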
The smoothing parameter λ depends on the frequency of the data: 1600 is the standard choice for quarterly data, with smaller values (around 100 or less) used for annual data and much larger values for monthly data. Higher values of λ produce a smoother trend, while lower values allow the trend to follow the data more closely.
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
Trading Made Easy ATR Bands
As always, this is not financial advice and use at your own risk. Trading is risky and can cost you significant sums of money if you are not careful. Make sure you always have a proper entry and exit plan that includes defining your risk before you enter a trade.
Background:
This is my take on two relatively famous indicators that paint the colour of your candles in order to help identify trend direction and smooth out market noise. The Elder Impulse System was designed by Dr. Alexander Elder in his book Come Into My Trading Room and attempts to identify changes of trend and when these trends speed up and slow down (school.stockcharts.com). The system used a 13-period EMA and a MACD histogram, and compared each of these indicators to the previous period. In short, when both the histogram and the EMA were rising, the trend was accelerating to the upside, and when both were falling, accelerating to the downside. Conversely, when the indicators were not in alignment, say the MACD falling but the EMA rising, it signaled a slowing down of momentum. The downside of this indicator is that it can be rather jumpy, focusing on a short-period EMA for 50% of its calculation, leaving a trader to potentially sit on the sidelines during opportune pullbacks to enter winning positions, or exit early when there is still a lot of gas left in the tank.
A similar concept has been employed by John Carter and his organization, SimplerTrading, with the 10X bars indicator. However, here they use the famous Directional Movement Index (DMI) created by J. Welles Wilder as the basis for their bars (www.simplertrading.com). John Carter states that the use of this indicator can lead to getting in earlier on more, bigger, and faster setups. The downside of this indicator is the reliance on the ADX calculations to keep you out of rangebound trades. Anyone who is familiar with the DMI system understands it has unparalleled ability to identify longer term trends, but it is also quite slow, leaving the trader to miss a good portion of the initial runup due to this ADX portion that is very slow to get moving and also slow to signal exits.
In short, both of these systems are designed with one thing in mind: keeping the trader on the right side of the move --- but both suffer from the same issue but on opposite sides of the spectrum. One is too fast and the other is too slow. Ultimately, leaving profits on the table for the trader when such a situation could be avoided.
Here I present my own take on these and have made the "Trading Made Easy ATR Bands". I name it this because trading is much easier when you trade with the prevailing trend, and this system identifies these periods quite effectively while doing a better job of handling the speed flux of most markets.

The base formula uses the DMI as its main calculation and the relationship between the DMI+ and DMI- lines, respectively, like the 10X bars. While the trader can investigate these on their own to understand them more intimately, essentially the DMI+ and DMI- lines calculate the highs and lows respectively of each bar compared to a period in the past, smoothed with the true range, a measurement of volatility. What this ultimately presents is a picture of uptrends and downtrends, where price is making consistently more highs or more lows over a period of time. Where I have modified this relative to the 10X bars is that I have ignored the ADX calculations. Further, values over 25 have been discussed as "strong" momentum; in my calculations, I have sped this up to 20 to get a trader into the move earlier. Second, I have added an additional calculation based around the 21-period exponential moving average calculated against its previous output. This then, like the Elder Impulse System, has two forms of market momentum as its calculation to smooth out noise, but has the benefit of being less jumpy, like the original 10X bar system.

I have added a series of exponential moving averages following the Fibonacci sequence from 8-144 as a system of dynamic support and resistance showing the sentiment of both the shorter- and longer-term market participants. Last, I have added a series of Keltner Channels, from 1X-4X, that encompass the 21-period EMA as a baseline. The 21 EMA is a staple in all of John Carter's work and I do believe he is correct that the market is mostly structured around this line, since it roughly approximates one month of trading data. It is not uncommon to see price expand and contract back to this line over and over again.
Trade Signals:
Strong Bullish Momentum – The system will generate a green bar when the DMI+ line is over the DMI- line, the DMI+ line is equal or greater than 20 and the 21 EMA has increased relative to its last close.
Weak Bullish Momentum – The system will generate a blue bar in several scenarios. First, when the DMI+ line is over the DMI- line but the DMI+ line is not over 20 and the EMA is equal or less than the previous close. It will also print a blue bar if either the DMI or the EMA are not aligned, such as the DMI+ is over the DMI- but not over 20 but the EMA has risen compared to the last bar. Last, it will also print a blue bar if the DMI- is over the DMI+ but the EMA is rising.
Strong Bearish Momentum – The system will generate a red bar when the DMI- line is over the DMI+ line, the DMI- line is equal or greater than 20, and the 21 EMA has fallen relative to its last close.
Weak Bearish Momentum – The system will generate an orange bar in several scenarios. First when the DMI- line is over the DMI+ line but the DMI- line is not over 20 and the EMA is equal or greater than the last bar. It will also print an orange bar if either the DMI or the EMA are not aligned, such as the DMI- is over the DMI+ but not over 20 but the EMA has fallen. Lastly, it will also print an orange bar if the DMI+ line is over the DMI- and the EMA has fallen relative to the last bar.
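The four bar colours above can be summarized in a short Python sketch; note that the handling of the mixed "weak" cases below is a simplified reading of the rules, not the published script's code:

def bar_color(dmi_plus, dmi_minus, ema_now, ema_prev, threshold=20):
    ema_rising = ema_now > ema_prev
    if dmi_plus > dmi_minus and dmi_plus >= threshold and ema_rising:
        return "green"    # strong bullish momentum
    if dmi_minus > dmi_plus and dmi_minus >= threshold and not ema_rising:
        return "red"      # strong bearish momentum
    # Mixed or weak readings: lean on whichever bullish evidence remains.
    if dmi_plus > dmi_minus or ema_rising:
        return "blue"     # weak bullish momentum
    return "orange"       # weak bearish momentum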
Uses:
1) Like the Elder Impulse System and 10X Bar systems, these should be used as trade filters only. It is in the trader's best interest to trade with the trends, and these bars identify those periods but may not always generate the most opportune time to enter a market. For instance, trying to short a market when the market is in a phase of Strong Bullish Momentum would not be wise, and vice versa with trying to open long positions when the market is exhibiting Strong Bearish Momentum. Use multiple forms of evidence to confirm the signals shown before entering any trade and do not take these signals on their own without confluence of ideas. A viable system could use the Elder Triple Screen System (for reference, see this decent write up --- www.dailyforex.com) with the Trading Made Easy Bands as your "Tide" or longer-term filter, and a further trading plan to establish an entry on a shorter time frame pullback.
2) Interim Trend Exhaustion – Keltner channels work as moving standard deviations from the 21 EMA. 3X multipliers will encompass 99.7% of price and 4X will encompass 99.9% of price away from the 21 EMA. During a trend it would be a good idea to lock in partial profits when price reaches these outer extrema, as it is very highly probable that a retracement back to the mean is approaching. While not part of the system, and not recommended to be used with this system, a mean-reversion trader could in theory look for reversals at these extrema and trade a mean-reversion strategy back to the 21 EMA, but that is a much riskier trade with a lower probability of success. A trend trader should look to enter trades when a signal is given within the 1ATR or 2ATR zone, as this is when price has not really started accelerating yet and is likely to see continued momentum in that direction.
888 BOT #backtest
█ 888 BOT #backtest (open source)
This is an Expert Advisor 'EA' or Automated trading script for ‘longs’ and ‘shorts’, which uses only a Take Profit or, in the worst case, a Stop Loss to close the trade.
It's a much improved version of the previous ‘Repanocha’. It doesn't use 'Trailing Stop' or 'security()' functions (although using a security function doesn't mean that the script repaints) and all signals are confirmed, therefore the script doesn't repaint in alert mode and is accurate in backtest mode.
Apart from the previous indicators, some more and other functions have been added for Stop-Loss, re-entry and leverage.
It uses 8 indicators, (many of you already know what they are, but in case there is someone new), these are the following:
1. Jurik Moving Average
It's a moving average created by Mark Jurik for professionals which eliminates the 'lag' or delay of the signal. It's better than other moving averages like EMA , DEMA , AMA or T3.
There are two ways to decrease noise using JMA. Increasing the 'LENGTH' parameter will cause JMA to move more slowly and therefore reduce noise at the expense of adding 'lag'.
The 'JMA LENGTH', 'PHASE' and 'POWER' parameters offer a way to select the optimal balance between 'lag' and over boost.
Green: Bullish , Red: Bearish .
2. Range filter
Created by Donovan Wall, its function is to filter or eliminate noise and to better determine the price trend in the short term.
First, a uniform average price range 'SAMPLING PERIOD' is calculated for the filter base and multiplied by a specific quantity 'RANGE MULTIPLIER'.
The filter is then calculated by adjusting price movements that do not exceed the specified range.
Finally, the target ranges are plotted to show the prices that will trigger the filter movement.
Green: Bullish , Red: Bearish .
3. Average Directional Index ( ADX Classic) and ( ADX Masanakamura)
It's an indicator designed by Welles Wilder to measure the strength and direction of the market trend. The price movement is strong when the ADX has a positive slope and is above a certain minimum level 'ADX THRESHOLD' and for a given period 'ADX LENGTH'.
The green color of the bars indicates that the trend is bullish and that the ADX is above the level established by the threshold.
The red color of the bars indicates that the trend is down and that the ADX is above the threshold level.
The orange color of the bars indicates that the price action is not strong and the market will most likely move sideways.
You can choose between the classic option and the one created by a certain 'Masanakamura'. The main difference between the two is that in the first it uses RMA () and in the second SMA () in its calculation.
4. Parabolic SAR
This indicator, also created by Welles Wilder, places points that help define a trend. The Parabolic SAR can follow the price above or below, the peculiarity that it offers is that when the price touches the indicator, it jumps to the other side of the price (if the Parabolic SAR was below the price it jumps up and vice versa) to a distance predetermined by the indicator. At this time the indicator continues to follow the price, reducing the distance with each candle until it is finally touched again by the price and the process starts again. This procedure explains the name of the indicator: the Parabolic SAR follows the price generating a characteristic parabolic shape, when the price touches it, stops and turns ( SAR is the acronym for 'stop and reverse'), giving rise to a new cycle. When the points are below the price, the trend is up, while the points above the price indicate a downward trend.
5. RSI with Volume
This indicator was created by LazyBear from the popular RSI .
The RSI is an oscillator-type indicator used in technical analysis and also created by Welles Wilder that shows the strength of the price by comparing individual movements up or down in successive closing prices.
LazyBear added a volume parameter that makes it more accurate to the market movement.
A good way to use RSI is by considering the 50 'RSI CENTER LINE' centerline. When the oscillator is above, the trend is bullish and when it is below, the trend is bearish .
6. Moving Average Convergence Divergence ( MACD ) and ( MAC-Z )
It was created by Gerald Appel. Subsequently, the histogram was added to anticipate the crossing of MA. Broadly speaking, we can say that the MACD is an oscillator consisting of two moving averages that rotate around the zero line. The MACD line is the difference between a short moving average 'MACD FAST MA LENGTH' and a long moving average 'MACD SLOW MA LENGTH'. It's an indicator that allows us to have a reference on the trend of the asset on which it is operating, thus generating market entry and exit signals.
We can talk about a bull market when the MACD histogram is above the zero line, along with the signal line, while we are talking about a bear market when the MACD histogram is below the zero line.
There is the option of using the MAC-Z indicator created by LazyBear, which according to its author is more effective, by using the parameter VWAP ( volume weighted average price ) 'Z-VWAP LENGTH' together with a standard deviation 'STDEV LENGTH' in its calculation.
7. Volume Condition
Volume indicates the number of participants in this war between bulls and bears, the more volume the more likely the price will move in favor of the trend. A low trading volume indicates a lower number of participants and interest in the instrument in question. Low volumes may reveal weakness behind a price movement.
With this condition, those signals whose volume is less than the volume SMA for a period 'SMA VOLUME LENGTH' multiplied by a factor 'VOLUME FACTOR' are filtered. In addition, it determines the leverage used, the more volume , the more participants, the more probability that the price will move in our favor, that is, we can use more leverage. The leverage in this script is determined by how many times the volume is above the SMA line.
The maximum leverage is 8.
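As a rough illustration of how such a volume-scaled leverage rule could be expressed (a hedged Python sketch; the script's exact filtering and rounding may differ):

def leverage_from_volume(volume, volume_sma, volume_factor=1.0, max_leverage=8):
    # Ignore bars whose volume does not clear the SMA-based filter at all.
    if volume_sma <= 0 or volume < volume_sma * volume_factor:
        return 0
    # Leverage grows with how many times volume exceeds its SMA, capped at 8.
    return min(int(volume // volume_sma), max_leverage)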
8. Bollinger Bands
This indicator was created by John Bollinger and consists of three bands that are drawn superimposed on the price evolution graph.
The central band is a moving average, normally a simple moving average calculated with 20 periods is used. ('BB LENGTH' Number of periods of the moving average)
The upper band is calculated by adding the value of the simple moving average X times the standard deviation of the moving average. ('BB MULTIPLIER' Number of times the standard deviation of the moving average)
The lower band is calculated by subtracting the simple moving average X times the standard deviation of the moving average.
The area between the upper and lower bands statistically contains almost 90% of the possible price variations, which means that any movement of the price outside the bands has special relevance.
In practical terms, Bollinger bands behave as if they were an elastic band so that, if the price touches them, it has a high probability of bouncing.
Sometimes, after the entry order is filled, the price returns to the opposite side. If price touches the Bollinger band under the same previous conditions, another order is filled in the same direction as the position to improve the average entry price ('% MINIMUM BETTER PRICE': the minimum percentage by which the re-entry price must be better than the price of the previous position for the re-entry to be executed). In this way we give the trade a better chance that the Take Profit is executed sooner. The downside is that the position is doubled in size. 'ACTIVATE DIVIDE TP': Divide the size of the TP in half. More probability of the trade closing but less profit.
█ STOP LOSS and RISK MANAGEMENT.
A good risk management is what can make your equity go up or be liquidated.
The % risk is the percentage of our capital that we are willing to lose per operation. This is recommended to be between 1-5%.
% Risk: (% Stop Loss x % Equity per trade x Leverage) / 100
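For example, with a 2% Stop Loss, 25% of equity allocated to the trade, and 2x leverage, the formula gives (2 x 25 x 2) / 100 = 1% of total capital at risk, which sits inside the recommended 1-5% range.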
First the strategy is calculated with Stop Loss, then the risk per operation is determined and from there, the amount per operation is calculated and not vice versa.
In this script you can use a normal Stop Loss or one according to the ATR. Also activate the option to trigger it earlier if the risk percentage is reached. '% RISK ALLOWED'
'STOP LOSS CONFIRMED': The Stop Loss is only activated if the closing of the previous bar is in the loss limit condition. It's useful to prevent the SL from triggering when they do a ‘pump’ to sweep Stops and then return the price to the previous state.
█ BACKTEST
The objective of the Backtest is to evaluate the effectiveness of our strategy. A good Backtest is determined by some parameters such as:
- RECOVERY FACTOR: It consists of dividing the 'net profit' by the 'drawdown’. An excellent trading system has a recovery factor of 10 or more; that is, it generates 10 times more net profit than drawdown.
- PROFIT FACTOR: The ‘Profit Factor’ is another popular measure of system performance. It's as simple as dividing what win trades earn by what loser trades lose. If the strategy is profitable then by definition the 'Profit Factor' is going to be greater than 1. Strategies that are not profitable produce profit factors less than one. A good system has a profit factor of 2 or more. The good thing about the ‘Profit Factor’ is that it tells us what we are going to earn for each dollar we lose. A profit factor of 2.5 tells us that for every dollar we lose operating we will earn 2.5.
- SHARPE: (Return system - Return without risk) / Deviation of returns.
When the variations of gains and losses are very high, the deviation is very high and that leads to a very poor ‘Sharpe’ ratio. If the operations are very close to the average (little deviation) the result is a fairly high 'Sharpe' ratio. If a strategy has a 'Sharpe' ratio greater than 1 it is a good strategy. If it has a 'Sharpe' ratio greater than 2, it is excellent. If it has a ‘Sharpe’ ratio less than 1 then we don't know if it is good or bad, we have to look at other parameters.
- MATHEMATICAL EXPECTATION: (% winning trades X average profit) + (% losing trades X average loss).
To earn money with a Trading system, it is not necessary to win all the operations, what is really important is the final result of the operation. A Trading system has to have positive mathematical expectation as is the case with this script: ME = (0.87 x 30.74$) - (0.13 x 56.16$) = (26.74 - 7.30) = 19.44$ > 0
The game of roulette, for example, has negative mathematical expectation for the player, it can have positive winning streaks, but in the long term, if you continue playing you will end up losing, and casinos know this very well.
PARAMETERS
'BACKTEST DAYS': Number of days back of historical data for the calculation of the Backtest.
'ENTRY TYPE': For '% EQUITY' if you have $ 10,000 of capital and select 7.5%, for example, your entry would be $ 750 without leverage. If you select CONTRACTS for the 'BTCUSDT' pair, for example, it would be the amount in 'Bitcoins' and if you select 'CASH' it would be the amount in $ dollars.
'QUANTITY (LEVERAGE 1X)': The amount for an entry with X1 leverage according to the previous section.
'MAXIMUM LEVERAGE': It's the maximum allowed multiplier of the quantity entered in the previous section according to the volume condition.
The settings are for Bitcoin at Binance Futures (BTC: USDTPERP) in 15 minutes.
For other pairs and other timeframes, the settings have to be adjusted again. And within a month, the settings will be different because we all know the market and the trend are changing.
Universal Global Session
This Script combines the world sessions of: Stocks, Forex, Bitcoin Kill Zones, strategic points, all configurable, in a single Script, to capitalize the opening and closing times of global exchanges as investment assets, becoming an Universal Global Session .
It is based on the great work of @oscarvs ( BITCOIN KILL ZONES v2 ) and the scripts of @ChrisMoody. Thank you Oscar and Chris for your excellent judgment and great work.
At the end of this writing you can find all the internet references of the extensive documentation that I present here. To maximize your benefits in the use of this Script, I recommend that you read the entire document to create an objective and practical criterion.
All the hours of the different exchanges are presented at GMT -6. In Market24hClock you can adjust it to your preferences.
After a deep investigation I have been able to show that the different world sessions reveal underlying investment cycles, where it is possible to find sustained changes in the nominal behavior of the trend before the passage from one session to another and in the natural overlaps between the sessions. These underlying movements generally occur 15 minutes before the start, close or overlap of the session, when the session properly starts and also 15 minutes after respectively. Therefore, this script is designed to highlight these particular trending behaviors. Try it, discover your own conclusions and let me know in the notes, thank you.
Foreign Exchange Market Hours
It is the schedule by which currency market participants can buy, sell, trade and speculate on currencies all over the world. It is open 24 hours a day during working days and closes on weekends, thanks to the fact that operations are carried out through a network of information systems, instead of physical exchanges that close at a certain time. It opens Monday morning at 8 am local time in Sydney —Australia— (which is equivalent to Sunday night at 7 pm, in New York City —United States—, according to Eastern Standard Time), and It closes at 5pm local time in New York City (which is equivalent to 6am Saturday morning in Sydney).
The Forex market is decentralized and driven by local sessions, where the hours of Forex trading are based on the opening range of each active country, becoming an efficient transfer mechanism for all participants. Four territories in particular stand out: Sydney, Tokyo, London and New York, where the highest volume of operations occurs when the sessions in London and New York overlap. Furthermore, Europe is complemented by major financial centers such as Paris, Frankfurt and Zurich. Each day of forex trading begins with the opening of Australia, then Asia, followed by Europe, and finally North America. As markets in one region close, another opens - or has already opened - and continues to trade in the currency market. The seven most traded currencies in the world are: the US dollar, the euro, the Japanese yen, the British pound, the Australian dollar, the Canadian dollar, and the New Zealand dollar.
Currencies are needed around the world for international trade, this means that operations are not dominated by a single exchange market, but rather involve a global network of brokers from around the world, such as banks, commercial companies, central banks, companies investment management, hedge funds, as well as retail forex brokers and global investors. Because this market operates in multiple time zones, it can be accessed at any time except during the weekend, therefore, there is continuously at least one open market and there are some hours of overlap between the closing of the market of one region and the opening of another. The international scope of currency trading means that there are always traders around the world making and satisfying demands for a particular currency.
The market involves a global network of exchanges and brokers from around the world, although time zones overlap, the generally accepted time zone for each region is as follows:
Sydney 5pm to 2am EST (10pm to 7am UTC)
London 3am to 12 noon EST (8am to 5pm UTC)
New York 8am to 5pm EST (1pm to 10pm UTC)
Tokyo 7pm to 4am EST (12am to 9am UTC)
Trading Session
A financial asset trading session refers to a period of time that coincides with the daytime trading hours for a given location, it is a business day in the local financial market. This may vary according to the asset class and the country, therefore operators must know the hours of trading sessions for the securities and derivatives in which they are interested in trading. If investors can understand market hours and set proper targets, they will have a much greater chance of making a profit within a workable schedule.
Kill Zones
Kill zones are highly liquid events. Many different market participants often come together and perform around these events. The activity itself can be event-driven (margin calls or option exercise-related activity), portfolio-management-driven (asset allocation rebalancing orders and closing buy-ins), or institutionally driven (larger players needing liquidity to complete size), or a combination of any of the three. This intense cross-current of activity at a very specific point in time often occurs near significant technical levels, and the established trends emerging from these events often persist until the next Kill Zone approaches or begins.
Kill Zones are evolving with time and the course of world history. Since the end of World War II, New York has slowly taken over London's place as the world center for commercial banking. So much so that during the latter part of the 20th century, New York was considered the new center of the financial universe. With the end of the Cold War, that leadership appears to have shifted towards Europe and away from the United States. Furthermore, Japan has slowly lost its former dominance in the global economic landscape, while Beijing's influence has increased dramatically. Only time will tell how these Kill Zones will evolve given the ever-changing political, economic, and socioeconomic influences of each region.
Financial Markets
New York
New York (NYSE Chicago, NASDAQ)
7:30 am - 2:00 pm
It is the second largest currency trading center in the world and is followed closely by foreign investors, since it participates in 90% of all operations; movements on the New York Stock Exchange (NYSE) can have an immediate and powerful effect on the dollar. For example, when companies merge and acquisitions are finalized, the dollar can instantly gain or lose value.
A. Complementary Stock Exchanges
Brazil (BOVESPA - Brazilian Stock Exchange)
07:00 am - 02:55 pm
Canada (TSX - Toronto Stock Exchange)
07:30 am - 02:00 pm
New York (NYSE - New York Stock Exchange)
08:30 am - 03:00 pm
B. North American Trading Session
07:00 am - 03:00 pm
(from the beginning of the business day on NYSE and NASDAQ, until the end of the New York session)
New York, Chicago and Toronto (Canada) open the North American session. Characterized by the most aggressive trading within the markets, currency pairs show high volatility. As the US markets open, trading is still active in Europe, however trading volume generally decreases with the end of the European session and the overlap between the US and Europe.
C. Strategic Points
US main session starts in 1 hour
07:30 am
The euro tends to drop before the US session. The NYSE, CHX and TSX (Canada) trading sessions begin 1 hour after this strategic point. The North American session begins trading Forex at 07:00 am.
This constitutes the beginning of the overlap of the United States and the European market that spans from 07:00 am to 10:35 am, often called the best time to trade EUR / USD, it is the period of greatest liquidity for the main European currencies since it is where they have their widest daily ranges.
When New York opens at 07:00 am the most intense trading begins in both the US and European markets. The overlap of European and American trading sessions has 80% of the total average trading range for all currency pairs during US business hours and 70% of the total average trading range for all currency pairs during European business hours. The intersection of the US and European sessions are the most volatile overlapping hours of all.
Influential news and data for the USD are released between 07:30 am and 09:00 am and play the biggest role in the North American Session. These are the strategically most important moments of this activity period: 07:00 am, 08:00 am and 08:30 am.
The main session of operations in the United States and Canada begins
08:30 am
Start of main trading sessions in New York, Chicago and Toronto. The European session still overlaps the North American session and this is the time for large-scale unpredictable trading. The United States leads the market. It is difficult to interpret the news due to speculation. Trends develop very quickly and it is difficult to identify them, however trends (especially for the euro), which have developed during the overlap, often turn the other way when Europe exits the market.
Second hour of the US session and last hour of the European session
09:30 am
End of the European session
10:35 am
The trend of the euro will change rapidly after the end of the European session.
Last hour of the United States session
02:00 pm
Institutional clients and very large funds are very active during the first and last working hours of almost all stock exchanges; knowing this makes it easier to anticipate price movements around the opening and closing of the large markets. Within the last trading hours of the secondary market session, a pullback can often be seen in EUR / USD that continues until the opening of the Tokyo session. It generally happens if there was an upward price movement before 04:00 pm - 05:00 pm.
End of the trade session in the United States
03:00 pm
D. Kill Zones
11:30 am - 1:30 pm
New York Kill Zone. The United States is still the world's largest economy, so by default, the New York open carries a lot of weight and often comes with a huge injection of liquidity. In fact, most of the world's marketable assets are priced in US dollars, making political and economic activity within this region even more important. Because it comes relatively late in the world's trading day, this Kill Zone often sees violent price swings within its first hour, leading to the proven adage "never trust the first hour of trading in North America."
---------------
London
London (LSE - London Stock Exchange)
02:00 am - 10:35 am
Britain dominates the currency markets around the world, and London is its main component. London, a central trading capital of the world, accounts for about 43% of world trading, and many Forex trends originate there.
A. Complementary Stock Exchange
Dubai (DFM - Dubai Financial Market)
12:00 am - 03:50 am
Moscow (MOEX - Moscow Exchange)
12:30 am - 10:00 am
Germany (FWB - Frankfurt Stock Exchange)
01:00 am - 10:30 am
South Africa (JSE - Johannesburg Stock Exchange)
01:00 am - 09:00 am
Saudi Arabia (TADAWUL - Saudi Stock Exchange)
01:00 am - 06:00 am
Switzerland (SIX - Swiss Stock Exchange)
02:00 am - 10:30 am
B. European Trading Session
02:00 am - 11:00 am
(from the opening of the Frankfurt session to the close of the Order Book on the London Stock Exchange / Euronext)
It is a very liquid trading session in which trends are set; those that start during the first trading hours in Europe generally continue until the beginning of the US session.
C. Middle East Trading Session
12:00 am - 06:00 am
(from the opening of the Dubai session to the end of the Riyadh session)
D. Strategic Points
European session begins
02:00 am
The London, Frankfurt and Zurich stock exchanges enter the market, and the overlap between Europe and Asia begins.
End of the Singapore and Asia sessions
03:00 am
The euro tends to rise either almost immediately or within an hour after Singapore exits the market.
Middle East Oil Markets Completion Process
05:00 am
Operations are winding down in the European-Asian market as Dubai and Qatar close, followed an hour later by Riyadh; together they constitute the Middle East oil markets. Because oil is traded in US dollars, and the region no longer needs dollars once its trading day ends, the euro tends to strengthen more often at this time.
End of the Middle East trading session
06:00 am
E. Kill Zones
5:00 am - 7:00 am
London Kill Zone. Considered the center of the financial universe for more than 500 years, Europe still has a lot of influence in the banking world. Many older players use the European session to establish their positions. As such, the London Open often sees the most significant trend-setting activity on any trading day. In fact, it has been suggested that 80% of all weekly trends are set through the London Kill Zone on Tuesday.
F. Kill Zones (close)
2:00 pm - 4:00 pm
London Kill Zone (close).
---------------
Tokyo
Tokyo (JPX - Tokyo Stock Exchange)
06:00 pm - 12:00 am
It is the first Asian market to open, receiving most of the Asian trade, just ahead of Hong Kong and Singapore.
A. Complementary Stock Exchange
Singapore (SGX - Singapore Exchange)
07:00 pm - 03:00 am
Hong Kong (HKEx - Hong Kong Stock Exchange)
07:30 pm - 02:00 am
Shanghai (SSE - Shanghai Stock Exchange)
07:30 pm - 01:00 am
India (NSE - India National Stock Exchange)
09:45 pm - 04:00 am
B. Asian Trading Session
06:00 pm - 03:00 am
From the opening of the Tokyo session to the end of the Singapore session
The first major Asian market to open is Tokyo, which has the largest market share and is the third-largest Forex trading center in the world. Singapore opens an hour later, and the Chinese markets, Shanghai and Hong Kong, open 30 minutes after that. With them, trading volume increases and large-scale activity begins in the Asia-Pacific region, offering more liquidity for the Asia-Pacific currencies and their crosses. When the European countries open their doors, more liquidity becomes available for the Asian and European crosses.
C. Strategic Points
Second hour of the Tokyo session
07:00 pm
This hour also marks the opening of the Singapore market. Trading activity grows in anticipation of the opening of the two largest Chinese markets 30 minutes later: Shanghai and Hong Kong. Within these 30 minutes, or just before the China session begins, the euro usually falls until the very moment Shanghai and Hong Kong open.
Second hour of the China session
08:30 pm
Hong Kong and Shanghai start trading, and the euro usually rises for more than an hour. The EUR / USD pair becomes choppy as Asian exporters convert part of their earnings into both US dollars and euros.
Last hour of the Tokyo session
11:00 pm
End of the Tokyo session
12:00 am
If the euro has been actively declining up to this point, China will lift the euro after Tokyo closes. Hong Kong, Shanghai and Singapore remain open and take matters into their own hands, driving the euro's growth. Asia is a huge commercial and industrial region with a large number of high-quality economic products and gigantic financial turnover, making the number of transactions on the stock exchanges enormous during the Asian session. That is why traders who entered a trade at the opening of the London session should pay attention to their terminals when Asia exits the market.
End of the Shanghai session
01:00 am
Trading ends in Shanghai. This is the last trading hour of the Hong Kong session, during which market activity peaks.
D. Kill Zones
10:00 pm - 2:00 am
Asian Kill Zone. Considered the "Institutional" Zone, this zone represents both the launch pad for new trends as well as a recharge area for the post-American session. It is the beginning of a new day (or week) for the world and as such it makes sense that this zone often sets the tone for the remainder of the global business day. It is ideal to pay attention to the opening of Tokyo, Beijing and Sydney.
--------------
Sydney
Sydney (ASX - Australia Stock Exchange)
06:00 pm - 12:00 am
A. Complementary Stock Exchange
New Zealand (NZX - New Zealand Stock Exchange)
04:00 pm - 10:45 pm
This is where the global trading day officially begins. While it is the smallest of the mega-markets, it sees a lot of initial action when markets reopen on Sunday afternoon, as individual traders and financial institutions try to regroup after the long hiatus since Friday afternoon. On weekdays it marks the end of the current trading day, when the settlement date rolls over.
B. Pacific Trading Session
04:00 pm - 12:00 am
(from the opening of the Wellington session to the end of the Sydney session)
Forex begins its business hours when Wellington (the New Zealand Exchange) opens local time on Monday. Sydney (the Australian Stock Exchange) opens two hours later. It is a fairly low-volatility session, the calmest of all. Strong movements appear when influential news is published and when the Pacific session overlaps the Asian session.
C. Strategic Points
End of the Sydney session
12:00 am
---------------
Conclusions
The best time to trade is during overlaps in trading times between open markets. Overlaps equate to higher price ranges, creating greater opportunities.
Regarding press releases (news), note that in the currency markets these have the power to enliven a normally slow trading period. When a major announcement is made regarding economic data, especially when it goes against the forecast, a currency can lose or gain value in a matter of seconds. In general, the more economic growth a country produces, the more positive its economy looks to international investors. Investment capital tends to flow to countries that are believed to have good growth prospects and subsequently good investment opportunities, which leads to the strengthening of the country's exchange rate. Also, a country with higher interest rates on its government bonds tends to attract investment capital as foreign investors seek high-yield opportunities. However, stable economic growth and attractive yields or interest rates are inextricably intertwined. It is important to take advantage of market overlaps and keep an eye out for press releases when setting up a trading schedule.
References:
www.investopedia.com
market24hclock.com
Other alts compensated capitalization [Peregringlk]
DISCLAIMER: I'm not a native English speaker, so please let me know about any mistakes in my wording.
Introduction
==========
This indicator (the middle one in the image) shows how the "other altcoins" (all altcoins except the coins with high capitalization) add their own value to their capitalization, with BTC price changes removed. By "own value" I mean USD value gained by actual buying in BTC markets, beyond the arbitrage effects of BTC price changes.
The main idea is that if bitcoin has increased its value by 20%, and the other altcoins have increased their capitalization by 30%, the chart will only plot an increase of 10%. In other words, it will show their increased capitalization measured in BTC (the combined altcoin/BTC market is uptrending). Its purpose is to try to identify altseasons. More concisely, the graph will only grow when both the USD and BTC capitalization are growing. If either of them is going down, the graph will go down as well.
Rationale
========
- Altseasons are characterized by an increase in the BTC value of almost every altcoin during some period of time, although not all at once, but distributed over the altseason. For example, in the crazy altseason of Dec17/Jan18, almost every (low-capitalization) altcoin increased its BTC value by a minimum of +300%, some at the beginning of the season, some at the end.
- When this happens, BTC loses capitalization dominance, but dominance can also fall when BTC is downtrending while altcoins are being bought in BTC markets and their USD value doesn't change much. That happens when altcoins are uptrending in BTC price but there is actually no gain in USD value, because the gain in BTC terms is not enough to compensate for BTC's fall in price. Since BTC is losing USD price but the altcoins are not, dominance falls. So looking at BTC dominance alone is not enough to spot possible beginnings of altseasons, because of arbitrage and other effects.
- The "big altcoins" are removed from the counting because a single highly capitalized altcoin that grows, let's say, 20% will have an observable effect on the total altcoin capitalization, even if the rest of the altcoins are stagnant in price. For example, at today's date (8th April 2020), Ethereum by itself holds 23.89% of the total altcoin capitalization. A +10% move in Ethereum's price will increase the total altcoin capitalization by +2.38%. I wanted to remove that effect to focus on generalized price changes across all altcoins. Remember that there are only 9 big altcoins, representing 71% of the alts capitalization, while more than 5000 altcoins exist in total.
- Another key factor is that I want to focus on what happens in alt/BTC markets, because almost every altcoin can be traded against BTC, and most of them can only be traded against BTC. Big altcoins, however, can usually be traded against USD, other altcoins or fiat currencies as well. Removing the big alts from the equation helps (just a bit) to simplify the interpretation of the chart, because the arbitrage effects of those "impactful" alts are limited (although not removed, because arbitrage also happens across markets).
- There are situations where the BTC price is going up, the alts' USD capitalization is going up as well, but the alts' BTC capitalization is going down because altcoins are being sold in BTC markets; it just happens that the speed of the selling is not high enough to compensate for the increase in BTC price. That makes the USD capitalization grow while alts are really being dumped in BTC markets. I wanted to reflect that effect as well, by making sure that the graph grows only when both the USD and BTC capitalization of alts are growing.
Interpretation
============
If you want, you can see this chart as plotting the other alts' capitalization as if it were priced against a fictional coin, FCOIN, that starts with a price of 1 and combines the ups and downs of both the BTC price and the alts' USD capitalization in a very conservative way: if the FCOIN price goes up, it means the other alts have gained USD value, but only once they have overcome BTC price changes. Otherwise, it goes down.
If this fictional FCOIN has gone up for several days straight with a total gain of, say, more than 10%, we may be in front of the start of an altseason. Sometimes, maybe (it requires a few more years of data to extract a theory from this), it can be used as a proxy for BTC's near future (trend changes or continuations): if FCOIN goes up while BTC is doing nothing relevant, or is even going down, it could signal that "people" are getting prepared and a generalized altcoin accumulation process has started, on the combined assumption that BTC will soon start a stable uptrend or continue the current trend. There are some matches for that in the past, but there are also false positives, as usual.
Additionally, four customizable EMAs are added to the script, by default 21, 50, 100 and 150.
Definitions
=========
- Let's call `altcap_btc` the altcoin capitalization in USD, divided by BTC price. In other words, `altcap_btc` is the capitalization in terms of BTC.
- Let's call `x` the BTC price change rate, defined as `btc_price_current_candle / btc_price_previous_candle`. So, if BTC has grown by +20%, `x = 1.20`, and if BTC has fallen by -20%, `x = 0.80`.
- Let's call `y` the `altcap_btc` price change rate, calculated as before but for `altcap_btc`.
- For pure math equivalence, `x * y` is thus the USD capitalization change rate.
Calculation
=========
For plotting the graph, for each candle I choose a change rate, and then I plot the total accumulated change rate as `ch0 * ch1 * ch2 * .... * ch_today`, where each `chX` is the chosen change rate of each candle since the beginning of the chart. So, if the "alts compensated value" grew +20% yesterday and -10% today, `1.20 * 0.9 = 1.08`, which means that in two days the compensated value has grown by 8% in total.
- If `x * y > 1` (USD cap is growing), I take `y` as the change rate (the alt/BTC change rate).
  - If both `x` and `y` are `> 1`, the graph grows because I'm taking `y`.
  - If `x > 1` and `y < 1`, the graph goes down because I'm taking `y`, reflecting that the BTC markets are dumping.
  - If `x < 1` and `y > 1`, the graph goes up because I'm taking `y`, reflecting that the BTC markets are pumping so much that it overcomes the BTC fall.
  - `x < 1` and `y < 1` is impossible here because `x * y` must be `> 1` by precondition.
- If `x * y < 1` (USD cap is going down), I take `y` or `x * y` depending on the individual change rates:
  - If `x` and `y` go in different directions (one up and the other down), I take `x * y` to reflect that the USD capitalization has gone down. I don't take `y` here because it could be `> 1`, and I don't want the graph to grow if alts are losing USD value. Also, if `y < 1` and I took `y`, the graph would go down faster than the USD capitalization, and I want to show that the alts' compensated value is going down more slowly than BTC because some buying is happening. I don't take `x` either, for the same reasons.
  - If both `x` and `y` are `< 1`, I take `y`, because otherwise the graph would be below 0.000001 today after two years of bleeding, making it literally impossible to see whether alts "grow tomorrow".
  - `x > 1` and `y > 1` is impossible here because `x * y` must be `< 1` by precondition.
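The selection rules above can be condensed into a short sketch. This is not the published Pine Script; it is a minimal Python illustration of the logic, assuming you already have per-candle series of the BTC price and of the other alts' USD capitalization (both series names are placeholders):

```python
def compensated_series(btc_price, alts_usd_cap):
    """Accumulate the compensated value of the other alts, starting at 1.0.

    btc_price and alts_usd_cap are equal-length per-candle lists.
    Each candle multiplies the running value by the chosen change rate,
    following the rules described above.
    """
    altcap_btc = [cap / price for cap, price in zip(alts_usd_cap, btc_price)]
    graph = [1.0]
    for i in range(1, len(btc_price)):
        x = btc_price[i] / btc_price[i - 1]        # BTC price change rate
        y = altcap_btc[i] / altcap_btc[i - 1]      # alt/BTC capitalization change rate
        if x * y > 1:                              # USD cap growing: always take y
            rate = y
        elif (x > 1) != (y > 1):                   # opposite directions: take x * y
            rate = x * y
        else:                                      # both < 1: take y to avoid collapsing towards 0
            rate = y
        graph.append(graph[-1] * rate)
    return graph
```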
Delta Volume Candles [LucF]
█ OVERVIEW
This indicator plots on-chart volume delta information using candles that can replace your normal candles, tops and bottoms appended to normal candles, optional MAs of those tops and bottoms levels, a divergence channel and a chart background. The indicator calculates volume delta using intrabar analysis, meaning that it uses the lower timeframe bars constituting each chart bar.
█ CONCEPTS
Volume Delta
The volume delta concept divides a bar's volume in "up" and "down" volumes. The delta is calculated by subtracting down volume from up volume. Many calculation techniques exist to isolate up and down volume within a bar. The simplest use the polarity of interbar price changes to assign their volume to up or down slots, e.g., On Balance Volume or the Klinger Oscillator . Others such as Chaikin Money Flow use assumptions based on a bar's OHLC values. The most precise calculation method uses tick data and assigns the volume of each tick to the up or down slot depending on whether the transaction occurs at the bid or ask price. While this technique is ideal, it requires huge amounts of data on historical bars, which considerably limits the historical depth of charts and the number of symbols for which tick data is available. Furthermore, historical tick data is not yet available on TradingView.
This indicator uses intrabar analysis to achieve a compromise between the simplest and most precise methods of calculating volume delta. It is currently the most precise method usable on TradingView charts. TradingView's Volume Profile built-in indicators use it, as do the CVD - Cumulative Volume Delta Candles and CVD - Cumulative Volume Delta (Chart) indicators published from the TradingView account . My Delta Volume Channels and Volume Delta Columns Pro indicators also use intrabar analysis. Other volume delta indicators such as my Realtime 5D Profile use realtime chart updates to calculate volume delta without intrabar analysis, but that type of indicator only works in real time; they cannot calculate on historical bars.
This is the logic I use to determine the polarity of intrabars, which determines the up or down slot where its volume is added:
• If the intrabar's open and close values are different, their relative position is used.
• If the intrabar's open and close values are the same, the difference between the intrabar's close and the previous intrabar's close is used.
• As a last resort, when there is no movement during an intrabar, and it closes at the same price as the previous intrabar, the last known polarity is used.
Once all intrabars making up a chart bar have been analyzed and the up or down property of each intrabar's volume determined, the up volumes are added, and the down volumes subtracted. The resulting value is volume delta for that chart bar, which can be used as an estimate of the buying/selling pressure on an instrument. Not all markets have volume information. Without it, this indicator is useless.
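As a rough illustration of that polarity logic, here is a minimal Python sketch (not the script's actual Pine code); each intrabar is assumed to be a simple (open, close, volume) tuple:

```python
def bar_volume_delta(intrabars):
    """Estimate one chart bar's volume delta from its intrabars.

    intrabars: list of (open, close, volume) tuples at the lower timeframe.
    Returns (up_volume, down_volume, delta).
    """
    up = down = 0.0
    prev_close = None
    polarity = 1  # last known polarity, used when an intrabar shows no movement at all
    for o, c, v in intrabars:
        if c != o:                                        # 1. open/close relative position
            polarity = 1 if c > o else -1
        elif prev_close is not None and c != prev_close:  # 2. compare with previous intrabar's close
            polarity = 1 if c > prev_close else -1
        # 3. otherwise keep the last known polarity
        if polarity > 0:
            up += v
        else:
            down += v
        prev_close = c
    return up, down, up - down
```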
Intrabar analysis
Intrabars are chart bars at a lower timeframe than the chart's. The timeframe used to access intrabars determines the number of intrabars accessible for each chart bar. On a 1H chart, each chart bar of an active market will, for example, usually contain 60 bars at the lower timeframe of 1min, provided there was market activity during each minute of the hour.
This indicator automatically calculates an appropriate lower timeframe using the chart's timeframe and the settings you use in the script's "Intrabars" section of the inputs. As it can access lower timeframes as small as seconds when available, the indicator can be used on charts at relatively small timeframes such as 1min, provided the market is active enough to produce bars at second timeframes.
The quantity of intrabars analyzed in each chart bar determines:
• The precision of calculations (more intrabars yield more precise results).
• The chart coverage of calculations (there is a 100K limit to the quantity of intrabars that can be analyzed on any chart,
so the more intrabars you analyze per chart bar, the fewer chart bars can be calculated by the indicator).
The information box displayed at the bottom right of the chart shows the lower timeframe used for intrabars, as well as the average number of intrabars detected for chart bars and statistics on chart coverage.
Balances
This indicator calculates five balances from volume delta values. The balances are oscillators with a zero centerline; positive values are bullish, and negative values are bearish. It is important to understand the balances as they can be used to:
• Color candle bodies.
• Calculate body and top and bottom divergences.
• Color an EMA channel.
• Color the chart's background.
• Configure markers and alerts.
The five balances are:
1 — Bar Balance : This is the only balance using instant values; it is simply the subtraction of the down volume from the up volume on the bar, so the instant volume delta for that bar.
2 — Average Balance : Calculates a distinct EMA for both the up and down volumes, and subtracts the down EMA from the up EMA.
The result is akin to MACD's histogram because it is the subtraction of two moving averages.
3 — Momentum Balance : Starts by calculating, separately for both up and down volumes, the difference between the same EMAs used in "Average Balance" and
an SMA of twice the period used for the "Average Balance" EMAs. The difference for the up side is subtracted from the difference for the down side,
and an RSI of that value is calculated and brought over the −50/+50 scale.
4 — Relative Balance : The reference values used in the calculation are the up and down EMAs used in the "Average Balance".
From those, we calculate two intermediate values using how much the instant up and down volumes on the bar exceed their respective EMA — but with a twist.
If the bar's up volume does not exceed the EMA of up volume, a zero value is used. The same goes for the down volume with the EMA of down volume.
Once we have our two intermediate values for the up and down volumes exceeding their respective MA, we subtract them. The final value is an ALMA of that subtraction.
The rationale behind using zero values when the bar's up/down volume does not exceed its EMA is to only take into account the more significant volume.
If both instant volume values exceed their MA, then the difference between the two is the signal's value.
The signal is called "relative" because the intermediate values are the difference between the instant up/down volumes and their respective MA.
This balance flatlines when the bar's up/down volumes do not exceed their EMAs, which makes it useful to spot areas where trader interest dwindles, such as consolidations.
The smaller the period of the final value's ALMA, the more easily it will flatline. These flat zones should be considered no-trade zones.
5 — Percent Balance : This balance is the ALMA of the ratio of the "Bar Balance" over the total volume for that bar.
From the balances and marker conditions, two more values are calculated:
1 — Marker Bias : This sums the up/down (+1/‒1) occurrences of the markers 1 to 4 over a period you define, so it ranges from −4 to +4, times the period.
Its calculation will depend on the modes used to calculate markers 3 and 4.
2 — Combined Balances : This is the sum of the bull/bear (+1/−1) states of each of the five balances, so it ranges from −5 to +5.
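As a rough sketch of how the first two balances could be derived from per-bar up/down volumes (Python for illustration only, not the indicator's Pine code; the EMA length is an arbitrary placeholder):

```python
def ema(values, length):
    """Exponential moving average of a list of values."""
    alpha = 2 / (length + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def bar_and_average_balance(up_vol, down_vol, length=20):
    """Return (bar_balance, average_balance) series.

    bar_balance[i]     = up_vol[i] - down_vol[i]             (instant volume delta)
    average_balance[i] = EMA(up_vol)[i] - EMA(down_vol)[i]   (akin to a MACD histogram)
    """
    bar_balance = [u - d for u, d in zip(up_vol, down_vol)]
    average_balance = [u - d for u, d in zip(ema(up_vol, length), ema(down_vol, length))]
    return bar_balance, average_balance
```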
The periods for all of these balances can be configured in the "Periods" section at the bottom of the script's inputs. As you cannot see the balances on the chart, you can use my Volume Delta Columns Pro indicator in a pane; it can plot the same balances, so you will be able to analyze them.
Divergences
In the context of this indicator, a divergence is any bar where the bear/bull state of a balance (above/below its zero centerline) diverges from the polarity of a chart bar. No directional bias is assigned to divergences when they occur. Candle bodies and tops/bottoms can each be colored differently on divergences detected from distinct balances.
Divergence Channel
The divergence channel is the space between two levels (by default, the bar's open and close ) saved when divergences occur. When price (by default the close ) has breached a channel and a new divergence occurs, a new channel is created. Until that new channel is breached, bars where additional divergences occur will expand the channel's levels if the bar's price points are outside the channel.
Price breaches of the divergence channel will change its state. Divergence channels can be in one of three different states:
• Bull (green): Price has breached the channel to the upside.
• Bear (red): Price has breached the channel to the downside.
• Neutral (gray): The channel has not yet been breached.
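A hedged sketch of how such a channel could be tracked bar by bar (Python for illustration; the dict layout and the use of the bar's open/close and close are assumptions matching the defaults described above):

```python
def update_divergence_channel(channel, bar_open, bar_close, diverges):
    """Track a divergence channel across bars (illustrative only).

    channel:  dict with 'top', 'bottom' and 'state' keys, or None before the first divergence.
    diverges: True when the current bar shows a balance/price divergence.
    """
    breached = channel is not None and channel["state"] != "neutral"
    if diverges:
        lo, hi = sorted((bar_open, bar_close))
        if channel is None or breached:
            channel = {"top": hi, "bottom": lo, "state": "neutral"}   # start a new channel
        else:
            channel["top"] = max(channel["top"], hi)                  # expand the unbreached channel
            channel["bottom"] = min(channel["bottom"], lo)
    if channel is not None and channel["state"] == "neutral":
        if bar_close > channel["top"]:
            channel["state"] = "bull"       # breached to the upside
        elif bar_close < channel["bottom"]:
            channel["state"] = "bear"       # breached to the downside
    return channel
```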
█ HOW TO USE THE INDICATOR
I do not make videos to explain how to use my indicators. I do, however, try hard to include in their description everything one needs to understand what they do. From there, it's up to you to explore and figure out if they can be useful in your trading practice. Communicating in videos what this description and the script's tooltips contain would make for very long videos that would likely exceed the attention span of most people who find this description too long. There is no quick way to understand an indicator such as this one because it uses many different concepts and has quite a bit of settings one can use to modify its visuals and behavior — thus how one uses it. I will happily answer questions on the inner workings of the indicator, but I do not answer questions like "How do I trade using this indicator?" A useful answer to that question would require an in-depth analysis of who you are, your trading methodology and objectives, which I do not have time for. I do not teach trading.
Start by loading the indicator on an active chart containing volume information. See here if you need help.
The default configuration displays:
• Normal candles where the bodies are only colored if the bar's volume has increased since the last bar.
If you want to use this indicator's candles, you may want to disable your chart's candles by clicking the eye icon to the right of the symbol's name in the top left of the chart.
• A top or bottom appended to the normal candles. It represents the difference between up and down volume for that bar
and is positioned at the top or bottom, depending on its polarity. If up volume is greater than down volume, a top is displayed. If down volume is greater, a bottom is plotted.
The size of tops and bottoms is determined by calculating a factor, which is the proportion of volume delta over the bar's total volume.
That factor is then used to calculate the top or bottom size relative to a baseline: the average candle body size of the last 100 bars (see the sketch after this list).
• An information box in the bottom right displaying intrabar and chart coverage information.
• A light red background when the intrabar volume differs from the chart's volume by more than 1%.
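Here is a minimal sketch of the top/bottom sizing rule mentioned above (Python for illustration, not the indicator's Pine code; `avg_body` stands in for the average candle body size of the last 100 bars):

```python
def top_bottom_size(up_vol, down_vol, avg_body):
    """Signed size of the top/bottom appended to a candle.

    The factor is the proportion of volume delta over the bar's total volume;
    the size is that factor applied to avg_body. A positive result means a top
    (up volume dominates), a negative result means a bottom.
    """
    total = up_vol + down_vol
    if total == 0:
        return 0.0
    factor = (up_vol - down_vol) / total
    return factor * avg_body
```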
The script's inputs contain tooltips explaining most of the fields. I will not repeat them here. Following is a brief description of each section of the indicator's inputs which will give you an idea of what the indicator can do:
Normal Candles is where you configure the replacement candles plotted by the script. You can choose from different coloring schemes for their bodies and specify a unique color for bodies where a divergence calculated using the method you choose occurs.
Volume Tops & Bottoms is where you configure the display of tops and bottoms, and their EMAs. The EMAs are calculated from the high point of tops and the low point of bottoms. They can act as a channel to evaluate price, and you can choose to color the channel using a gradient reflecting the advances/declines in the balance of your choice.
Divergence Channel is where you set up the appearance and behavior of the divergence channel. These areas represent levels where price and volume delta information do not converge. They can be interpreted as regions with no clear direction from where one will look for breaches. You can configure the channel to take into account one or both types of divergences you have configured for candle bodies and tops/bottoms.
Background allows you to configure a gradient background color that reflects the advances/declines in the balance of your choice. You can use this to provide context to the volume delta values from bars. You can also control the background color displayed on volume discrepancies between the intrabar and the chart's timeframe.
Intrabars is where you choose the calculation mode determining the lower timeframe used to access intrabars. The indicator uses the chart's timeframe and the type of market you are on to calculate the lower timeframe. Your setting there should reflect which compromise you prefer between the precision of calculations and chart coverage. This is also where you control the display of the information box in the lower right corner of the chart.
Markers allows you to control the plotting of chart markers on different conditions. Their configuration determines when alerts generated from the indicator will fire. Note that in order to generate alerts from this script, they must be created from your chart. See this Help Center page to learn how. Only the last 500 markers will be visible on the chart, but this will not affect the generation of alerts.
Periods is where you configure the periods for the balances and the EMAs used in the indicator.
The raw values calculated by this script can be inspected using the Data Window.
█ INTERPRETATION
Rightly or wrongly, volume delta is considered by many a useful complement to the interpretation of price action. I use it extensively in an attempt to find convergence between my read of volume delta and price movement — not so much as a predictor of future price movement. No system or person can predict the future. Accordingly, I consider people who speak or act as if they know the future with certainty to be dangerous to themselves and others; they are charlatans, imprudent or blissfully ignorant.
I try to avoid elaborate volume delta interpretation schemes involving too many variables and prefer to keep things simple:
• Trends that have more chances of continuing should be accompanied by VD of the same polarity.
In trends, I am looking for "slow and steady". I work from the assumption that traders and systems often overreact, which translates into unproductive volatility.
Wild trends are more susceptible to overreactions.
• I prefer steady VD values over wildly increasing ones, as large VD increases often come with increased price volatility, which can backfire.
Large VD values caused by stopping volume will also often occur on trend reversals with abnormally high candles.
• Prices escaping divergence channels may be leading a trend in that direction, although there is no telling how long that trend will last; could be just a few bars or hundreds.
When price is in a channel, shifts in VD balances can sometimes give us an idea of the direction where price has the most chance of breaking.
• Dwindling VD will often indicate trend exhaustion and predate reversals by many bars, but the problem is that mere pauses in a trend will often produce the same behavior in VD.
I think it is too perilous to infer rigidly from VD decreases.
Divergence Channel
Here I have configured the divergence channels to be visible. First, I set the bodies to display divergences on the default Bar Balance. They are indicated by yellow bodies. Then I activated the divergence channels by choosing to draw levels on body divergences and checked the "Fill" checkbox to fill the channel with the same color as the levels. The divergence channel is best understood as a direction-less area from where a breach can be acted on if other variables converge with the breach's direction:
Tops and Bottoms EMAs
I find these EMAs rather interesting. They have no equivalent elsewhere, as they are calculated from the top and bottom values this indicator plots. The only similarity they have with volume-weighted MAs, including VWAP, is that they use price and volume. This indicator's Tops and Bottoms EMAs, however, use the price and volume delta. While the channel differs from other channels in how it is calculated, it can be used like others, as a baseline from which to evaluate price movement or, alternatively, as stop levels. Remember that you can change the period used for the EMAs in the "Periods" section of the inputs.
This chart shows the EMAs in action, filled with a gradient representing the advances/decline from the Momentum balance. Notice the anomaly in the chart's latest bars where the Momentum balance gradient has been indicating a bullish bias for some time, during which price was mostly below the EMAs. Price has just broken above the channel on positive VD. My interpretation of this situation would be that it is a risky opportunity for a long trade in the larger context where the market has been in a downtrend since the 5th. Intrepid traders choosing to enter here could do so with a "make or break" tight stop that will minimize their losses should the market continue its downtrend while hopefully preserving the potential upside of price continuing on the longer-term uptrend prevalent since the 28th:
█ NOTES
Volume
If you use indicators such as this one which depends on volume information, it is important to realize that the volume data they consume comes from data feeds, and that all data feeds are NOT created equally. Those who create the data feeds we use must make decisions concerning the nature of the transactions they tally and the way they are tallied in each feed, and these decisions affect the nature of our volume data. My Volume X-ray publication discusses some of the reasons why volume information from different timeframes, brokers/exchanges or sectors may vary considerably. I encourage you to read it. This indicator's display of a warning through a background color on volume discrepancies between the timeframe used to access intrabars and the chart's timeframe is an attempt to help you realize these variations in feeds. Don't take things for granted, and understand that the quality of a given feed's volume information affects the quality of the results this indicator calculates.
Markets as ecosystems
I believe it is perilous to think that behavioral patterns you discover in one market through the lens of this or any other indicator will necessarily port to other markets. While this may sometimes be the case, it will often not. Why is that? Because each market is its own ecosystem. As cities do, all markets share some common characteristics, but they also all have their idiosyncrasies. A proportion of a city's inhabitants is always composed of outsiders who come and go, but a core population of regulars and systems is usually the force that actually defines most of the city's observable characteristics. I believe markets work somewhat the same way; they may look the same, but if you live there for a while and pay attention, you will notice the idiosyncrasies. Some things that work in some markets will, accordingly, not work in others. Please keep that in mind when you draw conclusions.
On Up/Down or Buy/Sell Volume
Buying or selling volume are misnomers, as every unit of volume transacted is both bought and sold by two different traders. While this does not keep me from using the terms, there is no such thing as “buy only” or “sell only” volume. Trader lingo is riddled with peculiarities. Without access to order book information, traders work with the assumption that when price moves up during a bar, there was more buying pressure than selling pressure, just as when buy market orders take out limit ask orders in the order book at successively higher levels. The built-in volume indicator available on TradingView uses this logic to color the volume columns green or red. While this script’s calculations are more precise because it analyses intrabars to calculate its information, it uses pretty much the same imperfect logic. Until Pine scripts can have access to how much volume was transacted at the bid/ask prices, our volume delta calculations will remain a mere proxy.
Repainting
• The values calculated on the realtime bar will update as new information comes from the feed.
• Historical values may recalculate if the historical feed is updated or when calculations start from a new point in history.
• Markers and alerts will not repaint as they only occur on a bar's close. Keep this in mind when viewing markers on historical bars,
where one could understandably and incorrectly assume they appear at the bar's open.
To learn more about repainting, see the Pine Script™ User Manual's page on the subject .
Superfluity
In "The Bed of Procrustes", Nassim Nicholas Taleb writes: To bankrupt a fool, give him information . This indicator can display a lot of information. The inevitable adaptation period you will need to figure out how to use it should help you eliminate all the visuals you do not need. The more you eliminate, the easier it will be to focus on those that are the most useful to your trading practice. Don't be a fool.
█ THANKS
Thanks to alexgrover for his Dekidaka-Ashi indicator. His volume plots on candles were the inspiration for my top/bottom plots.
Kudos to PineCoders for their libraries. I use two of them in this script: Time and lower_tf .
The first versions of this script used functionality that I would not have known about were it not for these two guys:
— A guy called Kuan who commented on a Backtest Rookies presentation of their Volume Profile indicator.
— theheirophant , my partner in the exploration of the sometimes weird abysses of request.security() ’s behavior at lower timeframes.
Backtesting & Trading Engine [PineCoders]
The PineCoders Backtesting and Trading Engine is a sophisticated framework with hybrid code that can run as a study to generate alerts for automated or discretionary trading while simultaneously providing backtest results. It can also easily be converted to a TradingView strategy in order to run TV backtesting. The Engine comes with many built-in strats for entries, filters, stops and exits, but you can also add your own.
If, like any self-respecting strategy modeler should, you spend a reasonable amount of time constantly researching new strategies and tinkering, our hope is that the Engine will become your inseparable go-to tool to test the validity of your creations, as once your tests are conclusive, you will be able to run this code as a study to generate the alerts required to put it in real-world use, whether for discretionary trading or to interface with an execution bot/app. You may also find the backtesting results the Engine produces in study mode enough for your needs and spend most of your time there, only occasionally converting to strategy mode in order to backtest using TV backtesting.
As you will quickly grasp when you bring up this script’s Settings, this is a complex tool. While you will be able to see results very quickly by just putting it on a chart and using its built-in strategies, in order to reap the full benefits of the PineCoders Engine, you will need to invest the time required to understand the subtleties involved in putting all its potential into play.
Disclaimer: use the Engine at your own risk.
Before we delve in more detail, here’s a bird’s eye view of the Engine’s features:
More than 40 built-in strategies,
Customizable components,
Coupling with your own external indicator,
Simple conversion from Study to Strategy modes,
Post-Exit analysis to search for alternate trade outcomes,
Use of the Data Window to show detailed bar by bar trade information and global statistics, including some not provided by TV backtesting,
Plotting of reminders and generation of alerts on in-trade events.
By combining your own strats with the built-in strats supplied with the Engine, and then tuning the numerous options and parameters in the Inputs dialog box, you will be able to play what-if scenarios from an infinite number of permutations.
USE CASES
You have written an indicator that provides an entry strat but it’s missing other components like a filter and a stop strategy. You add a plot in your indicator that respects the Engine’s External Signal Protocol, connect it to the Engine by simply selecting your indicator’s plot name in the Engine’s Settings/Inputs and then run tests on different combinations of entry stops, in-trade stops and profit taking strats to find out which one produces the best results with your entry strat.
You are building a complex strategy that you will want to run as an indicator generating alerts to be sent to a third-party execution bot. You insert your code in the Engine’s modules and leverage its trade management code to quickly move your strategy into production.
You have many different filters and want to explore results using them separately or in combination. Integrate the filter code in the Engine and run through different permutations or hook up your filtering through the external input and control your filter combos from your indicator.
You are tweaking the parameters of your entry, filter or stop strat. You integrate it in the Engine and evaluate its performance using the Engine’s statistics.
You always wondered what results a random entry strat would yield on your markets. You use the Engine’s built-in random entry strat and test it using different combinations of filters, stop and exit strats.
You want to evaluate the impact of fees and slippage on your strategy. You use the Engine’s inputs to play with different values and get immediate feedback in the detailed numbers provided in the Data Window.
You just want to inspect the individual trades your strategy generates. You include it in the Engine and then inspect trades visually on your charts, looking at the numbers in the Data Window as you move your cursor around.
You have never written a production-grade strategy and you want to learn how. Inspect the code in the Engine; you will find essential components typical of what is being used in actual trading systems.
You have run your system for a while and have compiled actual slippage information, and your broker/exchange has updated its fee schedule. You enter the information in the Engine and run it on your markets to see the impact this has on your results.
FEATURES
Before going into the detail of the Inputs and the Data Window numbers, here’s a more detailed overview of the Engine’s features.
Built-in strats
The engine comes with more than 40 pre-coded strategies for the following standard system components:
Entries,
Filters,
Entry stops,
2 stage in-trade stops with kick-in rules,
Pyramiding rules,
Hard exits.
While some of the filter and stop strats provided may be useful in production-quality systems, you will not devise crazy profit-generating systems using only the entry strats supplied; that part is still up to you, as will be finding the elusive combination of components that makes winning systems. The Engine will, however, provide you with a solid foundation where all the trade management nitty-gritty is handled for you. By binding your custom strats to the Engine, you will be able to build reliable systems of the best quality currently allowed on the TV platform.
On-chart trade information
As you move over the bars in a trade, you will see trade numbers in the Data Window change at each bar. The engine calculates the P&L at every bar, including slippage and fees that would be incurred were the trade exited at that bar’s close. If the trade includes pyramided entries, those will be taken into account as well, although for those, final fees and slippage are only calculated at the trade’s exit.
You can also see on-chart markers for the entry level, stop positions, in-trade special events and entries/exits (you will want to disable these when using the Engine in strategy mode to see TV backtesting results).
Customization
You can couple your own strats to the Engine in two ways:
1. By inserting your own code in the Engine’s different modules. The modular design should enable you to do so with minimal effort by following the instructions in the code.
2. By linking an external indicator to the engine. After making the proper selections in the engine’s Settings and providing values respecting the engine’s protocol, your external indicator can, when the Engine is used in Indicator mode only:
Tell the engine when to enter long or short trades, but let the engine’s in-trade stop and exit strats manage the exits,
Signal both entries and exits,
Provide an entry stop along with your entry signal,
Filter other entry signals generated by any of the engine’s entry strats.
Conversion from strategy to study
TradingView strategies are required to backtest using the TradingView backtesting feature, but if you want to generate alerts with your script, whether for automated trading or just to trigger alerts that you will use in discretionary trading, your code has to run as a study since, for the time being, strategies can’t generate alerts. From hereon we will use indicator as a synonym for study.
Unless you want to maintain two code bases, you will need hybrid code that easily flips between strategy and indicator modes, and your code will need to restrict its use of strategy() calls and their arguments if it’s going to be able to run both as an indicator and a strategy using the same trade logic. That’s one of the benefits of using this Engine. Once you will have entered your own strats in the Engine, it will be a matter of commenting/uncommenting only four lines of code to flip between indicator and strategy modes in a matter of seconds.
Additionally, even when running in Indicator mode, the Engine will still provide you with precious numbers on your individual trades and global results, some of which are not available with normal TradingView backtesting.
Post-Exit Analysis for alternate outcomes (PEA)
While typical backtesting shows results of trade outcomes, PEA focuses on what could have happened after the exit. The intention is to help traders get an idea of the opportunity/risk in the bars following the trade in order to evaluate if their exit strategies are too aggressive or conservative.
After a trade is exited, the Engine’s PEA module continues analyzing outcomes for a user-defined quantity of bars. It identifies the maximum opportunity and risk available in that space, and calculates the drawdown required to reach the highest opportunity level post-exit, while recording the number of bars to that point.
Typically, if you can’t find opportunity greater than 1X past your trade using a few different reasonable lengths of PEA, your strategy is doing pretty well at capturing opportunity. Remember that 100% of opportunity is never capturable. If, however, PEA was finding post-trade maximum opportunity of 3 or 4X with average drawdowns of 0.3X to those areas, this could be a clue revealing your system is exiting trades prematurely. To analyze PEA numbers, you can uncomment complete sets of plots in the Plot module to reveal detailed global and individual PEA numbers.
Statistics
The Engine provides stats on your trades that TV backtesting does not provide, such as:
Average Profitability Per Trade (APPT), aka statistical expectancy, a crucial value.
APPT per bar,
Average stop size,
Traded volume .
It also shows you on a trade-by-trade basis, on-going individual trade results and data.
In-trade events
In-trade events can plot reminders and trigger alerts when they occur. The built-in events are:
Price approaching stop,
Possible tops/bottoms,
Large stop movement (for discretionary trading where stop is moved manually),
Large price movements.
Slippage and Fees
Even when running in indicator mode, the Engine allows for slippage and fees to be included in the logic and test results.
Alerts
The alert creation mechanism allows you to configure alerts on any combination of the normal or pyramided entries, exits and in-trade events.
Backtesting results
A few words on the numbers calculated in the Engine. Priority is given to numbers not shown in TV backtesting, as you can readily convert the script to a strategy if you need them.
We have chosen to focus on numbers expressing results relative to X (the trade’s risk) rather than in absolute currency numbers or in other more conventional but less useful ways. For example, most of the individual trade results are not shown in percentages, as this unit of measure is often less meaningful than those expressed in units of risk (X). A trade that closes with a +25% result, for example, is a poor outcome if it was entered with a -50% stop. Expressed in X, this trade’s P&L becomes 0.5, which provides much better insight into the trade’s outcome. A trade that closes with a P&L of +2X has earned twice the risk incurred upon entry, which would represent a pre-trade risk:reward ratio of 2.
The way to go about it when you think in X’s and that you adopt the sound risk management policy to risk a fixed percentage of your account on each trade is to equate a currency value to a unit of X. E.g. your account is 10K USD and you decide you will risk a maximum of 1% of it on each trade. That means your unit of X for each trade is worth 100 USD. If your APPT is 2X, this means every time you risk 100 USD in a trade, you can expect to make, on average, 200 USD.
By presenting results this way, we hope that the Engine’s statistics will appeal to those cognisant of sound risk management strategies, while gently leading traders who aren’t, towards them.
We trade to turn in tangible profits of course, so at some point currency must come into play. Accordingly, some values such as equity, P&L, slippage and fees are expressed in currency.
Many of the usual numbers shown in TV backtests are nonetheless available, but they have been commented out in the Engine’s Plot module.
Position sizing and risk management
All good system designers understand that optimal risk management is at the very heart of all winning strategies. The risk in a trade is defined by the fraction of current equity represented by the amplitude of the stop, so in order to manage risk optimally on each trade, position size should adjust to the stop’s amplitude. Systems that enter trades with a fixed stop amplitude can get away with calculating position size as a fixed percentage of current equity. In the context of a test run where equity varies, what represents a fixed amount of risk translates into different currency values.
Dynamically adjusting position size throughout a system’s life is optimal in many ways. First, as position sizing will vary with current equity, it reproduces a behavioral pattern common to experienced traders, who will dial down risk when confronted to poor performance and increase it when performance improves. Second, limiting risk confers more predictability to statistical test results. Third, position sizing isn’t just about managing risk, it’s also about maximizing opportunity. By using the maximum leverage (no reference to trading on margin here) into the trade that your risk management strategy allows, a dynamic position size allows you to capture maximal opportunity.
To calculate position sizes using the fixed risk method, we use the following formula: Position = Account * MaxRisk% / Stop%, which calculates a position size that takes the trade’s entry stop into account, so that if the trade is stopped out, only the planned risk amount (100 USD in our earlier example) is lost. For someone who manages risk this way, common instructions to invest a certain percentage of your account in a position are simply worthless, as they do not take into account the risk incurred in the trade.
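For example, here is a minimal sketch of that fixed-risk calculation (Python for illustration; the numbers match the 10K USD / 1% example above):

```python
def fixed_risk_position(account, max_risk_pct, stop_pct):
    """Position size such that hitting the stop loses max_risk_pct of the account.

    account:      current equity, in currency
    max_risk_pct: maximum risk per trade, e.g. 1.0 for 1%
    stop_pct:     stop amplitude as a percentage of the entry price, e.g. 2.0 for 2%
    """
    return account * (max_risk_pct / 100) / (stop_pct / 100)

# 10,000 USD account, 1% risk per trade, 2% stop:
# position = 10000 * 0.01 / 0.02 = 5000 USD of the instrument;
# a 2% adverse move on 5000 USD loses 100 USD, i.e. 1% of the account.
print(fixed_risk_position(10_000, 1.0, 2.0))  # -> 5000.0
```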
The Engine lets you select either the fixed risk or fixed percentage of equity position sizing methods. The closest thing to dynamic position sizing that can currently be done with alerts is to use a bot that allows syntax to specify position size as a percentage of equity which, while being dynamic in the sense that it will adapt to current equity when the trade is entered, does not allow us to modulate position size using the stop’s amplitude. Changes to alerts are on the way which should solve this problem.
In order for you to simulate performance with the constraint of fixed position sizing, the Engine also offers a third, less preferable option, where position size is defined as a fixed percentage of initial capital so that it is constant throughout the test and will thus represent a varying proportion of current equity.
Let’s recap. The three position sizing methods the Engine offers are:
1. By specifying the maximum percentage of risk to incur on your remaining equity, so the Engine will dynamically adjust position size for each trade so that, combining the stop’s amplitude with position size will yield a fixed percentage of risk incurred on current equity,
2. By specifying a fixed percentage of remaining equity. Note that unless your system has a fixed stop at entry, this method will not provide maximal risk control, as risk will vary with the amplitude of the stop for every trade. This method, as the first, does however have the advantage of automatically adjusting position size to equity. It is the Engine’s default method because it has an equivalent in TV backtesting, so when flipping between indicator and strategy mode, test results will more or less correspond.
3. By specifying a fixed percentage of the Initial Capital. While this is the least preferable method, it nonetheless reflects the reality confronted by most system designers on TradingView today. In this case, risk varies both because the fixed position size in initial capital currency represents a varying percentage of remaining equity, and because the trade’s stop amplitude may vary, adding another variability vector to risk.
Note that the Engine cannot display equity results for strategies entering trades for a fixed amount of shares/contracts at a variable price.
SETTINGS/INPUTS
Because the initial text first published with a script cannot be edited later and because there are just too many options, the Engine’s Inputs will not be covered in minute detail, as they will most certainly evolve. We will go over them with broad strokes; you should be able to figure the rest out. If you have questions, just ask them here or in the PineCoders Telegram group.
Display
The display header’s checkbox does nothing.
For the moment, only one exit strategy uses a take profit level, so only that one will show information when checking “Show Take Profit Level”.
Entries
You can activate two simultaneous entry strats, each selected from the same set of strats contained in the Engine. If you select two and they fire simultaneously, the main strat’s signal will be used.
The random strat in each list uses a different seed, so you will get different results from each.
The “Filter transitions” and “Filter states” strats delegate signal generation to the selected filter(s). “Filter transitions” signals will only fire when the filter transitions into bull/bear state, so after a trade is stopped out, the next entry may take some time to trigger if the filter’s state does not change quickly. When you choose “Filter states”, then a new trade will be entered immediately after an exit in the direction the filter allows.
If you select “External Indicator”, your indicator will need to generate a +2/-2 (or a positive/negative stop value) to enter a long/short position, providing the selected filters allow for it. If you wish to use the Engine’s capacity to also derive the entry stop level from your indicator’s signal, then you must explicitly choose this option in the Entry Stops section.
Filters
You can activate as many filters as you wish; they are additive. The “Maximum stop allowed on entry” is an important component of proper risk management. If your system has an average 3% stop size and you need to trade using fixed position sizes because of alert/execution bot limitations, you must use this filter because if your system was to enter a trade with a 15% stop, that trade would incur 5 times the normal risk, and its result would account for an abnormally high proportion in your system’s performance.
Remember that any filter can also be used as an entry signal, either when it changes states, or whenever no trade is active and the filter is in a bull or bear mode.
Entry Stops
An entry stop must be selected in the Engine, as it requires a stop level before the in-trade stop is calculated. Until the selected in-trade stop strat generates a stop that comes closer to price than the entry stop (or another of the in-trade stop kick-in strats takes effect), the entry stop level is used.
It is here that you must select “External Indicator” if your indicator supplies a +price/-price value to be used as the entry stop. A +price is expected for a long entry and a -price value will enter a short with a stop at price. Note that the price is the absolute price, not an offset to the current price level.
In-Trade Stops
The Engine comes with many built-in in-trade stop strats. Note that some of them share the “Length” and “Multiple” fields, so when you swap between them, be sure that the length and multiple in use correspond to what you want for that stop strat. Suggested defaults appear with the name of each strat in the dropdown.
In addition to the strat you wish to use, you must also determine when it kicks in to replace the initial entry’s stop, which is determined using different strats. For strats where you can define a positive or negative multiple of X, percentage or fixed value for a kick-in strat, a positive value is above the trade’s entry fill and a negative one below. A value of zero represents breakeven.
Pyramiding
What you specify in this section are the rules that allow pyramiding to happen. By themselves, these rules will not generate pyramiding entries. For those to happen, entry signals must be issued by one of the active entry strats, and conform to the pyramiding rules which act as a filter for them. The “Filter must allow entry” selection must be chosen if you want the usual system’s filters to act as additional filtering criteria for your pyramided entries.
Hard Exits
You can choose from a variety of hard exit strats. Hard exits are exit strategies which signal trade exits on specific events, as opposed to price breaching a stop level in In-Trade Stops strategies. They are self-explanatory. The last one, labelled “When Take Profit Level (multiple of X) is reached”, is the only one that uses a level, but contrary to stops, it is above price and, while it is relative because it is expressed as a multiple of X, it does not move during the trade. This is the level called Take Profit that is shown when the “Show Take Profit Level” checkbox is checked in the Display section.
While stops focus on managing risk, hard exit strategies try to put the emphasis on capturing opportunity.
Slippage
You can define it as a percentage or a fixed value, with different settings for entries and exits. The entry and exit markers on the chart show the impact of slippage on the entry price (the fill).
Fees
Fees, whether expressed as a percentage of position size in and out of the trade or as a fixed value per in and out, are in the same units of currency as the capital defined in the Position Sizing section. Because fees are deducted from your capital, they have no impact on the chart marker positions.
In-Trade Events
These events will only trigger during trades. They can be helpful to act as reminders for traders using the Engine as assistance to discretionary trading.
Post-Exit Analysis
It is normally on. Some of its results will show in the Global Numbers section of the Data Window. Only a few of the statistics generated are shown; many more are available, but commented out in the Plot module.
Date Range Filtering
Note that you don’t have to change the dates to enable/disable filtering. When you are done with a specific date range, just uncheck “Date Range Filtering” to disable date filtering.
Alert Triggers
Each selection corresponds to one condition. Conditions can be combined into a single alert as you please. Just be sure you have selected the ones you want to trigger the alert before you create the alert. For example, if you trade in both directions and you want a single alert to trigger on both types of exits, you must select both “Long Exit” and “Short Exit” before creating your alert.
Once the alert is triggered, these settings no longer have relevance as they have been saved with the alert.
When viewing charts where an alert has just triggered, if your alert triggers on more than one condition, you will need the appropriate markers active on your chart to figure out which condition triggered the alert, since plotting of markers is independent of alert management.
Position sizing
You have 3 options to determine position size:
1. Proportional to Stop -> Variable, with a cap on size.
2. Percentage of equity -> Variable.
3. Percentage of Initial Capital -> Fixed.
External Indicator
This is where you connect your indicator’s plot that will generate the signals the Engine will act upon. Remember this only works in Indicator mode.
DATA WINDOW INFORMATION
The top part of the window contains global numbers while the individual trade information appears in the bottom part. The different types of units used to express values are:
curr: denotes the currency used in the Position Sizing section of Inputs for the Initial Capital value.
quote: denotes quote currency, i.e. the value the instrument is expressed in, or the right side of the market pair (USD in EURUSD ).
X: the stop’s amplitude, itself expressed in quote currency, which we use to express a trade’s P&L, so that a trade with P&L=2X has made twice the stop’s amplitude in profit. This is sometimes referred to as R, since it represents one unit of risk. It is also the unit of measure used in the APPT, which denotes expected reward per unit of risk.
X%: the stop’s amplitude again, but expressed as a percentage of the Entry Fill.
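To illustrate with hypothetical numbers: a long trade with an Entry Fill of 100.00 and a stop at 98.00 has X = 2.00 in quote currency and X% = 2%. If that trade ends with P&L = 1.5X, it has earned 3.00 in quote currency per unit traded, i.e. one and a half times the risk taken on at entry.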
The numbers appearing in the Data Window are all prefixed:
“ALL:” the number is the average for all first entries and pyramided entries.
”1ST:” the number is for first entries only.
”PYR:” the number is for pyramided entries only.
”PEA:” the number is for Post-Exit Analyses
Global Numbers
Numbers in this section represent the results of all trades up to the cursor on the chart.
Average Profitability Per Trade (X): This value is the most important gauge of your strat’s worthiness. It represents the return that can be expected from your strat for each unit of risk incurred. E.g. if your APPT is 2.0, then for every unit of currency you risk in a trade, you can on average expect to make 2 units in profit. APPT is also referred to as “statistical expectancy”. If it is negative, your strategy is losing, even if your win rate is very good (it means your winning trades aren’t winning enough, or your losing trades lose too much, or both). Its counterpart in currency is also shown, as is the APPT/bar, which can be a useful gauge in deciding between rivalling systems.
Profit Factor: gross profit of winning trades divided by gross loss of losing trades. The strategy is profitable when it is >1. Not as useful as the APPT because it doesn’t take into account the win rate and the average win/loss per trade. It is calculated from the total winning/losing results of this particular backtest and has less predictive value than the APPT. A good profit factor together with a poor APPT means you just found a chart where your system outperformed. Relying too much on the profit factor is a bit like a poker player thinking that going all in with twos against aces is optimal because he just won a hand that way.
Win Rate: Percentage of winning trades out of all trades. Taken alone, it doesn’t have much to do with strategy profitability. You can have a win rate of 99% but if that one trade in 100 ruins you because of poor risk management, 99% doesn’t look so good anymore. This number speaks more of the system’s profile than its worthiness. Still, it can be useful to gauge if the system fits your personality. It can also be useful to traders intending to sell their systems, as low win rate systems are more difficult to sell and require more handholding of worried customers.
Equity (curr): This is the sum of the initial capital and the P&L of your system’s trades, including fees and slippage.
Return on Capital is the equivalent of TV’s Net Profit figure, i.e. the variation on your initial capital.
Maximum drawdown is the largest drop in equity from the highest equity point reached. There is also a close-to-close (meaning it doesn’t take into account in-trade variations) maximum drawdown value commented out in the code.
The next values are self-explanatory, until:
PYR: Avg Profitability Per Entry (X): this is the APPT for all pyramided entries.
PEA: Avg Max Opp. Available (X): the average maximal opportunity found in the Post-Exit Analyses.
PEA: Avg Drawdown to Max Opp. (X): this represents the maximum drawdown (incurred from the close at the beginning of the PEA analysis) required to reach the maximal opportunity point.
Trade Information
Numbers in this section concern only the current trade under the cursor. Most of them are self-explanatory. Use the description’s prefix to determine what the value applies to.
PYR: Avg Profitability Per Entry (X): While this value includes the impact of all current pyramided entries (and only those) and updates when you move your cursor around, P&L only reflects fees at the trade’s last bar.
PEA: Max Opp. Available (X): It’s the most profitable close reached post-trade, measured from the trade’s Exit Fill, expressed in the X value of the trade the PEA follows.
PEA: Drawdown to Max Opp. (X): This is the maximum drawdown from the trade’s Exit Fill that needs to be sustained in order to reach the maximum opportunity point, also expressed in X. Note that PEA numbers do not include slippage and fees.
EXTERNAL SIGNAL PROTOCOL
Only one external indicator can be connected to a script; in order to leverage its use to the fullest, the engine provides options to use it as either an entry signal, an entry/exit signal or a filter. When used as an entry signal, you can also use the signal to provide the entry’s stop. Here’s how this works:
For filter state: supply +1 for bull (long entries allowed), -1 for bear (short entries allowed).
For entry signals: supply +2 for long, -2 for short.
For exit signals: supply +3 for exit from long, -3 for exit from short.
To send an entry stop level with an entry signal: Send positive stop level for long entry (e.g. 103.33 to enter a long with a stop at 103.33), negative stop level for short entry (e.g. -103.33 to enter a short with a stop at 103.33). If you use this feature, your indicator will have to check for exact stop levels of 1.0, 2.0 or 3.0 and their negative counterparts, and fudge them with a tick in order to avoid confusion with other signals in the protocol.
Remember that mere generation of the values by your indicator will have no effect until you explicitly allow their use in the appropriate sections of the Engine’s Settings/Inputs.
An example of a script issuing a signal for the Engine is published by PineCoders.
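For illustration only, and not that published example, a bare-bones Pine Script v5 indicator emitting protocol values could look like the sketch below; the moving average crossovers are hypothetical stand-ins for your own logic:

//@version=5
indicator("Engine signal sketch", overlay = true)
// Hypothetical entry/exit conditions; replace them with your own indicator's logic.
goLong    = ta.crossover(close, ta.sma(close, 20))
goShort   = ta.crossunder(close, ta.sma(close, 20))
exitLong  = ta.crossunder(close, ta.sma(close, 50))
exitShort = ta.crossover(close, ta.sma(close, 50))
// Protocol values: +2/-2 = enter long/short, +3/-3 = exit from long/short, 0 = no signal.
signal = goLong ? 2 : goShort ? -2 : exitLong ? 3 : exitShort ? -3 : 0
plot(signal, "Signal for the Engine")

You would then connect this plot in the Engine’s External Indicator input and enable the “External Indicator” options in the relevant Entries (and, if applicable, Entry Stops) sections.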
RECOMMENDATIONS TO ASPIRING SYSTEM DESIGNERS
Stick to higher timeframes. On progressively lower timeframes, margins decrease and fees and slippage take a proportionally larger portion of profits, to the point where they can very easily turn a profitable strategy into a losing one. Additionally, your margin for error shrinks as the equilibrium of your system’s profitability becomes more fragile with the tight numbers involved in the shorter time frames. Avoid <1H time frames.
Know and calculate fees and slippage. To avoid market shock, backtest using conservative fees and slippage parameters. Systems rarely show unexpectedly good returns when they are confronted with real markets, so put all chances on your side by being outrageously conservative, or at the very least realistic. Test results that do not include fees and slippage are worthless. Slippage is there for a reason, and that’s because our interventions in the market change the market. It is easier to find alpha in illiquid markets such as cryptos because not many large players participate in them. If your backtesting results are based on moving large positions and you don’t also add the inevitable slippage that will occur when you enter/exit thin markets, your backtesting will produce unrealistic results. Even if you do include large slippage in your settings, the Engine can only do so much, as it will not let slippage push fills past the high or low of the entry bar, but the gap may be much larger in illiquid markets.
Never test and optimize your system on the same dataset , as that is the perfect recipe for overfitting or data dredging, which is trying to find one precise set of rules/parameters that works only on one dataset. These setups are the most fragile and often get destroyed when they meet the real world.
Try to find datasets yielding more than 100 trades. Less than that and results are not as reliable.
Consider all backtesting results with suspicion. If you never entertained sceptical tendencies, now is the time to begin. If your backtest results look really good, assume they are flawed, either because of your methodology, the data you’re using or the software doing the testing. Always assume the worst and learn proper backtesting techniques such as Monte Carlo simulations and walk-forward analysis to avoid the traps and biases that unchecked greed will set for you. If you are not familiar with concepts such as survivorship bias, lookahead bias and confirmation bias, learn about them.
Stick to simple bars or candles when designing systems. Other types of bars often do not yield reliable results, whether by design (Heikin Ashi) or because of the way they are implemented on TV (Renko bars).
Know that you don’t know and use that knowledge to learn more about systems and how to properly test them, about your biases, and about yourself.
Manage risk first , then capture opportunity.
Respect the inherent uncertainty of the future. Cleanse yourself of the sad arrogance and unchecked greed common to newcomers to trading. Strive for rationality. Respect the fact that while backtest results may look promising, there is no guarantee they will repeat in the future (there is actually a high probability they won’t!), because the future is fundamentally unknowable. If you develop a system that looks promising, don’t oversell it to others whose greed may lead them to entertain unreasonable expectations.
Have a plan. Understand what kind of trading system you are trying to build. Have a clear picture of where entries, exits and other important levels will be in the sort of trade you are trying to create with your system. This stated direction will help you discard more efficiently many of the inevitably useless ideas that will pop up during system design.
Be wary of complexity. Experienced systems engineers understand how rapidly complexity builds when you assemble components together—however simple each one may be. The more complex your system, the more difficult it will be to manage.
Play! Allow yourself time to play around when you design your systems. While much comes about from working with a purpose, great ideas sometimes come out of just trying things with no set goal, when you are stuck and don’t know how to move ahead. Have fun!
@LucF
NOTES
While the engine’s code can supply multiple consecutive entries of longs or shorts in order to scale positions (pyramid), all exits currently assume the execution bot will exit the totality of the position. No partial exits are currently possible with the Engine.
Because the Engine is severely constrained by the limit on the number of plots a script can output on TV, it can only show a fraction of all the information it calculates in the Data Window. You will find in the Plot Module vast amounts of commented-out lines that you can activate if you also disable an equivalent number of other plots. This may be useful to explore certain characteristics of your system in more detail.
When backtesting using the TV backtesting feature, you will need to provide the strategy parameters you wish to use through either Settings/Properties or by changing the default values in the code’s header. These values are defined in variables and used not only in the strategy() statement, but also as defaults in the Engine’s relevant Inputs.
If you want to test using pyramiding, then both the strategy’s Setting/Properties and the Engine’s Settings/Inputs need to allow pyramiding.
If you find any bugs in the Engine, please let us know.
THANKS
To @glaz for allowing the use of his unpublished MA Squize in the filters.
To @everget for his Chandelier stop code, which is also used as a filter in the Engine.
To @RicardoSantos for his pseudo-random generator, and because it’s from him that I first read in the Pine chat about the idea of using an external indicator as input into another. In the PineCoders group, @theheirophant then mentioned the idea of using it as a buy/sell signal and @simpelyfe showed a piece of code implementing the idea. That’s the tortuous story behind the use of the external indicator in the Engine.
To @admin for the Volatility stop’s original code and for the donchian function lifted from Ichimoku.
To @BobHoward21 for the v3 version of Volatility Stop.
To @scarf and @midtownsk8rguy for the color tuning.
To many other scripters who provided encouragement and suggestions for improvement during the long process of writing and testing this piece of code.
To J. Welles Wilder Jr. for ATR, used extensively throughout the Engine.
To TradingView for graciously making an account available to PineCoders.
And finally, to all fellow PineCoders for the constant intellectual stimulation; it is a privilege to share ideas with you all. The Engine is for all TradingView PineCoders, of course—but especially for you.
Look first. Then leap.
AO/AC Trading Zones Strategy [Skyrexio]
Overview
AO/AC Trading Zones Strategy leverages the combination of the Awesome Oscillator (AO), Acceleration/Deceleration Indicator (AC), Williams Fractals, Williams Alligator and Exponential Moving Average (EMA) to obtain high-probability long setups. Moreover, the strategy uses a multi-trade system, adding funds to the long position if it considers that the current trend has likely become stronger. The combination of AO and AC is used to create so-called trading zones that generate the signals, while the Alligator and Fractals are used in conjunction as an approximation of the short-term trend to filter them. At the same time, the EMA (default period = 100) is used as a high-probability long-term trend filter, so long trades are opened only if it considers the current price action an uptrend. More information in the "Methodology" and "Justification of Methodology" paragraphs. The strategy opens only long trades.
Unique Features
No fixed stop-loss and take profit: Instead of a fixed stop-loss level, the strategy uses a technical condition derived from the Fractals and Alligator to identify when the current uptrend is likely to be over. In some special cases the strategy uses the AO and AC combination to trail profit (more information in the "Methodology" and "Justification of Methodology" paragraphs).
Configurable Trading Periods: Users can tailor the strategy to specific market windows, adapting to different market conditions.
Multilayer trade opening system: the strategy uses only 10% of capital in every trade and opens up to 5 trades at the same time if the script considers the current trend a strong one.
Short- and long-term trend filters: the strategy uses the EMA as a high-probability long-term trend filter and the Alligator and Fractals combination as a short-term one.
Methodology
The strategy opens a long trade when price meets the following conditions:
1. Price closed above EMA (by default, period = 100). Crossover is not obligatory.
2. The combination of Alligator and Williams Fractals considers the current trend to be upward (all details in the "Justification of Methodology" paragraph).
3. Both AC and AO print two consecutive increasing values. At the close of the candle meeting this condition, the algorithm opens the first long trade with 10% of capital.
4. If the combination of Alligator and Williams Fractals considers the trend to have changed from up to down, all long trades are closed, no matter how many have been opened.
5. If AO and AC both continue printing rising values, the strategy opens another long trade with 10% of capital on each candle close until the number of open trades reaches 5.
6. If AO and AC have both printed 5 rising values in a row, the algorithm closes all trades once a candle's low drops below the low of the 5th consecutive candle with rising AO and AC values.
The script also has additional visuals. If a second long trade has been opened simultaneously, the Alligator's teeth line is plotted in green. For every trade in a row from 2 to 5, a "Buy More" label is also plotted just below the teeth line. With every next simultaneously opened trade, the green fill between the teeth and price becomes less transparent.
Strategy settings
In the inputs window the user can set up the strategy settings:
EMA Length (by default = 100, period of EMA, used for long-term trend filtering EMA calculation).
User can choose the optimal parameters during backtesting on certain price chart.
Justification of Methodology
Let's explore the key concepts of this strategy and understand how they work together. We'll begin with the simplest: the EMA.
The Exponential Moving Average (EMA) is a type of moving average that assigns greater weight to recent price data, making it more responsive to current market changes compared to the Simple Moving Average (SMA). This tool is widely used in technical analysis to identify trends and generate buy or sell signals. The EMA is calculated as follows:
1. Calculate the Smoothing Multiplier:
Multiplier = 2 / (n + 1), Where n is the number of periods.
2. EMA Calculation
EMA = (Current Price) × Multiplier + (Previous EMA) × (1 − Multiplier)
In this strategy, the EMA acts as a long-term trend filter. For instance, long trades are considered only when the price closes above the EMA (default: 100-period). This increases the likelihood of entering trades aligned with the prevailing trend.
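As a minimal Pine Script v5 sketch of this filter (the emaLength input name and its wiring are assumptions, not the published code):

//@version=5
indicator("EMA trend filter sketch", overlay = true)
emaLength   = input.int(100, "EMA Length")   // default period used by the strategy
longTermEma = ta.ema(close, emaLength)
longTrendUp = close > longTermEma            // long trades are only considered while this is true
plot(longTermEma, "Long-term EMA", color = longTrendUp ? color.green : color.red)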
Next, let’s discuss the short-term trend filter, which combines the Williams Alligator and Williams Fractals.
Williams Alligator
Developed by Bill Williams, the Alligator is a technical indicator that identifies trends and potential market reversals. It consists of three smoothed moving averages:
Jaw (Blue Line): The slowest of the three, based on a 13-period smoothed moving average shifted 8 bars ahead.
Teeth (Red Line): The medium-speed line, derived from an 8-period smoothed moving average shifted 5 bars forward.
Lips (Green Line): The fastest line, calculated using a 5-period smoothed moving average shifted 3 bars forward.
When the lines diverge and align in order, the "Alligator" is "awake," signaling a strong trend. When the lines overlap or intertwine, the "Alligator" is "asleep," indicating a range-bound or sideways market. This indicator helps traders determine when to enter or avoid trades.
Fractals, another tool by Bill Williams, help identify potential reversal points on a price chart. A fractal forms over at least five consecutive bars, with the middle bar showing either:
Up Fractal: Occurs when the middle bar has a higher high than the two preceding and two following bars, suggesting a potential downward reversal.
Down Fractal: Happens when the middle bar shows a lower low than the surrounding two bars, hinting at a possible upward reversal.
Traders often use fractals alongside other indicators to confirm trends or reversals, enhancing decision-making accuracy.
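As a hedged Pine Script v5 sketch of the five-bar fractal check (a fractal can only be confirmed two bars after its middle bar, which is inherent to the pattern):

//@version=5
indicator("Williams Fractals sketch", overlay = true)
// The candidate middle bar is two bars back (high[2]/low[2]); it is confirmed once the two following bars have closed.
upFractal   = high[2] > high[4] and high[2] > high[3] and high[2] > high[1] and high[2] > high
downFractal = low[2]  < low[4]  and low[2]  < low[3]  and low[2]  < low[1]  and low[2]  < low
plotshape(upFractal,   style = shape.triangledown, location = location.abovebar, offset = -2, color = color.red)
plotshape(downFractal, style = shape.triangleup,   location = location.belowbar, offset = -2, color = color.green)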
How do these tools work together in this strategy? Let’s consider an example of an uptrend.
When the price breaks above an up fractal, it signals a potential bullish trend. This occurs because the up fractal represents a shift in market behavior, where a temporary high was formed due to selling pressure. If the price revisits this level and breaks through, it suggests the market sentiment has turned bullish.
The breakout must occur above the Alligator’s teeth line to confirm the trend. A breakout below the teeth is considered invalid, and the downtrend might still persist. Conversely, in a downtrend, the same logic applies with down fractals.
In this strategy if the most recent up fractal breakout occurs above the Alligator's teeth and follows the last down fractal breakout below the teeth, the algorithm identifies an uptrend. Long trades can be opened during this phase if a signal aligns. If the price breaks a down fractal below the teeth line during an uptrend, the strategy assumes the uptrend has ended and closes all open long trades.
By combining the EMA as a long-term trend filter with the Alligator and fractals as short-term filters, this approach increases the likelihood of opening profitable trades while staying aligned with market dynamics.
Now let's talk about the trading zones concept and its signals. To understand this we need to briefly introduce what is AO and AC. The Awesome Oscillator (AO), developed by Bill Williams, is a momentum indicator designed to measure market momentum by contrasting recent price movements with a longer-term historical perspective. It helps traders detect potential trend reversals and assess the strength of ongoing trends.
The formula for AO is as follows:
AO = SMA5(Median Price) − SMA34(Median Price)
where:
Median Price = (High + Low) / 2
SMA5 = 5-period Simple Moving Average of the Median Price
SMA34 = 34-period Simple Moving Average of the Median Price
The Acceleration/Deceleration (AC) Indicator, introduced by Bill Williams, measures the rate of change in market momentum. It highlights shifts in the driving force of price movements and helps traders spot early signs of trend changes. The AC Indicator is particularly useful for identifying whether the current momentum is accelerating or decelerating, which can indicate potential reversals or continuations. For the AC calculation we use the AO calculated above in the following formula:
AC = AO − SMA5(AO), where SMA5(AO) is the 5-period Simple Moving Average of the Awesome Oscillator
When the AC is above the zero line and rising, it suggests accelerating upward momentum.
When the AC is below the zero line and falling, it indicates accelerating downward momentum.
When the AC is below the zero line and rising, it suggests decelerating downtrend momentum. When the AC is above the zero line and falling, it suggests decelerating uptrend momentum.
Now let's discuss the trading zones concept and how it can create a signal. Zones are created by the combination of AO and AC. We can distinguish three zone types:
Green zone: when the AO and AC are both rising
Red zone: when the AO and AC both are decreasing
Gray zone: when one of AO or AC is rising, the other is falling
The gray zone is treated as uncertainty: AC and AO are moving in opposite directions. The strategy skips such price action to decrease the chance of getting stuck in a losing trade during a potential sideways move. The red zone is also not interesting for the algorithm, because both indicators consider the trend bearish while the strategy opens only long trades. It waits for the green zone to increase the chance of opening a trade in the direction of the potential uptrend. When we have 2 candles in a row in the green zone, the script executes a long trade with 10% of capital.
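Putting the formulas above into a minimal Pine Script v5 sketch (the zone flags and the two-candle entry condition are illustrative, not the published code):

//@version=5
indicator("AO/AC trading zones sketch")
medianPrice = hl2
ao = ta.sma(medianPrice, 5) - ta.sma(medianPrice, 34)  // Awesome Oscillator
ac = ao - ta.sma(ao, 5)                                // Acceleration/Deceleration
greenZone = ao > ao[1] and ac > ac[1]                  // both rising
redZone   = ao < ao[1] and ac < ac[1]                  // both falling
grayZone  = not greenZone and not redZone              // mixed readings = uncertainty
// Hypothetical signal: two consecutive green-zone candles.
longSignal = greenZone and greenZone[1]
plot(ao, "AO", style = plot.style_columns, color = greenZone ? color.green : redZone ? color.red : color.gray)
plotshape(longSignal, "Long signal", style = shape.labelup, location = location.bottom, color = color.green)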
Two green zone candles in a row are considered by the algorithm as a bullish trend, but not yet a strong one, which is why the trade will be closed when the combination of Alligator and Fractals signals a trend change from bullish to bearish. If that does not happen, the algorithm keeps counting the green zone candles in a row. When we have 5 in a row, the script changes the trade closing condition. Such a situation is considered a high-probability strong bull market, and all trades will be closed if a candle's low drops below the fifth green zone candle's low. This is used to increase the probability of securing the profit. If long trades are initiated, the strategy continues using subsequent signals until the total number of trades reaches a maximum of 5. Each trade uses 10% of capital.
Why do we use trading zone signals? When the Alligator and Fractals combination described above indicates a probable short-term uptrend, and the EMA filter also suggests the long-term trend is bullish, rising AC and AO values in the direction of the most likely main trend signal a high probability of the fastest bullish phase in the market. The main idea is to take part in such rapid moves and to add trades if the move keeps accelerating according to the indicators.
Backtest Results
Operating window: Date range of backtests is 2023.01.01 - 2024.12.31. It is chosen to let the strategy close all opened positions.
Commission and Slippage: Includes a standard Binance commission of 0.1% and accounts for possible slippage over 5 ticks.
Initial capital: 10000 USDT
Percent of capital used in every trade: 10%
Maximum Single Position Loss: -9.49%
Maximum Single Profit: +24.33%
Net Profit: +4374.70 USDT (+43.75%)
Total Trades: 278 (39.57% win rate)
Profit Factor: 2.203
Maximum Accumulated Loss: 668.16 USDT (-5.43%)
Average Profit per Trade: 15.74 USDT (+1.37%)
Average Trade Duration: 60 hours
How to Use
Add the script to favorites for easy access.
Apply to the desired timeframe and chart (optimal performance observed on 4h BTC/USDT).
Configure settings using the dropdown choice list in the built-in menu.
Set up alerts to automate strategy positions through a webhook with the text: {{strategy.order.alert_message}}
Disclaimer:
Educational and informational tool reflecting Skyrex's commitment to informed trading. Past performance does not guarantee future results. Test strategies in a simulated environment before live implementation.
These results are obtained with realistic parameters representing trading conditions observed at major exchanges such as Binance and with realistic trading portfolio usage parameters.
Supply and demand
Hi all!
This is my take on supply/demand. The gist is that it creates a zone if there is a big enough reaction. This is configurable in the settings as "Minimum range (ATR factor)" (a factor of the Average True Range of length 14), which is the distance that price must travel, and "Reaction bars", which is the maximum number of bars in which price must travel this distance. The zones that are shown are the ones that have a retest, a break and retest, or are unmitigated (untouched). If a zone is mitigated (entered) or broken it is temporarily hidden. For a zone to be created, the reaction must be present on the current bar but not on the previous one.
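As a rough Pine Script v5 sketch of that reaction check (my atrFactor and reactionBars input names and defaults are hypothetical, and the real script's zone bookkeeping is far more involved):

//@version=5
indicator("Reaction check sketch", overlay = true)
atrFactor    = input.float(2.0, "Minimum range (ATR factor)")  // hypothetical default
reactionBars = input.int(3, "Reaction bars")                   // hypothetical default
minRange     = atrFactor * ta.atr(14)
// Price has travelled at least minRange within the last reactionBars bars.
reactionUp   = close - ta.lowest(low, reactionBars) >= minRange
reactionDown = ta.highest(high, reactionBars) - close >= minRange
// A zone would be created when the reaction appears on this bar but not on the previous one.
newDemandZone = reactionUp and not reactionUp[1]
newSupplyZone = reactionDown and not reactionDown[1]
plotshape(newDemandZone, "New demand zone", style = shape.triangleup,   location = location.belowbar, color = color.green)
plotshape(newSupplyZone, "New supply zone", style = shape.triangledown, location = location.abovebar, color = color.red)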
So this script will show you zones that are fresh (unmitigated), retested, or broken and retested. This means that the zones shown have "proven" that they are good zones. Basically the script creates a bunch of zones and then picks the good ones. This gives the script some latency, but will hopefully give you good zones. A zone is completely removed if it's broken twice (it's okay if it's broken once; it can still have a retest after it has flipped from previous supply (or resistance) into demand (or support)).
Here is a zone (the one that has the lowest opacity) that is broken and retested that could have resulted in a good long trade (the settings are default but has a stop in the beginning of 2024):
You have a setting to remove zones that are pierced (broken by price wicks). The following zone is pierced by price at the beginning of May and will not be shown after that point if you have "Pierced" checked (the indicator has default settings but a stop in the middle of April):
You have a trend section. Zones that create a reaction upwards can only be created if the trend is considered to be up, and vice versa. The options here are "SMA50" (the current price needs to be over the Simple Moving Average of length 50) and "SMA50, SMA200" (price needs to be over the Simple Moving Average of length 50 and the Simple Moving Average of length 50 needs to be over the Simple Moving Average of length 200). If these conditions are met the trend is considered to be up, otherwise it's down. You can disable this by choosing "No detection".
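As a minimal sketch of the two trend options (the trendMode input name is an assumption):

//@version=5
indicator("Trend detection sketch", overlay = true)
trendMode = input.string("SMA50", "Trend detection", options = ["No detection", "SMA50", "SMA50, SMA200"])
sma50  = ta.sma(close, 50)
sma200 = ta.sma(close, 200)
// Trend is up when the selected condition holds; "No detection" disables the filter.
trendUp = switch trendMode
    "SMA50"         => close > sma50
    "SMA50, SMA200" => close > sma50 and sma50 > sma200
    => true
plot(sma50, "SMA50")
plot(sma200, "SMA200")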
The zones that are shown also need to be within a limit (of the current price). This limit is 10 (a factor of the Average True Range of length 14) by default. Set this to 0 to deactivate it. This is useful for not showing zones that are far away from the current price and therefore unlikely to be interacted with.
You can stop the calculation of zones (through the "Stop" value in the settings). This is useful to see if previous zones were any good. I used it in my testing of the script but left it because it can be nice to have.
The zones created by the script have different transparency based upon the zone's interaction. The clearest zones are the ones that are unmitigated, the second clearest ones are the ones having a retest and lastly the zones which are most unclear are the ones having a break and then a retest.
You can see the concept of this script to be a mix of supply/demand and support/resistance, having zones being unmitigated (untouched) as the most important but also show the zones having an interaction (in the form of a retest or a break and retest).
This is from a previous supply (or resistance) zone that has flipped into demand (or support) and has shown to be a good zone through a retest followed by a rally (default settings):
This zone has multiple retests and then rallies that could have given good long trades (it has the default settings but a "Stop" time at 2022-01-14):
TODO:
- Create zones based on pivots
- Handle overlapping zones
- Incorporate volume in the creation and/or interaction with zones
- Add alerts
- Add ability to set maximum zone width
- Add ability to set the maximum number of retest bars
- ...?
The example for this publication has the default settings but a "Stop" and a tighter "Limit" of 4.
I hope this explanation makes sense, let me know otherwise. Also let me know if you have any suggestions on improvements.
Best of trading luck!