GKD-V Semi-Variance [Loxx]
Giga Kaleidoscope Semi-Variance is a Volatility/Volume module included in Loxx's "Giga Kaleidoscope Modularized Trading System".
█ Giga Kaleidoscope Modularized Trading System
What is Loxx's "Giga Kaleidoscope Modularized Trading System"?
The Giga Kaleidoscope Modularized Trading System is a trading system built on the philosophy of NNFX (No Nonsense Forex) algorithmic trading.
What is an NNFX algorithmic trading strategy?
The NNFX algorithm is built on the principles of trend, momentum, and volatility. There are seven core components in the NNFX trading algorithm:
1. Volatility - price volatility; e.g., Average True Range, True Range Double, Close-to-Close, etc.
2. Baseline - a moving average to identify price trend
3. Confirmation 1 - a technical indicator used to identify trends.
4. Confirmation 2 - a technical indicator used to identify trends.
5. Continuation - a technical indicator used to identify trends.
6. Volatility/Volume - a technical indicator used to identify volatility/volume breakouts/breakdowns.
7. Exit - a technical indicator used to determine when a trend is exhausted.
How does Loxx's GKD (Giga Kaleidoscope Modularized Trading System) implement the NNFX algorithm outlined above?
Loxx's GKD v1.0 system has five types of modules (indicators/strategies). These modules are:
1. GKD-BT - Backtesting module (Volatility, Number 1 in the NNFX algorithm)
2. GKD-B - Baseline module (Baseline and Volatility/Volume, Numbers 1 and 2 in the NNFX algorithm)
3. GKD-C - Confirmation 1/2 and Continuation module (Confirmation 1/2 and Continuation, Numbers 3, 4, and 5 in the NNFX algorithm)
4. GKD-V - Volatility/Volume module (Volatility/Volume, Number 6 in the NNFX algorithm)
5. GKD-E - Exit module (Exit, Number 7 in the NNFX algorithm)
(additional module types will be added in future releases)
Each module interacts with the others by passing data down the chain. Data is passed between modules as described below:
GKD-B => GKD-V => GKD-C(1) => GKD-C(2) => GKD-C(Continuation) => GKD-E => GKD-BT
That is, the Baseline indicator passes its data to Volatility/Volume. The Volatility/Volume indicator passes its values to the Confirmation 1 indicator. The Confirmation 1 indicator passes its values to the Confirmation 2 indicator. The Confirmation 2 indicator passes its values to the Continuation indicator. The Continuation indicator passes its values to the Exit indicator, and finally, the Exit indicator passes its values to the Backtest strategy.
This chaining of indicators requires that each module conform to Loxx's GKD protocol, thereby allowing for the testing of every possible combination of technical indicators that make up the seven components of the NNFX algorithm.
What does the application of the GKD trading system look like?
Example trading system:
Backtest: Strategy with 1-3 take profits, trailing stop loss, multiple types of PnL volatility, and 2 backtesting styles
Baseline: Hull Moving Average
Volatility/Volume: Semi-variance as shown on the chart above
Confirmation 1: Halftrend Averages
Confirmation 2: Williams Percent Range
Continuation: Fisher Transform
Exit: Rex Oscillator
Each GKD indicator is denoted with a module identifier of either: GKD-BT, GKD-B, GKD-C, GKD-V, or GKD-E. This allows traders to understand to which module each indicator belongs and where each indicator fits into the GKD protocol chain.
Giga Kaleidoscope Modularized Trading System Signals (based on the NNFX algorithm)
Standard Entry
1. GKD-C Confirmation 1 Signal
2. GKD-B Baseline agrees
3. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
4. GKD-C Confirmation 2 agrees
5. GKD-V Volatility/Volume agrees
Baseline Entry
1. GKD-B Baseline signal
2. GKD-C Confirmation 1 agrees
3. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
4. GKD-C Confirmation 2 agrees
5. GKD-V Volatility/Volume agrees
6. GKD-C Confirmation 1 signal was less than 7 candles prior
Continuation Entry
1. Standard Entry, Baseline Entry, or Pullback Entry triggered previously
2. GKD-B Baseline hasn't crossed since entry signal trigger
3. GKD-C Confirmation Continuation Indicator signals
4. GKD-C Confirmation 1 agrees
5. GKD-B Baseline agrees
6. GKD-C Confirmation 2 agrees
1-Candle Rule Standard Entry
1. GKD-C Confirmation 1 signal
2. GKD-B Baseline agrees
3. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
Next Candle:
1. Price retraced (Long: close < close[1] or Short: close > close[1])
2. GKD-B Baseline agrees
3. GKD-C Confirmation 1 agrees
4. GKD-C Confirmation 2 agrees
5. GKD-V Volatility/Volume agrees
1-Candle Rule Baseline Entry
1. GKD-B Baseline signal
2. GKD-C Confirmation 1 agrees
3. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
4. GKD-C Confirmation 1 signal was less than 7 candles prior
Next Candle:
1. Price retraced (Long: close < close[1] or Short: close > close[1])
2. GKD-B Baseline agrees
3. GKD-C Confirmation 1 agrees
4. GKD-C Confirmation 2 agrees
5. GKD-V Volatility/Volume agrees
PullBack Entry
1. GKD-B Baseline signal
2. GKD-C Confirmation 1 agrees
3. Price is beyond 1.0x Volatility of Baseline
Next Candle:
1. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
2. GKD-C Confirmation 1 agrees
3. GKD-C Confirmation 2 agrees
4. GKD-V Volatility/Volume agrees
█ Semi-Variance
What is Semi-Variance?
Semi-variance is a statistical measure that represents the variability of returns in a financial portfolio or investment that fall below a certain threshold or benchmark return. Unlike traditional variance, which measures the dispersion of all returns, semi-variance only focuses on the downside risk or the deviation of returns that are below a specified target return.
Semi-variance is calculated as the average of the squared deviations of returns below the target return. It provides information on the risk of losing money and helps investors assess the stability and reliability of a portfolio or investment. A high semi-variance indicates a higher level of downside risk, while a low semi-variance suggests a lower level of downside risk.
In finance, semi-variance is often used in conjunction with other risk measures, such as standard deviation, beta, and value-at-risk, to give a comprehensive understanding of a portfolio's risk-return profile.
This indicator calculates the difference between upward volatility and downward volatility. You can choose between Gaussian-normalized or regular output.
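To make the calculation concrete, here is a minimal Python sketch (an illustration only, not the indicator's Pine Script; the zero target return and the up/down split are assumptions based on the description above):

```python
import numpy as np

def semi_variance(returns, target=0.0, downside=True):
    """Mean squared deviation of returns on one side of the target."""
    dev = returns - target
    dev = np.minimum(dev, 0.0) if downside else np.maximum(dev, 0.0)
    return np.mean(dev ** 2)

# The module plots the spread between upward and downward volatility.
returns = np.diff(np.log([100, 101, 99, 102, 100, 103]))
spread = semi_variance(returns, downside=False) - semi_variance(returns)
print(spread)  # > 0: upside dominates; < 0: downside dominates
```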
Other things to note
The GKD trading system requires that a GKD-V indicator be present in the indicator chain, but the GKD-V indicator doesn't need to be active. You can turn the volatility/volume filter on or off as you wish so you can backtest your trading strategy with or without the filter.
Additional features will be added in future releases.
Cryptos Pump Hunter [liwei666]
🔥 Cryptos Pump Hunter captures high-volatility symbols in real time; up to 40 symbols can be monitored at the same time.
It helps you find the most profitable symbol with excellent visualization.
🔥 Indicator Design logic
🎯 The core pump/dump logic is quite simple
1. Calculate the highest and lowest high prices over the past bars, then derive the movement with this formula
" movement = (highest - lowest) / lowest * 100 "
2. Order by the 'movement' value descending to get a volatility list
3. Use the Table tool to display the list; the higher the 'movement', the higher the ranking (see the Python sketch after this list)
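The ranking step can be sketched outside Pine as follows (a Python illustration with hypothetical symbol data; the actual script does this with Pine arrays and a Table):

```python
def movement(prices):
    """Percent range over the lookback window: (highest - lowest) / lowest * 100."""
    hi, lo = max(prices), min(prices)
    return (hi - lo) / lo * 100.0

# Hypothetical per-symbol series of bar highs over the last pump_bars_cnt bars,
# per the description above (the exact source series is an assumption).
data = {"BTCUSDT": [30000, 30500, 31000, 30200],
        "ETHUSDT": [2000, 2100, 2300, 2050]}
ranking = sorted(data, key=lambda s: movement(data[s]), reverse=True)
print(ranking)  # highest movement first, as in the Table panel
```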
🔥 Settings
🎯 2 input properties impact the results and 2 impact the display; for the others, see the picture below.
pump_bars_cnt : lookback bars used to calculate the pump/dump movement
resolution for pump : 1min to 1D
show_top1 : when the top-1 symbol in the ranking list changes, a label is drawn
show pump : when a symbol crosses the threshold, a pump label is drawn
🔥 How To Use
🎯 Only trade high-volatility symbols
1. Focus on the top-1 symbol in the Table panel at the top-right position; trade symbols from the labels on the chart.
2. Short when 'position' ~ 0, long when 'position' ~ 1 in the Table cell.
🎯 Monitor the symbols you like
1. 100+ symbols are added in the script; uncomment the relevant code line if a symbol is one you want.
2. Add one line of code if a symbol isn't present. If you want to monitor 'ETHUSDTPERP', then add
" ETHUSDTPERP = create_symbol_obj('BINANCE:ETHUSDTPERP'), array.unshift(symbol_a, ETHUSDTPERP) "
🎯 Alerts will be added soon. Any questions or suggestions, please comment below; I would appreciate it greatly.
Hope this indicator will be useful for you :)
enjoy! 🚀🚀🚀
Average True Range Percent
When writing the Quickfingers Luc base scanner (Marvin) script, I wanted a measure of volatility that would be comparable between charts. The traditional Average True Range (ATR) indicator calculates a discrete number providing the average true range of that chart for a specified number of periods. The ATR is not comparable across different price charts.
Average True Range Percent (ATRP) measures the true range for the period, converts it to a percentage using the midpoint of the period's range ((high + low) / 2), and then smooths the percentage. The ATRP provides a measure of volatility that is comparable between charts, showing their relative volatility.
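A minimal numpy sketch of that calculation (simple-moving-average smoothing is an assumption here; the script's smoothing may differ):

```python
import numpy as np

def atrp(high, low, close, period=14):
    """True range as a percent of the bar midpoint ((high + low) / 2), SMA-smoothed."""
    high, low, close = (np.asarray(a, dtype=float) for a in (high, low, close))
    prev_close = np.roll(close, 1)
    tr = np.maximum(high - low,
                    np.maximum(np.abs(high - prev_close),
                               np.abs(low - prev_close)))
    tr[0] = high[0] - low[0]  # first bar has no previous close
    pct = tr / ((high + low) / 2.0) * 100.0
    return np.convolve(pct, np.ones(period) / period, mode="valid")
```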
Enjoy.
VolatilityAlgo
This indicator allows you to calculate precise volatility in real time
> Allows analyzing periods of high/low volatility
> Also lets you do technical analysis on the volatility of each bar
> It works with all assets as well as all periods
Here are the different values (a Python sketch follows this list):
Upper Volatility Calculation
1 open to close
2 open to high
3 upper shadow
Lower Volatility Calculation
4 open to close
5 open to low
6 lower shadow
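One plausible reading of those six values, sketched in Python (the script's exact definitions are not published here, so treat these components as assumptions):

```python
def candle_volatility(o, h, l, c):
    """Candidate per-bar volatility components, in price units."""
    upper = {"open_to_close": max(c - o, 0.0),   # body, when bullish
             "open_to_high": h - o,
             "upper_shadow": h - max(o, c)}
    lower = {"open_to_close": max(o - c, 0.0),   # body, when bearish
             "open_to_low": o - l,
             "lower_shadow": min(o, c) - l}
    return upper, lower

print(candle_volatility(100.0, 103.0, 98.0, 101.0))
```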
GKD-C Trading Channel Index [Loxx]
Giga Kaleidoscope Trading Channel Index is a Confirmation module included in Loxx's "Giga Kaleidoscope Modularized Trading System".
█ Giga Kaleidoscope Modularized Trading System
What is Loxx's "Giga Kaleidoscope Modularized Trading System"?
The Giga Kaleidoscope Modularized Trading System is a trading system built on the philosophy of NNFX (No Nonsense Forex) algorithmic trading.
What is an NNFX algorithmic trading strategy?
The NNFX algorithm is built on the principles of trend, momentum, and volatility. There are seven core components in the NNFX trading algorithm:
1. Volatility - price volatility; e.g., Average True Range, True Range Double, Close-to-Close, etc.
2. Baseline - a moving average to identify price trend
3. Confirmation 1 - a technical indicator used to identify trends.
4. Confirmation 2 - a technical indicator used to identify trends.
5. Continuation - a technical indicator used to identify trends.
6. Volatility/Volume - a technical indicator used to identify volatility/volume breakouts/breakdown.
7. Exit - a technical indicator used to determine when a trend is exhausted.
How does Loxx's GKD (Giga Kaleidoscope Modularized Trading System) implement the NNFX algorithm outlined above?
Loxx's GKD v1.0 system has five types of modules (indicators/strategies). These modules are:
1. GKD-BT - Backtesting module (Volatility, Number 1 in the NNFX algorithm)
2. GKD-B - Baseline module (Baseline and Volatility/Volume, Numbers 1 and 2 in the NNFX algorithm)
3. GKD-C - Confirmation 1/2 and Continuation module (Confirmation 1/2 and Continuation, Numbers 3, 4, and 5 in the NNFX algorithm)
4. GKD-V - Volatility/Volume module (Volatility/Volume, Number 6 in the NNFX algorithm)
5. GKD-E - Exit module (Exit, Number 7 in the NNFX algorithm)
(additional module types will be added in future releases)
Each module interacts with the others by passing data down the chain. Data is passed between modules as described below:
GKD-B => GKD-V => GKD-C(1) => GKD-C(2) => GKD-C(Continuation) => GKD-E => GKD-BT
That is, the Baseline indicator passes its data to Volatility/Volume. The Volatility/Volume indicator passes its values to the Confirmation 1 indicator. The Confirmation 1 indicator passes its values to the Confirmation 2 indicator. The Confirmation 2 indicator passes its values to the Continuation indicator. The Continuation indicator passes its values to the Exit indicator, and finally, the Exit indicator passes its values to the Backtest strategy.
This chaining of indicators requires that each module conform to Loxx's GKD protocol, thereby allowing for the testing of every possible combination of technical indicators that make up the seven components of the NNFX algorithm.
What does the application of the GKD trading system look like?
Example trading system:
Backtest: Strategy with 1-3 take profits, trailing stop loss, multiple types of PnL volatility, and 2 backtesting styles
Baseline: Hull Moving Average as shown on the chart above
Volatility/Volume: Average Directional Index (ADX) as shown on the chart above
Confirmation 1: Trading Channel Index as shown on the chart above
Confirmation 2: Jurik Turning Point Oscillator
Continuation: Fisher Transform
Exit: Rex Oscillator
Each GKD indicator is denoted with a module identifier of either: GKD-BT, GKD-B, GKD-C, GKD-V, or GKD-E. This allows traders to understand to which module each indicator belongs and where each indicator fits into the GKD protocol chain.
Giga Kaleidoscope Modularized Trading System Signals (based on the NNFX algorithm)
Standard Entry
1. GKD-C Confirmation 1 Signal
2. GKD-B Baseline agrees
3. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
4. GKD-C Confirmation 2 agrees
5. GKD-V Volatility/Volume agrees
Baseline Entry
1. GKD-B Baseline signal
2. GKD-C Confirmation 1 agrees
3. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
4. GKD-C Confirmation 2 agrees
5. GKD-V Volatility/Volume agrees
6. GKD-C Confirmation 1 signal was less than 7 candles prior
Continuation Entry
1. Standard Entry, Baseline Entry, or Pullback Entry triggered previously
2. GKD-B Baseline hasn't crossed since entry signal trigger
3. GKD-C Confirmation Continuation Indicator signals
4. GKD-C Confirmation 1 agrees
5. GKD-B Baseline agrees
6. GKD-C Confirmation 2 agrees
1-Candle Rule Standard Entry
1. GKD-C Confirmation 1 signal
2. GKD-B Baseline agrees
3. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
Next Candle:
1. Price retraced (Long: close < close[1] or Short: close > close[1])
2. GKD-B Baseline agrees
3. GKD-C Confirmation 1 agrees
4. GKD-C Confirmation 2 agrees
5. GKD-V Volatility/Volume agrees
1-Candle Rule Baseline Entry
1. GKD-B Baseline signal
2. GKD-C Confirmation 1 agrees
3. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
4. GKD-C Confirmation 1 signal was less than 7 candles prior
Next Candle:
1. Price retraced (Long: close < close[1] or Short: close > close[1])
2. GKD-B Baseline agrees
3. GKD-C Confirmation 1 agrees
4. GKD-C Confirmation 2 agrees
5. GKD-V Volatility/Volume agrees
PullBack Entry
1. GKD-B Baseline signal
2. GKD-C Confirmation 1 agrees
3. Price is beyond 1.0x Volatility of Baseline
Next Candle:
1. Price is within a range of 0.2x Volatility and 1.0x Volatility of the Goldie Locks Mean
2. GKD-C Confirmation 1 agrees
3. GKD-C Confirmation 2 agrees
4. GKD-V Volatility/Volume agrees
█ Trading Channel Index
What is Trading Channel Index?
The Trading Channel Index measures the location of average daily price relative to a smoothed average of average daily price. It is derived from the average difference between these two values.
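That description reads like a CCI-style calculation, so here is a minimal Python sketch under that assumption (the typical-price source, 14-bar period, and 0.015 scale are placeholders, not the indicator's confirmed constants):

```python
import numpy as np

def tci(high, low, close, period=14, scale=0.015):
    """Typical price vs. its SMA, scaled by the mean absolute deviation."""
    tp = (np.asarray(high) + np.asarray(low) + np.asarray(close)) / 3.0
    out = np.full(tp.shape, np.nan)
    for i in range(period - 1, len(tp)):
        window = tp[i - period + 1:i + 1]
        sma = window.mean()
        mean_dev = np.abs(window - sma).mean()
        if mean_dev > 0:
            out[i] = (tp[i] - sma) / (scale * mean_dev)
    return out
```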
Requirements
Inputs
Confirmation 1 and Solo Confirmation: GKD-V Volatility/Volume indicator
Confirmation 2: GKD-C Confirmation indicator
Continuation: GKD-C Confirmation indicator
Outputs
Confirmation 2 and Solo Confirmation: GKD-E Exit indicator
Confirmation 1: GKD-C Confirmation indicator
Continuation: GKD-E Exit indicator
Additional features will be added in future releases.
This indicator is only available to ALGX Trading VIP group members. You can see the Author's Instructions below to get more information on how to get access.
GKD-V Volatility Ratio [Loxx]
Giga Kaleidoscope Volatility Ratio is a Volatility/Volume module included in Loxx's "Giga Kaleidoscope Modularized Trading System".
What is Loxx's "Giga Kaleidoscope Modularized Trading System"?
The Giga Kaleidoscope Modularized Trading System is a trading system built on the philosophy of NNFX (No Nonsense Forex) algorithmic trading.
What is an NNFX algorithmic trading strategy?
The NNFX algorithm is built on the principles of trend, momentum, and volatility. There are six core components in the NNFX trading algorithm:
1. Volatility - price volatility; e.g., Average True Range, True Range Double, Close-to-Close, etc.
2. Baseline - a moving average to identify price trend (such as "Baseline" shown on the chart above)
3. Confirmation 1 - a technical indicator used to identify trends. This should agree with the "Baseline"
4. Confirmation 2 - a technical indicator used to identify trends. This filters/verifies the trend identified by "Baseline" and "Confirmation 1"
5. Volatility/Volume - a technical indicator used to identify volatility/volume breakouts/breakdown.
6. Exit - a technical indicator used to determine when a trend is exhausted.
How does Loxx's GKD (Giga Kaleidoscope Modularized Trading System) implement the NNFX algorithm outlined above?
Loxx's GKD v1.0 system has five types of modules (indicators/strategies). These modules are:
1. GKD-BT - Backtesting module (Volatility, Number 1 in the NNFX algorithm)
2. GKD-B - Baseline module (Baseline and Volatility/Volume, Numbers 1 and 2 in the NNFX algorithm)
3. GKD-C - Confirmation 1/2 module (Confirmation 1/2, Numbers 3 and 4 in the NNFX algorithm)
4. GKD-V - Volatility/Volume module (Volatility/Volume, Number 5 in the NNFX algorithm)
5. GKD-E - Exit module (Exit, Number 6 in the NNFX algorithm)
(additional module types will be added in future releases)
Each module interacts with the others by passing data down the chain. Data is passed between modules as described below:
GKD-B => GKD-V => GKD-C(1) => GKD-C(2) => GKD-E => GKD-BT
That is, the Baseline indicator passes its data to Volatility/Volume. The Volatility/Volume indicator passes its values to the Confirmation 1 indicator. The Confirmation 1 indicator passes its values to the Confirmation 2 indicator. The Confirmation 2 indicator passes its values to the Exit indicator, and finally, the Exit indicator passes its values to the Backtest strategy.
This chaining of indicators requires that each module conform to Loxx's GKD protocol, thereby allowing for the testing of every possible combination of technical indicators that make up the six components of the NNFX algorithm.
What does the application of the GKD trading system look like?
Example trading system:
Backtest: Strategy with 1-3 take profits, trailing stop loss, multiple types of PnL volatility, and 2 backtesting styles
Baseline: Hull Moving Average
Volatility/Volume: Volatility Ratio as shown on the chart above
Confirmation 1: Vortex
Confirmation 2: Fisher Transform
Exit: Rex Oscillator
Each GKD indicator is denoted with a module identifier of either: GKD-BT, GKD-B, GKD-C, GKD-V, or GKD-E. This allows traders to understand to which module each indicator belongs and where each indicator fits into the GKD protocol chain.
Now that you have a general understanding of the NNFX algorithm and the GKD trading system, let's go over what's inside the GKD-V Volatility Ratio itself.
What is Volatility Ratio?
Volatility Ratio is a comparison between volatility and its moving average. This indicator includes 11 different types of volatility as well as 63 different moving averages.
You can read about the moving average types here:
Volatility Types Included
v1.0 Included Volatility
Close-to-Close
Close-to-Close volatility is the classic and most commonly used volatility measure, sometimes referred to as historical volatility.
Volatility is an indicator of the speed of a stock price change. A stock with high volatility is one where the price changes rapidly and with a bigger amplitude. The more volatile a stock is, the riskier it is.
Close-to-close historical volatility is calculated using only the stock's closing prices. It is the simplest volatility estimator. But in many cases it is not precise enough. Stock prices could jump considerably during a trading session and return to the open value at the end. That means a large amount of price information is not taken into account by close-to-close volatility.
Despite its drawbacks, Close-to-Close volatility is still useful in cases where the instrument doesn't have intraday prices. For example, mutual funds calculate their net asset values daily or weekly, and thus their prices are not suitable for more sophisticated volatility estimators.
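A minimal Python sketch of the close-to-close estimator (annualizing with 252 trading periods is an assumption):

```python
import numpy as np

def close_to_close_vol(close, periods_per_year=252):
    """Annualized standard deviation of log close-to-close returns."""
    r = np.diff(np.log(np.asarray(close, dtype=float)))
    return np.std(r, ddof=1) * np.sqrt(periods_per_year)
```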
Parkinson
Parkinson volatility is a volatility measure that uses the stock’s high and low price of the day.
The main difference between regular volatility and Parkinson volatility is that the latter uses the high and low prices of the day, rather than only the closing price. That is useful, as close-to-close prices could show little difference while large price movements could have happened during the day. Thus Parkinson volatility is considered to be more precise and to require less data for calculation than close-to-close volatility.
One drawback of this estimator is that it doesn't take into account price movements after the market close. Hence it systematically undervalues volatility. That drawback is addressed in the Garman-Klass volatility estimator.
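A sketch of the Parkinson estimator, which uses only the bar's high/low range:

```python
import numpy as np

def parkinson_vol(high, low, periods_per_year=252):
    """Parkinson: sigma^2 = mean(ln(H/L)^2) / (4 ln 2)."""
    hl2 = np.log(np.asarray(high) / np.asarray(low)) ** 2
    return np.sqrt(hl2.mean() / (4.0 * np.log(2.0)) * periods_per_year)
```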
Garman-Klass
Garman-Klass is a volatility estimator that incorporates the open, low, high, and close prices of a security.
Garman-Klass volatility extends Parkinson's volatility by taking into account the opening and closing price. As markets are most active during the opening and closing of a trading session, it makes volatility estimation more accurate.
Garman and Klass also assumed that the process of price change is a process of continuous diffusion (Geometric Brownian motion). However, this assumption has several drawbacks. The method is not robust for opening jumps in price and trend movements.
Despite its drawbacks, the Garman-Klass estimator is still more effective than the basic formula since it takes into account not only the price at the beginning and end of the time interval but also intraday price extremums.
Researchers Rogers and Satchell proposed a more efficient method for assessing historical volatility that takes into account price trends. See Rogers-Satchell Volatility for more detail.
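A sketch of the Garman-Klass estimator over OHLC bars:

```python
import numpy as np

def garman_klass_vol(open_, high, low, close, periods_per_year=252):
    """GK: sigma^2 = mean(0.5 * ln(H/L)^2 - (2 ln 2 - 1) * ln(C/O)^2)."""
    open_, high, low, close = (np.asarray(a, dtype=float)
                               for a in (open_, high, low, close))
    var = (0.5 * np.log(high / low) ** 2
           - (2.0 * np.log(2.0) - 1.0) * np.log(close / open_) ** 2)
    return np.sqrt(var.mean() * periods_per_year)
```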
Rogers-Satchell
Rogers-Satchell is an estimator for measuring the volatility of securities with an average return not equal to zero.
Unlike the Parkinson and Garman-Klass estimators, Rogers-Satchell incorporates a drift term (mean return not equal to zero). As a result, it provides a better volatility estimation when the underlying is trending.
The main disadvantage of this method is that it does not take into account price movements between trading sessions. It means an underestimation of volatility since price jumps periodically occur in the market precisely at the moments between sessions.
A more comprehensive estimator that also considers the gaps between sessions was developed based on the Rogers-Satchell formula in the 2000s by Yang and Zhang. See Yang Zhang Volatility for more detail.
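A sketch of the Rogers-Satchell estimator, whose per-bar term is ln(H/C)·ln(H/O) + ln(L/C)·ln(L/O):

```python
import numpy as np

def rogers_satchell_vol(open_, high, low, close, periods_per_year=252):
    """RS: drift-robust estimator built from intrabar price ratios."""
    open_, high, low, close = (np.asarray(a, dtype=float)
                               for a in (open_, high, low, close))
    var = (np.log(high / close) * np.log(high / open_)
           + np.log(low / close) * np.log(low / open_))
    return np.sqrt(var.mean() * periods_per_year)
```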
Yang-Zhang
Yang Zhang is a historical volatility estimator that handles both opening jumps and the drift and has a minimum estimation error.
We can think of Yang-Zhang volatility as the combination of the overnight (close-to-open) volatility and a weighted average of the Rogers-Satchell volatility and the day's open-to-close volatility. It is considered to be 14 times more efficient than the close-to-close estimator.
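A sketch of the Yang-Zhang estimator combining its three variance components (the k weight below uses the common 0.34 / (1.34 + (N + 1)/(N - 1)) choice, an assumption here):

```python
import numpy as np

def yang_zhang_vol(open_, high, low, close, periods_per_year=252):
    """YZ: overnight variance + k * open-to-close variance + (1 - k) * RS variance."""
    open_, high, low, close = (np.asarray(a, dtype=float)
                               for a in (open_, high, low, close))
    n = len(close)
    sigma_o = np.var(np.log(open_[1:] / close[:-1]), ddof=1)   # overnight
    sigma_c = np.var(np.log(close[1:] / open_[1:]), ddof=1)    # open-to-close
    rs = (np.log(high[1:] / close[1:]) * np.log(high[1:] / open_[1:])
          + np.log(low[1:] / close[1:]) * np.log(low[1:] / open_[1:])).mean()
    k = 0.34 / (1.34 + (n + 1.0) / (n - 1.0))
    return np.sqrt((sigma_o + k * sigma_c + (1.0 - k) * rs) * periods_per_year)
```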
Garman-Klass-Yang-Zhang
Garman-Klass-Yang-Zhang (GKYZ) volatility estimator consists of using the returns of open, high, low, and closing prices in its calculation.
GKYZ volatility estimator takes into account overnight jumps but not the trend, i.e. it assumes that the underlying asset follows a GBM process with zero drift. Therefore the GKYZ volatility estimator tends to overestimate the volatility when the drift is different from zero. However, for a GBM process, this estimator is eight times more efficient than the close-to-close volatility estimator.
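A sketch of GKYZ, which is Garman-Klass plus an overnight-jump term:

```python
import numpy as np

def gkyz_vol(open_, high, low, close, periods_per_year=252):
    """GKYZ: ln(O/C_prev)^2 + 0.5 ln(H/L)^2 - (2 ln 2 - 1) ln(C/O)^2."""
    open_, high, low, close = (np.asarray(a, dtype=float)
                               for a in (open_, high, low, close))
    var = (np.log(open_[1:] / close[:-1]) ** 2
           + 0.5 * np.log(high[1:] / low[1:]) ** 2
           - (2.0 * np.log(2.0) - 1.0) * np.log(close[1:] / open_[1:]) ** 2)
    return np.sqrt(var.mean() * periods_per_year)
```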
Exponential Weighted Moving Average
The Exponentially Weighted Moving Average (EWMA) is a quantitative or statistical measure used to model or describe a time series. The EWMA is widely used in finance, the main applications being technical analysis and volatility modeling.
The moving average is designed as such that older observations are given lower weights. The weights fall exponentially as the data point gets older – hence the name exponentially weighted.
The only decision a user of the EWMA must make is the parameter lambda. The parameter decides how important the current observation is in the calculation of the EWMA. The higher the value of lambda, the more closely the EWMA tracks the original time series.
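A sketch of the EWMA variance recursion (λ = 0.94 is the common RiskMetrics daily choice, used here as an assumption):

```python
def ewma_variance(returns, lam=0.94):
    """var_t = lam * var_{t-1} + (1 - lam) * r_t^2, seeded with the first return."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return var
```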
Standard Deviation of Log Returns
This is the simplest calculation of volatility. It's the standard deviation of ln(close/close(1)).
Pseudo GARCH(2,2)
This is calculated using a short- and long-run mean of variance multiplied by θ.
θ·avg(var; M) + (1 − θ)·avg(var; N) = 2θ·var/(M + 1 − (M − 1)L) + 2(1 − θ)·var/(N + 1 − (N − 1)L)
Solving for θ can be done by minimizing the mean squared error of estimation; that is, regressing L⁻¹var − avg(var; N) against avg(var; M) − avg(var; N) and using the resulting beta estimate as θ.
Average True Range
The average true range (ATR) is a technical analysis indicator, introduced by market technician J. Welles Wilder Jr. in his book New Concepts in Technical Trading Systems, that measures market volatility by decomposing the entire range of an asset price for that period.
The true range indicator is taken as the greatest of the following: current high less the current low; the absolute value of the current high less the previous close; and the absolute value of the current low less the previous close. The ATR is then a moving average, generally using 14 days, of the true ranges.
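A sketch of the true range and Wilder's smoothing:

```python
import numpy as np

def atr(high, low, close, period=14):
    """Wilder's ATR: smoothed average of the true range."""
    high, low, close = (np.asarray(a, dtype=float) for a in (high, low, close))
    prev_close = np.roll(close, 1)
    tr = np.maximum(high - low,
                    np.maximum(np.abs(high - prev_close),
                               np.abs(low - prev_close)))
    tr[0] = high[0] - low[0]                  # no previous close on bar 0
    value = tr[:period].mean()                # seed with a simple average
    for x in tr[period:]:
        value = (value * (period - 1) + x) / period  # Wilder smoothing
    return value
```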
True Range Double
A special case of ATR that attempts to correct for volatility skew.
Signals
1. Traditional: The traditional signal indicates whether volatility is high or low; it is non-directional. When the Volatility Ratio is above 1, there is enough volatility to trade long or short. This signal type has a "bars rising" option that requires the Volatility Ratio to be not only above 1 but also rising for the last XX bars.
2. Crossing: This is experimental. When the ratio crosses up above 1 or down below 1, then, and only then, is there enough volatility to trade long or short. This is also non-directional.
3. Both Traditional and Crossing
X-bar Rule
If a signal registers XX bars ago, then the signal is still valid. This is an optional feature.
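The three signal types and the X-bar rule can be sketched as follows (the volatility series, moving-average type, periods, and bar counts are placeholders; the indicator itself offers 11 volatility types and 63 moving averages):

```python
import numpy as np

def volatility_ratio_signals(vol, ma_period=20, rising_bars=2, xbar=5):
    """Traditional: ratio > 1 (optionally rising); Crossing: ratio crosses above 1;
    X-bar rule: a traditional signal within the last `xbar` bars is still valid."""
    vol = np.asarray(vol, dtype=float)
    ma = np.convolve(vol, np.ones(ma_period) / ma_period, mode="valid")
    ratio = vol[ma_period - 1:] / ma
    traditional = ratio > 1.0
    for k in range(1, rising_bars + 1):        # optionally require a rising ratio
        traditional[k:] &= ratio[k:] > ratio[:-k]
    cross_up = (ratio[1:] > 1.0) & (ratio[:-1] <= 1.0)
    still_valid = bool(traditional[-xbar:].any())
    return traditional, cross_up, still_valid
```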
Other things to note
The GKD trading system requires that a GKD-V indicator be present in the indicator chain, but the GKD-V indicator doesn't need to be active. You can turn on/off the Volatility Ratio as you wish so you can backtest your trading strategy with the filter on or off.
Additional features will be added in future releases.
This indicator is only available to ALGX Trading VIP group members. You can see the Author's Instructions below to get more information on how to get access.
GKD-B Baseline [Loxx]
Giga Kaleidoscope Baseline is a Baseline module included in Loxx's "Giga Kaleidoscope Modularized Trading System".
What is Loxx's "Giga Kaleidoscope Modularized Trading System"?
The Giga Kaleidoscope Modularized Trading System is a trading system built on the philosophy of NNFX (No Nonsense Forex) algorithmic trading.
What is an NNFX algorithmic trading strategy?
The NNFX algorithm is built on the principles of trend, momentum, and volatility. There are six core components in the NNFX trading algorithm:
1. Volatility - price volatility; e.g., Average True Range, True Range Double, Close-to-Close, etc.
2. Baseline - a moving average to identify price trend (such as "Baseline" shown on the chart above)
3. Confirmation 1 - a technical indicator used to identify trends. This should agree with the "Baseline"
4. Confirmation 2 - a technical indicator used to identify trends. This filters/verifies the trend identified by "Baseline" and "Confirmation 1"
5. Volatility/Volume - a technical indicator used to identify volatility/volume breakouts/breakdown.
6. Exit - a technical indicator used to determine when a trend is exhausted.
How does Loxx's GKD (Giga Kaleidoscope Modularized Trading System) implement the NNFX algorithm outlined above?
Loxx's GKD v1.0 system has five types of modules (indicators/strategies). These modules are:
1. GKD-BT - Backtesting module (Volatility, Number 1 in the NNFX algorithm)
2. GKD-B - Baseline module (Baseline and Volatility/Volume, Numbers 1 and 2 in the NNFX algorithm)
3. GKD-C - Confirmation 1/2 module (Confirmation 1/2, Numbers 3 and 4 in the NNFX algorithm)
4. GKD-V - Volatility/Volume module (Volatility/Volume, Number 5 in the NNFX algorithm)
5. GKD-E - Exit module (Exit, Number 6 in the NNFX algorithm)
(additional module types will be added in future releases)
Each module interacts with the others by passing data down the chain. Data is passed between modules as described below:
GKD-B => GKD-V => GKD-C(1) => GKD-C(2) => GKD-E => GKD-BT
That is, the Baseline indicator passes its data to Volatility/Volume. The Volatility/Volume indicator passes its values to the Confirmation 1 indicator. The Confirmation 1 indicator passes its values to the Confirmation 2 indicator. The Confirmation 2 indicator passes its values to the Exit indicator, and finally, the Exit indicator passes its values to the Backtest strategy.
This chaining of indicators requires that each module conform to Loxx's GKD protocol, thereby allowing for the testing of every possible combination of technical indicators that make up the six components of the NNFX algorithm.
What does the application of the GKD trading system look like?
Example trading system:
Backtest: Strategy with 1-3 take profits, trailing stop loss, multiple types of PnL volatility, and 2 backtesting styles
Baseline: Hull Moving Average as shown on the chart above
Volatility/Volume: Jurik Volty
Confirmation 1: Vortex
Confirmation 2: Fisher Transform
Exit: Rex Oscillator
Each GKD indicator is denoted with a module identifier of either: GKD-BT, GKD-B, GKD-C, GKD-V, or GKD-E. This allows traders to understand to which module each indicator belongs and where each indicator fits into the GKD protocol chain.
Now that you have a general understanding of the NNFX algorithm and the GKD trading system, let's go over what's inside the GKD-B Baseline itself.
GKD Baseline Special Features and Notable Inputs
GKD Baseline v1.0 includes 63 different moving averages:
Adaptive Moving Average - AMA
ADXvma - Average Directional Volatility Moving Average
Ahrens Moving Average
Alexander Moving Average - ALXMA
Deviation Scaled Moving Average - DSMA
Donchian
Double Exponential Moving Average - DEMA
Double Smoothed Exponential Moving Average - DSEMA
Double Smoothed FEMA - DSFEMA
Double Smoothed Range Weighted EMA - DSRWEMA
Double Smoothed Wilders EMA - DSWEMA
Double Weighted Moving Average - DWMA
Ehlers Optimal Tracking Filter - EOTF
Exponential Moving Average - EMA
Fast Exponential Moving Average - FEMA
Fractal Adaptive Moving Average - FRAMA
Generalized DEMA - GDEMA
Generalized Double DEMA - GDDEMA
Hull Moving Average (Type 1) - HMA1
Hull Moving Average (Type 2) - HMA2
Hull Moving Average (Type 3) - HMA3
Hull Moving Average (Type 4) - HMA4
IE/2 - Early T3 by Tim Tillson
Integral of Linear Regression Slope - ILRS
Instantaneous Trendline
Kalman Filter
Kaufman Adaptive Moving Average - KAMA
Laguerre Filter
Leader Exponential Moving Average
Linear Regression Value - LSMA (Least Squares Moving Average)
Linear Weighted Moving Average - LWMA
McGinley Dynamic
McNicholl EMA
Non-Lag Moving Average
Ocean NMA Moving Average - ONMAMA
Parabolic Weighted Moving Average
Probability Density Function Moving Average - PDFMA
Quadratic Regression Moving Average - QRMA
Regularized EMA - REMA
Range Weighted EMA - RWEMA
Recursive Moving Trendline
Simple Decycler - SDEC
Simple Jurik Moving Average - SJMA
Simple Moving Average - SMA
Sine Weighted Moving Average
Smoothed LWMA - SLWMA
Smoothed Moving Average - SMMA
Smoother
Super Smoother
T3
Three-pole Ehlers Butterworth
Three-pole Ehlers Smoother
Triangular Moving Average - TMA
Triple Exponential Moving Average - TEMA
Two-pole Ehlers Butterworth
Two-pole Ehlers smoother
Variable Index Dynamic Average - VIDYA
Variable Moving Average - VMA
Volume Weighted EMA - VEMA
Volume Weighted Moving Average - VWMA
Zero-Lag DEMA - Zero Lag Double Exponential Moving Average
Zero-Lag Moving Average
Zero Lag TEMA - Zero Lag Triple Exponential Moving Average
Adaptive Moving Average - AMA
Description. The Adaptive Moving Average (AMA) is a moving average that changes its sensitivity to price moves depending on the calculated volatility. It becomes more sensitive during periods when the price is moving smoothly in a certain direction and becomes less sensitive when the price is volatile.
ADXvma - Average Directional Volatility Moving Average
Linnsoft's ADXvma formula is a volatility-based moving average, with the volatility being determined by the value of the ADX indicator.
The ADXvma replaces the SMA in Chande's CMO with an EMA; it then uses a few more layers of EMA smoothing before the "Volatility Index" is calculated.
A side effect is that those additional layers slow the ADXvma down compared to Chande's Variable Index Dynamic Average (VIDYA).
The ADXVMA provides support during uptrends and resistance during downtrends and will stay flat for longer, but will create some of the most accurate market signals when it decides to move.
Ahrens Moving Average
Richard D. Ahrens's Moving Average promises "Smoother Data" that isn't influenced by the occasional price spike. It works by using the Open and the Close in his formula so that the only time the Ahrens Moving Average will change is when the candlestick is either making new highs or new lows.
Alexander Moving Average - ALXMA
This Moving Average uses an elaborate smoothing formula and utilizes a 7 period Moving Average. It corresponds to fitting a second-order polynomial to seven consecutive observations. This moving average is rarely used in trading but is interesting as this Moving Average has been applied to diffusion indexes that tend to be very volatile.
Deviation Scaled Moving Average - DSMA
The Deviation-Scaled Moving Average is a data smoothing technique that acts like an exponential moving average with a dynamic smoothing coefficient. The smoothing coefficient is automatically updated based on the magnitude of price changes. In the Deviation-Scaled Moving Average, the standard deviation from the mean is chosen to be the measure of this magnitude. The resulting indicator provides substantial smoothing of the data even when price changes are small while quickly adapting to these changes.
Donchian
Donchian Channels are three lines generated by moving average calculations that comprise an indicator formed by upper and lower bands around a midrange or median band. The upper band marks the highest price of a security over N periods while the lower band marks the lowest price of a security over N periods.
Double Exponential Moving Average - DEMA
The Double Exponential Moving Average (DEMA) combines a smoothed EMA and a single EMA to provide a low-lag indicator. Its primary purpose is to reduce the amount of "lagging entry" opportunities, and like all Moving Averages, the DEMA confirms uptrends whenever price crosses on top of it and closes above it, and confirms downtrends when the price crosses under it and closes below it - but with significantly less lag.
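For reference, the standard DEMA construction as a textbook Python sketch (the GKD version may differ in details such as smoothing and source type):

```python
import numpy as np

def ema(x, period):
    """Standard exponential moving average with alpha = 2 / (period + 1)."""
    alpha = 2.0 / (period + 1.0)
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1.0 - alpha) * out[i - 1]
    return out

def dema(x, period):
    """DEMA = 2 * EMA(x) - EMA(EMA(x)): cancels much of the single EMA's lag."""
    e = ema(np.asarray(x, dtype=float), period)
    return 2.0 * e - ema(e, period)
```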
Double Smoothed Exponential Moving Average - DSEMA
The Double Smoothed Exponential Moving Average is a lot less laggy compared to a traditional EMA. It's also considered a leading indicator compared to the EMA, and is best utilized whenever smoothness and speed of reaction to market changes are required.
Double Smoothed FEMA - DSFEMA
Same as the Double Exponential Moving Average (DEMA), but uses a faster version of EMA for its calculation.
Double Smoothed Range Weighted EMA - DSRWEMA
The range weighted exponential moving average (EMA) is, unlike the "regular" range weighted average, calculated in a different way. Even though the basis - the range weighting - is the same, how it is calculated is completely different. By definition, this type of EMA is calculated as a ratio of EMA of price*weight / EMA of weight. The results are very different, and the two should be considered completely different types of averages. The greater-than-EMA responsiveness to price changes when ranges increase remains in this EMA too, and in those cases this EMA clearly leads the "regular" EMA. This version includes double smoothing.
Double Smoothed Wilders EMA - DSWEMA
Welles Wilder frequently used one "special" case of the EMA (Exponential Moving Average) that is, for that reason, sometimes called Wilder's EMA. This version adds double smoothing to Wilder's EMA in order to make it "faster" (more responsive to market prices than the original) while still keeping very smooth values.
Double Weighted Moving Average - DWMA
The double weighted moving average is an LWMA (Linear Weighted Moving Average). Instead of doing one cycle for calculating the LWMA, the indicator cycles the loop 2 times. That produces smoother values than the original LWMA.
Ehlers Optimal Tracking Filter - EOTF
Ehlers's Optimum Tracking Filter quickly adjusts to rapid shifts in the price and yet is relatively smooth when the price has a sideways action. The operation of this filter is similar to Kaufman's Adaptive Moving Average.
Exponential Moving Average - EMA
The EMA places more significance on recent data points and moves closer to price than the SMA (Simple Moving Average). It reacts faster to volatility due to its emphasis on recent data and is known for its ability to give greater weight to recent and more relevant data. The EMA is therefore seen as an enhancement over the SMA.
Fast Exponential Moving Average - FEMA
An Exponential Moving Average with a short look-back period.
Fractal Adaptive Moving Average - FRAMA
The Fractal Adaptive Moving Average by John Ehlers is an intelligent adaptive Moving Average which takes the importance of price changes into account and follows price closely enough to display significant moves whilst remaining flat if price ranges. The FRAMA does this by dynamically adjusting the look-back period based on the market's fractal geometry.
Generalized DEMA - GDEMA
The double exponential moving average (DEMA) was developed by Patrick Mulloy in an attempt to reduce the amount of lag time found in traditional moving averages. It was first introduced in the February 1994 issue of the magazine Technical Analysis of Stocks & Commodities in Mulloy's article "Smoothing Data with Faster Moving Averages." Instead of using a fixed multiplication factor in the final DEMA formula, the generalized version allows you to change it. By varying the "volume factor" from 0 to 1 you apply different multiplications, producing DEMAs with different "speed" - the higher the volume factor, the "faster" the DEMA (but the less smooth its slope). The volume factor is limited in the calculation to 1, since any volume factor larger than 1 increases overshooting to the extent that it makes the indicator unusable.
Generalized Double DEMA - GDDEMA
The double exponential moving average (DEMA) was developed by Patrick Mulloy in an attempt to reduce the amount of lag time found in traditional moving averages. It was first introduced in the February 1994 issue of the magazine Technical Analysis of Stocks & Commodities in Mulloy's article "Smoothing Data with Faster Moving Averages". This is an extension of the Generalized DEMA using Tim Tillson's (the inventor of T3) idea, and it uses GDEMA of GDEMA for calculation (which is the "middle step" of the T3 calculation). Since there are no versions showing that middle step, this version covers that too. The result is smoother than the Generalized DEMA but less smooth than T3 - one has to do some experimenting in order to find the optimal way to use it, but in any case, since it is "faster" than T3 and still smooth, it looks like a good compromise between speed and smoothness.
Hull Moving Average (Type 1) - HMA1
Alan Hull's HMA makes use of weighted moving averages to prioritize recent values and greatly reduce lag whilst maintaining the smoothness of a traditional Moving Average. For this reason, it's seen as a well-suited Moving Average for identifying entry points. This version uses SMA for smoothing.
Hull Moving Average (Type 2) - HMA2
Alan Hull's HMA makes use of weighted moving averages to prioritize recent values and greatly reduce lag whilst maintaining the smoothness of a traditional Moving Average. For this reason, it's seen as a well-suited Moving Average for identifying entry points. This version uses EMA for smoothing.
Hull Moving Average (Type 3) - HMA3
Alan Hull's HMA makes use of weighted moving averages to prioritize recent values and greatly reduce lag whilst maintaining the smoothness of a traditional Moving Average. For this reason, it's seen as a well-suited Moving Average for identifying entry points. This version uses LWMA for smoothing.
Hull Moving Average (Type 4) - HMA4
Alan Hull's HMA makes use of weighted moving averages to prioritize recent values and greatly reduce lag whilst maintaining the smoothness of a traditional Moving Average. For this reason, it's seen as a well-suited Moving Average for identifying entry points. This version uses SMMA for smoothing.
IE/2 - Early T3 by Tim Tillson and T3 New
T3 is basically an EMA on steroids. You can read about T3 here:
Integral of Linear Regression Slope - ILRS
A Moving Average where the slope of a linear regression line is simply integrated as it is fitted in a moving window of length N (a natural number) across the data. The derivative of ILRS is the linear regression slope. ILRS is not the same as an SMA (Simple Moving Average) of length N, which is actually the midpoint of the linear regression line as it moves across the data.
Instantaneous Trendline
The Instantaneous Trendline is created by removing the dominant cycle component from the price information which makes this Moving Average suitable for medium to long-term trading.
Kalman Filter
Kalman filter is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies. This means that the filter was originally designed to work with noisy data. Also, it is able to work with incomplete data. Another advantage is that it is designed for and applied in dynamic systems; our price chart belongs to such systems. This version is true to the original design of the trade-ready Kalman Filter where velocity is the triggering mechanism.
Kalman Filter is a more accurate smoothing/prediction algorithm than the moving average because it is adaptive: it accounts for estimation errors and tries to adjust its predictions from the information it learned in the previous stage. Theoretically, Kalman Filter consists of measurement and transition components.
Kaufman Adaptive Moving Average - KAMA
Developed by Perry Kaufman, Kaufman's Adaptive Moving Average (KAMA) is a moving average designed to account for market noise or volatility. KAMA will closely follow prices when the price swings are relatively small and the noise is low.
Laguerre Filter
The Laguerre Filter is a smoothing filter based on Laguerre polynomials. The filter requires the current price, three prior prices, and a user-defined factor called Alpha to complete its calculation.
Adjusting the Alpha coefficient increases or decreases its lag and smoothness.
Leader Exponential Moving Average
The Leader EMA was created by Giorgos E. Siligardos who created a Moving Average which was able to eliminate lag altogether whilst maintaining some smoothness. It was first described during his research paper "MACD Leader" where he applied this to the MACD to improve its signals and remove its lagging issue. This filter uses his leading MACD's "modified EMA" and can be used as a zero lag filter.
Linear Regression Value - LSMA (Least Squares Moving Average)
LSMA as a Moving Average is based on plotting the end point of the linear regression line. It compares the current value to the prior value, and a determination is made of a possible trend, e.g., the linear regression line is pointing up or down.
Linear Weighted Moving Average - LWMA
LWMA reacts to price quicker than the SMA and EMA. Although it's similar to the Simple Moving Average, the difference is that a weight coefficient is multiplied to the price, which means the most recent price has the highest weighting and each prior price has progressively less weight. The weights drop in a linear fashion.
McGinley Dynamic
John McGinley created this Moving Average to track prices better than traditional Moving Averages. It does this by incorporating an automatic adjustment factor into its formula, which speeds (or slows) the indicator in trending, or ranging, markets.
McNicholl EMA
Dennis McNicholl developed this Moving Average to use as the center line for his "Better Bollinger Bands" indicator; it was successful because it responded better to volatility changes than the standard SMA and managed to avoid common whipsaws.
Non-lag moving average
The Non Lag Moving Average follows price closely and gives very quick signals as well as early signals of price change. It should not be used on its own, but as an additional confluence tool for early signals.
Ocean NMA Moving Average - ONMAMA
Created by Jim Sloman, the NMA is a moving average that automatically adjusts to volatility without being programmed to do so. For more info, read his guide "Ocean Theory, an Introduction"
Parabolic Weighted Moving Average
The Parabolic Weighted Moving Average is a variation of the Linear Weighted Moving Average . The Linear Weighted Moving Average calculates the average by assigning different weights to each element in its calculation. The Parabolic Weighted Moving Average is a variation that allows weights to be changed to form a parabolic curve. It is done simply by using the Power parameter of this indicator.
Probability Density Function Moving Average - PDFMA
Probability density function based MA is a sort of weighted moving average that uses probability density function to calculate the weights. By its nature it is similar to a lot of digital filters.
Quadratic Regression Moving Average - QRMA
A quadratic regression is the process of finding the equation of the parabola that best fits a set of data. This moving average is an obscure concept that was posted to Forex forums in around 2008.
Regularized EMA - REMA
The regularized exponential moving average (REMA) by Chris Satchwell is a variation on the EMA (see Exponential Moving Average) designed to be smoother but not introduce too much extra lag.
Range Weighted EMA - RWEMA
This indicator is a variation of the range weighted EMA. The variation comes from a possible need to make that indicator a bit less "noisy" when it comes to slope changes. The method used for calculating this variation is the method described by Lee Leibfarth in his article "Trading With An Adaptive Price Zone".
Recursive Moving Trendline
Dennis Meyers's Recursive Moving Trendline uses a recursive (repeated application of a rule) polynomial fit, a technique that uses a small number of past price estimates plus today's price to predict tomorrow's price.
Simple Decycler - SDEC
The Ehlers Simple Decycler study is a virtually zero-lag technical indicator proposed by John F. Ehlers. The original idea behind this study (and several others created by John F. Ehlers) is that market data can be considered a continuum of cycle periods with different cycle amplitudes. Thus, trending periods can be considered segments of longer cycles, or, in other words, low-frequency segments. Applying the right filter might help identify these segments.
Simple Loxx Moving Average - SLMA
A three-stage moving average combining an adaptive EMA, a Kalman Filter, and a Kaufman adaptive filter.
Simple Moving Average - SMA
The SMA calculates the average of a range of prices by adding recent prices and then dividing that figure by the number of time periods in the calculation average. It is the most basic Moving Average which is seen as a reliable tool for starting off with Moving Average studies. As reliable as it may be, the basic moving average will work better when it's enhanced into an EMA .
Sine Weighted Moving Average
The Sine Weighted Moving Average assigns the most weight at the middle of the data set. It does this by weighting from the first half of a Sine Wave Cycle and the most weighting is given to the data in the middle of that data set. The Sine WMA closely resembles the TMA (Triangular Moving Average).
Smoothed LWMA - SLWMA
A smoothed version of the LWMA
Smoothed Moving Average - SMMA
The Smoothed Moving Average is similar to the Simple Moving Average (SMA), but aims to reduce noise rather than reduce lag. SMMA takes all prices into account and uses a long lookback period. Due to this, it's seen as an accurate yet laggy Moving Average.
Smoother
The Smoother filter is a faster-reacting smoothing technique which generates considerably less lag than the SMMA (Smoothed Moving Average). It gives earlier signals but can also create false signals due to its earlier reactions. This filter is sometimes mistaken for the superior Jurik Smoothing algorithm.
Super Smoother
The Super Smoother filter uses John Ehlers's "Super Smoother", which consists of a two-pole Butterworth filter combined with a 2-bar SMA (Simple Moving Average) that suppresses the 22050 Hz Nyquist frequency: a characteristic of a sampler, which converts a continuous function or signal into a discrete sequence.
Three-pole Ehlers Butterworth
The 3-pole Ehlers Butterworth and the two-pole Butterworth are both superior alternatives to the EMA and SMA. They aim to produce less lag whilst maintaining accuracy. The 2-pole filter will give you a better approximation of price, whereas the 3-pole filter has superior smoothing.
Three-pole Ehlers smoother
The 3 pole Ehlers smoother works almost as close to price as the above mentioned 3 Pole Ehlers Butterworth. It acts as a strong baseline for signals but removes some noise. Side by side, it hardly differs from the Three Pole Ehlers Butterworth but when examined closely, it has better overshoot reduction compared to the 3 pole Ehlers Butterworth.
Triangular Moving Average - TMA
The TMA is similar to the EMA but uses a different weighting scheme. Exponential and weighted Moving Averages assign weight to the most recent price data. Simple moving averages assign the weight equally across all the price data. With a TMA (Triangular Moving Average), the data is double smoothed (averaged twice), so the majority of the weight is assigned to the middle portion of the data.
Triple Exponential Moving Average - TEMA
The TEMA uses multiple EMA calculations as well as subtracting lag to create a tool which can be used for scalping pullbacks. As it follows price closely, its signals are considered very noisy and should only be used in extremely fast-paced trading conditions.
Two-pole Ehlers Butterworth
The 2 pole Ehlers Butterworth (as well as the three pole Butterworth mentioned above) is another filter that cuts out the noise and follows the price closely. The 2 pole is seen as a faster, leading filter over the 3 pole and follows price a bit more closely. Analysts will utilize both a 2 pole and a 3 pole Butterworth on the same chart using the same period, but having both on chart allows its crosses to be traded.
Two-pole Ehlers smoother
A smoother version of the Two-pole Ehlers Butterworth, and faster than the 3-pole version. It does a decent job at cutting out market noise whilst emphasizing a closer following to price than the 3-pole Ehlers.
Variable Index Dynamic Average - VIDYA
The Variable Index Dynamic Average Technical Indicator (VIDYA) was developed by Tushar Chande. It is an original method of calculating the Exponential Moving Average (EMA) with a dynamically changing period of averaging.
Variable Moving Average - VMA
The Variable Moving Average (VMA) is a study that uses an Exponential Moving Average that automatically adjusts its smoothing factor according to market volatility.
Volume Weighted EMA - VEMA
An EMA that uses a volume and price weighted calculation instead of the standard price input.
Volume Weighted Moving Average - VWMA
A Volume Weighted Moving Average is a moving average where more weight is given to bars with heavy volume than with light volume. Thus the value of the moving average will be closer to where most trading actually happened than it otherwise would be without being volume weighted.
Zero-Lag DEMA - Zero Lag Double Exponential Moving Average
John Ehlers's Zero Lag DEMA's aim is to eliminate the inherent lag associated with all trend following indicators which average a price over time. Because this is a Double Exponential Moving Average with Zero Lag, it has a tendency to overshoot and create a lot of false signals for swing trading. It can however be used for quick scalping or as a secondary indicator for confluence.
Zero-Lag Moving Average
The Zero Lag Moving Average is described by its creator, John Ehlers, as a Moving Average with absolutely no delay. And it's for this reason that this filter will cause a lot of abrupt signals which will not be ideal for medium- to long-term traders. This filter is designed to follow price as closely as possible whilst de-lagging data instead of basing it on regular data. It does this by attempting to remove the cumulative effect of the Moving Average.
Zero-Lag TEMA - Zero Lag Triple Exponential Moving Average
Just like the Zero Lag DEMA, this filter will give you the fastest signals out of all the Zero Lag Moving Averages. This is useful for scalping but dangerous for medium- to long-term traders, especially during market volatility and news events. Having no lag, this filter also has no smoothing in its signals and can cause some very bizarre behavior when applied to certain indicators.
Exotic Triggers
This version of Baseline allows the user to select from exotic or source triggers. An exotic trigger determines trend by either slope or some other mechanism that is special to each moving average. A source trigger is one of 32 different source types from Loxx's Exotic Source Types. You can read about these source types here:
Volatility Goldie Locks Zone
This volatility filter is the standard first-pass filter used for all NNFX systems, in addition to the volatility/volume filter used in step 5. For this filter, price must fall into a range of maximum and minimum values calculated using multiples of volatility. Unlike standard NNFX systems, this version of volatility filtering is separated from the core Baseline and uses its own moving average with Loxx's Exotic Source Types. The green and red dots at the top of the chart denote whether a candle qualifies for either a long or a short, respectively. The green and red triangles at the bottom of the chart denote whether the trigger has crossed up or down and qualifies inside the Goldie Locks zone. White coloring of the Goldie Locks Zone mean line marks where volatility is too low to trade.
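A minimal sketch of that zone check (the 0.2x and 1.0x multipliers come from the entry rules above; the mean and volatility inputs are whatever the user configures):

```python
def in_goldie_locks_zone(price, mean, vol, lower_mult=0.2, upper_mult=1.0):
    """Price must sit between lower_mult and upper_mult volatilities of the mean."""
    dist = abs(price - mean)
    return lower_mult * vol <= dist <= upper_mult * vol

# Example: mean 100, volatility 2 -> tradable band is 0.4 to 2.0 away from the mean.
print(in_goldie_locks_zone(101.0, 100.0, 2.0))  # True
```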
Volatility Types Included
v1.0 Included Volatility
Close-to-Close
Close-to-Close volatility is the classic and most commonly used volatility measure, sometimes referred to as historical volatility.
Volatility is an indicator of the speed of a stock price change. A stock with high volatility is one where the price changes rapidly and with a bigger amplitude. The more volatile a stock is, the riskier it is.
Close-to-close historical volatility is calculated using only the stock's closing prices. It is the simplest volatility estimator. But in many cases it is not precise enough. Stock prices could jump considerably during a trading session and return to the open value at the end. That means a large amount of price information is not taken into account by close-to-close volatility.
Despite its drawbacks, Close-to-Close volatility is still useful in cases where the instrument doesn't have intraday prices. For example, mutual funds calculate their net asset values daily or weekly, and thus their prices are not suitable for more sophisticated volatility estimators.
Parkinson
Parkinson volatility is a volatility measure that uses the stock’s high and low price of the day.
The main difference between regular volatility and Parkinson volatility is that the latter uses high and low prices for a day, rather than only the closing price. That is useful as close to close prices could show little difference while large price movements could have happened during the day. Thus Parkinson's volatility is considered to be more precise and requires less data for calculation than the close-close volatility .
One drawback of this estimator is that it doesn't take into account price movements after market close. Hence it systematically undervalues volatility . That drawback is taken into account in the Garman-Klass's volatility estimator.
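A matching sketch of the Parkinson estimator under the same assumptions (daily bars, 252 trading days; illustrative only):
```python
import numpy as np

def parkinson_vol(highs, lows, trading_days=252):
    """Annualized Parkinson volatility from daily highs and lows."""
    h, l = np.asarray(highs), np.asarray(lows)
    daily_var = (np.log(h / l) ** 2).mean() / (4.0 * np.log(2.0))
    return np.sqrt(daily_var * trading_days)
```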
Garman-Klass
Garman Klass is a volatility estimator that incorporates open, low, high, and close prices of a security.
Garman-Klass volatility extends Parkinson's volatility by taking into account the opening and closing price. As markets are most active during the opening and closing of a trading session, it makes volatility estimation more accurate.
Garman and Klass also assumed that the process of price change is a process of continuous diffusion (Geometric Brownian motion). However, this assumption has several drawbacks. The method is not robust for opening jumps in price and trend movements.
Despite its drawbacks, the Garman-Klass estimator is still more effective than the basic formula since it takes into account not only the price at the beginning and end of the time interval but also intraday price extremums.
Researchers Rogers and Satchell have proposed a more efficient method for assessing historical volatility that takes into account price trends. See Rogers-Satchell Volatility for more detail.
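Before moving on, a sketch of the Garman-Klass estimator itself, using the standard textbook weights (an illustration, not the indicator's source):
```python
import numpy as np

def garman_klass_vol(opens, highs, lows, closes, trading_days=252):
    """Annualized Garman-Klass volatility from OHLC bars."""
    o, h, l, c = map(np.asarray, (opens, highs, lows, closes))
    daily_var = (0.5 * np.log(h / l) ** 2
                 - (2.0 * np.log(2.0) - 1.0) * np.log(c / o) ** 2).mean()
    return np.sqrt(daily_var * trading_days)
```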
Rogers-Satchell
Rogers-Satchell is an estimator for measuring the volatility of securities with an average return not equal to zero.
Unlike Parkinson and Garman-Klass estimators, Rogers-Satchell incorporates drift term (mean return not equal to zero). As a result, it provides a better volatility estimation when the underlying is trending.
The main disadvantage of this method is that it does not take into account price movements between trading sessions. It means an underestimation of volatility since price jumps periodically occur in the market precisely at the moments between sessions.
A more comprehensive estimator that also considers the gaps between sessions was developed based on the Rogers-Satchell formula in the 2000s by Yang-Zhang. See Yang Zhang Volatility for more detail.
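A sketch of the Rogers-Satchell estimator, which is drift-independent by construction (illustrative only):
```python
import numpy as np

def rogers_satchell_vol(opens, highs, lows, closes, trading_days=252):
    """Annualized Rogers-Satchell volatility; robust to a nonzero mean return."""
    o, h, l, c = map(np.asarray, (opens, highs, lows, closes))
    daily_var = (np.log(h / c) * np.log(h / o)
                 + np.log(l / c) * np.log(l / o)).mean()
    return np.sqrt(daily_var * trading_days)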
Yang-Zhang
Yang Zhang is a historical volatility estimator that handles both opening jumps and the drift and has a minimum estimation error.
We can think of the Yang-Zhang volatility as the combination of the overnight (close-to-open) volatility and a weighted average of the Rogers-Satchell volatility and the day's open-to-close volatility. It is considered to be 14 times more efficient than the close-to-close estimator.
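A sketch of the Yang-Zhang estimator, using the commonly cited weighting k = 0.34 / (1.34 + (n+1)/(n-1)); the weighting is an assumption here, and implementations vary:
```python
import numpy as np

def yang_zhang_vol(opens, highs, lows, closes, trading_days=252):
    o, h, l, c = map(np.asarray, (opens, highs, lows, closes))
    n = len(c)
    overnight = np.log(o[1:] / c[:-1])               # close-to-open jumps
    open_close = np.log(c[1:] / o[1:])               # intraday drift
    rs = (np.log(h / c) * np.log(h / o) +
          np.log(l / c) * np.log(l / o))[1:].mean()  # Rogers-Satchell part
    k = 0.34 / (1.34 + (n + 1) / (n - 1))
    var = overnight.var(ddof=1) + k * open_close.var(ddof=1) + (1 - k) * rs
    return np.sqrt(var * trading_days)
```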
Garman-Klass-Yang-Zhang
Garman-Klass-Yang-Zhang (GKYZ) volatility estimator consists of using the returns of open, high, low, and closing prices in its calculation.
GKYZ volatility estimator takes into account overnight jumps but not the trend, i.e. it assumes that the underlying asset follows a GBM process with zero drift. Therefore the GKYZ volatility estimator tends to overestimate the volatility when the drift is different from zero. However, for a GBM process, this estimator is eight times more efficient than the close-to-close volatility estimator.
Exponential Weighted Moving Average
The Exponentially Weighted Moving Average (EWMA) is a quantitative or statistical measure used to model or describe a time series. The EWMA is widely used in finance, the main applications being technical analysis and volatility modeling.
The moving average is designed such that older observations are given lower weights. The weights fall exponentially as the data point gets older – hence the name exponentially weighted.
The only decision a user of the EWMA must make is the parameter lambda. The parameter decides how important the current observation is in the calculation of the EWMA. The higher the value of lambda, the more closely the EWMA tracks the original time series.
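A sketch of the EWMA recursion, seeded with the first squared return and using the commonly cited RiskMetrics value λ = 0.94 (the seed choice is an assumption):
```python
import numpy as np

def ewma_vol(closes, lam=0.94, trading_days=252):
    """EWMA volatility: var(n) = lam * var(n-1) + (1 - lam) * u(n-1)^2."""
    u = np.diff(np.log(closes))
    var = u[0] ** 2                        # seed with the first squared return
    for r in u[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return np.sqrt(var * trading_days)
```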
Standard Deviation of Log Returns
This is the simplest calculation of volatility: the standard deviation of ln(close/close(1)).
Pseudo GARCH(2,2)
This is calculated using a short- and long-run mean of variance multiplied by θ.
θavg(var; M) + (1 − θ)avg(var; N) = 2θvar/(M+1-(M-1)L) + 2(1-θ)var/(N+1-(N-1)L)
Solving for θ can be done by minimizing the mean squared error of estimation; that is, regressing L^-1 var - avg(var; N) against avg(var; M) - avg(var; N) and using the resulting beta estimate as θ.
Average True Range
The average true range (ATR) is a technical analysis indicator, introduced by market technician J. Welles Wilder Jr. in his book New Concepts in Technical Trading Systems, that measures market volatility by decomposing the entire range of an asset price for that period.
The true range indicator is taken as the greatest of the following: current high less the current low; the absolute value of the current high less the previous close; and the absolute value of the current low less the previous close. The ATR is then a moving average, generally using 14 days, of the true ranges.
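A sketch of the true range and ATR calculation, here smoothed with a simple moving average for brevity (Wilder's original uses a smoothed/RMA average):
```python
import numpy as np

def average_true_range(highs, lows, closes, period=14):
    """ATR as a simple moving average of true ranges."""
    h, l, c = map(np.asarray, (highs, lows, closes))
    tr = np.maximum.reduce([
        h[1:] - l[1:],               # current high less current low
        np.abs(h[1:] - c[:-1]),      # |current high less previous close|
        np.abs(l[1:] - c[:-1]),      # |current low less previous close|
    ])
    return np.convolve(tr, np.ones(period) / period, mode="valid")
```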
True Range Double
A special case of ATR that attempts to correct for volatility skew.
Additional features will be added in future releases.
This indicator is only available to ALGX Trading VIP group members . You can see the Author's Instructions below to get more information on how to get access.
OHLC Tool
OHLC Tool allows you to display Current or Historical OHLC Values as horizontal lines that extend to the right on your chart.
Features
Variable Lookback to display a specific historical bar's values. Default = 1 (Previous Candle)
Customizable Timeframe to view HTF Candle values.
Custom Line Colors, Styles, and Thicknesses.
Price Scale Value Display Capability.
For displaying the line values and labels on the price scale you will need to enable:
"Indicator and financials name labels"
and
"Indicator and financials value labels"
These options are found in the Price Scale Menu under Labels. Price Scale Menu > Labels
When you do this you will notice that your other indicator values will also appear on the price scale.
If you wish to disable these, go to the indicator settings under the "Style" tab and uncheck the "Labels on price scale" box.
Indicator Settings > Style > "Labels on price scale"
Enjoy!
Volatility Compression Ratio by M-Carlo
Hello traders. I created this simple indicator to use as a FILTER.
It does not provide any operational signals but tells us whether we are in a period of volatility compression or expansion, and it can work on all markets.
This filter works great for all strategies that work on breakouts
The concept is this: I will enter at breakout of a price level that I consider important, only if there is a volatility compression and not in the case of expansion of volatility.
Technically the calculation is very simple:
Step 1: I calculate the ATR over "x" periods. I set 7 by default because I get better results, but you can change it as you like using the "atr length" field. You can also choose whether to calculate the ATR via RMA, SMA or EMA.
Step 2: I calculate a simple average of the ATR over a longer period, set with the "length multiplier" parameter, which multiplies the "atr length" value by "x" times. Here I set the default to 3 but you can change it as you like.
Step 3: I divide the ATR value calculated in step 1 by its long-term average calculated in step 2, obtaining a value that oscillates above and below 1.
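A minimal Python sketch of the three steps (SMA-based ATR shown; the RMA/EMA options would only swap the smoothing):
```python
import numpy as np

def compression_ratio(highs, lows, closes, atr_length=7, length_multiplier=3):
    h, l, c = map(np.asarray, (highs, lows, closes))
    tr = np.maximum.reduce([h[1:] - l[1:],
                            np.abs(h[1:] - c[:-1]),
                            np.abs(l[1:] - c[:-1])])
    # step 1: short ATR as an SMA of true ranges
    atr = np.convolve(tr, np.ones(atr_length) / atr_length, mode="valid")
    # step 2: long-run average of the ATR
    long_len = atr_length * length_multiplier
    atr_avg = np.convolve(atr, np.ones(long_len) / long_len, mode="valid")
    # step 3: ratio > 1 means expansion, < 1 means compression
    return atr[long_len - 1:] / atr_avg
```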
So:
if the indicator is above the value of 1 it means that volatility is expanding
If the indicator is below the value of 1 it means that we are in a period of volatility compression (and as we know volatility explodes sooner or later)
If you have any questions write to me and I hope this filter helps you! Have good Trading!
Seasonal tendency: week-on-week % change and 10yr Averages
-shows week-on-week % change, and 10yr averages of these % changes
-scan across the 10yr averages to get a good idea of the seasonality of an asset
-best used on commodities with strong seasonal tendencies (Gold, Wheat, Coffee, Lean hogs etc)
-works only on daily timeframe
-by default it will compare SMA(length) in the following way, BTC: Sunday cf previous Sunday | ES/Gold: Monday cf previous Monday
-for most assets there are 5 daily bars in a week (SMA(5)) => that's the default. For BTC you can change this to 7.
~~inputs:
-change input year to show any previous decade of asset's history; the table will display over that year on the chart
-choose expression for Average of % change week on week: SMA, ohlc4, vwma, vwap (default SMA)
-choose number of daily bars in a week (i.e. SMA length)
-change label sizes/colors
~~notes:
-When applied to current year: will print the 10yr average for previous weeks in the year; 9yr average for future weeks in the year
-drawings and SMA plot on the above chart are just to show visually how the week's average is calculated, and how this lines up with the label
-current week of year will highlight in large font orange by default
-the first 2 weeks of the year are omitted because of a bug I can't figure out, which throws out bad numbers.
-cannot print all the values for each of previous 10yrs; 'code too long' error. Could likely do this via using matrices but would require a rewrite
17th Dec 2022
@twingall
Average True Range Refurbished
💡 Objective
This script is a rebuild of the pre-existing ATR indicator, with improvements and fine-tuning.
🪄Improvements
1. Normalization option (range 0 to 100)
2. Optional calculation of the ratio between current volatility and average volatility
3. Optional smoothing
4. Show a moving average
5. Show Bollinger Bands with 3 bands
6. Change bar colors according to ATR and Bollinger Bands
📚 Definition
'The Average True Range (ATR) is a tool used in technical analysis to measure volatility. Unlike many of today's popular indicators, the ATR is not used to indicate the direction of price. Rather, it is a metric used solely to measure volatility, especially volatility caused by price gaps or limit moves.'
(TradingView)
Visible Range
A quick and easy "at a glance" display for the viewable candles. It does repaint, but that is a non-issue: it is simply a quick and handy tool for a visual peek at the visible range.
The highest, lowest, and average are displayed, with labels for the percentage distance from the current close value and total range.
Automatic color for each based on relative distance for consistency in stable or volatile conditions.
Expected Move w/ Volatility Panel (advanced) [Loxx]
This indicator shows the expected range of movement of price given the assumption that price is log-normally distributed. This includes 3 multiples of standard deviation and 1 user-selected level input as a multiple of standard deviation. The expected move assumes that volatility remains static on the next bar. In reality, this may or may not be the case, so use caution when making broad assumptions about the levels shown when using this indicator. However, these levels match the same levels on Loxx's backtests and Multi-Panel indicator. These static levels are used as the take-profit targets and stop loss on all Loxx's scripts previously posted.
This indicator can be used on all timeframes, but the internal timeframe must be higher than the current timeframe or an error is thrown. The purpose of internal MTF is so that you can track the deviation range from higher timeframes on lower timeframes. When "current bar" is selected, this indicator will change with live price changes. This is useful if you wish to enter a trade before the current bar closes and need to know the deviation ranges before the close. Current bar is also useful to see the past ranges of literally that bar. When "past bar" is selected, the values shown on the current bar are values that were calculated on the last bar. The previous-bar setting is useful to track price changes with the assumption that you entered a trade at the close of the previous bar. The default is set to the previous bar. (Careful: this default setting won't match Loxx's Multi-Panel tool, since the Multi-Panel is built using the current bar. To make them match, you must change this setting to current bar.)
I've included the ability for you to smooth the output around a moving average. Included are Loxx's Moving Averages. There are 41 to choose from. See more details here:
Smoothing applied yielding Keltner Channels
Also included are various UI options to manipulate line styling and colors.
Volatility Panel
Shows information about the user-selected volatility, including the confidence range of the chosen volatility. The following volatility types are included, with additional volatility types to be added in future releases.
Close-to-Close
Close-to-Close volatility is a classic and the most commonly used volatility measure, sometimes referred to as historical volatility.
Volatility is an indicator of the speed of a stock price change. A stock with high volatility is one where the price changes rapidly and with a bigger amplitude. The more volatile a stock is, the riskier it is.
Close-to-close historical volatility is calculated using only the stock's closing prices. It is the simplest volatility estimator, but in many cases it is not precise enough. Stock prices can jump considerably during a trading session and return to the open value at the end, which means that a large amount of price information is not taken into account by close-to-close volatility.
Despite its drawbacks, Close-to-Close volatility is still useful in cases where the instrument doesn't have intraday prices. For example, mutual funds calculate their net asset values daily or weekly, and thus their prices are not suitable for more sophisticated volatility estimators.
Parkinson
Parkinson volatility is a volatility measure that uses the stock’s high and low price of the day.
The main difference between regular volatility and Parkinson volatility is that the latter uses high and low prices for a day, rather than only the closing price. That is useful as close to close prices could show little difference while large price movements could have happened during the day. Thus Parkinson's volatility is considered to be more precise and requires less data for calculation than the close-close volatility .
One drawback of this estimator is that it doesn't take into account price movements after market close. Hence it systematically undervalues volatility . That drawback is taken into account in the Garman-Klass's volatility estimator.
Garman-Klass
Garman Klass is a volatility estimator that incorporates open, low, high, and close prices of a security.
Garman-Klass volatility extends Parkinson's volatility by taking into account the opening and closing price. As markets are most active during the opening and closing of a trading session, it makes volatility estimation more accurate.
Garman and Klass also assumed that the process of price change is a process of continuous diffusion (geometric Brownian motion). However, this assumption has several drawbacks. The method is not robust for opening jumps in price and trend movements.
Despite its drawbacks, the Garman-Klass estimator is still more effective than the basic formula since it takes into account not only the price at the beginning and end of the time interval but also intraday price extremums.
Researchers Rogers and Satchell have proposed a more efficient method for assessing historical volatility that takes into account price trends. See Rogers-Satchell Volatility for more detail.
Rogers-Satchell
Rogers-Satchell is an estimator for measuring the volatility of securities with an average return not equal to zero.
Unlike Parkinson and Garman-Klass estimators, Rogers-Satchell incorporates drift term (mean return not equal to zero). As a result, it provides a better volatility estimation when the underlying is trending.
The main disadvantage of this method is that it does not take into account price movements between trading sessions. It means an underestimation of volatility since price jumps periodically occur in the market precisely at the moments between sessions.
A more comprehensive estimator that also considers the gaps between sessions was developed based on the Rogers-Satchell formula in the 2000s by Yang-Zhang. See Yang Zhang Volatility for more detail.
Yang-Zhang
Yang Zhang is a historical volatility estimator that handles both opening jumps and the drift and has a minimum estimation error.
We can think of the Yang-Zhang volatility as the combination of the overnight (close-to-open) volatility and a weighted average of the Rogers-Satchell volatility and the day's open-to-close volatility. It is considered to be 14 times more efficient than the close-to-close estimator.
Garman-Klass-Yang-Zhang
Garman-Klass-Yang-Zhang (GKYZ) volatility estimator consists of using the returns of open, high, low, and closing prices in its calculation.
GKYZ volatility estimator takes into account overnight jumps but not the trend, i.e. it assumes that the underlying asset follows a GBM process with zero drift. Therefore the GKYZ volatility estimator tends to overestimate the volatility when the drift is different from zero. However, for a GBM process, this estimator is eight times more efficient than the close-to-close volatility estimator.
Exponential Weighted Moving Average
The Exponentially Weighted Moving Average (EWMA) is a quantitative or statistical measure used to model or describe a time series. The EWMA is widely used in finance, the main applications being technical analysis and volatility modeling.
The moving average is designed such that older observations are given lower weights. The weights fall exponentially as the data point gets older – hence the name exponentially weighted.
The only decision a user of the EWMA must make is the parameter lambda. The parameter decides how important the current observation is in the calculation of the EWMA. The higher the value of lambda, the more closely the EWMA tracks the original time series.
Standard Deviation of Log Returns
This is the simplest calculation of volatility: the standard deviation of ln(close/close(1)).
Pseudo GARCH(2,2)
This is calculated using a short- and long-run mean of variance multiplied by θ.
θavg(var; M) + (1 − θ)avg(var; N) = 2θvar/(M+1-(M-1)L) + 2(1-θ)var/(N+1-(N-1)L)
Solving for θ can be done by minimizing the mean squared error of estimation; that is, regressing L^-1 var - avg(var; N) against avg(var; M) - avg(var; N) and using the resulting beta estimate as θ.
Average True Range
The average true range (ATR) is a technical analysis indicator, introduced by market technician J. Welles Wilder Jr. in his book New Concepts in Technical Trading Systems, that measures market volatility by decomposing the entire range of an asset price for that period.
The true range indicator is taken as the greatest of the following: current high less the current low; the absolute value of the current high less the previous close; and the absolute value of the current low less the previous close. The ATR is then a moving average, generally using 14 days, of the true ranges.
True Range Double
A special case of ATR that attempts to correct for volatility skew.
Chi-squared Confidence Interval:
Confidence interval of volatility is calculated using an inverse CDF of a Chi-Squared Distribution. You can change the volatility input used to either realized, upper confidence interval, or lower confidence interval. This is included in case you'd like to see how far price can extend if volatility hits its upper or lower confidence levels. Generally, you'd just use realized volatility, so I wouldn't change this setting.
Inverse CDF of a Chi-Squared Distribution
The chi-square distribution is a one-parameter family of curves. The parameter ν is the degrees of freedom.
The icdf of the chi-square distribution is
x=F^−1(p∣ν) = {x:F(x∣ν) = p}
where
p = F(x∣ν) = ∫ from 0 to x of (t^((ν-2)/2) * e^(-t/2)) / (2^(ν/2) * Γ(ν/2)) dt
ν is the degrees of freedom, and Γ( · ) is the Gamma function. The result p is the probability that a single observation from the chi-square distribution with ν degrees of freedom falls in the interval [0, x].
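For illustration, a textbook chi-squared confidence interval for volatility using SciPy's inverse CDF (chi2.ppf); the indicator's exact inputs may differ:
```python
import numpy as np
from scipy.stats import chi2

def vol_confidence_interval(sample_vol, n_obs, alpha=0.05):
    """Chi-squared confidence interval for volatility from n_obs returns."""
    dof = n_obs - 1
    var = sample_vol ** 2
    upper = np.sqrt(dof * var / chi2.ppf(alpha / 2.0, dof))        # icdf at lower tail
    lower = np.sqrt(dof * var / chi2.ppf(1.0 - alpha / 2.0, dof))  # icdf at upper tail
    return lower, upper

print(vol_confidence_interval(0.20, 30))  # e.g., 20% vol measured over 30 returns
```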
Related Indicators
Multi-Panel: Trade-Volatility-Probability
Variety Distribution Probability Cone
Implied Volatility Suite (TG Fork)
Displays the Implied Volatility, which is usually calculated from options, but here is estimated indirectly from the spot price alone, either using a model or model-free using the VIXfix.
The model-free VIXfix based approach can detect times of high volatility, which usually coincides with panic and hence lowest prices. Inversely, the model-based approach can detect times of highest greed.
Forked and updated by Tartigradia to fix some issues in the calculations, convert to Pine Script v5, and reverse-engineer the "Implied Volatility Rank & Model Free IVR" indicator by the same author (which is closed source), allowing both model-based and model-free implied volatilities to be plotted simultaneously.
If you like this indicator, please show the original author SegaRKO some love:
Multi-Panel: Trade-Volatility-Probability [Loxx]
Multi-Panel: Trade-Volatility-Probability shows user-selected and volatility-based price levels and probabilities on the chart. This is useful for both options and all styles of up/down trading methods that rely on volatility.
Trading Panel: Shows trading information to take profits and stop-loss based on multiples of volatility. Also shows equity inputs by the user to calculate optimal position size
Key things to note about the Trading Panel
-Trade side: Long or short. You change this to change the take-profit and SL levels displayed on the table, to be used w/ up/down trading styles that rely on volatility stops
-Account size: User enters total balance available for trade
-Risk: Total % of account size you're willing to lose should the SL be hit
-Position size: Size of the position given the SL and your preferred Risk
-Take profit/Stop loss levels: Based on multipliers selected by the user in settings. These shouldn't be changed unless you really know what you're doing with volatility stops
-Entry: Source price. Can be 1 of 37 different prices. See Loxx's Expanded Source Types:
Volatility Panel: Shows information about the volatility the user selected to be used to take profit/stop-loss/range calculations. Volatility types included are:
Close-to-Close
Close-to-Close volatility is a classic and the most commonly used volatility measure, sometimes referred to as historical volatility.
Volatility is an indicator of the speed of a stock price change. A stock with high volatility is one where the price changes rapidly and with a bigger amplitude. The more volatile a stock is, the riskier it is.
Close-to-close historical volatility is calculated using only the stock's closing prices. It is the simplest volatility estimator, but in many cases it is not precise enough. Stock prices can jump considerably during a trading session and return to the open value at the end, which means that a large amount of price information is not taken into account by close-to-close volatility.
Despite its drawbacks, Close-to-Close volatility is still useful in cases where the instrument doesn't have intraday prices. For example, mutual funds calculate their net asset values daily or weekly, and thus their prices are not suitable for more sophisticated volatility estimators.
Parkinson
Parkinson volatility is a volatility measure that uses the stock’s high and low price of the day.
The main difference between regular volatility and Parkinson volatility is that the latter uses high and low prices for a day, rather than only the closing price. That is useful as close to close prices could show little difference while large price movements could have happened during the day. Thus Parkinson's volatility is considered to be more precise and requires less data for calculation than the close-close volatility.
One drawback of this estimator is that it doesn't take into account price movements after market close. Hence it systematically undervalues volatility. That drawback is taken into account in the Garman-Klass's volatility estimator.
Garman-Klass
Garman Klass is a volatility estimator that incorporates open, low, high, and close prices of a security.
Garman-Klass volatility extends Parkinson's volatility by taking into account the opening and closing price. As markets are most active during the opening and closing of a trading session, it makes volatility estimation more accurate.
Garman and Klass also assumed that the process of price change is a process of continuous diffusion (geometric Brownian motion). However, this assumption has several drawbacks. The method is not robust for opening jumps in price and trend movements.
Despite its drawbacks, the Garman-Klass estimator is still more effective than the basic formula since it takes into account not only the price at the beginning and end of the time interval but also intraday price extremums.
Researchers Rogers and Satchell have proposed a more efficient method for assessing historical volatility that takes into account price trends. See Rogers-Satchell Volatility for more detail.
Rogers-Satchell
Rogers-Satchell is an estimator for measuring the volatility of securities with an average return not equal to zero.
Unlike Parkinson and Garman-Klass estimators, Rogers-Satchell incorporates drift term (mean return not equal to zero). As a result, it provides a better volatility estimation when the underlying is trending.
The main disadvantage of this method is that it does not take into account price movements between trading sessions. It means an underestimation of volatility since price jumps periodically occur in the market precisely at the moments between sessions.
A more comprehensive estimator that also considers the gaps between sessions was developed based on the Rogers-Satchell formula in the 2000s by Yang-Zhang. See Yang Zhang Volatility for more detail.
Yang-Zhang
Yang Zhang is a historical volatility estimator that handles both opening jumps and the drift and has a minimum estimation error.
We can think of the Yang-Zhang volatility as the combination of the overnight (close-to-open) volatility and a weighted average of the Rogers-Satchell volatility and the day's open-to-close volatility. It is considered to be 14 times more efficient than the close-to-close estimator.
Garman-Klass-Yang-Zhang
Garman-Klass-Yang-Zhang (GKYZ) volatility estimator consists of using the returns of open, high, low, and closing prices in its calculation.
GKYZ volatility estimator takes into account overnight jumps but not the trend, i.e. it assumes that the underlying asset follows a GBM process with zero drift. Therefore the GKYZ volatility estimator tends to overestimate the volatility when the drift is different from zero. However, for a GBM process, this estimator is eight times more efficient than the close-to-close volatility estimator.
Exponential Weighted Moving Average
The Exponentially Weighted Moving Average (EWMA) is a quantitative or statistical measure used to model or describe a time series. The EWMA is widely used in finance, the main applications being technical analysis and volatility modeling.
The moving average is designed such that older observations are given lower weights. The weights fall exponentially as the data point gets older – hence the name exponentially weighted.
The only decision a user of the EWMA must make is the parameter lambda. The parameter decides how important the current observation is in the calculation of the EWMA. The higher the value of lambda, the more closely the EWMA tracks the original time series.
Standard Deviation of Log Returns
This is the simplest calculation of volatility: the standard deviation of ln(close/close(1)).
Pseudo GARCH(2,2)
This is calculated using a short- and long-run mean of variance multiplied by θ.
θavg(var; M) + (1 − θ)avg(var; N) = 2θvar/(M+1-(M-1)L) + 2(1-θ)var/(N+1-(N-1)L)
Solving for θ can be done by minimizing the mean squared error of estimation; that is, regressing L^-1 var - avg(var; N) against avg(var; M) - avg(var; N) and using the resulting beta estimate as θ.
Average True Range
The average true range (ATR) is a technical analysis indicator, introduced by market technician J. Welles Wilder Jr. in his book New Concepts in Technical Trading Systems, that measures market volatility by decomposing the entire range of an asset price for that period.
The true range indicator is taken as the greatest of the following: current high less the current low; the absolute value of the current high less the previous close; and the absolute value of the current low less the previous close. The ATR is then a moving average, generally using 14 days, of the true ranges.
True Range Double
A special case of ATR that attempts to correct for volatility skew.
Chi-squared Confidence Interval:
Confidence interval of volatility is calculated using an inverse CDF of a Chi-Squared Distribution. You can change the volatility input used to either realized, upper confidence interval, or lower confidence interval. This is included in case you'd like to see how far price can extend if volatility hits its upper or lower confidence levels. Generally, you'd just use realized volatility, so I wouldn't change this setting.
Inverse CDF of a Chi-Squared Distribution
The chi-square distribution is a one-parameter family of curves. The parameter ν is the degrees of freedom.
The icdf of the chi-square distribution is
x=F^−1(p∣ν) = {x:F(x∣ν) = p}
where
p = F(x∣ν) = ∫ from 0 to x of (t^((ν-2)/2) * e^(-t/2)) / (2^(ν/2) * Γ(ν/2)) dt
ν is the degrees of freedom, and Γ( · ) is the Gamma function. The result p is the probability that a single observation from the chi-square distribution with ν degrees of freedom falls in the interval [0, x].
Additional notes on Volatility Panel
-Shows both current timeframe volatility per candle at whatever date backward you select
-Shows annualized volatility based on selected days per year and per-bar volatility; this is automatically calculated no matter the timeframe used. This means that it'll calculate annualized volatility for the current candle even on the 1-second timeframe. Days per year should be 252 for everything but cryptocurrency; however, for all types of tradable assets, anything over the 3-day timeframe will calculate on 365 days.
Probability Panel
This panel shows the probability levels of a user-selected upper and lower price boundary. This includes the inside range of volatility between the lower and upper price levels and the outside probability below the lower price level and above the upper price level. These values are calculated using the CDF (cumulative distribution function) of a normal distribution. In simpler terms, the CDF returns the area under a bell curve between two points, left and right, or for our purposes, high and low. This yields the probabilities you see in the Probability Panel. See the following graphic to visualize how this works:
The red line is the entry bar; the yellow line is the "mean" but in this case just the chosen source price.
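A sketch of the underlying calculation: the probability of finishing below, inside, and above a range under a lognormal model (the drift and example inputs here are assumptions for illustration):
```python
from math import log, sqrt
from scipy.stats import norm

def range_probabilities(spot, lower, upper, vol, t_years, drift=0.0):
    """P(below lower), P(inside range), P(above upper) under lognormal prices."""
    denom = vol * sqrt(t_years)
    z = lambda x: (log(x / spot) - (drift - 0.5 * vol ** 2) * t_years) / denom
    p_below = norm.cdf(z(lower))
    p_above = 1.0 - norm.cdf(z(upper))
    return p_below, 1.0 - p_below - p_above, p_above

print(range_probabilities(100.0, 95.0, 110.0, 0.25, 30 / 365))
```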
Other things to know
You can turn on/off all labels and levels and fills
Volatility Cone [Loxx]
When it comes to forecasting volatility, it seems that the old axiom about weather is applicable: "Everyone talks about it, but no one can do much about it!" Volatility cones are a tool that may be useful in one's attempt to do something about predicting the future volatility of an asset.
A "volatility cone" is a plot of the range of volatilities within a fixed probability band around the true parameter, as a function of sample length. Volatility cone is a visualization tool for the display of historical volatility term structure. It was introduced by Burghardt and Lane in early 1990 and is popular in the option trading community. This is mostly a static indicator due to processor load and is restricted to the daily time frame.
Why cones?
When we enter the options arena, in an effort to "trade volatility," we want to be able to compare current levels of implied volatility with recent historical volatility in an effort to assess the relative value of the option(s) under consideration. Volatility cones can be an effective tool to help us with this assessment. A volatility cone is an analytical application designed to help determine if the current levels of historical or implied volatilities for a given underlying, its options, or any of the new volatility instruments, such as VolContractTM futures, VIX futures, or VXX and VXZ ETNs, are likely to persist in the future. As such, volatility cones are intended to help the user assess the likely volatility that an underlying will go on to display over a certain period. Those who employ volatility cones as a diagnostic tool are relying upon the principle of "reversion to the mean." This means that unusually high levels of volatility are expected to drift or move lower (revert) to their average (mean) levels, while relatively low volatility readings are expected to rise, eventually, to more "normal" values.
How to use
Suppose you want to analyze an options contract expiring in 3 months, and this option has a current implied volatility of 25.5%. Suppose also that realized volatility (y-axis) at the 3-month mark (90 on the x-axis) is 45% at the high, the median is 35%, the 25th percentile is 30%, and the low is 25%. Comparing this range to the implied volatility, you might conclude that this is a relatively "cheap" option contract. To help you visualize implied volatility on the chart given an expiration date in bars, the indicator includes the ability to enter up to three expirations in bars and each expiration's current implied volatility.
By ascertaining the various historical levels of volatility corresponding to a given time horizon for the options or futures under consideration, we're better prepared to judge the relative "cheapness" or "expensiveness" of the instrument.
Volatility options
Close-to-Close
Close-to-Close volatility is a classic and the most commonly used volatility measure, sometimes referred to as historical volatility.
Volatility is an indicator of the speed of a stock price change. A stock with high volatility is one where the price changes rapidly and with a bigger amplitude. The more volatile a stock is, the riskier it is.
Close-to-close historical volatility is calculated using only the stock's closing prices. It is the simplest volatility estimator, but in many cases it is not precise enough. Stock prices can jump considerably during a trading session and return to the open value at the end, which means that a large amount of price information is not taken into account by close-to-close volatility.
Despite its drawbacks, Close-to-Close volatility is still useful in cases where the instrument doesn't have intraday prices. For example, mutual funds calculate their net asset values daily or weekly, and thus their prices are not suitable for more sophisticated volatility estimators.
Parkinson
Parkinson volatility is a volatility measure that uses the stock’s high and low price of the day.
The main difference between regular volatility and Parkinson volatility is that the latter uses high and low prices for a day, rather than only the closing price. That is useful as close to close prices could show little difference while large price movements could have happened during the day. Thus Parkinson's volatility is considered to be more precise and requires less data for calculation than the close-close volatility. One drawback of this estimator is that it doesn't take into account price movements after market close. Hence it systematically undervalues volatility. That drawback is taken into account in the Garman-Klass's volatility estimator.
Garman-Klass
Garman Klass is a volatility estimator that incorporates open, low, high, and close prices of a security.
Garman-Klass volatility extends Parkinson's volatility by taking into account the opening and closing price. As markets are most active during the opening and closing of a trading session, it makes volatility estimation more accurate.
Garman and Klass also assumed that the process of price change is a process of continuous diffusion (geometric Brownian motion). However, this assumption has several drawbacks. The method is not robust for opening jumps in price and trend movements.
Despite its drawbacks, the Garman-Klass estimator is still more effective than the basic formula since it takes into account not only the price at the beginning and end of the time interval but also intraday price extremums.
Researchers Rogers and Satchell have proposed a more efficient method for assessing historical volatility that takes into account price trends. See Rogers-Satchell Volatility for more detail.
Rogers-Satchell
Rogers-Satchell is an estimator for measuring the volatility of securities with an average return not equal to zero.
Unlike Parkinson and Garman-Klass estimators, Rogers-Satchell incorporates drift term (mean return not equal to zero). As a result, it provides a better volatility estimation when the underlying is trending.
The main disadvantage of this method is that it does not take into account price movements between trading sessions. It means an underestimation of volatility since price jumps periodically occur in the market precisely at the moments between sessions.
A more comprehensive estimator that also considers the gaps between sessions was developed based on the Rogers-Satchell formula in the 2000s by Yang-Zhang. See Yang Zhang Volatility for more detail.
Yang-Zhang
Yang Zhang is a historical volatility estimator that handles both opening jumps and the drift and has a minimum estimation error.
We can think of the Yang-Zhang volatility as the combination of the overnight (close-to-open) volatility and a weighted average of the Rogers-Satchell volatility and the day's open-to-close volatility. It is considered to be 14 times more efficient than the close-to-close estimator.
Garman-Klass-Yang-Zhang
Garman-Klass-Yang-Zhang (GKYZ) volatility estimator consists of using the returns of open, high, low, and closing prices in its calculation.
GKYZ volatility estimator takes into account overnight jumps but not the trend, i.e. it assumes that the underlying asset follows a GBM process with zero drift. Therefore the GKYZ volatility estimator tends to overestimate the volatility when the drift is different from zero. However, for a GBM process, this estimator is eight times more efficient than the close-to-close volatility estimator.
Exponential Weighted Moving Average
The Exponentially Weighted Moving Average (EWMA) is a quantitative or statistical measure used to model or describe a time series. The EWMA is widely used in finance, the main applications being technical analysis and volatility modeling.
The moving average is designed such that older observations are given lower weights. The weights fall exponentially as the data point gets older – hence the name exponentially weighted.
The only decision a user of the EWMA must make is the parameter lambda. The parameter decides how important the current observation is in the calculation of the EWMA. The higher the value of lambda, the more closely the EWMA tracks the original time series.
Standard Deviation of Log Returns
This is the simplest calculation of volatility: the standard deviation of ln(close/close(1)).
Sampling periods used
5, 10, 20, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330, and 360
Historical Volatility plot
Purple outer lines: High and low volatility values corresponding to x-axis time
Blue inner lines: 25th and 75th percentiles of volatility corresponding to x-axis time
Green line: Median volatility values corresponding to x-axis time
White dashed line: Realized volatility corresponding to x-axis time
Additional things to know
Due to UI constraints on TradingView, it is easier to visualize this indicator by double-clicking the bottom pane where it appears and then expanding the y- and x-axes to view the entire chart.
You can click on each point on the graph to see what the volatility of that point is.
Option expiration dates will show up as large dots on the graph. You can input your own values in the settings.
ATR - Average True Range + Dynamic Trend w/ Signals | by Octopu$
What is ATR?
ATR stands for Average True Range
A Technical Analysis Indicator that measures market volatility by decomposing the range of a Security Price in a specific period.
The ATR can be used as a High Low Spectrum,
As well as a variation of a Moving Average, considering the ranges on a timeframe, generally this being 14 days.
Shorter periods can be used (will generate more signals) or longer periods for steadier trends (for fewer signals)
A ticker on a high volatility has a high ATR.
A ticker on a low volatility has a low ATR.
It is a useful resource for a trading system:
Can be used to enter or exit trades and/or also measure the daily spectrum of a stock.
Does not necessarily point to price direction, but takes into account gaps and strong legs.
Can also be used as trading positions confirmation,
Rather be it for stop losses or take profits,
As well as setting trailing stops or limit orders.
This tool offers a great Risk to Reward Ratio, considering the fact you will be aware of the possible moves that an asset can perform.
This indicator should not be used as a standalone tool.
(The combination of factors relies on your own knowledge about Confluence Factors along with your Due Diligence)
This indicator is not an advice to buy or sell securities.
www.tradingview.com
SPY
ANY Ticker. ANY Timeframe.
(Used SPY 5m as Example only)
Features:
• ATR ( Average True Range )
• Range UP and DOWN
• Movement from Price Line
• Dynamic ATR
• Cross/Test Signals
• Live and Last Close
Options:
• Specific Factors Setup
• Length Customization
• Toggle On/Off
• Color Picker
• Styling Options
Notes:
v1.0
Indicator release.
Changes and updates can come in the future for additional functionalities or per requests. Follow and Stay Tuned!
Did you like it? Please Support and Shoot me a message! I'd appreciate if you dropped by to say thanks! Thank you.
- Octopu$
🐙
Asymmetric Dispersion High Low
dear fellows,
this indicator is an effort to determine the range within which prices are likely to fall in the current candle.
how it is calculated (a code sketch follows the steps below)
1. obtain
a. gain from the open to the high
b. loss from the open to the low
in the last 20 (by default) candles and
in the last 200 (10*20 by default) candles
2. perform
a. the geometric average (sma of the log returns) over these gains and losses
b. their respective standard deviation
3. plot from the open of each candle
a. the average + 2 standard deviations (2 by default) of the short window size
b. same for the long window size (which is overlapped)
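the code sketch promised above, in Python (window sizes and the 2-standard-deviation multiplier follow the defaults described; this is an illustration, not the script itself):
```python
import numpy as np

def dispersion_bands(opens, highs, lows, window=20, n_std=2.0):
    """Mean +/- n_std std of open-to-high and open-to-low log moves."""
    o, h, l = map(np.asarray, (opens, highs, lows))
    up = np.log(h / o)[-window:]   # log gain from open to high
    dn = np.log(l / o)[-window:]   # log loss from open to low (negative)
    upper = up.mean() + n_std * up.std(ddof=1)
    lower = dn.mean() - n_std * dn.std(ddof=1)
    # plot from the next candle's open: open * exp(upper), open * exp(lower)
    return upper, lower
```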
what it shows
1. where the current candle is likely to move with 95% likelihood
how it can be interpreted
1. a gauge for volatility in the short and long term
2. a visual imbalance between the likelihood to go up or down according to dispersion in relation to current prices or candle open.
3. a confirmation of crossings of, for instance, support and resistances once the cloud is completely above or below.
in regard to bollinger bands (which are an excellent, well-proven indicator)
1. it segregates upward moves from the downward ones.
2. it is hardly crossed by prices
3. it is centered on the current candle open, instead of the moving average.
we welcome feedback and criticism.
best regards and success wishes.
HMA w/ SSE-Dynamic EWMA Volatility Bands [Loxx]
This indicator is for educational purposes to lay the groundwork for future closed/open source indicators. Some of these future indicators will employ the parameter estimation methods described below; others will require complex solvers such as the Nelder-Mead algorithm on log-likelihood estimations to derive optimal parameter values for omega, gamma, alpha, and beta for GARCH(1,1) MLE and other volatility metrics. For our purposes here, we estimate the rolling lambda (λ) value used to calculate EWMA by minimizing the sum of the squared errors of EWMA variance minus the long-run variance--a rolling window of the one-year mean of squared log-returns. In practice, practitioners will use a λ equal to a standardized value put out by institutions such as JP Morgan. Even simpler than this, others use a ratio of (per - 1) / (per + 1) to derive λ, where per is the lookback period for EWMA. Due to computation limits in Pine, we'll likely not see a true GARCH(1,1) MLE on Pine for quite some time, but future closed source indicators will contain some very interesting industry hacks to get close by employing modifications to EWMA. Enjoy!
Exponentially weighted volatility and its relationship to GARCH(1,1)
Exponentially weighted volatility--also called exponentially weighted moving average volatility (EWMA)--puts more weight on more recent observations. EWMA is calculated as follows:
σ(n)^2 = λσ(n - 1)^2 + (1 − λ)u(n - 1)^2
The estimate, σ(n), of the volatility for day n (made at the end of day n − 1) is calculated from σ(n − 1) (the estimate that was made at the end of day n − 2 of the volatility for day n − 1) and u(n − 1) (the most recent daily percentage change).
The EWMA approach has the attractive feature that the data storage requirements are modest. At any given time, we need to remember only the current estimate of the variance rate and the most recent observation on the value of the market variable. When we get a new observation on the value of the market variable, we calculate a new daily percentage change to update our estimate of the variance rate. The old estimate of the variance rate and the old value of the market variable can then be discarded.
The EWMA approach is designed to track changes in the volatility. Suppose there is a big move in the market variable on day n − 1 so that u(n − 1)^2 is large. This causes our estimate of the current volatility to move upward. The value of λ governs how responsive the estimate of the daily volatility is to the most recent daily percentage change. A low value of λ leads to a great deal of weight being given to u(n − 1)^2 when σ(n) is calculated. In this case, the estimates produced for the volatility on successive days are themselves highly volatile. A high value of λ (i.e., a value close to 1.0) produces estimates of the daily volatility that respond relatively slowly to new information provided by the daily percentage change.
The RiskMetrics database, which was originally created by JPMorgan and made publicly available in 1994, used the EWMA model with λ = 0.94 for updating daily volatility estimates. The company found that, across a range of different market variables, this value of λ gives forecasts of the variance rate that come closest to the realized variance rate. In 2006, RiskMetrics switched to using a long memory model. This is a model where the weights assigned to the u(n -i)^2 as i increases decline less fast than in EWMA.
GARCH(1,1) Model
The EWMA model is a particular case of GARCH(1,1) where γ = 0, α = 1 − λ, and β = λ. The "(1,1)" in GARCH(1,1) indicates that σ^2 is based on the most recent observation of u^2 and the most recent estimate of the variance rate. The more general GARCH(p, q) model calculates σ^2 from the most recent p observations on u^2 and the most recent q estimates of the variance rate. GARCH(1,1) is by far the most popular of the GARCH models. Setting ω = γV_L, where V_L is the long-run variance, the GARCH(1,1) model can also be written:
σ(n)^2 = ω + αu(n-1)^2 + βσ(n-1)^2
What this indicator does
Calculate log returns log(close/close(1))
Calculates Lambda (λ) dynamically by minimizing the sum of squared errors. I've restricted this to the daily timeframe so as to not bloat the code with additional logic required to derive an annualized EWMA historical volatility metric.
After the Lambda is derived, EWMA is calculated one last time and the result is the daily volatility
This daily volatility is multiplied by the source and the multiplier +/- the HMA to create the volatility bands
Finally, daily volatility is multiplied by the square root of days per year to derive annualized volatility. Years are trading days for the asset: for most everything but crypto it's 252; for crypto it's 365.
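A sketch of the λ estimation by minimizing the sum of squared one-step variance-forecast errors (a grid-search simplification of the routine described above; the grid bounds are assumptions):
```python
import numpy as np

def fit_lambda(closes, grid=np.linspace(0.80, 0.999, 200)):
    """Pick the EWMA lambda minimizing sum of squared forecast errors
    e(n) = u(n)^2 - sigma(n)^2, where sigma(n)^2 is the EWMA forecast."""
    u = np.diff(np.log(closes))
    best_lam, best_sse = None, np.inf
    for lam in grid:
        var, sse = u[0] ** 2, 0.0          # seed with first squared return
        for r in u[1:]:
            sse += (r ** 2 - var) ** 2     # forecast error before seeing r
            var = lam * var + (1.0 - lam) * r ** 2   # EWMA update
        if sse < best_sse:
            best_lam, best_sse = lam, sse
    return best_lam
```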
Reset Strike Options-Type 2 (Gray Whaley) [Loxx]
For a reset option type 2, the strike is reset in a similar way as for a reset option type 1. That is, the strike is reset to the asset price at a predetermined future time, if the asset price is below (above) the initial strike price for a call (put). The payoff for such a reset call is max(S - X, 0), and max(X - S, 0) for a put, where X is equal to the original strike if not reset, and equal to the reset strike if reset. Gray and Whaley (1999) have derived a closed-form solution for the price of European reset strike options. The price of the call option is then given by (via "The Complete Guide to Option Pricing Formulas")
c = Se^(b-r)T2 * M(a1, y1; p) - Xe^(-rT2) * M(a2, y2; p) - Se^(b-r)T1 * N(-a1) * N(z2) * e^-r(T2-T1) + Se^(b-r)T2 * N(-a1) * N(z1)
p = Se^(b-r)T1 * N(a1) * N(-z2) * e^-r(T2-T1) + Se^(b-r)T2 * N(a1) * N(-z1) + Xe^(-rT2) * M(-a2, -y2; p) - Se^(b-r)T2 * M(-a1, -y1; p)
where b is the cost-of-carry of the underlying asset, v is the volatility of the relative price changes in the asset, and r is the risk-free interest rate. X is the strike price of the option, T1 the time to reset (in years), and T2 its time to expiration. N(x) and M(a, b; p) are, respectively, the univariate and bivariate cumulative normal distribution functions. Further
a1 = (log(S/X) + (b+v^2/2)T1) / v*T1^0.5 ... a2 = a1 - v*T1^0.5
z1 = ((b+v^2/2)(T2-T1)) / v*(T2-T1)^0.5 ... z2 = z1 - v*(T2-T1)^0.5
y1 = (log(S/X) + (b+v^2/2)T2) / v*T2^0.5 ... y2 = y1 - v*T2^0.5
and p = (T1/T2)^0.5. For reset options with multiple reset rights, see Dai, Kwok, and Wu (2003) and Liao and Wang (2003).
Inputs
Asset price ( S )
Strike price ( K )
Reset time ( T1 )
Time to maturity ( T2 )
Risk-free rate ( r )
Cost of carry ( b )
Volatility ( s )
Numerical Greeks or Greeks by Finite Difference
Analytical Greeks are the standard approach to estimating Delta, Gamma etc... That is what we typically use when we can derive from closed form solutions. Normally, these are well-defined and available in text books. Previously, we relied on closed form solutions for the call or put formulae differentiated with respect to the Black Scholes parameters. When Greeks formulae are difficult to develop or tease out, we can alternatively employ numerical Greeks - sometimes referred to finite difference approximations. A key advantage of numerical Greeks relates to their estimation independent of deriving mathematical Greeks. This could be important when we examine American options where there may not technically exist an exact closed form solution that is straightforward to work with. (via VinegarHill FinanceLabs)
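A sketch of numerical Greeks by central finite differences around a generalized Black-Scholes-Merton call; the bump size and the BSM pricer stand in for whichever closed-form model is being differentiated (an illustration, not the indicator's code):
```python
from math import log, sqrt, exp
from scipy.stats import norm

def bsm_call(S, X, T, r, b, v):
    """Generalized Black-Scholes-Merton call with cost of carry b."""
    d1 = (log(S / X) + (b + 0.5 * v * v) * T) / (v * sqrt(T))
    d2 = d1 - v * sqrt(T)
    return S * exp((b - r) * T) * norm.cdf(d1) - X * exp(-r * T) * norm.cdf(d2)

def numerical_greeks(price_fn, S, v, rel_bump=0.01):
    """Central finite differences for any pricing function of (S, v)."""
    dS, dv = S * rel_bump, v * rel_bump
    delta = (price_fn(S + dS, v) - price_fn(S - dS, v)) / (2 * dS)
    gamma = (price_fn(S + dS, v) - 2 * price_fn(S, v) + price_fn(S - dS, v)) / dS ** 2
    vega = (price_fn(S, v + dv) - price_fn(S, v - dv)) / (2 * dv)
    return delta, gamma, vega

f = lambda S, v: bsm_call(S, 100.0, 0.5, 0.05, 0.05, v)
print(numerical_greeks(f, 100.0, 0.30))
```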
Numerical Greeks Outputs
Delta
Elasticity
Gamma
DGammaDvol
GammaP
Vega
DvegaDvol
VegaP
Theta (1 day)
Rho
Rho futures option
Phi/Rho2
Carry
DDeltaDvol
Speed
Strike Delta
Strike gamma
Things to know
Only works on the daily timeframe and for the current source price.
You can adjust the text size to fit the screen
Writer Extendible Option [Loxx]
These options can be exercised at their initial maturity date T1 but are extended to T2 if the option is out-of-the-money at T1. The payoff from a writer-extendible call option at time T1 (T1 < T2) is (via "The Complete Guide to Option Pricing Formulas")
c(S, X1, X2, T1, T2) = (S - X1) if S >= X1 else cBSM(S, X2, T2-T1)
and for a writer-extendible put is
p(S, X1, X2, T1, T2) = (X1 - S) if S < X1 else pBSM(S, X2, T2-T1)
Writer-Extendible Call
c = cBSM(S, X1, T1) + Se^(b-r)T2 * M(Z1, -Z2; -p) - X2e^-rT2 * M(Z1 - vT^0.5, -Z2 + vT^0.5; -p)
Writer-Extendible Put
p = cBSM(S, X1, T1) + X2e^-rT2 * M(-Z1 + vT^0.5, Z2 - vT^0.5; -p) - Se^(b-r)T2 * M(-Z1, Z2; -p)
b=r options on non-dividend paying stock
b=r-q options on stock or index paying a dividend yield of q
b=0 options on futures
b=r-rf currency options (where rf is the rate in the second currency)
Inputs
Asset price ( S )
Initial strike price ( X1 )
Extended strike price ( X2 )
Initial time to maturity ( t1 )
Extended time to maturity ( T2 )
Risk-free rate ( r )
Cost of carry ( b )
Volatility ( s )
Numerical Greeks or Greeks by Finite Difference
Analytical Greeks are the standard approach to estimating Delta, Gamma etc... That is what we typically use when we can derive from closed form solutions. Normally, these are well-defined and available in text books. Previously, we relied on closed form solutions for the call or put formulae differentiated with respect to the Black Scholes parameters. When Greeks formulae are difficult to develop or tease out, we can alternatively employ numerical Greeks - sometimes referred to finite difference approximations. A key advantage of numerical Greeks relates to their estimation independent of deriving mathematical Greeks. This could be important when we examine American options where there may not technically exist an exact closed form solution that is straightforward to work with. (via VinegarHill FinanceLabs)
Numerical Greeks Output
Delta
Elasticity
Gamma
DGammaDvol
GammaP
Vega
DvegaDvol
VegaP
Theta (1 day)
Rho
Rho futures option
Phi/Rho2
Carry
DDeltaDvol
Speed
Things to know
Only works on the daily timeframe and for the current source price.
You can adjust the text size to fit the screen
Reset Strike Options-Type 1 [Loxx]
In a reset call (put) option, the strike is reset to the asset price at a predetermined future time, if the asset price is below (above) the initial strike price. This makes the strike path-dependent. The payoff for a call at maturity is equal to max((S-X)/X, 0), where X is equal to the original strike if not reset, and equal to the reset strike if reset. Similarly, for a put, the payoff is max((X-S)/X, 0). Gray and Whaley (1997) have derived a closed-form solution for such an option. For a call, we have
c = e^(b-r)(T2-T1) * N(-a2) * N(z1) * e^(-rt1) - e^(-rT2) * N(-a2)*N(z2) - e^(-rT2) * M(a2, y2; p) + (S/X) * e^(b-r)T2 * M(a1, y1; p)
and for a put,
p = e^(-rT2) * N(a2) * N(-z2) - e^(b-r)(T2-T1) * N(a2) * N(-z1) * e^(-rT1) + e^(-rT2) * M(-a2, -y2; p) - (S/X) * e^(b-r)T2 * M(-a1, -y1; p)
where b is the cost-of-carry of the underlying asset, v is the volatility of the relative price changes in the asset, and r is the risk-free interest rate. X is the strike price of the option, T1 the time to reset (in years), and T2 its time to expiration. N(x) and M(a, b; p) are, respectively, the univariate and bivariate cumulative normal distribution functions. The remaining parameters are p = (T1/T2)^0.5 and
a1 = (log(S/X) + (b+v^2/2)T1) / vT1^0.5 ... a2 = a1 - vT1^0.5
z1 = (b+v^2/2)(T2-T1)/v(T2-T1)^0.5 ... z2 = z1 - v(T2-T1)^0.5
y1 = (log(S/X) + (b+v^2/2)T2) / vT2^0.5 ... y2 = y1 - vT2^0.5
b=r options on non-dividend paying stock
b=r-q options on stock or index paying a dividend yield of q
b=0 options on futures
b=r-rf currency options (where rf is the rate in the second currency)
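A minimal Python sketch of the Gray-Whaley closed form above, using the parameter definitions just given; the bivariate-normal helper M() and the function names are illustrative, not the script's internals.

```python
# Minimal sketch of the Gray-Whaley (1997) reset strike (Type 1) formulas.
# Parameterization follows the definitions above; names are illustrative.
from math import exp, log, sqrt
from statistics import NormalDist
from scipy.stats import multivariate_normal

N = NormalDist().cdf

def M(a, b, rho):
    """Cumulative bivariate normal M(a, b; rho)."""
    return multivariate_normal(mean=[0.0, 0.0],
                               cov=[[1.0, rho], [rho, 1.0]]).cdf([a, b])

def reset_strike_type1(cp, S, X, T1, T2, r, b, v):
    rho = sqrt(T1 / T2)
    a1 = (log(S / X) + (b + v * v / 2) * T1) / (v * sqrt(T1))
    a2 = a1 - v * sqrt(T1)
    z1 = (b + v * v / 2) * (T2 - T1) / (v * sqrt(T2 - T1))
    z2 = z1 - v * sqrt(T2 - T1)
    y1 = (log(S / X) + (b + v * v / 2) * T2) / (v * sqrt(T2))
    y2 = y1 - v * sqrt(T2)
    if cp > 0:  # reset call
        return (exp((b - r) * (T2 - T1)) * N(-a2) * N(z1) * exp(-r * T1)
                - exp(-r * T2) * N(-a2) * N(z2)
                - exp(-r * T2) * M(a2, y2, rho)
                + (S / X) * exp((b - r) * T2) * M(a1, y1, rho))
    return (exp(-r * T2) * N(a2) * N(-z2)  # reset put
            - exp((b - r) * (T2 - T1)) * N(a2) * N(-z1) * exp(-r * T1)
            + exp(-r * T2) * M(-a2, -y2, rho)
            - (S / X) * exp((b - r) * T2) * M(-a1, -y1, rho))

# Hypothetical example: reset call, reset at T1=0.25, expiry T2=1.0
print(reset_strike_type1(+1, 100, 100, 0.25, 1.0, 0.05, 0.05, 0.30))
```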
Inputs
Asset price ( S )
Strike price ( X )
Time to reset ( T1 )
Time to maturity ( T2 )
Risk-free rate ( r )
Cost of carry ( b )
Volatility ( s )
Numerical Greeks or Greeks by Finite Difference
Analytical Greeks are the standard approach to estimating Delta, Gamma, etc. That is what we typically use when we can derive them from closed-form solutions. Normally, these are well defined and available in textbooks. Previously, we relied on closed-form solutions for the call or put formulae differentiated with respect to the Black-Scholes parameters. When Greeks formulae are difficult to develop or tease out, we can alternatively employ numerical Greeks, sometimes referred to as finite-difference approximations. A key advantage of numerical Greeks is that they can be estimated without deriving the mathematical Greeks, which matters when we examine American options, where an exact closed-form solution that is straightforward to work with may not exist. (via VinegarHill FinanceLabs)
Numerical Greeks Output
Delta
Elasticity
Gamma
DGammaDvol
GammaP
Vega
DvegaDvol
VegaP
Theta (1 day)
Rho
Rho futures option
Phi/Rho2
Carry
DDeltaDvol
Speed
Things to know
Only works on the daily timeframe and for the current source price.
You can adjust the text size to fit the screen.
Fade-in Options [Loxx]A fade-in call has the same payoff as a standard call except the size of the payoff is weighted by how many fixings the asset price was inside a predefined range (L, U). If the asset price is inside the range for every fixing, the payoff will be identical to a plain vanilla option. More precisely, for a call option, the payoff will be max(S(T) - X, 0) * 1/n * Sum(n(i)), where n is the total number of fixings and n(i) = 1 if at fixing i the asset price is inside the range, and n(i) = 0 otherwise. Similarly, for a put, the payoff is max(X - S(T), 0) * 1/n * Sum(n(i)). For example, if the asset price is inside the range at 6 of 12 fixings, the payoff is half that of the corresponding vanilla option.
Brockhaus, Ferraris, Gallus, Long, Martin, and Overhaus (1999) describe a closed-form formula for fade-in options. For a call, the value is given by
c = 1/n * Sum(S*e^((b-r)*T) * (M(-d5, d1; -p) - M(-d3, d1; -p)) - X*e^(-r*T) * (M(-d6, d2; -p) - M(-d4, d2; -p)))
where n is the number of fixings and, for fixing i, t1 = i*T/n and p = (t1/T)^0.5
d1 = (log(S/X) + (b + v^2/2)*T) / (v * T^0.5) ... d2 = d1 - v*T^0.5
d3 = (log(S/L) + (b + v^2/2)*t1) / (v * t1^0.5) ... d4 = d3 - v*t1^0.5
d5 = (log(S/U) + (b + v^2/2)*t1) / (v * t1^0.5) ... d6 = d5 - v*t1^0.5
The value of a put is similarly
p = 1/n * Sum(X*e^(-r*T) * (M(-d6, -d2; -p) - M(-d4, -d2; -p)) - S*e^((b-r)*T) * (M(-d5, -d1; -p) - M(-d3, -d1; -p)))
b=r options on non-dividend paying stock
b=r-q options on stock or index paying a dividend yield of q
b=0 options on futures
b=r-rf currency options (where rf is the rate in the second currency)
Inputs
Asset price ( S )
Strike price ( X )
Lower barrier ( L )
Upper barrier ( U )
Time to maturity ( T )
Risk-free rate ( r )
Cost of carry ( b )
Volatility ( s )
Fixings ( n )
cnd1(x) = Cumulative Normal Distribution
nd(x) = Standard Normal Density Function
cbnd3() = Cumulative Bivariate Normal Distribution
convertingToCCRate(r, cmp) = Rate compounder
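For reference, a minimal Python sketch of the fade-in summation above, using SciPy's bivariate normal CDF in place of the script's cbnd3(); all names, and the clamp on the correlation at the final fixing, are illustrative assumptions.

```python
# Minimal sketch of the fade-in call/put closed form (Brockhaus et al., 1999).
# SciPy's multivariate normal stands in for the script's cbnd3(); names are
# illustrative.
from math import exp, log, sqrt
from statistics import NormalDist
from scipy.stats import multivariate_normal

N = NormalDist().cdf

def M(a, b, rho):
    """Cumulative bivariate normal M(a, b; rho)."""
    return multivariate_normal(mean=[0.0, 0.0],
                               cov=[[1.0, rho], [rho, 1.0]]).cdf([a, b])

def fade_in(cp, S, X, L, U, T, r, b, v, n):
    d1 = (log(S / X) + (b + v * v / 2) * T) / (v * sqrt(T))
    d2 = d1 - v * sqrt(T)
    total = 0.0
    for i in range(1, n + 1):
        t1 = i * T / n
        # clamp to keep the bivariate correlation non-singular at t1 = T
        rho = min(sqrt(t1 / T), 1.0 - 1e-10)
        d3 = (log(S / L) + (b + v * v / 2) * t1) / (v * sqrt(t1))
        d4 = d3 - v * sqrt(t1)
        d5 = (log(S / U) + (b + v * v / 2) * t1) / (v * sqrt(t1))
        d6 = d5 - v * sqrt(t1)
        if cp > 0:  # fade-in call
            total += (S * exp((b - r) * T) * (M(-d5, d1, -rho) - M(-d3, d1, -rho))
                      - X * exp(-r * T) * (M(-d6, d2, -rho) - M(-d4, d2, -rho)))
        else:  # fade-in put
            total += (X * exp(-r * T) * (M(-d6, -d2, -rho) - M(-d4, -d2, -rho))
                      - S * exp((b - r) * T) * (M(-d5, -d1, -rho) - M(-d3, -d1, -rho)))
    return total / n

# Hypothetical example: fade-in call, range (90, 110), 12 fixings
print(fade_in(+1, 100, 100, 90.0, 110.0, 1.0, 0.05, 0.05, 0.20, 12))
```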
Numerical Greeks or Greeks by Finite Difference
Analytical Greeks are the standard approach to estimating Delta, Gamma, etc. That is what we typically use when we can derive them from closed-form solutions. Normally, these are well defined and available in textbooks. Previously, we relied on closed-form solutions for the call or put formulae differentiated with respect to the Black-Scholes parameters. When Greeks formulae are difficult to develop or tease out, we can alternatively employ numerical Greeks, sometimes referred to as finite-difference approximations. A key advantage of numerical Greeks is that they can be estimated without deriving the mathematical Greeks, which matters when we examine American options, where an exact closed-form solution that is straightforward to work with may not exist. (via VinegarHill FinanceLabs)
Things to know
Only works on the daily timeframe and for the current source price.
You can adjust the text size to fit the screen.